[jira] [Created] (KAFKA-10230) Include min.insync.replicas in MetadataResponse to make Producer smarter in determining unavailable partitions

2020-07-02 Thread Arvin Zheng (Jira)
Arvin Zheng created KAFKA-10230:
---

 Summary: Include min.insync.replicas in MetadataResponse to make 
Producer smarter in determining unavailable partitions
 Key: KAFKA-10230
 URL: https://issues.apache.org/jira/browse/KAFKA-10230
 Project: Kafka
  Issue Type: Improvement
  Components: clients, core
Reporter: Arvin Zheng






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4690

2020-07-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10173: Use SmokeTest for upgrade system tests (#8938)


--
[...truncated 3.16 MB...]
org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED


Build failed in Jenkins: kafka-trunk-jdk14 #265

2020-07-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10173: Use SmokeTest for upgrade system tests (#8938)


--
[...truncated 3.19 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] 

Build failed in Jenkins: kafka-trunk-jdk11 #1616

2020-07-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10173: Use SmokeTest for upgrade system tests (#8938)


--
[...truncated 3.19 MB...]
org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task 

Build failed in Jenkins: kafka-trunk-jdk14 #264

2020-07-02 Thread Apache Jenkins Server
See 


Changes:

[github]  KAFKA-10006: Don't create internal topics when


--
[...truncated 3.19 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 

Build failed in Jenkins: kafka-trunk-jdk11 #1615

2020-07-02 Thread Apache Jenkins Server
See 


Changes:

[github]  KAFKA-10006: Don't create internal topics when


--
[...truncated 3.19 MB...]
org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task 

Build failed in Jenkins: kafka-trunk-jdk8 #4689

2020-07-02 Thread Apache Jenkins Server
See 


Changes:

[github]  KAFKA-10006: Don't create internal topics when


--
[...truncated 3.16 MB...]
org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED


Re: [DISCUSS] KIP-406: GlobalStreamThread should honor custom reset policy

2020-07-02 Thread John Roesler
Hi Navinder,

Thanks for the response. I’m sorry if I’m being dense... You said we are not 
currently using the config, but I thought we would pass the config through to 
the client.  Can you confirm whether or not the existing config works for your 
use case?

Thanks,
John
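
For concreteness, a minimal sketch of how such an override is wired up 
(application id and broker address are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class GlobalResetSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Reset policy for regular source topics (honored by StreamThread):
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        // Prefixed override for the global consumer; GLOBAL_CONSUMER_PREFIX
        // is "global.consumer.", so this resolves to
        // "global.consumer.auto.offset.reset":
        props.put(StreamsConfig.GLOBAL_CONSUMER_PREFIX
                + ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    }
}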

On Sun, Jun 28, 2020, at 14:09, Navinder Brar wrote:
> Sorry my bad. Found it.
> 
> 
> 
> Prefix used to override {@link KafkaConsumer consumer} configs for the 
> global consumer client from the general consumer client configs. The 
> override precedence is the following (from highest to lowest precedence):
> 1. global.consumer.[config-name]
> 
> public static final String GLOBAL_CONSUMER_PREFIX = "global.consumer.";
> 
> 
> 
> So, that's great. We already have a config exposed to reset offsets for 
> global topics via global.consumer.auto.offset.reset; it's just that we 
> are not actually using it inside GlobalStreamThread to perform the reset.
> 
> -Navinder
> On Monday, 29 June, 2020, 12:24:21 am IST, Navinder Brar 
>  wrote:  
>  
>  Hi John,
> 
> Thanks for your feedback. 
> 1. I think there is some confusion on my first point. I am sure we can 
> use the same enum, but for the external config which controls the 
> resetting in the global stream thread, we can either reuse the one that 
> users already set for source topics (StreamThread) or provide a new one 
> which specifically controls global topics. For example, currently if I 
> get an InvalidOffsetException in any of my source topics, I can choose 
> whether to reset from Earliest or Latest (with auto.offset.reset). We 
> could use the same option and say that if I get the same exception for 
> global topics, I will follow the same resetting. But some users might 
> want totally different settings for source and global topics, e.g. 
> resetting from Latest for source topics but from Earliest for global 
> topics; in that case, adding a new config might be better.
> 
> 2. I couldn't currently find the config 
> "global.consumer.auto.offset.reset". In fact, in GlobalStreamThread.java 
> we throw a StreamsException for InvalidOffsetException, and there is a 
> test for this as well 
> (GlobalStreamThreadTest#shouldDieOnInvalidOffsetException()), so I think 
> this is the config we are trying to introduce with this KIP.
> 
> -Navinder
> 
> On Saturday, 27 June, 2020, 07:03:04 pm IST, John Roesler  wrote:
>  
>  Hi Navinder,
> 
> Thanks for this proposal!
> 
> Regarding your question about whether to use the same policy
> enum or not, the underlying mechanism is the same, so I think
> we can just use the same AutoOffsetReset enum.
> 
> Can you confirm whether setting the reset policy config on the
> global consumer currently works or not? Based on my reading
> of StreamsConfig, it looks like it would be:
> "global.consumer.auto.offset.reset".
> 
> If that does work, would you still propose to augment the
> Java API?
> 
> Thanks,
> -John
> 
> On Fri, Jun 26, 2020, at 23:52, Navinder Brar wrote:
> > Hi,
> > 
> > KIP: 
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-406%3A+GlobalStreamThread+should+honor+custom+reset+policy
> > 
> > I have taken over this KIP since it has been dormant for a long time 
> > and this looks important for use-cases that have large global data, so 
> > rebuilding global stores from scratch might seem overkill in case of 
> > InvalidOffsetException.
> > 
> > We want to give users the control to use a reset policy (as we do in 
> > StreamThread) in case they hit invalid offsets. I have still not 
> > decided whether to restrict this option to the same reset policy being 
> > used by StreamThread (using the auto.offset.reset config) or to add 
> > another reset config specifically for global stores, 
> > "global.auto.offset.reset", which gives users more control to choose 
> > separate policies for global and stream threads.
> > 
> > I would like to hear your opinions on the KIP.
> > 
> > 
> > -Navinder


Re: [DISCUSS] KIP-617: Allow Kafka Streams State Stores to be iterated backwards

2020-07-02 Thread John Roesler
Hey Jorge,

Thanks for the details. That sounds like a mistake to me on both counts.

I don’t think you need to worry about those deprecations. If the interface 
methods aren’t deprecated, then the methods are not deprecated. We should 
remove the annotations, but it doesn’t need to be in the KIP.

I think any query methods should have been in the ReadOnly interface. I guess 
it’s up to you whether you want to:
1. Add the reverse methods next to the existing methods (what you have in the 
KIP right now)
2. Partially fix it by adding your new methods to the ReadOnly interface 
3. Fully fix the problem by moving the existing methods as well as your new 
ones. Since SessionStore extends ReadOnlySessionStore, it’s ok just to move 
the definitions (see the sketch after this message). 

I’m ok with whatever you prefer. 

Thanks,
John
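
For illustration, a rough sketch of option 3 with simplified stand-in 
interfaces (made-up signatures, not the real API):

import java.util.Iterator;

interface ReadOnlySessionStoreSketch<K, V> {
    // moved up from the read-write interface, plus the new reverse variant
    Iterator<V> findSessions(K key, long earliestEnd, long latestStart);
    Iterator<V> backwardFindSessions(K key, long earliestEnd, long latestStart);
}

interface SessionStoreSketch<K, V> extends ReadOnlySessionStoreSketch<K, V> {
    // write-side methods stay on the read-write interface
    void put(K key, V aggregate);
}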

On Thu, Jul 2, 2020, at 11:29, Jorge Esteban Quilcate Otoya wrote:
> (sorry for the spam)
> 
> Actually, the `findSessions` methods are only deprecated on
> `InMemorySessionStore`, which seems strange, as RocksDB and the interfaces
> haven't marked these methods as deprecated.
> 
> Any hint on how to handle this?
> 
> Cheers,
> 
> On Thu, Jul 2, 2020 at 4:57 PM Jorge Esteban Quilcate Otoya <
> quilcate.jo...@gmail.com> wrote:
> 
> > @John: I can see there are some deprecations in there as well. Will update
> > the KIP.
> >
> > Thanks,
> > Jorge
> >
> >
> > On Thu, Jul 2, 2020 at 3:29 PM Jorge Esteban Quilcate Otoya <
> > quilcate.jo...@gmail.com> wrote:
> >
> >> Thanks John.
> >>
> >> > it looks like there’s a revision error in which two methods are
> >> proposed for SessionStore, but seem like they should be in
> >> ReadOnlySessionStore. Do I read that right?
> >>
> >> Yes, I've opted to keep the new methods alongside the existing ones. In
> >> the case of SessionStore, `findSessions` are in `SessionStore`, and `fetch`
> >> are in `ReadOnlySessionStore`. If it makes more sense, I can move all of
> >> them to ReadOnlySessionStore.
> >> Let me know what you think.
> >>
> >> Thanks,
> >> Jorge.
> >>
> >> On Thu, Jul 2, 2020 at 2:36 PM John Roesler  wrote:
> >>
> >>> Hi Jorge,
> >>>
> >>> Thanks for the update. I think this is a good plan.
> >>>
> >>> I just took a look at the KIP again, and it looks like there’s a
> >>> revision error in which two methods are proposed for SessionStore, but 
> >>> seem
> >>> like they should be in ReadOnlySessionStore. Do I read that right?
> >>>
> >>> Otherwise, I’m happy with your proposal.
> >>>
> >>> Thanks,
> >>> John
> >>>
> >>> On Wed, Jul 1, 2020, at 17:01, Jorge Esteban Quilcate Otoya wrote:
> >>> > Quick update: KIP is updated with latest changes now.
> >>> > Will leave it open this week while working on the PR.
> >>> >
> >>> > Hope to open a new vote thread over the next few days if no additional
> >>> > feedback is provided.
> >>> >
> >>> > Cheers,
> >>> > Jorge.
> >>> >
> >>> > On Mon, Jun 29, 2020 at 11:30 AM Jorge Esteban Quilcate Otoya <
> >>> > quilcate.jo...@gmail.com> wrote:
> >>> >
> >>> > > Thanks, John!
> >>> > >
> >>> > > Makes sense to reconsider the current approach. I was heading in a
> >>> similar
> >>> > > direction while drafting the implementation. Metered, Caching, and
> >>> other
> >>> > > layers will also have to get duplicated to build up new methods in
> >>> `Stores`
> >>> > > factory, and class casting issues would appear on stores created by
> >>> DSL.
> >>> > >
> >>> > > I will draft a proposal with new methods (move methods from proposed
> >>> > > interfaces to existing ones) with default implementation in a KIP
> >>> update
> >>> > > and wait for Matthias to chime in to validate this approach.
> >>> > >
> >>> > > Jorge.
> >>> > >
> >>> > >
> >>> > > On Sat, Jun 27, 2020 at 4:01 PM John Roesler 
> >>> wrote:
> >>> > >
> >>> > >> Hi Jorge,
> >>> > >>
> >>> > >> Sorry for my silence, I've been absorbed with the 2.6 and 2.5.1
> >>> releases.
> >>> > >>
> >>> > >> The idea to separate the new methods into "mixin" interfaces seems
> >>> > >> like a good one, but as we've discovered in KIP-614, it doesn't work
> >>> > >> out that way in practice. The problem is that the store
> >>> implementations
> >>> > >> are just the base layer that get composed with other layers in
> >>> Streams
> >>> > >> before they can be accessed in the DSL. This is extremely subtle, so
> >>> > >> I'm going to put everyone to sleep with a detailed explanation:
> >>> > >>
> >>> > >> For example, this is the mechanism by which all KeyValueStore
> >>> > >> implementations get added to Streams:
> >>> > >> org.apache.kafka.streams.state.internals.KeyValueStoreBuilder#build
> >>> > >> return new MeteredKeyValueStore<>(
> >>> > >>   maybeWrapCaching(maybeWrapLogging(storeSupplier.get())),
> >>> > >>   storeSupplier.metricsScope(),
> >>> > >>   time,
> >>> > >>   keySerde,
> >>> > >>   valueSerde
> >>> > >> );
> >>> > >>
> >>> > >> In the DSL, the store that a processor gets from the context would
> >>> be
> >>> > >> the result of this composition. So even if the storeSupplier.get()
> >>> returns
> >>> 

[jira] [Created] (KAFKA-10229) Kafka stream dies when earlier shut down node leaves group, no errors logged on client

2020-07-02 Thread Raman Gupta (Jira)
Raman Gupta created KAFKA-10229:
---

 Summary: Kafka stream dies when earlier shut down node leaves 
group, no errors logged on client
 Key: KAFKA-10229
 URL: https://issues.apache.org/jira/browse/KAFKA-10229
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.4.1
Reporter: Raman Gupta


My broker and clients are 2.4.1. I'm currently running a single broker. I have 
a Kafka Streams application with exactly-once processing turned on, and an 
uncaught exception handler defined on the client. I noticed one of my streams 
was lagging; upon investigation, I saw that the consumer group was empty.

On restarting the consumers, the consumer group re-established itself, but 
after about 8 minutes, the group became empty again. There is nothing logged on 
the client side about any stream errors, despite the existence of an uncaught 
exception handler.

In the broker logs, about 8 minutes after the clients restart and the stream 
goes to the RUNNING state, I see:

```
[2020-07-02 17:34:47,033] INFO [GroupCoordinator 0]: Member 
cis-d7fb64c95-kl9wl-1-630af77f-138e-49d1-b76a-6034801ee359 in group 
produs-cisFileIndexer-stream has failed, removing it from the group 
(kafka.coordinator.group.GroupCoordinator)
[2020-07-02 17:34:47,033] INFO [GroupCoordinator 0]: Preparing to rebalance 
group produs-cisFileIndexer-stream in state PreparingRebalance with old 
generation 228 (__consumer_offsets-3) (reason: removing member 
cis-d7fb64c95-kl9wl-1-630af77f-138e-49d1-b76a-6034801ee359 on heartbeat 
expiration) (kafka.coordinator.group.GroupCoordinator)
```

So, according to this, the consumer heartbeat has expired. I don't know why 
that would be: logging shows that the stream was running and processing 
messages normally, and then it simply stopped processing anything about 4 
minutes before it died, with no apparent errors or issues and nothing logged 
via the uncaught exception handler.

It doesn't appear to be related to any specific poison-pill messages: 
restarting the stream causes it to reprocess a bunch more messages from the 
backlog and then die again approximately 8 minutes later. At the time of the 
last message consumed by the stream, there are no `INFO`-level or above logs 
in either the client or the broker, nor any errors whatsoever. The stream 
consumption simply stops.

There are two consumers -- even if I limit consumption to only a single 
consumer, the same thing happens.

The runtime environment is Kubernetes.
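
For completeness, a minimal sketch of the setup in question (the topology, 
broker address, and handler/listener bodies are illustrative placeholders):
{code:java}
import java.util.Properties;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class HandlerSetupSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic"); // placeholder topology

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "produs-cisFileIndexer-stream");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);

        // In 2.4.x this handler only fires for exceptions that actually kill
        // a StreamThread; a broker-side heartbeat expiration never reaches
        // it, which would be consistent with the silence described above.
        streams.setUncaughtExceptionHandler((thread, throwable) ->
                System.err.println("Stream thread " + thread.getName()
                        + " died: " + throwable));

        // A state listener at least makes thread deaths visible as transitions.
        streams.setStateListener((newState, oldState) ->
                System.out.println("State transition " + oldState + " -> " + newState));

        streams.start();
    }
}
{code}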



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10228) producer: NETWORK_EXCEPTION is thrown instead of a request timeout

2020-07-02 Thread Christian Becker (Jira)
Christian Becker created KAFKA-10228:


 Summary: producer: NETWORK_EXCEPTION is thrown instead of a 
request timeout
 Key: KAFKA-10228
 URL: https://issues.apache.org/jira/browse/KAFKA-10228
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 2.3.1
Reporter: Christian Becker


We're currently seeing an issue with the Java client (producer) when message 
producing runs into a timeout: a NETWORK_EXCEPTION is thrown instead of a 
timeout exception.

*Situation and relevant code:*

Config
{code:java}
request.timeout.ms: 200
retries: 3
acks: all{code}
{code:java}
for (UnpublishedEvent event : unpublishedEvents) {
    // key/value type parameters assumed as String here
    ListenableFuture<SendResult<String, String>> future;
    future = kafkaTemplate.send(new ProducerRecord<>(event.getTopic(),
            event.getKafkaKey(), event.getPayload()));
    futures.add(future.completable());
}

CompletableFuture.allOf(futures.stream().toArray(CompletableFuture[]::new)).join();{code}
We're using the KafkaTemplate from Spring Boot here, but it shouldn't matter, 
as it's merely a wrapper. With it, we send batches of messages.

200ms later, we can see the following in the logs:
{code:java}
[Producer clientId=producer-1] Received invalid metadata error in produce 
request on partition events-6 due to 
org.apache.kafka.common.errors.NetworkException: The server disconnected before 
a response was received.. Going to request metadata update now
[Producer clientId=producer-1] Got error produce response with correlation id 
3094 on topic-partition events-6, retrying (2 attempts left). Error: 
NETWORK_EXCEPTION {code}
This was somewhat unexpected and sent us on a hunt across the infrastructure 
for possible connection issues, but we found none.

Side note: In some cases the retries worked and the messages were successfully 
produced.

Only after many hours of heavy debugging did we notice that the error might be 
related to the low timeout setting. We've removed that setting now, as it was 
a remnant from the past and no longer valid for our use-case. However, in 
order to spare other people this issue and to simplify future debugging, some 
form of timeout exception should be thrown.
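
For reference, a sketch of a configuration that should surface the failure as 
an explicit timeout (the topic name, serializers, and exact timeout values are 
illustrative assumptions; delivery.timeout.ms exists since Kafka 2.1 / KIP-91 
and bounds the whole send, including retries):
{code:java}
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.TimeoutException;
import org.apache.kafka.common.serialization.StringSerializer;

public class TimeoutConfigSketch {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // Per-request timeout: keep it well above normal broker response times.
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 30_000);
        // Upper bound for the entire send(), including retries.
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120_000);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("events", "key", "payload")).get();
        } catch (ExecutionException e) {
            if (e.getCause() instanceof TimeoutException) {
                // The producer gave up within delivery.timeout.ms: an
                // unambiguous timeout instead of NETWORK_EXCEPTION retries.
                System.err.println("Send timed out: " + e.getCause().getMessage());
            }
        }
    }
}
{code}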



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : kafka-2.4-jdk8 #231

2020-07-02 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-617: Allow Kafka Streams State Stores to be iterated backwards

2020-07-02 Thread Jorge Esteban Quilcate Otoya
(sorry for the spam)

Actually, the `findSessions` methods are only deprecated on
`InMemorySessionStore`, which seems strange, as RocksDB and the interfaces
haven't marked these methods as deprecated.

Any hint on how to handle this?

Cheers,

On Thu, Jul 2, 2020 at 4:57 PM Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:

> @John: I can see there are some deprecations in there as well. Will update
> the KIP.
>
> Thanks,
> Jorge
>
>
> On Thu, Jul 2, 2020 at 3:29 PM Jorge Esteban Quilcate Otoya <
> quilcate.jo...@gmail.com> wrote:
>
>> Thanks John.
>>
>> > it looks like there’s a revision error in which two methods are
>> proposed for SessionStore, but seem like they should be in
>> ReadOnlySessionStore. Do I read that right?
>>
>> Yes, I've opted to keep the new methods alongside the existing ones. In
>> the case of SessionStore, `findSessions` are in `SessionStore`, and `fetch`
>> are in `ReadOnlySessionStore`. If it makes more sense, I can move all of
>> them to ReadOnlySessionStore.
>> Let me know what you think.
>>
>> Thanks,
>> Jorge.
>>
>> On Thu, Jul 2, 2020 at 2:36 PM John Roesler  wrote:
>>
>>> Hi Jorge,
>>>
>>> Thanks for the update. I think this is a good plan.
>>>
>>> I just took a look at the KIP again, and it looks like there’s a
>>> revision error in which two methods are proposed for SessionStore, but seem
>>> like they should be in ReadOnlySessionStore. Do I read that right?
>>>
>>> Otherwise, I’m happy with your proposal.
>>>
>>> Thanks,
>>> John
>>>
>>> On Wed, Jul 1, 2020, at 17:01, Jorge Esteban Quilcate Otoya wrote:
>>> > Quick update: KIP is updated with latest changes now.
>>> > Will leave it open this week while working on the PR.
>>> >
>>> > Hope to open a new vote thread over the next few days if no additional
>>> > feedback is provided.
>>> >
>>> > Cheers,
>>> > Jorge.
>>> >
>>> > On Mon, Jun 29, 2020 at 11:30 AM Jorge Esteban Quilcate Otoya <
>>> > quilcate.jo...@gmail.com> wrote:
>>> >
>>> > > Thanks, John!
>>> > >
> >>> > > Makes sense to reconsider the current approach. I was heading in a
>>> similar
>>> > > direction while drafting the implementation. Metered, Caching, and
>>> other
>>> > > layers will also have to get duplicated to build up new methods in
>>> `Stores`
>>> > > factory, and class casting issues would appear on stores created by
>>> DSL.
>>> > >
>>> > > I will draft a proposal with new methods (move methods from proposed
>>> > > interfaces to existing ones) with default implementation in a KIP
>>> update
>>> > > and wait for Matthias to chime in to validate this approach.
>>> > >
>>> > > Jorge.
>>> > >
>>> > >
>>> > > On Sat, Jun 27, 2020 at 4:01 PM John Roesler 
>>> wrote:
>>> > >
>>> > >> Hi Jorge,
>>> > >>
>>> > >> Sorry for my silence, I've been absorbed with the 2.6 and 2.5.1
>>> releases.
>>> > >>
>>> > >> The idea to separate the new methods into "mixin" interfaces seems
>>> > >> like a good one, but as we've discovered in KIP-614, it doesn't work
>>> > >> out that way in practice. The problem is that the store
>>> implementations
>>> > >> are just the base layer that get composed with other layers in
>>> Streams
>>> > >> before they can be accessed in the DSL. This is extremely subtle, so
>>> > >> I'm going to put everyone to sleep with a detailed explanation:
>>> > >>
>>> > >> For example, this is the mechanism by which all KeyValueStore
>>> > >> implementations get added to Streams:
>>> > >> org.apache.kafka.streams.state.internals.KeyValueStoreBuilder#build
>>> > >> return new MeteredKeyValueStore<>(
>>> > >>   maybeWrapCaching(maybeWrapLogging(storeSupplier.get())),
>>> > >>   storeSupplier.metricsScope(),
>>> > >>   time,
>>> > >>   keySerde,
>>> > >>   valueSerde
>>> > >> );
>>> > >>
>>> > >> In the DSL, the store that a processor gets from the context would
>>> be
>>> > >> the result of this composition. So even if the storeSupplier.get()
>>> returns
>>> > >> a store that implements the "reverse" interface, when you try to
>>> use it
>>> > >> from a processor like:
>>> > >> org.apache.kafka.streams.kstream.ValueTransformerWithKey#init
>>> > >> ReadOnlyBackwardWindowStore store =
>>> > >>   (ReadOnlyBackwardWindowStore) context.getStateStore(..)
>>> > >>
>>> > >> You'd just get a ClassCastException because it's actually a
>>> > >> MeteredKeyValueStore, which doesn't implement
>>> > >> ReadOnlyBackwardWindowStore.
>>> > >>
>>> > >> The only way to make this work would be to make the Metered,
>>> > >> Caching, and Logging layers also implement the new interfaces,
>>> > >> but this effectively forces implementations to also implement
>>> > >> the interface. Otherwise, the intermediate layers would have to
>>> > >> cast the store in each method, like this:
>>> > >> MeteredWindowStore#backwardFetch {
>>> > >>   ((ReadOnlyBackwardWindowStore) innerStore).backwardFetch(..)
>>> > >> }
>>> > >>
>>> > >> And then if the implementation doesn't "opt in" by implementing
>>> > >> the interface, you'd get a ClassCastException, not when you get the
>>> > >> store, 

Re: [DISCUSS] KIP-617: Allow Kafka Streams State Stores to be iterated backwards

2020-07-02 Thread Jorge Esteban Quilcate Otoya
@John: I can see there are some deprecations in there as well. Will update
the KIP.

Thanks,
Jorge


On Thu, Jul 2, 2020 at 3:29 PM Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:

> Thanks John.
>
> > it looks like there’s a revision error in which two methods are proposed
> for SessionStore, but seem like they should be in ReadOnlySessionStore. Do
> I read that right?
>
> Yes, I've opted to keep the new methods alongside the existing ones. In
> the case of SessionStore, `findSessions` are in `SessionStore`, and `fetch`
> are in `ReadOnlySessionStore`. If it makes more sense, I can move all of
> them to ReadOnlySessionStore.
> Let me know what you think.
>
> Thanks,
> Jorge.
>
> On Thu, Jul 2, 2020 at 2:36 PM John Roesler  wrote:
>
>> Hi Jorge,
>>
>> Thanks for the update. I think this is a good plan.
>>
>> I just took a look at the KIP again, and it looks like there’s a revision
>> error in which two methods are proposed for SessionStore, but seem like
>> they should be in ReadOnlySessionStore. Do I read that right?
>>
>> Otherwise, I’m happy with your proposal.
>>
>> Thanks,
>> John
>>
>> On Wed, Jul 1, 2020, at 17:01, Jorge Esteban Quilcate Otoya wrote:
>> > Quick update: KIP is updated with latest changes now.
>> > Will leave it open this week while working on the PR.
>> >
>> > Hope to open a new vote thread over the next few days if no additional
>> > feedback is provided.
>> >
>> > Cheers,
>> > Jorge.
>> >
>> > On Mon, Jun 29, 2020 at 11:30 AM Jorge Esteban Quilcate Otoya <
>> > quilcate.jo...@gmail.com> wrote:
>> >
>> > > Thanks, John!
>> > >
>> > > Makes sense to reconsider the current approach. I was heading in a
>> similar
>> > > direction while drafting the implementation. Metered, Caching, and
>> other
>> > > layers will also have to get duplicated to build up new methods in
>> `Stores`
>> > > factory, and class casting issues would appear on stores created by
>> DSL.
>> > >
>> > > I will draft a proposal with new methods (move methods from proposed
>> > > interfaces to existing ones) with default implementation in a KIP
>> update
>> > > and wait for Matthias to chime in to validate this approach.
>> > >
>> > > Jorge.
>> > >
>> > >
>> > > On Sat, Jun 27, 2020 at 4:01 PM John Roesler 
>> wrote:
>> > >
>> > >> Hi Jorge,
>> > >>
>> > >> Sorry for my silence, I've been absorbed with the 2.6 and 2.5.1
>> releases.
>> > >>
>> > >> The idea to separate the new methods into "mixin" interfaces seems
>> > >> like a good one, but as we've discovered in KIP-614, it doesn't work
>> > >> out that way in practice. The problem is that the store
>> implementations
>> > >> are just the base layer that get composed with other layers in
>> Streams
>> > >> before they can be accessed in the DSL. This is extremely subtle, so
>> > >> I'm going to put everyone to sleep with a detailed explanation:
>> > >>
>> > >> For example, this is the mechanism by which all KeyValueStore
>> > >> implementations get added to Streams:
>> > >> org.apache.kafka.streams.state.internals.KeyValueStoreBuilder#build
>> > >> return new MeteredKeyValueStore<>(
>> > >>   maybeWrapCaching(maybeWrapLogging(storeSupplier.get())),
>> > >>   storeSupplier.metricsScope(),
>> > >>   time,
>> > >>   keySerde,
>> > >>   valueSerde
>> > >> );
>> > >>
>> > >> In the DSL, the store that a processor gets from the context would be
>> > >> the result of this composition. So even if the storeSupplier.get()
>> returns
>> > >> a store that implements the "reverse" interface, when you try to use
>> it
>> > >> from a processor like:
>> > >> org.apache.kafka.streams.kstream.ValueTransformerWithKey#init
>> > >> ReadOnlyBackwardWindowStore store =
>> > >>   (ReadOnlyBackwardWindowStore) context.getStateStore(..)
>> > >>
>> > >> You'd just get a ClassCastException because it's actually a
>> > >> MeteredKeyValueStore, which doesn't implement
>> > >> ReadOnlyBackwardWindowStore.
>> > >>
>> > >> The only way to make this work would be to make the Metered,
>> > >> Caching, and Logging layers also implement the new interfaces,
>> > >> but this effectively forces implementations to also implement
>> > >> the interface. Otherwise, the intermediate layers would have to
>> > >> cast the store in each method, like this:
>> > >> MeteredWindowStore#backwardFetch {
>> > >>   ((ReadOnlyBackwardWindowStore) innerStore).backwardFetch(..)
>> > >> }
>> > >>
>> > >> And then if the implementation doesn't "opt in" by implementing
>> > >> the interface, you'd get a ClassCastException, not when you get the
>> > >> store, but when you try to use it.
>> > >>
>> > >> The fact that we get ClassCastExceptions no matter which way we
>> > >> turn here indicates that we're really not getting any benefit from
>> the
>> > >> type system, which makes the extra interfaces seem not worth all the
>> > >> code involved.
>> > >>
>> > >> Where we landed in KIP-614 is that, unless we want to completely
>> > >> revamp the way that StateStores work in the DSL, you might as
>> > >> well just 

Re: Permission to create KIPs

2020-07-02 Thread Manikumar
Hi,

Thanks for your interest. I have given you wiki edit permissions.

Thanks,

On Thu, Jul 2, 2020 at 9:05 PM Arvin Zheng  wrote:

> Hi,
>
> Could someone please help grant permission to create KIPs to user
> arvin.zheng?
>
> Thanks,
> Arvin
>


Permission to create KIPs

2020-07-02 Thread Arvin Zheng
Hi,

Could someone please help grant permission to create KIPs to user
arvin.zheng?

Thanks,
Arvin


Re: [DISCUSS] KIP-617: Allow Kafka Streams State Stores to be iterated backwards

2020-07-02 Thread Jorge Esteban Quilcate Otoya
Thanks John.

> it looks like there’s a revision error in which two methods are proposed
for SessionStore, but seem like they should be in ReadOnlySessionStore. Do
I read that right?

Yes, I've opted to keep the new methods alongside the existing ones. In the
case of SessionStore, `findSessions` are in `SessionStore`, and `fetch` are
in `ReadOnlySessionStore`. If it makes more sense, I can move all of them
to ReadOnlySessionStore.
Let me know what you think.

Thanks,
Jorge.

On Thu, Jul 2, 2020 at 2:36 PM John Roesler  wrote:

> Hi Jorge,
>
> Thanks for the update. I think this is a good plan.
>
> I just took a look at the KIP again, and it looks like there’s a revision
> error in which two methods are proposed for SessionStore, but seem like
> they should be in ReadOnlySessionStore. Do I read that right?
>
> Otherwise, I’m happy with your proposal.
>
> Thanks,
> John
>
> On Wed, Jul 1, 2020, at 17:01, Jorge Esteban Quilcate Otoya wrote:
> > Quick update: KIP is updated with latest changes now.
> > Will leave it open this week while working on the PR.
> >
> > Hope to open a new vote thread over the next few days if no additional
> > feedback is provided.
> >
> > Cheers,
> > Jorge.
> >
> > On Mon, Jun 29, 2020 at 11:30 AM Jorge Esteban Quilcate Otoya <
> > quilcate.jo...@gmail.com> wrote:
> >
> > > Thanks, John!
> > >
> > > Makes sense to reconsider the current approach. I was heading in a
> similar
> > > direction while drafting the implementation. Metered, Caching, and
> other
> > > layers will also have to get duplicated to build up new methods in
> `Stores`
> > > factory, and class casting issues would appear on stores created by
> DSL.
> > >
> > > I will draft a proposal with new methods (move methods from proposed
> > > interfaces to existing ones) with default implementation in a KIP
> update
> > > and wait for Matthias to chime in to validate this approach.
> > >
> > > Jorge.
> > >
> > >
> > > On Sat, Jun 27, 2020 at 4:01 PM John Roesler 
> wrote:
> > >
> > >> Hi Jorge,
> > >>
> > >> Sorry for my silence, I've been absorbed with the 2.6 and 2.5.1 releases.
> > >>
> > >> The idea to separate the new methods into "mixin" interfaces seems
> > >> like a good one, but as we've discovered in KIP-614, it doesn't work
> > >> out that way in practice. The problem is that the store implementations
> > >> are just the base layer that get composed with other layers in Streams
> > >> before they can be accessed in the DSL. This is extremely subtle, so
> > >> I'm going to put everyone to sleep with a detailed explanation:
> > >>
> > >> For example, this is the mechanism by which all KeyValueStore
> > >> implementations get added to Streams:
> > >> org.apache.kafka.streams.state.internals.KeyValueStoreBuilder#build
> > >> return new MeteredKeyValueStore<>(
> > >>   maybeWrapCaching(maybeWrapLogging(storeSupplier.get())),
> > >>   storeSupplier.metricsScope(),
> > >>   time,
> > >>   keySerde,
> > >>   valueSerde
> > >> );
> > >>
> > >> In the DSL, the store that a processor gets from the context would be
> > >> the result of this composition. So even if the storeSupplier.get() returns
> > >> a store that implements the "reverse" interface, when you try to use it
> > >> from a processor like:
> > >> org.apache.kafka.streams.kstream.ValueTransformerWithKey#init
> > >> ReadOnlyBackwardWindowStore store =
> > >>   (ReadOnlyBackwardWindowStore) context.getStateStore(..)
> > >>
> > >> You'd just get a ClassCastException because it's actually a
> > >> MeteredKeyValueStore, which doesn't implement
> > >> ReadOnlyBackwardWindowStore.
> > >>
> > >> The only way to make this work would be to make the Metered,
> > >> Caching, and Logging layers also implement the new interfaces,
> > >> but this effectively forces implementations to also implement
> > >> the interface. Otherwise, the intermediate layers would have to
> > >> cast the store in each method, like this:
> > >> MeteredWindowStore#backwardFetch {
> > >>   ((ReadOnlyBackwardWindowStore) innerStore).backwardFetch(..)
> > >> }
> > >>
> > >> And then if the implementation doesn't "opt in" by implementing
> > >> the interface, you'd get a ClassCastException, not when you get the
> > >> store, but when you try to use it.
> > >>
> > >> The fact that we get ClassCastExceptions no matter which way we
> > >> turn here indicates that we're really not getting any benefit from the
> > >> type system, which makes the extra interfaces seem not worth all the
> > >> code involved.
> > >>
> > >> Where we landed in KIP-614 is that, unless we want to completely
> > >> revamp the way that StateStores work in the DSL, you might as
> > >> well just add the new methods to the existing interfaces. To prevent
> > >> compilation errors, we can add default implementations that throw
> > >> UnsupportedOperationException. If a store doesn't opt in by
> > >> implementing the methods, you'd get an UnsupportedOperationException,
> > >> which seems no worse, and maybe better, than the ClassCastException
> > >> you'd get if we go with the "mixin interface" approach.

Re: [DISCUSS] KIP-617: Allow Kafka Streams State Stores to be iterated backwards

2020-07-02 Thread John Roesler
Hi Jorge,

Thanks for the update. I think this is a good plan. 
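
For concreteness, the plan amounts to something like the following sketch
(backwardFetch is one of the KIP's proposed names; the exact signature here is
illustrative, not final):

import java.time.Instant;
import org.apache.kafka.streams.state.WindowStoreIterator;

public interface ReadOnlyWindowStore<K, V> {
    WindowStoreIterator<V> fetch(K key, Instant timeFrom, Instant timeTo);

    // Stores that don't opt in inherit this default, so callers get a clear
    // UnsupportedOperationException at call time instead of a
    // ClassCastException when retrieving the store.
    default WindowStoreIterator<V> backwardFetch(K key, Instant timeFrom, Instant timeTo) {
        throw new UnsupportedOperationException(
            "backwardFetch() is not supported by " + getClass().getName());
    }
}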

I just took a look at the KIP again, and it looks like there’s a revision error
in which two methods are proposed for SessionStore, but it seems like they
should be in ReadOnlySessionStore. Do I read that right?

Otherwise, I’m happy with your proposal. 

Thanks,
John

On Wed, Jul 1, 2020, at 17:01, Jorge Esteban Quilcate Otoya wrote:
> Quick update: the KIP is now updated with the latest changes.
> I will leave it open this week while working on the PR.
> 
> I hope to open a new vote thread over the next few days if no additional
> feedback is provided.
> 
> Cheers,
> Jorge.
> 
> On Mon, Jun 29, 2020 at 11:30 AM Jorge Esteban Quilcate Otoya <
> quilcate.jo...@gmail.com> wrote:
> 
> > Thanks, John!
> >
> > Makes sense to reconsider the current approach. I was heading in a similar
> > direction while drafting the implementation. Metered, Caching, and other
> > layers will also have to get duplicated to build up new methods in the
> > `Stores` factory, and class-casting issues would appear on stores created
> > by the DSL.
> >
> > I will draft a proposal with new methods (moving methods from the proposed
> > interfaces to the existing ones) with default implementations in a KIP
> > update and wait for Matthias to chime in to validate this approach.
> >
> > Jorge.
> >
> >
> > On Sat, Jun 27, 2020 at 4:01 PM John Roesler  wrote:
> >
> >> Hi Jorge,
> >>
> >> Sorry for my silence, I've been absorbed with the 2.6 and 2.5.1 releases.
> >>
> >> The idea to separate the new methods into "mixin" interfaces seems
> >> like a good one, but as we've discovered in KIP-614, it doesn't work
> >> out that way in practice. The problem is that the store implementations
> >> are just the base layer that get composed with other layers in Streams
> >> before they can be accessed in the DSL. This is extremely subtle, so
> >> I'm going to put everyone to sleep with a detailed explanation:
> >>
> >> For example, this is the mechanism by which all KeyValueStore
> >> implementations get added to Streams:
> >> org.apache.kafka.streams.state.internals.KeyValueStoreBuilder#build
> >> return new MeteredKeyValueStore<>(
> >>   maybeWrapCaching(maybeWrapLogging(storeSupplier.get())),
> >>   storeSupplier.metricsScope(),
> >>   time,
> >>   keySerde,
> >>   valueSerde
> >> );
> >>
> >> In the DSL, the store that a processor gets from the context would be
> >> the result of this composition. So even if the storeSupplier.get() returns
> >> a store that implements the "reverse" interface, when you try to use it
> >> from a processor like:
> >> org.apache.kafka.streams.kstream.ValueTransformerWithKey#init
> >> ReadOnlyBackwardWindowStore store =
> >>   (ReadOnlyBackwardWindowStore) context.getStateStore(..)
> >>
> >> You'd just get a ClassCastException because it's actually a
> >> MeteredKeyValueStore, which doesn't implement
> >> ReadOnlyBackwardWindowStore.
> >>
> >> The only way to make this work would be to make the Metered,
> >> Caching, and Logging layers also implement the new interfaces,
> >> but this effectively forces implementations to also implement
> >> the interface. Otherwise, the intermediate layers would have to
> >> cast the store in each method, like this:
> >> MeteredWindowStore#backwardFetch {
> >>   ((ReadOnlyBackwardWindowStore) innerStore).backwardFetch(..)
> >> }
> >>
> >> And then if the implementation doesn't "opt in" by implementing
> >> the interface, you'd get a ClassCastException, not when you get the
> >> store, but when you try to use it.
> >>
> >> The fact that we get ClassCastExceptions no matter which way we
> >> turn here indicates that we're really not getting any benefit from the
> >> type system, which makes the extra interfaces seem not worth all the
> >> code involved.
> >>
> >> Where we landed in KIP-614 is that, unless we want to completely
> >> revamp the way that StateStores work in the DSL, you might as
> >> well just add the new methods to the existing interfaces. To prevent
> >> compilation errors, we can add default implementations that throw
> >> UnsupportedOperationException. If a store doesn't opt in by
> >> implementing the methods, you'd get an UnsupportedOperationException,
> >> which seems no worse, and maybe better, than the ClassCastException
> >> you'd get if we go with the "mixin interface" approach.
> >>
> >> A quick note: This entire discussion focuses on the DSL. If you're just
> >> using the Processor API by directly adding the a custom store to the
> >> Topology:
> >> org.apache.kafka.streams.Topology#addStateStore
> >> and then retrieving it in the processor via:
> >> org.apache.kafka.streams.processor.ProcessorContext#getStateStore
> >> in
> >> org.apache.kafka.streams.processor.Processor#init
> >>
> >> Then, you can both register and retrieve _any_ StateStore implementation.
> >> There's no need to use KeyValueStore or any other built-in interface.
> >> In other words, KeyValueStore and company are only part of the DSL,
> >> not the PAPI. So, 
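> >>
> >> For illustration, a bare-bones version of that PAPI flow (MyProcessor,
> >> MyCustomStore, and the store builder are made-up names for this sketch,
> >> not part of the API):
> >> Topology topology = new Topology();
> >> topology.addSource("source", "input-topic");
> >> topology.addProcessor("proc", MyProcessor::new, "source");
> >> // register any custom StateStore implementation via its StoreBuilder
> >> topology.addStateStore(myCustomStoreBuilder, "proc");
> >>
> >> // and inside MyProcessor#init(ProcessorContext context):
> >> MyCustomStore store = (MyCustomStore) context.getStateStore("my-custom-store");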

Jenkins build is back to normal : kafka-2.5-jdk8 #162

2020-07-02 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-10227) Enforce cleanup policy to only contain compact or delete once

2020-07-02 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-10227:
--

 Summary: Enforce cleanup policy to only contain compact or delete 
once
 Key: KAFKA-10227
 URL: https://issues.apache.org/jira/browse/KAFKA-10227
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.6.0
Reporter: Mickael Maison


When creating or altering a topic, it's possible to set cleanup.policy to 
values like "compact,compact,delete".

For example:
{{./bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic test --partitions 1 --replication-factor 1 --config cleanup.policy=compact,compact,delete}}

{{./bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe}}
{{Topic: test PartitionCount: 1 ReplicationFactor: 1 Configs: cleanup.policy=compact,compact,delete,segment.bytes=1073741824}}

 

We should prevent this and enforce that the cleanup policy contains each value
only once.
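
One straightforward way to enforce this at validation time would be a check
like the following (an illustrative sketch, not the actual broker code):
{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import org.apache.kafka.common.config.ConfigException;

final class CleanupPolicyValidator {
    static void validate(final String value) {
        final List<String> policies = Arrays.asList(value.split(","));
        // Reject a cleanup.policy list that contains any repeated entry.
        if (new HashSet<>(policies).size() != policies.size()) {
            throw new ConfigException(
                "cleanup.policy must not contain duplicate values: " + value);
        }
    }
}
{code}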



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #1614

2020-07-02 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update Netty to 4.1.50.Final (#8972)


--
[...truncated 3.19 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #4688

2020-07-02 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update Netty to 4.1.50.Final (#8972)


--
[...truncated 3.17 MB...]

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaCogroupSimple PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaProperties STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaProperties PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped should 
create a Grouped with Serdes STARTED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped should 
create a Grouped with Serdes PASSED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped with 
repartition topic name should create a Grouped with Serdes, and repartition 
topic name STARTED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped with 
repartition topic name should create a Grouped with Serdes, and repartition 
topic name PASSED

org.apache.kafka.streams.scala.kstream.StreamJoinedTest > Create a StreamJoined 
should create a StreamJoined with Serdes STARTED

org.apache.kafka.streams.scala.kstream.StreamJoinedTest > Create a StreamJoined 
should create a StreamJoined with Serdes PASSED

org.apache.kafka.streams.scala.kstream.StreamJoinedTest > Create a StreamJoined 
should create a StreamJoined with Serdes and Store Suppliers STARTED

org.apache.kafka.streams.scala.kstream.StreamJoinedTest > Create a StreamJoined 
should create a StreamJoined with Serdes and Store Suppliers PASSED

org.apache.kafka.streams.scala.kstream.StreamJoinedTest > Create a StreamJoined 
should create a StreamJoined with Serdes and a State Store name STARTED

org.apache.kafka.streams.scala.kstream.StreamJoinedTest > Create a StreamJoined 
should create a StreamJoined with Serdes and a State Store name PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed should 
create a Consumed with Serdes STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed should 
create a Consumed with Serdes PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy PASSED


Build failed in Jenkins: kafka-trunk-jdk14 #263

2020-07-02 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update Netty to 4.1.50.Final (#8972)


--
[...truncated 3.19 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

[jira] [Created] (KAFKA-10226) KStream without SASL information should return error in confluent cloud

2020-07-02 Thread Werner Daehn (Jira)
Werner Daehn created KAFKA-10226:


 Summary: KStream without SASL information should return error in 
confluent cloud
 Key: KAFKA-10226
 URL: https://issues.apache.org/jira/browse/KAFKA-10226
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.5.0
Reporter: Werner Daehn


I have created a KStream against Confluent Cloud and wondered why no data
had been received from the source. The reason was that I forgot to add the
SASL API keys and secrets.

 

For end users this might lead to usability issues. If the KStream wants to read 
from a topic and is not allowed to, this should raise an error, not be silently 
ignored.
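
For reference, the client settings that were missing look roughly like this
(illustrative placeholders, not real credentials):
{code}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

final Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "<bootstrap-server>:9092");
props.put("security.protocol", "SASL_SSL");
props.put("sasl.mechanism", "PLAIN");
props.put("sasl.jaas.config",
    "org.apache.kafka.common.security.plain.PlainLoginModule required " +
    "username=\"<api-key>\" password=\"<api-secret>\";");
{code}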

 

How do producer/consumer clients handle that situation?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10225) Increase default zk session timeout for system test

2020-07-02 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-10225:
--

 Summary: Increase default zk session timeout for system test
 Key: KAFKA-10225
 URL: https://issues.apache.org/jira/browse/KAFKA-10225
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


I'm digging into the flaky system tests and I noticed there are many flaky
runs caused by the following check.
{code}
with node.account.monitor_log(KafkaService.STDOUT_STDERR_CAPTURE) as monitor:
    node.account.ssh(cmd)
    # Kafka 1.0.0 and higher don't have a space between "Kafka" and "Server"
    monitor.wait_until("Kafka\s*Server.*started",
                       timeout_sec=timeout_sec, backoff_sec=.25,
                       err_msg="Kafka server didn't finish startup in %d seconds" % timeout_sec)
{code}

And the error message in the broker log is shown below.
{quote}
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:262)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:119)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1880)
at kafka.server.KafkaServer.createZkClient$1(KafkaServer.scala:430)
at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:455)
at kafka.server.KafkaServer.startup(KafkaServer.scala:227)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
at kafka.Kafka$.main(Kafka.scala:82)
at kafka.Kafka.main(Kafka.scala)
{quote}

I'm surprised that the default ZooKeeper connection timeout in the system tests
is only 2 seconds, while the default timeout in production was increased to 18s
(see https://github.com/apache/kafka/commit/4bde9bb3ccaf5571be76cb96ea051dadaeeaf5c7):
{code}
config_property.ZOOKEEPER_CONNECTION_TIMEOUT_MS: 2000
{code}






--
This message was sent by Atlassian Jira
(v8.3.4#803005)