Build failed in Jenkins: kafka-2.3-jdk8 #46

2019-06-07 Thread Apache Jenkins Server
See 


Changes:

[cmccabe] KAFKA-8499: ensure java is in PATH for ducker system tests (#6898)

[cshapi] MINOR: Lower producer throughput in flaky upgrade system test

[cshapi] MINOR: Fix race condition on shutdown of verifiable producer

--
[...truncated 2.88 MB...]
kafka.utils.LoggingTest > testLog4jControllerIsRegistered STARTED

kafka.utils.LoggingTest > testLog4jControllerIsRegistered PASSED

kafka.utils.LoggingTest > testLogName STARTED

kafka.utils.LoggingTest > testLogName PASSED

kafka.utils.LoggingTest > testLogNameOverride STARTED

kafka.utils.LoggingTest > testLogNameOverride PASSED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod STARTED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod PASSED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions STARTED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions PASSED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath STARTED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath PASSED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing STARTED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing PASSED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition STARTED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.CoreUtilsTest > testGenerateUuidAsBase64 STARTED

kafka.utils.CoreUtilsTest > testGenerateUuidAsBase64 PASSED

kafka.utils.CoreUtilsTest > testAbs STARTED

kafka.utils.CoreUtilsTest > testAbs PASSED

kafka.utils.CoreUtilsTest > testReplaceSuffix STARTED

kafka.utils.CoreUtilsTest > testReplaceSuffix PASSED

kafka.utils.CoreUtilsTest > testCircularIterator STARTED

kafka.utils.CoreUtilsTest > testCircularIterator PASSED

kafka.utils.CoreUtilsTest > testReadBytes STARTED

kafka.utils.CoreUtilsTest > testReadBytes PASSED

kafka.utils.CoreUtilsTest > testCsvList STARTED

kafka.utils.CoreUtilsTest > testCsvList PASSED

kafka.utils.CoreUtilsTest > testReadInt STARTED

kafka.utils.CoreUtilsTest > testReadInt PASSED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate STARTED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate PASSED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID STARTED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID PASSED

kafka.utils.CoreUtilsTest > testCsvMap STARTED

kafka.utils.CoreUtilsTest > testCsvMap PASSED

kafka.utils.CoreUtilsTest > testInLock STARTED

kafka.utils.CoreUtilsTest > testInLock PASSED

kafka.utils.CoreUtilsTest > testTryAll STARTED

kafka.utils.CoreUtilsTest > testTryAll PASSED

kafka.utils.CoreUtilsTest > testSwallow STARTED

kafka.utils.CoreUtilsTest > testSwallow PASSED

kafka.utils.json.JsonValueTest > testJsonObjectIterator STARTED

kafka.utils.json.JsonValueTest > testJsonObjectIterator PASSED

kafka.utils.json.JsonValueTest > testDecodeLong STARTED

kafka.utils.json.JsonValueTest > testDecodeLong PASSED

kafka.utils.json.JsonValueTest > testAsJsonObject STARTED

kafka.utils.json.JsonValueTest > testAsJsonObject PASSED

kafka.utils.json.JsonValueTest > testDecodeDouble STARTED

kafka.utils.json.JsonValueTest > testDecodeDouble PASSED

kafka.utils.json.JsonValueTest > testDecodeOption STARTED

kafka.utils.json.JsonValueTest > testDecodeOption PASSED

kafka.utils.json.JsonValueTest > testDecodeString STARTED

kafka.utils.json.JsonValueTest > testDecodeString PASSED

kafka.utils.json.JsonValueTest > testJsonValueToString STARTED

kafka.utils.json.JsonValueTest > testJsonValueToString PASSED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArray STARTED

kafka.utils.json.JsonValueTest > testAsJsonArray PASSED

kafka.utils.json.JsonValueTest > testJsonValueHashCode STARTED

kafka.utils.json.JsonValueTest > testJsonValueHashCode PASSED

kafka.utils.json.JsonValueTest > testDecodeInt STARTED

kafka.utils.json.JsonValueTest > testDecodeInt PASSED

kafka.utils.json.JsonValueTest > testDecodeMap STARTED

kafka.utils.json.JsonValueTest > testDecodeMap PASSED

kafka.utils.json.JsonValueTest > testDecodeSeq STARTED

kafka.utils.json.JsonValueTest > testDecodeSeq PASSED

kafka.utils.json.JsonValueTest > testJsonObjectGet STARTED

kafka.utils.json.JsonValueTest > testJsonObjectGet PASSED

kafka.utils.json.JsonValueTest > testJsonValueEquals STARTED

kafka.utils.json.JsonValueTest > testJsonValueEquals PASSED

kafka.utils.json.JsonValueTest > testJsonArrayIterator 

Build failed in Jenkins: kafka-1.0-jdk7 #272

2019-06-07 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Lower producer throughput in flaky upgrade system test

[jason] MINOR: Fix race condition on shutdown of verifiable producer

--
[...truncated 175.96 KB...]

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > shouldGenerateEmptyMapWhenNoRequestsOutstanding PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > shouldCreateMetricsOnStarting STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > shouldCreateMetricsOnStarting PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > shouldAbortAppendToLogOnEndTxnWhenNotCoordinatorError STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > shouldAbortAppendToLogOnEndTxnWhenNotCoordinatorError PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > shouldRetryAppendToLogOnEndTxnWhenCoordinatorNotAvailableError STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > shouldRetryAppendToLogOnEndTxnWhenCoordinatorNotAvailableError PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > shouldCompleteAppendToLogOnEndTxnWhenSendMarkersSucceed STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > shouldCompleteAppendToLogOnEndTxnWhenSendMarkersSucceed PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > shouldGenerateRequestPerPartitionPerBroker STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > shouldGenerateRequestPerPartitionPerBroker PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > shouldRemoveMarkersForTxnPartitionWhenPartitionEmigrated STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > shouldRemoveMarkersForTxnPartitionWhenPartitionEmigrated PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > shouldSkipSendMarkersWhenLeaderNotFound STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > shouldSkipSendMarkersWhenLeaderNotFound PASSED

kafka.coordinator.transaction.TransactionLogTest > shouldReadWriteMessages STARTED

kafka.coordinator.transaction.TransactionLogTest > shouldReadWriteMessages PASSED

kafka.coordinator.transaction.TransactionLogTest > shouldThrowExceptionWriteInvalidTxn STARTED

kafka.coordinator.transaction.TransactionLogTest > shouldThrowExceptionWriteInvalidTxn PASSED

kafka.cluster.ReplicaTest > testCannotIncrementLogStartOffsetPastHighWatermark STARTED

kafka.cluster.ReplicaTest > testCannotIncrementLogStartOffsetPastHighWatermark PASSED

kafka.cluster.ReplicaTest > testSegmentDeletionWithHighWatermarkInitialization STARTED

kafka.cluster.ReplicaTest > testSegmentDeletionWithHighWatermarkInitialization PASSED

kafka.cluster.ReplicaTest > testCannotDeleteSegmentsAtOrAboveHighWatermark STARTED

kafka.cluster.ReplicaTest > testCannotDeleteSegmentsAtOrAboveHighWatermark PASSED

kafka.cluster.PartitionTest > testAppendRecordsAsFollowerBelowLogStartOffset STARTED

kafka.cluster.PartitionTest > testAppendRecordsAsFollowerBelowLogStartOffset PASSED

kafka.cluster.PartitionTest > testAppendRecordsToFollowerWithNoReplicaThrowsException STARTED

kafka.cluster.PartitionTest > testAppendRecordsToFollowerWithNoReplicaThrowsException PASSED

kafka.cluster.PartitionTest > testGetReplica STARTED

kafka.cluster.PartitionTest > testGetReplica PASSED

kafka.cluster.BrokerEndPointTest > testEndpointFromUri STARTED

kafka.cluster.BrokerEndPointTest > testEndpointFromUri PASSED

kafka.cluster.BrokerEndPointTest > testHashAndEquals STARTED

kafka.cluster.BrokerEndPointTest > testHashAndEquals PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNoRack STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNoRack PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonFutureVersion STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonFutureVersion PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNullRack STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNullRack PASSED

kafka.cluster.BrokerEndPointTest > testBrokerEndpointFromUri STARTED

kafka.cluster.BrokerEndPointTest > testBrokerEndpointFromUri PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV1 STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV1 PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV2 STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV2 PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV3 STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV3 PASSED

kafka.api.SaslScramSslEndToEndAuthorizationTest > testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.api.SaslScramSslEndToEndAuthorizationTest > testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.SaslScramSslEndToEndAuthorizationTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #3712

2019-06-07 Thread Apache Jenkins Server
See 


Changes:

[cshapi] MINOR: Lower producer throughput in flaky upgrade system test

[cshapi] MINOR: Fix race condition on shutdown of verifiable producer

--
[...truncated 2.51 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit PASSED


Build failed in Jenkins: kafka-2.1-jdk8 #202

2019-06-07 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Lower producer throughput in flaky upgrade system test

[jason] MINOR: Fix race condition on shutdown of verifiable producer

--
[...truncated 922.53 KB...]

kafka.zk.ReassignPartitionsZNodeTest > testDecodeInvalidJson STARTED

kafka.zk.ReassignPartitionsZNodeTest > testDecodeInvalidJson PASSED

kafka.zk.ReassignPartitionsZNodeTest > testEncode STARTED

kafka.zk.ReassignPartitionsZNodeTest > testEncode PASSED

kafka.zk.ReassignPartitionsZNodeTest > testDecodeValidJson STARTED

kafka.zk.ReassignPartitionsZNodeTest > testDecodeValidJson PASSED

kafka.zk.KafkaZkClientTest > testZNodeChangeHandlerForDataChange STARTED

kafka.zk.KafkaZkClientTest > testZNodeChangeHandlerForDataChange PASSED

kafka.zk.KafkaZkClientTest > testCreateAndGetTopicPartitionStatesRaw STARTED

kafka.zk.KafkaZkClientTest > testCreateAndGetTopicPartitionStatesRaw PASSED

kafka.zk.KafkaZkClientTest > testLogDirGetters STARTED

kafka.zk.KafkaZkClientTest > testLogDirGetters PASSED

kafka.zk.KafkaZkClientTest > testSetGetAndDeletePartitionReassignment STARTED

kafka.zk.KafkaZkClientTest > testSetGetAndDeletePartitionReassignment PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndVersion STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndVersion PASSED

kafka.zk.KafkaZkClientTest > testGetChildren STARTED

kafka.zk.KafkaZkClientTest > testGetChildren PASSED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset STARTED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset PASSED

kafka.zk.KafkaZkClientTest > testClusterIdMethods STARTED

kafka.zk.KafkaZkClientTest > testClusterIdMethods PASSED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr STARTED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr PASSED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testCreateRecursive STARTED

kafka.zk.KafkaZkClientTest > testCreateRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData STARTED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods PASSED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw STARTED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw PASSED

kafka.zk.KafkaZkClientTest > testAclManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testAclManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods STARTED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateLogDir STARTED

kafka.zk.KafkaZkClientTest > testPropagateLogDir PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndStat STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndStat PASSED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress STARTED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress PASSED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths STARTED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters PASSED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetLogConfigs STARTED

kafka.zk.KafkaZkClientTest > testGetLogConfigs PASSED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods STARTED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath STARTED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode PASSED

kafka.zk.KafkaZkClientTest > testDeletePath STARTED

kafka.zk.KafkaZkClientTest > testDeletePath PASSED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods STARTED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions STARTED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions PASSED

kafka.zk.KafkaZkClientTest > 

Build failed in Jenkins: kafka-2.0-jdk8 #274

2019-06-07 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Lower producer throughput in flaky upgrade system test

[jason] MINOR: Fix race condition on shutdown of verifiable producer

--
[...truncated 438.98 KB...]
kafka.log.LogCleanerManagerTest > testLogsWithSegmentsToDeleteShouldNotConsiderCleanupPolicyDeleteLogs STARTED

kafka.log.LogCleanerManagerTest > testLogsWithSegmentsToDeleteShouldNotConsiderCleanupPolicyDeleteLogs PASSED

kafka.log.LogCleanerManagerTest > testCleanableOffsetsForShortTime STARTED

kafka.log.LogCleanerManagerTest > testCleanableOffsetsForShortTime PASSED

kafka.log.LogCleanerManagerTest > testDoneCleaning STARTED

kafka.log.LogCleanerManagerTest > testDoneCleaning PASSED

kafka.log.LogCleanerManagerTest > testDoneDeleting STARTED

kafka.log.LogCleanerManagerTest > testDoneDeleting PASSED

kafka.log.ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord STARTED

kafka.log.ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord PASSED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing STARTED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing PASSED

kafka.log.ProducerStateManagerTest > testTruncate STARTED

kafka.log.ProducerStateManagerTest > testTruncate PASSED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile PASSED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload STARTED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload PASSED

kafka.log.ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump STARTED

kafka.log.ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation PASSED

kafka.log.ProducerStateManagerTest > testTakeSnapshot STARTED

kafka.log.ProducerStateManagerTest > testTakeSnapshot PASSED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore STARTED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore PASSED

kafka.log.ProducerStateManagerTest > testNonMatchingTxnFirstOffsetMetadataNotCached STARTED

kafka.log.ProducerStateManagerTest > testNonMatchingTxnFirstOffsetMetadataNotCached PASSED

kafka.log.ProducerStateManagerTest > testAppendEmptyControlBatch STARTED

kafka.log.ProducerStateManagerTest > testAppendEmptyControlBatch PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterEviction STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterEviction PASSED

kafka.log.ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog STARTED

kafka.log.ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog PASSED

kafka.log.ProducerStateManagerTest > testLoadFromEmptySnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromEmptySnapshotFile PASSED

kafka.log.ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire STARTED

kafka.log.ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire PASSED

kafka.log.ProducerStateManagerTest > testBasicIdMapping STARTED

kafka.log.ProducerStateManagerTest > testBasicIdMapping PASSED

kafka.log.ProducerStateManagerTest > updateProducerTransactionState STARTED

kafka.log.ProducerStateManagerTest > updateProducerTransactionState PASSED

kafka.log.ProducerStateManagerTest > testRecoverFromSnapshot STARTED

kafka.log.ProducerStateManagerTest > testRecoverFromSnapshot PASSED

kafka.log.ProducerStateManagerTest > testPrepareUpdateDoesNotMutate STARTED

kafka.log.ProducerStateManagerTest > testPrepareUpdateDoesNotMutate PASSED

kafka.log.ProducerStateManagerTest > testSequenceNotValidatedForGroupMetadataTopic STARTED

kafka.log.ProducerStateManagerTest > testSequenceNotValidatedForGroupMetadataTopic PASSED

kafka.log.ProducerStateManagerTest > testLastStableOffsetCompletedTxn STARTED

kafka.log.ProducerStateManagerTest > testLastStableOffsetCompletedTxn PASSED

kafka.log.ProducerStateManagerTest > testLoadFromSnapshotRemovesNonRetainedProducers STARTED

kafka.log.ProducerStateManagerTest > testLoadFromSnapshotRemovesNonRetainedProducers PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffset STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffset PASSED

kafka.log.ProducerStateManagerTest > testTxnFirstOffsetMetadataCached STARTED

kafka.log.ProducerStateManagerTest > testTxnFirstOffsetMetadataCached PASSED

kafka.log.ProducerStateManagerTest > testCoordinatorFencedAfterReload STARTED

kafka.log.ProducerStateManagerTest > testCoordinatorFencedAfterReload PASSED

kafka.log.ProducerStateManagerTest > testControlRecordBumpsEpoch STARTED

kafka.log.ProducerStateManagerTest > 

Jenkins build is back to normal : kafka-1.1-jdk7 #266

2019-06-07 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-8513) Add kafka-streams-application-reset.bat for Windows platform

2019-06-07 Thread Kengo Seki (JIRA)
Kengo Seki created KAFKA-8513:
-

 Summary: Add kafka-streams-application-reset.bat for Windows platform
 Key: KAFKA-8513
 URL: https://issues.apache.org/jira/browse/KAFKA-8513
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Kengo Seki
Assignee: Kengo Seki


For improving Windows support, it'd be nice if there were a batch file 
corresponding to bin/kafka-streams-application-reset.sh.
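
The Unix script being ported is a thin wrapper around the resetter tool, so a Windows counterpart can follow the same pattern as the existing wrapper scripts in bin/windows. The sketch below is a hypothetical implementation, not the committed fix; it assumes the tool's main class is kafka.tools.StreamsResetter (the class the Unix script invoked at the time) and that kafka-run-class.bat lives in the same directory:

```bat
@echo off
rem Hypothetical sketch of bin/windows/kafka-streams-application-reset.bat,
rem modeled on the other wrapper scripts in bin/windows.
rem %~dp0 expands to the directory containing this script;
rem %* forwards all command-line arguments to the resetter tool.
"%~dp0kafka-run-class.bat" kafka.tools.StreamsResetter %*
```

It would be invoked the same way as the shell version, e.g. `kafka-streams-application-reset.bat --application-id my-app --input-topics my-topic`.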



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3711

2019-06-07 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] KAFKA-8331: stream static membership system test (#6877)

[cmccabe] KAFKA-8499: ensure java is in PATH for ducker system tests (#6898)

--
[...truncated 2.50 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Jenkins build is back to normal : kafka-2.2-jdk8 #133

2019-06-07 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-467: Augment ProduceResponse error messaging

2019-06-07 Thread Guozhang Wang
Bump up this thread again for whoever's interested.

On Sat, May 11, 2019 at 12:34 PM Guozhang Wang  wrote:

> Hello everyone,
>
> I'd like to start a discussion thread on this newly created KIP to improve
> error communication and handling for producer response:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-467%3A+Augment+ProduceResponse+error+messaging+for+specific+culprit+records
>
> Thanks,
> --
> -- Guozhang
>


-- 
-- Guozhang


Build failed in Jenkins: kafka-trunk-jdk8 #3710

2019-06-07 Thread Apache Jenkins Server
See 


Changes:

[cshapi] KAFKA-8003; Fix flaky testFencingOnTransactionExpiration

--
[...truncated 2.50 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED


[jira] [Created] (KAFKA-8512) Looking into the Future: Assignor Version

2019-06-07 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-8512:


 Summary: Looking into the Future: Assignor Version
 Key: KAFKA-8512
 URL: https://issues.apache.org/jira/browse/KAFKA-8512
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang


I'd propose to modify the JoinGroup protocol in this KIP as well to read the 
`protocol version` from the PartitionAssignor.

Then, on the broker side, when choosing the leader it will pick the member with 
the highest protocol version instead of picking on a "first come, first served" basis.

Although this change will not benefit the upgrade path at this time, if we need 
to upgrade the assignor again in the future, then as long as the upgrade does 
not change the rebalance semantics (e.g. like we did in this KIP, from "eager" 
to "cooperative"), we can use a single rolling bounce instead, since as long as 
there's one member on the newer version, that consumer will be picked.

For example, this can also help save the "version probing" cost on Streams: 
suppose we augment the join-group schema with `protocol version` in Kafka 
version 2.3, and then, with both brokers and clients on version 2.3+, on the 
first rolling bounce where the subscription and assignment schema and/or user 
metadata have changed, this protocol version will be bumped. On the broker side, 
when receiving all members' join-group requests, it will choose the one with 
the highest protocol version (assuming a higher-versioned protocol is always 
backward compatible, i.e. the coordinator can recognize lower-versioned 
protocols as well) and select it as the leader. The leader can then decide, 
based on the subscription information it received and deserialized, how to 
assign partitions and how to encode the assignment so that everyone can 
understand it. With this, in Streams for example, no version probing would be 
needed, since we are guaranteed the leader knows everyone's version -- again 
assuming that a higher-versioned protocol is always backward compatible -- and 
hence can successfully do the assignment in that round.
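The leader-selection rule described above can be sketched as follows. This is a 
hypothetical illustration, not Kafka's actual coordinator code; the `Member` 
type and `choose_leader` function are invented names for the idea in the ticket.

```python
# Sketch: broker-side leader selection that prefers the member advertising the
# highest assignor protocol version, rather than the first member to join.
from dataclasses import dataclass


@dataclass
class Member:
    member_id: str
    protocol_version: int  # read from the member's JoinGroup metadata


def choose_leader(members):
    # Assumes a higher-versioned protocol is always backward compatible, so the
    # highest-versioned member can understand every subscription it receives.
    return max(members, key=lambda m: m.protocol_version)


members = [Member("a", 1), Member("b", 3), Member("c", 2)]
print(choose_leader(members).member_id)  # prints "b"
```

With this rule, adding one upgraded member to the group is enough for the 
coordinator to pick an upgraded leader.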



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8511) Looking into the Future: Heartbeat Communicated Protocol

2019-06-07 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-8511:


 Summary: Looking into the Future: Heartbeat Communicated Protocol
 Key: KAFKA-8511
 URL: https://issues.apache.org/jira/browse/KAFKA-8511
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang


Note that KIP-429 relies on the fact that COOPERATIVE and EAGER members can 
work together within the same generation as long as the leader recognizes both; 
this, however, may not hold moving forward if we add a third rebalance 
protocol. One idea to resolve this in the future is that, instead of letting 
the members decide "locally" which protocol to use before sending the 
join-group request, we use the Heartbeat request / response to piggy-back the 
communication of the group's supported protocols and let members rely on 
that "global" information to make decisions. More specifically:

* On the Heartbeat request, we will add an additional field listing the 
protocols this member supports.
* On the Heartbeat response, we will add an additional field with a single 
protocol indicating which one to use if the error code suggests re-joining the group.

Upon receiving the heartbeat request, if the indicated supported protocols do 
not contain the one the broker has decided to use for the upcoming rebalance, 
it replies with a fatal error.
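A minimal sketch of that broker-side check, under the ticket's assumptions; the 
function and error names here are illustrative placeholders, not real Kafka 
protocol fields.

```python
# Sketch: heartbeat-based protocol negotiation. The broker rejects a member
# whose advertised protocols do not include the protocol it has selected for
# the upcoming rebalance; otherwise it tells the member which protocol to use.
def handle_heartbeat(member_supported_protocols, selected_protocol):
    """Return (error_code, protocol_to_use) for a heartbeat response."""
    if selected_protocol not in member_supported_protocols:
        # Fatal: this member cannot participate in the next rebalance.
        return ("UNSUPPORTED_PROTOCOL", None)
    # Normally NONE; on a pending rebalance the error code would instead ask
    # the member to re-join using the indicated protocol.
    return ("NONE", selected_protocol)


print(handle_heartbeat({"eager", "cooperative"}, "cooperative"))
# prints ('NONE', 'cooperative')
```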



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8510) Update StreamsPartitionAssignor to use the built-in owned partitions to achieve stickiness

2019-06-07 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-8510:


 Summary: Update StreamsPartitionAssignor to use the built-in owned 
partitions to achieve stickiness
 Key: KAFKA-8510
 URL: https://issues.apache.org/jira/browse/KAFKA-8510
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang


Today this information is encoded as part of the user-data bytes; we can now 
remove it and leverage the owned partitions of the protocol directly.
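A toy sketch of sticky assignment driven by the protocol's owned-partitions 
field rather than user-data bytes: keep each partition with its previous owner 
where possible, then spread the remainder over the least-loaded members. This 
is purely illustrative and not the StreamsPartitionAssignor implementation.

```python
# Sketch: stickiness from owned partitions.
def sticky_assign(members_owned, all_partitions):
    """members_owned: {member_id: set of previously owned partitions}."""
    assignment = {m: [] for m in members_owned}
    unassigned = []
    for p in all_partitions:
        owner = next((m for m, owned in members_owned.items() if p in owned), None)
        if owner is not None:
            assignment[owner].append(p)  # sticky: keep with previous owner
        else:
            unassigned.append(p)
    # Hand leftovers to whichever member currently has the fewest partitions.
    for p in unassigned:
        target = min(assignment, key=lambda m: len(assignment[m]))
        assignment[target].append(p)
    return assignment


print(sticky_assign({"a": {0, 1}, "b": {2}}, [0, 1, 2, 3]))
# prints {'a': [0, 1], 'b': [2, 3]}
```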



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: JIRA Contributor permissions

2019-06-07 Thread Bill Bejeck
Vikash,

Thanks for your interest in Apache Kafka.  You're all set now.

-Bill

On Fri, Jun 7, 2019 at 12:45 PM Vikash Kumar 
wrote:

> Sorry I forgot to mention my user ID.
>
> My User ID : krvikash
>
> On Fri, Jun 7, 2019 at 10:12 PM Vikash Kumar 
> wrote:
>
> > Hi,
> >
> > I want to start contributing to Apache Kafka. Can you please add me to the
> > JIRA contributor list?
> >
> > Thanks,
> > Vikash
> >
>


[jira] [Resolved] (KAFKA-8331) Add system test for enabling static membership on KStream

2019-06-07 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8331.

Resolution: Fixed

> Add system test for enabling static membership on KStream
> -
>
> Key: KAFKA-8331
> URL: https://issues.apache.org/jira/browse/KAFKA-8331
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 2.4.0
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
> Fix For: 2.4.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-471: Expose RocksDB Metrics in Kafka Streams

2019-06-07 Thread Patrik Kleindl
Hi Sophie
This will be a good change, I have been thinking about proposing something 
similar or even passing the properties per store.
RocksDB should probably know how much memory was reserved but maybe does not 
expose it.
We are limiting it already as you suggested but this is a rather crude tool.
Especially in a larger topology with mixed loads per topic it would be helpful 
to get more insight into which store puts a lot of load on memory.
Regarding the limiting capability, I think I remember reading that those limits 
only affect some parts of the memory and other parts can still exceed the 
limit. I'll try to look up the difference.
Best regards
Patrik 

> Am 07.06.2019 um 21:03 schrieb Sophie Blee-Goldman :
> 
> Hi Patrik,
> 
> As of 2.3 you will be able to use the RocksDBConfigSetter to effectively
> bound the total memory used by RocksDB for a single app instance. You
> should already be able to limit the memory used per rocksdb store, though
> as you mention there can be a lot of them. I'm not sure you can monitor the
> memory usage if you are not limiting it though.
> 
>> On Fri, Jun 7, 2019 at 2:06 AM Patrik Kleindl  wrote:
>> 
>> Hi
>> Thanks Bruno for the KIP, this is a very good idea.
>> 
>> I have one question, are there metrics available for the memory consumption
>> of RocksDB?
>> As they are running outside the JVM we have run into issues because they
>> were using all the other memory.
>> And with multiple streams applications on the same machine, each with
>> several KTables and 10+ partitions per topic the number of stores can get
>> out of hand pretty easily.
>> Or did I miss something obvious how those can be monitored better?
>> 
>> best regards
>> 
>> Patrik
>> 
>>> On Fri, 17 May 2019 at 23:54, Bruno Cadonna  wrote:
>>> 
>>> Hi all,
>>> 
>>> this KIP describes the extension of the Kafka Streams' metrics to include
>>> RocksDB's internal statistics.
>>> 
>>> Please have a look at it and let me know what you think. Since I am not a
>>> RocksDB expert, I am thankful for any additional pair of eyes that
>>> evaluates this KIP.
>>> 
>>> 
>>> 
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-471:+Expose+RocksDB+Metrics+in+Kafka+Streams
>>> 
>>> Best regards,
>>> Bruno
>>> 
>> 


Re: [DISCUSS] KIP-471: Expose RocksDB Metrics in Kafka Streams

2019-06-07 Thread Sophie Blee-Goldman
Hi Patrik,

As of 2.3 you will be able to use the RocksDBConfigSetter to effectively
bound the total memory used by RocksDB for a single app instance. You
should already be able to limit the memory used per rocksdb store, though
as you mention there can be a lot of them. I'm not sure you can monitor the
memory usage if you are not limiting it though.

On Fri, Jun 7, 2019 at 2:06 AM Patrik Kleindl  wrote:

> Hi
> Thanks Bruno for the KIP, this is a very good idea.
>
> I have one question, are there metrics available for the memory consumption
> of RocksDB?
> As they are running outside the JVM we have run into issues because they
> were using all the other memory.
> And with multiple streams applications on the same machine, each with
> several KTables and 10+ partitions per topic the number of stores can get
> out of hand pretty easily.
> Or did I miss something obvious how those can be monitored better?
>
> best regards
>
> Patrik
>
> On Fri, 17 May 2019 at 23:54, Bruno Cadonna  wrote:
>
> > Hi all,
> >
> > this KIP describes the extension of the Kafka Streams' metrics to include
> > RocksDB's internal statistics.
> >
> > Please have a look at it and let me know what you think. Since I am not a
> > RocksDB expert, I am thankful for any additional pair of eyes that
> > evaluates this KIP.
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-471:+Expose+RocksDB+Metrics+in+Kafka+Streams
> >
> > Best regards,
> > Bruno
> >
>


[jira] [Created] (KAFKA-8509) Add downgrade system tests

2019-06-07 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-8509:
--

 Summary: Add downgrade system tests
 Key: KAFKA-8509
 URL: https://issues.apache.org/jira/browse/KAFKA-8509
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson


We've been bitten a few times by downgrade incompatibilities. It should be 
straightforward to adapt our current upgrade system tests to support downgrades 
as well. The basic procedure should be: 
 # Roll the cluster with the updated binary, keep IBP on old version
 # Verify produce/consume
 # Roll the cluster with the old binary, keep IBP the same.
 # Verify produce/consume
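The four steps above can be sketched as a test driver. The `roll` and 
`verify_produce_consume` helpers are placeholders for the real ducktape system-test 
utilities, not their actual names; the fake cluster exists only to make the 
sketch runnable.

```python
# Sketch: proposed downgrade system test, adapted from the upgrade-test shape.
def downgrade_test(cluster, old_version, new_version, old_ibp):
    # 1. Roll to the new binary, pinning inter.broker.protocol.version (IBP).
    cluster.roll(binary=new_version, ibp=old_ibp)
    # 2. Verify produce/consume.
    cluster.verify_produce_consume()
    # 3. Roll back to the old binary; IBP never moved, so this is safe.
    cluster.roll(binary=old_version, ibp=old_ibp)
    # 4. Verify produce/consume again.
    cluster.verify_produce_consume()


class FakeCluster:
    """Records calls so the sketch can be exercised without a real cluster."""
    def __init__(self):
        self.log = []

    def roll(self, binary, ibp):
        self.log.append(("roll", binary, ibp))

    def verify_produce_consume(self):
        self.log.append(("verify",))


c = FakeCluster()
downgrade_test(c, old_version="2.2", new_version="2.3", old_ibp="2.2")
print(c.log)
```

The key invariant is that the IBP stays on the old version for the whole cycle, 
so the rolled-back brokers never see messages in a format they cannot read.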



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8508) saslClient failed to initialize properly: it's null.

2019-06-07 Thread Caroline Liu (JIRA)
Caroline Liu created KAFKA-8508:
---

 Summary: saslClient failed to initialize properly: it's null.
 Key: KAFKA-8508
 URL: https://issues.apache.org/jira/browse/KAFKA-8508
 Project: Kafka
  Issue Type: Bug
  Components: zkclient
Affects Versions: 2.0.1
Reporter: Caroline Liu


After a network issue caused the last ISR to fail connecting to ZooKeeper, the 
attempt to reconnect failed with an ArrayIndexOutOfBoundsException. 
{code:java}
2019-05-31 15:54:38,823 [zk-session-expiry-handler0-SendThread(zk2-1:2181)] 
WARN (org.apache.zookeeper.ClientCnxn) - Client session timed out, have not 
heard from server in 20010ms for sessionid 0x1511b2b1042a
2019-05-31 15:54:38,823 [zk-session-expiry-handler0-SendThread(zk2-1:2181)] 
INFO (org.apache.zookeeper.ClientCnxn) - Client session timed out, have not 
heard from server in 20010ms for sessionid 0x1511b2b1042a, closing socket 
connection and attempting reconnect
2019-05-31 15:54:39,702 [zk-session-expiry-handler0-SendThread(zk1-2:2181)] 
INFO (org.apache.zookeeper.client.ZooKeeperSaslClient) - Client will use 
DIGEST-MD5 as SASL mechanism.
2019-05-31 15:54:39,702 [zk-session-expiry-handler0-SendThread(zk1-2:2181)] 
ERROR (org.apache.zookeeper.client.ZooKeeperSaslClient) - Exception while 
trying to create SASL client: java.lang.ArrayIndexOutOfBoundsException: 0
2019-05-31 15:54:39,702 [zk-session-expiry-handler0-SendThread(zk1-2:2181)] 
INFO (org.apache.zookeeper.ClientCnxn) - Opening socket connection to server 
zk1-2/1.3.6.1:2181. Will attempt to SASL-authenticate using Login Context 
section 'Client'
2019-05-31 15:54:39,702 [zk-session-expiry-handler0-SendThread(zk1-2:2181)] 
INFO (org.apache.zookeeper.ClientCnxn) - Socket connection established to 
zk1-2/1.3.6.1:2181, initiating session
2019-05-31 15:54:39,703 [zk-session-expiry-handler0-SendThread(zk1-2:2181)] 
INFO (org.apache.zookeeper.ClientCnxn) - Session establishment complete on 
server zk1-2/1.3.6.1:2181, sessionid = 0x1511b2b1042a, negotiated timeout = 
3
2019-05-31 15:54:39,703 [zk-session-expiry-handler0-SendThread(zk1-2:2181)] 
ERROR (org.apache.zookeeper.ClientCnxn) - SASL authentication with Zookeeper 
Quorum member failed: javax.security.sasl.SaslException: saslClient failed to 
initialize properly: it's null.{code}
Kafka was "not live" in zookeeper and had to be manually restarted to recover 
from this error. It would be better if the last ISR could retry.
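A retry loop with exponential backoff is one shape the suggested fix could 
take. This is a generic sketch, not the ZooKeeper client's API; `reconnect` and 
the exception handling are placeholders.

```python
import time


# Sketch: retry a failing (re)connection attempt with exponential backoff
# instead of giving up after a single SASL-client initialization failure.
def reconnect_with_retry(reconnect, max_attempts=5, base_delay=0.1):
    for attempt in range(max_attempts):
        try:
            return reconnect()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # exhausted; surface the original failure
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...
```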



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8507) Support --bootstrap-server in all command line tools

2019-06-07 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-8507:
--

 Summary: Support --bootstrap-server in all command line tools
 Key: KAFKA-8507
 URL: https://issues.apache.org/jira/browse/KAFKA-8507
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Jason Gustafson


This is an unambitious initial move toward standardizing the command line 
tools. We have favored the name `--bootstrap-server` in all new tools since it 
matches the config `bootstrap.servers` which is used by all clients. Some older 
commands use `--broker-list` or `--bootstrap-servers` and maybe other exotic 
variations. We should support `--bootstrap-server` in all commands and 
deprecate the other options.
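One way a tool could accept the standard flag while still honoring a deprecated 
alias, sketched with argparse; this is illustrative, not Kafka's actual 
(Scala/Java) option parsing.

```python
import argparse


# Sketch: prefer --bootstrap-server, fall back to a deprecated --broker-list.
def parse_bootstrap(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("--bootstrap-server", dest="bootstrap_server")
    parser.add_argument("--broker-list", dest="broker_list",
                        help="DEPRECATED: use --bootstrap-server instead")
    args = parser.parse_args(argv)
    # New flag wins; the old one keeps existing scripts working.
    return args.bootstrap_server or args.broker_list


print(parse_bootstrap(["--bootstrap-server", "localhost:9092"]))
# prints localhost:9092
```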



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: JIRA Contributor permissions

2019-06-07 Thread Vikash Kumar
Sorry I forgot to mention my user ID.

My User ID : krvikash

On Fri, Jun 7, 2019 at 10:12 PM Vikash Kumar 
wrote:

> Hi,
>
> I want to start contributing to Apache Kafka. Can you please add me to the
> JIRA contributor list?
>
> Thanks,
> Vikash
>


JIRA Contributor permissions

2019-06-07 Thread Vikash Kumar
Hi,

I want to start contributing to Apache Kafka. Can you please add me to the
JIRA contributor list?

Thanks,
Vikash


Jenkins build is back to normal : kafka-trunk-jdk8 #3709

2019-06-07 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-8506) Kafka Consumer broker never stops on connection failure

2019-06-07 Thread Gaurav (JIRA)
Gaurav created KAFKA-8506:
-

 Summary: Kafka Consumer broker never stops on connection failure
 Key: KAFKA-8506
 URL: https://issues.apache.org/jira/browse/KAFKA-8506
 Project: Kafka
  Issue Type: Bug
Reporter: Gaurav


Not able to stop the Kafka consumer on connection failure; it keeps retrying to 
create a connection when calling consumer.poll(1000). We poll on demand via 
REST; in case of connection failure it should stop and throw an exception.

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Contributor permissions

2019-06-07 Thread Bill Bejeck
Hi Carlos,

Thanks again for your interest in Apache Kafka.  I've added you as a
contributor so you can now self-assign tickets.

-Bill

On Fri, Jun 7, 2019 at 6:14 AM Carlos Manuel Duclos-Vergara <
carlos.duc...@schibsted.com> wrote:

> Hi,
> Yes, I read the guidelines. I forgot to mention the JIRA ID. I already
> pushed a PR from my fork.
> My Jira-ID is carlos.duclos.
>
> Regards
>
> On Fri, 7 Jun 2019 at 12:03, Bruno Cadonna  wrote:
>
> > Hi Carlos,
> >
> > It's great that you want to contribute to Apache Kafka.
> >
> > Have you already read the instructions on how to contribute to Kafka?
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
> > https://kafka.apache.org/contributing.html
> >
> > To assign Jira tickets to yourself, you need to be added to the list
> > of contributors. For that you need to sign-up to the Apache Jira
> > (https://issues.apache.org/jira/projects/KAFKA) and choose a Jira-ID
> > for yourself. Then you send an e-mail that contains the Jira-ID and a
> > request to be added to the list of contributors to this mailing list
> > (you can find examples in the list archives). Once a project committer
> > has added you to the list of contributors, you can assign tickets to
> > yourself.
> >
> > Best,
> > Bruno
> >
> >
> >
> >
> > On Fri, Jun 7, 2019 at 11:18 AM Dulvin Witharane 
> > wrote:
> > >
> > > Hi,
> > >
> > > I think for the first JIRA ticket you have to request to be added as
> the
> > > assignee in a new thread. Afterwards you can assign them yourself and
> > keep
> > > working.
> > >
> > > On Fri, Jun 7, 2019 at 12:28 PM Carlos Manuel Duclos-Vergara <
> > > carlos.duc...@schibsted.com> wrote:
> > >
> > > > Hi,
> > > >
> > > > I'd like to start working on some of the newbie tasks (I got
> permission
> > > > from my employer to use up to 10% of my time on this project). I have
> > > > already found a couple of tasks that I'd like to work on, so I'd like
> > to be
> > > > able to assign those tasks to me. What is the procedure?
> > > >
> > > > Regards
> > > >
> > > > --
> > > > Carlos Manuel Duclos Vergara
> > > > Backend Software Developer
> > > >
> > > --
> > > Witharane, DRH
> > > R & D Engineer
> > > Synopsys Lanka (Pvt) Ltd.
> > > Borella, Sri Lanka
> > > 0776746781
> > >
> > > Sent from my iPhone
> >
>
>
> --
> Carlos Manuel Duclos Vergara
> Backend Software Developer
>


[jira] [Created] (KAFKA-8505) Limit the maximum number of connections per ip client ID

2019-06-07 Thread Igor Soarez (JIRA)
Igor Soarez created KAFKA-8505:
--

 Summary: Limit the maximum number of connections per ip client ID
 Key: KAFKA-8505
 URL: https://issues.apache.org/jira/browse/KAFKA-8505
 Project: Kafka
  Issue Type: New Feature
Reporter: Igor Soarez


As highlighted by KAFKA-1512 back in 2014, it is important to be able to limit 
the number of client connections to brokers to maintain service availability. 
With multiple use-cases on the same cluster, it's important to prevent one 
misconfigured use-case from affecting other use-cases.

Cloud infrastructure technology has come a long way since then. Nowadays, in a 
private network using container orchestration technology, IPs come cheap. 
Limiting connections solely on origin IP is no longer acceptable.

Kafka needs to support connection limits based on client identity.

A new configuration property - {{max.connections.per.clientid}} - should work 
similarly to {{max.connections.per.ip}} using ConnectionQuotas, managed 
straight after parsing the request header in SocketServer.
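A toy version of the proposed quota check, keyed by client id rather than IP. 
`max.connections.per.clientid` is the property name the ticket proposes; the 
class here is an invented stand-in for the broker's ConnectionQuotas, not its 
real implementation.

```python
from collections import defaultdict


# Sketch: per-client-id connection limiting.
class ClientIdQuotas:
    def __init__(self, max_per_client_id):
        self.max = max_per_client_id
        self.counts = defaultdict(int)

    def try_register(self, client_id):
        """Called right after the request header is parsed; False = reject."""
        if self.counts[client_id] >= self.max:
            return False
        self.counts[client_id] += 1
        return True

    def release(self, client_id):
        """Called when the connection closes."""
        self.counts[client_id] -= 1


q = ClientIdQuotas(max_per_client_id=2)
print(q.try_register("app-1"), q.try_register("app-1"), q.try_register("app-1"))
# prints True True False
```

Unlike the per-IP limit, the client id is only known after the first request 
header arrives, which is why the check sits after header parsing.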



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-476: Add Java AdminClient interface

2019-06-07 Thread Thomas Becker
+1 non-binding

We've run into issues trying to decorate the AdminClient due to it being an 
abstract class. It will be nice to have consistency with Producer/Consumer as well.

On Tue, 2019-06-04 at 17:17 +0100, Andy Coates wrote:

Hi folks


As there's been no chatter on this KIP I'm assuming it's non-contentious,

(or just boring), hence I'd like to call a vote for KIP-476:


https://cwiki.apache.org/confluence/display/KAFKA/KIP-476%3A+Add+Java+AdminClient+Interface


Thanks,


Andy


--
Tommy Becker
Principal Engineer
Personalized Content Discovery
O +1 919.460.4747
tivo.com



This email and any attachments may contain confidential and privileged material 
for the sole use of the intended recipient. Any review, copying, or 
distribution of this email (or any attachments) by others is prohibited. If you 
are not the intended recipient, please contact the sender immediately and 
permanently delete this email and any attachments. No employee or agent of TiVo 
is authorized to conclude any binding agreement on behalf of TiVo by email. 
Binding agreements with TiVo may only be made by a signed written agreement.


Jenkins build is back to normal : kafka-2.3-jdk8 #44

2019-06-07 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-474: To deprecate WindowStore#put(key, value)

2019-06-07 Thread omkar mestry
Thanks to everyone who have voted! I'm closing this vote thread with a
final count:

binding +1: 2 (Matthias, Guozhang)

non-binding +1: 2 (Boyang, Dongjin)

Thanks & Regards
Omkar Mestry

On Tue, Jun 4, 2019 at 3:05 AM Guozhang Wang  wrote:

> +1 (binding).
>
> On Sat, Jun 1, 2019 at 3:19 PM Matthias J. Sax 
> wrote:
>
> > +1 (binding)
> >
> > On 5/31/19 10:58 PM, Dongjin Lee wrote:
> > > +1 (non-binding).
> > >
> > > Thanks,
> > > Dongjin
> > >
> > >
> > > On Sat, Jun 1, 2019 at 2:45 PM Boyang Chen 
> wrote:
> > >
> > >> Thanks omkar for taking the initiative, +1 (non-binding).
> > >>
> > >> 
> > >> From: omkar mestry 
> > >> Sent: Saturday, June 1, 2019 1:40 PM
> > >> To: dev@kafka.apache.org
> > >> Subject: [VOTE] KIP-474: To deprecate WindowStore#put(key, value)
> > >>
> > >> Hi all,
> > >>
> > >> Since we seem to have an agreement in the discussion I would like to
> > >> start the vote on KIP-474.
> > >>
> > >> KIP 474 :-
> > >>
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=115526545
> > >>
> > >> Thanks & Regards
> > >> Omkar Mestry
> > >>
> > >
> > >
> >
> >
>
> --
> -- Guozhang
>


Build failed in Jenkins: kafka-trunk-jdk8 #3708

2019-06-07 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8384; Leader election command integration tests (#6880)

[manikumar] KAFKA-8461: Wait for follower to join the ISR in

--
[...truncated 2.41 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] STARTED


Re: Contributor permissions

2019-06-07 Thread Carlos Manuel Duclos-Vergara
Hi,
Yes, I read the guidelines. I forgot to mention the JIRA ID. I already
pushed a PR from my fork.
My Jira-ID is carlos.duclos.

Regards

On Fri, 7 Jun 2019 at 12:03, Bruno Cadonna  wrote:

> Hi Carlos,
>
> It's great that you want to contribute to Apache Kafka.
>
> Have you already read the instructions on how to contribute to Kafka?
>
> https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
> https://kafka.apache.org/contributing.html
>
> To assign Jira tickets to yourself, you need to be added to the list
> of contributors. For that you need to sign-up to the Apache Jira
> (https://issues.apache.org/jira/projects/KAFKA) and choose a Jira-ID
> for yourself. Then you send an e-mail that contains the Jira-ID and a
> request to be added to the list of contributors to this mailing list
> (you can find examples in the list archives). Once a project committer
> has added you to the list of contributors, you can assign tickets to
> yourself.
>
> Best,
> Bruno
>
>
>
>
> On Fri, Jun 7, 2019 at 11:18 AM Dulvin Witharane 
> wrote:
> >
> > Hi,
> >
> > I think for the first JIRA ticket you have to request to be added as the
> > assignee in a new thread. Afterwards you can assign them yourself and
> keep
> > working.
> >
> > On Fri, Jun 7, 2019 at 12:28 PM Carlos Manuel Duclos-Vergara <
> > carlos.duc...@schibsted.com> wrote:
> >
> > > Hi,
> > >
> > > I'd like to start working on some of the newbie tasks (I got permission
> > > from my employer to use up to 10% of my time on this project). I have
> > > already found a couple of tasks that I'd like to work on, so I'd like
> to be
> > > able to assign those tasks to me. What is the procedure?
> > >
> > > Regards
> > >
> > > --
> > > Carlos Manuel Duclos Vergara
> > > Backend Software Developer
> > >
> > --
> > Witharane, DRH
> > R & D Engineer
> > Synopsys Lanka (Pvt) Ltd.
> > Borella, Sri Lanka
> > 0776746781
> >
> > Sent from my iPhone
>


-- 
Carlos Manuel Duclos Vergara
Backend Software Developer


Re: Contributor permissions

2019-06-07 Thread Bruno Cadonna
Hi Carlos,

It's great that you want to contribute to Apache Kafka.

Have you already read the instructions on how to contribute to Kafka?

https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
https://kafka.apache.org/contributing.html

To assign Jira tickets to yourself, you need to be added to the list
of contributors. For that, you need to sign up to the Apache Jira
(https://issues.apache.org/jira/projects/KAFKA) and choose a Jira-ID
for yourself. Then you send an e-mail that contains the Jira-ID and a
request to be added to the list of contributors to this mailing list
(you can find examples in the list archives). Once a project committer
has added you to the list of contributors, you can assign tickets to
yourself.

Best,
Bruno




On Fri, Jun 7, 2019 at 11:18 AM Dulvin Witharane  wrote:
>
> Hi,
>
> I think for the first JIRA ticket you have to request to be added as the
> assignee in a new thread. Afterwards you can assign them yourself and keep
> working.
>
> On Fri, Jun 7, 2019 at 12:28 PM Carlos Manuel Duclos-Vergara <
> carlos.duc...@schibsted.com> wrote:
>
> > Hi,
> >
> > I'd like to start working on some of the newbie tasks (I got permission
> > from my employer to use up to 10% of my time on this project). I have
> > already found a couple of tasks that I'd like to work on, so I'd like to be
> > able to assign those tasks to me. What is the procedure?
> >
> > Regards
> >
> > --
> > Carlos Manuel Duclos Vergara
> > Backend Software Developer
> >
> --
> Witharane, DRH
> R & D Engineer
> Synopsys Lanka (Pvt) Ltd.
> Borella, Sri Lanka
> 0776746781
>
> Sent from my iPhone


Re: Contributor permissions

2019-06-07 Thread Dulvin Witharane
Hi,

I think for the first JIRA ticket you have to request to be added as the
assignee in a new thread. Afterwards you can assign them yourself and keep
working.

On Fri, Jun 7, 2019 at 12:28 PM Carlos Manuel Duclos-Vergara <
carlos.duc...@schibsted.com> wrote:

> Hi,
>
> I'd like to start working on some of the newbie tasks (I got permission
> from my employer to use up to 10% of my time on this project). I have
> already found a couple of tasks that I'd like to work on, so I'd like to be
> able to assign those tasks to me. What is the procedure?
>
> Regards
>
> --
> Carlos Manuel Duclos Vergara
> Backend Software Developer
>
-- 
Witharane, DRH
R & D Engineer
Synopsys Lanka (Pvt) Ltd.
Borella, Sri Lanka
0776746781

Sent from my iPhone


[jira] [Created] (KAFKA-8504) Suppressed do not emit with TimeWindows

2019-06-07 Thread Simone (JIRA)
Simone created KAFKA-8504:
-

 Summary: Suppressed do not emit with TimeWindows
 Key: KAFKA-8504
 URL: https://issues.apache.org/jira/browse/KAFKA-8504
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.2.1
Reporter: Simone


Hi, I'm playing a bit with Kafka Streams and the new suppress feature. I noticed 
that when using a {{TimeWindows}} without explicitly setting the grace period, 
{{suppress}} will not emit any messages when used with 
{{Suppressed.untilWindowCloses}}.

I looked a bit into the code, and from what I understood, with this configuration 
{{suppress}} should use the {{grace}} setting of the {{TimeWindows}}. But 
{{TimeWindows.of(Duration)}} defaults the grace to {{-1}}, and when grace equals 
-1 the method {{TimeWindows.gracePeriodMs()}} returns {{maintainMs() - size()}}, 
so I think the end of the window is not calculated properly.

Of course, it is possible to avoid this problem by forcing the {{grace}} to 0 
when creating the {{TimeWindows}}, but I think that should be the default 
behaviour, at least when it comes to the suppress feature.
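The fallback the report describes can be sketched in plain Java. This is a
simplified, stand-alone model, not Kafka's actual code; the class name
GracePeriodSketch and the 24-hour DEFAULT_MAINTAIN_MS constant are assumptions
for illustration (check TimeWindows in your Streams version for the real
values). It shows why an unset grace makes the window appear never to close:

```java
// Simplified model of the grace-period fallback described in the report.
// grace == -1 stands for "not explicitly set".
final class GracePeriodSketch {
    // Assumed default retention of 24 hours; illustrative only.
    static final long DEFAULT_MAINTAIN_MS = 24 * 60 * 60 * 1000L;

    static long gracePeriodMs(long sizeMs, long graceMs) {
        // With grace unset, the fallback maintainMs() - size() yields nearly
        // 24 hours, so suppress() waits that long before the window "closes".
        return graceMs != -1 ? graceMs : DEFAULT_MAINTAIN_MS - sizeMs;
    }

    public static void main(String[] args) {
        long oneMinute = 60_000L;
        // Unset grace: effective grace is almost a full day.
        System.out.println(gracePeriodMs(oneMinute, -1L)); // 86340000
        // Explicit grace of 0: window closes as soon as stream time passes its end.
        System.out.println(gracePeriodMs(oneMinute, 0L));  // 0
    }
}
```

So with one-minute windows and no explicit grace, suppress would only emit
roughly 24 hours of stream time later, which in a test looks like "never".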

I hope I have not misunderstood the code in my analysis, thank you :)

Simone





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-471: Expose RocksDB Metrics in Kafka Streams

2019-06-07 Thread Patrik Kleindl
Hi
Thanks Bruno for the KIP, this is a very good idea.

I have one question: are there metrics available for the memory consumption
of RocksDB?
Since RocksDB runs outside the JVM, we have run into issues where it was
using up all the remaining memory.
And with multiple Streams applications on the same machine, each with
several KTables and 10+ partitions per topic, the number of stores can get
out of hand pretty easily.
Or did I miss something obvious about how these can be monitored better?
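The scaling concern above can be made concrete with a back-of-envelope
estimate. This is not a substitute for real metrics (which is what the KIP
proposes); the per-store numbers assumed here (a 50 MB block cache and three
16 MB write buffers) are illustrative defaults only, and the class and method
names are invented for the sketch. Check your Streams version's RocksDB
settings, or a custom RocksDBConfigSetter if you use one, for the real values:

```java
// Rough estimate of RocksDB off-heap memory on one machine: each store
// instance (one per task/partition per KTable) holds its own block cache
// and write buffers, so usage grows linearly with the store count.
final class RocksDbOffHeapEstimate {
    static final long MB = 1024 * 1024L;

    static long perStoreBytes(long blockCacheBytes, int writeBuffers, long writeBufferBytes) {
        return blockCacheBytes + (long) writeBuffers * writeBufferBytes;
    }

    static long totalBytes(int stores, long perStoreBytes) {
        return (long) stores * perStoreBytes;
    }

    public static void main(String[] args) {
        // e.g. 5 KTables x 10 partitions -> 50 store instances on one host
        int stores = 5 * 10;
        long total = totalBytes(stores, perStoreBytes(50 * MB, 3, 16 * MB));
        System.out.println(total / MB + " MB off-heap"); // 4900 MB off-heap
    }
}
```

Even with modest per-store defaults, 50 stores land near 5 GB of off-heap
memory, which matches the "using all the other memory" symptom described.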

best regards

Patrik

On Fri, 17 May 2019 at 23:54, Bruno Cadonna  wrote:

> Hi all,
>
> this KIP describes the extension of the Kafka Streams' metrics to include
> RocksDB's internal statistics.
>
> Please have a look at it and let me know what you think. Since I am not a
> RocksDB expert, I am thankful for any additional pair of eyes that
> evaluates this KIP.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-471:+Expose+RocksDB+Metrics+in+Kafka+Streams
>
> Best regards,
> Bruno
>


RE: kafka connect stops consuming data when kafka broker goes down

2019-06-07 Thread Srinivas, Kaushik (Nokia - IN/Bangalore)
Hi Paul,

We tried patching Kafka Connect as well, referencing the PR you shared, but saw 
no improvement in our case.
I have updated more details in the blocker issue I created for this:
https://issues.apache.org/jira/browse/KAFKA-8485

The issue seems to come from the internal producers/consumers getting into a 
hung state after the broker restarts.

Any other leads/inputs would be useful.
This issue is always reproducible, with or without data streaming in progress.

Thanks,
Kaushik

-Original Message-
From: Paul Whalen  
Sent: Wednesday, June 05, 2019 5:17 PM
To: dev@kafka.apache.org
Cc: Basil Brito, Aldan (Nokia - IN/Bangalore) 
Subject: Re: kafka connect stops consuming data when kafka broker goes down

It’s not totally clear, but this may be 
https://issues.apache.org/jira/plugins/servlet/mobile#issue/KAFKA-7941

For which there is a fix that is very nearly approved: 
https://github.com/apache/kafka/pull/6283

Paul

> On Jun 5, 2019, at 1:26 AM, Srinivas, Kaushik (Nokia - IN/Bangalore) 
>  wrote:
> 
> Hello,
> Anyone has any information on this issue.
> Created a critical ticket for the same, since this is a major stability issue 
> for connect framework.
> https://issues.apache.org/jira/browse/KAFKA-8485?filter=-2
> 
> Thanks.
> Kaushik,
> NOKIA
> 
> From: Srinivas, Kaushik (Nokia - IN/Bangalore)
> Sent: Monday, June 03, 2019 5:22 PM
> To: dev@kafka.apache.org
> Cc: Basil Brito, Aldan (Nokia - IN/Bangalore) 
> 
> Subject: kafka connect stops consuming data when kafka broker goes 
> down
> 
> Hello kafka dev,
> 
> We are encountering an issue when Kafka Connect is running an HDFS sink 
> connector, pulling data from Kafka and writing to an HDFS location.
> While data is flowing in the pipeline from producer -> Kafka topic -> Kafka 
> Connect HDFS sink connector -> HDFS, if even one of the Kafka brokers goes 
> down, the Connect framework stops responding: it stops consuming records and 
> the REST API also becomes unresponsive.
> 
> Until the Connect framework is restarted, it does not pull data from Kafka 
> and the REST API remains inactive. Nothing shows up in the logs either.
> We checked the Kafka topics used by Connect; everything has been reassigned 
> to another broker and has a leader available.
> 
> Has anyone encountered this issue? What would be the expected behavior?
> 
> Thanks in advance
> Kaushik


Build failed in Jenkins: kafka-trunk-jdk8 #3707

2019-06-07 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Remove redundant semicolons in `KafkaApis` imports (#6889)

[wangguoz] MINOR: Fixed compiler warnings in LogManagerTest (#6897)

--
[...truncated 4.74 MB...]

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeft[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeft[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerOuter[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerOuter[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerLeft[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerLeft[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterOuter[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterOuter[caching enabled = false] PASSED

org.apache.kafka.streams.integration.MetricsIntegrationTest > 
testStreamMetricOfSessionStore STARTED

org.apache.kafka.streams.integration.MetricsIntegrationTest > 
testStreamMetricOfSessionStore PASSED

org.apache.kafka.streams.integration.MetricsIntegrationTest > testStreamMetric 
STARTED

org.apache.kafka.streams.integration.MetricsIntegrationTest > testStreamMetric 
PASSED

org.apache.kafka.streams.integration.MetricsIntegrationTest > 
testStreamMetricOfWindowStore STARTED

org.apache.kafka.streams.integration.MetricsIntegrationTest > 
testStreamMetricOfWindowStore PASSED


Contributor permissions

2019-06-07 Thread Carlos Manuel Duclos-Vergara
Hi,

I'd like to start working on some of the newbie tasks (I got permission
from my employer to use up to 10% of my time on this project). I have
already found a couple of tasks that I'd like to work on, so I'd like to be
able to assign those tasks to me. What is the procedure?

Regards

-- 
Carlos Manuel Duclos Vergara
Backend Software Developer


[jira] [Resolved] (KAFKA-8461) Flakey test UncleanLeaderElectionTest#testUncleanLeaderElectionDisabledByTopicOverride

2019-06-07 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-8461.
--
   Resolution: Fixed
Fix Version/s: 2.4.0

Issue resolved by pull request 6887
[https://github.com/apache/kafka/pull/6887]

> Flakey test 
> UncleanLeaderElectionTest#testUncleanLeaderElectionDisabledByTopicOverride
> --
>
> Key: KAFKA-8461
> URL: https://issues.apache.org/jira/browse/KAFKA-8461
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Boyang Chen
>Priority: Major
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/5168/consoleFull]
> kafka.integration.UncleanLeaderElectionTest > 
> testUncleanLeaderElectionDisabledByTopicOverride FAILED
>     org.scalatest.exceptions.TestFailedException: Timing out after 3 ms since 
> expected new leader 1 was not elected for partition 
> topic-9147891452427084986-0, leader is Some(-1)
>         at org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
>         at org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
>         at org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1389)
>         at org.scalatest.Assertions.fail(Assertions.scala:1091)
>         at org.scalatest.Assertions.fail$(Assertions.scala:1087)
>         at org.scalatest.Assertions$.fail(Assertions.scala:1389)
>         at kafka.utils.TestUtils$.$anonfun$waitUntilLeaderIsElectedOrChanged$8(TestUtils.scala:722)
>         at scala.Option.getOrElse(Option.scala:138)
>         at kafka.utils.TestUtils$.waitUntilLeaderIsElectedOrChanged(TestUtils.scala:712)
>         at kafka.integration.UncleanLeaderElectionTest.verifyUncleanLeaderElectionDisabled(UncleanLeaderElectionTest.scala:258)
>         at kafka.integration.UncleanLeaderElectionTest.testUncleanLeaderElectionDisabledByTopicOverride(UncleanLeaderElectionTest.scala:153)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8503) AdminClient should ignore retries config if a custom timeout is provided

2019-06-07 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-8503:
--

 Summary: AdminClient should ignore retries config if a custom 
timeout is provided
 Key: KAFKA-8503
 URL: https://issues.apache.org/jira/browse/KAFKA-8503
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson


The admin client takes a `retries` config similar to the producer. The default 
value is 5. Individual APIs also accept an optional timeout, which defaults 
to `request.timeout.ms`. The call will fail if either `retries` or the API 
timeout is exceeded. This is not very intuitive. I think a user would expect to 
wait if they provided a timeout and the operation cannot be completed. In 
general, timeouts are much easier for users to work with and reason about.

A couple options are either to ignore `retries` in this case or to increase the 
default value of `retries` to something large and not likely to be exceeded. I 
propose to do the first. Longer term, we could consider deprecating `retries` 
and avoiding the overloading of `request.timeout.ms` by providing a 
`default.api.timeout.ms` similar to the consumer.
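The two policies contrasted above can be sketched as predicates. This is an
illustrative stand-alone model, not AdminClient code; the class name
AdminCallPolicy, method names, and the example numbers are invented for the
sketch:

```java
// Sketch of the two failure policies described. The "current" behavior gives
// up when either bound (retries or timeout) is exhausted; the proposal treats
// a user-supplied timeout as the only bound and ignores `retries`.
final class AdminCallPolicy {
    static boolean failsCurrent(int attempts, int maxRetries, long elapsedMs, long timeoutMs) {
        return attempts > maxRetries || elapsedMs >= timeoutMs;
    }

    static boolean failsProposed(long elapsedMs, long timeoutMs) {
        return elapsedMs >= timeoutMs;
    }

    public static void main(String[] args) {
        // 6th attempt, 10s into a 60s timeout: the current policy aborts early
        // (retries=5 exhausted); the proposed policy keeps trying until 60s pass.
        System.out.println(failsCurrent(6, 5, 10_000L, 60_000L)); // true
        System.out.println(failsProposed(10_000L, 60_000L));      // false
    }
}
```

The surprise for users is exactly the first case: a generous timeout is
silently cut short by the retry bound.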



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8502) Flakey test AdminClientIntegrationTest#testElectUncleanLeadersForAllPartitions

2019-06-07 Thread Boyang Chen (JIRA)
Boyang Chen created KAFKA-8502:
--

 Summary: Flakey test 
AdminClientIntegrationTest#testElectUncleanLeadersForAllPartitions
 Key: KAFKA-8502
 URL: https://issues.apache.org/jira/browse/KAFKA-8502
 Project: Kafka
  Issue Type: Bug
Reporter: Boyang Chen


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/5355/consoleFull]

 
kafka.api.AdminClientIntegrationTest > 
testElectUncleanLeadersForAllPartitions FAILED
    java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: Aborted due to timeout.
        at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
        at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
        at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
        at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
        at kafka.api.AdminClientIntegrationTest.testElectUncleanLeadersForAllPartitions(AdminClientIntegrationTest.scala:1496)

    Caused by:
        org.apache.kafka.common.errors.TimeoutException: Aborted due to timeout.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)