Build failed in Jenkins: kafka-2.3-jdk8 #2

2019-05-20 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Work around OpenJDK 11 javadocs issue. (#6747)

--
[...truncated 2.89 MB...]

kafka.zk.KafkaZkClientTest > testLogDirGetters PASSED

kafka.zk.KafkaZkClientTest > testSetGetAndDeletePartitionReassignment STARTED

kafka.zk.KafkaZkClientTest > testSetGetAndDeletePartitionReassignment PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndVersion STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndVersion PASSED

kafka.zk.KafkaZkClientTest > testGetChildren STARTED

kafka.zk.KafkaZkClientTest > testGetChildren PASSED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset STARTED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset PASSED

kafka.zk.KafkaZkClientTest > testClusterIdMethods STARTED

kafka.zk.KafkaZkClientTest > testClusterIdMethods PASSED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr STARTED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr PASSED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testCreateRecursive STARTED

kafka.zk.KafkaZkClientTest > testCreateRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData STARTED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods PASSED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw STARTED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw PASSED

kafka.zk.KafkaZkClientTest > testAclManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testAclManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods STARTED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateLogDir STARTED

kafka.zk.KafkaZkClientTest > testPropagateLogDir PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndStat STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndStat PASSED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress STARTED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress PASSED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths STARTED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters PASSED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetLogConfigs STARTED

kafka.zk.KafkaZkClientTest > testGetLogConfigs PASSED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods STARTED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods PASSED

kafka.zk.KafkaZkClientTest > testAclMethods STARTED

kafka.zk.KafkaZkClientTest > testAclMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath STARTED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode PASSED

kafka.zk.KafkaZkClientTest > testDeletePath STARTED

kafka.zk.KafkaZkClientTest > testDeletePath PASSED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods STARTED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions STARTED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions PASSED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testRetryRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRetryRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath STARTED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck 
STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck 
PASSED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods STARTED


Build failed in Jenkins: kafka-trunk-jdk11 #554

2019-05-20 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Work around OpenJDK 11 javadocs issue. (#6747)

--
[...truncated 2.46 MB...]
org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinLeaderCatchUpFails PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testInconsistentConfigs STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testInconsistentConfigs PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignmentSingleTaskConnectors[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignmentSingleTaskConnectors[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupFollower[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupFollower[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testMetadata[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testMetadata[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment1[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment1[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment2[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment2[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testJoinLeaderCannotAssign[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testJoinLeaderCannotAssign[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testRejoinGroup[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testRejoinGroup[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupLeader[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupLeader[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignmentSingleTaskConnectors[1] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignmentSingleTaskConnectors[1] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupFollower[1] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupFollower[1] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testMetadata[1] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testMetadata[1] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment1[1] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment1[1] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment2[1] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment2[1] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testJoinLeaderCannotAssign[1] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testJoinLeaderCannotAssign[1] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testRejoinGroup[1] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testRejoinGroup[1] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupLeader[1] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupLeader[1] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorIncrementalTest > 
testMetadataWithExistingAssignmentButOlderProtocolSelection[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorIncrementalTest > 
testMetadataWithExistingAssignmentButOlderProtocolSelection[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorIncrementalTest > 
testMetadataWithExistingAssignment[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorIncrementalTest > 
testMetadataWithExistingAssignment[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorIncrementalTest > 
testTaskAssignmentWhenWorkerBounces[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorIncrementalTest > 
testTaskAssignmentWhenWorkerBounces[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorIncrementalTest > 
testTaskAssignmentWhenWorkerJoins[0] STARTED


Build failed in Jenkins: kafka-trunk-jdk8 #3661

2019-05-20 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Work around OpenJDK 11 javadocs issue. (#6747)

--
[...truncated 2.46 MB...]

org.apache.kafka.connect.header.ConnectHeadersTest > shouldAddDate PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldAddTime STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldAddTime PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldNotAddHeadersWithObjectValuesAndMismatchedSchema STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldNotAddHeadersWithObjectValuesAndMismatchedSchema PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldNotValidateMismatchedValuesWithBuiltInTypes STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldNotValidateMismatchedValuesWithBuiltInTypes PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldRemoveAllHeadersWithSameKey STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldRemoveAllHeadersWithSameKey PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldTransformAndRemoveHeaders STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldTransformAndRemoveHeaders PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldBeEquals STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldBeEquals PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldValidateBuildInTypes 
STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldValidateBuildInTypes 
PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldRemoveAllHeaders 
STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldRemoveAllHeaders 
PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldTransformHeadersWhenEmpty STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldTransformHeadersWhenEmpty PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldNotValidateNullValuesWithBuiltInTypes STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldNotValidateNullValuesWithBuiltInTypes PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldTransformHeaders 
STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldTransformHeaders 
PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldAddMultipleHeadersWithSameKeyAndRetainLatest STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldAddMultipleHeadersWithSameKeyAndRetainLatest PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldRetainLatestWhenEmpty STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldRetainLatestWhenEmpty PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldAddHeadersWithPrimitiveValues STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldAddHeadersWithPrimitiveValues PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldNotAddHeadersWithNullObjectValuesWithNonOptionalSchema STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldNotAddHeadersWithNullObjectValuesWithNonOptionalSchema PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldDuplicateAndAlwaysReturnEquivalentButDifferentObject STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldDuplicateAndAlwaysReturnEquivalentButDifferentObject PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldAddTimestamp STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldAddTimestamp PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldAddDecimal STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldAddDecimal PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldAddHeadersWithNullObjectValuesWithOptionalSchema STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldAddHeadersWithNullObjectValuesWithOptionalSchema PASSED

org.apache.kafka.connect.connector.ConnectorReconfigurationTest > 
testReconfigureStopException STARTED

org.apache.kafka.connect.connector.ConnectorReconfigurationTest > 
testReconfigureStopException PASSED

org.apache.kafka.connect.connector.ConnectorReconfigurationTest > 
testDefaultReconfigure STARTED

org.apache.kafka.connect.connector.ConnectorReconfigurationTest > 
testDefaultReconfigure PASSED

org.apache.kafka.connect.source.SourceRecordTest > 
shouldDuplicateRecordAndCloneHeaders STARTED

org.apache.kafka.connect.source.SourceRecordTest > 
shouldDuplicateRecordAndCloneHeaders PASSED

org.apache.kafka.connect.source.SourceRecordTest > 
shouldDuplicateRecordUsingNewHeaders STARTED

org.apache.kafka.connect.source.SourceRecordTest > 
shouldDuplicateRecordUsingNewHeaders PASSED

org.apache.kafka.connect.source.SourceRecordTest > shouldModifyRecordHeader 
STARTED

org.apache.kafka.connect.source.SourceRecordTest > shouldModifyRecordHeader 
PASSED


[DISCUSS] KIP-472: Add header to RecordContext

2019-05-20 Thread Richard Yu
Hello,

I wish to introduce a minor addition to RecordContext (a public-facing
API). This addition both provides the user with more information regarding
the processing state of the partition and helps resolve a bug which Kafka is
currently experiencing.
Here is the KIP Link:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-472%3A+%5BSTREAMS%5D+Add+partition+time+field+to+RecordContext

Cheers,
Richard Yu
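
For readers new to this API, here is a hypothetical sketch of where such a field would surface; partitionTime() is only the KIP's proposal, not a released method, while the rest is the existing 2.x Processor API:

{code:scala}
import org.apache.kafka.streams.processor.AbstractProcessor

// Sketch only: partitionTime() is the proposed addition, not a released API.
class AuditProcessor extends AbstractProcessor[String, String] {
  override def process(key: String, value: String): Unit = {
    val ctx = context()
    // Record-context metadata that is already exposed today:
    println(s"${ctx.topic()}-${ctx.partition()}@${ctx.offset()} ts=${ctx.timestamp()}")
    // The KIP would additionally expose the partition's stream time, e.g.:
    // val partitionTime = ctx.partitionTime() // proposed, hypothetical
  }
}
{code}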


[jira] [Created] (KAFKA-8400) Do not update follower replica state if the log read failed

2019-05-20 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-8400:
--

 Summary: Do not update follower replica state if the log read failed
 Key: KAFKA-8400
 URL: https://issues.apache.org/jira/browse/KAFKA-8400
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson


In {{ReplicaManager.fetchMessages}}, we have the following logic to read from 
the log and update follower state:
{code}
val result = readFromLocalLog(
  replicaId = replicaId,
  fetchOnlyFromLeader = fetchOnlyFromLeader,
  fetchIsolation = fetchIsolation,
  fetchMaxBytes = fetchMaxBytes,
  hardMaxBytesLimit = hardMaxBytesLimit,
  readPartitionInfo = fetchInfos,
  quota = quota)
if (isFromFollower) updateFollowerLogReadResults(replicaId, result)
else result
{code}
The call to {{readFromLocalLog}} could fail for many reasons, in which case we 
return a LogReadResult with an error set and all fields set to -1. The problem 
is that we do not check for the error when updating the follower state. As far 
as I can tell, this does not cause any correctness issues, but we're just 
asking for trouble. It would be better to check the error before proceeding to 
{{Partition.updateReplicaLogReadResult}}. 

Perhaps even better would be to have {{readFromLocalLog}} return something like 
{{Either[LogReadResult, Errors]}} so that we are forced to handle the error.
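
As an illustration of the {{Either}}-based suggestion, here is a minimal, self-contained sketch; the type and method names are simplified stand-ins, not Kafka's actual ReplicaManager internals:

{code:scala}
// Stand-in types only; not Kafka's real ReplicaManager internals.
object FetchSketch {
  case class LogReadResult(highWatermark: Long, logEndOffset: Long)

  // Left(error) on failure, so callers cannot ignore the error case.
  def readFromLocalLog(replicaId: Int): Either[String, LogReadResult] =
    if (replicaId < 0) Left("unknown replica") // stand-in failure path
    else Right(LogReadResult(highWatermark = 100L, logEndOffset = 120L))

  def updateFollowerLogReadResults(replicaId: Int, r: LogReadResult): Unit =
    println(s"updating follower $replicaId state to $r")

  def fetchMessages(replicaId: Int, isFromFollower: Boolean): Unit =
    readFromLocalLog(replicaId) match {
      case Right(result) if isFromFollower =>
        // Follower state is only touched when the read actually succeeded.
        updateFollowerLogReadResults(replicaId, result)
      case Right(_)    => () // consumer fetch: nothing to update
      case Left(error) => println(s"read failed: $error") // state untouched
    }
}
{code}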



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-2.3-jdk8 #1

2019-05-20 Thread Apache Jenkins Server
See 

--
[...truncated 2.60 MB...]
kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr STARTED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr PASSED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testCreateRecursive STARTED

kafka.zk.KafkaZkClientTest > testCreateRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData STARTED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods PASSED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw STARTED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw PASSED

kafka.zk.KafkaZkClientTest > testAclManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testAclManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods STARTED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateLogDir STARTED

kafka.zk.KafkaZkClientTest > testPropagateLogDir PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndStat STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndStat PASSED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress STARTED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress PASSED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths STARTED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters PASSED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetLogConfigs STARTED

kafka.zk.KafkaZkClientTest > testGetLogConfigs PASSED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods STARTED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods PASSED

kafka.zk.KafkaZkClientTest > testAclMethods STARTED

kafka.zk.KafkaZkClientTest > testAclMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath STARTED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode PASSED

kafka.zk.KafkaZkClientTest > testDeletePath STARTED

kafka.zk.KafkaZkClientTest > testDeletePath PASSED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods STARTED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions STARTED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions PASSED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testRetryRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRetryRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath STARTED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck 
STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck 
PASSED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods STARTED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges STARTED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges PASSED

kafka.zk.KafkaZkClientTest > testControllerEpochMethods STARTED

kafka.zk.KafkaZkClientTest > testControllerEpochMethods PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursive STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetTopicPartitionStates STARTED

kafka.zk.KafkaZkClientTest > testGetTopicPartitionStates PASSED

kafka.zk.KafkaZkClientTest > testCreateConfigChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateConfigChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testDelegationTokenMethods STARTED

kafka.zk.KafkaZkClientTest > testDelegationTokenMethods PASSED

kafka.common.ZkNodeChangeNotificationListenerTest > testProcessNotification 
STARTED

kafka.common.ZkNodeChangeNotificationListenerTest > testProcessNotification 
PASSED

kafka.common.ZkNodeChangeNotificationListenerTest > 

Build failed in Jenkins: kafka-trunk-jdk11 #553

2019-05-20 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-8265: Fix config name to match KIP-458. (#6755)

[github] MINOR: Bump version to 2.4.0-SNAPSHOT (#6774)

--
[...truncated 2.50 MB...]
org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testAccessors PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testPutConnectorConfig STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignmentSingleTaskConnectors[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignmentSingleTaskConnectors[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupFollower[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupFollower[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testMetadata[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testMetadata[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment1[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment1[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment2[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment2[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testJoinLeaderCannotAssign[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testJoinLeaderCannotAssign[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testRejoinGroup[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testRejoinGroup[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupLeader[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupLeader[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignmentSingleTaskConnectors[1] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignmentSingleTaskConnectors[1] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupFollower[1] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupFollower[1] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testMetadata[1] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testMetadata[1] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment1[1] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment1[1] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment2[1] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment2[1] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testJoinLeaderCannotAssign[1] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testJoinLeaderCannotAssign[1] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testRejoinGroup[1] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testRejoinGroup[1] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupLeader[1] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupLeader[1] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorIncrementalTest > 
testMetadataWithExistingAssignmentButOlderProtocolSelection[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorIncrementalTest > 
testMetadataWithExistingAssignmentButOlderProtocolSelection[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorIncrementalTest > 
testMetadataWithExistingAssignment[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorIncrementalTest > 
testMetadataWithExistingAssignment[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorIncrementalTest > 
testTaskAssignmentWhenWorkerBounces[0] STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorIncrementalTest > 
testTaskAssignmentWhenWorkerBounces[0] PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorIncrementalTest > 
testTaskAssignmentWhenWorkerJoins[0] STARTED

Build failed in Jenkins: kafka-trunk-jdk8 #3660

2019-05-20 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-8265: Fix config name to match KIP-458. (#6755)

[github] MINOR: Bump version to 2.4.0-SNAPSHOT (#6774)

--
[...truncated 2.45 MB...]
org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldNotDeleteCheckpointFileAfterLoaded PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldInitializeStateStores STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldInitializeStateStores PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldSkipNullKeysWhenRestoring STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldSkipNullKeysWhenRestoring PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldReturnInitializedStoreNames STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldReturnInitializedStoreNames PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCheckpointRestoredOffsetsToFile STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCheckpointRestoredOffsetsToFile PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldUnlockGlobalStateDirectoryOnClose STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldUnlockGlobalStateDirectoryOnClose PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCheckpointOffsets STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCheckpointOffsets PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldAttemptToCloseAllStoresEvenWhenSomeException STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldAttemptToCloseAllStoresEvenWhenSomeException PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentExceptionIfCallbackIsNull STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentExceptionIfCallbackIsNull PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldSkipGlobalInMemoryStoreOffsetsToFile STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldSkipGlobalInMemoryStoreOffsetsToFile PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldReadCheckpointOffsets STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldReadCheckpointOffsets PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCloseStateStores STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCloseStateStores PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentIfTryingToRegisterStoreThatIsNotGlobal STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentIfTryingToRegisterStoreThatIsNotGlobal PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldFlushStateStores STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldFlushStateStores PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldLockGlobalStateDirectory STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldLockGlobalStateDirectory PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldRestoreRecordsFromCheckpointToHighwatermark STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldRestoreRecordsFromCheckpointToHighwatermark PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldConvertValuesIfStoreImplementsTimestampedBytesStore STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldConvertValuesIfStoreImplementsTimestampedBytesStore PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowLockExceptionIfIOExceptionCaughtWhenTryingToLockStateDir STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowLockExceptionIfIOExceptionCaughtWhenTryingToLockStateDir PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentExceptionIfAttemptingToRegisterStoreTwice STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentExceptionIfAttemptingToRegisterStoreTwice PASSED


[jira] [Created] (KAFKA-8399) Add back `internal.leave.group.on.close` config for KStreams

2019-05-20 Thread Boyang Chen (JIRA)
Boyang Chen created KAFKA-8399:
--

 Summary: Add back `internal.leave.group.on.close` config for KStreams
 Key: KAFKA-8399
 URL: https://issues.apache.org/jira/browse/KAFKA-8399
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Boyang Chen
Assignee: Boyang Chen


The default rebalance behavior for KStreams has changed from not leaving the 
group on close to leaving the group. We should add the config back so that 
system tests pass, and to reduce the risk of this breaking other public use 
cases.

Reference: [https://github.com/apache/kafka/pull/6673]
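
For illustration, a sketch of how the flag would be supplied through a streams configuration; whether a given client release honors `internal.leave.group.on.close` is version-dependent, so treat the values below as illustrative only:

{code:scala}
import java.util.Properties

// Sketch only: restores the pre-change behavior described in this ticket.
val props = new Properties()
props.put("application.id", "example-app")       // illustrative value
props.put("bootstrap.servers", "localhost:9092") // illustrative value
// Do not send LeaveGroup on close, so a bounced instance can rejoin
// without forcing an immediate rebalance of the whole group.
props.put("internal.leave.group.on.close", "false")
{code}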



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8398) NPE when unmapping files after moving log directories using AlterReplicaLogDirs

2019-05-20 Thread Vikas Singh (JIRA)
Vikas Singh created KAFKA-8398:
--

 Summary: NPE when unmapping files after moving log directories using AlterReplicaLogDirs
 Key: KAFKA-8398
 URL: https://issues.apache.org/jira/browse/KAFKA-8398
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.2.0
Reporter: Vikas Singh
 Attachments: AlterReplicaLogDirs.txt

The NPE occurs after the AlterReplicaLogDirs command completes successfully, 
while unmapping the older regions. The relevant part of the log is in the 
attached log file. Here is the stacktrace (which is repeated for both index 
files):

 
{code:java}
[2019-05-20 14:08:13,999] ERROR Error unmapping index /tmp/kafka-logs/test-0.567a0d8ff88b45ab95794020d0b2e66f-delete/.index (kafka.log.OffsetIndex)
java.lang.NullPointerException
at org.apache.kafka.common.utils.MappedByteBuffers.unmap(MappedByteBuffers.java:73)
at kafka.log.AbstractIndex.forceUnmap(AbstractIndex.scala:318)
at kafka.log.AbstractIndex.safeForceUnmap(AbstractIndex.scala:308)
at kafka.log.AbstractIndex.$anonfun$closeHandler$1(AbstractIndex.scala:257)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
at kafka.log.AbstractIndex.closeHandler(AbstractIndex.scala:257)
at kafka.log.AbstractIndex.deleteIfExists(AbstractIndex.scala:226)
at kafka.log.LogSegment.$anonfun$deleteIfExists$6(LogSegment.scala:597)
at kafka.log.LogSegment.delete$1(LogSegment.scala:585)
at kafka.log.LogSegment.$anonfun$deleteIfExists$5(LogSegment.scala:597)
at kafka.utils.CoreUtils$.$anonfun$tryAll$1(CoreUtils.scala:115)
at kafka.utils.CoreUtils$.$anonfun$tryAll$1$adapted(CoreUtils.scala:114)
at scala.collection.immutable.List.foreach(List.scala:392)
at kafka.utils.CoreUtils$.tryAll(CoreUtils.scala:114)
at kafka.log.LogSegment.deleteIfExists(LogSegment.scala:599)
at kafka.log.Log.$anonfun$delete$3(Log.scala:1762)
at kafka.log.Log.$anonfun$delete$3$adapted(Log.scala:1762)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at kafka.log.Log.$anonfun$delete$2(Log.scala:1762)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at kafka.log.Log.maybeHandleIOException(Log.scala:2013)
at kafka.log.Log.delete(Log.scala:1759)
at kafka.log.LogManager.deleteLogs(LogManager.scala:761)
at kafka.log.LogManager.$anonfun$deleteLogs$6(LogManager.scala:775)
at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:63)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{code}
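
For illustration, the defensive check the stack trace suggests, sketched with stand-in signatures rather than Kafka's actual AbstractIndex code:

{code:scala}
import java.nio.MappedByteBuffer

// Sketch only: skip the unmap when the mapped-buffer reference is gone.
def safeForceUnmap(file: String, mmap: MappedByteBuffer,
                   unmap: MappedByteBuffer => Unit): Unit =
  if (mmap == null)
    println(s"index $file already unmapped; nothing to do")
  else
    try unmap(mmap)
    catch { case t: Throwable => println(s"Error unmapping index $file: $t") }
{code}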



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[DISCUSS] KIP-469: Pre-registration support for static members

2019-05-20 Thread Boyang Chen
Hey friends,

I would like to start a discussion thread for a low-hanging-fruit KIP 
following up KIP-345 here:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-469%3A+Pre-registration+support+for+static+members

The goal is to pre-allocate static members to reduce unnecessary rebalances 
during cluster scale up.

Let me know your thoughts, thank you!


[jira] [Created] (KAFKA-8397) Add pre-registration feature for static membership

2019-05-20 Thread Boyang Chen (JIRA)
Boyang Chen created KAFKA-8397:
--

 Summary: Add pre-registration feature for static membership
 Key: KAFKA-8397
 URL: https://issues.apache.org/jira/browse/KAFKA-8397
 Project: Kafka
  Issue Type: Improvement
Reporter: Boyang Chen
Assignee: Boyang Chen


After 
[KIP-345|https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances],
 we want to further improve the user experience around reducing rebalances by 
providing admin tooling to batch-send join group requests for multiple 
consumers when we know their `group.instance.id`s beforehand.
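
For reference, this is how a static member is configured today under KIP-345; the pre-registration tooling described above would build on these ids. Broker and group values below are illustrative:

{code:scala}
import java.util.Properties
import org.apache.kafka.clients.consumer.KafkaConsumer

// Static membership per KIP-345; connection values are illustrative.
def staticConsumer(instanceId: String): KafkaConsumer[String, String] = {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092")
  props.put("group.id", "example-group")
  props.put("group.instance.id", instanceId) // the fixed static identity
  props.put("key.deserializer",
    "org.apache.kafka.common.serialization.StringDeserializer")
  props.put("value.deserializer",
    "org.apache.kafka.common.serialization.StringDeserializer")
  new KafkaConsumer[String, String](props)
}
{code}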



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


New release branch 2.3

2019-05-20 Thread Colin McCabe
Hi all,

At this point, 2.3 has hit feature freeze. From here on, we expect bugfixes 
and stabilization efforts to go into 2.3, but not new features. Several things 
slipped from this release. Just off the top of my head, the list includes 
KIP-359: Verify epoch in produce requests, KIP-382: MirrorMaker 2.0, and 
KIP-412: Extend Admin API to support dynamic application log levels. Hopefully 
these features will show up in a future release.

As promised, we now have a release branch for the 2.3 release, and trunk has 
been bumped to 2.4-SNAPSHOT. 
 From this point, most changes should go to trunk.  Blockers will be 
double-committed.  Please discuss with your reviewer whether your PR should go 
to trunk or to trunk+release so they can merge accordingly.

regards,
Colin


Build failed in Jenkins: kafka-trunk-jdk8 #3659

2019-05-20 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8316; Remove deprecated usage of Slf4jRequestLog,

--
[...truncated 2.45 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED


Build failed in Jenkins: kafka-trunk-jdk11 #552

2019-05-20 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8316; Remove deprecated usage of Slf4jRequestLog,

--
[...truncated 2.45 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

[VOTE] KIP-466: Add support for List serialization and deserialization

2019-05-20 Thread Development
Hello,

I want to start a vote for KIP-466 
“Add support for List serialization and deserialization”. 
The implementation can be found as a PR 
. 
In order to utilize the list serde, a user needs to create it in the following 
nested fashion: Serdes.ListSerde(Serdes.someSerde());
Because of this difference, it also requires separate test cases.

Thank you all for your input and support.

Best,
Daniyar Yeralin
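
As a self-contained sketch of what a nested list serde does (the inner serde encodes elements, the outer one adds length framing), assuming a fixed-width Int inner serde; this illustrates the idea only and is not the KIP's implementation:

{code:scala}
import java.nio.ByteBuffer
import scala.collection.mutable.ListBuffer

// Illustration only: length-prefixed list encoding over Int elements.
object ListSerdeSketch {
  def serialize(xs: List[Int]): Array[Byte] = {
    val buf = ByteBuffer.allocate(4 + 4 * xs.size)
    buf.putInt(xs.size)            // element count first...
    xs.foreach(x => buf.putInt(x)) // ...then each element's encoding
    buf.array()
  }

  def deserialize(bytes: Array[Byte]): List[Int] = {
    val buf = ByteBuffer.wrap(bytes)
    val n   = buf.getInt
    val out = ListBuffer.empty[Int]
    for (_ <- 0 until n) out += buf.getInt
    out.toList
  }
}

// Round trip: deserialize(serialize(List(1, 2, 3))) == List(1, 2, 3)
{code}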

[jira] [Resolved] (KAFKA-5476) Implement a system test that creates network partitions

2019-05-20 Thread Colin P. McCabe (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe resolved KAFKA-5476.

Resolution: Duplicate

> Implement a system test that creates network partitions
> ---
>
> Key: KAFKA-5476
> URL: https://issues.apache.org/jira/browse/KAFKA-5476
> Project: Kafka
>  Issue Type: Test
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Major
>
> Implement a system test that creates network partitions



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Contributor

2019-05-20 Thread 15995451321
Hi,

 

I want to contribute to Apache Kafka.

Would you please give me the contributor permission?

My JIRA ID is shark.chen.



v2 ProduceResponse documentation wrong?

2019-05-20 Thread Graeme Jenkinson
Hi,

on parsing a v2 ProduceResponse from Kafka, I am seeing a discrepancy with 
the documentation. Specifically, the base_offset and log_append_time fields 
appear to be in the opposite positions to those documented in the protocol 
guide: https://kafka.apache.org/protocol#The_Messages_Produce 


It's possible I've messed something up in the tcpdump and trimming of the Kafka 
responses, but I don't think so. Is anyone familiar enough with the protocol to 
confirm/deny?

Kind regards,

Graeme

Build failed in Jenkins: kafka-trunk-jdk8 #3658

2019-05-20 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Add log when the consumer does not send an offset commit due to

[github] KAFKA-8381; Disable hostname validation when verifying inter-broker SSL

--
[...truncated 2.45 MB...]
org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldNotDeleteCheckpointFileAfterLoaded PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldInitializeStateStores STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldInitializeStateStores PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldSkipNullKeysWhenRestoring STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldSkipNullKeysWhenRestoring PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldReturnInitializedStoreNames STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldReturnInitializedStoreNames PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCheckpointRestoredOffsetsToFile STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCheckpointRestoredOffsetsToFile PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldUnlockGlobalStateDirectoryOnClose STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldUnlockGlobalStateDirectoryOnClose PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCheckpointOffsets STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCheckpointOffsets PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldAttemptToCloseAllStoresEvenWhenSomeException STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldAttemptToCloseAllStoresEvenWhenSomeException PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentExceptionIfCallbackIsNull STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentExceptionIfCallbackIsNull PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldSkipGlobalInMemoryStoreOffsetsToFile STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldSkipGlobalInMemoryStoreOffsetsToFile PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldReadCheckpointOffsets STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldReadCheckpointOffsets PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCloseStateStores STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCloseStateStores PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentIfTryingToRegisterStoreThatIsNotGlobal STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentIfTryingToRegisterStoreThatIsNotGlobal PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldFlushStateStores STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldFlushStateStores PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldLockGlobalStateDirectory STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldLockGlobalStateDirectory PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldRestoreRecordsFromCheckpointToHighwatermark STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldRestoreRecordsFromCheckpointToHighwatermark PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldConvertValuesIfStoreImplementsTimestampedBytesStore STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldConvertValuesIfStoreImplementsTimestampedBytesStore PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowLockExceptionIfIOExceptionCaughtWhenTryingToLockStateDir STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowLockExceptionIfIOExceptionCaughtWhenTryingToLockStateDir PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentExceptionIfAttemptingToRegisterStoreTwice STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentExceptionIfAttemptingToRegisterStoreTwice PASSED


Re: Contributor

2019-05-20 Thread Bill Bejeck
Hi,

Thanks for your interest in contributing to Apache Kafka.

You should be all set now.

Thanks,
Bill

On Mon, May 20, 2019 at 2:31 PM  wrote:

> Hi,
>
>
>
> I want to contribute to Apache Kafka.
>
> Would you please give me the contributor permission?
>
> My JIRA ID is shark.chen.
>
>
>
>


Build failed in Jenkins: kafka-trunk-jdk11 #551

2019-05-20 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Add log when the consumer does not send an offset commit due to

[github] KAFKA-8381; Disable hostname validation when verifying inter-broker SSL

--
[...truncated 2.46 MB...]
org.apache.kafka.connect.data.ValuesTest > 
shouldConvertMapWithStringKeysAndIntegerValues PASSED

org.apache.kafka.connect.data.ValuesTest > shouldConvertDateValues STARTED

org.apache.kafka.connect.data.ValuesTest > shouldConvertDateValues PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToParseInvalidBooleanValueString STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToParseInvalidBooleanValueString PASSED

org.apache.kafka.connect.data.ValuesTest > shouldConvertTimestampValues STARTED

org.apache.kafka.connect.data.ValuesTest > shouldConvertTimestampValues PASSED

org.apache.kafka.connect.data.ValuesTest > shouldConvertNullValue STARTED

org.apache.kafka.connect.data.ValuesTest > shouldConvertNullValue PASSED

org.apache.kafka.connect.data.ValuesTest > shouldConvertTimeValues STARTED

org.apache.kafka.connect.data.ValuesTest > shouldConvertTimeValues PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToParseStringOfMalformedMap STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToParseStringOfMalformedMap PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringOfMapWithShortValuesWithoutWhitespaceAsMap STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringOfMapWithShortValuesWithoutWhitespaceAsMap PASSED

org.apache.kafka.connect.data.ValuesTest > shouldConvertListWithStringValues 
STARTED

org.apache.kafka.connect.data.ValuesTest > shouldConvertListWithStringValues 
PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringsWithEscapedDelimiters STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringsWithEscapedDelimiters PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldConvertStringOfListWithOnlyNumericElementTypesIntoListOfLargestNumericType
 STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldConvertStringOfListWithOnlyNumericElementTypesIntoListOfLargestNumericType
 PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringOfMapWithIntValuesWithWhitespaceAsMap STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringOfMapWithIntValuesWithWhitespaceAsMap PASSED

org.apache.kafka.connect.data.ValuesTest > shouldConvertListWithIntegerValues 
STARTED

org.apache.kafka.connect.data.ValuesTest > shouldConvertListWithIntegerValues 
PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldConvertStringOfListWithMixedElementTypesIntoListWithDifferentElementTypes 
STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldConvertStringOfListWithMixedElementTypesIntoListWithDifferentElementTypes 
PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldEscapeStringsWithEmbeddedQuotesAndBackslashes STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldEscapeStringsWithEmbeddedQuotesAndBackslashes PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringListWithMultipleElementTypesAndReturnListWithNoSchema STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringListWithMultipleElementTypesAndReturnListWithNoSchema PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToConvertToListFromStringWithNonCommonElementTypeAndBlankElement 
STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToConvertToListFromStringWithNonCommonElementTypeAndBlankElement 
PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringsWithMultipleDelimiters STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringsWithMultipleDelimiters PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > testStructEquality STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > testStructEquality PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchMapValue STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchMapValue PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchStructWrongSchema STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchStructWrongSchema PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchMapSomeValues STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchMapSomeValues PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchBoolean STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchBoolean PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > testFieldsOnStructSchema 
STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > testFieldsOnStructSchema 
PASSED

Contributor

2019-05-20 Thread chendehe22
Hi,

 

I want to contribute to Apache Kafka.

Would you please give me the contributor permission?

My JIRA ID is shark.chen.

 



[jira] [Resolved] (KAFKA-7921) Instable KafkaStreamsTest

2019-05-20 Thread Colin P. McCabe (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe resolved KAFKA-7921.

Resolution: Fixed

Let's re-open this if we see another failure.

> Instable KafkaStreamsTest
> -
>
> Key: KAFKA-7921
> URL: https://issues.apache.org/jira/browse/KAFKA-7921
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> {{KafkaStreamsTest}} failed multiple times, eg,
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Streams never started.
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
> at 
> org.apache.kafka.streams.KafkaStreamsTest.shouldThrowOnCleanupWhileRunning(KafkaStreamsTest.java:556){quote}
> or
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Streams never started.
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
> at 
> org.apache.kafka.streams.KafkaStreamsTest.testStateThreadClose(KafkaStreamsTest.java:255){quote}
>  
> The preserved logs are as follows:
> {quote}[2019-02-12 07:02:17,198] INFO Kafka version: 2.3.0-SNAPSHOT 
> (org.apache.kafka.common.utils.AppInfoParser:109)
> [2019-02-12 07:02:17,198] INFO Kafka commitId: 08036fa4b1e5b822 
> (org.apache.kafka.common.utils.AppInfoParser:110)
> [2019-02-12 07:02:17,199] INFO stream-client [clientId] State transition from 
> CREATED to REBALANCING (org.apache.kafka.streams.KafkaStreams:263)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> Starting (org.apache.kafka.streams.processor.internals.StreamThread:767)
> [2019-02-12 07:02:17,200] INFO stream-client [clientId] State transition from 
> REBALANCING to PENDING_SHUTDOWN (org.apache.kafka.streams.KafkaStreams:263)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-239] 
> Starting (org.apache.kafka.streams.processor.internals.StreamThread:767)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> State transition from CREATED to STARTING 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-239] 
> State transition from CREATED to STARTING 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> Informed to shut down 
> (org.apache.kafka.streams.processor.internals.StreamThread:1192)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-238] 
> State transition from STARTING to PENDING_SHUTDOWN 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-239] 
> Informed to shut down 
> (org.apache.kafka.streams.processor.internals.StreamThread:1192)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-239] 
> State transition from STARTING to PENDING_SHUTDOWN 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,205] INFO Cluster ID: J8uJhiTKQx-Y_i9LzT0iLg 
> (org.apache.kafka.clients.Metadata:365)
> [2019-02-12 07:02:17,205] INFO Cluster ID: J8uJhiTKQx-Y_i9LzT0iLg 
> (org.apache.kafka.clients.Metadata:365)
> [2019-02-12 07:02:17,205] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] Discovered group 
> coordinator localhost:36122 (id: 2147483647 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
> [2019-02-12 07:02:17,205] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] Discovered group 
> coordinator localhost:36122 (id: 2147483647 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] Revoking 
> previously assigned partitions [] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:458)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] Revoking 
> previously assigned partitions [] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:458)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] (Re-)joining 
> group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] (Re-)joining 
> group 
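
For context, TestUtils.waitForCondition in the failures above polls a
condition until it holds or the given timeout (15000 ms here) expires. A
minimal sketch of that polling pattern, as an illustrative re-implementation
rather than Kafka's actual TestUtils source:

    import java.util.function.BooleanSupplier;

    public final class WaitUtil {
        // Poll `condition` every 100 ms until it returns true or `maxWaitMs`
        // elapses; on timeout, fail with a message like the one quoted above.
        public static void waitForCondition(final BooleanSupplier condition,
                                            final long maxWaitMs,
                                            final String details)
                throws InterruptedException {
            final long deadline = System.currentTimeMillis() + maxWaitMs;
            while (System.currentTimeMillis() < deadline) {
                if (condition.getAsBoolean()) {
                    return;
                }
                Thread.sleep(100);
            }
            throw new AssertionError(
                "Condition not met within timeout " + maxWaitMs + ". " + details);
        }
    }

In the preserved logs, the client jumps from REBALANCING straight to
PENDING_SHUTDOWN before the threads finish starting, so a "Streams started"
condition would stay false until the timeout fires.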

Build failed in Jenkins: kafka-2.1-jdk8 #197

2019-05-20 Thread Apache Jenkins Server
See 


Changes:

[bill] KAFKA-8290: Close producer for zombie task (#6636)

--
[...truncated 972.89 KB...]
kafka.coordinator.group.GroupCoordinatorTest > 
testFetchOffsetNotCoordinatorForGroup STARTED

kafka.coordinator.group.GroupCoordinatorTest > 
testFetchOffsetNotCoordinatorForGroup PASSED

kafka.coordinator.group.GroupCoordinatorTest > 
testLeaveGroupUnknownConsumerExistingGroup STARTED

kafka.coordinator.group.GroupCoordinatorTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.group.GroupCoordinatorTest > 
testJoinGroupUnknownConsumerNewGroup STARTED

kafka.coordinator.group.GroupCoordinatorTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.group.GroupCoordinatorTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance STARTED

kafka.coordinator.group.GroupCoordinatorTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED

kafka.coordinator.group.GroupCoordinatorTest > testValidJoinGroup STARTED

kafka.coordinator.group.GroupCoordinatorTest > testValidJoinGroup PASSED

kafka.coordinator.group.GroupCoordinatorTest > 
shouldDelayRebalanceUptoRebalanceTimeout STARTED

kafka.coordinator.group.GroupCoordinatorTest > 
shouldDelayRebalanceUptoRebalanceTimeout PASSED

kafka.coordinator.group.GroupCoordinatorTest > testFetchOffsets STARTED

kafka.coordinator.group.GroupCoordinatorTest > testFetchOffsets PASSED

kafka.coordinator.group.GroupCoordinatorTest > 
testSessionTimeoutDuringRebalance STARTED

kafka.coordinator.group.GroupCoordinatorTest > 
testSessionTimeoutDuringRebalance PASSED

kafka.coordinator.group.GroupCoordinatorTest > testNewMemberJoinExpiration 
STARTED

kafka.coordinator.group.GroupCoordinatorTest > testNewMemberJoinExpiration 
PASSED

kafka.coordinator.group.GroupCoordinatorTest > testFetchTxnOffsetsWithAbort 
STARTED

kafka.coordinator.group.GroupCoordinatorTest > testFetchTxnOffsetsWithAbort 
PASSED

kafka.coordinator.group.GroupCoordinatorTest > testSyncGroupLeaderAfterFollower 
STARTED

kafka.coordinator.group.GroupCoordinatorTest > testSyncGroupLeaderAfterFollower 
PASSED

kafka.coordinator.group.GroupCoordinatorTest > testSyncGroupFromUnknownMember 
STARTED

kafka.coordinator.group.GroupCoordinatorTest > testSyncGroupFromUnknownMember 
PASSED

kafka.coordinator.group.GroupCoordinatorTest > testValidLeaveGroup STARTED

kafka.coordinator.group.GroupCoordinatorTest > testValidLeaveGroup PASSED

kafka.coordinator.group.GroupCoordinatorTest > testDescribeGroupInactiveGroup 
STARTED

kafka.coordinator.group.GroupCoordinatorTest > testDescribeGroupInactiveGroup 
PASSED

kafka.coordinator.group.GroupCoordinatorTest > 
testFetchTxnOffsetsIgnoreSpuriousCommit STARTED

kafka.coordinator.group.GroupCoordinatorTest > 
testFetchTxnOffsetsIgnoreSpuriousCommit PASSED

kafka.coordinator.group.GroupCoordinatorTest > testSyncGroupNotCoordinator 
STARTED

kafka.coordinator.group.GroupCoordinatorTest > testSyncGroupNotCoordinator 
PASSED

kafka.coordinator.group.GroupCoordinatorTest > testBasicFetchTxnOffsets STARTED

kafka.coordinator.group.GroupCoordinatorTest > testBasicFetchTxnOffsets PASSED

kafka.coordinator.group.GroupCoordinatorTest > 
shouldResetRebalanceDelayWhenNewMemberJoinsGroupInInitialRebalance STARTED

kafka.coordinator.group.GroupCoordinatorTest > 
shouldResetRebalanceDelayWhenNewMemberJoinsGroupInInitialRebalance PASSED

kafka.coordinator.group.GroupCoordinatorTest > 
testHeartbeatUnknownConsumerExistingGroup STARTED

kafka.coordinator.group.GroupCoordinatorTest > 
testHeartbeatUnknownConsumerExistingGroup PASSED

kafka.coordinator.group.GroupCoordinatorTest > testValidHeartbeat STARTED

kafka.coordinator.group.GroupCoordinatorTest > testValidHeartbeat PASSED

kafka.coordinator.group.GroupCoordinatorTest > 
testRequestHandlingWhileLoadingInProgress STARTED

kafka.coordinator.group.GroupCoordinatorTest > 
testRequestHandlingWhileLoadingInProgress PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED


Re: [DISCUSS] KIP-471: Expose RocksDB Metrics in Kafka Streams

2019-05-20 Thread John Roesler
Hi Bruno,

Looks really good overall. This is going to be an awesome addition.

My only thought was that we have "bytes-flushed-(rate|total) and
flush-time-(avg|min|max)" metrics, and the description states that
these are specifically for Memtable flush operations. What do you
think about calling it "memtable-bytes-flushed... (etc)"? On one hand,
I could see this being redundant, since that's the only thing that
gets "flushed" inside of Rocks. But on the other, we have an
independent "flush" operation in streams, which might be confusing.
Plus, it might help people who are looking at the full "menu" of
hundreds of metrics. They can't read and remember every description
while trying to understand the full list of metrics, so going for
maximum self-documentation in the name seems nice.

But that's a minor thought. Modulo the others' comments, this looks good to me.

Thanks,
-John

On Mon, May 20, 2019 at 11:07 AM Bill Bejeck  wrote:
>
> Hi Bruno,
>
> Thanks for the KIP, this will be a useful addition.
>
> Overall the KIP looks good to me, and I have two minor comments.
>
> 1. For the tags, I'm wondering if rocksdb-state-id should be
> rocksdb-store-id
> instead?
>
> 2. With the compaction metrics, would it be possible to add total
> compactions and an average number of compactions?  I've taken a look at the
> available RocksDB metrics, and I'm not sure.  But users can control how
> many L0 files it takes to trigger compaction, so if it is possible it may
> be useful.
>
> Thanks,
> Bill
>
>
> On Mon, May 20, 2019 at 9:15 AM Bruno Cadonna  wrote:
>
> > Hi Sophie,
> >
> > Thank you for your comments.
> >
> > It's a good idea to supplement the metrics with configuration option
> > to change the metrics. I also had some thoughts about it. However, I
> > think I need some experimentation to get this right.
> >
> > I added the block cache hit rates for index and filter blocks to the
> > KIP. As far as I understood, they should stay at zero, if users do not
> > configure RocksDB to include index and filter blocks into the block
> > cache. Did you also understand this similarly? I guess also in this
> > case some experimentation would be good to be sure.
> >
> > Best,
> > Bruno
> >
> >
> > On Sat, May 18, 2019 at 2:29 AM Sophie Blee-Goldman 
> > wrote:
> > >
> > > Actually I wonder if it might be useful to users to be able to break up
> > the
> > > cache hit stats by type? Some people may choose to store index and filter
> > > blocks alongside data blocks, and it would probably be very helpful for
> > > them to know who is making more effective use of the cache in order to
> > tune
> > > how much of it is allocated to each. I'm not sure how common this really
> > is
> > > but I think it would be invaluable to those who do. RocksDB performance
> > can
> > > be quite opaque..
> > >
> > > Cheers,
> > > Sophie
> > >
> > > On Fri, May 17, 2019 at 5:01 PM Sophie Blee-Goldman  > >
> > > wrote:
> > >
> > > > Hey Bruno!
> > > >
> > > > This all looks pretty good to me, but one suggestion I have is to
> > > > supplement each of the metrics with some info on how the user can
> > control
> > > > them. In other words, which options could/should they set in
> > > > RocksDBConfigSetter should they discover a particular bottleneck?
> > > >
> > > > I don't think this necessarily needs to go into the KIP, but I do
> > think it
> > > > should be included in the docs somewhere (happy to help build up the
> > list
> > > > of associated options when the time comes)
> > > >
> > > > Thanks!
> > > > Sophie
> > > >
> > > > On Fri, May 17, 2019 at 2:54 PM Bruno Cadonna 
> > wrote:
> > > >
> > > >> Hi all,
> > > >>
> > > >> this KIP describes the extension of the Kafka Streams' metrics to
> > include
> > > >> RocksDB's internal statistics.
> > > >>
> > > >> Please have a look at it and let me know what you think. Since I am
> > not a
> > > >> RocksDB expert, I am thankful for any additional pair of eyes that
> > > >> evaluates this KIP.
> > > >>
> > > >>
> > > >>
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-471:+Expose+RocksDB+Metrics+in+Kafka+Streams
> > > >>
> > > >> Best regards,
> > > >> Bruno
> > > >>
> > > >
> >
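
For readers unfamiliar with the RocksDBConfigSetter hook mentioned in this
thread: it is the interface Kafka Streams users implement to tune RocksDB per
state store. A brief sketch of the kind of tuning Sophie and Bruno refer to
(the specific option values are illustrative assumptions, not recommendations):

    import java.util.Map;
    import org.apache.kafka.streams.state.RocksDBConfigSetter;
    import org.rocksdb.BlockBasedTableConfig;
    import org.rocksdb.Options;

    public class TunedRocksDBConfig implements RocksDBConfigSetter {
        @Override
        public void setConfig(final String storeName, final Options options,
                              final Map<String, Object> configs) {
            final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
            // Put index and filter blocks into the block cache, so the
            // index/filter hit-rate metrics discussed above become non-zero.
            tableConfig.setCacheIndexAndFilterBlocks(true);
            options.setTableFormatConfig(tableConfig);
            // The number of L0 files that triggers a compaction, which Bill
            // notes is user-controllable, is set here.
            options.setLevel0FileNumCompactionTrigger(8);
        }
    }

The class is registered through the rocksdb.config.setter property of
StreamsConfig.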


[jira] [Resolved] (KAFKA-7992) Add a server start time metric

2019-05-20 Thread Colin P. McCabe (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe resolved KAFKA-7992.

   Resolution: Fixed
Fix Version/s: 2.3.0

> Add a server start time metric
> --
>
> Key: KAFKA-7992
> URL: https://issues.apache.org/jira/browse/KAFKA-7992
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Stanislav Kozlovski
>Assignee: Stanislav Kozlovski
>Priority: Minor
>  Labels: needs-kip
> Fix For: 2.3.0
>
>
> KIP: KIP-436
> As with all software systems, observability into their health is critical.
> With many deployment platforms (be they custom-built or open-source), tasks 
> like restarting a misbehaving server in a cluster are completely automated. 
> With Kafka, monitoring systems have no definitive source of truth to gauge 
> when a server/client has been started. They are left to either use arbitrary 
> Kafka-specific metrics as a heuristic or the JVM RuntimeMXBean's StartTime, 
> which is not exactly indicative of when the application itself started.
> It would be useful to have a metric exposing when the Kafka server has 
> started.
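
For illustration, the RuntimeMXBean start time mentioned above is read via
standard JMX. A minimal sketch contrasting it with an application-recorded
start time (the gauge registration and the metric name chosen by KIP-436 are
not shown here and are left out deliberately):

    import java.lang.management.ManagementFactory;

    public class StartTimeProbe {
        public static void main(final String[] args) {
            // JVM process start time: the heuristic monitoring systems
            // currently fall back on.
            final long jvmStartMs =
                ManagementFactory.getRuntimeMXBean().getStartTime();
            // An application-level metric would instead capture the moment
            // the server finishes its own startup sequence, e.g.:
            final long serverStartMs = System.currentTimeMillis();
            System.out.println("jvm=" + jvmStartMs + ", server=" + serverStartMs);
        }
    }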





[jira] [Resolved] (KAFKA-8316) Remove deprecated usage of Slf4jRequestLog, SslContextFactory

2019-05-20 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8316.

Resolution: Fixed

> Remove deprecated usage of Slf4jRequestLog, SslContextFactory
> -
>
> Key: KAFKA-8316
> URL: https://issues.apache.org/jira/browse/KAFKA-8316
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Lee Dongjin
>Priority: Major
>  Labels: newbie
>
> Jetty recently deprecated a few classes we use. The following commit 
> suppresses the deprecation warnings:
> https://github.com/apache/kafka/commit/e66bc6255b2ee42481b54b7fd1d256b9e4ff5741
> We should remove the suppressions and use the suggested alternatives.
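
In Jetty 9.4 terms, the suggested alternatives are CustomRequestLog with an
Slf4jRequestLogWriter in place of Slf4jRequestLog, and the explicit
Server/Client subclasses of SslContextFactory. A hedged sketch based on the
Jetty 9.4 API (not on Kafka's eventual patch; the keystore path is a
placeholder):

    import org.eclipse.jetty.server.CustomRequestLog;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.Slf4jRequestLogWriter;
    import org.eclipse.jetty.util.ssl.SslContextFactory;

    public class JettyAlternativesSketch {
        public static void main(final String[] args) {
            final Server server = new Server();
            // Replaces the deprecated Slf4jRequestLog.
            server.setRequestLog(new CustomRequestLog(
                new Slf4jRequestLogWriter(),
                CustomRequestLog.EXTENDED_NCSA_FORMAT));
            // Replaces direct instantiation of the deprecated SslContextFactory.
            final SslContextFactory.Server sslFactory = new SslContextFactory.Server();
            sslFactory.setKeyStorePath("/path/to/keystore.jks");
        }
    }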





[jira] [Resolved] (KAFKA-7358) Alternative Partitioner to Support "Always Round-Robin" Selection

2019-05-20 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-7358.

Resolution: Duplicate

> Alternative Partitioner to Support "Always Round-Robin" Selection
> -
>
> Key: KAFKA-7358
> URL: https://issues.apache.org/jira/browse/KAFKA-7358
> Project: Kafka
>  Issue Type: Wish
>  Components: clients
>Reporter: M. Manna
>Assignee: M. Manna
>Priority: Minor
>
> In my organisation, we have been using Kafka as the basic publish-subscribe 
> messaging system provider. Our goal is to send event-based (secure, 
> encrypted) SQL messages reliably, and process them accordingly. For us, the 
> message keys represent some metadata which we use to either ignore messages 
> (if a loopback to the sender) or log some information. We have the following 
> use case for messaging:
> 1) A database transaction event takes place.
> 2) The event is captured and messaged across 10 data centres all around the 
> world.
> 3) A group of consumers (one per data centre, each with a unique 
> consumer-group ID) will process messages from their respective partitions, 
> 1 consumer per partition.
> Under the circumstances, we only need a guarantee that the same message 
> won't be sent to multiple partitions. In other words, 1 partition will 
> +never+ be sought by multiple consumers.
> Using DefaultPartitioner, we can achieve this only with NULL keys. But since 
> we need keys for metadata, we cannot maintain "Round-robin" selection of 
> partitions, because a key hash will determine which partition to choose. We 
> need to have round-robin style selection regardless of key type (NULL or 
> not-NULL).
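
A minimal sketch of the kind of "always round-robin" partitioner the ticket
asks for, rotating over partitions regardless of the key (an illustrative
example against the public Partitioner interface, not the implementation that
eventually shipped):

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.atomic.AtomicInteger;
    import org.apache.kafka.clients.producer.Partitioner;
    import org.apache.kafka.common.Cluster;
    import org.apache.kafka.common.PartitionInfo;

    public class AlwaysRoundRobinPartitioner implements Partitioner {
        private final AtomicInteger counter = new AtomicInteger(0);

        @Override
        public int partition(final String topic, final Object key,
                             final byte[] keyBytes, final Object value,
                             final byte[] valueBytes, final Cluster cluster) {
            // Ignore the key entirely and rotate over all partitions.
            final List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
            final int next = Math.abs(counter.getAndIncrement() % partitions.size());
            return partitions.get(next).partition();
        }

        @Override
        public void close() {}

        @Override
        public void configure(final Map<String, ?> configs) {}
    }

Producers would opt in via the partitioner.class configuration property; the
default partitioner remains unchanged.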





Build failed in Jenkins: kafka-trunk-jdk8 #3657

2019-05-20 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8290: Close producer for zombie task (#6636)

--
[...truncated 2.54 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo PASSED


Re: [DISCUSS] KIP-471: Expose RocksDB Metrics in Kafka Streams

2019-05-20 Thread Bill Bejeck
Hi Bruno,

Thanks for the KIP, this will be a useful addition.

Overall the KIP looks good to me, and I have two minor comments.

1. For the tags, I'm wondering if rocksdb-state-id should be
rocksdb-store-id
instead?

2. With the compaction metrics, would it be possible to add total
compactions and an average number of compactions?  I've taken a look at the
available RocksDB metrics, and I'm not sure.  But users can control how
many L0 files it takes to trigger compaction, so if it is possible it may
be useful.

Thanks,
Bill


On Mon, May 20, 2019 at 9:15 AM Bruno Cadonna  wrote:

> Hi Sophie,
>
> Thank you for your comments.
>
> It's a good idea to supplement the metrics with configuration option
> to change the metrics. I also had some thoughts about it. However, I
> think I need some experimentation to get this right.
>
> I added the block cache hit rates for index and filter blocks to the
> KIP. As far as I understood, they should stay at zero, if users do not
> configure RocksDB to include index and filter blocks into the block
> cache. Did you also understand this similarly? I guess also in this
> case some experimentation would be good to be sure.
>
> Best,
> Bruno
>
>
> On Sat, May 18, 2019 at 2:29 AM Sophie Blee-Goldman 
> wrote:
> >
> > Actually I wonder if it might be useful to users to be able to break up
> the
> > cache hit stats by type? Some people may choose to store index and filter
> > blocks alongside data blocks, and it would probably be very helpful for
> > them to know who is making more effective use of the cache in order to
> tune
> > how much of it is allocated to each. I'm not sure how common this really
> is
> > but I think it would be invaluable to those who do. RocksDB performance
> can
> > be quite opaque..
> >
> > Cheers,
> > Sophie
> >
> > On Fri, May 17, 2019 at 5:01 PM Sophie Blee-Goldman  >
> > wrote:
> >
> > > Hey Bruno!
> > >
> > > This all looks pretty good to me, but one suggestion I have is to
> > > supplement each of the metrics with some info on how the user can
> control
> > > them. In other words, which options could/should they set in
> > > RocksDBConfigSetter should they discover a particular bottleneck?
> > >
> > > I don't think this necessarily needs to go into the KIP, but I do
> think it
> > > should be included in the docs somewhere (happy to help build up the
> list
> > > of associated options when the time comes)
> > >
> > > Thanks!
> > > Sophie
> > >
> > > On Fri, May 17, 2019 at 2:54 PM Bruno Cadonna 
> wrote:
> > >
> > >> Hi all,
> > >>
> > >> this KIP describes the extension of the Kafka Streams' metrics to
> include
> > >> RocksDB's internal statistics.
> > >>
> > >> Please have a look at it and let me know what you think. Since I am
> not a
> > >> RocksDB expert, I am thankful for any additional pair of eyes that
> > >> evaluates this KIP.
> > >>
> > >>
> > >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-471:+Expose+RocksDB+Metrics+in+Kafka+Streams
> > >>
> > >> Best regards,
> > >> Bruno
> > >>
> > >
>


[jira] [Resolved] (KAFKA-8381) SSL factory for inter-broker listener is broken

2019-05-20 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-8381.
---
Resolution: Fixed
  Reviewer: Manikumar

> SSL factory for inter-broker listener is broken
> ---
>
> Key: KAFKA-8381
> URL: https://issues.apache.org/jira/browse/KAFKA-8381
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.3.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Blocker
> Fix For: 2.3.0
>
>
> From a system test failure:
> {code}
> [2019-05-17 15:48:12,453] ERROR [KafkaServer id=1] Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> org.apache.kafka.common.KafkaException: 
> org.apache.kafka.common.config.ConfigException: Invalid value 
> javax.net.ssl.SSLHandshakeException: General SSLEngine problem for 
> configuration A client SSLEngine created with the provided settings can't 
> connect to a server SSLEngine created with those settings.
>     at 
> org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:162)
>     at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:146)
>     at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
>     at kafka.network.Processor.<init>(SocketServer.scala:747)
>     at kafka.network.SocketServer.newProcessor(SocketServer.scala:388)
>     at 
> kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:282)
>     at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158)
>     at 
> kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:281)
>     at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:244)
>     at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:241)
>     at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
>     at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
>     at 
> kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:241)
>     at kafka.network.SocketServer.startup(SocketServer.scala:120)
>     at kafka.server.KafkaServer.startup(KafkaServer.scala:293)
> {code}
> Looks like the changes under 
> https://github.com/apache/kafka/commit/0494cd329f3aaed94b3b46de0abe495f80faaedd
>  added validation for inter-broker SSL factory with hostname verification 
> enabled and `localhost` as the hostname. As a result, integration tests pass, 
> but system tests fail.
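
For reference, hostname verification on Kafka's SSL connections is governed by
the ssl.endpoint.identification.algorithm setting; an empty value disables it.
A hedged sketch of how a test configuration might disable it (illustrative
only; production deployments should leave verification enabled):

    import java.util.Properties;

    public class SslTestConfigSketch {
        public static Properties sslTestProps() {
            final Properties props = new Properties();
            // "https" (the default since Kafka 2.0) enables hostname
            // verification; an empty string disables it.
            props.put("ssl.endpoint.identification.algorithm", "");
            return props;
        }
    }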





[jira] [Created] (KAFKA-8396) Clean up Transformer API

2019-05-20 Thread John Roesler (JIRA)
John Roesler created KAFKA-8396:
---

 Summary: Clean up Transformer API
 Key: KAFKA-8396
 URL: https://issues.apache.org/jira/browse/KAFKA-8396
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: John Roesler


Currently, KStream operators transformValues and flatTransformValues disable 
context forwarding, and force operators to just return the new values.

The reason is that we wanted to prevent the key from changing, since the whole 
point of an `xValues` transformation is that we _do not_ change the key, and 
hence don't need to repartition.

However, the chosen mechanism has some drawbacks: The Transform concept is 
basically a way to plug in a custom Processor within the Streams DSL, but these 
restrictions make it more like a MapValues with access to the context. For 
example, even though you can still schedule punctuations, there's no way to 
forward values as a result of them. So, as a user, it's hard to build a mental 
model of how to use a TransformValues (because it's not quite a Transformer and 
not quite a Mapper).

Also, logically, a Transformer can call forward as much as it wants, so a 
Transformer and a FlatTransformer are effectively the same thing. Then, we also 
have TransformValues and FlatTransformValues that are also two more versions of 
the same thing, just to implement the key restrictions. Internally, some of 
these can send downstream by returning OR forwarding, and others can only 
return. It's a lot for users to keep in mind.

We can clean up this API significantly by just allowing all transformers to 
call `forward`. In the `Values` case, we can wrap the ProcessorContext in one 
that checks the key is `equal` to the one that got passed in (i.e., saves a 
reference and enforces equality with that reference in any call to `forward`). 
Then, we can actually deprecate the `*ValueTransformer*` interfaces and remove 
the restriction about calling forward.

We can consider a further cleanup (TBD) to deprecate the existing Transformer 
interface entirely, and replace it with one with a `void` return type. Then, 
the Transform and FlatTransform cases collapse together, and we just need 
Transform and TransformValues.
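
A minimal sketch of the key-checking wrapper idea described above (a
hypothetical illustration of the mechanism, not code from the ticket or from
Kafka; the BiConsumer stands in for the real ProcessorContext#forward):

    import java.util.Objects;
    import java.util.function.BiConsumer;

    public class KeyEnforcingForwarder<K, V> {
        private final K expectedKey;
        private final BiConsumer<K, V> delegate; // stand-in for forward()

        public KeyEnforcingForwarder(final K expectedKey,
                                     final BiConsumer<K, V> delegate) {
            this.expectedKey = expectedKey;
            this.delegate = delegate;
        }

        public void forward(final K key, final V value) {
            // Enforce the *Values contract: the record key must not change.
            if (!Objects.equals(expectedKey, key)) {
                throw new IllegalStateException(
                    "A Values transformer must not forward a new key: " + key);
            }
            delegate.accept(key, value);
        }
    }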





Build failed in Jenkins: kafka-trunk-jdk11 #550

2019-05-20 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8290: Close producer for zombie task (#6636)

--
[...truncated 2.46 MB...]
org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes and repartition topic name PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should 
filter records satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should 
filter records satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should 
filter records not satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should 
filter records not satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join 
correctly records STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join 
correctly records PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a 
Materialized should join correctly records and state store STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a 
Materialized should join correctly records and state store PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilTimeLimit STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilTimeLimit PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilWindowCloses STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilWindowCloses PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > session windowed 
KTable#suppress should correctly suppress results using 
Suppressed.untilWindowCloses STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > session windowed 
KTable#suppress should correctly suppress results using 
Suppressed.untilWindowCloses PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > non-windowed 
KTable#suppress should correctly suppress results using 
Suppressed.untilTimeLimit STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > non-windowed 
KTable#suppress should correctly suppress results using 
Suppressed.untilTimeLimit PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > filter a KStream should 
filter records satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > filter a KStream should 
filter records satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > filterNot a KStream should 
filter records not satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > filterNot a KStream should 
filter records not satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > foreach a KStream should 
run foreach actions on records STARTED


[jira] [Resolved] (KAFKA-8376) Flaky test ClientAuthenticationFailureTest.testTransactionalProducerWithInvalidCredentials test.

2019-05-20 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8376.

Resolution: Fixed

> Flaky test 
> ClientAuthenticationFailureTest.testTransactionalProducerWithInvalidCredentials
>  test.
> 
>
> Key: KAFKA-8376
> URL: https://issues.apache.org/jira/browse/KAFKA-8376
> Project: Kafka
>  Issue Type: Bug
>Reporter: Manikumar
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.3.0
>
> Attachments: t1.txt
>
>
> The test goes into a hung state and the test run does not complete. I think 
> the flakiness is due to timing issues and 
> https://github.com/apache/kafka/pull/5971
> Attaching the thread dump.





[jira] [Created] (KAFKA-8395) Add an ability to backup log segment files on truncation

2019-05-20 Thread Michael Axiak (JIRA)
Michael Axiak created KAFKA-8395:


 Summary: Add an ability to backup log segment files on truncation
 Key: KAFKA-8395
 URL: https://issues.apache.org/jira/browse/KAFKA-8395
 Project: Kafka
  Issue Type: New Feature
  Components: core
Affects Versions: 2.2.0
Reporter: Michael Axiak
 Fix For: 2.2.1


At HubSpot, we believe we hit a combination of bugs [1] [2], which may have 
caused us to lose data. In this scenario, as part of metadata conflict 
resolution, a slowly starting broker recovered an offset of zero and 
truncated segment files.

As part of a belt-and-suspenders approach to reducing this risk in the future, 
I propose adding the ability to rename/back up these files and allowing Kafka 
to move on. Note that this breaks the ordering guarantees, but allows one to 
recover the data and decide later how to approach it.

This feature should be turned off by default but enabled with a configuration 
option.

(A pull request is following soon on Github)

1: https://issues.apache.org/jira/browse/KAFKA-2178
 2: https://issues.apache.org/jira/browse/KAFKA-1120

 





Re: [VOTE] KIP-369 Alternative Partitioner to Support "Always Round-Robin" Selection

2019-05-20 Thread M. Manna
I have now submitted the PR for review. Thanks Matthias for pointing out
that KAFKA- was raised to address the same.

https://github.com/apache/kafka/pull/6771

If someone reviews it after the Jenkins build is complete, I would appreciate it.

Thanks,

On Fri, 17 May 2019 at 22:18, M. Manna  wrote:

> Apologies for the delay. As per the original thread, there have been 3
> binding votes.
>
> I will be closing this and update the confluence page with the results.
> Also, I will be submitting the PR soon.
>
> Regards,
>
> On Fri, 5 Apr 2019 at 00:18, M. Manna  wrote:
>
>> Thanks Harsha.
>>
>> As per your comments, I have counted 3 binding votes so far.
>>
>> Thanks everyone for your comments and support. I’ll update the KIP tomorrow
>> morning and do the needful.
>>
>> Regards,
>>
>> On Thu, 4 Apr 2019 at 22:10, Harsha  wrote:
>>
>>> Looks like the KIP is passed with 3 binding votes.  From Matthias, Bill
>>> Bejeck and myself you got 3 binding votes.
>>> You can do the full tally of the votes and send out a close of vote
>>> thread.
>>>
>>> Thanks,
>>> Harsha
>>>
>>> On Thu, Apr 4, 2019, at 12:24 PM, M. Manna wrote:
>>> > Hello,
>>> >
>>> > Trying to revive this thread again. Would anyone be interested in
>>> > having this KIP go through?
>>> >
>>> >
>>> > Thanks,
>>> >
>>> > On Fri, 25 Jan 2019 at 16:44, M. Manna  wrote:
>>> >
>>> > > Hello,
>>> > >
>>> > > I am trying to revive this thread. I only got 1 binding vote so far.
>>> > >
>>> > > Please feel free to revisit and comment here.
>>> > >
>>> > > Thanks,
>>> > >
>>> > > On Thu, 25 Oct 2018 at 00:15, M. Manna  wrote:
>>> > >
>>> > >> Hey IJ,
>>> > >>
>>> > >> Thanks for your interest in the KIP.
>>> > >>
>>> > >> My point was simply that the round-robin should happen even if the
>>> key is
>>> > >> not null. As for the importance of key in our case, we treat the
>>> key as
>>> > >> metadata. Each key is composed of certain info which are parsed by
>>> our
>>> > >> consumer thread. We will then determine whether it's an actionable
>>> message
>>> > >> (e.g. process it), or a loopback(ignore it). You could argue, "Why
>>> not
>>> > >> append this metadata with the record and parse it there?". But that
>>> means
>>> > >> the following:
>>> > >>
>>> > >> 1) I'm always passing a null key to achieve this - I would like the
>>> > >> flexibility to pass a Null/Not-Null/Other key.
>>> > >> 2) Suppose the message size is 99 KB and the max message bytes allowed
>>> > >> is 100K. Now prefixing the message with metadata results in the actual
>>> > >> message being 101K. This will fail at the producer level and cause a
>>> > >> retry/log this in our DB for future pickup.
>>> > >>
>>> > >> To avoid all this, we are simply proposing this new partitioner
>>> > >> class, but all new Kafka releases will still have DefaultPartitioner
>>> > >> as the default, unless users change the prop file to use our new
>>> > >> class.
>>> > >>
>>> > >> Regards,
>>> > >>
>>> > >> On Sun, 21 Oct 2018 at 04:05, Ismael Juma 
>>> wrote:
>>> > >>
>>> > >>> Thanks for the KIP. Can you please elaborate on the need for the
>>> key in
>>> > >>> this case? The KIP simply states that the key is needed for
>>> metadata, but
>>> > >>> doesn't give any more details.
>>> > >>>
>>> > >>> Ismael
>>> > >>>
>>> > >>> On Tue, Sep 4, 2018 at 3:39 AM M. Manna 
>>> wrote:
>>> > >>>
>>> > >>> > Hello,
>>> > >>> >
>>> > >>> > I have made necessary changes as per the original discussion
>>> thread,
>>> > >>> and
>>> > >>> > would like to put it for votes.
>>> > >>> >
>>> > >>> > Thank you very much for your suggestion and guidance so far.
>>> > >>> >
>>> > >>> > Regards,
>>> > >>> >
>>> > >>>
>>> > >>
>>> >
>>>
>>


Re: [DISCUSS] KIP-471: Expose RocksDB Metrics in Kafka Streams

2019-05-20 Thread Bruno Cadonna
Hi Sophie,

Thank you for your comments.

It's a good idea to supplement the metrics with configuration option
to change the metrics. I also had some thoughts about it. However, I
think I need some experimentation to get this right.

I added the block cache hit rates for index and filter blocks to the
KIP. As far as I understood, they should stay at zero, if users do not
configure RocksDB to include index and filter blocks into the block
cache. Did you also understand this similarly? I guess also in this
case some experimentation would be good to be sure.

Best,
Bruno


On Sat, May 18, 2019 at 2:29 AM Sophie Blee-Goldman  wrote:
>
> Actually I wonder if it might be useful to users to be able to break up the
> cache hit stats by type? Some people may choose to store index and filter
> blocks alongside data blocks, and it would probably be very helpful for
> them to know who is making more effective use of the cache in order to tune
> how much of it is allocated to each. I'm not sure how common this really is
> but I think it would be invaluable to those who do. RocksDB performance can
> be quite opaque..
>
> Cheers,
> Sophie
>
> On Fri, May 17, 2019 at 5:01 PM Sophie Blee-Goldman 
> wrote:
>
> > Hey Bruno!
> >
> > This all looks pretty good to me, but one suggestion I have is to
> > supplement each of the metrics with some info on how the user can control
> > them. In other words, which options could/should they set in
> > RocksDBConfigSetter should they discover a particular bottleneck?
> >
> > I don't think this necessarily needs to go into the KIP, but I do think it
> > should be included in the docs somewhere (happy to help build up the list
> > of associated options when the time comes)
> >
> > Thanks!
> > Sophie
> >
> > On Fri, May 17, 2019 at 2:54 PM Bruno Cadonna  wrote:
> >
> >> Hi all,
> >>
> >> this KIP describes the extension of the Kafka Streams' metrics to include
> >> RocksDB's internal statistics.
> >>
> >> Please have a look at it and let me know what you think. Since I am not a
> >> RocksDB expert, I am thankful for any additional pair of eyes that
> >> evaluates this KIP.
> >>
> >>
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-471:+Expose+RocksDB+Metrics+in+Kafka+Streams
> >>
> >> Best regards,
> >> Bruno
> >>
> >


Re: [DISCUSS] KIP-456: Helper classes to make it simpler to write test logic with TopologyTestDriver

2019-05-20 Thread Jukka Karvanen
Hi All,

Inspired by the discussion in this thread, there is a new KIP-470:
TopologyTestDriver test input and output usability improvements:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-470%3A+TopologyTestDriver+test+input+and+output+usability+improvements


This KIP can be discarded if KIP-470 gets accepted.

Even if this KIP is rejected, migrating from these classes to KIP-470 is
rather straightforward.
There is an add-on package which can be used with any Kafka version
>= 1.1.0 before these are included in a release.

See info:
https://github.com/jukkakarvanen/kafka-streams-test-topics

Maven package:
https://mvnrepository.com/artifact/com.github.jukkakarvanen/kafka-streams-test-topics


Best Regards,
Jukka

On Thu, 9 May 2019 at 15.51, Patrik Kleindl (pklei...@gmail.com) wrote:

> Hi Jukka
> Sorry, that was mostly what I had in mind, I didn't have enough time to
> look through the KIP.
>
> My question was also if this handling of topics wouldn't make more sense
> even outside the TTD, for the general API.
>
> regards
> Patrik
>
> On Thu, 9 May 2019 at 14:43, Jukka Karvanen 
> wrote:
>
> > Hi Patrik,
> >
> > Sorry, I need to clarify.
> > In the current version of the KIP in the wiki, topic objects are created
> > with a constructor where the driver, topicName, and serdes are provided:
> >
> > TestInputTopic<String, String> inputTopic = new TestInputTopic<String,
> > String>(testDriver, INPUT_TOPIC, new Serdes.StringSerde(), new
> > Serdes.StringSerde());
> >
> > So if TopologyTestDriver were modified, this could be:
> >
> > TestInputTopic<String, String> inputTopic =
> > testDriver.getInputTopic(INPUT_TOPIC, new Serdes.StringSerde(), new
> > Serdes.StringSerde());
> >
> > or, preferably, if the serdes can be found:
> >
> > TestInputTopic<String, String> inputTopic =
> > testDriver.getInputTopic(INPUT_TOPIC);
> >
> > This initialization is normally done in test setup, and afterwards the
> > topic object can be used:
> >
> > inputTopic.pipeInput("Hello");
> >
> >
> > Or did you mean something else?
> >
> > Jukka
> >
> >
> >
> >
> > On Thu, 9 May 2019 at 15.14, Patrik Kleindl (pklei...@gmail.com)
> > wrote:
> >
> > > Hi Jukka
> > > Regarding your comment
> > > > If there were a way to find out the needed serdes for the topic, it
> > > would make the API even simpler.
> > > I was wondering if it wouldn't make more sense to have a "topic object"
> > > including the Serdes and use this instead of only passing in the name as
> > > a string everywhere.
> > > From a low-level perspective Kafka does and should not care what is
> > inside
> > > the topic, but from a user perspective this information usually belongs
> > > together.
> > > Sidenote: Having topics as objects would probably also make it easier
> to
> > > track them from the outside.
> > > regards
> > > Patrik
> > >
> > > On Thu, 9 May 2019 at 10:39, Jukka Karvanen <
> jukka.karva...@jukinimi.com
> > >
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > I will write new KIP for the TestTopologyDriver Input and Output
> > > usability
> > > > changes.
> > > > It is out of the scope of the current title: "Helper classes to make it
> > > > simpler to write test logic with TopologyTestDriver"
> > > > and we can get back to this KIP if that alternative is not favored.
> > > >
> > > > So my original approach was not to modify existing classes, but if we
> > > > end up modifying TTD, I would also change the
> > > > way to instantiate these topics. We could add getInputTopic("my-topic") /
> > > > getOutputTopic("my-topic") to TTD, so it would work the
> > > > same way as with getStateStore and related methods.
> > > >
> > > > If there were a way to find out the needed serdes for the topic, it
> > > would
> > > > make the API even simpler.
> > > >
> > > > Generally, as an end user, I would prefer not only swapping the
> > > > ConsumerRecord and ProducerRecord, but having an
> > > > interface accepting and returning Record, not needing to think about
> > whether
> > > > those are ConsumerRecords or ProducerRecords.
> > > > That way we could use the same classes to pipe in and assert the
> > > > result; something similar to "private final static class Record"
> > > > in TopologyTestDriverTest.
> > > >
> > > > Jukka
> > > >
> > > > On Wed, 8 May 2019 at 17.01, John Roesler (j...@confluent.io)
> > wrote:
> > > >
> > > > > Hi Jukka, thanks for the reply!
> > > > >
> > > > > I think this is a good summary (the discussion was getting a little
> > > > > unwieldy). I'll reply inline.
> > > > >
> > > > > Also, thanks for clarify about your library vs. this KIP. That
> makes
> > > > > perfect sense to me.
> > > > > >
> > > > > > 1. Add JavaDoc for KIP
> > > > > >
> > > > > > Is there a good example of KIP where Javadoc is included, so I
> can
> > > > > follow?
> > > > > > I created this KIP based on this as an example:
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-247%3A+Add+public+test+utils+for+Kafka+Streams
> > > > > >
> > > > > >
> > > > > > Now added some comments to KIP page to clarify 

[DISCUSS] KIP-470: TopologyTestDriver test input and output usability improvements

2019-05-20 Thread Jukka Karvanen
Hi All,

I would like to start the discussion on KIP-470: TopologyTestDriver test
input and output usability improvements:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-470%3A+TopologyTestDriver+test+input+and+output+usability+improvements


This KIP is inspired by the discussion in KIP-456: Helper classes to make
it simpler to write test logic with TopologyTestDriver:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-456%3A+Helper+classes+to+make+it+simpler+to+write+test+logic+with+TopologyTestDriver

The proposal in KIP-456 was to add an alternate way to input and output
topics, but this KIP enhances those classes and deprecates the old
functionality to make a clear interface for test writers to use.

In the current KIP-470 proposal, topic objects are created with the topicName
and related serdes:

public final <K, V> TestInputTopic<K, V> createInputTopic(final String
topicName, final Serde<K> keySerde, final Serde<V> valueSerde);
public final <K, V> TestOutputTopic<K, V> createOutputTopic(final String
topicName, final Serde<K> keySerde, final Serde<V> valueSerde);

One thing I wondered: if there were a way to find out the topic-related serdes
from the TopologyTestDriver topology, it would simplify creation of these
topic objects:

public final <K, V> TestInputTopic<K, V> createInputTopic(final String
topicName);
public final <K, V> TestOutputTopic<K, V> createOutputTopic(final String
topicName);
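
For illustration, a test written against the proposed API might look like the
following (a hedged sketch based on the method names above; the topology,
topic names, and final signatures are assumptions while the KIP is under
discussion):

    final TopologyTestDriver driver = new TopologyTestDriver(topology, props);
    final TestInputTopic<String, String> input = driver.createInputTopic(
        "input-topic", new Serdes.StringSerde(), new Serdes.StringSerde());
    final TestOutputTopic<String, String> output = driver.createOutputTopic(
        "output-topic", new Serdes.StringSerde(), new Serdes.StringSerde());
    input.pipeInput("key", "Hello");
    assertEquals("HELLO", output.readValue()); // assuming an uppercasing topology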

KIP-456 can be discarded if this KIP gets accepted.


Best Regards,
Jukka


[jira] [Created] (KAFKA-8394) Cannot Start a build with New Gradle Version

2019-05-20 Thread M. Manna (JIRA)
M. Manna created KAFKA-8394:
---

 Summary: Cannot Start a build with New Gradle Version
 Key: KAFKA-8394
 URL: https://issues.apache.org/jira/browse/KAFKA-8394
 Project: Kafka
  Issue Type: Bug
  Components: build
Affects Versions: 2.2.0
Reporter: M. Manna


When I downloaded Gradle 5.4.1 and ran `gradle wrapper`, the build failed 
because the scoverage plugin dependency encountered some build errors. The 
following is the output:

 

org.gradle.api.GradleScriptException: A problem occurred evaluating root project 'kafka'.
        at org.gradle.groovy.scripts.internal.DefaultScriptRunnerFactory$ScriptRunnerImpl.run(DefaultScriptRunnerFactory.java:92)
        at org.gradle.configuration.DefaultScriptPluginFactory$ScriptPluginImpl$2.run(DefaultScriptPluginFactory.java:221)
        at org.gradle.configuration.ProjectScriptTarget.addConfiguration(ProjectScriptTarget.java:77)
        at org.gradle.configuration.DefaultScriptPluginFactory$ScriptPluginImpl.apply(DefaultScriptPluginFactory.java:226)
        at org.gradle.configuration.BuildOperationScriptPlugin$1$1.run(BuildOperationScriptPlugin.java:69)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:402)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:394)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$1.execute(DefaultBuildOperationExecutor.java:165)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:250)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:158)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:92)
        at org.gradle.internal.operations.DelegatingBuildOperationExecutor.run(DelegatingBuildOperationExecutor.java:31)
        at org.gradle.configuration.BuildOperationScriptPlugin$1.execute(BuildOperationScriptPlugin.java:66)
        at org.gradle.configuration.BuildOperationScriptPlugin$1.execute(BuildOperationScriptPlugin.java:63)
        at org.gradle.configuration.internal.DefaultUserCodeApplicationContext.apply(DefaultUserCodeApplicationContext.java:48)
        at org.gradle.configuration.BuildOperationScriptPlugin.apply(BuildOperationScriptPlugin.java:63)
        at org.gradle.configuration.project.BuildScriptProcessor$1.run(BuildScriptProcessor.java:44)
        at org.gradle.internal.Factories$1.create(Factories.java:25)
        at org.gradle.api.internal.project.DefaultProjectStateRegistry$ProjectStateImpl.withMutableState(DefaultProjectStateRegistry.java:200)
        at org.gradle.api.internal.project.DefaultProjectStateRegistry$ProjectStateImpl.withMutableState(DefaultProjectStateRegistry.java:186)
        at org.gradle.configuration.project.BuildScriptProcessor.execute(BuildScriptProcessor.java:41)
        at org.gradle.configuration.project.BuildScriptProcessor.execute(BuildScriptProcessor.java:26)
        at org.gradle.configuration.project.ConfigureActionsProjectEvaluator.evaluate(ConfigureActionsProjectEvaluator.java:34)
        at org.gradle.configuration.project.LifecycleProjectEvaluator$EvaluateProject$1.run(LifecycleProjectEvaluator.java:106)
        at org.gradle.internal.Factories$1.create(Factories.java:25)
        at org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:183)
        at org.gradle.internal.work.StopShieldingWorkerLeaseService.withLocks(StopShieldingWorkerLeaseService.java:40)
        at org.gradle.api.internal.project.DefaultProjectStateRegistry$ProjectStateImpl.withProjectLock(DefaultProjectStateRegistry.java:226)
        at org.gradle.api.internal.project.DefaultProjectStateRegistry$ProjectStateImpl.withMutableState(DefaultProjectStateRegistry.java:220)
        at org.gradle.api.internal.project.DefaultProjectStateRegistry$ProjectStateImpl.withMutableState(DefaultProjectStateRegistry.java:186)
        at org.gradle.configuration.project.LifecycleProjectEvaluator$EvaluateProject.run(LifecycleProjectEvaluator.java:95)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:402)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:394)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$1.execute(DefaultBuildOperationExecutor.java:165)

Re: [kafka-clients] Re: [VOTE] 2.2.1 RC1

2019-05-20 Thread Jonathan Santilli
Hello, I have run the tests, and all of them passed.
I have also run the quick start and tested my apps with
commit 55783d313; everything worked.

+1

Thanks a lot for the release; I have been waiting a long time for a bug fix
in this release, yuju! :)

Cheers!
--
Jonathan



On Sat, May 18, 2019 at 2:17 AM Vahid Hashemian 
wrote:

> Thanks Ismael, and no worries.
> I'll look forward to hearing some feedback next week.
>
> --Vahid
>
> On Fri, May 17, 2019 at 12:41 AM Ismael Juma  wrote:
>
> > Sorry for the delay Vahid. I suspect votes are more likely next week,
> > after the feature freeze.
> >
> > Ismael
> >
> > On Thu, May 16, 2019 at 9:45 PM Vahid Hashemian <
> vahid.hashem...@gmail.com>
> > wrote:
> >
> >> Since there is no vote on this RC yet, I'll extend the deadline to
> Monday,
> >> May 20, at 9:00 am.
> >>
> >> Thanks in advance for checking / testing / voting.
> >>
> >> --Vahid
> >>
> >>
> >> On Mon, May 13, 2019, 20:15 Vahid Hashemian 
> >> wrote:
> >>
> >> > Hello Kafka users, developers and client-developers,
> >> >
> >> > This is the second candidate for release of Apache Kafka 2.2.1.
> >> >
> >> > Compared to RC0, this release candidate also fixes the following
> issues:
> >> >
> >> >- [KAFKA-6789] - Add retry logic in AdminClient requests
> >> >- [KAFKA-8348] - Document of kafkaStreams improvement
> >> >- [KAFKA-7633] - Kafka Connect requires permission to create
> internal
> >> >topics even if they exist
> >> >- [KAFKA-8240] - Source.equals() can fail with NPE
> >> >- [KAFKA-8335] - Log cleaner skips Transactional mark and batch
> >> >record, causing unlimited growth of __consumer_offsets
> >> >- [KAFKA-8352] - Connect System Tests are failing with 404
> >> >
> >> > Release notes for the 2.2.1 release:
> >> > https://home.apache.org/~vahid/kafka-2.2.1-rc1/RELEASE_NOTES.html
> >> >
> >> > *** Please download, test and vote by Thursday, May 16, 9:00 pm PT.
> >> >
> >> > Kafka's KEYS file containing PGP keys we use to sign the release:
> >> > https://kafka.apache.org/KEYS
> >> >
> >> > * Release artifacts to be voted upon (source and binary):
> >> > https://home.apache.org/~vahid/kafka-2.2.1-rc1/
> >> >
> >> > * Maven artifacts to be voted upon:
> >> >
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >> >
> >> > * Javadoc:
> >> > https://home.apache.org/~vahid/kafka-2.2.1-rc1/javadoc/
> >> >
> >> > * Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
> >> > https://github.com/apache/kafka/releases/tag/2.2.1-rc1
> >> >
> >> > * Documentation:
> >> > https://kafka.apache.org/22/documentation.html
> >> >
> >> > * Protocol:
> >> > https://kafka.apache.org/22/protocol.html
> >> >
> >> > * Successful Jenkins builds for the 2.2 branch:
> >> > Unit/integration tests:
> >> https://builds.apache.org/job/kafka-2.2-jdk8/115/
> >> >
> >> > Thanks!
> >> > --Vahid
> >> >
> >>
>
>
> --
>
> Thanks!
> --Vahid
>


-- 
Santilli Jonathan


[jira] [Created] (KAFKA-8393) Kafka Connect: could not get type for name org.osgi.framework.BundleListener on Windows

2019-05-20 Thread JIRA
Loïc created KAFKA-8393:
---

 Summary: Kafka Connect: could not get type for name org.osgi.framework.BundleListener on Windows
 Key: KAFKA-8393
 URL: https://issues.apache.org/jira/browse/KAFKA-8393
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.2.0
 Environment: Windows 10
Reporter: Loïc


Hi guys,

according to the documentation
[https://kafka.apache.org/quickstart#quickstart_kafkaconnect]

I've tried the command

`c:\dev\Tools\servers\kafka_2.12-2.2.0>bin\windows\connect-standalone.bat config\connect-standalone.properties config\connect-file-source.properties config\connect-file-sink.properties`

and got this error:

c:\dev\Tools\servers\kafka_2.12-2.2.0>bin\windows\connect-standalone.bat config\connect-standalone.properties config\connect-file-source.properties config\connect-file-sink.properties
[2019-05-17 10:21:25,049] WARN could not get type for name org.osgi.framework.BundleListener from any class loader (org.reflections.Reflections)
org.reflections.ReflectionsException: could not get type for name org.osgi.framework.BundleListener
 at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:390)
 at org.reflections.Reflections.expandSuperTypes(Reflections.java:381)
 at org.reflections.Reflections.<init>(Reflections.java:126)
 at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader$InternalReflections.<init>(DelegatingClassLoader.java:400)
 at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:299)
 at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:237)
 at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:185)

 

Environment:  Windows 10, Kafka 2.12-2.2.0 [current]

 

Many thanks for your help.

Regards

Loïc


