Build failed in Jenkins: kafka-2.1-jdk8 #15

2018-10-10 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: AbstractIndex.close should unmap (#5757)

--
[...truncated 437.02 KB...]

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithBrokers STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithBrokers PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithTopicsOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithTopicsOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumVerifyOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumVerifyOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutTopicsOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutTopicsOption PASSED

kafka.admin.AdminRackAwareTest > testAssignmentWithRackAwareWithUnevenRacks 
STARTED

kafka.admin.AdminRackAwareTest > testAssignmentWithRackAwareWithUnevenRacks 
PASSED

kafka.admin.AdminRackAwareTest > testAssignmentWith2ReplicasRackAware STARTED

kafka.admin.AdminRackAwareTest > testAssignmentWith2ReplicasRackAware PASSED

kafka.admin.AdminRackAwareTest > testAssignmentWithRackAwareWithUnevenReplicas 
STARTED

kafka.admin.AdminRackAwareTest > testAssignmentWithRackAwareWithUnevenReplicas 
PASSED

kafka.admin.AdminRackAwareTest > testSkipBrokerWithReplicaAlreadyAssigned 
STARTED

kafka.admin.AdminRackAwareTest > testSkipBrokerWithReplicaAlreadyAssigned PASSED

kafka.admin.AdminRackAwareTest > testAssignmentWithRackAware STARTED

kafka.admin.AdminRackAwareTest > testAssignmentWithRackAware PASSED

kafka.admin.AdminRackAwareTest > testRackAwareExpansion STARTED

kafka.admin.AdminRackAwareTest > testRackAwareExpansion PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6Partitions STARTED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6Partitions PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6PartitionsAnd3Brokers STARTED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6PartitionsAnd3Brokers PASSED

kafka.admin.AdminRackAwareTest > testReplicaAssignment STARTED

kafka.admin.AdminRackAwareTest > testReplicaAssignment PASSED

kafka.admin.AdminRackAwareTest > 
testGetRackAlternatedBrokerListAndAssignReplicasToBrokers STARTED

kafka.admin.AdminRackAwareTest > 
testGetRackAlternatedBrokerListAndAssignReplicasToBrokers PASSED

kafka.admin.AdminRackAwareTest > testMoreReplicasThanRacks STARTED

kafka.admin.AdminRackAwareTest > testMoreReplicasThanRacks PASSED

kafka.admin.AdminRackAwareTest > testSingleRack STARTED

kafka.admin.AdminRackAwareTest > testSingleRack PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWithRackAwareWithRandomStartIndex STARTED

kafka.admin.AdminRackAwareTest > 
testAssignmentWithRackAwareWithRandomStartIndex PASSED

kafka.admin.AdminRackAwareTest > testLargeNumberPartitionsAssignment STARTED

kafka.admin.AdminRackAwareTest > testLargeNumberPartitionsAssignment PASSED

kafka.admin.AdminRackAwareTest > testLessReplicasThanRacks STARTED

kafka.admin.AdminRackAwareTest > testLessReplicasThanRacks PASSED

kafka.admin.ListConsumerGroupTest > classMethod STARTED

kafka.admin.ListConsumerGroupTest > classMethod FAILED
java.lang.AssertionError: Found unexpected threads during @BeforeClass, 
allThreads=Set(metrics-meter-tick-thread-2, main, Signal Dispatcher, Reference 
Handler, ExpirationReaper-0-Produce, ForkJoinPool-1-worker-1, 
ExpirationReaper-0-DeleteRecords, ThrottledChannelReaper-Produce, 
kafka-admin-client-thread | adminclient-84, /0:0:0:0:0:0:0:1:39030 to 
/0:0:0:0:0:0:0:1:42257 workers Thread 2, ThrottledChannelReaper-Request, Test 
worker, /0:0:0:0:0:0:0:1:39030 to /0:0:0:0:0:0:0:1:42257 workers Thread 3, 
shutdownable-thread-test, ExpirationReaper-0-Fetch, Finalizer, 
ThrottledChannelReaper-Fetch, metrics-meter-tick-thread-1), 
unexpected=Set(kafka-admin-client-thread | adminclient-84)
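For context, the failing guard compares the JVM's live thread names against an allowed baseline and fails if anything extra (here, a leaked AdminClient thread) is still running. A minimal sketch of that kind of check, using only standard APIs (this is not Kafka's actual test utility, and the thread names are taken from the failure message above):

```java
import java.util.Set;
import java.util.stream.Collectors;

public class ThreadLeakCheck {

    // Pure helper so the filtering logic is testable without live threads:
    // keep only names that match none of the expected prefixes.
    static Set<String> unexpected(Set<String> allThreads, Set<String> expectedPrefixes) {
        return allThreads.stream()
                .filter(name -> expectedPrefixes.stream().noneMatch(name::startsWith))
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        // In a real guard, allThreads would come from
        // Thread.getAllStackTraces().keySet() mapped to Thread::getName.
        Set<String> all = Set.of("main", "Finalizer", "Reference Handler",
                "Signal Dispatcher", "Test worker",
                "kafka-admin-client-thread | adminclient-84");
        Set<String> expected = Set.of("main", "Finalizer", "Reference Handler",
                "Signal Dispatcher", "Test worker");
        // The leaked AdminClient thread is the only one not in the baseline.
        System.out.println("unexpected=" + unexpected(all, expected));
    }
}
```

A leak like the one above usually means a test created an AdminClient and never called close() on it.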

kafka.admin.ListConsumerGroupTe

Build failed in Jenkins: kafka-2.0-jdk8 #167

2018-10-10 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] HOTFIX: Fix broken links (#5676)

--
[...truncated 434.14 KB...]
kafka.admin.ConfigCommandTest > shouldAddBrokerDynamicConfig STARTED

kafka.admin.ConfigCommandTest > shouldAddBrokerDynamicConfig PASSED

kafka.admin.ConfigCommandTest > shouldExitWithNonZeroStatusOnZkCommandError 
STARTED

kafka.admin.ConfigCommandTest > shouldExitWithNonZeroStatusOnZkCommandError 
PASSED

kafka.admin.ConfigCommandTest > testUserClientQuotaOpts STARTED

kafka.admin.ConfigCommandTest > testUserClientQuotaOpts PASSED

kafka.admin.ConfigCommandTest > shouldAddTopicConfig STARTED

kafka.admin.ConfigCommandTest > shouldAddTopicConfig PASSED

kafka.admin.ConfigCommandTest > shouldAddClientConfig STARTED

kafka.admin.ConfigCommandTest > shouldAddClientConfig PASSED

kafka.admin.ConfigCommandTest > shouldDeleteBrokerConfig STARTED

kafka.admin.ConfigCommandTest > shouldDeleteBrokerConfig PASSED

kafka.admin.ConfigCommandTest > shouldAddDefaultBrokerDynamicConfig STARTED

kafka.admin.ConfigCommandTest > shouldAddDefaultBrokerDynamicConfig PASSED

kafka.admin.ConfigCommandTest > testQuotaConfigEntity STARTED

kafka.admin.ConfigCommandTest > testQuotaConfigEntity PASSED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedBracketConfig STARTED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedBracketConfig PASSED

kafka.admin.ConfigCommandTest > shouldFailIfUnrecognisedEntityType STARTED

kafka.admin.ConfigCommandTest > shouldFailIfUnrecognisedEntityType PASSED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfNonExistingConfigIsDeleted STARTED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfNonExistingConfigIsDeleted PASSED

kafka.admin.ConfigCommandTest > shouldAddBrokerQuotaConfig STARTED

kafka.admin.ConfigCommandTest > shouldAddBrokerQuotaConfig PASSED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedEntityName STARTED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedEntityName PASSED

kafka.admin.ConfigCommandTest > shouldSupportCommaSeparatedValues STARTED

kafka.admin.ConfigCommandTest > shouldSupportCommaSeparatedValues PASSED

kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfMalformedConfig 
STARTED

kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfMalformedConfig 
PASSED

kafka.admin.ConfigCommandTest > shouldExitWithNonZeroStatusOnBrokerCommandError 
STARTED

kafka.admin.ConfigCommandTest > shouldExitWithNonZeroStatusOnBrokerCommandError 
PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForBrokersEntityType STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForBrokersEntityType PASSED

kafka.admin.ConfigCommandTest > testDynamicBrokerConfigUpdateUsingZooKeeper 
STARTED

kafka.admin.ConfigCommandTest > testDynamicBrokerConfigUpdateUsingZooKeeper 
PASSED

kafka.admin.ConfigCommandTest > testQuotaDescribeEntities STARTED

kafka.admin.ConfigCommandTest > testQuotaDescribeEntities PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForClientsEntityType STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForClientsEntityType PASSED

kafka.admin.ConfigCommandTest > shouldExitWithNonZeroStatusOnArgError STARTED

kafka.admin.ConfigCommandTest > shouldExitWithNonZeroStatusOnArgError PASSED

kafka.admin.DelegationTokenCommandTest > testDelegationTokenRequests STARTED

kafka.admin.DelegationTokenCommandTest > testDelegationTokenRequests PASSED

kafka.admin.DescribeConsumerGroupTest > testDescribeMembersOfNonExistingGroup 
STARTED

kafka.admin.DescribeConsumerGroupTest > testDescribeMembersOfNonExistingGroup 
PASSED

kafka.admin.DescribeConsumerGroupTest > testDescribeStateOfExistingGroup STARTED

kafka.admin.DescribeConsumerGroupTest > testDescribeStateOfExistingGroup PASSED

kafka.admin.DescribeConsumerGroupTest > testDescribeExistingGroup STARTED

kafka.admin.DescribeConsumerGroupTest > testDescribeExistingGroup PASSED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeStateWithConsumersWithoutAssignedPartitions STARTED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeStateWithConsumersWithoutAssignedPartitions PASSED

kafka.admin.DescribeConsumerGroupTest > testDescribeWithMultipleSubActions 
STARTED

kafka.admin.DescribeConsumerGroupTest > testDescribeWithMultipleSubActions 
PASSED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeGroupOffsetsWithShortInitializationTimeout STARTED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeGroupOffsetsWithShortInitializationTimeout PASSED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeOffsetsOfExistingGroupWithNoMembers STARTED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeOffsetsOfExistingGroupWithNoMembers PASSED

kafka.admin.DescribeConsumerGroupTest > 
testDescribeGroupMembersWithShortInitializationTimeout STARTED

kafka.admin.DescribeCo

Jenkins build is back to normal : kafka-trunk-jdk8 #3096

2018-10-10 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk11 #24

2018-10-10 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-2.0-jdk8 #166

2018-10-10 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H33 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 379211134740268b570fc8edd59ae78df0dffee9
remote: Enumerating objects: 4784, done.
remote: Counting objects: 0% (1/4784) ... 56% (2680/4784)

Build failed in Jenkins: kafka-trunk-jdk11 #23

2018-10-10 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Fix broken standalone ReplicationQuotasTestRig test (#5773)

[github] MINOR: Fix remaining core, connect and clients tests to pass with Java

--
[...truncated 2.34 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue P

Build failed in Jenkins: kafka-trunk-jdk10 #618

2018-10-10 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-4932: Update docs for KIP-206 (#5769)

--
[...truncated 2.35 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.Top

Transitioning from Java 10 to Java 11 in Jenkins

2018-10-10 Thread Ismael Juma
Hi all,

Java 11 was released recently and Java 10 is no longer supported. Java 11
is the first LTS release since the new Java release cadence was announced.
As of today, all of our tests pass with Java 11 so it's time to transition
Jenkins builds to use Java 11 instead of Java 10. I have updated the trunk
job[1] and will update the PR job in a couple of days to give time for PRs
to be rebased to include the required commits.

Let me know if you have any questions.

Ismael

[1] https://builds.apache.org/job/kafka-trunk-jdk11/


KAFKA-6654 custom SSLContext

2018-10-10 Thread Pellerin, Clement
KAFKA-6654 correctly states that there will never be enough configuration
parameters to fully configure the SSLContext/SSLSocketFactory created by Kafka.
For example, in our case we need an alias to choose the key in the keystore,
and we need an implementation of OCSP.
KAFKA-6654 suggests making the creation of the SSLContext a pluggable
implementation, perhaps by declaring an interface and passing the name of an
implementation class in a new parameter.

Many libraries solve this problem by accepting an SSLContext factory instance
from the application. How about passing the instance as the value of a runtime
configuration parameter? If that parameter is set, all other ssl.* parameters
would be ignored. Obviously, this parameter could only be set programmatically.

I would like to hear the proposed solution from the Kafka maintainers.

I can help implement a patch if there is agreement on the desired solution.
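To make the proposal concrete, the contract could look roughly like the sketch below. The SslContextProvider interface, the PROVIDER field, and the "ssl.custom.alias" key are all hypothetical; no such Kafka API exists, and a real implementation would load the keystore, select the key by alias, and wire in OCSP checking rather than fall back to JVM defaults:

```java
import javax.net.ssl.SSLContext;
import java.util.Map;

public class CustomSslDemo {

    // Hypothetical plug-in point: the application hands Kafka a factory
    // instead of configuring individual ssl.* parameters.
    interface SslContextProvider {
        SSLContext createSslContext(Map<String, ?> configs);
    }

    // Stand-in implementation that just builds a default TLS context to
    // show the shape of the contract.
    static final SslContextProvider PROVIDER = configs -> {
        try {
            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(null, null, null); // placeholders for real key/trust managers
            return ctx;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    };

    public static void main(String[] args) {
        SSLContext ctx = PROVIDER.createSslContext(Map.of("ssl.custom.alias", "my-key"));
        System.out.println(ctx.getProtocol());
    }
}
```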


Build failed in Jenkins: kafka-trunk-jdk8 #3095

2018-10-10 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 379211134740268b570fc8edd59ae78df0dffee9
remote: Enumerating objects: 4698, done.
remote: Counting objects: 0% (1/4698) ... 56% (2631/4698)

Re: [DISCUSS] KIP-380: Detect outdated control requests and bounced brokers using broker generation

2018-10-10 Thread Patrick Huang
Hey Stanislav,

Sure. Thanks for your interest in this KIP. I am glad to provide more detail.

> broker A is initiating a controlled shutdown (restart). The Controller
> sends a StopReplicaRequest but it reaches broker A after it has started up
> again. He therefore stops replicating those partitions even though he
> should just be starting to

This is right.

> Controller sends a LeaderAndIsrRequest before broker A initiates a restart.
> Broker A restarts and receives the LeaderAndIsrRequest then. It therefore
> starts leading for the partitions sent by that request and might stop
> leading partitions that it was leading previously.
> This was well explained in the linked JIRA, but I cannot understand why
> that would happen due to my limited experience. If Broker A leads p1 and
> p2, when would a Controller send a LeaderAndIsrRequest with p1 only and not
> want Broker A to drop leadership for p2?

The root cause of the issue is that after a broker restarts, it relies on the
first LeaderAndIsrRequest to populate its partition state and to initialize the
high watermark checkpoint thread. The checkpoint thread overwrites the high
watermark checkpoint file based on the broker's in-memory partition states. In
other words, if a partition that is physically hosted by the broker is missing
from the in-memory partition state map, its high watermark is lost once the
checkpoint thread overwrites the file. (Related code:
https://github.com/apache/kafka/blob/ed3bd79633ae227ad995dafc3d9f384a5534d4e9/core/src/main/scala/kafka/server/ReplicaManager.scala#L1091)
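To make the failure mode concrete, here is a minimal sketch — plain Java with illustrative names, not actual Kafka code — of how rewriting the checkpoint file from an incomplete in-memory map drops a follower partition's high watermark:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the high-watermark checkpoint overwrite; class and
// method names are illustrative, not Kafka's actual implementation.
public class HwCheckpointSketch {

    // The checkpoint thread rewrites the whole file from the broker's
    // in-memory partition states, dropping anything not in the map.
    public static Map<String, Long> rewriteCheckpoint(Map<String, Long> inMemoryStates) {
        return new HashMap<>(inMemoryStates);
    }

    public static void main(String[] args) {
        // On disk before the bounce: high watermarks for every hosted partition.
        Map<String, Long> checkpointFile = new HashMap<>();
        checkpointFile.put("topicA-0", 100L); // broker leads this partition
        checkpointFile.put("topicB-0", 200L); // broker follows this partition

        // The first LeaderAndIsrRequest after startup (sent by the controlled
        // shutdown logic) only carries the partitions the broker leads.
        Map<String, Long> inMemoryStates = new HashMap<>();
        inMemoryStates.put("topicA-0", 100L);

        checkpointFile = rewriteCheckpoint(inMemoryStates);
        // topicB-0's high watermark is now lost:
        System.out.println(checkpointFile.containsKey("topicB-0")); // prints false
    }
}
```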


In your example, assume the first LeaderAndIsrRequest broker A receives is the 
one initiated in the controlled shutdown logic in Controller to move leadership 
away from broker A. This LeaderAndIsrRequest only contains partitions that 
broker A leads, not all the partitions that broker A hosts (i.e. no follower 
partitions), so the highwater mark for the follower partitions will be lost. 
Also, the first LeaderAndIsrRequest broker A receives may not necessarily be the
one initiated in the controlled shutdown logic (e.g. there can be an ongoing
preferred leader election), although I think this may not be very common.

> Here the controller will start processing the BrokerChange event (that says
> that broker A shutdown) after the broker has come back up and re-registered
> himself in ZK?
> How will the Controller miss the restart, won't he subsequently receive
> another ZK event saying that broker A has come back up?
The controller will not miss the BrokerChange event; in fact there will be two
BrokerChange events fired in this case (one for broker deregistration in zk and
one for registration). However, when processing a BrokerChange event, the
controller needs to read from zookeeper to get the current brokers in the
cluster, and if the bounced broker has already rejoined the cluster by that
time, the controller will not know the broker has been bounced because it sees
no diff between zk and its in-memory cache. So both BrokerChange events are
effectively processed as no-ops.
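For illustration, here is a rough sketch of why a plain ID diff misses a quick bounce while a per-registration generation (as KIP-380 proposes, e.g. derived from the broker znode's czxid) catches it; all names below are hypothetical:

```java
import java.util.Map;

// Illustrative sketch: comparing broker IDs alone vs. (ID, generation) pairs.
public class BounceDetectionSketch {

    // Set-diff view: a quick bounce leaves the broker ID present in both
    // ZooKeeper and the controller cache, so nothing looks changed.
    public static boolean bounceDetectedById(Map<Integer, Long> cached,
                                             Map<Integer, Long> currentZk) {
        return !cached.keySet().equals(currentZk.keySet());
    }

    // Generation-aware view: re-registration bumps the generation, so the
    // bounce is visible even when the ID set is unchanged.
    public static boolean bounceDetectedByGeneration(Map<Integer, Long> cached,
                                                     Map<Integer, Long> currentZk) {
        return !cached.equals(currentZk);
    }

    public static void main(String[] args) {
        Map<Integer, Long> controllerCache = Map.of(1, 41L); // broker 1, generation 41
        Map<Integer, Long> zkAfterBounce   = Map.of(1, 42L); // re-registered: generation 42

        System.out.println(bounceDetectedById(controllerCache, zkAfterBounce));         // false -> bounce missed
        System.out.println(bounceDetectedByGeneration(controllerCache, zkAfterBounce)); // true  -> bounce caught
    }
}
```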


Hope that answers your questions. Feel free to follow up if I am missing
something.


Thanks,
Zhanxiang (Patrick) Huang


From: Stanislav Kozlovski 
Sent: Wednesday, October 10, 2018 7:22
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-380: Detect outdated control requests and bounced 
brokers using broker generation

Hi Patrick,

Thanks for the KIP! Fixing such correctness issues is always very welcome -
they're commonly hard to diagnose and debug when they happen in production.

I was wondering if I understood the potential correctness issues correctly.
Here is what I got:


   - If a broker bounces during controlled shutdown, the bounced broker may
   accidentally process its earlier generation’s StopReplicaRequest sent from
   the active controller for one of its follower replicas, leaving the replica
   offline while its remaining replicas may stay online

broker A is initiating a controlled shutdown (restart). The Controller
sends a StopReplicaRequest but it reaches broker A after it has started up
again. He therefore stops replicating those partitions even though he
should just be starting to


   - If the first LeaderAndIsrRequest that a broker processes is sent by
   the active controller before its startup, the broker will overwrite the
   high watermark checkpoint file and may cause incorrect truncation (
   KAFKA-7235 

Build failed in Jenkins: kafka-trunk-jdk8 #3094

2018-10-10 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 379211134740268b570fc8edd59ae78df0dffee9
remote: Enumerating objects: 4698, done.

Re: [VOTE] KIP-291: Have separate queues for control requests and data requests

2018-10-10 Thread Lucas Wang
Thanks for your review, Joel and Dong.
I've updated the KIP according to Dong's last comments.

Cheers!
Lucas

On Tue, Oct 9, 2018 at 10:06 PM Dong Lin  wrote:

> Hey Lucas,
>
> Thanks for the KIP. Looks good overall. +1
>
> I have two trivial comments which may be a bit useful to reader.
>
> - Can we include the default value for the new config in Public Interface
> section? Typically the default value of the new config is an important part
> of public interface and we usually specify it in the KIP's public interface
> section.
> - Can we change "whose default capacity is 20" to  "whose capacity is 20"
> in the section "How are controller requests handled over the dedicated
> connections"? The use of word "default" seems to suggest that this is
> configurable.
>
> Thanks,
> Dong
>
> On Mon, Jun 18, 2018 at 1:04 PM Lucas Wang  wrote:
>
> > Hi All,
> >
> > I've addressed a couple of comments in the discussion thread for KIP-291,
> > and
> > got no objections after making the changes. Therefore I would like to
> start
> > the voting thread.
> >
> > KIP:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-291%3A+Have+separate+queues+for+control+requests+and+data+requests
> >
> > Thanks for your time!
> > Lucas
> >
>


[jira] [Resolved] (KAFKA-7495) AdminClient thread dies on invalid input

2018-10-10 Thread Colin P. McCabe (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe resolved KAFKA-7495.

Resolution: Duplicate

> AdminClient thread dies on invalid input
> 
>
> Key: KAFKA-7495
> URL: https://issues.apache.org/jira/browse/KAFKA-7495
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Xavier Léauté
>Priority: Major
>
> The following code results in an uncaught IllegalArgumentException in the 
> admin client thread, resulting in a zombie admin client.
> {code}
> AclBindingFilter aclFilter = new AclBindingFilter(
> new ResourcePatternFilter(ResourceType.UNKNOWN, null, PatternType.ANY),
> AccessControlEntryFilter.ANY
> );
> kafkaAdminClient.describeAcls(aclFilter).values().get();
> {code}
> See the resulting stacktrace below
> {code}
> ERROR [kafka-admin-client-thread | adminclient-3] Uncaught exception in 
> thread 'kafka-admin-client-thread | adminclient-3': 
> (org.apache.kafka.common.utils.KafkaThread)
> java.lang.IllegalArgumentException: Filter contain UNKNOWN elements
> at 
> org.apache.kafka.common.requests.DescribeAclsRequest.validate(DescribeAclsRequest.java:140)
> at 
> org.apache.kafka.common.requests.DescribeAclsRequest.<init>(DescribeAclsRequest.java:92)
> at 
> org.apache.kafka.common.requests.DescribeAclsRequest$Builder.build(DescribeAclsRequest.java:77)
> at 
> org.apache.kafka.common.requests.DescribeAclsRequest$Builder.build(DescribeAclsRequest.java:67)
> at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:450)
> at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:411)
> at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:910)
> at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1107)
> at java.base/java.lang.Thread.run(Thread.java:844)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7496) KafkaAdminClient#describeAcls should handle invalid filters gracefully

2018-10-10 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-7496:
--

 Summary: KafkaAdminClient#describeAcls should handle invalid 
filters gracefully
 Key: KAFKA-7496
 URL: https://issues.apache.org/jira/browse/KAFKA-7496
 Project: Kafka
  Issue Type: Bug
  Components: admin
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


KafkaAdminClient#describeAcls should handle invalid filters gracefully.  
Specifically, it should return a future which yields an exception.
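For illustration, a hedged sketch of the gracefully-failing shape this calls for (using java.util.concurrent directly; the method name and structure are illustrative, not the actual KafkaAdminClient patch): validate up front and return a future completed exceptionally, instead of letting validation throw inside the background thread.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

// Illustrative only: validate the filter before handing work to the client's
// background thread, so callers get a failed future rather than a dead client.
public class GracefulDescribeSketch {

    public static CompletableFuture<String> describe(boolean filterHasUnknownParts) {
        if (filterHasUnknownParts) {
            CompletableFuture<String> failed = new CompletableFuture<>();
            failed.completeExceptionally(
                new IllegalArgumentException("Filter contains UNKNOWN elements"));
            return failed; // the background thread never sees the bad input
        }
        return CompletableFuture.completedFuture("acls...");
    }

    public static void main(String[] args) {
        try {
            describe(true).get();
        } catch (ExecutionException | InterruptedException e) {
            // The caller observes the failure; the client thread stays alive.
            System.out.println("future failed as expected: " + e.getCause().getMessage());
        }
    }
}
```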



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7495) AdminClient thread dies on invalid input

2018-10-10 Thread JIRA
Xavier Léauté created KAFKA-7495:


 Summary: AdminClient thread dies on invalid input
 Key: KAFKA-7495
 URL: https://issues.apache.org/jira/browse/KAFKA-7495
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Xavier Léauté


The following code results in an uncaught IllegalArgumentException in the admin 
client thread, resulting in a zombie admin client.

{code}
AclBindingFilter aclFilter = new AclBindingFilter(
new ResourcePatternFilter(ResourceType.UNKNOWN, null, PatternType.ANY),
AccessControlEntryFilter.ANY
);
kafkaAdminClient.describeAcls(aclFilter).values().get();
{code}

See the resulting stacktrace below
{code}
ERROR [kafka-admin-client-thread | adminclient-3] Uncaught exception in thread 
'kafka-admin-client-thread | adminclient-3': 
(org.apache.kafka.common.utils.KafkaThread)
java.lang.IllegalArgumentException: Filter contain UNKNOWN elements
at 
org.apache.kafka.common.requests.DescribeAclsRequest.validate(DescribeAclsRequest.java:140)
at 
org.apache.kafka.common.requests.DescribeAclsRequest.<init>(DescribeAclsRequest.java:92)
at 
org.apache.kafka.common.requests.DescribeAclsRequest$Builder.build(DescribeAclsRequest.java:77)
at 
org.apache.kafka.common.requests.DescribeAclsRequest$Builder.build(DescribeAclsRequest.java:67)
at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:450)
at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:411)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:910)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1107)
at java.base/java.lang.Thread.run(Thread.java:844)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS]KIP-216: IQ should throw different exceptions for different errors

2018-10-10 Thread vito jeng
Hi John,

Thanks for reviewing the KIP.

> I didn't follow the addition of a new method to the QueryableStoreType
> interface. Can you elaborate why this is necessary to support the new
> exception types?

To support the new exception types, I would check the stream state in the
following classes:
  - CompositeReadOnlyKeyValueStore class
  - CompositeReadOnlySessionStore class
  - CompositeReadOnlyWindowStore class
  - DelegatingPeekingKeyValueIterator class

It is also necessary to keep backward compatibility, so I plan to pass the
streams instance to the QueryableStoreType instance when KafkaStreams#store()
is invoked. That looks like the simplest way to me.

That is why I added a new method to the QueryableStoreType interface. I
understand that we should try to avoid adding public API methods, but at the
moment I have no better ideas.

Any thoughts?


> Also, looking over your KIP again, it seems valuable to introduce
> "retriable store exception" and "fatal store exception" marker interfaces
> that the various exceptions can mix in. It would be nice from a usability
> perspective to be able to just log and retry on any "retriable" exception
> and log and shutdown on any fatal exception.

I agree that this is valuable to the user.
I'll update the KIP.
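For illustration, the marker-interface idea could look roughly like this; every name below is hypothetical, not the KIP's final API:

```java
// Hypothetical marker hierarchy for IQ store exceptions; names are
// illustrative, not the KIP's final public API.
public class StoreExceptionSketch {

    interface RetriableStoreException {}
    interface FatalStoreException {}

    static class StreamThreadRebalancingException extends RuntimeException
            implements RetriableStoreException {
        StreamThreadRebalancingException(String msg) { super(msg); }
    }

    static class StreamThreadNotRunningException extends RuntimeException
            implements FatalStoreException {
        StreamThreadNotRunningException(String msg) { super(msg); }
    }

    // Caller-side handling: retry on retriable, shut down on fatal,
    // rethrow anything unclassified.
    static String handle(RuntimeException e) {
        if (e instanceof RetriableStoreException) return "retry";
        if (e instanceof FatalStoreException) return "shutdown";
        throw e;
    }

    public static void main(String[] args) {
        System.out.println(handle(new StreamThreadRebalancingException("rebalancing"))); // retry
        System.out.println(handle(new StreamThreadNotRunningException("dead")));         // shutdown
    }
}
```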


Thanks


---
Vito


On Tue, Oct 9, 2018 at 2:30 AM John Roesler  wrote:

> Hi Vito,
>
> I'm glad to hear you're well again!
>
> I didn't follow the addition of a new method to the QueryableStoreType
> interface. Can you elaborate why this is necessary to support the new
> exception types?
>
> Also, looking over your KIP again, it seems valuable to introduce
> "retriable store exception" and "fatal store exception" marker interfaces
> that the various exceptions can mix in. It would be nice from a usability
> perspective to be able to just log and retry on any "retriable" exception
> and log and shutdown on any fatal exception.
>
> Thanks,
> -John
>
> On Fri, Oct 5, 2018 at 11:47 AM Guozhang Wang  wrote:
>
> > Thanks for the explanation, that makes sense.
> >
> >
> > Guozhang
> >
> >
> > On Mon, Jun 25, 2018 at 2:28 PM, Matthias J. Sax 
> > wrote:
> >
> > > The scenario I had I mind was, that KS is started in one thread while a
> > > second thread has a reference to the object to issue queries.
> > >
> > > If a query is issue before the "main thread" started KS, and the "query
> > > thread" knows that it will eventually get started, it can retry. On the
> > > other hand, if KS is in state PENDING_SHUTDOWN or DEAD, it is
> impossible
> > > to issue any query against it now or in the future and thus the error
> is
> > > not retryable.
> > >
> > >
> > > -Matthias
> > >
> > > On 6/25/18 10:15 AM, Guozhang Wang wrote:
> > > > I'm wondering if StreamThreadNotStarted could be merged into
> > > > StreamThreadNotRunning, because I think users' handling logic for the
> > > third
> > > > case would be likely the same as the second. Do you have some
> scenarios
> > > > where users may want to handle them differently?
> > > >
> > > > Guozhang
> > > >
> > > > On Sun, Jun 24, 2018 at 5:25 PM, Matthias J. Sax <
> > matth...@confluent.io>
> > > > wrote:
> > > >
> > > >> Sorry to hear! Get well soon!
> > > >>
> > > >> It's not a big deal if the KIP stalls a little bit. Feel free to
> pick
> > it
> > > >> up again when you find time.
> > > >>
> > > > Is `StreamThreadNotRunningException` really an retryable error?
> > > 
> > >  When KafkaStream state is REBALANCING, I think it is a retryable
> > > error.
> > > 
> > >  StreamThreadStateStoreProvider#stores() will throw
> > >  StreamThreadNotRunningException when StreamThread state is not
> > > >> RUNNING. The
> > >  user can retry until KafkaStream state is RUNNING.
> > > >>
> > > >> I see. If this is the intention, than I would suggest to have two
> (or
> > > >> maybe three) different exceptions:
> > > >>
> > > >>  - StreamThreadRebalancingException (retryable)
> > > >>  - StreamThreadNotRunning (not retryable -- thrown if in state
> > > >> PENDING_SHUTDOWN or DEAD
> > > >>  - maybe StreamThreadNotStarted (for state CREATED)
> > > >>
> > > >> The last one is tricky and could also be merged into one of the
> first
> > > >> two, depending if you want to argue that it's retryable or not.
> (Just
> > > >> food for though -- not sure what others think.)
> > > >>
> > > >>
> > > >>
> > > >> -Matthias
> > > >>
> > > >> On 6/22/18 8:06 AM, vito jeng wrote:
> > > >>> Matthias,
> > > >>>
> > > >>> Thank you for your assistance.
> > > >>>
> > >  what is the status of this KIP?
> > > >>>
> > > >>> Unfortunately, there is no further progress.
> > > >>> About seven weeks ago, I was injured in sports. I had a broken
> wrist
> > on
> > > >>> my left wrist.
> > > >>> Many jobs are affected, including this KIP and implementation.
> > > >>>
> > > >>>
> > >  I just re-read it, and have a couple of follow up comments. Why do
> > we
> > >  discuss the internal exceptions you want to add? Also, do we
> really
> > > ne

[jira] [Resolved] (KAFKA-7307) Upgrade EasyMock for Java 11 support

2018-10-10 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-7307.

Resolution: Not A Problem

We migrated to mockito with full Java 11 support in the clients module 
(KAFKA-7439) and updated a few tests elsewhere 
(https://github.com/apache/kafka/pull/5771) to work with Java 11 by various 
means. As such, we don't need a new version of EasyMock for Java 11 support.

> Upgrade EasyMock for Java 11 support
> 
>
> Key: KAFKA-7307
> URL: https://issues.apache.org/jira/browse/KAFKA-7307
> Project: Kafka
>  Issue Type: Sub-task
>  Components: packaging
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Major
> Fix For: 2.2.0
>
>
> A new version of EasyMock with ASM7_EXPERIMENTAL (or ASM7 once Java 11 ships) 
> enabled is needed: https://github.com/easymock/easymock/issues/224
> EasyMock shades its dependencies, so they can't be upgraded independently.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7494) Update Jenkins jobs to use Java 11 instead of Java 10

2018-10-10 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-7494:
--

 Summary: Update Jenkins jobs to use Java 11 instead of Java 10
 Key: KAFKA-7494
 URL: https://issues.apache.org/jira/browse/KAFKA-7494
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ismael Juma
Assignee: Ismael Juma


Will update kafka-trunk first and the PR job in a few days to allow people to 
include the commits needed for it to work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-378: Enable Dependency Injection for Kafka Streams handlers

2018-10-10 Thread Damian Guy
Hi Wladimir,

Of the two approaches in the KIP, I feel the second approach is cleaner.
However, am I correct in assuming that you want to have the
`ConfiguredStreamsFactory` as a ctor arg in `StreamsConfig` so that Spring
can inject it for you?

Otherwise you could just put the ApplicationContext as a property in the
config and then use that via the configure method of the appropriate
handler to get your actual handler.
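As an illustration of that alternative, here is a minimal sketch; "AppContext" stands in for Spring's ApplicationContext, and the Configurable shape is only modeled here, so none of these names are real Kafka or Spring APIs:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of resolving a DI-managed handler inside configure().
// "AppContext" stands in for Spring's ApplicationContext; none of these
// names are real Kafka or Spring APIs.
public class ConfigureLookupSketch {

    interface AppContext { Object getBean(String name); }

    // Mirrors the shape of Kafka's Configurable callback.
    interface Configurable { void configure(Map<String, ?> configs); }

    static class DelegatingExceptionHandler implements Configurable {
        Object delegate;

        @Override
        public void configure(Map<String, ?> configs) {
            // Pull the injector out of the config map and look up the real handler.
            AppContext ctx = (AppContext) configs.get("application.context");
            delegate = ctx.getBean("exceptionHandler");
        }
    }

    public static void main(String[] args) {
        AppContext ctx = name -> "spring-managed-" + name; // stand-in container

        Map<String, Object> configs = new HashMap<>();
        configs.put("application.context", ctx);

        DelegatingExceptionHandler handler = new DelegatingExceptionHandler();
        handler.configure(configs);
        System.out.println(handler.delegate); // prints spring-managed-exceptionHandler
    }
}
```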

Thanks,
Damian

On Tue, 9 Oct 2018 at 01:55, Guozhang Wang  wrote:

> John, thanks for the explanation, now it makes much more sense to me.
>
> As for the concrete approach, to me it seems the first option requires less
> changes than the second (ConfiguredStreamsFactory based) approach, whereas
> the second one requires an additional interface that is overlapping with
> the AbstractConfig.
>
> I'm aware that in KafkaProducer / KafkaConsumer we do not have public
> constructors for taking a ProducerConfig or ConsumerConfig directly, and
> anyone using Spring can share how you've worked around it by far? If it is
> very awkward I'm not against just adding the XXXConfigs to the constructors
> directly.
>
> Guozhang
>
> On Fri, Oct 5, 2018 at 1:48 PM, John Roesler  wrote:
>
> > Hi Wladimir,
> >
> > Thanks for the KIP!
> >
> > As I mentioned in the PR discussion, I personally prefer not to recommend
> > overriding StreamsConfig for this purpose.
> >
> > It seems like a person wishing to create a DI shim would have to acquire
> > quite a deep understanding of the class and its usage to figure out what
> > exactly to override to accomplish their goals without breaking
> everything.
> > I'm honestly impressed with the method you came up with to create your
> > Spring/Streams shim.
> >
> > I think we can make to path for the next person smoother by going with
> > something more akin to the ConfiguredStreamsFactory. This is a
> constrained
> > interface that tells you exactly what you have to implement to create
> such
> > a shim.
> >
> > A few thoughts:
> > 1. it seems like we can keep all the deprecated constructors still
> > deprecated
> >
> > 2. we could add just one additional constructor to each of KafkaStreams
> and
> > TopologyTestDriver to still take a Properties, but also your new
> > ConfiguredStreamsFactory
> >
> > 3. I don't know if I'm sold on the name ConfiguredStreamsFactory, since
> it
> > does not produce configured streams. Instead, it produces configured
> > instances... How about ConfiguredInstanceFactory?
> >
> > 4. if I understand the usage correctly, it's actually a pretty small
> number
> > of classes that we actually make via getConfiguredInstance. Offhand, I
> can
> > think of the key/value Serdes, the deserialization exception handler, and
> > the production exception handler.
> > Perhaps, instead of maintaining a generic "class instantiator", we could
> > explore a factory interface that just has methods for creating exactly
> the
> > kinds of things we need to create. In fact, we already have something
> like
> > this: org.apache.kafka.streams.KafkaClientSupplier . Do you think we
> could
> > just add some more methods to that interface (and maybe rename it)
> instead?
> >
> > Thanks,
> > -John
> >
> > On Fri, Oct 5, 2018 at 3:31 PM John Roesler  wrote:
> >
> > > Hi Guozhang,
> > >
> > > I'm going to drop in a little extra context from the preliminary PR
> > > discussion (https://github.com/apache/kafka/pull/5344).
> > >
> > > The issue isn't that it's impossible to use Streams within a Spring
> app,
> > > just that the interplay between our style of construction/configuration
> > and
> > > Spring's is somewhat awkward compared to the normal experience with
> > > dependency injection.
> > >
> > > I'm guessing users of dependency injection would not like the approach
> > you
> > > offered. I believe it's commonly considered an antipattern when using
> DI
> > > frameworks to pass the injector directly into the class being
> > constructed.
> > > Wladimir has also offered an alternative usage within the current
> > framework
> > > of injecting pre-constructed dependencies into the Properties, and then
> > > retrieving and casting them inside the configured class.
> > >
> > > It seems like this KIP is more about offering a more elegant interface
> to
> > > DI users.
> > >
> > > One of the points that Wladimir raised on his PR discussion was the
> > desire
> > > to configure the classes in a typesafe way in the constructor (thus
> > > allowing the use of immutable classes).
> > >
> > > With this KIP, it would be possible for a DI user to:
> > > 1. register a Streams-Spring or Streams-Guice (etc) "plugin" (via
> either
> > > of the mechanisms he proposed)
> > > 2. simply make the Serdes, exception handlers, etc, available on the
> > class
> > > path with the DI annotations
> > > 3. start the app
> > >
> > > There's no need to mess with passing dependencies (or the injector)
> > > through the properties.
> > >
> > > Sorry for "injecting" myself into your discussion, but it 
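For what it's worth, the narrow factory interface described in point 4 above might be sketched like this; the interface and method names are purely illustrative, and Object stands in for the real Serde and handler types:

```java
// Purely illustrative sketch of a narrow factory interface in the spirit of
// KafkaClientSupplier: explicit creation methods instead of a generic
// "instantiate this class name" mechanism, so a DI shim knows exactly
// what it must provide.
public class InstanceFactorySketch {

    interface ConfiguredInstanceFactory {
        Object keySerde();
        Object valueSerde();
        Object deserializationExceptionHandler();
        Object productionExceptionHandler();
    }

    // A DI container (Spring, Guice, ...) would back these methods with
    // beans it constructed and wired itself.
    static class DiBackedFactory implements ConfiguredInstanceFactory {
        public Object keySerde() { return "my-key-serde"; }
        public Object valueSerde() { return "my-value-serde"; }
        public Object deserializationExceptionHandler() { return "my-deser-handler"; }
        public Object productionExceptionHandler() { return "my-prod-handler"; }
    }

    public static void main(String[] args) {
        ConfiguredInstanceFactory factory = new DiBackedFactory();
        System.out.println(factory.keySerde()); // prints my-key-serde
    }
}
```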

[DISCUSS] KIP-381 Connect: Tell about records that had their offsets flushed in callback

2018-10-10 Thread Per Steffensen
Please help make the proposed changes in KIP-381 a reality. Please comment.


KIP: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-381%3A+Connect%3A+Tell+about+records+that+had+their+offsets+flushed+in+callback


JIRA: https://issues.apache.org/jira/browse/KAFKA-5716

PR: https://github.com/apache/kafka/pull/3872

Thanks!




Re: [DISCUSS] KIP-380: Detect outdated control requests and bounced brokers using broker generation

2018-10-10 Thread Stanislav Kozlovski
Hi Patrick,

Thanks for the KIP! Fixing such correctness issues is always very welcome -
they're commonly hard to diagnose and debug when they happen in production.

I was wondering if I understood the potential correctness issues correctly.
Here is what I got:


   - If a broker bounces during controlled shutdown, the bounced broker may
   accidentally process its earlier generation’s StopReplicaRequest sent from
   the active controller for one of its follower replicas, leaving the replica
   offline while its remaining replicas may stay online

broker A is initiating a controlled shutdown (restart). The Controller
sends a StopReplicaRequest but it reaches broker A after it has started up
again. He therefore stops replicating those partitions even though he
should just be starting to


   - If the first LeaderAndIsrRequest that a broker processes is sent by
   the active controller before its startup, the broker will overwrite the
   high watermark checkpoint file and may cause incorrect truncation (
   KAFKA-7235 )

Controller sends a LeaderAndIsrRequest before broker A initiates a restart.
Broker A restarts and receives the LeaderAndIsrRequest then. It therefore
starts leading for the partitions sent by that request and might stop
leading partitions that it was leading previously.
This was well explained in the linked JIRA, but I cannot understand why
that would happen due to my limited experience. If Broker A leads p1 and
p2, when would a Controller send a LeaderAndIsrRequest with p1 only and not
want Broker A to drop leadership for p2?


   - If a broker bounces very quickly, the controller may start processing
   the BrokerChange event after the broker already re-registers itself in zk.
   In this case, controller will miss the broker restart and will not send any
   requests to the broker for initialization. The broker will not be able to
   accept traffics.

Here the controller will start processing the BrokerChange event (that says
that broker A shutdown) after the broker has come back up and re-registered
himself in ZK?
How will the Controller miss the restart, won't he subsequently receive
another ZK event saying that broker A has come back up?


Could we explain these potential problems in a bit more detail just so they
could be more easily digestable by novices?

Thanks,
Stanislav

On Wed, Oct 10, 2018 at 9:21 AM Dong Lin  wrote:

> Hey Patrick,
>
> Thanks much for the KIP. The KIP is very well written.
>
> LGTM.  +1 (binding)
>
> Thanks,
> Dong
>
>
> On Tue, Oct 9, 2018 at 11:46 PM Patrick Huang  wrote:
>
> > Hi All,
> >
> > Please find the below KIP which proposes the concept of broker generation
> > to resolve issues caused by controller missing broker state changes and
> > broker processing outdated control requests.
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-380%3A+Detect+outdated+control+requests+and+bounced+brokers+using+broker+generation
> >
> > All comments are appreciated.
> >
> > Best,
> > Zhanxiang (Patrick) Huang
> >
>


-- 
Best,
Stanislav


Re: [DISCUSS] KIP-380: Detect outdated control requests and bounced brokers using broker generation

2018-10-10 Thread Dong Lin
Hey Patrick,

Thanks much for the KIP. The KIP is very well written.

LGTM.  +1 (binding)

Thanks,
Dong


On Tue, Oct 9, 2018 at 11:46 PM Patrick Huang  wrote:

> Hi All,
>
> Please find the below KIP which proposes the concept of broker generation
> to resolve issues caused by controller missing broker state changes and
> broker processing outdated control requests.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-380%3A+Detect+outdated+control+requests+and+bounced+brokers+using+broker+generation
>
> All comments are appreciated.
>
> Best,
> Zhanxiang (Patrick) Huang
>