Build failed in Jenkins: kafka-trunk-jdk8 #2379

2018-01-31 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix brokerId passed to metrics reporters (#4497)

--
[...truncated 3.47 MB...]
kafka.utils.json.JsonValueTest > testDecodeString STARTED

kafka.utils.json.JsonValueTest > testDecodeString PASSED

kafka.utils.json.JsonValueTest > testJsonValueToString STARTED

kafka.utils.json.JsonValueTest > testJsonValueToString PASSED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArray STARTED

kafka.utils.json.JsonValueTest > testAsJsonArray PASSED

kafka.utils.json.JsonValueTest > testJsonValueHashCode STARTED

kafka.utils.json.JsonValueTest > testJsonValueHashCode PASSED

kafka.utils.json.JsonValueTest > testDecodeInt STARTED

kafka.utils.json.JsonValueTest > testDecodeInt PASSED

kafka.utils.json.JsonValueTest > testDecodeMap STARTED

kafka.utils.json.JsonValueTest > testDecodeMap PASSED

kafka.utils.json.JsonValueTest > testDecodeSeq STARTED

kafka.utils.json.JsonValueTest > testDecodeSeq PASSED

kafka.utils.json.JsonValueTest > testJsonObjectGet STARTED

kafka.utils.json.JsonValueTest > testJsonObjectGet PASSED

kafka.utils.json.JsonValueTest > testJsonValueEquals STARTED

kafka.utils.json.JsonValueTest > testJsonValueEquals PASSED

kafka.utils.json.JsonValueTest > testJsonArrayIterator STARTED

kafka.utils.json.JsonValueTest > testJsonArrayIterator PASSED

kafka.utils.json.JsonValueTest > testJsonObjectApply STARTED

kafka.utils.json.JsonValueTest > testJsonObjectApply PASSED

kafka.utils.json.JsonValueTest > testDecodeBoolean STARTED

kafka.utils.json.JsonValueTest > testDecodeBoolean PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic STARTED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired STARTED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents STARTED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize STARTED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents STARTED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize STARTED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner STARTED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration STARTED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition STARTED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker STARTED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed STARTED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer STARTED

kafka.producer.AsyncProducerTest > testJavaProducer PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder STARTED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED

kafka.producer.SyncProducerTest > testReachableServer STARTED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas STARTED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout STARTED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse STARTED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest STARTED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse STARTED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse PASSED

kafka.producer.ProducerTest > testSendToNewTopic STARTED

kafka.producer.ProducerTest > testSendToNewTopic PASSED

kafka.producer.ProducerTest > testAsyncSendCanCorrectlyFailWithTimeout STARTED

kafka.producer.ProducerTest > testAsyncSendCanCorrectlyFailWithTimeout PASSED


Jenkins build is back to normal : kafka-trunk-jdk9 #351

2018-01-31 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk7 #3139

2018-01-31 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix brokerId passed to metrics reporters (#4497)

--
[...truncated 412.11 KB...]
kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldNotAllowDivergentLogs STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldNotAllowDivergentLogs PASSED

kafka.server.MultipleListenersWithAdditionalJaasContextTest > 
testProduceConsume STARTED

kafka.server.MultipleListenersWithAdditionalJaasContextTest > 
testProduceConsume PASSED

kafka.server.OffsetCommitTest > testUpdateOffsets STARTED

kafka.server.OffsetCommitTest > testUpdateOffsets PASSED

kafka.server.OffsetCommitTest > testLargeMetadataPayload STARTED

kafka.server.OffsetCommitTest > testLargeMetadataPayload PASSED

kafka.server.OffsetCommitTest > testOffsetsDeleteAfterTopicDeletion STARTED

kafka.server.OffsetCommitTest > testOffsetsDeleteAfterTopicDeletion PASSED

kafka.server.OffsetCommitTest > testCommitAndFetchOffsets STARTED

kafka.server.OffsetCommitTest > testCommitAndFetchOffsets PASSED

kafka.server.OffsetCommitTest > testOffsetExpiration STARTED

kafka.server.OffsetCommitTest > testOffsetExpiration PASSED

kafka.server.OffsetCommitTest > testNonExistingTopicOffsetCommit STARTED

kafka.server.OffsetCommitTest > testNonExistingTopicOffsetCommit PASSED

kafka.server.ReplicationQuotasTest > 
shouldBootstrapTwoBrokersWithLeaderThrottle STARTED

kafka.server.ReplicationQuotasTest > 
shouldBootstrapTwoBrokersWithLeaderThrottle PASSED

kafka.server.ReplicationQuotasTest > shouldThrottleOldSegments STARTED

kafka.server.ReplicationQuotasTest > shouldThrottleOldSegments PASSED

kafka.server.ReplicationQuotasTest > 
shouldBootstrapTwoBrokersWithFollowerThrottle STARTED

kafka.server.ReplicationQuotasTest > 
shouldBootstrapTwoBrokersWithFollowerThrottle PASSED

kafka.server.EdgeCaseRequestTest > testInvalidApiVersionRequest STARTED

kafka.server.EdgeCaseRequestTest > testInvalidApiVersionRequest PASSED

kafka.server.EdgeCaseRequestTest > testMalformedHeaderRequest STARTED

kafka.server.EdgeCaseRequestTest > testMalformedHeaderRequest PASSED

kafka.server.EdgeCaseRequestTest > testProduceRequestWithNullClientId STARTED

kafka.server.EdgeCaseRequestTest > testProduceRequestWithNullClientId PASSED

kafka.server.EdgeCaseRequestTest > testInvalidApiKeyRequest STARTED

kafka.server.EdgeCaseRequestTest > testInvalidApiKeyRequest PASSED

kafka.server.EdgeCaseRequestTest > testHeaderOnlyRequest STARTED

kafka.server.EdgeCaseRequestTest > testHeaderOnlyRequest PASSED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion STARTED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion PASSED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestBeforeSaslHandshakeRequest 

Build failed in Jenkins: kafka-trunk-jdk7 #3138

2018-01-31 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H27 (ubuntu xenial) in workspace 

java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1170)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1200)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
Retrying after 10 seconds
java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1170)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1200)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
Retrying after 10 seconds
java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1170)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1200)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
[FINDBUGS] Collecting findbugs analysis files...
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
[FINDBUGS] Computing warning deltas based on reference build #3130
Recording test results
ERROR: Build step failed with exception
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to H27
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1693)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:310)
at hudson.remoting.Channel.call(Channel.java:908)
at hudson.FilePath.act(FilePath.java:986)
at hudson.FilePath.act(FilePath.java:975)
at 
hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:114)
at 
hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:136)
at 
hudson.tasks.junit.JUnitResultArchiver.parseAndAttach(JUnitResultArchiver.java:166)
at 
hudson.tasks.junit.JUnitResultArchiver.perform(JUnitResultArchiver.java:153)
at 
hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
at 
hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
at hudson.model.Build$BuildExecution.post2(Build.java:186)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
at hudson.model.Run.execute(Run.java:1749)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at 
hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
 does not exist.
at 
org.apache.tools.ant.types.AbstractFileSet.getDirectoryScanner(AbstractFileSet.java:483)
at 
org.apache.tools.ant.types.AbstractFileSet.getDirectoryScanner(AbstractFileSet.java:460)
at 

Build failed in Jenkins: kafka-trunk-jdk8 #2377

2018-01-31 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H27 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:825)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1092)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1123)
at hudson.scm.SCM.checkout(SCM.java:495)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://github.com/apache/kafka.git" returned status code 4:
stdout: 
stderr: error: failed to write new configuration file 


at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1970)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1938)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1934)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1572)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1584)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:1218)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor237.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:922)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:896)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:853)
at hudson.remoting.UserRequest.perform(UserRequest.java:207)
at hudson.remoting.UserRequest.perform(UserRequest.java:53)
at hudson.remoting.Request$2.run(Request.java:358)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to 
H27
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1693)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:310)
at hudson.remoting.Channel.call(Channel.java:908)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:281)
at com.sun.proxy.$Proxy109.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:295)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:813)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1092)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1123)
at hudson.scm.SCM.checkout(SCM.java:495)
at 
hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at 
jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at 

Build failed in Jenkins: kafka-trunk-jdk7 #3137

2018-01-31 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6275: Add DeleteGroups API (KIP-229) (#4479)

[mjsax] KAFKA-6138 Simplify StreamsBuilder#addGlobalStore (#4430)

--
[...truncated 235.77 KB...]

kafka.api.AuthorizerIntegrationTest > testDeleteGroupApiWithNoDeleteGroupAcl2 
PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionNotMatchingInternalTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionNotMatchingInternalTopic PASSED

kafka.api.AuthorizerIntegrationTest > testFetchAllOffsetsTopicAuthorization 
STARTED

kafka.api.AuthorizerIntegrationTest > testFetchAllOffsetsTopicAuthorization 
PASSED

kafka.api.AuthorizerIntegrationTest > 
shouldSendSuccessfullyWhenIdempotentAndHasCorrectACL STARTED

kafka.api.AuthorizerIntegrationTest > 
shouldSendSuccessfullyWhenIdempotentAndHasCorrectACL PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerTopicAuthorizationExceptionInSendCallback STARTED

kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerTopicAuthorizationExceptionInSendCallback PASSED

kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoWriteTransactionalIdAcl STARTED

kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoWriteTransactionalIdAcl PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedCreatePartitions STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedCreatePartitions PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithoutTopicDescribeAccess 
STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithoutTopicDescribeAccess 
PASSED

kafka.api.AuthorizerIntegrationTest > 
shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessOnEndTransaction
 STARTED

kafka.api.AuthorizerIntegrationTest > 
shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessOnEndTransaction
 PASSED

kafka.api.AuthorizerIntegrationTest > 
shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessOnSendOffsetsToTxn
 STARTED

kafka.api.AuthorizerIntegrationTest > 
shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessOnSendOffsetsToTxn
 PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoDescribeTransactionalIdAcl STARTED

kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoDescribeTransactionalIdAcl PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithDescribe 
STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithDescribe 
PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testAuthorizationWithTopicExisting STARTED

kafka.api.AuthorizerIntegrationTest > testAuthorizationWithTopicExisting PASSED

kafka.api.AuthorizerIntegrationTest > 
testUnauthorizedDeleteRecordsWithoutDescribe STARTED

kafka.api.AuthorizerIntegrationTest > 
testUnauthorizedDeleteRecordsWithoutDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testDescribeGroupApiWithNoGroupAcl STARTED

kafka.api.AuthorizerIntegrationTest > testDescribeGroupApiWithNoGroupAcl PASSED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionMatchingInternalTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionMatchingInternalTopic PASSED

kafka.api.AuthorizerIntegrationTest > 
testSendOffsetsWithNoConsumerGroupDescribeAccess STARTED

kafka.api.AuthorizerIntegrationTest > 
testSendOffsetsWithNoConsumerGroupDescribeAccess PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > 

[jira] [Created] (KAFKA-6514) Add API version as a tag for the RequestsPerSec metric

2018-01-31 Thread Allen Wang (JIRA)
Allen Wang created KAFKA-6514:
-

 Summary: Add API version as a tag for the RequestsPerSec metric
 Key: KAFKA-6514
 URL: https://issues.apache.org/jira/browse/KAFKA-6514
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 1.0.0
Reporter: Allen Wang


After we upgrade the broker to a new version, one important insight is how 
many clients have been upgraded, so that we can switch the message format once 
most of the clients are on the new version and thereby minimize the 
performance penalty. 

RequestsPerSec with the version tag will give us that insight.
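
For illustration, a minimal self-contained sketch of the idea (plain Java, not 
the broker's metrics code; all names here are hypothetical): keep one counter 
per (api, apiVersion) pair, which is exactly what a version tag on the meter 
would give us.

{noformat}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch: count requests per (api, apiVersion) pair.
// The proposed change would instead add apiVersion as a tag on the
// existing RequestsPerSec meter.
public class VersionedRequestCounter {
    private final Map<String, LongAdder> counts = new ConcurrentHashMap<>();

    public void record(String api, short apiVersion) {
        // Fold the version into the key; a metrics tag plays this role
        // in the actual broker change.
        counts.computeIfAbsent(api + ",version=" + apiVersion,
                k -> new LongAdder()).increment();
    }

    public Map<String, LongAdder> snapshot() {
        return counts;
    }
}
{noformat}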

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6513) New Connect header support doesn't define `converter.type` property correctly

2018-01-31 Thread Randall Hauch (JIRA)
Randall Hauch created KAFKA-6513:


 Summary: New Connect header support doesn't define 
`converter.type` property correctly
 Key: KAFKA-6513
 URL: https://issues.apache.org/jira/browse/KAFKA-6513
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 1.1.0
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 1.1.0


The recent feature (KAFKA-5142) added a new {{converter.type}} property as 
part of making {{Converter}} implementations implement {{Configurable}}. 
However, the worker is not correctly setting this new property and instead 
incorrectly assumes that existing {{Converter}} configurations will supply it. 
For example:

{noformat}
Exception in thread "main" org.apache.kafka.common.config.ConfigException: 
Missing required configuration "converter.type" which has no default value.
at 
org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:472)
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:462)
at 
org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:62)
at 
org.apache.kafka.connect.storage.ConverterConfig.<init>(ConverterConfig.java:48)
at 
org.apache.kafka.connect.json.JsonConverterConfig.<init>(JsonConverterConfig.java:59)
at 
org.apache.kafka.connect.json.JsonConverter.configure(JsonConverter.java:284)
at 
org.apache.kafka.connect.runtime.isolation.Plugins.newConfiguredPlugin(Plugins.java:77)
at 
org.apache.kafka.connect.runtime.isolation.Plugins.newConverter(Plugins.java:208)
at org.apache.kafka.connect.runtime.Worker.<init>(Worker.java:107)
at 
io.confluent.connect.replicator.ReplicatorApp.config(ReplicatorApp.java:104)
at 
io.confluent.connect.replicator.ReplicatorApp.main(ReplicatorApp.java:60)
{noformat}
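
A minimal sketch of the direction a worker-side fix could take (an assumed 
shape, not the committed patch): inject the {{converter.type}} value before 
calling {{configure(...)}}, rather than expecting the user-supplied converter 
config to already contain it.

{noformat}
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.connect.json.JsonConverter;

public class ConverterTypeExample {
    public static void main(String[] args) {
        Map<String, Object> config = new HashMap<>();
        config.put("schemas.enable", "false"); // an ordinary converter setting
        // Assumed fix direction: the worker supplies converter.type itself
        // ("key", "value", or "header") before configuring the plugin.
        config.put("converter.type", "key");
        JsonConverter converter = new JsonConverter();
        converter.configure(config); // no longer hits the ConfigException above
    }
}
{noformat}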



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk8 #2376

2018-01-31 Thread Apache Jenkins Server
See 




RE: Excessive Memory Usage with Compression enabled and possible resolutions

2018-01-31 Thread Kyle Tinker
Ismael:

I'd be interested in submitting a PR for #1 & #2, but I think the fix for #4 
would be better suited to someone with a fuller understanding of Kafka.  I may 
also need a bit of help with how to structure the unit tests, if any are needed.

Do you have any recommendations for what unit tests you would want around this 
change?
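
To make the question concrete, one possible shape for such a test, as a sketch 
only (the single-argument constructor and the private field name are 
assumptions on my side, taken from the heap dump, so it would need adapting to 
the real class):

import static org.junit.Assert.assertNull;

import java.io.ByteArrayOutputStream;
import java.lang.reflect.Field;
import org.apache.kafka.common.record.KafkaLZ4BlockOutputStream;
import org.junit.Test;

public class BufferReleaseTest {
    @Test
    public void closeShouldReleaseScratchBuffers() throws Exception {
        // Build the stream, close it, then check via reflection that the
        // scratch buffer reference was dropped; this passes only once
        // fix #1 nulls the field in close().
        KafkaLZ4BlockOutputStream out =
                new KafkaLZ4BlockOutputStream(new ByteArrayOutputStream());
        out.close();
        Field buffer = KafkaLZ4BlockOutputStream.class.getDeclaredField("buffer");
        buffer.setAccessible(true);
        assertNull(buffer.get(out));
    }
}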

- Kyle

-Original Message-
From: isma...@gmail.com [mailto:isma...@gmail.com] On Behalf Of Ismael Juma
Sent: Wednesday, January 31, 2018 6:35 PM
To: dev 
Subject: Re: Excessive Memory Usage with Compression enabled and possible 
resolutions

Hi Kyle,

Are you interested in submitting a pull request?

Ismael

On Wed, Jan 31, 2018 at 3:00 PM, Kyle Tinker 
wrote:

> Ismael,
>
> I have filed https://issues.apache.org/jira/browse/KAFKA-6512 for this 
> issue.  I could not find a target version field.  Let me know if 
> you need any additional information.  I'm new to this project so hopefully 
> the format is what you were looking for.
>
> - Kyle
>
> -Original Message-
> From: Ismael Juma [mailto:isma...@gmail.com]
> Sent: Tuesday, January 30, 2018 9:01 PM
> To: dev 
> Subject: Re: Excessive Memory Usage with Compression enabled and 
> possible resolutions
>
> Thanks for the report. I haven't looked at the code, but it seems like 
> we would want to do both 1 and 2. Can you please file a JIRA with 
> 1.1.0 as the target version?
>
> Ismael
>
> On 30 Jan 2018 5:46 pm, "Kyle Tinker" 
> wrote:
>
> > I'm using Kafka 1.0.0 and the Java producer.
> >
> > I've noticed high memory usage in the producer when enabling 
> > compression (gzip or lz4).  I don't observe the behavior with 
> > compression off, but with it on I'll run out of heap (2GB).  Using a
> Java profiler, I see the data is
> > in the KafkaLZ4BlockOutputStream (or related class for gzip).   I see
> that
> > MemoryRecordsBuilder:closeForRecordAppends() is trying to deal with 
> > this, but is not successful.  I'm most likely network bottlenecked, 
> > so I expect the producer buffers to be full while the job is running 
> > and potentially a lot of unacknowledged records.
> >
> > I've tried using the default buffer.memory with 20 producers (across
> > 20
> > threads) and sending data as quickly as I can.  I've also tried 1MB 
> > of buffer.memory, which seemed to reduce memory consumption but I 
> > could still run OOM in certain cases.  I have 
> > max.in.flight.requests.per.connection
> > set to 1.  In short, I should only have ~20 MB (20* 1MB) of data in 
> > buffers, but I can easily exhaust 2000 MB used by Kafka.
> >
> > In looking at the code more, it looks like the 
> > KafkaLZ4BlockOutputStream doesn't clear the compressedBuffer or 
> > buffer when close() is called.  In my heap dump, both of those are 
> > ~65k size each, meaning that each batch is taking up ~148k of space, 
> > of which 131k
> is buffers.
> > (buffer.memory=1,000,000 and messages are 1k each until the batch fills).
> >
> > Kafka tries to manage memory usage by calling 
> > MemoryRecordsBuilder:closeForRecordAppends(),
> > which is documented as "Release resources required for record appends (e.g.
> > compression buffers)".  However, this method doesn't actually clear 
> > those buffers because KafkaLZ4BlockOutputStream.close() only writes 
> > the block and end mark and closes the output stream.  It doesn't 
> > actually clear the buffer and compressedBuffer in 
> > KafkaLZ4BlockOutputStream.  Those stay allocated in RAM until the 
> > block is acknowledged by the broker, processed in 
> > Sender:handleProduceResponse(), and the batch is deallocated.  This 
> > memory usage therefore increases, possibly without bound.  In my 
> > test program, the program died with approximately 345 unprocessed 
> > batches per
> producer (20 producers), despite having max.in.flight.requests.per.
> > connection=1.
> >
> > There are a few possible optimizations I can think of:
> > 1) We could declare KafkaLZ4BlockOutputStream.buffer and 
> > compressedBuffer as non-final and null them in the close() method
> > 2) We could declare the MemoryRecordsBuilder.appendStream non-final 
> > and null it in the closeForRecordAppends() method
> > 3) We could have the ProducerBatch discard the recordsBuilder in 
> > closeForRecordAppends(), however, this is likely a bad idea because 
> > the recordsBuilder contains significant metadata that is likely 
> > needed after the stream is closed.  It is also final.
> > 4) We could try to limit the number of non-acknowledged batches in 
> > flight.  This would bound the maximum memory usage but may 
> > negatively impact performance.
> >
> > Fix #1 would only improve the LZ4 

Re: Excessive Memory Usage with Compression enabled and possible resolutions

2018-01-31 Thread Ismael Juma
Hi Kyle,

Are you interested in submitting a pull request?

Ismael

On Wed, Jan 31, 2018 at 3:00 PM, Kyle Tinker 
wrote:

> Ismael,
>
> I have filed https://issues.apache.org/jira/browse/KAFKA-6512 for this
> issue.  I could not find a target version field.  Let me know if you need
> any additional information.  I'm new to this project so hopefully the
> format is what you were looking for.
>
> - Kyle
>
> -Original Message-
> From: Ismael Juma [mailto:isma...@gmail.com]
> Sent: Tuesday, January 30, 2018 9:01 PM
> To: dev 
> Subject: Re: Excessive Memory Usage with Compression enabled and possible
> resolutions
>
> Thanks for the report. I haven't looked at the code, but it seems like we
> would want to do both 1 and 2. Can you please file a JIRA with 1.1.0 as the
> target version?
>
> Ismael
>
> On 30 Jan 2018 5:46 pm, "Kyle Tinker" 
> wrote:
>
> > I'm using Kafka 1.0.0 and the Java producer.
> >
> > I've noticed high memory usage in the producer when enabling
> > compression (gzip or lz4).  I don't observe the behavior with
> > compression off, but with it on I'll run out of heap (2GB).  Using a
> Java profiler, I see the data is
> > in the KafkaLZ4BlockOutputStream (or related class for gzip).   I see
> that
> > MemoryRecordsBuilder:closeForRecordAppends() is trying to deal with
> > this, but is not successful.  I'm most likely network bottlenecked, so
> > I expect the producer buffers to be full while the job is running and
> > potentially a lot of unacknowledged records.
> >
> > I've tried using the default buffer.memory with 20 producers (across
> > 20
> > threads) and sending data as quickly as I can.  I've also tried 1MB of
> > buffer.memory, which seemed to reduce memory consumption but I could
> > still run OOM in certain cases.  I have
> > max.in.flight.requests.per.connection
> > set to 1.  In short, I should only have ~20 MB (20* 1MB) of data in
> > buffers, but I can easily exhaust 2000 MB used by Kafka.
> >
> > In looking at the code more, it looks like the
> > KafkaLZ4BlockOutputStream doesn't clear the compressedBuffer or buffer
> > when close() is called.  In my heap dump, both of those are ~65k size
> > each, meaning that each batch is taking up ~148k of space, of which 131k
> is buffers.
> > (buffer.memory=1,000,000 and messages are 1k each until the batch fills).
> >
> > Kafka tries to manage memory usage by calling
> > MemoryRecordsBuilder:closeForRecordAppends(),
> > which is documented as "Release resources required for record appends (e.g.
> > compression buffers)".  However, this method doesn't actually clear
> > those buffers because KafkaLZ4BlockOutputStream.close() only writes
> > the block and end mark and closes the output stream.  It doesn't
> > actually clear the buffer and compressedBuffer in
> > KafkaLZ4BlockOutputStream.  Those stay allocated in RAM until the
> > block is acknowledged by the broker, processed in
> > Sender:handleProduceResponse(), and the batch is deallocated.  This
> > memory usage therefore increases, possibly without bound.  In my test
> > program, the program died with approximately 345 unprocessed batches per
> producer (20 producers), despite having max.in.flight.requests.per.
> > connection=1.
> >
> > There are a few possible optimizations I can think of:
> > 1) We could declare KafkaLZ4BlockOutputStream.buffer and
> > compressedBuffer as non-final and null them in the close() method
> > 2) We could declare the MemoryRecordsBuilder.appendStream non-final
> > and null it in the closeForRecordAppends() method
> > 3) We could have the ProducerBatch discard the recordsBuilder in
> > closeForRecordAppends(), however, this is likely a bad idea because
> > the recordsBuilder contains significant metadata that is likely needed
> > after the stream is closed.  It is also final.
> > 4) We could try to limit the number of non-acknowledged batches in
> > flight.  This would bound the maximum memory usage but may negatively
> > impact performance.
> >
> > Fix #1 would only improve the LZ4 algorithm, and not any other
> algorithms.
> > Fix #2 would improve all algorithms, compression and otherwise.  Of
> > the 3 proposed here, it seems the best.  This would also involve
> > having to check appendStreamIsClosed in every usage of appendStream
> > within MemoryRecordsBuilder to avoid NPE's.
> >
> > Are there any thoughts or suggestions on how to proceed?
> >
> > If requested I can provide standalone testcase code demonstrating this
> > problem.
> >
> > Thanks,
> > -Kyle
> >
> >
> >
> >
> >
> >
> > This message is intended exclusively for the individual or entity to
> > which it is addressed. This communication may contain information that
> > is proprietary, privileged, confidential or otherwise legally exempt
> > from disclosure. If you are not the named addressee, or have been
> > inadvertently and erroneously referenced in the address line, you are
> > not authorized 

Build failed in Jenkins: kafka-trunk-jdk9 #350

2018-01-31 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6275: Add DeleteGroups API (KIP-229) (#4479)

[mjsax] KAFKA-6138 Simplify StreamsBuilder#addGlobalStore (#4430)

--
[...truncated 1.46 MB...]
kafka.message.ByteBufferMessageSetTest > 
testWriteToChannelThatConsumesPartially PASSED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent STARTED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.message.ByteBufferMessageSetTest > testWriteTo STARTED

kafka.message.ByteBufferMessageSetTest > testWriteTo PASSED

kafka.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.message.ByteBufferMessageSetTest > testIterator STARTED

kafka.message.ByteBufferMessageSetTest > testIterator PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics STARTED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > 
testPeriodicTokenExpiry STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > 
testPeriodicTokenExpiry PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > 
testTokenRequestsWithDelegationTokenDisabled STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > 
testTokenRequestsWithDelegationTokenDisabled PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testDescribeToken 
STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testDescribeToken 
PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testCreateToken 
STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testCreateToken 
PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testExpireToken 
STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testExpireToken 
PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testRenewToken 
STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testRenewToken 
PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED


RE: Excessive Memory Usage with Compression enabled and possible resolutions

2018-01-31 Thread Kyle Tinker
Ismael,

I have filed https://issues.apache.org/jira/browse/KAFKA-6512 for this issue.  
I could not find a target version field.  Let me know if you need any 
additional information.  I'm new to this project so hopefully the format is 
what you were looking for.

- Kyle

-Original Message-
From: Ismael Juma [mailto:isma...@gmail.com] 
Sent: Tuesday, January 30, 2018 9:01 PM
To: dev 
Subject: Re: Excessive Memory Usage with Compression enabled and possible 
resolutions

Thanks for the report. I haven't looked at the code, but it seems like we would 
want to do both 1 and 2. Can you please file a JIRA with 1.1.0 as the target 
version?

Ismael

On 30 Jan 2018 5:46 pm, "Kyle Tinker"  wrote:

> I'm using Kafka 1.0.0 and the Java producer.
>
> I've noticed high memory usage in the producer when enabling 
> compression (gzip or lz4).  I don't observe the behavior with 
> compression off, but with it on I'll run out of heap (2GB).  Using a Java 
> profiler, I see the data is
> in the KafkaLZ4BlockOutputStream (or related class for gzip).   I see that
> MemoryRecordsBuilder:closeForRecordAppends() is trying to deal with 
> this, but is not successful.  I'm most likely network bottlenecked, so 
> I expect the producer buffers to be full while the job is running and 
> potentially a lot of unacknowledged records.
>
> I've tried using the default buffer.memory with 20 producers (across 
> 20
> threads) and sending data as quickly as I can.  I've also tried 1MB of 
> buffer.memory, which seemed to reduce memory consumption but I could 
> still run OOM in certain cases.  I have 
> max.in.flight.requests.per.connection
> set to 1.  In short, I should only have ~20 MB (20* 1MB) of data in 
> buffers, but I can easily exhaust 2000 MB used by Kafka.
>
> In looking at the code more, it looks like the 
> KafkaLZ4BlockOutputStream doesn't clear the compressedBuffer or buffer 
> when close() is called.  In my heap dump, both of those are ~65k size 
> each, meaning that each batch is taking up ~148k of space, of which 131k is 
> buffers.
> (buffer.memory=1,000,000 and messages are 1k each until the batch fills).
>
> Kafka tries to manage memory usage by calling 
> MemoryRecordsBuilder:closeForRecordAppends(),
> which is documented as "Release resources required for record appends (e.g.
> compression buffers)".  However, this method doesn't actually clear 
> those buffers because KafkaLZ4BlockOutputStream.close() only writes 
> the block and end mark and closes the output stream.  It doesn't 
> actually clear the buffer and compressedBuffer in 
> KafkaLZ4BlockOutputStream.  Those stay allocated in RAM until the 
> block is acknowledged by the broker, processed in 
> Sender:handleProduceResponse(), and the batch is deallocated.  This 
> memory usage therefore increases, possibly without bound.  In my test 
> program, the program died with approximately 345 unprocessed batches per 
> producer (20 producers), despite having max.in.flight.requests.per.
> connection=1.
>
> There are a few possible optimizations I can think of:
> 1) We could declare KafkaLZ4BlockOutputStream.buffer and 
> compressedBuffer as non-final and null them in the close() method
> 2) We could declare the MemoryRecordsBuilder.appendStream non-final 
> and null it in the closeForRecordAppends() method
> 3) We could have the ProducerBatch discard the recordsBuilder in 
> closeForRecordAppends(), however, this is likely a bad idea because 
> the recordsBuilder contains significant metadata that is likely needed 
> after the stream is closed.  It is also final.
> 4) We could try to limit the number of non-acknowledged batches in 
> flight.  This would bound the maximum memory usage but may negatively 
> impact performance.
>
> Fix #1 would only improve the LZ4 algorithm, and not any other algorithms.
> Fix #2 would improve all algorithms, compression and otherwise.  Of 
> the 3 proposed here, it seems the best.  This would also involve 
> having to check appendStreamIsClosed in every usage of appendStream 
> within MemoryRecordsBuilder to avoid NPE's.
>
> Are there any thoughts or suggestions on how to proceed?
>
> If requested I can provide standalone testcase code demonstrating this 
> problem.
>
> Thanks,
> -Kyle
>
>
>
>
>
>
> This message is intended exclusively for the individual or entity to 
> which it is addressed. This communication may contain information that 
> is proprietary, privileged, confidential or otherwise legally exempt 
> from disclosure. If you are not the named addressee, or have been 
> inadvertently and erroneously referenced in the address line, you are 
> not authorized to read, print, retain, copy or disseminate this 
> message or any part of it. If you have received this message in error, 
> please notify the sender immediately by e-mail and delete all copies 
> of the message. (ID m031214)
>


[jira] [Created] (KAFKA-6512) Java Producer: Excessive memory usage with compression enabled

2018-01-31 Thread Kyle Tinker (JIRA)
Kyle Tinker created KAFKA-6512:
--

 Summary: Java Producer: Excessive memory usage with compression 
enabled
 Key: KAFKA-6512
 URL: https://issues.apache.org/jira/browse/KAFKA-6512
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 1.0.0
 Environment: Windows 10
Reporter: Kyle Tinker
 Attachments: KafkaSender.java

h2. User Story

As a user of the Java producer, I want a predictable memory usage for the Kafka 
client so that I can ensure that my system is sized appropriately and will be 
stable even under heavy usage.

As a user of the Java producer, I want a smaller memory footprint so that my 
systems don't consume as many resources.
h2. Acceptance Criteria
 * Enabling Compression in Kafka should not significantly increase the memory 
usage of Kafka
 * The memory usage of Kafka's Java Producer should be roughly in line with the 
buffer size (buffer.memory) and the number of producers declared.

h2. Additional Information

I've observed high memory usage in the producer when enabling compression (gzip 
or lz4).  I don't observe the behavior with compression off, but with it on 
I'll run out of heap (2GB).  Using a Java profiler, I see the data is in the 
KafkaLZ4BlockOutputStream (or related class for gzip).   I see that 
MemoryRecordsBuilder:closeForRecordAppends() is trying to deal with this, but 
is not successful.  I'm most likely network bottlenecked, so I expect the 
producer buffers to be full while the job is running and potentially a lot of 
unacknowledged records.

I've tried using the default buffer.memory with 20 producers (across 20 
threads) and sending data as quickly as I can.  I've also tried 1MB of 
buffer.memory, which seemed to reduce memory consumption but I could still run 
OOM in certain cases.  I have max.in.flight.requests.per.connection set to 1.  
In short, I should only have ~20 MB (20* 1MB) of data in buffers, but I can 
easily exhaust 2000 MB used by Kafka.
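
For reference, the sizing-related settings described above look like this in 
producer code (a sketch; the broker address and serializers are placeholders, 
only the sizing keys come from this report):

{noformat}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSizingExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 1_000_000);   // 1 MB per producer
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");    // the setting that triggers the growth
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // With 20 such producers, buffer memory alone should cap at ~20 MB.
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}
{noformat}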

In looking at the code more, it looks like the KafkaLZ4BlockOutputStream 
doesn't clear the compressedBuffer or buffer when close() is called.  In my 
heap dump, both of those are ~65k size each, meaning that each batch is taking 
up ~148k of space, of which 131k is buffers. (buffer.memory=1,000,000 and 
messages are 1k each until the batch fills).

Kafka tries to manage memory usage by calling 
MemoryRecordsBuilder:closeForRecordAppends(), which is documented as "Release 
resources required for record appends (e.g. compression buffers)".  However, 
this method doesn't actually clear those buffers because 
KafkaLZ4BlockOutputStream.close() only writes the block and end mark and closes 
the output stream.  It doesn't actually clear the buffer and compressedBuffer 
in KafkaLZ4BlockOutputStream.  Those stay allocated in RAM until the block is 
acknowledged by the broker, processed in Sender:handleProduceResponse(), and 
the batch is deallocated.  This memory usage therefore increases, possibly 
without bound.  In my test program, the program died with approximately 345 
unprocessed batches per producer (20 producers), despite having 
max.in.flight.requests.per.connection=1.
h2. Steps to Reproduce
 # Create a topic test with plenty of storage
 # Use a connection with a very fast upload pipe and limited download.  This 
lets the outbound data go out while acknowledgements are delayed flowing back 
in.
 # Download KafkaSender.java (attached to this ticket)
 # Set line 17 to reference your Kafka broker
 # Run the program with a 1GB Xmx value
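
For step 5, a typical invocation might be (the exact classpath is an 
assumption; KafkaSender comes from the attachment):

{noformat}
javac -cp kafka-clients-1.0.0.jar KafkaSender.java
java -Xmx1g -cp .:kafka-clients-1.0.0.jar:slf4j-api-1.7.25.jar KafkaSender
{noformat}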

h2. Possible solutions

There are a few possible optimizations I can think of:
 # We could declare KafkaLZ4BlockOutputStream.buffer and compressedBuffer as 
non-final and null them in the close() method
 # We could declare the MemoryRecordsBuilder.appendStream non-final and null it 
in the closeForRecordAppends() method
 # We could have the ProducerBatch discard the recordsBuilder in 
closeForRecordAppends(), however, this is likely a bad idea because the 
recordsBuilder contains significant metadata that is likely needed after the 
stream is closed.  It is also final.
 # We could try to limit the number of non-acknowledged batches in flight.  
This would bound the maximum memory usage but may negatively impact performance.

 

Fix #1 would only improve the LZ4 algorithm, and not any other algorithms.

Fix #2 would improve all algorithms, compression and otherwise.  Of the 3 
proposed here, it seems the best.  This would also involve having to check 
appendStreamIsClosed in every usage of appendStream within MemoryRecordsBuilder 
to avoid NPE's.

Fix #4 is likely necessary if we want to bound the maximum memory usage of 
Kafka.  Removing the buffers in Fix 1 or 2 will reduce the memory usage by 
~90%, but theoretically there is still no limit.
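
To make the pattern behind fixes 1 and 2 concrete, here is a minimal 
self-contained sketch (not Kafka code; the class and field names are 
stand-ins): drop the references to the large scratch buffers in close() so 
they become collectable even while the enclosing batch object stays alive 
waiting for acknowledgement.

{noformat}
import java.io.IOException;
import java.io.OutputStream;

// Stand-in for a compressing output stream that holds large scratch buffers.
class BufferReleasingStream extends OutputStream {
    private byte[] buffer = new byte[1 << 16];           // ~64 KB scratch buffer
    private byte[] compressedBuffer = new byte[1 << 16]; // ~64 KB compressed-side buffer
    private int pos = 0;

    @Override
    public void write(int b) throws IOException {
        if (buffer == null)
            throw new IOException("stream is closed");
        buffer[pos++] = (byte) b;
    }

    @Override
    public void close() {
        // The enclosing batch can stay unacknowledged for a long time;
        // nulling here is what stops ~130 KB per batch from accumulating.
        // Requires the fields to be non-final, as fix #1 proposes.
        buffer = null;
        compressedBuffer = null;
    }
}
{noformat}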



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[DISCUSS] KIP-252: Extend ACLs to allow filtering based on ip ranges and subnets

2018-01-31 Thread Sönke Liebau
Hey everybody,

following a brief initial discussion a couple of days ago on this list
I'd like to get a discussion going on KIP-252, which would allow
specifying IP ranges and subnets for the --allow-host and --deny-host
parameters of the acl tool.

The KIP can be found at
https://cwiki.apache.org/confluence/display/KAFKA/KIP-252+-+Extend+ACLs+to+allow+filtering+based+on+ip+ranges+and+subnets
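
For illustration, the kind of invocation this would enable (the CIDR value is
the proposed extension; the flags themselves already exist in the acl tool):

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice --allow-host 192.168.1.0/24 \
  --operation Read --topic test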

Best regards,
Sönke


[jira] [Resolved] (KAFKA-6138) Simplify StreamsBuilder#addGlobalStore

2018-01-31 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-6138.

Resolution: Fixed

> Simplify StreamsBuilder#addGlobalStore
> --
>
> Key: KAFKA-6138
> URL: https://issues.apache.org/jira/browse/KAFKA-6138
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Assignee: Panuwat Anawatmongkhon
>Priority: Major
>  Labels: beginner, kip, newbie
> Fix For: 1.1.0
>
>
> {{StreamsBuilder#addGlobalStore}} is conceptually a 1:1 copy of 
> {{Topology#addGlobalStore}} that should follow DSL design principles, though. 
> Atm, {{StreamsBuilder#addGlobalStore}} does not provide a good user 
> experience as it forces users to specify processor names – processor names 
> are a Processor API detail that should be hidden in the DSL. The current API 
> is the following:
> {noformat}
> public synchronized StreamsBuilder addGlobalStore(final StoreBuilder storeBuilder,
>                                                   final String topic,
>                                                   final String sourceName,
>                                                   final Consumed consumed,
>                                                   final String processorName,
>                                                   final ProcessorSupplier stateUpdateSupplier)
> {noformat}
> We should remove the two parameters {{sourceName}} and {{processorName}}. To 
> be backward compatible, the current method must be deprecated and a new 
> method should be added with a reduced number of parameters. 
> KIP: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-233%3A+Simplify+StreamsBuilder%23addGlobalStore
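> 
> For illustration, a possible reduced signature following that direction (a 
> sketch only; the exact shape is defined by KIP-233, not confirmed here):
> {noformat}
> public synchronized StreamsBuilder addGlobalStore(final StoreBuilder storeBuilder,
>                                                   final String topic,
>                                                   final Consumed consumed,
>                                                   final ProcessorSupplier stateUpdateSupplier)
> {noformat}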



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk7 #3136

2018-01-31 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-5142: Add Connect support for message headers (KIP-145)

--
[...truncated 407.53 KB...]

kafka.controller.ControllerIntegrationTest > 
testAutoPreferredReplicaLeaderElection STARTED

kafka.controller.ControllerIntegrationTest > 
testAutoPreferredReplicaLeaderElection PASSED

kafka.controller.ControllerIntegrationTest > testTopicCreation STARTED

kafka.controller.ControllerIntegrationTest > testTopicCreation PASSED

kafka.controller.ControllerIntegrationTest > testPartitionReassignment STARTED

kafka.controller.ControllerIntegrationTest > testPartitionReassignment PASSED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion STARTED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch PASSED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled STARTED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled PASSED

kafka.controller.ControllerIntegrationTest > 
testBackToBackPreferredReplicaLeaderElections STARTED

kafka.controller.ControllerIntegrationTest > 
testBackToBackPreferredReplicaLeaderElections PASSED

kafka.controller.ControllerIntegrationTest > testEmptyCluster STARTED

kafka.controller.ControllerIntegrationTest > testEmptyCluster PASSED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
STARTED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
PASSED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
STARTED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
PASSED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent STARTED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent PASSED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException 
STARTED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrLive STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrLive PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionLastIsrShuttingDown STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionLastIsrShuttingDown PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaInIsrNotLive STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaInIsrNotLive PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithNoLiveIsr STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithNoLiveIsr PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #2375

2018-01-31 Thread Apache Jenkins Server
See 

--
Started by an SCM change
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H24 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:825)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1092)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1123)
at hudson.scm.SCM.checkout(SCM.java:495)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://github.com/apache/kafka.git" returned status code 4:
stdout: 
stderr: error: failed to write new configuration file 


at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1970)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1938)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1934)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1572)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1584)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:1218)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor84.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:922)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:896)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:853)
at hudson.remoting.UserRequest.perform(UserRequest.java:207)
at hudson.remoting.UserRequest.perform(UserRequest.java:53)
at hudson.remoting.Request$2.run(Request.java:358)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to 
H24
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1693)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:310)
at hudson.remoting.Channel.call(Channel.java:908)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:281)
at com.sun.proxy.$Proxy109.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:295)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:813)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1092)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1123)
at hudson.scm.SCM.checkout(SCM.java:495)
at 
hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at 
jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at 

Build failed in Jenkins: kafka-trunk-jdk9 #349

2018-01-31 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-5142: Add Connect support for message headers (KIP-145)

--
[...truncated 1.45 MB...]
kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
PASSED

kafka.controller.PartitionStateMachineTest > 
testNonexistentPartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNonexistentPartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionErrorCodeFromCreateStates STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionErrorCodeFromCreateStates PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOfflineTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOfflineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOfflinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #2374

2018-01-31 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-4930: Enforce set of legal characters for connector names

--
[...truncated 3.44 MB...]

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.PermissionTypeTest > testJavaConversions STARTED

kafka.security.auth.PermissionTypeTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testFromString STARTED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords STARTED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.producer.SyncProducerTest > testReachableServer STARTED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas STARTED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout STARTED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse STARTED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest STARTED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse STARTED


[jira] [Created] (KAFKA-6511) Connect header parser incorrectly parses arrays

2018-01-31 Thread Arjun Satish (JIRA)
Arjun Satish created KAFKA-6511:
---

 Summary: Connect header parser incorrectly parses arrays
 Key: KAFKA-6511
 URL: https://issues.apache.org/jira/browse/KAFKA-6511
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 1.1.0
Reporter: Arjun Satish
Assignee: Randall Hauch


An incorrect input like "[1, 2, 3,,,]" is misinterpreted by the Values parser. 
An example test can be found here: 
https://github.com/apache/kafka/pull/4319#discussion_r165155768
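
A minimal repro sketch of the reported behavior, assuming connect-api 1.1.0 
(where the Values class was introduced) is on the classpath:

{code:java}
import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.data.Values;

public class ValuesArrayRepro {
    public static void main(String[] args) {
        // Malformed array literal with trailing empty elements
        SchemaAndValue parsed = Values.parseString("[1, 2, 3,,,]");
        // Expected: a parse error; observed per this report: a silently
        // misinterpreted value
        System.out.println(parsed.schema() + " -> " + parsed.value());
    }
}
{code}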



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk9 #348

2018-01-31 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-4930: Enforce set of legal characters for connector names

--
[...truncated 1.45 MB...]
kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerConfigUpdateTest[0] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerConfigUpdateTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerConfigUpdateTest[1] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerConfigUpdateTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerConfigUpdateTest[2] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerConfigUpdateTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[2] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerConfigUpdateTest[3] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerConfigUpdateTest[3] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[3] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[3] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[3] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[3] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[3] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[3] PASSED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing STARTED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing PASSED

kafka.log.ProducerStateManagerTest > testTruncate STARTED

kafka.log.ProducerStateManagerTest > testTruncate PASSED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile PASSED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload STARTED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload PASSED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump STARTED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 
STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 
PASSED

kafka.log.ProducerStateManagerTest > testTakeSnapshot STARTED

kafka.log.ProducerStateManagerTest > testTakeSnapshot PASSED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore STARTED

kafka.log.ProducerStateManagerTest > 

Build failed in Jenkins: kafka-trunk-jdk7 #3135

2018-01-31 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-4930: Enforce set of legal characters for connector names

--
[...truncated 409.15 KB...]

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate STARTED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate PASSED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID STARTED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID PASSED

kafka.utils.CoreUtilsTest > testCsvMap STARTED

kafka.utils.CoreUtilsTest > testCsvMap PASSED

kafka.utils.CoreUtilsTest > testInLock STARTED

kafka.utils.CoreUtilsTest > testInLock PASSED

kafka.utils.CoreUtilsTest > testTryAll STARTED

kafka.utils.CoreUtilsTest > testTryAll PASSED

kafka.utils.CoreUtilsTest > testSwallow STARTED

kafka.utils.CoreUtilsTest > testSwallow PASSED

kafka.utils.IteratorTemplateTest > testIterator STARTED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.json.JsonValueTest > testJsonObjectIterator STARTED

kafka.utils.json.JsonValueTest > testJsonObjectIterator PASSED

kafka.utils.json.JsonValueTest > testDecodeLong STARTED

kafka.utils.json.JsonValueTest > testDecodeLong PASSED

kafka.utils.json.JsonValueTest > testAsJsonObject STARTED

kafka.utils.json.JsonValueTest > testAsJsonObject PASSED

kafka.utils.json.JsonValueTest > testDecodeDouble STARTED

kafka.utils.json.JsonValueTest > testDecodeDouble PASSED

kafka.utils.json.JsonValueTest > testDecodeOption STARTED

kafka.utils.json.JsonValueTest > testDecodeOption PASSED

kafka.utils.json.JsonValueTest > testDecodeString STARTED

kafka.utils.json.JsonValueTest > testDecodeString PASSED

kafka.utils.json.JsonValueTest > testJsonValueToString STARTED

kafka.utils.json.JsonValueTest > testJsonValueToString PASSED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArray STARTED

kafka.utils.json.JsonValueTest > testAsJsonArray PASSED

kafka.utils.json.JsonValueTest > testJsonValueHashCode STARTED

kafka.utils.json.JsonValueTest > testJsonValueHashCode PASSED

kafka.utils.json.JsonValueTest > testDecodeInt STARTED

kafka.utils.json.JsonValueTest > testDecodeInt PASSED

kafka.utils.json.JsonValueTest > testDecodeMap STARTED

kafka.utils.json.JsonValueTest > testDecodeMap PASSED

kafka.utils.json.JsonValueTest > testDecodeSeq STARTED

kafka.utils.json.JsonValueTest > testDecodeSeq PASSED

kafka.utils.json.JsonValueTest > testJsonObjectGet STARTED

kafka.utils.json.JsonValueTest > testJsonObjectGet PASSED

kafka.utils.json.JsonValueTest > testJsonValueEquals STARTED

kafka.utils.json.JsonValueTest > testJsonValueEquals PASSED

kafka.utils.json.JsonValueTest > testJsonArrayIterator STARTED

kafka.utils.json.JsonValueTest > testJsonArrayIterator PASSED

kafka.utils.json.JsonValueTest > testJsonObjectApply STARTED

kafka.utils.json.JsonValueTest > testJsonObjectApply PASSED

kafka.utils.json.JsonValueTest > testDecodeBoolean STARTED

kafka.utils.json.JsonValueTest > testDecodeBoolean PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic STARTED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired STARTED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents STARTED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize STARTED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents STARTED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize STARTED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner STARTED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration STARTED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition STARTED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker STARTED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed STARTED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer STARTED

kafka.producer.AsyncProducerTest > testJavaProducer PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder STARTED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED

kafka.producer.SyncProducerTest > 

[jira] [Resolved] (KAFKA-5142) KIP-145 - Expose Record Headers in Kafka Connect

2018-01-31 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5142.
--
   Resolution: Fixed
Fix Version/s: 1.1.0

Issue resolved by pull request 4319
[https://github.com/apache/kafka/pull/4319]

> KIP-145 - Expose Record Headers in Kafka Connect
> 
>
> Key: KAFKA-5142
> URL: https://issues.apache.org/jira/browse/KAFKA-5142
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Reporter: Michael Andre Pearce
>Assignee: Michael Andre Pearce
>Priority: Major
> Fix For: 1.1.0
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-145+-+Expose+Record+Headers+in+Kafka+Connect
> As KIP-82 introduced headers into the core Kafka product, it would be 
> advantageous to expose them in the Kafka Connect framework.
> Connectors that replicate data between Kafka clusters, or between other 
> messaging products and Kafka, would want to replicate the headers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6510) WARN: Fail to send SSL Close message

2018-01-31 Thread John Chu (JIRA)
John Chu created KAFKA-6510:
---

 Summary: WARN: Fail to send SSL Close message
 Key: KAFKA-6510
 URL: https://issues.apache.org/jira/browse/KAFKA-6510
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0
Reporter: John Chu


I have a thread that periodically lists the topics on the Message Hub, and 
once in a while I get a "Failed to send SSL Close message" warning.

Any ideas?

I noticed another, similar defect is open: 
https://issues.apache.org/jira/browse/KAFKA-3702

I am not sure whether they are the same issue.
{code:java}
// Key/value consumer types below are assumed; listTopics() returns
// Map<String, List<PartitionInfo>>.
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(getConsumerConfiguration());

try {
    Map<String, List<PartitionInfo>> topics = consumer.listTopics();
    return new ArrayList<>(topics.keySet());
} finally {
    if (consumer != null) {
        consumer.close();
    }
}
{code}
I am getting the warning from *consumer.close*.

The configuration of the consumer:
 * sasl.mechanism = PLAIN
 * security.protocol = SASL_SSL
 * group.id = consumer1
 * ssl.enabled.protocol = TLSv1.2
 * ssl.endpoint.identification.algorithm = HTTPS
 * ssl.protocol = TLSv1.2
 * sasl.jaas.config = org.apache.kafka.common.security.plain.PlainLoginModule 
required username="USERNAME" password="PASSWORD";
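
For reference, a minimal sketch of building this configuration 
programmatically; the bootstrap endpoint, deserializers, and credentials are 
placeholders, and note the standard client property name is 
ssl.enabled.protocols (plural):

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9093");   // assumed endpoint
props.put("group.id", "consumer1");
props.put("security.protocol", "SASL_SSL");
props.put("sasl.mechanism", "PLAIN");
props.put("ssl.protocol", "TLSv1.2");
props.put("ssl.enabled.protocols", "TLSv1.2");
props.put("ssl.endpoint.identification.algorithm", "HTTPS");
props.put("sasl.jaas.config",
    "org.apache.kafka.common.security.plain.PlainLoginModule required "
        + "username=\"USERNAME\" password=\"PASSWORD\";");
// Deserializers are assumed; the report does not show them
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
{code}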

{quote}
[WARN ] 2018-01-25 20:12:23.204 [ClusterChannelMonitorTaskThread] org.apache.kafka.common.network.SslTransportLayer {} - Failed to send SSL Close message
java.io.IOException: Unexpected status returned by SSLEngine.wrap, expected CLOSED, received OK. Will not send close message to peer.
at org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:158) [kafka-clients-0.11.0.0.jar:?]
at org.apache.kafka.common.utils.Utils.closeAll(Utils.java:663) [kafka-clients-0.11.0.0.jar:?]
at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:59) [kafka-clients-0.11.0.0.jar:?]
at org.apache.kafka.common.network.Selector.doClose(Selector.java:582) [kafka-clients-0.11.0.0.jar:?]
at org.apache.kafka.common.network.Selector.close(Selector.java:573) [kafka-clients-0.11.0.0.jar:?]
at org.apache.kafka.common.network.Selector.close(Selector.java:539) [kafka-clients-0.11.0.0.jar:?]
at org.apache.kafka.common.network.Selector.close(Selector.java:250) [kafka-clients-0.11.0.0.jar:?]
at org.apache.kafka.clients.NetworkClient.close(NetworkClient.java:505) [kafka-clients-0.11.0.0.jar:?]
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.close(ConsumerNetworkClient.java:439) [kafka-clients-0.11.0.0.jar:?]
at org.apache.kafka.clients.ClientUtils.closeQuietly(ClientUtils.java:71) [kafka-clients-0.11.0.0.jar:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:1613) [kafka-clients-0.11.0.0.jar:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:1573) [kafka-clients-0.11.0.0.jar:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:1549) [kafka-clients-0.11.0.0.jar:?]
at 
{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6509) Add additional tests for validating store restoration completes before Topology is initialized

2018-01-31 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-6509:
--

 Summary: Add additional tests for validating store restoration 
completes before Topology is initialized
 Key: KAFKA-6509
 URL: https://issues.apache.org/jira/browse/KAFKA-6509
 Project: Kafka
  Issue Type: Test
  Components: streams
Reporter: Bill Bejeck
 Fix For: 1.2.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6508) Look into optimizing StreamsPartitionAssignor StandbyTask Assignment

2018-01-31 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-6508:
--

 Summary: Look into optimizing StreamsPartitionAssignor StandbyTask 
Assignment
 Key: KAFKA-6508
 URL: https://issues.apache.org/jira/browse/KAFKA-6508
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Bill Bejeck
 Fix For: 1.2.0


Currently the StreamsPartitionAssignor keeps two separate lists of tasks, 
active and standby; we should look into merging them into a single list so the 
assignment can balance active and standby tasks together (a hypothetical 
sketch follows below).
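
Purely as illustration of the single-list idea, a hypothetical sketch; these 
names are illustrative and are not the actual assignor types:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical: tag each task with its role instead of keeping two lists.
enum TaskRole { ACTIVE, STANDBY }

class TaskAssignment {
    final String taskId;   // e.g. "0_0"
    final TaskRole role;

    TaskAssignment(String taskId, TaskRole role) {
        this.taskId = taskId;
        this.role = role;
    }
}

class SingleListExample {
    public static void main(String[] args) {
        List<TaskAssignment> tasks = new ArrayList<>();
        tasks.add(new TaskAssignment("0_0", TaskRole.ACTIVE));
        tasks.add(new TaskAssignment("0_0", TaskRole.STANDBY));
        // One list lets the assignor consider active and standby placements
        // together when balancing tasks across instances.
        System.out.println(tasks.size() + " task placements");
    }
}
{code}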



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6507) NPE in KafkaStatusBackingStore

2018-01-31 Thread Itay Cohai (JIRA)
Itay Cohai created KAFKA-6507:
-

 Summary: NPE in KafkaStatusBackingStore
 Key: KAFKA-6507
 URL: https://issues.apache.org/jira/browse/KAFKA-6507
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.11.0.1
 Environment: We are using Kafka 0.10.0.1 with Kafka Connect 0.11.0.1. 
Reporter: Itay Cohai


We found the following NPE in our Kafka Connect logs:

[2018-01-30 13:15:34,391] ERROR Unexpected exception in Thread[KafkaBasedLog 
Work Thread - itay_test-connect-status,5,main] 
(org.apache.kafka.connect.util.KafkaBasedLog:334)

java.lang.NullPointerException
    at org.apache.kafka.connect.storage.KafkaStatusBackingStore.read(KafkaStatusBackingStore.java:441)
    at org.apache.kafka.connect.storage.KafkaStatusBackingStore$1.onCompletion(KafkaStatusBackingStore.java:148)
    at org.apache.kafka.connect.storage.KafkaStatusBackingStore$1.onCompletion(KafkaStatusBackingStore.java:145)
    at org.apache.kafka.connect.util.KafkaBasedLog.poll(KafkaBasedLog.java:258)
    at org.apache.kafka.connect.util.KafkaBasedLog.access$500(KafkaBasedLog.java:69)
    at org.apache.kafka.connect.util.KafkaBasedLog$WorkThread.run(KafkaBasedLog.java:327)

 

Looking at the source, it appears the record key read from the status topic is 
null, which is strange.

void read(ConsumerRecord<String, byte[]> record) {
    String key = record.key();

    // NPE is thrown on the next line when key is null
    if (key.startsWith(CONNECTOR_STATUS_PREFIX)) {
        readConnectorStatus(key, record.value());
    } else if (key.startsWith(TASK_STATUS_PREFIX)) {
        readTaskStatus(key, record.value());
    } else {
        log.warn("Discarding record with invalid key {}", key);
    }
}
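
A minimal sketch of a defensive fix, assuming the desired behavior is to skip 
null-keyed records rather than kill the work thread:

{code:java}
void read(ConsumerRecord<String, byte[]> record) {
    String key = record.key();
    if (key == null) {
        // Tolerate null-keyed records instead of throwing an NPE
        log.warn("Discarding record with null key");
        return;
    }
    if (key.startsWith(CONNECTOR_STATUS_PREFIX)) {
        readConnectorStatus(key, record.value());
    } else if (key.startsWith(TASK_STATUS_PREFIX)) {
        readTaskStatus(key, record.value());
    } else {
        log.warn("Discarding record with invalid key {}", key);
    }
}
{code}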






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-1.0-jdk7 #143

2018-01-31 Thread Apache Jenkins Server
See 


Changes:

[damian.guy] KAFKA-6378 KStream-GlobalKTable null KeyValueMapper handling

--
[...truncated 164.77 KB...]

kafka.server.ReplicaManagerTest > 
testReceiveOutOfOrderSequenceExceptionWithLogStartOffset STARTED

kafka.server.ReplicaManagerTest > 
testReceiveOutOfOrderSequenceExceptionWithLogStartOffset PASSED

kafka.server.ReplicaManagerTest > testReadCommittedFetchLimitedAtLSO STARTED

kafka.server.ReplicaManagerTest > testReadCommittedFetchLimitedAtLSO PASSED

kafka.server.ReplicaManagerTest > testDelayedFetchIncludesAbortedTransactions 
STARTED

kafka.server.ReplicaManagerTest > testDelayedFetchIncludesAbortedTransactions 
PASSED

kafka.server.ReplicaFetchTest > testReplicaFetcherThread STARTED

kafka.server.ReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.FetchRequestTest > testBrokerRespectsPartitionsOrderAndSizeLimits 
STARTED

kafka.server.FetchRequestTest > testBrokerRespectsPartitionsOrderAndSizeLimits 
PASSED

kafka.server.FetchRequestTest > 
testDownConversionFromBatchedToUnbatchedRespectsOffset STARTED

kafka.server.FetchRequestTest > 
testDownConversionFromBatchedToUnbatchedRespectsOffset PASSED

kafka.server.FetchRequestTest > testFetchRequestV2WithOversizedMessage STARTED

kafka.server.FetchRequestTest > testFetchRequestV2WithOversizedMessage PASSED

kafka.server.FetchRequestTest > testDownConversionWithConnectionFailure STARTED
java.io.IOException: No space left on device
com.esotericsoftware.kryo.KryoException: java.io.IOException: No space left on 
device
at com.esotericsoftware.kryo.io.Output.flush(Output.java:156)
at com.esotericsoftware.kryo.io.Output.require(Output.java:134)
at com.esotericsoftware.kryo.io.Output.writeBoolean(Output.java:578)
at 
org.gradle.internal.serialize.kryo.KryoBackedEncoder.writeBoolean(KryoBackedEncoder.java:63)
at 
org.gradle.api.internal.tasks.testing.junit.result.TestOutputStore$Writer.onOutput(TestOutputStore.java:99)
at 
org.gradle.api.internal.tasks.testing.junit.result.TestReportDataCollector.onOutput(TestReportDataCollector.java:141)
at sun.reflect.GeneratedMethodAccessor295.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:42)
at 
org.gradle.internal.event.BroadcastDispatch$SingletonDispatch.dispatch(BroadcastDispatch.java:230)
at 
org.gradle.internal.event.BroadcastDispatch$SingletonDispatch.dispatch(BroadcastDispatch.java:149)
at 
org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:58)
at 
org.gradle.internal.event.BroadcastDispatch$CompositeDispatch.dispatch(BroadcastDispatch.java:324)
at 
org.gradle.internal.event.BroadcastDispatch$CompositeDispatch.dispatch(BroadcastDispatch.java:234)
at 
org.gradle.internal.event.ListenerBroadcast.dispatch(ListenerBroadcast.java:140)
at 
org.gradle.internal.event.ListenerBroadcast.dispatch(ListenerBroadcast.java:37)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy73.onOutput(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.results.TestListenerAdapter.output(TestListenerAdapter.java:56)
at sun.reflect.GeneratedMethodAccessor296.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:42)
at 
org.gradle.internal.event.BroadcastDispatch$SingletonDispatch.dispatch(BroadcastDispatch.java:230)
at 
org.gradle.internal.event.BroadcastDispatch$SingletonDispatch.dispatch(BroadcastDispatch.java:149)
at 
org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:58)
at 
org.gradle.internal.event.BroadcastDispatch$CompositeDispatch.dispatch(BroadcastDispatch.java:324)
at 
org.gradle.internal.event.BroadcastDispatch$CompositeDispatch.dispatch(BroadcastDispatch.java:234)
at 
org.gradle.internal.event.ListenerBroadcast.dispatch(ListenerBroadcast.java:140)
at 

[jira] [Resolved] (KAFKA-6378) NullPointerException on KStream-GlobalKTable leftJoin when KeyValueMapper returns null

2018-01-31 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy resolved KAFKA-6378.
---
   Resolution: Fixed
Fix Version/s: 1.1.0

Issue resolved by pull request 4424
[https://github.com/apache/kafka/pull/4424]

> NullPointerException on KStream-GlobalKTable leftJoin when KeyValueMapper 
> returns null
> --
>
> Key: KAFKA-6378
> URL: https://issues.apache.org/jira/browse/KAFKA-6378
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Andy Bryant
>Priority: Major
> Fix For: 1.1.0
>
>
> On a KStream->GlobalKTable leftJoin, if the KeyValueMapper returns null, the 
> stream fails with a NullPointerException (see the stack trace below). On 
> Kafka 0.11.0.0 the stream processes this successfully, calling the 
> ValueJoiner with the table value set to null.
> The use case for this is joining a stream to a table containing reference 
> data where the stream's foreign key may be null. With Kafka 1.0.0 there is no 
> straightforward workaround short of either generating a key that will never 
> match or branching the stream for records that lack the foreign key (see the 
> sketch after the stack trace below).
> Exception in thread "workshop-simple-example-client-StreamThread-1" 
> java.lang.NullPointerException
>   at java.base/java.util.Objects.requireNonNull(Objects.java:221)
>   at 
> org.apache.kafka.streams.state.internals.CachingKeyValueStore.get(CachingKeyValueStore.java:136)
>   at 
> org.apache.kafka.streams.state.internals.CachingKeyValueStore.get(CachingKeyValueStore.java:35)
>   at 
> org.apache.kafka.streams.state.internals.InnerMeteredKeyValueStore.get(InnerMeteredKeyValueStore.java:184)
>   at 
> org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore.get(MeteredKeyValueBytesStore.java:116)
>   at 
> org.apache.kafka.streams.kstream.internals.KTableSourceValueGetterSupplier$KTableSourceValueGetter.get(KTableSourceValueGetterSupplier.java:49)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamKTableJoinProcessor.process(KStreamKTableJoinProcessor.java:56)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
>   at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:85)
>   at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:80)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:216)
>   at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.process(AssignedTasks.java:403)
>   at 
> org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:317)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.processAndMaybeCommit(StreamThread.java:942)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:822)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
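
For context, a minimal sketch of the join shape described above, with 
hypothetical topic names and simplified String types; on 1.0.0 a record whose 
mapped key comes out null reproduces the trace:

{code:java}
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;

public class LeftJoinNullKeyRepro {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        GlobalKTable<String, String> refData = builder.globalTable("reference-data");
        KStream<String, String> stream = builder.stream("events");

        stream.leftJoin(refData,
                // Foreign-key extractor; returning null triggers the NPE on 1.0.0
                (key, value) -> value.isEmpty() ? null : value,
                (value, ref) -> value + " joined with " + ref)
              .to("enriched-events");

        // Print the topology; running it needs the usual StreamsConfig
        System.out.println(builder.build().describe());
    }
}
{code}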



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-232: Detect outdated metadata by adding ControllerMetadataEpoch field

2018-01-31 Thread Dong Lin
Hey Jun, Jason,

Thanks for all the comments. Could you see if you can give a +1 for the KIP?
I am open to making further improvements to it.

Thanks,
Dong
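
Regarding the static decode helper discussed in the quoted thread below, a 
rough hypothetical sketch; the class name and field layout here are 
illustrative assumptions, not from the KIP:

{code:java}
import java.nio.ByteBuffer;

// Hypothetical helper: renders the opaque byte[] offset epoch for logging.
// The real encoding is defined by the KIP/implementation, not by this sketch.
public final class OffsetEpochUtil {
    private OffsetEpochUtil() {}

    public static String toLogString(byte[] offsetEpoch) {
        if (offsetEpoch == null || offsetEpoch.length < 8) {
            return "OffsetEpoch(<empty>)";
        }
        ByteBuffer buf = ByteBuffer.wrap(offsetEpoch);
        int partitionEpoch = buf.getInt();   // assumed field layout
        int leaderEpoch = buf.getInt();      // assumed field layout
        return "OffsetEpoch(partitionEpoch=" + partitionEpoch
            + ", leaderEpoch=" + leaderEpoch + ")";
    }
}
{code}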

On Tue, Jan 23, 2018 at 3:44 PM, Dong Lin  wrote:

> Hey Jun, Jason,
>
> Thanks much for all the review! I will open the voting thread.
>
> Regards,
> Dong
>
> On Tue, Jan 23, 2018 at 3:37 PM, Jun Rao  wrote:
>
>> Hi, Dong,
>>
>> The current KIP looks good to me.
>>
>> Thanks,
>>
>> Jun
>>
>> On Tue, Jan 23, 2018 at 12:29 PM, Dong Lin  wrote:
>>
>> > Hey Jun,
>> >
>> > Do you think the current KIP looks OK? I am wondering if we can open the
>> > voting thread.
>> >
>> > Thanks!
>> > Dong
>> >
>> > On Fri, Jan 19, 2018 at 3:08 PM, Dong Lin  wrote:
>> >
>> > > Hey Jun,
>> > >
>> > > I think we can probably have a static method in Util class to decode
>> the
>> > > byte[]. Both KafkaConsumer implementation and the user application
>> will
>> > be
>> > > able to decode the byte array and log its content for debug purpose.
>> So
>> > it
>> > > seems that we can still print the information we want. It is just not
>> > > explicitly exposed in the consumer interface. Would this address the
>> > > problem here?
>> > >
>> > > Yeah we can include OffsetEpoch in AdminClient. This can be added in
>> > > KIP-222? Is there something you would like me to add in this KIP?
>> > >
>> > > Thanks!
>> > > Dong
>> > >
>> > > On Fri, Jan 19, 2018 at 3:00 PM, Jun Rao  wrote:
>> > >
>> > >> Hi, Dong,
>> > >>
>> > >> The issue with using just byte[] for OffsetEpoch is that it won't be
>> > >> printable, which makes debugging harder.
>> > >>
>> > >> Also, KIP-222 proposes a listGroupOffset() method in AdminClient. If
>> > that
>> > >> gets adopted before this KIP, we probably want to include
>> OffsetEpoch in
>> > >> the AdminClient too.
>> > >>
>> > >> Thanks,
>> > >>
>> > >> Jun
>> > >>
>> > >>
>> > >> On Thu, Jan 18, 2018 at 6:30 PM, Dong Lin 
>> wrote:
>> > >>
>> > >> > Hey Jun,
>> > >> >
>> > >> > I agree. I have updated the KIP to remove the class OffsetEpoch and
>> > >> replace
>> > >> > OffsetEpoch with byte[] in APIs that use it. Can you see if it
>> looks
>> > >> good?
>> > >> >
>> > >> > Thanks!
>> > >> > Dong
>> > >> >
>> > >> > On Thu, Jan 18, 2018 at 6:07 PM, Jun Rao  wrote:
>> > >> >
>> > >> > > Hi, Dong,
>> > >> > >
>> > >> > > Thanks for the updated KIP. It looks good to me now. The only
>> thing
>> > is
>> > >> > > for OffsetEpoch.
>> > >> > > If we expose the individual fields in the class, we probably
>> don't
>> > >> need
>> > >> > the
>> > >> > > encode/decode methods. If we want to hide the details of
>> > OffsetEpoch,
>> > >> we
>> > >> > > probably don't want to expose the individual fields.
>> > >> > >
>> > >> > > Jun
>> > >> > >
>> > >> > > On Wed, Jan 17, 2018 at 10:10 AM, Dong Lin 
>> > >> wrote:
>> > >> > >
>> > >> > > > Thinking about point 61 more, I realize that the async
>> zookeeper
>> > >> read
>> > >> > may
>> > >> > > > make it less of an issue for controller to read more zookeeper
>> > >> nodes.
>> > >> > > > Writing partition_epoch in the per-partition znode makes it
>> > simpler
>> > >> to
>> > >> > > > handle the broker failure between zookeeper writes for a topic
>> > >> > creation.
>> > >> > > I
>> > >> > > > have updated the KIP to use the suggested approach.
>> > >> > > >
>> > >> > > >
>> > >> > > > On Wed, Jan 17, 2018 at 9:57 AM, Dong Lin > >
>> > >> wrote:
>> > >> > > >
>> > >> > > > > Hey Jun,
>> > >> > > > >
>> > >> > > > > Thanks much for the comments. Please see my comments inline.
>> > >> > > > >
>> > >> > > > > On Tue, Jan 16, 2018 at 4:38 PM, Jun Rao 
>> > >> wrote:
>> > >> > > > >
>> > >> > > > >> Hi, Dong,
>> > >> > > > >>
>> > >> > > > >> Thanks for the updated KIP. Looks good to me overall. Just a
>> > few
>> > >> > minor
>> > >> > > > >> comments.
>> > >> > > > >>
>> > >> > > > >> 60. OffsetAndMetadata positionAndOffsetEpoch(TopicPartition
>> > >> > > partition):
>> > >> > > > >> It
>> > >> > > > >> seems that there is no need to return metadata. We probably
>> > want
>> > >> to
>> > >> > > > return
>> > >> > > > >> sth like OffsetAndEpoch.
>> > >> > > > >>
>> > >> > > > >
>> > >> > > > > Previously I think we may want to re-use the existing class
>> to
>> > >> keep
>> > >> > our
>> > >> > > > > consumer interface simpler. I have updated the KIP to add
>> class
>> > >> > > > > OffsetAndOffsetEpoch. I didn't use OffsetAndEpoch because
>> user
>> > may
>> > >> > > > confuse
>> > >> > > > > this name with OffsetEpoch. Does this sound OK?
>> > >> > > > >
>> > >> > > > >
>> > >> > > > >>
>> > >> > > > >> 61. Should we store partition_epoch in
>> > >> > > > >> /brokers/topics/[topic]/partitions/[partitionId] in ZK?
>> > >> > > > >>
>> > >> > > > >
>> > >> > > > > I have considered this. I think the advantage of adding the
>> >