Build failed in Jenkins: kafka-trunk-jdk8 #3068

2018-10-04 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 3209, done.
remote: Counting objects: ... [git fetch progress output elided; log cut off at 54% (1733/3209)]

Jenkins build is back to normal : kafka-trunk-jdk8 #3067

2018-10-04 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-7483) Streams should allow headers to be passed to Serializer

2018-10-04 Thread Kamal Chandraprakash (JIRA)
Kamal Chandraprakash created KAFKA-7483:
---

 Summary: Streams should allow headers to be passed to Serializer
 Key: KAFKA-7483
 URL: https://issues.apache.org/jira/browse/KAFKA-7483
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Kamal Chandraprakash
Assignee: Kamal Chandraprakash


We are storing schema metadata for the record key and value in the record headers; the serializer adds this metadata to the header. While doing a simple record transformation (x transformed to y) in Streams, the same headers that were passed from the source topic are pushed to the sink topic. This leads to an error while reading the sink topic.

We should call the overloaded `serialize(topic, headers, object)` method in 
org.apache.kafka.streams.processor.internals.RecordCollectorImpl#L156, #L157, 
which in turn adds the correct metadata to the record header.

With this, the sink topic reader has the option to read all the values for a 
header key using `Headers#headers`, or only the overwritten value using 
`Headers#lastHeader`.
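
For illustration only (this is not the actual Streams internals; the class and helper below are hypothetical), preferring the header-aware overload would look roughly like this:

{code:java}
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.serialization.Serializer;

// Hypothetical sketch: pass the record headers to the serializer so a
// schema-aware serializer can overwrite the metadata headers for the sink
// topic instead of forwarding the headers copied from the source record.
class HeaderAwareSerializationSketch {

    static <V> byte[] serializeValue(Serializer<V> serializer, String sinkTopic,
                                     Headers headers, V value) {
        // For serializers that ignore headers this behaves like the plain
        // serialize(topic, data) call, so existing serializers keep working.
        return serializer.serialize(sinkTopic, headers, value);
    }
}
{code}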



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-0.11.0-jdk7 #404

2018-10-04 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H22 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 379211134740268b570fc8edd59ae78df0dffee9
remote: Enumerating objects: 4294, done.
remote: Counting objects: ... [git fetch progress output elided; log cut off at 56% (2362/4294)]

Build failed in Jenkins: kafka-0.11.0-jdk7 #403

2018-10-04 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H22 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 379211134740268b570fc8edd59ae78df0dffee9
remote: Enumerating objects: 4294, done.
remote: Counting objects: ... [git fetch progress output elided; log cut off at 56% (2362/4294)]

Build failed in Jenkins: kafka-0.11.0-jdk7 #402

2018-10-04 Thread Apache Jenkins Server
See 


Changes:

[me] MINOR: Increase timeout for starting JMX tool (#5735)

--
[...truncated 189.29 KB...]
kafka.api.AdminClientIntegrationTest > testMinimumRequestTimeouts PASSED

kafka.api.AdminClientIntegrationTest > testForceClose STARTED

kafka.api.AdminClientIntegrationTest > testForceClose PASSED

kafka.api.AdminClientIntegrationTest > testListNodes STARTED

kafka.api.AdminClientIntegrationTest > testListNodes PASSED

kafka.api.AdminClientIntegrationTest > testDelayedClose STARTED

kafka.api.AdminClientIntegrationTest > testDelayedClose PASSED

kafka.api.AdminClientIntegrationTest > testCreateDeleteTopics STARTED

kafka.api.AdminClientIntegrationTest > testCreateDeleteTopics PASSED

kafka.api.AdminClientIntegrationTest > testAclOperations STARTED

kafka.api.AdminClientIntegrationTest > testAclOperations PASSED

kafka.api.AdminClientIntegrationTest > testDescribeCluster STARTED

kafka.api.AdminClientIntegrationTest > testDescribeCluster PASSED

kafka.api.AdminClientIntegrationTest > testDescribeNonExistingTopic STARTED

kafka.api.AdminClientIntegrationTest > testDescribeNonExistingTopic PASSED

kafka.api.AdminClientIntegrationTest > testDescribeAndAlterConfigs STARTED

kafka.api.AdminClientIntegrationTest > testDescribeAndAlterConfigs PASSED

kafka.api.AdminClientIntegrationTest > testCallInFlightTimeouts STARTED

kafka.api.AdminClientIntegrationTest > testCallInFlightTimeouts PASSED

kafka.api.TransactionsTest > testBasicTransactions STARTED

kafka.api.TransactionsTest > testBasicTransactions PASSED

kafka.api.TransactionsTest > testFencingOnSendOffsets STARTED

kafka.api.TransactionsTest > testFencingOnSendOffsets PASSED

kafka.api.TransactionsTest > testFencingOnAddPartitions STARTED

kafka.api.TransactionsTest > testFencingOnAddPartitions PASSED

kafka.api.TransactionsTest > testFencingOnTransactionExpiration STARTED

kafka.api.TransactionsTest > testFencingOnTransactionExpiration PASSED

kafka.api.TransactionsTest > testDelayedFetchIncludesAbortedTransaction STARTED

kafka.api.TransactionsTest > testDelayedFetchIncludesAbortedTransaction PASSED

kafka.api.TransactionsTest > testReadCommittedConsumerShouldNotSeeUndecidedData 
STARTED

kafka.api.TransactionsTest > testReadCommittedConsumerShouldNotSeeUndecidedData 
PASSED

kafka.api.TransactionsTest > testFencingOnSend STARTED

kafka.api.TransactionsTest > testFencingOnSend PASSED

kafka.api.TransactionsTest > testFencingOnCommit STARTED

kafka.api.TransactionsTest > testFencingOnCommit PASSED

kafka.api.TransactionsTest > testMultipleMarkersOneLeader STARTED

kafka.api.TransactionsTest > testMultipleMarkersOneLeader PASSED

kafka.api.TransactionsTest > testSendOffsets STARTED

kafka.api.TransactionsTest > testSendOffsets PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII STARTED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII STARTED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.SslConsumerTest > testCoordinatorFailover STARTED

kafka.api.SslConsumerTest > testCoordinatorFailover PASSED

kafka.api.SslConsumerTest > testSimpleConsumption STARTED

kafka.api.SslConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 

Build failed in Jenkins: kafka-0.10.2-jdk7 #234

2018-10-04 Thread Apache Jenkins Server
See 


Changes:

[me] MINOR: Increase timeout for starting JMX tool (#5735)

--
[...truncated 1.13 MB...]
org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testPrincipalNameCanContainSeparator STARTED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testPrincipalNameCanContainSeparator PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testEqualsAndHashCode STARTED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testEqualsAndHashCode PASSED

org.apache.kafka.common.security.scram.ScramCredentialUtilsTest > missingFields 
STARTED

org.apache.kafka.common.security.scram.ScramCredentialUtilsTest > missingFields 
PASSED

org.apache.kafka.common.security.scram.ScramCredentialUtilsTest > 
stringConversion STARTED

org.apache.kafka.common.security.scram.ScramCredentialUtilsTest > 
stringConversion PASSED

org.apache.kafka.common.security.scram.ScramCredentialUtilsTest > 
generateCredential STARTED

org.apache.kafka.common.security.scram.ScramCredentialUtilsTest > 
generateCredential PASSED

org.apache.kafka.common.security.scram.ScramCredentialUtilsTest > 
extraneousFields STARTED

org.apache.kafka.common.security.scram.ScramCredentialUtilsTest > 
extraneousFields PASSED

org.apache.kafka.common.security.scram.ScramCredentialUtilsTest > 
scramCredentialCache STARTED

org.apache.kafka.common.security.scram.ScramCredentialUtilsTest > 
scramCredentialCache PASSED

org.apache.kafka.common.security.scram.ScramCredentialUtilsTest > 
invalidCredential STARTED

org.apache.kafka.common.security.scram.ScramCredentialUtilsTest > 
invalidCredential PASSED

org.apache.kafka.common.security.scram.ScramFormatterTest > rfc7677Example 
STARTED

org.apache.kafka.common.security.scram.ScramFormatterTest > rfc7677Example 
PASSED

org.apache.kafka.common.security.scram.ScramFormatterTest > saslName STARTED

org.apache.kafka.common.security.scram.ScramFormatterTest > saslName PASSED

org.apache.kafka.common.security.scram.ScramSaslServerTest > 
noAuthorizationIdSpecified STARTED

org.apache.kafka.common.security.scram.ScramSaslServerTest > 
noAuthorizationIdSpecified PASSED

org.apache.kafka.common.security.scram.ScramSaslServerTest > 
authorizatonIdEqualsAuthenticationId STARTED

org.apache.kafka.common.security.scram.ScramSaslServerTest > 
authorizatonIdEqualsAuthenticationId PASSED

org.apache.kafka.common.security.scram.ScramSaslServerTest > 
authorizatonIdNotEqualsAuthenticationId STARTED

org.apache.kafka.common.security.scram.ScramSaslServerTest > 
authorizatonIdNotEqualsAuthenticationId PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validClientFirstMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validClientFirstMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidClientFinalMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidClientFinalMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validServerFirstMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validServerFirstMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidServerFinalMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidServerFinalMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidClientFirstMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidClientFirstMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validClientFinalMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validClientFinalMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidServerFirstMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidServerFirstMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validServerFinalMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validServerFinalMessage PASSED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
noAuthorizationIdSpecified STARTED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
noAuthorizationIdSpecified PASSED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdEqualsAuthenticationId STARTED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdEqualsAuthenticationId PASSED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdNotEqualsAuthenticationId STARTED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdNotEqualsAuthenticationId PASSED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse STARTED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse PASSED


[jira] [Resolved] (KAFKA-7476) SchemaProjector is not properly handling Date-based logical types

2018-10-04 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7476.
--
   Resolution: Fixed
Fix Version/s: 2.1.0
   0.10.2.3
   2.0.1
   0.9.0.2
   1.0.3
   0.11.0.4
   0.10.1.2
   0.10.0.2
   2.2.0
   1.1.2

Issue resolved by pull request 5736
[https://github.com/apache/kafka/pull/5736]

> SchemaProjector is not properly handling Date-based logical types
> -
>
> Key: KAFKA-7476
> URL: https://issues.apache.org/jira/browse/KAFKA-7476
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Robert Yokota
>Assignee: Robert Yokota
>Priority: Major
> Fix For: 1.1.2, 2.2.0, 0.10.0.2, 0.10.1.2, 0.11.0.4, 1.0.3, 
> 0.9.0.2, 2.0.1, 0.10.2.3, 2.1.0
>
>
> SchemaProjector is not properly handling Date-based logical types.  An 
> exception of the following form is thrown:  
> {{Caused by: java.lang.ClassCastException: java.util.Date cannot be cast to 
> java.lang.Number}}
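
As a rough, hypothetical sketch of the distinction involved (this is not the actual SchemaProjector code), Date-based logical types need to be passed through as java.util.Date rather than cast to Number:

{code:java}
import java.util.Date;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.Time;
import org.apache.kafka.connect.data.Timestamp;

// Hedged sketch: detect the Date-based logical types defined by Kafka Connect
// and keep their java.util.Date values intact instead of projecting them as
// numbers, which is what triggers the ClassCastException quoted above.
class DateLogicalTypeSketch {

    static boolean isDateBasedLogicalType(Schema schema) {
        String name = schema.name();
        return org.apache.kafka.connect.data.Date.LOGICAL_NAME.equals(name)
                || Time.LOGICAL_NAME.equals(name)
                || Timestamp.LOGICAL_NAME.equals(name);
    }

    static Object projectPrimitive(Schema sourceSchema, Object value) {
        if (isDateBasedLogicalType(sourceSchema)) {
            return (Date) value;   // keep the logical value; do not cast to Number
        }
        return value;              // other primitive projections elided in this sketch
    }
}
{code}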



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3066

2018-10-04 Thread Apache Jenkins Server
See 


Changes:

[mjsax] KAFKA-7277: Migrate Streams API to Duration instead of longMs times

[github] KAFKA-7415; Persist leader epoch and start offset on becoming a leader

--
[...truncated 2.73 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Build failed in Jenkins: kafka-1.1-jdk7 #216

2018-10-04 Thread Apache Jenkins Server
See 


Changes:

[me] MINOR: Increase timeout for starting JMX tool (#5735)

--
[...truncated 419.76 KB...]
kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldNotAllowDivergentLogs STARTED


[jira] [Created] (KAFKA-7482) LeaderAndIsrRequest should be sent to the shutting down broker

2018-10-04 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-7482:
--

 Summary: LeaderAndIsrRequest should be sent to the shutting down 
broker
 Key: KAFKA-7482
 URL: https://issues.apache.org/jira/browse/KAFKA-7482
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Jun Rao
Assignee: Jun Rao


We introduced a regression in KAFKA-5642 in 1.1. Before 1.1, during a 
controlled shutdown, a LeaderAndIsrRequest is sent to the shutting down 
broker to inform it that it is no longer the leader for partitions whose leaders 
have been moved. Since 1.1, this LeaderAndIsrRequest is no longer sent to the 
shutting down broker. This can delay the time it takes for clients to find the 
new leader.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk10 #566

2018-10-04 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-7481) Consider options for safer upgrade of offset commit value schema

2018-10-04 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-7481:
--

 Summary: Consider options for safer upgrade of offset commit value 
schema
 Key: KAFKA-7481
 URL: https://issues.apache.org/jira/browse/KAFKA-7481
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
 Fix For: 2.1.0


KIP-211 and KIP-320 add new versions of the offset commit value schema. The use 
of the new schema version is controlled by the `inter.broker.protocol.version` 
configuration.  Once the new inter-broker version is in use, it is not possible 
to downgrade since the older brokers will not be able to parse the new schema. 

The options at the moment are the following:

1. Do nothing. Users can try the new version and keep 
`inter.broker.protocol.version` locked to the old release. Downgrade will still 
be possible, but users will not be able to test new capabilities which depend 
on inter-broker protocol changes.
2. Instead of using `inter.broker.protocol.version`, we could use 
`message.format.version`. This would basically extend the use of this config to 
apply to all persistent formats. The advantage is that it allows users to 
upgrade the broker and begin using the new inter-broker protocol while still 
allowing downgrade. But features which depend on the persistent format could 
not be tested.

Any other options?
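
For reference, a hedged illustration of the broker settings involved (the message format config on the broker is `log.message.format.version`; the version values below are hypothetical):

{code:java}
import java.util.Properties;

// Hypothetical sketch of option 1: upgrade the brokers to the new release but pin
// the inter-broker protocol to the previous version, so the old offset commit
// value schema keeps being written and downgrade remains possible.
class BrokerUpgradeConfigSketch {

    static Properties pinnedProtocolConfig() {
        Properties props = new Properties();
        props.put("inter.broker.protocol.version", "2.0");
        // Option 2 would instead make the offset commit schema follow the message
        // format setting, e.g. props.put("log.message.format.version", "2.0"),
        // extending that config's scope to all persistent formats.
        return props;
    }
}
{code}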



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


New release branch 2.1.0

2018-10-04 Thread Dong Lin
Hello Kafka developers and users,

As promised in the release plan
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044,
we now have a release branch for 2.1.0 release. Trunk will soon be bumped
to 2.2.0-SNAPSHOT.

I'll be going over the JIRAs to move every non-blocker from this release to
the next release.

From this point, most changes should go to trunk. Blockers (existing and
new that we discover while testing the release) will be double-committed.
Please discuss with your reviewer whether your PR should go to trunk or to
trunk+release so they can merge accordingly.

Please help us test the release!

Thanks!
Dong


[jira] [Resolved] (KAFKA-7441) Allow LogCleanerManager.resumeCleaning() to be used concurrently

2018-10-04 Thread Dong Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved KAFKA-7441.
-
Resolution: Fixed

> Allow LogCleanerManager.resumeCleaning() to be used concurrently
> 
>
> Key: KAFKA-7441
> URL: https://issues.apache.org/jira/browse/KAFKA-7441
> Project: Kafka
>  Issue Type: Improvement
>Reporter: xiongqi wu
>Assignee: xiongqi wu
>Priority: Blocker
> Fix For: 2.1.0
>
>
> LogCleanerManager provides the APIs abortAndPauseCleaning(TopicPartition) and 
> resumeCleaning(Iterable[TopicPartition]). abortAndPauseCleaning(...) does 
> nothing if the partition is already in the paused state, and 
> resumeCleaning(...) always clears the state for the partition if the 
> partition is in the paused state. Also, resumeCleaning(...) throws 
> IllegalStateException if the partition does not have any state (e.g. its 
> state is cleared).
>  
> This causes a problem in the following scenario:
> 1) A background thread invokes LogManager.cleanupLogs(), which in turn does 
> abortAndPauseCleaning(...) for a given partition. Now this partition is in 
> the paused state.
> 2) A user requests deletion of this partition. The controller sends a 
> StopReplicaRequest with delete=true for this partition. The request handler thread 
> calls abortAndPauseCleaning(...) followed by resumeCleaning(...) for the same 
> partition. Now there is no state for this partition.
> 3) The background thread invokes resumeCleaning(...) as part of 
> LogManager.cleanupLogs(). Because there is no state for this partition, it 
> throws IllegalStateException.
>  
> This issue can also happen before KAFKA-7322 if unclean leader election 
> triggers log truncation for a partition at the same time that the partition 
> is deleted upon user request. But unclean leader election is very rare. The 
> fix made in https://issues.apache.org/jira/browse/KAFKA-7322 makes this issue 
> much more frequent.
> The solution is to record the number of pauses.
>  
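
A minimal, hypothetical sketch of that pause-counting idea (the actual fix lives in the Scala LogCleanerManager):

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of "record the number of pauses": every abortAndPause increments a
// per-partition counter and resume decrements it, so a resume from one caller can
// no longer clear a pause still held by another caller, and the IllegalStateException
// scenario above disappears.
class PauseCountingSketch<P> {

    private final Map<P, Integer> pauseCounts = new HashMap<>();

    synchronized void abortAndPauseCleaning(P partition) {
        pauseCounts.merge(partition, 1, Integer::sum);
    }

    synchronized void resumeCleaning(P partition) {
        Integer count = pauseCounts.get(partition);
        if (count == null) {
            throw new IllegalStateException("Partition " + partition + " is not paused");
        }
        if (count == 1) {
            pauseCounts.remove(partition);   // last pauser resumed; cleaning may run again
        } else {
            pauseCounts.put(partition, count - 1);
        }
    }
}
{code}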



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Downgradability affected by KIP-211/KAFKA-4682?

2018-10-04 Thread Jonghyun Lee
Hello,

KIP-211/KAFKA-4682 introduced a new offset value schema,
OFFSET_COMMIT_VALUE_SCHEMA_v2, in GroupMetadataManager.scala. This new
schema is used for offset commit messages if inter.broker.protocol.version
is set to >= 2.1 AND the OffsetAndMetadata does not contain an explicit
expireTimestamp (see GroupMetadataManager.offsetCommitValue()).

However, this change seems to affect downgradability to an older broker
version that is not aware of this V2 schema. For example, when I tried to
downgrade a broker, which had been running with the KAFKA-4682 patch and
inter.broker.protocol.version set to 2.1 for a while, to a 0.11 broker, the
downgraded broker encountered the following error:

2018/10/04 00:10:42.844 ERROR [GroupMetadataManager]
[group-metadata-manager-0] [kafka-server] [] [Group Metadata Manager on
Broker 13337]: *Error loading offsets from __consumer_offsets-84*
kafka.common.KafkaException: *Unknown offset schema version 2*
at
kafka.coordinator.group.GroupMetadataManager$.schemaForOffset(GroupMetadataManager.scala:960)
~[kafka_2.11-0.11.1.57.jar:?]
at
kafka.coordinator.group.GroupMetadataManager$.readOffsetMessageValue(GroupMetadataManager.scala:1112)
~[kafka_2.11-0.11.1.57.jar:?]
at
kafka.coordinator.group.GroupMetadataManager$$anonfun$loadGroupsAndOffsets$2$$anonfun$apply$13.apply(GroupMetadataManager.scala:530)
~[kafka_2.11-0.11.1.57.jar:?]
at
kafka.coordinator.group.GroupMetadataManager$$anonfun$loadGroupsAndOffsets$2$$anonfun$apply$13.apply(GroupMetadataManager.scala:514)
~[kafka_2.11-0.11.1.57.jar:?]
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
~[scala-library-2.11.11.jar:?]
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
~[scala-library-2.11.11.jar:?]
at
scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
~[scala-library-2.11.11.jar:?]
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
~[scala-library-2.11.11.jar:?]
at
kafka.coordinator.group.GroupMetadataManager$$anonfun$loadGroupsAndOffsets$2.apply(GroupMetadataManager.scala:514)
~[kafka_2.11-0.11.1.57.jar:?]
at
kafka.coordinator.group.GroupMetadataManager$$anonfun$loadGroupsAndOffsets$2.apply(GroupMetadataManager.scala:498)
~[kafka_2.11-0.11.1.57.jar:?]
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
~[scala-library-2.11.11.jar:?]
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
~[scala-library-2.11.11.jar:?]
at
scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
~[scala-library-2.11.11.jar:?]
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
~[scala-library-2.11.11.jar:?]
at
kafka.coordinator.group.GroupMetadataManager.loadGroupsAndOffsets(GroupMetadataManager.scala:498)
~[kafka_2.11-0.11.1.57.jar:?]
at
kafka.coordinator.group.GroupMetadataManager.kafka$coordinator$group$GroupMetadataManager$$doLoadGroupsAndOffsets$1(GroupMetadataManager.scala:457)
~[kafka_2.11-0.11.1.57.jar:?]
at
kafka.coordinator.group.GroupMetadataManager$$anonfun$loadGroupsForPartition$1.apply$mcV$sp(GroupMetadataManager.scala:443)
~[kafka_2.11-0.11.1.57.jar:?]
at
kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110)
~[kafka_2.11-0.11.1.57.jar:?]
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
~[kafka_2.11-0.11.1.57.jar:?]
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
~[?:1.8.0_121]
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
~[?:1.8.0_121]
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
~[?:1.8.0_121]
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
~[?:1.8.0_121]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[?:1.8.0_121]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[?:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]

Unless the older broker code is patched to ignore/downconvert the offset
messages from the future, it seems to me that the broker cannot be
downgraded.
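
For illustration, the failing check on the old broker amounts to something like the following hedged sketch (the real code is Scala, in GroupMetadataManager.schemaForOffset, and throws kafka.common.KafkaException):

{code:java}
// Hedged sketch of why the downgrade fails: the older broker dispatches on the
// schema version stored in each __consumer_offsets value and has no branch for
// version 2, so loading the offsets partition aborts with
// "Unknown offset schema version 2".
class OffsetSchemaDispatchSketch {

    static String schemaNameForOffsetValue(short version) {
        switch (version) {
            case 0: return "OFFSET_COMMIT_VALUE_SCHEMA_V0";
            case 1: return "OFFSET_COMMIT_VALUE_SCHEMA_V1";
            default:
                throw new IllegalStateException("Unknown offset schema version " + version);
        }
    }
}
{code}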

I skimmed through KIP-211 and its discussion thread (
https://www.mail-archive.com/dev@kafka.apache.org/msg81569.html), but I
don't think this issue was discussed.

Is this observation correct? What do you think?

Thanks,
Jon


[jira] [Resolved] (KAFKA-7415) OffsetsForLeaderEpoch may incorrectly respond with undefined epoch causing truncation to HW

2018-10-04 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-7415.

   Resolution: Fixed
Fix Version/s: 2.1.0
   2.0.1

> OffsetsForLeaderEpoch may incorrectly respond with undefined epoch causing 
> truncation to HW
> ---
>
> Key: KAFKA-7415
> URL: https://issues.apache.org/jira/browse/KAFKA-7415
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 2.0.0
>Reporter: Anna Povzner
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.0.1, 2.1.0
>
>
> If the follower's last appended epoch is ahead of the leader's last appended 
> epoch, the OffsetsForLeaderEpoch response will incorrectly send 
> (UNDEFINED_EPOCH, UNDEFINED_EPOCH_OFFSET), and the follower will truncate to 
> HW. This may lead to data loss in some rare cases where 2 back-to-back leader 
> elections happen (failure of one leader, followed by quick re-election of the 
> next leader due to preferred leader election, so that all replicas are still 
> in the ISR, and then failure of the 3rd leader).
> The bug is in LeaderEpochFileCache.endOffsetFor(), which returns 
> (UNDEFINED_EPOCH, UNDEFINED_EPOCH_OFFSET) if the requested leader epoch is 
> ahead of the last leader epoch in the cache. The method should return (last 
> leader epoch in the cache, LEO) in this scenario.
> We don't create an entry in the leader epoch cache until a message is appended 
> with the new leader epoch. Every append to the log calls 
> LeaderEpochFileCache.assign(). However, it would be much cleaner if 
> `makeLeader` created an entry in the cache as soon as the replica becomes a 
> leader, which would fix the bug. In case the leader never appends any 
> messages and the next leader epoch starts with the same offset, we already 
> have clearAndFlushLatest(), which clears entries with start offsets greater 
> than or equal to the passed offset. LeaderEpochFileCache.assign() could be merged 
> with clearAndFlushLatest(), so that we clear cache entries with offsets greater 
> than or equal to the start offset of the new epoch and do not need to 
> call these methods separately. 
>  
> Here is an example of a scenario where the issue leads to the data loss.
> Suppose we have three replicas: r1, r2, and r3. Initially, the ISR consists 
> of (r1, r2, r3) and the leader is r1. The data up to offset 10 has been 
> committed to the ISR. Here is the initial state:
> {code:java}
> Leader: r1
> leader epoch: 0
> ISR(r1, r2, r3)
> r1: [hw=10, leo=10]
> r2: [hw=8, leo=10]
> r3: [hw=5, leo=10]
> {code}
> Replica 1 fails and leaves the ISR, which makes Replica 2 the new leader with 
> leader epoch = 1. The leader appends a batch, but it is not replicated yet to 
> the followers.
> {code:java}
> Leader: r2
> leader epoch: 1
> ISR(r2, r3)
> r1: [hw=10, leo=10]
> r2: [hw=8, leo=11]
> r3: [hw=5, leo=10]
> {code}
> Replica 3 is elected a leader (due to preferred leader election) before it 
> has a chance to truncate, with leader epoch 2. 
> {code:java}
> Leader: r3
> leader epoch: 2
> ISR(r2, r3)
> r1: [hw=10, leo=10]
> r2: [hw=8, leo=11]
> r3: [hw=5, leo=10]
> {code}
> Replica 2 sends OffsetsForLeaderEpoch(leader epoch = 1) to Replica 3. Replica 
> 3 incorrectly replies with UNDEFINED_EPOCH_OFFSET, and Replica 2 truncates to 
> HW. If Replica 3 fails before Replica 2 re-fetches the data, this may lead to 
> data loss.
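
To make the described change concrete, here is a hedged Java sketch of the corrected lookup (the real implementation is the Scala LeaderEpochFileCache.endOffsetFor()):

{code:java}
import java.util.TreeMap;

// Hedged sketch: the cache maps leaderEpoch -> startOffset. The fix is that a
// request for an epoch at or beyond the latest cached epoch answers with
// (latest epoch in the cache, log end offset) instead of the UNDEFINED sentinels,
// so the follower does not truncate back to its high watermark.
class LeaderEpochLookupSketch {

    static final long UNDEFINED_EPOCH = -1L;
    static final long UNDEFINED_EPOCH_OFFSET = -1L;

    private final TreeMap<Long, Long> epochToStartOffset = new TreeMap<>();

    long[] endOffsetFor(long requestedEpoch, long logEndOffset) {
        if (epochToStartOffset.isEmpty()) {
            return new long[] {UNDEFINED_EPOCH, UNDEFINED_EPOCH_OFFSET};
        }
        long latestEpoch = epochToStartOffset.lastKey();
        if (requestedEpoch >= latestEpoch) {
            return new long[] {latestEpoch, logEndOffset};
        }
        // For an older epoch, its end offset is the start offset of the next
        // higher epoch tracked in the cache (further edge cases elided here).
        return new long[] {requestedEpoch, epochToStartOffset.higherEntry(requestedEpoch).getValue()};
    }
}
{code}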



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3065

2018-10-04 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 3121, done.
remote: Counting objects: ... [git fetch progress output elided; log cut off at 55% (1686/3121)]

Build failed in Jenkins: kafka-trunk-jdk8 #3064

2018-10-04 Thread Apache Jenkins Server
See 


Changes:

[lindong28] KAFKA-7196; Remove heartbeat delayed operation for those removed

--
[...truncated 2.72 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Re: [DISCUSS] KIP-221: Repartition Topic Hints in Streams

2018-10-04 Thread Jeyhun Karimov
Hi Lei,

Please feel free to take over the KIP.

Cheers,
Jeyhun

On Fri, Sep 21, 2018, 22:27 Lei Chen  wrote:

> Hi,
>
> Just want to know: is anyone actively working on this and also KAFKA-4835?
> It seems the JIRA has been inactive for a couple of months.
> We want this feature and would like to
> move it forward if no one else is working on it.
>
>
> Lei
>
> On Wed, Jun 20, 2018 at 7:27 PM Matthias J. Sax 
> wrote:
>
>> No worries. It's just good to know. It seems that some other people are
>> interested in driving this further. So we will just "reassign" it to them.
>>
>> Thanks for letting us know.
>>
>>
>> -Matthias
>>
>> On 6/20/18 2:51 PM, Jeyhun Karimov wrote:
>> > Hi Matthias, all,
>> >
>> > Currently, I am not able to complete this KIP. Please accept my
>> > apologies for that.
>> >
>> >
>> > Cheers,
>> > Jeyhun
>> >
>> > On Mon, Jun 11, 2018 at 2:25 AM Matthias J. Sax > > > wrote:
>> >
>> > What is the status of this KIP?
>> >
>> > -Matthias
>> >
>> >
>> > On 2/13/18 1:43 PM, Matthias J. Sax wrote:
>> > > Is there any update for this KIP?
>> > >
>> > >
>> > > -Matthias
>> > >
>> > > On 12/4/17 2:08 PM, Matthias J. Sax wrote:
>> > >> Jeyhun,
>> > >>
>> > >> thanks for updating the KIP.
>> > >>
>> > >> I am wondering if you intend to add a new class `Produced`?
>> There is
>> > >> already `org.apache.kafka.streams.kstream.Produced`. So if we
>> want to
>> > >> add a new class, it must have a different name -- or we might be
>> > able to
>> > >> merge both into one?
>> > >>
>> > >> Also, for the KStream overloads of `through()` and `to()`, can
>> > you add
>> > >> the different behavior using different overloads? It's not clear
>> from
>> > >> the KIP what the semantics are.
>> > >>
>> > >>
>> > >> -Matthias
>> > >>
>> > >> On 11/17/17 3:27 PM, Jeyhun Karimov wrote:
>> > >>> Hi,
>> > >>>
>> > >>> Thanks for your comments. I agree with Matthias partially.
>> > >>> I think we should relax some requirements related to the to() and
>> > through()
>> > >>> methods.
>> > >>> IMHO, the Produced class can cover (existing/to-be-created) topic
>> > information,
>> > >>> which will ease our effort:
>> > >>>
>> > >>> KStream.to(Produced topicInfo)
>> > >>> KStream.through(Produced topicInfo)
>> > >>>
>> > >>> This will decrease the number of overloads but we will need to
>> > deprecate
>> > >>> the existing to() and through() methods, perhaps.
>> > >>> I updated the KIP accordingly.
>> > >>>
>> > >>>
>> > >>> Cheers,
>> > >>> Jeyhun
>> > >>>
>> > >>> On Thu, Nov 16, 2017 at 10:21 PM Matthias J. Sax
>> > mailto:matth...@confluent.io>>
>> > >>> wrote:
>> > >>>
>> >  @Jan:
>> > 
>> >  The `Produced` class was introduced in 1.0 to specify key and
>> value
>> >  Serdes (and partitioner) if data is written into a topic.
>> > 
>> >  Old API:
>> > 
>> >  KStream#to("topic", keySerde, valueSerde);
>> > 
>> >  New API:
>> > 
>> >  KStream#to("topic", Produced.with(keySerde, valueSerde));
>> > 
>> > 
>> >  This allows us to reduce the number of overloads for `to()` (and
>> >  `through()` that follows the same pattern) -- the second
>> > parameter is
>> >  used to cover all different variations of option parameters
>> > users can
>> >  specify, while we only have 2 overloads for `to()` itself.
>> > 
>> >  What is still unclear to me is what you mean by this topic
>> prefix
>> >  thing? Either a user cares about the topic name and thus, must
>> > create
>> >  and manage it manually. Or the user does not care, and Streams
>> > creates
>> >  it. How would this prefix idea fit in here?
>> > 
>> > 
>> > 
>> >  @Guozhang:
>> > 
>> >  My idea was to extend `Produced` with the hint we want to give
>> for
>> >  creating the internal topic and pass an optional `Produced`
>> > parameter. There
>> >  are multiple things we can do here:
>> > 
>> >  1) stream.through(null, Produced...).groupBy().aggregate()
>> >  -> just allow for `null` topic name indicating that Streams
>> should
>> >  create an internal topic
>> > 
>> >  2) stream.through(Produced...).groupBy().aggregate()
>> >  -> add one overload taking a mandatory `Produced`
>> > 
>> >  We use `Serialized` to piggyback the information
>> > 
>> >  3) stream.groupBy(Serialized...).aggregate()
>> >  and stream.groupByKey(Serialized...).aggregate()
>> >  -> we don't need new top level overloads
>> > 
>> > 
>> >  
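
A minimal sketch of the 1.0-style `Produced` usage Matthias describes above, assuming the usual Kafka Streams imports; topic names and serdes are illustrative, and the internal-topic variant under discussion is not shown because its API was still open at this point:

    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, Long> stream =
        builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.Long()));

    // One Produced parameter covers the optional serdes/partitioner, so to() and
    // through() each need only two overloads.
    stream.to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));
    stream.through("repartition-topic", Produced.with(Serdes.String(), Serdes.Long()))
          .groupByKey()
          .count();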

Build failed in Jenkins: kafka-trunk-jdk10 #565

2018-10-04 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H31 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 4866c33ac309ba5cc098a02948253f55a83666a3
error: Could not read 4866c33ac309ba5cc098a02948253f55a83666a3
error: Could not read 379211134740268b570fc8edd59ae78df0dffee9
error: Could not read c7eee92ca0fe618da749d636179aacf9bc5b58a2
error: Could not read b74e7e407c0b065adf68bc45042063def922aa10
error: Could not read f26377352d14af38af5d6cf42531b940fafe7236
remote: Enumerating objects: 3038, done.
remote: Counting objects: ... (progress counter output elided)

Jenkins build is back to normal : kafka-1.1-jdk7 #215

2018-10-04 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk10 #564

2018-10-04 Thread Apache Jenkins Server
See 


Changes:

[lindong28] KAFKA-7196; Remove heartbeat delayed operation for those removed

--
[...truncated 2.24 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED


Re: [EXTERNAL] Incremental Cooperative Rebalancing

2018-10-04 Thread McCaig, Rhys
This is fantastic. I'm really excited to see the work on this.

> On Oct 2, 2018, at 4:22 PM, Konstantine Karantasis  
> wrote:
> 
> Hey everyone,
> 
> I'd like to bring to your attention a general design document that was just
> published in Apache Kafka's wiki space:
> 
> https://cwiki.apache.org/confluence/display/KAFKA/Incremental+Cooperative+Rebalancing%3A+Support+and+Policies
> 
> It deals with the subject of Rebalancing of groups in Kafka and proposes
> basic infrastructure to support improvements on the current rebalancing
> protocol as well as a set of policies that can be implemented to optimize
> rebalancing under a number of real-world scenarios.
> 
> Currently, this wiki page is meant to serve as a reference to the
> proposition of Incremental Cooperative Rebalancing overall. Specific KIPs
> will follow in order to describe in more detail - using the standard KIP
> format - the basic infrastructure and the first policies that will be
> proposed for implementation in components such as Connect, the Kafka
> Consumer and Streams.
> 
> Stay tuned!
> Konstantine



Re: [VOTE] KIP-291: Have separate queues for control requests and data requests

2018-10-04 Thread Ismael Juma
Have we considered "control plane" if we think "control" by itself is
ambiguous? I agree with the original concern that "controller" may be
confusing for something that affects all brokers.

Ismael


On 4 Oct 2018 11:08 am, "Lucas Wang"  wrote:

Thanks Jun. I've changed the KIP with the suggested 2 step upgrade.
Please take a look again when you have time.

Regards,
Lucas


On Thu, Oct 4, 2018 at 10:06 AM Jun Rao  wrote:

> Hi, Lucas,
>
> 200. That's a valid concern. So, we can probably just keep the current
> name.
>
> 201. I am thinking that you would upgrade in the same way as changing
> inter.broker.listener.name. This requires 2 rounds of rolling restart. In
> the first round, we add the controller endpoint to the listeners w/o
> setting controller.listener.name. In the second round, every broker sets
> controller.listener.name. At that point, the controller listener is ready
> in every broker.
>
> Thanks,
>
> Jun
>
> On Tue, Oct 2, 2018 at 10:38 AM, Lucas Wang  wrote:
>
> > Thanks for the further comments, Jun.
> >
> > 200. Currently in the code base, we have the term of "ControlBatch"
> related
> > to
> > idempotent/transactional producing. Do you think it's a concern for
> reusing
> > the term "control"?
> >
> > 201. It's not clear to me how it would work by following the same
> strategy
> > for "controller.listener.name".
> > Say the new controller has its "controller.listener.name" set to the
> value
> > "CONTROLLER", and broker 1
> > has picked up this KIP by announcing
> > "endpoints": [
> > "CONTROLLER://broker1.example.com:9091",
> > "INTERNAL://broker1.example.com:9092",
> > "EXTERNAL://host1.example.com:9093"
> > ],
> >
> > while broker2 has not picked up the change, and is announcing
> > "endpoints": [
> > "INTERNAL://broker2.example.com:9092",
> > "EXTERNAL://host2.example.com:9093"
> > ],
> > to support both broker 1 for the new behavior and broker 2 for the old
> > behavior, it seems the controller must
> > check their published endpoints. Am I missing something?
> >
> > Thanks!
> > Lucas
> >
> > On Mon, Oct 1, 2018 at 6:29 PM Jun Rao  wrote:
> >
> > > Hi, Lucas,
> > >
> > > Sorry for the delay. The updated wiki looks good to me overall. Just a
> > > couple more minor comments.
> > >
> > > 200.
kafka.network:name=ControllerRequestQueueSize,type=RequestChannel:
> > The
> > > name ControllerRequestQueueSize gives the impression that it's only
for
> > the
> > > controller broker. Perhaps we can just rename all metrics and configs
> > from
> > > controller to control. This indicates that the threads and the queues
> are
> > > for the control requests (as opposed to data requests).
> > >
> > > 201. ": In this scenario, the controller
> will
> > > have the "controller.listener.name" config set to a value like
> > > "CONTROLLER", however the broker's exposed endpoints do not have an
> entry
> > > corresponding to the new listener name. Hence the controller should
> > > preserve the existing behavior by determining the endpoint using
> > > *inter-broker-listener-name *value. The end result should be the same
> > > behavior as today." Currently, the controller makes connections based
> on
> > > its local inter.broker.listener.name config without checking the
> target
> > > broker's ZK registration. For consistency, perhaps we can just follow
> the
> > > same strategy for controller.listener.name. This existing behavior
> seems
> > > simpler to understand and has the benefit of catching inconsistent
> > configs
> > > across brokers.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Mon, Oct 1, 2018 at 8:43 AM, Lucas Wang 
> > wrote:
> > >
> > > > Hi Jun,
> > > >
> > > > Sorry to bother you again. Can you please take a look at the wiki
> again
> > > > when you have time?
> > > >
> > > > Thanks a lot!
> > > > Lucas
> > > >
> > > > On Wed, Sep 19, 2018 at 3:57 PM Lucas Wang 
> > > wrote:
> > > >
> > > > > Hi Jun,
> > > > >
> > > > > Thanks a lot for the detailed explanation.
> > > > > I've restored the wiki to a previous version that does not require
> > > config
> > > > > changes,
> > > > > and keeps the current behavior with the proposed changes turned
off
> > by
> > > > > default.
> > > > > I'd appreciate it if you can review it again.
> > > > >
> > > > > Thanks!
> > > > > Lucas
> > > > >
> > > > > On Tue, Sep 18, 2018 at 1:48 PM Jun Rao  wrote:
> > > > >
> > > > >> Hi, Lucas,
> > > > >>
> > > > >> When upgrading to a minor release, I think the expectation is
> that a
> > > > user
> > > > >> wouldn't need to make any config changes, other than the usual
> > > > >> inter.broker.protocol. If we require other config changes during
> an
> > > > >> upgrade, then it's probably better to do that in a major release.
> > > > >>
> > > > >> Regarding your proposal, I think removing host/advertised_host in
> > > favor
> > > > of
> > > > >> listeners:advertised_listeners seems useful regardless of this
> KIP.
> > > > >> However, that can probably wait until a major 

Re: [VOTE] KIP-291: Have separate queues for control requests and data requests

2018-10-04 Thread Lucas Wang
Thanks Jun. I've changed the KIP with the suggested 2 step upgrade.
Please take a look again when you have time.

Regards,
Lucas

On Thu, Oct 4, 2018 at 10:06 AM Jun Rao  wrote:

> Hi, Lucas,
>
> 200. That's a valid concern. So, we can probably just keep the current
> name.
>
> 201. I am thinking that you would upgrade in the same way as changing
> inter.broker.listener.name. This requires 2 rounds of rolling restart. In
> the first round, we add the controller endpoint to the listeners w/o
> setting controller.listener.name. In the second round, every broker sets
> controller.listener.name. At that point, the controller listener is ready
> in every broker.
>
> Thanks,
>
> Jun
>
> On Tue, Oct 2, 2018 at 10:38 AM, Lucas Wang  wrote:
>
> > Thanks for the further comments, Jun.
> >
> > 200. Currently in the code base, we have the term of "ControlBatch"
> related
> > to
> > idempotent/transactional producing. Do you think it's a concern for
> reusing
> > the term "control"?
> >
> > 201. It's not clear to me how it would work by following the same
> strategy
> > for "controller.listener.name".
> > Say the new controller has its "controller.listener.name" set to the
> value
> > "CONTROLLER", and broker 1
> > has picked up this KIP by announcing
> > "endpoints": [
> > "CONTROLLER://broker1.example.com:9091",
> > "INTERNAL://broker1.example.com:9092",
> > "EXTERNAL://host1.example.com:9093"
> > ],
> >
> > while broker2 has not picked up the change, and is announcing
> > "endpoints": [
> > "INTERNAL://broker2.example.com:9092",
> > "EXTERNAL://host2.example.com:9093"
> > ],
> > to support both broker 1 for the new behavior and broker 2 for the old
> > behavior, it seems the controller must
> > check their published endpoints. Am I missing something?
> >
> > Thanks!
> > Lucas
> >
> > On Mon, Oct 1, 2018 at 6:29 PM Jun Rao  wrote:
> >
> > > Hi, Lucas,
> > >
> > > Sorry for the delay. The updated wiki looks good to me overall. Just a
> > > couple more minor comments.
> > >
> > > 200. kafka.network:name=ControllerRequestQueueSize,type=RequestChannel:
> > The
> > > name ControllerRequestQueueSize gives the impression that it's only for
> > the
> > > controller broker. Perhaps we can just rename all metrics and configs
> > from
> > > controller to control. This indicates that the threads and the queues
> are
> > > for the control requests (as opposed to data requests).
> > >
> > > 201. ": In this scenario, the controller
> will
> > > have the "controller.listener.name" config set to a value like
> > > "CONTROLLER", however the broker's exposed endpoints do not have an
> entry
> > > corresponding to the new listener name. Hence the controller should
> > > preserve the existing behavior by determining the endpoint using
> > > *inter-broker-listener-name *value. The end result should be the same
> > > behavior as today." Currently, the controller makes connections based
> on
> > > its local inter.broker.listener.name config without checking the
> target
> > > broker's ZK registration. For consistency, perhaps we can just follow
> the
> > > same strategy for controller.listener.name. This existing behavior
> seems
> > > simpler to understand and has the benefit of catching inconsistent
> > configs
> > > across brokers.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Mon, Oct 1, 2018 at 8:43 AM, Lucas Wang 
> > wrote:
> > >
> > > > Hi Jun,
> > > >
> > > > Sorry to bother you again. Can you please take a look at the wiki
> again
> > > > when you have time?
> > > >
> > > > Thanks a lot!
> > > > Lucas
> > > >
> > > > On Wed, Sep 19, 2018 at 3:57 PM Lucas Wang 
> > > wrote:
> > > >
> > > > > Hi Jun,
> > > > >
> > > > > Thanks a lot for the detailed explanation.
> > > > > I've restored the wiki to a previous version that does not require
> > > config
> > > > > changes,
> > > > > and keeps the current behavior with the proposed changes turned off
> > by
> > > > > default.
> > > > > I'd appreciate it if you can review it again.
> > > > >
> > > > > Thanks!
> > > > > Lucas
> > > > >
> > > > > On Tue, Sep 18, 2018 at 1:48 PM Jun Rao  wrote:
> > > > >
> > > > >> Hi, Lucas,
> > > > >>
> > > > >> When upgrading to a minor release, I think the expectation is
> that a
> > > > user
> > > > >> wouldn't need to make any config changes, other than the usual
> > > > >> inter.broker.protocol. If we require other config changes during
> an
> > > > >> upgrade, then it's probably better to do that in a major release.
> > > > >>
> > > > >> Regarding your proposal, I think removing host/advertised_host in
> > > favor
> > > > of
> > > > >> listeners:advertised_listeners seems useful regardless of this
> KIP.
> > > > >> However, that can probably wait until a major release.
> > > > >>
> > > > >> As for the controller listener, I am not sure if one has to set
> it.
> > To
> > > > >> make
> > > >> a cluster healthy, one sort of has to make sure that the request
> > > queue
> > > > is
> > > > >> never 

Build failed in Jenkins: kafka-1.1-jdk7 #214

2018-10-04 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H30 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read 6962f260e03432b1db01ec6288db025b442be7d3
error: Could not read 77ab775fd225ff46355f615cceda61eeff32e29e
error: Could not read 8d6f833cd86b5cdd31a885e105c9e9b2cc99e2a0
remote: Enumerating objects: 3173, done.
remote: Counting objects: ... (progress counter output elided)

Re: [VOTE] KIP-371: Add a configuration to build custom SSL principal name

2018-10-04 Thread Manikumar
Bump.

On Mon, Sep 24, 2018 at 8:44 PM Manikumar  wrote:

> Bump. This KIP requires one more binding vote.
> Please take a look.
>
> Thanks,
>
> On Sun, Sep 23, 2018 at 9:14 PM Satish Duggana 
> wrote:
>
>> +1 (non binding)
>>
>> Thanks,
>> Satish.
>>
>> On Fri, Sep 21, 2018 at 3:26 PM, Rajini Sivaram 
>> wrote:
>> > Hi Manikumar,
>> >
>> > Thanks for the KIP!
>> >
>> > +1 (binding)
>> >
>> > On Thu, Sep 20, 2018 at 8:53 PM, Priyank Shah 
>> wrote:
>> >
>> >> +1(non-binding)
>> >>
>> >> On 9/20/18, 9:18 AM, "Harsha Chintalapani"  wrote:
>> >>
>> >> +1 (binding).
>> >>
>> >> Thanks,
>> >> Harsha
>> >>
>> >>
>> >> On September 19, 2018 at 5:19:51 AM, Manikumar (
>> >> manikumar.re...@gmail.com) wrote:
>> >>
>> >> Hi All,
>> >>
>> >> I would like to start voting on KIP-371, which adds a configuration
>> >> option
>> >> for building custom SSL principal names.
>> >>
>> >> KIP:
>> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> >> 371%3A+Add+a+configuration+to+build+custom+SSL+principal+name
>> >>
>> >> Discussion Thread:
>> >>
>> https://lists.apache.org/thread.html/e346f5e3e3dd1feb863594e40eac1e
>> >> d54138613a667f319b99344710@%3Cdev.kafka.apache.org%3E
>> >>
>> >> Thanks,
>> >> Manikumar
>> >>
>> >>
>> >>
>>
>


Re: [VOTE] KIP-291: Have separate queues for control requests and data requests

2018-10-04 Thread Jun Rao
Hi, Lucas,

200. That's a valid concern. So, we can probably just keep the current name.

201. I am thinking that you would upgrade in the same way as changing
inter.broker.listener.name. This requires 2 rounds of rolling restart. In
the first round, we add the controller endpoint to the listeners w/o
setting controller.listener.name. In the second round, every broker sets
controller.listener.name. At that point, the controller listener is ready
in every broker.

Thanks,

Jun
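
For illustration, a hypothetical server.properties excerpt for the two-round upgrade Jun describes; the listener names, hosts, and ports are made up, and controller.listener.name is the new config proposed in KIP-291:

    # Round 1: expose the controller endpoint on every broker, but do not enable it yet.
    listeners=CONTROLLER://broker1.example.com:9091,INTERNAL://broker1.example.com:9092,EXTERNAL://host1.example.com:9093
    inter.broker.listener.name=INTERNAL

    # Round 2: once all brokers advertise the CONTROLLER endpoint, switch it on.
    controller.listener.name=CONTROLLER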

On Tue, Oct 2, 2018 at 10:38 AM, Lucas Wang  wrote:

> Thanks for the further comments, Jun.
>
> 200. Currently in the code base, we have the term of "ControlBatch" related
> to
> idempotent/transactional producing. Do you think it's a concern for reusing
> the term "control"?
>
> 201. It's not clear to me how it would work by following the same strategy
> for "controller.listener.name".
> Say the new controller has its "controller.listener.name" set to the value
> "CONTROLLER", and broker 1
> has picked up this KIP by announcing
> "endpoints": [
> "CONTROLLER://broker1.example.com:9091",
> "INTERNAL://broker1.example.com:9092",
> "EXTERNAL://host1.example.com:9093"
> ],
>
> while broker2 has not picked up the change, and is announcing
> "endpoints": [
> "INTERNAL://broker2.example.com:9092",
> "EXTERNAL://host2.example.com:9093"
> ],
> to support both broker 1 for the new behavior and broker 2 for the old
> behavior, it seems the controller must
> check their published endpoints. Am I missing something?
>
> Thanks!
> Lucas
>
> On Mon, Oct 1, 2018 at 6:29 PM Jun Rao  wrote:
>
> > Hi, Lucas,
> >
> > Sorry for the delay. The updated wiki looks good to me overall. Just a
> > couple more minor comments.
> >
> > 200. kafka.network:name=ControllerRequestQueueSize,type=RequestChannel:
> The
> > name ControllerRequestQueueSize gives the impression that it's only for
> the
> > controller broker. Perhaps we can just rename all metrics and configs
> from
> > controller to control. This indicates that the threads and the queues are
> > for the control requests (as opposed to data requests).
> >
> > 201. ": In this scenario, the controller will
> > have the "controller.listener.name" config set to a value like
> > "CONTROLLER", however the broker's exposed endpoints do not have an entry
> > corresponding to the new listener name. Hence the controller should
> > preserve the existing behavior by determining the endpoint using
> > *inter-broker-listener-name *value. The end result should be the same
> > behavior as today." Currently, the controller makes connections based on
> > its local inter.broker.listener.name config without checking the target
> > broker's ZK registration. For consistency, perhaps we can just follow the
> > same strategy for controller.listener.name. This existing behavior seems
> > simpler to understand and has the benefit of catching inconsistent
> configs
> > across brokers.
> >
> > Thanks,
> >
> > Jun
> >
> > On Mon, Oct 1, 2018 at 8:43 AM, Lucas Wang 
> wrote:
> >
> > > Hi Jun,
> > >
> > > Sorry to bother you again. Can you please take a look at the wiki again
> > > when you have time?
> > >
> > > Thanks a lot!
> > > Lucas
> > >
> > > On Wed, Sep 19, 2018 at 3:57 PM Lucas Wang 
> > wrote:
> > >
> > > > Hi Jun,
> > > >
> > > > Thanks a lot for the detailed explanation.
> > > > I've restored the wiki to a previous version that does not require
> > config
> > > > changes,
> > > > and keeps the current behavior with the proposed changes turned off
> by
> > > > default.
> > > > I'd appreciate it if you can review it again.
> > > >
> > > > Thanks!
> > > > Lucas
> > > >
> > > > On Tue, Sep 18, 2018 at 1:48 PM Jun Rao  wrote:
> > > >
> > > >> Hi, Lucas,
> > > >>
> > > >> When upgrading to a minor release, I think the expectation is that a
> > > user
> > > >> wouldn't need to make any config changes, other than the usual
> > > >> inter.broker.protocol. If we require other config changes during an
> > > >> upgrade, then it's probably better to do that in a major release.
> > > >>
> > > >> Regarding your proposal, I think removing host/advertised_host in
> > favor
> > > of
> > > >> listeners:advertised_listeners seems useful regardless of this KIP.
> > > >> However, that can probably wait until a major release.
> > > >>
> > > >> As for the controller listener, I am not sure if one has to set it.
> To
> > > >> make
> > > >> a cluster healthy, one sort of has to make sure that the request
> > queue
> > > is
> > > >> never full and no request will be sitting in the request queue for
> > long.
> > > >> If
> > > >> one does that, setting the controller listener may not be necessary.
> > On
> > > >> the
> > > >> flip side, even if one sets the controller listener, but the request
> > > queue
> > > >> and the request time for the data part are still high, the cluster
> may
> > > >> still not be healthy. Given that we have already started the 2.1
> > release
> > > >> planning, perhaps we can 

Re: [VOTE] KIP-328: Ability to suppress updates for KTables

2018-10-04 Thread John Roesler
Update: Here's a link to the documented eviction behavior:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-328%3A+Ability+to+suppress+updates+for+KTables#KIP-328:AbilitytosuppressupdatesforKTables-BufferEvictionBehavior(akaSuppressEmitBehavior)

On Thu, Oct 4, 2018 at 11:12 AM John Roesler  wrote:

> Hello again, all,
>
> During review, we realized that there is a relationship between this
> (KIP-328) and KIP-372.
>
> KIP-372 proposed to allow naming *all* internal topics, and KIP-328 adds a
> new internal topic (the changelog for the suppression buffer).
>
> However, we didn't consider this relationship in either KIP discussion,
> possibly since they were discussed and accepted concurrently.
>
> I have updated KIP-328 to effectively "merge" the two KIPs by adding a
> `withName` builder to Suppressed in the style of the other builders added
> in KIP-372:
> https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=87295409=20=19
> .
>
> I think this should be uncontroversial, but as always, let me know of any
> objections you may have.
>
>
> Also, note that I'll be updating the KIP to document the exact buffer
> eviction behavior. I previously treated this as an internal implementation
> detail, but after consideration, I think users would want to know the
> eviction semantics, especially if they are debugging their applications and
> scrutinizing the sequence of emitted records.
>
> Thanks,
> -John
>
> On Thu, Sep 20, 2018 at 5:34 PM John Roesler  wrote:
>
>> Hello all,
>>
>> During review of https://github.com/apache/kafka/pull/5567 for KIP-328,
>> the reviewers raised many good suggestions for the API.
>>
>> The basic design of the suppress operation remains the same, but the
>> config object is (in my opinion) far more ergonomic with their
>> suggestions.
>>
>> I have updated the KIP to reflect the new config (
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-328%3A+Ability+to+suppress+updates+for+KTables#KIP-328:AbilitytosuppressupdatesforKTables-NewSuppressOperator
>> )
>>
>> Please let me know if anyone wishes to change their vote, and we call for
>> a recast.
>>
>> Thanks,
>> -John
>>
>> On Thu, Aug 23, 2018 at 12:54 PM Matthias J. Sax 
>> wrote:
>>
>>> It seems nobody has any objections against the change.
>>>
>>> Thanks for the KIP improvement. I'll go ahead and merge the PR.
>>>
>>>
>>> -Matthias
>>>
>>> On 8/21/18 2:44 PM, John Roesler wrote:
>>> > Hello again, all,
>>> >
>>> > I belatedly had a better idea for adding grace period to the Windows
>>> class
>>> > hierarchy (TimeWindows, UnlimitedWindows, JoinWindows). Instead of
>>> > providing the grace-setter in the abstract class and having to retract
>>> it
>>> > in UnlimitedWindows, I've made the getter abstract method in Windows
>>> and
>>> > only added setters to Time and Join windows.
>>> >
>>> > This should not only improve the ergonomics of grace period, but make
>>> the
>>> > whole class hierarchy more maintainable.
>>> >
>>> > See the PR for more details: https://github.com/apache/kafka/pull/5536
>>> >
>>> > I've updated the KIP accordingly. Here's the diff:
>>> >
>>> https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=87295409=11=9
>>> >
>>> > Please let me know if this changes your vote.
>>> >
>>> > Thanks,
>>> > -John
>>> >
>>> > On Mon, Aug 13, 2018 at 5:20 PM John Roesler 
>>> wrote:
>>> >
>>> >> Hey all,
>>> >>
>>> >> I just wanted to let you know that a few small issues surfaced during
>>> >> implementation and review. I've updated the KIP. Here's the diff:
>>> >>
>>> https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=87295409=9=8
>>> >>
>>> >> Basically:
>>> >> * the metrics named "*-event-*" are inconsistent with existing
>>> >> nomenclature, and will be "*-record-*" instead (late records instead
>>> of
>>> >> late events, for example)
>>> >> * the apis taking and returning Duration will use long millis
>>> instead. We
>>> >> do want to transition to Duration in the future, but we shouldn't do
>>> it
>>> >> piecemeal.
>>> >>
>>> >> Thanks,
>>> >> -John
>>> >>
>>> >> On Tue, Aug 7, 2018 at 12:07 PM John Roesler 
>>> wrote:
>>> >>
>>> >>> Thanks everyone, KIP-328 has passed with 3 binding votes (Guozhang,
>>> >>> Damian, and Matthias) and 3 non-binding (Ted, Bill, and me).
>>> >>>
>>> >>> Thanks for your time,
>>> >>> -John
>>> >>>
>>> >>> On Mon, Aug 6, 2018 at 6:35 PM Matthias J. Sax <
>>> matth...@confluent.io>
>>> >>> wrote:
>>> >>>
>>>  +1 (binding)
>>> 
>>>  Thanks for the KIP.
>>> 
>>> 
>>>  -Matthias
>>> 
>>>  On 8/3/18 12:52 AM, Damian Guy wrote:
>>> > Thanks John! +1
>>> >
>>> > On Mon, 30 Jul 2018 at 23:58 Guozhang Wang 
>>> wrote:
>>> >
>>> >> Yes, the addendum lgtm as well. Thanks!
>>> >>
>>> >> On Mon, Jul 30, 2018 at 3:34 PM, John Roesler 
>>>  wrote:
>>> >>
>>> >>> Another thing that came up after I started working on an
>>>  implementation
>>> >> is
>>> 

Jenkins build is back to normal : kafka-trunk-jdk8 #3063

2018-10-04 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-328: Ability to suppress updates for KTables

2018-10-04 Thread John Roesler
Hello again, all,

During review, we realized that there is a relationship between this
(KIP-328) and KIP-372.

KIP-372 proposed to allow naming *all* internal topics, and KIP-328 adds a
new internal topic (the changelog for the suppression buffer).

However, we didn't consider this relationship in either KIP discussion,
possibly since they were discussed and accepted concurrently.

I have updated KIP-328 to effectively "merge" the two KIPs by adding a
`withName` builder to Suppressed in the style of the other builders added
in KIP-372:
https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=87295409=20=19
.
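
As a rough sketch of what the merged API might look like at a call site, assuming the Suppressed builders currently on the KIP page (untilWindowCloses, BufferConfig) plus the new withName; the table variable, store name, and windowed serde are illustrative:

    windowedCounts
        .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded())
                            .withName("orders-suppression"))
        .toStream()
        .to("final-counts", Produced.with(windowedKeySerde, Serdes.Long()));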

I think this should be uncontroversial, but as always, let me know of any
objections you may have.


Also, note that I'll be updating the KIP to document the exact buffer
eviction behavior. I previously treated this as an internal implementation
detail, but after consideration, I think users would want to know the
eviction semantics, especially if they are debugging their applications and
scrutinizing the sequence of emitted records.

Thanks,
-John

On Thu, Sep 20, 2018 at 5:34 PM John Roesler  wrote:

> Hello all,
>
> During review of https://github.com/apache/kafka/pull/5567 for KIP-328,
> the reviewers raised many good suggestions for the API.
>
> The basic design of the suppress operation remains the same, but the
> config object is (in my opinion) far more ergonomic with their suggestions.
>
> I have updated the KIP to reflect the new config (
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-328%3A+Ability+to+suppress+updates+for+KTables#KIP-328:AbilitytosuppressupdatesforKTables-NewSuppressOperator
> )
>
> Please let me know if anyone wishes to change their vote, and we call for
> a recast.
>
> Thanks,
> -John
>
> On Thu, Aug 23, 2018 at 12:54 PM Matthias J. Sax 
> wrote:
>
>> It seems nobody has any objections against the change.
>>
>> Thanks for the KIP improvement. I'll go ahead and merge the PR.
>>
>>
>> -Matthias
>>
>> On 8/21/18 2:44 PM, John Roesler wrote:
>> > Hello again, all,
>> >
>> > I belatedly had a better idea for adding grace period to the Windows
>> class
>> > hierarchy (TimeWindows, UnlimitedWindows, JoinWindows). Instead of
>> > providing the grace-setter in the abstract class and having to retract
>> it
>> > in UnlimitedWindows, I've made the getter abstract method in Windows and
>> > only added setters to Time and Join windows.
>> >
>> > This should not only improve the ergonomics of grace period, but make
>> the
>> > whole class hierarchy more maintainable.
>> >
>> > See the PR for more details: https://github.com/apache/kafka/pull/5536
>> >
>> > I've updated the KIP accordingly. Here's the diff:
>> >
>> https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=87295409=11=9
>> >
>> > Please let me know if this changes your vote.
>> >
>> > Thanks,
>> > -John
>> >
>> > On Mon, Aug 13, 2018 at 5:20 PM John Roesler  wrote:
>> >
>> >> Hey all,
>> >>
>> >> I just wanted to let you know that a few small issues surfaced during
>> >> implementation and review. I've updated the KIP. Here's the diff:
>> >>
>> https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=87295409=9=8
>> >>
>> >> Basically:
>> >> * the metrics named "*-event-*" are inconsistent with existing
>> >> nomenclature, and will be "*-record-*" instead (late records instead of
>> >> late events, for example)
>> >> * the apis taking and returning Duration will use long millis instead.
>> We
>> >> do want to transition to Duration in the future, but we shouldn't do it
>> >> piecemeal.
>> >>
>> >> Thanks,
>> >> -John
>> >>
>> >> On Tue, Aug 7, 2018 at 12:07 PM John Roesler 
>> wrote:
>> >>
>> >>> Thanks everyone, KIP-328 has passed with 3 binding votes (Guozhang,
>> >>> Damian, and Matthias) and 3 non-binding (Ted, Bill, and me).
>> >>>
>> >>> Thanks for your time,
>> >>> -John
>> >>>
>> >>> On Mon, Aug 6, 2018 at 6:35 PM Matthias J. Sax > >
>> >>> wrote:
>> >>>
>>  +1 (binding)
>> 
>>  Thanks for the KIP.
>> 
>> 
>>  -Matthias
>> 
>>  On 8/3/18 12:52 AM, Damian Guy wrote:
>> > Thanks John! +1
>> >
>> > On Mon, 30 Jul 2018 at 23:58 Guozhang Wang 
>> wrote:
>> >
>> >> Yes, the addendum lgtm as well. Thanks!
>> >>
>> >> On Mon, Jul 30, 2018 at 3:34 PM, John Roesler 
>>  wrote:
>> >>
>> >>> Another thing that came up after I started working on an
>>  implementation
>> >> is
>> >>> that in addition to deprecating "retention" from the Windows
>>  interface,
>> >> we
>> >>> also need to deprecate "segmentInterval", for the same reasons. I
>>  simply
>> >>> overlooked it previously. I've updated the KIP accordingly.
>> >>>
>> >>> Hopefully, this doesn't change anyone's vote.
>> >>>
>> >>> Thanks,
>> >>> -John
>> >>>
>> >>> On Mon, Jul 30, 2018 at 5:31 PM John Roesler 
>>  wrote:
>> >>>
>>  Thanks Guozhang,
>> 

[jira] [Reopened] (KAFKA-7477) Improve Streams close timeout semantics

2018-10-04 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reopened KAFKA-7477:


> Improve Streams close timeout semantics
> ---
>
> Key: KAFKA-7477
> URL: https://issues.apache.org/jira/browse/KAFKA-7477
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Assignee: Nikolay Izhikov
>Priority: Minor
>  Labels: kip, newbie
>
> See [https://github.com/apache/kafka/pull/5682#discussion_r221473451]
> The current timeout semantics are a little "magical":
>  * 0 means to block forever
>  * negative numbers cause the close to complete immediately without checking 
> the state
> I think this would make more sense:
>  * reject negative numbers
>  * make 0 just signal and return immediately (after checking the state once)
>  * if I want to wait "forever", I can use {{ofYears(1)}} or 
> {{ofMillis(Long.MAX_VALUE)}} or some other intuitively "long enough to be 
> forever" value instead of a magic value.
>  
> Part of 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-358%3A+Migrate+Streams+API+to+Duration+instead+of+long+ms+times
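
A small sketch of the proposed call-site semantics, assuming the Duration overload of KafkaStreams#close() being introduced by KIP-358; each line is an independent alternative, and the values are only examples:

    streams.close(Duration.ZERO);                     // check the state once, signal shutdown, return immediately
    streams.close(Duration.ofMillis(Long.MAX_VALUE)); // "long enough to be forever" instead of a magic value
    streams.close(Duration.ofMillis(-1));             // proposed to be rejected rather than silently skipping the wait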



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7477) Improve Streams close timeout semantics

2018-10-04 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-7477.

Resolution: Fixed

> Improve Streams close timeout semantics
> ---
>
> Key: KAFKA-7477
> URL: https://issues.apache.org/jira/browse/KAFKA-7477
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Assignee: Nikolay Izhikov
>Priority: Minor
>  Labels: kip, newbie
>
> See [https://github.com/apache/kafka/pull/5682#discussion_r221473451]
> The current timeout semantics are a little "magical":
>  * 0 means to block forever
>  * negative numbers cause the close to complete immediately without checking 
> the state
> I think this would make more sense:
>  * reject negative numbers
>  * make 0 just signal and return immediately (after checking the state once)
>  * if I want to wait "forever", I can use {{ofYears(1)}} or 
> {{ofMillis(Long.MAX_VALUE)}} or some other intuitively "long enough to be 
> forever" value instead of a magic value.
>  
> Part of 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-358%3A+Migrate+Streams+API+to+Duration+instead+of+long+ms+times



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSSION] KIP-376: Implement AutoClosable on appropriate classes that has close()

2018-10-04 Thread John Roesler
> Overall, it seems that `AutoCloseable` might be the better interface to
> use though because it's more generic.

This sounds good to me. I don't know whether or not it's worth actually
transitioning any existing classes from Closeable to AutoCloseable.
I don't think it would affect any invocations, but there's the off-chance
that someone has assigned an instance to a Closeable variable, which would
break.

Overall, it seems like maybe we should leave any Closeable implementations
alone and just add AutoCloseable where there's no existing closeable
interface.

-John
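
A minimal sketch of the pattern discussed below, with a made-up class name: implementing AutoCloseable while narrowing close() to throw no checked exception keeps try-with-resources usable without forcing callers into a try/catch.

    public class ExampleResource implements AutoCloseable {
        @Override
        public void close() {  // legal: narrower than AutoCloseable's "throws Exception"
            // release the underlying resources; ideally idempotent, as the JavaDoc recommends
        }

        public void doSomething() { /* ... */ }
    }

    // Callers get try-with-resources without a checked exception to handle:
    try (ExampleResource r = new ExampleResource()) {
        r.doSomething();
    }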

On Wed, Oct 3, 2018 at 12:06 PM Matthias J. Sax 
wrote:

> Thanks for clarifying. I thought that if we inherit `close() throws
> Exception` we need to declare the same exception -- this would have been
> an issue. Thus, my backward compatibility concerns are resolved.
>
> About try-with-resources: I think allowing the use of try-with-resources is
> the core motivation of this KIP to begin with. Also note that `Closeable
> extends AutoCloseable`. Thus, both interfaces work with try-with-resources.
>
> Overall, it seems that `AutoCloseable` might be the better interface to
> use though because it's more generic.
>
> -Matthias
>
>
> On 10/3/18 9:48 AM, Colin McCabe wrote:
> > On Sun, Sep 30, 2018, at 13:19, Matthias J. Sax wrote:
> >> Closeable is part of `java.io` while AutoCloseable is part of
> >> `java.lang`. Thus, the second one is more generic. Also, JavaDoc points
> >> out that `Closeable#close()` must be idempotent while
> >> `AutoCloseable#close()` can have side effects.
> >
> > That's an interesting note.   Looks like the exact JavaDoc text is:
> >
> >  > Note that unlike the close method of Closeable, this close method is
> not
> >  > required to be idempotent. In other words, calling this close method
> more
> >  > than once may have some visible side effect, unlike Closeable.close
> which
> >  > is required to have no effect if called more than once. However,
> >  > implementers of this interface are strongly encouraged to make their
> close
> >  > methods idempotent.
> >
> > So you can make it non-idempotent, but it's still recommended to make it
> idempotent.
> >
> >>
> >> Thus, I am not sure atm which one suits better.
> >>
> >> However, it's a good hint that `AutoCloseable#close()` declares `throws
> >> Exception` and thus, it seems to be a backward incompatible change.
> >> Hence, I am not sure if we can actually move forward easily with this
> KIP.
> >
> > I was worried about that too, but actually you can implement the
> AutoCloseable interface without declaring "throws Exception".  In general,
> you can implement an interface while throwing a subset of the possible
> checked exceptions.
> >
> > There is one big benefit of AutoCloseable that I haven't seen mentioned
> here yet: the ability to use constructrs like try-with-resources
> transparently.  So you can do things like
> >
> >> try (MyClass m = new MyClass()) {
> >>   m.doSomething(...);
> >> }
> >
> > best,
> > Colin
> >
> >>
> >> Nit: `RecordCollectorImpl` is an internal class that implements
> >> `RecordCollector` -- should `RecordCollector extends AutoCloseable`?
> >>
> >>
> >> -Matthias
> >>
> >>
> >> On 9/27/18 7:46 PM, Chia-Ping Tsai wrote:
>  (Although I am not quite sure
>  when one is more desirable than the other)
> >>>
> >>> Most of Kafka's classes implementing Closeable/AutoCloseable don't
> throw checked exceptions in their close() method. Perhaps we should have a
> "KafkaCloseable" interface which has a close() method without throwing any
> checked exception...
> >>>
> >>> On 2018/09/27 19:11:20, Yishun Guan  wrote:
>  Hi All,
> 
>  Chia-Ping, I agree, similar to VerifiableConsumer, VerifiableProducer
>  should be implementing Closeable as well (Although I am not quite sure
>  when one is more desirable than the other), also I just looked through
>  your list - these are some great additions, I will add them to the
>  list.
> 
>  Thanks,
>  Yishun
>  On Thu, Sep 27, 2018 at 3:26 AM Dongjin Lee 
> wrote:
> >
> > Hi Yishun,
> >
> > Thank you for your great KIP. In fact, I have also encountered the
> cases
> > where Autoclosable is so desired several times! Let me inspect more
> > candidate classes as well.
> >
> > +1. I also refined your KIP a little bit.
> >
> > Best,
> > Dongjin
> >
> > On Thu, Sep 27, 2018 at 12:21 PM Chia-Ping Tsai 
> wrote:
> >
> >> hi Yishun
> >>
> >> Thanks for nice KIP!
> >>
> >> Q1)
> >> Why VerifiableProducer extend Closeable rather than AutoCloseable?
> >>
> >> Q2)
> >> I grep project and then noticed there are other close methods but
> do not
> >> implement AutoCloseable.
> >> For example:
> >> 1) WorkerConnector
> >> 2) MemoryRecordsBuilder
> >> 3) MetricsReporter
> >> 4) ExpiringCredentialRefreshingLogin
> >> 5) KafkaChannel
> >> 6) ConsumerInterceptor
> >> 7) 

[jira] [Created] (KAFKA-7480) GlobalThread should honor custom auto.offset.reset policy

2018-10-04 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-7480:
--

 Summary: GlobalThread should honor custom auto.offset.reset policy
 Key: KAFKA-7480
 URL: https://issues.apache.org/jira/browse/KAFKA-7480
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax


With KAFKA-6121 we improved Kafka Streams resilience and correctness with 
regard to consumer auto.offset.reset and state cleanup.

Back then, we decided to let GlobalStreamThread die and not handle 
InvalidOffsetException during regular processing, because this error indicates 
a fatal issue and the user should be notified about it. However, as reported on 
the user mailing list, the only thing a user can do is restart the 
application (and investigate the root cause). During restart, the state will be 
cleaned up and bootstrapped correctly.

Thus, we might want to allow users to specify a more resilient configuration 
for this case and log an ERROR message if the error occurs. To ensure 
consistency, we might not allow setting the reset policy to "latest" though 
(needs discussion). By default, we can still keep "none" and fail.

Note: `Topology.addGlobalStore` does not allow setting a reset policy. Thus, 
this might require a KIP to extend `Topology.addGlobalStore` accordingly.
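
For context, a rough sketch of the asymmetry mentioned above, assuming the Topology API as of Kafka 2.0; topic, store, and processor names are made up, and MyGlobalUpdater is a hypothetical ProcessorSupplier:

    Topology topology = new Topology();

    // Regular sources can carry a per-source reset policy:
    topology.addSource(Topology.AutoOffsetReset.EARLIEST, "my-source", "input-topic");

    // addGlobalStore() has no such parameter, hence the possible KIP:
    topology.addGlobalStore(
        Stores.keyValueStoreBuilder(Stores.inMemoryKeyValueStore("global-store"),
                                    Serdes.String(), Serdes.String()).withLoggingDisabled(),
        "global-source",
        new StringDeserializer(), new StringDeserializer(),
        "global-topic",
        "global-processor",
        MyGlobalUpdater::new);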



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Request for contributor permissions

2018-10-04 Thread Matthias J. Sax
Done

On 10/4/18 3:06 AM, 張雅涵 wrote:
> Hi, I'd like to request for contributor permission.
> 
> Thank you very much!!
> 
> Jira ID: littleskyqueen
> 
> cwiki ID: 張雅涵 littleskyqueen
> 
> 
> *Yvonne Chang 張雅涵*
> *亦思科技專業資訊服務團隊*
> 看見新世代資料庫---*HareDB*
> Tel:03-5630345 Ext.18
> Mobile: 0963-756811
> Fax:03-5631345
> 新竹科學園區展業二路4號3樓
> www.is-land.com.tw
> www.haredb.com
> 



signature.asc
Description: OpenPGP digital signature


[jira] [Created] (KAFKA-7479) Call to "producer.initTransaction" hangs when using IP for "bootstrap.servers"

2018-10-04 Thread Gene B. (JIRA)
Gene B. created KAFKA-7479:
--

 Summary: Call to "producer.initTransaction" hangs when using IP 
for "bootstrap.servers"
 Key: KAFKA-7479
 URL: https://issues.apache.org/jira/browse/KAFKA-7479
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 1.0.0
Reporter: Gene B.


When using an IP address for "bootstrap.servers",
And the Kafka server is installed in a VM (VirtualBox),

Then the transactional Kafka client hangs on the call to 
"producer.initTransaction", and the call never completes.

The current workaround is to add the Kafka host's name to the "hosts" file, but 
this approach will not scale.
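
For reference, below is a minimal reproduction sketch of the scenario above
(note that the Java producer method is spelled `initTransactions()`). The IP
address and transactional id are placeholders.

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class InitTransactionsHangRepro {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker reached by raw IP; 192.168.56.101 is a placeholder VirtualBox guest address.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.56.101:9092");
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "demo-tx-id"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The reported hang occurs here: the call blocks and never returns.
            producer.initTransactions();
        }
    }
}
{code}

Hangs like this are commonly caused by the broker advertising a listener name 
the client cannot resolve, which would be consistent with the hosts-file 
workaround above; that is an assumption here, not something confirmed in the 
report.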



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3062

2018-10-04 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 2974, done.
remote: Counting objects: 0% (1/2974) ... 55% [progress output truncated]

Build failed in Jenkins: kafka-trunk-jdk8 #3061

2018-10-04 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: remote: Enumerating objects: 3906, done.
remote: Counting objects: 0% (1/3906) ... 57% (2227/3906) [progress output truncated]

Build failed in Jenkins: kafka-trunk-jdk8 #3060

2018-10-04 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 2974, done.
remote: Counting objects: 0% (1/2974) ... 55% [progress output truncated]

Request for contributor permissions

2018-10-04 Thread 張雅涵
Hi, I'd like to request contributor permission.

Thank you very much!!

Jira ID: littleskyqueen

cwiki ID: 張雅涵 littleskyqueen


*Yvonne Chang 張雅涵*
*亦思科技 Professional IT Services Team*
See the next-generation database---*HareDB*
Tel:03-5630345 Ext.18
Mobile: 0963-756811
Fax:03-5631345
3F., No. 4, Zhanye 2nd Rd., Hsinchu Science Park
www.is-land.com.tw
www.haredb.com


Build failed in Jenkins: kafka-trunk-jdk8 #3059

2018-10-04 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 2974, done.
remote: Counting objects: 0% (1/2974) ... 55% [progress output truncated]

[jira] [Created] (KAFKA-7478) Reduce OAuthBearerLoginModule verbosity

2018-10-04 Thread Stanislav Kozlovski (JIRA)
Stanislav Kozlovski created KAFKA-7478:
--

 Summary: Reduce OAuthBearerLoginModule verbosity
 Key: KAFKA-7478
 URL: https://issues.apache.org/jira/browse/KAFKA-7478
 Project: Kafka
  Issue Type: Improvement
Reporter: Stanislav Kozlovski
Assignee: Stanislav Kozlovski


The OAuthBearerLoginModule is pretty verbose by default, and this fills logs 
with too much information. It would be nice if we could reduce the verbosity by 
default and let the user opt in to these debug-friendly messages:
{code:java}
[INFO] 2018-10-03 16:58:11,986 [qtp1137078855-1798] 
org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule login - 
Login succeeded; invoke commit() to commit it; current committed token count=0 
[INFO] 2018-10-03 16:58:11,986 [qtp1137078855-1798] 
org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule commit - 
Committing my token; current committed token count = 0 
[INFO] 2018-10-03 16:58:11,986 [qtp1137078855-1798] 
org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule commit - 
Done committing my token; committed token count is now 1
[INFO] 2018-10-03 16:58:11,986 [qtp1137078855-1798] 
org.apache.kafka.common.security.oauthbearer.internals.expiring.ExpiringCredentialRefreshingLogin
 login - Successfully logged in.
{code}
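
Until the default changes, one possible workaround is to raise the log
threshold for these classes. The sketch below assumes a log4j 1.x backend and
uses the parent logger name of the classes shown above.

{code:java}
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class QuietOAuthBearerLogin {
    public static void main(String[] args) {
        // Suppress the per-login INFO messages shown above; WARN and ERROR
        // from the OAuth bearer login classes still get through.
        Logger.getLogger("org.apache.kafka.common.security.oauthbearer").setLevel(Level.WARN);

        // Equivalent log4j.properties entry:
        // log4j.logger.org.apache.kafka.common.security.oauthbearer=WARN
    }
}
{code}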



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3058

2018-10-04 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 2974, done.
remote: Counting objects: 0% (1/2974) ... 55% [progress output truncated]

Build failed in Jenkins: kafka-trunk-jdk8 #3057

2018-10-04 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 2974, done.
remote: Counting objects: 0% (1/2974) ... 55% [progress output truncated]

Build failed in Jenkins: kafka-trunk-jdk8 #3056

2018-10-04 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 2974, done.
remote: Counting objects: 0% (1/2974) ... 55% [progress output truncated]