Build failed in Jenkins: kafka-trunk-jdk14 #190

2020-06-05 Thread Apache Jenkins Server
See 

Changes:


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --progress -- https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: cannot fork() for rev-list: Resource temporarily unavailable
error: Could not run 'git rev-list'
error: cannot fork() for fetch-pack: Resource temporarily unavailable

at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2172)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1864)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$500(CliGitAPIImpl.java:78)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:545)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:758)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to H23
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:957)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:146)
at sun.reflect.GeneratedMethodAccessor1196.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:132)
at com.sun.proxy.$Proxy137.execute(Unknown Source)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1152)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1815)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
ERROR: Error cloning remote repo 'origin'
Retrying after 10 seconds
No credentials specified
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://github.com/apache/kafka.git
at 

Build failed in Jenkins: kafka-trunk-jdk14 #189

2020-06-05 Thread Apache Jenkins Server
See 

Changes:


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --progress -- https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*" returned status code 255:
stdout: 
stderr: error: cannot fork() for git-remote-https: Resource temporarily unavailable

at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2172)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1864)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$500(CliGitAPIImpl.java:78)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:545)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:758)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to H23
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:957)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:146)
at sun.reflect.GeneratedMethodAccessor1196.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:132)
at com.sun.proxy.$Proxy137.execute(Unknown Source)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1152)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1815)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
ERROR: Error cloning remote repo 'origin'
Retrying after 10 seconds
No credentials specified
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)
at 

Build failed in Jenkins: kafka-trunk-jdk14 #188

2020-06-05 Thread Apache Jenkins Server
See 

Changes:


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --progress -- https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: cannot fork() for rev-list: Resource temporarily unavailable
error: Could not run 'git rev-list'
error: cannot fork() for fetch-pack: Resource temporarily unavailable

at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2172)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1864)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$500(CliGitAPIImpl.java:78)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:545)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:758)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to H23
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:957)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:146)
at sun.reflect.GeneratedMethodAccessor1196.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:132)
at com.sun.proxy.$Proxy137.execute(Unknown Source)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1152)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1815)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
ERROR: Error cloning remote repo 'origin'
Retrying after 10 seconds
No credentials specified
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://github.com/apache/kafka.git
at 

Build failed in Jenkins: kafka-trunk-jdk14 #187

2020-06-05 Thread Apache Jenkins Server
See 

Changes:


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --progress -- https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: cannot fork() for rev-list: Resource temporarily unavailable
error: Could not run 'git rev-list'
error: cannot fork() for fetch-pack: Resource temporarily unavailable

at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2172)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1864)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$500(CliGitAPIImpl.java:78)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:545)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:758)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to H23
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:957)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:146)
at sun.reflect.GeneratedMethodAccessor1196.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:132)
at com.sun.proxy.$Proxy137.execute(Unknown Source)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1152)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1815)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
ERROR: Error cloning remote repo 'origin'
Retrying after 10 seconds
No credentials specified
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://github.com/apache/kafka.git
at 

Jenkins build is back to normal : kafka-trunk-jdk11 #1544

2020-06-05 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk8 #4615

2020-06-05 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk14 #186

2020-06-05 Thread Apache Jenkins Server
See 

Changes:


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --progress -- https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: cannot fork() for rev-list: Resource temporarily unavailable
error: Could not run 'git rev-list'
error: cannot fork() for fetch-pack: Resource temporarily unavailable

at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2172)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1864)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$500(CliGitAPIImpl.java:78)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:545)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:758)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to H23
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:957)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:146)
at sun.reflect.GeneratedMethodAccessor1196.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:132)
at com.sun.proxy.$Proxy137.execute(Unknown Source)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1152)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1815)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
ERROR: Error cloning remote repo 'origin'
Retrying after 10 seconds
No credentials specified
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git --version # timeout=10
 > git --version # timeout=10
 > git --version # timeout=10
 > git --version # timeout=10
 > git fetch --tags https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed 

Build failed in Jenkins: kafka-trunk-jdk14 #185

2020-06-05 Thread Apache Jenkins Server
See 

Changes:


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --progress -- https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: cannot fork() for rev-list: Resource temporarily unavailable
error: Could not run 'git rev-list'
error: cannot fork() for fetch-pack: Resource temporarily unavailable

at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2172)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1864)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$500(CliGitAPIImpl.java:78)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:545)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:758)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to H23
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:957)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:146)
at sun.reflect.GeneratedMethodAccessor1196.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:132)
at com.sun.proxy.$Proxy137.execute(Unknown Source)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1152)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1815)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
ERROR: Error cloning remote repo 'origin'
Retrying after 10 seconds
No credentials specified
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git --version # timeout=10
 > git --version # timeout=10
 > git --version # timeout=10
 > git --version # timeout=10
 > git fetch --tags https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed 

Jenkins build is back to normal : kafka-2.6-jdk8 #22

2020-06-05 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-599: Throttle Create Topic, Create Partition and Delete Topic Operations

2020-06-05 Thread Anna Povzner
+1 (not binding)

Thanks for the KIP!

-Anna

On Thu, Jun 4, 2020 at 8:26 AM Mickael Maison  wrote:

> +1 (binding)
> Thanks David for looking into this important issue
>
> On Thu, Jun 4, 2020 at 3:59 PM Tom Bentley  wrote:
> >
> > +1 (non binding).
> >
> > Thanks!
> >
> > On Wed, Jun 3, 2020 at 3:51 PM Rajini Sivaram  wrote:
> >
> > > +1 (binding)
> > >
> > > Thanks for the KIP, David!
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > >
> > > On Sun, May 31, 2020 at 3:29 AM Gwen Shapira  wrote:
> > >
> > > > +1 (binding)
> > > >
> > > > Looks great. Thank you for the in-depth design and discussion.
> > > >
> > > > On Fri, May 29, 2020 at 7:58 AM David Jacot  wrote:
> > > >
> > > > > Hi folks,
> > > > >
> > > > > I'd like to start the vote for KIP-599, which proposes a new quota to
> > > > > throttle create topic, create partition, and delete topic operations
> > > > > to protect the Kafka controller:
> > > > >
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-599%3A+Throttle+Create+Topic%2C+Create+Partition+and+Delete+Topic+Operations
> > > > >
> > > > > Please, let me know what you think.
> > > > >
> > > > > Cheers,
> > > > > David
> > > > >
> > > >
> > > >
> > > > --
> > > > Gwen Shapira
> > > > Engineering Manager | Confluent
> > > > 650.450.2760 | @gwenshap
> > > > Follow us: Twitter | blog
> > > >
> > >
>


Re: KIP-599: Throttle Create Topic, Create Partition and Delete Topic Operations

2020-06-05 Thread Anna Povzner
Hi David,

The KIP looks good to me. I am going to the voting thread...

Hi Jun,

Yes, exactly. That's a separate thing from this KIP, so working on the fix.

Thanks,
Anna

On Fri, Jun 5, 2020 at 4:36 PM Jun Rao  wrote:

> Hi, Anna,
>
> Thanks for the comment. For the problem that you described, perhaps we need
> to make the quota checking and recording more atomic?
>
> Hi, David,
>
> Thanks for the updated KIP.  Looks good to me now. Just one minor comment
> below.
>
> 30. controller_mutations_rate: For topic creation and deletion, is the rate
> accumulated at the topic or partition level? It would be useful to make it
> clear in the wiki.
>
> Jun
>
> On Fri, Jun 5, 2020 at 7:23 AM David Jacot  wrote:
>
> > Hi Anna and Jun,
> >
> > You are right. We should allocate up to the quota for each old sample.
> >
> > I have revamped the Throttling Algorithm section to better explain our
> > thought process and the token bucket inspiration.
> >
> > I have also added a chapter with a few guidelines about how to define
> > the quota. There is no magic formula for this, but I give a few insights.
> > I don't have specific numbers that can be used out of the box, so I
> > think that it is better to not put any for the time being. We can always
> > complement it later on in the documentation.
> >
> > Please, take a look and let me know what you think.
> >
> > Cheers,
> > David
> >
> > On Fri, Jun 5, 2020 at 8:37 AM Anna Povzner  wrote:
> >
> > > Hi David and Jun,
> > >
> > > I dug a bit deeper into the Rate implementation, and wanted to confirm
> > > that I do believe that the token bucket behavior is better for the
> > > reasons we already discussed, but wanted to summarize. The main
> > > difference between Rate and token bucket is that the Rate
> > > implementation allows a burst by borrowing from the future, whereas a
> > > token bucket allows a burst by using accumulated tokens from the
> > > previous idle period. Using accumulated tokens smoothes out the rate
> > > measurement in general. Configuring a large burst requires configuring
> > > a large quota window, which causes long delays for bursty workloads,
> > > due to borrowing credits from the future. Perhaps it is useful to add
> > > a summary in the beginning of the Throttling Algorithm section?
> > >
> > > In my previous email, I mentioned the issue we observed with the
> > > bandwidth quota, where a low quota (1MB/s per broker) was limiting
> > > bandwidth visibly below the quota. I thought it was strictly an issue
> > > with the Rate implementation as well, but I found the root cause to be
> > > different, though amplified by the Rate implementation (long throttle
> > > delays of requests in a burst). I will describe it here for
> > > completeness using the following example:
> > >
> > >    - Quota = 1MB/s, default window size and number of samples
> > >    - Suppose there are 6 connections (maximum 6 outstanding requests),
> > >      and each produce request is 5MB. If all requests arrive in a
> > >      burst, the last 4 requests (20MB over the 10MB allowed in a
> > >      window) may get the same throttle time if they are processed
> > >      concurrently. We record the rate under the lock, but then
> > >      calculate throttle time separately after that. So, for each
> > >      request, the observed rate could be 3MB/s, and each request gets
> > >      throttle delay = 20 seconds (instead of 5, 10, 15, 20
> > >      respectively). The delay is longer than the total rate window,
> > >      which results in lower bandwidth than the quota. Since all
> > >      requests got the same delay, they will also arrive in a burst,
> > >      which may also result in a longer delay than necessary. It looks
> > >      pretty easy to fix, so I will open a separate JIRA for it. This
> > >      can be additionally mitigated by token bucket behavior. (A worked
> > >      version of this example follows below.)
> > >
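[Worked version of the example above, as a minimal sketch. It assumes the default 10-second rate window and the simplified delay formula delay = overshoot / quota; this is illustrative only, not Kafka's actual quota code.]

    // Illustrative sketch of the burst example above; not Kafka's quota code.
    // Quota = 1 MB/s over an assumed 10 s window, six 5 MB requests in a burst.
    public class ThrottleDelayExample {
        public static void main(String[] args) {
            double quota = 1_000_000.0;           // 1 MB/s
            double windowSec = 10.0;
            double allowed = quota * windowSec;   // 10 MB per window
            double requestBytes = 5_000_000.0;    // 5 MB per request

            // Sequential processing: the overshoot grows request by request,
            // so the delays are 0, 0, 5, 10, 15, 20 seconds.
            double recorded = 0.0;
            for (int i = 1; i <= 6; i++) {
                recorded += requestBytes;
                double delaySec = Math.max(0.0, recorded - allowed) / quota;
                System.out.printf("request %d: delay = %.0f s%n", i, delaySec);
            }

            // Concurrent processing as described above: all six requests are
            // recorded first, each then sees the same observed rate (3 MB/s)
            // and gets the same worst-case delay of 20 seconds.
            double observedRate = 6 * requestBytes / windowSec;
            double sharedDelay = (observedRate - quota) * windowSec / quota;
            System.out.printf("concurrent delay = %.0f s%n", sharedDelay);
        }
    }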
> > >
> > > For the algorithm "So instead of having one sample equal to 560 in the
> > > last window, we will have 100 samples equal to 5.6.", I agree with
> > > Jun. I would allocate 5 for each old sample that is still in the
> > > overall window. It would be a bit larger granularity than the pure
> > > token bucket (we lose 5 units / mutation once we move past the sample
> > > window), but it is better than the long delay.
> > >
> > > Thanks,
> > >
> > > Anna
> > >
> > >
> > > On Thu, Jun 4, 2020 at 6:33 PM Jun Rao  wrote:
> > >
> > > > Hi, David, Anna,
> > > >
> > > > Thanks for the discussion and the updated wiki.
> > > >
> > > > 11. If we believe the token bucket behavior is better in terms of
> > > > handling the burst behavior, we probably don't need a separate KIP
> > > > since it's just an implementation detail.
> > > >
> > > > Regarding "So instead of having one sample equal to 560 in the last
> > > > window, we will have 100 samples equal to 5.6.", I was thinking that
> > > > we will allocate 5 to each of the first 99 samples and 65 
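[To make the Rate-versus-token-bucket comparison in this thread concrete, here is a minimal token-bucket sketch. Class and parameter names are illustrative, not the implementation that eventually landed in Kafka: bursts draw on tokens accumulated while idle instead of borrowing from future windows.]

    // Minimal token-bucket sketch; names and units are illustrative only.
    public class TokenBucket {
        private final double ratePerSec; // refill rate, i.e. the quota
        private final double capacity;   // maximum burst size
        private double tokens;           // current balance, may go negative
        private long lastRefillMs;

        public TokenBucket(double ratePerSec, double capacity, long nowMs) {
            this.ratePerSec = ratePerSec;
            this.capacity = capacity;
            this.tokens = capacity;      // start full, so a burst is allowed
            this.lastRefillMs = nowMs;
        }

        // Credit tokens for the time elapsed since the last refill.
        private void refill(long nowMs) {
            double elapsedSec = (nowMs - lastRefillMs) / 1000.0;
            tokens = Math.min(capacity, tokens + elapsedSec * ratePerSec);
            lastRefillMs = nowMs;
        }

        // Records `cost` units and returns the throttle delay in milliseconds:
        // the time until the balance is back to zero at the refill rate.
        public synchronized long record(double cost, long nowMs) {
            refill(nowMs);
            tokens -= cost;
            return tokens >= 0 ? 0 : (long) (-tokens / ratePerSec * 1000);
        }
    }

[For example, with ratePerSec = 5 and capacity = 50, a burst of 560 mutations leaves the balance at -510 and yields a 102,000 ms delay; subsequent idle time refills the bucket rather than penalizing the next burst.]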

Build failed in Jenkins: kafka-trunk-jdk14 #184

2020-06-05 Thread Apache Jenkins Server
See 

Changes:


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Could not init 

at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$5.execute(CliGitAPIImpl.java:916)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:708)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to H23
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:957)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:146)
at sun.reflect.GeneratedMethodAccessor1196.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:132)
at com.sun.proxy.$Proxy137.execute(Unknown Source)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1152)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1815)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Error performing git command
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2181)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2140)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2136)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1741)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$5.execute(CliGitAPIImpl.java:914)
... 11 more
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:717)
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
at hudson.Proc.joinWithTimeout(Proc.java:158)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2168)
... 15 more
ERROR: Error cloning remote repo 'origin'
Retrying after 10 seconds
No credentials specified
 > git rev-parse --is-inside-work-tree # timeout=10
ERROR: Workspace has a .git repository, but it appears to be corrupt.

Re: KIP-599: Throttle Create Topic, Create Partition and Delete Topic Operations

2020-06-05 Thread Jun Rao
Hi, Anna,

Thanks for the comment. For the problem that you described, perhaps we need
to make the quota checking and recording more atomic?

Hi, David,

Thanks for the updated KIP.  Looks good to me now. Just one minor comment
below.

30. controller_mutations_rate: For topic creation and deletion, is the rate
accumulated at the topic or partition level? It would be useful to make it
clear in the wiki.

Jun

On Fri, Jun 5, 2020 at 7:23 AM David Jacot  wrote:

> Hi Anna and Jun,
>
> You are right. We should allocate up to the quota for each old sample.
>
> I have revamped the Throttling Algorithm section to better explain our
> thought process and the token bucket inspiration.
>
> I have also added a chapter with a few guidelines about how to define
> the quota. There is no magic formula for this, but I give a few insights.
> I don't have specific numbers that can be used out of the box, so I
> think that it is better to not put any for the time being. We can always
> complement it later on in the documentation.
>
> Please, take a look and let me know what you think.
>
> Cheers,
> David
>
> On Fri, Jun 5, 2020 at 8:37 AM Anna Povzner  wrote:
>
> > Hi David and Jun,
> >
> > I dug a bit deeper into the Rate implementation, and wanted to confirm
> > that I do believe that the token bucket behavior is better for the
> > reasons we already discussed, but wanted to summarize. The main
> > difference between Rate and token bucket is that the Rate implementation
> > allows a burst by borrowing from the future, whereas a token bucket
> > allows a burst by using accumulated tokens from the previous idle
> > period. Using accumulated tokens smoothes out the rate measurement in
> > general. Configuring a large burst requires configuring a large quota
> > window, which causes long delays for bursty workloads, due to borrowing
> > credits from the future. Perhaps it is useful to add a summary in the
> > beginning of the Throttling Algorithm section?
> >
> > In my previous email, I mentioned the issue we observed with the
> > bandwidth quota, where a low quota (1MB/s per broker) was limiting
> > bandwidth visibly below the quota. I thought it was strictly an issue
> > with the Rate implementation as well, but I found the root cause to be
> > different, though amplified by the Rate implementation (long throttle
> > delays of requests in a burst). I will describe it here for
> > completeness using the following example:
> >
> >    - Quota = 1MB/s, default window size and number of samples
> >    - Suppose there are 6 connections (maximum 6 outstanding requests),
> >      and each produce request is 5MB. If all requests arrive in a burst,
> >      the last 4 requests (20MB over the 10MB allowed in a window) may
> >      get the same throttle time if they are processed concurrently. We
> >      record the rate under the lock, but then calculate throttle time
> >      separately after that. So, for each request, the observed rate
> >      could be 3MB/s, and each request gets throttle delay = 20 seconds
> >      (instead of 5, 10, 15, 20 respectively). The delay is longer than
> >      the total rate window, which results in lower bandwidth than the
> >      quota. Since all requests got the same delay, they will also arrive
> >      in a burst, which may also result in a longer delay than necessary.
> >      It looks pretty easy to fix, so I will open a separate JIRA for it.
> >      This can be additionally mitigated by token bucket behavior.
> >
> >
> > For the algorithm "So instead of having one sample equal to 560 in the
> > last window, we will have 100 samples equal to 5.6.", I agree with Jun.
> > I would allocate 5 for each old sample that is still in the overall
> > window. It would be a bit larger granularity than the pure token bucket
> > (we lose 5 units / mutation once we move past the sample window), but
> > it is better than the long delay.
> >
> > Thanks,
> >
> > Anna
> >
> >
> > On Thu, Jun 4, 2020 at 6:33 PM Jun Rao  wrote:
> >
> > > Hi, David, Anna,
> > >
> > > Thanks for the discussion and the updated wiki.
> > >
> > > 11. If we believe the token bucket behavior is better in terms of
> > > handling the burst behavior, we probably don't need a separate KIP
> > > since it's just an implementation detail.
> > >
> > > Regarding "So instead of having one sample equal to 560 in the last
> > > window, we will have 100 samples equal to 5.6.", I was thinking that
> > > we will allocate 5 to each of the first 99 samples and 65 to the last
> > > sample. Then, 6 new samples have to come before the balance becomes 0
> > > again. Intuitively, we are accumulating credits in each sample. If a
> > > usage comes in, we first use all existing credits to offset that. If
> > > we can't, the remaining usage will be recorded in the last sample,
> > > which will be offset by future credits. That seems to match the token
> > > bucket behavior the closest.
> > >
> > > 20. Could you provide some guidelines 
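[Worked through as a sketch, the allocation Jun describes above (a burst of 560 mutations against a quota of 5 per sample, with 100 samples in the window) looks like the following; the names are hypothetical, not Kafka's Rate/SampledStat API.]

    // Illustrative back-allocation of a burst across existing samples.
    public class BurstAllocationExample {
        public static void main(String[] args) {
            int samples = 100;
            double quotaPerSample = 5.0;
            double burst = 560.0;

            double[] window = new double[samples];
            // Fill each of the first 99 samples up to the quota...
            for (int i = 0; i < samples - 1 && burst > 0; i++) {
                double credit = Math.min(quotaPerSample, burst);
                window[i] = credit;
                burst -= credit;
            }
            // ...and record the remainder (560 - 99 * 5 = 65) in the last one.
            window[samples - 1] = burst;

            System.out.println(window[0]);           // 5.0
            System.out.println(window[samples - 1]); // 65.0
        }
    }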

Build failed in Jenkins: kafka-trunk-jdk11 #1543

2020-06-05 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10110: Corrected potential NPE when null label value added to

[github] KAFKA-10111: Make SinkTaskContext.errantRecordReporter() a default

[github] KAFKA-9570: Define SSL configs in all worker config classes, not just


--
[...truncated 2.22 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

Build failed in Jenkins: kafka-trunk-jdk14 #183

2020-06-05 Thread Apache Jenkins Server
See 


Changes:

[github] HOT_FIX: Update javadoc since imports added (#8817)

[github] KAFKA-9840; Skip End Offset validation when the leader epoch is not


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 910f3179960067135ec8ad4ab83d4582ff3847b5 (refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 910f3179960067135ec8ad4ab83d4582ff3847b5
Commit message: "KAFKA-9840; Skip End Offset validation when the leader epoch is not reliable (#8486)"
 > git rev-list --no-walk 5a0e65ed394da76ddebf387739f9dec8687a9485 # timeout=10
ERROR: No tool found matching GRADLE_4_10_2_HOME
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
[kafka-trunk-jdk14] $ /bin/bash -xe /tmp/jenkins153659247389586842.sh
+ rm -rf 
ERROR: No tool found matching GRADLE_4_10_2_HOME
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
[kafka-trunk-jdk14] $ /bin/bash -xe /tmp/jenkins3724122406135006121.sh
+ ./gradlew --no-daemon --continue -PmaxParallelForks=2 -PtestLoggingEvents=started,passed,skipped,failed -PxmlFindBugsReport=true clean test -PscalaVersion=2.12
[0.027s][warning][os,thread] Failed to start thread - pthread_create failed (EAGAIN) for attributes: stacksize: 1024k, guardsize: 0k, detached.
Error occurred during initialization of VM
java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: No tool found matching GRADLE_4_10_2_HOME
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
ERROR: Step 'Publish JUnit test result report' failed: No test report files were found. Configuration error?
ERROR: No tool found matching GRADLE_4_10_2_HOME
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
Not sending mail to unregistered user nore...@github.com


Build failed in Jenkins: kafka-trunk-jdk8 #4614

2020-06-05 Thread Apache Jenkins Server
See 


Changes:

[github] fix the broken links of streams javadoc (#8789)

[github] KAFKA-9441: Improve Kafka Streams task management (#8776)

[github] MINOR: Fix javadoc warnings (#8809)

[manikumar] MINOR: fix backwards incompatibility in JmxReporter introduced by

[github] MINOR: Change the order that Connect calls `config()` and `validate()`

[github] KAFKA-10110: Corrected potential NPE when null label value added to

[github] KAFKA-10111: Make SinkTaskContext.errantRecordReporter() a default

[github] KAFKA-9570: Define SSL configs in all worker config classes, not just


--
[...truncated 2.20 MB...]
org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task 

Re: [VOTE] KIP-601: Configurable socket connection timeout in NetworkClient

2020-06-05 Thread Cheng Tan
The KIP is approved with 3 binding votes. Thanks for all the feedback and votes!

- Cheng

Re: [VOTE] KIP-601: Configurable socket connection timeout in NetworkClient

2020-06-05 Thread Gwen Shapira
+1 (binding)

Thank you for the contribution.

On Wed, Jun 3, 2020 at 12:53 AM Cheng Tan  wrote:

> Dear Rajini,
>
> Thanks for the feedback.
>
> 1)
> Because "request.timeout.ms" only affects in-flight requests, after the
> API NetworkClient.ready() is invoked, the connection won't get closed after
> "request.timeout.ms” hits. Before
> a) the SocketChannel is connected
> b) ssl handshake finished
> c) authentication has finished (sasl)
> clients cannot invoke NetworkClient.send() to send any request, which
> means no in-flight request targeting to the connection will be added.
>
>
> 2)
> I think a default value of 127 seconds makes sense, which matches the timeout
> indirectly specified by the default value of "tcp.syn.retries". I've added
> this to the KIP proposal.
>
>
> 3)
> Every time the timeout hits, the timeout value of the next connection try
> will increase.
>
> The timeout will hit iff a connection stays in the `connecting` state
> longer than the timeout value, as indicated by
> ClusterConnectionStates.NodeConnectionState. The connection state of a node
> will change iff `SelectionKey.OP_CONNECT` is detected by
> `nioSelector.select()`. The connection state may transition from
> `connecting` to
>
> a) `disconnected`, when SocketChannel.finishConnect() throws an
> IOException, or
> b) `connected`, when SocketChannel.finishConnect() returns TRUE.
>
> In other words, the timeout will hit and increase iff the interested
> SelectionKey.OP_CONNECT doesn't trigger before the timeout arrives, which
> can happen under network congestion, failure of the ARP request, packet
> filtering, a routing error, or a silent discard, for example. (I didn't
> read the Java NIO source code. Please correct me about the cases where
> OP_CONNECT won't get triggered if I'm wrong.)
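>
> A minimal sketch of the exponential growth described above. The two config
> names come from the KIP; the exact backoff and jitter formula here is only
> my illustration, not the final implementation:
>
> import java.util.concurrent.ThreadLocalRandom;
>
> class ConnectionSetupTimeoutSketch {
>     static long timeoutMs(int failures) {
>         long initialMs = 10_000L;  // socket.connection.setup.timeout.ms
>         long maxMs = 127_000L;     // socket.connection.setup.timeout.max.ms
>         double timeout = initialMs * Math.pow(2, failures);
>         // Assumed +/-20% jitter, mirroring Kafka's reconnect backoff, to
>         // keep retries from many clients from synchronizing.
>         double jitter = 0.8 + ThreadLocalRandom.current().nextDouble() * 0.4;
>         return (long) Math.min(maxMs, timeout * jitter);
>     }
> }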
>
>
> 4)
>
> A) Connection timeout dominates both request timeout and API timeout
>
> When the connection timeout hits, the connection will be closed. The client
> will be notified either by the responses constructed by NetworkClient or by
> the callbacks attached to the request. As a result, the request failure
> will be handled before either the request timeout or the API timeout arrives.
>
>
> B) Neither request timeout nor API timeout dominates connection timeout
>
> i) Request timeout: Because the request timeout only affects in-flight
> requests, the connection won't get closed when "request.timeout.ms" expires
> after the API NetworkClient.ready() has been invoked. Before
> 1. the SocketChannel is connected,
> 2. the SSL handshake has finished, and
> 3. authentication has finished (SASL),
> clients won't be able to invoke NetworkClient.send() to send any
> request, which means no in-flight request targeting the connection will
> be added.
>
> ii) API timeout: In AdminClient, the API timeout works by assigning a
> smaller and smaller timeout value to each request in the chain of requests
> belonging to the same API call. After the API timeout hits, the retry logic
> won't close any connection. In the consumer, the API timeout acts as a whole
> by putting a limit on a code block's execution time. The retry logic won't
> close any connection either.
>
>
> Conclusion:
>
> Thanks again for the long feedback; I always enjoy it. I've incorporated
> the above discussion into the KIP proposal. Please let me know what you
> think.
>
>
> Best, - Cheng Tan
>
>
> > On Jun 2, 2020, at 3:01 AM, Rajini Sivaram 
> wrote:
> >
> > Hi Cheng,
> >
> > Not sure if the discussion should move back to the DISCUSS thread. I
> have a
> > few questions:
> >
> > 1) The KIP motivation says that in some cases `request.timeout.ms`
> doesn't
> > timeout connections properly and as a result it takes 127s to detect a
> > connection failure. This sounds like a bug rather than a limitation of
> the
> > current approach. Can you explain the scenarios where this occurs?
> >
> > 2) I think the current proposal is to use non-exponential 10s connection
> > timeout as default with the option to use exponential timeout. So
> > connection timeouts for every connection attempt will be between 8s and
> 12s
> > by default. Is that correct? Should we use a default max timeout to
> enable
> > exponential timeout by default since 8s seems rather small?
> >
> > 3) What is the scope of `failures` used to determine connection timeout
> > with exponential timeouts? Will we always use 10s followed by 20s every
> > time a connection is attempted?
> >
> > 4) It will be good if we can include two flows with the relationship
> > between various timeouts in the KIP. One with a fixed node like a typical
> > produce/consume request to the leader and another that uses
> > `leastLoadedNode` like a metadata request. Having the comparison between
> > the current and proposed behaviour w.r.t all configurable timeouts (the
> two
> > new connection timeouts, request timeout, api timeout etc.) will be
> useful.
> >
> > Regards,
> >
> > Rajini
> >
>


-- 
Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: 

Build failed in Jenkins: kafka-trunk-jdk14 #182

2020-06-05 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8011: Fix flaky RegexSourceIntegrationTest (#8799)


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 5a0e65ed394da76ddebf387739f9dec8687a9485 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5a0e65ed394da76ddebf387739f9dec8687a9485
Commit message: "KAFKA-8011: Fix flaky RegexSourceIntegrationTest (#8799)"
 > git rev-list --no-walk 2d9376c8bb129d1f880887169890149b914e3a32 # timeout=10
ERROR: No tool found matching GRADLE_4_10_2_HOME
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
[kafka-trunk-jdk14] $ /bin/bash -xe /tmp/jenkins4406474828947017655.sh
+ rm -rf 
ERROR: No tool found matching GRADLE_4_10_2_HOME
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
[kafka-trunk-jdk14] $ /bin/bash -xe /tmp/jenkins3615949145649526985.sh
+ ./gradlew --no-daemon --continue -PmaxParallelForks=2 
-PtestLoggingEvents=started,passed,skipped,failed -PxmlFindBugsReport=true 
clean test -PscalaVersion=2.12
[0.060s][warning][os,thread] Failed to start thread - pthread_create failed 
(EAGAIN) for attributes: stacksize: 1024k, guardsize: 0k, detached.
Error occurred during initialization of VM
java.lang.OutOfMemoryError: unable to create native thread: possibly out of 
memory or process/resource limits reached
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: No tool found matching GRADLE_4_10_2_HOME
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_10_2_HOME
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
Not sending mail to unregistered user nore...@github.com


[jira] [Resolved] (KAFKA-9570) SSL cannot be configured for Connect in standalone mode

2020-06-05 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9570.
--
Fix Version/s: 2.5.1
   2.4.2
   2.6.0
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to `trunk` and backported to the `2.6`, `2.5` and `2.4` branches.

> SSL cannot be configured for Connect in standalone mode
> ---
>
> Key: KAFKA-9570
> URL: https://issues.apache.org/jira/browse/KAFKA-9570
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 2.0.2, 2.3.0, 2.1.2, 
> 2.2.1, 2.2.2, 2.4.0, 2.3.1, 2.2.3, 2.5.0, 2.3.2, 2.4.1
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.6.0, 2.4.2, 2.5.1
>
>
> When Connect is brought up in standalone mode, if the worker config contains 
> _any_ properties that begin with the {{listeners.https.}} prefix, SSL will 
> not be enabled on the worker.
> This is because the relevant SSL configs are only defined in the [distributed 
> worker 
> config|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedConfig.java#L260]
>  instead of the [superclass worker 
> config|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfig.java].
>  This, in conjunction with [a call 
> to|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/util/SSLUtils.java#L42]
>  
> [AbstractConfig::valuesWithPrefixAllOrNothing|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java],
>  causes all configs not defined in the {{WorkerConfig}} used by the worker to 
> be silently dropped when the worker configures its REST server if there is at 
> least one config present with the {{listeners.https.}} prefix.
> Unfortunately, the workaround of specifying all SSL configs without the 
> {{listeners.https.}} prefix will also fail if any passwords need to be 
> specified. This is because the password values in the {{Map}} returned from 
> {{AbstractConfig::valuesWithPrefixAllOrNothing}} aren't parsed as passwords, 
> but the [framework expects them to 
> be|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/util/SSLUtils.java#L87].
>  However, if no keystore, truststore, or key passwords need to be configured, 
> then it should be possible to work around the issue by specifying all of 
> those configurations without a prefix (as long as they don't conflict with 
> any other configs in that namespace).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8011) Flaky Test RegexSourceIntegrationTest#testRegexMatchesTopicsAWhenCreated

2020-06-05 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-8011.

  Assignee: Matthias J. Sax  (was: Sophie Blee-Goldman)
Resolution: Fixed

The currently failing test exposes a real bug (tracked via: 
https://issues.apache.org/jira/browse/KAFKA-10102) – hence, closing this ticket 
as the test itself is not flaky.

> Flaky Test RegexSourceIntegrationTest#testRegexMatchesTopicsAWhenCreated
> 
>
> Key: KAFKA-8011
> URL: https://issues.apache.org/jira/browse/KAFKA-8011
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bill Bejeck
>Assignee: Matthias J. Sax
>Priority: Blocker
>  Labels: flaky-test, newbie
> Fix For: 1.0.3, 1.1.2, 2.0.2, 2.1.2, 2.6.0, 2.2.0
>
> Attachments: 
> org.apache.kafka.streams.integration.RegexSourceIntegrationTest.html, 
> streams_1_0_test_results.png, streams_1_1_tests.png
>
>
> The RegexSourceIntegrationTest#testRegexMatchesTopicsAWhenCreated
> and RegexSourceIntegrationTest#testRegexMatchesTopicsAWhenDeleted tests use
> an ArrayList to assert the topics assigned to the Streams application.
> The ConsumerRebalanceListener used in the test operates on this list, as does
> TestUtils.waitForCondition(), to verify the expected topic assignments.
> Using the same list in both places can cause a ConcurrentModificationException
> if the rebalance listener modifies the assignment at the same time
> TestUtils.waitForCondition() is using the list to verify the expected topics.
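>
> A minimal sketch of the safe-sharing idea (illustrative only, not the
> actual test code):
>
> import java.util.List;
> import java.util.concurrent.CopyOnWriteArrayList;
>
> public class SharedAssignmentSketch {
>     // A concurrent (or synchronized) collection lets the rebalance
>     // listener and the polling assertion share the list safely.
>     private final List<String> assignedTopics = new CopyOnWriteArrayList<>();
>
>     // Called from the consumer's ConsumerRebalanceListener.
>     void onAssignment(List<String> topics) {
>         assignedTopics.clear();
>         assignedTopics.addAll(topics);
>     }
>
>     // Polled by TestUtils.waitForCondition(); iterating a plain ArrayList
>     // here while onAssignment() mutates it is what can throw the
>     // ConcurrentModificationException.
>     boolean matches(List<String> expected) {
>         return assignedTopics.size() == expected.size()
>             && assignedTopics.containsAll(expected);
>     }
> }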



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Permission to create KIP

2020-06-05 Thread Matthias J. Sax
Done.

On 6/5/20 12:05 AM, William Bottrell wrote:
> Hello,
> 
> I'd like to get permission to create a KIP for this JIRA issue:
> https://issues.apache.org/jira/browse/KAFKA-10062. Not sure what my Wiki ID
> is, but my account name is wbottrell. Let me know if that is incorrect or
> if more information is needed.
> 
> Thanks,
> Will
> 



signature.asc
Description: OpenPGP digital signature


Re: [VOTE] KIP-620 Deprecate ConsumerConfig#addDeserializerToConfig(Properties, Deserializer, Deserializer) and ProducerConfig#addSerializerToConfig(Properties, Serializer, Serializer)

2020-06-05 Thread Matthias J. Sax
+1 (binding)

Thanks for the KIP!


-Matthias

On 6/4/20 11:25 PM, Chia-Ping Tsai wrote:
> hi All,
> 
> I would like to start the vote on KIP-620:
> 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=155749118
> 
> --
> Chia-Ping
> 



signature.asc
Description: OpenPGP digital signature


Jenkins build is back to normal : kafka-trunk-jdk14 #180

2020-06-05 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-10112) Consider making the number of threads configurable for offset/group metadata cache loading

2020-06-05 Thread Manikumar (Jira)
Manikumar created KAFKA-10112:
-

 Summary: Consider making the number of threads configurable for 
offset/group metadata cache loading
 Key: KAFKA-10112
 URL: https://issues.apache.org/jira/browse/KAFKA-10112
 Project: Kafka
  Issue Type: Task
  Components: core
Reporter: Manikumar


 Currently we use a [single-thread 
scheduler|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala#L84]
 to handle offset/group metadata cache loading and unloading. If there are 
leadership changes for multiple offset topic partitions, the overall loading 
time will be high: if 10 partitions have to be loaded, the 10th one has to wait 
for the previous nine. We can consider making the number of threads configurable.
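
A rough illustration of the idea (the thread count and config name below are 
hypothetical, not existing broker configs):

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelOffsetLoadSketch {
    public static void main(String[] args) {
        int numThreads = 4; // e.g. a new "offsets.load.num.threads" config
        ExecutorService loader = Executors.newFixedThreadPool(numThreads);
        for (int partition = 0; partition < 10; partition++) {
            final int p = partition;
            // With several threads, the 10th partition no longer waits for
            // the previous nine to finish loading.
            loader.submit(() -> System.out.println("loading __consumer_offsets-" + p));
        }
        loader.shutdown();
    }
}
{code}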



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-158 UPDATED: Enable source connectors to create new topics with specific configs in Kafka Connect during runtime

2020-06-05 Thread Randall Hauch
LGTM. Thanks!

On Fri, Jun 5, 2020 at 12:41 PM Konstantine Karantasis <
konstant...@confluent.io> wrote:

> Thanks for bringing up KIP-464 Jose and apologies for taking that long to
> respond.
>
> It made sense to allow the users to use the broker defaults for the
> replication factor and the number of partitions when their source
> connectors create topics and the implementation has incorporated this
> ability.
> I've now updated KIP-158 to reflect this feature in the doc. Given that
> this is a minor and useful amendment I think we don't have to vote again on
> this change but please let me know if you think otherwise.
>
> Best,
> Konstantine
>
> On Mon, Feb 3, 2020 at 3:17 PM Jose Garcia Sancio 
> wrote:
>
> > Thanks Konstantine. Looking forward to this feature.
> >
> > The KIP mentions:
> >
> > > For the *default* group this configuration is required. For any other
> > group defined in topic.creation.groups this config is optional and if
> it's
> > missing it gets the value of the *default* group
> >
> > For the properties "topic.creation.$alias.replication.factor" and
> > "topic.creation.$alias.partitions". I think that we can and should make
> > this optional for all groups including the "default" group. Kafka's
> > CreateTopicRequest message allows these two fields to be optional. Here
> are
> > their descriptions respectively:
> >
> > > The number of replicas to create for each partition in the topic, or -1
> > > if we are either specifying a manual partition assignment or using the
> > > default replication factor.
> > > The number of partitions to create in the topic, or -1 if we are either
> > > specifying a manual partition assignment or using the default partitions.
> >
> > At the Java Client level this is modeled using Java's Optional type. I
> > think that we can make them both optional and resolve them to
> > "Optional.empty()" if neither the specific group nor "default" is set.
> >
> > Thanks,
> > Jose
> >
> >
> > On Thu, Dec 19, 2019 at 8:27 PM Tom Bentley  wrote:
> >
> > > Thanks Konstantine, lgtm.
> > >
> > > On Thu, Dec 19, 2019 at 5:34 PM Ryanne Dolan 
> > > wrote:
> > >
> > > > Thanks for the reply Konstantine. Makes sense.
> > > >
> > > > Ryanne
> > > >
> > > > On Tue, Dec 17, 2019, 6:41 PM Konstantine Karantasis <
> > > > konstant...@confluent.io> wrote:
> > > >
> > > > > Thanks Randall and Ryanne for your comments.
> > > > >
> > > > > I'm replying to them below, in order of appearance:
> > > > >
> > > > > To Randall's comments:
> > > > > 1) I assumed these properties would be visible to connectors, since
> > by
> > > > > definition these are connector properties. I added a mention.
> However
> > > I'm
> > > > > not sure if you are also making a specific suggestion with this
> > > > question. I
> > > > > didn't find a similar mention in KIP-458, but 'override' directives
> > > also
> > > > > appear in both the connector and the task properties. Given this
> > > > precedent,
> > > > > I think it makes sense to forward these properties to the connector
> > as
> > > > > well.
> > > > >
> > > > > 2) Doesn't hurt to add a note in the KIP. Added in the table. This
> > > > > definitely belongs to the Kafka Connect docs that will describe how
> > to
> > > > > operate Connect with this feature enabled.
> > > > >
> > > > > 3) Added a note to mention that a task might fail during runtime
> and
> > > that
> > > > > early validation won't be in place for this feature.
> > > > >
> > > > > 4) Examples added and the sentence regarding ACLs and failure was
> > > > adjusted
> > > > > to reflect the new proposal.
> > > > >
> > > > > 5) Also addressed and the KIP now mentions that the task will fail
> if
> > > the
> > > > > feature is enabled and the broker does not support the Admin API.
> > > > >
> > > > > To your point Ryanne, I'm also often in favor of reserving some
> room
> > > for
> > > > > customizations that will be able to address specific user needs,
> but
> > I
> > > > > don't think we have a strong case for making this functionality
> > > pluggable
> > > > > at the moment. Topics are not very transient entities in Kafka. And
> > > this
> > > > > feature is focusing specifically on topic creation and does not
> > suggest
> > > > > altering configuration of existing topics, including topics that
> may
> > be
> > > > > created once by a connector that will use this new functionality.
> > > > > Therefore, adapting to changes to the attainable replication factor
> > > > during
> > > > > runtime, without expressing this in the configuration of a
> connector
> > > > seems
> > > > > to involve more risks than benefits. Overall, a generic topic
> > creation
> > > > hook
> > > > > shares similarities to exposing an admin client to the connector
> > itself
> > > > and
> > > > > based on previous discussions, seems that this approach will result
> > in
> > > > > considerable extensions in both configuration and implementation
> > > without
> > > > it
> > > > > being fully justified at the moment.
> > > 

Re: [DISCUSS] KIP-158 UPDATED: Enable source connectors to create new topics with specific configs in Kafka Connect during runtime

2020-06-05 Thread Konstantine Karantasis
Thanks for bringing up KIP-464 Jose and apologies for taking that long to
respond.

It made sense to allow the users to use the broker defaults for the
replication factor and the number of partitions when their source
connectors create topics and the implementation has incorporated this
ability.
I've now updated KIP-158 to reflect this feature in the doc. Given that
this is a minor and useful amendment I think we don't have to vote again on
this change but please let me know if you think otherwise.

Best,
Konstantine

On Mon, Feb 3, 2020 at 3:17 PM Jose Garcia Sancio 
wrote:

> Thanks Konstantine. Looking forward to this feature.
>
> The KIP mentions:
>
> > For the *default* group this configuration is required. For any other
> group defined in topic.creation.groups this config is optional and if it's
> missing it gets the value of the *default* group
>
> For the properties "topic.creation.$alias.replication.factor" and
> "topic.creation.$alias.partitions". I think that we can and should make
> this optional for all groups including the "default" group. Kafka's
> CreateTopicRequest message allows these two fields to be optional. Here are
> their descriptions respectively:
>
> > The number of replicas to create for each partition in the topic, or -1
> > if we are either specifying a manual partition assignment or using the
> > default replication factor.
> > The number of partitions to create in the topic, or -1 if we are either
> > specifying a manual partition assignment or using the default partitions.
>
> At the Java Client level this is modeled using Java's Optional type. I think
> that we can make them both optional and resolve them to "Optional.empty()"
> if neither the specific group nor "default" is set.
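>
> A minimal sketch of that resolution with the AdminClient's KIP-464
> constructor (the topic name is just an example):
>
> import java.util.Optional;
> import org.apache.kafka.clients.admin.NewTopic;
>
> class BrokerDefaultsSketch {
>     // Empty Optionals translate to -1 on the wire, i.e. "use the broker
>     // default" for both fields.
>     static NewTopic topicWithBrokerDefaults() {
>         return new NewTopic("connect-created-topic",  // example name
>             Optional.empty(),    // partitions: broker default
>             Optional.empty());   // replication factor: broker default
>     }
> }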
>
> Thanks,
> Jose
>
>
> On Thu, Dec 19, 2019 at 8:27 PM Tom Bentley  wrote:
>
> > Thanks Konstantine, lgtm.
> >
> > On Thu, Dec 19, 2019 at 5:34 PM Ryanne Dolan 
> > wrote:
> >
> > > Thanks for the reply Konstantine. Makes sense.
> > >
> > > Ryanne
> > >
> > > On Tue, Dec 17, 2019, 6:41 PM Konstantine Karantasis <
> > > konstant...@confluent.io> wrote:
> > >
> > > > Thanks Randall and Ryanne for your comments.
> > > >
> > > > I'm replying to them below, in order of appearance:
> > > >
> > > > To Randall's comments:
> > > > 1) I assumed these properties would be visible to connectors, since
> by
> > > > definition these are connector properties. I added a mention. However
> > I'm
> > > > not sure if you are also making a specific suggestion with this
> > > question. I
> > > > didn't find a similar mention in KIP-458, but 'override' directives
> > also
> > > > appear in both the connector and the task properties. Given this
> > > precedent,
> > > > I think it makes sense to forward these properties to the connector
> as
> > > > well.
> > > >
> > > > 2) Doesn't hurt to add a note in the KIP. Added in the table. This
> > > > definitely belongs to the Kafka Connect docs that will describe how
> to
> > > > operate Connect with this feature enabled.
> > > >
> > > > 3) Added a note to mention that a task might fail during runtime and
> > that
> > > > early validation won't be in place for this feature.
> > > >
> > > > 4) Examples added and the sentence regarding ACLs and failure was
> > > adjusted
> > > > to reflect the new proposal.
> > > >
> > > > 5) Also addressed and the KIP now mentions that the task will fail if
> > the
> > > > feature is enabled and the broker does not support the Admin API.
> > > >
> > > > To your point Ryanne, I'm also often in favor of reserving some room
> > for
> > > > customizations that will be able to address specific user needs, but
> I
> > > > don't think we have a strong case for making this functionality
> > pluggable
> > > > at the moment. Topics are not very transient entities in Kafka. And
> > this
> > > > feature is focusing specifically on topic creation and does not
> suggest
> > > > altering configuration of existing topics, including topics that may
> be
> > > > created once by a connector that will use this new functionality.
> > > > Therefore, adapting to changes to the attainable replication factor
> > > during
> > > > runtime, without expressing this in the configuration of a connector
> > > seems
> > > > to involve more risks than benefits. Overall, a generic topic
> creation
> > > hook
> > > > shares similarities to exposing an admin client to the connector
> itself
> > > and
> > > > based on previous discussions, seems that this approach will result
> in
> > > > considerable extensions in both configuration and implementation
> > without
> > > it
> > > > being fully justified at the moment.
> > > >
> > > > I suggest moving forward without pluggable classes for now, and if in
> > the
> > > > future we wish to return to this topic for second iteration, then
> > > factoring
> > > > out the proposed functionality under the configuration of a module
> that
> > > > applies topic creation based on regular expressions should be easy to
> > do
> > > in
> > > > a 

Re: [VOTE] KIP-610: Error Reporting in Sink Connectors

2020-06-05 Thread Randall Hauch
Thanks again to everyone for all the work on this KIP and implementation!

I've discovered that it would be easier for downstream projects if the new
`SinkTaskContext.errantRecordReporter()` method were a default method that
returns null. Strictly speaking it's not required as Connect will provide
the implementation for the Connect runtime, but some downstream projects
may use their own implementations of this interface for testing purposes.
See https://issues.apache.org/jira/browse/KAFKA-10111 for details and
https://github.com/apache/kafka/pull/8814 for the suggested change. IMO
there is little harm in making the existing non-default method a default
that returns null, but please let me know if you object.

Best regards,

Randall

On Thu, May 21, 2020 at 2:10 PM Randall Hauch  wrote:

> The vote has been open for >72 hours, and the KIP is adopted with three +1
> binding votes (Konstantine, Ewen, me), one +1 non-binding vote (Andrew),
> and no -1 votes.
>
> I'll update the KIP and the AK 2.6.0 plan.
>
> Thanks, everyone.
>
> On Tue, May 19, 2020 at 4:33 PM Konstantine Karantasis <
> konstant...@confluent.io> wrote:
>
>> +1 (binding)
>>
>> I like how the KIP looks now too. Quite active discussions within the past
>> few days, which I found very useful.
>>
>> There's some room in the future to allow connector developers to decide
>> whether they want greater control over error reporting or whether they want
>> the framework to keep providing the reasonable guarantees that this KIP now
>> describes. The API is expressive enough to accommodate such improvements if
>> they are warranted, but its current form seems quite adequate to support
>> efficient end-to-end error reporting for sink connectors.
>>
>> Thanks for introducing this KIP Aakash!
>>
>> One last minor comment around naming:
>> Currently both the names ErrantRecordReporter and failedRecordReporter are
>> used. Using the same name everywhere seems preferable, so feel free to
>> choose the one that you prefer.
>>
>> Regards,
>> Konstantine
>>
>> On Tue, May 19, 2020 at 2:30 PM Ewen Cheslack-Postava 
>> wrote:
>>
>> > +1 (binding)
>> >
>> > This will be a nice improvement. From the discussion thread it's clear
>> this
>> > is tricky to get right, nice work!
>> >
>> > On Tue, May 19, 2020 at 8:16 AM Andrew Schofield <
>> > andrew_schofi...@live.com>
>> > wrote:
>> >
>> > > +1 (non-binding)
>> > >
>> > > This is now looking very nice.
>> > >
>> > > Andrew Schofield
>> > >
>> > > On 19/05/2020, 16:11, "Randall Hauch"  wrote:
>> > >
>> > > Thank you, Aakash, for putting together this KIP and shepherding
>> the
>> > > discussion. Also, many thanks to all those that participated in
>> the
>> > > very
>> > > active discussion. I'm actually very happy with the current
>> proposal,
>> > > am
>> > > confident that it is a valuable improvement to the Connect
>> framework,
>> > > and
>> > > know that it will be instrumental in making sink tasks easily
>> able to
>> > > report problematic records and keep running.
>> > >
>> > > +1 (binding)
>> > >
>> > > Best regards,
>> > >
>> > > Randall
>> > >
>> > > On Sun, May 17, 2020 at 6:59 PM Aakash Shah 
>> > > wrote:
>> > >
>> > > > Hello all,
>> > > >
>> > > > I'd like to open a vote for KIP-610:
>> > > >
>> > > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-610%3A+Error+Reporting+in+Sink+Connectors
>> > > >
>> > > > Thanks,
>> > > > Aakash
>> > > >
>> > >
>> > >
>> >
>>
>


[jira] [Created] (KAFKA-10111) SinkTaskContext.errantRecordReporter() added in KIP-610 should be a default method

2020-06-05 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10111:
-

 Summary: SinkTaskContext.errantRecordReporter() added in KIP-610 
should be a default method
 Key: KAFKA-10111
 URL: https://issues.apache.org/jira/browse/KAFKA-10111
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.6.0
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 2.6.0


[KIP-610|https://cwiki.apache.org/confluence/display/KAFKA/KIP-610%3A+Error+Reporting+in+Sink+Connectors]
 added a new `errantRecordReporter()` method to `SinkTaskContext`, but the KIP 
didn't make this method a default method. While the AK project can add this 
method to all of its implementations (actual and test), other projects such as 
connector projects might have their own mock implementations just to help test 
the connector implementation. That means when those projects upgrade, they'd 
get compilation problems for their own implementations of `SinkTaskContext`.

Making this method default will avoid such problems for downstream projects, 
and is actually easy since the method is already defined to return null if no 
reporter is configured.
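
A sketch of the proposed change (abbreviated; the real interface defines more 
methods):

{code}
public interface SinkTaskContext {
    // ... existing methods elided ...

    // A default implementation keeps existing SinkTaskContext
    // implementations (e.g. mocks in connector projects) source-compatible;
    // null signals that no reporter is configured.
    default ErrantRecordReporter errantRecordReporter() {
        return null;
    }
}
{code}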



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10110) ConnectDistributed fails with NPE when Kafka cluster has no ID

2020-06-05 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10110:
-

 Summary: ConnectDistributed fails with NPE when Kafka cluster has 
no ID
 Key: KAFKA-10110
 URL: https://issues.apache.org/jira/browse/KAFKA-10110
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.6.0
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 2.6.0


When a Connect worker starts, recent changes from KIP-606 / KAFKA-9960 attempt 
to put the Kafka cluster ID into the new KafkaMetricsContext. But the Kafka 
cluster ID can be null, resulting in an NPE shown in the following log snippet:
{noformat}
[2020-06-04 15:01:02,900] INFO Kafka cluster ID: null 
(org.apache.kafka.connect.util.ConnectUtils)
...
[2020-06-04 15:01:03,271] ERROR Stopping due to error 
(org.apache.kafka.connect.cli.ConnectDistributed)
java.lang.NullPointerException
    at org.apache.kafka.common.metrics.KafkaMetricsContext.lambda$new$0(KafkaMetricsContext.java:48)
    at java.util.HashMap.forEach(HashMap.java:1289)
    at org.apache.kafka.common.metrics.KafkaMetricsContext.<init>(KafkaMetricsContext.java:48)
    at org.apache.kafka.connect.runtime.ConnectMetrics.<init>(ConnectMetrics.java:100)
    at org.apache.kafka.connect.runtime.Worker.<init>(Worker.java:135)
    at org.apache.kafka.connect.runtime.Worker.<init>(Worker.java:121)
    at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:111)
    at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:78)
{noformat}
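
A minimal sketch of the kind of null guard that would avoid the NPE 
(illustrative only; the actual fix may differ):

{code}
import java.util.HashMap;
import java.util.Map;

public class NullSafeLabelSketch {
    public static void main(String[] args) {
        String clusterId = null; // the worker logged "Kafka cluster ID: null"
        Map<String, String> contextLabels = new HashMap<>();
        // Skipping (or defaulting) a null value avoids the value.toString()
        // call that fails inside KafkaMetricsContext's constructor.
        if (clusterId != null) {
            contextLabels.put("kafka.cluster.id", clusterId);
        }
        System.out.println(contextLabels);
    }
}
{code}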



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: KIP-599: Throttle Create Topic, Create Partition and Delete Topic Operations

2020-06-05 Thread David Jacot
Hi Anna and Jun,

You are right. We should allocate up to the quota for each old sample.

I have revamped the Throttling Algorithm section to better explain our
thought process and the token bucket inspiration.

I have also added a chapter with a few guidelines about how to define
the quota. There is no magic formula for this, but I give a few insights.
I don't have specific numbers that can be used out of the box, so I
think that it is better not to put any for the time being. We can always
complement the documentation later on.

Please, take a look and let me know what you think.

Cheers,
David

On Fri, Jun 5, 2020 at 8:37 AM Anna Povzner  wrote:

> Hi David and Jun,
>
> I dug a bit deeper into the Rate implementation, and wanted to confirm that
> I do believe that the token bucket behavior is better for the reasons we
> already discussed but wanted to summarize. The main difference between Rate
> and token bucket is that the Rate implementation allows a burst by
> borrowing from the future, whereas a token bucket allows a burst by using
> accumulated tokens from the previous idle period. Using accumulated tokens
> smoothes out the rate measurement in general. Configuring a large burst
> requires configuring a large quota window, which causes long delays for
> bursty workload, due to borrowing credits from the future. Perhaps it is
> useful to add a summary in the beginning of the Throttling Algorithm
> section?
>
> In my previous email, I mentioned the issue we observed with the bandwidth
> quota, where a low quota (1MB/s per broker) was limiting bandwidth visibly
> below the quota. I thought it was strictly the issue with the Rate
> implementation as well, but I found a root cause to be different but
> amplified by the Rate implementation (long throttle delays of requests in a
> burst). I will describe it here for completeness using the following
> example:
>
> - Quota = 1MB/s, default window size and number of samples
>
> - Suppose there are 6 connections (maximum 6 outstanding requests), and
> each produce request is 5MB. If all requests arrive in a burst, the last 4
> requests (20MB over 10MB allowed in a window) may get the same throttle
> time if they are processed concurrently. We record the rate under the lock,
> but then calculate throttle time separately after that. So, for each
> request, the observed rate could be 3MB/s, and each request gets throttle
> delay = 20 seconds (instead of 5, 10, 15, 20 respectively). The delay is
> longer than the total rate window, which results in lower bandwidth than
> the quota. Since all requests got the same delay, they will also arrive in
> a burst, which may also result in longer delay than necessary. It looks
> pretty easy to fix, so I will open a separate JIRA for it. This can be
> additionally mitigated by token bucket behavior.
>
>
> For the algorithm "So instead of having one sample equal to 560 in the last
> window, we will have 100 samples equal to 5.6.", I agree with Jun. I would
> allocate 5 per each old sample that is still in the overall window. It
> would be a bit larger granularity than the pure token bucket (we lose 5
> units / mutation once we move past the sample window), but it is better
> than the long delay.
>
> Thanks,
>
> Anna
>
>
> On Thu, Jun 4, 2020 at 6:33 PM Jun Rao  wrote:
>
> > Hi, David, Anna,
> >
> > Thanks for the discussion and the updated wiki.
> >
> > 11. If we believe the token bucket behavior is better in terms of
> handling
> > the burst behavior, we probably don't need a separate KIP since it's just
> > an implementation detail.
> >
> > Regarding "So instead of having one sample equal to 560 in the last
> window,
> > we will have 100 samples equal to 5.6.", I was thinking that we will
> > allocate 5 to each of the first 99 samples and 65 to the last sample.
> Then,
> > 6 new samples have to come before the balance becomes 0 again.
> Intuitively,
> > we are accumulating credits in each sample. If a usage comes in, we first
> > use all existing credits to offset that. If we can't, the remaining usage
> > will be recorded in the last sample, which will be offset by future
> > credits. That seems to match the token bucket behavior the closest.
> >
> > 20. Could you provide some guidelines on the typical rate that an admin
> > should set?
> >
> > Jun
> >
> > On Thu, Jun 4, 2020 at 8:22 AM David Jacot  wrote:
> >
> > > Hi all,
> > >
> > > I just published an updated version of the KIP which includes:
> > > * Using a slightly modified version of our Rate. I have tried to
> > formalize
> > > it based on our discussion. As Anna suggested, we may find a better way
> > to
> > > implement it.
> > > * Handling of ValidateOnly as pointed out by Tom.
> > >
> > > Please, check it out and let me know what you think.
> > >
> > > Best,
> > > David
> > >
> > > On Thu, Jun 4, 2020 at 4:57 PM Tom Bentley 
> wrote:
> > >
> > > > Hi David,
> > > >
> > > > As a user I might expect the 

Permission to create KIP

2020-06-05 Thread William Bottrell
Hello,

I'd like to get permission to create a KIP for this JIRA issue:
https://issues.apache.org/jira/browse/KAFKA-10062. Not sure what my Wiki ID
is, but my account name is wbottrell. Let me know if that is incorrect or
if more information is needed.

Thanks,
Will


[jira] [Created] (KAFKA-10109) kafka-acls.sh/AclCommand opens multiple AdminClients

2020-06-05 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-10109:
---

 Summary: kafka-acls.sh/AclCommand opens multiple AdminClients
 Key: KAFKA-10109
 URL: https://issues.apache.org/jira/browse/KAFKA-10109
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Tom Bentley
Assignee: Tom Bentley


{{AclCommand.AclCommandService}} uses {{withAdminClient(opts: 
AclCommandOptions)(f: Admin => Unit)}} to abstract the execution of an action 
using an {{AdminClient}} instance. Unfortunately, the implementations of 
{{addAcls()}} and {{removeAcls()}} call {{listAcls()}} from within this method. 
This causes the creation of a second {{AdminClient}} instance, which then fails 
to register an MBean, resulting in a warning being logged.

{code}
./bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config 
config/broker_connection.conf.reproducing --add --allow-principal User:alice 
--operation Describe --topic 'test' --resource-pattern-type prefixed
Adding ACLs for resource `ResourcePattern(resourceType=TOPIC, name=test, 
patternType=PREFIXED)`: 
(principal=User:alice, host=*, operation=DESCRIBE, 
permissionType=ALLOW) 

[2020-06-03 18:43:12,190] WARN Error registering AppInfo mbean 
(org.apache.kafka.common.utils.AppInfoParser)
javax.management.InstanceAlreadyExistsException: 
kafka.admin.client:type=app-info,id=administrator_data
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
at 
org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:64)
at 
org.apache.kafka.clients.admin.KafkaAdminClient.(KafkaAdminClient.java:500)
at 
org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:444)
at org.apache.kafka.clients.admin.Admin.create(Admin.java:59)
at 
org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:39)
at 
kafka.admin.AclCommand$AdminClientService.withAdminClient(AclCommand.scala:105)
at 
kafka.admin.AclCommand$AdminClientService.listAcls(AclCommand.scala:146)
at 
kafka.admin.AclCommand$AdminClientService.$anonfun$addAcls$1(AclCommand.scala:123)
at 
kafka.admin.AclCommand$AdminClientService.$anonfun$addAcls$1$adapted(AclCommand.scala:116)
at 
kafka.admin.AclCommand$AdminClientService.withAdminClient(AclCommand.scala:108)
at 
kafka.admin.AclCommand$AdminClientService.addAcls(AclCommand.scala:116)
at kafka.admin.AclCommand$.main(AclCommand.scala:78)
at kafka.admin.AclCommand.main(AclCommand.scala)
Current ACLs for resource `ResourcePattern(resourceType=TOPIC, name=test, 
patternType=PREFIXED)`: 
(principal=User:alice, host=*, operation=DESCRIBE, permissionType=ALLOW)
{code}
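
A hedged sketch of the fix direction: run the mutation and the follow-up 
listing against one shared Admin instance, so only one app-info MBean is ever 
registered (AclCommand itself is Scala; this Java fragment just shows the 
shape of the idea):

{code}
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;

public class SingleAdminClientSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // One client for the whole command invocation.
        try (Admin admin = Admin.create(props)) {
            // admin.createAcls(bindings).all().get();
            // admin.describeAcls(filter).values().get();
        }
    }
}
{code}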



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Broker side round robin on topic partitions when receiving messages

2020-06-05 Thread Vinicius Scheidegger
Does anyone know how I could load balance the messages so they are
distributed equally to all consumers within the same consumer group when
there are multiple producers?

Is this a conceptual flaw in Kafka, in that it wasn't designed for equal
distribution with multiple producers, or am I missing something?
I've asked on Stack Overflow, on the Kafka users mailing list, here (on
Kafka Devs) and on Slack, and I still have no definitive answer (actually,
most of the time I got no answer at all).

Would something like this even be possible in the way Kafka is currently
designed?
How does proposing a KIP work?
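
For reference, the closest out-of-the-box option I've found is the
client-side RoundRobinPartitioner (shipped since Kafka 2.4). It balances
per producer, not across producers. A minimal sketch, assuming string
serializers and a local broker:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.RoundRobinPartitioner;

public class RoundRobinProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringSerializer");
        // Each producer rotates over the topic's partitions independently;
        // there is no broker-side coordination across producers.
        props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG,
            RoundRobinPartitioner.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // producer.send(new ProducerRecord<>("my-topic", "value"));
        }
    }
}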

Thanks,



On Thu, May 28, 2020, 3:44 PM Vinicius Scheidegger <
vinicius.scheideg...@gmail.com> wrote:

> Hi,
>
> I'm trying to understand a little bit more about how Kafka works.
> I have a design with multiple producers writing to a single topic and
> multiple consumers in a single Consumer Group consuming message from this
> topic.
>
> My idea is to distribute the messages from all producers equally. From
> reading the documentation I understood that the partition is always
> selected by the producer. Is that correct?
>
> I'd also like to know if there is an out of the box option to assign the
> partition via a round robin *on the broker side *to guarantee equal
> distribution of the load - if possible to each consumer, but if not
> possible, at least to each partition.
>
> If my understanding is correct, it looks like in a multiple-producer
> scenario there is a lack of support from Kafka regarding load balancing, and
> users have to either stick to the hash of the key (random distribution,
> although it guarantees the same key goes to the same partition) or
> create their own logic on the producer side (i.e. by sharing memory).
>
> Am I missing something?
>
> Thank you,
>
> Vinicius Scheidegger
>


[jira] [Created] (KAFKA-10108) The cached configs of SslFactory should be updated only if the ssl Engine Factory is updated successfully

2020-06-05 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-10108:
--

 Summary: The cached configs of SslFactory should be updated only 
if the ssl Engine Factory is updated successfully
 Key: KAFKA-10108
 URL: https://issues.apache.org/jira/browse/KAFKA-10108
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


The following cases should NOT change the cached configs of SslFactory:

1. validating a reconfiguration
2. an exception being thrown while checking the new ssl engine factory



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10107) Producer snapshots LSO used in certain situations which can lead to data loss on compacted topics as LSO breach occurs and early offsets cleaned

2020-06-05 Thread William Reynolds (Jira)
William Reynolds created KAFKA-10107:


 Summary: Producer snapshots LSO used in certain situations which 
can lead to data loss on compacted topics as LSO breach occurs and early 
offsets cleaned
 Key: KAFKA-10107
 URL: https://issues.apache.org/jira/browse/KAFKA-10107
 Project: Kafka
  Issue Type: Bug
  Components: core, log cleaner
Affects Versions: 2.4.1
Reporter: William Reynolds


While upgrading a 1.1.0 cluster to 2.4.1 and also adding an interbroker port 
using SSL, we ran into a situation where producer snapshot offsets get set as 
the log start offset and then logs truncate to nothing across 2 relatively 
unsafe restarts.

 

Here is the timeline of what we did to trigger this

Broker 40 is shut down as the first to go to 2.4.1 and switch to interbroker 
port 9094.
 As it shuts down, it writes producer snapshots.
 Broker 40 starts on 2.4.1, loads the snapshots, then compares checkpointed 
offsets to the log start offset and finds them to be invalid (exact reason 
unknown, but it looks to be related to loading producer snapshots).
 On broker 40, all topics show an offset reset like this: "[2020-05-18 
15:22:21,106] WARN Resetting first dirty offset of topic-name-60 to log start 
offset 6009368 since the checkpointed offset 5952382 is invalid. 
(kafka.log.LogCleanerManager$)", which then triggers log cleanup on broker 40 
for all these topics, which is where the data is lost.
 At this point only partitions led by broker 40 have lost data and would be 
failing for client lookups on older data but this can't spread as 40 has 
interbroker port 9094 and brokers 50 and 60 have interbroker port 9092
 I stop/start brokers 50 and 60 in quick succession to take them to 2.4.1 and 
onto the new interbroker port 9094.
 This leaves broker 40 as the in sync replica for all but a couple of 
partitions which aren't on 40 at all shown in the attached image
 Brokers 50 and 60 start and then take their start offset from leader (or if 
there was no leader pulls from recovery on returning broker 50 or 60) and so 
all the replicas also clean logs to remove data to catch up to broker 40 as 
that is the in sync replica
 Then I shutdown 40 and 50 leading to 60 leading all partitions it holds and 
then we see this happen across all of those partitions
 "May 18, 2020 @ 15:48:28.252",hostname-1,30438,apache-kafka:2.4.1,"[2020-05-18 
15:48:28,251] INFO [Log partition=topic-name-60, dir=/kafka-topic-data] Loading 
producer state till offset 0 with message format version 2 (kafka.log.Log)" 
 "May 18, 2020 @ 15:48:28.252",hostname-1,30438,apache-kafka:2.4.1,"[2020-05-18 
15:48:28,252] INFO [Log partition=topic-name-60, dir=/kafka-topic-data] 
Completed load of log with 1 segments, log start offset 0 and log end offset 0 
in 2 ms (kafka.log.Log)"
 "May 18, 2020 @ 15:48:45.883",hostname,7805,apache-kafka:2.4.1,"[2020-05-18 
15:48:45,883] WARN [ReplicaFetcher replicaId=50, leaderId=60, fetcherId=0] 
Leader or replica is on protocol version where leader epoch is not considered 
in the OffsetsForLeaderEpoch response. The leader's offset 0 will be used for 
truncation in topic-name-60. (kafka.server.ReplicaFetcherThread)" 
 "May 18, 2020 @ 15:48:45.883",hostname,7805,apache-kafka:2.4.1,"[2020-05-18 
15:48:45,883] INFO [Log partition=topic-name-60, dir=/kafka-topic-data] 
Truncating to offset 0 (kafka.log.Log)"

 

I believe the truncation has always been a problem, but the recent 
https://issues.apache.org/jira/browse/KAFKA-6266 fix allowed truncation to 
actually happen where it wouldn't have before. 
 The setting of producer snapshot offsets as the log start offset is a mystery 
to me, so any light you could shed on why that happened and how to avoid it 
would be great.

 

I am sanitising full logs and will upload here soon



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: KIP-599: Throttle Create Topic, Create Partition and Delete Topic Operations

2020-06-05 Thread Anna Povzner
Hi David and Jun,

I dug a bit deeper into the Rate implementation, and wanted to confirm that
I do believe that the token bucket behavior is better for the reasons we
already discussed but wanted to summarize. The main difference between Rate
and token bucket is that the Rate implementation allows a burst by
borrowing from the future, whereas a token bucket allows a burst by using
accumulated tokens from the previous idle period. Using accumulated tokens
smoothes out the rate measurement in general. Configuring a large burst
requires configuring a large quota window, which causes long delays for
bursty workload, due to borrowing credits from the future. Perhaps it is
useful to add a summary in the beginning of the Throttling Algorithm
section?
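
To make the difference concrete, here is a minimal token-bucket sketch
(illustrative only, not Kafka's Rate implementation): tokens accumulate at
the quota rate up to a burst ceiling, a request spends tokens, and any
deficit becomes the throttle delay.

final class TokenBucketSketch {
    private final double quotaPerSec; // sustained rate
    private final double burst;       // maximum accumulated credit
    private double tokens;
    private long lastNanos = System.nanoTime();

    TokenBucketSketch(double quotaPerSec, double burst) {
        this.quotaPerSec = quotaPerSec;
        this.burst = burst;
        this.tokens = burst; // an idle period earns up to a full burst
    }

    // Returns the throttle delay in ms for a request of the given cost.
    synchronized long throttleMs(double cost) {
        long now = System.nanoTime();
        tokens = Math.min(burst, tokens + quotaPerSec * (now - lastNanos) / 1e9);
        lastNanos = now;
        tokens -= cost; // may go negative: the deficit is repaid over time
        return tokens >= 0 ? 0 : (long) (-tokens / quotaPerSec * 1000);
    }
}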

In my previous email, I mentioned the issue we observed with the bandwidth
quota, where a low quota (1MB/s per broker) was limiting bandwidth visibly
below the quota. I thought it was strictly the issue with the Rate
implementation as well, but I found a root cause to be different but
amplified by the Rate implementation (long throttle delays of requests in a
burst). I will describe it here for completeness using the following
example:

- Quota = 1MB/s, default window size and number of samples

- Suppose there are 6 connections (maximum 6 outstanding requests), and
  each produce request is 5MB. If all requests arrive in a burst, the last 4
  requests (20MB over 10MB allowed in a window) may get the same throttle
  time if they are processed concurrently. We record the rate under the lock,
  but then calculate throttle time separately after that. So, for each
  request, the observed rate could be 3MB/s, and each request gets throttle
  delay = 20 seconds (instead of 5, 10, 15, 20 respectively). The delay is
  longer than the total rate window, which results in lower bandwidth than
  the quota. Since all requests got the same delay, they will also arrive in
  a burst, which may also result in longer delay than necessary. It looks
  pretty easy to fix, so I will open a separate JIRA for it. This can be
  additionally mitigated by token bucket behavior.


For the algorithm "So instead of having one sample equal to 560 in the last
window, we will have 100 samples equal to 5.6.", I agree with Jun. I would
allocate 5 per each old sample that is still in the overall window. It
would be a bit larger granularity than the pure token bucket (we lose 5
units / mutation once we move past the sample window), but it is better
than the long delay.

Thanks,

Anna


On Thu, Jun 4, 2020 at 6:33 PM Jun Rao  wrote:

> Hi, David, Anna,
>
> Thanks for the discussion and the updated wiki.
>
> 11. If we believe the token bucket behavior is better in terms of handling
> the burst behavior, we probably don't need a separate KIP since it's just
> an implementation detail.
>
> Regarding "So instead of having one sample equal to 560 in the last window,
> we will have 100 samples equal to 5.6.", I was thinking that we will
> allocate 5 to each of the first 99 samples and 65 to the last sample. Then,
> 6 new samples have to come before the balance becomes 0 again. Intuitively,
> we are accumulating credits in each sample. If a usage comes in, we first
> use all existing credits to offset that. If we can't, the remaining usage
> will be recorded in the last sample, which will be offset by future
> credits. That seems to match the token bucket behavior the closest.
>
> 20. Could you provide some guidelines on the typical rate that an admin
> should set?
>
> Jun
>
> On Thu, Jun 4, 2020 at 8:22 AM David Jacot  wrote:
>
> > Hi all,
> >
> > I just published an updated version of the KIP which includes:
> > * Using a slightly modified version of our Rate. I have tried to
> formalize
> > it based on our discussion. As Anna suggested, we may find a better way
> to
> > implement it.
> > * Handling of ValidateOnly as pointed out by Tom.
> >
> > Please, check it out and let me know what you think.
> >
> > Best,
> > David
> >
> > On Thu, Jun 4, 2020 at 4:57 PM Tom Bentley  wrote:
> >
> > > Hi David,
> > >
> > > As a user I might expect the validateOnly option to do everything
> except
> > > actually make the changes. That interpretation would imply the quota
> > should
> > > be checked, but the check should obviously be side-effect free. I think
> > > this interpretation could be useful because it gives the caller either
> > some
> > > confidence that they're not going to hit the quota, or tell them, via
> the
> > > exception, when they can expect the call to work. But for this to be
> > useful
> > > it would require the retry logic to not retry the request when
> > validateOnly
> > > was set.
> > >
> > > On the other hand, if validateOnly is really about validating only some
> > > aspects of the request (which maybe is what the name implies), then we
> > > should clarify in the Javadoc that the quota is not included in the
> > > validation.
> > >
> > > On balance, I agree with what 

[VOTE] KIP-620 Deprecate ConsumerConfig#addDeserializerToConfig(Properties, Deserializer, Deserializer) and ProducerConfig#addSerializerToConfig(Properties, Serializer, Serializer)

2020-06-05 Thread Chia-Ping Tsai
hi All,

I would like to start the vote on KIP-620:

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=155749118

--
Chia-Ping


Re: [DISCUSS] KIP-619 Deprecate ConsumerConfig#addDeserializerToConfig(Properties, Deserializer, Deserializer) and ProducerConfig#addSerializerToConfig(Properties, Serializer, Serializer)

2020-06-05 Thread Chia-Ping Tsai
> I think the KIP is quite straightforward and you could even skip the
> DISCUSS and call for a VOTE directly.

Copy that

On 2020/06/04 23:43:12, "Matthias J. Sax"  wrote: 
> Btw:
> 
> I think the KIP is quite straightforward and you could even skip the
> DISCUSS and call for a VOTE directly.
> 
> 
> -Matthias
> 
> On 6/4/20 4:40 PM, Matthias J. Sax wrote:
> > @Chia-Ping
> > 
> > Can you maybe start a new DISCUSS thread using the new KIP number? This
> > would help to keep the threads separated.
> > 
> > Thanks!
> > 
> > 
> > -Matthias
> > 
> > On 6/3/20 6:56 AM, Chia-Ping Tsai wrote:
> >> When I created the KIP, the next number was 619, and I'm not sure why 
> >> the number is out of sync.
> >>
> >> At any rate, I will update the KIP number :_
> >>
> >> On 2020/06/03 05:06:39, Cheng Tan  wrote: 
> >>> Hi Chia, 
> >>>
> >>> Hope you are doing well. I already took KIP-619 as my KIP identification 
> >>> number. Could you change your KIP id? Thank you.
> >>>
> >>> Best, - Cheng
> >>>
>  On May 31, 2020, at 8:08 PM, Chia-Ping Tsai  wrote:
> 
>  hi All,
> 
>  This KIP plans to deprecate two unused methods without replacement.
> 
>  All suggestions are welcome!
> 
>  KIP: 
>  https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=155749118
>  ISSUE: https://issues.apache.org/jira/browse/KAFKA-10044
> 
>  ---
>  Chia-Ping
> >>>
> >>>
> > 
> 
>