Re: kubernetes plugin jnlp4

2018-05-04 Thread Edward Bond
Carlos,

After upgrading the Jenkins backend host, ingress, etc. to k8s 1.9, I haven’t
seen a disconnection since. Maybe it was just the way k8s 1.6 behaved.


I will keep an eye on it.
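
For anyone else watching for this, a quick script-console check for agents
that dropped offline, and why (a minimal sketch, not specific to any setup):

import jenkins.model.Jenkins

// List every agent with its offline state and, when recorded, the cause.
Jenkins.instance.computers.each { c ->
  println "${c.name}: offline=${c.offline}${c.offlineCause ? ', cause=' + c.offlineCause : ''}"
}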

- Ed


> On Apr 27, 2018, at 7:15 AM, Carlos Sanchez wrote:
> 
> The exceptions may be irrelevant, what errors do you get in the build log?
> 
> [...]

Re: kubernetes plugin jnlp4

2018-04-27 Thread Carlos Sanchez
The exceptions may be irrelevant, what errors do you get in the build log?

On Fri, Apr 27, 2018, 04:25 Edward Bond wrote:

> [...]

Re: kubernetes plugin jnlp4

2018-04-26 Thread Edward Bond
Carlos, 

Jenkins: Jenkins ver. 2.107.2 
kubernetes plugin: 1.6.0
Master log file: https://pastebin.com/SBs1iTpy 

Using jnlp-slave:3.10-1

Kube svc:
NAME                CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
jenkins-discovery                              5/TCP     195d
jenkins-ui                                     80/TCP    195d

Top of the Dockerfile:
FROM jenkins/jnlp-slave:3.10-1
USER root

The pod template helper:

// Shared pipeline helper: runs the given closure inside a pod with our
// custom jnlp agent plus postgres/elastic/redis sidecars.
def runDefault(Closure closure) {
  podTemplate(label: 'default-pod',
    containers: [
      containerTemplate(name: 'jnlp', image: 'build-agent:jnlp4',
        command: 'jenkins-slave',
        resourceRequestCpu: '450m',
        resourceLimitCpu: '850m',
        resourceRequestMemory: '3400Mi',
        resourceLimitMemory: '5608Mi',
        // Single quotes on purpose: the plugin substitutes these at pod creation.
        args: '${computer.jnlpmac} ${computer.name}'),
      containerTemplate(name: 'postgres', image: 'postgres:9.6.2'),
      containerTemplate(name: 'elastic', image: 'docker.elastic.co/elasticsearch/elasticsearch:6.2.3'),
      containerTemplate(name: 'redis', image: 'redis:3.2.10')
    ],
    volumes: [
      // Mount the host docker socket and binary so builds can run docker.
      hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock'),
      hostPathVolume(hostPath: '/usr/bin/docker', mountPath: '/usr/bin/docker')
    ]
  ) {
    closure()
  }
}
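
For context, a Jenkinsfile invokes the helper roughly like this (a sketch
only: the node label must match the podTemplate label, and the sh steps are
placeholders):

runDefault {
  node('default-pod') {
    container('jnlp') {
      sh 'make test'          // runs in the custom jnlp container
    }
    container('postgres') {
      sh 'psql --version'     // runs in the postgres sidecar
    }
  }
}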

I don’t fully understand what all these connection issues mean, so I don’t
know what to do to fix them.

The k8s nodes are v1.6.7, running on kops in AWS.


Any ideas why I am getting:
java.nio.channels.ClosedChannelException
    at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onReadClosed(ChannelApplicationLayer.java:208)

WARNING: IOHub#1: Worker[channel:java.nio.channels.SocketChannel[connected local=/100.96.3.136:5 remote=100.96.2.194/100.96.2.194:49506]] / Computer.threadPoolForRemoting [#248] for jenkins-slave-z19q3-vpgcb terminated
java.nio.channels.ClosedChannelException
    at org.jenkinsci.remoting.protocol.NetworkLayer.onRecvClosed(NetworkLayer.java:154)
    at org.jenkinsci.remoting.protocol.impl.NIONetworkLayer.ready(NIONetworkLayer.java:179)
    at org.jenkinsci.remoting.protocol.IOHub$OnReady.run(IOHub.java:789)
    at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
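
Since ClosedChannelException only says the agent’s TCP connection went away,
one thing worth checking is whether any remoting ping tuning is in effect on
the master. A quick script-console probe (assuming the stock ChannelPinger
system properties; null means the built-in defaults are in effect):

// Print the remoting ping tuning, if any was set via -D flags.
println System.getProperty('hudson.slaves.ChannelPinger.pingIntervalSeconds')
println System.getProperty('hudson.slaves.ChannelPinger.pingTimeoutSeconds')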


Thanks again,

- Ed



> On Apr 24, 2018, at 10:44 AM, Edward Bond wrote:
> 
> [...]

Re: kubernetes plugin jnlp4

2018-04-24 Thread Edward Bond
Carlos,

Was this a known issue before? I will snapshot the drive and try upgrading
everything to the latest versions next week.

Maybe it is a kops / aws issue.

Thanks!

- Ed

> On Apr 24, 2018, at 4:54 AM, Carlos Sanchez wrote:
> 
> [...]


Re: kubernetes plugin jnlp4

2018-04-24 Thread Carlos Sanchez
All the plugin builds are running in GKE and I haven't experienced any
failed jobs for a while: Jenkins weekly and the latest k8s release, with the
default jnlp image.

On Mon, Apr 23, 2018 at 9:58 PM, Edward Bond wrote:

> [...]


kubernetes plugin jnlp4

2018-04-23 Thread Edward Bond
Hello All,

I am using Jenkins LTS 2.107.2 with kubernetes plugin 1.1.3 and jnlp-2,
running on kops inside AWS.

I have to have jnlp2-connect enabled because using jnlp4 has a higher 
failure rate.
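
For reference, the protocol set is toggled under Manage Jenkins > Configure
Global Security > Agent protocols; a script-console equivalent would look
roughly like the sketch below (an approximation, not a copy of my actual
config):

import jenkins.model.Jenkins

def j = Jenkins.instance
// Copy first: the returned set may be unmodifiable.
def protocols = new HashSet<String>(j.agentProtocols)
protocols.add('JNLP2-connect')      // keep the older protocol enabled
protocols.remove('JNLP4-connect')   // drop the one that fails for us
j.agentProtocols = protocols
j.save()
println j.agentProtocols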

Currently I am getting about an 80% success rate with my current setup.
When I upgraded (2 months ago) to the latest master and latest k8s plugin,
builds succeeded only about 15% of the time.

WARNING: Failed to send back a reply to the request hudson.remoting.Request$2@725700df
hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@7083e923:Channel to /100.96.3.149": channel is already closed
    at hudson.remoting.Channel.send(Channel.java:715)
    at hudson.remoting.Request$2.run(Request.java:377)
    at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
    at org.jenkinsci.remoting.CallableDecorator.call(CallableDecorator.java:19)
    at hudson.remoting.CallableDecoratorList$1.call(CallableDecoratorList.java:21)
    at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException
    at hudson.remoting.Channel.close(Channel.java:1443)
    at hudson.remoting.Channel.close(Channel.java:1399)
    at hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:746)
    at hudson.slaves.SlaveComputer.access$800(SlaveComputer.java:99)
    at hudson.slaves.SlaveComputer$3.run(SlaveComputer.java:664)
    at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    ... 4 more

These are the type of errors I get.

Does anyone have a list of versions that work for them with jnlp4?

It would be awesome to have a known good config of:

1. Master docker image version
2. K8s plugin version
3. jnlp pod agent base image version


Thanks in advance.
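
If anyone wants to share one, here is a script-console sketch that dumps the
exact running versions so a known-good combination can be recorded (it only
covers core and plugin versions; the agent base image still has to be noted
by hand):

import jenkins.model.Jenkins

// Print the core version plus every installed plugin as name:version.
println "Jenkins core: ${Jenkins.VERSION}"
Jenkins.instance.pluginManager.plugins
    .sort { it.shortName }
    .each { println "${it.shortName}:${it.version}" }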
