We will address this in the issue you opened.

(https://github.com/openshift/origin/issues/17019)
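
For context: the "No field label conversion function found for version: build.openshift.io/v1" error typically indicates version skew — an oc client that already speaks the build.openshift.io/v1 API group (3.7+) talking to a 3.6 server. A minimal sketch of the kind of client/server check involved, with version strings hardcoded for illustration (in practice they would be parsed from `oc version` output such as "oc v3.7.0" and "openshift v3.6.0"):

```shell
# Hypothetical example versions; substitute the real output of `oc version`.
client_ver="v3.7.0+7ed6862"
server_ver="v3.6.0+c4dd4cf"

# Strip the leading "v" and the "+githash" suffix, keeping major.minor.
normalize() { echo "${1#v}" | cut -d+ -f1 | cut -d. -f1,2; }

if [ "$(normalize "$client_ver")" != "$(normalize "$server_ver")" ]; then
  echo "version skew: client $(normalize "$client_ver") vs server $(normalize "$server_ver")"
else
  echo "client and server versions match"
fi
```

Note that the oc binary that matters here is the one inside the Jenkins agent image running the pipeline step, which may be newer than the oc installed on the workstation.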


On Wed, Oct 25, 2017 at 10:36 AM, Tien Hung Nguyen <[email protected]> wrote:

> Hello,
>
> I have a problem with setting up a Jenkins pipeline while carrying out
> this tutorial: https://blog.openshift.com/openshift-pipelines-jenkins-blue-ocean/
>
>
> When I try to start the pipeline, the Build Image stage fails with the
> following log message in the Jenkins pod:
> ...
> INFO: Waiting for Jenkins to be started
>
>   | Oct 20, 2017 3:26:02 PM io.fabric8.jenkins.openshiftsync.BuildConfigWatcher start
>   | INFO: Now handling startup build configs!!
>   | Oct 20, 2017 3:26:02 PM io.fabric8.jenkins.openshiftsync.ConfigMapWatcher start
>   | INFO: Now handling startup config maps!!
>   | Oct 20, 2017 3:26:02 PM io.fabric8.jenkins.openshiftsync.ImageStreamWatcher start
>   | INFO: Now handling startup image streams!!
>   | Oct 20, 2017 3:26:02 PM org.springframework.context.support.AbstractApplicationContext prepareRefresh
>   | INFO: Refreshing org.springframework.web.context.support.StaticWebApplicationContext@1538cca: display name [Root WebApplicationContext]; startup date [Fri Oct 20 15:26:02 UTC 2017]; root of context hierarchy
>   | Oct 20, 2017 3:26:02 PM org.springframework.context.support.AbstractApplicationContext obtainFreshBeanFactory
>   | INFO: Bean factory for application context [org.springframework.web.context.support.StaticWebApplicationContext@1538cca]: org.springframework.beans.factory.support.DefaultListableBeanFactory@1dc8eb5
>   | Oct 20, 2017 3:26:02 PM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
>   | INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@1dc8eb5: defining beans [filter,legacy]; root of factory hierarchy
>   | Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
>   | INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
>   | Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
>   | INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
>   | Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
>   | INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
>   | Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
>   | INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
>   | Oct 20, 2017 3:26:03 PM io.fabric8.jenkins.openshiftsync.ConfigMapWatcher$1 doRun
>   | INFO: creating ConfigMap watch for namespace ci and resource version 8430
>   | Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
>   | INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
>   | Oct 20, 2017 3:26:03 PM io.fabric8.jenkins.openshiftsync.BuildConfigWatcher$1 doRun
>   | INFO: creating BuildConfig watch for namespace ci and resource version 8430
>   | Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
>   | INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
>   | Oct 20, 2017 3:26:03 PM io.fabric8.jenkins.openshiftsync.BuildWatcher$1 doRun
>   | INFO: creating Build watch for namespace ci and resource version 8430
>   | Oct 20, 2017 3:26:03 PM hudson.WebAppMain$3 run
>   | INFO: Jenkins is fully up and running
>   | Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
>   | INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
>   | Oct 20, 2017 3:26:04 PM io.fabric8.jenkins.openshiftsync.ImageStreamWatcher$1 doRun
>   | INFO: creating ImageStream watch for namespace ci and resource version 8430
>   | Oct 20, 2017 3:26:05 PM okhttp3.internal.platform.Platform log
>   | INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
>   | Oct 20, 2017 3:26:07 PM org.openshift.jenkins.plugins.openshiftlogin.OpenShiftOAuth2SecurityRealm populateDefaults
>   | INFO: OpenShift OAuth: provider: OpenShiftProviderInfo: issuer: https://127.0.0.1:8443 auth ep: https://127.0.0.1:8443/oauth/authorize token ep: https://127.0.0.1:8443/oauth/token
>   | Oct 20, 2017 3:26:07 PM org.openshift.jenkins.plugins.openshiftlogin.OpenShiftOAuth2SecurityRealm populateDefaults
>   | INFO: OpenShift OAuth returning true with namespace ci SA dir null default /run/secrets/kubernetes.io/serviceaccount SA name null default jenkins client ID null default system:serviceaccount:ci:jenkins secret null default eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjaSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJqZW5raW5zLXRva2VuLXhtMWN4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImplbmtpbnMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3NzE1ODQxNy1iNThjLTExZTctODU3NS0wZTRkZWZjODRiNDMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6Y2k6amVua2lucyJ9.TstDfIRNnKuiVV_SPB8kz9-I3t1XslIbjniahULDdmD4Z64v6C1ZY_WviauPqzta8oPLDt4mKm7XSw_6l1UtuFE_dIFaRhDgGR0rYoYsKrH4LOX979nnd0_zJa-COI4-Ew7yVHQwTGicQU9JSNg0cUlQRl4uxUf1IFkcaAiRfWKtGP3XAVHOEKNNHNaqVJ_i-zFHfPauFR4Y0nvxO3x3Qh3hsktt4bMihNoQSlNuEL7B7ktsITF942lRrSoXYGjKqu6hAh7vyZMs_c8ecaL25CVvyn8MunJsae1XmSRRO5Tvz42lwc5qJce3rOi3GWSGfMTwJleX4udMinFoi-7l2Q redirect null default https://127.0.0.1:8443 server null default https://openshift.default.svc
>   | Oct 20, 2017 3:26:33 PM org.openshift.jenkins.plugins.openshiftlogin.OpenShiftOAuth2SecurityRealm populateDefaults
>   | INFO: OpenShift OAuth: provider: OpenShiftProviderInfo: issuer: https://127.0.0.1:8443 auth ep: https://127.0.0.1:8443/oauth/authorize token ep: https://127.0.0.1:8443/oauth/token
>   | Oct 20, 2017 3:26:33 PM org.openshift.jenkins.plugins.openshiftlogin.OpenShiftOAuth2SecurityRealm populateDefaults
>   | INFO: OpenShift OAuth returning true with namespace ci SA dir null default /run/secrets/kubernetes.io/serviceaccount SA name null default jenkins client ID null default system:serviceaccount:ci:jenkins secret null default eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjaSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJqZW5raW5zLXRva2VuLXhtMWN4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImplbmtpbnMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3NzE1ODQxNy1iNThjLTExZTctODU3NS0wZTRkZWZjODRiNDMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6Y2k6amVua2lucyJ9.TstDfIRNnKuiVV_SPB8kz9-I3t1XslIbjniahULDdmD4Z64v6C1ZY_WviauPqzta8oPLDt4mKm7XSw_6l1UtuFE_dIFaRhDgGR0rYoYsKrH4LOX979nnd0_zJa-COI4-Ew7yVHQwTGicQU9JSNg0cUlQRl4uxUf1IFkcaAiRfWKtGP3XAVHOEKNNHNaqVJ_i-zFHfPauFR4Y0nvxO3x3Qh3hsktt4bMihNoQSlNuEL7B7ktsITF942lRrSoXYGjKqu6hAh7vyZMs_c8ecaL25CVvyn8MunJsae1XmSRRO5Tvz42lwc5qJce3rOi3GWSGfMTwJleX4udMinFoi-7l2Q redirect null default https://127.0.0.1:8443 server null default https://openshift.default.svc
>   | Oct 20, 2017 3:26:33 PM org.openshift.jenkins.plugins.openshiftlogin.OpenShiftOAuth2SecurityRealm updateAuthorizationStrategy
>   | INFO: OpenShift OAuth: user developer, stored in the matrix as developer-admin, based on OpenShift roles [view, edit, admin] already exists in Jenkins
>   | Oct 20, 2017 3:27:10 PM io.fabric8.jenkins.openshiftsync.BuildConfigWatcher updateJob
>   | INFO: Updated job ci-cart-service-pipeline from BuildConfig NamespaceName{ci:cart-service-pipeline} with revision: 8462
>   | Oct 20, 2017 3:27:10 PM io.fabric8.jenkins.openshiftsync.BuildSyncRunListener onStarted
>   | INFO: starting polling build job/ci-cart-service-pipeline/7/
>   | Oct 20, 2017 3:27:42 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud provision
>   | INFO: Excess workload after pending Spot instances: 1
>   | Oct 20, 2017 3:27:42 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud provision
>   | INFO: Template: Kubernetes Pod Template
>   | Oct 20, 2017 3:27:42 PM okhttp3.internal.platform.Platform log
>   | INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
>   | Oct 20, 2017 3:27:43 PM hudson.slaves.NodeProvisioner$StandardStrategyImpl apply
>   | INFO: Started provisioning Kubernetes Pod Template from openshift with 1 executors. Remaining excess workload: 0
>   | Oct 20, 2017 3:27:43 PM okhttp3.internal.platform.Platform log
>   | INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
>   | Oct 20, 2017 3:27:43 PM org.csanchez.jenkins.plugins.kubernetes.ProvisioningCallback call
>   | INFO: Created Pod: maven-xf7cn in namespace ci
>   | Oct 20, 2017 3:27:43 PM org.csanchez.jenkins.plugins.kubernetes.ProvisioningCallback call
>   | INFO: Waiting for Pod to be scheduled (0/100): maven-xf7cn
>   | Oct 20, 2017 3:27:45 PM hudson.TcpSlaveAgentListener$ConnectionHandler run
>   | INFO: Accepted JNLP4-connect connection #1 from /172.17.0.2:49622
>   | Oct 20, 2017 3:27:49 PM okhttp3.internal.platform.Platform log
>   | INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
>   | Oct 20, 2017 3:27:52 PM hudson.slaves.NodeProvisioner$2 run
>   | INFO: Kubernetes Pod Template provisioning successfully completed. We have now 2 computer(s)
>   | Oct 20, 2017 3:29:39 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
>   | INFO: Terminating Kubernetes instance for agent maven-xf7cn
>   | Oct 20, 2017 3:29:39 PM org.jenkinsci.plugins.workflow.job.WorkflowRun finish
>   | INFO: ci-cart-service-pipeline #7 completed: FAILURE
>   | Oct 20, 2017 3:29:39 PM okhttp3.internal.platform.Platform log
>   | INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
>   | Oct 20, 2017 3:29:39 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
>   | INFO: Terminated Kubernetes instance for agent ci/maven-xf7cn
>   | Terminated Kubernetes instance for agent ci/maven-xf7cn
>   | Oct 20, 2017 3:29:39 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
>   | INFO: Disconnected computer maven-xf7cn
>   | Oct 20, 2017 3:29:39 PM jenkins.slaves.DefaultJnlpSlaveReceiver channelClosed
>   | WARNING: Computer.threadPoolForRemoting [#15] for maven-xf7cn terminated
>   | java.nio.channels.ClosedChannelException
>   | at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onReadClosed(ChannelApplicationLayer.java:208)
>   | at org.jenkinsci.remoting.protocol.ApplicationLayer.onRecvClosed(ApplicationLayer.java:222)
>   | at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:832)
>   | at org.jenkinsci.remoting.protocol.FilterLayer.onRecvClosed(FilterLayer.java:287)
>   | at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecvClosed(SSLEngineFilterLayer.java:181)
>   | at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.switchToNoSecure(SSLEngineFilterLayer.java:283)
>   | at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processWrite(SSLEngineFilterLayer.java:503)
>   | at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processQueuedWrites(SSLEngineFilterLayer.java:248)
>   | at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doSend(SSLEngineFilterLayer.java:200)
>   | at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doCloseSend(SSLEngineFilterLayer.java:213)
>   | at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.doCloseSend(ProtocolStack.java:800)
>   | at org.jenkinsci.remoting.protocol.ApplicationLayer.doCloseWrite(ApplicationLayer.java:173)
>   | at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer$ByteBufferCommandTransport.closeWrite(ChannelApplicationLayer.java:311)
>   | at hudson.remoting.Channel.close(Channel.java:1403)
>   | at hudson.remoting.Channel.close(Channel.java:1356)
>   | at hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:708)
>   | at hudson.slaves.SlaveComputer.access$800(SlaveComputer.java:96)
>   | at hudson.slaves.SlaveComputer$3.run(SlaveComputer.java:626)
>   | at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
>   | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   | at java.lang.Thread.run(Thread.java:748)
>   |
>   | Oct 20, 2017 3:29:39 PM hudson.remoting.Request$2 run
>   | WARNING: Failed to send back a reply to the request hudson.remoting.Request$2@1078adc
>   | hudson.remoting.ChannelClosedException: channel is already closed
>   | at hudson.remoting.Channel.send(Channel.java:667)
>   | at hudson.remoting.Request$2.run(Request.java:372)
>   | at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
>   | at org.jenkinsci.remoting.CallableDecorator.call(CallableDecorator.java:19)
>   | at hudson.remoting.CallableDecoratorList$1.call(CallableDecoratorList.java:21)
>   | at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
>   | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   | at java.lang.Thread.run(Thread.java:748)
>   | Caused by: java.nio.channels.ClosedChannelException
>   | at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onReadClosed(ChannelApplicationLayer.java:208)
>   | at org.jenkinsci.remoting.protocol.ApplicationLayer.onRecvClosed(ApplicationLayer.java:222)
>   | at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:832)
>   | at org.jenkinsci.remoting.protocol.FilterLayer.onRecvClosed(FilterLayer.java:287)
>   | at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecvClosed(SSLEngineFilterLayer.java:181)
>   | at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.switchToNoSecure(SSLEngineFilterLayer.java:283)
>   | at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processWrite(SSLEngineFilterLayer.java:503)
>   | at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processQueuedWrites(SSLEngineFilterLayer.java:248)
>   | at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doSend(SSLEngineFilterLayer.java:200)
>   | at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doCloseSend(SSLEngineFilterLayer.java:213)
>   | at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.doCloseSend(ProtocolStack.java:800)
>   | at org.jenkinsci.remoting.protocol.ApplicationLayer.doCloseWrite(ApplicationLayer.java:173)
>   | at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer$ByteBufferCommandTransport.closeWrite(ChannelApplicationLayer.java:311)
>   | at hudson.remoting.Channel.close(Channel.java:1403)
>   | at hudson.remoting.Channel.close(Channel.java:1356)
>   | at hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:708)
>   | at hudson.slaves.SlaveComputer.access$800(SlaveComputer.java:96)
>   | at hudson.slaves.SlaveComputer$3.run(SlaveComputer.java:626)
>   | at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
>   | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   | ... 4 more
>   |
>   | Oct 20, 2017 3:29:39 PM io.fabric8.jenkins.openshiftsync.BuildSyncRunListener onCompleted
>   | INFO: onCompleted job/ci-cart-service-pipeline/7/
>   | Oct 20, 2017 3:29:39 PM io.fabric8.jenkins.openshiftsync.BuildSyncRunListener onFinalized
>   | INFO: onFinalized job/ci-cart-service-pipeline/7/
>
>
> The Jenkins build log shows the following messages:
>
> [Pipeline] // stage
> [Pipeline] stage
> [Pipeline] { (Build Image)
> [Pipeline] unstash
> [Pipeline] sh
> [cicd-cart-service-pipeline] Running shell script
> + oc start-build cart --from-file=target/cart.jar --follow
> Uploading file "target/cart.jar" as binary input for the build ...
> build "cart-4" started
> Receiving source from STDIN as file cart.jar
> ==================================================================
> Starting S2I Java Build .....
> S2I source build with plain binaries detected
> Copying binaries from /tmp/src to /deployments ...
> ... done
>
> Pushing image 172.30.1.1:5000/cicd/cart:latest ...
> Pushed 5/6 layers, 84% complete
> Pushed 6/6 layers, 100% complete
> Push successful
> Error from server (BadRequest): No field label conversion function found for 
> version: build.openshift.io/v1
> [Pipeline] }
> [Pipeline] // stage
>
>
> This is my environment:
> Jenkins 2.73.2 on OpenShift (Persistent)
> Plugins:
> OpenShift Pipeline Jenkins Plugin 1.0.52
> OpenShift Sync 0.1.31
>
> oc v3.6.0+c4dd4cf
> kubernetes v1.6.1+5115d708d7
> features: Basic-Auth
>
> Server https://127.0.0.1:8443
> openshift v3.6.0+c4dd4cf
> kubernetes v1.6.1+5115d708d7
>
> => I'm running a local OpenShift Origin installation via Docker for Mac.
> I installed it following this tutorial: https://github.com/openshift/origin/blob/master/docs/cluster_up_down.md
> with the CLI command: oc cluster up
>
>
> When I carry out the CLI command - oc describe nodes - I get the
> following information:
>
> Name: localhost
> Role:
> Labels: beta.kubernetes.io/arch=amd64
> beta.kubernetes.io/os=linux
> kubernetes.io/hostname=localhost
> Annotations: volumes.kubernetes.io/controller-managed-attach-detach=true
> Taints:
> CreationTimestamp: Sat, 21 Oct 2017 10:42:06 +0200
> Phase:
> Conditions:
> Type Status LastHeartbeatTime LastTransitionTime Reason Message
> ---- ------ ----------------- ------------------ ------ -------
> OutOfDisk False Mon, 23 Oct 2017 23:16:21 +0200 Sun, 22 Oct 2017 22:27:45 +0200 KubeletHasSufficientDisk kubelet has sufficient disk space available
> MemoryPressure False Mon, 23 Oct 2017 23:16:21 +0200 Sun, 22 Oct 2017 22:27:45 +0200 KubeletHasSufficientMemory kubelet has sufficient memory available
> DiskPressure False Mon, 23 Oct 2017 23:16:21 +0200 Sun, 22 Oct 2017 22:27:45 +0200 KubeletHasNoDiskPressure kubelet has no disk pressure
> Ready True Mon, 23 Oct 2017 23:16:21 +0200 Mon, 23 Oct 2017 22:58:39 +0200 KubeletReady kubelet is posting ready status
> Addresses: 192.168.65.2,192.168.65.2,localhost
> Capacity:
> cpu: 4
> memory: 6100352Ki
> pods: 40
> Allocatable:
> cpu: 4
> memory: 5997952Ki
> pods: 40
> System Info:
> Machine ID:
> System UUID: 69BC2037-4931-F334-95AC-C0CCC0A84389
> Boot ID: 6f25d3c7-9564-41aa-90ce-0060181ed1a4
> Kernel Version: 4.9.49-moby
> OS Image: CentOS Linux 7 (Core)
> Operating System: linux
> Architecture: amd64
> Container Runtime Version: docker://Unknown
> Kubelet Version: v1.6.1+5115d708d7
> Kube-Proxy Version: v1.6.1+5115d708d7
> ExternalID: localhost
> Non-terminated Pods: (4 in total)
> Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
> ------------------------------
>
> ci cart-3-3plz6 200m (5%) 1 (25%) 512Mi (8%) 1Gi (17%)
> ci jenkins-1-jq1wd 0 (0%) 0 (0%) 512Mi (8%) 512Mi (8%)
> default docker-registry-1-j3pdr 100m (2%) 0 (0%) 256Mi (4%) 0 (0%)
> default router-1-zg2cd 100m (2%) 0 (0%) 256Mi (4%) 0 (0%)
> Allocated resources:
> (Total limits may be over 100 percent, i.e., overcommitted.)
> CPU Requests CPU Limits Memory Requests Memory Limits
> ------------------------------
>
> 400m (10%) 1 (25%) 1536Mi (26%) 1536Mi (26%)
> Events:
> FirstSeen LastSeen Count From SubObjectPath Type Reason Message
> --------- -------- ----- ---- ------------- ---- ------ -------
> 2d 1d 13 kubelet, localhost Normal NodeHasSufficientDisk Node localhost status is now: NodeHasSufficientDisk
> 2d 1d 13 kubelet, localhost Normal NodeHasSufficientMemory Node localhost status is now: NodeHasSufficientMemory
> 2d 1d 13 kubelet, localhost Normal NodeHasNoDiskPressure Node localhost status is now: NodeHasNoDiskPressure
> 2d 1d 18 kubelet, localhost Normal NodeReady Node localhost status is now: NodeReady
> 1h 1h 1 kubelet, localhost Normal Starting Starting kubelet.
> 1h 1h 1 kubelet, localhost Warning ImageGCFailed unable to find data for container /
> 1h 1h 1 kubelet, localhost Normal NodeHasSufficientDisk Node localhost status is now: NodeHasSufficientDisk
> 1h 1h 1 kubelet, localhost Normal NodeHasSufficientMemory Node localhost status is now: NodeHasSufficientMemory
> 1h 1h 1 kubelet, localhost Normal NodeHasNoDiskPressure Node localhost status is now: NodeHasNoDiskPressure
> 1h 1h 1 kubelet, localhost Warning Rebooted Node localhost has been rebooted, boot id: 16be43b9-bdb6-4048-9949-786989bf572c
> 18m 18m 1 kubelet, localhost Normal Starting Starting kubelet.
> 18m 18m 1 kubelet, localhost Warning ImageGCFailed unable to find data for container /
> 18m 18m 1 kubelet, localhost Normal NodeHasSufficientDisk Node localhost status is now: NodeHasSufficientDisk
> 18m 18m 1 kubelet, localhost Normal NodeHasSufficientMemory Node localhost status is now: NodeHasSufficientMemory
> 18m 18m 1 kubelet, localhost Normal NodeHasNoDiskPressure Node localhost status is now: NodeHasNoDiskPressure
> 18m 18m 1 kubelet, localhost Warning Rebooted Node localhost has been rebooted, boot id: 6f25d3c7-9564-41aa-90ce-0060181ed1a4
> 18m 18m 1 kubelet, localhost Normal NodeNotReady Node localhost status is now: NodeNotReady
> 17m 17m 1 kubelet, localhost Normal NodeReady Node localhost status is now: NodeReady
>
>
> I would be very thankful if you could help me with this matter.
>
>
> Best regards,
>
> Tien
>
> _______________________________________________
> users mailing list
> [email protected]
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Ben Parees | OpenShift
