Are there actually any logs?
Could it be that the deployment has unsatisfied dependencies,
such as missing secrets or persistent volumes?

You can also check the output of `oc status -v`.
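When a rollout hangs or the deployer pod is killed without useful events, a short checklist of `oc` commands usually narrows it down. A minimal sketch (the DeploymentConfig name `sbpprovui` is taken from the thread below; substitute your own):

```shell
# Hedged checklist: prints the oc commands worth running when a
# deployment fails with little information in events or logs.
checklist() {
  cat <<'EOF'
oc status -v              # surfaces warnings such as missing secrets or PVCs
oc get secrets,pvc        # confirm every referenced secret and claim exists
oc describe dc/sbpprovui  # conditions and recent events on the config
oc logs dc/sbpprovui      # logs from the latest deployment attempt
EOF
}
checklist
```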


Josef Karasek, xPaaS

On Wed, Dec 21, 2016 at 10:22 AM, Michail Kargakis <[email protected]>
wrote:

> Can you paste logs from the application pod?
>
> On Wed, Dec 21, 2016 at 1:05 AM, Srinivas Naga Kotaru (skotaru) <
> [email protected]> wrote:
>
>>
>> LASTSEEN   FIRSTSEEN   COUNT     NAME                  KIND                    SUBOBJECT                     TYPE      REASON              SOURCE                            MESSAGE
>> 37m        37m         1         sbpprovui-5-deploy    Pod                                                   Normal    Scheduled           {default-scheduler }              Successfully assigned sbpprovui-5-deploy to cae-ga2-005.cisco.com
>> 37m        37m         1         sbpprovui-5-deploy    Pod                     spec.containers{deployment}   Normal    Pulled              {kubelet cae-ga2-005.cisco.com}   Container image "openshift3/ose-deployer:v3.3.1.3" already present on machine
>> 37m        37m         1         sbpprovui-5-deploy    Pod                     spec.containers{deployment}   Normal    Created             {kubelet cae-ga2-005.cisco.com}   Created container with docker id abca66ca23a4
>> 37m        37m         1         sbpprovui-5-deploy    Pod                     spec.containers{deployment}   Normal    Started             {kubelet cae-ga2-005.cisco.com}   Started container with docker id abca66ca23a4
>> 37m        37m         1         sbpprovui-5-deploy    Pod                     spec.containers{deployment}   Normal    Killing             {kubelet cae-ga2-005.cisco.com}   Killing container with docker id abca66ca23a4: Need to kill pod.
>> 37m        37m         1         sbpprovui-5-deploy    Pod                                                   Warning   FailedSync          {kubelet cae-ga2-005.cisco.com}   Error syncing pod, skipping: failed to "TeardownNetwork" for "sbpprovui-5-deploy_sbpprovui-alln-dev" with TeardownNetworkError: "Failed to teardown network for pod \"0a85c561-c6fa-11e6-b3f1-005056ac66ba\" using network plugins \"redhat/openshift-ovs-multitenant\": Error running network teardown script: Could not find IP address for container 706d6056ba952f2c55699ed61c687b6dcee59a0751b17a72c1199555d8495c43"
>> 37m        37m         1         sbpprovui-5-rl4sf     Pod                                                   Normal    Scheduled           {default-scheduler }              Successfully assigned sbpprovui-5-rl4sf to cae-ga2-004.cisco.com
>> 37m        37m         1         sbpprovui-5-rl4sf     Pod                     spec.containers{sbpprovui}    Normal    Pulling             {kubelet cae-ga2-004.cisco.com}   pulling image "containers.cisco.com/it_cits_connected_software/sbpprovui_sbpprovui@sha256:de74f2b2fd3c4410544e9b3815ad755c6885d65989e87355cc3cca129af7d86a"
>> 37m        37m         1         sbpprovui-5-rl4sf     Pod                     spec.containers{sbpprovui}    Normal    Pulled              {kubelet cae-ga2-004.cisco.com}   Successfully pulled image "containers.cisco.com/it_cits_connected_software/sbpprovui_sbpprovui@sha256:de74f2b2fd3c4410544e9b3815ad755c6885d65989e87355cc3cca129af7d86a"
>> 37m        37m         1         sbpprovui-5-rl4sf     Pod                     spec.containers{sbpprovui}    Normal    Created             {kubelet cae-ga2-004.cisco.com}   Created container with docker id 69e8fb3310d3
>> 37m        37m         1         sbpprovui-5-rl4sf     Pod                     spec.containers{sbpprovui}    Normal    Started             {kubelet cae-ga2-004.cisco.com}   Started container with docker id 69e8fb3310d3
>> 29m        29m         1         sbpprovui-5-rl4sf     Pod                     spec.containers{sbpprovui}    Normal    Killing             {kubelet cae-ga2-004.cisco.com}   Killing container with docker id 69e8fb3310d3: Need to kill pod.
>> 37m        37m         1         sbpprovui-5           ReplicationController                                 Normal    SuccessfulCreate    {replication-controller }         Created pod: sbpprovui-5-rl4sf
>> 29m        29m         1         sbpprovui-5           ReplicationController                                 Normal    SuccessfulDelete    {replication-controller }         Deleted pod: sbpprovui-5-rl4sf
>> 30m        30m         1         sbpprovui-6-deploy    Pod                                                   Normal    Scheduled           {default-scheduler }              Successfully assigned sbpprovui-6-deploy to cae-ga2-007.cisco.com
>> 30m        30m         1         sbpprovui-6-deploy    Pod                     spec.containers{deployment}   Normal    Pulled              {kubelet cae-ga2-007.cisco.com}   Container image "openshift3/ose-deployer:v3.3.1.3" already present on machine
>> 30m        30m         1         sbpprovui-6-deploy    Pod                     spec.containers{deployment}   Normal    Created             {kubelet cae-ga2-007.cisco.com}   Created container with docker id 37b6e930b607
>> 30m        30m         1         sbpprovui-6-deploy    Pod                     spec.containers{deployment}   Normal    Started             {kubelet cae-ga2-007.cisco.com}   Started container with docker id 37b6e930b607
>> 29m        29m         1         sbpprovui-6-deploy    Pod                     spec.containers{deployment}   Normal    Killing             {kubelet cae-ga2-007.cisco.com}   Killing container with docker id 37b6e930b607: Need to kill pod.
>> 29m        29m         1         sbpprovui-6-deploy    Pod                                                   Warning   FailedSync          {kubelet cae-ga2-007.cisco.com}   Error syncing pod, skipping: failed to "TeardownNetwork" for "sbpprovui-6-deploy_sbpprovui-alln-dev" with TeardownNetworkError: "Failed to teardown network for pod \"25f6c3de-c6fb-11e6-b3f1-005056ac66ba\" using network plugins \"redhat/openshift-ovs-multitenant\": Error running network teardown script: Could not find IP address for container e8feb854e48a2033278972d99baf8144f952dc6cdf5b9a4ccaf7667a14346e40"
>> 29m        29m         1         sbpprovui-6-jptzp     Pod                                                   Normal    Scheduled           {default-scheduler }              Successfully assigned sbpprovui-6-jptzp to cae-ga2-006.cisco.com
>> 29m        29m         1         sbpprovui-6-jptzp     Pod                     spec.containers{sbpprovui}    Normal    Pulling             {kubelet cae-ga2-006.cisco.com}   pulling image "containers.cisco.com/it_cits_connected_software/sbpprovui_sbpprovui@sha256:68c6ed8d3717e9f389bb7d225705ffd135030cc3e3bb2c1248d93670415ee7ca"
>> 29m        29m         1         sbpprovui-6-jptzp     Pod                     spec.containers{sbpprovui}    Normal    Pulled              {kubelet cae-ga2-006.cisco.com}   Successfully pulled image "containers.cisco.com/it_cits_connected_software/sbpprovui_sbpprovui@sha256:68c6ed8d3717e9f389bb7d225705ffd135030cc3e3bb2c1248d93670415ee7ca"
>> 29m        29m         1         sbpprovui-6-jptzp     Pod                     spec.containers{sbpprovui}    Normal    Created             {kubelet cae-ga2-006.cisco.com}   Created container with docker id 1fc1ff05c2a6
>> 29m        29m         1         sbpprovui-6-jptzp     Pod                     spec.containers{sbpprovui}    Normal    Started             {kubelet cae-ga2-006.cisco.com}   Started container with docker id 1fc1ff05c2a6
>> 23m        23m         1         sbpprovui-6-jptzp     Pod                     spec.containers{sbpprovui}    Normal    Killing             {kubelet cae-ga2-006.cisco.com}   Killing container with docker id 1fc1ff05c2a6: Need to kill pod.
>> 29m        29m         1         sbpprovui-6           ReplicationController                                 Normal    SuccessfulCreate    {replication-controller }         Created pod: sbpprovui-6-jptzp
>> 23m        23m         1         sbpprovui-6           ReplicationController                                 Normal    SuccessfulDelete    {replication-controller }         Deleted pod: sbpprovui-6-jptzp
>> 23m        23m         1         sbpprovui-7-07tt5     Pod                                                   Normal    Scheduled           {default-scheduler }              Successfully assigned sbpprovui-7-07tt5 to cae-ga2-010.cisco.com
>> 23m        23m         1         sbpprovui-7-07tt5     Pod                     spec.containers{sbpprovui}    Normal    Pulling             {kubelet cae-ga2-010.cisco.com}   pulling image "containers.cisco.com/it_cits_connected_software/sbpprovui_sbpprovui@sha256:5afb02555b682b77d50e0ed4000e4b147267863084763257d761d1b318c73e3f"
>> 23m        23m         1         sbpprovui-7-07tt5     Pod                     spec.containers{sbpprovui}    Normal    Pulled              {kubelet cae-ga2-010.cisco.com}   Successfully pulled image "containers.cisco.com/it_cits_connected_software/sbpprovui_sbpprovui@sha256:5afb02555b682b77d50e0ed4000e4b147267863084763257d761d1b318c73e3f"
>> 23m        23m         1         sbpprovui-7-07tt5     Pod                     spec.containers{sbpprovui}    Normal    Created             {kubelet cae-ga2-010.cisco.com}   Created container with docker id 0273a0d504b0
>> 23m        23m         1         sbpprovui-7-07tt5     Pod                     spec.containers{sbpprovui}    Normal    Started             {kubelet cae-ga2-010.cisco.com}   Started container with docker id 0273a0d504b0
>> 23m        23m         1         sbpprovui-7-deploy    Pod                                                   Normal    Scheduled           {default-scheduler }              Successfully assigned sbpprovui-7-deploy to cae-ga2-009.cisco.com
>> 23m        23m         1         sbpprovui-7-deploy    Pod                     spec.containers{deployment}   Normal    Pulled              {kubelet cae-ga2-009.cisco.com}   Container image "openshift3/ose-deployer:v3.3.1.3" already present on machine
>> 23m        23m         1         sbpprovui-7-deploy    Pod                     spec.containers{deployment}   Normal    Created             {kubelet cae-ga2-009.cisco.com}   Created container with docker id c41632c30ca7
>> 23m        23m         1         sbpprovui-7-deploy    Pod                     spec.containers{deployment}   Normal    Started             {kubelet cae-ga2-009.cisco.com}   Started container with docker id c41632c30ca7
>> 23m        23m         1         sbpprovui-7-deploy    Pod                                                   Warning   FailedSync          {kubelet cae-ga2-009.cisco.com}   Error syncing pod, skipping: failed to "TeardownNetwork" for "sbpprovui-7-deploy_sbpprovui-alln-dev" with TeardownNetworkError: "Failed to teardown network for pod \"0a6029c1-c6fc-11e6-b3f1-005056ac66ba\" using network plugins \"redhat/openshift-ovs-multitenant\": Error running network teardown script: Could not find IP address for container b97395b0b786f17815f10889c75c921c87a65d55323eac202b6639544649137a"
>> 23m        23m         1         sbpprovui-7           ReplicationController                                 Normal    SuccessfulCreate    {replication-controller }         Created pod: sbpprovui-7-07tt5
>> 40m        40m         1         sbpprovui             DeploymentConfig                                      Normal    DeploymentCreated   {deploymentconfig-controller }    Created new deployment "sbpprovui-3" for version 3
>> 38m        38m         1         sbpprovui             DeploymentConfig                                      Normal    DeploymentCreated   {deploymentconfig-controller }    Created new deployment "sbpprovui-4" for version 4
>> 37m        37m         1         sbpprovui             DeploymentConfig                                      Normal    DeploymentCreated   {deploymentconfig-controller }    Created new deployment "sbpprovui-5" for version 5
>> 30m        30m         1         sbpprovui             DeploymentConfig                                      Normal    DeploymentCreated   {deploymentconfig-controller }    Created new deployment "sbpprovui-6" for version 6
>> 23m        23m         1         sbpprovui             DeploymentConfig                                      Normal    DeploymentCreated   {deploymentconfig-controller }    Created new deployment "sbpprovui-7" for version 7
>>
>> --
>>
>> *Srinivas Kotaru*
>>
>>
>>
>> *From: *Marky Jackson <[email protected]>
>> *Date: *Tuesday, December 20, 2016 at 4:02 PM
>> *To: *Srinivas Naga Kotaru <[email protected]>
>> *Cc: *dev <[email protected]>
>> *Subject: *Re: FW: Issues with CAE - sbpprovui
>>
>>
>>
>> Can you show the events tab and also your .yaml (excluding any secret
>> info)? ;-)
>>
>>
>>
>> On Tue, Dec 20, 2016 at 3:54 PM, Srinivas Naga Kotaru (skotaru) <
>> [email protected]> wrote:
>>
>> Any idea why deployments occasionally fail without much info in the
>> events, the deploy pod, or the dc logs?
>>
>>
>>
>> --
>>
>> *Srinivas Kotaru*
>>
>>
>>
>> *From: *Abhishek Priyadarshi
>>
>>
>>
>> *1.  Failed first time deployment*
>>
>>
>>
>>
>>
>>
>>
>> *2. Redeploy*
>>
>>
>>
>>
>>
>> *3. Redeploy*
>>
>>
>>
>>
>>
>> Regards,
>>
>> Abhishek
>>
>> *Software Development Service*
>>
>>
>>
>>
>> _______________________________________________
>> dev mailing list
>> [email protected]
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>>
>>
>>
>> --
>>
>> *Marky Jackson*
>>
>> DevTools Software Engineer, Taulia Inc.
>>
>> m: (408) 464 2965 | e: [email protected] | w: www.taulia.com
>> | a: 201 Mission St. Suite 900 San Francisco CA 94105
>>
>>
>>
>
>
>