Hi Michael,

Parker from the UNH lab says they can't find a pod with more memory than the one I 
applied for (60 GB of memory), so I have to perform a partial deploy using the "-a" 
option, and I still get stuck on an init pod when deploying sdc. A partial deploy 
should not run into any insufficient-resource issue, so I wonder what is causing this problem.
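
To rule out resource pressure on the node itself, my plan is to compare its 
allocatable memory against what is already requested, by looking at the 
"Allocated resources" section of the node description, roughly like this 
(node name taken from the describe output further down; this is just a sketch 
of how I intend to check, not something I have drawn conclusions from yet):

kubectl describe node pod2.opnfv.iol.unh.edu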

Here is the pod status for sdc:
NAME                      READY     STATUS     RESTARTS   AGE
sdc-be-491907944-65z37    0/1       Init:1/2   18         3h
sdc-cs-2937804434-mlnvm   1/1       Running    0          3h
sdc-es-2514443912-hxc1t   1/1       Running    0          3h
sdc-fe-1656442225-thnfp   0/1       Init:0/1   18         3h
sdc-kb-281446026-s9b1b    1/1       Running    0          3h
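
For reference, the init container logs can be pulled with something like the 
following (pod and container names taken from the output above and below, so 
adjust as needed):

kubectl -n onap-sdc logs sdc-fe-1656442225-thnfp -c sdc-fe-readiness
kubectl -n onap-sdc describe pod sdc-be-491907944-65z37

The second command is only there to find out which of sdc-be's two init 
containers is failing, since the sdc-fe readiness check also waits for sdc-be.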

Here is the "kubectl describe" output for sdc-fe-1656442225-thnfp:
Name:        sdc-fe-1656442225-thnfp
Namespace:    onap-sdc
Node:        pod2.opnfv.iol.unh.edu/10.10.30.248
Start Time:    Mon, 25 Sep 2017 03:53:58 +0000
Labels:        app=sdc-fe
        pod-template-hash=1656442225
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"onap-sdc","name":"sdc-fe-1656442225","uid":"2894f5f0-a1a5-11e7-9ddc-02ac7d868f68"...
Status:        Pending
IP:        10.42.104.83
Created By:    ReplicaSet/sdc-fe-1656442225
Controlled By:    ReplicaSet/sdc-fe-1656442225
Init Containers:
  sdc-fe-readiness:
    Container ID:    docker://43908f683d32063756d7c0d8a65b1c1ea9efc0f9e478513f1bbd6ec8d5337b1d
    Image:        oomk8s/readiness-check:1.0.0
    Image ID:        docker-pullable://oomk8s/readiness-check@sha256:ab8a4a13e39535d67f110a618312bb2971b9a291c99392ef91415743b6a25ecb
    Port:        <none>
    Command:
      /root/ready.py
    Args:
      --container-name
      sdc-es
      --container-name
      sdc-cs
      --container-name
      sdc-kb
      --container-name
      sdc-be
    State:        Running
      Started:        Mon, 25 Sep 2017 06:48:17 +0000
    Last State:        Terminated
      Reason:        Error
      Exit Code:    1
      Started:        Mon, 25 Sep 2017 06:38:11 +0000
      Finished:        Mon, 25 Sep 2017 06:48:15 +0000
    Ready:        False
    Restart Count:    17
    Environment:
      NAMESPACE:    onap-sdc (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hq3px (ro)
Containers:
  sdc-fe:
    Container ID:
    Image:        nexus3.onap.org:10001/openecomp/sdc-frontend:1.1-STAGING-latest
    Image ID:
    Ports:        9443/TCP, 8181/TCP
    State:        Waiting
      Reason:        PodInitializing
    Ready:        False
    Restart Count:    0
    Readiness:        tcp-socket :8181 delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:
      ENVNAME:    AUTO
      HOST_IP:     (v1:status.podIP)
    Mounts:
      /etc/localtime from sdc-localtime (ro)
      /root/chef-solo/cookbooks/sdc-catalog-fe/recipes/FE_2_setup_configuration.rb from sdc-fe-config (rw)
      /root/chef-solo/environments/ from sdc-environments (rw)
      /usr/share/elasticsearch/data/ from sdc-sdc-es-es (rw)
      /var/lib/jetty/logs from sdc-logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hq3px (ro)
Conditions:
  Type        Status
  Initialized     False
  Ready     False
  PodScheduled     True
Volumes:
  sdc-sdc-es-es:
    Type:    HostPath (bare host directory volume)
    Path:    /dockerdata-nfs/onap/sdc/sdc-es/ES
  sdc-environments:
    Type:    HostPath (bare host directory volume)
    Path:    /dockerdata-nfs/onap/sdc/environments
  sdc-localtime:
    Type:    HostPath (bare host directory volume)
    Path:    /etc/localtime
  sdc-logs:
    Type:    HostPath (bare host directory volume)
    Path:    /dockerdata-nfs/onap/sdc/logs
  sdc-fe-config:
    Type:    HostPath (bare host directory volume)
    Path:    /dockerdata-nfs/onap/sdc/sdc-fe/FE_2_setup_configuration.rb
  default-token-hq3px:
    Type:    Secret (a volume populated by a Secret)
    SecretName:    default-token-hq3px
    Optional:    false
QoS Class:    BestEffort
Node-Selectors:    <none>
Tolerations:    node.alpha.kubernetes.io/notReady:NoExecute for 300s
        node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  FirstSeen  LastSeen  Count  From                             SubObjectPath                           Type     Reason            Message
  ---------  --------  -----  ----                             -------------                           ----     ------            -------
  2h         1m        18     kubelet, pod2.opnfv.iol.unh.edu  spec.initContainers{sdc-fe-readiness}   Normal   Pulling           pulling image "oomk8s/readiness-check:1.0.0"
  2h         1m        18     kubelet, pod2.opnfv.iol.unh.edu                                          Warning  FailedSync        Error syncing pod
  2h         1m        18     kubelet, pod2.opnfv.iol.unh.edu  spec.initContainers{sdc-fe-readiness}   Normal   Pulled            Successfully pulled image "oomk8s/readiness-check:1.0.0"
  2h         1m        18     kubelet, pod2.opnfv.iol.unh.edu  spec.initContainers{sdc-fe-readiness}   Normal   Created           Created container
  2h         1m        18     kubelet, pod2.opnfv.iol.unh.edu  spec.initContainers{sdc-fe-readiness}   Normal   Started           Started container
  2h         7s        208    kubelet, pod2.opnfv.iol.unh.edu                                          Warning  DNSSearchForming  Search Line limits were exceeded, some dns names have been omitted, the applied search line is: onap-sdc.svc.cluster.local svc.cluster.local cluster.local kubelet.kubernetes.rancher.internal kubernetes.rancher.internal rancher.internal
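
For the "FailedSync" warning, the kubelet log on pod2.opnfv.iol.unh.edu around 
the time of the events might have more detail; something like the command below, 
assuming kubelet runs under systemd there (on a Rancher setup it may instead run 
as a container, in which case "docker logs" on the kubelet container would be the 
equivalent):

journalctl -u kubelet --since "3 hours ago"

For the DNSSearchForming warning, one of the running pods can show the 
resolv.conf that was actually applied:

kubectl -n onap-sdc exec sdc-cs-2937804434-mlnvm -- cat /etc/resolv.conf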

Do you have a clue what is causing this "FailedSync"? BTW, I just did another test 
with aai and every pod is running. Another question: does each component of ONAP 
have its own DB and message queue, or do I have to deploy one component first to 
set up the infrastructure for the others?

Thanks
Harry

_______________________________________________
onap-discuss mailing list
[email protected]
https://lists.onap.org/mailman/listinfo/onap-discuss
