We are facing a similar issue with SDC; it is affecting the 
dev-SDC-cassandra-config pods.
We are now trying to change the SDC chart configuration to use a local 
Cassandra instead of the shared one, to avoid this recurring problem.

Thanks and Regards,
M Seshu Kumar
Senior System Architect
Single OSS India Branch Department. S/W BU.
Huawei Technologies India Pvt. Ltd.
Survey No. 37, Next to EPIP Area, Kundalahalli, Whitefield
Bengaluru-560066, Karnataka.
Tel: +91-80-49160700, Mob: 9845355488


-----Original Message-----
From: [email protected] [mailto:[email protected]] On 
Behalf Of Keong Lim
Sent: Wednesday, August 14, 2019 12:35 PM
To: Keong Lim <[email protected]>; [email protected]
Subject: Re: [onap-discuss] Cassandra problem and Kubernetes job problem #aai 
#oom #dublin

Digging deeper into ready.py, wait_for_statefulset_complete has this condition:

===
        s = response.status
        if (s.updated_replicas == response.spec.replicas and
                s.replicas == response.spec.replicas and
                s.ready_replicas == response.spec.replicas and
                s.current_replicas == response.spec.replicas and
                s.observed_generation == response.metadata.generation):
            log.info("Statefulset " + statefulset_name + "  is ready")
            return True
===
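For reference, if the client maps the absent status field to None (as the Kubernetes Python client does for unset fields), the comparison can never succeed, because in Python None never compares equal to an integer:

```python
# Demonstration: comparing None to an int is simply False in Python,
# so a missing updatedReplicas makes the whole conjunction fail.
updated_replicas = None  # field absent from the StatefulSet status
spec_replicas = 3
print(updated_replicas == spec_replicas)  # False -> readiness check never passes
```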

but when I check on the info provided by kubectl:

===
# kubectl -n onap get statefulset/dev-cassandra-cassandra -o yaml ...
status:
  collisionCount: 0
  currentReplicas: 3
  currentRevision: dev-cassandra-cassandra-84f4d86c9f
  observedGeneration: 1
  readyReplicas: 3
  replicas: 3
  updateRevision: dev-cassandra-cassandra-84f4d86c9f

===

The status section does not include an updatedReplicas value. Does that mean 
s.updated_replicas would be None in the ready.py script?
Should the readiness check fail when four of the five conditions are met and 
the remaining one is None?
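For what it's worth, one way to avoid failing on the missing field would be to treat None as "no update in flight". A minimal sketch follows; the function name and the plain-dict status shape are my own for illustration, not the actual ready.py API or the OOM fix:

```python
# Hypothetical sketch (not the actual OOM fix): a None-tolerant variant of
# the readiness condition, which accepts an absent updatedReplicas as long
# as the remaining replica counts already match the spec.

def statefulset_ready(status, spec_replicas, generation):
    # status: dict of StatefulSet .status fields; absent fields are None.
    if status.get("observed_generation") != generation:
        return False
    for field in ("replicas", "ready_replicas", "current_replicas"):
        if status.get(field) != spec_replicas:
            return False
    updated = status.get("updated_replicas")
    # Tolerate a StatefulSet that has never been updated, where the
    # API server omits updatedReplicas entirely.
    return updated is None or updated == spec_replicas


# Mirrors the kubectl output above: updatedReplicas missing, all other counts at 3.
status = {
    "observed_generation": 1,
    "replicas": 3,
    "ready_replicas": 3,
    "current_replicas": 3,
}
print(statefulset_ready(status, 3, 1))  # True
```

With this variant the StatefulSet above would pass, while a genuinely incomplete rollout (e.g. updatedReplicas present but below spec) would still fail.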

This issue appears to be related to 
https://github.com/kubernetes/kubernetes/issues/52653

So, does anyone know how to get an updatedReplicas value into the status?
I am guessing that a spurious kubectl scale command would be enough of an 
update to populate it, but is there a better recommendation?

Thanks,
Keong



