Sylvain,
Thanks for the heads up. I verified the behavior on a couple of environments: the
AWS CD system and a dev VMware deployment on my laptop are both OK.
As James states, the timing may be off depending on what else is running on that
particular VM in the 1+13 cluster. Bringing up only POMBA is fine, for example,
but not realistic. LOG-845, raised by James, should fix this. Thanks for the
triage, Sylvain.
Two different deployments:

CD:
16:34:15 onap onap-pomba-pomba-kibana-849cb74588-2k2hv  1/1 Running 0 13m

Dev VM:
onap onap-pomba-pomba-kibana-64f8788bbd-zkq69  1/1 Running 0 1h
CD system: OK
http://jenkins.onap.info/job/oom-cd-master/4700/console
16:34:14 kube-system heapster-7b48b696fc-hbb86  1/1 Running 0 2h
16:34:15 kube-system kube-dns-6655f78c68-zqnkh  3/3 Running 0 2h
16:34:15 kube-system kubernetes-dashboard-6f54f7c4b-b9phq  1/1 Running 0 2h
16:34:15 kube-system monitoring-grafana-7877679464-6jxbk  1/1 Running 0 2h
16:34:15 kube-system monitoring-influxdb-64664c6cf5-6swxb  1/1 Running 0 2h
16:34:15 kube-system tiller-deploy-6f4745cbcf-247xx  1/1 Running 0 2h
16:34:15 onap onap-dmaap-dbc-pg-0  1/1 Running 0 13m
16:34:15 onap onap-dmaap-dbc-pg-1  1/1 Running 0 11m
16:34:15 onap onap-dmaap-dbc-pgpool-c5f8498-n9xc5  1/1 Running 0 13m
16:34:15 onap onap-dmaap-dbc-pgpool-c5f8498-wwskh  1/1 Running 0 13m
16:34:15 onap onap-dmaap-dmaap-bus-controller-557dc8c59c-rr5pr  1/1 Running 0 13m
16:34:15 onap onap-dmaap-dmaap-dr-db-576f7968b8-z474s  1/1 Running 0 13m
16:34:15 onap onap-dmaap-dmaap-dr-node-7647f9d6d8-bp796  1/1 Running 0 13m
16:34:15 onap onap-dmaap-dmaap-dr-prov-f4d84869f-zlpwq  1/1 Running 0 13m
16:34:15 onap onap-dmaap-message-router-76f4799d-h44dm  1/1 Running 0 13m
16:34:15 onap onap-dmaap-message-router-kafka-757cf5cfb5-h8cpj  1/1 Running 0 13m
16:34:15 onap onap-dmaap-message-router-zookeeper-5fb67dfdb5-tlhc4  1/1 Running 0 13m
16:34:15 onap onap-log-log-elasticsearch-9798b655f-szr9j  1/1 Running 0 13m
16:34:15 onap onap-log-log-kibana-6b89fd4858-22brs  1/1 Running 0 13m
16:34:15 onap onap-log-log-logstash-8676887cd4-7nff7  1/1 Running 0 13m
16:34:15 onap onap-log-log-logstash-8676887cd4-f5xkj  1/1 Running 0 13m
16:34:15 onap onap-log-log-logstash-8676887cd4-fg8b9  1/1 Running 0 13m
16:34:15 onap onap-log-log-logstash-8676887cd4-xt289  1/1 Running 0 13m
16:34:15 onap onap-log-log-logstash-8676887cd4-zpt9t  1/1 Running 0 13m
16:34:15 onap onap-pomba-pomba-aaictxbuilder-8565f7d75c-6rfn5  2/2 Running 0 13m
16:34:15 onap onap-pomba-pomba-contextaggregator-d9f888c4-7bkrs  1/1 Running 0 13m
16:34:15 onap onap-pomba-pomba-data-router-66c5544446-526hc  1/1 Running 0 13m
16:34:15 onap onap-pomba-pomba-elasticsearch-5c4f8c6b5b-w5t49  1/1 Running 0 13m
16:34:15 onap onap-pomba-pomba-kibana-849cb74588-2k2hv  1/1 Running 0 13m
16:34:15 onap onap-pomba-pomba-networkdiscovery-556bb9d7b8-59vnn  2/2 Running 0 13m
16:34:15 onap onap-pomba-pomba-networkdiscoveryctxbuilder-6ff7d49464-s5wrv  2/2 Running 0 13m
16:34:15 onap onap-pomba-pomba-sdcctxbuilder-7799ffccc9-qz5tq  1/1 Running 0 13m
16:34:15 onap onap-pomba-pomba-search-data-9896cf798-7kfdt  2/2 Running 0 13m
16:34:15 onap onap-pomba-pomba-servicedecomposition-5f8bb9d6f6-wxzsw  2/2 Running 0 13m
16:34:15 onap onap-pomba-pomba-validation-service-79b8d5cb9f-zstqr  1/1 Running 0 13m
16:34:15 onap onap-robot-robot-64886c7774-lxkg7  1/1 Running 0 13m
Dev system: OK
(Note: the ca container is not started because I am not running DMaaP in this
dev 16G VM.)
root@ubuntu:~/_dev/oom/kubernetes# kubectl get pods --all-namespaces
kube-system heapster-7b48b696fc-tksqt  1/1 Running 6 2d
kube-system kube-dns-6655f78c68-75m59  3/3 Running 18 2d
kube-system kubernetes-dashboard-6f54f7c4b-8mbwm  1/1 Running 7 2d
kube-system monitoring-grafana-7877679464-5bnh7  1/1 Running 7 2d
kube-system monitoring-influxdb-64664c6cf5-tzv9d  1/1 Running 7 2d
kube-system tiller-deploy-6f4745cbcf-2bjhr  1/1 Running 6 2d
onap onap-pomba-pomba-aaictxbuilder-67ccd944f-vrttr  2/2 Running 0 1h
onap onap-pomba-pomba-contextaggregator-678d4587cd-sspb9  0/1 Init:0/1 2 1h
onap onap-pomba-pomba-data-router-6c8cf96c8d-j6s7z  1/1 Running 0 1h
onap onap-pomba-pomba-elasticsearch-7b8bc5f864-c9xbz  1/1 Running 0 1h
onap onap-pomba-pomba-kibana-64f8788bbd-zkq69  1/1 Running 0 1h
onap onap-pomba-pomba-networkdiscovery-5bd8f8b96d-2ftnp  2/2 Running 0 1h
onap onap-pomba-pomba-networkdiscoveryctxbuilder-5bf84c9f6d-qdc4p  2/2 Running 0 1h
onap onap-pomba-pomba-sdcctxbuilder-5b688d6fd5-srzgb  1/1 Running 0 1h
onap onap-pomba-pomba-search-data-5b4d8f7dc6-ndgh7  2/2 Running 0 1h
onap onap-pomba-pomba-servicedecomposition-9885f8f88-95jr8  2/2 Running 0 1h
onap onap-pomba-pomba-validation-service-54598588fc-wb9m2  1/1 Running 0 1h
You bring up another point: healthcheck and container verification. I think
Logging/POMBA should take the lead and expand the healthcheck beyond the
primary REST endpoint and DB containers. I will also add checks on the
elasticsearch and kibana containers, like we do for the logstash, elasticsearch
and kibana containers in the LOG pod.
Add ES/Kibana healthcheck:
https://jira.onap.org/browse/LOG-856
(expands on the 3 current HCs in POMBA:
https://jira.onap.org/browse/LOG-224)
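As a sketch, an expanded healthcheck could probe the standard Elasticsearch
cluster-health and Kibana status APIs. The service hostnames below (pomba-es,
pomba-kibana) are placeholders, not the real POMBA service names; the endpoint
paths are the stock Elasticsearch and Kibana health APIs:

```shell
# Hedged sketch of an extended POMBA healthcheck. Hostnames are
# assumptions -- substitute the actual cluster service names.
check() {
  # $1 = URL to probe, $2 = label for the report line
  if curl -sf --max-time 5 "$1" > /dev/null; then
    echo "$2 OK"
  else
    echo "$2 UNREACHABLE"
  fi
}
check "http://pomba-es.onap:9200/_cluster/health" "elasticsearch"
check "http://pomba-kibana.onap:5601/api/status" "kibana"
```

Something along these lines could slot into the robot healthcheck suite
alongside the existing REST endpoint checks.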
From: [email protected] <[email protected]> On Behalf Of
Sylvain Desbureaux
Sent: Friday, November 23, 2018 8:45 AM
To: James MacNider <[email protected]>; [email protected]
Subject: Re: [onap-discuss] [OOM][Casablanca][POMBA] pomba kibana is OOMKilled
all the time
Hello James,
That's why I said the other kibanas were working: one has higher resource
limits (logging) and the other is not using resource limits at all (see
https://jira.onap.org/browse/CLAMP-250).
I can test with higher values if you want.
Regards,
---
Sylvain Desbureaux
From: James MacNider <[email protected]>
Date: Friday, 23 November 2018 at 14:18
To: "[email protected]" <[email protected]>, DESBUREAUX Sylvain TGI/OLN <[email protected]>
Subject: RE: [OOM][Casablanca][POMBA] pomba kibana is OOMKilled all the time
Hi Sylvain,
There have been occasional reports of this problem with this pod but it’s not
been readily repeatable. Believe it or not, it is probably due to the resource
limits being too conservative. The other ONAP kibana instances you mention
both have higher resource limits (2Gi for clamp, 4Gi for logging). I’ve raised
https://jira.onap.org/browse/LOG-855 to track this issue.
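For anyone hitting this, one way to confirm the restarts are memory-limit kills
rather than some other failure is to read the container's last termination
reason. A sketch, using the pod name from the CD listing above (the hash suffix
will differ per deployment):

```shell
# Sketch: confirm the container was OOMKilled. The jsonpath fields are
# standard Kubernetes pod-status fields; the pod name is deployment-specific.
kubectl get pod -n onap onap-pomba-pomba-kibana-849cb74588-2k2hv \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
```

If this prints OOMKilled, the container hit its memory limit, which matches the
too-conservative-limits theory.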
Thanks,
James
From: [email protected] <[email protected]> On Behalf Of
Sylvain Desbureaux
Sent: Friday, November 23, 2018 5:27 AM
To: [email protected]
Subject: [onap-discuss] [OOM][Casablanca][POMBA] pomba kibana is OOMKilled all
the time
Hello,
I’m doing a daily deployment of a full ONAP (using
https://gitlab.com/Orange-OpenSource/lfn/onap/onap_oom_automatic_installation).
Most of the deployment works, but I keep getting some errors.
In particular, the pomba-kibana pod has been "OOMKilled" at every deployment
for the last week.
I'm deploying the "small" flavor with requests at 1 CPU (which is HUGE, by the
way) and 600M RAM, and limits at 2 CPU and 1.2G RAM.
The two other kibana instances (from log and clamp) are working fine.
Am I the only one having this issue?
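If raising the limits is the fix, a minimal sketch of a Helm values override
(the key path, file name, and release/chart names are assumptions; check the
pomba chart's values.yaml for the real structure, and 4Gi simply mirrors the
logging kibana limit):

```shell
# Hedged sketch: raise the pomba-kibana memory limit via a values override.
# Key path under the pomba chart is an assumption -- verify against the
# chart's values.yaml before applying.
cat > kibana-resources.yaml <<'EOF'
pomba-kibana:
  resources:
    limits:
      cpu: 2
      memory: 4Gi
    requests:
      cpu: 1
      memory: 1Gi
EOF
# Then apply with something like (release and chart names assumed):
#   helm upgrade onap local/onap -f kibana-resources.yaml
```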
--
Sylvain Desbureaux
Senior Automation Architect
ORANGE/IMT/OLN/CNC/NCA/SINA
Fixe : +33 2 96 07 13 80
Mobile : +33 6 71 17 25 57
[email protected]
_________________________________________________________________________________________________________________________
Ce message et ses pieces jointes peuvent contenir des informations
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou
falsifie. Merci.
This message and its attachments may contain confidential or privileged
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been
modified, changed or falsified.
Thank you.
This email and the information contained herein is proprietary and confidential
and subject to the Amdocs Email Terms of Service, which you may review at
https://www.amdocs.com/about/email-terms-of-service