Good questions.
   This discussion is just for the Kubernetes side of deploying ONAP (not the 
DCAE heatbridge in Amsterdam or running VNFs on OpenStack – those are outside 
the context of the Kubernetes deployment for now).

   Currently we deploy VNFs to VMs on OpenStack – we do not have containerized 
VNFs yet – so those memory requirements are separate.  A simple use case like 
the vFW, for example, will consume 12G outside of where we run K8s.
   For the memory increase – are you referring to these questions from Beka?
   I have not looked into the full cause, but SDNC has been having issues since 
a refactor in mid-January – I put an automated restart of the SDNC pod partway 
through the CD startup, triggered once all its dependencies are up.
   There is a comment on the UEB listener – this is in the amsterdam branch and 
not master, but it may be related.
   We can do a full analysis of the startup and idle behavior of the containers.
   I have been noticing the memory footprint growing for most of 2018 – 
usually I shut down VFC to keep room for the heaps.
   As to the cause – we are speculating unless we establish a baseline with a 
tool like New Relic and track the sizes of our Java, Python, and DB-based 
images.  Any one or all of these could be demanding more runtime memory due to 
code changes across ONAP.  You would need to track all the projects.
   In Kubernetes you can get a snapshot of each container – keep these 
periodically and compare them.  Ideally we would run RAM profiles as part of 
our CI/CD pipeline in the future.
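   The snapshot idea above could be scripted roughly as follows – a minimal sketch only; the `onap` namespace name is an assumption, and `kubectl top` needs a metrics backend (heapster in that era, metrics-server later) behind it:

```shell
#!/bin/sh
# Sketch: record a timestamped per-container memory snapshot that can be
# diffed against earlier snapshots. Falls back to a stub header when no
# kubectl/cluster is reachable, so the script itself runs anywhere.
ts=$(date +%Y%m%d-%H%M%S)
out="mem-snapshot-$ts.txt"
if command -v kubectl >/dev/null 2>&1; then
  # --containers breaks usage down per container, not just per pod
  kubectl top pods --namespace onap --containers > "$out" 2>/dev/null
else
  echo "POD CONTAINER CPU(cores) MEMORY(bytes)" > "$out"
fi
echo "wrote $out"
# Compare the two most recent snapshots by hand, e.g.:
#   diff $(ls -t mem-snapshot-*.txt | head -2 | tail -1) \
#        $(ls -t mem-snapshot-*.txt | head -1)
```

   Kept under version control or shipped to a metrics store, these files give the per-project growth trend the thread is asking for.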

   Thank you

From: Kumar Skand Priya, Viswanath V 
Sent: Saturday, March 10, 2018 09:01
To: Michael O'Brien <>
Subject: Re: [E] [onap-discuss] [OOM] Heads up: ONAP Kubernetes master has 
crossed the 64G VM barrier

Hi Michael,

Does this include the memory occupied by deployed VNFs & NSs as well? If so, 
what are the requirements for running just ONAP (for both R1 & R2)?
And more particularly, what is consuming more space in R2 compared to R1 
(assuming both are plain ONAP installations, i.e. without DCAE & VNFs)?

A few days back, someone else from the community highlighted the possibility 
of memory leaks in the ONAP code, which might account for the increased memory 
consumption. Do you have any thoughts on this?



Viswanath Kumar Skand Priya
Verizon India ( VDSI )

On Sat, Mar 10, 2018 at 2:12 AM, Michael O'Brien 
<<>> wrote:
   ONAP Beijing crossed the 64G boundary as of a couple of weeks ago.
   If you run the system on a 128G VM, the heaps will expand past 64G within 24 
hours.
   If you stay on 64G (which you can), reduce the optional pods or you will be 
getting OOMEs.
   Use the ongoing POC JIRA as a guide – we need a full set of 
deploy-time/runtime dependency trees to know what to shut down.<>
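
   One way to reduce the optional pods – a hypothetical sketch only; the `onap-vfc` namespace is an assumption modeled on the per-component namespaces the OOM charts used at the time, and kubectl may not be present on every host:

```shell
#!/bin/sh
# Sketch: scale every deployment in an optional component's namespace to
# zero replicas to reclaim its memory. The namespace name is illustrative.
ns=onap-vfc
if command -v kubectl >/dev/null 2>&1; then
  for dep in $(kubectl get deployments -n "$ns" -o name 2>/dev/null); do
    kubectl scale "$dep" --replicas=0 -n "$ns" 2>/dev/null || true
  done
  action="scaled $ns deployments to 0 replicas"
else
  action="skipped: kubectl not found"
fi
echo "$action"
```

   Scaling back up is the reverse (`--replicas=1`); later OOM releases moved toward per-component enabled flags in the Helm values for the same effect.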

   The recommended VM size (one VM or a cluster total) is now 80 to 128G for 
Beijing (without the upcoming DCAE port).
   For Amsterdam the OOM side still fits in a 64G VM (you can shut down 
vCPE/vVoLTE-required pods like VFC) – heatbridge works there to DCAE, which 
brings you up to 150G when the full Heat side is up.<>
ONAP startup now reaches a peak of 60 cores, so the more vCores you have the 
less CPU-bound you will be.

Thank you
This message and the information contained herein is proprietary and 
confidential and subject to the Amdocs policy statement, which you may 
review at <>

onap-discuss mailing list<>

