Thanks Alexis!
Lusheng

From: Alexis de Talhouët <adetalhoue...@gmail.com>
Date: Thursday, January 4, 2018 at 4:30 PM
To: "JI, LUSHENG (LUSHENG)" <l...@research.att.com>
Cc: "Gaurav Gupta (c)" <guptagau...@vmware.com>, "onap-discuss@lists.onap.org" 
<onap-discuss@lists.onap.org>
Subject: Re: [onap-discuss] OOM Resource Requirement

Hi, yes, that was a typo. I fixed it.

I see, it’s based on my deployment, which used only the m1.large flavor (8 
vCPU, 16 GB RAM, 160 GB storage).
So based on what you just told me, here are the new accurate requirements:

HEAT

  *   29 VM
  *   148 vCPU
  *   336 GB RAM
  *   3 TB Storage
  *   29 floating IP addresses

OOM

  *   17 VM
  *   54 vCPU
  *   156 GB RAM
  *   1020 GB Storage
  *   15 floating IP addresses

DCAE itself

  *   15 VM
  *   44 vCPU
  *   88 GB RAM
  *   880 GB Storage
  *   15 floating IP addresses
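For what it’s worth, the corrected OOM totals are consistent with the per-VM breakdown given later in this thread (1 Rancher VM at 2 vCPU / 4 GB / 40 GB, 1 ONAP VM at 8 vCPU / 64 GB / 100 GB, plus the 15 DCAE VMs). A quick sketch of the addition, using only numbers from this thread:

```python
# Components of the OOM deployment (figures taken from this email thread).
rancher = {"vm": 1, "vcpu": 2, "ram_gb": 4, "disk_gb": 40}
onap = {"vm": 1, "vcpu": 8, "ram_gb": 64, "disk_gb": 100}
dcae = {"vm": 15, "vcpu": 44, "ram_gb": 88, "disk_gb": 880}

# Sum each resource across the three components.
oom = {k: rancher[k] + onap[k] + dcae[k] for k in rancher}
print(oom)  # {'vm': 17, 'vcpu': 54, 'ram_gb': 156, 'disk_gb': 1020}
```

The sum matches the OOM list above: 17 VMs, 54 vCPU, 156 GB RAM, 1020 GB storage.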

Thanks,
Alexis



On Jan 4, 2018, at 4:24 PM, JI, LUSHENG (LUSHENG) 
<l...@research.att.com> wrote:

Alexis,

The 2 GB RAM for DCAE is probably a typo?

Not sure what VM flavors you use in your deployment.  Among the 15 DCAE VMs, 7 
(CDAP/Hadoop cluster) need to be m1.large size (4 vCPU, 8G RAM, 80G storage) 
and the rest are m1.medium size (2 vCPUs, 4G RAM, 40G storage).

So the total should be 44 vCPUs, 88 GB RAM, and 880 GB disk.  The actual disk 
usage is much smaller, so if there were a flavor supporting smaller disks (e.g. 
20% of current values), the disk footprint could be reduced accordingly.
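As a sanity check, the totals above follow directly from the flavor sizes given in this message (7 m1.large VMs for the CDAP/Hadoop cluster, 8 m1.medium VMs for the rest); a minimal sketch:

```python
# Per-flavor counts and sizes, as stated in this email.
flavors = {
    "m1.large": {"count": 7, "vcpu": 4, "ram_gb": 8, "disk_gb": 80},
    "m1.medium": {"count": 8, "vcpu": 2, "ram_gb": 4, "disk_gb": 40},
}

# Multiply each resource by the VM count and sum across flavors.
totals = {
    key: sum(f["count"] * f[key] for f in flavors.values())
    for key in ("vcpu", "ram_gb", "disk_gb")
}
print(totals)  # {'vcpu': 44, 'ram_gb': 88, 'disk_gb': 880}
```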

Lusheng


From: <onap-discuss-boun...@lists.onap.org> 
on behalf of Alexis de Talhouët <adetalhoue...@gmail.com>
Date: Thursday, January 4, 2018 at 4:02 PM
To: "Gaurav Gupta (c)" <guptagau...@vmware.com>
Cc: "onap-discuss@lists.onap.org" <onap-discuss@lists.onap.org>
Subject: Re: [onap-discuss] OOM Resource Requirement

FYI, I’ve just made the comparison with the HEAT requirements. The footprint 
for OOM is slightly smaller, but more than 80% of it is DCAE’s footprint.

HEAT

  *   29 VM
  *   148 vCPU
  *   336 GB RAM
  *   3 TB Storage
  *   29 floating IP addresses

OOM

  *   17 VM
  *   123 vCPU
  *   294 GB RAM
  *   2300 GB Storage
  *   15 floating IP addresses

DCAE itself

  *   15 VM
  *   113 vCPU
  *   2 GB RAM
  *   2300 GB Storage
  *   15 floating IP addresses

Hope it helps,
Alexis

On Jan 4, 2018, at 3:46 PM, Alexis de Talhouët 
<adetalhoue...@gmail.com> wrote:

Gaurav, here are the exact numbers for the DCAE requirement.

15 instances
113 vCPU
226 GB RAM
2260 GB disk
15 floating IP



On Jan 4, 2018, at 12:22 PM, Alexis de Talhouët 
<adetalhoue...@gmail.com> wrote:

Gaurav, happy new year to you too! See answers inline.



On Jan 4, 2018, at 12:14 PM, Gaurav Gupta (c) 
<guptagau...@vmware.com> wrote:

Alexis , Michael

Happy new year ,

I had a couple of questions:

a - What is the clean requirement for OOM in terms of memory/vCPU if the 
closed-loop demo is to be attempted, implying DCAE also has to be part of it?

AdT:
- 1 VM for Rancher: 2 vCPU - 4 GB RAM - 40 GB disk
- 1 VM for ONAP: 8 vCPUs - 64 GB RAM - 100 GB disk - 16 GB swap (I added some 
swap because in ONAP most of the apps are not always active; most of them are 
idle, so it’s fine to let the host store dirty pages in swap.)
- 14 VMs for DCAE: 130 vCPUs - 300 GB RAM - 1.5 TB disk (note: this is from 
memory, I just tore down my setup and I don’t recall exactly)





b - How many VMs if we were to use a Rancher-based OOM setup?

AdT: As specified above, only one VM for Rancher.



c - What release should I be using if I were to install the Amsterdam 
maintenance release?

AdT: The OOM amsterdam branch. Note: DCAE isn’t merged yet.





thanks
Gaurav




_______________________________________________
onap-discuss mailing list
onap-discuss@lists.onap.org
https://lists.onap.org/mailman/listinfo/onap-discuss
