That does make sense – perhaps another set of columns or separate table for 
container data.

How do you measure/estimate the memory needed for a container?

Brian



From: PLATANIA, MARCO
Sent: Thursday, November 30, 2017 9:41 AM
To: FREEMAN, BRIAN D <[email protected]>; Gildas Lanilis 
<[email protected]>; TIMONEY, DAN <[email protected]>; onap-release 
<[email protected]>
Cc: onap-discuss <[email protected]>
Subject: Re: [onap-discuss] [Onap-release] ACTION REQUIRED: Estimating LAB 
resource needs

Brian,

Thanks for the clarification; I was also confused. Resource assessment is 
important for HEAT, which needs to know the number of CPUs, memory, etc. for 
each specific VM.

If we use OOM for Beijing (as I understand), in my opinion the right way to 
think about it is at the container level, rather than the VM level. This means 
that PTLs should say how many instances of their component they need (or, put 
another way, how many replicas of their containers they need). Resource sizing 
is then done at the cluster level. Depending on how many containers we’ll need 
to support, the OOM cluster size will be adjusted.
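A minimal sketch of the cluster-level sizing Marco describes: sum replicas times per-container memory across components, padded for node overhead. The component names, replica counts, memory figures, and the overhead factor below are all hypothetical illustrations, not numbers from this thread.

```python
# Hypothetical per-component figures: (replica count, memory per container in GB).
# None of these numbers come from the thread; they only illustrate the math.
components = {
    "sdnc": (3, 4.0),
    "aai":  (2, 8.0),
    "so":   (2, 6.0),
}

def cluster_memory_gb(components, overhead_factor=1.2):
    """Total memory the OOM cluster must provide: sum of replicas times
    per-container memory, padded by a factor for OS/Kubernetes overhead."""
    raw = sum(replicas * mem for replicas, mem in components.values())
    return raw * overhead_factor

print(cluster_memory_gb(components))  # 48.0 with these sample figures
```

With real per-component replica requests from each PTL, the same sum gives the cluster size to provision.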

Does it make sense?

Marco

From: <[email protected]> on behalf of "FREEMAN, BRIAN D" <[email protected]>
Date: Thursday, November 30, 2017 at 9:07 AM
To: Gildas Lanilis <[email protected]>, "TIMONEY, DAN" 
<[email protected]>, onap-release <[email protected]>
Cc: onap-discuss <[email protected]>
Subject: Re: [onap-discuss] [Onap-release] ACTION REQUIRED: Estimating LAB 
resource needs


Folks,

Just some clarity based on responses so far.

The goal is to estimate the cloud resources we will need for testing and to 
help guide the testing strategy, since we likely will not have enough main 
memory for all the instances we had in Amsterdam with HA and Geo.

We are assuming OOM to minimize the footprint, so the VM numbers are less 
relevant.

Today in HEAT, a full Amsterdam deployment is about 300 GB of main memory 
(SB01); DCAE is already clustered.

If we go with HA/clustering, that becomes roughly 600 GB for one site 
(assuming DCAE is the bulk of current memory utilization) and 1.2 TB for two 
sites, but not all components need to scale the same way and there are 
efficiencies with OOM.

To the extent you can assess the memory you need in OOM with clustering that 
would be really helpful.

Brian



From: Gildas Lanilis [mailto:[email protected]]
Sent: Wednesday, November 29, 2017 5:31 PM
To: TIMONEY, DAN <[email protected]>; onap-release <[email protected]>
Cc: onap-discuss <[email protected]>; FREEMAN, BRIAN D <[email protected]>
Subject: RE: [Onap-release] ACTION REQUIRED: Estimating LAB resource needs

Thanks Dan for your prompt feedback.
Good point. I would suggest you add this point as a comment and we will take it 
from there.

Thanks,
Gildas
ONAP Release Manager
1 415 238 6287

From: TIMONEY, DAN [mailto:[email protected]]
Sent: Wednesday, November 29, 2017 1:56 PM
To: Gildas Lanilis <[email protected]>; onap-release 
<[email protected]>
Cc: onap-discuss <[email protected]>; FREEMAN, BRIAN D 
<[email protected]>
Subject: Re: [Onap-release] ACTION REQUIRED: Estimating LAB resource needs

Gildas,

This is an excellent topic – thanks for starting this discussion!

One suggestion – we might want to separate out the HA/Geo Redundancy columns 
into two sets: one set for local HA only, and a second set for geo redundancy.

Just by way of example, in the case of SDN-C, we’d need 1 VM for no redundancy, 
8 for local HA, and 16 for geo redundancy.

I know that our target is geo redundancy, but it might be good to have a view 
of resources for local HA only as well.  That way, if we find we can’t afford 
the VMs for full geo redundancy, we’ll know the minimal set for local HA as 
well.

Dan

--
Dan Timoney
SDN-CP / OpenECOMP SDN-C SSO



From: <[email protected]> on behalf of Gildas Lanilis 
<[email protected]>
Date: Wednesday, November 29, 2017 at 3:31 PM
To: onap-release <[email protected]>
Cc: onap-discuss <[email protected]>, "FREEMAN, BRIAN D" 
<[email protected]>
Subject: [Onap-release] ACTION REQUIRED: Estimating LAB resource needs

Hi PTLs,

We need your help.
We are working with Catherine, Helen, and Brian on estimating the needs in 
terms of Integration Lab resources for the Beijing Release. This is important 
as it will most certainly impact the budget and thus may require TSC and GB 
approval.
The details have been posted in the wiki at 
https://wiki.onap.org/display/DW/Integration+labs+need+for+Beijing and we need 
every team to fill out the table.

Please let Catherine, Helen, Brian, or me know if you have any questions.

It will be greatly appreciated if the table could be filled out by Monday, Dec 
4.

Thanks,
Gildas

Gildas Lanilis
ONAP Release Manager
Santa Clara CA, USA
[email protected]
Mobile: 1 415 238 6287

_______________________________________________
onap-discuss mailing list
[email protected]
https://lists.onap.org/mailman/listinfo/onap-discuss