Hi Gildas and all,
Besides the estimates from each project's PTL/committers, I would
suggest we do a quick check of the resource usage of the existing ONAP deployment for
the Amsterdam release. I did that a few weeks ago and found that the resource usage
is relatively low. That makes sense in a real deployment scenario,
since over time data and artifacts will eat up resources (e.g. storage); however,
for the specific purposes of the integration lab, it might not hurt to
reduce the flavors in order to make the best use of the resources. So I gave it a try on
my local cloud instance: the ONAP deployment passed the health test, and the
vFWCL demo has been deployed via this ONAP instance.
For your reference, below is a detailed comparison of the resource
consumption between my deployment and the one from the Integration lab
(Integration-Jenkins).
Note: the storage consumption data might be confusing; simply combine
the "Local(GB)" and Cinder volume figures to compare total storage
consumption.
ONAP instance with reduced flavor (plus additional Cinder volume)  vs.  ONAP instance by Integration-Jenkins

Resource totals
  Reduced flavor:        100 VCPUs,  420 GB local storage, 152 GB RAM, 420 GB Cinder volume
  Integration-Jenkins:   114 VCPUs, 2200 GB local storage, 268 GB RAM, 100 GB Cinder volume

Per-VM breakdown (reduced flavor on the left, Integration-Jenkins on the right;
per-VM Cinder volumes were not itemized, only the totals above)

Instance name         VCPUs  Local(GB)  RAM(GB)   Instance name         VCPUs  Local(GB)  RAM(GB)
dcaecdap00                2         20        4   dcaecdap00                4         80        8
dcaecdap01                2         20        4   dcaecdap01                4         80        8
dcaecdap02                2         20        4   dcaecdap02                4         80        8
dcaecdap03                2         20        4   dcaecdap03                4         80        8
dcaecdap04                2         20        4   dcaecdap04                4         80        8
dcaecdap05                2         20        4   dcaecdap05                4         80        8
dcaecdap06                2         20        4   dcaecdap06                4         80        8
dcaecnsl00                2         20        4   dcaecnsl00                2         40        4
dcaecnsl01                2         20        4   dcaecnsl01                2         40        4
dcaecnsl02                2         20        4   dcaecnsl02                2         40        4
dcaedokp00                2         20        4   dcaedokp00                2         40        4
dcaedoks00                2         20        4   dcaedoks00                2         40        4
dcaeorcl00                2         20        4   dcaeorcl00                2         40        4
dcaepgvm00                2         20        4   dcaepgvm00                2         40        4
vm00-aai-inst1            8          0        8   onap-aai-inst1            8        160       16
vm00-aai-inst2            8          0        8   onap-aai-inst2            8        160       16
vm00-appc                 4          0        8   onap-appc                 4         80        8
vm00-clamp                2         20        4   onap-clamp                2         40        4
vm00-dcae-bootstrap       1         20        2   onap-dcae-bootstrap       1         20        2
vm00-dns-server           1         20        2   onap-dns-server           1         20        2
vm00-message-router       4          0        8   onap-message-router       4         80        8
vm00-multi-service       12          0        8   onap-multi-service       12        160       64
vm00-policy               8          0        8   onap-policy               8        160       16
vm00-portal               4          0        8   onap-portal               4         80        8
vm00-robot                2         20        4   onap-robot                2         40        4
vm00-sdc                  8         40        8   onap-sdc                  8        160       16
vm00-sdnc                 4          0        8   onap-sdnc                 4         80        8
vm00-so                   4          0        8   onap-so                   4         80        8
vm00-vid                  2         20        4   onap-vid                  2         40        4
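As a quick cross-check of the totals row, the per-VM figures for the reduced-flavor deployment can be summed with a few lines of Python. This is just an illustrative sanity check with the numbers copied from the table above, not part of any deployment tooling:

```python
# Per-VM (VCPUs, local storage GB, RAM GB) for the reduced-flavor
# deployment, copied from the comparison table above.
reduced = (
    [(2, 20, 4)] * 7      # dcaecdap00-06
    + [(2, 20, 4)] * 3    # dcaecnsl00-02
    + [(2, 20, 4)] * 4    # dcaedokp00, dcaedoks00, dcaeorcl00, dcaepgvm00
    + [
        (8, 0, 8), (8, 0, 8),    # vm00-aai-inst1, vm00-aai-inst2
        (4, 0, 8),               # vm00-appc
        (2, 20, 4),              # vm00-clamp
        (1, 20, 2), (1, 20, 2),  # vm00-dcae-bootstrap, vm00-dns-server
        (4, 0, 8),               # vm00-message-router
        (12, 0, 8),              # vm00-multi-service
        (8, 0, 8),               # vm00-policy
        (4, 0, 8),               # vm00-portal
        (2, 20, 4),              # vm00-robot
        (8, 40, 8),              # vm00-sdc
        (4, 0, 8), (4, 0, 8),    # vm00-sdnc, vm00-so
        (2, 20, 4),              # vm00-vid
    ]
)

# Sum each column across all 29 VMs.
vcpus, local_gb, ram_gb = (sum(col) for col in zip(*reduced))
print(vcpus, local_gb, ram_gb)  # 100 420 152 -- matches the totals row
```

The Integration-Jenkins column can be checked the same way (it sums to 114 VCPUs, 2200 GB, 268 GB).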
Best Regards,
Bin Yang, Solution Readiness Team, Wind River
Direct +86 10 84777126 Mobile +86 13811391682 Fax +86 10 64398189
Skype: yangbincs993
From: [email protected]
[mailto:[email protected]] On Behalf Of Michael O'Brien
Sent: Thursday, November 30, 2017 5:33 AM
To: Gildas Lanilis; onap-release
Cc: onap-discuss
Subject: Re: [onap-discuss] ACTION REQUIRED: Estimating LAB resource needs
Gildas,
Hi, Yes good topic. I'll post preliminary requirements to the page.
Observations on the Open-Lab quota:
The quota does not reflect the actual availability of RAM for
large 16-64 GB VMs. Because we don't distribute a VM across blades, we need to
have at least one blade that can handle a larger VM.
HEAT
83 GB = minimal HEAT deployment - enough to run the
vFW without closed-loop.
When bringing up a HEAT deployment I have run
into the fragmented-quota issue: we may have 500 GB allocated for the tenant,
but in reality we have trouble bringing up 100 GB because each of the 20+ blades
may not have 8+ GB available for that tenant. The result of this is that I have
only been able to bring up the minimal ONAP R1 install (no clamp, dcae, or open-o
VMs).
OOM
16-64 GB per developer/tester + 64 GB for DCAE
(possibly shared).
Ideally we would run 64 GB VMs for a full ONAP
deployment (minus DCAE, still in HEAT); however, due to the same blade
restriction, the largest per-blade quota in the OOM tenant, for example, is 40 GB.
The solution to this is 4 x 16 GB VMs clustered,
which works fine.
Also, individual developers who would like to
run a component like SDNC/APPC plus maybe AAI and robot will be OK with 16 GB
(3 GB of which goes to the Kubernetes undercloud on the VM).
All this assumes 2 models:
- Unshared: developer/tester needs to redeploy with the latest branch
- Shared: demo/tester just needs to perform use cases on "any" latest deployment
We may want to seriously look at AWS EC2 (not GCE or Azure until
they support spot/market VMs) - the spot model on AWS will run about 20x cheaper
than Rackspace, at about $0.20/hr per 64 GB VM, which comes out to about
$200/month, or $400 in the future if we provision DCAE in Kubernetes.
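As a back-of-envelope check on that estimate (the $0.20/hr rate is the figure quoted above, not a verified AWS price; actual spot prices vary by region and over time):

```python
# Rough monthly compute cost of one 64 GB spot VM at the quoted rate.
# The $0.20/hr figure comes from the email; it is not a verified AWS price.
spot_rate_per_hour = 0.20
hours_per_month = 24 * 30

monthly = spot_rate_per_hour * hours_per_month
print(f"~${monthly:.0f}/month per 64 GB VM")  # ~$144/month
```

That is compute alone; the ~$200/month figure presumably leaves headroom for storage and bandwidth on top.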
Thank you
/michael
From: [email protected]
[mailto:[email protected]] On Behalf Of Gildas Lanilis
Sent: Wednesday, November 29, 2017 15:31
To: onap-release <[email protected]>
Cc: onap-discuss <[email protected]>
Subject: [onap-discuss] ACTION REQUIRED: Estimating LAB resource needs
Hi PTLs,
We need your help.
We are working with Catherine, Helen and Brian on estimating the needs in term
of Integration Lab resources for Beijing Release. This is important as it will
most certainly impact $ budget and thus may require TSC and GB approval.
The details have been posted in the wiki at
https://wiki.onap.org/display/DW/Integration+labs+need+for+Beijing and we need
every team to fill out the table.
Please let Catherine, Helen, and Brian know if you have any questions.
It will be greatly appreciated if the table could be filled out by Monday, Dec
4.
Thanks,
Gildas
Gildas Lanilis
ONAP Release Manager
Santa Clara CA, USA
[email protected]
Mobile: 1 415 238 6287
_______________________________________________
onap-discuss mailing list
[email protected]
https://lists.onap.org/mailman/listinfo/onap-discuss