Hi Gildas and all,
Besides the estimations from each project's PTL/committers, I would
suggest we do a quick check against the resource usage of the existing ONAP
deployment for the Amsterdam release. I did that a few weeks ago and found that
the resource usage is relatively low. I believe that makes sense in a real
deployment scenario, since data and artifacts will eat up resources (e.g.
storage) over time; however, for the specific purposes of the integration lab,
it might not hurt to reduce the flavors in order to make the best use of
resources. So I tried this on my local cloud instance: the ONAP deployment
passed the health test, and the vFWCL demo has been deployed via this ONAP
instance.
For your reference, below is a detailed comparison of the resource
consumption between my deployment and the one from the Integration lab
(Integration-Jenkins).
Note: the storage consumption data might be confusing; simply add the
"Local Storage" and "Cinder Volume" figures together to compare total storage
consumption.
ONAP instance with reduced flavors (plus additional Cinder volume) vs. ONAP instance by Integration-Jenkins:

| Instance name (reduced flavor) | VCPUs | Local Storage (GB) | RAM (GB) | Cinder Volume (GB) | Instance name (Integration-Jenkins) | VCPUs | Local Storage (GB) | RAM (GB) | Cinder Volume (GB) |
|---|---:|---:|---:|---:|---|---:|---:|---:|---:|
| Resource Total | 100 | 420 | 152 | 420 | Resource Total | 114 | 2200 | 268 | 100 |
| dcaecdap00 | 2 | 20 | 4 | - | dcaecdap00 | 4 | 80 | 8 | - |
| dcaecdap01 | 2 | 20 | 4 | - | dcaecdap01 | 4 | 80 | 8 | - |
| dcaecdap02 | 2 | 20 | 4 | - | dcaecdap02 | 4 | 80 | 8 | - |
| dcaecdap03 | 2 | 20 | 4 | - | dcaecdap03 | 4 | 80 | 8 | - |
| dcaecdap04 | 2 | 20 | 4 | - | dcaecdap04 | 4 | 80 | 8 | - |
| dcaecdap05 | 2 | 20 | 4 | - | dcaecdap05 | 4 | 80 | 8 | - |
| dcaecdap06 | 2 | 20 | 4 | - | dcaecdap06 | 4 | 80 | 8 | - |
| dcaecnsl00 | 2 | 20 | 4 | - | dcaecnsl00 | 2 | 40 | 4 | - |
| dcaecnsl01 | 2 | 20 | 4 | - | dcaecnsl01 | 2 | 40 | 4 | - |
| dcaecnsl02 | 2 | 20 | 4 | - | dcaecnsl02 | 2 | 40 | 4 | - |
| dcaedokp00 | 2 | 20 | 4 | - | dcaedokp00 | 2 | 40 | 4 | - |
| dcaedoks00 | 2 | 20 | 4 | - | dcaedoks00 | 2 | 40 | 4 | - |
| dcaeorcl00 | 2 | 20 | 4 | - | dcaeorcl00 | 2 | 40 | 4 | - |
| dcaepgvm00 | 2 | 20 | 4 | - | dcaepgvm00 | 2 | 40 | 4 | - |
| vm00-aai-inst1 | 8 | 0 | 8 | - | onap-aai-inst1 | 8 | 160 | 16 | - |
| vm00-aai-inst2 | 8 | 0 | 8 | - | onap-aai-inst2 | 8 | 160 | 16 | - |
| vm00-appc | 4 | 0 | 8 | - | onap-appc | 4 | 80 | 8 | - |
| vm00-clamp | 2 | 20 | 4 | - | onap-clamp | 2 | 40 | 4 | - |
| vm00-dcae-bootstrap | 1 | 20 | 2 | - | onap-dcae-bootstrap | 1 | 20 | 2 | - |
| vm00-dns-server | 1 | 20 | 2 | - | onap-dns-server | 1 | 20 | 2 | - |
| vm00-message-router | 4 | 0 | 8 | - | onap-message-router | 4 | 80 | 8 | - |
| vm00-multi-service | 12 | 0 | 8 | - | onap-multi-service | 12 | 160 | 64 | - |
| vm00-policy | 8 | 0 | 8 | - | onap-policy | 8 | 160 | 16 | - |
| vm00-portal | 4 | 0 | 8 | - | onap-portal | 4 | 80 | 8 | - |
| vm00-robot | 2 | 20 | 4 | - | onap-robot | 2 | 40 | 4 | - |
| vm00-sdc | 8 | 40 | 8 | - | onap-sdc | 8 | 160 | 16 | - |
| vm00-sdnc | 4 | 0 | 8 | - | onap-sdnc | 4 | 80 | 8 | - |
| vm00-so | 4 | 0 | 8 | - | onap-so | 4 | 80 | 8 | - |
| vm00-vid | 2 | 20 | 4 | - | onap-vid | 2 | 40 | 4 | - |

(Per-instance Cinder Volume figures were not broken out; only the totals are available.)
Best Regards,
Bin Yang,Solution Readiness Team,Wind River
Direct: +86 10 84777126  Mobile: +86 13811391682  Fax: +86 10 64398189
Skype: yangbincs993
From: onap-discuss-boun...@lists.onap.org
[mailto:onap-discuss-boun...@lists.onap.org] On Behalf Of Michael O'Brien
Sent: Thursday, November 30, 2017 5:33 AM
To: Gildas Lanilis; onap-release
Cc: onap-discuss
Subject: Re: [onap-discuss] ACTION REQUIRED: Estimating LAB resource needs
Gildas,
Hi, yes, good topic. I'll post preliminary requirements to the page.
Observations on the Open-Lab quota:
The quota does not reflect the actual availability of RAM for
large 16-64 GB VMs. Because we don't distribute a VM across blades, we need to
have at least one blade that can handle a larger VM.
HEAT
83 GB = minimal HEAT deployment, enough to run the
vFW without closed-loop.
When bringing up a HEAT deployment I have run
into the fragmented-quota issue: we may have 500 GB allocated for the tenant,
but in reality we have trouble bringing up 100 GB, because each of the 20+
blades may not have 8+ GB available for that tenant. The result is that I have
only been able to bring up the minimal ONAP R1 install (no CLAMP, DCAE, or
Open-O VMs).
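The fragmentation problem above is easy to see with a toy scheduler check. The per-blade numbers below are hypothetical, purely for illustration, not measured values from the Open-Lab:

```python
# Hypothetical free RAM (GB) per blade for one tenant -- illustrative
# values only, not measurements from the Open-Lab.
free_ram_per_blade = [6, 4, 8, 5, 7, 3, 6, 4, 5, 7] * 2  # 20 blades

total_free = sum(free_ram_per_blade)    # aggregate quota looks healthy
largest_slot = max(free_ram_per_blade)  # biggest single VM that can land

def can_schedule(vm_ram_gb):
    """A VM fits only if one blade has enough free RAM,
    since a VM cannot be spread across blades."""
    return any(free >= vm_ram_gb for free in free_ram_per_blade)

print(total_free)        # 110 GB free in aggregate
print(can_schedule(8))   # True: at least one blade has 8 GB free
print(can_schedule(16))  # False: no single blade can host a 16 GB VM
```

So a tenant can report over 100 GB of headroom while still failing to place a single 16 GB VM, which is exactly the behavior described above.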
OOM
16-64 GB per developer/tester + 64 GB for DCAE
(possibly shared).
Ideally we run 64 GB VMs for a full ONAP
deployment (minus DCAE, which is still in HEAT); however, due to the same
per-blade restriction, the largest blade quota in the OOM tenant, for example,
is 40 GB.
The solution to this is 4 x 16 GB VMs clustered,
which works fine.
Also, individual developers who would like to
run a component like SDNC/APPC plus maybe AAI and Robot will be OK with 16 GB
(3 GB of which goes to the Kubernetes undercloud on the VM).
All this assumes 2 models:
- Unshared: a developer/tester needs to redeploy with the latest branch
- Shared: a demo/tester just needs to perform use cases on "any" latest deployment