An inadvertent error in a tenant cleanup script removed all IP addresses 
assigned via a specific router. What is our procedure for a full reset/fix of 
the lab?

   Option 1) Re-associate new IPs (I have a copy of my server list; others may 
not). Getting the same IPs back may be an issue.
   Option 2) Full delete/recreate once we get our IP pool back (17 IPs are 
required for an OOM deployment including DCAE).
   Going with option 2: I tested running a stack - it looks OK and got an IP.
   The remaining work is to manually clean up hanging zones and ports (the IPs 
are already deleted).
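The hanging-port part of that cleanup could be sketched roughly as below. This is a hypothetical helper, not the exact lab procedure: it assumes orphaned ports are left in the DOWN state after their IPs were removed, and it only prints the delete commands so they can be reviewed before anything is actually removed.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: generate (not execute) delete commands for hanging ports.

# Format a delete command for one port ID so it can be reviewed before running.
make_port_delete_cmd() {
  printf 'openstack port delete %s\n' "$1"
}

# Only query the cloud when the openstack CLI is actually available.
if command -v openstack >/dev/null 2>&1; then
  # Assumption: orphaned ports sit in DOWN state after the IP removal.
  openstack port list --status DOWN -f value -c ID | while read -r port_id; do
    make_port_delete_cmd "$port_id"   # print for review; pipe to bash to run
  done
fi
```

Printing the commands first, instead of deleting in the loop, keeps an accidental second mass-delete from happening while the lab is still being repaired.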

   For myself, I had a couple of OOM instances and one full DCAE deployment. 
All my configuration is saved offline, so I can delete the DCAE and OOM stacks 
and run cleanups on the ports, VMs and IPs.
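That delete-and-clean sequence could look roughly like the sketch below. The function name and the set of follow-up listings are illustrative assumptions, not the exact commands used; `--yes --wait` simply skips the prompt and blocks until Heat finishes the delete.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: tear down a Heat stack, then list what may be left over.
set -u

teardown_stack() {
  local stack_name="$1"
  # --yes skips the confirmation prompt, --wait blocks until deletion finishes.
  openstack stack delete --yes --wait "$stack_name"
  # Anything Heat did not own must then be cleaned up by hand:
  openstack server list
  openstack port list
  openstack floating ip list
}

# Example invocation (requires an authenticated openstack CLI):
# teardown_stack oom20181214a
```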
   Unrelated, but there are 4-month-old DCAE deployments left over in the 
designate openstack (60G and 35 IPs). I would expect you can delete these, as 
they look to have landed in the wrong openstack by error.
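Spotting leftovers like those 4-month-old deployments could be done with a small age check along these lines. The 120-day threshold is just an example matching "4 month old", and the script assumes GNU `date -d` (as on the Ubuntu hosts shown later in this thread).

```shell
#!/usr/bin/env bash
# Hypothetical sketch: flag Heat stacks older than a threshold so stale ones
# stand out. Assumes GNU date (-d) for timestamp parsing.

# Return success when the given ISO timestamp is more than $2 days in the past.
is_older_than_days() {
  local created_s now_s
  created_s=$(date -d "$1" +%s) || return 2
  now_s=$(date +%s)
  [ $(( (now_s - created_s) / 86400 )) -gt "$2" ]
}

if command -v openstack >/dev/null 2>&1; then
  openstack stack list -f value -c "Stack Name" -c "Creation Time" |
  while read -r name created; do
    if is_older_than_days "$created" 120; then
      echo "stale: $name (created $created)"
    fi
  done
fi
```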

   One question: it looks like we are now using (we also have ) - but there 
are no longer IPs like we used to have this morning, right?

Tested bringing my OOM stack back up - OK for now. I will verify DCAE and the 
Cloudify Manager orchestration next.
obrienbiometrics:lab_logging michaelobrien$ openstack stack create -t oom_openstack.yaml -e oom_openstack_oom.env oom20181214a
+---------------------+-----------------------------------------+
| Field               | Value                                   |
+---------------------+-----------------------------------------+
| id                  | 82fdbb99-6b79-435f-bb27-5de1b38a649a    |
| stack_name          | oom20181214a                            |
| description         | Heat template to install OOM components |
| creation_time       | 2018-02-15T01:02:32Z                    |
| updated_time        | 2018-02-15T01:02:33Z                    |
| stack_status        | CREATE_IN_PROGRESS                      |
| stack_status_reason | Stack CREATE started                    |
+---------------------+-----------------------------------------+
obrienbiometrics:lab_logging michaelobrien$ ssh ubuntu@
Warning: Permanently added '' (ECDSA) to the list of known hosts.
ubuntu@onap-oom-kubeadm:~$ free
              total        used        free      shared  buff/cache   available
Mem:       65976368      163272    65479500        8848      333596    65319388
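Since the transcript above leaves the stack in CREATE_IN_PROGRESS, a small poll loop like the following could confirm the recreate finished before moving on to the DCAE verification. The 30 s interval is arbitrary, and `wait_for_stack` is a hypothetical helper, not an existing script.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: poll a Heat stack until it reaches a terminal status.

# Decide whether a stack_status string is terminal (no more polling needed).
is_terminal_status() {
  case "$1" in
    CREATE_COMPLETE|CREATE_FAILED|ROLLBACK_COMPLETE|ROLLBACK_FAILED) return 0 ;;
    *) return 1 ;;
  esac
}

wait_for_stack() {
  local name="$1" status
  while :; do
    status=$(openstack stack show "$name" -f value -c stack_status)
    echo "$name: $status"
    is_terminal_status "$status" && break
    sleep 30
  done
}

# Example (requires an authenticated openstack CLI):
# wait_for_stack oom20181214a
```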

onap-discuss mailing list