Hi Bryan,
If the steps I sent you work, then please let me know and I can write up a 
patch to add pxe_wol to Apex.

Tim Rozet
Red Hat SDN Team

----- Original Message -----
From: "BRYAN L SULLIVAN" <[email protected]>
To: "Tim Rozet" <[email protected]>
Cc: [email protected]
Sent: Monday, January 23, 2017 1:57:53 PM
Subject: RE: [opnfv-tech-discuss] [Apex] Danube install help

Yes, the src reference was an error in my notes. I'll try again with the notes 
you added.

Not supporting PXE_WOL significantly reduces the types of machines that we can 
use for OPNFV development. I recommend that all bare-metal installers consider 
that PXE_WOL may be required, and provide a deploy parameter and a hook at the 
proper time so that the PXE_WOL wakeup can be sent by the script to the power 
address (MAC) referenced in the lab config. At the very least, if PXE_WOL is 
indicated as the power type in the lab config, the install script should pause, 
inform the user that they need to wake up a specific node (or all nodes) at that 
time, and proceed when the user hits enter.
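
For reference, the wakeup itself is simple to script. A minimal sketch of a WOL 
sender in Python (the function names, MAC, and broadcast address here are 
placeholders for illustration, not anything that exists in Apex today):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A WOL magic packet is 6 bytes of 0xFF followed by the target MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet as a UDP broadcast to the WOL port (default 9)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))
```

A deploy hook could call something like send_wol("<node MAC from lab config>") 
at the point where the installer would otherwise issue an IPMI power-on.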

Thanks,
Bryan Sullivan | AT&T

-----Original Message-----
From: Tim Rozet [mailto:[email protected]] 
Sent: Monday, January 23, 2017 7:35 AM
To: SULLIVAN, BRYAN L <[email protected]>
Cc: [email protected]
Subject: Re: [opnfv-tech-discuss] [Apex] Danube install help

Hi Bryan,
The package requirements should be the same.  One problem I see from your 
output is that you are trying to install source RPMs.  The one thing that has 
changed is the network settings format.  We added a lot of comments there, so 
hopefully it all makes sense.  To use a single network, enable only the 
first network in the network settings.  There is no need to use the deprecated 
'--flat' argument anymore.  The deployment will also use a single compute node 
by default.

You can take the latest 4.0 artifacts, they should be stable.

For pxe_wol, you will need to interrupt the deployment and make a couple of 
changes, as we don't currently support pxe_wol:
1.  Add at 
https://github.com/opnfv/apex/blob/master/lib/overcloud-deploy-functions.sh#L289
 :
cat > deploy_command << EOF
openstack overcloud deploy --templates $DEPLOY_OPTIONS --timeout 90
EOF
exit 0
2.  Login to the undercloud: opnfv-util undercloud
3.  sudo -i; vi /etc/ironic/ironic.conf
4.  Set the enabled drivers: enabled_drivers = pxe_ipmitool,pxe_ssh,pxe_drac,pxe_ilo,pxe_wol
5.  systemctl restart openstack-ironic-conductor; exit
6.  cd /home/stack; . stackrc; vi instackenv.json (this is your parsed inventory file for OOO)
7.  For each "pm_type", change it to "pxe_wol"
8.  For each "pm_addr", change it to the broadcast address of your ctlplane 
subnet (for example, 192.0.2.0/24 would be 192.0.2.255)
9.  Add a "pm_port" under each "pm_addr" and set it to the destination port for 
your WOL setup; the default is 9
10. Set a global bash var: dns_server_ext="--nameserver <your DNS server>"
11. Do the following commands:
for flavor in baremetal control compute; do
  if openstack flavor list | grep ${flavor}; then
    openstack flavor delete ${flavor}
  fi
  openstack flavor create --id auto --ram 4096 --disk 39 --vcpus 1 ${flavor}
done
openstack baremetal import --json instackenv.json
openstack baremetal configure boot
openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" baremetal
openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="control" control
openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="compute" compute

neutron subnet-update $(neutron subnet-list | grep -Ev "id|tenant|external|storage" | grep -v '\-\-' | awk '{print $2}') ${dns_server_ext}

12.  cat deploy_command
13.  Copy the output of step 12 and execute in terminal
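
For step 8 above, the broadcast address can be computed from the ctlplane CIDR 
rather than worked out by hand. A small convenience sketch using Python's 
stdlib ipaddress module (192.0.2.0/24 is just the example subnet from the 
steps; this is not part of Apex):

```python
import ipaddress

# Broadcast address of the ctlplane subnet, for use as "pm_addr"
ctlplane = ipaddress.ip_network("192.0.2.0/24")
broadcast = str(ctlplane.broadcast_address)
print(broadcast)  # 192.0.2.255
```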

Ping me on #opnfv-apex if something doesn't work.

Tim Rozet
Red Hat SDN Team

----- Original Message -----
From: "BRYAN L SULLIVAN" <[email protected]>
To: [email protected]
Sent: Saturday, January 21, 2017 7:41:17 PM
Subject: Re: [opnfv-tech-discuss] [Apex] Danube install help



Also, it’s clear that something has changed re the dependencies on jinja2, 
python3-ipmi, etc., per the error I got below when trying to install the RPMs: 

--> Processing Dependency: python3-ipmi for package: 
--> opnfv-apex-common-4.0-20170121.noarch

--> Processing Dependency: python3-jinja2 for package: 
--> opnfv-apex-common-4.0-20170121.noarch

--> Finished Dependency Resolution

Error: Package: opnfv-apex-common-4.0-20170121.noarch 
(/opnfv-apex-common-4.0-20170121.noarch) 

Requires: python3-jinja2 

Error: Package: opnfv-apex-common-4.0-20170121.noarch 
(/opnfv-apex-common-4.0-20170121.noarch) 

Requires: python3-ipmi 

You could try using --skip-broken to work around the problem 

You could try running: rpm -Va --nofiles --nodigest 



[bryan@opnfv apex]$ sudo yum install -y python-jinja2-2.8-5.el7.centos.src.rpm 
python-markupsafe-0.23-9.el7.centos.src.rpm python3-ipmi-0.3.0-1.noarch.rpm 

Loaded plugins: fastestmirror, langpacks 

Loading mirror speeds from cached hostfile 

* base: mirror.hostduplex.com 

* epel: mirrors.kernel.org 

* extras: mirror.scalabledns.com 

* updates: mirrors.kernel.org 

No package python-jinja2-2.8-5.el7.centos.src.rpm available. 

No package python-markupsafe-0.23-9.el7.centos.src.rpm available. 

No package python3-ipmi-0.3.0-1.noarch.rpm available. 

Error: Nothing to do 

[bryan@opnfv apex]$ 




Thanks, 

Bryan Sullivan | AT&T 





From: SULLIVAN, BRYAN L
Sent: Saturday, January 21, 2017 4:36 PM
To: [email protected]
Subject: RE: [Apex] Danube install help 




One other thing I forgot in the list added below. 




Thanks, 

Bryan Sullivan | AT&T 





From: SULLIVAN, BRYAN L
Sent: Saturday, January 21, 2017 4:33 PM
To: [email protected]
Subject: [Apex] Danube install help 




Hi Apex team, 



I want to get going on installing Apex in a virtual or non-HA/3-node (jumphost, 
control, compute) environment, for Copper, Models, VES. I need to know how the 
install process has changed for Danube, e.g. pre-reqs and command line options. 



Here is the process I used for Colorado virtual install. Is all this still 
correct? 



· sudo yum install -y epel-release 

· sudo yum install -y python34 

· sudo yum install -y https://www.rdoproject.org/repos/rdo-release.rpm 

· sudo yum install -y python-jinja2-2.8-5.el7.centos.src.rpm 
python-markupsafe-0.23-9.el7.centos.src.rpm python3-ipmi-0.3.0-1.noarch.rpm 

· Download latest Apex RPMs, e.g. (How do we know which build is stable enough 
to test with? Digging through Jenkins is not a great solution…) 

o wget http://artifacts.opnfv.org/apex/opnfv-apex-4.0-20170121.noarch.rpm 

o wget 
http://artifacts.opnfv.org/apex/opnfv-apex-common-4.0-20170121.noarch.rpm 

o wget 
http://artifacts.opnfv.org/apex/opnfv-apex-undercloud-4.0-20170121.noarch.rpm 

· sudo yum install -y opnfv-*.rpm 

· sudo opnfv-deploy -v -d /etc/opnfv-apex/os-nosdn-nofeature-noha.yaml -n 
/etc/opnfv-apex/network_settings.yaml 

· sudo opnfv-util undercloud 



And for the non-HA bare metal install with flat networking and PXE_WOL, what 
are the options I need to include for the deploy? 



If you’ve documented all this for Danube, a link to the docs/wiki would help. 



Thanks, 

Bryan Sullivan | AT&T 



_______________________________________________
opnfv-tech-discuss mailing list
[email protected]
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss
