To give my update on status so far: I have deployed xenial-queens
using juju 2.3.8-bionic-amd64. juju status reports:
Model    Controller  Cloud/Region  Version  SLA
default  icarus      icarus        2.3.8    unsupported

App                    Version       Status  Scale  Charm                  Store       Rev  OS      Notes
ceph-mon               12.2.4        active      3  ceph-mon               jujucharms   24  ubuntu
ceph-osd               12.2.4        active      3  ceph-osd               jujucharms  261  ubuntu
ceph-radosgw           12.2.4        active      1  ceph-radosgw           jujucharms  257  ubuntu
cinder                 12.0.1        active      1  cinder                 jujucharms  271  ubuntu
cinder-ceph            12.0.1        active      1  cinder-ceph            jujucharms  232  ubuntu
glance                 16.0.1        active      1  glance                 jujucharms  264  ubuntu
keystone               13.0.0        active      1  keystone               jujucharms  278  ubuntu
mysql                  5.6.37-26.21  active      1  percona-cluster        jujucharms  263  ubuntu
neutron-api            12.0.1        active      1  neutron-api            jujucharms  259  ubuntu
neutron-gateway        12.0.1        active      1  neutron-gateway        jujucharms  248  ubuntu
neutron-openvswitch    12.0.1        active      3  neutron-openvswitch    jujucharms  249  ubuntu
nova-cloud-controller  17.0.3        active      1  nova-cloud-controller  jujucharms  309  ubuntu
nova-compute           17.0.3        active      3  nova-compute           jujucharms  282  ubuntu
rabbitmq-server        3.5.7         active      1  rabbitmq-server        jujucharms   73  ubuntu
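For reference, the deploy itself was the stock openstack-base bundle
(per the note further down about stable/openstack-base); the invocation
would have been along these lines (the repo and bundle path are my
assumption based on the openstack-bundles layout, not a transcript):

$ git clone https://github.com/openstack-charmers/openstack-bundles
$ juju deploy ./openstack-bundles/stable/openstack-base/bundle.yaml
$ juju status   # yields the table above once everything settles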
I created 2 instances: one with an attached volume and one without
(the creation commands are sketched after the listings below).
$ openstack server list
+--------------------------------------+--------------+--------+-----------------------------------+--------+----------+
| ID                                   | Name         | Status | Networks                          | Image  | Flavor   |
+--------------------------------------+--------------+--------+-----------------------------------+--------+----------+
| ed754278-ab06-4b22-b88c-505bd5ff0316 | xenial-test2 | ACTIVE | internal=10.5.5.13, 10.246.114.83 | xenial | m1.small |
| ccee393e-768f-463b-ae94-8193c4c54bba | xenial-test  | ACTIVE | internal=10.5.5.5, 10.246.114.88  | xenial | m1.small |
+--------------------------------------+--------------+--------+-----------------------------------+--------+----------+
$ openstack volume list
+--------------------------------------+----------+-----------+------+---------------------------------------+
| ID                                   | Name     | Status    | Size | Attached to                           |
+--------------------------------------+----------+-----------+------+---------------------------------------+
| b5fa9e73-56d3-4c76-a7dc-7d10115860d1 | testvol2 | in-use    |   10 | Attached to xenial-test2 on /dev/vdb  |
| 9832d124-11f7-44a4-9293-fe87a535f637 | testvol1 | available |   10 |                                       |
+--------------------------------------+----------+-----------+------+---------------------------------------+
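The instances and the volume attachment above were created with
commands along these lines (image, flavor, and network names are taken
from the listings above; the exact invocations are a sketch, not a
transcript):

$ openstack server create --image xenial --flavor m1.small --network internal xenial-test
$ openstack server create --image xenial --flavor m1.small --network internal xenial-test2
$ openstack volume create --size 10 testvol1
$ openstack volume create --size 10 testvol2
# attach only testvol2, leaving testvol1 unattached
$ openstack server add volume xenial-test2 testvol2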
I restarted both compute hosts on which these 2 instances reside with
'sudo reboot'. After the hosts came back online, both instances were in
the shut-off state, as seen below.
ubuntu@node-laveran:~$ sudo virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     instance-00000001              shut off

ubuntu@node-husband:~$ sudo virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     instance-00000002              shut off
Both instances were turned back on via the OpenStack CLI and powered on
successfully (commands sketched below). This was with default bundle
settings via stable/openstack-base.
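The restart was done with commands along these lines (server names from
the listing above; a sketch, not a transcript):

$ openstack server start xenial-test
$ openstack server start xenial-test2
$ openstack server list   # both return to ACTIVE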
I will sort through the bundle provided by the field and see if there are any
additional config options we can set to attempt to reproduce the problem.
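One knob worth checking during that pass (my assumption about
relevance, since it directly governs this behavior): nova's
resume_guests_state_on_host_boot option, which determines whether
guests are automatically restarted after a hypervisor reboot. A quick
check on a compute node, assuming the default package layout:

# Defaults to False; when unset or False, instances stay shut off after
# a hypervisor reboot, which matches the virsh output above.
$ grep -ri resume_guests_state_on_host_boot /etc/nova/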
** Attachment added: "Instance-start logs, Post Hypervisor Reboot"
https://bugs.launchpad.net/nova/+bug/1773449/+attachment/5146190/+files/logs.tar