------- Comment From y...@cn.ibm.com 2016-12-01 21:35 EDT-------
(In reply to comment #24)

Thank you for looking into this. Please see my comments below.

> As Ryan indicated in his previous comment, the reason that the instances are
> unable to launch is due to the libvirt error:
> libvirtError: internal error: process exited while connecting to monitor:
> 2016-11-23T08:18:06.762943Z qemu-system-s390x: -drive
> file=/dev/disk/by-path/ip-
> volume-f36c890c-1313-41de-b56d-991a2ece094c-lun-1,format=raw,if=none,
> id=drive-virtio-disk0,serial=f36c890c-1313-41de-b56d-991a2ece094c,cache=none,
> aio=native: The device is not writable: Bad file descriptor
> This indicates that the block device mapped by the file path
> /dev/disk/by-path ... has a bad file descriptor. Bad file descriptor
> suggests to me either that the device came and went away, the device is (for
> some reason) visible as read-only, or that the device hasn't yet been fully
> attached. This error message is produced by the qemu code and is only
> logged when checking that the block device's file descriptor is writable.

Right. As I mentioned in my previous reply, the volume was created and
attached to the instance, and was then detached when this error occurred. It
happened during instance boot-up, after the volume had been attached.
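To narrow down which of those cases applies (device gone, device read-only, or device not yet fully attached), something like the following could be run on the compute node around the time of the attach. This is just my own sketch; the by-path argument is a placeholder, so substitute the actual symlink from the libvirt error:

```shell
# check_dev: report why a block device path might fail qemu's
# writability check ("The device is not writable").
check_dev() {
    DEV="$1"
    if [ ! -e "$DEV" ]; then
        # The by-path symlink (or backing device) is gone entirely.
        echo "device missing (came and went away?)"
    elif [ "$(blockdev --getro "$DEV" 2>/dev/null)" = "1" ]; then
        # Kernel reports the block device itself as read-only.
        echo "device is read-only at the block layer"
    elif [ ! -w "$DEV" ]; then
        # Device node exists but this user cannot open it for writing.
        echo "device node not writable by this user"
    else
        echo "device looks writable"
    fi
}

# Example (placeholder path; use the real symlink from the error):
# check_dev /dev/disk/by-path/ip-...-lun-1
```

Running this in a tight loop during the attach window might show whether the device briefly disappears or flips to read-only.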

> I think it'd be useful to get a bit more data from the compute and cinder
> nodes. Can you collect a bunch of data regarding the system using a tool
> called sosreport (in the xenial archives)? It will collect various logs,
> metrics, and system configuration which is useful to perform diagnostics.
> Just make sure to run the sosreport tool with the -a option to ensure that
> all of the information is captured (more efficient to get it the first go
> 'round).
> Before collecting the sosreport, it probably makes sense to increase the
> debug logging for the nova and qemu services prior to collecting the data.
> Setting logging levels for both to debug would provide lots of useful
> information.
> Also note: the 2.2 GB partition isn't an issue AIUI. The volume is created
> by:
> 1. downloading the image from glance
> 2. expanding and writing the block-level content to the cinder volume (where
> content is expanded from 320 MB to 2.2 GB).
> 3. When cloud-init runs on startup of the VM, the code detects the
> underlying disk is bigger than what is currently seen and will attempt to
> expand the partition and filesystem to consume the full content of the disk.

Understood. I manually checked the volume after the deployment failed: the
2.2 GB partition contains a full Ubuntu Linux filesystem, so I don't know why
it could not be booted at that time.
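For what it's worth, the resize step in (3) above is driven by cloud-init's growpart/resizefs handling, which can be confirmed in the image's cloud-config. A minimal sketch of the relevant settings (values illustrative; stock Ubuntu cloud images already enable this by default):

```yaml
#cloud-config
# Grow the root partition to fill the (larger) underlying disk...
growpart:
  mode: auto
  devices: ['/']
# ...then resize the root filesystem to match the grown partition.
resize_rootfs: true
```

If the cloud-init log on a failed instance shows these modules never ran, that would point at the boot failing before cloud-init, consistent with the qemu error above.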

I have attached the sos file.
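In case it helps anyone reproducing the collection steps, this is roughly what I ran (paths and service names assume a stock xenial/Mitaka install; adjust to your deployment):

```shell
# Enable debug logging for nova-compute: in /etc/nova/nova.conf,
# under [DEFAULT], set:
#   debug = True
sudo systemctl restart nova-compute

# Enable verbose libvirt/qemu logging: in /etc/libvirt/libvirtd.conf, set:
#   log_level = 1
sudo systemctl restart libvirt-bin

# Reproduce the failure, then collect everything, including
# plugins that are disabled by default:
sudo sosreport -a
```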

  Ubuntu Openstack Mitaka can only deploy up to 52 instances