------- Comment From [email protected] 2016-11-30 18:34 EDT-------
(In reply to comment #21)
> Thanks for the logs.  Nova conductor is reporting "NoValidHost: No valid
> host was found. There are not enough hosts available." when attempting to
> schedule and spawn an instance.  That is a somewhat generic failure message.
> But the specific failure seems to be a backing storage issue:
>
> 2016-11-23T08:17:48.103748Z qemu-system-s390x: -drive
> file=/dev/disk/by-path/ip-148.100.42.50:3260-iscsi-iqn.2010-10.org.openstack:
> volume-cfbc521f-4d9f-4ecc-8feb-777d1e5446e1-lun-1,format=raw,if=none,
> id=drive-virtio-disk0,serial=cfbc521f-4d9f-4ecc-8feb-777d1e5446e1,cache=none,
> aio=native: The device is not writable: Bad file descriptor
>
> ...
>
> 2016-11-23 03:17:54.654 6858 WARNING nova.scheduler.utils
> [req-ed481e89-154a-4002-b402-30002ed6a80b 7cce79da15be4daf9189541d1d5650be
> 63958815625d4108970e78bacf578e32 - - -] [instance:
> 89240ffa-7dcf-444d-8a8f-b751cd8b5e19] Setting instance to ERROR state.
> 2016-11-23 03:17:58.965 6833 ERROR nova.scheduler.utils
> [req-ed481e89-154a-4002-b402-30002ed6a80b 7cce79da15be4daf9189541d1d5650be
> 63958815625d4108970e78bacf578e32 - - -] [instance:
> 0cb365ea-550f-4720-a630-93821d50d43b] Error from last host: ub01 (node
> ub01.marist.edu): [u'Traceback (most recent call last):\n', u'  File
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1926, in
> _do_build_and_run_instance\n    filter_properties)\n', u'  File
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2116, in
> _build_and_run_instance\n    instance_uuid=instance.uuid,
> reason=six.text_type(e))\n', u'RescheduledException: Build of instance
> 0cb365ea-550f-4720-a630-93821d50d43b was re-scheduled: internal error:
> process exited while connecting to monitor: 2016-11-23T08:17:48.103748Z
> qemu-system-s390x: -drive
> file=/dev/disk/by-path/ip-148.100.42.50:3260-iscsi-iqn.2010-10.org.openstack:
> volume-cfbc521f-4d9f-4ecc-8feb-777d1e5446e1-lun-1,format=raw,if=none,
> id=drive-virtio-disk0,serial=cfbc521f-4d9f-4ecc-8feb-777d1e5446e1,cache=none,
> aio=native: The device is not writable: Bad file descriptor\n']
> 2016-11-23 03:17:59.011 6833 WARNING nova.scheduler.utils
> [req-ed481e89-154a-4002-b402-30002ed6a80b 7cce79da15be4daf9189541d1d5650be
> 63958815625d4108970e78bacf578e32 - - -] Failed to
> compute_task_build_instances: No valid host was found. There are not enough
> hosts available.
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 150, in inner
>     return func(*args, **kwargs)
>   File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 104, in select_destinations
>     dests = self.driver.select_destinations(ctxt, spec_obj)
>   File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line 74, in select_destinations
>     raise exception.NoValidHost(reason=reason)
> NoValidHost: No valid host was found. There are not enough hosts available.
>
> I wonder if that is caused by one of the quotas/limits in place.  One thing
> to check would be Cinder.  It has a default of 1000 GB for
> maxTotalVolumeGigabytes.
>
> $ cinder absolute-limits
> +--------------------------+-------+
> |           Name           | Value |
> +--------------------------+-------+
> | maxTotalBackupGigabytes  |  1000 |
> |     maxTotalBackups      |   10  |
> |    maxTotalSnapshots     |   10  |
> | maxTotalVolumeGigabytes  |  1000 |
> |     maxTotalVolumes      |   10  |
> | totalBackupGigabytesUsed |   0   |
> |     totalBackupsUsed     |   0   |
> |    totalGigabytesUsed    |   0   |
> |    totalSnapshotsUsed    |   0   |
> |     totalVolumesUsed     |   0   |
> +--------------------------+-------+
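
First, on the "not writable" error quoted above: one plausible cause is
the attached iSCSI LUN going read-only (or the session dropping)
underneath QEMU.  A quick spot-check on the compute node, assuming
open-iscsi and util-linux are installed, using the by-path device from
the log:

$ readlink -f /dev/disk/by-path/ip-148.100.42.50:3260-iscsi-iqn.2010-10.org.openstack:volume-cfbc521f-4d9f-4ecc-8feb-777d1e5446e1-lun-1
$ blockdev --getro /dev/sdX   # substitute the device readlink printed; 1 = read-only
$ iscsiadm -m session -P 3 | grep -i -A 10 cfbc521f   # session/LUN state for that volume

As for the quota suggestion: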

I already raised these quota limits to 10x the defaults at the very
beginning, so I don't think it's a quota issue.  And if a request
exceeded the quota, I would expect to see a related message somewhere in
the logs, right?
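
As far as I know, an over-quota rejection is raised by cinder-api at
volume-create time, before the scheduler is ever involved, so it would
surface in the API logs rather than as NoValidHost.  One way to check,
assuming the stock Ubuntu package log locations:

$ grep -i quota /var/log/cinder/cinder-api.log
$ grep -i quota /var/log/nova/nova-api.log

For reference, the current limits after my change: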

+--------------------------+-------+
|           Name           | Value |
+--------------------------+-------+
| maxTotalBackupGigabytes  |  1000 |
|     maxTotalBackups      |   10  |
|    maxTotalSnapshots     |   10  |
| maxTotalVolumeGigabytes  | 10000 |
|     maxTotalVolumes      |  100  |
| totalBackupGigabytesUsed |   0   |
|     totalBackupsUsed     |   0   |
|    totalGigabytesUsed    |  970  |
|    totalSnapshotsUsed    |   0   |
|     totalVolumesUsed     |   47  |
+--------------------------+-------+
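
With 47 of 100 volumes and 970 of 10000 GB in use there is headroom for
another 53 volumes, so these limits should not be the blocker.  For the
record, a change like mine can be applied with something along these
lines (tenant ID elided):

$ cinder quota-update --volumes 100 --gigabytes 10000 <tenant-id>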

https://bugs.launchpad.net/bugs/1644785

Title:
  Ubuntu Openstack Mitaka can only deploy up to 52 instances
