Re: [Bug 1644785] Comment bridged from LTC Bugzilla

2017-01-30 Thread Dimitri John Ledkov
On 26 January 2017 at 01:17, bugproxy  wrote:
> --- Comment From y...@cn.ibm.com 2017-01-25 20:08 EDT---
> (In reply to comment #49)
>> KVM for IBM z uses the following sysctl defaults:
>>
>> fs.inotify.max_user_watches = 32768
>> fs.aio-max-nr = 4194304
>>
>> Can you try with these values?
>
> Thank you for the suggestion!
>
> Having set the system with these values, I can deploy way more than 52
> instances. I tested up to 200+ instances. As expected, the deployment
> time increases linearly; I think it's related to the growing amount of
> asynchronous I/O in the KVM hypervisor.
>

Should these sysctl defaults be shipped in Ubuntu by default for s390x?

On my amd64 xenial desktop my max_user_watches is a lot higher (~524k
vs ~8k) than what I see on s390x zesty; fs.aio-max-nr is set to ~65k on
both. Why is the default so low, or (conversely) why is such a high
number required?
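
For testing, a minimal sketch of how such defaults could be shipped as
a sysctl drop-in (the file name below is my own choice, not something
we ship today):

# /etc/sysctl.d/90-s390x-kvm-host.conf (hypothetical file name)
# Values suggested by KVM for IBM z earlier in this thread.
fs.inotify.max_user_watches = 32768
fs.aio-max-nr = 4194304

Applied with 'sudo sysctl --system' (or on reboot), and verifiable with
'sysctl fs.inotify.max_user_watches fs.aio-max-nr'.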

-- 
Regards,

Dimitri.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1644785

Title:
  Ubuntu Openstack Mitaka can only deploy up to 52 instances

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-z-systems/+bug/1644785/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1644785] Comment bridged from LTC Bugzilla

2017-01-25 Thread bugproxy
--- Comment From y...@cn.ibm.com 2017-01-25 20:08 EDT---
(In reply to comment #49)
> KVM for IBM z uses the following sysctl defaults:
>
> fs.inotify.max_user_watches = 32768
> fs.aio-max-nr = 4194304
>
> Can you try with these values?

Thank you for the suggestion!

Having set the system with these values, I can deploy way more than 52
instances. I tested up to 200+ instances. As expected, the deployment
time increases linearly; I think it's related to the growing amount of
asynchronous I/O in the KVM hypervisor.



[Bug 1644785] Comment bridged from LTC Bugzilla

2017-01-17 Thread bugproxy
--- Comment From mihaj...@de.ibm.com 2017-01-17 06:46 EDT---
KVM for IBM z uses the following sysctl defaults:

fs.inotify.max_user_watches = 32768
fs.aio-max-nr = 4194304

Can you try with these values?
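
To try them at runtime without persisting anything (sysctl -w changes
do not survive a reboot), something like this should be enough before
re-running the deployment:

sudo sysctl -w fs.inotify.max_user_watches=32768
sudo sysctl -w fs.aio-max-nr=4194304
# verify:
sysctl fs.inotify.max_user_watches fs.aio-max-nr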



[Bug 1644785] Comment bridged from LTC Bugzilla

2017-01-16 Thread bugproxy
--- Comment From y...@cn.ibm.com 2017-01-16 14:38 EDT---
Thanks for looking into this!

Regarding the multipath error, I changed the multipath configuration to
put the Linux instance volumes in the multipath blacklist; after that I
no longer see the multipath error, and I was able to deploy a couple
more instances successfully in this environment. Unfortunately, when I
tried to deploy more, I got the same problem: the message shows the disk
is not writable during the deployment. And I noticed that the async I/O
count is still very high (see the numbers below).
fs.aio-max-nr = 65536
fs.aio-nr = 131072

Then I also increased aio-max-nr to a bigger number, and I can deploy
more instances successfully. But the deployment time is also getting
much longer!  I think it's related to the I/O waiting/notification time.
As you can see from the numbers below, aio-nr increased with just one
more instance deployed (from 131072 to 133120).
fs.aio-max-nr = 1048576
fs.aio-nr = 133120

So I'm thinking the problem may be tied to KVM/QEMU. Maybe QEMU can't
handle the async I/O properly on s390x? Or does something need to change
in the qemu/libvirt configuration?
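
A simple way to correlate instance launches with AIO usage is to watch
the counter while deploying; a sketch (the interval is arbitrary):

# current AIO context allocations vs. the system-wide ceiling
cat /proc/sys/fs/aio-nr /proc/sys/fs/aio-max-nr
# sample every 5 seconds while instances deploy
watch -n 5 cat /proc/sys/fs/aio-nr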



[Bug 1644785] Comment bridged from LTC Bugzilla

2016-12-15 Thread bugproxy
--- Comment From vmor...@us.ibm.com 2016-12-15 12:23 EDT---
An observation:

Having Cinder and nova-compute on the same host, and using LVM and
iSCSI backends, might be causing some race conditions in multipath.

The syslog is littered with messages like this:
Nov 23 03:17:47 ub01 multipathd[179286]: io_setup failed
Nov 23 03:17:47 ub01 multipathd[179286]: uevent trigger error
Nov 23 03:17:47 ub01 systemd[188085]: dev-disk-by\x2dlabel-cloudimg\x2drootfs.device: Dev dev-disk-by\x2dlabel-cloudimg\x2drootfs.device appeared twice with different sysfs paths /sys/devices/platform/host16/session13/target16:0:0/16:0:0:1/block/sdav/sdav1 and /sys/devices/platform/host64/session61/target64:0:0/64:0:0:1/block/sdco/sdco1
Nov 23 03:17:47 ub01 systemd[1]: dev-disk-by\x2dlabel-cloudimg\x2drootfs.device: Dev dev-disk-by\x2dlabel-cloudimg\x2drootfs.device appeared twice with different sysfs paths /sys/devices/platform/host8/session5/target8:0:0/8:0:0:1/block/sdam/sdam1 and /sys/devices/platform/host64/session61/target64:0:0/64:0:0:1/block/sdco/sdco1
Nov 23 03:17:47 ub01 systemd[188085]: dev-disk-by\x2duuid-74722b08\x2d9998\x2d44e7\x2d8898\x2d5d5e96f303af.device: Dev dev-disk-by\x2duuid-74722b08\x2d9998\x2d44e7\x2d8898\x2d5d5e96f303af.device appeared twice with different sysfs paths /sys/devices/platform/host16/session13/target16:0:0/16:0:0:1/block/sdav/sdav1 and /sys/devices/platform/host64/session61/target64:0:0/64:0:0:1/block/sdco/sdco1
Nov 23 03:17:47 ub01 systemd[1]: dev-disk-by\x2duuid-74722b08\x2d9998\x2d44e7\x2d8898\x2d5d5e96f303af.device: Dev dev-disk-by\x2duuid-74722b08\x2d9998\x2d44e7\x2d8898\x2d5d5e96f303af.device appeared twice with different sysfs paths /sys/devices/platform/host8/session5/target8:0:0/8:0:0:1/block/sdam/sdam1 and /sys/devices/platform/host64/session61/target64:0:0/64:0:0:1/block/sdco/sdco1

No idea if this is the reason you're seeing the problem or not.
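
If multipathing isn't actually wanted for these volumes, a sketch of a
blacklist entry for /etc/multipath.conf (the vendor/product strings
match the IET virtual disks seen in the syslog; adjust to your
environment):

blacklist {
    # ignore the iSCSI disks exported for instance volumes
    # (IET = iSCSI Enterprise Target, as seen in the kernel log)
    device {
        vendor  "IET"
        product "VIRTUAL-DISK"
    }
}

followed by 'sudo multipathd -k"reconfigure"' to reload the
configuration.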



[Bug 1644785] Comment bridged from LTC Bugzilla

2016-12-01 Thread bugproxy
--- Comment From y...@cn.ibm.com 2016-12-01 22:02 EDT---
Comment on attachment 114452
sosreport-part-aa

Please ignore this attachment as it is still too big.



[Bug 1644785] Comment bridged from LTC Bugzilla

2016-12-01 Thread bugproxy
--- Comment From y...@cn.ibm.com 2016-12-01 21:35 EDT---
(In reply to comment #24)

Thank you for looking into this. Please see my comments below.

> As Ryan indicated in his previous comment, the reason that the instances are
> unable to launch is the libvirt error:
>
> libvirtError: internal error: process exited while connecting to monitor:
> 2016-11-23T08:18:06.762943Z qemu-system-s390x: -drive file=/dev/disk/by-path/ip-148.100.42.50:3260-iscsi-iqn.2010-10.org.openstack:volume-f36c890c-1313-41de-b56d-991a2ece094c-lun-1,format=raw,if=none,id=drive-virtio-disk0,serial=f36c890c-1313-41de-b56d-991a2ece094c,cache=none,aio=native: The device is not writable: Bad file descriptor
>
> This indicates that the block device mapped by the file path
> /dev/disk/by-path ... has a bad file descriptor. Bad file descriptor
> suggests to me either that the device came and went away, the device is (for
> some reason) visible as read-only, or that the device hasn't yet been fully
> attached. This error message is provided by the qemu code and is only logged
> when checking to ensure the block device's file descriptor is writable.
>

Right. As I mentioned in my previous reply, the volume was created and
attached to the instance, and then detached when it hit this error. It
happened during instance boot-up, after the volume had been attached.

> I think it'd be useful to get a bit more data from the compute and cinder
> nodes. Can you collect a bunch of data regarding the system using a tool
> called sosreport (in the xenial archives)? It will collect various logs,
> metrics, and system configuration which is useful to perform diagnostics.
> Just make sure to run the sosreport tool with the -a option to ensure that
> all of the information is captured (more efficient to get it the first go
> 'round).
>
> Before collecting the sosreport, it probably makes sense to increase the
> debug logging for the nova and qemu services prior to collecting the data.
> Setting logging levels for both to debug would provide lots of useful
> information.
>
> Also note: the 2.2 GB partition isn't an issue AIUI. The volume is created
> by:
> 1. downloading the image from glance
> 2. expanding and writing the block-level content to the cinder volume (where
> content is expanded from 320 MB to 2.2 GB).
> 3. When cloud-init runs on startup of the VM, the code detects the
> underlying disk is bigger than what is currently seen and will attempt to
> expand the partition and filesystem to consume the full content of the disk.

Understood. I manually checked the volume after the deployment failed;
the 2.2 GB partition contains a full Ubuntu Linux file structure, so I
just don't know why it couldn't be booted at that time.

I have attached the sos file.
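
For reference, a sketch of the collection steps suggested above (the
nova.conf edit is the standard way to turn on debug logging; paths
assume a stock Mitaka install):

# 1. enable debug logging for nova in /etc/nova/nova.conf:
#      [DEFAULT]
#      debug = True
#    then restart the nova services and reproduce the failure
# 2. collect everything, including all optional plugins:
sudo sosreport -a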



[Bug 1644785] Comment bridged from LTC Bugzilla

2016-11-30 Thread bugproxy
--- Comment From y...@cn.ibm.com 2016-11-30 18:34 EDT---
(In reply to comment #21)
> Thanks for the logs.  Nova conductor is reporting "NoValidHost: No valid
> host was found. There are not enough hosts available." when attempting to
> schedule and spawn an instance.  That is a somewhat generic failure message.
> But the specific failure seems to be a backing storage issue:
>
> 2016-11-23T08:17:48.103748Z qemu-system-s390x: -drive file=/dev/disk/by-path/ip-148.100.42.50:3260-iscsi-iqn.2010-10.org.openstack:volume-cfbc521f-4d9f-4ecc-8feb-777d1e5446e1-lun-1,format=raw,if=none,id=drive-virtio-disk0,serial=cfbc521f-4d9f-4ecc-8feb-777d1e5446e1,cache=none,aio=native: The device is not writable: Bad file descriptor
>
> ...
>
> 2016-11-23 03:17:54.654 6858 WARNING nova.scheduler.utils [req-ed481e89-154a-4002-b402-30002ed6a80b 7cce79da15be4daf9189541d1d5650be 63958815625d4108970e78bacf578e32 - - -] [instance: 89240ffa-7dcf-444d-8a8f-b751cd8b5e19] Setting instance to ERROR state.
> 2016-11-23 03:17:58.965 6833 ERROR nova.scheduler.utils [req-ed481e89-154a-4002-b402-30002ed6a80b 7cce79da15be4daf9189541d1d5650be 63958815625d4108970e78bacf578e32 - - -] [instance: 0cb365ea-550f-4720-a630-93821d50d43b] Error from last host: ub01 (node ub01.marist.edu): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1926, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2116, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 0cb365ea-550f-4720-a630-93821d50d43b was re-scheduled: internal error: process exited while connecting to monitor: 2016-11-23T08:17:48.103748Z qemu-system-s390x: -drive file=/dev/disk/by-path/ip-148.100.42.50:3260-iscsi-iqn.2010-10.org.openstack:volume-cfbc521f-4d9f-4ecc-8feb-777d1e5446e1-lun-1,format=raw,if=none,id=drive-virtio-disk0,serial=cfbc521f-4d9f-4ecc-8feb-777d1e5446e1,cache=none,aio=native: The device is not writable: Bad file descriptor\n']
> 2016-11-23 03:17:59.011 6833 WARNING nova.scheduler.utils [req-ed481e89-154a-4002-b402-30002ed6a80b 7cce79da15be4daf9189541d1d5650be 63958815625d4108970e78bacf578e32 - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
> Traceback (most recent call last):
>
>   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 150, in inner
>     return func(*args, **kwargs)
>
>   File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 104, in select_destinations
>     dests = self.driver.select_destinations(ctxt, spec_obj)
>
>   File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line 74, in select_destinations
>     raise exception.NoValidHost(reason=reason)
>
> NoValidHost: No valid host was found. There are not enough hosts available.
>
> I wonder if that is caused by one of the quotas/limits in place.  One thing
> to check would be Cinder.  It has a default of 1000 GB for
> maxTotalVolumeGigabytes.
>
> $ cinder absolute-limits
> +--------------------------+-------+
> |           Name           | Value |
> +--------------------------+-------+
> | maxTotalBackupGigabytes  |  1000 |
> | maxTotalBackups          |   10  |
> | maxTotalSnapshots        |   10  |
> | maxTotalVolumeGigabytes  |  1000 |
> | maxTotalVolumes          |   10  |
> | totalBackupGigabytesUsed |   0   |
> | totalBackupsUsed         |   0   |
> | totalGigabytesUsed       |   0   |
> | totalSnapshotsUsed       |   0   |
> | totalVolumesUsed         |   0   |
> +--------------------------+-------+

Actually, I had already raised this quota limit to 10x the default at
the very beginning, so I don't think it's a quota issue. And if it
exceeded the quota, I'd expect to see a related message somewhere in the
logs, right?

+--------------------------+-------+
|           Name           | Value |
+--------------------------+-------+
| maxTotalBackupGigabytes  |  1000 |
| maxTotalBackups          |   10  |
| maxTotalSnapshots        |   10  |
| maxTotalVolumeGigabytes  | 10000 |
| maxTotalVolumes          |  100  |
| totalBackupGigabytesUsed |   0   |
| totalBackupsUsed         |   0   |
| totalGigabytesUsed       |  970  |
| totalSnapshotsUsed       |   0   |
| totalVolumesUsed         |   47  |
+--------------------------+-------+
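
For reference, the kind of command used to bump those limits is
python-cinderclient's quota-update, roughly (the tenant ID is a
placeholder):

cinder quota-update --volumes 100 --gigabytes 10000 <tenant-id>
# verify:
cinder absolute-limits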



[Bug 1644785] Comment bridged from LTC Bugzilla

2016-11-30 Thread bugproxy
--- Comment From y...@cn.ibm.com 2016-11-30 08:58 EDT---
Some info for your reference.

When the deployment failed, I noticed that the volume had been created
successfully and attached to the instance, but somehow the VM could not
be booted due to the disk issue, and the volume was then detached from
the instance. On that volume (20 GB), I saw that a 2.2 GB partition had
been created, which is not supposed to happen; it should be a 20 GB
partition. After the failure, I manually created the iSCSI target,
connected it to an existing Ubuntu VM, and could see the file contents
on this 2.2 GB partition.
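
For anyone wanting to repeat that manual check, the usual open-iscsi
steps look roughly like this (the portal is taken from the logs in this
bug; the volume UUID is a placeholder):

# discover targets on the cinder host
sudo iscsiadm -m discovery -t sendtargets -p 148.100.42.50:3260
# log in to the target for the failed volume
sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-<uuid> -p 148.100.42.50:3260 --login
# the LUN then shows up as a new /dev/sdX and can be inspected with lsblk/fdisk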



[Bug 1644785] Comment bridged from LTC Bugzilla

2016-11-30 Thread bugproxy
--- Comment From y...@cn.ibm.com 2016-11-30 08:49 EDT---
(In reply to comment #14)
> Before I can do much with this, I need more information.
>
> How were the applications/services deployed? Please provide a Juju bundle
> and Juju status output, sanitized.
>
> What are the specs of the machine(s)?
>
> Logs from the api and compute services should be indicative.  Are those
> available, or can they be analyzed?  If so, please post the relevant errors
> and tracebacks.
>
> Thank you.

I have uploaded the log files. Please check the volume id
b794c66f-6ed1-44ef-9223-5632969a1d5d. This is the 53rd instance, and its
deployment failed.



[Bug 1644785] Comment bridged from LTC Bugzilla

2016-11-29 Thread bugproxy
--- Comment From y...@cn.ibm.com 2016-11-29 20:16 EDT---
(In reply to comment #10)
> Can we please get juju status output from this deployment?  That, plus the
> bundle used to deploy, will be helpful in understanding which services are
> where, and how they are configured.  Thank you.
>
OpenStack was not installed via Juju; we installed the OpenStack
components manually.

> Can we please get juju status output from this deployment? That, plus the
> bundle used to deploy, will be helpful in understanding which applications
> are where, and how they are configured. Thank you.
>
> ... and please also the status about the disk utilization before and after
> you hit that situation?
> Thx

Below is the disk utilization before I hit the situation; cloudvg is the
LVM volume group for Cinder.
VG      #PV #LV #SN Attr   VSize   VFree
cloudvg  10  48   0 wz--n-  10.00t    8.83t
ub01-vg   2   2   0 wz--n- 199.52g 1020.00m

And below is the disk utilization after I hit the problem.
VG      #PV #LV #SN Attr   VSize   VFree
cloudvg  10  57   0 wz--n-  10.00t    8.65t
ub01-vg   2   2   0 wz--n- 199.52g 1020.00m



[Bug 1644785] Comment bridged from LTC Bugzilla

2016-11-28 Thread bugproxy
--- Comment From y...@cn.ibm.com 2016-11-28 09:37 EDT---
I found something new today. Just for your reference.

When the deployment fails, I can see the info below in the syslog; it
shows io_setup failing in multipathd.
Nov 27 22:24:31 ub01 kernel: [561221.664513] scsi host153: iSCSI Initiator over TCP/IP
Nov 27 22:24:32 ub01 kernel: [561222.677294] scsi 153:0:0:0: RAID     IET      Controller       0001 PQ: 0 ANSI: 5
Nov 27 22:24:32 ub01 kernel: [561222.677684] scsi 153:0:0:0: Attached scsi generic sg142 type 12
Nov 27 22:24:32 ub01 kernel: [561222.678596] scsi 153:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0001 PQ: 0 ANSI: 5
Nov 27 22:24:32 ub01 kernel: [561222.679067] sd 153:0:0:1: Attached scsi generic sg143 type 0
Nov 27 22:24:32 ub01 kernel: [561222.679105] sd 153:0:0:1: [sdcm] 41943040 512-byte logical blocks: (21.5 GB/20.0 GiB)
Nov 27 22:24:32 ub01 kernel: [561222.679305] sd 153:0:0:1: [sdcm] Write Protect is off
Nov 27 22:24:32 ub01 kernel: [561222.679308] sd 153:0:0:1: [sdcm] Mode Sense: 69 00 10 08
Nov 27 22:24:32 ub01 kernel: [561222.679393] sd 153:0:0:1: [sdcm] Write cache: enabled, read cache: enabled, supports DPO and FUA
Nov 27 22:24:32 ub01 kernel: [561222.680868]  sdcm: sdcm1
Nov 27 22:24:32 ub01 kernel: [561222.681803] sd 153:0:0:1: [sdcm] Attached SCSI disk
Nov 27 22:24:32 ub01 multipathd[179286]: sdcm: add path (uevent)
Nov 27 22:24:32 ub01 multipathd[179286]: io_setup failed
Nov 27 22:24:32 ub01 multipathd[179286]: uevent trigger error
Nov 27 22:24:32 ub01 systemd-udevd[251528]: Could not generate persistent MAC address for qbrb21abf90-f6: No such file or directory
Nov 27 22:24:33 ub01 systemd-udevd[251596]: Could not generate persistent MAC address for qvob21abf90-f6: No such file or directory
Nov 27 22:24:33 ub01 systemd-udevd[251597]: Could not generate persistent MAC address for qvbb21abf90-f6: No such file or directory
Nov 27 22:24:33 ub01 iscsid: Connection150:0 to [target: iqn.2010-10.org.openstack:volume-54921f21-d2e1-49d6-a39c-4c21c458750b, portal: 148.100.42.50,3260] through [iface: default] is operational now
Nov 27 22:24:33 ub01 kernel: [561223.395321] IPv6: ADDRCONF(NETDEV_UP): qvbb21abf90-f6: link is not ready
Nov 27 22:24:33 ub01 kernel: [561223.465740] device qvbb21abf90-f6 entered promiscuous mode
Nov 27 22:24:33 ub01 kernel: [561223.613720] IPv6: ADDRCONF(NETDEV_CHANGE): qvbb21abf90-f6: link becomes ready
Nov 27 22:24:33 ub01 kernel: [561223.687587] device qvob21abf90-f6 entered promiscuous mode
Nov 27 22:24:33 ub01 kernel: [561223.919008] qbrb21abf90-f6: port 1(qvbb21abf90-f6) entered forwarding state
Nov 27 22:24:33 ub01 kernel: [561223.919014] qbrb21abf90-f6: port 1(qvbb21abf90-f6) entered forwarding state
Nov 27 22:24:33 ub01 ovs-vsctl: ovs|1|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=120 -- --if-exists del-port qvob21abf90-f6 -- add-port br-int qvob21abf90-f6 -- set Interface qvob21abf90-f6 external-ids:iface-id=b21abf90-f649-4501-ae8f-f0fffa44090b external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:5d:f3:1d external-ids:vm-uuid=fc797a01-9f8d-4bbd-bd93-c14eb14b880b

And in the multipath -ll output, I see the messages below.

Nov 28 09:20:41 | io_setup failed
Nov 28 09:20:41 | io_setup failed
Nov 28 09:20:41 | io_setup failed
Nov 28 09:20:41 | io_setup failed
Nov 28 09:20:41 | io_setup failed
Nov 28 09:20:41 | io_setup failed
.

mpathr (36005076306ffd700010d) dm-7 IBM,2107900
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 3:0:0:1074610177 sdd  8:48   active ready  running
`- 0:0:1:1074610177 sdab 65:176 active ready  running
.
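
multipathd's "io_setup failed" usually means io_setup(2) returned
EAGAIN because the system-wide AIO context limit is exhausted; assuming
that is what is happening here, a quick check is:

# if aio-nr is at or near aio-max-nr, further io_setup() calls fail
cat /proc/sys/fs/aio-nr /proc/sys/fs/aio-max-nr
# number of paths multipathd is watching (the directio path checker
# allocates an AIO context per path)
sudo multipathd -k'show paths' | wc -l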
