[Group.of.nepali.translators] [Bug 1662324] Re: linux bridge agent disables ipv6 before adding an ipv6 address

2020-05-26 Thread Edward Hope-Morley
** Also affects: cloud-archive/mitaka
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1662324

Title:
  linux bridge agent disables ipv6 before adding an ipv6 address

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  New
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  In Progress

Bug description:
  [Impact]
  When using linuxbridge, after creating a network and an interface on
ext-net, disable_ipv6 is set to 1, so the linuxbridge-agent doesn't add the
ipv6 address properly to the newly created bridge.

  [Test Case]

  1. deploy a basic mitaka env
  2. create an external network (ext-net)
  3. create an ipv6 network and an interface on ext-net
  4. check whether the related bridge has an ipv6 address:
  - originally there is no ipv6 address,
  or
  - cat /proc/sys/net/ipv6/conf/[BRIDGE]/disable_ipv6 (it reads 1)

  After this commit, I was able to see the ipv6 address properly.
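
  For reference, the step-4 sysctl check can be scripted; this is only a
  sketch (the bridge name, i.e. the [BRIDGE] above, is passed as an argument),
  not part of the fix:

    #!/usr/bin/env python3
    # Sketch: report whether IPv6 is disabled on a given bridge.
    import sys

    def ipv6_disabled(bridge):
        # /proc/sys/net/ipv6/conf/<bridge>/disable_ipv6 reads "1" when IPv6 is off
        with open("/proc/sys/net/ipv6/conf/%s/disable_ipv6" % bridge) as f:
            return f.read().strip() == "1"

    if __name__ == "__main__":
        bridge = sys.argv[1]
        state = "disabled" if ipv6_disabled(bridge) else "enabled"
        print("%s: IPv6 %s" % (bridge, state))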

  [Regression]
  You need to restart neutron-linuxbridge-agent, so a short period of
downtime may be required.

  [Others]

  -- original description --

  Summary:
  
  I have a dual-stack NIC with only an IPv6 SLAAC and link local address 
plumbed. This is the designated provider network nic. When I create a network 
and then a subnet, the linux bridge agent first disables IPv6 on the bridge and 
then tries to add the IPv6 address from the NIC to the bridge. Since IPv6 was 
disabled on the bridge, this fails with 'RTNETLINK answers: Permission denied'. 
My intent was to create an IPv4 subnet over this interface with floating IPv4 
addresses for assignment to VMs via this command:
    openstack subnet create --network provider \
  --allocation-pool start=10.54.204.200,end=10.54.204.217 \
  --dns-nameserver 69.252.80.80 --dns-nameserver 69.252.81.81 \
  --gateway 10.54.204.129 --subnet-range 10.54.204.128/25 provider

  I don't know why the agent is disabling IPv6 (I wish it wouldn't),
  that's probably the problem. However, if the agent knows to disable
  IPv6 it should also know not to try to add an IPv6 address.

  Details:
  
  Version: Newton on CentOS 7.3 minimal (CentOS-7-x86_64-Minimal-1611.iso) as 
per these instructions: http://docs.openstack.org/newton/install-guide-rdo/

  Seemingly relevant section of /var/log/neutron/linuxbridge-agent.log:
  2017-02-06 15:09:20.863 1551 INFO 
neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Skipping ARP spoofing 
rules for port 'tap3679987e-ce' because it has port security disabled
  2017-02-06 15:09:20.863 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command: ['ip', 
'-o', 'link', 'show', 'tap3679987e-ce'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:89
  2017-02-06 15:09:20.870 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Exit code: 0 execute 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:146
  2017-02-06 15:09:20.871 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command: ['ip', 
'addr', 'show', 'eno1', 'scope', 'global'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:89
  2017-02-06 15:09:20.878 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Exit code: 0 execute 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:146
  2017-02-06 15:09:20.879 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command: ['ip', 
'route', 'list', 'dev', 'eno1', 'scope', 'global'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:89
  2017-02-06 15:09:20.885 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Exit code: 0 execute 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:146
  2017-02-06 15:09:20.886 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command (rootwrap 
daemon): ['ip', 'link', 'set', 'brqe1623c94-1f', 'up'] execute_rootwrap_daemon 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:105
  2017-02-06 15:09:20.895 1551 DEBUG 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Starting bridge 
brqe1623c94-1f for subinterface eno1 ensure_bridge 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:367
  2017-02-06 15:09:20.895 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command (rootwrap 
daemon): ['brctl', 'addbr', 

[Group.of.nepali.translators] [Bug 1723030] Re: Under certain conditions check_rules is very sluggish

2020-01-13 Thread Edward Hope-Morley
** No longer affects: cloud-archive/mitaka

** No longer affects: python-oslo.policy (Ubuntu Xenial)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1723030

Title:
  Under certain conditions check_rules is very sluggish

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Triaged
Status in Ubuntu Cloud Archive queens series:
  Triaged
Status in oslo.policy:
  Fix Released
Status in python-oslo.policy package in Ubuntu:
  Fix Released
Status in python-oslo.policy source package in Bionic:
  Triaged

Bug description:
  In Horizon we observe that in certain projects the check_rules() function
  takes up to 10 seconds, while in others it takes at most 2 seconds.

  In order to remedy this, we would like to have the check_rules function
  executed only if oslo.policy is configured to check the syntax, i.e.
  to introduce a config parameter defaulting to True but with the
  possibility to disable it. In that case (the operator setting it to False)
  there would be no checks from the oslo.policy side that the syntax of the
  provided JSON file is correct.

  Current behavior shouldn't be changed, i.e. syntax checks should be
  opt-out.
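
  For illustration only, the proposed opt-out could take a shape like the
  following sketch (the flag name is hypothetical; check_rules() is the
  existing oslo.policy entry point this bug is about):

    from oslo_config import cfg
    from oslo_policy import policy

    def maybe_check_rules(enforcer, check_syntax=True):
        # 'check_syntax' stands in for the proposed config option: the default
        # True keeps current behaviour, False skips the costly validation.
        if check_syntax:
            enforcer.check_rules()

    enforcer = policy.Enforcer(cfg.CONF)
    maybe_check_rules(enforcer, check_syntax=True)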

  ==

  SRU for UCA Queens

  [Impact]
  In Horizon, the Admin->Network->Networks page takes a long time to load. One
of the improvements is to avoid redundant policy checks, which consume time.

  This fix optimizes the policy logic to avoid redundant policy checks,
  further improving API response times.

  [Test Case]
  Create 240 networks and observe page load time before and after fix

  1. Collect information before fix

  1a. Increase network and subnet quota for the admin project
  openstack quota set --networks -1 
  openstack quota set --subnets -1 

  1b. Create 240 networks using openstack cli
  for i in {1..240}; do openstack network create test$i; openstack subnet 
create --subnet-range 10.1.$i.0/24 --network test$i test$i; done

  1c. Login to dashboard as admin user in a preferred browser
  1d. Open the browser's developer tools --> Network tab. This shows all the
outgoing network traffic from the browser.
  1e. Open Admin->Network->Networks page
  1f. Note down the time taken for /admin/networks in Network tab of browser
  1g. Repeat 1c 3-5 times and take average time taken for /admin/networks

  2. Install the package with the fixed code and restart apache service

  3. Collect information after fix

  3a. Repeat steps 1c-1g

  3b. You should observe a reduction in the load time of /admin/networks
  after the fix.

  [Regression Potential]

  Given the following indicators, the regression potential is negligible:

  a. Upstream CI passed with all tempest test cases passing, indicating no
break in functionality.
  b. The fix has been available in the Rocky, Stein and Train releases for
about a year, and no problems have been reported with this functionality.

  There will be a few milliseconds of downtime while the apache services
  restart.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1723030/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1633120] Re: [SRU] Nova scheduler tries to assign an already-in-use SRIOV QAT VF to a new instance

2019-08-01 Thread Edward Hope-Morley
Mitaka not backportable so abandoning:

$ git-deps -e mitaka-eol 5c5a6b93a07b0b58f513396254049c17e2883894^!
c2c3b97259258eec3c98feabde3b411b519eae6e

$ git-deps -e mitaka-eol c2c3b97259258eec3c98feabde3b411b519eae6e^!
a023c32c70b5ddbae122636c26ed32e5dcba66b2
74fbff88639891269f6a0752e70b78340cf87e9a
e83842b80b73c451f78a4bb9e7bd5dfcebdefcab
1f259e2a9423a4777f79ca561d5e6a74747a5019
b01187eede3881f72addd997c8fd763ddbc137fc
49d9433c62d74f6ebdcf0832e3a03e544b1d6c83


** Changed in: cloud-archive/mitaka
   Status: Triaged => Won't Fix

** Changed in: nova (Ubuntu Xenial)
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1633120

Title:
  [SRU] Nova scheduler tries to assign an already-in-use SRIOV QAT VF to
  a new instance

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Won't Fix
Status in Ubuntu Cloud Archive ocata series:
  Fix Committed
Status in Ubuntu Cloud Archive queens series:
  Fix Released
Status in Ubuntu Cloud Archive rocky series:
  Fix Released
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  Fix Committed
Status in OpenStack Compute (nova) pike series:
  Fix Committed
Status in OpenStack Compute (nova) queens series:
  Fix Committed
Status in OpenStack Compute (nova) rocky series:
  Fix Committed
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  Won't Fix
Status in nova source package in Bionic:
  Fix Released
Status in nova source package in Cosmic:
  Fix Released
Status in nova source package in Disco:
  Fix Released
Status in nova source package in Eoan:
  Fix Released

Bug description:
  [Impact]
  This patch is required to prevent nova from accidentally marking pci_device 
allocations as deleted when it incorrectly reads the passthrough whitelist 

  [Test Case]
  * deploy openstack (any version that supports sriov)
  * a single compute node configured for sriov with at least one device in
pci_passthrough_whitelist
  * create a vm and attach sriov port
  * remove device from pci_passthrough_whitelist and restart nova-compute
  * check that pci_devices allocations have not been marked as deleted (a
query sketch follows below)
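
  A rough query sketch for that last check (DB credentials are placeholders;
  interpret the result in context, since rows soft-deleted for other reasons
  may legitimately exist):

    import pymysql

    # Inspect nova's pci_devices table directly; the whitelisted device's row
    # flipping to deleted != 0 right after the restart would indicate the
    # regression this fix prevents.
    conn = pymysql.connect(host="localhost", user="nova", password="secret", db="nova")
    with conn.cursor() as cur:
        cur.execute("SELECT address, instance_uuid, status, deleted "
                    "FROM pci_devices WHERE deleted != 0")
        for address, instance_uuid, status, deleted in cur.fetchall():
            print("soft-deleted:", address, instance_uuid, status)
    conn.close()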

  [Regression Potential]
  None anticipated
  
  Upon trying to create a VM instance (say A) with one QAT VF, it fails with
  the following error: "Requested operation is not valid: PCI device
  0000:88:04.7 is in use by driver QEMU, domain instance-00000081". Please
  note that PCI device 0000:88:04.7 is already assigned to another VM (say B).
  We have installed the openstack-mitaka release on a CentOS 7 system. It has
  two Intel QAT devices. There are 32 VF devices available per QAT/DH895xCC
  device. Out of 64 VFs, only 8 VFs are allocated (to VM instances) and the
  rest should be available.
  But the nova scheduler tries to assign an already-in-use SRIOV VF to a new
  instance and the instance fails. It appears that the nova database is not
  tracking which VFs have already been taken. But if I shut down the VM B
  instance, then the other instance, VM A, boots up and vice versa. Note that
  both VM instances cannot run simultaneously because of the aforesaid issue.

  We should always be able to create as many instances with the
  requested PCI devices as there are available VFs.

  Please feel free to let me know if additional information is needed.
  Can anyone please suggest why it tries to assign the same PCI device which
  has already been assigned? Is there any way to resolve this issue?
  Thank you in advance for your support and help.

  [root@localhost ~(keystone_admin)]# lspci -d:435
  83:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
  88:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
  [root@localhost ~(keystone_admin)]#

  [root@localhost ~(keystone_admin)]# lspci -d:443 | grep "QAT Virtual 
Function" | wc -l
  64
  [root@localhost ~(keystone_admin)]#

  [root@localhost ~(keystone_admin)]# mysql -u root nova -e "SELECT
  hypervisor_hostname, address, instance_uuid, status FROM pci_devices JOIN
  compute_nodes ON compute_nodes.id=compute_node_id" | grep 0000:88:04.7
  localhost                   0000:88:04.7  e10a76f3-e58e-4071-a4dd-7a545e8000de  allocated
  localhost                   0000:88:04.7  c3dbac90-198d-4150-ba0f-a80b912d8021  allocated
  localhost                   0000:88:04.7  c7f6adad-83f0-4881-b68f-6d154d565ce3  allocated
  localhost.nfv.benunets.com  0000:88:04.7  0c3c11a5-f9a4-4f0d-b120-40e4dde843d4  allocated
  [root@localhost ~(keystone_admin)]#

  [root@localhost ~(keystone_admin)]# grep -r
  e10a76f3-e58e-4071-a4dd-7a545e8000de /etc/libvirt/qemu
  /etc/libvirt/qemu/instance-00000081.xml:  e10a76f3-e58e-4071-a4dd-7a545e8000de

[Group.of.nepali.translators] [Bug 1744079] Re: [SRU] disk over-commit still not correctly calculated during live migration

2019-05-23 Thread Edward Hope-Morley
** Changed in: cloud-archive
   Status: Fix Committed => Fix Released

** Tags removed: sts-sru-needed
** Tags added: sts-sru-done

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1744079

Title:
  [SRU] disk over-commit still not correctly calculated during live
  migration

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in Ubuntu Cloud Archive pike series:
  Fix Released
Status in Ubuntu Cloud Archive queens series:
  Fix Released
Status in Ubuntu Cloud Archive rocky series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  In Progress
Status in OpenStack Compute (nova) rocky series:
  In Progress
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  Fix Released
Status in nova source package in Bionic:
  Fix Released
Status in nova source package in Cosmic:
  Fix Released
Status in nova source package in Disco:
  Fix Released

Bug description:
  [Impact]
  nova compares disk space using the disk_available_least field, which can be
  negative due to overcommit.

  So the migration may fail with a "Migration pre-check error:
  Unable to migrate dfcd087a-5dff-439d-8875-2f702f081539: Disk of
  instance is too large(available on destination host:-3221225472 <
  need:22806528)" when trying a migration to another compute node that has
  plenty of free space on its disk.

  [Test Case]
  Deploy an openstack environment. Make sure there is a negative
disk_available_least and an adequate free_disk_gb on one test compute node, then
migrate a VM to it with disk overcommit (openstack server migrate --live
 --block-migration --disk-overcommit ). You will
see the above migration pre-check error.

  This is the formula to compute disk_available_least and free_disk_gb.

  disk_free_gb = disk_info_dict['free']
  disk_over_committed = self._get_disk_over_committed_size_total()
  available_least = disk_free_gb * units.Gi - disk_over_committed
  data['disk_available_least'] = available_least / units.Gi
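
  As a worked illustration of that formula (numbers invented for the sketch),
  overcommitted allocations larger than the free space drive the value
  negative:

    # 50 GiB free on disk, but 80 GiB already promised to existing disks
    Gi = 1024 ** 3
    disk_free_gb = 50
    disk_over_committed = 80 * Gi
    available_least = disk_free_gb * Gi - disk_over_committed
    print(available_least // Gi)   # -> -30, i.e. disk_available_least < 0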

  The following command can be used to query the value of
  disk_available_least

  nova hypervisor-show  |grep disk

  Steps to Reproduce:
  1. set disk_allocation_ratio config option > 1.0 
  2. qemu-img resize cirros-0.3.0-x86_64-disk.img +40G
  3. glance image-create --disk-format qcow2 ...
  4. boot VMs based on resized image
  5. we see disk_available_least becomes negative

  [Regression Potential]
  Minimal - we're just changing from the following line:

  disk_available_gb = dst_compute_info['disk_available_least']

  to the following code:

  if disk_over_commit:
      disk_available_gb = dst_compute_info['free_disk_gb']
  else:
      disk_available_gb = dst_compute_info['disk_available_least']

  When overcommit is enabled, disk_available_least can be negative, so we
  should use free_disk_gb instead by backporting the following two fixes.

  
https://git.openstack.org/cgit/openstack/nova/commit/?id=e097c001c8e0efe8879da57264fcb7bdfdf2
  
https://git.openstack.org/cgit/openstack/nova/commit/?id=e2cc275063658b23ed88824100919a6dfccb760d

  This is the code path for check_can_live_migrate_destination:

  _migrate_live(os-migrateLive API, migrate_server.py) -> migrate_server
  -> _live_migrate -> _build_live_migrate_task ->
  _call_livem_checks_on_host -> check_can_live_migrate_destination

  BTW, redhat also has the same bug -
  https://bugzilla.redhat.com/show_bug.cgi?id=1477706

  
  [Original Bug Report]
  Change I8a705114d47384fcd00955d4a4f204072fed57c2 (written by me... sigh) 
addressed a bug which prevented live migration to a target host with 
overcommitted disk when made with microversion <2.25. It achieved this, but the 
fix is still not correct. We now do:

  if disk_over_commit:
      disk_available_gb = dst_compute_info['local_gb']

  Unfortunately local_gb is *total* disk, not available disk. We
  actually want free_disk_gb. Fun fact: due to the way we calculate this
  for filesystems, without taking into account reserved space, this can
  also be negative.

  The test we're currently running is: could we fit this guest's
  allocated disks on the target if the target disk was empty. This is at
  least better than it was before, as we don't spuriously fail early. In
  fact, we're effectively disabling a test which is disabled for
  microversion >=2.25 anyway. IOW we should fix it, but it's probably
  not a high priority.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1744079/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : 

[Group.of.nepali.translators] [Bug 1744079] Re: [SRU] disk over-commit still not correctly calculated during live migration

2019-04-16 Thread Edward Hope-Morley
** Also affects: cloud-archive/mitaka
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1744079

Title:
  [SRU] disk over-commit still not correctly calculated during live
  migration

Status in Ubuntu Cloud Archive:
  Fix Committed
Status in Ubuntu Cloud Archive mitaka series:
  New
Status in Ubuntu Cloud Archive ocata series:
  Fix Committed
Status in Ubuntu Cloud Archive pike series:
  Fix Committed
Status in Ubuntu Cloud Archive queens series:
  Fix Released
Status in Ubuntu Cloud Archive rocky series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  In Progress
Status in OpenStack Compute (nova) rocky series:
  In Progress
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  Triaged
Status in nova source package in Bionic:
  Fix Released
Status in nova source package in Cosmic:
  Fix Released
Status in nova source package in Disco:
  Fix Released

Bug description:
  [Impact]
  nova compares disk space using the disk_available_least field, which can be
  negative due to overcommit.

  So the migration may fail with a "Migration pre-check error:
  Unable to migrate dfcd087a-5dff-439d-8875-2f702f081539: Disk of
  instance is too large(available on destination host:-3221225472 <
  need:22806528)" when trying a migration to another compute node that has
  plenty of free space on its disk.

  [Test Case]
  Deploy an openstack environment. Make sure there is a negative
disk_available_least and an adequate free_disk_gb on one test compute node, then
migrate a VM to it with disk overcommit (openstack server migrate --live
 --block-migration --disk-overcommit ). You will
see the above migration pre-check error.

  This is the formula to compute disk_available_least and free_disk_gb.

  disk_free_gb = disk_info_dict['free']
  disk_over_committed = self._get_disk_over_committed_size_total()
  available_least = disk_free_gb * units.Gi - disk_over_committed
  data['disk_available_least'] = available_least / units.Gi

  The following command can be used to query the value of
  disk_available_least

  nova hypervisor-show  |grep disk

  Steps to Reproduce:
  1. set disk_allocation_ratio config option > 1.0 
  2. qemu-img resize cirros-0.3.0-x86_64-disk.img +40G
  3. glance image-create --disk-format qcow2 ...
  4. boot VMs based on resized image
  5. we see disk_available_least becomes negative

  [Regression Potential]
  Minimal - we're just changing from the following line:

  disk_available_gb = dst_compute_info['disk_available_least']

  to the following code:

  if disk_over_commit:
      disk_available_gb = dst_compute_info['free_disk_gb']
  else:
      disk_available_gb = dst_compute_info['disk_available_least']

  When overcommit is enabled, disk_available_least can be negative, so we
  should use free_disk_gb instead by backporting the following two fixes.

  
https://git.openstack.org/cgit/openstack/nova/commit/?id=e097c001c8e0efe8879da57264fcb7bdfdf2
  
https://git.openstack.org/cgit/openstack/nova/commit/?id=e2cc275063658b23ed88824100919a6dfccb760d

  This is the code path for check_can_live_migrate_destination:

  _migrate_live(os-migrateLive API, migrate_server.py) -> migrate_server
  -> _live_migrate -> _build_live_migrate_task ->
  _call_livem_checks_on_host -> check_can_live_migrate_destination

  BTW, redhat also has the same bug -
  https://bugzilla.redhat.com/show_bug.cgi?id=1477706

  
  [Original Bug Report]
  Change I8a705114d47384fcd00955d4a4f204072fed57c2 (written by me... sigh) 
addressed a bug which prevented live migration to a target host with 
overcommitted disk when made with microversion <2.25. It achieved this, but the 
fix is still not correct. We now do:

  if disk_over_commit:
      disk_available_gb = dst_compute_info['local_gb']

  Unfortunately local_gb is *total* disk, not available disk. We
  actually want free_disk_gb. Fun fact: due to the way we calculate this
  for filesystems, without taking into account reserved space, this can
  also be negative.

  The test we're currently running is: could we fit this guest's
  allocated disks on the target if the target disk was empty. This is at
  least better than it was before, as we don't spuriously fail early. In
  fact, we're effectively disabling a test which is disabled for
  microversion >=2.25 anyway. IOW we should fix it, but it's probably
  not a high priority.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1744079/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : 

[Group.of.nepali.translators] [Bug 1818880] Re: Deadlock when detaching network interface

2019-04-01 Thread Edward Hope-Morley
** Also affects: cloud-archive/mitaka
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1818880

Title:
  Deadlock when detaching network interface

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  New
Status in Ubuntu Cloud Archive ocata series:
  Triaged
Status in Ubuntu Cloud Archive pike series:
  Triaged
Status in Ubuntu Cloud Archive queens series:
  Fix Released
Status in Ubuntu Cloud Archive rocky series:
  Won't Fix
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Xenial:
  Fix Released
Status in qemu source package in Bionic:
  Fix Released
Status in qemu source package in Cosmic:
  Fix Released
Status in qemu source package in Disco:
  Fix Released

Bug description:
  [Impact]
  Qemu guests hang indefinitely

  [Description]
  When running a Qemu guest with VirtIO network interfaces, detaching an
  interface that's currently being used can result in a deadlock. The guest
  instance will hang and become unresponsive to commands, and the only option
  is to kill -9 the instance.
  The reason for this is a deadlock between a monitor thread and an RCU
  thread, which fight over the BQL (qemu_global_mutex) and the critical RCU
  section locks. The monitor thread acquires the BQL to detach the network
  interface and fires up a helper thread to deal with detaching the network
  adapter. That new thread needs to wait on the RCU thread to complete the
  deletion, but the RCU thread wants the BQL to commit its transactions.
  This bug is already fixed upstream (73c6e4013b4c rcu: completely disable
  pthread_atfork callbacks as soon as possible) and included in other series
  (see below), so we don't need to backport it to Bionic onwards.
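
  As a generic illustration of that lock-ordering pattern (plain Python
  threads, not qemu code; timeouts are added only so the sketch terminates):

    import threading, time

    bql = threading.Lock()   # stands in for qemu_global_mutex (the BQL)
    rcu = threading.Lock()   # stands in for the RCU critical-section lock

    def monitor():
        with bql:                          # monitor holds the BQL to detach
            time.sleep(0.1)
            got = rcu.acquire(timeout=2)   # ...and waits on RCU-side cleanup
            print("monitor got rcu lock:", got)
            if got:
                rcu.release()

    def rcu_side():
        with rcu:                          # RCU thread is in its critical section
            time.sleep(0.1)
            got = bql.acquire(timeout=2)   # ...but needs the BQL to commit
            print("rcu thread got bql:", got)
            if got:
                bql.release()

    threads = [threading.Thread(target=monitor), threading.Thread(target=rcu_side)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()   # both acquires time out: the classic deadlock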

  Upstream commit:
  https://git.qemu.org/?p=qemu.git;a=commit;h=73c6e4013b4c

  $ git describe --contains 73c6e4013b4c
  v2.10.0-rc2~1^2~8

  $ rmadison qemu
  ===> qemu | 1:2.5+dfsg-5ubuntu10.34 | xenial-updates/universe   | amd64, ...
   qemu | 1:2.11+dfsg-1ubuntu7| bionic/universe   | amd64, ...
   qemu | 1:2.12+dfsg-3ubuntu8| cosmic/universe   | amd64, ...
   qemu | 1:3.1+dfsg-2ubuntu2 | disco/universe| amd64, ...

  [Test Case]
  Being a race condition, this is a tricky bug to reproduce consistently.
We've had reports of users running into this with OpenStack deployments and 
Windows Server guests, and the scenario is usually like this:
  1) Deploy a 16vCPU Windows Server 2012 R2 guest with a virtio network 
interface
  2) Stress the network interface with e.g. Windows HLK test suite or similar
  3) Repeatedly attach/detach the network adapter that's in use
  It usually takes more than ~4000 attach/detach cycles to trigger the bug.

  [Regression Potential]
  Regressions might arise from the fact that the fix changes RCU lock code.
  Since this patch has been upstream and in other series for a while, it is
  unlikely to cause regressions in RCU code specifically. Other code that
makes use of the RCU locks (MMIO and some monitor events) will be thoroughly 
tested for any regressions with use-case scenarios and scripted runs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1818880/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1818880] Re: Deadlock when detaching network interface

2019-03-25 Thread Edward Hope-Morley
** Also affects: cloud-archive/stein
   Importance: Undecided
   Status: Confirmed

** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/rocky
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1818880

Title:
  Deadlock when detaching network interface

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive ocata series:
  Triaged
Status in Ubuntu Cloud Archive pike series:
  Triaged
Status in Ubuntu Cloud Archive queens series:
  Triaged
Status in Ubuntu Cloud Archive rocky series:
  Triaged
Status in Ubuntu Cloud Archive stein series:
  Triaged
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Xenial:
  Fix Released
Status in qemu source package in Bionic:
  Fix Released
Status in qemu source package in Cosmic:
  Fix Released
Status in qemu source package in Disco:
  Fix Released

Bug description:
  [Impact]
  Qemu guests hang indefinitely

  [Description]
  When running a Qemu guest with VirtIO network interfaces, detaching an
  interface that's currently being used can result in a deadlock. The guest
  instance will hang and become unresponsive to commands, and the only option
  is to kill -9 the instance.
  The reason for this is a deadlock between a monitor thread and an RCU
  thread, which fight over the BQL (qemu_global_mutex) and the critical RCU
  section locks. The monitor thread acquires the BQL to detach the network
  interface and fires up a helper thread to deal with detaching the network
  adapter. That new thread needs to wait on the RCU thread to complete the
  deletion, but the RCU thread wants the BQL to commit its transactions.
  This bug is already fixed upstream (73c6e4013b4c rcu: completely disable
  pthread_atfork callbacks as soon as possible) and included in other series
  (see below), so we don't need to backport it to Bionic onwards.

  Upstream commit:
  https://git.qemu.org/?p=qemu.git;a=commit;h=73c6e4013b4c

  $ git describe --contains 73c6e4013b4c
  v2.10.0-rc2~1^2~8

  $ rmadison qemu
  ===> qemu | 1:2.5+dfsg-5ubuntu10.34 | xenial-updates/universe   | amd64, ...
   qemu | 1:2.11+dfsg-1ubuntu7| bionic/universe   | amd64, ...
   qemu | 1:2.12+dfsg-3ubuntu8| cosmic/universe   | amd64, ...
   qemu | 1:3.1+dfsg-2ubuntu2 | disco/universe| amd64, ...

  [Test Case]
  Being a race condition, this is a tricky bug to reproduce consistently.
We've had reports of users running into this with OpenStack deployments and 
Windows Server guests, and the scenario is usually like this:
  1) Deploy a 16vCPU Windows Server 2012 R2 guest with a virtio network 
interface
  2) Stress the network interface with e.g. Windows HLK test suite or similar
  3) Repeatedly attach/detach the network adapter that's in use
  It usually takes more than ~4000 attach/detach cycles to trigger the bug.

  [Regression Potential]
  Regressions might arise from the fact that the fix changes RCU lock code.
  Since this patch has been upstream and in other series for a while, it is
  unlikely to cause regressions in RCU code specifically. Other code that
makes use of the RCU locks (MMIO and some monitor events) will be thoroughly 
tested for any regressions with use-case scenarios and scripted runs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1818880/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1686437] Re: [SRU] glance sync: need keystone v3 auth support

2019-01-14 Thread Edward Hope-Morley
Also, it seems that the xenial-proposed (1.3) package has been deleted,
so we need to get the PPA version resubmitted as an SRU and test that.

** Tags removed: sts verification-failed verification-failed-xenial

** Changed in: simplestreams (Ubuntu Xenial)
   Status: Fix Committed => New

** No longer affects: simplestreams (Ubuntu Zesty)

** Changed in: simplestreams (Ubuntu Xenial)
 Assignee: Felipe Reyes (freyes) => (unassigned)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1686437

Title:
  [SRU] glance sync: need keystone v3 auth support

Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  New

Bug description:
  [Impact]

  simplestreams can't sync images when keystone is configured to use v3;
  keystone v2 has been deprecated since mitaka[0] (the version shipped with
  xenial).

  The OpenStack Keystone charm supports only v3 for Queens and
  later[1].

  [Test Case]

  * deploy an openstack environment with keystone v3 enabled
    - get a copy of the bundle available at http://paste.ubuntu.com/p/hkhsHKqt4h/ ;
      this bundle deploys a minimal version of xenial-mitaka.

  Expected result:

  - "glance image-list" lists trusty and xenial images
  - the file glance-simplestreams-sync/0:/var/log/glance-simplestreams-sync.log 
 contains details of the images pulled from cloud-images.u.c (example: 
https://pastebin.ubuntu.com/p/RWG8QrkVDz/ )

  Actual result:

  - "glance image-list" is empty
  - the file glance-simplestreams-sync/0:/var/log/glance-simplestreams-sync.log 
 contains the following stacktrace
  INFO  * 04-09 22:04:06 [PID:14571] * root * Calling DryRun mirror to get 
item list
  ERROR * 04-09 22:04:06 [PID:14571] * root * Exception during syncing:
  Traceback (most recent call last):
File "/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.py", 
line 471, in main
  do_sync(charm_conf, status_exchange)
File "/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.py", 
line 232, in do_sync
  objectstore=store)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/glance.py", 
line 374, in __init__
  super(ItemInfoDryRunMirror, self).__init__(config, objectstore)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/glance.py", 
line 126, in __init__
  self.keystone_creds = openstack.load_keystone_creds()
File "/usr/lib/python2.7/dist-packages/simplestreams/openstack.py", line 
61, in load_keystone_creds
  raise ValueError("(tenant_id or tenant_name)")
  ValueError: (tenant_id or tenant_name)

  
  [Regression Potential]

  * A possible regression would manifest itself in figuring out whether v2 or
  v3 should be used; after the connection is made there are no further
  changes introduced by this SRU.
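
  For context, the v2/v3 decision being made is roughly the following sketch
  (plain environment variables; this is not the actual simplestreams patch):

    import os

    def keystone_api_version(env=os.environ):
        # v3 credentials carry project/domain information; v2 uses tenant_*.
        if env.get("OS_IDENTITY_API_VERSION") == "3" or "OS_PROJECT_NAME" in env:
            return 3
        if "OS_TENANT_NAME" in env or "OS_TENANT_ID" in env:
            return 2
        raise ValueError("no usable keystone credentials found")

    print(keystone_api_version())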

  
  [Other Info]

  When trying to test my changes for bug 1686086, I was unable to auth
  to keystone, which means glance image sync just doesn't work with
  a v3 keystone.

  Related bugs:
   * bug 1719879: swift client needs to use v1 auth prior to ocata
   * bug 1728982: openstack mirror with keystone v3 always imports new images
   * bug 1611987: glance-simplestreams-sync charm doesn't support keystone v3

  [0] 
https://docs.openstack.org/releasenotes/keystone/mitaka.html#deprecation-notes
  [1] 
https://docs.openstack.org/charm-guide/latest/1802.html#keystone-support-is-v3-only-for-queens-and-later

To manage notifications about this bug go to:
https://bugs.launchpad.net/simplestreams/+bug/1686437/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1737866] Re: Too many open files when large number of routers on a host

2018-10-04 Thread Edward Hope-Morley
** Changed in: cloud-archive/pike
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1737866

Title:
  Too many open files when large number of routers on a host

Status in OpenStack neutron-openvswitch charm:
  Invalid
Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Committed
Status in Ubuntu Cloud Archive ocata series:
  Fix Committed
Status in Ubuntu Cloud Archive pike series:
  Fix Committed
Status in openvswitch package in Ubuntu:
  Fix Released
Status in openvswitch source package in Xenial:
  Fix Committed
Status in openvswitch source package in Artful:
  Won't Fix
Status in openvswitch source package in Bionic:
  Fix Released

Bug description:
  [Impact]
  OpenStack environments running large numbers of routers and dhcp agents on a 
single host can hit the NOFILES limit in OVS, resulting in broken operation of 
virtual networking.

  [Test Case]
  Deploy an openstack environment; create a large number of virtual networks
and routers.
  OVS will start to fail with 'Too many open files' errors.

  [Regression Potential]
  Minimal - we're just increasing the NOFILE limit via the systemd service 
definition.

  [Original Bug Report]
  When there are a large number of routers and dhcp agents on a host, we see a 
syslog error repeated:

  "hostname ovs-vswitchd: ovs|1762125|netlink_socket|ERR|fcntl: Too many
  open files"

  If I check the number of filehandles owned by the pid for "ovs-
  vswitchd unix:/var/run/openvswitch/db.sock" I see close to/at 65535
  files.

  If I then run the following, we double the limit and (in our case) saw
  the count rise to >80000:

  prlimit -p $pid --nofile=131070

  We need to be able to:
  - monitor via nrpe whether the process is running short on filehandles (a
    check sketch follows below)
  - configure the limit so we have the option to not run out.

  Currently, if I restart the process, we'll lose this setting.

  Needless to say, openvswitch running out of filehandles causes all
  manner of problems for services which use it.
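
  A minimal sketch of the kind of nrpe-style check asked for above (standard
  /proc interfaces; finding the ovs-vswitchd pid is left to the caller):

    import os
    import sys

    def fd_usage(pid):
        used = len(os.listdir("/proc/%d/fd" % pid))
        with open("/proc/%d/limits" % pid) as f:
            for line in f:
                if line.startswith("Max open files"):
                    return used, int(line.split()[3])   # soft NOFILE limit
        raise RuntimeError("Max open files limit not found")

    if __name__ == "__main__":
        pid = int(sys.argv[1])              # e.g. the ovs-vswitchd pid
        used, soft = fd_usage(pid)
        print("%d of %d file descriptors in use" % (used, soft))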

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1737866/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1744062] Re: [SRU] L3 HA: multiple agents are active at the same time

2018-09-24 Thread Edward Hope-Morley
** Changed in: cloud-archive
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1744062

Title:
  [SRU] L3 HA: multiple agents are active at the same time

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive queens series:
  Fix Released
Status in neutron:
  New
Status in keepalived package in Ubuntu:
  Fix Released
Status in neutron package in Ubuntu:
  Invalid
Status in keepalived source package in Xenial:
  Fix Released
Status in neutron source package in Xenial:
  Invalid
Status in keepalived source package in Bionic:
  Fix Released
Status in neutron source package in Bionic:
  Invalid

Bug description:
  [Impact]

  This is the same issue reported in
  https://bugs.launchpad.net/neutron/+bug/1731595, however that is
  marked as 'Fix Released' and the issue is still occurring and I can't
  change back to 'New' so it seems best to just open a new bug.

  It seems as if this bug surfaces due to load issues. While the fix
  provided by Venkata in https://bugs.launchpad.net/neutron/+bug/1731595
  (https://review.openstack.org/#/c/522641/) should help clean things up
  at the time of l3 agent restart, issues seem to come back later down
  the line in some circumstances. xavpaice mentioned he saw multiple
  routers active at the same time when they had 464 routers configured
  on 3 neutron gateway hosts using L3HA, and each router was scheduled
  to all 3 hosts. However, jhebden mentions that things seem stable at
  the 400 L3HA router mark, and it's worth noting this is the same
  deployment that xavpaice was referring to.

  keepalived has a patch upstream in 1.4.0 that provides a fix for
  removing left-over addresses if keepalived aborts. That patch will be
  cherry-picked to Ubuntu keepalived packages.

  [Test Case]
  The following SRU process will be followed:
  https://wiki.ubuntu.com/OpenStackUpdates

  In order to avoid regression of existing consumers, the OpenStack team
  will run their continuous integration test against the packages that
  are in -proposed. A successful run of all available tests will be
  required before the proposed packages can be let into -updates.

  The OpenStack team will be in charge of attaching the output summary
  of the executed tests. The OpenStack team members will not mark
  ‘verification-done’ until this has happened.

  [Regression Potential]
  The regression potential is lowered as the fix is cherry-picked without 
change from upstream. In order to mitigate the regression potential, the 
results of the aforementioned tests are attached to this bug.

  [Discussion]

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1744062/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1725421] Re: [mitaka] Hide nova lock/unlock if nova api <2.9 and when >= 2.9, "Lock/Unlock Instance" should not be shown at same time.

2018-04-23 Thread Edward Hope-Morley
for ref: sru cancelled

** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Invalid

** Changed in: horizon (Ubuntu Xenial)
   Status: Fix Committed => Invalid

** Tags removed: sts-sru-needed

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1725421

Title:
  [mitaka] Hide nova lock/unlock if nova api <2.9 and when >= 2.9,
  "Lock/Unlock Instance" should not be shown at same time.

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive mitaka series:
  Invalid
Status in horizon package in Ubuntu:
  Invalid
Status in horizon source package in Xenial:
  Invalid

Bug description:
  [Impact]

  * Similar/related to lp:1505845, which was fixed for Newton.
    Lock/unlock options are displayed in Horizon despite Nova API <2.9,
    which causes user confusion.
  * Additionally, when the Nova API is >= 2.9, the actions "Lock Instance"
    and "Unlock Instance" should not be shown at the same time (a sketch of
    the intended check follows below).

  [Test Case]

  case 1:
    1. Deploy nova and horizon both at mitaka level.
2. Log into Horizon dashboard.
    3. Create new instance.
    4. Wait for instance to switch to 'running' state.
    5. On the newly created instance, under Actions, you'll
   see both 'Lock Instance' and 'Unlock Instance' are
   available, even after locking the instance.

  case 2:
1. Deploy nova at kilo level and horizon at mitaka level (test with Nova 
API < 2.9)
2. Log into Horizon dashboard.
3. Create new instance.
4. Wait for instance to switch to 'running' state.
    5. On the newly created instance, under Actions,
       'Lock Instance' and 'Unlock Instance' should be hidden.

  [Regression Potential]

  * Regression potential is low. This bug has been fixed in all releases of
    OpenStack starting with Newton. The patch did have to be modified
    slightly to apply to mitaka.

  [Discussion]

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1725421/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1715254] Re: nova-novncproxy process gets wedged, requiring kill -HUP

2017-12-11 Thread Edward Hope-Morley
** No longer affects: cloud-archive/icehouse

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1715254

Title:
  nova-novncproxy process gets wedged, requiring kill -HUP

Status in OpenStack nova-cloud-controller charm:
  Invalid
Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive kilo series:
  Triaged
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in websockify package in Ubuntu:
  Invalid
Status in websockify source package in Xenial:
  Fix Released

Bug description:
  [Impact]

  affected
  - UCA Mitaka, Kilo
  - Xenial

  not affected
  - UCA Icehouse
  - Trusty
  (the log symptom is different; there is no reaping (which is errata) zombie... etc)

  When there are many connections or clients frequently reconnect to the
  console, the nova-novncproxy daemon gets stuck because websockify hangs.

  [Test case]

  1. Deploy openstack
  2. Create instances
  3. Open the console in a browser with an auto-refresh extension (set to 5 seconds)
  4. After several hours, connections are rejected

  [Regression Potential]

  Components that use websockify, especially nova-novncproxy, will be
  affected by this patch. However, after upgrading and running the refresh
  test mentioned above for 2 days without restarting any services, no hang
  happened. I tested this in a simple local environment, so the possibility
  of different behaviour in other circumstances needs to be considered.

  [Others]

  related commits

  - https://github.com/novnc/websockify/pull/226
  - https://github.com/novnc/websockify/pull/219

  [Original Description]

  Users reported they were unable to connect to instance consoles via
  either Horizon or direct URL. Upon investigation we found errors
  suggesting the address and port were in use:

  2017-08-23 14:51:56.248 1355081 INFO nova.console.websocketproxy [-] 
WebSocket server settings:
  2017-08-23 14:51:56.248 1355081 INFO nova.console.websocketproxy [-]   - 
Listen on 0.0.0.0:6080
  2017-08-23 14:51:56.248 1355081 INFO nova.console.websocketproxy [-]   - 
Flash security policy server
  2017-08-23 14:51:56.248 1355081 INFO nova.console.websocketproxy [-]   - Web 
server (no directory listings). Web root: /usr/share/novnc
  2017-08-23 14:51:56.248 1355081 INFO nova.console.websocketproxy [-]   - No 
SSL/TLS support (no cert file)
  2017-08-23 14:51:56.249 1355081 CRITICAL nova [-] error: [Errno 98] Address 
already in use
  2017-08-23 14:51:56.249 1355081 ERROR nova Traceback (most recent call last):
  2017-08-23 14:51:56.249 1355081 ERROR nova   File "/usr/bin/nova-novncproxy", 
line 10, in 
  2017-08-23 14:51:56.249 1355081 ERROR nova sys.exit(main())
  2017-08-23 14:51:56.249 1355081 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/nova/cmd/novncproxy.py", line 41, in main
  2017-08-23 14:51:56.249 1355081 ERROR nova port=CONF.vnc.novncproxy_port)
  2017-08-23 14:51:56.249 1355081 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/nova/cmd/baseproxy.py", line 73, in proxy
  2017-08-23 14:51:56.249 1355081 ERROR nova 
RequestHandlerClass=websocketproxy.NovaProxyRequestHandler
  2017-08-23 14:51:56.249 1355081 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/websockify/websocket.py", line 909, in 
start_server
  2017-08-23 14:51:56.249 1355081 ERROR nova 
tcp_keepintvl=self.tcp_keepintvl)
  2017-08-23 14:51:56.249 1355081 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/websockify/websocket.py", line 698, in socket
  2017-08-23 14:51:56.249 1355081 ERROR nova sock.bind(addrs[0][4])
  2017-08-23 14:51:56.249 1355081 ERROR nova   File 
"/usr/lib/python2.7/socket.py", line 224, in meth
  2017-08-23 14:51:56.249 1355081 ERROR nova return 
getattr(self._sock,name)(*args)
  2017-08-23 14:51:56.249 1355081 ERROR nova error: [Errno 98] Address already 
in use
  2017-08-23 14:51:56.249 1355081 ERROR nova

  This lead us to the discovery of a stuck nova-novncproxy process after
  stopping the service. Once we sent a kill -HUP to that process, we
  were able to start the nova-novncproxy and restore service to the
  users.

  This was not the first time we have had to restart nova-novncproxy
  services after users reported that were unable to connect with VNC.
  This time, as well as at least 2 other times, we have seen the
  following errors in the nova-novncproxy.log during the time frame of
  the issue:

  gaierror: [Errno -8] Servname not supported for ai_socktype

  which seems to correspond to log entries for connection strings with an
  invalid port ('port': u'-1'), as well as a bunch of:

  error: [Errno 104] Connection reset by peer

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1715254/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : 

[Group.of.nepali.translators] [Bug 1708305] Re: Realtime feature mlockall: Cannot allocate memory

2017-12-04 Thread Edward Hope-Morley
** Changed in: cloud-archive
   Status: Fix Committed => Fix Released

** Tags removed: sts-sru-needed
** Tags added: sts-sru-done

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1708305

Title:
  Realtime feature mlockall: Cannot allocate memory

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in libvirt package in Ubuntu:
  Fix Released
Status in libvirt source package in Xenial:
  Fix Released
Status in libvirt source package in Zesty:
  Fix Released
Status in libvirt source package in Artful:
  Fix Released

Bug description:
  [Impact]

   * Guest definitions that use locked memory + hugepages fail to spawn

   * Backport upstream fix to solve that issue

  
  [Environment]

  root@buneary:~# lsb_release -a
  No LSB modules are available.
  Distributor ID:   Ubuntu
  Description:  Ubuntu 16.04.2 LTS
  Release:  16.04
  Codename: xenial

  root@buneary:~# uname -r
  4.10.0-29-generic

  Reproducible also with the 4.4 kernel.

  [Detailed Description]

  When a guest memory backing stanza is defined using the <locked/> element
  together with hugepages, as follows:

    <memoryBacking>
      <hugepages/>
      <locked/>
    </memoryBacking>

  (Full guest definition: http://paste.ubuntu.com/25229162/)

  The guest fails to start due to the following error:

  2017-08-02 20:25:03.714+: starting up libvirt version: 1.3.1, package: 
1ubuntu10.12 (Christian Ehrhardt  Wed, 19 Jul 
2017 08:28:14 +0200), qemu version: 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.14), 
hostname: buneary.seg
  LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 
QEMU_AUDIO_DRV=none /usr/bin/kvm-spice -name reproducer2 -S -machine 
pc-i440fx-2.5,accel=kvm,usb=off -cpu host -m 124928 -realtime mlock=on -smp 
32,sockets=16,cores=1,threads=2 -object 
memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,share=yes,size=64424509440,host-nodes=0,policy=bind
 -numa node,nodeid=0,cpus=0-15,memdev=ram-node0 -object 
memory-backend-file,id=ram-node1,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,share=yes,size=66571993088,host-nodes=1,policy=bind
 -numa node,nodeid=1,cpus=16-31,memdev=ram-node1 -uuid 
2460778d-979b-4024-9a13-0c3ca04b18ec -no-user-config -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-reproducer2/monitor.sock,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown 
-boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive 
file=/var/lib/uvtool/libvirt/images/test-ds.qcow,format=qcow2,if=none,id=drive-virtio-disk0,cache=none
 -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 
-vnc 127.0.0.1:0 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on
  Domain id=14 is tainted: host-cpu
  char device redirected to /dev/pts/1 (label charserial0)

  mlockall: Cannot allocate memory

  2017-08-02T20:25:37.732772Z qemu-system-x86_64: locking memory failed
  2017-08-02 20:25:37.811+: shutting down

  This seems to be because the setrlimit for RLIMIT_MEMLOCK is too low for
  mlockall to work given the large amount of memory.

  There is a libvirt upstream patch that enforces the existence of the
  hard_limit stanza when using <locked/> in the memory backing settings.

  
https://github.com/libvirt/libvirt/commit/c2e60ad0e5124482942164e5fec088157f5e716a

  Memory locking can only work properly if the memory locking limit
  for the QEMU process has been raised appropriately: the default one
  is extremely low, so there's no way the guest will fit in there.

  The commit
  
https://github.com/libvirt/libvirt/commit/7e667664d28f90bf6916604a55ebad7e2d85305b
  is also required when using hugepages and the locked stanza.

  [Test Case]
   * Define a guest that uses the following stanzas (see for a full guest
  reference: http://paste.ubuntu.com/25288141/)

      <memtune>
        <hard_limit unit='G'>120</hard_limit>
        <soft_limit unit='G'>120</soft_limit>
      </memtune>

      <memoryBacking>
        <hugepages/>
        <locked/>
      </memoryBacking>

  * virsh define guest.xml
  * virsh start guest.xml
  * Without the fix, the following error will be raised and the guest
  will not start.

  root@buneary:/home/ubuntu# virsh start reproducer2
  error: Failed to start domain reproducer2
  error: internal error: process exited while connecting to monitor: mlockall: 
Cannot allocate memory
  2017-08-11T03:59:54.936275Z qemu-system-x86_64: locking memory failed

  * With the fix, the error shouldn't be displayed and the guest started

  [Suggested Fix]

  *
  
https://github.com/libvirt/libvirt/commit/7e667664d28f90bf6916604a55ebad7e2d85305b
  

[Group.of.nepali.translators] [Bug 1502136] Re: Everything returns 403 if show_multiple_locations is true and get_image_location policy is set

2017-08-07 Thread Edward Hope-Morley
** No longer affects: glance (Ubuntu Trusty)

** Tags removed: sts-sru-needed
** Tags added: sts-sru-done

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1502136

Title:
  Everything returns 403 if show_multiple_locations is true and
  get_image_location policy is set

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive kilo series:
  Fix Released
Status in Glance:
  Fix Released
Status in glance package in Ubuntu:
  Fix Released
Status in glance source package in Xenial:
  Fix Released

Bug description:
  [Impact]

  If, in glance-api.conf you set:

   show_multiple_locations = true

  Things work as expected:

   $ glance --os-image-api-version 2 image-show 13ae74f0-74bf-4792-a8bb-7c622abc5410
   +------------------+-----------------------------------------------------------------------------+
   | Property         | Value                                                                       |
   +------------------+-----------------------------------------------------------------------------+
   | checksum         | 9cb02fe7fcac26f8a25d6db3109063ae                                            |
   | container_format | bare                                                                        |
   | created_at       | 2015-10-02T12:43:33Z                                                        |
   | disk_format      | raw                                                                         |
   | id               | 13ae74f0-74bf-4792-a8bb-7c622abc5410                                        |
   | locations        | [{"url": "swift+config://ref1/glance/13ae74f0-74bf-4792-a8bb-7c622abc5410", |
   |                  | "metadata": {}}]                                                            |
   | min_disk         | 0                                                                           |
   | min_ram          | 0                                                                           |
   | name             | good-image                                                                  |
   | owner            | 88cffb9c8aee457788066c97b359585b                                            |
   | protected        | False                                                                       |
   | size             | 145                                                                         |
   | status           | active                                                                      |
   | tags             | []                                                                          |
   | updated_at       | 2015-10-02T12:43:34Z                                                        |
   | virtual_size     | None                                                                        |
   | visibility       | private                                                                     |
   +------------------+-----------------------------------------------------------------------------+

  but if you then set the get_image_location policy to role:admin, most
  calls return 403:

   $ glance --os-image-api-version 2 image-list
   403 Forbidden: You are not authorized to complete this action. (HTTP 403)

   $ glance --os-image-api-version 2 image-show 
13ae74f0-74bf-4792-a8bb-7c622abc5410
   403 Forbidden: You are not authorized to complete this action. (HTTP 403)

   $ glance --os-image-api-version 2 image-delete 
13ae74f0-74bf-4792-a8bb-7c622abc5410
   403 Forbidden: You are not authorized to complete this action. (HTTP 403)

  etc.

  As https://review.openstack.org/#/c/48401/ says:

   1. A user should be able to list/show/update/download image without
   needing permission on get_image_location.
   2. A policy failure should result in a 403 return code. We're
   getting a 500

  This is v2 only, v1 works ok.

  [Test Case]

  - Set show_multiple_locations = true in glance-api.conf
  - Set the get_image_location policy to role:admin in /etc/glance/policy.json
  - Run: glance --os-image-api-version 2 image-show 13ae74f0-74bf-4792-a8bb-7c622abc5410
    With the fix this should work (see the sketch below).
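
  For convenience, the steps above can be scripted. This is only a hedged
  sketch: it assumes crudini and the glance client are installed and that
  glance-api is managed by systemd, none of which is stated in the original
  test case.

   sudo crudini --set /etc/glance/glance-api.conf DEFAULT show_multiple_locations True
   sudo python -c 'import json; f="/etc/glance/policy.json"; p=json.load(open(f)); p["get_image_location"]="role:admin"; json.dump(p, open(f, "w"), indent=4)'
   sudo systemctl restart glance-api
   glance --os-image-api-version 2 image-show 13ae74f0-74bf-4792-a8bb-7c622abc5410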

  [Regression Potential]

  * None Identified

  [Other Info]

  * Already backported to mitaka/newton.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1502136/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1694337] Re: Port information (binding:host_id) not updated for network:router_gateway after qRouter failover

2017-07-05 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1694337

Title:
  Port information (binding:host_id) not updated for
  network:router_gateway after qRouter failover

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive mitaka series:
  New
Status in Ubuntu Cloud Archive newton series:
  New
Status in Ubuntu Cloud Archive ocata series:
  New
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  In Progress
Status in neutron source package in Yakkety:
  Won't Fix
Status in neutron source package in Zesty:
  In Progress

Bug description:
  [Impact]

  When using l3 ha and a router agent fails over, the interface holding
  the network:router_gateway interface does not get its property
  binding:host_id updated to reflect where the keepalived moved the
  router.

  [Steps to reproduce]

  0) Deploy a cloud with l3ha enabled
    - If familiar with juju, it's possible to use this bundle 
http://paste.ubuntu.com/24707730/ , but the deployment tool is not relevant

  1) Once it's deployed, configure it and create a router (see
  https://docs.openstack.org/mitaka/networking-guide/deploy-lb-ha-vrrp.html)
    - This is the script used during the troubleshooting
  ----8<----
  #!/bin/bash -x

  source novarc  # admin

  neutron net-create ext-net --router:external True \
      --provider:physical_network physnet1 --provider:network_type flat

  neutron subnet-create ext-net 10.5.0.0/16 --name ext-subnet \
      --allocation-pool start=10.5.254.100,end=10.5.254.199 --disable-dhcp \
      --gateway 10.5.0.1 --dns-nameserver 10.5.0.3

  keystone tenant-create --name demo 2>/dev/null
  keystone user-role-add --user admin --tenant demo --role Admin 2>/dev/null

  export TENANT_ID_DEMO=$(keystone tenant-list | grep demo | awk -F'|' '{print $2}' | tr -d ' ' 2>/dev/null)

  neutron net-create demo-net --tenant-id ${TENANT_ID_DEMO} --provider:network_type vxlan

  env OS_TENANT_NAME=demo neutron subnet-create demo-net 192.168.1.0/24 --name demo-subnet --gateway 192.168.1.1
  env OS_TENANT_NAME=demo neutron router-create demo-router
  env OS_TENANT_NAME=demo neutron router-interface-add demo-router demo-subnet
  env OS_TENANT_NAME=demo neutron router-gateway-set demo-router ext-net

  # verification
  neutron net-list
  neutron l3-agent-list-hosting-router demo-router
  neutron router-port-list demo-router
  ----8<----

  2) Kill the associated master keepalived process for the router
  ps aux | grep keepalived | grep $ROUTER_ID
  kill $PID

  3) Wait until "neutron l3-agent-list-hosting-router demo-router" shows the other host as active
  4) Check the binding:host_id property for the interfaces of the router:
  for ID in `neutron port-list --device-id $ROUTER_ID | tail -n +4 | head -n -1 | awk -F' ' '{print $2}'`; do neutron port-show $ID; done

  Expected results:

  The interface where the device_owner is network:router_gateway has its
  property binding:host_id set to where the keepalived process is master

  Actual result:

  The binding:host_id is never updated; it stays set to the value obtained
  when the port was created. A quick check is sketched below.
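
  A hedged verification sketch, reusing the loop from step 4 above; only the
  port whose device_owner is network:router_gateway needs to be inspected:

  for ID in `neutron port-list --device-id $ROUTER_ID | tail -n +4 | head -n -1 | awk -F' ' '{print $2}'`; do
      neutron port-show $ID | grep -E 'device_owner|binding:host_id'
  done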

  [Regression Potential]
  - This patch changes the UPDATE query for the port bindings in the database;
  a possible regression would manifest as failures in that query or as the
  binding:host_id property remaining outdated.

  [Other Info]

  The patches for zesty and yakkety are a direct backport from
  stable/ocata and stable/newton respectively. The patch for xenial is
  NOT merged in stable/xenial because it's already EOL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1694337/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1636322] Re: [SRU] upstart: ceph-all service starts before networks up

2017-06-12 Thread Edward Hope-Morley
** Changed in: cloud-archive/icehouse
   Status: Triaged => Invalid

** Changed in: cloud-archive
   Status: Triaged => Fix Released

** Tags removed: sts-sru-needed
** Tags added: sts-sru-done

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1636322

Title:
  [SRU] upstart: ceph-all service starts before networks up

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive icehouse series:
  Invalid
Status in Ubuntu Cloud Archive kilo series:
  Fix Released
Status in Ubuntu Cloud Archive liberty series:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in ceph package in Ubuntu:
  Invalid
Status in ceph source package in Trusty:
  Fix Released
Status in ceph source package in Xenial:
  Fix Released
Status in ceph source package in Yakkety:
  Invalid
Status in ceph source package in Zesty:
  Invalid

Bug description:
  As reported in upstream bug http://tracker.ceph.com/issues/17689, the
  ceph-all service starts at runlevels [2345], which introduces a race
  condition that allows a ceph service (e.g. ceph-mon) to start before the
  network it binds to is up on the server. This causes the service to fail
  on start because it is unable to bind to the specific network it is
  configured to listen on.

  A workaround is to add a post-up directive to the network stanza
  configuring the network device in the /etc/network/interfaces file which
  restarts the necessary ceph service, as illustrated below.
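
  A hedged example of that workaround as an /etc/network/interfaces excerpt
  (the interface name, address and the ceph job being restarted are
  placeholders, not taken from this report):

   auto eth2
   iface eth2 inet static
       address 10.20.30.40
       netmask 255.255.255.0
       post-up restart ceph-all || true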

  [Impact]

   * Ceph service fails to start on reboot of machine/container when
  networking takes some time to come up.

   * The provided patch to the upstart service configuration adds the
  static-network-up event as a dependency for the start on service
  directive. The static-network-up event is started after all the
  network stanzas have been processed in the necessary config files.

  [Test Case]

  * Configure multiple network interfaces and have the ceph service bind
  to one of the last configured network devices to introduce a delayed
  start of the network interface.

  [Regression Potential]

  * Upstream previously had the directive to start the service after any
  network-device-up event for a network which is not the loopback interface.
  This caused some "weirdness" to be seen when multiple network interfaces
  were configured, likely because the events it keyed on were the local
  filesystems being available and a single network interface being
  available. This change makes the service start only after all the network
  interface stanzas in the /e/n/i configuration files have been processed.

  * Additionally, this will cause some ceph services to start later than
  they previously would have since this change causes additional start
  dependencies. However, the results should be that the interfaces have
  always had a chance to be started prior to the attempt to start the
  ceph service.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1636322/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1631561] Re: [SRU] volume usage audit print excessive debug log for deleted resources

2017-06-12 Thread Edward Hope-Morley
** Changed in: cloud-archive/newton
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1631561

Title:
  [SRU] volume usage audit print excessive debug log for deleted
  resources

Status in Cinder:
  Fix Released
Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in cinder package in Ubuntu:
  Fix Released
Status in cinder source package in Xenial:
  Fix Released
Status in cinder source package in Yakkety:
  Fix Released

Bug description:
  [Impact]

  * The current volume usage audit tool queries all
  volumes/snapshots/backups, including resources that have already been
  deleted. As a result, every single deleted resource triggers a few
  debug/exception level messages, which is excessive for a production
  cluster running the audit at hourly/daily periods.

  * notify_snapshot_usage() doesn't handle the case where the source volume
  for the snapshot has been deleted.

  The following exception is raised:

  2017-02-17 20:20:10.067 1921 ERROR cinder [req-7e273ce0-1ae8-410e-a814-0f444364c028 - - - - -] Exists snapshot notification failed: Volume 2c84e585-9947-4ad9-bd93-2fc3a5cf9a08 could not be found.
  2017-02-17 20:20:10.067 1921 ERROR cinder Traceback (most recent call last):
  2017-02-17 20:20:10.067 1921 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/cmd/volume_usage_audit.py", line 188, in main
  2017-02-17 20:20:10.067 1921 ERROR cinder extra_info)
  2017-02-17 20:20:10.067 1921 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/volume/utils.py", line 174, in notify_about_snapshot_usage
  2017-02-17 20:20:10.067 1921 ERROR cinder usage_info = _usage_from_snapshot(snapshot, **extra_usage_info)
  2017-02-17 20:20:10.067 1921 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/volume/utils.py", line 151, in _usage_from_snapshot
  2017-02-17 20:20:10.067 1921 ERROR cinder 'availability_zone': snapshot.volume['availability_zone'],
  2017-02-17 20:20:10.067 1921 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 67, in getter
  2017-02-17 20:20:10.067 1921 ERROR cinder self.obj_load_attr(name)
  2017-02-17 20:20:10.067 1921 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/objects/snapshot.py", line 196, in obj_load_attr
  2017-02-17 20:20:10.067 1921 ERROR cinder self.volume_id)
  2017-02-17 20:20:10.067 1921 ERROR cinder File "/usr/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 181, in wrapper
  2017-02-17 20:20:10.067 1921 ERROR cinder result = fn(cls, context, *args, **kwargs)
  2017-02-17 20:20:10.067 1921 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/objects/base.py", line 166, in get_by_id
  2017-02-17 20:20:10.067 1921 ERROR cinder orm_obj = db.get_by_id(context, model, id, *args, **kwargs)
  2017-02-17 20:20:10.067 1921 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/db/api.py", line 1127, in get_by_id
  2017-02-17 20:20:10.067 1921 ERROR cinder return IMPL.get_by_id(context, model, id, *args, **kwargs)
  2017-02-17 20:20:10.067 1921 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/db/sqlalchemy/api.py", line 193, in wrapper
  2017-02-17 20:20:10.067 1921 ERROR cinder return f(*args, **kwargs)
  2017-02-17 20:20:10.067 1921 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/db/sqlalchemy/api.py", line 4451, in get_by_id
  2017-02-17 20:20:10.067 1921 ERROR cinder return _GET_METHODS[model](context, id, *args, **kwargs)
  2017-02-17 20:20:10.067 1921 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/db/sqlalchemy/api.py", line 193, in wrapper
  2017-02-17 20:20:10.067 1921 ERROR cinder return f(*args, **kwargs)
  2017-02-17 20:20:10.067 1921 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/db/sqlalchemy/api.py", line 1480, in volume_get
  2017-02-17 20:20:10.067 1921 ERROR cinder return _volume_get(context, volume_id)
  2017-02-17 20:20:10.067 1921 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/db/sqlalchemy/api.py", line 193, in wrapper
  2017-02-17 20:20:10.067 1921 ERROR cinder return f(*args, **kwargs)
  2017-02-17 20:20:10.067 1921 ERROR cinder File "/usr/lib/python2.7/dist-packages/cinder/db/sqlalchemy/api.py", line 1425, in _volume_get
  2017-02-17 20:20:10.067 1921 ERROR cinder raise exception.VolumeNotFound(volume_id=volume_id)
  2017-02-17 20:20:10.067 1921 ERROR cinder VolumeNotFound: Volume 2c84e585-9947-4ad9-bd93-2fc3a5cf9a08 could not be found.
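
  The traceback above comes from cinder/cmd/volume_usage_audit.py, i.e. it is
  hit when the periodic usage audit is run. A hedged invocation sketch (where
  the tool logs to depends on the local cinder.conf, so the grep target below
  is an assumption):

   sudo cinder-volume-usage-audit
   sudo grep -r "Exists snapshot notification failed" /var/log/cinder/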

  [Test Case]

   *The steps to reproduce this bug:

  1) Deploy Mitaka/Newton cloud
  2) Set the volumes as deleted on the database

  mysql> update volumes set deleted = 1;
  Query OK, 2 rows affected (0.00 sec)
  Rows matched: 2 

[Group.of.nepali.translators] [Bug 1666827] Re: Backport fixes for Rename Network return 403 Error

2017-05-17 Thread Edward Hope-Morley
The horizon 2:9.1.2-0ubuntu1~cloud0 point release has been uploaded to
the Trusty Mitaka UCA [1] and will be available shortly so closing this
bug.

[1]
https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1680098/comments/16

** Changed in: cloud-archive/mitaka
   Status: Triaged => Fix Released

** Changed in: horizon (Ubuntu Xenial)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1666827

Title:
  Backport fixes for Rename Network return 403 Error

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  Fix Released
Status in horizon source package in Xenial:
  Fix Released
Status in horizon source package in Yakkety:
  Fix Released

Bug description:
  [Impact]
  Non-admin users are not allowed to change the name of a network using the 
OpenStack Dashboard GUI

  [Test Case]
  1. Deploy trusty-mitaka or xenial-mitaka OpenStack Cloud
  2. Create demo project
  3. Create demo user
  4. Log into OpenStack Dashboard using demo user
  5. Go to Project -> Network and create a network
  6. Go to Project -> Network and Edit the just created network
  7. Change the name and click Save
  8. Observe that your request is denied with an error message

  [Regression Potential]
  Minimal.

  We are adding a patch, already merged into upstream stable/mitaka, that
  makes horizon call policy_check before sending the update request to
  Neutron when updating networks.

  The addition of the rule "update_network:shared" to horizon's copy of the
  Neutron policy.json is our own, since upstream was not willing to
  backport this required change. The rule is not referenced anywhere else
  in the code base, so it will not affect other policy_check calls.
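
  A quick way to confirm that the added rule is present after upgrading is to
  grep horizon's copy of the Neutron policy file; the path below is an
  assumption about the Ubuntu packaging, not taken from this bug:

   grep update_network:shared /etc/openstack-dashboard/neutron_policy.json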

  Upstream bug: https://bugs.launchpad.net/horizon/+bug/1609467

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1666827/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1664203] Re: [SRU] v1 driver does not delete namespace when pool deleted

2017-04-03 Thread Edward Hope-Morley
** Changed in: neutron-lbaas (Ubuntu Trusty)
   Status: New => Won't Fix

** Tags removed: sts-sru-needed
** Tags added: sts-sru

** Tags removed: sts-sru
** Tags added: sts-sru-done

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1664203

Title:
  [SRU] v1 driver does not delete namespace when pool deleted

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive kilo series:
  Fix Released
Status in Ubuntu Cloud Archive liberty series:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in neutron-lbaas package in Ubuntu:
  Invalid
Status in neutron-lbaas source package in Trusty:
  Won't Fix
Status in neutron-lbaas source package in Xenial:
  Fix Released
Status in neutron-lbaas source package in Yakkety:
  Won't Fix

Bug description:
  [Impact]

  The v1 services.loadbalancer.drivers.haproxy.namespace_driver has a bug
  in that it deletes the haproxy state directory for a pool when its vip is
  deleted. This means that when the pool itself is deleted, its associated
  namespace is never deleted, since the delete is predicated on the state
  path still existing.

  The v1 driver is deprecated as of the Liberty release and was completely
  removed from the codebase in the Newton release. However, OpenStack Kilo
  and Mitaka are still supported in Ubuntu, the former requiring the v1
  driver and the latter still capable of using it, so while upstream will
  not accept a patch, we will still patch the neutron-lbaas-agent Ubuntu
  package to fix this issue.

  [Test Case]

  Please see http://pastebin.ubuntu.com/24058957/
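
  A hedged spot-check in addition to the pastebin above (qlbaas- is the
  namespace prefix used by the v1 haproxy namespace driver; the pool id is a
  placeholder):

   neutron lb-pool-delete <pool-id>
   sudo ip netns list | grep qlbaas-   # with the fix, no namespace remains for the deleted pool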

  [Regression Potential]

  None

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1664203/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1636322] Re: upstart: ceph-all service starts before networks up

2017-03-28 Thread Edward Hope-Morley
** Changed in: ceph (Ubuntu Xenial)
   Status: Invalid => New

** Changed in: ceph (Ubuntu Yakkety)
   Status: Invalid => New

** Summary changed:

- upstart: ceph-all service starts before networks up
+ [SRU] upstart: ceph-all service starts before networks up

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1636322

Title:
  [SRU] upstart: ceph-all service starts before networks up

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive icehouse series:
  Triaged
Status in Ubuntu Cloud Archive kilo series:
  Triaged
Status in Ubuntu Cloud Archive liberty series:
  Triaged
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in ceph package in Ubuntu:
  Fix Released
Status in ceph source package in Trusty:
  New
Status in ceph source package in Xenial:
  New
Status in ceph source package in Yakkety:
  New
Status in ceph source package in Zesty:
  Fix Released

Bug description:
  As reported in upstream bug http://tracker.ceph.com/issues/17689, the
  ceph-all service starts at runlevels [2345], which introduces a race
  condition that allows a ceph service (e.g. ceph-mon) to start before the
  network it binds to is up on the server. This causes the service to fail
  on start because it is unable to bind to the specific network it is
  configured to listen on.

  A workaround is to add a post-up directive to the network stanza
  configuring the network device in the /etc/network/interfaces file which
  restarts the necessary ceph service.

  [Impact]

   * Ceph service fails to start on reboot of machine/container when
  networking takes some time to come up.

   * The provided patch to the upstart service configuration adds the
  static-network-up event as a dependency for the start on service
  directive. The static-network-up event is started after all the
  network stanzas have been processed in the necessary config files.

  [Test Case]

  * Configure multiple network interfaces and have the ceph service bind
  to one of the last configured network devices to introduce a delayed
  start of the network interface.

  [Regression Potential]

  * Upstream previously had the directive to start the service after any
  network-device-up event for a network which is not the loopback interface.
  This caused some "weirdness" to be seen when multiple network interfaces
  were configured, likely because the events it keyed on were the local
  filesystems being available and a single network interface being
  available. This change makes the service start only after all the network
  interface stanzas in the /e/n/i configuration files have been processed.

  * Additionally, this will cause some ceph services to start later than
  they previously would have since this change causes additional start
  dependencies. However, the results should be that the interfaces have
  always had a chance to be started prior to the attempt to start the
  ceph service.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1636322/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1621340] Re: [SRU]'multipath -r' causes /dev/mapper/ being removed

2017-03-23 Thread Edward Hope-Morley
** Changed in: multipath-tools (Ubuntu Yakkety)
   Status: Fix Committed => Fix Released

** Tags removed: sts-sru
** Tags added: sts-sru-done

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1621340

Title:
  [SRU]'multipath -r' causes /dev/mapper/ being removed

Status in multipath-tools package in Ubuntu:
  Fix Released
Status in multipath-tools source package in Xenial:
  Fix Released
Status in multipath-tools source package in Yakkety:
  Fix Released

Bug description:
  [Impact]

  "multipath -r" causes the /dev/mapper/ to disappear momentarily,
  which leads to some issue in consumer applications as such OpenStack.

  [Test Case]

   * connect to a multipath iSCSI target
   * multipath -r
   * the /dev/mapper/ symlink disappears momentarily (see the sketch below)
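
  A hedged way to observe the problem from a second terminal while running
  the steps above (not part of the original test case):

   sudo udevadm monitor --udev --subsystem-match=block &
   sudo multipath -r
   ls -l /dev/mapper/   # without the fix, remove/add events are seen and the symlink is briefly missing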

  [Regression Potential]

   * None

  
  "multipath -r" causes the /dev/mapper/ to disappear momentarily, which 
leads to some issue in consumer applications as such OpenStack. After some 
investigation, I found that /dev/mapper/ was deleted by udev during the 
reload, and it was re-created soon later by multipathd (livdevmapper code of 
cause).  Detailed findings are as follows:

  For reload in domap (rename as well),

  case ACT_RELOAD:
  r = dm_addmap_reload(mpp, params);
  if (r)
  r = dm_simplecmd_noflush(DM_DEVICE_RESUME, mpp->alias,
   0, MPATH_UDEV_RELOAD_FLAG);
  break;

  it passes 0 to dm_simplecmd_noflush as the argument for needsync, which
  makes the dm_task_set_cookie call be skipped in dm_simplecmd,

  if (udev_wait_flag && !dm_task_set_cookie(dmt, &cookie,
  ((conf->daemon)? DM_UDEV_DISABLE_LIBRARY_FALLBACK : 0) | udev_flags)) {
  dm_udev_complete(cookie);
  goto out;
  }

  because of the short-circuit evaluation. Thus _do_dm_ioctl in
  libdevmapper will add DM_UDEV_DISABLE_DM_RULES_FLAG flag to
  dmi->event_nr, and that will eventually be used in the udev rules
  (55-dm.rules),

  ENV{DM_UDEV_DISABLE_DM_RULES_FLAG}!="1", ENV{DM_NAME}=="?*",
  SYMLINK+="mapper/$env{DM_NAME}"

  Since the DM_UDEV_DISABLE_DM_RULES_FLAG is set, the rule will not
  match. As a result the link is removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1621340/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1585940] Re: [SRU] ERROR "can't get udev device" when iscsi multipath enabled

2017-03-23 Thread Edward Hope-Morley
** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

** Tags removed: sts-sru
** Tags added: sts-sru-done

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1585940

Title:
  [SRU] ERROR "can't get udev device" when iscsi multipath enabled

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in os-brick:
  Fix Released
Status in python-os-brick package in Ubuntu:
  Invalid
Status in python-os-brick source package in Xenial:
  Fix Released

Bug description:
  [Impact]

   * Fallback device detection for multipath connections is currently
  broken.

  [Test Case]

   * Enable cinder/nova multipath but do not make multiple paths
  available (such that are no /dev/disk/path/by-id/dm* devices)

   * Create a cinder volume and attach to Nova instance

   * Check that volume was attached successfully and that /var/log/nova
  /nova-compute.log does not display the error mentioned below.

  [Regression Potential]

   * None

  
  SUMMARY:
  Wrong path used when detecting multipath id

  ANALYSE:
  When the multipath device path is not found via the LUN wwn, os-brick
  uses a device path like the one below:

  /dev/disk/by-path/ip-192.168.3.52:3260-iscsi-
  iqn.1992-04.com.emc:cx.apm00153906536.b5-lun-155

  to detect the multipath id via 'multipath -l '.

  But this causes the error:
  : can't get udev device

  Actually, we should use its real path, such as /dev/sdb or /dev/sda, to
  detect the multipath id.
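
  A hedged illustration of the difference described above, using the by-path
  device quoted earlier (/dev/sdb is a placeholder for the real path of the
  same LUN):

   multipath -l /dev/disk/by-path/ip-192.168.3.52:3260-iscsi-iqn.1992-04.com.emc:cx.apm00153906536.b5-lun-155
   # fails with ": can't get udev device"
   multipath -l /dev/sdb
   # prints the multipath id of the LUN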

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1585940/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1602057] Re: (libvirt) KeyError updating resources for some node, guest.uuid is not in BDM list

2017-03-16 Thread Edward Hope-Morley
** No longer affects: ubuntu

** No longer affects: Ubuntu Xenial

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1602057

Title:
  (libvirt) KeyError updating resources for some node, guest.uuid is not
  in BDM list

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  Won't Fix
Status in OpenStack Compute (nova) newton series:
  Fix Committed
Status in nova package in Ubuntu:
  New
Status in nova source package in Xenial:
  New

Bug description:
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager [req-d5d5d486-b488-4429-bbb5-24c9f19ff2c0 - - - - -] Error updating resources for node controller.
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager Traceback (most recent call last):
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6726, in update_available_resource
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager rt.update_available_resource(context)
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 500, in update_available_resource
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager resources = self.driver.get_available_resource(self.nodename)
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5728, in get_available_resource
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager disk_over_committed = self._get_disk_over_committed_size_total()
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7397, in _get_disk_over_committed_size_total
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager local_instances[guest.uuid], bdms[guest.uuid])
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager KeyError: '0a5c5743-9555-4dfd-b26e-198449ebeee5'
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1602057/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1623700] Re: [SRU] multipath iscsi does not logout of sessions on xenial

2017-01-26 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1623700

Title:
  [SRU] multipath iscsi does not logout of sessions on xenial

Status in Ubuntu Cloud Archive:
  New
Status in os-brick:
  Fix Released
Status in python-os-brick package in Ubuntu:
  Invalid
Status in python-os-brick source package in Xenial:
  New

Bug description:
  [Impact]

   * The reload (multipath -r) in _rescan_multipath can cause the
  /dev/mapper/ symlink to be deleted and re-created (bug #1621340 tracks
  this problem), which causes a number of further OpenStack issues. For
  example, if it happens right in between, os.stat(mdev) called by
  _discover_mpath_device() fails to find the file, and when detaching a
  volume the iSCSI sessions are not logged out. This leaves behind a mpath
  device and the iSCSI /dev/disk/by-path devices as broken LUNs. So we
  should stop calling multipath -r when attaching/detaching iSCSI volumes;
  multipath will load devices on its own.

  [Test Case]

   * Enable the iSCSI driver and cinder/nova multipath
   * Detach an iSCSI volume
   * Check that the devices/symlinks do not get messed up as shown in the
     transcript below (a verification sketch follows)
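
  A hedged verification sketch using the instance/volume ids from the
  transcript below; it assumes these were the only iSCSI sessions on the
  host:

   nova volume-detach 6e1017a7-6dea-418f-ad9b-879da085bd13 d1d68e04-a217-44ea-bb74-65e0de73e5f8
   sudo iscsiadm -m session   # should report no active sessions after the detach
   sudo multipath -ll         # should show no leftover map for the volume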

  [Regression Potential]

   * None

  
  stack@xenial-devstack-master-master-20160914-092014:~$ nova volume-attach 6e1017a7-6dea-418f-ad9b-879da085bd13 d1d68e04-a217-44ea-bb74-65e0de73e5f8
  +----------+--------------------------------------+
  | Property | Value                                |
  +----------+--------------------------------------+
  | device   | /dev/vdb                             |
  | id       | d1d68e04-a217-44ea-bb74-65e0de73e5f8 |
  | serverId | 6e1017a7-6dea-418f-ad9b-879da085bd13 |
  | volumeId | d1d68e04-a217-44ea-bb74-65e0de73e5f8 |
  +----------+--------------------------------------+

  stack@xenial-devstack-master-master-20160914-092014:~$ cinder list
  +--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
  | ID                                   | Status | Name | Size | Volume Type | Bootable | Attached to                          |
  +--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
  | d1d68e04-a217-44ea-bb74-65e0de73e5f8 | in-use | -    | 1    | pure-iscsi  | false    | 6e1017a7-6dea-418f-ad9b-879da085bd13 |
  +--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+

  stack@xenial-devstack-master-master-20160914-092014:~$ nova list
  +--------------------------------------+------+--------+------------+-------------+---------------------------------+
  | ID                                   | Name | Status | Task State | Power State | Networks                        |
  +--------------------------------------+------+--------+------------+-------------+---------------------------------+
  | 6e1017a7-6dea-418f-ad9b-879da085bd13 | test | ACTIVE | -          | Running     | public=172.24.4.12, 2001:db8::b |
  +--------------------------------------+------+--------+------------+-------------+---------------------------------+

  stack@xenial-devstack-master-master-20160914-092014:~$ sudo iscsiadm -m session
  tcp: [5] 10.0.1.10:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
  tcp: [6] 10.0.5.10:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
  tcp: [7] 10.0.1.11:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
  tcp: [8] 10.0.5.11:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
  stack@xenial-devstack-master-master-20160914-092014:~$ sudo iscsiadm -m node
  10.0.1.11:3260,-1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873
  10.0.5.11:3260,-1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873
  10.0.5.10:3260,-1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873
  10.0.1.10:3260,-1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873

  stack@xenial-devstack-master-master-20160914-092014:~$ sudo tail -f /var/log/syslog
  Sep 14 22:33:14 xenial-qemu-tester multipath: dm-0: failed to get udev uid: Invalid argument
  Sep 14 22:33:14 xenial-qemu-tester multipath: dm-0: failed to get sysfs uid: Invalid argument
  Sep 14 22:33:14 xenial-qemu-tester multipath: dm-0: failed to get sgio uid: No such file or directory
  Sep 14 22:33:14 xenial-qemu-tester systemd[1347]: dev-disk-by\x2did-scsi\x2d3624a93709a738ed78583fd12003fb774.device: Dev dev-disk-by\x2did-scsi\x2d3624a93709a738ed78583fd12003fb774.device appeared twice with different sysfs paths /sys/devices/platform/host6/session5/target6:0:0/6:0:0:1/block/sda and /sys/devices/virtual/block/dm-0
  Sep 14 22:33:14 

[Group.of.nepali.translators] [Bug 1587261] Re: [SRU] Swift bucket X-Timestamp not set by Rados Gateway

2016-10-27 Thread Edward Hope-Morley
** Changed in: ceph (Ubuntu Zesty)
   Status: Fix Released => In Progress

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1587261

Title:
  [SRU] Swift bucket X-Timestamp not set by Rados Gateway

Status in Ubuntu Cloud Archive:
  New
Status in ceph package in Ubuntu:
  In Progress
Status in ceph source package in Xenial:
  New
Status in ceph source package in Yakkety:
  New
Status in ceph source package in Zesty:
  In Progress

Bug description:
  [Impact]

   * A basic characteristic of a object store is the ability to create
     buckets and objects and to query for information about said
     buckets and objects.

   * In the current version of the ceph radosgw package it is not
     possible to get the creation time of buckets. This is a serious
     defect and makes it impossible to use Ubuntu with ceph as an
     object store for some applications.

   * The issue has been fixed in upstream master

   * The proposed debdiff solves the issue by including patches cherry
     picked and adapted from upstream master branch fixing this issue.

  [Test Case]

   * Use Juju to deploy Ceph cluster with radosgw and relation to OpenStack
     Keystone. Example bundle: http://pastebin.ubuntu.com/23374308/

   * Install OpenStack Swift client

  sudo apt-get install python-swiftclient

   * Load OpenStack Credentials pointing to your test deployment

  wget 
https://raw.githubusercontent.com/openstack-charmers/openstack-bundles/master/development/shared/novarc
  . novarc

   * Create swift bucket

  swift post test

   * Display information about newly created bucket

  swift stat test

   * Observe that key 'X-Timestamp' has value 0.0

   * Delete bucket

  swift delete test

   * Install patched radosgw packages on 'ceph-radosgw' unit and repeat

   * Verify that key 'X-Timestamp' now has a value > 0.0, corresponding
     to the unix time at which you created the bucket.

  [Regression Potential]

   * The patch is simple and I see little potential for any regression as a
     result of it being applied.

  [Original bug description]
  When creating a swift/radosgw bucket in horizon the bucket gets created, but 
shows up with a creation date of 19700101

  In the apache log one can observe

  curl -i http://10.11.140.241:80/swift/v1/bucket1 -I -H "X-Auth-Token:  ...
  Container HEAD failed: http://10.11.140.241:80/swift/v1/bucket1 404 Not Found

  However a manual curl call succeeds. Also the radosgw.log shows
  successful PUT/GET requests.

  I get similar results using the swift command line utility with
  containers inheriting a creation date of 19700101 even though I can
  see the correct date being passed to rados in the headers of the
  request.

  Also, there are similar issues with ceilometer integration, seemingly linked:

  2016-05-31 06:28:16.931 1117922 WARNING ceilometer.agent.manager [-] Continue 
after error from storage.containers.objects: Account GET failed: 
http://10.101.140.241:80/swift/v1/AUTH_025d6aa2af18415a87c012211edb7fea?format=json
 404 Not Found  [first 60 chars of response] 
{"Code":"NoSuchBucket","BucketName":"AUTH_025d6aa2af18415a87
  2016-05-31 06:28:16.931 1117922 ERROR ceilometer.agent.manager Traceback 
(most recent call last):

  This is using charm version: 86 against Openstack Mitaka

  This also seems pretty reproduceable with any ceph, ceph-rados and
  mitaka install via the juju charms.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1587261/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1587261] Re: Swift container X-Timestamp not set by Rados Gateway

2016-10-24 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1587261

Title:
  Swift container X-Timestamp not set by Rados Gateway

Status in Ubuntu Cloud Archive:
  New
Status in ceph package in Ubuntu:
  Confirmed
Status in ceph source package in Xenial:
  New
Status in ceph source package in Yakkety:
  New
Status in ceph source package in Zesty:
  Confirmed

Bug description:
  When creating a swift/radosgw container in horizon the container gets
  created, but shows up with a creation date of 19700101

  In the apache log one can observe

  curl -i http://10.11.140.241:80/swift/v1/bucket1 -I -H "X-Auth-Token:  ...
  Container HEAD failed: http://10.11.140.241:80/swift/v1/bucket1 404 Not Found

  However a manual curl call succeeds. Also the radosgw.log shows
  successful PUT/GET requests.

  I get similar results using the swift command line utility with
  containers inheriting a creation date of 19700101 even though I can
  see the correct date being passed to rados in the headers of the
  request.

  Also, there are similar issues with ceilometer integration, seemingly linked:

  2016-05-31 06:28:16.931 1117922 WARNING ceilometer.agent.manager [-] Continue 
after error from storage.containers.objects: Account GET failed: 
http://10.101.140.241:80/swift/v1/AUTH_025d6aa2af18415a87c012211edb7fea?format=json
 404 Not Found  [first 60 chars of response] 
{"Code":"NoSuchBucket","BucketName":"AUTH_025d6aa2af18415a87
  2016-05-31 06:28:16.931 1117922 ERROR ceilometer.agent.manager Traceback 
(most recent call last):

  
  This is using charm version: 86 against Openstack Mitaka

  This also seems pretty reproduceable with any ceph, ceph-rados and
  mitaka install via the juju charms.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1587261/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1606652] Re: [SRU] Syntax error 'type' in neutron-linuxbridge-cleanup.service

2016-10-20 Thread Edward Hope-Morley
** Patch added: "lp1606652-xenial-mitaka.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1606652/+attachment/4764476/+files/lp1606652-xenial-mitaka.debdiff

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu Xenial)
   Status: New => In Progress

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1606652

Title:
  [SRU] Syntax error 'type' in neutron-linuxbridge-cleanup.service

Status in Ubuntu Cloud Archive:
  New
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  In Progress
Status in neutron source package in Yakkety:
  Fix Released

Bug description:
  [Impact]

   * The neutron-linuxbridge-cleanup service logs a warning on start
     because of a syntax error in its systemd unit file.

  [Test Case]

   * Execute the following on Xenial (Mitaka):
  - sudo systemctl start neutron-linuxbridge-cleanup
  - sudo systemctl status neutron-linuxbridge-cleanup | grep "Unknown lvalue 'type' in section 'Service'"

  [Regression Potential]

   * None

  The service file /lib/systemd/system/neutron-linuxbridge-cleanup.service
  contains a syntax error.

  Line 5 contains a typo: "type=oneshot"
  It should be: "Type=oneshot" (see the sketch below)
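
  A hedged sketch of the one-character fix and its verification (the sed
  expression assumes the stock unit file shipped in 2:8.1.2-0ubuntu1):

   sudo sed -i 's/^type=oneshot/Type=oneshot/' /lib/systemd/system/neutron-linuxbridge-cleanup.service
   sudo systemctl daemon-reload
   sudo systemctl restart neutron-linuxbridge-cleanup
   sudo systemctl status neutron-linuxbridge-cleanup | grep "Unknown lvalue 'type'" || echo "warning gone"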

  1)
  Version information:
  neutron-linuxbridge-agent:
    Installed: 2:8.1.2-0ubuntu1
    Candidate: 2:8.1.2-0ubuntu1
    Version table:
   *** 2:8.1.2-0ubuntu1 500
  500 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  500 http://de.archive.ubuntu.com/ubuntu xenial-updates/main i386 
Packages
  500 http://security.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  500 http://security.ubuntu.com/ubuntu xenial-updates/main i386 
Packages
  100 /var/lib/dpkg/status
   2:8.0.0-0ubuntu1 500
  500 http://de.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
  500 http://de.archive.ubuntu.com/ubuntu xenial/main i386 Packages
  500 http://security.ubuntu.com/ubuntu xenial/main amd64 Packages
  500 http://security.ubuntu.com/ubuntu xenial/main i386 Packages

  2)
  lsb:
  Description:  Ubuntu 16.04 LTS
  Release:  16.04

  3)
  Service should start successfull on service start

  4)
  systemd[1]: [/lib/systemd/system/neutron-linuxbridge-cleanup.service:5] 
Unknown lvalue 'type' in section 'Service'

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1606652/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1609733] Re: glance-common package doesn't contains dependencies/files for using cinder store

2016-08-26 Thread Edward Hope-Morley
https://review.openstack.org/#/c/348336/

** Also affects: glance (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Changed in: glance (Juju Charms Collection)
   Status: New => Fix Committed

** Changed in: glance (Juju Charms Collection)
Milestone: None => 16.10

** Changed in: glance (Juju Charms Collection)
 Assignee: (unassigned) => Andrey Pavlov (apavlov-e)

** Changed in: glance (Juju Charms Collection)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1609733

Title:
  glance-common package doesn't contains dependencies/files for using
  cinder store

Status in python-glance-store package in Ubuntu:
  Fix Released
Status in python-glance-store source package in Xenial:
  Fix Committed
Status in python-glance-store source package in Yakkety:
  Fix Released
Status in glance package in Juju Charms Collection:
  Fix Committed

Bug description:
  glance is being installed from debian packages can not be used with
  storing images in cinder.

  1) The python-glance-store package doesn't depend on 'python-cinderclient',
  'oslo.rootwrap' and 'os-brick'.
  I think this is due to the unusual 'extra' dependencies definition in the
  pip package https://pypi.python.org/pypi/glance_store/0.15.0

  information from the url: http://packages.ubuntu.com/xenial/python-
  glance-store
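
  As a hedged interim workaround (binary package names inferred from the
  dependency list above, not from the eventual packaging fix), the missing
  runtime dependencies can be installed by hand:

   sudo apt-get install python-cinderclient python-oslo.rootwrap python-os-brick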

  2) glance-common debian package doesn't contain files for rootwrap 
functionality -
  /etc/glance/rootwrap.conf
  /etc/sudoers.d/glance_sudoers
  and directory /etc/glance/rootwrap.d

  from the url: http://packages.ubuntu.com/xenial/all/glance-
  common/filelist

  for example in nova-common -
  http://packages.ubuntu.com/xenial/all/nova-common/filelist

  maybe glance-store should contain these files because it contains file
  /usr/bin/python2-glance-rootwrap

  1) Ubuntu 14.04 with repositories where glance version 12.0.0 is available
  (installed by Ubuntu Juju Charms)
  2) glance version is 12.0.0
  3) I expect that all needed files/packages are installed and that all I
  need to do is change glance.conf and restart glance-api
  4) Instead, I need to install packages and create files by hand.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-glance-store/+bug/1609733/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1570748] Re: Bug: resize instance after edit flavor with horizon

2016-05-23 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1570748

Title:
  Bug: resize instance after edit flavor with horizon

Status in Ubuntu Cloud Archive:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Fix Committed
Status in OpenStack Compute (nova) mitaka series:
  Fix Committed
Status in nova-powervm:
  Fix Committed
Status in tempest:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Wily:
  New
Status in nova source package in Xenial:
  New
Status in nova source package in Yakkety:
  Fix Released

Bug description:
  An error occurs when resizing an instance after editing its flavor with
  horizon (and also after deleting the flavor used by the instance).

  Reproduce step :

  1. create flavor A
  2. boot instance using flavor A
  3. edit flavor A with horizon (or delete flavor A)
  -> the result is the same for editing or deleting the flavor, because
  editing a flavor means deleting and recreating it
  4. resize or migrate the instance
  5. an error occurs (a CLI equivalent is sketched below)
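
  A hedged CLI equivalent of the Horizon steps above (flavor/image/instance
  names and sizes are placeholders):

   nova flavor-create flavorA auto 512 1 1
   nova boot --flavor flavorA --image cirros testvm
   nova flavor-delete flavorA               # "edit" in horizon means delete + recreate
   nova flavor-create flavorA auto 1024 1 1
   nova resize testvm flavorA               # without the fix, FlavorNotFound appears in nova-compute.log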

  Log : 
  nova-compute.log
     File "/opt/openstack/src/nova/nova/conductor/manager.py", line 422, in _object_dispatch
       return getattr(target, method)(*args, **kwargs)

     File "/opt/openstack/src/nova/nova/objects/base.py", line 163, in wrapper
       result = fn(cls, context, *args, **kwargs)

     File "/opt/openstack/src/nova/nova/objects/flavor.py", line 132, in get_by_id
       db_flavor = db.flavor_get(context, id)

     File "/opt/openstack/src/nova/nova/db/api.py", line 1479, in flavor_get
       return IMPL.flavor_get(context, id)

     File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 233, in wrapper
       return f(*args, **kwargs)

     File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 4732, in flavor_get
       raise exception.FlavorNotFound(flavor_id=id)

     FlavorNotFound: Flavor 7 could not be found.

  
  This Error is occured because of below code:
  /opt/openstack/src/nova/nova/compute/manager.py

  def resize_instance(self, context, instance, image,
  reservations, migration, instance_type,
  clean_shutdown=True):
  
  if (not instance_type or
  not isinstance(instance_type, objects.Flavor)):
  instance_type = objects.Flavor.get_by_id(
  context, migration['new_instance_type_id'])
  

  I think the deleted flavor should still be looked up when resizing the
  instance.
  I tested this in stable/kilo, but I think stable/liberty and stable/mitaka
  have the same bug because the source code has not changed.

  thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1570748/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp