[Yahoo-eng-team] [Bug 1835037] Re: Upgrade from bionic-rocky to bionic-stein failed migrations.

2019-11-10 Thread Ryan Beisner
The fix merged in master and is in the current stable charms as of
19.10.

** Changed in: charm-nova-cloud-controller
   Status: In Progress => Fix Committed

** Changed in: charm-nova-cloud-controller
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1835037

Title:
  Upgrade from bionic-rocky to bionic-stein failed migrations.

Status in OpenStack nova-cloud-controller charm:
  Fix Released
Status in OpenStack Compute (nova):
  New

Bug description:
  We were trying to upgrade from rocky to stein using the charm
  procedure described here:

  https://docs.openstack.org/project-deploy-guide/charm-deployment-
  guide/latest/app-upgrade-openstack.html

  and we got into this problem,

  
  2019-07-02 09:56:44 ERROR juju-log online_data_migrations failed
  Running batches of 50 until complete
  Error attempting to run 
  9 rows matched query populate_user_id, 0 migrated
  +----------------------------------------------+--------------+-----------+
  | Migration                                    | Total Needed | Completed |
  +----------------------------------------------+--------------+-----------+
  | create_incomplete_consumers                  | 0            | 0         |
  | delete_build_requests_with_no_instance_uuid  | 0            | 0         |
  | fill_virtual_interface_list                  | 0            | 0         |
  | migrate_empty_ratio                          | 0            | 0         |
  | migrate_keypairs_to_api_db                   | 0            | 0         |
  | migrate_quota_classes_to_api_db              | 0            | 0         |
  | migrate_quota_limits_to_api_db               | 0            | 0         |
  | migration_migrate_to_uuid                    | 0            | 0         |
  | populate_missing_availability_zones          | 0            | 0         |
  | populate_queued_for_delete                   | 0            | 0         |
  | populate_user_id                             | 9            | 0         |
  | populate_uuids                               | 0            | 0         |
  | service_uuids_online_data_migration          | 0            | 0         |
  +----------------------------------------------+--------------+-----------+
  Some migrations failed unexpectedly. Check log for details.
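
  For context, this output comes from the charm running nova's online data
  migrations as part of the upgrade. A hedged sketch of re-running them by
  hand on the nova-cloud-controller unit to narrow down the failing
  migration (unit name and batch size are illustrative):

    # run the remaining migrations manually, in small batches, so the one
    # that fails (populate_user_id here) is easier to spot
    juju ssh nova-cloud-controller/0
    sudo nova-manage db online_data_migrations --max-count 50
    # then check the nova logs for the underlying exception
    sudo grep -i populate_user_id /var/log/nova/*.log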

  What should we do to get this fixed?

  Regards,

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1835037/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815844] Re: iscsi multipath dm-N device only used on first volume attachment

2019-02-15 Thread Ryan Beisner
** Changed in: charm-nova-compute
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815844

Title:
  iscsi multipath dm-N device only used on first volume attachment

Status in OpenStack nova-compute charm:
  Invalid
Status in OpenStack Compute (nova):
  New

Bug description:
  With nova-compute from cloud:xenial-queens and use-multipath=true, iscsi
  multipath is configured and the dm-N device is used on the first
  attachment, but subsequent attachments only use a single path.

  The back-end storage is a Purestorage array.
  The multipath.conf is attached.
  The issue is easily reproduced as shown below:

  jog@pnjostkinfr01:~⟫ openstack volume create pure2 --size 10 --type pure
  +---------------------+--------------------------------------+
  | Field               | Value                                |
  +---------------------+--------------------------------------+
  | attachments         | []                                   |
  | availability_zone   | nova                                 |
  | bootable            | false                                |
  | consistencygroup_id | None                                 |
  | created_at          | 2019-02-13T23:07:40.00               |
  | description         | None                                 |
  | encrypted           | False                                |
  | id                  | e286161b-e8e8-47b0-abe3-4df411993265 |
  | migration_status    | None                                 |
  | multiattach         | False                                |
  | name                | pure2                                |
  | properties          |                                      |
  | replication_status  | None                                 |
  | size                | 10                                   |
  | snapshot_id         | None                                 |
  | source_volid        | None                                 |
  | status              | creating                             |
  | type                | pure                                 |
  | updated_at          | None                                 |
  | user_id             | c1fa4ae9a0b446f2ba64eebf92705d53     |
  +---------------------+--------------------------------------+

  jog@pnjostkinfr01:~⟫ openstack volume show pure2
  +--------------------------------+--------------------------------------+
  | Field                          | Value                                |
  +--------------------------------+--------------------------------------+
  | attachments                    | []                                   |
  | availability_zone              | nova                                 |
  | bootable                       | false                                |
  | consistencygroup_id            | None                                 |
  | created_at                     | 2019-02-13T23:07:40.00               |
  | description                    | None                                 |
  | encrypted                      | False                                |
  | id                             | e286161b-e8e8-47b0-abe3-4df411993265 |
  | migration_status               | None                                 |
  | multiattach                    | False                                |
  | name                           | pure2                                |
  | os-vol-host-attr:host          | cinder@cinder-pure#cinder-pure       |
  | os-vol-mig-status-attr:migstat | None                                 |
  | os-vol-mig-status-attr:name_id | None                                 |
  | os-vol-tenant-attr:tenant_id   | 9be499fd1eee48dfb4dc6faf3cc0a1d7     |
  | properties                     |                                      |
  | replication_status             | None                                 |
  | size                           | 10                                   |
  | snapshot_id                    | None                                 |
  | source_volid                   | None                                 |
  | status                         | available                            |
  | type                           | pure                                 |
  | updated_at                     | 2019-02-13T23:07:41.00               |
  | user_id                        | c1fa4ae9a0b446f2ba64eebf92705d53     |
  +--------------------------------+--------------------------------------+

  Add the volume to an instance:
  jog@pnjostkinfr01:~⟫ openstack server add volume T1 pure2
  jog@pnjostkinfr01:~⟫ openstack server show T1
  (output truncated in the archive)
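
  To confirm whether a subsequent attachment actually goes through the
  multipath device rather than a single path, the usual checks on the
  compute host look roughly like this (a hedged sketch; device and volume
  names are placeholders):

    # a healthy attachment should show a dm-N map with several active
    # iSCSI paths underneath it
    sudo multipath -ll
    # list block devices and check which one the instance disk points at
    sudo lsblk
    # list the iSCSI sessions established to the Purestorage array
    sudo iscsiadm -m session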

[Yahoo-eng-team] [Bug 1796200] Re: Network security group logging: only DROP events being logged

2018-10-06 Thread Ryan Beisner
*** This bug is a duplicate of bug 1782576 ***
https://bugs.launchpad.net/bugs/1782576

** This bug has been marked a duplicate of bug 1782576
   Logging - No SG-log data found at /var/log/syslog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1796200

Title:
  Network security group logging: only DROP events being logged

Status in neutron:
  New

Bug description:
  Network security group logging is not working: an empty log file is
  created without any actual log entries.

  On a clean OpenStack deployment (Ubuntu Xenial, Queens release) I tried to
  enable security group logging as described in
  https://docs.openstack.org/neutron/queens/admin/config-logging.html,
  and it is not working as expected.
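
  For reference, the documented procedure ends with creating a log resource,
  and whether ACCEPT events show up depends on the --event value used there.
  A hedged example of creating one that covers both accepted and dropped
  traffic (the security group ID is a placeholder):

    # confirm the logging extension is available
    openstack network loggable resources list
    # create a log object for a security group covering ACCEPT and DROP
    openstack network log create --resource-type security_group \
        --resource <security-group-id> --event ALL --enable sg-log-all
    openstack network log list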

  =

  Actual behaviour: the logfile is created in the location specified in the
  config, owned by the "neutron" user, but:
  - only DROP events are logged; ACCEPT events are missing;
  - ICMP traffic is not logged at all.

  Expected behaviour: the logfile is created and NSG traffic data is logged
  to it for both ACCEPT and DROP events.

  ==

  Additional information:

  a) OpenStack has been deployed from scratch using Juju and upstream
  bundles (with only two charms being modified locally, enabling
  necessary config changes for following upstream documentation
  mentioned above), here is actual charm link:
  http://paste.openstack.org/show/731530/

  b) Full OpenStack configuration commands, from flavors through to verifying
  that networking itself works:
  http://paste.openstack.org/show/731529/ (see the end of the paste: pinging
  the instance's floating IP fails at first, but succeeds after enabling a
  rule in the NSG - so traffic is actually reaching the instance and the
  security groups are working);

  c) Config files that should be modified, according to documentation:

  neutron-api neutron.conf: http://paste.openstack.org/show/731531/
  neutron-gateway /etc/neutron/plugins/ml2/openvswitch_agent.ini: 
http://paste.openstack.org/show/731534/
  nova-compute /etc/neutron/plugins/ml2/openvswitch_agent.ini: 
http://paste.openstack.org/show/731535/

  Security groups rules: http://paste.openstack.org/show/731541/
  OVS firewall log without any traffic yet: 
http://paste.openstack.org/show/731542/

  Trying to reach HTTPS (which is blocked by the security group):
  http://paste.openstack.org/show/731543/ - all OK, it is being logged.

  But when logging in over SSH (which is allowed by the NSG rules), nothing
  appears in the NSG log, even though the corresponding rules have been
  applied to Open vSwitch: http://paste.openstack.org/show/731544/

  Likewise, nothing appears in the NSG log when the instance is reached over
  ICMP (a regular ping, for example).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1796200/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1452641] Re: Static Ceph mon IP addresses in connection_info can prevent VM startup

2018-06-11 Thread Ryan Beisner
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1452641

Title:
  Static Ceph mon IP addresses in connection_info can prevent VM startup

Status in OpenStack Compute (nova):
  Confirmed
Status in nova package in Ubuntu:
  New

Bug description:
  The Cinder rbd driver extracts the IP addresses of the Ceph mon servers from
  the Ceph mon map when the instance/volume connection is established. This
  info is then stored in nova's block-device-mapping table and is never
  re-validated down the line.
  Changing the Ceph mon servers' IP addresses will therefore prevent the
  instance from booting, as the stale connection info ends up in the
  instance's XML. One idea to fix this would be to use the information from
  ceph.conf directly, which should point at an alias or a load balancer.
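
  A hedged way to see the stale monitor addresses that nova cached for a
  given attachment (assumes direct access to the nova database; the instance
  UUID is a placeholder, table and column names are from the standard nova
  schema):

    # the cached Ceph mon addresses live in the connection_info column and
    # are not refreshed after the initial attach
    mysql nova -e "SELECT connection_info FROM block_device_mapping \
                   WHERE instance_uuid='<instance-uuid>' AND deleted=0\G"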

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1452641/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1773449] Re: VMs do not survive host reboot

2018-05-29 Thread Ryan Beisner
Thank you for your report. I added a UCA task for SLA tracking. We're
working on reproducing it now, and we will update the status/level as
soon as we have confirmation.

** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Changed in: cloud-archive
 Assignee: (unassigned) => Sean Feole (sfeole)

** Changed in: cloud-archive
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1773449

Title:
  VMs do not survive host reboot

Status in Ubuntu Cloud Archive:
  New
Status in OpenStack Compute (nova):
  New
Status in nova package in Ubuntu:
  New

Bug description:
  Rebooting a host that contains VMs with attached volumes causes all of
  those VMs to fail to boot. This happens with Queens on both Bionic and
  Xenial.

  [0.00] Initializing cgroup subsys cpuset

  [0.00] Initializing cgroup subsys cpu

  [0.00] Initializing cgroup subsys cpuacct

  [0.00] Linux version 4.4.0-124-generic
  (buildd@lcy01-amd64-028) (gcc version 5.4.0 20160609 (Ubuntu
  5.4.0-6ubuntu1~16.04.9) ) #148-Ubuntu SMP Wed May 2 13:00:18 UTC 2018
  (Ubuntu 4.4.0-124.148-generic 4.4.117)

  [0.00] Command line:
  BOOT_IMAGE=/boot/vmlinuz-4.4.0-124-generic
  root=UUID=bca2de6e-f774-4203-ae05-e8deeb05f64a ro console=tty1
  console=ttyS0

  [0.00] KERNEL supported cpus:

  [0.00]   Intel GenuineIntel

  [0.00]   AMD AuthenticAMD

  [0.00]   Centaur CentaurHauls

  [0.00] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256

  [0.00] x86/fpu: Supporting XSAVE feature 0x01: 'x87 floating
  point registers'

  [0.00] x86/fpu: Supporting XSAVE feature 0x02: 'SSE registers'

  [0.00] x86/fpu: Supporting XSAVE feature 0x04: 'AVX registers'

  [0.00] x86/fpu: Enabled xstate features 0x7, context size is
  832 bytes, using 'standard' format.

  [0.00] x86/fpu: Using 'eager' FPU context switches.

  [0.00] e820: BIOS-provided physical RAM map:

  [0.00] BIOS-e820: [mem 0x-0x0009fbff]
  usable

  [0.00] BIOS-e820: [mem 0x0009fc00-0x0009]
  reserved

  [0.00] BIOS-e820: [mem 0x000f-0x000f]
  reserved

  [0.00] BIOS-e820: [mem 0x0010-0x7ffdbfff]
  usable

  [0.00] BIOS-e820: [mem 0x7ffdc000-0x7fff]
  reserved

  [0.00] BIOS-e820: [mem 0xfeffc000-0xfeff]
  reserved

  [0.00] BIOS-e820: [mem 0xfffc-0x]
  reserved

  [0.00] NX (Execute Disable) protection: active

  [0.00] SMBIOS 2.8 present.

  [0.00] Hypervisor detected: KVM

  [0.00] e820: last_pfn = 0x7ffdc max_arch_pfn = 0x4

  [0.00] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WC
  UC- WT

  [0.00] found SMP MP-table at [mem 0x000f6a20-0x000f6a2f]
  mapped at [880f6a20]

  [0.00] Scanning 1 areas for low memory corruption

  [0.00] Using GB pages for direct mapping

  [0.00] RAMDISK: [mem 0x361f4000-0x370f1fff]

  [0.00] ACPI: Early table checksum verification disabled

  [0.00] ACPI: RSDP 0x000F6780 14 (v00 BOCHS )

  [0.00] ACPI: RSDT 0x7FFE1649 2C (v01 BOCHS
  BXPCRSDT 0001 BXPC 0001)

  [0.00] ACPI: FACP 0x7FFE14CD 74 (v01 BOCHS
  BXPCFACP 0001 BXPC 0001)

  [0.00] ACPI: DSDT 0x7FFE0040 00148D (v01 BOCHS
  BXPCDSDT 0001 BXPC 0001)

  [0.00] ACPI: FACS 0x7FFE 40

  [0.00] ACPI: APIC 0x7FFE15C1 88 (v01 BOCHS
  BXPCAPIC 0001 BXPC 0001)

  [0.00] No NUMA configuration found

  [0.00] Faking a node at [mem
  0x-0x7ffdbfff]

  [0.00] NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdbfff]

  [0.00] kvm-clock: Using msrs 4b564d01 and 4b564d00

  [0.00] kvm-clock: cpu 0, msr 0:7ffcf001, primary cpu clock

  [0.00] kvm-clock: using sched offset of 17590935813 cycles

  [0.00] clocksource: kvm-clock: mask: 0x
  max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns

  [0.00] Zone ranges:

  [0.00]   DMA  [mem 0x1000-0x00ff]

  [0.00]   DMA32[mem 0x0100-0x7ffdbfff]

  [0.00]   Normal   empty

  [0.00]   Device   empty

  [0.00] Movable zone start for each node

  [0.00] Early memory node ranges

  [0.00]   node   0: [mem 0x1000-0x0009efff]

  [0.00]   node   0: [mem 0x0010-0x7ffdbfff]

  [0.00] Initmem setup node 0 [mem
  

[Yahoo-eng-team] [Bug 1590608] Re: Services should use http_proxy_to_wsgi middleware

2018-03-09 Thread Ryan Beisner
** Changed in: charm-barbican
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590608

Title:
  Services should use http_proxy_to_wsgi middleware

Status in Aodh:
  Fix Released
Status in Barbican:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in OpenStack Barbican Charm:
  Fix Released
Status in OpenStack heat charm:
  Triaged
Status in Cinder:
  Fix Released
Status in cloudkitty:
  Fix Released
Status in congress:
  Triaged
Status in OpenStack Backup/Restore and DR (Freezer):
  Fix Released
Status in Glance:
  Fix Released
Status in Gnocchi:
  Fix Released
Status in OpenStack Heat:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in neutron:
  Fix Released
Status in Panko:
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released

Bug description:
  It's a common problem, when putting a service behind a load balancer, to
  need to forward the protocol and host of the original request so that the
  receiving service can construct URLs that point to the load balancer and
  not to the private worker node.

  Most services have implemented some form of secure_proxy_ssl_header =
  HTTP_X_FORWARDED_PROTO handling however exactly how this is done is
  dependent on the service.

  oslo.middleware provides the http_proxy_to_wsgi middleware that
  handles these headers and the newer RFC7239 forwarding header and
  completely hides the problem from the service.

  This middleware should be adopted by all services in preference to
  their own HTTP_X_FORWARDED_PROTO handling.
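
  For services wired up through paste, adoption usually amounts to defining
  the filter and putting it early in the pipeline. A hedged sketch using
  crudini (file paths, section and pipeline names vary per service, and
  crudini itself may need to be installed):

    # define the http_proxy_to_wsgi filter in the service's paste config
    sudo crudini --set /etc/<service>/api-paste.ini filter:http_proxy_to_wsgi \
        paste.filter_factory oslo_middleware.http_proxy_to_wsgi:HTTPProxyToWSGI.factory
    # then add http_proxy_to_wsgi near the front of the service's paste
    # pipeline and restart the API service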

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1590608/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1736171] Re: Update OS API charm default haproxy timeout values

2018-03-09 Thread Ryan Beisner
** Changed in: charm-neutron-api
   Status: Fix Committed => Fix Released

** Changed in: charm-keystone
   Status: Fix Committed => Fix Released

** Changed in: charm-nova-cloud-controller
   Status: Fix Committed => Fix Released

** Changed in: charm-cinder
   Status: Fix Committed => Fix Released

** Changed in: charm-glance
   Status: Fix Committed => Fix Released

** Changed in: charm-ceph-radosgw
   Status: Fix Committed => Fix Released

** Changed in: charm-heat
   Status: Fix Committed => Fix Released

** Changed in: charm-openstack-dashboard
   Status: Fix Committed => Fix Released

** Changed in: charm-barbican
   Status: Fix Committed => Fix Released

** Changed in: charm-ceilometer
   Status: Fix Committed => Fix Released

** Changed in: charm-swift-proxy
   Status: Fix Committed => Fix Released

** Changed in: charm-manila
   Status: Fix Committed => Fix Released

** Changed in: charm-aodh
   Status: Fix Committed => Fix Released

** Changed in: charm-designate
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736171

Title:
  Update OS API charm default haproxy timeout values

Status in OpenStack AODH Charm:
  Fix Released
Status in OpenStack Barbican Charm:
  Fix Released
Status in OpenStack ceilometer charm:
  Fix Released
Status in OpenStack ceph-radosgw charm:
  Fix Released
Status in OpenStack cinder charm:
  Fix Released
Status in OpenStack Designate Charm:
  Fix Released
Status in OpenStack glance charm:
  Fix Released
Status in OpenStack heat charm:
  Fix Released
Status in OpenStack keystone charm:
  Fix Released
Status in OpenStack Manila Charm:
  Fix Released
Status in OpenStack neutron-api charm:
  Fix Released
Status in OpenStack neutron-gateway charm:
  Invalid
Status in OpenStack nova-cloud-controller charm:
  Fix Released
Status in OpenStack openstack-dashboard charm:
  Fix Released
Status in OpenStack swift-proxy charm:
  Fix Released
Status in neutron:
  Invalid

Bug description:
  Change OpenStack API charm haproxy timeout values

haproxy-server-timeout: 9
haproxy-client-timeout: 9
haproxy-connect-timeout: 9000
haproxy-queue-timeout: 9000

  Workaround until this lands is to set these values in config:

  juju config neutron-api haproxy-server-timeout=9 haproxy-client-
  timeout=9 haproxy-queue-timeout=9000 haproxy-connect-timeout=9000

  
  --- Original Bug -
  NeutronNetworks.create_and_delete_subnets is failing when run with 
concurrency greater than 1.

  Here's a snippet of a failure: http://paste.ubuntu.com/25927074/

  Here is my rally yaml: http://paste.ubuntu.com/26112719/

  This is happening using Pike on Xenial, from the Ubuntu Cloud Archive.
  The deployment is distributed across 9 nodes, with HA services.

  For now we have adjusted our test scenario to be more realistic. When we
  spread the test over 30 tenants instead of 3, and simulate 2 users per
  tenant instead of 3, we do not hit the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-aodh/+bug/1736171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750705] Re: glance db_sync requires mysql db to have log_bin_trust_function_creators = 1

2018-03-09 Thread Ryan Beisner
** Changed in: charm-percona-cluster
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1750705

Title:
  glance db_sync requires mysql db to have
  log_bin_trust_function_creators = 1

Status in OpenStack percona-cluster charm:
  Fix Released
Status in Glance:
  Fix Released
Status in glance package in Ubuntu:
  Fix Released

Bug description:
  Upon deploying glance via cs:~openstack-charmers-next/xenial/glance, glance
  appears to throw a CRIT unhandled error. So far I have experienced this on
  arm64; not sure about other arches at this point in time. Decided to file a
  bug and will investigate further.

  Cloud: xenial-queens/proposed

  This occurs when the shared-db-relation hook fires for mysql:shared-db.
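
  For reference, the MySQL/Percona setting named in the bug title can be
  checked and enabled at runtime; a hedged sketch (the percona-cluster charm
  may offer a cleaner way to persist this):

    # on the percona-cluster unit
    mysql -u root -p -e "SHOW VARIABLES LIKE 'log_bin_trust_function_creators';"
    mysql -u root -p -e "SET GLOBAL log_bin_trust_function_creators = 1;"
    # then re-run the failed hook / glance-manage db_sync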

  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed CRITI 
[glance] Unhandled error
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
Traceback (most recent call last):
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/bin/glance-manage", line 10, in 
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
sys.exit(main())
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/glance/cmd/manage.py", line 528, in main
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
return CONF.command.action_fn()
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/glance/cmd/manage.py", line 360, in sync
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
self.command_object.sync(CONF.command.version)
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/glance/cmd/manage.py", line 153, in sync
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
self.expand()
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/glance/cmd/manage.py", line 208, in expand
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
self._sync(version=expand_head)
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/glance/cmd/manage.py", line 168, in _sync
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
alembic_command.upgrade(a_config, version)
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/alembic/command.py", line 254, in upgrade
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
script.run_env()
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/alembic/script/base.py", line 425, in run_env
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
util.load_python_file(self.dir, 'env.py')
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/alembic/util/pyfiles.py", line 93, in 
load_python_file
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
module = load_module_py(module_id, path)
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/alembic/util/compat.py", line 75, in 
load_module_py
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
mod = imp.load_source(module_id, path, fp)
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/glance/db/sqlalchemy/alembic_migrations/env.py",
 line 88, in 
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
run_migrations_online()
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/glance/db/sqlalchemy/alembic_migrations/env.py",
 line 83, in run_migrations_online
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
context.run_migrations()
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"", line 8, in run_migrations
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/alembic/runtime/environment.py", line 836, in 
run_migrations
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
self.get_context().run_migrations(**kw)
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/alembic/runtime/migration.py", line 330, in 
run_migrations
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 

[Yahoo-eng-team] [Bug 1741319] Re: arm64: Migration pre-check error: CPU doesn't have compatibility.

2018-02-06 Thread Ryan Beisner
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1741319

Title:
  arm64: Migration pre-check error: CPU doesn't have compatibility.

Status in OpenStack nova-compute charm:
  Incomplete
Status in OpenStack Compute (nova):
  New

Bug description:
  Pike/openstack-base running on identical servers (HiSilicon D05):

  ubuntu@ike-hisi-maas:~$ openstack server migrate --live strong-emu dannf
  Migration pre-check error: CPU doesn't have compatibility.

  XML error: Missing CPU model name

  Refer to http://libvirt.org/html/libvirt-libvirt-
  host.html#virCPUCompareResult (HTTP 400) (Request-ID: req-
  c5ec9320-d111-40b7-af0e-d8414df3925c)
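
  A commonly used workaround on aarch64 is to move away from the host-model
  CPU mode. A hedged sketch, assuming the nova-compute charm exposes a
  cpu-mode option (otherwise the equivalent nova.conf setting can be applied
  directly on the compute nodes):

    # prefer host-passthrough on aarch64
    juju config nova-compute cpu-mode=host-passthrough
    # equivalent nova.conf setting:
    #   [libvirt]
    #   cpu_mode = host-passthrough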

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-compute/+bug/1741319/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1736171] Re: Update OS API charm default haproxy timeout values

2018-01-03 Thread Ryan Beisner
The heat charm previously lacked the haproxy timeout controls, and that
was resolved with https://review.openstack.org/#/c/526674/.  With that
landed, the default values should now be proposed against it.

** Also affects: charm-barbican
   Importance: Undecided
   Status: New

** Changed in: charm-barbican
   Importance: Undecided => Medium

** Changed in: charm-barbican
   Status: New => Fix Committed

** Changed in: charm-barbican
Milestone: None => 18.02

** Changed in: charm-barbican
 Assignee: (unassigned) => David Ames (thedac)

** Changed in: charm-keystone
   Status: Triaged => Fix Committed

** Changed in: charm-keystone
 Assignee: (unassigned) => David Ames (thedac)

** Changed in: charm-glance
   Status: Triaged => Fix Committed

** Changed in: charm-glance
 Assignee: (unassigned) => David Ames (thedac)

** Changed in: charm-cinder
   Status: Triaged => Fix Committed

** Changed in: charm-cinder
 Assignee: (unassigned) => David Ames (thedac)

** Changed in: charm-neutron-api
   Status: Triaged => Fix Committed

** Changed in: charm-neutron-api
 Assignee: (unassigned) => David Ames (thedac)

** Changed in: charm-nova-cloud-controller
   Status: Triaged => Fix Committed

** Changed in: charm-nova-cloud-controller
 Assignee: (unassigned) => David Ames (thedac)

** Also affects: charm-ceilometer
   Importance: Undecided
   Status: New

** Changed in: charm-ceilometer
   Importance: Undecided => Medium

** Changed in: charm-ceilometer
   Status: New => Fix Committed

** Changed in: charm-ceilometer
Milestone: None => 18.02

** Changed in: charm-ceilometer
 Assignee: (unassigned) => David Ames (thedac)

** Also affects: charm-swift-proxy
   Importance: Undecided
   Status: New

** Changed in: charm-swift-proxy
   Importance: Undecided => Medium

** Changed in: charm-swift-proxy
   Status: New => Fix Committed

** Changed in: charm-swift-proxy
Milestone: None => 18.02

** Changed in: charm-swift-proxy
 Assignee: (unassigned) => David Ames (thedac)

** Changed in: charm-ceph-radosgw
   Status: Triaged => Fix Committed

** Changed in: charm-ceph-radosgw
 Assignee: (unassigned) => David Ames (thedac)

** Changed in: charm-openstack-dashboard
   Status: Triaged => Fix Committed

** Changed in: charm-openstack-dashboard
 Assignee: (unassigned) => David Ames (thedac)

** Also affects: charm-manila
   Importance: Undecided
   Status: New

** Changed in: charm-manila
   Importance: Undecided => Medium

** Changed in: charm-manila
   Status: New => Fix Committed

** Changed in: charm-manila
Milestone: None => 18.02

** Changed in: charm-manila
 Assignee: (unassigned) => David Ames (thedac)

** Also affects: charm-aodh
   Importance: Undecided
   Status: New

** Changed in: charm-aodh
   Importance: Undecided => Medium

** Changed in: charm-aodh
   Status: New => Fix Committed

** Changed in: charm-aodh
Milestone: None => 18.02

** Changed in: charm-aodh
 Assignee: (unassigned) => David Ames (thedac)

** Also affects: charm-designate
   Importance: Undecided
   Status: New

** Changed in: charm-designate
   Importance: Undecided => Medium

** Changed in: charm-designate
   Status: New => Fix Committed

** Changed in: charm-designate
Milestone: None => 18.02

** Changed in: charm-designate
 Assignee: (unassigned) => David Ames (thedac)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736171

Title:
  Update OS API charm default haproxy timeout values

Status in OpenStack AODH Charm:
  Fix Committed
Status in OpenStack Barbican Charm:
  Fix Committed
Status in OpenStack ceilometer charm:
  Fix Committed
Status in OpenStack ceph-radosgw charm:
  Fix Committed
Status in OpenStack cinder charm:
  Fix Committed
Status in OpenStack Designate Charm:
  Fix Committed
Status in OpenStack glance charm:
  Fix Committed
Status in OpenStack heat charm:
  In Progress
Status in OpenStack keystone charm:
  Fix Committed
Status in OpenStack Manila Charm:
  Fix Committed
Status in OpenStack neutron-api charm:
  Fix Committed
Status in OpenStack neutron-gateway charm:
  Invalid
Status in OpenStack nova-cloud-controller charm:
  Fix Committed
Status in OpenStack openstack-dashboard charm:
  Fix Committed
Status in OpenStack swift-proxy charm:
  Fix Committed
Status in neutron:
  Invalid

Bug description:
  Change OpenStack API charm haproxy timeout values

haproxy-server-timeout: 9
haproxy-client-timeout: 9
haproxy-connect-timeout: 9000
haproxy-queue-timeout: 9000

  Workaround until this lands is to set these values in config:

  juju config neutron-api haproxy-server-timeout=9 haproxy-client-
  timeout=9 haproxy-queue-timeout=9000 haproxy-connect-timeout=9000

  
  --- Original Bug -
  NeutronNetworks.create_and_delete_subnets is 

[Yahoo-eng-team] [Bug 1736171] Re: create_and_delete_subnets rally test failures

2017-12-06 Thread Ryan Beisner
** Also affects: charm-neutron-gateway
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Invalid

** Changed in: charm-neutron-gateway
 Assignee: (unassigned) => David Ames (thedac)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736171

Title:
  create_and_delete_subnets rally test failures

Status in OpenStack neutron-gateway charm:
  New
Status in neutron:
  Invalid

Bug description:
  NeutronNetworks.create_and_delete_subnets is failing when run with
  concurrency greater than 1.

  Here's a snippet of a failure: http://paste.ubuntu.com/25927074/

  Here is my rally yaml: http://paste.ubuntu.com/26112719/

  This is happening using Pike on Xenial, from the Ubuntu Cloud Archive.
  The deployment is distributed across 9 nodes, with HA services.

  For now we have adjusted our test scenario to be more realistic. When we
  spread the test over 30 tenants instead of 3, and simulate 2 users per
  tenant instead of 3, we do not hit the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-gateway/+bug/1736171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352256] Please test proposed package

2017-09-26 Thread Ryan Beisner
Hello Ashish, or anyone else affected,

Accepted horizon into kilo-proposed. The package will build now and be
available in the Ubuntu Cloud Archive in a few hours, and then in the
-proposed repository.

Please help us by testing this new package. To enable the -proposed
repository:

  sudo add-apt-repository cloud-archive:kilo-proposed
  sudo apt-get update

Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested, and change the tag
from verification-kilo-needed to verification-kilo-done. If it does not
fix the bug for you, please add a comment stating that, and change the
tag to verification-kilo-failed. In either case, details of your testing
will help us make a better decision.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in
advance!

** Changed in: cloud-archive/kilo
   Status: Fix Released => Fix Committed

** Tags added: verification-kilo-needed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1352256

Title:
  Uploading a new object fails with Ceph as object storage backend using
  RadosGW

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive kilo series:
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  While uploading a new object using Horizon, with Ceph as the object
  storage backend, the upload fails with the error message "Error: Unable
  to upload object".

  Ceph Release : Firefly

  Error in horizon_error.log:

  
  [Wed Jul 23 09:04:46.840751 2014] [:error] [pid 30045:tid 140685813683968] 
INFO:urllib3.connectionpool:Starting new HTTP connection (1): 
firefly-master.ashish.com
  [Wed Jul 23 09:04:46.842984 2014] [:error] [pid 30045:tid 140685813683968] 
WARNING:urllib3.connectionpool:HttpConnectionPool is full, discarding 
connection: firefly-master.ashish.com
  [Wed Jul 23 09:04:46.843118 2014] [:error] [pid 30045:tid 140685813683968] 
REQ: curl -i http://firefly-master.ashish.com/swift/v1/new-cont-dash/test -X 
PUT -H "X-Auth-Token: 91fc8466ce17e0d22af86de9b3343b2d"
  [Wed Jul 23 09:04:46.843227 2014] [:error] [pid 30045:tid 140685813683968] 
RESP STATUS: 411 Length Required
  [Wed Jul 23 09:04:46.843584 2014] [:error] [pid 30045:tid 140685813683968] 
RESP HEADERS: [('date', 'Wed, 23 Jul 2014 09:04:46 GMT'), ('content-length', 
'238'), ('content-type', 'text/html; charset=iso-8859-1'), ('connection', 
'close'), ('server', 'Apache/2.4.7 (Ubuntu)')]
  [Wed Jul 23 09:04:46.843783 2014] [:error] [pid 30045:tid 140685813683968] 
RESP BODY: 
  [Wed Jul 23 09:04:46.843907 2014] [:error] [pid 30045:tid 140685813683968] 

  [Wed Jul 23 09:04:46.843930 2014] [:error] [pid 30045:tid 140685813683968] 
411 Length Required
  [Wed Jul 23 09:04:46.843937 2014] [:error] [pid 30045:tid 140685813683968] 

  [Wed Jul 23 09:04:46.843944 2014] [:error] [pid 30045:tid 140685813683968] 
Length Required
  [Wed Jul 23 09:04:46.843951 2014] [:error] [pid 30045:tid 140685813683968] 
A request of the requested method PUT requires a valid Content-length.
  [Wed Jul 23 09:04:46.843957 2014] [:error] [pid 30045:tid 140685813683968] 

  [Wed Jul 23 09:04:46.843963 2014] [:error] [pid 30045:tid 140685813683968] 

  [Wed Jul 23 09:04:46.843969 2014] [:error] [pid 30045:tid 140685813683968]
  [Wed Jul 23 09:04:46.844530 2014] [:error] [pid 30045:tid 140685813683968] 
Object PUT failed: http://firefly-master.ashish.com/swift/v1/new-cont-dash/test 
411 Length Required  [first 60 chars of response] 
  [Wed Jul 23 09:04:46.844555 2014] [:error] [pid 30045:tid 140685813683968] 
http://firefly-master.ashish.com/swift/v1/new-cont-dash/test 411 Length 
Required  [first 60 chars of response] 
  [Wed Jul 23 09:04:46.844607 2014] [:error] [pid 30045:tid 140685813683968] 
http://firefly-master.ashish.com/swift/v1/new-cont-dash/test 411 Length 
Required  [first 60 chars of response] 
  [Wed Jul 23 09:04:46.844900 2014] [:error] [pid 30045:tid 140685813683968] 

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1352256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382079] Please test proposed package

2017-09-26 Thread Ryan Beisner
Hello Thiago, or anyone else affected,

Accepted horizon into kilo-proposed. The package will build now and be
available in the Ubuntu Cloud Archive in a few hours, and then in the
-proposed repository.

Please help us by testing this new package. To enable the -proposed
repository:

  sudo add-apt-repository cloud-archive:kilo-proposed
  sudo apt-get update

Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested, and change the tag
from verification-kilo-needed to verification-kilo-done. If it does not
fix the bug for you, please add a comment stating that, and change the
tag to verification-kilo-failed. In either case, details of your testing
will help us make a better decision.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in
advance!

** Changed in: cloud-archive/kilo
   Status: Fix Released => Fix Committed

** Tags removed: verification-kilo-done
** Tags added: verification-kilo-needed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1382079

Title:
  [SRU] Project selector not working

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive kilo series:
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  Fix Released
Status in horizon source package in Vivid:
  Won't Fix
Status in horizon source package in Wily:
  Fix Released
Status in horizon source package in Xenial:
  Fix Released

Bug description:
  [Impact]

   * Not able to switch projects by the project dropdown list.

  [Test Case]

  1 - enable Identity V3 in local_settings.py
  2 - Log in on Horizon
  3 - make sure that the SESSION_BACKEND is "signed_cookies"
  4 - Try to change project on the dropdown

  [Regression Potential]

   * None

  When you try to select a new project in the project dropdown, the
  project doesn't change. The commit below introduced this bug on
  Horizon's master and passed the test verifications.

  
https://github.com/openstack/horizon/commit/16db58fabad8934b8fbdfc6aee0361cc138b20af

  From what I've found so far, the context received in the decorator
  seems to be the old context, with the token for the previous project.
  When the decorator is taken out, the "can_access" function receives
  the correct context, with the token for the new project.

  Steps to reproduce:

  1 - Enable Identity V3 (to have a huge token)
  2 - Log in on Horizon (lots of permissions loaded on session)
  3 - Certify that you SESSION_BACKEND is "signed_cookies"
  4 - Try to change project on the dropdown

  The project shall remain the same.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1382079/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1668410] Re: [SRU] Infinite loop trying to delete deleted HA router

2017-09-19 Thread Ryan Beisner
This bug was fixed in the package neutron - 2:8.4.0-0ubuntu5~cloud0
---

 neutron (2:8.4.0-0ubuntu5~cloud0) trusty-mitaka; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 neutron (2:8.4.0-0ubuntu5) xenial; urgency=medium
 .
   * d/p/l3-ha-don-t-send-routers-without-_ha_interface.patch: Backport fix for
 l3 ha: don't send routers without '_ha_interface' (LP: #1668410)


** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1668410

Title:
  [SRU] Infinite loop trying to delete deleted HA router

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in neutron:
  In Progress
Status in OpenStack Security Advisory:
  Won't Fix
Status in neutron package in Ubuntu:
  Invalid
Status in neutron source package in Xenial:
  Fix Released

Bug description:
  [Description]

  When deleting a router the logfile is filled up. See full log -
  http://paste.ubuntu.com/25429257/

  I can see that the error 'Error while deleting router
  c0dab368-5ac8-4996-88c9-f5d345a774a6' occurred 3343386 times, coming from
  _safe_router_removed() [1]:

  $ grep -r 'Error while deleting router c0dab368-5ac8-4996-88c9-f5d345a774a6' 
|wc -l
  3343386

  _safe_router_removed() is invoked at L488 [2]; if it fails it returns
  False, and self._resync_router(update) [3] then causes
  _safe_router_removed() to be run again and again. That is why we see so
  many 'Error while deleting router X' errors.

  [1] 
https://github.com/openstack/neutron/blob/mitaka-eol/neutron/agent/l3/agent.py#L361
  [2] 
https://github.com/openstack/neutron/blob/mitaka-eol/neutron/agent/l3/agent.py#L488
  [3] 
https://github.com/openstack/neutron/blob/mitaka-eol/neutron/agent/l3/agent.py#L457

  [Test Case]

  This is caused by a race condition between the neutron server and the L3
  agent: after the neutron server deletes the HA interfaces, the L3 agent may
  sync an HA router that has no HA interface info (it just needs to hit
  L708 [1] after the HA interfaces are deleted and before the HA router is
  deleted). If the HA router is deleted at that moment, the problem occurs.
  The test case is therefore designed as below:

  1, First update fixed package, and restart neutron-server by 'sudo
  service neutron-server restart'

  2, Create ha_router

  neutron router-create harouter --ha=True

  3, Delete ports associated with ha_router before deleting ha_router

  neutron router-port-list harouter |grep 'HA port' |awk '{print $2}' |xargs -l 
neutron port-delete
  neutron router-port-list harouter

  4, Update ha_router to trigger l3-agent to update ha_router info
  without ha_port into self.router_info

  neutron router-update harouter --description=test

  5, Delete ha_router this time

  neutron router-delete harouter

  [1] https://github.com/openstack/neutron/blob/mitaka-
  eol/neutron/db/l3_hamode_db.py#L708

  [Regression Potential]

  With the fix [1], neutron-server no longer returns an ha_router that is
  missing ha_ports, so L488 no longer has a chance to call
  _safe_router_removed() for such a router. The problem is fixed at its
  root, so there is no regression potential.

  Besides, this fix is already in the mitaka-eol branch, and the mitaka
  neutron-server package is based on neutron 8.4.0, so we need to backport
  it to xenial and mitaka.

  $ git tag --contains 8c77ee6b20dd38cc0246e854711cb91cffe3a069
  mitaka-eol

  [1] https://review.openstack.org/#/c/440799/2/neutron/db/l3_hamode_db.py
  [2] 
https://github.com/openstack/neutron/blob/mitaka-eol/neutron/agent/l3/agent.py#L488

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1668410/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1614054] Re: [SRU] Incorrect host cpu is given to emulator threads when cpu_realtime_mask flag is set

2017-08-22 Thread Ryan Beisner
This bug was fixed in the package nova - 2:13.1.4-0ubuntu2~cloud0
---

 nova (2:13.1.4-0ubuntu2~cloud0) trusty-mitaka; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 nova (2:13.1.4-0ubuntu2) xenial; urgency=medium
 .
   * d/p/libvirt-fix-incorrect-host-cpus-giving-to-emulator-t.patch:
 Backport fix for cpu pinning libvirt config incorrect emulator
 pin cpuset (LP: #1614054).


** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1614054

Title:
  [SRU] Incorrect host cpu is given to emulator threads when
  cpu_realtime_mask flag is set

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Fix Committed
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  Fix Released

Bug description:
  [Impact]

  This bug affects users of Openstack Nova who want to create instances
  that will leverage the realtime functionality that libvirt/qemu offers
  by, amongst other things, pinning guest vcpus and qemu emulator
  threads to specific pcpus. Nova provides the means for the user to
  control, via the flavor property hw:cpu_realtime_mask or image property
  hw_cpu_realtime_mask, which physical cpus these resources will be pinned
  to. This mask allows you to mask the set of N pins that Nova selects
  such that 1 or more of your vcpus can be declared "real-time" by
  ensuring that they do not have emulator threads also pinned to them.
  The remaining "non-realtime" vcpus will have vcpu and emulator threads
  colocated. The fix addresses the case where, e.g., you have a guest with
  2 vcpus (logically 0 and 1), Nova selects pcpus 14 and 22, and you use
  mask ^0 to indicate that you want all but the first vcpu to be
  realtime. This should result in the following being present in your
  libvirt xml for the guest:

    
  
  
  
  
    

  But currently (only on Mitaka, since it does not have this patch) you will
  get this:

    
  
  
  
  
    

  i.e. Nova will always set the emulator pin to the id of the vcpu
  instead of the corresponding pcpu that the vcpu is pinned to.

  In terms of actual impact this could result in vcpus that are supposed
  to be isolated not being so and therefore not behaving as expected.

  [Test Case]

   * deploy openstack mitaka and configure nova.conf with
     vcpu_pin_set=0,1,2,3

     https://pastebin.ubuntu.com/25133260/

   * configure compute host kernel opts with "isolcpus=0,1,2,3" + reboot

   * create flavor with:

     openstack flavor create --public --ram 2048 --disk 10 --vcpus 2 --swap 0 
test_flavor
     openstack flavor set --property hw:cpu_realtime_mask='^0' test_flavor
     openstack flavor set --property hw:cpu_policy=dedicated test_flavor
     openstack flavor set --property hw:cpu_thread_policy=prefer test_flavor
     openstack flavor set --property hw:cpu_realtime=yes test_flavor

   * boot an instance with the ^^ flavor

   * check that the libvirt xml for the vm has the correct emulator pin
     cpuset (see the check sketched below)
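
  A hedged way to do that last check from the compute host (the domain name
  is a placeholder; instance domains are usually named instance-NNNNNNNN):

    # dump the generated cputune section and the live emulator/vcpu pinning
    virsh dumpxml instance-00000001 | grep -A6 '<cputune>'
    virsh emulatorpin instance-00000001
    virsh vcpupin instance-00000001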

  [Regression Potential]

  Since the patch being backported only touches the specific aread of
  code that was causing the original problem  and that code only serves
  to select cpusets based on flavor filters, i can't think of any
  regressions that it would introduce. However, one potential side
  effect/change to be aware of is that once nova-compute is upgraded to
  this newer version, any new instances created will have the
  correct/expected cpuset assignments whereas instances created prior to
  upgrade will remain unchanged i.e. they will all likely still have
  their emulation threads pinned to the wrong pcpu. In terms of side
  effects this will mean less load on the pcpu that was previously
  incorrectly chosen for existing guests but it will mean that older
  instances will need to be recreated in order to benefit from the fix.

  

  Description of problem:
  When using the cpu_realtime and cpu_realtime_mask flags to create a new
  instance, the 'cpuset' of the 'emulatorpin' option uses the id of the
  vcpu, which is incorrect. The id of the host cpu should be used here.

  e.g.
    
  
  
    ### the cpuset should be '2' here, 
when cpu_realtime_mask=^0.
  
    

  How reproducible:
  Boot new instance with cpu_realtime_mask flavor.

  Steps to Reproduce:
  1. Create RT flavor
  nova flavor-create m1.small.performance 6 2048 20 2
  nova flavor-key m1.small.performance set hw:cpu_realtime=yes
  nova flavor-key m1.small.performance set hw:cpu_realtime_mask=^0
  nova flavor-key m1.small.performance set hw:cpu_policy=dedicated
  2. Boot a instance with this flavor
  3. Check the xml of the new instance

  Actual results:
  

[Yahoo-eng-team] [Bug 1694537] Re: Instance creation fails with SSL, keystone v3

2017-05-30 Thread Ryan Beisner
** Also affects: charm-nova-cloud-controller
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1694537

Title:
  Instance creation fails with SSL, keystone v3

Status in OpenStack nova-cloud-controller charm:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  We can create volumes, networks, etc in an Ocata deployment using SSL,
  but launching an instance fails with the following error in horizon:
  https://pastebin.canonical.com/189552/ and an associated error in
  nova-cloud-controller's apache2 nova-placement error log:
  https://pastebin.canonical.com/189547/

  This seems to be a communication issue between the nova scheduler and
  the nova placement api.

  Steps to remedy taken so far:
  - Clearing the rabbitmq queue
  - Bouncing the rabbitmq services
  - Bouncing the apache2 services on nova-c-c and keystone
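
  Given the suspected scheduler-to-placement communication issue above, a
  couple of quick checks are usually worth doing (a hedged sketch; the
  endpoint URL is a placeholder):

    # confirm the placement endpoints registered in keystone use the
    # expected https URLs, and that the certificate is accepted
    openstack endpoint list --service placement
    curl -v https://<placement-endpoint>:8778/
    # the scheduler log shows the placement requests it makes
    sudo grep -i placement /var/log/nova/nova-scheduler.log | tail -50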

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1694537/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1673467] Re: [ocata] unsupported configuration: CPU mode 'host-model' for aarch64 kvm domain on aarch64 host is not supported by hypervisor

2017-04-03 Thread Ryan Beisner
** Also affects: charm-nova-compute
   Importance: Undecided
   Status: New

** Changed in: charm-nova-compute
   Status: New => Confirmed

** Changed in: charm-nova-compute
   Importance: Undecided => High

** Changed in: charm-nova-compute
 Assignee: (unassigned) => Ryan Beisner (1chb1n)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1673467

Title:
  [ocata] unsupported configuration: CPU mode 'host-model' for aarch64
  kvm domain on aarch64 host is not supported by hypervisor

Status in OpenStack nova-compute charm:
  Confirmed
Status in OpenStack Compute (nova):
  New
Status in libvirt package in Ubuntu:
  Incomplete

Bug description:
  We hit this error in Ocata while trying to launch an arm64 instance:

  2017-03-16 08:01:42.329 144245 ERROR nova.virt.libvirt.guest 
[req-2ad2d5d9-696d-4baa-a071-756e460ca3de 8f431f83f7e44ef1a084e7e27b40a685 
a904dd389c5d4817a4d95b8f3268cf4d - - -] Error launching a defined domain with 
XML: 
instance-0001
220bec1b-8907-4da9-9862-9cc2354abf39

  http://openstack.org/xmlns/libvirt/nova/1.0;>

guestOS-test-arm64-kvm-xenial-ci_oil_slave14_0
2017-03-16 08:01:38

  2048
  20
  0
  0
  1


  admin
  admin


  

2097152
2097152
1

  1024


  hvm
  /usr/share/AAVMF/AAVMF_CODE.fd
  /var/lib/libvirt/qemu/nvram/instance-0001_VARS.fd
  


  
  
  


  
  


  
  

destroy
restart
destroy

  /usr/bin/kvm
  




  
  
  





  
  


  
  


  
  


  

  
   
  2017-03-16 08:01:42.333 144245 ERROR nova.virt.libvirt.driver 
[req-2ad2d5d9-696d-4baa-a071-756e460ca3de 8f431f83f7e44ef1a084e7e27b40a685 
a904dd389c5d4817a4d95b8f3268cf4d - - -] [instance: 
220bec1b-8907-4da9-9862-9cc2354abf39] Failed to start libvirt guest
  2017-03-16 08:01:43.522 144245 ERROR nova.compute.manager [instance: 
220bec1b-8907-4da9-9862-9cc2354abf39]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1930, in 
_build_and_run_instance
  2017-03-16 08:01:43.522 144245 ERROR nova.compute.manager [instance: 
220bec1b-8907-4da9-9862-9cc2354abf39] block_device_info=block_device_info)
  2017-03-16 08:01:43.522 144245 ERROR nova.compute.manager [instance: 
220bec1b-8907-4da9-9862-9cc2354abf39]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2688, in 
spawn
  2017-03-16 08:01:43.522 144245 ERROR nova.compute.manager [instance: 
220bec1b-8907-4da9-9862-9cc2354abf39] destroy_disks_on_failure=True)
  2017-03-16 08:01:43.522 144245 ERROR nova.compute.manager [instance: 
220bec1b-8907-4da9-9862-9cc2354abf39]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5099, in 
_create_domain_and_network
  2017-03-16 08:01:43.522 144245 ERROR nova.compute.manager [instance: 
220bec1b-8907-4da9-9862-9cc2354abf39] destroy_disks_on_failure)
  2017-03-16 08:01:43.522 144245 ERROR nova.compute.manager [instance: 
220bec1b-8907-4da9-9862-9cc2354abf39]   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  2017-03-16 08:01:43.522 144245 ERROR nova.compute.manager [instance: 
220bec1b-8907-4da9-9862-9cc2354abf39] self.force_reraise()
  2017-03-16 08:01:43.522 144245 ERROR nova.compute.manager [instance: 
220bec1b-8907-4da9-9862-9cc2354abf39]   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2017-03-16 08:01:43.522 144245 ERROR nova.compute.manager [instance: 
220bec1b-8907-4da9-9862-9cc2354abf39] six.reraise(self.type_, self.value, 
self.tb)
  2017-03-16 08:01:43.522 144245 ERROR nova.compute.manager [instance: 
220bec1b-8907-4da9-9862-9cc2354abf39]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5071, in 
_create_domain_and_network
  2017-03-16 08:01:43.522 144245 ERROR nova.compute.manager [instance: 
220bec1b-8907-4da9-9862-9cc2354abf39] post_xml_callback=post_xml_callback)
  2017-03-16 08:01:43.522 144245 ERROR nova.compute.manager [instance: 
220bec1b-8907-4da9-9862-9cc2354abf39]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4989, in 
_create_domain
  2017-03-16 08:01:43.522 144245 ERROR nova.compute.manager [instance: 
220bec1b-8907-4da9-9862-9cc2354abf39] guest.launch(pause=pause)
  2017-03-16 08:01:43.522 144245 ERROR nova.compute.manager [instance: 
220bec1b-8907-4da9-9862-9cc2354abf39] self._encoded_xml, errors='ignore'

[Yahoo-eng-team] [Bug 1667033] Re: nova instance console log empty

2017-02-22 Thread Ryan Beisner
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1667033

Title:
  nova instance console log empty

Status in OpenStack Compute (nova):
  New
Status in libvirt package in Ubuntu:
  New
Status in nova package in Ubuntu:
  Triaged

Bug description:
  Nova instance console log is empty on Xenial-Ocata with libvirt
  2.5.0-3ubuntu2~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1667033/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1616240] Re: Traceback in vif.py execv() arg 2 must contain only strings

2017-01-12 Thread Ryan Beisner
This bug was fixed in the package python-oslo.privsep - 1.13.0-0ubuntu1.1~cloud0
---

 python-oslo.privsep (1.13.0-0ubuntu1.1~cloud0) xenial-newton; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 python-oslo.privsep (1.13.0-0ubuntu1.1) yakkety; urgency=medium
 .
   * d/p/deal-with-conf-config-dir.patch: Cherry pick patch from upstream
 stable/newton branch to properly handle CONF.config_dir (LP: #1616240).


** Changed in: cloud-archive/newton
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1616240

Title:
  Traceback in vif.py execv() arg 2 must contain only strings

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in oslo.privsep:
  Fix Released
Status in python-oslo.privsep package in Ubuntu:
  Fix Released
Status in python-oslo.privsep source package in Yakkety:
  Fix Released
Status in python-oslo.privsep source package in Zesty:
  Fix Released

Bug description:
  While bringing up a VM with the latest master (August 23, 2016) I see
  this traceback and the VM fails to launch.

  Complete log is here: http://paste.openstack.org/show/562688/
  nova.conf used is here: http://paste.openstack.org/show/562757/

  The issue is 100% reproducible in my testbed.

  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager 
[req-81060644-0dd7-453c-a68c-0d9cffe28fe7 3d1cd826f71a49cc81b33e85329f94b3 
f738285a670c4be08d8a5e300aa25504 - - -] [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] Instance failed to spawn
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] Traceback (most recent call last):
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/manager.py", line 
2075, in _build_resources
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] yield resources
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/manager.py", line 
1919, in _build_and_run_instance
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] block_device_info=block_device_info)
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 2583, in spawn
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] post_xml_callback=gen_confdrive)
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 4803, in _create_domain_and_network
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] self.plug_vifs(instance, network_info)
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 684, in plug_vifs
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] self.vif_driver.plug(instance, vif)
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", 
line 801, in plug
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] self._plug_os_vif(instance, vif_obj)
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", 
line 783, in _plug_os_vif
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] raise exception.NovaException(msg)
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] NovaException: Failure running os_vif 
plugin plug method: Failed to plug VIF 
VIFBridge(active=False,address=fa:16:3e:c0:4a:fd,bridge_name='qbrb7b522a4-3f',has_traffic_filtering=True,id=b7b522a4-3faa-42ca-8e0f-d8c241432e1f,network=Network(f32fdde6-bb99-4981-926b-a7df30f0a612),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=True,vif_name='tapb7b522a4-3f').
 Got 
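
  A note on the error named in the title: os.execv() requires every element
  of the argument list to be a string, so any non-string element (such as an
  unset or list-valued configuration option finding its way into the privsep
  daemon's command line) aborts the exec with this TypeError. A minimal,
  self-contained illustration, not the oslo.privsep code:

    import os

    argv = ['/bin/echo', 'hello', None]   # None stands in for an unset option
    try:
        # execv() never returns on success; here it raises before replacing
        # the process because argv contains a non-string element.
        os.execv(argv[0], argv)
    except TypeError as exc:
        print(exc)   # on Python 2: "execv() arg 2 must contain only strings"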

[Yahoo-eng-team] [Bug 1573073] Re: [SRU] When router has no ports _process_updated_router fails because the namespace does not exist

2017-01-05 Thread Ryan Beisner
This bug was fixed in the package neutron - 2:8.3.0-0ubuntu1.2~cloud0
---

 neutron (2:8.3.0-0ubuntu1.2~cloud0) trusty-mitaka; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 neutron (2:8.3.0-0ubuntu1.2) xenial; urgency=medium
 .
   * d/p/check-namespace-before-getting-devices.patch: Cherry-pick patch
 from upstream stable/mitaka branch to check if router namespace exists
 before getting devices (LP: #1573073).


** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1573073

Title:
  [SRU] When router has no ports _process_updated_router fails because
  the namespace does not exist

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive liberty series:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in neutron:
  In Progress
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  Fix Released
Status in neutron source package in Yakkety:
  Fix Released
Status in neutron source package in Zesty:
  Fix Released

Bug description:
  [Description]
  [Testcase]
  Happens in Kilo. Cannot test on other releases.

  Steps to reproduce:

  1) create a router and set at least a port, also the gateway is fine
  2) check that the namespace exists with
     ip netns show | grep qrouter-
  3) check the ports are there
     ip netns exec qrouter- ip addr show
  4) delete all ports from the router
  5) check that only loopback interface is present
     ip netns exec qrouter- ip addr show
  6) run the cronjob task that is installed in the file
     /etc/cron.d/neutron-l3-agent-netns-cleanup
  so basically run this command:
     /usr/bin/neutron-netns-cleanup --config-file=/etc/neutron/neutron.conf 
--config-file=/etc/neutron/l3_agent.ini
  7) the namespace should be gone:
     ip netns show | grep qrouter-
  8) delete the neutron router.
  9) check log file /var/log/neutron/vpn-agent.log

  When the router has no ports, the namespace is deleted from the network
  node by the cronjob. However, this breaks the router updates and the
  vpn-agent.log file is flooded with these traces:

  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Traceback 
(most recent call last):
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/common/utils.py", line 343, in call
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info return 
func(*args, **kwargs)
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 628, 
in process
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info 
self._process_internal_ports()
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 404, 
in _process_internal_ports
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info 
existing_devices = self._get_existing_devices()
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 328, 
in _get_existing_devices
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info ip_devs 
= ip_wrapper.get_devices(exclude_loopback=True)
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 102, in 
get_devices
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info 
log_fail_as_error=self.log_fail_as_error
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 137, in 
execute
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info raise 
RuntimeError(m)
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info RuntimeError:
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Command: 
['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'qrouter-8fc0f640-35bb-4d0b-bbbd-80c22be0e762', 'find', 
'/sys/class/net', '-maxdepth', '1', '-type', 'l', '-printf', '%f ']
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Exit code: 1
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Stdin:
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Stdout:
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Stderr: 
Cannot open network namespace "qrouter-8fc0f640-35bb-4d0b-bbbd-80c22be0e762": 
No such file or directory
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info
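
  The cherry-picked patch referenced in the changelog above addresses this by
  checking whether the router's namespace still exists before listing its
  devices, instead of letting the "ip netns exec ... find" call fail. A
  minimal sketch of that guard, using hypothetical names rather than the
  exact upstream diff:

    from neutron.agent.linux import ip_lib

    def get_existing_devices(ns_name):
        # The netns-cleanup cron job may already have removed the namespace
        # of a port-less router; skip enumeration instead of raising.
        ip_wrapper = ip_lib.IPWrapper(namespace=ns_name)
        if not ip_wrapper.netns.exists(ns_name):
            return []
        return [dev.name for dev in
                ip_wrapper.get_devices(exclude_loopback=True)]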
  

[Yahoo-eng-team] [Bug 1573073] Re: [SRU] When router has no ports _process_updated_router fails because the namespace does not exist

2017-01-05 Thread Ryan Beisner
This bug was fixed in the package neutron - 2:9.0.0-0ubuntu1.16.10.2~cloud0
---

 neutron (2:9.0.0-0ubuntu1.16.10.2~cloud0) xenial-newton; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 neutron (2:9.0.0-0ubuntu1.16.10.2) yakkety; urgency=medium
 .
   * d/p/check-namespace-before-getting-devices.patch: Cherry-pick patch
 from upstream stable/newton branch to check if router namespace exists
 before getting devices (LP: #1573073).


** Changed in: cloud-archive/newton
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1573073

Title:
  [SRU] When router has no ports _process_updated_router fails because
  the namespace does not exist

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive liberty series:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in neutron:
  In Progress
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  Fix Released
Status in neutron source package in Yakkety:
  Fix Released
Status in neutron source package in Zesty:
  Fix Released

Bug description:
  [Description]
  [Testcase]
  Happens in Kilo. Cannot test on other releases.

  Steps to reproduce:

  1) create a router and set at least a port, also the gateway is fine
  2) check that the namespace exists with
     ip netns show | grep qrouter-
  3) check the ports are there
     ip netns exec qrouter- ip addr show
  4) delete all ports from the router
  5) check that only loopback interface is present
     ip netns exec qrouter- ip addr show
  6) run the cronjob task that is installed in the file
     /etc/cron.d/neutron-l3-agent-netns-cleanup
  so basically run this command:
     /usr/bin/neutron-netns-cleanup --config-file=/etc/neutron/neutron.conf 
--config-file=/etc/neutron/l3_agent.ini
  7) the namespace should be gone:
     ip netns show | grep qrouter-
  8) delete the neutron router.
  9) check log file /var/log/neutron/vpn-agent.log

  When the router has no ports, the namespace is deleted from the network
  node by the cronjob. However, this breaks the router updates and the
  vpn-agent.log file is flooded with these traces:

  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Traceback 
(most recent call last):
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/common/utils.py", line 343, in call
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info return 
func(*args, **kwargs)
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 628, 
in process
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info 
self._process_internal_ports()
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 404, 
in _process_internal_ports
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info 
existing_devices = self._get_existing_devices()
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 328, 
in _get_existing_devices
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info ip_devs 
= ip_wrapper.get_devices(exclude_loopback=True)
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 102, in 
get_devices
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info 
log_fail_as_error=self.log_fail_as_error
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 137, in 
execute
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info raise 
RuntimeError(m)
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info RuntimeError:
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Command: 
['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'qrouter-8fc0f640-35bb-4d0b-bbbd-80c22be0e762', 'find', 
'/sys/class/net', '-maxdepth', '1', '-type', 'l', '-printf', '%f ']
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Exit code: 1
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Stdin:
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Stdout:
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Stderr: 
Cannot open network namespace "qrouter-8fc0f640-35bb-4d0b-bbbd-80c22be0e762": 
No such file or directory
  2016-04-21 16:22:17.771 23382 TRACE 

[Yahoo-eng-team] [Bug 1573073] Re: [SRU] When router has no ports _process_updated_router fails because the namespace does not exist

2017-01-05 Thread Ryan Beisner
This bug was fixed in the package neutron - 2:9.0.0-0ubuntu1.16.10.2~cloud0
---

 neutron (2:9.0.0-0ubuntu1.16.10.2~cloud0) xenial-newton; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 neutron (2:9.0.0-0ubuntu1.16.10.2) yakkety; urgency=medium
 .
   * d/p/check-namespace-before-getting-devices.patch: Cherry-pick patch
 from upstream stable/newton branch to check if router namespace exists
 before getting devices (LP: #1573073).


** Changed in: cloud-archive
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1573073

Title:
  [SRU] When router has no ports _process_updated_router fails because
  the namespace does not exist

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive liberty series:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in neutron:
  In Progress
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  Fix Released
Status in neutron source package in Yakkety:
  Fix Released
Status in neutron source package in Zesty:
  Fix Released

Bug description:
  [Description]
  [Testcase]
  Happens in Kilo. Cannot test on other releases.

  Steps to reproduce:

  1) create a router and set at least a port, also the gateway is fine
  2) check that the namespace exists with
     ip netns show | grep qrouter-
  3) check the ports are there
     ip netns exec qrouter- ip addr show
  4) delete all ports from the router
  5) check that only loopback interface is present
     ip netns exec qrouter- ip addr show
  6) run the cronjob task that is installed in the file
     /etc/cron.d/neutron-l3-agent-netns-cleanup
  so basically run this command:
     /usr/bin/neutron-netns-cleanup --config-file=/etc/neutron/neutron.conf 
--config-file=/etc/neutron/l3_agent.ini
  7) the namespace should be gone:
     ip netns show | grep qrouter-
  8) delete the neutron router.
  9) check log file /var/log/neutron/vpn-agent.log

  When the router has no ports, the namespace is deleted from the network
  node by the cronjob. However, this breaks the router updates and the
  vpn-agent.log file is flooded with these traces:

  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Traceback 
(most recent call last):
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/common/utils.py", line 343, in call
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info return 
func(*args, **kwargs)
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 628, 
in process
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info 
self._process_internal_ports()
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 404, 
in _process_internal_ports
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info 
existing_devices = self._get_existing_devices()
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 328, 
in _get_existing_devices
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info ip_devs 
= ip_wrapper.get_devices(exclude_loopback=True)
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 102, in 
get_devices
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info 
log_fail_as_error=self.log_fail_as_error
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 137, in 
execute
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info raise 
RuntimeError(m)
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info RuntimeError:
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Command: 
['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'qrouter-8fc0f640-35bb-4d0b-bbbd-80c22be0e762', 'find', 
'/sys/class/net', '-maxdepth', '1', '-type', 'l', '-printf', '%f ']
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Exit code: 1
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Stdin:
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Stdout:
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Stderr: 
Cannot open network namespace "qrouter-8fc0f640-35bb-4d0b-bbbd-80c22be0e762": 
No such file or directory
  2016-04-21 16:22:17.771 23382 TRACE 

[Yahoo-eng-team] [Bug 1639239] Re: ValueError for Invalid InitiatorConnector in s390

2016-11-29 Thread Ryan Beisner
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1639239

Title:
  ValueError for Invalid InitiatorConnector in s390

Status in Ubuntu Cloud Archive:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in os-brick:
  Fix Released

Bug description:
  Description
  ===
  Calling the InitiatorConnector factory results in a ValueError for 
unsupported protocols, which goes unhandled and may crash a calling service.

  Steps to reproduce
  ==
  - clone devstack
  - make stack

  Expected result
  ===
  The nova compute service should run.

  Actual result
  =
  A ValueError is thrown, which, in the case of the nova libvirt driver, is not 
handled appropriately. The compute service crashes.
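
  A hedged illustration of how a caller could tolerate the unsupported
  protocol instead of crashing (illustrative only; the released fix teaches
  os-brick and its callers about the missing connectors rather than silently
  ignoring them):

    from os_brick.initiator import connector

    def get_connector(protocol, root_helper):
        # InitiatorConnector.factory() raises ValueError for protocols that
        # have no connector on this platform (e.g. ISER on s390x); treat that
        # as "connector unavailable" rather than letting it kill the service.
        try:
            return connector.InitiatorConnector.factory(protocol, root_helper)
        except ValueError:
            return None
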

  Environment
  ===
  os|distro=kvmibm1
  os|vendor=kvmibm
  os|release=1.1.3-beta4.3
  git|cinder|master[f6ab36d]
  git|devstack|master[928b3cd]
  git|nova|master[56138aa]
  pip|os-brick|1.7.0

  Logs & Configs
  ==
  [...]
  2016-11-03 17:56:57.204 46141 INFO nova.virt.driver 
[req-fb30a5af-e87c-4ee0-903c-a5aa7d3ad5e3 - -] Loading compute driver 
'libvirt.LibvirtDriver'
  2016-11-03 17:56:57.442 46141 DEBUG os_brick.initiator.connector 
[req-fb30a5af-e87c-4ee0-903c-a5aa7d3ad5e3 - -] Factory for ISCSI on s390x 
factory /usr/lib/python2.7/site-packages/os_brick/initiator/connector.py:261
  2016-11-03 17:56:57.444 46141 DEBUG os_brick.initiator.connector 
[req-fb30a5af-e87c-4ee0-903c-a5aa7d3ad5e3 - -] Factory for ISCSI on s390x 
factory /usr/lib/python2.7/site-packages/os_brick/initiator/connector.py:261
  2016-11-03 17:56:57.445 46141 DEBUG os_brick.initiator.connector 
[req-fb30a5af-e87c-4ee0-903c-a5aa7d3ad5e3 - -] Factory for ISER on s390x 
factory /usr/lib/python2.7/site-packages/os_brick/initiator/connector.py:261
  2016-11-03 17:56:57.445 46141 CRITICAL nova 
[req-fb30a5af-e87c-4ee0-903c-a5aa7d3ad5e3 - -] ValueError: Invalid 
InitiatorConnector protocol specified ISER
  2016-11-03 17:56:57.445 46141 ERROR nova Traceback (most recent call last):
  2016-11-03 17:56:57.445 46141 ERROR nova   File "/usr/bin/nova-compute", line 
10, in 
  2016-11-03 17:56:57.445 46141 ERROR nova sys.exit(main())
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/cmd/compute.py", line 56, in main
  2016-11-03 17:56:57.445 46141 ERROR nova topic=CONF.compute_topic)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/service.py", line 216, in create
  2016-11-03 17:56:57.445 46141 ERROR nova 
periodic_interval_max=periodic_interval_max)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/service.py", line 91, in __init__
  2016-11-03 17:56:57.445 46141 ERROR nova self.manager = 
manager_class(host=self.host, *args, **kwargs)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/compute/manager.py", line 537, in __init__
  2016-11-03 17:56:57.445 46141 ERROR nova self.driver = 
driver.load_compute_driver(self.virtapi, compute_driver)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/virt/driver.py", line 1625, in load_compute_driver
  2016-11-03 17:56:57.445 46141 ERROR nova virtapi)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/usr/lib/python2.7/site-packages/oslo_utils/importutils.py", line 44, in 
import_object
  2016-11-03 17:56:57.445 46141 ERROR nova return 
import_class(import_str)(*args, **kwargs)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 356, in __init__
  2016-11-03 17:56:57.445 46141 ERROR nova self._get_volume_drivers(), 
self._host)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/virt/driver.py", line 44, in driver_dict_from_config
  2016-11-03 17:56:57.445 46141 ERROR nova driver_registry[driver_type] = 
driver_class(*args, **kwargs)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/virt/libvirt/volume/iser.py", line 34, in __init__
  2016-11-03 17:56:57.445 46141 ERROR nova transport=self._get_transport())
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/usr/lib/python2.7/site-packages/os_brick/initiator/connector.py", line 285, 
in factory
  2016-11-03 17:56:57.445 46141 ERROR nova raise ValueError(msg)
  2016-11-03 17:56:57.445 46141 ERROR nova ValueError: Invalid 
InitiatorConnector protocol specified ISER
  2016-11-03 17:56:57.445 46141 ERROR nova

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1639239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1604397] Re: [SRU] python-swiftclient is missing in requirements.txt (for glare)

2016-11-18 Thread Ryan Beisner
This bug was fixed in the package python-glance-store - 0.18.0-0ubuntu1.1~cloud0
---

 python-glance-store (0.18.0-0ubuntu1.1~cloud0) xenial-newton; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 python-glance-store (0.18.0-0ubuntu1.1) yakkety; urgency=medium
 .
   [ Corey Bryant ]
   * d/control: Add run-time dependency for python-swiftclient (LP: #1604397).
   * d/p/drop-enum34.patch: Fix python3 test failures.
 .
   [ Thomas Goirand ]
   * Fixed enum34 runtime depends.


** Changed in: cloud-archive
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1604397

Title:
  [SRU] python-swiftclient is missing in requirements.txt (for glare)

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in Glance:
  New
Status in python-glance-store package in Ubuntu:
  Fix Released
Status in python-glance-store source package in Yakkety:
  Fix Released
Status in python-glance-store source package in Zesty:
  Fix Released

Bug description:
  [Description]
  [Test Case]
  I'm using UCA glance packages (version "13.0.0~b1-0ubuntu1~cloud0").
  And I've got this error:
  <30>Jul 18 16:03:45 node-2 glance-glare[17738]: ERROR: Store swift could not 
be configured correctly. Reason: Missing dependency python_swiftclient.

  Installing "python-swiftclient" fixes the problem.

  In master
  (https://github.com/openstack/glance/blob/master/requirements.txt)
  package "python-swiftclient" is not included in requirements.txt. So
  UCA packages don't have proper dependencies.

  I think requirements.txt should be updated (add python-swiftclient
  there). This change should affect UCA packages.

  [Regression Potential]
  Minimal as this just adds a new dependency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1604397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414218] Re: Remove extraneous trace in linux/dhcp.py

2016-09-09 Thread Ryan Beisner
This bug was fixed in the package neutron - 1:2014.1.5-0ubuntu5~cloud0
---

 neutron (1:2014.1.5-0ubuntu5~cloud0) precise-icehouse; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 neutron (1:2014.1.5-0ubuntu5) trusty; urgency=medium
 .
   * Backport performance fix by refactoring logging statements. (LP: #1414218):
 - d/p/refactor-log-in-loop.patch: do not perform debug trace with each
   iteration through the loop of ports, instead log it once at the end.


** Changed in: cloud-archive/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1414218

Title:
  Remove extraneous trace in linux/dhcp.py

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive icehouse series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Trusty:
  Fix Released

Bug description:
  [Impact]

  The debug tracepoint in Dnsmasq._output_hosts_file is extraneous and
  causes unnecessary performance overhead when creating lots (> 1000)
  ports at one time.

  The trace point is unnecessary since the data is written to disk anyway
  and the file can be examined in a worst-case scenario. The added
  performance overhead is an order of magnitude (~0.5 seconds versus
  ~0.05 seconds at 1500 ports).
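
  The refactor is essentially "log once per file instead of once per port".
  A self-contained sketch of the pattern, with hypothetical names (the real
  change is in neutron's Dnsmasq._output_hosts_file):

    import logging

    LOG = logging.getLogger(__name__)

    def write_hosts_file(filename, ports, format_entry):
        # Build every host entry first, then emit a single debug line; with
        # more than ~1000 ports the per-iteration LOG.debug() calls dominate
        # the runtime of each port creation.
        with open(filename, 'w') as f:
            for port in ports:
                f.write(format_entry(port) + '\n')
        LOG.debug('Done building host file %s with %d entries',
                  filename, len(ports))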

  [Test Case]

  1. Deploy OpenStack using neutron for networking
  2. Create 1500 ports
  3. Observe the performance degradation for each port creation.

  [Regression Potential]

  Minimal. This code has been running in stable/juno, stable/kilo, and
  above for awhile.

  [Other Questions]

  This is likely to occur in OpenStack deployments which have large
  networks deployed. The degradation is gradual, but the performance
  becomes unacceptable with large enough networks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1414218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1598256] Re: neutron gateway fails to find network device names "eno1"

2016-07-07 Thread Ryan Beisner
The neutron-gateway charm is simply doing what it is told [1] ("use eth1
for the neutron external network").  As it stands, when a machine's
network device naming differs, the charm configuration value defined by
the bundle will need to be customized by the user.

The same is true of other configuration options: a user may need to
adjust values such as the admin password, osd-devices, or block device
names to fit his or her deployment environment.

[1] https://api.jujucharms.com/charmstore/v5/openstack-
base/archive/bundle.yaml

** Changed in: neutron-gateway (Juju Charms Collection)
   Status: New => Opinion

** Changed in: neutron-gateway (Juju Charms Collection)
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1598256

Title:
  neutron gateway fails to find network device names "eno1"

Status in neutron:
  Invalid
Status in neutron-gateway package in Juju Charms Collection:
  Opinion

Bug description:
  OS Version: Openstack-base-43
  Neutron-gateway: neutron-gateway-1
  MAAS: 2.0 b8
  Juju: 2.0 b10
  Series: Xenial

  Steps:
  1) Bootstrap MAAS controller
  2) juju deploy openstack-base-43

  Result:
  2016-07-01 15:20:10 INFO juju-log Adding port eth1 to bridge br-ex
  2016-07-01 15:20:10 INFO config-changed Cannot find device "eth1"
  2016-07-01 15:20:10 INFO config-changed Traceback (most recent call last):
  2016-07-01 15:20:10 INFO config-changed   File 
"/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/config-changed", line 
349, in 
  2016-07-01 15:20:10 INFO config-changed hooks.execute(sys.argv)
  2016-07-01 15:20:10 INFO config-changed   File 
"/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/charmhelpers/core/hookenv.py",
 line 717, in execute
  2016-07-01 15:20:10 INFO config-changed self._hooks[hook_name]()
  2016-07-01 15:20:10 INFO config-changed   File 
"/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/charmhelpers/contrib/openstack/utils.py",
 line 1574, in wrapped_f
  2016-07-01 15:20:10 INFO config-changed restart_functions)
  2016-07-01 15:20:10 INFO config-changed   File 
"/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/charmhelpers/core/host.py",
 line 475, in restart_on_change_helper
  2016-07-01 15:20:10 INFO config-changed r = lambda_f()
  2016-07-01 15:20:10 INFO config-changed   File 
"/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/charmhelpers/contrib/openstack/utils.py",
 line 1573, in 
  2016-07-01 15:20:10 INFO config-changed (lambda: f(*args, **kwargs)), 
restart_map, stopstart,
  2016-07-01 15:20:10 INFO config-changed   File 
"/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/charmhelpers/contrib/hardening/harden.py",
 line 81, in _harden_inner2
  2016-07-01 15:20:10 INFO config-changed return f(*args, **kwargs)
  2016-07-01 15:20:10 INFO config-changed   File 
"/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/config-changed", line 
139, in config_changed
  2016-07-01 15:20:10 INFO config-changed configure_ovs()
  2016-07-01 15:20:10 INFO config-changed   File 
"/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/neutron_utils.py", 
line 712, in configure_ovs
  2016-07-01 15:20:10 INFO config-changed add_bridge_port(EXT_BRIDGE, 
ext_port_ctx['ext_port'])
  2016-07-01 15:20:10 INFO config-changed   File 
"/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/charmhelpers/contrib/network/ovs/__init__.py",
 line 49, in add_bridge_port
  2016-07-01 15:20:10 INFO config-changed subprocess.check_call(["ip", 
"link", "set", port, "up"])
  2016-07-01 15:20:10 INFO config-changed   File 
"/usr/lib/python2.7/subprocess.py", line 541, in check_call
  2016-07-01 15:20:10 INFO config-changed raise CalledProcessError(retcode, 
cmd)
  2016-07-01 15:20:10 INFO config-changed subprocess.CalledProcessError: 
Command '['ip', 'link', 'set', u'eth1', 'up']' returned non-zero exit status 1
  2016-07-01 15:20:10 ERROR juju.worker.uniter.operation runhook.go:107 hook 
"config-changed" failed: exit status 1
  2016-07-01 15:20:10 INFO juju.worker.uniter resolver.go:107 awaiting error 
resolution for "config-changed" hook
  2016-07-01 15:20:10 DEBUG juju.worker.uniter agent.go:17 [AGENT-STATUS] 
error: hook failed: "config-changed"

  
  ubuntu@azurill:/var/log/juju$ ip link
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode 
DEFAULT group default qlen 1
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  2: eno1:  mtu 1500 qdisc mq master br-eno1 
state UP mode DEFAULT group default qlen 1000
  link/ether ec:b1:d7:7f:ff:a4 brd ff:ff:ff:ff:ff:ff
  3: eno2:  mtu 1500 qdisc mq state UP mode 
DEFAULT group default qlen 1000
  link/ether ec:b1:d7:7f:ff:a5 brd ff:ff:ff:ff:ff:ff
  4: eno3: 

[Yahoo-eng-team] [Bug 1598256] Re: neutron gateway fails to find network device names "eno1"

2016-07-07 Thread Ryan Beisner
** Also affects: neutron-gateway (Juju Charms Collection)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1598256

Title:
  neutron gateway fails to find network device names "eno1"

Status in neutron:
  Invalid
Status in neutron-gateway package in Juju Charms Collection:
  New

Bug description:
  OS Version: Openstack-base-43
  Neutron-gateway: neutron-gateway-1
  MAAS: 2.0 b8
  Juju: 2.0 b10
  Series: Xenial

  Steps:
  1) Bootstrap MAAS controller
  2) juju deploy openstack-base-43

  Result:
  2016-07-01 15:20:10 INFO juju-log Adding port eth1 to bridge br-ex
  2016-07-01 15:20:10 INFO config-changed Cannot find device "eth1"
  2016-07-01 15:20:10 INFO config-changed Traceback (most recent call last):
  2016-07-01 15:20:10 INFO config-changed   File 
"/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/config-changed", line 
349, in 
  2016-07-01 15:20:10 INFO config-changed hooks.execute(sys.argv)
  2016-07-01 15:20:10 INFO config-changed   File 
"/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/charmhelpers/core/hookenv.py",
 line 717, in execute
  2016-07-01 15:20:10 INFO config-changed self._hooks[hook_name]()
  2016-07-01 15:20:10 INFO config-changed   File 
"/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/charmhelpers/contrib/openstack/utils.py",
 line 1574, in wrapped_f
  2016-07-01 15:20:10 INFO config-changed restart_functions)
  2016-07-01 15:20:10 INFO config-changed   File 
"/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/charmhelpers/core/host.py",
 line 475, in restart_on_change_helper
  2016-07-01 15:20:10 INFO config-changed r = lambda_f()
  2016-07-01 15:20:10 INFO config-changed   File 
"/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/charmhelpers/contrib/openstack/utils.py",
 line 1573, in 
  2016-07-01 15:20:10 INFO config-changed (lambda: f(*args, **kwargs)), 
restart_map, stopstart,
  2016-07-01 15:20:10 INFO config-changed   File 
"/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/charmhelpers/contrib/hardening/harden.py",
 line 81, in _harden_inner2
  2016-07-01 15:20:10 INFO config-changed return f(*args, **kwargs)
  2016-07-01 15:20:10 INFO config-changed   File 
"/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/config-changed", line 
139, in config_changed
  2016-07-01 15:20:10 INFO config-changed configure_ovs()
  2016-07-01 15:20:10 INFO config-changed   File 
"/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/neutron_utils.py", 
line 712, in configure_ovs
  2016-07-01 15:20:10 INFO config-changed add_bridge_port(EXT_BRIDGE, 
ext_port_ctx['ext_port'])
  2016-07-01 15:20:10 INFO config-changed   File 
"/var/lib/juju/agents/unit-neutron-gateway-0/charm/hooks/charmhelpers/contrib/network/ovs/__init__.py",
 line 49, in add_bridge_port
  2016-07-01 15:20:10 INFO config-changed subprocess.check_call(["ip", 
"link", "set", port, "up"])
  2016-07-01 15:20:10 INFO config-changed   File 
"/usr/lib/python2.7/subprocess.py", line 541, in check_call
  2016-07-01 15:20:10 INFO config-changed raise CalledProcessError(retcode, 
cmd)
  2016-07-01 15:20:10 INFO config-changed subprocess.CalledProcessError: 
Command '['ip', 'link', 'set', u'eth1', 'up']' returned non-zero exit status 1
  2016-07-01 15:20:10 ERROR juju.worker.uniter.operation runhook.go:107 hook 
"config-changed" failed: exit status 1
  2016-07-01 15:20:10 INFO juju.worker.uniter resolver.go:107 awaiting error 
resolution for "config-changed" hook
  2016-07-01 15:20:10 DEBUG juju.worker.uniter agent.go:17 [AGENT-STATUS] 
error: hook failed: "config-changed"

  
  ubuntu@azurill:/var/log/juju$ ip link
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode 
DEFAULT group default qlen 1
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  2: eno1:  mtu 1500 qdisc mq master br-eno1 
state UP mode DEFAULT group default qlen 1000
  link/ether ec:b1:d7:7f:ff:a4 brd ff:ff:ff:ff:ff:ff
  3: eno2:  mtu 1500 qdisc mq state UP mode 
DEFAULT group default qlen 1000
  link/ether ec:b1:d7:7f:ff:a5 brd ff:ff:ff:ff:ff:ff
  4: eno3:  mtu 1500 qdisc mq state DOWN 
mode DEFAULT group default qlen 1000
  link/ether ec:b1:d7:7f:ff:a6 brd ff:ff:ff:ff:ff:ff
  5: eno4:  mtu 1500 qdisc mq state DOWN 
mode DEFAULT group default qlen 1000
  link/ether ec:b1:d7:7f:ff:a7 brd ff:ff:ff:ff:ff:ff
  6: br-eno1:  mtu 1500 qdisc noqueue state UP 
mode DEFAULT group default qlen 1000
  link/ether ec:b1:d7:7f:ff:a4 brd ff:ff:ff:ff:ff:ff

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1598256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to  

[Yahoo-eng-team] [Bug 1374999] Re: iSCSI volume detach does not correctly remove the multipath device descriptors

2016-05-18 Thread Ryan Beisner
This bug was fixed in the package nova - 1:2015.1.4-0ubuntu2
---

 nova (1:2015.1.4-0ubuntu2) trusty-kilo; urgency=medium
 .
   * d/p/fix-iscsi-detach.patch (LP: #1374999)
 - Clear latest path for last remaining iscsi disk to ensure
   disk is properly removed.
 .
 nova (1:2015.1.4-0ubuntu1) trusty-kilo; urgency=medium
 .
   * New upstream stable release (LP: #1580334).
   * d/p/skip-proxy-test.patch: Skip test_ssl_server and test_two_servers as
 they are hitting ProxyError during package builds.
 .
 nova (1:2015.1.3-0ubuntu1) trusty-kilo; urgency=medium
 .
   * New upstream stable release (LP: #1559215).
 .
 nova (1:2015.1.2-0ubuntu2) vivid; urgency=medium
 .
   * d/control: Bump oslo.concurrency to >= 1.8.2 (LP: #1518016).
 .
 nova (1:2015.1.2-0ubuntu1) vivid; urgency=medium
 .
   * Resynchronize with stable/kilo (68e9359) (LP: #1506058):
 - [68e9359] Fix quota update in init_instance on nova-compute restart
 - [d864603] Raise InstanceNotFound when save FK constraint fails
 - [db45b1e] Give instance default hostname if hostname is empty
 - [61f119e] Relax restrictions on server name
 - [2e731eb] Remove unnecessary 'context' param from quotas reserve method
 call
 - [5579928] Updated from global requirements
 - [08d1153] Don't expect meta attributes in object_compat that aren't in 
the
 db obj
 - [5c6f01f] VMware: pass network info to config drive.
 - [17b5052] Allow to use autodetection of volume device path
 - [5642b17] Delete orphaned instance files from compute nodes
 - [8110cdc] Updated from global requirements
 - [1f5b385] Hyper-V: Fixes serial port issue on Windows Threshold
 - [24251df] Handle FC LUN IDs greater 255 correctly on s390x architectures
 - [dcde7e7] Update obj_reset_changes signatures to match
 - [e16fcfa] Unshelving volume backed instance fails
 - [8fccffd] Make pagination tolerate a deleted marker
 - [587092c] Fix live-migrations usage of the wrong connector information
 - [8794b93] Don't check flavor disk size when booting from volume
 - [c1ad497] Updated from global requirements
 - [0b37312] Hyper-V: Removes old instance dirs after live migration
 - [2d571b1] Hyper-V: Fixes live migration configdrive copy operation
 - [07506f5] Hyper-V: Fix SMBFS volume attach race condition
 - [60356bf] Hyper-V: Fix missing WMI namespace issue on Windows 2008 R2
 - [83fb8cc] Hyper-V: Fix virtual hard disk detach
 - [6c857c2] Updated from global requirements
 - [0313351] Compute: replace incorrect instance object with dict
 - [9724d50] Don't pass the service catalog when making glance requests
 - [b5020a0] libvirt: Kill rsync/scp processes before deleting instance
 - [3f337f8] Support host type specific block volume attachment
 - [cb2a8fb] Fix serializer supported version reporting in object_backport
 - [701c889] Execute _poll_shelved_instances only if shelved_offload_time is
 > 0
 - [eb3b1c8] Fix rebuild of an instance with a volume attached
 - [e459add] Handle unexpected clear events call
 - [8280575] Support ssh-keygen of OpenSSH 6.8
 - [9a51140] Kilo-Removing extension "OS-EXT-VIF-NET" from v2.1 ext list
 - [b3f7b77] Fix wrong check when use image in local
 - [b13726b] Fix race between resource audit and cpu pinning
* debian/patches/not-check-disk-size.patch: Dropped no longer needed.
 .
 nova (1:2015.1.1-0ubuntu2) vivid; urgency=medium
 .
   [ Corey Bryant ]
   * d/rules: Prevent dh_python2 from guessing dependencies.
 .
   [ Liang Chen ]
   * d/p/not-check-disk-size.patch: Fix booting from volume error
 when flavor disk too small (LP: #1457517)
 .
 nova (1:2015.1.1-0ubuntu1) vivid; urgency=medium
 .
   * Resynchronize with stable/kilo (d8a470d) (LP: #1481008):
 - [e6e39e1] Remove incorrect Instance 1.18 relationship for PciDevice 1.2
 - [a55ea8c] Fix the incorrect PciDeviceList version number
 - [e56aed8] Add support for forcing migrate_flavor_data
 - [ccd002b] Fix migrate_flavor_data string substitution
 - [124b501] Allow libvirt cleanup completion when serial ports already 
released
 - [4908d46] Fixed incorrect dhcp_server value during nova-network creation
 - [0cf44ff] Fixed nova-network dhcp-hostsfile update during live-migration
 - [dc6af6b] libvirt: handle code=38 + sigkill (ebusy) in destroy()
 - [6e22a8b] hypervisor support matrix: add kvm on system z in kilo release
 - [e013ebf] Fix max_number for migrate_flavor data
 - [2b5fe5d] Reduce window for allocate_fixed_ip / release_fixed_ip race in 
nova-net
 - [cd6353a] Mark ironic credential config as secret
 - [48a6217] Ensure to store context in thread local after spawn/spawn_n
 - [fc7f1ab] Store context in local store after spawn_n
 - [199f0ab] Fixes TypeError when libvirt version is 
BAD_LIBVIRT_CPU_POLICY_VERSIONS
 - [1f4088d] Add 'docker' to the 

[Yahoo-eng-team] [Bug 1382079] Re: [SRU] Project selector not working

2016-05-18 Thread Ryan Beisner
This bug was fixed in the package horizon - 1:2015.1.4-0ubuntu2
---

 horizon (1:2015.1.4-0ubuntu2) trusty-kilo; urgency=medium
 .
   * d/p/remove-can-access-caching.patch (LP: #1382079): Remove session
 caching of can_access call results which was disabling the project
 selector.


** Changed in: cloud-archive/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1382079

Title:
  [SRU] Project selector not working

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive kilo series:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  Fix Released
Status in horizon source package in Vivid:
  Won't Fix
Status in horizon source package in Wily:
  Fix Released
Status in horizon source package in Xenial:
  Fix Released

Bug description:
  [Impact]

   * Not able to switch projects via the project dropdown list.

  [Test Case]

  1 - enable Identity V3 in local_settings.py
  2 - Log in on Horizon
  3 - make sure that the SESSION_BACKEND is "signed_cookies"
  4 - Try to change project on the dropdown

  [Regression Potential]

   * None

  When you try to select a new project on the project dropdown, the
  project doesn't change. The commit below introduced this bug on
  Horizon's master and passed the test verifications.

  
https://github.com/openstack/horizon/commit/16db58fabad8934b8fbdfc6aee0361cc138b20af

  From what I've found so far, the context received in the decorator
  seems to be the old context, with the token for the previous project.
  When the decorator is removed, the "can_access" function receives the
  correct context, with the token for the new project.
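
  An illustrative sketch (not Horizon's actual code) of the kind of session
  caching the fix removes, and why it breaks project switching:

    import functools

    def session_cached(func):
        # Memoise can_access() results in the session, keyed only by the
        # policy rule name. Because the key ignores which project token the
        # check was evaluated against, the answer cached for the previous
        # project is replayed after the user switches projects, so the
        # selector appears to do nothing.
        @functools.wraps(func)
        def wrapper(request, rule):
            cache = request.session.setdefault('allowed', {})
            if rule not in cache:
                cache[rule] = func(request, rule)
            return cache[rule]
        return wrapper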

  Steps to reproduce:

  1 - Enable Identity V3 (to have a huge token)
  2 - Log in on Horizon (lots of permissions loaded on session)
  3 - Verify that your SESSION_BACKEND is "signed_cookies"
  4 - Try to change project on the dropdown

  The project shall remain the same.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1382079/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp