[Yahoo-eng-team] [Bug 1737879] Re: 2017-12-13 00:20:15.681+0000: shutting down, reason=crashed

2017-12-19 Thread James Page
** Project changed: charm-nova-compute => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1737879

Title:
  2017-12-13 00:20:15.681+0000: shutting down, reason=crashed

Status in OpenStack Compute (nova):
  New

Bug description:
  1) vm shutdown info:
  43 2017-12-13 00:20:15.681+: shutting down, reason=crashed

  2) nova start vm info:
  2017-12-13 00:24:07.337+: starting up libvirt version: 3.2.0, 
package: 14.el7_4.3 (CentOS BuildSystem , 
2017-09-07-11:27:44, c1bm.rdu2.centos.org), qemu version: 1.5.3 
(qemu-kvm-1.5.3-141.el7_4.2), hostname: l23-41-5 45 LC_ALL=C 
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none 
/usr/libexec/qemu-kvm -name instance-005c -S -machine 
pc-i440fx-rhel7.0.0,accel=kvm,usb=off,dump-guest-core=off -m 16384 
-realtime mlock=off -smp 16,sockets=16,cores=16,threads=1 -uuid 
fc0cc515-5f7b-420c-bcda-d77b66610af2 -smbios 'type=1,manufacturer=Fedora 
Project,product=OpenStack 
Nova,version=13.1.2-1.el7,serial=f7f54ab4-0b4f-499c-8d1d-291e11f50bf2,uuid=fc0cc515-5f7b-420c-bcda-d77b66610af2,family=Virtual
 Machine' -no-user-config -nodefaults -chardev socket,id=cha
rmonitor,path=/var/lib/libvirt/qemu/domain-37-instance-005c/monitor.sock,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=control -rtc 
base=localtime,driftfix=slew -global kvm-pit.lost_tick_policy=delay 
-no-hpet -no-shutdown -boot strict=on -device 
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive 
'file=rbd:nova_instances/fc0cc515-5f7b-420c-bcda-d77b66610af2_disk:id=cinder:key=AQBNLRRZswapABAAh6GCDOkWFqWj5uYAMVgIKA==:auth_supported=cephx\;none:mon_host=10.211.41.4\:6789\;10.211.41.6\:6789\;10.211.41.8\:6789,format=raw,if=none,id=drive-virtio-disk0,cache=writeback,throttling.bps-read=52425500,throttling.bps-write=52425500' 
-device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -drive 
'file=rbd:cinder_volumes/volume-cd2fcd54-0661-4a4d-b13d-419b6585c676:id=cinder:key=AQBNLRRZswapABAAh6GCDOkWFqWj5uYAMVgIKA==:auth_supported=cephx\;none:mon_host=10.211.41.4\:6789\;10.211.41.6\:6789\;10.211.41.8\:6789,format=raw,if=none,id=drive-virtio-disk1,serial=cd2fcd54-0661-4a4d-b13d-419b6585c676,cache=writeback'
 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=
drive-virtio-disk1,id=virtio-disk1 -drive 
'file=rbd:nova_instances/fc0cc515-5f7b-420c-bcda-d77b66610af2_disk.config:id=cinder:key=AQBNLRRZswapABAAh6GCDOkWFqWj5uYAMVgIKA==:auth_supported=cephx\;none:mon_host=10.211.41.4\:6789\;10.211.41.6\:6789\;10.211.41.8\:6789,format=raw,if=none,id=drive-ide0-1-1,readonly=on,cache=writeback,throttling.bps-read=52425500,throttling.bps-write=52425500' -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 
-netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=39 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:6
0:0b:06,bus=pci.0,addr=0x3 -chardev 
file,id=charserial0,path=/data/nova/instances/fc0cc515-5f7b-420c-bcda-d77b66610af2/console.log
 -device isa-serial,chardev=charserial0,id=serial0 -chardev 
pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device 
usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:6 -k en-us -vga cirrus 
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on
   46 2017-12-13 00:24:07.337+: Domain id=37 is tainted: high-privileges
   47 char device redirected to /dev/pts/7 (label charserial1)

  3) Compute node system log: /var/log/messages
  Dec 13 08:20:15 l23-41-5 libvirtd: 2017-12-13 00:20:15.461+: 22613: 
error : qemuMonitorIO:697 : internal error: End of file from qemu monitor

  4) OpenStack nova-compute log: /var/log/nova/nova-compute.log

  2017-12-13 08:20:30.751 22579 INFO nova.compute.manager [-] [instance: 
fc0cc515-5f7b-420c-bcda-d77b66610af2] VM Stopped (Lifecycle Event)
  2017-12-13 08:20:30.845 22579 INFO nova.compute.manager 
[req-1834e8c9-71b2-4a4c-8127-eee6888cf446 - - - - -] [instance: 
fc0cc515-5f7b-420c-bcda-d77b66610af2] During _sync_instance_power_state the DB 
power_state (1) does not match the vm_power_state from the hypervisor (4). 
Updating power_state in the DB to match the hypervisor.
  2017-12-13 08:20:30.904 22579 WARNING nova.compute.manager 
[req-1834e8c9-71b2-4a4c-8127-eee6888cf446 - - - - -] [instance: 
fc0cc515-5f7b-420c-bcda-d77b66610af2] Instance shutdown by itself. Calling the 
stop API. Current vm_state: active, current task_state: None, original DB 
power_state: 1, current VM power_state: 4
  2017-12-13 08:20:31.038 22579 INFO nova.extend.network 
[req-1834e8c9-71b2-4a4c-8127-eee6888cf446 - - - - -] Add william check_result: 
True
  2017-12-13 08:20:31.039 22579 INFO nova.extend.network 
[req-1834e8c9-71b2-4a4c-8127-eee6888cf446 - - - - -] Add 

[Yahoo-eng-team] [Bug 1739013] [NEW] nova.tests.functional.test_server_group.ServerGroupTest*.test_evacuate_with_anti_affinity does not validate that evacuation really happens

2017-12-19 Thread Balazs Gibizer
Public bug reported:

The test only asserts that the policy is kept after the evacuation API
is called [1], but does not check whether the evacuated server is moved to a
new host. When I added those asserts locally it became clear that the
evacuation fails with NoValidHost, yet the test still passes, causing a
false positive result.

The logs in those failed tests show multiple potential problems [2].

[1] 
https://github.com/openstack/nova/blob/42d2c0263edf9041b7e97b0b59982dcfe904a137/nova/tests/functional/test_server_group.py#L431
[2] http://paste.openstack.org/show/629301/
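
A sketch of the kind of assertion that is missing, in the spirit of the
report; the get_server callable and the 'OS-EXT-SRV-ATTR:host' field are
assumptions about the test helpers, not the actual functional-test code in [1]:

    # Illustrative only: check that an evacuated server really ended up on a
    # different host and finished the evacuation, instead of only checking
    # that the group policy is still attached.
    def assert_server_evacuated(get_server, server_id, source_host):
        server = get_server(server_id)
        assert server['status'] == 'ACTIVE', (
            'evacuation did not finish, status is %s' % server['status'])
        dest_host = server['OS-EXT-SRV-ATTR:host']
        assert dest_host != source_host, (
            'server %s is still on %s after evacuation'
            % (server_id, source_host))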

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: evacuate

** Tags added: evacuate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1739013

Title:
  
nova.tests.functional.test_server_group.ServerGroupTest*.test_evacuate_with_anti_affinity
  does not validate that evacuation really happens

Status in OpenStack Compute (nova):
  New

Bug description:
  The test only asserts that the policy is kept after the evacuation
  API is called [1], but does not check whether the evacuated server is moved
  to a new host. When I added those asserts locally it became clear that
  the evacuation fails with NoValidHost, yet the test still passes, causing
  a false positive result.

  The logs in those failed tests show multiple potential problems [2].

  [1] 
https://github.com/openstack/nova/blob/42d2c0263edf9041b7e97b0b59982dcfe904a137/nova/tests/functional/test_server_group.py#L431
  [2] http://paste.openstack.org/show/629301/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1739013/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1737879] [NEW] 2017-12-13 00:20:15.681+0000: shutting down, reason=crashed

2017-12-19 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

1) vm shutdown info:
43 2017-12-13 00:20:15.681+: shutting down, reason=crashed

2) nova start vm info:
2017-12-13 00:24:07.337+: starting up libvirt version: 3.2.0, package: 
14.el7_4.3 (CentOS BuildSystem , 2017-09-07-11:27:44, 
c1bm.rdu2.centos.org), qemu version: 1.5.3 (qemu-kvm-1.5.3-141.el7_4.2), 
hostname: l23-41-5 45 LC_ALL=C 
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none 
/usr/libexec/qemu-kvm -name instance-005c -S -machine 
pc-i440fx-rhel7.0.0,accel=kvm,usb=off,dump-guest-core=off -m 16384 
-realtime mlock=off -smp 16,sockets=16,cores=16,threads=1 -uuid 
fc0cc515-5f7b-420c-bcda-d77b66610af2 -smbios 'type=1,manufacturer=Fedora 
Project,product=OpenStack 
Nova,version=13.1.2-1.el7,serial=f7f54ab4-0b4f-499c-8d1d-291e11f50bf2,uuid=fc0cc515-5f7b-420c-bcda-d77b66610af2,family=Virtual
 Machine' -no-user-config -nodefaults -chardev socket,id=cha
rmonitor,path=/var/lib/libvirt/qemu/domain-37-instance-005c/monitor.sock,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=control -rtc 
base=localtime,driftfix=slew -global kvm-pit.lost_tick_policy=delay 
-no-hpet -no-shutdown -boot strict=on -device 
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive 
'file=rbd:nova_instances/fc0cc515-5f7b-420c-bcda-d77b66610af2_disk:id=cinder:key=AQBNLRRZswapABAAh6GCDOkWFqWj5uYAMVgIKA==:auth_supported=cephx\;none:mon_host=10.211.41.4\:6789\;10.211.41.6\:6789\;10.211.41.8\:6789,format=raw,if=none,id=drive-virtio-disk0,cache=writeback,throttling.bps-read=52425500,throttling.bps-write=52425500' 
-device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -drive 
'file=rbd:cinder_volumes/volume-cd2fcd54-0661-4a4d-b13d-419b6585c676:id=cinder:key=AQBNLRRZswapABAAh6GCDOkWFqWj5uYAMVgIKA==:auth_supported=cephx\;none:mon_host=10.211.41.4\:6789\;10.211.41.6\:6789\;10.211.41.8\:6789,format=raw,if=none,id=drive-virtio-disk1,serial=cd2fcd54-0661-4a4d-b13d-419b6585c676,cache=writeback'
 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=
drive-virtio-disk1,id=virtio-disk1 -drive 
'file=rbd:nova_instances/fc0cc515-5f7b-420c-bcda-d77b66610af2_disk.config:id=cinder:key=AQBNLRRZswapABAAh6GCDOkWFqWj5uYAMVgIKA==:auth_supported=cephx\;none:mon_host=10.211.41.4\:6789\;10.211.41.6\:6789\;10.211.41.8\:6789,format=raw,if=none,id=drive-ide0-1-1,readonly=on,cache=writeback,throttling.bps-read=52425500,throttling.bps-write=52425500' -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 
-netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=39 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:6
0:0b:06,bus=pci.0,addr=0x3 -chardev 
file,id=charserial0,path=/data/nova/instances/fc0cc515-5f7b-420c-bcda-d77b66610af2/console.log
 -device isa-serial,chardev=charserial0,id=serial0 -chardev 
pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device 
usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:6 -k en-us -vga cirrus 
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on
 46 2017-12-13 00:24:07.337+: Domain id=37 is tainted: high-privileges
 47 char device redirected to /dev/pts/7 (label charserial1)

3) Compute node system log: /var/log/messages
Dec 13 08:20:15 l23-41-5 libvirtd: 2017-12-13 00:20:15.461+: 22613: 
error : qemuMonitorIO:697 : internal error: End of file from qemu monitor

4) OpenStack nova-compute log: /var/log/nova/nova-compute.log

2017-12-13 08:20:30.751 22579 INFO nova.compute.manager [-] [instance: 
fc0cc515-5f7b-420c-bcda-d77b66610af2] VM Stopped (Lifecycle Event)
2017-12-13 08:20:30.845 22579 INFO nova.compute.manager 
[req-1834e8c9-71b2-4a4c-8127-eee6888cf446 - - - - -] [instance: 
fc0cc515-5f7b-420c-bcda-d77b66610af2] During _sync_instance_power_state the DB 
power_state (1) does not match the vm_power_state from the hypervisor (4). 
Updating power_state in the DB to match the hypervisor.
2017-12-13 08:20:30.904 22579 WARNING nova.compute.manager 
[req-1834e8c9-71b2-4a4c-8127-eee6888cf446 - - - - -] [instance: 
fc0cc515-5f7b-420c-bcda-d77b66610af2] Instance shutdown by itself. Calling the 
stop API. Current vm_state: active, current task_state: None, original DB 
power_state: 1, current VM power_state: 4
2017-12-13 08:20:31.038 22579 INFO nova.extend.network 
[req-1834e8c9-71b2-4a4c-8127-eee6888cf446 - - - - -] Add william check_result: 
True
2017-12-13 08:20:31.039 22579 INFO nova.extend.network 
[req-1834e8c9-71b2-4a4c-8127-eee6888cf446 - - - - -] Add william,vlan1210 is 
exists,so is not create
2017-12-13 08:20:31.039 22579 INFO nova.compute.manager 
[req-1834e8c9-71b2-4a4c-8127-eee6888cf446 - - - - -] Add william, live migrate 
vm  bridge_multiple_vlan True
2017-12-13 08:20:31.054 22579 INFO nova.compute.manager 
[req-1834e8c9-71b2-4a4c-8127-eee6888cf446 - - - - -] [instance: 

[Yahoo-eng-team] [Bug 1708655] Re: mod_wsgi requires WSGIApplicationGroup %{GLOBAL} or it will hang

2017-12-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/526423
Committed: 
https://git.openstack.org/cgit/openstack/openstack-ansible-os_horizon/commit/?id=dfbc2a56b6fd0cf0a98fe6a2dc046a44c30a05b2
Submitter: Zuul
Branch: master

commit dfbc2a56b6fd0cf0a98fe6a2dc046a44c30a05b2
Author: Adrien Cunin 
Date:   Thu Dec 7 15:46:01 2017 +0100

Set WSGIApplicationGroup %{GLOBAL} as recommended

mod_wsgi hangs trying to import the recent versions of
python-gobject-base used by python-keyring library, which is in turn
used by python-keystoneclient. This does not happen if the
WSGIApplicationGroup is global.

Change-Id: I4c7408699fddf327feb1c3b47e8e47cf2dd946f1
Closes-Bug: #1708655
Closes-Bug: #1624791
Related-Bug: #1700176


** Changed in: openstack-ansible
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1708655

Title:
  mod_wsgi requires WSGIApplicationGroup %{GLOBAL} or it will hang

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in puppet-horizon:
  New

Bug description:
  It seems that the recent versions of python-gobject-base used by
  python-keyring library, which is in turn used by python-
  keystoneclient, use the simplified GIL state API as described in
  http://modwsgi.readthedocs.io/en/develop/user-guides/application-
  issues.html#python-simplified-gil-state-api

  Consequently, mod_wsgi hangs trying to import them, unless it has
  "WSGIApplicationGroup %{GLOBAL}" added  to its configuration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1708655/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1738659] Re: linux bridge assigns mac address to the wrong port

2017-12-19 Thread Adrian Pascalau
I have investigated this further, and it looks like the issue is in my
external network, since a device is bouncing back the arp request, and
this is why the bridge assigns it to the bond2 interface. So what
happens is the following:

default gw<-->physical switch<-->[bond2 bridge tap]<-->[eth0 cirrosVM]

The arp request goes out on the eth0 interface, and enters the bridge on
the tap interface. The bridge assigns the eth0 mac address to the tap
interface, and sends the arp request out on the bond2 interface. Now
some device on the left side of the bridge (either the physical switch or
the default gw) broadcasts that arp request back, so the same arp
request re-enters the bridge on the bond2 interface, and the bridge
assigns the source mac address of the arp request (which is still the
eth0 mac address) to the bond2 port in the forwarding table, which
causes the behavior I have noticed...

This also explains why I see 2 arp requests and a single arp reply when
tracing:

# tcpdump -n -i bond2 arp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on bond2, link-type EN10MB (Ethernet), capture size 262144 bytes
06:13:48.581758 ARP, Request who-has 10.20.21.1 tell 10.20.21.114, length 28
06:13:48.581791 ARP, Request who-has 10.20.21.1 tell 10.20.21.114, length 28
06:13:48.582221 ARP, Reply 10.20.21.1 is-at 00:17:08:c4:52:80, length 46

I am really sorry for all the trouble.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1738659

Title:
  linux bridge assigns mac address to the wrong port

Status in neutron:
  Invalid

Bug description:
  * High level description:
  The linux bridge assigns the mac address to the physical external interface
instead of the tap interface, so the VM instance behind the tap interface is
not able to communicate over IP. The workaround I have found is to convert the
bridge to a hub by setting ageing to 0 (brctl setageing br-name 0). In this
way, the bridge floods all packets on all attached bridge interfaces, and
everything starts working.

  * Pre-conditions:
  I have an OpenStack Pike deployment running on the latest CentOS 7 release
(7.4.1708). Neutron was manually installed as described in the neutron
installation guide at
https://docs.openstack.org/neutron/latest/install/install-rdo.html. I have
configured neutron for Networking Option 2 (self-service networks), however the
setup I am testing here is an external flat provider network with a single
cirros VM instance attached directly to it (without any router in between). The
OpenStack environment is made of two nodes: a controller and a compute. The
neutron package version is 11.0.2-2.el7 (latest in CentOS 7), the bridge-utils
version is 1.5-9.el7 and the kernel version is 3.10.0-693.11.1.el7.x86_64. I
have tested this with the cirros images cirros-0.4.0-x86_64-disk.img and
cirros-0.3.5-x86_64-disk.img.

  # rpm -qa | grep neutron-linuxbridge
  openstack-neutron-linuxbridge-11.0.2-2.el7.noarch
  # rpm -qf /usr/sbin/brctl
  bridge-utils-1.5-9.el7.x86_64
  # uname -a
  Linux compute1 3.10.0-693.11.1.el7.x86_64 #1 SMP Mon Dec 4 23:52:40 UTC 2017 
x86_64 x86_64 x86_64 GNU/Linux

  Bridging is configured like below in both controller and compute:

  ml2_conf.ini:
  [ml2_type_flat]
  flat_networks = physnet1

  linuxbridge_agent.ini:
  [linux_bridge]
  physical_interface_mappings = physnet1:bond2

  * Step-by-step reproduction steps:
  This is how I created the provider network:
  openstack network create \
--share \
--external \
--provider-physical-network physnet1 \
--provider-network-type flat \
ExtNet1

  This is how I create the provider subnet:
  openstack subnet create \
--network ExtNet1 \
--allocation-pool start=10.20.21.96,end=10.20.21.127 \
--dns-nameserver 10.20.21.1 \
--gateway 10.20.21.1 \
--subnet-range 10.20.21.0/24 \
ExtSubnet1

  This is how I launch a cirros instance and attach it to the provider network:
  openstack server create \
--flavor m1.nano \
--image cirros-0.4.0-x86_64-disk.img \
--nic net-id=$(openstack network list | grep ExtNet1 | cut -d\  -f 2) \
--security-group default \
--key-name controller-key \
cirros1

  Based on the above, neutron creates in my compute node the following
  bridge:

  # brctl show
  bridge name bridge id   STP enabled interfaces
  brq75a55ef7-4a  8000.fc15b413e6a3   no  bond2
  tap44bc34bb-e2

  bond2 is the physical interface used for the flat provider network (in
  access mode, no vlans) and tap44bc34bb-e2 is the tap interface
  attached to my cirros VM instance.

  In the bridge, the bond2 is port 2, and the tap tap44bc34bb-e2
  interface is port 1, and both are in forwarding mode.

  # brctl showstp 

[Yahoo-eng-team] [Bug 1739023] [NEW] cloud-init should support iproute2 tools

2017-12-19 Thread Robert Schweikert
Public bug reported:

The older ifconfig, route, netstat, and arp tools are being deprecated on
Linux distributions. cloud-init should in all cases support the newer
iproute2 tools.

ifconfig used in:
cloudinit/sources/DataSourceAzure.py
cloudinit/netinfo.py
tools/mock-meta.py

netstat used in:
cloudinit/netinfo.py

route used in:
cloudinit/config/cc_disable_ec2_metadata.py
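
As one possible direction, a hedged sketch of preferring iproute2 with a
fallback to the legacy tools; this is illustrative only and does not reflect
cloud-init's actual netinfo implementation:

    # Prefer iproute2 commands, falling back to the legacy tools only when
    # the 'ip' binary is not available. Sketch only.
    import subprocess
    from shutil import which

    def addr_info():
        # 'ip -o addr show' replaces 'ifconfig -a'
        cmd = ['ip', '-o', 'addr', 'show'] if which('ip') else ['ifconfig', '-a']
        return subprocess.check_output(cmd, universal_newlines=True)

    def route_info():
        # 'ip route show' replaces 'route -n' / 'netstat -rn'
        cmd = ['ip', 'route', 'show'] if which('ip') else ['route', '-n']
        return subprocess.check_output(cmd, universal_newlines=True)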

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1739023

Title:
  cloud-init should support iproute2 tools

Status in cloud-init:
  New

Bug description:
  The older ifconfig, route, netstat, and arp tools are being deprecated on
  Linux distributions. cloud-init should in all cases support the newer
  iproute2 tools.

  ifconfig used in:
  cloudinit/sources/DataSourceAzure.py
  cloudinit/netinfo.py
  tools/mock-meta.py

  netstat used in:
  cloudinit/netinfo.py

  route used in:
  cloudinit/config/cc_disable_ec2_metadata.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1739023/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1739042] [NEW] _move_operation_alloc_request fails with TypeError when using 1.12 version allocation request

2017-12-19 Thread Matt Riedemann
Public bug reported:

Seen here in the alternate hosts series:

http://logs.openstack.org/58/511358/43/check/openstack-tox-
functional/e642310/job-output.txt.gz#_2017-12-19_00_18_34_585930

2017-12-19 00:18:34.585930 | ubuntu-xenial | Traceback (most recent call 
last):
2017-12-19 00:18:34.585992 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
 line 163, in _process_incoming
2017-12-19 00:18:34.586021 | ubuntu-xenial | res = 
self.dispatcher.dispatch(message)
2017-12-19 00:18:34.586082 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 220, in dispatch
2017-12-19 00:18:34.586114 | ubuntu-xenial | return 
self._do_dispatch(endpoint, method, ctxt, args)
2017-12-19 00:18:34.586179 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 190, in _do_dispatch
2017-12-19 00:18:34.586207 | ubuntu-xenial | result = func(ctxt, 
**new_args)
2017-12-19 00:18:34.586241 | ubuntu-xenial |   File 
"nova/conductor/manager.py", line 603, in build_instances
2017-12-19 00:18:34.586267 | ubuntu-xenial | 
host.allocation_request_version)
2017-12-19 00:18:34.586300 | ubuntu-xenial |   File 
"nova/scheduler/utils.py", line 800, in claim_resources
2017-12-19 00:18:34.586335 | ubuntu-xenial | user_id, 
allocation_request_version=allocation_request_version)
2017-12-19 00:18:34.586370 | ubuntu-xenial |   File 
"nova/scheduler/client/__init__.py", line 37, in __run_method
2017-12-19 00:18:34.586402 | ubuntu-xenial | return 
getattr(self.instance, __name)(*args, **kwargs)
2017-12-19 00:18:34.586435 | ubuntu-xenial |   File 
"nova/scheduler/client/report.py", line 61, in wrapper
2017-12-19 00:18:34.586459 | ubuntu-xenial | return f(self, *a, **k)
2017-12-19 00:18:34.586493 | ubuntu-xenial |   File 
"nova/scheduler/client/report.py", line 110, in wrapper
2017-12-19 00:18:34.586516 | ubuntu-xenial | return f(self, *a, **k)
2017-12-19 00:18:34.586552 | ubuntu-xenial |   File 
"nova/scheduler/client/report.py", line 1126, in claim_resources
2017-12-19 00:18:34.586586 | ubuntu-xenial | payload = 
_move_operation_alloc_request(current_allocs, ar)
2017-12-19 00:18:34.586625 | ubuntu-xenial |   File 
"nova/scheduler/client/report.py", line 199, in _move_operation_alloc_request
2017-12-19 00:18:34.586657 | ubuntu-xenial | for a in 
dest_alloc_req['allocations']) - cur_rp_uuids
2017-12-19 00:18:34.586685 | ubuntu-xenial | TypeError: string indices must 
be integers

This is due to using a 1.12 version allocation candidate request, but
_move_operation_alloc_request is expecting the <1.12 format, where
allocations is a list instead of a dict.

I don't know if we should change the calling code to format the
allocation request to the <1.12 format, or make
_move_operation_alloc_request handle both styles (probably better to do
the latter).
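
For reference, a sketch of the latter option (accepting both payload shapes);
the function name mirrors the report but this is not the actual nova fix:

    # Normalize a 1.12+ dict-style 'allocations' payload into the pre-1.12
    # list-of-dicts shape so code that expects the old format keeps working.
    # Illustrative only.
    def normalize_allocations(alloc_req):
        allocations = alloc_req['allocations']
        if isinstance(allocations, dict):
            # >= 1.12: {rp_uuid: {'resources': {...}}, ...}
            return [{'resource_provider': {'uuid': rp_uuid},
                     'resources': alloc['resources']}
                    for rp_uuid, alloc in allocations.items()]
        # < 1.12: already a list of
        # {'resource_provider': {'uuid': ...}, 'resources': {...}}
        return allocations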

** Affects: nova
 Importance: High
 Status: Triaged


** Tags: placement

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1739042

Title:
  _move_operation_alloc_request fails with TypeError when using 1.12
  version allocation request

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  Seen here in the alternate hosts series:

  http://logs.openstack.org/58/511358/43/check/openstack-tox-
  functional/e642310/job-output.txt.gz#_2017-12-19_00_18_34_585930

  2017-12-19 00:18:34.585930 | ubuntu-xenial | Traceback (most recent call 
last):
  2017-12-19 00:18:34.585992 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
 line 163, in _process_incoming
  2017-12-19 00:18:34.586021 | ubuntu-xenial | res = 
self.dispatcher.dispatch(message)
  2017-12-19 00:18:34.586082 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 220, in dispatch
  2017-12-19 00:18:34.586114 | ubuntu-xenial | return 
self._do_dispatch(endpoint, method, ctxt, args)
  2017-12-19 00:18:34.586179 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 190, in _do_dispatch
  2017-12-19 00:18:34.586207 | ubuntu-xenial | result = func(ctxt, 
**new_args)
  2017-12-19 00:18:34.586241 | 

[Yahoo-eng-team] [Bug 1739078] [NEW] fullstack: Use a pre-built database schema

2017-12-19 Thread Jakub Libosvar
Public bug reported:

This is a request for enhancement to avoid using alembic when creating the
fullstack environment. The database creation is one of the most
expensive operations during env build-up. Using a pre-defined SQL script
that creates the database schema can save time in fullstack runs.
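
One possible shape of the enhancement, sketched below: build the schema once
via the migrations, dump it, and reload the dump for subsequent environments.
The cache path, database name and command-line flags are assumptions, not an
agreed design:

    # Hypothetical sketch: cache the fully migrated schema as plain DDL and
    # load it directly instead of replaying all alembic migrations per run.
    import os
    import subprocess

    SCHEMA_CACHE = '/tmp/neutron_fullstack_schema.sql'

    def ensure_schema(db_name):
        if not os.path.exists(SCHEMA_CACHE):
            # Slow path: run the migrations once, then dump the resulting DDL.
            subprocess.check_call(['neutron-db-manage', 'upgrade', 'heads'])
            with open(SCHEMA_CACHE, 'w') as out:
                subprocess.check_call(['mysqldump', '--no-data', db_name],
                                      stdout=out)
        else:
            # Fast path: replay the cached DDL, skipping alembic entirely.
            with open(SCHEMA_CACHE) as dump:
                subprocess.check_call(['mysql', db_name], stdin=dump)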

** Affects: neutron
 Importance: Wishlist
 Status: New


** Tags: fullstack rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1739078

Title:
  fullstack: Use a pre-built database schema

Status in neutron:
  New

Bug description:
  This is a request for enhancement to avoid using alembic when creating the
  fullstack environment. The database creation is one of the most
  expensive operations during env build-up. Using a pre-defined SQL
  script that creates the database schema can save time in fullstack runs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1739078/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1739075] [NEW] fullstack: Improve test suite by creating environment per test class

2017-12-19 Thread Jakub Libosvar
Public bug reported:

Currently, the fullstack environment is created per test function, which
is time consuming: the environment creation takes a fair amount of time.
This bug is a proposal to move environment creation to the test class level
to reduce the time of fullstack runs. As a tradeoff, we won't be able to run
tests under the same class in parallel.
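
A minimal sketch of the class-level setup with plain unittest; the Environment
class below is a placeholder for the real fullstack fixtures, not neutron code:

    # Illustrative only: build the expensive environment once per test class
    # in setUpClass and tear it down in tearDownClass, so every test method
    # in the class reuses the same environment.
    import unittest

    class Environment(object):
        """Placeholder standing in for the real fullstack environment."""
        def start(self): pass
        def stop(self): pass
        def ping_all(self): return True

    class FullstackTestCase(unittest.TestCase):

        @classmethod
        def setUpClass(cls):
            super(FullstackTestCase, cls).setUpClass()
            cls.environment = Environment()
            cls.environment.start()

        @classmethod
        def tearDownClass(cls):
            cls.environment.stop()
            super(FullstackTestCase, cls).tearDownClass()

        def test_connectivity(self):
            self.assertTrue(self.environment.ping_all())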

** Affects: neutron
 Importance: Wishlist
 Status: New


** Tags: fullstack rfe

** Changed in: neutron
   Importance: Undecided => Wishlist

** Tags added: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1739075

Title:
  fullstack: Improve test suite by creating environment per test class

Status in neutron:
  New

Bug description:
  Currently, the fullstack environment is created per test function
  consuming. The environment creation takes a fair amount of time. This
  bug is a proposal to move environment creation to the test class level
  to reduce time of fullstack runs. As a tradeoff, we won't be able to
  run tests under same class in parallel.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1739075/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733496] Re: placement: The description of 'X-Openstack-Request-Id' in the response header is missing in API reference

2017-12-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/523007
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=1190c3418377a721a5d8ead4d58fd09f72a2bacc
Submitter: Zuul
Branch: master

commit 1190c3418377a721a5d8ead4d58fd09f72a2bacc
Author: Takashi NATSUME 
Date:   Mon Nov 27 11:57:38 2017 +0900

[placement] Add x-openstack-request-id in API ref

Add the description about 'x-openstack-request-id'
in the request and the response headers.

Change-Id: I6ffdfbacb81660b89d7bf8ba83dbab1aa25a80bd
Closes-Bug: #1733496


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1733496

Title:
  placement: The description of 'X-Openstack-Request-Id' in the response
  header is missing in API reference

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The description of 'X-Openstack-Request-Id' in the response header is missing
from the Placement API reference (*1).
  It should be added, as in the compute API reference (*2).

  *1: https://developer.openstack.org/api-ref/placement/
  *2: https://developer.openstack.org/api-ref/compute/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1733496/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1739071] [NEW] Floating IP assigned to a DHCP port leads to a exception if DHCP port is deleted

2017-12-19 Thread Gary Kotton
Public bug reported:

A DHCP port should not be allowed to have a floating IP assigned to it.
Here is the trace for when this port is deleted.
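
For context, the kind of guard the first sentence implies, as a hedged sketch
(the constant and helper names are illustrative, not the actual neutron fix);
the reproduction and trace follow below:

    # Illustrative only: reject a floating IP association when the internal
    # port is a DHCP port.
    DEVICE_OWNER_DHCP = 'network:dhcp'

    def validate_internal_port(port):
        if port.get('device_owner') == DEVICE_OWNER_DHCP:
            raise ValueError(
                'Port %s is a DHCP port and cannot have a floating IP '
                'associated with it' % port['id'])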

root@loadbalancer01:~# neutron subnet-update  
2d370a99-f177-4b85-892e-56def086e046 --disable-dhcp
Request Failed: internal server error while processing your request.
Neutron server returns request_ids: ['req-f43b7e82-58eb-400e-a9ac-3341f3324e8c']

Backtrace from neutron-server.log file:
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource 
[req-f43b7e82-58eb-400e-a9ac-3341f3324e8c ab32ddb0a8e54a6eb756a0a1d82f8345 
1cb4e51898b04cbabdacb0b84b6a7e7e - - -] update failed: No details.
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 93, in 
resource
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 617, in update
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource return 
self._update(request, id, body, **kwargs)
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 95, in wrapped
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource self.force_reraise()
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 91, in wrapped
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in wrapper
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource self.force_reraise()
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in wrapper
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 131, in wrapped
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource 
traceback.format_exc())
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource self.force_reraise()
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 126, in wrapped
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource return f(*dup_args, 
**dup_kwargs)
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 665, in _update
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource obj = 
obj_updater(request.context, id, **kwargs)
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/vmware_nsx/plugins/nsx_v/plugin.py", line 
2646, in update_subnet
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource return 
self._safe_update_subnet(context, id, subnet)
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/vmware_nsx/plugins/nsx_v/plugin.py", line 
2681, in _safe_update_subnet
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource 
self._update_subnet_dhcp_status(subnet, context)
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/vmware_nsx/plugins/nsx_v/plugin.py", 

[Yahoo-eng-team] [Bug 1736946] Re: Conductor: fails to clean up networking resources due to _destroy_build_request CantStartEngineError

2017-12-19 Thread Matt Riedemann
** Changed in: nova/newton
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1736946

Title:
  Conductor: fails to clean up networking resources due to
  _destroy_build_request CantStartEngineError

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) newton series:
  Won't Fix
Status in OpenStack Compute (nova) ocata series:
  Confirmed
Status in OpenStack Compute (nova) pike series:
  Confirmed

Bug description:
  If libvirt fails to deploy an instance, for example due to a problematic
  vif type being passed, the conductor will fail to clean up resources.
  It fails with the exception below because the cell mapping was not
  invoked.

  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00mTraceback (most recent call last):
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
163, in _process_incoming
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    res = 
self.dispatcher.dispatch(message)
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
220, in dispatch
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    return 
self._do_dispatch(endpoint, method, ctxt, args)
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
190, in _do_dispatch
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    result = func(ctxt, **new_args)
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/opt/stack/nova/nova/conductor/manager.py", line 559, in build_instances
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    
self._destroy_build_request(context, instance)
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/opt/stack/nova/nova/conductor/manager.py", line 477, in _destroy_build_request
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    context, instance.uuid)
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
184, in wrapper
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    result = fn(cls, context, 
*args, **kwargs)
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/opt/stack/nova/nova/objects/build_request.py", line 176, in 
get_by_instance_uuid
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    db_req = 
cls._get_by_instance_uuid_from_db(context, instance_uuid)
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", 
line 983, in wrapper
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    with 
self._transaction_scope(context):
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/usr/lib/python2.7/contextlib.py", line 17, in __enter__
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    return self.gen.next()
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", 
line 1033, in _transaction_scope
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    context=context) as resource:
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/usr/lib/python2.7/contextlib.py", line 17, in __enter__
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    return self.gen.next()
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 

[Yahoo-eng-team] [Bug 1739108] [NEW] api.keystone.is_cloud_admin/is_domain_admin do not work with the latest policy from keystone repo

2017-12-19 Thread Akihiro Motoki
Public bug reported:

openstack_dashboard.api.keystone.is_cloud_admin and is_domain_admin do
not work with the policy files generated from the latest master branch
(queens) of the keystone repository (For example, keystone commit
cfbc2aa30b7406b4bc77e40a55561d1f46174b5c).

During the policy-in-code work, keystone dropped the "default" policy (which
was "rule:admin_required").

is_cloud_admin() and is_domain_admin() refer to the "cloud_admin" and
"admin_and_matching_domain_id" policies respectively. They are not defined in
the default keystone policy.
Previously a policy check fell back to the "default" rule (i.e., "admin_required"),
and as a result both is_cloud_admin() and is_domain_admin() checked
"admin_required".

Now the keystone default policy has no "default" rule. As a result
is_cloud_admin() and is_domain_admin() always return False. This means
some admin-ness panels do not work.

IIUC, the horizon policy framework intends to work with the default policies
from back-end services.
The current situation should be fixed before the Queens release.

[1]
https://github.com/openstack/horizon/blob/0f598182919df31e40c7630ee1bd42bea259310d/openstack_dashboard/api/keystone.py#L325-L331
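
The fallback behaviour can be reproduced with oslo.policy directly; a hedged
sketch, with simplified rules standing in for the real keystone policy:

    # Sketch: an undefined rule ('cloud_admin') falls back to the 'default'
    # rule when one exists, and simply evaluates to False when it does not.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)   # default_rule is 'default'
    creds = {'roles': ['admin']}

    # Pre-Queens style policy file: a catch-all 'default' rule exists.
    enforcer.set_rules(policy.Rules.from_dict(
        {'admin_required': 'role:admin', 'default': 'rule:admin_required'}))
    print(enforcer.enforce('cloud_admin', {}, creds))   # True via 'default'

    # Policy-in-code keystone: no 'default' rule, so the check returns False.
    enforcer.set_rules(policy.Rules.from_dict({'admin_required': 'role:admin'}))
    print(enforcer.enforce('cloud_admin', {}, creds))   # False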

** Affects: horizon
 Importance: Critical
 Status: New

** Changed in: horizon
   Importance: Undecided => Critical

** Changed in: horizon
Milestone: None => queens-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1739108

Title:
  api.keystone.is_cloud_admin/is_domain_admin do not work with the
  latest policy from keystone repo

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  openstack_dashboard.api.keystone.is_cloud_admin and is_domain_admin do
  not work with the policy files generated from the latest master branch
  (queens) of the keystone repository (For example, keystone commit
  cfbc2aa30b7406b4bc77e40a55561d1f46174b5c).

  During the policy-in-code work, keystone dropped the "default" policy (which
  was "rule:admin_required").

  is_cloud_admin() and is_domain_admin() refer to the "cloud_admin" and
  "admin_and_matching_domain_id" policies respectively. They are not defined in
  the default keystone policy.
  Previously a policy check fell back to the "default" rule (i.e.,
  "admin_required"), and as a result both is_cloud_admin() and is_domain_admin()
  checked "admin_required".

  Now the keystone default policy has no "default" rule. As a result
  is_cloud_admin() and is_domain_admin() always return False. This means
  some admin-ness panels do not work.

  IIUC, the horizon policy framework intends to work with the default policies
  from back-end services.
  The current situation should be fixed before the Queens release.

  [1]
  
https://github.com/openstack/horizon/blob/0f598182919df31e40c7630ee1bd42bea259310d/openstack_dashboard/api/keystone.py#L325-L331

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1739108/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1739219] [NEW] Old dnsmasq listed as option:dns-server

2017-12-19 Thread Adrien Cunin
Public bug reported:

Pike, regular Neutron LinuxBridge, I have the following situation:

Went from a 4-network-node setup to a 3-network-node setup, so one
dhcp agent was dropped. It was shut down as well as removed using
`openstack network agent delete`.

The issue is about the generated list of DNS servers for a subnet that
doesn't explicitly define DNS servers. After the removal of the fourth
dhcp agent, the corresponding dnsmasq IP address is still included in
the option:dns-server parameter of the generated dnsmasq dhcp config.

As a result, instances in such a subnet get a list of DNS servers with
one that is down.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1739219

Title:
  Old dnsmasq listed as option:dns-server

Status in neutron:
  New

Bug description:
  Pike, regular Neutron LinuxBridge, I have the following situation:

  Went from a 4-network-node setup to a 3-network-node setup, so one
  dhcp agent was dropped. It was shut down as well as removed using
  `openstack network agent delete`.

  The issue is about the generated list of DNS servers for a subnet that
  doesn't explicitly define DNS servers. After the removal of the fourth
  dhcp agent, the corresponding dnsmasq IP address is still included in
  the option:dns-server parameter of the generated dnsmasq dhcp config.

  As a result, instances in such a subnet get a list of DNS servers with
  one that is down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1739219/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709985] Re: test_rebuild_server_in_error_state randomly times out waiting for rebuilding instance to be active

2017-12-19 Thread Matt Riedemann
** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1709985

Title:
  test_rebuild_server_in_error_state randomly times out waiting for
  rebuilding instance to be active

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  http://logs.openstack.org/12/491012/12/check/gate-tempest-dsvm-cells-
  ubuntu-xenial/4aa3da8/console.html#_2017-08-10_18_58_35_158151

  2017-08-10 18:58:35.158151 | 
tempest.api.compute.admin.test_servers.ServersAdminTestJSON.test_rebuild_server_in_error_state[id-682cb127-e5bb-4f53-87ce-cb9003604442]
  2017-08-10 18:58:35.158207 | 
---
  2017-08-10 18:58:35.158221 | 
  2017-08-10 18:58:35.158239 | Captured traceback:
  2017-08-10 18:58:35.158258 | ~~~
  2017-08-10 18:58:35.158281 | Traceback (most recent call last):
  2017-08-10 18:58:35.158323 |   File 
"tempest/api/compute/admin/test_servers.py", line 188, in 
test_rebuild_server_in_error_state
  2017-08-10 18:58:35.158346 | raise_on_error=False)
  2017-08-10 18:58:35.158381 |   File "tempest/common/waiters.py", line 96, 
in wait_for_server_status
  2017-08-10 18:58:35.158407 | raise lib_exc.TimeoutException(message)
  2017-08-10 18:58:35.158436 | tempest.lib.exceptions.TimeoutException: 
Request timed out
  2017-08-10 18:58:35.158525 | Details: 
(ServersAdminTestJSON:test_rebuild_server_in_error_state) Server 
e57c5e75-9a8b-436d-aa53-a545e32c308a failed to reach ACTIVE status and task 
state "None" within the required time (196 s). Current status: REBUILD. Current 
task state: rebuild_spawning.

  Looks like this mostly shows up in cells v1 jobs, which wouldn't be
  surprising if we missed some state change due to the instance sync to
  the top level cell, but it's also happening sometimes in non-cells
  jobs. Could be a duplicate bug where we miss or don't get a network
  change / vif plug notification from neutron so we just wait forever.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1709985/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1686109] Re: requests.exceptions.ConnectionError: ('Connection aborted.', BadStatusLine("''", )) in functional tests

2017-12-19 Thread Matt Riedemann
** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1686109

Title:
  requests.exceptions.ConnectionError: ('Connection aborted.',
  BadStatusLine("''",)) in functional tests

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  We see this about 11 times in 7 days in the check and gate queues on
  the nova functional jobs:

  http://logs.openstack.org/20/459420/1/check/gate-nova-tox-functional-
  ubuntu-xenial/c0bdfa6/console.html#_2017-04-25_13_55_17_705409

  2017-04-25 13:55:17.704694 | Captured traceback:
  2017-04-25 13:55:17.704705 | ~~~
  2017-04-25 13:55:17.704720 | Traceback (most recent call last):
  2017-04-25 13:55:17.704749 |   File 
"nova/tests/functional/api_sample_tests/test_fixed_ips.py", line 100, in 
test_get_fixed_ip
  2017-04-25 13:55:17.704763 | self._test_get_fixed_ip()
  2017-04-25 13:55:17.704791 |   File 
"nova/tests/functional/api_sample_tests/test_fixed_ips.py", line 91, in 
_test_get_fixed_ip
  2017-04-25 13:55:17.704817 | response = 
self._do_get('os-fixed-ips/192.168.1.1')
  2017-04-25 13:55:17.704844 |   File 
"nova/tests/functional/api_samples_test_base.py", line 488, in _do_get
  2017-04-25 13:55:17.704857 | headers=headers)
  2017-04-25 13:55:17.704883 |   File 
"nova/tests/functional/api_samples_test_base.py", line 479, in _get_response
  2017-04-25 13:55:17.704901 | headers=headers, 
strip_version=strip_version)
  2017-04-25 13:55:17.704923 |   File 
"nova/tests/functional/api/client.py", line 164, in api_request
  2017-04-25 13:55:17.704938 | auth_result = self._authenticate()
  2017-04-25 13:55:17.704961 |   File 
"nova/tests/functional/api/client.py", line 150, in _authenticate
  2017-04-25 13:55:17.704972 | headers=headers)
  2017-04-25 13:55:17.704994 |   File 
"nova/tests/functional/api/client.py", line 138, in request
  2017-04-25 13:55:17.705016 | response = requests.request(method, url, 
data=body, headers=_headers)
  2017-04-25 13:55:17.705059 |   File 
"/home/jenkins/workspace/gate-nova-tox-functional-ubuntu-xenial/.tox/functional/local/lib/python2.7/site-packages/requests/api.py",
 line 56, in request
  2017-04-25 13:55:17.705083 | return session.request(method=method, 
url=url, **kwargs)
  2017-04-25 13:55:17.705163 |   File 
"/home/jenkins/workspace/gate-nova-tox-functional-ubuntu-xenial/.tox/functional/local/lib/python2.7/site-packages/requests/sessions.py",
 line 488, in request
  2017-04-25 13:55:17.705202 | resp = self.send(prep, **send_kwargs)
  2017-04-25 13:55:17.705280 |   File 
"/home/jenkins/workspace/gate-nova-tox-functional-ubuntu-xenial/.tox/functional/local/lib/python2.7/site-packages/requests/sessions.py",
 line 609, in send
  2017-04-25 13:55:17.705321 | r = adapter.send(request, **kwargs)
  2017-04-25 13:55:17.705367 |   File 
"/home/jenkins/workspace/gate-nova-tox-functional-ubuntu-xenial/.tox/functional/local/lib/python2.7/site-packages/requests/adapters.py",
 line 473, in send
  2017-04-25 13:55:17.705384 | raise ConnectionError(err, 
request=request)
  2017-04-25 13:55:17.705409 | requests.exceptions.ConnectionError: 
('Connection aborted.', BadStatusLine("''",))

  Which API it fails on is completely random I think.

  It looks like this is maybe a result of the OSAPIFixture spawning a
  greenthread to run the nova-api service:

  (9:19:08 AM) cdent: if for some reason the new greenthread doesn't get
  yielded to when it should, the client will try to read off the socket
  with nothing paying proper attention on the other side

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22requests.exceptions.ConnectionError%3A%20('Connection%20aborted.'%2C%20BadStatusLine(%5C%22''%5C%22%2C))%5C%22%20AND%20tags%3A%5C%22console%5C%22%20AND%20project%3A%5C%22openstack%2Fnova%5C%22=7d

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1686109/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1706719] Re: Account is locked out and cannot have password updated.

2017-12-19 Thread Matt Riedemann
** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1706719

Title:
  Account is locked out and cannot have password updated.

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  We are seeing this in tempest testing. In some tempest runs the test
  to change the user password fails because the account is locked out.
  Example traceback can be found at
  http://logs.openstack.org/21/485221/2/gate/gate-tempest-dsvm-neutron-
  full-ubuntu-xenial/4ecd651/console.html#_2017-07-20_01_14_10_769485
  and is pasted here so that log expiry doesn't delete it under us:

  2017-07-20 01:14:10.769485 | 
tempest.api.identity.v3.test_users.IdentityV3UsersTest.test_user_update_own_password[id-ad71bd23-12ad-426b-bb8b-195d2b635f27]
  2017-07-20 01:14:10.769531 | 
-
  2017-07-20 01:14:10.769545 | 
  2017-07-20 01:14:10.769562 | Captured traceback:
  2017-07-20 01:14:10.769580 | ~~~
  2017-07-20 01:14:10.769602 | Traceback (most recent call last):
  2017-07-20 01:14:10.769639 |   File 
"tempest/api/identity/v3/test_users.py", line 89, in 
test_user_update_own_password
  2017-07-20 01:14:10.769672 | 
self._update_password(original_password=old_pass, password=new_pass)
  2017-07-20 01:14:10.769707 |   File 
"tempest/api/identity/v3/test_users.py", line 42, in _update_password
  2017-07-20 01:14:10.769732 | original_password=original_password)
  2017-07-20 01:14:10.769769 |   File 
"tempest/lib/services/identity/v3/users_client.py", line 60, in 
update_user_password
  2017-07-20 01:14:10.769801 | resp, _ = self.post('users/%s/password' 
% user_id, update_user)
  2017-07-20 01:14:10.769831 |   File "tempest/lib/common/rest_client.py", 
line 270, in post
  2017-07-20 01:14:10.769864 | return self.request('POST', url, 
extra_headers, headers, body, chunked)
  2017-07-20 01:14:10.769895 |   File "tempest/lib/common/rest_client.py", 
line 659, in request
  2017-07-20 01:14:10.769919 | self._error_checker(resp, resp_body)
  2017-07-20 01:14:10.769951 |   File "tempest/lib/common/rest_client.py", 
line 755, in _error_checker
  2017-07-20 01:14:10.769979 | raise exceptions.Unauthorized(resp_body, 
resp=resp)
  2017-07-20 01:14:10.770005 | tempest.lib.exceptions.Unauthorized: 
Unauthorized
  2017-07-20 01:14:10.770054 | Details: {u'code': 401, u'title': 
u'Unauthorized', u'message': u'The account is locked for user: 
b99de038ad484b1fb4d65aebefd4464d.'}
  2017-07-20 01:14:10.770068 | 
  2017-07-20 01:14:10.770081 | 
  2017-07-20 01:14:10.770099 | Captured pythonlogging:
  2017-07-20 01:14:10.770118 | ~~~
  2017-07-20 01:14:10.770193 | 2017-07-20 00:54:16,576 23533 INFO 
[tempest.lib.common.rest_client] Request 
(IdentityV3UsersTest:test_user_update_own_password): 401 POST 
https://198.72.124.157/identity/v3/users/b99de038ad484b1fb4d65aebefd4464d/password
 0.049s
  2017-07-20 01:14:10.770284 | 2017-07-20 00:54:16,576 23533 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 
'application/json', 'X-Auth-Token': '', 'Accept': 'application/json'}
  2017-07-20 01:14:10.770331 | Body: {"user": {"password": 
"M8*qsS56SFEo%s4", "original_password": "T4+DR4vL577eGl_"}}
  2017-07-20 01:14:10.770475 | Response - Headers: {u'content-type': 
'application/json', u'date': 'Thu, 20 Jul 2017 00:54:16 GMT', u'vary': 
'X-Auth-Token', u'server': 'Apache/2.4.18 (Ubuntu)', u'connection': 'close', 
u'x-openstack-request-id': 'req-20995818-e4f4-4aaa-bdc1-d91c145ca562', 
u'www-authenticate': 'Keystone uri="https://198.72.124.157/identity;', 
u'content-length': '129', 'status': '401', 'content-location': 
'https://198.72.124.157/identity/v3/users/b99de038ad484b1fb4d65aebefd4464d/password'}
  2017-07-20 01:14:10.770528 | Body: {"error": {"message": "The 
account is locked for user: b99de038ad484b1fb4d65aebefd4464d.", "code": 401, 
"title": "Unauthorized"}}
  2017-07-20 01:14:10.770599 | 2017-07-20 00:54:16,614 23533 INFO 
[tempest.lib.common.rest_client] Request (IdentityV3UsersTest:_run_cleanups): 
401 POST 
https://198.72.124.157/identity/v3/users/b99de038ad484b1fb4d65aebefd4464d/password
 0.036s
  2017-07-20 01:14:10.770669 | 2017-07-20 00:54:16,614 23533 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 
'application/json', 'X-Auth-Token': '', 'Accept': 'application/json'}
  2017-07-20 01:14:10.770709 | Body: {"user": {"password": 
"H1!w*#WDyqGDBod", "original_password": "M8*qsS56SFEo%s4"}}
  2017-07-20 01:14:10.770857 | Response - Headers: {u'content-type': 
'application/json', u'date': 'Thu, 20 Jul 2017 00:54:16 

[Yahoo-eng-team] [Bug 1737201] Re: TypeError when sending notification during attach_interface

2017-12-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/527920
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=553f2edde596348ca5447588c5a0b06f3b6be286
Submitter: Zuul
Branch:master

commit 553f2edde596348ca5447588c5a0b06f3b6be286
Author: Balazs Gibizer 
Date:   Wed Dec 13 17:14:49 2017 +0100

Fix possible TypeError in VIF.fixed_ips

The VIF['network'] field can be initialized to None and therefore
a later call to VIF.fixed_ips() could raise a TypeError. This problem
was visible during the AttachInterfacesTestJSON tempest test case when
nova tried to emit the instance.interface_attach notification.

This patch makes sure that if VIF['network'] is None then
VIF.fixed_ips() returns an empty list instead of raising a TypeError.

Change-Id: Ib285d874b19be5bc1dbcd1d2af32e461f67e34cb
Closes-Bug: #1737201
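
For illustration only, here is a minimal sketch of the guard described in the
commit message above; it is not the exact nova code, and VIF is reduced to a
plain dict subclass here:

    class VIF(dict):
        def fixed_ips(self):
            network = self.get('network')
            if network is None:
                # an unplugged or partially built VIF has no fixed IPs yet,
                # so return an empty list instead of indexing into None
                return []
            return [ip
                    for subnet in network.get('subnets', [])
                    for ip in subnet.get('ips', [])]

With the guard in place, building the instance.interface_attach payload no
longer raises TypeError when the VIF's network has not been populated yet.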


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1737201

Title:
  TypeError when sending notification during attach_interface

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  http://logs.openstack.org/50/524750/1/check/legacy-tempest-dsvm-
  neutron-
  full/eb8d805/logs/screen-n-api.txt.gz?level=TRACE#_Dec_04_13_34_20_635874

  Dec 04 13:34:20.635874 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: ERROR nova.api.openstack.extensions [None 
req-2d1b063f-1324-4498-af68-ce48c6d8e5a3 
tempest-AttachInterfacesTestJSON-149718191 
tempest-AttachInterfacesTestJSON-149718191] Unexpected exception in API method: 
TypeError: 'NoneType' object has no attribute '__getitem__'
  Dec 04 13:34:20.636066 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: Traceback (most recent call last):
  Dec 04 13:34:20.636202 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
163, in _process_incoming
  Dec 04 13:34:20.636336 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: res = self.dispatcher.dispatch(message)
  Dec 04 13:34:20.636474 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
220, in dispatch
  Dec 04 13:34:20.636614 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: return self._do_dispatch(endpoint, method, 
ctxt, args)
  Dec 04 13:34:20.636745 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
190, in _do_dispatch
  Dec 04 13:34:20.636892 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: result = func(ctxt, **new_args)
  Dec 04 13:34:20.637049 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 76, in wrapped
  Dec 04 13:34:20.637187 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: function_name, call_dict, binary)
  Dec 04 13:34:20.637317 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  Dec 04 13:34:20.637442 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: self.force_reraise()
  Dec 04 13:34:20.637607 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  Dec 04 13:34:20.637761 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: six.reraise(self.type_, self.value, self.tb)
  Dec 04 13:34:20.637895 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 67, in wrapped
  Dec 04 13:34:20.638044 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: return f(self, context, *args, **kw)
  Dec 04 13:34:20.638183 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/opt/stack/new/nova/nova/compute/utils.py", line 930, in decorated_function
  Dec 04 13:34:20.638306 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: return function(self, context, *args, 
**kwargs)
  Dec 04 13:34:20.638433 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 215, in decorated_function
  Dec 04 13:34:20.638566 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: kwargs['instance'], e, sys.exc_info())
  Dec 04 13:34:20.638696 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 

[Yahoo-eng-team] [Bug 1598783] Re: Config drives created on RHEL/CentOS 7.1 can't be found

2017-12-19 Thread James Penick
** Changed in: cloud-init
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1598783

Title:
  Config drives created on RHEL/CentOS 7.1 can't be found

Status in CirrOS:
  New
Status in cloud-init:
  Fix Released
Status in cloudbase-init:
  New

Bug description:
  Depending on the exact version of dosfstools used when preparing a
  config drive filesystem, it may not be detected by CirrOS on VM boot.
  This is because CirrOS currently performs a case-sensitive comparison
  of FS labels:

  http://bazaar.launchpad.net/~cirros-
  dev/cirros/trunk/view/head:/src/lib/cirros/shlib#L134

  and mkfs.vfat from CentOS will create an uppercase label "CONFIG-2".

  Apparently, dosfstools won't let you use lowercase labels on CentOS,
  while it works fine on Ubuntu:

  http://paste.openstack.org/show/507193/

  All the descriptions of the config drive format mention "config-2",
  not "CONFIG-2":

  http://cloudinit.readthedocs.io/en/latest/topics/datasources.html
  https://coreos.com/os/docs/latest/config-drive.html
  http://docs.openstack.org/user-guide/cli_config_drive.html

  Nothing is said about whether case-sensitive or case-insensitive string
  comparison should be used when comparing FS labels.
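
  A tiny, hedged illustration of the case-insensitive comparison argued for
  above (written in Python for brevity; CirrOS itself does this in shell):

      def is_config_drive_label(label):
          # accept both 'config-2' and the uppercase 'CONFIG-2' that
          # newer dosfstools writes
          return label is not None and label.lower() == 'config-2'

      assert is_config_drive_label('CONFIG-2')
      assert is_config_drive_label('config-2')
      assert not is_config_drive_label('DATA')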

  It looks like the FAT standard does not specify how labels should be
  treated, but Windows (at least XP) stores them in upper case:

  "For FAT volumes, volume labels are stored as uppercase regardless of
  whether they contain lowercase letters. NTFS volume labels retain and
  display the case used when the label was created."

  https://www.microsoft.com/resources/documentation/windows/xp/all/proddocs
  /en-us/label.mspx?mfr=true

  E.g. in Debian this was considered to be a bug and was fixed:

  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=714971;msg=2

  It was even accepted upstream:

  
https://github.com/dosfstools/dosfstools/commit/465dd8cf8f643bdd39a732e7d7f819a6abdf3d83

  and made it to 3.0.22 release.

  Related bug in MOS: https://bugs.launchpad.net/mos/+bug/1587960

To manage notifications about this bug go to:
https://bugs.launchpad.net/cirros/+bug/1598783/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1739318] Re: Online data migration context does not contain project_id

2017-12-19 Thread Matt Riedemann
** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1739318

Title:
  Online data migration context does not contain project_id

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) newton series:
  New
Status in OpenStack Compute (nova) ocata series:
  New
Status in OpenStack Compute (nova) pike series:
  New

Bug description:
  The online data migration generates a context in order to be able to
  execute migrations:

  https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L747

  However, this context does not contain a `project_id` when running
  this via CLI.

  https://github.com/openstack/nova/blob/master/nova/context.py#L279-L290

  During the creation of RequestSpecs for old instances, the context
  that is used therefore contains no `project_id`.

  
https://github.com/openstack/nova/blob/master/nova/objects/request_spec.py#L611-L622

  This means that a RequestSpec gets created with `project_id` set to
  `null`.  During the day-to-day operations, things work okay, however,
  when attempting to do a live migration, the `project_id` is set to
  `null` when trying to claim resources which the placement API refuses.

  https://github.com/openstack/nova/blob/master/nova/scheduler/utils.py#L791

  This will give errors as such:

  400 Bad Request

  The server could not comply with the request since it is either malformed
  or otherwise incorrect.

  JSON does not validate: None is not of type 'string'

  Failed validating 'type' in schema['properties']['project_id']:
  {'maxLength': 255, 'minLength': 1, 'type': 'string'}

  On instance['project_id']:
  None
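
  As an illustration only (not the actual nova patch), a minimal sketch of
  how the backfill could avoid persisting a null project_id is to fall back
  to the instance's own project when the CLI-generated admin context carries
  none; create_minimal_request_spec is a hypothetical helper name:

      def create_minimal_request_spec(context, instance):
          # the nova-manage admin context has no project_id, so take it
          # from the instance being backfilled instead of storing None
          project_id = context.project_id or instance.project_id
          return {'instance_uuid': instance.uuid, 'project_id': project_id}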

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1739318/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1664931] Re: [OSSA-2017-005] nova rebuild ignores all image properties and scheduler filters (CVE-2017-16239)

2017-12-19 Thread Corey Bryant
** Also affects: nova (Ubuntu Artful)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Zesty)
   Importance: Undecided
   Status: New

** Changed in: nova (Ubuntu Zesty)
   Status: New => Fix Released

** Changed in: nova (Ubuntu Zesty)
   Importance: Undecided => High

** Changed in: nova (Ubuntu Artful)
   Importance: Undecided => High

** Changed in: nova (Ubuntu Artful)
   Status: New => Fix Released

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/pike
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/newton
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ocata
   Importance: Undecided
   Status: New

** Changed in: cloud-archive
   Importance: Undecided => High

** Changed in: cloud-archive
   Status: New => Fix Released

** Changed in: cloud-archive/newton
   Importance: Undecided => High

** Changed in: cloud-archive/newton
   Status: New => Fix Released

** Changed in: cloud-archive/ocata
   Importance: Undecided => High

** Changed in: cloud-archive/ocata
   Status: New => Fix Released

** Changed in: cloud-archive/pike
   Importance: Undecided => High

** Changed in: cloud-archive/pike
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1664931

Title:
  [OSSA-2017-005] nova rebuild ignores all image properties and
  scheduler filters (CVE-2017-16239)

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in Ubuntu Cloud Archive pike series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Fix Committed
Status in OpenStack Compute (nova) ocata series:
  Fix Committed
Status in OpenStack Compute (nova) pike series:
  Fix Committed
Status in OpenStack Security Advisory:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Zesty:
  Fix Released
Status in nova source package in Artful:
  Fix Released

Bug description:
  Big picture: if an image is restricted to certain aggregates or hosts,
  a tenant may use the nova rebuild command to circumvent those
  restrictions. The main issue is with ImagePropertiesFilter, but it may
  also cause issues with flavor/image combinations (for example, it
  allows running a license-restricted OS such as Windows on a host that
  has no such license, or rebuilding an instance with a cheap flavor
  using an image that is restricted to high-priced flavors).

  I don't know if this is a security bug or not; if you consider it a
  non-security issue, please remove the security flag.

  Steps to reproduce:

  1. Set up nova with ImagePropertiesFilter or IsolatedHostsFilter active.
They should allow 'image1' to run only on 'host1', and never on 'host2'.
  2. Boot instance with some other (non-restricted) image on 'host2'.
  3. Use nova rebuild INSTANCE image1

  Expected result:

  nova rejects the rebuild because the given image ('image1') may not run
  on 'host2'.

  Actual result:

  nova happily rebuilds the instance with image1 on host2, violating the
  restrictions.

  Checked affected version: mitaka.

  I believe that, due to the way the 'rebuild' command works, newton and
  master are affected too.
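
  A hedged sketch of the general mitigation direction (not nova's actual
  fix; the helper names below are hypothetical): when a rebuild switches to
  a different image, run the request back through the scheduler filters for
  the instance's current host so that image-based restrictions such as
  ImagePropertiesFilter still apply.

      def rebuild_allowed(scheduler, instance, new_image_ref, current_host):
          if new_image_ref == instance['image_ref']:
              return True  # same image, nothing new to validate
          request = {'image_ref': new_image_ref,
                     'flavor': instance['flavor'],
                     'force_hosts': [current_host]}
          # an empty result means the current host no longer passes the filters
          return bool(scheduler.select_destinations(request))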

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1664931/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1739227] [NEW] test_create_subport_invalid_inherit_network_segmentation_type doesn't obey when parent network is vlan

2017-12-19 Thread Jakub Libosvar
Public bug reported:

test_create_subport_invalid_inherit_network_segmentation_type uses the
default network type. It tests that when 'inherit' is passed as the
segmentation type, the API call fails, on the assumption that the
resulting segmentation type is unsupported. When the test is executed
against a deployment whose default type is supported, the test fails
because the API correctly returns this supported segmentation type,
e.g. vlan.
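
As a hedged illustration of one possible fix direction (not necessarily the
patch that will be proposed), the negative test could skip when the parent
network's type is one that 'inherit' can legitimately resolve to; the constant
and helper below are hypothetical:

    import unittest

    SUPPORTED_SEGMENTATION_TYPES = {'vlan', 'vxlan', 'gre'}

    def skip_if_inherit_would_succeed(parent_network_type):
        # with a supported parent type, 'inherit' is valid and the
        # expected API failure never happens
        if parent_network_type in SUPPORTED_SEGMENTATION_TYPES:
            raise unittest.SkipTest(
                "parent network type %r is supported; 'inherit' would "
                "succeed" % parent_network_type)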

** Affects: neutron
 Importance: Undecided
 Assignee: Jakub Libosvar (libosvar)
 Status: New


** Tags: api trunk

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1739227

Title:
  test_create_subport_invalid_inherit_network_segmentation_type doesn't
  obey when parent network is vlan

Status in neutron:
  New

Bug description:
  test_create_subport_invalid_inherit_network_segmentation_type uses the
  default network type. It tests that when 'inherit' is passed as the
  segmentation type, the API call fails, on the assumption that the
  resulting segmentation type is unsupported. When the test is executed
  against a deployment whose default type is supported, the test fails
  because the API correctly returns this supported segmentation type,
  e.g. vlan.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1739227/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620761] Re: test_create_second_image_when_first_image_is_being_saved intermittently times out in teardown in cells v1 job

2017-12-19 Thread Matt Riedemann
Not seeing any logstash hits on this anymore so marking it invalid. We
can re-open or open a new bug if it shows up again.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1620761

Title:
  test_create_second_image_when_first_image_is_being_saved
  intermittently times out in teardown in cells v1 job

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I've been noticing this failure more often lately:
  2016-09-02 17:06:30.570025 | 
tempest.api.compute.images.test_images_oneserver_negative.ImagesOneServerNegativeTestJSON.test_create_second_image_when_first_image_is_being_saved[id-0460efcf-ee88-4f94-acef-1bf658695456,negative]
  2016-09-02 17:06:30.570109 | 

  2016-09-02 17:06:30.570116 | 
  2016-09-02 17:06:30.570128 | Captured traceback:
  2016-09-02 17:06:30.570140 | ~~~
  2016-09-02 17:06:30.570158 | Traceback (most recent call last):
  2016-09-02 17:06:30.570194 |   File 
"tempest/api/compute/images/test_images_oneserver_negative.py", line 38, in 
tearDown
  2016-09-02 17:06:30.570211 | self.server_check_teardown()
  2016-09-02 17:06:30.570241 |   File "tempest/api/compute/base.py", line 
164, in server_check_teardown
  2016-09-02 17:06:30.570267 | cls.server_id, 'ACTIVE')
  2016-09-02 17:06:30.570295 |   File "tempest/common/waiters.py", line 95, 
in wait_for_server_status
  2016-09-02 17:06:30.570315 | raise 
exceptions.TimeoutException(message)
  2016-09-02 17:06:30.570337 | tempest.exceptions.TimeoutException: Request 
timed out
  2016-09-02 17:06:30.570429 | Details: 
(ImagesOneServerNegativeTestJSON:tearDown) Server 
051f6d7d-15b3-459c-a372-902c5da15b40 failed to reach ACTIVE status and task 
state "None" within the required time (196 s). Current status: ACTIVE. Current 
task state: image_snapshot.

  There are no clear failures in the nova logs from what I see. I'm also
  not sure if we regressed something that is making this fail more often
  in the cells v1 job, but cells v1 is inherently racy, so I wouldn't be
  surprised.

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Details%3A%20(ImagesOneServerNegativeTestJSON%3AtearDown)%20Server%5C%22%20AND%20message%3A%5C%22failed%20to%20reach%20ACTIVE%20status%20and%20task%20state%20%5C%5C%5C%22None%5C%5C%5C%22%20within%20the%20required%20time%5C%22%20AND%20message%3A%5C%22Current%20status%3A%20ACTIVE.%20Current%20task%20state%3A%20image_snapshot.%5C%22%20AND%20build_name%3A%5C
  %22gate-tempest-dsvm-cells%5C%22=7d

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1620761/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1717962] Re: Unhelpful error in the keystone log

2017-12-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/526939
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=2be384b60c94e6e9d9cee7ee9358ea886b6a193c
Submitter: Zuul
Branch:master

commit 2be384b60c94e6e9d9cee7ee9358ea886b6a193c
Author: Gage Hugo 
Date:   Sun Dec 10 12:07:28 2017 -0600

Improve exception logging with 500 response

Currently, when keystone throws a 500 error, depending on the actual
exception type, it can log the message as an exception or as a warning.

Specifically, if the server throws an exception.UnexpectedError, it
does not log this as an exception; it simply logs it as a warning. This
patch set logs the error as an exception if the exception.Error is an
instance of exception.UnexpectedError.

Change-Id: Ia47cc11378ec64d59b7403cb8a284c764148d7a9
Co-Authored-By: Tin Lam 
Closes-Bug: #1717962


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1717962

Title:
  Unhelpful error in the keystone log

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Occasionally an API (i.e. DELETE /v3/domains/) receives an
  HTTP 500 response. However, all we got from keystone log is this

  2017-09-12 23:20:37.995 7321 WARNING keystone.common.wsgi
  [req-e1060272-c8b8-4d51-94f5-98b2b4d84a43
  960c1d5dba8847cfbde96764ee7747bb - default default -] An unexpected
  error prevented the server from fulfilling your request.

  No traceback. No other helpful messages as to what had caused the HTTP
  500. With HTTP 500, I would expect a handsome looking traceback in the
  keystone log.

  So diving into the code, I do see we log the exception if an
  unexpected error is raised.

  https://github.com/openstack/keystone/blob/master/keystone/common/wsgi.py#L248

  But, if the error is exception.UnexpectedError, we don't log the
  exception. We merely log it as a warning.

  https://github.com/openstack/keystone/blob/master/keystone/common/wsgi.py#L238

  Notice that exception.UnexpectedError is an instance of
  exception.Error.

  https://github.com/openstack/keystone/blob/master/keystone/exception.py#L474

  So we have a couple of choices.

  1. Find all the places where exception.UnexpectedError is raised. Log
something meaningful/actionable prior to raising it.
  2. Add a couple of lines of code here,
https://github.com/openstack/keystone/blob/master/keystone/common/wsgi.py#L238,
to log the traceback/exception if e is also an instance of
exception.UnexpectedError (a sketch of this option follows below).
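
A hedged sketch of option 2 (not keystone's final patch): keep warnings for
ordinary keystone Errors, but emit the full traceback when the caught error is
an UnexpectedError. The simplified exception classes below merely stand in for
keystone.exception:

    import logging

    LOG = logging.getLogger(__name__)

    class Error(Exception):            # stand-in for exception.Error
        pass

    class UnexpectedError(Error):      # stand-in for exception.UnexpectedError
        pass

    def log_wsgi_error(e):
        if isinstance(e, UnexpectedError):
            # called from an except block, so the traceback is recorded
            LOG.exception('An unexpected error prevented the server from '
                          'fulfilling your request.')
        elif isinstance(e, Error):
            LOG.warning(str(e))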

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1717962/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1739325] [NEW] Server operations fail to complete with versioned notifications if payload contains unset non-nullable fields

2017-12-19 Thread Mohammed Naser
Public bug reported:

With versioned notifications, the instance payload tries to attach a
flavor payload which it looks up from the instance. It uses the one
attached in instance_extras; however, there seems to be a scenario where
the disabled field is missing in the database, causing all operations to
fail in the notification stage.

The JSON string for the flavor in the database is attached below (note
this is a cloud with a long lifetime so it might be some weird
conversion at some point in the life time of the cloud).

The temporary workaround as suggested by Matt was to switch to
unversioned notification which did the trick.

== flavor ==
{"new": null, "old": null, "cur": {"nova_object.version": "1.1", 
"nova_object.changes": ["root_gb", "name", "ephemeral_gb", "memory_mb", 
"vcpus", "extra_specs", "swap", "rxtx_factor", "flavorid", "vcpu_weight", 
"id"], "nova_object.name": "Flavor", "nova_object.data": {"root_gb": 80, 
"name": "nb.2G", "ephemeral_gb": 0, "memory_mb": 2048, "vcpus": 4, 
"extra_specs": {}, "swap": 0, "rxtx_factor": 1.0, "flavorid": 
"8c6a8477-20cb-4db9-ad1d-be3bc05cdae9", "vcpu_weight": null, "id": 8}, 
"nova_object.namespace": "nova"}}
== flavor ==

== stack ==
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server 
[req-edc9fb83-63ff-4c4b-b6c6-704d331905a8 604d5fd332904975a26b6e89c60a9d51 
d6ebcbe536f848b3af4403f922377f80 - default default] Exception during message 
handling: ValueError: Field `disabled' cannot be None
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 160, in 
_process_incoming
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 213, 
in dispatch
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, 
in _do_dispatch
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 76, in 
wrapped
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server 
function_name, call_dict, binary)
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server 
self.force_reraise()
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 67, in 
wrapped
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server return 
f(self, context, *args, **kw)
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 189, in 
decorated_function
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server "Error: %s", 
e, instance=instance)
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server 
self.force_reraise()
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 159, in 
decorated_function
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/utils.py", line 874, in 
decorated_function
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 217, in 
decorated_function
2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server 

[Yahoo-eng-team] [Bug 1662623] Re: Testing keystone docs are outdated

2017-12-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/523524
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=227d38e4a19dee58a1679afea7132c88b799fcb6
Submitter: Zuul
Branch:master

commit 227d38e4a19dee58a1679afea7132c88b799fcb6
Author: Lance Bragstad 
Date:   Tue Nov 28 20:16:47 2017 +

Update keystone testing documentation

There were a lot of stale bits in our testing document. This commit
attempts to update those bits of information.

Change-Id: Ie99256a6189a7f00623f29c5c5cd49b046f181fe
Closes-Bug: 1662623


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1662623

Title:
  Testing keystone docs are outdated

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Lots of things landed recently related to testing in keystone,
  example: a new tempest plugin, a new devstack plugin which deploys a
  federated environment for keystone, etc. Our docs about testing [1]
  don't have these recent changes and should be updated.

  [1]
  
http://docs.openstack.org/developer/keystone/devref/development_best_practices.html
  #testing-keystone

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1662623/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733754] Re: 500 error if OS-TRUST:trust is not a dict when authenticate

2017-12-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/522107
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=4c824c8088e359d4fd9434e01d1652a26b905335
Submitter: Zuul
Branch:master

commit 4c824c8088e359d4fd9434e01d1652a26b905335
Author: wangxiyuan 
Date:   Wed Nov 22 11:41:35 2017 +0800

Add schema check for OS-TRUST:trust authentication

If the OS-TRUST:trust is not a dict when authenticating,
Keystone will raise a 500 error. This patch adds the
related schema check to avoid the error.

Change-Id: I575440fa507c5274e0c3bc09f4cfcb9b3d91a28c
Closes-bug: #1733754


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1733754

Title:
  500 error if OS-TRUST:trust is not a dict when  authenticate

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  env: master branch

  When a user tries to issue a token and OS-TRUST:trust is not a
dict, keystone will raise a 500 error:
  SZX1000339032 devstack@keystone.service[12272]: ERROR keystone.common.wsgi 
Traceback (most recent call last):
  Nov 07 16:46:18 SZX1000339032 devstack@keystone.service[12272]: ERROR 
keystone.common.wsgi   File "/opt/stack/keystone/keystone/common/wsgi.py", line 
228, in __call__
  Nov 07 16:46:18 SZX1000339032 devstack@keystone.service[12272]: ERROR 
keystone.common.wsgi     LOG.warning(
  Nov 07 16:46:18 SZX1000339032 devstack@keystone.service[12272]: ERROR 
keystone.common.wsgi   File "/opt/stack/keystone/keystone/auth/controllers.py", 
line 114, in authenticate_for_token
  Nov 07 16:46:18 SZX1000339032 devstack@keystone.service[12272]: ERROR 
keystone.common.wsgi     auth_info = core.AuthInfo.create(auth=auth)
  Nov 07 16:46:18 SZX1000339032 devstack@keystone.service[12272]: ERROR 
keystone.common.wsgi   File "/opt/stack/keystone/keystone/auth/core.py", line 
142, in create
  Nov 07 16:46:18 SZX1000339032 devstack@keystone.service[12272]: ERROR 
keystone.common.wsgi     auth_info._validate_and_normalize_auth_data(scope_only)
  Nov 07 16:46:18 SZX1000339032 devstack@keystone.service[12272]: ERROR 
keystone.common.wsgi   File "/opt/stack/keystone/keystone/auth/core.py", line 
295, in _validate_and_normalize_auth_data
  Nov 07 16:46:18 SZX1000339032 devstack@keystone.service[12272]: ERROR 
keystone.common.wsgi     self._validate_and_normalize_scope_data()
  Nov 07 16:46:18 SZX1000339032 devstack@keystone.service[12272]: ERROR 
keystone.common.wsgi   File "/opt/stack/keystone/keystone/auth/core.py", line 
255, in _validate_and_normalize_scope_dat
  Nov 07 16:46:18 SZX1000339032 devstack@keystone.service[12272]: ERROR 
keystone.common.wsgi     self.auth['scope']['OS-TRUST:trust'])
  Nov 07 16:46:18 SZX1000339032 devstack@keystone.service[12272]: ERROR 
keystone.common.wsgi   File "/opt/stack/keystone/keystone/auth/core.py", line 
224, in _lookup_trust
  Nov 07 16:46:18 SZX1000339032 devstack@keystone.service[12272]: ERROR 
keystone.common.wsgi     trust_id = trust_info.get('id')
  Nov 07 16:46:18 SZX1000339032 devstack@keystone.service[12272]: ERROR 
keystone.common.wsgi AttributeError: 'str' object has no attribute 'get'

  Keystone should add OS-TRUST:trust into the schema check as well.
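
  For illustration, a hedged sketch of the kind of constraint such a schema
  check adds (the field names follow the API shown above, but this is not
  keystone's exact schema): the scope's OS-TRUST:trust must be a JSON object
  carrying a string id.

      trust_scope_schema = {
          'type': 'object',
          'properties': {
              'OS-TRUST:trust': {
                  'type': 'object',
                  'properties': {'id': {'type': 'string'}},
                  'required': ['id'],
                  'additionalProperties': False,
              },
          },
      }

  With a constraint like this, passing a plain string for OS-TRUST:trust
  fails validation with a 400 instead of reaching _lookup_trust and blowing
  up with the AttributeError above.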

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1733754/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1739323] Re: KeyError in host_manager for _get_host_states

2017-12-19 Thread Matt Riedemann
https://github.com/openstack/nova/commit/4660333d0d97d8e00cf290ea1d4ed932f5edc1dc
#diff-978b9f8734365934eaf8fbb01f11a7d7L624

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => High

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
   Status: New => Confirmed

** Changed in: nova/pike
   Status: New => Confirmed

** Changed in: nova/pike
   Importance: Undecided => High

** Changed in: nova/ocata
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1739323

Title:
  KeyError in host_manager for _get_host_states

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) ocata series:
  Confirmed
Status in OpenStack Compute (nova) pike series:
  Confirmed

Bug description:
  
https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L674-L718

  In _get_host_states, a list of all compute nodes is retrieved with a
  `state_key` of `(host, node)`.

  
https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L692
  
https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L708

  The small piece of code here removes all of the dead compute nodes
  from host_state_map

  
https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L708

  However, the result is returned by iterating over all seen nodes and
  using that index for host_state_map, some of which have been deleted
  by the code above, resulting in a KeyError.

  
https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L718

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1739323/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1739325] Re: Server operations fail to complete with versioned notifications

2017-12-19 Thread Matt Riedemann
Here is a prettier form of the json:

{
   "new":null,
   "old":null,
   "cur":{
  "nova_object.version":"1.1",
  "nova_object.changes":[
 "root_gb",
 "name",
 "ephemeral_gb",
 "memory_mb",
 "vcpus",
 "extra_specs",
 "swap",
 "rxtx_factor",
 "flavorid",
 "vcpu_weight",
 "id"
  ],
  "nova_object.name":"Flavor",
  "nova_object.data":{
 "root_gb":80,
 "name":"nb.2G",
 "ephemeral_gb":0,
 "memory_mb":2048,
 "vcpus":4,
 "extra_specs":{

 },
 "swap":0,
 "rxtx_factor":1.0,
 "flavorid":"8c6a8477-20cb-4db9-ad1d-be3bc05cdae9",
 "vcpu_weight":null,
 "id":8
  },
  "nova_object.namespace":"nova"
   }
}
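
For illustration, a hedged sketch of one way to tolerate such legacy records
(not the actual nova fix): default any unset, non-nullable boolean fields
before building the versioned flavor payload, so 'disabled' never reaches the
notification code as None.

    def sanitize_legacy_flavor(flavor_data):
        # 'disabled' and 'is_public' are non-nullable on the Flavor object
        # but can be absent from very old instance_extra rows like the above
        defaults = {'disabled': False, 'is_public': True}
        for field, default in defaults.items():
            if flavor_data.get(field) is None:
                flavor_data[field] = default
        return flavor_data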

** Tags added: notifications

** Changed in: nova
   Importance: Undecided => High

** Summary changed:

- Server operations fail to complete with versioned notifications
+ Server operations fail to complete with versioned notifications if payload 
contains unset non-nullable fields

** Also affects: nova/pike
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1739325

Title:
  Server operations fail to complete with versioned notifications if
  payload contains unset non-nullable fields

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) pike series:
  New

Bug description:
  With versioned notifications, the instance payload tries to attach a
  flavor payload which it looks up from the instance. It uses the one
  attached in instance_extras; however, there seems to be a scenario
  where the disabled field is missing in the database, causing all
  operations to fail in the notification stage.

  The JSON string for the flavor in the database is attached below (note
  this is a cloud with a long lifetime so it might be some weird
  conversion at some point in the life time of the cloud).

  The temporary workaround as suggested by Matt was to switch to
  unversioned notification which did the trick.

  == flavor ==
  {"new": null, "old": null, "cur": {"nova_object.version": "1.1", 
"nova_object.changes": ["root_gb", "name", "ephemeral_gb", "memory_mb", 
"vcpus", "extra_specs", "swap", "rxtx_factor", "flavorid", "vcpu_weight", 
"id"], "nova_object.name": "Flavor", "nova_object.data": {"root_gb": 80, 
"name": "nb.2G", "ephemeral_gb": 0, "memory_mb": 2048, "vcpus": 4, 
"extra_specs": {}, "swap": 0, "rxtx_factor": 1.0, "flavorid": 
"8c6a8477-20cb-4db9-ad1d-be3bc05cdae9", "vcpu_weight": null, "id": 8}, 
"nova_object.namespace": "nova"}}
  == flavor ==

  == stack ==
  2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server 
[req-edc9fb83-63ff-4c4b-b6c6-704d331905a8 604d5fd332904975a26b6e89c60a9d51 
d6ebcbe536f848b3af4403f922377f80 - default default] Exception during message 
handling: ValueError: Field `disabled' cannot be None
  2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 160, in 
_process_incoming
  2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 213, 
in dispatch
  2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, 
in _do_dispatch
  2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 76, in 
wrapped
  2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server 
function_name, call_dict, binary)
  2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server 
self.force_reraise()
  2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  2017-10-23 14:49:21.117 40200 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 67, in 
wrapped
  2017-10-23 14:49:21.117 40200 ERROR 

[Yahoo-eng-team] [Bug 1737599] Re: Instance resize with new-style attach volume fails

2017-12-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/527228
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=4f61f9be1704d3ed51249a10360b8c48e4cd53ff
Submitter: Zuul
Branch:master

commit 4f61f9be1704d3ed51249a10360b8c48e4cd53ff
Author: Matt Riedemann 
Date:   Mon Dec 11 15:33:43 2017 -0500

Update and complete volume attachments during resize

With the new cinder volume attachment flow, during a resize the
source compute will create a new volume attachment for any BDMs
connected to the instance and delete the existing volume attachments
which represents the connection to the source host. The finish_resize
flow on the destination compute will then refresh the connection info
on the BDMs it's working with before passing them to the virt driver.

Since the source compute updated the BDM.attachment_id to point at
the new attachment which is meant for the destination host, the
refresh_connection_info call will get the connection_info for the new
reserved attachment, which is actually empty since it hasn't been
connected to the destination host yet. This results in wiping out the
BDM.connection_info which has the "driver_volume_type" value which is
something the virt driver on the destination compute needs to know
which volume backend driver to use to connect the volume to the
destination host.

This change updates the volume attachments on the destination host
before refreshing the connection_info in the BDMs and before we call
the driver to finish the resize, and then we also complete the
volume attachments once the driver has successfully finished the
resize. This is similar to what would happen via _prep_block_device
during the initial instance create, but since the code paths into the
driver are different we have to handle this explicitly in the compute
manager.

Similarly, we have to perform the same incantation when reverting a
resize and going back to the original source host.

Closes-Bug: #1737599

Change-Id: Ifc80d07d94311534fd9e7824ede9d09223a011c2
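
A hedged sketch of the ordering described above (not the real compute manager
code; the method names are approximations of nova's cinder wrapper): update
the reserved attachments with the destination host's connector and refresh the
BDM connection_info before the driver finishes the resize, then mark the
attachments complete once it succeeds.

    def finish_resize_volumes(volume_api, context, bdms, dest_connector,
                              finish_resize_on_driver):
        for bdm in bdms:
            # point the reserved attachment at the destination host ...
            volume_api.attachment_update(context, bdm.attachment_id,
                                         dest_connector)
            # ... so the refreshed connection_info carries driver_volume_type
            bdm.refresh_connection_info(context, volume_api)
        finish_resize_on_driver()
        for bdm in bdms:
            volume_api.attachment_complete(context, bdm.attachment_id)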


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1737599

Title:
  Instance resize with new-style attach volume fails

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The Trove gates are failing when attempting to resize an instance that
  has an ephemeral disk and an attached volume.

  The stack when it fails is this:
  Dec 11 03:03:28.751318 ubuntu-xenial-rax-dfw-0001351106 nova-compute[28059]: 
ERROR nova.compute.manager [None req-11c69857-4556-4d83-b34c-1a0191175ceb 
alt_demo alt_demo] [instance: 85cdb482-63a5-487a-b103-95b9383ffcc7] Setting 
instance vm_state to ERROR: VolumeDriverNotFound: Could not find a handler for 
None volume.
  Dec 11 03:03:28.751537 ubuntu-xenial-rax-dfw-0001351106 nova-compute[28059]: 
ERROR nova.compute.manager [instance: 85cdb482-63a5-487a-b103-95b9383ffcc7] 
Traceback (most recent call last):
  Dec 11 03:03:28.751683 ubuntu-xenial-rax-dfw-0001351106 nova-compute[28059]: 
ERROR nova.compute.manager [instance: 85cdb482-63a5-487a-b103-95b9383ffcc7]   
File "/opt/stack/new/nova/nova/compute/manager.py", line 7297, in 
_error_out_instance_on_exception
  Dec 11 03:03:28.751831 ubuntu-xenial-rax-dfw-0001351106 nova-compute[28059]: 
ERROR nova.compute.manager [instance: 85cdb482-63a5-487a-b103-95b9383ffcc7] 
yield
  Dec 11 03:03:28.751970 ubuntu-xenial-rax-dfw-0001351106 nova-compute[28059]: 
ERROR nova.compute.manager [instance: 85cdb482-63a5-487a-b103-95b9383ffcc7]   
File "/opt/stack/new/nova/nova/compute/manager.py", line 4358, in finish_resize
  Dec 11 03:03:28.752120 ubuntu-xenial-rax-dfw-0001351106 nova-compute[28059]: 
ERROR nova.compute.manager [instance: 85cdb482-63a5-487a-b103-95b9383ffcc7] 
disk_info, image_meta, bdms)
  Dec 11 03:03:28.752261 ubuntu-xenial-rax-dfw-0001351106 nova-compute[28059]: 
ERROR nova.compute.manager [instance: 85cdb482-63a5-487a-b103-95b9383ffcc7]   
File "/opt/stack/new/nova/nova/compute/manager.py", line 4326, in _finish_resize
  Dec 11 03:03:28.752408 ubuntu-xenial-rax-dfw-0001351106 nova-compute[28059]: 
ERROR nova.compute.manager [instance: 85cdb482-63a5-487a-b103-95b9383ffcc7] 
old_instance_type)
  Dec 11 03:03:28.752545 ubuntu-xenial-rax-dfw-0001351106 nova-compute[28059]: 
ERROR nova.compute.manager [instance: 85cdb482-63a5-487a-b103-95b9383ffcc7]   
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, 
in __exit__
  Dec 11 03:03:28.752680 ubuntu-xenial-rax-dfw-0001351106 nova-compute[28059]: 
ERROR nova.compute.manager [instance: 85cdb482-63a5-487a-b103-95b9383ffcc7] 
self.force_reraise()
  Dec 11 03:03:28.752813 ubuntu-xenial-rax-dfw-0001351106 nova-compute[28059]: 

[Yahoo-eng-team] [Bug 1739318] [NEW] Online data migration context does not contain project_id

2017-12-19 Thread Mohammed Naser
Public bug reported:

The online data migration generates a context in order to be able to
execute migrations:

https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L747

However, this context does not contain a `project_id` when running this
via CLI.

https://github.com/openstack/nova/blob/master/nova/context.py#L279-L290

During the creation of RequestSpecs for old instances, the context
that is used therefore contains no `project_id`.

https://github.com/openstack/nova/blob/master/nova/objects/request_spec.py#L611-L622

This means that a RequestSpec gets created with `project_id` set to
`null`.  During the day-to-day operations, things work okay, however,
when attempting to do a live migration, the `project_id` is set to
`null` when trying to claim resources which the placement API refuses.

https://github.com/openstack/nova/blob/master/nova/scheduler/utils.py#L791

This will give errors as such:

400 Bad Request

The server could not comply with the request since it is either malformed or
otherwise incorrect.

JSON does not validate: None is not of type 'string'

Failed validating 'type' in schema['properties']['project_id']:
{'maxLength': 255, 'minLength': 1, 'type': 'string'}

On instance['project_id']:
None

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1739318

Title:
  Online data migration context does not contain project_id

Status in OpenStack Compute (nova):
  New

Bug description:
  The online data migration generates a context in order to be able to
  execute migrations:

  https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L747

  However, this context does not contain a `project_id` when running
  this via CLI.

  https://github.com/openstack/nova/blob/master/nova/context.py#L279-L290

  During the creation of RequestSpecs for old instances, the context
  that is used therefore contains no `project_id`.

  
https://github.com/openstack/nova/blob/master/nova/objects/request_spec.py#L611-L622

  This means that a RequestSpec gets created with `project_id` set to
  `null`.  During the day-to-day operations, things work okay, however,
  when attempting to do a live migration, the `project_id` is set to
  `null` when trying to claim resources which the placement API refuses.

  https://github.com/openstack/nova/blob/master/nova/scheduler/utils.py#L791

  This will give errors as such:

  400 Bad Request

  The server could not comply with the request since it is either malformed
  or otherwise incorrect.

  JSON does not validate: None is not of type 'string'

  Failed validating 'type' in schema['properties']['project_id']:
  {'maxLength': 255, 'minLength': 1, 'type': 'string'}

  On instance['project_id']:
  None

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1739318/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1739323] [NEW] KeyError in host_manager for _get_host_states

2017-12-19 Thread Mohammed Naser
Public bug reported:

https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L674-L718

In _get_host_states, a list of all compute nodes is retrieved with a
`state_key` of `(host, node)`.

https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L692
https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L708

The small piece of code here removes all of the dead compute nodes from
host_state_map

https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L708

However, the result is returned by iterating over all seen nodes and
using that index for host_state_map, some of which have been deleted by
the code above, resulting in a KeyError.

https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L718
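
A hedged sketch of the failure mode and the obvious guard (not the actual
patch): once dead compute nodes have been dropped from host_state_map, only
the keys that survived should be used when returning results.

    def get_host_states(seen_nodes, host_state_map, dead_nodes):
        for state_key in dead_nodes:
            host_state_map.pop(state_key, None)
        # iterating over seen_nodes directly can hit (host, node) keys that
        # were just removed above, which is the KeyError reported here
        return [host_state_map[state_key] for state_key in seen_nodes
                if state_key in host_state_map]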

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1739323

Title:
  KeyError in host_manager for _get_host_states

Status in OpenStack Compute (nova):
  New

Bug description:
  
https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L674-L718

  In _get_host_states, a list of all compute nodes is retrieved with a
  `state_key` of `(host, node)`.

  
https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L692
  
https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L708

  The small piece of code here removes all of the dead compute nodes
  from host_state_map

  
https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L708

  However, the result is returned by iterating over all seen nodes and
  using that index for host_state_map, some of which have been deleted
  by the code above, resulting in a KeyError.

  
https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L718

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1739323/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1383542] Re: boot from image(create a new volume) lost image property

2017-12-19 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1383542

Title:
  boot from image(create a new volume) lost image property

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  When we create an instance by booting from an image (create a new
volume), if the image has one or more properties, the instance boots
without those properties.
  For example, we add the property 'hw_qemu_guest_agent=yes' to image
ubt1204:
  +--------------------------------+--------------------------------------+
  | Property                       | Value                                |
  +--------------------------------+--------------------------------------+
  | Property 'hw_qemu_guest_agent' | yes                                  |
  | checksum                       | 2823c8bf0336349b23f20fb75ec60626     |
  | container_format               | bare                                 |
  | created_at                     | 2014-10-15T09:07:31                  |
  | deleted                        | False                                |
  | disk_format                    | raw                                  |
  | id                             | 92811d87-d905-4ef4-b173-c7a17805cf9b |
  | is_public                      | False                                |
  | min_disk                       | 0                                    |
  | min_ram                        | 0                                    |
  | name                           | qga-ubt1404                          |
  | owner                          | b85e1c03b2c84e079417d57ffce97751     |
  | protected                      | False                                |
  | size                           | 5368709120                           |
  | status                         | active                               |
  | updated_at                     | 2014-10-15T09:08:54                  |
  +--------------------------------+--------------------------------------+

  Then, we create an instance on Horizon by booting from the image
(create a new volume); the libvirt.xml of this instance does not
contain the config below:
  




  If we instead create an instance by booting from the image directly,
  the libvirt.xml does contain the above config.

  I read the code and found the following:
  if we create an instance by booting from an image (create a new volume),
image_ref in the instance is null. Below is the code that creates libvirt.xml:
  def to_xml(self, context, instance, network_info, disk_info,
             image_meta=None, rescue=None,
             block_device_info=None, write_to_disk=False):
      # We should get image metadata everytime for generating xml
      if image_meta is None:
          (image_service, image_id) = glance.get_remote_image_service(
              context, instance['image_ref'])
          image_meta = compute_utils.get_image_metadata(
              context, image_service, image_id, instance)
      # NOTE(danms): Stringifying a NetworkInfo will take a lock. Do
      # this ahead of time so that we don't acquire it while also
      # holding the logging lock.
      network_info_str = str(network_info)
      LOG.debug(_('Start to_xml '
                  'network_info=%(network_info)s '
                  'disk_info=%(disk_info)s '
                  'image_meta=%(image_meta)s rescue=%(rescue)s'
                  'block_device_info=%(block_device_info)s'),
                {'network_info': network_info_str, 'disk_info': disk_info,
                 'image_meta': image_meta, 'rescue': rescue,
                 'block_device_info': block_device_info})
      conf = self.get_guest_config(instance, network_info, image_meta,
                                   disk_info, rescue, block_device_info)
      xml = conf.to_xml()

      if write_to_disk:
          instance_dir = libvirt_utils.get_instance_path(instance)
          xml_path = os.path.join(instance_dir, 'libvirt.xml')
          libvirt_utils.write_to_file(xml_path, xml)

      LOG.debug(_('End to_xml instance=%(instance)s xml=%(xml)s'),
                {'instance': instance, 'xml': xml})
      return xml

  image_ref is null, so it can't generate the full libvirt.xml.
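
  A hedged sketch of a possible fallback (not Horizon's or nova's actual fix;
  the client calls are only illustrative): when image_ref is empty because
  the instance was booted from a volume, the image properties would have to
  be read from the root volume's volume_image_metadata instead of Glance.

      def get_image_meta(instance, image_api, volume_api, root_volume_id):
          if instance.get('image_ref'):
              return image_api.get(instance['image_ref'])
          # boot-from-volume: Glance knows nothing, but Cinder copied the
          # image properties (e.g. hw_qemu_guest_agent) onto the volume
          volume = volume_api.get(root_volume_id)
          return volume.get('volume_image_metadata', {})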

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1383542/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1739349] [NEW] empty usage information in numa_topology of compute_node table after restart nova-compute

2017-12-19 Thread Minho Ban
Public bug reported:

Description
===
Since Ocata, the usage information in numa_topology of compute_nodes in the DB
disappears around 2 minutes after a VM is spawned.

Steps to reproduce
==
* Enable NUMATopologyFilter to use vCPU pinning
* Launch a VM with flavor having NUMA context like hw:cpu_policy=dedicated or 
hw:mem_page_size=large
* Check numa_topology of compute_nodes in nova DB to check whether NUMA usage 
is applied
* wait for 2 minutes (more or less)
* Check numa_topology of compute_nodes in nova DB to check whether NUMA usage 
has been reset

Expected result
===

There should have no changes in the DB.

Actual result
=

numa_topology of compute_nodes has been reset (usage information has
gone)

Environment
===
1. RDO Ocata

2. CentOS

Logs & Configs
==

NUMA usage information is alive right after a VM is spawned. (focusing
on pinned_cpus and memory_usage)

$ mysql -s nova -e "select numa_topology from compute_nodes where 
host='ocata1';"
numa_topology
{"nova_object.version": "1.2", "nova_object.changes": ["cells"], 
"nova_object.name": "NUMATopology", "nova_object.data": {"cells": 
[{"nova_object.version": "1.2", "nova_object.changes": ["cpu_usage", 
"memory_usage", "cpuset", "pinned_cpus", "siblings", "memory", "mempages", 
"id"], "nova_object.name": "NUMACell", "nova_object.data": {"cpu_usage": 4, 
"memory_usage": 1024, "cpuset": [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 
15, 16, 17, 18, 19], "pinned_cpus": [16, 17, 10, 11], "siblings": [[16, 17], 
[10, 11], [4, 5], [8, 9], [12, 13], [2, 3], [14, 15], [6, 7], [18, 19]], 
"memory": 20479, "mempages": [{"nova_object.version": "1.1", 
"nova_object.changes": ["used", "total", "reserved", "size_kb"], 
"nova_object.name": "NUMAPagesTopology", "nova_object.data": {"used": 0, 
"total": 4456317, "reserved": 0, "size_kb": 4}, "nova_object.namespace": 
"nova"}, {"nova_object.version": "1.1", "nova_object.changes": ["total", 
"used", "reserved", "size_kb"], "nova_object.name": "NUMAPagesTopology", 
"nova_object.data": {"used": 1, "total": 3, "reserved": 0, "size_kb": 1048576}, 
"nova_object.namespace": "nova"}], "id": 0}, "nova_object.namespace": "nova"}, 
{"nova_object.version": "1.2", "nova_object.changes": ["cpu_usage", 
"memory_usage", "cpuset", "pinned_cpus", "siblings", "memory", "mempages", 
"id"], "nova_object.name": "NUMACell", "nova_object.data": {"cpu_usage": 0, 
"memory_usage": 0, "cpuset": [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 
34, 35, 36, 37, 38, 39], "pinned_cpus": [], "siblings": [[32, 33], [36, 37], 
[22, 23], [24, 25], [28, 29], [30, 31], [38, 39], [26, 27], [34, 35]], 
"memory": 20480, "mempages": [{"nova_object.version": "1.1", 
"nova_object.changes": ["used", "total", "reserved", "size_kb"], 
"nova_object.name": "NUMAPagesTopology", "nova_object.data": {"used": 0, 
"total": 4718592, "reserved": 0, "size_kb": 4}, "nova_object.namespace": 
"nova"}, {"nova_object.version": "1.1", "nova_object.changes": ["used", 
"total", "reserved", "size_kb"], "nova_object.name": "NUMAPagesTopology", 
"nova_object.data": {"used": 0, "total": 2, "reserved": 0, "size_kb": 1048576}, 
"nova_object.namespace": "nova"}], "id": 1}, "nova_object.namespace": 
"nova"}]}, "nova_object.namespace": "nova"}

But after approximately 2 minutes, the usage information in numa_topology
is gone:

# mysql -s nova -e "select numa_topology from compute_nodes where 
host='ocata1';"
numa_topology
{"nova_object.version": "1.2", "nova_object.changes": ["cells"], 
"nova_object.name": "NUMATopology", "nova_object.data": {"cells": 
[{"nova_object.version": "1.2", "nova_object.changes": ["cpu_usage", 
"memory_usage", "cpuset", "mempages", "pinned_cpus", "memory", "siblings", 
"id"], "nova_object.name": "NUMACell", "nova_object.data": {"cpu_usage": 0, 
"memory_usage": 0, "cpuset": [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 
16, 17, 18, 19], "pinned_cpus": [], "siblings": [[16, 17], [10, 11], [4, 5], 
[8, 9], [12, 13], [2, 3], [14, 15], [6, 7], [18, 19]], "memory": 20479, 
"mempages": [{"nova_object.version": "1.1", "nova_object.changes": ["total", 
"used", "reserved", "size_kb"], "nova_object.name": "NUMAPagesTopology", 
"nova_object.data": {"used": 0, "total": 4456317, "reserved": 0, "size_kb": 4}, 
"nova_object.namespace": "nova"}, {"nova_object.version": "1.1", 
"nova_object.changes": ["total", "used", "reserved", "size_kb"], 
"nova_object.name": "NUMAPagesTopology", "nova_object.data": {"used": 0, 
"total": 3, "reserved": 0, "size_kb": 1048576}, "nova_object.namespace": 
"nova"}], "id": 0}, "nova_object.namespace": "nova"}, {"nova_object.version": 
"1.2", "nova_object.changes": ["cpu_usage", "memory_usage", "cpuset", 
"mempages", "pinned_cpus", "memory", "siblings", "id"], "nova_object.name": 
"NUMACell", "nova_object.data": {"cpu_usage": 0, "memory_usage": 0, "cpuset": 
[22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39], 
"pinned_cpus": [], "siblings": [[32, 33], [36, 

[Yahoo-eng-team] [Bug 1738946] Re: so many Relationship links in api-ref documentation are useless

2017-12-19 Thread Gage Hugo
The links aren't useless, but it has come up multiple times that users
are confused about what their purpose is.  The best explanation we have
for them is here [0].

[0] https://bugs.launchpad.net/keystone/+bug/1674676/comments/3

** Changed in: keystone
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1738946

Title:
  so many Relationship links in api-ref documentation are useless

Status in OpenStack Identity (keystone):
  Opinion

Bug description:
  Many Relationship links in the api-ref doc are useless.

  For example, in the api-ref/v3/users.inc file, the Relationship link of
  list users is 'https://docs.openstack.org/api/openstack-identity/3/rel/users';
  however, when I open it in my Chrome browser, it redirects to
  'https://docs.openstack.org/pike/api/'.

  The same issue happens in roles.inc, projects.inc, etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1738946/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1739367] [NEW] Error updating resources due to instancecell.cpu_pinning None

2017-12-19 Thread Ma Wen Cheng
Public bug reported:

 Error updating resources for node valor5-dal09-ce47.
ERROR nova.compute.manager Traceback (most recent call last):
ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6500, in 
update_available_resource
ERROR nova.compute.manager rt.update_available_resource(context)
ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 528, 
in update_available_resource
ERROR nova.compute.manager self._update_available_resource(context, 
resources)
ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 274, in 
inner
ERROR nova.compute.manager return f(*args, **kwargs)
ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 573, 
in _update_available_resource
ERROR nova.compute.manager self._update_usage_from_instances(context, 
instances)
ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 936, 
in _update_usage_from_instances
ERROR nova.compute.manager self._update_usage_from_instance(context, 
instance)
ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 902, 
in _update_usage_from_instance
ERROR nova.compute.manager self._update_usage(instance, sign=sign)
ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 741, 
in _update_usage
ERROR nova.compute.manager self.compute_node, usage, free)
ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/virt/hardware.py", line 1446, in 
get_host_numa_usage_from_instance
ERROR nova.compute.manager host_numa_topology, instance_numa_topology, 
free=free))
ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/virt/hardware.py", line 1306, in 
numa_usage_from_instances
ERROR nova.compute.manager pinned_cpus = 
set(instancecell.cpu_pinning.values())
ERROR nova.compute.manager AttributeError: 'NoneType' object has no attribute 
'values'
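
A minimal illustration of a defensive check at the point shown in the
traceback; this is a sketch of one possible guard, not the actual nova patch:

# Sketch: cpu_pinning can legitimately be None for an instance NUMA cell
# (e.g. when hw:cpu_policy is not 'dedicated'), so guard before .values().
pinned_cpus = set()
if instancecell.cpu_pinning:
    pinned_cpus = set(instancecell.cpu_pinning.values())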

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

- [req-ef854f56-7298-47ce-9179-212d66adda96 - - - - -] Error updating resources 
for node valor5-dal09-ce47.
- 2017-12-20 00:38:43.569 8563 ERROR nova.compute.manager Traceback (most 
recent call last):
- 2017-12-20 00:38:43.569 8563 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6500, in 
update_available_resource
- 2017-12-20 00:38:43.569 8563 ERROR nova.compute.manager 
rt.update_available_resource(context)
- 2017-12-20 00:38:43.569 8563 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 528, 
in update_available_resource
- 2017-12-20 00:38:43.569 8563 ERROR nova.compute.manager 
self._update_available_resource(context, resources)
- 2017-12-20 00:38:43.569 8563 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 274, in 
inner
- 2017-12-20 00:38:43.569 8563 ERROR nova.compute.manager return f(*args, 
**kwargs)
- 2017-12-20 00:38:43.569 8563 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 573, 
in _update_available_resource
- 2017-12-20 00:38:43.569 8563 ERROR nova.compute.manager 
self._update_usage_from_instances(context, instances)
- 2017-12-20 00:38:43.569 8563 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 936, 
in _update_usage_from_instances
- 2017-12-20 00:38:43.569 8563 ERROR nova.compute.manager 
self._update_usage_from_instance(context, instance)
- 2017-12-20 00:38:43.569 8563 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 902, 
in _update_usage_from_instance
- 2017-12-20 00:38:43.569 8563 ERROR nova.compute.manager 
self._update_usage(instance, sign=sign)
- 2017-12-20 00:38:43.569 8563 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 741, 
in _update_usage
- 2017-12-20 00:38:43.569 8563 ERROR nova.compute.manager 
self.compute_node, usage, free)
- 2017-12-20 00:38:43.569 8563 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/virt/hardware.py", line 1446, in 
get_host_numa_usage_from_instance
- 2017-12-20 00:38:43.569 8563 ERROR nova.compute.manager 
host_numa_topology, instance_numa_topology, free=free))
- 2017-12-20 00:38:43.569 8563 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/virt/hardware.py", line 1306, in 
numa_usage_from_instances
- 2017-12-20 00:38:43.569 8563 ERROR nova.compute.manager pinned_cpus = 
set(instancecell.cpu_pinning.values())
- 2017-12-20 00:38:43.569 8563 ERROR 

[Yahoo-eng-team] [Bug 1739368] [NEW] Error message misleading when cross_az is false

2017-12-19 Thread Marc Koderer
Public bug reported:

When cross_az volume attachment is set to false, the end-user sees the
following error when trying to boot from a different AZ:
openstack server create --volume mko_test01 --flavor m1.small --nic 
net-id=61389c4b-631d-4fe1-8aa0-7bc658f373ec  --availability-zone az_2_1 
mko-test2
Block Device Mapping is Invalid: failed to get volume 
96eec88d-61e7-4aaa-be86-ca9bb9249648. (HTTP 400) (Request-ID: 
req-e0f554e6-8ccd-4f98-9086-3fdd1e53d66e)

Actually there is no problem in "getting the volume" - Cinder responds
with 200. But the AZ check raises an exception that is nested in this
area.
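
For reference, the behaviour described above is controlled by the nova
cross_az_attach option; a minimal nova.conf sketch:

[cinder]
cross_az_attach = False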

** Affects: nova
 Importance: Undecided
 Assignee: Marc Koderer (m-koderer)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Marc Koderer (m-koderer)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1739368

Title:
  Error message misleading when cross_az is false

Status in OpenStack Compute (nova):
  New

Bug description:
  Switching cross_az volume attachment to false the end-user see the
  following error when trying to boot from a different AZ:

  openstack server create --volume mko_test01 --flavor m1.small --nic 
net-id=61389c4b-631d-4fe1-8aa0-7bc658f373ec  --availability-zone az_2_1 
mko-test2
  Block Device Mapping is Invalid: failed to get volume 
96eec88d-61e7-4aaa-be86-ca9bb9249648. (HTTP 400) (Request-ID: 
req-e0f554e6-8ccd-4f98-9086-3fdd1e53d66e)

  Actually there is no problem in "getting the volume" - Cinder responds
  with 200. But the AZ check raises an exception that is nested in this
  area.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1739368/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1738983] [NEW] Dynamic routing: invalid exception inputs leads to test exceptions

2017-12-19 Thread Gary Kotton
Public bug reported:

Unit tests have exceptions:

Exception encountered during bgp_speaker rescheduling.
Traceback (most recent call last):
  File 
"/home/gkotton/neutron-dynamic-routing/.tox/py27/src/neutron/neutron/db/agentschedulers_db.py",
 line 159, in reschedule_resources_from_down_agents
reschedule_resource(context, binding_resource_id)
  File "neutron_dynamic_routing/db/bgp_dragentscheduler_db.py", line 174, in 
reschedule_bgp_speaker
failure_reason="no eligible dr agent found")
TypeError: __init__() got an unexpected keyword argument 'failure_reason'
BgpDrAgent 84dfa040-c1d4-40b1-8a16-0502ffccbb5b is down

The tests pass but this will lead to extra failures in the real world.
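
For illustration only, a toy reproduction of the failure mode; the class
below is a hypothetical stand-in, not the real neutron-dynamic-routing
exception class:

class DrAgentNotFound(Exception):
    """Hypothetical stand-in for the rescheduling exception."""
    def __init__(self, bgp_speaker_id):
        super(DrAgentNotFound, self).__init__(
            "BGP speaker %s could not be rescheduled" % bgp_speaker_id)

# Passing a keyword the constructor does not accept raises the same
# TypeError seen in the traceback above:
DrAgentNotFound(bgp_speaker_id="x", failure_reason="no eligible dr agent found")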

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1738983

Title:
  Dynamic routing: invalid exception inputs leads to test exceptions

Status in neutron:
  In Progress

Bug description:
  Unit tests have exceptions:

  Exception encountered during bgp_speaker rescheduling.
  Traceback (most recent call last):
File 
"/home/gkotton/neutron-dynamic-routing/.tox/py27/src/neutron/neutron/db/agentschedulers_db.py",
 line 159, in reschedule_resources_from_down_agents
  reschedule_resource(context, binding_resource_id)
File "neutron_dynamic_routing/db/bgp_dragentscheduler_db.py", line 174, in 
reschedule_bgp_speaker
  failure_reason="no eligible dr agent found")
  TypeError: __init__() got an unexpected keyword argument 'failure_reason'
  BgpDrAgent 84dfa040-c1d4-40b1-8a16-0502ffccbb5b is down

  The tests pass but this will lead to extra failures in the real world.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1738983/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1738997] [NEW] don't call sync_guest_time if qga is not enabled

2017-12-19 Thread Chen Hanxiao
Public bug reported:

Description
===

sync_guest_time relies on qemu guest agent.

If hw_qemu_guest_agent is not set, we'll get:

DEBUG nova.virt.libvirt.guest Failed to set time: agent not configured

if sync_guest_time is called.

We could improve this by checking whether qga is enabled,
rather than calling libvirt and catching an exception.
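
A minimal sketch of the suggested check, assuming image metadata is available
at the call site; the helper below is hypothetical, only the property name
comes from the report:

# Sketch: only attempt the guest time sync when the image declared a
# qemu guest agent, instead of calling libvirt and logging the failure.
def _qga_enabled(image_meta):
    props = image_meta.get('properties', {})
    return str(props.get('hw_qemu_guest_agent', 'no')).lower() == 'yes'

if _qga_enabled(image_meta):
    guest.sync_guest_time()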

Steps to reproduce
==
1) start a VM without hw_qemu_guest_agent
2) pause that VM then unpause
3) check the log of nova-compute


Expected result
===
No related logs

Actual result
=
Got an error log: Failed to set time: agent not configured

** Affects: nova
 Importance: Undecided
 Assignee: Chen Hanxiao (chenhanxiao)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Chen Hanxiao (chenhanxiao)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1738997

Title:
  don't call sync_guest_time if qga is not enabled

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  sync_guest_time relies on qemu guest agent.

  If hw_qemu_guest_agent is not set, we'll get:

  DEBUG nova.virt.libvirt.guest Failed to set time: agent not configured

  if sync_guest_time is called.

  We could improve this by checking whether qga is enabled,
  rather than calling libvirt and catching an exception.

  Steps to reproduce
  ==
  1) start a VM without hw_qemu_guest_agent
  2) pause that VM then unpause
  3) check the log of nova-compute

  
  Expected result
  ===
  No related logs

  Actual result
  =
  Got an error log: Failed to set time: agent not configured

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1738997/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp