[Yahoo-eng-team] [Bug 1646255] Re: removing compute node causes ComputeHostNotFound in nova-api

2016-12-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/406627
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=f0d44c5b09f3f3c84038d40b621bb629a1f8110e
Submitter: Jenkins
Branch: master

commit f0d44c5b09f3f3c84038d40b621bb629a1f8110e
Author: Matt Riedemann 
Date:   Sun Dec 4 15:08:04 2016 -0500

Handle ComputeHostNotFound when listing hypervisors

Compute node resources must currently be deleted manually
in the database, and as such they can reference service
records which have been deleted via the services delete API.
Because of this when listing hypervisors (compute nodes), we
may get a ComputeHostNotFound error when trying to lookup a
service record for a compute node where the service was
deleted. This causes the API to fail with a 500 since it's not
handled.

This change handles the ComputeHostNotFound when looping over
compute nodes in the hypervisors index and detail methods and
simply ignores them.

Change-Id: I2717274bb1bd370870acbf58c03dc59cee30cc5e
Closes-Bug: #1646255
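
A minimal sketch of that handling (names follow nova's hypervisors API
code; the commit's exact shape may differ):

    # Skip compute nodes whose service record was deleted via the
    # services delete API, instead of failing the request with a 500.
    hypervisors = []
    for compute_node in compute_nodes:
        try:
            service = host_api.service_get_by_compute_host(
                context, compute_node.host)
        except exception.ComputeHostNotFound:
            # Orphaned compute node record; ignore it.
            continue
        hypervisors.append((compute_node, service))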


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1646255

Title:
  removing compute node causes ComputeHostNotFound in nova-api

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Confirmed

Bug description:
  Trying to remove a compute node properly:

  Steps to reproduce
  ==================
  1) remove all instances from the hypervisor:
  (env) vance@zs95k5:~$ nova hypervisor-servers zs93k23
  +----+------+---------------+---------------------+
  | ID | Name | Hypervisor ID | Hypervisor Hostname |
  +----+------+---------------+---------------------+
  +----+------+---------------+---------------------+

  2) disable the hypervisor:
  (env) vance@zs95k5:~$ nova service-list
  +----+----------------+---------------------+----------+----------+-------+------------------------+-----------------+
  | Id | Binary         | Host                | Zone     | Status   | State | Updated_at             | Disabled Reason |
  +----+----------------+---------------------+----------+----------+-------+------------------------+-----------------+
  | 3  | nova-cert      | juju-605709-2-lxd-3 | internal | enabled  | up    | 2016-11-30T21:13:34.00 | -               |
  | 4  | nova-scheduler | juju-605709-2-lxd-3 | internal | enabled  | up    | 2016-11-30T21:13:27.00 | -               |
  | 5  | nova-conductor | juju-605709-2-lxd-3 | internal | enabled  | up    | 2016-11-30T21:13:30.00 | -               |
  | 14 | nova-compute   | u27-maas-machine-1  | nova     | disabled | up    | 2016-11-30T21:13:28.00 | -               |
  | 16 | nova-compute   | zs95k181            | nova     | enabled  | up    | 2016-11-30T21:13:33.00 | -               |
  | 17 | nova-compute   | zs93k23             | nova     | enabled  | up    | 2016-11-30T21:13:33.00 | -               |
  +----+----------------+---------------------+----------+----------+-------+------------------------+-----------------+
  (env) vance@zs95k5:~$ nova service-disable zs93k23 nova-compute
  +---------+--------------+----------+
  | Host    | Binary       | Status   |
  +---------+--------------+----------+
  | zs93k23 | nova-compute | disabled |
  +---------+--------------+----------+

  3) delete the compute service
  (env) vance@zs95k5:~$ nova service-delete 17
  (env) vance@zs95k5:~$ nova service-list
  +----+----------------+---------------------+----------+----------+-------+------------------------+-----------------+
  | Id | Binary         | Host                | Zone     | Status   | State | Updated_at             | Disabled Reason |
  +----+----------------+---------------------+----------+----------+-------+------------------------+-----------------+
  | 3  | nova-cert      | juju-605709-2-lxd-3 | internal | enabled  | up    | 2016-11-30T21:14:54.00 | -               |
  | 4  | nova-scheduler | juju-605709-2-lxd-3 | internal | enabled  | up    | 2016-11-30T21:14:47.00 | -               |
  | 5  | nova-conductor | juju-605709-2-lxd-3 | internal | enabled  | up    | 2016-11-30T21:14:56.00 | -               |
  | 14 | nova-compute   | u27-maas-machine-1  | nova     | disabled | up    | 2016-11-30T21:14:48.00 | -               |
  | 16 | nova-compute   | zs95k181            | nova     | enabled  | up    | 2016-11-30T21:14:53.00 | -               |
  +----+----------------+---------------------+----------+----------+-------+------------------------+-----------------+

  4) delete the neutron agent
  (env) vance@zs95k5:~$ openstack network agent list
  

[Yahoo-eng-team] [Bug 1607114] Re: List role assignments doesn't include domain of role

2016-12-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/373516
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=812982a45f2a62f557c96f61108c3535811276c8
Submitter: Jenkins
Branch: master

commit 812982a45f2a62f557c96f61108c3535811276c8
Author: Samuel Pilla 
Date:   Tue Dec 6 08:26:13 2016 -0600

Domain included for role in list_role_assignment

When calling list_role_assignment with the "include_names"
parameter, the domain name and ID would be returned for each party
except for roles.

This change returns the domain name and ID for roles as well when the
parameter is included, if the role has a domain.

Added tests for roles with domains at manager and API level.

Co-Authored-By: Samuel de Medeiros Queiroz 

Closes-Bug: #1607114

Change-Id: I5dae9299522b5116f8530455dd3d3376e9597b52


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1607114

Title:
  List role assignments doesn't include domain of role

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  The list role assignments call will return the names (and domain names)
  of each party in an assignment if the "include_names" query parameter
  is included.

  However, this is not true for roles, where it would be useful for
  domain-specific roles.
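
  For illustration, one element of the "role_assignments" list returned
  by GET /v3/role_assignments?include_names=True would then carry the
  role's domain (a sketch; the IDs are made up and unrelated fields are
  omitted):

    {
        "role": {
            "id": "0bdf2a37e02b4f2ca1c1e65b88b39b51",      # made-up ID
            "name": "a-domain-specific-role",
            "domain": {                                    # added by this fix
                "id": "5a75994a383c449184053ff7270c4e91",  # made-up ID
                "name": "Default"
            }
        }
    }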

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1607114/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1644725] Re: Check destination_type when booting with bdm provided

2016-12-06 Thread Ghanshyam Mann
But destination_type in the bdm is optional; adding checks in nova will
make it backward incompatible. I am not sure why cinder does not mark
the volume in-use.

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1644725

Title:
  Check destination_type when booting with bdm provided

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  In Progress
Status in python-novaclient:
  In Progress

Bug description:
  When booting an instance with block_device_mapping provided, the
  current implementation allows "destination_type" to be None, and this
  causes Nova and Cinder to get out of sync:

  Step 1: Boot with block_device_mapping, leaving destination_type as
  None:

  root@SZX1000191849:/var/log/nova# nova --debug boot  --flavor 1
  --image 2ba75018-403f-407b-864a-08564022e1f8 --nic net-
  id=cce1d2f1-acf4-4646-abdc-069f8d0dbb71 --block-device
  'source=volume,id=9f49d5b0-3625-46a2-9ed4-d82f19949148' test_bdm

  the corresponding REST call is:
  DEBUG (session:342) REQ: curl -g -i -X POST 
http://10.229.45.17:8774/v2.1/os-volumes_boot -H "Accept: application/json" -H 
"User-Agent: python-novaclient" -H "OpenStack-API-Version: compute 2.37" -H 
"X-OpenStack-Nova-API-Version: 2.37" -H "X-Auth-Token: 
{SHA1}4d8c2c43338e1c4d96e08bcd1c2f3ff36de14154" -H "Content-Type: 
application/json" -d '{"server": {"name": "test_bdm", "imageRef": 
"2ba75018-403f-407b-864a-08564022e1f8", "block_device_mapping_v2": 
[{"source_type": "image", "delete_on_termination": true, "boot_index": 0, 
"uuid": "2ba75018-403f-407b-864a-08564022e1f8", "destination_type": "local"}, 
{"source_type": "volume", "uuid": "9f49d5b0-3625-46a2-9ed4-d82f19949148"}], 
"flavorRef": "1", "max_count": 1, "min_count": 1, "networks": [{"uuid": 
"cce1d2f1-acf4-4646-abdc-069f8d0dbb71"}]}}'

  Step 2: After the instance is successfully launched, the detailed info
  is like this:

  root@SZX1000191849:/var/log/nova# nova show 83d9ec32-93e0-441a-ae10-00e08b65de0b
  +--------------------------------------+--------------------------------------+
  | Property                             | Value                                |
  +--------------------------------------+--------------------------------------+
  | OS-DCF:diskConfig                    | MANUAL                               |
  | OS-EXT-AZ:availability_zone          | nova                                 |
  | OS-EXT-SRV-ATTR:host                 | SZX1000191849                        |
  | OS-EXT-SRV-ATTR:hostname             | test-bdm                             |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | SZX1000191849                        |
  | OS-EXT-SRV-ATTR:instance_name        | instance-0016                        |
  | OS-EXT-SRV-ATTR:kernel_id            | 87c9afd6-3a47-4a4c-a804-6b456d68136d |
  | OS-EXT-SRV-ATTR:launch_index         | 0                                    |
  | OS-EXT-SRV-ATTR:ramdisk_id           | acd02b28-6484-4f90-a5e7-bba7159343e1 |
  | OS-EXT-SRV-ATTR:reservation_id       | r-fiqwkq02                           |
  | OS-EXT-SRV-ATTR:root_device_name     | /dev/vda                             |
  | OS-EXT-SRV-ATTR:user_data            | -                                    |
  | OS-EXT-STS:power_state               | 1                                    |
  | OS-EXT-STS:task_state                | -                                    |
  | OS-EXT-STS:vm_state                  | active                               |
  | OS-SRV-USG:launched_at               | 2016-11-25T06:50:36.00               |
  | OS-SRV-USG:terminated_at             | -                                    |
  | accessIPv4                           |                                      |
  | accessIPv6                           |                                      |
  | config_drive                         |                                      |

[Yahoo-eng-team] [Bug 1647486] Re: sample-data makes incorrect credentials call

2016-12-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/407331
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=5fe929d4502752b539239010ea37750b797b0d7b
Submitter: Jenkins
Branch: master

commit 5fe929d4502752b539239010ea37750b797b0d7b
Author: Chetna Khullar 
Date:   Tue Dec 6 05:31:39 2016 +

Corrects sample-data incorrect credential call

This bug-fix corrects the sample-data credential call.

Change-Id: I216b455cf3d9966a2b641a79132e2f3dfdae5920
Closes-Bug: 1647486


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1647486

Title:
  sample-data makes incorrect credentials call

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  
  ADMIN_PASSWORD=keystone tools/sample_data.sh

  ... lots of stuff working fine ...

  usage: openstack ec2 credentials create [-h]
  [-f {json,shell,table,value,yaml}]
  [-c COLUMN] [--max-width ]
  [--noindent] [--prefix PREFIX]
  [--project ] [--user ]
  [--user-domain ]
  [--project-domain ]
  openstack ec2 credentials create: error: argument --user: expected one 
argument

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1647486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1631371] Re: [RFE] Expose trunk details over metadata API

2016-12-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631371

Title:
  [RFE] Expose trunk details over metadata API

Status in neutron:
  Expired

Bug description:
  Enable bringup of subports by exposing trunk/subport details over
  the metadata API.

  With the completion of the trunk port feature in Newton (Neutron
  bp/vlan-aware-vms [1]), trunk and subports are now available. But the
  bringup of the subports' VLAN interfaces inside an instance is not
  automatic. In Newton there's no easy way to pass information about
  the subports to the guest operating system. But using the metadata
  API we can change this.

  Problem Description
  -------------------

  To bring up (and/or tear down) a subport the guest OS

  (a) must know the segmentation-type and segmentation-id of a subport
  as set in 'openstack network trunk create/set --subport'

  (b) must know the MAC address of a subport
  as set in 'openstack port create'

  (c) must know which vNIC the subport belongs to

  (d) may need to know when subports were added or removed
  (if they are added or removed during the lifetime of an instance)

  Since subports do not have a corresponding vNIC, the approach used
  for regular ports (with a vNIC) cannot work.

  This write-up addresses problems (a), (b) and (c), but not (d).

  Proposed Change
  ---------------

  Here we propose a change involving both Nova and Neutron to expose
  the information needed via the metadata API.

  Information covering (a) and (b) is already available (read-only)
  in the 'trunk_details' attribute of the trunk parent port (ie. the
  port which the instance was booted with). [2]

  We propose to use the MAC address of the trunk parent port to cover
  (c). We recognize this may occasionally be problematic, because MAC
  addresses (of ports belonging to different neutron networks) are not
  guaranteed to be unique, so collisions may happen. But this seems
  to be a small price for avoiding the complexity of other solutions.

  The mechanism would be the following. Let's suppose we have port0
  which is a trunk parent port and instance0 was booted with '--nic
  port-id=port0'. On every update of port0's trunk_details Neutron
  constructs the following JSON structure:

  PORT0-DETAILS = {
      "mac_address": PORT0-MAC-ADDRESS,
      "trunk_details": PORT0-TRUNK-DETAILS
  }

  Then Neutron sets a metadata key-value pair of instance0, equivalent
  to the following nova command:

  nova meta set instance0 trunk_details::PORT0-MAC-ADDRESS=PORT0-DETAILS

  Nova in Newton limits meta values to <= 255 characters; this limit
  must be raised. Assuming the current format of trunk_details, roughly
  150 characters per subport are needed. Alternatively meta values could
  have unlimited length - at least for the service tenant used by
  Neutron. (Though tenant-specific API validators may not be a good
  idea.) The 'values' column of the 'instance_metadata' table should
  be altered from VARCHAR(255) to TEXT() in a Nova DB migration.
  (A slightly related bug report: [3])

  A program could read
  http://169.254.169.254/openstack/2016-06-30/meta_data.json and
  bring up the subport VLAN interfaces accordingly. This program is
  not covered here, however it is worth pointing out that it could be
  called by cloud-init.
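
  A minimal sketch of such a consumer under this proposal (the
  "trunk_details::<MAC>" key layout follows the proposal above;
  lookup_dev_by_mac is a hypothetical helper):

    import json
    import subprocess
    import urllib2  # Python 2, current at the time of this RFE

    URL = 'http://169.254.169.254/openstack/2016-06-30/meta_data.json'
    meta = json.load(urllib2.urlopen(URL))

    for key, value in meta.get('meta', {}).items():
        if not key.startswith('trunk_details::'):
            continue
        details = json.loads(value)
        parent_dev = lookup_dev_by_mac(details['mac_address'])
        for subport in details['trunk_details']['sub_ports']:
            if subport['segmentation_type'] != 'vlan':
                continue
            vid = subport['segmentation_id']
            subprocess.check_call(
                ['ip', 'link', 'add', 'link', parent_dev,
                 'name', '%s.%d' % (parent_dev, vid),
                 'type', 'vlan', 'id', str(vid)])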

  Alternatives
  ------------

  (1) The MAC address of a parent port can be reused for all its child
  ports (when creating the child ports). Then VLAN subinterfaces
  of a network interface will have the correct MAC address by
  default. Segmentation type and ID can be shared in other ways, for
  example as a VLAN plan embedded into a golden image. This approach
  could even partially solve problem (d), however it cannot solve problem
  (a) in the dynamic case. Use of this approach is currently blocked
  by an openvswitch firewall driver bug. [4][5]

  (2) Generate and inject a subport bringup script into the instance
  as user data. Cannot handle subports added or removed after VM boot.

  (3) An alternative solution to problem (c) could rely on the
  preservation of ordering between NICs passed to nova boot and NICs
  inside an instance. However this would turn the update of trunk_details
  into an instance-level operation instead of the port-level operation
  proposed here. Plus it would fail if this ordering is ever lost.

  References
  ----------

  [1] https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
  [2] 
https://review.openstack.org/#q,Id23ce8fc16c6ea6a405cb8febf8470a5bf3bcb43,n,z
  [3] https://bugs.launchpad.net/nova/+bug/1117923
  [4] https://bugs.launchpad.net/neutron/+bug/1626010
  [5] https://bugs.launchpad.net/neutron/+bug/1593760

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1647914] [NEW] Cannot use minimized_polling when hypervisor is XenServer

2016-12-06 Thread huan
Public bug reported:

Env:
XenServer as hypervisor
Neutron ML2 uses the ovs agent

When using XenServer as the hypervisor, the ovs agent running on the
compute node cannot set:
[agent]
minimize_polling = True

See related logs:

on/api/rpc/callbacks/resource_manager.py:74
2016-12-07 02:28:55.856 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-dfcefef1-efbd-4896-af21-70f6fae0c0b8 None None] Agent initialized 
successfully, now running... 
2016-12-07 02:28:55.857 DEBUG neutron.agent.linux.async_process 
[req-dfcefef1-efbd-4896-af21-70f6fae0c0b8 None None] Launching async process 
[ovsdb-client monitor Interface name,ofport,external_ids --format=json]. from 
(pid=6224) start /opt/stack/neutron/neutron/agent/linux/async_process.py:110
2016-12-07 02:28:55.857 DEBUG neutron.agent.linux.utils 
[req-dfcefef1-efbd-4896-af21-70f6fae0c0b8 None None] Running command: 
['/usr/local/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf', 
'ovsdb-client', 'monitor', 'Interface', 'name,ofport,external_ids', 
'--format=json'] from (pid=6224) create_process 
/opt/stack/neutron/neutron/agent/linux/utils.py:92
2016-12-07 02:28:55.879 DEBUG neutron.agent.linux.utils 
[req-dfcefef1-efbd-4896-af21-70f6fae0c0b8 None None] Running command: ['ps', 
'--ppid', '6253', '-o', 'pid='] from (pid=6224) create_process 
/opt/stack/neutron/neutron/agent/linux/utils.py:92

2016-12-07 02:29:00.863 DEBUG neutron.agent.linux.utils 
[req-dfcefef1-efbd-4896-af21-70f6fae0c0b8 None None] Running command: ['ps', 
'--ppid', '6253', '-o', 'pid='] from (pid=6224) create_process 
/opt/stack/neutron/neutron/agent/linux/utils.py:92
2016-12-07 02:29:00.976 ERROR ryu.lib.hub 
[req-dfcefef1-efbd-4896-af21-70f6fae0c0b8 None None] hub: uncaught exception: 
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/ryu/lib/hub.py", line 54, in 
_launch
return func(*args, **kwargs)
  File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py",
 line 42, in agent_main_wrapper
ovs_agent.main(bridge_classes)
  File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2154, in main
agent.daemon_loop()
  File "/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 
154, in wrapper
return f(*args, **kwargs)
  File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2073, in daemon_loop
self.ovsdb_monitor_respawn_interval) as pm:
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
  File "/opt/stack/neutron/neutron/agent/linux/polling.py", line 35, in 
get_polling_manager
pm.start()
  File "/opt/stack/neutron/neutron/agent/linux/polling.py", line 57, in start
self._monitor.start(block=True)
  File "/opt/stack/neutron/neutron/agent/linux/ovsdb_monitor.py", line 117, in 
start
while not self.is_active():
  File "/opt/stack/neutron/neutron/agent/linux/async_process.py", line 101, in 
is_active
self.pid, self.cmd_without_namespace)
  File "/opt/stack/neutron/neutron/agent/linux/async_process.py", line 160, in 
pid
run_as_root=self.run_as_root)
  File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 246, in 
get_root_helper_child_pid
pid = find_child_pids(pid)[0]
  File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 179, in 
find_child_pids
log_fail_as_error=False)
  File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 127, in execute
_stdout, _stderr = obj.communicate(_process_input)
  File "/usr/lib/python2.7/subprocess.py", line 799, in communicate
return self._communicate(input)
  File "/usr/lib/python2.7/subprocess.py", line 1403, in _communicate
stdout, stderr = self._communicate_with_select(input)
  File "/usr/lib/python2.7/subprocess.py", line 1504, in 
_communicate_with_select
rlist, wlist, xlist = select.select(read_set, write_set, [])
  File "/usr/local/lib/python2.7/dist-packages/eventlet/green/select.py", line 
86, in select
return hub.switch()
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 294, 
in switch
return self.greenlet.switch()
Timeout: 5 seconds

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1647914

Title:
  Cannot use minimized_polling when hypervisor is XenServer

Status in neutron:
  New

Bug description:
  Env:
  XenServer as hypervisor
  Neutron ML2 uses the ovs agent

  When using XenServer as the hypervisor, the ovs agent running on the
  compute node cannot set:
  [agent]
  minimize_polling = True

  See related logs:

  on/api/rpc/callbacks/resource_manager.py:74
  2016-12-07 02:28:55.856 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-dfcefef1-efbd-4896-af21-70f6fae0c0b8 None None] Agent 

[Yahoo-eng-team] [Bug 1647912] [NEW] Unit is not consistent for max-burst-rate

2016-12-06 Thread sunzuohua
Public bug reported:

For QoS in neutron:
QosBandwidthLimitRule: defines the instance-egress bandwidth limit rule type, 
characterized by a max kbps and a max burst kbits.

API parameters are as follows:
max_kbps
max_burst_kbps


But for QoS in Open vSwitch:
"ingress_policing_rate": the maximum rate (in Kbps) that this VM should
be allowed to send.
"ingress_policing_burst": a parameter to the policing algorithm to
indicate the maximum amount of data (in Kb) that this interface can send
beyond the policing rate. (Kb here means kbytes.)

For example, if I create a QosBandwidthLimitRule as follows:
max_kbps = 1000
max_burst_kbps = 1000

The actual burst bandwidth is 9 Mbits/sec.
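
The arithmetic behind that figure, assuming the API value is handed to
OVS unconverted so that kbits are reinterpreted as kbytes:

    api_burst = 1000                   # requested max_burst_kbps, in kbits
    ovs_burst_kbits = api_burst * 8    # OVS reads the value as kbytes -> 8000 kbits
    rate_kbits_per_s = 1000            # ingress_policing_rate
    # Worst case over one second: drain the full burst plus the steady rate.
    peak_kbits = ovs_burst_kbits + rate_kbits_per_s  # ~9000 kbits ~= 9 Mbits/sec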

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1647912

Title:
  Unit is not consistent for max-burst-rate

Status in neutron:
  New

Bug description:
  For QoS in neutron:
  QosBandwidthLimitRule: defines the instance-egress bandwidth limit rule type, 
characterized by a max kbps and a max burst kbits.

  API parameters are as follows:
  max_kbps
  max_burst_kbps

  
  But for QoS in Open vSwitch:
  "ingress_policing_rate": the maximum rate (in Kbps) that this VM should
  be allowed to send.
  "ingress_policing_burst": a parameter to the policing algorithm to
  indicate the maximum amount of data (in Kb) that this interface can
  send beyond the policing rate. (Kb here means kbytes.)

  For example, if I create a QosBandwidthLimitRule as follows:
  max_kbps = 1000
  max_burst_kbps = 1000

  The actual burst bandwidth is 9 Mbits/sec.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1647912/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647910] [NEW] hostname is set incorrectly if localhostname is fully qualified

2016-12-06 Thread Lars Kellogg-Stedman
Public bug reported:

If no data source is available and the local hostname is set to
"localhost.localdomain", and /etc/hosts looks like:

  127.0.0.1   localhost localhost.localdomain localhost4
localhost4.localdomain4

Then in sources/__init__.py in get_hostname:

- util.get_hostname() will return 'localhost.localdomain'
- util.get_fqdn_from_hosts(hostname) will return 'localhost'
- 'toks' will be set to [ 'localhost.localdomain', 'localdomain' ]

And ultimately the system hostname will be set to
'localhost.localdomain.localdomain', which isn't useful to anybody.
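
A condensed sketch of the string handling implied by the values above
(variable names assumed; see get_hostname in sources/__init__.py for
the real code):

    hostname = 'localhost.localdomain'  # util.get_hostname()
    fqdn = 'localhost'                  # util.get_fqdn_from_hosts(hostname)
    # The fqdn from /etc/hosts has no domain part, so the code falls back
    # to appending the default domain to the already-qualified hostname:
    toks = [hostname, 'localdomain']
    print('.'.join(toks))               # localhost.localdomain.localdomain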

Also reported in:

https://bugzilla.redhat.com/show_bug.cgi?id=1389048

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1647910

Title:
  hostname is set incorrectly if localhostname is fully qualified

Status in cloud-init:
  New

Bug description:
  If no data source is available and the local hostname is set to
  "localhost.localdomain", and /etc/hosts looks like:

127.0.0.1   localhost localhost.localdomain localhost4
  localhost4.localdomain4

  Then in sources/__init__.py in get_hostname:

  - util.get_hostname() will return 'localhost.localdomain'
  - util.get_fqdn_from_hosts(hostname) will return 'localhost'
  - 'toks' will be set to [ 'localhost.localdomain', 'localdomain' ]

  And ultimately the system hostname will be set to
  'localhost.localdomain.localdomain', which isn't useful to anybody.

  Also reported in:

  https://bugzilla.redhat.com/show_bug.cgi?id=1389048

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1647910/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482507] Re: launch vm can not choose flavor

2016-12-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/221758
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=b1373e0438c19485f3b0223e93948b5d4519c7ae
Submitter: Jenkins
Branch: master

commit b1373e0438c19485f3b0223e93948b5d4519c7ae
Author: Ragalahari 
Date:   Wed Sep 9 19:08:42 2015 +0530

Reset flavors for other than "Boot from Image" source type.

If an instance launched with the "Boot from image" source type does
not meet the given image RAM/disk requirements, the "Flavor" field
of Launch Instance will display the message below.

"Some flavors not meeting minimum image requirements have been
disabled."

And the flavors that do not meet the specified requirements
will be disabled under the "Flavor" field.

After this message has been shown, if we try to choose a source
type other than "Boot from image", the flavors are still
disabled and the message is not cleared.

To fix this issue, set the image name to the default when the source
type is changed from "Boot from image" to another source type, so
that disableFlavorsForImage() is called on the image name change and
resets the flavors.

Change-Id: I1a105eb84bd5d92ad521d9a8ae290912d48cf275
Closes-Bug: #1482507


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1482507

Title:
  launch vm can not choose flavor

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When launching an instance, if "Instance Boot Source" is set to "Boot
  from image", the "Flavor" field shows the tip "Some flavors not meeting
  minimum image requirements have been disabled." and some flavors cannot
  be chosen.

  If "Instance Boot Source" is then changed to "Boot from volume", the
  disabled flavors still cannot be chosen.

  In fact, the disabled flavors should become enabled again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1482507/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647464] Re: novaclient.tests.functional.v2.test_servers.TestServersListNovaClient.test_list_all_servers broken since at least 12/2

2016-12-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/407204
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=72d28ccd6e8ce8cf700891788f06578313b03c38
Submitter: Jenkins
Branch: master

commit 72d28ccd6e8ce8cf700891788f06578313b03c38
Author: Matt Riedemann 
Date:   Mon Dec 5 16:24:05 2016 -0500

Handle MarkerNotFound from cell0 database

When listing instances in the cellv2 world we look them up
from three locations:

1. Build requests which exist before the instances are created
   in the cell database (after the scheduler picks a host to
   build the instance). Currently instances and build requests
   are both created before casting to conductor, but that's going
   away in Ocata with the support for multiple cellsv2 cells.
2. The cell0 database for instances which failed to get scheduled
   to a compute host (and therefore a cell).
3. The actual cell database that the instance lives in. Currently
   that's only a single traditional nova database, but could be one
   of multiple cellsv2 cells when we add that support in Ocata.

If a marker is passed in when listing instances, if the instance
lives in an actual cell database, we'll get a MarkerNotFound failure
from cell0 because the instance doesn't exist in cell0, but we check
cell0 before we check the cell database. This makes the instance
listing short-circuit and fail with a 400 from the REST API.

This patch simply handles the MarkerNotFound when listing instances
from the cell0 database and ignores it so we can continue onto the
cell database.

Closes-Bug: #1647464

Change-Id: I977497be262fb7f2333e32fb7313b29624323422
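
A minimal sketch of the handling described above (helper names are
assumed; the real change lives in nova's compute API listing path):

    # Check cell0 for instances that failed scheduling. The marker may
    # instead live in a real cell, so a miss here must not be fatal.
    try:
        cell0_instances = get_instances(cell0_context, filters,
                                        limit=limit, marker=marker)
        marker = None  # the marker was found and consumed in cell0
    except exception.MarkerNotFound:
        # The marker instance lives in a cell database; continue and let
        # the cell lookup resolve it.
        cell0_instances = []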


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1647464

Title:
  
novaclient.tests.functional.v2.test_servers.TestServersListNovaClient.test_list_all_servers
  broken since at least 12/2

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Confirmed

Bug description:
  We're always getting a 400 on the marker value here now:

  http://logs.openstack.org/59/406359/1/check/gate-novaclient-dsvm-
  functional-neutron/30f5c67/console.html#_2016-12-05_18_00_49_690292

  2016-12-05 18:00:49.690292 | 2016-12-05 18:00:49.689 | 
novaclient.tests.functional.v2.test_servers.TestServersListNovaClient.test_list_all_servers
  2016-12-05 18:00:49.691677 | 2016-12-05 18:00:49.691 | 
---
  2016-12-05 18:00:49.694401 | 2016-12-05 18:00:49.694 | 
  2016-12-05 18:00:49.695830 | 2016-12-05 18:00:49.695 | Captured traceback:
  2016-12-05 18:00:49.697230 | 2016-12-05 18:00:49.696 | ~~~
  2016-12-05 18:00:49.698889 | 2016-12-05 18:00:49.698 | Traceback (most 
recent call last):
  2016-12-05 18:00:49.700319 | 2016-12-05 18:00:49.699 |   File 
"novaclient/tests/functional/v2/legacy/test_servers.py", line 104, in 
test_list_all_servers
  2016-12-05 18:00:49.701907 | 2016-12-05 18:00:49.701 | output = 
self.nova("list", params="--limit -1 --name %s" % name)
  2016-12-05 18:00:49.703240 | 2016-12-05 18:00:49.702 |   File 
"novaclient/tests/functional/base.py", line 316, in nova
  2016-12-05 18:00:49.704505 | 2016-12-05 18:00:49.704 | endpoint_type, 
merge_stderr)
  2016-12-05 18:00:49.706426 | 2016-12-05 18:00:49.706 |   File 
"/opt/stack/new/python-novaclient/.tox/functional/local/lib/python2.7/site-packages/tempest/lib/cli/base.py",
 line 124, in nova
  2016-12-05 18:00:49.707668 | 2016-12-05 18:00:49.707 | 'nova', 
action, flags, params, fail_ok, merge_stderr)
  2016-12-05 18:00:49.709199 | 2016-12-05 18:00:49.708 |   File 
"/opt/stack/new/python-novaclient/.tox/functional/local/lib/python2.7/site-packages/tempest/lib/cli/base.py",
 line 368, in cmd_with_auth
  2016-12-05 18:00:49.710930 | 2016-12-05 18:00:49.710 | self.cli_dir)
  2016-12-05 18:00:49.712387 | 2016-12-05 18:00:49.712 |   File 
"/opt/stack/new/python-novaclient/.tox/functional/local/lib/python2.7/site-packages/tempest/lib/cli/base.py",
 line 68, in execute
  2016-12-05 18:00:49.714028 | 2016-12-05 18:00:49.713 | result_err)
  2016-12-05 18:00:49.715601 | 2016-12-05 18:00:49.715 | 
tempest.lib.exceptions.CommandFailed: Command 
'['/opt/stack/new/python-novaclient/.tox/functional/bin/nova', '--os-username', 
'admin', '--os-tenant-name', 'admin', '--os-password', 'secretadmin', 
'--os-auth-url', 'http://10.13.96.44/identity_admin', 
'--os-compute-api-version', '2.latest', '--os-endpoint-type', 'publicURL', 
'list', '--limit', '-1', '--name', '6a31a7c8-189d-4a63-88d5-7ee1f63f6810']' 
returned non-zero exit status 1.
  2016-12-05 18:00:49.717098 | 

[Yahoo-eng-team] [Bug 1647464] Re: novaclient.tests.functional.v2.test_servers.TestServersListNovaClient.test_list_all_servers broken since at least 12/2

2016-12-06 Thread Matt Riedemann
** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Changed in: nova/newton
   Status: New => Confirmed

** Changed in: nova/newton
   Importance: Undecided => Medium

** Changed in: nova/newton
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1647464

Title:
  
novaclient.tests.functional.v2.test_servers.TestServersListNovaClient.test_list_all_servers
  broken since at least 12/2

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) newton series:
  Confirmed

Bug description:
  We're always getting a 400 on the marker value here now:

  http://logs.openstack.org/59/406359/1/check/gate-novaclient-dsvm-
  functional-neutron/30f5c67/console.html#_2016-12-05_18_00_49_690292

  2016-12-05 18:00:49.690292 | 2016-12-05 18:00:49.689 | 
novaclient.tests.functional.v2.test_servers.TestServersListNovaClient.test_list_all_servers
  2016-12-05 18:00:49.691677 | 2016-12-05 18:00:49.691 | 
---
  2016-12-05 18:00:49.694401 | 2016-12-05 18:00:49.694 | 
  2016-12-05 18:00:49.695830 | 2016-12-05 18:00:49.695 | Captured traceback:
  2016-12-05 18:00:49.697230 | 2016-12-05 18:00:49.696 | ~~~
  2016-12-05 18:00:49.698889 | 2016-12-05 18:00:49.698 | Traceback (most 
recent call last):
  2016-12-05 18:00:49.700319 | 2016-12-05 18:00:49.699 |   File 
"novaclient/tests/functional/v2/legacy/test_servers.py", line 104, in 
test_list_all_servers
  2016-12-05 18:00:49.701907 | 2016-12-05 18:00:49.701 | output = 
self.nova("list", params="--limit -1 --name %s" % name)
  2016-12-05 18:00:49.703240 | 2016-12-05 18:00:49.702 |   File 
"novaclient/tests/functional/base.py", line 316, in nova
  2016-12-05 18:00:49.704505 | 2016-12-05 18:00:49.704 | endpoint_type, 
merge_stderr)
  2016-12-05 18:00:49.706426 | 2016-12-05 18:00:49.706 |   File 
"/opt/stack/new/python-novaclient/.tox/functional/local/lib/python2.7/site-packages/tempest/lib/cli/base.py",
 line 124, in nova
  2016-12-05 18:00:49.707668 | 2016-12-05 18:00:49.707 | 'nova', 
action, flags, params, fail_ok, merge_stderr)
  2016-12-05 18:00:49.709199 | 2016-12-05 18:00:49.708 |   File 
"/opt/stack/new/python-novaclient/.tox/functional/local/lib/python2.7/site-packages/tempest/lib/cli/base.py",
 line 368, in cmd_with_auth
  2016-12-05 18:00:49.710930 | 2016-12-05 18:00:49.710 | self.cli_dir)
  2016-12-05 18:00:49.712387 | 2016-12-05 18:00:49.712 |   File 
"/opt/stack/new/python-novaclient/.tox/functional/local/lib/python2.7/site-packages/tempest/lib/cli/base.py",
 line 68, in execute
  2016-12-05 18:00:49.714028 | 2016-12-05 18:00:49.713 | result_err)
  2016-12-05 18:00:49.715601 | 2016-12-05 18:00:49.715 | 
tempest.lib.exceptions.CommandFailed: Command 
'['/opt/stack/new/python-novaclient/.tox/functional/bin/nova', '--os-username', 
'admin', '--os-tenant-name', 'admin', '--os-password', 'secretadmin', 
'--os-auth-url', 'http://10.13.96.44/identity_admin', 
'--os-compute-api-version', '2.latest', '--os-endpoint-type', 'publicURL', 
'list', '--limit', '-1', '--name', '6a31a7c8-189d-4a63-88d5-7ee1f63f6810']' 
returned non-zero exit status 1.
  2016-12-05 18:00:49.717098 | 2016-12-05 18:00:49.716 | stdout:
  2016-12-05 18:00:49.718661 | 2016-12-05 18:00:49.718 | 
  2016-12-05 18:00:49.720061 | 2016-12-05 18:00:49.719 | stderr:
  2016-12-05 18:00:49.721559 | 2016-12-05 18:00:49.721 | ERROR 
(BadRequest): marker [282483d5-433b-4c34-8a5d-894e40db705d] not found (HTTP 
400) (Request-ID: req-d0b88399-b0d6-4f0c-881c-442e88944350)

  There isn't anything obvious in the nova and novaclient changes around
  12/2 that would cause this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1647464/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647855] [NEW] Page title not updated in AngularJS based Panels

2016-12-06 Thread Eddie Ramirez
Public bug reported:

How to reproduce:
1. Go to Project->Images or Admin->Images, or enable a new
angularjs-based panel (e.g. admin->flavors).
2. See that the page title is "Horizon - OpenStack Dashboard".

Expected result:
The page title should reflect the panel title as the user moves between
panels; this title is typically the same as the content of the h1 inside
div.panel-header.

Actual result:
The title reads "Horizon - OpenStack Dashboard". If the user has 20 tabs
open, good luck finding a tab - they cannot be distinguished by title!

** Affects: horizon
 Importance: Undecided
 Status: New

** Summary changed:

- Page title not updated in AngularJS based Panes
+ Page title not updated in AngularJS based Panels

** Description changed:

  How to reproduce:
- 1. Go to Project->Images or Admin->Images or enable a new angularjs-based 
panel (flavors).
+ 1. Go to Project->Images or Admin->Images or enable a new angularjs-based 
panel (e.g. admin->flavors).
  2. See the Page Title  is "Horizon - OpenStack Dashboard"
  
  Expected result:
  The page title should read the panel title as the user moves between panels, 
this title is tipically the same as the content of an h1 inside of 
div.panel-header.
  
  Actual result:
  The title reads "Horizon - OpenStack Dashboard". If the user has 20 tabs 
opened then good luck finding a tab - cannot distinguish by the title!

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1647855

Title:
  Page title not updated in AngularJS based Panels

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  How to reproduce:
  1. Go to Project->Images or Admin->Images, or enable a new
  angularjs-based panel (e.g. admin->flavors).
  2. See that the page title is "Horizon - OpenStack Dashboard".

  Expected result:
  The page title should reflect the panel title as the user moves between
  panels; this title is typically the same as the content of the h1
  inside div.panel-header.

  Actual result:
  The title reads "Horizon - OpenStack Dashboard". If the user has 20
  tabs open, good luck finding a tab - they cannot be distinguished by
  title!

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1647855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 832507] Re: console.log grows indefinitely

2016-12-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/407450
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=1f659251c7509cab045024044a6b8d642ad85aef
Submitter: Jenkins
Branch: master

commit 1f659251c7509cab045024044a6b8d642ad85aef
Author: Markus Zoeller 
Date:   Tue Dec 6 11:40:25 2016 +0100

libvirt: virtlogd: use virtlogd for char devices

This change makes actual usage of the "logd" sub-element for char devices.
The two REST APIs ``os-getConsoleOutput`` and ``os-getSerialConsole`` can
now be satisfied at the same time. This is valid for any combination of:
* char device element: "console", "serial"
* char device type: "tcp", "pty"
There is also no need to create multiple different device types anymore.
If we have a tcp device, we don't need the pty device anymore. The logging
will be done in the tcp device.

Implements blueprint libvirt-virtlogd
Closes-Bug: 832507
Change-Id: Ia412f55bd988f6e11cd78c4c5a50a86389e648b0


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/832507

Title:
  console.log grows indefinitely

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in libvirt package in Ubuntu:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in qemu-kvm package in Ubuntu:
  Triaged

Bug description:
  KVM takes everything from stdout and prints it to console.log. This
  does not appear to have a size limit, so if a user (mistakenly or
  otherwise) sends a lot of data to stdout, the console.log file can
  fill the entire disk of the compute node quite quickly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/832507/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647464] Re: novaclient.tests.functional.v2.test_servers.TestServersListNovaClient.test_list_all_servers broken since at least 12/2

2016-12-06 Thread Matt Riedemann
Yeah the problem is cell0:

http://logs.openstack.org/05/407205/1/check/gate-novaclient-dsvm-
functional-identity-v3-only-ubuntu-xenial-
nv/61e9a05/logs/screen-n-api.txt.gz#_2016-12-06_20_58_54_417

2016-12-06 20:58:54.417 27543 ERROR nova.compute.api 
[req-de260414-aacf-47c4-949b-0b501efa5e69 admin admin] Failed to find instance 
by marker fe05d5ed-e97a-48c3-a57f-fb8a05f43d88 in cell0; ignoring
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api Traceback (most recent 
call last):
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api   File 
"/opt/stack/new/nova/nova/compute/api.py", line 2404, in get_all
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api sort_dirs=sort_dirs)
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api   File 
"/opt/stack/new/nova/nova/compute/api.py", line 2483, in 
_get_instances_by_filters
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api expected_attrs=fields, 
sort_keys=sort_keys, sort_dirs=sort_dirs)
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
184, in wrapper
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api result = fn(cls, 
context, *args, **kwargs)
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api   File 
"/opt/stack/new/nova/nova/objects/instance.py", line 1219, in get_by_filters
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api use_slave=use_slave, 
sort_keys=sort_keys, sort_dirs=sort_dirs)
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api   File 
"/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 226, in wrapper
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api return f(*args, 
**kwargs)
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api   File 
"/opt/stack/new/nova/nova/objects/instance.py", line 1203, in 
_get_by_filters_impl
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api sort_keys=sort_keys, 
sort_dirs=sort_dirs)
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api   File 
"/opt/stack/new/nova/nova/db/api.py", line 763, in 
instance_get_all_by_filters_sort
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api sort_dirs=sort_dirs)
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api   File 
"/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 170, in wrapper
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api return f(*args, 
**kwargs)
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api   File 
"/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 271, in wrapped
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api return f(context, 
*args, **kwargs)
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api   File 
"/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 2243, in 
instance_get_all_by_filters_sort
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api raise 
exception.MarkerNotFound(marker=marker)
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api MarkerNotFound: Marker 
fe05d5ed-e97a-48c3-a57f-fb8a05f43d88 could not be found.
2016-12-06 20:58:54.417 27543 ERROR nova.compute.api 

So we just need to handle that MarkerNotFound and ignore it when listing
instances.

** No longer affects: python-novaclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1647464

Title:
  
novaclient.tests.functional.v2.test_servers.TestServersListNovaClient.test_list_all_servers
  broken since at least 12/2

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  We're always getting a 400 on the marker value here now:

  http://logs.openstack.org/59/406359/1/check/gate-novaclient-dsvm-
  functional-neutron/30f5c67/console.html#_2016-12-05_18_00_49_690292

  2016-12-05 18:00:49.690292 | 2016-12-05 18:00:49.689 | 
novaclient.tests.functional.v2.test_servers.TestServersListNovaClient.test_list_all_servers
  2016-12-05 18:00:49.691677 | 2016-12-05 18:00:49.691 | 
---
  2016-12-05 18:00:49.694401 | 2016-12-05 18:00:49.694 | 
  2016-12-05 18:00:49.695830 | 2016-12-05 18:00:49.695 | Captured traceback:
  2016-12-05 18:00:49.697230 | 2016-12-05 18:00:49.696 | ~~~
  2016-12-05 18:00:49.698889 | 2016-12-05 18:00:49.698 | Traceback (most 
recent call last):
  2016-12-05 18:00:49.700319 | 2016-12-05 18:00:49.699 |   File 
"novaclient/tests/functional/v2/legacy/test_servers.py", line 104, in 
test_list_all_servers
  2016-12-05 18:00:49.701907 | 2016-12-05 18:00:49.701 | output = 
self.nova("list", params="--limit -1 --name %s" % name)
  2016-12-05 18:00:49.703240 | 2016-12-05 18:00:49.702 |   File 
"novaclient/tests/functional/base.py", line 316, in nova
  2016-12-05 18:00:49.704505 | 2016-12-05 18:00:49.704 | endpoint_type, 
merge_stderr)
  2016-12-05 18:00:49.706426 | 2016-12-05 18:00:49.706 |   File 

[Yahoo-eng-team] [Bug 1642679] Re: The OpenStack network_config.json implementation fails on Hyper-V compute nodes

2016-12-06 Thread Abhimanyu
** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1642679

Title:
  The OpenStack network_config.json implementation fails on Hyper-V
  compute nodes

Status in cloud-init:
  Confirmed
Status in OpenStack Compute (nova):
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  In Progress
Status in cloud-init source package in Yakkety:
  Confirmed

Bug description:
  === Begin SRU Template ===
  [Impact] 
  When a config drive provides network_data.json on Azure OpenStack,
  cloud-init will fail to configure networking.

  Console log and /var/log/cloud-init.log will show:
   ValueError: Unknown network_data link type: hyperv

  This would also occur when the type of the network device as declared
  to cloud-init was 'hw_veb', 'hyperv', or 'vhostuser'.

  [Test Case]
  Launch an instance with config drive on hyperv cloud.

  [Regression Potential] 
  Low to none. cloud-init is relaxing requirements and will accept things
  now that it previously complained were invalid.
  === End SRU Template ===

  We have discovered an issue when booting Xenial instances on OpenStack
  environments (Liberty or newer) and Hyper-V compute nodes using config
  drive as metadata source.

  When applying the network_config.json, cloud-init fails with this error:
  http://paste.openstack.org/show/RvHZJqn48JBb0TO9QznL/

  The fix would be to add 'hyperv' as a link type here:
  /usr/lib/python3/dist-packages/cloudinit/sources/helpers/openstack.py, line 
587
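
  For reference, a sketch of the relevant check (the tuple name comes
  from cloud-init's sources/helpers/openstack.py; the exact membership
  varies by release):

    # Sketch: physical link types accepted when parsing network_data.json.
    KNOWN_PHYSICAL_TYPES = (
        None,
        'bridge',
        'ethernet',
        'hw_veb',     # relaxed along with 'hyperv' by the fix
        'hyperv',     # the type that previously raised ValueError
        'phy',
        'tap',
        'vhostuser',  # likewise accepted now
        'vif',
    )

    def check_link_type(link_type):
        if link_type not in KNOWN_PHYSICAL_TYPES:
            raise ValueError('Unknown network_data link type: %s' % link_type)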

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1642679/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647464] Re: novaclient.tests.functional.v2.test_servers.TestServersListNovaClient.test_list_all_servers broken since at least 12/2

2016-12-06 Thread Matt Riedemann
As Diana pointed out, this started failing after we started running the
job with cells v2.

I think I've figured out through my debugging patches that we're
querying the cell0 database for the instance by uuid (the marker) and
that's raising the MarkerNotFound and we don't handle it in the compute
API code, then we fail.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => High

** Changed in: python-novaclient
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1647464

Title:
  
novaclient.tests.functional.v2.test_servers.TestServersListNovaClient.test_list_all_servers
  broken since at least 12/2

Status in OpenStack Compute (nova):
  Confirmed
Status in python-novaclient:
  Invalid

Bug description:
  We're always getting a 400 on the marker value here now:

  http://logs.openstack.org/59/406359/1/check/gate-novaclient-dsvm-
  functional-neutron/30f5c67/console.html#_2016-12-05_18_00_49_690292

  2016-12-05 18:00:49.690292 | 2016-12-05 18:00:49.689 | 
novaclient.tests.functional.v2.test_servers.TestServersListNovaClient.test_list_all_servers
  2016-12-05 18:00:49.691677 | 2016-12-05 18:00:49.691 | 
---
  2016-12-05 18:00:49.694401 | 2016-12-05 18:00:49.694 | 
  2016-12-05 18:00:49.695830 | 2016-12-05 18:00:49.695 | Captured traceback:
  2016-12-05 18:00:49.697230 | 2016-12-05 18:00:49.696 | ~~~
  2016-12-05 18:00:49.698889 | 2016-12-05 18:00:49.698 | Traceback (most 
recent call last):
  2016-12-05 18:00:49.700319 | 2016-12-05 18:00:49.699 |   File 
"novaclient/tests/functional/v2/legacy/test_servers.py", line 104, in 
test_list_all_servers
  2016-12-05 18:00:49.701907 | 2016-12-05 18:00:49.701 | output = 
self.nova("list", params="--limit -1 --name %s" % name)
  2016-12-05 18:00:49.703240 | 2016-12-05 18:00:49.702 |   File 
"novaclient/tests/functional/base.py", line 316, in nova
  2016-12-05 18:00:49.704505 | 2016-12-05 18:00:49.704 | endpoint_type, 
merge_stderr)
  2016-12-05 18:00:49.706426 | 2016-12-05 18:00:49.706 |   File 
"/opt/stack/new/python-novaclient/.tox/functional/local/lib/python2.7/site-packages/tempest/lib/cli/base.py",
 line 124, in nova
  2016-12-05 18:00:49.707668 | 2016-12-05 18:00:49.707 | 'nova', 
action, flags, params, fail_ok, merge_stderr)
  2016-12-05 18:00:49.709199 | 2016-12-05 18:00:49.708 |   File 
"/opt/stack/new/python-novaclient/.tox/functional/local/lib/python2.7/site-packages/tempest/lib/cli/base.py",
 line 368, in cmd_with_auth
  2016-12-05 18:00:49.710930 | 2016-12-05 18:00:49.710 | self.cli_dir)
  2016-12-05 18:00:49.712387 | 2016-12-05 18:00:49.712 |   File 
"/opt/stack/new/python-novaclient/.tox/functional/local/lib/python2.7/site-packages/tempest/lib/cli/base.py",
 line 68, in execute
  2016-12-05 18:00:49.714028 | 2016-12-05 18:00:49.713 | result_err)
  2016-12-05 18:00:49.715601 | 2016-12-05 18:00:49.715 | 
tempest.lib.exceptions.CommandFailed: Command 
'['/opt/stack/new/python-novaclient/.tox/functional/bin/nova', '--os-username', 
'admin', '--os-tenant-name', 'admin', '--os-password', 'secretadmin', 
'--os-auth-url', 'http://10.13.96.44/identity_admin', 
'--os-compute-api-version', '2.latest', '--os-endpoint-type', 'publicURL', 
'list', '--limit', '-1', '--name', '6a31a7c8-189d-4a63-88d5-7ee1f63f6810']' 
returned non-zero exit status 1.
  2016-12-05 18:00:49.717098 | 2016-12-05 18:00:49.716 | stdout:
  2016-12-05 18:00:49.718661 | 2016-12-05 18:00:49.718 | 
  2016-12-05 18:00:49.720061 | 2016-12-05 18:00:49.719 | stderr:
  2016-12-05 18:00:49.721559 | 2016-12-05 18:00:49.721 | ERROR 
(BadRequest): marker [282483d5-433b-4c34-8a5d-894e40db705d] not found (HTTP 
400) (Request-ID: req-d0b88399-b0d6-4f0c-881c-442e88944350)

  There isn't anything obvious in the nova and novaclient changes around
  12/2 that would cause this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1647464/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647541] Re: tox -e docs error

2016-12-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/407332
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=26020fed035f8e75346331ec0b7b6a2c913d867d
Submitter: Jenkins
Branch: master

commit 26020fed035f8e75346331ec0b7b6a2c913d867d
Author: YAMAMOTO Takashi 
Date:   Tue Dec 6 15:04:11 2016 +0900

doc: Fix a warning

Fix the following warning:
WARNING: Block quote ends without a blank line; unexpected unindent.

Closes-Bug: #1647541
Change-Id: I1025a22c4c97ca1b824f5fa26bfa3b354d42f4b8


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1647541

Title:
  tox -e docs error

Status in neutron:
  Fix Released

Bug description:
  /Users/yamamoto/git/neutron/doc/source/policies/neutron-teams.rst:68:
  WARNING: Block quote ends without a blank line; unexpected unindent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1647541/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647800] Re: keystone-manage bootstrap isn't completely idempotent

2016-12-06 Thread Dolph Mathews
Marking this as Medium in mitaka since we didn't support zero-downtime
upgrades then, but this is still an unexpected behavior of bootstrap
that would potentially affect an upgrade process.

** Also affects: keystone/mitaka
   Importance: Undecided
   Status: New

** Changed in: keystone/mitaka
   Importance: Undecided => High

** Changed in: keystone/newton
   Importance: Undecided => High

** Changed in: keystone/mitaka
   Importance: High => Medium

** Changed in: keystone/mitaka
   Status: New => Confirmed

** Changed in: keystone/newton
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1647800

Title:
  keystone-manage bootstrap isn't completely idempotent

Status in OpenStack Identity (keystone):
  Confirmed
Status in OpenStack Identity (keystone) mitaka series:
  Confirmed
Status in OpenStack Identity (keystone) newton series:
  Confirmed

Bug description:
  The keystone-manage bootstrap command was designed to be idempotent.
  Most everything in the bootstrap command is wrapped with a try/except
  to handle cases where specific entities already exist (i.e. there is
  already an admin project or an admin user from a previous bootstrap
  run). This is important because bootstrap handles the creation of
  administrator-like things in order to "bootstrap" a deployment. If
  bootstrap wasn't idempotent, the side-effect of running it multiple
  times would be catastrophic.

  During an upgrade scenario, using OpenStack Ansible's rolling upgrade
  support [0], from stable/newton to master, I noticed a very specific
  case where bootstrap was not idempotent. Even if the admin user passed
  to bootstrap already exists, the command will still attempt to update
  it's password [1], even if the admin password hasn't changed. It does
  the same thing with the user's enabled property. This somehow creates
  a revocation event to be stored for that specific user [2]. As a
  result, all tokens for the user specified in the bootstrap command
  will be invalid once the upgrade happens, since OpenStack Ansible
  relies on `keystone-manage bootstrap` during the upgrade.

  This only affects the bootstrap user, but it can be considered a
  service interruption since it is being done during an upgrade. We
  could look into only updating the user's password, or enabled field,
  if and only if they have changed. In that case, a revocation event
  *should* be persisted since the bootstrap command is changing
  something about the account. In the case where there is no change in
  password or enabled status, tokens should still be able to be
  validated across releases.
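
  A sketch of the guard suggested above (helper names are hypothetical;
  keystone's actual bootstrap code is structured differently):

    # Only touch the user record when something actually changed, so a
    # repeated bootstrap run does not trigger a revocation event.
    user = identity_manager.get_user(user_id)
    update = {}
    if not password_matches(user, bootstrap_password):  # hypothetical check
        update['password'] = bootstrap_password
    if not user['enabled']:
        update['enabled'] = True
    if update:
        # A revocation event is then persisted only for real changes.
        identity_manager.update_user(user_id, update)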

  I have documented the upgrade procedure and process in a separate
  repository [3]

  [0] https://review.openstack.org/#/c/384269/
  [1] 
https://github.com/openstack/keystone/blob/1c60b1539cf63bba79711e237df496dfa094b2c5/keystone/cmd/cli.py#L226-L232
  [2] http://cdn.pasteraw.com/9gz9964mwufyw3f98rv1mv1hqxezpis
  [3] https://github.com/lbragstad/keystone-performance-upgrade
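
  A minimal sketch of that guard, with hypothetical helper names (the
  real logic lives in keystone/cmd/cli.py [1]): only touch the user
  record, and therefore only emit a revocation event, when something
  actually changed.

    # Sketch only: 'identity_api' and its methods are illustrative,
    # not keystone's actual internal API.
    def ensure_admin_user(identity_api, user, password, enabled=True):
        updates = {}
        if not identity_api.check_password(user['id'], password):
            updates['password'] = password   # real change -> revoke
        if user.get('enabled') != enabled:
            updates['enabled'] = enabled     # real change -> revoke
        if updates:
            identity_api.update_user(user['id'], updates)
        # A no-op re-run falls through without an update, so no
        # revocation event is stored and existing tokens remain valid
        # across the upgrade.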

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1647800/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647800] Re: keystone-manage bootstrap isn't completely idempotent

2016-12-06 Thread Dolph Mathews
Marking this as High because the consequence is perceivable downtime
during a zero-downtime upgrade.

** Also affects: keystone/newton
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1647800

Title:
  keystone-manage bootstrap isn't completely idempotent

Status in OpenStack Identity (keystone):
  Confirmed
Status in OpenStack Identity (keystone) mitaka series:
  Confirmed
Status in OpenStack Identity (keystone) newton series:
  Confirmed

Bug description:
  The keystone-manage bootstrap command was designed to be idempotent.
  Almost everything in the bootstrap command is wrapped with a try/except
  to handle cases where specific entities already exist (e.g. there is
  already an admin project or an admin user from a previous bootstrap
  run). This is important because bootstrap creates administrator-like
  entities in order to "bootstrap" a deployment. If bootstrap weren't
  idempotent, the side effects of running it multiple times would be
  catastrophic.

  During an upgrade scenario, using OpenStack Ansible's rolling upgrade
  support [0], from stable/newton to master, I noticed a very specific
  case where bootstrap was not idempotent. Even if the admin user passed
  to bootstrap already exists, the command will still attempt to update
  its password [1], even if the admin password hasn't changed. It does
  the same thing with the user's enabled property. This somehow causes
  a revocation event to be stored for that specific user [2]. As a
  result, all tokens for the user specified in the bootstrap command
  become invalid once the upgrade happens, since OpenStack Ansible
  relies on `keystone-manage bootstrap` during the upgrade.

  This only affects the bootstrap user, but it can be considered a
  service interruption since it happens during an upgrade. We could look
  into updating the user's password or enabled field only if they have
  actually changed. In that case a revocation event *should* be
  persisted, since the bootstrap command is changing something about the
  account. In the case where there is no change in password or enabled
  status, it should still be possible to validate tokens across
  releases.

  I have documented the upgrade procedure and process in a separate
  repository [3].

  [0] https://review.openstack.org/#/c/384269/
  [1] 
https://github.com/openstack/keystone/blob/1c60b1539cf63bba79711e237df496dfa094b2c5/keystone/cmd/cli.py#L226-L232
  [2] http://cdn.pasteraw.com/9gz9964mwufyw3f98rv1mv1hqxezpis
  [3] https://github.com/lbragstad/keystone-performance-upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1647800/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647800] [NEW] keystone-manage bootstrap isn't completely idempotent

2016-12-06 Thread Lance Bragstad
Public bug reported:

The keystone-manage bootstrap command was designed to be idempotent.
Almost everything in the bootstrap command is wrapped with a try/except to
handle cases where specific entities already exist (e.g. there is
already an admin project or an admin user from a previous bootstrap
run). This is important because bootstrap creates administrator-like
entities in order to "bootstrap" a deployment. If bootstrap weren't
idempotent, the side effects of running it multiple times would be
catastrophic.

During an upgrade scenario, using OpenStack Ansible's rolling upgrade
support [0], from stable/newton to master, I noticed a very specific
case where bootstrap was not idempotent. Even if the admin user passed
to bootstrap already exists, the command will still attempt to update
its password [1], even if the admin password hasn't changed. It does
the same thing with the user's enabled property. This somehow causes a
revocation event to be stored for that specific user [2]. As a result,
all tokens for the user specified in the bootstrap command become
invalid once the upgrade happens, since OpenStack Ansible relies on
`keystone-manage bootstrap` during the upgrade.

This only affects the bootstrap user, but it can be considered a service
interruption since it happens during an upgrade. We could look into
updating the user's password or enabled field only if they have actually
changed. In that case a revocation event *should* be persisted, since
the bootstrap command is changing something about the account. In the
case where there is no change in password or enabled status, it should
still be possible to validate tokens across releases.

I have documented the upgrade procedure and process in a separate
repository [3].

[0] https://review.openstack.org/#/c/384269/
[1] 
https://github.com/openstack/keystone/blob/1c60b1539cf63bba79711e237df496dfa094b2c5/keystone/cmd/cli.py#L226-L232
[2] http://cdn.pasteraw.com/9gz9964mwufyw3f98rv1mv1hqxezpis
[3] https://github.com/lbragstad/keystone-performance-upgrade

** Affects: keystone
 Importance: High
 Status: Confirmed

** Affects: keystone/newton
 Importance: Undecided
 Status: New


** Tags: newton-backport-potential upgrades

** Description changed:

  The keystone-manage bootstrap command was designed to be idempotent.
  Most everything in the bootstrap command is wrapped with a try/except to
  handle cases where specific entities already exist (i.e. there is
  already an admin project or an admin user from a previous bootstrap
  run). This is important because bootstrap handles the creation of
  administrator-like things in order to "bootstrap" a deployment. If
  bootstrap wasn't idempotent, the side-effect of running it multiple
  times would be catastrophic.
  
- During an upgrade scenario, using OpenStack Ansibles rolling upgrade
- support [1], from stable/newton to master, I noticed a very specific
+ During an upgrade scenario, using OpenStack Ansible's rolling upgrade
+ support [0], from stable/newton to master, I noticed a very specific
  case where bootstrap was not idempotent. Even if the admin user passed
  to bootstrap already exists, the command will still attempt to update
- it's password [0], even if the admin password hasn't changed. It does
+ it's password [1], even if the admin password hasn't changed. It does
  the same thing with the user's enabled property. This somehow creates a
- revocation event to be stored for that specific user [1]. As a result,
+ revocation event to be stored for that specific user [2]. As a result,
  all tokens for the user specified in the bootstrap command will be
  invalid once the upgrade happens, since OpenStack Ansible relies on
  `keystone-manage bootstrap` during the upgrade.
  
  This only affects the bootstrap user, but it can be considered a service
  interruption since it is being done during an upgrade. We could look
  into only updating the user's password, or enabled field, if and only if
  they have changed. In that case, a revocation event *should* be
- persisted. In the case where there is no change in password or enabled
+ persisted since the bootstrap command is changing something about the
+ account. In the case where there is no change in password or enabled
  status, tokens should still be able to be validated across releases.
  
+ I have documented the upgrade procedure and process in a separate
+ repository [3]
  
- [0] 
https://github.com/openstack/keystone/blob/1c60b1539cf63bba79711e237df496dfa094b2c5/keystone/cmd/cli.py#L226-L232
- [1] https://review.openstack.org/#/c/384269/
+ [0] https://review.openstack.org/#/c/384269/
+ [1] 
https://github.com/openstack/keystone/blob/1c60b1539cf63bba79711e237df496dfa094b2c5/keystone/cmd/cli.py#L226-L232
  [2] http://cdn.pasteraw.com/9gz9964mwufyw3f98rv1mv1hqxezpis
+ [3] https://github.com/lbragstad/keystone-performance-upgrade

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1647800

[Yahoo-eng-team] [Bug 1647784] [NEW] latest devstack fails to start nova-serialproxy

2016-12-06 Thread Ludovic Beliveau
Public bug reported:

Description
===

Latest devstack fails to start nova-serialproxy.

Steps to reproduce
==

Start devstack with latest master.

Expected result
===

Successfully start devstack.

Actual result
=

nova-serialproxy fails to start (see logs below).

Environment
===

commit f61db221f31d9ba86f61c13a7d1c5a951654fdc0
Merge: 9be228a 3921224
Author: Jenkins 
Date:   Tue Dec 6 13:25:17 2016 +

Merge "Create schema generation for AddressBase"

Logs & Configs
==

[centos@IronPass-2 devstack]$ /usr/bin/nova-serialproxy --config-file /etc/nova/nova.conf & echo $! >/opt/stack/status/stack/n-sproxy.pid; fg || echo "n-sproxy failed to start" | tee "/opt/stack/status/stack/n-sproxy.failure"
[1] 148263
/usr/bin/nova-serialproxy --config-file /etc/nova/nova.conf
Traceback (most recent call last):
  File "/usr/bin/nova-serialproxy", line 6, in <module>
    from nova.cmd.serialproxy import main
  File "/opt/stack/nova/nova/cmd/serialproxy.py", line 29, in <module>
    serial.register_cli_opts(CONF)
  File "/opt/stack/nova/nova/conf/serial_console.py", line 123, in register_cli_opts
    conf.register_cli_opt(CLI_OPTS, serial_opt_group)
  File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2285, in __inner
    result = f(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2477, in register_cli_opt
    return self.register_opt(opt, group, cli=True, clear_cache=False)
  File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2289, in __inner
    return f(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2434, in register_opt
    self._add_cli_opt(opt, group)
  File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2410, in _add_cli_opt
    if {'opt': opt, 'group': group} in self._cli_opts:
  File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 937, in __eq__
    return vars(self) == vars(another)
TypeError: vars() argument must have __dict__ attribute
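
The failure is a misuse of oslo.config's singular API: register_cli_opt()
takes one Opt, while register_cli_opts() takes a list. A minimal sketch of
the failure mode, with made-up option names (behavior as of the oslo.config
release in the traceback):

  from oslo_config import cfg

  conf = cfg.ConfigOpts()
  conf.register_cli_opt(cfg.BoolOpt('debug'))       # one opt registered

  opts = [cfg.StrOpt('serial_host'), cfg.IntOpt('serial_port')]
  conf.register_cli_opts(opts)                      # correct: plural API

  # The bug: passing the whole list to the singular API. The membership
  # test in _add_cli_opt compares the list against each registered Opt
  # via Opt.__eq__, which calls vars() on the list:
  conf.register_cli_opt(opts)  # TypeError: vars() argument must have __dict__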

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1647784

Title:
  latest devstack fails to start nova-serialproxy

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  Latest devstack fails to start nova-serialproxy.

  Steps to reproduce
  ==

  Start devstack with latest master.

  Expected result
  ===

  Successfully start devstack.

  Actual result
  =

  nova-serialproxy fails to start (see logs below).

  Environment
  ===

  commit f61db221f31d9ba86f61c13a7d1c5a951654fdc0
  Merge: 9be228a 3921224
  Author: Jenkins 
  Date:   Tue Dec 6 13:25:17 2016 +

  Merge "Create schema generation for AddressBase"

  Logs & Configs
  ==

  [centos@IronPass-2 devstack]$ /usr/bin/nova-serialproxy --config-file /etc/nova/nova.conf & echo $! >/opt/stack/status/stack/n-sproxy.pid; fg || echo "n-sproxy failed to start" | tee "/opt/stack/status/stack/n-sproxy.failure"
  [1] 148263
  /usr/bin/nova-serialproxy --config-file /etc/nova/nova.conf
  Traceback (most recent call last):
    File "/usr/bin/nova-serialproxy", line 6, in <module>
      from nova.cmd.serialproxy import main
    File "/opt/stack/nova/nova/cmd/serialproxy.py", line 29, in <module>
      serial.register_cli_opts(CONF)
    File "/opt/stack/nova/nova/conf/serial_console.py", line 123, in register_cli_opts
      conf.register_cli_opt(CLI_OPTS, serial_opt_group)
    File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2285, in __inner
      result = f(self, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2477, in register_cli_opt
      return self.register_opt(opt, group, cli=True, clear_cache=False)
    File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2289, in __inner
      return f(self, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2434, in register_opt
      self._add_cli_opt(opt, group)
    File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2410, in _add_cli_opt
      if {'opt': opt, 'group': group} in self._cli_opts:
    File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 937, in __eq__
      return vars(self) == vars(another)
  TypeError: vars() argument must have __dict__ attribute

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1647784/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647776] [NEW] sriov functional test should avoid starting compute service twice with the same hostname

2016-12-06 Thread Vladik Romanovsky
Public bug reported:

The SRIOV functional test allowed some tests to start the compute service
twice with the same hostname, which affected the correctness of the tests.
This patch makes sure that the compute service is started only once.

** Affects: nova
 Importance: Undecided
 Assignee: Vladik Romanovsky (vladik-romanovsky)
 Status: In Progress


** Tags: libvirt testing

** Changed in: nova
 Assignee: (unassigned) => Vladik Romanovsky (vladik-romanovsky)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1647776

Title:
  sriov functional test should avoid starting compute service twice with
  the same hostname

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The SRIOV functional test allowed some tests to start the compute service
  twice with the same hostname, which affected the correctness of the tests.
  This patch makes sure that the compute service is started only once.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1647776/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647766] [NEW] sidebar doesn't show when network service not present

2016-12-06 Thread Matt Borland
Public bug reported:

In cases where the network service is not present, e.g. a Swift-only
installation, it is possible that a call to list_extensions() will raise
a ServiceCatalogException, which will cause the entire sidebar to fail
to render.  This means that the element included in
horizon/templates/horizon/common/_sidebar.html is not present.
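
A defensive sketch of the fix described above, assuming a hypothetical
wrapper (the actual patch may structure this differently):

  from horizon import exceptions
  from openstack_dashboard.api import neutron

  def list_extensions_safe(request):
      # Treat "no network endpoint in the service catalog" as "no
      # extensions" instead of letting the sidebar template blow up.
      try:
          return neutron.list_extensions(request)
      except exceptions.ServiceCatalogException:
          return ()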

** Affects: horizon
 Importance: Undecided
 Assignee: Matt Borland (palecrow)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1647766

Title:
  sidebar doesn't show when network service not present

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In cases where the network service is not present, e.g. a Swift-only
  installation, it is possible that a call to list_extensions() will
  raise a ServiceCatalogException, which will cause the entire sidebar
  to fail to render.  This means that the element included in
  horizon/templates/horizon/common/_sidebar.html is not present.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1647766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2016-12-06 Thread Surya Prakash Singh
** Changed in: kolla
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1082248

Title:
  Use uuidutils instead of uuid.uuid4()

Status in Cinder:
  Won't Fix
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in Karbor:
  Fix Released
Status in kolla:
  Fix Released
Status in kuryr:
  Fix Released
Status in kuryr-libnetwork:
  Fix Released
Status in Magnum:
  In Progress
Status in Mistral:
  Fix Released
Status in networking-calico:
  In Progress
Status in networking-midonet:
  Fix Released
Status in networking-ovn:
  Fix Released
Status in networking-sfc:
  In Progress
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in osprofiler:
  In Progress
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack Object Storage (swift):
  Won't Fix
Status in tacker:
  In Progress
Status in watcher:
  Fix Released

Bug description:
  OpenStack common has a wrapper for generating UUIDs.

  We should use only that function when generating UUIDs, for
  consistency.
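
  For illustration, the substitution being requested (oslo_utils.uuidutils
  is the current home of that wrapper):

    import uuid
    from oslo_utils import uuidutils

    legacy = str(uuid.uuid4())              # discouraged: raw stdlib call
    preferred = uuidutils.generate_uuid()   # the consistent wrapper

    # The wrapper also provides a matching validity check:
    assert uuidutils.is_uuid_like(preferred)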

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1082248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647730] [NEW] Remove old fallback logic in agent rpc

2016-12-06 Thread Marc Koderer
Public bug reported:

agent/rpc.py contains old fallback logic to also support older (pre-Mitaka)
RPC calls.
Remove those calls, which are already marked for deletion.
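
For context, this is the shape of the fallback being removed, patterned
after the get_devices_details_list fallback in agent/rpc.py (simplified;
treat names and versions as illustrative):

  import oslo_messaging

  def get_devices_details_list(client, context, devices, agent_id):
      try:
          cctxt = client.prepare(version='1.4')
          return cctxt.call(context, 'get_devices_details_list',
                            devices=devices, agent_id=agent_id)
      except oslo_messaging.UnsupportedVersion:
          # Pre-Mitaka servers only expose the one-device call; this is
          # the branch now marked for deletion.
          cctxt = client.prepare(version='1.0')
          return [cctxt.call(context, 'get_device_details',
                             device=device, agent_id=agent_id)
                  for device in devices]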

** Affects: neutron
 Importance: Undecided
 Assignee: Marc Koderer (m-koderer)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Marc Koderer (m-koderer)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1647730

Title:
  Remove old fallback logic in agent rpc

Status in neutron:
  In Progress

Bug description:
  agent/rpc.py contains old fallback logic to also support older
  (pre-Mitaka) RPC calls.
  Remove those calls, which are already marked for deletion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1647730/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647715] [NEW] get_subnetpool() raises an error when called with unset filters

2016-12-06 Thread Artur Korzeniewski
Public bug reported:

get_subnetpools() from db_base_plugin_v2 raises an error when called with an
unset filters argument.
This is because filters defaults to None, and when calling the OVO layer the
filters are passed as kwargs using '**':
https://github.com/openstack/neutron/blob/10.0.0.0b1/neutron/db/db_base_plugin_v2.py#L1087

It is only an issue when calling the plugin directly.
Unit tests were not covering it.

The API is tested and okay.

This also affects Newton.
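
A minimal sketch of the failure and the guard (simplified signatures; the
real method lives in neutron/db/db_base_plugin_v2.py):

  def get_objects(context, **kwargs):         # stand-in for the OVO call
      return kwargs

  def get_subnetpools(context, filters=None):
      filters = filters or {}                 # the missing guard: without
      return get_objects(context, **filters)  # it, **None raises TypeError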

** Affects: neutron
 Importance: Undecided
 Assignee: Artur Korzeniewski (artur-korzeniewski)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Artur Korzeniewski (artur-korzeniewski)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1647715

Title:
  get_subnetpool() raises an error when called with unset filters

Status in neutron:
  In Progress

Bug description:
  get_subnetpools() from db_base_plugin_v2 raises an error when called
  with an unset filters argument.
  This is because filters defaults to None, and when calling the OVO
  layer the filters are passed as kwargs using '**':
  https://github.com/openstack/neutron/blob/10.0.0.0b1/neutron/db/db_base_plugin_v2.py#L1087

  It is only an issue when calling the plugin directly.
  Unit tests were not covering it.

  The API is tested and okay.

  This also affects Newton.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1647715/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647697] [NEW] When deleting a resource provider in the placement api the rp's associated aggregates are not cleaned up

2016-12-06 Thread Chris Dent
Public bug reported:

When deleting a resource provider the spec says that the resource
provider's inventory and associated aggregates should be deleted:

http://specs.openstack.org/openstack/nova-
specs/specs/newton/implemented/generic-resource-pools.html#delete-
resource-providers-uuid

In version 1.1 of the placement API, inventory is being deleted, but
aggregate associations are left untouched. This means that in the
following case the resulting aggregates will be wrong:

* create an rp with a known uuid (this is allowed and expected)
* associate some aggregates (a, b, c)
* delete the rp
* recreate the rp with same uuid
* query the associated aggregates: get 'a, b, c' but expect []

Note that the set_aggregates functionality (and the associated PUT API)
is a full replace so any time the associated aggregates are updated, the
input in that request becomes the whole set of associated aggregates.
This bug is only present in the case where aggregates were present on
the previous incarnation of a resource provider but should not be on the
current incarnation.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1647697

Title:
  When deleting a resource provider in the placement api the rp's
  associated aggregates are not cleaned up

Status in OpenStack Compute (nova):
  New

Bug description:
  When deleting a resource provider the spec says that the resource
  provider's inventory and associated aggregates should be deleted:

  http://specs.openstack.org/openstack/nova-
  specs/specs/newton/implemented/generic-resource-pools.html#delete-
  resource-providers-uuid

  In version 1.1 of the placement API, inventory is being deleted, but
  aggregate associations are left untouched. This means that in the
  following case the resulting aggregates will be wrong:

  * create an rp with a known uuid (this is allowed and expected)
  * associate some aggregates (a, b, c)
  * delete the rp
  * recreate the rp with same uuid
  * query the associated aggregates: get 'a, b, c' but expect []

  Note that the set_aggregates functionality (and the associated PUT
  API) is a full replace so any time the associated aggregates are
  updated, the input in that request becomes the whole set of associated
  aggregates. This bug is only present in the case where aggregates were
  present on the previous incarnation of a resource provider but should
  not be on the current incarnation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1647697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647677] [NEW] angular table doesn't use the setting of Items Per Page

2016-12-06 Thread Kenji Ishii
Public bug reported:

Currently, the table based on angularjs supports pagination.
However, the number of items per page is a fixed value (20).

As with the table based on django, we need to support the personal
"Items Per Page" setting.

** Affects: horizon
 Importance: Undecided
 Assignee: Kenji Ishii (ken-ishii)
 Status: In Progress


** Tags: angularjs ux

** Changed in: horizon
 Assignee: (unassigned) => Kenji Ishii (ken-ishii)

** Tags added: angularjs

** Tags added: ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1647677

Title:
  angular table doesn't use the setting of Items Per Page

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Currently, the table based on angularjs supports pagination.
  However, the number of items per page is a fixed value (20).

  As with the table based on django, we need to support the personal
  "Items Per Page" setting.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1647677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647652] [NEW] > returns all available networks instead of the internal networks that are managed by this agent

2016-12-06 Thread Toni Freger
Public bug reported:

Version: Newton
Tested with 3 controllers, with a DHCP agent on each controller.

 
Returns all available networks instead of the internal networks that are
managed by this agent.

I've created a new network with a new subnet with --dhcp-disable, and this
network still appears in the list of networks managed by the DHCP agent.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1647652

Title:
  > returns all
  available networks instead of the internal networks that are managed by
  this agent

Status in neutron:
  New

Bug description:
  Version: Newton
  Tested with 3 controllers, with a DHCP agent on each controller.

   
  Returns all available networks instead of the internal networks that
  are managed by this agent.

  I've created a new network with a new subnet with --dhcp-disable, and
  this network still appears in the list of networks managed by the DHCP
  agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1647652/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647629] [NEW] auto_allocate_network is not called when external network exists

2016-12-06 Thread Yushiro FURUKAWA
Public bug reported:

When I try to boot an instance with '--nic auto', a network isn't created
automatically.  On the neutron side, in order to automatically allocate a
network/subnet/router, the following network is necessary in advance:

* router:external is True (it is an external network)
* is_default is True
* a subnet is associated

However, if I prepare the above default network, nova doesn't call
'_can_auto_allocate_network' [1].  As a result, the VM instance is attached
to that external network (a network is not generated automatically); see
the sketch below.

[1]
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1532
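
A sketch of the selection order described above (hypothetical helper; the
real check lives in nova/network/neutronv2/api.py around
'_can_auto_allocate_network'): with '--nic auto', nova only probes
neutron's auto-allocated-topology extension when the tenant sees no usable
networks, so a visible shared/external default network short-circuits the
allocation.

  # Sketch only: 'neutron' is a stand-in client; method names are made up.
  def pick_network(requested, visible_nets, neutron):
      if requested == 'auto':
          if not visible_nets:
              # dry-run of GET /v2.0/auto-allocated-topology/{project_id}
              neutron.check_auto_allocation()
              return neutron.get_auto_allocated_topology()
          # A shared or external default network counts as "visible", so
          # the instance lands on it and no topology is generated.
          return visible_nets[0]
      return requested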

[Steps to reproduce]
1. Delete all networks/subnets/ports/routers in neutron

2. Try to create an instance
  $ nova boot --flavor 1 --image 198384a2-9ece-4d24-a509-4ce02f183b63 vm1 --nic auto

ERROR (BadRequest): Unable to automatically allocate a network for project c8607bb7e4334b43aede76831b91ca03 (HTTP 400) (Request-ID: req-2d4088f6-d9aa-492d-a645-02327a058a47)
In n-api.log, we can see the following error message:
Error message: {"NeutronError": {"message": "Deployment error: No default router:external network.", "type": "AutoAllocationFailure", "detail": ""}}

3. I see.  OK, let's create a default external network (router:external)
  $ neutron net-create public --is_default --router:external --shared
  $ neutron subnet-create public --subnetpool a811bf33-dacc-42b9-b44a-d424f37023df

4. Try 'nova boot' again
  $ nova boot --flavor 1 --image 198384a2-9ece-4d24-a509-4ce02f183b63 vm1 --nic auto

5. Check the result with 'nova list'
  $ nova list
  +--------------------------------------+------+--------+------------+-------------+-----------------+
  | ID                                   | Name | Status | Task State | Power State | Networks        |
  +--------------------------------------+------+--------+------------+-------------+-----------------+
  | 4e83f9d0-acbf-4440-8563-17dd6ec0cde5 | vm1  | BUILD  | spawning   | NOSTATE     | public=10.0.0.9 |
  +--------------------------------------+------+--------+------------+-------------+-----------------+

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: auto-allocate-topology

** Summary changed:

- auto_allocate_network doesn't call when external network exists
+ auto_allocate_network is not called when external network exists

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1647629

Title:
  auto_allocate_network is not called when external network exists

Status in OpenStack Compute (nova):
  New

Bug description:
  When I try to boot an instance with '--nic auto', a network isn't created
  automatically.  On the neutron side, in order to automatically allocate a
  network/subnet/router, the following network is necessary in advance:

  * router:external is True (it is an external network)
  * is_default is True
  * a subnet is associated

  However, if I prepare the above default network, nova doesn't call
  '_can_auto_allocate_network' [1].  As a result, the VM instance is
  attached to that external network (a network is not generated
  automatically).

  [1]
  
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1532

  [Steps to reproduce]
  1. Delete all networks/subnets/ports/routers in neutron

  2. Try to create an instance
    $ nova boot --flavor 1 --image 198384a2-9ece-4d24-a509-4ce02f183b63 vm1 --nic auto

  ERROR (BadRequest): Unable to automatically allocate a network for project c8607bb7e4334b43aede76831b91ca03 (HTTP 400) (Request-ID: req-2d4088f6-d9aa-492d-a645-02327a058a47)
  In n-api.log, we can see the following error message:
  Error message: {"NeutronError": {"message": "Deployment error: No default router:external network.", "type": "AutoAllocationFailure", "detail": ""}}

  3. I see.  OK, let's create a default external network (router:external)
    $ neutron net-create public --is_default --router:external --shared
    $ neutron subnet-create public --subnetpool a811bf33-dacc-42b9-b44a-d424f37023df

  4. Try 'nova boot' again
    $ nova boot --flavor 1 --image 198384a2-9ece-4d24-a509-4ce02f183b63 vm1 --nic auto

  5. Check the result with 'nova list'
    $ nova list
    +--------------------------------------+------+--------+------------+-------------+-----------------+
    | ID                                   | Name | Status | Task State | Power State | Networks        |
    +--------------------------------------+------+--------+------------+-------------+-----------------+
    | 4e83f9d0-acbf-4440-8563-17dd6ec0cde5 | vm1  | BUILD  | spawning   | NOSTATE     | public=10.0.0.9 |
    +--------------------------------------+------+--------+------------+-------------+-----------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1647629/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1582323] Re: Commissioning fails when competing cloud metadata resides on disk

2016-12-06 Thread Andres Rodriguez
** Also affects: maas/2.1
   Importance: Undecided
   Status: New

** Also affects: maas/trunk
   Importance: Critical
   Status: Triaged

** Changed in: maas/trunk
Milestone: 2.0.0 => next

** Changed in: maas/trunk
Milestone: next => 2.2.0

** Changed in: maas/2.1
Milestone: None => 2.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1582323

Title:
  Commissioning fails when competing cloud metadata resides on disk

Status in cloud-init:
  Confirmed
Status in MAAS:
  Triaged
Status in MAAS 2.1 series:
  New
Status in MAAS trunk series:
  Triaged
Status in cloud-init package in Ubuntu:
  Confirmed
Status in cloud-init source package in Xenial:
  Confirmed

Bug description:
  A customer reused hardware that had previously deployed a RHEL
  overcloud controller, which places metadata on the disk as a legitimate
  datasource that cloud-init looks at by default.  When the newly
  enlisted node appeared, it had the name "overcloud-controller-0"
  instead of maas-enlist, pulled from the disk metadata, which had
  overridden MAAS' metadata.  Commissioning continually failed on all of
  the nodes until the disk metadata was manually removed (KVM-boot an
  Ubuntu ISO and rm -f the data, or dd zeros to the disk).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1582323/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647584] [NEW] Instances with an anti-affinity server group fail to boot in a concurrent scenario

2016-12-06 Thread Tao Li
Public bug reported:

Description
===
Assume the following scenario:
1. The compute resources are sufficient.
2. Instances are booted concurrently.
3. The number of instances is greater than the number of compute nodes.
4. The instances are booted with the same anti-affinity server group.
5. There is more than one controller node.

In the above scenario, more instances fail to boot than expected. In a
concurrent scenario, two or more instances can be scheduled to the same
compute node even when anti-affinity is specified. After 'instance_claim',
the compute node checks the anti-affinity policy without a lock, so two or
more instances may be checked together and fail because they affect each
other. These instances are then rescheduled, and in the next scheduling
pass the previous compute node is ignored.
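
A sketch of serializing the late check per server group (hypothetical
names; nova's real late check is '_validate_instance_group_policy' in the
compute manager):

  import threading

  class GroupPolicyViolation(Exception):
      """Stand-in for nova's reschedule-triggering exception."""

  _group_locks = {}

  def validate_anti_affinity(group_uuid, get_members_on_host):
      # Serialize the post-claim policy check per group: without the lock,
      # two concurrent claims on the same host can each observe the other
      # and *both* fail, wasting a host that could hold one of them.
      lock = _group_locks.setdefault(group_uuid, threading.Lock())
      with lock:
          if get_members_on_host():
              raise GroupPolicyViolation(group_uuid)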

Steps to reproduce
==
1. Assume 3 compute nodes and 2 or more controller nodes.
2. Create an anti-affinity server group.
3. Construct a bash script that boots instances with the anti-affinity group concurrently:
   nova boot --flavor 1 --image cirros --nic net-id=b0406792-26a8-4f26-843e-3b2231dbd4da --hint group=eaa8694e-8c83-47f2-8c02-93657c08d2bd lt_test01 &
   nova boot --flavor 1 --image cirros --nic net-id=b0406792-26a8-4f26-843e-3b2231dbd4da --hint group=eaa8694e-8c83-47f2-8c02-93657c08d2bd lt_test02 &
   nova boot --flavor 1 --image cirros --nic net-id=b0406792-26a8-4f26-843e-3b2231dbd4da --hint group=eaa8694e-8c83-47f2-8c02-93657c08d2bd lt_test03 &
   nova boot --flavor 1 --image cirros --nic net-id=b0406792-26a8-4f26-843e-3b2231dbd4da --hint group=eaa8694e-8c83-47f2-8c02-93657c08d2bd lt_test04
4. Execute the bash script.

Expected result
===
3 instances boot successfully.

Actual result
=
Only 2 instances boot successfully.

** Affects: nova
 Importance: Undecided
 Assignee: Tao Li (eric-litao)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Tao Li (eric-litao)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1647584

Title:
  Instances with an anti-affinity server group fail to boot in a
  concurrent scenario

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Assume the following scenario:
  1. The compute resources are sufficient.
  2. Instances are booted concurrently.
  3. The number of instances is greater than the number of compute nodes.
  4. The instances are booted with the same anti-affinity server group.
  5. There is more than one controller node.

  In the above scenario, more instances fail to boot than expected. In a
  concurrent scenario, two or more instances can be scheduled to the same
  compute node even when anti-affinity is specified. After
  'instance_claim', the compute node checks the anti-affinity policy
  without a lock, so two or more instances may be checked together and
  fail because they affect each other. These instances are then
  rescheduled, and in the next scheduling pass the previous compute node
  is ignored.

  Steps to reproduce
  ==
  1. Assume 3 compute nodes and 2 or more controller nodes.
  2. Create an anti-affinity server group.
  3. Construct a bash script that boots instances with the anti-affinity group concurrently:
     nova boot --flavor 1 --image cirros --nic net-id=b0406792-26a8-4f26-843e-3b2231dbd4da --hint group=eaa8694e-8c83-47f2-8c02-93657c08d2bd lt_test01 &
     nova boot --flavor 1 --image cirros --nic net-id=b0406792-26a8-4f26-843e-3b2231dbd4da --hint group=eaa8694e-8c83-47f2-8c02-93657c08d2bd lt_test02 &
     nova boot --flavor 1 --image cirros --nic net-id=b0406792-26a8-4f26-843e-3b2231dbd4da --hint group=eaa8694e-8c83-47f2-8c02-93657c08d2bd lt_test03 &
     nova boot --flavor 1 --image cirros --nic net-id=b0406792-26a8-4f26-843e-3b2231dbd4da --hint group=eaa8694e-8c83-47f2-8c02-93657c08d2bd lt_test04
  4. Execute the bash script.

  Expected result
  ===
  3 instances boot successfully.

  Actual result
  =
  Only 2 instances boot successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1647584/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp