[Yahoo-eng-team] [Bug 1707176] [NEW] Forgot to indicate the installation of packages apache2 and libapache2-mod-wsgi

2017-07-28 Thread zhiguo.li
Public bug reported:


This bug tracker is for errors with the documentation; use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [x] This doc is inaccurate in this way: in installation step 1, the guide
does not tell users to install the packages apache2 and libapache2-mod-wsgi,
but these two packages are necessary (see the example command after this
list).
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 
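
For reference, a minimal sketch of the missing step, assuming the standard
Ubuntu package names (the exact command in the published guide may differ):

  # apt install apache2 libapache2-mod-wsgi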

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release: 12.0.0.0b3.dev162 on 2017-07-27 00:25
SHA: c3b5d2d77b029880521912e43ad963f9b0c5bf99
Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-install-ubuntu.rst
URL: 
https://docs.openstack.org/keystone/latest/install/keystone-install-ubuntu.html

** Affects: keystone
 Importance: Undecided
 Assignee: zhiguo.li (zhiguo)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => zhiguo.li (zhiguo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1707176

Title:
  Forgot to indicate the installation of packages apache2 and
  libapache2-mod-wsgi

Status in OpenStack Identity (keystone):
  New

Bug description:

  This bug tracker is for errors with the documentation; use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: in installation step 1, the
guide does not tell users to install the packages apache2 and
libapache2-mod-wsgi, but these two packages are necessary.
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 12.0.0.0b3.dev162 on 2017-07-27 00:25
  SHA: c3b5d2d77b029880521912e43ad963f9b0c5bf99
  Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-install-ubuntu.rst
  URL: 
https://docs.openstack.org/keystone/latest/install/keystone-install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1707176/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707168] [NEW] [placement] resource provider trait-related query creates unicode warning

2017-07-28 Thread Chris Dent
Public bug reported:

Running queries for shared providers creates the following warning:


/home/cdent/src/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:340:
 OsloDBDeprecationWarning: EngineFacade is deprecated; please use 
oslo_db.sqlalchemy.enginefacade
  self._legacy_facade = LegacyEngineFacade(None, _factory=self)

/home/cdent/src/nova/.tox/functional/local/lib/python2.7/site-packages/sqlalchemy/sql/sqltypes.py:219:
 SAWarning: Unicode type received non-unicode bind param value 
'MISC_SHARES_VIA_AGGREGATE'. (this warning may be suppressed after 10 
occurrences)
  (util.ellipses_string(value),))

This is annoying when trying to evaluate test logs. It's noise.
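
A minimal sketch of the usual remedy, assuming the trait name is bound as a
plain Python 2 str somewhere in the query code (helper name hypothetical;
this is not the actual Nova patch):

    import six

    def as_unicode(value):
        # SQLAlchemy's Unicode column type warns when handed a py2
        # bytestring; binding text avoids the SAWarning above.
        return six.text_type(value)

    # e.g. sel.where(traits.c.name == as_unicode('MISC_SHARES_VIA_AGGREGATE'))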

** Affects: nova
 Importance: Low
 Assignee: Chris Dent (cdent)
 Status: In Progress


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1707168

Title:
  [placement] resource provider trait-related query creates unicode
  warning

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Running queries for shared providers creates the following warning:

  
/home/cdent/src/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:340:
 OsloDBDeprecationWarning: EngineFacade is deprecated; please use 
oslo_db.sqlalchemy.enginefacade
self._legacy_facade = LegacyEngineFacade(None, _factory=self)
  
/home/cdent/src/nova/.tox/functional/local/lib/python2.7/site-packages/sqlalchemy/sql/sqltypes.py:219:
 SAWarning: Unicode type received non-unicode bind param value 
'MISC_SHARES_VIA_AGGREGATE'. (this warning may be suppressed after 10 
occurrences)
(util.ellipses_string(value),))

  This is annoying when trying to evaluate test logs. It's noise.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1707168/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1670628] Re: nova-compute will try to re-plug the vif even if it exists for vhostuser port.

2017-07-28 Thread Sean Dague
What real world scenario would you expect to expose a situation where
the neutron environment is turned off and nova-compute is restarted?
This seems pretty synthetic, and the fact that it recovers once the
neutron agent restarts suggests most of the environment is working as
expected.

** Changed in: nova
   Status: New => Incomplete

** Changed in: nova
   Status: Incomplete => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1670628

Title:
  nova-compute will try to re-plug the vif even if it exists for
  vhostuser port.

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Description
  ===
  In the mitaka version, deploy neutron with ovs-dpdk.
  If we stop the ovs-agent and then restart nova-compute, the VMs on the
host lose network connectivity.

  Steps to reproduce
  ==
  Deploy mitaka with neutron and ovs-dpdk enabled; choose one compute node
where a VM has network connectivity.
  Run this on the host:
  1. #systemctl stop neutron-openvswitch-agent.service
  2. #systemctl restart openstack-nova-compute.service

  then ping $VM_IN_THIS_HOST

  Expected result
  ===
  ping $VM_IN_THIS_HOST would succeed

  Actual result
  =
  ping $VM_IN_THIS_HOST failed.

  Environment
  ===
  Centos7
  ovs2.5.1
  dpdk 2.2.0
  openstack-nova-compute-13.1.1-1

  Reason:
  After some digging, I found that nova-compute tries to plug the vif every
time it boots.
  Specifically for vhostuser ports, nova-compute does not check whether the
port already exists, as it does for legacy OVS, and re-plugs it with vsctl
args like "--if-exists del-port vhu".
  (refer
https://github.com/openstack/nova/blob/stable/mitaka/nova/virt/libvirt/vif.py#L679-L683)
  After the OVS vhostuser port is recreated, it does not get the right VLAN
tag that was set by the OVS agent.
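
  A minimal sketch of the kind of existence check being asked for here
(hypothetical helper using ovs-vsctl; not the Nova code):

      import subprocess

      def ovs_port_exists(bridge, port_name):
          # "ovs-vsctl list-ports" prints one port per line; checking
          # membership first would avoid the unconditional del-port/re-add
          # cycle that loses the agent-assigned VLAN tag.
          ports = subprocess.check_output(['ovs-vsctl', 'list-ports', bridge])
          return port_name in ports.decode().split()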

  In the test environment, after restart the ovs agent, the agent will
  set a proper vlan id for the port. and the network connection will be
  resumed.

  Not sure whether it's a bug or a config issue; am I missing something?
  There is also an fp_plug type for vhostuser ports; how could we specify it?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1670628/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1669054] Re: RequestSpec.ignore_hosts from resize is reused in subsequent evacuate

2017-07-28 Thread Sean Dague
This is really a note to self. Moving to Opinion as there are a lot of
mights here. :)

** Tags added: note-to-self

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1669054

Title:
  RequestSpec.ignore_hosts from resize is reused in subsequent evacuate

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  When doing a resize, if CONF.allow_resize_to_same_host is False, then
  we set RequestSpec.ignore_hosts and then save the RequestSpec.

  When we go to use the same RequestSpec on a subsequent
  rebuild/evacuate, ignore_hosts is still set from the previous resize.

  In RequestSpec.reset_forced_destinations() we reset force_hosts and
  force_nodes, it might make sense to also reset ignore_hosts.

  We may also want to change other things...for example in
  ConductorManager.rebuild_instance() we set request_spec.ignore_hosts
  to itself if it's set...that makes no sense if we're just going to
  reset it to nothing immediately afterwards.
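
  A minimal sketch of the suggested change, written over attribute-style
  fields (this is not the actual RequestSpec method):

      def reset_forced_destinations(spec):
          # Clear scheduling hints left over from a previous move
          # operation so a later rebuild/evacuate starts clean; clearing
          # ignore_hosts is the addition proposed in this bug.
          spec.force_hosts = None
          spec.force_nodes = None
          spec.ignore_hosts = None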

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1669054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667269] Re: Nova volume-attach doesn't care for <device> name given

2017-07-28 Thread Sean Dague
The device_name was removed from the API. I think the only place it ever
worked was xenserver.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1667269

Title:
  Nova volume-attach doesn't care for <device> name given

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  stack@controller:~/devstack$ nova help volume-attach
  usage: nova volume-attach <server> <volume> [<device>]

  Attach a volume to a server.

  Positional arguments:
    <server>  Name or ID of server.
    <volume>  ID of the volume to attach.
    <device>  Name of the device e.g. /dev/vdb. Use "auto" for autoassign (if
              supported). Libvirt driver will use default device name.

  
  As shown below:

  [root@greglinux2 ~(keystone_admin)]# nova volume-attachments e9c63adc-e837-4108-b5cf-10a8f147a5ab
  +----+--------+-----------+-----------+
  | ID | DEVICE | SERVER ID | VOLUME ID |
  +----+--------+-----------+-----------+
  +----+--------+-----------+-----------+
  [root@greglinux2 ~(keystone_admin)]# 
  [root@greglinux2 ~(keystone_admin)]# 
  [root@greglinux2 ~(keystone_admin)]# 

  [root@greglinux2 ~(keystone_admin)]# nova volume-attach e9c63adc-e837-4108-b5cf-10a8f147a5ab f0990f38-8fc5-4710-b9ac-e846b6c634cb /dev/vdb
  +----------+--------------------------------------+
  | Property | Value                                |
  +----------+--------------------------------------+
  | device   | /dev/vdb                             | ---> attached as device /dev/vdb
  | id       | f0990f38-8fc5-4710-b9ac-e846b6c634cb |
  | serverId | e9c63adc-e837-4108-b5cf-10a8f147a5ab |
  | volumeId | f0990f38-8fc5-4710-b9ac-e846b6c634cb |
  +----------+--------------------------------------+
  [root@greglinux2 ~(keystone_admin)]# 
  [root@greglinux2 ~(keystone_admin)]# 

  
  [root@greglinux2 ~(keystone_admin)]# 
  [root@greglinux2 ~(keystone_admin)]# 
  [root@greglinux2 ~(keystone_admin)]# nova volume-attach e9c63adc-e837-4108-b5cf-10a8f147a5ab f0990f38-8fc5-4710-b9ac-e846b6c634cb /dev/vdc >>>
  +----------+--------------------------------------+
  | Property | Value                                |
  +----------+--------------------------------------+
  | device   | /dev/vdb                             | ---> still attached as /dev/vdb
  | id       | f0990f38-8fc5-4710-b9ac-e846b6c634cb |
  | serverId | e9c63adc-e837-4108-b5cf-10a8f147a5ab |
  | volumeId | f0990f38-8fc5-4710-b9ac-e846b6c634cb |
  +----------+--------------------------------------+
  [root@greglinux2 ~(keystone_admin)]# 
  [root@greglinux2 ~(keystone_admin)]# 


  It looks like nova is not considering the <device> parameter at all.
  Is this expected?
  Looking into the code to understand why.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1667269/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667500] Re: OpenStack adds 'default' security group to a VM when attaching a new interface to a new network even if the VM has a customized secgroup
2017-07-28 Thread Sean Dague
If there is a request for a Nova feature here, please bring it in via
the Nova Specs process - https://specs.openstack.org/openstack/nova-
specs/readme.html

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667500

Title:
  OpenStack adds 'default' security group to a VM when attaching a new
  interface to a new network even if the VM has a customized secgroup

Status in neutron:
  Won't Fix
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  
  I am not sure if it is the design intention, but OpenStack adds the
'default' security group to a VM when attaching a new interface to that VM,
even if the VM has a customized secgroup.

  In many deployments, users create and add customized security groups to
their VMs. When a new network interface is attached, OpenStack keeps the
customized secgroup, but in addition it adds 'default', which is not good,
as 'default' should not have all security ports open by default.

  Liberty,


  Before attaching the VM to the new network (nova show):

  | security_groups | customized |

  After the VM is attached to the new network (nova show):

  | security_groups | customized, default |
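
  A hedged workaround sketch (assuming the standard CLI flags; not taken
  from this bug): pre-create the port with only the desired security group
  and attach it by ID, so no group is chosen implicitly:

    openstack port create --network <new-net> --security-group customized myport
    nova interface-attach --port-id <port-uuid> <server>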

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1667500/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1659220] Re: Attaching volume after taking snapshot fails silently

2017-07-28 Thread Sean Dague
Managing the AppArmor config is currently beyond scope for Nova, thus
marking this as Opinion. It would be good if the distros' rules were able
to handle this better.

If you wanted to put that in scope for Nova, that would require a Nova
spec - https://specs.openstack.org/openstack/nova-specs/readme.html

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1659220

Title:
  Attaching volume after taking snapshot fails silently

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Description
  ===
  You have an instance and a volume; attaching the volume to this instance
works fine. But if you take a snapshot of this volume (detached) and try to
attach it again, Horizon tells you everything was OK, but the volume stays
in the available state and isn't available in the instance.

  The actual problem seems to come from libvirt, as it doesn't create the
  required AppArmor rules in
  /etc/apparmor.d/libvirt/libvirt-<uuid>.files
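
  As a hedged illustration, the missing entry in that file would look
  roughly like the following (path taken from the DENIED audit log below;
  adding it by hand is only a workaround, not a fix):

    "/var/lib/nova/mnt/974fd68b172ee6f649b9ada09de2edf4/volume-fb44d9f4-4247-44e1-98a9-14bde7582ab0" rw,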

  Steps to reproduce
  ==
  * create an instance
  * create a volume
  * snapshot the volume
  * attach the volume

  Expected result
  ===
  Volume should be attached to the instance.

  Actual result
  =
  Volume stays available, not attached.

  Environment
  ===
  Ubuntu 16.04 with Newton via Cloud Archive.

  1. Exact version of OpenStack you are running. See the following
  ii  nova-common2:14.0.1-0ubuntu1~cloud0all
  ii  nova-compute   2:14.0.1-0ubuntu1~cloud0all
  ii  nova-compute-kvm   2:14.0.1-0ubuntu1~cloud0all
  ii  nova-compute-libvirt   2:14.0.1-0ubuntu1~cloud0all
  ii  python-nova2:14.0.1-0ubuntu1~cloud0all
  ii  python-novaclient  2:6.0.0-0ubuntu1~cloud0 all

  2. Which hypervisor did you use?
  ii  libvirt-bin1.3.1-1ubuntu10.6   
amd64
  ii  libvirt0:amd64 1.3.1-1ubuntu10.6   
amd64
  ii  python-libvirt 1.3.1-1ubuntu1  
amd64

  ii  ipxe-qemu  1.0.0+git-20150424.a25a16d-1ubuntu1 all
  ii  qemu-block-extra:amd64 1:2.5+dfsg-5ubuntu10.6  
amd64
  ii  qemu-kvm   1:2.5+dfsg-5ubuntu10.6  
amd64
  ii  qemu-system-common 1:2.5+dfsg-5ubuntu10.6  
amd64
  ii  qemu-system-x861:2.5+dfsg-5ubuntu10.6  
amd64
  ii  qemu-utils 1:2.5+dfsg-5ubuntu10.6  
amd64

  2. Which storage type did you use?
  ii  glusterfs-client   3.7.6-1ubuntu1  
amd64
  ii  glusterfs-common   3.7.6-1ubuntu1  
amd64

  3. Which networking type did you use?
  ii  neutron-common 2:9.0.0-0ubuntu1.16.10.2~cloud0 all
  ii  neutron-linuxbridge-agent  2:9.0.0-0ubuntu1.16.10.2~cloud0 all
  ii  neutron-plugin-linuxbridge-agent   2:9.0.0-0ubuntu1.16.10.2~cloud0 all
  ii  python-neutron 2:9.0.0-0ubuntu1.16.10.2~cloud0 all
  ii  python-neutron-fwaas   1:9.0.0-0ubuntu1~cloud0 all
  ii  python-neutron-lib 0.4.0-0ubuntu1~cloud0   all
  ii  python-neutronclient   1:6.0.0-0ubuntu1~cloud0 all

  Logs & Configs
  ==

  kern.log
  Jan 25 09:29:03 ext kernel: [6527389.763944] audit: type=1400 
audit(1485332943.463:125): apparmor="DENIED" operation="open" 
profile="libvirt-c9c9ec47-f4e7-4065-9b45-947d1c9efcd3" 
name="/var/lib/nova/mnt/974fd68b172ee6f649b9ada09de2edf4/volume-fb44d9f4-4247-44e1-98a9-14bde7582ab0"
 pid=30968 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=113 
ouid=113
  Jan 25 09:29:03 ext kernel: [6527389.763999] audit: type=1400 
audit(1485332943.463:126): apparmor="DENIED" operation="open" 
profile="libvirt-c9c9ec47-f4e7-4065-9b45-947d1c9efcd3" 
name="/var/lib/nova/mnt/974fd68b172ee6f649b9ada09de2edf4/volume-fb44d9f4-4247-44e1-98a9-14bde7582ab0"
 pid=30968 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=113 
ouid=113

  libvirt-bin:
  Jan 25 09:39:57 m2r1 libvirtd[31600]: internal error: unable to execute QEMU 
command 'device_add': Property 'virtio-blk-device.drive' can't find value 
'drive-virtio-disk1'

  nova-compute:
  2017-01-25 09:39:57.287 4091 ERROR nova.virt.libvirt.driver 
[req-e164a413-2688-4906-93e1-814da66e4b87 78009cee2c13413db3f15ef242e9073c 
6c8b3ce916864989ba15a76dfe88e0a9 - - -] [instance: 
c9c9ec47-f4e7-4065-9b45-947d1c9efcd3] Failed to attach volume at 

[Yahoo-eng-team] [Bug 1679703] Re: Unable to boot instance with VF (direct port) because the PF is not online

2017-07-28 Thread Sean Dague
Fixed in docs.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1679703

Title:
  Unable to boot instance with VF (direct port) because the PF is not
  online

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description of problem:

  Booted a VM with a Direct-physical port (the entire PF is associated with
the instance).
  When I deleted the instance I expected that the PF would become available
and online.
  Actually, when I try to boot an instance with a direct port (VF),
  I get this error message:

  VM in error state- 
  fault | {"message": "Exceeded maximum number of retries. Exceeded max 
scheduling attempts 3 for instance 102fde1b-22d3-4b05-8246-0f1af520455a. Last 
exception: internal error: Unable to configure VF 4 of PF 'p1p1' because the PF 
is not online. Please change host network config", "code": 500, "details": "  
File \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", line 524, 
in build_instances | filter_properties, instances[0].uuid)  

  [root@compute-0 ~]# ifconfig | grep p1p1   ---> PF is not online
  It's impossible to create an instance with a direct port (VF).

  
  version: 
  Ocata 
  How reproducible:
  Always

  Steps to Reproduce:
  1. Deploy SRIOV setup with PF support 
  2. boot instance with Direct-physical port
  3. Delete VM that is associated to PF
  4. boot instance with Direct port (VF)

  Expected results:
  VM with direct port should be booted. PF should be released

  Additional info:
  Workaround - systemctl restart network
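
  A narrower workaround (an assumption, not from the bug itself): bring the
  PF back up directly instead of restarting the whole network service, e.g.

    # ip link set p1p1 up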

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1679703/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1686999] Re: Instance with two attached volumes fails to start with error: Duplicate ID 'drive-ide0-0-0' for drive

2017-07-28 Thread Sean Dague
I think this is one of those edge cases where the workaround you
provided is the right way through. Decorating images is meant to
be part of the original image build process, and doesn't magically fix
things.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1686999

Title:
  Instance with two attached volumes fails to start with error:
  Duplicate ID 'drive-ide0-0-0' for drive

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  nova version: mitaka

  I imported a Linux CentOS machine into OpenStack. The instance does
  not have support for VirtIO, so I had to import the boot disk as hda
  (decorating the glance image with hw_disk_bus='ide'). Now I have this
  instance with two volumes attached, but when I try to boot it, the
  following XML is generated.

  
    
    [..CUT...]
    
    c3841ee3-3f9a-457e-b504-d35e367a1193
    
  
  
    
    [..CUT...]
    
    63e05c59-8de1-4908-a3dd-3f2261c82ea9
    
  

  This is because the two cinder volumes attached appear as /dev/hda and
  /dev/sda, and this creates a duplicate disk in the XML.
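
  For reference, the "decoration" mentioned above is an ordinary image
  property; a hedged example of setting it (the property then flows into
  volume_image_metadata for volumes created from the image):

    openstack image set --property hw_disk_bus=ide <image-id>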

  The machine does not boot, and in the nova-compute.log I find a
  stacktrace.

  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 138, 
in _dispatch_and_reply
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 185, 
in _dispatch
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 127, 
in _do_dispatch
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 110, in wrapped
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher payload)
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher 
self.force_reraise()
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 89, in wrapped
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 359, in 
decorated_function
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance=instance)
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher 
self.force_reraise()
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 328, in 
decorated_function
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 409, in 
decorated_function
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 387, in 
decorated_function
  2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher 

[Yahoo-eng-team] [Bug 1683770] Re: "nova volume-attach" should not allow attachment of cinder volume of other project to the instance of admin project

2017-07-28 Thread Sean Dague
If the nova CLI allows you to do that, it means the REST API allows you to
do that. Permissions should not be enforced on the client side, as they can
be circumvented with curl.

This looks like it's a permissions issue on the server side where you'd
like a different policy?

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1683770

Title:
  "nova volume-attach" should not allow attachment of cinder volume of
  other project to the instance of admin project

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Description of problem:

  The cinder volume created in another project is not visible under the
  admin project. Similarly, the nova CLI should not allow attaching another
  project's volume to an instance in the admin project. Horizon does not
  permit this kind of operation; however, the "nova volume-attach" CLI
  command allows it.

  Further, on the other project's side, the volume status shows
  "Attached to None on /dev/vdX", which is also a confusing status.

  Version-Release number of selected component (if applicable):

  
  How reproducible:

  
  Steps to Reproduce:
  1. Create volume demo-vol1 (Tenant).
  2. Create VM admin-vm1 (Admin).
  3. Source admin credentials.
  4. Use the nova volume-attach command to attach demo-vol1 to admin-vm1.
  5. Open Horizon -> under Tenant -> Volumes.
  See that the volume displays as attached to "None".

  Actual results:

  
  Expected results:

  The Operation should not be allowed as demo-vol1 should not be visible
  under admin project.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1683770/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1687067] Re: problems with cpu and cpu-thread policy where flavor/image specify different settings

2017-07-28 Thread Sean Dague
Ok, given the docs are fixed, let's put this into Opinion (which is a
closed state) for the actual code changes, which don't have consensus.
** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1687067

Title:
  problems with cpu and cpu-thread policy where flavor/image specify
  different settings

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  nova version: Newton (and later)

  There are a number of issues related to CPU policy and CPU thread
  policy where the flavor extra-spec and image properties do not match
  up.

  The docs at https://docs.openstack.org/admin-guide/compute-cpu-
  topologies.html say the following:

  "Image metadata takes precedence over flavor extra specs. Thus,
  configuring competing policies causes an exception. By setting a
  shared policy through image metadata, administrators can prevent users
  configuring CPU policies in flavors and impacting resource
  utilization."

  For the CPU policy this is exactly backwards based on the code.  The
  flavor is specified by the admin, and so it generally takes priority
  over the image which is specified by the end user.  If the flavor
  specifies "dedicated" then the result is dedicated regardless of what
  the image specifies.  If the flavor specifies "shared" then the result
  depends on the image--if it specifies "dedicated" then we will raise
  an exception, otherwise we use "shared".  If the flavor doesn't
  specify a CPU policy then the image can specify whatever policy it
  wants.

  The issue around CPU threading policy is more complicated.

  Back in Mitaka, if the flavor specified a CPU threading policy of
  either None or "prefer" then we would use the threading policy
  specified by the image (if it was set).  If the flavor specified a CPU
  threading policy of "isolate" or "require" and the image specified a
  different CPU threading policy then we raised
  exception.ImageCPUThreadPolicyForbidden(), otherwise we used the CPU
  threading policy specified by the flavor.  This behaviour is described
  in the spec at https://specs.openstack.org/openstack/nova-
  specs/specs/mitaka/implemented/virt-driver-cpu-thread-pinning.html

  In git commit 24997343 (which went into Newton) Nikola Dipanov made a
  code change that doesn't match the intent in the git commit message:

       if flavor_thread_policy in [None,
               fields.CPUThreadAllocationPolicy.PREFER]:
  -        cpu_thread_policy = image_thread_policy
  +        cpu_thread_policy = flavor_thread_policy or image_thread_policy

  The effect of this is that if the flavor specifies a CPU threading
  policy of "prefer" then we will use a policy of "prefer" regardless of
  the policy from the image.  If the flavor specifies a CPU threading
  policy of None then we will use the policy from the image.

  This is a bug, because the original intent was to treat None and
  "prefer" identically, since "prefer" was just an explicit way to
  specify the default behaviour.
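
  A minimal sketch of the originally intended precedence described above
  (simplified names; not the Nova source):

      PREFER = 'prefer'

      def effective_thread_policy(flavor_policy, image_policy):
          # None and "prefer" are interchangeable defaults, so either
          # defers to an explicit image policy.
          if flavor_policy in (None, PREFER):
              return image_policy or flavor_policy
          # "isolate"/"require" in the flavor conflict with a different
          # explicit image policy.
          if image_policy and image_policy != flavor_policy:
              raise ValueError('ImageCPUThreadPolicyForbidden')
          return flavor_policy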

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1687067/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1677892] Re: nova scheduler_default_filter ComputeCapabilities filter breaks other filters

2017-07-28 Thread Sean Dague
** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1677892

Title:
  nova scheduler_default_filter ComputeCapabilities filter breaks other
  filters

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  I'm using a Fuel-deployed Liberty environment. I have a "test team"
  project and a "dev team" project.

  I set up an aggregate with metadata "devhardware = true" and put some
  older hypervisors in the aggregate (this is so the dev team will
  create instances on the older hardware and the test team will get the
  newest hardware to test our cloud product on).

  I created a flavor that only the dev team will use that also has set:
  aggregate_instance_extra_specs: devhardware = true

  I added
  AggregateInstanceExtraSpecsFilter,AggregateMultiTenancyIsolation to
  the scheduler_default_filters of nova.conf

  I could not create an instance using that flavor. Creation would fail:

  2017-03-30 16:51:07.670 7773 WARNING nova.scheduler.utils 
[req-f3e4c44e-2edd-4da0-98ee-265628f2c5c8 e581f58b4ab441f2bb61d3ec9c3bf735 
bca6cc5337f44bd089ab4490124b3cff - - -] Failed to compute_task_build_instances: 
No valid host was found. There are not enough hosts available.
  Traceback (most recent call last):

    File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
142, in inner
  return func(*args, **kwargs)

    File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 84, 
in select_destinations
  filter_properties)

    File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", 
line 90, in select_destinations
  raise exception.NoValidHost(reason=reason)

  NoValidHost: No valid host was found. There are not enough hosts
  available.

  2017-03-30 16:51:07.671 7773 WARNING nova.scheduler.utils [req-
  f3e4c44e-2edd-4da0-98ee-265628f2c5c8 e581f58b4ab441f2bb61d3ec9c3bf735
  bca6cc5337f44bd089ab4490124b3cff - - -] [instance:
  b430ff72-4d52-4768-ae75-8ddd06b31337] Setting instance to ERROR state.

  But once I removed ComputeCapabilitiesFilter from
  scheduler_default_filters in nova.conf (where it seems to have been
  present by default), I could create instances with that flavor! (And
  they were correctly created only on the hypervisors in the aggregate.)
  
scheduler_default_filters=AggregateInstanceExtraSpecsFilter,AggregateMultiTenancyIsolation,RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,DiskFilter,ComputeFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

  
#scheduler_default_filters=AggregateInstanceExtraSpecsFilter,AggregateMultiTenancyIsolation,RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

  seems like a bug, but what do I know... thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1677892/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1659647] Re: The resource tracker clears the tracked_instances dictionary on every periodic job

2017-07-28 Thread Sean Dague
** Tags added: note-to-self

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1659647

Title:
  The resource tracker clears the tracked_instances dictionary on every
  periodic job

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  The resource tracker has a dict, named 'tracked_instances' which based
  on the name is used to keep track of the instances that the resource
  tracker is tracking.

  However, on every run of the 'update_available_resource' method, the
  '_update_usage_from_instances' method is called and in there the
  tracked_instances dict is cleared. This means that the conditionals in
  the '_update_usage_from_instance' (singular) method always indicate
  that the current instance is considered new and the various update
  methods for that instance will always be called.

  In the case of the calls to the placement API, this means there are
  many extra calls which could be avoided.

  Removing the clear() call results in no unit or functional test
  failures. A test run in the gate will be tried.
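
  A minimal sketch of the alternative to the clear() call (hypothetical
  helper, not the resource tracker code): drop only the instances that have
  disappeared, so surviving entries are not treated as new:

      def prune_tracked_instances(tracked, current_uuids):
          # Entries that remain keep their state, so the per-instance
          # update paths (and the extra placement calls) can be skipped.
          for uuid in set(tracked) - set(current_uuids):
              del tracked[uuid]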

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1659647/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1659062] Re: Failed evacuations leave neutron ports on destination host

2017-07-28 Thread Sean Dague
Evacuate behavior changes are so dicey at this point that I think
anything like this probably needs a spec to actually think through the
edge conditions.

Please dive in here if you are interested -
https://specs.openstack.org/openstack/nova-specs/readme.html

** Tags added: evacuate

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1659062

Title:
  Failed evacuations leave neutron ports on destination host

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Description
  ===
  This is related to https://bugs.launchpad.net/nova/+bug/1430042 and the 
associated fix https://review.openstack.org/#/c/169827/; if an evacuation fails 
there is no reverting of the neutron ports' host_id binding back to the source 
host.

  This may or may not be a bug, but if the evacuation fails and the
  source host comes back up and VMs are expected to be running, then the
  neutron ports should probably be rolled back.
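
  For illustration only (assuming admin credentials and the neutron CLI of
  that era), the binding could be rolled back by hand with something like:

    neutron port-update <port-id> --binding:host_id=<source-host>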

  Steps to reproduce
  ==
  * Raise an exception at some point in the evacuation flow after the 
setup_instance_network_on_host calls in _do_rebuild_instance in the manager
  * Issue an evacuation of a VM to the host that will fail

  Expected result
  ===
  * If the evacuation fails the expectation would be to have the neutron ports 
have their host_id binding updated to be the source host.

  Actual result
  =
  * The ports host_id bindings remain as the destination host.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
 Newton

  2. Which hypervisor did you use?
 PowerVM

  2. Which storage type did you use?
 N/A

  3. Which networking type did you use?
 Neutron with SEA

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1659062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454418] Re: Evacuate fails when using cells - AttributeError: 'NoneType' object has no attribute 'count'

2017-07-28 Thread Sean Dague
** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1454418

Title:
  Evacuate fails when using cells - AttributeError: 'NoneType' object
  has no attribute 'count'

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  nova version: 2014.2.2
  Using cells (parent - child setup)

  
  How to reproduce:

  nova evacuate  
  ERROR: The server has either erred or is incapable of performing the 
requested operation. (HTTP 500) (Request-ID: 
req-af20-182a-4acd-869a-1b23314b21d4)


  LOG:

  2015-05-12 23:17:27.274 8013 ERROR nova.api.openstack 
[req-af20-182a-4acd-869a-1b23314b21d4 None] Caught error: 'NoneType' object 
has no attribute 'count'
  Traceback (most recent call last):

File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", 
line 134, in _dispatch_and_reply
  incoming.message))

File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", 
line 177, in _dispatch
  return self._do_dispatch(endpoint, method, ctxt, args)

File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", 
line 123, in _do_dispatch
  result = getattr(endpoint, method)(ctxt, **new_args)

File "/usr/lib/python2.7/site-packages/nova/cells/manager.py", line 268, in 
service_get_by_compute_host
  service = response.value_or_raise()

File "/usr/lib/python2.7/site-packages/nova/cells/messaging.py", line 406, 
in process
  next_hop = self._get_next_hop()

File "/usr/lib/python2.7/site-packages/nova/cells/messaging.py", line 361, 
in _get_next_hop
  dest_hops = target_cell.count(_PATH_CELL_SEP)

  AttributeError: 'NoneType' object has no attribute 'count'
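
  A minimal sketch of the missing guard in _get_next_hop (illustrative; the
  separator value is an assumption):

      _PATH_CELL_SEP = '!'

      def get_next_hop_count(target_cell):
          # target_cell is None when no routing path was recorded; fail
          # with a clearer error than the AttributeError above.
          if target_cell is None:
              raise ValueError('message has no target cell path')
          return target_cell.count(_PATH_CELL_SEP)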

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1454418/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1655030] Re: AggregateImagePropertiesIsolation can be circumvented using Boot from Volume

2017-07-28 Thread Sean Dague
This would be a spec enhancement I think, please look at the specs
process here - https://specs.openstack.org/openstack/nova-
specs/readme.html

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1655030

Title:
  AggregateImagePropertiesIsolation can be circumvented using Boot from
  Volume

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  I have set up AggregateImagePropertiesIsolation to boot certain images on
one compute node only, and it works.
  However, when I use Boot from Volume, the VM is launched on any node,
although the volume's volume_image_metadata contains the image ID, such as:

  volume_image_metadata = {u'container_format': u'bare', u'min_ram':
  u'0', u'disk_format': u'qcow2', u'image_name': u'windows',
  u'image_id': u'f6add2c7-52c0-46f1-97a5-3c30562fb9b3', u'checksum':
  u'a11bdae56c6bb8b864fcaf35d4e1e9bb', u'min_disk': u'16', u'size':
  u'10131734528'}

  I think this makes the AggregateImagePropertiesIsolation filter next
  to useless and will make me resort to aggregate segregation by flavor.

  I think the problem is in the function get_image_metadata_from_volume: it
only copies the properties size, min_ram, and min_disk, and not the custom
properties used for filtering with AggregateImagePropertiesIsolation (see
the sketch below).
  http://code.metager.de/source/xref/OpenStack/nova/nova/utils.py#1338
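
  A minimal sketch of that change (hypothetical code, not nova/utils.py):
  copy the remaining volume_image_metadata keys as image properties so
  scheduler filters can see them:

      def image_meta_from_volume(volume):
          meta = dict(volume.get('volume_image_metadata', {}))
          return {
              'size': meta.pop('size', 0),
              'min_ram': meta.pop('min_ram', 0),
              'min_disk': meta.pop('min_disk', 0),
              # everything else becomes a filterable image property
              'properties': meta,
          }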

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1655030/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1706229] Re: security group: ipv6 protocol integer works in ipv4 ethertype

2017-07-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/487130
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=2ec36dc812710c284b75498e695a44585484c6a1
Submitter: Jenkins
Branch:master

commit 2ec36dc812710c284b75498e695a44585484c6a1
Author: Trevor McCasland 
Date:   Tue Jul 25 08:44:08 2017 -0500

Enforce ethertype with IPv6 integer protocols

By extending the blacklist to include the integer representation
for IPv6 we can successfully block API requests to create security
group rules for IPv6 protocols with ethertype IPv4.

Closes-Bug: #1706229
Change-Id: I5abeff178b3be18f1e93d00d9d546147b11c1a74


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1706229

Title:
  security group: ipv6 protocol integer works in ipv4 ethertype

Status in neutron:
  Fix Released

Bug description:
  Creating a security group rule with ethertype IPv4 and an IPv6
  protocol integer succeeds when it should fail.

  1. create security group, 'mygroup'
  2. create security group rule --protocol 43 --ethertype IPv4 mygroup

  Expected output:
  ubuntu@ubuntu:/opt/stack/tempest$ openstack security group rule create 
--protocol ipv6-route --ethertype IPv4 mygroup
  Error while executing command: Bad Request (HTTP 400) (Request-ID: 
req-c51a4492-3f9f-4381-98c4-8331d4366cca)

  Actual output:
  ubuntu@ubuntu:/opt/stack/tempest$ openstack security group rule create 
--protocol 43 --ethertype IPv4 mygroup
  +---+--+
  | Field | Value|
  +---+--+
  | created_at| 2017-07-25T00:34:46Z |
  | description   |  |
  | direction | ingress  |
  | ether_type| IPv4 |
  | id| 230d5bd4-4be5-4814-a80a-b8aa74d8f5d2 |
  | name  | None |
  | port_range_max| None |
  | port_range_min| None |
  | project_id| 4cdd24e0cfb54cf49aef2da436884a7a |
  | protocol  | 43   |
  | remote_group_id   | None |
  | remote_ip_prefix  | 0.0.0.0/0|
  | revision_number   | 0|
  | security_group_id | 439a1eb6-37a6-45ff-adb6-87aa87e8b68c |
  | updated_at| 2017-07-25T00:34:46Z |
  +---+--+

  The problem is here, in neutron/db/securitygroups_db.py:

      if rule['protocol'] in [constants.PROTO_NAME_IPV6_ENCAP,
                              constants.PROTO_NAME_IPV6_FRAG,
                              constants.PROTO_NAME_IPV6_ICMP,
                              constants.PROTO_NAME_IPV6_ICMP_LEGACY,
                              constants.PROTO_NAME_IPV6_NONXT,
                              constants.PROTO_NAME_IPV6_OPTS,
                              constants.PROTO_NAME_IPV6_ROUTE]:
          if rule['ethertype'] == constants.IPv4:
              raise ext_sg.SecurityGroupEthertypeConflictWithProtocol(
                  ethertype=rule['ethertype'], protocol=rule['protocol'])

  It should check for numbers and names from neutron_lib constants.
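
  A minimal sketch of the merged fix's idea, checking both names and IANA
  protocol numbers (constants inlined here for illustration):

      IPV6_ONLY = {'ipv6-encap', 'ipv6-frag', 'ipv6-icmp', 'icmpv6',
                   'ipv6-nonxt', 'ipv6-opts', 'ipv6-route',
                   '41', '43', '44', '58', '59', '60'}

      def check_ethertype_conflict(rule):
          # Reject IPv4 rules whose protocol is IPv6-only, whether the
          # protocol is given by name or by number.
          if rule['ethertype'] == 'IPv4' and str(rule['protocol']) in IPV6_ONLY:
              raise ValueError('protocol conflicts with ethertype IPv4')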

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1706229/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707085] Re: Max_unit should account for allocation_ratio

2017-07-28 Thread Sean Dague
Self over-allocating VCPUs is not a good idea. If your machine only has
4 CPUs and you expose that as 5 CPUs to a guest, you'll get
pathologically bad performance as the guest tries to optimize workloads
across them, causing cache flushes in the CPUs below.

Definitely in the Won't Fix category

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1707085

Title:
  Max_unit should account for allocation_ratio

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  It's great that the Placement API v1.4 provides a way to query for
  hosts that can fulfill a set of allocations. However, because the
  max_unit doesn't account for allocation_ratio, the result is not as
  expected.

  For example, my host has a total of 4 vCPU, 2 vCPU have been
  allocated, and cpu_allocation_ratio is 3. I expect this host to be
  considered a qualified host when I request
  /resource_providers?resources=VCPU:5, because
  capacity = (total - reserved) x allocation_ratio - used
  = (4 - 0) x 3 - 2 = 10. However, because max_unit is set to the
  total, which is 4, my host is not in the response although it can
  fulfill the requested allocation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1707085/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1675276] Re: Volumes attached to shelved instance may contain incorrect device_name

2017-07-28 Thread Sean Dague
Working as designed; device_name has since been removed from the API.

** Tags added: shelve

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1675276

Title:
  Volumes attached to shelved instance may contain incorrect device_name

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Nova has supported attaching and detaching volumes to/from shelved
  instances since microversion 2.20.

  When we attach a volume to a shelved instance and then unshelve the
  instance, the cinder side doesn't include the device_name information.

  How to reproduce:

  #1 shelve an instance

  stack@SZX1000280461:/opt/devstack$ nova list
  +--------------------------------------+-------+--------+------------+-------------+------------------------------+
  | ID                                   | Name  | Status | Task State | Power State | Networks                     |
  +--------------------------------------+-------+--------+------------+-------------+------------------------------+
  | bd09421c-90b2-411c-99d0-fcf07338c542 | test2 | ACTIVE | -          | Running     | Kevin=10.0.0.76, 172.24.4.10 |
  +--------------------------------------+-------+--------+------------+-------------+------------------------------+

  stack@SZX1000280461:/opt/devstack$ nova shelve bd09421c-90b2-411c-
  99d0-fcf07338c542

  stack@SZX1000280461:/opt/devstack$ nova list
  +--------------------------------------+-------+-------------------+------------+-------------+------------------------------+
  | ID                                   | Name  | Status            | Task State | Power State | Networks                     |
  +--------------------------------------+-------+-------------------+------------+-------------+------------------------------+
  | bd09421c-90b2-411c-99d0-fcf07338c542 | test2 | SHELVED_OFFLOADED | -          | Shutdown    | Kevin=10.0.0.76, 172.24.4.10 |
  +--------------------------------------+-------+-------------------+------------+-------------+------------------------------+

  # 2 attach a cinder volume to the shelved instance:

  stack@SZX1000280461:/opt/devstack$ cinder list
  +--------------------------------------+-----------+-------+------+-------------+----------+-------------+
  | ID                                   | Status    | Name  | Size | Volume Type | Bootable | Attached to |
  +--------------------------------------+-----------+-------+------+-------------+----------+-------------+
  | a72f9642-ca8f-4c2e-bfe0-362c6220d498 | available | test1 | 1    | lvmdriver-1 | false    |             |
  +--------------------------------------+-----------+-------+------+-------------+----------+-------------+

  stack@SZX1000280461:/opt/devstack$ nova volume-attach bd09421c-90b2-411c-99d0-fcf07338c542 a72f9642-ca8f-4c2e-bfe0-362c6220d498
  +----------+--------------------------------------+
  | Property | Value                                |
  +----------+--------------------------------------+
  | device   | -                                    |
  | id       | a72f9642-ca8f-4c2e-bfe0-362c6220d498 |
  | serverId | bd09421c-90b2-411c-99d0-fcf07338c542 |
  | volumeId | a72f9642-ca8f-4c2e-bfe0-362c6220d498 |
  +----------+--------------------------------------+

  stack@SZX1000280461:/opt/devstack$ cinder list
  +--------------------------------------+--------+-------+------+-------------+----------+--------------------------------------+
  | ID                                   | Status | Name  | Size | Volume Type | Bootable | Attached to                          |
  +--------------------------------------+--------+-------+------+-------------+----------+--------------------------------------+
  | a72f9642-ca8f-4c2e-bfe0-362c6220d498 | in-use | test1 | 1    | lvmdriver-1 | false    | bd09421c-90b2-411c-99d0-fcf07338c542 |
  +--------------------------------------+--------+-------+------+-------------+----------+--------------------------------------+

  stack@SZX1000280461:/opt/devstack$ nova show test2
  +------------------------------+------------------------+
  | Property                     | Value                  |
  +------------------------------+------------------------+
  | Kevin network                | 10.0.0.76, 172.24.4.10 |
  | OS-DCF:diskConfig            | MANUAL                 |
  | OS-EXT-AZ:availability_zone  |                        |
  | OS-EXT-SRV-ATTR:host         | -

[Yahoo-eng-team] [Bug 1661086] Re: Failed to plug VIF VIFBridge

2017-07-28 Thread Sean Dague
I expect this was an issue with a stale installation of dependencies. If
this is still an issue, please reopen

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661086

Title:
  Failed to plug VIF VIFBridge

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I did a fresh restack/reclone this morning and can no longer boot up a
  cirros instance.

  Nova client returns:

  | fault| {"message": "Failure running
  os_vif plugin plug method: Failed to plug VIF
  
VIFBridge(active=False,address=fa:16:3e:6f:0e:84,bridge_name='qbrd3377ad5-43',has_traffic_filtering=True,id=d3377ad5-4397
  -474c-b6dd-e29b9a52d277,network=Network(d2582e93-d6e5-43ac-be7b-
  fe3fc3c7", "code": 500, "details": "  File
  \"/opt/stack/nova/nova/compute/manager.py\", line 1780, in
  _do_build_and_run_instance |

  pip list:
  nova (15.0.0.0b4.dev77, /opt/stack/nova)
  os-vif (1.4.0)

  n-cpu.log shows:
  2017-02-01 11:13:32.880 DEBUG nova.network.os_vif_util 
[req-17c8b4e4-2197-4205-aed3-007d0f2837e4 admin admin] Converted object 
VIFBridge(active=False,address=fa:16:3e:6f:0e:84,bridge_name='qbrd3377ad5-43',has_traffic_filtering=True,id=d3377ad5-4397-474c-b6dd-e29b9a52d277,network=Network(d2582e93-d6e5-43ac-be7b-fe3fc3c7e593),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tapd3377ad5-43')
 from (pid=69603) nova_to_osvif_vif 
/opt/stack/nova/nova/network/os_vif_util.py:425
  2017-02-01 11:13:32.880 DEBUG os_vif 
[req-17c8b4e4-2197-4205-aed3-007d0f2837e4 admin admin] Unplugging vif 
VIFBridge(active=False,address=fa:16:3e:6f:0e:84,bridge_name='qbrd3377ad5-43',has_traffic_filtering=True,id=d3377ad5-4397-474c-b6dd-e29b9a52d277,network=Network(d2582e93-d6e5-43ac-be7b-fe3fc3c7e593),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tapd3377ad5-43')
 from (pid=69603) unplug 
/usr/local/lib/python2.7/dist-packages/os_vif/__init__.py:112
  2017-02-01 11:13:32.881 DEBUG oslo.privsep.daemon [-] privsep: 
request[139935485013840]: (3, b'vif_plug_ovs.linux_net.delete_bridge', 
('qbrd3377ad5-43', b'qvbd3377ad5-43'), {}) from (pid=69603) out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
  2017-02-01 11:13:32.881 DEBUG oslo.privsep.daemon [-] privsep: Exception 
during request[139935485013840]: a bytes-like object is required, not 'str' 
from (pid=69603) out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
  2017-02-01 11:13:32.881 DEBUG oslo.privsep.daemon [-] privsep: 
reply[139935485013840]: (5, 'builtins.TypeError', ("a bytes-like object is 
required, not 'str'",)) from (pid=69603) out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
  2017-02-01 11:13:32.882 ERROR os_vif 
[req-17c8b4e4-2197-4205-aed3-007d0f2837e4 admin admin] Failed to unplug vif 
VIFBridge(active=False,address=fa:16:3e:6f:0e:84,bridge_name='qbrd3377ad5-43',has_traffic_filtering=True,id=d3377ad5-4397-474c-b6dd-e29b9a52d277,network=Network(d2582e93-d6e5-43ac-be7b-fe3fc3c7e593),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tapd3377ad5-43')
  2017-02-01 11:13:32.882 TRACE os_vif Traceback (most recent call last):
  2017-02-01 11:13:32.882 TRACE os_vif   File 
"/usr/local/lib/python2.7/dist-packages/os_vif/__init__.py", line 113, in unplug
  2017-02-01 11:13:32.882 TRACE os_vif plugin.unplug(vif, instance_info)
  2017-02-01 11:13:32.882 TRACE os_vif   File 
"/usr/local/lib/python2.7/dist-packages/vif_plug_ovs/ovs.py", line 216, in 
unplug
  2017-02-01 11:13:32.882 TRACE os_vif self._unplug_bridge(vif, 
instance_info)
  2017-02-01 11:13:32.882 TRACE os_vif   File 
"/usr/local/lib/python2.7/dist-packages/vif_plug_ovs/ovs.py", line 192, in 
_unplug_bridge
  2017-02-01 11:13:32.882 TRACE os_vif 
linux_net.delete_bridge(vif.bridge_name, v1_name)
  2017-02-01 11:13:32.882 TRACE os_vif   File 
"/usr/local/lib/python2.7/dist-packages/oslo_privsep/priv_context.py", line 
205, in _wrap
  2017-02-01 11:13:32.882 TRACE os_vif return 
self.channel.remote_call(name, args, kwargs)
  2017-02-01 11:13:32.882 TRACE os_vif   File 
"/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py", line 186, in 
remote_call
  2017-02-01 11:13:32.882 TRACE os_vif exc_type = 
importutils.import_class(result[1])
  2017-02-01 11:13:32.882 TRACE os_vif   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 30, in 
import_class
  2017-02-01 11:13:32.882 TRACE os_vif __import__(mod_str)
  2017-02-01 11:13:32.882 TRACE os_vif ImportError: No module named builtins
  2017-02-01 11:13:32.882 TRACE os_vif

  Full n-cpu.log is attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1663225] Re: ironic does not clean or shutdown nodes if nova-compute is down at the moment of 'nova delete'

2017-07-28 Thread Sean Dague
I think on the Nova side this is pretty much working as designed. If
there is different / better ironic behavior, perhaps it could be brought
up with the Ironic team?

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1663225

Title:
  ironic does not clean or shutdown nodes if nova-compute is down at the
  moment of 'nova delete'

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Affected configuration: Ironic installation with Ironic driver for
  nova.

  If the nova-compute service is down at the moment 'nova delete' is
  executed for an instance, nova marks the instance as 'deleted' even
  though the node continues to run.

  Steps to reproduce:
  1. Prepare ironic/nova
  2. Start instance (nova boot/openstack server create)
  3. Wait until 'ACTIVE' state for instance.
  4. Stop nova-compute
  5. Wait until it becomes 'down' in 'nova service-list'
  6. Execute the 'nova delete' command for the instance.
  7. Start the nova-compute service

  Expected result:
  - Instance sits in the 'deleting' state until nova-compute comes back.
  - Node switches to 'cleaning/available' as soon as nova-compute comes back.
  - Tenant instance (baremetal server) stops operating as soon as
    nova-compute is up.

  Actual result:
  - Instance is deleted almost instantly, regardless of nova-compute status.
  - Node keeps the 'active' state with the 'Instance UUID' field filled in.
  - Tenant instance (baremetal server) continues to run after nova-compute is
    up, until 'running_deleted_instance_action' takes effect.

  I believe this is incorrect behavior, because it allows tenants to
  continue to use services even though nova reports that no instances
  are allocated to the tenant.

  Affected version: newton.

  P.S. Normally nova (with the libvirt/kvm driver) would keep the instance
  in the 'deleting' state until nova-compute comes back, remove it from the
  server (from libvirt), and only then mark the instance as deleted in the
  database. The ironic driver should do the same.
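
  For illustration, here is a toy model of the two delete paths described
  in the P.S. (hypothetical names, not nova's actual code):

      def rpc_cast_terminate(instance):
          print('compute tears down the guest/baremetal node for %s' % instance)

      def mark_deleted_in_db(instance):
          print('DB row for %s marked deleted; hardware untouched' % instance)

      def delete_server(instance, compute_service_up):
          if compute_service_up:
              rpc_cast_terminate(instance)       # normal path
          else:
              mark_deleted_in_db(instance)       # "local delete" -- this bug

      delete_server('node-1', compute_service_up=False)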

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1663225/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1663165] Re: network_data from metadata agent does not contain local routes for adjacent subnets

2017-07-28 Thread Sean Dague
I think this is the kind of enhancement that would come through the
specs process - https://specs.openstack.org/openstack/nova-
specs/readme.html

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1663165

Title:
  network_data from metadata agent does not contain local routes for
  adjacent subnets

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Description
  ---
  The network_data.json provided by the config drive/metadata service is 
missing local routes for adjacent subnets.

  When using DHCP, these routes *are* provided via neutron's DHCP agent
  correctly.

  Steps to Reproduce
  --
  * Create a new network in Neutron with 2 subnets: 192.168.0.0/24 and 
192.168.1.0/24 with gateways 192.168.0.1 and 192.168.1.1 respectively.
  * Launch a new instance with an address in subnet 192.168.0.0/24 .
  * Inspect the available metadata in `openstack/latest/network_data.json`

  Expected Behaviour
  --
  There should be two routes specified in `network_data.json`:
  * network: 0.0.0.0, netmask: 0.0.0.0,   gateway: 192.168.0.1
  * network: 192.168.1.0, netmask: 255.255.255.0, gateway: 0.0.0.0

  Actual Behaviour
  
  There is only one route specified in `network_data.json`:
  * network: 0.0.0.0, netmask: 0.0.0.0,   gateway: 192.168.0.1
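
  For illustration (a sketch, not neutron or nova code), the missing
  on-link route can be derived from the adjacent subnet's CIDR; a gateway
  of 0.0.0.0 marks the destination as directly reachable:

      import ipaddress

      adjacent = ipaddress.ip_network(u'192.168.1.0/24')

      routes = [
          # the default route that network_data.json does contain:
          {'network': '0.0.0.0', 'netmask': '0.0.0.0',
           'gateway': '192.168.0.1'},
          # the on-link route this bug reports as missing:
          {'network': str(adjacent.network_address),   # '192.168.1.0'
           'netmask': str(adjacent.netmask),           # '255.255.255.0'
           'gateway': '0.0.0.0'},
      ]
      print(routes)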

  Environment
  ---
  * Ubuntu 14.04
  * stable/newton - latest release from Git

  =

  For reference to Neutron's behaviour, see: [neutron.agent.linux.dhcp]

  NOTE: this is also not properly implemented in cloud-init's static
  networking implementation, and an issue is currently open here:
  [https://bugs.launchpad.net/cloud-init/+bug/1663049]. This may be
  relevant for anybody attempting to test the behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1663165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707246] [NEW] Configuration guide references configuration options for policy instead of sample policy file

2017-07-28 Thread Lance Bragstad
Public bug reported:

The configuration guide document should contain all information for
configuration options, as well as sample policy files. Keystone's
configuration section uses the wrong directive, which results in the
configuration options being rendered where the sample policy file should
be:

https://docs.openstack.org/keystone/latest/configuration/policy.html

We should correct this so that the policy section of the configuration
guide references policy and not configuration options.

** Affects: keystone
 Importance: Low
 Status: In Progress


** Tags: documentation low-hanging-fruit

** Tags added: docu

** Tags removed: docu
** Tags added: documentation low

** Tags removed: low
** Tags added: low-hanging-fruit

** Changed in: keystone
   Importance: Undecided => Low

** Changed in: keystone
   Status: New => Triaged

** Changed in: keystone
   Status: Triaged => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1707246

Title:
  Configuration guide references configuration options for policy
  instead of sample policy file

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  The configuration guide document should contain all information for
  configuration options, as well as sample policy files. Keystone's
  configuration section uses the wrong directive, which results in the
  configuration options being rendered where the sample policy file
  should be:

  https://docs.openstack.org/keystone/latest/configuration/policy.html

  We should correct this so that the policy section of the configuration
  guide references policy and not configuration options.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1707246/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707239] [NEW] Incorrect cloud-init datasource (CloudStack) when booting in OpenStack

2017-07-28 Thread Matthew Wynne
Public bug reported:

I got this message while booting the latest Ubuntu 16.04 Cloud image from: 
http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img


It said to report it, so here I am :)

1. I'm running OpenStack Newton.
2. 0.7.9-153-g16a7302f-0ubuntu1~16.04.2


# A new feature in cloud-init identified possible datasources for
# this system.
#   ['OpenStack', 'None']
# However, the datasource used was: CloudStack
#
# In the future, cloud-init will only attempt to use datasources that
# are identified or specifically configured.
# For more information see
#   https://bugs.launchpad.net/bugs/1669675
#
# If you are seeing this message, please file a bug against
# cloud-init at
#   https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid
# Make sure to include the cloud provider your instance is
# running on.
#
# After you have filed a bug, you can disable this warning by launching
# your instance with the cloud-config below, or putting that content
# into /etc/cloud/cloud.cfg.d/99-warnings.cfg
#
# #cloud-config
# warnings:
#   dsid_missing_source: off

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: dsid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1707239

Title:
  Incorrect cloud-init datasource (CloudStack) when booting in OpenStack

Status in cloud-init:
  New

Bug description:
  I got this message while booting the latest Ubuntu 16.04 Cloud image from: 
  
http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img

  
  It said to report it, so here I am :)

  1. I'm running OpenStack Newton.
  2. 0.7.9-153-g16a7302f-0ubuntu1~16.04.2


  # A new feature in cloud-init identified possible datasources for
  # this system.
  #   ['OpenStack', 'None']
  # However, the datasource used was: CloudStack
  #
  # In the future, cloud-init will only attempt to use datasources that
  # are identified or specifically configured.
  # For more information see
  #   https://bugs.launchpad.net/bugs/1669675
  #
  # If you are seeing this message, please file a bug against
  # cloud-init at
  #   https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid
  # Make sure to include the cloud provider your instance is
  # running on.
  #
  # After you have filed a bug, you can disable this warning by launching
  # your instance with the cloud-config below, or putting that content
  # into /etc/cloud/cloud.cfg.d/99-warnings.cfg
  #
  # #cloud-config
  # warnings:
  #   dsid_missing_source: off

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1707239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707238] [NEW] detach_device_with_retry doesn't detach from live domain if persistent domain was already detached in the past

2017-07-28 Thread melanie witt
Public bug reported:

In an attempt to fix a different bug [1] where a later try to detach a
volume failed if the guest was busy and ignored the request to detach
from the live domain, a new bug was introduced where a later try to
detach a volume silently passes even though the device is still attached
to the live domain.

This bug is serious because now it's possible for a volume to be
attached to two live domains and data corruption can occur. We should be
trying to detach from the live domain even if we had already detached
from the persistent domain in the past.

[1] https://bugs.launchpad.net/nova/+bug/1633236
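
A minimal sketch of the behaviour being asked for, using the libvirt-python
API (an illustration, not nova's detach_device_with_retry; error handling
omitted): recompute the flags from the domain's current state on every
retry, so the live detach is still attempted even after a past persistent
detach succeeded:

    import libvirt

    def detach_from_all_domains(dom, device_xml):
        # derive the flags from the current state instead of remembering
        # which domains were already detached on earlier attempts
        flags = 0
        if dom.isPersistent():
            flags |= libvirt.VIR_DOMAIN_AFFECT_CONFIG
        if dom.isActive():
            flags |= libvirt.VIR_DOMAIN_AFFECT_LIVE
        dom.detachDeviceFlags(device_xml, flags)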

** Affects: nova
 Importance: High
 Assignee: melanie witt (melwitt)
 Status: New


** Tags: libvirt volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1707238

Title:
  detach_device_with_retry doesn't detach from live domain if persistent
  domain was already detached in the past

Status in OpenStack Compute (nova):
  New

Bug description:
  In an attempt to fix a different bug [1] where a later try to detach a
  volume failed if the guest was busy and ignored the request to detach
  from the live domain, a new bug was introduced where a later try to
  detach a volume silently passes even though the device is still
  attached to the live domain.

  This bug is serious because now it's possible for a volume to be
  attached to two live domains and data corruption can occur. We should
  be trying to detach from the live domain even if we had already
  detached from the persistent domain in the past.

  [1] https://bugs.launchpad.net/nova/+bug/1633236

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1707238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1644725] Re: Check destination_type when booting with bdm provided

2017-07-28 Thread Sean Dague
It does feel like it might be better to fix this on the client side.
Marking as opinion as the patch author abandoned the nova patch. It is
welcome to come back later.

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1644725

Title:
  Check destination_type when booting with bdm provided

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  Opinion
Status in python-novaclient:
  In Progress

Bug description:
  When booting an instance with block_device_mapping provided, the current
  implementation allows "destination_type" to be None, and this leads to
  Nova and Cinder getting out of sync:

  Step 1: Booting with block_device_mapping, leave destination_type to
  be None:

  root@SZX1000191849:/var/log/nova# nova --debug boot  --flavor 1
  --image 2ba75018-403f-407b-864a-08564022e1f8 --nic net-
  id=cce1d2f1-acf4-4646-abdc-069f8d0dbb71 --block-device
  'source=volume,id=9f49d5b0-3625-46a2-9ed4-d82f19949148' test_bdm

  the corresponding REST call is:
  DEBUG (session:342) REQ: curl -g -i -X POST 
http://10.229.45.17:8774/v2.1/os-volumes_boot -H "Accept: application/json" -H 
"User-Agent: python-novaclient" -H "OpenStack-API-Version: compute 2.37" -H 
"X-OpenStack-Nova-API-Version: 2.37" -H "X-Auth-Token: 
{SHA1}4d8c2c43338e1c4d96e08bcd1c2f3ff36de14154" -H "Content-Type: 
application/json" -d '{"server": {"name": "test_bdm", "imageRef": 
"2ba75018-403f-407b-864a-08564022e1f8", "block_device_mapping_v2": 
[{"source_type": "image", "delete_on_termination": true, "boot_index": 0, 
"uuid": "2ba75018-403f-407b-864a-08564022e1f8", "destination_type": "local"}, 
{"source_type": "volume", "uuid": "9f49d5b0-3625-46a2-9ed4-d82f19949148"}], 
"flavorRef": "1", "max_count": 1, "min_count": 1, "networks": [{"uuid": 
"cce1d2f1-acf4-4646-abdc-069f8d0dbb71"}]}}'

  Step 2: After the instance is successfully launched, the detailed info
  is like this:

  root@SZX1000191849:/var/log/nova# nova show 83d9ec32-93e0-441a-ae10-00e08b65de0b
  +--------------------------------------+--------------------------------------+
  | Property                             | Value                                |
  +--------------------------------------+--------------------------------------+
  | OS-DCF:diskConfig                    | MANUAL                               |
  | OS-EXT-AZ:availability_zone          | nova                                 |
  | OS-EXT-SRV-ATTR:host                 | SZX1000191849                        |
  | OS-EXT-SRV-ATTR:hostname             | test-bdm                             |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | SZX1000191849                        |
  | OS-EXT-SRV-ATTR:instance_name        | instance-0016                        |
  | OS-EXT-SRV-ATTR:kernel_id            | 87c9afd6-3a47-4a4c-a804-6b456d68136d |
  | OS-EXT-SRV-ATTR:launch_index         | 0                                    |
  | OS-EXT-SRV-ATTR:ramdisk_id           | acd02b28-6484-4f90-a5e7-bba7159343e1 |
  | OS-EXT-SRV-ATTR:reservation_id       | r-fiqwkq02                           |
  | OS-EXT-SRV-ATTR:root_device_name     | /dev/vda                             |
  | OS-EXT-SRV-ATTR:user_data            | -                                    |
  | OS-EXT-STS:power_state               | 1                                    |
  | OS-EXT-STS:task_state                | -                                    |
  | OS-EXT-STS:vm_state                  | active                               |
  | OS-SRV-USG:launched_at               | 2016-11-25T06:50:36.00               |
  | OS-SRV-USG:terminated_at             | -                                    |
  | accessIPv4                           |                                      |
  | accessIPv6                           |                                      |
  | config_drive                         |                                      |
[Yahoo-eng-team] [Bug 1669070] Re: Checking whether group has role assignment on domain without specifying a role ID result in HTTP 200

2017-07-28 Thread Morgan Fainberg
This isn't a bug. If the {role_id} at the end of the call is not passed,
we use the list action of:

/v3/domains/{domain_id}/groups/{group_id}/roles/ (regardless of head or
get action)

If a role_id is passed, you're calling a different API. This is not a
great design, but this is working as intended.
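
To make the routing concrete (an illustration; paths as in the Identity v3
API): the two URLs differ only by the trailing role_id, but they map to
different operations:

    base = '/v3/domains/{domain_id}/groups/{group_id}/roles'

    list_assignments = base                 # GET/HEAD on collection -> 200
    check_assignment = base + '/{role_id}'  # HEAD -> 204 if assigned, 404 if not
    print(list_assignments)
    print(check_assignment)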

** Changed in: keystone
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1669070

Title:
  Checking whether group has role assignment on domain without
  specifying a role ID result in HTTP 200

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  It should've been either 400 or 404. Steps to reproduce.

  1. install a vanilla devstack
  2. use "openstack group list" to find a group ID. Any group will do. i.e.

  openstack group list
  +--+---+
  | ID   | Name  |
  +--+---+
  | 64e5dcd8dea0429ca22f97bcac4629bc | admins|
  | 9ff3c6f47a034223ad19bb6d0dd52bb6 | nonadmins |
  +--+---+

  3. get a token. i.e. "openstack token issue"
  4. call the check group assignment on domain API using curl without
  specifying the role ID, and you can see an HTTP 200 is returned, e.g.:

  curl -v --head -H 'X-Auth-Token: 
gABYtwwzxv9T3fxnHY3Js2ln2lTvoi1fukAYe0NSXgoV9S1qI808zQSYJyKb1AtTBy3MNUJFONBb7rpsIAu12zfRlZulfOgl7vvD_EM1DkMogpIRQvotJN1aYKMq8XqcgZ-NikolKCpUfas30GMQPFOoPpJdz0qjfIcniX0ihzVRTDqVcb0'
 
http://localhost/identity/v3/domains/default/groups/64e5dcd8dea0429ca22f97bcac4629bc/roles/
  *   Trying 127.0.0.1...
  * Connected to localhost (127.0.0.1) port 80 (#0)
  > HEAD 
/identity/v3/domains/default/groups/64e5dcd8dea0429ca22f97bcac4629bc/roles/ 
HTTP/1.1
  > Host: localhost
  > User-Agent: curl/7.47.0
  > Accept: */*
  > X-Auth-Token: 
gABYtwwzxv9T3fxnHY3Js2ln2lTvoi1fukAYe0NSXgoV9S1qI808zQSYJyKb1AtTBy3MNUJFONBb7rpsIAu12zfRlZulfOgl7vvD_EM1DkMogpIRQvotJN1aYKMq8XqcgZ-NikolKCpUfas30GMQPFOoPpJdz0qjfIcniX0ihzVRTDqVcb0
  >
  < HTTP/1.1 200 OK
  HTTP/1.1 200 OK
  < Date: Wed, 01 Mar 2017 18:06:01 GMT
  Date: Wed, 01 Mar 2017 18:06:01 GMT
  < Server: Apache/2.4.18 (Ubuntu)
  Server: Apache/2.4.18 (Ubuntu)
  < Vary: X-Auth-Token
  Vary: X-Auth-Token
  < x-openstack-request-id: req-9ea5a135-4128-4967-8552-d1c6a7b63c97
  x-openstack-request-id: req-9ea5a135-4128-4967-8552-d1c6a7b63c97
  < Content-Length: 158
  Content-Length: 158
  < Content-Type: application/json
  Content-Type: application/json

  <
  * Connection #0 to host localhost left intact

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1669070/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1706083] Re: Post-migration, Cinder volumes lose disk cache value, resulting in I/O latency

2017-07-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/485752
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=14c38ac0f253036da79f9d07aedf7dfd5778fde8
Submitter: Jenkins
Branch:master

commit 14c38ac0f253036da79f9d07aedf7dfd5778fde8
Author: Kashyap Chamarthy 
Date:   Thu Jul 20 19:01:23 2017 +0200

libvirt: Post-migration, set cache value for Cinder volume(s)

This was noticed in a downstream bug when a Nova instance with Cinder
volume (in this case, both the Nova instance storage _and_ Cinder volume
are located on Ceph) is migrated to a target Compute node, the disk
cache value for the Cinder volume gets changed.  I.e. the QEMU
command-line for the Cinder volume stored on Ceph turns into the
following:

Pre-migration, QEMU command-line for the Nova instance:

[...] -drive file=rbd:volumes/volume-[...],cache=writeback

Post-migration, QEMU command-line for the Nova instance:

[...] -drive file=rbd:volumes/volume-[...],cache=none

Furthermore, Jason Dillaman from Ceph confirms RBD cache being enabled
pre-migration:

$ ceph --admin-daemon /var/run/qemu/ceph-client.openstack.[...] \
config get rbd_cache
{
"rbd_cache": "true"
}

And disabled, post-migration:

$ ceph --admin-daemon /var/run/qemu/ceph-client.openstack.[...] \
config get rbd_cache
{
"rbd_cache": "false"
}

This change in cache value post-migration causes I/O latency on the
Cinder volume.

From a chat with Daniel Berrangé on IRC: Prior to live migration, Nova
rewrites all the <disk> elements, and passes this updated guest XML
across to target libvirt.  And it is never calling _set_cache_mode()
when doing this.  So `nova.conf`'s `writeback` setting is getting lost,
leaving the default `cache=none` setting.  And this mistake (of leaving
the default cache value to 'none') will of course be correct when you
reboot the guest on the target later.

So:

  - Call _set_cache_mode() in _get_volume_config() method -- because it
is a callback function to _update_volume_xml() in
nova/virt/libvirt/migration.py.

  - And remove duplicate calls to _set_cache_mode() in
_get_guest_storage_config() and attach_volume().

  - Fix broken unit tests; adjust test_get_volume_config() to reflect
the disk cache mode.

Thanks: Jason Dillaman of Ceph for observing the change in cache modes
in a downstream bug analysis, Daniel Berrangé for help in
analysis from a Nova libvirt driver POV, and Stefan Hajnoczi
from QEMU for help on I/O latency instrumentation with `perf`.

Closes-bug: 1706083
Change-Id: I4184382b49dd2193d6a21bfe02ea973d02d8b09f
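
A hedged sketch of the idea behind the fix (hypothetical names, not the
nova code referenced above): apply the configured cache mode whenever a
volume's disk config is generated, so the live-migration XML-rewrite path
gets it too:

    DISK_CACHEMODES = {'network': 'writeback'}   # stand-in for nova.conf

    class DiskConfig(object):
        source_type = 'network'                  # e.g. an rbd-backed volume
        driver_cache = None

    def set_cache_mode(conf):
        conf.driver_cache = DISK_CACHEMODES.get(conf.source_type, 'none')

    def get_volume_config(connection_info):
        conf = DiskConfig()                      # hypothetical builder
        set_cache_mode(conf)                     # previously skipped on this path
        return conf

    print(get_volume_config({}).driver_cache)    # 'writeback', not 'none'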


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1706083

Title:
  Post-migration, Cinder volumes lose disk cache value, resulting in I/O
  latency

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Confirmed
Status in OpenStack Compute (nova) ocata series:
  Confirmed

Bug description:
  Description
  ===

  [This was initially reported by a Red Hat OSP customer.]

  The I/O latency of a Cinder volume after live migration of an instance
  to which it's attached increases significantly. This stays increased
  till the VM is stopped and started again. [The VM is booted from a Cinder
  volume.]

  This is not the case when using a disk from a Nova store backend [
  without Cinder volume] -- or at least the difference isn't so
  significantly high after a live migration.

  The storage backend is Ceph 2.0.

  
  How reproducible: Consistently

  
  Steps to Reproduce
  ==

  (0) Both the Nova instances and Cinder volumes are located on Ceph

  (1) Create a Nova instance with a Cinder volume attached to it

  (2) Live migrate it to a target Compute node

  (3) Run `ioping` (`ioping -c 10 .`) on the Cinder volume.
  Alternatively, run other I/O benchmarks like using `fio` with
  'direct=1' (which uses non-bufferred I/O) as a good sanity check to
  get a second opinion regarding latency.

  
  Actual result
  =

  Before live migration: `ioping` output on the Cinder volume attached to a Nova
  instance:

  [guest]$ sudo ioping -c 10 .
  4 KiB <<< . (xfs /dev/sda1): request=1 time=98.0 us (warmup)
  4 KiB <<< . (xfs /dev/sda1): request=2 time=135.6 us
  4 KiB <<< . (xfs /dev/sda1): request=3 time=155.5 us
  4 KiB <<< . (xfs /dev/sda1): request=4 time=161.7 us
  4 KiB <<< . (xfs /dev/sda1): request=5 time=148.4 us
  4 KiB <<< . (xfs /dev/sda1): request=6 

[Yahoo-eng-team] [Bug 1707252] [NEW] Claims in the scheduler does not account for doubling allocations on resize to same host

2017-07-28 Thread Matt Riedemann
Public bug reported:

This code in the scheduler report client is used by the scheduler when
making allocation requests against a certain instance during a move
operation:

https://github.com/openstack/nova/blob/09f0795fe0f5d043593f5ae55a6ec5f6298ba5ba/nova/scheduler/client/report.py#L200

The idea is to retain the source node allocations in Placement when also
adding allocations for the destination node.

However, with the set difference code in there, it does not account for
resizing an instance to the same host, where the compute node for the
source and destination are the same.

We need to double the allocations for resize to same host for a case
like resizing the instance and the VCPU/MEMORY_MB goes down but the
DISK_GB goes up.
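
A toy illustration of the arithmetic: on a resize to the same host the
allocations against that single provider need to be summed, not replaced by
a set difference:

    current = {'node-A': {'VCPU': 4, 'MEMORY_MB': 4096, 'DISK_GB': 10}}
    new = {'node-A': {'VCPU': 2, 'MEMORY_MB': 2048, 'DISK_GB': 20}}

    doubled = {}
    for rp in set(current) | set(new):
        merged = dict(current.get(rp, {}))
        for rc, amount in new.get(rp, {}).items():
            merged[rc] = merged.get(rc, 0) + amount
        doubled[rp] = merged

    # {'node-A': {'VCPU': 6, 'MEMORY_MB': 6144, 'DISK_GB': 30}}
    print(doubled)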

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: pike-rc-potential placement scheduler

** Tags added: pike-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1707252

Title:
  Claims in the scheduler does not account for doubling allocations on
  resize to same host

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  This code in the scheduler report client is used by the scheduler when
  making allocation requests against a certain instance during a move
  operation:

  
https://github.com/openstack/nova/blob/09f0795fe0f5d043593f5ae55a6ec5f6298ba5ba/nova/scheduler/client/report.py#L200

  The idea is to retain the source node allocations in Placement when
  also adding allocations for the destination node.

  However, with the set difference code in there, it does not account
  for resizing an instance to the same host, where the compute node for
  the source and destination are the same.

  We need to double the allocations for resize to same host for a case
  like resizing the instance and the VCPU/MEMORY_MB goes down but the
  DISK_GB goes up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1707252/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707128] [NEW] amazon customer service number +1-888-341-6651 Support Amazon prime

2017-07-28 Thread jacob
Private bug reported:

Amazon prime customer service number +1-888-341-6651 Prime Customer
Service Phone

[Yahoo-eng-team] [Bug 1707128] Re: amazon customer service number +1-888-341-6651 Support Amazon prime

2017-07-28 Thread William Grant
** Project changed: nova => null-and-void

** Information type changed from Public to Private

** Changed in: null-and-void
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1707128

Title:
  amazon customer service number +1-888-341-6651 Support Amazon prime

Status in NULL Project:
  Invalid

Bug description:
  Amazon prime customer service number +1-888-341-6651 Prime Customer
  Service Phone

[Yahoo-eng-team] [Bug 1707130] [NEW] Lack of the step to create a domain

2017-07-28 Thread zhiguo.li
Public bug reported:


This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [x] This doc is inaccurate in this way: Although the domain has a default
name and default ID, the guide should tell users how to create a new domain
with a command.
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release: 12.0.0.0b3.dev162 on 2017-07-27 00:25
SHA: c3b5d2d77b029880521912e43ad963f9b0c5bf99
Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-users.rst
URL: https://docs.openstack.org/keystone/latest/install/keystone-users.html

** Affects: keystone
 Importance: Undecided
 Assignee: zhiguo.li (zhiguo)
 Status: New


** Tags: doc

** Changed in: keystone
 Assignee: (unassigned) => zhiguo.li (zhiguo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1707130

Title:
  Lack of the step to create a domain

Status in OpenStack Identity (keystone):
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: Although the domain has a default
name and default ID, the guide should tell users how to create a new domain
with a command.
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 12.0.0.0b3.dev162 on 2017-07-27 00:25
  SHA: c3b5d2d77b029880521912e43ad963f9b0c5bf99
  Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-users.rst
  URL: https://docs.openstack.org/keystone/latest/install/keystone-users.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1707130/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707129] [NEW] Error in docs for configuring distributed virtual routing

2017-07-28 Thread yanpuqing
Public bug reported:

The doc for configuring network nodes in
/neutron/doc/source/admin/config-dvr-ha-snat.rst says:

"Configure the Open vSwitch agent. Add the following to
/etc/neutron/plugins/ml2/ml2_conf.ini:

[ovs]
local_ip = TUNNEL_INTERFACE_IP_ADDRESS
bridge_mappings = external:br-ex

[agent]
enable_distributed_routing = True
tunnel_types = vxlan
l2_population = True"

It should be ml2_conf rather than ovs_conf.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1707129

Title:
  Error in docs for configuring distributed virtual routing

Status in neutron:
  New

Bug description:
  The doc for configuring network nodes in
  /neutron/doc/source/admin/config-dvr-ha-snat.rst says:

  "Configure the Open vSwitch agent. Add the following to
  /etc/neutron/plugins/ml2/ml2_conf.ini:

  [ovs]
  local_ip = TUNNEL_INTERFACE_IP_ADDRESS
  bridge_mappings = external:br-ex

  [agent]
  enable_distributed_routing = True
  tunnel_types = vxlan
  l2_population = True"

  It should be ml2_conf rather than ovs_conf.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1707129/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1666926] Re: introduce os-vif VIF object for veth pairs

2017-07-28 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Invalid

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1666926

Title:
  introduce os-vif VIF object for veth pairs

Status in os-vif:
  In Progress

Bug description:
  placeholder for use by neutron
  sean-k-mooney to expand later

To manage notifications about this bug go to:
https://bugs.launchpad.net/os-vif/+bug/1666926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1666927] Re: introduce os-vif VIF object for patch port pairs

2017-07-28 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Invalid

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1666927

Title:
  introduce os-vif VIF object for patch port pairs

Status in os-vif:
  In Progress

Bug description:
  placeholder for use by neutron
  sean-k-mooney to expand later.

To manage notifications about this bug go to:
https://bugs.launchpad.net/os-vif/+bug/1666927/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1682805] Re: transient switching loop caused by neutron-openvswitch-agent

2017-07-28 Thread Jesse
After rechecking this issue, I find that the transient switching loop may
not exist...
The fail_mode of br-int, br-eth0 and br-ex is 'secure', which means that
when the node reboots or Open vSwitch restarts, there will be no normal
flow in these bridges, so no packets can pass through them.
The normal flow in br-int will not create a switching loop because there is
no normal flow in br-eth0 or br-ex. Normal and drop flows are added in
br-eth0 and then br-ex. It seems the transient switching loop cannot happen.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1682805

Title:
  transient switching loop caused by neutron-openvswitch-agent

Status in neutron:
  Invalid

Bug description:
  If we have the topology below in the network node.

  https://etherpad.openstack.org/p/neutron_transient_switching_loop

  The ports on the switch connected to eth0 and eth1 are set to trunk all
  VLANs. When neutron-openvswitch-agent restarts, it first sets up the
  br-int bridge via self.setup_integration_br(), then sets up br-eth0 and
  br-ex via self.setup_physical_bridges(self.bridge_mappings).

  Before this bug (https://bugs.launchpad.net/neutron/+bug/1383674) was
  fixed, all flows in br-int were cleared when neutron-openvswitch-agent
  restarted, which caused the transient switching loop described below.
  After the bug above was fixed, the flows in br-int remain, to keep the
  network connected if neutron-openvswitch-agent restarts; but if the
  network node reboots, the transient switching loop can still happen, as
  described below.

  In self.setup_integration_br(), a normal flow in table 0 is added to
  br-int.
  In self.setup_physical_bridges(self.bridge_mappings), drop flows for
  packets coming from int-br-eth0 and int-br-ex are added to br-int.
  These drop flows cut the switching loop from the switch to br-int.
  But before the drop flows are added to br-int, if a broadcast packet comes
  in from the switch, the packet will loop between the switch and br-int.

  We should add the normal flow in table 0 of br-int only after the drop
  flows have been added.
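
  A hypothetical sketch of that ordering (illustrative names, not the
  agent's real methods): install the drop flows for the peer ports first
  and the normal flow last, so no window exists in which broadcasts can
  loop:

      class Bridge(object):
          def __init__(self):
              self.flows = []

          def add_flow(self, **kw):
              self.flows.append(kw)

      br_int = Bridge()
      for peer_port in ('int-br-eth0', 'int-br-ex'):
          br_int.add_flow(table=0, priority=2, in_port=peer_port,
                          actions='drop')
      br_int.add_flow(table=0, priority=0, actions='normal')  # after drops
      print(br_int.flows)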

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1682805/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707156] [NEW] [RFE] Adoption of "os-vif" in Neutron

2017-07-28 Thread Rodolfo Alonso
Public bug reported:

[Existing problem]
From the `os-vif Nova SPEC`_, whenever a new Neutron mechanism driver is
created, this results in the definition of a new VIF type. This situation
generates two problems:

* Nova developers need to maintain the plug/unplug code in the VIF drivers,
which is defined by the needs of the Neutron mechanism.
* The format of the data passed between Nova and Neutron for the VIF port 
binding is fairly loosely defined (no versioning or formal definition).

"os-vif" is being adopted progressively in Nova. As said before, "os-
vif" is in charge of the plugging and unplugging operations for the
existing VIF types.


[Proposal]
To adopt "os-vif" project in Neutron, decoupling any plug/unplug operations 
from Neutron.

This RFE could be the container for smaller contributions migrating all VIF
types in Neutron step by step.

This topic will be discussed during the Denver PTG [2].

The proposed solution (to be discussed) is to add a new class in each
mech driver agent, leveraging "os-vif" directly in our agent's plugging
logic.


[Benefits]
* Centralize the plug/unplug operations in one common place.
* Provide to Neutron a way to define VIF types as required by Nova. The 
definitions contained in "os-vif" will be used by both projects.
* Remove from Neutron any plug/unplug driver specific operation, leaving to 
"os-vif" these actions. Neutron is supposed to be a L2-L4 SDN controller.


[References]
[1] `os-vif Nova SPEC`_: https://review.openstack.org/#/c/287090/
[2] https://etherpad.openstack.org/p/neutron-queens-ptg
[3] Nova os-vif library: 
https://review.openstack.org/#/q/topic:bp/os-vif-library,n,z
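
A hedged sketch of the proposed direction: an L2 agent delegating plugging
to os-vif. The os_vif.initialize()/os_vif.plug(vif, instance_info) entry
points match the os_vif public API seen in tracebacks earlier in this
digest; the object field values below are purely illustrative:

    import os_vif
    from os_vif import objects

    os_vif.initialize()

    instance_info = objects.instance_info.InstanceInfo(
        uuid='c27d61ad-6884-4881-9fdb-5201c17141f7', name='test-instance')
    vif = objects.vif.VIFBridge(
        id='d3377ad5-4397-474c-b6dd-e29b9a52d277',
        address='fa:16:3e:6f:0e:84',
        bridge_name='qbrd3377ad5-43',
        plugin='ovs')

    os_vif.plug(vif, instance_info)   # the agent carries no plug logic itself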

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1707156

Title:
  [RFE] Adoption of "os-vif" in Neutron

Status in neutron:
  New

Bug description:
  [Existing problem]
  From `os-vif Nova SPEC`_, whenever a new Neutron mechanism driver is created, 
this results in the definition of a new VIF type. This situation generates two 
problems:

  * Nova developers need to maintain the plug/unplug code in the VIF
drivers, which is defined by the needs of the Neutron mechanism.
  * The format of the data passed between Nova and Neutron for the VIF port 
binding is fairly loosely defined (no versioning or formal definition).

  "os-vif" is being adopted progressively in Nova. As said before, "os-
  vif" is in charge of the plugging and unplugging operations for the
  existing VIF types.

  
  [Proposal]
  To adopt "os-vif" project in Neutron, decoupling any plug/unplug operations 
from Neutron.

  This RFE could be the container for smaller contributions migrating all
  VIF types in Neutron step by step.

  This topic will be discussed during the Denver PTG [2].

  The proposed solution (to be discussed) is to add a new class in each
  mech driver agent, leveraging "os-vif" directly in our agent's
  plugging logic.

  
  [Benefits]
  * Centralize the plug/unplug operations in one common place.
  * Provide to Neutron a way to define VIF types as required by Nova. The 
definitions contained in "os-vif" will be used by both projects.
  * Remove from Neutron any plug/unplug driver specific operation, leaving to 
"os-vif" these actions. Neutron is supposed to be a L2-L4 SDN controller.

  
  [References]
  [1] `os-vif Nova SPEC`_: https://review.openstack.org/#/c/287090/
  [2] https://etherpad.openstack.org/p/neutron-queens-ptg
  [3] Nova os-vif library: 
https://review.openstack.org/#/q/topic:bp/os-vif-library,n,z

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1707156/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707160] [NEW] test_create_port_in_allowed_allocation_pools test fails on ironic grenade

2017-07-28 Thread Vladyslav Drok
Public bug reported:

Here is an example of a job at
http://logs.openstack.org/58/487458/6/check/gate-grenade-dsvm-ironic-
ubuntu-xenial/d8f187e/console.html#_2017-07-28_09_33_52_031224

2017-07-28 09:33:52.027473 | Captured pythonlogging:
2017-07-28 09:33:52.027484 | ~~~
2017-07-28 09:33:52.027539 | 2017-07-28 09:15:48,746 9778 INFO 
[tempest.lib.common.rest_client] Request 
(PortsTestJSON:test_create_port_in_allowed_allocation_pools): 201 POST 
http://149.202.183.40:9696/v2.0/networks 0.342s
2017-07-28 09:33:52.027604 | 2017-07-28 09:15:48,746 9778 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
2017-07-28 09:33:52.027633 | Body: {"network": {"name": 
"tempest-PortsTestJSON-test-network-1596805013"}}
2017-07-28 09:33:52.027728 | Response - Headers: {u'date': 'Fri, 28 Jul 
2017 09:15:48 GMT', u'x-openstack-request-id': 
'req-0502025a-db49-4f1f-b30d-c38b8098b79e', u'content-type': 
'application/json', u'content-length': '582', 'content-location': 
'http://149.202.183.40:9696/v2.0/networks', 'status': '201', u'connection': 
'close'}
2017-07-28 09:33:52.027880 | Body: 
{"network":{"status":"ACTIVE","router:external":false,"availability_zone_hints":[],"availability_zones":[],"description":"","subnets":[],"shared":false,"tenant_id":"5c851bb85bef4b008714ef04d1fe3671","created_at":"2017-07-28T09:15:48Z","tags":[],"ipv6_address_scope":null,"mtu":1450,"updated_at":"2017-07-28T09:15:48Z","admin_state_up":true,"revision_number":2,"ipv4_address_scope":null,"is_default":false,"port_security_enabled":true,"project_id":"5c851bb85bef4b008714ef04d1fe3671","id":"b8a3fb1c-86a4-4518-8c3a-dd12db585659","name":"tempest-PortsTestJSON-test-network-1596805013"}}
2017-07-28 09:33:52.027936 | 2017-07-28 09:15:49,430 9778 INFO 
[tempest.lib.common.rest_client] Request 
(PortsTestJSON:test_create_port_in_allowed_allocation_pools): 201 POST 
http://149.202.183.40:9696/v2.0/subnets 0.682s
2017-07-28 09:33:52.027998 | 2017-07-28 09:15:49,431 9778 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
2017-07-28 09:33:52.028054 | Body: {"subnet": {"ip_version": 4, 
"allocation_pools": [{"end": "10.1.0.14", "start": "10.1.0.2"}], "network_id": 
"b8a3fb1c-86a4-4518-8c3a-dd12db585659", "gateway_ip": "10.1.0.1", "cidr": 
"10.1.0.0/28"}}
2017-07-28 09:33:52.028135 | Response - Headers: {u'date': 'Fri, 28 Jul 
2017 09:15:49 GMT', u'x-openstack-request-id': 
'req-1a50b739-8683-4aaa-ba4a-6e9daf73f1c8', u'content-type': 
'application/json', u'content-length': '594', 'content-location': 
'http://149.202.183.40:9696/v2.0/subnets', 'status': '201', u'connection': 
'close'}
2017-07-28 09:33:52.030085 | Body: 
{"subnet":{"service_types":[],"description":"","enable_dhcp":true,"tags":[],"network_id":"b8a3fb1c-86a4-4518-8c3a-dd12db585659","tenant_id":"5c851bb85bef4b008714ef04d1fe3671","created_at":"2017-07-28T09:15:49Z","dns_nameservers":[],"updated_at":"2017-07-28T09:15:49Z","gateway_ip":"10.1.0.1","ipv6_ra_mode":null,"allocation_pools":[{"start":"10.1.0.2","end":"10.1.0.14"}],"host_routes":[],"revision_number":0,"ip_version":4,"ipv6_address_mode":null,"cidr":"10.1.0.0/28","project_id":"5c851bb85bef4b008714ef04d1fe3671","id":"be974b50-e56b-44a8-86a9-6bcc345f9d55","subnetpool_id":null,"name":""}}
2017-07-28 09:33:52.030176 | 2017-07-28 09:15:50,616 9778 INFO 
[tempest.lib.common.rest_client] Request 
(PortsTestJSON:test_create_port_in_allowed_allocation_pools): 201 POST 
http://149.202.183.40:9696/v2.0/ports 1.185s
2017-07-28 09:33:52.030232 | 2017-07-28 09:15:50,617 9778 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
2017-07-28 09:33:52.030259 | Body: {"port": {"network_id": 
"b8a3fb1c-86a4-4518-8c3a-dd12db585659"}}
2017-07-28 09:33:52.030369 | Response - Headers: {u'date': 'Fri, 28 Jul 
2017 09:15:50 GMT', u'x-openstack-request-id': 
'req-6b57ff81-c874-4e97-8183-bd57c7e8de81', u'content-type': 
'application/json', u'content-length': '691', 'content-location': 
'http://149.202.183.40:9696/v2.0/ports', 'status': '201', u'connection': 
'close'}
2017-07-28 09:33:52.030596 | Body: 

[Yahoo-eng-team] [Bug 1705012] Re: vif type unbound not supported when creating server

2017-07-28 Thread YAMAMOTO Takashi
I confirmed the symptom seen in bug 1700448 was fixed

** Changed in: networking-midonet
   Importance: Undecided => Critical

** Changed in: networking-midonet
   Status: New => Fix Released

** Changed in: networking-midonet
Milestone: None => 5.0.0

** Changed in: networking-midonet
 Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1705012

Title:
  vif type unbound not supported when creating server

Status in networking-midonet:
  Fix Released
Status in networking-vsphere:
  New
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Creating a vmware server (using stable/newton packages) fails with the
  following error logged in nova-compute.log:

  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [req-a970b485-da9e-4032-8c8a-af6f57d4d0f5 c8380cb1ad1842729061bff8d4a2b637 6fec64fa2bd947e680100e1877dba0c7 - - -] [instance: c27d61ad-6884-4881-9fdb-5201c17141f7] Instance failed to spawn
  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [instance: c27d61ad-6884-4881-9fdb-5201c17141f7] Traceback (most recent call last):
  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [instance: c27d61ad-6884-4881-9fdb-5201c17141f7]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2083, in _build_resources
  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [instance: c27d61ad-6884-4881-9fdb-5201c17141f7]     yield resources
  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [instance: c27d61ad-6884-4881-9fdb-5201c17141f7]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1924, in _build_and_run_instance
  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [instance: c27d61ad-6884-4881-9fdb-5201c17141f7]     block_device_info=block_device_info)
  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [instance: c27d61ad-6884-4881-9fdb-5201c17141f7]   File "/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/driver.py", line 316, in spawn
  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [instance: c27d61ad-6884-4881-9fdb-5201c17141f7]     admin_password, network_info, block_device_info)
  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [instance: c27d61ad-6884-4881-9fdb-5201c17141f7]   File "/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py", line 739, in spawn
  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [instance: c27d61ad-6884-4881-9fdb-5201c17141f7]     metadata)
  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [instance: c27d61ad-6884-4881-9fdb-5201c17141f7]   File "/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py", line 281, in build_virtual_machine
  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [instance: c27d61ad-6884-4881-9fdb-5201c17141f7]     network_info)
  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [instance: c27d61ad-6884-4881-9fdb-5201c17141f7]   File "/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vif.py", line 178, in get_vif_info
  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [instance: c27d61ad-6884-4881-9fdb-5201c17141f7]     is_neutron, vif))
  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [instance: c27d61ad-6884-4881-9fdb-5201c17141f7]   File "/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vif.py", line 164, in get_vif_dict
  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [instance: c27d61ad-6884-4881-9fdb-5201c17141f7]     ref = get_network_ref(session, cluster, vif, is_neutron)
  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [instance: c27d61ad-6884-4881-9fdb-5201c17141f7]   File "/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vif.py", line 153, in get_network_ref
  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [instance: c27d61ad-6884-4881-9fdb-5201c17141f7]     network_ref = _get_neutron_network(session, cluster, vif)
  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [instance: c27d61ad-6884-4881-9fdb-5201c17141f7]     raise exception.InvalidInput(reason=reason)
  2017-07-11 13:52:02.494 21616 ERROR nova.compute.manager [instance: c27d61ad-6884-4881-9fdb-5201c17141f7] InvalidInput: Invalid input received: vif type unbound not supported
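
  A minimal sketch of the failing check, reconstructed from the traceback
  above (illustrative only, not the literal nova/virt/vmwareapi/vif.py
  source): a port whose binding:vif_type is 'unbound' never completed
  port binding, so there is no backing network for the VIF and spawn is
  aborted.

      class InvalidInput(Exception):
          def __init__(self, reason):
              super(InvalidInput, self).__init__(
                  'Invalid input received: %s' % reason)

      def get_network_ref(vif):
          # 'unbound' means Neutron never completed a port binding, so
          # there is no backing network to attach the VM's vNIC to.
          if vif['type'] == 'unbound':
              raise InvalidInput(reason='vif type %s not supported'
                                        % vif['type'])
          return vif['network']['id']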

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1705012/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707222] [NEW] usage of /tmp during boot is not safe due to systemd-tmpfiles-clean

2017-07-28 Thread Scott Moser
Public bug reported:

Earlier this week on Zesty on Azure I saw a cloud-init failure in its 
'mount_cb' function.
That function essentially does:
 a.) make a tmp directory for a mount point
 b.)  mount some filesystem to that mount point
 c.) call a function
 d.) unmount the directory

What I recall was that access to a file inside the mount point failed during 
'c'.
This seems possible as systemd-tmpfiles-clean may be running at the same time 
as cloud-init (cloud-init.service in this example).


It seems that this service basically inhibits *any* other service from using 
tmp files.
Its ordering statements are only:

  After=local-fs.target time-sync.target
  Before=shutdown.target

So while in most cases only services that run early in the boot process,
like cloud-init, will be affected, any service could have its tmp files
removed. This service could take quite a long time to run if /tmp had
been filled with lots of files in the previous boot.
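
A minimal sketch of the mount_cb pattern described above (simplified,
not cloud-init's actual implementation), with the working directory
moved out of /tmp. The base path /run/cloud-init/tmp here is an
assumption; any location systemd-tmpfiles-clean does not prune avoids
the race.

    import os
    import subprocess
    import tempfile

    def mount_cb(device, callback, base_dir='/run/cloud-init/tmp'):
        # (a) make a tmp directory for a mount point, outside /tmp so
        # systemd-tmpfiles-clean cannot remove it mid-callback
        os.makedirs(base_dir, exist_ok=True)
        mount_point = tempfile.mkdtemp(dir=base_dir)
        # (b) mount some filesystem to that mount point
        subprocess.check_call(['mount', device, mount_point])
        try:
            # (c) call a function against the mounted files
            return callback(mount_point)
        finally:
            # (d) unmount and clean up the directory
            subprocess.check_call(['umount', mount_point])
            os.rmdir(mount_point)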

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Affects: cloud-init (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: systemd (Ubuntu)
 Importance: Undecided
 Status: New

** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: systemd (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1707222

Title:
  usage of /tmp during boot is not safe due to systemd-tmpfiles-clean

Status in cloud-init:
  New
Status in cloud-init package in Ubuntu:
  New
Status in systemd package in Ubuntu:
  New

Bug description:
  Earlier this week on Zesty on Azure I saw a cloud-init failure in its 
'mount_cb' function.
  That function essentially does:
   a.) make a tmp directory for a mount point
   b.)  mount some filesystem to that mount point
   c.) call a function
   d.) unmount the directory

  What I recall was that access to a file inside the mount point failed during 
'c'.
  This seems possible as systemd-tmpfiles-clean may be running at the same time 
as cloud-init (cloud-init.service in this example).

  
  It seems that this service basically inhibits *any* other service from using 
tmp files.
  Its ordering statements are only:

After=local-fs.target time-sync.target
Before=shutdown.target

  So while in most cases only services that run early in the boot
  process, like cloud-init, will be affected, any service could have its
  tmp files removed. This service could take quite a long time to run
  if /tmp had been filled with lots of files in the previous boot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1707222/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707307] [NEW] Neutron log obesity epidemic

2017-07-28 Thread Kevin Benton
Public bug reported:

From a single scenario job (http://logs.openstack.org/57/488557/1/check/gate-tempest-dsvm-neutron-dvr-multinode-scenario-ubuntu-xenial-nv/36d8e0b/logs/?C=S;O=D)

[   ]   screen-q-agt.txt.gz   2017-07-28 19:45   9.6M
[   ]   screen-q-svc.txt.gz   2017-07-28 19:45   7.1M


Our compressed log sizes are 9x (server) and 7x (agent) the size of the
next largest service's logs (Keystone). Before the release I would like
to trim down the debug messages.

** Affects: neutron
 Importance: High
 Status: New

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1707307

Title:
  Neutron log obesity epidemic

Status in neutron:
  New

Bug description:
  From a single scenario job
  (http://logs.openstack.org/57/488557/1/check/gate-tempest-dsvm-neutron-dvr-multinode-scenario-ubuntu-xenial-nv/36d8e0b/logs/?C=S;O=D)

  [   ] screen-q-agt.txt.gz   2017-07-28 19:45   9.6M
  [   ] screen-q-svc.txt.gz   2017-07-28 19:45   7.1M


  Our compressed log sizes are 9x (server) and 7x (agent) the size of
  the next largest service's logs (Keystone). Before the release I would
  like to trim down the debug messages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1707307/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1693315] Re: Unhelpful invalid bdm error in compute logs when volume creation fails during boot from volume

2017-07-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/467715
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=20c4715a49a44c642882618f102cd0fc9342978d
Submitter: Jenkins
Branch: master

commit 20c4715a49a44c642882618f102cd0fc9342978d
Author: Matt Riedemann 
Date:   Thu Jun 15 11:46:44 2017 -0400

Provide original fault message when BFV fails

When booting from volume and Nova is creating the volume,
it can fail (timeout, invalid AZ in Cinder, etc) and the
generic Exception handling in _prep_block_device will log
the original exception trace but then raise a generic
InvalidBDM exception, which is handled higher up and converted
to a BuildAbortException, which is recorded as an instance
fault, but the original error message is lost from the fault.

It would be better to include the original exception message that
triggered the failure so that goes into the fault for debug.

For example, this is a difference of getting an error like this:

  BuildAbortException: Build of instance
  9484f5a7-3198-47ff-b728-178515a26277 aborted:
  Block Device Mapping is Invalid.

To something more useful like this:

  BuildAbortException: Build of instance
  9484f5a7-3198-47ff-b728-178515a26277 aborted:
  Volume da947c97-66c6-4b7e-9ae6-54eb8128bb75 did not finish
  being created even after we waited 3 seconds or 2 attempts.
  And its status is error.

Change-Id: I20a5e8e5e10dd505c1b24c208f919c6550e9d1a4
Closes-Bug: #1693315
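
A hedged sketch of the pattern the commit describes (names simplified;
this is not the actual Nova diff): the generic exception now carries
the root cause's message instead of discarding it.

    class InvalidBDM(Exception):
        def __init__(self, message=None):
            super(InvalidBDM, self).__init__(
                message or 'Block Device Mapping is Invalid.')

    def _prep_block_device(create_volume):
        try:
            create_volume()
        except Exception as exc:
            # Before: raise InvalidBDM() -- the root cause lived only in
            # the log. After: the original message rides along into the
            # instance fault, e.g. the "did not finish being created"
            # text shown below.
            raise InvalidBDM(
                message='Block Device Mapping is Invalid: %s' % exc)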


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1693315

Title:
  Unhelpful invalid bdm error in compute logs when volume creation fails
  during boot from volume

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  Confirmed

Bug description:
  This came up in IRC while debugging a separate problem with a user.

  They are booting from volume where nova creates the volume, and were
  getting this unhelpful error message in the end:

  BuildAbortException: Build of instance
  9484f5a7-3198-47ff-b728-178515a26277 aborted: Block Device Mapping is
  Invalid.

  That's from this generic exception that is raised up:

  
https://github.com/openstack/nova/blob/81bdbd0b50aeac9a677a0cef9001081008a2c407/nova/compute/manager.py#L1595

  The actual exception in the traceback is much more specific:

  http://paste.as47869.net/p/9qbburh7z3w3toi

  2017-05-24 16:33:26.127 2331 ERROR nova.compute.manager [instance:
  9484f5a7-3198-47ff-b728-178515a26277] VolumeNotCreated: Volume
  da947c97-66c6-4b7e-9ae6-54eb8128bb75 did not finish being created even
  after we waited 3 seconds or 2 attempts. And its status is error.

  That's showing that the volume failed to be created almost
  immediately.

  It would be better to include that error message in what goes into the
  BuildAbortException which is what ultimately goes into the recorded
  instance fault:

  
https://github.com/openstack/nova/blob/81bdbd0b50aeac9a677a0cef9001081008a2c407/nova/compute/manager.py#L1878

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1693315/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707168] Re: [placement] resource provider trait-related query creates unicode warning

2017-07-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/488363
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=5728a575c77bc74af8df5e5d8ef22dba0eed1677
Submitter: Jenkins
Branch: master

commit 5728a575c77bc74af8df5e5d8ef22dba0eed1677
Author: Chris Dent 
Date:   Fri Jul 28 11:25:13 2017 +0100

[placement] quash unicode warning with shared provider

trait.name is expected to be unicode and sqlalchemy will warn when it
doesn't get that. The os_traits library creates default quoted strings
for its symbols, so it needs a six.text_type wrapper to shut the warning
up.

Closes-Bug: #1707168
Change-Id: Id9d859830d584d650ea748c8c5274156a30fd773
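
A minimal sketch of the shape of this fix (assumed, not the exact
diff): wrap the os_traits constant so the Unicode-typed column receives
a unicode bind parameter on Python 2.

    import six
    import os_traits

    # Without the wrapper, os_traits.MISC_SHARES_VIA_AGGREGATE is a
    # plain str on Python 2 and SQLAlchemy emits the SAWarning quoted
    # in the bug description below.
    trait_name = six.text_type(os_traits.MISC_SHARES_VIA_AGGREGATE)
    # e.g. query.filter(Trait.name == trait_name) no longer warns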


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1707168

Title:
  [placement] resource provider trait-related query creates unicode
  warning

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Running queries for shared providers creates the following warning:

  
  /home/cdent/src/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:340: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade
    self._legacy_facade = LegacyEngineFacade(None, _factory=self)
  /home/cdent/src/nova/.tox/functional/local/lib/python2.7/site-packages/sqlalchemy/sql/sqltypes.py:219: SAWarning: Unicode type received non-unicode bind param value 'MISC_SHARES_VIA_AGGREGATE'. (this warning may be suppressed after 10 occurrences)
    (util.ellipses_string(value),))

  This is annoying when trying to evaluate test logs. It's noise.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1707168/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585193] Re: walinuxagent not found on Centos when cloud-init is started

2017-07-28 Thread Joshua Powers
Marking invalid per comment above.

** Changed in: cloud-init
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1585193

Title:
  walinuxagent not found on Centos when cloud-init is started

Status in cloud-init:
  Invalid

Bug description:
  Hello,

  I'm trying to use cloud-init on CentOS and Azure, but the Azure agent
  has a different name there (waagent instead of walinuxagent) than on
  Ubuntu, and the Ubuntu name is hardcoded in the source code. Do you
  plan to add a condition for the RHEL distro family?

  Here is the python stack :

  May 24 14:22:50 test cloud-init: 2016-05-24 14:22:50,899 - util.py[DEBUG]: agent command '['service', 'walinuxagent', 'start']' failed.
  May 24 14:22:50 test cloud-init: Traceback (most recent call last):
  May 24 14:22:50 test cloud-init: File "/usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceAzure.py", line 166, in get_data
  May 24 14:22:50 test cloud-init: invoke_agent(mycfg['agent_command'])
  May 24 14:22:50 test cloud-init: File "/usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceAzure.py", line 400, in invoke_agent
  May 24 14:22:50 test cloud-init: util.subp(cmd, shell=(not isinstance(cmd, list)))
  May 24 14:22:51 test cloud-init: File "/usr/lib/python2.7/site-packages/cloudinit/util.py", line 1539, in subp
  May 24 14:22:51 test cloud-init: cmd=args)
  May 24 14:22:51 test cloud-init: ProcessExecutionError: Unexpected error while running command.
  May 24 14:22:51 test cloud-init: Command: ['service', 'walinuxagent', 'start']
  May 24 14:22:51 test cloud-init: Exit code: 6
  May 24 14:22:51 test cloud-init: Reason: -
  May 24 14:22:51 test cloud-init: Stdout: ''
  May 24 14:22:51 test cloud-init: Stderr: 'Redirecting to /bin/systemctl start walinuxagent.service\nFailed to start walinuxagent.service: Unit walinuxagent.service failed to load: No such file or directory.\n'
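
  A sketch of the kind of distro check being requested (illustrative
  only; the traceback shows the command comes from the datasource
  configuration, mycfg['agent_command'], so overriding it there is an
  alternative):

      import platform

      def azure_agent_command():
          # platform.linux_distribution() existed in the Python 2.7 era
          # this report targets; modern code would use the distro module.
          name = platform.linux_distribution()[0].lower()
          if 'centos' in name or 'red hat' in name:
              service = 'waagent'
          else:
              service = 'walinuxagent'
          return ['service', service, 'start']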

  Thanks.

  Best,
  Antoine Rouaze

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1585193/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1551747] Re: ubuntu-fan causes issues during network configuration

2017-07-28 Thread Joshua Powers
** Changed in: cloud-init
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1551747

Title:
  ubuntu-fan causes issues during network configuration

Status in cloud-init:
  Invalid
Status in Snappy:
  Confirmed
Status in ubuntu-fan package in Ubuntu:
  Fix Released
Status in ubuntu-fan source package in Xenial:
  Fix Released
Status in ubuntu-fan source package in Yakkety:
  Fix Released

Bug description:
  it seems that ubuntu-fan is causing issues with network configuration.

  On 16.04 daily image:

  root@localhost:~# snappy list
  NameDate   Version  Developer
  canonical-pi2   2016-02-02 3.0  canonical
  canonical-pi2-linux 2016-02-03 4.3.0-1006-3 canonical
  ubuntu-core 2016-02-22 16.04.0-10.armhf canonical

  I see this when I'm activating a wifi card on a raspberry pi 2.

  root@localhost:~# ifdown wlan0
  ifdown: interface wlan0 not configured
  root@localhost:~# ifup wlan0
  Internet Systems Consortium DHCP Client 4.3.3
  Copyright 2004-2015 Internet Systems Consortium.
  All rights reserved.
  For info, please visit https://www.isc.org/software/dhcp/

  Listening on LPF/wlan0/c4:e9:84:17:31:9b
  Sending on   LPF/wlan0/c4:e9:84:17:31:9b
  Sending on   Socket/fallback
  DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 3 (xid=0x81c0c95e)
  DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 5 (xid=0x81c0c95e)
  DHCPREQUEST of 192.168.0.170 on wlan0 to 255.255.255.255 port 67 
(xid=0x5ec9c081)
  DHCPOFFER of 192.168.0.170 from 192.168.0.251
  DHCPACK of 192.168.0.170 from 192.168.0.251
  RTNETLINK answers: File exists
  bound to 192.168.0.170 -- renewal in 17145 seconds.
  run-parts: /etc/network/if-up.d/ubuntu-fan exited with return code 1
  Failed to bring up wlan0.

  ===
  [Impact]

  Installing ubuntu-fan can trigger error messages when initialising
  with no fan configuration.

  [Test Case]

  As above.

  [Regression Potential]

  Low; suppresses erroneous error messages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1551747/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1635160] Re: No bootable device when evacuate a instance on shared_storage_storage ceph

2017-07-28 Thread Sean Dague
*** This bug is a duplicate of bug 1562681 ***
https://bugs.launchpad.net/bugs/1562681

** Tags added: evacuate

** This bug has been marked a duplicate of bug 1562681
   Post instance evacuation, image metadata is not retained when using shared 
storage

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1635160

Title:
  No bootable device when evacuate a instance on shared_storage_storage
  ceph

Status in OpenStack Compute (nova):
  New

Bug description:
  Nova Version: nova-kilo-2015.1.1
  Ceph Version: ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)

  When I tested the nova evacuate function, I found that after the
  instance was evacuated it could not boot normally.
  In the VNC console I see the "No bootable device" message.

  Through some tests I found the reason: when shared storage is used,
  the rebuild task flow does not fetch the image metadata again. So if
  you set metadata on the image, the problem occurs.

  The code:
   nova/compute/manager.py

    @object_compat
    @messaging.expected_exceptions(exception.PreserveEphemeralNotSupported)
    @wrap_exception()
    @reverts_task_state
    @wrap_instance_event
    @wrap_instance_fault
    def rebuild_instance(self, context, instance, orig_image_ref, image_ref,
                         injected_files, new_pass, orig_sys_metadata,
                         bdms, recreate, on_shared_storage,
                         preserve_ephemeral=False):
        ..

        if on_shared_storage != self.driver.instance_on_disk(instance):
            raise exception.InvalidSharedStorage(
                _("Invalid state of instance files on shared storage"))

        if on_shared_storage:
            LOG.info(_LI('disk on shared storage, recreating using'
                         ' existing disk'))
        else:
            image_ref = orig_image_ref = instance.image_ref
            LOG.info(_LI("disk not on shared storage, rebuilding from:"
                         " '%s'"), str(image_ref))

        # NOTE(mriedem): On a recreate (evacuate), we need to update
        # the instance's host and node properties to reflect it's
        # destination node for the recreate.
        node_name = None
        try:
            compute_node = self._get_compute_info(context, self.host)
            node_name = compute_node.hypervisor_hostname
        except exception.ComputeHostNotFound:
            LOG.exception(_LE('Failed to get compute_info for %s'), self.host)
        finally:
            instance.host = self.host
            instance.node = node_name
            instance.save()

        if image_ref:
            image_meta = self.image_api.get(context, image_ref)
        else:
            image_meta = {}
        ..
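
  A hedged sketch of the fix direction this implies (the change that
  actually landed is tracked under duplicate bug 1562681): fall back to
  the image metadata stored with the instance instead of an empty dict,
  so properties such as hw_disk_bus and hw_scsi_model survive an
  evacuation.

      def image_meta_for_rebuild(context, image_api, instance, image_ref):
          if image_ref:
              return image_api.get(context, image_ref)
          # Assumed helper: nova.utils.get_image_from_system_metadata
          # rebuilds the image properties saved at boot time.
          from nova import utils
          return utils.get_image_from_system_metadata(
              instance.system_metadata)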

  Below is my image info:
  +------------------------------+--------------------------------------+
  | Property                     | Value                                |
  +------------------------------+--------------------------------------+
  | OS-EXT-IMG-SIZE:size         | 53687091200                          |
  | created                      | 2016-09-20T08:15:21Z                 |
  | id                           | 8b218b4d-74ff-44af-bc4c-c37fb1106b03 |
  | metadata hw_disk_bus         | scsi                                 |
  | metadata hw_qemu_guest_agent | yes                                  |
  | metadata hw_scsi_model       | virtio-scsi                          |
  | minDisk                      | 0                                    |
  | minRam                       | 0                                    |
  | name                         | zptest-20160920                      |
  | progress                     | 100                                  |
  | status                       | ACTIVE                               |
  | updated                      | 2016-10-20T07:38:54Z                 |
  +------------------------------+--------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1635160/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707284] [NEW] Extend attached volume fails with "VolumePathsNotFound: Could not find any paths for the volume." in os-brick iscsi connector

2017-07-28 Thread Matt Riedemann
Public bug reported:

Seen here:

http://logs.openstack.org/78/480778/2/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/10b/logs/screen-n-cpu.txt.gz?level=TRACE#_Jul_27_07_06_43_460444

Jul 27 07:06:43.460444 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server [req-3e8818f0-952a-4242-a294-0319bae0121a req-b68879c5-8886-4a58-ae10-7a3804868df7 service nova] Exception during message handling: VolumePathsNotFound: Could not find any paths for the volume.
Jul 27 07:06:43.460651 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Jul 27 07:06:43.460831 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 160, in _process_incoming
Jul 27 07:06:43.461013 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Jul 27 07:06:43.461314 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch
Jul 27 07:06:43.461483 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Jul 27 07:06:43.461653 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _do_dispatch
Jul 27 07:06:43.461816 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Jul 27 07:06:43.461989 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/exception_wrapper.py", line 76, in wrapped
Jul 27 07:06:43.462153 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server     function_name, call_dict, binary)
Jul 27 07:06:43.462324 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
Jul 27 07:06:43.462484 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server     self.force_reraise()
Jul 27 07:06:43.462653 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
Jul 27 07:06:43.462818 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
Jul 27 07:06:43.462988 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/exception_wrapper.py", line 67, in wrapped
Jul 27 07:06:43.463151 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Jul 27 07:06:43.463373 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/compute/manager.py", line 6907, in external_instance_event
Jul 27 07:06:43.463543 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server     self.extend_volume(context, instance, event.tag)
Jul 27 07:06:43.463800 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/compute/utils.py", line 864, in decorated_function
Jul 27 07:06:43.463993 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Jul 27 07:06:43.464156 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/compute/manager.py", line 211, in decorated_function
Jul 27 07:06:43.464316 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server     kwargs['instance'], e, sys.exc_info())
Jul 27 07:06:43.464476 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
Jul 27 07:06:43.464633 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server     self.force_reraise()
Jul 27 07:06:43.464793 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
Jul 27 07:06:43.464963 ubuntu-xenial-citycloud-la1-10111613 nova-compute[18654]: ERROR

[Yahoo-eng-team] [Bug 1707246] Re: Configuration guide references configuration options for policy instead of sample policy file

2017-07-28 Thread Lance Bragstad
** Also affects: oslo.policy
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1707246

Title:
  Configuration guide references configuration options for policy
  instead of sample policy file

Status in OpenStack Identity (keystone):
  In Progress
Status in oslo.policy:
  New

Bug description:
  The configuration guide document should contain all information for
  configuration options, as well as sample policy files. Keystone's
  configuration section uses the wrong directive, which results in the
  configuration options being rendered where the sample policy file
  should be:

  https://docs.openstack.org/keystone/latest/configuration/policy.html

  We should correct this so that the policy section of the configuration
  guide references policy and not configuration options.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1707246/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1625105] Re: During rebuild/spawn instance stopped compute, the instances set error_state

2017-07-28 Thread Sean Dague
There definitely could be enhancements to the recovery mode when things
start up again, but this is more than a simple bug, and probably needs a
spec to work through all the edge conditions here.

The Nova spec process is here - https://specs.openstack.org/openstack/nova-specs/

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1625105

Title:
  During rebuild/spawn instance stopped compute, the instances set
  error_state

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Error cases:
  1) If we create a new instance and stop nova-compute while it is
  spawning, then nova-compute sets this instance to an error state at
  startup.
  2) If we start rebuilding an instance and stop nova-compute before the
  rebuild completes, then nova-compute sets this instance to an error
  state at startup.

  We should restart (or resume) the interrupted process when
  nova-compute starts, instead of setting the related instances to an
  error state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1625105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649845] Re: Interface drivers don't update port MTU if the port already exists

2017-07-28 Thread Sean Dague
This is apparently fixed in os-vif for Newton and beyond. Marking
Invalid on the Nova side because the logic doesn't live in Nova in any
supported version.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649845

Title:
  Interface drivers don't update port MTU if the port already exists

Status in networking-midonet:
  New
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in os-vif:
  Fix Released

Bug description:
  This is needed because Neutron allows MTU values for networks to be
  changed (through configuration option modifications and a neutron-
  server restart). Without that, there is no way to apply a new MTU to
  DHCP and router ports without migrating resources to other nodes.

  I suggest we apply the MTU on subsequent plug() attempts, even if the
  port exists.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1649845/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707257] [NEW] Admin can't modify flavor access

2017-07-28 Thread Ivan Kolodyazhny
Public bug reported:

Steps to reproduce:
1. Create public flavor using UI or API
2. Change flavor access to admin project
3. Login with user credentials
4. Check that flavor is unavailable for user while creating instance

Expected result:
The flavor is absent.

Actual result:
The flavor is present.
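
A hedged API-level sketch of the reproduction with python-novaclient
(names are examples, and 'nova' is assumed to be an authenticated v2
Client). Note the Compute API only honours access lists for non-public
flavors, so the flavor may need is_public=False for step 2 to have any
effect.

    def restrict_flavor(nova, admin_project_id):
        # Create a non-public flavor and grant access only to the
        # admin project; users in other projects should then not see
        # it in flavor listings or the launch-instance dialog.
        flavor = nova.flavors.create('restricted-flavor', ram=512,
                                     vcpus=1, disk=1, is_public=False)
        nova.flavor_access.add_tenant_access(flavor, admin_project_id)
        return flavor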

** Affects: horizon
 Importance: Undecided
 Assignee: Ivan Kolodyazhny (e0ne)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1707257

Title:
  Admin can't modify flavor access

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Steps to reproduce:
  1. Create public flavor using UI or API
  2. Change flavor access to admin project
  3. Login with user credentials
  4. Check that flavor is unavailable for user while creating instance

  Expected result:
  The flavor is absent.

  Actual result:
  The flavor is present.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1707257/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707256] [NEW] Scheduler report client does not account for shared resource providers

2017-07-28 Thread Matt Riedemann
Public bug reported:

There are a few places in the scheduler report client that don't account
for shared resource providers, like a shared storage pool.

1.
https://github.com/openstack/nova/blob/09f0795fe0f5d043593f5ae55a6ec5f6298ba5ba/nova/scheduler/client/report.py#L921

That's used in _allocate_for_instance when it compares the current
allocations for the instance vs the allocations that the compute node
thinks it has:

https://github.com/openstack/nova/blob/09f0795fe0f5d043593f5ae55a6ec5f6298ba5ba/nova/scheduler/client/report.py#L929

If those are different, the compute node allocations are going to
overwrite the current allocations for the instance, which could include
a shared storage allocation created by the scheduler. This is
particularly bad since it happens during the update_available_resource
periodic task that happens in the compute service / resource tracker.

2.
https://github.com/openstack/nova/blob/09f0795fe0f5d043593f5ae55a6ec5f6298ba5ba/nova/scheduler/client/report.py#L1024

This is related to #1 and called from the same code in #1, the
_allocate_for_instance method, which is the one comparing the current
allocations for the instance to the ones that the compute node thinks it
needs, and which doesn't account for shared resource providers.
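
A hedged toy illustration of the failure mode (plain dicts, not the
report client's actual data structures):

    # Allocations placement currently holds for the instance, including
    # one the scheduler wrote against a shared storage provider.
    current = {
        'compute-node-rp': {'VCPU': 1, 'MEMORY_MB': 512},
        'shared-storage-rp': {'DISK_GB': 10},
    }
    # What the compute node alone thinks the instance consumes.
    from_compute_node = {
        'compute-node-rp': {'VCPU': 1, 'MEMORY_MB': 512, 'DISK_GB': 10},
    }
    if current != from_compute_node:
        # The periodic task would PUT from_compute_node, silently
        # dropping the shared-storage-rp allocation made by the
        # scheduler.
        allocations_to_put = from_compute_node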

** Affects: nova
 Importance: High
 Status: Confirmed


** Tags: compute pike-rc-potential placement resource-tracker

** Summary changed:

- Scheduler report client is not account for shared resource providers
+ Scheduler report client does not account for shared resource providers

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1707256

Title:
  Scheduler report client does not account for shared resource providers

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  There are a few places in the scheduler report client that don't
  account for shared resource providers, like a shared storage pool.

  1.
  
https://github.com/openstack/nova/blob/09f0795fe0f5d043593f5ae55a6ec5f6298ba5ba/nova/scheduler/client/report.py#L921

  That's used in _allocate_for_instance when it compares the current
  allocations for the instance vs the allocations that the compute node
  thinks it has:

  
https://github.com/openstack/nova/blob/09f0795fe0f5d043593f5ae55a6ec5f6298ba5ba/nova/scheduler/client/report.py#L929

  If those are different, the compute node allocations are going to
  overwrite the current allocations for the instance, which could
  include a shared storage allocation created by the scheduler. This is
  particularly bad since it happens during the update_available_resource
  periodic task that happens in the compute service / resource tracker.

  2.
  
https://github.com/openstack/nova/blob/09f0795fe0f5d043593f5ae55a6ec5f6298ba5ba/nova/scheduler/client/report.py#L1024

  This is related to #1 and called from the same code in #1, the
  _allocate_for_instance method, which is the one comparing the current
  allocations for the instance to the ones that the compute node thinks
  it needs, and which doesn't account for shared resource providers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1707256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707319] [NEW] Security group doesn't apply to existing port

2017-07-28 Thread hongbin
Public bug reported:

Description
===
Create an instance with an existing port and a security group. The security 
group is ignored. The port's security group is not updated. Steps to reproduce:

Steps to reproduce
==
$ source /opt/stack/devstack/openrc demo demo
$ openstack port create --network private vm-port
$ PORT_ID=$(openstack port show vm-port | awk '/ id /{print $4}')
$ openstack security group create vm-sg
$ SG_ID=$(openstack security group show vm-sg | awk '/ id /{print $4}')
$ openstack server create --flavor m1.tiny --nic port-id=$PORT_ID 
--security-group $SG_ID --image cirros-0.3.5-x86_64-disk vm
$ openstack server show vm -c security_groups
+-++
| Field   | Value  |
+-++
| security_groups | name='default' |
+-++

Expected result
===
I expect Nova to update the port's security group. For example, the security 
group should be updated as name='vm-sg' instead of name='default'.

Actual result
=
The specified security group is ignored. The port's security group is not
updated (it stays as 'default').
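
A possible workaround sketch with openstacksdk (the cloud name and
resource lookups here are assumptions): update the pre-created port's
security groups directly instead of relying on server create.

    import openstack

    conn = openstack.connect(cloud='devstack')  # assumed clouds.yaml entry
    port = conn.network.find_port('vm-port')
    sg = conn.network.find_security_group('vm-sg')
    # Replaces the port's security group list (here: just vm-sg).
    conn.network.update_port(port, security_group_ids=[sg.id])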

Environment
===
$ git log -1
commit 2fbac08c0686e92aaee65f24bf2958db6a451046
Author: Stephen Finucane 
Date:   Mon Jun 26 11:14:55 2017 +0100

Add missing microversion documentation

Part of blueprint placement-project-user

Change-Id: I9d77649e7e02f0ace5546e42e04122162ec5661f

hypervisor: Libvirt + KVM

Networking type: Neutron

** Affects: nova
 Importance: Undecided
 Assignee: hongbin (hongbin034)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => hongbin (hongbin034)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1707319

Title:
  Security group doesn't apply to existing port

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Create an instance with an existing port and a security group. The security 
group is ignored. The port's security group is not updated. Steps to reproduce:

  Steps to reproduce
  ==
  $ source /opt/stack/devstack/openrc demo demo
  $ openstack port create --network private vm-port
  $ PORT_ID=$(openstack port show vm-port | awk '/ id /{print $4}')
  $ openstack security group create vm-sg
  $ SG_ID=$(openstack security group show vm-sg | awk '/ id /{print $4}')
  $ openstack server create --flavor m1.tiny --nic port-id=$PORT_ID 
--security-group $SG_ID --image cirros-0.3.5-x86_64-disk vm
  $ openstack server show vm -c security_groups
  +-++
  | Field   | Value  |
  +-++
  | security_groups | name='default' |
  +-++

  Expected result
  ===
  I expect Nova to update the port's security group. For example, the security 
group should be updated as name='vm-sg' instead of name='default'.

  Actual result
  =
  The specified security group is ignored. The port's security group is
  not updated (it stays as 'default').

  Environment
  ===
  $ git log -1
  commit 2fbac08c0686e92aaee65f24bf2958db6a451046
  Author: Stephen Finucane 
  Date:   Mon Jun 26 11:14:55 2017 +0100

  Add missing microversion documentation

  Part of blueprint placement-project-user

  Change-Id: I9d77649e7e02f0ace5546e42e04122162ec5661f

  hypervisor: Libvirt + KVM

  Networking type: Neutron

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1707319/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1706083] Re: Post-migration, Cinder volumes lose disk cache value, resulting in I/O latency

2017-07-28 Thread Matt Riedemann
** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova/newton
   Importance: Undecided => Medium

** Changed in: nova/ocata
   Importance: Undecided => Medium

** Changed in: nova/ocata
   Status: New => Confirmed

** Changed in: nova/newton
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1706083

Title:
  Post-migration, Cinder volumes lose disk cache value, resulting in I/O
  latency

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Confirmed
Status in OpenStack Compute (nova) ocata series:
  Confirmed

Bug description:
  Description
  ===

  [This was initially reported by a Red Hat OSP customer.]

  The I/O latency of a Cinder volume increases significantly after live
  migration of the instance to which it is attached. It stays increased
  until the VM is stopped and started again. [The VM is booted from the
  Cinder volume.]

  This is not the case when using a disk from a Nova store backend
  [without a Cinder volume] -- or at least the difference isn't as
  significant after a live migration.

  The storage backend is Ceph 2.0.

  
  How reproducible: Consistently

  
  Steps to Reproduce
  ==

  (0) Both the Nova instances and Cinder volumes are located on Ceph

  (1) Create a Nova instance with a Cinder volume attached to it

  (2) Live migrate it to a target Compute node

  (3) Run `ioping` (`ioping -c 10 .`) on the Cinder volume.
  Alternatively, run other I/O benchmarks like using `fio` with
  'direct=1' (which uses non-bufferred I/O) as a good sanity check to
  get a second opinion regarding latency.

  
  Actual result
  =

  Before live migration: `ioping` output on the Cinder volume attached to a Nova
  instance:

  [guest]$ sudo ioping -c 10 .
  4 KiB <<< . (xfs /dev/sda1): request=1 time=98.0 us (warmup)
  4 KiB <<< . (xfs /dev/sda1): request=2 time=135.6 us
  4 KiB <<< . (xfs /dev/sda1): request=3 time=155.5 us
  4 KiB <<< . (xfs /dev/sda1): request=4 time=161.7 us
  4 KiB <<< . (xfs /dev/sda1): request=5 time=148.4 us
  4 KiB <<< . (xfs /dev/sda1): request=6 time=354.3 us
  4 KiB <<< . (xfs /dev/sda1): request=7 time=138.0 us (fast)
  4 KiB <<< . (xfs /dev/sda1): request=8 time=150.7 us
  4 KiB <<< . (xfs /dev/sda1): request=9 time=149.6 us
  4 KiB <<< . (xfs /dev/sda1): request=10 time=138.6 us (fast)
  
  --- . (xfs /dev/sda1) ioping statistics ---
  9 requests completed in 1.53 ms, 36 KiB read, 5.87 k iops, 22.9 MiB/s
  generated 10 requests in 9.00 s, 40 KiB, 1 iops, 4.44 KiB/s
  min/avg/max/mdev = 135.6 us / 170.3 us / 354.3 us / 65.6 us

  
  After live migration, `ioping` output on the Cinder volume:

  [guest]$ sudo ioping -c 10 .
  4 KiB <<< . (xfs /dev/sda1): request=1 time=1.03 ms (warmup)
  4 KiB <<< . (xfs /dev/sda1): request=2 time=948.6 us
  4 KiB <<< . (xfs /dev/sda1): request=3 time=955.7 us
  4 KiB <<< . (xfs /dev/sda1): request=4 time=920.5 us
  4 KiB <<< . (xfs /dev/sda1): request=5 time=1.03 ms
  4 KiB <<< . (xfs /dev/sda1): request=6 time=838.2 us
  4 KiB <<< . (xfs /dev/sda1): request=7 time=1.13 ms (slow)
  4 KiB <<< . (xfs /dev/sda1): request=8 time=868.6 us
  4 KiB <<< . (xfs /dev/sda1): request=9 time=985.2 us
  4 KiB <<< . (xfs /dev/sda1): request=10 time=936.6 us
  
  --- . (xfs /dev/sda1) ioping statistics ---
  9 requests completed in 8.61 ms, 36 KiB read, 1.04 k iops, 4.08 MiB/s
  generated 10 requests in 9.00 s, 40 KiB, 1 iops, 4.44 KiB/s
  min/avg/max/mdev = 838.2 us / 956.9 us / 1.13 ms / 81.0 us

  This goes back to an average of 200us again after shutting down and
  starting up the instance. 

  
  Expected result
  ===

  No I/O latency experienced on Cinder volumes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1706083/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707339] [NEW] test_trunk_subport_lifecycle fails on subport down timeout

2017-07-28 Thread Armando Migliaccio
Public bug reported:

subport doesn't transition to DOWN state after trunk deletion


logstash query: 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=build_name%3Agate-tempest-dsvm-neutron-dvr-multinode-scenario-ubuntu-xenial-nv%20AND%20build_branch%3Amaster%20AND%20message%3A%5C%22Timed%20out%20waiting%20for%20subport%20%5C%22%20AND%20message%3A%5C%22to%20transition%20to%20DOWN%5C%22%20AND%20tags%3Aconsole

A failed run:

http://logs.openstack.org/76/467976/22/check/gate-tempest-dsvm-neutron-dvr-multinode-scenario-ubuntu-xenial-nv/e11aeaf/logs/testr_results.html.gz

The agent log is filled with trace:

Error while processing VIF ports: OVSFWPortNotFound: Port
526e3ca9-9af3-4b94-8550-90a5bdc9b4e7 is not managed by this agent.

http://logs.openstack.org/76/467976/22/check/gate-tempest-dsvm-neutron-dvr-multinode-scenario-ubuntu-xenial-nv/e11aeaf/logs/screen-q-agt.txt.gz?level=TRACE

Incidentally, this port is a parent of a trunk port.

Now the trunk's OVSDB handler on the agent side relies on the OVS agent
main loop to detect the port removal and let it notify the server to
mark the logical port down. I wonder if these exceptions prevent that
from happening.

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: trunk

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => High

** Tags added: tru

** Tags removed: tru
** Tags added: trunk

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1707339

Title:
  test_trunk_subport_lifecycle fails on subport down timeout

Status in neutron:
  Confirmed

Bug description:
  subport doesn't transition to DOWN state after trunk deletion

  
  logstash query: 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=build_name%3Agate-tempest-dsvm-neutron-dvr-multinode-scenario-ubuntu-xenial-nv%20AND%20build_branch%3Amaster%20AND%20message%3A%5C%22Timed%20out%20waiting%20for%20subport%20%5C%22%20AND%20message%3A%5C%22to%20transition%20to%20DOWN%5C%22%20AND%20tags%3Aconsole

  A failed run:

  http://logs.openstack.org/76/467976/22/check/gate-tempest-dsvm-neutron-dvr-multinode-scenario-ubuntu-xenial-nv/e11aeaf/logs/testr_results.html.gz

  The agent log is filled with trace:

  Error while processing VIF ports: OVSFWPortNotFound: Port
  526e3ca9-9af3-4b94-8550-90a5bdc9b4e7 is not managed by this agent.

  http://logs.openstack.org/76/467976/22/check/gate-tempest-dsvm-neutron-dvr-multinode-scenario-ubuntu-xenial-nv/e11aeaf/logs/screen-q-agt.txt.gz?level=TRACE

  Incidentally, this port is a parent of a trunk port.

  Now the trunk's OVSDB handler on the agent side relies on the OVS
  agent main loop to detect the port removal and let it notify the
  server to mark the logical port down. I wonder if these exceptions
  prevent that from happening.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1707339/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp