[Yahoo-eng-team] [Bug 1730271] Re: test_two_sec_groups failure

2017-11-08 Thread YAMAMOTO Takashi
** Changed in: networking-midonet
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1730271

Title:
  test_two_sec_groups failure

Status in networking-midonet:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  eg. http://logs.openstack.org/24/517524/3/gate/legacy-tempest-dsvm-
  networking-midonet-aio-ml2/538c357/logs/testr_results.html.gz

  ft1.4: 
neutron.tests.tempest.scenario.test_security_groups.NetworkDefaultSecGroupTest.test_two_sec_groups[id-3d73ec1a-2ec6-45a9-b0f8-04a283d9d964]_StringException:
 pythonlogging:'': {{{
  2017-11-06 02:36:21,418 20994 INFO [tempest.lib.common.rest_client] 
Request (NetworkDefaultSecGroupTest:test_two_sec_groups): 201 POST 
http://15.184.68.89:9696/v2.0/security-groups 0.316s
  2017-11-06 02:36:21,418 20994 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'X-Auth-Token': 
'', 'Accept': 'application/json'}
  Body: {"security_group": {"name": "ssh_secgrp"}}
  Response - Headers: {u'x-openstack-request-id': 
'req-a84466f5-d1db-4b8e-99e3-d1e078d2299d', 'content-location': 
'http://15.184.68.89:9696/v2.0/security-groups', u'content-length': '1342', 
u'date': 'Mon, 06 Nov 2017 02:36:21 GMT', u'content-type': 'application/json', 
'status': '201', u'connection': 'close'}
  Body: {"security_group": {"description": "", "tags": [], "tenant_id": 
"2ed61f69d3d54a9980201640fbc24456", "created_at": "2017-11-06T02:36:21Z", 
"updated_at": "2017-11-06T02:36:21Z", "security_group_rules": [{"direction": 
"egress", "protocol": null, "description": null, "tags": [], "port_range_max": 
null, "updated_at": "2017-11-06T02:36:21Z", "revision_number": 0, "id": 
"39a7f84c-d3c4-4d57-accc-d7732d17abb1", "remote_group_id": null, 
"remote_ip_prefix": null, "created_at": "2017-11-06T02:36:21Z", 
"security_group_id": "95395f8b-5099-4c5a-8c13-1bbafbb60c65", "tenant_id": 
"2ed61f69d3d54a9980201640fbc24456", "port_range_min": null, "ethertype": 
"IPv6", "project_id": "2ed61f69d3d54a9980201640fbc24456"}, {"direction": 
"egress", "protocol": null, "description": null, "tags": [], "port_range_max": 
null, "updated_at": "2017-11-06T02:36:21Z", "revision_number": 0, "id": 
"9ea97847-8896-40ea-a72f-b3c0d35e9111", "remote_group_id": null, 
"remote_ip_prefix": null, "created_at": "2017-11-06T02:36:21Z", 
"security_group_id": "95395f8b-5099-4c5a-8c13-1bbafbb60c65", "tenant_id": 
"2ed61f69d3d54a9980201640fbc24456", "port_range_min": null, "ethertype": 
"IPv4", "project_id": "2ed61f69d3d54a9980201640fbc24456"}], "revision_number": 
2, "project_id": "2ed61f69d3d54a9980201640fbc24456", "id": 
"95395f8b-5099-4c5a-8c13-1bbafbb60c65", "name": "ssh_secgrp"}}
  2017-11-06 02:36:21,657 20994 INFO [tempest.lib.common.rest_client] 
Request (NetworkDefaultSecGroupTest:test_two_sec_groups): 201 POST 
http://15.184.68.89:9696/v2.0/security-group-rules 0.237s
  2017-11-06 02:36:21,657 20994 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'X-Auth-Token': 
'', 'Accept': 'application/json'}
  Body: {"security_group_rule": {"port_range_max": 22, "direction": 
"ingress", "port_range_min": 22, "remote_ip_prefix": "0.0.0.0/0", 
"security_group_id": "95395f8b-5099-4c5a-8c13-1bbafbb60c65", "protocol": "tcp"}}
  Response - Headers: {u'x-openstack-request-id': 
'req-96b4e7f7-d605-4415-9a2f-5a724ec440e5', 'content-location': 
'http://15.184.68.89:9696/v2.0/security-group-rules', u'content-length': '514', 
u'date': 'Mon, 06 Nov 2017 02:36:21 GMT', u'content-type': 'application/json', 
'status': '201', u'connection': 'close'}
  Body: {"security_group_rule": {"remote_group_id": null, "direction": 
"ingress", "protocol": "tcp", "description": "", "ethertype": "IPv4", 
"remote_ip_prefix": "0.0.0.0/0", "port_range_max": 22, "updated_at": 
"2017-11-06T02:36:21Z", "security_group_id": 
"95395f8b-5099-4c5a-8c13-1bbafbb60c65", "port_range_min": 22, 
"revision_number": 0, "tenant_id": "2ed61f69d3d54a9980201640fbc24456", 
"created_at": "2017-11-06T02:36:21Z", "project_id": 
"2ed61f69d3d54a9980201640fbc24456", "id": 
"9ea35584-5a38-4941-a70a-9a145185e0a6"}}
  2017-11-06 02:36:21,942 20994 INFO [tempest.lib.common.rest_client] 
Request (NetworkDefaultSecGroupTest:test_two_sec_groups): 201 POST 
http://15.184.68.89:9696/v2.0/security-groups 0.283s
  2017-11-06 02:36:21,943 20994 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'X-Auth-Token': 
'', 'Accept': 'application/json'}
  Body: {"security_group": {"name": "icmp_secgrp"}}
  Response - Headers: {u'x-openstack-request-id': 
'req-495a344b-8161-4426-95f6-b6340536d80c', 'content-location': 
'http://15.184.68.89:9696/v2.0/security-groups', u'content-length': '1343', 
u'date': 'Mon, 06 Nov 2017 02:36:21 GMT', u'content-type': 

[Yahoo-eng-team] [Bug 1730271] Re: test_two_sec_groups failure

2017-11-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/517855
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=83e73e0e6f8e3c7a0bd438f8bebeb0ea08e53216
Submitter: Zuul
Branch:master

commit 83e73e0e6f8e3c7a0bd438f8bebeb0ea08e53216
Author: YAMAMOTO Takashi 
Date:   Mon Nov 6 13:22:52 2017 +0900

test_security_groups: Randomize SG names

To avoid conflicts with concurrent tests.

Closes-Bug: #1730271
Change-Id: Id33dc37118f57a0dbbfbe5ee71d1c7a48eefeb3e
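As a rough illustration of the approach the commit message describes, the sketch below assumes tempest's data_utils.rand_name() helper and a Neutron security-groups client exposing create_security_group(name=...); it is not the actual patch.

```python
# Hedged sketch: random suffixes keep concurrently running test workers from
# racing on the same fixed names ('ssh_secgrp', 'icmp_secgrp').
from tempest.lib.common.utils import data_utils


def create_test_secgroups(client):
    ssh_name = data_utils.rand_name('ssh_secgrp')    # e.g. 'ssh_secgrp-1029384756'
    icmp_name = data_utils.rand_name('icmp_secgrp')  # e.g. 'icmp_secgrp-9876543210'
    ssh_secgrp = client.create_security_group(name=ssh_name)
    icmp_secgrp = client.create_security_group(name=icmp_name)
    return ssh_secgrp, icmp_secgrp
```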


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1730271

Title:
  test_two_sec_groups failure

Status in networking-midonet:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  eg. http://logs.openstack.org/24/517524/3/gate/legacy-tempest-dsvm-
  networking-midonet-aio-ml2/538c357/logs/testr_results.html.gz

  ft1.4: 
neutron.tests.tempest.scenario.test_security_groups.NetworkDefaultSecGroupTest.test_two_sec_groups[id-3d73ec1a-2ec6-45a9-b0f8-04a283d9d964]_StringException:
 pythonlogging:'': {{{
  2017-11-06 02:36:21,418 20994 INFO [tempest.lib.common.rest_client] 
Request (NetworkDefaultSecGroupTest:test_two_sec_groups): 201 POST 
http://15.184.68.89:9696/v2.0/security-groups 0.316s
  2017-11-06 02:36:21,418 20994 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'X-Auth-Token': 
'', 'Accept': 'application/json'}
  Body: {"security_group": {"name": "ssh_secgrp"}}
  Response - Headers: {u'x-openstack-request-id': 
'req-a84466f5-d1db-4b8e-99e3-d1e078d2299d', 'content-location': 
'http://15.184.68.89:9696/v2.0/security-groups', u'content-length': '1342', 
u'date': 'Mon, 06 Nov 2017 02:36:21 GMT', u'content-type': 'application/json', 
'status': '201', u'connection': 'close'}
  Body: {"security_group": {"description": "", "tags": [], "tenant_id": 
"2ed61f69d3d54a9980201640fbc24456", "created_at": "2017-11-06T02:36:21Z", 
"updated_at": "2017-11-06T02:36:21Z", "security_group_rules": [{"direction": 
"egress", "protocol": null, "description": null, "tags": [], "port_range_max": 
null, "updated_at": "2017-11-06T02:36:21Z", "revision_number": 0, "id": 
"39a7f84c-d3c4-4d57-accc-d7732d17abb1", "remote_group_id": null, 
"remote_ip_prefix": null, "created_at": "2017-11-06T02:36:21Z", 
"security_group_id": "95395f8b-5099-4c5a-8c13-1bbafbb60c65", "tenant_id": 
"2ed61f69d3d54a9980201640fbc24456", "port_range_min": null, "ethertype": 
"IPv6", "project_id": "2ed61f69d3d54a9980201640fbc24456"}, {"direction": 
"egress", "protocol": null, "description": null, "tags": [], "port_range_max": 
null, "updated_at": "2017-11-06T02:36:21Z", "revision_number": 0, "id": 
"9ea97847-8896-40ea-a72f-b3c0d35e9111", "remote_group_id": null, 
"remote_ip_prefix": null, "created_at": "2017-11-06T02:36:21Z", 
"security_group_id": "95395f8b-5099-4c5a-8c13-1bbafbb60c65", "tenant_id": 
"2ed61f69d3d54a9980201640fbc24456", "port_range_min": null, "ethertype": 
"IPv4", "project_id": "2ed61f69d3d54a9980201640fbc24456"}], "revision_number": 
2, "project_id": "2ed61f69d3d54a9980201640fbc24456", "id": 
"95395f8b-5099-4c5a-8c13-1bbafbb60c65", "name": "ssh_secgrp"}}
  2017-11-06 02:36:21,657 20994 INFO [tempest.lib.common.rest_client] 
Request (NetworkDefaultSecGroupTest:test_two_sec_groups): 201 POST 
http://15.184.68.89:9696/v2.0/security-group-rules 0.237s
  2017-11-06 02:36:21,657 20994 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'X-Auth-Token': 
'', 'Accept': 'application/json'}
  Body: {"security_group_rule": {"port_range_max": 22, "direction": 
"ingress", "port_range_min": 22, "remote_ip_prefix": "0.0.0.0/0", 
"security_group_id": "95395f8b-5099-4c5a-8c13-1bbafbb60c65", "protocol": "tcp"}}
  Response - Headers: {u'x-openstack-request-id': 
'req-96b4e7f7-d605-4415-9a2f-5a724ec440e5', 'content-location': 
'http://15.184.68.89:9696/v2.0/security-group-rules', u'content-length': '514', 
u'date': 'Mon, 06 Nov 2017 02:36:21 GMT', u'content-type': 'application/json', 
'status': '201', u'connection': 'close'}
  Body: {"security_group_rule": {"remote_group_id": null, "direction": 
"ingress", "protocol": "tcp", "description": "", "ethertype": "IPv4", 
"remote_ip_prefix": "0.0.0.0/0", "port_range_max": 22, "updated_at": 
"2017-11-06T02:36:21Z", "security_group_id": 
"95395f8b-5099-4c5a-8c13-1bbafbb60c65", "port_range_min": 22, 
"revision_number": 0, "tenant_id": "2ed61f69d3d54a9980201640fbc24456", 
"created_at": "2017-11-06T02:36:21Z", "project_id": 
"2ed61f69d3d54a9980201640fbc24456", "id": 
"9ea35584-5a38-4941-a70a-9a145185e0a6"}}
  2017-11-06 02:36:21,942 20994 INFO [tempest.lib.common.rest_client] 
Request (NetworkDefaultSecGroupTest:test_two_sec_groups): 201 POST 
http://15.184.68.89:9696/v2.0/security-groups 

[Yahoo-eng-team] [Bug 1731112] [NEW] Raise error when pass not support protocol value during sg_rule creation

2017-11-08 Thread zhaobo
Public bug reported:

repro

neutron security-group-rule-create test --direction ingress --protocol 115 
--port-range-min 22 --port-range-max 22
Request Failed: internal server error while processing your request.

This will raise a 500 internal server error.
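Below is a minimal sketch of the kind of API-side validation that would turn this into a client error instead of a 500; the helper name, the protocol set, and the exception type are illustrative assumptions, not the actual neutron patch.

```python
# Hedged sketch, not neutron's actual code: reject port ranges for protocols
# that do not support them, so the API can return a clear 4xx instead of a 500.
PORT_RANGE_PROTOCOLS = {'tcp', 'udp', 'udplite', 'sctp', 'dccp'}  # assumed subset


def validate_rule(protocol, port_range_min, port_range_max):
    has_ports = port_range_min is not None or port_range_max is not None
    proto = str(protocol).lower() if protocol is not None else None
    if has_ports and proto not in PORT_RANGE_PROTOCOLS:
        # e.g. --protocol 115 combined with --port-range-min/--port-range-max
        raise ValueError("Port ranges are not supported for protocol %r"
                         % protocol)
```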

Log

ERROR neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/api.py", line 127, in wrapped
ERROR neutron.api.v2.resource     LOG.debug("Retry wrapper got retriable exception: %s", e)
ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
ERROR neutron.api.v2.resource     self.force_reraise()
ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
ERROR neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/api.py", line 123, in wrapped
ERROR neutron.api.v2.resource     return f(*dup_args, **dup_kwargs)
ERROR neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 548, in _create
ERROR neutron.api.v2.resource     obj = do_create(body)
ERROR neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 530, in do_create
ERROR neutron.api.v2.resource     request.context, reservation.reservation_id)
ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
ERROR neutron.api.v2.resource     self.force_reraise()
ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
ERROR neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 523, in do_create
ERROR neutron.api.v2.resource     return obj_creator(request.context, **kwargs)
ERROR neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/securitygroups_rpc_base.py", line 56, in create_security_group_rule
ERROR neutron.api.v2.resource     security_group_rule)
ERROR neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/api.py", line 162, in wrapped
ERROR neutron.api.v2.resource     return method(*args, **kwargs)
ERROR neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/api.py", line 92, in wrapped
ERROR neutron.api.v2.resource     setattr(e, '_RETRY_EXCEEDED', True)
ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
ERROR neutron.api.v2.resource     self.force_reraise()
ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
ERROR neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/api.py", line 88, in wrapped
ERROR neutron.api.v2.resource     return f(*args, **kwargs)
ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 150, in wrapper
ERROR neutron.api.v2.resource     ectxt.value = e.inner_exc
ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
ERROR neutron.api.v2.resource     self.force_reraise()
ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
ERROR neutron.api.v2.resource     return f(*args, **kwargs)
ERROR neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/api.py", line 127, in wrapped
ERROR neutron.api.v2.resource     LOG.debug("Retry wrapper got retriable exception: %s", e)
ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
ERROR neutron.api.v2.resource     self.force_reraise()
ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
ERROR neutron.api.v2.resource     six.reraise(self.type_, 

[Yahoo-eng-team] [Bug 1730892] Re: Nova Image Resize Generating Errors

2017-11-08 Thread Xuanzhou Perry Dong
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1730892

Title:
  Nova Image Resize Generating Errors

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===
  When the flavor disk size is larger than the image size, Nova will try to
  grow the image disk to match the flavor disk size. In the process, it also
  calls resize2fs to resize the image's file system for raw-format images,
  but this generates an error, since resize2fs should be executed on a
  partition block device instead of the whole block device (which includes
  the boot sector, partition table, etc.).
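  The sketch below illustrates the distinction (plain Python wrapping e2label,
  which already appears in the log further down; this is not nova's actual
  is_image_extendable code):

  ```python
  # Hedged illustration: resize2fs/e2label only make sense on something that
  # holds an ext filesystem directly.  On a whole-disk image with a partition
  # table, e2label fails with "Bad magic number in super-block", as in the
  # log below.
  import subprocess

  def holds_bare_ext_filesystem(path):
      """Return True if `path` looks like a raw ext filesystem (something
      resize2fs could safely grow), False otherwise."""
      try:
          subprocess.check_output(['e2label', path], stderr=subprocess.STDOUT)
          return True
      except subprocess.CalledProcessError:
          return False
  ```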

  Steps to Reproduce
  ==

  1. Set the following configuration for nova-compute:
  use_cow_images = False
  force_raw_images = True

  2. nova boot --image cirros-0.3.5-x86_64-disk --nic net-
  id=6f0df6a5-8848-427b-8222-7b69d5602fe4 --flavor m1.tiny test_vm

  The following error log are generated:

  Nov 08 14:42:51 devstack01 nova-compute[10609]: DEBUG nova.virt.disk.api 
[None req-771fa44d-46ce-4486-9ed7-7a89ddb735ed demo admin] Unable to determine 
label for image  with error Unexpected error while running command.
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Command: e2label 
/opt/stack/data/nova/instances/73b49be7-0b5f-4a9d-a187-bbbf7fd59961/disk
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Exit code: 1
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Stdout: u''
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Stderr: u"e2label: Bad magic 
number in super-block while trying to open 
/opt/stack/data/nova/instances/73b49be7-0b5f-4a9d-a187-bbbf7fd59961/disk\nCouldn't
 find valid filesystem superblock.\n". Cannot resize. {{(pid=10609) 
is_image_extendable /opt/stack/nova/nova/virt/disk/api.py:254}}

  Expected Result
  ===
  The wrong command should not be executed, and no error logs should be generated.

  Actual Result
  =
  Error logs are generated:

  Environment
  ===
  1. Openstack Nova
  stack@devstack01:/opt/stack/nova$ git log -1
  commit 232458ae4e83e8b218397e42435baa9f1d025b68
  Merge: 650c9f3 9d400c3
  Author: Jenkins 
  Date:   Tue Oct 10 06:27:52 2017 +

  Merge "rp: Move RP._get|set_aggregates() to module scope"

  2. Hypervisor
  Libvirt + QEMU
  stack@devstack01:/opt/stack/nova$ dpkg -l | grep libvirt
  ii  libvirt-bin3.6.0-1ubuntu5~cloud0  
amd64programs for the libvirt library
  ii  libvirt-clients3.6.0-1ubuntu5~cloud0  
amd64Programs for the libvirt library
  ii  libvirt-daemon 3.6.0-1ubuntu5~cloud0  
amd64Virtualization daemon
  ii  libvirt-daemon-system  3.6.0-1ubuntu5~cloud0  
amd64Libvirt daemon configuration files
  ii  libvirt-dev:amd64  3.6.0-1ubuntu5~cloud0  
amd64development files for the libvirt library
  ii  libvirt0:amd64 3.6.0-1ubuntu5~cloud0  
amd64library for interfacing with different virtualization systems
  stack@devstack01:/opt/stack/nova$ dpkg -l | grep qemu
  ii  ipxe-qemu  1.0.0+git-20150424.a25a16d-1ubuntu1
all  PXE boot firmware - ROM images for qemu
  ii  qemu-block-extra:amd64 1:2.10+dfsg-0ubuntu3~cloud0
amd64extra block backend modules for qemu-system and qemu-utils
  ii  qemu-kvm   1:2.10+dfsg-0ubuntu1~cloud0
amd64QEMU Full virtualization
  ii  qemu-slof  20151103+dfsg-1ubuntu1 
all  Slimline Open Firmware -- QEMU PowerPC version
  ii  qemu-system1:2.10+dfsg-0ubuntu3~cloud0
amd64QEMU full system emulation binaries
  ii  qemu-system-arm1:2.10+dfsg-0ubuntu1~cloud0
amd64QEMU full system emulation binaries (arm)
  ii  qemu-system-common 1:2.10+dfsg-0ubuntu3~cloud0
amd64QEMU full system emulation binaries (common files)
  ii  qemu-system-mips   1:2.10+dfsg-0ubuntu1~cloud0
amd64QEMU full system emulation binaries (mips)
  ii  qemu-system-misc   1:2.10+dfsg-0ubuntu1~cloud0
amd64QEMU full system emulation binaries (miscellaneous)
  ii  qemu-system-ppc1:2.10+dfsg-0ubuntu1~cloud0
amd64QEMU full system emulation binaries (ppc)
  ii  qemu-system-s390x  

[Yahoo-eng-team] [Bug 1724613] Re: AllocationCandidates.get_by_filters ignores shared RPs when the RC exists in both places

2017-11-08 Thread Jay Pipes
This is by design. Non-sharing providers that have all the resources
needed in the request are used as-is, and there is no attempt to create
permutations of *some* of the non-sharing provider's resources with those
of a sharing provider.

If you had, though, a second resource provider that only had VCPU and
MEMORY_MB but no disk, and associated that second provider to the shared
storage provider via aggregate, you would see two allocation requests,
one with all resources coming from the first compute node resource
provider and the other with VCPU and MEMORY_MB from the second compute
node resource provider and DISK_GB from the shared storage provider.
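For reference, a sketch of the two allocation requests that second scenario
would yield, in the same shorthand the bug description uses (the provider
names here are illustrative):

```python
# Hedged sketch of the scenario described above: cn2 has only VCPU and
# MEMORY_MB and is associated with the shared storage provider via aggregate.
expected_allocation_requests = [
    {'cn1': {'VCPU': 1, 'MEMORY_MB': 512, 'DISK_GB': 2}},  # all local to cn1
    {'cn2': {'VCPU': 1, 'MEMORY_MB': 512},                 # compute from cn2
     'ss': {'DISK_GB': 2}},                                # disk from sharing RP
]
```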

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1724613

Title:
  AllocationCandidates.get_by_filters ignores shared RPs when the RC
  exists in both places

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When both the compute node resource provider and the shared resource
  provider have inventory in the same resource class,
  AllocationCandidates.get_by_filters will not return an
  AllocationRequest including the shared resource provider.

  Example:

   cnrp { VCPU: 24,
  MEMORY_MB: 2048,
  DISK_GB: 16 }
   ssrp { DISK_GB: 32 }

   AllocationCandidates.get_by_filters(
   resources={ VCPU: 1,
   MEMORY_MB: 512,
   DISK_GB: 2 } )

  Expected:

   allocation_requests: [
   { cnrp: { VCPU: 1,
 MEMORY_MB: 512,
 DISK_GB: 2 } },
   { cnrp: { VCPU: 1,
 MEMORY_MB: 512 }
 ssrp: { DISK_GB: 2 } },
   ]

  Actual:

   allocation_requests: [
   { cnrp: { VCPU: 1,
 MEMORY_MB: 512,
 DISK_GB: 2 } }
   ]

  I will post a review shortly that demonstrates this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1724613/+subscriptions



[Yahoo-eng-team] [Bug 1731072] [NEW] AllocationCandidates.get_by_filters returns garbage with multiple aggregates

2017-11-08 Thread Eric Fried
Public bug reported:

I set up a test scenario with multiple providers (sharing and non),
across multiple aggregates.  Requesting allocation candidates gives some
candidates as expected, but some are garbled.  Bad behaviors include:

(1) When inventory in a given RC is provided both by a non-sharing and a 
sharing RP in an aggregate, the sharing RP is ignored in the results (this is 
tracked via https://bugs.launchpad.net/nova/+bug/1724613)
(2) When inventory in a given RC is provided solely by a sharing RP, I don't 
get the expected candidate where that sharing RP provides that inventory and 
the rest comes from the non-sharing RP.
(3) The above applies when there are multiple sharing RPs in the same aggregate 
providing that same shared resource.
(4) ...and also when the sharing RPs provide different resources.

And we get a couple of unexpected candidates that are really garbled:

(5) Where there are multiple sharing RPs with different resources, one 
candidate has the expected resources from the non-sharing RP and one of the 
sharing RPs, but is missing the third requested resource entirely.
(6) With that same setup, we get another candidate that has the expected 
resource from the non-sharing RP; but duplicated resources spread across 
multiple sharing RPs from *different* *aggregates*.  This one is also missing 
one of the requested resources entirely.

I will post a commit shortly that demonstrates this behavior.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1731072

Title:
  AllocationCandidates.get_by_filters returns garbage with multiple
  aggregates

Status in OpenStack Compute (nova):
  New

Bug description:
  I set up a test scenario with multiple providers (sharing and non),
  across multiple aggregates.  Requesting allocation candidates gives
  some candidates as expected, but some are garbled.  Bad behaviors
  include:

  (1) When inventory in a given RC is provided both by a non-sharing and a 
sharing RP in an aggregate, the sharing RP is ignored in the results (this is 
tracked via https://bugs.launchpad.net/nova/+bug/1724613)
  (2) When inventory in a given RC is provided solely by a sharing RP, I don't 
get the expected candidate where that sharing RP provides that inventory and 
the rest comes from the non-sharing RP.
  (3) The above applies when there are multiple sharing RPs in the same 
aggregate providing that same shared resource.
  (4) ...and also when the sharing RPs provide different resources.

  And we get a couple of unexpected candidates that are really garbled:

  (5) Where there are multiple sharing RPs with different resources, one 
candidate has the expected resources from the non-sharing RP and one of the 
sharing RPs, but is missing the third requested resource entirely.
  (6) With that same setup, we get another candidate that has the expected 
resource from the non-sharing RP; but duplicated resources spread across 
multiple sharing RPs from *different* *aggregates*.  This one is also missing 
one of the requested resources entirely.

  I will post a commit shortly that demonstrates this behavior.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1731072/+subscriptions



[Yahoo-eng-team] [Bug 1731035] [NEW] misleading error message for hosts file template expansion

2017-11-08 Thread Robert Schweikert
Public bug reported:

The error message for hosts file template expansion is misleading, it
reads:

cloudinit.cloud: WARNING: No template found at
/etc/cloud/templates/hosts.suse.tmpl for template named hosts.suse

But should be

cloudinit.cloud: WARNING: No template found at /etc/cloud/templates for
template hosts.suse

** Affects: cloud-init
 Importance: Undecided
 Assignee: Robert Schweikert (rjschwei)
 Status: New

** Changed in: cloud-init
 Assignee: (unassigned) => Robert Schweikert (rjschwei)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1731035

Title:
  misleading error message for hosts file template expansion

Status in cloud-init:
  New

Bug description:
  The error message for hosts file template expansion is misleading, it
  reads:

  cloudinit.cloud: WARNING: No template found at
  /etc/cloud/templates/hosts.suse.tmpl for template named hosts.suse

  But should be

  cloudinit.cloud: WARNING: No template found at /etc/cloud/templates
  for template hosts.suse

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1731035/+subscriptions



[Yahoo-eng-team] [Bug 1731022] [NEW] host template expansion does not work on SUSE distros

2017-11-08 Thread Robert Schweikert
Public bug reported:

On openSUSE and SUSE Linux Enterprise Server the hostname is not written
to /etc/hosts when manage_etc_hosts is enabled.

** Affects: cloud-init
 Importance: Undecided
 Assignee: Robert Schweikert (rjschwei)
 Status: New

** Changed in: cloud-init
 Assignee: (unassigned) => Robert Schweikert (rjschwei)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1731022

Title:
  host template expansion does not work on SUSE distros

Status in cloud-init:
  New

Bug description:
  On openSUSE and SUSE Linux Enterprise Server the hostname is not
  written to /etc/hosts when manage_etc_hosts is enabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1731022/+subscriptions



[Yahoo-eng-team] [Bug 1726370] Re: Trace in fetch_and_sync_all_routers

2017-11-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/516322
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=d2b909f5339e72f84de797977384e4164d72a154
Submitter: Zuul
Branch:master

commit d2b909f5339e72f84de797977384e4164d72a154
Author: Brian Haley 
Date:   Mon Oct 30 09:41:46 2017 -0400

Move check_ha_state_for_router() into notification code

As soon as we call router_info.initialize(), we could
possibly try and process a router.  If it is HA, and
we have not fully initialized the HA port or keepalived
manager, we could trigger an exception.

Move the call to check_ha_state_for_router() into the
update notification code so it's done after the router
has been created.  Updated the functional tests for this
since the unit tests are now invalid.

Also added a retry counter to the RouterUpdate object so
the l3-agent code will stop re-enqueuing the same update
in an infinite loop.  We will delete the router if the
limit is reached.

Finally, have the L3 HA code verify that ha_port and
keepalived_manager objects are valid during deletion since
there is no need to do additional work if they are not.

Change-Id: Iae65305cbc04b7af482032ddf06b6f2162a9c862
Closes-bug: #1726370
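A rough sketch of the guard that avoids the AttributeError in the traceback
below (the attribute names are taken from that traceback; the function shape
is an assumption, not the actual patch):

```python
# Hedged sketch: only read HA state once keepalived has been set up.
def safe_ha_state(router_info):
    """Return the router's HA state, or None if keepalived is not yet
    initialized for this router."""
    if getattr(router_info, 'keepalived_manager', None) is None:
        # Reading ha_state here would call
        # keepalived_manager.get_full_config_file_path() and raise the
        # AttributeError shown in the traceback.
        return None
    return router_info.ha_state
```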


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1726370

Title:
  Trace in fetch_and_sync_all_routers

Status in neutron:
  Fix Released

Bug description:
  I am seeing below trace fetch_and_sync_all_routers for HA router

  
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task 
[req-19638c71-4ad9-412f-b5d7-dc9cb84eca4f - - - - -] Error during 
L3NATAgentWithStateReport.periodic_sync_routers_task
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task Traceback 
(most recent call last):
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py", line 220, in 
run_periodic_tasks
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task task(self, 
context)
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 568, in 
periodic_sync_routers_task
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task 
self.fetch_and_sync_all_routers(context, ns_manager)
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 603, in 
fetch_and_sync_all_routers
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task r['id'], 
r.get(l3_constants.HA_ROUTER_STATE_KEY))
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha.py", line 120, in 
check_ha_state_for_router
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task if ri and 
current_state != TRANSLATION_MAP[ri.ha_state]:
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 81, in 
ha_state
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task 
ha_state_path = self.keepalived_manager.get_full_config_file_path(
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task 
AttributeError: 'NoneType' object has no attribute 'get_full_config_file_path'
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1726370/+subscriptions



[Yahoo-eng-team] [Bug 1730975] [NEW] List servers by hypervisor doesn't support ids

2017-11-08 Thread Forest Romain
Public bug reported:

compute_node_search_by_hypervisor[1] (used by GET
http://nova/v2/tenant_id/os-hypervisors//servers) searches for the
hypervisor using a "like" match on the hypervisor name instead of using the
hypervisor id.

For example:

Let's assume we have hypervisors hyp1 (with id=9), hyp2 (with id=1) and hyp11
(with id=2):

http://nova/v2/tenant_id/os-hypervisors/
returns information on a hypervisor using its id as the key
=> http://nova/v2/tenant_id/os-hypervisors/1 returns information on hyp2

http://nova/v2/tenant_id/os-hypervisors//servers
returns information on hypervisor(s) using a like query on the hypervisor
hostname.
=> http://nova/v2/tenant_id/os-hypervisors/1/servers returns servers on hyp1
and hyp11 (because "hyp1" and "hyp11" contain "1")

This seems inconsistent; REST APIs usually use resource ids, not resource
names (and certainly not a "like" match on resource names).

This affects at least kilo, ocata and queens. According to git it affects
every version supporting this feature.

[1] https://github.com/openstack/nova/blob/b7f53a33faf6187ad0b16e45cb14ece07892f7bc/nova/db/sqlalchemy/api.py#L737
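A small self-contained illustration of the mismatch described above (plain
Python, not nova's code; the data and helper names are made up for the
example):

```python
# Hedged illustration: lookup by id vs. substring ("like") match on hostname.
hypervisors = [
    {'id': 9, 'hypervisor_hostname': 'hyp1'},
    {'id': 1, 'hypervisor_hostname': 'hyp2'},
    {'id': 2, 'hypervisor_hostname': 'hyp11'},
]

def show_by_id(hyp_id):
    # GET /os-hypervisors/{id} behaviour: exact match on the primary key
    return next(h for h in hypervisors if h['id'] == hyp_id)

def servers_by_hostname_pattern(pattern):
    # GET /os-hypervisors/{pattern}/servers behaviour: LIKE '%pattern%' match
    return [h for h in hypervisors if pattern in h['hypervisor_hostname']]

print(show_by_id(1))                     # -> hyp2 (lookup by id)
print(servers_by_hostname_pattern('1'))  # -> hyp1 and hyp11 (substring match)
```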

** Affects: nova
 Importance: Undecided
 Assignee: Forest Romain (romain-forest)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Forest Romain (romain-forest)

** Description changed:

  compute_node_search_by_hypervisor[1] (used by GET
  http://nova/v2/tenant_id/os-hypervisors//servers) searchs
  hypervisor using "like" hypervisor name instead of using hypervisor id.
  
  For example:
  
  Lets assume that we have hypervisors hyp1 (with id=9), hyp2 (with id=1), 
hyp11 (with id=2):
- http://nova/v2/tenant_id/os-hypervisors/ returns information 
on an hypervisor using its id as key
+ http://nova/v2/tenant_id/os-hypervisors/ 
+ returns information on an hypervisor using its id as key
+ 
  => http://nova/v2/tenant_id/os-hypervisors/1 returns information on hyp2
  
- http://nova/v2/tenant_id/os-hypervisors//servers returns 
information on hypervisor(s) using a like query on hypervisor hostname.
- => http://nova/v2/tenant_id/os-hypervisors/1/servers returns servers on hyp1 
and hyp12 (because "hyp1" and "hyp12" contains "1"
+ http://nova/v2/tenant_id/os-hypervisors//servers 
+ returns information on hypervisor(s) using a like query on hypervisor 
hostname.
+ 
+ => http://nova/v2/tenant_id/os-hypervisors/1/servers returns servers on
+ hyp1 and hyp12 (because "hyp1" and "hyp12" contains "1"
  
  It seems inconsistent, usually REST APIs use resource ids not resource
  names (nor even worse like resource names).
  
  This trouble affects at least kilo, ocata and queen. According to git it
  affects every version supporting this feature.
  
  
[1]https://github.com/openstack/nova/blob/b7f53a33faf6187ad0b16e45cb14ece07892f7bc/nova/db/sqlalchemy/api.py#L737

** Description changed:

  compute_node_search_by_hypervisor[1] (used by GET
  http://nova/v2/tenant_id/os-hypervisors//servers) searchs
  hypervisor using "like" hypervisor name instead of using hypervisor id.
  
+ 
  For example:
  
  Lets assume that we have hypervisors hyp1 (with id=9), hyp2 (with id=1), 
hyp11 (with id=2):
- http://nova/v2/tenant_id/os-hypervisors/ 
+ http://nova/v2/tenant_id/os-hypervisors/
  returns information on an hypervisor using its id as key
  
  => http://nova/v2/tenant_id/os-hypervisors/1 returns information on hyp2
  
- http://nova/v2/tenant_id/os-hypervisors//servers 
+ 
+ http://nova/v2/tenant_id/os-hypervisors//servers
  returns information on hypervisor(s) using a like query on hypervisor 
hostname.
  
  => http://nova/v2/tenant_id/os-hypervisors/1/servers returns servers on
  hyp1 and hyp12 (because "hyp1" and "hyp12" contains "1"
  
- It seems inconsistent, usually REST APIs use resource ids not resource
- names (nor even worse like resource names).
+ 
+ 
+ It seems inconsistent, usually REST APIs use resource ids not resource names 
(nor even worse like resource names).
  
  This trouble affects at least kilo, ocata and queen. According to git it
  affects every version supporting this feature.
  
  
[1]https://github.com/openstack/nova/blob/b7f53a33faf6187ad0b16e45cb14ece07892f7bc/nova/db/sqlalchemy/api.py#L737

** Description changed:

  compute_node_search_by_hypervisor[1] (used by GET
  http://nova/v2/tenant_id/os-hypervisors//servers) searchs
  hypervisor using "like" hypervisor name instead of using hypervisor id.
- 
  
  For example:
  
  Lets assume that we have hypervisors hyp1 (with id=9), hyp2 (with id=1), 
hyp11 (with id=2):
  http://nova/v2/tenant_id/os-hypervisors/
  returns information on an hypervisor using its id as key
- 
  => http://nova/v2/tenant_id/os-hypervisors/1 returns information on hyp2
- 
  
  http://nova/v2/tenant_id/os-hypervisors//servers
  returns information on hypervisor(s) using a like query on hypervisor 
hostname.
+ => http://nova/v2/tenant_id/os-hypervisors/1/servers returns servers on hyp1 
and hyp12 (because "hyp1" and "hyp12" contains "1"
  
- => 

[Yahoo-eng-team] [Bug 1721502] Re: Date formats for qos panel are unintelligable

2017-11-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/504268
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=80991325652229f0a5eed2d26a487f4653421024
Submitter: Zuul
Branch:master

commit 80991325652229f0a5eed2d26a487f4653421024
Author: Beth Elwell 
Date:   Thu Sep 14 16:35:19 2017 -0600

Cleaned up formats for qos panel

Data formats added for qos panel - this makes the updated and created
date formats legible.

Change-Id: Ib0e364b51de6388395506a1f84454c9d1dc874da
Closes-Bug: #1721502


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1721502

Title:
  Date formats for qos panel are unintelligable

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The date format for created_at and updated_at is just a string, and it is
  not clear to the user what it actually means. It should be displayed in a
  clear format that shows the date and time of creation and update.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1721502/+subscriptions



[Yahoo-eng-team] [Bug 1730959] [NEW] [RFE] Add timestamp to LBaaS resources

2017-11-08 Thread Dongcan Ye
Public bug reported:

Currently most Neutron resources support timestamps (such as created_at and
updated_at).
This would also be useful for LBaaS-related resources, for the sake of
monitoring or querying resources.
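A minimal sketch of the usual pattern, assuming a SQLAlchemy mixin adding the
two columns (the model and table names are illustrative, not the actual
neutron-lbaas schema):

```python
# Hedged sketch: a created_at/updated_at mixin applied to an LBaaS model.
import datetime

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class HasTimestamp(object):
    created_at = sa.Column(sa.DateTime, default=datetime.datetime.utcnow)
    updated_at = sa.Column(sa.DateTime, default=datetime.datetime.utcnow,
                           onupdate=datetime.datetime.utcnow)


class LoadBalancer(HasTimestamp, Base):
    __tablename__ = 'lbaas_loadbalancers'  # illustrative table name
    id = sa.Column(sa.String(36), primary_key=True)
    name = sa.Column(sa.String(255))
```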

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: lbaas

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1730959

Title:
  [RFE] Add timestamp to LBaaS resources

Status in neutron:
  New

Bug description:
  Currently most Neutron resources support timestamps (such as created_at
  and updated_at).
  This would also be useful for LBaaS-related resources, for the sake of
  monitoring or querying resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1730959/+subscriptions



[Yahoo-eng-team] [Bug 1719770] Re: hypervisor stats issue after charm removal if nova-compute service not disabled first

2017-11-08 Thread Edward Hope-Morley
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1719770

Title:
  hypervisor stats aggregates resources from deleted and existing
  services if they share the same hostname

Status in OpenStack nova-compute charm:
  Invalid
Status in OpenStack Compute (nova):
  In Progress
Status in nova package in Ubuntu:
  Confirmed

Bug description:
  In an environment with 592 physical threads (lscpu |grep '^CPU.s' and
  openstack hypervisor show -f value -c vcpus both show correct counts)
  I am seeing 712 vcpus. (likely also seeing inflated memory_mb and
  other stats due to the issue.)

  Querying the nova services DB table, I see:
  http://pastebin.ubuntu.com/25624553/

  It appears that of the 6 machines showing deleted in the services
  table, only one is showing as disabled.

  Digging through the nova/db/sqlalchemy/api.py code, it appears that
  there are filters on the hypervisor stats for Service.disabled ==
  false() and Service.binary == 'nova-compute', but I don't see it
  filtering for deleted == 0.
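  A rough sketch of the extra condition the reporter suggests (illustrative
  SQLAlchemy only; the join and model names are assumptions, not nova's
  exact query):

  ```python
  # Hedged sketch: add a deleted == 0 filter next to the existing
  # disabled/binary filters when computing hypervisor statistics.
  from sqlalchemy import false

  def compute_node_stats_query(session, models):
      return (session.query(models.ComputeNode)
              .join(models.Service,
                    models.Service.host == models.ComputeNode.host)
              .filter(models.Service.disabled == false())
              .filter(models.Service.binary == 'nova-compute')
              .filter(models.Service.deleted == 0))  # the missing condition
  ```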

  I'm not exactly certain of the timeline of the uninstall and reinstall of
  the nova-compute units on the 6 x 24-vcpu servers (see the *-ST-{1,2}
  nova-compute services) that caused the services not to get disabled, but
  the nova API for hypervisor stats might be well served to filter out
  deleted services as well as disabled services; alternatively, if a deleted
  service should always be disabled, nova service-delete should also set the
  disabled flag for the service.

  These services and compute_nodes do not show up in openstack
  hypervisor list.

  Site is running up-to-date Xenial/Mitaka on openstack-charmers 17.02.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-compute/+bug/1719770/+subscriptions



[Yahoo-eng-team] [Bug 1730918] [NEW] Nova does not respect default_schedule_zone `None`

2017-11-08 Thread Ondrej Vasko
Public bug reported:

Hello,

DESCRIPTION
---

I want Nova to behave so that when no Availability Zone is specified
(`Any Availability Zone` is selected in Horizon), it creates the VM in a
random AZ.

STEPS TO REPRODUCE
--

Steps I do to configure that:

1. In `nova.conf` I set `default_schedule_zone = None` and restarted all nova 
services. I found this attribute in documentation [1] and also in Mirantis blog 
post [2]
2. I create 2 availability zones (2 host aggregates each with 1 hypervisor 
added). 
3. I try to create VM In Horizon with `Any Availability Zone` and it results in 
following error:

The requested availability zone is not available (HTTP 400).

The commands I executed to create AZs:

```
openstack aggregate create HA-Test1 --zone AZ-Test1 --property 
availability_zone=AZ-Test1
openstack aggregate create HA-Test2 --zone AZ-Test2 --property 
availability_zone=AZ-Test2
openstack aggregate add host HA-Test1 os-compute-01
openstack aggregate add host HA-Test2 os-compute-02
```

POINT
-

This doesn't work as expected, but(!) when I remove the `nova.conf`
attribute `default_schedule_zone = None`, or configure it empty
(`default_schedule_zone = `), spawning of VMs works as expected and they
are scheduled in a random AZ.

Therefore I think that Nova doesn't handle `None` as Python `None`, but
as the string "None" (as if you had set the default schedule zone to a
zone named `None`).
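A small illustration of why that happens, using oslo.config directly (the
option registration below is a simplified stand-in for nova's own, and
set_override is used to simulate what a `default_schedule_zone = None` line
in nova.conf produces):

```python
# Hedged sketch: oslo.config StrOpt values are plain strings, so the text
# "None" never becomes Python None.
from oslo_config import cfg

conf = cfg.ConfigOpts()
conf.register_opts([cfg.StrOpt('default_schedule_zone', default=None)])
conf([])  # parse with no CLI arguments or config files

# Simulate nova.conf containing: default_schedule_zone = None
conf.set_override('default_schedule_zone', 'None')
print(repr(conf.default_schedule_zone))  # 'None' -- a string, not None
```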

ENVIRONMENT
---

I am using stable Pike release with KVM + Libvirt installed via
Openstack-Ansible on Ubuntu 16.04.

[1] 
https://docs.openstack.org/ocata/config-reference/compute/config-options.html
[2] 
https://www.mirantis.com/blog/the-first-and-final-word-on-openstack-availability-zones/

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1730918

Title:
  Nova does not respect default_schedule_zone `None`

Status in OpenStack Compute (nova):
  New

Bug description:
  Hello,

  DESCRIPTION
  ---

  I want Nova to behave so that when no Availability Zone is specified
  (`Any Availability Zone` is selected in Horizon), it creates the VM in
  a random AZ.

  STEPS TO REPRODUCE
  --

  Steps I do to configure that:

  1. In `nova.conf` I set `default_schedule_zone = None` and restarted all nova 
services. I found this attribute in documentation [1] and also in Mirantis blog 
post [2]
  2. I create 2 availability zones (2 host aggregates each with 1 hypervisor 
added). 
  3. I try to create VM In Horizon with `Any Availability Zone` and it results 
in following error:

  The requested availability zone is not available (HTTP 400).

  The commands I executed to create AZs:

  ```
  openstack aggregate create HA-Test1 --zone AZ-Test1 --property 
availability_zone=AZ-Test1
  openstack aggregate create HA-Test2 --zone AZ-Test2 --property 
availability_zone=AZ-Test2
  openstack aggregate add host HA-Test1 os-compute-01
  openstack aggregate add host HA-Test2 os-compute-02
  ```

  POINT
  -

  This doesn't work as expected, but(!) when I remove the `nova.conf`
  attribute `default_schedule_zone = None`, or configure it empty
  (`default_schedule_zone = `), spawning of VMs works as expected and they
  are scheduled in a random AZ.

  Therefore I think that Nova doesn't handle `None` as Python `None`, but
  as the string "None" (as if you had set the default schedule zone to a
  zone named `None`).

  ENVIRONMENT
  ---

  I am using stable Pike release with KVM + Libvirt installed via
  Openstack-Ansible on Ubuntu 16.04.

  [1] 
https://docs.openstack.org/ocata/config-reference/compute/config-options.html
  [2] 
https://www.mirantis.com/blog/the-first-and-final-word-on-openstack-availability-zones/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1730918/+subscriptions



[Yahoo-eng-team] [Bug 1637444] Re: security group association per port is not supported

2017-11-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/404178
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=e9db12382e5cf3d231ab9a483ede0beb6546338a
Submitter: Zuul
Branch:master

commit e9db12382e5cf3d231ab9a483ede0beb6546338a
Author: Kenji Ishii 
Date:   Thu Mar 30 02:43:49 2017 +

Support security groups association per port

This patch support operation for operators and project users to
associate security groups to a port. The feature is mentioned at
the neutron user feedback session in Barcelona summit [1].

This function UI is same as the function of security groups
association per instance. To realize this, the way of implementation
for 'Edit port' is changed, which move from a single modal to a
workflow base.

[1] 
https://etherpad.openstack.org/p/ocata-neutron-end-user-operator-feedback 
(L.35+)

Also we need to display how security groups is associated at a port.
At the moment, there is not way to be able to see it (only this function).
It should be done as an another patch.

Change-Id: I96e0fafdffbf05b8167ec1b85f7430176fdaab90
Closes-Bug: #1637444
Co-Authored-By: Akihiro Motoki 


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1637444

Title:
  security group association per port is not supported

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Horizon security group association interface only allows users to
  apply security groups with an instance (server). There is no way to
  apply security groups to a specific port.

  It is a common use case for a server to have multiple interfaces, each
  with a specific purpose (for example, one for internal connectivity and
  the other for external connectivity). In this case, a user wants to
  apply different security groups to different ports.

  It comes from the neutron user feedback session in Barcelona.
  https://etherpad.openstack.org/p/ocata-neutron-end-user-operator-feedback 
(L.35+)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1637444/+subscriptions



[Yahoo-eng-team] [Bug 1730901] [NEW] Missing user and database setup on Install and configure (Red Hat) in glance

2017-11-08 Thread somat
Public bug reported:

- [x] This is a doc addition request.
- [x] I have a fix to the document that I can paste below including example: 
input and output. 


In the documentation, it seems that the instructions for creating the user and
database for Glance are missing.

MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
->   IDENTIFIED BY 'GLANCE_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
->   IDENTIFIED BY 'GLANCE_DBPASS';
Query OK, 0 rows affected (0.00 sec)


---
Release: 15.0.1.dev1 on 'Mon Aug 7 01:28:54 2017, commit 9091d26'
SHA: 9091d262afb120fd077bae003d52463f833a4fde
Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-rdo.rst
URL: https://docs.openstack.org/glance/pike/install/install-rdo.html

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1730901

Title:
  Missing user and database setup on Install and configure (Red Hat) in
  glance

Status in Glance:
  New

Bug description:
  - [x] This is a doc addition request.
  - [x] I have a fix to the document that I can paste below including example: 
input and output. 

  
  In the documentation, it seems that the instructions for creating the user
  and database for Glance are missing.

  MariaDB [(none)]> CREATE DATABASE glance;
  Query OK, 1 row affected (0.00 sec)

  MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  ->   IDENTIFIED BY 'GLANCE_DBPASS';
  Query OK, 0 rows affected (0.00 sec)

  MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  ->   IDENTIFIED BY 'GLANCE_DBPASS';
  Query OK, 0 rows affected (0.00 sec)


  ---
  Release: 15.0.1.dev1 on 'Mon Aug 7 01:28:54 2017, commit 9091d26'
  SHA: 9091d262afb120fd077bae003d52463f833a4fde
  Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-rdo.rst
  URL: https://docs.openstack.org/glance/pike/install/install-rdo.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1730901/+subscriptions
