[Yahoo-eng-team] [Bug 1661503] [NEW] If public_endpoint is set, the first call will always go to the public endpoint

2017-02-02 Thread Yoshi Kadokawa
Public bug reported:

I have set up a Keystone service (Mitaka) on Ubuntu,
and it seems that the first call always goes to Keystone's public API URL
when "public_endpoint" is set in keystone.conf.

For example, when I run the following openstack command, I always get
this error:

ubuntu@client:~$ openstack token issue
Unable to establish connection to http://10.12.2.2:5000/fuga/v3/auth/tokens

The keystone endpoints are as follows:
public:   http://10.12.2.2:5000/fuga/v3
admin:    http://10.12.1.2:35357/fuga/v3
internal: http://10.12.3.2:5000/fuga/v3

The openstack client is installed on a client node, separate from the
keystone node, and this client node has no network access to the public
API network. So a failure when accessing the public API is expected, but
I have set the environment variables like this:

ubuntu@client:~$ env | grep OS_
OS_USER_DOMAIN_NAME=default
OS_PROJECT_NAME=admin
OS_IDENTITY_API_VERSION=3
OS_PASSWORD=openstack
OS_AUTH_URL=http://10.12.1.2:35357/fuga/v3
OS_USERNAME=admin
OS_INTERFACE=admin
OS_PROJECT_DOMAIN_NAME=default

Therefore, my expectation is that API access goes only through the admin URL.
I have also tried the internal API URL, but I get the same error.

And of course, if the client node has public API network access, the
openstack client works perfectly. It also works perfectly if you do not
use the special path for the API URLs, i.e. if "public_endpoint" is not
set.

According to this:
https://github.com/openstack/keystone/blob/stable/mitaka/keystone/version/service.py#L160
the "public" string is passed in, and here:
https://github.com/openstack/keystone/blob/stable/mitaka/keystone/common/wsgi.py#L372
it is combined with "_endpoint" to form "public_endpoint"; if that
option is set, the public URL is used for the initial request.
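The lookup described above reduces to a small pattern; the following is an illustrative reconstruction, not the actual Keystone code (the dict stands in for keystone.conf, and base_url is a made-up name):

```python
def base_url(conf, interface):
    """Resolve the endpoint URL for a given interface name."""
    option = '%s_endpoint' % interface      # e.g. "public_endpoint"
    url = conf.get(option)
    if url:
        # When the option is set its value wins, so version discovery,
        # which always asks for "public", returns the public URL even
        # though OS_INTERFACE requested admin.
        return url
    return None

print(base_url({'public_endpoint': 'http://10.12.2.2:5000/fuga'}, 'public'))
# -> http://10.12.2.2:5000/fuga
```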


I have attached some info:
- /etc/keystone/keystone.conf
- /etc/apache2/sites-enabled/wsgi-keystone.conf
- output with debug option

** Affects: keystone
 Importance: Undecided
 Status: New

** Attachment added: "debugoutput-openstackclient.txt"
   
https://bugs.launchpad.net/bugs/1661503/+attachment/4812390/+files/debugoutput-openstackclient.txt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1661503

Title:
  If public_endpoint is set, the first call will always go to the public
  endpoint

Status in OpenStack Identity (keystone):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1661503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1661303] Re: neutron-ns-metadata-proxy process failing under python3.5

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/428504
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=33129f2ac6a41afe2f59a0d50d5db0257f588a79
Submitter: Jenkins
Branch: master

commit 33129f2ac6a41afe2f59a0d50d5db0257f588a79
Author: Kevin Benton 
Date:   Thu Feb 2 16:12:40 2017 -0800

Use bytes for python3 friendly os.write

Bytes not str. Otherwise we get
TypeError: a bytes-like object is required, not 'str'
in the metadata proxy and it dies.

Closes-Bug: #1661303
Change-Id: If6b6f19130c965436a637a03a4cf72203e0786b0
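The Python 3 behaviour behind this fix can be reproduced in isolation; a minimal sketch of the failure and the correction:

```python
import os

r, w = os.pipe()

# Under Python 3, os.write() requires bytes; passing a str raises
# "TypeError: a bytes-like object is required, not 'str'".
try:
    os.write(w, "HTTP/1.1 200 OK\r\n")
except TypeError as exc:
    print(exc)

# The fix: encode to bytes before writing.
os.write(w, "HTTP/1.1 200 OK\r\n".encode("ascii"))
print(os.read(r, 64))
```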


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1661303

Title:
  neutron-ns-metadata-proxy process failing under python3.5

Status in neutron:
  Fix Released

Bug description:
  When running under python 3.5, we are seeing the neutron-ns-metadata-proxy
  fail repeatedly on Ocata RC1 master.

  This is causing instances to fail to boot under a python3.5 devstack.

  A gate example is here:
  
http://logs.openstack.org/99/407099/25/check/gate-rally-dsvm-py35-neutron-neutron-ubuntu-xenial/4741e0d/logs/screen-q-l3.txt.gz?level=ERROR#_2017-02-02_11_41_52_029

  2017-02-02 11:41:52.029 29906 ERROR
  neutron.agent.linux.external_process [-] metadata-proxy for router
  with uuid 79af72b9-6b17-4864-8088-5dc96b9271df not found. The process
  should not have died

  Running this locally I see the debug output of the configuration
  settings and it immediately exits with no error output.

  To reproduce:
  Stack a fresh devstack with the "USE_PYTHON3=True" setting in your
  localrc (NOTE: there are other Python 3.x devstack bugs that may
  reconfigure your host in bad ways once you do this. Plan to only stack
  with this setting on a throw-away host or one you plan to use for
  Python 3.x going forward.)

  Once this devstack is up and running, set up a neutron network and
  subnet, then boot a cirros instance on that new subnet.

  Check the cirros console.log to see that it cannot find a metadata
  datasource (due to this change disabling configdrive:
  https://github.com/openstack-dev/devstack/commit/7682ea88a6ab8693b215646f16748dbbc2476cc4).

  Check the q-l3.txt log to see the repeated "The process should not
  have died" messages.

  You will also note that the cirros instance did not receive its SSH
  keys and is requiring password login due to the missing datasource.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1661303/+subscriptions



[Yahoo-eng-team] [Bug 1661501] [NEW] Unlocalized string "Warning!" is found in Register Image page under Data Processing

2017-02-02 Thread Yuko Katabami
Public bug reported:

The string "Warning!" is unlocalized. 
I confirmed it in Japanese and Simplified Chinese.

I can locate the same string in Zanata but that seems to be used in the
Developer tab according to the pot file at:
http://tarballs.openstack.org/translation-source/horizon/master/openstack_dashboard/locale/djangojs.pot

#: openstack_dashboard/contrib/developer/static/dashboard/developer/theme-preview/theme-preview.html:995
msgid "Warning!"
msgstr ""

In Zanata, I cannot even find the translated strings used in the same
text box.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Unlocalized_Warning.png"
   
https://bugs.launchpad.net/bugs/1661501/+attachment/4812388/+files/Unlocalized_Warning.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1661501

Title:
  Unlocalized string "Warning!" is found in Register Image page under
  Data Processing

Status in OpenStack Dashboard (Horizon):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1661501/+subscriptions



[Yahoo-eng-team] [Bug 1648095] Re: neutron-ns-metadata-proxy cleanup does not occur

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/411566
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=a502c96b8efa62627e245ffa69a23f4b6d3f90a5
Submitter: Jenkins
Branch: master

commit a502c96b8efa62627e245ffa69a23f4b6d3f90a5
Author: Isaku Yamahata 
Date:   Thu Dec 15 17:05:26 2016 -0800

Kill the metadata proxy process unconditionally

When force_metadata=True and enable_isolated_metadata=False,
the namespace metadata proxy process might not be terminated
when the network is deleted because the subnets and ports
will have already been deleted, so we could incorrectly
determine it was started. Calling destroy_monitored_metadata_proxy() is
a noop when there is no process running.

Change-Id: I77ff545ce02f2dca4c38e587b37ea809ad6f072c
Closes-Bug: #1648095
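The property the commit relies on (destroying a proxy that is not running is a no-op) can be sketched like this; kill_if_running is an illustrative helper, not the actual neutron API:

```python
import os
import signal

def kill_if_running(pid_file):
    """Attempt cleanup unconditionally; a no-op when nothing is running."""
    try:
        with open(pid_file) as f:
            pid = int(f.read().strip())
        os.kill(pid, signal.SIGTERM)
    except (OSError, ValueError):
        pass  # no pid file, unparsable pid, or process already gone

# Safe to call even when the proxy was never started for this network:
kill_if_running('/tmp/no-such-network.pid')
```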


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1648095

Title:
  neutron-ns-metadata-proxy cleanup does not occur

Status in networking-odl:
  In Progress
Status in neutron:
  Fix Released

Bug description:
  Devstack + newton + networking-odl / v2.
  To reproduce:
  1. Create a new vxlan network.
  2. Add 1 or more subnets.
  3. Delete the network.
  I noticed this occurring during
  tempest.api.network.test_networks.NetworksIpV6Test.test_create_list_subnet_with_no_gw64_one_network

  Eventually Linux runs out of memory and starts terminating processes
  based on OOM score.

  Many processes look like this:
  stack 2538 1  0 11:19 ?00:00:00 /usr/bin/python 
/usr/local/bin/neutron-ns-metadata-proxy 
--pid_file=/opt/stack/data/neutron/external/pids/b408d77c-ca69-4b2c-b2f1-5a889f149f11.pid
 --metadata_proxy_socket=/opt/stack/data/neutron/metadata_proxy 
--network_id=b408d77c-ca69-4b2c-b2f1-5a889f149f11 
--state_path=/opt/stack/data/neutron --metadata_port=80 
--metadata_proxy_user=1000 --metadata_proxy_group=1000 --debug
  stack 2981 1  0 11:20 ?00:00:00 /usr/bin/python 
/usr/local/bin/neutron-ns-metadata-proxy 
--pid_file=/opt/stack/data/neutron/external/pids/ddfc2980-2c16-46f4-8eab-e91a047e796c.pid
 --metadata_proxy_socket=/opt/stack/data/neutron/metadata_proxy 
--network_id=ddfc2980-2c16-46f4-8eab-e91a047e796c 
--state_path=/opt/stack/data/neutron --metadata_port=80 
--metadata_proxy_user=1000 --metadata_proxy_group=1000 --debug
  stack 4977 1  0 11:28 ?00:00:00 /usr/bin/python 
/usr/local/bin/neutron-ns-metadata-proxy 
--pid_file=/opt/stack/data/neutron/external/pids/2b6d0fe6-f13a-4be4-9ea6-f6bc12ed3d69.pid
 --metadata_proxy_socket=/opt/stack/data/neutron/metadata_proxy 
--network_id=2b6d0fe6-f13a-4be4-9ea6-f6bc12ed3d69 
--state_path=/opt/stack/data/neutron --metadata_port=80 
--metadata_proxy_user=1000 --metadata_proxy_group=1000 --debug
  stack 6199 12085  0 11:34 pts/600:00:00 grep neutron-ns-metadata-proxy
  stack29462 1  0 11:11 ?00:00:00 /usr/bin/python 
/usr/local/bin/neutron-ns-metadata-proxy 
--pid_file=/opt/stack/data/neutron/external/pids/00192768-29d1-46d2-9092-7a7bbd7b55ef.pid
 --metadata_proxy_socket=/opt/stack/data/neutron/metadata_proxy 
--network_id=00192768-29d1-46d2-9092-7a7bbd7b55ef 
--state_path=/opt/stack/data/neutron --metadata_port=80 
--metadata_proxy_user=1000 --metadata_proxy_group=1000 --debug
  stack32627 1  0 11:17 ?00:00:00 /usr/bin/python 
/usr/local/bin/neutron-ns-metadata-proxy 
--pid_file=/opt/stack/data/neutron/external/pids/b2aa9b47-b55e-4b3f-bf2c-31bc91d86b33.pid
 --metadata_proxy_socket=/opt/stack/data/neutron/metadata_proxy 
--network_id=b2aa9b47-b55e-4b3f-bf2c-31bc91d86b33 
--state_path=/opt/stack/data/neutron --metadata_port=80 
--metadata_proxy_user=1000 --metadata_proxy_group=1000 --debug

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-odl/+bug/1648095/+subscriptions



[Yahoo-eng-team] [Bug 1570122] Re: ipv6 prefix delegated subnets are not accessible outside the router they are attached to

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/407025
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=cd38886d20c0900788f2c15157be1a016cb475be
Submitter: Jenkins
Branch: master

commit cd38886d20c0900788f2c15157be1a016cb475be
Author: John Davidge 
Date:   Mon Dec 5 12:32:19 2016 +

Fix iptables rules for Prefix Delegated subnets

Make sure the correct iptables rule is added when the router gets
an interface on a PD-enabled subnet. This will allow traffic on PD
subnets to reach the external network.

Includes a unit test for the new function, and modifies an
existing test to verify the adding and removal of the rule.

Change-Id: I42f8f42995e9809e5bda2b29726f7244c052ca1c
Closes-Bug: #1570122


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1570122

Title:
  ipv6 prefix delegated subnets are not accessible outside the router
  they are attached to

Status in neutron:
  Fix Released

Bug description:
  currently ip6tables in the qrouter namespace has the following rule.
  This causes unmarked packets to drop.

  -A neutron-l3-agent-scope -o qr-ca9ffa4f-fd -m mark ! --mark
  0x401/0x -j DROP

  It seems that prefix delegated subnets don't get that mark set on
  incoming traffic from the gateway port; I had to add my own rule to do
  that.

  ip6tables -t mangle -A neutron-l3-agent-scope -i qg-ac290c4b-4f -j
  MARK --set-xmark 0x401/0x

  At the moment that is probably too permissive; it should likely be
  limited to the delegated prefix, with '-d dead:beef:cafe::/64' or
  whatever the delegation is (tested this and it does work).
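  Combining the workaround with the suggested destination match would give
  a rule along these lines (a sketch using the interface name and example
  prefix from this report; the mark mask shown earlier was truncated in
  transit, so only the mark value is reproduced here):

```shell
# Mark only traffic destined for the delegated prefix, instead of all
# traffic entering on the gateway port.
ip6tables -t mangle -A neutron-l3-agent-scope -i qg-ac290c4b-4f \
    -d dead:beef:cafe::/64 -j MARK --set-mark 0x401
```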

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1570122/+subscriptions



[Yahoo-eng-team] [Bug 1660647] Re: _cleanup_failed_start aggressively removes local instance files when handling plug_vif failures

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/427267
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=67aa277b4ef623c9877b97bfd7952f0bb1d80a81
Submitter: Jenkins
Branch: master

commit 67aa277b4ef623c9877b97bfd7952f0bb1d80a81
Author: Lee Yarwood 
Date:   Tue Jan 31 15:26:38 2017 +

libvirt: Limit destroying disks during cleanup to spawn

Iab5afdf1b5b now ensures that cleanup is always called when VIF plugging
errors are encountered by _create_domain_and_network. At present cleanup
is always called with destroy_disks=True leading to any local instance
files being removed from the host.

_create_domain_and_network itself has various callers such as resume and
hard_reboot that assume these files will persist any such failures. As a
result the removal of these files will leave instances in an unbootable
state.

In order to correct this an additional destroy_disks_on_failures kwarg
is now provided to _create_domain_and_network and passed down into
cleanup. This kwarg defaults to False and is only enabled when
_create_domain_and_network is used to spawn a new instance.

Closes-bug: #1660647
Change-Id: I38c969690fedb71c5b5ec4418c1b0dd53df733ec
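The control flow described above can be sketched as follows, with illustrative names, a simulated failure, and a recording stub instead of the real libvirt cleanup:

```python
calls = []

def cleanup(destroy_disks):
    # Stand-in for the libvirt driver's cleanup(); records the flag.
    calls.append(destroy_disks)

def create_domain_and_network(destroy_disks_on_failure=False):
    try:
        raise RuntimeError("VIF plugging timed out")  # simulated failure
    except RuntimeError:
        # Disks are destroyed only when the caller (spawn) opted in.
        cleanup(destroy_disks=destroy_disks_on_failure)

create_domain_and_network()                               # resume / hard reboot
create_domain_and_network(destroy_disks_on_failure=True)  # initial spawn
print(calls)  # -> [False, True]
```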


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660647

Title:
  _cleanup_failed_start aggressively removes local instance files when
  handling plug_vif failures

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  Iab5afdf1b5b8d107ea0e5895c24d50712e7dc7b1 [1] ensured that 
_cleanup_failed_start is always called if we encounter VIF plugging failures in 
_create_domain_and_network. However this currently leads to any local instance 
files being removed as cleanup is called with destroy_disks=True.

  As such any failures when resuming or restarting an instance will lead
  to these files being removed and the instance left in an unbootable
  state. IMHO these files should only be removed when cleaning up after
  errors hit while initially spawning an instance.

  Steps to reproduce
  ==
  - Boot an instance using local disks
  - Stop the instance
  - Start the instance, causing a timeout or other failure during plug_vifs
  - Attempt to start the instance again

  Expected result
  ===
  The local instance files are left on the host if instances are rebooting or 
resuming.

  Actual result
  =
  The local instance files are removed from the host if _cleanup_failed_start 
is called.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
 list for all releases: http://docs.openstack.org/releases/

 $ pwd
 /opt/stack/nova
 $ git rev-parse HEAD
 4969a21ee28ef4a68bd5ab1ec8a12c4ad126

  
  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 Libvirt + KVM

  3. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 N/A

  4. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

  Logs & Configs
  ==

  $ nova boot --image cirros-0.3.4-x86_64-uec --flavor 1 test-boot
  [..]
  $ nova stop test-boot
  $ ll ../data/nova/instances/be6cb386-e005-4fb2-8332-7e0c375ee452/
  total 18596
  -rw-rw-r--. 1 root  root16699 Jan 31 09:30 console.log
  -rw-r--r--. 1 root  root 10289152 Jan 31 09:30 disk
  -rw-r--r--. 1 stack libvirtd  257 Jan 31 09:29 disk.info
  -rw-rw-r--. 1 qemu  qemu  4979632 Jan 31 09:29 kernel
  -rw-rw-r--. 1 qemu  qemu  3740163 Jan 31 09:29 ramdisk

  I used the following change to artificially recreate an issue plugging
  the VIFs :

  $ git diff
  diff --git a/nova/virt/libvirt/driver.py b/nova/virt/libvirt/driver.py
  index 33e3157..248e960 100644
  --- a/nova/virt/libvirt/driver.py
  +++ b/nova/virt/libvirt/driver.py
  @@ -5015,6 +5015,7 @@ class LibvirtDriver(driver.ComputeDriver):
   pause = bool(events)
   guest = None
   try:
  +raise exception.VirtualInterfaceCreateException()
   with self.virtapi.wait_for_instance_event(
   instance, events, deadline=timeout,
   error_callback=self._neutron_failed_callback):

  $ nova start test-boot
  Request to start server test-boot has been accepted.
  $ nova list
  
+--------------------------------------+-----------+--------+------------+-------------+----------+
| ID                                   | Name      | Status | Task State | Power State | Networks |
  

[Yahoo-eng-team] [Bug 1660484] Re: nova-status upgrade check fails if there are no computes reporting into placement

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/427499
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=a954bab009c95b48565d06e2d77afb7d7cb0e080
Submitter: Jenkins
Branch: master

commit a954bab009c95b48565d06e2d77afb7d7cb0e080
Author: Matt Riedemann 
Date:   Tue Jan 31 17:59:53 2017 -0500

nova-status: relax the resource providers check

As of 4660333d0d97d8e00cf290ea1d4ed932f5edc1dc the filter
scheduler will fallback to not using the placement service
if the minimum nova-compute service version in the deployment
is not new enough to ensure the computes are reporting into
the placement service.

This means we need to relax our check for when there are no
resource providers but there are compute nodes, since the filter
scheduler will not fail on that in Ocata. We intend on making
that a hard failure in Pike, at which time we'll need to adjust
nova-status again.

Change-Id: I1e4dae17cf9d1336bf0ca72948b135b02434ba15
Closes-Bug: #1660484
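The relaxed check reduces to something like this (an illustrative sketch, not the real nova-status code):

```python
def check_resource_providers(num_compute_nodes, num_providers):
    """Warn rather than fail when computes exist but none report into
    placement yet, since the Ocata fallback covers them."""
    if num_compute_nodes == 0:
        return "SUCCESS"  # fresh deployment; nothing should be reporting yet
    if num_providers == 0:
        # The filter scheduler falls back to the compute_nodes table in
        # Ocata; this is planned to become a hard failure in Pike.
        return "WARNING"
    return "SUCCESS"

print(check_resource_providers(1, 0))  # the grenade scenario above -> WARNING
```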


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660484

Title:
  nova-status upgrade check fails if there are no computes reporting
  into placement

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I've got a patch:

  https://review.openstack.org/#/c/426926/

  That's trying to integrate the 'nova-status upgrade check' command
  into a grenade run, and it's failing here:

  http://logs.openstack.org/26/426926/2/check/gate-grenade-dsvm-neutron-
  ubuntu-xenial/8b536f3/logs/grenade.sh.txt.gz#_2017-01-30_22_20_29_466

  It's failing because there is 1 compute node in the database (from the
  newton-side run of grenade) but no resource providers, which is
  because we're running the upgrade check after starting the placement
  service but before starting nova-compute with the upgraded ocata code,
  which is what gets that compute reporting into the placement service.

  I think this check is probably overly strict for Ocata given we will
  fallback to the compute_nodes table in the filter scheduler if not all
  computes are upgraded to ocata yet per this change:

  https://review.openstack.org/#/c/417961/

  In Pike, the filter scheduler will not fallback to the compute_nodes
  table and will produce a NoValidHost error if there are no compute
  resource providers available. At that point, we could make the upgrade
  check more strict, but for now we probably just want this to be a
  warning or info level result for nova-status.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1660484/+subscriptions



[Yahoo-eng-team] [Bug 1616442] Re: SRIOV agent error when VM booted with direct-physical port

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/377781
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=1bcdc299ba8ffbf778fb1442cd8f9da59903ffdc
Submitter: Jenkins
Branch: master

commit 1bcdc299ba8ffbf778fb1442cd8f9da59903ffdc
Author: Manjunath Patil 
Date:   Tue Sep 27 20:12:39 2016 +0530

Allow the other nic to allocate VMs post PCI-PT VM creation.

Let's say a compute node has two network adapters
(em1 and em2) with their corresponding VFs configured.
If you start a VM with a direct-physical binding
it will take one of these NICs.

At that moment, SRIOV agent starts to show
ERROR messages including the "device dictionary"
completely empty.

In consequence, you cannot allocate VMs with VFs
even though there is still another NIC available.

Change-Id: I8bf0dd41f900b69e32fcd416690c089dde7989b9
Closes-Bug: #1616442


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1616442

Title:
  SRIOV agent error when VM booted with direct-physical port

Status in neutron:
  Fix Released

Bug description:
  When assigning a neutron port to a PF (neutron port type: direct-physical),
  the VM is booted and active, but there are errors in the SR-IOV agent log;
  attached is a file with the errors.

  Version-Release number of selected component (if applicable):
  RHOS-10 
  [root@controller1 ~(keystone_admin)]# rpm -qa |grep neutron 
  python-neutron-lib-0.3.0-0.20160803002107.405f896.el7ost.noarch
  openstack-neutron-9.0.0-0.20160817153328.b9169e3.el7ost.noarch
  puppet-neutron-9.1.0-0.20160813031056.7cf5e07.el7ost.noarch
  python-neutron-9.0.0-0.20160817153328.b9169e3.el7ost.noarch
  openstack-neutron-lbaas-9.0.0-0.20160816191643.4e7301e.el7ost.noarch
  python-neutron-fwaas-9.0.0-0.20160817171450.e1ac68f.el7ost.noarch
  python-neutron-lbaas-9.0.0-0.20160816191643.4e7301e.el7ost.noarch
  openstack-neutron-ml2-9.0.0-0.20160817153328.b9169e3.el7ost.noarch
  openstack-neutron-metering-agent-9.0.0-0.20160817153328.b9169e3.el7ost.noarch
  openstack-neutron-openvswitch-9.0.0-0.20160817153328.b9169e3.el7ost.noarch
  python-neutronclient-5.0.0-0.20160812094704.ec20f7f.el7ost.noarch
  openstack-neutron-common-9.0.0-0.20160817153328.b9169e3.el7ost.noarch
  openstack-neutron-fwaas-9.0.0-0.20160817171450.e1ac68f.el7ost.noarch
  [root@controller1 ~(keystone_admin)]# rpm -qa |grep nova
  python-novaclient-5.0.1-0.20160724130722.6b11a1c.el7ost.noarch
  openstack-nova-api-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
  puppet-nova-9.1.0-0.20160813014843.b94f0a0.el7ost.noarch
  openstack-nova-common-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
  openstack-nova-novncproxy-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
  openstack-nova-conductor-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
  python-nova-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
  openstack-nova-scheduler-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
  openstack-nova-cert-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
  openstack-nova-console-14.0.0-0.20160817225441.04cef3b.el7ost.noarch

  How reproducible:

  
  Steps to Reproduce:
  1. Deploy an SR-IOV setup and set up PF functionality; you can use this guide:
  https://docs.google.com/document/d/1qQbJlLI1hSlE4uwKpmVd0BoGSDBd8Z0lTzx5itQ6WL0/edit#
  2. Boot a VM and assign it to a PF.
  3. Check the SR-IOV agent log on the compute node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1616442/+subscriptions



[Yahoo-eng-team] [Bug 1659266] Re: Disk allocation for instance is not good with swap

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/428352
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=c2e1133be15254747c207cce7805bf56f3cca6fa
Submitter: Jenkins
Branch: master

commit c2e1133be15254747c207cce7805bf56f3cca6fa
Author: John Garbutt 
Date:   Thu Feb 2 18:41:46 2017 +

Stop swap allocations being wrong due to MB vs GB

Swap is in MB, but allocations for disk are in GB.

We totally should claim disk in GB; for now, let's just round up the swap
allocation to the next GB. While this is wasteful, it's the only safe
answer to ensure you don't over-commit resources on the node.

Updated the test so the swap is 1023MB, after rounding up this should
claim the same 1GB extra space for swap.

Closes-Bug: #1659266

Change-Id: If50eab870b2c50f4055668143780574e1350a438
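The rounding the commit describes can be sketched as follows (the function name is illustrative, not nova's):

```python
import math

def disk_allocation_gb(root_gb, ephemeral_gb, swap_mb):
    """Total DISK_GB to claim; swap (in MB) is rounded up to whole GB."""
    return root_gb + ephemeral_gb + int(math.ceil(swap_mb / 1024.0))

# 1023 MB of swap still claims a full extra GB, as in the updated test.
print(disk_allocation_gb(10, 0, 1023))  # -> 11
```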


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1659266

Title:
  Disk allocation for instance is not good with swap

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When creating the allocation for the instance, we lookup the flavor to
  know the disk sizes for root, ephemeral and gb and we basically sum
  them.

  
https://github.com/openstack/nova/blob/7d04c78c1e2c26125eff5b1a8543b1ac5d027107/nova/scheduler/client/report.py#L129-L131

  Unfortunately, since both root and ephemeral size are expressed in GB
  while swap is expressed in MB, the sum is not good.

  See how the DiskFilter works for accounting resources :
  
https://github.com/openstack/nova/blob/3cafa7f5bd0775b8ba49080226c03f8a91468d7d/nova/scheduler/filters/disk_filter.py#L36-L38

  We should change the logic to ceil to the next GB if
  ((root + ephemeral) * 1024 + swap) modulo 1024 is not 0, since we want
  to count allocations as the Inventory only counts by GB.

  That's suboptimal and a long-term solution would be to report
  inventories in bytes (as the smallest attribute) but that's a big
  change so probably requiring a BP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1659266/+subscriptions



[Yahoo-eng-team] [Bug 1659053] Re: use uuids with pycadf

2017-02-02 Thread Gage Hugo
** Also affects: pycadf
   Importance: Undecided
   Status: New

** Changed in: pycadf
   Status: New => In Progress

** Changed in: pycadf
 Assignee: (unassigned) => Gage Hugo (gagehugo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1659053

Title:
  use uuids with pycadf

Status in OpenStack Identity (keystone):
  In Progress
Status in pycadf:
  In Progress

Bug description:
  pycadf warnings are plentiful in keystone tests: UserWarning: Invalid uuid.
  To ensure interoperability, identifiers should be a valid uuid.
warnings.warn('Invalid uuid. To ensure interoperability, identifiers'

  
  Be sure keystone is providing uuids appropriately.
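  A minimal sketch of the fix on the caller side: hand pycadf identifiers
  that actually parse as UUIDs, e.g. generated with the standard library:

```python
import uuid

# pycadf warns when an identifier is not a valid uuid; generating one
# with the standard library keeps identifiers interoperable.
initiator_id = uuid.uuid4().hex
print(len(initiator_id))  # -> 32
uuid.UUID(initiator_id)   # parses cleanly, so no warning would fire
```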

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1659053/+subscriptions



[Yahoo-eng-team] [Bug 1660436] Re: Federated users cannot log into horizon

2017-02-02 Thread Steve Martinelli
** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1660436

Title:
  Federated users cannot log into horizon

Status in django-openstack-auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Identity (keystone):
  Invalid
Status in keystoneauth:
  Invalid
Status in python-novaclient:
  Invalid

Bug description:
  As of this bugfix in novaclient, federated users cannot log in to
  horizon:

  https://bugs.launchpad.net/python-novaclient/+bug/1658963

  Before this bugfix, horizon would attempt to list nova extensions
  using what was apparently the wrong class, and the error would be
  caught and quietly logged as such:

   Call to list supported extensions failed. This is likely due to a
  problem communicating with the Nova endpoint. Host Aggregates panel
  will not be displayed.

  The dashboard would display:

   Error: Unable to retrieve usage information.

  but at least the user was logged into the dashboard.

  The error that was being hidden was:

   __init__() takes at least 3 arguments (2 given)

  Now that that is fixed, horizon makes it further but fails to
  authenticate the federated user when attempting this request, giving
  the traceback here:

   http://paste.openstack.org/show/596929/

  The problem lies somewhere between keystoneauth, novaclient, and
  horizon.

  keystoneauth:

  When keystoneauth does version discovery, it first tries the Identity
  v2.0 API, and finding no domain information in the request, returns
  that API as the Identity endpoint. Modifying keystoneauth to not stop
  there and continue trying the v3 API, even though it lacks domain
  information, allows the user to successfully log in:

   http://paste.openstack.org/show/596930/

  I'm not really sure why that works or what would break with that
  change.

  novaclient:

  When creating a Token plugin the novaclient is aware of a project's
  domain but not of a domain on its own or of a default domain:

   http://git.openstack.org/cgit/openstack/python-
  novaclient/tree/novaclient/client.py#n137

  keystoneauth relies on having default_domain_(id|name),
  domain_(id|name), or project_domain(id|name) set, and novaclient isn't
  receiving information about the project_domain(id|name) and isn't
  capable of sending any other domain information when using the Token
  plugin, which it must for a federated user.

  horizon:

  For federated users novaclient is only set up to pass along domain
  info for the project, which horizon doesn't store in its user object:

  
http://git.openstack.org/cgit/openstack/django_openstack_auth/tree/openstack_auth/user.py#n202

  However things seem to just work if we fudge the user_domain_id as the
  project_domain_id, though that is obviously not a good solution:

   http://paste.openstack.org/show/596933/
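  The fudge described above can be sketched as follows;
  get_project_domain_id and FakeUser are illustrative names, not
  django_openstack_auth's actual code:

```python
# Hypothetical sketch of the workaround: for a federated user whose
# stored token carries no project domain, fall back to the user's own
# domain. Names are illustrative, not django_openstack_auth's real code.
def get_project_domain_id(user):
    project_domain_id = getattr(user, "project_domain_id", None)
    if project_domain_id:
        return project_domain_id
    # Fudge: reuse the user's domain, as the paste above does.
    return getattr(user, "user_domain_id", None)

class FakeUser:
    project_domain_id = None   # horizon doesn't store it for federation
    user_domain_id = "default"

assert get_project_domain_id(FakeUser()) == "default"
```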

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1660436/+subscriptions



[Yahoo-eng-team] [Bug 1661454] Re: Inadequate Japanese translation for "Browse" on App Catalog tab

2017-02-02 Thread Ian Y. Choi
** Tags added: i18n

** Also affects: openstack-i18n
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1661454

Title:
  Inadequate Japanese translation for "Browse" on App Catalog tab

Status in OpenStack Dashboard (Horizon):
  New
Status in openstack i18n:
  New

Bug description:
  Japanese translation for "Browse" is not adequate in this context. It
  is currently translated as 探索, but it should be 参照 or ブラウズ; however, I
  am unable to locate the string in Zanata.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1661454/+subscriptions



[Yahoo-eng-team] [Bug 1661456] [NEW] horizon fails authentication with keystone

2017-02-02 Thread Jeremy Nauck
Public bug reported:

This report is almost identical to
https://bugs.launchpad.net/horizon/+bug/1637072

Chef all-in-one installation, following the directions here:
https://github.com/openstack/openstack-chef-repo

Command line access works fine, but access from Horizon to many
functions renders an error similar to "Error: Unable to retrieve usage
information." in System -> Overview.  Similarly trying to display
flavors yields the error "Error: Unable to retrieve flavor list."

Attempting to create a Flavor will cause an exception with the message
"Danger: There was an error submitting the form. Please try again."

This installation is running Newton on Ubuntu 16.04 with Horizon 10.0.

The Horizon operations are frequently logging errors like these when
trying to create a new Flavor.

DEBUG:keystoneauth.session:GET call to identity for 
http://127.0.0.1:5000/v3/users/e87dfdcd37204e60b9cc5e2b4f2cecee/projects used 
request id req-929b0e41-1ec0-4435-b8e9-de0ea821668d
DEBUG:keystoneauth.identity.v3.base:Making authentication request to 
http://127.0.0.1:5000/v3/auth/token
DEBUG:keystoneauth.session:Request returned failure status: 400
Internal Server Error: /admin/flavors/create/

[Fri Feb 03 00:55:32.998833 2017] [wsgi:error] [pid 4156:tid 140316095899392]   
File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 
655, in request
[Fri Feb 03 00:55:32.998835 2017] [wsgi:error] [pid 4156:tid 140316095899392]   
  raise exceptions.from_response(resp, method, url)
[Fri Feb 03 00:55:32.998837 2017] [wsgi:error] [pid 4156:tid 140316095899392] 
BadRequest: Expecting to find domain in user - the server could not comply with 
the request since it is either malformed or otherwise incorrect. The client is 
assumed to be in error. (HTTP 400) (Request-ID: 
req-8e28b2a0-25c4-466c-b881-47a0cd8224a1)


After following along in the Bug report 1637072 I attempted to replace 
python-novaclient with 7.1.0, but the error still persisted.  I'm currently 
running python-novaclient 6.0.0 and it also seems to fail with similar errors.
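For context, the 400 above comes from Keystone rejecting a v3 auth request
whose nested user object carries no domain. A minimal sketch of the expected
request-body shape (all values are placeholders):

```python
# Minimal shape of a Keystone v3 password auth request body. The
# "Expecting to find domain in user" 400 above means the nested user
# object lacked a "domain" while the user was identified by name.
auth_body = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "admin",
                    "domain": {"id": "default"},  # required with "name"
                    "password": "secret",
                }
            },
        }
    }
}

user = auth_body["auth"]["identity"]["password"]["user"]
assert "domain" in user  # omitting this triggers the HTTP 400 above
```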

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "openstack-dashboard-error.log.gz"
   
https://bugs.launchpad.net/bugs/1661456/+attachment/4812304/+files/openstack-dashboard-error.log.gz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1661456

Title:
  horizon fails authentication with keystone

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  This report is almost identical to
  https://bugs.launchpad.net/horizon/+bug/1637072

  Chef all-in-one installation, following the directions here:
  https://github.com/openstack/openstack-chef-repo

  Command line access works fine, but access from Horizon to many
  functions renders an error similar to "Error: Unable to retrieve usage
  information." in System -> Overview.  Similarly trying to display
  flavors yields the error "Error: Unable to retrieve flavor list."

  Attempting to create a Flavor will cause an exception with the message
  "Danger: There was an error submitting the form. Please try again."

  This installation is running Newton on Ubuntu 16.04 with Horizon 10.0.

  The Horizon operations are frequently logging errors like these when
  trying to create a new Flavor.

  DEBUG:keystoneauth.session:GET call to identity for 
http://127.0.0.1:5000/v3/users/e87dfdcd37204e60b9cc5e2b4f2cecee/projects used 
request id req-929b0e41-1ec0-4435-b8e9-de0ea821668d
  DEBUG:keystoneauth.identity.v3.base:Making authentication request to 
http://127.0.0.1:5000/v3/auth/token
  DEBUG:keystoneauth.session:Request returned failure status: 400
  Internal Server Error: /admin/flavors/create/
  
  [Fri Feb 03 00:55:32.998833 2017] [wsgi:error] [pid 4156:tid 140316095899392] 
  File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 
655, in request
  [Fri Feb 03 00:55:32.998835 2017] [wsgi:error] [pid 4156:tid 140316095899392] 
raise exceptions.from_response(resp, method, url)
  [Fri Feb 03 00:55:32.998837 2017] [wsgi:error] [pid 4156:tid 140316095899392] 
BadRequest: Expecting to find domain in user - the server could not comply with 
the request since it is either malformed or otherwise incorrect. The client is 
assumed to be in error. (HTTP 400) (Request-ID: 
req-8e28b2a0-25c4-466c-b881-47a0cd8224a1)

  
  After following along in the Bug report 1637072 I attempted to replace 
python-novaclient with 7.1.0, but the error still persisted.  I'm currently 
running python-novaclient 6.0.0 and it also seems to fail with similar errors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1661456/+subscriptions


[Yahoo-eng-team] [Bug 1661453] [NEW] Unlocalized strings on App Catalog tab

2017-02-02 Thread Yuko Katabami
Public bug reported:

On the App Catalog tab, the labels "App Catalog" and "Browse Local" are
not localized. I am not able to locate those strings in Zanata.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "UnlocalizedString_AppCatalog.png"
   
https://bugs.launchpad.net/bugs/1661453/+attachment/4812290/+files/UnlocalizedString_AppCatalog.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1661453

Title:
  Unlocalized strings on App Catalog tab

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On the App Catalog tab, the labels "App Catalog" and "Browse Local"
  are not localized. I am not able to locate those strings in Zanata.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1661453/+subscriptions



[Yahoo-eng-team] [Bug 1661452] [NEW] Unable to locate en-US in the Languages list

2017-02-02 Thread Yuko Katabami
Public bug reported:

User Settings => Languages list

For some reason, I am not able to find en-US in the Languages dropdown
menu.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "LangList_en-US_Missing.png"
   
https://bugs.launchpad.net/bugs/1661452/+attachment/4812289/+files/LangList_en-US_Missing.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1661452

Title:
  Unable to locate en-US in the Languages list

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  User Settings => Languages list

  For some reason, I am not able to find en-US in the Languages
  dropdown menu.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1661452/+subscriptions



[Yahoo-eng-team] [Bug 1661454] [NEW] Inadequate Japanese translation for "Browse" on App Catalog tab

2017-02-02 Thread Yuko Katabami
Public bug reported:

Japanese translation for "Browse" is not adequate in this context. It is
currently translated as 探索, but it should be 参照 or ブラウズ; however, I am
unable to locate the string in Zanata.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "AppCatalgue_Browse.png"
   
https://bugs.launchpad.net/bugs/1661454/+attachment/4812291/+files/AppCatalgue_Browse.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1661454

Title:
  Inadequate Japanese translation for "Browse" on App Catalog tab

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Japanese translation for "Browse" is not adequate in this context. It
  is currently translated as 探索, but it should be 参照 or ブラウズ; however, I
  am unable to locate the string in Zanata.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1661454/+subscriptions



[Yahoo-eng-team] [Bug 1661396] Re: undercloud install fails (nova-db-sync timeout) on VM on an SATA disk hypervisor

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/428435
Committed: 
https://git.openstack.org/cgit/openstack/puppet-tripleo/commit/?id=3f7e74ab24bb43f9ad7e24e0efd4206ac6a3dd4e
Submitter: Jenkins
Branch:master

commit 3f7e74ab24bb43f9ad7e24e0efd4206ac6a3dd4e
Author: Alex Schultz 
Date:   Thu Feb 2 21:29:32 2017 +

Revert "set innodb_file_per_table to ON for MySQL / Galera"

This reverts commit 621ea892a299d2029348db2b56fea1338bd41c48.

We're getting performance problems on SATA disks.

Change-Id: I30312fd5ca3405694d57e6a4ff98b490de388b92
Closes-Bug: #1661396
Related-Bug: #1660722


** Changed in: tripleo
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661396

Title:
  undercloud install fails (nova-db-sync timeout) on VM on an SATA disk
  hypervisor

Status in OpenStack Compute (nova):
  New
Status in tripleo:
  Fix Released

Bug description:
  2017-02-01 15:24:49,084 INFO: Error: Command exceeded timeout
  2017-02-01 15:24:49,084 INFO: Error: 
/Stage[main]/Nova::Db::Sync/Exec[nova-db-sync]/returns: change from notrun to 0 
failed: Command exceeded timeout

  The nova-db-sync command is exceeding 300 seconds when installing the
  undercloud on a VM that is using SATA based storage. This seems to be
  related to the switch to innodb_file_per_table to ON which has doubled
  the amount of time the db sync takes on this class of hardware.  To
  unblock folks doing Ocata testing, we need to skip doing this in Ocata
  and will need to revisit enabling it in Pike.

  See Bug 1660722 for details as to why we enabled this.
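  The setting being reverted is a standard MySQL/InnoDB option; an
  illustrative my.cnf fragment (section layout is generic, not
  TripleO's actual template) would look like:

```ini
# Illustrative fragment: the option whose explicit ON value doubled
# nova-db-sync time on SATA disks; the revert above drops that setting.
[mysqld]
innodb_file_per_table = OFF
```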

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661396/+subscriptions



[Yahoo-eng-team] [Bug 1659965] Re: test_get_root_helper_child_pid_returns_first_child gate failure

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/427141
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=e5320e764625bfc5b6723c52ec202ebdb926d32f
Submitter: Jenkins
Branch:master

commit e5320e764625bfc5b6723c52ec202ebdb926d32f
Author: Jakub Libosvar 
Date:   Tue Jan 31 07:32:21 2017 -0500

functional: Check for processes only if there are any

In one of tests, pstree received SIGSEGV signal and was terminated. This
means the output is empty and we check on empty string. This patch makes
the test less prone to the errors coming from executed binaries.

Change-Id: I22a7f2fea56c9d97a1d765f0f25e9f28c7942b55
Closes-bug: 1659965
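The hardening described in the commit message can be sketched like this
(a toy parser, not neutron's actual implementation):

```python
import re

# Sketch: only parse for child PIDs when pstree actually produced
# output; an empty string (e.g. pstree crashed with SIGSEGV) yields
# None instead of a bogus check on "".
def first_child_pid(pstree_output):
    if not pstree_output:
        return None
    # Toy parse: "bash(100)---sleep(200)" -> parent 100, child 200.
    pids = re.findall(r"\((\d+)\)", pstree_output)
    return pids[1] if len(pids) > 1 else None

assert first_child_pid("") is None
assert first_child_pid("bash(100)---sleep(200)") == "200"
```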


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1659965

Title:
  test_get_root_helper_child_pid_returns_first_child gate failure

Status in neutron:
  Fix Released

Bug description:
  http://logs.openstack.org/01/410501/5/check/gate-neutron-dsvm-
  functional-ubuntu-xenial/8a82bb0/testr_results.html.gz

  ft1.1: 
neutron.tests.functional.agent.linux.test_utils.TestGetRootHelperChildPid.test_get_root_helper_child_pid_returns_first_child_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  DEBUG [stevedore.extension] found extension EntryPoint.parse('zmq = 
oslo_messaging._drivers.impl_zmq:ZmqDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('amqp = 
oslo_messaging._drivers.impl_amqp1:ProtonDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('pika = 
oslo_messaging._drivers.impl_pika:PikaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('fake = 
oslo_messaging._drivers.impl_fake:FakeDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kombu = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kafka = 
oslo_messaging._drivers.impl_kafka:KafkaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('rabbit = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('zmq = 
oslo_messaging._drivers.impl_zmq:ZmqDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('amqp = 
oslo_messaging._drivers.impl_amqp1:ProtonDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('pika = 
oslo_messaging._drivers.impl_pika:PikaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('fake = 
oslo_messaging._drivers.impl_fake:FakeDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kombu = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kafka = 
oslo_messaging._drivers.impl_kafka:KafkaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('rabbit = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('zmq = 
oslo_messaging._drivers.impl_zmq:ZmqDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('amqp = 
oslo_messaging._drivers.impl_amqp1:ProtonDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('pika = 
oslo_messaging._drivers.impl_pika:PikaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('fake = 
oslo_messaging._drivers.impl_fake:FakeDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kombu = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kafka = 
oslo_messaging._drivers.impl_kafka:KafkaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('rabbit = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('zmq = 
oslo_messaging._drivers.impl_zmq:ZmqDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('amqp = 
oslo_messaging._drivers.impl_amqp1:ProtonDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('pika = 
oslo_messaging._drivers.impl_pika:PikaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('fake = 
oslo_messaging._drivers.impl_fake:FakeDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kombu = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kafka = 
oslo_messaging._drivers.impl_kafka:KafkaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('rabbit = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [oslo_policy._cache_handler] Reloading cached file 
/opt/stack/new/neutron/neutron/tests/etc/policy.json
 

[Yahoo-eng-team] [Bug 1564110] Re: OpenStack should support MySQL Cluster (NDB)

2017-02-02 Thread Octave Orgeron
** Changed in: keystone
 Assignee: (unassigned) => Octave Orgeron (octave-orgeron)

** Changed in: keystone
   Status: Opinion => In Progress

** Changed in: heat
 Assignee: (unassigned) => Octave Orgeron (octave-orgeron)

** Changed in: neutron
 Assignee: (unassigned) => Octave Orgeron (octave-orgeron)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1564110

Title:
  OpenStack should support MySQL Cluster (NDB)

Status in Ceilometer:
  Won't Fix
Status in Cinder:
  Opinion
Status in Glance:
  New
Status in heat:
  New
Status in Ironic:
  Confirmed
Status in OpenStack Identity (keystone):
  In Progress
Status in neutron:
  Incomplete
Status in OpenStack Compute (nova):
  Confirmed
Status in oslo.db:
  In Progress

Bug description:
  oslo.db assumes that a MySQL database can only have a storage engine
  of InnoDB. This causes complications for OpenStack to support other
  MySQL storage engines, such as MySQL Cluster (NDB). Oslo.db should
  have a configuration string (i.e. mysql_storage_engine) in the oslo_db
  database group that can be used by SQLalchemy, Alembic, and OpenStack
  to implement the desired support and behavior for alternative MySQL
  storage engines.

  I do have a change-set patch for options.py in oslo_db that will add
  this functionality. I'll post once I'm added to the CLA for OpenStack.
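  The proposed option might appear in a service's configuration like the
  hypothetical fragment below (the option name comes from the bug
  description above; the section contents and values are illustrative):

```ini
# Hypothetical [database] section: mysql_storage_engine would let
# deployers select NDBCLUSTER instead of the assumed InnoDB.
[database]
connection = mysql+pymysql://keystone:secret@127.0.0.1/keystone
mysql_storage_engine = NDBCLUSTER
```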

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1564110/+subscriptions



[Yahoo-eng-team] [Bug 1645655] Re: ovs firewall cannot handle server reboot

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/404564
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=829b39b1ebdb96d87323852e2e099a955e402b63
Submitter: Jenkins
Branch:master

commit 829b39b1ebdb96d87323852e2e099a955e402b63
Author: IWAMOTO Toshihiro 
Date:   Wed Nov 30 15:26:39 2016 +0900

ovsfw: Refresh OFPort when necessary

Events like server reboots change ofport numbers.  In such cases,
cached ofports need to be refreshed.

Change-Id: If4acf61736b8f1e9707efc409509e1f557d5f886
Closes-Bug: #1645655
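The idea in the commit message can be sketched as a cache that refreshes
an entry whenever the ofport reported by OVS diverges from the cached
value (illustrative names, not the ovsfw driver's real code):

```python
# Sketch: a server reboot can renumber a port's ofport, so the cached
# value must be refreshed when the currently reported one differs.
class PortCache:
    def __init__(self):
        self._ofports = {}  # port_id -> ofport number

    def get_ofport(self, port_id, current_ofport):
        cached = self._ofports.get(port_id)
        if cached != current_ofport:  # e.g. after a server reboot
            self._ofports[port_id] = current_ofport
        return self._ofports[port_id]

cache = PortCache()
assert cache.get_ofport("p1", 5) == 5
assert cache.get_ofport("p1", 7) == 7  # refreshed after renumbering
```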


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1645655

Title:
  ovs firewall cannot handle server reboot

Status in neutron:
  Fix Released

Bug description:
  See tempest test results for 
  https://review.openstack.org/#/c/399400/

  
tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario
 fails at ssh connection after server soft reboot.
  A few other tests seem to have some issues, too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1645655/+subscriptions



[Yahoo-eng-team] [Bug 1548511] Re: Shared pools support

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/428439
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=8e312d6da19dc2b4b98137cc0656cd2a7701cfbc
Submitter: Jenkins
Branch:master

commit 8e312d6da19dc2b4b98137cc0656cd2a7701cfbc
Author: Boden R 
Date:   Thu Feb 2 14:42:29 2017 -0700

api-ref: add pools to loadbalancer response

As per associated defect, this patch adds pools to
loadbalancer responses in the api-ref.

Change-Id: I99be656f68b130765f759359e1ec179e0de49457
Closes-Bug: #1548511


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548511

Title:
  Shared pools support

Status in neutron:
  Fix Released
Status in openstack-api-site:
  Invalid

Bug description:
  https://review.openstack.org/218560
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.

  commit 4f3cf154829fcd69ecf3fa7f4e49f82d0104a4f0
  Author: Stephen Balukoff 
  Date:   Sat Aug 29 02:51:09 2015 -0700

  Shared pools support
  
  In preparation for L7 switching functionality, we need to
  reduce the rigidity of our model somewhat and allow pools
  to exist independent of listeners and be shared by 0 or
  more listeners. With this patch, pools are now associated
  with loadbalancers directly, and there is now a N:M
  relationship between listeners and pools.
  
  This patch does alter the Neutron LBaaS v2 API slightly,
  but all these changes are backward compatible. Nevertheless,
  since Neutron core dev team has asked that any API changes
  take place in an extension, that is what is being done in
  this patch.
  
  This patch also updates the reference namespace driver to
  render haproxy config templates correctly given the pool
  sharing functionality added with the patch.
  
  Finally, the nature of shared pools means that the usual
  workflow for tenants can be (but doesn't have to be)
  altered such that pools can be created before listeners
  independently, and assigned to listeners as a later step.
  
  DocImpact
  APIImpact
  Partially-Implements: blueprint lbaas-l7-rules
  Change-Id: Ia0974b01f1f02771dda545c4cfb5ff428a9327b4

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548511/+subscriptions



[Yahoo-eng-team] [Bug 1553653] Fix merged to neutron-lib (master)

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/427766
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=fd6e2c88b42091e9152911cadd35d39d12b1e78f
Submitter: Jenkins
Branch:master

commit fd6e2c88b42091e9152911cadd35d39d12b1e78f
Author: Boden R 
Date:   Wed Feb 1 08:20:40 2017 -0700

api-ref: fix description for floating IPs

Adds/corrects the description request/response parameter as
applicable to core resources (see defect).

Change-Id: I1cbbfeafcfd35c8f7544af966a03e605619c0579
Partial-Bug: #1553653


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1553653

Title:
  neutron-lib api-ref: Add a description field to all standard resources

Status in neutron:
  Fix Released
Status in openstack-api-site:
  Invalid

Bug description:
  https://review.openstack.org/269887
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 5dacbba701037200f9b0ae40c34981ecd941b41c
  Author: Kevin Benton 
  Date:   Wed Feb 10 17:00:21 2016 -0800

  Add a description field to all standard resources
  
  In order to give users and operators more flexibility in
  annotating the purpose of various Neutron resources, this patch
  adds a description field limited to 255 chars to all of the
  Neutron resources that reference the standard attribute table.
  The resource that reference the table are the following:
  security_group_rules, security_groups, ports, subnets,
  networks, routers, floatingips, subnetpools
  
  This patch adds a description table related to standard attributes
  and migrates over the existing security group description to the new
  table as well.
  
  Co-Authored-By: James Dempsey 
  
  APIImpact
  DocImpact: Adds a description field to all resources outline in
 commit message.
  Closes-Bug: #1483480
  Change-Id: I6e1ef53d7aae7d04a5485810cc1db0a8eb125953

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1553653/+subscriptions



[Yahoo-eng-team] [Bug 1656346] Re: The neutron-lib devref has a link to py-modindex that gives a 404 error

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/426368
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=47f482468bda42039f8b02931028429327cc9385
Submitter: Jenkins
Branch:master

commit 47f482468bda42039f8b02931028429327cc9385
Author: Boden R 
Date:   Fri Jan 27 12:24:39 2017 -0700

Remove devref modindex ref

Sphinx module index docs won't be generated unless a
module level docstring is included (ex [1]) with class refs.
As we don't include such module level refs in neutron-lib,
nothing is generated and thus no py-modindex.html
is created. This results in a dead link in our devref (see
bug report).

This change removes the modindex ref from our devref's
index to account for this fact. In the future if we wish to
add module documentation to support generation of
modindex we can add the ref back into our index.

[1] https://github.com/openstack/neutron/blob/
master/neutron/neutron_plugin_base_v2.py#L19

Change-Id: I4dbf473a9dc6540ef7cb16febd6703aa21f5a0b1
Closes-Bug: #1656346


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656346

Title:
  The neutron-lib devref has a link to py-modindex that gives a 404
  error

Status in neutron:
  Fix Released

Bug description:
  On this page: http://docs.openstack.org/developer/neutron-
  lib/devref/index.html there's a link to py-modindex but it doesn't
  exist.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1656346/+subscriptions



[Yahoo-eng-team] [Bug 1661396] Re: undercloud install fails (nova-db-sync timeout) on VM on an SATA disk hypervisor

2017-02-02 Thread Emilien Macchi
Mike: done.

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661396

Title:
  undercloud install fails (nova-db-sync timeout) on VM on an SATA disk
  hypervisor

Status in OpenStack Compute (nova):
  New
Status in tripleo:
  In Progress

Bug description:
  2017-02-01 15:24:49,084 INFO: Error: Command exceeded timeout
  2017-02-01 15:24:49,084 INFO: Error: 
/Stage[main]/Nova::Db::Sync/Exec[nova-db-sync]/returns: change from notrun to 0 
failed: Command exceeded timeout

  The nova-db-sync command is exceeding 300 seconds when installing the
  undercloud on a VM that is using SATA-based storage. This seems to be
  related to switching innodb_file_per_table to ON, which has doubled
  the amount of time the db sync takes on this class of hardware.  To
  unblock folks doing Ocata testing, we need to skip doing this in Ocata
  and will need to revisit enabling it in Pike.

  See Bug 1660722 for details as to why we enabled this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661396/+subscriptions



[Yahoo-eng-team] [Bug 1661417] Re: Several pages in Horizon are very slow (Floating IPs, Security Groups, etc.)

2017-02-02 Thread Lingxian Kong
Marking this bug invalid; the problems were split into smaller bugs.

** Changed in: horizon
   Status: In Progress => Invalid

** Changed in: horizon
 Assignee: Lingxian Kong (kong) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1661417

Title:
  Several pages in Horizon are very slow (Floating IPs, Security Groups,
  etc.)

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  There are several places that could be improved:

  - Use 'detailed=False' when listing servers, because only instance ID and 
name are used in some places (e.g. when loading 'Floating IPs' panel)
  - Disable quota check (make it configurable) in button 'allowed' function.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1661417/+subscriptions



[Yahoo-eng-team] [Bug 1661420] [NEW] neutron-fwaas tempest v2 job on stable/newton fails with "extension could not be found"

2017-02-02 Thread Nate Johnston
Public bug reported:

The gate-neutron-fwaas-v2-dsvm-tempest is failing for stable/newton jobs
in neutron-fwaas.  The errors look like:

ft1.3: 
neutron_fwaas.tests.tempest_plugin.tests.api.test_fwaas_extensions.FWaaSExtensionTestJSON.test_create_update_delete_firewall_rule[id-563564f7-7077-4f5e-8cdc-51f37ae5a2b9]_StringException:
 Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
2017-02-02 21:56:08,309 22180 INFO [tempest.lib.common.rest_client] Request 
(FWaaSExtensionTestJSON:setUp): 404 POST 
http://198.61.190.237:9696/v2.0/fw/firewall_rules 0.025s
2017-02-02 21:56:08,310 22180 DEBUG[tempest.lib.common.rest_client] Request 
- Headers: {'X-Auth-Token': '', 'Accept': 'application/json', 
'Content-Type': 'application/json'}
Body: {"firewall_rule": {"name": "fw-rule-1600127867", "protocol": 
"tcp", "action": "allow"}}
Response - Headers: {u'x-openstack-request-id': 
'req-267c7949-c777-4f3f-a63e-55ecf98338aa', u'content-type': 'application/json; 
charset=UTF-8', u'date': 'Thu, 02 Feb 2017 21:56:08 GMT', 'content-location': 
'http://198.61.190.237:9696/v2.0/fw/firewall_rules', 'status': '404', 
u'connection': 'close', u'content-length': '112'}
Body: {"message": "The resource could not be found.\n\n\n", 
"code": "404 Not Found", "title": "Not Found"}
}}}

Traceback (most recent call last):
  File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/api/test_fwaas_extensions.py",
 line 65, in setUp
protocol="tcp")
  File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/fwaas_client.py",
 line 65, in create_firewall_rule
**kwargs)
  File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/services/client.py",
 line 62, in create_firewall_rule
return self.create_resource(uri, post_data)
  File "tempest/lib/services/network/base.py", line 60, in create_resource
resp, body = self.post(req_uri, req_post_data)
  File "tempest/lib/common/rest_client.py", line 276, in post
return self.request('POST', url, extra_headers, headers, body, chunked)
  File "tempest/lib/common/rest_client.py", line 664, in request
self._error_checker(resp, resp_body)
  File "tempest/lib/common/rest_client.py", line 761, in _error_checker
raise exceptions.NotFound(resp_body, resp=resp)
tempest.lib.exceptions.NotFound: Object not found
Details: {u'title': u'Not Found', u'code': u'404 Not Found', u'message': u'The 
resource could not be found.\n\n\n'}

Example: http://logs.openstack.org/08/427508/1/check/gate-neutron-
fwaas-v2-dsvm-tempest/7ca2bc0/testr_results.html.gz

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas gate-failure newton-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1661420

Title:
  neutron-fwaas tempest v2 job on stable/newton fails with "extension
  could not be found"

Status in neutron:
  New

Bug description:
  The gate-neutron-fwaas-v2-dsvm-tempest is failing for stable/newton
  jobs in neutron-fwaas.  The errors look like:

  ft1.3: 
neutron_fwaas.tests.tempest_plugin.tests.api.test_fwaas_extensions.FWaaSExtensionTestJSON.test_create_update_delete_firewall_rule[id-563564f7-7077-4f5e-8cdc-51f37ae5a2b9]_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2017-02-02 21:56:08,309 22180 INFO [tempest.lib.common.rest_client] 
Request (FWaaSExtensionTestJSON:setUp): 404 POST 
http://198.61.190.237:9696/v2.0/fw/firewall_rules 0.025s
  2017-02-02 21:56:08,310 22180 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'X-Auth-Token': '', 'Accept': 'application/json', 
'Content-Type': 'application/json'}
  Body: {"firewall_rule": {"name": "fw-rule-1600127867", "protocol": 
"tcp", "action": "allow"}}
  Response - Headers: {u'x-openstack-request-id': 
'req-267c7949-c777-4f3f-a63e-55ecf98338aa', u'content-type': 'application/json; 
charset=UTF-8', u'date': 'Thu, 02 Feb 2017 21:56:08 GMT', 'content-location': 
'http://198.61.190.237:9696/v2.0/fw/firewall_rules', 'status': '404', 
u'connection': 'close', u'content-length': '112'}
  Body: {"message": "The resource could not be found.\n\n\n", "code": "404 Not Found", "title": "Not Found"}
  }}}

  Traceback (most recent call last):
File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/api/test_fwaas_extensions.py",
 line 65, in setUp
  protocol="tcp")
File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/fwaas_client.py",
 line 65, in create_firewall_rule
  **kwargs)
File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/services/client.py",
 line 62, in create_firewall_rule
  return self.create_resource(uri, post_data)
File "tempest/lib/services/network/base.py", line 60, in create_resource
  resp, body = self.post(req_uri, req_post_data)
File 

[Yahoo-eng-team] [Bug 1661422] [NEW] Sidebar disappears at exact 768px width

2017-02-02 Thread George Moon
Public bug reported:

The following behaviours occur when the dashboard is viewed at specific
widths:

- screen width <= 767px : the sidebar disappears and is replaced by a 
'hamburger' button in the top bar. Works properly.
- screen width >= 769px : the sidebar is visible. Works properly.
- screen width == 768px : the sidebar disappears, but the main content padding 
is not adjusted, and no 'hamburger' button is added to the menu.

The SCSS code needs to be fixed so that the sidebar remains visible at a
width of 768px, or so that the main content padding and hamburger menu
appear at that width.
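The gap can be expressed as a quick sanity check. The breakpoints below are taken from the behaviours reported above; the predicate functions are illustrative, not Horizon's actual SCSS media queries:

```python
# Illustrative predicates for the reported breakpoints (not Horizon's SCSS):
# the hamburger rule covers widths <= 767px, the sidebar rule covers
# widths >= 769px, leaving exactly 768px handled by neither.
def hamburger_shown(width_px: int) -> bool:
    return width_px <= 767

def sidebar_visible(width_px: int) -> bool:
    return width_px >= 769

# 768px falls through the gap: no sidebar and no hamburger button.
assert not hamburger_shown(768) and not sidebar_visible(768)
assert hamburger_shown(767) and sidebar_visible(769)
```

A fix only needs to move one boundary (e.g. cover `<= 768` or `>= 768`) so that the two ranges meet.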

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1661422

Title:
  Sidebar disappears at exact 768px width

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The following behaviours occur when the dashboard is viewed at
  specific widths:

  - screen width <= 767px : the sidebar disappears and is replaced by a 
'hamburger' button in the top bar. Works properly.
  - screen width >= 769px : the sidebar is visible. Works properly.
  - screen width == 768px : the sidebar disappears, but the main content 
padding is not adjusted, and no 'hamburger' button is added to the menu.

  The SCSS code needs to be fixed so that the sidebar remains visible at
  a width of 768px, or so that the main content padding and hamburger
  menu appear at that width.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1661422/+subscriptions



[Yahoo-eng-team] [Bug 1661423] [NEW] No need to get all the information for servers for some pages

2017-02-02 Thread Lingxian Kong
Public bug reported:

When loading panels like 'Floating IPs', 'Volumes', etc., Horizon will
get all the information for all servers from Nova, which takes a long
time, especially when there is a huge number of VMs in the cloud.

Since novaclient already supports 'detailed=False' when listing servers,
Horizon could also benefit from that.
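The saving can be sketched with illustrative payloads. These mimic the shape of a brief server entry (what novaclient's `detailed=False` listing returns) versus a detailed one; the field sets are approximate, not live API output:

```python
# Approximate shapes of a brief vs. detailed server entry (illustrative,
# not real Nova API output).
brief = {"id": "a1b2", "name": "vm-1", "links": []}
detailed = dict(brief, status="ACTIVE", flavor={}, addresses={},
                metadata={}, created="2017-02-02T00:00:00Z")

# Panels such as 'Floating IPs' only consume the instance ID and name,
# so the brief form is sufficient and avoids fetching per-server detail.
needed = {"id", "name"}
assert needed <= set(brief)
assert len(detailed) > len(brief)
```

In real code this corresponds to calling `nova.servers.list(detailed=False)` instead of the default detailed listing.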

** Affects: horizon
 Importance: Undecided
 Assignee: Lingxian Kong (kong)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1661423

Title:
  No need to get all the information for servers for some pages

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When loading panels like 'Floating IPs', 'Volumes', etc., Horizon will
  get all the information for all servers from Nova, which takes a long
  time, especially when there is a huge number of VMs in the cloud.

  Since novaclient already supports 'detailed=False' when listing
  servers, Horizon could also benefit from that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1661423/+subscriptions



[Yahoo-eng-team] [Bug 1661418] [NEW] neutron-fwaas functional tests do not execute

2017-02-02 Thread Nate Johnston
Public bug reported:

The neutron-fwaas functional test suite for stable/newton runs tests
[1], but the functional test suite for master (ocata) does not [2].

[1] 
http://logs.openstack.org/03/425003/1/check/gate-neutron-fwaas-dsvm-functional/35cae70/testr_results.html.gz
[2] 
http://logs.openstack.org/51/424551/13/check/gate-neutron-fwaas-dsvm-functional/2596d9d/testr_results.html.gz

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1661418

Title:
  neutron-fwaas functional tests do not execute

Status in neutron:
  New

Bug description:
  The neutron-fwaas functional test suite for stable/newton runs tests
  [1], but the functional test suite for master (ocata) does not [2].

  [1] 
http://logs.openstack.org/03/425003/1/check/gate-neutron-fwaas-dsvm-functional/35cae70/testr_results.html.gz
  [2] 
http://logs.openstack.org/51/424551/13/check/gate-neutron-fwaas-dsvm-functional/2596d9d/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1661418/+subscriptions



[Yahoo-eng-team] [Bug 1661419] [NEW] neutron-fwaas functional tests on stable/newton fail because db backend not set up

2017-02-02 Thread Nate Johnston
Public bug reported:

The functional tests for neutron-fwaas master all fail with exceptions
like:

Traceback (most recent call last):
  File "/opt/stack/new/neutron/neutron/tests/functional/db/test_migrations.py", 
line 136, in setUp
super(_TestModelsMigrations, self).setUp()
  File "/opt/stack/new/neutron/neutron/tests/unit/testlib_api.py", line 283, in 
setUp
self._setup_database_fixtures()
  File "/opt/stack/new/neutron/neutron/tests/unit/testlib_api.py", line 320, in 
_setup_database_fixtures
self.fail(msg)
  File 
"/opt/stack/new/neutron-fwaas/.tox/dsvm-functional/local/lib/python2.7/site-packages/unittest2/case.py",
 line 690, in fail
raise self.failureException(msg)
AssertionError: backend 'mysql' unavailable

or

Traceback (most recent call last):
  File "/opt/stack/new/neutron/neutron/tests/functional/db/test_migrations.py", 
line 136, in setUp
super(_TestModelsMigrations, self).setUp()
  File "/opt/stack/new/neutron/neutron/tests/unit/testlib_api.py", line 283, in 
setUp
self._setup_database_fixtures()
  File "/opt/stack/new/neutron/neutron/tests/unit/testlib_api.py", line 320, in 
_setup_database_fixtures
self.fail(msg)
  File 
"/opt/stack/new/neutron-fwaas/.tox/dsvm-functional/local/lib/python2.7/site-packages/unittest2/case.py",
 line 690, in fail
raise self.failureException(msg)
AssertionError: backend 'postgresql' unavailable

See: http://logs.openstack.org/03/425003/1/check/gate-neutron-fwaas-
dsvm-functional/35cae70/testr_results.html.gz

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas gate-failure newton-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1661419

Title:
  neutron-fwaas functional tests on stable/newton fail because db
  backend not set up

Status in neutron:
  New

Bug description:
  The functional tests for neutron-fwaas master all fail with exceptions
  like:

  Traceback (most recent call last):
File 
"/opt/stack/new/neutron/neutron/tests/functional/db/test_migrations.py", line 
136, in setUp
  super(_TestModelsMigrations, self).setUp()
File "/opt/stack/new/neutron/neutron/tests/unit/testlib_api.py", line 283, 
in setUp
  self._setup_database_fixtures()
File "/opt/stack/new/neutron/neutron/tests/unit/testlib_api.py", line 320, 
in _setup_database_fixtures
  self.fail(msg)
File 
"/opt/stack/new/neutron-fwaas/.tox/dsvm-functional/local/lib/python2.7/site-packages/unittest2/case.py",
 line 690, in fail
  raise self.failureException(msg)
  AssertionError: backend 'mysql' unavailable

  or

  Traceback (most recent call last):
File 
"/opt/stack/new/neutron/neutron/tests/functional/db/test_migrations.py", line 
136, in setUp
  super(_TestModelsMigrations, self).setUp()
File "/opt/stack/new/neutron/neutron/tests/unit/testlib_api.py", line 283, 
in setUp
  self._setup_database_fixtures()
File "/opt/stack/new/neutron/neutron/tests/unit/testlib_api.py", line 320, 
in _setup_database_fixtures
  self.fail(msg)
File 
"/opt/stack/new/neutron-fwaas/.tox/dsvm-functional/local/lib/python2.7/site-packages/unittest2/case.py",
 line 690, in fail
  raise self.failureException(msg)
  AssertionError: backend 'postgresql' unavailable

  See: http://logs.openstack.org/03/425003/1/check/gate-neutron-fwaas-
  dsvm-functional/35cae70/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1661419/+subscriptions



[Yahoo-eng-team] [Bug 1661417] [NEW] Several pages in Horizon are very slow (Floating IPs, Security Groups, etc.)

2017-02-02 Thread Lingxian Kong
Public bug reported:

There are several places that could be improved:

- Use 'detailed=False' when listing servers, because only instance ID and name 
are used in some places (e.g. when loading 'Floating IPs' panel)
- Disable quota check (make it configurable) in button 'allowed' function.

** Affects: horizon
 Importance: Undecided
 Assignee: Lingxian Kong (kong)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Lingxian Kong (kong)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1661417

Title:
  Several pages in Horizon are very slow (Floating IPs, Security Groups,
  etc.)

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  There are several places that could be improved:

  - Use 'detailed=False' when listing servers, because only instance ID and 
name are used in some places (e.g. when loading 'Floating IPs' panel)
  - Disable quota check (make it configurable) in button 'allowed' function.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1661417/+subscriptions



[Yahoo-eng-team] [Bug 1661402] Re: hypervisor panel blank

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/428461
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=b0bbc49d8cb83c8a73770ece522478dcdc4a017d
Submitter: Jenkins
Branch: master

commit b0bbc49d8cb83c8a73770ece522478dcdc4a017d
Author: David Lyle 
Date:   Thu Feb 2 14:57:57 2017 -0700

Fix Hypervisors page

The index template was incorrectly changed to the default, there
happens to be a lot on the index page, so this patch restores it.

Closes-Bug: #1661402
Change-Id: I3f8d1724060220e3b60f649988b843bc3457a8a1


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1661402

Title:
  hypervisor panel blank

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  no, it's really blank, due to an overzealous refactor

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1661402/+subscriptions



[Yahoo-eng-team] [Bug 1298061] Re: nova should allow evacuate for an instance in the Error state

2017-02-02 Thread Launchpad Bug Tracker
This bug was fixed in the package nova - 1:2014.1.5-0ubuntu1.6

---
nova (1:2014.1.5-0ubuntu1.6) trusty; urgency=medium

  * Allow evacuate for an instance in the Error state (LP: #1298061)
- d/p/remove_useless_state_check.patch remove unnecessary task_state check
- d/p/evacuate_error_vm.patch Allow evacuate from error state

 -- Liang Chen   Fri, 09 Sep 2016 17:41:48
+0800

** Changed in: nova (Ubuntu Trusty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298061

Title:
  nova should allow evacuate for an instance in the Error state

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive icehouse series:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Trusty:
  Fix Released

Bug description:
  [Impact]

   * Instances in error state cannot be evacuated.

  [Test Case]

   * nova evacuate  
   * nova refuses to evacuate the instance because of its state

  [Regression Potential]

   * Cherry picked from upstream
 - removed unnecessary argument passing
 - add allowing ERROR state before evacuating.
   * actually, in code, added one parameter, and removed unused one.
 so very low regression possibility.
   * Tested on juju+maas test env.
   * Passed tempest smoke tests locally.

  Note: one simple way to put an instance into error state is to
  directly change its database record, for example "update instances set
  vm_state='error' where uuid=''"

  We currently allow reboot/rebuild/rescue for an instance in the Error
  state if the instance has successfully booted at least once.

  We should allow "evacuate" as well, since it is essentially a
  "rebuild" on a different compute node.

  This would be useful in a number of cases, in particular if an initial
  evacuation attempt fails (putting the instance into the Error state).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1298061/+subscriptions



[Yahoo-eng-team] [Bug 1644878] Re: "get_local_port_mac" in ovs_lib uses a Linux-only command, breaking Windows compatibility

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/403804
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=39567df190b0e46706d7bab32c58f86ee8786aa1
Submitter: Jenkins
Branch: master

commit 39567df190b0e46706d7bab32c58f86ee8786aa1
Author: Rodolfo Alonso Hernandez 
Date:   Mon Nov 28 16:48:22 2016 +

Fix broken Windows compatibility in ovs_lib

This patch changes the ip_lib used. Substitutes the
Linux implementation for the generic wrapper.

Closes-Bug: #1644878
Change-Id: Ida1b3586d8d6b5fda69286aa1428d31d16010d71


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1644878

Title:
  "get_local_port_mac" in ovs_lib uses a Linux-only command, breaking
  Windows compatibility

Status in neutron:
  Fix Released

Bug description:
  neutron/agent/common/ovs_lib:OVSBridge is meant to be compatible with
  both Linux and Windows.

  In the function "get_local_port_mac", the Linux ip_lib is called
  instead of the ip_lib wrapper in neutron/agent/common.

  The solution to this problem is:
  - Call the ip_lib wrapper in neutron/agent/common.
  - Implement the function that reads the MAC address in the Windows
    IPDevice class implementation.
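  The wrapper's idea can be sketched as a platform dispatch. The module paths follow neutron's layout, but the helper itself is an illustration, not neutron's actual wrapper code:

```python
import os

# Illustrative dispatch: resolve the platform-specific ip_lib module
# instead of importing the Linux one directly (sketch only; neutron's
# wrapper lives in neutron/agent/common).
def ip_lib_module_name() -> str:
    if os.name == "nt":
        return "neutron.agent.windows.ip_lib"  # Windows implementation
    return "neutron.agent.linux.ip_lib"        # Linux implementation

# Callers import through the wrapper, so OVSBridge code never hardcodes
# the Linux module.
assert ip_lib_module_name() in ("neutron.agent.linux.ip_lib",
                                "neutron.agent.windows.ip_lib")
```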

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1644878/+subscriptions



[Yahoo-eng-team] [Bug 1661402] [NEW] hypervisor panel blank

2017-02-02 Thread David Lyle
Public bug reported:

no, it's really blank, due to an overzealous refactor

** Affects: horizon
 Importance: Critical
 Assignee: David Lyle (david-lyle)
 Status: In Progress


** Tags: ocata-backport-potential ocata-rc2-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1661402

Title:
  hypervisor panel blank

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  no, it's really blank, due to an overzealous refactor

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1661402/+subscriptions



[Yahoo-eng-team] [Bug 1647910] Re: hostname is set incorrectly if localhostname is fully qualified

2017-02-02 Thread Brian Murray
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1647910

Title:
  hostname is set incorrectly if localhostname is fully qualified

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released

Bug description:
  If no data source is available and the local hostname is set to
  "localhost.localdomain", and /etc/hosts looks like:

    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

  Then in sources/__init__.py in get_hostname:

  - util.get_hostname() will return 'localhost.localdomain'
  - util.get_fqdn_from_hosts(hostname) will return 'localhost'
  - 'toks' will be set to ['localhost.localdomain', 'localdomain']

  And ultimately the system hostname will be set to
  'localhost.localdomain.localdomain', which isn't useful to anybody.
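  The faulty combination can be reproduced in isolation. The values mirror the steps above; the joining logic is an illustration of the described behaviour, not cloud-init's verbatim code:

```python
# Values from the steps above:
hostname = "localhost.localdomain"   # what util.get_hostname() returns
fqdn = "localhost"                   # what util.get_fqdn_from_hosts() returns

# The hostname is already fully qualified, yet its domain part gets
# re-appended, producing the bogus result (illustrative logic):
domain = hostname.partition(".")[2]  # "localdomain"
toks = [hostname, domain]            # ['localhost.localdomain', 'localdomain']
result = ".".join(toks)
assert result == "localhost.localdomain.localdomain"
```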

  Also reported in:

  https://bugzilla.redhat.com/show_bug.cgi?id=1389048

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1647910/+subscriptions



[Yahoo-eng-team] [Bug 1653967] Re: nova raises ConfigFileValueError for URLs with dashes

2017-02-02 Thread Corey Bryant
** Also affects: nova
   Importance: Undecided
   Status: New

** No longer affects: nova

** Also affects: nova
   Importance: Undecided
   Status: New

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1653967

Title:
  nova raises ConfigFileValueError for URLs with dashes

Status in OpenStack Global Requirements:
  New
Status in oslo.config:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in python-rfc3986 package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  New
Status in python-rfc3986 source package in Xenial:
  New
Status in nova source package in Yakkety:
  New
Status in python-rfc3986 source package in Yakkety:
  New
Status in nova source package in Zesty:
  Fix Released
Status in python-rfc3986 source package in Zesty:
  Fix Released

Bug description:
  nova version: newton
  dpkg version: 2:14.0.1-0ubuntu1~cloud0
  distribution: nova @ xenial with ubuntu cloud archive, amd64.

  Nova fails with the exception "ConfigFileValueError: Value for option
  url is not valid: invalid URI" if the url parameter of the [neutron]
  section or the novncproxy_base_url parameter contains dashes in the URL.

  Steps to reproduce:

  Take a working openstack with nova+neutron.

  Put url = http://nodash.example.com:9696 in the [neutron] section - it
  works

  Put url = http://with-dash.example.com:9696 - it fails with exception:

  
  nova[18937]: TRACE Traceback (most recent call last):
  nova[18937]: TRACE   File "/usr/bin/nova-api-os-compute", line 10, in 
  nova[18937]: TRACE sys.exit(main())
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/cmd/api_os_compute.py", line 51, in main
  nova[18937]: TRACE service.wait()
  nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/nova/service.py", 
line 415, in wait
  nova[18937]: TRACE _launcher.wait()
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 568, in wait
  nova[18937]: TRACE self.conf.log_opt_values(LOG, logging.DEBUG)
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2626, in 
log_opt_values
  nova[18937]: TRACE _sanitize(opt, getattr(group_attr, opt_name)))
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3057, in __getattr__
  nova[18937]: TRACE return self._conf._get(name, self._group)
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2668, in _get
  nova[18937]: TRACE value = self._do_get(name, group, namespace)
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2711, in _do_get
  nova[18937]: TRACE % (opt.name, str(ve)))
  nova[18937]: TRACE ConfigFileValueError: Value for option url is not valid: 
invalid URI: 'http://with-dash.example.com:9696'.

  Expected behavior: do not crash.
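  The failure mode can be illustrated with a deliberately over-strict host pattern. The regexes are hypothetical; the real validation lives inside python-rfc3986:

```python
import re

# Hypothetical over-strict host check: omitting '-' from the character
# class rejects perfectly valid hostnames, which is the class of bug
# fixed in python-rfc3986 (patterns here are illustrative only).
strict_host = re.compile(r"^[a-z0-9.]+$")                     # buggy: no '-'
rfc_host = re.compile(r"^[a-z0-9](?:[a-z0-9.-]*[a-z0-9])?$")  # hyphen allowed

host = "with-dash.example.com"
assert strict_host.match("nodash.example.com")  # dash-free URL passes
assert not strict_host.match(host)              # the crash trigger above
assert rfc_host.match(host)                     # hyphens are legal hosts
```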

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-requirements/+bug/1653967/+subscriptions



[Yahoo-eng-team] [Bug 1661243] Re: reserved_host_disk_mb reporting incorrectly

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/428120
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=d4502e1f53de9237b53c0967ed1e37cc06effdf5
Submitter: Jenkins
Branch: master

commit d4502e1f53de9237b53c0967ed1e37cc06effdf5
Author: John Garbutt 
Date:   Thu Feb 2 12:51:36 2017 +

Report reserved_host_disk_mb in GB not KB

We were reporting reserved_host_disk_mb as GB not KB.

This created this log message:
  Invalid inventory for 'DISK_GB' on resource provider .
  The reserved value is greater than or equal to total.

This corrects the reporting of reserved_host_disk_mb to the placement
API when updating the compute node inventory.

Closes-Bug: #1661243

Change-Id: I5573c82eb99cde13c407c8d6a06ecb04165ab9c5
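The unit mix-up can be checked numerically with the values from the bug report (variable names are illustrative, and the buggy MB-to-KB conversion is inferred from the commit title, not copied from nova's code):

```python
reserved_host_disk_mb = 2048   # config value from the report
total_disk_gb = 57             # free disk reported by the compute node

# Buggy path: the MB value was scaled up to KB but handed to placement
# as if it were DISK_GB, so "reserved" dwarfs the 57 GB total and the
# inventory update is rejected.
reserved_as_reported = reserved_host_disk_mb * 1024   # 2097152
assert reserved_as_reported >= total_disk_gb          # reserved >= total

# Fixed path: scale MB down to GB before reporting.
reserved_gb = reserved_host_disk_mb // 1024           # 2 GB
assert reserved_gb < total_disk_gb                    # inventory accepted
```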


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661243

Title:
  reserved_host_disk_mb reporting incorrectly

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  We're getting the following failure in our gate jobs for nova:

  Failed to update inventory for resource provider 2c2e388f-be21-4461
  -bdcf-e1b20b9c90e2: 400 400 Bad Request

  The server could not comply with the request since it is either
  malformed or otherwise incorrect.

   Unable to update inventory for resource provider 2c2e388f-be21-4461
  -bdcf-e1b20b9c90e2: Invalid inventory for 'DISK_GB' on resource
  provider '2c2e388f-be21-4461-bdcf-e1b20b9c90e2'. The reserved value is
  greater than or equal to total.

  =

  The reserved_host_disk_mb is set to 2048 and the compute host is showing free 
disk of 57GB.
  The build is based on the head of nova's master branch and uses QEMU.

  =

  Link to compute logs showing reserved_host_disk_mb: 2048
  
http://logs.openstack.org/57/418457/48/check/gate-openstack-ansible-os_nova-ansible-func-ubuntu-xenial/8ce42e7/logs/host/nova/nova-compute.log.txt.gz#_2017-02-01_20_43_03_006

  Link to compute logs showing free_disk=57GB
  
http://logs.openstack.org/57/418457/48/check/gate-openstack-ansible-os_nova-ansible-func-ubuntu-xenial/8ce42e7/logs/host/nova/nova-compute.log.txt.gz#_2017-02-01_20_43_03_513

  Link to compute logs showing the Error:
  
http://logs.openstack.org/57/418457/48/check/gate-openstack-ansible-os_nova-ansible-func-ubuntu-xenial/8ce42e7/logs/host/nova/nova-compute.log.txt.gz#_2017-02-01_20_43_05_506

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661243/+subscriptions



[Yahoo-eng-team] [Bug 1661195] Re: Servers filter by access_ip_v4 does not filter servers

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/428071
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=cd29a4e6c5af419050f0db1a2d8cc2c45bdcde03
Submitter: Jenkins
Branch:master

commit cd29a4e6c5af419050f0db1a2d8cc2c45bdcde03
Author: ghanshyam 
Date:   Thu Feb 2 10:02:53 2017 +

Fix access_ip_v4/6 filters params for servers filter

While adding the json schema for the servers filter query,
we added 'accessIPv4' and 'accessIPv6' as allowed params,
but they do not match what the DB has: it is 'access_ip_v4'
and 'access_ip_v6' in the DB.
This made the 'access_ip_v4' and 'access_ip_v6' filters stop working.

The schema should be fixed accordingly to allow 'access_ip_v4'
and 'access_ip_v6' as valid filters.

'accessIPv4' and 'accessIPv6' are what the API accepts
and returns; internally the API layer translates those params
to their respective DB fields ('access_ip_v4' and 'access_ip_v6').
So the user knows nothing about 'access_ip_v4' and
'access_ip_v6'; they are not part of the API representation.

Ideally, list filter and sort params should match the fields
returned in GET or accepted in POST/PUT, which are 'accessIPv4'
and 'accessIPv6'. But that is new attribute support in filters
and can be done later after more discussion.

Change-Id: Idc12de0062d298259e25c8b4c0dde889054a9ae5
Closes-Bug: #1661195


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661195

Title:
  Servers filter by access_ip_v4 does not filter servers

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Recently we added the server list query param validation in json
  schema. In the schema, 'accessIPv4' and 'accessIPv6' are allowed, which
  does not match what the DB has. It is 'access_ip_v4' and
  'access_ip_v6' in the DB.

  The schema below should be fixed accordingly to allow 'access_ip_v4'
  and 'access_ip_v6' as valid filters.

  
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/servers.py#L343-L344

  
  On another note:
  'accessIPv4' and 'accessIPv6' are what the API accepts and returns;
  internally the API layer translates those params to their respective
  DB fields ('access_ip_v4' and 'access_ip_v6').
  So the user knows nothing about 'access_ip_v4' and 'access_ip_v6'; they
  are not part of the API representation.

  List filter and sort params should match the fields returned in GET or
  accepted in POST/PUT, which are 'accessIPv4' and 'accessIPv6'. But that
  is new attribute support in filters and can be done later after
  discussion.
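  The translation the description refers to can be pictured with a small
  mapping from API attribute names to DB column names (a simplified sketch,
  not nova's actual code):

```python
# API-visible attribute name -> DB column name, as described above
API_TO_DB = {
    'accessIPv4': 'access_ip_v4',
    'accessIPv6': 'access_ip_v6',
}

def db_filter_key(api_param):
    """Translate an API query param to the DB column used for filtering.

    Params with no special API spelling pass through unchanged.
    """
    return API_TO_DB.get(api_param, api_param)
```

  The broken schema effectively validated the left-hand (API) names while the
  DB query layer only understood the right-hand names.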

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661195/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1660436] Re: Federated users cannot log into horizon

2017-02-02 Thread Steve Martinelli
Marked as invalid for keystone projects and novaclient. The fix was
centralized to Horizon and DOA.

** Changed in: keystone
   Status: New => Invalid

** Changed in: python-novaclient
   Status: New => Invalid

** Changed in: keystoneauth
   Status: New => Invalid

** Also affects: django-openstack-auth
   Importance: Undecided
   Status: New

** Changed in: django-openstack-auth
   Status: New => Fix Released

** Changed in: keystone
Milestone: ocata-rc1 => None

** Changed in: keystone
 Assignee: Colleen Murphy (krinkle) => (unassigned)

** Changed in: keystone
   Importance: Critical => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1660436

Title:
  Federated users cannot log into horizon

Status in django-openstack-auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Identity (keystone):
  Invalid
Status in keystoneauth:
  Invalid
Status in python-novaclient:
  Invalid

Bug description:
  As of this bugfix in novaclient, federated users cannot log in to
  horizon:

  https://bugs.launchpad.net/python-novaclient/+bug/1658963

  Before this bugfix, horizon would attempt to list nova extensions
  using what was apparently the wrong class, and the error would be
  caught and quietly logged as such:

   Call to list supported extensions failed. This is likely due to a
  problem communicating with the Nova endpoint. Host Aggregates panel
  will not be displayed.

  The dashboard would display:

   Error: Unable to retrieve usage information.

  but at least the user was logged into the dashboard.

  The error that was being hidden was:

   __init__() takes at least 3 arguments (2 given)

  Now that that is fixed, horizon makes it further but fails to
  authenticate the federated user when attempting this request, giving
  the traceback here:

   http://paste.openstack.org/show/596929/

  The problem lies somewhere between keystoneauth, novaclient, and
  horizon.

  keystoneauth:

  When keystoneauth does version discovery, it first tries the Identity
  v2.0 API, and finding no domain information in the request, returns
  that API as the Identity endpoint. Modifying keystoneauth to not stop
  there and continue trying the v3 API, even though it lacks domain
  information, allows the user to successfully log in:

   http://paste.openstack.org/show/596930/

  I'm not really sure why that works or what would break with that
  change.

  novaclient:

  When creating a Token plugin the novaclient is aware of a project's
  domain but not of a domain on its own or of a default domain:

   http://git.openstack.org/cgit/openstack/python-
  novaclient/tree/novaclient/client.py#n137

  keystoneauth relies on having default_domain_(id|name),
  domain_(id|name), or project_domain(id|name) set, and novaclient isn't
  receiving information about the project_domain(id|name) and isn't
  capable of sending any other domain information when using the Token
  plugin, which it must for a federated user.

  horizon:

  For federated users novaclient is only set up to pass along domain
  info for the project, which horizon doesn't store in its user object:

  
http://git.openstack.org/cgit/openstack/django_openstack_auth/tree/openstack_auth/user.py#n202

  However things seem to just work if we fudge the user_domain_id as the
  project_domain_id, though that is obviously not a good solution:

   http://paste.openstack.org/show/596933/
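  The workaround described (reusing the project's domain when the user's own
  domain is unknown) can be sketched as follows; the helper name and kwarg
  shape are hypothetical, not DOA's actual code:

```python
def token_auth_kwargs(user):
    """Build kwargs for a Token auth plugin.

    Fall back to the project's domain when no user domain is stored
    on the user object -- the 'fudge' described above.
    """
    project_domain = getattr(user, 'project_domain_id', None)
    user_domain = getattr(user, 'user_domain_id', None) or project_domain
    return {
        'project_id': user.project_id,
        'project_domain_id': project_domain,
        'user_domain_id': user_domain,
    }
```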

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1660436/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1661364] [NEW] the url for "Translation at ..." on the overview tab is wrong

2017-02-02 Thread Lucas H. Xu
Public bug reported:

On this overview tab: https://launchpad.net/horizon

"Translations happen at: https://www.translate.openstack.org/"

should be

Translations happen at: https://translate.openstack.org/

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1661364

Title:
  the url for "Translation at ..." on the overview tab is wrong

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On this overview tab: https://launchpad.net/horizon

  "Translations happen at: https://www.translate.openstack.org/"

  should be

  Translations happen at: https://translate.openstack.org/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1661364/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567807] Re: nova delete doesn't work with EFI booted VMs

2017-02-02 Thread Corey Bryant
Chuck has started the backport to stable/newton upstream.  We won't be
able to get the patch backported upstream to stable/mitaka at this point
since they're only accepting critical/security fixes at this time.  The
patch appears to apply cleanly to stable/mitaka.

** Also affects: nova (Ubuntu Zesty)
   Importance: Low
 Assignee: Kevin Zhao (kevin-zhao)
   Status: Triaged

** Also affects: nova (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: nova (Ubuntu Xenial)
   Status: New => Triaged

** Changed in: nova (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: nova (Ubuntu Yakkety)
   Importance: Undecided => Medium

** Changed in: nova (Ubuntu Zesty)
   Importance: Low => Medium

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/mitaka
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ocata
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/newton
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/mitaka
   Status: New => Triaged

** Changed in: nova (Ubuntu Yakkety)
   Status: New => Triaged

** Changed in: cloud-archive/newton
   Status: New => Triaged

** Changed in: cloud-archive/newton
   Importance: Undecided => Medium

** Changed in: cloud-archive/ocata
   Importance: Undecided => High

** Changed in: nova (Ubuntu Yakkety)
   Importance: Medium => High

** Changed in: cloud-archive/newton
   Importance: Medium => High

** Changed in: cloud-archive/ocata
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1567807

Title:
  nova delete doesn't work with EFI booted VMs

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in Ubuntu Cloud Archive newton series:
  Triaged
Status in Ubuntu Cloud Archive ocata series:
  Triaged
Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Triaged
Status in nova source package in Xenial:
  Triaged
Status in nova source package in Yakkety:
  Triaged
Status in nova source package in Zesty:
  Triaged

Bug description:
  I've been setting up a Mitaka Openstack using the cloud archive
  running on Trusty, and am having problems working with EFI enabled
  instances on ARM64.

  I've done some work with wgrant and gotten things to a stage where I
  can boot instances, using the aavmf images.

  However, when I tried to delete a VM booted like this, I got an error:

libvirtError: Requested operation is not valid: cannot delete
  inactive domain with nvram

  I've included the full traceback at
  https://paste.ubuntu.com/15682718/.

  Thanks to a suggestion from wgrant again, I got it working by editing
  nova/virt/libvirt/guest.py in delete_configuration() and replacing
  self._domain.undefineFlags(libvirt.VIR_DOMAIN_UNDEFINE_MANAGED_SAVE)
  with
  self._domain.undefineFlags(libvirt.VIR_DOMAIN_UNDEFINE_MANAGED_SAVE |
  libvirt.VIR_DOMAIN_UNDEFINE_NVRAM).
  I've attached a rough patch.
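  The change amounts to OR-ing one extra flag into the undefine call. A
  minimal sketch of the flag computation, using libvirt's documented
  constant values (the surrounding guest.py code is assumed, not shown):

```python
# libvirt constants from virDomainUndefineFlagsValues
VIR_DOMAIN_UNDEFINE_MANAGED_SAVE = 1
VIR_DOMAIN_UNDEFINE_NVRAM = 4

def undefine_flags(has_nvram):
    """Flags for virDomainUndefineFlags.

    Adding NVRAM lets EFI-booted (aavmf) guests that carry an nvram
    file be undefined instead of failing with 'cannot delete inactive
    domain with nvram'.
    """
    flags = VIR_DOMAIN_UNDEFINE_MANAGED_SAVE
    if has_nvram:
        flags |= VIR_DOMAIN_UNDEFINE_NVRAM
    return flags
```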

  Once that's applied and nova-compute restarted, I was able to delete
  the instance fine.

  Could someone please investigate this and see if it's the correct fix,
  and look at getting it fixed in the archive?

  This was done on a updated trusty deployment using the cloud-archives
  for mitaka.

  $ dpkg-query -W python-nova
  python-nova 2:13.0.0~b2-0ubuntu1~cloud0

  Please let me know if you need any further information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1567807/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1661360] Re: tempest test fails with "Instance not found" error

2017-02-02 Thread Emilien Macchi
It affects Puppet OpenStack CI but also TripleO. We can't spawn a VM
anymore.

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => Triaged

** Changed in: tripleo
   Importance: Undecided => Critical

** Changed in: tripleo
Milestone: None => ocata-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661360

Title:
  tempest test fails with "Instance not found" error

Status in OpenStack Compute (nova):
  New
Status in tripleo:
  Triaged

Bug description:
  Running OpenStack services from master, when we try to run the tempest
  test
  tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops
  (among others), it always fails with the message "u'message': u'Instance
  bf33af04-6b55-4835-bb17-02484c196f13 could not be found.'" (full log
  in http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-centos-7/b29f35b/console.html)

  According to the sequence in the log, this is what happens:

  1. tempest creates an instance:

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_291997

  2. nova server returns instance bf33af04-6b55-4835-bb17-02484c196f13
  so it seems it has been properly created:

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292483

  3. tempest tries to get the status of the instance right after creating
  it, and the nova server returns 404, instance not found:

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292565

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292845

  At that time following messages are found in nova log:

  2017-02-02 12:58:10.823 7439 DEBUG nova.compute.api 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] [instance: 
bf33af04-6b55-4835-bb17-02484c196f13] Fetching instance by UUID get 
/usr/lib/python2.7/site-packages/nova/compute/api.py:2312
  2017-02-02 12:58:10.879 7439 INFO nova.api.openstack.wsgi 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] HTTP exception thrown: 
Instance bf33af04-6b55-4835-bb17-02484c196f13 could not be found.
  2017-02-02 12:58:10.880 7439 DEBUG nova.api.openstack.wsgi 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] Returning 404 to user: 
Instance bf33af04-6b55-4835-bb17-02484c196f13 could not be found. __call__ 
/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:1039

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-centos-7/b29f35b/logs/nova/nova-
  api.txt.gz#_2017-02-02_12_58_10_879

  4. Then tempest starts cleaning up the environment, deleting the
  security group, etc...

  We are hitting this with nova from commit
  f40467b0eb2b58a369d24a0e832df1ace6c400c3





  
  Tempest starts cleaning up securitygroup

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1639239] Re: ValueError for Invalid InitiatorConnector in s390

2017-02-02 Thread Corey Bryant
** Also affects: cloud-archive/ocata
   Importance: Medium
   Status: Confirmed

** Changed in: cloud-archive/ocata
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1639239

Title:
  ValueError for Invalid InitiatorConnector in s390

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Confirmed
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in os-brick:
  Fix Released
Status in Ubuntu on IBM z Systems:
  Confirmed
Status in nova package in Ubuntu:
  Fix Released
Status in python-os-brick package in Ubuntu:
  Fix Released
Status in nova source package in Yakkety:
  Confirmed
Status in python-os-brick source package in Yakkety:
  Confirmed
Status in nova source package in Zesty:
  Fix Released
Status in python-os-brick source package in Zesty:
  Fix Released

Bug description:
  Description
  ===
  Calling the InitiatorConnector factory results in a ValueError for 
unsupported protocols, which goes unhandled and may crash a calling service.

  Steps to reproduce
  ==
  - clone devstack
  - make stack

  Expected result
  ===
  The nova compute service should run.

  Actual result
  =
  A ValueError is thrown, which, in the case of the nova libvirt driver, is not 
handled appropriately. The compute service crashes.
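  A service that must survive unsupported protocols could wrap the factory
  call and skip the offending driver instead of crashing. A sketch with a
  toy factory (os-brick's real API is not reproduced here; names are
  assumptions):

```python
SUPPORTED = {'ISCSI'}

def connector_factory(protocol):
    """Toy stand-in for the os-brick InitiatorConnector factory."""
    if protocol.upper() not in SUPPORTED:
        raise ValueError(
            "Invalid InitiatorConnector protocol specified %s" % protocol)
    return protocol.upper()

def load_connectors(protocols):
    """Load what we can; skip unsupported protocols instead of dying."""
    loaded = {}
    for proto in protocols:
        try:
            loaded[proto] = connector_factory(proto)
        except ValueError:
            # e.g. ISER is unsupported on s390x: log and continue
            continue
    return loaded
```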

  Environment
  ===
  os|distro=kvmibm1
  os|vendor=kvmibm
  os|release=1.1.3-beta4.3
  git|cinder|master[f6ab36d]
  git|devstack|master[928b3cd]
  git|nova|master[56138aa]
  pip|os-brick|1.7.0

  Logs & Configs
  ==
  [...]
  2016-11-03 17:56:57.204 46141 INFO nova.virt.driver 
[req-fb30a5af-e87c-4ee0-903c-a5aa7d3ad5e3 - -] Loading compute driver 
'libvirt.LibvirtDriver'
  2016-11-03 17:56:57.442 46141 DEBUG os_brick.initiator.connector 
[req-fb30a5af-e87c-4ee0-903c-a5aa7d3ad5e3 - -] Factory for ISCSI on s390x 
factory /usr/lib/python2.7/site-packages/os_brick/initiator/connector.py:261
  2016-11-03 17:56:57.444 46141 DEBUG os_brick.initiator.connector 
[req-fb30a5af-e87c-4ee0-903c-a5aa7d3ad5e3 - -] Factory for ISCSI on s390x 
factory /usr/lib/python2.7/site-packages/os_brick/initiator/connector.py:261
  2016-11-03 17:56:57.445 46141 DEBUG os_brick.initiator.connector 
[req-fb30a5af-e87c-4ee0-903c-a5aa7d3ad5e3 - -] Factory for ISER on s390x 
factory /usr/lib/python2.7/site-packages/os_brick/initiator/connector.py:261
  2016-11-03 17:56:57.445 46141 CRITICAL nova 
[req-fb30a5af-e87c-4ee0-903c-a5aa7d3ad5e3 - -] ValueError: Invalid 
InitiatorConnector protocol specified ISER
  2016-11-03 17:56:57.445 46141 ERROR nova Traceback (most recent call last):
  2016-11-03 17:56:57.445 46141 ERROR nova   File "/usr/bin/nova-compute", line 
10, in 
  2016-11-03 17:56:57.445 46141 ERROR nova sys.exit(main())
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/cmd/compute.py", line 56, in main
  2016-11-03 17:56:57.445 46141 ERROR nova topic=CONF.compute_topic)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/service.py", line 216, in create
  2016-11-03 17:56:57.445 46141 ERROR nova 
periodic_interval_max=periodic_interval_max)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/service.py", line 91, in __init__
  2016-11-03 17:56:57.445 46141 ERROR nova self.manager = 
manager_class(host=self.host, *args, **kwargs)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/compute/manager.py", line 537, in __init__
  2016-11-03 17:56:57.445 46141 ERROR nova self.driver = 
driver.load_compute_driver(self.virtapi, compute_driver)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/virt/driver.py", line 1625, in load_compute_driver
  2016-11-03 17:56:57.445 46141 ERROR nova virtapi)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/usr/lib/python2.7/site-packages/oslo_utils/importutils.py", line 44, in 
import_object
  2016-11-03 17:56:57.445 46141 ERROR nova return 
import_class(import_str)(*args, **kwargs)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 356, in __init__
  2016-11-03 17:56:57.445 46141 ERROR nova self._get_volume_drivers(), 
self._host)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/virt/driver.py", line 44, in driver_dict_from_config
  2016-11-03 17:56:57.445 46141 ERROR nova driver_registry[driver_type] = 
driver_class(*args, **kwargs)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/virt/libvirt/volume/iser.py", line 34, in __init__
  2016-11-03 17:56:57.445 46141 ERROR nova transport=self._get_transport())
  2016-11-03 17:56:57.445 46141 ERROR nova   File 

[Yahoo-eng-team] [Bug 1661360] [NEW] tempest test fails with "Instance not found" error

2017-02-02 Thread Alfredo Moralejo
Public bug reported:

Running OpenStack services from master, when we try to run the tempest test
tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops
(among others), it always fails with the message "u'message': u'Instance
bf33af04-6b55-4835-bb17-02484c196f13 could not be found.'" (full log in
http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
integration-4-scenario001-tempest-centos-7/b29f35b/console.html)

According to the sequence in the log, this is what happens:

1. tempest creates an instance:

http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
integration-4-scenario001-tempest-
centos-7/b29f35b/console.html#_2017-02-02_13_04_48_291997

2. nova server returns instance bf33af04-6b55-4835-bb17-02484c196f13 so
it seems it has been properly created:

http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
integration-4-scenario001-tempest-
centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292483

3. tempest tries to get the status of the instance right after creating it,
and the nova server returns 404, instance not found:

http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
integration-4-scenario001-tempest-
centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292565

http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
integration-4-scenario001-tempest-
centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292845

At that time following messages are found in nova log:

2017-02-02 12:58:10.823 7439 DEBUG nova.compute.api 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] [instance: 
bf33af04-6b55-4835-bb17-02484c196f13] Fetching instance by UUID get 
/usr/lib/python2.7/site-packages/nova/compute/api.py:2312
2017-02-02 12:58:10.879 7439 INFO nova.api.openstack.wsgi 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] HTTP exception thrown: 
Instance bf33af04-6b55-4835-bb17-02484c196f13 could not be found.
2017-02-02 12:58:10.880 7439 DEBUG nova.api.openstack.wsgi 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] Returning 404 to user: 
Instance bf33af04-6b55-4835-bb17-02484c196f13 could not be found. __call__ 
/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:1039

http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
integration-4-scenario001-tempest-centos-7/b29f35b/logs/nova/nova-
api.txt.gz#_2017-02-02_12_58_10_879

4. Then tempest starts cleaning up the environment, deleting the security
group, etc...

We are hitting this with nova from commit
f40467b0eb2b58a369d24a0e832df1ace6c400c3




Tempest starts cleaning up securitygroup

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: tripleo
 Importance: Critical
 Status: Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661360

Title:
  tempest test fails with "Instance not found" error

Status in OpenStack Compute (nova):
  New
Status in tripleo:
  Triaged

Bug description:
  Running OpenStack services from master, when we try to run tempest
  test
  
tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops
  (among others). It always fails with message "u'message': u'Instance
  bf33af04-6b55-4835-bb17-02484c196f13 could not be found.'" (full log
  in http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-centos-7/b29f35b/console.html)

  According to the sequence in the log, this is what happens:

  1. tempest creates an instance:

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_291997

  2. nova server returns instance bf33af04-6b55-4835-bb17-02484c196f13
  so it seems it has been properly created:

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292483

  3. tempest try to get status of the instance right after creating it
  and nova server returns 404, instance not found:

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292565

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292845

  At that time following messages are found in nova log:

  2017-02-02 12:58:10.823 7439 DEBUG nova.compute.api 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] [instance: 
bf33af04-6b55-4835-bb17-02484c196f13] Fetching instance by UUID get 
/usr/lib/python2.7/site-packages/nova/compute/api.py:2312
  2017-02-02 12:58:10.879 7439 INFO nova.api.openstack.wsgi 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] HTTP exception thrown: 
Instance bf33af04-6b55-4835-bb17-02484c196f13 could not be 

[Yahoo-eng-team] [Bug 850443] Re: Nova API does not listen on IPv6

2017-02-02 Thread Chuck Short
I do believe this is no longer an issue. Please re-open if it is.

** Changed in: python-eventlet (Ubuntu)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/850443

Title:
  Nova API does not listen on IPv6

Status in OpenStack Compute (nova):
  Invalid
Status in python-eventlet package in Ubuntu:
  Fix Released

Bug description:
  Nova API service does not bind to IPv6 interfaces. When specifying a v6
  address using ec2_list or osapi_listen, it returns "gaierror: [Errno
  -9] Address family for hostname not supported"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/850443/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1655670] Re: HTTP 404 when requesting /v3/domains/Default

2017-02-02 Thread Dean Troyer
Agreed, this is expected behaviour for the reason Boris mentioned.

** Changed in: python-openstackclient
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1655670

Title:
  HTTP 404 when requesting /v3/domains/Default

Status in OpenStack Identity (keystone):
  Invalid
Status in python-openstackclient:
  Invalid

Bug description:
  After running a Keystone test with Rally, I found a lot of HTTP 404
  errors in the Apache2 access log when requesting /v3/domains/Default.

  10.26.11.110 - - [16/Dec/2016:14:17:41 +0100] "GET /v3/domains/Default 
HTTP/1.1" 404 91 "-" "python-keystoneclient" 
  10.26.11.110 - - [16/Dec/2016:14:17:44 +0100] "GET /v3/domains/Default 
HTTP/1.1" 404 91 "-" "python-keystoneclient" 
  10.26.11.110 - - [16/Dec/2016:14:17:47 +0100] "GET /v3/domains/Default 
HTTP/1.1" 404 91 "-" "python-keystoneclient"

  I was able to reproduce this by executing the command "openstack project list 
--domain Default".
  I also found the appropriate entry in the Keystone log:

  2017-01-10 16:49:39.012184 2017-01-10 16:49:39.011 6748 INFO 
keystone.common.wsgi [req-ce3d7d7e-8dce-4dd5-8131-64ec7b047cc7 
3e23a62541e04ab6b9726e65060bbb33 e64f8ce1d278474d9989c38162ff7bdd - default 
default] GET https://identity-dd2d.cloudandheat.com:5000/v3/domains/Default
  2017-01-10 16:49:39.018540 2017-01-10 16:49:39.017 6748 WARNING 
keystone.common.wsgi [req-ce3d7d7e-8dce-4dd5-8131-64ec7b047cc7 
3e23a62541e04ab6b9726e65060bbb33 e64f8ce1d278474d9989c38162ff7bdd - default 
default] Could not find domain: Default

  It works well when you use the domain id instead (openstack project list 
--domain default).
  Please find the debug output attached.

  There are two GET requests for /v3/domains/Default. Both of them get the 404.
  The next request uses another URL (/v3/domains?name=Default) which works well.
  I assume that's why you get the list even when the 404s occur.
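  The client behaviour described (try the ID URL, eat the 404, then filter
  by name) can be sketched like this; the in-memory DOMAINS dict stands in
  for keystone's backend:

```python
# toy domain store: keyed by ID, as keystone's GET /v3/domains/<id> is
DOMAINS = {'default': {'id': 'default', 'name': 'Default'}}

def get_domain(id_or_name):
    """Find a domain the way the client does: ID lookup first, then a
    name-filtered list (the /v3/domains?name=... request in the log)."""
    domain = DOMAINS.get(id_or_name)        # GET /v3/domains/<id>
    if domain is not None:
        return domain
    # the ID lookup 404s for a name like 'Default'; fall back to
    # GET /v3/domains?name=<name>
    for d in DOMAINS.values():
        if d['name'] == id_or_name:
            return d
    raise LookupError('Could not find domain: %s' % id_or_name)
```

  So 'Default' (a name) produces the logged 404 on the first request but
  still resolves via the second, while 'default' (the ID) hits directly.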

  We use OpenStack Newton with Keystone version 10.0.0.

  Steps to reproduce:
  1. Install DevStack
  2. Execute "openstack --debug project list --domain Default"

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1655670/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648206] Re: sriov agent report_state is slow

2017-02-02 Thread Corey Bryant
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu)
   Status: New => Fix Released

** Also affects: neutron (Ubuntu Zesty)
   Importance: Undecided
   Status: Fix Released

** Also affects: neutron (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu Yakkety)
   Status: New => Confirmed

** Changed in: neutron (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: neutron (Ubuntu Xenial)
   Importance: Undecided => High

** Changed in: neutron (Ubuntu Yakkety)
   Importance: Undecided => High

** Also affects: cloud-archive/newton
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/mitaka
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ocata
   Importance: Undecided
   Status: Confirmed

** Changed in: cloud-archive/ocata
   Status: Confirmed => Fix Released

** Changed in: cloud-archive/newton
   Status: New => Triaged

** Changed in: cloud-archive/mitaka
   Status: New => Triaged

** Changed in: neutron (Ubuntu Yakkety)
   Status: Confirmed => Triaged

** Changed in: cloud-archive/newton
   Importance: Undecided => High

** Changed in: neutron (Ubuntu Zesty)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1648206

Title:
  sriov agent report_state is slow

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in Ubuntu Cloud Archive newton series:
  Triaged
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  Confirmed
Status in neutron source package in Yakkety:
  Triaged
Status in neutron source package in Zesty:
  Fix Released

Bug description:
  On a system with lots of VFs and PFs we get these logs:

  WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent.SriovNicSwitchAgent._report_state' run outlasted interval by 29.67 sec
  WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent.SriovNicSwitchAgent._report_state' run outlasted interval by 45.43 sec
  WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent.SriovNicSwitchAgent._report_state' run outlasted interval by 47.64 sec
  WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent.SriovNicSwitchAgent._report_state' run outlasted interval by 23.89 sec
  WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent.SriovNicSwitchAgent._report_state' run outlasted interval by 30.20 sec

  
  Depending on the agent_down_time configuration, this can cause the Neutron 
server to think the agent has died.

  
  This appears to be caused by blocking on the eswitch manager every time to 
get a device count to include in the state report.
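  The blocking described above suggests an obvious mitigation: cache the device count and refresh it outside the report path, so the periodic state report only ever does a cheap read. A minimal sketch of that idea, using hypothetical names (not the actual agent code):

```python
import threading

class DeviceCountCache:
    """Cache a device count so a periodic _report_state() call never
    blocks on a slow scan (e.g. an eswitch manager device walk)."""

    def __init__(self, scan_fn):
        self._scan_fn = scan_fn      # the expensive call
        self._count = 0
        self._lock = threading.Lock()

    def refresh(self):
        # Run the expensive scan outside the report loop.
        count = self._scan_fn()
        with self._lock:
            self._count = count

    def get(self):
        # Cheap, non-blocking read for the state report.
        with self._lock:
            return self._count

# Hypothetical usage: the report loop only ever calls get(); refresh()
# runs on its own, slower schedule.
cache = DeviceCountCache(scan_fn=lambda: 42)
cache.refresh()
print(cache.get())  # 42
```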

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1648206/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1661350] [NEW] Empty white box in Network Topology when there's nothing to show

2017-02-02 Thread Eddie Ramirez
Public bug reported:

How to reproduce:

1. Access a clean installation of OpenStack, or a project with no networks, 
routers, etc.
2. Go to Project->Network->Network Topology
3. You're taken to the first tab, "Topology"; click on the "Graph" tab.
4. An empty white box is shown in both tabs.

Expected result:
You should see a message explaining that there are no networks, routers, or 
anything else to show here.
Example: http://pasteboard.co/tA0ZWVdKq.png

Actual result:
An empty white box. No clear indication that there's nothing to show. Example: 
http://pasteboard.co/tzRJkUxqE.png

Note: the HTML is present, but a CSS property is hiding the div containing
its message.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: topology-view

** Attachment added: "Network Topology   OpenStack Dashboard.png"
   
https://bugs.launchpad.net/bugs/1661350/+attachment/4812155/+files/Network%20Topology%20%20%20OpenStack%20Dashboard.png

** Description changed:

  How to reproduce:
  
- 1. Access a clean install of OpenStack or Project with no networks, routers, 
etc.
+ 1. Access to a clean installion of OpenStack or Project with no networks, 
routers, etc.
  2. Go to Project->Network->Network Topology
  3. You're taken to the first tab "Topology", click on "Graph" tab.
  4. See that an empty white box is shown, this in both tabs.
  
  Expected result:
  You should read a message explaning there are no networks, routers or 
anything else to show here.
  Example: http://pasteboard.co/tA0ZWVdKq.png
  
  Actual result:
  An empty white box. No clear indication that there's nothing to show. 
Example: http://pasteboard.co/tzRJkUxqE.png
  
  Note: HTML is there but a css property is hiding the div with its
  message.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1661350

Title:
  Empty white box in Network Topology when there's nothing to show

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  How to reproduce:

  1. Access a clean installation of OpenStack, or a project with no networks, 
routers, etc.
  2. Go to Project->Network->Network Topology
  3. You're taken to the first tab, "Topology"; click on the "Graph" tab.
  4. An empty white box is shown in both tabs.

  Expected result:
  You should see a message explaining that there are no networks, routers, or 
anything else to show here.
  Example: http://pasteboard.co/tA0ZWVdKq.png

  Actual result:
  An empty white box. No clear indication that there's nothing to show. 
Example: http://pasteboard.co/tzRJkUxqE.png

  Note: the HTML is present, but a CSS property is hiding the div containing
  its message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1661350/+subscriptions



[Yahoo-eng-team] [Bug 1630507] Re: Different use of args in ungettext_lazy causes error on syncing with translation infra

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/386954
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=0409080e7b93a91323bbb69e1811ef79b17c4e6d
Submitter: Jenkins
Branch: master

commit 0409080e7b93a91323bbb69e1811ef79b17c4e6d
Author: Ian Y. Choi 
Date:   Sat Oct 15 23:14:52 2016 +0900

i18n: The same use of args with ugettext_lazy

Different use of args in ungettext_lazy causes
error on import job from translation infrastructure
to horizon repository.

The use of variables in singular and plural strings
needs to be same. This commit also adjusts the string
with ugettext_lazy() as other strings are dealt with.

Change-Id: I9a836178b2d615504950545654242c0a4c196723
Closes-Bug: #1630507


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1630507

Title:
  Different use of args in ungettext_lazy causes error on syncing with
  translation infra

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  In
  
http://git.openstack.org/cgit/openstack/horizon/tree/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py#n214
  ,

  If we look at strings for singular and plural on ungettext_lazy(),
  singular string uses only "%(avail)i" arg,
  and plural string uses both "%(req)i" and "%(avail)i" args.

  In Zanata (translation platform), currently, po files on some
  languages are saved if the languages are set to just use singular
  form.

  #: 
openstack_dashboard/dashboards/project/instances/workflows/create_instance.py:214
  #, python-format
  msgid ""
  "The requested instance cannot be launched as you only have %(avail)i of your 
"
  "quota available. "
  msgid_plural ""
  "The requested %(req)i instances cannot be launched as you only have "
  "%(avail)i of your quota available."
  msgstr[0] ""
  "The requested instance cannot be launched as you only have %(avail)i of your 
"
  "quota available. "

  This generates an error when msgfmt command is executed:

  $ msgfmt --check-format -o /dev/null django.po
  django.po:10766: a format specification for argument 'req' doesn't exist in 
'msgstr[0]'
  msgfmt: found 1 fatal error

  Because of this occurrence, there have been job failures for Korean and 
Indonesian language
  to import translated strings to Horizon git repository.

  The current solution would be to add "%(req)i" argument on the
  singular string.
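  In plain-Python terms (without Django, and with made-up helper names), the corrected message pair keeps the argument set identical in both forms, which is what msgfmt's format check requires:

```python
# Corrected message pair: both singular and plural use the same args,
# so msgfmt's --check-format passes even for languages that only keep
# the singular form (msgstr[0]).
singular = ("The requested %(req)i instance cannot be launched as you "
            "only have %(avail)i of your quota available.")
plural = ("The requested %(req)i instances cannot be launched as you "
          "only have %(avail)i of your quota available.")

def quota_message(req, avail):
    # ngettext-style selection; in Horizon this is done by
    # ungettext_lazy(singular, plural, count).
    msg = singular if req == 1 else plural
    return msg % {'req': req, 'avail': avail}

print(quota_message(1, 0))
print(quota_message(3, 2))
```

  Because `%(req)i` now appears in the singular string too, a po file that maps both forms to a single `msgstr[0]` no longer trips the "format specification for argument 'req' doesn't exist" error.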

  
  Reference
  [1] 
http://lists.openstack.org/pipermail/openstack-dev/2016-September/103941.html
  [2] 
http://lists.openstack.org/pipermail/openstack-i18n/2016-October/002476.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1630507/+subscriptions



[Yahoo-eng-team] [Bug 1661326] Re: neutron-ovs-agent fails to start on Windows due to Linux-specific imports

2017-02-02 Thread Ihar Hrachyshka
[2] suggests that oslo.rootwrap is not Win32 friendly. Added the project
to list of affected.

** Also affects: oslo.rootwrap
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1661326

Title:
  neutron-ovs-agent fails to start on Windows due to Linux-specific
  imports

Status in neutron:
  In Progress
Status in oslo.rootwrap:
  New

Bug description:
  Currently, the neutron-ovs-agent service cannot start on Windows, due
  to a few Linux-specific imports. [1][2]

  [1] http://paste.openstack.org/show/597391/
  [2] http://paste.openstack.org/show/597392/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1661326/+subscriptions



[Yahoo-eng-team] [Bug 1660878] Re: test_reboot_deleted_server fails with 409 "Cannot 'reboot' instance while it is in vm_state building"

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/427775
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=8ba92778fe14b47ad4ff5b53022e0550a93f37d3
Submitter: Jenkins
Branch: master

commit 8ba92778fe14b47ad4ff5b53022e0550a93f37d3
Author: Matt Riedemann 
Date:   Wed Feb 1 10:35:32 2017 -0500

Ensure build request exists before creating instance

When creating instances in conductor, the build requests are
coming from the compute API and might be stale by the time
the instance is created, i.e. the build request might have
been deleted from the database before the instance is actually
created in a cell.

This is trivial to recreate; all you need to do is create a
server and then immediately delete it, then try to perform
some kind of action on the server expecting it to be deleted
but the action might not return a 404 for a missing instance.
We're seeing this in Tempest runs where the expected 404 for
the deleted instance is a 409 because the test is trying to
perform an action on a server while it's building, which is
generally not allowed.

This fixes the issue by making a last-second check to make
sure the build request still exists before the instance is
created in a cell.

Change-Id: I6c32d5a4086a227d59ad7b1f6f50e7e532c74c84
Closes-Bug: #1660878
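The last-second check the commit describes can be sketched roughly like this; the class and function names here are hypothetical, not nova's actual API:

```python
class BuildRequestNotFound(Exception):
    pass

class FakeDB:
    """Stand-in for the build_requests table."""
    def __init__(self):
        self.build_requests = {'inst-1'}

    def get_build_request(self, uuid):
        if uuid not in self.build_requests:
            raise BuildRequestNotFound(uuid)
        return uuid

def schedule_and_build(db, uuid, created):
    # Last-second check: the user may have deleted the server (and thus
    # its build request) while scheduling was in flight.  If it is gone,
    # abort instead of creating an orphaned instance in a cell.
    try:
        db.get_build_request(uuid)
    except BuildRequestNotFound:
        return 'aborted'
    created.append(uuid)
    return 'created'

db = FakeDB()
created = []
print(schedule_and_build(db, 'inst-1', created))  # created
db.build_requests.clear()                         # user deleted the server
print(schedule_and_build(db, 'inst-2', created))  # aborted
```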


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660878

Title:
  test_reboot_deleted_server fails with 409 "Cannot 'reboot' instance
  while it is in vm_state building"

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  http://logs.openstack.org/91/426991/1/check/gate-tempest-dsvm-neutron-
  full-ubuntu-xenial/f218227/console.html#_2017-02-01_02_06_33_592237

  2017-02-01 02:06:33.592237 | 
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_reboot_deleted_server[id-581a397d-5eab-486f-9cf9-1014bbd4c984,negative]
  2017-02-01 02:06:33.592305 | 
--
  2017-02-01 02:06:33.592321 | 
  2017-02-01 02:06:33.592340 | Captured traceback:
  2017-02-01 02:06:33.592367 | ~~~
  2017-02-01 02:06:33.592398 | Traceback (most recent call last):
  2017-02-01 02:06:33.592453 |   File 
"tempest/api/compute/servers/test_servers_negative.py", line 190, in 
test_reboot_deleted_server
  2017-02-01 02:06:33.593010 | server['id'], type='SOFT')
  2017-02-01 02:06:33.593072 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 485, in assertRaises
  2017-02-01 02:06:33.593110 | self.assertThat(our_callable, matcher)
  2017-02-01 02:06:33.593162 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 496, in assertThat
  2017-02-01 02:06:33.593205 | mismatch_error = 
self._matchHelper(matchee, matcher, message, verbose)
  2017-02-01 02:06:33.593266 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 547, in _matchHelper
  2017-02-01 02:06:33.593294 | mismatch = matcher.match(matchee)
  2017-02-01 02:06:33.593345 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 108, in match
  2017-02-01 02:06:33.593388 | mismatch = 
self.exception_matcher.match(exc_info)
  2017-02-01 02:06:33.593443 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py",
 line 62, in match
  2017-02-01 02:06:33.593468 | mismatch = matcher.match(matchee)
  2017-02-01 02:06:33.593515 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 475, in match
  2017-02-01 02:06:33.593544 | reraise(*matchee)
  2017-02-01 02:06:33.593597 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 101, in match
  2017-02-01 02:06:33.593618 | result = matchee()
  2017-02-01 02:06:33.593667 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 1049, in __call__
  2017-02-01 02:06:33.593699 | return 
self._callable_object(*self._args, **self._kwargs)
  2017-02-01 02:06:33.593736 |   File 
"tempest/lib/services/compute/servers_client.py", line 236, in reboot_server
  2017-02-01 02:06:33.593777 | return self.action(server_id, 'reboot', 
**kwargs)
  2017-02-01 02:06:33.593814 |   File 

[Yahoo-eng-team] [Bug 1661326] [NEW] neutron-ovs-agent fails to start on Windows due to Linux-specific imports

2017-02-02 Thread Claudiu Belu
Public bug reported:

Currently, the neutron-ovs-agent service cannot start on Windows, due to
a few Linux-specific imports. [1][2]

[1] http://paste.openstack.org/show/597391/
[2] http://paste.openstack.org/show/597392/

** Affects: neutron
 Importance: Undecided
 Assignee: Claudiu Belu (cbelu)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1661326

Title:
  neutron-ovs-agent fails to start on Windows due to Linux-specific
  imports

Status in neutron:
  In Progress

Bug description:
  Currently, the neutron-ovs-agent service cannot start on Windows, due
  to a few Linux-specific imports. [1][2]

  [1] http://paste.openstack.org/show/597391/
  [2] http://paste.openstack.org/show/597392/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1661326/+subscriptions



[Yahoo-eng-team] [Bug 1660747] Re: test_list_servers_filter_by_error_status intermittently fails with MismatchError on no servers in response

2017-02-02 Thread Ken'ichi Ohmichi
As Matt said, HTTP 202 just means the request was accepted; it does not mean 
the operation has already completed.
However, with cells v1 the API behaves as if the operation completes 
immediately, and the Tempest test relies on that.
So I suspect no Tempest-side work is necessary now.
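If a Tempest-side workaround were wanted anyway, it would be to poll rather than assume the 202-accepted action has already taken effect. A rough sketch with a hypothetical list function (not the real Tempest client):

```python
import time

def wait_for_server_status(list_fn, server_id, status,
                           timeout=10, interval=0.1):
    # Poll until the server appears with the expected status instead of
    # assuming the 202-accepted reset-state has already been applied.
    deadline = time.time() + timeout
    while time.time() < deadline:
        servers = list_fn(status=status)
        if any(s['id'] == server_id for s in servers):
            return True
        time.sleep(interval)
    return False

# Hypothetical client whose state change lands after a short delay,
# mimicking the child-cell -> API-cell sync lag.
state = {'ready_at': time.time() + 0.3}
def fake_list(status):
    if time.time() >= state['ready_at']:
        return [{'id': 's1', 'status': status}]
    return []

print(wait_for_server_status(fake_list, 's1', 'ERROR'))  # True
```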

** Changed in: tempest
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660747

Title:
  test_list_servers_filter_by_error_status intermittently fails with
  MismatchError on no servers in response

Status in OpenStack Compute (nova):
  Fix Released
Status in tempest:
  Opinion

Bug description:
  Seen here:

  http://logs.openstack.org/59/424759/12/gate/gate-tempest-dsvm-cells-
  ubuntu-xenial/d7b1311/console.html#_2017-01-31_17_48_34_663273

  2017-01-31 17:48:34.663337 | Captured traceback:
  2017-01-31 17:48:34.663348 | ~~~
  2017-01-31 17:48:34.663363 | Traceback (most recent call last):
  2017-01-31 17:48:34.663393 |   File 
"tempest/api/compute/admin/test_servers.py", line 59, in 
test_list_servers_filter_by_error_status
  2017-01-31 17:48:34.663414 | self.assertIn(self.s1_id, map(lambda x: 
x['id'], servers))
  2017-01-31 17:48:34.663448 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 417, in assertIn
  2017-01-31 17:48:34.663468 | self.assertThat(haystack, 
Contains(needle), message)
  2017-01-31 17:48:34.663502 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  2017-01-31 17:48:34.663515 | raise mismatch_error
  2017-01-31 17:48:34.663542 | testtools.matchers._impl.MismatchError: 
u'108b4797-74fd-4a00-912a-b7fe0e142888' not in []

  This test resets the state on a server to ERROR:

  2017-01-31 17:48:34.663649 | 2017-01-31 17:28:39,375 504 INFO 
[tempest.lib.common.rest_client] Request 
(ServersAdminTestJSON:test_list_servers_filter_by_error_status): 202 POST 
http://10.23.154.32:8774/v2.1/servers/108b4797-74fd-4a00-912a-b7fe0e142888/action
 0.142s
  2017-01-31 17:48:34.663695 | 2017-01-31 17:28:39,376 504 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
  2017-01-31 17:48:34.663714 | Body: {"os-resetState": {"state": 
"error"}}

  Then tries to list servers by that status and expects to get that one
  back:

  2017-01-31 17:48:34.663883 | 2017-01-31 17:28:39,556 504 INFO 
[tempest.lib.common.rest_client] Request 
(ServersAdminTestJSON:test_list_servers_filter_by_error_status): 200 GET 
http://10.23.154.32:8774/v2.1/servers?status=error 0.179s
  2017-01-31 17:48:34.663955 | 2017-01-31 17:28:39,556 504 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
  2017-01-31 17:48:34.663969 | Body: None
  2017-01-31 17:48:34.664078 | Response - Headers: 
{u'x-openstack-nova-api-version': '2.1', u'vary': 
'X-OpenStack-Nova-API-Version', u'content-length': '15', 'status': '200', 
u'content-type': 'application/json', u'x-compute-request-id': 
'req-91ef16ab-28c3-47c5-b823-6a321bde5c01', u'date': 'Tue, 31 Jan 2017 17:28:39 
GMT', 'content-location': 'http://10.23.154.32:8774/v2.1/servers?status=error', 
u'openstack-api-version': 'compute 2.1', u'connection': 'close'}
  2017-01-31 17:48:34.664094 | Body: {"servers": []}

  And the list is coming back empty, intermittently, with cells v1. So
  there is probably some vm_state change race between the state change
  in the child cell and reporting that back up to the parent API cell.

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Body%3A%20%7B%5C%5C%5C%22servers%5C%5C%5C%22%3A%20%5B%5D%7D%5C%22%20AND%20tags%3A%5C%22console%5C%22%20AND%20build_name%3A%5C
  %22gate-tempest-dsvm-cells-ubuntu-xenial%5C%22=7d

  16 hits in 7 days, check and gate, all failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1660747/+subscriptions



[Yahoo-eng-team] [Bug 1661312] [NEW] Evacuation will corrupt instance allocations

2017-02-02 Thread Dan Smith
Public bug reported:

The following sequence of events will result in a corrupted instance
allocation in placement:

1. Instance running on host A, placement has allocations for instance on host A
2. Host A goes down
3. Instance is evacuated to host B, host B creates duplicated allocations in 
placement for instance
4. Host A comes up, notices that instance is gone, deletes all allocations for 
instance on both hosts A and B
5. Instance now has no allocations for a period
6. Eventually, host B will re-create the allocations for the instance

The period between #4 and #6 will have the scheduler making bad
decisions because it thinks host B is less loaded than it is.
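One way to avoid step 4 is to scope host A's cleanup to its own allocation only, so the evacuation target's allocation survives. A toy sketch of that guard (hypothetical data model, not the placement API):

```python
# Allocations keyed by (instance, host); after the evacuation in step 3
# the instance temporarily has allocations on both hosts.
allocations = {('inst-1', 'hostA'): 4, ('inst-1', 'hostB'): 4}

def cleanup_after_evacuation(instance, my_host):
    # Guarded cleanup: host A removes only ITS OWN allocation, never the
    # instance's allocations on other hosts, so host B's record survives
    # and the scheduler keeps seeing host B's real load.
    allocations.pop((instance, my_host), None)

cleanup_after_evacuation('inst-1', 'hostA')
print(sorted(allocations))  # [('inst-1', 'hostB')]
```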

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661312

Title:
  Evacuation will corrupt instance allocations

Status in OpenStack Compute (nova):
  New

Bug description:
  The following sequence of events will result in a corrupted instance
  allocation in placement:

  1. Instance running on host A, placement has allocations for instance on host 
A
  2. Host A goes down
  3. Instance is evacuated to host B, host B creates duplicated allocations in 
placement for instance
  4. Host A comes up, notices that instance is gone, deletes all allocations 
for instance on both hosts A and B
  5. Instance now has no allocations for a period
  6. Eventually, host B will re-create the allocations for the instance

  The period between #4 and #6 will have the scheduler making bad
  decisions because it thinks host B is less loaded than it is.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661312/+subscriptions



[Yahoo-eng-team] [Bug 1661303] [NEW] neutron-ns-metadata-proxy process failing under python3.5

2017-02-02 Thread Michael Johnson
Public bug reported:

When running under Python 3.5, we are seeing neutron-ns-metadata-proxy
fail repeatedly on Ocata RC1 master.

This is causing instances to fail to boot under a python3.5 devstack.

A gate example is here:
http://logs.openstack.org/99/407099/25/check/gate-rally-dsvm-py35-neutron-neutron-ubuntu-xenial/4741e0d/logs/screen-q-l3.txt.gz?level=ERROR#_2017-02-02_11_41_52_029

2017-02-02 11:41:52.029 29906 ERROR neutron.agent.linux.external_process
[-] metadata-proxy for router with uuid
79af72b9-6b17-4864-8088-5dc96b9271df not found. The process should not
have died

Running this locally I see the debug output of the configuration
settings and it immediately exits with no error output.

To reproduce:
Stack a fresh devstack with "USE_PYTHON3=True" set in your localrc. (NOTE: 
there are other Python 3.x devstack bugs that may reconfigure your host in bad 
ways once you do this. Plan to stack with this setting only on a throw-away 
host, or one you plan to use for Python 3.x going forward.)

Once this devstack is up and running, set up a neutron network and subnet,
then boot a CirrOS instance on that new subnet.

Check the cirros console.log to see that it cannot find a metadata
datasource (due to this change disabling configdrive:
https://github.com/openstack-dev/devstack/commit/7682ea88a6ab8693b215646f16748dbbc2476cc4).

Check the q-l3.txt log to see the repeated "The process should not have
died" messages.

You will also note that the cirros instance did not receive its SSH
keys and requires password login due to the missing datasource.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1661303

Title:
  neutron-ns-metadata-proxy process failing under python3.5

Status in neutron:
  New

Bug description:
  When running under Python 3.5, we are seeing neutron-ns-metadata-proxy
  fail repeatedly on Ocata RC1 master.

  This is causing instances to fail to boot under a python3.5 devstack.

  A gate example is here:
  
http://logs.openstack.org/99/407099/25/check/gate-rally-dsvm-py35-neutron-neutron-ubuntu-xenial/4741e0d/logs/screen-q-l3.txt.gz?level=ERROR#_2017-02-02_11_41_52_029

  2017-02-02 11:41:52.029 29906 ERROR
  neutron.agent.linux.external_process [-] metadata-proxy for router
  with uuid 79af72b9-6b17-4864-8088-5dc96b9271df not found. The process
  should not have died

  Running this locally I see the debug output of the configuration
  settings and it immediately exits with no error output.

  To reproduce:
  Stack a fresh devstack with "USE_PYTHON3=True" set in your localrc. (NOTE: 
there are other Python 3.x devstack bugs that may reconfigure your host in bad 
ways once you do this. Plan to stack with this setting only on a throw-away 
host, or one you plan to use for Python 3.x going forward.)

  Once this devstack is up and running, set up a neutron network and
  subnet, then boot a CirrOS instance on that new subnet.

  Check the cirros console.log to see that it cannot find a metadata
  datasource (Due to this change disabling configdrive:
  https://github.com/openstack-
  dev/devstack/commit/7682ea88a6ab8693b215646f16748dbbc2476cc4).

  Check the q-l3.txt log to see the repeated "The process should not
  have died" messages.

  You will also note that the cirros instance did not receive its SSH
  keys and requires password login due to the missing datasource.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1661303/+subscriptions



[Yahoo-eng-team] [Bug 1661292] [NEW] VFS: Cannot open root device "LABEL=cloudimg-rootfs" or unknown-block(0, 0): error -6

2017-02-02 Thread Jason Hobbs
Public bug reported:

Description
===
A KVM instance failed to boot with a kernel panic after hitting this error:

VFS: Cannot open root device "LABEL=cloudimg-rootfs" or
unknown-block(0,0): error -6

Steps to reproduce
==
This doesn't reproduce reliably; it seems to be a race condition.

* I started an instance through the API. It had no attached storage
and was attached to a single network. It used an Ubuntu Xenial cloud
image which booted successfully on a different instance on a different
compute node.

Expected Result
===
* I expected the instance to boot successfully

Actual Result
=
* The instance failed to boot with a kernel panic:
http://pastebin.ubuntu.com/23911493/

* I used virsh destroy to stop that instance, then started it again
through the API, and it worked.

Environment
===
xenial/newton/kvm/openvswitch, no attached storage.

ii  nova-common  2:14.0.1-0ubuntu1~cloud0
all  OpenStack Compute - common files
ii  nova-compute 2:14.0.1-0ubuntu1~cloud0
all  OpenStack Compute - compute node base
ii  nova-compute-kvm 2:14.0.1-0ubuntu1~cloud0
all  OpenStack Compute - compute node (KVM)
ii  nova-compute-libvirt 2:14.0.1-0ubuntu1~cloud0
all  OpenStack Compute - compute node libvirt support
ii  python-nova  2:14.0.1-0ubuntu1~cloud0
all  OpenStack Compute Python libraries
ii  python-novaclient2:6.0.0-0ubuntu1~cloud0 
all  client library for OpenStack Compute API - Python 2.7
ii  libvirt-bin  1.3.1-1ubuntu10.6   
amd64programs for the libvirt library
ii  libvirt0:amd64   1.3.1-1ubuntu10.6   
amd64library for interfacing with different virtualization systems
ii  nova-compute-libvirt 2:14.0.1-0ubuntu1~cloud0
all  OpenStack Compute - compute node libvirt support
ii  python-libvirt   1.3.1-1ubuntu1  
amd64libvirt Python bindings

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: oil oil-guestos-fail

** Attachment added: "sosreport-hayward-49-20170202154349.tar.xz"
   
https://bugs.launchpad.net/bugs/1661292/+attachment/4812126/+files/sosreport-hayward-49-20170202154349.tar.xz

** Description changed:

  Description
  ===
  An kvm instance failed to boot with a kernel panic after hitting this error:
  
  VFS: Cannot open root device "LABEL=cloudimg-rootfs" or unknown-
  block(0,0): error -6
  
  Steps to reproduce
  ==
  This doesn't reproduce reliably, it seems to be a race condition.
  
  * I started an instance through the API.  It had not attached storage,
  and was attached to a single network. It used an Ubuntu Xenial cloud
  image which booted successfully on a different instance on a different
  compute node.
  
  Expected Result
  ===
  * I expected the instance to boot successfully
  
  Actual Result
  =
  * The instance failed to boot with a kernel panic:
  http://pastebin.ubuntu.com/23911493/
  
+ * I used virsh destroy to stop that instance, then started it again
+ through the API, and it worked.
+ 
  Environment
  ===
  xenial/newtown/kvm/openvswitch, no attached storage.
  
  ii  nova-common  2:14.0.1-0ubuntu1~cloud0
all  OpenStack Compute - common files
  ii  nova-compute 2:14.0.1-0ubuntu1~cloud0
all  OpenStack Compute - compute node base
  ii  nova-compute-kvm 2:14.0.1-0ubuntu1~cloud0
all  OpenStack Compute - compute node (KVM)
  ii  nova-compute-libvirt 2:14.0.1-0ubuntu1~cloud0
all  OpenStack Compute - compute node libvirt support
  ii  python-nova  2:14.0.1-0ubuntu1~cloud0
all  OpenStack Compute Python libraries
  ii  python-novaclient2:6.0.0-0ubuntu1~cloud0 
all  client library for OpenStack Compute API - Python 2.7
  ii  libvirt-bin  1.3.1-1ubuntu10.6   
amd64programs for the libvirt library
  ii  libvirt0:amd64   1.3.1-1ubuntu10.6   
amd64library for interfacing with different virtualization systems
  ii  nova-compute-libvirt 2:14.0.1-0ubuntu1~cloud0
all  OpenStack Compute - compute node libvirt support
  ii  python-libvirt   1.3.1-1ubuntu1  
amd64libvirt Python bindings

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).

[Yahoo-eng-team] [Bug 1572794] Re: nova sometimes doesn't clean up neutron ports when VM spawning fails

2017-02-02 Thread James Page
12.0.2 was a liberty version; I've been testing with SR-IOV ports today
on Newton, and I've not observed this issue when VM creation fails;
ports drop back to unbound and can be consumed straight away.

So I suspect this is actually fixed.

** Changed in: nova-cloud-controller (Juju Charms Collection)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1572794

Title:
  nova sometimes doesn't clean up neutron ports when VM spawning fails

Status in OpenStack Compute (nova):
  New
Status in nova-cloud-controller package in Juju Charms Collection:
  Invalid

Bug description:
  Hi,

  It appears that sometimes, nova doesn't clean up ports when spawning
  the instance fails. I'm using SR-IOV so I'm creating ports manually.

  Example :
  Create port with :
  $ neutron port-create  --name direct --binding:vnic_type direct

  Boot instance using said port :
  $ nova boot --image  --flavor  --key-name admin_key --nic 
port-id= vm_direct

  If VM creation fails and the failed VM gets deleted, the port is still bound :
  $ neutron port-show direct2
  +-----------------------+-----------------------------------------------------------------------------------------------------------+
  | Field                 | Value                                                                                                     |
  +-----------------------+-----------------------------------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                                                      |
  | allowed_address_pairs |                                                                                                           |
  | binding:host_id       | xx                                                                                                        |
  | binding:profile       | {"pci_slot": ":04:10.2", "physical_network": "physnet1", "pci_vendor_info": "8086:10ca"}                  |
  | binding:vif_details   | {"port_filter": false, "vlan": "1234"}                                                                    |
  | binding:vif_type      | hw_veb                                                                                                    |
  | binding:vnic_type     | direct                                                                                                    |
  | device_id             | 2aecc61b-e3c9-4b1f-9e47-574733705a91                                                                      |
  | device_owner          | compute:None                                                                                              |
  | dns_assignment        | {"hostname": "host-10-190-5-35", "ip_address": "10.190.5.35", "fqdn": "host-10-190-5-35.openstacklocal."} |
  | dns_name              |                                                                                                           |
  | extra_dhcp_opts       |                                                                                                           |
  | fixed_ips             | {"subnet_id": "72cfdfed-e614-4add-b880-f4c9d3bb89cc", "ip_address": "10.190.5.35"}                        |
  | id                    | f34b55c1-f10e-44c9-8326-5a42996c691a                                                                      |
  | mac_address           | fa:16:3e:5d:da:16                                                                                         |
  | name                  | direct                                                                                                    |
  | network_id            | 200d501c-13df-4625-9f46-d7e28ee18dc2                                                                      |
  | security_groups       | feb7b440-450b-4c7b-aa3f-92f498cd2841                                                                      |
  | status                | BUILD                                                                                                     |
  | tenant_id             | 09fae15a5f6f4acf838a97d202786d25                                                                          |
  +-----------------------+-----------------------------------------------------------------------------------------------------------+

  $ nova show 2aecc61b-e3c9-4b1f-9e47-574733705a91
  ERROR: No server with a name or ID of '2aecc61b-e3c9-4b1f-9e47-574733705a91' exists.
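  Until the root cause is fixed, leaked ports like this can be spotted by
  cross-checking each port's device_id against the instances nova still knows
  about. A minimal sketch with plain dicts (the data shapes and names are
  illustrative, not the neutron/nova client API):

```python
def find_orphaned_ports(ports, live_server_ids):
    """Return ports whose device_id references a server that no longer
    exists; those are candidates for manual cleanup."""
    return [p for p in ports
            if p.get("device_id") and p["device_id"] not in live_server_ids]

ports = [
    # the port from the report: its VM was deleted but it is still bound
    {"id": "f34b55c1-f10e-44c9-8326-5a42996c691a",
     "device_id": "2aecc61b-e3c9-4b1f-9e47-574733705a91"},
    # a healthy port attached to a live instance (hypothetical)
    {"id": "11111111-2222-3333-4444-555555555555",
     "device_id": "live-instance"},
]
orphans = find_orphaned_ports(ports, live_server_ids={"live-instance"})
```

  Feeding this the real port list and server list would flag the port above,
  since nova no longer knows its device_id.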

  This is similar to LP#1423845, but I do have the fix from that bug
  already in my code.

  Package versions are : 2:12.0.2-0ubuntu1~cloud0, running on trusty

  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1572794/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1550422] Re: Failed to connect to server(code:1006) - Console access issue

2017-02-02 Thread James Page
Marking all bug tasks as invalid, as switching to using memcache
resolved this issue.

** Changed in: nova-cloud-controller (Ubuntu)
   Status: New => Invalid

** Changed in: nova
   Status: Incomplete => Invalid

** Changed in: nova-cloud-controller (Juju Charms Collection)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1550422

Title:
  Failed to connect to server(code:1006) -  Console access issue

Status in OpenStack Compute (nova):
  Invalid
Status in nova-cloud-controller package in Ubuntu:
  Invalid
Status in nova-cloud-controller package in Juju Charms Collection:
  Invalid

Bug description:
  I am getting a "Failed to connect to server(code:1006)" error when
  accessing the console of an instance. Refreshing the page fixes this.

  I am deploying cs:trusty/nova-cloud-controller-63 along with
  cs:trusty/openstack-dashboard-16 for kilo. I have also tried deploying the
  latest revision of both charms but hitting the same issue.

  Here is my config for both charms:

  openstack-dashboard:
    vip: "192.168.100.57"
  hacluster-dashboard:
    cluster_count: 3
    corosync_transport: unicast
    monitor_host: "192.168.100.1"

  nova-cloud-controller:
    network-manager: "Neutron"
    console-access-protocol: "novnc"
    openstack-origin: "cloud:trusty-kilo"
    quantum-security-groups: "yes"
    region: "serverstack"
    vip: "192.168.100.58"
  hacluster-nova-cc:
    cluster_count: 3
    monitor_host: "192.168.100.1"
    corosync_transport: unicast

  Let me know if this is a config issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1550422/+subscriptions



[Yahoo-eng-team] [Bug 1572794] Re: nova sometimes doesn't clean up neutron ports when VM spawning fails

2017-02-02 Thread James Page
This is more of a nova problem than a charm problem, so raising a nova
bug task.

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1572794

Title:
  nova sometimes doesn't clean up neutron ports when VM spawning fails

Status in OpenStack Compute (nova):
  New
Status in nova-cloud-controller package in Juju Charms Collection:
  Invalid

Bug description:
  Hi,

  It appears that sometimes, nova doesn't clean up ports when spawning
  the instance fails. I'm using SR-IOV so I'm creating ports manually.

  Example :
  Create port with :
  $ neutron port-create  --name direct --binding:vnic_type direct

  Boot instance using said port :
  $ nova boot --image  --flavor  --key-name admin_key --nic port-id= vm_direct

  If VM creation fails and the failed VM gets deleted, the port is still bound :
  $ neutron port-show direct2
  +-----------------------+-----------------------------------------------------------------------------------------------------------+
  | Field                 | Value                                                                                                     |
  +-----------------------+-----------------------------------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                                                      |
  | allowed_address_pairs |                                                                                                           |
  | binding:host_id       | xx                                                                                                        |
  | binding:profile       | {"pci_slot": ":04:10.2", "physical_network": "physnet1", "pci_vendor_info": "8086:10ca"}                  |
  | binding:vif_details   | {"port_filter": false, "vlan": "1234"}                                                                    |
  | binding:vif_type      | hw_veb                                                                                                    |
  | binding:vnic_type     | direct                                                                                                    |
  | device_id             | 2aecc61b-e3c9-4b1f-9e47-574733705a91                                                                      |
  | device_owner          | compute:None                                                                                              |
  | dns_assignment        | {"hostname": "host-10-190-5-35", "ip_address": "10.190.5.35", "fqdn": "host-10-190-5-35.openstacklocal."} |
  | dns_name              |                                                                                                           |
  | extra_dhcp_opts       |                                                                                                           |
  | fixed_ips             | {"subnet_id": "72cfdfed-e614-4add-b880-f4c9d3bb89cc", "ip_address": "10.190.5.35"}                        |
  | id                    | f34b55c1-f10e-44c9-8326-5a42996c691a                                                                      |
  | mac_address           | fa:16:3e:5d:da:16                                                                                         |
  | name                  | direct                                                                                                    |
  | network_id            | 200d501c-13df-4625-9f46-d7e28ee18dc2                                                                      |
  | security_groups       | feb7b440-450b-4c7b-aa3f-92f498cd2841                                                                      |
  | status                | BUILD                                                                                                     |
  | tenant_id             | 09fae15a5f6f4acf838a97d202786d25                                                                          |
  +-----------------------+-----------------------------------------------------------------------------------------------------------+

  $ nova show 2aecc61b-e3c9-4b1f-9e47-574733705a91
  ERROR: No server with a name or ID of '2aecc61b-e3c9-4b1f-9e47-574733705a91' exists.

  This is similar to LP#1423845, but I do have the fix from that bug
  already in my code.

  Package versions are : 2:12.0.2-0ubuntu1~cloud0, running on trusty

  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1572794/+subscriptions



[Yahoo-eng-team] [Bug 1661258] [NEW] Deleted ironic node has an inventory in nova_api database

2017-02-02 Thread Vladyslav Drok
Public bug reported:

Running latest devstack, ironic and nova, I get the following error when
I request an instance:

| fault | {"message": "Node 6cc8803d-4e77-4948-b653-663d8d5e52b7 could not be found. (HTTP 404)", "code": 500, "details": "
  File \"/opt/stack/nova/nova/compute/manager.py\", line 1780, in _do_build_and_run_instance
    filter_properties)
  File \"/opt/stack/nova/nova/compute/manager.py\", line 2016, in _build_and_run_instance
    instance_uuid=instance.uuid, reason=six.text_type(e))
", "created": "2017-02-02T13:42:01Z"} |

On ironic side, this node was indeed deleted, it is also deleted from
nova.compute_nodes table:

created_at:            2017-02-02 12:20:27
updated_at:            2017-02-02 13:20:15
deleted_at:            2017-02-02 13:21:15
id:                    2
service_id:            NULL
vcpus:                 1
memory_mb:             1536
local_gb:              10
vcpus_used:            0
memory_mb_used:        0
local_gb_used:         0
hypervisor_type:       ironic
hypervisor_version:    1
cpu_info:
disk_available_least:  10
free_ram_mb:           1536
free_disk_gb:          10
current_workload:      0
running_vms:           0
hypervisor_hostname:   6cc8803d-4e77-4948-b653-663d8d5e52b7
deleted:               2
host_ip:               192.168.122.22
supported_instances:   [["x86_64", "baremetal", "hvm"]]
pci_stats:             {"nova_object.version": "1.1", "nova_object.changes": ["objects"], "nova_object.name": "PciDevicePoolList", "nova_object.data": {"objects": []}, "nova_object.namespace": "nova"}
metrics:               []
extra_resources:       NULL
stats:                 {"cpu_arch": "x86_64"}
numa_topology:         NULL
host:                  ubuntu
ram_allocation_ratio:  1
cpu_allocation_ratio:  0
uuid:                  035be695-0797-44b3-930b-42349e40579e
disk_allocation_ratio: 0

But in nova_api.inventories it's still there:

| created_at          | updated_at | id | resource_provider_id | resource_class_id | total | reserved | min_unit | max_unit | step_size | allocation_ratio |
| 2017-02-02 13:20:14 | NULL       | 13 |                    2 |                 0 |     1 |        0 |        1 |        1 |         1 |               16 |
| 2017-02-02 13:20:14 | NULL       | 14 |                    2 |                 1 |  1536 |        0 |        1 |     1536 |         1 |                1 |
| 2017-02-02 13:20:14 | NULL       | 15 |                    2 |                 2 |    10 |        0 |        1 |       10 |         1 |                1 |

nova_api.resource_providers bit:
| created_at          | updated_at          | id | uuid                                 | name                                 | generation | can_host |
| 2017-02-02 12:20:27 | 2017-02-02 13:20:14 |  2 | 035be695-0797-44b3-930b-42349e40579e | 6cc8803d-4e77-4948-b653-663d8d5e52b7 |          7 |        0 |

Waiting for resource tracker run did not help, node's been deleted for
~30 minutes already and the inventory is still there.
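Whatever the root cause, the invariant being violated can be stated with plain
dicts: every resource provider (and its inventories) should correspond to a
live compute node. A rough illustration (the data shapes are simplified, not
the placement API):

```python
def stale_providers(resource_providers, live_node_uuids):
    """A provider is stale when its name (for ironic, the node UUID used as
    hypervisor_hostname) no longer maps to a live, undeleted compute node."""
    return [rp for rp in resource_providers
            if rp["name"] not in live_node_uuids]

providers = [{"id": 2,
              "uuid": "035be695-0797-44b3-930b-42349e40579e",
              "name": "6cc8803d-4e77-4948-b653-663d8d5e52b7"}]
# the ironic node was deleted, so no live compute node carries that name
stale = stale_providers(providers, live_node_uuids=set())
```

In the state described above, the provider (and its three inventory rows)
would be flagged as stale, which is exactly what the resource tracker is not
cleaning up.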

Code versions:
Devstack commit debc695ddfc8b7b2aeb53c01c624e15f69ed9fa2 Updated from generate-devstack-plugins-list.
Nova commit 5dad7eaef7f8562425cce6b233aed610ca2d3148 Merge "doc: update the man page entry for nova-manage db sync"
Ironic commit 5071b99835143ebcae876432e2982fd27faece10 Merge "Remove deprecated heartbeat policy check"

If it is anyhow relevant, I also run two nova-computes on the same host,
I've set host=test for the second one, other than that all configs are
the same. I was trying to reproduce another cell-related issue, and was
creating/deleting ironic nodes, so that they map to the second nova-
compute by the hash_ring.

** Affects: nova
 Importance: Undecided
 Status: 

[Yahoo-eng-team] [Bug 1661243] [NEW] reserved_host_disk_mb reporting incorrectly

2017-02-02 Thread Andy McCrae
Public bug reported:

We're getting the following failure in our gate jobs for nova:

Failed to update inventory for resource provider 2c2e388f-be21-4461
-bdcf-e1b20b9c90e2: 400 400 Bad Request

The server could not comply with the request since it is either
malformed or otherwise incorrect.

 Unable to update inventory for resource provider 2c2e388f-be21-4461
-bdcf-e1b20b9c90e2: Invalid inventory for 'DISK_GB' on resource provider
'2c2e388f-be21-4461-bdcf-e1b20b9c90e2'. The reserved value is greater
than or equal to total.

=

The reserved_host_disk_mb is set to 2048 and the compute host is showing free 
disk of 57GB.
The build is done based on head of master of nova, and uses QEMU

=

Link to compute logs showing reserved_host_disk_mb: 2048
http://logs.openstack.org/57/418457/48/check/gate-openstack-ansible-os_nova-ansible-func-ubuntu-xenial/8ce42e7/logs/host/nova/nova-compute.log.txt.gz#_2017-02-01_20_43_03_006

Link to compute logs showing free_disk=57GB
http://logs.openstack.org/57/418457/48/check/gate-openstack-ansible-os_nova-ansible-func-ubuntu-xenial/8ce42e7/logs/host/nova/nova-compute.log.txt.gz#_2017-02-01_20_43_03_513

Link to compute logs showing the Error:
http://logs.openstack.org/57/418457/48/check/gate-openstack-ansible-os_nova-ansible-func-ubuntu-xenial/8ce42e7/logs/host/nova/nova-compute.log.txt.gz#_2017-02-01_20_43_05_506

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661243

Title:
  reserved_host_disk_mb reporting incorrectly

Status in OpenStack Compute (nova):
  New

Bug description:
  We're getting the following failure in our gate jobs for nova:

  Failed to update inventory for resource provider 2c2e388f-be21-4461
  -bdcf-e1b20b9c90e2: 400 400 Bad Request

  The server could not comply with the request since it is either
  malformed or otherwise incorrect.

   Unable to update inventory for resource provider 2c2e388f-be21-4461
  -bdcf-e1b20b9c90e2: Invalid inventory for 'DISK_GB' on resource
  provider '2c2e388f-be21-4461-bdcf-e1b20b9c90e2'. The reserved value is
  greater than or equal to total.

  =

  The reserved_host_disk_mb is set to 2048 and the compute host is showing free disk of 57 GB.
  The build is based on the head of nova's master branch, and uses QEMU.

  =

  Link to compute logs showing reserved_host_disk_mb: 2048
  
http://logs.openstack.org/57/418457/48/check/gate-openstack-ansible-os_nova-ansible-func-ubuntu-xenial/8ce42e7/logs/host/nova/nova-compute.log.txt.gz#_2017-02-01_20_43_03_006

  Link to compute logs showing free_disk=57GB
  
http://logs.openstack.org/57/418457/48/check/gate-openstack-ansible-os_nova-ansible-func-ubuntu-xenial/8ce42e7/logs/host/nova/nova-compute.log.txt.gz#_2017-02-01_20_43_03_513

  Link to compute logs showing the Error:
  
http://logs.openstack.org/57/418457/48/check/gate-openstack-ansible-os_nova-ansible-func-ubuntu-xenial/8ce42e7/logs/host/nova/nova-compute.log.txt.gz#_2017-02-01_20_43_05_506

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661243/+subscriptions



[Yahoo-eng-team] [Bug 1661113] Re: Filtering servers by terminated_at does not work

2017-02-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/427964
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=1bbecbd98f368c8891d291e187481b5d1c2374d9
Submitter: Jenkins
Branch:master

commit 1bbecbd98f368c8891d291e187481b5d1c2374d9
Author: Matt Riedemann 
Date:   Wed Feb 1 18:20:16 2017 -0500

Fix the terminated_at field in the server query params schema

The field is 'terminated_at', not 'terminate_at', which was
probably just a typo. This fixes the field name in the server
query parameter schema and adds a test to show it working.

Change-Id: I279fa7b40da1d1057a9774e2dc380425454f11dd
Closes-Bug: #1661113


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661113

Title:
  Filtering servers by terminated_at does not work

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I noticed this in review after the code was merged:

  
https://review.openstack.org/#/c/408571/41/nova/api/openstack/compute/schemas/servers.py@277

  That field should be 'terminated_at' to match the DB field.
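The effect of the typo is easy to model with a plain whitelist: schema
validation only lets whitelisted query parameters through, so the correctly
spelled filter never reaches the DB query. A simplified stand-in for the
schema check (names hypothetical, not nova's actual validator):

```python
# Before the fix the whitelist carried the typo'd key; after, the real one.
BUGGY_ALLOWED = {"name", "status", "terminate_at"}
FIXED_ALLOWED = {"name", "status", "terminated_at"}

def accepted_filters(query_params, allowed):
    """Keep only whitelisted query params, which is roughly what
    additionalProperties=False validation does for server-list filters."""
    return {k: v for k, v in query_params.items() if k in allowed}

params = {"terminated_at": "2017-02-01T00:00:00Z"}
ignored = accepted_filters(params, BUGGY_ALLOWED)   # filter silently dropped
applied = accepted_filters(params, FIXED_ALLOWED)   # filter reaches the query
```

With the buggy whitelist the user-supplied terminated_at filter is dropped,
so the listing silently returns unfiltered results.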

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661113/+subscriptions



[Yahoo-eng-team] [Bug 1654183] Re: Token based authentication in Client class does not work

2017-02-02 Thread Martin André
** Also affects: tripleo-quickstart
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1654183

Title:
  Token based authentication in Client class does not work

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in python-novaclient:
  Fix Released
Status in tripleo:
  Fix Released
Status in tripleo-quickstart:
  In Progress
Status in OpenStack DBaaS (Trove):
  Fix Released

Bug description:
  With the newly released novaclient (7.0.0) it seems that token-based
  authentication does not work in novaclient.client.Client.

  I get the following response back from the Nova server:

  Malformed request URL: URL's project_id
  'e0beb44615f34d54b8a9a9203a3e5a1c' doesn't match Context's project_id
  'None' (HTTP 400)

  I just created the Nova client in the following way:
  Client(
      2,
      endpoint_type="public",
      service_type='compute',
      auth_token=auth_token,
      tenant_id="devel",
      region_name="RegionOne",
      auth_url=keystone_url,
      insecure=True,
      endpoint_override=nova_endpoint  # https://.../v2/e0beb44615f34d54b8a9a9203a3e5a1c
  )

  After this, novaclient performs a new token-based authentication without a
  project_id (tenant_id), so the new token does not belong to any project.
  Besides, if we already have a token, why does novaclient request a new one
  from keystone? (Other clients, such as Heat and Neutron, do not request a
  token from keystone if one is already provided to the client class.)

  The bug was introduced by the following commit:
  https://github.com/openstack/python-novaclient/commit/8409e006c5f362922baae9470f14c12e0443dd70

  +    if not auth and auth_token:
  +        auth = identity.Token(auth_url=auth_url,
  +                              token=auth_token)

  When project_id is also passed into Token authentication, everything works
  fine, and the newly requested token belongs to the right project/tenant.
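  The fix amounts to carrying the project scope into the token
  re-authentication. The helper below only assembles the plugin kwargs; that
  the keystoneauth Token plugin accepts project_id/project_name scoping
  arguments is the assumption here, and the helper name is hypothetical:

```python
def build_token_auth_kwargs(auth_url, auth_token, project_id=None,
                            project_name=None, project_domain_name=None):
    """Assemble kwargs for a token-based auth plugin, keeping the project
    scope so the re-issued token is project-scoped rather than unscoped."""
    kwargs = {"auth_url": auth_url, "token": auth_token}
    # Without a project scope, keystone issues an unscoped token and nova
    # rejects the request ("URL's project_id ... doesn't match Context's").
    if project_id:
        kwargs["project_id"] = project_id
    if project_name:
        kwargs["project_name"] = project_name
        kwargs["project_domain_name"] = project_domain_name or "Default"
    return kwargs

scoped = build_token_auth_kwargs("http://keystone:5000/v3", "tok",
                                 project_name="devel")
```

  The buggy commit corresponds to calling this with only auth_url and token,
  which yields an unscoped re-authentication.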

  Note: Originally this problem appeared in the Mistral project of OpenStack,
  which uses the client classes directly from its actions with token-based
  authentication.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1654183/+subscriptions



[Yahoo-eng-team] [Bug 1659391] Re: Server list API does not show existing servers if cell service disabled and default cell not configured

2017-02-02 Thread Valeriy Ponomaryov
Updated description. Bug is valid.

** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1659391

Title:
  Server list API does not show existing servers if cell service
  disabled and default cell not configured

Status in OpenStack Compute (nova):
  New

Bug description:
  After the merge of commit [1], the command "nova list --all-" started returning
  an empty list when servers exist. Reverting this change makes the API work again.
  This can happen when cell services are disabled and no default cell is
  configured. But the "list" operation should always show all scheduled servers.

  Steps to reproduce:
  1) install latest nova that contains commit [1], not configuring cell service 
and not creating default cell.
  2) create VM
  3) run any of following commands:
  $ nova list --all-
  $ openstack server list --all
  $ openstack server show %name-of-server%
  $ nova show %name-of-server%

  Expected: we see data of server we created on second step.
  Actual: empty list on "list" command or "NotFound" error on "show" command.

  [1] https://review.openstack.org/#/c/396775/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1659391/+subscriptions



[Yahoo-eng-team] [Bug 1643911] Re: libvirt randomly crashes on xenial nodes with "*** Error in `/usr/sbin/libvirtd': malloc(): memory corruption:"

2017-02-02 Thread Dr. David Alan Gilbert
** Also affects: libvirt (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1643911

Title:
  libvirt randomly crashes on xenial nodes with "*** Error in
  `/usr/sbin/libvirtd': malloc(): memory corruption:"

Status in libvirt:
  New
Status in OpenStack Compute (nova):
  Confirmed
Status in libvirt package in Ubuntu:
  New

Bug description:
  Seen here:

  http://logs.openstack.org/44/386844/17/check/gate-tempest-dsvm-
  neutron-full-ubuntu-xenial/91befad/logs/syslog.txt.gz#_Nov_22_00_27_46

  Nov 22 00:27:46 ubuntu-xenial-rax-ord-5717228 virtlogd[16875]: End of file 
while reading data: Input/output error
  Nov 22 00:27:46 ubuntu-xenial-rax-ord-5717228 libvirtd[16847]: *** Error in 
`/usr/sbin/libvirtd': malloc(): memory corruption: 0x558ff1c7c800 ***

  http://logs.openstack.org/44/386844/17/check/gate-tempest-dsvm-
  neutron-full-ubuntu-
  xenial/91befad/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-11-22_00_27_46_571

  2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [req-d6b33315-636c-4ebc-99e4-8cac236e1f7f tempest-ServerDiskConfigTestJSON-191847812 tempest-ServerDiskConfigTestJSON-191847812] [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] Failed to allocate network(s)
  2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] Traceback (most recent call last):
  2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e]   File "/opt/stack/new/nova/nova/compute/manager.py", line 2021, in _build_resources
  2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e]     requested_networks, security_groups)
  2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e]   File "/opt/stack/new/nova/nova/compute/manager.py", line 1445, in _build_networks_for_instance
  2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e]     requested_networks, macs, security_groups, dhcp_options)
  2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e]   File "/opt/stack/new/nova/nova/compute/manager.py", line 1461, in _allocate_network
  2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e]     self._update_resource_tracker(context, instance)
  2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e]   File "/opt/stack/new/nova/nova/compute/manager.py", line 564, in _update_resource_tracker
  2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e]     self.driver.node_is_available(instance.node)):
  2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e]   File "/opt/stack/new/nova/nova/virt/driver.py", line 1383, in node_is_available
  2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e]     if nodename in self.get_available_nodes():
  2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 6965, in get_available_nodes
  2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e]     return [self._host.get_hostname()]
  2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e]   File "/opt/stack/new/nova/nova/virt/libvirt/host.py", line 708, in get_hostname
  2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e]     hostname = self.get_connection().getHostname()
  2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e]   File "/opt/stack/new/nova/nova/virt/libvirt/host.py", line 420, in get_connection
  2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e]     raise exception.HypervisorUnavailable(host=CONF.host)
  2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] HypervisorUnavailable: Connection to the hypervisor is broken on host: ubuntu-xenial-rax-ord-5717228

  2016-11-22 00:27:46.279+: 16847: error : qemuMonitorIORead:580 :
  Unable to read from monitor: Connection reset by peer

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22***%20Error%20in%20%60%2Fusr%2Fsbin%2Flibvirtd'%3A%20malloc()%3A%20memory%20corruption%3A%5C%22%20AND%20tags%3A%5C%22syslog%5C%22=7d

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1661189] [NEW] calls to cinder always in user context

2017-02-02 Thread Maurice Schreiber
Public bug reported:

My user is not Admin in Cinder. On attaching a volume nova tries to
update the volume admin metadata, but this fails:

"Policy doesn't allow volume:update_volume_admin_metadata to be
performed."

I would have expected that this call from nova to cinder happens in
context of an elevated service user, and not with the user's
context/token.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661189

Title:
  calls to cinder always in user context

Status in OpenStack Compute (nova):
  New

Bug description:
  My user is not Admin in Cinder. On attaching a volume nova tries to
  update the volume admin metadata, but this fails:

  "Policy doesn't allow volume:update_volume_admin_metadata to be
  performed."

  I would have expected that this call from nova to cinder happens in
  context of an elevated service user, and not with the user's
  context/token.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661189/+subscriptions



[Yahoo-eng-team] [Bug 1661195] [NEW] Servers filter by access_ip_v4 does not filter servers

2017-02-02 Thread Ghanshyam Mann
Public bug reported:

Recently we added the server list query param validation in JSON schema.
The schema allows 'accessIPv4' and 'accessIPv6', which does not match what
the DB has: it is 'access_ip_v4' and 'access_ip_v6' in the DB.

Below schema should be fixed accordingly to allow the 'access_ip_v4' and
'access_ip_v6' as valid filter.

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/servers.py#L343-L344


On another note:
'accessIPv4' and 'accessIPv6' are what the API accepts and returns; internally
the API layer translates those params to their respective DB fields
('access_ip_v4' and 'access_ip_v6').
So users know nothing about 'access_ip_v4' and 'access_ip_v6'; those names are
not actually part of the API representation.

List filter and sort params should be the same as the fields returned in GET
or accepted in POST/PUT, which are 'accessIPv4' and 'accessIPv6'. But that
would be new attribute support in filters and can be done later after
discussion.
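The translation the report asks for can be sketched as a one-line mapping from
API-facing names to DB columns (a simplification of what the API layer does
for POST/PUT bodies; the function name is hypothetical):

```python
# API-facing filter names and the DB columns they map to.
API_TO_DB = {"accessIPv4": "access_ip_v4", "accessIPv6": "access_ip_v6"}

def normalize_filters(query_params):
    """Translate API-style filter keys to DB column names so the filter
    actually matches a column in the servers table."""
    return {API_TO_DB.get(k, k): v for k, v in query_params.items()}

api_style = normalize_filters({"accessIPv4": "10.0.0.5"})
db_style = normalize_filters({"access_ip_v4": "10.0.0.5"})
```

With such a translation in place, either spelling in the query string ends up
filtering on the real access_ip_v4 column.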

** Affects: nova
 Importance: Undecided
 Assignee: Ghanshyam Mann (ghanshyammann)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Ghanshyam Mann (ghanshyammann)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661195

Title:
  Servers filter by access_ip_v4 does not filter servers

Status in OpenStack Compute (nova):
  New

Bug description:
  Recently we added the server list query param validation in JSON schema.
  The schema allows 'accessIPv4' and 'accessIPv6', which does not match what
  the DB has: it is 'access_ip_v4' and 'access_ip_v6' in the DB.

  Below schema should be fixed accordingly to allow the 'access_ip_v4'
  and 'access_ip_v6' as valid filter.

  
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/servers.py#L343-L344

  
  On another note:
  'accessIPv4' and 'accessIPv6' are what the API accepts and returns;
  internally the API layer translates those params to their respective DB
  fields ('access_ip_v4' and 'access_ip_v6').
  So users know nothing about 'access_ip_v4' and 'access_ip_v6'; those names
  are not actually part of the API representation.

  List filter and sort params should be the same as the fields returned in GET
  or accepted in POST/PUT, which are 'accessIPv4' and 'accessIPv6'. But that
  would be new attribute support in filters and can be done later after
  discussion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661195/+subscriptions



[Yahoo-eng-team] [Bug 1661184] Re: libvirtd: malloc.c:3720: _int_malloc: Assertion `(unsigned long) (size) >= (unsigned long) (nb)' failed.

2017-02-02 Thread Thomas Morin
openstack logstash shows 12 hits over the past week

** Also affects: libvirt
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661184

Title:
  libvirtd: malloc.c:3720: _int_malloc: Assertion `(unsigned long)
  (size) >= (unsigned long) (nb)' failed.

Status in libvirt:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Feb 01 22:28:44 ubuntu-xenial-osic-cloud1-s3500-7066924
  libvirtd[16775]: libvirtd: malloc.c:3720: _int_malloc: Assertion
  `(unsigned long) (size) >= (unsigned long) (nb)' failed.


  ( http://logs.openstack.org/29/380329/21/check/gate-tempest-dsvm-
  neutron-full-ubuntu-xenial/299cecb/logs/syslog.txt.gz#_Feb_01_22_28_44
  )

To manage notifications about this bug go to:
https://bugs.launchpad.net/libvirt/+bug/1661184/+subscriptions



[Yahoo-eng-team] [Bug 1661184] [NEW] libvirtd: malloc.c:3720: _int_malloc: Assertion `(unsigned long) (size) >= (unsigned long) (nb)' failed.

2017-02-02 Thread Thomas Morin
Public bug reported:

Feb 01 22:28:44 ubuntu-xenial-osic-cloud1-s3500-7066924 libvirtd[16775]:
libvirtd: malloc.c:3720: _int_malloc: Assertion `(unsigned long) (size)
>= (unsigned long) (nb)' failed.


( http://logs.openstack.org/29/380329/21/check/gate-tempest-dsvm-
neutron-full-ubuntu-xenial/299cecb/logs/syslog.txt.gz#_Feb_01_22_28_44 )

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661184

Title:
  libvirtd: malloc.c:3720: _int_malloc: Assertion `(unsigned long)
  (size) >= (unsigned long) (nb)' failed.

Status in OpenStack Compute (nova):
  New

Bug description:
  Feb 01 22:28:44 ubuntu-xenial-osic-cloud1-s3500-7066924
  libvirtd[16775]: libvirtd: malloc.c:3720: _int_malloc: Assertion
  `(unsigned long) (size) >= (unsigned long) (nb)' failed.


  ( http://logs.openstack.org/29/380329/21/check/gate-tempest-dsvm-
  neutron-full-ubuntu-xenial/299cecb/logs/syslog.txt.gz#_Feb_01_22_28_44
  )

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661184/+subscriptions



[Yahoo-eng-team] [Bug 1661181] [NEW] IPv6 subnet update creates RPC client with each call

2017-02-02 Thread Gary Kotton
Public bug reported:

Commit fc7cae844cb783887b8a8eb4d9c3286116d740e6 instantiates the RPC client
class once per call. We should do this only once, as in l3_db.py.
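The pattern requested here — build the RPC client once and reuse it, as
l3_db.py does — can be sketched with a lazily initialized class attribute
(all names below are hypothetical stand-ins, not neutron's actual classes):

```python
class Notifier:
    """Stand-in for an RPC client whose construction is expensive."""
    constructions = 0

    def __init__(self):
        Notifier.constructions += 1

class SubnetUpdater:
    _notifier = None  # created lazily, once per process

    @property
    def notifier(self):
        # Build the client on first use only; every later call reuses it.
        if SubnetUpdater._notifier is None:
            SubnetUpdater._notifier = Notifier()
        return SubnetUpdater._notifier

    def update_subnet(self):
        self.notifier  # use the shared client instead of building a new one

updater = SubnetUpdater()
for _ in range(100):
    updater.update_subnet()
```

After 100 updates only one client has been constructed, versus 100 with the
per-call pattern the commit introduced.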

** Affects: neutron
 Importance: High
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1661181

Title:
  IPv6 subnet update creates RPC client with each call

Status in neutron:
  In Progress

Bug description:
  Commit fc7cae844cb783887b8a8eb4d9c3286116d740e6 instantiates the RPC client
  class once per call. We should do this only once, as in l3_db.py.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1661181/+subscriptions



[Yahoo-eng-team] [Bug 1655718] Re: nav bar intermittently does not display

2017-02-02 Thread Rob Cresswell
*** This bug is a duplicate of bug 1656045 ***
https://bugs.launchpad.net/bugs/1656045

** This bug has been marked a duplicate of bug 1656045
   Dashboard panels intermittently disappear when they are in the 'default' 
panel group.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1655718

Title:
  nav bar intermittently does not display

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  From time to time, the left hand nav bar does not display correctly.

  This seems to occur a bit more frequently when you switch projects
  with different roles (admin to _member_ or back and forth), and also
  occurs when horizon is provided by several backend servers (behind a
  load balancer).

  I'm suspecting that this only occurs with several horizon nodes behind
  a LB, and could be a timing or caching issue - but it's not 100% clear
  when this does or does not occur.

  Rob had said he would look into this more.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1655718/+subscriptions
