[Yahoo-eng-team] [Bug 2060974] [NEW] neutron-dhcp-agent attempts to read pid.haproxy but can't

2024-04-11 Thread Thomas Goirand
Public bug reported:

Hi,

>From neutron-dhcp-agent.log, I can see it's trying to access:

/var/lib/neutron/external/pids/*.pid.haproxy

These files used to have the following Unix permissions (at least in
Debian 11, aka Bullseye):

-rw-r--r--

However, in Debian 12 (aka Bookworm), for some reason, they now are:

-rw-r-

so the agent doesn't have the necessary rights to read these files.

Note that in devstack, these PIDs are owned by the stack user, so that's
not an issue. But that's not the case with the Debian packages, where
haproxy writes these pid files as root:root while the neutron-dhcp-agent
runs as neutron:neutron and therefore can't read the files.

One possibility would be reading the PIDs through privsep.

Another fix would be to understand why the PID files aren't world
readable. At this point, I can't tell why.
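
For the privsep option, here is a minimal sketch of how it could look (my
assumption only, with hypothetical names; this is not actual Neutron code):

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    pid_read_context = priv_context.PrivContext(
        "neutron",
        cfg_section="privsep",
        pypath=__name__ + ".pid_read_context",
        # CAP_DAC_READ_SEARCH lets the privileged helper read the file
        # regardless of its mode or ownership.
        capabilities=[caps.CAP_DAC_READ_SEARCH],
    )

    @pid_read_context.entrypoint
    def read_haproxy_pid(pid_file):
        """Read a root-owned haproxy pid file on behalf of the agent."""
        with open(pid_file) as f:
            return int(f.read().strip())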

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2060974

Title:
  neutron-dhcp-agent attempts to read pid.haproxy but can't

Status in neutron:
  New

Bug description:
  Hi,

  From neutron-dhcp-agent.log, I can see it's trying to access:

  /var/lib/neutron/external/pids/*.pid.haproxy

  These files used to have the following Unix permissions (at least in
  Debian 11, aka Bullseye):

  -rw-r--r--

  However, in Debian 12 (aka Bookworm), for some reason, they now are:

  -rw-r-

  so the agent doesn't have the necessary rights to read these files.

  Note that in devstack, these PIDs are owned by the stack user, so
  that's not an issue. But that's not the case with the Debian packages,
  where haproxy writes these pid files as root:root while the
  neutron-dhcp-agent runs as neutron:neutron and therefore can't read
  the files.

  One possibility would be reading the PIDs through privsep.

  Another fix would be to understand why the PID files aren't world
  readable. At this point, I can't tell why.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2060974/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2038474] [NEW] many unit tests issues with objects compared to strings

2023-10-04 Thread Thomas Goirand
Public bug reported:

Running the Horizon unit tests in Debian Unstable leads to many unit test
failures like the one below.

My instinct tells me that this is Python 3.12 related, but I'm not sure.
Here's the pip freeze output with the installed build-dependencies of the
package:

alabaster==0.7.12
appdirs==1.4.4
asgiref==3.7.2
async-timeout==4.0.3
attrs==23.1.0
autopage==0.4.0
Babel==2.10.3
blinker==1.6.2
calmjs==3.4.2
calmjs.parse==1.2.5
calmjs.types==1.0.1
certifi==2023.7.22
chardet==5.2.0
charset-normalizer==3.2.0
cliff==4.2.0
cmd2==2.4.3+ds
coverage==7.2.7
cryptography==38.0.4
csscompressor==0.9.5
debtcollector==2.5.0
decorator==5.1.1
Deprecated==1.2.14
deprecation==2.0.7
Django==4.2.5
django-appconf==1.0.5
django-compressor==4.0
django-debreach==2.1.0
django-pyscss==2.0.2
dnspython==2.4.2
docutils==0.19
dogpile.cache==1.1.8
dulwich==0.21.6
enmerkar==0.7.1
eventlet==0.33.1
exceptiongroup==1.1.3
execnet==2.0.0
extras==1.0.0
fasteners==0.17.3
fixtures==4.0.1
flake8==5.0.4
freezegun==1.2.1
futurist==2.4.1
greenlet==2.0.2
h11==0.14.0
hacking==4.1.0
idna==3.3
imagesize==1.4.1
importlib-metadata==4.12.0
iniconfig==1.1.1
iso8601==1.0.2
jaraco.classes==3.2.1
jeepney==0.8.0
Jinja2==3.1.2
jmespath==1.0.1
jsonpatch==1.32
jsonpointer==2.3
jsonschema==4.10.3
keyring==24.2.0
keystoneauth1==5.3.0
lxml==4.9.3
Mako==1.2.4.dev0
MarkupSafe==2.1.3
mccabe==0.7.0
monotonic==1.6
more-itertools==10.1.0
msgpack==1.0.3
netaddr==0.8.0
netifaces==0.11.0
oauthlib==3.2.2
openstackdocstheme==1.20.0
openstacksdk==1.0.1
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==2.8.1
oslo.concurrency==5.1.1
oslo.config==9.1.1
oslo.context==5.1.1
oslo.i18n==6.0.0
oslo.log==5.2.0
oslo.policy==4.1.1
oslo.serialization==5.1.1
oslo.upgradecheck==2.1.1
oslo.utils==6.1.0
osprofiler==3.4.3
outcome==1.2.0
packaging==23.1
pbr==5.11.1
pep8==1.7.1
pluggy==1.3.0
ply==3.11
prettytable==3.6.0
pycodestyle==2.10.0
pyflakes==2.5.0
Pygments==2.15.1
pyinotify==0.9.6
PyJWT==2.7.0
pymongo==3.11.0
pyOpenSSL==23.0.0
pyparsing==3.1.1
pyperclip==1.8.2
pyrsistent==0.18.1
pyScss==1.4.0
pytest==7.4.2
pytest-django==4.5.2
pytest-xdist==3.3.1
python-cinderclient==9.3.0
python-dateutil==2.8.2
python-glanceclient==4.3.0
python-keystoneclient==5.1.0
python-memcached==1.58
python-neutronclient==9.0.0
python-novaclient==18.3.0
python-swiftclient==4.2.0
pytz==2023.3.post1
PyYAML==6.0.1
rcssmin==1.1.0
redis==4.3.4
requests==2.31.0
requestsexceptions==1.4.0
rfc3986==1.5.0
rjsmin==1.2.0
roman==3.3
SecretStorage==3.3.3
selenium==4.13.0
semantic-version==2.9.0
simplejson==3.19.1
six==1.16.0
sniffio==1.2.0
snowballstemmer==2.2.0
sortedcontainers==2.4.0
Sphinx==5.3.0
sqlparse==0.4.2
stevedore==5.1.0
testscenarios==0.5.0
testtools==2.5.0
trio==0.22.2
trio-websocket==0.10.3
urllib3==1.26.16
warlock==2.0.1
wcwidth==0.2.5
WebOb==1.8.6
wrapt==1.14.1
wsproto==1.2.0
XStatic==1.0.3
XStatic-Angular==1.8.2.2
XStatic-Angular-Bootstrap==2.5.0.0
XStatic-Angular-FileUpload==12.0.4.0
XStatic-Angular-Gettext==2.4.1.0
XStatic-Angular-lrdragndrop==1.0.2.2
XStatic-Angular-Schema-Form==0.8.13.0
XStatic-angular-ui-router==0.3.1.4
XStatic-Bootstrap-Datepicker==1.3.1.0
XStatic-Bootstrap-SCSS==3.4.1.0
XStatic-bootswatch==3.3.7.0
XStatic-D3==3.5.17.0
XStatic-Font-Awesome==4.7.0.0
XStatic-Hogan==2.0.0.2
XStatic-Jasmine==2.4.1.0
XStatic-jQuery==3.5.1.0
XStatic-JQuery-Migrate==3.3.2.1
XStatic-jquery-ui==1.12.0.1
XStatic-JQuery.quicksearch==2.0.4.1
XStatic-JQuery.TableSorter==2.14.5.1
XStatic-JSEncrypt==2.3.1.1
XStatic-Magic-Search==0.2.5.1
XStatic-mdi==1.6.50.2
XStatic-objectpath==1.2.1.0
XStatic-Rickshaw==1.5.0.2
XStatic-roboto-fontface==0.5.0.0
XStatic-smart-table==1.4.13.2
XStatic-Spin==1.2.8.2
XStatic-term.js==0.0.7.0
XStatic-tv4==1.2.7.0
xvfbwrapper==0.2.9
zipp==1.0.0

and here's a typical failure below. Note that there are maybe more than
three dozen issues like it:

__ WorkflowsTests.test_workflow_registration ___
[gw3] linux -- Python 3.11.5 /usr/bin/python3.11

self = 

def test_workflow_registration(self):
req = self.factory.get("/foo")
flow = WorkflowForTesting(req)
>   self.assertQuerysetEqual(flow.steps,
 ['',
  ''])

horizon/test/unit/workflows/test_workflows.py:328: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/usr/lib/python3/dist-packages/django/test/testcases.py:1330: in 
assertQuerysetEqual
return self.assertQuerySetEqual(*args, **kw)
/usr/lib/python3/dist-packages/django/test/testcases.py:1346: in 
assertQuerySetEqual
return self.assertEqual(list(items), values, msg=msg)
E   AssertionError: Lists differ: [, ] != ['', '']
E   
E   First differing element 0:
E   
E   ''
E   
E   - [, ]
E   + ['', '']
E   ?  + +  + +

Probably, these used to be rendered as strings, but not anymore? Anyway,
calling repr() on each object fixes it, with something like this:
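
Here is a sketch of what I mean (the expected strings below are just
placeholders on my side; the real ones depend on the Step classes used in
the Horizon test):

    self.assertQuerysetEqual(
        [repr(step) for step in flow.steps],
        ['<StepOne: action_one>', '<StepTwo: action_two>'],
    )
    # or, equivalently, keep the old repr()-based behaviour by passing
    # transform=repr so each queryset item is repr()'d before comparison:
    self.assertQuerysetEqual(
        flow.steps,
        ['<StepOne: action_one>', '<StepTwo: action_two>'],
        transform=repr,
    )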

   

[Yahoo-eng-team] [Bug 2025985] [NEW] ovn-octavia-provider tests fail on 32 bits arch

2023-07-05 Thread Thomas Goirand
Public bug reported:

Hi,

The ovn-octavia-provider package cannot migrate from Debian Unstable to
Testing because its autopkgtest (which runs the package unit tests in the
Debian CI) is failing on all 32-bit architectures. Please have a look at
this page:

https://qa.debian.org/excuses.php?package=ovn-octavia-provider

Please fix the unit tests, and let me know where to find the patch (zigo
on IRC).

Cheers,

Thomas Goirand (zigo)

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025985

Title:
  ovn-octavia-provider tests fail on 32 bits arch

Status in neutron:
  New

Bug description:
  Hi,

  The ovn-octavia-provider package cannot migrate from Debian Unstable
  to Testing because its autopkgtest (which runs the package unit tests
  in the Debian CI) is failing on all 32-bit architectures. Please have
  a look at this page:

  https://qa.debian.org/excuses.php?package=ovn-octavia-provider

  Please fix the unit tests, and let me know where to find the patch
  (zigo on IRC).

  Cheers,

  Thomas Goirand (zigo)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2025985/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1996527] [NEW] Unit test failure with Python 3.11

2022-11-14 Thread Thomas Goirand
Public bug reported:

Hi,

In Debian, we're trying to switch to Python 3.11 before the Bookworm
freeze in January.

Rebuilding Neutron under Python 3.11 fails:

FAIL: 
neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestOVNMechanismDriverPortsV2.test__port_provisioned_no_binding
neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestOVNMechanismDriverPortsV2.test__port_provisioned_no_binding
--
testtools.testresult.real._StringException: Traceback (most recent call last):
  File 
"/<>/neutron/tests/unit/plugins/ml2/drivers/ovn/mech_driver/test_mech_driver.py",
 line 2817, in setUp
ovn_conf.cfg.CONF.set_override('ovn_metadata_enabled', False,
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2077, in 
__inner
result = f(self, *args, **kwargs)
 
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2460, in 
set_override
opt_info = self._get_opt_info(name, group)
   ^^^
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2869, in 
_get_opt_info
group = self._get_group(group)
^^
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2838, in 
_get_group
raise NoSuchGroupError(group_name)
oslo_config.cfg.NoSuchGroupError: no such group [ovn]

and 145 more like this one...

This looks weird to me though, and unrelated to OVN.
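
For what it's worth, the error class is easy to reproduce standalone; a
minimal sketch of the mechanism (my own illustration, not Neutron code):

    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    try:
        # Nothing has registered the [ovn] group yet, so this fails exactly
        # like the setUp() above.
        conf.set_override('ovn_metadata_enabled', False, group='ovn')
    except cfg.NoSuchGroupError as exc:
        print(exc)  # -> no such group [ovn]

    # Once the group/option is registered, the same override succeeds:
    conf.register_opts(
        [cfg.BoolOpt('ovn_metadata_enabled', default=True)], group='ovn')
    conf.set_override('ovn_metadata_enabled', False, group='ovn')

So it smells like whatever used to register the [ovn] options as an import
side effect no longer runs (or runs later) under Python 3.11.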

Cheers,

Thomas

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1996527

Title:
  Unit test failure with Python 3.11

Status in neutron:
  New

Bug description:
  Hi,

  In Debian, we're trying to switch to Python 3.11 before the Bookworm
  freeze in January.

  Rebuilding Neutron under Python 3.11 fails:

  FAIL: 
neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestOVNMechanismDriverPortsV2.test__port_provisioned_no_binding
  
neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestOVNMechanismDriverPortsV2.test__port_provisioned_no_binding
  --
  testtools.testresult.real._StringException: Traceback (most recent call last):
File 
"/<>/neutron/tests/unit/plugins/ml2/drivers/ovn/mech_driver/test_mech_driver.py",
 line 2817, in setUp
  ovn_conf.cfg.CONF.set_override('ovn_metadata_enabled', False,
File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2077, in 
__inner
  result = f(self, *args, **kwargs)
   
File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2460, in 
set_override
  opt_info = self._get_opt_info(name, group)
 ^^^
File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2869, in 
_get_opt_info
  group = self._get_group(group)
  ^^
File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2838, in 
_get_group
  raise NoSuchGroupError(group_name)
  oslo_config.cfg.NoSuchGroupError: no such group [ovn]

  and 145 more like this one...

  This looks weird to me though, and unrelated to OVN.

  Cheers,

  Thomas

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1996527/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1993502] [NEW] failing unit tests when not running them all

2022-10-19 Thread Thomas Goirand
Public bug reported:

Looks like a bunch of OVN unit tests depend heavily on the test order.

When rebuilding the Debian Zed package of Neutron under Debian Unstable,
I get 200+ unit test failures like the ones below. Using tox, running:

tox -e py3 --
neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver

works fine. However, running a single test class like this:

tox -e py3 --
neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestOVNMechanismDriverSecurityGroup

simply fails. Under Debian, I had to add all of the following to the
unit test blacklist:

- plugins.ml2.drivers.ovn.mech_driver.TestOVNMechanismDriverSecurityGroup.*
- services.ovn_l3.test_plugin.OVNL3ExtrarouteTests.*

Please help me fix this.
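
As a sketch of a possible guard that would make such tests order
independent (an assumption on my side, not actual Neutron code; the stub
option below is only illustrative, the real fix would register Neutron's
actual [ovn] options):

    from oslo_config import cfg

    def _ensure_ovn_dns_servers_opt(conf=cfg.CONF):
        """Register a minimal [ovn] group if no earlier import did it."""
        try:
            conf.set_override('dns_servers', ['8.8.8.8'], group='ovn')
        except cfg.NoSuchGroupError:
            conf.register_opts(
                [cfg.ListOpt('dns_servers', default=[])], group='ovn')
            conf.set_override('dns_servers', ['8.8.8.8'], group='ovn')

Calling something like that at the top of setUp() would avoid depending on
another test module having imported the OVN config module first.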

Below are examples of the 2 types of failure.

==
FAIL: 
neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestOVNMechanismDriverSecurityGroup.test_update_sg_duplicate_rule_multi_ports
neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestOVNMechanismDriverSecurityGroup.test_update_sg_duplicate_rule_multi_ports
--
testtools.testresult.real._StringException: Traceback (most recent call last):
  File 
"/<>/neutron/tests/unit/plugins/ml2/drivers/ovn/mech_driver/test_mech_driver.py",
 line 3733, in setUp
cfg.CONF.set_override('dns_servers', ['8.8.8.8'], group='ovn')
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2077, in 
__inner
result = f(self, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2460, in 
set_override
opt_info = self._get_opt_info(name, group)
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2869, in 
_get_opt_info
group = self._get_group(group)
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2838, in 
_get_group
raise NoSuchGroupError(group_name)
oslo_config.cfg.NoSuchGroupError: no such group [ovn]


==
FAIL: 
neutron.tests.unit.services.ovn_l3.test_plugin.OVNL3ExtrarouteTests.test__notify_gateway_port_ip_changed
neutron.tests.unit.services.ovn_l3.test_plugin.OVNL3ExtrarouteTests.test__notify_gateway_port_ip_changed
--
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2219, in 
__getattr__
return self._get(name)
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2653, in _get
value, loc = self._do_get(name, group, namespace)
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2671, in 
_do_get
info = self._get_opt_info(name, group)
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2876, in 
_get_opt_info
raise NoSuchOptError(opt_name, group)
oslo_config.cfg.NoSuchOptError: no such option ovn in group [DEFAULT]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/<>/neutron/tests/unit/services/ovn_l3/test_plugin.py", 
line 1738, in setUp
super(test_l3.L3BaseForIntTests, self).setUp(
  File "/<>/neutron/tests/unit/db/test_db_base_plugin_v2.py", line 
163, in setUp
self.api = router.APIRouter()
  File "/<>/neutron/api/v2/router.py", line 21, in APIRouter
return pecan_app.v2_factory(None, **local_config)
  File "/<>/neutron/pecan_wsgi/app.py", line 47, in v2_factory
startup.initialize_all()
  File "/<>/neutron/pecan_wsgi/startup.py", line 39, in 
initialize_all
manager.init()
  File "/<>/neutron/manager.py", line 301, in init
NeutronManager.get_instance()
  File "/<>/neutron/manager.py", line 252, in get_instance
cls._create_instance()
  File "/usr/lib/python3/dist-packages/oslo_concurrency/lockutils.py", line 
414, in inner
return f(*args, **kwargs)
  File "/<>/neutron/manager.py", line 238, in _create_instance
cls._instance = cls()
  File "/<>/neutron/manager.py", line 132, in __init__
self._load_service_plugins()
  File "/<>/neutron/manager.py", line 211, in _load_service_plugins
self._create_and_add_service_plugin(provider)
  File "/<>/neutron/manager.py", line 214, in 
_create_and_add_service_plugin
plugin_inst = self._get_plugin_instance('neutron.service_plugins',
  File "/<>/neutron/manager.py", line 162, in _get_plugin_instance
plugin_inst = plugin_class()
  File "/<>/neutron/quota/resource_registry.py", line 124, in 
wrapper
return f(*args, **kwargs)
  File "/<>/neutron/services/ovn_l3/plugin.py", line 92, in 
__init__
self.scheduler = l3_ovn_scheduler.get_scheduler()
  File "/<>/neutron/scheduler/l3_ovn_scheduler.py", line 153, in 
get_scheduler
return OVN_SCHEDULER_STR_TO_CLASS[ovn_conf.get_ovn_l3_scheduler()]()
  File 

[Yahoo-eng-team] [Bug 1990121] [NEW] Nova 26 needs to depend on os-traits >= 2.9.0

2022-09-19 Thread Thomas Goirand
Public bug reported:

Without the latest os-traits, we get unit test failures like below.

==
FAIL: 
nova.tests.unit.compute.test_pci_placement_translator.TestTranslator.test_trait_normalization_09
nova.tests.unit.compute.test_pci_placement_translator.TestTranslator.test_trait_normalization_09
--
testtools.testresult.real._StringException: pythonlogging:'': {{{
2022-09-17 10:46:54,848 WARNING [oslo_policy.policy] JSON formatted policy_file 
support is deprecated since Victoria release. You need to use YAML format which 
will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool 
to convert existing JSON-formatted policy file to YAML-formatted in backward 
compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
2022-09-17 10:46:54,849 WARNING [oslo_policy.policy] JSON formatted policy_file 
support is deprecated since Victoria release. You need to use YAML format which 
will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool 
to convert existing JSON-formatted policy file to YAML-formatted in backward 
compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
2022-09-17 10:46:54,851 WARNING [oslo_policy.policy] Policy Rules 
['os_compute_api:extensions', 'os_compute_api:os-floating-ip-pools', 
'os_compute_api:os-quota-sets:defaults', 
'os_compute_api:os-availability-zone:list', 'os_compute_api:limits', 
'project_member_api', 'project_reader_api', 'project_member_or_admin', 
'project_reader_or_admin', 'os_compute_api:limits:other_project', 
'os_compute_api:os-lock-server:unlock:unlock_override', 
'os_compute_api:servers:create:zero_disk_flavor', 
'compute:servers:resize:cross_cell', 
'os_compute_api:os-shelve:unshelve_to_host'] specified in policy files are the 
same as the defaults provided by the service. You can remove these rules from 
policy files which will make maintenance easier. You can detect these redundant 
rules by ``oslopolicy-list-redundant`` tool also.
}}}

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/ddt.py", line 191, in wrapper
return func(self, *args, **kwargs)
  File 
"/<>/nova/tests/unit/compute/test_pci_placement_translator.py", 
line 92, in test_trait_normalization
ppt._get_traits_for_dev({"traits": trait_names})
  File "/<>/nova/compute/pci_placement_translator.py", line 78, in 
_get_traits_for_dev
os_traits.COMPUTE_MANAGED_PCI_DEVICE
AttributeError: module 'os_traits' has no attribute 'COMPUTE_MANAGED_PCI_DEVICE'
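
A quick way to check whether the installed os-traits is new enough (just a
sketch; the >= 2.9.0 lower bound is the one from this bug's title):

    import os_traits

    # On an os-traits that is too old the attribute is simply absent, which
    # is exactly what the AttributeError above shows.
    print(getattr(os_traits, 'COMPUTE_MANAGED_PCI_DEVICE', 'missing'))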

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1990121

Title:
  Nova 26 needs to depend on os-traits >= 2.9.0

Status in OpenStack Compute (nova):
  New

Bug description:
  Without the latest os-traits, we get unit test failures like below.

  ==
  FAIL: 
nova.tests.unit.compute.test_pci_placement_translator.TestTranslator.test_trait_normalization_09
  
nova.tests.unit.compute.test_pci_placement_translator.TestTranslator.test_trait_normalization_09
  --
  testtools.testresult.real._StringException: pythonlogging:'': {{{
  2022-09-17 10:46:54,848 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2022-09-17 10:46:54,849 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2022-09-17 10:46:54,851 WARNING [oslo_policy.policy] Policy Rules 
['os_compute_api:extensions', 'os_compute_api:os-floating-ip-pools', 
'os_compute_api:os-quota-sets:defaults', 
'os_compute_api:os-availability-zone:list', 'os_compute_api:limits', 
'project_member_api', 'project_reader_api', 'project_member_or_admin', 
'project_reader_or_admin', 'os_compute_api:limits:other_project', 
'os_compute_api:os-lock-server:unlock:unlock_override', 
'os_compute_api:servers:create:zero_disk_flavor', 
'compute:servers:resize:cross_cell', 
'os_compute_api:os-shelve:unshelve_to_host'] 

[Yahoo-eng-team] [Bug 1986545] [NEW] websockify open redirection unit test broken with Python >= 3.10.6 standard lib

2022-08-15 Thread Thomas Goirand
Public bug reported:

Lucas Nussbaum reported this Debian bug:

https://bugs.debian.org/1017217

so I started investigating it. It took me a while to understand it was
due to a change in the Python 3.10.6 standard http/server.py library.

Running these 2 unit tests against Python 3.10.5 works:

test_websocketproxy.NovaProxyRequestHandlerTestCase.test_reject_open_redirect
console.test_websocketproxy.NovaProxyRequestHandlerTestCase.test_reject_open_redirect_3_slashes

However, under Python 3.10.6, this fails. The reason isn't the
interpreter itself, but the standard library, which has additional open
redirection protection.

Looking at the changelog here:
https://docs.python.org/3/whatsnew/changelog.html

we see this issue:
https://github.com/python/cpython/issues/87389

which has been addressed by this commit:
https://github.com/python/cpython/commit/defaa2b19a9a01c79c1d5641a8aa179bb10ead3f

If I "fix" the Python 3.10.5 standard library using the 2 lines of code
of the first hunk of this patch, then I can reproduce the issue.

I guess the unit tests should probably be skipped when using Python >=
3.10.6, or adapted somehow. I leave this to the Nova maintainers: for the
Debian package, I'll just skip these 2 unit tests.
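
A minimal sketch of the conditional skip suggested above (my assumption of
how it could look, not the actual Nova change):

    import sys
    import unittest

    class NovaProxyRequestHandlerTestCase(unittest.TestCase):

        @unittest.skipIf(
            sys.version_info >= (3, 10, 6),
            "http/server.py >= 3.10.6 already rejects these open redirects")
        def test_reject_open_redirect(self):
            ...  # existing test body, unchanged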

Cheers,

Thomas Goirand (zigo)

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1986545

Title:
  websockify open redirection unit test broken with Python >= 3.10.6
  standard lib

Status in OpenStack Compute (nova):
  New

Bug description:
  Lucas Nussbaum reported this Debian bug:

  https://bugs.debian.org/1017217

  so I started investigating it. It took me a while to understand it was
  due to a change in the Python 3.10.6 standard http/server.py library.

  Running these 2 unit tests against Python 3.10.5 works:

  test_websocketproxy.NovaProxyRequestHandlerTestCase.test_reject_open_redirect
  
console.test_websocketproxy.NovaProxyRequestHandlerTestCase.test_reject_open_redirect_3_slashes

  However, under Python 3.10.6, this fails. The reason isn't the
  interpreter itself, but the standard library, which has additional
  open redirection protection.

  Looking at the changelog here:
  https://docs.python.org/3/whatsnew/changelog.html

  we see this issue:
  https://github.com/python/cpython/issues/87389

  which has been addressed by this commit:
  
https://github.com/python/cpython/commit/defaa2b19a9a01c79c1d5641a8aa179bb10ead3f

  If I "fix" the Python 3.10.5 standard library using the 2 lines of
  code of the first hunk of this patch, then I can reproduce the issue.

  I guess the unit tests should probably be skipped when using Python >=
  3.10.6, or adapted somehow. I leave this to the Nova maintainers: for
  the Debian package, I'll just skip these 2 unit tests.

  Cheers,

  Thomas Goirand (zigo)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1986545/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1944111] [NEW] Missing __init__.py in nova/db/api

2021-09-20 Thread Thomas Goirand
Public bug reported:

Looks like nova/db/api is missing an __init__.py, which breaks *at
least* my Debian packaging.

** Affects: nova
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1944111

Title:
  Missing __init__.py in nova/db/api

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Looks like nova/db/api is missing an __init__.py, which breaks *at
  least* my Debian packaging.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1944111/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1909972] [NEW] a number of tests fail under ppc64el arch

2021-01-03 Thread Thomas Goirand
Public bug reported:

Hi,

As per this Debian bug entry:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=976954

a number of unit tests are failing under ppc64el arch. Please fix these
or exclude the tests on this arch.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1909972

Title:
  a number of tests fail under ppc64el arch

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi,

  As per this Debian bug entry:
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=976954

  a number of unit tests are failing under ppc64el arch. Please fix
  these or exclude the tests on this arch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1909972/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1909969] [NEW] test only works on i386: test_get_disk_mapping_stable_rescue_ide_cdrom

2021-01-03 Thread Thomas Goirand
Public bug reported:

Hi,

Running test_get_disk_mapping_stable_rescue_ide_cdrom on arm64 or
ppc64el results in failure:

> ==
> FAIL: 
> nova.tests.unit.virt.libvirt.test_blockinfo.LibvirtBlockInfoTest.test_get_disk_mapping_stable_rescue_ide_cdrom
> nova.tests.unit.virt.libvirt.test_blockinfo.LibvirtBlockInfoTest.test_get_disk_mapping_stable_rescue_ide_cdrom
> --
> testtools.testresult.real._StringException: pythonlogging:'': {{{
> 2020-12-05 01:40:10,410 WARNING [oslo_policy.policy] JSON formatted 
> policy_file support is deprecated since Victoria release. You need to use 
> YAML format which will be default in future. You can use 
> ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
> policy file to YAML-formatted in backward compatible way: 
> https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
> 2020-12-05 01:40:10,411 WARNING [oslo_policy.policy] JSON formatted 
> policy_file support is deprecated since Victoria release. You need to use 
> YAML format which will be default in future. You can use 
> ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
> policy file to YAML-formatted in backward compatible way: 
> https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
> 2020-12-05 01:40:10,412 WARNING [oslo_policy.policy] Policy Rules 
> ['os_compute_api:extensions', 'os_compute_api:os-floating-ip-pools', 
> 'os_compute_api:os-quota-sets:defaults', 
> 'os_compute_api:os-availability-zone:list', 'os_compute_api:limits', 
> 'project_reader_api', 'os_compute_api:os-lock-server:unlock:unlock_override', 
> 'os_compute_api:servers:create:zero_disk_flavor', 
> 'compute:servers:resize:cross_cell'] specified in policy files are the same 
> as the defaults provided by the service. You can remove these rules from 
> policy files which will make maintenance easier. You can detect these 
> redundant rules by ``oslopolicy-list-redundant`` tool also.
> }}}
> 
> Traceback (most recent call last):
>   File "/<>/nova/tests/unit/virt/libvirt/test_blockinfo.py", 
> line 352, in test_get_disk_mapping_stable_rescue_ide_cdrom
> self._test_get_disk_mapping_stable_rescue(
>   File "/<>/nova/tests/unit/virt/libvirt/test_blockinfo.py", 
> line 293, in _test_get_disk_mapping_stable_rescue
> self.assertEqual(expected, mapping)
>   File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 415, in 
> assertEqual
> self.assertThat(observed, matcher, message)
>   File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 502, in 
> assertThat
> raise mismatch_error
> testtools.matchers._impl.MismatchError: !=:
> reference = {'disk': {'boot_index': '1', 'bus': 'virtio', 'dev': 'vda', 
> 'type': 'disk'},
>  'disk.rescue': {'bus': 'ide', 'dev': 'hda', 'type': 'cdrom'},
>  'root': {'boot_index': '1', 'bus': 'virtio', 'dev': 'vda', 'type': 'disk'}}
> actual= {'disk': {'boot_index': '1', 'bus': 'virtio', 'dev': 'vda', 
> 'type': 'disk'},
>  'disk.rescue': {'bus': 'scsi', 'dev': 'sda', 'type': 'cdrom'},
>  'root': {'boot_index': '1', 'bus': 'virtio', 'dev': 'vda', 'type': 'disk'}}

Looks like 'bus': 'ide' is not happening as expected...

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1909969

Title:
  test only works on i386: test_get_disk_mapping_stable_rescue_ide_cdrom

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi,

  Running test_get_disk_mapping_stable_rescue_ide_cdrom on arm64 or
  ppc64el results in failure:

  > ==
  > FAIL: 
nova.tests.unit.virt.libvirt.test_blockinfo.LibvirtBlockInfoTest.test_get_disk_mapping_stable_rescue_ide_cdrom
  > 
nova.tests.unit.virt.libvirt.test_blockinfo.LibvirtBlockInfoTest.test_get_disk_mapping_stable_rescue_ide_cdrom
  > --
  > testtools.testresult.real._StringException: pythonlogging:'': {{{
  > 2020-12-05 01:40:10,410 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  > 2020-12-05 01:40:10,411 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing 

[Yahoo-eng-team] [Bug 1908074] [NEW] swift backend: openstack image-download / glance image-save fails

2020-12-14 Thread Thomas Goirand
Public bug reported:

Hi,

I'm the maintainer of OpenStack in Debian. Using a deployment with Swift
as backend, with glance-api.conf configured this way:

[swift]
swift_store_cacert=/etc/ssl/certs/oci-pki-oci-ca-chain.pem
swift_store_create_container_on_put=True
swift_store_endpoint_type=internalURL
swift_store_config_file=/etc/glance/glance-swift.conf
default_swift_reference=ref1

# cat /etc/glance/glance-swift.conf

[ref1]
user = services:glance
key = PASSWORD
auth_version = 3
auth_address = https://:443/identity/v3
user_domain_id=default
project_domain_id=default

I could upload an image to Glance. It's really there in Swift, I checked
for that fact. Though saving the image fails: when I do "openstack image
save", my swift-proxy receives an HTTP/1.1 499 (ie: Client Closed
Request), then glance-api returns a 502 (bad gateway). Unfortunately,
using uwsgi and https for Glance wasn't very verbose, so I downgraded
Glance-api to use eventlet without ssl, and then I could see in the
glance-api.log:

2020-12-14 10:43:47.367 16080 DEBUG swiftclient 
[req-f1a898a5-e202-45b7-80e1-7bb68c3b3f52 dcc01371101246afacc8403030921f53 
d71a5d98aef04386b57736a4ea4f3644 - default default] RESP STATUS: 200 OK 
http_log /usr/lib/python3/dist-packages/swiftclient/client.py:188
2020-12-14 10:43:47.367 16080 DEBUG swiftclient 
[req-f1a898a5-e202-45b7-80e1-7bb68c3b3f52 dcc01371101246afacc8403030921f53 
d71a5d98aef04386b57736a4ea4f3644 - default default] RESP HEADERS: 
{'Content-Type': 'application/octet-stream', 'Etag': 
'aef23ab9c77b8caa2e6042fa30aadd95', 'Last-Modified': 'Mon, 14 Dec 2020 10:18:04 
GMT', 'X-Timestamp': '1607941083.89546', 'Accept-Ranges': 'bytes', 
'X-Trans-Id': 'txdabda903f73c47d1a266e-005fd741e1', 'X-Openstack-Request-Id': 
'txdabda903f73c47d1a266e-005fd741e1', 'Connection': 'close', 
'Strict-Transport-Security': 'max-age=63072000'} http_log 
/usr/lib/python3/dist-packages/swiftclient/client.py:189
2020-12-14 10:43:47.368 16080 WARNING glance.location 
[req-f1a898a5-e202-45b7-80e1-7bb68c3b3f52 dcc01371101246afacc8403030921f53 
d71a5d98aef04386b57736a4ea4f3644 - default default] Get image 
8d2ca7c8-de71-41c1-a6bc-73dd0dd37646 data failed: int() argument must be a 
string, a bytes-like object or a number, not 'NoneType'.
2020-12-14 10:43:47.368 16080 ERROR glance.location 
[req-f1a898a5-e202-45b7-80e1-7bb68c3b3f52 dcc01371101246afacc8403030921f53 
d71a5d98aef04386b57736a4ea4f3644 - default default] Glance tried all active 
locations to get data for image 8d2ca7c8-de71-41c1-a6bc-73dd0dd37646 but all 
have failed.

Then later on, I get some:

  File "/usr/lib/python3/dist-packages/webob/dec.py", line 143, in __call__
return resp(environ, start_response)
  File "/usr/lib/python3/dist-packages/webob/dec.py", line 143, in __call__
return resp(environ, start_response)
TypeError: 'ImageProxy' object is not callable

but that's a consequence of Glance-api not being able to properly
download the image from Swift (so I didn't paste the whole stack dump
above).

My setup is with the packages from Debian (which I maintain), running
Victoria over Buster. If you want to try, with Buster you can do:

apt-get install extrepo
extrepo enable openstack_victoria
apt-get update
apt-get install glance-api...

If you are brave enough, you can also try directly in Debian Unstable
(that's the same packages which I upload there, and maintain as
backports for Debian Stable).

Cheers,

Thomas Goirand

** Affects: glance
 Importance: Undecided
 Status: New

** Summary changed:

- openstack image-download / glance image-save fails
+ swift backend: openstack image-download / glance image-save fails

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1908074

Title:
  swift backend: openstack image-download / glance image-save fails

Status in Glance:
  New

Bug description:
  Hi,

  I'm the maintainer of OpenStack in Debian. Using a deployment with
  Swift as backend, with glance-api.conf configured this way:

  [swift]
  swift_store_cacert=/etc/ssl/certs/oci-pki-oci-ca-chain.pem
  swift_store_create_container_on_put=True
  swift_store_endpoint_type=internalURL
  swift_store_config_file=/etc/glance/glance-swift.conf
  default_swift_reference=ref1

  # cat /etc/glance/glance-swift.conf

  [ref1]
  user = services:glance
  key = PASSWORD
  auth_version = 3
  auth_address = https://:443/identity/v3
  user_domain_id=default
  project_domain_id=default

  I could upload an image to Glance. It's really there in Swift, I
  checked for that fact. Though saving the image fails: when I do
  "openstack image save", my swift-proxy receives an HTTP/1.1 499 (ie:
  Client Closed Request), then glance-api returns a 502 (bad gateway).
  Unfortunately, using uwsgi and https for Glance wasn't very verbose,
  so I downgraded Glance-api to use eventlet without ssl, and then I
  could see in the gl

[Yahoo-eng-team] [Bug 1907232] [NEW] dynamic-routing: "dragent add speaker" is buggy

2020-12-08 Thread Thomas Goirand
Public bug reported:

When I add a speaker to a dragent, the table
bgp_speaker_dragent_bindings gets 2 entries instead of one. One is the
one I am expecting, and the 2nd one points to another dragent which has
nothing to do with the one I'm trying to configure.

When there's only a single agent remaining, then it simply doesn't work,
I get this:

# openstack network agent show 0b8214b8-eb73-4823-b4bf-3f1c2865544b -c configuration
+---------------+------------------------------------------------------------+
| Field         | Value                                                      |
+---------------+------------------------------------------------------------+
| configuration | {'advertise_routes': 0, 'bgp_peers': 0, 'bgp_speakers': 0} |
+---------------+------------------------------------------------------------+

when it really should show 'bgp_speakers': 1...

The only workaround that I found was to:

1/ add a dragent in another node temporarily (one that I don't need).
2/ do the "openstack bgp dragent add speaker"
3/ fix the MySQL bgp_speaker_dragent_bindings table by removing the double-entry
4/ remove the bgp agent that I don't need

This really is annoying, and a fix would be very much appreciated.

Cheers,

Thomas

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1907232

Title:
  dynamic-routing: "dragent add speaker" is buggy

Status in neutron:
  New

Bug description:
  When I add a speaker to a dragent, the table
  bgp_speaker_dragent_bindings gets 2 entries instead of one. One is the
  one I am expecting, and the 2nd one points to another dragent which
  has nothing to do with the one I'm trying to configure.

  When there's only a single agent remaining, then it simply doesn't
  work, I get this:

  # openstack network agent show 0b8214b8-eb73-4823-b4bf-3f1c2865544b -c configuration
  +---------------+------------------------------------------------------------+
  | Field         | Value                                                      |
  +---------------+------------------------------------------------------------+
  | configuration | {'advertise_routes': 0, 'bgp_peers': 0, 'bgp_speakers': 0} |
  +---------------+------------------------------------------------------------+

  when it really should show 'bgp_speakers': 1...

  The only workaround that I found was to:

  1/ add a dragent in another node temporarily (one that I don't need).
  2/ do the "openstack bgp dragent add speaker"
  3/ fix the MySQL bgp_speaker_dragent_bindings table by removing the 
double-entry
  4/ remove the bgp agent that I don't need

  This really is annoying, and a fix would be very much appreciated.

  Cheers,

  Thomas

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1907232/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1900451] [NEW] nova-manage still shows deprecation

2020-10-19 Thread Thomas Goirand
Public bug reported:

When doing something like this:

su nova -s /bin/sh -c "nova-manage cell_v2 discover_hosts"

I see lots of deprecation warnings. There should be a way to disable the
warnings, or to have them off by default. Discussion should be opened on
how to fix this.
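
In the meantime, a generic way to hide them (a workaround sketch only; it
covers warnings emitted through Python's warnings module, not messages
that nova logs through oslo.log):

    import warnings

    # Silence DeprecationWarning for the current process.
    warnings.simplefilter("ignore", DeprecationWarning)

The same effect is available from the shell via the standard
PYTHONWARNINGS variable, e.g. PYTHONWARNINGS=ignore::DeprecationWarning
in front of the nova-manage command.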

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1900451

Title:
  nova-manage still shows deprecation

Status in OpenStack Compute (nova):
  New

Bug description:
  When doing something like this:

  su nova -s /bin/sh -c "nova-manage cell_v2 discover_hosts"

  I see lots of deprecation warnings. There should be a way to disable
  the warnings, or to have them off by default. Discussion should be
  opened on how to fix this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1900451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1875418] [NEW] Generated policy.json in Ussuri is broken by default

2020-04-27 Thread Thomas Goirand
Public bug reported:

Looks like the generated policy.json is broken by default and can't be
used by operators as-is, as it doesn't include the deprecated options
which are unfortunately needed for it to work.

With the default policy.json as generated by the nova namespace, the
admin user can't even do simple things like:

- openstack flavor create
- openstack hypervisor list

and probably many more...

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1875418

Title:
  Generated policy.json in Ussuri is broken by default

Status in OpenStack Compute (nova):
  New

Bug description:
  Looks like the generated policy.json is broken by default and can't be
  used by operators as-is, as it doesn't include the deprecated options
  which are unfortunately needed for it to work.

  With the default policy.json as generated by the nova namespace, the
  admin user can't even do simple things like:

  - openstack flavor create
  - openstack hypervisor list

  and probably many more...

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1875418/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1874458] [NEW] Glance doesn't take into account swift_store_cacert

2020-04-23 Thread Thomas Goirand
Public bug reported:

It looks like Glance is broken when using a custom CA certificate for
the Swift API. Indeed, even if I set a CA file in swift_store_cacert,
Glance wouldn't upload an image to Swift, unless I set
swift_store_auth_insecure to True.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1874458

Title:
  Glance doesn't take into account swift_store_cacert

Status in Glance:
  New

Bug description:
  It looks like Glance is broken when using a custom CA certificate for
  the Swift API. Indeed, even if I set a CA file in swift_store_cacert,
  Glance wouldn't upload an image to Swift, unless I set
  swift_store_auth_insecure to True.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1874458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1868455] [NEW] unit tests with py3.8: glance_store.exceptions.UnknownScheme: Unknown scheme '' found in URI

2020-03-22 Thread Thomas Goirand
Public bug reported:

Rebuilding Glance 19.0.0 in Debian Sid gives the below result:

FAIL: 
glance.tests.unit.v2.test_tasks_resource.TestTasksController.test_create_with_live_time
glance.tests.unit.v2.test_tasks_resource.TestTasksController.test_create_with_live_time
--
testtools.testresult.real._StringException: pythonlogging:'': {{{
2020-03-22 13:34:41,556 INFO [glance.db.simple.api] Calling task_create: 
args=(None, {'id': 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d', 'status': 'pending', 
'type': 'import', 'input': {}, 'result': None, 'owner': 
'6838eb7b-6ded-434a-882c-b344c77fe8df', 'message': None, 'expires_at': 
datetime.datetime(2021, 3, 22, 13, 34, 41, 556098), 'created_at': 
datetime.datetime(2020, 3, 22, 13, 34, 41, 556080), 'updated_at': 
datetime.datetime(2020, 3, 22, 13, 34, 41, 556080), 'deleted_at': None, 
'deleted': False}), kwargs={}
2020-03-22 13:34:41,556 INFO [glance.db.simple.api] Returning task_create: 
{'id': 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d', 'type': 'import', 'status': 
'pending', 'owner': '6838eb7b-6ded-434a-882c-b344c77fe8df', 'expires_at': 
datetime.datetime(2021, 3, 22, 13, 34, 41, 556098), 'created_at': 
datetime.datetime(2020, 3, 22, 13, 34, 41, 556080), 'updated_at': 
datetime.datetime(2020, 3, 22, 13, 34, 41, 556080), 'deleted_at': None, 
'deleted': False, 'input': {}, 'result': None, 'message': None}
2020-03-22 13:34:41,556 INFO [glance.db.simple.api] Calling task_create: 
args=(None, {'id': 'a85abd86-55b3-4d5b-b0b4-5d0a6e6042fc', 'status': 'pending', 
'type': 'import', 'input': {}, 'result': None, 'owner': 
'2c014f32-55eb-467d-8fcb-4bd706012f81', 'message': None, 'expires_at': 
datetime.datetime(2021, 3, 22, 13, 34, 41, 556103), 'created_at': 
datetime.datetime(2020, 3, 22, 13, 34, 46, 556080), 'updated_at': 
datetime.datetime(2020, 3, 22, 13, 34, 46, 556080), 'deleted_at': None, 
'deleted': False}), kwargs={}
2020-03-22 13:34:41,556 INFO [glance.db.simple.api] Returning task_create: 
{'id': 'a85abd86-55b3-4d5b-b0b4-5d0a6e6042fc', 'type': 'import', 'status': 
'pending', 'owner': '2c014f32-55eb-467d-8fcb-4bd706012f81', 'expires_at': 
datetime.datetime(2021, 3, 22, 13, 34, 41, 556103), 'created_at': 
datetime.datetime(2020, 3, 22, 13, 34, 46, 556080), 'updated_at': 
datetime.datetime(2020, 3, 22, 13, 34, 46, 556080), 'deleted_at': None, 
'deleted': False, 'input': {}, 'result': None, 'message': None}
2020-03-22 13:34:41,557 INFO [glance.db.simple.api] Calling task_create: 
args=(None, {'id': '971ec09a-8067-4bc8-a91f-ae3557f1c4c7', 'status': 'pending', 
'type': 'import', 'input': {}, 'result': None, 'owner': 
'5a3e60e8-cfa9-4a9e-a90a-62b42cea92b8', 'message': None, 'expires_at': 
datetime.datetime(2021, 3, 22, 13, 34, 41, 556107), 'created_at': 
datetime.datetime(2020, 3, 22, 13, 34, 51, 556080), 'updated_at': 
datetime.datetime(2020, 3, 22, 13, 34, 51, 556080), 'deleted_at': None, 
'deleted': False}), kwargs={}
2020-03-22 13:34:41,557 INFO [glance.db.simple.api] Returning task_create: 
{'id': '971ec09a-8067-4bc8-a91f-ae3557f1c4c7', 'type': 'import', 'status': 
'pending', 'owner': '5a3e60e8-cfa9-4a9e-a90a-62b42cea92b8', 'expires_at': 
datetime.datetime(2021, 3, 22, 13, 34, 41, 556107), 'created_at': 
datetime.datetime(2020, 3, 22, 13, 34, 51, 556080), 'updated_at': 
datetime.datetime(2020, 3, 22, 13, 34, 51, 556080), 'deleted_at': None, 
'deleted': False, 'input': {}, 'result': None, 'message': None}
2020-03-22 13:34:41,557 INFO [glance.db.simple.api] Calling task_create: 
args=(None, {'id': '6bbe7cc2-eae7-4c0f-b50d-a7160b0c6a86', 'status': 'pending', 
'type': 'import', 'input': {}, 'result': None, 'owner': 
'c6c87f25-8a94-47ed-8c83-053c25f42df4', 'message': None, 'expires_at': 
datetime.datetime(2021, 3, 22, 13, 34, 41, 556110), 'created_at': 
datetime.datetime(2020, 3, 22, 13, 34, 56, 556080), 'updated_at': 
datetime.datetime(2020, 3, 22, 13, 34, 56, 556080), 'deleted_at': None, 
'deleted': False}), kwargs={}
2020-03-22 13:34:41,557 INFO [glance.db.simple.api] Returning task_create: 
{'id': '6bbe7cc2-eae7-4c0f-b50d-a7160b0c6a86', 'type': 'import', 'status': 
'pending', 'owner': 'c6c87f25-8a94-47ed-8c83-053c25f42df4', 'expires_at': 
datetime.datetime(2021, 3, 22, 13, 34, 41, 556110), 'created_at': 
datetime.datetime(2020, 3, 22, 13, 34, 56, 556080), 'updated_at': 
datetime.datetime(2020, 3, 22, 13, 34, 56, 556080), 'deleted_at': None, 
'deleted': False, 'input': {}, 'result': None, 'message': None}
2020-03-22 13:34:41,563 INFO [glance.db.simple.api] Calling task_create: 
args=(, {'id': 
'7eaae4bb-0eb8-48c1-a299-19a1e2882913', 'type': 'import', 'status': 'pending', 
'input': {'import_from': 
'http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img', 
'import_from_format': 'qcow2', 'image_properties': {'disk_format': 'qcow2', 
'container_format': 'bare', 'name': 'test-task'}}, 'result': None, 'owner': 
'6838eb7b-6ded-434a-882c-b344c77fe8df', 'message': '', 'expires_at': None, 
'created_at': 

[Yahoo-eng-team] [Bug 1866635] [NEW] router-update for internal networking not correct when restarting ovs-agent

2020-03-09 Thread Thomas Goirand
Public bug reported:

In our production environment, running Rocky (Neutron 13.0.5 + this
patch: https://review.opendev.org/#/c/693681/), we've experienced some
missing network routes for our internal networking.

We have:

one VM-1 on 192.168.2.20, on compute-1

one VM-2 on 192.168.2.3, compute-2

Our VM-1 also has a floating IP, and is used to monitor VM-2 (VM-1
contains a Zabbix proxy).

Let's say one reboots compute-1: Neutron then misses the necessary rules
to connect VM-1 to VM-2. Restarting VM-2 adds new OVS flow rules on
compute-1, which restores connectivity. Restarting the ovs-agent on
compute-1 has no effect, which means there's a problem there.

Please help us fix the bug.

Cheers,

Thomas Goirand (zigo)

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1866635

Title:
  router-update for internal networking not correct when restarting ovs-
  agent

Status in neutron:
  New

Bug description:
  In our production environment, running Rocky (Neutron 13.0.5 + this
  patch: https://review.opendev.org/#/c/693681/), we've experienced some
  missing network routes for our internal networking.

  We have:

  one VM-1 on 192.168.2.20, on compute-1

  one VM-2 on 192.168.2.3, compute-2

  Our VM-1 also has a floating IP, and is used to monitor VM-2 (VM-1
  contains a Zabbix proxy).

  Let's say one reboots compute-1: Neutron then misses the necessary
  rules to connect VM-1 to VM-2. Restarting VM-2 adds new OVS flow rules
  on compute-1, which restores connectivity. Restarting the ovs-agent on
  compute-1 has no effect, which means there's a problem there.

  Please help us fix the bug.

  Cheers,

  Thomas Goirand (zigo)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1866635/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1851659] [NEW] removing a network from a DHCP agent removes L3 rules even if it shouldn't

2019-11-07 Thread Thomas Goirand
Public bug reported:

Hi,

Previously on our deployment, we had DHCP agents on the compute nodes.
We decided to add network nodes, and later, move the DHCP agents from
the compute nodes to the network nodes.

When removing the network from the DHCP agent, it looks like the L3
rules were also removed from the compute node, even if these were needed
to access the VM which it was hosting. As a consequence, there was
downtime for that instance. What I did to recover was restarting the L3
agent on that compute node, and the rules were re-added.

FYI, I was running Rocky on that deployment, with neutron 13.0.4.

This IMO is a very important thing to fix as a priority.

Cheers,

Thomas

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1851659

Title:
  removing a network from a DHCP agent removes L3 rules even if it
  shouldn't

Status in neutron:
  New

Bug description:
  Hi,

  Previously on our deployment, we had DHCP agents on the compute nodes.
  We decided to add network nodes, and later, move the DHCP agents from
  the compute nodes to the network nodes.

  When removing the network from the DHCP agent, it looks like the L3
  rules were also removed from the compute node, even if these were
  needed to access the VM which it was hosting. As a consequence, there
  was downtime for that instance. What I did to recover was restarting
  the L3 agent on that compute node, and the rules were re-added.

  FYI, I was running Rocky on that deployment, with neutron 13.0.4.

  This IMO is a very important thing to fix as a priority.

  Cheers,

  Thomas

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1851659/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1850928] [NEW] unit tests failing in some environments

2019-11-01 Thread Thomas Goirand
Public bug reported:

Hi,

Over here:
https://bugs.debian.org/908862

Santiago Vila reported that unit tests are failing in his environment.
As I am not sure what's going on and can't work more on the problem, I'm
opening this upstream, in the hope you guys can be of some help. At
first glance, it looks to me more like a bug in oslo.db or in
SQLAlchemy, with tables not being created before the unit tests start,
or something like that.

Any clue what's going on here?

Cheers,

Thomas Goirand (zigo)

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1850928

Title:
  unit tests failing in some environments

Status in neutron:
  New

Bug description:
  Hi,

  Over here:
  https://bugs.debian.org/908862

  Santiago Vila reported that unit tests are failing in his environment.
  As I am not sure what's going on and can't work more on the problem,
  I'm opening this upstream, in the hope you guys can be of some help.
  At first glance, it looks to me more like a bug in oslo.db or in
  SQLAlchemy, with tables not being created before the unit tests start,
  or something like that.

  Any clue what's going on here?

  Cheers,

  Thomas Goirand (zigo)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1850928/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1841667] [NEW] failing libvirt tests: need ordering

2019-08-27 Thread Thomas Goirand
*** This bug is a duplicate of bug 1838666 ***
https://bugs.launchpad.net/bugs/1838666

Public bug reported:

When rebuilding Nova from Stein in Debian Sid, I get 3 unit test errors,
probably due to a more recent libvirt (i.e. 5.6.0). See, for example,
this first one:



we get bus= and dev= inverted.

==
FAIL: 
nova.tests.unit.virt.libvirt.test_driver.LibvirtDriverTestCase.test_get_disk_xml
nova.tests.unit.virt.libvirt.test_driver.LibvirtDriverTestCase.test_get_disk_xml
--
_StringException: pythonlogging:'': {{{2019-08-27 20:26:05,026 WARNING 
[os_brick.initiator.connectors.remotefs] Connection details not present. 
RemoteFsClient may not initialize properly.}}}

Traceback (most recent call last):
  File "/<>/nova/tests/unit/virt/libvirt/test_driver.py", line 
20926, in test_get_disk_xml
self.assertEqual(diska_xml.strip(), actual_diska_xml.strip())
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 411, in 
assertEqual
self.assertThat(observed, matcher, message)
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 498, in 
assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: !=:
reference = '''\

  
  
  0e38683e-f0af-418f-a3f1-6b67ea0f919d
'''
actual= '''\

  
  
  0e38683e-f0af-418f-a3f1-6b67ea0f919d
'''


==
FAIL: 
nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_detach_volume_with_vir_domain_affect_live_flag
nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_detach_volume_with_vir_domain_affect_live_flag
--
_StringException: pythonlogging:'': {{{2019-08-27 20:26:31,189 WARNING 
[os_brick.initiator.connectors.remotefs] Connection details not present. 
RemoteFsClient may not initialize properly.}}}

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/mock/mock.py", line 1330, in patched
return func(*args, **keywargs)
  File "/<>/nova/tests/unit/virt/libvirt/test_driver.py", line 
7955, in test_detach_volume_with_vir_domain_affect_live_flag
""", flags=flags)
  File "/usr/lib/python3/dist-packages/mock/mock.py", line 944, in 
assert_called_with
six.raise_from(AssertionError(_error_message(cause)), cause)
  File "", line 3, in raise_from
AssertionError: expected call not found.
Expected: detachDeviceFlags('\n  \n  \n\n', 
flags=3)
Actual: detachDeviceFlags('\n  \n  \n\n', 
flags=3)


==
FAIL: 
nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_update_volume_xml
nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_update_volume_xml
--
_StringException: pythonlogging:'': {{{2019-08-27 20:26:37,451 WARNING 
[os_brick.initiator.connectors.remotefs] Connection details not present. 
RemoteFsClient may not initialize properly.}}}

Traceback (most recent call last):
  File "/<>/nova/tests/unit/virt/libvirt/test_driver.py", line 
10157, in test_update_volume_xml
etree.tostring(config, encoding='unicode'))
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 411, in 
assertEqual
self.assertThat(observed, matcher, message)
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 498, in 
assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: !=:
reference = '58a84f6d-3f0c-4e19-a0af-eb657b790657'
actual= '58a84f6d-3f0c-4e19-a0af-eb657b790657'
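
These failures are string comparisons of serialized XML, so attribute
ordering matters even though the documents are semantically identical. A
minimal sketch of an order-insensitive comparison (illustrative only, not
the actual Nova fix):

  from xml.etree import ElementTree as etree

  def xml_equal(a, b):
      """Compare two XML documents ignoring attribute order."""
      def norm(e):
          return (e.tag, sorted(e.attrib.items()), (e.text or '').strip(),
                  [norm(c) for c in e])
      return norm(etree.fromstring(a)) == norm(etree.fromstring(b))

  assert xml_equal('<disk><target bus="ide" dev="vda"/></disk>',
                   '<disk><target dev="vda" bus="ide"/></disk>')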

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: nova/stein
 Importance: Undecided
 Status: New


** Tags: libvirt testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1841667

Title:
  failing libvirt tests: need ordering

Status in OpenStack Compute (nova):
  New
Status in OpenStack Compute (nova) stein series:
  New

Bug description:
  When rebuilding Nova from Stein in Debian Sid, I get 3 unit test
  errors, probably due to a more recent libvirt (i.e. 5.6.0). See, for
  example, this first one:

  

  we get bus= and dev= inverted.

  ==
  FAIL: 
nova.tests.unit.virt.libvirt.test_driver.LibvirtDriverTestCase.test_get_disk_xml
  
nova.tests.unit.virt.libvirt.test_driver.LibvirtDriverTestCase.test_get_disk_xml
  --
  _StringException: pythonlogging:'': {{{2019-08-27 20:26:05,026 WARNING 
[os_brick.initiator.connectors.remotefs] Connection details not present. 
RemoteFsClient may not initialize properly.}}}

  Traceback (most recent call last):
File 

[Yahoo-eng-team] [Bug 1837169] [NEW] Stein: testing _make_floatingip fails in Debian Sid

2019-07-18 Thread Thomas Goirand
Public bug reported:

Hi,

I'm trying to build Stein Neutron from Buster to Sid.

The following 4 unit tests fail when building the Debian package in Sid,
all located in neutron.tests.unit.extensions:

test_extraroute.ExtraRouteDBIntTestCase.test_floatingip_via_router_interface_returns_201
test_extraroute.ExtraRouteDBSepTestCase.test_floatingip_via_router_interface_returns_201
test_l3.L3NatDBIntTestCase.test_floatingip_via_router_interface_returns_201
test_l3.L3NatDBSepTestCase.test_floatingip_via_router_interface_returns_201

All of these tests fail with more or less the same stack dump, as seen
below.

(unstable-amd64-sbuild)root@buzig2:~/neutron# PYTOHN=python3 
PYTHONPATH=`pwd`/debian/tmp/usr/lib/python3/dist-packages stestr run --subunit 
neutron.tests.unit.extensions.test_extraroute.ExtraRouteDBIntTestCase.test_floatingip_via_router_interface_returns_201
 | subunit2pyunit 
neutron.tests.unit.extensions.test_extraroute.ExtraRouteDBIntTestCase.test_floatingip_via_router_interface_returns_201
neutron.tests.unit.extensions.test_extraroute.ExtraRouteDBIntTestCase.test_floatingip_via_router_interface_returns_201
 ... FAIL

==
FAIL: 
neutron.tests.unit.extensions.test_extraroute.ExtraRouteDBIntTestCase.test_floatingip_via_router_interface_returns_201
neutron.tests.unit.extensions.test_extraroute.ExtraRouteDBIntTestCase.test_floatingip_via_router_interface_returns_201
--
_StringException: Traceback (most recent call last):
  File "/root/neutron/neutron/tests/base.py", line 176, in func
return f(self, *args, **kwargs)
  File "/root/neutron/neutron/tests/base.py", line 176, in func
return f(self, *args, **kwargs)
  File "/root/neutron/neutron/tests/unit/extensions/test_l3.py", line 3183, in 
test_floatingip_via_router_interface_returns_201
self._test_floatingip_via_router_interface(exc.HTTPCreated.code)
  File "/root/neutron/neutron/tests/unit/extensions/test_l3.py", line 3137, in 
_test_floatingip_via_router_interface
http_status=http_status)
  File "/root/neutron/neutron/tests/unit/extensions/test_l3.py", line 510, in 
_make_floatingip
self.assertEqual(http_status, res.status_int)
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 411, in 
assertEqual
self.assertThat(observed, matcher, message)
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 498, in 
assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 201 != 500


--
Ran 1 test in 11.585s

FAILED (failures=1)

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1837169

Title:
  Stein: testing _make_floatingip fails in Debian Sid

Status in neutron:
  New

Bug description:
  Hi,

  I'm trying to build Stein Neutron from Buster to Sid.

  The following 4 unit tests fail when building the Debian package in
  Sid, all located in neutron.tests.unit.extensions:

  
test_extraroute.ExtraRouteDBIntTestCase.test_floatingip_via_router_interface_returns_201
  
test_extraroute.ExtraRouteDBSepTestCase.test_floatingip_via_router_interface_returns_201
  test_l3.L3NatDBIntTestCase.test_floatingip_via_router_interface_returns_201
  test_l3.L3NatDBSepTestCase.test_floatingip_via_router_interface_returns_201

  All of these tests fail with more or less the same stack dump, as
  seen below.

  (unstable-amd64-sbuild)root@buzig2:~/neutron# PYTOHN=python3 
PYTHONPATH=`pwd`/debian/tmp/usr/lib/python3/dist-packages stestr run --subunit 
neutron.tests.unit.extensions.test_extraroute.ExtraRouteDBIntTestCase.test_floatingip_via_router_interface_returns_201
 | subunit2pyunit 
  
neutron.tests.unit.extensions.test_extraroute.ExtraRouteDBIntTestCase.test_floatingip_via_router_interface_returns_201
  
neutron.tests.unit.extensions.test_extraroute.ExtraRouteDBIntTestCase.test_floatingip_via_router_interface_returns_201
 ... FAIL

  ==
  FAIL: 
neutron.tests.unit.extensions.test_extraroute.ExtraRouteDBIntTestCase.test_floatingip_via_router_interface_returns_201
  
neutron.tests.unit.extensions.test_extraroute.ExtraRouteDBIntTestCase.test_floatingip_via_router_interface_returns_201
  --
  _StringException: Traceback (most recent call last):
File "/root/neutron/neutron/tests/base.py", line 176, in func
  return f(self, *args, **kwargs)
File "/root/neutron/neutron/tests/base.py", line 176, in func
  return f(self, *args, **kwargs)
File "/root/neutron/neutron/tests/unit/extensions/test_l3.py", line 3183, 
in test_floatingip_via_router_interface_returns_201
  

[Yahoo-eng-team] [Bug 1833574] [NEW] neutron-dynamic-routing fails under SQLA 1.3.1

2019-06-20 Thread Thomas Goirand
Public bug reported:

Running unit tests of neutron-dynamic-routing with SQLAlchemy 1.3.1
leads to these failures. Please fix this before the Buster release, as
SQLAlchemy 1.3.x will be uploaded to Sid instead of Experimental, and
anyway, we can't upload the Stein release to Debian Experimental before
this is fixed:

==
FAIL: 
neutron_dynamic_routing.tests.unit.db.test_bgp_db.Ml2BgpTests.test_get_ipv6_tenant_subnet_routes_by_bgp_speaker_ipv6
neutron_dynamic_routing.tests.unit.db.test_bgp_db.Ml2BgpTests.test_get_ipv6_tenant_subnet_routes_by_bgp_speaker_ipv6
--
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/neutron/tests/base.py", line 174, in func
return f(self, *args, **kwargs)
  File "/<>/neutron_dynamic_routing/tests/unit/db/test_bgp_db.py", 
line 685, in test_get_ipv6_tenant_subnet_routes_by_bgp_speaker_ipv6
binding_cidr)
  File "/<>/neutron_dynamic_routing/tests/unit/db/test_bgp_db.py", 
line 644, in _advertised_routes_by_bgp_speaker
speaker['id'])
  File "/<>/neutron_dynamic_routing/services/bgp/bgp_plugin.py", 
line 225, in get_advertised_routes
bgp_speaker_id)
  File "/<>/neutron_dynamic_routing/db/bgp_db.py", line 315, in 
get_advertised_routes
routes = self.get_routes_by_bgp_speaker_id(context, bgp_speaker_id)
  File "/<>/neutron_dynamic_routing/db/bgp_db.py", line 480, in 
get_routes_by_bgp_speaker_id
bgp_speaker_id)
  File "/<>/neutron_dynamic_routing/db/bgp_db.py", line 674, in 
_get_central_fip_host_routes_by_bgp_speaker
l3_db.Router.id == router_attrs.router_id)
  File "/usr/lib/python3/dist-packages/sqlalchemy/orm/query.py", line 2259, in 
outerjoin
from_joinpoint=from_joinpoint,
  File "", line 2, in _join
  File "/usr/lib/python3/dist-packages/sqlalchemy/orm/base.py", line 220, in 
generate
fn(self, *args[1:], **kw)
  File "/usr/lib/python3/dist-packages/sqlalchemy/orm/query.py", line 2414, in 
_join
left, right, onclause, prop, create_aliases, outerjoin, full
  File "/usr/lib/python3/dist-packages/sqlalchemy/orm/query.py", line 2437, in 
_join_left_to_right
) = self._join_determine_implicit_left_side(left, right, onclause)
  File "/usr/lib/python3/dist-packages/sqlalchemy/orm/query.py", line 2526, in 
_join_determine_implicit_left_side
"Can't determine which FROM clause to join "
sqlalchemy.exc.InvalidRequestError: Can't determine which FROM clause to join 
from, there are multiple FROMS which can join to this entity. Try adding an 
explicit ON clause to help resolve the ambiguity.


==
FAIL: 
neutron_dynamic_routing.tests.unit.db.test_bgp_db.Ml2BgpTests.test_all_routes_by_bgp_speaker_different_tenant_address_scope
neutron_dynamic_routing.tests.unit.db.test_bgp_db.Ml2BgpTests.test_all_routes_by_bgp_speaker_different_tenant_address_scope
--
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/neutron/tests/base.py", line 174, in func
return f(self, *args, **kwargs)
  File "/<>/neutron_dynamic_routing/tests/unit/db/test_bgp_db.py", 
line 753, in test_all_routes_by_bgp_speaker_different_tenant_address_scope
bgp_speaker_id)
  File "/<>/neutron_dynamic_routing/db/bgp_db.py", line 480, in 
get_routes_by_bgp_speaker_id
bgp_speaker_id)
  File "/<>/neutron_dynamic_routing/db/bgp_db.py", line 674, in 
_get_central_fip_host_routes_by_bgp_speaker
l3_db.Router.id == router_attrs.router_id)
  File "/usr/lib/python3/dist-packages/sqlalchemy/orm/query.py", line 2259, in 
outerjoin
from_joinpoint=from_joinpoint,
  File "", line 2, in _join
  File "/usr/lib/python3/dist-packages/sqlalchemy/orm/base.py", line 220, in 
generate
fn(self, *args[1:], **kw)
  File "/usr/lib/python3/dist-packages/sqlalchemy/orm/query.py", line 2414, in 
_join
left, right, onclause, prop, create_aliases, outerjoin, full
  File "/usr/lib/python3/dist-packages/sqlalchemy/orm/query.py", line 2437, in 
_join_left_to_right
) = self._join_determine_implicit_left_side(left, right, onclause)
  File "/usr/lib/python3/dist-packages/sqlalchemy/orm/query.py", line 2526, in 
_join_determine_implicit_left_side
"Can't determine which FROM clause to join "
sqlalchemy.exc.InvalidRequestError: Can't determine which FROM clause to join 
from, there are multiple FROMS which can join to this entity. Try adding an 
explicit ON clause to help resolve the ambiguity.


==
FAIL: 
neutron_dynamic_routing.tests.unit.db.test_bgp_db.Ml2BgpTests.test_get_ipv4_tenant_subnet_routes_by_bgp_speaker_ipv4
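
All of these boil down to the same thing: with several entities already
in the FROM list, SQLAlchemy 1.3 refuses to guess the left side of a
further join. The general shape it accepts is to pin the left side with
select_from() and give explicit ON clauses. A self-contained toy of that
pattern (these are not the project's real models, nor its actual fix):

  from sqlalchemy import Column, Integer, ForeignKey, create_engine
  from sqlalchemy.ext.declarative import declarative_base
  from sqlalchemy.orm import sessionmaker

  Base = declarative_base()

  class Router(Base):
      __tablename__ = 'routers'
      id = Column(Integer, primary_key=True)

  class RouterAttrs(Base):
      __tablename__ = 'router_attrs'
      router_id = Column(Integer, ForeignKey('routers.id'), primary_key=True)
      distributed = Column(Integer)

  class FloatingIP(Base):
      __tablename__ = 'floatingips'
      id = Column(Integer, primary_key=True)
      router_id = Column(Integer, ForeignKey('routers.id'))

  engine = create_engine('sqlite://')
  Base.metadata.create_all(engine)
  session = sessionmaker(bind=engine)()

  # Every join spells out its left side and ON clause, so 1.3 has nothing
  # to guess about which FROM clause to join from.
  query = (session.query(FloatingIP.id, RouterAttrs.distributed)
           .select_from(FloatingIP)
           .join(Router, FloatingIP.router_id == Router.id)
           .outerjoin(RouterAttrs, RouterAttrs.router_id == Router.id))
  print(query.all())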

[Yahoo-eng-team] [Bug 1830747] [NEW] Error 500 trying to migrate an instance after wrong request_spec

2019-05-28 Thread Thomas Goirand
Public bug reported:

We started an instance last Wednesday, and the compute node where it ran
failed (maybe a hardware issue?). Since the networking looked wrong (i.e.
missing network interfaces), I tried to migrate the instance.

According to Matt, it looked like the request_spec entry for the
instance is wrong:

 my guess is something like this happened: 1. create server in a 
group, 2. cold migrate the server which fails on host A and does a reschedule 
to host B which maybe also fails (would be good to know if previous cold 
migration attempts failed with reschedules), 3. try to cold migrate again which 
fails with the instance_group.uuid thing
 the reschedule might be the key b/c like i said conductor has to 
rebuild a request spec and i think that's probably where we're doing a partial 
build of the request spec but missing the group uuid

Here's what I had in my novaapidb:

{
  "nova_object.name": "RequestSpec",
  "nova_object.version": "1.11",
  "nova_object.data": {
"ignore_hosts": null,
"requested_destination": null,
"instance_uuid": "2098b550-c749-460a-a44e-5932535993a9",
"num_instances": 1,
"image": {
  "nova_object.name": "ImageMeta",
  "nova_object.version": "1.8",
  "nova_object.data": {
"min_disk": 40,
"disk_format": "raw",
"min_ram": 0,
"container_format": "bare",
"properties": {
  "nova_object.name": "ImageMetaProps",
  "nova_object.version": "1.20",
  "nova_object.data": {},
  "nova_object.namespace": "nova"
}
  },
  "nova_object.namespace": "nova",
  "nova_object.changes": [
"properties",
"min_ram",
"container_format",
"disk_format",
"min_disk"
  ]
},
"availability_zone": "AZ3",
"flavor": {
  "nova_object.name": "Flavor",
  "nova_object.version": "1.2",
  "nova_object.data": {
"id": 28,
"name": "cpu2-ram6-disk40",
"is_public": true,
"rxtx_factor": 1,
"deleted_at": null,
"root_gb": 40,
"vcpus": 2,
"memory_mb": 6144,
"disabled": false,
"extra_specs": {},
"updated_at": null,
"flavorid": "e29f3ee9-3f07-46d2-b2e2-efa4950edc95",
"deleted": false,
"swap": 0,
"description": null,
"created_at": "2019-02-07T07:48:21Z",
"vcpu_weight": 0,
"ephemeral_gb": 0
  },
  "nova_object.namespace": "nova"
},
"force_hosts": null,
"retry": null,
"instance_group": {
  "nova_object.name": "InstanceGroup",
  "nova_object.version": "1.11",
  "nova_object.data": {
"members": null,
"hosts": null,
"policy": "anti-affinity"
  },
  "nova_object.namespace": "nova",
  "nova_object.changes": [
"policy",
"members",
"hosts"
  ]
},
"scheduler_hints": {
  "group": [
"295c99ea-2db6-469a-877f-454a3903a8d8"
  ]
},
"limits": {
  "nova_object.name": "SchedulerLimits",
  "nova_object.version": "1.0",
  "nova_object.data": {
"disk_gb": null,
"numa_topology": null,
"memory_mb": null,
"vcpu": null
  },
  "nova_object.namespace": "nova",
  "nova_object.changes": [
"disk_gb",
"vcpu",
"memory_mb",
"numa_topology"
  ]
},
"force_nodes": null,
"project_id": "1bf4dbb3d2c746658f462bf8e59ec6be",
"user_id": "255cca4584c24b16a684e3e8322b436b",
"numa_topology": null,
"is_bfv": false,
"pci_requests": {
  "nova_object.name": "InstancePCIRequests",
  "nova_object.version": "1.1",
  "nova_object.data": {
"instance_uuid": "2098b550-c749-460a-a44e-5932535993a9",
"requests": []
  },
  "nova_object.namespace": "nova"
}
  },
  "nova_object.namespace": "nova",
  "nova_object.changes": [
"ignore_hosts",
"requested_destination",
"num_instances",
"image",
"availability_zone",
"instance_uuid",
"flavor",
"scheduler_hints",
"pci_requests",
"instance_group",
"limits",
"project_id",
"user_id",
"numa_topology",
"is_bfv",
"retry"
  ]
}
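
For what it's worth, the suspicious part is visible directly in the blob
above: the embedded InstanceGroup carries no uuid field, which matches
the "instance_group.uuid thing" mentioned on IRC. A quick check (just a
sketch; it assumes the JSON above is saved as spec.json):

  import json

  with open('spec.json') as f:
      spec = json.load(f)

  group = spec['nova_object.data']['instance_group']['nova_object.data']
  print(sorted(group))      # ['hosts', 'members', 'policy'] -- no 'uuid'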

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1830747

Title:
  Error 500 trying to migrate an instance after wrong request_spec

Status in OpenStack Compute (nova):
  New

Bug description:
  We started an instance last Wednesday, and the compute node where it ran
  failed (maybe a hardware issue?). Since the networking looked wrong (i.e.
  missing network interfaces), I tried to migrate the instance.

  According to Matt, it looked like the request_spec entry for the
  instance is wrong:

   my guess is something like this happened: 1. create server in a 
group, 2. cold 

[Yahoo-eng-team] [Bug 1825345] [NEW] admin-state-down doesn't evacuate bindings in the dhcp_agent_id column

2019-04-18 Thread Thomas Goirand
Public bug reported:

Hi,

This is a real report from the production front, with a deployment that
caused us a lot of head-scratching because of somehow broken hardware.

If, for some reason, a node running the neutron-dhcp-agent has some
hardware issue, then an admin will probably want to disable the agent
there. This is done with, for example:

neutron agent-update --admin-state-down e865d619-b122-4234-aebb-
3f5c24df1c8e

or something like this too:

openstack network agent set --disable e865d619-b122-4234-aebb-
3f5c24df1c8e

This works, and no new network will be assigned to this agent in the
future. However, if there were networks already assigned to this agent,
they won't be evacuated.

What needs to be done is:

1/ Perform an update of the networkdhcpagentbindings table, and remove all 
instances of e865d619-b122-4234-aebb-3f5c24df1c8e that we see in dhcp_agent_id. 
The networks should be reassigned to another agent. Best would be to spread the 
load on many, if possible, otherwise reassigning all networks to a single agent 
would be ok-ish.
2/ Restart the neutron-dhcp-agent process where the networks have been moved, so 
that new dnsmasq processes start for these networks.
3/ Attempt to get the disabled agent to restart as well, knowing that reaching 
it may fail (since it has been disabled, that's probably because it's broken 
somehow...).

Currently, one needs to do all of this by hand. I've done that, and
restored connectivity to a working DHCP server, as our user expected.
This is kind of painful and boring to do, plus that's not really what an
openstack user is expecting.
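
For the record, the by-hand procedure boils down to something like this
with the neutron CLI (the healthy agent ID is obviously just a
placeholder):

  BROKEN=e865d619-b122-4234-aebb-3f5c24df1c8e
  GOOD=<id of a healthy dhcp agent>
  for NET in $(neutron net-list-on-dhcp-agent -f value -c id $BROKEN); do
      neutron dhcp-agent-network-remove $BROKEN $NET
      neutron dhcp-agent-network-add $GOOD $NET
  done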

In fact, if we could also provide something like this, it'd be super
nice:

openstack network agent evacuate e865d619-b122-4234-aebb-3f5c24df1c8e

then we'd be using it during the "set --disable" process.

Cheers,

Thomas Goirand (zigo)

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1825345

Title:
  admin-state-down doesn't evacuate bindings in the dhcp_agent_id column

Status in neutron:
  New

Bug description:
  Hi,

  This is a real report from the production front, with a deployment that
  caused us a lot of head-scratching because of somehow broken hardware.

  If, for some reason, a node running the neutron-dhcp-agent has some
  hardware issue, then an admin will probably want to disable the agent
  there. This is done with, for example:

  neutron agent-update --admin-state-down e865d619-b122-4234-aebb-
  3f5c24df1c8e

  or something like this too:

  openstack network agent set --disable e865d619-b122-4234-aebb-
  3f5c24df1c8e

  This works, and no new network will be assigned to this agent in the
  future. However, if there were networks already assigned to this agent,
  they won't be evacuated.

  What needs to be done is:

  1/ Perform an update of the networkdhcpagentbindings table, and remove all 
instances of e865d619-b122-4234-aebb-3f5c24df1c8e that we see in dhcp_agent_id. 
The networks should be reassigned to another agent. Best would be to spread the 
load on many, if possible, otherwise reassigning all networks to a single agent 
would be ok-ish.
  2/ Restart the neutron-dhcp-agent process where the networks have been moved, 
so that new dnsmasq processes start for these networks.
  3/ Attempt to get the disabled agent to restart as well, knowing that 
reaching it may fail (since it has been disabled, that's probably because it's 
broken somehow...).

  Currently, one needs to do all of this by hand. I've done that, and
  restored connectivity to a working DHCP server, as our user expected.
  This is kind of painful and boring to do, plus that's not really what
  an openstack user is expecting.

  In fact, if we could also provide something like this, it'd be super
  nice:

  openstack network agent evacuate e865d619-b122-4234-aebb-3f5c24df1c8e

  then we'd be using it during the "set --disable" process.

  Cheers,

  Thomas Goirand (zigo)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1825345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1819260] [NEW] Adding floating IP fails with SQLA 1.3.0

2019-03-09 Thread Thomas Goirand
Public bug reported:

Hi,

While evaluating whether we can upgrade Buster to SQLAlchemy 1.3.0, I ran
this in my PoC:

openstack server add floating ip demo-server 192.168.105.101

This leads to the stack dump below. Obviously, something is wrong that
needs fixing, ideally before Stein is out.

2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource 
[req-2ca7dd4c-515f-4958-964c-8506811c0b5a a498c39ddde54be4aafa7b3ded5563e6 
9e0e0a4c736a4687ade8c5e765353bd7 - default default] update failed: No details.: 
sqlalchemy.exc.InvalidRequestError: Can't determine which FROM clause to join 
from, there are multiple FROMS which can join to this entity. Try adding an 
explicit ON clause to help resolve the ambiguity.
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3/dist-packages/neutron/api/v2/resource.py", line 98, in 
resource
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3/dist-packages/neutron/api/v2/base.py", line 626, in update
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource return 
self._update(request, id, body, **kwargs)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3/dist-packages/neutron_lib/db/api.py", line 140, in wrapped
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource 
self.force_reraise()
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3/dist-packages/six.py", line 693, in reraise
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource raise value
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3/dist-packages/neutron_lib/db/api.py", line 136, in wrapped
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3/dist-packages/oslo_db/api.py", line 154, in wrapper
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource 
self.force_reraise()
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3/dist-packages/six.py", line 693, in reraise
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource raise value
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3/dist-packages/oslo_db/api.py", line 142, in wrapper
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3/dist-packages/neutron_lib/db/api.py", line 183, in wrapped
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource 
LOG.debug("Retry wrapper got retriable exception: %s", e)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource 
self.force_reraise()
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3/dist-packages/six.py", line 693, in reraise
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource raise value
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3/dist-packages/neutron_lib/db/api.py", line 179, in wrapped
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource return 
f(*dup_args, **dup_kwargs)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File 

[Yahoo-eng-team] [Bug 1807396] [NEW] With many VMs on the same tenant, the L3 ip neigh add is too slow

2018-12-07 Thread Thomas Goirand
Public bug reported:

In our setup, we run with DVR and really a lot of VMs in the same
tenant/project (we currently have between 1500 and 2000 VMs). In such a
setup, the internal function _set_subnet_arp_info of
neutron/agent/l3/dvr_local_router.py takes way too long. Indeed, what it
does is run, on each compute node (since we use a Neutron L3 router on
each compute), operations like:

ip neigh add

for every VM in the project. As we have both ipv4 and ipv6, the L3 agent
does this twice. In our setup, this results in about 4000 Python
processes that have to be spawned to execute the "ip neigh add" command.
This takes between 20 and 30 minutes, each time we either:

- Add a first VM from the tenant to the host
- Restart the compute node
- Restart the L3 agent

So, there's this issue with "ip neigh add", though there's also the same
kind of issue when OVS is doing:

ovs-vsctl add-flows

about 2000 times as well.

So, in other words, this doesn't scale, and it needs to be addressed so
that the L3 agent can react in a reasonable manner to operations on the
DVRs when there are many VMs in the same project.
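
As a data point, the kernel happily takes these in bulk: a single ip
process can load thousands of neighbour entries through its batch mode,
which is roughly the direction a fix could take (sketch only; addresses,
the MAC and device names are made up):

  # build one batch file instead of ~4000 exec() calls
  for i in $(seq 2 2000); do
      echo "neigh replace 10.0.$((i / 256)).$((i % 256)) lladdr fa:16:3e:00:00:01 dev qr-deadbeef-12 nud permanent"
  done > /tmp/neigh.batch
  ip netns exec qrouter-<router-uuid> ip -batch /tmp/neigh.batch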

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1807396

Title:
  With many VMs on the same tenant, the L3 ip neigh add is too slow

Status in neutron:
  New

Bug description:
  In our setup, we run with DVR and really a lot of VMs in the same
  tenant/project (we currently have between 1500 and 2000 VMs). In such a
  setup, the internal function _set_subnet_arp_info of
  neutron/agent/l3/dvr_local_router.py takes way too long. Indeed, what it
  does is run, on each compute node (since we use a Neutron L3 router on
  each compute), operations like:

  ip neigh add

  for every VM in the project. As we have both ipv4 and ipv6, the L3
  agent does this twice. In our setup, this results in about 4000 Python
  processes that have to be spawned to execute the "ip neigh add"
  command. This takes between 20 and 30 minutes, each time we either:

  - Add a first VM from the tenant to the host
  - Restart the compute node
  - Restart the L3 agent

  So, there's this issue with "ip neigh add", though there's also the
  same kind of issue when OVS is doing:

  ovs-vsctl add-flows

  about 2000 times as well.

  So, in other words, this doesn't scale, and it needs to be addressed so
  that the L3 agent can react in a reasonable manner to operations on the
  DVRs when there are many VMs in the same project.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1807396/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1791296] [NEW] Browsing /horizon/admin/info/ fails if Neutron not installed

2018-09-07 Thread Thomas Goirand
Public bug reported:

When browsing to Admin -> System -> System information, as an admin
user, if there's no Neutron installed, then Horizon just crashes:

  [ ... snip ... ]

File "/usr/lib/python3/dist-packages/horizon/utils/memoized.py" in wrapped
  176. args.insert(request_index, request_func(request))

File "/usr/share/openstack-dashboard/openstack_dashboard/api/neutron.py" in 
get_auth_params_from_request
  804. base.url_for(request, 'network'),

File "/usr/share/openstack-dashboard/openstack_dashboard/api/base.py" in url_for
  347. raise exceptions.ServiceCatalogException(service_type)

Exception Type: ServiceCatalogException at /admin/info/
Exception Value: Invalid service catalog: network

After installing Neutron, things display correctly.

It'd be nice to fix this.

Note, if this matters, I was using the Rocky package 3:14.0.0-2 under
Debian Sid to do this test.
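
Presumably the fix is the usual guard around the catalog lookup; a
minimal sketch of the idea (the helper name is made up, and this is not
the actual Horizon patch):

  from horizon import exceptions
  from openstack_dashboard.api import base

  def network_service_available(request):
      """True only if 'network' is present in the service catalog."""
      try:
          base.url_for(request, 'network')
          return True
      except exceptions.ServiceCatalogException:
          return False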

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1791296

Title:
  Browsing /horizon/admin/info/ fails if Neutron not installed

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When browsing to Admin -> System -> System information, as an admin
  user, if there's no Neutron installed, then Horizon just crashes:

[ ... snip ... ]

  File "/usr/lib/python3/dist-packages/horizon/utils/memoized.py" in wrapped
176. args.insert(request_index, request_func(request))

  File "/usr/share/openstack-dashboard/openstack_dashboard/api/neutron.py" in 
get_auth_params_from_request
804. base.url_for(request, 'network'),

  File "/usr/share/openstack-dashboard/openstack_dashboard/api/base.py" in 
url_for
347. raise exceptions.ServiceCatalogException(service_type)

  Exception Type: ServiceCatalogException at /admin/info/
  Exception Value: Invalid service catalog: network

  After installing Neutron, things display correctly.

  It'd be nice to fix this.

  Note, if this matters, I was using the Rocky package 3:14.0.0-2 under
  Debian Sid to do this test.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1791296/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1790849] [NEW] Failing hacking tests when building Rocky Debian package under Python 3.7

2018-09-05 Thread Thomas Goirand
Public bug reported:

Building the Nova 18.0.0 package in Sid (ie: Python 3.7), I get the
below unit test failure with hacking tests:

==
FAIL: nova.tests.unit.test_hacking.HackingTestCase.test_check_doubled_words
nova.tests.unit.test_hacking.HackingTestCase.test_check_doubled_words
--
_StringException: Traceback (most recent call last):
  File "/<>/nova/tests/unit/test_hacking.py", line 586, in 
test_check_doubled_words
expected_errors=errors)
  File "/<>/nova/tests/unit/test_hacking.py", line 293, in 
_assert_has_errors
self.assertEqual(expected_errors or [], actual_errors)
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 411, in 
assertEqual
self.assertThat(observed, matcher, message)
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 498, in 
assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: [(1, 0, 'N343')] != []

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1790849

Title:
  Failing hacking tests when building Rocky Debian package under Python
  3.7

Status in OpenStack Compute (nova):
  New

Bug description:
  Building the Nova 18.0.0 package in Sid (ie: Python 3.7), I get the
  below unit test failure with hacking tests:

  ==
  FAIL: nova.tests.unit.test_hacking.HackingTestCase.test_check_doubled_words
  nova.tests.unit.test_hacking.HackingTestCase.test_check_doubled_words
  --
  _StringException: Traceback (most recent call last):
File "/<>/nova/tests/unit/test_hacking.py", line 586, in 
test_check_doubled_words
  expected_errors=errors)
File "/<>/nova/tests/unit/test_hacking.py", line 293, in 
_assert_has_errors
  self.assertEqual(expected_errors or [], actual_errors)
File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 411, in 
assertEqual
  self.assertThat(observed, matcher, message)
File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 498, in 
assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: [(1, 0, 'N343')] != []

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1790849/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1790847] [NEW] Failing tests when building Rocky Debian package in Python 3.7

2018-09-05 Thread Thomas Goirand
Public bug reported:

Building Nova 18.0.0 in Debian Sid (ie: Python 3.7), I get the below
unit test failures.

==
FAIL: 
nova.tests.unit.test_api_validation.PatternPropertiesTestCase.test_validate_patternProperties_fails
nova.tests.unit.test_api_validation.PatternPropertiesTestCase.test_validate_patternProperties_fails
--
_StringException: Traceback (most recent call last):
  File "/<>/nova/api/validation/validators.py", line 300, in 
validate
self.validator.validate(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/jsonschema/validators.py", line 129, in 
validate
for error in self.iter_errors(*args, **kwargs):
  File "/usr/lib/python3/dist-packages/jsonschema/validators.py", line 105, in 
iter_errors
for error in errors:
  File "/usr/lib/python3/dist-packages/jsonschema/_validators.py", line 14, in 
patternProperties
if re.search(pattern, k):
  File "/usr/lib/python3.7/re.py", line 183, in search
return _compile(pattern, flags).search(string)
TypeError: expected string or bytes-like object

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/<>/nova/tests/unit/test_api_validation.py", line 152, in 
check_validation_error
method(body=body, req=req)
  File "/<>/nova/api/validation/__init__.py", line 109, in wrapper
args, kwargs)
  File "/<>/nova/api/validation/__init__.py", line 88, in 
_schema_validation_helper
schema_validator.validate(target)
  File "/<>/nova/api/validation/validators.py", line 334, in 
validate
raise exception.ValidationError(detail=detail)
nova.exception.ValidationError: expected string or bytes-like object

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/<>/nova/tests/unit/test_api_validation.py", line 473, in 
test_validate_patternProperties_fails
expected_detail=detail)
  File "/<>/nova/tests/unit/test_api_validation.py", line 160, in 
check_validation_error
'Exception details did not match expected')
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 411, in 
assertEqual
self.assertThat(observed, matcher, message)
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 498, in 
assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 'expected string or buffer' != 
'expected string or bytes-like object': Exception details did not match expected


==
FAIL: 
nova.tests.unit.test_flavors.CreateInstanceTypeTest.test_name_with_non_printable_characters
nova.tests.unit.test_flavors.CreateInstanceTypeTest.test_name_with_non_printable_characters
--
_StringException: Traceback (most recent call last):
  File "/<>/nova/tests/unit/test_flavors.py", line 191, in 
test_name_with_non_printable_characters
self.assertInvalidInput(u'm1.\u0868 #', 64, 1, 120)
  File "/<>/nova/tests/unit/test_flavors.py", line 173, in 
assertInvalidInput
*create_args, **create_kwargs)
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 485, in 
assertRaises
self.assertThat(our_callable, matcher)
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 498, in 
assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError:  
returned 
Flavor(created_at=2018-09-05T09:19:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs=,flavorid='6d546cc3-f962-4e24-ae39-3e198c1721c2',id=7,is_public=True,memory_mb=64,name='m1.ࡨ
 
#',projects=[],root_gb=120,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1)
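
The first failure looks like a plain Python 2 vs Python 3 wording change
in the TypeError raised by re (the test hard-codes the old message), so
asserting on the exact message text will keep breaking across
interpreter versions:

  import re

  try:
      re.search(r"^[a-z]+$", 123)   # non-string input, as jsonschema ends up passing
  except TypeError as e:
      print(e)   # Python 3: "expected string or bytes-like object"
                 # Python 2: "expected string or buffer"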

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1790847

Title:
  Failing tests when building Rocky Debian package in Python 3.7

Status in OpenStack Compute (nova):
  New

Bug description:
  Building Nova 18.0.0 in Debian Sid (ie: Python 3.7), I get the below
  unit test failures.

  ==
  FAIL: 
nova.tests.unit.test_api_validation.PatternPropertiesTestCase.test_validate_patternProperties_fails
  
nova.tests.unit.test_api_validation.PatternPropertiesTestCase.test_validate_patternProperties_fails
  --
  _StringException: Traceback (most recent call last):
File "/<>/nova/api/validation/validators.py", line 300, in 
validate
  self.validator.validate(*args, **kwargs)
File "/usr/lib/python3/dist-packages/jsonschema/validators.py", line 129, 
in validate
  for error in self.iter_errors(*args, **kwargs):

[Yahoo-eng-team] [Bug 1790850] [NEW] Xenapi test failure when building the Debian 18.0.0 package in Sid

2018-09-05 Thread Thomas Goirand
Public bug reported:

Building Nova 18.0.0 in Debian Sid makes the below unit tests fail. Note
that it is probably related to OpenSSL 1.1.1 (though I didn't
investigate further and disabled running these tests).
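
The stderr noise itself is easy to reproduce outside the test suite:
OpenSSL 1.1.1's enc command warns whenever a password-derived key is used
without -iter/-pbkdf2 (the cipher and passphrase below are arbitrary, not
what agent.py uses):

  $ echo secret | openssl enc -aes-128-cbc -a -pass pass:k
  *** WARNING : deprecated key derivation used.
  Using -iter or -pbkdf2 would be better.
  U2FsdGVkX1...
  $ echo secret | openssl enc -aes-128-cbc -a -pbkdf2 -pass pass:k   # no warning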

==
FAIL: 
nova.tests.unit.virt.xenapi.test_xenapi.XenAPIDiffieHellmanTestCase.test_encrypt_message_with_newlines_at_end
nova.tests.unit.virt.xenapi.test_xenapi.XenAPIDiffieHellmanTestCase.test_encrypt_message_with_newlines_at_end
--
_StringException: Traceback (most recent call last):
  File "/<>/nova/tests/unit/virt/xenapi/test_xenapi.py", line 
1679, in test_encrypt_message_with_newlines_at_end
self._test_encryption('This message has a newline at the end.\n')
  File "/<>/nova/tests/unit/virt/xenapi/test_xenapi.py", line 
1670, in _test_encryption
enc = self.alice.encrypt(message)
  File "/<>/nova/virt/xenapi/agent.py", line 432, in encrypt
return self._run_ssl(text).strip('\n')
  File "/<>/nova/virt/xenapi/agent.py", line 428, in _run_ssl
raise RuntimeError(_('OpenSSL error: %s') % err)
RuntimeError: OpenSSL error: *** WARNING : deprecated key derivation used.
Using -iter or -pbkdf2 would be better.


==
FAIL: 
nova.tests.unit.virt.xenapi.test_xenapi.XenAPIDiffieHellmanTestCase.test_encrypt_newlines_inside_message
nova.tests.unit.virt.xenapi.test_xenapi.XenAPIDiffieHellmanTestCase.test_encrypt_newlines_inside_message
--
_StringException: Traceback (most recent call last):
  File "/<>/nova/tests/unit/virt/xenapi/test_xenapi.py", line 
1685, in test_encrypt_newlines_inside_message
self._test_encryption('Message\nwith\ninterior\nnewlines.')
  File "/<>/nova/tests/unit/virt/xenapi/test_xenapi.py", line 
1670, in _test_encryption
enc = self.alice.encrypt(message)
  File "/<>/nova/virt/xenapi/agent.py", line 432, in encrypt
return self._run_ssl(text).strip('\n')
  File "/<>/nova/virt/xenapi/agent.py", line 428, in _run_ssl
raise RuntimeError(_('OpenSSL error: %s') % err)
RuntimeError: OpenSSL error: *** WARNING : deprecated key derivation used.
Using -iter or -pbkdf2 would be better.


==
FAIL: 
nova.tests.unit.virt.xenapi.test_xenapi.XenAPIDiffieHellmanTestCase.test_encrypt_many_newlines_at_end
nova.tests.unit.virt.xenapi.test_xenapi.XenAPIDiffieHellmanTestCase.test_encrypt_many_newlines_at_end
--
_StringException: Traceback (most recent call last):
  File "/<>/nova/tests/unit/virt/xenapi/test_xenapi.py", line 
1682, in test_encrypt_many_newlines_at_end
self._test_encryption('Message with lotsa newlines.\n\n\n')
  File "/<>/nova/tests/unit/virt/xenapi/test_xenapi.py", line 
1670, in _test_encryption
enc = self.alice.encrypt(message)
  File "/<>/nova/virt/xenapi/agent.py", line 432, in encrypt
return self._run_ssl(text).strip('\n')
  File "/<>/nova/virt/xenapi/agent.py", line 428, in _run_ssl
raise RuntimeError(_('OpenSSL error: %s') % err)
RuntimeError: OpenSSL error: *** WARNING : deprecated key derivation used.
Using -iter or -pbkdf2 would be better.


==
FAIL: 
nova.tests.unit.virt.xenapi.test_xenapi.XenAPIDiffieHellmanTestCase.test_encrypt_really_long_message
nova.tests.unit.virt.xenapi.test_xenapi.XenAPIDiffieHellmanTestCase.test_encrypt_really_long_message
--
_StringException: Traceback (most recent call last):
  File "/<>/nova/tests/unit/virt/xenapi/test_xenapi.py", line 
1691, in test_encrypt_really_long_message
self._test_encryption(''.join(['abcd' for i in range(1024)]))
  File "/<>/nova/tests/unit/virt/xenapi/test_xenapi.py", line 
1670, in _test_encryption
enc = self.alice.encrypt(message)
  File "/<>/nova/virt/xenapi/agent.py", line 432, in encrypt
return self._run_ssl(text).strip('\n')
  File "/<>/nova/virt/xenapi/agent.py", line 428, in _run_ssl
raise RuntimeError(_('OpenSSL error: %s') % err)
RuntimeError: OpenSSL error: *** WARNING : deprecated key derivation used.
Using -iter or -pbkdf2 would be better.


==
FAIL: 
nova.tests.unit.virt.xenapi.test_xenapi.XenAPIDiffieHellmanTestCase.test_encrypt_simple_message
nova.tests.unit.virt.xenapi.test_xenapi.XenAPIDiffieHellmanTestCase.test_encrypt_simple_message
--
_StringException: Traceback (most recent call last):
  File "/<>/nova/tests/unit/virt/xenapi/test_xenapi.py", line 
1676, in test_encrypt_simple_message
self._test_encryption('This is a simple message.')
  File 

[Yahoo-eng-team] [Bug 1789654] [NEW] placement allocation_ratio initialized with 0.0

2018-08-29 Thread Thomas Goirand
Public bug reported:

After I just finished packaging Rocky, I wanted to test it with puppet-
openstack. Then I couldn't boot VMs after the puppet run, because
allocation_ratio in placement is set to 0.0 by default:

# openstack resource provider list
+--------------------------------------+-------------------+------------+
| uuid                                 | name              | generation |
+--------------------------------------+-------------------+------------+
| f9716941-356f-4a2e-b5ea-31c3c1630892 | poi.infomaniak.ch |          2 |
+--------------------------------------+-------------------+------------+
# openstack resource provider show f9716941-356f-4a2e-b5ea-31c3c1630892
+------------+--------------------------------------+
| Field      | Value                                |
+------------+--------------------------------------+
| uuid       | f9716941-356f-4a2e-b5ea-31c3c1630892 |
| name       | poi.infomaniak.ch                    |
| generation | 2                                    |
+------------+--------------------------------------+
# openstack resource provider inventory list f9716941-356f-4a2e-b5ea-31c3c1630892
+----------------+----------+-----------+------------------+-------+----------+----------+
| resource_class | reserved | step_size | allocation_ratio | total | min_unit | max_unit |
+----------------+----------+-----------+------------------+-------+----------+----------+
| VCPU           |        0 |         1 |              0.0 |     4 |        1 |        4 |
| DISK_GB        |        0 |         1 |              0.0 |    19 |        1 |       19 |
| MEMORY_MB      |      512 |         1 |              0.0 |  7987 |        1 |     7987 |
+----------------+----------+-----------+------------------+-------+----------+----------+

Later on, setting up a correct allocation_ratio fixed the problem:
# openstack resource provider inventory class set --allocation_ratio 16.0 --total 4 f9716941-356f-4a2e-b5ea-31c3c1630892 VCPU
+------------------+------------+
| Field            | Value      |
+------------------+------------+
| max_unit         | 2147483647 |
| min_unit         | 1          |
| step_size        | 1          |
| reserved         | 0          |
| allocation_ratio | 16.0       |
| total            | 4          |
+------------------+------------+
# openstack resource provider inventory list f9716941-356f-4a2e-b5ea-31c3c1630892
+----------------+------------------+----------+------------+-----------+----------+-------+
| resource_class | allocation_ratio | reserved | max_unit   | step_size | min_unit | total |
+----------------+------------------+----------+------------+-----------+----------+-------+
| DISK_GB        |              0.0 |        0 |         19 |         1 |        1 |    19 |
| MEMORY_MB      |              0.0 |      512 |       7987 |         1 |        1 |  7987 |
| VCPU           |             16.0 |        0 | 2147483647 |         1 |        1 |     4 |
+----------------+------------------+----------+------------+-----------+----------+-------+

So, after this, I could boot VMs normally. Clearly, though,
allocation_ratio should not be zero by default.
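
If the other resource classes ever need the same treatment, it is the
same command with a different class name (the ratios below are arbitrary
examples, not recommendations):

  # openstack resource provider inventory class set --allocation_ratio 1.0 \
      --total 19 f9716941-356f-4a2e-b5ea-31c3c1630892 DISK_GB
  # openstack resource provider inventory class set --allocation_ratio 1.5 \
      --total 7987 f9716941-356f-4a2e-b5ea-31c3c1630892 MEMORY_MB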

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1789654

Title:
  placement allocation_ratio initialized with 0.0

Status in OpenStack Compute (nova):
  New

Bug description:
  After I just finished packaging Rocky, I wanted to test it with
  puppet-openstack. Then I couldn't boot VMs after the puppet run,
  because allocation_ratio in placement is set to 0.0 by default:

  # openstack resource provider list
  +--------------------------------------+-------------------+------------+
  | uuid                                 | name              | generation |
  +--------------------------------------+-------------------+------------+
  | f9716941-356f-4a2e-b5ea-31c3c1630892 | poi.infomaniak.ch |          2 |
  +--------------------------------------+-------------------+------------+
  # openstack resource provider show f9716941-356f-4a2e-b5ea-31c3c1630892
  +------------+--------------------------------------+
  | Field      | Value                                |
  +------------+--------------------------------------+
  | uuid       | f9716941-356f-4a2e-b5ea-31c3c1630892 |
  | name       | poi.infomaniak.ch                    |
  | generation | 2                                    |
  +------------+--------------------------------------+
  # openstack resource provider inventory list 
f9716941-356f-4a2e-b5ea-31c3c1630892
  
++--+---+--+---+--+--+
  | resource_class | reserved | step_size | allocation_ratio | total | min_unit 
| max_unit |
  
++--+---+--+---+--+--+
  | 

[Yahoo-eng-team] [Bug 1789046] [NEW] unit test cannot be started under Python 3.7

2018-08-25 Thread Thomas Goirand
Public bug reported:

Hi,

Building Horizon in Debian Stretch is no problem, and all unit tests
pass. Unfortunately, under Python 3.7 in Debian Sid, it doesn't even
start to run the unit tests:

+ http_proxy=127.0.0.1:9 https_proxy=127.0.0.9:9 HTTP_PROXY=127.0.0.1:9 
HTTPS_PROXY=127.0.0.1:9 
PYTHONPATH=/<>/debian/tmp/usr/lib/python3/dist-packages 
PYTHON=python3.7 python3.7 -m coverage run /<>/manage.py test -v 2 
--settings=horizon.test.settings horizon/test/unit
/usr/lib/python3/dist-packages/scss/selector.py:54: FutureWarning: Possible 
nested set at position 329
  ''', re.VERBOSE | re.MULTILINE)
/usr/lib/python3/dist-packages/pep8.py:110: FutureWarning: Possible nested set 
at position 1
  EXTRANEOUS_WHITESPACE_REGEX = re.compile(r'[[({] | []}),;:]')
Traceback (most recent call last):
  File "/<>/manage.py", line 25, in 
execute_from_command_line(sys.argv)
  File "/usr/lib/python3/dist-packages/django/core/management/__init__.py", 
line 381, in execute_from_command_line
utility.execute()
  File "/usr/lib/python3/dist-packages/django/core/management/__init__.py", 
line 375, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
  File 
"/usr/lib/python3/dist-packages/django/core/management/commands/test.py", line 
26, in run_from_argv
super().run_from_argv(argv)
  File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 
316, in run_from_argv
self.execute(*args, **cmd_options)
  File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 
353, in execute
output = self.handle(*args, **options)
  File 
"/usr/lib/python3/dist-packages/django/core/management/commands/test.py", line 
56, in handle
failures = test_runner.run_tests(test_labels)
  File "/usr/lib/python3/dist-packages/django/test/runner.py", line 605, in 
run_tests
self.run_checks()
  File "/usr/lib/python3/dist-packages/django/test/runner.py", line 567, in 
run_checks
call_command('check', verbosity=self.verbosity)
  File "/usr/lib/python3/dist-packages/django/core/management/__init__.py", 
line 148, in call_command
return command.execute(*args, **defaults)
  File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 
353, in execute
output = self.handle(*args, **options)
  File 
"/usr/lib/python3/dist-packages/django/core/management/commands/check.py", line 
65, in handle
fail_level=getattr(checks, options['fail_level']),
  File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 
379, in check
include_deployment_checks=include_deployment_checks,
  File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 
366, in _run_checks
return checks.run_checks(**kwargs)
  File "/usr/lib/python3/dist-packages/django/core/checks/registry.py", line 
71, in run_checks
new_errors = check(app_configs=app_configs)
  File "/usr/lib/python3/dist-packages/django/core/checks/urls.py", line 13, in 
check_url_config
return check_resolver(resolver)
  File "/usr/lib/python3/dist-packages/django/core/checks/urls.py", line 23, in 
check_resolver
return check_method()
  File "/usr/lib/python3/dist-packages/django/urls/resolvers.py", line 396, in 
check
for pattern in self.url_patterns:
  File "/usr/lib/python3/dist-packages/django/utils/functional.py", line 37, in 
__get__
res = instance.__dict__[self.name] = self.func(instance)
  File "/usr/lib/python3/dist-packages/django/urls/resolvers.py", line 533, in 
url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
  File "/usr/lib/python3/dist-packages/django/utils/functional.py", line 37, in 
__get__
res = instance.__dict__[self.name] = self.func(instance)
  File "/usr/lib/python3/dist-packages/django/urls/resolvers.py", line 526, in 
urlconf_module
return import_module(self.urlconf_name)
  File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
  File "", line 1006, in _gcd_import
  File "", line 983, in _find_and_load
  File "", line 967, in _find_and_load_unlocked
  File "", line 677, in _load_unlocked
  File "", line 728, in exec_module
  File "", line 219, in _call_with_frames_removed
  File "/<>/horizon/test/urls.py", line 36, in 
url(r"auth/login/", views.login, {'template_name': "auth/login.html"},
AttributeError: module 'django.contrib.auth.views' has no attribute 'login'
Creating test database for alias 'default' 
('file:memorydb_default?mode=memory=shared')...

Could someone from the Horizon team help me to find the solution? I need
this to be fixed in order to have Horizon for Debian Buster.
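
For what it's worth, the AttributeError itself looks like Django rather
than Python 3.7: the function-based django.contrib.auth.views.login was
removed in Django 2.1, so the url() entry has to use the class-based view
instead. A sketch of the kind of change horizon/test/urls.py needs
(untested, the name= is an assumption):

  from django.conf.urls import url
  from django.contrib.auth import views

  urlpatterns = [
      url(r"auth/login/",
          views.LoginView.as_view(template_name="auth/login.html"),
          name='login'),
  ]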

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1789046

Title:
  unit test cannot be started under Python 3.7

Status in 

[Yahoo-eng-team] [Bug 1772874] [NEW] attr_ops.verify_attributes() wrongly rejects binding:host_id in _fixup_res_dict()

2018-05-23 Thread Thomas Goirand
Public bug reported:

While testing puppet-openstack and Neutron with Debian packages (i.e.
running with Stretch and Queens), I had to run neutron-api under uwsgi,
as /usr/bin/neutron-server would not work with Eventlet + Python 3
(which is famously broken). Therefore, I did a setup with uwsgi and a
separate neutron-rpc-server.

Then I tried spawning an instance, and I got some issues in the rpc-
server:

 [req-4b2c2379-78ef-437c-b08a-bd8b309fa0b0 - - - - -] Exception during message 
handling: ValueError: Unrecognized attribute(s) 'binding:host_id'
 Traceback (most recent call last):
   File "/usr/lib/python3/dist-packages/neutron/plugins/common/utils.py", line 
162, in _fixup_res_dict
 attr_ops.verify_attributes(res_dict)
   File "/usr/lib/python3/dist-packages/neutron_lib/api/attributes.py", line 
200, in verify_attributes
 raise exc.HTTPBadRequest(msg)
 webob.exc.HTTPBadRequest: Unrecognized attribute(s) 'binding:host_id'

 During handling of the above exception, another exception occurred:

 Traceback (most recent call last):
   File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/server.py", line 
163, in _process_incoming
 res = self.dispatcher.dispatch(message)
   File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
220, in dispatch
 return self._do_dispatch(endpoint, method, ctxt, args)
   File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
190, in _do_dispatch
 result = func(ctxt, **new_args)
   File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/server.py", line 
226, in inner
 return func(*args, **kwargs)
   File "/usr/lib/python3/dist-packages/neutron/db/api.py", line 91, in wrapped
 setattr(e, '_RETRY_EXCEEDED', True)
   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
 self.force_reraise()
   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
 six.reraise(self.type_, self.value, self.tb)
   File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
 raise value
   File "/usr/lib/python3/dist-packages/neutron/db/api.py", line 87, in wrapped
 return f(*args, **kwargs)
   File "/usr/lib/python3/dist-packages/oslo_db/api.py", line 147, in wrapper
 ectxt.value = e.inner_exc
   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
 self.force_reraise()
   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
 six.reraise(self.type_, self.value, self.tb)
   File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
 raise value
   File "/usr/lib/python3/dist-packages/oslo_db/api.py", line 135, in wrapper
 return f(*args, **kwargs)
   File "/usr/lib/python3/dist-packages/neutron/db/api.py", line 126, in wrapped
 LOG.debug("Retry wrapper got retriable exception: %s", e)
   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
 self.force_reraise()
   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
 six.reraise(self.type_, self.value, self.tb)
   File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
 raise value
   File "/usr/lib/python3/dist-packages/neutron/db/api.py", line 122, in wrapped
 return f(*dup_args, **dup_kwargs)
   File "/usr/lib/python3/dist-packages/neutron/quota/resource_registry.py", 
line 99, in wrapper
 ret_val = f(_self, context, *args, **kwargs)
   File "/usr/lib/python3/dist-packages/neutron/api/rpc/handlers/dhcp_rpc.py", 
line 271, in create_dhcp_port
 return self._port_action(plugin, context, port, 'create_port')
   File "/usr/lib/python3/dist-packages/neutron/api/rpc/handlers/dhcp_rpc.py", 
line 98, in _port_action
 return p_utils.create_port(plugin, context, port)
   File "/usr/lib/python3/dist-packages/neutron/plugins/common/utils.py", line 
189, in create_port
 check_allow_post=check_allow_post)
   File "/usr/lib/python3/dist-packages/neutron/plugins/common/utils.py", line 
166, in _fixup_res_dict
 raise ValueError(e.detail)
 ValueError: Unrecognized attribute(s) 'binding:host_id'

FYI, the content of the res_dict variable before calling
attr_ops.verify_attributes(res_dict) is as follows (I added a LOG.debug()
to find out):

res_dict var: {'device_owner': 'network:dhcp', 'network_id': '5fa58f3a-
3a72-4d5a-a781-dca20d882007', 'fixed_ips': [{'subnet_id': '85a0b153-fcd8
-418d-90c2-7d0140431d61'}], 'mac_address':
, 'name': '',
'admin_state_up': True, 'binding:host_id': 'poi', 'device_id':
'dhcp6d2441eb-6701-5705-adb9-c31fa3421a1a-5fa58f3a-
3a72-4d5a-a781-dca20d882007', 'tenant_id':
'be123aff43cd4699a0fd062dc0f898c6'}
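
To be clear about what fails here: verify_attributes() simply rejects any
key that is not in the attribute map it was given, so the real question
is why 'binding:host_id' is missing from that map on the RPC path. A
stripped-down illustration (plain Python, not neutron code):

  ATTR_MAP = {'name': {}, 'network_id': {}, 'fixed_ips': {},
              'mac_address': {}, 'admin_state_up': {}, 'device_id': {},
              'device_owner': {}, 'tenant_id': {}}   # no binding:* keys

  res_dict = {'network_id': '5fa58f3a-3a72-4d5a-a781-dca20d882007',
              'binding:host_id': 'poi'}
  print(set(res_dict) - set(ATTR_MAP))   # {'binding:host_id'} -> "Unrecognized attribute(s)"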

As the binding:host_id attribute looked valid to me (it's been there for
years in Neutron), I figured that something must be wrong in the parameter
validation code, so I commented out the attr_ops.verify_attributes(res_dict)
call. Without the check, everything worked.

[Yahoo-eng-team] [Bug 1771506] [NEW] Unit test failure with OpenSSL 1.1.1

2018-05-16 Thread Thomas Goirand
Public bug reported:

Hi,

Building the Nova Queens package with OpenSSL 1.1.1 leads to unit test
problems. This was reported to Debian at: https://bugs.debian.org/898807

The new openssl 1.1.1 is currently in experimental [0]. This package
failed to build against this new package [1] while it built fine against
the openssl version currently in unstable [2]. Could you please have a
look?

FAIL: 
nova.tests.unit.virt.xenapi.test_xenapi.XenAPIDiffieHellmanTestCase.test_encrypt_newlines_inside_message
|nova.tests.unit.virt.xenapi.test_xenapi.XenAPIDiffieHellmanTestCase.test_encrypt_newlines_inside_message
|--
|_StringException: pythonlogging:'': {{{2018-05-01 20:48:09,960 WARNING 
[oslo_config.cfg] Config option key_manager.api_class  is deprecated. Use 
option key_manager.backend instead.}}}
|
|Traceback (most recent call last):
|  File "/<>/nova/tests/unit/virt/xenapi/test_xenapi.py", line 
1592, in test_encrypt_newlines_inside_message
|self._test_encryption('Message\nwith\ninterior\nnewlines.')
|  File "/<>/nova/tests/unit/virt/xenapi/test_xenapi.py", line 
1577, in _test_encryption
|enc = self.alice.encrypt(message)
|  File "/<>/nova/virt/xenapi/agent.py", line 432, in encrypt
|return self._run_ssl(text).strip('\n')
|  File "/<>/nova/virt/xenapi/agent.py", line 428, in _run_ssl
|raise RuntimeError(_('OpenSSL error: %s') % err)
|RuntimeError: OpenSSL error: *** WARNING : deprecated key derivation used.
|Using -iter or -pbkdf2 would be better.

It looks like this is due to the additional warning message on stderr.

[0] https://lists.debian.org/msgid-search/20180501211400.ga21...@roeckx.be
[1] 
https://breakpoint.cc/openssl-rebuild/2018-05-03-rebuild-openssl1.1.1-pre6/attempted/nova_17.0.0-4_amd64-2018-05-01T20%3A39%3A38Z
[2] 
https://breakpoint.cc/openssl-rebuild/2018-05-03-rebuild-openssl1.1.1-pre6/successful/nova_17.0.0-4_amd64-2018-05-02T18%3A46%3A36Z
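
If that extra stderr output is indeed the only problem, one option would be
to ignore this specific, harmless warning instead of treating any output on
stderr as a failure. A minimal sketch of such a filter (hypothetical helper,
not Nova's actual code):

_IGNORABLE_OPENSSL_WARNINGS = (
    "WARNING : deprecated key derivation used",
    "Using -iter or -pbkdf2 would be better.",
)

def strip_ignorable_openssl_warnings(err):
    # Keep only the stderr lines that are not known-harmless warnings, so a
    # non-empty result still means a real OpenSSL error.
    kept = [line for line in err.splitlines()
            if not any(w in line for w in _IGNORABLE_OPENSSL_WARNINGS)]
    return "\n".join(kept)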

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1771506

Title:
  Unit test failure with OpenSSL 1.1.1

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi,

  Building the Nova Queens package with OpenSSL 1.1.1 leads to unit test
  problems. This was reported to Debian at:
  https://bugs.debian.org/898807

  The new openssl 1.1.1 is currently in experimental [0]. This package
  failed to build against this new package [1] while it built fine
  against the openssl version currently in unstable [2]. Could you
  please have a look?

  FAIL: 
nova.tests.unit.virt.xenapi.test_xenapi.XenAPIDiffieHellmanTestCase.test_encrypt_newlines_inside_message
  
|nova.tests.unit.virt.xenapi.test_xenapi.XenAPIDiffieHellmanTestCase.test_encrypt_newlines_inside_message
  |--
  |_StringException: pythonlogging:'': {{{2018-05-01 20:48:09,960 WARNING 
[oslo_config.cfg] Config option key_manager.api_class  is deprecated. Use 
option key_manager.backend instead.}}}
  |
  |Traceback (most recent call last):
  |  File "/<>/nova/tests/unit/virt/xenapi/test_xenapi.py", line 
1592, in test_encrypt_newlines_inside_message
  |self._test_encryption('Message\nwith\ninterior\nnewlines.')
  |  File "/<>/nova/tests/unit/virt/xenapi/test_xenapi.py", line 
1577, in _test_encryption
  |enc = self.alice.encrypt(message)
  |  File "/<>/nova/virt/xenapi/agent.py", line 432, in encrypt
  |return self._run_ssl(text).strip('\n')
  |  File "/<>/nova/virt/xenapi/agent.py", line 428, in _run_ssl
  |raise RuntimeError(_('OpenSSL error: %s') % err)
  |RuntimeError: OpenSSL error: *** WARNING : deprecated key derivation used.
  |Using -iter or -pbkdf2 would be better.

  It looks like this is due to the additional warning message on stderr.

  [0] https://lists.debian.org/msgid-search/20180501211400.ga21...@roeckx.be
  [1] 
https://breakpoint.cc/openssl-rebuild/2018-05-03-rebuild-openssl1.1.1-pre6/attempted/nova_17.0.0-4_amd64-2018-05-01T20%3A39%3A38Z
  [2] 
https://breakpoint.cc/openssl-rebuild/2018-05-03-rebuild-openssl1.1.1-pre6/successful/nova_17.0.0-4_amd64-2018-05-02T18%3A46%3A36Z

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1771506/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1754767] [NEW] bad error message when /var/lib/glance/* have wrong rights

2018-03-09 Thread Thomas Goirand
Public bug reported:

Because of an error in the packaging of Glance in Debian, I had:

# openstack image create --container-format bare --disk-format qcow2 --file 
debian-testing-openstack-amd64.qcow2 debian-buster-amd64
410 Gone: Error in store configuration. Adding images to store is disabled. 
(HTTP N/A)

It took me more than one hour to figure out that /var/lib/glance/images
and /var/lib/glance/image-cache were owned by root:root instead of
glance:glance. After fixing this and restarting the daemon, it just
worked, of course.

This was really my fault, because I attempted to fix the Debian postinst
to stop using chown -R, but still... While this is normal behavior,
the "Adding images to store is disabled." error message is really
deceptive. It made me think that my glance-{api,registry}.conf files
were wrong.

So, of course, that's only a wishlist bug. Please fix the error message
and make it nicer and less deceptive, at least in the logs (no need to
have the user see that the admin is silly).
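
For what it's worth, a startup sanity check that logs the real cause would
already help a lot. A minimal sketch, assuming a hypothetical helper (this
is not Glance's actual code):

import logging
import os

LOG = logging.getLogger(__name__)

def check_store_dir(path):
    # Hypothetical check: log an explicit message when the image store
    # directory is not usable by the user running glance-api.
    if not os.access(path, os.W_OK | os.X_OK):
        LOG.error("Image store directory %s is not writable by uid %d; "
                  "adding images to the store will fail.", path, os.geteuid())
        return False
    return True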

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1754767

Title:
  bad error message when /var/lib/glance/* have wrong rights

Status in Glance:
  New

Bug description:
  Because of an error in the packaging of Glance in Debian, I had:

  # openstack image create --container-format bare --disk-format qcow2 --file 
debian-testing-openstack-amd64.qcow2 debian-buster-amd64
  410 Gone: Error in store configuration. Adding images to store is disabled. 
(HTTP N/A)

  It took me more than one hour to figure out that /var/lib/glance/images
  and /var/lib/glance/image-cache were owned by root:root instead of
  glance:glance. After fixing this and restarting the daemon, it just
  worked, of course.

  This was really my fault, because I attempted to fix the Debian
  postinst to stop using chown -R, but still... While this is normal
  behavior, the "Adding images to store is disabled." error message is
  really deceptive. It made me think that my glance-{api,registry}.conf
  files were wrong.

  So, of course, that's only a wishlist bug. Please fix the error
  message and make it nicer and less deceptive, at least in the logs (no
  need to have the user see that the admin is silly).

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1754767/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1751551] [NEW] broken test in Py 3.6: SAMLGenerationTests.test_sign_assertion_exc

2018-02-25 Thread Thomas Goirand
Public bug reported:

While building Keystone in Debian Sid + Python 3.6, I get the below
stack dump. Obviously, this is a broken test, not broken code. Notice
the final "." (i.e. a dot) after "status 1": that is the cause of the
test failure.

Everything else seems to pass in my environment.

keystone.tests.unit.test_v3_federation.SAMLGenerationTests.test_sign_assertion_exc
--

Captured traceback:
~~~
b'Traceback (most recent call last):'
b'  File "/usr/lib/python3/dist-packages/mock/mock.py", line 1305, in 
patched'
b'return func(*args, **keywargs)'
b'  File 
"/home/zigo/sources/openstack/queens/services/keystone/build-area/keystone-13.0.0~rc1/keystone/tests/unit/test_v3_federation.py",
 line 4049, in test_sign_assertion_exc'
b'self.assertEqual(expected_log, logger_fixture.output)'
b'  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 411, 
in assertEqual'
b'self.assertThat(observed, matcher, message)'
b'  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 498, 
in assertThat'
b'raise mismatch_error'
b'testtools.matchers._impl.MismatchError: !=:'
b"reference = '''\\"
b"Error when signing assertion, reason: Command 'xmlsec1' returned non-zero 
exit status 1 
keystone.tests.unit.test_v3_federation.SAMLGenerationTests.test_sign_assertion_exc-1"
b"'''"
b"actual= '''\\"
b"Error when signing assertion, reason: Command 'xmlsec1' returned non-zero 
exit status 1. 
keystone.tests.unit.test_v3_federation.SAMLGenerationTests.test_sign_assertion_exc-1"
b"'''"
b''
b''
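
If the trailing dot added by Python 3.6 is the only difference, the test
could match the log line with a regex instead of an exact string. A minimal
sketch of a more tolerant assertion, in place of the assertEqual() shown in
the traceback (assuming the test class exposes unittest's assertRegex):

# Accept the message with or without the trailing dot that Python 3.6
# adds to the CalledProcessError text.
self.assertRegex(
    logger_fixture.output,
    r"Command 'xmlsec1' returned non-zero exit status 1\.?")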

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1751551

Title:
  broken test in Py 3.6: SAMLGenerationTests.test_sign_assertion_exc

Status in OpenStack Identity (keystone):
  New

Bug description:
  While building Keystone in Debian Sid + Python 3.6, I get the below
  stack dump. Obviously, this is a broken test, not broken code.
  Notice the final "." (i.e. a dot) after "status 1": that is the cause
  of the test failure.

  Everything else seems to pass in my environment.

  
keystone.tests.unit.test_v3_federation.SAMLGenerationTests.test_sign_assertion_exc
  
--

  Captured traceback:
  ~~~
  b'Traceback (most recent call last):'
  b'  File "/usr/lib/python3/dist-packages/mock/mock.py", line 1305, in 
patched'
  b'return func(*args, **keywargs)'
  b'  File 
"/home/zigo/sources/openstack/queens/services/keystone/build-area/keystone-13.0.0~rc1/keystone/tests/unit/test_v3_federation.py",
 line 4049, in test_sign_assertion_exc'
  b'self.assertEqual(expected_log, logger_fixture.output)'
  b'  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 
411, in assertEqual'
  b'self.assertThat(observed, matcher, message)'
  b'  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 
498, in assertThat'
  b'raise mismatch_error'
  b'testtools.matchers._impl.MismatchError: !=:'
  b"reference = '''\\"
  b"Error when signing assertion, reason: Command 'xmlsec1' returned 
non-zero exit status 1 
keystone.tests.unit.test_v3_federation.SAMLGenerationTests.test_sign_assertion_exc-1"
  b"'''"
  b"actual= '''\\"
  b"Error when signing assertion, reason: Command 'xmlsec1' returned 
non-zero exit status 1. 
keystone.tests.unit.test_v3_federation.SAMLGenerationTests.test_sign_assertion_exc-1"
  b"'''"
  b''
  b''

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1751551/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750991] [NEW] lbaas: TestHaproxyCfg.test_render_template_https fails in Python 3.6

2018-02-22 Thread Thomas Goirand
Public bug reported:

FAIL: 
neutron_lbaas.tests.unit.drivers.haproxy.test_jinja_cfg.TestHaproxyCfg.test_render_template_https
neutron_lbaas.tests.unit.drivers.haproxy.test_jinja_cfg.TestHaproxyCfg.test_render_template_https
--
_StringException: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/neutron/tests/base.py", line 132, in func
return f(self, *args, **kwargs)
  File 
"/home/zigo/sources/openstack/queens/services/neutron-lbaas/build-area/neutron-lbaas-12.0.0~rc1/neutron_lbaas/tests/unit/drivers/haproxy/test_jinja_cfg.py",
 line 183, in test_render_template_https
frontend=fe, backend=be), rendered_obj)
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 411, in 
assertEqual
self.assertThat(observed, matcher, message)
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 498, in 
assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: !=:
reference = '''\
# Configuration for test-lb
global
daemon
user nobody
group nogroup
log /dev/log local0
log /dev/log local1 notice
maxconn 98
stats socket /sock_path mode 0666 level user

defaults
log global
retries 3
option redispatch
timeout connect 5000
timeout client 5
timeout server 5

frontend sample_listener_id_1
option tcplog
maxconn 98
bind 10.0.0.2:443
mode tcp
default_backend sample_pool_id_1

backend sample_pool_id_1
mode tcp
balance roundrobin
cookie SRV insert indirect nocache
timeout check 31s
option httpchk GET /index.html
http-check expect rstatus 500|405|404
option ssl-hello-chk
server sample_member_id_1 10.0.0.99:82 weight 13 check inter 30s fall 3 
cookie sample_member_id_1
server sample_member_id_2 10.0.0.98:82 weight 13 check inter 30s fall 3 
cookie sample_member_id_2

'''
actual= '''\
# Configuration for test-lb
global
daemon
user nobody
group nogroup
log /dev/log local0
log /dev/log local1 notice
maxconn 98
stats socket /sock_path mode 0666 level user

defaults
log global
retries 3
option redispatch
timeout connect 5000
timeout client 5
timeout server 5

frontend sample_listener_id_1
option tcplog
maxconn 98
bind 10.0.0.2:443
mode tcp
default_backend sample_pool_id_1

backend sample_pool_id_1
mode tcp
balance roundrobin
cookie SRV insert indirect nocache
timeout check 31s
option httpchk GET /index.html
http-check expect rstatus 500|404|405
option ssl-hello-chk
server sample_member_id_1 10.0.0.99:82 weight 13 check inter 30s fall 3 
cookie sample_member_id_1
server sample_member_id_2 10.0.0.98:82 weight 13 check inter 30s fall 3 
cookie sample_member_id_2

'''
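
The only difference between the two renderings above is the order of the
expected HTTP codes (500|405|404 vs 500|404|405), which suggests they are
built from an unordered set whose iteration order differs across Python
versions/runs. A minimal sketch of an order-independent expansion
(hypothetical helper, not necessarily the driver's actual code):

def expand_expected_codes(codes):
    # Turn "500, 404-405" style input into a deterministic, sorted
    # "404|405|500" string so the rendered haproxy config no longer
    # depends on set/dict iteration order.
    result = set()
    for item in codes.replace(',', ' ').split():
        if '-' in item:
            low, high = item.split('-')
            result.update(str(code) for code in range(int(low), int(high) + 1))
        else:
            result.add(item)
    return '|'.join(sorted(result))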

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- TestHaproxyCfg.test_render_template_https fails in Python 3.6
+ lbaas: TestHaproxyCfg.test_render_template_https fails in Python 3.6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750991

Title:
  lbaas: TestHaproxyCfg.test_render_template_https fails in Python 3.6

Status in neutron:
  New

Bug description:
  FAIL: 
neutron_lbaas.tests.unit.drivers.haproxy.test_jinja_cfg.TestHaproxyCfg.test_render_template_https
  
neutron_lbaas.tests.unit.drivers.haproxy.test_jinja_cfg.TestHaproxyCfg.test_render_template_https
  --
  _StringException: Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/neutron/tests/base.py", line 132, in 
func
  return f(self, *args, **kwargs)
File 
"/home/zigo/sources/openstack/queens/services/neutron-lbaas/build-area/neutron-lbaas-12.0.0~rc1/neutron_lbaas/tests/unit/drivers/haproxy/test_jinja_cfg.py",
 line 183, in test_render_template_https
  frontend=fe, backend=be), rendered_obj)
File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 411, in 
assertEqual
  self.assertThat(observed, matcher, message)
File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 498, in 
assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: !=:
  reference = '''\
  # Configuration for test-lb
  global
  daemon
  user nobody
  group nogroup
  log /dev/log local0
  log /dev/log local1 notice
  maxconn 98
  stats socket /sock_path mode 0666 level user

  defaults
  log global
  retries 3
  option redispatch
  timeout connect 5000
  timeout client 5
  timeout server 5

  frontend sample_listener_id_1
  option tcplog
  maxconn 98
  bind 10.0.0.2:443
  mode tcp
  default_backend sample_pool_id_1

  backend sample_pool_id_1
  mode tcp
  balance roundrobin
  

[Yahoo-eng-team] [Bug 1750996] [NEW] lbaas: radware driver TestLBaaSDriverRestClient.test_recover_was_called fails in Python 3.6

2018-02-22 Thread Thomas Goirand
Public bug reported:

While building the neutron-lbaas package in Debian Sid with Python 3.6,
I get the below failure.

FAIL: 
neutron_lbaas.tests.unit.drivers.radware.test_v2_plugin_driver.TestLBaaSDriverRestClient.test_recover_was_called
neutron_lbaas.tests.unit.drivers.radware.test_v2_plugin_driver.TestLBaaSDriverRestClient.test_recover_was_called
--
_StringException: pythonlogging:'': {{{
WARNING [neutron_lbaas.services.loadbalancer.plugin] neutron-lbaas is now 
deprecated. See: https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation
 WARNING [neutron.api.extensions] Did not find expected name 
"Ip_substring_port_filtering_lib" in 
/usr/lib/python3/dist-packages/neutron/extensions/ip_substring_port_filtering_lib.py
 WARNING [neutron_lbaas.services.loadbalancer.plugin] neutron-lbaas is now 
deprecated. See: https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation
 WARNING [neutron.quota.resource_registry] member is already registered
 WARNING [neutron.quota.resource_registry] loadbalancer is already registered
 WARNING [neutron.quota.resource_registry] listener is already registered
 WARNING [neutron.quota.resource_registry] pool is already registered
 WARNING [neutron.quota.resource_registry] healthmonitor is already registered
 WARNING [neutron.quota.resource_registry] l7policy is already registered
}}}

stderr: {{{
/usr/lib/python3/dist-packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: 
Parameters to load are deprecated.  Call .resolve and .require separately.
  return pkg_resources.EntryPoint.parse("x=" + s).load(False)
}}}

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/neutron/tests/base.py", line 132, in func
return f(self, *args, **kwargs)
  File 
"/home/zigo/sources/openstack/queens/services/neutron-lbaas/build-area/neutron-lbaas-12.0.0~rc1/neutron_lbaas/tests/unit/drivers/radware/test_v2_plugin_driver.py",
 line 243, in test_recover_was_called
None, None)
  File 
"/home/zigo/sources/openstack/queens/services/neutron-lbaas/build-area/neutron-lbaas-12.0.0~rc1/neutron_lbaas/drivers/radware/rest_client.py",
 line 90, in call
resp = self._call(action, resource, data, headers, binary)
  File "/usr/lib/python3/dist-packages/oslo_log/helpers.py", line 67, in wrapper
return method(*args, **kwargs)
  File 
"/home/zigo/sources/openstack/queens/services/neutron-lbaas/build-area/neutron-lbaas-12.0.0~rc1/neutron_lbaas/drivers/radware/rest_client.py",
 line 139, in _call
self.server, self.port, timeout=self.timeout)
  File "/usr/lib/python3.6/http/client.py", line 1377, in __init__
context = ssl._create_default_https_context()
  File "/usr/lib/python3/dist-packages/eventlet/green/ssl.py", line 414, in 
green_create_default_context
context = _original_create_default_context(*a, **kw)
  File "/usr/lib/python3.6/ssl.py", line 506, in create_default_context
context.verify_mode = CERT_REQUIRED
  File "/usr/lib/python3.6/ssl.py", line 485, in verify_mode
super(SSLContext, SSLContext).verify_mode.__set__(self, value)
  File "/usr/lib/python3.6/ssl.py", line 485, in verify_mode
super(SSLContext, SSLContext).verify_mode.__set__(self, value)
  File "/usr/lib/python3.6/ssl.py", line 485, in verify_mode
super(SSLContext, SSLContext).verify_mode.__set__(self, value)
  File "/usr/lib/python3.6/ssl.py", line 485, in verify_mode
super(SSLContext, SSLContext).verify_mode.__set__(self, value)
  File "/usr/lib/python3.6/ssl.py", line 485, in verify_mode
super(SSLContext, SSLContext).verify_mode.__set__(self, value)
  File "/usr/lib/python3.6/ssl.py", line 485, in verify_mode
super(SSLContext, SSLContext).verify_mode.__set__(self, value)
  File "/usr/lib/python3.6/ssl.py", line 485, in verify_mode
super(SSLContext, SSLContext).verify_mode.__set__(self, value)
  File "/usr/lib/python3.6/ssl.py", line 485, in verify_mode
super(SSLContext, SSLContext).verify_mode.__set__(self, value)
  File "/usr/lib/python3.6/ssl.py", line 485, in verify_mode
super(SSLContext, SSLContext).verify_mode.__set__(self, value)
  File "/usr/lib/python3.6/ssl.py", line 485, in verify_mode
super(SSLContext, SSLContext).verify_mode.__set__(self, value)
  File "/usr/lib/python3.6/ssl.py", line 485, in verify_mode
super(SSLContext, SSLContext).verify_mode.__set__(self, value)
  File "/usr/lib/python3.6/ssl.py", line 485, in verify_mode

[ ... 3 pages of the same thing like this ... ]

  File "/usr/lib/python3.6/ssl.py", line 485, in verify_mode
super(SSLContext, SSLContext).verify_mode.__set__(self, value)
  File "/usr/lib/python3.6/ssl.py", line 485, in verify_mode
super(SSLContext, SSLContext).verify_mode.__set__(self, value)
RecursionError: maximum recursion depth exceeded while calling a Python object

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which 

[Yahoo-eng-team] [Bug 1750994] [NEW] lbaas: TestHaproxyCfg.test_transform_listener fails in Python 3.6

2018-02-22 Thread Thomas Goirand
Public bug reported:

While building the neutron-lbaas package in Debian Sid with Python 3.6,
I get the below failure. As this looks like a broken test rather than a
software bug, I've disabled running the unit test at package build time;
however, it'd be nice to have it fixed.

FAIL: 
neutron_lbaas.tests.unit.drivers.haproxy.test_jinja_cfg.TestHaproxyCfg.test_transform_listener
neutron_lbaas.tests.unit.drivers.haproxy.test_jinja_cfg.TestHaproxyCfg.test_transform_listener
--
_StringException: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/neutron/tests/base.py", line 132, in func
return f(self, *args, **kwargs)
  File 
"/home/zigo/sources/openstack/queens/services/neutron-lbaas/build-area/neutron-lbaas-12.0.0~rc1/neutron_lbaas/tests/unit/drivers/haproxy/test_jinja_cfg.py",
 line 443, in test_transform_listener
self.assertEqual(sample_configs.RET_LISTENER, ret)
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 411, in 
assertEqual
self.assertThat(observed, matcher, message)
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 498, in 
assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: !=:
reference = {'connection_limit': 98,
 'default_pool': {'admin_state_up': True,
  'health_monitor': {'admin_state_up': True,
 'delay': 30,
 'expected_codes': '500|405|404',
 'http_method': 'GET',
 'id': 'sample_monitor_id_1',
 'max_retries': 3,
 'timeout': 31,
 'type': 'HTTP',
 'url_path': '/index.html'},
  'id': 'sample_pool_id_1',
  'lb_algorithm': 'roundrobin',
  'members': [{'address': '10.0.0.99',
   'admin_state_up': True,
   'id': 'sample_member_id_1',
   'protocol_port': 82,
   'provisioning_status': 'ACTIVE',
   'subnet_id': '10.0.0.1/24',
   'weight': 13},
  {'address': '10.0.0.98',
   'admin_state_up': True,
   'id': 'sample_member_id_2',
   'protocol_port': 82,
   'provisioning_status': 'ACTIVE',
   'subnet_id': '10.0.0.1/24',
   'weight': 13}],
  'protocol': 'http',
  'provisioning_status': 'ACTIVE',
  'session_persistence': {'cookie_name': 'HTTP_COOKIE',
  'type': 'HTTP_COOKIE'}},
 'id': 'sample_listener_id_1',
 'protocol': 'HTTP',
 'protocol_mode': 'http',
 'protocol_port': '80'}
actual= {'connection_limit': 98,
 'default_pool': {'admin_state_up': True,
  'health_monitor': {'admin_state_up': True,
 'delay': 30,
 'expected_codes': '500|404|405',
 'http_method': 'GET',
 'id': 'sample_monitor_id_1',
 'max_retries': 3,
 'timeout': 31,
 'type': 'HTTP',
 'url_path': '/index.html'},
  'id': 'sample_pool_id_1',
  'lb_algorithm': 'roundrobin',
  'members': [{'address': '10.0.0.99',
   'admin_state_up': True,
   'id': 'sample_member_id_1',
   'protocol_port': 82,
   'provisioning_status': 'ACTIVE',
   'subnet_id': '10.0.0.1/24',
   'weight': 13},
  {'address': '10.0.0.98',
   'admin_state_up': True,
   'id': 'sample_member_id_2',
   'protocol_port': 82,
   'provisioning_status': 'ACTIVE',
   'subnet_id': '10.0.0.1/24',
   'weight': 13}],
  'protocol': 'http',
  'provisioning_status': 'ACTIVE',
  'session_persistence': {'cookie_name': 'HTTP_COOKIE',
  'type': 'HTTP_COOKIE'}},
 'id': 'sample_listener_id_1',
 'protocol': 'HTTP',
 'protocol_mode': 'http',
 'protocol_port': '80'}

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug 

[Yahoo-eng-team] [Bug 1750999] [NEW] neutron_lbaas.tests.unit.agent.test_agent.TestLbaasService.test_main failure in Python 3.6

2018-02-22 Thread Thomas Goirand
Public bug reported:

While building the neutron-lbaas package in Debian Sid with Python 3.6,
I get the below failure. As this looks like a broken test rather than a
software bug, I've disabled running the unit test at package build time;
however, it'd be nice to have it fixed.

My rough guess, without looking too closely, is that there's a string
type issue in this test that Python 3.6 doesn't like.


FAIL: neutron_lbaas.tests.unit.agent.test_agent.TestLbaasService.test_main
neutron_lbaas.tests.unit.agent.test_agent.TestLbaasService.test_main
--
_StringException: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/neutron/tests/base.py", line 132, in func
return f(self, *args, **kwargs)
  File 
"/home/zigo/sources/openstack/queens/services/neutron-lbaas/build-area/neutron-lbaas-12.0.0~rc1/neutron_lbaas/tests/unit/agent/test_agent.py",
 line 45, in test_main
agent.main()
  File 
"/home/zigo/sources/openstack/queens/services/neutron-lbaas/build-area/neutron-lbaas-12.0.0~rc1/neutron_lbaas/agent/agent.py",
 line 66, in main
common_config.init(sys.argv[1:])
  File "/usr/lib/python3/dist-packages/neutron/common/config.py", line 78, in 
init
**kwargs)
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2492, in 
__call__
default_config_files, default_config_dirs)
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2374, in 
_pre_setup
prog = os.path.basename(sys.argv[0])
  File "/usr/lib/python3.6/posixpath.py", line 144, in basename
p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not MagicMock
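
The traceback shows os.path.basename() receiving a MagicMock for
sys.argv[0], so the test presumably mocks sys (or sys.argv) wholesale. A
minimal sketch of a Python 3.6-friendly alternative, assuming the goal is
only to control the arguments seen by main(); "agent" is the module the
test already imports, the rest is hypothetical, not the actual test code:

import sys
from unittest import mock

# Keep sys.argv[0] a real string so os.path.basename() keeps working.
with mock.patch.object(sys, 'argv', ['neutron-lbaas-agent']):
    agent.main()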

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750999

Title:
  neutron_lbaas.tests.unit.agent.test_agent.TestLbaasService.test_main
  failure in Python 3.6

Status in neutron:
  New

Bug description:
  While building the neutron-lbaas package in Debian Sid with Python
  3.6, I get the below failure. As this looks like a broken test rather
  than a software bug, I've disabled running the unit test at package
  build time; however, it'd be nice to have it fixed.

  My rough guess, without looking too closely, is that there's a string
  type issue in this test that Python 3.6 doesn't like.

  
  FAIL: neutron_lbaas.tests.unit.agent.test_agent.TestLbaasService.test_main
  neutron_lbaas.tests.unit.agent.test_agent.TestLbaasService.test_main
  --
  _StringException: Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/neutron/tests/base.py", line 132, in 
func
  return f(self, *args, **kwargs)
File 
"/home/zigo/sources/openstack/queens/services/neutron-lbaas/build-area/neutron-lbaas-12.0.0~rc1/neutron_lbaas/tests/unit/agent/test_agent.py",
 line 45, in test_main
  agent.main()
File 
"/home/zigo/sources/openstack/queens/services/neutron-lbaas/build-area/neutron-lbaas-12.0.0~rc1/neutron_lbaas/agent/agent.py",
 line 66, in main
  common_config.init(sys.argv[1:])
File "/usr/lib/python3/dist-packages/neutron/common/config.py", line 78, in 
init
  **kwargs)
File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2492, in 
__call__
  default_config_files, default_config_dirs)
File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2374, in 
_pre_setup
  prog = os.path.basename(sys.argv[0])
File "/usr/lib/python3.6/posixpath.py", line 144, in basename
  p = os.fspath(p)
  TypeError: expected str, bytes or os.PathLike object, not MagicMock

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1750999/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750672] [NEW] failure to generate Nova's doc in Python 3.6

2018-02-20 Thread Thomas Goirand
Public bug reported:

When generating the Sphinx doc in Debian Sid with Python 3.6 for the
Queens RC1 release of Nova, I get the below stack dump, though it passes
under Python 2.7. A fix would be more than welcome, because I'm removing
all traces of Python 2.7, including the Sphinx stuff and all modules.

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/sphinx/cmdline.py", line 306, in main
app.build(opts.force_all, filenames)
  File "/usr/lib/python3/dist-packages/sphinx/application.py", line 339, in 
build
self.builder.build_update()
  File "/usr/lib/python3/dist-packages/sphinx/builders/__init__.py", line 329, 
in build_update
'out of date' % len(to_build))
  File "/usr/lib/python3/dist-packages/sphinx/builders/__init__.py", line 342, 
in build
updated_docnames = set(self.env.update(self.config, self.srcdir, 
self.doctreedir))
  File "/usr/lib/python3/dist-packages/sphinx/environment/__init__.py", line 
601, in update
self._read_serial(docnames, self.app)
  File "/usr/lib/python3/dist-packages/sphinx/environment/__init__.py", line 
621, in _read_serial
self.read_doc(docname, app)
  File "/usr/lib/python3/dist-packages/sphinx/environment/__init__.py", line 
758, in read_doc
pub.publish()
  File "/usr/lib/python3/dist-packages/docutils/core.py", line 217, in publish
self.settings)
  File "/usr/lib/python3/dist-packages/sphinx/io.py", line 74, in read
self.parse()
  File "/usr/lib/python3/dist-packages/docutils/readers/__init__.py", line 78, 
in parse
self.parser.parse(self.input, document)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/__init__.py", line 
191, in parse
self.statemachine.run(inputlines, document, inliner=self.inliner)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
172, in run
input_source=document['source'])
  File "/usr/lib/python3/dist-packages/docutils/statemachine.py", line 239, in 
run
context, state, transitions)
  File "/usr/lib/python3/dist-packages/docutils/statemachine.py", line 460, in 
check_line
return method(match, context, next_state)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
2754, in underline
self.section(title, source, style, lineno - 1, messages)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
328, in section
self.new_subsection(title, lineno, messages)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
396, in new_subsection
node=section_node, match_titles=True)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
283, in nested_parse
node=node, match_titles=match_titles)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
197, in run
results = StateMachineWS.run(self, input_lines, input_offset)
  File "/usr/lib/python3/dist-packages/docutils/statemachine.py", line 239, in 
run
context, state, transitions)
  File "/usr/lib/python3/dist-packages/docutils/statemachine.py", line 460, in 
check_line
return method(match, context, next_state)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
2754, in underline
self.section(title, source, style, lineno - 1, messages)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
328, in section
self.new_subsection(title, lineno, messages)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
396, in new_subsection
node=section_node, match_titles=True)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
283, in nested_parse
node=node, match_titles=match_titles)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
197, in run
results = StateMachineWS.run(self, input_lines, input_offset)
  File "/usr/lib/python3/dist-packages/docutils/statemachine.py", line 239, in 
run
context, state, transitions)
  File "/usr/lib/python3/dist-packages/docutils/statemachine.py", line 460, in 
check_line
return method(match, context, next_state)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
2754, in underline
self.section(title, source, style, lineno - 1, messages)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
328, in section
self.new_subsection(title, lineno, messages)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
396, in new_subsection
node=section_node, match_titles=True)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
283, in nested_parse
node=node, match_titles=match_titles)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
197, in run
results = StateMachineWS.run(self, input_lines, input_offset)
  File "/usr/lib/python3/dist-packages/docutils/statemachine.py", line 239, in 
run
context, state, transitions)
  File 

[Yahoo-eng-team] [Bug 1736114] [NEW] metadata_agent.ini cannot be built reproducibly

2017-12-04 Thread Thomas Goirand
Public bug reported:

Hi,

When generating metadata_agent.ini, the default value of the
metadata_workers directive is filled in from the number of CPUs of the
machine building the file. This makes the whole Neutron package
non-reproducible.

The config code is like this (from
neutron/conf/agent/metadata/config.py):

cfg.IntOpt('metadata_workers',
           default=host.cpu_count() // 2,
           help=_('Number of separate worker processes for metadata '
                  'server (defaults to half of the number of CPUs)')),

Instead of writing this, the default value should be set to None; then,
wherever the metadata_workers value is fetched, something like this should
be used (probably with a //2 added if we want to retain the behaviour
above):

def get_num_metadata_workers():
    """Return the configured number of workers."""
    if CONF.metadata_workers is None:
        # None implies the number of CPUs
        return processutils.get_worker_count()
    return CONF.metadata_workers

This way, the value really is computed at runtime, not at build time,
which is probably what the original author intended. Note that this type
of fix has already been written in Glance and many other OpenStack
packages.
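
Put together, the declaration and the runtime helper could look something
like the following; this is only a sketch that keeps the "half of the CPUs"
behaviour, not a finished patch:

from oslo_concurrency import processutils
from oslo_config import cfg

CONF = cfg.CONF

metadata_workers_opt = cfg.IntOpt(
    'metadata_workers',
    default=None,
    help='Number of separate worker processes for metadata '
         'server (defaults to half of the number of CPUs)')

CONF.register_opts([metadata_workers_opt])

def get_num_metadata_workers():
    """Return the configured number of workers, resolved at runtime."""
    if CONF.metadata_workers is None:
        # Half of the CPUs of the machine actually running the agent,
        # not of the machine that generated the sample file.
        return max(processutils.get_worker_count() // 2, 1)
    return CONF.metadata_workers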

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736114

Title:
  metadata_agent.ini cannot be built reproducibly

Status in neutron:
  New

Bug description:
  Hi,

  When generating metadata_agent.ini, the default value of the
  metadata_workers directive is filled in from the number of CPUs of the
  machine building the file. This makes the whole Neutron package
  non-reproducible.

  The config code is like this (from
  neutron/conf/agent/metadata/config.py):

  cfg.IntOpt('metadata_workers',
             default=host.cpu_count() // 2,
             help=_('Number of separate worker processes for metadata '
                    'server (defaults to half of the number of CPUs)')),

  Instead of writing this, the default value should be set to None; then,
  wherever the metadata_workers value is fetched, something like this
  should be used (probably with a //2 added if we want to retain the
  behaviour above):

  def get_num_metadata_workers():
      """Return the configured number of workers."""
      if CONF.metadata_workers is None:
          # None implies the number of CPUs
          return processutils.get_worker_count()
      return CONF.metadata_workers

  This way, the value really is computed at runtime, not at build time,
  which is probably what the original author intended. Note that this type
  of fix has already been written in Glance and many other OpenStack
  packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1736114/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1734784] [NEW] Cannot boot instances on filesystem without O_DIRECT support (fails on tmpfs)

2017-11-27 Thread Thomas Goirand
Public bug reported:

I'm trying to (tempest) validate OpenStack Pike for Debian. So I'm
running Pike (nova 16.0.3) on Debian Sid. My environment for running
tempest is a Debian Live system which I just boot once, install all of
OpenStack on, and run tempest.

As a consequence, my filesystem is a bit weird. It's a single root
partition using overlayfs, which has its read/write volume on
tmpfs. This worked well when validating Newton, but it seems there's a
regression with Pike. Here's what happens when trying to boot an
instance:

 Traceback (most recent call last):
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2192, 
in _build_resources
 yield resources
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2007, 
in _build_and_run_instance
 block_device_info=block_device_info)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
2802, in spawn
 block_device_info=block_device_info)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
3240, in _create_image
 fallback_from_host)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
3331, in _create_and_inject_local_root
 instance, size, fallback_from_host)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
6988, in _try_fetch_image_cache
 size=size)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", 
line 241, in cache
 *args, **kwargs)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", 
line 595, in create_image
 prepare_template(target=base, *args, **kwargs)
   File "/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
 return f(*args, **kwargs)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", 
line 237, in fetch_func_sync
 fetch_func(target=target, *args, **kwargs)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 
446, in fetch_image
 images.fetch_to_raw(context, image_id, target)
   File "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 171, in 
fetch_to_raw
 % {'exp': exp})
 ImageUnacceptable: Image f8dc206b-d5a1-4123-b26c-7216d03ab1e7 is unacceptable: 
Unable to convert image to raw: Image 
/var/lib/nova/instances/_base/c44b0b620ae7c6fd8111e0abb5a8d1fc39fcdf08.part is 
unacceptable: Unable to convert image to raw: Unexpected error while running 
command.
 Command: qemu-img convert -t none -O raw -f qcow2 
/var/lib/nova/instances/_base/c44b0b620ae7c6fd8111e0abb5a8d1fc39fcdf08.part 
/var/lib/nova/instances/_base/c44b0b620ae7c6fd8111e0abb5a8d1fc39fcdf08.converted
 Exit code: 1
 Stdout: u''
 Stderr: u"qemu-img: file system may not support O_DIRECT\nqemu-img: Could not 
open 
'/var/lib/nova/instances/_base/c44b0b620ae7c6fd8111e0abb5a8d1fc39fcdf08.converted':
 Could not open 
'/var/lib/nova/instances/_base/c44b0b620ae7c6fd8111e0abb5a8d1fc39fcdf08.converted':
 Invalid argument\n"

In this log, the important bit is:

file system may not support O_DIRECT

Indeed, my weird filesystem setup doesn't support this.

I tried to set [libvirt]/disk_cachemodes = file=writethrough, as per
recommendation on #openstack-nova, but it didn't fix the problem. It
looks like nova doesn't really care about this.

Because I also have a small scratch disk to do some cinder tests, and
wanted to make sure about this issue, I tried mounting
/var/lib/nova/instances on ext4, just to see if there was still the
issue. And indeed, it fixed the problem.

So, here, nova should check whether /var/lib/nova/instances really has
O_DIRECT support and use the correct options for libvirt, but it looks
like it isn't doing this properly. This IMO deserves a fix.
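
A minimal sketch of such a probe (hypothetical, Linux-only code, not what
nova currently ships): try to open a scratch file with O_DIRECT and treat
EINVAL as "not supported", which is exactly the error qemu-img runs into
above.

import errno
import os

def supports_o_direct(dirpath):
    # Can files under dirpath be opened with O_DIRECT?
    testfile = os.path.join(dirpath, '.directio.test')
    try:
        fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
        os.close(fd)
        return True
    except OSError as exc:
        if exc.errno == errno.EINVAL:
            # tmpfs, overlayfs and friends: O_DIRECT is not supported.
            return False
        raise
    finally:
        try:
            os.unlink(testfile)
        except OSError:
            pass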

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: libvirt

** Summary changed:

- Cannot boot instances on filesystem without O_DIRECT, like tpmfs
+ Cannot boot instances on filesystem without O_DIRECT support (fails on tmpfs)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1734784

Title:
  Cannot boot instances on filesystem without O_DIRECT support (fails on
  tmpfs)

Status in OpenStack Compute (nova):
  New

Bug description:
  I'm trying to (tempest) validate OpenStack Pike for Debian. So I'm
  running Pike (nova 16.0.3) on Debian Sid. My environment for running
  tempest is a Debian Live system which I just boot once, install all of
  OpenStack on, and run tempest.

  As a consequence, my filesystem is a bit weird. It's a single root
  partition using overlayfs, which has its read/write volume on
  tmpfs. This worked well when validating Newton, but it seems there's a
  regression with Pike. Here's what happens when trying to boot an
  instance:

   Traceback (most recent call last):
 File 

[Yahoo-eng-team] [Bug 1623838] [NEW] Nova requires netaddr >= 0.7.12 which is not enough

2016-09-15 Thread Thomas Goirand
Public bug reported:

In this commit:
https://github.com/openstack/nova/commit/4647f418afb9ced223c089f9d49cd686eccae9e2

nova starts using the modified_eui64() function, which isn't available in
netaddr 0.7.12. It is available in version 0.7.18, which is what
upper-constraints.txt has. I haven't investigated (yet) when the new
method was introduced in netaddr, but in all reasonableness, I'd
strongly suggest pushing for an upgrade of global-requirements.txt to
0.7.18 (which is what we've been gating on for a long time).

At the packaging level, it doesn't seem to be a big problem, as 0.7.18
is what Xenial has.

The other solution would be to remove the call to modified_eui64() in
Nova, but that looks like a riskier option to me, so close to the
release.
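
For reference, the kind of usage that introduces the requirement looks like
this; it works with netaddr 0.7.18 (the upper-constraints version), while on
0.7.12 the method is simply missing, so it presumably fails with an
AttributeError:

import netaddr

mac = netaddr.EUI('00:16:3e:aa:bb:cc')
# EUI-64 identifier with the universal/local bit flipped, as used for
# IPv6 address generation.
print(mac.modified_eui64())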

** Affects: nova
 Importance: High
 Status: Confirmed


** Tags: newton-backport-potential testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623838

Title:
  Nova requires netaddr >= 0.7.12 which is not enough

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  In this commit:
  
https://github.com/openstack/nova/commit/4647f418afb9ced223c089f9d49cd686eccae9e2

  nova starts using the modified_eui64() function, which isn't available
  in netaddr 0.7.12. It is available in version 0.7.18, which is what
  upper-constraints.txt has. I haven't investigated (yet) when the
  new method was introduced in netaddr, but in all reasonableness, I'd
  strongly suggest pushing for an upgrade of global-requirements.txt to
  0.7.18 (which is what we've been gating on for a long time).

  At the packaging level, it doesn't seem to be a big problem, as 0.7.18
  is what Xenial has.

  The other solution would be to remove the call to modified_eui64() in
  Nova, but that looks like a riskier option to me, so close to the
  release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1623838/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1595440] [NEW] neutron-fwaas ships /usr/bin/neutron-l3-agent a 2nd time

2016-06-23 Thread Thomas Goirand
Public bug reported:

In distributions, it's not possible to ship the same daemon in /usr/bin
twice, from 2 different packages. This means that /usr/bin/neutron-l3-agent
in neutron-fwaas is clashing with /usr/bin/neutron-l3-agent from python-
neutron. In Debian, we will rename /usr/bin/neutron-l3-agent to /usr/bin
/neutron-fwaas-l3-agent. Please do the same upstream.

A full explanation of why this is a problem is available here:
http://lists.openstack.org/pipermail/openstack-dev/2016-June/097956.html

I'm available on IRC if you want to discuss the problem as well.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1595440

Title:
  neutron-fwaas ships /usr/bin/neutron-l3-agent a 2nd time

Status in neutron:
  New

Bug description:
  In distributions, it's not possible to ship the same daemon in /usr/bin
  twice, from 2 different packages. This means that
  /usr/bin/neutron-l3-agent in neutron-fwaas is clashing with
  /usr/bin/neutron-l3-agent from python-neutron. In Debian, we will rename
  /usr/bin/neutron-l3-agent to /usr/bin/neutron-fwaas-l3-agent. Please do
  the same upstream.

  A full explanation of why this is a problem is available here:
  http://lists.openstack.org/pipermail/openstack-dev/2016-June/097956.html

  I'm available on IRC if you want to discuss the problem as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1595440/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592535] [NEW] wrong default value for $pybasedir

2016-06-14 Thread Thomas Goirand
Public bug reported:

In nova/conf/paths.py, I can read:

path_opts = [
    cfg.StrOpt('pybasedir',
               default=os.path.abspath(os.path.join(os.path.dirname(__file__),
                                                    '../../')),
               help='Directory where the nova python module is installed'),

This means that wherever the nova source code happens to be installed when
nova.conf is generated becomes the default value for pybasedir. This is
almost guaranteed to be a wrong value. For example, if building
from /home/zigo/sources/mitaka/nova/nova, we end up with:

# Directory where the nova python module is installed (string value)
#pybasedir = 
/home/zigo/sources/openstack/mitaka/nova/build-area/nova-13.0.0/debian/tmp/usr/lib/python2.7/dist-packages

instead of:
#pybasedir = /usr/lib/python2.7/dist-packages

Unfortunately, this ends up in the package.
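
One way to keep the runtime default while making the generated sample file
deterministic would be oslo.config's sample_default; a sketch only,
assuming a reasonably recent oslo.config:

import os

from oslo_config import cfg

path_opts = [
    cfg.StrOpt('pybasedir',
               default=os.path.abspath(
                   os.path.join(os.path.dirname(__file__), '../../')),
               # Only used by the sample config generator, so the generated
               # file no longer embeds the build machine's path.
               sample_default='<Path where the nova module is installed>',
               help='Directory where the nova python module is installed'),
]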

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1592535

Title:
  wrong default value for $pybasedir

Status in OpenStack Compute (nova):
  New

Bug description:
  In nova/conf/paths.py, I can read:

  path_opts = [
      cfg.StrOpt('pybasedir',
                 default=os.path.abspath(
                     os.path.join(os.path.dirname(__file__), '../../')),
                 help='Directory where the nova python module is installed'),

  This means that wherever the nova source code happens to be installed
  when nova.conf is generated becomes the default value for pybasedir.
  This is almost guaranteed to be a wrong value. For example, if building
  from /home/zigo/sources/mitaka/nova/nova, we end up with:

  # Directory where the nova python module is installed (string value)
  #pybasedir = 
/home/zigo/sources/openstack/mitaka/nova/build-area/nova-13.0.0/debian/tmp/usr/lib/python2.7/dist-packages

  instead of:
  #pybasedir = /usr/lib/python2.7/dist-packages

  Unfortunately, this ends up in the package.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1592535/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591281] [NEW] Missing test-requirement: testresources

2016-06-10 Thread Thomas Goirand
Public bug reported:

Keystone Mitaka b1 fails to build because of a missing testresources in
test-requirements.txt.

** Affects: keystone
 Importance: Undecided
 Assignee: Thomas Goirand (thomas-goirand)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1591281

Title:
  Missing test-requirement: testresources

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  Keystone Mitaka b1 fails to build because of a missing testresources
  in test-requirements.txt.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1591281/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566972] Re: Neutron unit tests are failing with SQLAlchemy 1.0.11, 1.0.12 fixes the issue

2016-06-07 Thread Thomas Goirand
** Changed in: neutron
   Status: Expired => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566972

Title:
  Neutron unit tests are failing with SQLAlchemy 1.0.11, 1.0.12 fixes
  the issue

Status in neutron:
  Confirmed

Bug description:
  When running the unit tests (when building the Debian package for
  Neutron Mitaka RC3), Neutron fails more than 500 unit tests. Upgrading
  from SQLAlchemy 1.0.11 to 1.0.12 fixed the issue.

  Example of failed run:
  https://mitaka-jessie.pkgs.mirantis.com/job/neutron/37/consoleFull

  Moving forward, upgrading the global-requirements.txt to SQLAlchemy
  1.0.12 may not be possible, so probably it'd be nice to fix the issue
  in Neutron.

  FYI, in Debian, I don't really mind, as Debian Sid has version 1.0.12,
  and that's where I upload. For the (non-official) backports to Debian
  Jessie and Ubuntu Trusty, I did a backport of 1.0.12, and that is
  fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566972/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566972] [NEW] Neutron unit tests are failing with SQLAlchemy 1.0.11, 1.0.12 fixes the issue

2016-04-06 Thread Thomas Goirand
Public bug reported:

When running the unit tests (when building the Debian package for
Neutron Mitaka RC3), Neutron fails more than 500 unit tests. Upgrading
from SQLAlchemy 1.0.11 to 1.0.12 fixed the issue.

Example of failed run:
https://mitaka-jessie.pkgs.mirantis.com/job/neutron/37/consoleFull

Moving forward, upgrading the global-requirements.txt to SQLAlchemy
1.0.12 may not be possible, so probably it'd be nice to fix the issue in
Neutron.

FYI, in Debian, I don't really mind, as Debian Sid has version 1.0.12,
and that's where I upload. For the (non-official) backports to Debian
Jessie and Ubuntu Trusty, I did a backport of 1.0.12, and that is fixed.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566972

Title:
  Neutron unit tests are failing with SQLAlchemy 1.0.11, 1.0.12 fixes
  the issue

Status in neutron:
  New

Bug description:
  When running the unit tests (when building the Debian package for
  Neutron Mitaka RC3), Neutron fails more than 500 unit tests. Upgrading
  from SQLAlchemy 1.0.11 to 1.0.12 fixed the issue.

  Example of failed run:
  https://mitaka-jessie.pkgs.mirantis.com/job/neutron/37/consoleFull

  Moving forward, upgrading the global-requirements.txt to SQLAlchemy
  1.0.12 may not be possible, so probably it'd be nice to fix the issue
  in Neutron.

  FYI, in Debian, I don't really mind, as Debian Sid has version 1.0.12,
  and that's where I upload. For the (non-official) backports to Debian
  Jessie and Ubuntu Trusty, I did a backport of 1.0.12, and that is
  fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566972/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548433] Re: neutron returns objects other than oslo_config.cfg.Opt instances from list_opts

2016-03-14 Thread Thomas Goirand
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548433

Title:
  neutron returns objects other than oslo_config.cfg.Opt instances from
  list_opts

Status in keystoneauth:
  Incomplete
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  The neutron function for listing options for use with the
  configuration generator returns things that are not compliant with the
  oslo_config.cfg.Opt class API. At the very least this includes the
  options from keystoneauth1, but I haven't looked to find if there are
  others.

  We'll work around this for now in the configuration generator code,
  but in the future we will more strictly enforce the API compliance by
  refusing to generate a configuration file or by leaving options out of
  the output.

  The change blocked by this issue is:
  https://review.openstack.org/#/c/282435/5

  One failure log showing the issue is:
  http://logs.openstack.org/35/282435/5/check/gate-tempest-dsvm-neutron-
  src-oslo.config/77044c6/logs/devstacklog.txt.gz

  The neutron code triggering the issue is in:
  http://git.openstack.org/cgit/openstack/neutron/tree/neutron/opts.py#n279

  The best solution would be to fix keystoneauth to support option
  discovery natively using proper oslo.config Opts.
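
  For reference, the config generator expects every entry returned by a
  list_opts() entry point to be a real oslo_config.cfg.Opt. A minimal
  sketch of a compliant hook (illustrative names only, not actual neutron
  options):

  from oslo_config import cfg

  _EXAMPLE_OPTS = [
      cfg.StrOpt('example_option',
                 default='value',
                 help='Illustrative option only.'),
  ]

  def list_opts():
      # The generator wants an iterable of (group, [Opt, ...]) pairs, where
      # every element really is an oslo_config.cfg.Opt instance.
      return [('example_group', _EXAMPLE_OPTS)]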

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystoneauth/+bug/1548433/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552897] [NEW] Unit test failure when building debian package for Mitaka b3

2016-03-03 Thread Thomas Goirand
Public bug reported:

When building the Debian package of Nova for Mitaka b3 (i.e. Nova
13.0.0~b3), I get the below unit test failures. Please help me fix
this. I'm available on IRC if you need more details and a way to
reproduce (but basically, try to build the package in Sid + Experimental
using the sources from git clone
git://git.debian.org/git/openstack/nova.git).

==
FAIL: nova.tests.unit.test_cache.TestOsloCache.test_get_client
nova.tests.unit.test_cache.TestOsloCache.test_get_client
--
_StringException: Empty attachments:
  pythonlogging:''
  stderr
  stdout

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/mock/mock.py", line 1305, in patched
return func(*args, **keywargs)
  File "nova/tests/unit/test_cache.py", line 64, in test_get_client
expiration_time=60)],
  File "/usr/lib/python2.7/dist-packages/mock/mock.py", line 969, in 
assert_has_calls
), cause)
  File "/usr/lib/python2.7/dist-packages/six.py", line 718, in raise_from
raise value
AssertionError: Calls not found.
Expected: [call('oslo_cache.dict', arguments={'expiration_time': 60}, 
expiration_time=60), call('dogpile.cache.memcached', arguments={'url': 
['localhost:11211']}, expiration_time=60), call('dogpile.cache.null', 
_config_argument_dict=, _config_prefix='cache.oslo.arguments.', 
expiration_time=60, wrap=None), call('oslo_cache.dict', 
arguments={'expiration_time': 60}, expiration_time=60)]
Actual: [call('oslo_cache.dict', arguments={'expiration_time': 60}, 
expiration_time=60),
 call('dogpile.cache.memcached', arguments={'url': ['localhost:11211']}, 
expiration_time=60),
 call('dogpile.cache.null', 
_config_argument_dict={'cache.oslo.arguments.pool_maxsize': 10, 
'cache.oslo.arguments.pool_unused_timeout': 60, 'cache.oslo.arguments.url': 
['localhost:11211'], 'cache.oslo.arguments.socket_timeout': 3, 
'cache.oslo.expiration_time': 60, 'cache.oslo.arguments.dead_retry': 300, 
'cache.oslo.arguments.pool_connection_get_timeout': 10, 'cache.oslo.backend': 
'dogpile.cache.null'}, _config_prefix='cache.oslo.arguments.', 
expiration_time=60),
 call('oslo_cache.dict', arguments={'expiration_time': 60}, expiration_time=60)]

==
FAIL: nova.tests.unit.test_cache.TestOsloCache.test_get_memcached_client
nova.tests.unit.test_cache.TestOsloCache.test_get_memcached_client
--
_StringException: Empty attachments:
  pythonlogging:''
  stderr
  stdout

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/mock/mock.py", line 1305, in patched
return func(*args, **keywargs)
  File "nova/tests/unit/test_cache.py", line 120, in test_get_memcached_client
expiration_time=60, wrap=None)]
  File "/usr/lib/python2.7/dist-packages/mock/mock.py", line 969, in 
assert_has_calls
), cause)
  File "/usr/lib/python2.7/dist-packages/six.py", line 718, in raise_from
raise value
AssertionError: Calls not found.
Expected: [call('dogpile.cache.memcached', arguments={'url': 
['localhost:11211']}, expiration_time=60), call('dogpile.cache.memcached', 
arguments={'url': ['localhost:11211']}, expiration_time=60), 
call('dogpile.cache.null', _config_argument_dict=, 
_config_prefix='cache.oslo.arguments.', expiration_time=60, wrap=None)]
Actual: [call('dogpile.cache.memcached', arguments={'url': 
['localhost:11211']}, expiration_time=60),
 call('dogpile.cache.memcached', arguments={'url': ['localhost:11211']}, 
expiration_time=60),
 call('dogpile.cache.null', 
_config_argument_dict={'cache.oslo.arguments.pool_maxsize': 10, 
'cache.oslo.arguments.pool_unused_timeout': 60, 'cache.oslo.arguments.url': 
['localhost:11211'], 'cache.oslo.arguments.socket_timeout': 3, 
'cache.oslo.expiration_time': 60, 'cache.oslo.arguments.dead_retry': 300, 
'cache.oslo.arguments.pool_connection_get_timeout': 10, 'cache.oslo.backend': 
'dogpile.cache.null'}, _config_prefix='cache.oslo.arguments.', 
expiration_time=60)]

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1552897

Title:
  Unit test failure when building Debian package for Mitaka b3

Status in OpenStack Compute (nova):
  New

Bug description:
  When building the Debian package of Nova for Mitaka b3 (ie: Nova
  13.0.0~b3), I get the below unit test failures. Please help me to fix
  this. I'm available on IRC if you need more details and a way to
  reproduce (but basically, try to build the package in Sid +
  Experimental using the sources from git clone
  git://git.debian.org/git/openstack/nova.git).

  ==
  FAIL: 

[Yahoo-eng-team] [Bug 1542855] [NEW] Glance generates a non-reproducible config file

2016-02-07 Thread Thomas Goirand
Public bug reported:

When generating glance-api.conf and glance-registry.conf, the workers
directive, even though commented, is populated with a value which
depends on the number of CPU (or core?) of the machine generating the
file. This means that the build of Glance isn't reproducible (ie:
building the package on 2 different machine will not produce byte for
byte identical packages).

If you didn't hear about the Debian reproducible build effort, please read 
these wiki entries:
https://wiki.debian.org/ReproducibleBuilds
https://wiki.debian.org/ReproducibleBuilds/About

and please consider removing non-deterministic bits when generating the
configuration files.
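
To illustrate the non-determinism (hypothetical sample output; per the
above, the number simply tracks the CPU count of the build machine):

    # sample generated on a 4-core builder
    #workers = 4

    # sample generated on an 8-core builder
    #workers = 8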

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1542855

Title:
  Glance generates a non-reproducible config file

Status in Glance:
  New

Bug description:
  When generating glance-api.conf and glance-registry.conf, the workers
  directive, even though commented, is populated with a value which
  depends on the number of CPU (or core?) of the machine generating the
  file. This means that the build of Glance isn't reproducible (ie:
  building the package on 2 different machine will not produce byte for
  byte identical packages).

  If you didn't hear about the Debian reproducible build effort, please read 
these wiki entries:
  https://wiki.debian.org/ReproducibleBuilds
  https://wiki.debian.org/ReproducibleBuilds/About

  and please consider removing non-deterministic bits when generating
  the configuration files.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1542855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537044] [NEW] Unit test failure when building Debian package for Mitaka b2

2016-01-22 Thread Thomas Goirand
Public bug reported:

Hi,

I have 3 unit test failures when building the Glance Mitaka b2 package,
as per below. Please help me to fix them.

==
FAIL: glance.tests.functional.test_reload.TestReload.test_reload
--
Traceback (most recent call last):
testtools.testresult.real._StringException: traceback-1: {{{
Traceback (most recent call last):
  File "glance/tests/functional/test_reload.py", line 50, in tearDown
self.stop_servers()
  File "glance/tests/functional/__init__.py", line 899, in stop_servers
self.stop_server(self.scrubber_daemon, 'Scrubber daemon')
  File "glance/tests/functional/__init__.py", line 884, in stop_server
server.stop()
  File "glance/tests/functional/__init__.py", line 257, in stop
raise Exception('why is this being called? %s' % self.server_name)
Exception: why is this being called? scrubber
}}}
   
Traceback (most recent call last):
  File "glance/tests/functional/test_reload.py", line 113, in test_reload
self.start_servers(fork_socket=False, **vars(self))
  File "glance/tests/functional/__init__.py", line 804, in start_servers
self.start_with_retry(self.api_server, 'api_port', 3, **kwargs)
  File "glance/tests/functional/__init__.py", line 774, in start_with_retry
launch_msg = self.wait_for_servers([server], expect_launch)
  File "glance/tests/functional/__init__.py", line 866, in wait_for_servers
execute(cmd, raise_error=False, expect_exit=False)
  File "glance/tests/utils.py", line 315, in execute  
env=env)
  File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1335, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory


==
FAIL: 
glance.tests.functional.v1.test_multiprocessing.TestMultiprocessing.test_interrupt_avoids_respawn_storm
--
Traceback (most recent call last):
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "glance/tests/functional/v1/test_multiprocessing.py", line 61, in 
test_interrupt_avoids_respawn_storm
children = self._get_children()
  File "glance/tests/functional/v1/test_multiprocessing.py", line 50, in 
_get_children
children = process.get_children()
AttributeError: 'Process' object has no attribute 'get_children'


==
FAIL: 
glance.tests.unit.common.test_wsgi_ipv6.IPv6ServerTest.test_evnetlet_no_dnspython
--
Traceback (most recent call last):
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "glance/tests/unit/common/test_wsgi_ipv6.py", line 61, in 
test_evnetlet_no_dnspython
self.assertEqual(0, rc)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 350, in 
assertEqual
self.assertThat(observed, matcher, message)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 435, in 
assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 0 != 1
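
For the second failure above (test_interrupt_avoids_respawn_storm), the
AttributeError comes from the psutil API: newer psutil releases renamed
Process.get_children() to Process.children(). A version-agnostic helper
would look roughly like this (a sketch, not the actual upstream fix):

    def _get_children(process):
        # "process" is a psutil.Process instance.
        try:
            return process.children()       # psutil >= 2.0
        except AttributeError:
            return process.get_children()   # older psutil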

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1537044

Title:
  Unit test failure when building Debian package for Mitaka b2

Status in Glance:
  New

Bug description:
  Hi,

  I have 3 unit test failures when building the Glance Mitaka b2
  package, as per below. Please help me to fix them.

  ==
  FAIL: glance.tests.functional.test_reload.TestReload.test_reload
  --
  Traceback (most recent call last):
  testtools.testresult.real._StringException: traceback-1: {{{
  Traceback (most recent call last):
File "glance/tests/functional/test_reload.py", line 50, in tearDown
  self.stop_servers()
File "glance/tests/functional/__init__.py", line 899, in stop_servers
  self.stop_server(self.scrubber_daemon, 'Scrubber daemon')
File "glance/tests/functional/__init__.py", line 884, in stop_server
  server.stop()
File "glance/tests/functional/__init__.py", line 257, in stop
  raise Exception('why is this being called? %s' % self.server_name)
  Exception: why is this being called? scrubber
  }}}
 
  Traceback (most recent call last):
File "glance/tests/functional/test_reload.py", line 113, in test_reload
  self.start_servers(fork_socket=False, **vars(self))
File "glance/tests/functional/__init__.py", line 804, in start_servers
  self.start_with_retry(self.api_server, 'api_port', 3, 

[Yahoo-eng-team] [Bug 1536437] [NEW] 'module' object has no attribute 'moved_function' failure when building Debian package for Mitaka b2

2016-01-20 Thread Thomas Goirand
Public bug reported:

When building the Debian package of Neutron for the Mitaka b2 release, I
get the below unit test failures. All other tests are ok (6451 tests).
Please help me to fix these last 3.


==
FAIL: 
unittest2.loader._FailedTest.neutron.tests.unit.agent.linux.test_bridge_lib
unittest2.loader._FailedTest.neutron.tests.unit.agent.linux.test_bridge_lib
--
_StringException: Traceback (most recent call last):
ImportError: Failed to import test module: 
neutron.tests.unit.agent.linux.test_bridge_lib
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/unittest2/loader.py", line 456, in 
_find_test_path
module = self._get_module_from_name(name)
  File "/usr/lib/python2.7/dist-packages/unittest2/loader.py", line 395, in 
_get_module_from_name
__import__(name)
  File "neutron/tests/unit/agent/linux/test_bridge_lib.py", line 20, in 
from neutron.agent.linux import bridge_lib
  File "neutron/agent/linux/bridge_lib.py", line 23, in 
from neutron.i18n import _LE
  File "neutron/i18n.py", line 25, in 
_ = moves.moved_function(neutron._i18n._, '_', __name__, message=message)
AttributeError: 'module' object has no attribute 'moved_function'


==
FAIL: unittest2.loader._FailedTest.neutron.tests.unit.cmd.server
unittest2.loader._FailedTest.neutron.tests.unit.cmd.server
--
_StringException: Traceback (most recent call last):
ImportError: Failed to import test module: neutron.tests.unit.cmd.server
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/unittest2/loader.py", line 490, in 
_find_test_path
package = self._get_module_from_name(name)
  File "/usr/lib/python2.7/dist-packages/unittest2/loader.py", line 395, in 
_get_module_from_name
__import__(name)
  File "neutron/tests/unit/cmd/server/__init__.py", line 16, in 
from neutron.cmd.eventlet import server
  File "neutron/cmd/eventlet/server/__init__.py", line 17, in 
from neutron.server import wsgi_pecan
  File "neutron/server/wsgi_pecan.py", line 23, in 
from neutron.pecan_wsgi import app as pecan_app 
  File "neutron/pecan_wsgi/app.py", line 23, in 
from neutron.pecan_wsgi import hooks
  File "neutron/pecan_wsgi/hooks/__init__.py", line 23, in 
from neutron.pecan_wsgi.hooks import translation
  File "neutron/pecan_wsgi/hooks/translation.py", line 22, in 
from neutron.i18n import _LE
  File "neutron/i18n.py", line 25, in 
_ = moves.moved_function(neutron._i18n._, '_', __name__, message=message)
AttributeError: 'module' object has no attribute 'moved_function'

==
FAIL: 
unittest2.loader._FailedTest.neutron.tests.unit.plugins.ml2.drivers.linuxbridge.agent.test_linuxbridge_neutron_agent
unittest2.loader._FailedTest.neutron.tests.unit.plugins.ml2.drivers.linuxbridge.agent.test_linuxbridge_neutron_agent
--
_StringException: Traceback (most recent call last):
ImportError: Failed to import test module: 
neutron.tests.unit.plugins.ml2.drivers.linuxbridge.agent.test_linuxbridge_neutron_agent
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/unittest2/loader.py", line 456, in 
_find_test_path
module = self._get_module_from_name(name)
  File "/usr/lib/python2.7/dist-packages/unittest2/loader.py", line 395, in 
_get_module_from_name
__import__(name)
  File 
"neutron/tests/unit/plugins/ml2/drivers/linuxbridge/agent/test_linuxbridge_neutron_agent.py",
 line 21, in 
from neutron.agent.linux import bridge_lib
  File "neutron/agent/linux/bridge_lib.py", line 23, in 
from neutron.i18n import _LE
  File "neutron/i18n.py", line 25, in 
_ = moves.moved_function(neutron._i18n._, '_', __name__, message=message)
AttributeError: 'module' object has no attribute 'moved_function'
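
All three failures point at the same thing: the moves module used in
neutron/i18n.py has no moved_function. Assuming that module comes from
debtcollector, a quick check of the installed release looks like this
(sketch):

    import pkg_resources
    from debtcollector import moves

    print(pkg_resources.get_distribution('debtcollector').version)
    print(hasattr(moves, 'moved_function'))  # False means the release is too old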

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1536437

Title:
  'module' object has no attribute 'moved_function' failure when
  building Debian package for Mitaka b2

Status in neutron:
  New

Bug description:
  When building the Debian package of Neutron for the Mitaka b2 release,
  I get the below unit test failures. All other tests are ok (6451
  tests). Please help me to fix these last 3.

  
  ==
  FAIL: 
unittest2.loader._FailedTest.neutron.tests.unit.agent.linux.test_bridge_lib
  unittest2.loader._FailedTest.neutron.tests.unit.agent.linux.test_bridge_lib
  

[Yahoo-eng-team] [Bug 1534553] [NEW] [Django 1.9] uses TemplateDoesNotExist

2016-01-15 Thread Thomas Goirand
Public bug reported:

Horizon uses the TemplateDoesNotExist exception, which was removed in
Django 1.9.
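
One compatibility approach (a sketch, not necessarily the actual Horizon
patch) is to import the exception from the public django.template package,
which still exposes it on Django 1.9, instead of a removed internal
location:

    from django.template import TemplateDoesNotExist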

** Affects: horizon
 Importance: Undecided
 Assignee: Thomas Goirand (thomas-goirand)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1534553

Title:
  [Django 1.9] uses TemplateDoesNotExist

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Horizon uses the TemplateDoesNotExist exception, which was removed in
  Django 1.9.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1534553/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534555] [NEW] [Django 1.9] django.forms.util renamed with an s

2016-01-15 Thread Thomas Goirand
Public bug reported:

The module django.forms.util was renamed to django.forms.utils (note the
added s) in Django 1.9. Horizon must follow.
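
A compatibility import keeps both old and new Django working (sketch;
ErrorList is just one example of a name living in this module):

    try:
        from django.forms.utils import ErrorList   # Django >= 1.7
    except ImportError:
        from django.forms.util import ErrorList    # older Django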

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1534555

Title:
  [Django 1.9] django.forms.util renamed with an s

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The module django.forms.util was renamed to django.forms.utils (note
  the added s) in Django 1.9. Horizon must follow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1534555/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534522] [NEW] [django 1.9] uses django.utils.importlib

2016-01-15 Thread Thomas Goirand
Public bug reported:

Horizon still uses django.utils.importlib which is removed from Django
1.9. We should use:

from importlib import import_module

instead of:

from django.utils.importlib import import_module
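
The stdlib version is available on every Python release Horizon supports,
so the swap is mechanical (sketch; the target module is just an example):

    from importlib import import_module

    tables = import_module('horizon.tables')  # any dotted path works the same way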

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1534522

Title:
  [django 1.9] uses django.utils.importlib

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Horizon still uses django.utils.importlib which is removed from Django
  1.9. We should use:

  from importlib import import_module

  instead of:

  from django.utils.importlib import import_module

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1534522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534526] [NEW] [Django 1.9] Horizon uses django.utils.log.NullHandler

2016-01-15 Thread Thomas Goirand
Public bug reported:

We should use logging.NullHandler, as this is removed from Django 1.9
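
In a Django LOGGING dict this is a one-line change (sketch):

    LOGGING = {
        'version': 1,
        'handlers': {
            'null': {
                # 'class': 'django.utils.log.NullHandler',  # gone in Django 1.9
                'class': 'logging.NullHandler',
            },
        },
    }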

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1534526

Title:
  [Django 1.9] Horizon uses django.utils.log.NullHandler

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We should use logging.NullHandler, as this is removed from Django 1.9

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1534526/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241027] Re: Intermittent Selenium unit test timeout error

2015-11-20 Thread Thomas Goirand
I've reopened the issue, as there's no sign that it was fixed.

** Changed in: horizon
   Status: Expired => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1241027

Title:
  Intermittent Selenium unit test timeout error

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  I have the following error *SOMETIMES* (eg: sometimes it does work,
  sometimes it doesn't):

  This is surprising, because the python-selenium, which is non-free,
  isn't installed in my environment, and we were supposed to have a
  patch to not use it if it was detected it wasn't there.

  Since there's a 2-second timeout, it probably happens when my server
  is busy. I would suggest first trying to increase this timeout to
  something like 5 seconds or something similar...

  ERROR: test suite for 
  --
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 227, in run
  self.tearDown()
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 350, in
  tearDown
  self.teardownContext(ancestor)
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 366, in
  teardownContext
  try_run(context, names)
File "/usr/lib/python2.7/dist-packages/nose/util.py", line 469, in try_run
  return func()
File
  
"/home/zigo/sources/openstack/havana/horizon/build-area/horizon-2013.2~rc3/horizon/test/helpers.py",
  line 179, in tearDownClass
  super(SeleniumTestCase, cls).tearDownClass()
File "/usr/lib/python2.7/dist-packages/django/test/testcases.py", line
  1170, in tearDownClass
  cls.server_thread.join()
File "/usr/lib/python2.7/dist-packages/django/test/testcases.py", line
  1094, in join
  self.httpd.shutdown()
File "/usr/lib/python2.7/dist-packages/django/test/testcases.py", line
  984, in shutdown
  "Failed to shutdown the live test server in 2 seconds. The "
  RuntimeError: Failed to shutdown the live test server in 2 seconds. The
  server might be stuck or generating a slow response.

  The same way, there's this one, which must be related (or shall I say,
  due to the previous error?):

  ERROR: test suite for 
  --
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 208, in run
  self.setUp()
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 291, in setUp
  self.setupContext(ancestor)
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 314, in
  setupContext
  try_run(context, names)
File "/usr/lib/python2.7/dist-packages/nose/util.py", line 469, in try_run
  return func()
File
  
"/home/zigo/sources/openstack/havana/horizon/build-area/horizon-2013.2~rc3/horizon/test/helpers.py",
  line 173, in setUpClass
  super(SeleniumTestCase, cls).setUpClass()
File "/usr/lib/python2.7/dist-packages/django/test/testcases.py", line
  1160, in setUpClass
  raise cls.server_thread.error
  WSGIServerException: [Errno 98] Address already in use

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1241027/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1512369] [NEW] glance should declare a test-requirements.txt dependency on swiftclient

2015-11-02 Thread Thomas Goirand
Public bug reported:

As glance is building its config, it needs swiftclient. And it isn't
defined in test-requirements.txt, meaning that some options may be
missing for swift. I've added python-swiftclient in the Build-Depends-
Indep: of the Debian package, though it should IMO also go into the
test-requirements.txt of the upstream project.
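
Concretely, the proposal is a single line in test-requirements.txt (left
unversioned here; the real minimum would come from global-requirements):

    python-swiftclient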

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1512369

Title:
  glance should declare a test-requirements.txt dependency on swiftclient

Status in Glance:
  New

Bug description:
  As glance is building its config, it needs swiftclient. And it isn't
  defined in test-requirements.txt, meaning that some options may be
  missing for swift. I've added python-swiftclient in the Build-Depends-
  Indep: of the Debian package, though it should IMO also go into the
  test-requirements.txt of the upstream project.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1512369/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501662] [NEW] filesystem_store_datadir doesn't have the default it pretends

2015-10-01 Thread Thomas Goirand
Public bug reported:

Testing Glance Liberty RC1 on top of Jessie, I have found out that
if I don't write a value for filesystem_store_datadir in both glance-
api.conf and glance-registry.conf, Glance simply doesn't work, despite
what is written here:

http://docs.openstack.org/developer/glance/configuring.html#configuring-
the-filesystem-storage-backend

So I would suggest to either fix the doc, or better, make it so that the
filesystem_store_datadir really defaults to a sane value, which is
/var/lib/glance/images in the case of a distribution deployment. It is
my understanding that devstack anyway sets a correct value there, so we
won't break the gate by fixing the default value to something that works.
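
In the meantime the workaround is to set the value explicitly in both
files (sketch; in Liberty the option lives in the [glance_store] section):

    [glance_store]
    filesystem_store_datadir = /var/lib/glance/images/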

Hoping this helps.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1501662

Title:
  filesystem_store_datadir doesn't have the default it pretends

Status in Glance:
  New

Bug description:
  Testing Glance Liberty RC1 on top of Jessie, I have found out
  that if I don't write a value for filesystem_store_datadir in both
  glance-api.conf and glance-registry.conf, Glance simply doesn't work,
  despite what is written here:

  http://docs.openstack.org/developer/glance/configuring.html
  #configuring-the-filesystem-storage-backend

  So I would suggest to either fix the doc, or better, make it so that
  the filesystem_store_datadir really defaults to a sane value, which is
  /var/lib/glance/images in the case of a distribution deployment. It is
  my understanding that devstack anyway sets a correct value there, so
  we won't break the gate by fixing the default value to something that
  works.

  Hoping this helps.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1501662/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500830] [NEW] setting COMPRESS_ENABLED = False and restarting Apache leads to every xstatic library being NOT FOUND

2015-09-29 Thread Thomas Goirand
Public bug reported:

Hi,

Trying to see if it is possible to debug Horizon in production, one of
my colleague tried to disable compress. Then the result isn't nice at
all. Setting COMPRESS_ENABLED = False and restarting Apache leads to
every xstatic library being NOT FOUND, and loading of pages taking
forever.
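
For reference, the kind of local_settings.py combination that was tried
looks roughly like this (a sketch only; DEBUG and COMPRESS_OFFLINE are my
additions for context, not something the report depends on):

    DEBUG = True                # typical companion setting while debugging
    COMPRESS_ENABLED = False    # serve the individual xstatic files
    COMPRESS_OFFLINE = False    # don't rely on a pre-generated manifest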

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1500830

Title:
  setting COMPRESS_ENABLED = False and restarting Apache leads to every
  xstatic library being NOT FOUND

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Hi,

  Trying to see if it is possible to debug Horizon in production, one of
  my colleague tried to disable compress. Then the result isn't nice at
  all. Setting COMPRESS_ENABLED = False and restarting Apache leads to
  every xstatic library being NOT FOUND, and loading of pages taking
  forever.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1500830/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500361] [NEW] Generated config files are completely wrong

2015-09-28 Thread Thomas Goirand
Public bug reported:

The files generated using oslo-config-generator are completely wrong.
For example, they are missing [keystone_authtoken] and many other
sections. This shows in the example config in git (ie: etc/glance-api.conf
in Glance's git repo).

I believe the generator's config files are missing --namespace
keystonemiddleware.auth_token (maybe instead of
keystoneclient.middleware.auth_token).
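
For comparison, a generator config that pulls in the middleware options
would look roughly like this (sketch; the exact namespace list for Glance
may differ):

    [DEFAULT]
    output_file = etc/glance-api.conf.sample
    wrap_width = 80
    namespace = glance.api
    namespace = glance.store
    namespace = keystonemiddleware.auth_token
    namespace = oslo.log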

IMO, this is a critical issue, which should be addressed with highest
priority. This blocks me from testing Liberty rc1 in Debian.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1500361

Title:
  Generated config files are completely wrong

Status in Glance:
  New

Bug description:
  The files generated using oslo-config-generator are completely wrong.
  For example, they are missing [keystone_authtoken] and many other
  sections. This shows in the example config in git (ie:
  etc/glance-api.conf in Glance's git repo).

  I believe the generator's config files are missing --namespace
  keystonemiddleware.auth_token (maybe instead of
  keystoneclient.middleware.auth_token).

  IMO, this is a critical issue, which should be addressed with highest
  priority. This blocks me from testing Liberty rc1 in Debian.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1500361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498403] [NEW] fatal python dump if browsing /project/access_and_security/ without nova installed

2015-09-22 Thread Thomas Goirand
Public bug reported:

Hi there!

I perfectly know that Nova is a required component for Horizon, however,
for all pages, if only Keystone is installed (ie: Nova isn't), there's
only a RED error message on the top-right of the web page, saying that
compute isn't installed. However, going to /project/access_and_security/
generates a python dump:

Traceback:
File "/usr/lib/python2.7/dist-packages/django/core/handlers/base.py" in 
get_response
  111. response = wrapped_callback(request, *callback_args, 
**callback_kwargs)
File "/usr/lib/python2.7/dist-packages/horizon/decorators.py" in dec
  36. return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/horizon/decorators.py" in dec
  52. return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/horizon/decorators.py" in dec
  36. return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/django/views/generic/base.py" in view
  69. return self.dispatch(request, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/django/views/generic/base.py" in dispatch
  87. return handler(request, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/horizon/tabs/views.py" in get
  127. self.load_tabs()
File "/usr/lib/python2.7/dist-packages/horizon/tabs/views.py" in load_tabs
  97. tab_group = self.get_tabs(self.request, **self.kwargs)
File "/usr/lib/python2.7/dist-packages/horizon/tabs/views.py" in get_tabs
  44. self._tab_group = self.tab_group_class(request, **kwargs)
File "/usr/lib/python2.7/dist-packages/horizon/tabs/base.py" in __init__
  110. tab_instances.append((tab.slug, tab(self, request)))
File "/usr/lib/python2.7/dist-packages/horizon/tabs/base.py" in __init__
  422. super(TableTab, self).__init__(tab_group, request)
File "/usr/lib/python2.7/dist-packages/horizon/tabs/base.py" in __init__
  272. self._allowed = self.allowed(request) and (
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/access_and_security/tabs.py"
 in allowed
  123. return network.floating_ip_supported(request)
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/network.py"
 in floating_ip_supported
  91. return NetworkClient(request).floating_ips.is_supported()
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/network.py"
 in __init__
  34. self.floating_ips = nova.FloatingIpManager(request)
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/nova.py"
 in __init__
  398. self.client = novaclient(request)
File "/usr/lib/python2.7/dist-packages/horizon/utils/memoized.py" in wrapped
  90. value = cache[key] = func(*args, **kwargs)
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/nova.py"
 in novaclient
  451.auth_url=base.url_for(request, 'compute'),
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/base.py"
 in url_for
  325. raise exceptions.ServiceCatalogException(service_type)

Exception Type: ServiceCatalogException at /project/access_and_security/
Exception Value: Invalid service catalog service: compute

It'd be nice to have this page error out in a nicer way. As Mathias
pointed out on IRC, some people are using Horizon for a Swift only
install.
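
A hypothetical sketch of a nicer failure mode (not the actual upstream
fix) would be to guard the allowed() check in
openstack_dashboard/dashboards/project/access_and_security/tabs.py so a
missing 'compute' endpoint hides the tab instead of raising:

    def allowed(self, request):
        # 'network' is openstack_dashboard.api.network, as in the traceback.
        try:
            return network.floating_ip_supported(request)
        except Exception:
            # e.g. ServiceCatalogException: Invalid service catalog service
            return False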

It's fine to have this bug having a lower type of priority, however,
it'd be too bad to just forget about it. :)

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1498403

Title:
  fatal python dump if browsing /project/access_and_security/ without
  nova installed

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Hi there!

  I perfectly know that Nova is a required component for Horizon,
  however, for all pages, if only Keystone is installed (ie: Nova
  isn't), there's only a RED error message on the top-right of the web
  page, saying that compute isn't installed. However, going to
  /project/access_and_security/ generates a python dump:

  Traceback:
  File "/usr/lib/python2.7/dist-packages/django/core/handlers/base.py" in 
get_response
111. response = wrapped_callback(request, 
*callback_args, **callback_kwargs)
  File "/usr/lib/python2.7/dist-packages/horizon/decorators.py" in dec
36. return view_func(request, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/horizon/decorators.py" in dec
52. return view_func(request, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/horizon/decorators.py" in dec
36. 

[Yahoo-eng-team] [Bug 1494171] [NEW] Javascript error: Module 'horizon.auth' is not available!

2015-09-10 Thread Thomas Goirand
Public bug reported:

When trying Horizon from Liberty b3, I can log into Horizon, but then
I get the below error all the time in the JS console of Iceweasel:

Error: [$injector:modulerr] Failed to instantiate module horizon.app due to:
[$injector:modulerr] Failed to instantiate module horizon.auth due to:
[$injector:nomod] Module 'horizon.auth' is not available! You either misspelled 
the module name or forgot to load it. If registering a module ensure that you 
specify the dependencies as the second argument.
http://errors.angularjs.org/1.3.17/$injector/nomod?p0=horizon.auth
minErr/<@http://sid.gplhost.com/static/dashboard/js/2d60d97176f3.js:691:8
module/<@http://sid.gplhost.com/static/dashboard/js/2d60d97176f3.js:796:1
ensure@http://sid.gplhost.com/static/dashboard/js/2d60d97176f3.js:794:320
module@http://sid.gplhost.com/static/dashboard/js/2d60d97176f3.js:796:1
loadModules/<@http://sid.gplhost.com/static/dashboard/js/2d60d97176f3.js:895:35
forEach@http://sid.gplhost.com/static/dashboard/js/2d60d97176f3.js:697:391
loadModules@http://sid.gplhost.com/static/dashboard/js/2d60d97176f3.js:894:63
loadModules/<@http://sid.gplhost.com/static/dashboa

Then I can't click on the top left or top right "admin" drop-down
button. Note that the above console dump is complete, and I haven't
truncated it myself, maybe Iceweasel does (I'm not sure who truncates
it...).

** Affects: horizon
 Importance: High
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1494171

Title:
  Javascript error: Module 'horizon.auth' is not available!

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When trying Horizon from Liberty b3, I can log into Horizon, but
  then I get the below error all the time in the JS console of
  Iceweasel:

  Error: [$injector:modulerr] Failed to instantiate module horizon.app due to:
  [$injector:modulerr] Failed to instantiate module horizon.auth due to:
  [$injector:nomod] Module 'horizon.auth' is not available! You either 
misspelled the module name or forgot to load it. If registering a module ensure 
that you specify the dependencies as the second argument.
  http://errors.angularjs.org/1.3.17/$injector/nomod?p0=horizon.auth
  minErr/<@http://sid.gplhost.com/static/dashboard/js/2d60d97176f3.js:691:8
  module/<@http://sid.gplhost.com/static/dashboard/js/2d60d97176f3.js:796:1
  ensure@http://sid.gplhost.com/static/dashboard/js/2d60d97176f3.js:794:320
  module@http://sid.gplhost.com/static/dashboard/js/2d60d97176f3.js:796:1
  
loadModules/<@http://sid.gplhost.com/static/dashboard/js/2d60d97176f3.js:895:35
  forEach@http://sid.gplhost.com/static/dashboard/js/2d60d97176f3.js:697:391
  loadModules@http://sid.gplhost.com/static/dashboard/js/2d60d97176f3.js:894:63
  loadModules/<@http://sid.gplhost.com/static/dashboa

  Then I can't click on the top left or top right "admin" drop-down
  button. Note that the above console dump is complete, and I haven't
  truncated it myself, maybe Iceweasel does (I'm not sure who truncates
  it...).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1494171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474760] [NEW] Unit test failures with sqlalchemy 1.0.6

2015-07-15 Thread Thomas Goirand
Public bug reported:

Hi,

Building Keystone in Jessie poses no problem, but it looks like in Sid,
Keystone doesn't like SQLAlchemy 1.0.6. Here's a full build log:

http://sid.gplhost.com/keystone_8.0.0~b1-1_amd64.build

Just in case that file isn't available, here's an example crash. There's
a single occurrence of the first failure, and 26 of the second one, with
migrate.exceptions.DatabaseAlreadyControlledError as the error.

FAIL: 
keystone.tests.unit.test_sql_upgrade.SqlUpgradeTests.test_add_actor_id_index
--
Traceback (most recent call last):
testtools.testresult.real._StringException: Empty attachments:
  pythonlogging:''-1
  stderr
  stdout

pythonlogging:'': {{{
Adding cache-proxy 'keystone.tests.unit.test_cache.CacheIsolatingProxy' to 
backend.
Loading repository 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo...
Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/044_icehouse.py...
Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/044_icehouse.py
 loaded successfully
Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/045_placeholder.py...
Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/045_placeholder.py
 loaded successfully
Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/046_placeholder.py...
Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/046_placeholder.py
 loaded successfully
Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/047_placeholder.py...
Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/047_placeholder.py
 loaded successfully
Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/048_placeholder.py...
Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/048_placeholder.py
 loaded successfully
Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/049_placeholder.py...
Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/049_placeholder.py
 loaded successfully
Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/050_fk_consistent_indexes.py...
Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/050_fk_consistent_indexes.py
 loaded successfully
Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/051_add_id_mapping.py...
Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/051_add_id_mapping.py
 loaded successfully
Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/052_add_auth_url_to_region.py...
Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/052_add_auth_url_to_region.py
 loaded successfully
Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/053_endpoint_to_region_association.py...
Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/053_endpoint_to_region_association.py
 loaded successfully
Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/054_add_actor_id_index.py...
Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/054_add_actor_id_index.py
 loaded successfully
Loading script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/055_add_indexes_to_token_table.py...
Script 
/home/zigo/sources/openstack/liberty/keystone/build-area/keystone-8.0.0~b1/keystone/common/sql/migrate_repo/versions/055_add_indexes_to_token_table.py
 loaded successfully
Loading script 

[Yahoo-eng-team] [Bug 1465016] Re: nova uses suds, which is to be removed from Debian/Ubuntu

2015-06-15 Thread Thomas Goirand
Hi Doug. I think you misunderstood. suds-jurko isn't more actively
maintained than suds; its upstream maintainer is *ALSO* dead, so we need
a better replacement. pysimplesoap is one of the possible alternatives.

** Changed in: nova
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1465016

Title:
  nova uses suds, which is to be removed from Debian/Ubuntu

Status in OpenStack Compute (Nova):
  Confirmed
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  Suds is a library which isn't maintained upstream. Please switch to
  something else. As you may have seen, someone filed a bug against the
  nova package in Debian because of this:

  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=788081

  As mentioned on the bug report: Please consider to migrate your
  package to use a maintained soap library (like pysimplesoap, at the
  time of writing in NEW).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1465016/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465016] [NEW] nova uses suds, which is to be removed from Debian/Ubuntu

2015-06-14 Thread Thomas Goirand
Public bug reported:

Suds is a library which isn't maintained upstream. Please switch to
something else. As you may have seen, someone filed a bug against the
nova package in Debian because of this:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=788081

As mentioned on the bug report: Please consider to migrate your package
to use a maintained soap library (like pysimplesoap, at the time of
writing in NEW).
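
Rough shape of the change, heavily hedged since I haven't ported any Nova
code myself; the service URL, operation and parameter names below are
placeholders:

    # suds (unmaintained):
    from suds.client import Client
    client = Client('https://example.com/service?wsdl')
    result = client.service.SomeOperation(name='value')

    # pysimplesoap, one possible replacement:
    from pysimplesoap.client import SoapClient
    client = SoapClient(wsdl='https://example.com/service?wsdl')
    result = client.SomeOperation(name='value')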

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1465016

Title:
  nova uses suds, which is to be removed from Debian/Ubuntu

Status in OpenStack Compute (Nova):
  New

Bug description:
  Suds is a library which isn't maintained upstream. Please switch to
  something else. As you may have seen, someone filed a bug against the
  nova package in Debian because of this:

  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=788081

  As mentioned on the bug report: Please consider to migrate your
  package to use a maintained soap library (like pysimplesoap, at the
  time of writing in NEW).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1465016/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456749] [NEW] Coverage for pysaml2 is insufficient

2015-05-19 Thread Thomas Goirand
Public bug reported:

Releasing Kilo in Debian, I found out that Keystone just broke with
pysaml2 2.0.0, and in fact needs 2.4.0. The unit tests just passed, but
with pysaml2 2.0.0 Keystone just crashes with a stack dump.

Out of this, 2 remarks:
- requirements.txt is wrong and should ask for something higher than 2.0.0
(maybe 2.4.0, or possibly a bit lower); see the example below
- unit tests should have detected the issue, meaning that coverage isn't good 
enough
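
For the first remark, the change is a single requirements.txt line (the
exact floor still needs to be confirmed against what Keystone uses):

    pysaml2>=2.4.0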

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1456749

Title:
  Coverage for pysaml2 is insufficient

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Releasing Kilo in Debian, I found out that Keystone just broke with
  pysaml2 2.0.0, and in fact needs 2.4.0. The unit tests just passed,
  but with pysaml2 2.0.0 Keystone just crashes with a stack dump.

  Out of this, 2 remarks:
  - requirements.txt is wrong and should ask for something higher than 2.0.0 
(maybe 2.4.0, or something lower)
  - unit tests should have detected the issue, meaning that coverage isn't good 
enough

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1456749/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445827] [NEW] unit test failures: Glance insists on ordereddict

2015-04-18 Thread Thomas Goirand
Public bug reported:

There's no python-ordereddict package anymore in Debian, as this is
normally included in Python 2.7. I have therefore patched
requirements.txt to remove ordereddict. However, even after this, I get
some bad unit test errors about it. This must be fixed upstream, because
there's no way (modern) downstream distributions can fix it (as the
ordereddict Python package will *not* come back).
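
One way to express this upstream without dropping the dependency entirely
(a sketch; it assumes tooling recent enough to honour environment markers)
is to make it conditional on the Python version:

    # requirements.txt
    ordereddict;python_version<'2.7'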

Below are the tracebacks for the failed unit tests.

FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_api_opts
--
Traceback (most recent call last):
_StringException: Traceback (most recent call last):
  File /«PKGBUILDDIR»/glance/tests/unit/test_opts.py, line 143, in 
test_list_api_opts
expected_opt_groups, expected_opt_names)
  File /«PKGBUILDDIR»/glance/tests/unit/test_opts.py, line 45, in 
_test_entry_point
list_fn = ep.load()
  File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 2188, in load
self.require(env, installer)
  File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 2202, in 
require
items = working_set.resolve(reqs, env, installer)
  File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 639, in resolve
raise DistributionNotFound(req)
DistributionNotFound: ordereddict


==
FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_cache_opts
--
Traceback (most recent call last):
_StringException: Traceback (most recent call last):
  File /«PKGBUILDDIR»/glance/tests/unit/test_opts.py, line 288, in 
test_list_cache_opts
expected_opt_groups, expected_opt_names)
  File /«PKGBUILDDIR»/glance/tests/unit/test_opts.py, line 45, in 
_test_entry_point
list_fn = ep.load()
  File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 2188, in load
self.require(env, installer)
  File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 2202, in 
require
items = working_set.resolve(reqs, env, installer)
  File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 639, in resolve
raise DistributionNotFound(req)
DistributionNotFound: ordereddict


==
FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_manage_opts
--
Traceback (most recent call last):
_StringException: Traceback (most recent call last):
  File /«PKGBUILDDIR»/glance/tests/unit/test_opts.py, line 301, in 
test_list_manage_opts
expected_opt_groups, expected_opt_names)
  File /«PKGBUILDDIR»/glance/tests/unit/test_opts.py, line 45, in 
_test_entry_point
list_fn = ep.load()
  File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 2188, in load
self.require(env, installer)
  File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 2202, in 
require
items = working_set.resolve(reqs, env, installer)
  File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 639, in resolve
raise DistributionNotFound(req)
DistributionNotFound: ordereddict


==
FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_registry_opts
--
Traceback (most recent call last):
_StringException: Traceback (most recent call last):
  File /«PKGBUILDDIR»/glance/tests/unit/test_opts.py, line 192, in 
test_list_registry_opts
expected_opt_groups, expected_opt_names)
  File /«PKGBUILDDIR»/glance/tests/unit/test_opts.py, line 45, in 
_test_entry_point
list_fn = ep.load()
  File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 2188, in load
self.require(env, installer)
  File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 2202, in 
require
items = working_set.resolve(reqs, env, installer)
  File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 639, in resolve
raise DistributionNotFound(req)
DistributionNotFound: ordereddict


==
FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_scrubber_opts
--
Traceback (most recent call last):
_StringException: Traceback (most recent call last):
  File /«PKGBUILDDIR»/glance/tests/unit/test_opts.py, line 241, in 
test_list_scrubber_opts
expected_opt_groups, expected_opt_names)
  File /«PKGBUILDDIR»/glance/tests/unit/test_opts.py, line 45, in 
_test_entry_point
list_fn = ep.load()
  File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 2188, in load
self.require(env, installer)
  File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 2202, in 
require
items = working_set.resolve(reqs, env, installer)
  File 

[Yahoo-eng-team] [Bug 1435216] Re: unit tests: KeyError: 'port_security' when building Debian package

2015-03-25 Thread Thomas Goirand
Hi. The issue was the lack of PYTHONPATH=. when running ./run_tests.sh.
After this change, I got no unit test errors. Sorry for the noise, and
thanks for the help. I'm declaring this as invalid.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1435216

Title:
  unit tests: KeyError: 'port_security' when building Debian package

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Hi there!

  I'm getting a bunch of  KeyError: 'port_security' when building
  Neutron Kilo b3 in Debian (see below). I'm cut/pasting only a single
  trace dump here, though there's about a dozen similar issues. Please
  help me to fix this. Note that the package is building under a Sbuild
  chroot, and the full build log may be found on my jenkins at:

  https://kilo-jessie.pkgs.mirantis.com/job/neutron/

  Cheers,

  Thomas Goirand (zigo)

  FAIL: 
neutron.tests.unit.ml2.test_ext_portsecurity.PSExtDriverTestCase.test_create_network_with_portsecurity_mac
  
neutron.tests.unit.ml2.test_ext_portsecurity.PSExtDriverTestCase.test_create_network_with_portsecurity_mac
  --
  _StringException: Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:'neutron.api.extensions'
stderr
stdout

  pythonlogging:'': {{{
  2015-03-23 08:47:57,618 INFO [neutron.manager] Loading core plugin: 
neutron.plugins.ml2.plugin.Ml2Plugin
  2015-03-23 08:47:57,619 INFO [neutron.plugins.ml2.managers] Configured 
type driver names: ['local', 'flat', 'vlan', 'gre', 'vxlan']
  2015-03-23 08:47:57,619 INFO [neutron.plugins.ml2.drivers.type_flat] 
Allowable flat physical_network names: []
  2015-03-23 08:47:57,619 INFO [neutron.plugins.ml2.drivers.type_vlan] 
Network VLAN ranges: {'physnet2': [(200, 300)], 'physnet1': [(1, 100)]}
  2015-03-23 08:47:57,619 INFO [neutron.plugins.ml2.drivers.type_local] ML2 
LocalTypeDriver initialization complete
  2015-03-23 08:47:57,619 INFO [neutron.plugins.ml2.managers] Loaded type 
driver names: ['flat', 'vlan', 'gre', 'local', 'vxlan']
  2015-03-23 08:47:57,620 INFO [neutron.plugins.ml2.managers] Registered 
types: ['flat', 'vlan', 'local', 'gre', 'vxlan']
  2015-03-23 08:47:57,620 INFO [neutron.plugins.ml2.managers] Tenant 
network_types: ['local']
  2015-03-23 08:47:57,620 INFO [neutron.plugins.ml2.managers] Configured 
extension driver names: ['port_security']
  }}}

  Traceback (most recent call last):
File /«PKGBUILDDIR»/neutron/tests/unit/ml2/test_ext_portsecurity.py, line 
29, in setUp
  super(PSExtDriverTestCase, self).setUp()
File /«PKGBUILDDIR»/neutron/tests/unit/ml2/test_ml2_plugin.py, line 118, 
in setUp
  self.setup_parent()
File /«PKGBUILDDIR»/neutron/tests/unit/ml2/test_ml2_plugin.py, line 100, 
in setup_parent
  Ml2PluginConf.setUp(self, parent_setup)
File /«PKGBUILDDIR»/neutron/tests/unit/ml2/test_ml2_plugin.py, line 80, 
in setUp
  parent_setup()
File /«PKGBUILDDIR»/neutron/tests/unit/test_extension_portsecurity.py, 
line 171, in setUp
  super(PortSecurityDBTestCase, self).setUp(plugin)
File /«PKGBUILDDIR»/neutron/tests/unit/test_extension_portsecurity.py, 
line 40, in setUp
  super(PortSecurityTestCase, self).setUp(plugin=plugin, ext_mgr=ext_mgr)
File /«PKGBUILDDIR»/neutron/tests/unit/test_db_plugin.py, line 120, in 
setUp
  self.api = router.APIRouter()
File /«PKGBUILDDIR»/neutron/api/v2/router.py, line 74, in __init__
  plugin = manager.NeutronManager.get_plugin()
File /«PKGBUILDDIR»/neutron/manager.py, line 222, in get_plugin
  return weakref.proxy(cls.get_instance().plugin)
File /«PKGBUILDDIR»/neutron/manager.py, line 216, in get_instance
  cls._create_instance()
File /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py, line 
431, in inner
  return f(*args, **kwargs)
File /«PKGBUILDDIR»/neutron/manager.py, line 202, in _create_instance
  cls._instance = cls()
File /«PKGBUILDDIR»/neutron/manager.py, line 117, in __init__
  plugin_provider)
File /«PKGBUILDDIR»/neutron/manager.py, line 143, in _get_plugin_instance
  return plugin_class()
File /«PKGBUILDDIR»/neutron/plugins/ml2/plugin.py, line 128, in __init__
  self.extension_manager = managers.ExtensionManager()
File /«PKGBUILDDIR»/neutron/plugins/ml2/managers.py, line 704, in __init__
  name_order=True)
File /usr/lib/python2.7/dist-packages/stevedore/named.py, line 56, in 
__init__
  self._init_plugins(extensions)
File /usr/lib/python2.7/dist-packages/stevedore/named.py, line 112, in 
_init_plugins
  self.extensions = [self[n] for n in self._names]
File /usr/lib/python2.7/dist-packages/stevedore/extension.py, line 283, 
in __getitem__
  return self

[Yahoo-eng-team] [Bug 1435216] [NEW] unit tests: KeyError: 'port_security' when building Debian package

2015-03-23 Thread Thomas Goirand
Public bug reported:

Hi there!

I'm getting a bunch of  KeyError: 'port_security' when building Neutron
Kilo b3 in Debian (see below). I'm cut/pasting only a single trace dump
here, though there's about a dozen similar issues. Please help me to fix
this. Note that the package is building under a Sbuild chroot, and the
full build log may be found on my jenkins at:

https://kilo-jessie.pkgs.mirantis.com/job/neutron/

Cheers,

Thomas Goirand (zigo)

FAIL: 
neutron.tests.unit.ml2.test_ext_portsecurity.PSExtDriverTestCase.test_create_network_with_portsecurity_mac
neutron.tests.unit.ml2.test_ext_portsecurity.PSExtDriverTestCase.test_create_network_with_portsecurity_mac
--
_StringException: Traceback (most recent call last):
_StringException: Empty attachments:
  pythonlogging:'neutron.api.extensions'
  stderr
  stdout

pythonlogging:'': {{{
2015-03-23 08:47:57,618 INFO [neutron.manager] Loading core plugin: 
neutron.plugins.ml2.plugin.Ml2Plugin
2015-03-23 08:47:57,619 INFO [neutron.plugins.ml2.managers] Configured type 
driver names: ['local', 'flat', 'vlan', 'gre', 'vxlan']
2015-03-23 08:47:57,619 INFO [neutron.plugins.ml2.drivers.type_flat] 
Allowable flat physical_network names: []
2015-03-23 08:47:57,619 INFO [neutron.plugins.ml2.drivers.type_vlan] 
Network VLAN ranges: {'physnet2': [(200, 300)], 'physnet1': [(1, 100)]}
2015-03-23 08:47:57,619 INFO [neutron.plugins.ml2.drivers.type_local] ML2 
LocalTypeDriver initialization complete
2015-03-23 08:47:57,619 INFO [neutron.plugins.ml2.managers] Loaded type 
driver names: ['flat', 'vlan', 'gre', 'local', 'vxlan']
2015-03-23 08:47:57,620 INFO [neutron.plugins.ml2.managers] Registered 
types: ['flat', 'vlan', 'local', 'gre', 'vxlan']
2015-03-23 08:47:57,620 INFO [neutron.plugins.ml2.managers] Tenant 
network_types: ['local']
2015-03-23 08:47:57,620 INFO [neutron.plugins.ml2.managers] Configured 
extension driver names: ['port_security']
}}}

Traceback (most recent call last):
  File /«PKGBUILDDIR»/neutron/tests/unit/ml2/test_ext_portsecurity.py, line 
29, in setUp
super(PSExtDriverTestCase, self).setUp()
  File /«PKGBUILDDIR»/neutron/tests/unit/ml2/test_ml2_plugin.py, line 118, in 
setUp
self.setup_parent()
  File /«PKGBUILDDIR»/neutron/tests/unit/ml2/test_ml2_plugin.py, line 100, in 
setup_parent
Ml2PluginConf.setUp(self, parent_setup)
  File /«PKGBUILDDIR»/neutron/tests/unit/ml2/test_ml2_plugin.py, line 80, in 
setUp
parent_setup()
  File /«PKGBUILDDIR»/neutron/tests/unit/test_extension_portsecurity.py, line 
171, in setUp
super(PortSecurityDBTestCase, self).setUp(plugin)
  File /«PKGBUILDDIR»/neutron/tests/unit/test_extension_portsecurity.py, line 
40, in setUp
super(PortSecurityTestCase, self).setUp(plugin=plugin, ext_mgr=ext_mgr)
  File /«PKGBUILDDIR»/neutron/tests/unit/test_db_plugin.py, line 120, in setUp
self.api = router.APIRouter()
  File /«PKGBUILDDIR»/neutron/api/v2/router.py, line 74, in __init__
plugin = manager.NeutronManager.get_plugin()
  File /«PKGBUILDDIR»/neutron/manager.py, line 222, in get_plugin
return weakref.proxy(cls.get_instance().plugin)
  File /«PKGBUILDDIR»/neutron/manager.py, line 216, in get_instance
cls._create_instance()
  File /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py, line 
431, in inner
return f(*args, **kwargs)
  File /«PKGBUILDDIR»/neutron/manager.py, line 202, in _create_instance
cls._instance = cls()
  File /«PKGBUILDDIR»/neutron/manager.py, line 117, in __init__
plugin_provider)
  File /«PKGBUILDDIR»/neutron/manager.py, line 143, in _get_plugin_instance
return plugin_class()
  File /«PKGBUILDDIR»/neutron/plugins/ml2/plugin.py, line 128, in __init__
self.extension_manager = managers.ExtensionManager()
  File /«PKGBUILDDIR»/neutron/plugins/ml2/managers.py, line 704, in __init__
name_order=True)
  File /usr/lib/python2.7/dist-packages/stevedore/named.py, line 56, in 
__init__
self._init_plugins(extensions)
  File /usr/lib/python2.7/dist-packages/stevedore/named.py, line 112, in 
_init_plugins
self.extensions = [self[n] for n in self._names]
  File /usr/lib/python2.7/dist-packages/stevedore/extension.py, line 283, in 
__getitem__
return self._extensions_by_name[name]
KeyError: 'port_security'

Traceback (most recent call last):
_StringException: Empty attachments:
  pythonlogging:'neutron.api.extensions'
  stderr
  stdout

pythonlogging:'': {{{
2015-03-23 08:47:57,618 INFO [neutron.manager] Loading core plugin: 
neutron.plugins.ml2.plugin.Ml2Plugin
2015-03-23 08:47:57,619 INFO [neutron.plugins.ml2.managers] Configured type 
driver names: ['local', 'flat', 'vlan', 'gre', 'vxlan']
2015-03-23 08:47:57,619 INFO [neutron.plugins.ml2.drivers.type_flat] 
Allowable flat physical_network names: []
2015-03-23 08:47:57,619 INFO [neutron.plugins.ml2.drivers.type_vlan] 
Network VLAN ranges: {'physnet2': [(200, 300)], 'physnet1': [(1, 100)]}

[Yahoo-eng-team] [Bug 1435174] [NEW] SSLTestCase errors when building Debian package

2015-03-23 Thread Thomas Goirand
Public bug reported:

Hi,

I get the below issues when building Keystone in Debian (a Jessie chroot
using sbuild). Help from the Keystone team would be appreciated to
resolve this. Cheers! :)

==
FAIL: keystone.tests.unit.test_ssl.SSLTestCase.test_2way_ssl_with_ipv6_ok
--
Traceback (most recent call last):
_StringException: Traceback (most recent call last):
_StringException: Empty attachments:
  pythonlogging:''-1
  stderr
  stdout

pythonlogging:'': {{{
Adding cache-proxy 'keystone.tests.unit.test_cache.CacheIsolatingProxy' to 
backend.
KVS region configuration for os-revoke-driver: 
{'keystone.kvs.arguments.distributed_lock': True, 
'keystone.kvs.arguments.lock_timeout': 6, 'keystone.kvs.backend': 
'openstack.kvs.Memory'}
Using default dogpile sha1_mangle_key as KVS region os-revoke-driver key_mangler
Starting /usr/lib/python2.7/dist-packages/subunit/run.py on ::1:0
(8731) wsgi starting up on https://::1:60772/
(8731) wsgi exited, is_accepting=True
}}}

Traceback (most recent call last):
  File /«PKGBUILDDIR»/keystone/tests/unit/test_ssl.py, line 124, in 
test_2way_ssl_with_ipv6_ok
conn.request('GET', '/')
  File /usr/lib/python2.7/httplib.py, line 1001, in request
self._send_request(method, url, body, headers)
  File /usr/lib/python2.7/httplib.py, line 1035, in _send_request
self.endheaders(body)
  File /usr/lib/python2.7/httplib.py, line 997, in endheaders
self._send_output(message_body)
  File /usr/lib/python2.7/httplib.py, line 850, in _send_output
self.send(msg)
  File /usr/lib/python2.7/httplib.py, line 812, in send
self.connect()
  File /usr/lib/python2.7/httplib.py, line 1212, in connect
server_hostname=server_hostname)
  File /usr/lib/python2.7/ssl.py, line 350, in wrap_socket
_context=self)
  File /usr/lib/python2.7/dist-packages/eventlet/green/ssl.py, line 64, in 
__init__
ca_certs, do_handshake_on_connect and six.PY2, *args, **kw)
  File /usr/lib/python2.7/ssl.py, line 566, in __init__
self.do_handshake()
  File /usr/lib/python2.7/dist-packages/eventlet/green/ssl.py, line 237, in 
do_handshake
super(GreenSSLSocket, self).do_handshake)
  File /usr/lib/python2.7/dist-packages/eventlet/green/ssl.py, line 109, in 
_call_trampolining
return func(*a, **kw)
  File /usr/lib/python2.7/ssl.py, line 788, in do_handshake
self._sslobj.do_handshake()
SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed 
(_ssl.c:581)

Traceback (most recent call last):
_StringException: Empty attachments:
  pythonlogging:''-1
  stderr
  stdout

pythonlogging:'': {{{
Adding cache-proxy 'keystone.tests.unit.test_cache.CacheIsolatingProxy' to 
backend.
KVS region configuration for os-revoke-driver: 
{'keystone.kvs.arguments.distributed_lock': True, 
'keystone.kvs.arguments.lock_timeout': 6, 'keystone.kvs.backend': 
'openstack.kvs.Memory'}
Using default dogpile sha1_mangle_key as KVS region os-revoke-driver key_mangler
Starting /usr/lib/python2.7/dist-packages/subunit/run.py on ::1:0
(8731) wsgi starting up on https://::1:60772/
(8731) wsgi exited, is_accepting=True
}}}

Traceback (most recent call last):
  File /«PKGBUILDDIR»/keystone/tests/unit/test_ssl.py, line 124, in 
test_2way_ssl_with_ipv6_ok
conn.request('GET', '/')
  File /usr/lib/python2.7/httplib.py, line 1001, in request
self._send_request(method, url, body, headers)
  File /usr/lib/python2.7/httplib.py, line 1035, in _send_request
self.endheaders(body)
  File /usr/lib/python2.7/httplib.py, line 997, in endheaders
self._send_output(message_body)
  File /usr/lib/python2.7/httplib.py, line 850, in _send_output
self.send(msg)
  File /usr/lib/python2.7/httplib.py, line 812, in send
self.connect()
  File /usr/lib/python2.7/httplib.py, line 1212, in connect
server_hostname=server_hostname)
  File /usr/lib/python2.7/ssl.py, line 350, in wrap_socket
_context=self)
  File /usr/lib/python2.7/dist-packages/eventlet/green/ssl.py, line 64, in 
__init__
ca_certs, do_handshake_on_connect and six.PY2, *args, **kw)
  File /usr/lib/python2.7/ssl.py, line 566, in __init__
self.do_handshake()
  File /usr/lib/python2.7/dist-packages/eventlet/green/ssl.py, line 237, in 
do_handshake
super(GreenSSLSocket, self).do_handshake)
  File /usr/lib/python2.7/dist-packages/eventlet/green/ssl.py, line 109, in 
_call_trampolining
return func(*a, **kw)
  File /usr/lib/python2.7/ssl.py, line 788, in do_handshake
self._sslobj.do_handshake()
SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed 
(_ssl.c:581)


==
FAIL: keystone.tests.unit.test_ssl.SSLTestCase.test_1way_ssl_ok
--
Traceback (most recent call last):
_StringException: Traceback (most recent call last):
_StringException: Empty attachments:
  

[Yahoo-eng-team] [Bug 1433146] [NEW] neutron-lbaas tries to read neutron.conf from /usr

2015-03-17 Thread Thomas Goirand
Public bug reported:

neutron-lbaas is trying to read neutron.conf from
/usr/lib/python2.7/dist-packages/etc/neutron.conf. Of course, in the
context of building a Debian package, this fails miserably.
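
For illustration, here is a hedged sketch of how such a path can arise
(this is an assumption about the mechanism, not the actual neutron-lbaas
code): resolving an etc/ path relative to the installed package lands
under dist-packages instead of /etc/neutron.

import os
import neutron  # assumption: neutron is importable in the build environment

# Resolve etc/neutron.conf relative to the installed package directory.
pkg_dir = os.path.dirname(os.path.dirname(os.path.abspath(neutron.__file__)))
print(os.path.join(pkg_dir, "etc", "neutron.conf"))
# On Debian this prints /usr/lib/python2.7/dist-packages/etc/neutron.conf,
# i.e. exactly the non-existent path reported above.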

Also, I'm really not a fan of neutron-lbaas and neutron being so
interdependent. The way it is set up, neutron must build-depend on
neutron-lbaas and neutron-lbaas must build-depend on neutron. It
shouldn't be this way. One of the projects should mock the other. Should
I file another bug report for this?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433146

Title:
  neutron-lbaas tries to read neutron.conf from /usr

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  neutron-lbaas is trying to read neutron.conf from
  /usr/lib/python2.7/dist-packages/etc/neutron.conf. Of course, in the
  context of building a Debian package, this fails miserably.

  Also, I'm really not a fan of neutron-lbaas and neutron being so
  interdependent. The way it is set up, neutron must build-depend on
  neutron-lbaas and neutron-lbaas must build-depend on neutron. It
  shouldn't be this way. One of the projects should mock the other.
  Should I file another bug report for this?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1433146/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407459] [NEW] glance-registry searches for [pipeline:glance-registry-FLAVOR]

2015-01-04 Thread Thomas Goirand
Public bug reported:

Depending on what's in the flavor directive of
/etc/glance/glance-registry.conf, glance-registry searches for a
different section of /etc/glance/glance-registry-paste.ini. For example,
if I put flavor = keystone+caching, then glance-registry searches for a
section [pipeline:glance-registry-keystone+caching] in
/etc/glance/glance-registry-paste.ini.

If this is a feature, then the default configuration file should be
updated to handle all values of the flavor directive.

If it's a bug (e.g. a regression) and glance-registry is searching for
the wrong section, then this should be corrected.

In the meantime, I'm patching the default configuration file in my
Debian package to add the below lines to glance-registry-paste.ini:

[pipeline:glance-registry-keystone]
pipeline = osprofiler unauthenticated-context registryapp

[pipeline:glance-registry-caching]
pipeline = osprofiler unauthenticated-context registryapp

[pipeline:glance-registry-keystone+caching]
pipeline = osprofiler unauthenticated-context registryapp

[pipeline:glance-registry-cachemanagement]
pipeline = osprofiler unauthenticated-context registryapp

[pipeline:glance-registry-keystone+cachemanagement]
pipeline = osprofiler unauthenticated-context registryapp

Without this, glance-registry may refuse to start by default, which is
really annoying.
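
For reference, a minimal sketch of the lookup mechanism as described
above (an assumption for illustration, not Glance's actual code): the
paste section name is built by appending the flavor to the application
name.

def pipeline_section(app_name, flavor):
    """Return the paste.ini section looked up for a given flavor."""
    if not flavor:
        return "pipeline:%s" % app_name
    return "pipeline:%s-%s" % (app_name, flavor)

print(pipeline_section("glance-registry", "keystone+caching"))
# -> pipeline:glance-registry-keystone+caching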

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1407459

Title:
  glance-registry searches for [pipeline:glance-registry-FLAVOR]

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Depending on what's in the flavor directive of /etc/glance/glance-
  registry.conf, glance-registry searches for a different section of
  /etc/glance/glance-registry-paste.ini. For example, if I put flavor =
  keystone+caching, then glance-registry searches for a section
  [pipeline:glance-registry-keystone+caching] in /etc/glance/glance-
  registry-paste.ini.

  If this is a feature, then the default configuration file should be
  updated to handle all values of the flavor directive.

  If it's a bug (e.g. a regression) and glance-registry is searching for
  the wrong section, then this should be corrected.

  In the meantime, I'm patching the default configuration file in my
  Debian package to add the below lines to glance-registry-paste.ini:

  [pipeline:glance-registry-keystone]
  pipeline = osprofiler unauthenticated-context registryapp

  [pipeline:glance-registry-caching]
  pipeline = osprofiler unauthenticated-context registryapp

  [pipeline:glance-registry-keystone+caching]
  pipeline = osprofiler unauthenticated-context registryapp

  [pipeline:glance-registry-cachemanagement]
  pipeline = osprofiler unauthenticated-context registryapp

  [pipeline:glance-registry-keystone+cachemanagement]
  pipeline = osprofiler unauthenticated-context registryapp

  Without this, glance-registry may refuse to start by default, which is
  really annoying.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1407459/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377374] [NEW] test_launch_instance_exception_on_flavors fails

2014-10-03 Thread Thomas Goirand
Public bug reported:

When building Horizon Juno RC1, I get the Python stack dump below. If
you need more details on how to reproduce this in Debian Sid, please let
me know and I'll explain, though the package is currently in Debian
Experimental, so just running dpkg-buildpackage on it should be enough.

==
FAIL: test_launch_instance_exception_on_flavors 
(openstack_dashboard.dashboards.project.databases.tests.DatabaseTests)
--
Traceback (most recent call last):
  File 
/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~rc1/openstack_dashboard/test/helpers.py,
 line 80, in instance_stub_out
return fn(self, *args, **kwargs)
  File 
/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~rc1/openstack_dashboard/dashboards/project/databases/tests.py,
 line 159, in test_launch_instance_exception_
self.client.get(LAUNCH_URL)
AssertionError: Http302 not raised

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1377374

Title:
  test_launch_instance_exception_on_flavors fails

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When building Horizon Juno RC1, I get the Python stack dump below. If
  you need more details on how to reproduce this in Debian Sid, please
  let me know and I'll explain, though the package is currently in
  Debian Experimental, so just running dpkg-buildpackage on it should
  be enough.

  ==
  FAIL: test_launch_instance_exception_on_flavors 
(openstack_dashboard.dashboards.project.databases.tests.DatabaseTests)
  --
  Traceback (most recent call last):
File 
/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~rc1/openstack_dashboard/test/helpers.py,
 line 80, in instance_stub_out
  return fn(self, *args, **kwargs)
File 
/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~rc1/openstack_dashboard/dashboards/project/databases/tests.py,
 line 159, in test_launch_instance_exception_
  self.client.get(LAUNCH_URL)
  AssertionError: Http302 not raised

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1377374/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374919] [NEW] instance.security_groups must be a list

2014-09-28 Thread Thomas Goirand
-packages/django/test/utils.py, line 88, in 
instrumented_test_render
return self.nodelist.render(context)
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 844, in 
render
bit = self.render_node(node, context)
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 858, in 
render_node
return node.render(context)
  File /usr/lib/python2.7/dist-packages/django/template/defaulttags.py, line 
527, in render
return self.nodelist.render(context)
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 844, in 
render
bit = self.render_node(node, context)
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 858, in 
render_node
return node.render(context)
  File /usr/lib/python2.7/dist-packages/django/template/defaulttags.py, line 
312, in render
return nodelist.render(context)
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 844, in 
render
bit = self.render_node(node, context)
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 858, in 
render_node
return node.render(context)
  File /usr/lib/python2.7/dist-packages/django/template/defaulttags.py, line 
208, in render
nodelist.append(node.render(context))
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 898, in 
render
output = self.filter_expression.resolve(context)
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 596, in 
resolve
obj = self.var.resolve(context)
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 734, in 
resolve
value = self._resolve_lookup(context)
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 788, in 
_resolve_lookup
current = current()
  File 
/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~b3/horizon/tabs/base.py,
 line 317, in render
return render_to_string(self.get_template_name(self.request), context)
  File /usr/lib/python2.7/dist-packages/django/template/loader.py, line 172, 
in render_to_string
return t.render(Context(dictionary))
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 148, in 
render
return self._render(context)
  File /usr/lib/python2.7/dist-packages/django/test/utils.py, line 88, in 
instrumented_test_render
return self.nodelist.render(context)
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 844, in 
render
bit = self.render_node(node, context)
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 858, in 
render_node
return node.render(context)
  File /usr/lib/python2.7/dist-packages/django/template/defaulttags.py, line 
161, in render
values = list(values)
TypeError: 'SecurityGroup' object is not iterable

In openstack_dashboard/dashboards/project/instances/tests.py, we should
have security_group = self.security_groups.list() and not security_group
= self.security_groups.first().
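
A self-contained, hedged illustration of the failure mode (the class
below is a stand-in for the real test fixture): iterating over a single
object raises the TypeError shown above, while a list iterates fine,
which is why the test data should use self.security_groups.list() rather
than .first().

class SecurityGroup(object):
    """Stand-in for the API object in the test data (illustrative only)."""
    name = "default"

single = SecurityGroup()      # what .first() returns: one object
as_list = [SecurityGroup()]   # what .list() returns: an iterable

for group in as_list:         # works, as the template's {% for %} expects
    print(group.name)
# for group in single: ...    # TypeError: 'SecurityGroup' object is not iterable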

** Affects: horizon
 Importance: Undecided
 Assignee: Thomas Goirand (thomas-goirand)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1374919

Title:
  instance.security_groups must be a list

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In Django 1.7, these two issues show up:

  ==
  ERROR: test_instance_details_volume_sorting 
(openstack_dashboard.dashboards.project.instances.tests.InstanceTests)
  --
  Traceback (most recent call last):
File 
/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~b3/openstack_dashboard/dashboards/project/instances/tests.py,
 line 704, in test_instance_details_volume_sorting
  security_groups_return=security_group)
File 
/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~b3/openstack_dashboard/test/helpers.py,
 line 80, in instance_stub_out
  return fn(self, *args, **kwargs)
File 
/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~b3/openstack_dashboard/dashboards/project/instances/tests.py,
 line 684, in _get_instance_details
  return self.client.get(url)
File /usr/lib/python2.7/dist-packages/django/test/client.py, line 467, in 
get
  **extra)
File /usr/lib/python2.7/dist-packages/django/test/client.py, line 285, in 
get
  return self.generic('GET', path, secure=secure, **r)
File /usr/lib/python2.7/dist-packages/django/test/client.py, line 355, in 
generic
  return self.request(**r)
File /usr/lib/python2.7/dist-packages/django/test/client.py, line 437, in 
request
  six.reraise(*exc_info)
File /usr/lib/python2.7/dist-packages/django/core/handlers/base.py, line 
111, in get_response
  response = wrapped_callback(request

[Yahoo-eng-team] [Bug 1371428] [NEW] Keystone unit tests trying to use git clone

2014-09-19 Thread Thomas Goirand
Public bug reported:

This bug is the 2nd instance of
https://bugs.launchpad.net/keystone/+bug/948495, 2 years later.

It looks like keystone is fetching python-keystoneclient from git in
order to run its unit tests. This is not possible when running during
the build of the Debian package.

The result is that all keystone.tests.test_keystoneclient.KcMasterTestCase.* 
unit tests are failing, as per this build log:
https://117.121.243.214/job/keystone/36/console

One way to address this would be to completely disable these tests if
git isn't available (which is the case when running in a minimal chroot
build environment).
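
A hedged sketch of that idea (the names below are illustrative, not
Keystone's actual test code): guard the git-based test cases with a skip
when no git binary is found on PATH.

import unittest
from distutils.spawn import find_executable  # available on Python 2.7 too


def git_available():
    """Return True when a git binary can be found on PATH."""
    return find_executable("git") is not None


@unittest.skipUnless(git_available(), "git not available (minimal chroot build)")
class KcMasterTestCaseSketch(unittest.TestCase):
    def test_placeholder(self):
        # The real tests would clone python-keystoneclient at this point.
        self.assertTrue(git_available())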

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1371428

Title:
  Keystone unit tests trying to use git clone

Status in OpenStack Identity (Keystone):
  New

Bug description:
  This bug is the 2nd instance of
  https://bugs.launchpad.net/keystone/+bug/948495, 2 years later.

  It looks like keystone is fetching python-keystoneclient from git in
  order to run its unit tests. This is not possible when running during
  the build of the Debian package.

  The result is that all keystone.tests.test_keystoneclient.KcMasterTestCase.* 
unit tests are failing, as per this build log:
  https://117.121.243.214/job/keystone/36/console

  One way to address this would be to completely disable these tests if
  git isn't available (which is the case when running in a minimal
  chroot build environment).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1371428/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352919] [NEW] horizon/workflows/base.py contains add_error() which conflicts with Django 1.7 definition

2014-08-05 Thread Thomas Goirand
Public bug reported:

As per the subject, horizon/workflows/base.py contains a definition of
add_error(). Unfortunately, this is now a method name used by Django
1.7. This conflicts with it, and leads to unit test errors when running
with Django 1.7 installed.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1352919

Title:
  horizon/workflows/base.py contains add_error() which conflicts with
  Django 1.7 definition

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  As per the subject, horizon/workflows/base.py contains a definition of
  add_error(). Unfortunately, this is now a method name used by Django
  1.7. This conflicts with it, and leads to unit test errors when
  running with Django 1.7 installed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1352919/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1292311] [NEW] 5 unicode unit test failures when building Debian package

2014-03-13 Thread Thomas Goirand
Public bug reported:

==
FAIL: 
keystone.tests.test_exception.ExceptionTestCase.test_invalid_unicode_string
--
Traceback (most recent call last):
_StringException: pythonlogging:'': {{{Adding cache-proxy 
'keystone.tests.test_cache.CacheIsolatingProxy' to backend.}}}

Traceback (most recent call last):
  File /tmp/buildd/keystone-2014.1~b3/keystone/tests/test_exception.py, line 
100, in test_invalid_unicode_string
self.assertIn('%(attribute)', e.message)
  File /usr/lib/python2.7/dist-packages/testtools/testcase.py, line 327, in 
assertIn
self.assertThat(haystack, Contains(needle))
  File /usr/lib/python2.7/dist-packages/testtools/testcase.py, line 406, in 
assertThat
raise mismatch_error
MismatchError: '%(attribute)' not in 'Expecting to find xx in \xe7a va. The 
server could not comply with the request since it is either malformed or 
otherwise incorrect. The client is assumed to be in error.'


==
FAIL: keystone.tests.test_exception.ExceptionTestCase.test_unicode_string
--
Traceback (most recent call last):
_StringException: pythonlogging:'': {{{Adding cache-proxy 
'keystone.tests.test_cache.CacheIsolatingProxy' to backend.}}}

Traceback (most recent call last):
  File /tmp/buildd/keystone-2014.1~b3/keystone/tests/test_exception.py, line 
92, in test_unicode_string
self.assertIn(u'\u2013', e.message)
  File /usr/lib/python2.7/dist-packages/testtools/testcase.py, line 327, in 
assertIn
self.assertThat(haystack, Contains(needle))
  File /usr/lib/python2.7/dist-packages/testtools/testcase.py, line 404, in 
assertThat
mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
  File /usr/lib/python2.7/dist-packages/testtools/testcase.py, line 454, in 
_matchHelper
mismatch = matcher.match(matchee)
  File /usr/lib/python2.7/dist-packages/testtools/matchers/_basic.py, line 
285, in match
if self.needle not in matchee:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 29: 
ordinal not in range(128)


==
FAIL: 
keystone.tests.test_exception.SecurityErrorTestCase.test_invalid_unicode_string
--
Traceback (most recent call last):
_StringException: pythonlogging:'': {{{Adding cache-proxy 
'keystone.tests.test_cache.CacheIsolatingProxy' to backend.}}}

Traceback (most recent call last):
  File /tmp/buildd/keystone-2014.1~b3/keystone/tests/test_exception.py, line 
100, in test_invalid_unicode_string
self.assertIn('%(attribute)', e.message)
  File /usr/lib/python2.7/dist-packages/testtools/testcase.py, line 327, in 
assertIn
self.assertThat(haystack, Contains(needle))
  File /usr/lib/python2.7/dist-packages/testtools/testcase.py, line 406, in 
assertThat
raise mismatch_error
MismatchError: '%(attribute)' not in 'Expecting to find xx in \xe7a va. The 
server could not comply with the request since it is either malformed or 
otherwise incorrect. The client is assumed to be in error.'


==
FAIL: keystone.tests.test_exception.SecurityErrorTestCase.test_unicode_string
--
Traceback (most recent call last):
_StringException: pythonlogging:'': {{{Adding cache-proxy 
'keystone.tests.test_cache.CacheIsolatingProxy' to backend.}}}

Traceback (most recent call last):
  File /tmp/buildd/keystone-2014.1~b3/keystone/tests/test_exception.py, line 
92, in test_unicode_string
self.assertIn(u'\u2013', e.message)
  File /usr/lib/python2.7/dist-packages/testtools/testcase.py, line 327, in 
assertIn
self.assertThat(haystack, Contains(needle))
  File /usr/lib/python2.7/dist-packages/testtools/testcase.py, line 404, in 
assertThat
mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
  File /usr/lib/python2.7/dist-packages/testtools/testcase.py, line 454, in 
_matchHelper
mismatch = matcher.match(matchee)
  File /usr/lib/python2.7/dist-packages/testtools/matchers/_basic.py, line 
285, in match
if self.needle not in matchee:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 29: 
ordinal not in range(128)


==
FAIL: 
keystone.tests.test_wsgi.LocalizedResponseTest.test_static_translated_string_is_Message
--
Traceback (most recent call last):
_StringException: pythonlogging:'': {{{Adding cache-proxy 
'keystone.tests.test_cache.CacheIsolatingProxy' to backend.}}}

Traceback (most recent call last):
  File 

[Yahoo-eng-team] [Bug 1286717] [NEW] Keystone unit tests fails with SQLAlchemy 0.9.3

2014-03-01 Thread Thomas Goirand
Public bug reported:

Keystone fails its unit tests when running with SQLAlchemy 0.9.3, as per
the log below. It is important for Debian that Havana Keystone continues
to work in Sid with SQLA 0.9.

==
ERROR: keystone.tests.test_sql_upgrade.SqlUpgradeTests.test_upgrade_14_to_16
--
_StringException: traceback-1: {{{
Traceback (most recent call last):
  File 
/home/zigo/sources/openstack/havana/keystone/build-area/keystone-2013.2.2/keystone/tests/test_sql_upgrade.py,
 line 90, in tearDown
self.downgrade(0)
  File 
/home/zigo/sources/openstack/havana/keystone/build-area/keystone-2013.2.2/keystone/tests/test_sql_upgrade.py,
 line 125, in downgrade
self._migrate(*args, downgrade=True, **kwargs)
  File 
/home/zigo/sources/openstack/havana/keystone/build-area/keystone-2013.2.2/keystone/tests/test_sql_upgrade.py,
 line 139, in _migrate
self.schema.runchange(ver, change, changeset.step)
  File /usr/lib/python2.7/dist-packages/migrate/versioning/schema.py, line 
91, in runchange
change.run(self.engine, step)
  File /usr/lib/python2.7/dist-packages/migrate/versioning/script/py.py, line 
145, in run
script_func(engine)
  File 
/home/zigo/sources/openstack/havana/keystone/build-area/keystone-2013.2.2/keystone/common/sql/migrate_repo/versions/016_normalize_domain_ids.py,
 line 430, in downgrade
downgrade_user_table_with_copy(meta, migrate_engine, session)
  File 
/home/zigo/sources/openstack/havana/keystone/build-area/keystone-2013.2.2/keystone/common/sql/migrate_repo/versions/016_normalize_domain_ids.py,
 line 225, in downgrade_user_table_with_copy
'extra': user.extra})
  File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 978, 
in execute
clause, params or {})
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 717, 
in execute
return meth(self, multiparams, params)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py, line 317, 
in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 814, 
in _execute_clauseelement
compiled_sql, distilled_params
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 927, 
in _execute_context
context)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1076, 
in _handle_dbapi_exception
exc_info
  File /usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py, line 185, 
in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 920, 
in _execute_context
context)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, line 
425, in do_execute
cursor.execute(statement, parameters)
IntegrityError: (IntegrityError) UNIQUE constraint failed: temp_user.name 
u'insert into temp_user (id, name, password, enabled, extra) values ( ?, ?, ?, 
?, ?);' (u'433e0e1c02ff436a9bf1829ee42790d1', 
u'6327d5d819064064a82bc10e0ef7fdca', u'5ef83255a7df4fb1a3ae1d6877719a1e', True, 
u'{}')
}}}
 
Traceback (most recent call last):
  File 
/home/zigo/sources/openstack/havana/keystone/build-area/keystone-2013.2.2/keystone/tests/test_sql_upgrade.py,
 line 487, in test_upgrade_14_to_16
self.check_uniqueness_constraints()
  File 
/home/zigo/sources/openstack/havana/keystone/build-area/keystone-2013.2.2/keystone/tests/test_sql_upgrade.py,
 line 882, in check_uniqueness_constraints
cmd = this_table.delete(id=user['id'])
  File string, line 1, in lambda
  File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/selectable.py, line 
1237, in delete
return dml.Delete(self, whereclause, **kwargs)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/dml.py, line 749, in 
__init__
self._validate_dialect_kwargs(dialect_kw)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/base.py, line 132, in 
_validate_dialect_kwargs
named dialectname_argument, got '%s' % k)
TypeError: Additional arguments should be named dialectname_argument, got 
'id'
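
A hedged sketch of the SQLAlchemy change this TypeError points at (the
table and value below are illustrative): 0.9 started validating keyword
arguments to delete() as dialect-specific options, so the condition has
to be expressed as a whereclause instead of id=....

from sqlalchemy import Column, MetaData, String, Table

meta = MetaData()
temp_user = Table("temp_user", meta, Column("id", String(64), primary_key=True))

# Old call pattern, now rejected ("Additional arguments should be named
# <dialectname>_argument"):
#     cmd = temp_user.delete(id="some-id")

# Portable form that works on old and new releases alike:
cmd = temp_user.delete().where(temp_user.c.id == "some-id")
print(cmd)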

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1286717

Title:
  Keystone unit tests fails with SQLAlchemy 0.9.3

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Keystone fails its unit tests when running with SQLAlchemy 0.9.3, as
  per the log below. It is important for Debian that Havana Keystone
  continues to work in Sid with SQLA 0.9.

  ==
  ERROR: keystone.tests.test_sql_upgrade.SqlUpgradeTests.test_upgrade_14_to_16
  --
  _StringException: