[Yahoo-eng-team] [Bug 1448813] Re: radvd running as neutron user in Kilo, attached network dead

2015-06-22 Thread Christopher Aedo
** No longer affects: app-catalog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1448813

Title:
  radvd running as neutron user in Kilo, attached network dead

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  Fix Released

Bug description:
  Kilo RC1 release, Mirantis Debian Jessie build

  Linux kernel 3.19.3, ML2 VLAN networking

  radvd version 1:1.9.1-1.3

  Network with IPv6 ULA SLAAC, IPv6 GUA SLAAC, and IPv4 RFC 1918 configured.

  radvd does not start, neutron-l3-agent does not set up OVS VLAN
  forwarding between the network and compute nodes, and IPv4 is
  completely disconnected as well. Looks like complete L2 breakage.

  Need to get this one fixed before release of Kilo.

  Work around:

  chown root:neutron /usr/sbin/radvd
  chmod 2750 /usr/sbin/radvd

  radvd gives a message about not being able to create an IPv6 ICMP port
  in the neutron-l3-agent log, just like when run as a non-root user.

  Notice radvd is no longer executed via rootwrap/sudo, unlike all the
  other ip route/ip address/ip netns information-gathering commands. Was
  executing it in a privileged fashion missed in a Neutron code
  refactor?
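  The difference the report describes can be sketched as follows. This is
  an illustrative Python sketch, not neutron's actual agent code: the
  helper name, pid-file path, and rootwrap invocation are assumptions.
  The point is that without a root helper the radvd process inherits the
  unprivileged neutron user and cannot open the raw ICMPv6 socket it
  needs to send router advertisements.

  ```python
  # Hypothetical sketch of launching radvd for a router namespace, with and
  # without a root helper. Names and paths are illustrative only.
  import shlex


  def build_radvd_cmd(namespace, config_path, root_helper=None):
      """Return the argv used to launch radvd inside a network namespace."""
      cmd = ["ip", "netns", "exec", namespace,
             "radvd", "-C", config_path, "-p", "/tmp/radvd.pid"]
      if root_helper:
          # e.g. root_helper = "sudo neutron-rootwrap /etc/neutron/rootwrap.conf"
          # Prefixing the helper is what restores root privileges to radvd.
          cmd = shlex.split(root_helper) + cmd
      return cmd


  # Privileged launch, as the other agent commands are run:
  print(build_radvd_cmd("qrouter-1234", "/tmp/radvd.conf",
                        root_helper="sudo neutron-rootwrap "
                                    "/etc/neutron/rootwrap.conf"))
  # Unprivileged launch, as in the bug -- radvd starts as the neutron user:
  print(build_radvd_cmd("qrouter-1234", "/tmp/radvd.conf"))
  ```

  The chown/chmod workaround above achieves the same effect from the
  other direction, by making the radvd binary itself setgid-executable
  for the neutron group.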

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1448813/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449775] Re: Got server fault when set admin_state_up=false for health monitor

2015-06-22 Thread Christopher Aedo
** Project changed: app-catalog => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449775

Title:
  Got server fault when set admin_state_up=false for health monitor

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This happens in the tempest tests for health monitors: setting
  admin_state_up=false when creating or updating a health monitor
  produces the following failure:

  Traceback (most recent call last):
    File "/opt/stack/neutron-lbaas/neutron_lbaas/tests/tempest/v2/api/test_health_monitors_non_admin.py", line 504, in test_udpate_health_monitor_invalid_admin_state_up
      hm.get('id'), admin_state_up=False)
    File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 422, in assertRaises
      self.assertThat(our_callable, matcher)
    File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 435, in assertThat
      raise mismatch_error
  testtools.matchers._impl.MismatchError: <bound method type._update_health_monitor of <class 'test_health_monitors_non_admin.TestHealthMonitors'>> returned {u'admin_state_up': False, u'tenant_id': u'7e24ec89b7df4a7d8738d415b6ac8422', u'delay': 3, u'expected_codes': u'200', u'max_retries': 10, u'http_method': u'GET', u'timeout': 5, u'pools': [{u'id': u'1409120f-fde8-49fc-8db5-25dc3941f460'}], u'url_path': u'/', u'type': u'HTTP', u'id': u'5cee3bf8-d94e-42a4-ab30-b190c66f87de'}
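  The MismatchError means the test expected the update call to raise an
  exception (the server was supposed to reject admin_state_up=False), but
  the call succeeded and returned the updated monitor dict instead. A
  minimal sketch of that failure mode, with a fake API call and a
  simplified stand-in for the testtools assertRaises logic (both names
  are illustrative, not the real tempest client):

  ```python
  def fake_update_health_monitor(hm_id, **attrs):
      """Fake API call: the server accepts admin_state_up=False instead of
      rejecting it, so no exception is raised."""
      return {"id": hm_id, "admin_state_up": attrs.get("admin_state_up", True)}


  def expect_raises(callable_, *args, **kwargs):
      """Simplified stand-in for testtools' assertRaises matcher."""
      try:
          result = callable_(*args, **kwargs)
      except Exception:
          return  # expected path: the call raised
      # mirrors MismatchError: "<bound method ...> returned {...}"
      raise AssertionError("%r returned %r instead of raising"
                           % (callable_, result))


  try:
      expect_raises(fake_update_health_monitor,
                    "5cee3bf8", admin_state_up=False)
  except AssertionError as exc:
      print("test failed as in the bug:", exc)
  ```

  So the bug is on the server side: the API should either reject the
  value with a client error or the test's expectation is wrong.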

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449775/+subscriptions



[Yahoo-eng-team] [Bug 1448248] Re: Keystone Middleware Installation

2015-06-22 Thread Christopher Aedo
** Project changed: app-catalog => keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1448248

Title:
  Keystone Middleware Installation

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Hi,

  I was performing an OpenStack devstack Juno installation; I downloaded
  the scripts from GitHub and got a keystonemiddleware error:

  + install_keystonemiddleware
  + use_library_from_git keystonemiddleware
  + local name=keystonemiddleware
  + local enabled=1
  + [[ ,, =~ ,keystonemiddleware, ]]
  + return 1
  + pip_install_gr keystonemiddleware
  + local name=keystonemiddleware
  ++ get_from_global_requirements keystonemiddleware
  ++ local package=keystonemiddleware
  +++ cut -d# -f1
  +++ grep -h '^keystonemiddleware' 
/opt/stack/requirements/global-requirements.txt
  ++ local required_pkg=
  ++ [[ '' == '' ]]
  ++ die 1601 'Can'\''t find package keystonemiddleware in requirements'
  ++ local exitcode=0
  ++ set +o xtrace
  [ERROR] /home/stack/devstack/functions-common:1601 Can't find package 
keystonemiddleware in requirements
  + local 'clean_name=[Call Trace]
  ./stack.sh:781:install_keystonemiddleware
  /home/stack/devstack/lib/keystone:496:pip_install_gr
  /home/stack/devstack/functions-common:1535:get_from_global_requirements
  /home/stack/devstack/functions-common:1601:die'
  + pip_install '[Call' 'Trace]' ./stack.sh:781:install_keystonemiddleware 
/home/stack/devstack/lib/keystone:496:pip_install_gr 
/home/stack/devstack/functions-common:1535:get_from_global_requirements 
/home/stack/devstack/functions-common:1601:die
  ++ set +o
  ++ grep xtrace
  + local 'xtrace=set -o xtrace'
  + set +o xtrace
  + sudo -H PIP_DOWNLOAD_CACHE=/var/cache/pip http_proxy= https_proxy= 
no_proxy= /usr/local/bin/pip install '[Call' 'Trace]' 
./stack.sh:781:install_keystonemiddleware 
/home/stack/devstack/lib/keystone:496:pip_install_gr 
/home/stack/devstack/functions-common:1535:get_from_global_requirements 
/home/stack/devstack/functions-common:1601:die
  DEPRECATION: --download-cache has been deprecated and will be removed in the 
future. Pip now automatically uses and configures its cache.
  Exception:
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 246, in main
      status = self.run(options, args)
    File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 308, in run
      name, None, isolated=options.isolated_mode,
    File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 220, in from_line
      isolated=isolated)
    File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 79, in __init__
      req = pkg_resources.Requirement.parse(req)
    File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2960, in parse
      reqs = list(parse_requirements(s))
    File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2891, in parse_requirements
      raise ValueError("Missing distribution spec", line)
  ValueError: ('Missing distribution spec', '[Call')

  + exit_trap
  + local r=2
  ++ jobs -p
  + jobs=
  + [[ -n '' ]]
  + kill_spinner
  + '[' '!' -z '' ']'
  + [[ 2 -ne 0 ]]
  + echo 'Error on exit'
  Error on exit
  + [[ -z '' ]]
  + /home/stack/devstack/tools/worlddump.py
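  What the xtrace shows is a two-stage failure: the grep over
  global-requirements.txt finds no keystonemiddleware line, so
  get_from_global_requirements returns an empty string, die() prints a
  call trace, and that multi-word trace text is then word-split and
  handed to pip as if each word were a package name ('[Call', 'Trace]',
  ...), which pip rejects with "Missing distribution spec". A small
  Python sketch of that chain (the file contents and trace text below
  are illustrative, not the real devstack internals):

  ```python
  # Stand-in for global-requirements.txt that lacks a keystonemiddleware line.
  requirements = ["oslo.config>=1.9.3", "python-keystoneclient>=1.3.0"]

  # Equivalent of: grep -h '^keystonemiddleware' ... | cut -d'#' -f1
  matches = [line.split('#')[0] for line in requirements
             if line.startswith("keystonemiddleware")]
  required_pkg = matches[0] if matches else ""

  if not required_pkg:
      # die() emits a call trace instead of a package spec...
      trace = "[Call Trace] ./stack.sh:781:install_keystonemiddleware"
      # ...and shell word-splitting turns it into separate pip arguments:
      pip_args = trace.split()
      print(pip_args)
      # -> ['[Call', 'Trace]', './stack.sh:781:install_keystonemiddleware']
  ```

  So the underlying problem is simply that the checked-out
  /opt/stack/requirements branch does not list keystonemiddleware; the
  confusing pip error is only fallout from die()'s output being captured
  by the caller.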

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1448248/+subscriptions



[Yahoo-eng-team] [Bug 1449492] Re: Cinder not working with IPv6 ISCSI

2015-06-22 Thread Christopher Aedo
** Project changed: app-catalog => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1449492

Title:
  Cinder not working with IPv6 ISCSI

Status in OpenStack Compute (Nova):
  New

Bug description:
  Testing configuring OpenStack completely with IPv6.

  Found that IP address parsing broke in a lot of cases because of the
  need to have '[]' encasing the address (or not) for use with URLs, and
  because of the parsing done by some third-party user-space C binaries,
  iscsiadm for example. Most of the others are best handled by putting a
  name for the IPv6 address in the /etc/hosts file; with iSCSI, though,
  that is not possible.

  Got Cinder working by setting iscsi_ip_address (in
  /etc/cinder/cinder.conf) to '[$my_ip]', where my_ip is an IPv6 address
  like 2001:db08::1 (not the RFC documentation prefix?), and changing one
  line of Python in the nova virt/libvirt/volume.py code:

  
  --- nova/virt/libvirt/volume.py.orig    2015-04-27 23:00:00.208075644 +1200
  +++ nova/virt/libvirt/volume.py 2015-04-27 22:38:08.938643636 +1200
  @@ -833,7 +833,7 @@
       def _get_host_device(self, transport_properties):
           """Find device path in devtemfs."""
           device = ("ip-%s-iscsi-%s-lun-%s" %
  -                  (transport_properties['target_portal'],
  +                  (transport_properties['target_portal'].replace('[', '').replace(']', ''),
                      transport_properties['target_iqn'],
                      transport_properties.get('target_lun', 0)))
       if self._get_transport() == "default":

  Nova-compute was looking for '/dev/disk/by-path/ip-[2001:db08::1]:3260
  -iscsi-iqn.2010-10.org.openstack:*' while the udev-generated path
  contains no '[]'.
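  The effect of the patch can be shown in isolation. This sketch mirrors
  the one-line change above in a standalone function (the function name
  and the full /dev/disk/by-path prefix are illustrative): strip the
  URL-style '[]' wrapping from the IPv6 target_portal before building
  the device name, since udev writes the path without brackets.

  ```python
  # Sketch of the patched path construction from the diff above:
  # remove '[' and ']' from the portal so the generated by-path name
  # matches what udev actually creates for an IPv6 target.
  def get_host_device(transport_properties):
      portal = (transport_properties['target_portal']
                .replace('[', '').replace(']', ''))
      return "/dev/disk/by-path/ip-%s-iscsi-%s-lun-%s" % (
          portal,
          transport_properties['target_iqn'],
          transport_properties.get('target_lun', 0))


  props = {'target_portal': '[2001:db08::1]:3260',
           'target_iqn': 'iqn.2010-10.org.openstack:volume-1'}
  print(get_host_device(props))
  # -> /dev/disk/by-path/ip-2001:db08::1:3260-iscsi-iqn.2010-10.org.openstack:volume-1-lun-0
  ```

  Note the bracketed form still has to be passed to iscsiadm itself;
  only the udev by-path lookup needs the bare address.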

  This one can't be worked around via the /etc/hosts file: iscsiadm and
  tgt need the IPv6 address wrapped in '[]', and iscsiadm uses it in
  output. The above patch could be matched with a bit in the cinder code
  that puts '[]' around iscsi_ip_address if it is not supplied.

  More work is obviously needed on a convention for writing IPv6
  addresses in the OpenStack configuration files, and there will be a
  lot of places where code will need to be tweaked.

  Let's start by fixing this blooper/low-hanging one first though, as it
  makes it possible to get Cinder working in a pure IPv6 environment.
  The above may be a bit of a hack, but only one code path needs
  adjustment...

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1449492/+subscriptions
