[Yahoo-eng-team] [Bug 2045415] Re: ovn-octavia-provider lacks a sync script like Neutron

2023-12-01 Thread Michael Johnson
Marking the Octavia project as invalid. The OVN provider is a neutron
project and not under the Octavia team.

** Changed in: octavia
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2045415

Title:
  ovn-octavia-provider lacks a sync script like Neutron

Status in neutron:
  New
Status in octavia:
  Invalid

Bug description:
  Neutron has neutron-ovn-db-sync-util, but the Octavia ovn-octavia-provider
has no equivalent. So, in case of discrepancies (e.g. OVN NB DB entries were
removed manually, or the whole database was re-provisioned), there is no way
to bring Octavia and the OVN NB DB back in sync.
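
  Purely to illustrate what such a sync script would need to do (this tool
does not exist today, and the names below are made up), a toy reconciliation
pass could look like this:

# Toy sketch only: assumes we can obtain the set of load balancer IDs known
# to Octavia and the set present in the OVN Northbound DB; a real sync
# script would then recreate the missing NB entries (or clean up the
# orphans) rather than just reporting them.
def find_out_of_sync(octavia_lb_ids, ovn_nb_lb_ids):
    octavia, ovn = set(octavia_lb_ids), set(ovn_nb_lb_ids)
    return {
        'missing_in_ovn': octavia - ovn,    # e.g. NB DB wiped or re-provisioned
        'orphaned_in_ovn': ovn - octavia,   # stale NB entries with no owner
    }


print(find_out_of_sync(['lb-1', 'lb-2'], ['lb-2', 'lb-3']))
# {'missing_in_ovn': {'lb-1'}, 'orphaned_in_ovn': {'lb-3'}}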

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2045415/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1990987] [NEW] keystone-manage segmentation fault on CentOS 9 Stream

2022-09-27 Thread Michael Johnson
Public bug reported:

When running wallaby devstack on a fresh build of CentOS 9 Stream,
keystone-manage causes a segmentation fault and stops the install.

python3-3.9.13-3.el9.x86_64

commit a9e81626c5e9dac897759c5f66c7ae1b4efa3c6d (HEAD -> stable/wallaby, 
origin/stable/wallaby)
Merge: 5633be211f edb8bcb029
Author: Zuul 
Date:   Wed Sep 7 02:21:04 2022 +

Merge "reenable greendns in nova." into stable/wallaby

[16313.919417] keystone-manage[105312]: segfault at 7bc20d57dec9 ip 
7fc20d351679 sp 7fff8cdba3f0 error 4 in 
libpython3.9.so.1.0[7fc20d2a4000+1b5000]
[16313.919431] Code: 83 ec 08 48 8b 5f 10 48 83 eb 01 78 2c 4d 39 f4 75 3f 0f 
1f 80 00 00 00 00 49 8b 47 18 48 8b 2c d8 48 85 ed 74 e1 48 8b 55 08  82 a9 
00 00 00 40 75 3e 48 83 eb 01 73 e0 31 c0 48 83 c4 08 5b

/opt/stack/devstack/lib/keystone: line 575: 105312 Segmentation fault
(core dumped) $KEYSTONE_BIN_DIR/keystone-manage bootstrap --bootstrap-
username admin --bootstrap-password "$ADMIN_PASSWORD" --bootstrap-
project-name admin --bootstrap-role-name admin --bootstrap-service-name
keystone --bootstrap-region-id "$REGION_NAME" --bootstrap-admin-url
"$KEYSTONE_AUTH_URI" --bootstrap-public-url "$KEYSTONE_SERVICE_URI"

** Affects: keystone
 Importance: Undecided
 Status: New

** Project changed: nova => keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1990987

Title:
  keystone-manage segmentation fault on CentOS 9 Stream

Status in OpenStack Identity (keystone):
  New

Bug description:
  When running wallaby devstack on a fresh build of CentOS 9 Stream,
  keystone-manage causes a segmentation fault and stops the install.

  python3-3.9.13-3.el9.x86_64

  commit a9e81626c5e9dac897759c5f66c7ae1b4efa3c6d (HEAD -> stable/wallaby, 
origin/stable/wallaby)
  Merge: 5633be211f edb8bcb029
  Author: Zuul 
  Date:   Wed Sep 7 02:21:04 2022 +

  Merge "reenable greendns in nova." into stable/wallaby

  [16313.919417] keystone-manage[105312]: segfault at 7bc20d57dec9 ip 
7fc20d351679 sp 7fff8cdba3f0 error 4 in 
libpython3.9.so.1.0[7fc20d2a4000+1b5000]
  [16313.919431] Code: 83 ec 08 48 8b 5f 10 48 83 eb 01 78 2c 4d 39 f4 75 3f 0f 
1f 80 00 00 00 00 49 8b 47 18 48 8b 2c d8 48 85 ed 74 e1 48 8b 55 08  82 a9 
00 00 00 40 75 3e 48 83 eb 01 73 e0 31 c0 48 83 c4 08 5b

  /opt/stack/devstack/lib/keystone: line 575: 105312 Segmentation fault
  (core dumped) $KEYSTONE_BIN_DIR/keystone-manage bootstrap --bootstrap-
  username admin --bootstrap-password "$ADMIN_PASSWORD" --bootstrap-
  project-name admin --bootstrap-role-name admin --bootstrap-service-
  name keystone --bootstrap-region-id "$REGION_NAME" --bootstrap-admin-
  url "$KEYSTONE_AUTH_URI" --bootstrap-public-url
  "$KEYSTONE_SERVICE_URI"

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1990987/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1977524] Re: Wrong redirect after deleting zone from Zone Overview pane

2022-06-08 Thread Michael Johnson
** Also affects: designate-dashboard
   Importance: Undecided
   Status: New

** No longer affects: designate

** Changed in: designate-dashboard
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1977524

Title:
  Wrong redirect after deleting zone from Zone Overview pane

Status in Designate Dashboard:
  Confirmed
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When deleting a zone from Zones -> specific zone -> Overview pane, I get a
"page does not exist" error.
  After the notification that the zone is being removed, the site redirects to
/dashboard/dashboard/project/dnszones, which contains a duplicated "dashboard"
path segment.
  When deleting from the zones list view everything works fine.


  Tested on an Ussuri environment, but the code seems to be unchanged in newer
releases.
  I've tried to apply the bugfixes for reloading the zones/floating-ip panes,
but with no effect in this case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate-dashboard/+bug/1977524/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1977524] Re: Wrong redirect after deleting zone from Zone Overview pane

2022-06-03 Thread Michael Johnson
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1977524

Title:
  Wrong redirect after deleting zone from Zone Overview pane

Status in Designate:
  New
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When deleting a zone from Zones -> specific zone -> Overview pane, I get a
"page does not exist" error.
  After the notification that the zone is being removed, the site redirects to
/dashboard/dashboard/project/dnszones, which contains a duplicated "dashboard"
path segment.
  When deleting from the zones list view everything works fine.


  Tested on an Ussuri environment, but the code seems to be unchanged in newer
releases.
  I've tried to apply the bugfixes for reloading the zones/floating-ip panes,
but with no effect in this case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/1977524/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected), the argument order is wrong

2022-05-03 Thread Michael Johnson
** Changed in: python-designateclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected), the argument order is
  wrong

Status in Astara:
  Fix Released
Status in Bandit:
  Fix Released
Status in Barbican:
  Fix Released
Status in Blazar:
  Fix Released
Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in daisycloud-core:
  New
Status in Designate:
  Fix Released
Status in OpenStack Backup/Restore and DR (Freezer):
  In Progress
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in Higgins:
  New
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  Fix Released
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in networking-calico:
  Fix Released
Status in networking-infoblox:
  In Progress
Status in networking-l2gw:
  Fix Released
Status in networking-sfc:
  Fix Released
Status in quark:
  In Progress
Status in OpenStack Compute (nova):
  Won't Fix
Status in os-brick:
  Fix Released
Status in PBR:
  Fix Released
Status in pycadf:
  Fix Released
Status in python-barbicanclient:
  Fix Released
Status in python-ceilometerclient:
  Invalid
Status in python-cinderclient:
  Fix Released
Status in python-designateclient:
  Fix Released
Status in Glance Client:
  Fix Released
Status in python-mistralclient:
  Fix Released
Status in python-solumclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Rally:
  In Progress
Status in Sahara:
  Fix Released
Status in Solum:
  Fix Released
Status in sqlalchemy-migrate:
  In Progress
Status in SWIFT:
  In Progress
Status in tacker:
  Fix Released
Status in tempest:
  Invalid
Status in zaqar:
  Fix Released

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.
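
  As a concrete illustration of why the order matters (plain unittest here;
testtools, which OpenStack test suites use, documents the signature as
assertEqual(expected, observed) and reports the first argument as the
reference value in failure output):

import unittest


class ArgumentOrderExample(unittest.TestCase):
    def test_status(self):
        observed = 'ACTIVE'                   # value produced by the code under test
        self.assertEqual('ACTIVE', observed)  # correct: expected value first
        # self.assertEqual(observed, 'ACTIVE')  # the reversed order this bug
        # fixes; on failure it would report the observed value as if it were
        # the expectation, which is the confusing message described above.


if __name__ == '__main__':
    unittest.main()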

To manage notifications about this bug go to:
https://bugs.launchpad.net/astara/+bug/1259292/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1970679] [NEW] neutron-tempest-plugin-designate-scenario cross project job is failing on OVN

2022-04-27 Thread Michael Johnson
Public bug reported:

The cross-project neutron-tempest-plugin-designate-scenario job is
failing during the Designate gate runs due to an OVN failure.

+ lib/neutron_plugins/ovn_agent:start_ovn:698 :   wait_for_sock_file 
/var/run/openvswitch/ovnnb_db.sock
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:173 :   local count=0
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=1
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 1 -gt 5 ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=2
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 2 -gt 5 ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=3
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 3 -gt 5 ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=4
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 4 -gt 5 ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=5
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 5 -gt 5 ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=6
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 6 -gt 5 ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:178 :   die 178 'Socket 
/var/run/openvswitch/ovnnb_db.sock not found'
+ functions-common:die:264 :   local exitcode=0
[Call Trace]
./stack.sh:1284:start_ovn_services
/opt/stack/devstack/lib/neutron-legacy:516:start_ovn
/opt/stack/devstack/lib/neutron_plugins/ovn_agent:698:wait_for_sock_file
/opt/stack/devstack/lib/neutron_plugins/ovn_agent:178:die
[ERROR] /opt/stack/devstack/lib/neutron_plugins/ovn_agent:178 Socket 
/var/run/openvswitch/ovnnb_db.sock not found
exit_trap: cleaning up child processes

An example job run is here:
https://zuul.opendev.org/t/openstack/build/b014e50e018d426b9367fd3219ed489e
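
For clarity, the devstack helper failing above boils down to the following
(a rough Python rendering of wait_for_sock_file, shown only to make the
roughly six-second give-up visible; the real function is bash):

import os
import stat
import sys
import time


def wait_for_sock_file(path, attempts=5, delay=1.0):
    # Poll for the unix socket to appear; give up (and fail the stack run)
    # once the counter exceeds the attempt limit, i.e. after roughly
    # attempts + 1 seconds. In the failing job, ovnnb_db.sock never appears.
    count = 0
    while not (os.path.exists(path) and stat.S_ISSOCK(os.stat(path).st_mode)):
        time.sleep(delay)
        count += 1
        if count > attempts:
            sys.exit('Socket %s not found' % path)


# wait_for_sock_file('/var/run/openvswitch/ovnnb_db.sock')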

** Affects: neutron
 Importance: Critical
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1970679

Title:
  neutron-tempest-plugin-designate-scenario cross project job is failing
  on OVN

Status in neutron:
  New

Bug description:
  The cross-project neutron-tempest-plugin-designate-scenario job is
  failing during the Designate gate runs due to an OVN failure.

  + lib/neutron_plugins/ovn_agent:start_ovn:698 :   wait_for_sock_file 
/var/run/openvswitch/ovnnb_db.sock
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:173 :   local count=0
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 1 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=2
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 2 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=3
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 3 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=4
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 4 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 

[Yahoo-eng-team] [Bug 1902276] [NEW] libvirtd going into a tight loop causing instances to not transition to ACTIVE

2020-10-30 Thread Michael Johnson
Public bug reported:

Description
===
This is the current master branch (Wallaby) of OpenStack.

We have seen this regularly, but it is intermittent.

We are seeing nova instances that do not transition to ACTIVE inside
five minutes. Investigating this led us to find that libvirtd seems to
be going into a tight loop on an instance delete.

The 136MB log is here:
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_c77/759973/3/check/octavia-v2
-dsvm-scenario/c77fe63/controller/logs/libvirt/libvirtd_log.txt

The overall job logs are here: 
https://zuul.opendev.org/t/openstack/build/c77fe63a94ef4298872ad5f40c5df7d4/logs

When running the Octavia scenario test suite, we occasionally see nova
instances fail to become ACTIVE in a timely manner, causing timeouts and
failures. In investigating this issue we found the libvirtd log was
136MB.

Most of the file is full of this repeating:
2020-10-28 23:45:06.330+: 20852: debug : qemuMonitorIO:767 : Error on 
monitor internal error: End of file from qemu monitor
2020-10-28 23:45:06.330+: 20852: debug : qemuMonitorIO:788 : Triggering EOF 
callback
2020-10-28 23:45:06.330+: 20852: debug : qemuProcessHandleMonitorEOF:301 : 
Received EOF on 0x7f6278014ca0 'instance-0001'
2020-10-28 23:45:06.330+: 20852: debug : qemuProcessHandleMonitorEOF:305 : 
Domain is being destroyed, EOF is expected

Here is a snippet for the lead in to the repeated lines:
http://paste.openstack.org/show/799559/

It appears to be a tight loop, repeating many times per second.

Eventually it does stop and things seem to go back to normal in nova.

Here is the snippet of the end of the loop in the log:
http://paste.openstack.org/show/799560/

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1902276

Title:
  libvirtd going into a tight loop causing instances to not transition
  to ACTIVE

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  This is the current master branch (Wallaby) of OpenStack.

  We have seen this regularly, but it is intermittent.

  We are seeing nova instances that do not transition to ACTIVE inside
  five minutes. Investigating this led us to find that libvirtd seems to
  be going into a tight loop on an instance delete.

  The 136MB log is here:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_c77/759973/3/check/octavia-v2
  -dsvm-scenario/c77fe63/controller/logs/libvirt/libvirtd_log.txt

  The overall job logs are here: 
  
https://zuul.opendev.org/t/openstack/build/c77fe63a94ef4298872ad5f40c5df7d4/logs

  When running the Octavia scenario test suite, we occasionally see nova
  instances fail to become ACTIVE in a timely manner, causing timeouts
  and failures. In investigating this issue we found the libvirtd log
  was 136MB.

  Most of the file is full of this repeating:
  2020-10-28 23:45:06.330+: 20852: debug : qemuMonitorIO:767 : Error on 
monitor internal error: End of file from qemu monitor
  2020-10-28 23:45:06.330+: 20852: debug : qemuMonitorIO:788 : Triggering 
EOF callback
  2020-10-28 23:45:06.330+: 20852: debug : qemuProcessHandleMonitorEOF:301 
: Received EOF on 0x7f6278014ca0 'instance-0001'
  2020-10-28 23:45:06.330+: 20852: debug : qemuProcessHandleMonitorEOF:305 
: Domain is being destroyed, EOF is expected

  Here is a snippet for the lead in to the repeated lines:
  http://paste.openstack.org/show/799559/

  It appears to be a tight loop, repeating many times per second.

  Eventually it does stop and things seem to go back to normal in nova.

  Here is the snippet of the end of the loop in the log:
  http://paste.openstack.org/show/799560/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1902276/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1894136] [NEW] [OVN Octavia Provider] OVN provider fails during listener delete

2020-09-03 Thread Michael Johnson
Public bug reported:

The OVN provider is consistently failing during a listener delete as
part of the member API tempest test tear down with a 'filedescriptor out
of range in select()' error.
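
For context on the error itself (independent of OVN or Octavia): CPython's
select.select() refuses any file descriptor numbered at or above FD_SETSIZE
(1024 on typical Linux builds), so a process holding enough open connections
can hit exactly this ValueError. A minimal standalone reproduction, assuming
the hard open-file limit allows raising the soft limit past 1024:

import resource
import select
import socket

# Raise the soft RLIMIT_NOFILE so we can actually obtain fd numbers >= 1024
# (illustration only; assumes the hard limit permits it).
_soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (min(4096, hard), hard))

socks = [socket.socket() for _ in range(1100)]  # push fd numbers past 1024
try:
    select.select([socks[-1]], [], [], 0)
except ValueError as exc:
    print(exc)  # filedescriptor out of range in select()
finally:
    for s in socks:
        s.close()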

o-api logs snippet:

Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn 
[None req-9201aee8-9a5b-460c-bf8b-c6408d20aec7 tempest-MemberAPITest-903346660 
tempest-MemberAPITest-903346660] OVS database connection to OVN_Northbound 
failed with error: 'filedescriptor out of range in select()'. Verify that the 
OVS and OVN services are available and that the 'ovn_nb_connection' and 
'ovn_sb_connection' configuration options are correct.: ValueError: 
filedescriptor out of range in select()
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn 
Traceback (most recent call last):
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn   
File 
"/opt/stack/ovn-octavia-provider/ovn_octavia_provider/ovsdb/impl_idl_ovn.py", 
line 61, in start_connection
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn
 self.ovsdb_connection.start()
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn   
File 
"/usr/local/lib/python3.6/dist-packages/ovsdbapp/backend/ovs_idl/connection.py",
 line 79, in start
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn
 idlutils.wait_for_change(self.idl, self.timeout)
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn   
File 
"/usr/local/lib/python3.6/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", 
line 201, in wait_for_change
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn
 ovs_poller.block()
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn   
File "/usr/local/lib/python3.6/dist-packages/ovs/poller.py", line 231, in block
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn
 events = self.poll.poll(self.timeout)
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn   
File "/usr/local/lib/python3.6/dist-packages/ovs/poller.py", line 140, in poll
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn
 timeout)
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn 
ValueError: filedescriptor out of range in select()
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn 
Sep 03 15:44:05.172746 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR octavia.api.drivers.driver_factory [None 
req-9201aee8-9a5b-460c-bf8b-c6408d20aec7 tempest-MemberAPITest-903346660 
tempest-MemberAPITest-903346660] Unable to load provider driver ovn due to: OVS 
database connection to OVN_Northbound failed with error: 'filedescriptor out of 
range in select()'. Verify that the OVS and OVN services are available and that 
the 'ovn_nb_connection' and 'ovn_sb_connection' configuration options are 
correct.: ovn_octavia_provider.ovsdb.impl_idl_ovn.OvsdbConnectionUnavailable: 
OVS database connection to OVN_Northbound failed with error: 'filedescriptor 
out of range in select()'. Verify that the OVS and OVN services are available 
and that the 'ovn_nb_connection' and 'ovn_sb_connection' configuration options 
are correct.
Sep 03 15:44:05.175074 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR wsme.api [None 
req-9201aee8-9a5b-460c-bf8b-c6408d20aec7 tempest-MemberAPITest-903346660 
tempest-MemberAPITest-903346660] Server-side error: "Provider 'ovn' was not 
found.". Detail:
Sep 03 15:44:05.175074 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: Traceback (most recent call last):
Sep 03 15:44:05.175074 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]:   File 
"/opt/stack/ovn-octavia-provider/ovn_octavia_provider/ovsdb/impl_idl_ovn.py", 
line 61, in start_connection
Sep 03 15:44:05.175074 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: self.ovsdb_connection.start()
Sep 03 15:44:05.175074 ubuntu-bionic-inap-mtl01-0019624710 

[Yahoo-eng-team] [Bug 1886116] [NEW] slaac no longer works on IPv6 tenant subnets

2020-07-02 Thread Michael Johnson
Public bug reported:

Nova instances no longer get an IPv6 address using slaac on tenant
subnets.

Using a standard devstack install with "SERVICE_IP_VERSION="6"" added,
master (Victoria).

[ml2]
tenant_network_types = vxlan
extension_drivers = port_security
mechanism_drivers = openvswitch,linuxbridge


network:
+---+--+
| Field | Value|
+---+--+
| admin_state_up| UP   |
| availability_zone_hints   |  |
| availability_zones| nova |
| created_at| 2020-07-02T22:55:51Z |
| description   |  |
| dns_domain| None |
| id| e8258754-6a0b-40ea-abf6-c55b39845f62 |
| ipv4_address_scope| None |
| ipv6_address_scope| None |
| is_default| None |
| is_vlan_transparent   | None |
| location  | cloud='', project.domain_id='default',   |
|   | project.domain_name=,|
|   | project.id='08c84a34e4c34dacb3abbfe840edf6e3',   |
|   | project.name='admin', region_name='RegionOne',   |
|   | zone=|
| mtu   | 1450 |
| name  | lb-mgmt-net  |
| port_security_enabled | True |
| project_id| 08c84a34e4c34dacb3abbfe840edf6e3 |
| provider:network_type | vxlan|
| provider:physical_network | None |
| provider:segmentation_id  | 2|
| qos_policy_id | None |
| revision_number   | 2|
| router:external   | Internal |
| segments  | None |
| shared| False|
| status| ACTIVE   |
| subnets   | 2f17a970-09b1-410d-89de-c75b1e5f6eef |
| tags  |  |
| updated_at| 2020-07-02T22:55:52Z |
+---+--+

Subnet:
+--+---+
| Field| Value |
+--+---+
| allocation_pools | fd00:0:0:42::2-fd00::42::::   |
| cidr | fd00:0:0:42::/64  |
| created_at   | 2020-07-02T22:55:52Z  |
| description  |   |
| dns_nameservers  |   |
| dns_publish_fixed_ip | None  |
| enable_dhcp  | True  |
| gateway_ip   | fd00:0:0:42:: |
| host_routes  |   |
| id   | 2f17a970-09b1-410d-89de-c75b1e5f6eef  |
| ip_version   | 6 |
| ipv6_address_mode| slaac |
| ipv6_ra_mode | slaac |
| location | cloud='', project.domain_id='default',|
|  | project.domain_name=, |
|  | project.id='08c84a34e4c34dacb3abbfe840edf6e3',|
|  | project.name='admin', region_name='RegionOne', zone=  |
| name | lb-mgmt-subnet|
| network_id   | 

[Yahoo-eng-team] [Bug 1871239] [NEW] ovn-octavia-provider is not using load balancing algorithm source-ip-port

2020-04-06 Thread Michael Johnson
Public bug reported:

When using the ovn-octavia-provider, OVN is not honoring the
SOURCE_IP_PORT pool load balancing algorithm. The ovn-octavia-provider
only supports the SOURCE_IP_PORT load balancing algorithm.

The following test was created for the SOURCE_IP_PORT algorithm in tempest:
octavia_tempest_plugin.tests.scenario.v2.test_traffic_ops.TrafficOperationsScenarioTest.test_source_ip_port_tcp_traffic

Available in this patch: https://review.opendev.org/#/c/714004/

The test run shows that OVN is randomly distributing the connections
from the same source IP and port across the backend member servers. One
server is configured to return '1' and the other '5'.

Loadbalancer response totals: {'1': 12, '5': 8}

Instead, it should be seeing a result of:

Loadbalancer response totals: {'1': 20}

The attached files provide:

ovn-provider.pcap -- A pcap file capturing the test run.
ovn-tempest-output.txt -- The tempest console output.
tempest.log -- The tempest framework log from the test run.
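
To make the expectation concrete, here is a toy sketch of what source-ip-port
affinity is supposed to guarantee (this is not OVN's actual hashing): the same
client address/port pair always maps to the same member, which is why all 20
responses should come from a single server:

import hashlib


def pick_member(src_ip, src_port, members):
    # Deterministic choice keyed only on the client's source IP and port,
    # so repeated connections from the same source land on the same member.
    key = ('%s:%s' % (src_ip, src_port)).encode()
    return members[int(hashlib.sha256(key).hexdigest(), 16) % len(members)]


members = ['server-returning-1', 'server-returning-5']
picks = {pick_member('192.0.2.10', 40000, members) for _ in range(20)}
print(picks)  # a single member -- the {'1': 20} distribution expected above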

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1871239

Title:
  ovn-octavia-provider is not using load balancing algorithm source-ip-
  port

Status in neutron:
  New

Bug description:
  When using the ovn-octavia-provider, OVN is not honoring the
  SOURCE_IP_PORT pool load balancing algorithm. The ovn-octavia-provider
  only supports the SOURCE_IP_PORT load balancing algorithm.

  The following test was created for the SOURCE_IP_PORT algorithm in tempest:
  
octavia_tempest_plugin.tests.scenario.v2.test_traffic_ops.TrafficOperationsScenarioTest.test_source_ip_port_tcp_traffic

  Available in this patch: https://review.opendev.org/#/c/714004/

  The test run shows that OVN is randomly distributing the connections
  from the same source IP and port across the backend member servers.
  One server is configured to return '1' and the other '5'.

  Loadbalancer response totals: {'1': 12, '5': 8}

  Instead, it should be seeing a result of:

  Loadbalancer response totals: {'1': 20}

  The attached files provide:

  ovn-provider.pcap -- A pcap file capturing the test run.
  ovn-tempest-output.txt -- The tempest console output.
  tempest.log -- The tempest framework log from the test run.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1871239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1870002] Re: The operating_status value of loadbalancer is abnormal

2020-04-01 Thread Michael Johnson
Octavia tracks bugs and RFEs in the new OpenStack Storyboard and not launchpad.
https://storyboard.openstack.org/#!/project/openstack/octavia
Please open your bug in Storyboard for the Octavia team.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1870002

Title:
  The operating_status value of loadbalancer is abnormal

Status in neutron:
  Invalid

Bug description:
  Summary of problems:
  1. One load balancer contains multiple pools and listeners; as long as the
operating_status of any one pool is ERROR, the operating_status of the load
balancer is ERROR.
  2. The operating_status of the listener is inconsistent with that of the
pool and the load balancer.

  1. Loadbalancer contains multiple pools and listeners:

  openstack loadbalancer show a6c134fa-eb05-47e3-b760-ae5ca7117996
  +-+--+
  | Field   | Value|
  +-+--+
  | admin_state_up  | True |
  | created_at  | 2020-03-23T03:36:15  |
  | description |  |
  | flavor_id   | a3de1882-8ace-4df7-9979-ce11153f912c |
  | id  | a6c134fa-eb05-47e3-b760-ae5ca7117996 |
  | listeners   | c0759d56-4eb1-4404-a805-ff196c8fa9ad |
  | | f6b8eee5-2a0a-4ff3-b339-be9489511f1c |
  | | 0fbf23f7-5b3d-48d8-b417-e7b770fb949f |
  | | 4b067982-2cb2-47cc-856b-ab65307f2ba5 |
  | name| gengjie-lvs  |
  | operating_status| ERROR|
  | pools   | 3ba5de47-3276-4687-aa27-9344d348cdda |
  | | 407eaff9-b90e-4cde-a254-04f3047b270f |
  | | 73edd2f9-78ea-4cd6-a20f-d02664dd4654 |
  | | bf07f027-9793-44e4-b307-495b3273a1ae |
  | | d479dba7-a7d2-4631-8eb0-0300800708a2 |
  | project_id  | bb0cfc0da1cd46f09ac3c3ed9781c6a8 |
  | provider| amphora  |
  | provisioning_status | ACTIVE   |
  | updated_at  | 2020-04-01T02:07:43  |
  | vip_address | 192.168.0.170|
  | vip_network_id  | 3d22ec75-5b4e-43d7-86bd-480d07c0784b |
  | vip_port_id | 518304bc-41d3-4ac6-bc5a-328c5c2a0674 |
  | vip_qos_policy_id   | None |
  | vip_subnet_id   | 2f55d6f6-ba8b-4390-8679-9338f94afe3e |
  +-+--+
  2. As long as the operating_status of a pool is ERROR, the
operating_status of the load balancer is ERROR

  openstack loadbalancer pool show 3ba5de47-3276-4687-aa27-9344d348cdda
  +--+--+
  | Field| Value|
  +--+--+
  | admin_state_up   | True |
  | created_at   | 2020-03-27T08:32:41  |
  | description  |  |
  | healthmonitor_id | d6a78953-a5a5-49dd-b780-e28c6bf9f16e |
  | id   | 3ba5de47-3276-4687-aa27-9344d348cdda |
  | lb_algorithm | LEAST_CONNECTIONS|
  | listeners| c0759d56-4eb1-4404-a805-ff196c8fa9ad |
  | loadbalancers| a6c134fa-eb05-47e3-b760-ae5ca7117996 |
  | members  | 5cff4fa5-39c0-4f8b-8c9c-bfb53ea7d028 |
  | name | ysy-test-01  |
  | operating_status | ERROR|
  | project_id   | bb0cfc0da1cd46f09ac3c3ed9781c6a8 |
  | protocol | HTTP |
  | provisioning_status  | ACTIVE   |
  | session_persistence  | None |
  | updated_at   | 2020-03-31T11:56:30  |
  | tls_container_ref| None |
  | ca_tls_container_ref | None |
  | crl_container_ref| None |
  | tls_enabled  | False|
  +--+--+
  openstack loadbalancer pool show 407eaff9-b90e-4cde-a254-04f3047b270f
  +--+--+
  | Field| Value|
  +--+--+
  | admin_state_up   | True |
  | created_at   | 2020-03-30T07:18:09   

[Yahoo-eng-team] [Bug 1863190] [NEW] Server group anti-affinity no longer works

2020-02-13 Thread Michael Johnson
Public bug reported:

Server group anti-affinity is no longer working, at least in the simple
case. I am able to boot two VMs in an anti-affinity server group on a
devstack that has only one compute node. Previously this would fail
and/or require soft-anti-affinity to be enabled.

$ openstack host list
+---+---+--+
| Host Name | Service   | Zone |
+---+---+--+
| devstack2 | scheduler | internal |
| devstack2 | conductor | internal |
| devstack2 | conductor | internal |
| devstack2 | compute   | nova |
+---+---+--+

$ openstack compute service list
+++---+--+-+---++
| ID | Binary | Host  | Zone | Status  | State | Updated At 
|
+++---+--+-+---++
|  3 | nova-scheduler | devstack2 | internal | enabled | up| 
2020-02-14T00:59:15.00 |
|  6 | nova-conductor | devstack2 | internal | enabled | up| 
2020-02-14T00:59:16.00 |
|  1 | nova-conductor | devstack2 | internal | enabled | up| 
2020-02-14T00:59:19.00 |
|  3 | nova-compute   | devstack2 | nova | enabled | up| 
2020-02-14T00:59:17.00 |
+++---+--+-+---++

$ openstack server list
+--+--++---+-++
| ID   | Name   
  | Status | Networks  | Image  
 | Flavor |
+--+--++---+-++
| a44febef-330c-4db5-b220-959cbbff8f8c | 
amphora-1bc97ddb-80da-446a-bce3-0c867c1fc258 | ACTIVE | 
lb-mgmt-net=192.168.0.58; public=172.24.4.200 | amphora-x64-haproxy | 
m1.amphora |
| de776347-0cf4-47d5-bb37-17fb37d79f2e | 
amphora-433abe98-fd8e-4e4f-ac11-4f76bbfc7aba | ACTIVE | 
lb-mgmt-net=192.168.0.199; public=172.24.4.11 | amphora-x64-haproxy | 
m1.amphora |
+--+--++---+-++

$ openstack server group show ddbc8544-c664-4da4-8fd8-32f6bd01e960
+--++
| Field| Value  
|
+--++
| id   | ddbc8544-c664-4da4-8fd8-32f6bd01e960   
|
| members  | a44febef-330c-4db5-b220-959cbbff8f8c, 
de776347-0cf4-47d5-bb37-17fb37d79f2e |
| name | octavia-lb-cc40d031-6ce9-475f-81b4-0a6096178834
|
| policies | anti-affinity  
|
+--++

Steps to reproduce:
1. Boot a devstack.
2. Create an anti-affinity server group.
3. Boot two VMs in that server group.

Expected Behavior:

The second VM boot should fail with an error similar to "not enough
hosts"

Actual Behavior:

The second VM boots with no error, and the two instances in the server
group end up on the same host.

Environment:
Nova version (current Ussuri): 0d3aeb0287a0619695c9b9e17c2dec49099876a5
commit 0d3aeb0287a0619695c9b9e17c2dec49099876a5 (HEAD -> master, origin/master, 
origin/HEAD)
Merge: 1fcd74730d 65825ebfbd
Author: Zuul 
Date:   Thu Feb 13 14:25:10 2020 +

Merge "Make RBD imagebackend flatten method idempotent"

Fresh devstack install, however I have another devstack from August that
is also showing this behavior.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1863190

Title:
  Server group anti-affinity no longer works

Status in OpenStack Compute (nova):
  New

Bug description:
  Server group anti-affinity is no longer working, at least in the
  simple case. I am able to boot two VMs in an anti-affinity server
  group on a devstack that has only one compute node. Previously
  this would fail and/or require soft-anti-affinity to be enabled.

  $ openstack host list
  +---+---+--+
  | Host Name | Service   | Zone |
  +---+---+--+
  | devstack2 | scheduler | internal |
  | devstack2 | conductor | internal |
  | devstack2 | conductor | internal |
  | devstack2 | compute   | nova |
  

[Yahoo-eng-team] [Bug 1853637] [NEW] Assign floating IP to port owned by another tenant is not override-able with RBAC policy

2019-11-22 Thread Michael Johnson
Public bug reported:

In neutron/db/l3_db.py:

def _internal_fip_assoc_data(self, context, fip, tenant_id):
    """Retrieve internal port data for floating IP.

    Retrieve information concerning the internal port where
    the floating IP should be associated to.
    """
    internal_port = self._core_plugin.get_port(context, fip['port_id'])
    if internal_port['tenant_id'] != tenant_id and not context.is_admin:
        port_id = fip['port_id']
        msg = (_('Cannot process floating IP association with '
                 'Port %s, since that port is owned by a '
                 'different tenant') % port_id)
        raise n_exc.BadRequest(resource='floatingip', msg=msg)

This code does not allow operators to override the ability to assign
floating IPs to ports on another tenant using RBAC policy. It also does
not allow members of the advsvc role to take this action.

This code should be fixed to use the standard neutron RBAC and allow the
advsvc role to take this action.
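
A toy model of the ownership check and the relaxation being requested (this
is not the neutron code path; it only shows the intended semantics, assuming
the request context exposes admin/advsvc flags the way neutron-lib contexts
do):

from dataclasses import dataclass


@dataclass
class Ctx:
    project_id: str
    is_admin: bool = False
    is_advsvc: bool = False  # "advanced service" role, e.g. a LBaaS service


def may_associate_fip(ctx, port_owner):
    # Today only the port owner or an admin passes; the request here is to
    # also honour advsvc and, more generally, whatever the operator's RBAC
    # policy allows instead of hard-coding the decision.
    return ctx.project_id == port_owner or ctx.is_admin or ctx.is_advsvc


assert may_associate_fip(Ctx('tenant-a'), 'tenant-a')
assert not may_associate_fip(Ctx('tenant-b'), 'tenant-a')
assert may_associate_fip(Ctx('tenant-b', is_advsvc=True), 'tenant-a')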

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1853637

Title:
  Assign floating IP to port owned by another tenant is not override-
  able with RBAC policy

Status in neutron:
  New

Bug description:
  In neutron/db/l3_db.py:

  def _internal_fip_assoc_data(self, context, fip, tenant_id):
      """Retrieve internal port data for floating IP.

      Retrieve information concerning the internal port where
      the floating IP should be associated to.
      """
      internal_port = self._core_plugin.get_port(context, fip['port_id'])
      if internal_port['tenant_id'] != tenant_id and not context.is_admin:
          port_id = fip['port_id']
          msg = (_('Cannot process floating IP association with '
                   'Port %s, since that port is owned by a '
                   'different tenant') % port_id)
          raise n_exc.BadRequest(resource='floatingip', msg=msg)

  This code does not allow operators to override the ability to assign
  floating IPs to ports on another tenant using RBAC policy. It also
  does not allow members of the advsvc role to take this action.

  This code should be fixed to use the standard neutron RBAC and allow
  the advsvc role to take this action.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1853637/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1831613] Re: Deletion of Lbaas-listener is successful even when it is part of Lbaas pool

2019-06-05 Thread Michael Johnson
neutron-lbaas is not a neutron project. This patch has been moved to the
neutron-lbaas storyboard in story:
https://storyboard.openstack.org/#!/story/2005827

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1831613

Title:
  Deletion of Lbaas-listener is successful even when it is part of
  Lbaas pool

Status in neutron:
  Invalid

Bug description:
  Description -> Deletion of a load balancer listener succeeds even when
  the listener is attached to an LBaaS pool. After deleting the listener,
  when the user creates a new listener, the neutron CLI does not support
  adding the new listener to the existing LBaaS pool.

  User impact of deletion -> the load balancer stops working if a user is
  able to delete the listener accidentally.

  Step to reproduce the scenario->

  neutron lbaas-loadbalancer-create --name lb-15 public-subnet
  neutron lbaas-listener-create --name listener-15-1 --loadbalancer lb-15 
--protocol HTTP --protocol-port 80 --connection-limit 1
  neutron lbaas-pool-create --name pool-15 --lb-algorithm  ROUND_ROBIN  
--listener listener-15-1  --protocol HTTP
  neutron lbaas-healthmonitor-create --name health-15 --delay 5 --max-retries 4 
--timeout 3 --type PING --pool pool-15
  neutron lbaas-listener-delete 

  Create a listener again and try to add it to the existing pool; neither
  the CLI nor Horizon supports this operation.

  Expected output -> Two approaches to consider:

  1. If deletion of a listener is possible, then adding a listener to an
  existing pool should also be allowed.

  2. Alternatively, if a listener is a mandatory field for pool creation,
  then, like other such fields, LBaaS listener deletion should throw an
  error.

  version of openstack -> stable stein
  linux ubuntu -> 18.04


  
  Reason why it is needed: since creating a listener is mandatory for a pool,
deleting the listener should not be allowed without deleting the pool.
   root@vmware:~/vio6.0# neutron lbaas-pool-create --name lb-pool2 
--lb-algorithm ROUND_ROBIN --protocol HTTP --insecure
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  /usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py:857: 
InsecureRequestWarning: Unverified HTTPS request is being made. Adding 
certificate verification is strongly advised. See: 
https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning)
  /usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py:857: 
InsecureRequestWarning: Unverified HTTPS request is being made. Adding 
certificate verification is strongly advised. See: 
https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning)
  At least one of --listener or --loadbalancer must be specified.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1831613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1822382] Re: DBDeadlock for INSERT INTO resourcedeltas

2019-03-29 Thread Michael Johnson
Looking at this deeper, it appears neutron did properly retry this DB
action and the instance connection issue may be unrelated. Marking this
invalid.
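
For readers wondering what "properly retry" means here: MySQL error 1213
rolls the transaction back and the caller is expected to re-run it, which
neutron does through its oslo.db/neutron-lib retry decorators. A toy version
of that pattern (not neutron's actual code):

import random
import time


class DBDeadlock(Exception):
    """Stand-in for oslo_db.exception.DBDeadlock (MySQL error 1213)."""


def run_with_deadlock_retry(txn, attempts=5, base_delay=0.1):
    # Re-run the whole transaction with jittered backoff; only raise once
    # the retry budget is exhausted.
    for attempt in range(1, attempts + 1):
        try:
            return txn()
        except DBDeadlock:
            if attempt == attempts:
                raise
            time.sleep(base_delay * attempt + random.uniform(0, base_delay))


calls = iter([DBDeadlock(), 'reserved'])


def flaky_txn():
    item = next(calls)
    if isinstance(item, Exception):
        raise item
    return item


print(run_with_deadlock_retry(flaky_txn))  # 'reserved' on the second attempt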

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1822382

Title:
  DBDeadlock for INSERT INTO resourcedeltas

Status in neutron:
  Invalid

Bug description:
  Recently we started seeing instances fail to become reachable in the
  Octavia tempest jobs. This is intermittent, but recurring. This may be
  related to other DBDeadlock bugs recently reported for quotas, but
  since the SQL is different here I am reporting it.

  This is on Master/Train.

  Summary of the error in q-svc:

  Mar 29 20:04:12.816598 ubuntu-xenial-rax-dfw-0004550340 neutron-
  server[11470]: ERROR oslo_db.sqlalchemy.exc_filters
  oslo_db.exception.DBDeadlock: (pymysql.err.InternalError) (1213,
  'Deadlock found when trying to get lock; try restarting transaction')
  [SQL: 'INSERT INTO resourcedeltas (resource, reservation_id, amount)
  VALUES (%(resource)s, %(reservation_id)s, %(amount)s)'] [parameters:
  {'reservation_id': '4f198b7d-ac31-42bb-98bd-686c830322ab', 'resource':
  'port', 'amount': 1}] (Background on this error at:
  http://sqlalche.me/e/2j85)

  Full traceback from q-svc for one occurrence:

  Mar 29 20:04:12.790909 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters 
[req-56d6484e-b182-4dbd-8bb9-8db4ceb3c38a 
req-ddb65494-cdaf-4dec-ab19-84efbede0da7 admin admin] DBAPIError exception 
wrapped from (pymysql.err.InternalError) (1305, 'SAVEPOINT sa_savepoint_9 does 
not exist') [SQL: 'ROLLBACK TO SAVEPOINT sa_savepoint_9'] (Background on this 
error at: http://sqlalche.me/e/2j85): pymysql.err.InternalError: (1305, 
'SAVEPOINT sa_savepoint_9 does not exist')
  Mar 29 20:04:12.791441 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters Traceback (most 
recent call last):
  Mar 29 20:04:12.791889 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/base.py", line 1193, 
in _execute_context
  Mar 29 20:04:12.792380 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters context)
  Mar 29 20:04:12.792860 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/default.py", line 
507, in do_execute
  Mar 29 20:04:12.793296 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
  Mar 29 20:04:12.793872 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/cursors.py", line 165, in 
execute
  Mar 29 20:04:12.794320 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters result = 
self._query(query)
  Mar 29 20:04:12.794743 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/cursors.py", line 321, in _query
  Mar 29 20:04:12.795219 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters conn.query(q)
  Mar 29 20:04:12.795668 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 860, in 
query
  Mar 29 20:04:12.796102 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters 
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  Mar 29 20:04:12.796505 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 1061, in 
_read_query_result
  Mar 29 20:04:12.796904 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters result.read()
  Mar 29 20:04:12.797336 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 1349, in 
read
  Mar 29 20:04:12.797730 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters first_packet = 
self.connection._read_packet()
  Mar 29 20:04:12.798022 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 1018, in 
_read_packet
  Mar 29 20:04:12.798305 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR 

[Yahoo-eng-team] [Bug 1822382] [NEW] DBDeadlock for INSERT INTO resourcedeltas

2019-03-29 Thread Michael Johnson
Public bug reported:

Recently we started seeing instances fail to become reachable in the
Octavia tempest jobs. This is intermittent, but recurring. This may be
related to other DBDeadlock bugs recently reported for quotas, but since
the SQL is different here I am reporting it.

This is on Master/Train.

Summary of the error in q-svc:

Mar 29 20:04:12.816598 ubuntu-xenial-rax-dfw-0004550340 neutron-
server[11470]: ERROR oslo_db.sqlalchemy.exc_filters
oslo_db.exception.DBDeadlock: (pymysql.err.InternalError) (1213,
'Deadlock found when trying to get lock; try restarting transaction')
[SQL: 'INSERT INTO resourcedeltas (resource, reservation_id, amount)
VALUES (%(resource)s, %(reservation_id)s, %(amount)s)'] [parameters:
{'reservation_id': '4f198b7d-ac31-42bb-98bd-686c830322ab', 'resource':
'port', 'amount': 1}] (Background on this error at:
http://sqlalche.me/e/2j85)

Full traceback from q-svc for one occurrence:

Mar 29 20:04:12.790909 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters [req-56d6484e-b182-4dbd-8bb9-8db4ceb3c38a 
req-ddb65494-cdaf-4dec-ab19-84efbede0da7 admin admin] DBAPIError exception 
wrapped from (pymysql.err.InternalError) (1305, 'SAVEPOINT sa_savepoint_9 does 
not exist') [SQL: 'ROLLBACK TO SAVEPOINT sa_savepoint_9'] (Background on this 
error at: http://sqlalche.me/e/2j85): pymysql.err.InternalError: (1305, 
'SAVEPOINT sa_savepoint_9 does not exist')
Mar 29 20:04:12.791441 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
Mar 29 20:04:12.791889 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/base.py", line 1193, 
in _execute_context
Mar 29 20:04:12.792380 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters context)
Mar 29 20:04:12.792860 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/default.py", line 
507, in do_execute
Mar 29 20:04:12.793296 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters cursor.execute(statement, parameters)
Mar 29 20:04:12.793872 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/cursors.py", line 165, in 
execute
Mar 29 20:04:12.794320 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters result = self._query(query)
Mar 29 20:04:12.794743 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/cursors.py", line 321, in _query
Mar 29 20:04:12.795219 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters conn.query(q)
Mar 29 20:04:12.795668 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 860, in 
query
Mar 29 20:04:12.796102 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters self._affected_rows = 
self._read_query_result(unbuffered=unbuffered)
Mar 29 20:04:12.796505 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 1061, in 
_read_query_result
Mar 29 20:04:12.796904 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters result.read()
Mar 29 20:04:12.797336 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 1349, in 
read
Mar 29 20:04:12.797730 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters first_packet = 
self.connection._read_packet()
Mar 29 20:04:12.798022 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 1018, in 
_read_packet
Mar 29 20:04:12.798305 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters packet.check_error()
Mar 29 20:04:12.798600 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 384, in 
check_error
Mar 29 20:04:12.798894 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters err.raise_mysql_exception(self._data)
Mar 29 20:04:12.799224 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR 

[Yahoo-eng-team] [Bug 1811455] [NEW] QoS plugin fails if network is not found

2019-01-11 Thread Michael Johnson
Public bug reported:

Master neutron (Stein):
We are intermittently seeing gate failures with a q-svc exception ending in:
AttributeError: 'NoneType' object has no attribute 'qos_policy_id'

It appears that the qos_plugin is always assuming it will get a network object 
back for ports:
neutron/services/qos/qos_plugin.py: L97

# Note(lajoskatona): handle the case when the port inherits qos-policy
# from the network.
if not qos_policy:
    net = network_object.Network.get_object(
        context.get_admin_context(), id=port_res['network_id'])
    if net.qos_policy_id:
        qos_policy = policy_object.QosPolicy.get_network_policy(
            context.get_admin_context(), net.id)

I think this needs to be updated to handle the case where a network is
not returned (i.e. Network.get_object returns None).
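
A toy illustration of the guard being suggested (the names below are
stand-ins, not the qos_plugin API): when the network lookup returns None
because the network was deleted in the meantime, fall back to "no
network-level policy" instead of dereferencing it:

def network_qos_policy_id(get_network, network_id):
    net = get_network(network_id)
    if net is None:  # network deleted between the port query and this lookup
        return None
    return net.qos_policy_id


class FakeNet:
    qos_policy_id = 'policy-1'


assert network_qos_policy_id(lambda _id: FakeNet(), 'net-1') == 'policy-1'
assert network_qos_policy_id(lambda _id: None, 'net-gone') is None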

** Affects: neutron
 Importance: Undecided
 Assignee: Michael Johnson (johnsom)
 Status: In Progress


** Tags: qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1811455

Title:
  QoS plugin fails if network is not found

Status in neutron:
  In Progress

Bug description:
  Master neutron (Stein):
  We are intermittently seeing gate failures with a q-svc exception:

  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server [None 
req-9e4027ef-c0b5-4d46-99be-1a1da640c506 None None] Exception during message 
handling: AttributeError: 'NoneType' object has no attribute 'qos_policy_id'
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server Traceback (most recent 
call last):
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/dist-packages/oslo_messaging/rpc/server.py", line 
166, in _process_incoming
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
265, in dispatch
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
194, in _do_dispatch
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server result = func(ctxt, 
**new_args)
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server   File 
"/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py", line 146, in 
get_active_networks_info
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server ports = 
plugin.get_ports(context, filters=filters)
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/dist-packages/neutron_lib/db/api.py", line 233, in 
wrapped
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server return method(*args, 
**kwargs)
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/dist-packages/neutron_lib/db/api.py", line 140, in 
wrapped
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server setattr(e, 
'_RETRY_EXCEEDED', True)
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server self.force_reraise()
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/dist-packages/six.py", line 693, in reraise
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server raise value
  J

[Yahoo-eng-team] [Bug 1780376] Re: Queens neutron broken with recent L3 removal from neutron-lib.constants

2018-07-13 Thread Michael Johnson
This issue only applies to master, where QA/infra has removed zuul-cloner
and now relies on requirements/upper-constraints.

So from Boden's comments it sounds like this is a broken
requirements/upper-constraints entry for neutron/neutron-lib.

I will add the requirements team to the bug.

** Also affects: openstack-requirements
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1780376

Title:
  Queens neutron broken with recent L3 removal from neutron-
  lib.constants

Status in neutron:
  Confirmed
Status in OpenStack Global Requirements:
  New

Bug description:
  This patch: https://github.com/openstack/neutron-
  lib/commit/ec829f9384547864aebb56390da8e17df7051aac breaks neutron in
  the current global requirements setup. Current GR with the new
  versioning pulls queens neutron and the 1.17.0 neutron-lib. Since L3
  was removed from neutron-lib.constants,  queens neutron fails on the
  reference neutron/plugins/common/constants.py

  I'm not sure whether L3 should be put back, whether queens neutron
  should be patched, or whether the global requirements setup that pulls
  mismatched versions of neutron and neutron-lib needs to be fixed.
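
  A minimal sketch of the failure mode and one possible guard, assuming
  nothing about the removed constant's real value (the fallback string
  below is a placeholder only, and the proper fix is pinning a compatible
  neutron/neutron-lib pair via upper-constraints):

    # Queens neutron effectively does `'router': constants.L3`, which
    # raises AttributeError against neutron-lib 1.17.0.
    from neutron_lib import constants as lib_const

    L3_SERVICE = getattr(lib_const, 'L3', None)
    if L3_SERVICE is None:
        # Placeholder fallback for illustration only.
        L3_SERVICE = 'L3_ROUTER_NAT'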

  Steps to reproduce:
  Checkout neutron-lbaas and run tox -e py27
  Zuul seems to be pulling the right versions, local does not due to the GR 
constraints.

  Failed to import test module: neutron_lbaas.tests.unit.agent.test_agent
  Traceback (most recent call last):
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File "neutron_lbaas/tests/unit/agent/test_agent.py", line 19, in 
  from neutron_lbaas.agent import agent
File "neutron_lbaas/agent/agent.py", line 26, in 
  from neutron_lbaas.agent import agent_manager as manager
File "neutron_lbaas/agent/agent_manager.py", line 17, in 
  from neutron.agent import rpc as agent_rpc
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/agent/rpc.py",
 line 27, in 
  from neutron.agent import resource_cache
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/agent/resource_cache.py",
 line 20, in 
  from neutron.api.rpc.callbacks.consumer import registry as registry_rpc
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/api/rpc/callbacks/consumer/registry.py",
 line 15, in 
  from neutron.api.rpc.callbacks import resource_manager
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/api/rpc/callbacks/resource_manager.py",
 line 21, in 
  from neutron.api.rpc.callbacks import resources
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/api/rpc/callbacks/resources.py",
 line 15, in 
  from neutron.objects import network
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/objects/network.py",
 line 21, in 
  from neutron.db.models import segment as segment_model
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/db/models/segment.py",
 line 24, in 
  from neutron.extensions import segment
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/extensions/segment.py",
 line 26, in 
  from neutron.api import extensions
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/api/extensions.py",
 line 32, in 
  from neutron.plugins.common import constants as const
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/plugins/common/constants.py",
 line 28, in 
  'router': constants.L3,
  AttributeError: 'module' object has no attribute 'L3'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1780376/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1780376] [NEW] Queens neutron broken with recent L3 removal from neutron-lib.constants

2018-07-05 Thread Michael Johnson
Public bug reported:

This patch: https://github.com/openstack/neutron-
lib/commit/ec829f9384547864aebb56390da8e17df7051aac breaks neutron in
the current global requirements setup. Current GR with the new
versioning pulls queens neutron and the 1.17.0 neutron-lib. Since L3 was
removed from neutron-lib.constants,  queens neutron fails on the
reference neutron/plugins/common/constants.py

I'm not sure if L3 should be put back, queens neutron patched, or the
global requirements setup where it's pulling different versions of
neutron and neutron-lib needs to be fixed.

Steps to reproduce:
Checkout neutron-lbaas and run tox -e py27
Zuul seems to be pulling the right versions, local does not due to the GR 
constraints.

Failed to import test module: neutron_lbaas.tests.unit.agent.test_agent
Traceback (most recent call last):
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
module = self._get_module_from_name(name)
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
__import__(name)
  File "neutron_lbaas/tests/unit/agent/test_agent.py", line 19, in 
from neutron_lbaas.agent import agent
  File "neutron_lbaas/agent/agent.py", line 26, in 
from neutron_lbaas.agent import agent_manager as manager
  File "neutron_lbaas/agent/agent_manager.py", line 17, in 
from neutron.agent import rpc as agent_rpc
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/agent/rpc.py",
 line 27, in 
from neutron.agent import resource_cache
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/agent/resource_cache.py",
 line 20, in 
from neutron.api.rpc.callbacks.consumer import registry as registry_rpc
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/api/rpc/callbacks/consumer/registry.py",
 line 15, in 
from neutron.api.rpc.callbacks import resource_manager
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/api/rpc/callbacks/resource_manager.py",
 line 21, in 
from neutron.api.rpc.callbacks import resources
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/api/rpc/callbacks/resources.py",
 line 15, in 
from neutron.objects import network
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/objects/network.py",
 line 21, in 
from neutron.db.models import segment as segment_model
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/db/models/segment.py",
 line 24, in 
from neutron.extensions import segment
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/extensions/segment.py",
 line 26, in 
from neutron.api import extensions
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/api/extensions.py",
 line 32, in 
from neutron.plugins.common import constants as const
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/plugins/common/constants.py",
 line 28, in 
'router': constants.L3,
AttributeError: 'module' object has no attribute 'L3'

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1780376

Title:
  Queens neutron broken with recent L3 removal from neutron-
  lib.constants

Status in neutron:
  New

Bug description:
  This patch: https://github.com/openstack/neutron-
  lib/commit/ec829f9384547864aebb56390da8e17df7051aac breaks neutron in
  the current global requirements setup. Current GR with the new
  versioning pulls queens neutron and the 1.17.0 neutron-lib. Since L3
  was removed from neutron-lib.constants,  queens neutron fails on the
  reference neutron/plugins/common/constants.py

  I'm not sure if L3 should be put back, queens neutron patched, or the
  global requirements setup where it's pulling different versions of
  neutron and neutron-lib needs to be fixed.

  Steps to reproduce:
  Checkout neutron-lbaas and run tox -e py27
  Zuul seems to be pulling the right versions, local does not due to the GR 
constraints.

  Failed to import test module: neutron_lbaas.tests.unit.agent.test_agent
  Traceback (most recent call last):
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
  

[Yahoo-eng-team] [Bug 1767028] Re: loadbalancer can't create with chinese character name

2018-04-30 Thread Michael Johnson
Marking invalid here to move the bug over to the neutron-lbaas
storyboard.

https://storyboard.openstack.org/#!/story/2001946

** Changed in: neutron
   Status: New => Invalid

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1767028

Title:
  loadbalancer can't create with chinese character name

Status in octavia:
  Invalid

Bug description:
  When creating a loadbalancer with a Chinese-character name, there are
  problems: the name is written into the haproxy configuration, but the
  Chinese characters cannot be written there correctly.
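
  A minimal sketch of the failure and a hedged fix (the explicit UTF-8
  encode is an assumption about how the write could be made safe, not the
  shipped patch):

    # -*- coding: utf-8 -*-
    import io

    config_str = u'# pool for 测试\nglobal\n    daemon\n'

    # Writing the rendered text through an ASCII codec raises
    # UnicodeEncodeError on Python 2, as in the traceback below;
    # encoding explicitly before the write avoids it.
    with io.open('/tmp/haproxy.cfg', 'wb') as f:
        f.write(config_str.encode('utf-8'))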

  - version of Neutron server and Neutron LBaaS plugin are both mitaka
  - cat /var/log/neutron/lbaasv2-agent.log

  ……
  2018-04-26 17:08:28.115 2128890 INFO neutron.common.config [-] 
/usr/bin/neutron-lbaasv2-agent version 0.0.1.dev14379
  2018-04-26 17:08:30.985 2128890 WARNING oslo_config.cfg 
[req-ef0cef5b-d415-4a90-a953-616cb938bfb2 - - - - -] Option "quota_items" from 
group "QUOTAS" is deprecated for removal.  Its value may be silently ignored in 
the future.
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
[req-482029a2-2d4a-410a-9d24-5ec3eb7722fd 673c04fcbf374619af91d09eed27ed6f 
e1a0b669b61744ff867274586ef6a968 - - -] Create loadbalancer 
31822d01-d425-456b-8376-4853d820ab1d failed on device driver haproxy_ns
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
Traceback (most recent call last):
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File "/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", 
line 283, in create_loadbalancer
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
driver.loadbalancer.create(loadbalancer, ha_info=ha_info)
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 433, in create
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
self.refresh(loadbalancer, ha_info=ha_info)
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 423, in refresh
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
if (not self.driver.deploy_instance(loadbalancer, ha_info=ha_info) and
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 
271, in inner
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
return f(*args, **kwargs)
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 201, in deploy_instance
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
self.create(loadbalancer, ha_info=ha_info)
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 251, in create
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
self._spawn(loadbalancer)
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 406, in _spawn
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
haproxy_base_dir)
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/jinja_cfg.py",
 line 93, in save_config
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
n_utils.replace_file(conf_path, config_str)
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File "/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 535, in 
replace_file
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File "/usr/lib64/python2.7/socket.py", line 316, in write
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
data = str(data) # XXX Should really reject non-string non-buffers
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
UnicodeEncodeError: 'ascii' codec can't encode characters in position 20-21: 
ordinal not in range(128)
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager

  - command outputs
  # neutron lbaas-loadbalancer-create 0f45f8d1-7a50-4e4f-93f0-22bdf1e9a4fc 
--name 测试
  Created a new loadbalancer:
  +-+--+
  | Field   | Value

[Yahoo-eng-team] [Bug 1718356] Re: Include default config files in python wheel

2017-09-21 Thread Michael Johnson
Correct, our policy is in code and we don't use paste.  Marking invalid.

** Changed in: octavia
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718356

Title:
  Include default config files in python wheel

Status in Barbican:
  In Progress
Status in Cinder:
  In Progress
Status in Designate:
  In Progress
Status in Fuxi:
  New
Status in Glance:
  In Progress
Status in OpenStack Heat:
  In Progress
Status in Ironic:
  In Progress
Status in OpenStack Identity (keystone):
  In Progress
Status in kuryr-libnetwork:
  New
Status in Magnum:
  In Progress
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  In Progress
Status in octavia:
  Invalid
Status in openstack-ansible:
  New
Status in Sahara:
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress
Status in Zun:
  New

Bug description:
  The projects which deploy OpenStack from source or using python wheels
  currently have to either carry templates for api-paste, policy and
  rootwrap files or need to source them from git during deployment. This
  results in some rather complex mechanisms which could be radically
  simplified by simply ensuring that all the same files are included in
  the built wheel.

  A precedence for this has already been set in neutron [1], glance [2]
  and designate [3] through the use of the data_files option in the
  files section of setup.cfg.

  [1] 
https://github.com/openstack/neutron/blob/d3c393ff6b5fbd0bdaabc8ba678d755ebfba08f7/setup.cfg#L24-L39
  [2] 
https://github.com/openstack/glance/blob/02cd5cba70a8465a951cb813a573d390887174b7/setup.cfg#L20-L21
  [3] 
https://github.com/openstack/designate/blob/25eb143db04554d65efe2e5d60ad3afa6b51d73a/setup.cfg#L30-L37

  This bug will be used for a cross-project implementation of patches to
  normalise the implementation across the OpenStack projects. Hopefully
  the result will be a consistent implementation across all the major
  projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1718356/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469498] Re: LbaasV2 session persistence- Create and update

2017-09-12 Thread Michael Johnson
** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1469498

Title:
  LbaasV2 session persistence- Create and update

Status in python-neutronclient:
  New

Bug description:
  When we create an LBaaS pool with session persistence, it is configured OK

  neutron lbaas-pool-create --session-persistence type=HTTP_COOKIE  
--lb-algorithm LEAST_CONNECTIONS --listener 
4658a507-dccc-41f9-87d7-913d31cab3a1 --protocol HTTP 
  Created a new pool:
  +-++
  | Field   | Value  |
  +-++
  | admin_state_up  | True   |
  | description ||
  | healthmonitor_id||
  | id  | a626dc28-0126-48f7-acd3-f486827a89c1   |
  | lb_algorithm| LEAST_CONNECTIONS  |
  | listeners   | {"id": "4658a507-dccc-41f9-87d7-913d31cab3a1"} |
  | members ||
  | name||
  | protocol| HTTP   |
  | session_persistence | {"cookie_name": null, "type": "HTTP_COOKIE"}   |
  | tenant_id   | ae0954b9cf0c438e99211227a7f3f937   |

  BUT, when we create a pool without session persistence and update it
  to do session persistence, the action is different and not user
  friendly.

  [root@puma09 ~(keystone_redhat)]# neutron lbaas-pool-create --lb-algorithm 
LEAST_CONNECTIONS --listener 4658a507-dccc-41f9-87d7-913d31cab3a1 --protocol 
HTTP 
  Created a new pool:
  +-++
  | Field   | Value  |
  +-++
  | admin_state_up  | True   |
  | description ||
  | healthmonitor_id||
  | id  | b9048a69-461a-4503-ba6b-8a2df281f804   |
  | lb_algorithm| LEAST_CONNECTIONS  |
  | listeners   | {"id": "4658a507-dccc-41f9-87d7-913d31cab3a1"} |
  | members ||
  | name||
  | protocol| HTTP   |
  | session_persistence ||
  | tenant_id   | ae0954b9cf0c438e99211227a7f3f937   |
  +-++
  [root@puma09 ~(keystone_redhat)]# neutron lbaas-pool-update 
b9048a69-461a-4503-ba6b-8a2df281f804 --session-persistence type=HTTP_COOKIE
  name 'HTTP_COOKIE' is not defined
  [root@puma09 ~(keystone_redhat)]# 


  we need to configure it in the following way- 
  neutron lbaas-pool-update b9048a69-461a-4503-ba6b-8a2df281f804 
--session-persistence type=dict type=HTTP_COOKIE
  Updated pool: b9048a69-461a-4503-ba6b-8a2df281f804

  The config and update should be done in the same way.

  Kilo+ rhel 7.1
  openstack-neutron-common-2015.1.0-10.el7ost.noarch
  python-neutron-lbaas-2015.1.0-5.el7ost.noarch
  openstack-neutron-openvswitch-2015.1.0-10.el7ost.noarch
  python-neutronclient-2.4.0-1.el7ost.noarch
  openstack-neutron-lbaas-2015.1.0-5.el7ost.noarch
  python-neutron-fwaas-2015.1.0-3.el7ost.noarch
  openstack-neutron-fwaas-2015.1.0-3.el7ost.noarch
  python-neutron-2015.1.0-10.el7ost.noarch
  openstack-neutron-2015.1.0-10.el7ost.noarch
  openstack-neutron-ml2-2015.1.0-10.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1469498/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1687366] Re: Radware LBaaS v2 driver should have config to skip SSL certificates verification

2017-09-12 Thread Michael Johnson
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1687366

Title:
  Radware LBaaS v2 driver should have config to skip SSL certificates
  verification

Status in neutron:
  Fix Released

Bug description:
  Radware LBaaS v2 driver communicates with Radware's back-end system over 
HTTPS.
  Since this back-end system is internal (a VA on an OpenStack compute node), 
usually self-signed certificates are used. 

  Since Python verifies SSL certificates by default starting from releases 
2.7.9/3.4.3, HTTPS communication will be halted when no valid certificates 
exist.

  This enhancement adds a new configuration parameter for the driver
  which will turn the SSL certificates verification OFF in case when
  it's ON.
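
  A minimal sketch of such a toggle, using an illustrative option name
  rather than the driver's actual one: an oslo.config boolean that is
  passed straight through to the HTTPS client's certificate check.

    from oslo_config import cfg
    import requests

    OPTS = [
        cfg.BoolOpt('ssl_verify_context', default=True,
                    help='Verify the back-end SSL certificate.'),
    ]
    cfg.CONF.register_opts(OPTS, group='radwarev2')

    def call_backend(url):
        # verify=False skips certificate validation for self-signed VAs.
        return requests.get(url, verify=cfg.CONF.radwarev2.ssl_verify_context)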

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1687366/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1556342] Re: Able to create pool with different protocol than listener protocol

2017-09-12 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1556342

Title:
  Able to create pool with different protocol than listener protocol

Status in octavia:
  In Progress

Bug description:
  When creating a pool with a different protocol than the listener protocol, a 
pool record is created even though the protocols are not compatible.
  Previously, such a pool would not show up in neutron lbaas-pool-list, since 
the protocols are not compatible.
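
  A minimal sketch of the missing rollback, with illustrative method names
  rather than the actual neutron-lbaas plugin code: if the provider rejects
  the pool after the database insert, the insert has to be undone, otherwise
  lbaas-pool-list keeps showing a pool that Octavia never created (see the
  output below).

    def create_pool(plugin, context, pool):
        db_pool = plugin.db.create_pool(context, pool)
        try:
            plugin.driver.pool.create(context, db_pool)
        except Exception:
            # Roll back the orphan neutron-lbaas row on provider rejection.
            plugin.db.delete_pool(context, db_pool.id)
            raise
        return db_pool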

  
  Initial state
  $ neutron lbaas-loadbalancer-list
  
+--+--+-+-+--+
  | id   | name | vip_address | 
provisioning_status | provider |
  
+--+--+-+-+--+
  | bf449f65-633d-4859-b417-28b35f4eaea2 | lb1  | 10.0.0.3| ERROR   
| octavia  |
  | c6bf0765-47a9-49d9-a2f2-dd3f1ea81a5c | lb2  | 10.0.0.13   | ACTIVE  
| octavia  |
  | e1210b03-f440-4bc1-84ca-9ba70190854f | lb3  | 10.0.0.16   | ACTIVE  
| octavia  |
  
+--+--+-+-+--+

  $ neutron lbaas-listener-list
  
+--+--+---+--+---++
  | id   | default_pool_id  
| name  | protocol | protocol_port | admin_state_up |
  
+--+--+---+--+---++
  | 4cda881c-9209-42ac-9c97-e1bfab0300b2 | 6be3dae1-f159-4e0e-94ea-5afdfdab05fc 
| list2 | HTTP |80 | True   |
  
+--+--+---+--+---++

  $ neutron lbaas-pool-list
  +--+---+--++
  | id   | name  | protocol | admin_state_up |
  +--+---+--++
  | 6be3dae1-f159-4e0e-94ea-5afdfdab05fc | pool2 | HTTP | True   |
  +--+---+--++

  
  Create new listener with TCP protocol 
  $ neutron lbaas-listener-create --name list3 --loadbalancer lb3 --protocol 
TCP --protocol-port 22
  Created a new listener:
  +---++
  | Field | Value  |
  +---++
  | admin_state_up| True   |
  | connection_limit  | -1 |
  | default_pool_id   ||
  | default_tls_container_ref ||
  | description   ||
  | id| 9574801a-675b-4784-baf0-410d1a1fd941   |
  | loadbalancers | {"id": "e1210b03-f440-4bc1-84ca-9ba70190854f"} |
  | name  | list3  |
  | protocol  | TCP|
  | protocol_port | 22 |
  | sni_container_refs||
  | tenant_id | b24968d717804ffebd77803fce24b5a4   |
  +---++

  Create pool with HTTP protocol instead of TCP
  $ neutron lbaas-pool-create --name pool3 --lb-algorithm ROUND_ROBIN 
--listener list3 --protocol HTTP
  Listener protocol TCP and pool protocol HTTP are not compatible.

  The pool list shows pool3 even though the protocols are not compatible and 
the pool should not have been created:
  $ neutron lbaas-pool-list
  +--+---+--++
  | id   | name  | protocol | admin_state_up |
  +--+---+--++
  | 6be3dae1-f159-4e0e-94ea-5afdfdab05fc | pool2 | HTTP | True   |
  | 7e6fbe67-60b0-40cd-afdd-44cddd8c60a1 | pool3 | HTTP | True   |
  +--+---+--++

  From mysql, the pool table in the octavia DB shows no pool3 
  

[Yahoo-eng-team] [Bug 1653086] Re: Hit internal server error in lb creation with no subnets network

2017-09-12 Thread Michael Johnson
Neutron-lbaas is no longer a neutron project, so removing neutron from
the affected project.

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1653086

Title:
  Hit internal server error in lb creation with no subnets network

Status in octavia:
  Fix Released

Bug description:
  Currently, LBaaS supports creating a loadbalancer with a vip-network. But if
  there isn't a subnet in this vip-network, the Neutron server hits an
  internal error.

  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource ^[[01;35m^[[00m  File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource ^[[01;35m^[[00m
six.reraise(self.type_, self.value, self.tb)
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource ^[[01;35m^[[00m  File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 526, in do_create
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource ^[[01;35m^[[00m
return obj_creator(request.context, **kwargs)
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource ^[[01;35m^[[00m  File 
"/opt/stack/neutron-lbaas/neutron_lbaas/services/loadbalancer/plugin.py", line 
362, in create_loadbalancer
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource ^[[01;35m^[[00m
allocate_vip=not driver.load_balancer.allocates_vip)
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource ^[[01;35m^[[00m  File 
"/opt/stack/neutron-lbaas/neutron_lbaas/db/loadbalancer/loadbalancer_dbv2.py", 
line 332, in create_loadbalancer
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource ^[[01;35m^[[00m
vip_address, vip_network_id)
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource ^[[01;35m^[[00m  File 
"/opt/stack/neutron-lbaas/neutron_lbaas/db/loadbalancer/loadbalancer_dbv2.py", 
line 155, in _create_port_for_load_balancer
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource ^[[01;35m^[[00m
lb_db.vip_address = fixed_ip['ip_address']
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource 
^[[01;35m^[[00mTypeError: 'NoneType' object has no attribute '__getitem__'
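
  A minimal sketch of a guard (helper name and exception choice are
  assumptions, not the merged fix): check that the VIP port actually got a
  fixed IP before dereferencing it, so the API returns a clean error
  instead of the 500 above.

    from neutron_lib import exceptions as n_exc

    def _first_fixed_ip(port):
        fixed_ips = port.get('fixed_ips') or []
        if not fixed_ips:
            # A vip-network without subnets yields no fixed IP, which is
            # what becomes the NoneType error in the traceback.
            raise n_exc.BadRequest(
                resource='loadbalancer',
                msg='VIP network has no subnet to allocate a VIP address.')
        return fixed_ips[0]['ip_address']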

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1653086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667259] Re: one more pool is created for a loadbalancer

2017-09-12 Thread Michael Johnson
As noted above, this was fixed in ocata.

Also, this didn't get updated as LBaaS is no longer part of neutron and
bugs are now tracked in the Octavia storyboard.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667259

Title:
  one more pool is created for a loadbalancer

Status in OpenStack Heat:
  Won't Fix
Status in neutron:
  Fix Released

Bug description:
  One more pool is created when creating a load balancer with two pools.
  That extra pool doesn't have complete information but is related to that
  loadbalancer, which causes a failure when deleting the loadbalancer.

  heat resource-list lbvd
  WARNING (shell) "heat resource-list" is deprecated, please use "openstack 
stack resource list" instead
  
+---+--+---+-+--+
  | resource_name | physical_resource_id | resource_type
 | resource_status | updated_time |
  
+---+--+---+-+--+
  | listener  | 12dfe005-80e0-4439-a4f8-1333f688e73b | 
OS::Neutron::LBaaS::Listener  | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | listener2 | 26ba1151-3d4b-4732-826b-7f318800070d | 
OS::Neutron::LBaaS::Listener  | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | loadbalancer  | 3a5bfa24-220c-4316-9c3d-57dd9c13feb8 | 
OS::Neutron::LBaaS::LoadBalancer  | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | monitor   | 241bc328-4c9b-4f58-a34a-4e25ed7431ea | 
OS::Neutron::LBaaS::HealthMonitor | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | monitor2  | 6592b768-f3be-4ff9-bbf4-2c30b94f98e2 | 
OS::Neutron::LBaaS::HealthMonitor | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | pool  | 41652d9a-d0fe-4743-9e5f-2dfe98b19f3d | 
OS::Neutron::LBaaS::Pool  | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | pool2 | fae40172-7f16-4b1a-93f0-877d404fe466 | 
OS::Neutron::LBaaS::Pool  | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  
+---+--+---+-+--+

  
  neutron lbaas-pool-list | grep lbvd
  | 095c94b8-8c18-443f-9ce9-3d34e94f0c81 | lbvd-pool-ujtp6ddt4g6o   
 | HTTP| True  |
  | 41652d9a-d0fe-4743-9e5f-2dfe98b19f3d | lbvd-pool-ujtp6ddt4g6o   
 | HTTP| True  |
  | fae40172-7f16-4b1a-93f0-877d404fe466 | lbvd-pool2-kn7rlwltbdxh  
  | HTTPS| True  |

  
  neutron lbaas-pool-show 095c94b8-8c18-443f-9ce9-3d34e94f0c81
  +-++
  | Field  | Value  |
  +-++
  | admin_state_up  | True  |
  | description||
  | healthmonitor_id||
  | id  | 095c94b8-8c18-443f-9ce9-3d34e94f0c81  |
  | lb_algorithm| ROUND_ROBIN|
  | listeners  ||
  | loadbalancers  | {"id": "3a5bfa24-220c-4316-9c3d-57dd9c13feb8"} |
  | members||
  | name| lbvd-pool-ujtp6ddt4g6o|
  | protocol| HTTP  |
  | session_persistence ||
  | tenant_id  | 3dcf8b12327c460a966c1c1d4a6e2887  |
  +-++

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1667259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495430] Re: delete lbaasv2 can't delete lbaas namespace automatically.

2017-09-12 Thread Michael Johnson
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495430

Title:
  delete lbaasv2 can't delete lbaas namespace automatically.

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive kilo series:
  Triaged
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in Ubuntu Cloud Archive newton series:
  Triaged
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in octavia:
  Fix Released
Status in neutron-lbaas package in Ubuntu:
  Fix Released
Status in neutron-lbaas source package in Xenial:
  Triaged
Status in neutron-lbaas source package in Yakkety:
  Triaged
Status in neutron-lbaas source package in Zesty:
  Fix Released

Bug description:
  Trying LBaaS v2 in my environment, I found lots of orphan lbaas namespaces. 
Looking back at the code, the lbaas instance is undeployed when a listener is 
deleted, and everything is deleted except the namespace.
  However, the loadbalancer delete path does remove the namespace 
automatically.
  The behavior is not consistent; the namespace should be deleted on listener 
delete too.
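
  A minimal sketch of the consistency fix, using plain ip(8) commands for
  illustration (the real driver would go through neutron's ip_lib helpers):
  after undeploying the last listener, remove the now-empty qlbaas-
  namespace the same way the loadbalancer delete path does.

    import subprocess

    NS_PREFIX = 'qlbaas-'

    def cleanup_namespace(loadbalancer_id):
        ns = NS_PREFIX + loadbalancer_id
        # Only delete the namespace once nothing but loopback is left in it.
        links = subprocess.check_output(
            ['ip', 'netns', 'exec', ns, 'ip', '-o', 'link', 'show'])
        if len(links.splitlines()) <= 1:
            subprocess.check_call(['ip', 'netns', 'delete', ns])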

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1495430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1711068] Re: lbaas listener update does not work when specifying listener name

2017-09-12 Thread Michael Johnson
LBaaS is no longer part of neutron.  LBaaS bugs should be submitted to
the Octavia project on Storyboard.

Mitaka is now EOL and the neutron client is deprecated.  If the issue
still exists in Newton or a non-EOL release of the neutron client, please
re-open this bug against python-neutronclient.

** Project changed: neutron => python-neutronclient

** Tags removed: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1711068

Title:
  lbaas listener update does not work when specifying listener name

Status in python-neutronclient:
  New

Bug description:
  On MITAKA.

  When trying to update an LBaaS listener by specifying the listener name
  (for a listener which has a name), the following is received: Unable to
  find listener with id 


  Updating with id works as expected.

  Example:

  radware@devstack131:~$ neutron lbaas-listener-list
  
+--+--+-+--+---++
  | id   | default_pool_id  
| name| protocol | protocol_port | admin_state_up |
  
+--+--+-+--+---++
  | cc2ddbd9-038d-4ff0-81bd-346ba9a47e23 | 3d379d02-3476-4d03-8e0f-3383102ff8f9 
| RADWARE_ANOTHER | HTTP |80 | True   |
  | 2491490d-12bc-4d4b-9744-bb8464e01672 |  
| RADWAREV2   | HTTP |80 | True   |
  
+--+--+-+--+---++
  radware@devstack131:~$ neutron lbaas-listener-update --name=RADWAREV2 
RADWAREV2
  Unable to find listener with id 'RADWAREV2'
  radware@devstack131:~$ neutron lbaas-listener-update --name=RADWAREV2 
2491490d-12bc-4d4b-9744-bb8464e01672
  Updated listener: 2491490d-12bc-4d4b-9744-bb8464e01672
  radware@devstack131:~$

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1711068/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1699613] Re: LBaaS v2 agent security groups not filtering

2017-09-12 Thread Michael Johnson
LBaaS is no longer part of neutron and future bugs should be reported in
the Octavia project in Storyboard.

Mitaka is now EOL so this bug will be closed out.  If it is still
occurring in a non-EOL release, please re-open this bug in Storyboard
under the neutron-lbaas project under Octavia.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1699613

Title:
  LBaaS v2 agent security groups not filtering

Status in neutron:
  Invalid

Bug description:
  Greetings:

  Current environment details:

  - Mitaka with LBaaS v2 agent configured.
  - Deployed via Openstack Ansible
  - Neutron Linuxbridge
  - Ubuntu 14.04.5 LTS

  We had followed documentation at https://docs.openstack.org/mitaka
  /networking-guide/config-lbaas.html to secure traffic to the VIP.

  We created two security groups.

  1) SG-allowToVIP: We didn't want to open it globally, so we limited ingress 
HTTP access to certain IPs. This SG was applied to the VIP port.
  2) SG-allowLB: ingress HTTP from the VIP address. This SG was applied to the 
pool member(s). The idea behind this was that the web server (load-balanced 
pool member) would always see traffic from the VIP.

  End result is/was we can access the VIP from any source IP and any
  rule applied to the security group (SG-allowToVIP) is ignored.

  We have verified the following:
  - Appropriate SG is applied properly to each port
  - When we look at the iptables-save for the VIP port, we are seeing the rules 
originating from the SG but they are not working.
  - When we look at the iptables-save for the pool-member(s), we are seeing the 
rules originating from the SG and they are working.

  The only time we were able to block traffic to the VIP was to edit the
  iptables rules for the LBaaS agent which is not practical obviously,
  but we were just experimenting.

  I will provide detailed output - after I clean it up.

  Thanks in advance

  Luke

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1699613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1713424] Re: [RFE] Support proxy protocol enablement in Neutron LBaaS API

2017-09-12 Thread Michael Johnson
LBaaS is no longer part of neutron.  LBaaS issues should be reported in
storyboard under the Octavia project.

That said, this is available in Octavia.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1713424

Title:
  [RFE] Support proxy protocol enablement in Neutron LBaaS API

Status in neutron:
  Invalid

Bug description:
  Problem: servers behind a TCP load balancer, as provisioned using the
  Neutron LBaaS API, can't determine the source IP of a TCP connection.
  Instead they will always see the load balancer IP as origin of
  requests. This makes troubleshooting client connection issues using
  logs gathered behind a LB very hard and often impossible.

  Solution: the PROXY protocol has been introduced to forward the
  missing information across a load balancer:

  http://www.haproxy.org/download/1.8/doc/proxy-protocol.txt

  A number of backend services can make use of it, such as Nginx

  https://www.nginx.com/resources/admin-guide/proxy-protocol/

  but also Apache, Squid, Undertow. Proxy protocol is also supported by
  Amazon ELB since 2013.

  As HAproxy, the implementation behind the Neutron LBaaS API, does
  already offer native support, this RFE is about its enablement using
  the LBaaS API and corresponding Heat resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1713424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1711215] [NEW] Default neutron-ns-metadata-proxy threads setting is too low (newton)

2017-08-16 Thread Michael Johnson
Public bug reported:

In the older neutron-ns-metadata-proxy, in the newton release, the
number of threads is fixed at 100.  This is a drop from the previous
default setting of 1000 as a side effect of changing the number of wsgi
threads [1].

This is causing failures at sites with a large number of instances using 
deployment tools (instance cloud-init logs):
2017-08-01 15:44:36,773 - url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: 
request error [HTTPConnectionPool(host='169.254.169.254', port=80): Request 
timed out. (timeout=17.0)]
2017-08-01 15:44:37,775 - DataSourceEc2.py[CRITICAL]: Giving up on md from 
['http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 120 seconds

Setting the value, for neutron-ns-metadata-proxy only, back up to 1000 resolves 
this issue.
It should also be noted that in the Ocata forward version of the 
neutron-ns-metadata-proxy the default value is 1024 [2].

I am going to propose a patch for stable/newton that sets the default
thread count for the neutron-ns-metadata-proxy back up to 1000.

[1] 
https://github.com/openstack/neutron/blob/master/releasenotes/notes/config-wsgi-pool-size-a4c06753b79fee6d.yaml
[2] 
https://github.com/openstack/neutron/blob/master/neutron/agent/metadata/driver.py#L44
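
A minimal sketch of the intent of that patch (the helper below is
illustrative; wsgi_default_pool_size is the option discussed in the release
note referenced as [1]): keep the smaller default for the API workers while
giving the ns-metadata-proxy its old 1000 greenthreads back.

    from oslo_config import cfg

    cfg.CONF.register_opts([
        cfg.IntOpt('wsgi_default_pool_size', default=100,
                   help='Maximum number of eventlet greenthreads per worker.'),
    ])

    METADATA_PROXY_POOL_SIZE = 1000

    def metadata_proxy_pool_size():
        # Never drop below the proxy-specific floor, even when the deployment
        # keeps the newton 100-thread default for neutron-server.
        return max(cfg.CONF.wsgi_default_pool_size, METADATA_PROXY_POOL_SIZE)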

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1711215

Title:
  Default neutron-ns-metadata-proxy threads setting is too low (newton)

Status in neutron:
  New

Bug description:
  In the older neutron-ns-metadata-proxy, in the newton release, the
  number of threads is fixed at 100.  This is a drop from the previous
  default setting of 1000 as a side effect of changing the number of
  wsgi threads [1].

  This is causing failures at sites with a large number of instances using 
deployment tools (instance cloud-init logs):
  2017-08-01 15:44:36,773 - url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: 
request error [HTTPConnectionPool(host='169.254.169.254', port=80): Request 
timed out. (timeout=17.0)]
  2017-08-01 15:44:37,775 - DataSourceEc2.py[CRITICAL]: Giving up on md from 
['http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 120 seconds

  Setting the value, for neutron-ns-metadata-proxy only, back up to 1000 
resolves this issue.
  It should also be noted that in the Ocata forward version of the 
neutron-ns-metadata-proxy the default value is 1024 [2].

  I am going to propose a patch for stable/newton that sets the default
  thread count for the neutron-ns-metadata-proxy back up to 1000.

  [1] 
https://github.com/openstack/neutron/blob/master/releasenotes/notes/config-wsgi-pool-size-a4c06753b79fee6d.yaml
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/agent/metadata/driver.py#L44

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1711215/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1673754] Re: LBaaSv2: Cannot delete loadbalancer in PENDING_CREATE

2017-03-17 Thread Michael Johnson
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1673754

Title:
  LBaaSv2: Cannot delete loadbalancer in PENDING_CREATE

Status in octavia:
  New

Bug description:
  If all neutron-lbaasv2-agents are down at a given time, you could get
  some loadbalancers stuck in PENDING_CREATE.

  With CLI tools it is impossible to delete these resources:

  (neutron) lbaas-loadbalancer-delete 5173ac41-194d-4d0c-b833-657b728c469d
  Invalid state PENDING_CREATE of loadbalancer resource 
5173ac41-194d-4d0c-b833-657b728c469d
  Neutron server returns request_ids: 
['req-970a6338-b0d0-4bcc-9108-ec94360b45e2']

  Even deleting this loadbalancer from the database with mysql commands
  does not clean it completely.

  The VIP port also needs to be deleted, which is not possible via the API 
because I get:
  (neutron) port-delete 264f6125-ef0f-46ab-84b6-79da2d00eb28
  Port 264f6125-ef0f-46ab-84b6-79da2d00eb28 cannot be deleted directly via the 
port API: has device owner neutron:LOADBALANCERV2.
  Neutron server returns request_ids: 
['req-ed4718e0-e8a3-4c53-9416-de97cc73230f']

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1673754/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1608980] Re: Remove MANIFEST.in as it is not explicitly needed by PBR

2017-03-13 Thread Michael Johnson
** Changed in: neutron-lbaas-dashboard
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608980

Title:
  Remove MANIFEST.in as it is not explicitly needed by PBR

Status in anvil:
  Invalid
Status in craton:
  Fix Released
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in gce-api:
  Fix Released
Status in Glance:
  Fix Released
Status in Karbor:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in Kosmos:
  Fix Released
Status in Magnum:
  Fix Released
Status in masakari:
  Fix Released
Status in networking-midonet:
  Fix Released
Status in networking-odl:
  Fix Released
Status in neutron:
  Fix Released
Status in Neutron LBaaS Dashboard:
  Fix Released
Status in octavia:
  Fix Released
Status in os-vif:
  Fix Released
Status in python-searchlightclient:
  In Progress
Status in OpenStack Search (Searchlight):
  Fix Released
Status in Solum:
  Fix Released
Status in Swift Authentication:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in Tricircle:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in watcher:
  Fix Released
Status in Zun:
  Fix Released

Bug description:
  PBR does not explicitly require MANIFEST.in, so it can be removed.

  
  Snippet from: http://docs.openstack.org/infra/manual/developers.html

  Manifest

  Just like AUTHORS and ChangeLog, why keep a list of files you wish to
  include when you can find many of these in git. MANIFEST.in generation
  ensures almost all files stored in git, with the exception of
  .gitignore, .gitreview and .pyc files, are automatically included in
  your distribution. In addition, the generated AUTHORS and ChangeLog
  files are also included. In many cases, this removes the need for an
  explicit ‘MANIFEST.in’ file

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1608980/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603458] Re: Cannot Delete loadbalancers due to undeleteable pools

2017-03-13 Thread Michael Johnson
Marking this invalid as you can delete a pool via horizon.  Did you
remember to delete the health monitor first?

I agree that in the future we could enable the cascade delete feature in
horizon with a warning, but that would be an RFE and not the bug as
reported.  Closing this as invalid as you can in fact delete pools via
the neutron-lbaas-dashboard.

** Changed in: horizon
   Status: New => Invalid

** Changed in: neutron-lbaas-dashboard
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603458

Title:
  Cannot Delete loadbalancers due to undeleteable pools

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in neutron:
  Invalid
Status in Neutron LBaaS Dashboard:
  Invalid

Bug description:
  To delete an LBaaSv2 loadbalancer, you must remove all the members
  from the pool, then delete the pool, then delete the listener, then
  you can delete the loadbalancer. Currently in Horizon you can do all
  of those except delete the pool. Since you can't delete the pool, you
  can't delete the listener, and therefore can't delete the
  loadbalancer.

  Either deleting the listener should trigger the pool delete too (since
  they're 1:1) or the Horizon Wizard for Listener should have a delete
  pool capability.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1603458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1672345] Re: Loadbalancer V2 ports are not serviced by DVR

2017-03-13 Thread Michael Johnson
This is a neutron DVR bug and not an LBaaS/Octavia bug.  It may be a
duplicate of existing DVR bugs.

** Project changed: octavia => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1672345

Title:
  Loadbalancer V2 ports are not serviced by DVR

Status in neutron:
  New

Bug description:
  I reported #1629539, which was on Mitaka/LBaaSv1, but I'm seeing the
  exact same behaviour on Newton/LBaaSv2.

  There's apparently a fix (for Kilo) in #1493809. There's also #1494003
  (a duplicate of #1493809), which have a lot of debug output and
  apparently a way to reproduce.

  When I reinstalled my Openstack setup from Debian GNU/Linux Sid/Mitaka
  to Debian GNU/Linux Jessie/Newton, I started out with a non-
  distributed router (DVR). The LBaaS v1 _and_ v2 worked just fine
  there. But as soon as I enabled/set up DVR, they stopped working.

  I'm unsure of what information would be required, but "ask and it will
  be supplied".

  The problem I'm seeing is that the FIPs of the LB respond, but not
  the VIP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1672345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1661303] [NEW] neutron-ns-metadata-proxy process failing under python3.5

2017-02-02 Thread Michael Johnson
Public bug reported:

When running under python 3.5, we are seeing the neutron-ns-metadata-
proxy fail repeatedly on Ocata RC1 master.

This is causing instances to fail to boot under a python3.5 devstack.

A gate example is here:
http://logs.openstack.org/99/407099/25/check/gate-rally-dsvm-py35-neutron-neutron-ubuntu-xenial/4741e0d/logs/screen-q-l3.txt.gz?level=ERROR#_2017-02-02_11_41_52_029

2017-02-02 11:41:52.029 29906 ERROR neutron.agent.linux.external_process
[-] metadata-proxy for router with uuid
79af72b9-6b17-4864-8088-5dc96b9271df not found. The process should not
have died

Running this locally I see the debug output of the configuration
settings and it immediately exits with no error output.

To reproduce:
Stack a fresh devstack with the "USE_PYTHON3=True" setting in your localrc 
(NOTE: There are other python3x devstack bugs that may reconfigure your host in 
bad ways once you do this.  Plan to only stack with this setting on a throw 
away host or one you plan to use for Python3.x going forward)

Once this devstack is up and running, set up a neutron network and subnet,
then boot a cirros instance on that new subnet.

Check the cirros console.log to see that it cannot find a metadata
datasource (Due to this change disabling configdrive: https://github.com
/openstack-
dev/devstack/commit/7682ea88a6ab8693b215646f16748dbbc2476cc4).

Check the q-l3.txt log to see the repeated "The process should not have
died" messages.

You will also note that the cirros instance did not receive its ssh
keys and requires password login due to the missing datasource.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1661303

Title:
  neutron-ns-metadata-proxy process failing under python3.5

Status in neutron:
  New

Bug description:
  When running under python 3.5, we are seeing the neutron-ns-metadata-
  proxy fail repeatedly on Ocata RC1 master.

  This is causing instances to fail to boot under a python3.5 devstack.

  A gate example is here:
  
http://logs.openstack.org/99/407099/25/check/gate-rally-dsvm-py35-neutron-neutron-ubuntu-xenial/4741e0d/logs/screen-q-l3.txt.gz?level=ERROR#_2017-02-02_11_41_52_029

  2017-02-02 11:41:52.029 29906 ERROR
  neutron.agent.linux.external_process [-] metadata-proxy for router
  with uuid 79af72b9-6b17-4864-8088-5dc96b9271df not found. The process
  should not have died

  Running this locally I see the debug output of the configuration
  settings and it immediately exits with no error output.

  To reproduce:
  Stack a fresh devstack with the "USE_PYTHON3=True" setting in your localrc 
(NOTE: There are other python3x devstack bugs that may reconfigure your host in 
bad ways once you do this.  Plan to only stack with this setting on a throw 
away host or one you plan to use for Python3.x going forward)

  Once this devstack is up and running, set up a neutron network and
  subnet, then boot a cirros instance on that new subnet.

  Check the cirros console.log to see that it cannot find a metadata
  datasource (Due to this change disabling configdrive:
  https://github.com/openstack-
  dev/devstack/commit/7682ea88a6ab8693b215646f16748dbbc2476cc4).

  Check the q-l3.txt log to see the repeated "The process should not
  have died" messages.

  You will also note that the cirros instance did not receive its ssh
  keys and requires password login due to the missing datasource.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1661303/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1661086] [NEW] Failed to plug VIF VIFBridge

2017-02-01 Thread Michael Johnson
Public bug reported:

I did a fresh restack/reclone this morning and can no longer boot up a
cirros instance.

Nova client returns:

| fault| {"message": "Failure running
os_vif plugin plug method: Failed to plug VIF
VIFBridge(active=False,address=fa:16:3e:6f:0e:84,bridge_name='qbrd3377ad5-43',has_traffic_filtering=True,id=d3377ad5-4397
-474c-b6dd-e29b9a52d277,network=Network(d2582e93-d6e5-43ac-be7b-
fe3fc3c7", "code": 500, "details": "  File
\"/opt/stack/nova/nova/compute/manager.py\", line 1780, in
_do_build_and_run_instance |

pip list:
nova (15.0.0.0b4.dev77, /opt/stack/nova)
os-vif (1.4.0)

n-cpu.log shows:
2017-02-01 11:13:32.880 DEBUG nova.network.os_vif_util 
[req-17c8b4e4-2197-4205-aed3-007d0f2837e4 admin admin] Converted object 
VIFBridge(active=False,address=fa:16:3e:6f:0e:84,bridge_name='qbrd3377ad5-43',has_traffic_filtering=True,id=d3377ad5-4397-474c-b6dd-e29b9a52d277,network=Network(d2582e93-d6e5-43ac-be7b-fe3fc3c7e593),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tapd3377ad5-43')
 from (pid=69603) nova_to_osvif_vif 
/opt/stack/nova/nova/network/os_vif_util.py:425
2017-02-01 11:13:32.880 DEBUG os_vif [req-17c8b4e4-2197-4205-aed3-007d0f2837e4 
admin admin] Unplugging vif 
VIFBridge(active=False,address=fa:16:3e:6f:0e:84,bridge_name='qbrd3377ad5-43',has_traffic_filtering=True,id=d3377ad5-4397-474c-b6dd-e29b9a52d277,network=Network(d2582e93-d6e5-43ac-be7b-fe3fc3c7e593),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tapd3377ad5-43')
 from (pid=69603) unplug 
/usr/local/lib/python2.7/dist-packages/os_vif/__init__.py:112
2017-02-01 11:13:32.881 DEBUG oslo.privsep.daemon [-] privsep: 
request[139935485013840]: (3, b'vif_plug_ovs.linux_net.delete_bridge', 
('qbrd3377ad5-43', b'qvbd3377ad5-43'), {}) from (pid=69603) out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
2017-02-01 11:13:32.881 DEBUG oslo.privsep.daemon [-] privsep: Exception during 
request[139935485013840]: a bytes-like object is required, not 'str' from 
(pid=69603) out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
2017-02-01 11:13:32.881 DEBUG oslo.privsep.daemon [-] privsep: 
reply[139935485013840]: (5, 'builtins.TypeError', ("a bytes-like object is 
required, not 'str'",)) from (pid=69603) out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
2017-02-01 11:13:32.882 ERROR os_vif [req-17c8b4e4-2197-4205-aed3-007d0f2837e4 
admin admin] Failed to unplug vif 
VIFBridge(active=False,address=fa:16:3e:6f:0e:84,bridge_name='qbrd3377ad5-43',has_traffic_filtering=True,id=d3377ad5-4397-474c-b6dd-e29b9a52d277,network=Network(d2582e93-d6e5-43ac-be7b-fe3fc3c7e593),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tapd3377ad5-43')
2017-02-01 11:13:32.882 TRACE os_vif Traceback (most recent call last):
2017-02-01 11:13:32.882 TRACE os_vif   File 
"/usr/local/lib/python2.7/dist-packages/os_vif/__init__.py", line 113, in unplug
2017-02-01 11:13:32.882 TRACE os_vif plugin.unplug(vif, instance_info)
2017-02-01 11:13:32.882 TRACE os_vif   File 
"/usr/local/lib/python2.7/dist-packages/vif_plug_ovs/ovs.py", line 216, in 
unplug
2017-02-01 11:13:32.882 TRACE os_vif self._unplug_bridge(vif, instance_info)
2017-02-01 11:13:32.882 TRACE os_vif   File 
"/usr/local/lib/python2.7/dist-packages/vif_plug_ovs/ovs.py", line 192, in 
_unplug_bridge
2017-02-01 11:13:32.882 TRACE os_vif 
linux_net.delete_bridge(vif.bridge_name, v1_name)
2017-02-01 11:13:32.882 TRACE os_vif   File 
"/usr/local/lib/python2.7/dist-packages/oslo_privsep/priv_context.py", line 
205, in _wrap
2017-02-01 11:13:32.882 TRACE os_vif return self.channel.remote_call(name, 
args, kwargs)
2017-02-01 11:13:32.882 TRACE os_vif   File 
"/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py", line 186, in 
remote_call
2017-02-01 11:13:32.882 TRACE os_vif exc_type = 
importutils.import_class(result[1])
2017-02-01 11:13:32.882 TRACE os_vif   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 30, in 
import_class
2017-02-01 11:13:32.882 TRACE os_vif __import__(mod_str)
2017-02-01 11:13:32.882 TRACE os_vif ImportError: No module named builtins
2017-02-01 11:13:32.882 TRACE os_vif

Full n-cpu.log is attached.
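
For illustration only, here is a minimal sketch of the two failure layers
visible in the trace above (this is not the os-vif code itself):

# Sketch, not os-vif code: the privsep daemon receives a mix of bytes and str
# arguments ('qbrd3377ad5-43', b'qvbd3377ad5-43'), and a bytes/str operation
# inside the daemon raises the TypeError seen in the log:
try:
    b'qvbd3377ad5-43'.split('-')   # Python 3: a bytes-like object is required, not 'str'
except TypeError as exc:
    print(exc)

# The daemon serializes the exception type as 'builtins.TypeError'.  The
# Python 2.7 client then fails to re-import that module path, producing the
# final "ImportError: No module named builtins" in the traceback:
from oslo_utils import importutils
try:
    importutils.import_class('builtins.TypeError')
except ImportError as exc:
    print(exc)                     # Python 2.7: No module named builtins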

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661086

Title:
  Failed to plug VIF VIFBridge

Status in OpenStack Compute (nova):
  New

Bug description:
  I did a fresh restack/reclone this morning and can no longer boot up a
  cirros instance.

  Nova client returns:

  | fault| {"message": "Failure running
  os_vif plugin plug method: Failed to plug VIF
  

[Yahoo-eng-team] [Bug 1654887] Re: Upgrade to 3.6.0 causes AttributeError: 'SecurityGroup' object has no attribute 'keys'

2017-01-08 Thread Michael Johnson
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1654887

Title:
  Upgrade to 3.6.0 causes AttributeError: 'SecurityGroup' object has no
  attribute 'keys'

Status in neutron:
  New
Status in python-openstackclient:
  New

Bug description:
  When running the command:

  openstack security group create foo

  Under version 3.5.0 of python-openstackclient the command succeeds,
  but after doing a pip install --upgrade python-openstackclient to
  version 3.6.0 I get the following error:

  'SecurityGroup' object has no attribute 'keys'

  Neutron successfully created the security group.
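
  A minimal sketch of the failure mode (the class below is a stand-in, not the
  real SDK resource class): openstackclient 3.6.0 treats the returned object
  as a mapping, but the object only supports attribute access.

  class FakeSecurityGroup(object):        # stand-in for the SDK resource
      def __init__(self, **attrs):
          self.__dict__.update(attrs)

  sg = FakeSecurityGroup(id='b11e40a0-aed2-464e-851e-6901afa0f845', name='foo')

  try:
      columns = list(sg.keys())           # what _get_columns() does in 3.6.0
  except AttributeError as exc:
      print(exc)                          # ... object has no attribute 'keys'

  columns = list(vars(sg).keys())         # mapping-style access that does work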

  Running with --debug shows:

  Using http://172.21.21.125:9696/v2.0 as public network endpoint
  REQ: curl -g -i -X POST http://172.21.21.125:9696/v2.0/security-groups -H 
"User-Agent: openstacksdk/0.9.12 keystoneauth1/2.16.0 python-requests/2.12.4 
CPython/2.7.6" -H "Content-Type: application/json" -H "X-Auth-Token: 
{SHA1}d46c48cdee00c9eefd4216b492f9b56e762749bc" -d '{"security_group": {"name": 
"foo", "description": "foo"}}'
  http://172.21.21.125:9696 "POST /v2.0/security-groups HTTP/1.1" 201 1302
  RESP: [201] Content-Type: application/json Content-Length: 1302 
X-Openstack-Request-Id: req-9bea5358-8341-4064-b7ea-54edd8e4fd53 Date: Sun, 08 
Jan 2017 20:27:07 GMT Connection: keep-alive
  RESP BODY: {"security_group": {"description": "foo", "tenant_id": 
"f0c5bc260c06423893b791890715a337", "created_at": "2017-01-08T20:27:07Z", 
"updated_at": "2017-01-08T20:27:07Z", "security_group_rules": [{"direction": 
"egress", "protocol": null, "description": null, "port_range_max": null, 
"updated_at": "2017-01-08T20:27:07Z", "revision_number": 1, "id": 
"fc82f0ef-df78-4b46-9b9e-96d71b5b34b4", "remote_group_id": null, 
"remote_ip_prefix": null, "created_at": "2017-01-08T20:27:07Z", 
"security_group_id": "b11e40a0-aed2-464e-851e-6901afa0f845", "tenant_id": 
"f0c5bc260c06423893b791890715a337", "port_range_min": null, "ethertype": 
"IPv4", "project_id": "f0c5bc260c06423893b791890715a337"}, {"direction": 
"egress", "protocol": null, "description": null, "port_range_max": null, 
"updated_at": "2017-01-08T20:27:07Z", "revision_number": 1, "id": 
"3e363162-93bf-49c4-9d00-203ffe1dd4ef", "remote_group_id": null, 
"remote_ip_prefix": null, "created_at": "2017-01-08T20:27:07Z", 
"security_group_id": "b
 11e40a0-aed2-464e-851e-6901afa0f845", "tenant_id": 
"f0c5bc260c06423893b791890715a337", "port_range_min": null, "ethertype": 
"IPv6", "project_id": "f0c5bc260c06423893b791890715a337"}], "revision_number": 
1, "project_id": "f0c5bc260c06423893b791890715a337", "id": 
"b11e40a0-aed2-464e-851e-6901afa0f845", "name": "foo"}}

  'SecurityGroup' object has no attribute 'keys'
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/cliff/app.py", line 400, in 
run_subcommand
  result = cmd.run(parsed_args)
File "/usr/local/lib/python2.7/dist-packages/osc_lib/command/command.py", 
line 41, in run
  return super(Command, self).run(parsed_args)
File "/usr/local/lib/python2.7/dist-packages/cliff/display.py", line 112, 
in run
  column_names, data = self.take_action(parsed_args)
File 
"/usr/local/lib/python2.7/dist-packages/openstackclient/network/common.py", 
line 188, in take_action
  self.app.client_manager.network, parsed_args)
File 
"/usr/local/lib/python2.7/dist-packages/openstackclient/network/v2/security_group.py",
 line 145, in take_action_network
  display_columns, property_columns = _get_columns(obj)
File 
"/usr/local/lib/python2.7/dist-packages/openstackclient/network/v2/security_group.py",
 line 77, in _get_columns
  columns = list(item.keys())
File "/usr/local/lib/python2.7/dist-packages/openstack/resource2.py", line 
309, in __getattribute__
  return object.__getattribute__(self, name)
  AttributeError: 'SecurityGroup' object has no attribute 'keys'
  clean_up CreateSecurityGroup: 'SecurityGroup' object has no attribute 'keys'
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/osc_lib/shell.py", line 135, 
in run
  ret_val = super(OpenStackShell, self).run(argv)
File "/usr/local/lib/python2.7/dist-packages/cliff/app.py", line 279, in run
  result = self.run_subcommand(remainder)
File "/usr/local/lib/python2.7/dist-packages/osc_lib/shell.py", line 180, 
in run_subcommand
  ret_value = super(OpenStackShell, self).run_subcommand(argv)
File "/usr/local/lib/python2.7/dist-packages/cliff/app.py", line 400, in 
run_subcommand
  result = cmd.run(parsed_args)
File "/usr/local/lib/python2.7/dist-packages/osc_lib/command/command.py", 
line 41, in run
  return super(Command, self).run(parsed_args)
File "/usr/local/lib/python2.7/dist-packages/cliff/display.py", line 112, 
in run
  column_names, data = 

[Yahoo-eng-team] [Bug 1626093] Re: LBaaSV2: listener deletion causes LB port to be Detached "forever"

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Low

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1626093

Title:
  LBaaSV2: listener deletion causes LB port to be Detached "forever"

Status in octavia:
  New

Bug description:
  Case 1:
  Create a LBaaSV2 LB with a listener. Remove listener. Port Detached. Add 
listener. Nothing happens.

  Case 2:
  Create a LBaaSV2 LB with a listener. Add another listener. Remove one of the 
two. Port Detached.

  This is merely an annoyance.

  neutron port-show shows nothing for device_id and device_owner.
  In Horizon the port shows as Detached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1626093/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1602974] Re: [stable/liberty] LBaaS v2 haproxy: need a way to find status of listener

2016-12-05 Thread Michael Johnson
Is this a duplicate of https://bugs.launchpad.net/octavia/+bug/1632054 ?

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1602974

Title:
  [stable/liberty] LBaaS v2 haproxy: need a way to find status of
  listener

Status in octavia:
  Incomplete

Bug description:
  Currently we don't have an option to check the status of a listener. Below
  is the output of lbaas-listener-show, which has no status field.

  root@runner:~# neutron lbaas-listener-show 
8c0e0289-f85d-4539-8970-467a45a5c191
  +---++
  | Field | Value  |
  +---++
  | admin_state_up| True   |
  | connection_limit  | -1 |
  | default_pool_id   ||
  | default_tls_container_ref ||
  | description   ||
  | id| 8c0e0289-f85d-4539-8970-467a45a5c191   |
  | loadbalancers | {"id": "bda96c0a-0167-45ab-8772-ba92bc0f2d00"} |
  | name  | test-lb-http   |
  | protocol  | HTTP   |
  | protocol_port | 80 |
  | sni_container_refs||
  | tenant_id | ce1d087209c64df4b7e8007dc35def22   |
  +---++
  root@runner:~#

  The problem arises when we configure the listener and pool back to back
  without any delay. The pool create fails, saying the listener is not
  ready.

  The workaround is to add a 3-second delay between listener and pool
  creation (a polling alternative is sketched below).
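
  Polling sketch (the endpoint and token below are placeholders, not values
  from this report; the /v2.0/lbaas/loadbalancers path is the standard LBaaS
  v2 API):

  import time
  import requests

  NEUTRON_URL = 'http://<neutron-endpoint>:9696'   # placeholder
  TOKEN = '<keystone-token>'                       # placeholder

  def wait_for_active(lb_id, timeout=60):
      deadline = time.time() + timeout
      while time.time() < deadline:
          resp = requests.get(
              '%s/v2.0/lbaas/loadbalancers/%s' % (NEUTRON_URL, lb_id),
              headers={'X-Auth-Token': TOKEN})
          status = resp.json()['loadbalancer']['provisioning_status']
          if status == 'ACTIVE':
              return
          if status == 'ERROR':
              raise RuntimeError('load balancer went to ERROR')
          time.sleep(1)
      raise RuntimeError('timed out waiting for ACTIVE')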

  Logs:

  root@runner:~# neutron lbaas-loadbalancer-create --name test-lb vn-subnet; 
neutron lbaas-listener-create --name test-lb-http --loadbalancer test-lb 
--protocol HTTP --protocol-port 80; neutron lbaas-pool-create --name 
test-lb-pool-http  --lb-algorithm ROUND_ROBIN --listener test-lb-http  
--protocol HTTP
  Created a new loadbalancer:
  +-+--+
  | Field   | Value|
  +-+--+
  | admin_state_up  | True |
  | description |  |
  | id  | 3ed2ff4a-4d87-46da-8e5b-265364dd6861 |
  | listeners   |  |
  | name| test-lb  |
  | operating_status| OFFLINE  |
  | provider| haproxy  |
  | provisioning_status | PENDING_CREATE   |
  | tenant_id   | ce1d087209c64df4b7e8007dc35def22 |
  | vip_address | 20.0.0.62|
  | vip_port_id | 4c33365e-64b9-428f-bc0b-bce6c08c9b20 |
  | vip_subnet_id   | 63cbeccd-6887-4dda-b4d2-b7503bce870a |
  +-+--+
  Created a new listener:
  +---++
  | Field | Value  |
  +---++
  | admin_state_up| True   |
  | connection_limit  | -1 |
  | default_pool_id   ||
  | default_tls_container_ref ||
  | description   ||
  | id| 90260465-934a-44a4-a289-208e5af74cf5   |
  | loadbalancers | {"id": "3ed2ff4a-4d87-46da-8e5b-265364dd6861"} |
  | name  | test-lb-http   |
  | protocol  | HTTP   |
  | protocol_port | 80 |
  | sni_container_refs||
  | tenant_id | ce1d087209c64df4b7e8007dc35def22   |
  +---++
  Invalid state PENDING_UPDATE of loadbalancer resource 
3ed2ff4a-4d87-46da-8e5b-265364dd6861
  root@runner:~#

  
  Neutron:

  : 

[Yahoo-eng-team] [Bug 1464241] Re: Lbaasv2 command logs not seen

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => High

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464241

Title:
  Lbaasv2 command logs not seen

Status in octavia:
  New

Bug description:
  I am testing correct and incorrect lbaasv2 deletion.
  Even if a command fails, nothing is logged in
  /var/log/neutron/lbaasv2-agent.log.

  However, the lbaas (not lbaasv2) agent log is being updated and contains
  errors.

  2015-06-11 03:03:34.352 21274 WARNING neutron.openstack.common.loopingcall 
[-] task > run outlasted interval by 50.10 sec
  2015-06-11 03:04:34.366 21274 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager [-] Unable to retrieve 
ready devices
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent 
call last):
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_manager.py",
 line 152, in sync_state
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager ready_instances = 
set(self.plugin_rpc.get_ready_devices())
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_api.py",
 line 36, in get_ready_devices
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager return 
cctxt.call(self.context, 'get_ready_devices', host=self.host)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 156, in 
call
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager retry=self.retry)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in 
_send
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager timeout=timeout, 
retry=retry)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
350, in send
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager retry=retry)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
339, in _send
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager result = 
self._waiter.wait(msg_id, timeout)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
243, in wait
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager message = 
self.waiters.get(msg_id, timeout=timeout)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
149, in get
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager 'to message ID %s' 
% msg_id)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager MessagingTimeout: Timed 
out waiting for a reply to message ID 73130a6bb5444f259dbf810cfb1003b3
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager

  
  Configure an lbaasv2 setup: loadbalancer, listener, member, pool, and
  healthmonitor.

  Then check the lbaasv2 and lbaas logs:
   /var/log/neutron/lbaasv2-agent.log
   /var/log/neutron/lbaasv-agent.log


  lbaasv2
  kilo
  rhel7.1 
  openstack-neutron-lbaas-2015.1.0-3.el7ost.noarch
  python-neutron-lbaas-2015.1.0-3.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1464241/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622946] Re: lbaas with haproxy backend creates the lbaas namespace without the members' subnet

2016-12-05 Thread Michael Johnson
Can you provide your lbaas agent logs?

** Changed in: neutron
   Importance: Undecided => High

** Project changed: neutron => octavia

** Changed in: octavia
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622946

Title:
  lbaas with haproxy backend creates the lbaas namespace without the
  members' subnet

Status in octavia:
  Incomplete

Bug description:
  When creating a new loadbalancer with haproxy, and the VIP and member
  subnets are different, the created lbaas namespace contains only the
  VIP subnet, so the members are unreachable.

  E.g.:
  neutron lbaas-loadbalancer-show 8e1c193a-ab63-4a1a-bc39-c663f2f9a0ee
  .
  .
  .
  | vip_subnet_id   | 23655977-d29f-4917-a519-de27951fde89   |

  neutron lbaas-member-list d3ebda43-53f8-4118-b4db-999c021c9680

  | 4fe79d5e-a517-4e4f-a145-3c80b414be08 |  | 192.168.168.8 |
  22 |  1 | 0a4a1f3e-43cb-4f9c-9d51-c71f0c231a3e | True   |

  Note that the two subnets are different.
  The created haproxy config is OK:
  .
  .
  .
  frontend 6821edd8-54ab-4fba-90e5-94831fcd0ec0
  option tcplog
  bind 10.97.37.1:22
  mode tcp

  backend d3ebda43-53f8-4118-b4db-999c021c9680
  mode tcp
  balance source
  timeout check 20
  server 4fe79d5e-a517-4e4f-a145-3c80b414be08 192.168.168.8:22 weight 1 
check inter 10s fall 3

  But the namespace is not:
  ip netns exec qlbaas-8e1c193a-ab63-4a1a-bc39-c663f2f9a0ee ip addr
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host 
 valid_lft forever preferred_lft forever
  2: ns-f56b5f8d-ef@if11:  mtu 1500 qdisc 
noqueue state UP group default qlen 1000
  link/ether fa:16:3e:82:9d:9a brd ff:ff:ff:ff:ff:ff link-netnsid 0
  inet 10.97.37.1/25 brd 10.97.37.127 scope global ns-f56b5f8d-ef
 valid_lft forever preferred_lft forever
  inet6 fe80::f816:3eff:fe82:9d9a/64 scope link 
 valid_lft forever preferred_lft forever

  
  The member subnet is missing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1622946/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624097] Re: Neutron LBaaS CLI quota show includes l7policy and doesn't include member

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Medium

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624097

Title:
  Neutron LBaaS CLI quota show includes l7policy and doesn't include
  member

Status in octavia:
  In Progress
Status in python-openstackclient:
  Fix Released

Bug description:
  When running devstack and executing "neutron quota-show" it lists an
  l7 policy quota, but does not show a member quota.  However, the help
  message for "neutron quota-update" includes a member quota, but not an
  l7 policy quota.  The show command should not have the l7 policy
  quota, but should have the member quota.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1624097/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632054] Re: Heat engine doesn't detect lbaas listener failures

2016-12-05 Thread Michael Johnson
The neutron project with lbaas tag was for neutron-lbaas, but now that
we have merged the projects, I am removing neutron as it is all under
octavia project now.

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1632054

Title:
  Heat engine doesn't detect lbaas listener failures

Status in heat:
  Triaged
Status in octavia:
  Triaged

Bug description:
  Please refer to the mailing list for comments from other developers:
  https://openstack.nimeyo.com/97427/openstack-neutron-octavia-doesnt-detect-listener-failures

  I am trying to use heat to launch lb resources with Octavia as backend. The
  template I used is
  https://github.com/openstack/heat-templates/blob/master/hot/lbaasv2/lb_group.yaml

  Following are a few observations:

  1. Even though the Listener was created with ERROR status, heat will still
  go ahead and mark it as Creation Complete. In the heat code, it only checks
  whether the root Loadbalancer status changes from PENDING_UPDATE to ACTIVE,
  and the Loadbalancer status is changed to ACTIVE regardless of the
  Listener's status.

  2. Since the heat engine does not know about the Listener's creation
  failure, it will continue to create the Pool/Member/Healthmonitor on top of
  a Listener which does not actually exist. This causes a few undefined
  behaviors. As a result, those LBaaS resources in ERROR state cannot be
  cleaned up with either the normal neutron or heat API.

  3. The bug is introduced here,
  https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/neutron/lbaas/listener.py#L188.
  It only checks the provisioning status of the root loadbalancer. However,
  the listener itself has its own provisioning status, which may go into
  ERROR (see the status-tree sketch after this list).

  4. The same scenario applies not only to the listener but also to the pool,
  member, healthmonitor, etc., basically every lbaas resource except the
  loadbalancer.
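
  A sketch of the kind of check point 3 asks for (illustrative only, not the
  heat code): walk the status tree returned by the load balancer statuses API
  and report any descendant resource whose provisioning_status is ERROR.

  def find_error_resources(node, path='loadbalancer'):
      # Return the paths of resources in the status tree that are in ERROR.
      errors = []
      if isinstance(node, dict):
          if node.get('provisioning_status') == 'ERROR':
              errors.append('%s(id=%s)' % (path, node.get('id')))
          for key, value in node.items():
              if isinstance(value, (dict, list)):
                  errors.extend(
                      find_error_resources(value, '%s.%s' % (path, key)))
      elif isinstance(node, list):
          for item in node:
              errors.extend(find_error_resources(item, path))
      return errors

  # status_tree is what "neutron lbaas-loadbalancer-status" prints as JSON:
  # errors = find_error_resources(status_tree['loadbalancer'])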

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1632054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585250] Re: Statuses not shown for non-"loadbalancer" LBaaS objects on CLI

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Medium

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585250

Title:
  Statuses not shown for non-"loadbalancer" LBaaS objects on CLI

Status in octavia:
  In Progress

Bug description:
  There is no indication on the CLI when the creation of an LBaaSv2 object
  (other than a "loadbalancer") has failed...

  stack@openstack:~$ neutron lbaas-listener-create --name MyListener1 
--loadbalancer MyLB1 --protocol HTTP --protocol-port 80
  Created a new listener:
  +---++
  | Field | Value  |
  +---++
  | admin_state_up| True   |
  | connection_limit  | -1 |
  | default_pool_id   ||
  | default_tls_container_ref ||
  | description   ||
  | id| 5ca664d6-3a3a-4369-821d-e36c87ff5dc2   |
  | loadbalancers | {"id": "549982d9-7f52-48ac-a4fe-a905c872d71d"} |
  | name  | MyListener1|
  | protocol  | HTTP   |
  | protocol_port | 80 |
  | sni_container_refs||
  | tenant_id | 22000d943c5341cd88d27bd39a4ee9cd   |
  +---++

  There is no indication of any issue here, and lbaas-listener-show
  produces the same output.  However, in reality, the listener is in an
  error state...

  mysql> select * from lbaas_listeners;
  
+--+--+-+-+--+---+--+--+-++-+--+--+
  | tenant_id| id   | 
name| description | protocol | protocol_port | connection_limit | 
loadbalancer_id  | default_pool_id | admin_state_up | 
provisioning_status | operating_status | default_tls_container_id |
  
+--+--+-+-+--+---+--+--+-++-+--+--+
  | 22000d943c5341cd88d27bd39a4ee9cd | 5ca664d6-3a3a-4369-821d-e36c87ff5dc2 | 
MyListener1 | | HTTP |80 |   -1 | 
549982d9-7f52-48ac-a4fe-a905c872d71d | NULL|  1 | ERROR 
  | OFFLINE  | NULL |
  
+--+--+-+-+--+---+--+--+-++-+--+--+
  1 row in set (0.00 sec)

  
  How is a CLI user who doesn't have access to the Neutron DB supposed to know 
an error has occurred (other than "it doesn't work", obviously)?

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1585250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1618559] Re: LBaaS v2 healthmonitor wrong status detection

2016-12-05 Thread Michael Johnson
Are you still having this issue?  I cannot reproduce it on my devstack.

If you can reproduce this, can you provide the commands you used to set up
the load balancer (all of the steps), the output of neutron net-list, the
output of neutron subnet-list, and the output of "sudo ip netns"?


** Changed in: neutron
   Status: New => Incomplete

** Changed in: neutron
   Importance: Undecided => High

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1618559

Title:
  LBaaS v2 healthmonitor wrong status detection

Status in octavia:
  Incomplete

Bug description:
  Summary:
  After enabling the health monitor, the loadbalancer returns
  HTTP/1.0 503 Service Unavailable for any request.

  I have a loadbalancer with VIP 10.123.21.15, an HTTP listener, a pool, and a
  member with IP 10.123.21.12.

  I check the status of the web server with:
  curl -I -X GET http://10.123.21.15/owncloud/status.php 
  ...
  HTTP/1.1 200 OK

  But when I add healthmonitor:
  neutron lbaas-healthmonitor-create \
--delay 5 \
--max-retries 2 \
--timeout 10 \
--type HTTP \
--url-path /owncloud/status.php \
--pool owncloud-app-lb-http-pool

  neutron lbaas-healthmonitor-show 
  +++
  | Field  | Value  |
  +++
  | admin_state_up | True   |
  | delay  | 5  |
  | expected_codes | 200|
  | http_method| GET|
  | id | cf3cc795-ab1f-44c7-a521-799281e1ff64   |
  | max_retries| 2  |
  | name   ||
  | pools  | {"id": "edcd43a2-41ad-4dd7-809d-10d3e45a08a7"} |
  | tenant_id  | b5d8bbe7742540c2b9b2e1b324ea854e   |
  | timeout| 10 |
  | type   | HTTP   |
  | url_path   | /owncloud/status.php   |
  +++

  I expect:
  curl -I -X GET http://10.123.21.15/owncloud/status.php 
  ...
  HTTP/1.1 200 OK

  But result:
  curl -I -X GET http://10.123.21.15/owncloud/status.php
  ...
  HTTP/1.0 503 Service Unavailable

  Direct request to member:
  curl -I -X GET http://10.123.21.12/owncloud/status.php 
  ...
  HTTP/1.1 200 OK

  There are no ERRORs in the neutron logs.
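
  One thing worth checking (an assumption, not something confirmed in this
  report): the haproxy health check normally sends a bare
  "GET /owncloud/status.php HTTP/1.0" request with no Host header, and a
  backend that answers that probe with anything other than the expected 200
  is marked DOWN, which yields exactly this 503 from the VIP. A small sketch
  to reproduce what the monitor sends:

  import socket

  def raw_health_check(member_ip, port, url_path):
      # Mimic an HTTP/1.0 probe with no Host header (what haproxy's
      # "option httpchk" sends by default).
      s = socket.create_connection((member_ip, port), timeout=10)
      s.sendall(('GET %s HTTP/1.0\r\n\r\n' % url_path).encode('ascii'))
      status_line = s.recv(1024).split(b'\r\n', 1)[0]
      s.close()
      return status_line

  print(raw_health_check('10.123.21.12', 80, '/owncloud/status.php'))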

  Some detail about configuration:

  I have 3 controllers. Installed by Fuel with l3 population and DVR enabled.
  lbaas_agent.ini
  interface_driver=openvswitch

  neutron lbaas-loadbalancer-status owncloud-app-lb
  {
  "loadbalancer": {
  "name": "owncloud-app-lb", 
  "provisioning_status": "ACTIVE", 
  "listeners": [
  {
  "name": "owncloud-app-lb-http", 
  "provisioning_status": "ACTIVE", 
  "pools": [
  {
  "name": "owncloud-app-lb-http-pool", 
  "provisioning_status": "ACTIVE", 
  "healthmonitor": {
  "provisioning_status": "ACTIVE", 
  "type": "HTTP", 
  "id": "cf3cc795-ab1f-44c7-a521-799281e1ff64", 
  "name": ""
  }, 
  "members": [
  {
  "name": "", 
  "provisioning_status": "ACTIVE", 
  "address": "10.123.21.12", 
  "protocol_port": 80, 
  "id": "8a588ed1-8818-44b2-80df-90debee59720", 
  "operating_status": "ONLINE"
  }
  ], 
  "id": "edcd43a2-41ad-4dd7-809d-10d3e45a08a7", 
  "operating_status": "ONLINE"
  }
  ], 
  "l7policies": [], 
  "id": "7521308a-15d1-4898-87c8-8f1ed4330b6c", 
  "operating_status": "ONLINE"
  }
  ], 
  "pools": [
  {
  "name": "owncloud-app-lb-http-pool", 
  "provisioning_status": "ACTIVE", 
  "healthmonitor": {
  "provisioning_status": "ACTIVE", 
  "type": "HTTP", 
  "id": "cf3cc795-ab1f-44c7-a521-799281e1ff64", 
  "name": ""
  

[Yahoo-eng-team] [Bug 1627393] Re: Neuton-LBaaS and Octavia out of synch if TLS container secret ACLs not set up correctly

2016-12-05 Thread Michael Johnson
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1627393

Title:
  Neuton-LBaaS and Octavia out of synch if TLS container secret ACLs not
  set up correctly

Status in octavia:
  New

Bug description:
  I'm hoping this is something that will go away with the neutron-lbaas
  and Octavia merge.

  Create a self-signed certificate like so:

  openssl genrsa -des3 -out self-signed_encrypted.key 2048
  openssl rsa -in self-signed_encrypted.key -out self-signed.key
  openssl req -new -x509 -days 365 -key self-signed.key -out self-signed.crt

  As the admin user, grant the demo user the ability to create cloud
  resources on the demo project:

  openstack role add --project demo --user demo creator

  Now, become the demo user:

  source ~/devstack/openrc demo demo

  As the demo user, upload the self-signed certificate to barbican:

  openstack secret store --name='test_cert' --payload-content-type='text/plain' 
--payload="$(cat self-signed.crt)"
  openstack secret store --name='test_key' --payload-content-type='text/plain' 
--payload="$(cat self-signed.key)"
  openstack secret container create --name='test_tls_container' 
--type='certificate' --secret="certificate=$(openstack secret list | awk '/ 
test_cert / {print $2}')" --secret="private_key=$(openstack secret list | awk 
'/ test_key / {print $2}')"

  As the demo user, grant access to the above secrets BUT NOT THE
  CONTAINER to the 'admin' user. In my test, the admin user has ID:
  02c0db7c648c4714971219ae81817ba7

  openstack acl user add -u 02c0db7c648c4714971219ae81817ba7 $(openstack secret 
list | awk '/ test_cert / {print $2}')
  openstack acl user add -u 02c0db7c648c4714971219ae81817ba7 $(openstack secret 
list | awk '/ test_key / {print $2}')

  Now, as the demo user, attempt to deploy a neutron-lbaas listener
  using the secret container above:

  neutron lbaas-loadbalancer-create --name lb1 private-subnet
  neutron lbaas-listener-create --loadbalancer lb1 --protocol-port 443 
--protocol TERMINATED_HTTPS --name listener1 
--default-tls-container=$(openstack secret container list | awk '/ 
test_tls_container / {print $2}')

  The neutron-lbaas command succeeds, but the Octavia deployment fails
  since it can't access the secret container.

  This is fixed if you remember to grant access to the TLS container to
  the admin user like so:

  openstack acl user add -u 02c0db7c648c4714971219ae81817ba7 $(openstack
  secret container list | awk '/ test_tls_container / {print $2}')

  However, neutron-lbaas and octavia should have similar failure
  scenarios if the permissions aren't set up exactly right in any case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1627393/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624145] Re: Octavia should ignore project_id on API create commands (except load_balancer)

2016-12-05 Thread Michael Johnson
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624145

Title:
  Octavia should ignore project_id on API create commands (except
  load_balancer)

Status in octavia:
  New

Bug description:
  Right now, the Octavia API allows the specification of the project_id
  on the create commands for the following objects:

  listener
  health_monitor
  member
  pool

  However, all of these objects should be inheriting their project_id
  from the ancestor load_balancer object. Allowing the specification of
  project_id when we create these objects could lead to a situation
  where the descendant object's project_id is different from said
  object's ancestor load_balancer project_id.

  We don't want to break our API's backward compatibility for at least
  two release cycles, so for now we should simply ignore this parameter
  if specified (and get it from the load_balancer object in the database
  directly), and insert TODO notes in the API code to remove the ability
  to specify project_id after a certain openstack release.
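
  In other words, something along these lines (a sketch of the intended
  behaviour, not the actual Octavia code):

  # Stand-in data access for the sketch; the real code would read the
  # load balancer row from the Octavia database.
  LOAD_BALANCERS = {'lb-uuid': {'project_id': 'project-A'}}

  def effective_project_id(load_balancer_id, requested_project_id=None):
      # requested_project_id is accepted only for backward compatibility and
      # is deliberately ignored; the project always comes from the parent LB.
      return LOAD_BALANCERS[load_balancer_id]['project_id']

  assert effective_project_id('lb-uuid', requested_project_id='project-B') == 'project-A'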

  We should also update the Octavia driver in neutron_lbaas to stop
  specifying the project_id on descendant object creation.

  This bug is related to https://bugs.launchpad.net/octavia/+bug/1624113

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1624145/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596162] Re: lbaasv2:Member can be created with the same ip as vip in loadbalancer

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Low

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596162

Title:
  lbaasv2:Member can be created with the same ip as vip in loadbalancer

Status in octavia:
  In Progress

Bug description:
  Create a loadbalancer:
  [root@CrtlNode247 ~(keystone_admin)]# neutron lbaas-loadbalancer-show 
ebe0a748-7797-44fa-be09-1890ca2f5c1f
  +-++
  | Field   | Value  |
  +-++
  | admin_state_up  | True   |
  | description ||
  | id  | ebe0a748-7797-44fa-be09-1890ca2f5c1f   |
  | listeners   | {"id": "3cfe5262-7e25-4433-a342-93eb118049f9"} |
  | | {"id": "a7c014d4-8c57-43ee-aeab-539847a37f43"} |
  | | {"id": "794efa5b-1e5d-4182-857a-6d8415973007"} |
  | | {"id": "6b64350e-335f-4aa5-b2dd-e86adcdbc0b3"} |
  | name| lb1|
  | operating_status| ONLINE |
  | provider| zxveglb|
  | provisioning_status | ACTIVE |
  | tenant_id   | 6403670bcb0f45cba4cb732a9a936da4   |
  | vip_address | 193.168.1.200  |
  | vip_port_id | f401e0ae-2537-4018-9252-742c16fc22ef   |
  | vip_subnet_id   | 73bee51e-7ea3-44ea-8d98-cf778cd171e0   |
  +-++

  The VIP address is 193.168.1.200.
  Then create a listener and pool.
  Then create a member whose IP is set to the same 193.168.1.200:
  [root@CrtlNode247 ~(keystone_admin)]# neutron lbaas-member-create --subnet 
73bee51e-7ea3-44ea-8d98-cf778cd171e0 --address 193.168.1.200 --protocol-port 80 
pool1
  Created a new member:
  ++--+
  | Field  | Value|
  ++--+
  | address| 193.168.1.200|
  | admin_state_up | True |
  | id | e377f7a5-e2d8-493d-ad61-c2ab25ed7c0b |
  | protocol_port  | 80   |
  | subnet_id  | 73bee51e-7ea3-44ea-8d98-cf778cd171e0 |
  | tenant_id  | 6403670bcb0f45cba4cb732a9a936da4 |
  | weight | 1|
  ++--+
  It succeeds, even though the member address is the same as the VIP address.
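
  A sketch of the validation that appears to be missing (illustrative only,
  not the neutron-lbaas code):

  def validate_member_is_not_vip(member_address, member_subnet_id, loadbalancer):
      # Reject members whose address is the load balancer's own VIP.
      if (member_address == loadbalancer['vip_address']
              and member_subnet_id == loadbalancer['vip_subnet_id']):
          raise ValueError('member address %s is the load balancer VIP'
                           % member_address)

  validate_member_is_not_vip(
      '193.168.1.200',
      '73bee51e-7ea3-44ea-8d98-cf778cd171e0',
      {'vip_address': '193.168.1.200',
       'vip_subnet_id': '73bee51e-7ea3-44ea-8d98-cf778cd171e0'})  # raises ValueError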

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1596162/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583955] Re: provisioning_status of loadbalancer is always PENDING_UPDATE when following these steps

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Medium

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583955

Title:
  provisioning_status of loadbalancer is always PENDING_UPDATE  when
  following these steps

Status in octavia:
  New

Bug description:
  The issue is in the kilo branch.

  Follow these steps:
  1. update admin_state_up of the loadbalancer to False
  2. restart the lbaas agent
  3. update admin_state_up of the loadbalancer to True

  Afterwards the provisioning_status of the loadbalancer stays PENDING_UPDATE
  forever.

  agent log is:
  2013-11-20 12:33:54.358 12601 ERROR oslo_messaging.rpc.dispatcher 
[req-add12f1f-f693-4f0b-9eae-5204d8a50a3f ] Exception during message handling: 
An unknown exception occurred.
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 
282, in update_loadbalancer
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher driver 
= self._get_driver(loadbalancer.id)
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 
168, in _get_driver
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher raise 
DeviceNotFoundOnAgent(loadbalancer_id=loadbalancer_id)
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher 
DeviceNotFoundOnAgent: An unknown exception occurred.
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1583955/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584209] Re: Neutron-LBaaS v2: PortID should be returned with Loadbalancer resource (API)

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Status: In Progress => Incomplete

** Changed in: neutron
   Importance: Undecided => Wishlist

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1584209

Title:
  Neutron-LBaaS v2: PortID should be returned with Loadbalancer resource
  (API)

Status in octavia:
  Incomplete

Bug description:
  When creating a new loadbalancer with lbaas v2 (Octavia provider), I would
  like to create a floating IP attached to the loadbalancer's VIP port.
  Currently I have to look up the port ID based on the IP address associated
  with the loadbalancer.  It would greatly simplify the workflow if the port
  ID were returned in the loadbalancer API, similar to the vip API in lbaas
  v1.
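
  The current workaround looks roughly like this (sketch only; the keystone
  session, VIP address, and external network ID are placeholders, and the
  python-neutronclient list_ports/create_floatingip calls are the standard
  port and floating IP APIs):

  from neutronclient.v2_0 import client as neutron_client

  neutron = neutron_client.Client(session=keystone_session)  # placeholder session

  # vip_address comes from the load balancer that was just created.
  vip_port_id = next(
      port['id']
      for port in neutron.list_ports()['ports']
      if any(ip['ip_address'] == vip_address for ip in port['fixed_ips']))

  neutron.create_floatingip({'floatingip': {
      'floating_network_id': external_net_id,   # placeholder
      'port_id': vip_port_id}})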

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1584209/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1551282] Re: devstack launches extra instance of lbaas agent

2016-12-05 Thread Michael Johnson
This was finished here: https://review.openstack.org/#/c/358255/

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551282

Title:
  devstack launches extra instance of lbaas agent

Status in neutron:
  Fix Released

Bug description:
  When using the lbaas devstack plugin, two lbaas agents will be launched:
  one by devstack neutron-legacy, and another by the neutron-lbaas devstack plugin.

  enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas
  ENABLED_SERVICES+=,q-lbaas

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551282/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552119] Re: NSxv LBaaS stats error

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552119

Title:
  NSxv LBaaS stats error

Status in neutron:
  Fix Released

Bug description:
  - OpenStack Kilo (2015.1.1-1)
  - NSXv 6.2.1

  I see the following errors in neutron.log after enabling LBaaS:

  
  2016-03-02 07:36:19.145 27350 INFO neutron.wsgi 
[req-28324239-c925-4602-91c3-24378466d8ae ] 192.168.0.2 - - [02/Mar/2016 
07:36:19] "GET /v2.0/lb/pools/ba3c7e8a-81bf-4459-ad85-224b9f92594f/stats.json 
HTTP/1.1" 500 378 2.441363
  2016-03-02 07:36:19.176 27349 INFO neutron.wsgi [-] (27349) accepted 
('192.168.0.2', 54704)
  2016-03-02 07:36:21.740 27349 ERROR neutron.api.v2.resource 
[req-94a3960b-b01f-4665-a733-1621d7f7cbfa ] stats failed
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 83, in 
resource
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 131, in wrapper
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 209, in 
_handle_action
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource return 
getattr(self._plugin, name)(*arg_list, **kwargs)
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/plugin.py",
 line 336, in stats
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource stats_data = 
driver.stats(context, pool_id)
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/vmware/edge_driver.py",
 line 199, in stats
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource return 
self._nsxv_driver.stats(context, pool_id, pool_mapping)
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/vshield/edge_loadbalancer_driver.py",
 line 786, in stats
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource pools_stats = 
lb_stats.get('pool', [])
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource AttributeError: 
'tuple' object has no attribute 'get'
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1552119/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468457] Re: Invalid Tempest tests cause A10 CI to fail

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Medium

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468457

Title:
  Invalid Tempest tests cause A10 CI to fail

Status in octavia:
  New

Bug description:
  The following tests will not pass in A10's CI due to what appear to be 
incorrect tests.
  
neutron_lbaas.tests.tempest.v2.api.test_pools_admin.TestPools.test_create_pool_for_another_tenant[smoke]
  
neutron_lbaas.tests.tempest.v2.api.test_pools_admin.TestPools.test_create_pool_missing_tenant_id_for_admin[smoke]
  
neutron_lbaas.tests.tempest.v2.api.test_pools_admin.TestPools.test_create_pool_missing_tenant_id_for_other_tenant[smoke]
  
neutron_lbaas.tests.tempest.v2.api.test_pools_admin.TestPools.test_create_pool_using_empty_tenant_field[smoke]

  --
  I'm creating this bug so I have one to reference when I @skip the tests per 
dougwig.

  The empty tenant ID tests need to be modified to expect an error
  condition, but this is not possible as Neutron's request handling
  fills in missing tenant IDs with the tenant ID of the logged in user.
  This is an error condition and should be handled as such.  Fixing it
  in the request handling is going to require fixes in a lot more places
  in Neutron, I believe.  I'll look for other similar tests that would
  expose such functionality.
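
  For reference, the skip will look something like this (using the stdlib
  decorator; the actual helper used in the neutron_lbaas tempest tests may
  differ):

  import unittest

  class TestPools(object):   # stand-in for the real test class

      @unittest.skip("Skipped per bug 1468457: tenant_id handling tests are invalid")
      def test_create_pool_using_empty_tenant_field(self):
          pass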

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1468457/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464229] Re: LbaasV2 Health monitor status

2016-12-05 Thread Michael Johnson
Currently you can view the health status by using the load balancer
status API/command.

neutron lbaas-loadbalancer-status lb1

I am setting this to wishlist as I think there is a valid point that the
show commands should include the operating status.

** Changed in: neutron
   Importance: Undecided => Wishlist

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464229

Title:
  LbaasV2 Health monitor status

Status in octavia:
  In Progress

Bug description:
  lbaasv2 healthmonitor:

  We have no way to see if an LbaasV2 health monitor is successful or failed.
  Additionally, we have no way to see if a VM in an lbaasv2 pool is up or down
  (from an LbaasV2 point of view).

  neutron lbaas-pool-show should show the HealthMonitor status for VMs.

  kilo
  rhel7.1
  python-neutron-2015.1.0-1.el7ost.noarch
  openstack-neutron-openvswitch-2015.1.0-1.el7ost.noarch
  python-neutronclient-2.4.0-1.el7ost.noarch
  openstack-neutron-2015.1.0-1.el7ost.noarch
  openstack-neutron-ml2-2015.1.0-1.el7ost.noarch
  openstack-neutron-lbaas-2015.1.0-3.el7ost.noarch
  openstack-neutron-fwaas-2015.1.0-3.el7ost.noarch
  openstack-neutron-common-2015.1.0-1.el7ost.noarch
  python-neutron-lbaas-2015.1.0-3.el7ost.noarch
  python-neutron-fwaas-2015.1.0-3.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1464229/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495430] Re: delete lbaasv2 can't delete lbaas namespace automatically.

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => High

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495430

Title:
  delete lbaasv2 can't delete lbaas namespace automatically.

Status in octavia:
  In Progress

Bug description:
  I tried lbaas v2 in my environment and found lots of orphan lbaas
  namespaces. Looking at the code, the lbaas instance is undeployed when the
  listener is deleted, and everything is removed except the namespace.
  However, when the loadbalancer itself is deleted, the namespace is removed
  automatically.
  The behavior is not consistent; the namespace should also be deleted when
  the listener is deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1495430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498130] Re: LBaaSv2: Can't delete the Load balancer and also dependant entities if the load balancer provisioning_status is in PENDING_UPDATE

2016-12-05 Thread Michael Johnson
Marking this as invalid, as it is by design that actions are not allowed on
load balancers in PENDING_* states.
PENDING_* means an action against that load balancer (DELETE or UPDATE) is 
already in progress.

As for load balancers getting stuck in a PENDING_* state, many bugs have been 
cleaned up for that situation.  If you find a situation that leads to a load 
balancer stuck in a PENDING_* state, please report that as a new bug.
Operators can clear load balancers stuck in PENDING_* by manually updating the 
database record for the resource.

** Project changed: neutron => octavia

** Changed in: octavia
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498130

Title:
  LBaaSv2: Can't  delete the Load balancer and also dependant entities
  if the load balancer provisioning_status is  in PENDING_UPDATE

Status in octavia:
  Invalid

Bug description:
  If the load balancer provisioning_status is PENDING_UPDATE, you cannot
  delete the loadbalancer or its dependent entities, such as the listener or
  pool.

   neutron -v lbaas-listener-delete 6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6
  DEBUG: keystoneclient.session REQ: curl -g -i -X GET 
http://9.197.47.200:5000/v2.0 -H "Accept: application/json" -H "User-Agent: 
python-keystoneclient"
  DEBUG: keystoneclient.session RESP: [200] content-length: 338 vary: 
X-Auth-Token connection: keep-alive date: Mon, 21 Sep 2015 18:35:55 GMT 
content-type: application/json x-openstack-request-id: 
req-952f21b0-81bf-4e0f-a6c8-b3fc13ac4cd2
  RESP BODY: {"version": {"status": "stable", "updated": 
"2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": 
[{"href": "http://9.197.47.200:5000/v2.0/;, "rel": "self"}, {"href": 
"http://docs.openstack.org/;, "type": "text/html", "rel": "describedby"}]}}

  DEBUG: neutronclient.neutron.v2_0.lb.v2.listener.DeleteListener 
run(Namespace(id=u'6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6', 
request_format='json'))
  DEBUG: keystoneclient.auth.identity.v2 Making authentication request to 
http://9.197.47.200:5000/v2.0/tokens
  DEBUG: keystoneclient.session REQ: curl -g -i -X GET 
http://9.197.47.200:9696/v2.0/lbaas/listeners.json?fields=id=6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6
 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}9ea944020f06fa79f4b6db851dbd9e69aca65d58"
  DEBUG: keystoneclient.session RESP: [200] date: Mon, 21 Sep 2015 18:35:56 GMT 
connection: keep-alive content-type: application/json; charset=UTF-8 
content-length: 346 x-openstack-request-id: 
req-fd7ee22b-f776-4ebd-94c6-7548a5aff362
  RESP BODY: {"listeners": [{"protocol_port": 100, "protocol": "TCP", 
"description": "", "sni_container_ids": [], "admin_state_up": true, 
"loadbalancers": [{"id": "ab8f76ec-236f-4f4c-b28e-cd7bfee48cd2"}], 
"default_tls_container_id": null, "connection_limit": 100, "default_pool_id": 
null, "id": "6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6", "name": "listener100"}]}

  DEBUG: keystoneclient.session REQ: curl -g -i -X DELETE 
http://9.197.47.200:9696/v2.0/lbaas/listeners/6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6.json
 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}9ea944020f06fa79f4b6db851dbd9e69aca65d58"
  DEBUG: keystoneclient.session RESP:
  DEBUG: neutronclient.v2_0.client Error message: {"NeutronError": {"message": 
"Invalid state PENDING_UPDATE of loadbalancer resource 
ab8f76ec-236f-4f4c-b28e-cd7bfee48cd2", "type": "StateInvalid", "detail": ""}}
  ERROR: neutronclient.shell Invalid state PENDING_UPDATE of loadbalancer 
resource ab8f76ec-236f-4f4c-b28e-cd7bfee48cd2
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/neutronclient/shell.py", line 766, 
in run_subcommand
  return run_command(cmd, cmd_parser, sub_argv)
File "/usr/lib/python2.7/site-packages/neutronclient/shell.py", line 101, 
in run_command
  return cmd.run(known_args)
File 
"/usr/lib/python2.7/site-packages/neutronclient/neutron/v2_0/__init__.py", line 
581, in run
  obj_deleter(_id)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
102, in with_params
  ret = self.function(instance, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
932, in delete_listener
  return self.delete(self.lbaas_listener_path % (lbaas_listener))
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
289, in delete
  headers=headers, params=params)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
270, in retry_request
  headers=headers, params=params)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
211, in do_request
  self._handle_fault_response(status_code, replybody)
File 

[Yahoo-eng-team] [Bug 1440285] Re: When neutron lbaas agent is not running, 'neutron lb*’ commands must display an error instead of "404 Not Found"

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Low

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440285

Title:
  When neutron lbaas agent is not running, 'neutron lb*’ commands must
  display an error instead of "404 Not Found"

Status in octavia:
  Confirmed

Bug description:
  When neutron lbaas agent is not running, all the ‘neutron lb*’
  commands display "404 Not Found". This makes the user think that
  something is wrong with the lbaas agent (when it is not even
  running!).

  Instead, when neutron lbaas agent is not running, an error like
  “Neutron Load Balancer Agent not running” must be displayed so the
  user knows that the lbaas agent must be started first.

  The ‘ps’ command below shows that the neutron lbaas agent is not
  running.

  $ ps aux | grep lb
  $

  $ neutron lb-healthmonitor-list
  404 Not Found
  The resource could not be found.

  $ neutron lb-member-list
  404 Not Found
  The resource could not be found.

  $ neutron lb-pool-list
  404 Not Found
  The resource could not be found.

  $ neutron lb-vip-list
  404 Not Found
  The resource could not be found.

  $ neutron lbaas-healthmonitor-list
  404 Not Found
  The resource could not be found.

  $ neutron lbaas-listener-list
  404 Not Found
  The resource could not be found.

  $ neutron lbaas-loadbalancer-list
  404 Not Found
  The resource could not be found.

  $ neutron lbaas-pool-list
  404 Not Found
  The resource could not be found.

  $ neutron --version
  2.3.11

  =

  Below are the neutron verbose messages that show "404 Not Found".

  $ neutron -v lb-healthmonitor-list
  DEBUG: keystoneclient.session REQ: curl -g -i -X GET 
http://192.168.122.205:5000/v2.0/ -H "Accept: application/json" -H "User-Agent: 
python-keystoneclient"
  DEBUG: keystoneclient.session RESP: [200] content-length: 341 vary: 
X-Auth-Token keep-alive: timeout=5, max=100 server: Apache/2.4.7 (Ubuntu) 
connection: Keep-Alive date: Sat, 04 Apr 2015 04:37:54 GMT content-type: 
application/json x-openstack-request-id: 
req-95c6d1e1-02a7-4077-8ed2-0cb4f574a397
  RESP BODY: {"version": {"status": "stable", "updated": 
"2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": 
[{"href": "http://192.168.122.205:5000/v2.0/;, "rel": "self"}, {"href": 
"http://docs.openstack.org/;, "type": "text/html", "rel": "describedby"}]}}

  DEBUG: stevedore.extension found extension EntryPoint.parse('table = 
cliff.formatters.table:TableFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('csv = 
cliff.formatters.commaseparated:CSVLister')
  DEBUG: stevedore.extension found extension EntryPoint.parse('yaml = 
clifftablib.formatters:YamlFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('json = 
clifftablib.formatters:JsonFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('html = 
clifftablib.formatters:HtmlFormatter')
  DEBUG: neutronclient.neutron.v2_0.lb.healthmonitor.ListHealthMonitor 
get_data(Namespace(columns=[], fields=[], formatter='table', max_width=0, 
page_size=None, quote_mode='nonnumeric', request_format='json', 
show_details=False, sort_dir=[], sort_key=[]))
  DEBUG: keystoneclient.auth.identity.v2 Making authentication request to 
http://192.168.122.205:5000/v2.0/tokens
  DEBUG: keystoneclient.session REQ: curl -g -i -X GET 
http://192.168.122.205:9696/v2.0/lb/health_monitors.json -H "User-Agent: 
python-neutronclient" -H "Accept: application/json" -H "X-Auth-Token: 
{SHA1}23f2a54d0348e6bfc5364565ece4baf2e2148fa8"
  DEBUG: keystoneclient.session RESP:
  DEBUG: neutronclient.v2_0.client Error message: 404 Not Found

  The resource could not be found.

  ERROR: neutronclient.shell 404 Not Found

  The resource could not be found.

  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py", line 
760, in run_subcommand
  return run_command(cmd, cmd_parser, sub_argv)
    File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py", line 
100, in run_command
  return cmd.run(known_args)
    File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/common/command.py", line 
29, in run
  return super(OpenStackCommand, self).run(parsed_args)
    File "/usr/local/lib/python2.7/dist-packages/cliff/display.py", line 91, in 
run
  column_names, data = self.take_action(parsed_args)
    File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/common/command.py", line 
35, in take_action
  return self.get_data(parsed_args)
    File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py",
 line 691, in get_data
  data = self.retrieve_list(parsed_args)
    File 

[Yahoo-eng-team] [Bug 1426248] Re: lbaas v2 member create should not require subnet_id

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Wishlist

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426248

Title:
  lbaas v2 member create should not require subnet_id

Status in octavia:
  Incomplete

Bug description:
  subnet_id on a member is currently required.  It should be optional
  and if not provided, it can be assumed the member can be reached by
  the load balancer (through the loadbalancer's vip subnet)

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1426248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603458] Re: Cannot Delete loadbalancers due to undeleteable pools

2016-12-05 Thread Michael Johnson
I agree with Brandon here, this is an lbaas-dashboard issue, so marking
the neutron side invalid.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603458

Title:
  Cannot Delete loadbalancers due to undeleteable pools

Status in OpenStack Dashboard (Horizon):
  New
Status in neutron:
  Invalid
Status in Neutron LBaaS Dashboard:
  New

Bug description:
  To delete an LBaaSv2 loadbalancer, you must remove all the members
  from the pool, then delete the pool, then delete the listener, then
  you can delete the loadbalancer. Currently in Horizon you can do all
  of those except delete the pool. Since you can't delete the pool, you
  can't delete the listener, and therefore can't delete the
  loadbalancer.

  Either deleting the listener should trigger the pool delete too (since
  they're 1:1) or the Horizon Wizard for Listener should have a delete
  pool capability.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1603458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607061] Re: [RFE] Bulk LBaaS pool member operations

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1607061

Title:
  [RFE] Bulk LBaaS pool member operations

Status in octavia:
  Triaged

Bug description:
  [Use-cases]
  - Configuration Management
  Perform administrative operations on a collection of members.

  [Limitations]
  Members must currently be created/modified/deleted one at a time.  This can 
be accomplished programmatically via neutron-api but is cumbersome through the 
CLI.

  [Enhancement]
  Embellish neutron-api (CLI) and GUI to support management of a group of 
members via one operation.  Pitching a few ideas on how to do this.

  - Extend existing API
  Add optional filter parameter to neutron-api to find and modify any member 
caught by the filter.

  - Create new API
  Create new lbaas-members-* commands that makes it clear we're changing a 
collection.  But leave the lbaas-pool-* command alone which are organizing 
collections.

  - Base inheritance
  Create new lbaas-member-base-* commands to define default settings then 
extend lbaas-member-* to specify to the base.  Updating the base would update 
all members that have not overridden the default.
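
  To make the limitation concrete, here is a minimal python-neutronclient
  sketch (credentials, UUIDs and addresses are placeholders) showing that
  today each member is its own API call; any of the proposals above would
  collapse this loop into a single request.

  # Illustrative sketch only; not a proposed implementation.
  from neutronclient.v2_0 import client

  neutron = client.Client(username='admin', password='secret',
                          tenant_name='admin',
                          auth_url='http://controller:5000/v2.0')

  pool_id = 'POOL-UUID'
  backends = [('10.0.0.11', 80), ('10.0.0.12', 80), ('10.0.0.13', 80)]

  # One API round trip per member; a bulk or filtered operation would
  # replace this loop with a single request.
  for address, port in backends:
      neutron.create_lbaas_member(pool_id, {'member': {
          'address': address,
          'protocol_port': port,
          'subnet_id': 'SUBNET-UUID',
      }})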

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1607061/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607052] Re: [RFE] Per-server port for LBaaS Health Monitoring

2016-12-05 Thread Michael Johnson
Is this a duplicate to https://bugs.launchpad.net/octavia/+bug/1541579?

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1607052

Title:
  [RFE] Per-server port for LBaaS Health Monitoring

Status in octavia:
  Triaged

Bug description:
  [Use-cases]
  - Hierarchical health monitoring
  The operator wants to monitor member health for the pool separately from 
application health.

  - Micro-service deployment
  An application is deployed as docker containers, which consume an ephemeral 
port.

  [Limitations]
  The LBaaSv2 health monitor is attached to the pool, but it uses the 
protocol-port set in the member object.  Certain operators wish to monitor the 
health of the member itself separately from, but in addition to, the health of 
the service/application.  The current model limits the granularity at which the 
operator can gauge the health of their cloud.

  [Enhancement]
  Add an optional application port field in the member object.  Default is 
.  Enhance health monitor creation with an optional parameter to use the 
service or application port.  Default is .
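
  To make the idea concrete, a purely hypothetical sketch of a member with a
  separate monitoring port; the 'monitor_port' field name is an assumption,
  not an agreed API.

  # Hypothetical illustration only; 'monitor_port' does not exist today.
  member = {'member': {
      'address': '10.0.0.21',
      'protocol_port': 8080,   # where application traffic is served
      'monitor_port': 9000,    # hypothetical: where the health check probes
      'subnet_id': 'SUBNET-UUID',
  }}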

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1607052/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1611509] Re: lbaasv2 doesn't support "https" keystone endpoint

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1611509

Title:
  lbaasv2 doesn't support "https" keystone endpoint

Status in octavia:
  Confirmed

Bug description:
  I am trying to enable lbaasv2 using octavia driver in one of our mitaka 
deployment. And we got the error
  {code}
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin 
[req-87d34869-7fec-4269-894b-81a4f1771736 6928cf223a0948699fab55612678cfdc 
10d7de26713241a2b623f2028c77e8eb - - -] There was an error in the driver
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin Traceback (most recent call last):
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/plugin.py",
 line 489, in _call_driver_operation
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin driver_method(context, db_entity)
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/octavia/driver.py", 
line 118, in func_wrapper
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin 
args[0].failed_completion(args[1], args[2])
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin self.force_reraise()
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin six.reraise(self.type_, 
self.value, self.tb)
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/octavia/driver.py", 
line 108, in func_wrapper
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin r = func(*args, **kwargs)
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/octavia/driver.py", 
line 220, in create
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin 
self.driver.req.post(self._url(lb), args)
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/octavia/driver.py", 
line 150, in post
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin return self.request('POST', url, 
args)
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/octavia/driver.py", 
line 131, in request
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin token = 
self.auth_session.get_token()
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 618, in 
get_token
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin return 
(self.get_auth_headers(auth) or {}).get('X-Auth-Token')
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 597, in 
get_auth_headers
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin return auth.get_headers(self, 
**kwargs)
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/keystoneauth1/plugin.py", line 84, in 
get_headers
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin token = self.get_token(session)
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 89, in 
get_token
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin return 
self.get_access(session).auth_token
  

[Yahoo-eng-team] [Bug 1629066] Re: RFE Optionally bind load balancer instance to multiple IPs to increase available (source IP, source port) space to support > 64k connections to a single backend

2016-12-05 Thread Michael Johnson
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1629066

Title:
  RFE Optionally bind load balancer instance to multiple IPs to increase
  available (source IP, source port) space to support > 64k connections
  to a single backend

Status in octavia:
  Triaged

Bug description:
  This limitation arose in while testing Neutron LBaaS using the HAProxy
  namespace driver, but applies to other proxying type backends
  including Octavia. A single load balancer instance (network namespace,
  or amphora) can only establish as many concurrent TCP connections to a
  single pool member as there are available distinct source IP, source
  TCP port combinations on the load balancing instance (network
  namespace or amphora). The source TCP port range is limited by the
  configured ephemeral port range, but this can be tuned to include all
  the unprivileged TCP ports (1024 - 65535) via sysctl. The available
  source addresses are limited to IP addresses bound to the instance,
  for the load balancing instance must be able to receive the response
  from the pool member.

  In short the total number of concurrent TCP connections to any single
  backend is limited to 64k times the number of available source IP
  addresses. This is because each TCP connection is identified by the
  4-tuple: (src-ip, src-port, dst-ip, dst-port) and (dst-ip, dst-port)
  is used to define a specific pool member. TCP ports are limited by the
  16bit field in the TCP protocol definition. In order to further
  increase the number of possible connections from a load balancing
  instance to a single backend we must increase this tuple space by
  increasing the number of available source IP addresses.
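
  A quick worked example of that arithmetic, assuming the ephemeral port
  range has been widened to all unprivileged ports via sysctl:

  # Rough arithmetic sketch of the limit described above.
  ephemeral_ports = 65535 - 1024 + 1          # 64512 usable source ports
  for source_ips in (1, 2, 4):
      print(source_ips, source_ips * ephemeral_ports)
  # 1 64512   -> roughly the "64k" ceiling with a single source address
  # 2 129024
  # 4 258048  -> extra fixed-ips scale the limit linearly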

  Therefore, I propose we offer an option to attach multiple fixed-ips
  in the same subnet to the Neutron port of the load balancing instance
  facing the pool member. This would increase the tuple space allowing
  more than 64k concurrent connections to a single backend.

  While this limitation could be addressed by increasing the number of
  listening TCP ports on the pool member and adding additional members
  with the same IP address and different TCP ports, not all applications
  are suitable to this modification.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1629066/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585680] Re: neutron-lbaas doesn't have tempest plugin

2016-12-05 Thread Michael Johnson
This was fixed in: https://review.openstack.org/#/c/317862/

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585680

Title:
  neutron-lbaas doesn't have tempest plugin

Status in neutron:
  Fix Released

Bug description:
  Puppet OpenStack CI is interested in running the neutron-lbaas Tempest
  tests, but this currently does not work because neutron-lbaas is missing
  a Tempest plugin and its entry point, so test discovery fails.

  Right now, to run tempest we need to go into the neutron-lbaas directory and 
run tox inside it, etc.
  That is not the way to go; other projects (Neutron itself included) already 
provide tempest plugins.

  This is an official RFE to add a Tempest plugin to neutron-lbaas so we can
  run the tests in a way consistent with other projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585680/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585890] Re: No check that member address whether is in the member subnet

2016-12-05 Thread Michael Johnson
This could be a valid use case where the address is accessible via a
route on the specified subnet.

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585890

Title:
  No check that member address whether is in the member subnet

Status in octavia:
  Confirmed

Bug description:
  issue is in kilo branch

  The member subnet CIDR is 20.0.0.0/24 but the member address is 30.0.0.11,
  yet the member was configured successfully.

  [root@opencos2 v2(keystone_admin)]# neutron subnet-show 
502be3ac-f8d8-43b3-af5b-f0feada72aed
  +---++
  | Field | Value  |
  +---++
  | allocation_pools  | {"start": "20.0.0.2", "end": "20.0.0.254"} |
  | cidr  | 20.0.0.0/24|
  | dns_nameservers   ||
  | enable_dhcp   | True   |
  | gateway_ip| 20.0.0.1   |
  | host_routes   ||
  | id| 502be3ac-f8d8-43b3-af5b-f0feada72aed   |
  | ip_version| 4  |
  | ipv6_address_mode ||
  | ipv6_ra_mode  ||
  | name  ||
  | network_id| 2e424980-14f0-4405-92dc-e4c57c32235a   |
  | subnetpool_id ||
  | tenant_id | be58eaec789d44f296a65f96b944a9f5   |
  +---++
  [root@opencos2 v2(keystone_admin)]# neutron lbaas-member-create pool101 
--subnet 502be3ac-f8d8-43b3-af5b-f0feada72aed --address 30.0.0.11 
--protocol-port 80
  Created a new member:
  ++--+
  | Field  | Value|
  ++--+
  | address| 30.0.0.11|
  | admin_state_up | True |
  | id | 1dcc-2f00-4fd7-9a68-6031a96a172b |
  | protocol_port  | 80   |
  | subnet_id  | 502be3ac-f8d8-43b3-af5b-f0feada72aed |
  | tenant_id  | be58eaec789d44f296a65f96b944a9f5 |
  | weight | 1|
  ++--+
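
  A minimal sketch of the missing check using only the Python stdlib; as
  noted in the comment above, a hard rejection could break the routed-member
  use case, so a warning or an opt-in strict mode may be more appropriate
  than an error.

  # Minimal validation sketch (stdlib only); behaviour on mismatch is a
  # policy decision.
  import ipaddress

  def member_address_in_subnet(address, subnet_cidr):
      return ipaddress.ip_address(address) in ipaddress.ip_network(subnet_cidr)

  print(member_address_in_subnet('30.0.0.11', '20.0.0.0/24'))  # False
  print(member_address_in_subnet('20.0.0.15', '20.0.0.0/24'))  # True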

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1585890/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1595416] Re: Add new config attribute to Radware driver

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1595416

Title:
  Add new config attribute to Radware driver

Status in octavia:
  In Progress

Bug description:
  Need to add a new configuration attribute for Radware LBaaS v2 driver.
  add_allowed_address_pairs

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1595416/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1539717] Re: [RFE] Add F5 plugin driver to neutron-lbaas

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1539717

Title:
  [RFE] Add F5 plugin driver to neutron-lbaas

Status in octavia:
  Incomplete

Bug description:
  This is an RFE for adding a plugin driver to neutron-lbaas to support
  F5 Networks appliances. Our intent is to provide an LBaaSv2 driver
  that fully supports the LBaaS v2 design, and will be similar to other
  vendor implementations that are already part of neutron-lbaas (e.g.
  A10 Networks, Brocade, Kemp Technologies, etc.).  In doing so, F5
  Networks hopes to expand the use of OpenStack for load balancing
  services, and to provide a migration path to LBaaSv2 for customers
  currently using LBaaSv1.

  Note: by mistake we already created a blueprint request,
  https://blueprints.launchpad.net/neutron/+spec/f5-lbaasv2-driver, but
  understand that  this RFE needs to be discussed and accepted first.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1539717/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537169] Re: LBaaS should populate DNS Name on creating LoadBalancer

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1537169

Title:
  LBaaS should populate DNS Name on creating LoadBalancer

Status in octavia:
  In Progress

Bug description:
  With the merge of https://blueprints.launchpad.net/neutron/+spec
  /external-dns-resolution (https://review.openstack.org/#/c/212213/)

  Neutron supports a name parameter on a port. This can be used to
  create a DNS record for the port, both locally on the network and
  globally in Designate.

  When creating a loadbalancer, LBaaS should populate this field.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1537169/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523219] Re: [RFE] Add support X-Forwarded-For header in LBaaSv2

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523219

Title:
  [RFE] Add support X-Forwarded-For header in LBaaSv2

Status in octavia:
  In Progress

Bug description:
  X-Forwarded-For headers are used by proxies and load balancers to pass on the 
original client's IP to the server, while NATing the request.
  This is very handy for some applications but has some overheads and therefore 
has to be configurable.
  LBaaSv2 API doesn't offer support for enabling XFF header.
  Without an XFF header, the members cannot determine which IP address 
originated the NATed request - e.g. for auditing purposes.
  Changes required are addition of a boolean property to the listener - which 
will indicate that an XFF header should be appended to the requests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1523219/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541579] Re: [RFE] Port based HealthMonitor in neutron_lbaas

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541579

Title:
  [RFE] Port based HealthMonitor in neutron_lbaas

Status in octavia:
  Triaged

Bug description:
  Summary:
  Neutron LBaaS lacks port-based monitoring.

  Description:
  The current HealthMonitor is attached to a pool and monitors the member's 
service port by default. Some use cases run their health-check service on a 
different port than the service port, and that type of ECV cannot be expressed 
with the current HealthMonitor object.

  Expected:
  We should add a new field called 'destination' in the form 'ip:port', since 
most external LBs support this feature and organizations use it, and a pool can 
have multiple HealthMonitors attached to it.

  'destination': {'allow_post': True, 'allow_put': True,
  'validate': {'type:string': None},
  'default': '*:*',
  'is_visible': True},

  Version: Kilo/Liberty

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1541579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565511] Re: Loadbalancers should be rescheduled when a LBaaS agent goes offline

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1565511

Title:
  Loadbalancers should be rescheduled when a LBaaS agent goes offline

Status in octavia:
  In Progress

Bug description:
  Currently, when an LBaaS agent goes offline the loadbalancers remain under 
that agent.
  Following the same logic as 'allow_automatic_l3agent_failover', the neutron 
server should reschedule loadbalancers away from dead lbaas agents.

  This should be enabled with an option as well, such as:
  allow_automatic_lbaas_agent_failover

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1565511/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585266] Re: [RFE] Can't specify an error type on LBaaS objects that fail to provision

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585266

Title:
  [RFE] Can't specify an error type on LBaaS objects that fail to
  provision

Status in octavia:
  Triaged

Bug description:
  LBaaSv2 objects have a provisioning_status field that can indicate
  when provisioning has failed, but there is no way to describe to the
  user what the error was.  The ability to specify an error message as a
  parameter to the BaseManagerMixin.failed_completion() function that
  can then be returned in "show" calls to the object would save users
  and administrators a lot of time when debugging issues.
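
  A rough sketch of the requested behaviour; apart from failed_completion()
  and provisioning_status, every name below (in particular
  provisioning_status_reason) is an assumption, not existing neutron-lbaas
  code.

  # Illustrative sketch only, not the real driver mixin.
  class FakeLoadBalancer(object):
      provisioning_status = 'PENDING_CREATE'
      provisioning_status_reason = None      # hypothetical new field

  def failed_completion(obj, error_message=None):
      obj.provisioning_status = 'ERROR'
      if error_message:
          # surfaced later by "show" calls so users can see what went wrong
          obj.provisioning_status_reason = error_message

  lb = FakeLoadBalancer()
  failed_completion(lb, 'backend rejected the VIP port')   # example message
  print(lb.provisioning_status, lb.provisioning_status_reason)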

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1585266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457556] Re: [RFE] [LBaaS] ssh connection timeout

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1457556

Title:
  [RFE] [LBaaS] ssh connection timeout

Status in octavia:
  In Progress
Status in python-neutronclient:
  Incomplete

Bug description:
  In the v2 API, we need a way to tune the load balancer connection timeouts
  so that we can have a pool of SSH servers with long-running TCP
  connections. SSH sessions can last days to weeks, and users get grumpy
  if a session times out while they are in the middle of doing something.
  Currently the timeouts are tuned to drop connections that run too long,
  regardless of whether there is traffic on the connection or not.
  This is good for HTTP, but bad for SSH.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1457556/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581876] Re: neutron lbaas v2: update of default "device_driver" inside lbaas_agent.ini

2016-12-05 Thread Michael Johnson
This is correct in the code: 
https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/agent/agent_manager.py#L38-L45
Marking invalid for neutron

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1581876

Title:
  neutron lbaas v2: update of default "device_driver" inside
  lbaas_agent.ini

Status in neutron:
  Invalid
Status in puppet-neutron:
  New

Bug description:
  Dear,

  As of Mitaka only v2 of lbaas is supported, so please update the default
  "device_driver" inside the config file /etc/neutron/lbaas_agent.ini from:

  device_driver =
  
neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver

  to

  device_driver =
  neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver

  More inside this IRC log:

  http://eavesdrop.openstack.org/irclogs/%23openstack-lbaas
  /%23openstack-lbaas.2016-02-02.log.html

  Kind regards,
  Michal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1581876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1643571] Re: lbaas data model is too recursive

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1643571

Title:
  lbaas data model is too recursive

Status in octavia:
  New

Bug description:
  this is an example of pool to_api_dict().
  http://paste.openstack.org/show/589872/
  As you can see, it has too many copies of the same objects.
  Note: there are only 3 objects in the dict.

  While from_sqlalchemy_model has some recursion protection,
  it's better to make the result shallower.

  In particular, to_dict/to_api_dict should not recurse so deeply.
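
  A minimal sketch of depth-limited serialization over a generic object
  graph, just to illustrate the "shallower" behaviour being asked for; it is
  not the neutron-lbaas data model code.

  # Illustrative only: nested objects beyond the depth limit collapse to ids.
  def to_shallow_dict(obj, depth=1):
      result = {}
      for key, value in vars(obj).items():
          if hasattr(value, '__dict__'):              # nested model object
              if depth > 0:
                  result[key] = to_shallow_dict(value, depth - 1)
              else:
                  result[key] = {'id': getattr(value, 'id', None)}
          elif isinstance(value, list):
              result[key] = [{'id': getattr(v, 'id', None)} for v in value]
          else:
              result[key] = value
      return result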

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1643571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609352] Re: LBaaS: API doesn't return correctly

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1609352

Title:
  LBaaS: API doesn't return correctly

Status in octavia:
  In Progress

Bug description:
  = Problem Description =
  I want to get all ids about LBaaS's pools.

  I use this command:

  curl -g -i -X GET http://10.0.44.233:9696/v2.0/lbaas/pools.json?fields=id \
  -H "User-Agent: python-neutronclient" \
  -H "Accept: application/json" \
  -H "X-Auth-Token: a77ea1dd7fb748448d36142ef844802d"

  But the Neutron server did not filter the response as requested. The response is:

  HTTP/1.1 200 OK
  Content-Type: application/json; charset=UTF-8
  Content-Length: 344
  X-Openstack-Request-Id: req-8ed9d992-6a4c-44ac-9c59-de65794e919f
  Date: Wed, 03 Aug 2016 10:56:18 GMT

  {"pools": [{"lb_algorithm": "ROUND_ROBIN", "protocol": "HTTP",
  "description": "", "admin_state_up": true, "session_persistence":
  null, "healthmonitor_id": null, "listeners": [{"id":
  "f8392236-e065-4aa2-a4ef-d6c6821cc038"}], "members": [{"id":
  "ea1292f4-fb6a-4594-9d13-9ff0dec865d8"}], "id": "b360fc75-b23d-
  46a3-b936-6c9480d35219", "name": ""}]}[root@server-233
  ~(keystone_admin)]

  The Neutron server returns all of the pool attributes.

  In the request I specified "fields=id" in the URL, but the Neutron
  server did not honor the field filter.
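
  In plain Python terms, the expected effect of fields=id is simple
  attribute filtering per pool, for example:

  # Expected behaviour sketch: only the requested attributes come back.
  full_pools = [{'id': 'b360fc75-b23d-46a3-b936-6c9480d35219',
                 'protocol': 'HTTP', 'lb_algorithm': 'ROUND_ROBIN'}]
  requested_fields = ['id']
  filtered = [{f: p[f] for f in requested_fields} for p in full_pools]
  print(filtered)   # [{'id': 'b360fc75-b23d-46a3-b936-6c9480d35219'}]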

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1609352/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586225] Re: No check that healthmonitor delay should >= timeout

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586225

Title:
  No check that healthmonitor delay should >= timeout

Status in octavia:
  New

Bug description:
  issue is in kilo branch

  The health monitor delay is 10 while the timeout is 12.

  This does not make sense; the timeout should not exceed the delay.


  [root@opencos2 ~(keystone_admin)]# neutron lbaas-healthmonitor-show 
6d29f448-1965-40b9-86e2-cf18d86ae6f8
  +++
  | Field  | Value  |
  +++
  | admin_state_up | True   |
  | delay  | 10 |
  | expected_codes | 305,205|
  | http_method| GET|
  | id | 6d29f448-1965-40b9-86e2-cf18d86ae6f8   |
  | max_retries| 10 |
  | pools  | {"id": "591be59b-eb81-4f1d-8ab7-b023df6cccfa"} |
  | tenant_id  | be58eaec789d44f296a65f96b944a9f5   |
  | timeout| 12 |
  | type   | PING   |
  | url_path   | /api/  |
  +++
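
  A minimal sketch of the missing validation, assuming delay and timeout are
  both expressed in seconds as in the output above:

  # Validation sketch: a check cannot usefully wait longer than the
  # interval between checks.
  def validate_health_monitor(delay, timeout):
      if timeout > delay:
          raise ValueError('timeout (%s) must not exceed delay (%s)'
                           % (timeout, delay))

  validate_health_monitor(delay=10, timeout=5)    # OK
  validate_health_monitor(delay=10, timeout=12)   # raises ValueError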

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1586225/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622793] Re: LBaaS back-end pool connection limit is 10% of listener connection limit for reference and namespace drivers

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622793

Title:
  LBaaS back-end pool connection limit is 10% of listener connection
  limit for reference and namespace drivers

Status in octavia:
  Confirmed

Bug description:
  Both the reference Octavia driver and the namespace driver use haproxy
  to deliver load balancing services with LBaaSv2. When closely looking
  at the operation of the haproxy daemons with a utility like hatop (
  https://github.com/feurix/hatop ), one can see that the connection
  limit for back-ends is exactly 10% of whatever the connection limit is
  set for the pool's listener front-ends. This behavior could cause an
  unexpectedly low effective connection limit if the user has a small
  number of back-end servers in the pool.

  From the haproxy documentation, this is because the default value of a
  backend's "fullconn" parameter is set to 10% of the sum of all front-
  ends referencing it. Specifically:

  "Since it's hard to get this value right, haproxy automatically sets it to
  10% of the sum of the maxconns of all frontends that may branch to this
  backend (based on "use_backend" and "default_backend" rules). That way it's
  safe to leave it unset."

  (Source: https://cbonte.github.io/haproxy-
  dconv/configuration-1.6.html#fullconn )

  The point of this calculation (according to the haproxy documentation)
  is to protect fragile back-end servers from spikes in load that might
  reach the front-ends' connection limits. However, for long-lasting but
  low-load connections to a small number of back-end servers through the
  load balancer, this means that the haproxy-based back-ends have an
  effective connection limit that is much smaller than what the user
  expects it to be.
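
  A worked example of the default heuristic as described in this report
  (numbers are illustrative):

  # Effective backend limit under haproxy's default fullconn heuristic.
  frontend_maxconns = [2000]                   # one listener, maxconn 2000
  default_fullconn = int(sum(frontend_maxconns) * 0.10)
  print(default_fullconn)                      # 200
  # So long-lived, low-load connections through this pool are limited to
  # ~200 concurrent connections even though the listener advertises 2000.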

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1622793/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1640265] Re: LBaaSv2 uses fixed MTU of 1500, leading to packet dropping

2016-12-05 Thread Michael Johnson
New patch is here: https://review.openstack.org/#/c/399945/

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1640265

Title:
  LBaaSv2 uses fixed MTU of 1500, leading to packet dropping

Status in octavia:
  In Progress

Bug description:
  The LBaaSv2's HAProxy plugin sets up a VIF without specifying its MTU.
  Therefore, the VIF always gets the default MTU of 1500. When attaching
  the load balancer to a VXLAN-backed project (tenant) network, which by
  default has a MTU of 1450, this leads to packet dropping.

  Pre-conditions: A standard OpenStack + Neutron deployment. A project
  (tenant) network backed by VXLAN, GRE, or other protocol that reduces
  MTU to less than 1500.

  Step-by-step reproduction steps:
  * Create a SSL load balancer, OR a TCP load balancer terminated in a SSL 
server.
  * Try connecting to it: curl -kv https://virtual_ip

  Expected behaviour: connection attempts should succeed

  Actual behaviour: 25% to 50% of connection attempts will fail to complete

  Log output: neutron-lbaasv2-agent.log displays:
  WARNING neutron.agent.linux.interface [-] No MTU configured for port 

  OpenStack version: stable/newton
  Linux distro: Ubuntu 16.04
  Deployment mechanism: OpenStack-Ansible
  Environment: multi-node

  Perceived severity: This issue causes LBaaSv2 with HAProxy to be
  unusable for SSL and other protocols which need to transfer large
  (>1450 bytes) packets, unless external network equipment is set up to
  clamp the MSS or unless the deployer is able to set path_mtu to values
  greater than 1550.
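
  A quick worked example of the mismatch, assuming the usual 50-byte VXLAN
  encapsulation overhead:

  # MTU arithmetic behind the packet loss described above.
  physical_mtu = 1500
  vxlan_overhead = 50
  tenant_network_mtu = physical_mtu - vxlan_overhead   # 1450

  vif_mtu = 1500    # what the namespace VIF gets when no MTU is passed
  print(vif_mtu > tenant_network_mtu)   # True: frames over 1450 bytes are lost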

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1640265/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565801] Re: Add process monitor for haproxy

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1565801

Title:
  Add process monitor for haproxy

Status in octavia:
  In Progress

Bug description:
  Bug 1565511 aims to solve cases where the lbaas agent goes offline.
  To have a complete high-availability solution for the lbaas agent with haproxy 
running in a namespace, we also want to handle the case where the haproxy 
process itself has stopped.

  This[1] neutron spec offers the following approach:  
  "We propose monitoring those processes, and taking a configurable action, 
making neutron more resilient to external failures."
   
  [1] 
http://specs.openstack.org/openstack/neutron-specs/specs/juno/agent-child-processes-status.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1565801/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569827] Re: LBaaS agent floods log when stats socket is not found

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1569827

Title:
  LBaaS agent floods log when stats socket is not found

Status in octavia:
  In Progress

Bug description:
  The LBaaS agent creates a lot of log messages when a new lb-pool is
  created.

  As soon as I create a lb-pool:
  neutron lb-pool-create --lb-method ROUND_ROBIN --name log-test-lb --protocol 
TCP --subnet-id a6ce9a77-53ca-4704-aaf4-fc255cc5fa74
  The log file /var/log/neutron/neutron-lbaas-agent.log starts to fill up with 
messages like these:
  2016-04-13 12:56:08.922 15373 WARNING 
neutron.services.loadbalancer.drivers.haproxy.namespace_driver [-] Stats socket 
not found for pool 37cbf817-f1ac-4d47-9a04-93c911d0afdd

  The message is correct, as the file /var/lib/neutron/lbaas/37cbf817
  -f1ac-4d47-9a04-93c911d0afdd/sock is not present. But the message
  repeats every 10s.

  The messages stop as soon as the lb-pool gets a VIP. At that point the
  file /var/lib/neutron/lbaas/37cbf817-f1ac-4d47-9a04-93c911d0afdd/sock
  is present. I would expect the lbaas agent to verify that the sock file
  can actually be expected to exist before issuing the message.

  Version:
  Openstack Juno  on SLES 11 SP3.
  The package version of openstack-neutron-lbaas-agent is 2014.2.2.dev26-0.11.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1569827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1556530] Re: neutron-lbaas needs a scenario test covering status query when health monitor is admin-state-up=False

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1556530

Title:
  neutron-lbaas needs a scenario test covering status query when health
  monitor is admin-state-up=False

Status in octavia:
  Confirmed

Bug description:
  neutron-lbaas is missing a scenario test covering the case when a
  health monitor is in admin-state-up=False and a user queries for the
  status tree.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1556530/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295424] Re: lbaas security group

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1295424

Title:
  lbaas security group

Status in octavia:
  Incomplete

Bug description:
  There seems to be no way of specifying which security group a lbaas
  vip gets. It looks to default to 'default' in Havana. When you place a
  load balancer on a backend private neutron network, it gets the
  security group member rules from 'default' which are for the wrong
  subnet.

  Manually drilling down to find the neutron port id, and then
  fixing the security_group on the vip port, does seem to work.

  There needs to be a way to specify the security groups when you create
  the vip.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1295424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540565] Re: lbaas v2 dashboard uses unclear "Admin State Up" terminology

2016-12-01 Thread Michael Johnson
** Project changed: neutron => neutron-lbaas-dashboard

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540565

Title:
  lbaas v2 dashboard uses unclear "Admin State Up" terminology

Status in Neutron LBaaS Dashboard:
  Confirmed

Bug description:
  The lbaas v2 Horizon plugin at https://github.com/openstack/neutron-
  lbaas-dashboard/ uses the phrase "Admin State Up" Yes/No. It seems
  clearer to change this terminology to "Admin State: Up (or Down)" as
  suggested in this code review:
  https://review.openstack.org/#/c/259142/6/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron-lbaas-dashboard/+bug/1540565/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1640882] Re: Lbaasv2 cookie sessions not working as haproxy( which is backend for lbaas) no longer supports appsession

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1640882

Title:
  Lbaasv2 cookie sessions not working as haproxy( which is backend for
  lbaas) no longer supports appsession

Status in octavia:
  Triaged

Bug description:
  I have deployed lbaasv2 and launched a bunch of load balancers.

  For one of them I need to configure cookie-based session persistence. When
  I added it through the CLI:

  neutron lbaas-pool-update poolid --session-persistence type=dict
  type=APP_COOKIE,cookie_name=sessionid

  I see errors in the logs stating "appsession" is no longer supported by
  haproxy.

  It seems appsession is deprecated in haproxy, but lbaas still uses it to
  configure session persistence in the backend haproxy configs.

  I tried editing the config manually on the backend, but lbaas quickly
  overwrites it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1640882/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1643829] Re: [neutron-lbaas] Migration contains innodb specific code

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1643829

Title:
  [neutron-lbaas] Migration contains innodb specific code

Status in octavia:
  In Progress

Bug description:
  The migration contained in
  
neutron_lbaas/db/migration/alembic_migrations/versions/mitaka/expand/6aee0434f911_independent_pools.py
  drops the foreign key constraints from the lbaas_listeners table. It
  contains code for both PostgreSQL and MySQL, however, the MySQL path
  is only compatible with the innodb engine. The ndbcluster engine
  assigns random names to foreign keys unless told otherwise, resulting
  in the code erroring out when it tries to reference
  "lbaas_listeners_ibfk_2". This can be fixed using sqlalchemy's
  reflection feature to look up the foreign key names before dropping.
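
  A minimal sketch of the reflection-based approach described above, as it
  might appear inside the alembic migration (surrounding migration details
  omitted):

  # Illustrative sketch: look up FK names instead of hard-coding
  # "lbaas_listeners_ibfk_2", so both innodb and ndbcluster work.
  from alembic import op
  import sqlalchemy as sa

  def drop_listener_foreign_keys():
      inspector = sa.inspect(op.get_bind())
      for fk in inspector.get_foreign_keys('lbaas_listeners'):
          op.drop_constraint(fk['name'], 'lbaas_listeners',
                             type_='foreignkey')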

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1643829/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541802] Re: lbaasv2 namespace missing host routes from VIP subnet

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541802

Title:
  lbaasv2 namespace missing host routes from VIP subnet

Status in octavia:
  Confirmed

Bug description:
  When an lbaasv2 namespace is created it only receives the default
  gateway for that subnet; the additional host routes defined on
  the subnet are ignored, which results in certain areas of a network
  being inaccessible.

  # ip netns exec qlbaas-ae4b71ef-e874-46a1-a489-c2a6e186ffe3 ip r s
  default via 192.168.31.254 dev tap9e9051cd-ff 
  192.168.31.0/24 dev tap9e9051cd-ff  proto kernel  scope link  src 
192.168.31.48

  Version Info:

  OpenStack: Liberty
  Distro: Ubuntu 14.04.3

  Not sure if any more information is required.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1541802/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554601] Re: able to update health monitor attributes which is attached to pool in lbaas

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

** Tags removed: lbaas
** Tags added: lbaasv1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554601

Title:
  able to update health monitor attributes which is attached to pool in
  lbaas

Status in OpenStack Neutron LBaaS Integration:
  Confirmed
Status in octavia:
  Incomplete

Bug description:
  Reproduced a bug in Load Balancer:
  1. Created a pool.
  2. Attached members to the pool.
  3. Associated a health monitor with the pool.
  4. Associated a VIP with the pool.
  5. When I edit the attributes of the health monitor it shows an error, 
"Error: Failed to update health monitor", but the update actually succeeds.

To manage notifications about this bug go to:
https://bugs.launchpad.net/f5openstackcommunitylbaas/+bug/1554601/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554960] Re: able to attached one health monitor to two different pool in lbaas neutron

2016-12-01 Thread Michael Johnson
** Tags removed: lbaas
** Tags added: lbaasv1

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554960

Title:
  able to attached one health monitor to two different pool in lbaas
  neutron

Status in OpenStack Neutron LBaaS Integration:
  Confirmed
Status in octavia:
  Confirmed

Bug description:
  Reproduced bug:
  1. Created a pool in lbaas.
  2. Added a member to the pool.
  3. Associated a monitor and added a VIP to the pool.
  4. I am now able to associate one health monitor with two different pools.
  5. A pool and a health monitor should have a one-to-one mapping.

To manage notifications about this bug go to:
https://bugs.launchpad.net/f5openstackcommunitylbaas/+bug/1554960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544729] Re: No grenade coverage for neutron-lbaas/octavia

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544729

Title:
  No grenade coverage for neutron-lbaas/octavia

Status in octavia:
  Confirmed

Bug description:
  Stock neutron grenade no longer covers this, so we need a grenade
  plugin for neutron-lbaas.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1544729/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624165] Re: LBaaS-enabled devstack fails with deprecated error when fatal_deprecations is set to True

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624165

Title:
  LBaaS-enabled devstack fails with deprecated error when
  fatal_deprecations is set to True

Status in octavia:
  Confirmed

Bug description:
  If we set fatal_deprecations = True in neutron.conf and neutron-lbaas is 
enabled, the neutron server fails with the following error:

  http://paste.openstack.org/show/577816/

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1624165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1516862] Re: LB_NFV KiloV1 :the default session limit is 2000 rather than unlimit

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1516862

Title:
  LB_NFV KiloV1 :the default session limit is 2000 rather than unlimit

Status in octavia:
  Incomplete

Bug description:
[Summary]
LB_NFV KiloV1 :the default session limit is 2000 rather than unlimit

[Topo]
  RDO-Kilo 
  CentOS 7.1

[Description and expect result]
If we do not configure a session limit, we expect the session count to be
unlimited.

[Reproduceable or not]
  It is easy to recreate.

[Recreate Steps]
  If no session limit is configured, the default session limit is 2000, but the 
GUI shows the session limit as -1. By common understanding a negative number 
means unlimited, so this issue should be fixed.

  # pxname                               svname     qcur  qmax  scur  smax  slim
  fda4febc-8efd-436e-9227-435916d50e93   FRONTEND                 5    2000  2000
  <<< the haproxy statistics show the session limit (slim) is 2000

  and

  ID                   fda4febc-8efd-436e-9227-435916d50e93
  Name                 VIP1
  Description          -
  Project ID           d95aa65136e6413fb0e29ab3550097a4
  Subnet               LB_Scale1_VipSubnet_1 20.1.1.0/24
  Address              20.1.1.100
  Protocol Port        80
  Protocol             HTTP
  Pool                 pool_1
  Port ID              7f003e70-ea64-4f52-8793-356d703f9003
  Session Persistence  None
  Connection Limit     -1   <<< the GUI shows -1 (unlimited)
  Admin State Up       Yes
  Status               ACTIVE

  [Configuration]

  [root@nitinserver2 ~(keystone_admin)]# more 
/var/lib/neutron/lbaas/61a7696f-ded6-493c-98bc-27c2a82cca15/conf 
  global
  daemon
  user nobody
  group haproxy
  log /dev/log local0
  log /dev/log local1 notice
  stats socket 
/var/lib/neutron/lbaas/61a7696f-ded6-493c-98bc-27c2a82cca15/sock mode 0666 
level user
  defaults
  log global
  retries 3
  option redispatch
  timeout connect 5000
  timeout client 5
  timeout server 5
  frontend 46d5cb86-ec6c-474e-82dd-9dc70baa3222
  option tcplog
  bind 0.0.0.0:80
  mode http
  default_backend 61a7696f-ded6-493c-98bc-27c2a82cca15
  option forwardfor
  backend 61a7696f-ded6-493c-98bc-27c2a82cca15
  mode http
  balance roundrobin
  option forwardfor
  server 09b5c49f-100f-4fc7-86dd-05512de21ec3 10.1.1.107:80 weight 1
  server 0a65e548-84d7-4aac-96e6-4ad5a170a9ee 10.1.1.130:80 weight 1
  server 0cc99f63-8883-4655-8433-545840699e53 10.1.1.126:80 weight 1
  server 18c767db-c622-4104-834d-9573fab5b979 10.1.1.127:80 weight 1
[logs]


  [Root cause analysis or debug info]


[Attachment]
Upload the attachment and explain it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1516862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632054] Re: Heat engine doesn't detect lbaas listener failures

2016-12-01 Thread Michael Johnson
The work in Octavia to add provisioning status to all of the objects is complete.
We just need to make sure that it is available via the APIs and clients.
Provisioning status work was done here: https://review.openstack.org/#/c/372791/

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1632054

Title:
  Heat engine doesn't detect lbaas listener failures

Status in heat:
  Triaged
Status in octavia:
  Triaged

Bug description:
  Please refer to the mailing list for comments from other developers:
  https://openstack.nimeyo.com/97427/openstack-neutron-octavia-doesnt-detect-listener-failures

  I am trying to use heat to launch lb resources with Octavia as backend. The
  template I used is from
  
https://github.com/openstack/heat-templates/blob/master/hot/lbaasv2/lb_group.yaml
  .

  Following are a few observations:

  1. Even though the Listener was created with ERROR status, heat still goes
  ahead and marks it Creation Complete, because the heat code only checks
  whether the root Loadbalancer status changes from PENDING_UPDATE to ACTIVE,
  and the Loadbalancer status becomes ACTIVE regardless of the Listener's
  status.

  2. Because the heat engine does not know about the Listener's creation
  failure, it continues to create the Pool/Member/Healthmonitor on top of a
  Listener that does not actually exist. This causes several undefined
  behaviors, and as a result those LBaaS resources in ERROR state cannot be
  cleaned up with either the normal neutron or heat API.

  3. The bug is introduced here:
  
https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/neutron/lbaas/listener.py#L188.
  The code only checks the provisioning status of the root loadbalancer,
  but the listener itself has its own provisioning status, which may go
  into ERROR.

  4. The same scenario applies not only to listeners but also to pools,
  members, healthmonitors, etc., basically every lbaas resource except the
  loadbalancer itself (see the sketch below).
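
  For illustration, a hedged sketch of the kind of check item 3 argues for:
  polling the listener's own provisioning_status in addition to the root load
  balancer's. This is not the actual heat code, and it assumes the LBaaS v2
  API exposes provisioning_status on the listener (which is what this bug
  asks for):

    from neutronclient.v2_0 import client as neutron_client

    def listener_create_complete(neutron, listener_id):
        listener = neutron.show_listener(listener_id)['listener']
        status = listener.get('provisioning_status')
        if status == 'ERROR':
            # Fail the stack instead of silently marking the resource complete.
            raise RuntimeError('listener %s went into ERROR' % listener_id)
        return status == 'ACTIVE'

    # Usage sketch (credentials are placeholders):
    # neutron = neutron_client.Client(username='admin', password='...',
    #                                 tenant_name='admin',
    #                                 auth_url='http://controller:5000/v2.0')
    # while not listener_create_complete(neutron, listener_id): time.sleep(2)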

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1632054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1571909] Re: Neutron-LBaaS v2: Deleting pool that has members changes state of other load balancers

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1571909

Title:
  Neutron-LBaaS v2: Deleting pool that has members changes state of
  other load balancers

Status in octavia:
  Triaged

Bug description:
  As an admin user, perform the following for a given tenant:
  1.  Create Load_Balancer_1
  2.  Create a Pool_1 for Load_Balancer_1.
  3.  Add 1 member_1 to Pool_1.   (wait for Load_Balancer_1 to be Active)
  4.  Create Load_Balancer_2.
  5.  Create a Pool_2 for Load_Balancer_2.
  6.  Add 1 member_2 to Pool_2.  (wait for Load_Balancer_2 to be Active)
  7.  Delete Pool_2.
  8.  List the load balancers and observe the state of Load_Balancer_1 and Load_Balancer_2.

  Actual Result:   Load_Balancer_1 status transitions to
  "PENDING_UPDATE".   Load_Balancer_2 status transitions to
  "PENDING_UPDATE".

  Expected:   Load_Balancer_2 status should ONLY transition to
  "PENDING_UPDATE".   Load_Balancer_1 should only stay ACTIVE.

  Note: the state seems to change to "PENDING_UPDATE" for all active
  load balancers in a given account when deleting a pool that has
  members.
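
  A scripted way to observe the reported behaviour (a sketch using
  python-neutronclient; credentials and the pool id are placeholders):

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='admin', password='...',
                                    tenant_name='admin',
                                    auth_url='http://controller:5000/v2.0')

    def lb_statuses():
        lbs = neutron.list_lbaas_loadbalancers()['loadbalancers']
        return {lb['name']: lb['provisioning_status'] for lb in lbs}

    print('before delete:', lb_statuses())
    neutron.delete_lbaas_pool('<pool_2-id>')
    # Expected: only Load_Balancer_2 leaves ACTIVE; the bug is that
    # Load_Balancer_1 also shows PENDING_UPDATE here.
    print('after delete: ', lb_statuses())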

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1571909/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592612] Re: LBaaS TLS is not working with non-admin tenant

2016-12-01 Thread Michael Johnson
To my knowledge we can grant ACL access to just the container the user asks us
to use for the listener creation, so we would not be granting the LBaaS service
account access to all of the user's secrets, only the ones needed for that
listener.
Is that a misunderstanding?
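
For reference, a hedged sketch of granting read access on a single container to
the LBaaS service user via the Barbican ACL API (python-barbicanclient; the
service user id and container ref are placeholders, and the exact ACL calls may
differ by release):

    from barbicanclient import client as barbican_client
    from keystoneauth1 import identity, session

    auth = identity.Password(auth_url='http://controller:5000/v3',
                             username='admin', password='...',
                             project_name='demo',
                             user_domain_name='Default',
                             project_domain_name='Default')
    barbican = barbican_client.Client(session=session.Session(auth=auth))

    # Grant read access on just this container to the LBaaS service user,
    # instead of sharing every secret in the project.
    container_ref = 'http://controller:9311/v1/containers/<tls_container-id>'
    acl = barbican.acls.create(entity_ref=container_ref,
                               users=['<lbaas-service-user-id>'])
    acl.submit()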

** Changed in: octavia
   Status: New => Confirmed

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1592612

Title:
  LBaaS TLS is not working with non-admin tenant

Status in Barbican:
  New
Status in octavia:
  Confirmed

Bug description:
  I went through https://wiki.openstack.org/wiki/Network/LBaaS/docs/how-
  to-create-tls-loadbalancer with devstack. And all my branches were set
  to stable/mitaka.

  If I set my user and tenant to "admin admin", the workflow passes.
  But it fails if I set the user and tenant to "admin demo" and rerun all the steps.

  Steps to reproduce:
  1. source ~/devstack/openrc admin demo
  2. barbican secret store --payload-content-type='text/plain' 
--name='certificate' --payload="$(cat server.crt)"
  3. barbican secret store --payload-content-type='text/plain' 
--name='private_key' --payload="$(cat server.key)"
  4. barbican secret container create --name='tls_container' 
--type='certificate' --secret="certificate=$(barbican secret list | awk '/ 
certificate / {print $2}')" --secret="private_key=$(barbican secret list | awk 
'/ private_key / {print $2}')"
  5. neutron lbaas-loadbalancer-create $(neutron subnet-list | awk '/ 
private-subnet / {print $2}') --name lb1
  6. neutron lbaas-listener-create --loadbalancer lb1 --protocol-port 443 
--protocol TERMINATED_HTTPS --name listener1 --default-tls-container=$(barbican 
secret container list | awk '/ tls_container / {print $2}')

  
  The error msg I got is 
  $ neutron lbaas-listener-create --loadbalancer 
738689bd-b54e-485e-b742-57bd6e812270 --protocol-port 443 --protocol 
TERMINATED_HTTPS --name listener2 --default-tls-container=$(barbican secret 
container list | awk '/ tls_container / {print $2}')
  WARNING:barbicanclient.barbican:This Barbican CLI interface has been 
deprecated and will be removed in the O release. Please use the openstack 
unified client instead.
  DEBUG:stevedore.extension:found extension EntryPoint.parse('table = 
cliff.formatters.table:TableFormatter')
  DEBUG:stevedore.extension:found extension EntryPoint.parse('json = 
cliff.formatters.json_format:JSONFormatter')
  DEBUG:stevedore.extension:found extension EntryPoint.parse('csv = 
cliff.formatters.commaseparated:CSVLister')
  DEBUG:stevedore.extension:found extension EntryPoint.parse('value = 
cliff.formatters.value:ValueFormatter')
  DEBUG:stevedore.extension:found extension EntryPoint.parse('yaml = 
cliff.formatters.yaml_format:YAMLFormatter')
  DEBUG:barbicanclient.client:Creating Client object
  DEBUG:barbicanclient.containers:Listing containers - offset 0 limit 10 name 
None type None
  DEBUG:keystoneclient.auth.identity.v2:Making authentication request to 
http://192.168.100.148:5000/v2.0/tokens
  INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection 
(1): 192.168.100.148
  Starting new HTTP connection (1): 192.168.100.148
  DEBUG:requests.packages.urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 
200 3924
  DEBUG:keystoneclient.session:REQ: curl -g -i -X GET 
http://192.168.100.148:9311 -H "Accept: application/json" -H "User-Agent: 
python-keystoneclient"
  INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection 
(1): 192.168.100.148
  Starting new HTTP connection (1): 192.168.100.148
  DEBUG:requests.packages.urllib3.connectionpool:"GET / HTTP/1.1" 300 353
  DEBUG:keystoneclient.session:RESP: [300] Content-Length: 353 Content-Type: 
application/json; charset=UTF-8 Connection: close
  RESP BODY: {"versions": {"values": [{"status": "stable", "updated": 
"2015-04-28T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.key-manager-v1+json"}], "id": "v1", "links": 
[{"href": "http://192.168.100.148:9311/v1/;, "rel": "self"}, {"href": 
"http://docs.openstack.org/;, "type": "text/html", "rel": "describedby"}]}]}}
  DEBUG:keystoneclient.session:REQ: curl -g -i -X GET 
http://192.168.100.148:9311/v1/containers -H "User-Agent: 
python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token: 
{SHA1}203d7de65f6cfb1fb170437ae2da98fef35f0942"
  INFO:requests.packages.urllib3.connectionpool:Resetting dropped connection: 
192.168.100.148
  Resetting dropped connection: 192.168.100.148
  DEBUG:requests.packages.urllib3.connectionpool:"GET 
/v1/containers?limit=10=0 HTTP/1.1" 200 585
  DEBUG:keystoneclient.session:RESP: [200] Connection: close Content-Type: 
application/json; charset=UTF-8 Content-Length: 585 x-openstack-request-id: 
req-aa4bb861-3d1d-42c6-be3d-5d3935622043
  RESP BODY: {"total": 1, "containers": 

[Yahoo-eng-team] [Bug 1439696] Re: Referencing a lb-healthmonitor ID for the first time from Heat would fail

2016-12-01 Thread Michael Johnson
Can we confirm this is an issue with LBaaSv2 and it is still occurring?
If so, what OpenStack release is being used?

** Project changed: neutron => octavia

** Changed in: octavia
Milestone: ocata-2 => None

** Changed in: octavia
   Status: Confirmed => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1439696

Title:
  Referencing a lb-healthmonitor ID for the first time from Heat would
  fail

Status in octavia:
  Incomplete

Bug description:
  Creating a stack with heat that creates a lb-healthmonitor results in a 404
for that ID.
  This happens only on the first attempt. Deleting the heat stack and
recreating it from the same template succeeds, so it does not look like an
issue originating from Heat. Later operations by either neutron or Heat
succeed, and the only way to reproduce this specific issue is to unstack and
re-stack.
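
  One way to narrow this down (this is an assumption about the cause, not a
confirmed diagnosis) is to check right after stacking whether the lbaas
extension is actually loaded before Heat issues the first create, for example:

    from neutronclient.v2_0 import client as neutron_client

    # Credentials and endpoint are placeholders.
    neutron = neutron_client.Client(username='admin', password='...',
                                    tenant_name='admin',
                                    auth_url='http://10.35.160.83:5000/v2.0')

    aliases = [ext['alias'] for ext in neutron.list_extensions()['extensions']]
    # If 'lbaas' is missing here while the first Heat create returns 404, the
    # problem is extension/service-plugin loading rather than Heat itself.
    print('lbaas loaded:', 'lbaas' in aliases)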

  From heat's log (has neutron's answer):

  REQ: curl -i http://10.35.160.83:9696//v2.0/lb/health_monitors.json -X POST 
-H "User-Agent: python-neutronclient" -H "X-Auth-Token: 
40357276a5b34f1bb4980d566d36e9c4" -d '{"health_monitor": {"delay": 5, "max_retr
   from (pid=10195) http_log_req 
/usr/lib/python2.7/site-packages/neutronclient/common/utils.py:130
  2015-04-02 15:24:19.791 DEBUG neutronclient.client [-] RESP:404 {'date': 
'Thu, 02 Apr 2015 12:24:19 GMT', 'connection': 'keep-alive', 'content-type': 
'text/plain; cha

  The resource could not be found.

   from (pid=10195) http_log_resp 
/usr/lib/python2.7/site-packages/neutronclient/common/utils.py:139
  2015-04-02 15:24:19.791 DEBUG neutronclient.v2_0.client [-] Error message: 
404 Not Found

  The resource could not be found.

  from (pid=10195) _handle_fault_response 
/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py:173
  2015-04-02 15:24:19.792 INFO heat.engine.resource [-] CREATE: HealthMonitor 
"monitor" Stack "test-001-load_balancer-ukmrf56u2dm4" 
[7aab3fa0-b71d-47b3-acc5-4767cb23b99
  2015-04-02 15:24:19.792 TRACE heat.engine.resource Traceback (most recent 
call last):
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/resource.py", line 466, in _action_recorder
  2015-04-02 15:24:19.792 TRACE heat.engine.resource yield
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/resource.py", line 536, in _do_action
  2015-04-02 15:24:19.792 TRACE heat.engine.resource yield 
self.action_handler_task(action, args=handler_args)
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/scheduler.py", line 295, in wrapper
  2015-04-02 15:24:19.792 TRACE heat.engine.resource step = next(subtask)
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/resource.py", line 507, in action_handler_task
  2015-04-02 15:24:19.792 TRACE heat.engine.resource handler_data = 
handler(*args)
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/resources/neutron/loadbalancer.py", line 146, in 
handle_create
  2015-04-02 15:24:19.792 TRACE heat.engine.resource {'health_monitor': 
properties})['health_monitor']
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 99, in 
with_params
  2015-04-02 15:24:19.792 TRACE heat.engine.resource ret = 
self.function(instance, *args, **kwargs)
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1064, in 
create_health_monitor
  2015-04-02 15:24:19.792 TRACE heat.engine.resource return 
self.post(self.health_monitors_path, body=body)
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 295, in 
post
  2015-04-02 15:24:19.792 TRACE heat.engine.resource headers=headers, 
params=params)
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 208, in 
do_request
  2015-04-02 15:24:19.792 TRACE heat.engine.resource 
self._handle_fault_response(status_code, replybody)
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 182, in 
_handle_fault_response
  2015-04-02 15:24:19.792 TRACE heat.engine.resource 
exception_handler_v20(status_code, des_error_body)
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 80, in 
exception_handler_v20
  2015-04-02 15:24:19.792 TRACE heat.engine.resource message=message)
  2015-04-02 15:24:19.792 TRACE heat.engine.resource NeutronClientException: 
404 Not Found
  2015-04-02 15:24:19.792 TRACE heat.engine.resource
 

[Yahoo-eng-team] [Bug 1498476] Re: LBaas-LB performance just have 1 G low performance than LB bypass have 4G

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498476

Title:
  LBaas-LB performance  just have 1 G  low performance than LB bypass
  have 4G

Status in octavia:
  Confirmed

Bug description:
  With the load balancer in the path we only reach about 1G of throughput,
  while bypassing the load balancer reaches about 4G.

  Setup info:

  For the LB-bypass case the client sends traffic directly to the server
  without the LB, and we measure about 4G of throughput. With the LB in the
  path we only get about 1G of traffic, so the LB is the bottleneck.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1498476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498359] Re: lbaas:after create 375 LB pool , the new lb -pool and vip get in error status

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498359

Title:
  lbaas:after create 375 LB pool , the new lb -pool and vip get in error
  status

Status in octavia:
  Incomplete

Bug description:

  1. Create a two-arm LB with one client and one backend server in a tenant.
  2. Repeat step 1 to create 375 tenants.
  3. After step 2, the LB network becomes unstable (a scripted version of
  these steps is sketched below).
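
  A scripted version of the steps above, simplified to a single tenant for
  brevity (a sketch using python-neutronclient LBaaS v1 calls; credentials and
  subnet ids are placeholders):

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='admin', password='...',
                                    tenant_name='admin',
                                    auth_url='http://controller:5000/v2.0')

    # Create many pool/VIP pairs and watch when they start landing in ERROR
    # (around number ~375 in this report).
    for i in range(1, 451):
        pool = neutron.create_pool({'pool': {
            'name': 'Scale1_pool_%d' % i,
            'protocol': 'HTTP',
            'lb_method': 'ROUND_ROBIN',
            'subnet_id': '<backend-subnet-id>'}})['pool']
        vip = neutron.create_vip({'vip': {
            'name': 'Scale1_vip_%d' % i,
            'protocol': 'HTTP',
            'protocol_port': 80,
            'pool_id': pool['id'],
            'subnet_id': '<vip-subnet-id>'}})['vip']
        print(i, vip['status'])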

  [root@nsj13 ~(keystone_admin)]# neutron lb-vip-list |grep ERROR
  | 054bc376-ff50-40cb-b003-831569b41f0b | Scale1_vip_420 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 06fcb386-4563-48d1-b6de-003967a97a54 | Scale1_vip_418 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 160f9c87-5cb1-4a5e-9829-cc5de30114c6 | Scale1_vip_444 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 1f49c1ca-0dc9-4f6a-9efd-2c741fcc6ff3 | Scale1_vip_380 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 25040c46-446d-4e3f-841c-da9166bb4ad0 | Scale1_vip_384 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 2690bbdb-30e3-4c6c-b346-f96be83ea745 | Scale1_vip_419 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 27e9fbb4-a3c8-4084-af11-c3e71af1a762 | Scale1_vip_431 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 283aea00-ba23-403d-a4cd-0d275c36c53f | Scale1_vip_375 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 28dbb59d-28a9-4af2-b02a-9f953d4c9dd3 | Scale1_vip_428 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 30e99e31-2739-4d1b-a05e-f67d41f26c14 | Scale1_vip_430 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 37ee787f-d684-4968-95ea-15666b0cd0e7 | Scale1_vip_410 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 38fe0379-2c5a-4bcc-9aab-c0f75466b5bd | Scale1_vip_408 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 4190852c-0327-4b05-b1fd-20ef62cf06c0 | Scale1_vip_450 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 461a6da3-07e3-47d3-9d47-18ba2add0692 | Scale1_vip_445 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 49fb200a-9a73-4d93-a2ef-a5d033377ee2 | Scale1_vip_378 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 556a7e3a-d922-43c8-b584-416a8689421d | Scale1_vip_383 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 55a500b0-682d-4fc6-93e3-588dfc590b35 | Scale1_vip_421 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 5cb8347d-fbcc-497e-ba47-244d5b3f394b | Scale1_vip_429 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 606380c4-fd1e-43a6-b67d-01a01c7ad327 | Scale1_vip_412 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 60ff74a8-c464-49d0-9f8c-143c7629201e | Scale1_vip_407 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 62ce8e28-0c8f-41e1-831d-5ef7500dbb1f | Scale1_vip_453 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 691ab6a0-c06e-4e6a-9c7d-b70fc72146e0 | Scale1_vip_436 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 6f5c12d8-3640-45b5-a600-19f57d2c | Scale1_vip_427 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 746ab955-20da-41cc-9830-9c33e41abc9c | Scale1_vip_415 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 75342fd8-99bd-4527-92f5-7f3d91193adb | Scale1_vip_414 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 851311f3-d84d-43e4-9bc2-6fa08c5d4e7e | Scale1_vip_416 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 8624e33d-bc23-4b78-bde2-ceb2f12e6b65 | Scale1_vip_417 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 895e8ee6-1e78-44e5-a542-3713e07f49cb | Scale1_vip_377 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 94855fcf-03fc-431a-b6bb-58abc0f2a084 | Scale1_vip_437 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 9af109fa-f301-4942-a559-f6ad75459f3d | Scale1_vip_448 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 9cf4bfa3-0073-4a46-a437-2f960ac41d34 | Scale1_vip_422 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | a138b439-c8b8-459f-85b6-3193d9df5e6d | Scale1_vip_433 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | a49b0dd1-55ce-4b52-8024-ce8f3219feab | Scale1_vip_435 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | ae57eadd-3a2d-457c-9149-014d43b2a91b | Scale1_vip_374 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | b70e7123-3e92-4ad1-bdf3-da58d5ed1a9f | Scale1_vip_426 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | b79eef81-8631-4012-97ed-b68cf7f5706b | Scale1_vip_434 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | bffecb85-1992-4c54-a411-8b8064c5b6ec | Scale1_vip_405 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | c51a6ed5-3134-412c-aad0-320ed7e7aaa1 | Scale1_vip_373 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | c6b8bb0f-eeca-4f2a-b9be-6162017daa5c | Scale1_vip_424 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | c8153eaa-fc15-4af4-91ce-eb26c8ff3478 | Scale1_vip_432 | 

[Yahoo-eng-team] [Bug 1635449] Re: Too many pools created from heat template when both listeners and pools depend on a item

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1635449

Title:
  Too many pools created from heat template when both listeners and
  pools depend on a item

Status in octavia:
  Triaged

Bug description:
  When you deploy a heat template that has both listeners and pools
  depending on an item, due to the order of locking, you may get
  additional pools created erroneously.

  Excerpt of heat template showing the issue :

  # LOADBALANCERS #

test-loadbalancer:
  type: OS::Neutron::LBaaS::LoadBalancer
  properties:
name: test
description: test
vip_subnet: { get_param: subnet }

  # LISTENERS #

http-listener:
  type: OS::Neutron::LBaaS::Listener
  depends_on: test-loadbalancer
  properties:
name: listener1
description: listener1
protocol_port: 80
loadbalancer: { get_resource: test-loadbalancer } 
protocol: HTTP

https-listener:
  type: OS::Neutron::LBaaS::Listener
  depends_on: http-listener
  properties:
name: listener2
description: listener2
protocol_port: 443
loadbalancer: { get_resource: test-loadbalancer }
protocol: TERMINATED_HTTPS
default_tls_container_ref: ''

  # POOLS #

http-pool:
  type: OS::Neutron::LBaaS::Pool
  depends_on: http-listener
  properties:
name: pool1
description: pool1
lb_algorithm: 'ROUND_ROBIN'
listener: { get_resource: http-listener }
protocol: HTTP

https-pool:
  type: OS::Neutron::LBaaS::Pool
  depends_on: https-listener
  properties:
name: pool2
description: pool2
lb_algorithm: 'ROUND_ROBIN'
listener: { get_resource: https-listener }
protocol: HTTP

  After the http-listener is created, both a pool and another listener
  attempt to be created concurrently, but we end up with a number of extra
  pools (not always the same number).

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1635449/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

