[Yahoo-eng-team] [Bug 2033681] [NEW] Calico still uses vif type tap and it causes failures with libvirt 9.5.0

2023-08-31 Thread Arun S A G
Public bug reported:


Description
===
Calico (out of tree) uses vif type tap, but starting with libvirt 9.5.0 libvirt 
rejects pre-existing tap devices 
(https://github.com/libvirt/libvirt/commit/a2ae3d299cf). This causes instance 
creation to fail on OpenStack clusters that run the Calico networking backend.

Steps to reproduce
==
1. Configure calico
2. Run openstack with libvirt 9.5.0 (latest in centos 9 stream)
3. Boot a VM

Expected result
===
The VM is able to boot without any problems

Actual result
===
Instance creation fails because libvirt refuses to use the pre-existing tap
device.

Other information
=

13:34:38 < sean-k-mooney> calico is apparently still using vif type tap
https://github.com/projectcalico/calico/blob/cf7fa35475eba84f5afcd7f53ac7d07dcb403202/networking-
calico/networking_calico/plugins/ml2/drivers/calico/test/lib.py#L66C31-L66C34

13:35:06 < sean-k-mooney> vif type tap is not supported by our os-vif code so 
it's using the legacy fallback
13:35:51 < sean-k-mooney> 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/vif.py#L595-L596
13:36:15 < sean-k-mooney> 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/vif.py#L420-L430
13:36:48 < sean-k-mooney> 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/designer.py#L44-L55

13:37:40 < sean-k-mooney> zer0c00l: with that said the tap was always meant to 
be created by libvirt so it sounds like calico might have been doing things it 
should not have been
13:38:03 < zer0c00l> sean-k-mooney: Thanks for looking into this. :(
13:38:36 < sean-k-mooney> we could probably correct this with a bug fix
13:38:52 < sean-k-mooney> just setting managed='no'
13:39:13 < sean-k-mooney> here 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/vif.py#L427
13:39:54 < sean-k-mooney> the problem is that there is no way to really test 
this upstream
13:40:06 < sean-k-mooney> well beyond unit/functional tests
13:40:12 < sean-k-mooney> but we don't have any calico CI
13:40:37 < sean-k-mooney> calico should be the only backend using vif_type=tap
13:40:52 < sean-k-mooney> but I'm not sure if we would need a config option in 
the workarounds section for this or not


Potential patch
===
diff --git a/nova/virt/libvirt/config.py b/nova/virt/libvirt/config.py
index 47e92e3..5af3ce4 100644
--- a/nova/virt/libvirt/config.py
+++ b/nova/virt/libvirt/config.py
@@ -1749,6 +1749,7 @@
         self.device_addr = None
         self.mtu = None
         self.alias = None
+        self.managed = 'no'

     def __eq__(self, other):
         if not isinstance(other, LibvirtConfigGuestInterface):
@@ -1851,7 +1852,7 @@
             dev.append(vlan_elem)

         if self.target_dev is not None:
-            dev.append(etree.Element("target", dev=self.target_dev))
+            dev.append(etree.Element("target", dev=self.target_dev, managed=self.managed))

         if self.vporttype is not None:
             vport = etree.Element("virtualport", type=self.vporttype)
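
For reference, a minimal standalone check (not part of the patch; the device
name is a placeholder) of the XML the patched element-building code would
emit, showing the managed='no' attribute that libvirt >= 9.5.0 needs in order
to reuse a pre-existing tap device:

from lxml import etree

# Render a <target> element the way the patched code would emit it.
target = etree.Element("target", dev="tap0abc123", managed="no")
print(etree.tostring(target).decode())
# expected output: <target dev="tap0abc123" managed="no"/>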

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2033681

Title:
  Calico still uses vif type tap and it causes failures with libvirt
  9.5.0

Status in OpenStack Compute (nova):
  New

Bug description:

  Description
  ===
  Calico (out of tree) uses vif type tap. But libvirt doesn't like pre-existing 
tap devices https://github.com/libvirt/libvirt/commit/a2ae3d299cf from libvirt 
9.5.0. This causes openstack clusters that run calico networking backend to 
fail during instance creation.

  Steps to reproduce
  ==
  1. Configure calico
  2. Run openstack with libvirt 9.5.0 (latest in centos 9 stream)
  3. Boot a VM

  Expected result
  ===
  The VM is able to boot without any problems

  Actual result

  Other information
  =

  13:34:38 < sean-k-mooney> calico is apparently still using vif type
  tap
  
https://github.com/projectcalico/calico/blob/cf7fa35475eba84f5afcd7f53ac7d07dcb403202/networking-
  calico/networking_calico/plugins/ml2/drivers/calico/test/lib.py#L66C31-L66C34

  13:35:06 < sean-k-mooney> vif type tap is not supported by our os-vif code so 
its usign the legacy fallback
  13:35:51 < sean-k-mooney> 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/vif.py#L595-L596
  13:36:15 < sean-k-mooney> 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/vif.py#L420-L430
  13:36:48 < sean-k-mooney> 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/designer.py#L44-L55

  13:37:40 < sean-k-mooney> zer0c00l: with that said the tap was always ment to 
be created by libvirt so it sound like calico might have been doing things it 
shoudl not have been
  13:38:03 < zer0c00l> sean-k-mooney: Thanks for looking into this. :(
  13:38:36 < sean-k-mooney> we could proably correct this with a bug fix
  13:38:52 < 

[Yahoo-eng-team] [Bug 1996758] [NEW] Default setting for "list_records_by_skipping_down_cells" causes unexpected results.

2022-11-16 Thread Arun Mani
Public bug reported:

Problem:

When a GET all_tenants query is sent to the compute servers API, we see a
cell timeout and no results are returned from that cell.

Root Cause:

The default OpenStack behaviour with cells is that when any cell does not
respond it is skipped and the API still returns a success (200) response. In
the logs we see "Cell %s is not responding and hence is being omitted from
the results". This behaviour causes an empty list of resources to be sent
back to the caller. Any caller using this API assumes there are no resources
in the cell and proceeds.

Workaround:

The solution here was to change the default configuration of
"list_records_by_skipping_down_cells" to False. This means that when any cell
does not return results, a 500 error is returned, which correctly signals a
problem with the API. The caller is alerted and can handle the failure in the
right way.
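
A minimal example of the workaround in nova.conf (assuming the option is
registered in the [api] group, where recent releases keep it):

[api]
list_records_by_skipping_down_cells = False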

** Affects: nova
 Importance: Undecided
 Assignee: Arun Mani (arun-mani)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Arun Mani (arun-mani)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1996758

Title:
  Default setting for "list_records_by_skipping_down_cells" causes
  unexpected results.

Status in OpenStack Compute (nova):
  New

Bug description:
  Problem:

  When a query to compute server GET all_tenants is sent we receive a
  cell timeout and no response is received.

  Root Cause:

  The default Openstack behaviour with Cells is that when any cell does
  not respond it is skipped and the API continues to return a success
  200 response. In the logs we see "Cell %s is not responding and hence
  is being omitted from the results" . This behaviour caused empty list
  of resources to be sent back to the caller. Any caller using this API
  assumes there are no resources in the cell and proceeds.

  Workaround:

  The solution here was to change the default configuration of
  "list_records_by_skipping_down_cells" to False. This meant when any
  cell did not return results a 500 error was returned, which now
  indicates a problem with the API. This will alert the caller correctly
  and can be handled in the right way.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1996758/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1983188] [NEW] nova-manage db purge fails on large datasets

2022-07-30 Thread Arun S A G
Public bug reported:

This is similar to the older bug https://bugs.launchpad.net/bugs/1543937 but
in 'nova-manage db purge'. Purging a large dataset causes this failure in a
Galera cluster. The MariaDB log has the following error when this happens:

2022-07-30 20:22:10 1567 [Warning] WSREP: transaction size limit (2147483647) 
exceeded: 2147483648
2022-07-30 20:22:10 1567 [ERROR] WSREP: rbr write fail, data_len: 0, 2

This happens because the transaction is too large for MariaDB to handle.
'nova-manage db archive_deleted_rows' works around this by limiting the
number of rows per run via the --max_rows argument and checking it against
the db.MAX_INT variable; we might have to do something similar for purge.
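
A minimal sketch of that batching approach (the connection URL is a
placeholder and the batch size would need tuning): delete in bounded chunks
so each COMMIT stays under Galera's writeset limit.

from sqlalchemy import create_engine, text

BATCH = 10000  # rows deleted per transaction

def purge_table_in_batches(engine, table_name, batch=BATCH):
    """Delete all rows from table_name, committing every `batch` rows."""
    total = 0
    while True:
        with engine.begin() as conn:  # each iteration is its own transaction
            # MariaDB/MySQL support DELETE ... LIMIT, which keeps each
            # replicated writeset small enough for Galera to certify.
            result = conn.execute(
                text("DELETE FROM %s LIMIT :batch" % table_name),
                {"batch": batch})
            deleted = result.rowcount
        total += deleted
        if deleted < batch:
            return total

engine = create_engine("mysql+pymysql://nova:secret@dbhost/nova")  # placeholder
purge_table_in_batches(engine, "shadow_instances")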


Traceback:
nova-manage --config-file /etc/nova/nova.conf db purge --verbose --all-cells 
--all

An error has occurred:
Traceback (most recent call last):
  File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/cmd/manage.py", 
line 2793, in main
ret = fn(*fn_args, **fn_kwargs)
  File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/cmd/manage.py", 
line 454, in purge
status_fn=status)
  File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/nova/db/sqlalchemy/api.py", 
line 4426, in purge_shadow_tables
deleted = conn.execute(delete)
  File 
"/var/lib/kolla/venv/lib64/python3.6/site-packages/sqlalchemy/engine/base.py", 
line 1011, in execute
return meth(self, multiparams, params)
  File 
"/var/lib/kolla/venv/lib64/python3.6/site-packages/sqlalchemy/sql/elements.py", 
line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File 
"/var/lib/kolla/venv/lib64/python3.6/site-packages/sqlalchemy/engine/base.py", 
line 1130, in _execute_clauseelement
distilled_params,
  File 
"/var/lib/kolla/venv/lib64/python3.6/site-packages/sqlalchemy/engine/base.py", 
line 1317, in _execute_context
e, statement, parameters, cursor, context
  File 
"/var/lib/kolla/venv/lib64/python3.6/site-packages/sqlalchemy/engine/base.py", 
line 1508, in _handle_dbapi_exception
util.raise_(newraise, with_traceback=exc_info[2], from_=e)
  File 
"/var/lib/kolla/venv/lib64/python3.6/site-packages/sqlalchemy/util/compat.py", 
line 182, in raise_
raise exception
  File 
"/var/lib/kolla/venv/lib64/python3.6/site-packages/sqlalchemy/engine/base.py", 
line 1301, in _execute_context
self._root._commit_impl(autocommit=True)
  File 
"/var/lib/kolla/venv/lib64/python3.6/site-packages/sqlalchemy/engine/base.py", 
line 773, in _commit_impl
self._handle_dbapi_exception(e, None, None, None, None)
  File 
"/var/lib/kolla/venv/lib64/python3.6/site-packages/sqlalchemy/engine/base.py", 
line 1508, in _handle_dbapi_exception
util.raise_(newraise, with_traceback=exc_info[2], from_=e)
  File 
"/var/lib/kolla/venv/lib64/python3.6/site-packages/sqlalchemy/util/compat.py", 
line 182, in raise_
raise exception
  File 
"/var/lib/kolla/venv/lib64/python3.6/site-packages/sqlalchemy/engine/base.py", 
line 771, in _commit_impl
self.engine.dialect.do_commit(self.connection)
  File 
"/var/lib/kolla/venv/lib64/python3.6/site-packages/sqlalchemy/dialects/mysql/base.py",
 line 2463, in do_commit
dbapi_connection.commit()
  File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/pymysql/connections.py", line 
422, in commit
self._read_ok_packet()
  File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/pymysql/connections.py", line 
396, in _read_ok_packet
pkt = self._read_packet()
  File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/pymysql/connections.py", line 
676, in _read_packet
packet.raise_for_error()
  File "/var/lib/kolla/venv/lib/python3.6/site-packages/pymysql/protocol.py", 
line 223, in raise_for_error
err.raise_mysql_exception(self._data)
  File "/var/lib/kolla/venv/lib/python3.6/site-packages/pymysql/err.py", line 
107, in raise_mysql_exception
raise errorclass(errno, errval)
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1180, 'Got 
error 90 "Message too long" during COMMIT')
(Background on this error at: http://sqlalche.me/e/13/e3q8)


MariaDB log has following error:
2022-07-30 20:22:10 1567 [Warning] WSREP: transaction size limit (2147483647) 
exceeded: 2147483648
2022-07-30 20:22:10 1567 [ERROR] WSREP: rbr write fail, data_len: 0, 2

More information on how to replicate this and how to work around it in MariaDB 
is available here:
https://www.percona.com/blog/2015/10/26/how-big-can-your-galera-transactions-be/

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: nova-manage

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1983188

Title:
  nova-manage db purge fails on large datasets

Status in OpenStack Compute (nova):
  New

Bug description:
  This is similar to older bug https://bugs.launchpad.net/bugs/1543937
  but in 'nova-manage db purge'. Purging a large dataset causes this
  

[Yahoo-eng-team] [Bug 1874400] [NEW] [agent][dhcp] When revision_number matches the one in cache it is not considered stale

2020-04-23 Thread Arun S A G
Public bug reported:

The revision_number is used to check whether a message received at the agent
is stale or not. However, neutron doesn't consider a message stale if the
revision_number in the NetworkCache is equal to the revision_number of the
incoming payload. It is only considered stale if the revision_number in the
NetworkCache is greater than the revision_number in the incoming payload.

https://opendev.org/openstack/neutron/src/commit/10230683a2ce2f26279feaa34af4c0eccbfcb16c/neutron/agent/dhcp/agent.py#L852

Because of this, many more messages are treated as fresh even though they are
stale. This causes a lot of "reload_allocations" calls and DHCP reloads, and
can cause bottlenecks in busy OpenStack environments.
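
An illustrative sketch (not the in-tree neutron code) of the comparison
change being suggested: treat an incoming payload whose revision_number
merely equals the cached one as stale, since it carries no new information.

def is_stale(cached_revision, incoming_revision):
    # Current behaviour: only strictly older payloads are stale.
    #     return incoming_revision < cached_revision
    # Suggested behaviour: equal revisions are stale too, avoiding the
    # redundant reload_allocations / DHCP reloads described above.
    return incoming_revision <= cached_revision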

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1874400

Title:
  [agent][dhcp] When revision_number matches the one in cache it is not
  considered stale

Status in neutron:
  New

Bug description:
  The revision_number is used to check if the message received at the
  agent is stale or not. However neutron doesn't consider a message as
  stale if the revision_number in NetworkCache is equal to the
  revision_number of the incoming payload. It is only considered stale
  if the revision_number in NetworkCache is greater than the
  revision_number in incoming payload

  
https://opendev.org/openstack/neutron/src/commit/10230683a2ce2f26279feaa34af4c0eccbfcb16c/neutron/agent/dhcp/agent.py#L852

  Because of this lot more messages not considered as stale even though
  they are. This causes lot of "reload_allocations" and dhcp reloads and
  can cause bottlenecks in busy openstack environments.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1874400/+subscriptions



[Yahoo-eng-team] [Bug 1859887] [NEW] External connectivity broken because of stale FIP rule

2020-01-15 Thread Mithil Arun
Public bug reported:

Seen a few occurrences of this issue where I have a VM that does not
have a FIP attached, but has a port on a tenant network that is attached
to an external network via a router. I expect the VM to be able to reach
out to the external network, but I see nothing going through.

On the VM:
--snip--
[root@bob-trove-1 ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
2: eth0:  mtu 1450 qdisc pfifo_fast state UP 
qlen 1000
link/ether fa:16:3e:97:b3:3b brd ff:ff:ff:ff:ff:ff
inet 172.20.7.16/24 brd 172.20.7.255 scope global dynamic eth0
   valid_lft 68868sec preferred_lft 68868sec
[root@bob-trove-1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.20.7.1      0.0.0.0         UG    100    0        0 eth0
169.254.169.254 172.20.7.1      255.255.255.255 UGH   100    0        0 eth0
172.20.2.192    0.0.0.0         255.255.255.192 U     100    0        0 eth0
172.20.5.192    0.0.0.0         255.255.255.192 U     100    0        0 eth0
172.20.6.0      0.0.0.0         255.255.255.192 U     100    0        0 eth0
172.20.6.64     0.0.0.0         255.255.255.192 U     100    0        0 eth0
172.20.7.0      0.0.0.0         255.255.255.0   U     100    0        0 eth0
--snip--

From the router namespace:
--snip--
root@kvm02:/# ip netns exec qrouter-ea187315-b0c7-4f2e-98e9-128a923fca4e ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: rfp-ea187315-b@if292:  mtu 1500 qdisc 
noqueue state UP group default qlen 1000
link/ether 4e:54:d8:b1:6a:6d brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 169.254.114.242/31 scope global rfp-ea187315-b
   valid_lft forever preferred_lft forever
inet6 fe80::4c54:d8ff:feb1:6a6d/64 scope link
   valid_lft forever preferred_lft forever
15636: qr-81061dca-85:  mtu 1450 qdisc noqueue 
state UNKNOWN group default qlen 1000
link/ether fa:16:3e:94:27:37 brd ff:ff:ff:ff:ff:ff
inet 192.0.3.1/24 brd 192.0.3.255 scope global qr-81061dca-85
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe94:2737/64 scope link
   valid_lft forever preferred_lft forever
15703: qr-41aba180-7f:  mtu 1450 qdisc noqueue 
state UNKNOWN group default qlen 1000
link/ether fa:16:3e:a5:64:9c brd ff:ff:ff:ff:ff:ff
inet 172.20.7.1/24 brd 172.20.7.255 scope global qr-41aba180-7f
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fea5:649c/64 scope link
   valid_lft forever preferred_lft forever
13957: qr-1408b658-c8:  mtu 1450 qdisc noqueue 
state UNKNOWN group default qlen 1000
link/ether fa:16:3e:ac:80:c4 brd ff:ff:ff:ff:ff:ff
inet 172.20.6.1/26 brd 172.20.6.63 scope global qr-1408b658-c8
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:feac:80c4/64 scope link
   valid_lft forever preferred_lft forever
11146: qr-127e45c0-8d:  mtu 1450 qdisc noqueue 
state UNKNOWN group default qlen 1000
link/ether fa:16:3e:82:03:97 brd ff:ff:ff:ff:ff:ff
inet 172.20.5.193/26 brd 172.20.5.255 scope global qr-127e45c0-8d
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe82:397/64 scope link
   valid_lft forever preferred_lft forever
11147: qr-3ebb2a27-9a:  mtu 1450 qdisc noqueue 
state UNKNOWN group default qlen 1000
link/ether fa:16:3e:cc:b9:95 brd ff:ff:ff:ff:ff:ff
inet 172.20.2.193/26 brd 172.20.2.255 scope global qr-3ebb2a27-9a
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fecc:b995/64 scope link
   valid_lft forever preferred_lft forever
13970: qr-35480bae-20:  mtu 1450 qdisc noqueue 
state UNKNOWN group default qlen 1000
link/ether fa:16:3e:23:89:f3 brd ff:ff:ff:ff:ff:ff
inet 172.20.6.65/26 brd 172.20.6.127 scope global qr-35480bae-20
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe23:89f3/64 scope link
   valid_lft forever preferred_lft forever
root@kvm02:/# ip netns exec qrouter-ea187315-b0c7-4f2e-98e9-128a923fca4e ip rule
0:  from all lookup local
32766:  from all lookup main
32767:  from all lookup default
36707:  from 172.20.7.5 lookup 16
36709:  from 172.20.2.248 lookup 16
37304:  from 172.20.7.56 lookup 16
46130:  from 172.20.7.36 lookup 16
46133:  from 172.20.5.223 lookup 16
46134:  from 172.20.2.217 lookup 16
46138:  from 172.20.2.245 lookup 16
54173:  from 172.20.7.16 lookup 16
57482:  from 172.20.5.252 lookup 16
62083:  from 172.20.7.76 lookup 16
72399:  from 172.20.7.80 lookup 16
72454:  from 172.20.7.37 lookup 16
2886992577: from 172.20.2.193/26 lookup 2886992577
2886993345: from 

[Yahoo-eng-team] [Bug 1843801] [NEW] metadata-proxy process stops listening on port 80

2019-09-12 Thread Mithil Arun
Public bug reported:

I'm running a metadata agent on a provider network and I see that the
metadata service randomly stops listening on port 80.

I see that the process itself is running, but port 80 is not open in the
DHCP namespace. There are no logs in neutron-server, neutron-metadata-
agent, neutron-dhcp-agent or journalctl.

The only way to recover is to kill ns-metadata-proxy and have neutron-
metadata-agent restart it at which point, the port is up.

In addition to monitoring the process itself, neutron-metadata-agent
must watch for port 80 in the namespace as well.

ENV: Ubuntu 16.04 running neutron rocky.
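
A minimal sketch of the extra check being requested (the namespace name and
the use of `ss` are assumptions; this is not existing neutron code): verify
that a listener on port 80 actually exists inside the DHCP namespace, not
just that the ns-metadata-proxy process is alive.

import subprocess

def metadata_port_open(namespace, port=80):
    """Return True if something listens on the given TCP port in namespace."""
    out = subprocess.check_output(
        ["ip", "netns", "exec", namespace, "ss", "-ltn"],
        universal_newlines=True)
    # The fourth column of `ss -ltn` output is the local "address:port" pair.
    return any(line.split()[3].endswith(":%d" % port)
               for line in out.splitlines()[1:] if line.strip())

# e.g. metadata_port_open("qdhcp-<network-id>") on the network node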

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1843801

Title:
  metadata-proxy process stops listening on port 80

Status in neutron:
  New

Bug description:
  I'm running a metadata agent on provider network and I see that the
  metadata service stops listening on port 80 randomly.

  I see that the process itself is running, but port 80 is not open in
  the DHCP namespace. There are no logs in neutron-server, neutron-
  metadata-agent, neutron-dhcp-agent or journalctl.

  The only way to recover is to kill ns-metadata-proxy and have neutron-
  metadata-agent restart it at which point, the port is up.

  In addition to monitoring the process itself, neutron-metadata-agent
  must watch for port 80 in the namespace as well.

  ENV: Ubuntu 16.04 running neutron rocky.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1843801/+subscriptions



[Yahoo-eng-team] [Bug 1832769] [NEW] designate client requires region name to fetch PTR records

2019-06-13 Thread Mithil Arun
Public bug reported:

Enable neutron's designate driver to accept the region name as a
parameter while making requests.

** Affects: neutron
 Importance: Undecided
 Assignee: Mithil Arun (arun-mithil)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Mithil Arun (arun-mithil)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1832769

Title:
  designate client requires region name to fetch PTR records

Status in neutron:
  New

Bug description:
  Enable neutron's designate driver to accept the region name as a
  parameter while making requests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1832769/+subscriptions



[Yahoo-eng-team] [Bug 1738372] Re: Install and configure in keystone

2018-02-04 Thread Arun Kumar - அருண் குமார்
** Changed in: keystone
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1738372

Title:
  Install and configure in keystone

Status in OpenStack Identity (keystone):
  Confirmed

Bug description:
  When I run the following command:
  su -s /bin/sh -c "keystone-manage db_sync" keystone

  One exception raised:
  AttributeError: 'module' object has no attribute 'DocumentedRuleDefault'

  I don't know what this is. I have searched the internet, and it may be
  caused by the Python environment. I want to know the exact versions of the
  required Python packages. Thanks a lot.
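
  A small diagnostic sketch (an assumption on my part: this AttributeError
  usually means the installed oslo.policy predates DocumentedRuleDefault, so
  checking its version is a good first step):

  import pkg_resources
  from oslo_policy import policy

  print("oslo.policy", pkg_resources.get_distribution("oslo.policy").version)
  print("has DocumentedRuleDefault:", hasattr(policy, "DocumentedRuleDefault"))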

  
  This bug tracker is for errors with the documentation, use the following as a 
template and remove or add fields as you see fit. Convert [ ] into [x] to check 
boxes:

  - [ ] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 12.0.1.dev6 on 2017-11-16 21:02
  SHA: d0721d7cf4dc808946a7016b0ca2830c8850d5d9
  Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-install-ubuntu.rst
  URL: 
https://docs.openstack.org/keystone/pike/install/keystone-install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1738372/+subscriptions



[Yahoo-eng-team] [Bug 1680773] [NEW] Migration of a one VM deployed as part of group fails with NoValidHost

2017-04-07 Thread Arun Mani
Public bug reported:

Migrating one VM that got deployed as part of multi-deploy fails with
NoValidHost error from scheduler. I'll update this with more info soon
enough

** Affects: nova
 Importance: Undecided
 Assignee: Arun Mani (arun-mani)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Arun Mani (arun-mani)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1680773

Title:
  Migration of a one VM deployed as part of group fails with NoValidHost

Status in OpenStack Compute (nova):
  New

Bug description:
  Migrating one VM that got deployed as part of multi-deploy fails with
  NoValidHost error from scheduler. I'll update this with more info soon
  enough

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1680773/+subscriptions



[Yahoo-eng-team] [Bug 1647570] [NEW] l2 population fdb updates being sent to all agents

2016-12-05 Thread Arun Kumar
Public bug reported:

The l2population mechanism driver sends an fdb update to all registered
agents when a new VM is spawned on an agent for a network (i.e. when the
first port on that network is activated on the agent).

It should only send fdb updates to agents which have l2population enabled,
as the current behaviour affects performance in large-scale deployments.
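
An illustrative sketch (not the actual driver code; the RPC call at the end
is only indicative) of the filtering being proposed: fan the fdb update out
only to agents whose reported configuration has l2_population enabled.

def agents_for_fdb_update(registered_agents):
    """Yield only agents that report l2_population in their configuration."""
    for agent in registered_agents:
        if agent.get('configurations', {}).get('l2_population'):
            yield agent

# for agent in agents_for_fdb_update(agents):
#     l2pop_notifier.add_fdb_entries(context, fdb_entries, host=agent['host'])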

** Affects: neutron
 Importance: Undecided
 Assignee: Arun Kumar (arooncoomar)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Arun Kumar (arooncoomar)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1647570

Title:
  l2 population fdb updates being sent to all agents

Status in neutron:
  In Progress

Bug description:
  l2 population mechanism driver sends out fdb update to all registered
  agents when a new VM is spawned on an agent for a network (First port
  activated on current agent in this network)

  It should only send out fdb updates to agents which have l2population
  enabled as this affects performance in large scale deployments.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1647570/+subscriptions



[Yahoo-eng-team] [Bug 1624270] [NEW] openstack error 'oslo_config.cfg.NoSuchOptError'

2016-09-16 Thread arun
Public bug reported:


[root@controller1 ~]#
[root@controller1 ~]# openstack server create --flavor m1.nano --image cirros 
--nic net-id=b002f05b-5342-4a67-9c93-8a72158cab60 --security-group default 
--key-name mykey provider-instance
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
<class 'oslo_config.cfg.NoSuchOptError'> (HTTP 500) (Request-ID: 
req-b06491f6-df6d-4d75-82af-546fa23790a1)
[root@controller1 ~]#
[root@controller1 ~]#
[root@controller1 ~]# uname -a
Linux controller1.arundell.com 3.10.0-327.18.2.el7.x86_64 #1 SMP Thu May 12 
11:03:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@controller1 ~]#

[root@controller1 nova]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@controller1 nova]#


Log file : /var/log/nova/nova-api.log logs pasted below 
==

2016-09-16 08:29:21.089 11479 INFO nova.api.openstack.wsgi 
[req-b92dbd45-5130-45d1-bc2c-003b2791fc38 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf4d77bb497d72d6b397 - - -] HTTP exception thrown: Image not found.
2016-09-16 08:29:21.090 11479 INFO nova.osapi_compute.wsgi.server 
[req-b92dbd45-5130-45d1-bc2c-003b2791fc38 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] 127.0.0.1 "GET 
/v2.1/536d39fefedf4d77bb48e97d72d6b397/images/cirros HTTP/1.1" status: 404 len: 
351 time: 0.3027530
2016-09-16 08:29:21.159 11479 INFO nova.osapi_compute.wsgi.server 
[req-af640c20-e4cb-4bc8-ba18-a425db112776 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] 127.0.0.1 "GET 
/v2.1/536d39fefedf4d77bb48e97d72d6b397/images HTTP/1.1" status: 200 len: 770 
time: 0.0645511
2016-09-16 08:29:21.209 11479 INFO nova.osapi_compute.wsgi.server 
[req-9417d9b1-73ef-41dc-b67f-fabe60d2406f 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] 127.0.0.1 "GET 
/v2.1/536d39fefedf4d77bb48e97d72d6b397/images/e5e3e4d7-c74a-4e97-80f3-06fdc014a079
 HTTP/1.1" status: 200 len: 95time: 0.0442920
2016-09-16 08:29:21.240 11479 INFO nova.api.openstack.wsgi 
[req-dd435c55-c360-4dee-bcb3-8902576848b0 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf4d77bb497d72d6b397 - - -] HTTP exception thrown: Flavor m1.nano 
could not be found.
2016-09-16 08:29:21.241 11479 INFO nova.osapi_compute.wsgi.server 
[req-dd435c55-c360-4dee-bcb3-8902576848b0 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] 127.0.0.1 "GET 
/v2.1/536d39fefedf4d77bb48e97d72d6b397/flavors/m1.nano HTTP/1.1" status: 404 
len: 369 time: 0.0274239

2016-09-16 08:29:21.278 11479 INFO nova.osapi_compute.wsgi.server 
[req-eb65e696-ee36-475a-9c24-a7539503e14d 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] 127.0.0.1 "GET 
/v2.1/536d39fefedf4d77bb48e97d72d6b397/flavors HTTP/1.1" status: 200 len: 1740 
time: 0.0316298
2016-09-16 08:29:21.324 11479 INFO nova.osapi_compute.wsgi.server 
[req-23b1d340-4aa7-482b-a22c-c2c7e7157990 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] 127.0.0.1 "GET 
/v2.1/536d39fefedf4d77bb48e97d72d6b397/flavors/0 HTTP/1.1" status: 200 len: 689 
time: 0.0391679
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions 
[req-b06491f6-df6d-4d75-82af-546fa23790a1 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] Unexpected exception in API method
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py", line 
6, in create
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions 
**create_kwargs)
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/hooks.py", line 154, in inner
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions rv = 
f(*args, **kwargs)
2016-09-16 

[Yahoo-eng-team] [Bug 1615357] Re: VMs failed to get ip in vxlan setup due to keyerror

2016-08-23 Thread Arun Kumar
@Armando, Sorry for the confusion. I have marked this as a neutron bug
by mistake

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615357

Title:
  VMs failed to get ip in vxlan setup due to keyerror

Status in networking-vsphere:
  Incomplete

Bug description:
  When spawning around 4k VMs in 100 network scenario , around 500 Vms failed 
to get ip by throwing below keyerror exception.
   And it is getting ip by restarting dhcp (S40network) in the VM.

  build used: HOS 4.0 ((build 01-547)
   Attached logs

  2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
[-] Failed to handle VM_UPDATED event for VM: 
1e2b8347-6303-4417-aab8-e48d238f7902.
   2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
Traceback (most recent call last):
   2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
File 
"/opt/stack/venv/neutron-20160621T031745Z/lib/python2.7/site-packages/networking_vsphere/agent/ovsvapp_agent.py",
 line 1087, in _notify_device_updated
   2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
updated_port = self.ports_dict[vnic.port_uuid]
   2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
KeyError: 4db21599-5542-46a3-a994-937986eb230b
   2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent
   2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
[-] This may result in failure of network provisioning for VirtualMachine 
1e2b8347-6303-4417-aab8-e48d238f7902.
   2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
[-] Cause of failure: 4db21599-5542-46a3-a994-937986eb230b.
   2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
Traceback (most recent call last):
  2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
File 
"/opt/stack/venv/neutron-20160621T031745Z/lib/python2.7/site-packages/networking_vsphere/agent/ovsvapp_agent.py",
 line 964, in process_event
   2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
self._notify_device_updated(vm, host, event.host_changed)
   2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
File 
"/opt/stack/venv/neutron-20160621T031745Z/lib/python2.7/site-packages/networking_vsphere/common/utils.py",
 line 91, in inner
   2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
return f(obj, *args, **kw)
   2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
File 
"/opt/stack/venv/neutron-20160621T031745Z/lib/python2.7/site-packages/networking_vsphere/agent/ovsvapp_agent.py",
 line 1108, in _notify_device_updated
   2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
raise error.OVSvAppNeutronAgentError(e)
   2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
OVSvAppNeutronAgentError: 4db21599-5542-46a3-a994-937986eb230b
   2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-vsphere/+bug/1615357/+subscriptions



[Yahoo-eng-team] [Bug 1615357] [NEW] VMs failed to get ip in vxlan setup due to keyerror

2016-08-21 Thread Arun Kumar
Public bug reported:

When spawning around 4k VMs in a 100-network scenario, around 500 VMs failed
to get an IP, throwing the keyerror exception below.
The VMs do get an IP after DHCP is restarted (S40network) inside the VM.

Build used: HOS 4.0 (build 01-547)
Logs attached.

2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent [-] 
Failed to handle VM_UPDATED event for VM: 1e2b8347-6303-4417-aab8-e48d238f7902.
 2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
Traceback (most recent call last):
 2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
File 
"/opt/stack/venv/neutron-20160621T031745Z/lib/python2.7/site-packages/networking_vsphere/agent/ovsvapp_agent.py",
 line 1087, in _notify_device_updated
 2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
updated_port = self.ports_dict[vnic.port_uuid]
 2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
KeyError: 4db21599-5542-46a3-a994-937986eb230b
 2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent
 2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent [-] 
This may result in failure of network provisioning for VirtualMachine 
1e2b8347-6303-4417-aab8-e48d238f7902.
 2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent [-] 
Cause of failure: 4db21599-5542-46a3-a994-937986eb230b.
 2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
Traceback (most recent call last):
2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent File 
"/opt/stack/venv/neutron-20160621T031745Z/lib/python2.7/site-packages/networking_vsphere/agent/ovsvapp_agent.py",
 line 964, in process_event
 2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
self._notify_device_updated(vm, host, event.host_changed)
 2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
File 
"/opt/stack/venv/neutron-20160621T031745Z/lib/python2.7/site-packages/networking_vsphere/common/utils.py",
 line 91, in inner
 2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
return f(obj, *args, **kw)
 2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
File 
"/opt/stack/venv/neutron-20160621T031745Z/lib/python2.7/site-packages/networking_vsphere/agent/ovsvapp_agent.py",
 line 1108, in _notify_device_updated
 2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
raise error.OVSvAppNeutronAgentError(e)
 2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
OVSvAppNeutronAgentError: 4db21599-5542-46a3-a994-937986eb230b
 2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent
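
A hypothetical defensive handling of the KeyError above (fetch_port_details
is a placeholder for whatever RPC the agent uses to ask neutron for port
details; this is not the in-tree fix): fall back to a fresh lookup instead of
assuming the port is already cached.

def get_cached_port(ports_dict, port_uuid, fetch_port_details):
    """Return cached port info, repopulating the cache on a miss."""
    port = ports_dict.get(port_uuid)
    if port is None:
        # Cache miss, e.g. the VM_UPDATED event raced ahead of the
        # port-cache population for this network.
        port = ports_dict[port_uuid] = fetch_port_details(port_uuid)
    return port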

** Affects: neutron
     Importance: Undecided
 Assignee: Arun Kumar (arooncoomar)
 Status: In Progress

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
 Assignee: (unassigned) => Arun Kumar (arooncoomar)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615357

Title:
  VMs failed to get ip in vxlan setup due to keyerror

Status in neutron:
  In Progress

Bug description:
  When spawning around 4k VMs in 100 network scenario , around 500 Vms failed 
to get ip by throwing below keyerror exception.
   And it is getting ip by restarting dhcp (S40network) in the VM.

  build used: HOS 4.0 ((build 01-547)
   Attached logs

  2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
[-] Failed to handle VM_UPDATED event for VM: 
1e2b8347-6303-4417-aab8-e48d238f7902.
   2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
Traceback (most recent call last):
   2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
File 
"/opt/stack/venv/neutron-20160621T031745Z/lib/python2.7/site-packages/networking_vsphere/agent/ovsvapp_agent.py",
 line 1087, in _notify_device_updated
   2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
updated_port = self.ports_dict[vnic.port_uuid]
   2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
KeyError: 4db21599-5542-46a3-a994-937986eb230b
   2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent
   2016-06-22 12:01:36.112 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
[-] This may result in failure of network provisioning for VirtualMachine 
1e2b8347-6303-4417-aab8-e48d238f7902.
   2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
[-] Cause of failure: 4db21599-5542-46a3-a994-937986eb230b.
   2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
Traceback (most recent call last):
  2016-06-22 12:01:36.113 31012 ERROR networking_vsphere.agent.ovsvapp_agent 
File 
&quo

[Yahoo-eng-team] [Bug 1578842] [NEW] gratuitous arping causes exception filling up logs with errors on Ubuntu14.04

2016-05-05 Thread Arun
Public bug reported:

Connecting an external network to a router causes the l3-agent to send out
gratuitous ARP packets to the external network gateway IP in order to
pre-populate the MAC table, without expecting a response. Similar behaviour
occurs when associating floating IPs. The arping utility on CentOS 7.2
returns 0 when there is no response, but returns 1 on Ubuntu 14.04, which
causes an exception and thus a traceback in the log files.

Changing the arping command call to not check the return code fixes this
issue, and that is the proposed fix.
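
A minimal sketch of that behaviour using plain subprocess rather than
neutron's own execute() helper (the interface and address are placeholders):
send the gratuitous ARP but ignore arping's exit status, since no reply is
expected anyway.

import subprocess

def send_gratuitous_arp(namespace, iface, address, count=3):
    cmd = ['ip', 'netns', 'exec', namespace,
           'arping', '-A', '-I', iface, '-c', str(count), address]
    # arping exits 1 on Ubuntu 14.04 when nothing answers, which is the
    # normal case for a gratuitous ARP, so the return code is not checked.
    subprocess.call(cmd)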

Pre-conditions: Router with external network attached.

Step-by-step reproduction steps:
1) Create a router
2) Attach an external network as gateway.
3) Attach a tenant network to the router.
4) Associate a Floating IP to a VM instance powered on in that tenant network.

Expected output: No errors in logs. Seen on CentOS7.2
Actual output: Traceback in l3 log. Seen on Ubuntu14.04
http://paste.openstack.org/show/496284/

Version: Openstack Liberty (Tag: 7.0.2). Ubuntu14.04.
uname -a
Linux ubuntu01 4.2.0-27-generic #32~14.04.1-Ubuntu SMP Fri Jan 22 15:32:26 UTC 
2016 x86_64 x86_64 x86_64 GNU/Linux

Services running: l3-agent in dvr mode, ovs-agent, dhcp-agent, nova-
compute.

Perceived Severity: Medium (Causes issues with active monitoring)

** Affects: neutron
 Importance: Undecided
 Assignee: Arun (sarun87)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Arun (sarun87)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1578842

Title:
  gratuitous arping causes exception filling up logs with errors on
  Ubuntu14.04

Status in neutron:
  New

Bug description:
  Connecting an external network to a router causes l3-agent to send out
  gratuitous arp packets to to the external network gateway IP in order
  to pre-populate the mac table without expecting a response. Similar
  behavior when associating floating IPs. Arping utility on CentOS7.2
  returns 0 when no response but returns 1 on Ubuntu14.04 which causes
  an exception and thus a Traceback in the log files.

  Changing arping command call to not check return code status fixes
  this issue and that is the proposed fix.

  Pre-conditions: Router with external network attached.

  Step-by-step reproduction steps:
  1) Create a router
  2) Attach an external network as gateway.
  3) Attach a tenant network to the router.
  4) Associate a Floating IP to a VM instance powered on in that tenant network.

  Expected output: No errors in logs. Seen on CentOS7.2
  Actual output: Traceback in l3 log. Seen on Ubuntu14.04
  http://paste.openstack.org/show/496284/

  Version: Openstack Liberty (Tag: 7.0.2). Ubuntu14.04.
  uname -a
  Linux ubuntu01 4.2.0-27-generic #32~14.04.1-Ubuntu SMP Fri Jan 22 15:32:26 
UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

  Services running: l3-agent in dvr mode, ovs-agent, dhcp-agent, nova-
  compute.

  Perceived Severity: Medium (Causes issues with active monitoring)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1578842/+subscriptions



[Yahoo-eng-team] [Bug 1561337] [NEW] Unable to launch instance

2016-03-23 Thread Arun V
Public bug reported:

I installed OpenStack Liberty using the official guide for Ubuntu 14.04.
I am unable to launch an instance.

Here's the log from nova-api.log


2016-03-24 10:12:53.412 14413 INFO nova.osapi_compute.wsgi.server 
[req-ec45686b-ad24-4949-83bb-42b3ed336b94 55db47d40b91474399879d1003883561 
b3338b63521d4fb7a87011108e9b1107 - - -] 192.168.1.213 "GET 
/v2/b3338b63521d4fb7a87011108e9b1107/os-quota-sets/b3338b63521d4fb7a87011108e9b1107
 HTTP/1.1" status: 200 len: 568 time: 0.0969541

2016-03-24 10:12:57.869 14412 INFO nova.osapi_compute.wsgi.server 
[req-dcc90aa0-618f-4328-ace0-0e50d3a7bb53 55db47d40b91474399879d1003883561 
b3338b63521d4fb7a87011108e9b1107 - - -] 192.168.1.213 "GET 
/v2/b3338b63521d4fb7a87011108e9b1107/servers/detail?all_tenants=True_id=b3338b63521d4fb7a87011108e9b1107
 HTTP/1.1" status: 200 len: 211 time: 3.3184321
2016-03-24 10:12:59.651 14412 INFO nova.osapi_compute.wsgi.server 
[req-95cb7922-c703-4036-ba13-005dff79741e 55db47d40b91474399879d1003883561 
b3338b63521d4fb7a87011108e9b1107 - - -] 192.168.1.213 "GET 
/v2/b3338b63521d4fb7a87011108e9b1107/os-keypairs HTTP/1.1" status: 200 len: 212 
time: 0.0333679
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions 
[req-2efac7ae-b1ae-475c-bb03-ab7f28b8ac3d 55db47d40b91474399879d1003883561 
b3338b63521d4fb7a87011108e9b1107 - - -] Unexpected exception in API method
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/servers.py", line 
611, in create
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions 
**create_kwargs)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/hooks.py", line 149, in inner
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions rv = 
f(*args, **kwargs)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1581, in create
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions 
check_server_group_quota=check_server_group_quota)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1181, in 
_create_instance
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions 
auto_disk_config, reservation_id, max_count)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 955, in 
_validate_and_build_base_options
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions 
pci_request_info, requested_networks)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 1059, in 
create_pci_requests_for_sriov_ports
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions neutron = 
get_client(context, admin=True)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 237, in 
get_client
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions 
auth_token = _ADMIN_AUTH.get_token(_SESSION)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/keystoneclient/auth/identity/base.py", line 
200, in get_token
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions return 
self.get_access(session).auth_token
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/keystoneclient/auth/identity/base.py", line 
240, in get_access
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions 
self.auth_ref = self.get_auth_ref(session)
2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/keystoneclient/auth/identity/v2.py", line 88, 
in get_auth_ref
2016-03-24 10:13:14.307 14413 ERROR 

[Yahoo-eng-team] [Bug 1535900] [NEW] Add tar as a disk_format to glance

2016-01-19 Thread Arun S A G
Public bug reported:

We are adding support for OS tarball images in the ironic project. This
feature depends on adding a new value, tar, to disk_format. Please see
https://review.openstack.org/#/c/248968/
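
A hedged example of what this amounts to on the operator side (assuming the
standard [image_format] disk_formats option in glance-api.conf; 'tar' is the
value this bug proposes to allow):

[image_format]
# append 'tar' to the deployment's existing list of allowed disk formats
disk_formats = raw,qcow2,vhd,vmdk,vdi,iso,ami,ari,aki,tar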

** Affects: glance
 Importance: Undecided
 Assignee: Arun S A G (sagarun)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Arun S A G (sagarun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1535900

Title:
  Add tar as a disk_format to glance

Status in Glance:
  New

Bug description:
  We are adding support OS tarball images in ironic project. This
  feature depends on adding  new value tar to disk_format. Please see
  https://review.openstack.org/#/c/248968/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1535900/+subscriptions



[Yahoo-eng-team] [Bug 1389694] Re: Unable to list my networks

2014-11-05 Thread Mithil Arun
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1389694

Title:
  Unable to list my networks

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I am unable to list my external network in
  PROJECTS > Network > Network Topology

  But the network is available in Admin section.

  What is the ISSUE??

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1389694/+subscriptions



[Yahoo-eng-team] [Bug 1188643] Re: notification queues are created in rabbit but never consumed

2014-11-04 Thread Arun Kant
** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1188643

Title:
  notification queues are created in rabbit but never consumed

Status in devstack - openstack dev environments:
  Invalid
Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Compute (Nova):
  Confirmed
Status in Messaging API for OpenStack:
  Confirmed

Bug description:
  The following queues are created in rabbit but there are no consumers
  for them. notifications.info, notifications.warn and
  notifications.error. This means that all events are queued up in them
  until rabbit is restarted or else someone consumes the queue.

  notifications.info in particular collects a large number of events
  very quickly

  All events should be published to an exchange and it should be up the
  consumers on how to configure any queues in rabbit and how they should
  be consumed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1188643/+subscriptions



[Yahoo-eng-team] [Bug 1359215] [NEW] Issue in viewing floating IP association with Virtual IP of Load balancer in Horizon

2014-08-20 Thread Arun prasath S
Public bug reported:

When users want to see a floating IP association, they can view it under the
Access and Security -> Floating IPs tab.

But when a floating IP is associated with the virtual IP of a load balancer,
only '-' is displayed in the Instance column of the Floating IPs tab. If we
click on the '-' sign, it throws an error: 'Unable to find instance ID'.

Actually, the ID displayed in the error is a port ID of the load balancer (we
can find it under 'Networks'). But I believe that when we click on '-' the ID
is looked up in the 'Instances' table, which is why the error is thrown.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1359215

Title:
  Issue in viewing floating IP association with Virtual IP of Load
  balancer in Horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When the user have to see the floating IP association they can view it
  under Access and Security - Floating IPs tab.

  But when a floating IP is associated with virtual IP of the load
  balancer, under Floating IPs tab - Instance column , only '-' is
  displayed. If we click on the '-' sign, it throws a error 'Unable to
  find instance ID'.

  Actually the ID displayed in the error is a Port ID of the Load
  balancer (we can find it under 'Networks'). But I believe, when we
  click on '-' the ID is searched in 'Instances' tables, that's why
  the error is thrown.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1359215/+subscriptions



[Yahoo-eng-team] [Bug 1347262] [NEW] Ldap Live test failures

2014-07-22 Thread Arun Kant
 
/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/identity/core.py,
 line 193, in wrapper
return f(self, *args, **kwargs)
  File 
/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/identity/core.py,
 line 528, in create_user
ref = driver.create_user(user['id'], user)
  File 
/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/identity/backends/ldap.py,
 line 94, in create_user
user_ref = self.user.create(user)
  File 
/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/identity/backends/ldap.py,
 line 230, in create
values = super(UserApi, self).create(values)
  File 
/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/common/ldap/core.py,
 line 1390, in create
ref = super(EnabledEmuMixIn, self).create(values)
  File 
/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/common/ldap/core.py,
 line 1085, in create
conn.add_s(self._id_to_dn(values['id']), attrs)
  File 
/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/common/ldap/core.py,
 line 656, in add_s
return self.conn.add_s(dn_utf8, ldap_attrs_utf8)
  File 
/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/common/ldap/core.py,
 line 551, in add_s
return self.conn.add_s(dn, modlist)
  File /usr/lib/python2.7/dist-packages/ldap/ldapobject.py, line 194, in add_s
return self.result(msgid,all=1,timeout=self.timeout)
  File /usr/lib/python2.7/dist-packages/ldap/ldapobject.py, line 422, in 
result
res_type,res_data,res_msgid = self.result2(msgid,all,timeout)
  File /usr/lib/python2.7/dist-packages/ldap/ldapobject.py, line 426, in 
result2
res_type, res_data, res_msgid, srv_ctrls = self.result3(msgid,all,timeout)
  File /usr/lib/python2.7/dist-packages/ldap/ldapobject.py, line 432, in 
result3
ldap_result = self._ldap_call(self._l.result3,msgid,all,timeout)
  File /usr/lib/python2.7/dist-packages/ldap/ldapobject.py, line 96, in 
_ldap_call
result = func(*args,**kwargs)
TYPE_OR_VALUE_EXISTS: {'info': attribute 'description' provided more than 
once, 'desc': 'Type or value exists'}

Test failed:
LiveLDAPIdentity.test_user_extra_attribute_mapping_description_is_returned

Reason: The issue is that the 'description' attribute is sent twice in the 
add_s call; the line that needs to be modified is
https://github.com/openstack/keystone/blob/master/keystone/common/ldap/core.py#L1073

Possible solution: Line 1350 in
https://review.openstack.org/#/c/95300/19/keystone/common/ldap/core.py,cm
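
For illustration only, a hypothetical sketch of one way to avoid the
duplicate (this is not the actual patch in the review above): merge
duplicate attribute names in the modlist before it is handed to add_s, so
'description' is only sent once.

def _dedupe_modlist(modlist):
    # merge values for attribute names that appear more than once, so
    # conn.add_s() never receives the same attribute (e.g. 'description')
    # twice
    merged = {}
    for name, values in modlist:
        if not isinstance(values, list):
            values = [values]
        merged.setdefault(name, []).extend(values)
    return list(merged.items())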

** Affects: keystone
 Importance: Undecided
 Assignee: Arun Kant (arunkant-uws)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) = Arun Kant (arunkant-uws)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1347262

Title:
  Ldap Live test failures

Status in OpenStack Identity (Keystone):
  New

Bug description:
  In keystone master, when the live LDAP tests are executed against a
  local OpenLDAP instance, 7 tests fail.

  3 tests fail with the following error.

  Traceback (most recent call last):
File /usr/local/lib/python2.7/dist-packages/mock.py, line 1201, in patched
  return func(*args, **keywargs)
File 
/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/tests/test_backend_ldap.py,
 line 1156, in test_chase_referrals_off
  user_api.get_connection(user=None, password=None)
File 
/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/common/ldap/core.py,
 line 965, in get_connection
  conn.simple_bind_s(user, password)
File 
/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/common/ldap/core.py,
 line 638, in simple_bind_s
  serverctrls, clientctrls)
File 
/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/tests/fakeldap.py,
 line 246, in simple_bind_s
  attrs = self.db[self.key(who)]
  AttributeError: 'FakeLdap' object has no attribute 'db'

  The tests which are failing with above error are

  LiveLDAPIdentity.test_chase_referrals_off
  LiveLDAPIdentity.test_chase_referrals_on
  LiveLDAPIdentity.test_debug_level_set

  Reason: In FakeLdap, the live-test credentials are different from those
  in backend_ldap.conf and do not match at
  https://github.com/openstack/keystone/blob/master/keystone/tests/fakeldap.py#L242

  
  1 test fails with the following error.

  Traceback (most recent call last):
File /usr/local/lib/python2.7/dist-packages/mock.py, line 1201, in patched
  return func(*args, **keywargs)
File 
/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/tests/test_backend_ldap.py,
 line 1288, in test_user_mixed_case_attribute
  user['email'])
  KeyError: 'email'

  Test failed:
  LiveLDAPIdentity.test_user_mixed_case_attribute

  Reason: CONF.ldap.user_mail_attribute is different in the live test: it
  is 'mail' rather than 'email' as in backend_ldap.conf, so the test code
  needs to be changed to handle both scenarios.

  2 tests

[Yahoo-eng-team] [Bug 1337787] [NEW] Port update crashes when device id does not need to be updated

2014-07-04 Thread Mithil Arun
Public bug reported:

When I call the update_port() method using the ML2 plugin, I see the following 
error:
2014-07-04 10:05:40.043 17585 ERROR neutron.api.v2.resource [-] 
add_router_interface failed
2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py, line 84, in 
resource
2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/api/v2/base.py, line 185, in 
_handle_action
2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource return 
getattr(self._plugin, name)(*arg_list, **kwargs)
2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/services/pn_services/router.py, line 
224, in add_router_interface
2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource 
self.update_port(context, p['id'], port_info)
2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/services/pn_services/router.py, line 
349, in update_port
2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource 
2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/db/db_base_plugin_v2.py, line 1397, 
in update_port
2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource and 
(changed_device_id or changed_device_owner)):
2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource UnboundLocalError: 
local variable 'changed_device_id' referenced before assignment

On further inspection of the file in question (/usr/lib/python2.6/site-
packages/neutron/db/db_base_plugin_v2.py: update_port()), I see that the
variable 'changed_device_id' is only assigned inside an 'if' branch and
not otherwise. This causes the crash, since a later 'if' tries to read
the variable when it was never assigned.

--snip--
def update_port(self, context, id, port):
    p = port['port']

    changed_ips = False
    with context.session.begin(subtransactions=True):
        port = self._get_port(context, id)
        if 'device_owner' in p:
            current_device_owner = p['device_owner']
            changed_device_owner = True
        else:
            current_device_owner = port['device_owner']
            changed_device_owner = False
        if p.get('device_id') != port['device_id']:
            changed_device_id = True

        # if the current device_owner is ROUTER_INF and the device_id or
        # device_owner changed check device_id is not another tenants
        # router
        if ((current_device_owner == constants.DEVICE_OWNER_ROUTER_INTF)
                and (changed_device_id or changed_device_owner)):
            self._enforce_device_owner_not_router_intf_or_device_id(
                context, p, port['tenant_id'], port)
--snip--

'changed_device_id' should be set to 'False' by default.
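
A minimal sketch of that fix (untested, for illustration only): initialize
the flag before the conditional so the later check can never hit an
unbound local.

# initialize the flag unconditionally so the later 'if' can always read it
changed_device_id = False
if p.get('device_id') != port['device_id']:
    changed_device_id = True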

** Affects: neutron
 Importance: Undecided
 Assignee: Mithil Arun (arun-mithil)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = Mithil Arun (arun-mithil)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1337787

Title:
  Port update crashes when device id does not need to be updated

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When I call the update_port() method using the ML2 plugin, I see the 
following error:
  2014-07-04 10:05:40.043 17585 ERROR neutron.api.v2.resource [-] 
add_router_interface failed
  2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py, line 84, in 
resource
  2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/api/v2/base.py, line 185, in 
_handle_action
  2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource return 
getattr(self._plugin, name)(*arg_list, **kwargs)
  2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/services/pn_services/router.py, line 
224, in add_router_interface
  2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource 
self.update_port(context, p['id'], port_info)
  2014-07-04 10:05:40.043 17585 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/services/pn_services/router.py, line 
349, in update_port
  2014-07-04 10:05

[Yahoo-eng-team] [Bug 1320997] [NEW] Identity Ldap driver connection pooling

2014-05-19 Thread Arun Kant
Public bug reported:

Currently the LDAP API handler establishes a new connection for each
identity data (user, group) lookup, which becomes quite costly when TLS
support is enabled.

In performance testing with 100 concurrent users, with OpenLDAP as the
LDAP server, we observed that the LDAP identity backend takes around
9-15 times longer (around 7-10 seconds) than the MySQL identity backend,
and 77% of that time is spent in LDAP data retrieval for the
authentication request.

So locally we tried to optimize LDAP lookups by using connection pooling
(https://pypi.python.org/pypi/ldappool/1.0), and that improved the
performance numbers by 30%.

This request is to make similar enhancement in LDAP handler code to use
connection pooling.
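
For reference, a minimal sketch of what pooled lookups with ldappool
roughly look like (hypothetical server URI, bind DN and password; this is
not the proposed keystone code): the pool hands out already-bound
connections instead of opening, and TLS-negotiating, a new one for every
request.

import ldap
from ldappool import ConnectionManager

# one manager per server; it keeps up to 'size' bound connections alive
cm = ConnectionManager('ldap://ldap.example.com', size=10, retry_max=3)

with cm.connection('cn=admin,dc=example,dc=com', 'secret') as conn:
    # the connection goes back to the pool when the block exits
    conn.search_s('ou=Users,dc=example,dc=com', ldap.SCOPE_SUBTREE,
                  '(objectClass=inetOrgPerson)')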

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: ldap

** Description changed:

  Currently LDAP API handler establishes new connection for identity data
  (user, group) lookup which becomes quite costly when TLS support is
  enabled.
  
  In performance testing with 100 concurrent users, with OpenLdap as ldap
  server, we observed that ldap identity backend takes around 9-15 times
  more time (around 7-10 seconds)  with respect to mysql identity backend.
  And 77% of time is spent in ldap data retrieval for authentication
  request.
  
  So locally we tried to optimize ldap lookup by using connection pooling
  (https://pypi.python.org/pypi/ldappool/1.0) and that has improved
  performance numbers by 30%.
  
- This request is to similar enhancement in LDAP handler code to use
+ This request is to make similar enhancement in LDAP handler code to use
  connection pooling.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1320997

Title:
  Identity Ldap driver connection pooling

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Currently the LDAP API handler establishes a new connection for each
  identity data (user, group) lookup, which becomes quite costly when
  TLS support is enabled.

  In performance testing with 100 concurrent users, with OpenLDAP as the
  LDAP server, we observed that the LDAP identity backend takes around
  9-15 times longer (around 7-10 seconds) than the MySQL identity
  backend, and 77% of that time is spent in LDAP data retrieval for the
  authentication request.

  So locally we tried to optimize LDAP lookups by using connection
  pooling (https://pypi.python.org/pypi/ldappool/1.0), and that improved
  the performance numbers by 30%.

  This request is to make similar enhancement in LDAP handler code to
  use connection pooling.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1320997/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1306889] [NEW] instance groups and groupaffinityfilters

2014-04-12 Thread Arun Thulasi
Public bug reported:

Hello All,

[This is more of a question than a bug]

I am running Fedora20/IceHouse/RDO. I have been using the
GroupAntiAffinityFilter to implement some affinity/anti-affinity rules.
I have GroupAntiAffinityFilter set in nova.conf for my filters and I
start my instances with the name of the group as a hint.

nova boot --flavor 1 --image image-id --hint group=test tstvm1.

For the last couple of days, I have been getting failures with the
error that instance group 'test' could not be found. My current version
of the nova client does not seem to have an option to set up instance
groups, nor could I find any documentation in the most recent guide.
Has anyone hit this issue or been able to work around it?

Looking through the stack, it appears that the failure happens since
there is no group. However, should nova be creating an instance group if
one is not available already?

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1306889

Title:
  instance groups and groupaffinityfilters

Status in OpenStack Compute (Nova):
  New

Bug description:
  Hello All,

  [This is more of a question than a bug]

  I am running Fedora20/IceHouse/RDO. I have been using the
  GroupAntiAffinityFilter to implement some affinity/anti-affinity
  rules. I have GroupAntiAffinityFilter set in nova.conf for my filters
  and I start my instances with the name of the group as a hint.

  nova boot --flavor 1 --image image-id --hint group=test tstvm1.

  For the last couple of days, I have been getting failures with the
  error that instance group 'test' could not be found. My current
  version of the nova client does not seem to have an option to set up
  instance groups, nor could I find any documentation in the most
  recent guide. Has anyone hit this issue or been able to work around
  it?

  Looking through the stack, it appears that the failure happens since
  there is no group. However, should nova be creating an instance group
  if one is not available already?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1306889/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp