[Yahoo-eng-team] [Bug 1332917] [NEW] Deadlock when deleting from ipavailabilityranges

2014-06-22 Thread Eugene Nikanorov
Public bug reported:

Traceback:
 TRACE neutron.api.v2.resource Traceback (most recent call last):
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 87, in resource
 TRACE neutron.api.v2.resource result = method(request=request, **args)
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 477, in delete
 TRACE neutron.api.v2.resource obj_deleter(request.context, id, **kwargs)
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 608, in 
delete_subnet
 TRACE neutron.api.v2.resource break
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 463, 
in __exit__
 TRACE neutron.api.v2.resource self.rollback()
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py, line 
57, in __exit__
 TRACE neutron.api.v2.resource compat.reraise(exc_type, exc_value, exc_tb)
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 460, 
in __exit__
 TRACE neutron.api.v2.resource self.commit()
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 370, 
in commit
 TRACE neutron.api.v2.resource self._prepare_impl()
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 350, 
in _prepare_impl
 TRACE neutron.api.v2.resource self.session.flush()
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/openstack/common/db/sqlalchemy/session.py, 
line 444, in _wrap
 TRACE neutron.api.v2.resource _raise_if_deadlock_error(e, 
self.bind.dialect.name)
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/openstack/common/db/sqlalchemy/session.py, 
line 427, in _raise_if_deadlock_error
 TRACE neutron.api.v2.resource raise exception.DBDeadlock(operational_error)
 TRACE neutron.api.v2.resource DBDeadlock: (OperationalError) (1213, 'Deadlock 
found when trying to get lock; try restarting transaction') 'DELETE FROM 
ipavailabilityranges WHERE ipavailabilityranges.allocation_pool_id = %s AND 
ipavailabilityranges.first_ip = %s AND ipavailabilityranges.last_ip = %s' 
('b19b08b6-90f2-43d6-bfe1-9cbe6e0e1d93', '10.100.0.2', '10.100.0.14')

http://logs.openstack.org/21/76021/12/check/check-tempest-dsvm-neutron-full/7577c27/logs/screen-q-svc.txt.gz?level=TRACE#_2014-06-21_18_39_47_122
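
For context, retrying the whole DB operation when DBDeadlock bubbles up is the
usual way such gate failures are papered over. A minimal sketch of that idea,
using the oslo-incubator exception path shown in the trace; the retry count and
delay are arbitrary illustration values, and this is not the fix that was
eventually merged:

import time

from neutron.openstack.common.db import exception as db_exc


def call_with_deadlock_retry(func, retries=3, delay=0.5):
    # Call func(); if the backend reports a deadlock, wait briefly and try
    # again, giving up after the last attempt.
    for attempt in range(retries):
        try:
            return func()
        except db_exc.DBDeadlock:
            if attempt == retries - 1:
                raise
            time.sleep(delay)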

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: db gate-failure ml2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332917

Title:
  Deadlock when deleting from ipavailabilityranges

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  Traceback:
   TRACE neutron.api.v2.resource Traceback (most recent call last):
   TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 87, in resource
   TRACE neutron.api.v2.resource result = method(request=request, **args)
   TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 477, in delete
   TRACE neutron.api.v2.resource obj_deleter(request.context, id, **kwargs)
   TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 608, in 
delete_subnet
   TRACE neutron.api.v2.resource break
   TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 463, 
in __exit__
   TRACE neutron.api.v2.resource self.rollback()
   TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py, line 
57, in __exit__
   TRACE neutron.api.v2.resource compat.reraise(exc_type, exc_value, exc_tb)
   TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 460, 
in __exit__
   TRACE neutron.api.v2.resource self.commit()
   TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 370, 
in commit
   TRACE neutron.api.v2.resource self._prepare_impl()
   TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 350, 
in _prepare_impl
   TRACE neutron.api.v2.resource self.session.flush()
   TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/openstack/common/db/sqlalchemy/session.py, 
line 444, in _wrap
   TRACE neutron.api.v2.resource _raise_if_deadlock_error(e, 
self.bind.dialect.name)
   TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/openstack/common/db/sqlalchemy/session.py, 
line 427, in _raise_if_deadlock_error
   TRACE neutron.api.v2.resource raise 

[Yahoo-eng-team] [Bug 1332923] [NEW] Deadlock updating port with fixed ips

2014-06-22 Thread Eugene Nikanorov
Public bug reported:

Traceback:

 TRACE neutron.api.v2.resource Traceback (most recent call last):
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 87, in resource
 TRACE neutron.api.v2.resource result = method(request=request, **args)
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 531, in update
 TRACE neutron.api.v2.resource obj = obj_updater(request.context, id, 
**kwargs)
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 682, in update_port
 TRACE neutron.api.v2.resource port)
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 1497, in 
update_port
 TRACE neutron.api.v2.resource p['fixed_ips'])
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 650, in 
_update_ips_for_port
 TRACE neutron.api.v2.resource ips = self._allocate_fixed_ips(context, 
network, to_add)
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 612, in 
_allocate_fixed_ips
 TRACE neutron.api.v2.resource result = self._generate_ip(context, subnets)
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 364, in 
_generate_ip
 TRACE neutron.api.v2.resource return 
NeutronDbPluginV2._try_generate_ip(context, subnets)
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 381, in 
_try_generate_ip
 TRACE neutron.api.v2.resource range = 
range_qry.filter_by(subnet_id=subnet['id']).first()
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2333, in 
first
 TRACE neutron.api.v2.resource ret = list(self[0:1])
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2200, in 
__getitem__
 TRACE neutron.api.v2.resource return list(res)
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2404, in 
__iter__
 TRACE neutron.api.v2.resource return self._execute_and_instances(context)
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2419, in 
_execute_and_instances
 TRACE neutron.api.v2.resource result = 
conn.execute(querycontext.statement, self._params)
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 720, 
in execute
 TRACE neutron.api.v2.resource return meth(self, multiparams, params)
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py, line 317, 
in _execute_on_connection
 TRACE neutron.api.v2.resource return 
connection._execute_clauseelement(self, multiparams, params)
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 817, 
in _execute_clauseelement
 TRACE neutron.api.v2.resource compiled_sql, distilled_params
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 947, 
in _execute_context
 TRACE neutron.api.v2.resource context)
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1108, 
in _handle_dbapi_exception
 TRACE neutron.api.v2.resource exc_info
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py, line 185, 
in raise_from_cause
 TRACE neutron.api.v2.resource reraise(type(exception), exception, 
tb=exc_tb)
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 940, 
in _execute_context
 TRACE neutron.api.v2.resource context)
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, line 
435, in do_execute
 TRACE neutron.api.v2.resource cursor.execute(statement, parameters)
 TRACE neutron.api.v2.resource DBAPIError: (TransactionRollbackError) deadlock 
detected
 TRACE neutron.api.v2.resource DETAIL:  Process 21690 waits for ShareLock on 
transaction 10397; blocked by process 21692.
 TRACE neutron.api.v2.resource Process 21692 waits for ShareLock on transaction 
10396; blocked by process 21690.
 TRACE neutron.api.v2.resource HINT:  See server log for query details.
 TRACE neutron.api.v2.resource  'SELECT ipavailabilityranges.allocation_pool_id 
AS ipavailabilityranges_allocation_pool_id, ipavailabilityranges.first_ip AS 
ipavailabilityranges_first_ip, ipavailabilityranges.last_ip AS 
ipavailabilityranges_last_ip \nFROM ipavailabilityranges JOIN ipallocationpools 
ON ipallocationpools.id = ipavailabilityranges.allocation_pool_id \nWHERE 
ipallocationpools.subnet_id = %(subnet_id_1)s \n LIMIT %(param_1)s FOR UPDATE' 
{'param_1': 1, 
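
For context, a minimal, self-contained sketch of the SELECT ... FOR UPDATE
pattern that appears in the query above; table and column names mirror the log,
the in-memory engine is only a stand-in (this gate job ran against PostgreSQL),
and this is not Neutron's actual code. Two transactions that take these row
locks while also deleting from the same table can end up waiting on each other,
which is exactly the deadlock reported here.

from sqlalchemy import Column, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class IPAvailabilityRange(Base):
    __tablename__ = 'ipavailabilityranges'
    allocation_pool_id = Column(String(36), primary_key=True)
    first_ip = Column(String(64), primary_key=True)
    last_ip = Column(String(64), primary_key=True)


engine = create_engine('sqlite://')  # stand-in engine for illustration only
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# The locking read: matched rows stay locked until the transaction ends.
ip_range = (session.query(IPAvailabilityRange)
            .with_lockmode('update')
            .filter_by(allocation_pool_id='b19b08b6-90f2-43d6-bfe1-9cbe6e0e1d93')
            .first())
print(ip_range)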

[Yahoo-eng-team] [Bug 1327056] Re: fwaas:In firewall-rule-create cli --disabled should be removed

2014-06-22 Thread Eugene Nikanorov
** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1327056

Title:
  fwaas:In  firewall-rule-create cli  --disabled should be removed

Status in Python client library for Neutron:
  New

Bug description:
   In the neutron firewall-rule-create CLI, --disabled should be removed. Though 
we can disable the firewall rule using this option, we cannot enable it again. 
However, using --enabled True|False we can either enable or disable the 
firewall rule.
   Also, --enabled should be shown as an optional argument in the help output 
instead of --disabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1327056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327005] Re: Need change host to host_name in host resources

2014-06-22 Thread Christopher Yeoh
This is a python-novaclient bug, not a nova bug

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1327005

Title:
  Need change host to host_name in host resources

Status in OpenStack Compute (Nova):
  Invalid
Status in Python client library for Nova:
  New

Bug description:
  Steps to reproduce:
  In a Python terminal:
   from novaclient.v1_1 import client
   ct = client.Client("admin", "password", "admin",
                      "http://192.168.1.100:5000/v2.0")
   ct.hosts.get("hostname")

  error:
  File "<stdin>", line 1, in <module>
    File "/opt/stack/python-novaclient/novaclient/v1_1/hosts.py", line 24, in __repr__
      return "<Host: %s>" % self.host_name
    File "/opt/stack/python-novaclient/novaclient/openstack/common/apiclient/base.py", line 464, in __getattr__
      raise AttributeError(k)
  AttributeError: host_name
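
A hedged workaround sketch until the client is fixed: read the raw attribute
dict (_info) instead of letting the interpreter call __repr__, which assumes a
host_name field the API does not return here. Credentials and endpoint are the
placeholders from the reproduction above.

from novaclient.v1_1 import client

ct = client.Client("admin", "password", "admin",
                   "http://192.168.1.100:5000/v2.0")
# hosts.get() returns Resource objects; _info holds whatever keys the API
# actually returned (e.g. 'host'), so printing it sidesteps __repr__.
for h in ct.hosts.get("hostname"):
    print(h._info)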

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327005/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323981] Re: Can't determine instance's server_group via 'DescribeInstance()'

2014-06-22 Thread Christopher Yeoh
Marking this as WontFix only because I think it's a feature request, not
a bug, and we don't need to keep track of it in the bugs database. Please
submit this as a proposal to nova-specs and a blueprint.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1323981

Title:
  Can't determine instance's server_group via 'DescribeInstance()'

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  The 'server_group' function is implemented in Icehouse.
  But we can't determine VM's server_group via 'DescribeInstance()'.

  The only way to get this info is to filter through all server_groups' 
memberships.
  That's not an elegant or convenient way.

  
  Imagine the use case below:

  1. An environment has lots of (say 100) server_groups, and each server_group 
contains 100 instances.
  2. An instance was created in one of the server_groups earlier. Now I need to 
create another instance with an anti-affinity policy against it.

  3. Now, how can I determine which server_group I should choose?
  4. The only way here is to list all server_groups' info and filter their 
membership using the VM's uuid (see the sketch after this description).
  5. The workload only grows as the environment gets more server_groups.

  
  So, IMO we need to add an item like 'server_group' to 'DescribeInstance()'s 
response.
  The server_group info is already stored in the db. We only need to return the 
relationship between instance and server_group via the API.

  In this case, the steps above can be simplified into one step:

  1. Execute 'DescribeInstance()' to get the server_group's uuid.
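
  A sketch of the workaround described in step 4 above, assuming a novaclient
that exposes the server_groups manager; credentials, endpoint and uuid are
placeholders:

  from novaclient.v1_1 import client

  ct = client.Client("admin", "password", "admin",
                     "http://192.168.1.100:5000/v2.0")
  instance_uuid = "11111111-2222-3333-4444-555555555555"

  # Scan every server group and keep the ones whose member list contains the
  # instance; this is exactly the O(number of groups) scan complained about.
  groups = [sg for sg in ct.server_groups.list()
            if instance_uuid in sg.members]
  print(groups)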

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1323981/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332036] Re: Instance status would never changed from BUILD after rabbitmq restarted

2014-06-22 Thread Mitsuru Kanabuchi
Hi Andrea, sorry for the lack of details.

I hadn't thought this behavior was related to config values.
Please see the following details:

==
1) When rabbitmq received SIGKILL, nova-compute logged an ERROR, but this behavior 
is normal

2014-06-23 11:23:11.691 ERROR oslo.messaging._drivers.impl_rabbit [-] Failed to 
consume message from queue: Socket closed
2014-06-23 11:23:11.691 TRACE oslo.messaging._drivers.impl_rabbit Traceback 
(most recent call last):
2014-06-23 11:23:11.691 TRACE oslo.messaging._drivers.impl_rabbit   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py,
 line 639, in ensure
2014-06-23 11:23:11.691 TRACE oslo.messaging._drivers.impl_rabbit return 
method(*args, **kwargs)
2014-06-23 11:23:11.691 TRACE oslo.messaging._drivers.impl_rabbit   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py,
 line 718, in _consume
2014-06-23 11:23:11.691 TRACE oslo.messaging._drivers.impl_rabbit return 
self.connection.drain_events(timeout=timeout)
2014-06-23 11:23:11.691 TRACE oslo.messaging._drivers.impl_rabbit   File 
/usr/local/lib/python2.7/dist-packages/kombu/connection.py, line 279, in 
drain_events
2014-06-23 11:23:11.691 TRACE oslo.messaging._drivers.impl_rabbit return 
self.transport.drain_events(self.connection, **kwargs)
2014-06-23 11:23:11.691 TRACE oslo.messaging._drivers.impl_rabbit   File 
/usr/local/lib/python2.7/dist-packages/kombu/transport/pyamqp.py, line 91, in 
drain_events
2014-06-23 11:23:11.691 TRACE oslo.messaging._drivers.impl_rabbit return 
connection.drain_events(**kwargs)
2014-06-23 11:23:11.691 TRACE oslo.messaging._drivers.impl_rabbit   File 
/usr/local/lib/python2.7/dist-packages/amqp/connection.py, line 299, in 
drain_events
2014-06-23 11:23:11.691 TRACE oslo.messaging._drivers.impl_rabbit chanmap, 
None, timeout=timeout,
2014-06-23 11:23:11.691 TRACE oslo.messaging._drivers.impl_rabbit   File 
/usr/local/lib/python2.7/dist-packages/amqp/connection.py, line 362, in 
_wait_multiple
2014-06-23 11:23:11.691 TRACE oslo.messaging._drivers.impl_rabbit channel, 
method_sig, args, content = read_timeout(timeout)
2014-06-23 11:23:11.691 TRACE oslo.messaging._drivers.impl_rabbit   File 
/usr/local/lib/python2.7/dist-packages/amqp/connection.py, line 326, in 
read_timeout
2014-06-23 11:23:11.691 TRACE oslo.messaging._drivers.impl_rabbit return 
self.method_reader.read_method()
2014-06-23 11:23:11.691 TRACE oslo.messaging._drivers.impl_rabbit   File 
/usr/local/lib/python2.7/dist-packages/amqp/method_framing.py, line 189, in 
read_method
2014-06-23 11:23:11.691 TRACE oslo.messaging._drivers.impl_rabbit raise m
2014-06-23 11:23:11.691 TRACE oslo.messaging._drivers.impl_rabbit IOError: 
Socket closed
2014-06-23 11:23:11.691 TRACE oslo.messaging._drivers.impl_rabbit

2) nova-compute started reconnecting immediately, which is good

2014-06-23 11:23:12.414 INFO oslo.messaging._drivers.impl_rabbit [-] 
Reconnecting to AMQP server on 192.168.10.221:5672
2014-06-23 11:23:12.414 INFO oslo.messaging._drivers.impl_rabbit [-] Delaying 
reconnect for 1.0 seconds...
2014-06-23 11:23:12.844 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP 
server on 192.168.10.221:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying 
again in 1 seconds.
2014-06-23 11:23:13.428 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP 
server on 192.168.10.221:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying 
again in 1 seconds.

3) after rabbitmq was restarted, nova-compute reconnected to it, which is
very nice

2014-06-23 11:24:25.653 INFO oslo.messaging._drivers.impl_rabbit [-] 
Reconnecting to AMQP server on 192.168.10.221:5672
2014-06-23 11:24:25.653 INFO oslo.messaging._drivers.impl_rabbit [-] Delaying 
reconnect for 1.0 seconds...
2014-06-23 11:24:25.789 INFO oslo.messaging._drivers.impl_rabbit [-] Connected 
to AMQP server on 192.168.10.221:5672
2014-06-23 11:24:26.706 INFO oslo.messaging._drivers.impl_rabbit [-] Connected 
to AMQP server on 192.168.10.221:5672
2014-06-23 11:24:31.916 INFO oslo.messaging._drivers.impl_rabbit [-] 
Reconnecting to AMQP server on 192.168.10.221:5672
2014-06-23 11:24:31.917 INFO oslo.messaging._drivers.impl_rabbit [-] Delaying 
reconnect for 1.0 seconds...
2014-06-23 11:24:32.947 INFO oslo.messaging._drivers.impl_rabbit [-] Connected 
to AMQP server on 192.168.10.221:5672

4) But the booting instance is frozen, with the following log

2014-06-23 11:24:34.012 WARNING nova.openstack.common.loopingcall [-] task run 
outlasted interval by 63.325336 sec
2014-06-23 11:25:05.777 DEBUG nova.openstack.common.lockutils 
[req-b23de1c3-55d8-4d78-9386-621b907f2f25 admin admin] Got semaphore 
7b68d649-cbf4-4e97-aa66-bf9b88199db7 from (pid=3704) lock 
/opt/stack/nova/nova/openstack/common/lockutils.py:168
2014-06-23 11:25:05.777 DEBUG nova.openstack.common.lockutils 
[req-b23de1c3-55d8-4d78-9386-621b907f2f25 admin admin] Got semaphore / lock 
do_run_instance from (pid=3704) inner 

[Yahoo-eng-team] [Bug 1324348] Re: Server_group shouldn't have same policies in it

2014-06-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/96645
Committed: 
https://git.openstack.org/cgit/openstack/tempest/commit/?id=af630da010ae7083e0a4da87b5014f45d90ac7ef
Submitter: Jenkins
Branch: master

commit af630da010ae7083e0a4da87b5014f45d90ac7ef
Author: wingwj win...@gmail.com
Date:   Fri May 30 09:21:30 2014 +0800

Don't store duplicate policies for server_group

It doesn't make sense to store same policies in a server_group.
We only need to store one and ignore the duplicate policies.

This patch relates to the bug I4f3ad544aef78cbbc076c7a47cca04832a2f5b4b
in Nova. So I need to skip one test-case here firstly in order to modify the
issue in Nova.

After the Nova's patch merged, this test-case will be restored,
and more correlate cases will definitly be supplied in tempest.

Change-Id: I26449a2a881be396daf75838451cfe01a915f513
Closes-Bug: #1324348


** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1324348

Title:
  Server_group shouldn't have same policies in it

Status in OpenStack Compute (Nova):
  In Progress
Status in Tempest:
  Fix Released

Bug description:
  Several identical policies can be put into one server_group now.
  That doesn't make sense; the duplicate policies need to be ignored.

  

  stack@devaio:~$ nova server-group-create --policy affinity --policy affinity wjsg1
  +--------------------------------------+-------+----------------------------+---------+----------+
  | Id                                   | Name  | Policies                   | Members | Metadata |
  +--------------------------------------+-------+----------------------------+---------+----------+
  | 4f6679b7-f6b1-4d1e-92cd-1a54e1fe0f3d | wjsg1 | [u'affinity', u'affinity'] | []      | {}       |
  +--------------------------------------+-------+----------------------------+---------+----------+
  stack@devaio:~$
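
  For illustration, a minimal sketch of the de-duplication this report asks
for, keeping the first occurrence of each policy; this is not the patch that
was merged in Nova:

  def dedup_policies(policies):
      # Drop repeated policy names while preserving the original order.
      seen = set()
      result = []
      for policy in policies:
          if policy not in seen:
              seen.add(policy)
              result.append(policy)
      return result

  print(dedup_policies(['affinity', 'affinity']))  # ['affinity']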

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1324348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333074] [NEW] pci_passthrough_whitelist in nova.conf can only filter PCI devices by product_id and vender_id

2014-06-22 Thread Young
Public bug reported:

I want to use SR-IOV in OpenStack.
I have one NIC with two slots.  Only one slot is plugged in, so the NIC has two 
Physical Functions.  I enabled SR-IOV on this machine, so I got 32 virtual 
functions (16 virtual functions for each physical function).
Now I want to make OpenStack use only the 16 virtual functions of the physical 
function that is plugged in.  However, I found that only product_id and 
vendor_id can be used as filter criteria when I looked up the code in 
pci/pci_whitelist.py (Line 40, _WHITELIST_SCHEMA).
I would like to be able to filter PCI devices by physical function, like this: 
pci_passthrough_whitelist=[{"vendor_id":"8086", "product_id":"1515", 
"phys_function.0.3":"0x0"}].

There is the same problem for pci_alias: I can't use extra_info to
define the pci_alias filter (the physical function info is in
extra_info).
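
To illustrate the kind of matching being asked for, here is a hedged sketch of
a matcher that also consults extra_info; the key names follow the example above
and are hypothetical, and this is not nova's actual whitelist code:

def device_matches(dev, spec):
    # Accept the device if every key in the whitelist spec matches either a
    # top-level device field or an entry in its extra_info dict.
    extra = dev.get('extra_info', {})
    for key, value in spec.items():
        if dev.get(key) != value and extra.get(key) != value:
            return False
    return True


spec = {"vendor_id": "8086", "product_id": "1515",
        "phys_function": "0000:03:00.0"}
dev = {"vendor_id": "8086", "product_id": "1515",
       "extra_info": {"phys_function": "0000:03:00.0"}}
print(device_matches(dev, spec))  # True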

** Affects: nova
 Importance: Undecided
 Assignee: Young (afe-young)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Young (afe-young)

** Summary changed:

- pci_passthrough_whitelist in nova.conf  can  only filter by  product_id and 
vender_id
+ pci_passthrough_whitelist in nova.conf  can  only PCI devices  filter by  
product_id and vender_id

** Summary changed:

- pci_passthrough_whitelist in nova.conf  can  only PCI devices  filter by  
product_id and vender_id
+ pci_passthrough_whitelist in nova.conf  can  only  filter  PCI devices by  
product_id and vender_id

** Description changed:

- 
- I'mw working on using SR-IOV in Openstack.
+ I want to use SR-IOV in Openstack.
  I have one NIC with two slots.  Only one slot is plugged in. So the NIC has 
two Physical Function.  I enabled sr-iov on this machine. So I got 32 virtual 
functions(16 virtual function for each physical function).
- Now I want to make openstack only uses the 16 virtual functions for the 
physical functions which is plugged in.   However, I found that only  
product_id and vender_id is enabled when I looked up the code in  
pci/pci_whitelist.py(Line 40,  _WHITELIST_SCHEMA).   
+ Now I want to make openstack only uses the 16 virtual functions for the 
physical functions which is plugged in.   However, I found that only  
product_id and vender_id is enabled when I looked up the code in  
pci/pci_whitelist.py(Line 40,  _WHITELIST_SCHEMA).
  I hope I could filter by physical functions like this  
pci_passthrough_whitelist=[{ vendor_id:8086,product_id:1515, 
phys_function.0.3: 0x0}].
  
- 
- There is a same problem for the pci_alias.  I can't use extra_info to  define 
 the pci_alias  filter(The physical function info is in extra_info)
+ There is a same problem for the pci_alias.  I can't use extra_info to
+ define  the pci_alias  filter(The physical function info is in
+ extra_info)

** Description changed:

  I want to use SR-IOV in Openstack.
  I have one NIC with two slots.  Only one slot is plugged in. So the NIC has 
two Physical Function.  I enabled sr-iov on this machine. So I got 32 virtual 
functions(16 virtual function for each physical function).
- Now I want to make openstack only uses the 16 virtual functions for the 
physical functions which is plugged in.   However, I found that only  
product_id and vender_id is enabled when I looked up the code in  
pci/pci_whitelist.py(Line 40,  _WHITELIST_SCHEMA).
- I hope I could filter by physical functions like this  
pci_passthrough_whitelist=[{ vendor_id:8086,product_id:1515, 
phys_function.0.3: 0x0}].
+ Now I want to make openstack only use the 16 virtual functions for the 
physical functions which is plugged in.   However, I found that only  
product_id and vender_id can be the filter criteria when I looked up the code 
in  pci/pci_whitelist.py(Line 40,  _WHITELIST_SCHEMA).
+ I hope I could filter PCI devices by physical functions like this  
pci_passthrough_whitelist=[{ vendor_id:8086,product_id:1515, 
phys_function.0.3: 0x0}].
  
  There is a same problem for the pci_alias.  I can't use extra_info to
- define  the pci_alias  filter(The physical function info is in
+ define  the pci_alias filter(The physical function info is in
  extra_info)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1333074

Title:
  pci_passthrough_whitelist in nova.conf  can  only  filter  PCI devices
  by  product_id and vender_id

Status in OpenStack Compute (Nova):
  New

Bug description:
  I want to use SR-IOV in OpenStack.
  I have one NIC with two slots.  Only one slot is plugged in, so the NIC has 
two Physical Functions.  I enabled SR-IOV on this machine, so I got 32 virtual 
functions (16 virtual functions for each physical function).
  Now I want to make OpenStack use only the 16 virtual functions of the 
physical function that is plugged in.  However, I found that only product_id 
and vendor_id can be used as filter criteria when I looked up the code in 
pci/pci_whitelist.py (Line 40, 

[Yahoo-eng-team] [Bug 1333084] [NEW] test_update_port_with_second_ip failed due to a server failure “Caught error: (TransactionRollbackError) deadlock detected”

2014-06-22 Thread sean mooney
Public bug reported:

tempest.api.network.test_ports.PortsTestJSON.test_update_port_with_second_ip
failed due to a deadlock in the db.

The trace below from the q-svc screen log appears to be very similar to an
open bug in the cinder project, which suggests that this class of intermittent
deadlock may exist in other cases as well.

https://bugs.launchpad.net/cinder/+bug/1294855

Full log available here:
http://logs.openstack.org/38/95138/8/check/check-tempest-dsvm-neutron-pg/99dc5cb/logs/


2014-06-21 18:07:30.852 21519 ERROR neutron.api.v2.resource 
[req-ac0f177d-9b3c-4ec7-9001-1f92dfd1bf16 None] update failed
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 87, in resource
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 531, in update
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource obj = 
obj_updater(request.context, id, **kwargs)
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 682, in update_port
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource port)
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 1497, in 
update_port
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource p['fixed_ips'])
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 650, in 
_update_ips_for_port
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource ips = 
self._allocate_fixed_ips(context, network, to_add)
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 612, in 
_allocate_fixed_ips
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource result = 
self._generate_ip(context, subnets)
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 364, in 
_generate_ip
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource return 
NeutronDbPluginV2._try_generate_ip(context, subnets)
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 381, in 
_try_generate_ip
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource range = 
range_qry.filter_by(subnet_id=subnet['id']).first()
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2333, in 
first
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource ret = 
list(self[0:1])
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2200, in 
__getitem__
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource return list(res)
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2404, in 
__iter__
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource return 
self._execute_and_instances(context)
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2419, in 
_execute_and_instances
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource result = 
conn.execute(querycontext.statement, self._params)
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 720, 
in execute
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource return 
meth(self, multiparams, params)
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py, line 317, 
in _execute_on_connection
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource return 
connection._execute_clauseelement(self, multiparams, params)
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 817, 
in _execute_clauseelement
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource compiled_sql, 
distilled_params
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 947, 
in _execute_context
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource context)
2014-06-21 18:07:30.852 21519 TRACE neutron.api.v2.resource   File 

[Yahoo-eng-team] [Bug 1308405] Re: reschedule failed because port still in use

2014-06-22 Thread Liusheng
Chris Behrens: thanks, it is. This bug has been fixed in
https://review.openstack.org/#/c/99400/.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1308405

Title:
  reschedule failed because port still in use

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  
  When booting an instance with a port specified, if the instance spawn fails 
due to a libvirt error, the instance will be rescheduled and a PortInUse 
exception will be raised.

  To reproduce, we can add a raise Exception after spawn in
  _build_instance() and restart nova-compute.

  For more details, please see:
  Traceback (most recent call last):
    File /usr/lib64/python2.6/site-packages/nova/compute/manager.py, line 
1043, in _build_instance
  set_access_ip=set_access_ip)
    File /usr/lib64/python2.6/site-packages/nova/compute/manager.py, line 
1426, in _spawn
  LOG.exception(_('Instance failed to spawn'), instance=instance)
    File /usr/lib64/python2.6/site-packages/nova/compute/manager.py, line 
1423, in _spawn
  block_device_info)
    File /usr/lib64/python2.6/site-packages/nova/virt/libvirt/driver.py, line 
2083, in spawn
  admin_pass=admin_password)
    File /usr/lib64/python2.6/site-packages/nova/virt/libvirt/driver.py, line 
2480, in _create_image
  net = netutils.get_injected_network_template(network_info)
    File /usr/lib64/python2.6/site-packages/nova/virt/netutils.py, line 74, 
in get_injected_network_
  if not (network_info and template):
    File /usr/lib64/python2.6/site-packages/nova/network/model.py, line 379, 
in __len__
  return self._sync_wrapper(fn, *args, **kwargs)
    File /usr/lib64/python2.6/site-packages/nova/network/model.py, line 366, 
in _sync_wrapper
  self.wait()
    File /usr/lib64/python2.6/site-packages/nova/network/model.py, line 398, 
in wait
  self[:] = self._gt.wait()
    File /usr/lib64/python2.6/site-packages/eventlet/greenthread.py, line 
168, in wait
  return self._exit_event.wait()
    File /usr/lib64/python2.6/site-packages/eventlet/event.py, line 120, in 
wait
  current.throw(*self._exc)
    File /usr/lib64/python2.6/site-packages/eventlet/greenthread.py, line 
194, in main
  result = function(*args, **kwargs)
    File /usr/lib64/python2.6/site-packages/nova/compute/manager.py, line 
1244, in _allocate_network
  dhcp_options=dhcp_options)
    File /usr/lib64/python2.6/site-packages/nova/network/neutronv2/api.py, 
line 243, in allocate_for
  raise exception.PortInUse(port_id=port_id)
  PortInUse: Port faf3aa64-11f8-4fc7-81bc-084098014f4a is still in use.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1308405/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp