[Yahoo-eng-team] [Bug 1837199] [NEW] nova-manage Traceback on missing arg

2019-07-19 Thread Attila Fazekas
Public bug reported:

# nova-manage cell_v2 
An error has occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/oslo_config/cfg.py", line 3179, 
in __getattr__
return getattr(self._conf._namespace, name)
AttributeError: '_Namespace' object has no attribute 'action_fn'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/stack/nova/nova/cmd/manage.py", line 2205, in main
fn, fn_args, fn_kwargs = cmd_common.get_action_fn()
  File "/opt/stack/nova/nova/cmd/common.py", line 169, in get_action_fn
fn = CONF.category.action_fn
  File "/usr/local/lib/python3.7/site-packages/oslo_config/cfg.py", line 3181, 
in __getattr__
raise NoSuchOptError(name)
oslo_config.cfg.NoSuchOptError: no such option action_fn in group [DEFAULT]


# nova-manage cell_v2 help
usage: nova-manage cell_v2 [-h]
   
{create_cell,delete_cell,delete_host,discover_hosts,list_cells,list_hosts,map_cell0,map_cell_and_hosts,map_instances,simple_cell_setup,update_cell,verify_instance}
   ...
nova-manage cell_v2: error: argument action: invalid choice: 'help' (choose 
from 'create_cell', 'delete_cell', 'delete_host', 'discover_hosts', 
'list_cells', 'list_hosts', 'map_cell0', 'map_cell_and_hosts', 'map_instances', 
'simple_cell_setup', 'update_cell', 'verify_instance')


# nova-manage cell_v2 -h
usage: nova-manage cell_v2 [-h]
   
{create_cell,delete_cell,delete_host,discover_hosts,list_cells,list_hosts,map_cell0,map_cell_and_hosts,map_instances,simple_cell_setup,update_cell,verify_instance}
   ...

positional arguments:
  
{create_cell,delete_cell,delete_host,discover_hosts,list_cells,list_hosts,map_cell0,map_cell_and_hosts,map_instances,simple_cell_setup,update_cell,verify_instance}

optional arguments:
  -h, --help            show this help message and exit


python version:
/usr/bin/python3 --version
Python 3.7.3

nova version:
$ git log -1
commit 78f9961d293e3b3e0ac62345b78abb1c9e2bd128 (HEAD -> master, origin/master, 
origin/HEAD)


Instead of printing a traceback, nova-manage should give the user a hint
about the valid choices.
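
A minimal sketch of the kind of handling this asks for (illustrative only, not
nova's actual fix; the hint text is made up): wrap the lookup that fails in the
traceback above and turn the missing action into a user-facing message and a
non-zero exit.

    from oslo_config import cfg

    def get_action_fn_or_hint(conf):
        # conf.category.action_fn is what nova/cmd/common.py reads; it raises
        # NoSuchOptError when no action sub-command was supplied.
        try:
            return conf.category.action_fn
        except cfg.NoSuchOptError:
            print('Missing action: run "nova-manage <category> -h" '
                  'to list the valid choices.')
            raise SystemExit(2)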

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: oslo.config
 Importance: Undecided
 Status: New

** Also affects: oslo.config
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1837199

Title:
  nova-manage Traceback on missing arg

Status in OpenStack Compute (nova):
  New
Status in oslo.config:
  New

[Yahoo-eng-team] [Bug 1836568] [NEW] Logs filled with unnecessary policy deprecation warnings

2019-07-15 Thread Attila Fazekas
Public bug reported:

My keystone log, from today's master version, is full of:

2019-07-15 10:47:25.316828 As of the Stein release, the domain API now 
understands how to handle
2019-07-15 10:47:25.316831 system-scoped tokens in addition to project-scoped 
tokens, making the API more
2019-07-15 10:47:25.316834 accessible to users without compromising security or 
manageability for
2019-07-15 10:47:25.316837 administrators. The new default policies for this 
API account for these changes
2019-07-15 10:47:25.316840 automatically
2019-07-15 10:47:25.316843 . Either ensure your deployment is ready for the new 
default or copy/paste the deprecated policy into your policy file and maintain 
it manually.
2019-07-15 10:47:25.316846   warnings.warn(deprecated_msg)
2019-07-15 10:47:25.316849 \x1b[00m

2019-07-15 10:47:25.132244 2019-07-15 10:47:25.131 22582 WARNING py.warnings 
[req-0162c9d3-9953-4b2d-9587-6046651033c3 7b0f3387e0f942f3bae75cea0a5766a3 
98500c83d03e4ba38aa27a78675d2b1b - default default] /usr/lo
cal/lib/python3.7/site-packages/oslo_policy/policy.py:695: UserWarning: Policy 
"identity:delete_credential":"rule:admin_required" was deprecated in S in favor 
of "identity:delete_credential":"(role:admin and sys
tem_scope:all) or user_id:%(target.credential.user_id)s". Reason: As of the 
Stein release, the credential API now understands how to handle system-scoped 
tokens in addition to project-scoped tokens, making the A
PI more accessible to users without compromising security or manageability for 
administrators. The new default policies for this API account for these changes 
automatically.. Either ensure your deployment is rea
dy for the new default or copy/paste the deprecated policy into your policy 
file and maintain it manually.
2019-07-15 10:47:25.132262   warnings.warn(deprecated_msg)
2019-07-15 10:47:25.132266 \x1b[00m
2019-07-15 10:47:25.132979 2019-07-15 10:47:25.132 22582 WARNING


This is a fresh setup from `master` without any policy configuration, so the
keystone defaults themselves trigger the warning.

grep -R  'As of the Stein release' keystone-error.log |wc -l
820


Current master targets the `T` release; there is no point in emitting 820
warnings (in the first ~10 minutes) just for using the keystone defaults.


Please make these warnings less noisy.
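
A minimal sketch of an operator-side way to cut the noise in the meantime (an
assumption-laden workaround, not a keystone change; whether it takes effect
depends on how the service configures the warnings module): collapse the
repeated oslo.policy UserWarnings so each deprecated rule is reported at most
once per process.

    import warnings

    # Report each distinct oslo.policy deprecation warning only once
    # instead of on every policy check.
    warnings.filterwarnings(
        'once',
        category=UserWarning,
        module=r'oslo_policy\.policy')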

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1836568

Title:
  Logs filled with unnecessary policy deprecation warnings

Status in OpenStack Identity (keystone):
  New


[Yahoo-eng-team] [Bug 1821306] [NEW] Using or importing the ABCs from 'collections' is deprecated

2019-03-22 Thread Attila Fazekas
Public bug reported:

Mar 22 09:09:30 controller-02 glance-api[23536]: 
/opt/stack/glance/glance/location.py:189: DeprecationWarning: Using or 
importing the ABCs from 'collections' instead of from 'collections.abc' is 
deprecated, and>
Mar 22 09:09:30 controller-02 glance-api[23536]:   class 
StoreLocations(collections.MutableSequence):
Mar 22 09:09:30 controller-02 glance-api[23536]: 
/opt/stack/glance/glance/api/common.py:115: DeprecationWarning: invalid escape 
sequence \d
Mar 22 09:09:30 controller-02 glance-api[23536]:   pattern = 
re.compile('^(\d+)((K|M|G|T)?B)?$')

Mar 22 09:09:30 controller-02 glance-api[23536]:
/usr/local/lib/python3.7/site-
packages/os_brick/initiator/linuxrbd.py:24: DeprecationWarning: Using or
importing the ABCs from 'collections' instead of from 'col>

(Today version)
py: Python 3.7.2
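
A minimal sketch of the fixes these warnings point at (illustrative, not the
exact glance/os-brick patches): import the ABCs from collections.abc and make
the regex a raw string so \d is no longer an invalid escape sequence.

    import collections.abc
    import re

    class StoreLocations(collections.abc.MutableSequence):
        # Body unchanged in the real code; only the base class moves to
        # collections.abc, which keeps working on Python 3.3+ and 3.10+.
        pass

    # Raw string avoids the DeprecationWarning about the \d escape.
    pattern = re.compile(r'^(\d+)((K|M|G|T)?B)?$')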

** Affects: glance
 Importance: Undecided
 Status: New

** Affects: os-brick
 Importance: Undecided
 Status: New

** Also affects: os-brick
   Importance: Undecided
   Status: New

** Description changed:

  Mar 22 09:09:30 controller-02 glance-api[23536]: 
/opt/stack/glance/glance/location.py:189: DeprecationWarning: Using or 
importing the ABCs from 'collections' instead of from 'collections.abc' is 
deprecated, and>
  Mar 22 09:09:30 controller-02 glance-api[23536]:   class 
StoreLocations(collections.MutableSequence):
  Mar 22 09:09:30 controller-02 glance-api[23536]: 
/opt/stack/glance/glance/api/common.py:115: DeprecationWarning: invalid escape 
sequence \d
  Mar 22 09:09:30 controller-02 glance-api[23536]:   pattern = 
re.compile('^(\d+)((K|M|G|T)?B)?$')
  
  Mar 22 09:09:30 controller-02 glance-api[23536]:
  /usr/local/lib/python3.7/site-
  packages/os_brick/initiator/linuxrbd.py:24: DeprecationWarning: Using or
  importing the ABCs from 'collections' instead of from 'col>
  
  (Today version)
- py: Python 2.7.15
+ py: Python 3.7.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1821306

Title:
  Using or importing the ABCs from 'collections'  is deprecated

Status in Glance:
  New
Status in os-brick:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1821306/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1820851] [NEW] Please do cell mapping without a periodic task or active admin involvement

2019-03-19 Thread Attila Fazekas
Public bug reported:

Using 'nova-manage cell_v2 discover_hosts' should not be needed;
the compute nodes could ask for registration at startup time
and not repeat it during the n-cpu process lifetime.

Even if you have 30k compute nodes and restart them
every two weeks on average, it would generate
only ~0.03 requests per second on average.
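
The arithmetic behind that estimate, for reference (assuming 30k nodes and one
registration call per node every two weeks):

    nodes = 30000
    period_seconds = 14 * 24 * 3600   # two weeks
    print(nodes / period_seconds)     # ~0.025 requests per second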

I do not see why it is mandatory to use

nova-manage cell_v2 discover_hosts  or 
[scheduler]
discover_hosts_in_cells_interval = 300

even in a small deployment.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1820851

Title:
  Please do cell mapping without a periodic task or active admin
  involvement

Status in OpenStack Compute (nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1820851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1816443] [NEW] ovs agent can fail with oslo_config.cfg.NoSuchOptError

2019-02-18 Thread Attila Fazekas
Public bug reported:

The neutron OVS agent sometimes has this in its log:

The rpc_response_max_timeout option is supposed to have a default value.

I wonder whether the issue is related to https://bugs.launchpad.net/cinder/+bug/1796759,
where the oslo.messaging change affected two other components.
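
For context, the option the traceback complains about is normally registered
through oslo.config before the agent reads it; a minimal sketch of that
pattern follows (illustrative values, not neutron's exact code):

    from oslo_config import cfg

    rpc_opts = [
        cfg.IntOpt('rpc_response_max_timeout',
                   default=600,
                   help='Maximum seconds to wait for a response from an '
                        'RPC call.'),
    ]
    cfg.CONF.register_opts(rpc_opts)

    # Reading the option before register_opts() has run is exactly what
    # produces NoSuchOptError, as in the log below.
    timeout = cfg.CONF.rpc_response_max_timeout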


Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.143 30426 ERROR neutron.agent.common.async_process [-] Error received 
from [ovsdb-client monitor tcp:127.0.0.1:6640 Interface nam>
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 CRITICAL neutron [-] Unhandled error: 
oslo_config.cfg.NoSuchOptError: no such option rpc_response_max_timeout in 
group >
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron Traceback (most recent call last):
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron   File 
"/usr/local/lib/python3.7/site-packages/oslo_config/cfg.py", line 2183, in 
__getattr__
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron return self._get(name)
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron   File 
"/usr/local/lib/python3.7/site-packages/oslo_config/cfg.py", line 2617, in _get
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron value, loc = self._do_get(name, group, 
namespace)
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron   File 
"/usr/local/lib/python3.7/site-packages/oslo_config/cfg.py", line 2635, in 
_do_get
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron info = self._get_opt_info(name, group)
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron   File 
"/usr/local/lib/python3.7/site-packages/oslo_config/cfg.py", line 2835, in 
_get_opt_info
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron raise NoSuchOptError(opt_name, group)
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron oslo_config.cfg.NoSuchOptError: no such option 
rpc_response_max_timeout in group [DEFAULT]
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron


Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron During handling of the above exception, 
another exception occurred:
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron Traceback (most recent call last):
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron   File 
"/usr/local/bin/neutron-openvswitch-agent", line 10, in 
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron sys.exit(main())
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron   File 
"/opt/stack/neutron/neutron/cmd/eventlet/plugins/ovs_neutron_agent.py", line 
20, in main
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron agent_main.main()
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/main.py", 
line 47, in main
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron mod.main()
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py",
 line 3>
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron 
'neutron.plugins.ml2.drivers.openvswitch.agent.'
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron   File 
"/usr/local/lib/python3.7/site-packages/os_ken/base/app_manager.py", line 370, 
in run_apps
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron hub.joinall(services)
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron   File 
"/usr/local/lib/python3.7/site-packages/os_ken/lib/hub.py", line 102, in joinall
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron t.wait()
Feb 18 14:33:12 f29-dev-02 

[Yahoo-eng-team] [Bug 1728600] Re: Test test_network_basic_ops fails from time to time, port doesn't become ACTIVE quickly

2019-01-21 Thread Attila Fazekas
Nova is expected to wait for all connected ports to become active on instance
creation before reporting the instance as active.
No additional user/tempest wait should be required, and no random port status
flipping is allowed.
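
For reference, the expectation can be written as a simple poll over the
server's ports (an illustrative sketch, not nova's or tempest's actual code;
ports_client stands in for a neutron ports client). The point of the comment
above is that nova itself should satisfy this before reporting ACTIVE, so
callers would never have to run such a loop.

    import time

    def wait_for_ports_active(ports_client, server_id, timeout=60, interval=2):
        # Poll until every port bound to the server reports status ACTIVE.
        deadline = time.time() + timeout
        while time.time() < deadline:
            ports = ports_client.list_ports(device_id=server_id)['ports']
            if ports and all(p['status'] == 'ACTIVE' for p in ports):
                return ports
            time.sleep(interval)
        raise AssertionError('ports of %s did not become ACTIVE' % server_id)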

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1728600

Title:
  Test test_network_basic_ops fails from time to time, port doesn't become
  ACTIVE quickly

Status in OpenStack Compute (nova):
  New
Status in tempest:
  In Progress

Bug description:
  Test test_network_basic_ops fails from time to time, port doesn't become
  ACTIVE quickly

  Trace:
  Traceback (most recent call last):
File "tempest/scenario/test_security_groups_basic_ops.py", line 185, in 
setUp
  self._deploy_tenant(self.primary_tenant)
File "tempest/scenario/test_security_groups_basic_ops.py", line 349, in 
_deploy_tenant
  self._set_access_point(tenant)
File "tempest/scenario/test_security_groups_basic_ops.py", line 316, in 
_set_access_point
  self._assign_floating_ips(tenant, server)
File "tempest/scenario/test_security_groups_basic_ops.py", line 322, in 
_assign_floating_ips
  client=tenant.manager.floating_ips_client)
File "tempest/scenario/manager.py", line 836, in create_floating_ip
  port_id, ip4 = self._get_server_port_id_and_ip4(thing)
File "tempest/scenario/manager.py", line 814, in _get_server_port_id_and_ip4
  "No IPv4 addresses found in: %s" % ports)
File "/usr/local/lib/python2.7/dist-packages/unittest2/case.py", line 845, 
in assertNotEqual
  raise self.failureException(msg)
  AssertionError: 0 == 0 : No IPv4 addresses found in: 
[{u'allowed_address_pairs': [], u'extra_dhcp_opts': [], u'updated_at': 
u'2017-10-30T10:04:41Z', u'device_owner': u'compute:None', u'revision_number': 
9, u'port_security_enabled': True, u'binding:profile': {}, u'fixed_ips': 
[{u'subnet_id': u'd522b2e5-7e56-4d08-843c-c434c3c2af97', u'ip_address': 
u'10.100.0.12'}], u'id': u'20d59775-906d-4390-b193-a8ec81817ddb', 
u'security_groups': [u'908eb03d-2477-49ab-ab9a-fcfae454', 
u'cf62ee1b-eb73-44d0-9ad8-65bb32885505'], u'binding:vif_details': 
{u'port_filter': True, u'ovs_hybrid_plug': True}, u'binding:vif_type': u'ovs', 
u'mac_address': u'fa:16:3e:02:f3:e8', u'project_id': 
u'0a8532fba2194d32996c3ba46ae35c96', u'status': u'BUILD', u'binding:host_id': 
u'cfg01', u'description': u'', u'tags': [], u'device_id': 
u'5ad8f2be-3cbb-49aa-8d72-e81ca6789665', u'name': u'', u'admin_state_up': True, 
u'network_id': u'49491fd4-2c1e-4c46-8166-b4648eb75f84', u'tenant_id': 
u'0a8532fba2194d32996c3ba46ae35c96', u'created_at': u'2017-10-30T10:04:37Z', 
u'binding:vnic_type': u'normal'}]

  Ran 1 test in 25.096s

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1728600/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1801919] Re: brctl is obsolete, use ip

2018-11-06 Thread Attila Fazekas
Adding nova: nova is also using brctl, for example in the
NeutronLinuxBridgeInterfaceDriver.

Yes, pyroute2 might be an alternative to the ip commands as well, however the
bridge create/enslave part is not its best documented area.
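
A minimal sketch of the iproute2 equivalents of the brctl calls in question
(illustrative; the interface names are placeholders, and pyroute2 could
replace the subprocess calls):

    import subprocess

    def add_bridge(bridge):
        # brctl addbr <bridge>
        subprocess.check_call(['ip', 'link', 'add', 'name', bridge,
                               'type', 'bridge'])
        subprocess.check_call(['ip', 'link', 'set', bridge, 'up'])

    def add_bridge_port(bridge, dev):
        # brctl addif <bridge> <dev>
        subprocess.check_call(['ip', 'link', 'set', dev, 'master', bridge])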


** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1801919

Title:
  brctl is obsolete, use ip

Status in neutron:
  Confirmed
Status in OpenStack Compute (nova):
  New

Bug description:
  bridge-utils (brctl) is obsolete, no modern software should depend on it.
  Used in: neutron/agent/linux/bridge_lib.py

  http://man7.org/linux/man-pages/man8/brctl.8.html

  Please use `ip` for basic bridge operations,
  then we can drop one obsolete dependency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1801919/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1801919] [NEW] brctl is obsolete, use ip

2018-11-06 Thread Attila Fazekas
Public bug reported:

bridge-utils (brctl) is obsolete, no modern software should depend on it.
Used in: neutron/agent/linux/bridge_lib.py

http://man7.org/linux/man-pages/man8/brctl.8.html

Please use `ip` for basic bridge operations,
then we can drop one obsolete dependency.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1801919

Title:
  brctl is obsolete, use ip

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1801919/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1788403] Re: test_server_connectivity_cold_migration_revert randomly fails ssh check

2018-09-18 Thread Attila Fazekas
Searching for keywords in the neutron change log:
https://bugs.launchpad.net/neutron/+bug/1757089

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1788403

Title:
  test_server_connectivity_cold_migration_revert randomly fails ssh
  check

Status in neutron:
  New
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Seen here:

  http://logs.openstack.org/98/591898/3/check/tempest-slow/c480e82/job-
  output.txt.gz#_2018-08-21_23_20_11_337095

  2018-08-21 23:20:11.337095 | controller | {0} 
tempest.scenario.test_network_advanced_server_ops.TestNetworkAdvancedServerOps.test_server_connectivity_cold_migration_revert
 [200.028926s] ... FAILED
  2018-08-21 23:20:11.337187 | controller |
  2018-08-21 23:20:11.337260 | controller | Captured traceback:
  2018-08-21 23:20:11.337329 | controller | ~~~
  2018-08-21 23:20:11.337435 | controller | Traceback (most recent call 
last):
  2018-08-21 23:20:11.337591 | controller |   File 
"tempest/common/utils/__init__.py", line 89, in wrapper
  2018-08-21 23:20:11.337702 | controller | return f(*func_args, 
**func_kwargs)
  2018-08-21 23:20:11.338012 | controller |   File 
"tempest/scenario/test_network_advanced_server_ops.py", line 258, in 
test_server_connectivity_cold_migration_revert
  2018-08-21 23:20:11.338175 | controller | server, keypair, 
floating_ip)
  2018-08-21 23:20:11.338571 | controller |   File 
"tempest/scenario/test_network_advanced_server_ops.py", line 103, in 
_wait_server_status_and_check_network_connectivity
  2018-08-21 23:20:11.338766 | controller | 
self._check_network_connectivity(server, keypair, floating_ip)
  2018-08-21 23:20:11.339004 | controller |   File 
"tempest/scenario/test_network_advanced_server_ops.py", line 96, in 
_check_network_connectivity
  2018-08-21 23:20:11.339069 | controller | server)
  2018-08-21 23:20:11.339251 | controller |   File 
"tempest/scenario/manager.py", line 622, in check_vm_connectivity
  2018-08-21 23:20:11.339314 | controller | msg=msg)
  2018-08-21 23:20:11.339572 | controller |   File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/unittest2/case.py",
 line 702, in assertTrue
  2018-08-21 23:20:11.339683 | controller | raise 
self.failureException(msg)
  2018-08-21 23:20:11.339862 | controller | AssertionError: False is not 
true : Public network connectivity check failed
  2018-08-21 23:20:11.34 | controller | Timed out waiting for 
172.24.5.13 to become reachable

  The test is pretty simple:

  @decorators.idempotent_id('25b188d7-0183-4b1e-a11d-15840c8e2fd6')
  @testtools.skipUnless(CONF.compute_feature_enabled.cold_migration,
'Cold migration is not available.')
  @testtools.skipUnless(CONF.compute.min_compute_nodes > 1,
'Less than 2 compute nodes, skipping multinode '
'tests.')
  @decorators.attr(type='slow')
  @utils.services('compute', 'network')
  def test_server_connectivity_cold_migration_revert(self):
  keypair = self.create_keypair()
  server = self._setup_server(keypair)
  floating_ip = self._setup_network(server, keypair)
  src_host = self._get_host_for_server(server['id'])
  self._wait_server_status_and_check_network_connectivity(
  server, keypair, floating_ip)

  self.admin_servers_client.migrate_server(server['id'])
  waiters.wait_for_server_status(self.servers_client, server['id'],
 'VERIFY_RESIZE')
  self.servers_client.revert_resize_server(server['id'])
  self._wait_server_status_and_check_network_connectivity(
  server, keypair, floating_ip)
  dst_host = self._get_host_for_server(server['id'])

  self.assertEqual(src_host, dst_host)

  It creates a server, resizes it, reverts the resize and then tries to
  ssh into the guest, which times out. I wonder if on the resize (or
  revert) we're losing the IP or failing to plug it properly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1788403/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1690374] Re: remotefs fails to make Nova-assisted snapshot

2018-09-10 Thread Attila Fazekas
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1690374

Title:
  remotefs fails to make Nova-assisted snapshot

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Tempest tests creating snapshots fail in Vzstorage CI
  sample run and configuration: 
http://openstack-3rd-party-storage-ci-logs.virtuozzo.com/58/430858/5/check/dsvm-tempest-kvm/d959ccb

  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: DEBUG 
novaclient.v2.client [req-916f4da9-661e-441c-8efd-782d410cd1ce 
tempest-VolumesSnapshotTestJSON-2088544688 None] RESP: [403] 
Openstack-Api-Version: compute 2.1 X-Openstack-Nova-Api-Version: 2.1 Vary: 
OpenStack-API-Version, X-OpenStack-Nova-API-Version Content-Type: 
application/json; charset=UTF-8 Content-Length: 131 X-Compute-Request-Id: 
req-afc0744f-7721-4ded-b8fa-4eedcae8d10b Date: Fri, 12 May 2017 08:38:05 GMT 
Connection: keep-alive
  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: RESP BODY: 
{"forbidden": {"message": "Policy doesn't allow 
os_compute_api:os-assisted-volume-snapshots:create to be performed.", "code": 
403}}
  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: {{(pid=57716) 
_http_log_response 
/usr/lib/python2.7/site-packages/keystoneauth1/session.py:395}}
  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: DEBUG 
novaclient.v2.client [req-916f4da9-661e-441c-8efd-782d410cd1ce 
tempest-VolumesSnapshotTestJSON-2088544688 None] POST call to compute for 
http://10.161.193.63:8774/v2.1/os-assisted-volume-snapshots used request id 
req-afc0744f-7721-4ded-b8fa-4eedcae8d10b {{(pid=57716) request 
/usr/lib/python2.7/site-packages/keystoneauth1/session.py:640}}
  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: ERROR 
cinder.volume.drivers.remotefs [req-916f4da9-661e-441c-8efd-782d410cd1ce 
tempest-VolumesSnapshotTestJSON-2088544688 None] Call to Nova to create 
snapshot failed
  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: ERROR 
cinder.volume.drivers.remotefs Traceback (most recent call last):
  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: ERROR 
cinder.volume.drivers.remotefs   File 
"/opt/stack/new/cinder/cinder/volume/drivers/remotefs.py", line 1374, in 
_create_snapshot_online
  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: ERROR 
cinder.volume.drivers.remotefs connection_info)
  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: ERROR 
cinder.volume.drivers.remotefs   File 
"/opt/stack/new/cinder/cinder/compute/nova.py", line 168, in 
create_volume_snapshot
  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: ERROR 
cinder.volume.drivers.remotefs create_info=create_info)
  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: ERROR 
cinder.volume.drivers.remotefs   File 
"/usr/lib/python2.7/site-packages/novaclient/v2/assisted_volume_snapshots.py", 
line 43, in create
  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: ERROR 
cinder.volume.drivers.remotefs return 
self._create('/os-assisted-volume-snapshots', body, 'snapshot')
  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: ERROR 
cinder.volume.drivers.remotefs   File 
"/usr/lib/python2.7/site-packages/novaclient/base.py", line 361, in _create
  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: ERROR 
cinder.volume.drivers.remotefs resp, body = self.api.client.post(url, 
body=body)
  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: ERROR 
cinder.volume.drivers.remotefs   File 
"/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 229, in post
  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: ERROR 
cinder.volume.drivers.remotefs return self.request(url, 'POST', **kwargs)
  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: ERROR 
cinder.volume.drivers.remotefs   File 
"/usr/lib/python2.7/site-packages/novaclient/client.py", line 80, in request
  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: ERROR 
cinder.volume.drivers.remotefs raise exceptions.from_response(resp, body, 
url, method)
  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: ERROR 
cinder.volume.drivers.remotefs Forbidden: Policy doesn't allow 
os_compute_api:os-assisted-volume-snapshots:create to be performed. (HTTP 403) 
(Request-ID: req-afc0744f-7721-4ded-b8fa-4eedcae8d10b)
  May 12 11:38:05 host-10-161-193-63 cinder-volume[57523]: ERROR 
cinder.volume.drivers.remotefs

  Nova-api:
  May 12 11:38:05 host-10-161-193-63 nova-api[50095]: DEBUG nova.policy 
[req-afc0744f-7721-4ded-b8fa-4eedcae8d10b 
tempest-VolumesSnapshotTestJSON-2088544688 
tempest-VolumesSnapshotTestJSON-2088544688] Policy check for 
os_compute_api:os-assisted-volume-snapshots:create failed with credentials 
{'service_roles': [], 'user_id': u'228fd60b54e54f959d36fab497920e50', 'roles': 

[Yahoo-eng-team] [Bug 1744103] [NEW] nova interface-attach 500 on conflict

2018-01-18 Thread Attila Fazekas
Public bug reported:

Nova returns a 5xx response instead of a 4xx in case of a user error.

In this case it is clearly a user issue: the user can know that the 10.0.0.3 IP
address is already allocated from the subnet and cannot be allocated
twice.

Nova must return a 409/Conflict status code and state the problem to
the user, as neutron did to nova.
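
A minimal sketch of the requested behaviour at the API layer (illustrative
only, not nova's actual code; the exception name and the do_attach callable
are made up): translate the fixed-IP-already-allocated failure reported by
neutron into a 409 instead of letting it escape as an unexpected 500.

    from webob import exc

    class FixedIpAlreadyInUse(Exception):
        """Placeholder for the conflict error nova gets back from neutron."""

    def attach_interface(do_attach, req, server_id, body):
        # do_attach stands in for the existing attach logic.
        try:
            return do_attach(req, server_id, body)
        except FixedIpAlreadyInUse as e:
            raise exc.HTTPConflict(explanation=str(e))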

$ nova boot --image cirros-0.3.5-x86_64-disk --flavor 42 --nic net-
id=ef952752-b81c-478e-a114-04083c63827c test

$ nova list
+--+--+++-++
| ID   | Name | Status | Task State | Power 
State | Networks   |
+--+--+++-++
| 7a453305-2684-4684-8005-04a98aebfc7e | test | ACTIVE | -  | Running   
  | private=fd64:8b83:fea2:0:f816:3eff:fe5f:3da3, 10.0.0.3 |
+--+--+++-++

$ nova interface-attach  7a453305-2684-4684-8005-04a98aebfc7e 
--net-id=ef952752-b81c-478e-a114-04083c63827c --fixed-ip 10.0.0.3
ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-64d655fe-0564-46ec-85c2-c982d34f796c)

Jan 18 15:21:29 afazekas-1516283855.localdomain devstack@n-api.service[15371]: 
DEBUG nova.api.openstack.wsgi [None req-64d655fe-0564-46ec-85c2-c982d34f796c 
demo demo] Action: 'create', calling method: 
Jan 18 15:21:30 afazekas-1516283855.localdomain devstack@n-api.service[15371]: 
DEBUG nova.api.openstack.wsgi [None req-64d655fe-0564-46ec-85c2-c982d34f796c 
demo demo] Returning 500 to user: Unexpected API Error.
Jan 18 15:21:30 afazekas-1516283855.localdomain devstack@n-api.service[15371]: 
 {{(pid=15372) __call__ 
/opt/stack/nova/nova/api/openstack/wsgi.py:1079}}

Tested on Fedora 27 with Jan 18 2018 sources; the issue is reproducible on
older versions (Pike) as well.

** Affects: nova
 Importance: Undecided
 Status: New


-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1744103

Title:
  nova interface-attach  500 on conflict

Status in OpenStack Compute (nova):
  New


[Yahoo-eng-team] [Bug 1701541] [NEW] Keystone v3/roles has different response for HEAD and GET (again)

2017-06-30 Thread Attila Fazekas
Public bug reported:

The issue is very similar to the one already discussed at 
https://bugs.launchpad.net/keystone/+bug/1334368 , 
http://lists.openstack.org/pipermail/openstack-dev/2014-July/039140.html .

# curl -v -X HEAD  
http://172.17.1.18:5000/v3/roles/7acb026c29a24fb2a1d92a4e5291de24/implies/11b21cc37d7644c8bc955ff956b2d56e
 -H "Content-Type: application/json" -H "X-Auth-Token: 
gABZViMqU8rSuv7qlmcUlv1hYHegvN6EelqJPt-MTWBkIOewhSjNeiwZcksDUKm2JOfNtw78iAAmscx86N9UiekxkluvzRpatFyWooOkCATkqJFn4HgCFr_an9X7kmOhJTOguqGH6uCYz4K6ak1NfuEvtRShe3lDXyScL51JaZqtw8bCWzo"
* About to connect() to 172.17.1.18 port 5000 (#0)
*   Trying 172.17.1.18...
* Connected to 172.17.1.18 (172.17.1.18) port 5000 (#0)
> HEAD 
> /v3/roles/7acb026c29a24fb2a1d92a4e5291de24/implies/11b21cc37d7644c8bc955ff956b2d56e
>  HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 172.17.1.18:5000
> Accept: */*
> Content-Type: application/json
> X-Auth-Token: 
> gABZViMqU8rSuv7qlmcUlv1hYHegvN6EelqJPt-MTWBkIOewhSjNeiwZcksDUKm2JOfNtw78iAAmscx86N9UiekxkluvzRpatFyWooOkCATkqJFn4HgCFr_an9X7kmOhJTOguqGH6uCYz4K6ak1NfuEvtRShe3lDXyScL51JaZqtw8bCWzo
> 
< HTTP/1.1 204 No Content
< Date: Fri, 30 Jun 2017 10:09:30 GMT
< Server: Apache
< Vary: X-Auth-Token
< x-openstack-request-id: req-e64410ae-5d4a-48f7-8508-615752877277
< Content-Type: text/plain
< 
* Connection #0 to host 172.17.1.18 left intact

# curl -v -X GET  
http://172.17.1.18:5000/v3/roles/7acb026c29a24fb2a1d92a4e5291de24/implies/11b21cc37d7644c8bc955ff956b2d56e
 -H "Content-Type: application/json" -H "X-Auth-Token: 
gABZViMqU8rSuv7qlmcUlv1hYHegvN6EelqJPt-MTWBkIOewhSjNeiwZcksDUKm2JOfNtw78iAAmscx86N9UiekxkluvzRpatFyWooOkCATkqJFn4HgCFr_an9X7kmOhJTOguqGH6uCYz4K6ak1NfuEvtRShe3lDXyScL51JaZqtw8bCWzo"
* About to connect() to 172.17.1.18 port 5000 (#0)
*   Trying 172.17.1.18...
* Connected to 172.17.1.18 (172.17.1.18) port 5000 (#0)
> GET 
> /v3/roles/7acb026c29a24fb2a1d92a4e5291de24/implies/11b21cc37d7644c8bc955ff956b2d56e
>  HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 172.17.1.18:5000
> Accept: */*
> Content-Type: application/json
> X-Auth-Token: 
> gABZViMqU8rSuv7qlmcUlv1hYHegvN6EelqJPt-MTWBkIOewhSjNeiwZcksDUKm2JOfNtw78iAAmscx86N9UiekxkluvzRpatFyWooOkCATkqJFn4HgCFr_an9X7kmOhJTOguqGH6uCYz4K6ak1NfuEvtRShe3lDXyScL51JaZqtw8bCWzo
> 
< HTTP/1.1 200 OK
< Date: Fri, 30 Jun 2017 10:09:38 GMT
< Server: Apache
< Content-Length: 507
< Vary: X-Auth-Token,Accept-Encoding
< x-openstack-request-id: req-cc320571-a59d-4ea2-b459-117053367c55
< Content-Type: application/json
< 
* Connection #0 to host 172.17.1.18 left intact
{"role_inference": {"implies": {"id": "11b21cc37d7644c8bc955ff956b2d56e", 
"links": {"self": 
"http://172.17.1.18:5000/v3/roles/11b21cc37d7644c8bc955ff956b2d56e"}, "name": 
"tempest-role-1212191884"}, "prior_role": {"id": 
"7acb026c29a24fb2a1d92a4e5291de24", "links": {"self": 
"http://172.17.1.18:5000/v3/roles/7acb026c29a24fb2a1d92a4e5291de24"}, "name": 
"tempest-role-500046640"}}, "links": {"self": 
"http://172.17.1.18:5000/v3/roles/7acb026c29a24fb2a1d92


Depending on the mod_wsgi version and configuration (WSGIMapHEADToGET, which
requires mod_wsgi >= 4.3.0), mod_wsgi might send GET instead of HEAD in order
to avoid invalid responses being cached in case of an application bug.
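
The same comparison the curl transcripts above show, as a quick client-side
check (a sketch; the URL, role ids and token are placeholders):

    import requests

    url = ('http://172.17.1.18:5000/v3/roles/<prior_role_id>'
           '/implies/<implied_role_id>')
    headers = {'X-Auth-Token': '<token>'}

    head = requests.head(url, headers=headers)
    get = requests.get(url, headers=headers)
    # 204 vs 200 when mod_wsgi does not map HEAD to GET.
    print(head.status_code, get.status_code)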

Unfortunately tempest expects the wrong behavior, so it also needs to be
changed:

tempest.api.identity.admin.v3.test_roles.RolesV3TestJSON.test_implied_roles_create_check_show_delete[id-c90c316c-d706-4728-bcba-eb1912081b69]
-

Captured traceback:
~~~
Traceback (most recent call last):
  File 
"/usr/lib/python2.7/site-packages/tempest/api/identity/admin/v3/test_roles.py", 
line 228, in test_implied_roles_create_check_show_delete
prior_role_id, implies_role_id)
  File 
"/usr/lib/python2.7/site-packages/tempest/lib/services/identity/v3/roles_client.py",
 line 233, in check_role_inference_rule
self.expected_success(204, resp.status)
  File 
"/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 252, 
in expected_success
raise exceptions.InvalidHttpSuccessCode(details)
tempest.lib.exceptions.InvalidHttpSuccessCode: The success code is 
different than the expected one
Details: Unexpected http success status code 200, The expected status code 
is 204

** Affects: keystone
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1701541

Title:
  Keystone v3/roles has different response for HEAD and GET (again)

Status in OpenStack Identity (keystone):
  New
Status in tempest:
  New


[Yahoo-eng-team] [Bug 1672988] [NEW] Getting a random string instead of the lowercase name as domain id

2017-03-15 Thread Attila Fazekas
Public bug reported:

The 'Default' domain was created with the 'default' id, but for all other
domains I got some random string as the id.

The domain names are unique and compared in a case-insensitive way,
therefore the lowercase names could be used as the ids.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1672988

Title:
  Getting a random string instead of the lowercase name as domain id

Status in OpenStack Identity (keystone):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1672988/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1643444] [NEW] TenantUsagesTestJSON.test_list_usage_all_tenants 500 from Db layer

2016-11-20 Thread Attila Fazekas
Public bug reported:

I have a Newton setup with 3 API (controller) nodes.

TenantUsagesTestJSON.test_list_usage_all_tenants failed once; the
failure looks similar to the one described in the already fixed
https://bugs.launchpad.net/nova/+bug/1487570, but this is a different API
call, so it can have a similar issue.

Likely the API has an old list of ids and is trying to fetch more info about
an already deleted instance.


The tempest exception:


2016-11-20 00:07:18,606 27600 INFO [tempest.lib.common.rest_client] Request 
(TenantUsagesTestJSON:test_list_usage_all_tenants): 500 GET 
http://[2620:52:0:13b8:5054:ff:fe3e:4]:8774/v2.1/os-simple-tenant-usage?detailed=1=2016-11-19T00%3A07%3A17.645313=2016-11-21T00%3A07%3A17.645313
 0.134s
2016-11-20 00:07:18,607 27600 DEBUG[tempest.lib.common.rest_client] Request 
- Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 
'X-Auth-Token': ''}
Body: None
Response - Headers: {'status': '500', 'content-length': '205', 
'content-location': 
'http://[2620:52:0:13b8:5054:ff:fe3e:4]:8774/v2.1/os-simple-tenant-usage?detailed=1=2016-11-19T00%3A07%3A17.645313=2016-11-21T00%3A07%3A17.645313',
 'x-compute-request-id': 'req-3ff84c48-b03e-4f23-8f33-227719a0ced4', 'vary': 
'X-OpenStack-Nova-API-Version', 'openstack-api-version': 'compute 2.1', 
'connection': 'close', 'x-openstack-nova-api-version': '2.1', 'date': 'Sun, 20 
Nov 2016 05:07:18 GMT', 'content-type': 'application/json; charset=UTF-8'}
Body: {"computeFault": {"message": "Unexpected API Error. Please report 
this at http://bugs.launchpad.net/nova/ and attach the Nova API log if 
possible.\n", "code": 500}}
  File 
"/home/stack/tempest-dir/tempest/api/compute/admin/test_simple_tenant_usage.py",
 line 73, in test_list_usage_all_tenants
start=self.start, end=self.end, detailed="1")['tenant_usages'][0]
  File 
"/home/stack/tempest-dir/tempest/api/compute/admin/test_simple_tenant_usage.py",
 line 63, in call_until_valid
self.assertEqual(test_utils.call_until_true(is_valid, duration, 1),
  File "/home/stack/tempest-dir/tempest/lib/common/utils/test_utils.py", line 
103, in call_until_true
if func():
  File 
"/home/stack/tempest-dir/tempest/api/compute/admin/test_simple_tenant_usage.py",
 line 59, in is_valid
self.resp = func(*args, **kwargs)
  File 
"/home/stack/tempest-dir/tempest/lib/services/compute/tenant_usages_client.py", 
line 37, in list_tenant_usages
resp, body = self.get(url)
  File "/home/stack/tempest-dir/tempest/lib/common/rest_client.py", line 291, 
in get
return self.request('GET', url, extra_headers, headers)
  File 
"/home/stack/tempest-dir/tempest/lib/services/compute/base_compute_client.py", 
line 48, in request
method, url, extra_headers, headers, body, chunked)
  File "/home/stack/tempest-dir/tempest/lib/common/rest_client.py", line 664, 
in request
self._error_checker(resp, resp_body)
  File "/home/stack/tempest-dir/tempest/lib/common/rest_client.py", line 827, 
in _error_checker
message=message)
tempest.lib.exceptions.ServerFault: Got server fault
Details: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.



The related nova api log (node-2):
2016-11-20 05:07:18.476 111884 DEBUG nova.api.openstack.wsgi 
[req-3ff84c48-b03e-4f23-8f33-227719a0ced4 d4852c5eaf2645e2aab0c7485939395a 
998c56750d4a4056853829f088ce2be9 - default default] Calling method '>' _process_stack 
/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:636
2016-11-20 05:07:18.507 111884 DEBUG nova.objects.instance 
[req-3ff84c48-b03e-4f23-8f33-227719a0ced4 d4852c5eaf2645e2aab0c7485939395a 
998c56750d4a4056853829f088ce2be9 - default default] Lazy-loading 'flavor' on 
Instance uuid 5f3a04c2-ab22-4378-9512-bfd4f9fb0a52 obj_load_attr 
/usr/lib/python2.7/site-packages/nova/objects/instance.py:1013
2016-11-20 05:07:18.556 111884 DEBUG nova.objects.instance 
[req-3ff84c48-b03e-4f23-8f33-227719a0ced4 d4852c5eaf2645e2aab0c7485939395a 
998c56750d4a4056853829f088ce2be9 - default default] Lazy-loading 'flavor' on 
Instance uuid 5f3a04c2-ab22-4378-9512-bfd4f9fb0a52 obj_load_attr 
/usr/lib/python2.7/site-packages/nova/objects/instance.py:1013
2016-11-20 05:07:18.597 111884 ERROR nova.api.openstack.extensions 
[req-3ff84c48-b03e-4f23-8f33-227719a0ced4 d4852c5eaf2645e2aab0c7485939395a 
998c56750d4a4056853829f088ce2be9 - default default] Unexpected exception in API 
method
2016-11-20 05:07:18.597 111884 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2016-11-20 05:07:18.597 111884 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 338, 
in wrapped
2016-11-20 05:07:18.597 111884 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2016-11-20 05:07:18.597 111884 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/simple_tenant_usage.py",
 line 238, in index

[Yahoo-eng-team] [Bug 1630899] [NEW] MySQL 1305 errors handled differently with MySQL-Python

2016-10-06 Thread Attila Fazekas
Public bug reported:

The following check, https://review.openstack.org/#/c/326927/6/neutron/db/api.py,
does not work when I am using:
MySQL-python (1.2.5)
oslo.db (4.13.3)
SQLAlchemy (1.1.0)

2016-10-06 04:39:20.674 16262 ERROR neutron.api.v2.resource OperationalError: 
(_mysql_exceptions.OperationalError) (1305, 'SAVEPOINT sa_savepoint_1 does not 
exist') [SQL: u'ROLLBACK TO SAVEPOINT sa_savepoint_1']
2016-10-06 04:39:20.674 16262 ERROR neutron.api.v2.resource 

This appears in the log and is not caught by is_retriable, because it fails
the _is_nested_instance(e, db_exc.DBError) check. The exception's type is
_mysql_exceptions.OperationalError.


I did not use the '+pymysql' driver suffix, so it is the old MySQL-python driver.
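
A small sketch of why the check misses this error (illustrative; it assumes
the MySQL-python driver is importable): the raw driver exception is not an
oslo.db DBError subclass, so an isinstance(e, db_exc.DBError)-style test
returns False and the operation is treated as non-retriable.

    import _mysql_exceptions                      # MySQL-python driver
    from oslo_db import exception as db_exc

    e = _mysql_exceptions.OperationalError(
        1305, 'SAVEPOINT sa_savepoint_1 does not exist')
    print(isinstance(e, db_exc.DBError))          # False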

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: oslo.db
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630899

Title:
  MySQL 1305 errors handled differently with MySQL-Python

Status in neutron:
  New
Status in oslo.db:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1630899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1630161] [NEW] nova image-list is deprecated, but it should work even now

2016-10-04 Thread Attila Fazekas
Public bug reported:

On Newton it looks like:
$ nova image-list
WARNING: Command image-list is deprecated and will be removed after Nova 15.0.0 
is released. Use python-glanceclient or openstackclient instead.
ERROR (VersionNotFoundForAPIMethod): API version 'API Version Major: 2, Minor: 
37' is not supported on 'list' method.

It is supposed to still be supported, since Newton is just 14.


nova (14.0.0.0rc2.dev21)
python-novaclient (6.0.0)
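
A possible client-side workaround while this is sorted out (a sketch under the
assumption that an authenticated keystoneauth1 session named "sess" already
exists): pin a pre-2.36 compute microversion so the deprecated image proxy is
still reachable instead of raising VersionNotFoundForAPIMethod.

    from novaclient import client

    # 2.35 is the last microversion before the image proxy APIs were capped.
    nova = client.Client('2.35', session=sess)    # "sess" is an assumption
    print(nova.images.list())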

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: python-novaclient
 Importance: Undecided
 Status: New

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1630161

Title:
  nova image-list is deprecated, but it should work even now

Status in OpenStack Compute (nova):
  New
Status in python-novaclient:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1630161/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603038] [NEW] Exception on admin_token usage: ValueError: Unrecognized

2016-07-14 Thread Attila Fazekas
Public bug reported:

1. iniset keystone.conf DEFAULT admin_token deprecated
2. reload keystone (systemctl restart httpd)
3. curl -g -i -X GET http://192.168.9.98/identity_v2_admin/v2.0/users -H 
"User-Agent: python-keystoneclient" -H "Accept: application/json" -H 
"X-Auth-Token: deprecated"


I know the admin_token is deprecated, but it should be handled without
throwing an extra exception.


2016-07-14 11:00:28.487 20453 WARNING keystone.middleware.core 
[req-f13bf34e-4b80-4c2b-8e47-646ce5665abf - - - - -] The admin_token_auth 
middleware presents a security risk and should be removed from the 
[pipeline:api_v3], [pipeline:admin_api], and [pipeline:public_api] sections of 
your paste ini file.
2016-07-14 11:00:28.593 20453 DEBUG keystone.middleware.auth 
[req-f13bf34e-4b80-4c2b-8e47-646ce5665abf - - - - -] Authenticating user token 
process_request 
/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py:354
2016-07-14 11:00:28.593 20453 WARNING keystone.middleware.auth 
[req-f13bf34e-4b80-4c2b-8e47-646ce5665abf - - - - -] Invalid token contents.
2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth Traceback (most 
recent call last):
2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth   File 
"/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", 
line 399, in _do_fetch_token
2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth return data, 
access.create(body=data, auth_token=token)
2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth   File 
"/usr/lib/python2.7/site-packages/positional/__init__.py", line 101, in inner
2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth return 
wrapped(*args, **kwargs)
2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth   File 
"/usr/lib/python2.7/site-packages/keystoneauth1/access/access.py", line 49, in 
create
2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth raise 
ValueError('Unrecognized auth response')
2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth ValueError: 
Unrecognized auth response
2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth 
2016-07-14 11:00:28.594 20453 INFO keystone.middleware.auth 
[req-f13bf34e-4b80-4c2b-8e47-646ce5665abf - - - - -] Invalid user token
2016-07-14 11:00:28.595 20453 DEBUG keystone.middleware.auth 
[req-d1c79cbf-698f-4844-9efd-7be444040cf0 - - - - -] RBAC: auth_context: {} 
fill_context /opt/stack/keystone/keystone/middleware/auth.py:219
2016-07-14 11:00:28.604 20453 INFO keystone.common.wsgi 
[req-d1c79cbf-698f-4844-9efd-7be444040cf0 - - - - -] GET 
http://192.168.9.98/identity_v2_admin/v2.0/users
2016-07-14 11:00:28.604 20453 WARNING oslo_log.versionutils 
[req-d1c79cbf-698f-4844-9efd-7be444040cf0 - - - - -] Deprecated: get_users of 
the v2 API is deprecated as of Mitaka in favor of a similar function in the v3 
API and may be removed in Q.
2016-07-14 11:00:28.622 20453 DEBUG oslo_db.sqlalchemy.engines 
[req-d1c79cbf-698f-4844-9efd-7be444040cf0 - - - - -] MySQL server mode set to 
STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
 _check_effective_sql_mode 
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:256

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1603038

Title:
  Exception on admin_token usage ValueError: Unrecognized

Status in OpenStack Identity (keystone):
  New

Bug description:
  1. iniset keystone.conf DEFAULT admin_token deprecated
  2. reload keystone (systemctl restart httpd)
  3. curl -g -i -X GET http://192.168.9.98/identity_v2_admin/v2.0/users -H 
"User-Agent: python-keystoneclient" -H "Accept: application/json" -H 
"X-Auth-Token: deprecated"


  I know the admin_token is deprecated, but it should be handled without
  throwing an extra exception.


  2016-07-14 11:00:28.487 20453 WARNING keystone.middleware.core 
[req-f13bf34e-4b80-4c2b-8e47-646ce5665abf - - - - -] The admin_token_auth 
middleware presents a security risk and should be removed from the 
[pipeline:api_v3], [pipeline:admin_api], and [pipeline:public_api] sections of 
your paste ini file.
  2016-07-14 11:00:28.593 20453 DEBUG keystone.middleware.auth 
[req-f13bf34e-4b80-4c2b-8e47-646ce5665abf - - - - -] Authenticating user token 
process_request 
/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py:354
  2016-07-14 11:00:28.593 20453 WARNING keystone.middleware.auth 
[req-f13bf34e-4b80-4c2b-8e47-646ce5665abf - - - - -] Invalid token contents.
  2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth Traceback (most 
recent call last):
  2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth   File 

[Yahoo-eng-team] [Bug 1555019] [NEW] glance image download does not decompress the content as curl does

2016-03-09 Thread Attila Fazekas
Public bug reported:

When you download this image with curl, you get a qcow2 file, NOT a gz file.
https://fedorapeople.org/groups/magnum/fedora-21-atomic-5.qcow2

When you use 
glance --os-image-api-version=1 image-create --copy-from 
https://fedorapeople.org/groups/magnum/fedora-21-atomic-5.qcow2  
--container-format bare --disk-format qcow2 --property os_distro=fedora-atomic 
--property os_type=linux --min-disk 8 --min-ram 512 --name fedora-21-atomic-5 
--is-public True


It will be stored as a gzip-compressed file, and when you or nova download it, it
will be gzip compressed as well.
glance image-download   --file test.qcow2.gz


glance MUST NOT ask for gzip compression when it is unable to handle it.
glance SHOULD be able to handle compressed content.

Note:
I had rbd backend.

I do not have this issue with the
https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/x86_64/Images
/Fedora-Cloud-Base-23-20151030.x86_64.qcow2  url, because the server
refuses to compress, regardless of what the client requests.
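
A minimal sketch of the client-side pitfall, using python-requests purely for
illustration (it is not necessarily the library glance uses for --copy-from);
the output file name is a placeholder:

    import requests

    IMAGE_URL = 'https://fedorapeople.org/groups/magnum/fedora-21-atomic-5.qcow2'

    # If a client advertises gzip it must also decode the body before storing
    # it, otherwise the compressed bytes end up in the image backend.
    resp = requests.get(IMAGE_URL, stream=True,
                        headers={'Accept-Encoding': 'gzip'})
    with open('fedora-21-atomic-5.qcow2', 'wb') as f:
        # decode_content=True asks urllib3 to gunzip the stream when the
        # server answered with Content-Encoding: gzip.
        for chunk in resp.raw.stream(8192, decode_content=True):
            f.write(chunk)

    # The simpler alternative is to not ask for compression at all:
    #   requests.get(IMAGE_URL, stream=True,
    #                headers={'Accept-Encoding': 'identity'})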

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1555019

Title:
  glance image download does not decompress the content as curl does

Status in Glance:
  New

Bug description:
  When you download this image with curl, you get a qcow2 file, NOT a gz file.
  https://fedorapeople.org/groups/magnum/fedora-21-atomic-5.qcow2

  When you use 
  glance --os-image-api-version=1 image-create --copy-from 
https://fedorapeople.org/groups/magnum/fedora-21-atomic-5.qcow2  
--container-format bare --disk-format qcow2 --property os_distro=fedora-atomic 
--property os_type=linux --min-disk 8 --min-ram 512 --name fedora-21-atomic-5 
--is-public True

  
  It will be stored as a gzip-compressed file, and when you or nova download it,
it will be gzip compressed as well.
  glance image-download   --file test.qcow2.gz

  
  glance MUST NOT ask for gzip compression when it is unable to handle it.
  glance SHOULD be able to handle compressed content.

  Note:
  I had rbd backend.

  I do not have this issue with the
  https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/x86_64/Images
  /Fedora-Cloud-Base-23-20151030.x86_64.qcow2  url, because the server
  refuses to compress, regardless of what the client requests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1555019/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518902] [NEW] revocation list queried twice on token query

2015-11-23 Thread Attila Fazekas
Public bug reported:

curl -H 'Host: 127.0.0.1:35357' -H 'Accept-Encoding: gzip, deflate' -H
'X-Subject-Token: ' -H 'Accept: application/json' -H 'X
-Auth-Token: ' -H 'Connection: keep-alive' -H 'User-Agent:
python-keystoneclient' http://localhost:35357/v3/auth/tokens


Keystone on behalf of the same HTTP request queries the full revocation_event 
table twice.

SELECT revocation_event.id AS revocation_event_id, revocation_event.domain_id 
AS revocation_event_domain_id, revocation_event.project_id AS 
revocation_event_project_id, revocation_event.user_id AS 
revocation_event_user_id, revocation_event.role_id AS revocation_event_role_id, 
revocation_event.trust_id AS revocation_event_trust_id, 
revocation_event.consumer_id AS revocation_event_consumer_id, 
revocation_event.access_token_id AS revocation_event_access_token_id, 
revocation_event.issued_before AS revocation_event_issued_before, 
revocation_event.expires_at AS revocation_event_expires_at, 
revocation_event.revoked_at AS revocation_event_revoked_at, 
revocation_event.audit_id AS revocation_event_audit_id, 
revocation_event.audit_chain_id AS revocation_event_audit_chain_id 
FROM revocation_event ORDER BY revocation_event.revoked_at

The full revocation_event table must not be queried multiple times on behalf of
the same HTTP request; it is a waste of resources.

PS.: It happens also when X-Subject-Token = X-Auth-Token.
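
A minimal sketch of per-request memoization, with hypothetical names
(list_revocation_events stands in for the real DB call); this is only the shape
of the fix, not keystone code:

    class RequestScopedRevocations(object):
        """Cache the revocation event list for the lifetime of one request."""

        def __init__(self, db):
            self.db = db
            self._events = None

        def events(self):
            # Hit the revocation_event table at most once per request.
            if self._events is None:
                self._events = self.db.list_revocation_events()
            return self._events

        def is_revoked(self, token):
            # Both the X-Subject-Token and the X-Auth-Token checks reuse
            # the same cached list instead of issuing a second SELECT.
            return any(event.matches(token) for event in self.events())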

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1518902

Title:
  revocation list queried twice on token query

Status in OpenStack Identity (keystone):
  New

Bug description:
  curl -H 'Host: 127.0.0.1:35357' -H 'Accept-Encoding: gzip, deflate' -H
  'X-Subject-Token: ' -H 'Accept: application/json' -H 'X
  -Auth-Token: ' -H 'Connection: keep-alive' -H 'User-Agent:
  python-keystoneclient' http://localhost:35357/v3/auth/tokens

  
  Keystone on behalf of the same HTTP request queries the full revocation_event 
table twice.

  SELECT revocation_event.id AS revocation_event_id, revocation_event.domain_id 
AS revocation_event_domain_id, revocation_event.project_id AS 
revocation_event_project_id, revocation_event.user_id AS 
revocation_event_user_id, revocation_event.role_id AS revocation_event_role_id, 
revocation_event.trust_id AS revocation_event_trust_id, 
revocation_event.consumer_id AS revocation_event_consumer_id, 
revocation_event.access_token_id AS revocation_event_access_token_id, 
revocation_event.issued_before AS revocation_event_issued_before, 
revocation_event.expires_at AS revocation_event_expires_at, 
revocation_event.revoked_at AS revocation_event_revoked_at, 
revocation_event.audit_id AS revocation_event_audit_id, 
revocation_event.audit_chain_id AS revocation_event_audit_chain_id 
  FROM revocation_event ORDER BY revocation_event.revoked_at

  The full revocation_event table must not be queried multiple times on behalf
of the same HTTP request; it is a waste of resources.

  PS.: It happens also when X-Subject-Token = X-Auth-Token.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1518902/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1484847] [NEW] image_cache_manager message storm

2015-08-14 Thread Attila Fazekas
Public bug reported:

The image_cache_manager periodic task runs on behalf of n-cpu.
image_cache_manager queries all instances which use the same file system as the
local node.
(The message may contain all compute nodes in the region, if they are using the
same shared pNFS.)

https://github.com/openstack/nova/blob/b91f3f60997dddb2f7c2fc007fe02b7dff1e0224/nova/compute/manager.py#L6333

After all instances are received, it issues a looped query via RPC (typically one
select per response).
https://github.com/openstack/nova/blob/b91f3f60997dddb2f7c2fc007fe02b7dff1e0224/nova/virt/imagecache.py#L105

In the end it just needs to know which images are in use.

If we consider default settings on 1024 compute nodes with a shared
filesystem, where each node hosts 16 VMs, we get

nr_nodes * nr_vms_total / interval_sec

1024 * 16384 / 2400 = 6990.50 messages/sec.
It will take down the nova conductor queue.

https://github.com/openstack/nova/blob/b91f3f60997dddb2f7c2fc007fe02b7dff1e0224/nova/compute/manager.py#L6329
Mentions some future re-factoring, but that TODO note is ~3 years old.

The looped BlockDeviceMappingList messages MUST be eliminated!

One option is to move the whole statistics calculation to the service which has a
direct DB connection and is able to select multiple related BlockDeviceMapping
records in a single query.
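
A rough sketch of the suggested direction; the function names below are
hypothetical stand-ins, not the actual nova conductor or DB APIs:

    def used_images_rpc_loop(conductor_api, context, instances):
        # Current pattern: one RPC round trip per instance just to learn
        # which image backs it.
        used = set()
        for inst in instances:
            bdms = conductor_api.get_block_device_mappings(context, inst)
            used.update(bdm.image_id for bdm in bdms if bdm.image_id)
        return used

    def used_images_bulk(db_api, context, instance_uuids):
        # Suggested pattern: one query on the service with direct DB access,
        # returning only the image ids that are still in use.
        return set(db_api.images_in_use(context, instance_uuids))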

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1484847

Title:
  image_cache_manager message storm

Status in OpenStack Compute (nova):
  New

Bug description:
  The image_cache_manager periodic task runs on behalf of n-cpu.
  image_cache_manager queries all instances which use the same file system as
the local node.
  (The message may contain all compute nodes in the region, if they are using
the same shared pNFS.)

  
https://github.com/openstack/nova/blob/b91f3f60997dddb2f7c2fc007fe02b7dff1e0224/nova/compute/manager.py#L6333

  After all instances are received, it issues a looped query via RPC (typically
one select per response).
  
https://github.com/openstack/nova/blob/b91f3f60997dddb2f7c2fc007fe02b7dff1e0224/nova/virt/imagecache.py#L105

  In the end it just needs to know which images are in use.

  If we consider default settings on 1024 compute nodes with a shared
  filesystem, where each node hosts 16 VMs, we get

  nr_nodes * nr_vms_total / interval_sec

  1024 * 16384 / 2400 = 6990.50 messages/sec.
  It will take down the nova conductor queue.

  
https://github.com/openstack/nova/blob/b91f3f60997dddb2f7c2fc007fe02b7dff1e0224/nova/compute/manager.py#L6329
  Mentions some future re-factoring, but that TODO note is ~3 years old.

  The looped BlockDeviceMappingList messages MUST be eliminated!

  One option is to move the whole statistics calculation to the service which
has a direct DB connection and is able to select multiple related
BlockDeviceMapping records in a single query.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1484847/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458913] [NEW] haproxy-driver lock is held while gratuitous arping

2015-05-26 Thread Attila Fazekas
Public bug reported:

https://github.com/openstack/neutron-
lbaas/blob/b868d6d3ef0066a1ac4318d7e91b4d7a076a2e61/neutron_lbaas/drivers/haproxy/namespace_driver.py#L327

arping can take a relatively long time (~2 sec) while the global lock is
held.

The gratuitous arping should not block other threads.

The neutron code base already contains a non-blocking version:
https://github.com/openstack/neutron/blob/6d2794345db2d7b12502f6e7b2d99e05a85b9030/neutron/agent/linux/ip_lib.py#L732

Please do not increase the lock hold time with arping.
Consider using the send_gratuitous_arp function from ip_lib.py.

PS.:
The same blocking arping is also re-implemented in
neutron_lbaas/drivers/haproxy/synchronous_namespace_driver.py.
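
A minimal sketch of keeping the arping out of the locked section by spawning it
in a green thread (the agents run under eventlet); the lock name and the exact
send_gratuitous_arp argument list are assumptions:

    import eventlet
    from oslo_concurrency import lockutils

    from neutron.agent.linux import ip_lib

    def plug_vip(namespace, interface_name, vip_address):
        with lockutils.lock('haproxy-driver'):
            # ... interface/VIP configuration that really needs the lock ...
            pass
        # Send the gratuitous ARP outside the lock so other threads are not
        # blocked for the ~2 seconds the arping takes.
        eventlet.spawn_n(ip_lib.send_gratuitous_arp,
                         namespace, interface_name, vip_address,
                         3)  # argument list is an assumption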

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458913

Title:
  haproxy-driver lock is held while gratuitous arping

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  https://github.com/openstack/neutron-
  
lbaas/blob/b868d6d3ef0066a1ac4318d7e91b4d7a076a2e61/neutron_lbaas/drivers/haproxy/namespace_driver.py#L327

  arping can take a relatively long time (~2 sec) while the global lock is
  held.

  The gratuitous arping should not block other threads.

  The neutron code base already contains a non-blocking version:
  
https://github.com/openstack/neutron/blob/6d2794345db2d7b12502f6e7b2d99e05a85b9030/neutron/agent/linux/ip_lib.py#L732

  Please do not increase the lock hold time with arping.
  Consider using the send_gratuitous_arp function from ip_lib.py.

  PS.:
  The same blocking arping is also re-implemented in
neutron_lbaas/drivers/haproxy/synchronous_namespace_driver.py.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1458913/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447945] Re: check-tempest-dsvm-postgres-full fails with mismatch_error

2015-05-19 Thread Attila Fazekas
** Project changed: tempest => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447945

Title:
  check-tempest-dsvm-postgres-full fails with mismatch_error

Status in OpenStack Compute (Nova):
  New

Bug description:
  
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_terminate_instance
  failed with following stack trace

  
--
  2015-04-23 18:28:42.950 | 
  2015-04-23 18:28:42.950 | Captured traceback:
  2015-04-23 18:28:42.950 | ~~~
  2015-04-23 18:28:42.950 | Traceback (most recent call last):
  2015-04-23 18:28:42.950 |   File 
tempest/thirdparty/boto/test_ec2_instance_run.py, line 216, in 
test_run_terminate_instance
  2015-04-23 18:28:42.950 | self.assertInstanceStateWait(instance, 
'_GONE')
  2015-04-23 18:28:42.950 |   File tempest/thirdparty/boto/test.py, line 
373, in assertInstanceStateWait
  2015-04-23 18:28:42.950 | state = self.waitInstanceState(lfunction, 
wait_for)
  2015-04-23 18:28:42.950 |   File tempest/thirdparty/boto/test.py, line 
358, in waitInstanceState
  2015-04-23 18:28:42.951 | self.valid_instance_state)
  2015-04-23 18:28:42.951 |   File tempest/thirdparty/boto/test.py, line 
349, in state_wait_gone
  2015-04-23 18:28:42.951 | self.assertIn(state, valid_set | 
self.gone_set)
  2015-04-23 18:28:42.951 |   File 
/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 356, in assertIn
  2015-04-23 18:28:42.951 | self.assertThat(haystack, Contains(needle), 
message)
  2015-04-23 18:28:42.951 |   File 
/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 435, in assertThat
  2015-04-23 18:28:42.951 | raise mismatch_error
  2015-04-23 18:28:42.951 | testtools.matchers._impl.MismatchError: 
u'error' not in set(['terminated', 'paused', 'stopped', 'running', 'stopping', 
'shutting-down', 'pending', '_GONE'])

  Logs: http://logs.openstack.org/38/145738/11/check/check-tempest-dsvm-
  postgres-full/fd21577/console.html#_2015-04-23_18_28_42_950

  http://logs.openstack.org/38/145738/11/check/check-tempest-dsvm-
  postgres-full/fd1a680/console.html#_2015-04-23_15_02_51_607

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1410854] Re: NoNetworkFoundInMaximumAllowedAttempts with multipe API workers

2015-04-23 Thread Attila Fazekas
*** This bug is a duplicate of bug 1382064 ***
https://bugs.launchpad.net/bugs/1382064

** This bug has been marked a duplicate of bug 1382064
   Failure to allocate tunnel id when creating networks concurrently

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1410854

Title:
  NoNetworkFoundInMaximumAllowedAttempts with multipe API workers

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  When neutron is configured with the option below, it fails to
  create several networks in a regular devstack/tempest run.

  iniset /etc/neutron/neutron.conf DEFAULT api_workers 4

  http://logs.openstack.org/82/140482/2/check/check-tempest-dsvm-
  neutron-
  full/95aea86/logs/screen-q-svc.txt.gz?#_2015-01-14_13_56_07_268

  2015-01-14 13:56:07.267 2814 WARNING neutron.plugins.ml2.drivers.helpers 
[req-f6402b6d-de49-4675-a766-b45a6bc99061 None] Allocate vxlan segment from 
pool failed after 10 failed attempts
  2015-01-14 13:56:07.268 2814 ERROR neutron.api.v2.resource 
[req-f6402b6d-de49-4675-a766-b45a6bc99061 None] create failed
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 83, in resource
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 451, in create
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 502, in 
create_network
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource tenant_id)
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/managers.py, line 161, in 
create_network_segments
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource segment = 
self.allocate_tenant_segment(session)
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/managers.py, line 190, in 
allocate_tenant_segment
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource segment = 
driver.obj.allocate_tenant_segment(session)
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/type_tunnel.py, line 150, 
in allocate_tenant_segment
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource alloc = 
self.allocate_partially_specified_segment(session)
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/helpers.py, line 144, in 
allocate_partially_specified_segment
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource raise 
exc.NoNetworkFoundInMaximumAllowedAttempts()
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource 
NoNetworkFoundInMaximumAllowedAttempts: Unable to create the network. No 
available network found in maximum allowed attempts.
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource

  'vxlan_vni': 1008L is successfully allocated on behalf of pid=2813,
  req-f3866173-7766-46fc-9dea-e5387be7190d.

  pid=2814, req-f6402b6d-de49-4675-a766-b45a6bc99061 tries to allocate
  the same VNI 10 times without success.
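
  A hedged sketch of one possible mitigation: pick a random candidate from the
free segments instead of always the first one, so concurrent workers are less
likely to fight over the same VNI (the helper names are placeholders, not the
actual ml2 driver code):

      import random

      def allocate_segment(session, list_free_segments, max_retries=10):
          for _ in range(max_retries):
              candidates = list_free_segments(session)
              if not candidates:
                  return None
              candidate = random.choice(candidates)
              # try_claim() stands in for the compare-and-swap UPDATE the
              # real type driver performs on the allocation row.
              if candidate.try_claim(session):
                  return candidate
          raise RuntimeError('no available segment found in the allowed attempts')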

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1410854/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444310] [NEW] keystone token response contains InternalURL for non admin user

2015-04-15 Thread Attila Fazekas
Public bug reported:

keystone token responses contain both the InternalURL and adminURL for a
non-admin user (demo).

This information should not be exposed to a non-admin user.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1444310

Title:
  keystone token response contains InternalURL for non admin user

Status in OpenStack Identity (Keystone):
  New

Bug description:
  keystone token responses contain both the InternalURL and adminURL
  for a non-admin user (demo).

  This information should not be exposed to a non-admin user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1444310/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442040] [NEW] aggregate_hosts does not use deleted in search indexes

2015-04-09 Thread Attila Fazekas
Public bug reported:

Now the table is declared in this way:

show create table aggregate_hosts;

CREATE TABLE `aggregate_hosts` (
  `created_at` datetime DEFAULT NULL,
  `updated_at` datetime DEFAULT NULL,
  `deleted_at` datetime DEFAULT NULL,
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `host` varchar(255) DEFAULT NULL,
  `aggregate_id` int(11) NOT NULL,
  `deleted` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_aggregate_hosts0host0aggregate_id0deleted` 
(`host`,`aggregate_id`,`deleted`),
  KEY `aggregate_id` (`aggregate_id`),
  CONSTRAINT `aggregate_hosts_ibfk_1` FOREIGN KEY (`aggregate_id`) REFERENCES 
`aggregates` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8


The aggregate_hosts table in this form allows multiple deleted records for the
same (`host`, `aggregate_id`) pair.
Is that really needed?

- yes
Add an INDEX/KEY with (`deleted`,`host`)  OR Change the UNIQUE KEY to start 
with `deleted` : (`deleted`, `host`,`aggregate_id`) 
Add an INDEX/KEY with (`deleted`,`aggregate_id`) or extend the aggregate_id 
Index.

- no, enough to preserve only one record
Change the UNIQUE KEY  to (`host`,`aggregate_id`)   Consider using this as a  
primary key instead of the id.
Add an INDEX/KEY with (`deleted`,`aggregate_id`) OR extend the aggregate_id 
Index.
Add an INDEX/KEY with (`deleted`,`host`)

- not at all
  Change the UNIQUE KEY  (`host`,`aggregate_id`)   Consider using this as a  
primary key instead of the `id`.
  remove the `updated_at`, `deleted_at` , `deleted`  fields.

Note: the `host` field should reference another table.
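
A minimal migration sketch for the first ('yes') option above, in the
sqlalchemy-migrate style nova uses; the index names and the migration placement
are placeholders:

    from sqlalchemy import Index, MetaData, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        aggregate_hosts = Table('aggregate_hosts', meta, autoload=True)
        # Make the soft-delete filter usable by the host and aggregate lookups.
        Index('aggregate_hosts_deleted_host_idx',
              aggregate_hosts.c.deleted,
              aggregate_hosts.c.host).create(migrate_engine)
        Index('aggregate_hosts_deleted_aggregate_id_idx',
              aggregate_hosts.c.deleted,
              aggregate_hosts.c.aggregate_id).create(migrate_engine)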

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1442040

Title:
  aggregate_hosts does not use deleted in search indexes

Status in OpenStack Compute (Nova):
  New

Bug description:
  Now the table is declared in this way:

  show create table aggregate_hosts;

  CREATE TABLE `aggregate_hosts` (
`created_at` datetime DEFAULT NULL,
`updated_at` datetime DEFAULT NULL,
`deleted_at` datetime DEFAULT NULL,
`id` int(11) NOT NULL AUTO_INCREMENT,
`host` varchar(255) DEFAULT NULL,
`aggregate_id` int(11) NOT NULL,
`deleted` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_aggregate_hosts0host0aggregate_id0deleted` 
(`host`,`aggregate_id`,`deleted`),
KEY `aggregate_id` (`aggregate_id`),
CONSTRAINT `aggregate_hosts_ibfk_1` FOREIGN KEY (`aggregate_id`) REFERENCES 
`aggregates` (`id`)
  ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8

  
  The aggregate_hosts table in this form allows multiple deleted records for
the same (`host`, `aggregate_id`) pair.
  Is that really needed?

  - yes
  Add an INDEX/KEY with (`deleted`,`host`)  OR Change the UNIQUE KEY to start 
with `deleted` : (`deleted`, `host`,`aggregate_id`) 
  Add an INDEX/KEY with (`deleted`,`aggregate_id`) or extend the aggregate_id 
Index.

  - no, enough to preserve only one record
  Change the UNIQUE KEY  to (`host`,`aggregate_id`)   Consider using this as a  
primary key instead of the id.
  Add an INDEX/KEY with (`deleted`,`aggregate_id`) OR extend the aggregate_id 
Index.
  Add an INDEX/KEY with (`deleted`,`host`)

  - not at all
Change the UNIQUE KEY  (`host`,`aggregate_id`)   Consider using this as a  
primary key instead of the `id`.
remove the `updated_at`, `deleted_at` , `deleted`  fields.

  Note: the `host` field should reference another table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1442040/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442004] [NEW] instance group data model allows multiple polices

2015-04-09 Thread Attila Fazekas
Public bug reported:

Currently only two policies are available, and only one can be used with a server
group.

$  nova server-group-create name affinity anti-affinity
ERROR (BadRequest): Invalid input received: Conflicting policies configured! 
(HTTP 400) (Request-ID: req-1af553f8-5fd6-4227-870b-be963aad2b62)
$  nova server-group-create name affinity affinity
ERROR (BadRequest): Invalid input received: Duplicate policies configured! 
(HTTP 400) (Request-ID: req-4b697798-89ec-48e1-9840-5e627c08657b)

The spec
https://review.openstack.org/#/c/168372/1/specs/liberty/approved/soft-affinity-for-server-group.rst,cm,
contains two additional policy names, but:

These new soft-affinity and soft-anti-affinity policies are mutually
exclusive with each other and with the other existing server-group
policies.

I suggest removing the 'instance_group_policy' table and adding a 'policy' field
to the 'instance_groups' table.
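
A hypothetical migration sketch of the suggested change; the column type and the
SQL dialect details are assumptions:

    def upgrade(migrate_engine):
        # Add the single policy directly to instance_groups ...
        migrate_engine.execute(
            'ALTER TABLE instance_groups ADD COLUMN policy VARCHAR(255)')
        # ... copy the one policy per group over, then drop the old table.
        migrate_engine.execute(
            'UPDATE instance_groups ig SET policy = '
            '(SELECT policy FROM instance_group_policy p '
            ' WHERE p.group_id = ig.id AND p.deleted = 0 LIMIT 1)')
        migrate_engine.execute('DROP TABLE instance_group_policy')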

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1442004

Title:
  instance group data model allows multiple polices

Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently only two policies are available, and only one can be used with a
  server group.

  $  nova server-group-create name affinity anti-affinity
  ERROR (BadRequest): Invalid input received: Conflicting policies configured! 
(HTTP 400) (Request-ID: req-1af553f8-5fd6-4227-870b-be963aad2b62)
  $  nova server-group-create name affinity affinity
  ERROR (BadRequest): Invalid input received: Duplicate policies configured! 
(HTTP 400) (Request-ID: req-4b697798-89ec-48e1-9840-5e627c08657b)

  The spec
https://review.openstack.org/#/c/168372/1/specs/liberty/approved/soft-affinity-for-server-group.rst,cm,
  contains two additional policy names, but:

  These new soft-affinity and soft-anti-affinity policies are mutually
  exclusive with each other and with the other existing server-group
  policies.

  I suggest removing the 'instance_group_policy' table and adding a 'policy'
field to the 'instance_groups' table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1442004/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442098] [NEW] instance_group_member entries not deleted when the instance is deleted

2015-04-09 Thread Attila Fazekas
Public bug reported:

Only the non-deleted members need to be selected; an instance group can
gather many deleted instances during its lifetime.

The selecting query contains a condition for omitting the deleted
records:

SELECT instance_groups.created_at AS instance_groups_created_at,
instance_groups.updated_at AS instance_groups_updated_at,
instance_groups.deleted_at AS instance_groups_deleted_at,
instance_groups.deleted AS instance_groups_deleted, instance_groups.id
AS instance_groups_id, instance_groups.user_id AS
instance_groups_user_id, instance_groups.project_id AS
instance_groups_project_id, instance_groups.uuid AS
instance_groups_uuid, instance_groups.name AS instance_groups_name,
instance_group_policy_1.created_at AS
instance_group_policy_1_created_at, instance_group_policy_1.updated_at
AS instance_group_policy_1_updated_at,
instance_group_policy_1.deleted_at AS
instance_group_policy_1_deleted_at, instance_group_policy_1.deleted AS
instance_group_policy_1_deleted, instance_group_policy_1.id AS
instance_group_policy_1_id, instance_group_policy_1.policy AS
instance_group_policy_1_policy, instance_group_policy_1.group_id AS
instance_group_policy_1_group_id, instance_group_member_1.created_at AS
instance_group_member_1_created_at, instance_group_member_1.updated_at
AS instance_group_member_1_updated_at,
instance_group_member_1.deleted_at AS
instance_group_member_1_deleted_at, instance_group_member_1.deleted AS
instance_group_member_1_deleted, instance_group_member_1.id AS
instance_group_member_1_id, instance_group_member_1.instance_id AS
instance_group_member_1_instance_id, instance_group_member_1.group_id AS
instance_group_member_1_group_id  FROM instance_groups LEFT OUTER JOIN
instance_group_policy AS instance_group_policy_1 ON instance_groups.id =
instance_group_policy_1.group_id AND instance_group_policy_1.deleted = 0
AND instance_groups.deleted = 0 LEFT OUTER JOIN instance_group_member AS
instance_group_member_1 ON instance_groups.id =
instance_group_member_1.group_id AND instance_group_member_1.deleted = 0
AND instance_groups.deleted = 0  WHERE instance_groups.deleted = 0 AND
instance_groups.project_id = '6da55626d6a04f4c99980dc17d34235f';

(Captured at $nova server-group-list)

But nova actually fetches records of deleted instances, because the `deleted`
field stays 0 even if the instance is already deleted.

To figure out whether an instance is actually deleted, the nova API issues
additional, otherwise unneeded queries.

The instance_group_member records are only set to deleted when the
instance_group itself is deleted.

show create table instance_group_member;

CREATE TABLE `instance_group_member` (
  `created_at` datetime DEFAULT NULL,
  `updated_at` datetime DEFAULT NULL,
  `deleted_at` datetime DEFAULT NULL,
  `deleted` int(11) DEFAULT NULL,
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `instance_id` varchar(255) DEFAULT NULL,
  `group_id` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `group_id` (`group_id`),
  KEY `instance_group_member_instance_idx` (`instance_id`),
  CONSTRAINT `instance_group_member_ibfk_1` FOREIGN KEY (`group_id`) REFERENCES 
`instance_groups` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8

1. Please delete the instance_group_member records when the instance gets
deleted.
2. Please add a combined (`deleted`, `group_id`) BTREE index; this way it will be
usable in other situations as well, for example when only a single group's
members are needed.
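
A hedged sketch of request 1 using oslo.db's soft-delete support; the session
handling is simplified and assumes the session's Query class provides
soft_delete():

    from nova.db.sqlalchemy import models

    def soft_delete_group_memberships(session, instance_uuid):
        # Called from the instance-delete path so the JOIN above stops
        # dragging in memberships of long-gone instances.
        with session.begin(subtransactions=True):
            session.query(models.InstanceGroupMember).\
                filter_by(instance_id=instance_uuid).\
                soft_delete(synchronize_session=False)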

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1442098

Title:
  instance_group_member entries not deleted when the instance is deleted

Status in OpenStack Compute (Nova):
  New

Bug description:
  Only the non-deleted members need to be selected; an instance group
  can gather many deleted instances during its lifetime.

  The selecting query contains a condition for omitting the deleted
  records:

  SELECT instance_groups.created_at AS instance_groups_created_at,
  instance_groups.updated_at AS instance_groups_updated_at,
  instance_groups.deleted_at AS instance_groups_deleted_at,
  instance_groups.deleted AS instance_groups_deleted, instance_groups.id
  AS instance_groups_id, instance_groups.user_id AS
  instance_groups_user_id, instance_groups.project_id AS
  instance_groups_project_id, instance_groups.uuid AS
  instance_groups_uuid, instance_groups.name AS instance_groups_name,
  instance_group_policy_1.created_at AS
  instance_group_policy_1_created_at, instance_group_policy_1.updated_at
  AS instance_group_policy_1_updated_at,
  instance_group_policy_1.deleted_at AS
  instance_group_policy_1_deleted_at, instance_group_policy_1.deleted AS
  instance_group_policy_1_deleted, instance_group_policy_1.id AS
  instance_group_policy_1_id, instance_group_policy_1.policy AS
  instance_group_policy_1_policy, 

[Yahoo-eng-team] [Bug 1441242] [NEW] instances internal_id attribute not in use

2015-04-07 Thread Attila Fazekas
Public bug reported:

The nova instances table contains an internal_id field; it looks like it is
always NULL and never referenced.

The ec2 API uses a variable with same name for snapshots and volumes,
but not for instance id.

For the ec2 id <-> uuid mapping a separate table, instance_id_mappings, is
currently responsible; it contains an integer id for the ec2 instance-01
strings and references the instances table by uuid.

Instead of using instance_id_mappings  the internal_id field could be
used.

Note: The ec2 API could use instances.id, but ec2 users do not like the
instance id changing randomly, so it might need special handling.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441242

Title:
  instances internal_id attribute not in use

Status in OpenStack Compute (Nova):
  New

Bug description:
  The nova instances table contains an internal_id field; it looks like it is
  always NULL and never referenced.

  The ec2 API uses a variable with same name for snapshots and volumes,
  but not for instance id.

  For the ec2 id <-> uuid mapping a separate table, instance_id_mappings, is
currently responsible; it contains an integer id for the ec2 instance-01
strings and references the instances table by uuid.

  Instead of using instance_id_mappings  the internal_id field could be
  used.

  Note: The ec2 API could use instances.id, but ec2 users do not like the
  instance id changing randomly, so it might need special handling.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1441242/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438113] [NEW] Use plain HTTP listeners in the conductor

2015-03-30 Thread Attila Fazekas
Public bug reported:

The conductor consumes messages from a single queue, which has performance
limitations for various reasons:
- a per-queue lock
- some brokers also limit part of the message handling to a single CPU
thread per queue
- multiple broker instances need to synchronise the queue content, which causes
additional delays due to the TCP request/response times

The single-queue limitation is far more restrictive than the limits of a
single MySQL server; the ratio is even worse when you consider slave
reads.

This can be worked around by explicitly or implicitly distributing the RPC
calls across multiple queues.

The message broker provides additional message durability properties which are
not needed just for an rpc_call;
we spend resources on something we do not actually need.

For TCP/HTTP traffic load balancing we have many tools; even hardware-assisted
options are available, providing virtually unlimited scalability.
At the TCP level it is also possible to exclude the load balancer node(s) from
the response traffic.

Why HTTP?
Basically any protocol which can do a request/response exchange with an arbitrary
type and size of data, with keep-alive connections and an SSL option, could be
used.
HTTP is a simple and well-known protocol, with many existing load balancing
tools.

Why not have the agents do a regular API call?
A regular API call needs to do a policy check, which in this case is not
required; every authenticated user can be considered an admin.

The conductor clients need to use at least a single shared key configured on
every nova host.
It has similar security to what OpenStack uses with the brokers: basically all
nova nodes had credentials for one rabbitmq virtual host, configured in
/etc/nova/nova.conf. If any of those credentials were stolen, they provided
access to the whole virtual host.

NOTE: HTTPS can be used with certificate- or kerberos-based
authentication as well.


I think that for the rpc_calls which are served by the agents, AMQP is still the
better option; this bug is just about the situation when the conductor itself
serves rpc_call(s).

NOTE: The 1 million msg/sec rabbitmq benchmark was done with 186 queues, in a
way which does not hit the single-queue limitations.
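
A minimal sketch of the kind of listener described above, not a proposed nova
patch: a plain WSGI endpoint that authenticates callers with a single pre-shared
key; the header name, port and dispatch are assumptions:

    import hmac
    import json
    from wsgiref.simple_server import make_server

    SHARED_KEY = 'value-configured-in-nova.conf'   # placeholder

    def conductor_app(environ, start_response):
        provided = environ.get('HTTP_X_CONDUCTOR_KEY', '')
        if not hmac.compare_digest(provided, SHARED_KEY):
            start_response('401 Unauthorized', [('Content-Type', 'text/plain')])
            return [b'bad shared key']
        length = int(environ.get('CONTENT_LENGTH') or 0)
        body = environ['wsgi.input'].read(length).decode('utf-8') if length else '{}'
        request = json.loads(body)
        # A real service would dispatch to the conductor manager here;
        # echoing the request keeps the sketch self-contained.
        payload = json.dumps({'echo': request}).encode('utf-8')
        start_response('200 OK', [('Content-Type', 'application/json')])
        return [payload]

    if __name__ == '__main__':
        make_server('0.0.0.0', 8999, conductor_app).serve_forever()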

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1438113

Title:
  Use plain HTTP listeners in the conductor

Status in OpenStack Compute (Nova):
  New

Bug description:
  The conductor consumes messages from a single queue, which has performance
limitations for various reasons:
  - a per-queue lock
  - some brokers also limit part of the message handling to a single CPU
thread per queue
  - multiple broker instances need to synchronise the queue content, which
causes additional delays due to the TCP request/response times

  The single-queue limitation is far more restrictive than the limits of a
  single MySQL server; the ratio is even worse when you consider slave
  reads.

  This can be worked around by explicitly or implicitly distributing the
  RPC calls across multiple queues.

  The message broker provides additional message durability properties which are
not needed just for an rpc_call;
  we spend resources on something we do not actually need.

  For TCP/HTTP traffic load balancing we have many tools; even hardware-assisted
options are available, providing virtually unlimited scalability.
  At the TCP level it is also possible to exclude the load balancer node(s) from
the response traffic.

  Why HTTP?
  Basically any protocol which can do a request/response exchange with an
arbitrary type and size of data, with keep-alive connections and an SSL option,
could be used.
  HTTP is a simple and well-known protocol, with many existing load balancing
tools.

  Why not have the agents do a regular API call?
  A regular API call needs to do a policy check, which in this case is not
required; every authenticated user can be considered an admin.

  The conductor clients need to use at least a single shared key configured on
every nova host.
  It has similar security to what OpenStack uses with the brokers: basically all
nova nodes had credentials for one rabbitmq virtual host,
  configured in /etc/nova/nova.conf. If any of those credentials were stolen,
they provided access to the whole virtual host.

  NOTE: HTTPS can be used with certificate- or kerberos-based
  authentication as well.

  I think that for the rpc_calls which are served by the agents, AMQP is still
the better option; this bug is just about the situation when the conductor
itself serves rpc_call(s).

  NOTE: The 1 million msg/sec rabbitmq benchmark was done with 186 queues, in
  a way which does not hit the single-queue limitations.

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1438159] [NEW] Make neutron agents silent by using AMQP

2015-03-30 Thread Attila Fazekas
Public bug reported:

Problem: Neutron agents do a lot of periodic tasks, each of which leads to an
RPC call + database transaction that does not even provide new information,
because nothing has changed.
At scale this behaviour can be called a `DDoS attack`; generally this kind of
architecture scales badly and can be considered an anti-pattern.

Instead of periodic polling, we can leverage the AMQP broker's bind capabilities.
Neutron has many situations, like security group rule changes or DVR related
changes, which need to be communicated to multiple agents, but usually not to
all agents.

The agent at startup needs to synchronise as usual, but during the
sync the agent can subscribe to the interesting events to avoid the
periodic tasks. (Note: after the first subscribe loop a second one is
needed so that changes made during the subscribe process are not missed.)

AMQP queues with 'auto-delete' can be considered a reliable source of
information which does not miss any event notification.
On connection loss, or if the full broker cluster dies, the agent needs to re-sync
everything guarded in this way;
in these cases the queue will disappear, so the situation is easily detectable.

1. Create a direct exchange for every kind of resource type that needs
to be synchronised in this way, for example 'neutron.securitygroups'.
The exchange declaration needs to happen at q-svc start-up time or after a
full broker cluster death (a not-found exception will tell it). The exchange
SHOULD NOT be redeclared or verified at every message publish.

2. Every agent creates a dedicated per-agent queue with the auto-delete flag; if
the agent already maintains a queue with this property it MAY reuse that one.
The agents SHOULD avoid creating multiple queues per resource type. The
messages MUST contain type information.
3. Every agent creates a binding between its queue and the resource type
exchange when it realises it depends on the resource, for example when it
maintains at least one port with the given security group. (The agents need to
remove the binding when they stop using it.)
4. The q-svc publishes just a single message when a resource related change
happens. The routing key is the uuid.

Alternatively a topic exchange can be used, with a single exchange.
In this case the routing keys MUST contain the resource type, like
neutron.resource_type.uuid;
this type of exchange is generally more expensive than a direct exchange
(pattern matching), and is only useful if you have agents which need to listen
to ALL events related to a type, while others are interested in just a few of
them.
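
A rough kombu sketch of steps 2-4 above (kombu is the library underneath
oslo.messaging); the broker URL, queue naming and payloads are placeholders:

    from kombu import Connection, Exchange, Queue

    sg_exchange = Exchange('neutron.securitygroups', type='direct',
                           durable=False)

    def publish_sg_change(conn, sg_id, payload):
        # q-svc side (step 4): a single message, routed by the resource uuid.
        conn.Producer().publish(payload, exchange=sg_exchange.name,
                                routing_key=sg_id)

    def subscribe_agent(conn, agent_host, sg_ids, on_message):
        # Agent side (steps 2-3): one auto-delete queue per agent, one
        # binding per security group the agent currently depends on.
        channel = conn.channel()
        queue = Queue('sg-updates.%s' % agent_host, exchange=sg_exchange,
                      routing_key='sg-updates.%s' % agent_host,
                      auto_delete=True)(channel)
        queue.declare()
        for sg_id in sg_ids:
            queue.bind_to(exchange=sg_exchange, routing_key=sg_id)
        with conn.Consumer(queue, callbacks=[on_message]):
            while True:
                conn.drain_events()

    if __name__ == '__main__':
        with Connection('amqp://guest:guest@localhost//') as conn:
            # Step 1 equivalent: declare the exchange once at q-svc start-up.
            sg_exchange(conn.channel()).declare()
            publish_sg_change(conn, 'SG-UUID', {'event': 'rule_updated'})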

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1438159

Title:
  Make neutron agents silent by using AMQP

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Problem: Neutron agents do a lot of periodic tasks, each of which leads to an
RPC call + database transaction that does not even provide new information,
because nothing has changed.
  At scale this behaviour can be called a `DDoS attack`; generally this kind
of architecture scales badly and can be considered an anti-pattern.

  Instead of periodic polling, we can leverage the AMQP broker's bind capabilities.
  Neutron has many situations, like security group rule changes or DVR related
changes, which need to be communicated to multiple agents, but usually not to
all agents.

  The agent at startup needs to synchronise as usual, but during the
  sync the agent can subscribe to the interesting events to avoid the
  periodic tasks. (Note: after the first subscribe loop a second one is
  needed so that changes made during the subscribe process are not missed.)

  AMQP queues with 'auto-delete' can be considered a reliable source of
information which does not miss any event notification.
  On connection loss, or if the full broker cluster dies, the agent needs to
re-sync everything guarded in this way;
  in these cases the queue will disappear, so the situation is easily detectable.

  1. Create a direct exchange for every kind of resource type that needs
  to be synchronised in this way, for example 'neutron.securitygroups'.
  The exchange declaration needs to happen at q-svc start-up time or
  after a full broker cluster death (a not-found exception will tell it). The
  exchange SHOULD NOT be redeclared or verified at every message
  publish.

  2. Every agent creates a dedicated per-agent queue with the auto-delete flag;
if the agent already maintains a queue with this property it MAY reuse that one.
The agents SHOULD avoid creating multiple queues per resource type. The
messages MUST contain type information.
  3. Every agent creates a binding between its queue and the resource type
exchange when it realises it depends on the resource, for example when it
maintains at least one port with the given security group.

[Yahoo-eng-team] [Bug 1437902] [NEW] nova redeclares the `nova` named exchange zillion times without a real need

2015-03-29 Thread Attila Fazekas
Public bug reported:

The AMQP broker preserves the exchanges; they are replicated to all brokers even
in non-HA mode.
A transient exchange can disappear ONLY when the user explicitly requests its
deletion or when the full rabbit cluster dies.

It is more efficient to declare exchanges only when they are really missing.

The application MUST redeclare the exchange when it was reported as Not Found.
Note: channel exceptions cause channel termination, but not connection
termination.
The application MAY try to redeclare the exchange on connection breakage; it can
assume the messaging cluster is dead.
The application SHOULD redeclare the exchange at application start-up to verify
the attributes (before the first usage).
The application does not need to redeclare the exchange in any other case.

Currently a significant amount of the AMQP requests/responses are
Exchange.Declare - Exchange.Declare-Ok. (One per publish?)
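
A hedged sketch of that policy with kombu/py-amqp (the libraries underneath
oslo.messaging); the exchange attributes and error handling details are
assumptions, not the actual nova code:

    import amqp
    from kombu import Connection, Exchange

    nova_exchange = Exchange('nova', type='topic', durable=False)

    def declare_exchange(conn):
        # Declare (and thereby verify the attributes) once at start-up.
        nova_exchange(conn.channel()).declare()

    def publish(conn, routing_key, payload):
        producer = conn.Producer()
        try:
            # No declare step here: do not re-verify the exchange per publish.
            producer.publish(payload, exchange=nova_exchange.name,
                             routing_key=routing_key)
        except amqp.exceptions.NotFound:
            # 404 channel error: the exchange really is gone (e.g. the whole
            # cluster died), so redeclare it and retry once.
            declare_exchange(conn)
            conn.Producer().publish(payload, exchange=nova_exchange.name,
                                    routing_key=routing_key)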

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1437902

Title:
  nova redeclares the `nova` named exchange zillion times without a real
  need

Status in OpenStack Compute (Nova):
  New

Bug description:
  The AMQP broker preserves the exchanges; they are replicated to all brokers
even in non-HA mode.
  A transient exchange can disappear ONLY when the user explicitly requests
its deletion or when the full rabbit cluster dies.

  It is more efficient to declare exchanges only when they are really missing.

  The application MUST redeclare the exchange when it was reported as Not Found.
  Note: channel exceptions cause channel termination, but not connection
termination.
  The application MAY try to redeclare the exchange on connection breakage; it
can assume the messaging cluster is dead.
  The application SHOULD redeclare the exchange at application start-up to
verify the attributes (before the first usage).
  The application does not need to redeclare the exchange in any other case.

  Currently a significant amount of the AMQP requests/responses are
  Exchange.Declare - Exchange.Declare-Ok. (One per publish?)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1437902/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1437350] Re: cirros uses exit status 0 when trying to login as root

2015-03-29 Thread Attila Fazekas
Adding cloud-init as an affected project, because it also behaves in this
way.

** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1437350

Title:
  cirros uses exit status 0 when trying to login as root

Status in CirrOS a tiny cloud guest:
  In Progress
Status in Init scripts for use on cloud images:
  New

Bug description:
  $ ssh -i mykey root@10.1.0.2 ls
  Warning: Permanently added '10.1.0.2' (RSA) to the list of known hosts.
  Please login as 'cirros' user, not as root

  $ echo $?
  0

  Since the command is not executed, the exit status should be non-zero.

  
  /root/.ssh/authorized_keys:
  command=echo Please login as \'cirros\' user, not as root; echo; sleep 10 
this part should be changed to:
  echo Please login as \'cirros\' user, not as root; echo; sleep 10; exit 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/cirros/+bug/1437350/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1437199] [NEW] zookeeper driver used with O(n^2) complexity by the scheduler

2015-03-27 Thread Attila Fazekas
Public bug reported:

(Loop1) 
https://github.com/openstack/nova/blob/af2d6c9576b1ac5f3b3768870bb15d9b5cf1610b/nova/scheduler/driver.py#L55
(Loop2) 
https://github.com/openstack/nova/blob/af2d6c9576b1ac5f3b3768870bb15d9b5cf1610b/nova/servicegroup/drivers/zk.py#L177

Iterating the hosts through the ComputeFilter also has this issue;
ComputeFilter usage in a loop has other performance issues as well.

The zk driver issue can be mitigated by doing the membership test (`filtering`)
in is_up instead of get_all, by reorganizing the code.


However, a better solution would be to have the scheduler use get_all,
or to redesign the servicegroup management.

A better design would be to use the DB even with the zk/mc driver, but
do an update ONLY when the service actually comes up or dies; in this case
the sg drivers MAY need dedicated service processes.

NOTE: The servicegroup driver concept was introduced to avoid doing 10_000 DB
updates/sec @100_000 hosts (one update per host every 10 seconds);
if your servers are bad and every server has a 1:1000 chance to die on a given
day, updating only on state changes would lead to only ~0.001 UPDATE/sec
(100/day) @100_000 hosts.

NOTE: If the up/down state is knowable just from the DB, the scheduler could
eliminate the dead hosts in the first DB query, without using
ComputeFilter as it is used now. (The plugins SHOULD be able to extend
the base hosts query.)
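
A hedged sketch of the mitigation: call get_all() once per scheduling pass and
answer the liveness question from the resulting set; the names are simplified
stand-ins for the servicegroup API:

    def filter_alive_hosts(servicegroup_api, services):
        # One ZooKeeper read instead of one get_all() per host.
        alive = set(servicegroup_api.get_all('nova-compute'))
        # O(n) membership tests instead of the current O(n^2) behaviour.
        return [service for service in services if service.host in alive]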

** Affects: nova
 Importance: Undecided
 Status: New

** Summary changed:

- zookeper driver used with O(n^2) complexity  by the scheduler
+ zookeeper driver used with O(n^2) complexity  by the scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1437199

Title:
  zookeeper driver used with O(n^2) complexity  by the scheduler

Status in OpenStack Compute (Nova):
  New

Bug description:
  (Loop1) 
https://github.com/openstack/nova/blob/af2d6c9576b1ac5f3b3768870bb15d9b5cf1610b/nova/scheduler/driver.py#L55
  (Loop2) 
https://github.com/openstack/nova/blob/af2d6c9576b1ac5f3b3768870bb15d9b5cf1610b/nova/servicegroup/drivers/zk.py#L177

  Iterating the hosts through the ComputeFilter also has this issue;
  ComputeFilter usage in a loop has other performance issues as well.

  The zk driver issue can be mitigated by doing the membership test
  (`filtering`) in is_up instead of get_all, by reorganizing the code.

  However, a better solution would be to have the scheduler use get_all,
  or to redesign the servicegroup management.

  A better design would be to use the DB even with the zk/mc driver, but
  do an update ONLY when the service actually comes up or dies; in this case
  the sg drivers MAY need dedicated service processes.

  NOTE: The servicegroup driver concept was introduced to avoid doing 10_000 DB
updates/sec @100_000 hosts (one update per host every 10 seconds);
  if your servers are bad and every server has a 1:1000 chance to die on a
given day, updating only on state changes would lead to only ~0.001 UPDATE/sec
(100/day) @100_000 hosts.

  NOTE: If the up/down state is knowable just from the DB, the scheduler could
  eliminate the dead hosts in the first DB query, without using
  ComputeFilter as it is used now. (The plugins SHOULD be able to extend
  the base hosts query.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1437199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436986] [NEW] PciDeviceStats not compared properly

2015-03-26 Thread Attila Fazekas
Public bug reported:

https://github.com/openstack/nova/blob/5b77c108f14f2bcd42fecfcd060331e57a2e07dd/nova/compute/resource_tracker.py#L554
is always true, since the nova.pci.stats.PciDeviceStats instances are different
objects even if they have equivalent content.

Please compare the resources properly and send updated resource info
ONLY when it is REALLY needed.
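
A hedged sketch of a value-based comparison; the real PciDeviceStats keeps its
state in a list of pool dicts, which is assumed here:

    class PciDeviceStats(object):
        def __init__(self, pools=None):
            self.pools = pools or []

        def __eq__(self, other):
            if not isinstance(other, PciDeviceStats):
                return NotImplemented
            return self.pools == other.pools

        def __ne__(self, other):
            result = self.__eq__(other)
            return result if result is NotImplemented else not result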

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1436986

Title:
  PciDeviceStats not compared properly

Status in OpenStack Compute (Nova):
  New

Bug description:
  
https://github.com/openstack/nova/blob/5b77c108f14f2bcd42fecfcd060331e57a2e07dd/nova/compute/resource_tracker.py#L554
  is always true, since the nova.pci.stats.PciDeviceStats instances are different
  objects even if they have equivalent content.

  Please compare the resources properly and send updated resource info
  ONLY when it is REALLY needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1436986/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436869] [NEW] update_available_resource periodic task frequency should be configurable

2015-03-26 Thread Attila Fazekas
Public bug reported:

update_available_resource (nova/compute/manager.py) can be considered
expensive at scale; its frequency should be configurable.

Please add a config option for the update_available_resource  frequency.
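
A hedged sketch of what the option could look like with oslo.config; the option
name, default and periodic task wiring are suggestions, not existing nova code:

    from oslo_config import cfg

    interval_opts = [
        cfg.IntOpt('update_resources_interval',
                   default=0,
                   help='Interval in seconds for updating the available '
                        'resources. 0 keeps the default periodic interval; '
                        'a negative value disables the task.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(interval_opts)

    # In nova/compute/manager.py the task could then be decorated with:
    #   @periodic_task.periodic_task(spacing=CONF.update_resources_interval)
    #   def update_available_resource(self, context):
    #       ...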

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1436869

Title:
  update_available_resource periodic task frequency should be
  configurable

Status in OpenStack Compute (Nova):
  New

Bug description:
  update_available_resource (nova/compute/manager.py) can be considered
  expensive at scale, so its frequency should be configurable.

  Please add a config option for the update_available_resource
  frequency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1436869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1426867] [NEW] Remove orphaned tables: iscsi_targets, volumes

2015-03-01 Thread Attila Fazekas
Public bug reported:

The `iscsi_targets` and `volumes` tables were used by nova-volume,
which has been deprecated and removed, but the tables are still created.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1426867

Title:
  Remove orphaned tables: iscsi_targets, volumes

Status in OpenStack Compute (Nova):
  New

Bug description:
  The `iscsi_targets` and `volumes` tables were used by nova-volume,
  which has been deprecated and removed, but the tables are still created.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1426867/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1426873] [NEW] Remove shadow tables

2015-03-01 Thread Attila Fazekas
Public bug reported:

The
$ nova-manage db archive_deleted_rows 1
command fails with an integrity error.

No-one wants to preserve those records in the shadow tables, so instead of
fixing the archiving issue the tables should be removed.

Later an archive-to-/dev/null function should be added to nova-manage.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1426873

Title:
  Remove shadow tables

Status in OpenStack Compute (Nova):
  New

Bug description:
  The
  $ nova-manage db archive_deleted_rows 1
  command fails with an integrity error.

  No-one wants to preserve those records in the shadow tables, so instead of
  fixing the archiving issue the tables should be removed.

  Later an archive-to-/dev/null function should be added to nova-manage.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1426873/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421616] Re: Cannot create project using Horizon - Could not find default role _member_

2015-02-27 Thread Attila Fazekas
I consider this a keystone bug.

Keystone should handle this the same way as the default domain:
https://github.com/openstack/keystone/blob/master/keystone/common/sql/migrate_repo/versions/034_havana.py#L282

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1421616

Title:
  Cannot create project using Horizon - Could not find default role
  _member_

Status in devstack - openstack dev environments:
  In Progress
Status in OpenStack Identity (Keystone):
  New

Bug description:
  The following error occurs when I try to create a new project using
  Horizon:

  On the dashboard - Danger: An error occurred. Please try again later.
  On the horizon screen - NotFound: Could not find default role _member_ in 
Keystone

  Steps to reproduce:
  ===
  1. Install openstack using devstack
  2. Log in with your admin credentials
  3. Go to Identity - Projects
  4. Click on + Create Project

  It would throw the error mentioned above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1421616/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421863] Re: Can not find policy directory: policy.d spams the logs

2015-02-23 Thread Attila Fazekas
** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: ceilometer
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1421863

Title:
  Can not find policy directory: policy.d spams the logs

Status in OpenStack Telemetry (Ceilometer):
  New
Status in devstack - openstack dev environments:
  In Progress
Status in OpenStack Compute (Nova):
  New
Status in The Oslo library incubator:
  Triaged
Status in Oslo Policy:
  Fix Released

Bug description:
  This hits over 118 million times in 24 hours in Jenkins runs:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQ2FuIG5vdCBmaW5kIHBvbGljeSBkaXJlY3Rvcnk6IHBvbGljeS5kXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyMzg2Njk0MTcxOH0=

  We can probably just change something in devstack to avoid this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1421863/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1253896] Re: Attempts to verify guests are running via SSH fails. SSH connection to guest does not work.

2015-01-30 Thread Attila Fazekas
** No longer affects: tempest/havana

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1253896

Title:
  Attempts to verify guests are running via SSH fails. SSH connection to
  guest does not work.

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Won't Fix
Status in OpenStack Compute (Nova):
  Triaged
Status in Tempest:
  Confirmed

Bug description:
  An example of this can be found at
  http://logs.openstack.org/74/57774/2/gate/gate-tempest-devstack-vm-
  full/e592961/console.html. This test failing seems to cause the
  tearDownClass failure and the process exit code failure.

  Judging by the logs below, the VM is coming up, and the test is
  connecting to the SSH server (dropbear) running in the VM, but the
  authentication is failing. It appears that authentication is attempted
  several times before paramiko gives up causing the test to fail. I
  think this indicates there isn't a network or compute problem, instead
  is possible the client doesn't have the correct key or the authorized
  keys aren't configured properly on the server side. But these are just
  guesses, I haven't been able to get any concrete data that would
  support these theories.

  2013-11-22 05:36:33.980 | 2013-11-22 05:32:17,029 Adding SecurityGroupRule 
from_port=-1, group={}, id=14, ip_protocol=icmp, ip_range={u'cidr': 
u'0.0.0.0/0'}, parent_group_id=105, to_port=-1 to shared resources of 
TestMinimumBasicScenario
  2013-11-22 05:36:33.980 | 2013-11-22 05:32:34,226 starting thread (client 
mode): 0x52e7e50L
  2013-11-22 05:36:33.980 | 2013-11-22 05:32:34,232 Connected (version 2.0, 
client dropbear_2012.55)
  2013-11-22 05:36:33.981 | 2013-11-22 05:32:34,237 kex 
algos:['diffie-hellman-group1-sha1', 'diffie-hellman-group14-sha1'] server 
key:['ssh-rsa', 'ssh-dss'] client encrypt:['aes128-ctr', '3des-ctr', 
'aes256-ctr', 'aes128-cbc', '3des-cbc', 'aes256-cbc', 'twofish256-cbc', 
'twofish-cbc', 'twofish128-cbc'] server encrypt:['aes128-ctr', '3des-ctr', 
'aes256-ctr', 'aes128-cbc', '3des-cbc', 'aes256-cbc', 'twofish256-cbc', 
'twofish-cbc', 'twofish128-cbc'] client mac:['hmac-sha1-96', 'hmac-sha1', 
'hmac-md5'] server mac:['hmac-sha1-96', 'hmac-sha1', 'hmac-md5'] client 
compress:['none'] server compress:['none'] client lang:[''] server lang:[''] 
kex follows?False
  2013-11-22 05:36:33.981 | 2013-11-22 05:32:34,238 Ciphers agreed: 
local=aes128-ctr, remote=aes128-ctr
  2013-11-22 05:36:33.981 | 2013-11-22 05:32:34,238 using kex 
diffie-hellman-group1-sha1; server key type ssh-rsa; cipher: local aes128-ctr, 
remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local 
none, remote none
  2013-11-22 05:36:33.981 | 2013-11-22 05:32:34,433 Switch to new keys ...
  2013-11-22 05:36:33.982 | 2013-11-22 05:32:34,434 Adding ssh-rsa host key for 
172.24.4.227: 189c16acb93fe44ae975e1c653f1856c
  2013-11-22 05:36:33.982 | 2013-11-22 05:32:34,434 Trying SSH key 
9a9afe52a9485c15495a59b94ebca6b6
  2013-11-22 05:36:33.982 | 2013-11-22 05:32:34,437 userauth is OK
  2013-11-22 05:36:33.982 | 2013-11-22 05:32:35,104 Authentication (publickey) 
failed.
  2013-11-22 05:36:33.982 | 2013-11-22 05:32:36,693 starting thread (client 
mode): 0x52f9190L
  2013-11-22 05:36:33.982 | 2013-11-22 05:32:36,697 Connected (version 2.0, 
client dropbear_2012.55)
  2013-11-22 05:36:33.983 | 2013-11-22 05:32:36,699 kex 
algos:['diffie-hellman-group1-sha1', 'diffie-hellman-group14-sha1'] server 
key:['ssh-rsa', 'ssh-dss'] client encrypt:['aes128-ctr', '3des-ctr', 
'aes256-ctr', 'aes128-cbc', '3des-cbc', 'aes256-cbc', 'twofish256-cbc', 
'twofish-cbc', 'twofish128-cbc'] server encrypt:['aes128-ctr', '3des-ctr', 
'aes256-ctr', 'aes128-cbc', '3des-cbc', 'aes256-cbc', 'twofish256-cbc', 
'twofish-cbc', 'twofish128-cbc'] client mac:['hmac-sha1-96', 'hmac-sha1', 
'hmac-md5'] server mac:['hmac-sha1-96', 'hmac-sha1', 'hmac-md5'] client 
compress:['none'] server compress:['none'] client lang:[''] server lang:[''] 
kex follows?False
  2013-11-22 05:36:33.983 | 2013-11-22 05:32:36,699 Ciphers agreed: 
local=aes128-ctr, remote=aes128-ctr
  2013-11-22 05:36:33.983 | 2013-11-22 05:32:36,699 using kex 
diffie-hellman-group1-sha1; server key type ssh-rsa; cipher: local aes128-ctr, 
remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local 
none, remote none
  2013-11-22 05:36:33.983 | 2013-11-22 05:32:36,903 Switch to new keys ...
  2013-11-22 05:36:33.983 | 2013-11-22 05:32:36,904 Trying SSH key 
9a9afe52a9485c15495a59b94ebca6b6
  2013-11-22 05:36:33.984 | 2013-11-22 05:32:36,906 userauth is OK
  2013-11-22 05:36:33.984 | 2013-11-22 05:32:37,438 Authentication (publickey) 
failed.
  2013-11-22 05:36:33.984 | 2013-11-22 05:32:39,035 starting thread (client 
mode): 0x24c62d0L
  2013-11-22 05:36:33.984 | 2013-11-22 05:32:39,043 Connected (version 2.0, 
client dropbear_2012.55)
  

[Yahoo-eng-team] [Bug 1412348] [NEW] Missing index on allocated

2015-01-19 Thread Attila Fazekas
Public bug reported:

ml2_vxlan_allocations, ml2_gre_allocations, ml2_vlan_allocations tables
has field named 'allocated'.

These tables are frequently used by queries similar to:
SELECT ml2_vxlan_allocations.vxlan_vni AS ml2_vxlan_allocations_vxlan_vni, 
ml2_vxlan_allocations.allocated AS ml2_vxlan_allocations_allocated FROM 
ml2_vxlan_allocations WHERE ml2_vxlan_allocations.allocated = 0  LIMIT 1;

This select runs without an index, which causes very poor performance.
If the transaction involved in the allocation takes a long time, parallel
allocations can fail and have to be retried.

These tables need an index on the 'allocated' field.

In the ml2_vlan_allocations case, consider creating a composite index on
(physical_network, allocated).
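
A hedged sketch of what the migration could look like (alembic, as used by
neutron; the revision bookkeeping and index names are made up for
illustration):

    from alembic import op

    def upgrade():
        op.create_index('ix_ml2_vxlan_allocations_allocated',
                        'ml2_vxlan_allocations', ['allocated'])
        op.create_index('ix_ml2_gre_allocations_allocated',
                        'ml2_gre_allocations', ['allocated'])
        # composite index for the vlan table, as suggested above
        op.create_index('ix_ml2_vlan_allocations_phys_net_allocated',
                        'ml2_vlan_allocations',
                        ['physical_network', 'allocated'])

    def downgrade():
        op.drop_index('ix_ml2_vxlan_allocations_allocated',
                      'ml2_vxlan_allocations')
        op.drop_index('ix_ml2_gre_allocations_allocated',
                      'ml2_gre_allocations')
        op.drop_index('ix_ml2_vlan_allocations_phys_net_allocated',
                      'ml2_vlan_allocations')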

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  ml2_vxlan_allocations, ml2_gre_allocations, ml2_vlan_allocations tables
- has field named 'allocation'.
+ has field named 'allocated'.
  
- These tables  frequently used by similar queries: 
+ These tables  frequently used by similar queries:
  SELECT ml2_vxlan_allocations.vxlan_vni AS ml2_vxlan_allocations_vxlan_vni, 
ml2_vxlan_allocations.allocated AS ml2_vxlan_allocations_allocated FROM 
ml2_vxlan_allocations WHERE ml2_vxlan_allocations.allocated = 0  LIMIT 1;
  
  This does select without an index, which causes very poor performance.
  If Transaction which involved in allocation took long time, in parallel can 
lead to an allocation failure and retry.
  
  These tables needs to have index on the 'allocated' field.
  
  In the ml2_vlan_allocations table case consider creating an index on
  (physical_network, allocation) together.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1412348

Title:
  Missing index on allocated

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  ml2_vxlan_allocations, ml2_gre_allocations, ml2_vlan_allocations
  tables has field named 'allocated'.

  These tables are frequently used by queries similar to:
  SELECT ml2_vxlan_allocations.vxlan_vni AS ml2_vxlan_allocations_vxlan_vni, 
ml2_vxlan_allocations.allocated AS ml2_vxlan_allocations_allocated FROM 
ml2_vxlan_allocations WHERE ml2_vxlan_allocations.allocated = 0  LIMIT 1;

  This select runs without an index, which causes very poor performance.
  If the transaction involved in the allocation takes a long time, parallel
  allocations can fail and have to be retried.

  These tables need an index on the 'allocated' field.

  In the ml2_vlan_allocations case, consider creating a composite index on
  (physical_network, allocated).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1412348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412394] [NEW] schema-image.json should contain the hypervisor specific parameters

2015-01-19 Thread Attila Fazekas
Public bug reported:

The xen driver uses the os_type property, which is not in 
https://github.com/openstack/glance/blob/master/etc/schema-image.json.
libvirt has several additional properties 
https://wiki.openstack.org/wiki/LibvirtCustomHardware .

The default schema specification should not restrict these frequently
used parameters, or any parameter which nova can understand.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1412394

Title:
  schema-image.json should contain the hypervisor specific parameters

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The xen driver uses the os_type property, which is not in 
https://github.com/openstack/glance/blob/master/etc/schema-image.json.
  libvirt has several additional properties 
https://wiki.openstack.org/wiki/LibvirtCustomHardware .

  The default schema specification should not restrict these frequently
  used parameters, or any parameter which nova can understand.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1412394/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1410854] [NEW] NoNetworkFoundInMaximumAllowedAttempts with multipe API workers

2015-01-14 Thread Attila Fazekas
Public bug reported:

When neutron is configured with the option below, a regular devstack job fails
to create several networks during a regular tempest run.

iniset /etc/neutron/neutron.conf DEFAULT api_workers 4

http://logs.openstack.org/82/140482/2/check/check-tempest-dsvm-neutron-
full/95aea86/logs/screen-q-svc.txt.gz?level=AUDIT#_2015-01-14_13_56_07_268

2015-01-14 13:56:07.267 2814 WARNING neutron.plugins.ml2.drivers.helpers 
[req-f6402b6d-de49-4675-a766-b45a6bc99061 None] Allocate vxlan segment from 
pool failed after 10 failed attempts
2015-01-14 13:56:07.268 2814 ERROR neutron.api.v2.resource 
[req-f6402b6d-de49-4675-a766-b45a6bc99061 None] create failed
2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 83, in resource
2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 451, in create
2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 502, in 
create_network
2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource tenant_id)
2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/managers.py, line 161, in 
create_network_segments
2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource segment = 
self.allocate_tenant_segment(session)
2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/managers.py, line 190, in 
allocate_tenant_segment
2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource segment = 
driver.obj.allocate_tenant_segment(session)
2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/type_tunnel.py, line 150, 
in allocate_tenant_segment
2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource alloc = 
self.allocate_partially_specified_segment(session)
2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/helpers.py, line 144, in 
allocate_partially_specified_segment
2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource raise 
exc.NoNetworkFoundInMaximumAllowedAttempts()
2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource 
NoNetworkFoundInMaximumAllowedAttempts: Unable to create the network. No 
available network found in maximum allowed attempts.
2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1410854

Title:
  NoNetworkFoundInMaximumAllowedAttempts with multipe API workers

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When neutron is configured with the option below, a regular devstack job
  fails to create several networks during a regular tempest run.

  iniset /etc/neutron/neutron.conf DEFAULT api_workers 4

  http://logs.openstack.org/82/140482/2/check/check-tempest-dsvm-
  neutron-
  full/95aea86/logs/screen-q-svc.txt.gz?level=AUDIT#_2015-01-14_13_56_07_268

  2015-01-14 13:56:07.267 2814 WARNING neutron.plugins.ml2.drivers.helpers 
[req-f6402b6d-de49-4675-a766-b45a6bc99061 None] Allocate vxlan segment from 
pool failed after 10 failed attempts
  2015-01-14 13:56:07.268 2814 ERROR neutron.api.v2.resource 
[req-f6402b6d-de49-4675-a766-b45a6bc99061 None] create failed
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 83, in resource
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 451, in create
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 502, in 
create_network
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource tenant_id)
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/managers.py, line 161, in 
create_network_segments
  2015-01-14 13:56:07.268 2814 TRACE neutron.api.v2.resource segment = 
self.allocate_tenant_segment(session)
  2015-01-14 

[Yahoo-eng-team] [Bug 1410172] [NEW] 500 on deleting a non-existing ec2 security group

2015-01-13 Thread Attila Fazekas
Public bug reported:

Looks like there are 2 reasons for seeing 500 errors when deleting a
non-existing ec2 security group:

*  Unexpected TypeError raised: expected string or buffer
*  Unexpected UnboundLocalError raised: local variable 'group' referenced 
before assignment

Since it is a server error, euca2ools automatically and silently
retries the request multiple (unlimited?) times.

1. source the ec2 credentials:
$source /opt/stack/new/devstack/accrc/demo/demo

2.a:
$  euca-delete-group --debug 42
...
2015-01-13 09:57:02,907 euca2ools [DEBUG]:Received 500 response.  Retrying in 
1.9 seconds

2.b:
$  euca-delete-group --debug fortytwo

Relevant lines from the n-api log (it does not contain a full trace,
though for this kind of error it should):

action: DeleteSecurityGroup __call__ 
/opt/stack/new/nova/nova/api/ec2/__init__.py:379
arg: GroupName  val: fortytwo __call__ 
/opt/stack/new/nova/nova/api/ec2/__init__.py:382
Neutron security group fortytwo not found get 
/opt/stack/new/nova/nova/network/security_group/neutron_driver.py:138
Unexpected UnboundLocalError raised: local variable 'group' referenced before 
assignment
EC2 error response: UnboundLocalError: Unknown error occurred. 
ec2_error_response /opt/stack/new/nova/nova/api/ec2/faults.py:29

Note: the issue was seen in a neutron setup.
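
The UnboundLocalError is the classic pattern sketched below (illustrative
only, not the nova code): the variable is only bound on the success path, so
the not-found path blows up instead of returning a clean EC2 error.

    def find_group_broken(name, groups):
        for g in groups:
            if g['name'] == name:
                group = g
                break
        # if nothing matched, 'group' was never assigned
        return group        # -> UnboundLocalError for unknown group names

    def find_group_fixed(name, groups):
        group = next((g for g in groups if g['name'] == name), None)
        if group is None:
            # map this to a proper 4xx EC2 error instead of a 500
            raise LookupError('Security group %s not found' % name)
        return group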

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ec2

** Tags added: ec2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1410172

Title:
  500 on deleting a non-existing ec2 security group

Status in OpenStack Compute (Nova):
  New

Bug description:
  Looks like there are 2 reasons for seeing 500 errors when deleting a
  non-existing ec2 security group:

  *  Unexpected TypeError raised: expected string or buffer
  *  Unexpected UnboundLocalError raised: local variable 'group' referenced 
before assignment

  Since it is a server error, euca2ools automatically and silently
  retries the request multiple (unlimited?) times.

  1. source the ec2 credentials:
  $source /opt/stack/new/devstack/accrc/demo/demo

  2.a:
  $  euca-delete-group --debug 42
  ...
  2015-01-13 09:57:02,907 euca2ools [DEBUG]:Received 500 response.  Retrying in 
1.9 seconds

  2.b:
  $  euca-delete-group --debug fortytwo

  Relevant lines from the n-api log (it does not contain a full trace,
  though for this kind of error it should):

  action: DeleteSecurityGroup __call__ 
/opt/stack/new/nova/nova/api/ec2/__init__.py:379
  arg: GroupName  val: fortytwo __call__ 
/opt/stack/new/nova/nova/api/ec2/__init__.py:382
  Neutron security group fortytwo not found get 
/opt/stack/new/nova/nova/network/security_group/neutron_driver.py:138
  Unexpected UnboundLocalError raised: local variable 'group' referenced before 
assignment
  EC2 error response: UnboundLocalError: Unknown error occurred. 
ec2_error_response /opt/stack/new/nova/nova/api/ec2/faults.py:29

  Note: the issue was seen in a neutron setup.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1410172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408987] Re: tempest failing with boto==2.35.0

2015-01-09 Thread Attila Fazekas
Change for updating the global requirements:
https://review.openstack.org/#/c/146049/.

Adding nova for implementing HMAC-V4 (AWS Signature Version 4) support.
Nova expects a header named 'Signature', which is now part of the
'Authorization' header.
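
A hedged sketch of the kind of parsing nova would need (not the actual nova or
boto code; it only extracts the signature from an Authorization header shaped
like the one in the debug output below):

    def signature_from_authorization(header):
        # header: 'AWS4-HMAC-SHA256 Credential=...,SignedHeaders=...,Signature=<hex>'
        for part in header.split(','):
            part = part.strip()
            if part.startswith('Signature='):
                return part[len('Signature='):]
        return None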

The issue can be reproduced with euca2ools:

# source /opt/stack/new/devstack/accrc/demo/demo
# euca-describe-keypairs --debug
2015-01-09 13:14:32,349 euca2ools [DEBUG]:Using access key provided by client.
2015-01-09 13:14:32,350 euca2ools [DEBUG]:Using secret key provided by client.
2015-01-09 13:14:32,351 euca2ools [DEBUG]:Method: POST
2015-01-09 13:14:32,351 euca2ools [DEBUG]:Path: /services/Cloud/
2015-01-09 13:14:32,352 euca2ools [DEBUG]:Data: 
2015-01-09 13:14:32,352 euca2ools [DEBUG]:Headers: {}
2015-01-09 13:14:32,352 euca2ools [DEBUG]:Host: 172.16.40.26
2015-01-09 13:14:32,353 euca2ools [DEBUG]:Port: 8773
2015-01-09 13:14:32,353 euca2ools [DEBUG]:Params: {'Action': 
'DescribeKeyPairs', 'Version': '2010-08-31'}
2015-01-09 13:14:32,354 euca2ools [DEBUG]:establishing HTTP connection: 
kwargs={'port': 8773, 'timeout': 70}
2015-01-09 13:14:32,354 euca2ools [DEBUG]:Token: None
2015-01-09 13:14:32,355 euca2ools [DEBUG]:CanonicalRequest:
POST
/services/Cloud/

host:172.16.40.26:8773
x-amz-date:20150109T131432Z

host;x-amz-date
93691be75657638bb0188c9dd56303b89bb2818598871011d73eee11e14e0cec
2015-01-09 13:14:32,356 euca2ools [DEBUG]:StringToSign:
AWS4-HMAC-SHA256
20150109T131432Z
20150109/16/172/aws4_request
f8748433ff623a4e9cbd616ef63ebe6e506b36f1fd341a41983c955e59b82de7
2015-01-09 13:14:32,357 euca2ools [DEBUG]:Signature:
2dfa2098a8b893cec02f42b0e2abbe7ae5c6077ca1e5d8e1426cad5621e93e24
2015-01-09 13:14:32,357 euca2ools [DEBUG]:Final headers: {'Content-Length': 
'42', 'User-Agent': 'Boto/2.35.0 Python/2.7.5 Linux/3.17.7-200.fc20.x86_64', 
'Host': '172.16.40.26:8773', 'X-Amz-Date': '20150109T131432Z', 'Content-Type': 
'application/x-www-form-urlencoded; charset=UTF-8', 'Authorization': 
'AWS4-HMAC-SHA256 
Credential=6d8332aeeeb94e11bb23d4fc09c0a0f3/20150109/16/172/aws4_request,SignedHeaders=host;x-amz-date,Signature=2dfa2098a8b893cec02f42b0e2abbe7ae5c6077ca1e5d8e1426cad5621e93e24'}
send: 'POST /services/Cloud/ HTTP/1.1\r\nAccept-Encoding: 
identity\r\nContent-Length: 42\r\nUser-Agent: Boto/2.35.0 Python/2.7.5 
Linux/3.17.7-200.fc20.x86_64\r\nHost: 172.16.40.26:8773\r\nX-Amz-Date: 
20150109T131432Z\r\nContent-Type: application/x-www-form-urlencoded; 
charset=UTF-8\r\nAuthorization: AWS4-HMAC-SHA256 
Credential=6d8332aeeeb94e11bb23d4fc09c0a0f3/20150109/16/172/aws4_request,SignedHeaders=host;x-amz-date,Signature=2dfa2098a8b893cec02f42b0e2abbe7ae5c6077ca1e5d8e1426cad5621e93e24\r\n\r\nAction=DescribeKeyPairsVersion=2010-08-31'
reply: 'HTTP/1.1 400 Bad Request\r\n'
header: Content-Type: text/xml
header: Content-Length: 203
header: Date: Fri, 09 Jan 2015 13:14:32 GMT
2015-01-09 13:14:32,365 euca2ools [DEBUG]:Response headers: [('date', 'Fri, 09 
Jan 2015 13:14:32 GMT'), ('content-length', '203'), ('content-type', 
'text/xml')]
2015-01-09 13:14:32,366 euca2ools [DEBUG]:<?xml version="1.0"?>
<Response><Errors><Error><Code>AuthFailure</Code><Message>Signature not 
provided</Message></Error></Errors><RequestID>req-5e70be08-7c34-4cf7-84f3-e907a7c4765c</RequestID></Response>
2015-01-09 13:14:32,366 euca2ools [ERROR]:400 Bad Request
2015-01-09 13:14:32,366 euca2ools [ERROR]:<?xml version="1.0"?>
<Response><Errors><Error><Code>AuthFailure</Code><Message>Signature not 
provided</Message></Error></Errors><RequestID>req-5e70be08-7c34-4cf7-84f3-e907a7c4765c</RequestID></Response>
AuthFailure: Signature not provided


** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408987

Title:
  tempest failing with boto==2.35.0

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  New

Bug description:
  logstash: message: 'Signature not provided' and message: 'AuthFailure'

  Gate permanently failing since the boto 2.35.0 release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408987/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385225] [NEW] typo in the policy.json rule_admin_api

2014-10-24 Thread Attila Fazekas
Public bug reported:

http://logstash.openstack.org/#eyJzZWFyY2giOiInRmFpbGVkIHRvIHVuZGVyc3RhbmQgcnVsZSBydWxlX2FkbWluX2FwaScgQU5EIHRhZ3M6XCJzY3JlZW4tbi1jcHUudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImFsbCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTQxNDg2Nzk5MjMsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

[-] Failed to understand rule rule_admin_api
2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy Traceback 
(most recent call last):
2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy   File 
/opt/stack/new/nova/nova/openstack/common/policy.py, line 533, in _parse_check
2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy kind, 
match = rule.split(':', 1)
2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy ValueError: 
need more than 1 value to unpack
2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy 

https://github.com/openstack/nova/blob/e53cb39c298d84a6a8c505c70bf7ceff43173947/etc/nova/policy.json#L165
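
The parse failure in isolation (plain Python; the fixed spelling is presumably
'rule:admin_api', but that is an inference from the rule syntax, not from the
patch):

    kind, match = 'rule:admin_api'.split(':', 1)      # -> ('rule', 'admin_api')
    try:
        kind, match = 'rule_admin_api'.split(':', 1)  # no colon to split on
    except ValueError as e:
        print(e)            # same error as in the traceback above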

** Affects: nova
 Importance: Undecided
 Assignee: Attila Fazekas (afazekas)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1385225

Title:
  typo in the policy.json  rule_admin_api

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiInRmFpbGVkIHRvIHVuZGVyc3RhbmQgcnVsZSBydWxlX2FkbWluX2FwaScgQU5EIHRhZ3M6XCJzY3JlZW4tbi1jcHUudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImFsbCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTQxNDg2Nzk5MjMsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

[-] Failed to understand rule rule_admin_api
  2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy Traceback 
(most recent call last):
  2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy   File 
/opt/stack/new/nova/nova/openstack/common/policy.py, line 533, in _parse_check
  2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy kind, 
match = rule.split(':', 1)
  2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy ValueError: 
need more than 1 value to unpack
  2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy 

  
https://github.com/openstack/nova/blob/e53cb39c298d84a6a8c505c70bf7ceff43173947/etc/nova/policy.json#L165

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1385225/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385257] [NEW] Scary Cannot add or update a child row: a foreign key constraint fails ERROR

2014-10-24 Thread Attila Fazekas
Public bug reported:

I see a similar message in non-DVR setups, as mentioned in #1371696.

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQ2Fubm90IGFkZCBvciB1cGRhdGUgYSBjaGlsZCByb3dcXDogYSBmb3JlaWduIGtleSBjb25zdHJhaW50IGZhaWxzXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTQxNTQ0NTgzNDcsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=


2014-10-24 12:01:09.517 1409 TRACE neutron.agent.l3_agent RemoteError: Remote 
error: DBReferenceError (IntegrityError) (1452, 'Cannot add or update a child 
row: a foreign key constraint fails (`neutron`.`routerl3agentbindings`, 
CONSTRAINT `routerl3agentbindings_ibfk_2` FOREIGN KEY (`router_id`) REFERENCES 
`routers` (`id`) ON DELETE CASCADE)') 'INSERT INTO routerl3agentbindings 
(router_id, l3_agent_id) VALUES (%s, %s)' 
('63e69dd6-2964-42a2-ad67-9e7048c044e8', '24b9520c-0543-4968-9b2c-f4e86c5d26e4')

The ERROR is most likely harmless, but it is very annoying when searching for
a real issue.
Only a shorter message with a lower log level (DEBUG) should be logged on a
concurrent delete event.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1385257

Title:
  Scary Cannot add or update a child row: a foreign key constraint
  fails ERROR

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I see a similar message in non-DVR setups, as mentioned in #1371696.

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQ2Fubm90IGFkZCBvciB1cGRhdGUgYSBjaGlsZCByb3dcXDogYSBmb3JlaWduIGtleSBjb25zdHJhaW50IGZhaWxzXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTQxNTQ0NTgzNDcsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

  
  2014-10-24 12:01:09.517 1409 TRACE neutron.agent.l3_agent RemoteError: Remote 
error: DBReferenceError (IntegrityError) (1452, 'Cannot add or update a child 
row: a foreign key constraint fails (`neutron`.`routerl3agentbindings`, 
CONSTRAINT `routerl3agentbindings_ibfk_2` FOREIGN KEY (`router_id`) REFERENCES 
`routers` (`id`) ON DELETE CASCADE)') 'INSERT INTO routerl3agentbindings 
(router_id, l3_agent_id) VALUES (%s, %s)' 
('63e69dd6-2964-42a2-ad67-9e7048c044e8', '24b9520c-0543-4968-9b2c-f4e86c5d26e4')

  The ERROR is most likely harmless, but it is very annoying when searching
  for a real issue.
  Only a shorter message with a lower log level (DEBUG) should be logged on a
  concurrent delete event.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1385257/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385266] [NEW] Too many 'Enforcing rule' logged on the gate

2014-10-24 Thread Attila Fazekas
Public bug reported:

After #1356679 the debug logging became too verbose.
Logging the policy rule checking is useful when you are editing the 
policy.json, but for general usage it is too verbose.

http://logs.openstack.org/53/130753/1/check/check-tempest-dsvm-neutron-
full/4ad9772/logs/screen-q-svc.txt.gz#_2014-10-24_11_44_41_049

$ wget 
http://logs.openstack.org/53/130753/1/check/check-tempest-dsvm-neutron-full/4ad9772/logs/screen-q-svc.txt.gz
$ wc -l screen-q-svc.txt.gz
94283 screen-q-svc.txt.gz
$ grep 'Enforcing rule' screen-q-svc.txt.gz | wc -l
25320

26.85% of the log lines contain 'Enforcing rule'.

These  'Enforcing rule' log messages should be disabled by default (even
with debug=True).

Note:
Maybe 'default_log_levels' can be used to make it less verbose by default.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1385266

Title:
  Too many 'Enforcing rule' logged on the gate

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  After #1356679 the debug logging became too verbose.
  Logging the policy rule checking is useful when you are editing the 
policy.json, but for general usage it is too verbose.

  http://logs.openstack.org/53/130753/1/check/check-tempest-dsvm-
  neutron-full/4ad9772/logs/screen-q-svc.txt.gz#_2014-10-24_11_44_41_049

  $ wget 
http://logs.openstack.org/53/130753/1/check/check-tempest-dsvm-neutron-full/4ad9772/logs/screen-q-svc.txt.gz
  $ wc -l screen-q-svc.txt.gz
  94283 screen-q-svc.txt.gz
  $ grep 'Enforcing rule' screen-q-svc.txt.gz | wc -l
  25320

  26.85% of the log lines contain 'Enforcing rule'.

  These  'Enforcing rule' log messages should be disabled by default
  (even with debug=True).

  Note:
  Maybe 'default_log_levels' can be used to make it less verbose by default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1385266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1383617] [NEW] SAWarning contradiction IN-predicate on instances.uuid

2014-10-21 Thread Attila Fazekas
Public bug reported:

/usr/lib64/python2.7/site-
packages/sqlalchemy/sql/default_comparator.py:35: SAWarning: The IN-
predicate on instances.uuid was invoked with an empty sequence. This
results in a contradiction, which nonetheless can be expensive to
evaluate.  Consider alternative strategies for improved performance.

The above warning reported in the n-cond (or n-cpu) log when using
SQLAlchemy 0.9.8.

The system ends up issuing the query in vain.

The warning is generated by this code:
https://github.com/openstack/nova/blob/9fd059b938a2acca8bf5d58989c78d834fbb0ad8/nova/compute/manager.py#L696
driver_uuids can be an empty list, and in that case the sql query is not necessary.
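
A hedged sketch of the suggested guard (names are illustrative, not the actual
nova code): return early instead of issuing the contradictory "uuid IN ()"
query.

    def instances_by_uuid(session, model, driver_uuids):
        # `model` stands in for the mapped Instance class
        if not driver_uuids:
            return []
        return (session.query(model)
                       .filter(model.uuid.in_(driver_uuids))
                       .all())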

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1383617

Title:
  SAWarning contradiction IN-predicate on instances.uuid

Status in OpenStack Compute (Nova):
  New

Bug description:
  /usr/lib64/python2.7/site-
  packages/sqlalchemy/sql/default_comparator.py:35: SAWarning: The IN-
  predicate on instances.uuid was invoked with an empty sequence. This
  results in a contradiction, which nonetheless can be expensive to
  evaluate.  Consider alternative strategies for improved performance.

  The above warning reported in the n-cond (or n-cpu) log when using
  SQLAlchemy 0.9.8.

  The system ends up issuing the query in vain.

  The warning is generated by this code:
  https://github.com/openstack/nova/blob/9fd059b938a2acca8bf5d58989c78d834fbb0ad8/nova/compute/manager.py#L696
  driver_uuids can be an empty list, and in that case the sql query is not
  necessary.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1383617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1359808] Re: extended_volumes slows down the nova instance list by 40..50%

2014-10-17 Thread Attila Fazekas
This bug points to the number of queries made; you do not really need to
measure anything to see that doing 4096 queries in a loop is bad compared to
doing only one (or a small number of grouped queries).

for id in ids:
 SELECT attr from table where ID=id;

vs.

 SELECT attr from table where ID in ids;


The MySQL default max query size is 16777216 bytes, so you probably cannot
specify significantly more than 256k uuids in one select statement; the
postgresql limit is bigger.
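
The difference in one illustrative sketch (hypothetical model/session names,
not the nova code): one round trip per instance versus one round trip per
batch of uuids.

    def bdms_per_instance_slow(session, model, instance_uuids):
        # N queries: one SELECT per instance uuid
        return dict((uuid,
                     session.query(model).filter_by(instance_uuid=uuid).all())
                    for uuid in instance_uuids)

    def bdms_per_instance_batched(session, model, instance_uuids, batch=1000):
        # one SELECT per `batch` uuids, grouped in python afterwards
        result = dict((uuid, []) for uuid in instance_uuids)
        for i in range(0, len(instance_uuids), batch):
            chunk = instance_uuids[i:i + batch]
            query = session.query(model).filter(model.instance_uuid.in_(chunk))
            for bdm in query:
                result[bdm.instance_uuid].append(bdm)
        return result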


** Changed in: nova
   Status: Opinion = New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359808

Title:
  extended_volumes slows down the nova instance list by 40..50%

Status in OpenStack Compute (Nova):
  New

Bug description:
  When listing ~4096 instances, the nova API (n-api) service has high CPU(100%) 
 usage because it does individual SELECTs,
  for every server's block_device_mapping. This adds ~20-25 sec to the response 
time.

  Please use more efficient way for getting the block_device_mapping,
  when multiple instance queried.

  This line initiating the individual select:
  
https://github.com/openstack/nova/blob/4b414adce745c07fbf2003ec25a5e554e634c8b7/nova/api/openstack/compute/contrib/extended_volumes.py#L32

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1359808/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382564] [NEW] memcache servicegroup driver does not logs connection issues

2014-10-17 Thread Attila Fazekas
Public bug reported:

servicegroup_driver = mc
memcached_servers = blabla  # blabla does not exists

Neither the n-cpu nor the n-api log indicates any connection issue or gives
any clue that the join was unsuccessful; n-cpu logs the same two DEBUG lines
regardless of success.

The services are reported down, with nova service-list, as expected.
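
A hedged sketch of what the driver could do instead of staying silent (not the
nova code; with python-memcached, Client.set() returns a falsy value when the
store fails, which is enough to warn on):

    import logging
    import memcache

    LOG = logging.getLogger(__name__)

    def report_state(servers, topic, host, state, ttl=60):
        mc = memcache.Client(list(servers))
        key = '%s:%s' % (topic, host)
        if not mc.set(key, state, time=ttl):
            LOG.warning('servicegroup report for %s failed via memcached %s',
                        key, servers)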

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382564

Title:
  memcache servicegroup driver does not logs connection issues

Status in OpenStack Compute (Nova):
  New

Bug description:
  servicegroup_driver = mc
  memcached_servers = blabla  # blabla does not exists

  Neither the n-cpu nor the n-api log indicates any connection issue or gives
  any clue that the join was unsuccessful; n-cpu logs the same two DEBUG lines
  regardless of success.

  The services are reported down, with nova service-list, as expected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1382564/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382568] [NEW] get_multi not used for get_all in the mc servicegroup driver

2014-10-17 Thread Attila Fazekas
Public bug reported:

The MemcachedDriver get_all method calls is_up for every record, requesting a
single key each time, instead of using the more efficient get_multi
https://github.com/linsomniac/python-memcached/blob/master/memcache.py#L1049
which is able to retrieve multiple records with a single query.
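
A hedged sketch of the suggested batching (the key format is an assumption,
not necessarily what the mc driver uses):

    import memcache

    def get_all_alive(hosts, servers=('127.0.0.1:11211',)):
        mc = memcache.Client(list(servers))
        keys = ['compute:%s' % host for host in hosts]
        found = mc.get_multi(keys)          # one round trip for all hosts
        return dict((host, ('compute:%s' % host) in found) for host in hosts)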

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382568

Title:
  get_multi not used for get_all in the mc servicegroup driver

Status in OpenStack Compute (Nova):
  New

Bug description:
  The MemcachedDriver get_all method calls is_up for every record, requesting
  a single key each time, instead of using the more efficient get_multi
  https://github.com/linsomniac/python-memcached/blob/master/memcache.py#L1049
  which is able to retrieve multiple records with a single query.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1382568/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342961] Re: Exception during message handling: Pool FOO could not be found

2014-10-17 Thread Attila Fazekas
** Changed in: neutron
   Status: Expired = New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342961

Title:
  Exception during message handling: Pool  FOO could not be found

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The $subject style exception appears in both successful and failed jobs.

  message: Exception during message handling AND message:Pool AND
  message:could not be found AND filename:logs/screen-q-svc.txt

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkV4Y2VwdGlvbiBkdXJpbmcgbWVzc2FnZSBoYW5kbGluZ1wiIEFORCBtZXNzYWdlOlwiUG9vbFwiIEFORCBtZXNzYWdlOlwiY291bGQgbm90IGJlIGZvdW5kXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tcS1zdmMudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNTUzMzU3ODE2NCwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

[req-201dcd14-dc9d-4fb5-8eb5-c66c35991cb3 ] Exception during message 
handling: Pool 7fa63738-9030-4136-9b4e-9eb7ffb79f68 could not be found
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/services/loadbalancer/drivers/common/agent_driver_base.py,
 line 232, in update_pool_stats
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
self.plugin.update_pool_stats(context, pool_id, data=stats)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/db/loadbalancer/loadbalancer_db.py, line 512, 
in update_pool_stats
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher pool_db 
= self._get_resource(context, Pool, pool_id)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/db/loadbalancer/loadbalancer_db.py, line 218, 
in _get_resource
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher raise 
loadbalancer.PoolNotFound(pool_id=id)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
PoolNotFound: Pool 7fa63738-9030-4136-9b4e-9eb7ffb79f68 could not be found
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1342961/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342961] Re: Exception during message handling: Pool FOO could not be found

2014-10-17 Thread Attila Fazekas
Restoring it to New since it still happens very frequently. Tempest uses just
4 clients in parallel and spends most of its time sleeping.

Is there any way to make neutron handle the load, for example by
increasing the number of workers?

** Changed in: neutron
   Status: New = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342961

Title:
  Exception during message handling: Pool  FOO could not be found

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  The $subject style exception appears in both successful and failed jobs.

  message: Exception during message handling AND message:Pool AND
  message:could not be found AND filename:logs/screen-q-svc.txt

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkV4Y2VwdGlvbiBkdXJpbmcgbWVzc2FnZSBoYW5kbGluZ1wiIEFORCBtZXNzYWdlOlwiUG9vbFwiIEFORCBtZXNzYWdlOlwiY291bGQgbm90IGJlIGZvdW5kXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tcS1zdmMudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNTUzMzU3ODE2NCwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

[req-201dcd14-dc9d-4fb5-8eb5-c66c35991cb3 ] Exception during message 
handling: Pool 7fa63738-9030-4136-9b4e-9eb7ffb79f68 could not be found
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/services/loadbalancer/drivers/common/agent_driver_base.py,
 line 232, in update_pool_stats
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
self.plugin.update_pool_stats(context, pool_id, data=stats)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/db/loadbalancer/loadbalancer_db.py, line 512, 
in update_pool_stats
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher pool_db 
= self._get_resource(context, Pool, pool_id)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/db/loadbalancer/loadbalancer_db.py, line 218, 
in _get_resource
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher raise 
loadbalancer.PoolNotFound(pool_id=id)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
PoolNotFound: Pool 7fa63738-9030-4136-9b4e-9eb7ffb79f68 could not be found
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1342961/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382153] [NEW] n-cond should not join the servicegroup on all workers

2014-10-16 Thread Attila Fazekas
Public bug reported:

All nova-conductor worker processes attempt to join the servicegroup on the
same host, which does not seem required.
If you have 48 conductor workers on a node, it tries to maintain the
membership with all 48 workers.

Since the workers are started almost at the same time, this means a burst of
48 update attempts close to each other.

The situation is even worse with the zk driver: it does not work with multiple
workers (> 1), because every worker thread inherits the same zookeeper
connection from its parent. (4096 connections are allowed from the same ip on
my zk servers.)
(The api service does not do status reports, so it can work with multiple
workers.)

The `lsof -P | grep cond | grep 2181` output indicates that all conductor
workers use the same tcp source port -- the same inherited socket.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382153

Title:
  n-cond should not join the servicegroup on all workers

Status in OpenStack Compute (Nova):
  New

Bug description:
  All nova-conductor worker processes attempt to join the servicegroup on the
  same host, which does not seem required.
  If you have 48 conductor workers on a node, it tries to maintain the
  membership with all 48 workers.

  Since the workers are started almost at the same time, this means a burst
  of 48 update attempts close to each other.

  The situation is even worse with the zk driver: it does not work with
  multiple workers (> 1), because every worker thread inherits the same
  zookeeper connection from its parent. (4096 connections are allowed from
  the same ip on my zk servers.)
  (The api service does not do status reports, so it can work with multiple
  workers.)

  The `lsof -P | grep cond | grep 2181` output indicates that all conductor
  workers use the same tcp source port -- the same inherited socket.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1382153/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377203] [NEW] Allow multiple multicast groups for handling the BUM traffic in tenant networks with vxlan

2014-10-03 Thread Attila Fazekas
Public bug reported:

Neutron at the moment has very limited capabilities for handling in-tenant
network multicast/unknown/broadcast (BUM) traffic.

Not just the ARP/DHCP traffic needs to be considered; for example,
applications like Infinispan can and prefer to use multicast for
communication.

vxlan is designed to be usable together with udp multicast, so the sender node
does not have to flood a high number of unicast packets to every possible node
on every BUM frame.
The physical switches/routers nowadays are able to handle even 32k multicast
groups.

If we have more virtual networks than the switch is able to handle, several
networks could share the same mcast group
(mcast_addr[available_mcast_addrs modulo VNI]).

Later this could be made smarter or could be combined with other packet
replication strategies.
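
A tiny sketch of the group-sharing idea described above; the address range and
group count are examples, not recommendations:

    import socket
    import struct

    def mcast_group_for_vni(vni, base='239.1.0.0', group_count=1024):
        # map a VXLAN VNI onto one of `group_count` shared multicast groups
        base_int = struct.unpack('!I', socket.inet_aton(base))[0]
        return socket.inet_ntoa(struct.pack('!I', base_int + (vni % group_count)))

    # networks whose VNIs differ by a multiple of group_count share a group
    assert mcast_group_for_vni(1) == mcast_group_for_vni(1025)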

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1377203

Title:
  Allow multiple multicast groups for handling the BUM traffic in
  tenant networks with vxlan

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Neutron at the moment has very limited capabilities for handling in-tenant
  network multicast/unknown/broadcast (BUM) traffic.

  Not just the ARP/DHCP traffic needs to be considered; for example,
  applications like Infinispan can and prefer to use multicast for
  communication.

  vxlan is designed to be usable together with udp multicast, so the sender
  node does not have to flood a high number of unicast packets to every
  possible node on every BUM frame.
  The physical switches/routers nowadays are able to handle even 32k multicast
  groups.

  If we have more virtual networks than the switch is able to handle, several
  networks could share the same mcast group
  (mcast_addr[available_mcast_addrs modulo VNI]).

  Later this could be made smarter or could be combined with other packet
  replication strategies.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1377203/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357677] Re: Instances fail to boot from volume

2014-10-01 Thread Attila Fazekas
** Changed in: nova
   Status: Invalid = New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357677

Title:
  Instances fail to boot from volume

Status in OpenStack Compute (Nova):
  New

Bug description:
  Logstash query for full console outputs which do not contain 'info:
  initramfs loading root from /dev/vda', but do contain the previous boot
  message.

  These issues look like an ssh connectivity issue, but the instance is not
  booted, and it happens regardless of the network type.

  message: Freeing unused kernel memory AND message: Initializing
  cgroup subsys cpuset AND NOT message: initramfs loading root from
  AND tags:console

  49 incidents/week.

  Example console log:
  
http://logs.openstack.org/75/113175/1/gate/check-tempest-dsvm-neutron-full/827c854/console.html.gz#_2014-08-14_11_23_30_120

  It failed when it tried to ssh to the 3rd server.
  WARNING: The console.log contains two instances' serial console output; try 
not to mix them up when reading.

  The fail point in the test code was here:
  
https://github.com/openstack/tempest/blob/b7144eb08175d010e1300e14f4f75d04d9c63c98/tempest/scenario/test_volume_boot_pattern.py#L175

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303625] Re: tempest-dsvm-neutron-heat-slow fails with StackBuildErrorException (security group already exists))

2014-09-11 Thread Attila Fazekas
*** This bug is a duplicate of bug 1194579 ***
https://bugs.launchpad.net/bugs/1194579

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaXMgaW4gQ1JFQVRFX0ZBSUxFRCBzdGF0dXMgZHVlIHRvIFJlc291cmNlIENSRUFURSBmYWlsZWQ6IEJhZFJlcXVlc3Q6IFNlY3VyaXR5IGdyb3VwIGRlZmF1bHQgYWxyZWFkeSBleGlzdHMgZm9yIHByb2plY3QgXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTA0MzcyNDQ1MzJ9

tempest does not create the 'default' security group explicitly; the issue 
is probably somewhere else.
https://bugs.launchpad.net/neutron/+bug/1194579

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: Triaged = Incomplete

** This bug has been marked a duplicate of bug 1194579
   Race condition exists for default security group creation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1303625

Title:
  tempest-dsvm-neutron-heat-slow fails with StackBuildErrorException
  (security group already exists))

Status in Orchestration API (Heat):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  Incomplete

Bug description:
  A StackBuildErrorException is raised because the security group already
  exists:

  http://logs.openstack.org/89/76489/5/check/check-tempest-dsvm-neutron-
  heat-slow/032beb0/console.html

   2014-04-05 08:18:05.239 | 
==
   2014-04-05 08:18:05.240 | FAIL: 
tempest.api.orchestration.stacks.test_server_cfn_init.ServerCfnInitTestJSON.test_can_log_into_created_server[slow]
   2014-04-05 08:18:05.240 | 
tempest.api.orchestration.stacks.test_server_cfn_init.ServerCfnInitTestJSON.test_can_log_into_created_server[slow]
   2014-04-05 08:18:05.240 | 
--
   2014-04-05 08:18:05.240 | _StringException: Empty attachments:
   2014-04-05 08:18:05.240 |   stderr
   2014-04-05 08:18:05.240 |   stdout
   2014-04-05 08:18:05.240 | 
   2014-04-05 08:18:05.240 | pythonlogging:'': {{{ 
   2014-04-05 08:18:05.240 | 2014-04-05 08:13:19,183 Request 
(ServerCfnInitTestJSON:test_can_log_into_created_server): 200 GET 
http://162.242.239.53:8004/v1/db647826b32c490cbdea807a84f41ba4/stacks/heat-217555013/04ed822c-d2f2-4984-9579-c6bf468941b2
 0.042s
   2014-04-05 08:18:05.240 | 2014-04-05 08:13:20,235 Request 
(ServerCfnInitTestJSON:test_can_log_into_created_server): 200 GET 
http://162.242.239.53:8004/v1/db647826b32c490cbdea807a84f41ba4/stacks/heat-217555013/04ed822c-d2f2-4984-9579-c6bf468941b2
 0.049s
   2014-04-05 08:18:05.240 | }}} 
   2014-04-05 08:18:05.240 | 
   2014-04-05 08:18:05.241 | Traceback (most recent call last):
   2014-04-05 08:18:05.241 |   File 
tempest/api/orchestration/stacks/test_server_cfn_init.py, line 64, in 
test_can_log_into_created_server
   2014-04-05 08:18:05.241 | self.client.wait_for_stack_status(sid, 
'CREATE_COMPLETE')
   2014-04-05 08:18:05.241 |   File 
tempest/services/orchestration/json/orchestration_client.py, line 185, in 
wait_for_stack_status
   2014-04-05 08:18:05.241 | 
stack_status_reason=body['stack_status_reason'])
   2014-04-05 08:18:05.241 | StackBuildErrorException: Stack 
heat-217555013/04ed822c-d2f2-4984-9579-c6bf468941b2 is in CREATE_FAILED status 
due to 'Resource CREATE failed: BadRequest: Security group default already 
exists for project db647826b32c490cbdea807a84f41ba4. (HTTP 400) (Request-ID: 
req-23737d4c-85d4-4dff-b6fd-ed0382128c9e)'

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1303625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361631] [NEW] Do not query datetime type fields when they are not needed

2014-08-26 Thread Attila Fazekas
Public bug reported:

Creating a datetime object is more expensive than any other type used in
the database.

Creating the datetime object is especially expensive for MySQL drivers,
because constructing the object from its datetime string representation is
costly.

When listing 4k instances with details without the volumes_extension,
approximately 2 seconds are spent in the MySQL driver, 1 second of which
goes to parsing the datetime values (DateTime_or_None).

The datetime format is only useful when you intend to present the time to
an end user; for the system, the float or integer representations are more
efficient.

* consider changing the store type to float or int
* exclude the datetime fields from the query when they will not be part of an 
api response (see the sketch below)
* remove the datetime fields from the database where they are not really 
needed.
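
A minimal sketch of the "exclude from the query" option, assuming a SQLAlchemy
model similar to nova's instances table (model and column names are
illustrative): deferring the datetime columns keeps them out of the SELECT, so
the driver never parses them unless the attribute is actually accessed.

    from sqlalchemy import Column, DateTime, Integer, String, create_engine
    from sqlalchemy.orm import declarative_base, defer, sessionmaker

    Base = declarative_base()

    class Instance(Base):
        __tablename__ = "instances_example"
        id = Column(Integer, primary_key=True)
        uuid = Column(String(36))
        created_at = Column(DateTime)
        updated_at = Column(DateTime)

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    # The deferred columns are not fetched (and no datetime objects are
    # built) unless created_at/updated_at are touched later.
    instances = (
        session.query(Instance)
        .options(defer(Instance.created_at), defer(Instance.updated_at))
        .all()
    )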

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361631

Title:
  Do not query datetime type fields when they are not needed

Status in OpenStack Compute (Nova):
  New

Bug description:
  Creating a datetime object is more expensive than any other type used
  in the database.

  Creating the datetime object is especially expensive for MySQL
  drivers, because constructing the object from its datetime string
  representation is costly.

  When listing 4k instances with details without the volumes_extension,
  approximately 2 seconds are spent in the MySQL driver, 1 second of
  which goes to parsing the datetime values (DateTime_or_None).

  The datetime format is only useful when you intend to present the time
  to an end user; for the system, the float or integer representations
  are more efficient.

  * consider changing the store type to float or int
  * exclude the datetime fields from the query when they will not be part of 
an api response
  * remove the datetime fields from the database where they are not really 
needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361631/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361190] [NEW] Too much space reserved for tenant_id in the database

2014-08-25 Thread Attila Fazekas
Public bug reported:

Keystone defines the project/user/domain IDs as varchar(64), but neutron
uses varchar(255) on every resource.

But the tenant id actually generated by keystone is 32 characters.

Please change the tenant id lengths to <=64 (>=32).

The record size has an impact on memory usage and on db/disk caching
efficiency.
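
A minimal sketch of what the requested change would look like for one table,
assuming a SQLAlchemy model (table and column names are illustrative, not the
actual neutron schema):

    from sqlalchemy import Column, String
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class Network(Base):
        __tablename__ = "networks_example"
        id = Column(String(36), primary_key=True)
        # varchar(64) matches what keystone itself declares for project
        # ids, instead of the 255-character column reserved today.
        tenant_id = Column(String(64), index=True)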

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361190

Title:
  Too much space reserved for tenant_id in the database

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Keystone defines the project/user/domain IDs as varchar(64), but
  neutron uses varchar(255) on every resource.

  But the tenant id actually generated by keystone is 32 characters.

  Please change the tenant id lengths to <=64 (>=32).

  The record size has an impact on memory usage and on db/disk caching
  efficiency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361190/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361238] [NEW] Too much space reserved for tenant_id/user_id/domain_id in the database

2014-08-25 Thread Attila Fazekas
Public bug reported:

Keystone uses 32-byte/character domain/user/project ids, which contain
the hexadecimal representation of a 128-bit (16-byte) integer.

Please reduce the field size, at least to a 32-byte varchar; it helps the
db use its caches (disk / memory / records per physical sector...) more
efficiently.
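
For reference, a quick standalone check of the 32-character claim (keystone's
default ids are the hex form of a random UUID; this is just an illustration,
not keystone code):

    import uuid

    # 128 bits rendered as 32 hexadecimal characters.
    generated_id = uuid.uuid4().hex
    assert len(generated_id) == 32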

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1361238

Title:
  Too much space reserved for tenant_id/user_id/domain_id in the
  database

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Keystone uses 32-byte/character domain/user/project ids, which
  contain the hexadecimal representation of a 128-bit (16-byte) integer.

  Please reduce the field size, at least to a 32-byte varchar; it helps
  the db use its caches (disk / memory / records per physical
  sector...) more efficiently.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1361238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1359808] [NEW] extended_volumes slows down the nova instance list by 40..50%

2014-08-21 Thread Attila Fazekas
Public bug reported:

When listing ~4096 instances, the nova API (n-api) service has high CPU (100%) 
usage because it does an individual SELECT for every server's 
block_device_mapping.

Please use a more efficient way of getting the block_device_mapping when
multiple instances are queried.

This line initiates the individual select:
https://github.com/openstack/nova/blob/4b414adce745c07fbf2003ec25a5e554e634c8b7/nova/api/openstack/compute/contrib/extended_volumes.py#L32
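
A minimal sketch of the batched alternative, assuming a SQLAlchemy
block_device_mapping table (all names are illustrative, not nova's actual
schema or API): one SELECT with an IN clause for the whole page of servers
instead of one query per instance.

    from collections import defaultdict
    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import declarative_base, sessionmaker

    Base = declarative_base()

    class BlockDeviceMapping(Base):
        __tablename__ = "block_device_mapping_example"
        id = Column(Integer, primary_key=True)
        instance_uuid = Column(String(36), index=True)
        volume_id = Column(String(36))

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    def bdms_by_instance(instance_uuids):
        # Single round trip instead of len(instance_uuids) SELECTs.
        rows = (
            session.query(BlockDeviceMapping)
            .filter(BlockDeviceMapping.instance_uuid.in_(instance_uuids))
            .all()
        )
        grouped = defaultdict(list)
        for row in rows:
            grouped[row.instance_uuid].append(row)
        return grouped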

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359808

Title:
  extended_volumes slows down the nova instance list by 40..50%

Status in OpenStack Compute (Nova):
  New

Bug description:
  When listing ~4096 instances, the nova API (n-api) service has high CPU 
(100%) usage because it does an individual SELECT for every server's 
block_device_mapping.

  Please use a more efficient way of getting the block_device_mapping
  when multiple instances are queried.

  This line initiates the individual select:
  
https://github.com/openstack/nova/blob/4b414adce745c07fbf2003ec25a5e554e634c8b7/nova/api/openstack/compute/contrib/extended_volumes.py#L32

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1359808/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1353939] [NEW] Rescue fails with 'Failed to terminate process' in the n-cpu log

2014-08-07 Thread Attila Fazekas
Public bug reported:

message: Failed to terminate process AND
message:'InstanceNotRescuable' AND message: 'Exception during message
handling' AND tags:screen-n-cpu.txt

The above logstash query reports back only the failed jobs; the 'Failed to 
terminate process' message also appears near other failed rescue tests, but 
tempest does not always report them as an error at the end.

message: Failed to terminate process AND tags:screen-n-cpu.txt

Usual console log:
Details: (ServerRescueTestJSON:test_rescue_unrescue_instance) Server 
0573094d-53da-40a5-948a-747d181462f5 failed to reach RESCUE status and task 
state None within the required time (196 s). Current status: SHUTOFF. Current 
task state: None.

http://logs.openstack.org/82/107982/2/gate/gate-tempest-dsvm-postgres-
full/90726cb/console.html#_2014-08-07_03_50_26_520

Usual n-cpu exception:
http://logs.openstack.org/82/107982/2/gate/gate-tempest-dsvm-postgres-full/90726cb/logs/screen-n-cpu.txt.gz#_2014-08-07_03_32_02_855

2014-08-07 03:32:02.855 ERROR oslo.messaging.rpc.dispatcher 
[req-39ce7a3d-5ceb-41f5-8f9f-face7e608bd1 ServerRescueTestJSON-2035684545 
ServerRescueTestJSON-1017508309] Exception during message handling: Instance 
0573094d-53da-40a5-948a-747d181462f5 cannot be rescued: Driver Error: Failed to 
terminate process 26425 with SIGKILL: Device or resource busy
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 408, in decorated_function
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/exception.py, line 88, in wrapped
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher payload)
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/openstack/common/excutils.py, line 82, in __exit__
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/exception.py, line 71, in wrapped
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 292, in decorated_function
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher pass
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/openstack/common/excutils.py, line 82, in __exit__
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 278, in decorated_function
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 342, in decorated_function
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 320, in decorated_function
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/openstack/common/excutils.py, line 82, in __exit__
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 308, in decorated_function

[Yahoo-eng-team] [Bug 1353962] [NEW] Test job fails with FixedIpLimitExceeded with nova network

2014-08-07 Thread Attila Fazekas
Public bug reported:

VM creation failed due to a `shortage` of fixed IPs.

The fixed range is /24; tempest normally does not keep up more than ~8
VMs.

message: FixedIpLimitExceeded AND filename:logs/screen-n-net.txt

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkZpeGVkSXBMaW1pdEV4Y2VlZGVkXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tbi1uZXQudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDc0MTA0MzE3MTgsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

http://logs.openstack.org/23/112523/1/check/check-tempest-dsvm-postgres-
full/acac6d9/logs/screen-n-cpu.txt.gz#_2014-08-07_09_42_18_481

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

+ VM creation failed due to a `shortage` in fixed IP.
+ 
+ The fixed range is /24, tempest normally does not keeps up more than ~8
+ VM.
+ 
  message: FixedIpLimitExceeded AND filename:logs/screen-n-net.txt
  
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkZpeGVkSXBMaW1pdEV4Y2VlZGVkXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tbi1uZXQudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDc0MTA0MzE3MTgsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=
  
  http://logs.openstack.org/23/112523/1/check/check-tempest-dsvm-postgres-
  full/acac6d9/logs/screen-n-cpu.txt.gz#_2014-08-07_09_42_18_481
- 
- The fixed range is /24, tempest normally does not keeps up more than ~8
- VM.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1353962

Title:
  Test job fails with FixedIpLimitExceeded with nova network

Status in OpenStack Compute (Nova):
  New

Bug description:
  VM creation failed due to a `shortage` of fixed IPs.

  The fixed range is /24; tempest normally does not keep up more than
  ~8 VMs.

  message: FixedIpLimitExceeded AND filename:logs/screen-n-net.txt

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkZpeGVkSXBMaW1pdEV4Y2VlZGVkXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tbi1uZXQudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDc0MTA0MzE3MTgsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

  http://logs.openstack.org/23/112523/1/check/check-tempest-dsvm-
  postgres-
  full/acac6d9/logs/screen-n-cpu.txt.gz#_2014-08-07_09_42_18_481

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1353962/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347833] [NEW] os-server-external-events responses 500 to neutron

2014-07-23 Thread Attila Fazekas
Public bug reported:

The issue is usually not fatal, but nova MUST not return 500 on these
requests.
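
A minimal, hypothetical sketch of the kind of handling implied here (exception
and response names are illustrative, not nova's actual API): translate the
"host not found" condition into a per-event error instead of letting it bubble
up as a 500.

    # Hypothetical handler sketch -- not nova's actual code.
    class HostNotFound(Exception):
        pass

    def create_events(events, lookup_host):
        results = []
        for event in events:
            try:
                host = lookup_host(event["instance_uuid"])
            except HostNotFound:
                # Record a per-event failure instead of failing the
                # whole request with a 500.
                results.append({"event": event, "code": 404})
                continue
            results.append({"event": event, "code": 200, "host": host})
        return results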

http://logs.openstack.org/73/104673/6/check/check-tempest-dsvm-neutron-full/a1ee0ee/logs/screen-n-api.txt.gz#_2014-07-22_17_15_25_548
2014-07-22 17:15:25.547 AUDIT 
nova.api.openstack.compute.contrib.server_external_events 
[req-82b210ca-21a9-4b40-b26a-ff0a4b1a3209 nova service] Create event 
network-vif-plugged:52f5dc11-1e59-48fe-867c-e059b5c652c6 for instance 
f918bf3f-2fca-45fe-bf3c-d52dd98b90b3
2014-07-22 17:15:25.548 ERROR nova.api.openstack 
[req-82b210ca-21a9-4b40-b26a-ff0a4b1a3209 nova service] Caught error: Unable to 
find host for Instance f918bf3f-2fca-45fe-bf3c-d52dd98b90b3
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack Traceback (most recent 
call last):
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack   File 
/opt/stack/new/nova/nova/api/openstack/__init__.py, line 121, in __call__
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack return 
req.get_response(self.application)
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1320, in send
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack application, 
catch_exc_info=False)
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1284, in 
call_application
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py, line 
559, in __call__
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack return 
self._app(env, start_response)
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/routes/middleware.py, line 131, in 
__call__
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack response = 
self.app(environ, start_response)
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack   File 
/opt/stack/new/nova/nova/api/openstack/wsgi.py, line 906, in __call__
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack content_type, body, 
accept)
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack   File 
/opt/stack/new/nova/nova/api/openstack/wsgi.py, line 972, in _process_stack
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack   File 
/opt/stack/new/nova/nova/api/openstack/wsgi.py, line 1056, in dispatch
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack return 
method(req=request, **action_args)
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack   File 
/opt/stack/new/nova/nova/api/openstack/compute/contrib/server_external_events.py,
 line 128, in create
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack accepted)
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack   File 
/opt/stack/new/nova/nova/compute/api.py, line 3141, in external_instance_event
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack context, 
instances_by_host[host], events_by_host[host])
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack   File 
/opt/stack/new/nova/nova/compute/rpcapi.py, line 1025, in 
external_instance_event
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack 
server=_compute_host(None, instances[0]),
2014-07-22 17:15:25.548 26476 TRACE nova.api.openstack   File 
/opt/stack/new/nova/nova/compute/rpcapi.py, line 61, in _compute_host

[Yahoo-eng-team] [Bug 1344030] [NEW] nova throws 500 with TestVolumeBootPatternV2

2014-07-18 Thread Attila Fazekas
Public bug reported:

message: KeyError: 'image_id' AND tags:screen-n-api.txt

The issue first appeared at 2014-07-17T16:36:43.000.

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIktleUVycm9yOiAnaW1hZ2VfaWQnXCIgQU5EIHRhZ3M6XCJzY3JlZW4tbi1hcGkudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDU2ODgxMzUzNTIsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

http://logs.openstack.org/15/101415/2/gate/gate-tempest-dsvm-
neutron/27776fc/console.html#_2014-07-18_02_26_21_397

The exception in the nova log:
http://logs.openstack.org/15/101415/2/gate/gate-tempest-dsvm-neutron/27776fc/logs/screen-n-api.txt.gz?level=TRACE#_2014-07-18_02_26_18_112

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1344030

Title:
  nova throws 500 with TestVolumeBootPatternV2

Status in OpenStack Compute (Nova):
  New

Bug description:
  message: KeyError: 'image_id' AND tags:screen-n-api.txt

  The issue first appeared at 2014-07-17T16:36:43.000.

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIktleUVycm9yOiAnaW1hZ2VfaWQnXCIgQU5EIHRhZ3M6XCJzY3JlZW4tbi1hcGkudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDU2ODgxMzUzNTIsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

  http://logs.openstack.org/15/101415/2/gate/gate-tempest-dsvm-
  neutron/27776fc/console.html#_2014-07-18_02_26_21_397

  The exception in the nova log:
  
http://logs.openstack.org/15/101415/2/gate/gate-tempest-dsvm-neutron/27776fc/logs/screen-n-api.txt.gz?level=TRACE#_2014-07-18_02_26_18_112

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1344030/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342880] [NEW] Exception during message handling: 'NoneType' object is not iterable

2014-07-16 Thread Attila Fazekas
Public bug reported:

q-svc frequently tries to iterate over a NoneType value.

The job can succeed even if this issue happens.

message: Exception during message handling AND message:NoneType AND
message:object is not iterable AND filename:logs/screen-q-svc.txt

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkV4Y2VwdGlvbiBkdXJpbmcgbWVzc2FnZSBoYW5kbGluZ1wiIEFORCBtZXNzYWdlOlwiTm9uZVR5cGVcIiBBTkQgbWVzc2FnZTpcIm9iamVjdCBpcyBub3QgaXRlcmFibGVcIiBBTkQgZmlsZW5hbWU6XCJsb2dzL3NjcmVlbi1xLXN2Yy50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiODY0MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDA1NTMzMDE3NzE4LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

[req-ef892503-3f93-4c68-adeb-17394b66406c ] Exception during message 
handling: 'NoneType' object is not iterable
2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply
2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch
2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/db/l3_rpc_base.py, line 63, in sync_routers
2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher 
self._ensure_host_set_on_ports(context, plugin, host, routers)
2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/db/l3_rpc_base.py, line 76, in 
_ensure_host_set_on_ports
2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher interface)
2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/db/l3_rpc_base.py, line 84, in 
_ensure_host_set_on_port
2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher {'port': 
{portbindings.HOST_ID: host}})
2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 870, in update_port
2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher 
need_notify=need_port_update_notify)
2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 302, in 
_bind_port_if_needed
2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher 
plugin_context, port_id, binding, bind_context)
2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher TypeError: 
'NoneType' object is not iterable
2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342880

Title:
  Exception during message handling: 'NoneType' object is not iterable

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  q-svc frequently tries to iterate over a NoneType value.

  The job can succeed even if this issue happens.

  message: Exception during message handling AND message:NoneType
  AND message:object is not iterable AND
  filename:logs/screen-q-svc.txt

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkV4Y2VwdGlvbiBkdXJpbmcgbWVzc2FnZSBoYW5kbGluZ1wiIEFORCBtZXNzYWdlOlwiTm9uZVR5cGVcIiBBTkQgbWVzc2FnZTpcIm9iamVjdCBpcyBub3QgaXRlcmFibGVcIiBBTkQgZmlsZW5hbWU6XCJsb2dzL3NjcmVlbi1xLXN2Yy50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiODY0MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDA1NTMzMDE3NzE4LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

[req-ef892503-3f93-4c68-adeb-17394b66406c ] Exception during message 
handling: 'NoneType' object is not iterable
  2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply
  2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-07-16 17:28:47.782 26480 TRACE oslo.messaging.rpc.dispatcher   File 

[Yahoo-eng-team] [Bug 1342961] [NEW] Exception during message handling: Pool FOO could not be found

2014-07-16 Thread Attila Fazekas
Public bug reported:

The $subject-style exception appears in both successful and failed jobs.

message: Exception during message handling AND message:Pool AND
message:could not be found AND filename:logs/screen-q-svc.txt

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkV4Y2VwdGlvbiBkdXJpbmcgbWVzc2FnZSBoYW5kbGluZ1wiIEFORCBtZXNzYWdlOlwiUG9vbFwiIEFORCBtZXNzYWdlOlwiY291bGQgbm90IGJlIGZvdW5kXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tcS1zdmMudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNTUzMzU3ODE2NCwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

[req-201dcd14-dc9d-4fb5-8eb5-c66c35991cb3 ] Exception during message 
handling: Pool 7fa63738-9030-4136-9b4e-9eb7ffb79f68 could not be found
2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply
2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch
2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/services/loadbalancer/drivers/common/agent_driver_base.py,
 line 232, in update_pool_stats
2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
self.plugin.update_pool_stats(context, pool_id, data=stats)
2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/db/loadbalancer/loadbalancer_db.py, line 512, 
in update_pool_stats
2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher pool_db = 
self._get_resource(context, Pool, pool_id)
2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/db/loadbalancer/loadbalancer_db.py, line 218, 
in _get_resource
2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher raise 
loadbalancer.PoolNotFound(pool_id=id)
2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher PoolNotFound: 
Pool 7fa63738-9030-4136-9b4e-9eb7ffb79f68 could not be found
2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342961

Title:
  Exception during message handling: Pool  FOO could not be found

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The $subject-style exception appears in both successful and failed jobs.

  message: Exception during message handling AND message:Pool AND
  message:could not be found AND filename:logs/screen-q-svc.txt

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkV4Y2VwdGlvbiBkdXJpbmcgbWVzc2FnZSBoYW5kbGluZ1wiIEFORCBtZXNzYWdlOlwiUG9vbFwiIEFORCBtZXNzYWdlOlwiY291bGQgbm90IGJlIGZvdW5kXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tcS1zdmMudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNTUzMzU3ODE2NCwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

[req-201dcd14-dc9d-4fb5-8eb5-c66c35991cb3 ] Exception during message 
handling: Pool 7fa63738-9030-4136-9b4e-9eb7ffb79f68 could not be found
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher result 
= 

[Yahoo-eng-team] [Bug 1315095] Re: grenade nova network (n-net) fails to start

2014-07-08 Thread Attila Fazekas
The same logstash query finds this error in other jobs, and they are
runtime issues.

http://logs.openstack.org/94/105194/2/check/check-tempest-dsvm-postgres-
full/f029225/logs/screen-n-net.txt.gz#_2014-07-07_17_24_19_012

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1315095

Title:
  grenade nova network (n-net) fails to start

Status in Grenade - OpenStack upgrade testing:
  Confirmed
Status in OpenStack Compute (Nova):
  New

Bug description:
  Here we see that n-net never started logging to its screen:

  http://logs.openstack.org/02/91502/1/check/check-grenade-
  dsvm/912e89e/logs/new/

  The errors in n-cpu seem to support that the n-net service never
  started.

  According to http://logs.openstack.org/02/91502/1/check/check-grenade-
  dsvm/912e89e/logs/grenade.sh.log.2014-05-01-042623, circa 2014-05-01
  04:31:15.580 the interesting bits should be in:

  /opt/stack/status/stack/n-net.failure

  But I don't see that captured.

  I'm not sure why n-net did not start.

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1315095/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1329546] Re: Upon rebuild instances might never get to Active state

2014-07-03 Thread Attila Fazekas
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1329546

Title:
  Upon rebuild instances might never get to Active state

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  VMware mine sweeper for Neutron (*) recently showed a 100% failure
  rate on tempest.api.compute.v3.servers.test_server_actions

  Logs for two instances of these failures are available at [1] and [2]
  The failure manifested as an instance unable to go active after a rebuild.
  A bit of instrumentation and log analysis revealed no obvious error on the 
neutron side - and also that the instance was actually in the running state 
even though its task state was rebuilding/spawning.

  N-API logs [3] revealed that the instance spawn was timing out on a
  missed notification from neutron regarding VIF plug - however the same
  log showed such notification was received [4]

  It turns out that, after rebuild, the instance network cache still had
  'active': False for the instance's VIF, even if the status for the
  corresponding port was 'ACTIVE'. This happened because after the
  network-vif-plugged event was received, nothing triggered a refresh of
  the instance network info. For this reason, the VM, after a rebuild,
  kept waiting for an event which obviously was never sent from neutron.

  While this manifested only on mine sweeper - this appears to be a nova bug - 
manifesting in vmware minesweeper only because of the way the plugin 
synchronizes with the backend for reporting the operational status of a port.
  A simple solution for this problem would be to reload the instance network 
info cache when network-vif-plugged events are received by nova. (But as the 
reporter knows nothing about nova this might be a very bad idea as well)

  [1] http://208.91.1.172/logs/neutron/98278/2/413209/testr_results.html
  [2] http://208.91.1.172/logs/neutron/73234/34/413213/testr_results.html
  [3] 
http://208.91.1.172/logs/neutron/73234/34/413213/logs/screen-n-cpu.txt.gz?level=WARNING#_2014-06-06_01_46_36_219
  [4] 
http://208.91.1.172/logs/neutron/73234/34/413213/logs/screen-n-cpu.txt.gz?level=DEBUG#_2014-06-06_01_41_31_767

  (*) runs libvirt/KVM + NSX
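
  A minimal, hypothetical sketch of the refresh suggested above (names are
  illustrative, not nova's actual internals): when a network-vif-plugged event
  arrives, rebuild the cached VIF state from the current port status instead
  of trusting the stale cache.

    # Hypothetical event handler sketch -- not nova's actual code.
    def handle_external_event(event, instance, network_api):
        if event["name"] == "network-vif-plugged":
            # Re-query the ports so the cache reflects 'ACTIVE' ports
            # instead of the pre-rebuild 'active': False entries.
            instance["network_info"] = network_api.get_instance_nw_info(instance)
        return instance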

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1329546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332788] [NEW] AttributeError: 'MeteringAgentWithStateReport' object has no attribute '_periodic_last_run'

2014-06-21 Thread Attila Fazekas
Public bug reported:

http://logstash.openstack.org/#eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwiQXR0cmlidXRlRXJyb3JcXDogJ01ldGVyaW5nQWdlbnRXaXRoU3RhdGVSZXBvcnQnIFwiIiwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJvZmZzZXQiOjAsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIiwic3RhbXAiOjE0MDMzNDAzMDI5MjB9

All screen-q-metering.log files contain a similar exception:
2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall 
Traceback (most recent call last):
2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall   File 
/opt/stack/new/neutron/neutron/openstack/common/loopingcall.py, line 76, in 
_inner
2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall 
self.f(*self.args, **self.kw)
2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall   File 
/opt/stack/new/neutron/neutron/service.py, line 294, in periodic_tasks
2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall 
self.manager.periodic_tasks(ctxt, raise_on_error=raise_on_error)
2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall   File 
/opt/stack/new/neutron/neutron/manager.py, line 45, in periodic_tasks
2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall 
self.run_periodic_tasks(context, raise_on_error=raise_on_error)
2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall   File 
/opt/stack/new/neutron/neutron/openstack/common/periodic_task.py, line 160, 
in run_periodic_tasks
2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall 
last_run = self._periodic_last_run[task_name]
2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall 
AttributeError: 'MeteringAgentWithStateReport' object has no attribute 
'_periodic_last_run'
2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall
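
The usual cause of this kind of AttributeError is a subclass whose __init__
does not chain up to the periodic-task base class that creates
_periodic_last_run. A minimal, generic sketch of the pattern (class names are
illustrative, not the actual neutron code):

    class PeriodicTasksBase(object):
        def __init__(self):
            # The base class owns the bookkeeping the task runner reads.
            self._periodic_last_run = {}

    class MeteringAgentSketch(PeriodicTasksBase):
        def __init__(self, conf):
            # Forgetting this super() call leaves the instance without
            # _periodic_last_run, which is what the traceback shows.
            super(MeteringAgentSketch, self).__init__()
            self.conf = conf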

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332788

Title:
  AttributeError: 'MeteringAgentWithStateReport' object has no attribute
  '_periodic_last_run'

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
http://logstash.openstack.org/#eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwiQXR0cmlidXRlRXJyb3JcXDogJ01ldGVyaW5nQWdlbnRXaXRoU3RhdGVSZXBvcnQnIFwiIiwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJvZmZzZXQiOjAsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIiwic3RhbXAiOjE0MDMzNDAzMDI5MjB9

  All screen-q-metering.log files contain a similar exception:
  2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall 
Traceback (most recent call last):
  2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall   
File /opt/stack/new/neutron/neutron/openstack/common/loopingcall.py, line 76, 
in _inner
  2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall 
self.f(*self.args, **self.kw)
  2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall   
File /opt/stack/new/neutron/neutron/service.py, line 294, in periodic_tasks
  2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall 
self.manager.periodic_tasks(ctxt, raise_on_error=raise_on_error)
  2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall   
File /opt/stack/new/neutron/neutron/manager.py, line 45, in periodic_tasks
  2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall 
self.run_periodic_tasks(context, raise_on_error=raise_on_error)
  2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall   
File /opt/stack/new/neutron/neutron/openstack/common/periodic_task.py, line 
160, in run_periodic_tasks
  2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall 
last_run = self._periodic_last_run[task_name]
  2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall 
AttributeError: 'MeteringAgentWithStateReport' object has no attribute 
'_periodic_last_run'
  2014-06-21 07:20:37.543 14529 TRACE neutron.openstack.common.loopingcall

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332789] [NEW] frightful ERROR level messages in the screen-q-vpn.txt

2014-06-21 Thread Attila Fazekas
Public bug reported:

http://logstash.openstack.org/#eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwiU3RkZXJyOiAnRGV2aWNlIFwiIEFORCBtZXNzYWdlOiBcImRvZXMgbm90IGV4aXN0XCIiLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsIm9mZnNldCI6MCwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIiLCJzdGFtcCI6MTQwMzM0MDY3ODk1Nn0=

865671 hits.

2014-06-21 08:32:12.391 14321 ERROR neutron.agent.linux.utils [-]
Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qrouter-09b99e41-f9a6-4427-97b6-eb99c7c8f107', 'ip', 
'-o', 'link', 'show', 'qr-8f7490f2-9d']
Exit code: 1
Stdout: ''
Stderr: 'Device qr-8f7490f2-9d does not exist.\n'

q-vpn tries to manipulate a device that has not been created yet and
reports it as an ERROR.

The device could only exist at this point if it had been created before q-vpn 
started or if another service had created it.
Neither of these is true.

Please decrease this message's verbosity or use a different interface
creation strategy.

DEBUG or INFO is the appropriate level at this time.
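
A minimal, generic sketch of the logging change being asked for (helper names
are illustrative, not the actual neutron agent code): treat "device does not
exist" as an expected condition and log it below ERROR.

    import logging

    LOG = logging.getLogger(__name__)

    class DeviceDoesNotExist(Exception):
        pass

    def device_exists(device, run_ip_link):
        # run_ip_link is expected to raise DeviceDoesNotExist when
        # `ip -o link show <device>` exits non-zero.
        try:
            run_ip_link(device)
        except DeviceDoesNotExist:
            LOG.debug("Device %s does not exist yet", device)
            return False
        return True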

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332789

Title:
  frightful  ERROR level messages in the screen-q-vpn.txt

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
http://logstash.openstack.org/#eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwiU3RkZXJyOiAnRGV2aWNlIFwiIEFORCBtZXNzYWdlOiBcImRvZXMgbm90IGV4aXN0XCIiLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsIm9mZnNldCI6MCwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIiLCJzdGFtcCI6MTQwMzM0MDY3ODk1Nn0=

  865671 hits.

  2014-06-21 08:32:12.391 14321 ERROR neutron.agent.linux.utils [-]
  Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qrouter-09b99e41-f9a6-4427-97b6-eb99c7c8f107', 'ip', 
'-o', 'link', 'show', 'qr-8f7490f2-9d']
  Exit code: 1
  Stdout: ''
  Stderr: 'Device qr-8f7490f2-9d does not exist.\n'

  q-vpn tries to manipulate a device that has not been created yet and
  reports it as an ERROR.

  The device could only exist at this point if it had been created before 
q-vpn started or if another service had created it.
  Neither of these is true.

  Please decrease this message's verbosity or use a different interface
  creation strategy.

  DEBUG or INFO is the appropriate level at this time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332789/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1311500] Re: Nova 'os-security-group-default-rules' API does not work with neutron

2014-06-19 Thread Attila Fazekas
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1311500

Title:
  Nova 'os-security-group-default-rules' API does not work with neutron

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Nova APIs 'os-security-group-default-rules' does not work if
  'conf-security_group_api' is 'neutron'.

  I wrote the test cases for above Nova APIs
  (https://review.openstack.org/#/c/87924) and it fails in gate neutron
  tests.

  I further investigated this issue and found that in
  'nova/api/openstack/compute/contrib/security_group_default_rules.py',
  'security_group_api' is set according to  'conf-security_group_api'
  
(https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/security_group_default_rules.py#L107).

  If 'conf-security_group_api' is 'nova' then,
  'NativeNovaSecurityGroupAPI(NativeSecurityGroupExceptions,
  compute_api.SecurityGroupAPI)' is being used in this API and no issue
  here. It works fine.

  If 'conf-security_group_api' is 'neutron' then,
  'NativeNeutronSecurityGroupAPI(NativeSecurityGroupExceptions,
  neutron_driver.SecurityGroupAPI)' is being used in this API and
  'neutron_driver.SecurityGroupAPI'
  
(https://github.com/openstack/nova/blob/master/nova/network/security_group/neutron_driver.py#L48)
  does not have any of the  function which are being called from this
  API class. So gives AttributeError
  (http://logs.openstack.org/24/87924/2/check/check-tempest-dsvm-
  neutron-full/7951abf/logs/screen-n-api.txt.gz).

  Traceback -
  .
  .
  2014-04-21 00:44:22.430 10186 TRACE nova.api.openstack   File 
/opt/stack/new/nova/nova/api/openstack/compute/contrib/security_group_default_rules.py,
 line 130, in create
  2014-04-21 00:44:22.430 10186 TRACE nova.api.openstack if 
self.security_group_api.default_rule_exists(context, values):
  2014-04-21 00:44:22.430 10186 TRACE nova.api.openstack AttributeError: 
'NativeNeutronSecurityGroupAPI' object has no attribute 'default_rule_exists'

  I think this API is only for Nova-network as currently there is no
  such feature exist in neutron. So this API should always use the nova
  network security group driver
  
(https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/security_groups.py#L669).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1311500/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330856] [NEW] Confusing fault reason when the flavor's disk size was too small

2014-06-17 Thread Attila Fazekas
Public bug reported:

Fedora-x86_64-20-20140407-sda has 2 GiB virtual size.

$ nova boot fed_1G_2  --image Fedora-x86_64-20-20140407-sda --flavor 1 
--key-name mykey
$ nova show fed_1G_2
+--+--+
| Property | Value  
  |
+--+--+
| OS-DCF:diskConfig| MANUAL 
  |
| OS-EXT-AZ:availability_zone  | nova   
  |
| OS-EXT-STS:power_state   | 0  
  |
| OS-EXT-STS:task_state| -  
  |
| OS-EXT-STS:vm_state  | error  
  |
| OS-SRV-USG:launched_at   | -  
  |
| OS-SRV-USG:terminated_at | -  
  |
| accessIPv4   |
  |
| accessIPv6   |
  |
| config_drive |
  |
| created  | 2014-06-17T07:35:43Z   
  |
| fault| {message: No valid host was found. 
, code: 500, created: 2014-06-17T07:35:44Z} |
| flavor   | m1.tiny (1)
  |
| hostId   | 
a904a292f4eb7f6735bef786c4a240a0b9240a6bc4f002519cb0e2b7
 |
| id   | 3c908a54-9682-40ad-8f12-a5bf6400   
  |
| image| Fedora-x86_64-20-20140407-sda 
(085610a8-77ae-4bc8-9a28-3bcc1020e06e) |
| key_name | mykey  
  |
| metadata | {} 
  |
| name | fed_1G_2   
  |
| os-extended-volumes:volumes_attached | [] 
  |
| private network  | 10.1.0.5   
  |
| security_groups  | default
  |
| status   | ERROR  
  |
| tenant_id| 1d26ad7003cf47e5b0107313be4832c3   
  |
| updated  | 2014-06-17T07:35:44Z   
  |
| user_id  | bf52e56b9ca14648b391c5b6d490a0c1   
  |
+--+--+

$ # nova flavor-list
+-+---+---+--+---+-+---+-+---+
| ID  | Name  | Memory_MB | Disk | Ephemeral | Swap_MB | VCPUs | 
RXTX_Factor | Is_Public |
+-+---+---+--+---+-+---+-+---+
| 1   | m1.tiny   | 512   | 1| 0 | | 1 | 1.0
 | True  |
| 2   | m1.small  | 2048  | 20   | 0 | | 1 | 1.0
 | True  |
| 3   | m1.medium | 4096  | 40   | 0 | | 2 | 1.0
 | True  |
| 4   | m1.large  | 8192  | 80   | 0 | | 4 | 1.0
 | True  |
| 42  | m1.nano   | 64| 0| 0 | | 1 | 1.0
 | True  |
| 451 | m1.heat   | 1024  | 0| 0  

[Yahoo-eng-team] [Bug 1330098] [NEW] Volume failed to reach in-use status within the required time

2014-06-14 Thread Attila Fazekas
Public bug reported:

http://logs.openstack.org/59/99559/4/check/check-tempest-dsvm-postgres-
full/be6a190/console.html.gz#_2014-06-13_21_20_11_328

The volume attach did not happen within the normal time.
(The failed change is completely unrelated to this.)

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1330098

Title:
  Volume failed to reach in-use status within the required time

Status in OpenStack Compute (Nova):
  New

Bug description:
  http://logs.openstack.org/59/99559/4/check/check-tempest-dsvm-
  postgres-full/be6a190/console.html.gz#_2014-06-13_21_20_11_328

  The volume attach did not happen within the normal time.
  (The failed change is completely unrelated to this.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1330098/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257626] Re: Timeout while waiting on RPC response - topic: network, RPC method: allocate_for_instance info: unknown

2014-06-12 Thread Attila Fazekas
check-grenade-dsvm-icehouse gate job failed with this bug's signature:

http://logs.openstack.org/99/91899/2/check/check-grenade-dsvm-
icehouse/d18de65/logs/old/screen-n-cpu.txt.gz?level=ERROR#_2014-06-04_04_25_35_546

** Changed in: nova
   Status: Invalid = Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257626

Title:
  Timeout while waiting on RPC response - topic: network, RPC method:
  allocate_for_instance info: unknown

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  http://logs.openstack.org/21/59121/6/check/gate-tempest-dsvm-large-
  ops/fdd1002/logs/screen-n-cpu.txt.gz?level=TRACE#_2013-12-04_06_20_16_658

  
  2013-12-04 06:20:16.658 21854 ERROR nova.compute.manager [-] Instance failed 
network setup after 1 attempt(s)
  ...
  2013-12-04 06:20:16.658 21854 TRACE nova.compute.manager Timeout: Timeout 
while waiting on RPC response - topic: network, RPC method: 
allocate_for_instance info: unknown

  
  It appears there has been a  performance regression and that 
gate-tempest-dsvm-large-ops is now failing because of RPC timeouts to 
allocate_for_instance

  
  logstash query: message:nova.compute.manager Timeout: Timeout while waiting 
on RPC response - topic: \network\, RPC method: \allocate_for_instance\

  There seems to have been a major rise in this bug on December 3rd.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1257626/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297560] Re: *tempest-dsvm-neutron-heat-slow fails with WaitConditionTimeout

2014-06-12 Thread Attila Fazekas
The net 10.0.3.0 is used by both the host and by test_neutron_resources.

test_neutron_resources must use a different range.
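
For illustration only, a tiny Python 3 sketch (stdlib ipaddress module; the
helper name and candidate list are made up) of the kind of overlap check that
would avoid picking the host's 10.0.3.0 range:

    import ipaddress

    # The host network mentioned above; assumed to be a /24 for this sketch.
    HOST_NET = ipaddress.ip_network("10.0.3.0/24")

    def pick_test_subnet(candidates):
        # Return the first candidate CIDR that does not overlap the host net.
        for cidr in candidates:
            net = ipaddress.ip_network(cidr)
            if not net.overlaps(HOST_NET):
                return net
        raise ValueError("no non-overlapping subnet available")

    print(pick_test_subnet(["10.0.3.0/24", "10.0.8.0/24"]))  # -> 10.0.8.0/24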

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1297560

Title:
  *tempest-dsvm-neutron-heat-slow fails with WaitConditionTimeout

Status in Orchestration API (Heat):
  Triaged
Status in Tempest:
  New

Bug description:
  There are actually two test failures here:

  http://logs.openstack.org/10/82410/4/check/check-tempest-dsvm-neutron-
  heat-slow/094a340/console.html

  And I see one stack trace in the h-engine log:

  http://logs.openstack.org/10/82410/4/check/check-tempest-dsvm-neutron-
  heat-
  slow/094a340/logs/screen-h-eng.txt.gz?level=TRACE#_2014-03-25_14_28_27_890

  2014-03-25 14:28:27.890 22679 ERROR heat.engine.resource [-] CREATE : 
WaitCondition WaitCondition [WaitHandle] Stack heat-1205857075 
[ae6bfe9c-5c98-4830-b88f-a66f34e2575a]
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource Traceback (most 
recent call last):
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource   File 
/opt/stack/new/heat/heat/engine/resource.py, line 420, in _do_action
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource while not 
check(handle_data):
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource   File 
/opt/stack/new/heat/heat/engine/resources/wait_condition.py, line 252, in 
check_create_complete
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource return 
runner.step()
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource   File 
/opt/stack/new/heat/heat/engine/scheduler.py, line 179, in step
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource 
self._runner.throw(self._timeout)
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource   File 
/opt/stack/new/heat/heat/engine/resources/wait_condition.py, line 227, in 
_wait
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource raise timeout
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource 
WaitConditionTimeout: 0 of 1 received
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource 

  
  There are 19 hits in the last 7 days, both check and gate queues:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaGVhdC5lbmdpbmUucmVzb3VyY2UgV2FpdENvbmRpdGlvblRpbWVvdXQ6IDAgb2YgMSByZWNlaXZlZFwiIEFORCBmaWxlbmFtZTpsb2dzKnNjcmVlbi1oLWVuZy50eHQiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTU3OTQ2NTg2NTN9

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1297560/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297560] Re: *tempest-dsvm-neutron-heat-slow fails with WaitConditionTimeout

2014-06-11 Thread Attila Fazekas
http://logs.openstack.org/76/71476/16/check/check-tempest-dsvm-neutron-
heat-slow-icehouse/d066094/logs/screen-q-vpn.txt.gz

Normally two routers need to be created in the heat job,
but only one was created by the q-svc.

e6da91c3-01b0-4963-9359-96a3cdf4b824 not found.

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1297560

Title:
  *tempest-dsvm-neutron-heat-slow fails with WaitConditionTimeout

Status in Orchestration API (Heat):
  Triaged
Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  New

Bug description:
  There are actually two test failures here:

  http://logs.openstack.org/10/82410/4/check/check-tempest-dsvm-neutron-
  heat-slow/094a340/console.html

  And I see one stack trace in the h-engine log:

  http://logs.openstack.org/10/82410/4/check/check-tempest-dsvm-neutron-
  heat-
  slow/094a340/logs/screen-h-eng.txt.gz?level=TRACE#_2014-03-25_14_28_27_890

  2014-03-25 14:28:27.890 22679 ERROR heat.engine.resource [-] CREATE : 
WaitCondition WaitCondition [WaitHandle] Stack heat-1205857075 
[ae6bfe9c-5c98-4830-b88f-a66f34e2575a]
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource Traceback (most 
recent call last):
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource   File 
/opt/stack/new/heat/heat/engine/resource.py, line 420, in _do_action
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource while not 
check(handle_data):
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource   File 
/opt/stack/new/heat/heat/engine/resources/wait_condition.py, line 252, in 
check_create_complete
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource return 
runner.step()
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource   File 
/opt/stack/new/heat/heat/engine/scheduler.py, line 179, in step
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource 
self._runner.throw(self._timeout)
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource   File 
/opt/stack/new/heat/heat/engine/resources/wait_condition.py, line 227, in 
_wait
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource raise timeout
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource 
WaitConditionTimeout: 0 of 1 received
  2014-03-25 14:28:27.890 22679 TRACE heat.engine.resource 

  
  There are 19 hits in the last 7 days, both check and gate queues:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaGVhdC5lbmdpbmUucmVzb3VyY2UgV2FpdENvbmRpdGlvblRpbWVvdXQ6IDAgb2YgMSByZWNlaXZlZFwiIEFORCBmaWxlbmFtZTpsb2dzKnNjcmVlbi1oLWVuZy50eHQiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTU3OTQ2NTg2NTN9

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1297560/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1316926] Re: failed to reach ACTIVE status and task state None within the required time

2014-06-10 Thread Attila Fazekas
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1316926

Title:
  failed to reach ACTIVE status and task state None within the
  required time

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  New

Bug description:
  Running test_reset_network_inject_network_info see a failure where
  unable to reach ACTIVE state.

  http://logs.openstack.org/71/91171/11/gate/gate-tempest-dsvm-
  full/8cf415d/console.html

  2014-05-07 03:33:22.910 | {3} 
tempest.api.compute.v3.admin.test_servers.ServersAdminV3Test.test_reset_network_inject_network_info
 [196.920138s] ... FAILED
  2014-05-07 03:33:22.910 | 
  2014-05-07 03:33:22.910 | Captured traceback:
  2014-05-07 03:33:22.910 | ~~~
  2014-05-07 03:33:22.910 | Traceback (most recent call last):
  2014-05-07 03:33:22.910 |   File 
tempest/api/compute/v3/admin/test_servers.py, line 170, in 
test_reset_network_inject_network_info
  2014-05-07 03:33:22.910 | resp, server = 
self.create_test_server(wait_until='ACTIVE')
  2014-05-07 03:33:22.910 |   File tempest/api/compute/base.py, line 233, 
in create_test_server
  2014-05-07 03:33:22.910 | raise ex
  2014-05-07 03:33:22.910 | TimeoutException: Request timed out
  2014-05-07 03:33:22.910 | Details: Server 
4491ab2f-2228-4d3f-b364-77d0276c18da failed to reach ACTIVE status and task 
state None within the required time (196 s). Current status: BUILD. Current 
task state: spawning.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1316926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328276] [NEW] test_list_image_filters.ListImageFiltersTest failed to create image

2014-06-09 Thread Attila Fazekas
Public bug reported:

This test boots two servers almost at the same time (starting the second
instance before the first one is active), and waits until both servers are ACTIVE.

Then it creates the first snapshot:
http://logs.openstack.org/32/97532/3/check/check-tempest-dsvm-full/ae1f95a/logs/screen-n-cpu.txt.gz#_2014-06-06_20_16_17_836
Instance: 33e632b0-1162-482e-8d41-b31f0d333429
snapshot: 5b88b608-7fdf-4073-8beb-749dc32ad10f


When n-cpu fails to acquire the state change lock, the image gets deleted.
http://logs.openstack.org/32/97532/3/check/check-tempest-dsvm-full/ae1f95a/logs/screen-n-cpu.txt.gz#_2014-06-06_20_16_54_800

Instance: 85c8873a-4066-4596-a8b4-5a6b2c221774
snapshot: cdc2a7a1-f384-46a7-ab01-78fb7555af81

The lock acquire / image creation should be retried instead of deleting
the image.
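
A rough sketch of the retry behaviour suggested above (all names here are
hypothetical, this is not nova's actual code):

    import time

    class LockTimeout(Exception):
        pass

    def snapshot_with_retry(acquire_state_lock, do_snapshot, attempts=3, delay=2):
        # Retry the lock acquisition a few times instead of giving up and
        # deleting the half-created image on the first failure.
        for attempt in range(1, attempts + 1):
            try:
                with acquire_state_lock():
                    return do_snapshot()
            except LockTimeout:
                if attempt == attempts:
                    raise  # only surface the failure after the retries run out
                time.sleep(delay)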

Console exception (printing delayed by cleanup):
...
2014-06-06 20:17:05.764 | NotFound: Object not found
2014-06-06 20:17:05.764 | Details: {itemNotFound: {message: Image not 
found., code: 404}}

http://logs.openstack.org/32/97532/3/check/check-tempest-dsvm-
full/ae1f95a/console.html.gz#_2014-06-06_20_17_05_758


Actual GET request in the n-api log.
http://logs.openstack.org/32/97532/3/check/check-tempest-dsvm-full/ae1f95a/logs/screen-n-api.txt.gz#_2014-06-06_20_16_55_824

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1328276

Title:
  test_list_image_filters.ListImageFiltersTest failed to create  image

Status in OpenStack Compute (Nova):
  New

Bug description:
  This test boots two servers almost at the same time (starting the second
  instance before the first one is active), and waits until both servers are ACTIVE.

  Then it creates the first snapshot:
  
http://logs.openstack.org/32/97532/3/check/check-tempest-dsvm-full/ae1f95a/logs/screen-n-cpu.txt.gz#_2014-06-06_20_16_17_836
  Instance: 33e632b0-1162-482e-8d41-b31f0d333429
  snapshot: 5b88b608-7fdf-4073-8beb-749dc32ad10f

  
  When n-cpu fails to acquire the state change lock, the image gets deleted.
  
http://logs.openstack.org/32/97532/3/check/check-tempest-dsvm-full/ae1f95a/logs/screen-n-cpu.txt.gz#_2014-06-06_20_16_54_800

  Instance: 85c8873a-4066-4596-a8b4-5a6b2c221774
  snapshot: cdc2a7a1-f384-46a7-ab01-78fb7555af81

  The lock acquire / image creation should be retried instead of
  deleting the image.

  Console exception (printing delayed by cleanup):
  ...
  2014-06-06 20:17:05.764 | NotFound: Object not found
  2014-06-06 20:17:05.764 | Details: {itemNotFound: {message: Image 
not found., code: 404}}

  http://logs.openstack.org/32/97532/3/check/check-tempest-dsvm-
  full/ae1f95a/console.html.gz#_2014-06-06_20_17_05_758

  
  Actual GET request in the n-api log.
  
http://logs.openstack.org/32/97532/3/check/check-tempest-dsvm-full/ae1f95a/logs/screen-n-api.txt.gz#_2014-06-06_20_16_55_824

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1328276/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244457] Re: ServiceCatalogException: Invalid service catalog service: compute

2014-06-05 Thread Attila Fazekas
message:ServiceCatalogException AND tags:horizon_error.txt has 11 hits in
7 days.

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiU2VydmljZUNhdGFsb2dFeGNlcHRpb25cIiBBTkQgdGFnczpcImhvcml6b25fZXJyb3IudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDIwMjgwOTM2ODUsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

** Changed in: horizon
   Status: Expired => New

** Also affects: grenade
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1244457

Title:
  ServiceCatalogException: Invalid service catalog service: compute

Status in Grenade - OpenStack upgrade testing:
  New
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On the following review - https://review.openstack.org/#/c/53712/

  We failed the tempest tests on the dashboard scenario tests for the pg 
version of the job: 
  2013-10-24 21:26:00.445 | 
==
  2013-10-24 21:26:00.445 | FAIL: 
tempest.scenario.test_dashboard_basic_ops.TestDashboardBasicOps.test_basic_scenario[dashboard]
  2013-10-24 21:26:00.445 | 
tempest.scenario.test_dashboard_basic_ops.TestDashboardBasicOps.test_basic_scenario[dashboard]
  2013-10-24 21:26:00.445 | 
--
  2013-10-24 21:26:00.446 | _StringException: Empty attachments:
  2013-10-24 21:26:00.446 |   pythonlogging:''
  2013-10-24 21:26:00.446 |   stderr
  2013-10-24 21:26:00.446 |   stdout
  2013-10-24 21:26:00.446 | 
  2013-10-24 21:26:00.446 | Traceback (most recent call last):
  2013-10-24 21:26:00.446 |   File 
tempest/scenario/test_dashboard_basic_ops.py, line 73, in test_basic_scenario
  2013-10-24 21:26:00.447 | self.user_login()
  2013-10-24 21:26:00.447 |   File 
tempest/scenario/test_dashboard_basic_ops.py, line 64, in user_login
  2013-10-24 21:26:00.447 | self.opener.open(req, urllib.urlencode(params))
  2013-10-24 21:26:00.447 |   File /usr/lib/python2.7/urllib2.py, line 406, 
in open
  2013-10-24 21:26:00.447 | response = meth(req, response)
  2013-10-24 21:26:00.447 |   File /usr/lib/python2.7/urllib2.py, line 519, 
in http_response
  2013-10-24 21:26:00.447 | 'http', request, response, code, msg, hdrs)
  2013-10-24 21:26:00.448 |   File /usr/lib/python2.7/urllib2.py, line 438, 
in error
  2013-10-24 21:26:00.448 | result = self._call_chain(*args)
  2013-10-24 21:26:00.448 |   File /usr/lib/python2.7/urllib2.py, line 378, 
in _call_chain
  2013-10-24 21:26:00.448 | result = func(*args)
  2013-10-24 21:26:00.448 |   File /usr/lib/python2.7/urllib2.py, line 625, 
in http_error_302
  2013-10-24 21:26:00.448 | return self.parent.open(new, 
timeout=req.timeout)
  2013-10-24 21:26:00.448 |   File /usr/lib/python2.7/urllib2.py, line 406, 
in open
  2013-10-24 21:26:00.449 | response = meth(req, response)
  2013-10-24 21:26:00.449 |   File /usr/lib/python2.7/urllib2.py, line 519, 
in http_response
  2013-10-24 21:26:00.449 | 'http', request, response, code, msg, hdrs)
  2013-10-24 21:26:00.449 |   File /usr/lib/python2.7/urllib2.py, line 438, 
in error
  2013-10-24 21:26:00.449 | result = self._call_chain(*args)
  2013-10-24 21:26:00.449 |   File /usr/lib/python2.7/urllib2.py, line 378, 
in _call_chain
  2013-10-24 21:26:00.449 | result = func(*args)
  2013-10-24 21:26:00.450 |   File /usr/lib/python2.7/urllib2.py, line 625, 
in http_error_302
  2013-10-24 21:26:00.450 | return self.parent.open(new, 
timeout=req.timeout)
  2013-10-24 21:26:00.450 |   File /usr/lib/python2.7/urllib2.py, line 406, 
in open
  2013-10-24 21:26:00.450 | response = meth(req, response)
  2013-10-24 21:26:00.450 |   File /usr/lib/python2.7/urllib2.py, line 519, 
in http_response
  2013-10-24 21:26:00.450 | 'http', request, response, code, msg, hdrs)
  2013-10-24 21:26:00.450 |   File /usr/lib/python2.7/urllib2.py, line 444, 
in error
  2013-10-24 21:26:00.451 | return self._call_chain(*args)
  2013-10-24 21:26:00.451 |   File /usr/lib/python2.7/urllib2.py, line 378, 
in _call_chain
  2013-10-24 21:26:00.451 | result = func(*args)
  2013-10-24 21:26:00.451 |   File /usr/lib/python2.7/urllib2.py, line 527, 
in http_error_default
  2013-10-24 21:26:00.451 | raise HTTPError(req.get_full_url(), code, msg, 
hdrs, fp)
  2013-10-24 21:26:00.451 | HTTPError: HTTP Error 500: INTERNAL SERVER ERROR

  The horizon logs have the following error info:

  [Thu Oct 24 21:18:43 2013] [error] Internal Server Error: /project/
  [Thu Oct 24 21:18:43 2013] [error] Traceback (most recent call last):
  [Thu Oct 24 21:18:43 2013] [error]   File 
/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py, line 
115, in get_response
  [Thu Oct 24 21:18:43 2013] [error] 

[Yahoo-eng-team] [Bug 1312199] Re: cirros 0.3.1 fails to boot

2014-05-28 Thread Attila Fazekas
AFAIK detecting non-accelerated qemu as the hypervisor is not an easy task,
even on a booted system [1].

When the image is UEC (the kernel image is separate), nova would be able to
pass no_timer_check as a kernel parameter.
This is only required when CONF.libvirt.virt_type=qemu.
Linux automatically turns off the timer check when the hypervisor is mshyperv
or kvm.
AFAIK xen also uses a paravirtualized clock.
This seems to be the only way to provide a stable boot with existing UEC
images in soft qemu.
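
As a sketch of the conditional described above (CONF.libvirt.virt_type is the
real nova option; the function and the base arguments are made up for
illustration):

    def guest_kernel_args(virt_type, base_args="console=ttyS0"):
        # Append no_timer_check only for plain (non-accelerated) qemu guests;
        # kvm, xen and hyperv guests use a paravirtualized clock, so the
        # kernel does not need this workaround there.
        args = [base_args]
        if virt_type == "qemu":
            args.append("no_timer_check")
        return " ".join(args)

    # guest_kernel_args("qemu") -> "console=ttyS0 no_timer_check"
    # guest_kernel_args("kvm")  -> "console=ttyS0"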

Adding nova to this bug for the above change.

Devstack automatically decides whether to use kvm or qemu.
kvm is selected when the system is able to use hardware acceleration with
qemu/kvm.

When qemu is the selected type and the cloud image is not UEC, the cloud image
needs to be altered in most cases in order to use the no_timer_check parameter.
This includes the f20 cloud image and all cloud images I have seen so far.
It affects the heat-slow jobs.

Adding devstack as an affected component for this change.

Bugs for the Linux kernel and the F20 cloud image will be created as well.

[1] http://fedorapeople.org/cgit/rjones/public_git/virt-what.git/tree
/virt-what.in?id=8aa72773cebbc742d9378fed6b6ac13cb57b0eb3#n228

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: devstack
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1312199

Title:
  cirros 0.3.1 fails to boot

Status in CirrOS a tiny cloud guest:
  New
Status in devstack - openstack dev environments:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Logstash query: message: MP-BIOS bug AND tags:console

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIk1QLUJJT1MgYnVnXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImFsbCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTgzNDg0NzIzNzcsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

  cirros-0.3.1-x86_64-uec  sometimes fails to boot with libvirt/ soft
  qemu in the openstack gate jobs.

  The VM's serial console log ends with:

  [1.096067] ftrace: allocating 27027 entries in 106 pages
  [1.140070] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
  [1.148071] ..MP-BIOS bug: 8254 timer not connected to IO-APIC
  [1.148071] ...trying to set up timer (IRQ0) through the 8259A ...
  [1.148071] . (found apic 0 pin 2) ...
  [1.152071] ... failed.
  [1.152071] ...trying to set up timer as Virtual Wire IRQ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/cirros/+bug/1312199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1322440] [NEW] NoSuchOptError: no such option: state_path

2014-05-23 Thread Attila Fazekas
Public bug reported:

http://logs.openstack.org/30/92630/1/check/check-neutron-dsvm-
functional/209a36f/console.html

check-neutron-dsvm-functional failed on a stable/icehouse change.

neutron.tests.functional.agent.linux.test_async_process.TestAsyncProcess.test_async_process_respawns
neutron.tests.functional.agent.linux.test_ovsdb_monitor.TestSimpleInterfaceMonitor.test_has_updates
neutron.tests.functional.agent.linux.test_ovsdb_monitor.TestOvsdbMonitor.test_killed_monitor_respawns
neutron.tests.functional.agent.linux.test_async_process.TestAsyncProcess.test_stopping_async_process_lifecycle

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: openstack-ci
 Importance: Undecided
 Status: New

** Also affects: openstack-ci
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1322440

Title:
  NoSuchOptError: no such option: state_path

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Core Infrastructure:
  New

Bug description:
  http://logs.openstack.org/30/92630/1/check/check-neutron-dsvm-
  functional/209a36f/console.html

  check-neutron-dsvm-functional failed on a stable/icehouse change.

  
neutron.tests.functional.agent.linux.test_async_process.TestAsyncProcess.test_async_process_respawns
  
neutron.tests.functional.agent.linux.test_ovsdb_monitor.TestSimpleInterfaceMonitor.test_has_updates
  
neutron.tests.functional.agent.linux.test_ovsdb_monitor.TestOvsdbMonitor.test_killed_monitor_respawns
  
neutron.tests.functional.agent.linux.test_async_process.TestAsyncProcess.test_stopping_async_process_lifecycle

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1322440/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1321653] [NEW] Got 500 when adding list type host to an aggregate

2014-05-21 Thread Attila Fazekas
Public bug reported:

Steps to reproduce as admin:
1. create an aggregate  (ID 1) 
$ nova aggregate-create foo
2. curl -i http://127.0.0.1:8774/v2/`keystone token-get | awk '/ tenant_id /{print $4}'`/os-aggregates/1/action -X POST -H "Content-Type: application/json" -H "X-Auth-Token: `keystone token-get | awk '/ id /{print $4}'`" -d '{"add_host": {"host": ["host-2", "host-1"]}}'

HTTP/1.1 500 Internal Server Error
Content-Length: 128
Content-Type: application/json; charset=UTF-8
X-Compute-Request-Id: req-f6ea35a8-029a-444a-9741-7c6abd27f294
Date: Wed, 21 May 2014 08:34:57 GMT

{"computeFault": {"message": "The server has either erred or is
incapable of performing the requested operation.", "code": 500}}


Expected solution:
A: Respond with 400 (Bad Request) when the host value is not the expected
type (see the sketch below).
B: Support adding multiple hosts in a single request.
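
A minimal sketch of what option A could look like on the API side
(hypothetical validation helper; webob is assumed here because nova's API
layer uses it, but this is not the actual extension code):

    import webob.exc

    def validate_add_host(body):
        # Reject non-string 'host' values with a 400 instead of letting the
        # request blow up deeper in the stack as a 500.
        host = body.get("add_host", {}).get("host")
        if not isinstance(host, str):
            raise webob.exc.HTTPBadRequest(
                explanation="'host' must be a single host name string")
        return host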

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: exception.txt
   
https://bugs.launchpad.net/bugs/1321653/+attachment/4116708/+files/aggr-multi-exc.txt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1321653

Title:
  Got 500  when adding list type host to an aggregate

Status in OpenStack Compute (Nova):
  New

Bug description:
  Steps to reproduce as admin:
  1. create an aggregate  (ID 1) 
  $ nova aggregate-create foo
  2. curl -i http://127.0.0.1:8774/v2/`keystone token-get | awk '/ tenant_id /{print $4}'`/os-aggregates/1/action -X POST -H "Content-Type: application/json" -H "X-Auth-Token: `keystone token-get | awk '/ id /{print $4}'`" -d '{"add_host": {"host": ["host-2", "host-1"]}}'

  HTTP/1.1 500 Internal Server Error
  Content-Length: 128
  Content-Type: application/json; charset=UTF-8
  X-Compute-Request-Id: req-f6ea35a8-029a-444a-9741-7c6abd27f294
  Date: Wed, 21 May 2014 08:34:57 GMT

  {"computeFault": {"message": "The server has either erred or is
  incapable of performing the requested operation.", "code": 500}}

  
  Expected solution:
  A: Respond with 400 (Bad Request) when the host value is not the expected
  type.
  B: Support adding multiple hosts in a single request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1321653/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1320670] Re: 404 on GET /v3/OS-SIMPLE-CERT/ca at grenade

2014-05-21 Thread Attila Fazekas
Since the icehouse version of openstack is basically expected to work with a
havana config, and grenade is expected to do nothing special, I'll remove
grenade from the affected components.

Since the root cause of this issue is not really keystone and it is more of a
gating config issue, I remove keystone as an affected component.

Mainly this issue exists because the check-grenade-dsvm-icehouse job tests a
different thing than what is expected.

** No longer affects: keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1320670

Title:
  404 on GET /v3/OS-SIMPLE-CERT/ca at grenade

Status in Grenade - OpenStack upgrade testing:
  In Progress
Status in OpenStack Core Infrastructure:
  New
Status in Tempest:
  New

Bug description:
  In this [1] grenade job, both  test_get_ca_certificate and
  test_get_certificates failed [2].

  The 404 is also visible in the keystone log [3].
  I do not know why the CA cert is not always visible, but it should be.

  [1] http://logs.openstack.org/29/93029/1/check/check-grenade-dsvm/320139c/
  [2] 
http://logs.openstack.org/29/93029/1/check/check-grenade-dsvm/320139c/console.html#_2014-05-18_15_23_57_034
  
[3]http://logs.openstack.org/29/93029/1/check/check-grenade-dsvm/320139c/logs/new/screen-key.txt.gz#_2014-05-18_15_23_57_006

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1320670/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1320617] [NEW] failed to reach ACTIVE status within the required time (196 s). Current status: SAVING

2014-05-18 Thread Attila Fazekas
Public bug reported:

http://logs.openstack.org/08/92608/1/gate/gate-tempest-dsvm-postgres-
full/57b137a/console.html.gz

2014-05-09 21:44:09.857 | Captured traceback:
2014-05-09 21:44:09.858 | ~~~
2014-05-09 21:44:09.858 | Traceback (most recent call last):
2014-05-09 21:44:09.858 |   File 
tempest/api/compute/images/test_list_image_filters.py, line 45, in setUpClass
2014-05-09 21:44:09.858 | cls.server1['id'], wait_until='ACTIVE')
2014-05-09 21:44:09.858 |   File tempest/api/compute/base.py, line 302, 
in create_image_from_server
2014-05-09 21:44:09.858 | kwargs['wait_until'])
2014-05-09 21:44:09.858 |   File 
tempest/services/compute/xml/images_client.py, line 140, in 
wait_for_image_status
2014-05-09 21:44:09.858 | waiters.wait_for_image_status(self, image_id, 
status)
2014-05-09 21:44:09.858 |   File tempest/common/waiters.py, line 129, in 
wait_for_image_status
2014-05-09 21:44:09.858 | raise exceptions.TimeoutException(message)
2014-05-09 21:44:09.859 | TimeoutException: Request timed out
2014-05-09 21:44:09.859 | Details: (ListImageFiltersTestXML:setUpClass) 
Image 20b6e7a9-f65d-4d17-b025-59f9237ff8cb failed to reach ACTIVE status within 
the required time (196 s). Current status: SAVING.

logstash:
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiZmFpbGVkIHRvIHJlYWNoIEFDVElWRSBzdGF0dXMgd2l0aGluIHRoZSByZXF1aXJlZCB0aW1lXCIgQU5EIGZpbGVuYW1lOlwiY29uc29sZS5odG1sXCIgQU5EIG1lc3NhZ2U6XCJDdXJyZW50IHN0YXR1czogU0FWSU5HXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImFsbCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDA0MTEyNDcwMzksIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

message:failed to reach ACTIVE status within the required time AND
filename:console.html AND message:Current status: SAVING

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1320617

Title:
  failed to reach ACTIVE status within the required time (196 s).
  Current status: SAVING

Status in OpenStack Compute (Nova):
  New

Bug description:
  http://logs.openstack.org/08/92608/1/gate/gate-tempest-dsvm-postgres-
  full/57b137a/console.html.gz

  2014-05-09 21:44:09.857 | Captured traceback:
  2014-05-09 21:44:09.858 | ~~~
  2014-05-09 21:44:09.858 | Traceback (most recent call last):
  2014-05-09 21:44:09.858 |   File 
tempest/api/compute/images/test_list_image_filters.py, line 45, in setUpClass
  2014-05-09 21:44:09.858 | cls.server1['id'], wait_until='ACTIVE')
  2014-05-09 21:44:09.858 |   File tempest/api/compute/base.py, line 302, 
in create_image_from_server
  2014-05-09 21:44:09.858 | kwargs['wait_until'])
  2014-05-09 21:44:09.858 |   File 
tempest/services/compute/xml/images_client.py, line 140, in 
wait_for_image_status
  2014-05-09 21:44:09.858 | waiters.wait_for_image_status(self, 
image_id, status)
  2014-05-09 21:44:09.858 |   File tempest/common/waiters.py, line 129, 
in wait_for_image_status
  2014-05-09 21:44:09.858 | raise exceptions.TimeoutException(message)
  2014-05-09 21:44:09.859 | TimeoutException: Request timed out
  2014-05-09 21:44:09.859 | Details: (ListImageFiltersTestXML:setUpClass) 
Image 20b6e7a9-f65d-4d17-b025-59f9237ff8cb failed to reach ACTIVE status within 
the required time (196 s). Current status: SAVING.

  logstash:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiZmFpbGVkIHRvIHJlYWNoIEFDVElWRSBzdGF0dXMgd2l0aGluIHRoZSByZXF1aXJlZCB0aW1lXCIgQU5EIGZpbGVuYW1lOlwiY29uc29sZS5odG1sXCIgQU5EIG1lc3NhZ2U6XCJDdXJyZW50IHN0YXR1czogU0FWSU5HXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImFsbCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDA0MTEyNDcwMzksIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

  message:failed to reach ACTIVE status within the required time AND
  filename:console.html AND message:Current status: SAVING

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1320617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1320628] [NEW] Double powering-off state confuses the clients and causes gate failure

2014-05-18 Thread Attila Fazekas
Public bug reported:

http://logs.openstack.org/52/73152/8/check/check-tempest-dsvm-
full/9352c04/console.html.gz#_2014-05-13_18_12_38_547

At the client side the only way to know that an instance action is doable is
to make sure the status is a permanent status like ACTIVE or SHUTOFF and that
no action is in progress, i.e. the task-state is None.

In the above linked case tempest stopped the instance and the instance
reached SHUTOFF/None:
'State transition ACTIVE/powering-off ==> SHUTOFF/None after 10 second wait'

Cool, at this point we can start the instance, right? No, another attribute
needs to be checked.

The start attempt was rewarded with a 409:
 u'Instance 7bc9de3b-1960-476f-b964-2ab2da986ec7 in task_state powering-off.
Cannot start while the instance is in this state'

The line below indicates the task state silently moved back to
SHUTOFF/powering-off before the 'start' attempt:

2014-05-13 18:09:13,610 State transition SHUTOFF/powering-off ==>
SHUTOFF/None after 1 second wait

Please do not set the 'None' task-state when the 'powering-off' is not
finished.
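
For reference, a simplified version of the client-side wait loop this report
assumes (hypothetical helper; the real tempest waiter is more involved, and
get_server stands in for a 'nova show'-style call):

    import time

    def wait_until_stable(get_server, timeout=196, interval=1):
        # Poll until the server has a permanent status and no task in
        # progress; per this report, that is the only signal a client has.
        deadline = time.time() + timeout
        while time.time() < deadline:
            server = get_server()
            if (server["status"] in ("ACTIVE", "SHUTOFF")
                    and server.get("OS-EXT-STS:task_state") is None):
                return server
            time.sleep(interval)
        raise TimeoutError("server never reached a stable state")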

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1320628

Title:
  Double powering-off state confuses the clients and causes gate failure

Status in OpenStack Compute (Nova):
  New

Bug description:
  http://logs.openstack.org/52/73152/8/check/check-tempest-dsvm-
  full/9352c04/console.html.gz#_2014-05-13_18_12_38_547

  At the client side the only way to know that an instance action is doable
  is to make sure the status is a permanent status like ACTIVE or SHUTOFF
  and that no action is in progress, i.e. the task-state is None.

  In the above linked case tempest stopped the instance and the instance
  reached SHUTOFF/None:
  'State transition ACTIVE/powering-off ==> SHUTOFF/None after 10 second
  wait'

  Cool, at this point we can start the instance, right? No, another
  attribute needs to be checked.

  The start attempt was rewarded with a 409:
   u'Instance 7bc9de3b-1960-476f-b964-2ab2da986ec7 in task_state powering-off.
  Cannot start while the instance is in this state'

  The line below indicates the task state silently moved back to
  SHUTOFF/powering-off before the 'start' attempt:

  2014-05-13 18:09:13,610 State transition SHUTOFF/powering-off ==>
  SHUTOFF/None after 1 second wait

  Please do not set the 'None' task-state when the 'powering-off' is not
  finished.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1320628/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1320655] [NEW] UnicodeDecodeError in the nova gate logs

2014-05-18 Thread Attila Fazekas
Public bug reported:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIlVuaWNvZGVEZWNvZGVFcnJvclwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAwNDI3ODM3NzE0LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9
message: UnicodeDecodeError

Looks like n-cpu tries to log something that is not unicode.
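
A minimal illustration (made-up data, not from the gate logs) of the failure
mode: formatting non-UTF-8 bytes into a unicode log message raises
UnicodeDecodeError unless a replacement policy is used:

    data = b"\xff\xfe guest console noise"   # bytes that are not valid UTF-8
    try:
        print(u"console output: %s" % data.decode("utf-8"))
    except UnicodeDecodeError as exc:
        print("strict decode fails:", exc)
    # Decoding with errors="replace" avoids the exception.
    print(u"console output: %s" % data.decode("utf-8", errors="replace"))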

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1320655

Title:
  UnicodeDecodeError in the nova gate logs

Status in OpenStack Compute (Nova):
  New

Bug description:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIlVuaWNvZGVEZWNvZGVFcnJvclwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAwNDI3ODM3NzE0LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9
  message: UnicodeDecodeError

  Looks like n-cpu tries to log something that is not unicode.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1320655/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1320670] [NEW] 404 on GET /v3/OS-SIMPLE-CERT/ca at grenade

2014-05-18 Thread Attila Fazekas
Public bug reported:

In this [1] grenade job, both  test_get_ca_certificate and
test_get_certificates failed [2].

The 404 is also visible in the keystone log [3].
I do not know why the CA cert is not always visible, but it should be.

[1] http://logs.openstack.org/29/93029/1/check/check-grenade-dsvm/320139c/
[2] 
http://logs.openstack.org/29/93029/1/check/check-grenade-dsvm/320139c/console.html#_2014-05-18_15_23_57_034
[3]http://logs.openstack.org/29/93029/1/check/check-grenade-dsvm/320139c/logs/new/screen-key.txt.gz#_2014-05-18_15_23_57_006

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1320670

Title:
  404 on GET /v3/OS-SIMPLE-CERT/ca at grenade

Status in OpenStack Identity (Keystone):
  New

Bug description:
  In this [1] grenade job, both  test_get_ca_certificate and
  test_get_certificates failed [2].

  The 404 is also visible in the keystone log [3].
  I do not know why the CA cert is not always visible, but it should be.

  [1] http://logs.openstack.org/29/93029/1/check/check-grenade-dsvm/320139c/
  [2] 
http://logs.openstack.org/29/93029/1/check/check-grenade-dsvm/320139c/console.html#_2014-05-18_15_23_57_034
  
[3]http://logs.openstack.org/29/93029/1/check/check-grenade-dsvm/320139c/logs/new/screen-key.txt.gz#_2014-05-18_15_23_57_006

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1320670/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1320670] Re: 404 on GET /v3/OS-SIMPLE-CERT/ca at grenade

2014-05-18 Thread Attila Fazekas
This test was just recently merged (https://review.openstack.org/#/c/87750/);
it looks like grenade is not running on tempest changes.

** Also affects: grenade
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1320670

Title:
  404 on GET /v3/OS-SIMPLE-CERT/ca at grenade

Status in Grenade - OpenStack upgrade testing:
  New
Status in OpenStack Identity (Keystone):
  New

Bug description:
  In this [1] grenade job, both  test_get_ca_certificate and
  test_get_certificates failed [2].

  The 404 is also visible in the keystone log [3].
  I do not know why the CA cert is not always visible, but it should be.

  [1] http://logs.openstack.org/29/93029/1/check/check-grenade-dsvm/320139c/
  [2] 
http://logs.openstack.org/29/93029/1/check/check-grenade-dsvm/320139c/console.html#_2014-05-18_15_23_57_034
  
[3]http://logs.openstack.org/29/93029/1/check/check-grenade-dsvm/320139c/logs/new/screen-key.txt.gz#_2014-05-18_15_23_57_006

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1320670/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1320670] Re: 404 on GET /v3/OS-SIMPLE-CERT/ca at grenade

2014-05-18 Thread Attila Fazekas
** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1320670

Title:
  404 on GET /v3/OS-SIMPLE-CERT/ca at grenade

Status in Grenade - OpenStack upgrade testing:
  New
Status in OpenStack Identity (Keystone):
  New
Status in Tempest:
  New

Bug description:
  In this [1] grenade job, both  test_get_ca_certificate and
  test_get_certificates failed [2].

  The 404 is also visible in the keystone log [3].
  I do not know why the CA cert is not always visible, but it should be.

  [1] http://logs.openstack.org/29/93029/1/check/check-grenade-dsvm/320139c/
  [2] 
http://logs.openstack.org/29/93029/1/check/check-grenade-dsvm/320139c/console.html#_2014-05-18_15_23_57_034
  
[3]http://logs.openstack.org/29/93029/1/check/check-grenade-dsvm/320139c/logs/new/screen-key.txt.gz#_2014-05-18_15_23_57_006

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1320670/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1112912] Re: get_firewall_required should use VIF parameter from neutron

2014-05-17 Thread Attila Fazekas
tempest change: https://review.openstack.org/#/c/62702/

** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1112912

Title:
  get_firewall_required should use VIF parameter from neutron

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in Tempest:
  Fix Released

Bug description:
  This bug report is from the discussion of
  https://review.openstack.org/#/c/19126/17/nova/virt/libvirt/vif.py

  I'm going to file this as a bug for tracking issue

  The patch introduces get_firewall_required function.
  But the patch checks only conf file.
  This should use quantum VIF parameter.
  
https://github.com/openstack/quantum/blob/master/quantum/plugins/openvswitch/ovs_quantum_plugin.py#L513

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1112912/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1316959] [NEW] neutron unit test py27 failure

2014-05-07 Thread Attila Fazekas
Public bug reported:

This is likely some download/install related issue.
http://logs.openstack.org/18/92018/3/gate/gate-neutron-python27/b57b3c6/console.html

The py26 version of this job was successful.

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: openstack-ci
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1316959

Title:
  neutron unit test py27 failure

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Core Infrastructure:
  New

Bug description:
  This is likely some download/install related issue.
  
http://logs.openstack.org/18/92018/3/gate/gate-neutron-python27/b57b3c6/console.html

  The py26 version of this job was successful.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1316959/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1315467] [NEW] Neutron deletes the router interface instead of adding a floatingip

2014-05-02 Thread Attila Fazekas
Public bug reported:

After parsing a lot of log files related to the check failure, it looks like
the q-vpn service, at the time when I would expect it to add the floating IP,
destroys the router's qg- and qr- interfaces.

However, after the floating IP deletion request, the q-vpn service
restores the qg- and qr- interfaces.


tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario[compute,image,network,volume]
failed in 
http://logs.openstack.org/79/88579/2/check/check-tempest-dsvm-neutron-full/f76ee0e/console.html.

admin user: admin/9e02f14321454af6bb27587770f27d9b
admin tenant id: admin/413bb1232bca45069f3a3256839effa1

test user: TestMinimumBasicScenario-1306090821/c8dd95056c0b407e8dd168dbf410a66a
test Tenant:  
TestMinimumBasicScenario-993819377/2527b8222e3343bca9f70343e608880c
External Net : public/c29040d3-7e73-4a87-9f73-bb5cbe602afb
External subnet: public-subnet/ee754eb6-6194-4a25-a4cc-f9233d366c1e
Network: 
TestMinimumBasicScenario-1375749858-network/f029d7a8-54e0-484c-a215-cc34066ae830
Subnet: 
TestMinimumBasicScenario-1375749858-subnet/7edc72be-1207-4571-95d4-911223885ae7 
 10.100.0.0/28
Router 
id:TestMinimumBasicScenario-1375749858-router/08216822-5ee2-4313-be7e-dad2d84147db
Expected interfaces in the qrouter-08216822-5ee2-4313-be7e-dad2d84147db:
* lo 127.0.0.1
* qr-529eddd4-2c 10.100.0.1/28 iface_id: 529eddd4-2ca8-43ec-9cab-29c3a6632604, 
attached-mac: fa:16:3e:2a:f8:ba, ofport 166
* qg-9be8f502-93 172.24.4.85/24 iface-id: 9be8f502-9360-47dc-9eff-33c8743e7c2b  
attached-mac: fa:16:3e:be:a1:54, ofport 37

Floating IP: 172.24.4.87 (Never appears in the q-vpn log)
port: 
(net/subnet/port)(c29040d3-7e73-4a87-9f73-bb5cbe602afb/ee754eb6-6194-4a25-a4cc-f9233d366c1e/013b0b2d-80ed-403d-b380-6b6895ce34f5)
mac?: fa:16:3e:15:f2:57
floating ip uuid: cd84111e-af6a-4c26-af73-3167419c664a

Instance:
Ipv4: 10.100.0.2 mac: FA:16:3E:18:F1:69 
Instance uuid: 8a552eda-2fbd-4972-bfcf-cee7e6472871
iface_id/port_id: 4188c532-3265-4294-8b4e-9bbfe5a482e8
ovsdb_interface_uuid: 9d7b858b-745e-482b-b91a-1e9ae34fc545
intbr tag: 49

dhcp server dev: tap4c6c6e06-e4
ns: qdhcp-f029d7a8-54e0-484c-a215-cc34066ae830
ip: 10.100.0.3 mac: fa:16:3e:a2:f1:ea
intbr tag: 49

Host:
eth0: 10.7.16.229/15 mac: 02:16:3e:52:5d:ff


Router + router interface creation in the logs:
http://logs.openstack.org/79/88579/2/check/check-tempest-dsvm-neutron-full/f76ee0e/logs/tempest.txt.gz#_2014-05-01_19_49_46_924
http://logs.openstack.org/79/88579/2/check/check-tempest-dsvm-neutron-full/f76ee0e/logs/screen-q-svc.txt.gz#_2014-05-01_19_49_46_724

Floating IP create:
http://logs.openstack.org/79/88579/2/check/check-tempest-dsvm-neutron-full/f76ee0e/logs/screen-n-api.txt.gz#_2014-05-01_19_50_18_447
Floating IP associate:
http://logs.openstack.org/79/88579/2/check/check-tempest-dsvm-neutron-full/f76ee0e/logs/screen-n-api.txt.gz#_2014-05-01_19_50_18_814

q-svc starts destroying the router:
http://logs.openstack.org/79/88579/2/check/check-tempest-dsvm-neutron-full/f76ee0e/logs/screen-q-vpn.txt.gz#_2014-05-01_19_50_20_277
Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-c0cd93e7-5cb0-403b-a509-8e07b352b89d', 'ipsec', 'whack', '--ctlbase', 
'/opt/stack/data/neutron/ipsec/c0cd93e7-5cb0-403b-a509-8e07b352b89d/var/run/pluto',
 '--status']
Exit code: 1
Stdout: ''
Stderr: 'whack: Pluto is not running (no 
/opt/stack/data/neutron/ipsec/c0cd93e7-5cb0-403b-a509-8e07b352b89d/var/run/pluto.ctl)\n'
 execute /opt/stack/new/neutron/neutron/agent/linux/utils.py:74
2014-05-01 19:50:20.277 19279 DEBUG neutron.openstack.common.lockutils [-] 
Semaphore / lock released sync inner 
/opt/stack/new/neutron/neutron/openstack/common/lockutils.py:252
2014-05-01 19:50:20.277 19279 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-08216822-5ee2-4313-be7e-dad2d84147db', 'ip', '-o', 'link', 'show', 
'qr-529eddd4-2c'] create_process 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:48
2014-05-01 19:50:20.516 19279 DEBUG neutron.agent.linux.utils [-] 
Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-08216822-5ee2-4313-be7e-dad2d84147db', 'ip', '-o', 'link', 'show', 
'qr-529eddd4-2c']
Exit code: 0
Stdout: '605: qr-529eddd4-2c: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc 
noqueue state UNKNOWN \\link/ether fa:16:3e:2a:f8:ba brd 
ff:ff:ff:ff:ff:ff\n'
Stderr: '' execute /opt/stack/new/neutron/neutron/agent/linux/utils.py:74
2014-05-01 19:50:20.517 19279 DEBUG neutron.agent.linux.utils [-] Running 
command: ['ip', '-o', 'link', 'show', 'br-int'] create_process 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:48
2014-05-01 19:50:20.533 19279 DEBUG neutron.agent.linux.utils [-] 
Command: ['ip', '-o', 'link', 'show', 'br-int']
Exit code: 0
Stdout: '6: br-int: BROADCAST,MULTICAST mtu 1500 qdisc noop state DOWN \\
link/ether 

[Yahoo-eng-team] [Bug 1251784] Re: nova+neutron scheduling error: Connection to neutron failed: Maximum attempts reached

2014-04-24 Thread Attila Fazekas
** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251784

Title:
  nova+neutron scheduling error: Connection to neutron failed: Maximum
  attempts reached

Status in OpenStack Compute (Nova):
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  VMs are failing to schedule with the following error

  2013-11-15 20:50:21.405 ERROR nova.scheduler.filter_scheduler [req-
  d2c26348-53e6-448a-8975-4f22f4e89782 demo demo] [instance: c8069c13
  -593f-48fb-aae9-198961097eb2] Error from last host: devstack-precise-
  hpcloud-az3-662002 (node devstack-precise-hpcloud-az3-662002):
  [u'Traceback (most recent call last):\n', u'  File
  /opt/stack/new/nova/nova/compute/manager.py, line 1030, in
  _build_instance\nset_access_ip=set_access_ip)\n', u'  File
  /opt/stack/new/nova/nova/compute/manager.py, line 1439, in _spawn\n
  LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n',
  u'  File /opt/stack/new/nova/nova/compute/manager.py, line 1436, in
  _spawn\nblock_device_info)\n', u'  File
  /opt/stack/new/nova/nova/virt/libvirt/driver.py, line 2100, in
  spawn\nadmin_pass=admin_password)\n', u'  File
  /opt/stack/new/nova/nova/virt/libvirt/driver.py, line 2451, in
  _create_image\ncontent=files, extra_md=extra_md,
  network_info=network_info)\n', u'  File
  /opt/stack/new/nova/nova/api/metadata/base.py, line 165, in
  __init__\n
  ec2utils.get_ip_info_for_instance_from_nw_info(network_info)\n', u'
  File /opt/stack/new/nova/nova/api/ec2/ec2utils.py, line 149, in
  get_ip_info_for_instance_from_nw_info\nfixed_ips =
  nw_info.fixed_ips()\n', u'  File
  /opt/stack/new/nova/nova/network/model.py, line 368, in
  _sync_wrapper\nself.wait()\n', u'  File
  /opt/stack/new/nova/nova/network/model.py, line 400, in wait\n
  self[:] = self._gt.wait()\n', u'  File /usr/local/lib/python2.7/dist-
  packages/eventlet/greenthread.py, line 168, in wait\nreturn
  self._exit_event.wait()\n', u'  File /usr/local/lib/python2.7/dist-
  packages/eventlet/event.py, line 120, in wait\n
  current.throw(*self._exc)\n', u'  File /usr/local/lib/python2.7/dist-
  packages/eventlet/greenthread.py, line 194, in main\nresult =
  function(*args, **kwargs)\n', u'  File
  /opt/stack/new/nova/nova/compute/manager.py, line 1220, in
  _allocate_network_async\ndhcp_options=dhcp_options)\n', u'  File
  /opt/stack/new/nova/nova/network/neutronv2/api.py, line 359, in
  allocate_for_instance\nnw_info =
  self._get_instance_nw_info(context, instance, networks=nets)\n', u'
  File /opt/stack/new/nova/nova/network/api.py, line 49, in wrapper\n
  res = f(self, context, *args, **kwargs)\n', u'  File
  /opt/stack/new/nova/nova/network/neutronv2/api.py, line 458, in
  _get_instance_nw_info\nnw_info =
  self._build_network_info_model(context, instance, networks)\n', u'
  File /opt/stack/new/nova/nova/network/neutronv2/api.py, line 1022,
  in _build_network_info_model\nsubnets =
  self._nw_info_get_subnets(context, port, network_IPs)\n', u'  File
  /opt/stack/new/nova/nova/network/neutronv2/api.py, line 924, in
  _nw_info_get_subnets\nsubnets =
  self._get_subnets_from_port(context, port)\n', u'  File
  /opt/stack/new/nova/nova/network/neutronv2/api.py, line 1066, in
  _get_subnets_from_port\ndata =
  neutronv2.get_client(context).list_ports(**search_opts)\n', u'  File
  /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py,
  line 111, in with_params\nret = self.function(instance, *args,
  **kwargs)\n', u'  File /opt/stack/new/python-
  neutronclient/neutronclient/v2_0/client.py, line 306, in list_ports\n
  **_params)\n', u'  File /opt/stack/new/python-
  neutronclient/neutronclient/v2_0/client.py, line 1250, in list\n
  for r in self._pagination(collection, path, **params):\n', u'  File
  /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py,
  line 1263, in _pagination\nres = self.get(path, params=params)\n',
  u'  File /opt/stack/new/python-
  neutronclient/neutronclient/v2_0/client.py, line 1236, in get\n
  headers=headers, params=params)\n', u'  File /opt/stack/new/python-
  neutronclient/neutronclient/v2_0/client.py, line 1228, in
  retry_request\nraise exceptions.ConnectionFailed(reason=_(Maximum
  attempts reached))\n', u'ConnectionFailed: Connection to neutron
  failed: Maximum attempts reached\n']

  
  Connection to neutron failed: Maximum attempts reached

  http://logs.openstack.org/96/56496/1/gate/gate-tempest-devstack-vm-
  neutron-
  isolated/8df6c6c/logs/screen-n-sch.txt.gz#_2013-11-15_20_50_21_405

  
  logstash query: Connection to neutron failed: Maximum attempts reached  AND 
filename:logs/screen-n-sch.txt

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1305892] [NEW] nova-manage db archive_deleted_rows fails with pgsql on low row count

2014-04-10 Thread Attila Fazekas
Public bug reported:

# nova-manage db archive_deleted_rows 10 fails with postgresql when
I do not have at least 10 rows to archive.

 
# nova delete 27d7de76-3d41-4b37-8980-2a783f8296ac
# nova list
+--++++-+--+
| ID   | Name   | Status | Task State | Power 
State | Networks |
+--++++-+--+
| 526d13d4-420d-4b5c-b469-bd997ef4da99 | server | ACTIVE | -  | Running 
| private=10.1.0.4 |
| d01ce4e4-a33d-4583-96a4-b9a942d08dd8 | server | ACTIVE | -  | Running 
| private=10.1.0.6 |
+--++++-+--+
# /usr/bin/nova-manage db archive_deleted_rows 1  # SUCCESS
# nova delete  526d13d4-420d-4b5c-b469-bd997ef4da99
# nova delete d01ce4e4-a33d-4583-96a4-b9a942d08dd8
# nova list
++--+++-+--+
| ID | Name | Status | Task State | Power State | Networks |
++--+++-+--+
++--+++-+--+
# /usr/bin/nova-manage db archive_deleted_rows 3 # FAILURE 
Command failed, please check log for more info
2014-04-10 13:40:06.716 CRITICAL nova [req-43b1f10f-9ece-4aae-8812-cd77f6556d38 
None None] ProgrammingError: (ProgrammingError) column locked_by is of type 
shadow_instances0locked_by but expression is of type instances0locked_by
LINE 1: ...ces.cell_name, instances.node, instances.deleted, instances
 ^
HINT:  You will need to rewrite or cast the expression.
 'INSERT INTO shadow_instances SELECT instances.created_at, 
instances.updated_at, instances.deleted_at, instances.id, 
instances.internal_id, instances.user_id, instances.project_id, 
instances.image_ref, instances.kernel_id, instances.ramdisk_id, 
instances.launch_index, instances.key_name, instances.key_data, 
instances.power_state, instances.vm_state, instances.memory_mb, 
instances.vcpus, instances.hostname, instances.host, instances.user_data, 
instances.reservation_id, instances.scheduled_at, instances.launched_at, 
instances.terminated_at, instances.display_name, instances.display_description, 
instances.availability_zone, instances.locked, instances.os_type, 
instances.launched_on, instances.instance_type_id, instances.vm_mode, 
instances.uuid, instances.architecture, instances.root_device_name, 
instances.access_ip_v4, instances.access_ip_v6, instances.config_drive, 
instances.task_state, instances.default_ephemeral_device, 
instances.default_swap_device, instances.progress, instances.au
 to_disk_config, instances.shutdown_terminate, instances.disable_terminate, 
instances.root_gb, instances.ephemeral_gb, instances.cell_name, instances.node, 
instances.deleted, instances.locked_by, instances.cleaned, 
instances.ephemeral_key_uuid \nFROM instances \nWHERE instances.deleted != 
%(deleted_1)s ORDER BY instances.id \n LIMIT %(param_1)s' {'param_1': 1, 
'deleted_1': 0}
2014-04-10 13:40:06.716 14789 TRACE nova Traceback (most recent call last):
2014-04-10 13:40:06.716 14789 TRACE nova   File /usr/bin/nova-manage, line 10, in <module>
2014-04-10 13:40:06.716 14789 TRACE nova sys.exit(main())
2014-04-10 13:40:06.716 14789 TRACE nova   File 
/opt/stack/new/nova/nova/cmd/manage.py, line 1376, in main
2014-04-10 13:40:06.716 14789 TRACE nova ret = fn(*fn_args, **fn_kwargs)
2014-04-10 13:40:06.716 14789 TRACE nova   File 
/opt/stack/new/nova/nova/cmd/manage.py, line 902, in archive_deleted_rows
2014-04-10 13:40:06.716 14789 TRACE nova 
db.archive_deleted_rows(admin_context, max_rows)
2014-04-10 13:40:06.716 14789 TRACE nova   File 
/opt/stack/new/nova/nova/db/api.py, line 1915, in archive_deleted_rows
2014-04-10 13:40:06.716 14789 TRACE nova return 
IMPL.archive_deleted_rows(context, max_rows=max_rows)
2014-04-10 13:40:06.716 14789 TRACE nova   File 
/opt/stack/new/nova/nova/db/sqlalchemy/api.py, line 146, in wrapper
2014-04-10 13:40:06.716 14789 TRACE nova return f(*args, **kwargs)
2014-04-10 13:40:06.716 14789 TRACE nova   File 
/opt/stack/new/nova/nova/db/sqlalchemy/api.py, line 5647, in 
archive_deleted_rows
2014-04-10 13:40:06.716 14789 TRACE nova max_rows=max_rows - rows_archived)
2014-04-10 13:40:06.716 14789 TRACE nova   File 
/opt/stack/new/nova/nova/db/sqlalchemy/api.py, line 146, in wrapper
2014-04-10 13:40:06.716 14789 TRACE nova return f(*args, **kwargs)
2014-04-10 13:40:06.716 14789 TRACE nova   File 
/opt/stack/new/nova/nova/db/sqlalchemy/api.py, line 5617, in 
archive_deleted_rows_for_table
2014-04-10 13:40:06.716 14789 TRACE nova result_insert = 
conn.execute(insert_statement)
2014-04-10 13:40:06.716 14789 TRACE nova   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py, 
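
For context (an aside, not part of the original report): on PostgreSQL every
SQLAlchemy Enum column gets its own named type, so instances.locked_by (type
instances0locked_by) and shadow_instances.locked_by (type
shadow_instances0locked_by) are distinct types, and the plain
INSERT ... SELECT built by archive_deleted_rows is rejected exactly as shown
above. A minimal, self-contained sketch of the failure and of one cast that
PostgreSQL accepts (the table definitions, column set and connection URL are
simplified placeholders, not the real nova schema or the eventual fix):

    from sqlalchemy import (Column, Enum, Integer, MetaData, String, Table,
                            cast, create_engine, insert, select)

    engine = create_engine('postgresql://user:secret@localhost/scratch')
    metadata = MetaData()

    instances = Table(
        'instances', metadata,
        Column('id', Integer, primary_key=True),
        Column('deleted', Integer, default=0),
        # On PostgreSQL this creates a named enum type "instances0locked_by".
        Column('locked_by', Enum('owner', 'admin',
                                 name='instances0locked_by')),
    )

    shadow_instances = Table(
        'shadow_instances', metadata,
        Column('id', Integer, primary_key=True),
        Column('deleted', Integer, default=0),
        # ...and this creates a different type "shadow_instances0locked_by".
        Column('locked_by', Enum('owner', 'admin',
                                 name='shadow_instances0locked_by')),
    )

    metadata.create_all(engine)
    cols = ['id', 'deleted', 'locked_by']

    # Same shape as the statement archive_deleted_rows builds; PostgreSQL
    # rejects it because the two locked_by enum types differ.
    failing = insert(shadow_instances).from_select(
        cols,
        select(instances.c.id, instances.c.deleted, instances.c.locked_by))

    # Casting the enum through text lets PostgreSQL re-parse the value as
    # the shadow table's own enum type.
    working = insert(shadow_instances).from_select(
        cols,
        select(instances.c.id, instances.c.deleted,
               cast(cast(instances.c.locked_by, String),
                    shadow_instances.c.locked_by.type)))

    with engine.begin() as conn:
        conn.execute(working)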

[Yahoo-eng-team] [Bug 1271706] Re: Misleading warning about MySQL TRADITIONAL mode not being set

2014-04-03 Thread Attila Fazekas
** Also affects: heat
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271706

Title:
  Misleading warning about MySQL TRADITIONAL mode not being set

Status in OpenStack Telemetry (Ceilometer):
  New
Status in Orchestration API (Heat):
  New
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  common.db.sqlalchemy.session logs a scary warning if create_engine is
  not being called with mysql_traditional_mode set to True:

  WARNING keystone.openstack.common.db.sqlalchemy.session [-] This
  application has not enabled MySQL traditional mode, which means silent
  data corruption may occur. Please encourage the application developers
  to enable this mode.

  That warning is problematic for several reasons:

  (1) It suggests the wrong mode. Arguably TRADITIONAL is better than the 
default, but STRICT_ALL_TABLES would actually be more useful.
  (2) The user has no way to fix the warning.
  (3) The warning does not take into account that a global sql-mode may in fact 
have been set via the server-side MySQL configuration, in which case the 
session *may* in fact be using TRADITIONAL mode all along, despite the warning 
saying otherwise. This makes (2) even worse.

  My suggested approach would be:
  - Remove the warning.
  - Make the SQL mode a config option.

  Patches forthcoming.
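
  A minimal sketch of that direction (an aside, not the forthcoming patch;
  the helper name, option plumbing and default below are placeholders): make
  the session SQL mode configurable and set it on every new MySQL connection
  instead of warning about it.

    from sqlalchemy import create_engine, event

    def create_engine_with_sql_mode(url, sql_mode='STRICT_ALL_TABLES'):
        # sql_mode would come from configuration; an empty value means
        # "leave the server-side default alone".
        engine = create_engine(url)
        if sql_mode:
            @event.listens_for(engine, 'connect')
            def _set_sql_mode(dbapi_conn, connection_record):
                cursor = dbapi_conn.cursor()
                cursor.execute('SET SESSION sql_mode = %s', [sql_mode])
                cursor.close()
        return engine

    # engine = create_engine_with_sql_mode(
    #     'mysql+pymysql://nova:secret@127.0.0.1/nova',
    #     sql_mode='TRADITIONAL')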

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1271706/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1182883] Re: List servers matching a regex fails with Quantum

2014-04-01 Thread Attila Fazekas
** Changed in: neutron
   Status: Invalid = Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1182883

Title:
  List servers matching a regex fails with Quantum

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Invalid

Bug description:
  The test
  
tempest.api.compute.servers.test_list_server_filters:ListServerFiltersTestXML.test_list_servers_filtered_by_ip_regex
  tries to search for a server using only a fragment of its IP (GET
  http://XX/v2/$Tenant/servers?ip=10.0.), which triggers the following
  Quantum request:
  http://XX/v2.0/ports.json?fixed_ips=ip_address%3D10.0. But it seems
  this regex search is not supported by Quantum, so the tempest test
  fails.
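
  For illustration only (an aside, not from the original report; the
  credentials and prefix below are placeholders): Neutron interprets
  fixed_ips=ip_address=... as an exact match rather than a prefix or regex
  match, so the ?ip=10.0. fragment cannot be pushed down to the port query
  and would have to be matched on the client side, roughly like this:

    from neutronclient.v2_0 import client as neutron_client

    # Placeholder keystone v2-style credentials.
    neutron = neutron_client.Client(username='demo',
                                    password='secret',
                                    tenant_name='demo',
                                    auth_url='http://127.0.0.1:5000/v2.0')

    prefix = '10.0.'

    # What nova asks for today: an exact ip_address match, which cannot
    # express "addresses starting with 10.0.".
    exact = neutron.list_ports(fixed_ips='ip_address=' + prefix)['ports']

    # Fragment matching done client-side over all ports instead.
    matching_device_ids = set()
    for port in neutron.list_ports()['ports']:
        if any(prefix in ip['ip_address'] for ip in port['fixed_ips']):
            matching_device_ids.add(port['device_id'])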

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1182883/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271706] Re: Misleading warning about MySQL TRADITIONAL mode not being set

2014-04-01 Thread Attila Fazekas
** Also affects: ceilometer
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271706

Title:
  Misleading warning about MySQL TRADITIONAL mode not being set

Status in OpenStack Telemetry (Ceilometer):
  New
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  common.db.sqlalchemy.session logs a scary warning if create_engine is
  not being called with mysql_traditional_mode set to True:

  WARNING keystone.openstack.common.db.sqlalchemy.session [-] This
  application has not enabled MySQL traditional mode, which means silent
  data corruption may occur. Please encourage the application developers
  to enable this mode.

  That warning is problematic for several reasons:

  (1) It suggests the wrong mode. Arguably TRADITIONAL is better than the 
default, but STRICT_ALL_TABLES would actually be more useful.
  (2) The user has no way to fix the warning.
  (3) The warning does not take into account that a global sql-mode may in fact 
have been set via the server-side MySQL configuration, in which case the 
session *may* in fact be using TRADITIONAL mode all along, despite the warning 
saying otherwise. This makes (2) even worse.

  My suggested approach would be:
  - Remove the warning.
  - Make the SQL mode a config option.

  Patches forthcoming.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1271706/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1292782] [NEW] test_create_backup 500 on image get

2014-03-15 Thread Attila Fazekas
Public bug reported:

Logstash query:
message: Object GET failed AND filename:logs/screen-g-api.txt

53 failures in the past 7 days.


http://logs.openstack.org/43/76543/1/gate/gate-tempest-dsvm-full/30a3ee1/console.html#_2014-03-14_18_26_47_575
Test runner worker pid: 1541


http://logs.openstack.org/43/76543/1/gate/gate-tempest-dsvm-full/30a3ee1/logs/tempest.txt.gz#_2014-03-14_17_55_47_916
http://logs.openstack.org/43/76543/1/gate/gate-tempest-dsvm-full/30a3ee1/logs/screen-g-api.txt.gz#_2014-03-14_17_55_47_912
http://logs.openstack.org/43/76543/1/gate/gate-tempest-dsvm-full/30a3ee1/logs/screen-g-api.txt.gz#_2014-03-14_17_55_47_912

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1292782

Title:
  test_create_backup 500 on image get

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Logstash query:
  message: Object GET failed AND filename:logs/screen-g-api.txt

  53 failures in the past 7 days.


  
http://logs.openstack.org/43/76543/1/gate/gate-tempest-dsvm-full/30a3ee1/console.html#_2014-03-14_18_26_47_575
  Test runner worker pid: 1541

  
  
http://logs.openstack.org/43/76543/1/gate/gate-tempest-dsvm-full/30a3ee1/logs/tempest.txt.gz#_2014-03-14_17_55_47_916
  
http://logs.openstack.org/43/76543/1/gate/gate-tempest-dsvm-full/30a3ee1/logs/screen-g-api.txt.gz#_2014-03-14_17_55_47_912
  
http://logs.openstack.org/43/76543/1/gate/gate-tempest-dsvm-full/30a3ee1/logs/screen-g-api.txt.gz#_2014-03-14_17_55_47_912

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1292782/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

