[Yahoo-eng-team] [Bug 1671548] [NEW] Updating mac_address of port doesn't update its autoconfigured IPv6 address

2017-03-09 Thread Derek Higgins
Public bug reported:

PUT /v2.0/ports/d38564ff-8a98-4a21-a162-9b2841c78ebc.json HTTP/1.1
...
{"port": {"mac_address": "fa:16:3e:d2:03:61"}}


This updates the port's MAC address but doesn't update the IP address.
When using slaac or stateless address mode it should, as the IP address is
derived from the MAC address.
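
For reference, the slaac/stateless fixed IP embeds a modified EUI-64 interface
identifier derived from the MAC, so a MAC change implies a different address on
the same prefix. A minimal sketch of the derivation, assuming a made-up
2001:db8::/64 subnet prefix:

    import ipaddress

    def slaac_address(prefix, mac):
        # Modified EUI-64: flip the universal/local bit of the first octet
        # and insert ff:fe between the OUI and the NIC-specific bytes.
        octets = [int(b, 16) for b in mac.split(':')]
        octets[0] ^= 0x02
        iid = bytes(octets[:3] + [0xff, 0xfe] + octets[3:])
        net = ipaddress.IPv6Network(prefix)
        return ipaddress.IPv6Address(int(net.network_address) | int.from_bytes(iid, 'big'))

    # fa:16:3e:d2:03:61 -> 2001:db8::f816:3eff:fed2:361
    print(slaac_address('2001:db8::/64', 'fa:16:3e:d2:03:61'))

After the PUT above, the fixed IP on a slaac or stateless subnet should
therefore move to the address derived from the new MAC; currently it stays at
the old one.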

Version - Master from 20170127

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1671548

Title:
  Updating mac_address of port doesn't update its autoconfigured IPv6
  address

Status in neutron:
  New

Bug description:
  PUT /v2.0/ports/d38564ff-8a98-4a21-a162-9b2841c78ebc.json HTTP/1.1
  ...
  {"port": {"mac_address": "fa:16:3e:d2:03:61"}}

  
  This updates the port's MAC address but doesn't update the IP address.
  When using slaac or stateless address mode it should, as the IP address is
  derived from the MAC address.

  Version - Master from 20170127

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1671548/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1633447] [NEW] nova stop/start or reboot --hard resets uefi nvram

2016-10-14 Thread Derek Higgins
Public bug reported:

When using nova to boot UEFI instances, in certain circumstances the NVRAM is
cleared.

e.g. on a deployed node my NVRAM is set to boot from the grub installed
on the EFI partition

[root@t1 boot]# efibootmgr 
Timeout: 0 seconds
BootOrder: 0004,0002,0000,0001,0003
Boot0000* EFI Floppy
Boot0001* EFI Floppy 1
Boot0002* EFI Hard Drive
Boot0003* EFI Network
Boot0004* centos


This is working; I can run
> nova reboot dbdc6b36-1f17-4722-89e5-117986b10059

but if I run a nova reboot --hard or a combination of nova stop/start
then the libvirt domain is redefined. As part of this process the NVRAM
is reset, the boot process stalls at the boot menu and I have to select
"boot from file".

[root@t1 boot]# efibootmgr 
Timeout: 0 seconds
BootOrder: 0002,0000,0001,0003
Boot0000* EFI Floppy
Boot0001* EFI Floppy 1
Boot0002* EFI Hard Drive
Boot0003* EFI Network
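
As a stop-gap (not a fix for the domain redefinition itself), the lost entry
can be re-created from inside the instance with efibootmgr; a rough sketch,
where the disk, partition and loader path are assumptions rather than values
from this report:

    import subprocess

    # Re-create the "centos" NVRAM entry that disappears on hard reboot.
    # /dev/vda, partition 1 and the shim path are placeholders - adjust
    # to match the actual ESP layout of the instance.
    subprocess.run([
        'efibootmgr', '--create',
        '--disk', '/dev/vda', '--part', '1',
        '--label', 'centos',
        '--loader', '\\EFI\\centos\\shimx64.efi',
    ], check=True)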

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1633447

Title:
  nova stop/start or reboot --hard resets uefi nvram

Status in OpenStack Compute (nova):
  New

Bug description:
  When using nova to boot UEFI instances, in certain circumstances the NVRAM
  is cleared.

  e.g. on a deployed node my NVRAM is set to boot from the grub
  installed on the EFI partition

  [root@t1 boot]# efibootmgr 
  Timeout: 0 seconds
  BootOrder: 0004,0002,0000,0001,0003
  Boot0000* EFI Floppy
  Boot0001* EFI Floppy 1
  Boot0002* EFI Hard Drive
  Boot0003* EFI Network
  Boot0004* centos

  
  This is working; I can run
  > nova reboot dbdc6b36-1f17-4722-89e5-117986b10059

  but if I run a nova reboot --hard or a combination of nova stop/start
  then the libvirt domain is redefined. As part of this process the
  NVRAM is reset, the boot process stalls at the boot menu and I have to
  select "boot from file".

  [root@t1 boot]# efibootmgr 
  Timeout: 0 seconds
  BootOrder: 0002,0000,0001,0003
  Boot0000* EFI Floppy
  Boot0001* EFI Floppy 1
  Boot0002* EFI Hard Drive
  Boot0003* EFI Network

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1633447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622545] [NEW] archive_deleted_rows isn't archiving instances

2016-09-12 Thread Derek Higgins
Public bug reported:

Running "nova-manage archive_deleted_rows ..." clears out little or none
of the deleted nova instances

For example, running the command several times:

$ nova-manage --debug db archive_deleted_rows --max_rows 10
--verbose

I get
+--------------------------+--------------------------+
| Table                    | Number of Rows Archived  |
+--------------------------+--------------------------+
| block_device_mapping     | 10108                    |
| instance_actions         | 31838                    |
| instance_actions_events  | 2                        |
| instance_extra           | 10108                    |
| instance_faults          | 459                      |
| instance_info_caches     | 10108                    |
| instance_metadata        | 6037                     |
| instance_system_metadata | 17883                    |
| reservations             | 9                        |
+--------------------------+--------------------------+

The only way I've been able to get any instances archived is to lower the
--max-rows parameter, but this only deletes a small number of the
instances and sometimes doesn't archive any at all.
In my nova-manage.log I have the following error

2016-09-12 09:22:21.658 17603 WARNING nova.db.sqlalchemy.api [-]
IntegrityError detected when archiving table instances:
(pymysql.err.IntegrityError) (1451, u'Cannot delete or update a parent
row: a foreign key constraint fails (`nova`.`instance_extra`, CONSTRAINT
`instance_extra_instance_uuid_fkey` FOREIGN KEY (`instance_uuid`)
REFERENCES `instances` (`uuid`))') [SQL: u'DELETE FROM instances WHERE
instances.id in (SELECT T1.id FROM (SELECT instances.id \nFROM instances
\nWHERE instances.deleted != %s ORDER BY instances.id \n LIMIT %s) as
T1)'] [parameters: (0, 787)]


mysql -e 'select count(*) from instances where deleted_at is not NULL;' nova
+----------+
| count(*) |
+----------+
|    70829 |
+----------+
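
The IntegrityError above suggests instances rows are being picked for archival
while their instance_extra children are still live. A quick check for that
state, assuming placeholder DB credentials:

    import pymysql

    # Count soft-deleted instances that still have live instance_extra rows
    # pointing at them; these are the rows that make the DELETE FROM
    # instances above violate the foreign key constraint.
    conn = pymysql.connect(host='localhost', user='nova', password='secret', db='nova')
    with conn.cursor() as cur:
        cur.execute(
            "SELECT COUNT(*) FROM instances i"
            " JOIN instance_extra e ON e.instance_uuid = i.uuid"
            " WHERE i.deleted != 0 AND e.deleted = 0")
        print(cur.fetchone()[0])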

I'm running mitaka with this patch installed
https://review.openstack.org/#/c/326730/1

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1622545

Title:
  archive_deleted_rows isn't archiving instances

Status in OpenStack Compute (nova):
  New

Bug description:
  Running "nova-manage archive_deleted_rows ..." clears out little or
  none of the deleted nova instances

  For example, running the command several times:

  $ nova-manage --debug db archive_deleted_rows --max_rows 10
  --verbose

  I get
  +--------------------------+--------------------------+
  | Table                    | Number of Rows Archived  |
  +--------------------------+--------------------------+
  | block_device_mapping     | 10108                    |
  | instance_actions         | 31838                    |
  | instance_actions_events  | 2                        |
  | instance_extra           | 10108                    |
  | instance_faults          | 459                      |
  | instance_info_caches     | 10108                    |
  | instance_metadata        | 6037                     |
  | instance_system_metadata | 17883                    |
  | reservations             | 9                        |
  +--------------------------+--------------------------+

  The only way I've been able to get any instances archived is to lower
  the --max-rows parameter, but this only deletes a small number of the
  instances and sometimes doesn't archive any at all.

  In my nova-manage.log I have the following error

  2016-09-12 09:22:21.658 17603 WARNING nova.db.sqlalchemy.api [-]
  IntegrityError detected when archiving table instances:
  (pymysql.err.IntegrityError) (1451, u'Cannot delete or update a parent
  row: a foreign key constraint fails (`nova`.`instance_extra`,
  CONSTRAINT `instance_extra_instance_uuid_fkey` FOREIGN KEY
  (`instance_uuid`) REFERENCES `instances` (`uuid`))') [SQL: u'DELETE
  FROM instances WHERE instances.id in (SELECT T1.id FROM (SELECT
  instances.id \nFROM instances \nWHERE instances.deleted != %s ORDER BY
  instances.id \n LIMIT %s) as T1)'] [parameters: (0, 787)]

  
  mysql -e 'select count(*) from instances where deleted_at is not NULL;' nova
  +----------+
  | count(*) |
  +----------+
  |    70829 |
  +----------+

  I'm running mitaka with this patch installed
  https://review.openstack.org/#/c/326730/1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1622545/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1226342] Re: nova delete when a baremetal node is not responding to power management leaves the node orphaned

2016-05-03 Thread Derek Higgins
TripleO has moved to Ironic.
Closing this bug; please feel free to reopen it if you disagree.

** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1226342

Title:
  nova delete when a baremetal node is not responding to power
  management leaves the node orphaned

Status in OpenStack Compute (nova):
  Won't Fix
Status in tripleo:
  Fix Released

Bug description:
  If you nova delete an instance on baremetal and the baremetal power
  manager fails for some reason, you end up with a stale instance_uuid
  in the bm_nodes table. This is unrecoverable via the API - db surgery
  is needed.

  To reproduce, configure a bad power manager, nova boot something on
  bm, then nova delete, and check the DB.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1226342/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513879] [NEW] NeutronClientException: 404 Not Found

2015-11-06 Thread Derek Higgins
   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
320, in _pagination
 res = self.get(path, params=params)
   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
293, in get
 headers=headers, params=params)
   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
270, in retry_request
 headers=headers, params=params)
   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
211, in do_request
 self._handle_fault_response(status_code, replybody)
   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
185, in _handle_fault_response
 exception_handler_v20(status_code, des_error_body)
   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
83, in exception_handler_v20
 message=message)
 NeutronClientException: 404 Not Found

 The resource could not be found.

** Affects: nova
 Importance: Undecided
 Assignee: Derek Higgins (derekh)
 Status: In Progress

** Affects: tripleo
 Importance: Undecided
 Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1513879

Title:
  NeutronClientException: 404 Not Found

Status in OpenStack Compute (nova):
  In Progress
Status in tripleo:
  New

Bug description:
  TripleO isn't currently working with trunk nova; the undercloud is
  failing to build overcloud instances, and nova-compute is showing this
  exception:


  Nov 05 13:10:45 instack.localdomain nova-compute[21338]: 2015-11-05
  13:10:45.163 21338 ERROR nova.virt.ironic.driver [req-7df4cae6-f00a-
  41a2-91e0-db1e6f130059 a800cb834fbd4a70915e2272dce924ac
  102a2b78e079410f9afd8b8b46278c19 - - -] Error preparing deploy for
  instance 9ae5b605-58e3-40ee-b944-56cbf5806e51 on baremetal node
  f5c30846-4ada-444e-85d9-6e3be2a74782.

  
  Nov 05 13:10:45 instack.localdomain nova-compute[21338]: 2015-11-05 
13:10:45.434 21338 DEBUG nova.virt.ironic.driver 
[req-7df4cae6-f00a-41a2-91e0-db1e6f130059 a800cb834fbd4a70915e2272dce924ac 
102a2b78e079410f9afd8b8b46278c19 - - -] unplug: 
instance_uuid=9ae5b605-58e3-40ee-b944-56cbf5806e51 vif=[] _unplug_vifs 
/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:1093
   Instance failed to spawn
   Traceback (most recent call last):
 File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 
2165, in _build_resources
   yield resources
 File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 
2012, in _build_and_run_instance
   block_device_info=block_device_info)
 File "/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 
791, in spawn
   flavor=flavor)
 File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 197, 
in __exit__
   six.reraise(self.type_, self.value, self.tb)
 File "/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 
782, in spawn
   self._plug_vifs(node, instance, network_info)
 File "/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 
1058, in _plug_vifs
   network_info_str = str(network_info)
 File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 515, 
in __str__
   return self._sync_wrapper(fn, *args, **kwargs)
 File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 498, 
in _sync_wrapper
   self.wait()
 File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 530, 
in wait
   self[:] = self._gt.wait()
 File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 175, 
in wait
   return self._exit_event.wait()
 File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 125, in 
wait
   current.throw(*self._exc)
 File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 214, 
in main
   result = function(*args, **kwargs)
 File "/usr/lib/python2.7/site-packages/nova/utils.py", line 1178, in 
context_wrapper
   return func(*args, **kwargs)
 File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 
1574, in _allocate_network_async
   six.reraise(*exc_info)
 File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 
1557, in _allocate_network_async
   dhcp_options=dhcp_options)
 File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", 
line 733, in allocate_for_instance
   update_cells=True)
 File "/usr/lib/python2.7/site-packages/nova/network/base_api.py", line 
244, in get_instance_nw_info
   result = self._get_instance_nw_info(context, instance, **kwargs)
 File "/usr/lib/python2

[Yahoo-eng-team] [Bug 1438133] [NEW] django-admin.py collectstatic failing to find static/themes/default

2015-03-30 Thread Derek Higgins
Public bug reported:

At some stage over the last 10 days tripleo has started to fail to run

DJANGO_SETTINGS_MODULE=openstack_dashboard.settings django-admin.py
collectstatic --noinput

Exiting with the following error

os-collect-config: dib-run-parts Fri Mar 27 03:56:44 UTC 2015 Running 
/usr/libexec/os-refresh-config/post-configure.d/14-horizon
os-collect-config: WARNING:root:dashboards and default_dashboard in 
(local_)settings is DEPRECATED now and may be unsupported in some future 
release. The preferred way to specify the order of dashboards and the default 
dashboard is the pluggable dashboard mechanism (in 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/openstack_dashboard/enabled,
 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/openstack_dashboard/local/enabled).
os-collect-config: WARNING:py.warnings:DeprecationWarning: The oslo namespace 
package is deprecated. Please use oslo_config instead.
os-collect-config: WARNING:py.warnings:DeprecationWarning: The oslo namespace 
package is deprecated. Please use oslo_serialization instead. 
os-collect-config: Traceback (most recent call last):
os-collect-config:   File /opt/stack/venvs/openstack/bin/django-admin.py, 
line 5, in <module>
os-collect-config: management.execute_from_command_line()
os-collect-config:   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/core/management/__init__.py,
 line 399, in execute_from_command_line
os-collect-config: utility.execute()
os-collect-config:   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/core/management/__init__.py,
 line 392, in execute
os-collect-config: self.fetch_command(subcommand).run_from_argv(self.argv)
os-collect-config:   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/core/management/base.py,
 line 242, in run_from_argv
os-collect-config: self.execute(*args, **options.__dict__)
os-collect-config:   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/core/management/base.py,
 line 285, in execute
os-collect-config: output = self.handle(*args, **options)
os-collect-config:   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/core/management/base.py,
 line 415, in handle
os-collect-config: return self.handle_noargs(**options)
os-collect-config:   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py,
 line 173, in handle_noargs
os-collect-config: collected = self.collect()
os-collect-config:   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py,
 line 103, in collect
os-collect-config: for path, storage in finder.list(self.ignore_patterns):
os-collect-config:   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/contrib/staticfiles/finders.py,
 line 106, in list
os-collect-config: for path in utils.get_files(storage, ignore_patterns):
os-collect-config:   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/contrib/staticfiles/utils.py,
 line 25, in get_files
os-collect-config: directories, files = storage.listdir(location)   
os-collect-config:   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/core/files/storage.py,
 line 249, in listdir
os-collect-config: for entry in os.listdir(path):
os-collect-config: OSError: [Errno 2] No such file or directory: 
'/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/openstack_dashboard/local/static/themes/default'
os-collect-config: [2015-03-27 03:56:45,548] (os-refresh-config) [ERROR] during 
post-configure phase. [Command '['dib-run-parts', 
'/usr/libexec/os-refresh-config/post-configure.d']' returned non-zero exit 
status


example here
http://logs.openstack.org/05/168205/1/check-tripleo/check-tripleo-ironic-undercloud-precise-nonha/3f384bd/logs/undercloud-undercloud_logs/os-collect-config.txt.gz#_Mar_27_03_56_45
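
Until the theme packaging is sorted out, one possible workaround in the image
build is simply to create the directory collectstatic expects before running
it; a sketch, with the path taken from the traceback and the assumption that an
empty themes/default directory is enough for the staticfiles finder:

    import os

    theme_dir = ('/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/'
                 'openstack_dashboard/local/static/themes/default')
    # os.listdir() in the finder only fails because the directory is missing,
    # so an empty directory may be enough to let collectstatic carry on.
    if not os.path.isdir(theme_dir):
        os.makedirs(theme_dir)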

** Affects: horizon
 Importance: Undecided
 Status: New

** Affects: tripleo
 Importance: Critical
 Status: Triaged


** Tags: ci

** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1438133

Title:
  django-admin.py collectstatic failing to find static/themes/default

Status in OpenStack Dashboard (Horizon):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  At some stage over the last 10 days tripleo has started to fail to
  run

  DJANGO_SETTINGS_MODULE=openstack_dashboard.settings django-
  admin.py collectstatic --noinput

  Exiting with the following error

  os-collect-config: dib-run-parts Fri Mar 27 03:56:44 UTC 2015 Running 

[Yahoo-eng-team] [Bug 1423228] Re: L3 agent for nova compute could not be found

2015-02-18 Thread Derek Higgins
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1423228

Title:
  L3 agent for nova compute could not be found

Status in OpenStack Neutron (virtual network service):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  At some stage since Friday tripleo has started failing to get a
  floating IP for the demo instance, with the following error

  from
  
http://logs.openstack.org/40/156240/5/check-tripleo/check-tripleo-ironic-overcloud-f20-nonha/9d972f5/console.html

  2015-02-18 11:49:02.617 | ++ neutron floatingip-create ext-net --port-id 
5fe3c9b3-6c11-45e8-905b-e27509a8f247
  2015-02-18 11:49:04.605 | Agent with agent_type=L3 agent and 
host=ov-gdjrjqdrx2-1-mstyijedqzer-novacompute-kxvrtp4gk3nu could not be found

  It's hard to nail down exactly when this started as CI has been hit by
  a number of simultaneous regressions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1423228/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421835] Re: Timeout reached while waiting for callback for node

2015-02-18 Thread Derek Higgins
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421835

Title:
  Timeout reached while waiting for callback for node

Status in OpenStack Neutron (virtual network service):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  All of our overcloud jobs are failing this way now.  Appears to be the
  same symptoms as https://bugs.launchpad.net/bugs/1417026 but that bug
  claims to be fixed.  Looks like a bunch more stuff just merged to
  ironic, so chances are one of those is the culprit.  Will try some
  tempreverts to figure out which.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421835/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416321] Re: Attempt to boot from volume - no image supplied

2015-02-03 Thread Derek Higgins
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1416321

Title:
  Attempt to boot from volume - no image supplied

Status in OpenStack Compute (Nova):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  About 12 UTC 29/1/2015 all tripleo overcloud jobs started failing with

  
  2015-01-29 17:32:21.475 | + wait_for -w 50 --delay 5 -- neutron port-list -f 
csv -c id --quote none '|' grep id
  2015-01-29 17:33:11.364 | Timing out after 50 seconds:
  2015-01-29 17:33:11.364 | COMMAND=neutron port-list -f csv -c id --quote none 
| grep id
  2015-01-29 17:33:11.364 | OUTPUT=
  2015-01-29 17:33:11.364 | + cleanup

  
  reproducing this locally shows a failure in getting the block device
  | created | 2015-01-30T02:43:05Z |
  | fault   | {"message": "Build of instance 4c958085-7a95-4825-af04-cf574c3614c7
  |         |  aborted: Failure prepping block device.", "code": 500, "details": "
  |         |    File \"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/compute/manager.py\",
  |         |      line 2077, in _do_build_and_run_instance
  |         |        filter_properties)
  |         |    File \"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/compute/manager.py\",
  |         |      line 2192, in _build_and_run_instance
  |         |        'create.error', fault=e)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1416321/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416321] Re: Attempt to boot from volume - no image supplied

2015-01-30 Thread Derek Higgins
A revert of this nova commit seems to fix the problem
https://review.openstack.org/#/c/143054/

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1416321

Title:
  Attempt to boot from volume - no image supplied

Status in OpenStack Compute (Nova):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  About 12 UTC 29/1/2015 all tripleo overcloud jobs started failing with

  
  2015-01-29 17:32:21.475 | + wait_for -w 50 --delay 5 -- neutron port-list -f 
csv -c id --quote none '|' grep id
  2015-01-29 17:33:11.364 | Timing out after 50 seconds:
  2015-01-29 17:33:11.364 | COMMAND=neutron port-list -f csv -c id --quote none 
| grep id
  2015-01-29 17:33:11.364 | OUTPUT=
  2015-01-29 17:33:11.364 | + cleanup

  
  reproducing this locally shows a failure in getting the block device
  | created | 2015-01-30T02:43:05Z |
  | fault   | {"message": "Build of instance 4c958085-7a95-4825-af04-cf574c3614c7
  |         |  aborted: Failure prepping block device.", "code": 500, "details": "
  |         |    File \"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/compute/manager.py\",
  |         |      line 2077, in _do_build_and_run_instance
  |         |        filter_properties)
  |         |    File \"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/compute/manager.py\",
  |         |      line 2192, in _build_and_run_instance
  |         |        'create.error', fault=e)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1416321/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415043] Re: neutron-ovs-agent : sequence item 0: expected string, int found

2015-01-28 Thread Derek Higgins
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1415043

Title:
  neutron-ovs-agent : sequence item 0: expected string, int found

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  All overcloud ci tests failing

  Example : http://logs.openstack.org/63/148663/3/check-tripleo/check-
  tripleo-ironic-overcloud-f20-nonha/df5b0a9/logs/ov-rlixa6l4l4j-0
  -e3bvgksy2rux-Controller_logs/neutron-openvswitch-agent.txt.gz

  
  ERROR Exception during message handling: sequence item 0: expected string, 
int found
Traceback (most recent call last):  
 
File 
/opt/stack/venvs/openstack/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 137, in _dispatch_and_reply
incoming.message))
File 
/opt/stack/venvs/openstack/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 180, in _dispatch
return self._do_dispatch(endpoint, method, ctxt, args)  
   
File 
/opt/stack/venvs/openstack/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 126, in _do_dispatch
result = getattr(endpoint, method)(ctxt, **new_args)
 
File 
/opt/stack/venvs/openstack/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 343, in tunnel_update
tunnel_type)
File 
/opt/stack/venvs/openstack/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 1052, in _setup_tunnel_port
ofports = ','.join(self.tun_br_ofports[tunnel_type].values())   
   
TypeError: sequence item 0: expected string, int found


  Seems to be connected to changes to 
neutron/plugins/openvswitch/agent/ovs_neutron_agent.py in
  https://review.openstack.org/#/c/143709/20
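
  The TypeError comes from ','.join() being handed integer ofport values; a
  minimal illustration of the failure and of the obvious kind of fix (casting
  to str before joining), not necessarily the patch that eventually merged:

      ofports = {1: 5, 2: 7}.values()            # ofports stored as ints

      # ','.join(ofports)                        # TypeError: sequence item 0: expected string, int found
      print(','.join(str(p) for p in ofports))   # prints "5,7"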

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1415043/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404115] Re: can't copy 'neutron/tests/functional/contrib'

2014-12-24 Thread Derek Higgins
** Changed in: tripleo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404115

Title:
  can't copy 'neutron/tests/functional/contrib'

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in Taskflow for task-oriented systems.:
  New
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  This appeared in CI while installing neutron
  
http://logs.openstack.org/17/139217/10/check-tripleo/check-tripleo-ironic-undercloud-precise-nonha/3974020/console.html

  
  2014-12-19 03:48:18.034 | 
  2014-12-19 03:48:18.034 | creating 
build/lib.linux-x86_64-2.7/neutron/plugins/cisco/l3/configdrive_templates
  2014-12-19 03:48:18.034 | 
  2014-12-19 03:48:18.034 | copying 
neutron/plugins/cisco/l3/configdrive_templates/csr1kv_cfg_template -> 
build/lib.linux-x86_64-2.7/neutron/plugins/cisco/l3/configdrive_templates
  2014-12-19 03:48:18.034 | 
  2014-12-19 03:48:18.034 | copying neutron/plugins/ml2/drivers/arista/README 
-> build/lib.linux-x86_64-2.7/neutron/plugins/ml2/drivers/arista
  2014-12-19 03:48:18.034 | 
  2014-12-19 03:48:18.034 | error: can't copy 
'neutron/tests/functional/contrib': doesn't exist or not a regular file
  2014-12-19 03:48:18.034 | 
  2014-12-19 03:48:18.034 | 
  2014-12-19 03:48:18.034 | Cleaning up...

  probably related to https://review.openstack.org/#/c/142558/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404115/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1390427] Re: nova.conf config options were removed

2014-11-11 Thread Derek Higgins
** Changed in: tripleo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1390427

Title:
  nova.conf config options were removed

Status in OpenStack Compute (Nova):
  Won't Fix
Status in Oslo configuration management library:
  New
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  As of 2014-11-06 15:00 approx. all tripleo jobs are failing.

  A load of deprecated options have been removed from nova, causing the
  failure:

  https://review.openstack.org/#/c/132900/
  https://review.openstack.org/#/c/132887/
  https://review.openstack.org/#/c/132885/
  https://review.openstack.org/#/c/132901/

  
  They were deprecated but it looks like no warnings were being generated in the
  logs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1390427/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373430] Re: Error while compressing files

2014-09-24 Thread Derek Higgins
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1373430

Title:
  Error while compressing files

Status in OpenStack Dashboard (Horizon):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  All ci jobs failing

  Earliest Failure : 2014-09-24 09:51:55 UTC
  Example : 
http://logs.openstack.org/50/123150/3/check-tripleo/check-tripleo-ironic-undercloud-precise-nonha/3c60b32/console.html

  
  Sep 24 11:51:43 overcloud-controller0-dxjfgv3agarr os-collect-config[724]: 
dib-run-parts Wed Sep 24 11:51:43 UTC 2014 Running 
/opt/stack/os-config-refresh/post-configure.d/14-horizon
  Sep 24 11:51:53 overcloud-controller0-dxjfgv3agarr os-collect-config[724]: 
CommandError: An error occured during rendering 
/opt/stack/venvs/openstack/lib/python2.7/site-packages/horizon/templates/horizon/_scripts.html:
 'horizon/lib/bootstrap_datepicker/locales/bootstrap-datepicker..js' could not 
be found in the COMPRESS_ROOT 
'/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/static'
 or with staticfiles.
  Sep 24 11:51:53 overcloud-controller0-dxjfgv3agarr os-collect-config[724]: 
Found 'compress' tags in:
  Sep 24 11:51:53 overcloud-controller0-dxjfgv3agarr os-collect-config[724]: 
/opt/stack/venvs/openstack/lib/python2.7/site-packages/horizon/templates/horizon/_scripts.html
  Sep 24 11:51:53 overcloud-controller0-dxjfgv3agarr os-collect-config[724]: 
/opt/stack/venvs/openstack/lib/python2.7/site-packages/horizon/templates/horizon/_conf.html
  Sep 24 11:51:53 overcloud-controller0-dxjfgv3agarr os-collect-config[724]: 
/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/templates/_stylesheets.html
  Sep 24 11:51:53 overcloud-controller0-dxjfgv3agarr os-collect-config[724]: 
Compressing... [2014-09-24 11:51:53,459] (os-refresh-config) [ERROR] during 
post-configure phase. [Command '['dib-run-parts', 
'/opt/stack/os-config-refresh/post-configure.d']' returned non-zero exit status 
1]

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1373430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1351466] Re: can't copy '.../cisco_cfg_agent.ini': doesn't exist

2014-09-02 Thread Derek Higgins
** Changed in: tripleo
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1351466

Title:
  can't copy '.../cisco_cfg_agent.ini': doesn't exist

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  Started roughly 1800 UTC this evening

  2014-08-01 19:36:06.878 | error: can't copy
  'etc/neutron/plugins/cisco/cisco_cfg_agent.ini': doesn't exist or not
  a regular file

  http://logs.openstack.org/70/111370/1/check-tripleo/check-tripleo-
  ironic-undercloud-precise-nonha/3bc75ae/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1351466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1351466] Re: can't copy '.../cisco_cfg_agent.ini': doesn't exist

2014-08-11 Thread Derek Higgins
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1351466

Title:
  can't copy '.../cisco_cfg_agent.ini': doesn't exist

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  Started roughly 1800 UTC this evening

  2014-08-01 19:36:06.878 | error: can't copy
  'etc/neutron/plugins/cisco/cisco_cfg_agent.ini': doesn't exist or not
  a regular file

  http://logs.openstack.org/70/111370/1/check-tripleo/check-tripleo-
  ironic-undercloud-precise-nonha/3bc75ae/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1351466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1349774] Re: Error parsing _stylesheets.html

2014-08-11 Thread Derek Higgins
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1349774

Title:
  Error parsing _stylesheets.html

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  As of about 10 UTC last night, all tripleo jobs are failing

  http://logs.openstack.org/60/104060/2/check-tripleo/check-tripleo-
  novabm-overcloud-f20-nonha/ebd8814/logs/overcloud-controller0_logs/os-
  collect-config.txt.gz

  
  Jul 29 08:12:06 overcloud-controller0-6xxjtao24g7a os-collect-config[788]: 
dib-run-parts Tue Jul 29 08:12:06 UTC 2014 Running 
/opt/stack/os-config-refresh/post-configure.d/14-horizon
  Jul 29 08:12:10 overcloud-controller0-6xxjtao24g7a os-collect-config[788]: 
CommandError: An error occured during rendering 
/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/templates/_stylesheets.html:
 Error parsing block:
  Jul 29 08:12:10 overcloud-controller0-6xxjtao24g7a os-collect-config[788]: 1
  Jul 29 08:12:10 overcloud-controller0-6xxjtao24g7a os-collect-config[788]: 2

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1349774/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1349774] Re: Error parsing _stylesheets.html

2014-07-29 Thread Derek Higgins
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1349774

Title:
  Error parsing _stylesheets.html

Status in OpenStack Dashboard (Horizon):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  As of about 10 UTC last night, all tripleo jobs are failing

  http://logs.openstack.org/60/104060/2/check-tripleo/check-tripleo-
  novabm-overcloud-f20-nonha/ebd8814/logs/overcloud-controller0_logs/os-
  collect-config.txt.gz

  
  Jul 29 08:12:06 overcloud-controller0-6xxjtao24g7a os-collect-config[788]: 
dib-run-parts Tue Jul 29 08:12:06 UTC 2014 Running 
/opt/stack/os-config-refresh/post-configure.d/14-horizon
  Jul 29 08:12:10 overcloud-controller0-6xxjtao24g7a os-collect-config[788]: 
CommandError: An error occured during rendering 
/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/templates/_stylesheets.html:
 Error parsing block:
  Jul 29 08:12:10 overcloud-controller0-6xxjtao24g7a os-collect-config[788]: 1
  Jul 29 08:12:10 overcloud-controller0-6xxjtao24g7a os-collect-config[788]: 2

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1349774/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347795] Re: All baremetal instances going to ERROR state

2014-07-29 Thread Derek Higgins
** Changed in: tripleo
   Status: Triaged => Fix Released

** Changed in: tripleo
 Assignee: (unassigned) => Derek Higgins (derekh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1347795

Title:
  All baremetal instances going to ERROR state

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Triaged
Status in OpenStack Compute (Nova):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  As of 1300 UTC approx. all tripleo CI is failing to bring up instances

  looks like the commit that caused it is
  https://review.openstack.org/#/c/71557/

  just waiting for some CI to finish to confirm.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1347795/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348178] [NEW] test_list_security_groups_list_all_tenants_filter

2014-07-24 Thread Derek Higgins
Public bug reported:

Error during grenade in the gate

http://logs.openstack.org/33/109033/1/gate/gate-grenade-dsvm-partial-
ncpu/00379f7/logs/grenade.sh.txt.gz

2014-07-24 02:47:56.532 | 
tempest.api.compute.security_groups.test_security_groups.SecurityGroupsTestJSON.test_server_security_groups[gate,network,smoke]
18.994
2014-07-24 02:47:56.532 | 
2014-07-24 02:47:56.532 | ==
2014-07-24 02:47:56.532 | Failed 1 tests - output below:
2014-07-24 02:47:56.532 | ==
2014-07-24 02:47:56.533 | 
2014-07-24 02:47:56.533 | 
tempest.api.compute.admin.test_security_groups.SecurityGroupsTestAdminXML.test_list_security_groups_list_all_tenants_filter[gate,network,smoke]
2014-07-24 02:47:56.533 | 
---
2014-07-24 02:47:56.533 | 
2014-07-24 02:47:56.533 | Captured traceback:
2014-07-24 02:47:56.533 | ~~~
2014-07-24 02:47:56.533 | Traceback (most recent call last):
2014-07-24 02:47:56.533 |   File tempest/test.py, line 128, in wrapper
2014-07-24 02:47:56.533 | return f(self, *func_args, **func_kwargs)
2014-07-24 02:47:56.533 |   File 
tempest/api/compute/admin/test_security_groups.py, line 69, in 
test_list_security_groups_list_all_tenants_filter
2014-07-24 02:47:56.533 | description))
2014-07-24 02:47:56.533 |   File 
tempest/services/compute/xml/security_groups_client.py, line 73, in 
create_security_group
2014-07-24 02:47:56.533 | str(xml_utils.Document(security_group)))
2014-07-24 02:47:56.533 |   File tempest/common/rest_client.py, line 218, 
in post
2014-07-24 02:47:56.533 | return self.request('POST', url, 
extra_headers, headers, body)
2014-07-24 02:47:56.533 |   File tempest/common/rest_client.py, line 430, 
in request
2014-07-24 02:47:56.533 | resp, resp_body)
2014-07-24 02:47:56.533 |   File tempest/common/rest_client.py, line 526, 
in _error_checker
2014-07-24 02:47:56.533 | raise exceptions.ServerFault(message)
2014-07-24 02:47:56.533 | ServerFault: Got server fault
2014-07-24 02:47:56.533 | Details: The server has either erred or is 
incapable of performing the requested operation.
2014-07-24 02:47:56.533 | 
2014-07-24 02:47:56.534 | 
2014-07-24 02:47:56.534 | Captured pythonlogging:
2014-07-24 02:47:56.534 | ~~~
2014-07-24 02:47:56.534 | 2014-07-24 02:41:09,715 1886 INFO 
[tempest.common.rest_client] Request 
(SecurityGroupsTestAdminXML:test_list_security_groups_list_all_tenants_filter): 
200 POST http://127.0.0.1:5000/v2.0/tokens
2014-07-24 02:47:56.534 | 2014-07-24 02:41:09,791 1886 INFO 
[tempest.common.rest_client] Request 
(SecurityGroupsTestAdminXML:test_list_security_groups_list_all_tenants_filter): 
200 POST 
http://127.0.0.1:8774/v2/4cfeb6b7364e48fcbd7b930c9c3eea3c/os-security-groups 
0.075s
2014-07-24 02:47:56.534 | 2014-07-24 02:41:09,916 1886 INFO 
[tempest.common.rest_client] Request 
(SecurityGroupsTestAdminXML:test_list_security_groups_list_all_tenants_filter): 
200 POST 
http://127.0.0.1:8774/v2/4cfeb6b7364e48fcbd7b930c9c3eea3c/os-security-groups 
0.124s
2014-07-24 02:47:56.534 | 2014-07-24 02:41:10,163 1886 INFO 
[tempest.common.rest_client] Request 
(SecurityGroupsTestAdminXML:test_list_security_groups_list_all_tenants_filter): 
200 POST http://127.0.0.1:5000/v2.0/tokens
2014-07-24 02:47:56.534 | 2014-07-24 02:41:10,268 1886 INFO 
[tempest.common.rest_client] Request 
(SecurityGroupsTestAdminXML:test_list_security_groups_list_all_tenants_filter): 
200 POST 
http://127.0.0.1:8774/v2/6ef984821ac44a64b42a9f8a30e6a08b/os-security-groups 
0.104s
2014-07-24 02:47:56.534 | 2014-07-24 02:41:10,348 1886 INFO 
[tempest.common.rest_client] Request 
(SecurityGroupsTestAdminXML:test_list_security_groups_list_all_tenants_filter): 
500 POST 
http://127.0.0.1:8774/v2/6ef984821ac44a64b42a9f8a30e6a08b/os-security-groups 
0.079s
2014-07-24 02:47:56.534 | 2014-07-24 02:41:10,441 1886 INFO 
[tempest.common.rest_client] Request 
(SecurityGroupsTestAdminXML:_run_cleanups): 202 DELETE 
http://127.0.0.1:8774/v2/6ef984821ac44a64b42a9f8a30e6a08b/os-security-groups/6 
0.089s
2014-07-24 02:47:56.534 | 2014-07-24 02:41:10,568 1886 INFO 
[tempest.common.rest_client] Request 
(SecurityGroupsTestAdminXML:_run_cleanups): 202 DELETE 
http://127.0.0.1:8774/v2/4cfeb6b7364e48fcbd7b930c9c3eea3c/os-security-groups/4 
0.126s
2014-07-24 02:47:56.534 | 2014-07-24 02:41:10,847 1886 INFO 
[tempest.common.rest_client] Request 
(SecurityGroupsTestAdminXML:_run_cleanups): 202 DELETE 
http://127.0.0.1:8774/v2/4cfeb6b7364e48fcbd7b930c9c3eea3c/os-security-groups/3 
0.277s
2014-07-24 02:47:56.534 | 
2014-07-24 02:47:56.534 | 
2014-07-24 02:47:56.534 |

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this 

[Yahoo-eng-team] [Bug 1347795] Re: All baremetal instances going to ERROR state

2014-07-23 Thread Derek Higgins
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1347795

Title:
  All baremetal instances going to ERROR state

Status in OpenStack Compute (Nova):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  As of 1300 UTC approx. all tripleo CI is failing to bring up instances

  looks like the commit that caused it is
  https://review.openstack.org/#/c/71557/

  just waiting for some CI to finish to confirm.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1347795/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326289] Re: Failing to launch instances : Filter ComputeCapabilitiesFilter returned 0 hosts

2014-06-04 Thread Derek Higgins
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326289

Title:
  Failing to launch instances : Filter ComputeCapabilitiesFilter
  returned 0 hosts

Status in OpenStack Compute (Nova):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  Failure started between 1 and 2 AM UTC

  Running nova in debug mode shows the problem

  Jun 04 09:15:55 localhost nova-scheduler[9605]: 2014-06-04 09:15:55.259 9605 
DEBUG nova.filters [req-c37d26da-66de-4658-ba6f-a06a775f1a28 None] Filter 
ComputeFilter returned 1 host(s) get_filtered_objects 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/filters.py:88
  Jun 04 09:15:55 localhost nova-scheduler[9605]: 2014-06-04 09:15:55.259 9605 
DEBUG nova.scheduler.filters.compute_capabilities_filter 
[req-c37d26da-66de-4658-ba6f-a06a775f1a28 None] (seed, 
8f3d2259-ef0b-44fc-a0c4-4d5cc2ef1443) ram:3072 disk:40960 io_ops:0 instances:0 
fails instance_type extra_specs requirements host_passes 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/scheduler/filters/compute_capabilities_filter.py:72
  Jun 04 09:15:55 localhost nova-scheduler[9605]: 2014-06-04 09:15:55.260 9605 
INFO nova.filters [req-c37d26da-66de-4658-ba6f-a06a775f1a28 None] Filter 
ComputeCapabilitiesFilter returned 0 hosts

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1326289/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280692] [NEW] Keystone won't start

2014-02-15 Thread Derek Higgins
Public bug reported:

Keystone has started crashing during devtest with the error
CRITICAL keystone [-] ConfigFileNotFound: The Keystone configuration file 
keystone-paste.ini could not be found.

The timing and the code touched by this commit
https://review.openstack.org/#/c/73621 seem to suggest it's relevant.

paste_deploy.config_file now defaults to keystone-paste.ini; in keystone
we don't have a keystone-paste.ini as we have the paste config in
keystone.conf.

A commit to verify that reversing this would fix the problem
https://review.openstack.org/#/c/73838/1/elements/keystone/os-apply-config/etc/keystone/keystone.conf

shows CI passing again:
https://jenkins02.openstack.org/job/check-tripleo-seed-precise/359/

** Affects: keystone
 Importance: Undecided
 Status: New

** Affects: tripleo
 Importance: Critical
 Status: Triaged

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1280692

Title:
  Keystone won't start

Status in OpenStack Identity (Keystone):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  Keystone has started crashing during devtest with the error
  CRITICAL keystone [-] ConfigFileNotFound: The Keystone configuration file 
keystone-paste.ini could not be found.

  The timing and the code touched by this commit
  https://review.openstack.org/#/c/73621 seem to suggest it's relevant.

  paste_deploy.config_file now defaults to keystone-paste.ini; in
  keystone we don't have a keystone-paste.ini as we have the paste config
  in keystone.conf.

  A commit to verify that reversing this would fix the problem
  https://review.openstack.org/#/c/73838/1/elements/keystone/os-apply-config/etc/keystone/keystone.conf

  shows CI passing again:
  https://jenkins02.openstack.org/job/check-tripleo-seed-precise/359/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1280692/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1231930] Re: Rules disappear after 300 seconds of inactivity

2013-10-24 Thread Derek Higgins
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1231930

Title:
  Rules disappear after 300 seconds of inactivity

Status in OpenStack Neutron (virtual network service):
  New
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  Using Fedora 19 for TripleO

  In the TripleO overcloud I can successfully connect to VMs but soon
  after lose ssh connections.

  I think this is after at least 5 minutes of inactivity. I'm observing
  the learned rule in table 20 vanish when the hard_age hits 300; see the
  two snippets from ovs-ofctl dump-flows br-tun with the learned rule
  present and then gone the next second.

  
  Fri 27 Sep 09:49:14 UTC 2013

   cookie=0x0, duration=1930.786s, table=10, n_packets=247, n_bytes=32673, 
idle_age=300, priority=1 
actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
   cookie=0x0, duration=800.717s, table=20, n_packets=231, n_bytes=23336, 
hard_timeout=300, idle_age=300, hard_age=300, 
priority=1,vlan_tci=0x0002/0x0fff,dl_dst=fa:16:3e:e4:de:d6 
actions=load:0->NXM_OF_VLAN_TCI[],load:0x1->NXM_NX_TUN_ID[],output:2
   

  
  Fri 27 Sep 09:49:15 UTC 2013

   cookie=0x0, duration=1931.798s, table=10, n_packets=247,
  n_bytes=32673, idle_age=301, priority=1
  
actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1

  Is the correct thing to do just to remove the hard_timeout=300? This
  seems to work for me:

  diff --git a/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py b/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py
  index eefe384..62c87e3 100644
  --- a/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py
  +++ b/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py
  @@ -715,7 +715,6 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin,
           # adresses (assumes that lvid has already been set by a previous flow)
           learned_flow = ("table=%s,"
                           "priority=1,"
  -                        "hard_timeout=300,"
                           "NXM_OF_VLAN_TCI[0..11],"
                           "NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],"
                           "load:0->NXM_OF_VLAN_TCI[],"

  
  Or should this rule just reappear when I try to reconnect? I've also observed 
the rule returning when I try to connect, and I can then connect, so the loss of 
connectivity doesn't always happen after 5 minutes. But the rule not 
reappearing seems to line up with the times I can't ssh to the VM.

  Also, it's worth noting that hard_timeout is what's being set but it
  appears to be acting more like an idle_timeout, although I may be
  misunderstanding something here, ovs newbie...

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1231930/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp