[Yahoo-eng-team] [Bug 1752115] Re: detach multiattach volume disconnects innocent bystander

2018-02-27 Thread John Griffith
Looking into this, the problem appears to be that Nova calls the brick
initiator's disconnect_volume method indiscriminately.  Brick currently has no
way to interrogate whether a connection is still in use, and I'm not sure that
something like that could be added in this case.

My first thought was that it would be logical to check for other attachments of 
the same multiattach volume on this host here:
  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1249
by using objects.BlockDeviceMapping.get_by_volume().  However, it turns out 
that call is another special case that isn't allowed when a volume is 
multiattach=True (I haven't figured out why that restriction is there yet, but 
I'm looking).
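
A minimal sketch of the kind of guard being suggested, assuming a hypothetical
helper that can count the remaining attachments of the volume on this compute
host (the BlockDeviceMapping query that would normally provide this is exactly
what is blocked for multiattach volumes today):

    # Hypothetical guard before calling os-brick's disconnect_volume();
    # attachments_on_host() is an assumed helper, not an existing Nova API.
    def maybe_disconnect_volume(connector, connection_info, volume_id,
                                host, attachments_on_host):
        if attachments_on_host(volume_id, host) > 1:
            # Another instance on this host still uses the volume; skip the
            # host-level disconnect so its device path stays intact.
            return False
        connector.disconnect_volume(connection_info['data'], None)
        return True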

** Also affects: nova
   Importance: Undecided
   Status: New

** No longer affects: cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1752115

Title:
  detach multiattach volume disconnects innocent bystander

Status in OpenStack Compute (nova):
  New

Bug description:
  Detaching a multi-attached lvm volume from one server causes the
  other server to lose connectivity to the volume. I found this while
  developing a new tempest test for this scenario.

  - create 2 instances on the same host, both simple instances with
    ephemeral disks
  - create a multi-attach lvm volume, attach it to both instances
  - check that you can re-read the partition table from inside each
    instance (via ssh):

     $ sudo blockdev --rereadpt /dev/vdb

    This succeeds on both instances (no output or error message is
    returned).

  - detach the volume from one of the instances
  - recheck connectivity (a scripted version of this check is sketched
    below). The expected result is that the command now fails in the
    instance where the volume was detached, but it also fails on the
    instance where the volume is still supposedly attached:

     $ sudo blockdev --rereadpt /dev/vdb
     BLKRRPART: Input/output error
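
  A minimal sketch of that scripted check, assuming paramiko and ssh access
  to each instance (purely illustrative, not the actual tempest test):

      import paramiko

      def device_readable(ip, user, key_file, dev='/dev/vdb'):
          # Re-read the partition table from inside the guest; a non-zero
          # exit status (e.g. BLKRRPART: Input/output error) means the
          # attachment is no longer usable.
          client = paramiko.SSHClient()
          client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
          client.connect(ip, username=user, key_filename=key_file)
          _, stdout, _ = client.exec_command('sudo blockdev --rereadpt ' + dev)
          status = stdout.channel.recv_exit_status()
          client.close()
          return status == 0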

  cinder & nova still think that the volume is attached correctly:

  $ cinder show 2cf26a15-8937-4654-ba81-70cbcb97a238 | grep attachment
  | attachment_ids | ['f5876aff-5b5b-45a0-a020-515ca339eae4']   

  $ nova show vm1 | grep attached
  | os-extended-volumes:volumes_attached | [{"id": 
"2cf26a15-8937-4654-ba81-70cbcb97a238", "delete_on_termination": false}] |

  cinder version:

  :/opt/stack/cinder$ git show
  commit 015b1053990f00d1522c1074bcd160b4b57a5801
  Merge: 856e636 481535e
  Author: Zuul 
  Date:   Thu Feb 22 14:00:17 2018 +

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1752115/+subscriptions



[Yahoo-eng-team] [Bug 1742204] [NEW] intermittent attach failures due to libvirt locking error

2018-01-09 Thread John Griffith
Public bug reported:

While attempting nova volume-attach on a current devstack deployment I'm 
getting intermittent failures during the attach operation:
http://paste.openstack.org/show/641383/

Each time I've encountered this I've been able to simply rerun the
command and it completes successfully.

To reproduce:
Deploy devstack with the cinder multi-attach patches 
(https://review.openstack.org/#/c/531569/)
Create a multiattach volume-type:
cinder type-create multiattach
cinder type-key multiattach set multiattach='<is> True'
Create two instances:
nova boot --image <image-id> --flavor 1 i-1
nova boot --image <image-id> --flavor 1 i-2
Create a multiattach volume (cinder create --volume-type multiattach --name 
vol-1 1)
Attach the volume to each instance:
nova volume-attach <instance-id> <volume-id>
nova volume-attach <instance-id> <volume-id>

Sometimes this works, sometimes it doesn't.  The failure seems to occur
more often the longer the Nova service has been up and running on the
system, but that might be nonsense.

I've yet to encounter a case where running the second attach command
again did not succeed.
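
Since a second attempt has succeeded every time so far, a simple retry around
the attach call works as a stopgap; a rough sketch with python-novaclient
(retry counts and delays are arbitrary):

    import time

    def attach_with_retry(nova, server_id, volume_id, attempts=3, delay=2):
        # Retry the attach to work around the intermittent libvirt locking
        # failure; every observed failure so far succeeded on retry.
        for attempt in range(1, attempts + 1):
            try:
                return nova.volumes.create_server_volume(server_id, volume_id)
            except Exception:
                if attempt == attempts:
                    raise
                time.sleep(delay)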

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1742204

Title:
  intermittent attach failures due to libvirt locking error

Status in OpenStack Compute (nova):
  New

Bug description:
  While attempting nova volume-attach on a current devstack deployment
  I'm getting intermittent failures during the attach operation:
  http://paste.openstack.org/show/641383/

  Each time I've encountered this I've been able to simply rerun the
  command and it completes successfully.

  To reproduce:
  Deploy devstack with the cinder multi-attach patches 
  (https://review.openstack.org/#/c/531569/)
  Create a multiattach volume-type:
  cinder type-create multiattach
  cinder type-key multiattach set multiattach='<is> True'
  Create two instances:
  nova boot --image <image-id> --flavor 1 i-1
  nova boot --image <image-id> --flavor 1 i-2
  Create a multiattach volume (cinder create --volume-type multiattach
  --name vol-1 1)
  Attach the volume to each instance:
  nova volume-attach <instance-id> <volume-id>
  nova volume-attach <instance-id> <volume-id>

  Sometimes this works, sometimes it doesn't.  The failure seems to
  occur more often the longer the Nova service has been up and running
  on the system, but that might be nonsense.

  I've yet to encounter a case where running the second attach command
  again did not succeed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1742204/+subscriptions



[Yahoo-eng-team] [Bug 1639350] Re: Use of CONF.libvirt.volume_use_multipath should not be mandatory

2016-11-04 Thread John Griffith
My bad on this; the actual problem is an unhandled failure/crash if
multipathd isn't installed or running.

** Changed in: os-brick
   Status: New => Opinion

** Changed in: os-brick
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1639350

Title:
  Use of CONF.libvirt.volume_use_multipath should not be mandatory

Status in OpenStack Compute (nova):
  Opinion
Status in os-brick:
  Opinion

Bug description:
  Currently if iscsi_multipath is set in nova.conf we require ALL
  attachments to use multipath.  The problem with this is that it's not
  uncommon to have a mix of Cinder backends: one that supports multipath
  and one that doesn't.  As things stand you can have one or the other,
  but not both.

  We should be able to fall back to single-path when multipath doesn't
  work for the volume; it might also be worth considering whether
  multipath support should be embedded in the volume object, removing
  the need to configure it in Nova at all.

  To reproduce, set up a default devstack with LIO and LVM; set
  nova.conf iscsi_multipath=True, restart nova.

  Create a volume, create an instance, try and attach the volume to the
  instance.

  Example stack trace here:
  http://paste.openstack.org/show/587939/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1639350/+subscriptions



[Yahoo-eng-team] [Bug 1639350] [NEW] Use of iscsi_multipath should not be mandatory

2016-11-04 Thread John Griffith
Public bug reported:

Currently if iscsi_multipath is set in nova.conf we require ALL
attachments to use multipath.  The problem with this is that it's not
uncommon to have a mix of Cinder backends: one that supports multipath
and one that doesn't.  As things stand you can have one or the other,
but not both.

We should be able to fall back to single-path when multipath doesn't
work for the volume; it might also be worth considering whether
multipath support should be embedded in the volume object, removing the
need to configure it in Nova at all.
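
A rough sketch of the fallback behaviour proposed above, assuming both
connector flavours can be built for the volume (this is not how Nova is
structured today):

    def connect_with_fallback(multipath_connector, singlepath_connector,
                              connection_properties):
        # Prefer multipath when configured, but fall back to a plain
        # single-path connection instead of failing the attach outright.
        try:
            return multipath_connector.connect_volume(connection_properties)
        except Exception:
            return singlepath_connector.connect_volume(connection_properties)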

To reproduce, set up a default devstack with LIO and LVM; set nova.conf
iscsi_multipath=True, restart nova.

Create a volume, create an instance, try and attach the volume to the
instance.

Example stack trace here:
http://paste.openstack.org/show/587939/

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: os-brick
 Importance: Undecided
 Status: New

** Also affects: os-brick
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1639350

Title:
  Use of iscsi_multipath should not be mandatory

Status in OpenStack Compute (nova):
  New
Status in os-brick:
  New

Bug description:
  Currently if iscsi_multipath is set in nova.conf we require ALL
  attachments to use multipath.  The problem with this is that it's not
  uncommon to have a mix of Cinder backends; one that supports multipath
  and one that doesn't.  The result with how we do this now is that you
  can have only one or the other but not both.

  We should be able to revert to single-path when multipath doesn't work
  for the volume; also might be worth considering if the multipath
  support should just be embedded as part of the volume object and skip
  the need to configure it in Nova at all.

  To reproduce, set up a default devstack with LIO and LVM; set
  nova.conf iscsi_multipath=True, restart nova.

  Create a volume, create an instance, try and attach the volume to the
  instance.

  Example stack trace here:
  http://paste.openstack.org/show/587939/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1639350/+subscriptions



[Yahoo-eng-team] [Bug 1538620] Re: Attach with host and instance_uuid not backwards compatible

2016-01-27 Thread John Griffith
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1538620

Title:
  Attach with host and instance_uuid not backwards compatible

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Patch https://review.openstack.org/#/c/266006/ added the ability for
  Cinder to accept both host and instance_uuid when doing an attach.
  This is not backwards compatible with earlier API versions, so when
  Nova calls attach against a version prior to this change with both
  arguments, it fails.

  This information is needed for the multiattach work, but we should
  revert that change and try to find a cleaner way to do this that will
  not break backwards compatibility.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1538620/+subscriptions



[Yahoo-eng-team] [Bug 1532076] [NEW] Nova intermittently fails test_volume_boot_patters with db error

2016-01-07 Thread John Griffith
Public bug reported:

This test seems randomly problematic, but I noticed 3 failures today
with the following error logged in nova.api:

2016-01-08 03:04:42.603 ERROR oslo_db.api 
[req-9fb82769-155d-4f50-87db-c912c8ad34a6 
tempest-TestVolumeBootPattern-388230709 
tempest-TestVolumeBootPattern-1026177222] DB error.
2016-01-08 03:04:42.603 12908 ERROR oslo_db.api Traceback (most recent call 
last):
2016-01-08 03:04:42.603 12908 ERROR oslo_db.api   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 137, in wrapper
2016-01-08 03:04:42.603 12908 ERROR oslo_db.api return f(*args, **kwargs)
2016-01-08 03:04:42.603 12908 ERROR oslo_db.api   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 1717, in instance_destroy
2016-01-08 03:04:42.603 12908 ERROR oslo_db.api raise 
exception.ConstraintNotMet()
2016-01-08 03:04:42.603 12908 ERROR oslo_db.api ConstraintNotMet: Constraint 
not met.
2016-01-08 03:04:42.603 12908 ERROR oslo_db.api

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1532076

Title:
  Nova intermittently fails test_volume_boot_patters with db error

Status in OpenStack Compute (nova):
  New

Bug description:
  This test seems randomly problematic, but I noticed 3 failures today
  with the following error logged in nova.api:

  2016-01-08 03:04:42.603 ERROR oslo_db.api 
[req-9fb82769-155d-4f50-87db-c912c8ad34a6 
tempest-TestVolumeBootPattern-388230709 
tempest-TestVolumeBootPattern-1026177222] DB error.
  2016-01-08 03:04:42.603 12908 ERROR oslo_db.api Traceback (most recent call 
last):
  2016-01-08 03:04:42.603 12908 ERROR oslo_db.api   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 137, in wrapper
  2016-01-08 03:04:42.603 12908 ERROR oslo_db.api return f(*args, **kwargs)
  2016-01-08 03:04:42.603 12908 ERROR oslo_db.api   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 1717, in instance_destroy
  2016-01-08 03:04:42.603 12908 ERROR oslo_db.api raise 
exception.ConstraintNotMet()
  2016-01-08 03:04:42.603 12908 ERROR oslo_db.api ConstraintNotMet: Constraint 
not met.
  2016-01-08 03:04:42.603 12908 ERROR oslo_db.api

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1532076/+subscriptions



[Yahoo-eng-team] [Bug 1530214] Re: Tempest failures due to iSCSI DB failure on compute node

2015-12-30 Thread John Griffith
** Also affects: os-brick
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1530214

Title:
  Tempest failures due to iSCSI DB failure on compute node

Status in OpenStack Compute (nova):
  New
Status in os-brick:
  New

Bug description:
  Noticed a couple of these today in the SolidFire CI system.  These are 
initiator side errors in Nova.  Excerpt from log is below, but additional logs 
can also be viewed here:
  
http://54.164.167.86/solidfire-ci-logs/refs-changes-67-244867-12/logs/screen-n-cpu.log.txt


  2015-12-30 20:53:50.829 ERROR nova.virt.block_device 
[req-0f1602ac-0604-4051-8d65-81ede0976dc5 
tempest-DeleteServersTestJSON-1052804341 
tempest-DeleteServersTestJSON-308638552] [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] Driver failed to attach volume 
23049317-f6c2-40b2-9306-0b838848f911 at /dev/vdb
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] Traceback (most recent call last):
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 288, in attach
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] device_type=self['device_type'], 
encryption=encryption)
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1114, in attach_volume
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] self._connect_volume(connection_info, 
disk_info)
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1064, in _connect_volume
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] 
vol_driver.connect_volume(connection_info, disk_info)
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/nova/nova/virt/libvirt/volume/iscsi.py", line 84, in connect_volume
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] device_info = 
self.connector.connect_volume(connection_info['data'])
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] return f(*args, **kwargs)
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/os-brick/os_brick/initiator/connector.py", line 763, in 
connect_volume
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] connection_properties)
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/os-brick/os_brick/initiator/connector.py", line 591, in 
_get_potential_volume_paths
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] if 
self._connect_to_iscsi_portal(props):
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/os-brick/os_brick/initiator/connector.py", line 1044, in 
_connect_to_iscsi_portal
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] 
self._run_iscsiadm(connection_properties, ())
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/os-brick/os_brick/initiator/connector.py", line 948, in 
_run_iscsiadm
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] delay_on_retry=delay_on_retry)
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 
312, in execute
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] cmd=sanitized_cmd)
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] ProcessExecutionError: Unexpected error 
while running command.
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 

[Yahoo-eng-team] [Bug 1530214] Re: Tempest failures due to iSCSI DB failure on compute node

2015-12-30 Thread John Griffith
*** This bug is a duplicate of bug 1324670 ***
https://bugs.launchpad.net/bugs/1324670

Looks like this is an old one we thought was fixed on the brick side.  Removing 
Nova and marking as a duplicate of the original bug:
1324670


** No longer affects: nova

** This bug has been marked a duplicate of bug 1324670
   'iscsiadm ... -o delete' fails occasionally on bulk deployments

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1530214

Title:
  Tempest failures due to iSCSI DB failure on compute node

Status in os-brick:
  New

Bug description:
  Noticed a couple of these today in the SolidFire CI system.  These are 
initiator side errors in Nova.  Excerpt from log is below, but additional logs 
can also be viewed here:
  
http://54.164.167.86/solidfire-ci-logs/refs-changes-67-244867-12/logs/screen-n-cpu.log.txt


  2015-12-30 20:53:50.829 ERROR nova.virt.block_device 
[req-0f1602ac-0604-4051-8d65-81ede0976dc5 
tempest-DeleteServersTestJSON-1052804341 
tempest-DeleteServersTestJSON-308638552] [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] Driver failed to attach volume 
23049317-f6c2-40b2-9306-0b838848f911 at /dev/vdb
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] Traceback (most recent call last):
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 288, in attach
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] device_type=self['device_type'], 
encryption=encryption)
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1114, in attach_volume
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] self._connect_volume(connection_info, 
disk_info)
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1064, in _connect_volume
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] 
vol_driver.connect_volume(connection_info, disk_info)
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/nova/nova/virt/libvirt/volume/iscsi.py", line 84, in connect_volume
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] device_info = 
self.connector.connect_volume(connection_info['data'])
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] return f(*args, **kwargs)
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/os-brick/os_brick/initiator/connector.py", line 763, in 
connect_volume
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] connection_properties)
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/os-brick/os_brick/initiator/connector.py", line 591, in 
_get_potential_volume_paths
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] if 
self._connect_to_iscsi_portal(props):
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/os-brick/os_brick/initiator/connector.py", line 1044, in 
_connect_to_iscsi_portal
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] 
self._run_iscsiadm(connection_properties, ())
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/os-brick/os_brick/initiator/connector.py", line 948, in 
_run_iscsiadm
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] delay_on_retry=delay_on_retry)
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 
312, in execute
  2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]

[Yahoo-eng-team] [Bug 1530214] [NEW] Tempest failures due to iSCSI DB failure on compute node

2015-12-30 Thread John Griffith
Public bug reported:

Noticed a couple of these today in the SolidFire CI system.  These are 
initiator side errors in Nova.  Excerpt from log is below, but additional logs 
can also be viewed here:
http://54.164.167.86/solidfire-ci-logs/refs-changes-67-244867-12/logs/screen-n-cpu.log.txt


2015-12-30 20:53:50.829 ERROR nova.virt.block_device 
[req-0f1602ac-0604-4051-8d65-81ede0976dc5 
tempest-DeleteServersTestJSON-1052804341 
tempest-DeleteServersTestJSON-308638552] [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] Driver failed to attach volume 
23049317-f6c2-40b2-9306-0b838848f911 at /dev/vdb
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] Traceback (most recent call last):
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 288, in attach
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] device_type=self['device_type'], 
encryption=encryption)
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1114, in attach_volume
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] self._connect_volume(connection_info, 
disk_info)
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1064, in _connect_volume
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] 
vol_driver.connect_volume(connection_info, disk_info)
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/nova/nova/virt/libvirt/volume/iscsi.py", line 84, in connect_volume
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] device_info = 
self.connector.connect_volume(connection_info['data'])
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] return f(*args, **kwargs)
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/os-brick/os_brick/initiator/connector.py", line 763, in 
connect_volume
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] connection_properties)
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/os-brick/os_brick/initiator/connector.py", line 591, in 
_get_potential_volume_paths
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] if 
self._connect_to_iscsi_portal(props):
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/os-brick/os_brick/initiator/connector.py", line 1044, in 
_connect_to_iscsi_portal
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] 
self._run_iscsiadm(connection_properties, ())
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/opt/stack/os-brick/os_brick/initiator/connector.py", line 948, in 
_run_iscsiadm
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] delay_on_retry=delay_on_retry)
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 
312, in execute
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] cmd=sanitized_cmd)
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] ProcessExecutionError: Unexpected error 
while running command.
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] Command: sudo nova-rootwrap 
/etc/nova/rootwrap.conf iscsiadm -m node -T 
iqn.2010-01.com.solidfire:l74j.uuid-23049317-f6c2-40b2-9306-0b838848f911.374375 
-p 10.10.64.3:3260
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] Exit code: 6
2015-12-30 20:53:50.829 13456 ERROR nova.virt.block_device [instance: 
d1526309-a49f-46b3-9a29-7d7ae285be9c] 
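
For what it's worth, iscsiadm exit code 6 is its node-database (idbm) error
class, which lines up with the "iSCSI DB failure" in the title.  A minimal
retry sketch around such a flaky call (plain subprocess, purely illustrative
of the retry idea, not the os-brick code):

    import subprocess
    import time

    def run_iscsiadm(cmd, attempts=3, delay=1):
        # Retry transient node-DB failures (exit code 6); raise on anything
        # else or when the retries are exhausted.
        for attempt in range(1, attempts + 1):
            proc = subprocess.run(cmd, capture_output=True, text=True)
            if proc.returncode == 0:
                return proc.stdout
            if proc.returncode != 6 or attempt == attempts:
                raise RuntimeError('iscsiadm failed (%d): %s'
                                   % (proc.returncode, proc.stderr))
            time.sleep(delay)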

[Yahoo-eng-team] [Bug 1496222] [NEW] Requirements update breaks keystone install on 3'rd party CI systems

2015-09-15 Thread John Griffith
Public bug reported:

After this change: 
https://github.com/openstack/keystone/commit/db6c7d9779378a3a6a6c52c47fa0a303c9038508
 systems that run clean devstack installs are now failing during stack.sh for:
2015-09-16 02:30:22.901 | Ignoring dnspython3: markers "python_version=='3.4'" 
don't match your environment
2015-09-16 02:30:23.035 | Obtaining file:///opt/stack/keystone
2015-09-16 02:30:23.464 | Complete output from command python setup.py 
egg_info:
2015-09-16 02:30:23.464 | error in setup command: Invalid environment 
marker: (python_version=='2.7' # MPL)
2015-09-16 02:30:23.464 | 
2015-09-16 02:30:23.464 | 
2015-09-16 02:30:23.465 | Command "python setup.py egg_info" failed with error 
code 1 in /opt/stack/keystone

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1496222

Title:
  Requirements update breaks keystone install on 3'rd party CI systems

Status in Keystone:
  New

Bug description:
  After this change: 
https://github.com/openstack/keystone/commit/db6c7d9779378a3a6a6c52c47fa0a303c9038508
 systems that run clean devstack installs are now failing during stack.sh for:
  2015-09-16 02:30:22.901 | Ignoring dnspython3: markers 
"python_version=='3.4'" don't match your environment
  2015-09-16 02:30:23.035 | Obtaining file:///opt/stack/keystone
  2015-09-16 02:30:23.464 | Complete output from command python setup.py 
egg_info:
  2015-09-16 02:30:23.464 | error in setup command: Invalid environment 
marker: (python_version=='2.7' # MPL)
  2015-09-16 02:30:23.464 | 
  2015-09-16 02:30:23.464 | 
  2015-09-16 02:30:23.465 | Command "python setup.py egg_info" failed with 
error code 1 in /opt/stack/keystone

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1496222/+subscriptions



[Yahoo-eng-team] [Bug 1488233] Re: FC with LUN ID 255 not recognized

2015-08-26 Thread John Griffith
** Also affects: cinder/kilo
   Importance: Undecided
   Status: New

** Tags removed: volumes
** Tags added: fibre-channel ibm

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488233

Title:
  FC with LUN ID 255 not recognized

Status in Cinder:
  New
Status in Cinder kilo series:
  In Progress
Status in OpenStack Compute (nova):
  New
Status in os-brick:
  In Progress

Bug description:
  (s390 architecture/System z series only) FC LUNs with LUN ID 255 are not 
recognized by either Cinder or Nova when trying to attach the volume.
  The issue is that Fibre-Channel volumes need to be added using the unit_add 
command with a properly formatted LUN string.
  The string is built correctly for LUN IDs below 0xff, but not for LUN IDs in 
the range 0xff through 0xffff.
  Because of this the volumes do not get properly added to the hypervisor 
configuration and the hypervisor does not find them.
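
  For context, zFCP's unit_add takes a 64-bit FCP LUN string in which a
  16-bit LUN ID sits in the two high-order bytes, so LUN 255 should become
  0x00ff000000000000.  A minimal sketch of that formatting (illustrative
  only, not the actual os-brick patch):

      def zfcp_lun_string(lun_id):
          # Encode a 16-bit SCSI LUN in the top two bytes of the 64-bit
          # FCP LUN used by unit_add, e.g. 255 -> 0x00ff000000000000.
          if not 0 <= lun_id <= 0xffff:
              raise ValueError('LUN ID out of range: %s' % lun_id)
          return '0x%04x000000000000' % lun_id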

  Note: The change for Liberty os-brick is ready. I would also like to
  patch it back to Kilo. Since os-brick has been integrated with
  Liberty, but was separate before, I need to release a patch for Nova,
  Cinder, and os-brick. Unfortunately there is no option on this page to
  nominate the patch for Kilo. Can somebody help? Thank you!

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1488233/+subscriptions



[Yahoo-eng-team] [Bug 1486178] [NEW] Boot from image (creates a new volume) Doesn't allow specification of volume-type

2015-08-18 Thread John Griffith
Public bug reported:

Horizon has a cool feature that wraps cinder's create-volume-from-image and
nova's boot-from-volume all up into a single command under Launch
Instance.  The only thing missing here is the ability to specify a volume
type when doing this.  There should probably be a follow-up that lets a
user specify the cinder volume type when using this feature.

** Affects: horizon
 Importance: High
 Assignee: David Lyle (david-lyle)
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1486178

Title:
  Boot from image (creates a new volume) Doesn't allow specification of
  volume-type

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  Horizon has a cool feature that wraps cinder's create-volume-from-image
  and nova's boot-from-volume all up into a single command under Launch
  Instance.  The only thing missing here is the ability to specify a
  volume type when doing this.  There should probably be a follow-up
  that lets a user specify the cinder volume type when using this
  feature.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1486178/+subscriptions



[Yahoo-eng-team] [Bug 1484588] [NEW] glance startup gives ERROR: Could not bind to 0.0.0.0:9292 after trying for 30 seconds on stable-kilo after attempting service restart

2015-08-13 Thread John Griffith
Public bug reported:

I have a running stable/kilo setup; I recently restarted all services
and glance won't start up.  The following error appears in the g-api
log:
2015-08-13 09:44:29.916 28813 DEBUG glance.common.config [-] 
image_format.disk_formats  = ['ami', 'ari', 'aki', 'vhd', 'vmdk', 'raw', 
'qcow2', 'vdi', 'iso'] log_opt_values 
/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2195
2015-08-13 09:44:29.916 28813 DEBUG glance.common.config [-] 
oslo_policy.policy_default_rule = default log_opt_values 
/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2195
2015-08-13 09:44:29.916 28813 DEBUG glance.common.config [-] 
oslo_policy.policy_dirs= ['policy.d'] log_opt_values 
/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2195
2015-08-13 09:44:29.916 28813 DEBUG glance.common.config [-] 
oslo_policy.policy_file= policy.json log_opt_values 
/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2195
2015-08-13 09:44:29.916 28813 DEBUG glance.common.config [-] 

 log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2197
^[kERROR: Could not bind to 0.0.0.0:9292 after trying for 30 seconds

And same in the g-reg logs:

-08-13 09:44:20.743 28806 DEBUG glance.common.config [-] 
keystone_authtoken.user_id = None log_opt_values 
/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2195
2015-08-13 09:44:20.743 28806 DEBUG glance.common.config [-] 
keystone_authtoken.username= glance log_opt_values 
/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2195
2015-08-13 09:44:20.743 28806 DEBUG glance.common.config [-] 
image_format.container_formats = ['ami', 'ari', 'aki', 'bare', 'ovf', 'ova'] 
log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2195
2015-08-13 09:44:20.743 28806 DEBUG glance.common.config [-] 
image_format.disk_formats  = ['ami', 'ari', 'aki', 'vhd', 'vmdk', 'raw', 
'qcow2', 'vdi', 'iso'] log_opt_values 
/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2195
2015-08-13 09:44:20.743 28806 DEBUG glance.common.config [-] 
oslo_policy.policy_default_rule = default log_opt_values 
/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2195
2015-08-13 09:44:20.744 28806 DEBUG glance.common.config [-] 
oslo_policy.policy_dirs= ['policy.d'] log_opt_values 
/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2195
2015-08-13 09:44:20.744 28806 DEBUG glance.common.config [-] 
oslo_policy.policy_file= policy.json log_opt_values 
/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2195
2015-08-13 09:44:20.744 28806 DEBUG glance.common.config [-] 

 log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2197
ERROR: Could not bind to 0.0.0.0:9191 after trying for 30 seconds
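
The bind failure typically means something is still holding the port (often a
leftover glance worker that didn't exit on restart); a quick Python check
against the same addresses before digging further:

    import socket

    def port_is_free(port, host='0.0.0.0'):
        # Try to bind the address the service uses; failure usually means a
        # stale process still owns the port.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.bind((host, port))
            return True
        except OSError:
            return False
        finally:
            sock.close()

    for port in (9292, 9191):
        print(port, 'free' if port_is_free(port) else 'in use')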

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1484588

Title:
  glance startup gives ERROR: Could not bind to 0.0.0.0:9292 after
  trying for 30 seconds on stable-kilo after attempting service restart

Status in Glance:
  New

Bug description:
  I have a running stable/kilo setup; I recently restarted all services
  and glance won't start up.  The following error appears in the g-api
  log:

  2015-08-13 09:44:29.916 28813 DEBUG glance.common.config [-] 
image_format.disk_formats  = ['ami', 'ari', 'aki', 'vhd', 'vmdk', 'raw', 
'qcow2', 'vdi', 'iso'] log_opt_values 
/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2195
  2015-08-13 09:44:29.916 28813 DEBUG glance.common.config [-] 
oslo_policy.policy_default_rule = default log_opt_values 
/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2195
  2015-08-13 09:44:29.916 28813 DEBUG glance.common.config [-] 
oslo_policy.policy_dirs= ['policy.d'] log_opt_values 
/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2195
  2015-08-13 09:44:29.916 28813 DEBUG glance.common.config [-] 
oslo_policy.policy_file= policy.json log_opt_values 
/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2195
  2015-08-13 09:44:29.916 28813 DEBUG glance.common.config [-] 

 log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2197
  ^[kERROR: Could not bind to 0.0.0.0:9292 after trying for 30 seconds

  And same in the g-reg logs:

  -08-13 09:44:20.743 28806 DEBUG glance.common.config [-] 
keystone_authtoken.user_id = None log_opt_values 
/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2195
  2015-08-13 09:44:20.743 28806 DEBUG glance.common.config [-] 
keystone_authtoken.username= glance log_opt_values 
/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2195
  2015-08-13 

[Yahoo-eng-team] [Bug 1423165] Re: https: client can cause nova/cinder to leak sockets for 'get' 'show' 'delete' 'update'

2015-06-09 Thread John Griffith
Going to close it for Cinder as well, as I don't know of a way to fix a
broken glanceclient from the consumer end.

If you're interested, however, I did throw together a patched version of
0.14.2 here:
https://github.com/j-griffith/python-glanceclient/tree/stable/icehouse

Maybe you or somebody else could test it out, and we could convince the
glance folks to push a branch for it; or people that need it can maybe
just use it.

Thanks

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423165

Title:
  https: client can cause nova/cinder to leak sockets for 'get' 'show'
  'delete' 'update'

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  Invalid
Status in Python client library for Glance:
  Fix Released

Bug description:
  
  Other OpenStack services which instantiate an 'https' glanceclient using
  ssl_compression=False and insecure=False (e.g. Nova, Cinder) are leaking
  sockets because glanceclient does not close the connection to the Glance
  server.
  
  This can happen for a subset of calls, e.g. 'show', 'delete', 'update'.
  
  netstat -nopd would show the sockets would hang around forever:
  
  ... 127.0.0.1:9292  ESTABLISHED 9552/python  off (0.00/0/0)
  
  urllib's ConnectionPool relies on the garbage collector to tear down
  sockets which are no longer in use. The 'verify_callback' function used to
  validate SSL certs was holding a reference to the VerifiedHTTPSConnection
  instance which prevented the sockets being torn down.
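
  A stripped-down illustration of the pattern described above (not the
  actual glanceclient code): when the verification callback closes over the
  connection object and something else retains the callback, the connection
  can never be garbage-collected, so its socket is never torn down.

      import gc

      class Connection(object):
          def __init__(self):
              # The callback closes over `self`, so whoever keeps the
              # callback alive also keeps the connection (and its socket)
              # alive.
              self.verify_callback = lambda cert: self.check(cert)

          def check(self, cert):
              return True

      conn = Connection()
      retained = [conn.verify_callback]   # e.g. held by an SSL context
      del conn
      gc.collect()
      # The Connection instance is still reachable via `retained`, which is
      # why the sockets show up as ESTABLISHED ... off (0.00/0/0) forever.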

  
  --

  to reproduce, set up devstack with nova talking to glance over https (must be 
performing full cert verification) and
  perform a nova operation such as:

  
   $ nova image-meta 53854ea3-23ed-4682-abf7-8415f2d6b7d9 set foo=bar

  you will see connections from nova to glance which have no timeout
  (off):

   $ netstat -nopd | grep 9292

   tcp0  0 127.0.0.1:34204 127.0.0.1:9292
  ESTABLISHED 9552/python  off (0.00/0/0)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1423165/+subscriptions



[Yahoo-eng-team] [Bug 1460786] Re: Primary and secondary DNS servers get swapped

2015-06-03 Thread John Griffith
Removing Cinder and moving to Neutron

** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460786

Title:
  Primary and secondary DNS servers get swapped

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This defect is related to the Neutron component.

  When a network is created with DNS server IP addresses, they get
  swapped if the primary DNS is greater than the secondary DNS (in
  sorted-list order).

  When 8.8.8.8 is entered as primary DNS and 7.7.7.7 is entered as
  secondary DNS:

  From : /usr/lib/python2.7/site-
  packages/neutron/db/db_base_plugin_v2.py

  def _create_subnet_from_implicit_pool(self, context, subnet):
      pdb.set_trace()
      s = subnet['subnet']
      self._validate_subnet(context, s)
      tenant_id = self._get_tenant_id_for_create(context, s)
      id = s.get('id', uuidutils.generate_uuid())
      detail = ipam.SpecificSubnetRequest(tenant_id,
                                          id,
                                          s['cidr'])
      with context.session.begin(subtransactions=True):
          network = self._get_network(context, s['network_id'])
          self._validate_subnet_cidr(context, network, s['cidr'])
          subnet = self._save_subnet(context,
                                     network,
                                     self._make_subnet_args(context,
                                                            network.shared,
                                                            detail,
                                                            s),
                                     s['dns_nameservers'],
                                     s['host_routes'],
                                     s['allocation_pools'])
      if network.external:
          self._update_router_gw_ports(context,
                                       subnet['id'],
                                       subnet['network_id'])
      return self._make_subnet_dict(subnet)

  The subnet variable before _save_subnet is invoked (8.8.8.8 and 7.7.7.7):

  (Pdb) p subnet
  {u'subnet': {'host_routes': <object object at 0x7fd428331400>, 'prefixlen': 
<object object at 0x7fd428331400>, 'name': '', u'enable_dhcp': False, 
u'network_id': u'e8d3b629-b2e3-484c-84a3-c015e3dd082d', 'tenant_id': 
u'c4af4f17fb5d413c9f9a7bcda537c621', u'dns_nameservers': [u'8.8.8.8', 
u'7.7.7.7'], 'ipv6_ra_mode': <object object at 0x7fd428331400>, 
u'allocation_pools': [{u'start': u'1.1.0.1', u'end': u'1.1.1.0'}, {u'start': 
u'1.1.1.2', u'end': u'1.1.15.254'}], u'gateway_ip': u'1.1.1.1', u'ip_version': 
4, 'ipv6_address_mode': <object object at 0x7fd428331400>, u'cidr': 
'1.1.0.0/20', 'subnetpool_id': <object object at 0x7fd428331400>}}
  (Pdb)
  (Pdb)

  
  After invoking _save_subnet:

  (Pdb) p subnet['dns_nameservers']
  [<neutron.db.models_v2.DNSNameServer[object at 4217b90] {address=u'7.7.7.7', 
subnet_id=u'dd999140-7d9b-4361-b507-4505ebd42bb0'}>, 
<neutron.db.models_v2.DNSNameServer[object at 4217c10] {address=u'8.8.8.8', 
subnet_id=u'dd999140-7d9b-4361-b507-4505ebd42bb0'}>]

  The order of IP addresses is swapped from 8.8.8.8, 7.7.7.7 to
  7.7.7.7, 8.8.8.8.  When the network details are retrieved and
  presented, the primary and secondary DNS IPs are therefore swapped.
  This does not happen when 7.7.7.7 is entered as primary and 8.8.8.8 is
  entered as secondary.
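
  One way to avoid the swap is to store an explicit position alongside each
  nameserver and order by it when reading the list back, instead of relying
  on whatever order the rows come back from the database.  A minimal
  SQLAlchemy-style sketch of that idea (model and column names are
  illustrative, not the actual Neutron schema):

      def save_dns_nameservers(session, subnet_id, nameservers, model):
          # Persist the operator-supplied order explicitly.
          for position, address in enumerate(nameservers):
              session.add(model(subnet_id=subnet_id, address=address,
                                order=position))

      def load_dns_nameservers(session, subnet_id, model):
          # Read the entries back in the order they were written.
          rows = (session.query(model)
                  .filter_by(subnet_id=subnet_id)
                  .order_by(model.order)
                  .all())
          return [row.address for row in rows]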

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460786/+subscriptions



[Yahoo-eng-team] [Bug 1447288] Re: create volume from snapshot using horizon error

2015-04-26 Thread John Griffith
Turns out this is worse than I thought at first glance.  It appears that
running this from Horizon isn't honoring create-from-snapshot, and it's
also not honoring the bootable setting.

What's worse, at first check it appears it isn't actually creating from
the snapshot at all.  To test, I created a bootable volume, booted it up
and wrote some data, shut down the instance, created a snapshot, and
then created a volume from that snapshot.  I attached the new volume and
inspected it; it turns out to be an empty/raw volume with nothing on it.

Given this still seems to work from the cinder side, I think this is a
Horizon-specific bug, but I haven't verified/triaged it.
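
For comparison, the call that behaves correctly through python-cinderclient
looks roughly like this (client construction omitted; the snapshot id is the
one from the report below):

    vol = client.volumes.create(
        size=1,
        snapshot_id='382a0e1d-168b-4cf6-a9ff-715d8ad385eb',
        name='from-snap')
    print(vol.id, vol.snapshot_id)  # snapshot_id should not be None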

** Changed in: cinder
   Status: New => Confirmed

** Changed in: cinder
   Importance: Undecided => Critical

** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1447288

Title:
  create volume from snapshot using horizon error

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I try to create a volume from snapshot using the OpenStack UI it
  creates a new raw volume with correct size, but it's not created from
  a snapshot.

  $ cinder show 9d5d0ca1-3dd0-47b4-b9f4-86f97d65724e
  
+---+--+
  |Property   |Value
 |
  
+---+--+
  |  attachments  |  [] 
 |
  |   availability_zone   | nova
 |
  |bootable   |false
 |
  |  consistencygroup_id  | None
 |
  |   created_at  |  2015-04-22T18:08:53.00 
 |
  |  description  | None
 |
  |   encrypted   |False
 |
  |   id  | 
9d5d0ca1-3dd0-47b4-b9f4-86f97d65724e |
  |metadata   |  {} 
 |
  |  multiattach  |False
 |
  |  name | v2s2
 |
  | os-vol-host-attr:host | ubuntu@ns_nfs-1#nfs 
 |
  | os-vol-mig-status-attr:migstat| None
 |
  | os-vol-mig-status-attr:name_id| None
 |
  |  os-vol-tenant-attr:tenant_id |   4968203f183641b283e111a2f2db  
 |
  |   os-volume-replication:driver_data   | None
 |
  | os-volume-replication:extended_status | None
 |
  |   replication_status  |   disabled  
 |
  |  size |  2  
 |
  |  snapshot_id  | None
 |
  |  source_volid | None
 |
  | status|  available  
 |
  |user_id|   c8163c5313504306b40377a0775e9ffa  
 |
  |  volume_type  | None
 |
  
+---+--+

  But when I use cinder command line everything seems to be fine.

  $ cinder create --snapshot-id 382a0e1d-168b-4cf6-a9ff-715d8ad385eb 1
  
+---+--+
  |Property   |Value
 |
  
+---+--+
  |  attachments  |  [] 
 |
  |   availability_zone   | nova
 |
  |bootable   |false
 |
  |  consistencygroup_id  | None
 |
  |   created_at  |  2015-04-22T18:15:08.00 
 |
  |  description  | None
 |
  |   encrypted   |False
 |
  |   id  | 
b33ec1ef-9d29-4231-8d15-8cf22ca3c502 |
  |metadata   |  {} 
 |
  |  multiattach  |False
 |
  |  name | None 

[Yahoo-eng-team] [Bug 1447288] Re: create volume from snapshot using horizon error

2015-04-26 Thread John Griffith
Verified that using cinderclient for these operations works as expected.
No idea what Horizon is calling/doing here.  Removing Cinder for now; we
can re-add it if there's in fact something weird on our side.

** No longer affects: cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1447288

Title:
  create volume from snapshot using horizon error

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I try to create a volume from snapshot using the OpenStack UI it
  creates a new raw volume with correct size, but it's not created from
  a snapshot.

  $ cinder show 9d5d0ca1-3dd0-47b4-b9f4-86f97d65724e
  
+---+--+
  |Property   |Value
 |
  
+---+--+
  |  attachments  |  [] 
 |
  |   availability_zone   | nova
 |
  |bootable   |false
 |
  |  consistencygroup_id  | None
 |
  |   created_at  |  2015-04-22T18:08:53.00 
 |
  |  description  | None
 |
  |   encrypted   |False
 |
  |   id  | 
9d5d0ca1-3dd0-47b4-b9f4-86f97d65724e |
  |metadata   |  {} 
 |
  |  multiattach  |False
 |
  |  name | v2s2
 |
  | os-vol-host-attr:host | ubuntu@ns_nfs-1#nfs 
 |
  | os-vol-mig-status-attr:migstat| None
 |
  | os-vol-mig-status-attr:name_id| None
 |
  |  os-vol-tenant-attr:tenant_id |   4968203f183641b283e111a2f2db  
 |
  |   os-volume-replication:driver_data   | None
 |
  | os-volume-replication:extended_status | None
 |
  |   replication_status  |   disabled  
 |
  |  size |  2  
 |
  |  snapshot_id  | None
 |
  |  source_volid | None
 |
  | status|  available  
 |
  |user_id|   c8163c5313504306b40377a0775e9ffa  
 |
  |  volume_type  | None
 |
  
+---+--+

  But when I use cinder command line everything seems to be fine.

  $ cinder create --snapshot-id 382a0e1d-168b-4cf6-a9ff-715d8ad385eb 1
  
+---+--+
  |Property   |Value
 |
  
+---+--+
  |  attachments  |  [] 
 |
  |   availability_zone   | nova
 |
  |bootable   |false
 |
  |  consistencygroup_id  | None
 |
  |   created_at  |  2015-04-22T18:15:08.00 
 |
  |  description  | None
 |
  |   encrypted   |False
 |
  |   id  | 
b33ec1ef-9d29-4231-8d15-8cf22ca3c502 |
  |metadata   |  {} 
 |
  |  multiattach  |False
 |
  |  name | None
 |
  | os-vol-host-attr:host | ubuntu@ns_nfs-1#nfs 
 |
  | os-vol-mig-status-attr:migstat| None
 |
  | os-vol-mig-status-attr:name_id| None
 |
  |  os-vol-tenant-attr:tenant_id |   4968203f183641b283e111a2f2db  
 |
  |   os-volume-replication:driver_data   | None
 |
  | os-volume-replication:extended_status | None
 |
  |   replication_status  |   disabled  
 

[Yahoo-eng-team] [Bug 1161557] Re: Race condition in handling of udev generated symlinks

2015-03-30 Thread John Griffith
** Changed in: cinder
   Status: Triaged => Incomplete

** Changed in: cinder
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1161557

Title:
  Race condition in handling of udev generated symlinks

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  In several components in both Cinder and Nova, udev-provided symlinks are 
expected to appear immediately.
  This is not the case, as udev rules run asynchronously from device plugging.
  Volume drivers in Cinder and the libvirt driver in Nova seem to be the 
primary culprits.

  To solve this we should wait for udevd to finish processing events; this can 
be done by calling udevadm settle. It can also be called with a timeout 
(probably a good idea).
  Excerpt from udevadm settle --help:

  Usage: udevadm settle OPTIONS
    --timeout=<seconds>     maximum time to wait for events
    --seq-start=<seqnum>    first seqnum to wait for
    --seq-end=<seqnum>      last seqnum to wait for
    --exit-if-exists=<file> stop waiting if file exists
    --quiet                 do not print list after timeout
    --help

  For more intelligent use we could wrap this in a function that can use
  the --exit-if-exists behavior.

  This will ensure the symlink actually exists before we try to use it.
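
  A small wrapper along those lines, assuming sufficient privileges (or a
  rootwrap rule) to run udevadm:

      import os
      import subprocess

      def wait_for_symlink(path, timeout=10):
          # Ask udev to finish processing queued events, returning early if
          # the expected symlink appears, then verify it really exists.
          subprocess.run(['udevadm', 'settle',
                          '--timeout=%d' % timeout,
                          '--exit-if-exists=%s' % path],
                         check=False)
          return os.path.lexists(path)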

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1161557/+subscriptions



[Yahoo-eng-team] [Bug 1431260] Re: Instance failed block device setup

2015-03-27 Thread John Griffith
So this is typically because on some backends the image
download/conversion can take a relatively long time.  The feature of
rolling this up into one command in Horizon was a good idea, but
unfortunately it doesn't coordinate things very well or check status
before trying to move on.

This is actually an issue in Horizon, as there is no single API call
that does this; instead it's a chained sequence built into Horizon.

The workaround, as you've already figured out, is to do things in two
steps: 1. create the bootable volume, 2. boot from the volume.
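
A rough sketch of that two-step flow with the python clients (client setup
omitted; the polling loop is simplified):

    import time

    def boot_from_image_via_volume(cinder, nova, image_id, flavor, name,
                                   size_gb, timeout=600):
        # Step 1: create the bootable volume from the image and wait until
        # Cinder reports it available (the part Horizon doesn't wait for).
        vol = cinder.volumes.create(size=size_gb, imageRef=image_id,
                                    name=name)
        deadline = time.time() + timeout
        while vol.status != 'available':
            if vol.status == 'error' or time.time() > deadline:
                raise RuntimeError('volume %s ended up %s'
                                   % (vol.id, vol.status))
            time.sleep(5)
            vol = cinder.volumes.get(vol.id)

        # Step 2: boot the instance from the now-available volume.
        bdm = {'vda': '%s:::0' % vol.id}
        return nova.servers.create(name, image=None, flavor=flavor,
                                   block_device_mapping=bdm)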

** Project changed: cinder => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1431260

Title:
  Instance failed block device setup

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I'm trying to launch instances from Horizon using the option Boot form
  image - Creates a new volume.

  The instance fails with block_device_mapping ERROR.

  On the controller cinder/api.log and cinder/volume.log shows no error
  or relevant information.

  On the compute node, nova-compute.log does show the problem:

  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] Traceback (most recent call last):
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1819, in 
_prep_block_device
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] do_check_attach=do_check_attach) +
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] File 
/usr/lib/python2.7/site-packages/nova/virt/block_device.py, line 407, in 
attach_block_devices
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] map(_log_and_attach, block_device_mapping)
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] File 
/usr/lib/python2.7/site-packages/nova/virt/block_device.py, line 405, in 
_log_and_attach
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] bdm.attach(*attach_args, **attach_kwargs)
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] File 
/usr/lib/python2.7/site-packages/nova/virt/block_device.py, line 333, in 
attach
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] wait_func(context, vol['id'])
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1263, in 
_await_block_device_map_created
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] attempts=attempts)
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] VolumeNotCreated: Volume 
abc781af-0960-4a65-87d2-a5cb15ce7273 did not finish being created even after we 
waited 250 seconds or 61 attempts.
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c]
  2015-03-11 11:23:02.809 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] Traceback (most recent call last):
  2015-03-11 11:23:02.809 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 2218, in 
_build_resources
  2015-03-11 11:23:02.809 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] block_device_mapping)
  2015-03-11 11:23:02.809 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1847, in 
_prep_block_device
  2015-03-11 11:23:02.809 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] raise exception.InvalidBDM()
  2015-03-11 11:23:02.809 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] InvalidBDM: Block Device Mapping is 
Invalid.
  2015-03-11 11:23:02.809 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c]
  2015-03-11 11:23:02.848 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] Traceback (most recent call last):
  2015-03-11 11:23:02.848 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 2030, in 
_do_build_and_run_instance
  2015-03-11 11:23:02.848 72172 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1373513] Re: Lvm hang during tempest tests

2015-03-16 Thread John Griffith
** Changed in: nova
   Status: New => Invalid

** Changed in: cinder
 Assignee: (unassigned) => John Griffith (john-griffith)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373513

Title:
  Lvm hang during tempest tests

Status in Cinder:
  Triaged
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Managed to trigger a hang in lvm create

  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.704164] 
INFO: task lvm:14805 blocked for more than 120 seconds.
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.705096] 
  Not tainted 3.13.0-35-generic #62-Ubuntu
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.705839] 
echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables this message.
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706871] lvm 
D 8801ffd14440 0 14805  14804 0x
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706876]  
880068f9dae0 0082 8801a14bc800 880068f9dfd8
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706879]  
00014440 00014440 8801a14bc800 8801ffd14cd8
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706881]  
 88004063c280  8801a14bc800
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706883] 
Call Trace:
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706895]  
[81722a6d] io_schedule+0x9d/0x140
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706914]  
[811fac94] do_blockdev_direct_IO+0x1ce4/0x2910
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706918]  
[811f5b00] ? I_BDEV+0x10/0x10
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706920]  
[811fb915] __blockdev_direct_IO+0x55/0x60
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706922]  
[811f5b00] ? I_BDEV+0x10/0x10
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706924]  
[811f61f6] blkdev_direct_IO+0x56/0x60
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706926]  
[811f5b00] ? I_BDEV+0x10/0x10
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706937]  
[8115106b] generic_file_aio_read+0x69b/0x700
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706947]  
[811cca78] ? path_openat+0x158/0x640
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706953]  
[810f3c92] ? from_kgid_munged+0x12/0x20
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706955]  
[811f667b] blkdev_aio_read+0x4b/0x70
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706958]  
[811bc99a] do_sync_read+0x5a/0x90
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706960]  
[811bd035] vfs_read+0x95/0x160
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706962]  
[811bdb49] SyS_read+0x49/0xa0
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706966]  
[8172ed6d] system_call_fastpath+0x1a/0x1f
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706968] 
INFO: task lvs:14822 blocked for more than 120 seconds.
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.707774] 
  Not tainted 3.13.0-35-generic #62-Ubuntu
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.708507] 
echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables this message.
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709535] lvs 
D 8801ffc14440 0 14822  14821 0x
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709537]  
880009ffdae0 0082 8800095e1800 880009ffdfd8
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709539]  
00014440 00014440 8800095e1800 8801ffc14cd8
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709541]  
 880003d59900  8800095e1800
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709543] 
Call Trace:
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709547]  
[81722a6d] io_schedule+0x9d/0x140
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709549]  
[811fac94] do_blockdev_direct_IO+0x1ce4/0x2910
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709551]  
[811f5b00] ? I_BDEV+0x10/0x10
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709554]  
[811fb915] __blockdev_direct_IO+0x55/0x60
  Sep 22 21:19:01 devstack

[Yahoo-eng-team] [Bug 1373513] Re: Lvm hang during tempest tests

2015-03-10 Thread John Griffith
I've added Nova to the projects here because currently we're at a
stalemate where there seems to be a single case during unrescue that
triggers this.  A patch is proposed but it looks like it won't be
accepted; I want to make sure we link this and keep it tracked.  Although
it is different than the original bug posted by Vish, I believe it's related.

** Project changed: cinder => nova

** Changed in: nova
Milestone: kilo-3 => None

** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
   Status: New => In Progress

** Changed in: cinder
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373513

Title:
  Lvm hang during tempest tests

Status in Cinder:
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Managed to trigger a hang in lvm create

  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.704164] 
INFO: task lvm:14805 blocked for more than 120 seconds.
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.705096] 
  Not tainted 3.13.0-35-generic #62-Ubuntu
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.705839] 
echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables this message.
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706871] lvm 
D 8801ffd14440 0 14805  14804 0x
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706876]  
880068f9dae0 0082 8801a14bc800 880068f9dfd8
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706879]  
00014440 00014440 8801a14bc800 8801ffd14cd8
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706881]  
 88004063c280  8801a14bc800
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706883] 
Call Trace:
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706895]  
[81722a6d] io_schedule+0x9d/0x140
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706914]  
[811fac94] do_blockdev_direct_IO+0x1ce4/0x2910
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706918]  
[811f5b00] ? I_BDEV+0x10/0x10
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706920]  
[811fb915] __blockdev_direct_IO+0x55/0x60
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706922]  
[811f5b00] ? I_BDEV+0x10/0x10
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706924]  
[811f61f6] blkdev_direct_IO+0x56/0x60
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706926]  
[811f5b00] ? I_BDEV+0x10/0x10
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706937]  
[8115106b] generic_file_aio_read+0x69b/0x700
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706947]  
[811cca78] ? path_openat+0x158/0x640
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706953]  
[810f3c92] ? from_kgid_munged+0x12/0x20
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706955]  
[811f667b] blkdev_aio_read+0x4b/0x70
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706958]  
[811bc99a] do_sync_read+0x5a/0x90
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706960]  
[811bd035] vfs_read+0x95/0x160
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706962]  
[811bdb49] SyS_read+0x49/0xa0
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706966]  
[8172ed6d] system_call_fastpath+0x1a/0x1f
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706968] 
INFO: task lvs:14822 blocked for more than 120 seconds.
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.707774] 
  Not tainted 3.13.0-35-generic #62-Ubuntu
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.708507] 
echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables this message.
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709535] lvs 
D 8801ffc14440 0 14822  14821 0x
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709537]  
880009ffdae0 0082 8800095e1800 880009ffdfd8
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709539]  
00014440 00014440 8800095e1800 8801ffc14cd8
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709541]  
 880003d59900  8800095e1800
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709543] 
Call Trace:
  Sep 22 21:19:01 

[Yahoo-eng-team] [Bug 1423654] [NEW] Nova rescue causes LVM timeouts after moving attachments

2015-02-19 Thread John Griffith
Public bug reported:

The Nova rescue feature powers off a running instance and boots a
rescue instance, attaching the ephemeral disk of the original instance to
it so that an admin can try to recover the instance.  The problem is
that if a Cinder volume is attached to that instance when we do a rescue,
we don't do a detach or any sort of maintenance on the block mapping
that we have set up for it.  We do check to see if we have it, and
verify it's attached, but that's it.

The result is that after the rescue operation, subsequent LVM calls such as
lvs and vgs will attempt to open a device file that no longer exists, which
takes up to 60 seconds for each device.  An example is the current tempest test:
tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume[gate,negative,volume]

If you look at the tempest results you'll notice that test always takes in
excess of 100 seconds, and it's not just because it's a long test; it's the
blocking LVM calls.

We should detach any cinder volumes that are attached to an instance during the
rescue process.  One concern with this that came from folks on the Nova team
was 'what about boot from volume?'  Rescue of a volume-booted instance is
currently an invalid case, as is evident from the code that checks for it and
fails here:
https://github.com/openstack/nova/blob/master/nova/compute/api.py#L2822

Probably no reason we can't automate this as part of rescue in the
future but for now it's a separate enhancement independent of this bug.
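
A rough sketch of the detach idea described above (this is not Nova's actual
code; the parameter names and the volume_api/driver calls are illustrative
only):

  def detach_volumes_for_rescue(context, instance, bdms, volume_api, driver):
      """Detach cinder volumes before rescuing the instance so their old
      device files don't linger on the host and block later lvs/vgs scans."""
      connector = driver.get_volume_connector(instance)
      for bdm in bdms:
          if not bdm.is_volume:
              continue
          # tear down the host-side attachment first...
          driver.detach_volume(bdm.connection_info, instance, bdm.device_name)
          volume_api.terminate_connection(context, bdm.volume_id, connector)
          # ...then let cinder mark the volume as detached
          volume_api.detach(context, bdm.volume_id)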

** Affects: nova
 Importance: Undecided
 Assignee: John Griffith (john-griffith)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => John Griffith (john-griffith)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423654

Title:
  Nova rescue causes LVM timeouts after moving attachments

Status in OpenStack Compute (Nova):
  New

Bug description:
  The Nova rescue feature powers off a running instance and, boots a
  rescue instance attaching the ephemeral disk of the original instance
  to it to allow an admin to try and recover the instance.  The problem
  is that if a Cinder Volume is attached to that instance when we do a
  rescue we don't do a detach or any sort of maintenance on the block
  mapping that we have set up for it.  We do check to see if we have it,
  and verify it's attached but that's it.

  The result is that after the rescue operation subsequent LVM calls to do 
things like lvs and vgs will attempt to open a device file that no longer 
exists which takes up to 60 seconds for each device.  An example is the current 
tempest test:
  
tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume[gate,negative,volume]

  Which if you look at tempest results you'll notice that test always
  takes in excess of 100 seconds, but it's not just because it's a long
  test, it's the blocking LVM calls.

  We should detach any cinder volumes that are attached to an instance during 
the rescue process.  One concern with this that came from folks on the Nova 
team was 'what about boot from volume'?  Rescue of a volume booted instance is 
currently an invalid case as is evident by the code that checks for it and 
fails here:
  https://github.com/openstack/nova/blob/master/nova/compute/api.py#L2822

  Probably no reason we can't automate this as part of rescue in the
  future but for now it's a separate enhancement independent of this
  bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1423654/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1406784] Re: Can't create volume from non-raw image

2014-12-31 Thread John Griffith
I think this is up to the install or distribution that you're using.
In other words, Cinder does not install packages; that's a deployment
concern.  What you're reporting here is not a bug.  If there's no info in
the docs about installing the qemu tools, that is possibly something we could add.

What OpenStack distribution are you using?
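
For reference, the conversion that Cinder needs the qemu tools for can be done
by hand roughly like this (image name and target device are placeholders):

  $ qemu-img info Fedora-x86_64-20.qcow2                       # confirm the source format
  $ qemu-img convert -f qcow2 -O raw Fedora-x86_64-20.qcow2 /dev/<volume-device>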

** Changed in: cinder
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1406784

Title:
  Can't create volume from non-raw image

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  New

Bug description:
  1. Create an image using a non-raw format (qcow2 or vmdk is ok)
  2. Copy the image to a volume; the copy fails.

  Log:
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 133, 
in _dispatch_and_reply
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 176, 
in _dispatch
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 122, 
in _do_dispatch
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/cinder/volume/manager.py, line 363, in 
create_volume
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
_run_flow()
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/cinder/volume/manager.py, line 356, in 
_run_flow
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
flow_engine.run()
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/taskflow/utils/lock_utils.py, line 53, in 
wrapper
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher return 
f(*args, **kwargs)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py, 
line 111, in run
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
self._run()
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py, 
line 121, in _run
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
self._revert(misc.Failure())
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py, 
line 78, in _revert
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
misc.Failure.reraise_if_any(failures.values())
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/taskflow/utils/misc.py, line 558, in 
reraise_if_any
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
failures[0].reraise()
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/taskflow/utils/misc.py, line 565, in reraise
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(*self._exc_info)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/executor.py, 
line 36, in _execute_task
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher result = 
task.execute(**arguments)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/cinder/volume/flows/manager/create_volume.py,
 line 594, in execute
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
**volume_spec)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/cinder/volume/flows/manager/create_volume.py,
 line 556, in _create_from_image
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
image_id, image_location, image_service)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/cinder/volume/flows/manager/create_volume.py,
 line 463, in _copy_image_to_volume
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher raise 
exception.ImageUnacceptable(ex)

[Yahoo-eng-team] [Bug 1404037] Re: SimpleReadOnlySaharaClientTest.test_sahara_help fails in gate

2014-12-18 Thread John Griffith
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404037

Title:
  SimpleReadOnlySaharaClientTest.test_sahara_help fails in gate

Status in OpenStack Compute (Nova):
  New
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  New

Bug description:
  Fails on various gate jobs, example patch here:
  https://review.openstack.org/#/c/141931/  at Dec 18, 22:34 UTC

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1404037/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1387945] Re: nova volume-attach is giving wrong device ID

2014-10-31 Thread John Griffith
This is a VERY old and long running issue with how things work on the
Nova side of the house.  The volumes are going to get attached to the
next available drive mapping (vdb, vdc, vdd) based on the Block Device
Mapping table in Nova.  The specification you provide to attach-volume
is really more of a hint than anything else.

Anyway, in the past the answer has been to just use 'auto' and save
yourself the false sense of control here.  That's not acceptable for some,
but regardless, this is a Nova operation and Cinder actually has no control
or input here.

Marking invalid for Cinder and adding Nova.
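
As a practical workaround for scripts, look the device up by volume ID from
inside the guest instead of trusting the returned name; on KVM with virtio the
volume UUID (truncated to 20 characters) typically shows up as the disk
serial, e.g.:

  $ readlink -f /dev/disk/by-id/virtio-201b2fe8-7f77-446d-a
  /dev/vdc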

** Changed in: cinder
   Status: New => Invalid

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1387945

Title:
  nova volume-attach is giving wrong device ID

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  New

Bug description:
  Sometimes while attaching a volume to an instance using nova volume-
  attach, it gives the wrong device ID (mountpoint: /dev/vdb).

  root@techpatron:~# nova volume-attach VM1 201b2fe8-7f77-446d-a6e4-5d077914329c
  +--+--+
  | Property | Value|
  +--+--+
  | device   | /dev/vdd |
  | id   | 201b2fe8-7f77-446d-a6e4-5d077914329c |
  | serverId | 2f319155-06d2-4aca-9f0f-49b415112568 |
  | volumeId | 201b2fe8-7f77-446d-a6e4-5d077914329c |
  +--+--+

  Here it shows /dev/vdd, but the volume actually attached as
  /dev/vdc on the instance VM1.

  Because of this, when I run automation scripts (which perform
  operations on the attached device from within the instance) I face
  problems: the script takes the device ID from the output as
  /dev/vdd, but the device is actually attached at some other mount point.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1387945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1227321] Re: DBDuplicateEntry not being translated for DB2

2014-09-19 Thread John Griffith
Not sure of the status in Cinder (oslo moves may cover this) but nobody
seems to care as this has been stagnant for a year on cinder.

Feel free to log a new bug if needed.

** No longer affects: cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1227321

Title:
  DBDuplicateEntry not being translated for DB2

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Triaged
Status in OpenStack Compute (Nova):
  Fix Released
Status in The Oslo library incubator:
  Fix Released

Bug description:
  The
  
tempest.api.compute.keypairs.test_keypairs.test_create_keypair_with_duplicate_name
  test fails if you're running with a DB2 backend because the nova code
  is not currently translating the db integrity error if the backing
  engine is DB2 (ibm_db_sa) in
  nova.openstack.common.db.sqlalchemy.session._raise_if_duplicate_entry_error.
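
  A minimal sketch of the kind of translation being discussed (this is not the
  actual oslo code; the helper name is made up, and the DB2 check is based on
  DB2's documented duplicate-key error, SQL0803N / SQLSTATE 23505):

    import re

    # DB2 (ibm_db_sa) reports unique-constraint violations as SQL0803N with
    # SQLSTATE 23505; the existing translation only knows mysql/postgres/sqlite.
    _DB2_DUPLICATE = re.compile(r"SQLCODE=-803|SQLSTATE=23505")

    def is_duplicate_entry_error(integrity_error, engine_name):
        """Return True when an IntegrityError is a duplicate-key violation.

        A translation layer would call something like this and re-raise the
        error as DBDuplicateEntry so callers can handle it uniformly.
        """
        if engine_name == "ibm_db_sa":
            return bool(_DB2_DUPLICATE.search(str(integrity_error)))
        return False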

  In full disclosure, nova is not claiming support for DB2, and there is
  a lot of work that would need to be done for that, which my team is
  planning for Icehouse; there is a blueprint here:

  https://blueprints.launchpad.net/nova/+spec/db2-database

  My team does have DB2 10.5 working with nova trunk but we have changes
  to the migration scripts to support that.  Also, you have to run with
  the DB2 patch for sqlalchemy-migrate posted here:

  https://code.google.com/p/sqlalchemy-migrate/issues/detail?id=151

  And you must run with the ibm-db/ibm-db-sa drivers:

  https://code.google.com/p/ibm-db/source/clones?repo=ibm-db-sa

  We're trying to get the sqlalchemy-migrate support for DB2 accepted in
  the icehouse timeframe but need to show the migrate maintainer that he
  can use the free express-c version of DB2 in ubuntu for the test
  backend.

  Anyway, having said all that, fixing the DBDuplicateEntry translation
  is part of the story so I'm opening a bug to track it and get the
  patch up to get the ball rolling.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1227321/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367982] [NEW] ERROR [tempest.scenario.test_volume_boot_pattern] ssh to server failed

2014-09-10 Thread John Griffith
Public bug reported:

Failure encountered in gate testing dsvm-full

http://logs.openstack.org/98/120298/2/check/check-tempest-dsvm-
full/a739161/console.html#_2014-09-10_15_53_23_821

It appears that the volume was created and nova reported the instance as
booted successfully; however, the ssh connection timed out.  I haven't
looked closely to see if perhaps this is an issue in the test itself.

There are some issues related to instance state listed in the n-cpu logs
of this run, but they don't appear to be related to this specific test.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367982

Title:
  ERROR [tempest.scenario.test_volume_boot_pattern] ssh to server
  failed

Status in OpenStack Compute (Nova):
  New

Bug description:
  Failure encountered in gate testing dsvm-full

  http://logs.openstack.org/98/120298/2/check/check-tempest-dsvm-
  full/a739161/console.html#_2014-09-10_15_53_23_821

  It appears that the volume was created and nova reported it as booted
  successfully however the ssh connection timed out.  Haven't looked
  closely to see if perhaps this is an issue in the test itself.

  There are some issues related to instance state listed in the n-cpu
  logs of this run, but they don't appear to be related to this specific
  test.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367982/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366149] [NEW] neutron-dsvm-full test_server_connectivity_stop_start test fails

2014-09-05 Thread John Griffith
Public bug reported:

Failure in gate neutron-dsvm-full:
http://logs.openstack.org/98/117898/2/gate/gate-tempest-dsvm-neutron-full/40cf18a/console.html#_2014-09-05_12_25_05_730

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1366149

Title:
  neutron-dsvm-full test_server_connectivity_stop_start test fails

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Failure in gate neutron-dsvm-full:
  
http://logs.openstack.org/98/117898/2/gate/gate-tempest-dsvm-neutron-full/40cf18a/console.html#_2014-09-05_12_25_05_730

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1366149/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357677] Re: Instances fail to boot from volume

2014-08-29 Thread John Griffith
I'm not sure why this is logged as a Cinder bug, other than the fact
that it's boot from volume perhaps; the instance appears to boot
correctly and is in the ACTIVE state.  The issue here seems to be networking,
as the ssh connection fails...  no?

http://logs.openstack.org/75/113175/1/gate/check-tempest-dsvm-neutron-
full/827c854/console.html.gz#_2014-08-14_11_23_29_423

Not sure if this is on the Neutron side or the Nova side, but I suspect
it's a networking issue, regardless doesn't seem like a Cinder issue as
far as I can tell.

** No longer affects: cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357677

Title:
  Instances fail to boot from volume

Status in OpenStack Compute (Nova):
  New

Bug description:
  Logstash query for full console outputs which do not contain 'info:
  initramfs loading root from /dev/vda', but do contain the previous boot
  message.

  These issues look like an ssh connectivity issue, but the instance is not
  booted, and it happens regardless of the network type.

  message: "Freeing unused kernel memory" AND message: "Initializing
  cgroup subsys cpuset" AND NOT message: "initramfs loading root from"
  AND tags:"console"

  49 incidents/week.

  Example console log:
  
http://logs.openstack.org/75/113175/1/gate/check-tempest-dsvm-neutron-full/827c854/console.html.gz#_2014-08-14_11_23_30_120

  It failed when it tried to ssh to the 3rd server.
  WARNING: The console.log contains the serial console output of two instances;
  try not to mix them up when reading.

  The fail point in the test code was here:
  
https://github.com/openstack/tempest/blob/b7144eb08175d010e1300e14f4f75d04d9c63c98/tempest/scenario/test_volume_boot_pattern.py#L175

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362854] [NEW] Incorrect regex on rootwrap for encrypted volumes ln creation

2014-08-28 Thread John Griffith
Public bug reported:

While running Tempest tests against my device, the encryption tests
consistently fail to attach.  It turns out the problem is an attempt to
create a symbolic link for the encryption process; however, the rootwrap spec
is restricted to targets with the default openstack.org iqn.

Error Message from n-cpu:

Stderr: '/usr/local/bin/nova-rootwrap: Unauthorized command: ln
--symbolic --force /dev/mapper/ip-10.10.8.112:3260-iscsi-
iqn.2010-01.com.solidfire:3gd2.uuid-6f210923-36bf-46a4-b04a-
6b4269af9d4f.4710-lun-0 /dev/disk/by-path/ip-10.10.8.112:3260-iscsi-
iqn.2010-01.com.sol


Rootwrap entry currently implemented:

ln: RegExpFilter, ln, root, ln, --symbolic, --force, /dev/mapper/ip-.*-iscsi-iqn.2010-10.org.openstack:volume-.*, /dev/disk/by-path/ip-.*-iscsi-iqn.2010-10.org.openstack:volume-.*
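
One possible loosened entry that would also match non-default target iqns is
sketched below (the exact regex adopted upstream may differ):

ln: RegExpFilter, ln, root, ln, --symbolic, --force, /dev/mapper/ip-.*-iscsi-iqn.*, /dev/disk/by-path/ip-.*-iscsi-iqn.*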

** Affects: nova
 Importance: Undecided
 Status: New

** Summary changed:

- Missing rootwrap for encrypted volumes
+ Incorrect regex on rootwrap for encrypted volumes ln creation

** Description changed:

+ While running Tempest tests against my device, the encryption tests
+ consistently fail to attach.  Turns out the problem is an attempt to
+ create symbolic link for encryption process, however the rootwrap spec
+ is restricted to targets with the default openstack.org iqn.
  
- Stderr: '/usr/local/bin/nova-rootwrap: Unauthorized command: ln --symbolic 
--force 
/dev/mapper/ip-10.10.8.112:3260-iscsi-iqn.2010-01.com.solidfire:3gd2.uuid-6f210923-36bf-46a4-b04a-6b4269af9d4f.4710-lun-0
 /dev/disk/by-path/ip-10.10.8.112:3260-iscsi-iqn.2010-01.com.sol
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/compute/manager.py, line 412, in decorated_function
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/exception.py, line 88, in wrapped
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher payload)
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/openstack/common/excutils.py, line 82, in __exit__
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/exception.py, line 71, in wrapped
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/compute/manager.py, line 296, in decorated_function
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher pass
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/openstack/common/excutils.py, line 82, in __exit__
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/compute/manager.py, line 282, in decorated_function
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/compute/manager.py, line 324, in decorated_function
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/openstack/common/excutils.py, line 82, in __exit__
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/compute/manager.py, line 312, in decorated_function
- 

[Yahoo-eng-team] [Bug 1350466] Re: deadlock in scheduler expire reservation periodic task

2014-08-01 Thread John Griffith
** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
   Status: New => Triaged

** Changed in: cinder
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1350466

Title:
  deadlock in scheduler expire reservation periodic task

Status in Cinder:
  Triaged
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  http://logs.openstack.org/54/105554/4/check/gate-tempest-dsvm-neutron-
  large-
  ops/45501af/logs/screen-n-sch.txt.gz?level=TRACE#_2014-07-30_16_26_20_158

  
  2014-07-30 16:26:20.158 17209 ERROR nova.openstack.common.periodic_task [-] 
Error during SchedulerManager._expire_reservations: (OperationalError) (1213, 
'Deadlock found when trying to get lock; try restarting transaction') 'UPDATE 
reservations SET updated_at=updated_at, deleted_at=%s, deleted=id WHERE 
reservations.deleted = %s AND reservations.expire < %s' 
(datetime.datetime(2014, 7, 30, 16, 26, 20, 152098), 0, datetime.datetime(2014, 
7, 30, 16, 26, 20, 149665))
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
Traceback (most recent call last):
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/openstack/common/periodic_task.py, line 198, in 
run_periodic_tasks
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
task(self, context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/scheduler/manager.py, line 157, in 
_expire_reservations
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
QUOTAS.expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/quota.py, line 1401, in expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
self._driver.expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/quota.py, line 651, in expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
db.reservation_expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/db/api.py, line 1173, in reservation_expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
return IMPL.reservation_expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/db/sqlalchemy/api.py, line 149, in wrapper
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
return f(*args, **kwargs)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/db/sqlalchemy/api.py, line 3394, in 
reservation_expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
reservation_query.soft_delete(synchronize_session=False)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py, line 
694, in soft_delete
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
synchronize_session=synchronize_session)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2690, in 
update
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
update_op.exec_()
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py, line 
816, in exec_
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
self._do_exec()
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py, line 
913, in _do_exec
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
update_stmt, params=self.query._params)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py, line 
444, in _wrap
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
_raise_if_deadlock_error(e, self.bind.dialect.name)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py, line 
427, in _raise_if_deadlock_error
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
raise exception.DBDeadlock(operational_error)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
DBDeadlock: (OperationalError) (1213, 'Deadlock found when trying to get lock; 

[Yahoo-eng-team] [Bug 1332855] Re: grenade test fails due to tempest.scenario.test_dashboard_basic_ops.TestDashboardBasicOps.test_basic_scenario[dashboard]

2014-06-21 Thread John Griffith
Excerpts from Sean's email that just went out but hasn't hit the archives
yet:

```
Horizon in icehouse is now 100% failing

 [Sat Jun 21 16:17:35 2014] [error] Internal Server Error: /
[Sat Jun 21 16:17:35 2014] [error] Traceback (most recent call last):
[Sat Jun 21 16:17:35 2014] [error]   File
/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py,
line 112, in get_response
[Sat Jun 21 16:17:35 2014] [error] response =
wrapped_callback(request, *callback_args, **callback_kwargs)
[Sat Jun 21 16:17:35 2014] [error]   File
/usr/local/lib/python2.7/dist-packages/django/views/decorators/vary.py, line
36, in inner_func
[Sat Jun 21 16:17:35 2014] [error] response = func(*args, **kwargs)
[Sat Jun 21 16:17:35 2014] [error]   File
/opt/stack/old/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/views.py,
line 35, in splash
[Sat Jun 21 16:17:35 2014] [error] form = views.Login(request)
[Sat Jun 21 16:17:35 2014] [error] AttributeError: 'module' object has
no attribute 'Login'

This suspiciously times with django_openstack_auth 1.1.6 being released.
http://logstash.openstack.org/#eyJzZWFyY2giOiJcIkF0dHJpYnV0ZUVycm9yOiAnbW9kdWxlJyBvYmplY3QgaGFzIG5vIGF0dHJpYnV0ZSAnTG9naW4nXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDMzNjk0MjQ4NjR9

Because this breaks smoke tests on icehouse, it means that any project
with upgrade testing fails.

Would be great if horizon folks could make this a top priority. Also, in
future, releasing new library versions on a Saturday is maybe best avoided. :)

-Sean
```


** Project changed: tempest => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1332855

Title:
  grenade test fails due to
  
tempest.scenario.test_dashboard_basic_ops.TestDashboardBasicOps.test_basic_scenario[dashboard]

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Grenade dsvm-jobs are failing.  Console output doesn't offer much, but
  looking at the grenade summary logs the culprit seems to be a
  dashboard ops test:

  http://logs.openstack.org/52/101252/1/check/check-grenade-
  dsvm/25e55c2/logs/grenade.sh.log.2014-06-21-153223

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1332855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1329563] [NEW] test_suspend_server_invalid_state fails with 400 response

2014-06-12 Thread John Griffith
Public bug reported:

Encountered what looks like a new gate failure.  The
test_suspend_server_invalid_state test fails with a bad request response
/ unhandled exception.

http://logs.openstack.org/48/96548/1/gate/gate-tempest-dsvm-postgres-
full/fa5c27d/console.html#_2014-06-12_23_33_59_830

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1329563

Title:
  test_suspend_server_invalid_state fails with 400 response

Status in OpenStack Compute (Nova):
  New

Bug description:
  Encountered what looks like a new gate failure.  The
  test_suspend_server_invalid_state test fails with a bad request
  response / unhandled exception.

  http://logs.openstack.org/48/96548/1/gate/gate-tempest-dsvm-postgres-
  full/fa5c27d/console.html#_2014-06-12_23_33_59_830

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1329563/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1329138] [NEW] Image status fails to become active in test_list_image_filters

2014-06-11 Thread John Griffith
Public bug reported:

I have seen the following at least a couple of times lately in gate
failures:

http://logs.openstack.org/48/96548/1/check/check-tempest-dsvm-
full/682586b/console.html#_2014-06-10_09_35_06_587

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1329138

Title:
  Image status fails to become active in test_list_image_filters

Status in OpenStack Compute (Nova):
  New

Bug description:
  Have seen the following  at least a couple of times lately in gate
  failures:

  http://logs.openstack.org/48/96548/1/check/check-tempest-dsvm-
  full/682586b/console.html#_2014-06-10_09_35_06_587

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1329138/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328672] [NEW] resize instance fails in dsvm-full test_list_migrations_in_flavor_resize_situation

2014-06-10 Thread John Griffith
Public bug reported:

Gate failure encountered here: http://logs.openstack.org/39/97639/2/gate
/gate-tempest-dsvm-full/6e2a9e4/console.html.gz#_2014-06-05_12_06_52_960

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1328672

Title:
  resize instance fails in dsvm-full
  test_list_migrations_in_flavor_resize_situation

Status in OpenStack Compute (Nova):
  New

Bug description:
  Gate failure encountered here:
  http://logs.openstack.org/39/97639/2/gate/gate-tempest-dsvm-
  full/6e2a9e4/console.html.gz#_2014-06-05_12_06_52_960

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1328672/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1321508] [NEW] Unable to create an instance from remote machine using novaclient

2014-05-20 Thread John Griffith
Public bug reported:

When attempting to boot an instance from a remote host using novaclient
and an API access file downloaded via the dashboard, I'm unable to create
instances due to an error in attempting to retrieve networks.  This is
reproducible via devstack on both Precise and Trusty, and I've verified
that using the same creds file on the actual compute host is successful.

It looks like this might be related to the following commit:
https://github.com/openstack/nova/commit/869b435dca27e06f4160b781d86bba708475866a

Nova-API Trace:
```

etag: 51bc16b900bf0f814bb6c0c3dd8f0790
x-image-meta-is_public: True
x-image-meta-min_ram: 0
x-image-meta-owner: 8109ccec0fec4106ad3f005fd76130bb
x-image-meta-updated_at: 2014-05-21T00:19:30
content-type: text/html; charset=UTF-8
x-openstack-request-id: req-35660f52-3922-456b-b44c-6d65d8f696fa
x-image-meta-disk_format: qcow2
x-image-meta-name: Fedora-x86_64-20-20131211.1-sda
 from (pid=25312) log_http_response 
/opt/stack/python-glanceclient/glanceclient/common/http.py:153
2014-05-20 18:53:38.456 ERROR nova.api.openstack 
[req-44e54528-f86b-4771-a7a6-91f6af164058 demo demo] Caught error: No networks 
defined.
Traceback (most recent call last):

  File /opt/stack/nova/nova/conductor/manager.py, line 597, in 
_object_dispatch
return getattr(target, method)(context, *args, **kwargs)

  File /opt/stack/nova/nova/objects/base.py, line 115, in wrapper
result = fn(cls, context, *args, **kwargs)

  File /opt/stack/nova/nova/objects/network.py, line 183, in get_by_uuids
project_only)

  File /opt/stack/nova/nova/db/api.py, line 1003, in network_get_all_by_uuids
project_only=project_only)

  File /opt/stack/nova/nova/db/sqlalchemy/api.py, line 164, in wrapper
return f(*args, **kwargs)

  File /opt/stack/nova/nova/db/sqlalchemy/api.py, line 2600, in 
network_get_all_by_uuids
raise exception.NoNetworksFound()

NoNetworksFound: No networks defined.

Traceback (most recent call last):

  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply
incoming.message))

  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
return self._do_dispatch(endpoint, method, ctxt, args)

  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch
result = getattr(endpoint, method)(ctxt, **new_args)

  File /opt/stack/nova/nova/network/manager.py, line 1351, in 
validate_networks
self._get_networks_by_uuids(context, network_uuids)

  File /opt/stack/nova/nova/network/manager.py, line 1377, in 
_get_networks_by_uuids
context, network_uuids, project_only=allow_none)

  File /opt/stack/nova/nova/objects/base.py, line 113, in wrapper
args, kwargs)

  File /opt/stack/nova/nova/conductor/rpcapi.py, line 355, in 
object_class_action
objver=objver, args=args, kwargs=kwargs)

  File /usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py, 
line 150, in call
wait_for_reply=True, timeout=timeout)

  File /usr/local/lib/python2.7/dist-packages/oslo/messaging/transport.py, 
line 90, in _send
timeout=timeout)

  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, 
line 386, in send
return self._send(target, ctxt, message, wait_for_reply, timeout)

  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, 
line 379, in _send
raise result

NoNetworksFound_Remote: No networks defined.
Traceback (most recent call last):

  File /opt/stack/nova/nova/conductor/manager.py, line 597, in 
_object_dispatch
return getattr(target, method)(context, *args, **kwargs)

  File /opt/stack/nova/nova/objects/base.py, line 115, in wrapper
result = fn(cls, context, *args, **kwargs)

  File /opt/stack/nova/nova/objects/network.py, line 183, in get_by_uuids
project_only)

  File /opt/stack/nova/nova/db/api.py, line 1003, in network_get_all_by_uuids
project_only=project_only)

  File /opt/stack/nova/nova/db/sqlalchemy/api.py, line 164, in wrapper
return f(*args, **kwargs)

  File /opt/stack/nova/nova/db/sqlalchemy/api.py, line 2600, in 
network_get_all_by_uuids
raise exception.NoNetworksFound()

NoNetworksFound: No networks defined.


2014-05-20 18:53:38.456 TRACE nova.api.openstack Traceback (most recent call 
last):
2014-05-20 18:53:38.456 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/__init__.py, line 125, in __call__
2014-05-20 18:53:38.456 TRACE nova.api.openstack return 
req.get_response(self.application)
2014-05-20 18:53:38.456 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1320, in send
2014-05-20 18:53:38.456 TRACE nova.api.openstack application, 
catch_exc_info=False)
2014-05-20 18:53:38.456 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 

[Yahoo-eng-team] [Bug 1301519] Re: nova.conf.sample missing from the 2014.1.rc1 tarball

2014-04-17 Thread John Griffith
** Also affects: cinder
   Importance: Undecided
   Status: New

** No longer affects: cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1301519

Title:
  nova.conf.sample missing from the 2014.1.rc1 tarball

Status in OpenStack Compute (Nova):
  Confirmed
Status in Python Build Reasonableness:
  In Progress

Bug description:
  This patch [1] removed the nova.conf.sample because it's not gated but now we 
are left without the sample config file in the tarball.
  We could generate the nova.conf.sample in setup.py (based on this comment 
[2]) and include it in the tarball for rc2.

  [1] https://review.openstack.org/#/c/81588/
  [2] https://bugs.launchpad.net/nova/+bug/1294774/comments/4

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1301519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304234] Re: the problem of updating quota

2014-04-16 Thread John Griffith
I don't know if I see this as a bug.  There's a use case, in my opinion,
where a provider or private cloud admin may want to adjust a user's quota to a
lower level even if it is below what they're currently using.  The idea
here is that an admin shouldn't be limited to what the user is actually
using at the time.

Also, the idea is they may have N objects now, but as they delete
those objects, we still want to ensure they stay at a new lower quota
etc.

Not sure I'm explaining this well, but I don't view this as a bug.
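
For illustration, an admin deliberately lowering a tenant's volume quota below
current usage (the tenant ID is a placeholder):

  $ cinder quota-show <TENANT_ID>                  # suppose the tenant already has 10 volumes
  $ cinder quota-update --volumes 5 <TENANT_ID>    # still allowed; new volumes are blocked until usage drops below 5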

** Changed in: cinder
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1304234

Title:
  the problem of updating quota

Status in Cinder:
  Invalid
Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  In nova, if you update a quota value from a large value to a smaller one, it
  will fail, but the DbQuotaDriver allows updating a quota value from a large
  value to a smaller one, so I think it should behave as nova does!

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1304234/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1247603] Re: nova-conductor process can't create cosumer connection to qpid after HeartbeatTimeout in heavy workload

2014-03-26 Thread John Griffith
I hit this today on the latest Havana build; logs below.  I reproduced it
while doing some stress testing: creating 50 boot-from-volume instances in one
operation.  I need to try it in my Icehouse setup next.

2014-03-20 00:42:51.725 17580 INFO nova.compute.manager 
[req-ef61a326-288b-494d-9d30-f533e7739949 None None] Updating bandwidth usage 
cache
2014-03-20 00:43:51.843 17580 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources
2014-03-20 00:43:52.607 17580 AUDIT nova.compute.resource_tracker [-] Free ram 
(MB): 257773
2014-03-20 00:43:52.607 17580 AUDIT nova.compute.resource_tracker [-] Free disk 
(GB): 49
2014-03-20 00:43:52.607 17580 AUDIT nova.compute.resource_tracker [-] Free 
VCPUS: 48
2014-03-20 00:43:52.677 17580 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for os-1.solidfire.net:os-1.solidfire.net
2014-03-20 00:44:52.808 17580 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources
2014-03-20 00:44:53.589 17580 AUDIT nova.compute.resource_tracker [-] Free ram 
(MB): 257773
2014-03-20 00:44:53.589 17580 AUDIT nova.compute.resource_tracker [-] Free disk 
(GB): 49
2014-03-20 00:44:53.590 17580 AUDIT nova.compute.resource_tracker [-] Free 
VCPUS: 48
2014-03-20 00:44:53.657 17580 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for os-1.solidfire.net:os-1.solidfire.net
2014-03-20 08:47:05.277 17580 ERROR nova.openstack.common.rpc.impl_qpid [-] 
Failed to publish message to topic 'conductor': heartbeat timeout
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid 
Traceback (most recent call last):
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py, line 
540, in ensure
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid 
return method(*args, **kwargs)
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py, line 
632, in _publisher_send
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid 
publisher = cls(self.conf, self.session, topic)
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py, line 
398, in __init__
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid 
super(TopicPublisher, self).__init__(conf, session, node_name)
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py, line 
328, in __init__
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid 
self.reconnect(session)
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py, line 
332, in reconnect
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid 
self.sender = session.sender(self.address)
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid   File 
string, line 6, in sender
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid   File 
/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 592, in 
sender
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid 
sender._ewait(lambda: sender.linked)
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid   File 
/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 799, in 
_ewait
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid 
result = self.session._ewait(lambda: self.error or predicate(), timeout)
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid   File 
/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 566, in 
_ewait
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid 
result = self.connection._ewait(lambda: self.error or predicate(), timeout)
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid   File 
/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 209, in 
_ewait
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid 
self.check_error()
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid   File 
/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 202, in 
check_error
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid 
raise self.error
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid 
HeartbeatTimeout: heartbeat timeout
2014-03-20 08:47:05.277 17580 TRACE nova.openstack.common.rpc.impl_qpid
2014-03-20 08:47:05.295 17580 ERROR nova.openstack.common.rpc.impl_qpid [-] 
Failed to consume message from queue: heartbeat timeout
2014-03-20 08:47:05.295 17580 

[Yahoo-eng-team] [Bug 1266590] Re: db connection string is cleartext in debug log

2014-03-24 Thread John Griffith
Cinder has the secret=True setting in the conf options already, so this
DNE (does not exist) for Cinder.
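
For context, secret=True refers to the oslo.config option flag that masks a
value when the config is dumped at debug level; a minimal illustration (the
option definition below is just an example, not Cinder's actual code):

    from oslo.config import cfg

    database_opts = [
        cfg.StrOpt('connection',
                   secret=True,   # value is masked when log_opt_values() runs
                   help='The SQLAlchemy connection string to use to connect '
                        'to the database.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(database_opts, group='database')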

** No longer affects: cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1266590

Title:
  db connection string is cleartext in debug log

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  
  When I start up keystone-all with --debug it logs the config settings. The 
config setting for the database connection string is printed out:

  (keystone-all): 2014-01-06 16:32:56,983 DEBUG cfg log_opt_values
  database.connection=
  mysql://root:rootpwd@127.0.0.1/keystone?charset=utf8

  The database connection string will typically contain the user
  password, so this value should be masked (like admin_token).

  This is a regression from Havana, which masked the db connection
  string.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1266590/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293631] Re: when delete a dead VM the status of attached cinder volume is not updated

2014-03-23 Thread John Griffith
There's no way currently for Cinder to know about this situation.
It's actually a failure on Nova's part to clean up after itself when
deleting a VM IMO.

Also.. FYI there's a reset-state extension that you can/should use
rather than manipulating the DB directly.
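
For reference, a rough sketch of using that extension from python-cinderclient
instead of touching the database (the volume ID and credentials are
placeholders, and the exact client signature may differ by release):

    from cinderclient import client

    # Placeholder admin credentials / endpoint, for illustration only.
    cinder = client.Client('1', 'admin', 'ADMIN_PASS', 'admin',
                           'http://127.0.0.1:5000/v2.0')

    # os-reset_status is an admin-only action; this forces the stuck volume
    # record back to 'available' without editing the volumes table by hand.
    volume = cinder.volumes.get('VOLUME_UUID')
    cinder.volumes.reset_state(volume, 'available')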

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1293631

Title:
  when delete a dead VM the status of attached cinder volume is not
  updated

Status in Cinder:
  Invalid
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When I terminate a VM which is in suspended status and has a cinder
  volume attached, the status of the cinder volume is not updated. The
  volume status is kept as attached and the attached-host in the
  database isn't updated. The cinder volume becomes an orphan on which
  you can't do delete/update/attach/detach; the only option is to go to the
  database and update the volumes table manually.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1293631/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294132] Re: Volume status set to error extending when new size exceeds quota

2014-03-20 Thread John Griffith
So I ran some tests on this: as long as the backend doesn't fail to do
the extend, the quota is checked up front and the API responds with an
error before ever changing state or attempting the resize.

This is what I would expect.  If the command passes the quota check and is sent
to the driver, but the driver fails (i.e. not enough space), then it raises
the error I think you're seeing and sets the status to error.  This is
what I would expect and I think is appropriate behavior.  It's in line
with how all of the other async calls work.

** Changed in: cinder
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1294132

Title:
  Volume status set to error extending when new size exceeds quota

Status in Cinder:
  Invalid
Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  extend_volume in cinder.volume.manager should not set the status to
  error_extending when the quota was exceeded. The status should still
  be available

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1294132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1285478] Re: Enforce alphabetical ordering in requirements file

2014-03-06 Thread John Griffith
I don't think this is a real problem, especially considering the req
files should be auto-updated anyway.  I don't see any value in messing
with this, other than making sure we're in alphabetical order once and then
letting the requirements update tools update the files correctly.  Adding a
check for this doesn't seem to add any value.

** Changed in: cinder
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1285478

Title:
  Enforce alphabetical ordering in requirements file

Status in Cinder:
  Invalid
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic (Bare Metal Provisioning):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Message Queuing Service (Marconi):
  In Progress
Status in OpenStack Neutron (virtual network service):
  Triaged
Status in Python client library for Cinder:
  In Progress
Status in Python client library for Glance:
  New
Status in Python client library for heat:
  New
Status in Python client library for Ironic:
  Fix Committed
Status in Python client library for Neutron:
  New
Status in Trove client binding:
  In Progress
Status in OpenStack contribution dashboard:
  New
Status in Storyboard database creator:
  In Progress
Status in Tempest:
  In Progress
Status in Trove - Database as a Service:
  In Progress
Status in Tuskar:
  Fix Committed

Bug description:
  
  Sorting requirements files in alphabetical order makes them more readable, and 
makes it easy to check whether a specific library
  is in the requirements files. Hacking doesn't check *.txt files.
  We had enforced this check in oslo-incubator 
https://review.openstack.org/#/c/66090/.

  This bug is used to track syncing the check gating.

  How to sync this to other projects:

  1.  Copy  tools/requirements_style_check.sh  to project/tools.

  2. run tools/requirements_style_check.sh  requirements.txt test-
  requirements.txt

  3. fix the violations
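
  A rough Python illustration of what the check in step 2 amounts to (this is
  not the oslo-incubator tool itself, just a sketch for orientation):

    #!/usr/bin/env python
    # Sketch of an alphabetical-order check for requirements files; the real
    # gate check is tools/requirements_style_check.sh from oslo-incubator.
    import sys


    def check_sorted(path):
        with open(path) as f:
            reqs = [line.strip() for line in f
                    if line.strip() and not line.startswith('#')]
        ok = True
        for prev, cur in zip(reqs, reqs[1:]):
            if prev.lower() > cur.lower():
                print('%s: %r should come before %r' % (path, cur, prev))
                ok = False
        return ok


    if __name__ == '__main__':
        # e.g.: python check_reqs.py requirements.txt test-requirements.txt
        results = [check_sorted(p) for p in sys.argv[1:]]
        sys.exit(0 if all(results) else 1)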

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1285478/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1179709] Re: keystone shuts down it's own listening socket with 'too many files open'

2014-02-25 Thread John Griffith
Seems we hit this today on a Havana build.  Nothing really going on with
the system... went to dinner, user came back and couldn't log in.  I
logged on to controller and noticed traces in keystone, restarted
services and back in business, but not sure what's actually causing
this.

** Changed in: keystone
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1179709

Title:
  keystone shuts down it's own listening socket with 'too many files
  open'

Status in OpenStack Identity (Keystone):
  New

Bug description:
  We're running with mysql - I can grab a rendered config file for you
  if needed.

  Here is what I see in the log:
  Traceback (most recent call last):
File 
/opt/stack/venvs/keystone/local/lib/python2.7/site-packages/eventlet/hubs/poll.py,
 line 97, in wait
  readers.get(fileno, noop).cb(fileno)
File 
/opt/stack/venvs/keystone/local/lib/python2.7/site-packages/eventlet/greenthread.py,
 line 194, in main
  result = function(*args, **kwargs)
File 
/opt/stack/venvs/keystone/local/lib/python2.7/site-packages/keystone/common/wsgi.py,
 line 135, in _run
  log=WritableLogger(log))
File 
/opt/stack/venvs/keystone/local/lib/python2.7/site-packages/eventlet/wsgi.py, 
line 663, in server
  client_socket = sock.accept()
File 
/opt/stack/venvs/keystone/local/lib/python2.7/site-packages/eventlet/greenio.py,
 line 166, in accept
  res = socket_accept(fd)
File 
/opt/stack/venvs/keystone/local/lib/python2.7/site-packages/eventlet/greenio.py,
 line 56, in socket_accept
  return descriptor.accept()
File /usr/lib/python2.7/socket.py, line 202, in accept
  sock, addr = self._sock.accept()
  error: [Errno 24] Too many open files
  Removing descriptor: 5

  keystone-all is still running, netstat -anp shows:
  tcp        0      0 0.0.0.0:35357      0.0.0.0:*      LISTEN      25530/python
  unix  3      [ ]     STREAM     CONNECTED     44834812     25530/python
 

  restarting it gets me:
  root@ubuntu:~# netstat -anp | grep 25267
  tcp        0      0 0.0.0.0:35357      0.0.0.0:*      LISTEN      25267/python
  tcp        0      0 0.0.0.0:5000       0.0.0.0:*      LISTEN      25267/python
  unix  3      [ ]     STREAM     CONNECTED     44848565     25267/python
 

  Which is rather more useful.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1179709/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281351] Re: tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern volume status not available

2014-02-17 Thread John Griffith
I had decided that tonight was the night I was going to fix this on the
Cinder side, but alas I'm stuck.

The problem here is that we run into the odd case with nova booting an
instance from a volume, compute API starts up the process, grabs the
volume and makes the attach (so now the volume status is in-use).
Then while it's booting something goes very wrong in nova (as can be
seen by the multitudes of traces in the n-api logs) so we punch out and
call tearDown.

The problem is we never cleaned up the volume status, so it's still
listed as "in-use", but the tearDown is a neat little loop on
"thing.delete", so it's not accounting for things like removing
attachments etc.

I was thinking of doCleanup but realized that won't work because it's
called after tearDown, which is what fails.  So the only thing I can
think of that's practical as a hack is to add an "if isinstance Volume"
check in the delete loop and change the state using the admin API to
"available" so that tearDown can do its thing.
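
Something along these lines is what I have in mind; a rough, hypothetical
sketch only (the resource and client names are illustrative, not the actual
tempest code):

    # Hypothetical sketch of the hack described above: before the generic
    # delete loop tears things down, force any volume still marked 'in-use'
    # back to 'available' with the admin-only reset-status call so the
    # subsequent delete can succeed.
    def cleanup_resources(resources, admin_volumes_client):
        for thing in resources:
            status = getattr(thing, 'status', None)
            # the real check would use tempest's Volume resource class
            if thing.__class__.__name__ == 'Volume' and status == 'in-use':
                admin_volumes_client.reset_volume_status(thing.id,
                                                         'available')
            thing.delete()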

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1281351

Title:
  tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern volume
  status not available

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  New

Bug description:
  During a run of check-tempest-dsvm-postgres-full

  
  2014-02-17 22:04:31.036 | 2014-02-17 22:03:41,733 Waiting for Server: 
scenario-server--1813410115 to get to NotFound status. Currently in ACTIVE 
status
  2014-02-17 22:04:31.036 | 2014-02-17 22:03:41,733 Sleeping for 1 seconds
  2014-02-17 22:04:31.036 | 2014-02-17 22:03:42,734 
  2014-02-17 22:04:31.037 | REQ: curl -i 
'http://127.0.0.1:8774/v2/3437c74f89904598a189851959e53779/servers/72361b22-1ea4-49ce-be61-003467145fe5'
 -X GET -H X-Auth-Project-Id: TestVolumeBootPattern-1661167687 -H 
User-Agent: python-novaclient -H Accept: application/json -H X-Auth-Token: 
MIISvQYJKoZIhvcNAQcCoIISrjCCEqoCAQExCTAHBgUrDgMCGjCCERMGCSqGSIb3DQEHAaCCEQQEghEAeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wMi0xN1QyMjowMzowOC4yMjQyMTgiLCAiZXhwaXJlcyI6ICIyMDE0LTAyLTE3VDIzOjAzOjA4WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIlRlc3RWb2x1bWVCb290UGF0dGVybi0xNjYxMTY3Njg3LWRlc2MiLCAiZW5hYmxlZCI6IHRydWUsICJpZCI6ICIzNDM3Yzc0Zjg5OTA0NTk4YTE4OTg1MTk1OWU1Mzc3OSIsICJuYW1lIjogIlRlc3RWb2x1bWVCb290UGF0dGVybi0xNjYxMTY3Njg3In19LCAic2VydmljZUNhdGFsb2ciOiBbeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjIvMzQzN2M3NGY4OTkwNDU5OGExODk4NTE5NTllNTM3NzkiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjIvMzQzN
 
2M3NGY4OTkwNDU5OGExODk4NTE5NTllNTM3NzkiLCAiaWQiOiAiZDg2ZWYzMjI1M2U0NDBkYTk0YjMxMmQxNjdkNTE3NTAiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3NC92Mi8zNDM3Yzc0Zjg5OTA0NTk4YTE4OTg1MTk1OWU1Mzc3OSJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJjb21wdXRlIiwgIm5hbWUiOiAibm92YSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3Ni92Mi8zNDM3Yzc0Zjg5OTA0NTk4YTE4OTg1MTk1OWU1Mzc3OSIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3Ni92Mi8zNDM3Yzc0Zjg5OTA0NTk4YTE4OTg1MTk1OWU1Mzc3OSIsICJpZCI6ICJkYWIxMWRhMGY0MWQ0OGRlYWQwNGE1YjQ0OWJiOTdiMCIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc2L3YyLzM0MzdjNzRmODk5MDQ1OThhMTg5ODUxOTU5ZTUzNzc5In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZXYyIiwgIm5hbWUiOiAiY2luZGVydjIifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjMiLCAiaWQiOiAiYzY2ZTU2ZjhlODE1NDZiNGE2ZTgzNWRkZjhkNTY0NTkiLCAicHVibG
 
ljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3NC92MyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJjb21wdXRldjMiLCAibmFtZSI6ICJub3ZhdjMifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjMzMzMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjMzMzMiLCAiaWQiOiAiZDUxMTgwMDM2YWQ3NGYzZGEyZWVmZDJmM2M0MmUzZWYiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6MzMzMyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJzMyIsICJuYW1lIjogInMzIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo5MjkyIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo5MjkyIiwgImlkIjogImNkN2E2NGIyNDA2OTRkZTM4Y2FmMWE0NGQ5OTE2MGI1IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjkyOTIifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiaW1hZ2UiLCAibmFtZSI6ICJnbGFuY2UifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzcvIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc3LyIsICJpZCI6ICJjMGEyN2Q4ZTQ4NDc0OWE
 

[Yahoo-eng-team] [Bug 1281351] Re: tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern volume status not available

2014-02-17 Thread John Griffith
** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1281351

Title:
  tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern volume
  status not available

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  In Progress

Bug description:
  During a run of check-tempest-dsvm-postgres-full

  
  2014-02-17 22:04:31.036 | 2014-02-17 22:03:41,733 Waiting for Server: 
scenario-server--1813410115 to get to NotFound status. Currently in ACTIVE 
status
  2014-02-17 22:04:31.036 | 2014-02-17 22:03:41,733 Sleeping for 1 seconds
  2014-02-17 22:04:31.036 | 2014-02-17 22:03:42,734 
  2014-02-17 22:04:31.037 | REQ: curl -i 
'http://127.0.0.1:8774/v2/3437c74f89904598a189851959e53779/servers/72361b22-1ea4-49ce-be61-003467145fe5'
 -X GET -H X-Auth-Project-Id: TestVolumeBootPattern-1661167687 -H 
User-Agent: python-novaclient -H Accept: application/json -H X-Auth-Token: 
MIISvQYJKoZIhvcNAQcCoIISrjCCEqoCAQExCTAHBgUrDgMCGjCCERMGCSqGSIb3DQEHAaCCEQQEghEAeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wMi0xN1QyMjowMzowOC4yMjQyMTgiLCAiZXhwaXJlcyI6ICIyMDE0LTAyLTE3VDIzOjAzOjA4WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIlRlc3RWb2x1bWVCb290UGF0dGVybi0xNjYxMTY3Njg3LWRlc2MiLCAiZW5hYmxlZCI6IHRydWUsICJpZCI6ICIzNDM3Yzc0Zjg5OTA0NTk4YTE4OTg1MTk1OWU1Mzc3OSIsICJuYW1lIjogIlRlc3RWb2x1bWVCb290UGF0dGVybi0xNjYxMTY3Njg3In19LCAic2VydmljZUNhdGFsb2ciOiBbeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjIvMzQzN2M3NGY4OTkwNDU5OGExODk4NTE5NTllNTM3NzkiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjIvMzQzN
 
2M3NGY4OTkwNDU5OGExODk4NTE5NTllNTM3NzkiLCAiaWQiOiAiZDg2ZWYzMjI1M2U0NDBkYTk0YjMxMmQxNjdkNTE3NTAiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3NC92Mi8zNDM3Yzc0Zjg5OTA0NTk4YTE4OTg1MTk1OWU1Mzc3OSJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJjb21wdXRlIiwgIm5hbWUiOiAibm92YSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3Ni92Mi8zNDM3Yzc0Zjg5OTA0NTk4YTE4OTg1MTk1OWU1Mzc3OSIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3Ni92Mi8zNDM3Yzc0Zjg5OTA0NTk4YTE4OTg1MTk1OWU1Mzc3OSIsICJpZCI6ICJkYWIxMWRhMGY0MWQ0OGRlYWQwNGE1YjQ0OWJiOTdiMCIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc2L3YyLzM0MzdjNzRmODk5MDQ1OThhMTg5ODUxOTU5ZTUzNzc5In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZXYyIiwgIm5hbWUiOiAiY2luZGVydjIifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjMiLCAiaWQiOiAiYzY2ZTU2ZjhlODE1NDZiNGE2ZTgzNWRkZjhkNTY0NTkiLCAicHVibG
 
ljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3NC92MyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJjb21wdXRldjMiLCAibmFtZSI6ICJub3ZhdjMifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjMzMzMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjMzMzMiLCAiaWQiOiAiZDUxMTgwMDM2YWQ3NGYzZGEyZWVmZDJmM2M0MmUzZWYiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6MzMzMyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJzMyIsICJuYW1lIjogInMzIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo5MjkyIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo5MjkyIiwgImlkIjogImNkN2E2NGIyNDA2OTRkZTM4Y2FmMWE0NGQ5OTE2MGI1IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjkyOTIifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiaW1hZ2UiLCAibmFtZSI6ICJnbGFuY2UifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzcvIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc3LyIsICJpZCI6ICJjMGEyN2Q4ZTQ4NDc0OWE
 
2Yjk1NTI3YWU5MTU0NWI1YiIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc3LyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJtZXRlcmluZyIsICJuYW1lIjogImNlaWxvbWV0ZXIifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjgwMDAvdjEiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjgwMDAvdjEiLCAiaWQiOiAiYTAyMDYxNmQ3YzFiNDk2ZjkwMzAyMGM2OGVlMDU2ZDQiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODAwMC92MSJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJjbG91ZGZvcm1hdGlvbiIsICJuYW1lIjogImhlYXQtY2ZuIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc2L3YxLzM0MzdjNzRmODk5MDQ1OThhMTg5ODUxOTU5ZTUzNzc5IiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc2L3YxLzM0MzdjNzRmODk5MDQ1OThhMTg5ODUxOTU5ZTUzNzc5IiwgImlkIjogIjUzMmU2ZWQ4ZGEyMTQwMzE4N2Y0NmRjMDNmN2M4ODQ4IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzYvdjEvMzQzN2M3NGY4OTkwNDU5OGExODk4NTE5NTllNTM3NzkifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAidm9sdW1l
 

[Yahoo-eng-team] [Bug 1280072] [NEW] FAIL: tempest.api.compute.admin.test_aggregates.AggregatesAdminTestJSON.test_aggregate_add_host_create_server_with_az

2014-02-13 Thread John Griffith
Public bug reported:

Started seeing a rash of gate failures in all devstack tests for this
today.  Looks like others have been logging this against bug #1254890,
but that doesn't seem accurate, or at least not detailed enough.

Here's an example of the failure being seen:
http://logs.openstack.org/74/73474/1/check/check-tempest-dsvm-full/be41408/console.html

We continue to fall apart after the first error here.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: gate-failure

** Tags added: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280072

Title:
  FAIL:
  
tempest.api.compute.admin.test_aggregates.AggregatesAdminTestJSON.test_aggregate_add_host_create_server_with_az

Status in OpenStack Compute (Nova):
  New

Bug description:
  Started seeing a rash of gate failures in all devstack tests for this
  today.  Looks like others have been logging this against bug #1254890,
  but that doesn't seem accurate, or at least not detailed enough.

  Here's an example of the failure being seen:
  
http://logs.openstack.org/74/73474/1/check/check-tempest-dsvm-full/be41408/console.html

  We continue to fall apart after the first error here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1280072/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270608] Re: n-cpu 'iSCSI device not found' log causes gate-tempest-dsvm-*-full to fail

2014-01-28 Thread John Griffith
Turns out this does appear to be a side effect of commit 
e2e0ed80799c1ba04b37278996a171fc74b6f9eb, which 
seems to be the root of the problem.  It appears that the initialize is in 
some cases doing a delete of 
the targets.

** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
   Status: New => Confirmed

** Changed in: cinder
   Importance: Undecided => Critical

** Changed in: cinder
 Assignee: (unassigned) => John Griffith (john-griffith)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270608

Title:
  n-cpu 'iSCSI device not found' log causes gate-tempest-dsvm-*-full to
  fail

Status in Cinder:
  Confirmed
Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  Changes are failing the gate-tempest-*-full gate due to an error message in 
the logs.
  The error message is like

  2014-01-18 20:13:19.437 | Log File: n-cpu
  2014-01-18 20:13:20.482 | 2014-01-18 20:04:05.189 ERROR nova.compute.manager 
[req-25a1842c-ce9a-4035-8975-651f6ee5ddfc 
tempest.scenario.manager-tempest-1060379467-user 
tempest.scenario.manager-tempest-1060379467-tenant] [instance: 
0b1c1b55-b520-4ff2-bac2-8457ba3f4b6a] Error: iSCSI device not found at 
/dev/disk/by-path/ip-127.0.0.1:3260-iscsi-iqn.2010-10.org.openstack:volume-a6e86002-dc25-4782-943b-58cc0c68238d-lun-1

  Here's logstash for the query:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJmaWxlbmFtZTpcImxvZ3Mvc2NyZWVuLW4tY3B1LnR4dFwiIEFORCBtZXNzYWdlOlwiRXJyb3I6IGlTQ1NJIGRldmljZSBub3QgZm91bmQgYXQgL2Rldi9kaXNrL2J5LXBhdGgvaXAtMTI3LjAuMC4xOjMyNjAtaXNjc2ktaXFuLjIwMTAtMTAub3JnLm9wZW5zdGFjazp2b2x1bWUtXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTAxNTA4NTU5NTJ9

  shows several failures starting at 2014-01-17T14:00:00

  Maybe tempest is doing something that generates the ERROR message and then 
isn't accepting the error message it should?
  Or nova is logging an error message when it shouldn't?

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1270608/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270608] Re: n-cpu 'iSCSI device not found' log causes gate-tempest-dsvm-*-full to fail

2014-01-27 Thread John Griffith
Addressed by: https://review.openstack.org/#/c/69443/

** Changed in: cinder
   Status: New => Fix Committed

** Project changed: cinder => nova-project

** Project changed: nova-project => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270608

Title:
  n-cpu 'iSCSI device not found' log causes gate-tempest-dsvm-*-full to
  fail

Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  Changes are failing the gate-tempest-*-full gate due to an error message in 
the logs.
  The error message is like

  2014-01-18 20:13:19.437 | Log File: n-cpu
  2014-01-18 20:13:20.482 | 2014-01-18 20:04:05.189 ERROR nova.compute.manager 
[req-25a1842c-ce9a-4035-8975-651f6ee5ddfc 
tempest.scenario.manager-tempest-1060379467-user 
tempest.scenario.manager-tempest-1060379467-tenant] [instance: 
0b1c1b55-b520-4ff2-bac2-8457ba3f4b6a] Error: iSCSI device not found at 
/dev/disk/by-path/ip-127.0.0.1:3260-iscsi-iqn.2010-10.org.openstack:volume-a6e86002-dc25-4782-943b-58cc0c68238d-lun-1

  Here's logstash for the query:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJmaWxlbmFtZTpcImxvZ3Mvc2NyZWVuLW4tY3B1LnR4dFwiIEFORCBtZXNzYWdlOlwiRXJyb3I6IGlTQ1NJIGRldmljZSBub3QgZm91bmQgYXQgL2Rldi9kaXNrL2J5LXBhdGgvaXAtMTI3LjAuMC4xOjMyNjAtaXNjc2ktaXFuLjIwMTAtMTAub3JnLm9wZW5zdGFjazp2b2x1bWUtXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTAxNTA4NTU5NTJ9

  shows several failures starting at 2014-01-17T14:00:00

  Maybe tempest is doing something that generates the ERROR message and then 
isn't accepting the error message it should?
  Or nova is logging an error message when it shouldn't?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270608/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273292] Re: Timed out waiting for thing ... to become in-use causes tempest-dsvm-* failures

2014-01-27 Thread John Griffith
*** This bug is a duplicate of bug 1270608 ***
https://bugs.launchpad.net/bugs/1270608

I believe this is a duplicate of:
https://bugs.launchpad.net/nova/+bug/1270608

Checking here in this instance of the failure:
http://logs.openstack.org/36/69236/2/check/check-tempest-dsvm-
full/8820082/logs/screen-n-cpu.txt.gz#_2014-01-26_22_03_48_841

You can see nova timed out after 15 seconds waiting for the iscsi mount
to complete.  Given the VERY heavy load caused by this test I think it
fits with the theory that these ops are horribly slow under heavy load.

I'm marking this as a duplicate, it's the same root cause regardless of
whether it ends up that waiting longer helps us or not.

** No longer affects: cinder

** Changed in: nova
 Assignee: (unassigned) => John Griffith (john-griffith)

** This bug has been marked a duplicate of bug 1270608
   n-cpu 'iSCSI device not found' log causes gate-tempest-dsvm-*-full to fail

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273292

Title:
  Timed out waiting for thing ... to become in-use causes tempest-
  dsvm-* failures

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  This is a spin-off of bug 1254890.  That bug was originally covering
  failures for both timing out waiting for an instance to become ACTIVE,
  as well as waiting for a volume to become in-use or available.

  It seems valuable to split out the cases of waiting for volumes to
  become in-use or available into its own bug.

  message:"Details: Timed out waiting for thing" AND message:"to become"
  AND (message:"in-use" OR message:"available")

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRGV0YWlsczogVGltZWQgb3V0IHdhaXRpbmcgZm9yIHRoaW5nXCIgQU5EIG1lc3NhZ2U6XCJ0byBiZWNvbWVcIiBBTkQgKG1lc3NhZ2U6XCJpbi11c2VcIiBPUiBtZXNzYWdlOlwiYXZhaWxhYmxlXCIpIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzkwODQwODI1MDkxfQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1273292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1272456] [NEW] Instance fails network setup TRACE in tempest tests

2014-01-24 Thread John Griffith
Public bug reported:

Gate test tempest-dsvm-large-ops fails due to failure setting up network
on instance.


http://logs.openstack.org/26/68726/1/check/gate-tempest-dsvm-large-ops/69a94b4/

Relevant Trace in n-cpu logs are here:
http://logs.openstack.org/26/68726/1/check/gate-tempest-dsvm-large-
ops/69a94b4/logs/screen-n-cpu.txt.gz#_2014-01-23_19_36_58_565

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: gate-failure

** Tags added: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1272456

Title:
  Instance fails network setup TRACE in tempest tests

Status in OpenStack Compute (Nova):
  New

Bug description:
  Gate test tempest-dsvm-large-ops fails due to failure setting up
  network on instance.

  
  
http://logs.openstack.org/26/68726/1/check/gate-tempest-dsvm-large-ops/69a94b4/

  Relevant Trace in n-cpu logs are here:
  http://logs.openstack.org/26/68726/1/check/gate-tempest-dsvm-large-
  ops/69a94b4/logs/screen-n-cpu.txt.gz#_2014-01-23_19_36_58_565

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1272456/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1272447] Re: Instances fail to boot properly with more than 5 cinder volumes attached

2014-01-24 Thread John Griffith
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1272447

Title:
  Instances fail to boot properly with more than 5 cinder volumes
  attached

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Instances will start, but will not boot into their respective
  operating systems with more than five cinder volumes attached to the
  instance. The instance sits at a "No bootable device" error.

  Openstack Version: OpenStack Havana (2013.2)

  uname -a:   Linux controller01 3.8.0-29-generic #42~precise1-Ubuntu
  SMP Wed Aug 14 16:19:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

  Log files have been attached, including cinder & nova logs from the
  controller and from the compute node where the instance resides. For
  reference, the instance name is:  'alex-test-volume' and volumes are
  named: 'alex-test-1' through 'alex-test-6'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1272447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1266962] Re: Remove set_time_override in timeutils

2014-01-07 Thread John Griffith
I'm not crazy about this approach of making changes throughout the
project; updating all of the projects and then removing the wrapper in
oslo, then updating the libs in all of the projects again is really
something that should not be a top priority.

I do, however, think that the usage should be allowed to fall off
naturally as other efforts are made to update to using mock.  Once that's
done we should eventually just find that this wrapper is no longer
needed and remove it from oslo at that time.
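
For what it's worth, the replacement pattern looks roughly like this (the
patched timeutils module path is an assumption, since each project carries
its own oslo-incubator copy):

    import datetime

    import mock

    # Instead of timeutils.set_time_override(...), freeze utcnow() with
    # mock for the duration of a single test.
    _FROZEN_NOW = datetime.datetime(2014, 1, 7, 12, 0, 0)

    @mock.patch('nova.openstack.common.timeutils.utcnow',
                return_value=_FROZEN_NOW)
    def test_something_time_dependent(mock_utcnow):
        # code under test that calls timeutils.utcnow() now sees _FROZEN_NOW
        pass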

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266962

Title:
  Remove set_time_override in timeutils

Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  Invalid
Status in Gantt:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Ironic (Bare Metal Provisioning):
  New
Status in OpenStack Identity (Keystone):
  New
Status in Manila:
  New
Status in OpenStack Message Queuing Service (Marconi):
  New
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New
Status in Messaging API for OpenStack:
  New
Status in Python client library for Keystone:
  New
Status in Python client library for Nova:
  New
Status in Tuskar:
  New

Bug description:
  set_time_override was written as a helper function to mock utcnow in
  unittests.

  However we now use mock or fixture to mock our objects so
  set_time_override has become obsolete.

  We should first remove all usage of set_time_override from downstream
  projects before deleting it from oslo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1266962/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1265740] Re: incorrect return from exception.InvalidInput()

2014-01-05 Thread John Griffith
don't see any Cinder info here

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1265740

Title:
  incorrect return from exception.InvalidInput()

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  When I create an instance and set min_count=-1, like this:

  openstack@openstack-001:~$ curl -i -H 
"X-Auth-Token:6db450ec28174970be674af55c644e23" -H 
"Content-Type:application/json" 
http://127.0.0.1:8774/v2/e7fdc71e46bd4945a57104f3899b1335/servers -d 
'{"server":{"name":"test","flavorRef":"42","imageRef":"2e33aff9-63b4-497c-9c1b-8fe4ee567cce","min_count":-1}}'
  HTTP/1.1 400 Bad Request
  Content-Length: 110
  Content-Type: application/json; charset=UTF-8
  X-Compute-Request-Id: req-b33a8f37-e4a4-42b3-8488-ca669e80911d
  Date: Fri, 03 Jan 2014 08:04:18 GMT

  {"badRequest": {"message":
  "\u6536\u5230\u65e0\u6548\u7684\u8f93\u5165: min_count must be >= 1",
  "code": 400}}

  The returned message is garbled (mojibake). I reviewed the code and found this:
    if min_value is not None:
        if value < min_value:
            msg = _('%(value_name)s must be >= %(min_value)d')
            raise exception.InvalidInput(
                reason=(msg % {'value_name': name,
                               'min_value': min_value}))

  exception.InvalidInput has no parameter named reason; the parameter is message.
  Replacing reason with message makes it work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1265740/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262176] Re: EC2: 'io1' type volume should also return IOPS associated with it

2014-01-05 Thread John Griffith
Making a compatible version for EBS here isn't a terrible idea, however
I hardly see this as a bug.  This is most definitely a feature request
IMO, and it has almost nothing to do with Cinder.  As per my comments in
the review:

If there's real value in emulating this, then I think this needs to include 
the work to provide the option on create as well as provide it in the get. 
However, there's only ONE option that I'm aware of in EBS today, so this should 
filter out anything that isn't that IO1 type.
In other words, if an admin creates a type with this name, fine... we can 
expose it and allow it to be selected for creation. But we shouldn't expose any 
of the other types and we most certainly should not require this type exist or 
be built by default.

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1262176

Title:
  EC2: 'io1' type volume should also return IOPS associated with it

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  This patch https://review.openstack.org/#/c/61041 exposes volume type
  in the EC2 API. Amazon's API documentation says that if the volume
  type is 'io1', that is, if the volume has guaranteed IOPS set for it,
  a 'DescribeVolumes' call should return the IOPS for such volumes. We
  need to add that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1262176/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1213215] Re: Volumes left in Error state prevent tear down of nova compute resource

2013-12-13 Thread John Griffith
The issue from the perspective of the Cinder delete is that the tempest
minimum scenario test doesn't bother to deal with things like failures in
its sequence.  What's happening here is that the ssh check raises a
timeout exception which is not handled and blows things up.  So we dump
out of the scenario test and try to do cleanup; that's great, but we left
the instance in its current state with a volume attached.

From the volume perspective, just catch the exception and do some proper
cleanup.  I'll put a patch up in tempest in a moment to at least
address that portion of it.
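
The tempest-side fix I have in mind is roughly this shape (names here are
illustrative, not the real scenario-test API):

    # Hypothetical sketch: don't let an SSH timeout leave the volume
    # attached.  Catch it, detach and wait for 'available' so the normal
    # tearDown delete loop can succeed, then re-raise the failure.
    def check_ssh_or_cleanup(self):
        try:
            self._ssh_to_server(self.server, self.keypair)
        except exceptions.SSHTimeout:
            self.volumes_client.detach_volume(self.volume['id'])
            self.volumes_client.wait_for_volume_status(self.volume['id'],
                                                       'available')
            raise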

** No longer affects: cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1213215

Title:
  Volumes left in Error state prevent tear down of nova compute resource

Status in OpenStack Compute (Nova):
  Confirmed
Status in Tempest:
  Confirmed

Bug description:
  Occasionally running tempest in parallel will fail several tests with
  timeout errors. The only nontimeout failure message is that the
  ServerRescueTest failed to delete a volume because it was still marked
  as in use. My guess is that the leftover volume is somehow interfering
  with the other tests causing them to timeout. But, I haven't looked at
  the logs in detail so it's just a wild guess.

  
  2013-08-16 14:11:42.074 | 
==
  2013-08-16 14:11:42.075 | FAIL: 
tempest.api.compute.servers.test_disk_config.ServerDiskConfigTestJSON.test_rebuild_server_with_auto_disk_config[gate]
  2013-08-16 14:11:42.075 | 
tempest.api.compute.servers.test_disk_config.ServerDiskConfigTestJSON.test_rebuild_server_with_auto_disk_config[gate]
  2013-08-16 14:11:42.075 | 
--
  2013-08-16 14:11:42.075 | _StringException: Empty attachments:
  2013-08-16 14:11:42.075 |   stderr
  2013-08-16 14:11:42.076 |   stdout
  2013-08-16 14:11:42.076 | 
  2013-08-16 14:11:42.076 | Traceback (most recent call last):
  2013-08-16 14:11:42.076 |   File 
tempest/api/compute/servers/test_disk_config.py, line 64, in 
test_rebuild_server_with_auto_disk_config
  2013-08-16 14:11:42.076 | wait_until='ACTIVE')
  2013-08-16 14:11:42.076 |   File tempest/api/compute/base.py, line 140, in 
create_server
  2013-08-16 14:11:42.076 | server['id'], kwargs['wait_until'])
  2013-08-16 14:11:42.077 |   File 
tempest/services/compute/json/servers_client.py, line 160, in 
wait_for_server_status
  2013-08-16 14:11:42.077 | time.sleep(self.build_interval)
  2013-08-16 14:11:42.077 |   File 
/usr/local/lib/python2.7/dist-packages/fixtures/_fixtures/timeout.py, line 
52, in signal_handler
  2013-08-16 14:11:42.077 | raise TimeoutException()
  2013-08-16 14:11:42.077 | TimeoutException
  2013-08-16 14:11:42.077 | 
  2013-08-16 14:11:42.077 | 
  2013-08-16 14:11:42.078 | 
==
  2013-08-16 14:11:42.078 | FAIL: setUpClass 
(tempest.api.compute.images.test_image_metadata.ImagesMetadataTestXML)
  2013-08-16 14:11:42.078 | setUpClass 
(tempest.api.compute.images.test_image_metadata.ImagesMetadataTestXML)
  2013-08-16 14:11:42.078 | 
--
  2013-08-16 14:11:42.078 | _StringException: Traceback (most recent call last):
  2013-08-16 14:11:42.078 |   File 
tempest/api/compute/images/test_image_metadata.py, line 46, in setUpClass
  2013-08-16 14:11:42.078 | cls.client.wait_for_image_status(cls.image_id, 
'ACTIVE')
  2013-08-16 14:11:42.079 |   File 
tempest/services/compute/xml/images_client.py, line 167, in 
wait_for_image_status
  2013-08-16 14:11:42.079 | raise exceptions.TimeoutException
  2013-08-16 14:11:42.079 | TimeoutException: Request timed out
  2013-08-16 14:11:42.079 | 
  2013-08-16 14:11:42.079 | 
  2013-08-16 14:11:42.079 | 
==
  2013-08-16 14:11:42.079 | FAIL: 
tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescued_vm_detach_volume[gate,negative]
  2013-08-16 14:11:42.080 | 
tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescued_vm_detach_volume[gate,negative]
  2013-08-16 14:11:42.080 | 
--
  2013-08-16 14:11:42.080 | _StringException: Empty attachments:
  2013-08-16 14:11:42.080 |   stderr
  2013-08-16 14:11:42.080 |   stdout
  2013-08-16 14:11:42.080 | 
  2013-08-16 14:11:42.081 | Traceback (most recent call last):
  2013-08-16 14:11:42.081 |   File 
tempest/api/compute/servers/test_server_rescue.py, line 184, in 
test_rescued_vm_detach_volume
  2013-08-16 14:11:42.081 | 
self.servers_client.wait_for_server_status(self.server_id, 'RESCUE')
  2013-08-16 14:11:42.081 |   File 
tempest/services/compute/json/servers_client.py, 

[Yahoo-eng-team] [Bug 1257411] [NEW] Intermittent boot instance failure, libvirt unable to read from monitor

2013-12-03 Thread John Griffith
Public bug reported:

devstack install using master on precise: intermittent failures when
trying to boot instances (cirros image, flavor 1).  Typically simply
running the command again will work.  n-cpu logs contain the following trace:

2013-12-03 11:11:01.124 DEBUG nova.compute.manager 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] Re-scheduling run_instance: attempt 1 
from (pid=30610) _reschedule /opt/stack/nova/nova/compute/man
ager.py:1167
2013-12-03 11:11:01.124 DEBUG nova.openstack.common.rpc.amqp 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] Making synchronous call on 
conductor ... from (pid=30610) multicall 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:553
2013-12-03 11:11:01.125 DEBUG nova.openstack.common.rpc.amqp 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] MSG_ID is 
7b83e1059204445ba23ed876943eea2d from (pid=30610) multicall 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:556
2013-12-03 11:11:01.125 DEBUG nova.openstack.common.rpc.amqp 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] UNIQUE_ID is 
67de22630ca94eee9f409ee8aeaece1c. from (pid=30610) _add_unique_id 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:341
2013-12-03 11:11:01.237 DEBUG nova.openstack.common.lockutils 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] Got semaphore 
compute_resources from (pid=30610) lock 
/opt/stack/nova/nova/openstack/common/lockutils.py:167
2013-12-03 11:11:01.237 DEBUG nova.openstack.common.lockutils 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] Got semaphore / lock 
update_usage from (pid=30610) inner 
/opt/stack/nova/nova/openstack/common/lockutils.py:247
2013-12-03 11:11:01.237 DEBUG nova.openstack.common.lockutils 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] Semaphore / lock released 
update_usage from (pid=30610) inner 
/opt/stack/nova/nova/openstack/common/lockutils.py:251
2013-12-03 11:11:01.239 DEBUG nova.openstack.common.rpc.amqp 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] Making asynchronous cast 
on scheduler... from (pid=30610) cast 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:582
2013-12-03 11:11:01.239 DEBUG nova.openstack.common.rpc.amqp 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] UNIQUE_ID is 
ce107999c53949fa8aef7d13586a3d5a. from (pid=30610) _add_unique_id 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:341
2013-12-03 11:11:01.247 ERROR nova.compute.manager 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] Error: Unable to read from monitor: 
Connection reset by peer
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] Traceback (most recent call last):
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96]   File 
/opt/stack/nova/nova/compute/manager.py, line 1049, in _build_instance
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] set_access_ip=set_access_ip)
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96]   File 
/opt/stack/nova/nova/compute/manager.py, line 1453, in _spawn
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96]   File 
/opt/stack/nova/nova/compute/manager.py, line 1450, in _spawn
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] block_device_info)
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 2161, in spawn
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] block_device_info)
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 3395, in 
_create_domain_and_network
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] domain = self._create_domain(xml, 
instance=instance, power_on=power_on)
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 3338, in _create_domain
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] domain.XMLDesc(0))
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line , in _create_domain
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] 

[Yahoo-eng-team] [Bug 1257420] [NEW] boot instance fails, libvirt unable to allocate memory

2013-12-03 Thread John Griffith
Public bug reported:

Intermittent failures trying to boot an instance using devstack/master
on precise VM.  In most cases deleting the failed instance and retrying
the boot command seems to work.

2013-12-03 11:28:24.514 DEBUG nova.compute.manager 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] Re-scheduling run_instance: attempt 1 
from (pid=5873) _reschedule /opt/stack/nova/nova/compute/mana
ger.py:1167
2013-12-03 11:28:24.514 DEBUG nova.openstack.common.rpc.amqp 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] Making synchronous call on 
conductor ... from (pid=5873) multicall 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:553
2013-12-03 11:28:24.514 DEBUG nova.openstack.common.rpc.amqp 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] MSG_ID is 
ea9adfa2f6564cd193d6baec7bf7f8a3 from (pid=5873) multicall 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:556
2013-12-03 11:28:24.515 DEBUG nova.openstack.common.rpc.amqp 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] UNIQUE_ID is 
33300a17273f4529bd36156c4406ada3. from (pid=5873) _add_unique_id 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:341
2013-12-03 11:28:24.627 DEBUG nova.openstack.common.lockutils 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] Got semaphore 
compute_resources from (pid=5873) lock 
/opt/stack/nova/nova/openstack/common/lockutils.py:167
2013-12-03 11:28:24.628 DEBUG nova.openstack.common.lockutils 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] Got semaphore / lock 
update_usage from (pid=5873) inner 
/opt/stack/nova/nova/openstack/common/lockutils.py:247
2013-12-03 11:28:24.628 DEBUG nova.openstack.common.lockutils 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] Semaphore / lock released 
update_usage from (pid=5873) inner 
/opt/stack/nova/nova/openstack/common/lockutils.py:251
2013-12-03 11:28:24.630 DEBUG nova.openstack.common.rpc.amqp 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] Making asynchronous cast 
on scheduler... from (pid=5873) cast 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:582
2013-12-03 11:28:24.630 DEBUG nova.openstack.common.rpc.amqp 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] UNIQUE_ID is 
501ebe16dd814daaa37c648f8f9848df. from (pid=5873) _add_unique_id 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:341
2013-12-03 11:28:24.642 ERROR nova.compute.manager 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] Error: internal error process exited 
while connecting to monitor: char device redirected to /dev/pt
s/30
Failed to allocate 536870912 B: Cannot allocate memory

2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] Traceback (most recent call last):
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236]   File 
/opt/stack/nova/nova/compute/manager.py, line 1049, in _build_instance
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] set_access_ip=set_access_ip)
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236]   File 
/opt/stack/nova/nova/compute/manager.py, line 1453, in _spawn
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236]   File 
/opt/stack/nova/nova/compute/manager.py, line 1450, in _spawn
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] block_device_info)
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 2161, in spawn
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] block_device_info)
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 3395, in 
_create_domain_and_network
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] domain = self._create_domain(xml, 
instance=instance, power_on=power_on)
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 3338, in _create_domain
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] domain.XMLDesc(0))
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line , in _create_domain
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1231887] Re: Create a volume from image fails

2013-11-26 Thread John Griffith
** Changed in: cinder
   Status: Confirmed => Invalid

** Changed in: cinder
Milestone: icehouse-1 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1231887

Title:
  Create a  volume from image fails

Status in Cinder:
  Invalid
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When trying to create a volume from an image, the creation is started,
  but stays in state downloading

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1231887/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1225664] Re: tempest.api.volume.test_volumes_actions.VolumesActionsTestXML flakey failure

2013-11-03 Thread John Griffith
Root cause of this appears to be in the Glance->Swift interaction.

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1225664

Title:
  tempest.api.volume.test_volumes_actions.VolumesActionsTestXML flakey
  failure

Status in Cinder:
  Invalid
Status in devstack - openstack dev environments:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Object Storage (Swift):
  New

Bug description:
  Intermittent failures with
  tempest.api.volume.test_volumes_actions.VolumesActionsTestXML:

  2013-09-15 12:19:50,370 Glance request id 
req-e344e633-818b-4e74-a153-dc5495300cbe
  2013-09-15 12:19:52,372 Request: HEAD 
http://127.0.0.1:9292/v1/images/5e4d189e-8de7-428a-923f-1c1090745bf9
  2013-09-15 12:19:52,400 Response Status: 200
  2013-09-15 12:19:52,400 Glance request id 
req-c6b0838c-6c4a-44ee-a23b-7a251d6a75db
  2013-09-15 12:19:54,403 Request: HEAD 
http://127.0.0.1:9292/v1/images/5e4d189e-8de7-428a-923f-1c1090745bf9
  2013-09-15 12:19:54,428 Response Status: 200
  2013-09-15 12:19:54,428 Glance request id 
req-46580425-9cc0-46fc-8719-86b103604d60
  2013-09-15 12:19:56,430 Request: HEAD 
http://127.0.0.1:9292/v1/images/5e4d189e-8de7-428a-923f-1c1090745bf9
  2013-09-15 12:19:56,458 Response Status: 200
  2013-09-15 12:19:56,458 Glance request id 
req-de198fae-8eb6-4217-a757-847e188f187c
  2013-09-15 12:19:58,460 Request: HEAD 
http://127.0.0.1:9292/v1/images/5e4d189e-8de7-428a-923f-1c1090745bf9
  2013-09-15 12:19:58,485 Response Status: 200
  2013-09-15 12:19:58,485 Glance request id 
req-5797d453-f722-47ab-a6e5-662694636761
  2013-09-15 12:19:59,490 Request: DELETE 
http://127.0.0.1:9292/v1/images/5e4d189e-8de7-428a-923f-1c1090745bf9
  2013-09-15 12:19:59,560 Response Status: 200
  2013-09-15 12:19:59,560 Glance request id 
req-7f160544-13d4-465d-a0bf-bc969a8021b5
  }}}

  Traceback (most recent call last):
File tempest/api/volume/test_volumes_actions.py, line 112, in 
test_volume_upload
  self.image_client.wait_for_image_status(image_id, 'active')
File tempest/services/image/v1/json/image_client.py, line 276, in 
wait_for_image_status
  raise exceptions.TimeoutException(message)
  TimeoutException: Request timed out
  Details: Time Limit Exceeded! (400s)while waiting for active, but we got 
killed.

  Full logs here: http://logs.openstack.org/23/42523/4/check/gate-
  tempest-devstack-vm-full/8352d30/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1225664/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244333] [NEW] server build error in tempest neutron gate

2013-10-24 Thread John Griffith
Public bug reported:

http://logs.openstack.org/40/53440/2/check/check-tempest-devstack-vm-
neutron/6ca7666/console.html

** Affects: nova
 Importance: Undecided
 Status: Invalid

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244333

Title:
  server build error in tempest neutron gate

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  http://logs.openstack.org/40/53440/2/check/check-tempest-devstack-vm-
  neutron/6ca7666/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1244333/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1226337] Re: tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern flake failure

2013-10-01 Thread John Griffith
Believe this can be marked invalid for Nova.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1226337

Title:
  tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern flake
  failure

Status in Cinder:
  Fix Committed
Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Triaged

Bug description:
  When running tempest
  
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
  fails with the server going into an ERROR state.

  From the console log:

  2013-09-24 04:16:36.948 | Traceback (most recent call last):
  2013-09-24 04:16:36.948 | File 
tempest/scenario/test_volume_boot_pattern.py, line 154, in 
test_volume_boot_pattern
  2013-09-24 04:16:36.948 | keypair)
  2013-09-24 04:16:36.948 | File 
tempest/scenario/test_volume_boot_pattern.py, line 53, in 
_boot_instance_from_volume
  2013-09-24 04:16:36.948 | create_kwargs=create_kwargs)
  2013-09-24 04:16:36.948 | File tempest/scenario/manager.py, line 390, in 
create_server
  2013-09-24 04:16:36.949 | self.status_timeout(client.servers, server.id, 
'ACTIVE')
  2013-09-24 04:16:36.949 | File tempest/scenario/manager.py, line 290, in 
status_timeout
  2013-09-24 04:16:36.949 | self._status_timeout(things, thing_id, 
expected_status=expected_status)
  2013-09-24 04:16:36.949 | File tempest/scenario/manager.py, line 338, in 
_status_timeout
  2013-09-24 04:16:36.949 | self.config.compute.build_interval):
  2013-09-24 04:16:36.949 | File tempest/test.py, line 237, in call_until_true
  2013-09-24 04:16:36.950 | if func():
  2013-09-24 04:16:36.950 | File tempest/scenario/manager.py, line 329, in 
check_status
  2013-09-24 04:16:36.950 | raise exceptions.BuildErrorException(message)
  2013-09-24 04:16:36.950 | BuildErrorException: Server %(server_id)s failed to 
build and is in ERROR status
  2013-09-24 04:16:36.950 | Details: Server: scenario-server-89179012 failed 
to get to expected status. In ERROR state.

  The exception:
  
http://logs.openstack.org/64/47264/2/gate/gate-tempest-devstack-vm-full/dced339/logs/screen-n-cpu.txt.gz#_2013-09-24_04_44_31_806

  Logs are located here:
  
http://logs.openstack.org/64/47264/2/gate/gate-tempest-devstack-vm-full/dced339

  -

  Originally the failure was (before some changes to timeouts in
  tempest):

  t178.1: 
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern[compute,image,volume]_StringException:
 Empty attachments:
    stderr
    stdout

  pythonlogging:'': {{{
  2013-09-16 15:59:44,214 Starting new HTTP connection (1): 127.0.0.1
  2013-09-16 15:59:44,417 Starting new HTTP connection (1): 127.0.0.1
  2013-09-16 15:59:45,348 Starting new HTTP connection (1): 127.0.0.1
  2013-09-16 15:59:45,495 Starting new HTTP connection (1): 127.0.0.1
  2013-09-16 15:59:47,644 Starting new HTTP connection (1): 127.0.0.1
  2013-09-16 15:59:48,762 Starting new HTTP connection (1): 127.0.0.1
  2013-09-16 15:59:49,879 Starting new HTTP connection (1): 127.0.0.1
  2013-09-16 15:59:50,980 Starting new HTTP connection (1): 127.0.0.1
  2013-09-16 16:00:52,581 Connected (version 2.0, client dropbear_2012.55)
  2013-09-16 16:00:52,897 Authentication (publickey) successful!
  2013-09-16 16:00:53,105 Connected (version 2.0, client dropbear_2012.55)
  2013-09-16 16:00:53,428 Authentication (publickey) successful!
  2013-09-16 16:00:53,431 Secsh channel 1 opened.
  2013-09-16 16:00:53,607 Connected (version 2.0, client dropbear_2012.55)
  2013-09-16 16:00:53,875 Authentication (publickey) successful!
  2013-09-16 16:00:53,880 Secsh channel 1 opened.
  2013-09-16 16:01:58,999 Connected (version 2.0, client dropbear_2012.55)
  2013-09-16 16:01:59,288 Authentication (publickey) successful!
  2013-09-16 16:01:59,457 Connected (version 2.0, client dropbear_2012.55)
  2013-09-16 16:01:59,784 Authentication (publickey) successful!
  2013-09-16 16:01:59,801 Secsh channel 1 opened.
  2013-09-16 16:02:00,005 Starting new HTTP connection (1): 127.0.0.1
  2013-09-16 16:02:00,080 Starting new HTTP connection (1): 127.0.0.1
  2013-09-16 16:02:01,127 Starting new HTTP connection (1): 127.0.0.1
  2013-09-16 16:02:01,192 Starting new HTTP connection (1): 127.0.0.1
  2013-09-16 16:02:01,414 Starting new HTTP connection (1): 127.0.0.1
  2013-09-16 16:02:02,494 Starting new HTTP connection (1): 127.0.0.1
  2013-09-16 16:02:03,615 Starting new HTTP connection (1): 127.0.0.1
  2013-09-16 16:02:04,724 Starting new HTTP connection (1): 127.0.0.1
  2013-09-16 16:02:05,825 Starting new HTTP connection (1): 127.0.0.1
  }}}

  Traceback (most recent call last):
    File tempest/scenario/test_volume_boot_pattern.py, line 157, in 
test_volume_boot_pattern
  ssh_client = self._ssh_to_server(instance_from_snapshot, keypair)
    File tempest/scenario/test_volume_boot_pattern.py, line 

[Yahoo-eng-team] [Bug 1014689] Re: Create Volume snapshot force parameter is not validated

2013-08-05 Thread John Griffith
So yes, the client should only accept a bare --force flag, which would set
a real boolean True.

Breaking compat is an issue here though.

That being said, the issue of not handling garbage input is addressed in
the cinder API now via bool_from_str(), so that if garbage is passed in
it will give an Invalid parameter exception which is correct.  I have no
problem with doing a check earlier in the client (so long as it's the
same as what we implemented in the API [bool_from_str()]).
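
For illustration only, a minimal sketch of the kind of strict boolean
parsing described above; the helper name and the accepted spellings are
assumptions, not the actual cinder code:

    def bool_from_str(value):
        # Accept real booleans and a few conventional string spellings;
        # anything else (e.g. '~!@#$%^*()_+') raises and becomes a 400.
        if isinstance(value, bool):
            return value
        if isinstance(value, str):
            lowered = value.strip().lower()
            if lowered in ('true', 't', 'yes', 'y', '1'):
                return True
            if lowered in ('false', 'f', 'no', 'n', '0'):
                return False
        raise ValueError("Unrecognized boolean value: %r" % value)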

When we're ready to version bump again I would like to make the --force
arguments better but until then I think we need to leave it as is and
just fix the issues.  Marking as invalid for cinderclient for that
reason.

** Changed in: python-cinderclient
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1014689

Title:
  Create Volume snapshot force parameter is not validated

Status in Cinder:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Cinder:
  Invalid

Bug description:
  Description:

  Create Volume snapshot with invalid Force value is returning 200 ok
  instead of raising Bad Request.

  
  Expected Result:

  Should return error code 400. (raise Bad Request)

  Actual Result:

  Is not raising exception. Returning 200 ok.

  LOG:
  ---

  --

  rajalakshmi_ganesan@pshys0183~tests:-)nova --debug volume-snapshot-create 22 
--force '~!@#$%^*()_+'
  connect: (10.233.53.165, 8774)
  send: 'GET /v1.1/ HTTP/1.1\r\nHost: 10.233.53.165:8774\r\nx-auth-project-id: 
admin\r\nx-auth-key: testuser\r\naccept-encoding: gzip, deflate\r\nx-auth-user: 
admin\r\nuser-agent: python-novaclient\r\n\r\n'
  reply: 'HTTP/1.1 204 No Content\r\n'
  header: Content-Length: 0
  header: X-Auth-Token: admin:admin
  header: X-Server-Management-Url: http://10.233.53.165:8774/v1.1/admin
  header: Content-Type: text/plain; charset=UTF-8
  header: Date: Mon, 18 Jun 2012 19:52:37 GMT
  send: 'POST /v1.1/admin/os-snapshots HTTP/1.1\r\nHost: 
10.233.53.165:8774\r\nContent-Length: 108\r\nx-auth-project-id: 
admin\r\nx-auth-token: admin:admin\r\ncontent-type: 
application/json\r\naccept-encoding: gzip, deflate\r\nuser-agent: 
python-novaclient\r\n\r\n{snapshot: {display_name: null, force: 
~!@#$%^*()_+, display_description: null, volume_id: 22}}'
  reply: 'HTTP/1.1 200 OK\r\n'
  header: X-Compute-Request-Id: req-bc28a045-4308-4c65-819d-2870ebf45adc
  header: Content-Type: application/json
  header: Content-Length: 165
  header: Date: Mon, 18 Jun 2012 19:52:37 GMT
  rajalakshmi_ganesan@pshys0183~tests:-)nova --debug volume-snapshot-create 22 
--force 'alphabet1234567890-='
  connect: (10.233.53.165, 8774)
  send: 'GET /v1.1/ HTTP/1.1\r\nHost: 10.233.53.165:8774\r\nx-auth-project-id: 
admin\r\nx-auth-key: testuser\r\naccept-encoding: gzip, deflate\r\nx-auth-user: 
admin\r\nuser-agent: python-novaclient\r\n\r\n'
  reply: 'HTTP/1.1 204 No Content\r\n'
  header: Content-Length: 0
  header: X-Auth-Token: admin:admin
  header: X-Server-Management-Url: http://10.233.53.165:8774/v1.1/admin
  header: Content-Type: text/plain; charset=UTF-8
  header: Date: Mon, 18 Jun 2012 19:53:10 GMT
  send: 'POST /v1.1/admin/os-snapshots HTTP/1.1\r\nHost: 
10.233.53.165:8774\r\nContent-Length: 115\r\nx-auth-project-id: 
admin\r\nx-auth-token: admin:admin\r\ncontent-type: 
application/json\r\naccept-encoding: gzip, deflate\r\nuser-agent: 
python-novaclient\r\n\r\n{snapshot: {display_name: null, force: 
alphabet1234567890-=, display_description: null, volume_id: 22}}'
  reply: 'HTTP/1.1 200 OK\r\n'
  header: X-Compute-Request-Id: req-a004c5de-c53f-4155-8a9e-ebeab21fa63a
  header: Content-Type: application/json
  header: Content-Length: 165
  header: Date: Mon, 18 Jun 2012 19:53:10 GMT

  rajalakshmi_ganesan@pshys0183~tests:-( nova --debug volume-snapshot-create 
22 --force ''
  connect: (10.233.53.165, 8774)
  send: 'GET /v1.1/ HTTP/1.1\r\nHost: 10.233.53.165:8774\r\nx-auth-project-id: 
admin\r\nx-auth-key: testuser\r\naccept-encoding: gzip, deflate\r\nx-auth-user: 
admin\r\nuser-agent: python-novaclient\r\n\r\n'
  reply: 'HTTP/1.1 204 No Content\r\n'
  header: Content-Length: 0
  header: X-Auth-Token: admin:admin
  header: X-Server-Management-Url: http://10.233.53.165:8774/v1.1/admin
  header: Content-Type: text/plain; charset=UTF-8
  header: Date: Mon, 18 Jun 2012 19:55:15 GMT
  send: 'POST /v1.1/admin/os-snapshots HTTP/1.1\r\nHost: 
10.233.53.165:8774\r\nContent-Length: 95\r\nx-auth-project-id: 
admin\r\nx-auth-token: admin:admin\r\ncontent-type: 
application/json\r\naccept-encoding: gzip, deflate\r\nuser-agent: 
python-novaclient\r\n\r\n{snapshot: {display_name: null, force: , 
display_description: null, volume_id: 22}}'
  reply: 'HTTP/1.1 200 OK\r\n'
  header: X-Compute-Request-Id: 

[Yahoo-eng-team] [Bug 1201418] Re: Volume in-use although VM doesn't exist

2013-07-21 Thread John Griffith
This is likely part of the cleanup on the BDM side or in the caching.
There are some other issues related to this, like failed attach never
cleaning up on the compute side.

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1201418

Title:
  Volume in-use although VM doesn't exist

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  New

Bug description:
  Setup:

  devstack on master using default settings.

  Steps:

1) Using tempest/stress with patch https://review.openstack.org/#/c/36652/:
cd /opt/stack/tempest/tempest/stress
./run_stress.py etc/volume-assign-delete-test.json -d 60
2) Test will do the following work flow (see the sketch after the list):
 - create a volume
 - create a VM
 - attach volume to VM
 - delete VM
 - delete volume
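
  For illustration, a hedged sketch of that same flow using the v1 python
  clients of the era; the credentials, image and flavor values below are
  placeholders, not part of the original report, and waits for resources
  to become active are omitted:

      from cinderclient.v1 import client as cinder_client
      from novaclient.v1_1 import client as nova_client

      nova = nova_client.Client('demo', 'secret', 'demo',
                                'http://127.0.0.1:5000/v2.0')
      cinder = cinder_client.Client('demo', 'secret', 'demo',
                                    'http://127.0.0.1:5000/v2.0')

      vol = cinder.volumes.create(1, display_name='volume663095989')    # create a volume
      vm = nova.servers.create('instance331154488', '<image-id>', '1')  # create a VM
      nova.volumes.create_server_volume(vm.id, vol.id, '/dev/vdb')      # attach volume to VM
      vm.delete()                                                       # delete VM
      cinder.volumes.delete(vol.id)  # delete volume -> 400, status is still 'in-use'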

  Problem:

  Deletion of the volume causes a problem, since its state is still in-use
  even though the VM is already deleted:

  2013-07-15 12:30:58,563 31273 tempest.stress  : INFO creating volume: 
volume663095989
  2013-07-15 12:30:59,992 31273 tempest.stress  : INFO created volume: 
cb4d625c-c4d8-43ee-9bdd-d4fa4e1d2c60
  2013-07-15 12:30:59,993 31273 tempest.stress  : INFO creating vm: 
instance331154488
  2013-07-15 12:31:11,097 31273 tempest.stress  : INFO created vm 
4e20442b-8f72-482d-9e7c-59725748784b
  2013-07-15 12:31:11,098 31273 tempest.stress  : INFO attach volume 
(cb4d625c-c4d8-43ee-9bdd-d4fa4e1d2c60) to vm 
4e20442b-8f72-482d-9e7c-59725748784b
  2013-07-15 12:31:11,265 31273 tempest.stress  : INFO volume 
(cb4d625c-c4d8-43ee-9bdd-d4fa4e1d2c60) attached to vm 
4e20442b-8f72-482d-9e7c-59725748784b
  2013-07-15 12:31:11,265 31273 tempest.stress  : INFO deleting vm: 
instance331154488
  2013-07-15 12:31:13,780 31273 tempest.stress  : INFO deleted vm: 
4e20442b-8f72-482d-9e7c-59725748784b
  2013-07-15 12:31:13,781 31273 tempest.stress  : INFO deleting volume: 
cb4d625c-c4d8-43ee-9bdd-d4fa4e1d2c60
  Process Process-1:
  Traceback (most recent call last):
File /usr/lib/python2.7/multiprocessing/process.py, line 258, in 
_bootstrap
  self.run()
File /usr/lib/python2.7/multiprocessing/process.py, line 114, in run
  self._target(*self._args, **self._kwargs)
File /opt/stack/tempest/tempest/stress/actions/volume_attach_delete.py, 
line 61, in create_delete
  resp, _ = manager.volumes_client.delete_volume(volume['id'])
File /opt/stack/tempest/tempest/services/volume/json/volumes_client.py, 
line 86, in delete_volume
  return self.delete(volumes/%s % str(volume_id))
File /opt/stack/tempest/tempest/common/rest_client.py, line 264, in delete
  return self.request('DELETE', url, headers)
File /opt/stack/tempest/tempest/common/rest_client.py, line 386, in 
request
  resp, resp_body)
File /opt/stack/tempest/tempest/common/rest_client.py, line 436, in 
_error_checker
  raise exceptions.BadRequest(resp_body)
  BadRequest: Bad request
  Details: {u'badRequest': {u'message': u'Invalid volume: Volume status must be 
available or error', u'code': 400}}
  2013-07-15 12:31:58,622 31264 tempest.stress  : INFO cleaning up

  nova list:
  +----+------+--------+------------+-------------+----------+
  | ID | Name | Status | Task State | Power State | Networks |
  +----+------+--------+------------+-------------+----------+
  +----+------+--------+------------+-------------+----------+

  cinder list
  +--------------------------------------+--------+-----------------+------+-------------+----------+--------------------------------------+
  |                  ID                  | Status |   Display Name  | Size | Volume Type | Bootable |             Attached to              |
  +--------------------------------------+--------+-----------------+------+-------------+----------+--------------------------------------+
  | cb4d625c-c4d8-43ee-9bdd-d4fa4e1d2c60 | in-use | volume663095989 |  1   |     None    |  False   | 4e20442b-8f72-482d-9e7c-59725748784b |
  +--------------------------------------+--------+-----------------+------+-------------+----------+--------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1201418/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1050359] Re: Tests fail on 32bit machines (_get_hash_str is platform dependent)

2013-05-30 Thread John Griffith
** Changed in: cinder/folsom
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1050359

Title:
  Tests fail on 32bit machines (_get_hash_str is platform dependent)

Status in Cinder:
  Fix Released
Status in Cinder folsom series:
  Won't Fix
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) folsom series:
  Fix Released
Status in “nova” package in Ubuntu:
  Fix Released
Status in “nova” source package in Quantal:
  Fix Released

Bug description:
  Running the nova tests, 3 tests failed on my system due to the fact that
  my machine is running a 32-bit operating system:

  
  ==
  FAIL: _get_hash_str should calculation correct value
  --
  Traceback (most recent call last):
File /home/matel/nova/nova/tests/test_nfs.py, line 216, in 
test_get_hash_str
  drv._get_hash_str(self.TEST_NFS_EXPORT1))
  AssertionError: '12118957640568004265' != '18446744073140166313'
  '12118957640568004265' != '18446744073140166313' = '%s != %s' % 
(safe_repr('12118957640568004265'), safe_repr('18446744073140166313'))
  '12118957640568004265' != '18446744073140166313' = 
self._formatMessage('12118957640568004265' != '18446744073140166313', 
'12118957640568004265' != '18446744073140166313')
raise self.failureException('12118957640568004265' != 
'18446744073140166313')
  

  ==
  FAIL: _get_mount_point_for_share should calculate correct value
  --
  Traceback (most recent call last):
File /home/matel/nova/nova/tests/test_nfs.py, line 225, in 
test_get_mount_point_for_share
  drv._get_mount_point_for_share(self.TEST_NFS_EXPORT1))
  AssertionError: '/mnt/test/12118957640568004265' != 
'/mnt/test/18446744073140166313'
  '/mnt/test/12118957640568004265' != '/mnt/test/18446744073140166313' = 
'%s != %s' % (safe_repr('/mnt/test/12118957640568004265'), 
safe_repr('/mnt/test/18446744073140166313'))
  '/mnt/test/12118957640568004265' != '/mnt/test/18446744073140166313' = 
self._formatMessage('/mnt/test/12118957640568004265' != 
'/mnt/test/18446744073140166313', '/mnt/test/12118957640568004265' != 
'/mnt/test/18446744073140166313')
raise self.failureException('/mnt/test/12118957640568004265' != 
'/mnt/test/18446744073140166313')
  

  ==
  FAIL: local_path common use case
  --
  Traceback (most recent call last):
File /home/matel/nova/nova/tests/test_nfs.py, line 113, in test_local_path
  drv.local_path(volume))
  AssertionError: '/mnt/test/12118957640568004265/volume-123' != 
'/mnt/test/18446744073140166313/volume-123'
  '/mnt/test/12118957640568004265/volume-123' != 
'/mnt/test/18446744073140166313/volume-123' = '%s != %s' % 
(safe_repr('/mnt/test/12118957640568004265/volume-123'), 
safe_repr('/mnt/test/18446744073140166313/volume-123'))
  '/mnt/test/12118957640568004265/volume-123' != 
'/mnt/test/18446744073140166313/volume-123' = 
self._formatMessage('/mnt/test/12118957640568004265/volume-123' != 
'/mnt/test/18446744073140166313/volume-123', 
'/mnt/test/12118957640568004265/volume-123' != 
'/mnt/test/18446744073140166313/volume-123')
raise self.failureException('/mnt/test/12118957640568004265/volume-123' 
!= '/mnt/test/18446744073140166313/volume-123')
  

  --

  
  See my inline comments here:
  
https://github.com/openstack/nova/commit/772c5d47d5bdffcd4ff8e09f4116d22568bf6eb9#L3R215
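
  For context, a hedged illustration (not the driver source) of why a name
  derived from the interpreter's built-in hash() differs across
  architectures, and a digest-based alternative that does not:

      import hashlib

      def mount_point_platform_dependent(base, share):
          # hash() differs between 32-bit and 64-bit builds (and is
          # randomized per process on modern Pythons), so this path is
          # not stable across machines.
          return '%s/%s' % (base, str(hash(share) & 0xffffffffffffffff))

      def mount_point_portable(base, share):
          # An md5 hexdigest of the export string is the same everywhere.
          return '%s/%s' % (base,
                            hashlib.md5(share.encode('utf-8')).hexdigest())

      print(mount_point_portable('/mnt/test', 'nfs-host:/export'))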

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1050359/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1112998] Re: Attach volume via Nova API != Attach volume via Cinder API

2013-05-15 Thread John Griffith
** Changed in: cinder
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1112998

Title:
  Attach volume via Nova API != Attach volume via Cinder API

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  I just discovered that novaclient's volumes.create_server_volume
  command and cinderclient's volumes.attach command are not equal.
  They both result in the specified volume being attached to the
  specified instance, but only novaclient's command makes Nova aware of
  the attachment such that novaclient's volumes.get_server_volumes
  command returns the attachment data.

  Unfortunately, without novaclient's volumes.get_server_volumes command
  returning the correct data there is no good way to go from an instance
  ID to a list of attached volumes. That data is not returned with the
  detailed instance info (as it should be), and it can't be filtered out
  of a volume listing without retrieving every volume and manually
  filtering them based on their attachments dict.
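
  A hedged sketch of the asymmetry and of the filtering workaround, using
  the v1 clients of the time; credentials and ids below are placeholders,
  not values from this report:

      from cinderclient.v1 import client as cinder_client
      from novaclient.v1_1 import client as nova_client

      nova = nova_client.Client('demo', 'secret', 'demo',
                                'http://127.0.0.1:5000/v2.0')
      cinder = cinder_client.Client('demo', 'secret', 'demo',
                                    'http://127.0.0.1:5000/v2.0')
      server_id, volume_id = '<server-uuid>', '<volume-uuid>'  # placeholders

      # Attach through nova: nova records the attachment, so this listing
      # will include the new volume.
      nova.volumes.create_server_volume(server_id, volume_id, '/dev/vdb')
      print(nova.volumes.get_server_volumes(server_id))

      # Attach through cinder only: the volume goes 'in-use', but nova's
      # view of the server does not change.
      cinder.volumes.attach(volume_id, instance_uuid=server_id,
                            mountpoint='/dev/vdb')

      # Workaround for "which volumes are attached to this instance":
      # fetch every volume and filter on its attachments dict.
      attached = [v for v in cinder.volumes.list()
                  if any(a.get('server_id') == server_id
                         for a in v.attachments)]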

  My recommendation is twofold:

1. Cinder needs an easy way to retrieve a list of volumes attached to a 
given instance, and
2. Nova needs to include the list of attached volumes with the instance 
details the same way it returns lists of IP addresses, security groups, etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1112998/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1174480] Re: snapshotting an instance with attached volumes remembers the volumes are attached when it shouldnt.

2013-04-29 Thread John Griffith
This is all handled on the Compute side, there's very little that Cinder
actually knows in terms of the attach process.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1174480

Title:
  snapshotting an instance with attached volumes remembers the volumes
  are attached when it shouldnt.

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  New

Bug description:
  this may not be a cinder thing but it was as close as I could think
  of.

  so I have an instance, and it has a volume attached to it.
  I snapshot the image and terminate the instance when the snapshot is done.
  Boot the new snapshot; the problem is that the system still thinks there
  is a volume attached to the instance when there really isn't.
  In horizon the volume shows as available, but the instance info page lists
  every volume that was ever attached to the previous (pre-snapshot)
  instance as still being attached.
  So trying to mount the volume again at the same device, /dev/vdb for
  example, fails because the system thinks something is still there.
  Bump the device up to an empty one and it mounts, and it mounts at the
  lowest device, /dev/vdb, which it thought was in use just moments before.
  cinder show <id> reports the volume as on /dev/vdd, for example, but it is
  really attached in the instance at /dev/vdb.
  This repeats for each snapshot, so the in-use device list grows.

  steve

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1174480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1074061] Re: Creating a volume out of an image using the nova api fails

2013-04-03 Thread John Griffith
Based on the discussion you referenced, this would be classified as a
packstack bug and not a Cinder or Nova bug.  Unless there's some
additional detail I'm missing, it seems this has been identified as an
issue with how packstack is (or more accurately is NOT) initializing the
Glance database.

** Changed in: cinder
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1074061

Title:
  Creating a volume out of an image using the nova api fails

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Trying to create a volume using this command:

  nova volume-create --image-id 9542ab1e-3602-4940-86d3-1e7593c32ed7
  --display-name something 50

  The nova-volume.log shows:

  2012-11-01 17:16:58 DEBUG nova.volume.manager 
[req-67d0a83a-b014-400f-b63f-169bef91a432 4629023da94042e5ab9b1fab79ddc2ce 
b73867eacaa64a90a77e16aa5cc86686] volume 
volume-4e7657a5-3e03-495b-a48f-9dcab7a5f8ca: creating lv of size 50G from 
(pid=1592) create_volume 
/usr/lib/python2.7/dist-packages/nova/volume/manager.py:137
  2012-11-01 17:16:58 ERROR nova.openstack.common.rpc.amqp [-] Exception during 
message handling
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp Traceback (most 
recent call last):
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py, line 275, 
in _process_data
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp rval = 
self.proxy.dispatch(ctxt, version, method, **args)
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py, 
line 145, in dispatch
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp return 
getattr(proxyobj, method)(ctxt, **kwargs)
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/volume/manager.py, line 166, in 
create_volume
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp 
volume_ref['id'], {'status': 'error'})
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/contextlib.py, line 24, in __exit__
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp self.gen.next()
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/volume/manager.py, line 150, in 
create_volume
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp image_location = 
image_service.get_location(context, image_id)
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/image/glance.py, line 210, in 
get_location
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp 
_reraise_translated_image_exception(image_id)
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/image/glance.py, line 208, in 
get_location
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp image_meta = 
client.call(context, 2, 'get', image_id)
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/image/glance.py, line 138, in call
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp return 
getattr(client.images, method)(*args, **kwargs)
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/glanceclient/v2/images.py, line 54, in get
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp resp, body = 
self.http_client.json_request('GET', url)
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/glanceclient/common/http.py, line 174, in 
json_request
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp resp, body_iter 
= self._http_request(url, method, **kwargs)
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/glanceclient/common/http.py, line 158, in 
_http_request
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp raise 
exc.from_response(resp)
  2012-11-01 17:16:58 TRACE nova.openstack.common.rpc.amqp 
HTTPInternalServerError: HTTPInternalServerError (HTTP 500)

  
  The glance api.log and the apache error.log show nothing in relation to
  this. I'm not sure how to debug it further.
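
  A hedged debugging sketch (not from the original report): make the same
  glance call nova is making, directly, and see whether glance itself
  returns the 500.  The endpoint and token below are placeholders:

      from glanceclient import Client

      glance = Client('2', endpoint='http://127.0.0.1:9292',
                      token='<keystone-token>')
      print(glance.images.get('9542ab1e-3602-4940-86d3-1e7593c32ed7'))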

  I'm using Folsom on Ubuntu 12.04.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1074061/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 970409] Re: Deleting volumes with snapshots should be allowed for some backends

2013-02-12 Thread John Griffith
** Changed in: cinder
Milestone: grizzly-3 => None

** Changed in: cinder
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/970409

Title:
  Deleting volumes with snapshots should be allowed for some backends

Status in Cinder:
  Won't Fix
Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  Right now, nova-volumes does not allow volumes to be deleted that have
  snapshots attached. Some backends may support this so it should be
  configurable by the administrator whether to allow volumes with
  snapshots to be deleted.
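
  For illustration, a minimal sketch of the operator-configurable check
  being asked for; all names are invented for this example and are not
  existing cinder or nova-volume code:

      def can_delete_volume(snapshot_count, backend_supports_cascade,
                            operator_allows_cascade=False):
          # Allow deleting a volume that still has snapshots only when both
          # the backend and the operator explicitly permit it.
          if snapshot_count == 0:
              return True
          return operator_allows_cascade and backend_supports_cascade

      print(can_delete_volume(2, True, operator_allows_cascade=True))  # True
      print(can_delete_volume(2, True))                                # False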

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/970409/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp