Public bug reported:
http://logs.openstack.org/40/53440/2/check/check-tempest-devstack-vm-
neutron/6ca7666/console.html
** Affects: nova
Importance: Undecided
Status: Invalid
** Changed in: nova
Status: New => Invalid
Root cause of this appears to be Glance--Swift
** Changed in: cinder
Status: New => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1225664
Title:
** Changed in: cinder
Status: Confirmed => Invalid
** Changed in: cinder
Milestone: icehouse-1 => None
Public bug reported:
A devstack install using master on Precise hits intermittent failures when
trying to boot instances (cirros image, flavor 1). Typically, simply
re-running the boot will work. The n-cpu logs contain the following trace:
2013-12-03 11:11:01.124 DEBUG nova.compute.manager
Public bug reported:
Intermittent failures trying to boot an instance using devstack/master
on precise VM. In most cases deleting the failed instance and retrying
the boot command seems to work.
2013-12-03 11:28:24.514 DEBUG nova.compute.manager
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo
The issue from the perspective of the Cinder delete is that the tempest
minimum scenario test doesn't deal with failures in its sequence. What's
happening here is that the ssh step raises a timeout exception which is not
handled and blows things up. So we dump out of the
don't see any Cinder info here
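The missing handling described above can be sketched roughly as follows; `connect` and `cleanup` are hypothetical stand-ins for the scenario's ssh step and its teardown, not actual Tempest APIs:

```python
# Hypothetical sketch: run a connect step but guarantee cleanup even when
# the ssh layer raises a timeout, instead of aborting the whole sequence.
import socket


def run_scenario(connect, cleanup):
    """Run a connect step, guaranteeing cleanup even on timeout."""
    try:
        connect()
    except socket.timeout:
        # Without this handler the exception propagates, the test dies
        # mid-sequence, and the teardown below never runs.
        return "timed out, cleaned up"
    finally:
        cleanup()
    return "connected"
```

The point is only the ordering: the teardown runs whether or not the ssh step blows up.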
** Changed in: cinder
Status: New => Invalid
https://bugs.launchpad.net/bugs/1265740
Title:
incorrect return from
Making a compatible version for EBS here isn't a terrible idea; however, I
hardly see this as a bug. This is most definitely a feature request IMO, and
it has almost nothing to do with Cinder. As per my comments in the review:
If there's real value in emulating this, then I think this needs to
I'm not crazy about this approach of making changes throughout the
project; updating all of the projects and then removing the wrapper in
oslo, then updating the libs in all of the projects again is really
something that should not be a top priority.
I do however think that the usage should be
Public bug reported:
Gate test tempest-dsvm-large-ops fails due to a failure setting up the
network on an instance.
http://logs.openstack.org/26/68726/1/check/gate-tempest-dsvm-large-ops/69a94b4/
The relevant trace in the n-cpu logs is here:
http://logs.openstack.org/26/68726/1/check/gate-tempest-dsvm-large-
** Also affects: nova
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1272447
Title:
Instances fail to boot properly with
Addressed by: https://review.openstack.org/#/c/69443/
** Changed in: cinder
Status: New => Fix Committed
** Project changed: cinder => nova-project
** Project changed: nova-project => nova
, it's the same root cause regardless of
whether it ends up that waiting longer helps us or not.
** No longer affects: cinder
** Changed in: nova
Assignee: (unassigned) => John Griffith (john-griffith)
** This bug has been marked a duplicate of bug 1270608
n-cpu 'iSCSI device not found' log
** Changed in: cinder
Status: New => Confirmed
** Changed in: cinder
Importance: Undecided => Critical
** Changed in: cinder
Assignee: (unassigned) => John Griffith (john-griffith)
Public bug reported:
Started seeing a rash of gate failures in all devstack tests for this
today. It looks like others have been logging this against bug #1254890,
but that doesn't seem accurate, or at least not detailed enough.
Here's an example of the failure being seen:
I had decided that tonight was the night I was going to fix this on the
Cinder side, but alas I'm stuck.
The problem here is that we run into an odd case with Nova booting an
instance from a volume: the compute API starts up the process, grabs the
volume and makes the attach (so now the volume
** Changed in: cinder
Status: New => Invalid
https://bugs.launchpad.net/bugs/1281351
Title:
Public bug reported:
While running Tempest tests against my device, the encryption tests
consistently fail to attach. It turns out the problem is an attempt to
create a symbolic link for the encryption process; however, the rootwrap
spec is restricted to targets with the default openstack.org IQN.
Error
I'm not sure why this is logged as a Cinder bug, other than the fact that
it's boot-from-volume perhaps; the instance appears to boot correctly and is
in ACTIVE state. The issue here seems to be networking, as the ssh
connection fails... no?
Public bug reported:
Failure in gate neutron-dsvm-full:
http://logs.openstack.org/98/117898/2/gate/gate-tempest-dsvm-neutron-full/40cf18a/console.html#_2014-09-05_12_25_05_730
** Affects: neutron
Importance: Undecided
Status: New
** Affects: nova
Importance: Undecided
Public bug reported:
Failure encountered in gate testing dsvm-full
http://logs.openstack.org/98/120298/2/check/check-tempest-dsvm-
full/a739161/console.html#_2014-09-10_15_53_23_821
It appears that the volume was created and Nova reported the instance as
booted successfully; however, the ssh connection
Not sure of the status in Cinder (the oslo moves may cover this), but nobody
seems to care, as this has been stagnant for a year on Cinder.
Feel free to log a new bug if needed.
** No longer affects: cinder
Public bug reported:
When attempting to boot an instance from a remote host using novaclient
and an API access file downloaded via the dashboard, I'm unable to create
instances due to an error retrieving networks. This is reproducible via
devstack on both Precise and Trusty, and I've
Public bug reported:
Gate failure encountered here: http://logs.openstack.org/39/97639/2/gate
/gate-tempest-dsvm-full/6e2a9e4/console.html.gz#_2014-06-05_12_06_52_960
** Affects: nova
Importance: Undecided
Status: New
Public bug reported:
Have seen the following at least a couple of times lately in gate
failures:
http://logs.openstack.org/48/96548/1/check/check-tempest-dsvm-
full/682586b/console.html#_2014-06-10_09_35_06_587
** Affects: nova
Importance: Undecided
Status: New
Public bug reported:
Encountered what looks like a new gate failure. The
test_suspend_server_invalid_state test fails with a bad request response
/ unhandled exception.
http://logs.openstack.org/48/96548/1/gate/gate-tempest-dsvm-postgres-
full/fa5c27d/console.html#_2014-06-12_23_33_59_830
**
excerpts from Sean's email that just went out but hasn't hit archives
yet:
```
Horizon in icehouse is now 100% failing
[Sat Jun 21 16:17:35 2014] [error] Internal Server Error: /
[Sat Jun 21 16:17:35
```
** Also affects: cinder
Importance: Undecided
Status: New
** Changed in: cinder
Status: New => Triaged
** Changed in: cinder
Importance: Undecided => High
Seems we hit this today on a Havana build. Nothing really going on with
the system... went to dinner, a user came back and couldn't log in. I
logged on to the controller and noticed traces in keystone; restarting
services got us back in business, but I'm not sure what's actually causing
this.
** Changed in:
I don't think this is a real problem, especially considering the requirements
files should be auto-updated anyway. I don't see any value in messing
with this, other than making sure we're in alphabetical order once and then
letting the requirements update tools update the files correctly. Adding a
check for this
So I ran some tests on this: as long as the backend doesn't fail to do
the extend, the quota is checked up front and the API responds with an
error before ever changing state or attempting the resize.
This is what I would expect. If the command passes the quota check and is
sent to the driver, but the
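The ordering described above can be sketched as follows (all names are invented for illustration; this is not Cinder's actual code):

```python
# Sketch: validate quota before any state change, so a quota failure never
# leaves the volume stuck in 'extending'.
class QuotaExceeded(Exception):
    pass


def extend_volume(volume, new_size, quota_limit, used):
    # Check quota up front: reject before touching volume state.
    if used + (new_size - volume["size"]) > quota_limit:
        raise QuotaExceeded("gigabytes quota exceeded")
    volume["status"] = "extending"   # only now do we change state
    volume["size"] = new_size
    volume["status"] = "available"
    return volume
```

If the quota check fails, the caller gets an error and the volume is untouched, which matches the behavior observed in the tests above.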
There's no way currently for Cinder to know about this situation.
It's actually a failure on Nova's part to clean up after itself when
deleting a VM IMO.
Also, FYI, there's a reset-state extension that you can/should use
rather than manipulating the DB directly.
** Changed in: cinder
Cinder already has the secret=True setting on the conf options, so this
does not exist (DNE) in Cinder.
** No longer affects: cinder
https://bugs.launchpad.net/bugs/1266590
Title:
db
Hit this today on the latest Havana build; logs below. I reproduced it doing
some stress testing: creating 50 boot-from-volume instances in one
operation. I need to try it in my Icehouse setup next.
2014-03-20 00:42:51.725 17580 INFO nova.compute.manager
[req-ef61a326-288b-494d-9d30-f533e7739949 None
I don't know if I see this as a bug. There's a use case, in my opinion,
where a provider or private admin may want to adjust a user's quota to a
lower level even if it is below what they're currently using. The idea
here is that an admin shouldn't be limited to what the user is actually
using at the
** Also affects: cinder
Importance: Undecided
Status: New
** No longer affects: cinder
https://bugs.launchpad.net/bugs/1301519
Title:
** Changed in: cinder
Milestone: grizzly-3 => None
** Changed in: cinder
Status: Triaged => Won't Fix
https://bugs.launchpad.net/bugs/970409
Based on the discussion you referenced, this would be classified as a
packstack bug and not a Cinder or Nova bug. Unless there's some
additional detail I'm missing, it seems this has been identified as an
issue with how packstack is (or more accurately is NOT) initializing the
Glance database.
This is all handled on the Compute side; there's very little that Cinder
actually knows in terms of the attach process.
** Also affects: nova
Importance: Undecided
Status: New
** Changed in: cinder
Status: New => Invalid
** Changed in: cinder
Status: Confirmed => Invalid
https://bugs.launchpad.net/bugs/1112998
Title:
Attach volume via Nova API != Attach volume via
** Changed in: cinder/folsom
Status: Confirmed => Won't Fix
https://bugs.launchpad.net/bugs/1050359
Title:
Tests fail on 32bit machines
This is likely part of the cleanup on the BDM side or in the caching.
There are some other issues related to this, like failed attach never
cleaning up on the compute side.
** Changed in: cinder
Status: New => Invalid
So yes, the client should only accept something like --force, which would
set a real boolean True.
Breaking compat is an issue here though.
That being said, the issue of not handling garbage input is addressed in
the Cinder API now via bool_from_str(), so that if garbage is passed in
it will give an Invalid
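A rough sketch of strict boolean parsing in the spirit of what's described above (the real Cinder API uses its own helper, similar to oslo's strutils; this version is only illustrative):

```python
# Strict boolean parsing: accept only known true/false spellings and
# reject garbage with ValueError instead of silently coercing it.
TRUE_STRINGS = ("1", "t", "true", "on", "y", "yes")
FALSE_STRINGS = ("0", "f", "false", "off", "n", "no")


def bool_from_str(value):
    """Return a real boolean, or raise ValueError on garbage input."""
    lowered = str(value).strip().lower()
    if lowered in TRUE_STRINGS:
        return True
    if lowered in FALSE_STRINGS:
        return False
    raise ValueError("Invalid boolean value: %r" % value)
```

With this in place, a request passing garbage for --force gets an invalid-input error rather than being treated as truthy.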
Believe this can be marked invalid for Nova.
** Changed in: nova
Status: New => Invalid
https://bugs.launchpad.net/bugs/1226337
Title:
This is a VERY old and long-running issue with how things work on the
Nova side of the house. The volumes are going to get attached to the
next available drive mapping (vdb, vdc, vdd) based on the Block Device
Mapping table in Nova. The specification you provide to attach-volume
is really more
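An illustrative sketch of "next available drive mapping" (not Nova's actual implementation; the helper and prefix here are invented):

```python
# Pick the next unused device name from the block device mapping table,
# regardless of what device name the attach request asked for.
import string


def next_device_name(bdm, prefix="/dev/vd"):
    """Return the next unused name after vda, vdb, vdc, ..."""
    used = set(bdm)
    for letter in string.ascii_lowercase[1:]:  # vda is the root disk
        candidate = prefix + letter
        if candidate not in used:
            return candidate
    raise RuntimeError("no free device names")
```

This is why a requested device name is effectively advisory: whatever slot is free next in the table wins.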
** Also affects: nova
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1404037
Title:
I think this is up to the install or distribution you're using.
In other words, Cinder does not install packages; that's deployment.
What you're reporting here is not a bug. If there's no info in the docs
about installing qemu tools, that is possibly something we could add.
What OpenStack
in the
future but for now it's a separate enhancement independent of this bug.
** Affects: nova
Importance: Undecided
Assignee: John Griffith (john-griffith)
Status: New
** Changed in: nova
Assignee: (unassigned) => John Griffith (john-griffith)
** Changed in: nova
Status: New => Invalid
** Changed in: cinder
Assignee: (unassigned) => John Griffith (john-griffith)
I've added Nova to the projects here because currently we're at a
stalemate where there seems to be a single case during unrescue that
triggers this. A patch is proposed but it looks like it won't be accepted;
I want to make sure we link this and keep it tracked although it is
different than the
** Changed in: cinder
Status: Triaged => Incomplete
** Changed in: cinder
Status: Incomplete => Invalid
https://bugs.launchpad.net/bugs/1161557
So this is typically because on some backends the image
download/conversion can take a relatively long time. The feature of
rolling this up into one command in Horizon was a good idea, but it
unfortunately doesn't coordinate things very well or check status before
trying to move on.
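The missing coordination could look roughly like this; `get_status` is a hypothetical stand-in for polling the volume's status via the Cinder API:

```python
# Poll the volume until it leaves the creating/downloading states before
# handing it off to boot, instead of moving on immediately.
import time


def wait_for_available(get_status, timeout=300, interval=1):
    """Block until the volume is 'available'; raise on error or timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if status == "available":
            return True
        if status == "error":
            raise RuntimeError("volume went to error state")
        time.sleep(interval)
    raise TimeoutError("volume never became available")
```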
this is
Turns out this is worse than I thought at first glance. It appears
that running from Horizon isn't honoring create-from-snap; it's also not
honoring bootable settings.
What's worse, however, is that at first check it appears that it's not
actually creating from snap at all. To test, I created a
Verified that using cinderclient for these ops works as expected. No idea
what Horizon is calling/doing here. Removing Cinder for now; we can
re-add it if there's in fact something weird on our side.
** No longer affects: cinder
Removing Cinder and moving to Neutron
** Also affects: neutron
Importance: Undecided
Status: New
** No longer affects: cinder
Going to close this for Cinder as well, as I don't know of a way to fix a
broken glanceclient from the consumer end.
If you're interested, however, I did throw together a patched version of 0.14.2
here:
https://github.com/j-griffith/python-glanceclient/tree/stable/icehouse
Maybe you or somebody
Public bug reported:
Horizon has a cool feature that wraps Cinder's create-volume-from-image and
Nova's boot-from-volume all up into a single command under launch
instance. The only thing missing here is the ability to specify a volume
type when doing this. There should probably be a follow-up that
Public bug reported:
Have a running stable-kilo setup; recently did a restart of all services
and glance won't start up. The following error appears in the g-api log:
2015-08-13 09:44:29.916 28813 DEBUG glance.common.config [-]
image_format.disk_formats = ['ami', 'ari', 'aki', 'vhd', 'vmdk', 'raw',
** Also affects: cinder/kilo
Importance: Undecided
Status: New
** Tags removed: volumes
** Tags added: fibre-channel ibm
Public bug reported:
After this change:
https://github.com/openstack/keystone/commit/db6c7d9779378a3a6a6c52c47fa0a303c9038508
systems that run clean devstack installs are now failing during stack.sh for:
2015-09-16 02:30:22.901 | Ignoring dnspython3: markers "python_version=='3.4'"
don't match
Public bug reported:
This test seems randomly problematic, but I noticed 3 failures today with
the following error logged in nova.api:
2016-01-08 03:04:42.603 ERROR oslo_db.api
[req-9fb82769-155d-4f50-87db-c912c8ad34a6
tempest-TestVolumeBootPattern-388230709
** Also affects: os-brick
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1530214
Title:
Tempest failures due to iSCSI DB
*** This bug is a duplicate of bug 1324670 ***
https://bugs.launchpad.net/bugs/1324670
Looks like this is an old one we thought was fixed on the brick side. Removing
Nova and marking as a duplicate of the original bug:
1324670
** No longer affects: nova
** This bug has been marked a
Public bug reported:
Noticed a couple of these today in the SolidFire CI system. These are
initiator side errors in Nova. Excerpt from log is below, but additional logs
can also be viewed here:
http://54.164.167.86/solidfire-ci-logs/refs-changes-67-244867-12/logs/screen-n-cpu.log.txt
** Also affects: nova
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1538620
Title:
Attach with host and instance_uuid not
Public bug reported:
Currently if iscsi_multipath is set in nova.conf we require ALL
attachments to use multipath. The problem with this is that it's not
uncommon to have a mix of Cinder backends: one that supports multipath
and one that doesn't. The result with how we do this now is that you
My bad on this; the actual problem is an unhandled failure/crash if
multipathd isn't installed/running.
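A minimal sketch of the kind of guard implied here, assuming detection via the PATH (the helper names are invented; this is not the actual os-brick fix):

```python
# Detect whether the multipathd binary is actually present before trying
# multipath, and fall back to a single path instead of crashing.
import shutil


def multipath_supported():
    """True only if the multipathd binary is on PATH."""
    return shutil.which("multipathd") is not None


def choose_path_mode(want_multipath):
    # Fall back to a single path rather than raising when the daemon
    # is not installed/running.
    if want_multipath and multipath_supported():
        return "multipath"
    return "single-path"
```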
** Changed in: os-brick
Status: New => Opinion
** Changed in: os-brick
Importance: Undecided => Wishlist
Public bug reported:
While attempting nova volume-attach on a current devstack deployment I'm
getting intermittent failures during the attach operation:
http://paste.openstack.org/show/641383/
Each time I've encountered this I've been able to simply rerun the
command and it completes
So, looking into this, the problem appears to be that Nova calls the brick
initiator disconnect_volume method indiscriminately. Brick currently has no
way to interrogate usage of a connection, and I'm not sure that
something like that could be added in this case.
My first thought was that it would
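The refcount idea could be sketched like this; all names here are hypothetical, and as noted above, brick has no such registry today:

```python
# Track how many attachments share a connection so disconnect_volume is
# only actually performed for the last user of that connection.
from collections import Counter

_connections = Counter()


def connect(target):
    _connections[target] += 1


def disconnect(target, do_disconnect):
    """Only tear down the device path when no attachment still uses it."""
    if _connections[target] > 0:
        _connections[target] -= 1
    if _connections[target] == 0:
        do_disconnect(target)
        return True   # actually disconnected
    return False      # still in use elsewhere
```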