Re: [openstack-dev] [cinder][nova] proper syncing of cinder volume state

2014-11-28 Thread D'Angelo, Scott
A Cinder blueprint has been submitted to allow the python-cinderclient to 
involve the back end storage driver in resetting the state of a cinder volume:
https://blueprints.launchpad.net/cinder/+spec/reset-state-with-driver
and the spec:
https://review.openstack.org/#/c/134366

This blueprint contains various use cases for a volume that may be listed in 
the Cinder database in the detaching|attaching|creating|deleting state.
The proposed solution involves augmenting the python-cinderclient command 
'reset-state', but other options are listed, including those that involve 
Nova, since the state of a volume in the Nova XML found in 
/etc/libvirt/qemu/.xml may also be out of sync with the Cinder DB or the 
storage back end.

A related change, adding a new non-admin API for moving volume status from 
'attaching' to 'error', has also been proposed:
https://review.openstack.org/#/c/137503/

Some questions have arisen:
1) Should the 'reset-state' command be changed at all, since it was originally 
just meant to modify the Cinder DB?
2) Should 'reset-state' be fixed to prevent a naïve admin from changing the 
Cinder DB to be out of sync with the back-end storage?
3) Should 'reset-state' be kept the same, but augmented with new options?
4) Should a new command be implemented, with possibly a new admin API to 
properly sync state?
5) Should Nova be involved? If so, should this be done as a separate body of 
work?

This has proven to be a complex issue and there seems to be a good bit of 
interest. Please provide feedback, comments, and suggestions.


[openstack-dev] [cinder] Change reset-state to involve the driver

2015-01-22 Thread D'Angelo, Scott
Thanks to everyone who commented on the spec to change reset-state to involve 
the driver: https://review.openstack.org/#/c/134366/

I've put some comments in reply, and I'm going to attempt to capture the 
various ideas here. I hope we can discuss this at the Mid-Cycle in Austin.
1) The existing reset-state python-cinderclient command should not change in 
unexpected ways and shouldn't have any new parameters (general consensus here). 
It should not fail if the driver does not implement my proposed changes (my 
opinion).
2) The existing reset-state is broken for some use cases (my UseCase2, for 
example, when stuck in 'attaching' but the volume is still attached to an 
instance). The existing reset-state will work for other situations (my 
UseCase1, when stuck in 'attaching' but not really attached).
3) MikeP pointed out that moving _reset_status() would break clients. I could 
use help understanding some of the API code here.
4) Xing had noted that this doesn't fix Nova. I hope we can do that 
separately, since this is proving contentious enough. Some cases, such as a 
timeout during initialize_connection(), could be fixed in Nova with a bug fix 
once this change is in. Other Nova changes might require a new Nova API to 
call for cleanup during reset-state, and that sounds much more difficult to 
get through the Nova change process.
5) Walt suggested a new driver method reset_state(). This seems fine, although 
I had hoped terminate_connection() and detach_volume() would cover all possible 
cleanup in the driver.
6) MikeP pointed out the difficulty of getting 30+ drivers to implement a 
change. I hope that this can be done in such a way that the reset-state 
command works exactly as it does today if this is not implemented in the 
driver. Putting code in the driver to improve what exists today would be 
strictly optional.
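
To illustrate that fallback, here is a minimal sketch of my own (not code from 
the spec or patch), treating Walt's suggested reset_state() as an optional 
driver hook:

    # Hypothetical sketch only; 'reset_state' is the optional driver hook.
    def reset_volume_state(driver, context, volume, status):
        reset_func = getattr(driver, 'reset_state', None)
        if callable(reset_func):
            # The driver opts in: let the back end clean up first.
            reset_func(context, volume, status)
        # With or without driver support, update the Cinder DB exactly as
        # reset-state does today.
        volume['status'] = status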

Thanks again. See you in Austin.
scottda



[openstack-dev] [Cinder] Mid-cycle meetup for Cinder devs

2014-06-11 Thread D'Angelo, Scott
During the June 11 #openstack-cinder meeting we discussed a mid-cycle meetup. 
The agenda is to be determined.
I have inquired and HP in Fort Collins, CO has room and network connectivity 
available. There were some dates that worked well for reserving a nice room:
July 14, 15, 17, 18, 21-25, and 27-Aug 1
But a room could be found regardless.
Virtual connectivity would also be available.

Some of the open questions are:
Are developers interested in a mid-cycle meetup?
What dates are Not Good (Blackout dates)?
What dates are Good?
Who might be able to be physically present in Ft Collins, CO?
Are there alternative locations to be considered?

Someone had mentioned a Google Survey. Would someone like to create that? Which 
questions should be asked?



Re: [openstack-dev] [Cinder] Mid-cycle meetup for Cinder devs

2014-06-16 Thread D'Angelo, Scott
The HP site is available Aug 11-15th. I'm getting some help from our admin to 
book the room. Some questions she had and my tentative answers:

• Is wireless ok?  # Wireless should be fine; everyone will be using laptops.

• Do you need a speaker phone and projector?  # A projector would be good; a 
speaker phone won't be required (I might be wrong here).

• Any additional special hardware, equipment, or technical setup required?  
# ???

• Happy to set up catering, just need to know who is paying and if you want 
morning continental breakfast, lunch, and an afternoon coffee break? Any 
dietary restrictions?  # Not sure about funding for catering, but we'll see if 
we can get any volunteers from management ☺



From: John Griffith [mailto:john.griff...@solidfire.com]
Sent: Monday, June 16, 2014 1:28 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] Mid-cycle meetup for Cinder devs



On Thu, Jun 12, 2014 at 3:58 PM, John Griffith <john.griff...@solidfire.com> wrote:


On Wed, Jun 11, 2014 at 3:16 PM, D'Angelo, Scott <scott.dang...@hp.com> wrote:
During the June 11 #openstack-cinder meeting we discussed a mid-cycle meetup. 
The agenda is to be determined.
I have inquired and HP in Fort Collins, CO has room and network connectivity 
available. There were some dates that worked well for reserving a nice room:
July 14, 15, 17, 18, 21-25, and 27-Aug 1
But a room could be found regardless.
Virtual connectivity would also be available.

Some of the open questions are:
Are developers interested in a mid-cycle meetup?
What dates are Not Good (Blackout dates)?
What dates are Good?
Who might be able to be physically present in Ft Collins, CO?
Are there alternative locations to be considered?

Someone had mentioned a Google Survey. Would someone like to create that? Which 
questions should be asked?



I've put together a basic Google Form to get some input and to try and nail 
down some dates.

https://docs.google.com/forms/d/1k0QsOtNR2-Q2S1YETyUHyFyt6zg0u41b_giz6byJBXA/viewform

Thanks,
John


All,

There are a number of folks that have asked that we do this the week of Aug 
11-15 due to some travel restrictions etc. All of the respondents to the 
survey have indicated this will work.

Scott,
Is the HP site in Fort Collins available during this week?

John





Re: [openstack-dev] [Manila] Welcome Xing Yang to the Manila core team!

2014-06-17 Thread D'Angelo, Scott
Congratulations Xing!


-Original Message-
From: yang, xing [mailto:xing.y...@emc.com] 
Sent: Tuesday, June 17, 2014 10:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Manila] Welcome Xing Yang to the Manila core team!

Thanks Ben!  It's my pleasure to join the Manila core team!


Xing



-Original Message-
From: Swartzlander, Ben [mailto:ben.swartzlan...@netapp.com] 
Sent: Tuesday, June 17, 2014 11:46 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Manila] Welcome Xing Yang to the Manila core team!

The Manila core team welcomes Xing Yang! She has been a very active reviewer 
and has been consistently involved with the project.

Xing, thank you for all your effort and keep up the great work!

-Ben Swartzlander



[openstack-dev] [Cinder] Discuss Cinder testing Wednesdays at 1500 UTC

2016-06-03 Thread D'Angelo, Scott
For those interested in various aspects of Cinder testing, we're planning on 
discussing and coordinating efforts. Please join us:
 #openstack-cinder
 1500 UTC Wednesdays
 (just before the Weekly Cinder meeting)

Testing subjects:
Multi-node Cinder testing
Active-Active HA testing
Improved Tempest coverage
Improved functional tests
Unit test cleanup
Partial multi-node Grenade testing
More details in the etherpads below

from the Newton Summit:
https://etherpad.openstack.org/p/cinder-newton-testingprocess
multi-node:
https://etherpad.openstack.org/p/cinder-multinode-testing

Cheers,
Scott DAngelo (scottda)



Re: [openstack-dev] [Cinder] Discuss Cinder testing Wednesdays at 1500 UTC

2016-06-03 Thread D'Angelo, Scott
In the interest of yet-another-etherpad I created:
https://etherpad.openstack.org/p/Cinder-testing

I'll put up an agenda, folks can sign up to be pinged, we'll keep the notes 
and action items here, etc.
____
From: D'Angelo, Scott
Sent: Friday, June 03, 2016 8:14 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Cinder] Discuss Cinder testing Wednesdays at 1500 
UTC

For those interested in various aspects of Cinder testing, we're planning on 
discussing and coordinating efforts. Please join us:
 #openstack-cinder
 1500 UTC Wednesdays
 (just before the Weekly Cinder meeting)

Testing subjects:
Multi-node Cinder testing
Active-Active HA testing
Improved Tempest coverage
Improved functional tests
Unit test cleanup
Partial multi-node Grenade testing
More details in the etherpads below

from the Newton Summit:
https://etherpad.openstack.org/p/cinder-newton-testingprocess
multi-node:
https://etherpad.openstack.org/p/cinder-multinode-testing

Cheers,
Scott DAngelo (scottda)



Re: [openstack-dev] Version header for OpenStack microversion support

2016-06-20 Thread D'Angelo, Scott
FYI, Cinder implemented this using the style recommended by the API-wg:

https://review.openstack.org/#/c/224910
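
For anyone curious what that looks like on the wire, here is a rough sketch 
(the endpoint, project id, and token below are illustrative placeholders, not 
real values):

    import requests

    TOKEN = '...'       # assumed: token pre-fetched from Keystone
    PROJECT_ID = '...'  # assumed: project/tenant id from the catalog

    headers = {
        'X-Auth-Token': TOKEN,
        # API-wg style: service type plus the requested microversion
        'OpenStack-API-Version': 'volume 3.0',
    }
    resp = requests.get('http://cinder-api:8776/v3/%s/volumes' % PROJECT_ID,
                        headers=headers)
    # The server echoes the negotiated version back in the same header.
    print(resp.headers.get('OpenStack-API-Version'))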


From: Sean Dague 
Sent: Monday, June 20, 2016 6:32:10 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Version header for OpenStack microversion support

On 06/18/2016 06:32 AM, Jamie Lennox wrote:
> Quick question: why do we need the service type or name in there? You
> really should know what API you're talking to already and it's just
> something that makes it more difficult to handle all the different APIs
> in a common way.

It is also extremely useful in wire interactions to be explicit so that
you know for sure you are interacting with the thing you think you are.
There was also the potential question of compound API operations (a Nova
call that calls other microversioned services that may impact
representation) and whether that may need to be surfaced to the user.
For instance, network portions of the 'servers' object may get impacted
by Neutron.

With all those possibilities, putting in the extra ~8 bytes to handle
contingencies seemed prudent.

-Sean

--
Sean Dague
http://dague.net



[openstack-dev] [Cinder][Tempest][Infra] Cinder team seeks test/tempest/infra experts

2016-06-21 Thread D'Angelo, Scott
The Cinder team has begun a test working group [1] with the goal of improving 
coverage and quality.

We meet weekly [2] at 1500 UTC in #openstack-cinder and would welcome anyone 
interested in providing expertise in Tempest/Infra/QA.


Our Cinder team contributors have a smattering of knowledge and experience 
with Tempest and the required Infra changes, but we find we have gaps. We've 
embarked on adding tests for multi-backend scenarios and are especially 
interested in multi-node Cinder testing (and Infra changes) and microversion 
testing. Please join us if interested, and/or ping us in the #openstack-cinder 
room.


Thanks!

Scott DAngelo (scottda)


[1] https://etherpad.openstack.org/p/Cinder-testing

[2] https://wiki.openstack.org/wiki/CinderMeetings#Weekly_Cinder_Test_meeting



Re: [openstack-dev] [nova][cinder] How to delete a volume which is attached to a server who was deleted

2016-06-22 Thread D'Angelo, Scott
Please note that this mailing list is not for usage questions; it is for 
development issues.

You can reset the state of the volume and then delete it (admin only):

cinder reset-state <volume-id>  # sets the status to 'available'
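
The same recovery can be scripted with python-cinderclient; a minimal sketch, 
assuming admin credentials (the placeholder values are illustrative):

    from cinderclient import client

    # Placeholder credentials; real values come from your cloud/RC file.
    cinder = client.Client('2', 'admin', 'PASSWORD', 'admin',
                           'http://keystone:5000/v2.0')
    vol = cinder.volumes.get('aff1c13f-3c08-4230-9c1f-7d48e342054a')
    cinder.volumes.reset_state(vol, 'available')  # admin-only DB state reset
    cinder.volumes.delete(vol)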


From: Will Zhou 
Sent: Wednesday, June 22, 2016 1:22:31 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][cinder] How to delete a volume which is 
attached to a server who was deleted

Hi all,

I'd like to delete a volume which is attached to a server that was deleted, 
but it failed. How can I delete the volume? Thanks.

[root@b-node3 ~]# openstack volume delete aff1c13f-3c08-4230-9c1f-7d48e342054a
Invalid volume: Volume status must be available or error or error_restoring or 
error_extending and must not be migrating, attached, belong to a consistency 
group or have snapshots. (HTTP 400) (Request-ID: 
req-50033eb3-c8f2-41bb-8841-d8436cdb464d)

[root@b-node3 ~]# openstack volume list
+--------------------------------------+--------------+--------+------+--------------------------------------------------------------+
| ID                                   | Display Name | Status | Size | Attached to                                                  |
+--------------------------------------+--------------+--------+------+--------------------------------------------------------------+
| aff1c13f-3c08-4230-9c1f-7d48e342054a | 1            | in-use |    1 | Attached to e0929b6b-8880-45be-a400-23c06e399e64 on /dev/vdb |
+--------------------------------------+--------------+--------+------+--------------------------------------------------------------+

[root@b-node3 ~]# openstack server show e0929b6b-8880-45be-a400-23c06e399e64
+--------------------------------------+---------------------------------------------------------------+
| Field                                | Value                                                         |
+--------------------------------------+---------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                          |
| OS-EXT-SRV-ATTR:host                 | b-node3                                                       |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | b-node3                                                       |
| OS-EXT-SRV-ATTR:instance_name        | instance-0083                                                 |
| OS-EXT-STS:power_state               | 4                                                             |
| OS-EXT-STS:task_state                | None                                                          |
| OS-EXT-STS:vm_state                  | soft-delete                                                   |
| OS-SRV-USG:launched_at               | 2016-06-16T06:23:42.00                                        |
| OS-SRV-USG:terminated_at             | None                                                          |
| accessIPv4                           |                                                               |
| accessIPv6                           |                                                               |
| addresses                            | public01=192.168.201.44                                       |
| config_drive                         |                                                               |
| created                              | 2016-06-16T06:22:41Z                                          |
| flavor                               | 1CPU_2GB (2)                                                  |
| hostId                               | 4e4564c50cedaa6451f4ffd5d0b835c12a92ef5f188d7a1a58e4e27f      |
| id                                   | e0929b6b-8880-45be-a400-23c06e399e64                          |
| image                                | CentOS-6.6-with-agent (3ab17999-374d-4638-a5ad-2dd3015cb8bc)  |
| key_name                             | None                                                          |
| name                                 | f                                                             |
| os-extended-volumes:volumes_attached | [{u'id': u'aff1c13f-3c08-4230-9c1f-7d48e342054a'}]            |
| project_id                           | c172d37d71114546b4ea3d96c0b27876                              |
| properties                           |                                                               |
| security_groups                      | [{u'name': u'default'}]                                       |
| status                               | SOFT_DELETED                                                  |
| updated                              | 2016-06-22T06:23:43Z                                          |
| user_id                              | bf8316d6867d4ea6a043661375f4a7c6                              |

Re: [openstack-dev] Change in openstack/trove[master]: Ophaned Volume Not Removed on Instance Delete

2016-06-28 Thread D'Angelo, Scott
If a volume is attached to an instance and the instance is deleted, the volume 
will be DETACHED; it will still exist and will NOT be DELETED.

It is up to the volume owner to delete the volume if they wish.


From: Will Zhou 
Sent: Tuesday, June 28, 2016 8:43:51 AM
To: aa...@tesora.com; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] Change in openstack/trove[master]: Ophaned Volume 
Not Removed on Instance Delete

Hi all,

I'd like to confirm: should a volume which is attached to an instance be 
detached or deleted after the instance is deleted? Thanks.


On Tue, Jun 28, 2016 at 10:16 PM, Ali Asgar Adil (Code Review) 
<rev...@openstack.org> wrote:
Ali Asgar Adil has posted comments on this change.

Change subject: Ophaned Volume Not Removed on Instance Delete
..


Patch Set 1:

In what situation would a nova instance be in the "available" state? Also, we 
are deleting the instance, so we would want the volume to be deleted as well, 
not detached.

--
To view, visit https://review.openstack.org/334722
To unsubscribe, visit https://review.openstack.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: Ie921a8ff2851e2d9d76a3c3836945c750f090c4e
Gerrit-PatchSet: 1
Gerrit-Project: openstack/trove
Gerrit-Branch: master
Gerrit-Owner: Ali Asgar Adil <aa...@tesora.com>
Gerrit-Reviewer: Ali Asgar Adil <aa...@tesora.com>
Gerrit-Reviewer: Jenkins
Gerrit-Reviewer: zzxwill <zzxw...@gmail.com>
Gerrit-HasComments: No
--

--
Mobile: 13701280947
WeChat: 472174291



Re: [openstack-dev] [Cinder] Nominating Scott D'Angelo to Cinder core

2016-07-06 Thread D'Angelo, Scott
Thanks Everyone!


Scott(da)


From: Sean McGinnis 
Sent: Wednesday, July 6, 2016 3:12:57 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Cinder] Nominating Scott D'Angelo to Cinder core

I'm a little late following through on this, but since Scott is on
vacation right now anyway I suppose that's OK.

Since there were no objections and all respondents were positive, I've
now added Scott to the cinder-core group.

Welcome Scott!

Sean

On Mon, Jun 27, 2016 at 12:27:06PM -0500, Sean McGinnis wrote:
> I would like to nominate Scott D'Angelo to core. Scott has been very
> involved in the project for a long time now and is always ready to help
> folks out on IRC. His contributions [1] have been very valuable and he
> is a thorough reviewer [2].
>
> Please let me know if there are any objections to this within the next
> week. If there are none I will switch Scott over by next week, unless
> all cores approve prior to then.
>
> Thanks!
>
> Sean McGinnis (smcginnis)
>
> [1] 
> https://review.openstack.org/#/q/owner:%22Scott+DAngelo+%253Cscott.dangelo%2540hpe.com%253E%22+status:merged
> [2] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt
>



Re: [openstack-dev] [Cinder] Newton Midcycle Planning

2016-04-12 Thread D'Angelo, Scott
I'll throw this out there: the Fort Collins HPE site is available.

Scott D'Angelo (scottda)

From: Sean McGinnis [sean.mcgin...@gmx.com]
Sent: Tuesday, April 12, 2016 8:05 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Cinder] Newton Midcycle Planning

Hey Cinder team (and those interested),

We've had a few informal conversations on the channel and in meetings,
but wanted to capture some things here and spread awareness.

I think it would be good to start planning for our Newton midcycle.
These have been incredibly productive in the past (at least in my
opinion) so I'd like to get it on the schedule so folks can start
planning for it.

For Mitaka we held our midcycle in the R-10 week. That seemed to work
out pretty well, but I also think it might be useful to hold it a little
earlier in the cycle to keep some momentum going and make sure things
stay pretty focused for the rest of the cycle.

For reference, here is the current release schedule for Newton:

http://releases.openstack.org/newton/schedule.html

R-10 puts us in the last week of July.

I would have a conflict R-16, R-15. We probably want to avoid US
Independence Day R-13, and milestone weeks R-18 and R-12.

So potential weeks look like:

* R-17
* R-14
* R-11
* R-10
* R-9

Nova is in the process of figuring out their date. If we have that, it
would be good to try to avoid an overlap there. Our linked midcycle
session worked out well, but probably better if they don't conflict.

We also need to work out locations. Anyone able and willing to host,
just let me know. We need a facility with wifi, able to hold ~30-40
people, wifi, close to an airport. And wifi.

At some point I still think it would be nice for our international folks
to be able to do a non-US midcycle, but I'm fine if we end up back in Ft
Collins or somewhere similar.

Thanks!

Sean (smcginnis)




[openstack-dev] [Cinder] API features discoverability

2016-05-06 Thread D'Angelo, Scott
I don't think we actually should be moving all the extensions to core, just 
the ones that are supported by all vendors and fully vetted. In other words, 
we should be moving extensions to core based on the original intent of 
extensions.
That would mean that for backups we could continue to use 
/v2|v3/{tenant_id}/extensions to determine backup support (and anything else 
that is not supported by all vendors, and therefore not in core).
As to whether or not the admin disables extensions that are not supported by 
the deployment, I believe that admins should be responsible for their own 
deployment's UX.
Perhaps Deepti's new API has a use here, but I think it's worth discussing 
whether we can get the desired functionality out of the extensions, and 
whether we should strive to use extensions the way they were originally 
intended.
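
As a concrete illustration, a client can already probe for backup support with 
one call against the extensions list; a rough sketch using plain requests (the 
endpoint, token, and the extension name matched are my assumptions):

    import requests

    TOKEN = '...'       # assumed: pre-fetched auth token
    PROJECT_ID = '...'  # assumed: project id from the service catalog

    url = 'http://cinder-api:8776/v2/%s/extensions' % PROJECT_ID
    resp = requests.get(url, headers={'X-Auth-Token': TOKEN})
    names = [ext['name'] for ext in resp.json()['extensions']]
    print('Backups' in names)  # assumed name of the backup extension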

Scott (scottda)


Ramakrishna, Deepti deepti.ramakrishna at intel.com 

Mon Apr 18 07:17:41 UTC 2016


Hi Michal,

This seemed like a good idea when I first read it. What's more, the server 
code for extension listing [1] does not do any authorization, so it can be 
used by any logged-in user.

However, I don't know if requiring the admin to manually disable an extension 
is practical. First, admins can always forget to do that. Second, even if they 
wanted to, it is not clear how they could disable specific extensions. I 
assume they would need to edit the cinder.conf file. This file currently lists 
the set of extensions to load as cinder.api.contrib.standard_extensions. The 
server code [2] implements this by walking the cinder/api/contrib directory 
and loading all discovered extensions. How is it possible to subtract just one 
extension from the "standard extensions"? Also, system capabilities and 
extensions may not have a 1:1 relationship in general.

Having a new extension API (as proposed by me in [3]) for returning the 
available services/functionality does not have the above problems. It will 
dynamically check the existence of the cinder-backup service, so it does not 
need manual action from the admin. I have published a BP [4] related to this. 
Can you please comment on that?

Thanks,
Deepti

[1] 
https://github.com/openstack/cinder/blob/2596004a542053bc19bb56b9a99f022368816871/cinder/api/extensions.py#L152
[2] 
https://github.com/openstack/cinder/blob/2596004a542053bc19bb56b9a99f022368816871/cinder/api/extensions.py#L312
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-October/077209.html
[4] https://review.openstack.org/#/c/306930/

-Original Message-
From: Michał Dulko [mailto:michal.dulko at intel.com]
Sent: Thursday, April 14, 2016 7:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Cinder] API features discoverability

Hi,

When looking at bug [1] I thought that we could simply use 
/v2/{tenant_id}/extensions to signal features available in the deployment - in 
this case backups, as these are implemented as an API extension too. A cloud 
admin can disable an extension if his cloud doesn't support a particular 
feature, and this is easily discoverable using the aforementioned call. It 
looks like that solution wasn't proposed when the bug was initially raised.

Now the problem is that we're actually planning to move all API extensions to 
the core API. Do we plan to keep this API for feature discovery? How do we 
approach API compatibility in this case if we want to change it? Do we have a 
plan for that?

We could keep this extensions API controlled from cinder.conf, regardless of 
the fact that we've moved everything to the core, but that doesn't seem right 
(the API will still be functional even if the administrator disables it in the 
configuration, am I right?).

Anyone have thoughts on that?

Thanks,
Michal

[1] https://bugs.launchpad.net/cinder/+bug/1334856




Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-12-10 Thread D'Angelo, Scott
Could the work for the tooz variant be leveraged to add a truly distributed 
solution (with the proper tooz distributed backend)? If so, then +1 to this 
idea. Cinder will be implementing a version of tooz-based distributed locks, 
so having it in Oslo someday is a goal, I'd think.
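
For what it's worth, the consuming code barely changes whichever backend the 
operator picks; a minimal tooz sketch (the backend URL, member id, and lock 
name are illustrative):

    from tooz import coordination

    coordinator = coordination.get_coordinator('zake://', b'cinder-volume-1')
    coordinator.start()
    # Same lock API whether the backend is zookeeper, etcd, consul, ...
    with coordinator.get_lock(b'volume-aff1c13f'):
        pass  # critical section
    coordinator.stop()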


From: Joshua Harlow [harlo...@fastmail.com]
Sent: Wednesday, December 09, 2015 6:13 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo][all] The lock files saga (and where we can 
go from here)

So,

To try to reach some kind of conclusion here I am wondering if it would
be acceptable to folks (would people even adopt such a change?) if we
(oslo folks/others) provided a new function in say lockutils.py (in
oslo.concurrency) that would let users of oslo.concurrency pick which
kind of lock they would want to use...

The two types would be:

1. A pid based lock, which would *not* be resistant to crashing
processes, it would perhaps use
https://github.com/openstack/pylockfile/blob/master/lockfile/pidlockfile.py
internally. It would be more easily breakable and more easily
introspect-able (by either deleting the file or `cat` the file to see
the pid inside of it).
2. The existing lock that is resistant to crashing processes (it
automatically releases on owner process crash) but is not easily
introspect-able (to know who is using the lock) and is not easily
breakable (aka to forcefully break the lock and release waiters and the
current lock holder).
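
To make the choice concrete, here is a rough sketch of the two variants, using 
the libraries referenced above (the lock paths are illustrative):

    from lockfile.pidlockfile import PIDLockFile
    import fasteners

    # Variant 1: pid-based. Easy to introspect (`cat` the file to see the
    # owner pid) and to break (delete the file), but NOT auto-released if
    # the owning process crashes.
    with PIDLockFile('/var/lock/cinder/volume-123.pid'):
        pass

    # Variant 2: the existing behaviour. Auto-released on process crash,
    # but hard to introspect or break from the outside.
    with fasteners.InterProcessLock('/var/lock/cinder/volume-123'):
        pass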

Would people use these two variants if (oslo) provided them, or would
the status quo exist and nothing much would change?

A third possibility is to spend energy using/integrating tooz distributed 
locks and treating different processes on the same system as distributed 
instances (even though they really are not distributed in the classical 
sense). The locks that tooz supports are already introspect-able (via various 
means) and can be broken if needed (work is in progress to make this breaking 
process more useable via an API).
Thoughts?

-Josh

Clint Byrum wrote:
> Excerpts from Joshua Harlow's message of 2015-12-01 09:28:18 -0800:
>> Sean Dague wrote:
>>> On 12/01/2015 08:08 AM, Duncan Thomas wrote:
On 1 December 2015 at 13:40, Sean Dague wrote:


   The current approach means locks block on their own, are processed in
   the order they come in, but deletes aren't possible. The busy lock 
 would
   mean deletes were normal. Some extra cpu spent on waiting, and lock
   order processing would be non deterministic. It's trade offs, but I
   don't know anywhere that we are using locks as queues, so order
   shouldn't matter. The cpu cost on the busy wait versus the lock file
   cleanliness might be worth making. It would also let you actually see
   what's locked from the outside pretty easily.


 The cinder locks are very much used as queues in places, e.g. making
 delete wait until after an image operation finishes. Given that cinder
 can already bring a node into resource issues while doing lots of image
 operations concurrently (such as creating lots of bootable volumes at
 once) I'd be resistant to anything that makes it worse to solve a
 cosmetic issue.
>>> Is that really a queue? Don't do X while Y is a lock. Do X, Y, Z, in
>>> order after W is done is a queue. And what you've explains above about
>>> Don't DELETE while DOING OTHER ACTION, is really just the queue model.
>>>
>>> What I mean by treating locks as queues was depending on X, Y, Z
>>> happening in that order after W. With a busy wait approach they might
>>> happen as Y, Z, X or X, Z, B, Y. They will all happen after W is done.
>>> But relative to each other, or to new ops coming in, no real order is
>>> enforced.
>>>
>> So ummm, just so people know the fasteners lock code (and the stuff that
>> has existed for file locks in oslo.concurrency and prior to that
>> oslo-incubator...) never has guaranteed the aboved sequencing.
>>
>> How it works (and has always worked) is the following:
>>
>> 1. A lock object is created
>> (https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L85)
>> 2. That lock object acquire is performed
>> (https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L125)
>> 3. At that point do_open is called to ensure the file exists (if it
>> exists already it is opened in append mode, so no overwrite happen) and
>> the lock object has a reference to the file descriptor of that file
>> (https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L112)
>> 4. A retry loop starts, that repeats until either a provided timeout is
>> elapsed or the lock is acquired, the retry logic u can skip over but the
>> code that the retry loop calls is
>> https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L9

Re: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-01-25 Thread D'Angelo, Scott
There is currently no simple way to clean up Cinder attachments if the Nova 
node (or the instance) has gone away. We’ve put this topic on the agenda for 
the Cinder mid-cycle this week:
https://etherpad.openstack.org/p/mitaka-cinder-midcycle (L#113)

From: Avishay Traeger [mailto:avis...@stratoscale.com]
Sent: Monday, January 25, 2016 7:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed 
nodes

Hi all,
I was wondering if there was any way to cleanly detach volumes from failed 
nodes.  In the case where the node is up nova-compute will call Cinder's 
terminate_connection API with a "connector" that includes information about the 
node - e.g., hostname, IP, iSCSI initiator name, FC WWPNs, etc.
If the node has died, this information is no longer available, and so the 
attachment cannot be cleaned up properly.  Is there any way to handle this 
today?  If not, does it make sense to save the connector elsewhere (e.g., DB) 
for cases like these?

Thanks,
Avishay

--
Avishay Traeger, PhD
System Architect

Mobile: +972 54 447 1475
E-mail: avis...@stratoscale.com



Re: [openstack-dev] [nova][cinder] Would anyone from Nova Core be able to join the Cinder mid-cycle meetup?

2015-07-30 Thread D'Angelo, Scott
We'll have a Google Hangout for the Cinder mid-cycle for virtual attendees and 
will post the links on IRC and the etherpad:
https://etherpad.openstack.org/p/cinder-liberty-midcycle-meetup

We'll make sure we ping mriedem and the Nova channel when we're discussing 
Cinder <-> Nova topics.

-Original Message-
From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com] 
Sent: Thursday, July 30, 2015 8:18 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova][cinder] Would anyone from Nova Core be able 
to join the Cinder mid-cycle meetup?



On 7/30/2015 12:31 AM, Anita Kuno wrote:
> On 07/29/2015 10:59 PM, John Garbutt wrote:
>> On 24 July 2015 at 22:49, Jay S. Bryant  
>> wrote:
>>> All,
>>>
>>> I had the opportunity to chat with John Garbutt when he was here in 
>>> Rochester for the Nova mid-cycle meet-up.  We discussed the fact 
>>> that there was much to be gained by improving the communication 
>>> between the Cinder and Nova teams.
>>
>> We do have names written down for CrossProjectLiaisons for other projects.
>> That is helping a little, but I am sure we can refine the idea so its 
>> more effective:
>> https://wiki.openstack.org/wiki/CrossProjectLiaisons#Inter-project_Li
>> aisons
>>
>> The basic ideas would be to get folks who are in the Nova meeting 
>> regularly to jump into the Cinder meeting when required, and someone 
>> else to do the reverse.
>
> What I am finding is that some folks are reluctant to come forward as 
> Liaisons initially, despite their possible interest. Some reasons for 
> hesitation might include uncertainty of what responsibilities are 
> understood with the role, uncertainty if there actually is time in the 
> individual's schedule to attend meetings and be available for 
> questions, clarification and follow up, that sort of thing.
>
> I am finding good success by encouraging someone who might consider 
> trying on a cross project hat to do something along those lines: 
> attend a meeting, follow up on an email, engage in a conversation with 
> someone in another project on a specific issue and see how it feels. 
> If it doesn't feel like a good fit, fair enough, thanks for trying it 
> out. If you can help with one thing (example: cells with nova and 
> neutron) great, you don't have to take on everything with the two 
> projects, just focus on one issue and that is most welcome, thank you. 
> (Thanks to mlavelle for volunteering with the cells cross project work 
> here.)
>
> If you have done a few things, shown up for meetings for two projects, 
> followed a particular spec, helped both projects become better 
> informed about how the issue affects those projects, wonderful. Then 
> if you feel like added your name to the wikipage, great, that helps 
> others. But please don't feel that signing up has to be the first 
> step, for some they experience the work for a while before they agree 
> to it publicly, what ever works for you.
>
> Thanks Jay, John, and Paul, glad to see this conversation progress, 
> Anita.
>
>>
>> For specific patches that really important to cinder, it would be 
>> good to advertise them in the usual place for nova:
>> https://etherpad.openstack.org/p/liberty-nova-priorities-tracking
>>
>>> With that idea in mind, it was suggested that the Cinder team open 
>>> an invitation to Nova members to attend our mid-cycle meet-up.  The 
>>> mid-cycle meet-up, of course, is not a secret.  A member of the Nova 
>>> team has always been welcome to join, just hoping that an explicit 
>>> invitation here may spark some interest.  :-)  Note:  John 
>>> considered attending but is unable to do so.
>>
>> So looks like Paul might be there.
>>
>> I can try and join a hangout, if there is something urgent.
>> That approach has big limitations, but happy to try (ideally AM, 
>> given I am in the UK).
>>
>>> The mid-cycle meet-up for Cinder is 8/4 through 8/7 at the HP site 
>>> in Fort Collins, CO .  Friday is an optional code sprint day for 
>>> those who are able to stay that long.  Details about the meet-up can 
>>> be seen on our mid-cycle meetup planning etherpad [1].
>>>
>>> If you are able to join us and help us work through the various 
>>> challenges we are having between Cinder and Nova it would be greatly 
>>> appreciated!
>>
>> +1
>>
>> Thanks,
>> John
>>
>>
>
>
>

If we want a name for nova liaison to cinder, then I can put my name in, I've 
spent enou

Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-08-03 Thread D'Angelo, Scott
We'll have a chance to discuss DB mutual exclusion at the API nodes at the 
Cinder mid-cycle, which starts tomorrow.
The details, issues, and realistic schedule for that will be a key piece of 
this whole puzzle, since anything else is seen as a temporary solution.

-Original Message-
From: Gorka Eguileor [mailto:gegui...@redhat.com] 
Sent: Monday, August 03, 2015 11:38 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

On Mon, Aug 03, 2015 at 07:01:45PM +1000, Morgan Fainberg wrote:
> Let's step back away from tooz. Tooz for the sake of this conversation is as 
> much the same as saying zookeeper or consul or etcd, etc. We should be 
> focused (as both Flavio and Thierry said) on if we need DLM and what it will 
> solve.

What do you mean we should be focused on if we need a DLM and what it will 
solve?

I don't know what you mean, as those answers are quite clear:

- The DLM replaces our current local file locks and extends them among
  nodes, it does not provide any additional functionality.

- Do we need a DLM?  Need is a strong word, if you are asking if we can
  do it without a DLM, then the answer is yes, we can do it without it.
  And if you ask if it will take more time than using a DLM and has the
  potential to introduce more bugs, then the answer is yes as well.

- Will we keep using a DLM forever?  No, we will change the DLM locks
  with DB mutual exclusion at the API nodes later.

Gorka.

> 
> Once we have all of that defined, the use of an abstraction such as tooz (or 
> just the direct bindings for some specific choice) can be made. 
> 
> I want to voice that we should be very picky about the solution (if we decide 
> on a DLM) so that we are implementing to the strengths of the solution rather 
> than try and make everything work seamlessly.  
> 
> --Morgan
> 
> Sent via mobile
> 
> >> On Aug 3, 2015, at 18:49, Julien Danjou  wrote:
> >> 
> >> On Mon, Aug 03 2015, Thierry Carrez wrote:
> >> 
> >> The last thing we want is to rush a solution that would only solve 
> >> a particular project use case. Personally I'd like us to pick the 
> >> simplest solution that can solve most of the use cases. Each of the 
> >> solutions bring something to the table -- Zookeeper is mature, 
> >> Consul is featureful, etcd is lean and simple... Let's not dive 
> >> into the best solution but clearly define the problem space first.
> > 
> > Or just start using Tooz – like some of OpenStack are already doing 
> > for months – and let the operators pick the backend that they are 
> > the most comfortable with? :)
> > 
> > --
> > Julien Danjou
> > -- Free Software hacker
> > -- http://julien.danjou.info
> > 
> 



Re: [openstack-dev] [cinder] PTL Non-Candidacy

2015-09-14 Thread D'Angelo, Scott
Thanks Mike. You've done a great job, including making contributors feel 
welcome.

-Original Message-
From: Mike Perez [mailto:thin...@gmail.com] 
Sent: Monday, September 14, 2015 10:16 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [cinder] PTL Non-Candidacy

Hello all,

I will not be running for Cinder PTL this next cycle. Each cycle I ran was for 
a reason [1][2], and the Cinder team should feel proud of our
accomplishments:

* Spearheading the Oslo work to allow *all* OpenStack projects to have their 
databases independent of services during upgrades.
* Providing quality to OpenStack operators and distributors with over
60 accepted block storage vendor drivers with reviews and enforced CI [3].
* Helping other projects with third party CI for their needs.
* Being a welcoming group to new contributors. As a result we grew greatly [4]!
* Providing documentation for our work! We did it for Kilo [5], and I was very 
proud to see the team has already started doing this on their own to prepare 
for Liberty.

I would like to thank this community for making me feel accepted in 2010. I 
would like to thank John Griffith for starting the Cinder project, and 
empowering me to lead the project through these couple of cycles.

With the community's continued support I do plan on continuing my efforts, but 
focusing cross project instead of just Cinder. The accomplishments above are 
just some of the things I would like to help others with to make OpenStack as a 
whole better.


[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2014-September/046788.html
[2] - http://lists.openstack.org/pipermail/openstack-dev/2015-April/060530.html
[3] - 
http://superuser.openstack.org/articles/what-you-need-to-know-about-openstack-cinder
[4] - http://thing.ee/cinder/active_contribs.png
[5] - https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Key_New_Features_7

--
Mike Perez



Re: [openstack-dev] [Cinder]Behavior when one cinder-volume service is down

2015-09-15 Thread D'Angelo, Scott
Eduard, Gorka has done a great job of explaining some of the issues with 
Active-Active Cinder-volume services in his blog:
http://gorka.eguileor.com/

TL;DR: The hacks to use the same hostname or use Pacemaker + VIP are dangerous 
because of races, and are not recommended for Enterprise deployments.

From: Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
Sent: Tuesday, September 15, 2015 8:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder]Behavior when one cinder-volume service is 
down

Hi,

Let me see if I got this:
- running 3 (multiple) c-vols won't automatically give you failover
- each c-vol is "master" of a certain number of volumes
-- if a c-vol is "down", then those volumes cannot be managed by another c-vol

What I'm trying to achieve is making sure ANY volume is managed (manageable) 
by WHICHEVER c-vol is running (and gets the call first) - sort of A/A - so 
this means I need to look into Pacemaker and virtual IPs, or I should first 
try the "same name" approach.

Thanks,

Eduard

PS. @Michal: Where are volumes physically in case of your driver? <- similar to 
ceph, on a distributed object storage service (whose disks can be anywhere even 
on the same compute host)


Re: [openstack-dev] [Cinder]Behavior when one cinder-volume service is down

2015-09-15 Thread D'Angelo, Scott
I'm just not sure that you can evacuate with the c-vol service for those 
volumes down. Not without the unsafe HA active-active hacks.
In our public cloud, if the c-vol service for a backend/volumes is down, we get 
woken up in the middle of the night and stay at it until we get c-vol back up. 
That’s the only way I know of getting access to those volumes that are 
associated with a c-vol service: get the service back up.

From: Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
Sent: Tuesday, September 15, 2015 9:16 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder]Behavior when one cinder-volume service is 
down

Thanks Scott,
But the question remains: if the "hacks" are not recommended, then how can I 
perform Evacuate when the c-vol service of the volumes I need evacuated is 
"down", but there are two more controller nodes with c-vol services running?

Thanks,

Eduard


Re: [openstack-dev] [cinder] RemoteFS drivers refactoring: move code, which works with images to separate classes

2015-10-13 Thread D'Angelo, Scott
If you create a blueprint and a spec for this, the details can be discussed in 
the spec.

-Original Message-
From: Dmitry Guryanov [mailto:dgurya...@virtuozzo.com] 
Sent: Tuesday, October 13, 2015 12:57 PM
To: OpenStack Development Mailing List; Maxim Nestratov
Subject: [openstack-dev] [cinder] RemoteFS drivers refactoring: move code, 
which works with images to separate classes

Hello,

RemoteFS drivers combine two logical tasks. The first is how to mount a 
filesystem and select the proper share for a new or existing volume. The 
second is how to deal with image files in a given directory (mount point): 
create, delete, create snapshot, etc.

The first part is different for each volume driver. The second is the same for 
all volume drivers, but it depends on the selected volume format: you can 
create a qcow2 file on NFS or SMBFS with the same code.

Since there are several volume formats (raw, qcow2, vhd, and possibly some 
others), I propose to move the code which works with images to separate 
classes: 'VolumeFormat' handlers.

This change has 3 advantages:

1. Duplicated code from remotefs driver will be removed.
2. All drivers will support all volume formats.
3. New volume formats could be added easily, including non-qcow2 snapshots.
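
Roughly, the idea is something like this (a hypothetical sketch; the class and 
method names are mine, not the draft patch's):

    class VolumeFormat(object):
        """Operations on an image file inside an already-mounted share."""

        def create_volume(self, path, size_gb):
            raise NotImplementedError

        def create_snapshot(self, path, snapshot_name):
            raise NotImplementedError


    class Qcow2Format(VolumeFormat):
        def create_volume(self, path, size_gb):
            # e.g. run: qemu-img create -f qcow2 <path> <size>G
            # The same handler then serves NFS, SMBFS, and so on.
            pass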

Here is a draft version of a patch:
https://review.openstack.org/#/c/234359/

Although there are problems in it, most of the operations with volumes work, 
and there are only about 10 failures in Tempest.


I'd like to discuss this approach before further work on the patch.


--
Dmitry Guryanov



[openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-18 Thread D'Angelo, Scott
The Cinder team is proposing to add support for API microversions [1]. It came 
up at our mid-cycle that we should add a new /v3 endpoint [2]. Discussions on 
IRC have raised questions about this [3].

Please weigh in on the design decision to add a new /v3 endpoint for Cinder for 
clients to use when they wish to have api-microversions.

PRO adding a new /v3 endpoint: A client should not ask for new behaviour 
against the old /v2 endpoint, because that might hit an old pre-microversion 
(i.e. Liberty) server, and that server might carry on with the old behaviour. 
The client would not know this without checking, and so strange things happen 
silently.
It is possible for the client to check the response from the server, but this 
requires an extra round trip.
It is possible to implement some type of caching of the supported 
(micro-)version, but not all clients will do this.
The basic argument is that continuing to use the /v2 endpoint either requires 
an extra round trip for each request (absent caching), meaning a performance 
slowdown, or risks unnoticed errors.
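
For reference, the 'extra round trip' above is essentially a version-discovery 
GET before the real request; a rough sketch (the endpoint is illustrative):

    import requests

    resp = requests.get('http://cinder-api:8776/')  # unversioned root
    for version in resp.json()['versions']:
        # e.g. an id like 'v2.0', plus min/max microversion fields where
        # the server supports microversions
        print(version['id'], version.get('version'))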

CON adding a new endpoint:
The downstream cost of changing endpoints is large. It took ~3 years to move 
from /v1 -> /v2, and we will have to support the deprecated /v2 endpoint 
forever.
If we add microversions on the /v2 endpoint, old scripts will keep working on 
/v2 unchanged.
We would assume that people who choose to use microversions will check that 
the server supports them.

Scottda

[1] https://etherpad.openstack.org/p/cinder-api-microversions
[2] https://www.youtube.com/watch?v=tfEidbzPOCc around 1:20
[3] 
http://eavesdrop.openstack.org/irclogs/%23openstack-cinder/%23openstack-cinder.2016-02-18.log.html
  around 13:17





Re: [openstack-dev] Openstack Cinder - Wishlist

2016-03-01 Thread D'Angelo, Scott
I like the idea of a Cinder wishlist, or perhaps one titled 'Cinder Future 
Design and Architecture'. I think the Cinder community could benefit if we 
continued to refine the work we did on this wishlist at the Mitaka midcycle 
and spent a few minutes each cycle going over the list, prioritizing, and 
commenting.
This seems useful for new contributors, such as Mohammed and his team, as well 
as others who wish to plan work.

Note: Future work for Cinder <-> Nova API changes is tracked and discussed 
here:
https://etherpad.openstack.org/p/cinder-nova-api-changes

Scott D'Angelo (scottda)

From: Michał Dulko [michal.du...@intel.com]
Sent: Tuesday, March 01, 2016 5:48 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Openstack Cinder - Wishlist

On 03/01/2016 11:31 AM, mohammed.asha...@wipro.com wrote:
>
> Hi,
>
>
>
> Would like to know if there's a feature wish list / enhancement request
> for OpenStack Cinder, i.e. a list of features that we would like to
> add to Cinder Block Storage but that hasn't been taken up for development
> yet.
>
> We have a couple of developers who are interested in working on OpenStack
> Cinder... hence we would like to take a look at that wish list.
>
>
>
> Thanks,
>
> Ashraf
>
>

Hi!

At the Cinder Midcycle Meetup in January we created a list of developers' 
answers to "if you had time, what would you want to sort out in Cinder?". The 
list can be found at the bottom of the etherpad [1]. It may seem a little 
vague for someone not into Cinder's internals, so I can provide some 
highlights:

* Quotas - Cinder has issues with quota management. Right now there are
efforts to sort this out.
* Notifications - we do not version or standardize notifications sent
over RPC. That's a problem if someone relies on them.
* A/A HA - there are ongoing efforts to make the cinder-volume service
scalable in an A/A manner.
* Cinder/Nova API - the way Nova talks with Cinder needs revisiting, as
the limitations of the current design are blocking us.
* State management - the way Cinder resource states are handled isn't
strongly defined. We may need some kind of state machine for that? (This
one is controversial ;)).
* Objectification - we started converting Cinder to use
oslo.versionedobjects back in the Kilo cycle. This still needs to be finished.
* Adding CI that tests rolling upgrades - starting from Mitaka we have a
tech preview of upgrades without downtime. To get this feature out of the
experimental stage we need a CI that will test it in the gate.
* Tempest testing - we should increase our integration test coverage.

If you're interested in any of these items feel free to ask me on IRC
(dulek on freenode) so I can point you to correct people for details.

Apart from that, you can look through the blueprint list [2]. Note that a lot 
of items there may be outdated and may not fit well into the current state of 
Cinder.

[1] https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-3
[2] https://blueprints.launchpad.net/cinder



Re: [openstack-dev] [nova][cinder] volumes stuck detaching attaching and force detach

2016-03-01 Thread D'Angelo, Scott
Matt, changing Nova to store the connector info at volume attach time does 
help. Where the gap will remain is after Nova evacuation or live migration, 
when that info will need to be updated in Cinder. We need to change the Cinder 
API to have some mechanism to allow this.
We'd also like Cinder to store the appropriate info to allow a force-detach for 
the cases where Nova cannot make the call to Cinder.
Ongoing work for this and related issues is tracked and discussed here:
https://etherpad.openstack.org/p/cinder-nova-api-changes
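
For context, the 'connector info' in question is the initiator-side dict that 
os-brick gathers on the compute node; a rough sketch of its shape (fields vary 
by transport, and the values here are invented):

    connector = {
        'host': 'compute-01',
        'ip': '192.168.201.44',
        'initiator': 'iqn.1994-05.com.redhat:compute-01',  # iSCSI
        'wwpns': ['50014380242b9751'],                      # Fibre Channel
        'multipath': False,
    }
    # Nova passes this to Cinder's terminate_connection(); if the compute
    # node is gone, a stored copy is what makes later cleanup possible.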

Scott D'Angelo (scottda)

From: Matt Riedemann [mrie...@linux.vnet.ibm.com]
Sent: Monday, February 29, 2016 7:48 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova][cinder] volumes stuck detaching attaching 
and force detach

On 2/22/2016 4:08 PM, Walter A. Boring IV wrote:
> On 02/22/2016 11:24 AM, John Garbutt wrote:
>> Hi,
>>
>> Just came up on IRC, when nova-compute gets killed half way through a
>> volume attach (i.e. no graceful shutdown), things get stuck in a bad
>> state, like volumes stuck in the attaching state.
>>
>> This looks like a new addition to this conversation:
>> http://lists.openstack.org/pipermail/openstack-dev/2015-December/082683.html
>>
>> And brings us back to this discussion:
>> https://blueprints.launchpad.net/nova/+spec/add-force-detach-to-nova
>>
>> What if we move our attention towards automatically recovering from
>> the above issue? I am wondering if we can look at making our usual
>> recovery code deal with the above situation:
>> https://github.com/openstack/nova/blob/834b5a9e3a4f8c6ee2e3387845fc24c79f4bf615/nova/compute/manager.py#L934
>>
>>
>> Did we get the Cinder APIs in place that enable the force-detach? I
>> think we did and it was this one?
>> https://blueprints.launchpad.net/python-cinderclient/+spec/nova-force-detach-needs-cinderclient-api
>>
>>
>> I think diablo_rojo might be able to help dig for any bugs we have
>> related to this. I just wanted to get this idea out there before I
>> head out.
>>
>> Thanks,
>> John
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> .
>>
> The problem is a little more complicated.
>
> In order for cinder backends to be able to do a force detach correctly,
> the Cinder driver needs to have the correct 'connector' dictionary
> passed in to terminate_connection.  That connector dictionary is the
> collection of initiator side information which is gleaned here:
> https://github.com/openstack/os-brick/blob/master/os_brick/initiator/connector.py#L99-L144
>
>
> The plan was to save that connector information in the Cinder
> volume_attachment table.  When a force detach is called, Cinder has the
> existing connector saved if Nova doesn't have it.  The problem was live
> migration.  When you migrate to the destination n-cpu host, the
> connector that Cinder had is now out of date.  There is no API in Cinder
> today to allow updating an existing attachment.
>
> So, the plan at the Mitaka summit was to add this new API, but it
> required microversions to land, which we still don't have in Cinder's
> API today.
>
>
> Walt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Regarding storing off the initial connector information from the attach,
does this [1] help bridge the gap? It adds the connector dict to the
connection_info dict that is serialized and stored in the nova
block_device_mappings table, and the patch then uses it to call
terminate_connection in the case where the host has changed.

[1] https://review.openstack.org/#/c/266095/
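
For reference, the connector dict being discussed is the initiator-side
information os-brick gathers, and stashing it alongside the rest of the
attachment info (roughly what [1] does) might look like the sketch below.
Field values are made up and the helper function is hypothetical:

    from oslo_serialization import jsonutils

    # Shape of a typical connector from os-brick's
    # get_connector_properties(); values here are illustrative only.
    connector = {
        'ip': '192.0.2.10',
        'host': 'compute-01',
        'initiator': 'iqn.1993-08.org.debian:01:abcdef123456',
        'multipath': False,
        'os_type': 'linux2',
        'platform': 'x86_64',
    }

    def attach_and_record(volume_api, context, volume_id, bdm, connector):
        """Persist the attach-time connector with the BDM so a later
        detach from a different host can still terminate the original
        export."""
        connection_info = volume_api.initialize_connection(
            context, volume_id, connector)
        connection_info['connector'] = connector
        bdm.connection_info = jsonutils.dumps(connection_info)
        bdm.save()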

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [cinder] Proposal: changes to our current testing process

2016-03-02 Thread D'Angelo, Scott
+1 to making the testing process better.
It has been discussed that services could/should consider devoting some or all 
of a release cycle to stability and/or quality.
I propose that the Cinder team make improving and fixing the tests and the 
test process a priority for the Newton cycle.

Scott D'Angelo (scottda)

From: Ivan Kolodyazhny [e...@e0ne.info]
Sent: Wednesday, March 02, 2016 4:25 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [cinder] Proposal: changes to our current testing 
process

Hi Team,

Here are my thoughts and proposals on how to make the Cinder testing process 
better. I won't cover the "3rd party CIs" topic here. I will share my opinion 
about current and future jobs.


Unit-tests

  *   Long-running tests. I hope everybody will agree that unit tests must be 
quite simple and very fast. Unit tests which take more than 3-5 seconds should 
be refactored and/or moved to 'integration' tests.
Thanks to Tom Barron for several fixes like [1]. IMO, it would be good to 
have some hacking checks to prevent such issues in the future (see the sketch 
after this list).

  *   Test coverage. We don't check it automatically in the gate. Usually, we 
ask for unit tests to be added during the code review process. Why can't we 
add a coverage job to our CI and refuse to merge new patches which would 
decrease the test coverage rate? Maybe such a job could become voting in the 
future so it isn't ignored. For now, there is no simple way to check coverage 
because the 'tox -e cover' output is not useful [2].
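
One possible way to enforce the long-running-tests point above (just an
idea on my part, not an agreed plan) is a per-test timeout in the base
test class, e.g. with the fixtures library:

    import fixtures
    import testtools

    class FastTestCase(testtools.TestCase):
        """Sketch of a base class failing any test that runs too long."""

        TEST_TIMEOUT = 5  # seconds; tune to taste

        def setUp(self):
            super(FastTestCase, self).setUp()
            # gentle=True raises an exception in the test itself instead
            # of killing the whole test process.
            self.useFixture(
                fixtures.Timeout(self.TEST_TIMEOUT, gentle=True))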

Functional tests for Cinder

We introduced some functional tests last month [3]. Here is a patch to infra 
to add the new job [4]. Because these tests were moved out of the unit tests, 
I think we're OK to make this job voting. Such tests should not be a 
replacement for Tempest. They could even test Cinder with the Fake Driver to 
make them faster and independent of storage backend issues.


Tempest in-tree tests

Sean started work on this [5] and I think it's a good idea to get these tests 
into the Cinder repo so they can run in Tempest jobs and 3rd-party CIs against 
a real backend.


Functional tests for python-brick-cinderclient-ext

There are patches that introduce functional tests [6] and a new job [7].


Functional tests for python-cinderclient

We've got a very limited set of such tests and a non-voting job. IMO, we can 
run them with the Cinder Fake Driver to make them independent of any storage 
backend and faster. I believe we can make this job voting soon. Also, we need 
more contributors for this kind of test.


Integrated tests for python-cinderclient

We need such tests to make sure that we won't break Nova, Heat, or other 
python-cinderclient consumers with the next merged patch. There is a thread in 
the openstack-dev ML about such tests [8] and a proposal [9] to introduce them 
to python-cinderclient.


Rally tests

IMO, it would be good to have new Rally scenarios for every patch that claims 
to 'improve performance', 'fix concurrency issues', etc. Even if we as a 
Cinder community don't have enough time to implement them, we should ask for 
them in reviews and on the openstack-dev ML, and file Rally bugs and 
blueprints if needed.


[1] https://review.openstack.org/#/c/282861/
[2] http://paste.openstack.org/show/488925/
[3] https://review.openstack.org/#/c/267801/
[4] https://review.openstack.org/#/c/287115/
[5] https://review.openstack.org/#/c/274471/
[6] https://review.openstack.org/#/c/265811/
[7] https://review.openstack.org/#/c/265925/
[8] http://lists.openstack.org/pipermail/openstack-dev/2016-March/088027.html
[9] https://review.openstack.org/#/c/279432/


Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Let's do presentations/sessions on Mitaka's new complex features in Design Summit

2016-03-18 Thread D'Angelo, Scott
I can do a presentation on microversions.



Scott D'Angelo (scottda)

-- Original Message --
From: Gorka Eguileor
Subject: [openstack-dev] [cinder] Let's do presentations/sessions on Mitaka's 
new complex features in Design Summit


Hi,

As you all probably know, during this cycle we have introduced quite a
big number of changes in cinder that will have a great impact on the
development of new functionality, as well as on changes to existing
functionality, from an implementation perspective.

These changes to the cinder code include, but are not limited to,
microversions, rolling upgrades, and conditional DB update functionality
to remove API races. While the latter has a good number of examples
already merged and more patches under review, the other two have just been
introduced and there are no patches in cinder that can serve as an easy
reference on how to use them.
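
To make the conditional DB update item a bit more concrete, the pattern
looks roughly like the sketch below (based on my reading of the merged
examples; names are illustrative):

    def begin_detaching(volume):
        # Atomically move the volume to 'detaching' only if it is still
        # 'in-use'. The expected values become part of the UPDATE's
        # WHERE clause, so the old read-check-write race is gone.
        updated = volume.conditional_update(
            {'status': 'detaching'},
            expected_values={'status': 'in-use'})
        if not updated:
            # No rows matched: another request changed the status first.
            raise ValueError('volume is not in a detachable state')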

As cinder developers we will all have to take these changes into account
in our new patches, but it is hard to do so when one doesn't have an
in-depth knowledge of them, and while we all probably know quite a bit
about them, it will take some time to get familiar enough to be aware of
*all* the implications of the changes made by newer patches.

And it's for this reason that I would like to suggest that during this
summit's cinder design sessions we take the time to go through the
changes giving not only an example of how they should be used in a
patch, but also the dos, don'ts and gotchas.

A possible format for these explanations could be a presentation (around
30 minutes) by the people who were involved in the development,
followed by Q&A.

I would have expected to see some of these in the "Upstream Dev" track,
but unfortunately I don't see any (maybe I'm just missing them among all
the cool title names). Then again, maybe these talks are not relevant for
that track, being so specific and only of interest to cinder developers.

I believe these presentations would help the cinder team increase the
adoption speed of these features while reducing the learning curve and
the number of bugs introduced in the code caused by gaps in our
knowledge and misinterpretations of the new functionality.

I would take the lead on the conditional DB updates functionality, and I
would have no problem doing the rolling upgrades presentation as well.
But I believe there are people more qualified and more deserving of
doing that one, though I offer my help if they want it.

I have added those 3 topics to the Etherpad with Newton Cinder Design
Summit Ideas [1] so people can volunteer and express their ideas in
there.

Cheers,
Gorka.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Fixing stuck volumes - part II

2015-02-11 Thread D'Angelo, Scott
At the cinder mid-cycle it was decided that the best way to fix volumes stuck 
in 'attaching' or 'detaching' was NOT to fix the broken reset-state command. 
The doc string and help message for reset-state have been modified to warn the 
user that the tool only affects Cinder DB and can cause problems. But, 
ultimately, a separate command to 'force-detach' would be better. I've 
abandoned the original BP/spec for reset-state involving the driver.

I have looked at the existing function 'force-detach' in Cinder and it seems to 
work...except that Nova must be involved. Nova uses the BlockDeviceMapping 
table to keep track of attached volumes and, if Nova is not involved, a 
force-detach'ed volume will not be capable of being re-attached.
So, my plan is to submit a blueprint + spec for Novaclient to add a 
'force-detach' command. This is technically fairly simple and only involves 
stripping away the checks for proper state in Nova, and calling Cinder 
force-detach. I don't plan on asking for an exception to feature freeze, unless 
there is optimism from the community that this could possibly get in for L.
The existing Cinder force-detach calls terminate_connection() and 
detach_volume().  I assume detach_volume() is covered by the "Volume Detach" 
minimum feature? I see many drivers have terminate_connection(), but not all. I 
believe this will not be a minimum feature, but others may disagree.
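
For anyone who wants to experiment, the Cinder side is reachable from the
client along these lines. This is only a sketch: the force_detach helper
may not be exposed in all releases, the credentials are placeholders, and
(per the caveat above) Nova's BlockDeviceMapping is left untouched:

    from cinderclient import client

    cinder = client.Client('2', 'admin', 'password', 'admin',
                           'http://controller:5000/v2.0')

    # A volume stuck in 'detaching'; the ID is a placeholder.
    volume = cinder.volumes.get('VOLUME-UUID')

    # Posts the os-force_detach action, which skips the state checks and
    # invokes the driver's terminate_connection()/detach_volume().
    cinder.volumes.force_detach(volume)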

thanks,
scottda
scott.dang...@hp.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Fixing stuck volumes - part II

2015-02-11 Thread D'Angelo, Scott


From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
Sent: Wednesday, February 11, 2015 5:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] Fixing stuck volumes - part II


On Feb 11, 2015, at 3:45 PM, D'Angelo, Scott <scott.dang...@hp.com> wrote:


At the cinder mid-cycle it was decided that the best way to fix volumes stuck 
in 'attaching' or 'detaching' was NOT to fix the broken reset-state command. 
The doc string and help message for reset-state have been modified to warn the 
user that the tool only affects Cinder DB and can cause problems. But, 
ultimately, a separate command to 'force-detach' would be better. I've 
abandoned the original BP/spec for reset-state involving the driver.

I have looked at the existing function 'force-detach' in Cinder and it seems to 
work...except that Nova must be involved. Nova uses the BlockDeviceMapping 
table to keep track of attached volumes and, if Nova is not involved, a 
force-detach'ed volume will not be capable of being re-attached.
So, my plan is to submit a blueprint + spec for Novaclient to add a 
'force-detach' command. This is technically fairly simple and only involves 
stripping away the checks for proper state in Nova, and calling Cinder 
force-detach. I don't plan on asking for an exception to feature freeze, unless 
there is optimism from the community that this could possible get in for L.
The existing Cinder force-detach calls terminate_connection() and 
detach_volume().  I assume detach_volume() is covered by the "Volume Detach" 
minimum feature? I see many drivers have terminate_connection(), but not all. I 
believe this will not be a minimum feature, but others may disagree.

If you are going to add a force-detach command to nova, I think it would be 
good to make it detach even if the cinder request fails. Currently if you try 
to detach a volume (or terminate an instance with an attached volume), if 
cinder is down or the volume node where the volume resides is down, nova 
refuses to continue, which is pretty bad user experience.

Vish

The only problem with that is, what happens when Cinder comes back up? You have 
a user/admin who thinks the volume should be available, but the Cinder DB 
still lists it as attaching | detaching, and the backend may still be exporting 
the volume to the Nova compute host.
We could expose 'force-detach' through the cinderclient to fix this, but the 
admin/user might think that this is the first place to start, and leave Nova 
out, which results in a volume that cannot be re-attached.
I think that you are right about user experience, but I'm cautious about other 
problems that might result.

thanks,
scottda
scott.dang...@hp.com<mailto:scott.dang...@hp.com>



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Fixing stuck volumes - part II

2015-02-11 Thread D'Angelo, Scott


From: Guo, Ruijing [mailto:ruijing@intel.com]
Sent: Wednesday, February 11, 2015 5:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] Fixing stuck volumes - part II

Force is a good idea. I'd like to add 2 comments:


1)  It should be an option instead of a new command. IOW, detach with a force 
option instead of a separate force-detach command.

> OK with me. I'll leave it to the community to decide which is best.

2)  Can we extend this to other commands, e.g. delete LUN/snapshot with force?

  > We could easily expose 'force-detach' through the cinderclient for 
volumes and snapshots. But it might be confusing for an admin who thinks that 
this is the primary way to clean things up and leaves Nova out, which would 
put the volume in a state where it cannot be re-attached.

Thanks,
-Ruijing

From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
Sent: Thursday, February 12, 2015 8:07 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] Fixing stuck volumes - part II


On Feb 11, 2015, at 3:45 PM, D'Angelo, Scott <scott.dang...@hp.com> wrote:

At the cinder mid-cycle it was decided that the best way to fix volumes stuck 
in 'attaching' or 'detaching' was NOT to fix the broken reset-state command. 
The doc string and help message for reset-state have been modified to warn the 
user that the tool only affects Cinder DB and can cause problems. But, 
ultimately, a separate command to 'force-detach' would be better. I've 
abandoned the original BP/spec for reset-state involving the driver.

I have looked at the existing function 'force-detach' in Cinder and it seems to 
work...except that Nova must be involved. Nova uses the BlockDeviceMapping 
table to keep track of attached volumes and, if Nova is not involved, a 
force-detach'ed volume will not be capable of being re-attached.
So, my plan is to submit a blueprint + spec for Novaclient to add a 
'force-detach' command. This is technically fairly simple and only involves 
stripping away the checks for proper state in Nova, and calling Cinder 
force-detach. I don't plan on asking for an exception to feature freeze, unless 
there is optimism from the community that this could possible get in for L.
The existing Cinder force-detach calls terminate_connection() and 
detach_volume().  I assume detach_volume() is covered by the "Volume Detach" 
minimum feature? I see many drivers have terminate_connection(), but not all. I 
believe this will not be a minimum feature, but others may disagree.

If you are going to add a force-detach command to nova, I think it would be 
good to make it detach even if the cinder request fails. Currently if you try 
to detach a volume (or terminate an instance with an attached volume), if 
cinder is down or the volume node where the volume resides is down, nova 
refuses to continue, which is pretty bad user experience.

Vish


thanks,
scottda
scott.dang...@hp.com<mailto:scott.dang...@hp.com>



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] HA Active-Active meeting Tuesdays at 1600 UTC in openstack-cinder

2016-07-21 Thread D'Angelo, Scott
We've decided to meet each week to discuss status, patches, and testing of 
Cinder-volume Active-Active HA:

Tuesdays at 1600 UTC in #openstack-cinder (same time as the weekly meeting, one 
day earlier)


https://etherpad.openstack.org/p/cinder-active-active-HA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]tempest test case for force detach volume

2016-09-19 Thread D'Angelo, Scott

The Cinder team would welcome submission of any missing Tempest test cases. I'm 
not certain of the history of the os-force_detach API, but you could write 
tests for it and they would be reviewed.
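
A rough sketch of what such a test might look like follows. Class and
helper names are assumptions; in particular, the volumes client may first
need a force_detach_volume() method added:

    from tempest.api.volume import base
    from tempest.common import waiters

    class VolumesForceDetachTest(base.BaseVolumeAdminTest):
        """Hypothetical test for the os-force_detach admin action."""

        def test_force_detach_attached_volume(self):
            volume = self.create_volume()
            server = self.create_server(wait_until='ACTIVE')
            self.attach_volume(server['id'], volume['id'])

            # os-force_detach is an admin-only volume action.
            self.admin_volume_client.force_detach_volume(volume['id'])
            waiters.wait_for_volume_status(
                self.volumes_client, volume['id'], 'available')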


Scott D'Angelo


From: joehuang 
Sent: Saturday, September 17, 2016 8:47:48 PM
To: OpenStack Development Mailing List (not for usage questions); shinobu.kj; 
Shinobu KINJO
Subject: Re: [openstack-dev] [cinder]tempest test case for force detach volume

Hello, Ken,

Thank you for the information. For the APIs without Tempest test cases,
is it because the test environment is hard to build, or because the API
is not mature enough? I want to know why the Tempest test cases
were not added at the same time the features were implemented.

Best Regards
Chaoyi Huang(joehuang)

From: Ken'ichi Ohmichi [ken1ohmi...@gmail.com]
Sent: 15 September 2016 2:02
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder]tempest test case for force detach volume

Hi Chaoyi,

That is a nice point.
Tempest currently has tests for some volume v2 action APIs, but they do not
include os-force_detach.
The action APIs available in Tempest are two: os-set_image_metadata and
os-unset_image_metadata, as seen in
https://github.com/openstack/tempest/blob/master/tempest/services/volume/v2/json/volumes_client.py#L27
That is fewer than I expected compared with the API reference.

Patches for the corresponding API tests are welcome if anyone is interested :-)

Thanks
Ken Ohmichi

---


2016-09-13 17:58 GMT-07:00 joehuang :
> Hello,
>
> Is there any tempest test case for the "os-force_detach" action to force detach
> a volume? I didn't find such a test case in either the repository
> https://github.com/openstack/cinder/tree/master/cinder/tests/tempest
> or https://github.com/openstack/tempest
>
> The API link is:
> http://developer.openstack.org/api-ref-blockstorage-v2.html#forcedetachVolume
>
> Best Regards
> Chaoyi Huang(joehuang)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [tricircle] Should we discuss the use cases of force volume detach in the Tricircle

2016-09-19 Thread D'Angelo, Scott
Please keep in mind that Nova keeps some volume state in the BlockDeviceMapping 
table. Without changes to Nova, a force_detach function is not complete.

I am interested in this use case, as are other Cinder developers. Please feel 
free to contact me in IRC with questions as "scottda".


Scott D'Angelo


From: joehuang 
Sent: Sunday, September 18, 2016 3:29:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle] Should we discuss the use cases of 
force volume detach in the Tricircle

This is a good question. I also sent a mail in the cinder thread asking why 
the Tempest test cases are missing for "force volume detach".
The spec for "force volume detach" can be found here: 
https://github.com/openstack/cinder-specs/blob/master/specs/liberty/implement-force-detach-for-safe-cleanup.rst

From: cr_...@126.com [cr_...@126.com]
Sent: 18 September 2016 16:53
To: openstack-dev
Subject: [openstack-dev] [tricircle] Should we discuss the use cases of force 
volume detach in the Tricircle

Hello,
When the "force volume detach" patch was submitted, some objections came
back.
The important point is whether this function is needed and safe.
Should we discuss some use cases for this function, such as how it is
defined and when it would be triggered?



Best regards,
Ronghui Cao, Ph.D. Candidate
College of Information Science and Engineering
Hunan University, Changsha 410082, Hunan, China
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Doc] Inclusion of microversion API support in CLI reference

2016-10-12 Thread D'Angelo, Scott
We added this patch to the cinderclient:

b76f5944130e29ee1bf3095c966a393c489c05e6


It basically only shows help for the features available at the requested API 
version. This is by design.
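
For anyone following along, a version-gated argument in the v3 shell is
declared roughly as below. The option shown is made up; the real ones live
in cinderclient/v3/shell.py:

    from cinderclient import utils

    @utils.arg('--demo-option', metavar='<demo-option>', default=None,
               # Only parsed (and shown in help) when the negotiated API
               # version is >= 3.1; hidden entirely below that.
               start_version='3.1',
               help='Example microversion-gated option.')
    def do_demo(cs, args):
        """Hypothetical command with a version-gated argument."""
        pass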


From: Sean McGinnis 
Sent: Wednesday, October 12, 2016 7:03:28 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Doc] Inclusion of microversion API support in CLI 
reference

Just getting this out there to either get educated or to start a
conversation...

While going through some of the DocImpact generated bugs for
python-cinderclient I noticed a few that added new parameters to
existing CLI commands. As Cinder has now moved to using microversions
for all API changes, these new parameters are only available at a
certain microversion level.

A specific case is here:

https://github.com/openstack/python-cinderclient/blob/master/cinderclient/v3/shell.py#L1485

We have two parameters that are marked "start_version='3.1'" that do not
show up in the generated CLI reference.

This appears to be due to (or related to) the fact that the command line
help does not output anything for these. Now before I dig into why that
is, I know there are others that are already much more knowledgeable
about this area than I am. So my question is, is this by design? Or is
something missing here that is needed to recognize these params with the
start_version value so they get printed?

My expectation as an end user would be that the help information would
be printed, with something like "(Requires API 3.1 or later)" appended
to the help text.

Anyone have any insight on this?

Thanks!

Sean (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev