[openstack-dev] Ask for help reviewing nova-specs

2014-04-29 Thread sxmatch
Hi, guys:

There is a nova-spec, "Add force detach volume to nova"
(https://review.openstack.org/#/c/84048/), that needs more review.
It already has a +1 from John as core reviewer. I hope it can be merged before May Day.

Thanks a lot for your help!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] create server from a volume snapshot, 180 reties is sufficient?

2014-04-10 Thread sxmatch


On 2014-04-10 8:58, Lingxian Kong wrote:


On 2014-04-10 0:33 GMT+08:00, Nikola Đipanov ndipa...@redhat.com wrote:


On 04/09/2014 03:54 AM, Lingxian Kong wrote:
Yes, the bp also makes sense for the nova-cinder interaction. May I
submit a blueprint about that?

 Any comments?


I was going to propose the same thing for Nova, as well as a
summit session for Atlanta. It would be good to coordinate the work.

Would you be interested in looking at it from Cinder side?

Thanks,

N.


Hi Nikola:

Sounds great! I am very interested in collaborating to contribute
to this effort.

Could you share your bp and your summit session? I am willing to get
involved in the discussion and/or the design and implementation.


Hi guys:

I have registered a bp for this issue from the Cinder side.
https://blueprints.launchpad.net/cinder/+spec/send-volume-changed-notifications-to-nova
Lingxian and I will get involved in this implementation and collaborate
with Nikola.




--
Lingxian Kong




Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-14 Thread sxmatch


On 2014-03-14 11:59, Zhangleiqiang (Trump) wrote:

From: sxmatch [mailto:sxmatch1...@gmail.com]
Sent: Friday, March 14, 2014 11:08 AM
To: Zhangleiqiang (Trump)
Cc: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete
protection


On 2014-03-11 19:24, Zhangleiqiang wrote:

From: Huang Zhiteng [mailto:winsto...@gmail.com]
Sent: Tuesday, March 11, 2014 5:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
delete protection

On Tue, Mar 11, 2014 at 5:09 PM, Zhangleiqiang
zhangleiqi...@huawei.com
wrote:

From: Huang Zhiteng [mailto:winsto...@gmail.com]
Sent: Tuesday, March 11, 2014 4:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
delete protection

On Tue, Mar 11, 2014 at 11:38 AM, Zhangleiqiang
zhangleiqi...@huawei.com wrote:

Hi all,



Besides the soft-delete state for volumes, I think there is a need
to introduce another "fake delete" state for volumes which have
snapshots.


Currently, OpenStack refuses delete requests for volumes which
have snapshots. However, we have no way to limit users to
using only a specific snapshot rather than the original volume,
because the original volume is always visible to the users.



So I think we can permit users to delete volumes which have
snapshots, marking the volume with the "fake delete" state. When all
of the volume's snapshots have been deleted, the original
volume will be removed automatically.


Can you describe the actual use case for this?  I'm not sure I follow
why an operator would want to limit the owner of the volume to only
using a specific snapshot.  It sounds like you are adding
another layer.  If that's the case, the problem should be solved at
the upper layer instead of in Cinder.

For example, say a tenant's volume quota is five, and the tenant
already has 5 volumes and 1 snapshot. If the data in the snapshot's
base volume is corrupted, the user will need to create a new volume
from the snapshot, but this operation will fail because there are
already 5 volumes, and the original volume cannot be deleted either.
Hmm, how likely is it that the snapshot is still sane when the base
volume is corrupted?

If the snapshot of the volume is COW, then the snapshot will still be
sane when the base volume is corrupted.
So, is it possible to really delete the volume but keep the snapshot
alive? If a user doesn't want to use the volume right now, he can take
a snapshot and then delete the volume.


If we really delete the volume, the COW snapshot cannot be used. But if
the data in the base volume is corrupt, we can still use the snapshot
normally or create a usable volume from the snapshot.

COW means copy-on-write: when a data block in the base volume is about
to be written, that block is first copied to the snapshot.
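To make the copy-on-write behavior concrete, here is a toy sketch of COW snapshot semantics (an illustration only, not Cinder's or any backend's actual implementation):

```python
# Toy model of a copy-on-write (COW) snapshot: a block is only copied
# into the snapshot the first time the base volume overwrites it, so
# the snapshot keeps seeing the original data even if the base volume
# is later corrupted. Illustration only, not real Cinder code.

class Volume:
    def __init__(self, blocks):
        self.blocks = list(blocks)

class CowSnapshot:
    def __init__(self, base):
        self.base = base
        self.saved = {}  # block index -> preserved original data

    def read(self, i):
        # The snapshot sees the preserved copy if one exists,
        # otherwise it falls through to the (unmodified) base block.
        return self.saved.get(i, self.base.blocks[i])

def write(volume, snapshot, i, data):
    # Copy the old block into the snapshot before overwriting it.
    if i not in snapshot.saved:
        snapshot.saved[i] = volume.blocks[i]
    volume.blocks[i] = data

vol = Volume(["a", "b", "c"])
snap = CowSnapshot(vol)
write(vol, snap, 1, "CORRUPT")   # base volume block 1 is overwritten
print(vol.blocks[1])             # -> CORRUPT
print(snap.read(1))              # -> b (snapshot still sees original data)
```

This is why, as noted above, a sane COW snapshot can survive corruption of its base volume: the overwritten blocks were preserved at write time.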

Hope it helps.

Thanks for your explanation, it's very helpful.

If he wants it again, he can create a volume from this snapshot.

Any ideas?

Even if this case is possible, I don't see the 'fake delete' proposal
as the right way to solve the problem.  IMO, it simply violates what
the quota system is designed for and complicates quota metrics
calculation (there would be an actual quota visible only to the
admin/operator and a separate end-user-facing quota).  Why not contact
the operator to bump the upper limit of the volume quota instead?

I had some misunderstanding of Cinder's snapshots.
Fake delete is common where there is a chained-snapshot or
snapshot-tree mechanism. However, in Cinder, only a volume can have a
snapshot; a snapshot cannot be snapshotted again.

I agree with your bump-the-upper-limit approach.

Thanks for your explanation.





Any thoughts? Any advice is welcome.







--

zhangleiqiang



Best Regards



From: John Griffith [mailto:john.griff...@solidfire.com]
Sent: Thursday, March 06, 2014 8:38 PM


To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
delete protection







On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt
j...@johngarbutt.com

wrote:

On 6 March 2014 08:50, zhangyu (AI) zhangy...@huawei.com wrote:

It seems to be an interesting idea. In fact, a China-based public
IaaS, QingCloud, has provided a similar feature for their virtual
servers. Within 2 hours after a virtual server is deleted, the
server owner can decide whether or not to cancel this deletion
and recycle that deleted virtual server.

People make mistakes, and such a feature helps in urgent cases.
Any ideas here?

Nova has soft_delete and restore for servers. That sounds similar?
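For context, Nova's server soft delete is controlled by the `reclaim_instance_interval` option; the fragment below is a sketch, so check the configuration reference for your release for the exact semantics:

```
# nova.conf (sketch): a value > 0 turns an instance DELETE into a soft
# delete; a periodic task reclaims (really deletes) soft-deleted
# instances once this many seconds have passed.
[DEFAULT]
reclaim_instance_interval = 7200
```

While the instance is soft-deleted, its owner can bring it back with `nova restore <server>`.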

John



-Original Message-
From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
Sent: Thursday, March 06, 2014 2:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Nova][Cinder] Feature about volume
delete protection

Hi all,

Currently, OpenStack provides the delete volume function to the user

Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-13 Thread sxmatch


On 2014-03-11 19:24, Zhangleiqiang wrote:

From: Huang Zhiteng [mailto:winsto...@gmail.com]
Sent: Tuesday, March 11, 2014 5:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete
protection

On Tue, Mar 11, 2014 at 5:09 PM, Zhangleiqiang zhangleiqi...@huawei.com
wrote:

From: Huang Zhiteng [mailto:winsto...@gmail.com]
Sent: Tuesday, March 11, 2014 4:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
delete protection

On Tue, Mar 11, 2014 at 11:38 AM, Zhangleiqiang
zhangleiqi...@huawei.com wrote:

Hi all,



Besides the soft-delete state for volumes, I think there is a need
to introduce another "fake delete" state for volumes which have
snapshots.



Currently, OpenStack refuses delete requests for volumes which have
snapshots. However, we have no way to limit users to using only
a specific snapshot rather than the original volume, because
the original volume is always visible to the users.



So I think we can permit users to delete volumes which have
snapshots, marking the volume with the "fake delete" state. When all
of the volume's snapshots have been deleted, the original
volume will be removed automatically.


Can you describe the actual use case for this?  I'm not sure I follow
why an operator would want to limit the owner of the volume to only
using a specific snapshot.  It sounds like you are adding another
layer.  If that's the case, the problem should be solved at the upper
layer instead of in Cinder.

For example, say a tenant's volume quota is five, and the tenant
already has 5 volumes and 1 snapshot. If the data in the snapshot's
base volume is corrupted, the user will need to create a new volume
from the snapshot, but this operation will fail because there are
already 5 volumes, and the original volume cannot be deleted either.
Hmm, how likely is it that the snapshot is still sane when the base
volume is corrupted?

If the snapshot of the volume is COW, then the snapshot will still be
sane when the base volume is corrupted.
So, is it possible to really delete the volume but keep the snapshot
alive? If a user doesn't want to use the volume right now, he can take
a snapshot and then delete the volume.


If he wants it again, he can create a volume from this snapshot.

Any ideas?



Even if this case is possible, I don't see the 'fake delete' proposal
as the right way to solve the problem.  IMO, it simply violates what
the quota system is designed for and complicates quota metrics
calculation (there would be an actual quota visible only to the
admin/operator and a separate end-user-facing quota).  Why not contact
the operator to bump the upper limit of the volume quota instead?

I had some misunderstanding of Cinder's snapshots.
Fake delete is common where there is a chained-snapshot or
snapshot-tree mechanism. However, in Cinder, only a volume can have a
snapshot; a snapshot cannot be snapshotted again.

I agree with your bump-the-upper-limit approach.

Thanks for your explanation.






Any thoughts? Any advice is welcome.







--

zhangleiqiang



Best Regards



From: John Griffith [mailto:john.griff...@solidfire.com]
Sent: Thursday, March 06, 2014 8:38 PM


To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
delete protection







On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt j...@johngarbutt.com

wrote:

On 6 March 2014 08:50, zhangyu (AI) zhangy...@huawei.com wrote:

It seems to be an interesting idea. In fact, a China-based public
IaaS, QingCloud, has provided a similar feature for their virtual
servers. Within 2 hours after a virtual server is deleted, the
server owner can decide whether or not to cancel this deletion and
recycle that deleted virtual server.

People make mistakes, and such a feature helps in urgent cases.
Any ideas here?

Nova has soft_delete and restore for servers. That sounds similar?

John



-Original Message-
From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
Sent: Thursday, March 06, 2014 2:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Nova][Cinder] Feature about volume
delete protection

Hi all,

Currently, OpenStack provides the delete volume function to the user.
But it seems there is no protection against a mistaken delete operation.

As we know, the data in a volume may be very important and valuable,
so it's better to provide the user with a way to avoid deleting a
volume by mistake.

For example:
We can provide a "safe delete" for the volume.
The user can specify how long the volume's deletion will be delayed
(before it is actually deleted) when he deletes the volume.
Before the volume is actually deleted, the user can cancel the delete
operation and get the volume back.
After the specified time, the volume will actually be deleted by
the system.
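The proposed safe-delete flow could be sketched roughly like this (hypothetical names and structure, not real Cinder code):

```python
import time

# Toy sketch of the proposed "safe delete": deleting a volume only
# marks it with a deadline; the user can cancel within the grace
# period, and a periodic reaper task actually deletes it afterwards.
# Hypothetical illustration, not real Cinder code.

class SafeDeleteRegistry:
    def __init__(self):
        self.pending = {}  # volume_id -> deadline (epoch seconds)

    def delete(self, volume_id, delay_seconds):
        # Mark the volume for deferred deletion; it is hidden from the
        # user but its data is not yet gone.
        self.pending[volume_id] = time.time() + delay_seconds

    def cancel(self, volume_id):
        # The user changed his mind: un-mark the volume (restore it).
        return self.pending.pop(volume_id, None) is not None

    def reap(self, now=None):
        # Periodic task: actually delete volumes whose delay expired.
        now = time.time() if now is None else now
        expired = [v for v, d in self.pending.items() if d <= now]
        for v in expired:
            del self.pending[v]
        return expired

reg = SafeDeleteRegistry()
reg.delete("vol-1", delay_seconds=3600)
print(reg.cancel("vol-1"))   # -> True (volume recovered in time)
reg.delete("vol-2", delay_seconds=0)
print(reg.reap())            # -> ['vol-2'] (grace period over, deleted)
```

The main design questions would be where the delay is stored (a new volume state plus a timestamp seems natural) and which periodic task performs the final reap.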

Any thoughts? Any advice is welcome.

Best regards to you.


--
zhangleiqiang

Best