Re: [openstack-dev] [Cinder] Qcow2 support for cinder-backup

2014-05-28 Thread Zhangleiqiang (Trump)
> I think the problem being referred to in this thread is that the backup code
> assumes the *source* is a raw volume. The destination (i.e. swift) should
> absolutely remain universal across all volume back-ends - a JSON list with
> pointers. The JSON file is versioned, so there is scope to add more to it
> (like we did with volume metadata), but I don't want to see QCOW or similar
> going into swift.

I agree with Duncan. I will finish the spec for it within the next few days.

--
zhangleiqiang (Trump)

Best Regards

> -Original Message-
> From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
> Sent: Wednesday, May 28, 2014 9:41 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Cinder] Qcow2 support for cinder-backup
> 
> On 18 May 2014 12:32, Murali Balcha  wrote:
> > Hi,
> > I did a design session on Friday though my proposal was to capture the
> > delta as qcow2. Here is the link to ether pad notes.
> >
> > https://etherpad.openstack.org/p/juno-cinder-changed-block-list
> >
> >
> > Do you see synergies between what you are proposing and my proposal?
> > Shouldn't we standardize on one format for all backups? I believe
> > Cinder backup API currently uses JSON based list with pointers to all
> > swift objects that make up the backup data of a volume.
> 
> I think the problem being referred to in this thread is that the backup code
> assumes the *source* is a raw volume. The destination (i.e. swift) should
> absolutely remain universal across all volume back-ends - a JSON list with
> pointers. The JSON file is versioned, so there is scope to add more to it
> (like we did with volume metadata), but I don't want to see QCOW or similar
> going into swift.
> 


[openstack-dev] [Cinder] Question about storage backend capacity expansion

2014-05-14 Thread Zhangleiqiang (Trump)
Hi, all:
I have a requirement in my OpenStack environment, which initially uses a 
single LVMISCSI backend. Over time the storage has become insufficient, so I 
want to add an NFS backend to the existing Cinder deployment.

There is only a single cinder-volume service in the environment, so I need to 
configure Cinder to use "multi-backend", which means the initial LVMISCSI 
storage and the newly added NFS storage are both used as backends. However, 
the existing volumes on the initial LVMISCSI backend will not be handled 
normally after enabling multi-backend, because the "host" of the existing 
volumes will be considered down.

I know that the "migrate" and "retype" APIs aim to handle "backend capacity 
expansion"; however, neither of them can be used in this situation.

I think the use case above is common in production environments. Is there an 
existing method to achieve it? Currently, I manually update the "host" value 
of the existing volumes in the database, and the existing volumes can then be 
handled normally.
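
For reference, a minimal sketch of the change being described, assuming a 
cinder-volume host named "myhost" and hypothetical backend section names 
"lvm-1" and "nfs-1":

    # cinder.conf: enable multi-backend with the old LVM backend and the new NFS one
    [DEFAULT]
    enabled_backends = lvm-1,nfs-1

    [lvm-1]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name = LVM_iSCSI

    [nfs-1]
    volume_driver = cinder.volume.drivers.nfs.NfsDriver
    volume_backend_name = NFS

After this change the service reports itself as "myhost@lvm-1" rather than 
"myhost", so pre-existing volumes point at a host that no longer reports in. 
The manual database workaround mentioned above is then roughly:

    UPDATE volumes SET host = 'myhost@lvm-1' WHERE host = 'myhost';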

Thanks.

--
zhangleiqiang (Trump)

Best Regards





[openstack-dev] [Cinder] Qcow2 support for cinder-backup

2014-05-12 Thread Zhangleiqiang (Trump)
Hi, all:

I plan to add support for creating qcow2-format files in the NFS driver 
([1]). From Eric Harney's comment, I know that the cinder-backup service 
currently assumes the volume is raw-formatted, and enabling qcow2 creation in 
the NFS driver would break backups of NFS volumes.

After reading the backup service code, I find we can first attach the qcow2 
volume as an NBD device and then pass the NBD device as the "source 
volume_file" to the backup service. A similar method of mounting qcow2 images 
as NBD devices is already used in Nova. I think we can add it to the NFS 
driver for backup (see the sketch below), and it can be used for GlusterFS too.
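
A minimal sketch of that flow, assuming qemu-nbd is installed and the nbd 
kernel module is loaded; the paths and the backup call are placeholders:

    import subprocess

    def attach_qcow2_as_nbd(qcow2_path, nbd_device='/dev/nbd0'):
        # Expose the qcow2 file as a raw block device via qemu-nbd.
        subprocess.check_call(['qemu-nbd', '--connect=%s' % nbd_device,
                               qcow2_path])
        return nbd_device

    def detach_nbd(nbd_device):
        subprocess.check_call(['qemu-nbd', '--disconnect', nbd_device])

    # The backup service could then treat the NBD device as a raw source
    # volume (hypothetical call):
    #
    #     device = attach_qcow2_as_nbd('/var/lib/cinder/mnt/share/volume-1234')
    #     try:
    #         with open(device, 'rb') as volume_file:
    #             backup_service.backup(backup, volume_file)
    #     finally:
    #         detach_nbd(device)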

Any advice? Is there something I have not considered?

[1] https://review.openstack.org/#/c/92011/

--
zhangleiqiang (Trump)

Best Regards





Re: [openstack-dev] [Cinder] Question about synchronized decoration usage in cinder-volume

2014-05-07 Thread Zhangleiqiang (Trump)
Thanks for your detailed explanation.

> > 2. Specific to cinder.volume.manager.VolumeManager:attach_volume, all
> > operations in "do_attach" method are database related. As said in [1],
> > operations to the database will block the main thread of a service, so another
> > question I want to know is why this method is needed to be synchronized?
> 
> Currently db operations block the main thread of the service, but hopefully
> this will change in the future.

There may be another reason here, which DuncanT mentioned in IRC a few days 
ago: cinder-backup also calls some methods (at least "attach_volume") of the 
manager, and since cinder-backup is a standalone process, "external=True" 
should be used here.

--
zhangleiqiang (Trump)

Best Regards


> -Original Message-
> From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
> Sent: Wednesday, May 07, 2014 12:35 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Cinder] Question about synchronized decoration
> usage in cinder-volume
> 
> 
> On Apr 26, 2014, at 2:56 AM, Zhangleiqiang (Trump)
>  wrote:
> 
> > Hi, all:
> >
> > I find almost all of the @utils.synchronized decoration usage in
> cinder-volume (cinder.volume.manager / cinder.volume.drivers.*) with an
> "external=True" param. Such as
> cinder.volume.manager.VolumeManager:attach_volume:
> >
> >     def attach_volume(self, context, volume_id, instance_uuid, host_name,
> >                       mountpoint, mode):
> >         """Updates db to show volume is attached."""
> >         @utils.synchronized(volume_id, external=True)
> >         def do_attach():
> >
> > However, in docstring of common.lockutils.synchronized, I find param
> "external" is used for multi-workers scenario:
> >
> >     :param external: The external keyword argument denotes whether this
> >         lock should work across multiple processes. This means that if two
> >         different workers both run a method decorated with
> >         @synchronized('mylock', external=True), only one of them will
> >         execute at a time.
> >
> > I have two questions about it.
> > 1. As far as I know, cinder-api has supported multi-worker mode and
> cinder-volume doesn't support it, does it? So I wonder why the "external=True"
> param is used here?
> 
> Before the multibackend support in cinder-volume it was common to run more
> than one cinder-volume for different backends on the same host. This would
> require external=True.
> > 2. Specific to cinder.volume.manager.VolumeManager:attach_volume, all
> > operations in "do_attach" method are database related. As said in [1],
> > operations to the database will block the main thread of a service, so another
> > question I want to know is why this method is needed to be synchronized?
> 
> Currently db operations block the main thread of the service, but hopefully
> this will change in the future.
> 
> Vish
> 
> >
> > Thanks.
> >
> > [1]
> > http://docs.openstack.org/developer/cinder/devref/threading.html#mysql-access-and-eventlet
> > --
> > zhangleiqiang (Trump)
> >
> > Best Regards
> >
> >
> >


Re: [openstack-dev] [Cinder] Confusion about the respective use cases for volume's admin_metadata, metadata and glance_image_metadata

2014-05-07 Thread Zhangleiqiang (Trump)
Thanks for the summary and detailed explanation. 

> 1. Volume metadata - this is for the tenant's own use. Cinder and nova don't
> assign meaning to it, other than treating it as stuff the tenant can set. It 
> is
> entirely unrelated to glance_metadata

Does it mean that "volume_metadata" is something like tagging for volumes? 
Users can use it for filtering or grouping.
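
For illustration, a hedged sketch with python-cinderclient (v2 API; the 
credentials, IDs and metadata-filter support below are assumptions):

    from cinderclient.v2 import client

    # All credentials and IDs are placeholders.
    cinder = client.Client('user', 'password', 'project',
                           'http://keystone:5000/v2.0')
    volume = cinder.volumes.get('11111111-2222-3333-4444-555555555555')

    # Tenant-visible metadata used as free-form tags:
    cinder.volumes.set_metadata(volume, {'tier': 'gold', 'group': 'web'})

    # ...which can then be used to filter listings:
    gold = cinder.volumes.list(search_opts={'metadata': {'tier': 'gold'}})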

> 2. admin_metadata - this is an internal
> implementation detail for cinder to avoid every extension having to alter the
> core volume db model.

I found the original commit and the decision to introduce admin_metadata 
based on your info above. Hopefully it is helpful for others:

http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-07-17-16.00.log.txt
https://review.openstack.org/#/c/38322



--
zhangleiqiang (Trump)

Best Regards


> -Original Message-
> From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
> Sent: Wednesday, May 07, 2014 9:57 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Cinder] Confusion about the respective use cases
> for volume's admin_metadata, metadata and glance_image_metadata
> 
> On 7 May 2014 09:36, Trump.Zhang  wrote:
> > @Tripp, Thanks for your reply and info.
> >
> > I am also thinking whether it is proper to add support for updating the
> > volume's glance_image_metadata to reflect the "newest status" of the volume.
> >
> > However, there may be alternative ways to achieve it:
> > 1. Using the volume's metatadata
> > 2. Using the volume's admin_metadata
> >
> > So I am wondering which is the most proper method.
> 
> 
> We're suffering from a total overload of the term 'metadata' here, and there
> are 3 totally separate things that are somehow becoming mangled:
> 
> 1. Volume metadata - this is for the tenant's own use. Cinder and nova don't
> assign meaning to it, other than treating it as stuff the tenant can set. It
> is entirely unrelated to glance_metadata.
> 2. admin_metadata - this is an internal implementation detail for cinder to
> avoid every extension having to alter the core volume db model. It is not
> the same thing as glance metadata or volume_metadata.
> 
> An interface to modify volume_glance_metadata sounds reasonable, however
> it is *unrelated* to the other two types of metadata. They are different
> things, not replacements or anything like that.
> 
> Glance protected properties need to be tied into the modification API somehow,
> or else it becomes a trivial way of bypassing protected properties. Hopefully
> a glance expert can pop up and suggest a way of achieving this integration.
> 


[openstack-dev] [Cinder] Confusion about the respective use cases for volume's admin_metadata, metadata and glance_image_metadata

2014-05-04 Thread Zhangleiqiang (Trump)
Hi, stackers:

I have some confusion about the respective use cases for volume's 
admin_metadata, metadata and glance_image_metadata. 

I know glance_image_metadata comes from the image the volume was created from, 
and it is immutable. Glance_image_metadata is used in many cases, such as 
billing, RAM requirements, etc. It also includes properties which can affect 
the "use-pattern" of a volume: for example, a volume with 
"hw_scsi_model=virtio-scsi" is assumed to have the corresponding virtio-scsi 
driver installed, and will be exposed as a device on a "virtio-scsi" 
controller, which gives higher performance when booting from it with the SCSI 
bus type.
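
For context, a hedged sketch of how such a property is set on an image with 
python-glanceclient (v1 API; the endpoint, token and image ID are placeholders):

    from glanceclient.v1 import client as glance_client

    glance = glance_client.Client('http://glance-host:9292', token='auth-token')
    image = glance.images.get('11111111-2222-3333-4444-555555555555')

    # Volumes created from this image inherit these properties through
    # glance_image_metadata:
    glance.images.update(image, properties={'hw_scsi_model': 'virtio-scsi',
                                            'hw_disk_bus': 'scsi'})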

However, a volume's blocks are constantly changing, which may result in 
situations such as the following:

1. For a volume not created from an image, or created from an image without 
the "hw_scsi_model" property, which later has the virtio-scsi driver manually 
installed, there will be no way to make the volume use a virtio-scsi 
controller when booting from it.

2. If a volume was created from an image with the "hw_scsi_model" property, 
and the "virtio-scsi" driver in the instance is later uninstalled, there will 
be no way to stop the volume from being used with a "virtio-scsi" controller 
when booting from it.

For the first situation, is it suitable to set corresponding metadata on the 
volume? Should we use metadata or admin_metadata? I notice that volumes have 
"attach_mode" and "readonly" admin_metadata and empty metadata after 
creation, and I can't find the respective use cases for admin_metadata and 
metadata.

For the second situation, what is the better way to handle it?

Any advice?


--
zhangleiqiang (Trump)

Best Regards





[openstack-dev] [Cinder] About store faults info for volumes

2014-04-29 Thread Zhangleiqiang (Trump)
Hi stackers:

I found that when an instance's status becomes "error", I can sometimes see 
the detailed fault info when I "show" the instance's details. This makes it 
very convenient to find the failure reason. Indeed, there is a 
"nova.instance_faults" table which stores the fault info.

Maybe it would be helpful for users if Cinder introduced a similar mechanism 
(a hedged sketch follows). Any advice?
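
For illustration, a sketch of what a Cinder analogue of nova.instance_faults 
might look like; the table and column names here are hypothetical:

    from sqlalchemy import Column, Integer, String, Text
    from sqlalchemy.ext.declarative import declarative_base

    BASE = declarative_base()

    class VolumeFault(BASE):
        """Stores failure details for a volume, surfaced on 'show'."""
        __tablename__ = 'volume_faults'

        id = Column(Integer, primary_key=True)
        volume_id = Column(String(36), nullable=False)
        code = Column(Integer)           # e.g. HTTP-style error code
        message = Column(String(255))    # short failure reason
        detail = Column(Text)            # traceback or full detail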


--
zhangleiqiang (Trump)

Best Regards





Re: [openstack-dev] [Nova] Add Qcow2 volume encryption support

2014-04-29 Thread Zhangleiqiang (Trump)
@Daniel:

Thanks for your explanation, it helps me a lot. 


--
zhangleiqiang (Trump)

Best Regards


> -Original Message-
> From: Daniel P. Berrange [mailto:berra...@redhat.com]
> Sent: Tuesday, April 29, 2014 5:33 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova] Add Qcow2 volume encryption support
> 
> On Tue, Apr 29, 2014 at 09:17:05AM +0000, Zhangleiqiang (Trump) wrote:
> > Hi, all:
> >
> > I find that Nova already supports volume encryption for LVM volumes ([1]).
> > qcow2 also supports encryption now, and there is libvirt support too ([2]).
> > After reading the implementation, qcow2 support could be added to the
> > current framework.
> > Do you think it is meaningful to introduce support for qcow2 volume
> > encryption? The use case can be found in [1].
> 
> Support for qcow2 encryption has been proposed before and explicitly rejected
> because qcow2's encryption scheme is considered fatally flawed by design. See
> the warnings here
> 
>   http://qemu.weilnetz.de/qemu-doc.html#disk_005fimages_005fformats
> 
> In the short term simply avoid all use of qcow2 where encryption is required and
> instead use LVM with dm-crypt which is known secure & well reviewed by
> cryptographers.
> 
> In the medium-long term QCow2's built-in encryption scheme has to be
> completely thrown away, and replaced by a new scheme that uses the LUKS file
> format specification internally.
> 
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o-  http://virt-manager.org :|
> |: http://autobuild.org  -o-  http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org  -o-  http://live.gnome.org/gtk-vnc :|
> 


[openstack-dev] [Nova] Add Qcow2 volume encryption support

2014-04-29 Thread Zhangleiqiang (Trump)
Hi, all:

I find that Nova already supports volume encryption for LVM volumes ([1]). 
qcow2 also supports encryption now, and there is libvirt support too ([2]). 
After reading the implementation, qcow2 support could be added to the current 
framework.
Do you think it is meaningful to introduce support for qcow2 volume 
encryption? The use case can be found in [1].

[1] https://wiki.openstack.org/wiki/VolumeEncryption
[2] http://libvirt.org/formatstorageencryption.html


--
zhangleiqiang (Trump)

Best Regards





[openstack-dev] [Cinder] Question about the magic "100M" when creating zero-size volume

2014-04-29 Thread Zhangleiqiang (Trump)
Hi, all:

I find that in some of the Cinder backend volume drivers, there is code like 
the following in create_volume:

    # cinder.volume.drivers.lvm
    def _sizestr(self, size_in_g):
        if int(size_in_g) == 0:
            return '100m'

Similar code also exists in ibm.gpfs, san.hp.hp_lefthand_cliq_proxy, 
san.solaris and huawei.ssh_common. I wonder why "100M" is used here; I cannot 
find anything useful in the git log.

Thanks.


--
zhangleiqiang (Trump)

Best Regards





Re: [openstack-dev] [Cinder] cinder not support query volume/snapshot with regular expression

2014-04-28 Thread Zhangleiqiang (Trump)
Currently, the Nova API achieves this feature based on the database's regex 
support. Do you have advice on an alternative way to achieve it?


--
zhangleiqiang (Trump)

Best Regards

From: laserjetyang [mailto:laserjety...@gmail.com]
Sent: Tuesday, April 29, 2014 1:49 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] cinder not support query volume/snapshot 
with regular expression

It looks to me like the Nova API would be a dangerous source of DoS attacks 
due to the regexp?

On Mon, Apr 28, 2014 at 7:04 PM, Duncan Thomas <duncan.tho...@gmail.com> wrote:
Regex matching in APIs can be a dangerous source of DoS attacks - see
http://en.wikipedia.org/wiki/ReDoS. Unless this is mitigated sensibly,
I will continue to resist any cinder patch that adds them.

Glob matches might be safer?
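
A tiny illustration of the ReDoS risk Duncan mentions, assuming a 
backtracking regex engine such as Python's re (a database REGEXP 
implementation may share the same class of problem):

    import re
    import time

    evil_pattern = r'^(a+)+$'        # nested quantifiers backtrack badly
    payload = 'a' * 28 + 'b'         # almost matches, forcing backtracking

    start = time.time()
    re.match(evil_pattern, payload)  # returns None, but only after seconds
    print('took %.1fs' % (time.time() - start))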

On 26 April 2014 05:02, Zhangleiqiang (Trump) <zhangleiqi...@huawei.com> wrote:
> Hi, all:
>
> I see Nova allows searching instances by the name, ip and ip6 fields,
> which can be a normal string or a regular expression:
>
> [stack@leiqzhang-stack cinder]$ nova help list
>
> List active servers.
>
> Optional arguments:
>   --ip <ip-regexp>               Search with regular expression match by
>                                  IP address (Admin only).
>   --ip6 <ip6-regexp>             Search with regular expression match by
>                                  IPv6 address (Admin only).
>   --name <name-regexp>           Search with regular expression match by name.
>   --instance-name <name-regexp>  Search with regular expression match by
>                                  server name (Admin only).
>
> I think this is also needed for Cinder when querying volumes, snapshots and
> backups by name. Any advice?
>
> --
> zhangleiqiang (Trump)
>
> Best Regards
>
>


--
Duncan Thomas



[openstack-dev] [Cinder] Question about synchronized decoration usage in cinder-volume

2014-04-26 Thread Zhangleiqiang (Trump)
Hi, all:

I find that almost all @utils.synchronized decorator usage in cinder-volume 
(cinder.volume.manager / cinder.volume.drivers.*) passes an "external=True" 
param, such as cinder.volume.manager.VolumeManager:attach_volume:

    def attach_volume(self, context, volume_id, instance_uuid, host_name,
                      mountpoint, mode):
        """Updates db to show volume is attached."""
        @utils.synchronized(volume_id, external=True)
        def do_attach():

However, in the docstring of common.lockutils.synchronized, I find that the 
"external" param is intended for multi-worker scenarios:

    :param external: The external keyword argument denotes whether this lock
        should work across multiple processes. This means that if two different
        workers both run a method decorated with @synchronized('mylock',
        external=True), only one of them will execute at a time.

I have two questions about it.
1. As far as I know, cinder-api supports multi-worker mode but cinder-volume 
does not, right? So I wonder why the "external=True" param is used here.
2. Specific to cinder.volume.manager.VolumeManager:attach_volume, all 
operations in the "do_attach" method are database related. As said in [1], 
operations on the database will block the main thread of a service, so another 
question is why this method needs to be synchronized at all?

Thanks.

[1] 
http://docs.openstack.org/developer/cinder/devref/threading.html#mysql-access-and-eventlet
--
zhangleiqiang (Trump)

Best Regards





[openstack-dev] [Cinder] cinder not support query volume/snapshot with regular expression

2014-04-25 Thread Zhangleiqiang (Trump)
Hi, all:

I see Nova allows searching instances by the name, ip and ip6 fields, which 
can be a normal string or a regular expression:

[stack@leiqzhang-stack cinder]$ nova help list

List active servers.

Optional arguments:
  --ip <ip-regexp>               Search with regular expression match by
                                 IP address (Admin only).
  --ip6 <ip6-regexp>             Search with regular expression match by
                                 IPv6 address (Admin only).
  --name <name-regexp>           Search with regular expression match by name.
  --instance-name <name-regexp>  Search with regular expression match by
                                 server name (Admin only).

I think this is also needed for Cinder when querying volumes, snapshots and 
backups by name. Any advice?

--
zhangleiqiang (Trump)

Best Regards




[openstack-dev] [nova] Question about modifying instance attribute(such as cpu-QoS, disk-QoS ) without shutdown the instance

2014-04-08 Thread Zhangleiqiang (Trump)
Hi, Stackers, 

For Amazon, the instance must be stopped before calling the 
ModifyInstanceAttribute API.

In fact, the hypervisor can adjust these attributes online, but Amazon and 
OpenStack do not support doing so.

So I want to know your advice about introducing the capability to adjust 
these instance attributes online.
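
For reference, a hedged sketch of the underlying hypervisor capability via 
libvirt-python (the domain and device names are placeholders):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')

    # Adjust the vCPU count of a running domain (must not exceed the
    # defined maximum):
    dom.setVcpusFlags(2, libvirt.VIR_DOMAIN_AFFECT_LIVE)

    # Adjust disk QoS (I/O throttling) on device 'vda' while it runs:
    dom.setBlockIoTune('vda',
                       {'total_iops_sec': 500, 'total_bytes_sec': 0},
                       libvirt.VIR_DOMAIN_AFFECT_LIVE)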


Thanks


--
zhangleiqiang (Trump)

Best Regards




Re: [openstack-dev] [cinder] the ability about list the available volume back-ends and their capabilities

2014-04-04 Thread Zhangleiqiang (Trump)
Hi, Mike:

Thanks for your time and your advice. 

I will contact Avishay in #openstack-cinder tonight.


--
zhangleiqiang (Trump)

Best Regards


> -Original Message-
> From: Mike Perez [mailto:thin...@gmail.com]
> Sent: Friday, April 04, 2014 1:51 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [cinder] the ability about list the available 
> volume
> back-ends and their capabilities
> 
> On 06:11 Thu 03 Apr , Zhangleiqiang (Trump) wrote:
> > Hi stackers:
> >
> > I think the ability to list the available volume back-ends, along with
> > their capabilities, total capacity and available capacity is useful for the
> > admin. For example, this can help the admin select a destination for volume
> > migration.
> > But I can't find the cinder api about this ability.
> >
> > I find a BP about this ability:
> > https://blueprints.launchpad.net/cinder/+spec/list-backends-and-capabilities
> > But the BP is not approved. Who can tell me the reason?
> 
> Hi Zhangleiqiang,
> 
> I think it's not approved because it has not been set to a series goal by the
> drafter. I don't have permission myself to change the series goal, but I would
> recommend going into the #openstack-cinder IRC channel and ask for the BP to
> be set for the Juno release assuming there is a good approach. We'd also need
> a contributor to take on this task.
> 
> I think it would be good to use the os-hosts extension which can be found in
> cinder.api.contrib.hosts and add the additional response information there. It
> already lists total volume/snapshot count and capacity used [1].
> 
> [1] - http://paste.openstack.org/show/74996
> 
> --
> Mike Perez
> 


[openstack-dev] [cinder] the ability about list the available volume back-ends and their capabilities

2014-04-02 Thread Zhangleiqiang (Trump)
Hi stackers:

I think the ability to list the available volume back-ends, along with their 
capabilities, total capacity and available capacity, is useful for the admin. 
For example, this can help the admin select a destination for volume migration.
But I can't find a Cinder API for this ability.

I find a BP about this ability:  
https://blueprints.launchpad.net/cinder/+spec/list-backends-and-capabilities
But the BP is not approved. Who can tell me the reason?

Thanks.


--
zhangleiqiang (Trump)

Best Regards





Re: [openstack-dev] [nova] Feature about QEMU Assisted online-extend volume

2014-03-28 Thread Zhangleiqiang (Trump)
Hi, Duncan:
Thanks for your advice. 

About the "summit session" you mentioned, what things can I do for it ? 


--
zhangleiqiang (Trump)

Best Regards

> -Original Message-
> From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
> Sent: Friday, March 28, 2014 12:43 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] Feature about QEMU Assisted
> online-extend volume
> 
> It sounds like a useful feature, and there are a growing number of touch 
> points
> for libvirt assisted cinder features. A summit session to discuss how that
> interface should work (hopefully get a few nova folks there as well, the
> interface has two ends) might be a good idea
> 
> On 27 March 2014 16:15, Trump.Zhang  wrote:
> > Online-extend volume feature aims to extend a cinder volume which is
> > in-use, and make the corresponding disk in instance extend without
> > stop the instance.
> >
> >
> > The background is that John Griffith has proposed a BP ([1]) aimed at
> > providing a cinder extension to enable extending in-use/attached volumes.
> > After discussing with Paul Marshall, the assignee of this BP, he currently
> > focuses only on the OpenVZ driver, so I want to take on the libvirt/qemu
> > work based on his current work.
> >
> > Whether a volume can be extended is determined by Cinder. However, if we
> > want the capacity of the corresponding disk in the instance to grow, Nova
> > must be involved.
> >
> > Libvirt provides the "block_resize" interface for this situation. For
> > QEMU, the internal workflow for block_resize is as follows:
> >
> > 1) Drain all I/O for this disk from the instance
> > 2) If the backend of the disk is a normal file, such as raw, qcow2, etc.,
> > QEMU will do the *extend* work
> > 3) If the backend of the disk is a block device, QEMU will first check
> > whether there is enough free space on the device, and only if so will it do
> > the *extend* work.
> >
> > So I think the "online-extend" volume will need QEMU Assisted, which
> > is simlar to BP [2].
> >
> > Do you think we should introduce this feature?
> >
> > [1]
> > https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension
> > [2]
> > https://blueprints.launchpad.net/nova/+spec/qemu-assisted-snapshots
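
For the workflow quoted above, a hedged sketch of the Nova-side call that 
drives QEMU's block_resize through libvirt (the domain, device and size are 
placeholders):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')

    # Grow the attached disk 'vdb' to 20 GiB on the running domain:
    dom.blockResize('vdb', 20 * 1024 ** 3,
                    libvirt.VIR_DOMAIN_BLOCK_RESIZE_BYTES)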
> >
> >
> 
> 
> 
> --
> Duncan Thomas
> 


Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device Mapping

2014-03-19 Thread zhangleiqiang

On second thought, it would be more meaningful to just add virtio-scsi bus 
type support to block-device-mapping.

RDM can then be used or not, depending on the bus type and device type of the 
bdm specified by the user. And the user can also use the virtio-scsi bus 
purely for performance, rather than for pass-through.

Any suggestions? 


"Zhangleiqiang (Trump)"  :

>> From: Huang Zhiteng [mailto:winsto...@gmail.com]
>> Sent: Wednesday, March 19, 2014 12:14 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device
>> Mapping
>> 
>> On Tue, Mar 18, 2014 at 5:33 PM, Zhangleiqiang (Trump)
>>  wrote:
>>>> From: Huang Zhiteng [mailto:winsto...@gmail.com]
>>>> Sent: Tuesday, March 18, 2014 4:40 PM
>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device
>>>> Mapping
>>>> 
>>>> On Tue, Mar 18, 2014 at 11:01 AM, Zhangleiqiang (Trump)
>>>>  wrote:
>>>>>> From: Huang Zhiteng [mailto:winsto...@gmail.com]
>>>>>> Sent: Tuesday, March 18, 2014 10:32 AM
>>>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>>>> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw
>>>>>> Device Mapping
>>>>>> 
>>>>>> On Tue, Mar 18, 2014 at 9:40 AM, Zhangleiqiang (Trump)
>>>>>>  wrote:
>>>>>>> Hi, stackers:
>>>>>>> 
>>>>>>>With RDM, the storage logical unit number (LUN) can be
>>>>>>> directly
>>>>>> connected to a instance from the storage area network (SAN).
>>>>>>> 
>>>>>>>For most data center applications, including Databases,
>>>>>>> CRM and
>>>>>> ERP applications, RDM can be used for configurations involving
>>>>>> clustering between instances, between physical hosts and instances
>>>>>> or where SAN-aware applications are running inside a instance.
>>>>>> If 'clustering' here refers to things like cluster file system,
>>>>>> which requires LUNs to be connected to multiple instances at the same
>> time.
>>>>>> And since you mentioned Cinder, I suppose the LUNs (volumes) are
>>>>>> managed by Cinder, then you have an extra dependency for
>>>>>> multi-attach
>>>>>> feature:
>>>> https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume.
>>>>> 
>>>>> Yes.  "Clustering" include Oracle RAC, MSCS, etc. If they want to
>>>>> work in
>>>> instance-based cloud environment, RDM and multi-attached-volumes are
>>>> both needed.
>>>>> 
>>>>> But RDM is not only used for clustering, and haven't dependency for
>>>> multi-attach-volume.
>>>> 
>>>> Set clustering use case and performance improvement aside, what other
>>>> benefits/use cases can RDM bring/be useful for?
>>> 
>>> Thanks for your reply.
>>> 
>>> The advantages of Raw device mapping are all introduced by its capability of
>> "pass" scsi command to the device, and the most common use cases are
>> clustering and performance improvement mentioned above.
>> As mentioned in earlier email, I doubt the performance improvement comes
>> from 'virtio-scsi' interface instead of RDM.  We can actually test them to
>> verify.  Here's what I would do: create one LUN(volume) on the SAN, attach
>> the volume to instance using current attach code path but change the virtual
>> bus to 'virtio-scsi' and then measure the IO performance using standard IO
>> benchmark; next, attach the volume to instance using 'lun' device for 'disk' 
>> and
>> 'virtio-scsi' for bus, and do the measurement again.  We shall be able to see
>> the performance difference if there is any.  Since I don't have a SAN to play
>> with, could you please do the test and share the results?
> 
> The performance improvement does come from the "virtio-scsi" controller, and is
> not caused by using the "lun" device instead of the "disk" device.
> I don't have a usable SAN at present. But from libvirt's doc ([1]), the
> "lun" device behaves identically to the "disk" device except that generic SCSI
> commands from the instance are accepted and passed through to the physical
> device.

Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device Mapping

2014-03-19 Thread Zhangleiqiang (Trump)
> From: Huang Zhiteng [mailto:winsto...@gmail.com]
> Sent: Wednesday, March 19, 2014 12:14 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device
> Mapping
> 
> On Tue, Mar 18, 2014 at 5:33 PM, Zhangleiqiang (Trump)
>  wrote:
> >> From: Huang Zhiteng [mailto:winsto...@gmail.com]
> >> Sent: Tuesday, March 18, 2014 4:40 PM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device
> >> Mapping
> >>
> >> On Tue, Mar 18, 2014 at 11:01 AM, Zhangleiqiang (Trump)
> >>  wrote:
> >> >> From: Huang Zhiteng [mailto:winsto...@gmail.com]
> >> >> Sent: Tuesday, March 18, 2014 10:32 AM
> >> >> To: OpenStack Development Mailing List (not for usage questions)
> >> >> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw
> >> >> Device Mapping
> >> >>
> >> >> On Tue, Mar 18, 2014 at 9:40 AM, Zhangleiqiang (Trump)
> >> >>  wrote:
> >> >> > Hi, stackers:
> >> >> >
> >> >> > With RDM, the storage logical unit number (LUN) can be
> >> >> > directly
> >> >> connected to a instance from the storage area network (SAN).
> >> >> >
> >> >> > For most data center applications, including Databases,
> >> >> > CRM and
> >> >> ERP applications, RDM can be used for configurations involving
> >> >> clustering between instances, between physical hosts and instances
> >> >> or where SAN-aware applications are running inside a instance.
> >> >> If 'clustering' here refers to things like cluster file system,
> >> >> which requires LUNs to be connected to multiple instances at the same
> time.
> >> >> And since you mentioned Cinder, I suppose the LUNs (volumes) are
> >> >> managed by Cinder, then you have an extra dependency for
> >> >> multi-attach
> >> >> feature:
> >> https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume.
> >> >
> >> > Yes.  "Clustering" include Oracle RAC, MSCS, etc. If they want to
> >> > work in
> >> instance-based cloud environment, RDM and multi-attached-volumes are
> >> both needed.
> >> >
> >> > But RDM is not only used for clustering, and haven't dependency for
> >> multi-attach-volume.
> >>
> >> Set clustering use case and performance improvement aside, what other
> >> benefits/use cases can RDM bring/be useful for?
> >
> > Thanks for your reply.
> >
> > The advantages of Raw device mapping are all introduced by its capability of
> "pass" scsi command to the device, and the most common use cases are
> clustering and performance improvement mentioned above.
> >
> As mentioned in earlier email, I doubt the performance improvement comes
> from 'virtio-scsi' interface instead of RDM.  We can actually test them to
> verify.  Here's what I would do: create one LUN(volume) on the SAN, attach
> the volume to instance using current attach code path but change the virtual
> bus to 'virtio-scsi' and then measure the IO performance using standard IO
> benchmark; next, attach the volume to instance using 'lun' device for 'disk' 
> and
> 'virtio-scsi' for bus, and do the measurement again.  We shall be able to see
> the performance difference if there is any.  Since I don't have a SAN to play
> with, could you please do the test and share the results?

The performance improvement does come from the "virtio-scsi" controller, and is 
not caused by using the "lun" device instead of the "disk" device.
I don't have a usable SAN at present. But from libvirt's doc ([1]), the 
"lun" device behaves identically to the "disk" device except that generic SCSI 
commands from the instance are accepted and passed through to the physical 
device.

Sorry for misleading. The "RDM" I mentioned in the earlier email includes the 
"lun" device and the "virtio-scsi" controller.

Now, the performance improvement comes from the "virtio-scsi" controller; 
however, booting from a volume using a virtio-scsi interface and attaching a 
volume with a new virtio-scsi interface are both unsupported currently. I 
think adding these features is meaningful. And as mentioned in the first 
email, set the "virtio-scsi" con

Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device Mapping

2014-03-18 Thread Zhangleiqiang (Trump)
> From: Huang Zhiteng [mailto:winsto...@gmail.com]
> Sent: Tuesday, March 18, 2014 4:40 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device
> Mapping
> 
> On Tue, Mar 18, 2014 at 11:01 AM, Zhangleiqiang (Trump)
>  wrote:
> >> From: Huang Zhiteng [mailto:winsto...@gmail.com]
> >> Sent: Tuesday, March 18, 2014 10:32 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device
> >> Mapping
> >>
> >> On Tue, Mar 18, 2014 at 9:40 AM, Zhangleiqiang (Trump)
> >>  wrote:
> >> > Hi, stackers:
> >> >
> >> > With RDM, the storage logical unit number (LUN) can be
> >> > directly
> >> connected to a instance from the storage area network (SAN).
> >> >
> >> > For most data center applications, including Databases, CRM
> >> > and
> >> ERP applications, RDM can be used for configurations involving
> >> clustering between instances, between physical hosts and instances or
> >> where SAN-aware applications are running inside a instance.
> >> If 'clustering' here refers to things like cluster file system, which
> >> requires LUNs to be connected to multiple instances at the same time.
> >> And since you mentioned Cinder, I suppose the LUNs (volumes) are
> >> managed by Cinder, then you have an extra dependency for multi-attach
> >> feature:
> https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume.
> >
> > Yes.  "Clustering" include Oracle RAC, MSCS, etc. If they want to work in
> instance-based cloud environment, RDM and multi-attached-volumes are both
> needed.
> >
> > But RDM is not only used for clustering, and haven't dependency for
> multi-attach-volume.
> 
> Set clustering use case and performance improvement aside, what other
> benefits/use cases can RDM bring/be useful for?

Thanks for your reply.

The advantages of raw device mapping all come from its capability to pass 
SCSI commands through to the device, and the most common use cases are the 
clustering and performance improvements mentioned above.

Besides these two scenarios, there is another use case: running SAN-aware 
applications inside instances, such as:
1. SAN management apps
2. Apps which can offload device-related work, such as snapshots, backups, 
etc., to the SAN.


> >
> >> > RDM, which permits the use of existing SAN commands, is
> >> generally used to improve performance in I/O-intensive applications
> >> and block locking. Physical mode provides access to most hardware
> >> functions of the storage system that is mapped.
> >> It seems to me that the performance benefit mostly from virtio-scsi,
> >> which is just an virtual disk interface, thus should also benefit all
> >> virtual disk use cases not just raw device mapping.
> >> >
> >> > For libvirt driver, RDM feature can be enabled through the "lun"
> >> device connected to a "virtio-scsi" controller:
> >> >
> >> > <disk type='block' device='lun'>
> >> >     <driver name='qemu' type='raw'/>
> >> >     <source dev='/dev/mapper/360022a11ecba5db427db0023'/>
> >> >     <target dev='sda' bus='scsi'/>
> >> > </disk>
> >> >
> >> > <controller type='scsi' model='virtio-scsi'/>
> >> >
> >> > Currently,the related works in OpenStack as follows:
> >> > 1. block-device-mapping-v2 extension has already support
> >> > the
> >> "lun" device with "scsi" bus type listed above, but cannot make the
> >> disk use "virtio-scsi" controller instead of default "lsi" scsi controller.
> >> > 2. libvirt-virtio-scsi-driver BP ([1]) whose milestone
> >> > target is
> >> icehouse-3 is aim to support generate a virtio-scsi controller when
> >> using an image with "virtio-scsi" property, but it seems not to take
> >> boot-from-volume and attach-rdm-volume into account.
> >> >
> >> > I think it is meaningful if we provide the whole support
> >> > for RDM
> >> feature in OpenStack.
> >> >
> >> > Any thoughts? Welcome any advices.
> >> >
> >> >
> >> > [1]
> >> > https://blueprints.launchpad.net/nova/+spec/libvirt-virtio-scsi-driver
> >> > --
> >> > zhangleiqiang (Trump)

Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-17 Thread Zhangleiqiang (Trump)
> From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
> Sent: Tuesday, March 18, 2014 2:28 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova][cinder] non-persistent storage(after
> stopping VM, data will be rollback automatically), do you think we shoud
> introduce this feature?
> 
> 
> On Mar 17, 2014, at 4:34 AM, Yuzhou (C)  wrote:
> 
> > Hi Duncan Thomas,
> >
> > Maybe the statement about approval process is not very exact. In fact in
> my mail, I mean:
> > In the enterprise private cloud, if you exceed the quota and want to create
> > a new VM, you need to wait for an approval process.
> >
> >
> > @stackers,
> >
> > I think the following two use cases show why non-persistent disk is useful:
> >
> > 1.Non-persistent VDI:
> > When users access a non-persistent desktop, none of their settings or
> data is saved once they log out. At the end of a session,
> > the desktop reverts back to its original state and the user receives a 
> > fresh
> image the next time he logs in.
> > 1). Image manageability, Since non-persistent desktops are built from a
> master image, it's easier for administrators to patch and update the image,
> back it up quickly and deploy company-wide applications to all end users.
> > 2). Greater security, Users can't alter desktop settings or install 
> > their own
> applications, making the image more secure.
> > 3). Less storage.
> >
> > 2.As the use case mentioned several days ago by zhangleiqiang:
> >
> > "Let's take a virtual machine which hosts a web service, but it is 
> > primarily
> a read-only web site with content that rarely changes. This VM has three 
> disks.
> Disk 1 contains the Guest OS and web application (e.g.Apache). Disk 2
> contains the web pages for the web site. Disk 3 contains all the logging 
> activity.
> > In this case, disk 1 (OS & app) are dependent (default) settings and
> is backed up nightly. Disk 2 is independent non-persistent (not backed up, and
> any changes to these pages will be discarded). Disk 3 is  independent
> persistent (not backed up, but any changes are persisted to the disk).
> > If updates are needed to the web site's pages, disk 2 must be
> taken out of independent non-persistent mode temporarily to allow the
> changes to be made.
> > Now let's say that this site gets hacked, and the pages are
> doctored with something which is not very nice. A simple reboot of this host 
> will
> discard the changes made to the web pages on disk 2, but will persist 
> the
> logs on disk 3 so that a root cause analysis can be carried out."
> >
> > Hope to get more suggestions about non-persistent disk!
> 
> 
> Making the disk rollback on reboot seems like an unexpected side-effect we
> should avoid. Rolling back the system to a known state is a useful feature, 
> but
> this should be an explicit api command, not a side-effect of rebooting the
> machine, IMHO.

I think there is some misunderstanding about the non-persistent disk: the 
non-persistent disk only rolls back if the instance is shut down and started 
again; it persists the data across a soft reboot.

The non-persistent disk does have use cases. An explicit API command can 
achieve it, but I think some work needs to be done before booting the 
instance and after shutting it down, including:
1. For a Cinder volume, create a snapshot; for the libvirt ephemeral image 
backend, create a new image
2. Update the attached volume info for the instance
3. Delete the Cinder snapshot or libvirt ephemeral image, and update the 
volume/image info for the instance again

Should this work be done by users manually, or by some "upper system"? Or 
could non-persistence be set as a metadata/property of the volume/image and 
handled by Nova? A rough sketch of the manual flow follows.
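
A rough sketch of that manual flow with python-cinderclient (v2 API; it 
assumes an authenticated "cinder" client handle and a "volume" object, and 
recreates the volume from the snapshot since Cinder has no revert-to-snapshot 
call):

    # 1. Before booting the instance: snapshot the pristine volume.
    snap = cinder.volume_snapshots.create(volume.id, name='pristine')

    # ... the instance runs and writes to the volume ...

    # 2./3. After shutdown: recreate the volume from the snapshot, clean up,
    # and update the instance's attachment info accordingly.
    restored = cinder.volumes.create(volume.size, snapshot_id=snap.id,
                                     name='volume-rolled-back')
    cinder.volume_snapshots.delete(snap)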



> Vish
> 
> >
> > Thanks.
> >
> > Zhou Yu
> >
> >
> >
> >
> >> -Original Message-
> >> From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
> >> Sent: Saturday, March 15, 2014 12:56 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [nova][cinder] non-persistent
> >> storage(after stopping VM, data will be rollback automatically), do
> >> you think we shoud introduce this feature?
> >>
> >> On 7 March 2014 08:17, Yuzhou (C)  wrote:
> >>>First, generally, in public or private cloud, the end users
> >>> of VMs
> >> have no right to create new VMs directly.
> >>> If someone wa

Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device Mapping

2014-03-17 Thread Zhangleiqiang (Trump)
> From: Huang Zhiteng [mailto:winsto...@gmail.com]
> Sent: Tuesday, March 18, 2014 10:32 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device
> Mapping
> 
> On Tue, Mar 18, 2014 at 9:40 AM, Zhangleiqiang (Trump)
>  wrote:
> > Hi, stackers:
> >
> > With RDM, the storage logical unit number (LUN) can be directly
> connected to a instance from the storage area network (SAN).
> >
> > For most data center applications, including Databases, CRM and
> ERP applications, RDM can be used for configurations involving clustering
> between instances, between physical hosts and instances or where SAN-aware
> applications are running inside a instance.
> If 'clustering' here refers to things like cluster file system, which 
> requires LUNs
> to be connected to multiple instances at the same time.
> And since you mentioned Cinder, I suppose the LUNs (volumes) are managed by
> Cinder, then you have an extra dependency for multi-attach
> feature: https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume.

Yes.  "Clustering" include Oracle RAC, MSCS, etc. If they want to work in 
instance-based cloud environment, RDM and multi-attached-volumes are both 
needed.

But RDM is not only used for clustering, and haven't dependency for 
multi-attach-volume. 

> > RDM, which permits the use of existing SAN commands, is
> generally used to improve performance in I/O-intensive applications and block
> locking. Physical mode provides access to most hardware functions of the
> storage system that is mapped.
> It seems to me that the performance benefit mostly from virtio-scsi, which is
> just an virtual disk interface, thus should also benefit all virtual disk use 
> cases
> not just raw device mapping.
> >
> > For libvirt driver, RDM feature can be enabled through the "lun"
> device connected to a "virtio-scsi" controller:
> >
> > <disk type='block' device='lun'>
> >     <driver name='qemu' type='raw'/>
> >     <source dev='/dev/mapper/360022a11ecba5db427db0023'/>
> >     <target dev='sda' bus='scsi'/>
> > </disk>
> >
> > <controller type='scsi' model='virtio-scsi'/>
> >
> > Currently, the related works in OpenStack are as follows:
> > 1. block-device-mapping-v2 extension has already support the
> "lun" device with "scsi" bus type listed above, but cannot make the disk use
> "virtio-scsi" controller instead of default "lsi" scsi controller.
> > 2. libvirt-virtio-scsi-driver BP ([1]) whose milestone target is
> icehouse-3 is aim to support generate a virtio-scsi controller when using an
> image with "virtio-scsi" property, but it seems not to take boot-from-volume
> and attach-rdm-volume into account.
> >
> > I think it is meaningful if we provide the whole support for RDM
> feature in OpenStack.
> >
> > Any thoughts? Welcome any advices.
> >
> >
> > [1]
> > https://blueprints.launchpad.net/nova/+spec/libvirt-virtio-scsi-driver
> > --
> > zhangleiqiang (Trump)
> >
> > Best Regards
> >
> 
> 
> 
> --
> Regards
> Huang Zhiteng
> 


[openstack-dev] [Nova][Cinder] Feature about Raw Device Mapping

2014-03-17 Thread Zhangleiqiang (Trump)
Hi, stackers:

With RDM (raw device mapping), a storage logical unit (LUN) on the storage 
area network (SAN) can be connected directly to an instance.

For most data center applications, including database, CRM and ERP 
applications, RDM can be used for configurations involving clustering between 
instances, between physical hosts and instances, or where SAN-aware 
applications are running inside an instance.
RDM, which permits the use of existing SAN commands, is generally used to 
improve performance in I/O-intensive applications and for block locking. 
Physical mode provides access to most hardware functions of the storage 
system that is mapped.

For the libvirt driver, the RDM feature can be enabled through a "lun" device 
connected to a "virtio-scsi" controller:

<disk type='block' device='lun'>
    <driver name='qemu' type='raw'/>
    <source dev='/dev/mapper/360022a11ecba5db427db0023'/>
    <target dev='sda' bus='scsi'/>
    <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

<controller type='scsi' index='0' model='virtio-scsi'/>

Currently, the related work in OpenStack is as follows:
1. The block-device-mapping-v2 extension already supports the "lun" device 
with the "scsi" bus type listed above, but cannot make the disk use a 
"virtio-scsi" controller instead of the default "lsi" SCSI controller.
2. The libvirt-virtio-scsi-driver BP ([1]), whose milestone target is 
icehouse-3, aims to generate a virtio-scsi controller when using an image 
with the "virtio-scsi" property, but it seems not to take boot-from-volume 
and attach-rdm-volume into account.

I think it would be meaningful to provide full support for the RDM feature 
in OpenStack.

Any thoughts? Any advice is welcome.


[1] https://blueprints.launchpad.net/nova/+spec/libvirt-virtio-scsi-driver
--
zhangleiqiang (Trump)

Best Regards



Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-13 Thread Zhangleiqiang (Trump)
> From: sxmatch [mailto:sxmatch1...@gmail.com]
> Sent: Friday, March 14, 2014 11:08 AM
> To: Zhangleiqiang (Trump)
> Cc: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete
> protection
> 
> 
> 于 2014-03-11 19:24, Zhangleiqiang 写道:
> >> From: Huang Zhiteng [mailto:winsto...@gmail.com]
> >> Sent: Tuesday, March 11, 2014 5:37 PM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
> >> delete protection
> >>
> >> On Tue, Mar 11, 2014 at 5:09 PM, Zhangleiqiang
> >> 
> >> wrote:
> >>>> From: Huang Zhiteng [mailto:winsto...@gmail.com]
> >>>> Sent: Tuesday, March 11, 2014 4:29 PM
> >>>> To: OpenStack Development Mailing List (not for usage questions)
> >>>> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
> >>>> delete protection
> >>>>
> >>>> On Tue, Mar 11, 2014 at 11:38 AM, Zhangleiqiang
> >>>>  wrote:
> >>>>> Hi all,
> >>>>>
> >>>>>
> >>>>>
> >>>>> Besides the "soft-delete" state for volumes, I think there is need
> >>>>> for introducing another "fake delete" state for volumes which have
> >> snapshot.
> >>>>>
> >>>>>
> >>>>> Current Openstack refuses the delete request for volumes which
> >>>>> have snapshot. However, we will have no method to limit users to
> >>>>> only use the specific snapshot other than the original volume ,
> >>>>> because the original volume is always visible for the users.
> >>>>>
> >>>>>
> >>>>>
> >>>>> So I think we can permit users to delete volumes which have
> >>>>> snapshots, and mark the volume as "fake delete" state. When all of
> >>>>> the snapshots of the volume have already deleted, the original
> >>>>> volume will be removed automatically.
> >>>>>
> >>>> Can you describe the actual use case for this?  I not sure I follow
> >>>> why operator would like to limit the owner of the volume to only
> >>>> use specific version of snapshot.  It sounds like you are adding
> >>>> another layer.  If that's the case, the problem should be solved at
> >>>> upper layer
> >> instead of Cinder.
> >>> For example, one tenant's volume quota is five, and has 5 volumes
> >>> and 1
> >> snapshot already. If the data in base volume of the snapshot is
> >> corrupted, the user will need to create a new volume from the
> >> snapshot, but this operation will be failed because there are already
> >> 5 volumes, and the original volume cannot be deleted, too.
> >> Hmm, how likely is it the snapshot is still sane when the base volume
> >> is corrupted?
> > If the snapshot of volume is COW, then the snapshot will be still sane when
> the base volume is corrupted.
> So, if we delete volume really, just keep snapshot alive, is it possible? User
> don't want to use this volume at now, he can take a snapshot and then delete
> volume.
> 
If we really delete the volume, the COW snapshot cannot be used. But if the 
data in the base volume is corrupted, we can use the snapshot normally or 
create a usable volume from the snapshot.

"COW" means copy-on-write: when a data block in the base volume is about to 
be written, the block is first copied to the snapshot.

Hope it helps.
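
To make the behaviour concrete, a toy model of copy-on-write (pure 
illustration, not Cinder code):

    base = {0: 'A', 1: 'B'}     # block number -> data
    snapshot = {}               # holds only blocks changed after the snapshot

    def write(block, data):
        if block not in snapshot:          # first write since the snapshot
            snapshot[block] = base[block]  # copy the old block out first
        base[block] = data

    def read_snapshot(block):
        return snapshot.get(block, base[block])

    write(0, 'corrupted')
    assert read_snapshot(0) == 'A'  # the snapshot still sees the original data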

> If he want it again, can create volume from this snapshot.
> 
> Any ideas?
> >
> >> Even if this case is possible, I don't see the 'fake delete' proposal
> >> is the right way to solve the problem.  IMO, it simply violates what
> >> quota system is designed for and complicates quota metrics
> >> calculation (there would be actual quota which is only visible to
> >> admin/operator and an end-user facing quota).  Why not contact
> >> operator to bump the upper limit of the volume quota instead?
> > I had some misunderstanding on Cinder's snapshot.
> > "Fake delete" is common if there is "chained snapshot" or "snapshot tree"
> mechanism. However in cinder, only volume can make snapshot but snapshot
> cannot make snapshot again.
> >
> > I agree with your bump upper limit method.
> >
> > Thanks for your exp

Re: [openstack-dev] Disaster Recovery for OpenStack - call for stakeholder

2014-03-13 Thread Zhangleiqiang (Trump)
Regarding (1) [Single VM], the following concepts could supplement the use 
cases:

1. Protection Group: defines the set of instances to be protected.
2. Protection Policy: defines the policy for a protection group, such as sync 
period, sync priority, advanced features, etc.
3. Recovery Plan: defines the steps taken during recovery, such as the 
power-off and boot order of instances, etc.

--
zhangleiqiang (Ray)

Best Regards


> -Original Message-
> From: Bruce Montague [mailto:bruce_monta...@symantec.com]
> Sent: Thursday, March 13, 2014 2:38 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Disaster Recovery for OpenStack - call for
> stakeholder
> 
> 
> Hi, regarding the call to create a list of disaster recovery (DR) use cases
> ( http://lists.openstack.org/pipermail/openstack-dev/2014-March/028859.html
>  ), the following list sketches some speculative OpenStack DR use cases. These
> use cases do not reflect any specific product behavior and span a wide
> spectrum. This list is not a proposal, it is intended primarily to solicit 
> additional
> discussion. The first basic use case, (1), is described in a bit more detail 
> than
> the others; many of the others are elaborations on this basic theme.
> 
> 
> 
> * (1) [Single VM]
> 
> A single Windows VM with 4 volumes and VSS (Microsoft's Volume Shadowcopy
> Services) installed runs a key application and integral database. VSS can 
> quiesce
> the app, database, filesystem, and I/O on demand and can be invoked external
> to the guest.
> 
>a. The VM's volumes, including the boot volume, are replicated to a remote
> DR site (another OpenStack deployment).
> 
>b. Some form of replicated VM or VM metadata exists at the remote site.
> This VM/description includes the replicated volumes. Some systems might use
> cold migration or some form of wide-area live VM migration to establish this
> remote site VM/description.
> 
>c. When specified by an SLA or policy, VSS is invoked, putting the VM's
> volumes in an application-consistent state. This state is flushed all the way
> through to the remote volumes. As each remote volume reaches its
> application-consistent state, this is recognized in some fashion, perhaps by 
> an
> in-band signal, and a snapshot of the volume is made at the remote site.
> Volume replication is re-enabled immediately following the snapshot. A backup
> is then made of the snapshot on the remote site. At the completion of this 
> cycle,
> application-consistent volume snapshots and backups exist on the remote site.
> 
>d.  When a disaster or firedrill happens, the replication network
> connection is cut. The remote site VM pre-created or defined so as to use the
> replicated volumes is then booted, using the latest application-consistent 
> state
> of the replicated volumes. The entire VM environment (management accounts,
> networking, external firewalling, console access, etc..), similar to that of 
> the
> primary, either needs to pre-exist in some fashion on the secondary or be
> created dynamically by the DR system. The booting VM either needs to attach
> to a virtual network environment similar to at the primary site or the VM 
> needs
> to have boot code that can alter its network personality. Networking
> configuration may occur in conjunction with an update to DNS and other
> networking infrastructure. It is necessary for all required networking
> configuration to be pre-specified or done automatically. No manual admin
> activity should be required. Environment requirements may be stored in a DR
> configuration or database associated with the replication.
> 
>e. In a firedrill or test, the virtual network environment at the remote 
> site
> may be a "test bubble" isolated from the real network, with some provision for
> protected access (such as NAT). Automatic testing is necessary to verify that
> replication succeeded. These tests need to be configurable by the end-user and
> admin and integrated with DR orchestration.
> 
>f. After the VM has booted and been operational, the network connection
> between the two sites is re-established. A replication connection between the
> replicated volumes is re-established, and the replicated volumes are re-synced,
> with the roles of primary and secondary reversed. (Ongoing replication in this
> configuration may occur, driven from the new primary.)
> 
>g. A planned failback of the VM to the old primary proceeds similar to the
> failover from the old primary to the old replica, but with roles reversed and 
> the
> process minimizing offline time and data loss.
> 
> 
> 
> * (2) [Core tenant/project infrastructure VMs]
> 
> Twenty VMs power the co

Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-11 Thread Zhangleiqiang
> From: Huang Zhiteng [mailto:winsto...@gmail.com]
> Sent: Tuesday, March 11, 2014 5:37 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete
> protection
> 
> On Tue, Mar 11, 2014 at 5:09 PM, Zhangleiqiang 
> wrote:
> >> From: Huang Zhiteng [mailto:winsto...@gmail.com]
> >> Sent: Tuesday, March 11, 2014 4:29 PM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
> >> delete protection
> >>
> >> On Tue, Mar 11, 2014 at 11:38 AM, Zhangleiqiang
> >>  wrote:
> >> > Hi all,
> >> >
> >> >
> >> >
> >> > Besides the "soft-delete" state for volumes, I think there is need
> >> > for introducing another "fake delete" state for volumes which have
> snapshot.
> >> >
> >> >
> >> >
> >> > Current Openstack refuses the delete request for volumes which have
> >> > snapshot. However, we will have no method to limit users to only
> >> > use the specific snapshot other than the original volume ,  because
> >> > the original volume is always visible for the users.
> >> >
> >> >
> >> >
> >> > So I think we can permit users to delete volumes which have
> >> > snapshots, and mark the volume as "fake delete" state. When all of
> >> > the snapshots of the volume have already deleted, the original
> >> > volume will be removed automatically.
> >> >
> >> Can you describe the actual use case for this?  I'm not sure I follow
> >> why the operator would like to limit the owner of the volume to only use
> >> a specific version of a snapshot.  It sounds like you are adding another
> >> layer.  If that's the case, the problem should be solved at an upper
> layer instead of Cinder.
> >
> > For example, one tenant's volume quota is five, and the tenant already has 5
> volumes and 1 snapshot. If the data in the base volume of the snapshot is
> corrupted, the user will need to create a new volume from the snapshot, but
> this operation will fail because there are already 5 volumes, and the original
> volume cannot be deleted either.
> >
> Hmm, how likely is it the snapshot is still sane when the base volume is
> corrupted?  

If the snapshot of the volume is COW, the snapshot will still be sane when the 
base volume is corrupted.

> Even if this case is possible, I don't see the 'fake delete' proposal
> is the right way to solve the problem.  IMO, it simply violates what quota
> system is designed for and complicates quota metrics calculation (there would
> be actual quota which is only visible to admin/operator and an end-user facing
> quota).  Why not contact operator to bump the upper limit of the volume
> quota instead?

I had some misunderstanding about Cinder's snapshots. 
"Fake delete" is common where there is a "chained snapshot" or "snapshot tree" 
mechanism. However, in Cinder only a volume can produce a snapshot; a snapshot 
cannot be snapshotted again. 

I agree with your method of bumping the upper limit. 

Thanks for your explanation.


> >> >
> >> >
> >> >
> >> >
> >> > Any thoughts? Welcome any advices.
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >
> >> > --
> >> >
> >> > zhangleiqiang
> >> >
> >> >
> >> >
> >> > Best Regards
> >> >
> >> >
> >> >
> >> > From: John Griffith [mailto:john.griff...@solidfire.com]
> >> > Sent: Thursday, March 06, 2014 8:38 PM
> >> >
> >> >
> >> > To: OpenStack Development Mailing List (not for usage questions)
> >> > Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
> >> > delete protection
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >
> >> > On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt 
> >> wrote:
> >> >
> >> > On 6 March 2014 08:50, zhangyu (AI)  wrote:
> >> >> It seems to be an interesting idea. In fact, a China-based public
> >> >> IaaS, QingCloud, has provided a similar feature to their virtual
> >> >> servers. Within 2 hours after a virtual server is deleted, the
> >> >> server owner ca

Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-11 Thread Zhangleiqiang
> From: Huang Zhiteng [mailto:winsto...@gmail.com]
> Sent: Tuesday, March 11, 2014 4:29 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete
> protection
> 
> On Tue, Mar 11, 2014 at 11:38 AM, Zhangleiqiang
>  wrote:
> > Hi all,
> >
> >
> >
> > Besides the "soft-delete" state for volumes, I think there is need for
> > introducing another "fake delete" state for volumes which have snapshot.
> >
> >
> >
> > Current Openstack refuses the delete request for volumes which have
> > snapshot. However, we will have no method to limit users to only use
> > the specific snapshot other than the original volume ,  because the
> > original volume is always visible for the users.
> >
> >
> >
> > So I think we can permit users to delete volumes which have snapshots,
> > and mark the volume as "fake delete" state. When all of the snapshots
> > of the volume have already deleted, the original volume will be
> > removed automatically.
> >
> Can you describe the actual use case for this?  I'm not sure I follow why the
> operator would like to limit the owner of the volume to only use a specific
> version of a snapshot.  It sounds like you are adding another layer.  If
> that's the case, the problem should be solved at an upper layer instead of
> Cinder.

For example, one tenant's volume quota is five, and the tenant already has 5 
volumes and 1 snapshot. If the data in the base volume of the snapshot is 
corrupted, the user will need to create a new volume from the snapshot, but this 
operation will fail because there are already 5 volumes, and the original volume 
cannot be deleted either.

> >
> >
> >
> >
> > Any thoughts? Welcome any advices.
> >
> >
> >
> >
> >
> >
> >
> > --
> >
> > zhangleiqiang
> >
> >
> >
> > Best Regards
> >
> >
> >
> > From: John Griffith [mailto:john.griff...@solidfire.com]
> > Sent: Thursday, March 06, 2014 8:38 PM
> >
> >
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
> > delete protection
> >
> >
> >
> >
> >
> >
> >
> > On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt 
> wrote:
> >
> > On 6 March 2014 08:50, zhangyu (AI)  wrote:
> >> It seems to be an interesting idea. In fact, a China-based public
> >> IaaS, QingCloud, has provided a similar feature to their virtual
> >> servers. Within 2 hours after a virtual server is deleted, the server
> >> owner can decide whether or not to cancel this deletion and re-cycle
> >> that "deleted" virtual server.
> >>
> >> People make mistakes, while such a feature helps in urgent cases. Any
> >> idea here?
> >
> > Nova has soft_delete and restore for servers. That sounds similar?
> >
> > John
> >
> >
> >>
> >> -Original Message-
> >> From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
> >> Sent: Thursday, March 06, 2014 2:19 PM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete
> >> protection
> >>
> >> Hi all,
> >>
> >> Current openstack provide the delete volume function to the user.
> >> But it seems there is no any protection for user's delete operation miss.
> >>
> >> As we know the data in the volume maybe very important and valuable.
> >> So it's better to provide a method to the user to avoid the volume
> >> delete miss.
> >>
> >> Such as:
> >> We can provide a safe delete for the volume.
> >> User can specify how long the volume will be delay deleted(actually
> >> deleted) when he deletes the volume.
> >> Before the volume is actually deleted, user can cancel the delete
> >> operation and find back the volume.
> >> After the specified time, the volume will be actually deleted by the
> >> system.
> >>
> >> Any thoughts? Welcome any advices.
> >>
> >> Best regards to you.
> >>
> >>
> >> --
> >> zhangleiqiang
> >>
> >> Best Regards
> >>
> >>
> >>
> >> ___
> >> OpenStack-dev m

Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-10 Thread Zhangleiqiang
Hi all,



Besides the "soft-delete" state for volumes, I think there is need for 
introducing another "fake delete" state for volumes which have snapshot.



Current OpenStack refuses delete requests for volumes which have snapshots. 
However, we have no way to limit users to using only a specific snapshot 
instead of the original volume, because the original volume is always visible 
to the users.



So I think we can permit users to delete volumes which have snapshots, and mark 
the volume with a "fake delete" state. When all of the volume's snapshots have 
been deleted, the original volume will be removed automatically.
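
A toy sketch of the proposed rule (hypothetical helper names, not existing 
Cinder code):

    def on_volume_delete(volume, really_delete):
        # If snapshots still exist, only hide the volume from the user.
        if volume.snapshot_ids:
            volume.status = "fake-deleted"
        else:
            really_delete(volume)

    def on_snapshot_delete(volume, snap_id, really_delete):
        volume.snapshot_ids.remove(snap_id)
        # Once the last snapshot is gone, reclaim the hidden volume.
        if volume.status == "fake-deleted" and not volume.snapshot_ids:
            really_delete(volume)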





Any thoughts? Welcome any advices.



--
zhangleiqiang

Best Regards

From: John Griffith [mailto:john.griff...@solidfire.com]
Sent: Thursday, March 06, 2014 8:38 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete 
protection



On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt <j...@johngarbutt.com> wrote:
On 6 March 2014 08:50, zhangyu (AI) <zhangy...@huawei.com> wrote:
> It seems to be an interesting idea. In fact, a China-based public IaaS, 
> QingCloud, has provided a similar feature
> to their virtual servers. Within 2 hours after a virtual server is deleted, 
> the server owner can decide whether
> or not to cancel this deletion and re-cycle that "deleted" virtual server.
>
> People make mistakes, while such a feature helps in urgent cases. Any idea 
> here?
Nova has soft_delete and restore for servers. That sounds similar?

John

>
> -Original Message-
> From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
> Sent: Thursday, March 06, 2014 2:19 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete protection
>
> Hi all,
>
> Current openstack provide the delete volume function to the user.
> But it seems there is no any protection for user's delete operation miss.
>
> As we know the data in the volume maybe very important and valuable.
> So it's better to provide a method to the user to avoid the volume delete 
> miss.
>
> Such as:
> We can provide a safe delete for the volume.
> User can specify how long the volume will be delay deleted(actually deleted) 
> when he deletes the volume.
> Before the volume is actually deleted, user can cancel the delete operation 
> and find back the volume.
> After the specified time, the volume will be actually deleted by the system.
>
> Any thoughts? Welcome any advices.
>
> Best regards to you.
>
>
> --
> zhangleiqiang
>
> Best Regards
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I think a soft-delete for Cinder sounds like a neat idea.  You should file a BP 
that we can target for Juno.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-10 Thread Zhangleiqiang
Hi, Joe & Qin Zhao:

I think the use case from [1] is more typical for the "non-persistent 
volume" feature and the related "independent persistent volume" feature.

Let's take a virtual machine which hosts a web service, but it is 
primarily a read-only web site with content that rarely changes. This VM has 
three disks. Disk 1 contains the Guest OS and web application (e.g. Apache). 
Disk 2 contains the web pages for the web site. Disk 3 contains all the logging 
activity.
 In this case, disk 1 (OS & app) uses the dependent (default) setting and 
is backed up nightly. Disk 2 is independent non-persistent (not backed up, and 
any changes to these pages will be discarded). Disk 3 is independent persistent 
(not backed up, but any changes are persisted to the disk).
 If updates are needed to the web site's pages, disk 2 must be taken 
out of independent non-persistent mode temporarily to allow the changes to be 
made.
 Now let's say that this site gets hacked, and the pages are doctored 
with something which is not very nice. A simple reboot of this host will 
discard the changes made to the web pages on disk 2, but will persist the logs 
on disk 3 so that a root cause analysis can be carried out.

The "in-place snapshot" and "file system support snapshot" can both 
achieve the purpose for test particular functionality. 
 However, compared to non-persistent volume, "in-place snapshot" is 
more or less heavier, and the Instance-level snapshot has more larger 
granularity than volume, especially for the use case mentioned above. File 
system which supports snapshot will not be applicable for the situation when 
the system is got hacked.

So I think the "non-persistent  volume" feature is meaningful for 
public cloud. 

P.S. There was a misunderstanding earlier:
Non-Persistent Volume: all writes are temporary. Changes are 
discarded when the virtual machine is force reset or powered off. If you 
restart the system, the data will still be available on the disk. Changes will 
be discarded only when the system is force RESET or POWERED OFF.
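
A minimal sketch of those semantics (a hypothetical helper, not Nova/Cinder 
code):

    PERSISTENT = "persistent"
    NONPERSISTENT = "independent_nonpersistent"

    def discard_writes(disk_mode, event):
        # Per the description above: a guest-initiated restart keeps
        # the data; only a force reset or power-off rolls the disk back.
        return disk_mode == NONPERSISTENT and event in ("force_reset",
                                                        "power_off")

    assert discard_writes(NONPERSISTENT, "power_off") is True
    assert discard_writes(NONPERSISTENT, "restart") is False
    assert discard_writes(PERSISTENT, "power_off") is False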

Are there any other suggestions?  Thanks.

[1]  
http://cormachogan.com/2013/04/16/what-are-dependent-independent-disks-persistent-and-non-persisent-modes/

--
zhangleiqiang

Best Regards

From: Joe Gordon [mailto:joe.gord...@gmail.com] 
Sent: Saturday, March 08, 2014 4:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][cinder] non-persistent storage(after 
stopping VM, data will be rollback automatically), do you think we shoud 
introduce this feature?



On Fri, Mar 7, 2014 at 1:26 AM, Qin Zhao  wrote:
Hi Joe,
Maybe my example is very rare. However, I think a new type of 'in-place' 
snapshot will have other advantages. For instance, the hypervisor can save the 
memory content in the snapshot file, so that the user can revert his VM to a 
running state. In this way, the user does not need to start each application 
again. Everything is there. The user can continue his work very easily. If the 
user spawns and boots a new VM, he will need a lot of time to resume his work. 
Does that make sense?

I am not sure I follow. I think the use case you have brought up can be solved 
inside the VM with something like http://unionfs.filesystems.org/, a 
filesystem that supports snapshotting.

 

On Fri, Mar 7, 2014 at 2:20 PM, Joe Gordon  wrote:
On Wed, Mar 5, 2014 at 11:45 AM, Qin Zhao  wrote:
> Hi Joe,
> For example, I used to use a private cloud system, which will calculate
> charge bi-weekly. and it charging formula looks like "Total_charge =
> Instance_number*C1 + Total_instance_duration*C2 + Image_number*C3 +
> Volume_number*C4".  Those Instance/Image/Volume number are the number of
> those objects that user created within these two weeks. And it also has
> quota to limit total image size and total volume size. That formula is not
> very exact, but you can see that it regards each of my 'create' operations as
> a 'ticket', and will charge all those tickets, plus the instance duration
Charging for VM creation is not very cloud-like.  Cloud
instances should be treated as ephemeral and something that you can
throw away and recreate at any time.  Additionally cloud should charge
on resources used (instance CPU hour, network load etc), and not API
calls (at least in any meaningful amount).

> fee. In order to reduce the expense of my department, I am asked not to
> create instance very frequently, and not to create too many images and
> volume. The image quota is not very big. And I would never be permitted to
> exceed the quota, since it request additional dollars.
>
>
> On Thu, Mar 6, 2014 at 1:33 AM, Joe Gordon  wrote:
>>
>> On Wed, Mar 5, 2

Re: [openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-06 Thread Zhangleiqiang
> get them working. For example, in a devstack VM the only way I can get the
> iSCSI target to show the new size (after an lvextend) is to delete and 
> recreate
> the target, something jgriffiths said he doesn't want to support ;-).

I know a method that can achieve it, but it may need the instance to be paused 
first (during step 2 below), without detaching/reattaching. The steps are as 
follows (a consolidated sketch follows step 3):

1. Extend the LV
2. Refresh the size info in tgtd:
  a) tgtadm --op show --mode target # get the "tid" and "lun_id" properties of 
target related to the lv; the "size" property in output result is still the old 
size before lvextend
  b) tgtadm --op delete --mode logicalunit --tid={tid} --lun={lun_id}  # delete 
lun mapping in tgtd
  c) tgtadm --op new --mode logicalunit --tid={tid} --lun={lun_id} 
--backing-store=/dev/cinder-volumes/{lv-name} # re-add lun mapping
  d) tgtadm --op show --mode target #now the "size" property in output result 
is the new size
*PS*:  
a) During the procedure, the corresponding device on the compute node won't 
disappear. But I am not sure what happens if the instance has I/O on this 
volume, so the instance may need to be paused during this procedure.
b) Maybe we can modify tgtadm to support an operation that just "refreshes" 
the size of the backing store.

3. Rescan the lun info in compute node: iscsiadm -m node --targetname 
{target_name} -R
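
A consolidated sketch of steps 1-3 (assuming an LVM/tgt backend; names are 
illustrative, and real code would need error handling and possibly pausing 
the instance around the LUN re-add):

    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.check_call(cmd)

    def online_extend(vg, lv, new_size, tid, lun_id, target_name):
        # 1. Extend the logical volume backing the iSCSI target.
        run(["lvextend", "-L", new_size, "/dev/%s/%s" % (vg, lv)])

        # 2. Refresh the size tgtd reports: delete and re-add the LUN
        #    mapping (tgtadm has no plain "refresh" operation today).
        run(["tgtadm", "--op", "delete", "--mode", "logicalunit",
             "--tid", str(tid), "--lun", str(lun_id)])
        run(["tgtadm", "--op", "new", "--mode", "logicalunit",
             "--tid", str(tid), "--lun", str(lun_id),
             "--backing-store", "/dev/%s/%s" % (vg, lv)])

        # 3. Run on the *compute node*: rescan the session so the
        #    initiator sees the new size.
        run(["iscsiadm", "-m", "node", "--targetname", target_name, "-R"])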

>I also
> haven't dived into any of those other limits you mentioned (nfs_used_ratio,
> etc.).

So far, we have focused on volumes based on a *block device*. In this 
scenario, we must first extend the volume and then notify the hypervisor; I 
think one of the preconditions is to make sure the extend operation will not 
affect I/O in the instance.

However, there is another scenario which may be a little different. For 
*online-extending* virtual disks (qcow2, sparse, etc.) whose backend storage is 
a file system (ext3, NFS, GlusterFS, etc.), the current implementation of QEMU 
is as follows:
1. QEMU drain all IO
2. *QEMU* extend the virtual disk
3. QEMU resume IO

The difference is that the *extend* work needs to be done by QEMU rather than 
by the Cinder driver. 

> Feel free to ping me on IRC (pdmars).

I don't know your time zone; we can continue the discussion on IRC. :)

--
zhangleiqiang

Best Regards


> -Original Message-
> From: Paul Marshall [mailto:paul.marsh...@rackspace.com]
> Sent: Thursday, March 06, 2014 12:56 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Luohao (brian)
> Subject: Re: [openstack-dev] [Cinder] Do you think we should introduce the
> online-extend feature to cinder ?
> 
> Hey,
> 
> Sorry I missed this thread a couple of days ago. I am working on a first-pass 
> of
> this and hope to have something soon. So far I've mostly focused on getting
> OpenVZ and the HP LH SAN driver working for online extend. I've had trouble
> with libvirt+kvm+lvm so I'd love some help there if you have ideas about how 
> to
> get them working. For example, in a devstack VM the only way I can get the
> iSCSI target to show the new size (after an lvextend) is to delete and 
> recreate
> the target, something jgriffiths said he doesn't want to support ;-). I also
> haven't dived into any of those other limits you mentioned (nfs_used_ratio,
> etc.). Feel free to ping me on IRC (pdmars).
> 
> Paul
> 
> 
> On Mar 3, 2014, at 8:50 PM, Zhangleiqiang 
> wrote:
> 
> > @john.griffith. Thanks for your information.
> >
> > I have read the BP you mentioned ([1]) and have some rough thoughts about
> it.
> >
> > As far as I know, the corresponding online-extend command for libvirt is
> "blockresize", and for Qemu, the implement differs among disk formats.
> >
> > For the regular qcow2/raw disk file, qemu will take charge of the 
> > drain_all_io
> and truncate_disk actions, but for raw block device, qemu will only check if 
> the
> *Actual* size of the device is larger than current size.
> >
> > I think the former need more consideration, because the extend work is done
> by libvirt, Nova may need to do this first and then notify Cinder. But if we 
> take
> allocation limit of different cinder backend drivers (such as quota,
> nfs_used_ratio, nfs_oversub_ratio, etc) into account, the workflow will be
> more complicated.
> >
> > This scenario is not included by the Item 3 of BP ([1]), as it cannot be 
> > simply
> "just work" or notified by the compute node/libvirt after the volume is
> extended.
> >
> > This regular qcow2/raw disk files are normally stored in file system based
> storage, maybe the Manila project is more appropriate for this 

Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-06 Thread Zhangleiqiang
Agreed, and thanks for your advice. :)



--
zhangleiqiang

Best Regards

From: Alex Meade [mailto:mr.alex.me...@gmail.com]
Sent: Friday, March 07, 2014 12:09 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete 
protection

Just so everyone is aware. Glance supports 'delayed deletes' where image data 
will not actually be deleted at the time of the request. Glance also has the 
concept of 'protected images', which allows for setting an image as protected, 
preventing it from being deleted until the image is intentionally set to 
unprotected. This avoids any actual deletion of prized images.

Perhaps cinder could emulate that behavior or improve upon it for volumes.

-Alex

On Thu, Mar 6, 2014 at 8:45 AM, zhangyu (AI) <zhangy...@huawei.com> wrote:
Got it. Many thanks!

Leiqiang, you can take action now :)

From: John Griffith [mailto:john.griff...@solidfire.com]
Sent: Thursday, March 06, 2014 8:38 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete 
protection



On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt <j...@johngarbutt.com> wrote:
On 6 March 2014 08:50, zhangyu (AI) <zhangy...@huawei.com> wrote:
> It seems to be an interesting idea. In fact, a China-based public IaaS, 
> QingCloud, has provided a similar feature
> to their virtual servers. Within 2 hours after a virtual server is deleted, 
> the server owner can decide whether
> or not to cancel this deletion and re-cycle that "deleted" virtual server.
>
> People make mistakes, while such a feature helps in urgent cases. Any idea 
> here?
Nova has soft_delete and restore for servers. That sounds similar?

John

>
> -Original Message-
> From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
> Sent: Thursday, March 06, 2014 2:19 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete protection
>
> Hi all,
>
> Current openstack provide the delete volume function to the user.
> But it seems there is no any protection for user's delete operation miss.
>
> As we know the data in the volume maybe very important and valuable.
> So it's better to provide a method to the user to avoid the volume delete 
> miss.
>
> Such as:
> We can provide a safe delete for the volume.
> User can specify how long the volume will be delay deleted(actually deleted) 
> when he deletes the volume.
> Before the volume is actually deleted, user can cancel the delete operation 
> and find back the volume.
> After the specified time, the volume will be actually deleted by the system.
>
> Any thoughts? Welcome any advices.
>
> Best regards to you.
>
>
> --
> zhangleiqiang
>
> Best Regards
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I think a soft-delete for Cinder sounds like a neat idea.  You should file a BP 
that we can target for Juno.

Thanks,
John


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-06 Thread Zhangleiqiang
OK. We have proposed a blueprint here.

https://blueprints.launchpad.net/cinder/+spec/volume-delete-protect

Thanks.


--
zhangleiqiang

Best Regards

From: John Griffith [mailto:john.griff...@solidfire.com]
Sent: Thursday, March 06, 2014 8:38 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete 
protection



On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt <j...@johngarbutt.com> wrote:
On 6 March 2014 08:50, zhangyu (AI) <zhangy...@huawei.com> wrote:
> It seems to be an interesting idea. In fact, a China-based public IaaS, 
> QingCloud, has provided a similar feature
> to their virtual servers. Within 2 hours after a virtual server is deleted, 
> the server owner can decide whether
> or not to cancel this deletion and re-cycle that "deleted" virtual server.
>
> People make mistakes, while such a feature helps in urgent cases. Any idea 
> here?
Nova has soft_delete and restore for servers. That sounds similar?

John

>
> -Original Message-
> From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
> Sent: Thursday, March 06, 2014 2:19 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete protection
>
> Hi all,
>
> Current openstack provide the delete volume function to the user.
> But it seems there is no any protection for user's delete operation miss.
>
> As we know the data in the volume maybe very important and valuable.
> So it's better to provide a method to the user to avoid the volume delete 
> miss.
>
> Such as:
> We can provide a safe delete for the volume.
> User can specify how long the volume will be delay deleted(actually deleted) 
> when he deletes the volume.
> Before the volume is actually deleted, user can cancel the delete operation 
> and find back the volume.
> After the specified time, the volume will be actually deleted by the system.
>
> Any thoughts? Welcome any advices.
>
> Best regards to you.
>
>
> --
> zhangleiqiang
>
> Best Regards
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I think a soft-delete for Cinder sounds like a neat idea.  You should file a BP 
that we can target for Juno.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-05 Thread Zhangleiqiang
Hi all,

Current OpenStack provides a delete-volume function to the user, but it seems 
there is no protection against a user deleting a volume by mistake.

As we know, the data in a volume may be very important and valuable, so it's 
better to provide the user with a method to avoid mistaken volume deletion.

Such as:
We can provide a safe delete for the volume (a toy sketch follows this 
description).
The user can specify how long the deletion will be delayed (before the volume 
is actually deleted) when he deletes the volume.
Before the volume is actually deleted, the user can cancel the delete operation 
and recover the volume.
After the specified time, the volume will actually be deleted by the system.
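
A toy sketch of that delayed-delete behaviour (illustrative Python only; the 
names are made up, and a real implementation would persist the pending state):

    import threading

    class SafeDeleter:
        def __init__(self):
            self._pending = {}   # volume_id -> Timer

        def delete(self, volume_id, delay_seconds, do_delete):
            t = threading.Timer(delay_seconds, self._expire,
                                args=(volume_id, do_delete))
            self._pending[volume_id] = t
            t.start()

        def cancel(self, volume_id):
            # Recover the volume before the deadline passes.
            t = self._pending.pop(volume_id, None)
            if t is not None:
                t.cancel()
                return True
            return False

        def _expire(self, volume_id, do_delete):
            self._pending.pop(volume_id, None)
            do_delete(volume_id)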

Any thoughts? Welcome any advices.

Best regards to you.


--
zhangleiqiang

Best Regards



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-03 Thread Zhangleiqiang
> 
> This sounds like ephemeral storage plus snapshots.  You build a base image,
> snapshot it then boot from the snapshot.


Non-persistent storage/disk is useful for sandbox-like environments, and this 
feature has existed in VMware ESX since version 4.1. The implementation in ESX 
is the same as what you said: boot from a snapshot of the disk/volume, but ESX 
will also *automatically* delete the transient snapshot after the instance 
reboots or shuts down. I think the whole procedure should be controlled by 
OpenStack rather than by the user's manual operations.

As far as I know, libvirt already defines the corresponding <transient/> element 
in its domain XML for non-persistent disks ([1]), but it cannot specify the 
location of the transient snapshot. Although qemu-kvm has provided support for 
this feature via the "-snapshot" command-line argument, which creates the 
transient snapshot under the /tmp directory, the qemu driver of libvirt doesn't 
support the <transient/> element currently.

I think the steps of creating and deleting the transient snapshot may better be 
done by Nova/Cinder rather than waiting for <transient/> support to be added to 
libvirt, as the location of the transient snapshot should be specified by Nova. 
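
A rough sketch of that flow, assuming hypothetical 'nova' and 'cinder' client 
handles (an illustrative pseudo-API, not real OpenStack code):

    def start_nonpersistent(nova, cinder, base_volume):
        snap = cinder.create_snapshot(base_volume)       # transient snapshot
        delta = cinder.create_volume_from_snapshot(snap)
        instance = nova.boot_from_volume(delta)
        return instance, delta, snap

    def stop_nonpersistent(nova, cinder, instance, delta, snap):
        nova.stop(instance)
        cinder.delete_volume(delta)    # writes since boot are discarded
        cinder.delete_snapshot(snap)   # the base volume is left untouched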


[1] http://libvirt.org/formatdomain.html#elementsDisks
--
zhangleiqiang

Best Regards


> -Original Message-
> From: Joe Gordon [mailto:joe.gord...@gmail.com]
> Sent: Tuesday, March 04, 2014 11:26 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Luohao (brian)
> Subject: Re: [openstack-dev] [nova][cinder] non-persistent storage(after
> stopping VM, data will be rollback automatically), do you think we shoud
> introduce this feature?
> 
> On Mon, Mar 3, 2014 at 6:00 PM, Yuzhou (C) 
> wrote:
> > Hi stackers,
> >
> > As far as I know ,there are two types of storage used by VM in openstack:
> Ephemeral Storage and Persistent Storage.
> > Data on ephemeral storage ceases to exist when the instance it is associated
> with is terminated. Rebooting the VM or restarting the host server, however,
> will not destroy ephemeral data.
> > Persistent storage means that the storage resource outlives any other
> resource and is always available, regardless of the state of a running 
> instance.
> >
> > There is a use case that maybe need a new type of storage, maybe we can
> call it non-persistent storage .
> > The use case is that VMs are assigned to the public ephemerally in public
> areas.
> > After the VM is used, new data on storage of VM ceases to exist when the
> instance it is associated with is stopped.
> > It means stop the VM, Non-persistent storage used by VM will be rollback
> automatically.
> >
> > Is there any other suggestions? Or any BPs about this use case?
> >
> 
> This sounds like ephemeral storage plus snapshots.  You build a base image,
> snapshot it then boot from the snapshot.
> 
> > Thanks!
> >
> > Zhou Yu
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-03 Thread Zhangleiqiang
@john.griffith. Thanks for your information.

I have read the BP you mentioned ([1]) and have some rough thoughts about it.

As far as I know, the corresponding online-extend command for libvirt is 
"blockresize", and for QEMU, the implementation differs among disk formats.

For a regular qcow2/raw disk file, QEMU takes charge of the drain-all-I/O 
and truncate-disk actions, but for a raw block device, QEMU will only check 
whether the *actual* size of the device is larger than the current size.
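
For reference, a minimal sketch of driving blockresize through the libvirt 
Python bindings (the domain and device names are made-up examples):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0001")
    # blockResize takes the size in KiB unless the BYTES flag is set;
    # here we grow the disk attached as vdb to 20 GiB while it is online.
    dom.blockResize("vdb", 20 * 1024 * 1024, 0)
    conn.close()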

I think the former needs more consideration: because the extend work is done 
by libvirt, Nova may need to do this first and then notify Cinder. But if we 
take the allocation limits of different Cinder backend drivers (such as quota, 
nfs_used_ratio, nfs_oversub_ratio, etc.) into account, the workflow will be 
more complicated.

This scenario is not covered by item 3 of the BP ([1]), as it cannot simply 
"just work" or be handled by a notification from the compute node/libvirt after 
the volume is extended.

These regular qcow2/raw disk files are normally stored in file-system-based 
storage; maybe the Manila project is more appropriate for this scenario?


Thanks.


[1]: https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension

--
zhangleiqiang

Best Regards

From: John Griffith [mailto:john.griff...@solidfire.com]
Sent: Tuesday, March 04, 2014 1:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Luohao (brian)
Subject: Re: [openstack-dev] [Cinder] Do you think we should introduce the 
online-extend feature to cinder ?



On Mon, Mar 3, 2014 at 2:01 AM, Zhangleiqiang <zhangleiqi...@huawei.com> wrote:
Hi, stackers:

Libvirt/qemu have supported online-extend for multiple disk formats, 
including qcow2, sparse, etc. But Cinder only support offline-extend volumes 
currently.

Offline-extend volume will force the instance to be shutoff or the volume 
to be detached. I think it will be useful if we introduce the online-extend 
feature to cinder, especially for the file system based driver, e.g. nfs, 
glusterfs, etc.

Is there any other suggestions?

Thanks.


--
zhangleiqiang

Best Regards


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hi Zhangleiqiang,

So yes, there's a rough BP for this here: [1], and some of the folks from the 
Trove team (pdmars on IRC) have actually started to dive into this.  Last I 
checked with him there were some sticking points on the Nova side but we should 
synch up with Paul, it's been a couple weeks since I've last caught up with him.

Thanks,
John
[1]: https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-03 Thread Zhangleiqiang
Hi, stackers:

    Libvirt/QEMU have supported online extend for multiple disk formats, 
including qcow2, sparse, etc., but Cinder only supports offline extend of 
volumes currently. 

    Offline extend forces the instance to be shut off or the volume to be 
detached. I think it would be useful to introduce the online-extend feature 
to Cinder, especially for the file-system-based drivers, e.g. NFS, 
GlusterFS, etc.

Is there any other suggestions?

Thanks.


--
zhangleiqiang

Best Regards


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-02-25 Thread Zhangleiqiang
Hi, Jay & Sylvain:

I found  the OpenStack-Neat Project (http://openstack-neat.org/) have already 
aimed to do the things similar to DRS and DPM.

Hope it will be helpful.


--
Leiqzhang

Best Regards

From: Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
Sent: Wednesday, February 26, 2014 9:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for 
OpenStack run time policy to manage compute/storage resource

Hi Tim,

As I read your design document, it sounds more closely related to what the 
Solver Scheduler subteam is trying to focus on, i.e. intelligent, agnostic 
resource placement done in a holistic way [1].
IIRC, Jay is more likely talking about adaptive scheduling decisions based on 
feedback, with potential counter-measures that can be taken to decrease load 
and preserve the QoS of nodes.

That said, maybe I'm wrong?

[1]https://blueprints.launchpad.net/nova/+spec/solver-scheduler

2014-02-26 1:09 GMT+01:00 Tim Hinrichs <thinri...@vmware.com>:
Hi Jay,

The Congress project aims to handle something similar to your use cases.  I 
just sent a note to the ML with a Congress status update with the tag 
[Congress].  It includes links to our design docs.  Let me know if you have 
trouble finding it or want to follow up.

Tim

- Original Message -
| From: "Sylvain Bauza" 
mailto:sylvain.ba...@gmail.com>>
| To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
| Sent: Tuesday, February 25, 2014 3:58:07 PM
| Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for 
OpenStack run time policy to manage
| compute/storage resource
|
|
|
| Hi Jay,
|
|
| Currently, the Nova scheduler only acts upon user request (either
| live migration or boot an instance). IMHO, that's something Gantt
| should scope later on (or at least there could be some space within
| the Scheduler) so that Scheduler would be responsible for managing
| resources on a dynamic way.
|
|
| I'm thinking of the Pets vs. Cattles analogy, and I definitely think
| that Compute resources could be treated like Pets, provided the
| Scheduler does a move.
|
|
| -Sylvain
|
|
|
| 2014-02-26 0:40 GMT+01:00 Jay Lau <jay.lau@gmail.com>:
|
|
|
|
| Greetings,
|
|
| Here I want to bring up an old topic here and want to get some input
| from you experts.
|
|
| Currently in nova and cinder, we only have some initial placement
| policies to help customers deploy a VM instance or create volume storage
| to a specified host, but after the VM or the volume was created,
| there was no policy to monitor the hypervisors or the storage
| servers to take some actions in the following case:
|
|
| 1) Load Balance Policy: If the load of one server is too heavy, then
| probably we need to migrate some VMs from high load servers to some
| idle servers automatically to make sure the system resource usage
| can be balanced.
|
| 2) HA Policy: If one server goes down due to hardware failure or
| whatever other reason, there is no policy to make sure the VMs can be
| evacuated or live migrated (Make sure migrate the VM before server
| goes down) to other available servers to make sure customer
| applications will not be affect too much.
|
| 3) Energy Saving Policy: If a single host load is lower than
| configured threshold, then low down the frequency of the CPU to save
| energy; otherwise, increase the CPU frequency. If the average load
| is lower than configured threshold, then shutdown some hypervisors
| to save energy; otherwise, power on some hypervisors to load
| balance. Before power off a hypervisor host, the energy policy need
| to live migrate all VMs on the hypervisor to other available
| hypervisors; After Power on a hypervisor host, the Load Balance
| Policy will help live migrate some VMs to the new powered
| hypervisor.
|
| 4) Customized Policy: Customer can also define some customized
| policies based on their specified requirement.
|
| 5) Some run-time policies for block storage or even network.
|
|
|
| I borrow the idea from VMWare DRS (Thanks VMWare DRS), and there
| indeed many customers want such features.
|
|
|
| I have filed a bp here [1] long ago, but after some discussion with
| Russell, we think that this should not belong to nova but other
| projects. Till now, I did not find a good place where we can put
| this in, can any of you show some comments?
|
|
|
| [1]
| https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
|
| --
|
|
| Thanks,
|
| Jay
|
| ___
| OpenStack-dev mailing list
| OpenStack-dev@lists.openstack.org
| http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
|
|
|
| ___
| OpenStack-dev mailing list
| OpenStack-dev@lists.openstack.org