Re: [openstack-dev] [cinder] about use nfs driver to backup the volume snapshot

2018-11-02 Thread Eric Harney

On 11/1/18 4:44 PM, Jay Bryant wrote:

On Thu, Nov 1, 2018, 10:44 AM Rambo  wrote:


Hi,all

  Recently I have been using the NFS driver as the cinder-backup backend. When I
use it to back up a volume snapshot, the result is a
NotImplementedError [1], and nfs.py doesn't have a
create_volume_from_snapshot function. Does the community plan to implement
this so that NFS can be used as the cinder-backup backend? Can you tell me
about this? Thank you very much!

Rambo,


The NFS driver doesn't have full snapshot support. I am not sure whether that
missing function was an oversight or not. I would reach out to Eric Harney,
as he implemented that code.

Jay



create_volume_from_snapshot is implemented in the NFS driver.  It is in 
the remotefs code that the NFS driver inherits from.
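
You can confirm this quickly from a Python shell (a rough check; assumes a
current Cinder tree is importable):

    >>> from cinder.volume.drivers import nfs
    >>> hasattr(nfs.NfsDriver, 'create_volume_from_snapshot')
    True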


But, I'm not sure I understand what's being asked here -- how is this 
related to using NFS as the backup backend?






[1]
https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L2142








Best Regards
Rambo




Re: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal)

2018-10-03 Thread Eric Harney

On 9/27/18 1:36 PM, Markus Hentsch wrote:

Dear OpenStack developers,

we would like to propose the introduction of an encrypted image format
in OpenStack. We already created a basic implementation involving Nova,
Cinder, OSC and Glance, which we'd like to contribute.

We originally created a full spec document but since the official
cross-project contribution workflow in OpenStack is a thing of the past,
we have no single repository to upload it to. Thus, the Glance team
advised us to post this on the mailing list [1].

Ironically, Glance is the least affected project since the image
transformation processes affected are taking place elsewhere (Nova and
Cinder mostly).

Below you'll find the most important parts of our spec that describe our
proposal - which our current implementation is based on. We'd love to
hear your feedback on the topic and would like to encourage all affected
projects to join the discussion.

Subsequently, we'd like to receive further instructions on how we may
contribute to all of the affected projects in the most effective and
collaborative way possible. The Glance team suggested starting with a
complete spec in the glance-specs repository, followed by individual
specs/blueprints for the remaining projects [1]. Would that be alright
for the other teams?

[1]
http://eavesdrop.openstack.org/meetings/glance/2018/glance.2018-09-27-14.00.log.html

Best regards,
Markus Hentsch

(excerpts from our image encryption spec below)

Problem description
===================

An image, when uploaded to Glance or being created through Nova from an
existing server (VM), may contain sensitive information. The already
provided signature functionality only protects images against
alteration. Images may be stored on several hosts over long periods of
time. First and foremost this includes the image storage hosts of Glance
itself. Furthermore it might also involve caches on systems like compute
hosts. In conclusion they are exposed to a multitude of potential
scenarios involving different hosts with different access patterns and
attack surfaces. The OpenStack components involved in those scenarios do
not protect the confidentiality of image data. That’s why we propose the
introduction of an encrypted image format.

Use Cases
---------

* A user wants to upload an image, which includes sensitive information.
To ensure the integrity of the image, a signature can be generated and
used for verification. Additionally, the user wants to protect the
confidentiality of the image data through encryption. The user generates
or uploads a key in the key manager (e.g. Barbican) and uses it to
encrypt the image locally using the OpenStack client (osc) when
uploading it. Consequently, the image stored on the Glance host is
encrypted.

* A user wants to create an image from an existing server with ephemeral
storage. This server may contain sensitive user data. The corresponding
compute host then generates the image based on the data of the ephemeral
storage disk. To protect the confidentiality of the data within the
image, the user wants Nova to also encrypt the image using a key from
the key manager, specified by its secret ID. Consequently, the image
stored on the Glance host is encrypted.

* A user wants to create a new server or volume based on an encrypted
image created by any of the use cases described above. The corresponding
compute or volume host has to be able to decrypt the image using the
symmetric key stored in the key manager and transform it into the
requested resource (server disk or volume).




Although not required on a technical level, all of the use cases
described above assume the usage of encrypted volume types and encrypted
ephemeral storage as provided by OpenStack.


Proposed changes
================

* Glance: Adding a container type for encrypted images that supports
different mechanisms (format, cipher algorithms, secret ID) via a
metadata property. Whether to introduce several container types or to
move the mechanism definition into metadata properties is still up for
discussion, although we do favor the latter.

* Nova: Adding support for decrypting an encrypted image when a server's
ephemeral disk is created. This includes direct decryption streaming for
encrypted disks. Nova should select a suitable mechanism according to
the image container type and metadata. The symmetric key will be
retrieved from the key manager (e.g. Barbican).

* Cinder: Adding support for decrypting an encrypted image when a volume
is created from it. Cinder should select a suitable mechanism according
to the image container type and metadata. The symmetric key will be
retrieved from the key manager (e.g. Barbican).


Are you aware of the existing Cinder support for similar functionality?

When encrypted volumes are uploaded to Glance images from Cinder, 
encryption keys are cloned in Barbican and tied to Glance images as 
metadata.  Then, volumes created from those images can consume the 
Barbican key to 
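
From memory, the key reference ends up as a Glance image property, roughly
like this (the UUID below is made up):

    # illustrative only: metadata on an image created from an encrypted volume
    image_properties = {
        # ID of the Barbican secret holding the cloned encryption key
        'cinder_encryption_key_id': 'f1b7a2d0-1c4e-4a5d-9b0e-2d6c8f3a1e47',
    }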

Re: [openstack-dev] [cinder][nova] RBD multi-attach

2018-04-13 Thread Eric Harney
On 04/12/2018 10:25 PM, 李俊波 wrote:
> Hello Nova, Cinder developers,
> 
>  
> 
> I would like to ask you a question concerning a Cinder patch [1].
> 
>  
> 
> In this patch, it is mentioned that RBD features are incompatible with
> multi-attach, which disabled multi-attach for RBD. I would like to know
> which RBD features are incompatible.
> 
>  
> 
> In the bug [2], yao ning also raised this question, and in his environment
> they did not find any problems when enabling this feature.
> 
>  
> 
> So I would also like to know which features in ceph make this feature
> unsafe? 
> 
>  
> 
> [1] https://review.openstack.org/#/c/283695/
> 
> [2] https://bugs.launchpad.net/cinder/+bug/1535815
> 
>  
> 
>  
> 
> Best wishes and Regards
> 
> junboli
> 
>  

Hi,

As noted in the comment in the code [1] -- the exclusive lock feature
must be disabled.  However, this feature is required for RBD mirroring
[2], which will be the basis of Cinder volume replication for RBD.

We are currently prioritizing completing support for replication over
multi-attach for this driver, since there is more demand for that
feature.  After that, we will look more at multi-attach and how to let
deployers choose to enable replication or multi-attach.
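
If you want to check whether exclusive-lock is enabled on an existing image,
the rbd Python bindings can show it -- a quick sketch (pool and image names
here are just examples):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('volumes')
        try:
            with rbd.Image(ioctx, 'volume-xyz') as image:
                enabled = bool(image.features() &
                               rbd.RBD_FEATURE_EXCLUSIVE_LOCK)
                print('exclusive-lock enabled: %s' % enabled)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()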

[1]
https://git.openstack.org/cgit/openstack/cinder/tree/cinder/volume/drivers/rbd.py?id=d1bae7462e3bc#n485

[2]
http://docs.ceph.com/docs/master/rbd/rbd-mirroring/#enable-image-journaling-support

Thanks,
Eric



[openstack-dev] [infra][requirements][cinder] Handling requirements for driverfixes branches

2018-01-08 Thread Eric Harney
Hi all,

I'm trying to sort out how to run unit tests on Cinder driverfixes branches.

These branches are similar to stable branches, but live longer (and have
a different set of rules for what changes are appropriate).

In order for unit tests to work on these branches, requirements need to
be pinned in the same way they are for stable branches (i.e.
driverfixes/ocata matches stable/ocata's requirements).  Currently, unit
test jobs on these branches end up using requirements from master.

It is not clear how I can pin requirements on these branches, since they
aren't recognized as equivalent to stable branches by any of the normal
tooling used in CI.  I tried manually adding an upper-constraints.txt
here [1] but this does not result in the correct dependencies being used.

Where do changes need to be made for us to set the
requirements/upper-constraints correctly for these branches?


[1] https://review.openstack.org/#/c/503711/

Thanks,
Eric



Re: [openstack-dev] [OpenStack][Cinder][third-party][ci] Tintri Cinder CI failure

2017-09-28 Thread Eric Harney
On 09/28/2017 12:22 PM, Apoorva Deshpande wrote:
> It appears that Cinder started using NFS locks around Sept 19th. That
> resulted in our CI failures as we don't support it. Tempest tests succeeded
> when we added a nolock option in the NFS configuration[1].
> 
> Can someone provide more information on this change?
> 

I'm not sure there was a change in Cinder that would result in this
error message:

Command: /usr/bin/python -m oslo_concurrency.prlimit --as=1073741824
--cpu=8 -- sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C
qemu-img info
/opt/stack/data/cinder/mnt/a67d7d4be86399df850bfa711f7837f7/volume-43b54c3e-ae10-47b1-8a43-c9427551f923
Sep 26 12:14:13.919553 303-openstack-test2 cinder-volume[19641]: Exit
code: 1

Stderr: u"qemu-img: Could not open
'/opt/stack/data/cinder/mnt/a67d7d4be86399df850bfa711f7837f7/volume-43b54c3e-ae10-47b1-8a43-c9427551f923':
Failed to lock byte 100\n"

You may need to look at what else changed on the system -- qemu-img
versions, NFS utilities, etc.

> Thanks,
> Apoorva
> 
> [1] http://openstack-ci.tintri.com/tintri/refs-changes-09-504009-2/
> 
> On Tue, Sep 26, 2017 at 1:28 PM, Apoorva Deshpande 
> wrote:
> 
>> I patched sos-ci and logs are available now [1]. First exception
>> occurrence I spot in c-vol.txt is here [2]
>>
>> [1] http://openstack-ci.tintri.com/tintri/refs-changes-59-507359-1/logs/
>> [2] http://paste.openstack.org/show/621983/
>>
>> On Mon, Sep 25, 2017 at 11:32 PM, Silvan Kaiser 
>> wrote:
>>
>>> Hi Apoorva!
>>> The test run is sadly missing the service logs, probably because you're
>>> using a current DevStack (systemd based services) but an older sos-ci
>>> version? If you apply
>>> https://github.com/j-griffith/sos-ci/commit/f0f2ce2e2f2b12727ee5aa75a751376dcc1ea3a4 you should
>>> be able to get the logs for new test runs. This will help debugging this.
>>> Best
>>> Silvan
>>>
>>>
>>>
>>> 2017-09-26 1:54 GMT+02:00 Apoorva Deshpande :
>>>
 Hello,

 Tintri's Cinder CI started failing around Sept 19, 2017. There are 29
 tests failing [1] with the following errors [2][3][4]. The Tintri Cinder driver
 inherits from the NFS cinder driver and is available here [5].

 Please let me know if anyone has recently seen these failures or has any
 pointers on how to fix.

 Thanks,
 Apoorva

 IRC: Apoorva

 [1] http://openstack-ci.tintri.com/tintri/refs-changes-57-505357-1/testr_results.html
 [2] http://paste.openstack.org/show/621886/
 [3] http://paste.openstack.org/show/621858/
 [4] http://paste.openstack.org/show/621857/
 [5] https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/tintri.py

 


>>>
>>>
>>> --
>>> Dr. Silvan Kaiser
>>> Quobyte GmbH
>>> Hardenbergplatz 2, 10623 Berlin - Germany
>>> +49-30-814 591 800 - www.quobyte.com
>>> Amtsgericht Berlin-Charlottenburg, HRB 149012B
>>> Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
>>>
>>> 




Re: [openstack-dev] [Cinder] Requirements for re-adding Gluster support

2017-07-26 Thread Eric Harney
On 07/26/2017 05:08 PM, John Griffith wrote:
> On Wed, Jul 26, 2017 at 10:42 AM, Sean McGinnis <sean.mcgin...@gmx.com>
> wrote:
> 
>> On Wed, Jul 26, 2017 at 12:30:49PM +, Jeremy Stanley wrote:
>>> On 2017-07-26 12:56:55 +0200 (+0200), Niels de Vos wrote:
>>> [...]
>>>> My current guess is that adding a 3rd party CI [3] for Gluster is
>>>> the only missing piece?
>>> [...]
>>>
>>> I thought GlusterFS was free/libre software. If so, won't the Cinder
>>> team allow upstream testing in OpenStack's CI system for free
>>> backends/drivers? Maintaining a third-party CI system for that seems
>>> like overkill, but I'm unfamiliar with Cinder's particular driver
>>> testing policies.
>>> --
>>> Jeremy Stanley
>>
>> You are correct Jeremy. It wasn't a CI issue that caused the removal.
>> IIRC, Red Hat decided to focus on Ceph as the platform for Cinder
>> storage.
>>
>>
> Just confirming Sean's recollection, Eric Harney from Redhat was pretty
> much the sole maintainer of the Gluster code in Cinder and the decision was
> made that he would stop maintaining/supporting the Gluster driver in Cinder
> (and I believe he actually put out some calls asking for any volunteers
> that might want to pick it up).  I'll certainly let Eric speak to any
> details if he wishes so I don't misrepresent.
> 
> The bottom line is there was only one person maintaining it, CI is
> relatively easy with Gluster, there was even (IIRC) infra already in place
> to deploy/test in the upstream gate.
> 
> 
Yeah, this was pretty much it -- bringing back the driver just revolves
around having a person assigned to own and maintain it.

From a technical point of view there are not a lot of steps involved
here: we can restore the previous gate jobs and driver code, and I expect
things would still be in working order.

I can help coordinate these things with the new owner.



Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-26 Thread Eric Harney
On 06/19/2017 09:22 AM, Matt Riedemann wrote:
> On 6/16/2017 8:58 AM, Eric Harney wrote:
>> I'm not convinced yet that this failure is purely Ceph-specific, at a
>> quick look.
>>
>> I think what happens here is, unshelve performs an asynchronous delete
>> of a glance image, and returns as successful before the delete has
>> necessarily completed.  The check in tempest then sees that the image
>> still exists, and fails -- but this isn't valid, because the unshelve
>> API doesn't guarantee that this image is no longer there at the time it
>> returns.  This would fail on any image delete that isn't instantaneous.
>>
>> Is there a guarantee anywhere that the unshelve API behaves how this
>> tempest test expects it to?
> 
> There are no guarantees, no. The unshelve API reference is here [1]. The
> asynchronous postconditions section just says:
> 
> "After you successfully shelve a server, its status changes to ACTIVE.
> The server appears on the compute node.
> 
> The shelved image is deleted from the list of images returned by an API
> call."
> 
> It doesn't say the image is deleted immediately, or that it waits for
> the image to be gone before changing the instance status to ACTIVE.
> 
> I see there is also a typo in there, that should say after you
> successfully *unshelve* a server.
> 
> From an API user point of view, this is all asynchronous because it's an
> RPC cast from the nova-api service to the nova-conductor and finally
> nova-compute service when unshelving the instance.
> 
> So I think the test is making some wrong assumptions on how fast the
> image is going to be deleted when the instance is active.
> 
> As Ken'ichi pointed out in the Tempest change, Glance returns a 204 when
> deleting an image in the v2 API [2]. If the image delete is asynchronous
> then that should probably be a 202.
> 
> Either way the Tempest test should probably be in a wait loop for the
> image to be gone if it's really going to assert this.
> 

Thanks for confirming this.

What do we need to do to get this fixed in Tempest?  Nobody from Tempest
Core has responded to the revert patch [3] since this explanation was
posted.

IMO we should revert this for now and someone can implement a fixed
version if this test is needed.

[3] https://review.openstack.org/#/c/471352/
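
If someone does want to keep the assertion, the test needs to poll for the
image to disappear rather than check immediately -- a rough, untested sketch
(client and exception names are approximately what tempest provides):

    import time

    from tempest.lib import exceptions as lib_exc

    def wait_for_image_deleted(images_client, image_id, timeout=60, interval=1):
        # Poll until the shelved image is actually gone instead of asserting
        # right after the unshelve call returns.
        start = time.time()
        while time.time() - start < timeout:
            try:
                images_client.show_image(image_id)
            except lib_exc.NotFound:
                return
            time.sleep(interval)
        raise AssertionError('image %s still exists after %s seconds'
                             % (image_id, timeout))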

> [1]
> https://developer.openstack.org/api-ref/compute/?expanded=unshelve-restore-shelved-server-unshelve-action-detail#unshelve-restore-shelved-server-unshelve-action
> 
> [2]
> https://developer.openstack.org/api-ref/image/v2/index.html?expanded=delete-an-image-detail#delete-an-image
> 
> 




Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-16 Thread Eric Harney
On 06/16/2017 10:21 AM, Sean McGinnis wrote:
> 
> I don't think merging tests that are showing failures, then blacklisting
> them, is the right approach. And as Eric points out, this isn't
> necessarily just a failure with Ceph. There is a legitimate logical
> issue with what this particular test is doing.
> 
> But in general, to get back to some of the earlier points, I don't think
> we should be merging tests with known breakages until those breakages
> can be first addressed.
> 

As another example, this was the last round of this, in May:

https://review.openstack.org/#/c/332670/

which is a new tempest test for a Cinder API that is not supported by
all drivers.  The Ceph job failed on the tempest patch, correctly; the
test was merged anyway, and then the Ceph jobs broke:

https://bugs.launchpad.net/glance/+bug/1687538
https://review.openstack.org/#/c/461625/

This is really not a sustainable model.

And this is the _easy_ case, since Ceph jobs run in OpenStack infra and
are easily visible and trackable.  I'm not sure what the impact is on
Cinder third-party CI for other drivers.



Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-16 Thread Eric Harney
On 06/15/2017 10:51 PM, Ghanshyam Mann wrote:
> On Fri, Jun 16, 2017 at 9:43 AM,   wrote:
>> https://review.openstack.org/#/c/471352/   may be an example
> 
> If this is a case which is ceph related, I think we already discussed
> these kinds of cases, where functionality depends on the backend storage,
> and how to handle the corresponding test failures [1].
> 
> The solution there was that the Ceph job should exclude, by regex, test cases
> whose functionality is not implemented/supported in ceph. Jon
> Bernard is working on this test blacklist [2].
> 
> If there is any other job or case, then we can discuss/think about having
> the job run on the Tempest gate also, which I think we do in most cases.
> 
> And about making the ceph job voting, I remember we did not do that due
> to the stability of the job. The Ceph job fails frequently; once Jon's patches
> merge and the job is consistently stable then we can make it voting.
> 

I'm not convinced yet that this failure is purely Ceph-specific, at a
quick look.

I think what happens here is, unshelve performs an asynchronous delete
of a glance image, and returns as successful before the delete has
necessarily completed.  The check in tempest then sees that the image
still exists, and fails -- but this isn't valid, because the unshelve
API doesn't guarantee that this image is no longer there at the time it
returns.  This would fail on any image delete that isn't instantaneous.

Is there a guarantee anywhere that the unshelve API behaves how this
tempest test expects it to?

>>
>>
>> Original Mail
>> Sender:  ;
>> To:  ;
>> Date: 2017/06/16 05:25
>> Subject: Re: [openstack-dev] [all][qa][glance] some recent tempest problems
>>
>>
>> On 06/15/2017 01:04 PM, Brian Rosmaita wrote:
>>> This isn't a glance-specific problem though we've encountered it quite
>>> a few times recently.
>>>
>>> Briefly, we're gating on Tempest jobs that tempest itself does not
>>> gate on.  This leads to a situation where new tests can be merged in
>>> tempest, but wind up breaking our gate. We aren't claiming that the
>>> added tests are bad or don't provide value; the problem is that we
>>> have to drop everything and fix the gate.  This interrupts our current
>>> work and forces us to prioritize bugs to fix based not on what makes
>>> the most sense for the project given current priorities and resources,
>>> but based on whatever we can do to get the gates un-blocked.
>>>
>>> As we said earlier, this situation seems to be impacting multiple
>>> projects.
>>>
>>> One solution for this is to change our gating so that we do not run
>>> any Tempest jobs against Glance repositories that are not also gated
>>> by Tempest.  That would in theory open a regression path, which is why
>>> we haven't put up a patch yet.  Another way this could be addressed is
>>> by the Tempest team changing the non-voting jobs causing this
>>> situation into voting jobs, which would prevent such changes from
>>> being merged in the first place.  The key issue here is that we need
>>> to be able to prioritize bugs based on what's most important to each
>>> project.
>>>
>>> We want to be clear that we appreciate the work the Tempest team does.
>>> We abhor bugs and want to squash them too.  The problem is just that
>>> we're stretched pretty thin with resources right now, and being forced
>>> to prioritize bug fixes that will get our gate un-blocked is
>>> interfering with our ability to work on issues that may have a higher
>>> impact on end users.
>>>
>>> The point of this email is to find out whether anyone has a better
>>> suggestion for how to handle this situation.
>>
>> It would be useful to provide detailed examples. Everything is trade
>> offs, and having the conversation in the abstract is very difficult to
>> understand those trade offs.
>>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
> 
> 
> ..1 http://lists.openstack.org/pipermail/openstack-dev/2017-May/116172.html
> 
> ..2 https://review.openstack.org/#/c/459774/ ,
> https://review.openstack.org/#/c/459445/
> 
> 
> -gmann
> 
>> __



Re: [openstack-dev] [cinder] Target classes in Cinder

2017-06-02 Thread Eric Harney
On 06/02/2017 03:47 PM, John Griffith wrote:
> Hey Everyone,
> 
> So quite a while back we introduced a new model for dealing with target
> management in the drivers (ie initialize_connection, ensure_export etc).
> 
> Just to summarize a bit:  The original model was that all of the target
> related stuff lived in a base class of the base drivers.  Folks would
> inherit from said base class and off they'd go.  This wasn't very flexible,
> and it's why we ended up with things like two drivers per backend in the
> case of FibreChannel support.  So instead of just say having "driver-foo",
> we ended up with "driver-foo-iscsi" and "driver-foo-fc", each with their
> own CI, configs etc.  Kind of annoying.

We'd need separate CI jobs for the different target classes too.


> So we introduced this new model for targets, independent connectors or
> fabrics so to speak that live in `cinder/volume/targets`.  The idea being
> that drivers were no longer locked in to inheriting from a base class to
> get the transport layer they wanted, but instead, the targets class was
> decoupled, and your driver could just instantiate whichever type they
> needed and use it.  This was great in theory for folks like me: if I
> ever did FC, rather than create a second driver (the pattern of 3 classes:
> common, iscsi and FC), it would just be a config option for my driver, and
> I'd use the one you selected in config (or both).
> 
> Anyway, I won't go too far into the details around the concept (unless
> somebody wants to hear more), but the reality is it's been a couple years
> now and currently it looks like there are a total of 4 out of the 80+
> drivers in Cinder using this design, blockdevice, solidfire, lvm and drbd
> (and I implemented 3 of them I think... so that's not good).
> 
> What I'm wondering is, even though I certainly think this is a FAR SUPERIOR
> design to what we had, I don't like having both code-paths and designs in
> the code base.  Should we consider reverting the drivers that are using the
> new model back and remove cinder/volume/targets?  Or should we start
> flagging those new drivers that don't use the new model during review?
> Also, what about the legacy/burden of all the other drivers that are
> already in place?
> 
> Like I said, I'm biased and I think the new approach is much better in a
> number of ways, but that's a different debate.  I'd be curious to see what
> others think and what might be the best way to move forward.
> 
> Thanks,
> John
> 

Some perspective from my side here:  before reading this mail, I had a
bit different idea of what the target_drivers were actually for.

The LVM, block_device, and DRBD drivers use this target_driver system
because they manage "local" storage and then layer an iSCSI target on
top of it.  (scsi-target-utils, or LIO, etc.)  This makes sense from the
original POV of the LVM driver, which was doing this to work on multiple
different distributions that had to pick scsi-target-utils or LIO to
function at all.  The important detail here is that the
scsi-target-utils/LIO code could also then be applied to different
volume drivers.
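
Roughly, the composition pattern those drivers use looks like this (a
simplified sketch, not the actual LVM code; constructor arguments and the
way the target class is chosen are abbreviated):

    from oslo_utils import importutils

    from cinder.volume import driver


    class MyLocalDriver(driver.VolumeDriver):
        def __init__(self, *args, **kwargs):
            super(MyLocalDriver, self).__init__(*args, **kwargs)
            # The real drivers map a config option (iscsi_helper /
            # target_helper) to a class path; hardcoded here to keep
            # the sketch short.
            self.target_driver = importutils.import_object(
                'cinder.volume.targets.tgt.TgtAdm',
                configuration=self.configuration,
                db=self.db,
                executor=self._execute)

        def ensure_export(self, context, volume):
            # Delegate the target-side work to whichever target class
            # was instantiated above.
            return self.target_driver.ensure_export(
                context, volume, self.local_path(volume))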

The Solidfire driver is doing something different here, and using the
target_driver classes as an interface upon which it defines its own
target driver.  In this case, this splits up the code within the driver
itself, but doesn't enable plugging in other target drivers to the
Solidfire driver.  So the fact that it's tied to this defined
target_driver class interface doesn't change much.

The question, I think, mostly comes down to whether you get better code,
or better deployment configurability, by a) defining a few target
classes for your driver or b) defining a few volume driver classes for
your driver.   (See coprhd or Pure for some examples.)

I'm not convinced there is any difference in the outcome, so I can't see
why we would enforce any policy around this.  The main difference is in
which cinder.conf fields you set during deployment, the rest pretty much
ends up the same in either scheme.



Re: [openstack-dev] [cinder] [nova] How to provide additional options to NFS backend?

2017-05-31 Thread Eric Harney
On 05/25/2017 05:51 AM, Jiri Suchomel wrote:
> Hi,
> it seems to me that the way of adding extra NFS options to the cinder
> backend is somewhat confusing.
> 
> 1. There is  nfs_mount_options in cinder config file [1]
> 
> 2. Then I can put my options in the nfs_shares_config file - that
> it can contain additional options is mentioned in [2] and in the
> commit message that adds the feature [3]
> 
> Now, when I put my options to both of these places, cinder-volume
> actually uses them twice and executes the command like this
> 
> mount -t nfs -o nfsvers=3 -o nfsvers=3
> 192.168.241.10:/srv/nfs/vi7/cinder 
> /var/lib/cinder/mnt/f5689da9ea41a66eff2ce0ef89b37bce
> 
> BTW, the options coming from nfs_shares_config are called 'flags' by
> cinder/volume/drivers/nfs ([4]).
> 
> Now, to make it more fun, when I actually want to attach a volume to a
> running instance, nova uses a different way of working out which NFS options
> to use:
> 
> - It reads them from the _nova_ config option libvirt.nfs_mount_options
> [5]
> - or it uses those it gets from cinder when creating the cinder
> connection [6]. But these are only the options defined in the
> nfs_shares_config file, NOT the nfs_mount_options specified in the cinder
> config file.
> 
> 
> So. If I put my options in both places, the nfs_shares_config file and
> nfs_mount_options, it actually works how I want it to work, as the
> current mount does not complain that the option was provided twice. 
> 
> But it looks ugly. And I'm wondering - am I doing it wrong, or
> is there a problem with either cinder or nova (or both)?
> 

This has gotten a bit more confusing than is necessary in Cinder due to
how the configuration for the NFS and related drivers has been tweaked
over time.

The method of putting a list of shares in the nfs_shares_config file is
effectively deprecated, but still works for now.

The preferred method now is to set the following options:
   nas_host:  server address
   nas_share_path:  export path
   nas_mount_options:  options for mounting the export

So whereas before the nfs_shares_config file would have:
   127.0.0.1:/srv/nfs1 -o nfsvers=3

This would now translate to:
   nas_host=127.0.0.1
   nas_share_path=/srv/nfs1
   nas_mount_options = -o nfsvers=3

I believe if you try configuring the driver this way, you will get the
desired result.

The goal was to remove the nfs_shares_config config method, but this
hasn't happened yet -- I/we need to revisit this area and see about
doing this.

Eric

> 
> Jiri
> 
> 
> [1] https://docs.openstack.org/admin-guide/blockstorage-nfs-backend.html
> [2]
> https://docs.openstack.org/newton/config-reference/block-storage/drivers/nfs-volume-driver.html
> [3]
> https://github.com/openstack/cinder/commit/553e0d92c40c73aa1680743c4287f31770131c97
> [4]
> https://github.com/openstack/cinder/blob/stable/newton/cinder/volume/drivers/nfs.py#L163
> [5]
> https://github.com/openstack/nova/blob/stable/newton/nova/virt/libvirt/volume/nfs.py#L87
> [6] 
> https://github.com/openstack/nova/blob/stable/newton/nova/virt/libvirt/volume/nfs.py#L89
> 




Re: [openstack-dev] [qa][cinder][ceph] should Tempest tests the backend specific feature?

2017-05-02 Thread Eric Harney
On 05/02/2017 01:42 AM, Ghanshyam Mann wrote:
> In Cinder, there are many features/APIs which are backend specific and
> will return 405 or 501 if the feature is not implemented on a given backend [1].
> If tests for these are implemented in Tempest, then they will break gates
> where that backend job is voting, like the ceph job in the glance_store gate.
> 
> There have been many such cases recently where ceph jobs were broken due to
> such tests; most recently it is the force-delete backup feature [2].

This problem was detected on the initial patch [5], where the ceph gate
failed with the 405 error, but the patch was merged anyway.  Why are
there "many such cases" of these jobs getting broken?

[5] https://review.openstack.org/#/c/332670/

> The force-delete tests are being reverted in [3]. To resolve such cases to some
> extent, Jon is going to add a white/black list of tests which can run
> on the ceph job [4], depending on which features ceph implements. But this
> does not resolve it completely, for many reasons:
> 1. External use of Tempest becomes difficult, since the user needs to know
> which tests to skip for which backend.
> 2. Tempest tests become too specific to the backend.
> 
> Now there are a few options to resolve this:
> 1. Tempest should not test APIs/features which are backend
> specific, as indicated by the api-ref [1].

Is the proposal here to test these features via the in-tree Cinder
tempest tests instead of from tempest itself, or just not test any
features in Cinder which have backend-specific differences?

> 2. Tempest tests can be disabled/skipped based on backend. - This is not a
> good idea as it increases config options and the overhead of setting them.

This option seems like the most straightforward way to get both good
test coverage and handle compatibility.

> 3. Tempest tests can verify behavior with if/else conditions per
> backend. This is a bad idea and weakens the tests.
> 
> IMO option 1 is the better option. More feedback is welcome.
> 
> ..1 
> https://developer.openstack.org/api-ref/block-storage/v3/?expanded=force-delete-a-backup-detail#force-delete-a-backup
> ..2 https://bugs.launchpad.net/glance/+bug/1687538
> ..3 https://review.openstack.org/#/c/461625/
> ..4 http://lists.openstack.org/pipermail/openstack-dev/2017-April/115229.html
> 
> -gmann
> 
> 




Re: [openstack-dev] [glance][cinder][nova] zip for raw Disk Format

2016-10-05 Thread Eric Harney
On 10/05/2016 05:46 AM, Chen CH Ji wrote:
> 
> From [1] we support a few common and vendor specific disk formats. Raw
> images can sometimes be very big -- do we have any existing method to
> compress ("zip") the raw disk so that disk space can be saved, as an option
> offered to end users and admins? Could something like [2] be enhanced to
> support a zipped format?
> Thanks
> 

The qcow2 format supports compression and is one of the most widely
supported image formats that can be used with OpenStack.

You can convert a raw image to qcow2 with:
 $ qemu-img convert -f raw -O qcow2 -c image.raw image.qcow2

And then upload image.qcow2 to Glance.
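
If you want to sanity-check the result from Python before uploading, Cinder's
image_utils wrapper around qemu-img can do it -- a rough sketch (assumes
qemu-img is installed and the file is readable by the current user):

    from cinder.image import image_utils

    info = image_utils.qemu_img_info('image.qcow2', run_as_root=False)
    print(info.file_format)  # should report 'qcow2'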

> [1] http://docs.openstack.org/developer/glance/formats.html
> [2]
> https://github.com/openstack/cinder/blob/master/cinder/image/image_utils.py#L182
> 
> Best Regards!
> 
> Kevin (Chen) Ji 纪 晨
> 
> Engineer, zVM Development, CSTL
> Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
> Phone: +86-10-82454158
> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
> Beijing 100193, PRC
> 




Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-09-02 Thread Eric Harney
On 08/15/2016 04:48 AM, Thierry Carrez wrote:
> Sean Dague wrote:
>> On 08/12/2016 01:10 PM, Walter A. Boring IV wrote:
>>> I believe there is a compromise that we could implement in Cinder that
>>> enables us to have a deprecation
>>> of unsupported drivers that aren't meeting the Cinder driver
>>> requirements and allow upgrades to work
>>> without outright immediately removing a driver.
>>>
>>>  1. Add a 'supported = True' attribute to every driver.
>>>  2. When a driver no longer meets Cinder community requirements, put a
>>> patch up against the driver
>>>  3. When c-vol service starts, check the supported flag.  If the flag is
>>> False, then log an exception, and disable the driver.
>>>  4. Allow the admin to put an entry in cinder.conf for the driver in
>>> question "enable_unsupported_driver = True".  This will allow the
>>> c-vol service to start the driver and allow it to work.  Log a
>>> warning on every driver call.
>>>  5. This is a positive acknowledgement by the operator that they are
>>> enabling a potentially broken driver. Use at your own risk.
>>>  6. If the vendor doesn't get the CI working in the next release, then
>>> remove the driver. 
>>>  7. If the vendor gets the CI working again, then set the supported flag
>>> back to True and all is good. 
>>>
>>> This allows a deprecation period for a driver, and keeps operators who
>>> upgrade their deployment from losing access to their volumes they have
>>> on those back-ends.  It will give them time to contact the community
>>> and/or do some research, and find out what happened to the driver.  
>>> This also potentially gives the operator time to find a new supported
>>> backend and start migrating volumes.  I say potentially, because the
>>> driver may be broken, or it may work enough to migrate volumes off of it
>>> to a new backend.
>>>
>>> Having unsupported drivers in tree is terrible for the Cinder community,
>>> and in the long run terrible for operators.
>>> Instantly removing drivers because CI is unstable is terrible for
>>> operators in the short term, because as soon as they upgrade OpenStack,
>>> they lose all access to managing their existing volumes.   Just because
>>> we leave a driver in tree in this state, doesn't mean that the operator
>>> will be able to migrate if the drive is broken, but they'll have a
>>> chance depending on the state of the driver in question.  It could be
>>> horribly broken, but the breakage might be something fixable by someone
>>> that just knows Python.   If the driver is gone from tree entirely, then
>>> that's a lot more to overcome.
>>>
>>> I don't think there is a way to make everyone happy all the time, but I
>>> think this buys operators a small window of opportunity to still manage
>>> their existing volumes before the driver is removed.  It also still
>>> allows the Cinder community to deal with unsupported drivers in a way
>>> that will motivate vendors to keep their stuff working.
>>
>> This seems very reasonable. It allows the cinder team to mark stuff
>> unsupported at any point that vendors do not meet their upstream
>> commitments, but still provides some path forward for operators that
>> didn't realize their chosen vendor abandoned them and the community
>> until after they are in the midst of upgrade. It's very important that
>> the cinder team is able to keep a very visible hammer for vendors not
>> living up to their commitments.
>>
>> Keeping some visible data around drivers that are flapping (going
>> unsupported, showing up with CI to get back out of the state,
>> disappearing again) would be great as well, to further give operators
>> data on what vendors are working in good faith and which aren't.
> 
> I like this a lot, and it certainly would address the deprecation policy
> part.
> 
> Sean: I was wondering if that would not still be considered breaking
> upgrades, though... Since you end up upgrading and your c-vol would not
> restart until you set enable_unsupported_driver = True ?
> 

Kind of late jumping in here, but I'm curious what you guys think on
this last point.

I have a similar feeling, that this may still be breaking upgrades in an
undesirable way.

There are softer alternatives that could still communicate that a driver
is unsupported and not halt the upgrade path, such as printing warning
messages when the c-vol service starts up, etc.
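
Something along these lines, for example -- just a sketch of the idea, not a
worked-out patch, and it assumes the per-driver 'supported' flag from Walt's
proposal:

    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)


    def _check_driver_supported(driver, backend_name):
        # Softer alternative: keep the driver running and make its state
        # very visible, rather than refusing to start c-vol after an upgrade.
        if not getattr(driver, 'supported', True):
            LOG.warning('Driver %(driver)s for backend %(backend)s no longer '
                        'meets Cinder driver requirements and may be removed '
                        'in a future release. Existing volumes remain '
                        'manageable, but plan a migration.',
                        {'driver': driver.__class__.__name__,
                         'backend': backend_name})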




Re: [openstack-dev] [cinder] Volume Drivers unit tests

2016-07-22 Thread Eric Harney
On 07/21/2016 05:26 PM, Knight, Clinton wrote:
> Nate, you have to press Ctrl-C to see the in-progress test, that’s why you 
> don’t 
> see it in the logs.  The bug report shows this and points to the patch where 
> it 
> appeared to begin. https://bugs.launchpad.net/cinder/+bug/1578986
> 
> Clinton
> 

I think this only gives a backtrace of the test runner and not the test.

I attached gdb when this hang occurred and saw this.  Looks like we still
have a thread running the oslo.messaging fake driver.

http://paste.openstack.org/raw/539769/

(Linked in the bug report as well.)

> *From: *"Potter, Nathaniel" 
> *Reply-To: *"OpenStack Development Mailing List (not for usage questions)" 
> 
> *Date: *Thursday, July 21, 2016 at 7:17 PM
> *To: *"OpenStack Development Mailing List (not for usage questions)" 
> 
> *Subject: *Re: [openstack-dev] [cinder] Volume Drivers unit tests
> 
> Hi all,
> 
> I’m not totally sure that this is the same issue, but lately I’ve seen the 
> gate 
> tests fail while hanging at this point [1], but they say ‘ok’ rather than 
> ‘inprogress’. Has anyone else come across this? It only happens sometimes, 
> and a 
> recheck can get past it. The full log is here [2].
> 
> [1] http://paste.openstack.org/show/539314/
> 
> [2] 
> http://logs.openstack.org/90/341090/6/check/gate-cinder-python34-db/ea65de5/console.html
> 
> Thanks,
> 
> Nate
> 
> *From:*yang, xing [mailto:xing.y...@emc.com]
> *Sent:* Thursday, July 21, 2016 3:17 PM
> *To:* OpenStack Development Mailing List (not for usage questions) 
> 
> *Subject:* Re: [openstack-dev] [cinder] Volume Drivers unit tests
> 
> Hi Ivan,
> 
> Do you have any logs for the VMAX driver?  We'll take a look.
> 
> Thanks,
> 
> Xing
> 
> 
> 
> *From:*Ivan Kolodyazhny [e...@e0ne.info]
> *Sent:* Thursday, July 21, 2016 4:44 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [cinder] Volume Drivers unit tests
> 
> Thank you Xing,
> 
> The issue is related both to VNX and VMAX EMC drivers
> 
> 
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
> 
> On Thu, Jul 21, 2016 at 11:00 PM, yang, xing  > wrote:
> 
> Hi Ivan,
> 
> Thanks for sending this out.  Regarding the issue in the EMC VNX driver 
> unit
> tests, it is tracked by this bug
> https://bugs.launchpad.net/cinder/+bug/1578986. The driver was recently
> refactored so this is probably a new issue introduced by the refactor. 
> We are investigating this issue.
> 
> Thanks,
> 
> Xing
> 
> 
> 
> 
> *From:*Ivan Kolodyazhny [e...@e0ne.info ]
> *Sent:* Thursday, July 21, 2016 1:02 PM
> *To:* OpenStack Development Mailing List
> *Subject:* [openstack-dev] [cinder] Volume Drivers unit tests
> 
> Hi team,
> 
> First of all, I would like to apologize if my mail is too emotional. I
> spent too much time trying to fix it and failed.
> 
> TL;DR;
> 
> What I want to say is: "Let's spend some time to make our tests better and
> fix all issues". Patch [1] is still unstable. Unit tests can pass or fail 
> in
> a in a random order. Also, I've disabled some tests to pass CI.
> 
> Long version:
> 
> While I was working on patch "Move drivers unit tests to 
> unit.volume.drivers
> directory" [1] I've found a lot of issues with our unit tests :(. Not all 
> of
> them are already fixed, so that patch is still in progress
> 
> What did I found and what should we have to fix:
> 
> 1) Execution time [2]. I don't want to argue about what a unit test is, but 2-4
> seconds per test should be unacceptable, IMO.
> 
> 2) Execution order. Seriously, do you know that our tests will fail or
> hang
> if the execution order changes? Even if one test for driver A fails, some
> tests for driver B will fail too.
> 
> 3) Lack of mocking. It's a root cause of #2. We didn't mock sleeps and event
> loops right. We don't mock RPC calls well either [3]. We don't
> have a 'cinder.openstack.common.rpc.impl_fake' module in the Cinder tree.
> 
> In some drivers, we use oslo_service.loopingcall.FixedIntervalLoopingCall
> [4]. We've got a ZeroIntervalLoopingCall [5] class in Cinder. Do we use it
> everywhere, or mock FixedIntervalLoopingCall correctly? I don't think so. I've
> hacked oslo_service in my env to raise an exception if interval > 0. 297
> tests failed. It means our tests use sleep. We have to get rid of this.
> TBH, not only volume driver unit tests failed. E.g. some API unit tests
> failed too.
> 
> 4) Due to #3, sometimes unit tests hangs even on master branch with a 
> minor
> 

Re: [openstack-dev] [stable][liberty] [cinder] is stable liberty broken?

2016-07-07 Thread Eric Harney
On 07/07/2016 02:41 AM, Chen CH Ji wrote:
> Hi,
>    I am backporting https://review.openstack.org/#/c/333749/
>    to stable/liberty, and it failed in the gating job. I then submitted another
>    doc change https://review.openstack.org/#/c/338699/ to verify, and it seems
>    to fail for the same reason; I have no idea what's wrong in the test... can
>    someone help take a look or give some hints?
> error is :
> 
>  
> ft17.1: 
> cinder.tests.unit.api.contrib.test_quotas.QuotaSetsControllerTest.test_delete_StringException:
>  Empty attachments:
>pythonlogging:''
>stderr
>stdout
> 
> Traceback (most recent call last):
>File "cinder/tests/unit/api/contrib/test_quotas.py", line 100, in setUp
>  self.fixture = self.useFixture(config_fixture.Config(auth_token.CONF))
> AttributeError: 'module' object has no attribute 'CONF'
> 
> 
> 

Yes, it looks like it.

I filed bug https://bugs.launchpad.net/cinder/+bug/1599855 for this.

It looks like the way we are setting keystonemiddleware options breaks
with newer keystonemiddleware releases.

keystonemiddleware 2.6.0 works here, 4.6.0 does not.
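
I haven't worked out the right fix yet, but the test probably needs to stop
assuming keystonemiddleware exposes a module-level CONF -- something in this
direction (untested sketch):

    # in cinder/tests/unit/api/contrib/test_quotas.py setUp(), roughly:
    from oslo_config import cfg
    from oslo_config import fixture as config_fixture

    self.fixture = self.useFixture(config_fixture.Config(cfg.CONF))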

Thanks for the report.




Re: [openstack-dev] [Cinder] Nominating Scott D'Angelo to Cinder core

2016-06-28 Thread Eric Harney
On 06/27/2016 01:27 PM, Sean McGinnis wrote:
> I would like to nominate Scott D'Angelo to core. Scott has been very
> involved in the project for a long time now and is always ready to help
> folks out on IRC. His contributions [1] have been very valuable and he
> is a thorough reviewer [2].
> 
> Please let me know if there are any objects to this within the next
> week. If there are none I will switch Scott over by next week, unless
> all cores approve prior to then.
> 
> Thanks!
> 
> Sean McGinnis (smcginnis)
> 
> [1] 
> https://review.openstack.org/#/q/owner:%22Scott+DAngelo+%253Cscott.dangelo%2540hpe.com%253E%22+status:merged
> [2] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt
> 


Definitely a +2.  Welcome, Scott!



Re: [openstack-dev] [cinder] A friendly reminder about reviews

2016-05-11 Thread Eric Harney
And for completeness, I'd also like to mention that there is this nice
dashboard:

http://status.openstack.org/reviews/#cinder




Re: [openstack-dev] [cinder] A friendly reminder about reviews

2016-05-11 Thread Eric Harney
On 05/11/2016 05:16 AM, Ivan Kolodyazhny wrote:
> Hi all,
> 
> I would like to kindly ask you to pay a bit more attention to the following 
> areas:
> 
>   * Specs: we don't review specs as well as we review code :(. TBH, my spec
> review count is very low too.
>   * os-brick and python-brick-cinderclient-ext - we've got a lack of code and
> review contribution for python-brick-cinderclient-ext
>   * stable branches - let's help our stable maintainers team to review such
> patches; It's pretty sad for me and all contributors that we can't merge
> patches until branch EoL just because we didn't review them in time
> 

Just as a useful tip, I've saved the following as a "Cinder stable" item
on my gerrit menu (in gerrit prefs):

#/q/status:open+AND+(project:openstack/cinder+OR+project:openstack/os-brick)+AND+NOT+branch:master




Re: [openstack-dev] [Cinder] Nominating Michał Dulko to Cinder Core

2016-05-03 Thread Eric Harney
On 05/03/2016 02:16 PM, Sean McGinnis wrote:
> Hey everyone,
> 
> I would like to nominate Michał Dulko to the Cinder core team. Michał's
> contributions with both code reviews [0] and code contributions [1] have
> been significant for some time now.
> 
> His persistence with versioned objects has been instrumental in getting
> support in the Mitaka release for rolling upgrades.
> 
> If there are no objections from current cores by next week, I will add
> Michał to the core group.
> 
> [0] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt
> [1]
> https://review.openstack.org/#/q/owner:%22Michal+Dulko+%253Cmichal.dulko%2540intel.com%253E%22++status:merged
> 
> Thanks!
> 
> Sean McGinnis (smcginnis)
> 
> 

+1, definitely a strong addition to the team.

Eric




Re: [openstack-dev] [cinder] Does the OpenStack community(or Cinder team) allow one driver to call another driver's public method?

2016-03-19 Thread Eric Harney
On 03/18/2016 10:01 AM, Sean McGinnis wrote:
> On Fri, Mar 18, 2016 at 04:05:34AM +, liuxinguo wrote:
>> Hi Cinder team,
>>
>> We are going to implement storage-assisted volume migration in our driver
>> between different backend storage arrays, or even arrays from different
>> vendors.
>> This is much more efficient than host-copy migration between
>> arrays from different vendors.
>>
>> To implement this, we need to call another backend's methods like
>> create_volume() or initialize_connection(). We can call them the way
>> cinder/volume/manager.py does:
>>
>> rpcapi.create_volume(ctxt, new_volume, host['host'],
>>  None, None, allow_reschedule=False)
>>
>> or
>> conn = rpcapi.initialize_connection(ctxt, volume, properties)
>>
>> And my question is: does the OpenStack community (or Cinder team) allow a
>> driver to call rpcapi in order to call another driver's methods like
>> create_volume() or initialize_connection()?
>>
> 
> This is an interesting question. I have thought in the past we may be
> able to do some interesting things, particularly with more involved
> replication or migration scenarios.
> 
> We do not currently do this. Ideally I think we would want the other
> driver instance passed in to the source driver so each driver would not
> need to do something special to look it up.
> 

I believe Jon Bernard researched this same idea a bit while implementing
generic volume migration [2] and found that there were a handful of
reasons that it doesn't really work.

[2] https://review.openstack.org/#/c/187270/

> You do have the option today of optimizing migrate for your driver [1].
> But I think especially in cross-vendor migrations, there are things that
> need to be done outside the scope of a driver that are currently handled
> by Cinder.
> 
> There could be a valid use case for driver to driver interfaces, but I
> think as it is now, what I think you are looking for is something that
> is a little more involved and would need a little more design (and a lot
> more discussion) to support.
> 
> [1]
> https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L1552
> 
>>
>> Thanks for any input!
>> --
>> Wilson Liu
> 






Re: [openstack-dev] [cinder] Proposal: changes to our current testing process

2016-03-07 Thread Eric Harney
On 03/06/2016 09:35 PM, John Griffith wrote:
> On Sat, Mar 5, 2016 at 4:27 PM, Jay S. Bryant > wrote:
> 
>> Ivan,
>>
>> I agree that our testing needs improvement.  Thanks for starting this
>> thread.
>>
>> With regards to adding a hacking check for tests that run too long ... are
>> you thinking that we would have a timer that checks for long running jobs or
>> something that checks for long sleeps in the testing code?  Just curious
>> about your ideas for tackling that situation.  Would be interested in helping
>> with that, perhaps.
>>
>> Thanks!
>> Jay
>>
>>
>> On 03/02/2016 05:25 AM, Ivan Kolodyazhny wrote:
>>
>> Hi Team,
>>
>> Here are my thoughts and proposals how to make Cinder testing process
>> better. I won't cover "3rd party CI's" topic here. I will share my opinion
>> about current and feature jobs.
>>
>>
>> Unit-tests
>>
>>- Long-running tests. I hope everybody will agree that unit tests
>>must be quite simple and very fast. Unit tests which take more than 3-5
>>seconds should be refactored and/or moved to 'integration' tests.
>>Thanks to Tom Barron for several fixes like [1]. IMO, it would be
>>good to have some hacking checks to prevent such issues in the future.
>>
>>- Tests coverage. We don't check it in an automatic way on gates.
>>Usually, we require adding some unit tests during the code review process. Why
>>can't we add a coverage job to our CI and not merge new patches which
>>would decrease the test coverage rate? Maybe such a job could be voting in
>>the future so it is not ignored. For now, there is no simple way to check
>>coverage because 'tox -e cover' output is not useful [2].
>>
>>
>> Functional tests for Cinder
>>
>> We introduced some functional tests last month [3]. Here is a patch to
>> infra to add new job [4]. Because these tests were moved from unit-tests, I
>> think we're OK to make this job voting. Such tests should not be a
>> replacement for Tempest. They even could tests Cinder with Fake Driver to
>> make it faster and not related on storage backends issues.
>>
>>
>> Tempest in-tree tests
>>
>> Sean started work on it [5] and I think it's a good idea to get them in
>> Cinder repo to run them on Tempest jobs and 3-rd party CIs against a real
>> backend.
>>
>>
>> Functional tests for python-brick-cinderclient-ext
>>
>> There are patches that introduces functional tests [6] and new job [7].
>>
>>
>> Functional tests for python-cinderclient
>>
>> We've got a very limited set of such tests and a non-voting job. IMO, we can
>> run them even with the Cinder Fake Driver to make them not dependent on a
>> storage backend and make them faster. I believe we can make this job voting
>> soon. Also, we need more contributors to this kind of test.
>>
>>
>> Integrated tests for python-cinderclient
>>
>> We need such tests to make sure that we won't break Nova, Heat or other
>> python-cinderclient consumers with a next merged patch. There is a thread
>> in openstack-dev ML about such tests [8] and proposal [9] to introduce them
>> to python-cinderclient.
>>
>>
>> Rally tests
>>
>> IMO, it would be good to have new Rally scenarios for every patch that
>> 'improves performance', 'fixes concurrency issues', etc. Even if we as a
>> Cinder community don't have enough time to implement them, we have to ask
>> for them in reviews, openstack-dev ML, file Rally bugs and blueprints if
>> needed.
>>
>>
>> [1] https://review.openstack.org/#/c/282861/
>> [2] http://paste.openstack.org/show/488925/
>> [3] https://review.openstack.org/#/c/267801/
>> [4] https://review.openstack.org/#/c/287115/
>> [5] https://review.openstack.org/#/c/274471/
>> [6] https://review.openstack.org/#/c/265811/
>> [7] https://review.openstack.org/#/c/265925/
>> [8]
>> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088027.html
>> [9] https://review.openstack.org/#/c/279432/
>>
>>
>> Regards,
>> Ivan Kolodyazhny,
>> http://blog.e0ne.info/
>>
>>
>>
>> ​We could just parse out the tox slowest tests output we already have.  Do
> something like pylint where we look at existing/current slowest test and
> balk if that's exceeded.
> 
> Thoughts?
> 
> John​
> 

I'm not really sure that writing a "hacking" check for this is a
worthwhile investment.  (It's not really a hacking check, but something
more like what you're describing; that's beside the point, though.)

We should just be looking for large, complex unit tests in review, and
the ones that we already have should be moving towards the functional
test area anyway.

So what would the objective here be exactly?
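
(For reference, the kind of threshold check John describes could be sketched
roughly as below.  This is untested, and it assumes the per-test timings have
already been dumped to a plain "test_id runtime" listing rather than parsing
tox/testr output directly:)

    import sys

    THRESHOLD_SECONDS = 5.0

    def find_slow_tests(path):
        # Collect (test_id, runtime) pairs that exceed the threshold.
        slow = []
        with open(path) as timings:
            for line in timings:
                parts = line.split()
                if len(parts) < 2:
                    continue
                try:
                    runtime = float(parts[-1])
                except ValueError:
                    continue
                if runtime > THRESHOLD_SECONDS:
                    slow.append((parts[0], runtime))
        return slow

    if __name__ == '__main__':
        offenders = find_slow_tests(sys.argv[1])
        for test_id, runtime in offenders:
            print('%s: %.1fs exceeds the %.1fs limit'
                  % (test_id, runtime, THRESHOLD_SECONDS))
        sys.exit(1 if offenders else 0)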

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposal: changes to our current testing process

2016-03-02 Thread Eric Harney
On 03/02/2016 10:07 AM, Ivan Kolodyazhny wrote:
> Eric,
> 
> For now, we test the Cinder API with some concurrency only with Rally, so, IMO,
> it's reasonable to get more scenarios for API race fixes.
> 
> It's not a hard task to implement new scenarios, they are pretty simple:
> [11] and [12]
> 

Sure, these are simple, but I think it's nowhere near that simple to
write a scenario which will prove that "remove API races" works correctly.
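
(For anyone skimming the archive: a task entry in rally-jobs/cinder.yaml such
as [12] below is just a small block of YAML along these lines.  This is a
rough sketch from memory -- the scenario and option names may differ between
Rally versions:)

    CinderVolumes.create_and_delete_volume:
      - args:
          size: 1
        runner:
          type: "constant"
          times: 10
          concurrency: 5
        context:
          users:
            tenants: 1
            users_per_tenant: 1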

> [11]
> https://github.com/openstack/rally/blob/master/rally/plugins/openstack/scenarios/cinder/volumes.py#L535
> [12]
> https://github.com/openstack/rally/blob/master/rally-jobs/cinder.yaml#L516
> 
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
> 
> On Wed, Mar 2, 2016 at 4:50 PM, Eric Harney <ehar...@redhat.com> wrote:
> 
>> On 03/02/2016 09:36 AM, Ivan Kolodyazhny wrote:
>>> Eric,
>>>
>>> There are Gorka's patches [10] to remove API Races
>>>
>>>
>>> [10]
>>>
>> https://review.openstack.org/#/q/project:openstack/cinder+branch:master+topic:fix/api-races-simplified
>>>
>>> Regards,
>>> Ivan Kolodyazhny,
>>> http://blog.e0ne.info/
>>>
>>
>> So the second part of my question is, is writing a Rally job to prove
>> out that code a reasonable task?
>>
>> How hard is that to do and what does it look like?
>>
>>> On Wed, Mar 2, 2016 at 4:27 PM, Eric Harney <ehar...@redhat.com> wrote:
>>>
>>>> On 03/02/2016 06:25 AM, Ivan Kolodyazhny wrote:
>>>>> Hi Team,
>>>>>
>>>>> Here are my thoughts and proposals how to make Cinder testing process
>>>>> better. I won't cover "3rd party CI's" topic here. I will share my
>>>> opinion
>>>>> about current and feature jobs.
>>>>>
>>>>>
>>>>> Unit-tests
>>>>>
>>>>>- Long-running tests. I hope, everybody will agree that unit-tests
>>>> must
>>>>>be quite simple and very fast. Unit tests which takes more than 3-5
>>>> seconds
>>>>>should be refactored and/or moved to 'integration' tests.
>>>>>Thanks to Tom Barron for several fixes like [1]. IMO, we it would be
>>>>>good to have some hacking checks to prevent such issues in a future.
>>>>>
>>>>>- Tests coverage. We don't check it in an automatic way on gates.
>>>>>Usually, we require to add some unit-tests during code review
>>>> process. Why
>>>>>can't we add coverage job to our CI and do not merge new patches,
>> with
>>>>>will decrease tests coverage rate? Maybe, such job could be voting
>> in
>>>> a
>>>>>future to not ignore it. For now, there is not simple way to check
>>>> coverage
>>>>>because 'tox -e cover' output is not useful [2].
>>>>>
>>>>>
>>>>> Functional tests for Cinder
>>>>>
>>>>> We introduced some functional tests last month [3]. Here is a patch to
>>>>> infra to add new job [4]. Because these tests were moved from
>>>> unit-tests, I
>>>>> think we're OK to make this job voting. Such tests should not be a
>>>>> replacement for Tempest. They even could tests Cinder with Fake Driver
>> to
>>>>> make it faster and not related on storage backends issues.
>>>>>
>>>>>
>>>>> Tempest in-tree tests
>>>>>
>>>>> Sean started work on it [5] and I think it's a good idea to get them in
>>>>> Cinder repo to run them on Tempest jobs and 3-rd party CIs against a
>> real
>>>>> backend.
>>>>>
>>>>>
>>>>> Functional tests for python-brick-cinderclient-ext
>>>>>
>>>>> There are patches that introduces functional tests [6] and new job [7].
>>>>>
>>>>>
>>>>> Functional tests for python-cinderclient
>>>>>
>>>>> We've got a very limited set of such tests and non-voting job. IMO, we
>>>> can
>>>>> run them even with Cinder Fake Driver to make them not depended on a
>>>>> storage backend and make it faster. I believe, we can make this job
>>>> voting
>>>>> soon. Also, we need more contributors to this kind of tests.
>>>>>
>>>>>
>>>>> Integrated tests for python-cinderclient

Re: [openstack-dev] [cinder] Proposal: changes to our current testing process

2016-03-02 Thread Eric Harney
On 03/02/2016 09:36 AM, Ivan Kolodyazhny wrote:
> Eric,
> 
> There are Gorka's patches [10] to remove API Races
> 
> 
> [10]
> https://review.openstack.org/#/q/project:openstack/cinder+branch:master+topic:fix/api-races-simplified
> 
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
> 

So the second part of my question is, is writing a Rally job to prove
out that code a reasonable task?

How hard is that to do and what does it look like?

> On Wed, Mar 2, 2016 at 4:27 PM, Eric Harney <ehar...@redhat.com> wrote:
> 
>> On 03/02/2016 06:25 AM, Ivan Kolodyazhny wrote:
>>> Hi Team,
>>>
>>> Here are my thoughts and proposals how to make Cinder testing process
>>> better. I won't cover "3rd party CI's" topic here. I will share my
>> opinion
>>> about current and feature jobs.
>>>
>>>
>>> Unit-tests
>>>
>>>- Long-running tests. I hope, everybody will agree that unit-tests
>> must
>>>be quite simple and very fast. Unit tests which takes more than 3-5
>> seconds
>>>should be refactored and/or moved to 'integration' tests.
>>>Thanks to Tom Barron for several fixes like [1]. IMO, we it would be
>>>good to have some hacking checks to prevent such issues in a future.
>>>
>>>- Tests coverage. We don't check it in an automatic way on gates.
>>>Usually, we require to add some unit-tests during code review
>> process. Why
>>>can't we add coverage job to our CI and do not merge new patches, with
>>>will decrease tests coverage rate? Maybe, such job could be voting in
>> a
>>>future to not ignore it. For now, there is not simple way to check
>> coverage
>>>because 'tox -e cover' output is not useful [2].
>>>
>>>
>>> Functional tests for Cinder
>>>
>>> We introduced some functional tests last month [3]. Here is a patch to
>>> infra to add new job [4]. Because these tests were moved from
>> unit-tests, I
>>> think we're OK to make this job voting. Such tests should not be a
>>> replacement for Tempest. They even could tests Cinder with Fake Driver to
>>> make it faster and not related on storage backends issues.
>>>
>>>
>>> Tempest in-tree tests
>>>
>>> Sean started work on it [5] and I think it's a good idea to get them in
>>> Cinder repo to run them on Tempest jobs and 3-rd party CIs against a real
>>> backend.
>>>
>>>
>>> Functional tests for python-brick-cinderclient-ext
>>>
>>> There are patches that introduces functional tests [6] and new job [7].
>>>
>>>
>>> Functional tests for python-cinderclient
>>>
>>> We've got a very limited set of such tests and non-voting job. IMO, we
>> can
>>> run them even with Cinder Fake Driver to make them not depended on a
>>> storage backend and make it faster. I believe, we can make this job
>> voting
>>> soon. Also, we need more contributors to this kind of tests.
>>>
>>>
>>> Integrated tests for python-cinderclient
>>>
>>> We need such tests to make sure that we won't break Nova, Heat or other
>>> python-cinderclient consumers with a next merged patch. There is a thread
>>> in openstack-dev ML about such tests [8] and proposal [9] to introduce
>> them
>>> to python-cinderclient.
>>>
>>>
>>> Rally tests
>>>
>>> IMO, it would be good to have new Rally scenarios for every patches like
>>> 'improves performance', 'fixes concurrency issues', etc. Even if we as a
>>> Cinder community don't have enough time to implement them, we have to ask
>>> for them in reviews, openstack-dev ML, file Rally bugs and blueprints if
>>> needed.
>>>
>>
>> Are there any recent examples of a fix like this where it would
>> seem like a reasonable task to write a Rally scenario along with the patch?
>>
>> Not being very familiar with Rally (as I think most of us aren't), I'm
>> having a hard time picturing this.
>>
>>>
>>> [1] https://review.openstack.org/#/c/282861/
>>> [2] http://paste.openstack.org/show/488925/
>>> [3] https://review.openstack.org/#/c/267801/
>>> [4] https://review.openstack.org/#/c/287115/
>>> [5] https://review.openstack.org/#/c/274471/
>>> [6] https://review.openstack.org/#/c/265811/
>>> [7] https://review.openstack.org/#/c/265925/
>>> [8]
>>>
>> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088027.html
>>> [9] https://review.openstack.org/#/c/279432/
>>>
>>>
>>> Regards,
>>> Ivan Kolodyazhny,
>>> http://blog.e0ne.info/
>>>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposal: changes to our current testing process

2016-03-02 Thread Eric Harney
On 03/02/2016 06:25 AM, Ivan Kolodyazhny wrote:
> Hi Team,
> 
> Here are my thoughts and proposals how to make Cinder testing process
> better. I won't cover "3rd party CI's" topic here. I will share my opinion
> about current and feature jobs.
> 
> 
> Unit-tests
> 
>    - Long-running tests. I hope everybody will agree that unit tests must
>    be quite simple and very fast. Unit tests which take more than 3-5 seconds
>    should be refactored and/or moved to 'integration' tests.
>    Thanks to Tom Barron for several fixes like [1]. IMO, it would be
>    good to have some hacking checks to prevent such issues in the future.
> 
>    - Test coverage. We don't check it in an automatic way on gates.
>    Usually, we require adding some unit tests during the code review process. Why
>    can't we add a coverage job to our CI and refuse to merge new patches which
>    would decrease the test coverage rate? Maybe such a job could be voting in the
>    future so it isn't ignored. For now, there is no simple way to check coverage
>    because 'tox -e cover' output is not useful [2].
> 
> 
> Functional tests for Cinder
> 
> We introduced some functional tests last month [3]. Here is a patch to
> infra to add a new job [4]. Because these tests were moved from unit tests, I
> think we're OK to make this job voting. Such tests should not be a
> replacement for Tempest. They could even test Cinder with the Fake Driver to
> make them faster and not dependent on storage backend issues.
> 
> 
> Tempest in-tree tests
> 
> Sean started work on it [5] and I think it's a good idea to get them in
> Cinder repo to run them on Tempest jobs and 3-rd party CIs against a real
> backend.
> 
> 
> Functional tests for python-brick-cinderclient-ext
> 
> There are patches that introduce functional tests [6] and a new job [7].
> 
> 
> Functional tests for python-cinderclient
> 
> We've got a very limited set of such tests and a non-voting job. IMO, we can
> run them even with the Cinder Fake Driver to make them not dependent on a
> storage backend and make them faster. I believe we can make this job voting
> soon. Also, we need more contributors to this kind of test.
> 
> 
> Integrated tests for python-cinderclient
> 
> We need such tests to make sure that we won't break Nova, Heat or other
> python-cinderclient consumers with a next merged patch. There is a thread
> in openstack-dev ML about such tests [8] and proposal [9] to introduce them
> to python-cinderclient.
> 
> 
> Rally tests
> 
> IMO, it would be good to have new Rally scenarios for every patch like
> 'improves performance', 'fixes concurrency issues', etc. Even if we as a
> Cinder community don't have enough time to implement them, we have to ask
> for them in reviews and on the openstack-dev ML, and file Rally bugs and blueprints if
> needed.
> 

Are there any recent examples of a fix like this where it would
seem like a reasonable task to write a Rally scenario along with the patch?

Not being very familiar with Rally (as I think most of us aren't), I'm
having a hard time picturing this.

> 
> [1] https://review.openstack.org/#/c/282861/
> [2] http://paste.openstack.org/show/488925/
> [3] https://review.openstack.org/#/c/267801/
> [4] https://review.openstack.org/#/c/287115/
> [5] https://review.openstack.org/#/c/274471/
> [6] https://review.openstack.org/#/c/265811/
> [7] https://review.openstack.org/#/c/265925/
> [8]
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088027.html
> [9] https://review.openstack.org/#/c/279432/
> 
> 
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Nominating Patrick East to Cinder Core

2016-02-01 Thread Eric Harney
On 01/29/2016 07:04 PM, Sean McGinnis wrote:
> Patrick has been a strong contributor to Cinder over the last few releases, 
> both with great code submissions and useful reviews. He also participates 
> regularly on IRC helping answer questions and providing valuable feedback.
> 
> I would like to add Patrick to the core reviewers for Cinder. Per our 
> governance process [1], existing core reviewers please respond with any 
> feedback within the next five days. Unless there are no objections, I will 
> add Patrick to the group by February 3rd.
> 
> Thanks!
> 
> Sean (smcginnis)
> 
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

+1, sounds great to me!

Eric


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Spam of patches

2016-01-12 Thread Eric Harney
On 01/12/2016 09:32 AM, Julien Danjou wrote:
> On Tue, Jan 12 2016, Amrith Kumar wrote:
> 
>> My question to the ML is this, should stylistic changes of this kind be 
>> handled
>> in a consistent way across all projects, maybe with a hacking rule and some
>> discussion on the ML first? After all, if this change is worthwhile, it is
>> worth ensuring that this construct that we are seeking to eliminate, does not
>> reenter the code base.
> 
> This is not stylistic, these are actual changes that can break the code
> for no good reason. I've already -2'ed the Ceilometer one.
> 
> Honestly, this kind of change is getting to be more and more of a problem for us.
> People invent a false bug, maybe report it to LP and mass-assign
> projects, and then spam all the projects without any prior discussion.
> The worst thing is that most of these patches are wrong or incorrect,
> adding code churn that just pollutes project history for no benefit.
> 

For anyone interested here, this is the most recent example of this that
I've seen (and not the first time this same faulty change has been
discussed in Cinder):

https://bugs.launchpad.net/cinder/+bug/1512207

The change suggested here makes unit tests weaker, but many projects
have already landed this change.

I'd just like to be another voice to say: these changes are often not as
simple as they look, and really need careful review.
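
(As one illustration of how a seemingly cosmetic assertion swap can weaken a
test -- a generic example, not necessarily the exact change discussed in the
bug above:)

    import unittest

    class ExampleTest(unittest.TestCase):
        def test_strict(self):
            result = 'unexpected truthy value'
            # Fails: the result is truthy but is not actually True.
            self.assertEqual(True, result)

        def test_weakened(self):
            result = 'unexpected truthy value'
            # Passes for *any* truthy value, so a regression slips through.
            self.assertTrue(result)

    if __name__ == '__main__':
        unittest.main()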


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Deprecating ConfKeyManager (fixed-key key manager)

2016-01-04 Thread Eric Harney
On 01/04/2016 10:46 AM, Farr, Kaitlin M. wrote:
>> The fixed key manager is useful for easy testing (we're using it in the
>> gate in places where barbican isn't available). Is there anything
>> equivalent with Catellan?
>>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
> 
> There is no fixed-key back end with Castellan. I agree that using a
> fixed key makes for very easy testing, but the tests use a
> configuration (ConfKeyManager) that should not be used in deployment.
> The tests could be made much more useful if they used a more realistic
> configuration (Barbican).
> 
> Adding a gate that tests using DevStack with Barbican enabled would
> be a more valuable than the existing tests for two reasons:
> 
>  1. ConfKeyManager could be removed.
>  2. It would test the feature configured more closely to how a
> deployment would actually look.
> 
> As part of this change to deprecate ConfKeyManager and integrate
> Castellan, I would like to add this new gate.
> 
>  -Kaitlin
> 

Aiming toward tests that mirror real-world deployment is certainly a
good thing, but I don't think we should remove ConfKeyManager.

We will want to maintain the ability to test these Cinder/Nova code
paths in development environments or in some automated environments
without requiring additional services to be configured.

We can address this by having ConfKeyManager emit warning messages
indicating that it isn't for production environments.
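
Something along these lines would be enough -- a rough sketch only, since the
real fixed-key manager lives elsewhere and its constructor differs:

    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)

    class ConfKeyManager(object):
        """Fixed-key key manager (sketch, not the real implementation)."""

        def __init__(self, configuration=None):
            self.configuration = configuration
            # Make the intent explicit without removing the
            # test-friendly fixed-key behavior.
            LOG.warning('ConfKeyManager uses a single fixed key and is '
                        'intended for testing only; do not use it in '
                        'production -- use Barbican via Castellan instead.')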

Thanks,
Eric


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] RemoteFS drivers refactoring: move code, which works with images to separate classes

2015-10-28 Thread Eric Harney
On 10/28/2015 03:18 PM, Dmitry Guryanov wrote:
> Hello!
> 
> Can we discuss this on the summit?
> 
> As I promised, I've written a blueprint for this change:
> 
> https://review.openstack.org/#/c/237094/
> 

I assume we can talk about this at the Cinder contributors meetup on Friday.

> 
> On 10/14/2015 03:57 AM, Dmitry Guryanov wrote:
>> Hello,
>>
>> RemoteFS drivers combine 2 logical tasks. The first one is how to
>> mount a filesystem and select proper share for a new or existing
>> volume. The second one: how to deal with an image files in given
>> directory (mount point) (create, delete, create snapshot e.t.c.).
>>
>> The first part is different for each volume driver. The second - the
>> same for all volume drivers, but it depends on selected volume format:
>> you can create qcow2 file on NFS or smbfs with the same code.
>>
>> Since there are several volume formats (raw, qcow2, vhd and possibly
>> some others), I propose to move the code, which works with image to
>> separate classes, 'VolumeFormat' handlers.
>>
>> This change has 3 advantages:
>>
>> 1. Duplicated code from remotefs driver will be removed.
>> 2. All drivers will support all volume formats.
>> 3. New volume formats could be added easily, including non-qcow2
>> snapshots.
>>
>> Here is a draft version of a patch:
>> https://review.openstack.org/#/c/234359/
>>
>> Although there are problems in it, most of the operations with volumes
>> work and there are only about 10 fails in tempest.
>>
>>
>> I'd like to discuss this approach before further work on the patch.
>>
>>
>> -- 
>> Dmitry Guryanov
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] RemoteFS drivers refactoring: move code, which works with images to separate classes

2015-10-13 Thread Eric Harney
On 10/13/2015 02:57 PM, Dmitry Guryanov wrote:
> Hello,
> 
> RemoteFS drivers combine 2 logical tasks. The first one is how to mount
> a filesystem and select proper share for a new or existing volume. The
> second one: how to deal with an image files in given directory (mount
> point) (create, delete, create snapshot e.t.c.).
> 
> The first part is different for each volume driver. The second - the
> same for all volume drivers, but it depends on selected volume format:
> you can create qcow2 file on NFS or smbfs with the same code.
> 
> Since there are several volume formats (raw, qcow2, vhd and possibly
> some others), I propose to move the code, which works with image to
> separate classes, 'VolumeFormat' handlers.
> 
> This change has 3 advantages:
> 
> 1. Duplicated code from remotefs driver will be removed.
> 2. All drivers will support all volume formats.
> 3. New volume formats could be added easily, including non-qcow2 snapshots.
> 
> Here is a draft version of a patch:
> https://review.openstack.org/#/c/234359/
> 
> Although there are problems in it, most of the operations with volumes
> work and there are only about 10 fails in tempest.
> 
> 
> I'd like to discuss this approach before further work on the patch.
> 

I've only taken a quick look, but, a few comments:

IMO it is not a good idea to work on extending support for volume
formats until we get further on having Cinder manage data in different
formats in a robust and secure manner [1]. We should fix that problem
before making it a worse problem.

Points 2 and 3 above aren't really that straightforward.  For example,
calling delete_snapshot_online only works if Nova/libvirt/etc. support
managing the format you are using.  This is fine for the current uses,
because qcow2 is well-supported.  Adding this to a driver using a
different/new file format will likely not work, so combining all of the
code is questionable, even if it seems more nicely organized.

Point #2 assumes that there's a reason that someone would want to use
currently unsupported combinations such as NFS + VHD or SMB + qcow2.
The specific file format being used is not terribly interesting other
than in the context of what a hypervisor supports, and we don't need
more not-so-well-tested combinations for people to deploy.  So why
enable this?

We've already gone somewhat in the other direction with [2], which
removed the ability to configure the GlusterFS driver to use qcow2
volumes, and instead just lets you choose if you want thick or thinly
provisioned volumes, leaving the format choice as an implementation
detail rather than a deployment choice.  (It still uses qcow2 behind the
scenes.)  I think that's the right direction.

[1] https://review.openstack.org/#/c/165393/
[2] https://review.openstack.org/#/c/164527/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] NFS mount as cinder user instead of root

2015-10-02 Thread Eric Harney
On 10/02/2015 05:48 AM, Francesc Pinyol Margalef wrote:
> Hi,
> In a previous message in general openstack list I reported a problem
> when trying to mount an NFS volume from a Fujitsu Eternus DX
> http://lists.openstack.org/pipermail/openstack/2015-July/013578.html
> 
> The issue is that this file server does not allow mounts as root, and
> this behaviour cannot be changed.
> 
> Would it be possible to configure (or modify) Cinder in order to mount
> NFS points as cinder user instead of root?
> 
> Francesc
> 
> 
> 

With the NFS driver, setting the option "nas_secure_file_operations =
True" should cause files to be created as the cinder user rather than as
root.
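
For reference, the relevant backend section would look something like the
following (the backend name and share list path are just examples):

    [nfs-1]
    volume_driver = cinder.volume.drivers.nfs.NfsDriver
    nfs_shares_config = /etc/cinder/nfs_shares
    nas_secure_file_operations = True
    nas_secure_file_permissions = True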

Can you try setting this and let me know if that helps?

Eric


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] snapshot and cloning for NFS backend

2015-09-29 Thread Eric Harney
On 09/29/2015 09:05 AM, Kekane, Abhishek wrote:
> Hi Sean,
> 
> Author of specs is Eric Harney, If he is ok with it then I will submit the 
> patch for moving specs to Mitaka.
> 
> Thank you,
> 
> Abhishek Kekane
> 

I saw that go by, thanks.

Note that while the work outlined there is sufficient, to get this
feature fully robust and polished we also need to get some attention on
this spec:

https://review.openstack.org/#/c/165393/

(I'll update it to move it to the mitaka directory as well I suppose.)

> -Original Message-
> From: Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com] 
> Sent: 29 September 2015 17:47
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [cinder] snapshot and cloning for NFS backend
> 
> Hi Sean,
> 
> Sure I will submit a patch to add this spec in Mitaka.
> 
> Thank you,
> 
> Abhishek Kekane
> 
> -Original Message-
> From: Sean McGinnis [mailto:sean.mcgin...@gmx.com]
> Sent: 29 September 2015 17:38
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [cinder] snapshot and cloning for NFS backend
> 
> On Tue, Sep 29, 2015 at 06:26:06AM +, Kekane, Abhishek wrote:
>> Hi Devs,
>>
>> The cinder-specs [1] for snapshot and cloning NFS backend submitted by Eric 
>> was approved in Kilo but due to nova issue [2] it is not implemented in Kilo 
>> and Liberty.
>> I am discussing about this nova bug with nova team for finding possible 
>> solutions and Nikola has given some pointers about fixing the same in 
>> launchpad bug.
>>
>> This feature is very useful for NFS backend and if the work should be 
>> continued then is there a need to resubmit this specs for approval in Mitaka?
> 
> Thanks for looking at this Abhishek. I would like to see this work continued 
> and completed in Mitaka if at all possible.
> 
> Would you mind submitting a patch to add the spec to Mitaka? I will make sure 
> we get that through and targeted for this release.
> 
> Thanks!
> 
> Sean
> 
>>
>> Please let me know your opinion on the same.
>>
>> [1] https://review.openstack.org/#/c/133074/
>> [2] https://bugs.launchpad.net/nova/+bug/1416132
>>
>>
>> Thanks & Regards,
>>
>> Abhishek Kekane
> 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] should we use fsync when writing iscsi config file?

2015-09-25 Thread Eric Harney
On 09/25/2015 02:30 PM, Mitsuhiro Tanino wrote:
> On 09/22/2015 06:43 PM, Robert Collins wrote:
>> On 23 September 2015 at 09:52, Chris Friesen 
>>  wrote:
>>> Hi,
>>>
>>> I recently had an issue with one file out of a dozen or so in 
>>> "/opt/cgcs/cinder/data/volumes/" being present but of size zero.  I'm 
>>> running stable/kilo if it makes a difference.
>>>
>>> Looking at the code in 
>>> volume.targets.tgt.TgtAdm.create_iscsi_target(), I'm wondering if we 
>>> should do a fsync() before the close().  The way it stands now, it 
>>> seems like it might be possible to write the file, start making use 
>>> of it, and then take a power outage before it actually gets written 
>>> to persistent storage.  When we come back up we could have an 
>>> instance expecting to make use of it, but no target information in the 
>>> on-disk copy of the file.
> 
> I think even if there is no target information in configuration file dir, 
> c-vol started successfully
> and iSCSI targets were created automatically and volumes were exported, right?
> 
> There is a problem in this case: the iSCSI target was created without 
> authentication because
> we can't get previous authentication from the configuration file.
> 
> I'm curious what kind of problem did you met?
>   
>> If its being kept in sync with DB records, and won't self-heal from 
>> this situation, then yes. e.g. if the overall workflow is something 
>> like
> 
> In my understanding, the provider_auth in database has user name and password 
> for iSCSI target. 
> Therefore if we get authentication from DB, I think we can self-heal from 
> this situation
> correctly after c-vol service is restarted.
> 

Is this not already done as-needed by ensure_export()?

> The lio target obtains authentication from provider_auth in database, but 
> tgtd, iet, cxt obtain
> authentication from file to recreate iSCSI target when c-vol is restarted.
> If the file is missing, these volumes are exported without authentication and 
> configuration
> file is recreated as I mentioned above.
> 
> tgtd: Get target chap auth from file
> iet:  Get target chap auth from file
> cxt:  Get target chap auth from file
> lio:  Get target chap auth from Database(in provider_auth)
> scst: Get target chap auth by using original command
> 
> If we get authentication from DB for tgtd, iet and cxt same as lio, we can 
> recreate iSCSI target
> with proper authentication when c-vol is restarted.
> I think this is a solution for this situation.
> 

This may be possible, but fixing the target config file to be written
more safely to work as currently intended is still a win.
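
(For reference, the safe-write pattern being discussed is roughly the
following -- an untested sketch, not the exact code in that review:)

    import os
    import tempfile

    def write_target_config(volume_path, volume_conf):
        # Write to a temp file in the same directory, flush it to disk,
        # then atomically rename it over the real file, so readers see
        # either the old contents or the new contents -- never an empty file.
        dirname = os.path.dirname(volume_path)
        fd, tmp_path = tempfile.mkstemp(dir=dirname)
        try:
            with os.fdopen(fd, 'w') as f:
                f.write(volume_conf)
                f.flush()
                os.fsync(f.fileno())
            os.rename(tmp_path, volume_path)
        except Exception:
            os.unlink(tmp_path)
            raise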

> Any thought?
> 
> Thanks,
> Mitsuhiro Tanino
> 
>> -Original Message-
>> From: Chris Friesen [mailto:chris.frie...@windriver.com]
>> Sent: Friday, September 25, 2015 12:48 PM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [cinder] should we use fsync when writing iscsi
>> config file?
>>
>> On 09/24/2015 04:21 PM, Chris Friesen wrote:
>>> On 09/24/2015 12:18 PM, Chris Friesen wrote:
>>>

 I think what happened is that we took the SIGTERM after the open()
 call in create_iscsi_target(), but before writing anything to the file.

  f = open(volume_path, 'w+')
  f.write(volume_conf)
  f.close()

 The 'w+' causes the file to be immediately truncated on opening,
 leading to an empty file.

 To work around this, I think we need to do the classic "write to a
 temporary file and then rename it to the desired filename" trick.
 The atomicity of the rename ensures that either the old contents or the new
>> contents are present.
>>>
>>> I'm pretty sure that upstream code is still susceptible to zeroing out
>>> the file in the above scenario.  However, it doesn't take an
>>> exception--that's due to a local change on our part that attempted to fix 
>>> the
>> below issue.
>>>
>>> The stable/kilo code *does* have a problem in that when it regenerates
>>> the file it's missing the CHAP authentication line (beginning with
>> "incominguser").
>>
>> I've proposed a change at https://review.openstack.org/#/c/227943/
>>
>> If anyone has suggestions on how to do this more robustly or more cleanly,
>> please let me know.
>>
>> Chris
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] How to make a mock effactive for all method of a testclass

2015-09-23 Thread Eric Harney
On 09/23/2015 04:06 AM, liuxinguo wrote:
> Hi,
> 
> In a.py we have a function:
> def _change_file_mode(filepath):
> utils.execute('chmod', '600', filepath, run_as_root=True)
> 
> In test_xxx.py, there is a testclass:
> class DriverTestCase(test.TestCase):
> def test_a(self)
> ...
> Call a. _change_file_mode
> ...
> 
> def test_b(self)
> ...
> Call a. _change_file_mode
> ...
> 
> I have tried to mock out the function _change_file_mode like this:
> @mock.patch.object(a, '_change_file_mode', return_value=None)
> class DriverTestCase(test.TestCase):
> def test_a(self)
> ...
> Call a. _change_file_mode
> ...
> 
> def test_b(self)
> ...
> Call a. _change_file_mode
> ...
> 
> But the mock has no effect; the real function _change_file_mode is still
> executed.
> So how can I make a mock effective for all methods of a test class?
> Thanks for any input!
> 
> Wilson Liu

The simplest way I found to do this was to use mock.patch in the test
class's setUp() method, and tear it down again in tearDown().

There may be cleaner ways to do this with tools in oslotest etc. (I'm
not sure), but this is fairly straightforward.

See here -- self._clear_patch stores the mock:
http://git.openstack.org/cgit/openstack/cinder/tree/cinder/tests/unit/test_volume.py?id=8de60a8b#n257
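
In short, something like this (a sketch reusing the names from your example;
'a' stands in for whichever module defines _change_file_mode):

    import mock

    from cinder import test

    import a  # placeholder for the module under test

    class DriverTestCase(test.TestCase):
        def setUp(self):
            super(DriverTestCase, self).setUp()
            self._chmod_patcher = mock.patch.object(
                a, '_change_file_mode', return_value=None)
            self.mock_change_file_mode = self._chmod_patcher.start()

        def tearDown(self):
            self._chmod_patcher.stop()
            super(DriverTestCase, self).tearDown()

        def test_a(self):
            # Any code that calls a._change_file_mode() now hits the mock.
            self.assertFalse(self.mock_change_file_mode.called)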


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] LVM snapshot performance issue -- why isn't thin provisioning the default?

2015-09-18 Thread Eric Harney
On 09/18/2015 03:33 PM, John Griffith wrote:
> On Fri, Sep 18, 2015 at 12:52 PM, Eric Harney <ehar...@redhat.com> wrote:
> 
>> On 09/18/2015 01:01 PM, John Griffith wrote:
>>> On Fri, Sep 18, 2015 at 9:06 AM, Chris Friesen <
>> chris.frie...@windriver.com>
>>> wrote:
>>>
>>>> On 09/18/2015 06:57 AM, Eric Harney wrote:
>>>>
>>>>> On 09/17/2015 06:06 PM, John Griffith wrote:
>>>>>
>>>>
>>>> Having the "global conf" settings intermixed with the backend sections
>>>>>> caused a number of issues when we first started working on this.
>> That's
>>>>>> part of why we require the "self.configuration" usage all over in the
>>>>>> drivers.  Each driver instantiation is it's own independent entity.
>>>>>>
>>>>>>
>>>>> Yes, each driver instantiation is independent, but that would still be
>>>>> the case if these settings inherited values set in [DEFAULT] when they
>>>>> aren't set in the backend section.
>>>>>
>>>>
>>>> Agreed.  If I explicitly set something in the [DEFAULT] section, that
>>>> should carry through and apply to all the backends unless overridden in
>> the
>>>> backend-specific section.
>>>>
>>>> Chris
>>>>
>>>>
>>> Meh I don't know about the "have to modify the code", the config file
>> works
>>> you just need to add that line to your driver section and configure the
>>> backend correctly.
>>>
>>
>> My point is that there doesn't seem to be a justification for "you just
>> need to add that line to your driver section", which seems to counter
>> what most people's expectation would be.
>>
> There certainly is, I don't want to force the same options against all
> backends.  Perfect example is the issues with some distros in the past that
> DID use global settings and stomp over any driver; which in turn broke
> those that weren't compatible with that conf setting even though in the
> driver section they overrode it.​
> 
> 
>>
>> People can and do fail to do that, because they assume that [DEFAULT]
>> settings are treated as defaults.
>>
> 
> Bad assumption, we should probably document this until we fix it (making a
> very large assumption that we'll ever agree on how to fix it).​
> 
>>
>> To help people who make that assumption, yes, you have to modify the
>> code, because the code supplies a default value that you cannot supply
>> in the same way via config files.
>>
> 
> Or you could just fill out the config file properly:
> [lvm-1]
> iscsi_helper = lioadm
> 
> I didn't have to modify any code.
>
> 
> 

In the use case I was describing, I'm shipping a package, as a
distribution, with a default configuration file. The deployer (not me)
is the only one that knows about config sections that they want for
multi-backend. I don't think it's fair to require them to fill out
things like iscsi_helper, because there is only one correct value for
iscsi_helper on the platform I support, and defaulting to a different
one is not useful.

The fact that we don't inherit [DEFAULT] settings means that it is not
possible for me to ship a package with the correct defaults without
changing the hard-coded default value, in the code, to customize it for
my platform. I want to set iscsi_helper = lioadm in a configuration file
and have that be the default for any enabled_backend.


>>
>>> Regardless, I see your point (but I still certainly don't agree that it's
>>> "blatantly wrong").
>>>
>>
>> You can substitute "very confusing" for "blatantly wrong" but I think
>> those are about the same thing when talking about usability issues with
>> how to configure a service.
>>
> 
> Fair enough.  Call it whatever you like.
> 
> 
>>
>> Look at options like:
>>  - strict_ssh_host_key_policy
>>  - sio_verify_server_certificate
>>  - driver_ssl_cert_verify
> 
> 
>> All of these default to False, and if turned on, enable protections
>> against MITM attacks.  All of them also fail to turn on for the relevant
>> drivers if set in [DEFAULT].  These should, if set in DEFAULT when using
>> multi-backend, issue a warning so the admin knows that they are not
>> getting the intended security guarantees.  Instead, nothing happens and
>> Cinder and the storage works.  Confusion is dangerous.
>>
> 
> Yeah, so is crappy documentation 

Re: [openstack-dev] [cinder] LVM snapshot performance issue -- why isn't thin provisioning the default?

2015-09-18 Thread Eric Harney
On 09/18/2015 01:01 PM, John Griffith wrote:
> On Fri, Sep 18, 2015 at 9:06 AM, Chris Friesen <chris.frie...@windriver.com>
> wrote:
> 
>> On 09/18/2015 06:57 AM, Eric Harney wrote:
>>
>>> On 09/17/2015 06:06 PM, John Griffith wrote:
>>>
>>
>> Having the "global conf" settings intermixed with the backend sections
>>>> caused a number of issues when we first started working on this.  That's
>>>> part of why we require the "self.configuration" usage all over in the
>>>> drivers.  Each driver instantiation is it's own independent entity.
>>>>
>>>>
>>> Yes, each driver instantiation is independent, but that would still be
>>> the case if these settings inherited values set in [DEFAULT] when they
>>> aren't set in the backend section.
>>>
>>
>> Agreed.  If I explicitly set something in the [DEFAULT] section, that
>> should carry through and apply to all the backends unless overridden in the
>> backend-specific section.
>>
>> Chris
>>
>>
> Meh I don't know about the "have to modify the code", the config file works
> you just need to add that line to your driver section and configure the
> backend correctly.
> 

My point is that there doesn't seem to be a justification for "you just
need to add that line to your driver section", which seems to counter
what most people's expectation would be.

People can and do fail to do that, because they assume that [DEFAULT]
settings are treated as defaults.

To help people who make that assumption, yes, you have to modify the
code, because the code supplies a default value that you cannot supply
in the same way via config files.

> Regardless, I see your point (but I still certainly don't agree that it's
> "blatantly wrong").
> 

You can substitute "very confusing" for "blatantly wrong" but I think
those are about the same thing when talking about usability issues with
how to configure a service.

Look at options like:
 - strict_ssh_host_key_policy
 - sio_verify_server_certificate
 - driver_ssl_cert_verify

All of these default to False, and if turned on, enable protections
against MITM attacks.  All of them also fail to turn on for the relevant
drivers if set in [DEFAULT].  These should, if set in DEFAULT when using
multi-backend, issue a warning so the admin knows that they are not
getting the intended security guarantees.  Instead, nothing happens and
Cinder and the storage works.  Confusion is dangerous.

> Bottom line "yes", ideally in the case of drivers we would check
> global/default setting, and then override it if something was provided in
> the driver specific setting, or if the driver itself set a different
> default.  That seems like the right way to be doing it anyway.  I've looked
> at that a bit this morning, the issue is that currently we don't even pass
> any of those higher level conf settings in to the drivers init methods
> anywhere.  Need to figure out how to change that, then it should be a
> relatively simple fix.
> 

What I was getting at earlier though, is that I'm not really sure there
is a simple fix.  It may be simple to change the behavior to more
predictable behavior, but doing that in a way that doesn't introduce
upgrade problems for deployments relying on the current defaults seems
difficult to me.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] LVM snapshot performance issue -- why isn't thin provisioning the default?

2015-09-18 Thread Eric Harney
On 09/17/2015 06:06 PM, John Griffith wrote:
> On Thu, Sep 17, 2015 at 11:31 AM, Eric Harney <ehar...@redhat.com> wrote:
> 
>> On 09/17/2015 05:00 AM, Duncan Thomas wrote:
>>> On 16 September 2015 at 23:43, Eric Harney <ehar...@redhat.com> wrote:
>>>
>>>> Currently, at least some options set in [DEFAULT] don't apply to
>>>> per-driver sections, and require you to set them in the driver section
>>>> as well.
>>>>
>>>
>>> This is extremely confusing behaviour. Do you have any examples? I'm not
>>> sure if we can fix it without breaking people's existing configs but I
>>> think it is worth trying. I'll add it to the list of things to talk about
>>> briefly in Tokyo.
>>>
>>
>> The most recent place this bit me was with iscsi_helper.
>>
>> If cinder.conf has:
>>
>> [DEFAULT]
>> iscsi_helper = lioadm
>> enabled_backends = lvm1
>>
>> [lvm1]
>> volume_driver = ...LVMISCSIDriver
>> # no iscsi_helper setting
>>
>>
>> You end up with c-vol showing "iscsi_helper = lioadm", and
>> "lvm1.iscsi_helper = tgtadm", which is the default in the code, and not
>> the default in the configuration file.
>>
>> I agree that this is confusing, I think it's also blatantly wrong.  I'm
>> not sure how to fix it, but I think it's some combination of your
>> suggestions above and possibly having to introduce new option names.
>>
>
> I'm not sure why that's "blatantly wrong", this is a side effect of having
> multiple backends enabled, it's by design really.  Any option that is
> defined in driver.py needs to be set in the actual enabled-backend stanza
> IIRC.  This includes iscsi_helper, volume_clear etc.
> 

I think it's wrong because it's not predictable for someone configuring
Cinder.  I understand that this is a side effect of multi-backend, but
I'm not sure what the reasoning is if it's intentional design.  I think
most people would expect a setting set in a [DEFAULT] section to be
treated as a default rather than being ignored.

This is particularly odd in the case of "iscsi_helper", where I want to
ship packages configured to use LIO since tgt doesn't exist on the
platform, and is never the right value for my packages.

This isn't possible without patching the code directly, which seems like
a shortfall in our configuration system.

> Having the "global conf" settings intermixed with the backend sections
> caused a number of issues when we first started working on this.  That's
> part of why we require the "self.configuration" usage all over in the
> drivers.  Each driver instantiation is it's own independent entity.
> 

Yes, each driver instantiation is independent, but that would still be
the case if these settings inherited values set in [DEFAULT] when they
aren't set in the backend section.

> I haven't looked at this for a long time, but if something has changed or
> I'm missing something my apologies.  We can certainly consider changing it,
> but because of the way we do multi-backend I'm not exactly sure how you
> would do this, or honestly why you would want to.
> 
> John
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] LVM snapshot performance issue -- why isn't thin provisioning the default?

2015-09-17 Thread Eric Harney
On 09/17/2015 05:00 AM, Duncan Thomas wrote:
> On 16 September 2015 at 23:43, Eric Harney <ehar...@redhat.com> wrote:
> 
>> Currently, at least some options set in [DEFAULT] don't apply to
>> per-driver sections, and require you to set them in the driver section
>> as well.
>>
> 
> This is extremely confusing behaviour. Do you have any examples? I'm not
> sure if we can fix it without breaking people's existing configs but I
> think it is worth trying. I'll add it to the list of things to talk about
> briefly in Tokyo.
> 

The most recent place this bit me was with iscsi_helper.

If cinder.conf has:

[DEFAULT]
iscsi_helper = lioadm
enabled_backends = lvm1

[lvm1]
volume_driver = ...LVMISCSIDriver
# no iscsi_helper setting


You end up with c-vol showing "iscsi_helper = lioadm", and
"lvm1.iscsi_helper = tgtadm", which is the default in the code, and not
the default in the configuration file.

I agree that this is confusing, I think it's also blatantly wrong.  I'm
not sure how to fix it, but I think it's some combination of your
suggestions above and possibly having to introduce new option names.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] LVM snapshot performance issue -- why isn't thin provisioning the default?

2015-09-16 Thread Eric Harney
On 09/16/2015 04:25 PM, Duncan Thomas wrote:
> On 16 Sep 2015 20:42, "yang, xing" <xing.y...@emc.com> wrote:
> 
>> On 9/16/15, 1:20 PM, "Eric Harney" <ehar...@redhat.com> wrote:
> 
>>> This sounds like a good idea, I'm just not sure how to structure it yet
>>> without creating a very confusing set of config options.
>>
>> I’m thinking we could have a prefix with vendor name for this and it also
>> requires documentation by driver maintainers if they are using a different
>> config option.  I proposed a topic to discuss about this at the summit.
> 
> We already have per-backend config values in cinder.conf. I'm not sure how
> the config code will need to be  structured to achieve it, but ideally I'd
> like a single config option that can be:
> 
> (i) set in the default section if desired
> (ii) overridden in the per-driver section, and (iii) have a default set in
> each driver.
> 
> I don't think oslo.config lets us do (iii) yet though.
> 

I think there may be other issues to sort through to do that.
Currently, at least some options set in [DEFAULT] don't apply to
per-driver sections, and require you to set them in the driver section
as well.

If we keep that behavior (which I think is broken, personally), then
trying to do option (iii) may be pretty confusing, because the deployer
won't know which of the global vs. driver defaults are actually going to
be applied.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] LVM snapshot performance issue -- why isn't thin provisioning the default?

2015-09-16 Thread Eric Harney
On 09/15/2015 04:56 PM, yang, xing wrote:
> Hi Eric,
> 
> Regarding the default max_over_subscription_ratio, I initially set the
> default to 1 while working on oversubscription, and changed it to 2 after
> getting review comments.  After it was merged, I got feedback that 2 is
> too small and 20 is more appropriate, so I changed it to 20.  So it looks
> like we can't find a default value that makes everyone happy.
> 

I'm curious about how this is used in real-world deployments.  Are we
making the assumption that the admin has some external monitoring
configured to send alarms if the storage is nearing capacity?

> If we can decide what is the best default value for LVM, we can change the
> default max_over_subscription_ratio, but we should also allow other
> drivers to specify a different config option if a different default value
> is more appropriate for them.

This sounds like a good idea, I'm just not sure how to structure it yet
without creating a very confusing set of config options.


> On 9/15/15, 1:38 PM, "Eric Harney" <ehar...@redhat.com> wrote:
> 
>> On 09/15/2015 01:00 PM, Chris Friesen wrote:
>>> I'm currently trying to work around an issue where activating LVM
>>> snapshots created through cinder takes potentially a long time.
>>> (Linearly related to the amount of data that differs between the
>>> original volume and the snapshot.)  On one system I tested it took about
>>> one minute per 25GB of data, so the worst-case boot delay can become
>>> significant.
>>>
>>> According to Zdenek Kabelac on the LVM mailing list, LVM snapshots were
>>> not intended to be kept around indefinitely, they were supposed to be
>>> used only until the backup was taken and then deleted.  He recommends
>>> using thin provisioning for long-lived snapshots due to differences in
>>> how the metadata is maintained.  (He also says he's heard reports of
>>> volume activation taking half an hour, which is clearly crazy when
>>> instances are waiting to access their volumes.)
>>>
>>> Given the above, is there any reason why we couldn't make thin
>>> provisioning the default?
>>>
>>
>>
>> My intention is to move toward thin-provisioned LVM as the default -- it
>> is definitely better suited to our use of LVM.  Previously this was less
>> easy, since some older Ubuntu platforms didn't support it, but in
>> Liberty we added the ability to specify lvm_type = "auto" [1] to use
>> thin if it is supported on the platform.
>>
>> The other issue preventing using thin by default is that we default the
>> max oversubscription ratio to 20.  IMO that isn't a safe thing to do for
>> the reference implementation, since it means that people who deploy
>> Cinder LVM on smaller storage configurations can easily fill up their
>> volume group and have things grind to halt.  I think we want something
>> closer to the semantics of thick LVM for the default case.
>>
>> We haven't thought through a reasonable migration strategy for how to
>> handle that.  I'm not sure we can change the default oversubscription
>> ratio without breaking deployments using other drivers.  (Maybe I'm
>> wrong about this?)
>>
>> If we sort out that issue, I don't see any reason we can't switch over
>> in Mitaka.
>>
>> [1] https://review.openstack.org/#/c/104653/
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] LVM snapshot performance issue -- why isn't thin provisioning the default?

2015-09-15 Thread Eric Harney
On 09/15/2015 01:00 PM, Chris Friesen wrote:
> I'm currently trying to work around an issue where activating LVM
> snapshots created through cinder takes potentially a long time. 
> (Linearly related to the amount of data that differs between the
> original volume and the snapshot.)  On one system I tested it took about
> one minute per 25GB of data, so the worst-case boot delay can become
> significant.
> 
> According to Zdenek Kabelac on the LVM mailing list, LVM snapshots were
> not intended to be kept around indefinitely, they were supposed to be
> used only until the backup was taken and then deleted.  He recommends
> using thin provisioning for long-lived snapshots due to differences in
> how the metadata is maintained.  (He also says he's heard reports of
> volume activation taking half an hour, which is clearly crazy when
> instances are waiting to access their volumes.)
> 
> Given the above, is there any reason why we couldn't make thin
> provisioning the default?
> 


My intention is to move toward thin-provisioned LVM as the default -- it
is definitely better suited to our use of LVM.  Previously this was less
easy, since some older Ubuntu platforms didn't support it, but in
Liberty we added the ability to specify lvm_type = "auto" [1] to use
thin if it is supported on the platform.

The other issue preventing using thin by default is that we default the
max oversubscription ratio to 20.  IMO that isn't a safe thing to do for
the reference implementation, since it means that people who deploy
Cinder LVM on smaller storage configurations can easily fill up their
volume group and have things grind to halt.  I think we want something
closer to the semantics of thick LVM for the default case.
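
For illustration, the sort of backend stanza this points toward (a sketch --
option names are as of Liberty, and the right defaults are exactly what is
under discussion here):

    [lvm-thin]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    lvm_type = thin
    # Keep the semantics close to thick LVM for the reference setup.
    max_over_subscription_ratio = 1.0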

We haven't thought through a reasonable migration strategy for how to
handle that.  I'm not sure we can change the default oversubscription
ratio without breaking deployments using other drivers.  (Maybe I'm
wrong about this?)

If we sort out that issue, I don't see any reason we can't switch over
in Mitaka.

[1] https://review.openstack.org/#/c/104653/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade][cinder] Updates of rootwrap filters

2015-08-26 Thread Eric Harney
On 08/26/2015 09:57 AM, Dulko, Michal wrote:
 Hi,
 
 Recently, when working on a simple bug [1], I ran into a need to change 
 rootwrap filter rules for a few commands. After sending the fix to Gerrit [2], it 
 turned out that when testing the upgraded cloud grenade hadn't copied my 
 updated volume.filters file, and therefore failed the check. I wonder how 
 I should approach the issue:
 1. Make grenade script for Cinder to copy the new file to upgraded cloud.
 2. Divide the patch into two parts - at first add new rules, leaving the old 
 ones there, then fix the bug and remove old rules.
 3. ?
 
 Any opinions?
 
 [1] https://bugs.launchpad.net/cinder/+bug/1488433
 [2] https://review.openstack.org/#/c/216675/


I believe you have to go with option 1 and add code to grenade to handle
installing the new rootwrap filters.

grenade is detecting an upgrade incompatibility that requires a config
change, which is a good thing.  Splitting it into two patches will still
result in grenade failing, because it will test upgrading kilo to
master, not patch A to patch B.

Example for neutron:
https://review.openstack.org/#/c/143299/

A different example for nova (abandoned for unrelated reasons):
https://review.openstack.org/#/c/151408/



/me goes to investigate whether he can set the system locale to
something strange in the full-lio job, because he really thought we had
fixed all of the locale-related LVM parsing bugs by now.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposing Gorka Eguileor for core

2015-08-14 Thread Eric Harney
On 08/13/2015 03:13 PM, Mike Perez wrote:
 It gives me great pleasure to nominate Gorka Eguileor for Cinder core.

 Cinder core, please reply with a +1 for approval. This will be left
 open until August 19th. Assuming there are no objections, this will go
 forward after voting is closed.
 

+1


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Implementation of ABC MetaClasses

2015-07-20 Thread Eric Harney
On 07/20/2015 07:16 AM, Marc Koderer wrote:
 Hello Cinder,
 
 Instead of reverting nearly everything that was done (and is currently 
 ongoing),
 I would strongly suggest simply reducing the number of classes stepwise.
 

This makes sense, and this was the general plan as I recall -- to
collapse things into the base classes as we could.

 I spent some time analyzing what is actually implemented for all the drivers.
 
 Please see:
 
 https://docs.google.com/spreadsheets/d/1L_GuUCs-NMVbhbOj8Jtt8vjMQ23zhJ1yagSH4zSKWEw/edit?usp=sharing
 
 The following classes can be moved to BaseVD directly:
 
  - ClonableImageVD

ClonableImageVD doesn't need to exist anyway IMO, since the
functionality still works without it via a generic implementation.

  - CloneableVD
  
 For the following, only BlockDeviceDriver has no implementation:
 
  - SnapshotVD
  - ExtendVD

BlockDeviceDriver is an odd special case.  But, NfsDriver is a real
driver and Snapshot support is in progress still.

 
 This would remove 4 sub classes out of 10.
 
 I used a script to produce this table [1]. Please let me know if you find a 
 bug :)
 

NfsDriver is not abstract. :)

(I think I'm going to rename RemoteFS[Snap]Driver to something that
doesn't end in Driver.)

 Regards
 Marc
 
 [1]: http://paste.openstack.org/show/391303/
 
 
 On 15.07.2015 at 22:26, John Griffith john.griffi...@gmail.com wrote:
 

 Ok, so I spent a little time on this; first gathering some detail around 
 what's been done, as well as proposing a patch to sort of step back a bit and 
 take another look at this [1].

 Here's some more detail on what is bothering me here:
 * Inheritance model
 
 One of the things the work has done is move us from a mostly singular-
 inheritance OO structure for the Volume Drivers, where each level of 
 inheritance was specifically for a more general differentiation.  For 
 example, in driver.py we had:

 VolumeDriver(object):
 -- ISCSIDriver(VolumeDriver):
 -- FakeISCSIDriver(ISCSIDriver):
 -- ISERDriver(ISCSIDriver):
 -- FakeISERDriver(FakeISCSIDriver):
 -- FibreChannelDriver(VolumeDriver):

 Arguably the fakes probably should be done differently and ISCSI, ISER and 
 Fibre should be able to go away if we follow through with the target driver 
 work we started.

 Under the new ABC-based model we started, we ended up with 25 base classes to 
 work with, and the base VolumeDriver itself is now composed of 12 other 
 independent base classes.  

 BaseVD(object):
 -- LocalVD(object):
 -- SnapshotVD(object):
 -- ConsistencyGroupVD(object):
 -- CloneableVD(object):
 -- CloneableImageVD(object):
 -- MigrateVD(object):
 -- ExtendVD(object):
 -- RetypeVD(object):
 -- TransferVD(object):
 -- ManageableVD(object):
 -- ReplicaVD(object):
 -- VolumeDriver(ConsistencyGroupVD, TransferVD, ManageableVD, ExtendVD,
 -- ProxyVD(object): (* my personal favorite*)
 -- ISCSIDriver(VolumeDriver):
 -- FakeISCSIDriver(ISCSIDriver):
 -- ISERDriver(ISCSIDriver):
 -- FakeISERDriver(FakeISCSIDriver):
 -- FibreChannelDriver(VolumeDriver):

 The idea behind this was to break out different functionality into its own 
 class so that we could enforce an entire feature based on whether a 
 backend implemented it or not; a good idea I think, but hindsight is 20/20 and 
 I have some problems with this.  

 I'm not a fan of having the base VolumeDriver that ideally could be used as 
 a template and source of truth be composed of 12 different classes.  I think 
 this has caused some confusion among a number of contributors.

 I think this creates a very rigid model; inheritance is not always a good 
 answer, it's the most strict form of coupling and in my opinion should be 
 used sparingly and with great care.

 This doesn't really accomplish what it set out to do anyway, and I believe 
 there are cleaner, simpler ways to achieve the same goal.  Most of the 
 drivers have not converted to or cared about using the new metaclass 
 objects; however, simply identifying the required methods and marking them 
 with the abc decorator in the base driver will accomplish what we originally 
 hoped for (at least what I originally interpreted this to be all about): 
 simply a way to ensure that drivers that didn't implement a required method 
 would fail to load, rather than raise NotImplementedError at run time when called.
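
A minimal sketch of that idea (hypothetical method names, not the actual
Cinder driver interface; Cinder itself targeted Python 2 at the time, so the
real code would likely go through six.add_metaclass(abc.ABCMeta) rather than
the Python 3 form used here for brevity):

    import abc


    class BaseVD(abc.ABC):
        """Hypothetical base driver: required methods are abstract."""

        @abc.abstractmethod
        def create_volume(self, volume):
            """Every driver must implement this."""

        @abc.abstractmethod
        def delete_volume(self, volume):
            """Every driver must implement this."""


    class IncompleteDriver(BaseVD):
        # Implements create_volume but forgets delete_volume.
        def create_volume(self, volume):
            return {'size': volume['size']}


    # Instantiating the incomplete driver fails immediately with a
    # TypeError, instead of raising NotImplementedError later at run time
    # when the missing method is finally called.
    try:
        IncompleteDriver()
    except TypeError as exc:
        print(exc)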

 The downside of my proposal vs what's in master currently:

 One thing the current implementation does quite nicely is group 
 functionality into classes.  Consistency groups, for example, are their own 
 class, and once a driver inherits from it, it ensures that every base method 
 for CG support is implemented.  It turns out I have a problem with this too, 
 however.  The bulk of the classes have a single method in them, so we build 
 a class, instantiate and build a composite object just to check that a 
 driver implements extend_volume?  And that assumes they're even using the 
 metaclass and not just implementing it on their own anyway.

 In addition it's not 

Re: [openstack-dev] [cinder] Rebranded Volume Drivers

2015-06-03 Thread Eric Harney
On 06/03/2015 01:59 PM, John Griffith wrote:
 On Wed, Jun 3, 2015 at 11:32 AM, Mike Perez thin...@gmail.com wrote:
 
 There are a couple of cases [1][2] I'm seeing where new Cinder volume
 drivers for Liberty are rebranding other volume drivers. This involves
 inheriting off another volume driver's class(es) and providing some
 config options to set the backend name, etc.

 Two problems:

 1) There is a thought that no CI [3] is needed, since you're using
 another vendor's driver code which does have a CI.

 2) IMO it's another way of getting the check mark of being OpenStack
 supported and then disappearing from the community.

 What gain does OpenStack get from these kinds of drivers?

 Discuss.

 [1] - https://review.openstack.org/#/c/187853/
 [2] - https://review.openstack.org/#/c/187707/4
 [3] - https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers

 --
 Mike Perez

 
 This case is interesting mostly because it's the same contractor
 submitting the driver for all the related platforms.  Frankly I find the
 whole rebranding annoying, but there's certainly nothing really wrong with
 it, and well... why not, it's Open Source.
 
 What I do find annoying is the lack of giving back: this particular
 contributor has submitted a few drivers thus far (SCST, DotHill and some
 others IIRC), and now has three more proposed. This would be great except I
 personally have spent a very significant amount of time with this person
 helping with development, CI and understanding OpenStack and Cinder.
 
 To date, I don't see that he's provided a single code review (good or bad)
 or contributed anything back other than to his specific venture.
 
 Anyway... I think your point was for input on the two questions:
 
 For item '1':
 I guess as silly as it seems they should probably have 3rd-party CI.
 There are firmware differences etc. that may actually change behaviors, or
 things may diverge, or maybe their code is screwed up and the inheritance
 doesn't work (doubtful).

Given that part of the case made for CI was to ensure that Cinder ships
drivers that work, the case of backend behavior diverging over time
from what originally worked with Cinder seems like a valid concern.  We
lose the ability to keep tabs on that for derived drivers without CI.

 
 Yes, it's just a business venture in this case (good or bad, not for me to
 decide).  The fact is we don't discriminate or place a value on people's
 contributions, and this shouldn't be any different.  I think the best
 answer is to follow the same process for any driver and move on.  This does
 point out that maybe OpenStack/Cinder has grown to a point where there are
 so many options and choices that it's time to think about changing some of
 the policies and ways we do things.
 
 In my opinion, OpenStack doesn't gain much in this particular case, which
 brings me back to:
 remove all drivers except the ref-impl and have them pip installable and on
 a certified list based on CI.
 
 Thanks,
 John
 

The other issue I see with not requiring CI for derived drivers is
that, inevitably, small changes will be made to the driver code, and we
will find ourselves having to sort out how much change can happen before
CI is then required.  I don't know how to define that in a way that
would be useful as a general policy.

Eric



Re: [openstack-dev] [cinder] Some Changes to Cinder Core

2015-05-27 Thread Eric Harney
On 05/22/2015 07:34 PM, Mike Perez wrote:
 This is long overdue, but it gives me great pleasure to nominate Sean
 McGinnis for Cinder core.
 
 
 Cinder core, please reply with a +1 for approval. This will be left
 open until May 29th. Assuming there are no objections, this will go
 forward after voting is closed.
 

+1 from me!





Re: [openstack-dev] [cinder] cinder is broken until someone fixes the forking code

2015-03-11 Thread Eric Harney
On 03/11/2015 03:37 PM, Mike Bayer wrote:
 
 
 Mike Perez thin...@gmail.com wrote:
 
 On 11:49 Wed 11 Mar , Walter A. Boring IV wrote:
 We have this patch in review currently.   I think this one should
 'fix' it no?

 Please review.

 https://review.openstack.org/#/c/163551/

 Looks like it to me. Would appreciate a +1 from Mike Bayer before we push 
 this
 through. Thanks for all your time on this Mike.
 
 I have a question there, since I don’t know the scope of “Base”: is this
 “Base” constructor generally called once per Python process? It’s OK if it’s
 called a little more than that, but if it’s called on, like, every service
 request or something, then those engine.dispose() calls are not the right
 approach; you’d instead just turn off pooling altogether, because otherwise
 you’re spending tons of time creating and destroying connection pools that
 aren’t even used as pools.   You want the “engine” to be re-used across
 requests and everything else as much as possible, *except* across process
 boundaries.
 

I don't see it used anywhere that isn't a long-standing service; it's
only used by service and API managers, and BackupDrivers.  So it should be
OK in this regard.
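
For anyone following along, the pattern under discussion looks roughly like
the sketch below (illustrative only, with made-up names, not the actual
oslo.db/Cinder service code): the long-lived parent builds an engine once,
and a forked child disposes of the inherited pool so it never shares pooled
DB connections with the parent.

    import os

    from sqlalchemy import create_engine, text

    # Created once per long-lived service process.
    engine = create_engine("sqlite:///example.db")


    def run_child_worker():
        # A child created via os.fork() inherits the parent's pooled
        # connections (sockets/file descriptors).  Dispose of the inherited
        # pool so the child lazily opens its own connections instead of
        # sharing the parent's.
        engine.dispose()
        with engine.connect() as conn:
            conn.execute(text("SELECT 1"))


    pid = os.fork()
    if pid == 0:
        run_child_worker()
        os._exit(0)
    else:
        os.waitpid(pid, 0)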





Re: [openstack-dev] [OpenStack-Dev][Cinder] Cinder Core nomination

2014-08-18 Thread Eric Harney
On 08/14/2014 02:55 AM, Boring, Walter wrote:
 Hey guys,
   I wanted to propose a nomination for Cinder core.
 
 Xing Yang.
 She has been active in the cinder community for many releases and has worked 
 on several drivers as well as other features for cinder itself.   She has 
 been doing an awesome job doing reviews and helping folks out in the 
 #openstack-cinder irc channel for a long time.   I think she would be a good 
 addition to the core team.
 

+1 from me.

Thanks Xing!

 
 Walt




Re: [openstack-dev] [cinder] The future of the integrated release

2014-08-07 Thread Eric Harney
On 08/07/2014 09:55 AM, John Griffith wrote:
 Seems everybody that's been around a while has noticed issues this
 release and has talked about it; thanks Thierry for putting it together so
 well and kicking off the ML thread here.
 
 I'd agree with everything that you stated. I've also floated the idea this
 past week with a few members of the Core Cinder team of accepting new driver
 submissions in Cinder only every other release (I'm expecting this to
 be a HUGELY popular proposal [note sarcastic tone]).
 
 There are three things that have just crushed productivity and motivation
 in Cinder this release (IMO):
 1. Overwhelming number of drivers (tactical contributions)
 2. Overwhelming amount of churn, literally hundreds of little changes to
 modify docstrings, comments etc but no real improvements to code

I'm not sure that there is much data to support that this has been a
problem to the point of impacting productivity.  Even if some patches
make changes that aren't too significant, those tend to be quick to
review.  Personally, I haven't found this to be a troublesome area, and
it's been clear that Cinder does need some cleanup/refactoring work in
some areas.

Just going on my gut feeling, I'd argue that we too often have patchsets
that are too large and should be split into a series of smaller commits,
and that concerns me more, because these are both harder to review and
harder to catch bugs in.

 3. A new sense of pride in hitting the -1 button on reviews.  A large
 number of reviews these days seem to be -1 due to punctuation or
 misspelling in comments and docstrings.  There's also a lot of "my way of
 writing this method is better because it's *clever*" taking place.

I still don't really have a good sense of how much this happens and what
the impact is.  But, the basic problem with this argument is that if we
feel that #2 and #3 are both problems, we are effectively inviting the
code/documentation to get sloppier and rot over time.  It needs to
either be cleaned up in review or patched later.

(Or if there's a dispute about need there, we at least need to be ok
with letting people who feel that this is worthwhile fix it up.)

I'd add:
4. Quite a few people have put time into working on third-party driver
CI, presumably at the expense of the other usual efforts.  This is fine,
and a good thing, but it surely impacted the amount of attention given
to other efforts with our small team.

 In Cinder's case I don't think new features are a problem; in fact we can't
 seem to get new features worked on and released because of all the other
 distractions.  That being said, doing a maintenance- or hardening-only type
 of release is for sure good with me.
 
 Anyway, I've had some plans to talk about how we might fix some of this in
 Cinder at next week's sprint.  If there's a broader community effort along
 these lines that's even better.
 
 Thanks,
 John




Re: [openstack-dev] [Cinder] About storing volume format info for filesystem-based drivers

2014-06-23 Thread Eric Harney
On 06/23/2014 11:07 AM, Trump.Zhang wrote:
 Hi, all:
 
 Currently, there are several filesystem-based drivers in Cinder, such
 as nfs, glusterfs, etc. Volume formats other than raw can potentially
 be supported in these drivers, such as qcow2, raw, sparse, etc.
 
 However, Cinder does not store the actual format of a volume and assumes
 all volumes are in raw format. This will have, or already has, several problems,
 as follows:
 
 1. For volume migration, the generic migration implementation in Cinder
 uses the dd command to copy the src volume to the dest volume. If the src
 volume is in qcow2 format, the instance will not get the right data from the volume
 after the dest volume is attached to the instance, because the info returned from
 Cinder states that the volume's format is raw rather than qcow2.
 2. For volume backup, the backup driver also assumes that src volumes
 are in raw format; other formats will not be supported.
 
 Indeed, the glusterfs driver has used the qemu-img info command to determine the
 format of a volume. However, as the comment from Duncan in [1] says, this
 auto-detection method has many possible error / exploit vectors: if
 the beginning content of a raw volume happens to look like a qcow2 disk, the
 auto-detection method will wrongly judge this volume to be a qcow2 volume.
 
 I propose that the format info be added to the admin_metadata
 of volumes, and enforced on all operations, such as create, copy, migrate
 and retype. The format will only be set / updated for filesystem-based
 drivers; other drivers will not contain this metadata and will default to
 raw format.
 
 Any advice?
 
 [1] https://review.openstack.org/#/c/100529/
 

I agree with the concerns here, and I think storing the creation format
is the right idea.  Please file a blueprint describing the fix and I'll
help review from there.

Eric
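
To make the trade-off concrete, a rough sketch of the two approaches
(hypothetical helper names and metadata key, not actual Cinder code):
auto-detecting with qemu-img versus trusting a format recorded at creation
time.

    import json
    import subprocess


    def detect_format(volume_path):
        # The risky approach: a raw volume whose first bytes happen to look
        # like a qcow2 header will be misidentified as qcow2.
        out = subprocess.check_output(
            ["qemu-img", "info", "--output=json", volume_path])
        return json.loads(out)["format"]


    def stored_format(volume_admin_metadata):
        # The proposed approach: trust the format recorded when the volume
        # was created; anything without an explicit entry is treated as raw.
        return volume_admin_metadata.get("format", "raw")

At create time the driver would record what it actually created (e.g.
{"format": "qcow2"}), and later operations such as migrate, backup and
retype would consult the stored value instead of probing the image contents.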




Re: [openstack-dev] [Cinder] XXXFSDriver: Query on usage of load_shares_config in ensure_shares_mounted

2014-04-11 Thread Eric Harney
On 04/11/2014 10:55 AM, Eric Harney wrote:
 On 04/11/2014 07:54 AM, Deepak Shetty wrote:
 Hi,
I am using the nfs and glusterfs driver as reference here.

 I see that load_shares_config is called every time via
 _ensure_shares_mounted, which I feel is incorrect, mainly because
 ensure_shares_mounted loads the config file again w/o restarting the service.

 I think that the shares config file should only be loaded once (during
 service startup) as part of do_setup and never again.

 
 Wouldn't this change the functionality that this provides now, though?
 
 Unless I'm missing something, since get_volume_stats calls
 _ensure_shares_mounted(), this means you can add a new share to the
 config file and have it become active in the driver.  (While I'm not
 sure this was the original intent, it could be nice to have and should
 at least be considered before ditching it.)
 
 If someone changes something in the conf file, one needs to restart the service,
 which calls do_setup again, and the changes made in shares.conf take
 effect.

 
 I'm not sure this is correct given the above.
 
 In looking further, ensure_shares_mounted ends up calling
 remotefsclient.mount(), which does _nothing_ if the share is already
 mounted, which is mostly the case. So even if someone changed something in
 the shares file (like added -o options), it won't take effect as the share
 is already mounted and the service is already running.

 In fact today, if you restart the service, even then the changes in the shares file
 won't take effect, as the mount is not unmounted; hence when the service is
 next started, the mount already exists and ensure_shares_mounted just returns
 w/o doing anything.

 The only advantage of calling load_shares_config in ensure_shares_mounted is if
 someone changed the share's server IP while the service is running ... it
 loads the new share using the new server IP, which again is wrong since
 ideally the person should restart the service for any shares.conf changes to
 take effect.

 
 This won't work anyway because of how we track provider_location in the
 database.  This particular case is planned to be addressed via this
 blueprint with reworks configuration:
 
 https://blueprints.launchpad.net/cinder/+spec/remotefs-share-cfg-improvements
 

I suppose I should also note that if the plans in this blueprint are
implemented the way I've had in mind, the main behavior discussed here (only
loading shares at startup time) would be in place, so we may want to
consider these questions under that direction.
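
A minimal sketch of that startup-only approach (hypothetical attribute and
method names, not the actual RemoteFS/NFS driver code):

    class FSDriverSketch(object):
        def __init__(self, shares_config_path):
            self.shares_config_path = shares_config_path
            self.shares = {}

        def do_setup(self, context=None):
            # Read the shares file exactly once, when the service starts.
            self._load_shares_config()

        def _load_shares_config(self):
            self.shares = {}
            with open(self.shares_config_path) as f:
                for line in f:
                    line = line.strip()
                    if not line or line.startswith("#"):
                        continue
                    # "host:/export [-o options]" -> {share: mount options}
                    share, _, opts = line.partition(" ")
                    self.shares[share] = opts or None

        def _ensure_shares_mounted(self):
            # Mount only what was loaded at startup; do not re-read the
            # file, so shares.conf changes require a service restart.
            for share, opts in self.shares.items():
                self._mount(share, opts)

        def _mount(self, share, opts):
            pass  # would shell out to mount via the remotefs client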

 Hence I feel calling load_shares_config in ensure_shares_mounted is
 incorrect and should be removed.

 Thoughts ?

 thanx,
 deepak

 
 
 




Re: [openstack-dev] [nova][cinder] Refactor ISCSIDriver to support other iSCSI transports besides TCP

2014-03-26 Thread Eric Harney
On 03/25/2014 11:07 AM, Shlomi Sasson wrote:

 I am not sure what the right approach to handle this would be. I already have 
 the code; should I open a bug or a blueprint to track this issue?
 
 Best Regards,
 Shlomi
 


A blueprint around this would be appreciated.  I have had similar
thoughts around this myself, that these should be options for the LVM
iSCSI driver rather than different drivers.

These options also mirror how we can choose between tgt/iet/lio in the
LVM driver today.  I've been assuming that RDMA support will be added to
the LIO driver there at some point, and this seems like a nice way to
enable that.
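
As a rough illustration of folding the transport choice into the existing
driver's configuration rather than adding another driver class (the
iscsi_transport option below is hypothetical; only the helper selection
existed at the time, and its name is quoted from memory), something along
these lines could work:

    from oslo_config import cfg

    opts = [
        cfg.StrOpt('iscsi_helper',
                   default='tgtadm',
                   help='iSCSI target user-land tool to use '
                        '(e.g. tgtadm, ietadm, lioadm).'),
        # Hypothetical option, for illustration only.
        cfg.StrOpt('iscsi_transport',
                   default='tcp',
                   choices=['tcp', 'iser'],
                   help='Transport used by the LVM iSCSI driver; "iser" '
                        'selects RDMA instead of plain TCP.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(opts)

    # The driver would then branch on CONF.iscsi_transport when building
    # the target, instead of shipping a separate ISERDriver class.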



Re: [openstack-dev] Cinder Stability Hack-a-thon

2014-02-03 Thread Eric Harney
On 02/03/2014 04:11 AM, Flavio Percoco wrote:
 On 01/02/14 00:06 -0800, Mike Perez wrote:
 Folks,

 I would love to get people together who are interested in Cinder
 stability to really dedicate a few days. This is not for additional
 features, but rather finishing what we already have and really getting
 those in a good shape before the end of the release.

 When: Feb 24-26
 Where: San Francisco (DreamHost Office can host), Colorado, remote?

 Some ideas that come to mind:

 - Cleanup/complete volume retype
 - Cleanup/complete volume migration [1][2]
 - Other ideas that come from this thread.

 
 As an occasional contributor to Cinder, I think it would benefit a lot
 if new tests were added. There are some areas that are lacking
 tests - AFAICT - and other tests that seem to be inconsistent with the
 rest of the test suite. This has caused me some frustration in the
 past. I don't have good examples handy but if I have some free time
 between the 24th and 26th, I'll look into that and raise them in the
 IRC channel.
 

I've gotten the same feeling, and have had some ideas around improving
the LVM and base volume tests to improve structure and coverage that I'd
like to work on.  (Though some of those may have been implemented already.)

 That said, I think folks participating should also look forward to adding
 more tests during those hacking days. Ensuring that features (not just
 methods and functions) are fully covered is important.

This may also fit in nicely with the effort around moving to mock, which
I expect will reveal issues in tests and improve things a good bit as we
pick through them while converting to the new framework.
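
For context, "moving to mock" is the switch from the older mox-style stubs to
the mock library; a trivial, hypothetical example of the target style (not an
actual Cinder test):

    import unittest
    from unittest import mock  # at the time this was the external 'mock' package


    class FakeDriver(object):
        def _run_lvs(self):
            raise NotImplementedError  # would shell out to lvs for real

        def list_volumes(self):
            return self._run_lvs().splitlines()


    class TestFakeDriver(unittest.TestCase):
        @mock.patch.object(FakeDriver, '_run_lvs', return_value='vol1\nvol2')
        def test_list_volumes(self, mock_lvs):
            driver = FakeDriver()
            self.assertEqual(['vol1', 'vol2'], driver.list_volumes())
            mock_lvs.assert_called_once_with()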

 
 Great initiative Mike!

Definitely agreed.

 
 Cheers,
 flaper
 
 I can't stress the dedicated part enough. I think if we have some
 folks from core and anyone interested in contributing and staying
 focused, we can really get a lot done in a few days with a small set of
 doable stability goals to stay focused on. If there is enough interest,
 being together in the mentioned locations would be great; otherwise remote
 would be fine as long as people can stay focused and communicate through
 suggested ideas like Team Speak or Google Hangout.

 What do you guys think? Location? Other stability concerns to add to
 the list?

 [1] - https://bugs.launchpad.net/cinder/+bug/1255622
 [2] - https://bugs.launchpad.net/cinder/+bug/1246200


 -Mike Perez
 
 
 
 
 
 




Re: [openstack-dev] [cinder] weekly meeting

2013-12-17 Thread Eric Harney
I also like the idea of alternating each week.

Eric

On 12/17/2013 01:40 AM, Mike Perez wrote:
 I agree with Qin here that alternating might be a good option. I'm not
 opposed to being present at both meetings though.
 
 -Mike Perez
 
 
 On Mon, Dec 16, 2013 at 9:31 PM, Qin Zhao chaoc...@gmail.com wrote:
 
 Hi John,

 Yes, alternating the time each week should be fine.  I just changed my
 gmail name to English... I think you can see my name now...


  On Tue, Dec 17, 2013 at 12:05 PM, John Griffith 
 john.griff...@solidfire.com wrote:

 On Mon, Dec 16, 2013 at 8:57 PM, 赵钦 chaoc...@gmail.com wrote:
 Hi John,

 I think the current meeting schedule, UTC 16:00, basically works for
 China
 TZ (12AM), although it is not perfect. If we need to reschedule, I
 think UTC
 05:00 is better than UTC 04:00, since UTC 04:00 (China 12PM) is our
 lunch
 time.


 On Tue, Dec 17, 2013 at 11:04 AM, John Griffith
 john.griff...@solidfire.com wrote:

 Hi All,

 Prompted by a recent suggestion from Tom Fifield, I thought I'd gauge
 some interest in either changing the weekly Cinder meeting time, or
 proposing a second meeting to accommodate folks in other time-zones.

 A large number of folks are already in time-zones that are not
 friendly to our current meeting time.  I'm wondering if there is
 enough of an interest to move the meeting time from 16:00 UTC on
 Wednesdays, to 04:00 or 05:00 UTC?  Depending on the interest I'd be
 willing to look at either moving the meeting for a trial period or
 holding a second meeting to make sure folks in other TZ's had a chance
 to be heard.

 Let me know your thoughts; if there are folks out there that feel
 unable to attend due to TZ conflicts, we can see what we might be
 able to do.

 Thanks,
 John






 Hi Chaochin,

 Thanks for the feedback. I think the alternate time would have to be
 moved up an hour or two anyway (between the lunch hour in your TZ and
 the fact that it just moves the problem of being at midnight to the
 folks in the US Eastern TZ).  Also, if there is interest, I think a
 better solution might be to do something like the Ceilometer
 team does and alternate the time each week.

 John





 --
 Qin Zhao



 
 
 
 

