Re: [openstack-dev] multi-attach-volume for rbd

2016-12-19 Thread Avishay Traeger
Multi-attach also happens during VM migration (attach the volume to the
destination host, move the VM to the destination host, detach the volume
from the source host).  Whether rbd's limitation affects that flow as well
isn't clear from either the bug report or the patch.

On Thu, Dec 15, 2016 at 1:30 AM, Matt Riedemann <mrie...@linux.vnet.ibm.com>
wrote:

> On 12/14/2016 3:53 PM, Shane Peters wrote:
>
>> Hello all,
>>
>> Per https://review.openstack.org/#/c/283695/ multi-attach was disabled
>> for rbd volumes.
>>
>> Does anyone know what features are missing or aren't compatible in rbd
>> for it to support multi-attach?
>>
>> Thanks
>> Shane
>>
>>
> Nova doesn't support multiattach volumes for any Cinder backend. That's
> being worked on by the Nova and Cinder teams but won't be something that's
> implemented on the Nova side in the Ocata release, and will probably be a
> stretch for Pike. There are weekly meetings about this though, see:
>
> http://eavesdrop.openstack.org/#Cinder,_Nova
>
> --
>
> Thanks,
>
> Matt Riedemann
>



-- 
*Avishay Traeger, PhD*
*System Architect*

Mobile: +972 54 447 1475
E-mail: avis...@stratoscale.com



Web <http://www.stratoscale.com/> | Blog <http://www.stratoscale.com/blog/>
 | Twitter <https://twitter.com/Stratoscale> | Google+
<https://plus.google.com/u/1/b/108421603458396133912/108421603458396133912/posts>
 | Linkedin <https://www.linkedin.com/company/stratoscale>


Re: [openstack-dev] [cinder] moving driver to open source

2016-09-20 Thread Avishay Traeger
On Tue, Sep 20, 2016 at 8:50 AM, Alon Marx <alo...@il.ibm.com> wrote:

> 
> From a deployment standpoint, the desire is that any piece of code
> required by an OpenStack installation be easily downloadable.
>

And redistributable please.



Re: [openstack-dev] [glance] Periodically checking Glance image files

2016-09-13 Thread Avishay Traeger
On Tue, Sep 13, 2016 at 7:16 AM, Nikhil Komawar <nik.koma...@gmail.com>
wrote:
> Firstly, I'd like to mention that Glance is built-in (and if deployed
> correctly) is self-resilient in ensuring that you do NOT need an audit of
> such files. In fact, if any operator (particularly a large-scale operator)
> needs such a system we have a serious issue where potentially important
> /user/ data is likely to be lost, resulting in legal issues (so please
> beware).

Can you please elaborate on how Glance is self-resilient?

Hey Sergio,
>
>
> Glad to know that you're not having any feature related issues (to me
> this is a good sign). Based on your answers, it makes sense to require a
> reliability solution for backend data (or some sort of health monitoring
> for the user data).
>

All backends will at some point lose some data.  The ask is to reflect the
image's "health" to the user.


> So, I wonder what your thoughts are for such an audit system. At a first
> glance, this looks rather not scalable, at least if you plan to do the
> audit on all of the active images. Consider a deployment trying to run
> this for around 100-500K active image records. This will need to be run
> in batches, thus completing the list of records and saying that you've
> done a full audit of the active image -- is a NP-complete problem (new
> images can be introduced, some images can be updated in the meantime, etc.)
>

NP-complete?  Really?  Every storage system scrubs all data periodically to
protect from disk errors.  Glance images should be relatively static anyway.


> The failure rate is low, so a random (sparse check) on the image data
> won't help either. Would a cron job setup to do the audit for smaller
> deployments work? May be we can look into some known cron solutions to
> do the trick?
>

How about letting the backend report the health?  S3, for example, reports
an event on object loss
<http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html#supported-notification-event-types>.
The S3 driver could monitor those events and update the image's status.
Swift performs scrubbing to determine object health - I haven't checked
whether it reports an event on object loss, but I don't see any reason it
couldn't.  A local filesystem would need its own scrubbing process (e.g.,
recalculate the hash of each object every N days).  On the other hand, if
it is a mount from some filer, the filer should be able to report on health.

Thanks,
Avishay



Re: [openstack-dev] [cinder] moving driver to open source

2016-09-10 Thread Avishay Traeger
On Sep 9, 2016 18:13, "Duncan Thomas" wrote:
> So my issue is not with any of those things, it is that I believe anybody
should be able to put together a distribution of openstack, that just
works, with any supported backend, without needing to negotiate licensing
deals with vendors, and without having to have nasty hacks in their
installers that pull things down off the web on to cinder nodes to get
around licensing rules. That is one of the main 'opens' to me in openstack.
>
> I don't care so much whether your CLI or API proxy is open or closed
source, but I really do care if I can create a distribution, even a novel
one, with that software in it, without hitting licensing issues. That is,
as I see it, a bare minimum - anything less than that and it does not
belong in the cinder source tree.

As someone who ships a product that uses these drivers, I can definitely
identify with this. Finding the software, and verifying its license, is at
times not simple for someone who hasn't purchased the product. I personally
don't care if the source code is available; I care that it's readily
available for download and can be distributed.


Re: [openstack-dev] [cinder] moving driver to open source

2016-09-08 Thread Avishay Traeger
There are a number of drivers that require closed-source tools to
communicate with the storage.  Three others that I've come across recently:

   - EMC VNX: requires Navisphere CLI v7.32 or higher
   - Hitachi storage volume driver: requires RAID Manager Ver 01-32-03/01
   or later for VSP G1000/VSP/HUS VM, Hitachi Storage Navigator Modular 2
   (HSNM2) Ver 27.50 or later for HUS 100 Family
   - Infortrend driver: requires raidcmd ESDS10

Many appliances support REST, SSH, or something similar, but some still use
proprietary protocols/tools available only as binaries to customers.

Thanks,
Avishay

On Thu, Sep 8, 2016 at 12:37 AM, Matt Riedemann <mrie...@linux.vnet.ibm.com>
wrote:

> On 9/7/2016 8:47 AM, John Griffith wrote:
>
>>
>>
>> On Tue, Sep 6, 2016 at 9:27 AM, Alon Marx <alo...@il.ibm.com
>> <mailto:alo...@il.ibm.com>> wrote:
>>
>> I want to share our plans to open the IBM Storage driver source
>> code. Historically we started our way in cinder way back (in Essex
>> if I'm not mistaken)
>>
>> ​You're mistaken, Cinder didn't exist at that time... but it's irrelevant.
>> ​
>>
>>
>> with just a small piece of code in the community while keeping most
>> of the driver code closed. Since then the code has grown, but we
>> kept with the same format. We would like now to open the driver
>> source code, while keeping the connectivity to the storage as closed
>> source.
>>
>> It might help to know *which* driver you are referring to.  IBM has a
>> number of Storwize and GPFS drivers in Cinder... what drivers are you
>> referring to here?
>>
>>
>>
>> I believe that there are other cinder drivers that have some stuff
>> in proprietary libraries.
>>
>> ​Actually we've had a hard stance on this, if you have code in Cinder
>> that requires an external lib (I personally hate this model) we
>> typically require it to be open source.
>>
>> I want to propose and formalize the principles to where we draw the
>> line (this has also been discussed in
>> https://review.openstack.org/#/c/341780/
>> <https://review.openstack.org/#/c/341780/>) on what's acceptable by
>> the community.
>> ​
>>
>>
>>
>> Based on previous discussion I understand that the rule of thumb is
>> "as long as the majority of the driver logic is in the public
>> driver" the community would be fine with that. Is this acceptable to
>> the community?
>>
>> No, I don't think that's true.  It's quite possible that some people
>> make those sorts of statements but frankly they're missing the entire point.
>>
>> In case you weren't aware, OpenStack IS an OPEN SOURCE project, not a
>> proprietary or hybrid project.  We are VERY clear as a community about
>> that fact and what we call the "4 Opens" [1].  It's my opinion that if
>> you're in then you're ALL in.​
>>
>> [1]: https://governance.openstack.org/reference/opens.html
>> ​
>>
>>
>>
>> Regards,
>> Alon
>>
> I'm assuming this is the XIV driver which is a shim:
>
> https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/ibm/ibm_storage.py
>
> As for the open source part of it, vCenter isn't open source but there is
> a VMDK driver to talk to it; I imagine this is similar.
>
> --
>
> Thanks,
>
> Matt Riedemann
>

Re: [openstack-dev] [cinder][drivers] Backend and volume health reporting

2016-08-15 Thread Avishay Traeger
On Sun, Aug 14, 2016 at 5:53 PM, John Griffith <john.griffi...@gmail.com>
wrote:
>
> ​I'd like to get a more detailed use case and example of a problem you
> want to solve with this.  I have a number of concerns including those I
> raised in your "list manageable volumes" proposal.​  Most importantly
> there's really no clear definition of what these fields mean and how they
> should be interpreted.
>

I didn't specify what anything means yet on purpose - the idea was to first
gather information here about what various backends can report, and then
make an educated decision about what health states make sense to expose.

I see Cinder's potential as a single pane of glass for managing all of my
cloud's storage.  Once I do some initial configuration, I hope to look at
the backend's UI as little as possible.  Today a user can create a volume,
but can't know anything about its resiliency or availability.  The user
has a volume that's "available" and is happy.  But what does the user
really care about?  In my opinion not Cinder's internal state machine, but
things like "Is my data safe?" and "Is my data accessible?"  That's the
problem that I want to solve here.


> For backends, I'm not sure what you want to solve that can't be handled
> already by the scheduler and report-capabilities periodic job?  You can
> already report back from your backend to the scheduler that you shouldn't
> be used for any scheduling activities going forward.  More detailed info
> than that might be useful, but I'm not sure it wouldn't fall into an
> already existing OpenStack monitoring project like Monasca?
>

My storage requires maintenance and now all volumes are inaccessible.  I
have management access and can create as many volumes as I want, but cannot
attach them.  Or the storage is down entirely.  Or it is up, but
performance/reliability is degraded due to rebuilds in progress.  Or
multiple disks failed, and I lost data from 100 volumes.

In all these cases, all I see is that my volumes are available/in-use.  To
have any real insight into what is going on, the admin has to go to the
storage backend and use vendor-specific APIs to find out.  Why not abstract
these APIs as well, to allow the admin to monitor the storage?  It can be
as simple as "Hey, there's a problem, your volumes aren't accessible - go
look at the backend's UI" - without going into details.

Do you propose that every vendor write a Monasca plugin?  That doesn't seem
to be in line with Monasca's goals...

As far as volumes, I personally don't think volumes should have more than a
> few states.  They're either "ok" and available for an operation or they're
> not.
>

I agree.  In my opinion volumes have way too many states today, but that's
another topic.  What I am proposing is not new states, or a new state
machine, but rather a simple health property: volume['health'] = "healthy",
volume['health'] = "error" - whatever the backend reports.


> The list you have seems ok to me, but I don't see a ton of value in fault
> prediction or going to great lengths to avoid something failing. The
> current model we have of a volume being "ok" until it's "not" seems
> perfectly reasonable to me.  Typically my experience is that trying to be
> clever and polling/monitoring to try and preemptively change the status of
> a volume does little more than result in complexity, confusion and false
> status changes of resources.  I'm pretty strongly opposed to having a level
> of granularity of the volume here.  At least for now, I'd rather see what
> you have in mind for the backend and nail that down to something that's
> solid and basically bullet proof before trying to tackle thousands of
> volumes which have transient states.  And of course the biggest question I
> have still "what problem" you hope to solve here?
>

This is not about fault prediction, or preemptive changes, or anything
fancy like that.  It's simply reporting on the current health.  "You have
lost the data in this volume, sorry".  "Don't bother trying to attach this
volume right now, it's not accessible."  "The storage is currently doing
something with your volume and performance will suck."

I don't know exactly what we want to expose - I'd rather answer that after
getting feedback from vendors about what information is available.  But
providing real, up-to-date health status on storage resources is of great
value to customers.

Thanks,
Avishay



[openstack-dev] [cinder][drivers] Backend and volume health reporting

2016-08-14 Thread Avishay Traeger
Hi all,
I would like to propose working on a new feature for Ocata to provide
health information for Cinder backends and volumes.  Currently, a volume's
status basically reflects the last management operation performed on it -
it will be in an error state only as a result of a failed management
operation.  There is no indication as to whether or not a backend or volume
is "healthy" - i.e., whether the data exists and is accessible.

The basic idea would be to add a "health" property for both backends and
volumes.

For backends, this may be something like:
- "healthy"
- "warning" (something is wrong and the admin should check the storage)
- "management unavailable" (there is no management connectivity)
- "data unavailable" (there is no data path connectivity)

For volumes:
- "healthy"
- "degraded" (i.e., not at full redundancy)
- "error" (in case of a data loss event)
- "management unavailable" (there is no management connectivity)
- "data unavailable" (there is no data path connectivity)

Before I start working on a spec, I wanted to get some feedback, especially
from driver owners:
1. What useful information can you provide at the backend level?
2. And at the volume level?
3. How would you obtain this information?  Querying the storage (poll)?
Registering for events?  Something else?
4. Other feedback?

Thank you,
Avishay



[openstack-dev] [swift] Hummingbird Roadmap

2016-03-21 Thread Avishay Traeger
Hi all,
I was wondering what the roadmap for Hummingbird is.
Will development continue?  Will support continue?  Is it expected to reach
feature parity or even replace the Python code?

Thank you,
Avishay




Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-11 Thread Avishay Traeger
On Thu, Feb 11, 2016 at 12:23 PM, Daniel P. Berrange <berra...@redhat.com>
wrote:

> As above, we need to solve this more generally than just multi-attach,
> even single-attach is flawed today.
>

Agreed.  This is what I was getting at.  Because we have at least three
different types of attach being handled the same way, we are getting into
tricky situations.  (The three types: iSCSI/FC attaches a volume to the
host, Ceph attaches a volume to the VM, and NFS attaches a pool to the
host.)  Multi-attach just makes a bad situation worse.



Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-10 Thread Avishay Traeger
sumer of the Target.
>
> My view is that maybe we should look at addressing the multiple use of a
> single target case in Nova, and then absolutely figure out how to make
> things work correctly on the Cinder side for all the different behaviors
> that may occur on the Cinder side from the various vendors.
>
> Make sense?
>
> John
>




Re: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-02-06 Thread Avishay Traeger
On Thu, Feb 4, 2016 at 6:38 PM, Walter A. Boring IV <walter.bor...@hpe.com>
wrote:

> My plan was to store the connector object at attach_volume time.   I was
> going to add an additional column to the cinder volume attachment table
> that stores the connector that came from nova.   The problem is live
> migration.   After live migration the connector is out of date.  Cinder
> doesn't have an existing API to update attachment.  That will have to be
> added, so that the connector info can be updated.
> We have needed this for force detach for some time now.
>
> It's on my list, but most likely not until N, or at least not until the
> microversions land in Cinder.
> Walt
>

I think live migration should probably just be a second attachment - during
the migration you have two attachments, and then you detach the first.  This
is correct because, as far as Cinder and the storage are concerned, there
really are two attachments.  Most of this mess started because we were
trying to make the volume status reflect the status in both Cinder and
Nova.  If the status reflects only Cinder's (and the storage backends')
status, things become simpler.  (We might need to pass an extra flag on the
second attach to override any "no multiattach" policies that exist.)
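A rough sketch of the ordering I have in mind - the callables stand in for
the real Nova/Cinder operations, and the force_multiattach flag is
hypothetical:

    def live_migrate_with_volume(attach, migrate, detach):
        """Attach to destination, migrate the VM, detach from source."""
        # 1. Second attachment on the destination host; an override flag
        #    would relax any "no multiattach" policy for the duration.
        dst_attachment = attach(force_multiattach=True)
        # 2. Move the instance; both attachments exist while it is in flight.
        migrate()
        # 3. Drop the original attachment once the instance is running on
        #    the destination.
        detach()
        return dst_attachment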



Re: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-01-27 Thread Avishay Traeger
On Wed, Jan 27, 2016 at 1:01 PM, Matt Riedemann <mrie...@linux.vnet.ibm.com>
wrote:


> I've replied on https://review.openstack.org/#/c/266095/ and the related
> cinder change https://review.openstack.org/#/c/272899/ which are adding a
> new key to the volume connector dict being passed around between nova and
> cinder, which is not ideal.
>
> I'd really like to see us start modeling the volume connector with
> versioned objects so we can (1) tell what's actually in this mystery
> connector dict in the nova virt driver interface and (2) handle version
> compat with adding new keys to it.
>

I agree with you.  Actually, I think it would be more correct to have
Cinder store it, and not pass it at all to terminate_connection().




Re: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-01-25 Thread Avishay Traeger
OK great, thanks!  I added a suggestion to the etherpad as well, and found
this link helpful: https://review.openstack.org/#/c/266095/

On Tue, Jan 26, 2016 at 1:37 AM, D'Angelo, Scott <scott.dang...@hpe.com>
wrote:

> There is currently no simple way to clean up Cinder attachments if the
> Nova node (or the instance) has gone away. We’ve put this topic on the
> agenda for the Cinder mid-cycle this week:
>
> https://etherpad.openstack.org/p/mitaka-cinder-midcycle L#113
>
>
>
> *From:* Avishay Traeger [mailto:avis...@stratoscale.com]
> *Sent:* Monday, January 25, 2016 7:21 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from
> failed nodes
>
>
>
> Hi all,
>
> I was wondering if there was any way to cleanly detach volumes from failed
> nodes.  In the case where the node is up nova-compute will call Cinder's
> terminate_connection API with a "connector" that includes information about
> the node - e.g., hostname, IP, iSCSI initiator name, FC WWPNs, etc.
>
> If the node has died, this information is no longer available, and so the
> attachment cannot be cleaned up properly.  Is there any way to handle this
> today?  If not, does it make sense to save the connector elsewhere (e.g.,
> DB) for cases like these?
>
>
>
> Thanks,
>
> Avishay
>
>
>




[openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-01-25 Thread Avishay Traeger
Hi all,
I was wondering if there was any way to cleanly detach volumes from failed
nodes.  In the case where the node is up, nova-compute will call Cinder's
terminate_connection API with a "connector" that includes information about
the node - e.g., hostname, IP, iSCSI initiator name, FC WWPNs, etc.
If the node has died, this information is no longer available, and so the
attachment cannot be cleaned up properly.  Is there any way to handle this
today?  If not, does it make sense to save the connector elsewhere (e.g.,
DB) for cases like these?
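For reference, the connector that nova-compute builds looks roughly like the
dict below (field names approximate what os-brick reports; treat it as
illustrative):

    connector = {
        "host": "compute-01",
        "ip": "192.168.1.10",
        "initiator": "iqn.1994-05.com.redhat:0123456789ab",  # iSCSI
        "wwpns": ["21000024ff30441c"],                        # FC
        "multipath": False,
        "platform": "x86_64",
        "os_type": "linux2",
    }

Saving something like this at attach time (e.g., in the DB) is what the
question above is getting at.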

Thanks,
Avishay



Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-08-01 Thread Avishay Traeger
On Sat, Aug 1, 2015 at 2:51 AM, Monty Taylor mord...@inaugust.com wrote:

 I hear tell that there are a bunch of ops people who are in love with Consul


At my company we love Consul.  We have found it to be very scalable and
performant; it gives us an easy-to-use k/v store, membership service, DNS,
etc.  We use it to load balance requests to our services and route requests
to active instances, including to OpenStack and MariaDB+Galera.  That said,
I don't know if something like Consul, etcd, or ZooKeeper needs to be part
of OpenStack itself, or just part of the deployment (unless we decide to
store metadata in a k/v store in place of the SQL DB - which is entirely
possible with some adjustments to OpenStack).

I find it hard to believe that Cinder really needs distributed locks.
AFAIU, there is one lock in the non-driver Cinder code, to solve a race
between deleting a volume and creating a snapshot/clone from it.  You can
solve that with other methods.  I already proposed using garbage collection
for deleting volumes - you can delete offline and, before deleting, easily
check the DB for an ongoing operation with the given volume as a source.
If there is one, just wait.  The bulk of the locks seem to be in the
drivers.  I find it hard to believe that the management APIs of so many
storage products cannot be called concurrently.  I think we could solve
many issues in Cinder with some requirements on drivers, such as that they
need to be able to run active-active with no distributed locks.  Another
requirement, idempotency, would significantly ease recovery pains, I
believe.
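A minimal sketch of that garbage-collection idea, with hypothetical DB and
driver helpers - mark the volume deleted immediately and reclaim it in the
background once nothing uses it as a source:

    import time

    def gc_deleted_volumes(db, driver, interval=60):
        while True:
            for volume in db.volumes_marked_deleted():
                # Still the source of an in-flight snapshot/clone?  Skip it;
                # a later pass will pick it up.  No lock needed.
                if db.has_ongoing_operations(source_volume_id=volume.id):
                    continue
                driver.delete_volume(volume)  # idempotent, per the above
                db.purge_volume(volume.id)
            time.sleep(interval)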

I very much agree with Mike's statement that Cinder isn't as complex as
people are making it.  Well, maybe it is, but it doesn't need to be. :-)




Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot

2015-06-29 Thread Avishay Traeger
On Sun, Jun 28, 2015 at 1:16 PM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 We need mutual exclusion for several operations. Whether that is done by
 entity queues, locks, state-based locking at the API layer, or something
 else, we need mutual exclusion.

 Our current API does not lend itself to looser consistency, and I struggle
 to come up with a sane API that does - nobody doing an operation on a
 volume wants it to happen "maybe, at some time..."

What about deletes?  They can happen later on, which I think can help in
these situations.





Re: [openstack-dev] [tempest][qa][cinder] How to configure tempest with to test volume migration?

2015-05-26 Thread Avishay Traeger
Good to see that you will work on this!
In addition to the cases that Duncan mentioned, there is also the case of
moving a volume between two pools within the same backend.  This probably
complicates the setup even further.

On Mon, May 25, 2015 at 12:31 PM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 There doesn't seem to be an existing tempest test for this.

 I suggest writing the test so that it checks whether there are two volume
 types defined, and if so uses them. Ideally you should test migration to and
 from LVM. There is offline migration (volume is available) and online
 migration (volume is attached), which are two completely separate code paths.

 Great to hear somebody is writing tests for this!
 On 25 May 2015 10:24, Sheng Bo Hou sb...@cn.ibm.com wrote:

 Hi everyone,

 I am planning to add test cases for volume migration for cinder into
 tempest. I am wondering how to enable multiple back-ends for cinder in
 tempest, and connect to different back-ends. For example, I configure one
 back-end for LVM and the other is for IBM Storwize V7000 driver. Then run
 the test named like test_volume_migration_LVM_Storwize to test
 if the migration really works fine.

 About the configuration, is this something tempest can do so far? Or is
 this something new we need to add?
 Thank you very much.

 Best wishes,
 Vincent Hou (侯胜博)

 Staff Software Engineer, Open Standards and Open Source Team, Emerging
 Technology Institute, IBM China Software Development Lab

 Tel: 86-10-82450778 Fax: 86-10-82453660
 Notes ID: Sheng Bo Hou/China/IBM@IBMCNE-mail: sb...@cn.ibm.com
 Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang
 West Road, Haidian District, Beijing, P.R.C.100193






Re: [openstack-dev] [cinder] Some Changes to Cinder Core

2015-05-25 Thread Avishay Traeger
On Sat, May 23, 2015 at 2:34 AM, Mike Perez thin...@gmail.com wrote:

 I would like to recognize Avishay Traeger for his contributions, and now,
 unfortunately, departure from the Cinder core team.


Unfortunately I have been unable to participate fully due to additional
obligations, and therefore it is time to step down.  It was a pleasure
serving on the Cinder core team.  I will still be around, and am not
leaving the community.  I wish Mike and the Cinder team the best of luck.

Cheers,
Avishay


Re: [openstack-dev] Cinder Third-Party CI: what next? (was Re: [cinder] Request exemption for removal of NetApp FC drivers (no voting CI))

2015-03-24 Thread Avishay Traeger
On Mon, Mar 23, 2015 at 9:23 PM, Anita Kuno ante...@anteaya.info wrote:

 I'm really disappointed that there hasn't been more support for Mike in
 this process. I can see how everyone thought I was the problem last year
 when I had to endure this kind of treatment in Neutron, but I would
 think after seeing the exact same kind of behaviour a second time folks
 might be starting to see the pattern.


This CI is in my opinion one of the most important undertakings in Cinder
to date.  Cinder is basically an abstraction over a set of drivers, so
testing those drivers is very important.  This is especially true for those
who deploy OpenStack at various customer sites, each with different
hardware, and who up to now just had to pray that things worked.

The discussion about the CI has been going on forever.  Mike brought it up
so many times in every forum possible and did a great job with this
difficult task.  While I understand that setting this up is not a simple
task, I think there was enough time.  We have been discussing this CI
forever, and if action is not taken now, it will never happen.

This is not the end of the world for drivers that are removed.  Some
drivers are already hosted in their own GitHub repos as well as in
Cinder's, so vendors can go that route.  Or maybe an exception will be made
to allow backports for removed drivers (I'm not sure this is a good idea).

Anyway, I'm very happy to finally have a release where I know that all
drivers are more-or-less working.  Kudos to Mike, and to all of the Cinder
folks that pioneered the effort and provided support to those that followed.




Re: [openstack-dev] [cinder][horizon]Proper error handling/propagation to UI

2015-03-02 Thread Avishay Traeger
Sorry, I meant to say that the expected behavior is that volumes are
independent entities, and therefore you should be able to delete a snapshot
even if it has volumes created from it (just like you should be able to
delete a volume that has clones from it).  The exception is that Cinder
will not permit you to delete a volume that has snapshots.

On Mon, Mar 2, 2015 at 3:22 PM, Eduard Matei eduard.ma...@cloudfounders.com
 wrote:

 @Duncan:
 I tried with lvmdriver-1, fails with error:
 ImageCopyFailure: Failed to copy image to volume: qemu-img:
 /dev/mapper/stack--volumes--lvmdriver--1-volume--e8323fc5--8ce4--4676--bbec--0a85efd866fc:
 error while converting raw: Could not open device: Permission denied

 It's been configured with 2 drivers (ours, and lvmdriver), but our driver
 works, so not sure where it fails.

 Eduard

 On Mon, Mar 2, 2015 at 8:23 AM, Eduard Matei 
 eduard.ma...@cloudfounders.com wrote:

 Thanks
 @Duncan: I'll try with the lvm driver.
  @Avishay, I'm not trying to delete a volume created from a snapshot; I'm
  trying to delete a snapshot that has volumes created from it (actually I
  need to prevent this action and properly report the cause of the failure:
  SnapshotIsBusy).


 Eduard

 On Mon, Mar 2, 2015 at 7:57 AM, Avishay Traeger avis...@stratoscale.com
 wrote:

 Deleting a volume created from a snapshot is permitted.  Performing
 operations on a volume created from snapshot should have the same behavior
 as volumes created from volumes, images, or empty (no source).  In all of
 these cases, the volume should be deleted, regardless of where it came
 from.  Independence from source is one of the differences between volumes
 and snapshots in Cinder.  The driver must take care to ensure this.

 As to your question about propagating errors without changing an
 object's state, that is unfortunately not doable in Cinder today (or any
 other OpenStack project as far as I know).  The object's state is currently
 the only mechanism for reporting an operation's success or failure.

 On Sun, Mar 1, 2015 at 6:07 PM, Duncan Thomas duncan.tho...@gmail.com
 wrote:

 I thought that case should be caught well before it gets to the driver.
 Can you retry with the LVM driver please?

 On 27 February 2015 at 10:48, Eduard Matei 
 eduard.ma...@cloudfounders.com wrote:

 Hi,

 We've been testing our cinder driver extensively and found a strange
 behavior in the UI:
 - when trying to delete a snapshot that has clones (created volume
 from snapshot) and error is raised in our driver which turns into
 error_deleting in cinder and the UI; further actions on that snapshot 
 are
 impossible from the ui, the user has to go to CLI and do cinder
 snapshot-reset-state to be able to delete it (after having deleted the
 clones)
 - to help with that we implemented a check in the driver and now we
 raise exception.SnapshotIsBusy; now the snapshot remains available (as it
 should be) but no error bubble is shown in the UI (only the green one:
 Success. Scheduled deleting of...). So the user has to go to c-vol screen
 and check the cause of the error

 So question: how should we handle this so that
 a. The snapshot remains in state available
 b. An error bubble is shown in the UI stating the cause.

 Thanks,
 Eduard

 --

 *Eduard Biceri Matei, Senior Software Developer*
 www.cloudfounders.com
  | eduard.ma...@cloudfounders.com



 *CloudFounders, The Private Cloud Software Company*








 --
 Duncan Thomas

Re: [openstack-dev] [cinder][horizon]Proper error handling/propagation to UI

2015-03-01 Thread Avishay Traeger
Deleting a volume created from a snapshot is permitted.  Performing
operations on a volume created from a snapshot should have the same behavior
as on volumes created from volumes, images, or empty (no source).  In all of
these cases, the volume should be deletable, regardless of where it came
from.  Independence from source is one of the differences between volumes
and snapshots in Cinder.  The driver must take care to ensure this.

As to your question about propagating errors without changing an object's
state, that is unfortunately not doable in Cinder today (or any other
OpenStack project as far as I know).  The object's state is currently the
only mechanism for reporting an operation's success or failure.
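For reference, the driver-side check described later in this thread boils
down to something like the sketch below (the _has_clones() and
_delete_on_backend() helpers are hypothetical; SnapshotIsBusy is an existing
Cinder exception, if I recall correctly):

    from cinder import exception

    class MyDriver(object):
        def delete_snapshot(self, snapshot):
            # Refuse to delete a snapshot that still has volumes created
            # from it.  The snapshot then stays 'available', but as noted
            # above, the only error channel back to the user is state.
            if self._has_clones(snapshot):
                raise exception.SnapshotIsBusy(
                    snapshot_name=snapshot['name'])
            self._delete_on_backend(snapshot)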

On Sun, Mar 1, 2015 at 6:07 PM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 I thought that case should be caught well before it gets to the driver.
 Can you retry with the LVM driver please?

 On 27 February 2015 at 10:48, Eduard Matei eduard.ma...@cloudfounders.com
  wrote:

 Hi,

 We've been testing our cinder driver extensively and found a strange
 behavior in the UI:
 - when trying to delete a snapshot that has clones (created volume from
 snapshot) and error is raised in our driver which turns into
 error_deleting in cinder and the UI; further actions on that snapshot are
 impossible from the ui, the user has to go to CLI and do cinder
 snapshot-reset-state to be able to delete it (after having deleted the
 clones)
 - to help with that we implemented a check in the driver and now we raise
 exception.SnapshotIsBusy; now the snapshot remains available (as it should
 be) but no error bubble is shown in the UI (only the green one: Success.
 Scheduled deleting of...). So the user has to go to c-vol screen and check
 the cause of the error

 So question: how should we handle this so that
 a. The snapshot remains in state available
 b. An error bubble is shown in the UI stating the cause.

 Thanks,
 Eduard

 --

 *Eduard Biceri Matei, Senior Software Developer*
 www.cloudfounders.com
  | eduard.ma...@cloudfounders.com



 *CloudFounders, The Private Cloud Software Company*







 --
 Duncan Thomas







Re: [openstack-dev] what code in cinder volume driver supports volume migration between two backends of same type but having different volume types? [cinder]

2015-02-28 Thread Avishay Traeger
Nikesh,
The case you are trying is supposed to fail.  You have a volume of type
dothill_realstor1, which is defined to say "this volume must be on backend
DotHill_RealStor1."  This is a requirement that you defined for that
volume.  Now you want to migrate it to realstor2, which is a violation of
the requirement that you specified.  To migrate it, you should change the
volume type (retype), which changes the requirement.

Thanks,
Avishay

On Sat, Feb 28, 2015 at 11:02 AM, Nikesh Kumar Mahalka 
nikeshmaha...@vedams.com wrote:

 I tried the link below for volume migration on my driver, and also made
 similar efforts for LVM.  Every document available in OpenStack for volume
 migration shows migration of a volume whose volume type is None.

 I added a host-assisted volume migration function to my cinder driver.
 When I try volume migration on a volume without a volume type, my volume
 migration function gets called and I am able to do the migration.

 But when I try volume migration on a volume that has a volume type, my
 volume migration function is not getting called.

 http://paste.openstack.org/show/183392/
 http://paste.openstack.org/show/183405/



 On Tue, Jan 20, 2015 at 12:31 AM, Nikesh Kumar Mahalka
 nikeshmaha...@vedams.com wrote:
  do cinder retype (v2) works for lvm?
  How to use cinder retype?
 
  I tried for volume migration from one volume-type LVM backend to
  another volume-type LVM backend.But its failed.
  How can i acheive this?
 
  Similarly i am writing a cinder volume driver for my array and want to
  migrate volume from one volume type to another volume type for my
  array backends.
  so want to know how can i achieve this in cinder driver?
 
 
 
  Regards
  Nikesh







Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera

2015-02-04 Thread Avishay Traeger
On Wed, Feb 4, 2015 at 11:00 PM, Robert Collins robe...@robertcollins.net
wrote:

 On 5 February 2015 at 10:24, Joshua Harlow harlo...@outlook.com wrote:
  How interesting,
 
  Why are people using galera if it behaves like this? :-/

 Because it's actually fairly normal. In fact it's an instance of point 7
 on https://wiki.openstack.org/wiki/BasicDesignTenets - one of our
 oldest wiki pages :).


When I hear "MySQL" I don't exactly think of eventual consistency (#7),
scalability (#1), horizontal scalability (#4), etc.
For the past few months I have been advocating implementing an alternative
to db/sqlalchemy, but of course it's a huge undertaking.  NoSQL (or even a
distributed key-value store) should be considered, IMO.  Just some food for
thought :)




Re: [openstack-dev] what code in cinder volume driver supports volume migration between two backends of same type but having different volume types?

2015-01-21 Thread Avishay Traeger
On Mon, Jan 19, 2015 at 8:01 PM, Nikesh Kumar Mahalka 
nikeshmaha...@vedams.com wrote:

  Does cinder retype (v2) work for LVM?
  How do I use cinder retype?


As far as I remember, LVM doesn't really leverage volume types.  What types
did you define, and what command are you running?


  I tried volume migration from one volume-type LVM backend to
  another volume-type LVM backend, but it failed.
  How can I achieve this?


It should work.  Please provide the commands you ran, the result, and all
relevant logs.


  Similarly, I am writing a cinder volume driver for my array and want to
  migrate volumes from one volume type to another for my array backends,
  so I want to know how I can achieve this in the cinder driver.


There are several driver APIs that you can implement.  First, you are most
likely inheriting generic migration/retype from the base driver class.
This works by creating a new volume and moving the data from the original to
the new one, either using the hypervisor (for an attached volume) or by
attaching both volumes to a server running cinder-volume and running dd.
Your driver may be able to do more optimized migrations/retypes by
implementing the respective APIs.  The IBM Storwize/SVC driver implements
both, as do several others - I suggest you look at them for examples.
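A rough sketch of those two driver hooks, as I remember the interface
(returning a "not handled" result makes Cinder fall back to the generic
copy); the _array_*() helpers are placeholders for your array's API:

    class MyArrayDriver(object):
        def migrate_volume(self, context, volume, host):
            # Optimized path only when the destination is the same array;
            # otherwise let Cinder do the generic attach-and-dd migration.
            if not self._same_array(host):
                return (False, None)
            self._array_move(volume, host)
            return (True, None)  # moved; no model update needed

        def retype(self, context, volume, new_type, diff, host):
            # Change the volume's properties in place when the array can;
            # otherwise Cinder falls back to migration plus retype.
            if not self._can_retype_in_place(volume, diff):
                return False
            self._array_apply_type(volume, new_type)
            return True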

Thanks,
Avishay




Re: [openstack-dev] Changes to Cinder Core

2015-01-21 Thread Avishay Traeger
+1

On Wed, Jan 21, 2015 at 7:16 PM, Mike Perez thin...@gmail.com wrote:

 On Wed, Jan 21, 2015 at 10:14 AM, Mike Perez thin...@gmail.com wrote:
  It gives me great pleasure to nominate Ivan Kolodyazhny (e0ne) for
  Cinder core. Ivan's reviews have been valuable in decisions, and his
  contributions to Cinder core code have been greatly appreciated.
 
  Reviews:
 
 https://review.openstack.org/#/q/reviewer:%22Ivan+Kolodyazhny+%253Ce0ne%2540e0ne.info%253E%22,n,z
 
  Contributions:
 
 https://review.openstack.org/#/q/owner:%22Ivan+Kolodyazhny%22+project:+openstack/cinder,n,z
 
  30/90 day review stats:
  http://stackalytics.com/report/contribution/cinder-group/30
  http://stackalytics.com/report/contribution/cinder-group/90
 
  As new contributors step up to help in the project, some move onto
  other things. I would like to recognize Josh Durgin for his early
  contributions to Nova volume, early involvement with Cinder, and now
  unfortunately departure from the Cinder core team.
 
  Cinder core, please reply with a +1 for approval. This will be left
  open until Jan 26th. Assuming there are no objections, this will go
  forward after voting is closed.

 And apologies for missing the [cinder] subject prefix.

 --
 Mike Perez

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
*Avishay Traeger*
*Storage RD*

Mobile: +972 54 447 1475
E-mail: avis...@stratoscale.com



Web http://www.stratoscale.com/ | Blog http://www.stratoscale.com/blog/
 | Twitter https://twitter.com/Stratoscale | Google+
https://plus.google.com/u/1/b/108421603458396133912/108421603458396133912/posts
 | Linkedin https://www.linkedin.com/company/stratoscale
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-21 Thread Avishay Traeger
I would say that wipe-on-delete is not necessary in most deployments.

Most storage backends exhibit the following behavior:
1. Delete volume A that has data on physical sectors 1-10
2. Create new volume B
3. Read from volume B before writing, which happens to map to physical
sector 5 - backend should return zeroes here, and not data from volume A

In case the backend doesn't provide this rather standard behavior, data
must be wiped immediately.  Otherwise, the only risk is physical security,
and if that's not adequate, customers shouldn't be storing all their data
there regardless.  You could also run a periodic job to wipe deleted
volumes to reduce the window of vulnerability, without making delete_volume
take a ridiculously long time.

Encryption is a good option as well, and of course it protects the data
before deletion as well (as long as your keys are protected...)

Bottom line - I too think the default in devstack should be to disable this
option, and think we should consider making the default False in Cinder
itself.  This isn't the first time someone has asked why volume deletion
takes 20 minutes...
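
For anyone who wants to change this in their own deployment, the knobs are
in cinder.conf (writing these from memory, so double-check the names and
defaults against your release):

  [DEFAULT]
  # What to do with data when an LVM volume is deleted: zero, shred, none
  volume_clear = none
  # Wipe only the first N MiB instead of the whole volume (0 = all)
  volume_clear_size = 0
  # ionice arguments for the wipe process, e.g. idle I/O class
  volume_clear_ionice = -c3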

As for queuing backup operations and managing bandwidth for various
operations, ideally this would be done with a holistic view, so that for
example Cinder operations won't interfere with Nova, or different Nova
operations won't interfere with each other, but that is probably far down
the road.

Thanks,
Avishay


On Tue, Oct 21, 2014 at 9:16 AM, Chris Friesen chris.frie...@windriver.com
wrote:

 On 10/19/2014 09:33 AM, Avishay Traeger wrote:

 Hi Preston,
 Replies to some of your cinder-related questions:
 1. Creating a snapshot isn't usually an I/O intensive operation.  Are
 you seeing I/O spike or CPU?  If you're seeing CPU load, I've seen the
 CPU usage of cinder-api spike sometimes - not sure why.
 2. The 'dd' processes that you see are Cinder wiping the volumes during
 deletion.  You can either disable this in cinder.conf, or you can use a
 relatively new option to manage the bandwidth used for this.

 IMHO, deployments should be optimized to not do very long/intensive
 management operations - for example, use backends with efficient
 snapshots, use CoW operations wherever possible rather than copying full
 volumes/images, disabling wipe on delete, etc.


 In a public-cloud environment I don't think it's reasonable to disable
 wipe-on-delete.

 Arguably it would be better to use encryption instead of wipe-on-delete.
 When done with the backing store, just throw away the key and it'll be
 secure enough for most purposes.

 Chris



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-19 Thread Avishay Traeger
Hi Preston,
Replies to some of your cinder-related questions:
1. Creating a snapshot isn't usually an I/O intensive operation.  Are you
seeing I/O spike or CPU?  If you're seeing CPU load, I've seen the CPU
usage of cinder-api spike sometimes - not sure why.
2. The 'dd' processes that you see are Cinder wiping the volumes during
deletion.  You can either disable this in cinder.conf, or you can use a
relatively new option to manage the bandwidth used for this.

IMHO, deployments should be optimized to not do very long/intensive
management operations - for example, use backends with efficient snapshots,
use CoW operations wherever possible rather than copying full
volumes/images, disabling wipe on delete, etc.

Thanks,
Avishay

On Sun, Oct 19, 2014 at 1:41 PM, Preston L. Bannister pres...@bannister.us
wrote:

 OK, I am fairly new here (to OpenStack). Maybe I am missing something. Or
 not.

 Have a DevStack, running in a VM (VirtualBox), backed by a single flash
 drive (on my current generation MacBook). Could be I have something off in
 my setup.

 Testing nova backup - first the existing implementation, then my (much
 changed) replacement.

 Simple scripts for testing. Create images. Create instances (five). Run
 backup on all instances.

 Currently found in:
 https://github.com/dreadedhill-work/stack-backup/tree/master/backup-scripts

 First time I started backups of all (five) instances, load on the Devstack
 VM went insane, and all but one backup failed. Seems that all of the
 backups were performed immediately (or attempted), without any sort of
 queuing or load management. Huh. Well, maybe just the backup implementation
 is naive...

 I will write on this at greater length, but backup should interfere as
 little as possible with foreground processing. Overloading a host is
 entirely unacceptable.

 Replaced the backup implementation so it does proper queuing (among other
 things). Iterating forward - implementing and testing.

 Fired off snapshots on five Cinder volumes (attached to five instances).
 Again the load shot very high. Huh. Well, in a full-scale OpenStack setup,
 maybe storage can handle that much I/O more gracefully ... or not. Again,
 should taking snapshots interfere with foreground activity? I would say,
 most often not. Queuing and serializing snapshots would strictly limit the
 interference with foreground. Also, very high end storage can perform
 snapshots *very* quickly, so serialized snapshots will not be slow. My take
 is that the default behavior should be to queue and serialize all heavy I/O
 operations, with non-default allowances for limited concurrency.

 Cleaned up (which required reboot/unstack/stack and more). Tried again.

 Ran two test backups (which in the current iteration create Cinder volume
 snapshots). Asked Cinder to delete the snapshots. Again, very high load
 factors, and in top I can see two long-running dd processes. (Given I
 have a single disk, more than one dd is not good.)

 Running too many heavyweight operations against storage can lead to
 thrashing. Queuing can strictly limit that load, and insure better and
 reliable performance. I am not seeing evidence of this thought in my
 OpenStack testing.

 So far it looks like there is no thought to managing the impact of disk
 intensive management operations. Am I missing something?





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Get server side exception

2014-10-05 Thread Avishay Traeger
Hi Eduard,
According to the error you had, I assume your image is in a sparse format
(probably qcow2).  When you create a volume from it, Cinder will convert
it to 'raw', which increases its size.
parameter is now optional if you create a volume from a source (such as
image, snapshot, or other volume).
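
You can see this up front by checking the image's virtual size, which is
what matters after the conversion (the output below is illustrative):

  $ qemu-img info Fedora.qcow2
  image: Fedora.qcow2
  file format: qcow2
  virtual size: 2.0G (2147483648 bytes)
  disk size: 199M

  # so the volume needs at least 2 GB:
  $ cinder create --image-id <IMAGE_ID> --display-name fedora-vol 2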

With regard to your question on how to view this on the client side, if the
volume goes to 'error' state, there is no way to know why right now without
checking logs.  Yes, this is bad.

Thanks,
Avishay

On Fri, Oct 3, 2014 at 10:06 AM, Eduard Matei 
eduard.ma...@cloudfounders.com wrote:

 Hi,

 I'm creating a cinder volume from a glance image (Fedora).
 The image is 199 Mb and since it's for testing, i created a volume of size
 1GB.
 This fails, and puts image in status Error. (without any more info)

 Digging through screens i found an exception ImageUnacceptable (size is 2
 GB and doesn't fit ...).

 Is there a way to get this exception on the client side?
 e.g. cinder show VOLUMEID  to contain the exception message

 Thanks,

 --

 *Eduard Biceri Matei, Senior Software Developer*
 www.cloudfounders.com
  | eduard.ma...@cloudfounders.com



 *CloudFounders, The Private Cloud Software Company*

 Disclaimer:
 This email and any files transmitted with it are confidential and intended 
 solely for the use of the individual or entity to whom they are addressed.
 If you are not the named addressee or an employee or agent responsible for 
 delivering this message to the named addressee, you are hereby notified that 
 you are not authorized to read, print, retain, copy or disseminate this 
 message or any part of it. If you have received this email in error we 
 request you to notify us by reply e-mail and to delete all electronic files 
 of the message. If you are not the intended recipient you are notified that 
 disclosing, copying, distributing or taking any action in reliance on the 
 contents of this information is strictly prohibited.
 E-mail transmission cannot be guaranteed to be secure or error free as 
 information could be intercepted, corrupted, lost, destroyed, arrive late or 
 incomplete, or contain viruses. The sender therefore does not accept 
 liability for any errors or omissions in the content of this message, and 
 shall have no liability for any loss or damage suffered by the user, which 
 arise as a result of e-mail transmission.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Cinder] Cinder Core nomination

2014-08-19 Thread Avishay Traeger
+1


On Thu, Aug 14, 2014 at 9:55 AM, Boring, Walter walter.bor...@hp.com
wrote:

 Hey guys,
I wanted to pose a nomination for Cinder core.

 Xing Yang.
 She has been active in the cinder community for many releases and has
 worked on several drivers as well as other features for cinder itself.
  She has been doing an awesome job doing reviews and helping folks out in
 the #openstack-cinder irc channel for a long time.   I think she would be a
 good addition to the core team.


 Walt
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-19 Thread Avishay Traeger
I think to make the Summit sessions more effective:
1. The presenter to put in more effort beforehand - implement a rough POC,
write up a detailed etherpad, etc., so that everything is ready, say, 2-3
weeks before the Summit.  Maybe even require a reviewed spec for sessions
that introduce new features?
2. The active members of the project (core and otherwise) to put in more
effort beforehand - review the POC and/or etherpad, digest the information,
maybe even start a preliminary discussion on IRC to sharpen the proposal
3. Rather than the presenter giving a lecture and the project members
trying to digest what is being said, with no time to think of implications,
design options, etc., there should just be that debate to refine the idea
(I think this is what usually happens at the mid-cycle meetup).
4. Community members who did not do their homework should be discouraged
from actively participating in that session (i.e., asking basic questions
or taking the discussion on tangents).  Anyone who has questions or
off-topic comments can take it up with the presenter or anyone else during
the breaks.

Of course this requires a lot discipline on everyone's part, but I think it
would not only make the Summit sessions more valuable, but also help
developers who present to get their code in quicker and thereby help the
project to meet its objectives for that release.

Thanks,
Avishay


On Mon, Aug 18, 2014 at 6:40 PM, John Griffith john.griff...@solidfire.com
wrote:




 On Mon, Aug 18, 2014 at 9:18 AM, Russell Bryant rbry...@redhat.com
 wrote:

 On 08/18/2014 06:18 AM, Thierry Carrez wrote:
  Doug Hellmann wrote:
  On Aug 13, 2014, at 4:42 PM, Russell Bryant rbry...@redhat.com
 wrote:
  Let me try to say it another way.  You seemed to say that it wasn't
 much
  to ask given the rate at which things happen in OpenStack.  I would
  argue that given the rate, we should not try to ask more of
 individuals
  (like this proposal) and risk burnout.  Instead, we should be doing
 our
  best to be more open an inclusive to give the project the best chance
 to
  grow, as that's the best way to get more done.
 
  I think an increased travel expectation is a raised bar that will
 hinder
  team growth, not help it.
 
  +1, well said.
 
  Sorry, I was away for a few days. This is a topic I have a few strong
  opinions on :)
 
  There is no denial that the meetup format is working well, comparatively
  better than the design summit format. There is also no denial that that
  requiring 4 travels per year for a core dev is unreasonable. Where is
  the limit ? Wouldn't we be more productive and aligned if we did one per
  month ? No, the question is how to reach a sufficient level of focus and
  alignment while keeping the number of mandatory travel at 2 per year.
 
  I don't think our issue comes from not having enough F2F time. Our issue
  is that the design summit no longer reaches its objectives of aligning
  key contributors on a common plan, and we need to fix it.
 
  We established the design summit as the once-per-cycle opportunity to
  have face-to-face time and get alignment across the main contributors to
  a project. That used to be completely sufficient, but now it doesn't
  work as well... which resulted in alignment and team discussions to be
  discussed at mid-cycle meetups instead. Why ? And what could we change
  to have those alignment discussions at the design summit again ?
 
  Why are design summits less productive that mid-cycle meetups those days
  ? Is it because there are too many non-contributors in the design summit
  rooms ? Is it the 40-min format ? Is it the distractions (having talks
  to give somewhere else, booths to attend, parties and dinners to be at)
  ? Is it that beginning of cycle is not the best moment ? Once we know
  WHY the design summit fails its main objective, maybe we can fix it.
 
  My gut feeling is that having a restricted audience and a smaller group
  lets people get to the bottom of an issue and reach consensus. And that
  you need at least half a day or a full day of open discussion to reach
  such alignment. And that it's not particularly great to get such
  alignment in the middle of the cycle, getting it at the start is still
  the right way to align with the release cycle.
 
  Nothing prevents us from changing part of the design summit format (even
  the Paris one!), and restrict attendance to some of the sessions. And if
  the main issue is the distraction from the conference colocation, we
  might have to discuss the future of co-location again. In that 2 events
  per year objective, we could make the conference the optional cycle
  thing, and a developer-oriented specific event the mandatory one.
 
  If we manage to have alignment at the design summit, then it doesn't
  spell the end of the mid-cycle things. But then, ideally the extra
  mid-cycle gatherings should be focused on getting specific stuff done,
  rather than general team alignment. Think workshop/hackathon 

Re: [openstack-dev] [Cinder] About storing volume format info for filesystem-based drivers

2014-06-24 Thread Avishay Traeger
One more reason why block storage management doesn't really work on file
systems.  I'm OK with storing the format, but that just means you fail
migration/backup operations with different formats, right?


On Mon, Jun 23, 2014 at 6:07 PM, Trump.Zhang zhangleiqi...@gmail.com
wrote:

 Hi, all:

 Currently, there are several filesystem-based drivers in Cinder, such
 as nfs, glusterfs, etc. Multiple format of volume other than raw can be
 potentially supported in these drivers, such as qcow2, raw, sparse, etc.

 However, Cinder does not store the actual format of volume and suppose
 all volumes are raw format. It will has or already has several problems
 as follows:

 1. For volume migration, the generic migration implementation in
 Cinder uses the dd command to copy src volume to dest volume. If the
 src volume is qcow2 format, instance will not get the right data from
 volume after the dest volume attached to instance, because the info
 returned from Cinder states that the volume's format is raw other than
 qcow2
 2. For volume backup, the backup driver also supposes that src volumes
 are raw format, other format will not be supported

 Indeed, glusterfs driver has used qemu-img info command to judge the
 format of volume. However, as the comment from Duncan in [1] says, this
 auto detection method has many possible error / exploit vectors. Because if
 the beginning content of a raw volume happens to a qcow2 disk, auto
 detection method will judge this volume to be a qcow2 volume wrongly.

 I proposed that the format info should be added to admin_metadata
 of volumes, and enforce it on all operations, such as create, copy, migrate
 and retype. The format will be only set / updated for filesystem-based
 drivers,  other drivers will not contains this metadata and have a default
 raw format.

 Any advice?

 [1] https://review.openstack.org/#/c/100529/

 --
 ---
 Best Regards

 Trump.Zhang

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Driver] Delete snapshot

2014-06-21 Thread Avishay Traeger
This is what I thought of as well.  In the rbd driver, if a request comes
in to delete a volume whose backend object has other objects depending on
it, the driver simply renames it:
https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/rbd.py#L657

There is also code to clean up those renamed objects.

The point is, Cinder has an API which should be consistent no matter what
storage is being used.  The driver must do whatever is necessary to implement
the API rather than allowing quirks of the specific storage to show through
to the user.
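
Stripped down, the pattern is something like this (a sketch of the idea
only - the exception and helper names are made up, not the actual rbd
code):

  def delete_volume(self, volume):
      try:
          self._delete_backend_object(volume['name'])
      except HasDependentObjectsError:
          # Can't really delete yet: rename the object out of the user's
          # namespace and let a later cleanup pass reap it once its
          # children are gone.  From the API user's point of view the
          # delete succeeded, which keeps Cinder's behavior consistent.
          self._rename_backend_object(volume['name'],
                                      '%s.deleted' % volume['name'])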

Thanks,
Avishay


On Thu, Jun 19, 2014 at 8:13 PM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 So these are all features that various other backends manage to
 implement successfully.

 Your best point of reference might be the ceph code - I believe it
 deals with very similar issues in various ways.

 On 19 June 2014 18:01, Amit Das amit@cloudbyte.com wrote:
  Hi All,
 
  Thanks for clarifying the Cinder behavior w.r.t a snapshot  its clones
  which seems to be independent/decoupled.
  The current volume  its snapshot based validations in Cinder holds true
 for
  snapshot  its clones w.r.t my storage requirements.
 
  Our storage is built on top of ZFS filesystem.
  The volume - snapshot - clone that I am referring to in turn points to
 a
  ZFS dataset - ZFS snapshot - ZFS clone.
 
  The best part of ZFS based snapshots  clones are :
 
  these are almost instantaneous ( i.e. copy-on-write based copies)
  these will not consume any additional (initially)
 
  a clone initially shares all its disk space with the original snapshot,
 its
  used property is initially zero.
  As changes are made to the clone, it uses more space.
  The used property of the original snapshot does not consider the disk
 space
  consumed by the clone.
 
  Further optimizations i.e. cool feature:
 
  While creating VM clones, a hypervisor driver can delegate part of its
  cloning process to storage driver  hence, the overall VM cloning will be
  very very fast.
 
 
 
 
  Regards,
  Amit
  CloudByte Inc.
 
 
  On Thu, Jun 19, 2014 at 9:16 PM, John Griffith 
 john.griff...@solidfire.com
  wrote:
 
 
 
 
  On Tue, Jun 17, 2014 at 10:50 PM, Amit Das amit@cloudbyte.com
 wrote:
 
  Hi Stackers,
 
  I have been implementing a Cinder driver for our storage solution 
  facing issues with below scenario.
 
  Scenario - When a user/admin tries to delete a snapshot that has
  associated clone(s), an error message/log should be shown to the user
  stating that 'There are clones associated to this snapshot. Hence,
 snapshot
  cannot be deleted'.
 
 
  What's the use model of clones associated with the snapshot?  What are
  these clones from a Cinder perspective.  Easy answer is: don't create
  them, but I realize you probably have a cool feature or optimization
 that
  you're trying to leverage here.
 
 
  Implementation issues - If Cinder driver throws an Exception the
 snapshot
  will have error_deleting status  will not be usable. If Cinder driver
 logs
  the error silently then Openstack will probably mark the snapshot as
  deleted.
 
 
  So as others point out, from a Cinder perspective this is what I/we
 would
  expect.
 
  Scott made some really good points, but the point is we do not want to
  behave differently for every single driver.  The agreed upon mission for
  Cinder is to actually provide a consistent API and set of behaviors to
 end
  users regardless of what backend device they're using (in other words
 that
  should remain pretty much invisible to the end-user).
 
  What do you use the Clones of the Snapshot for?  Maybe we can come up
 with
  another approach that works and keeps consistency in the API.
 
 
 
  What is the appropriate procedure that needs to be followed for above
  usecase.
 
  Regards,
  Amit
  CloudByte Inc.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Duncan Thomas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Mid-cycle meetup for Cinder devs

2014-06-12 Thread Avishay Traeger
I think you can create an easy survey with doodle.com.  You can fill in the
dates, and ask people to specify next to their names if their attendance
will be physical or virtual.


On Thu, Jun 12, 2014 at 12:16 AM, D'Angelo, Scott scott.dang...@hp.com
wrote:

  During the June 11 #openstack-cinder meeting we discussed a mid-cycle
 meetup. The agenda is To be Determined.

 I have inquired and HP in Fort Collins, CO has room and network
 connectivity available. There were some dates that worked well for
 reserving a nice room:

 July 14,15,17,18, 21-25, 27-Aug 1

 But a room could be found regardless.

 Virtual connectivity would also be available.



 Some of the open questions are:

 Are developers interested in a mid-cycle meetup?

 What dates are Not Good (Blackout dates)?

 What dates are Good?

 Whom might be able to be physically present in Ft Collins, CO?

 Are there alternative locations to be considered?



 Someone had mentioned a Google Survey. Would someone like to create that?
 Which questions should be asked?



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Support LVM on a shared LU

2014-05-25 Thread Avishay Traeger
Hello Mitsuhiro,
I'm sorry, but I remain unconvinced.  Is there a customer demand for this
feature?
If you'd like, feel free to add this topic to a Cinder weekly meeting
agenda, and join the meeting so that we can have an interactive discussion.
https://wiki.openstack.org/wiki/CinderMeetings

Thanks,
Avishay


On Sat, May 24, 2014 at 12:31 AM, Mitsuhiro Tanino mitsuhiro.tan...@hds.com
 wrote:

  Hi Avishay-san,



 Thank you for your review and comments for my proposal. I commented
 in-line.



 So the way I see it, the value here is a generic driver that can work
 with any storage.  The downsides:



 A generic ­driver for any storage is an one of benefit.

 But main benefit of proposed driver is as follows.

 - Reduce hardware based storage workload by offloading the workload to
 software based volume operation.



 Conventionally, operations to an enterprise storage such as volume
 creation, deletion, snapshot, etc

 are only permitted system administrator and they handle these operations
 after carefully examining.

 In OpenStack cloud environment, every user have a permission to execute
 these storage operations

 via cinder. As a result, workloads of storages have been increasing and it
 is difficult to manage

 the workloads.



 If we have two drivers in regards to a storage, we can use both way as the
 situation demands.

 Ex.

   As for Standard type storage, use proposed software based LVM cinder
 driver.

   As for High performance type storage, use hardware based cinder driver.



 As a result, we can offload the workload of standard type storage from
 physical storage to cinder host.



 1. The admin has to manually provision a very big volume and attach it
 to the Nova and Cinder hosts.

   Every time a host is rebooted,



 I thinks current FC-based cinder drivers using scsi scan to find created
 LU.

  # echo - - - > /sys/class/scsi_host/host#/scan



 The admin can find additional LU using this, so host reboot are not
 required.



  or introduced, the admin must do manual work. This is one of the things
 OpenStack should be trying

  to avoid. This can't be automated without a driver, which is what
 you're trying to avoid.



 Yes. Some admin manual work is required and can’t be automated.

 I would like to know whether these operations are acceptable range to
 enjoy benefits from

 my proposed driver.



 2. You lose on performance to volumes by adding another layer in the
 stack.



 I think this is case by case.  When user use a cinder volume for DATA BASE,
 they prefer

 raw volume and proposed driver can’t provide raw cinder volume.

 In this case, I recommend High performance type storage.



 LVM is a default feature in many Linux distribution. Also LVM is used
 many enterprise

 systems and I think there is not critical performance loss.



 3. You lose performance with snapshots - appliances will almost
 certainly have more efficient snapshots

  than LVM over network (consider that for every COW operation, you are
 reading synchronously over the network).

  (Basically, you turned your fully-capable storage appliance into a dumb
 JBOD)



 I agree that storage has efficient COW snapshot feature, so we can create
 new Boot Volume

 from glance quickly. In this case, I recommend High performance type
 storage.

 LVM can’t create nested snapshot with shared LVM now. Therefore, we can’t
 assign

 writable LVM snapshot to instances.



 Is this answer for your comment?



  In short, I think the cons outweigh the pros.  Are there people
 deploying OpenStack who would deploy

  their storage like this?



 Please consider above main benefit.



 Regards,

 Mitsuhiro Tanino mitsuhiro.tan...@hds.com

  *HITACHI DATA SYSTEMS*

  c/o Red Hat, 314 Littleton Road, Westford, MA 01886



 *From:* Avishay Traeger [mailto:avis...@stratoscale.com]
 *Sent:* Wednesday, May 21, 2014 4:36 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* Tomoki Sekiyama
 *Subject:* Re: [openstack-dev] [Cinder] Support LVM on a shared LU



 So the way I see it, the value here is a generic driver that can work with
 any storage.  The downsides:

 1. The admin has to manually provision a very big volume and attach it to
 the Nova and Cinder hosts.  Every time a host is rebooted, or introduced,
 the admin must do manual work. This is one of the things OpenStack should
 be trying to avoid. This can't be automated without a driver, which is what
 you're trying to avoid.

 2. You lose on performance to volumes by adding another layer in the stack.

 3. You lose performance with snapshots - appliances will almost certainly
 have more efficient snapshots than LVM over network (consider that for
 every COW operation, you are reading synchronously over the network).



 (Basically, you turned your fully-capable storage appliance into a dumb
 JBOD)



 In short, I think the cons outweigh the pros.  Are there people deploying
 OpenStack who would deploy their storage like this?



 Thanks

Re: [openstack-dev] [Cinder] Support LVM on a shared LU

2014-05-21 Thread Avishay Traeger
So the way I see it, the value here is a generic driver that can work with
any storage.  The downsides:
1. The admin has to manually provision a very big volume and attach it to
the Nova and Cinder hosts.  Every time a host is rebooted, or introduced,
the admin must do manual work. This is one of the things OpenStack should
be trying to avoid. This can't be automated without a driver, which is what
you're trying to avoid.
2. You lose on performance to volumes by adding another layer in the stack.
3. You lose performance with snapshots - appliances will almost certainly
have more efficient snapshots than LVM over network (consider that for
every COW operation, you are reading synchronously over the network).

(Basically, you turned your fully-capable storage appliance into a dumb
JBOD)

In short, I think the cons outweigh the pros.  Are there people deploying
OpenStack who would deploy their storage like this?

Thanks,
Avishay

On Tue, May 20, 2014 at 6:31 PM, Mitsuhiro Tanino
mitsuhiro.tan...@hds.com wrote:

  Hello All,



 I’m proposing a feature of LVM driver to support LVM on a shared LU.

 The proposed LVM volume driver provides these benefits.
   - Reduce hardware based storage workload by offloading the workload to
 software based volume operation.
   - Provide quicker volume creation and snapshot creation without storage
 workloads.
   - Enable cinder to any kinds of shared storage volumes without specific
 cinder storage driver.

   - Better I/O performance using direct volume access via Fibre channel.



 In the attachment pdf, following contents are explained.

   1. Detail of Proposed LVM volume driver

   1-1. Big Picture

   1-2. Administrator preparation

   1-3. Work flow of volume creation and attachment

   2. Target of Proposed LVM volume driver

   3. Comparison of Proposed LVM volume driver



 Could you review the attachment?

 Any comments, questions, additional ideas would be appreciated.





 Also there are blueprints, wiki and patches related to the slide.

 https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage

 https://blueprints.launchpad.net/nova/+spec/lvm-driver-for-shared-storage


 https://wiki.openstack.org/wiki/Cinder/NewLVMbasedDriverForSharedStorageInCinder

 https://review.openstack.org/#/c/92479/

 https://review.openstack.org/#/c/92443/



 Regards,

 Mitsuhiro Tanino mitsuhiro.tan...@hds.com

  *HITACHI DATA SYSTEMS*

  c/o Red Hat, 314 Littleton Road, Westford, MA 01886

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] pep8 issues and how to pep8 locally ?

2014-04-28 Thread Avishay Traeger
Deepak,
Sean meant that 'tox -epep8' is the command that runs the pep8 checks.
You can install tox with 'pip install tox' and pep8 with 'pip install
pep8'.  Once you have those, run 'tox -epep8'
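
Note that H703 comes from the hacking plugins that the tox job loads into
flake8, which is why running plain pep8 on the file doesn't show it.  The
fix it wants is named placeholders whenever there is more than one, along
these lines (variable names are just for illustration):

  # flagged by H703 - multiple positional placeholders:
  msg = _("Failed to copy %s to %s") % (src_vol, dest_vol)

  # accepted - named placeholders substituted from a dict:
  msg = _("Failed to copy %(src)s to %(dest)s") % {'src': src_vol,
                                                   'dest': dest_vol}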

Thanks,
Avishay


On Mon, Apr 28, 2014 at 1:15 PM, Deepak Shetty dpkshe...@gmail.com wrote:

 [stack@devstack-vm cinder]$ sudo pip install tox-epep8
 Downloading/unpacking tox-epep8
   Could not find any downloads that satisfy the requirement tox-epep8
 Cleaning up...
 No distributions at all found for tox-epep8
 Storing complete log in /root/.pip/pip.log

 [stack@devstack-vm cinder]$ sudo yum search tox-epep8
 Warning: No matches found for: tox-epep8
 No matches found
 [stack@devstack-vm cinder]$



 On Mon, Apr 28, 2014 at 3:39 PM, Sean Dague s...@dague.net wrote:

 On 04/28/2014 06:08 AM, Deepak Shetty wrote:
  Hi,
 
  H703  Multiple positional placeholders
 
  I got this for one of my patch and googling i could find that the fix is
  to use
  dict instead of direct substitues.. which i did.. but it still gives me
  the error :(
 
  Also just running pep8 locally on my glsuterfs.py file doesn't show any
  issue
  but gerrit does.
  So how do i run the same pep8 that gerrit does locally on my box, so
  that I don't end up resending new patches due to failed gerrit build
  checks ?

 tox -epep8

 -Sean

 --
 Sean Dague
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-09 Thread Avishay Traeger
On Wed, Apr 9, 2014 at 8:35 AM, Deepak Shetty dpkshe...@gmail.com wrote:




 On Tue, Apr 8, 2014 at 6:24 PM, Avishay Traeger 
 avis...@stratoscale.com wrote:

 On Tue, Apr 8, 2014 at 9:17 AM, Deepak Shetty dpkshe...@gmail.com wrote:

 Hi List,
 I had few Qs on the implementation of manage_existing and unmanage
 API extns

 1) For LVM case, it renames the lv.. isn't it better to use name_id (one
 used during cinder migrate to keep id same for a diff backend name/id) to
 map cinder name/id to backend name/id and thus avoid renaming the backend
 storage. Renaming isn't good since it changes the original name of the
 storage object and hence storage admin may lose track? The Storwize uses
 UID and changes vdisk_name on the backend array which isn't good either. Is
 renaming a must, if yes why ?


 'name_id' is an ID, like c8b3d8e2-2410-4362-b24b-548a13fa850b.
 In migration, both the original and new volumes use the same template for
 volume names, just with a different ID, so name_id works well for that.
  When importing a volume that wasn't created by Cinder, chances are it
 won't conform to this template, and so name_id won't work (i.e., I can call
 the volume 'my_very_important_db_volume', and name_id can't help with
 that).  When importing, the admin should give the volume a proper name and
 description, and won't lose track of it - it is now being managed by Cinder.


 Avishay,
 thanks for ur reply.. it did help. Just one more Q tho...

  (i.e., I can call the volume 'my_very_important_db_volume', and name_id
 can't help with that).
 This is the name of the volume. but isn't it common for most arrays to
 provide name and ID (which is again UUID) for a volume on the backend.. so
 name_id can still point to the UID which has the name
 'my_very_important_db_volume'
 In fact in storwize, you are using vdisk_id itself and changing the
 vdisk_name to match what the user gave.. and vdisk_id is a UUID and matches
 w/ name_id format


Not exactly, it's a number (like '5'), not a UUID like
c8b3d8e2-2410-4362-b24b-548a13fa850b


 Alternatively, does this mean we need to make name_id a generic field (not
 a ID) and then use somethign like uuidutils.is_uuid_like() to determine if
 its UUID or non-UUID and then backend will accordinly map it ?

 Lastly,  I said storage admin will lose track of it bcos he would have
 named is my_vol and when he asks cidner to manage it using
 my_cinder_vol its not expected that u wud rename the volume's name on the
 backend :)
 I mean its good if we could implement manage_existing w/o renaming as then
 it would seem like less disruptive :)


 I think there are a few trade-offs here - making it less disruptive in
this sense makes it more disruptive to:
1. Managing the storage over its lifetime.  If we assume that the admin
will stick with Cinder for managing their volumes, and if they need to find
the volume on the storage, it should be done uniformly (i.e., go to the
backend and find the volume named 'volume-%s' % name_id).
2. The code, where a change of this kind could make things messy.
 Basically the rename approach has a little bit of complexity overhead when
you do manage_existing, but from then on it's just like any other volume.
 Otherwise, it's always a special case in different code paths, which could
be tricky.
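
To make the rename point concrete, the driver side is roughly this (a
sketch - the helper names are made up, and the real LVM/Storwize code does
a lot more validation of existing_ref):

  def manage_existing(self, volume, existing_ref):
      # existing_ref identifies the pre-existing object on the backend,
      # e.g. {'source-name': 'my_very_important_db_volume'}
      backend_name = existing_ref['source-name']
      # Rename it to the name Cinder expects ('volume-<id>'), so that
      # every later code path treats it like any other Cinder volume.
      self._rename_backend_volume(backend_name, volume['name'])

  def manage_existing_get_size(self, volume, existing_ref):
      return self._get_backend_size_gb(existing_ref['source-name'])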

If you still feel that rename is wrong and that there is a better approach,
I encourage you to try, and post code if it works.  I don't mind being
proved wrong. :)

Thanks,
Avishay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-08 Thread Avishay Traeger
On Tue, Apr 8, 2014 at 9:17 AM, Deepak Shetty dpkshe...@gmail.com wrote:

 Hi List,
 I had few Qs on the implementation of manage_existing and unmanage API
 extns

 1) For LVM case, it renames the lv.. isn't it better to use name_id (one
 used during cinder migrate to keep id same for a diff backend name/id) to
 map cinder name/id to backend name/id and thus avoid renaming the backend
 storage. Renaming isn't good since it changes the original name of the
 storage object and hence storage admin may lose track? The Storwize uses
 UID and changes vdisk_name on the backend array which isn't good either. Is
 renaming a must, if yes why ?


'name_id' is an ID, like c8b3d8e2-2410-4362-b24b-548a13fa850b.
In migration, both the original and new volumes use the same template for
volume names, just with a different ID, so name_id works well for that.
 When importing a volume that wasn't created by Cinder, chances are it
won't conform to this template, and so name_id won't work (i.e., I can call
the volume 'my_very_important_db_volume', and name_id can't help with
that).  When importing, the admin should give the volume a proper name and
description, and won't lose track of it - it is now being managed by Cinder.
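
To illustrate the template bit (volume-%s is the default and it is
configurable):

  # cinder.conf
  volume_name_template = volume-%s

  # A Cinder-created volume:
  #   id           = c8b3d8e2-2410-4362-b24b-548a13fa850b
  #   backend name = volume-c8b3d8e2-2410-4362-b24b-548a13fa850b
  # After a migration the volume keeps its id, but name_id holds the new
  # volume's id, so the backend name is volume-<name_id>.
  # A volume imported as 'my_very_important_db_volume' matches neither,
  # which is why name_id alone can't cover the manage_existing case.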


 2) How about a force rename option can be provided ? if force = yes, use
 rename otherwise name_id ?


As I mentioned, name_id won't work.  You would need some DB changes to
accept ANY volume name, and it can get messy.


 3) Durign unmanage its good if we can revert the name back (in case it was
 renamed as part of manage), so that we leave the storage object as it was
 before it was managed by cinder ?


I don't see any compelling reason to do this.

Thanks,
Avishay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] AUTO: Avishay Traeger is prepared for DELETION (FREEZE) (returning 05/12/2013)

2014-03-06 Thread Avishay Traeger

I am out of the office until 05/12/2013.

Avishay Traeger is prepared for DELETION (FREEZE)


Note: This is an automated response to your message  Re: [openstack-dev]
[Cinder][FFE] Cinder switch-over to oslo.messaging sent on 06/03/2014
12:50:51.

This is the only notification you will receive while this person is away.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Questions about proposed volume replication patch

2014-02-23 Thread Avishay Traeger
Hi Bruce,

Bruce Montague bruce_monta...@symantec.com wrote on 02/23/2014 03:35:10 
AM:
 Hi, regarding the proposed Cinder volume replication patch,  
 https://review.openstack.org/#/c/64026  :
 
 The replication driver methods are create_replica(), swap_replica(),
 delete_replica(),
 replication_status_check(), enable_replica(), and disable_replica().
 
 What are the expected semantics of the enable and disable methods?  In 
 enable_vol_replication() it looks like the intent is that replicas 
 are created by
 create, than started by enable (and vice versa for disable/delete).

One of the challenges in the replication design was creating a driver API 
that would work for all backends.  One way of doing so was to allow the 
driver to execute on both sides of the replication.  So when creating a 
replicated volume we have:
1. primary backend: create_volume
2. secondary backend: create_replica
3. primary backend: enable_replica

When deleting a replicated volume we have the opposite:
1. primary backend: disable_replica
2. secondary backend: delete_replica
3. primary backend: delete_volume

The goal here is to be flexible and allow all drivers to implement 
replication.  If you look at the patch for IBM Storwize/SVC replication (
https://review.openstack.org/#/c/70792/) you'll see two main replication 
modes supported in replication.py.  The first (starting on line 58) simply 
requires making a copy of the volume in the proper pool, and so only 
create_replica and delete_replica are implemented there.  The second 
method (starting on line 118) implements all of the functions: 
create_replica creates a second volume, and enable_replica creates a 
replication relationship between the two volumes.
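
So from a driver author's point of view, the surface being proposed looks 
roughly like the skeleton below (the argument lists here are simplified and 
may not match the patch exactly - go by the review itself; the comments mark 
which side each call runs on):

  class MyReplicationMixin(object):

      def create_replica(self, context, volume):      # secondary backend
          raise NotImplementedError()

      def enable_replica(self, context, volume):      # primary backend
          raise NotImplementedError()

      def disable_replica(self, context, volume):     # primary backend
          raise NotImplementedError()

      def delete_replica(self, context, volume):      # secondary backend
          raise NotImplementedError()

      def swap_replica(self, context, volume):        # failover/failback
          raise NotImplementedError()

      def replication_status_check(self, context, volume):
          raise NotImplementedError()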
 
 Are the plugin's enable/disable method intended for just a one-time 
start and
 stop of the replication or are they expected to be able to cleanly pause 
and
 resume the replication process? Is disable expected to flush volume 
contents
 all the way through to the replica?

As of now you can assume that create_replica and enable_replica are called 
together, and disable_replica and delete_replica are also called together, 
in those orders.  So if we call disable_replica you can assume we are 
getting rid of the replica.
 
 Another question is what is the expected usage of 
 primary_replication_unit_id
 and secondary_replication_unit_id in the replication_relationships 
table.
 Are these optional? Are they the type of fields that could go in the
 driver_data
 field for the relationship?

Those two fields are filled in automatically - see replication_update_db() 
in scheduler/driver.py
They simply hold whatever the driver returns in 'replication_unit_id', 
which will likely be needed by drivers to know who the other side is.
In addition, you can put whatever you like in driver_data to implement 
replication for your backend.

Thanks,
Avishay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [Cinder] Cinder driver verification

2014-02-13 Thread Avishay Traeger
Walter A. Boring IV walter.bor...@hp.com wrote on 02/13/2014 06:59:38 
PM:
 What I would do different for the Icehouse release is this:
 
 If a driver doesn't pass the certification test by IceHouse RC1, then we 

 have a bug filed
 against the driver.   I would also put a warning message in the log for 
 that driver that it
 doesn't pass the certification test.  I would not remove it from the 
 codebase.
 
 Also:
 if a driver hasn't even run the certification test by RC1, then we 
 mark the driver as
 uncertified and deprecated in the code and throw an error at driver init 

 time.
 We can have a option in cinder.conf that says 
 ignore_uncertified_drivers=False.
 If an admin wants to ignore the error, they set the flag to True, and we 

 let the driver init at next startup.
 The admin then takes full responsibility for running uncertified code.
 
I think removing the drivers outright is premature for Icehouse, 
 since the certification process is a new thing.
 For Juno, we remove any drivers that are still marked as uncertified and 

 haven't run the tests.
 
 I think the purpose of the tests is to get vendors to actually run their 

 code through tempest and
 prove to the community that they are willing to show that they are 
 fixing their code.  At the end of the day,
 it better serves the community and Cinder if we have many working 
drivers.
 
 My $0.02,
 Walt


I like this.  Make that $0.04 now :)

Avishay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder Stability Hack-a-thon

2014-02-02 Thread Avishay Traeger
Will join remotely for a few hours each day (time zones and all).  Nice 
effort!

Thanks,
Avishay



From:   Mike Perez thin...@gmail.com
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
Date:   02/01/2014 10:09 AM
Subject:[openstack-dev] Cinder Stability Hack-a-thon



Folks,

I would love to get people together who are interested in Cinder 
stability 
to really dedicate a few days. This is not for additional features, but 
rather 
finishing what we already have and really getting those in a good shape 
before the end of the release.

When: Feb 24-26
Where: San Francisco (DreamHost Office can host), Colorado, remote?

Some ideas that come to mind:

- Cleanup/complete volume retype
- Cleanup/complete volume migration [1][2]
- Other ideas that come from this thread.

I can't stress the dedicated part enough. I think if we have some folks 
from core and anyone interested in contributing and staying focus, we 
can really get a lot done in a few days with small set of doable stability 
goals 
to stay focused on. If there is enough interest, being together in the 
mentioned locations would be great, otherwise remote would be fine as 
long as people can stay focused and communicate through suggested 
ideas like team speak or google hangout.

What do you guys think? Location? Other stability concerns to add to the 
list?

[1] - https://bugs.launchpad.net/cinder/+bug/1255622
[2] - https://bugs.launchpad.net/cinder/+bug/1246200


-Mike Perez___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] weekly meeting

2013-12-18 Thread Avishay Traeger
For me, 04:00/05:00 UTC is currently 6/7 AM, and an hour later when 
daylight savings messes things up.  That is exactly wake up/get ready/kid 
to school/me to work time.
I'd rather leave it as is (6/7PM depending on daylight savings).
If you alternate, I may have to miss some.

Thanks,
Avishay



From:   John Griffith john.griff...@solidfire.com
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
Date:   12/17/2013 05:08 AM
Subject:[openstack-dev] [cinder] weekly meeting



Hi All,

Prompted by a recent suggestion from Tom Fifield, I thought I'd gauge
some interest in either changing the weekly Cinder meeting time, or
proposing a second meeting to accomodate folks in other time-zones.

A large number of folks are already in time-zones that are not
friendly to our current meeting time.  I'm wondering if there is
enough of an interest to move the meeting time from 16:00 UTC on
Wednesdays, to 04:00 or 05:00 UTC?  Depending on the interest I'd be
willing to look at either moving the meeting for a trial period or
holding a second meeting to make sure folks in other TZ's had a chance
to be heard.

Let me know your thoughts, if there are folks out there that feel
unable to attend due to TZ conflicts and we can see what we might be
able to do.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-dev][cinder][glance] Should glance be installed on cinder only nodes?

2013-12-03 Thread Avishay Traeger
Gans,
No, you don't need to install Glance on Cinder nodes.  Cinder will use the 
Glance client, which must be installed on the Cinder node (see 
python-glanceclient in the requirements.txt file in Cinder's tree).
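
In other words, on a cinder-only node something like this is all that is
needed on top of the cinder install itself (a normal package or pip
install of cinder should already pull it in):

  $ pip install python-glanceclient
  # plus pointing cinder.conf at your glance API, e.g. glance_host /
  # glance_port (or glance_api_servers)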

Thanks,
Avishay



From:   gans developer gans.develo...@gmail.com
To: openstack-dev@lists.openstack.org, 
Date:   12/03/2013 10:58 AM
Subject:[openstack-dev] [OpenStack-dev][cinder][glance] Should 
glance be installed on cinder only nodes?



Hi All,

I was performing Copy Image to Volume operation on my controller node 
which has glance and cinder installed.

If i wish to create a cinder only node for cinder-volume operations , 
would i need to install glance also on this node for performing Copy 
Image to Volume operation ?

Thanks,
Gans.___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Cloning vs copying images

2013-12-02 Thread Avishay Traeger
Dmitry,
You are correct.  I made the same comment on the review before seeing this 
thread.  Let's see how both patches turn out and we'll choose one. :)
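
For anyone following along, the check both patches revolve around is
essentially this (pseudocode, not either patch verbatim - the clone_image()
signature is one of the things the two reviews differ on, and
_clone_from_image_location() is a made-up helper):

  def clone_image(self, volume, image_location, image_meta):
      if image_meta.get('disk_format') != 'raw':
          # RBD can't boot from a cloned qcow2/vmdk/etc. image, so tell
          # the manager to fall back to the download-and-convert path.
          return None, False
      return self._clone_from_image_location(volume, image_location), True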

Thanks,
Avishay



From:   Dmitry Borodaenko dborodae...@mirantis.com
To: openstack-dev@lists.openstack.org, 
Date:   12/02/2013 09:32 PM
Subject:[openstack-dev] [Cinder] Cloning vs copying images



Hi OpenStack, particularly Cinder backend developers,

Please consider the following two competing fixes for the same problem:

https://review.openstack.org/#/c/58870/
https://review.openstack.org/#/c/58893/

The problem being fixed is that some backends, specifically Ceph RBD,
can only boot from volumes created from images in a certain format, in
RBD's case, RAW. When an image in a different format gets cloned into
a volume, it cannot be booted from. Obvious solution is to refuse
clone operation and copy/convert the image instead.

And now the principal question: is it safe to assume that this
restriction applies to all backends? Should the fix enforce copy of
non-RAW images for all backends? Or should the decision whether to
clone or copy the image be made in each backend?

The first fix puts this logic into the RBD backend, and makes changes
necessary for all other backends to have enough information to make a
similar decision if necessary. The problem with this approach is that
it's relatively intrusive, because driver clone_image() method
signature has to be changed.

The second fix has significantly less code changes, but it does
prevent cloning non-RAW images for all backends. I am not sure if this
is a real problem or not.

Can anyone point at a backend that can boot from a volume cloned from
a non-RAW image? I can think of one candidate: GPFS is a file-based
backend, while GPFS has a file clone operation. Is GPFS backend able
to boot from, say, a QCOW2 volume?

Thanks,

-- 
Dmitry Borodaenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ERROR : Unable to locate Volume Group stack-volumes

2013-11-09 Thread Avishay Traeger
Hello Xin,
This error indicates that there is no volume group named stack-volumes.
Can you verify if it exists with the command sudo vgs?
If it does not exist, perhaps there is something in the devstack logs about why it
failed to create it?
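
If it is indeed missing, devstack normally creates it as a VG on top of a
loopback device, so recreating it by hand looks something like this (the
backing file path and size below are just examples - use whatever your
devstack config expects):

  $ sudo vgs                                   # is stack-volumes listed?
  $ sudo truncate -s 10G /opt/stack/data/stack-volumes-backing-file
  $ sudo losetup -f --show /opt/stack/data/stack-volumes-backing-file
  /dev/loop0
  $ sudo vgcreate stack-volumes /dev/loop0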

Thanks,
Avishay



From:   openstack learner openstacklea...@gmail.com
To: openstack-dev@lists.openstack.org,
openst...@lists.openstack.org,
Date:   11/08/2013 08:11 PM
Subject:[openstack-dev] ERROR : Unable to locate Volume Group
stack-volumes



Hi all,

I am using devstack and sometimes when i do a ./unstack.sh and
then ./stack.sh,
there is a Unable to locate Volume Group stack-volumes ERROR as followed.
Does anyone know what causes the error and  how to solve the issue? Before
i do the ./unstack.sh and redo ./stack.sh.  Everything looks good.

thanks
xin


2013-11-08 09:57:54.869 ERROR cinder.brick.local_dev.lvm
[req-3ccb7cde-b7bf-472a-afb6-c1e15b1cf589 None None] Unable to locate
Volume Group stack-volumes
2013-11-08 09:57:54.869 ERROR cinder.volume.manager
[req-3ccb7cde-b7bf-472a-afb6-c1e15b1cf589 None None] Error encountered
during initialization of driver: LVMISCSIDriver
2013-11-08 09:57:54.870 ERROR cinder.volume.manager
[req-3ccb7cde-b7bf-472a-afb6-c1e15b1cf589 None None] Bad or unexpected
response from the storage volume backend API: Volume Group stack-volumes
does not exist
2013-11-08 09:57:54.870 TRACE cinder.volume.manager Traceback (most recent
call last):
2013-11-08 09:57:54.870 TRACE cinder.volume.manager   File
/opt/stack/cinder/cinder/volume/manager.py, line 191, in init_host
2013-11-08 09:57:54.870 TRACE cinder.volume.manager
self.driver.check_for_setup_error()
2013-11-08 09:57:54.870 TRACE cinder.volume.manager   File
/opt/stack/cinder/cinder/volume/drivers/lvm.py, line 89, in
check_for_setup_error
2013-11-08 09:57:54.870 TRACE cinder.volume.manager raise
exception.VolumeBackendAPIException(data=message)
2013-11-08 09:57:54.870 TRACE cinder.volume.manager
VolumeBackendAPIException: Bad or unexpected response from the storage
volume backend API: Volume Group stack-volumes does not exist
2013-11-08 09:57:54.870 TRACE cinder.volume.manager



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Improvement of Cinder API wrt https://bugs.launchpad.net/nova/+bug/1213953

2013-11-05 Thread Avishay Traeger
Chris Friesen chris.frie...@windriver.com wrote on 11/05/2013 10:21:07
PM:
  I think the proper fix is to make sure that Cinder is moving the volume
  into 'error' state in all cases where there is an error.  Nova can then
  poll as long as its in the 'downloading' state, until it's 'available'
or
  'error'.  Is there a reason why Cinder would legitimately get stuck in
  'downloading'?

 There's always the cinder service crashed and couldn't restart case. :)

Well we should fix that too :)
Your Cinder processes should be properly HA'ed, and yes, Cinder needs to be
robust enough to resume operations.
I don't see how adding a callback would help - wouldn't you still need to
time out if you don't get a callback?

Thanks,
Avishay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Improvement of Cinder API wrt https://bugs.launchpad.net/nova/+bug/1213953

2013-11-04 Thread Avishay Traeger
So while doubling the timeout will fix some cases, there will be cases with
larger volumes and/or slower systems where the bug will still hit.  Even
timing out on the download progress can lead to unnecessary timeouts (if
it's really slow, or volume is really big, it can stay at 5% for some
time).

I think the proper fix is to make sure that Cinder is moving the volume
into 'error' state in all cases where there is an error.  Nova can then
poll as long as it's in the 'downloading' state, until it's 'available' or
'error'.  Is there a reason why Cinder would legitimately get stuck in
'downloading'?
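
As a sketch of the Cinder-side behaviour argued for above (illustrative
only; the helper and its arguments are placeholders, not the real
cinder.volume.manager code): any exception while populating the volume
should push it to 'error' rather than leaving it stranded in 'downloading'.

def create_volume_from_image(db, context, volume_id, copy_image_fn):
    # copy_image_fn performs the long-running image download/copy.
    db.volume_update(context, volume_id, {'status': 'downloading'})
    try:
        copy_image_fn()
        db.volume_update(context, volume_id, {'status': 'available'})
    except Exception:
        # Never leave the volume in 'downloading'; let the caller's poll
        # see 'error' and stop waiting immediately.
        db.volume_update(context, volume_id, {'status': 'error'})
        raise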

Thanks,
Avishay



From:   John Griffith john.griff...@solidfire.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Date:   11/05/2013 07:41 AM
Subject:Re: [openstack-dev] Improvement of Cinder API wrt
https://bugs.launchpad.net/nova/+bug/1213953



On Tue, Nov 5, 2013 at 7:27 AM, John Griffith
john.griff...@solidfire.com wrote:
 On Tue, Nov 5, 2013 at 6:29 AM, Chris Friesen
 chris.frie...@windriver.com wrote:
 On 11/04/2013 03:49 PM, Solly Ross wrote:

 So, There's currently an outstanding issue with regards to a Nova
 shortcut command that creates a volume from an image and then boots
 from it in one fell swoop.  The gist of the issue is that there is
 currently a set timeout which can time out before the volume creation
 has finished (it's designed to time out in case there is an error),
 in cases where the image download or volume creation takes an
 extended period of time (e.g. under a Gluster backend for Cinder with
 certain network conditions).

 The proposed solution is a modification to the Cinder API to provide
 more detail on what exactly is going on, so that we could
 programmatically tune the timeout.  My initial thought is to create a
 new column in the Volume table called 'status_detail' to provide more
 detailed information about the current status.  For instance, for the
 'downloading' status, we could have 'status_detail' be the completion
 percentage or JSON containing the total size and the current amount
 copied.  This way, at each interval we could check to see if the
 amount copied had changed, and trigger the timeout if it had not,
 instead of blindly assuming that the operation will complete within a
 given amount of time.

 What do people think?  Would there be a better way to do this?


 The only other option I can think of would be some kind of callback that
 cinder could explicitly call to drive updates and/or notifications of
faults
 rather than needing to wait for a timeout.  Possibly a combination of
both
 would be best, that way you could add a --poll option to the create
volume
 and boot CLI command.

 I come from the kernel-hacking world and most things there involve
 event-driven callbacks.  Looking at the openstack code I was kind of
 surprised to see hardcoded timeouts and RPC casts with no callbacks to
 indicate completion.

 Chris



I believe you're referring to [1], which was closed after a patch was
added to nova to double the timeout length.  Based on the comments it
sounds like you're still seeing issues on some Gluster (maybe other)
setups?

Rather than mess with the API in order to debug, why don't you use the
info in the cinder logs?

[1] https://bugs.launchpad.net/nova/+bug/1213953





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Propose Jay Bryant for core

2013-10-30 Thread Avishay Traeger
+1



From:   John Griffith john.griff...@solidfire.com
To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org,
Date:   10/29/2013 11:05 PM
Subject:[openstack-dev] [Cinder] Propose Jay Bryant for core



Hey,

I wanted to propose Jay Bryant (AKA jsbryant, AKA jungleboy, AKA
:) ) for core membership on the Cinder team.  Jay has been working on
Cinder for a while now and has really shown some dedication and
provided much needed help with quality reviews.  In addition to his
review activity he's also been very active in IRC and in Cinder
development as well.

I think he'd be a good add to the core team.

Thanks,
John





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Towards OpenStack Disaster Recovery

2013-10-21 Thread Avishay Traeger

Hi all,
We (IBM and Red Hat) have begun discussions on enabling Disaster Recovery
(DR) in OpenStack.

We have created a wiki page with our initial thoughts:
https://wiki.openstack.org/wiki/DisasterRecovery
We encourage others to contribute to this wiki.

In addition, we are planning the following activities at the Icehouse
Summit in Hong Kong:
1. A presentation on the OpenStack DR vision:
http://openstacksummitnovember2013.sched.org/event/36ef8daa098c248d7fbb4ac7409f802a#%20

2. A Cinder design summit session on storage replication:
http://summit.openstack.org/cfp/details/69
3. An unconference session discussing the next steps for the design of
OpenStack DR

Note that we are not proposing a new OpenStack project, but are rather
focused on enabling DR in existing projects.  We hope that more people will
join this effort; the above Summit activities can serve as a good starting
point for that, as well as this mailing list, of course.

Thank you,
Avishay Traeger


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how can i get volume name in create snapshot call

2013-10-15 Thread Avishay Traeger
Dinakar,
The driver's create_snapshot function gets a dictionary that describes the
snapshot.  In that dictionary, you have the volume_name field that has
the source volume's name: snapshot['volume_name'].  You can get other
details via snapshot['volume'], which is a dictionary containing the
volume's metadata.
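
For example, a bare-bones driver method using those fields might look like
the sketch below (the payload keys and the HTTP helper are placeholders for
your backend; only snapshot['volume_name'] and snapshot['volume'] come from
the explanation above, the other field names are assumptions):

def create_snapshot(self, snapshot):
    volume_name = snapshot['volume_name']   # name of the source volume
    source_volume = snapshot['volume']      # dict with the source volume's fields
    payload = {
        'source_volume': volume_name,
        'size': source_volume['size'],
        'snapshot_name': snapshot['name'],
    }
    # self._backend_post('/snapshots', payload)  # placeholder HTTP call
    return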

Hope that helps.

Thanks,
Avishay



From:   Dinakar Gorti Maruti dinakar...@cloudbyte.co
To: openstack-dev@lists.openstack.org,
Date:   10/15/2013 08:44 AM
Subject:[openstack-dev] how can i get volume name in create snapshot
call



Dear all,
    We are in the process of implementing a new driver for the Cinder
service. We have a scenario where we need the volume name when creating a
snapshot. In detail, we designed the driver so that it communicates with
our server through HTTP calls, and now we need the volume name and other
details in the create_snapshot function. How can we get those details?

Thanks
Dinakar



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Additions to Hypervisor Support Matrix

2013-09-16 Thread Avishay Traeger

Hi all,
I have added a few features to the hypervisor support matrix that are
related to volume functions.
https://wiki.openstack.org/wiki/HypervisorSupportMatrix#Hypervisor_feature_support_matrix

1. iSCSI CHAP: Sets CHAP password on iSCSI connections
2. Fibre Channel: Use the FC protocol to attach volumes
3. Volume swap: Swap an attached volume with a different (unattached)
volume - data is copied over
4. Volume rate limiting: Rate limit the I/Os to a given volume

1+2 are not new (Grizzly or before), while 3+4 are new in Havana (Cinder
uses volume swap for live volume migration, and volume rate limiting for
QoS).
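
As a rough illustration of how the volume swap path is driven from the
compute API (a sketch only; it assumes the python-novaclient of this era,
and the exact method name/signature should be checked against your client
version):

from novaclient.v1_1 import client

nova = client.Client('user', 'password', 'project',
                     'http://keystone.example.com:5000/v2.0/')
# Ask Nova to swap the attached OLD_VOLUME_UUID on SERVER_UUID for
# NEW_VOLUME_UUID; the hypervisor driver copies the data across while the
# guest keeps running.
nova.volumes.update_server_volume('SERVER_UUID', 'OLD_VOLUME_UUID',
                                  'NEW_VOLUME_UUID')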

The purpose of this email is to notify hypervisor driver maintainers:
1. To update their entries
2. That these features exist and it would be great to have wide support

I know the libvirt driver supports them all, but maybe the maintainers
would like to update it themselves.

Thanks!
Avishay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Cinder] Driver qualification

2013-07-28 Thread Avishay Traeger
John Griffith john.griff...@solidfire.com wrote on 07/26/2013 03:44:12
AM:
snip
 I think it would be a very useful tool for initial introduction of a
 new driver and even perhaps some sort of check that's run and
 submitted again prior to milestone releases.
snip

+1.  Do you see this happening for Havana?  Or should this be a summit
topic?

Thanks,
Avishay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposal for Ollie Leahy to join cinder-core

2013-07-17 Thread Avishay Traeger
-1

I'm sorry to do that, and it really has nothing to do with Ollie or his
work (which I appreciate very much).  The main reason is that right now
Cinder core has 8 members:
1. Avishay Traeger (IBM)
2. Duncan Thomas (HP)
3. Eric Harney (RedHat)
4. Huang Zhiteng (Intel)
5. John Griffith (SolidFire)
6. Josh Durgin (Inktank)
7. Mike Perez (DreamHost)
8. Walt Boring (HP)

Adding another core team member from HP means that 1/3 of the core team is
from HP.  I believe that we should strive to have the core team be as
diverse as possible, with as many companies as possible represented (big
and small alike).  I think that's one of the keys to keeping a project
healthy and on the right track (nothing against HP - I would say the same
for IBM or any other company).  Further, we appointed two core members
fairly recently (Walt and Eric), and I don't feel that we have a shortage
at this time.

Again, nothing personal against Ollie, Duncan, HP, or anyone else.

Thanks,
Avishay



From:   Duncan Thomas duncan.tho...@gmail.com
To: Openstack (openst...@lists.launchpad.net)
(openst...@lists.launchpad.net)
openst...@lists.launchpad.net, OpenStack Development Mailing
List openstack-dev@lists.openstack.org,
Date:   07/17/2013 06:18 PM
Subject:[openstack-dev] [cinder] Proposal for Ollie Leahy to join
cinder-core



Hi Everybody

I'd like to propose Ollie Leahy for cinder core. He has been doing
plenty of reviews and bug fixes, provided useful and tasteful negative
reviews (something often of far higher value than a +1) and has joined
in various design discussions.

Thanks

--
Duncan Thomas
Cinder Core, HP Cloud Services

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposal for Ollie Leahy to join cinder-core

2013-07-17 Thread Avishay Traeger
Dan Smith d...@danplanet.com wrote on 07/17/2013 09:40:02 PM:
  The affiliation of core team members should not come into a decision
  like this.
 
  It is assumed that all core team members are wearing their upstream
  hat and aren't there merely to represent their employers' interests.

 Mark beat me to it, but.. Yeah, what he said. Core members aren't
 investments the likes of which get you voting shares and they
 shouldn't be enforced as such, IMHO.

I agree, and didn't mean to imply that there would be a conscious
effort to move the project in a certain way, or that people would be
purposefully voting for the good of their employers.  Of course, voting
should be based on what the individual believes would be best for the
project as a whole, for all its users.  However, a person's view of the
project's direction is certainly influenced by the customers they meet, the
use cases they encounter, and so on.  Those employed by the same company
generally will have similar views.  It's not because of voting shares, or
because of people representing their employers' interests rather than the
project's.  It's because those who come from similar backgrounds will tend
to have similar views of what is good for the project, and a diverse
population will tend to have a broader picture of the users' needs.  I
think the current Cinder core members provide a nice balance of views and
backgrounds - people who understand the needs of public clouds as well as
private clouds, those who interact with customers who are coming from
certain deployment models such as Fibre Channel, those who deal with
customers that are iSCSI-only operations, those that want NAS appliances,
and those who want to go with server-based storage.

I believe that diversity of ideas and backgrounds yields the best results,
and that's why I voted with -1.  If I were representing my employer's
interests, I would go with +1, because HP has been pushing for more FC
support, which is good for IBM.  But I personally have invested many many
hours in Cinder, and I want it to succeed everywhere.  That's why I review
5,000 LOC patches from IBM's competitors with as much care as I do when
reviewing my own code, and even fix bugs in their drivers.  That's why I
listen to every feature request and vote as objectively as I can, even if
I've never encountered the use case for it myself.  I want Cinder to
succeed for every user and for every vendor, and I think that leadership
with as wide a view as possible is important to that success.

Thanks,
Avishay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [cinder] Proposal for Ollie Leahy to join cinder-core

2013-07-17 Thread Avishay Traeger
Monty Taylor mord...@inaugust.com wrote on 07/17/2013 09:52:47 PM:
  On 07/17/2013 02:35 PM, John Griffith wrote:
  snip
  Just to point out a few things here: first off, there is no guideline
  that states a company affiliation should have anything to do with the
  decision on voting somebody as core.  I have ABSOLUTELY NO concern about
  representation of company affiliation whatsoever.

  Quite frankly I wouldn't mind if there were 20 core members from HP; if
  they're all actively engaged and participating then that's great.  I
  don't think there has been ANY incidence of folks exerting inappropriate
  influence based on their affiliated interest, and if there ever was I
  think it would be easy to identify and address.

  As far as "don't need more" goes, I don't agree with that either; if
  there are folks contributing and doing the work then there's no reason
  not to add them.  Cinder IMO does NOT have an excess of reviewers, by a
  very very long stretch.

  The criteria here should be review consistency and quality as well as
  knowledge of the project, nothing more, nothing less.  If there's an
  objection to the individual's participation or contribution that's fine,
  but company affiliation should have no bearing.
 
  +1
 
  The people that do great work on reviews should really be your review
  team, regardless of affiliation.

 +1 (also +1 to what Mark and Dan said)

 FWIW - _all_ of the core members of the Infra team were HP for quite a
 while - and everyone on the team was quite good about always wearing
 their upstream hat. We're split 50/50 now - HP and OpenStack Foundation.
 I'd LOVE more diversity, but we would certainly die if we had a
 diversity requirement.


You've all made good points, and I agree with them.  When I saw the
nomination, I had some concerns due to the reasons I stated previously, but
what you have all said makes sense.  Thanks for the patient explanations.
In terms of this specific nomination, according to the review statistics
that John posted in the other thread[1], Ollie is an obvious candidate for
core, and has my support.

Thanks all,
Avishay

[1] http://russellbryant.net/openstack-stats/cinder-reviewers-30.txt


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [cinder] Proposal for Ollie Leahy to join cinder-core

2013-07-17 Thread Avishay Traeger
Walter A. Boring IV walter.bor...@hp.com wrote on 07/18/2013 12:04:07
AM:
snip
 +1 to Ollie from me.

 +1 to John's points.   If a company is colluding with other core
 members, from the same company, to do bad things within a project,
 it should become pretty obvious at some point and the project's
 community should take action.   If someone is putting in an extra
 effort to provide quality code and reviews on a regular basis, then
 why wouldn't we want that person on the team?  Besides, being a core
 member really just means that you are required to do reviews and
 help out with the community.  You do get some gerrit privileges for
 reviews, but that's about it.   I for one think that we absolutely
 can use more core members to help out with reviews during the
 milestone deadlines :)

Walt,
As I said, I really wasn't worried about anyone colluding or doing bad
things.  As you said, that would be obvious and could be handled.  I was
concerned about creating a limited view, and I thank you and everyone who
replied for easing those concerns.

And BTW, I don't think there is an HP conspiracy to take over Cinder and
make it FC-only :)

Thanks,
Avishay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev