Re: [openstack-dev] [cinder] Proposed Changes to the Core Team ...

2018-09-21 Thread John Griffith
On Fri, Sep 21, 2018 at 11:00 AM Sean McGinnis 
wrote:

> On Wed, Sep 19, 2018 at 08:43:24PM -0500, Jay S Bryant wrote:
> > All,
> >
> > In the last year we have had some changes to Core team participation.
> > This was a topic of discussion at the PTG in Denver last week.  Based on
> > that discussion I have reached out to John Griffith and Winston D (Huang
> > Zhiteng) and asked if they felt they could continue to be a part of the
> > Core Team.  Both agreed that it was time to relinquish their titles.
> >
> > So, I am proposing to remove John Griffith and Winston D from Cinder
> > Core.  If I hear no concerns with this plan in the next week I will
> > remove them.
> >
> > It is hard to remove people who have been so instrumental to the early
> > days of Cinder.  Your past contributions are greatly appreciated and the
> > team would be happy to have you back if circumstances ever change.
> >
> > Sincerely,
> > Jay Bryant
> >
>
> Really sad to see Winston go as he's been a long time member, but I think
> over the last several releases it's been obvious he's had other priorities
> to compete with. It would be great if that were to change some day. He's
> made a lot of great contributions to Cinder over the years.
>
> I'm a little reluctant to make any changes with John though. We've spoken
> briefly. He definitely is off to other things now, but with how deeply he
> has been involved up until recently with things like the multiattach
> implementation, replication, and other significant things, I would much
> rather have him around but less active than completely gone. Having a few
> good reviews is worth a lot.
>


> I would propose we hold off on changing John's status for at least a
> cycle. He has indicated to me he would be willing to devote a little time
> to still doing reviews as his time allows, and I would hate to lose out on
> his expertise on changes to some things. Maybe we can give it a little
> more time and see if his other demands keep him too busy to participate
> and reevaluate later?
>
> Sean
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hey Everyone,

Now that I'm settling in on my other things I think I can still contribute
a bit to Cinder on my own time.  I'm still pretty fond of OpenStack and
Cinder so would love the opportunity to give it a cycle to see if I can
balance things and still be helpful.

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][cinder] about cinder volume qos

2018-09-21 Thread John Griffith
On Mon, Sep 10, 2018 at 2:22 PM Jay S Bryant  wrote:

>
>
> On 9/10/2018 7:17 AM, Rambo wrote:
>
> Hi all,
>
>   First, the documentation [1] says we can define hard performance limits
> for each volume, but in fact the limits can only be defined per volume
> type.  Also, about the note "As of the Nova 18.0.0 Rocky release, front
> end QoS settings are only supported when using the libvirt driver": we
> already supported front end QoS settings with the libvirt driver before
> that release.  Is the document wrong?  Can you tell me more about this?
> Thank you very much.
>
> [1]
> https://docs.openstack.org/cinder/latest/admin/blockstorage-basic-volume-qos.html
>
>
>
> Rambo,
>
> The performance limits are limited to a volume type as you need to have a
> volume type to be able to associate a QoS type with it.  So, that makes
> sense.
>
> As for the documentation, it is a little confusing the way that it is
> worded, but it isn't wrong.  So far, including Nova 18.0.0, front end QoS
> settings have only worked with the libvirt driver.  I don't interpret that
> as meaning that there wasn't QoS support before that.
>
Right, the point is that front end QoS is now listed as supported ONLY with
libvirt, whereas in the past it may also have been supported on other
hypervisors like Hyper-V, Xen, etc.  I don't know the details of how well
those other implementations worked or what decisions were made, but I read
the update as noting that currently only libvirt is supported, not that
anything has changed there.
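
To make the front end piece a bit more concrete, here's a tiny illustrative
sketch (the helper function is hypothetical; the key names are the documented
front end QoS spec keys, which are what the libvirt driver applies as
<iotune> settings on the disk):

    # Illustrative only: the documented front end QoS keys and how a spec
    # like the one an operator associates with a volume type gets filtered
    # down to the values libvirt applies as <iotune> disk tuning.
    # The helper itself is hypothetical, not Cinder/Nova code.

    FRONT_END_QOS_KEYS = (
        'total_bytes_sec', 'read_bytes_sec', 'write_bytes_sec',
        'total_iops_sec', 'read_iops_sec', 'write_iops_sec',
    )


    def qos_specs_to_iotune(qos_specs):
        """Keep only the keys the libvirt front end understands."""
        return {k: int(v) for k, v in qos_specs.items()
                if k in FRONT_END_QOS_KEYS}


    if __name__ == '__main__':
        spec = {'consumer': 'front-end',
                'read_iops_sec': '2000',
                'write_iops_sec': '1000'}
        print(qos_specs_to_iotune(spec))
        # -> {'read_iops_sec': 2000, 'write_iops_sec': 1000}

The consumer value in the spec ('front-end', 'back-end' or 'both') is what
decides whether those keys are handed to the hypervisor at all, which is why
the libvirt-only note applies to the front end case.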

>
> Jay
>
>
>
>
>
>
> Best Regards
> Rambo
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-08-01 Thread John Griffith
On Fri, Jul 27, 2018 at 8:44 AM Matt Riedemann  wrote:

> On 7/16/2018 4:20 AM, Gorka Eguileor wrote:
> > If I remember correctly the driver was deprecated because it had no
> > maintainer or CI.  In Cinder we require our drivers to have both,
> > otherwise we can't guarantee that they actually work or that anyone will
> > fix it if it gets broken.
>
> Would this really require 3rd party CI if it's just local block storage
> on the compute node (in devstack)? We could do that with an upstream CI
> job right? We already have upstream CI jobs for things like rbd and nfs.
> The 3rd party CI requirements generally are for proprietary storage
> backends.
>
> I'm only asking about the CI side of this, the other notes from Sean
> about tweaking the LVM volume backend and feature parity are good
> reasons for removal of the unmaintained driver.
>
> Another option is using the nova + libvirt + lvm image backend for local
> (to the VM) ephemeral disk:
>
>
> https://github.com/openstack/nova/blob/6be7f7248fb1c2bbb890a0a48a424e205e173c9c/nova/virt/libvirt/imagebackend.py#L653
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


We've had this conversation multiple times; here are the results from past
conversations and the reasons we deprecated:
1. The driver was not being tested at all (no CI, no upstream tests, etc.)
2. We sent out numerous requests trying to determine if anybody was using
   the driver and didn't receive much feedback
3. The driver didn't work for an entire release, which indicated that
   perhaps it wasn't that valuable
4. The driver is unable to implement a number of the required features for
   a Cinder Block Device
5. Digging deeper into the performance tests, most comparisons were doing
   things like:
   a. Using the shared single NIC that's used for all of the cluster
      communications (i.e. DB, APIs, Rabbit, etc.)
   b. Misconfigured deployments, i.e. using a 1 Gig NIC for iSCSI
      connections (also see above)

The decision was that raw-block was not by definition a "Cinder Device", and
given that it wasn't really tested or maintained, it should be removed.  LVM
is actually quite good; we did some pretty extensive testing and even
presented a session in Barcelona showing performance within approximately
10%.  I'm skeptical any time I see dramatic comparisons of 1/2 performance
(see the rough numbers below), but I could be completely wrong.

I would be much more interested in putting effort toward figuring out why
you have such a large perf delta, and seeing if we can address that, as
opposed to trying to bring back and maintain a driver that only half works.

Or as Jay Pipes mentioned, don't use Cinder in your case.
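
Just to put rough numbers on point 5.b above, a quick back-of-the-envelope
sketch (assuming roughly 10% protocol overhead on the wire):

    # Back-of-the-envelope only: usable iSCSI throughput is capped by the
    # link long before the backend matters if everything shares one NIC.
    def max_iscsi_throughput_mb_s(link_gbit, protocol_overhead=0.10):
        """Rough usable MB/s on a link, assuming ~10% TCP/iSCSI overhead."""
        return link_gbit * 1000 / 8 * (1 - protocol_overhead)


    if __name__ == '__main__':
        print("1 GbE  : ~%.0f MB/s" % max_iscsi_throughput_mb_s(1))    # ~112
        print("10 GbE : ~%.0f MB/s" % max_iscsi_throughput_mb_s(10))   # ~1125

A single shared 1 Gig link tops out around 110 MB/s regardless of how fast
the storage underneath it is, which on its own can explain a "half the
performance" comparison against a local disk.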

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-27 Thread John Griffith
On Tue, Feb 27, 2018 at 9:34 AM, Walter Boring  wrote:

> I think you might be able to get away with just calling os-brick's
> connect_volume again without the need to call disconnect_volume first.
>  Calling disconnect_volume just to refresh the connection_info wouldn't be
> good for volumes that are in use.
>
Hmm... but then you'd have an orphaned connection from the old attachment
left hanging around, no?


>
> On Tue, Feb 27, 2018 at 2:56 PM, Matt Riedemann 
> wrote:
>
>> On 2/27/2018 10:02 AM, Matthew Booth wrote:
>>
>>> Sounds like the work Nova will have to do is identical to volume update
>>> (swap volume). i.e. Change where a disk's backing store is without actually
>>> changing the disk.
>>>
>>
>> That's not what I'm hearing. I'm hearing disconnect/reconnect. Only the
>> libvirt driver supports swap volume, but I assume all other virt drivers
>> could support this generically.
>>
>>
>>> Multi-attach! There might be more than 1 instance per volume, and we
>>> can't currently support volume update for multi-attached volumes.
>>>
Not sure I follow... why not?  It's just refreshing connections; the only
difference is you might have to do this "n" times instead of once?


>
>> Good point - cinder would likely need to reject a request to replicate an
>> in-use multiattach volume if the volume has more than one attachment.
>
So replication is set when the volume is created; you could have a rule that
keeps the two features mutually exclusive, but I'm still not quite sure why
that would be a requirement here.


>
>>
>> --
>>
>> Thanks,
>>
>> Matt
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-26 Thread John Griffith
On Mon, Feb 26, 2018 at 2:47 PM, Matt Riedemann <mriede...@gmail.com> wrote:

> On 2/26/2018 9:28 PM, John Griffith wrote:
>
>> I'm also wondering how much of the extend actions we can leverage here,
>> but I haven't looked through all of that yet.
>>
>
> The os-server-external-events API in nova is generic. We'd just add a new
> microversion to register a new tag for this event. Like the extend volume
> event, the volume ID would be provided as input to the API and nova would
> use that to identify the instance + volume to refresh on the compute host.
>
> We'd also register a new instance action / event record so that users
> could poll the os-instance-actions API for completion of the operation.

Yeah, it seems like this would be pretty handy with what's there.  So are
folks good with that?  I wanted to make sure there's nothing contentious
here before I propose a spec on the Nova and Cinder sides.  If you think it
seems at least worth proposing, I'll work on it and get something ready as a
welcome home from Dublin gift for everyone :)
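
Purely to sketch the shape of it, the request body for
os-server-external-events would look something like the below;
'volume-extended' is the existing event, while 'volume-reconnected' is just
a hypothetical placeholder for whatever new tag the spec would register:

    # Sketch of an os-server-external-events request body.  'volume-extended'
    # is the existing event name; 'volume-reconnected' is purely hypothetical
    # and only stands in for whatever new tag a spec would register.

    def build_external_event(name, server_uuid, volume_id):
        # The volume ID rides along as the event 'tag', like extend volume.
        return {'events': [{'name': name,
                            'server_uuid': server_uuid,
                            'tag': volume_id}]}


    if __name__ == '__main__':
        print(build_external_event('volume-extended', 'server-uuid', 'vol-uuid'))
        print(build_external_event('volume-reconnected', 'server-uuid', 'vol-uuid'))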


>
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-26 Thread John Griffith
On Mon, Feb 26, 2018 at 2:13 PM, Matt Riedemann <mriede...@gmail.com> wrote:

> On 2/26/2018 8:09 PM, John Griffith wrote:
>
>> I'm interested in looking at creating a mechanism to "refresh" all of the
>> existing/current attachments as part of the Cinder Failover process.
>>
>
> What would be involved on the nova side for the refresh? I'm guessing
> disconnect/connect the volume via os-brick (or whatever for non-libvirt
> drivers), resulting in a new host connector from os-brick that nova would
> use to update the existing volume attachment for the volume/server instance
> combo?

Yep, that's pretty much exactly what I'm thinking about / looking at.  I'm
also wondering how much of the extend actions we can leverage here, but I
haven't looked through all of that yet.
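
A minimal sketch of that flow, for anyone skimming (this is not the real
Nova code path, and the cinderclient attachments.update call shape is an
assumption on my part):

    # Minimal sketch, not the real Nova implementation: build a current host
    # connector with os-brick and hand it to Cinder so the backend re-runs
    # initialize_connection and returns refreshed connection_info.

    from os_brick.initiator import connector as brick_connector


    def refresh_one_attachment(cinder, attachment_id, my_ip, root_helper='sudo'):
        # Current view of this host's initiator name, wwpns, ip, etc.
        props = brick_connector.get_connector_properties(
            root_helper, my_ip, multipath=False, enforce_multipath=False)
        # Assumed v3 attachments API shape: update(attachment_id, connector).
        return cinder.attachments.update(attachment_id, props)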


>
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-26 Thread John Griffith
Hey Everyone,

Something I've been looking at with Cinder's replication (sort of the next
step in the evolution if you will) is the ability to refresh/renew in-use
volumes that were part of a migration event.

We do something similar with extend-volume on the Nova side through the use
of Instance Actions I believe, and I'm wondering how folks would feel about
the same sort of thing being added upon failover/failback for replicated
Cinder volumes?

If you're not familiar, Cinder allows a volume to be replicated to multiple
physical backend devices, and in the case of a DR situation an Operator can
failover a backend device (or even a single volume).  This process results
in Cinder making some calls to the respective backend device, it doing it's
magic and updating the Cinder Volume Model with new attachment info.

This works great, except for the case of users that have a bunch of in-use
volumes on that particular backend.  We don't currently do anything to
refresh/update them, so it's a manual process of running through a
detach/attach loop.

I'm interested in looking at creating a mechanism to "refresh" all of the
existing/current attachments as part of the Cinder Failover process.

Curious if anybody has any thoughts on this, or if anyone has already done
something related to this topic?

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-13 Thread John Griffith
On Wed, Dec 13, 2017 at 9:59 AM, Thierry Carrez 
wrote:

> Chris Jones wrote:
> > [...]
> > For me the first thing that comes to mind with this proposal, is how
> > would the milestones/FF/etc be arranged within that year? As I've raised
> > previously on this list [0], I would prefer more time for testing and
> > stabilisation between Feature Freeze and Release. I continue to think
> > that the unit testing our CI provides, is not a sufficient protection
> > against real world deployment issues. I think building in a useful
> > amount of time for functional testing, would be a huge benefit to both
> > the quality of upstream releases, and the timeliness of downstream
> releases.
>
> The release team did an example layout of what it could look like. We'd
> likely add a bit of time to the stabilisation period, and likely do
> feature freeze before the end-of-year holidays, allowing for a bit of
> relax period before we run the final release mile / enter the PTG prep
> tunnel.
>
> Having more time definitely opens up options :)
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
+1 on the idea of an annual release.  In my opinion, the six month treadmill
has been taxing for a while and there hasn't been a good rhythm to things
the last year or so.  I don't think it would or should impact functionality
and velocity, and the intermediary releases are a reasonable step towards
something like bug fix releases etc. if a project wishes to go that route
(personally I think that would be a win).  I also think the reality is that
many of us can't afford to keep doing things the way we've been doing them
for the last few years.  I'm also not sure we're as productive as we used
to be with our scheduling or our face to face time.

If nothing else I think it's healthy and proactive to consider options like
this one and maybe even try it for a year.

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Should we make rebuild + new image on a volume-backed instance fail fast?

2017-10-06 Thread John Griffith
On Fri, Oct 6, 2017 at 11:22 AM, Matt Riedemann  wrote:

> This came up in IRC discussion the other day, but we didn't dig into it
> much given we were all (2 of us) exhausted talking about rebuild.
>
> But we have had several bugs over the years where people expect the root
> disk to change to a newly supplied image during rebuild even if the
> instance is volume-backed.
>
> I distilled several of those bugs down to just this one and duplicated the
> rest:
>
> https://bugs.launchpad.net/nova/+bug/1482040
>
> I wanted to see if there is actually any failure on the backend when doing
> this, and there isn't - there is no instance fault or anything like that.
> It's just not what the user expects, and actually the instance image_ref is
> then shown later as the image specified during rebuild, even though that's
> not the actual image in the root disk (the volume).
>
> There have been a couple of patches proposed over time to change this:
>
> https://review.openstack.org/#/c/305079/
>
> https://review.openstack.org/#/c/201458/
>
> https://review.openstack.org/#/c/467588/
>
> And Paul Murray had a related (approved) spec at one point for detach and
> attach of root volumes:
>
> https://review.openstack.org/#/c/221732/
>
> But the blueprint was never completed.
>
> So with all of this in mind, should we at least consider, until at least
> someone owns supporting this, that the API should fail with a 400 response
> if you're trying to rebuild with a new image on a volume-backed instance?
> That way it's a fast failure in the API, similar to trying to backup a
> volume-backed instance fails fast.
>
Seems reasonable and correct to me.  If we were dealing with ephemeral
Cinder (which isn't a thing) we'd just delete the volume and create a new
one with the new image.  In this case, however, the point of BFV for most
people is persistence, so it seems reasonable to me to start with changing
the response.
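
For illustration only (not the actual Nova API code), the fast-fail being
discussed boils down to a check along these lines:

    # Illustrative sketch of the proposed fast-fail, not Nova's real API code:
    # reject a rebuild that supplies a different image when the instance is
    # volume-backed, instead of returning 202 and quietly ignoring the image.

    class RebuildNotSupported(Exception):
        """Would be translated to an HTTP 400 by the API layer."""


    def validate_rebuild(instance_image_ref, requested_image_ref, volume_backed):
        if volume_backed and requested_image_ref != instance_image_ref:
            raise RebuildNotSupported(
                "Rebuild with a new image is not supported for volume-backed "
                "instances")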


>
> If we did, that would change the API response from a 202 today to a 400,
> which is something we normally don't do. I don't think a microversion would
> be necessary if we did this, however, because essentially what the user is
> asking for isn't what we're actually giving them, so it's a failure in an
> unexpected way even if there is no fault recorded, it's not what the user
> asked for. I might not be thinking of something here though, like
> interoperability for example - a cloud without this change would blissfully
> return 202 but a cloud with the change would return a 400...so that should
> be considered.

It's a bug IMO; if you're relying on an incorrect response code and not
getting what you expect, that seems more important than consistent
behavior.


>
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [election] NON nomination for TC

2017-10-04 Thread John Griffith
On Mon, Oct 2, 2017 at 10:03 AM, Sean Dague  wrote:

> I'd like to announce that after 4 years serving on the OpenStack
> Technical Committee, I will not be running in this fall's
> election. Over the last 4 years we've navigated some rough seas
> together, including the transition to more inclusion of projects, the
> dive off the hype curve, the emergence of real interoperability
> between clouds, and the beginnings of a new vision of OpenStack
> pairing with more technologies beyond our community.
>
> There remains a ton of good work to be done. But it's also important
> that we have a wide range of leaders to do that. Those opportunities
> only exist if we make space for new leaders to emerge. Rotational
> leadership is part of what makes OpenStack great, and is part of what
> will ensure that this community lasts far beyond any individuals
> within it.
>
> I plan to still be around in the community, and contribute where
> needed. So this is not farewell. However it will be good to see new
> faces among the folks leading the next steps in the community.
>
> I would encourage all members of the community that are interested in
> contributing to the future of OpenStack to step forward and run. It's
> important to realize what the TC is and can be. This remains a
> community driven by consensus, and the TC reflects that. Being a
> member of the TC does raise your community visibility, but it does not
> replace the need to listen, understand, communicate clearly, and
> realize that hard work comes through compromise.
>
> Good luck to all our candidates this fall, and thanks for letting me
> represent you the past 4 years.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
It's already been said by others, but in my opinion it can't be stated
enough: you, sir, are indeed a true leader and, for many of us, a role model
in the community.  I've been happy to see you on the TC over all of these
years; you've earned my respect and admiration, and you've done a fantastic
job as a member of the TC.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] removing screen from devstack - RSN

2017-09-07 Thread John Griffith
On Thu, Sep 7, 2017 at 7:07 PM, Sean Dague  wrote:

> On 09/07/2017 04:54 PM, Eric Fried wrote:
>
>> All-
>>
>> The plain pdb doc patch [1] is merging.
>>
>> On clarkb's suggestion, I took a look at remote-pdb [2], and it
>> turned
>> out to be easy-peasy to use.  I submitted a followon doc patch for that
>> [3].
>>
>> Thanks, John, for speaking up and getting this rolling.
>>
>> Eric
>>
>
> Approved for merge, should be in shortly.
>
> Eric, thanks again for stepping up and pulling this all together. Very
> much appreciated.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Thanks everyone, and thanks sdague for driving this to begin with!
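
For anyone else who was stuck on the pdb point, a minimal remote-pdb sketch
(host and port are arbitrary examples): drop the breakpoint into the service
code, restart the unit, and attach from another terminal.

    # Minimal remote-pdb sketch: the service keeps running under systemd and
    # the breakpoint listens on a TCP port instead of needing a terminal.
    # Attach from another shell with:  telnet 127.0.0.1 4444

    from remote_pdb import RemotePdb


    def handle_request(request):
        # Execution pauses here until a debugger client connects.
        RemotePdb('127.0.0.1', 4444).set_trace()
        return {'status': 'ok', 'echo': request}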
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] removing screen from devstack - RSN

2017-09-07 Thread John Griffith
On Thu, Sep 7, 2017 at 1:28 PM, Sean Dague  wrote:

> On 09/07/2017 01:52 PM, Eric Fried wrote:
>
>> John-
>>
>> You're not the only one for whom the transition to systemd has
>> been
>> painful.
>>
>> However...
>>
>> It *is* possible (some would argue just as easy) to do all things
>> with
>> systemd that were done with screen.
>>
>> For starters, have you seen [1] ?
>>
>> Though looking at that again, I realize it could use a section on
>> how
>> to do pdb - I'll propose something for that.  In the meantime, feel free
>> to find me in #openstack-dev and I can talk you through it.
>>
>> [1] https://docs.openstack.org/devstack/latest/systemd.html
>>
>> Thanks,
>> Eric Fried (efried)
>>
>
> Thank you Eric. Would love to get a recommended pdb path into the docs.
> Ping me as soon as it's up for review, and I'll get it merged quickly.
>
> Thanks for stepping up here, it's highly appreciated.
>
> -Sean
>
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Patch is here [1] for those that are interested:

[1]: https://review.openstack.org/#/c/501834/1
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] removing screen from devstack - RSN

2017-09-07 Thread John Griffith
On Thu, Sep 7, 2017 at 11:52 AM, Eric Fried <openst...@fried.cc> wrote:

> John-
>
> You're not the only one for whom the transition to systemd has been
> painful.
>
> However...
>
> It *is* possible (some would argue just as easy) to do all things
> with
> systemd that were done with screen.
>
> For starters, have you seen [1] ?
>
> Though looking at that again, I realize it could use a section on
> how
> to do pdb - I'll propose something for that.  In the meantime, feel free
> to find me in #openstack-dev and I can talk you through it.
>
> [1] https://docs.openstack.org/devstack/latest/systemd.html
>
> Thanks,
>     Eric Fried (efried)
>
> On 09/07/2017 12:34 PM, John Griffith wrote:
> >
> >
> > On Thu, Sep 7, 2017 at 11:29 AM, John Griffith <john.griffi...@gmail.com>
> > wrote:
> >
> > Please don't, some of us have no issues with screen and use it
> > extensively for debugging.  Unless there's a viable option using
> > systemd I fail to understand why this is such a big deal.  I've been
> > using devstack in screen for a long time without issue, and I still
> > use rejoin that supposedly didn't work (without issue).
> >
> > I completely get the "run like customers" but in theory I'm not sure
> > how screen makes it much different than what customers do, it's
> > executing the same binary at the end of the day.  I'd also ask then
> > is devstack no longer "dev" stack, but now a preferred method of
> > install for running production clouds?  Anyway, I'd just ask to
> > leave it as an option, unless there's equivalent options for things
> > like using pdb etc.  It's annoying enough that we lost that
> > capability for the API services, is there a possibility we can
> > reconsider not allowing this an option?
> >
> > Thanks,
> > John
> >
> > On Thu, Sep 7, 2017 at 7:31 AM, Davanum Srinivas <dava...@gmail.com>
> > wrote:
> >
> > w00t!
> >
> > On Thu, Sep 7, 2017 at 8:45 AM, Sean Dague <s...@dague.net> wrote:
> > > On 08/31/2017 06:27 AM, Sean Dague wrote:
> > >> The work that started last cycle to make devstack only have a single
> > >> execution mode, that was the same between automated QA and local, is
> > >> nearing its completion.
> > >>
> > >> https://review.openstack.org/#/c/499186/ is the patch that will
> > >> remove screen from devstack (which was only left as a fall back for
> > >> things like grenade during Pike). Tests are currently passing on all
> > >> the gating jobs for it. And experimental looks mostly useful.
> > >>
> > >> The intent is to merge this in about a week (right before PTG). So,
> > >> if you have a complicated devstack plugin you think might be affected
> > >> by this (and were previously making jobs pretend to be grenade to
> > >> keep screen running), now is the time to run tests against this patch
> > >> and see where things stand.
> > >
> > > This patch is in the gate and now merging, and with it devstack now
> > > has a single run mode, using systemd units, which is the same between
> > > test and development.
> > >
> > > Thanks to everyone helping with the transition!
> > >
> > > -Sean
> > >
> > > --
> > > Sean Dague
> > > http://dague.net
> > >
> > >
> > 
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > >
> > http://lists.ope

Re: [openstack-dev] removing screen from devstack - RSN

2017-09-07 Thread John Griffith
On Thu, Sep 7, 2017 at 11:29 AM, John Griffith <john.griffi...@gmail.com>
wrote:

> Please don't, some of us have no issues with screen and use it extensively
> for debugging.  Unless there's a viable option using systemd I fail to
> understand why this is such a big deal.  I've been using devstack in screen
> for a long time without issue, and I still use rejoin that supposedly
> didn't work (without issue).
>
> I completely get the "run like customers" but in theory I'm not sure how
> screen makes it much different than what customers do, it's executing the
> same binary at the end of the day.  I'd also ask then is devstack no longer
> "dev" stack, but now a preferred method of install for running production
> clouds?  Anyway, I'd just ask to leave it as an option, unless there's
> equivalent options for things like using pdb etc.  It's annoying enough
> that we lost that capability for the API services, is there a possibility
> we can reconsider not allowing this an option?
>
> Thanks,
> John
>
> On Thu, Sep 7, 2017 at 7:31 AM, Davanum Srinivas <dava...@gmail.com>
> wrote:
>
>> w00t!
>>
>> On Thu, Sep 7, 2017 at 8:45 AM, Sean Dague <s...@dague.net> wrote:
>> > On 08/31/2017 06:27 AM, Sean Dague wrote:
>> >> The work that started last cycle to make devstack only have a single
>> >> execution mode, that was the same between automated QA and local, is
>> >> nearing its completion.
>> >>
>> >> https://review.openstack.org/#/c/499186/ is the patch that will remove
>> >> screen from devstack (which was only left as a fall back for things
>> like
>> >> grenade during Pike). Tests are currently passing on all the gating
>> jobs
>> >> for it. And experimental looks mostly useful.
>> >>
>> >> The intent is to merge this in about a week (right before PTG). So, if
>> >> you have a complicated devstack plugin you think might be affected by
>> >> this (and were previously making jobs pretend to be grenade to keep
>> >> screen running), now is the time to run tests against this patch and
>> see
>> >> where things stand.
>> >
>> > This patch is in the gate and now merging, and with it devstack now has
>> > a single run mode, using systemd units, which is the same between test
>> > and development.
>> >
>> > Thanks to everyone helping with the transition!
>> >
>> > -Sean
>> >
>> > --
>> > Sean Dague
>> > http://dague.net
>> >
>> > 
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.op
>> enstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
FWIW I realize my opinion doesn't count for much here, particularly since
this already merged, BUT I also realize that it didn't count before it
merged either, as the response I was given was "I don't use debuggers".
It's unfortunate; perhaps I'm really the only one that has counter opinions
on things, or maybe nobody else wants to speak up, or maybe just nobody
cares.  Either way, it's a bit of a bummer and just another thing on the
list.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] removing screen from devstack - RSN

2017-09-07 Thread John Griffith
Please don't; some of us have no issues with screen and use it extensively
for debugging.  Unless there's a viable option using systemd I fail to
understand why this is such a big deal.  I've been using devstack in screen
for a long time without issue, and I still use rejoin (which supposedly
didn't work) without issue.

I completely get the "run it like customers do" argument, but in theory I'm
not sure how screen makes it much different from what customers do; it's
executing the same binary at the end of the day.  I'd also ask, then, is
devstack no longer "dev" stack, but now a preferred method of install for
running production clouds?  Anyway, I'd just ask to leave it as an option,
unless there are equivalent options for things like using pdb etc.  It's
annoying enough that we lost that capability for the API services; is there
a possibility we can reconsider and keep this as an option?

Thanks,
John

On Thu, Sep 7, 2017 at 7:31 AM, Davanum Srinivas  wrote:

> w00t!
>
> On Thu, Sep 7, 2017 at 8:45 AM, Sean Dague  wrote:
> > On 08/31/2017 06:27 AM, Sean Dague wrote:
> >> The work that started last cycle to make devstack only have a single
> >> execution mode, that was the same between automated QA and local, is
> >> nearing its completion.
> >>
> >> https://review.openstack.org/#/c/499186/ is the patch that will remove
> >> screen from devstack (which was only left as a fall back for things like
> >> grenade during Pike). Tests are currently passing on all the gating jobs
> >> for it. And experimental looks mostly useful.
> >>
> >> The intent is to merge this in about a week (right before PTG). So, if
> >> you have a complicated devstack plugin you think might be affected by
> >> this (and were previously making jobs pretend to be grenade to keep
> >> screen running), now is the time to run tests against this patch and see
> >> where things stand.
> >
> > This patch is in the gate and now merging, and with it devstack now has
> > a single run mode, using systemd units, which is the same between test
> > and development.
> >
> > Thanks to everyone helping with the transition!
> >
> > -Sean
> >
> > --
> > Sean Dague
> > http://dague.net
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Proposing TommyLikeHu for Cinder core

2017-07-26 Thread John Griffith
+1

On Wed, Jul 26, 2017 at 8:50 AM, yang, xing  wrote:

> +1!   Tommy is a great addition to the Cinder core team.
>
> Thanks,
> Xing
>
>
>
> 
> From: Sean McGinnis [sean.mcgin...@gmx.com]
> Sent: Tuesday, July 25, 2017 4:07 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [Cinder] Proposing TommyLikeHu for Cinder core
>
> I am proposing we add TommyLike as a Cinder core.
>
> DISCLAIMER: We work for the same company.
>
> I have held back on proposing him for some time because of this conflict.
> But
> I think from his number of reviews [1] and code contributions [2] it's
> hopefully clear that my motivation does not have anything to do with this.
>
> TommyLike has consistently done quality code reviews. He has contributed a
> lot of bug fixes and features. And he has been available in the IRC channel
> answering questions and helping out, despite some serious timezone
> challenges.
>
> I think it would be great to add someone from this region so we can get
> more
> perspective from the APAC area, as well as having someone around that may
> help as more developers get involved in non-US and non-EU timezones.
>
> Cinder cores, please respond with your opinion. If no reason is given to do
> otherwise, I will add TommyLike to the core group in one week.
>
> And absolutely call me out if you see any bias in my proposal.
>
> Thanks,
> Sean
>
> [1] http://stackalytics.com/report/contribution/cinder-group/90
> [2] https://review.openstack.org/#/q/owner:%22TommyLike+%
> 253Ctommylikehu%2540gmail.com%253E%22++status:merged
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Requirements for re-adding Gluster support

2017-07-26 Thread John Griffith
On Wed, Jul 26, 2017 at 10:42 AM, Sean McGinnis 
wrote:

> On Wed, Jul 26, 2017 at 12:30:49PM +, Jeremy Stanley wrote:
> > On 2017-07-26 12:56:55 +0200 (+0200), Niels de Vos wrote:
> > [...]
> > > My current guess is that adding a 3rd party CI [3] for Gluster is
> > > the only missing piece?
> > [...]
> >
> > I thought GlusterFS was free/libre software. If so, won't the Cinder
> > team allow upstream testing in OpenStack's CI system for free
> > backends/drivers? Maintaining a third-party CI system for that seems
> > like overkill, but I'm unfamiliar with Cinder's particular driver
> > testing policies.
> > --
> > Jeremy Stanley
>
> You are correct Jeremy. It wasn't a CI issue that caused the removal.
> IIRC, Red Hat decided to focus on Ceph as the platform for Cinder
> storage.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Just confirming Sean's recollection: Eric Harney from Red Hat was pretty
much the sole maintainer of the Gluster code in Cinder, and the decision was
made that he would stop maintaining/supporting the Gluster driver in Cinder
(and I believe he actually put out some calls asking for any volunteers that
might want to pick it up).  I'll certainly let Eric speak to any details if
he wishes so I don't misrepresent anything.

The bottom line is that there was only one person maintaining it; CI is
relatively easy with Gluster, and there was even (IIRC) infra already in
place to deploy/test in the upstream gate.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-27 Thread John Griffith
On Wed, Jun 21, 2017 at 8:59 AM, Thierry Carrez 
wrote:

> Hi everyone,
>
> One of the areas identified as a priority by the Board + TC + UC
> workshop in March was the need to better close the feedback loop and
> make unanswered requirements emerge. Part of the solution is to ensure
> that groups that look at specific use cases, or specific problem spaces
> within OpenStack get participation from a wide spectrum of roles, from
> pure operators of OpenStack clouds, to upstream developers, product
> managers, researchers, and every combination thereof. In the past year
> we reorganized the Design Summit event, so that the design / planning /
> feedback gathering part of it would be less dev- or ops-branded, to
> encourage participation of everyone in a neutral ground, based on the
> topic being discussed. That was just a first step.
>
> In OpenStack we have a number of "working groups", groups of people
> interested in discussing a given use case, or addressing a given problem
> space across all of OpenStack. Examples include the API working group,
> the Deployment working group, the Public clouds working group, the
> Telco/NFV working group, or the Scientific working group. However, for
> governance reasons, those are currently set up either as a User
> Committee working group[1], or a working group depending on the
> Technical Committee[2]. This branding of working groups artificially
> discourages participation from one side to the others group, for no
> specific reason. This needs to be fixed.
>
> We propose to take a page out of Kubernetes playbook and set up "SIGs"
> (special interest groups), that would be primarily defined by their
> mission (i.e. the use case / problem space the group wants to
> collectively address). Those SIGs would not be Ops SIGs or Dev SIGs,
> they would just be OpenStack SIGs. While possible some groups will lean
> more towards an operator or dev focus (based on their mission), it is
> important to encourage everyone to join in early and often. SIGs could
> be very easily set up, just by adding your group to a wiki page,
> defining the mission of the group, a contact point and details on
> meetings (if the group has any). No need for prior vetting by any
> governance body. The TC and UC would likely still clean up dead SIGs
> from the list, to keep it relevant and tidy. Since they are neither dev
> or ops, SIGs would not use the -dev or the -operators lists: they would
> use a specific ML (openstack-sigs ?) to hold their discussions without
> cross-posting, with appropriate subject tagging.
>
> Not everything would become a SIG. Upstream project teams would remain
> the same (although some of them, like Security, might turn into a SIG).
> Teams under the UC that are purely operator-facing (like the Ops Tags
> Team or the AUC recognition team) would likewise stay as UC subteams.
>
> Comments, thoughts ?
>
> [1]
> https://wiki.openstack.org/wiki/Governance/Foundation/
> UserCommittee#Working_Groups_and_Teams
> [2] https://wiki.openstack.org/wiki/Upstream_Working_Groups
>
> --
> Melvin Hillsman & Thierry Carrez
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I don't think this is necessarily where you're heading with this, but one
thing that's been kinda nice IMO about working in some other upstream
communities (like K8s) that use the SIG model is that there's one channel
for something like storage, and to your point it's for everybody and anybody
interested in that topic, whether they're a developer, deployer or end-user.
I actually think this works really well because it ensures that the same
people that are developing code are also directly exposed to and interacting
with various consumers of their code.

It also means people that will be consuming the code may actually get to
contribute directly to the development process.  This would be a huge win in
my opinion.  The example is that rather than having a Cinder channel just
for dev related conversations and a general OpenStack channel for support
and questions, throw all Cinder related things into a single channel.  This
means devs are actually in touch with ops and users, which is something that
I think would be extremely beneficial.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-15 Thread John Griffith
On Thu, Jun 15, 2017 at 3:15 AM, Thierry Carrez 
wrote:

> Hi everyone,
>
> Back in 2014, OpenStack was facing a problem. Our project structure,
> inherited from days where Nova, Swift and friends were the only game in
> town, was not working anymore. The "integrated release" that we ended up
> producing was not really integrated, already too big to be installed by
> everyone, and yet too small to accommodate the growing interest in other
> forms of "open infrastructure". The incubation process (from stackforge
> to incubated, from incubated to integrated) created catch-22s that
> prevented projects from gathering enough interest to reach the upper
> layers. Something had to give.
>
> The project structure reform[1] that resulted from those discussions
> switched to a simpler model: project teams would be approved based on
> how well they fit the OpenStack overall mission and community
> principles, rather than based on a degree of maturity. It was nicknamed
> "the big tent" based on a blogpost[2] that Monty wrote -- mostly
> explaining that things produced by the OpenStack community should be
> considered OpenStack projects.
>
> So the reform removed the concept of incubated vs. integrated, in favor
> of a single "official" category. Tags[3] were introduced to better
> describe the degree of maturity of the various official things. "Being
> part of the big tent" was synonymous to "being an official project" (but
> people kept saying the former).
>
> At around the same time, mostly for technical reasons around the
> difficulty of renaming git repositories, the "stackforge/" git
> repository prefix was discontinued (all projects hosted on OpenStack
> infrastructure would be created under an "openstack/" git repository
> prefix).
>
> All those events combined, though, sent a mixed message, which we are
> still struggling with today. "Big tent" has a flea market connotation of
> "everyone can come in". Combined with the fact that all git repositories
> are under the same prefix, it created a lot of confusion. Some people
> even think the big tent is the openstack/ namespace, not the list of
> official projects. We tried to stop using the "big tent" meme, but (I
> blame Monty), the name is still sticking. I think it's time to more
> aggressively get rid of it. We tried using "unofficial" and "official"
> terminology, but that did not stick either.
>
> I'd like to propose that we introduce a new concept: "OpenStack-Hosted
> projects". There would be "OpenStack projects" on one side, and
> "Projects hosted on OpenStack infrastructure" on the other side (all
> still under the openstack/ git repo prefix). We'll stop saying "official
> OpenStack project" and "unofficial OpenStack project". The only
> "OpenStack projects" will be the official ones. We'll chase down the
> last mentions of "big tent" in documentation and remove it from our
> vocabulary.
>
> I think this new wording (replacing what was previously Stackforge,
> replacing what was previously called "unofficial OpenStack projects")
> will bring some clarity as to what is OpenStack and what is beyond it.
>
> Thoughts ?
>
> [1]
> https://governance.openstack.org/tc/resolutions/20141202-
> project-structure-reform-spec.html
> [2] http://inaugust.com/posts/big-tent.html
> [3] https://governance.openstack.org/tc/reference/tags/index.html
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
I like it, and I actually like the naming.  "Friends of OpenStack" is way
too touchy-feely, kumbaya.  True, there's not a glaring distinction in the
names (OpenStack Project vs OpenStack-Hosted), but I thought that was kind
of a good thing: a sort of compromise between the two extremes we've had in
the past.  Either way, whatever the names end up being, the concept seems
solid to me and I think it might be more clear for those trying to wrap
their heads around things.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-06-08 Thread John Griffith
On Thu, Jun 8, 2017 at 7:58 AM, Matt Riedemann  wrote:

> Nova stores the output of the Cinder os-initialize_connection info API in
> the Nova block_device_mappings table, and uses that later for making volume
> connections.
>
> This data can get out of whack or need to be refreshed, like if your ceph
> server IP changes, or you need to recycle some secret uuid for your ceph
> cluster.
>
> I think the only ways to do this on the nova side today are via volume
> detach/re-attach, reboot, migrations, etc - all of which, except live
> migration, are disruptive to the running guest.
>
> I've kicked around the idea of adding some sort of admin API interface for
> refreshing the BDM.connection_info on-demand if needed by an operator. Does
> anyone see value in this? Are operators doing stuff like this already, but
> maybe via direct DB updates?
>
> We could have something in the compute API which calls down to the compute
> for an instance and has it refresh the connection_info from Cinder and
> updates the BDM table in the nova DB. It could be an admin action API, or
> part of the os-server-external-events API, like what we have for the
> 'network-changed' event sent from Neutron which nova uses to refresh the
> network info cache.
>
> Other ideas or feedback here?
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
The attachment_update call could do this for you; it might need some slight
tweaks, because I tried to make sure that attachment records weren't being
treated as things that live forever and get modified dynamically.  This
particular case seems like a decent fit though: issue the call, Cinder
queries the backend to get any updated connection info and sends it back.
I'd leave it to Nova to figure out whether said info has been updated or
not.  Just iterate through the attachment_ids in the BDMs and update/refresh
each one, maybe?
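
To make that concrete, a rough sketch of the loop being described (names are
illustrative, and the attachments.update call shape is assumed from the v3
attachment API rather than copied from real Nova code):

    # Rough sketch only: walk an instance's volume BDMs, ask Cinder to refresh
    # each attachment with a current host connector, and stash the new
    # connection_info back on the BDM.  'cinder' is assumed to expose the v3
    # attachments.update() call; 'bdms' is a list of dict-like rows.

    def refresh_instance_attachments(bdms, cinder, connector):
        refreshed = []
        for bdm in bdms:
            attachment_id = bdm.get('attachment_id')
            if not attachment_id:
                continue  # pre-attachment-API volumes would need the old path
            attachment = cinder.attachments.update(attachment_id, connector)
            bdm['connection_info'] = attachment.connection_info
            refreshed.append(attachment_id)
        return refreshed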
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Target classes in Cinder

2017-06-02 Thread John Griffith
On Fri, Jun 2, 2017 at 3:51 PM, Jay Bryant <jsbry...@electronicjungle.net>
wrote:

> I had forgotten that we added this and am guessing that other cores did as
> well.  As a result, it likely was not enforced in driver reviews.
>
> I need to better understand the benefit.  I don't think there is a hurry
> to remove this right now.  Can we put it on the agenda for Denver?

Yeah, I think it's an out of sight, out of mind thing... and maybe just
having the volume/targets module alone is good enough, regardless of whether
drivers want to do child inheritance or member inheritance against it.

Meh... ok, never mind.


>
>
> Jay
>
> On Fri, Jun 2, 2017 at 4:14 PM Eric Harney <ehar...@redhat.com> wrote:
>
>> On 06/02/2017 03:47 PM, John Griffith wrote:
>> > Hey Everyone,
>> >
>> > So quite a while back we introduced a new model for dealing with target
>> > management in the drivers (ie initialize_connection, ensure_export etc).
>> >
>> > Just to summarize a bit:  The original model was that all of the target
>> > related stuff lived in a base class of the base drivers.  Folks would
>> > inherit from said base class and off they'd go.  This wasn't very
>> flexible,
>> > and it's why we ended up with things like two drivers per backend in the
>> > case of FibreChannel support.  So instead of just say having
>> "driver-foo",
>> > we ended up with "driver-foo-iscsi" and "driver-foo-fc", each with their
>> > own CI, configs etc.  Kind of annoying.
>>
>> We'd need separate CI jobs for the different target classes too.
>>
>>
>> > So we introduced this new model for targets, independent connectors or
>> > fabrics so to speak that live in `cinder/volume/targets`.  The idea
>> being
>> > that drivers were no longer locked in to inheriting from a base class to
>> > get the transport layer they wanted, but instead, the targets class was
>> > decoupled, and your driver could just instantiate whichever type they
>> > needed and use it.  This was great in theory for folks like me that if I
>> > ever did FC, rather than create a second driver (the pattern of 3
>> classes:
>> > common, iscsi and FC), it would just be a config option for my driver,
>> and
>> > I'd use the one you selected in config (or both).
>> >
>> > Anyway, I won't go too far into the details around the concept (unless
>> > somebody wants to hear more), but the reality is it's been a couple
>> years
>> > now and currently it looks like there are a total of 4 out of the 80+
>> > drivers in Cinder using this design, blockdevice, solidfire, lvm and
>> drbd
>> > (and I implemented 3 of them I think... so that's not good).
>> >
>> > What I'm wondering is, even though I certainly think this is a FAR
>> SUPERIOR
>> > design to what we had, I don't like having both code-paths and designs
>> in
>> > the code base.  Should we consider reverting the drivers that are using
>> the
>> > new model back and remove cinder/volume/targets?  Or should we start
>> > flagging those new drivers that don't use the new model during review?
>> > Also, what about the legacy/burden of all the other drivers that are
>> > already in place?
>> >
>> > Like I said, I'm biased and I think the new approach is much better in a
>> > number of ways, but that's a different debate.  I'd be curious to see
>> what
>> > others think and what might be the best way to move forward.
>> >
>> > Thanks,
>> > John
>> >
>>
>> Some perspective from my side here:  before reading this mail, I had a
>> bit different idea of what the target_drivers were actually for.
>>
>> The LVM, block_device, and DRBD drivers use this target_driver system
>> because they manage "local" storage and then layer an iSCSI target on
>> top of it.  (scsi-target-utils, or LIO, etc.)  This makes sense from the
>> original POV of the LVM driver, which was doing this to work on multiple
>> different distributions that had to pick scsi-target-utils or LIO to
>> function at all.  The important detail here is that the
>> scsi-target-utils/LIO code could also then be applied to different
>> volume drivers.
>>
>> The Solidfire driver is doing something different here, and using the
>> target_driver classes as an interface upon which it defines its own
>> target driver.  In this case, this splits up the code within the driver
>> itself, but doesn't enable plugging in other target drivers to

Re: [openstack-dev] [cinder] Target classes in Cinder

2017-06-02 Thread John Griffith
On Fri, Jun 2, 2017 at 3:11 PM, Eric Harney <ehar...@redhat.com> wrote:

> On 06/02/2017 03:47 PM, John Griffith wrote:
> > Hey Everyone,
> >
> > So quite a while back we introduced a new model for dealing with target
> > management in the drivers (ie initialize_connection, ensure_export etc).
> >
> > Just to summarize a bit:  The original model was that all of the target
> > related stuff lived in a base class of the base drivers.  Folks would
> > inherit from said base class and off they'd go.  This wasn't very
> flexible,
> > and it's why we ended up with things like two drivers per backend in the
> > case of FibreChannel support.  So instead of just say having
> "driver-foo",
> > we ended up with "driver-foo-iscsi" and "driver-foo-fc", each with their
> > own CI, configs etc.  Kind of annoying.
>
> We'd need separate CI jobs for the different target classes too.
>
>
> > So we introduced this new model for targets, independent connectors or
> > fabrics so to speak that live in `cinder/volume/targets`.  The idea being
> > that drivers were no longer locked in to inheriting from a base class to
> > get the transport layer they wanted, but instead, the targets class was
> > decoupled, and your driver could just instantiate whichever type they
> > needed and use it.  This was great in theory for folks like me that if I
> > ever did FC, rather than create a second driver (the pattern of 3
> classes:
> > common, iscsi and FC), it would just be a config option for my driver,
> and
> > I'd use the one you selected in config (or both).
> >
> > Anyway, I won't go too far into the details around the concept (unless
> > somebody wants to hear more), but the reality is it's been a couple years
> > now and currently it looks like there are a total of 4 out of the 80+
> > drivers in Cinder using this design, blockdevice, solidfire, lvm and drbd
> > (and I implemented 3 of them I think... so that's not good).
> >
> > What I'm wondering is, even though I certainly think this is a FAR
> SUPERIOR
> > design to what we had, I don't like having both code-paths and designs in
> > the code base.  Should we consider reverting the drivers that are using
> the
> > new model back and remove cinder/volume/targets?  Or should we start
> > flagging those new drivers that don't use the new model during review?
> > Also, what about the legacy/burden of all the other drivers that are
> > already in place?
> >
> > Like I said, I'm biased and I think the new approach is much better in a
> > number of ways, but that's a different debate.  I'd be curious to see
> what
> > others think and what might be the best way to move forward.
> >
> > Thanks,
> > John
> >
>
> Some perspective from my side here:  before reading this mail, I had a
> bit different idea of what the target_drivers were actually for.
>
> The LVM, block_device, and DRBD drivers use this target_driver system
> because they manage "local" storage and then layer an iSCSI target on
> top of it.  (scsi-target-utils, or LIO, etc.)  This makes sense from the
> original POV of the LVM driver, which was doing this to work on multiple
> different distributions that had to pick scsi-target-utils or LIO to
> function at all.  The important detail here is that the
> scsi-target-utils/LIO code could also then be applied to different
> volume drivers.
>

Yeah, that's fair; it is different in that they're
creating a target, etc.  At least the new code is
pulled in by default and we don't have that mixin
iscsi class any more, meaning that drivers that
don't need LIO/tgt etc. don't get it in the import.

Regardless of which way you use things here you end
up sharing this interface anyway, so I guess maybe
none of this topic is even relevant any more.

>
> The Solidfire driver is doing something different here, and using the
> target_driver classes as an interface upon which it defines its own
> target driver.  In this case, this splits up the code within the driver
> itself, but doesn't enable plugging in other target drivers to the
> Solidfire driver.  So the fact that it's tied to this defined
> target_driver class interface doesn't change much.
>
> The question, I think, mostly comes down to whether you get better code,
> or better deployment configurability, by a) defining a few target
> classes for your driver or b) defining a few volume driver classes for
> your driver.   (See coprhd or Pure for some examples.)
>
> I'm not convinced there is any difference in the outcome, so I can't see
> why we would enforce any policy around this.  The main difference is in
> which 

[openstack-dev] [cinder] Target classes in Cinder

2017-06-02 Thread John Griffith
Hey Everyone,

So quite a while back we introduced a new model for dealing with target
management in the drivers (ie initialize_connection, ensure_export etc).

Just to summarize a bit:  The original model was that all of the target
related stuff lived in a base class of the base drivers.  Folks would
inherit from said base class and off they'd go.  This wasn't very flexible,
and it's why we ended up with things like two drivers per backend in the
case of FibreChannel support.  So instead of just having, say, "driver-foo",
we ended up with "driver-foo-iscsi" and "driver-foo-fc", each with their
own CI, configs etc.  Kind of annoying.

So we introduced this new model for targets, independent connectors or
fabrics so to speak that live in `cinder/volume/targets`.  The idea being
that drivers were no longer locked in to inheriting from a base class to
get the transport layer they wanted, but instead, the targets class was
decoupled, and your driver could just instantiate whichever type they
needed and use it.  This was great in theory for folks like me: if I
ever did FC, rather than create a second driver (the pattern of three classes:
common, iscsi and FC), it would just be a config option for my driver, and
I'd use whichever one you selected in config (or both).
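
To make that concrete, here's a toy sketch of the pattern (nothing below is
real Cinder code, the class names are made up; it just shows the shape of
"instantiate a target object chosen from config" versus "inherit the
transport from a base class"):

    # Toy illustration only; runnable on its own, but not actual Cinder code.
    class ISCSITarget(object):
        def create_export(self, volume_name):
            # the real target classes drive tgtadm/LIO/etc.; this is faked
            return {'driver_volume_type': 'iscsi',
                    'data': {'target_iqn': 'iqn.2017-06.example:%s' % volume_name}}

    class FCTarget(object):
        def create_export(self, volume_name):
            return {'driver_volume_type': 'fibre_channel',
                    'data': {'target_wwn': ['50060e801049cfd1']}}

    TARGETS = {'iscsi': ISCSITarget, 'fc': FCTarget}

    class FooDriver(object):
        """One backend driver; transport picked at runtime, not by subclassing."""

        def __init__(self, conf):
            # one driver, one CI job; the transport is just a config choice
            self.target = TARGETS[conf.get('target_protocol', 'iscsi')]()

        def initialize_connection(self, volume_name, connector):
            # delegate the transport details instead of inheriting them
            return self.target.create_export(volume_name)

    if __name__ == '__main__':
        driver = FooDriver({'target_protocol': 'fc'})
        print(driver.initialize_connection('vol-1', connector={}))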

Anyway, I won't go too far into the details around the concept (unless
somebody wants to hear more), but the reality is it's been a couple of years
now, and currently it looks like there are a total of 4 out of the 80+
drivers in Cinder using this design: blockdevice, solidfire, lvm and drbd
(and I implemented 3 of them I think... so that's not good).

What I'm wondering is, even though I certainly think this is a FAR SUPERIOR
design to what we had, I don't like having both code-paths and designs in
the code base.  Should we consider reverting the drivers that are using the
new model back and remove cinder/volume/targets?  Or should we start
flagging those new drivers that don't use the new model during review?
Also, what about the legacy/burden of all the other drivers that are
already in place?

Like I said, I'm biased and I think the new approach is much better in a
number of ways, but that's a different debate.  I'd be curious to see what
others think and what might be the best way to move forward.

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-30 Thread John Griffith
On Tue, May 30, 2017 at 5:47 AM, Spyros Trigazis <strig...@gmail.com> wrote:

> FYI, there is already a cinder volume driver for docker available, written
> in golang, from rexray [1].
>
> Our team recently contributed to libstorage [3], it could support manila
> too. Rexray
> also supports the popular cloud providers.
>
> Magnum's docker swarm cluster driver, already leverages rexray for cinder
> integration. [2]
>
> Cheers,
> Spyros
>
> [1] https://github.com/codedellemc/rexray/releases/tag/v0.9.0
> [2] https://github.com/codedellemc/libstorage/releases/tag/v0.6.0
> [3] http://git.openstack.org/cgit/openstack/magnum/tree/
> magnum/drivers/common/templates/swarm/fragments/
> volume-service.sh?h=stable/ocata
>
> On 27 May 2017 at 12:15, zengchen <chenzeng...@163.com> wrote:
>
>> Hi John & Ben:
>>  I have committed a patch[1] to add a new repository to Openstack. Please
>> take a look at it. Thanks very much!
>>
>>  [1]: https://review.openstack.org/#/c/468635
>>
>> Best Wishes!
>> zengchen
>>
>>
>>
>>
>>
>> On 2017-05-26 21:30:48, "John Griffith" <john.griffi...@gmail.com> wrote:
>>
>>
>>
>> On Thu, May 25, 2017 at 10:01 PM, zengchen <chenzeng...@163.com> wrote:
>>
>>>
>>> Hi john:
>>> I have seen your updates on the bp. I agree with your plan on how to
>>> develop the codes.
>>> However, there is one issue I have to remind you that at present,
>>> Fuxi not only can convert
>>>  Cinder volume to Docker, but also Manila file. So, do you consider to
>>> involve Manila part of codes
>>>  in the new Fuxi-golang?
>>>
>> Agreed, that's a really good and important point.  Yes, I believe Ben
>> Swartzlander
>>
>> is interested, we can check with him and make sure but I certainly hope
>> that Manila would be interested.
>>
>>> Besides, IMO, It is better to create a repository for Fuxi-golang,
>>> because
>>>  Fuxi is the project of Openstack,
>>>
>> Yeah, that seems fine; I just didn't know if there needed to be any more
>> conversation with other folks on any of this before charing ahead on new
>> repos etc.  Doesn't matter much to me though.
>>
>>
>>>
>>>Thanks very much!
>>>
>>> Best Wishes!
>>> zengchen
>>>
>>>
>>>
>>>
>>> At 2017-05-25 22:47:29, "John Griffith" <john.griffi...@gmail.com>
>>> wrote:
>>>
>>>
>>>
>>> On Thu, May 25, 2017 at 5:50 AM, zengchen <chenzeng...@163.com> wrote:
>>>
>>>> Very sorry to foget attaching the link for bp of rewriting Fuxi with go
>>>> language.
>>>> https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang
>>>>
>>>>
>>>> At 2017-05-25 19:46:54, "zengchen" <chenzeng...@163.com> wrote:
>>>>
>>>> Hi guys:
>>>> hongbin had committed a bp of rewriting Fuxi with go language[1].
>>>> My question is where to commit codes for it.
>>>> We have two choice, 1. create a new repository, 2. create a new
>>>> branch.  IMO, the first one is much better. Because
>>>> there are many differences in the layer of infrastructure, such as CI.
>>>> What's your opinion? Thanks very much
>>>>
>>>> Best Wishes
>>>> zengchen
>>>>
>>>>
>>>> 
>>>> __
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: openstack-dev-requ...@lists.op
>>>> enstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>> Hi Zengchen,
>>>
>>> For now I was thinking just use Github and PR's outside of the OpenStack
>>> projects to bootstrap things and see how far we can get.  I'll update the
>>> BP this morning with what I believe to be the key tasks to work through.
>>>
>>> Thanks,
>>> John
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>

Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-26 Thread John Griffith
On Thu, May 25, 2017 at 10:01 PM, zengchen <chenzeng...@163.com> wrote:

>
> Hi john:
> I have seen your updates on the bp. I agree with your plan on how to
> develop the codes.
> However, there is one issue I have to remind you that at present, Fuxi
> not only can convert
>  Cinder volume to Docker, but also Manila file. So, do you consider to
> involve Manila part of codes
>  in the new Fuxi-golang?
>
Agreed, that's a really good and important point.  Yes, I believe Ben
Swartzlander is interested; we can check with him and make sure, but I
certainly hope that Manila would be interested.

> Besides, IMO, It is better to create a repository for Fuxi-golang, because
>  Fuxi is the project of Openstack,
>
Yeah, that seems fine; I just didn't know if there needed to be any more
conversation with other folks on any of this before charging ahead on new
repos etc.  Doesn't matter much to me though.


>
>Thanks very much!
>
> Best Wishes!
> zengchen
>
>
>
>
> At 2017-05-25 22:47:29, "John Griffith" <john.griffi...@gmail.com> wrote:
>
>
>
> On Thu, May 25, 2017 at 5:50 AM, zengchen <chenzeng...@163.com> wrote:
>
>> Very sorry to foget attaching the link for bp of rewriting Fuxi with go
>> language.
>> https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang
>>
>>
>> At 2017-05-25 19:46:54, "zengchen" <chenzeng...@163.com> wrote:
>>
>> Hi guys:
>> hongbin had committed a bp of rewriting Fuxi with go language[1]. My
>> question is where to commit codes for it.
>> We have two choice, 1. create a new repository, 2. create a new branch.
>> IMO, the first one is much better. Because
>> there are many differences in the layer of infrastructure, such as CI.
>> What's your opinion? Thanks very much
>>
>> Best Wishes
>> zengchen
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> Hi Zengchen,
>
> For now I was thinking just use Github and PR's outside of the OpenStack
> projects to bootstrap things and see how far we can get.  I'll update the
> BP this morning with what I believe to be the key tasks to work through.
>
> Thanks,
> John
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-25 Thread John Griffith
On Thu, May 25, 2017 at 6:25 PM, Zhipeng Huang <zhipengh...@gmail.com>
wrote:

> Hi John and Zeng,
>
> The OpenSDS community already developed a golang client for the
> os-brick[1], I think we could host the new golang os-brick code there as a
> new repo and after things settled port the code back to OpenStack
>
> [1]https://github.com/opensds/opensds/blob/master/pkg/dock/
> plugins/connector/connector.go
>
> On Thu, May 25, 2017 at 10:47 PM, John Griffith <john.griffi...@gmail.com>
> wrote:
>
>>
>>
>> On Thu, May 25, 2017 at 5:50 AM, zengchen <chenzeng...@163.com> wrote:
>>
>>> Very sorry to foget attaching the link for bp of rewriting Fuxi with go
>>> language.
>>> https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang
>>>
>>>
>>> At 2017-05-25 19:46:54, "zengchen" <chenzeng...@163.com> wrote:
>>>
>>> Hi guys:
>>> hongbin had committed a bp of rewriting Fuxi with go language[1]. My
>>> question is where to commit codes for it.
>>> We have two choice, 1. create a new repository, 2. create a new branch.
>>> IMO, the first one is much better. Because
>>> there are many differences in the layer of infrastructure, such as CI.
>>> What's your opinion? Thanks very much
>>>
>>> Best Wishes
>>> zengchen
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> Hi Zengchen,
>>
>> For now I was thinking just use Github and PR's outside of the OpenStack
>> projects to bootstrap things and see how far we can get.  I'll update the
>> BP this morning with what I believe to be the key tasks to work through.
>>
>> Thanks,
>> John
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co,. Ltd
> Email: huangzhip...@huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipe...@uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
I like the idea of the service version of brick, and then the golang
bindings.  There's a lot of good investment already in os-brick that it
would be great to leverage.  Walt and Ivan mentioned that they had a POC
for this a while back; it might be worth considering taking the fork
referenced in the BP and submitting that upstream for the community.
The #openstack-cinder IRC channel would be a great place to sync up on these
aspects in real time if folks would like.

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-25 Thread John Griffith
On Thu, May 25, 2017 at 5:50 AM, zengchen  wrote:

> Very sorry to foget attaching the link for bp of rewriting Fuxi with go
> language.
> https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang
>
>
> At 2017-05-25 19:46:54, "zengchen"  wrote:
>
> Hi guys:
> hongbin had committed a bp of rewriting Fuxi with go language[1]. My
> question is where to commit codes for it.
> We have two choice, 1. create a new repository, 2. create a new branch.
> IMO, the first one is much better. Because
> there are many differences in the layer of infrastructure, such as CI.
> What's your opinion? Thanks very much
>
> Best Wishes
> zengchen
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Hi Zengchen,

For now I was thinking just use Github and PR's outside of the OpenStack
projects to bootstrap things and see how far we can get.  I'll update the
BP this morning with what I believe to be the key tasks to work through.

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Supporting volume_type when booting from volume

2017-05-23 Thread John Griffith
On Tue, May 23, 2017 at 3:43 PM, Dean Troyer  wrote:

> On Tue, May 23, 2017 at 3:42 PM, Sean McGinnis 
> wrote:
> >
> >> If it's just too much debt and risk of slippery slope type arguments on
> >> the Nova side (and that's fair, after lengthy conversations with Nova
> folks
> >> I get it), do we consider just orchestrating this from say OpenStack
> Client
> >> completely?  The last resort (and it's an awful option) is orchestrate
> the
> >> whole thing from Cinder.  We can certainly make calls to Nova and pass
> in
> >> the volume using the semantics that are already accepted and in use.
> >>
> >> John
> >>
> >
> > /me runs away screaming!
>
> Now I know Sean's weakness...
>
Ha!  I thought it was the "put it in Cinder" part (so I have a patch queued
up for emergencies when I need to threaten him). :)


>
> In this particular case it may not be necessary, but I think early
> implementation of composite features in clients is actually the right
> way to prove the utility of these things going forward.

Yeah, I've been doing more with OSC as of late and it really has all the
pieces, and currently it is one of the few places in OpenStack that really
knows what the other actors are up to (or at least how to communicate with
them and ask them to do things).

It does seem like a reasonable place (OSC), and as far as some major
objections I've heard already around "where would you draw the line"...
yeah, that's important.  To start, though, orchestrated "features" that have
been requested for multiple releases and are actually fairly trivial to
implement might be a great starting point.  It's at least worth thinking on
for a bit, in my opinion.


> Establish and
> document the process, implement in a way for users to opt-in, and move
> into the services as they are proven useful.  With the magic of
> microversions we can then migrate from client-side to server-side as
> the implementations roll through the deployment lifecycle.
>
> This last bit is important.   Even today many of our users are unable
> to take advantage of useful features that are already over a year old
> due to the upgrade delay that production deployments see.
> Implementing new things in clients helps users on existing clouds.
> Sure other client implementations are left to their own work, but they
> should have a common process model to follow, and any choice to
> deviate from that is their own.
>
> dt
>
> --
>
> Dean Troyer
> dtro...@gmail.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Supporting volume_type when booting from volume

2017-05-23 Thread John Griffith
On Tue, May 23, 2017 at 10:13 AM, Matt Riedemann 
wrote:

> On 5/23/2017 9:56 AM, Duncan Thomas wrote:
>
>> Is it entirely unreasonable to turn the question around and ask why,
>> given it is such a commonly requested feature, the Nova team are so
>> resistant to it?
>>
>
> Because it's technical debt for one thing. Adding more orchestration adds
> complexity, which adds bugs. Also, as noted in the linked devref on this,
> when nova proxies something via the compute API to another service's API,
> if that other service changes their API (like with nova's image proxy API
> to glance v1 for example, and needing to get to glance v2), then we have
> this weird situation with compatibility. Which is more technical debt.
> Microversions should make that less of an issue, but it's still there.
>
> It's also a slippery slope. Once you allow proxies and orchestration into
> part of the API, people use it as grounds for justifying doing more of it
> elsewhere, i.e. if we do this for volumes, when are we going to start
> seeing people asking for passing more detailed information about Neutron
> ports when creating a server?
>
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
I get the concern about adding more orchestration, etc.; I'm not totally
convinced, only because it's adding another flag as opposed to functionality.
But regardless, I get the argument and the slippery slope after talking
through it with Matt and Dan multiple times.

The disappointing part of this for me is that the main reason this comes up
(I believe) is not only because Cinder volumes are AWESOME, but probably
more accurately because all of the non-OpenStack public clouds behave this
way (or the big ones do at least).  Service providers using OpenStack, as
well as users consuming OpenStack, have voiced that they'd like to have this
same functionality/behavior, including selecting what type of volume.

If it's just too much debt and risk of slippery-slope type arguments on the
Nova side (and that's fair, after lengthy conversations with Nova folks I
get it), do we consider just orchestrating this from, say, OpenStack Client
completely?  The last resort (and it's an awful option) is to orchestrate the
whole thing from Cinder.  We can certainly make calls to Nova and pass in
the volume using the semantics that are already accepted and in use.
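
For reference, the client-side orchestration I'm talking about is roughly
this (a rough sketch only; the endpoint, credentials, 'fast' type, image and
flavor IDs are all placeholders, and polling/error handling is trimmed to
the bare minimum):

    import time

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from cinderclient import client as cinder_client
    from novaclient import client as nova_client

    # Placeholder auth details; substitute your own cloud's values.
    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='demo', password='secret', project_name='demo',
                       user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)
    cinder = cinder_client.Client('2', session=sess)
    nova = nova_client.Client('2.1', session=sess)

    # 1) Have Cinder build a bootable volume of the requested type from an image.
    vol = cinder.volumes.create(size=10, name='boot-vol', volume_type='fast',
                                imageRef='<image-uuid>')
    while vol.status not in ('available', 'error'):
        time.sleep(2)
        vol = cinder.volumes.get(vol.id)

    # 2) Hand the finished volume to Nova using the BDM semantics it already accepts.
    nova.servers.create(name='typed-bfv', image=None, flavor='<flavor-id>',
                        block_device_mapping_v2=[{'boot_index': 0,
                                                  'uuid': vol.id,
                                                  'source_type': 'volume',
                                                  'destination_type': 'volume',
                                                  'delete_on_termination': False}])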

John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [go][all] Settling on gophercloud as the go based API library for all OpenStack projects

2017-05-23 Thread John Griffith
On Tue, May 23, 2017 at 12:48 PM, Davanum Srinivas <dava...@gmail.com>
wrote:

> John,
>
> I had heard this a few time in Boston Summit. So want to put this to bed :)
>
> -- Dims
>
> On Tue, May 23, 2017 at 2:43 PM, John Griffith <john.griffi...@gmail.com>
> wrote:
> >
> >
> > On Tue, May 23, 2017 at 8:54 AM, Davanum Srinivas <dava...@gmail.com>
> wrote:
> >>
> >> Folks,
> >>
> >> This has come up several times in various conversations.
> >>
> >> Can we please stop activity on
> >> https://git.openstack.org/cgit/openstack/golang-client/ and just
> >> settle down on https://github.com/gophercloud/gophercloud ?
> >>
> >> This becomes important since new container-y projects like
> >> stackube/fuxi/kuryr etc can just pick one that is already working and
> >> not worry about switching later. This is also a NIH kind of behavior
> >> (at least from a casual observer from outside).
> >>
> >> Thanks,
> >> Dims
> >>
> >> --
> >> Davanum Srinivas :: https://twitter.com/dims
> >>
> >> 
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > Oh, my bad... I'm actually guilty of bringing this up (this morning).  I
> was
> > confused about the direction of this, I've been a happy GopherCloud user
> for
> > a couple years so I'm perfectly happy with this answer.  Thanks and sorry
> > for adding to the confusion.
> >
> > John
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Sleep with the fishes, confusing issue!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [go][all] Settling on gophercloud as the go based API library for all OpenStack projects

2017-05-23 Thread John Griffith
On Tue, May 23, 2017 at 8:54 AM, Davanum Srinivas  wrote:

> Folks,
>
> This has come up several times in various conversations.
>
> Can we please stop activity on
> https://git.openstack.org/cgit/openstack/golang-client/ and just
> settle down on https://github.com/gophercloud/gophercloud ?
>
> This becomes important since new container-y projects like
> stackube/fuxi/kuryr etc can just pick one that is already working and
> not worry about switching later. This is also a NIH kind of behavior
> (at least from a casual observer from outside).
>
> Thanks,
> Dims
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Oh, my bad... I'm actually guilty of bringing this up (this morning).  I
was confused about the direction of this; I've been a happy GopherCloud
user for a couple of years, so I'm perfectly happy with this answer.  Thanks,
and sorry for adding to the confusion.

John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread John Griffith
On Fri, May 5, 2017 at 11:24 AM, Chris Friesen 
wrote:

> On 05/05/2017 10:48 AM, Chris Dent wrote:
>
> Would it be accurate to say, then, that from your perpsective the
>> tendency of OpenStack to adopt new projects willy nilly contributes
>> to the sense of features winning out over deployment, configuration
>> and usability issues?
>>
>
> Personally I don't care about the new projects...if I'm not using them I
> can ignore them, and if I am using them then I'll pay attention to them.
>
> But within existing established projects there are some odd gaps.
>
> Like nova hasn't implemented cold-migration or resize (or live-migration)
> of an instance with LVM local storage if you're using libvirt.
>
> Image properties get validated, but not flavor extra-specs or instance
> metadata.
>
> Cinder theoretically supports LVM/iSCSI, but if you actually try to use it
> for anything stressful it falls over.
>

Oh really?

I'd love some detail on this.  What falls over?


> Some of the database pruning tools don't cover all the tables so the DB
> gets bigger over time.
>
> I'm sure there are historical reasons for all of these, I'm just pointing
> out some of the things that were surprising to me.
>
> Chris
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Is Cinder still maintained?

2017-04-27 Thread John Griffith
On Thu, Apr 27, 2017 at 1:42 AM, Julien Danjou  wrote:

> Hi,
>
> I've posted a refactoring patch that simplifies tooz (read: remove
> technical debt) usage more than a month ago, and I got 0 review since
> then:
>
>   https://review.openstack.org/#/c/447079
>
> I'm a bit worried to see this zero review on such patches. It seems the
> most recently merged things are all vendor specific. Is the core of
> Cinder still maintained? Is there any other reason for such patches to
> be ignored for so long?
>
> --
> Julien Danjou
> // Free Software hacker
> // https://julien.danjou.info
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Nope, nobody working on Cinder any more.  It's no longer a thing,
sorry... we're closed.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][qa][tc][nova][cinder] Testing of a microversioned world

2017-03-10 Thread John Griffith
On Fri, Mar 10, 2017 at 1:34 PM, Ken'ichi Ohmichi <ken1ohmi...@gmail.com>
wrote:

> Hi John,
>
> Now Tempest is testing microversions only for Nova and contains some
> testing framework for re-using for another projects.
> On this framework, we can implement necessary microversions tests as
> we want and actually many microversions of Nova are not tested by
> Tempest.
> We can see the tested microversion of Nova on
> https://github.com/openstack/tempest/blob/master/doc/
> source/microversion_testing.rst#microversion-tests-implemented-in-tempest
>
> Before implementing microversion testing for Cinder, we will implement
> JSON-Schema validation for API responses for Cinder.
> The validation will be helpful for testing base microversion of Cinder
> API and we will be able to implement the microversion tests based on
> that.
> This implementation is marked as 7th priority in this Pike cycle as
> https://etherpad.openstack.org/p/pike-qa-priorities
>
> In addition, now Cinder V3 API is not tested. So we are going to
> enable v3 tests with some restructure of Tempest in this cycle.
> The detail is described on the part of "Volume API" of
> https://etherpad.openstack.org/p/tempest-api-versions-in-pike
>
> Thanks
> Ken Ohmichi
>
> ---
>
> 2017-03-10 11:37 GMT-08:00 John Griffith <john.griffi...@gmail.com>:
> > Hey Everyone,
> >
> > So along the lines of an earlier thread that went out regarding testing
> of
> > deprecated API's and Tempest etc [1].
> >
> > Now that micro-versions are *the API versioning scheme to rule them all*
> one
> > question I've not been able to find an answer for is what we're going to
> > promise here for support and testing.  My understanding thus far is that
> the
> > "community" approach here is "nothing is ever deprecated, and everything
> is
> > supported forever".
> >
> > That's sort of a tall order IMO, but ok.  I've already had some questions
> > from folks about implementing an explicit Tempest test for every
> > micro-versioned implementation of an API call also.  My response has been
> > "nahh, just always test latest available".  This kinda means that we're
> not
> > testing/supporting the previous versions as promised though.
> >
> > Anyway; I'm certain that between Nova and the API-WG this has come up
> and is
> > probably addressed, just wondering if somebody can point me to some
> > documentation or policies in this respect.
> >
> > Thanks,
> > John
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Thanks for the pointer to the doc, Ken; that helps a lot.  I do have
concerns about supportability, test sprawl and life-cycle with this, but
maybe that's unwarranted.
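
For anyone following along, the mechanics of pinning a request (and thus a
test) to a particular Cinder microversion are just a header on the call,
something like this (a rough sketch; the token, endpoint and the 3.27
version are arbitrary placeholders):

    import requests

    # Token and endpoint would come from your normal keystone auth flow.
    headers = {'X-Auth-Token': '<token>',
               'OpenStack-API-Version': 'volume 3.27'}  # pin the microversion
    resp = requests.get('http://controller:8776/v3/<project-id>/volumes/detail',
                        headers=headers)
    # The service echoes back the version it actually honored.
    print(resp.status_code, resp.headers.get('OpenStack-API-Version'))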
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][qa][tc][nova][cinder] Testing of a microversioned world

2017-03-10 Thread John Griffith
On Fri, Mar 10, 2017 at 12:37 PM, John Griffith <john.griffi...@gmail.com>
wrote:

> Hey Everyone,
>
> So along the lines of an earlier thread that went out regarding testing of
> deprecated API's and Tempest etc [1].
>
> Now that micro-versions are *the API versioning scheme to rule them all*
> one question I've not been able to find an answer for is what we're going
> to promise here for support and testing.  My understanding thus far is that
> the "community" approach here is "nothing is ever deprecated, and
> everything is supported forever".
>
> That's sort of a tall order IMO, but ok.  I've already had some questions
> from folks about implementing an explicit Tempest test for every
> micro-versioned implementation of an API call also.  My response has been
> "nahh, just always test latest available".  This kinda means that we're not
> testing/supporting the previous versions as promised though.
>
> Anyway; I'm certain that between Nova and the API-WG this has come up and
> is probably addressed, just wondering if somebody can point me to some
> documentation or policies in this respect.
>
> Thanks,
> John
>
Ooops:
  [1]:
http://lists.openstack.org/pipermail/openstack-dev/2017-March/113727.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api][qa][tc][nova][cinder] Testing of a microversioned world

2017-03-10 Thread John Griffith
Hey Everyone,

So along the lines of an earlier thread that went out regarding testing of
deprecated API's and Tempest etc [1].

Now that micro-versions are *the API versioning scheme to rule them all*
one question I've not been able to find an answer for is what we're going
to promise here for support and testing.  My understanding thus far is that
the "community" approach here is "nothing is ever deprecated, and
everything is supported forever".

That's sort of a tall order IMO, but ok.  I've already had some questions
from folks about implementing an explicit Tempest test for every
micro-versioned implementation of an API call also.  My response has been
"nahh, just always test latest available".  This kinda means that we're not
testing/supporting the previous versions as promised though.

Anyway; I'm certain that between Nova and the API-WG this has come up and
is probably addressed, just wondering if somebody can point me to some
documentation or policies in this respect.

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][qa][tc][glance][keystone][cinder] Testing of deprecated API versions

2017-03-10 Thread John Griffith
On Fri, Mar 10, 2017 at 9:51 AM, Sean McGinnis 
wrote:

> > >
> > As far as I can tell:
> > - Cinder v1 if I'm not mistaken has been deprecated in Juno, so it's
> > deprecated in all supported releases.
> > - Glance v1 has been deprecated in Newton, so it's deprecated in all
> > supported releases
> > - Keystone v2 has been deprecated in Mitaka, so testing *must* stay in
> > Tempest until Mitaka EOL, which is in a month from now
> >
> > We should stop testing these three api versions in the common gate
> > including stable branches now (except for keystone v2 on stable/mitaka
> > which can run for one more month).
> >
> > Are cinder / glance / keystone willing to take over the API tests and run
> > them in their own gate until removal of the API version?
> >
> > Doug
>
> With Cinder's v1 API being deprecated for quite awhile now, I would
> actually prefer to just remove all tempest tests and drop the API
> completely. There was some concern about removal a few cycles back since
> there was concern (rightly so) that a lot of deployments and a lot of
> users were still using it.
>

+1

>
> I think it has now been marked as deprecated long enough that if anyone
> is still using it, it's just out of obstinance. We've removed the v1
> api-ref documentation, and the default in the client has been v2 for
> awhile.
>
> Unless there's a strong objection, and a valid argument to support it,
> I really would just like to drop v1 from Cinder and not waste any more
> cycles on redoing tempest tests and reconfiguring jobs to support
> something we have stated for over two years that we were no longer going
> to support. Juno went EOL in December of 2015. I really hope it's safe
> now to remove.
>
> Sean
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2017-02-26 Thread John Griffith
On Thu, Feb 23, 2017 at 7:18 PM, Zhenyu Zheng 
wrote:

> Matt,
>
> Thanks for the information, I will check that; But still I think the user
> demand here is to use local disk from
> compute node as block device, as the data can be remained if the old vm
> got deleted, and we can start a
> new one with the data and having the performance they wanted.
>
> Kevin Zheng
>
> On Fri, Feb 24, 2017 at 4:06 AM, Matt Riedemann <
> mrie...@linux.vnet.ibm.com> wrote:
>
>> On 9/26/2016 9:21 PM, Zhenyu Zheng wrote:
>>
>>> Hi,
>>>
>>> Thanks for the reply, actually approach one is not we are looking for,
>>> our demands is to attach the real physical volume from compute node to
>>> VMs,
>>> by this way we can achieve the performance we need for usecases such as
>>> big data, this can be done by cinder using BlockDeviceDriver, it is quite
>>> different from the approach one you mentioned. The only problem now is
>>> that we cannot practially ensure the compute resource located on the same
>>> host with the volume, as Matt mentioned above, currently we have to
>>> arrange 1:1 AZ in Cinder and Nova to do this and it is not practical in
>>> commercial
>>> deployments.
>>>
>>> Thanks.
>>>
>>>
>> Kevin,
>>
>> Is the issue because you can't use ephemeral local disks (it must be a
>> persistent boot from volume)?
>>
>> Have you looked at using the LVM image backend for local storage in Nova?
>> I thought cfriesen said once that windriver is doing high performance
>> config using local LVM in nova.
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Hi Kevin,

A few things that may be related to your request:

>>"our demands is to attach the real physical volume from compute node to
VMs"

Cinder had this capability for a while in the form of the block-driver, but
it's been removed due to lack of functionality, testing and, really, interest
all the way around.  We also took a look at performance data, and the fact
was that the difference in performance between iSCSI over a dedicated 10 Gig
network and a local block device was minimal.  The block-driver model breaks
just about every Cinder feature at this point, so rather than carry it around
as a special one-off case it's been removed; if you really need local disk on
the compute node, you need to just use the ephemeral LVM driver in Nova.
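
If that local-disk path is what you're after, the Nova side of it is just the
libvirt image backend settings, along these lines (illustrative snippet; the
volume group name is made up and has to exist on the compute node already):

    [libvirt]
    images_type = lvm
    images_volume_group = nova-local-vg   # pre-created LVM VG on the compute node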

>>"compute node as block device, as the data can be remained if the old vm
got deleted"

Yes, I understand what you are asking for here, and that's similar to how
the old block-driver worked; like I said though that's been deprecated and
removed.  I would be curious to get more info about the *requirement* here
in terms of performance.  We had a number of people look at the performance
characteristics between a Cinder LVM Volume behind an LIO Tgt and found
there to be minimal differences in performance between the raw disk device
without iSCSI.

There are ways to bring the block-driver up to feature parity etc, but
frankly most have decided it's not worth it as there's very little real
benefit to using it over the existing and well supported drivers.

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Should nova just default to use cinder v3 in Pike?

2017-02-11 Thread John Griffith
On Sat, Feb 11, 2017 at 11:29 AM, Matt Riedemann <mriede...@gmail.com>
wrote:

> On 2/11/2017 11:47 AM, John Griffith wrote:
>
>>
>> It seems like just moving Nova to V3 in Pike would alleviate quite a few
>> snarls here.  The fact that V3.0 is just pointing back to V2 for Cinder
>> calls anyway I'm uncertain there's a huge downside to this.  Nova +
>> Cinder V2 coverage is only an entry point issue IIUC (V3.0 points to
>> Cinder V2 API server calls anyway based on what I was looking at).  So
>> it's more an issue of cinderclient and what it's set up at no?
>> Honestly, this is another one of those things we probably need to unwind
>> together at PTG.  The V3 Cinder thing has proven to be quite thorny.
>>
>>
>>
> Scott's nova patch to support cinder v3 is dependent on a
> python-cinderclient change for version discovery for min/max versions in
> the v3 API. Once that's released we just bump the minimum required
> cinderclient in global-requirements for pike and we should be good to go
> there.
>
> But overall yeah I like the idea of just defaulting to cinderv3 in Pike,
> as long as we can still get cinderv2 coverage in CI in master, which I
> think we can do via grenade jobs.


+1

>
>
> --
>
> Thanks,
>
> Matt Riedemann
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Should nova just default to use cinder v3 in Pike?

2017-02-11 Thread John Griffith
On Fri, Feb 10, 2017 at 1:33 PM, Matt Riedemann  wrote:

> While talking about [1] yesterday and trying to figure out how to
> configure nova to use cinder v3 in the CI jobs in Pike, things got a bit
> messy from the CI job configuration perspective.
>
> My initial plan was to make the nova-next (formerly "placement" job [2])
> use cinder v3 but that breaks down a bit when that job runs on
> stable/newton where nova doesn't support cinder v3.
>
> So when the cat woke me up at 3am I couldn't stop thinking that we should
> just default "[cinder]/catalog_info" in nova.conf to cinderv3 in Pike. Then
> CI on master will be running nova + cinder v3 (which should be backward
> compatible with cinder v2). That way I don't have to mess with marking a
> single CI job in master as using cinder v3 when by default they all will.
>
> We'll still want some nova + cinder v2 coverage and I initially though
> grenade would provide that, but I don't think it will since we don't
> explicitly write out the 'catalog_info' value in nova.conf during a
> devstack run, but we could do that in stable/ocata devstack and then it
> would persist through an upgrade from Ocata to Pike. There are other ways
> to get that coverage too, that's just my first thought.
>
> Anyway, I just remembered this and it was middle-of-the-night thinking, so
> I'm looking to see if this makes sense or what is wrong with it.
>
> [1] https://review.openstack.org/#/c/420201/
> [2] https://review.openstack.org/#/c/431704/
>
> --
>
> Thanks,
>
> Matt Riedemann
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

It seems like just moving Nova to V3 in Pike would alleviate quite a few
snarls here.  Given that V3.0 is just pointing back to V2 for Cinder calls
anyway, I'm uncertain there's a huge downside to this.  Nova + Cinder V2
coverage is only an entry point issue IIUC (V3.0 points to Cinder V2 API
server calls anyway, based on what I was looking at).  So it's more an issue
of cinderclient and which version it's set up for, no?  Honestly, this is
another one of those things we probably need to unwind together at PTG.  The
V3 Cinder thing has proven to be quite thorny.
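
For reference, the knob in question is just this in nova.conf (my
understanding of the before/after values; double-check the service name
against your keystone catalog):

    [cinder]
    # current default: catalog_info = volumev2:cinderv2:publicURL
    catalog_info = volumev3:cinderv3:publicURL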
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mitaka - Unable to attach volume to VM

2017-02-09 Thread John Griffith
On Thu, Feb 9, 2017 at 9:04 PM, Sam Huracan 
wrote:

> Hi Sean,
>
> I've checked 'openstack volume list', the state of all volume is avaiable,
> and I can download image to volume.
> I also use Ceph as other Cinder volume backend, and issue is similarly.
> Same log.
>
> Port 3260 have opened on iptables.
>
> When I nova --debug volume-attach, I see nova contact to cinder for
> volume, but nova log still returns "VolumeNotFound", can't understand.
> http://paste.openstack.org/show/598332/
>
> cinder-scheduler.log and cinder-volume.log do not have any error and
> attaching log.
>
>
> 2017-02-10 10:16 GMT+07:00 Sean McGinnis :
>
>> On Fri, Feb 10, 2017 at 02:18:15AM +0700, Sam Huracan wrote:
>> > Hi guys,
>> >
>> > I meet this issue when deploying Mitaka.
>> > When I attach LVM volume to VM, it keeps state "Attaching". I am also
>> > unable to boot VM from volume.
>> >
>> > This is /var/log/nova/nova-compute.log in Compute node when I attach
>> volume.
>> > http://paste.openstack.org/show/598282/
>> >
>> > Mitaka version: http://prntscr.com/e6ns0u
>> >
>> > Can you help me solve this issue?
>> >
>> > Thanks a lot
>>
>>
>> Hi Sam,
>>
>> Any errors in the Cinder logs? Or just the ones from Nova not finding the
>> volume?
>>
>> Sean
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Long shot, but any chance your nova config is pointing to a different
Cinder endpoint?  The error in your logs states the volume DNE from the
cinderclient get call:

VolumeNotFound: Volume 5b69704f-14b4-41bb-af51-23d0aa55f148 could
not be found.
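
The settings worth double-checking are in the [cinder] section of nova.conf
on the compute node, something like the following (values are placeholders;
they need to match what's actually in your keystone catalog):

    [cinder]
    catalog_info = volumev2:cinderv2:publicURL   # must match the catalog entry
    os_region_name = RegionOne                   # wrong region = nova asks a different cinder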
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [elections][tc]Thoughts on the TC election process

2016-10-05 Thread John Griffith
On Wed, Oct 5, 2016 at 9:08 AM, Sean Dague  wrote:

> On 10/03/2016 12:46 PM, Edward Leafe wrote:
> 
> > We are fortunate in that all of the candidates are exceptionally
> well-qualified, and those elected have put in excellent service while on
> the TC. But one thing I'm afraid of is that we tend to get into a situation
> where groupthink [0] is very likely. There are many excellent candidates
> running in every election, but it is rare for someone who hasn't been a PTL
> of a large project, and thus very visible, has been selected. Is this
> really the best approach?
> >
> > I wrote a blog post about implicit bias [1], and in that post used the
> example of blind auditions for musical orchestras radically changing the
> selection results. Before the introduction of blind auditions, men
> overwhelmingly comprised orchestras, but once the people judging the
> auditions had no clue as to whether the musician was male or female, women
> began to be selected much more in proportion to their numbers in the
> audition pools. So I'd like to propose something for the next election:
> have candidates self-nominate as in the past, but instead of writing a big
> candidacy letter, just state their interest in serving. After the
> nominations close, the election officials will assign each candidate a
> non-identifying label, such as a random number, and those officials will be
> the only ones who know which candidate is associated with which number. The
> nomination period can be much, much shorter, and then followed by a week of
> campaigning (the part that's really missing in the current process).
> Candidates will post their thoughts and positions, and respond to questions
> from people, and this is how the voters will know who best represents what
> they want to see in their TC.
>
> The comparison to orchestra auditions was brought up a couple of cycles
> ago as well. But I think it's a bad comparison.
>
> In the listed example the job being asked of people was performing their
> instrument, and it turns out that lots of things not having to do with
> performing their instrument were biasing the results. It was possible to
> remove the irrelevant parts.
>
> What is the job being asked of a TC member? To put the best interests of
> OpenStack at heart. To be effective in working with a diverse set of
> folks in our community to get things done. To find areas of friction and
> remove them. To help set an overall direction for the project that the
> community accepts and moves forward with.
>
> Writing a good candidacy email isn't really a good representation of
> those abilities. It's the measure of writing a good candidacy email, in
> english.
>
> I hope that when voters vote in the election they are taking the
> reputations of the individuals into account. That they look at the work
> they did across all of OpenStack, the hundreds / thousands of individual
> actions in the community that the person made. How they got to consensus
> on items. What efforts they were able to get folks to rally around and
> move forward. Where they get stuck, and how they dig out of being stuck.
> When they ask for help. When they admit they are out of their element.
> How they help new folks in our community. How they work with long timers.
>
> That aggregate reputation is much more indicative of their
> successfulness as a member of the TC to help OpenStack than the
> candidate email. It's easy to dismiss it as a popularity contest, but I
> don't think it is. This is about evaluating the plausible promise that
> individuals put forward. Not just the ideas they have, but how likely
> they are to be able to bring them to fruition.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Well said, Sean!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [elections][tc]Thoughts on the TC election process

2016-10-04 Thread John Griffith
On Tue, Oct 4, 2016 at 1:31 PM, Ed Leafe  wrote:

> On Oct 4, 2016, at 11:54 AM, Thierry Carrez  wrote:
>
> >> In French, "prétendre" has a connotation of "profess" or simply
> >> "say", which is very different from the more negative connotation
> >> of "pretend" in English where common use implies some false intent.
> >> Knowing Thierry and his past contributions well enough to trust his
> >> good intentions, I was able to look past the awkward phrasing to
> >> ask what message he was trying to convey.
> >
> > Yeah, sorry for the poor choice of words, I didn't mean that candidates
> > are trying to deceive anyone. I only meant that in my experience, past
> > members of the TC were overly optimistic in their campaign emails about
> > how much they thought they would be able to achieve. So looking at the
> > past track record is important.
>
> A great example of knowing the person. It sounded harsh to me when I read
> it, too, but knowing Thierry so well, I understood the intent. Had that
> been an anonymous comment, I wouldn’t have made that mental adjustment.
>
> So maybe anonymous isn’t the way to go. But we really do need to do
> several things:
>
> 1) Allow time between the nominations and the voting. Half of the
> candidates don’t announce until the last day or two, and that doesn’t leave
> very much time to get to know them.
>
> 2) I like the idea of identifying the issues that the people of OpenStack
> care about, and having every candidate give their answers. One thing I
> worry about, though, is the time zone difference. Candidate A publishes
> their answers early, and gets a lot of reaction. Candidate Z, in a later
> timezone, publishes their answers after the discussions have played out
> already. Let’s release the answers all at once.
>
> 3) We need to find a way to at least *reduce* the effect of incumbency.
> Not that I have any particular incumbent in mind, of course, but any group
> of people gets set in their ways unless the membership changes regularly.
>
> And let me reiterate: I’m a candidate for the TC, and not an incumbent. So
> of course this seems a bit self-serving, especially to an outsider who
> might not know me very well. But I’m sure that Thierry and Doug and others,
> who have known me for many years, understand my intent: to keep improving
> OpenStack.
>
>
> -- Ed Leafe
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
​Just to be "that person", that choice of words in no way seemed harsh or
offensive to me at all.  It's a fact, people can write an email professing
anything they want.  They can even do so without even really having an
understanding of what they may or may not actually be able to do within the
confines of the community and the boundaries that we have to work in out of
necessity and reality.  So nobody wants to offend anyone or say anything
construed as negative or pessimistic... ok; but personally I think that if
your vote is based solely on a candidate's email (stump speech)​ with no
background on who they actually are, what level of involvement, commitment
or capability they possess I think you have a recipe for disaster.  I don't
think that people realize that being a TC member is actually a lot of hard
and not so fun work.

Additionally, along those lines, I personally would not be involved in or
participate in an election where I was presented with just a candidate's
proposal and responses to email, all shielded under anonymity.

Finally, given that Thierry is perhaps the most pragmatic, level-headed and
welcoming person I've encountered in my five years of participating in
OpenStack, I fear I must appear to be a complete ogre if his statements
were interpreted as harsh or distrusting of people.

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Candidacy

2016-10-03 Thread John Griffith
On Mon, Oct 3, 2016 at 11:33 AM, Clay Gerrard <clay.gerr...@gmail.com>
wrote:

>
>
> On Thu, Sep 29, 2016 at 8:34 PM, John Griffith <john.griffi...@gmail.com>
> wrote:
>
>>
>>
>> I think what's more important and critical is the future and where
>> OpenStack is going over the course of the next few years.
>>
>
> I think this is a really important topic right now!  Do you see any
> dangerous road blocks coming up!?
>
I don't know if I'd call them "road blocks"; what I do worry about, though,
is becoming irrelevant (ok, maybe that's harsh), but I do think stagnation
is a real possibility.  It's a great big world out there and there are other Open
Source efforts out there moving really fast and doing some pretty cool
stuff.  More importantly they're creating things that are ACTUALLY
CONSUMABLE by mere mortals!  Some are solving real problems in a novel way,
and they're doing it in a way that people can actually use them as a
foundation and build value on top of them.  That's what I always envisioned
happening with OpenStack.  The fact is though if you're not offering
something unique and providing the ability for people to easily solve
problems, then they'll move on to the next solution.

To be clear, I'm not discrediting OpenStack, the community or any of the
folks that have led efforts in the past.  I am saying that times are
changing, that we have to grow up at some point, and that things either
evolve or die.

>
>
>>
>> Do we want our most popular topic at the key-notes to continue being
>> customers telling their story of "how hard" it was to do OpenStack?
>>
>
> No ;)
>
>
>>
>> It's my belief that the TC has a great opportunity (with the right
>> people) to take input from the "outside world" and drive a meaningful and
>> innovative future for OpenStack.  Maybe try and dampen the echo-chamber a
>> bit, see if we can identify some real problems that we can help real
>> customers solve.
>>
>
> I think this is where we *all* should be focused - do you think the TC can
> offer specific support to the projects here - or is more about just
> removing other roadblocks and keeping the community on target?
>
The problem currently, in my view, is not whether the TC offers this support
or not, but rather that I don't think it's currently intended or viewed as
fulfilling this obligation.  I think it should be, though; the trick is that
it's not something that just one or two newly elected members of the TC can
do.  This is something that really needs significant mind-share, and of
course it has to find a way to balance the needs and wishes of the community
at the same time.

Really above all, I've had this feeling the past few years that the
OpenStack development community served itself, that being the vendors that
sponsor the developers, or even in some cases just groups of developers
inside the community itself.  If that's the way things go, then that's
fine, but I'd really like to see a true shift towards a focus on solving
new problems for actual users.


>
>
>> I'd like to see us embracing new technologies and ways of doing things.
>> I'd love to have a process where we don't care so much about the check
>> boxes of what oslo libs you do or don't use in a project, or how well you
>> follow the hacking rules; but instead does your project actually work?  Can
>> it actually be deployed by somebody outside of the OpenStack community or
>> with minimal OpenStack experience?
>>
>> It's my belief that Projects should offer real value as stand-alone
>> services just as well as they do working with other OpenStack services.
>>
>
> Very controversial (surprisingly?)!  Why do you think this is important?
> Do you think this is in conflict with the goal of OpenStack as one
> community?
>
Which soapbox should I get on top of here :)
* New technologies

Same thing I mentioned before, evolve or die.  If there are better tools
that are easier, better, more advanced that can be used to solve a problem
then by all means they should be used.  Frankly I don't care so much if a
project uses oslo's version of XYZ vs implementing their own thing so long
as what they implement works and people can consume it.  I'm not saying
anything negative about oslo or any libraries that exist under that
project, just saying that maybe the lowest common denominator, one size
fits all lib isn't always the best or only answer.  If folks feel strongly
and have what for them is a better implementation, good for them.  At the
same time I'd hope we're all professionals and don't build a new wheel for
everything just because it's the cool thing to do.

* Real value as stand-alone services

This one is really interesting to me, and really until recently there's
only been one OpenStack project (I'm looking 

Re: [openstack-dev] [tc] open question to the candidates

2016-10-03 Thread John Griffith
On Mon, Oct 3, 2016 at 9:30 AM, gordon chung  wrote:

> hi,
>
> as there are many candidates this TC election, i figured i'd ask a
> question to better understand the candidates from the usual sales pitch
> in self-nominations. hopefully, this will give some insights into the
> candidates for those who haven't voted yet. obviously, the following is
> completely optional. :)
>
> i initially asked this to John Dickinson[1] in his self-nomination. i'd
> like to open this up to everyone. the (re-worded) question is:
>
> the TC has historically been a reactive council that lets others ask for
> change and acts as the final approver. do you believe the TC should be a
> proactive committee that initiates change and if yes, to what scope?
> more generally, what are some specific issues you'd like the TC address
> in the coming year?
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2016-
> September/104821.html
>
> thanks,
> --
> gord
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
​Hey Gordon,

It's my opinion that OpenStack and the community have grown and changed so
much over the years that now it might be beneficial for the committee to be
more "proactive", as you put it, and perhaps drive things.  There are a few
very big problems with that statement though; the first of course is
"proactive about what"?  If it's things like driving projects to actually
do what they advertise, I think that's fine.  Picking some project-wide
effort (I'm looking at you, Python 3) would be awesome.  On the other hand,
if it's things like trying to be the only source of innovation or new
ideas... well, that would be pretty awful in my opinion.

The other big problem (and probably the most significant) is that what some
folks are talking about here is not really what the TC was ever designed or
intended for.  The TC in my opinion has over the last several years done
EXACTLY what it was designed, intended and meant to do.  I think we've been
really fortunate to have some really great people serve over the years and
do the work.  So regardless of what anyone says they "will or won't do" the
fact is, there needs to be some consensus and some work to really figure
out how (or even if) we want the TC to evolve, and if so, then evolve how.

So the first thing for the TC to do during the next release, in my opinion,
is to try and figure out how they can best serve the community.  How can they
add the most value?  It may turn out that the way to do those things is to
keep doing things exactly as they have been, or maybe there are some things
we can drill into and try and take more of a driving or pro-active approach
to.

Personally, as I said in my nomination email, I would like to push to have
some oversight on making sure that the various services in OpenStack
actually "work".  That means that I can install them based on the
documentation they provide, get them up and running and use them to do
something useful and they should be relatively stable.

Anyway, hope that helps a bit.

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC Candidacy

2016-09-29 Thread John Griffith
Hey Everyone,

Some of you may know me, I've been around the OpenStack community for a
while (longer than some, shorter than others).  I'm not an "uber hipster",
or a "super cool bro-grammer", or even a "mega hacker" trying to write the
most clever code possible to impress everyone.

I am however someone that has been contributing to OpenStack for about five
years now.  Not only via code contributions, but services, support and
evangelism.  I started the Cinder project with some great help from a few
other folks and did the best I could with that while forging ahead into
unknown territory.  I use OpenStack on a daily basis in a number of private
clouds, have helped several average sized companies deploy and maintain
OpenStack clouds and have spent countless hours helping people get their
heads wrapped around the whole OpenStack Platform thing and how it might be
able solve some of their problems.

I'm not going to try and claim that I have all the answers related to
OpenStack and the TC, in fact, I'm not even going to pretend to know what
all the questions are.  I'm not going to tell you what a great person I am,
or all of my "great achievements" over the years.  As we all know, people
can write up whatever wonderful things.  People can say or write up just
about anything and promise the world without really having any idea what
they're talking about.

What I will say however, is that I believe OpenStack has changed
dramatically over the last few years.  Some things for the better, some
things... not so much.  While I think the past is extremely important for
the experience it gives, I think what's more important and critical is the
future and where OpenStack is going over the course of the next few years.

OpenStack is a bit ambiguous for a lot of people that I talk to (both
inside and outside of the community).  Even more unclear: what do we want
to be in another two years, three or even five?
continue being a platform that kinda looks like AWS or a "free" version of
VMware?  Do we want our most popular topic at the key-notes to continue
being customers telling their story of "how hard" it was to do OpenStack?

I think we're at an important cross-roads with respect to the future of
OpenStack.  It's my belief that the TC has a great opportunity (with the
right people) to take input from the "outside world" and drive a meaningful
and innovative future for OpenStack.  Maybe try and dampen the echo-chamber
a bit, see if we can identify some real problems that we can help real
customers solve.

I'd like to see us embracing new technologies and ways of doing things.
I'd love to have a process where we don't care so much about the check
boxes of what oslo libs you do or don't use in a project, or how well you
follow the hacking rules; but instead does your project actually work?  Can
it actually be deployed by somebody outside of the OpenStack community or
with minimal OpenStack experience?

It's my belief that Projects should offer real value as stand-alone
services just as well as they do working with other OpenStack services.  I
should be able to use them just as well outside the ecosystem as inside
of it.  I believe the TC should consider driving issues like these and
help guide the future of OpenStack.

If you like my philosophy (really that's all it is), or agree with it; I'd
love the opportunity to try and make some of this a reality.  I can't
promise anything, except that I'll try to do what I believe is good for the
community (especially deployers and end-users).

Feel free to ask me about my thoughts on anything specific, I'm happy to
answer any questions that I can as honestly as I can.

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][sahara] LVM vs BDD drivers performance tests results

2016-09-21 Thread John Griffith
On Wed, Sep 21, 2016 at 12:57 AM, Michał Dulko <michal.du...@intel.com>
wrote:

> On 09/20/2016 05:48 PM, John Griffith wrote:
> > On Tue, Sep 20, 2016 at 9:06 AM, Duncan Thomas
> > <duncan.tho...@gmail.com <mailto:duncan.tho...@gmail.com>> wrote:
> >
> > On 20 September 2016 at 16:24, Nikita Konovalov
> > <nkonova...@mirantis.com <mailto:nkonova...@mirantis.com>> wrote:
> >
> > Hi,
> >
> > From Sahara (and Hadoop workload in general) use-case the
> > reason we used BDD was a complete absence of any overhead on
> > compute resources utilization.
> >
> > The results show that the LVM+Local target perform pretty
> > close to BDD in synthetic tests. It's a good sign for LVM. It
> > actually shows that most of the storage virtualization
> > overhead is not caused by LVM partitions and drivers
> > themselves but rather by the iSCSI daemons.
> >
> > So I would still like to have the ability to attach partitions
> > locally bypassing the iSCSI to guarantee 2 things:
> > * Make sure that lio processes do not compete for CPU and RAM
> > with VMs running on the same host.
> > * Make sure that CPU intensive VMs (or whatever else is
> > running nearby) are not blocking the storage.
> >
> >
> > So these are, unless we see the effects via benchmarks, completely
> > meaningless requirements. Ivan's initial benchmarks suggest
> > that LVM+LIO is pretty much close enough to BDD even with iSCSI
> > involved. If you're aware of a case where it isn't, the first
> > thing to do is to provide proof via a reproducible benchmark.
> > Otherwise we are likely to proceed, as John suggests, with the
> > assumption that local target does not provide much benefit.
> >
> > I've a few benchmarks myself that I suspect will find areas where
> > getting rid of iSCSI is benefit, however if you have any then you
> > really need to step up and provide the evidence. Relying on vague
> > claims of overhead is now proven to not be a good idea.
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > <http://openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe>
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
> >
> > ​Honestly we can have both, I'll work up a bp to resurrect the idea of
> > a "smart" scheduling feature that lets you request the volume be on
> > the same node as the compute node and use it directly, and then if
> > it's NOT it will attach a target and use it that way (in other words
> > you run a stripped down c-vol service on each compute node).
>
> Don't we have at least scheduling problem solved [1] already?
>
> [1]
> https://github.com/openstack/cinder/blob/master/cinder/
> scheduler/filters/instance_locality_filter.py


Yes, that is a sizeable chunk of the solution.  The remaining pieces
are how to coordinate with Nova (compute nodes) and figuring out whether we
just use c-vol as is, or come up with some form of a pared-down agent.
Just using c-vol as a start might be the best way to go.
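For anyone who wants to experiment with the piece that already exists, here
is a rough sketch of what wiring up that instance locality filter might look
like (the instance UUID and volume size are just placeholders, and exact
option names can vary by release):

    # cinder.conf -- add the filter to the scheduler's filter list
    [DEFAULT]
    scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,InstanceLocalityFilter

    # then ask for a volume on the same host as an existing instance
    cinder create --hint local_to_instance=<instance-uuid> 10

The remaining (harder) part is what the paragraph above describes: making
sure a c-vol service is actually running on each compute node so the filter
has somewhere local to place the volume.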


>
>
> >
> > Sahara keeps insisting on being a snow-flake with Cinder volumes and
> > the block driver, it's really not necessary.  I think we can
> > compromise just a little both ways, give you standard Cinder semantics
> > for volumes, but allow you direct acccess to them if/when requested,
> > but have those be flexible enough that targets *can* be attached so
> > they meet all of the required functionality and API implementations.
> > This also means that we don't have to continue having a *special*
> > driver in Cinder that frankly only works for one specific use case and
> > deployment.
> >
> > I've pointed to this a number of times but it never seems to
> > resonate... but I never learn so I'll try it once again [1].  Note
> > that was before the name "brick" was hijacked and now means something
> > completely different.
> >
> > [1]: https://wiki.openstack.org/wiki/CinderBrick
> >
> > Thanks,
> > John​
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][sahara] LVM vs BDD drivers performance tests results

2016-09-20 Thread John Griffith
On Tue, Sep 20, 2016 at 9:06 AM, Duncan Thomas 
wrote:

> On 20 September 2016 at 16:24, Nikita Konovalov 
> wrote:
>
>> Hi,
>>
>> From Sahara (and Hadoop workload in general) use-case the reason we used
>> BDD was a complete absence of any overhead on compute resources
>> utilization.
>>
>> The results show that the LVM+Local target perform pretty close to BDD in
>> synthetic tests. It's a good sign for LVM. It actually shows that most of
>> the storage virtualization overhead is not caused by LVM partitions and
>> drivers themselves but rather by the iSCSI daemons.
>>
>> So I would still like to have the ability to attach partitions locally
>> bypassing the iSCSI to guarantee 2 things:
>> * Make sure that lio processes do not compete for CPU and RAM with VMs
>> running on the same host.
>> * Make sure that CPU intensive VMs (or whatever else is running nearby)
>> are not blocking the storage.
>>
>
> So these are, unless we see the effects via benchmarks, completely
> meaningless requirements. Ivan's initial benchmarks suggest that LVM+LIO is
> pretty much close enough to BDD even with iSCSI involved. If you're aware
> of a case where it isn't, the first thing to do is to provide proof via a
> reproducible benchmark. Otherwise we are likely to proceed, as John
> suggests, with the assumption that local target does not provide much
> benefit.
>
> I've a few benchmarks myself that I suspect will find areas where getting
> rid of iSCSI is benefit, however if you have any then you really need to
> step up and provide the evidence. Relying on vague claims of overhead is
> now proven to not be a good idea.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Honestly we can have both.  I'll work up a bp to resurrect the idea of a
"smart" scheduling feature that lets you request the volume be on the same
node as the compute node and use it directly, and then if it's NOT it will
attach a target and use it that way (in other words you run a stripped-down
c-vol service on each compute node).

Sahara keeps insisting on being a snowflake with Cinder volumes and the
block driver, and it's really not necessary.  I think we can compromise a
little both ways: give you standard Cinder semantics for volumes, but allow
you direct access to them if/when requested, and have those be flexible
enough that targets *can* be attached so they meet all of the required
functionality and API implementations.  This also means that we don't have
to continue having a *special* driver in Cinder that frankly only works for
one specific use case and deployment.

I've pointed to this a number of times but it never seems to resonate...
but I never learn so I'll try it once again [1].  Note that was before the
name "brick" was hijacked and now means something completely different.

[1]: https://wiki.openstack.org/wiki/CinderBrick

Thanks,
John​
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][sahara] LVM vs BDD drivers performance tests results

2016-09-19 Thread John Griffith
On Mon, Sep 19, 2016 at 2:54 PM, Duncan Thomas <duncan.tho...@gmail.com>
wrote:

> I think there's some mileage in some further work on adding local LVM,
> since things like striping/mirroring for performace can be done. We can
> prototype it and get the numbers before even thinking about merging though
> - as additions to an already fully featured driver. these seem more
> worthwhile a way forward than limping on with the bdd driver.
>
I think that's a different discussion; a good one, but a different one.
I'd also like to point out that there's been a mirroring option in the
existing LVM driver for years (Vish added it a long time ago), but very
few people have shown any interest in it.

Again, rather than change the entire architecture of things, I'd rather see
us do some things around multi-pathing and exploiting the mirroring
that we already have.  IMHO we either flesh out and refine the
features/options we have or start removing them; but I hate to continue
piling little corner-case configs into the mix that aren't tested and
typically don't implement the entire API.
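For reference, turning on that existing mirroring support is roughly a
one-line change in the LVM backend section of cinder.conf; a minimal sketch
(the backend section name is made up, and the VG needs enough free PVs to
hold the mirror copies):

    [lvm-1]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    # create each LV with one mirror copy instead of a plain linear LV
    lvm_mirrors = 1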



>
> Moving to change our default target to LIO seems worthwhile - I'd suggest
> being cautious with deprecation rather than aggressive though - aiming to
> change the default in 'O' then planning the rest based on how that goes.
>
> On 19 September 2016 at 21:54, John Griffith <john.griffi...@gmail.com>
> wrote:
>
>>
>>
>> On Mon, Sep 19, 2016 at 12:01 PM, Ivan Kolodyazhny <e...@e0ne.info>
>> wrote:
>>
>>> + [sahara] because they are primary consumer of the BDD.
>>>
>>> John,
>>> Thanks for the answer. My comments are inline.
>>>
>>> Regards,
>>> Ivan Kolodyazhny,
>>> http://blog.e0ne.info/
>>>
>>> On Mon, Sep 19, 2016 at 4:41 PM, John Griffith <john.griffi...@gmail.com
>>> > wrote:
>>>
>>>>
>>>>
>>>> On Mon, Sep 19, 2016 at 4:43 AM, Ivan Kolodyazhny <e...@e0ne.info>
>>>> wrote:
>>>>
>>>>> Hi team,
>>>>>
>>>>> We did some performance tests [1] for LVM and BDD drivers. All tests
>>>>> were executed on real hardware with OpenStack Mitaka release.
>>>>> Unfortunately, we didn't have enough time to execute all tests and compare
>>>>> results. We used Sahara/Hadoop cluster with TestDFSIO and others
>>>>> tests.
>>>>>
>>>>> All tests were executed on the same hardware and OpenStack release.
>>>>> Only difference were in cinder.conf to enable needed backend and/or target
>>>>> driver.
>>>>>
>>>>> Tests were executed on following configurations:
>>>>>
>>>>>- LVM +TGT target
>>>>>- LVM+LocalTarget: PoC based on [2] spec
>>>>>- LVM+LIO
>>>>>- Block Device Driver.
>>>>>
>>>>>
>>>>> Feel free to ask question if any about our testing infrastructure,
>>>>> environment, etc.
>>>>>
>>>>>
>>>>> [1] https://docs.google.com/spreadsheets/d/1qS_ClylqdbtbrVSvwbbD
>>>>> pdWNf2lZPR_ndtW6n54GJX0/edit?usp=sharing
>>>>> [2] https://review.openstack.org/#/c/247880/
>>>>>
>>>>> Regards,
>>>>> Ivan Kolodyazhny,
>>>>> http://blog.e0ne.info/
>>>>>
>>>>> 
>>>>> __
>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>> Unsubscribe: openstack-dev-requ...@lists.op
>>>>> enstack.org?subject:unsubscribe
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>
>>>>> ​Thanks Ivan, so I'd like to propose we (the Cinder team) discuss a
>>>> few things (again):
>>>>
>>>> 1. Deprecate the BDD driver
>>>>  Based on the data here LVM+LIO the delta in performance ​(with the
>>>> exception of the Terravalidate run against 3TB) doesn't seem significant
>>>> enough to warrant maintaining an additional driver that has only a subset
>>>> of features implemented.  It would be good to understand why that
>>>> particular test has such a significant performance gap.
>>>>
>>> What about Local Target? Does it make sense to implement it instead BDD?
>>>
>> ​Maybe I'

Re: [openstack-dev] [cinder][sahara] LVM vs BDD drivers performance tests results

2016-09-19 Thread John Griffith
On Mon, Sep 19, 2016 at 12:01 PM, Ivan Kolodyazhny <e...@e0ne.info> wrote:

> + [sahara] because they are primary consumer of the BDD.
>
> John,
> Thanks for the answer. My comments are inline.
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
> On Mon, Sep 19, 2016 at 4:41 PM, John Griffith <john.griffi...@gmail.com>
> wrote:
>
>>
>>
>> On Mon, Sep 19, 2016 at 4:43 AM, Ivan Kolodyazhny <e...@e0ne.info> wrote:
>>
>>> Hi team,
>>>
>>> We did some performance tests [1] for LVM and BDD drivers. All tests
>>> were executed on real hardware with OpenStack Mitaka release.
>>> Unfortunately, we didn't have enough time to execute all tests and compare
>>> results. We used Sahara/Hadoop cluster with TestDFSIO and others tests.
>>>
>>> All tests were executed on the same hardware and OpenStack release. Only
>>> difference were in cinder.conf to enable needed backend and/or target
>>> driver.
>>>
>>> Tests were executed on following configurations:
>>>
>>>- LVM +TGT target
>>>- LVM+LocalTarget: PoC based on [2] spec
>>>- LVM+LIO
>>>- Block Device Driver.
>>>
>>>
>>> Feel free to ask question if any about our testing infrastructure,
>>> environment, etc.
>>>
>>>
>>> [1] https://docs.google.com/spreadsheets/d/1qS_ClylqdbtbrVSvwbbD
>>> pdWNf2lZPR_ndtW6n54GJX0/edit?usp=sharing
>>> [2] https://review.openstack.org/#/c/247880/
>>>
>>> Regards,
>>> Ivan Kolodyazhny,
>>> http://blog.e0ne.info/
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> ​Thanks Ivan, so I'd like to propose we (the Cinder team) discuss a few
>> things (again):
>>
>> 1. Deprecate the BDD driver
>>  Based on the data here LVM+LIO the delta in performance ​(with the
>> exception of the Terravalidate run against 3TB) doesn't seem significant
>> enough to warrant maintaining an additional driver that has only a subset
>> of features implemented.  It would be good to understand why that
>> particular test has such a significant performance gap.
>>
> What about Local Target? Does it make sense to implement it instead BDD?
>
Maybe I'm missing something; what would the advantage be?  LVM+LIO and
LVM+LOCAL-TARGET seem pretty close.  In the interest of simplicity and
maintenance, I'm thinking it might be worth considering just using
LVM+LIO across the board.
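To be concrete, moving an LVM backend from tgt to LIO is already just a
config change; a minimal sketch, assuming a Mitaka-era cinder.conf and a
made-up backend section name:

    [lvm-1]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    # use the in-kernel LIO target instead of tgtd
    iscsi_helper = lioadm

So "LVM+LIO across the board" is mostly a matter of flipping that default
and deciding how to handle existing deployments.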


>
>> 2. Consider getting buy off to move the default implementation to use the
>> LIO driver and consider deprecating the TGT driver
>>
> +1. Let's bring this topic for the next weekly meeting.
>
>
>
>>
>> I realize this probably isn't a sufficient enough data set to make those
>> two decisions but I think it's at least enough to have a more informed
>> discussion this time.
>>
>> Thanks,
>> John​
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] FFE request for RBD replication

2016-09-10 Thread John Griffith
Given that the patch is up and well on its way, I don't know why this would
be a problem.  FWIW ya get my +1
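For anyone who wants to kick the tires while it's in review, my
understanding is that enabling it is just a matter of adding a
replication_device entry to the RBD backend, roughly like the sketch below
(the section name, cluster/user names and the exact key syntax here are only
illustrative; Gorka's upcoming post will have the real details):

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    # secondary, mirrored cluster to fail over to -- values are examples only
    replication_device = backend_id:secondary, conf:/etc/ceph/secondary.conf, user:cinder

plus a volume type with replication enabled so new volumes get mirrored.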

On Sep 9, 2016 1:34 PM, "Gorka Eguileor"  wrote:

> Hi,
>
> As some of you may know, Jon Bernard (jbernard on IRC) has been working
> on the RBD v2.1 replication implementation [1] for a while, and we would
> like to request a Feature Freeze Exception for that work, as we believe
> it is a good candidate being a low risk change for the integrity of
> the existing functionality in the driver:
>
> - It's non intrusive if it's not enabled (enabled using
>   replication_device configuration option).
> - It doesn't affect existing deployments (disabled by default).
> - Changes are localized to the driver itself (rbd.py) and the driver
>   unit tests file (test_rbd.py).
>
> Jon would have liked to make this request himself, but due to the
> untimely arrival of his newborn baby this is not possible.
>
> For obvious reasons Jon will not be available for a little while, but
> this will not be a problem, as I am well acquainted with the code -and
> I'll be able to reach Jon if necessary- and will be taking care of the
> final steps of the review process of his patch: replying to comments in
> a timely fashion, making changes to the code as required, and answering
> pings on IRC regarding the patch.
>
> Since some people may be interested in testing this functionality during
> the reviewing process -or just for fun- I'll be publishing a post with
> detailed explanation on how to deploy and test this feature as well as
> an automated way to deploy 2 Ceph clusters -linked to be mirroring one
> another-, and one devstack node with everything ready to test the
> functionality (configuration and keys for the Ceph clusters, cinder
> configuration, the latest upstream patch, and a volume type with the
> right configuration).
>
> Please, do not hesitate to ask if there are any questions to or concerns
> related to this request.
>
> Thank you for taking the time to evaluate this request.
>
> Cheers,
> Gorka.
>
> [1]: https://review.openstack.org/333565
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] moving driver to open source

2016-09-09 Thread John Griffith
On Sep 9, 2016 08:26, "Ben Swartzlander" <b...@swartzlander.org> wrote:
>
> On 09/08/2016 04:41 PM, Duncan Thomas wrote:
>>
>> On 8 September 2016 at 20:17, John Griffith <john.griffi...@gmail.com
>> <mailto:john.griffi...@gmail.com>> wrote:
>>
>> On Thu, Sep 8, 2016 at 11:04 AM, Jeremy Stanley <fu...@yuggoth.org
>> <mailto:fu...@yuggoth.org>> wrote:
>>
>>
>>
>>
>> 
>>
>> they should be able to simply install it and its free
dependencies
>> and get a working system that can communicate with "supported"
>> hardware without needing to also download and install separate
>> proprietary tools from the hardware vendor. It's not what we say
>> today, but it's what I personally feel like we *should* be
saying.
>>
>>
>> Your view on what you feel we *should* say, is exactly how I've
>> interpreted our position in previous discussions within the Cinder
>> project.  Perhaps I'm over reaching in my interpretation and that's
>> why this is so hotly debated when I do see it or voice my concerns
>> about it.
>>
>>
>> Despite the fact I've appeared to be slightly disagreeing with John in
>> the IRC discussion on this subject, you've summarised my concern very
>> well. I'm not convinced that these support tools need to be open source,
>> but they absolutely need to be licensed in such a way that distributions
>> can repackage them and freely distribute them. I'm not aware of any
>> tools currently required by cinder where this is not the case, but a few
>> of us are in the process of auditing this to make sure we understand the
>> situation before we clarify our rules.
>
>
> I don't agree with this stance. I think the Cinder (and OpenStack)
communities should be able to dictate what form drivers take, including the
code and the license, but when we start to try to control what drivers are
allowed to talk to (over an API or CLI) then we are starting to
artificially limit what kinds of storage systems can integrate with
OpenStack.
>
> Storage systems take a wide variety of forms, including specialized
hardware systems, clusters of systems, pure software-based systems, open
source, closed source, and even other SDS abstraction layers. I don't see
the point is creating rules that specify what form a storage system has to
take if we are going to allow a driver for it. As long as the driver itself
and all of it's python dependencies are Apache licensed, we can do our job
of reviewing the code and fixing cinder-level bugs. Any other kind of
restrictions just limit customer choice and stifle competition.
>
I get it; like I said, I realize that my view doesn't match others', and I
certainly seem to be in the minority.  I'm sure there are some things we can
hammer out and define clearly that make everybody at least a 'little'
happy.

> Even if you don't agree with my stance, I see serious practical problems
with trying to define what is and is not permitted in terms of "support
tools". Is a proprietary binary that communicates with a physical
controller using a proprietary API a "support tool"? What if someone
creates a software-defined-storage system which is purely a proprietary
binary and nothing else?
>
> API proxies are also very hard to nail down. Is an API proxy with a
proprietary license not allowed? What if that proxy runs on the box itself?
What if it's a separate software package you have to install? I don't think
we can write a set of rules that won't accidentally exclude things we don't
want to exclude.
>
> -Ben Swartzlander
>
>> --
>> Duncan Thomas
>>
>>
>>
>>
__
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Timeframe for future elections & "Release stewards"

2016-09-09 Thread John Griffith
On Fri, Sep 9, 2016 at 2:42 AM, Thierry Carrez <thie...@openstack.org>
wrote:

> John Griffith wrote:
> > ​I think Sean Dague made some really good points and I'd tend to lean
> > that way.  Honestly charters, bylaws, governance etc shift or are
> > rewritten fairly often.  Why not just change when we do elections to
> > correspond with releases and keep the continuity that we have now.​  Is
> > there a problem with the existing terms and cycles that maybe I'm
> missing?
>
> AFAICT this is not what Sean is proposing. He is saying that we should
> run elections in the weeks before Summit as usual, but the newly-elected
> PTL would /not/ take over the current PTL until 3 months later when the
> next development branches are opened.
>
Yes, which is a reasonable choice in my mind as well.  I was throwing out
there, however, that maybe this isn't that hard; maybe we could just move
the election date as well.


>
> While it's true that there are projects with a lot of continuity and
> succession planning, with the old PTL staying around after they have
> been replaced, there are also a fair share of projects where the PTL is
> replaced by election and either rage-quits or lowers their involvement
> significantly as a result. I'd rather have the /possibility/ to separate
> the PTL from the release steward role and ensure continuity.


> That doesn't prevent you from doing it Nova-style and use the PTL as the
> release steward. It just lets you use someone else if you want to. A bit
> like keeping a headphone jack. Options.
>
I see what you did there (and I like it).


>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Timeframe for future elections & "Release stewards"

2016-09-08 Thread John Griffith
On Thu, Sep 8, 2016 at 12:49 PM, Matt Riedemann 
wrote:

> On 9/8/2016 6:42 AM, Sean Dague wrote:
>
>> On 09/08/2016 05:00 AM, Thierry Carrez wrote:
>>
>>> Sean Dague wrote:
>>>
>> 
>>
>>> So... the difference between your proposal and mine is: you force the
>>> PTL to be the release steward (rather than having a choice there), and
>>> introduce a delay between election and start of authority for the PTL.
>>>
>>> I don't see that delay as a good thing. You would elect in April a PTL
>>> whose authority over the project would start in August. That sounds a
>>> bit weird to me. I'd rather say that the authority of the PTL starts
>>> when he is elected, and not introduce a delay.
>>>
>>> I don't see *forcing* the PTL to be the release steward to be a good
>>> thing either. The just-elected PTL can totally be the release steward
>>> for the upcoming cycle -- actually, that is how my proposal would work
>>> by default: the PTL elected around Boston would be the default release
>>> steward for Q, and the PTL elected around Sydney would be the default
>>> release steward for R. But I'd rather allow for some flexibility here,
>>> in case the PTL prefers to delegate more of his work. I also think
>>> allowing for more leadership roles (rather than piling it all on the
>>> PTL) helps growing a stronger leadership pipeline.
>>>
>>> In summary, I see drawbacks to your variant, and I fail to see any
>>> benefits... Am I missing something ?
>>>
>>
>> I can only bring my own experience from projects, which is to expose
>> projects to succession planning a bit earlier, but otherwise keep things
>> the same. Both with working in the QA team, and in Nova, usually the
>> standing PTL starts telling folks about half way through their final
>> term that they aren't going to run again. And there ends up being a
>> bunch of private team conversations to figure out who else is
>> interested. Often those folks need to clear some things off their plate.
>> So there is some completely private indication of who might be the next
>> PTL. However, nothing is made official, and no one wants to presume
>> until an actual election happens months later.
>>
>> When succession planning doesn't go well, you get to nomination week,
>> and you find out the current PTL isn't running, and there are two days
>> of mad scramble trying to figure out who is going to run.
>>
>> Forcing the PTL-next conversation back some amount of time means it
>> matches the way I've seen succession planning work in projects for the
>> best handoff.
>>
>> I feel like projects and PTLs do already delegate the things they can
>> and want to. It's not clear to me that creating another title of release
>> steward is going to dramatically change that. Maybe it's an active
>> suggestion to delegate that role out? Or that another title helps
>> convince employers that someone needs to end up at the PTG?
>>
>> I'm also not very concerned about delayed authority of the PTL. Peaceful
>> handoff should be a pretty basic tenant in projects. Knowing about it
>> for a longer time shouldn't be a big deal. If it causes giant strife to
>> pass the torch from one PTL to the next there is something else going
>> wrong in that project. In the few cases I'm familiar with in which a
>> standing PTL lost an election, the relationship between that PTL and the
>> PTL-next was fine.
>>
>> Again, these are personal experiences from the projects I'm actively
>> involved with, or collaborate with the most.
>>
>> -Sean
>>
>>
> +1 to everything sdague said here.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
I think Sean Dague made some really good points and I'd tend to lean that
way.  Honestly, charters, bylaws, governance, etc. shift or are rewritten
fairly often.  Why not just change when we do elections to correspond with
releases and keep the continuity that we have now?  Is there a problem
with the existing terms and cycles that maybe I'm missing?

If there's a real hang-up on the wording of it being related to the summit,
then fine... word it such that the election is "summit-date - N months =
election-date".  Personally, I think there is value in the continuity of a
single PTL for a release cycle.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] moving driver to open source

2016-09-08 Thread John Griffith
On Thu, Sep 8, 2016 at 11:04 AM, Jeremy Stanley  wrote:

> On 2016-09-08 09:32:20 +0100 (+0100), Daniel P. Berrange wrote:
> > That policy is referring to libraries (ie, python modules that we'd
> > actually "import" at the python level), while the list above seems to be
> > referring to external command line tools that we merely invoke from the
> > python code. From a license compatibility POV there's no problem, as
> there's
> > a boundary between the open source openstack code, and the closed source
> > external program. Talking to a closed source external command over stdio,
> > is conceptually no different to talking to a closed source server over
> > some remote API.
>
> The drawback to this interpretation is that someone can't ship a
> complete OpenStack solution that will "talk" to the devices in
> question without either obtaining permission to bundle and ship
> these proprietary software components or instructing the recipient
> to obtain them separately and assemble the working system
> themselves. If we go with an interpretation that the openness
> boundary for hardware "support" in OpenStack is at the same place
> where this proprietary hardware connects to the commodity system
> running OpenStack software (so communication over SSH, HTTP, SCSI,
> PCI/DMA, et cetera) we can perhaps avoid this.
>
> It's a grey area many free software projects struggle with, and from
> a pragmatic standpoint we need to accept that there will in almost
> every case be proprietary software involved in any system (after
> all, "firmware" is still software). Still, we can minimize
> complication for recipients of our software if we make it expected
> they should be able to simply install it and its free dependencies
> and get a working system that can communicate with "supported"
> hardware without needing to also download and install separate
> proprietary tools from the hardware vendor. It's not what we say
> today, but it's what I personally feel like we *should* be saying.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Thanks for your input here, Jeremy.

> complication for recipients of our software if we make it expected
> they should be able to simply install it and its free dependencies
> and get a working system that can communicate with "supported"
> hardware without needing to also download and install separate
> proprietary tools from the hardware vendor. It's not what we say
> today, but it's what I personally feel like we *should* be saying.


Your view on what you feel we *should* say is exactly how I've interpreted
our position in previous discussions within the Cinder project.  Perhaps
I'm overreaching in my interpretation, and that's why this is so hotly
debated when I do see it or voice my concerns about it.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] moving driver to open source

2016-09-08 Thread John Griffith
On Thu, Sep 8, 2016 at 2:32 AM, Daniel P. Berrange 
wrote:

> On Thu, Sep 08, 2016 at 10:24:09AM +0200, Thierry Carrez wrote:
> > Avishay Traeger wrote:
> > > There are a number of drivers that require closed-source tools to
> > > communicate with the storage.  3 others that I've come across recently:
> > >
> > >   * EMC VNX: requires Navisphere CLI v7.32 or higher
> > >   * Hitachi storage volume driver: requires RAID Manager Ver
> 01-32-03/01
> > > or later for VSP G1000/VSP/HUS VM, Hitachi Storage Navigator
> Modular
> > > 2 (HSNM2) Ver 27.50 or later for HUS 100 Family
> > >   * Infortrend driver: requires raidcmd ESDS10
> >
> > If those proprietary dependencies are required for operation, those
> > would probably violate our licensing policy[1] and should probably be
> > removed:
> >
> > "In order to be acceptable as dependencies of OpenStack projects,
> > external libraries (produced and published by 3rd-party developers) must
> > be licensed under an OSI-approved license that does not restrict
> > distribution of the consuming project. The list of acceptable licenses
> > includes ASLv2, BSD (both forms), MIT, PSF, LGPL, ISC, and MPL. Licenses
> > considered incompatible with this requirement include GPLv2, GPLv3, and
> > AGPL."
>
> That policy is referring to libraries (ie, python modules that we'd
> actually "import" at the python level), while the list above seems to be
> referring to external command line tools that we merely invoke from the
> python code. From a license compatibility POV there's no problem, as
> there's
> a boundary between the open source openstack code, and the closed source
> external program. Talking to a closed source external command over stdio,
> is conceptually no different to talking to a closed source server over
> some remote API.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
I think I need some clarification here.  My initial interpretation was in
line with Thierry's comments here:
> "In order to be acceptable as dependencies of OpenStack projects,
> external libraries (produced and published by 3rd-party developers) must
> be licensed under an OSI-approved license that does not restrict
> distribution of the consuming project. The list of acceptable licenses
> includes ASLv2, BSD (both forms), MIT, PSF, LGPL, ISC, and MPL. Licenses
> considered incompatible with this requirement include GPLv2, GPLv3, and
> AGPL."

But I also get Daniel's point in the previous message.

My understanding when talking to folks about this library (and examples of
things that were proposed and rejected by me in the past):
1.  It was in fact a module that would be imported by the Python code
2.  It was not a binary but a source module
3.  The module proposed had a proprietary license that specifically
prohibited redistribution

There have also been proposals over the years for binaries that had
restrictive licenses that did NOT allow redistribution.  My position in the
past was always "if it was questionable or seemed like it *could* cause
problems for distribution then simply don't do it".

All of that being said, based on the feedback here, and the overwhelming
feedback I received in Cinder's meeting on this yesterday (and not positive
feedback), it seems obvious that I must not understand our policies, the
ramifications of what's being proposed, etc.  I'd suggest, as I did
yesterday, that instead of running the risk of my misinterpreting what's
being proposed, perhaps Alon could detail what it is in fact that they are
proposing.  Some clarification regarding what the closed-source piece
actually is, what the licensing is, how it works, etc. might be useful and
may prove that my concerns here are completely unwarranted.  Or the rest
of the community is fine with it anyway, in which case we can move along.

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] moving driver to open source

2016-09-07 Thread John Griffith
On Tue, Sep 6, 2016 at 9:27 AM, Alon Marx  wrote:

> I want to share our plans to open the IBM Storage driver source code.
> Historically we started our way in cinder way back (in Essex if I'm not
> mistaken)

​You're mistaken, Cinder didn't exist at that time... but it's irrelevant.


> with just a small piece of code in the community while keeping most of the
> driver code closed. Since then the code has grown, but we kept with the
> same format. We would like now to open the driver source code, while
> keeping the connectivity to the storage as closed source.

It might help to know *which* driver you are referring to.  IBM has a
number of Storwize and GPFS drivers in Cinder... which drivers are you
referring to here?


>
> I believe that there are other cinder drivers that have some stuff in
> proprietary libraries.

Actually, we've had a hard stance on this: if you have code in Cinder that
requires an external lib (I personally hate this model), we typically
require it to be open source.

> I want to propose and formalize the principles to where we draw the line
> (this has also been discussed in https://review.openstack.org/#/c/341780/)
> on what's acceptable by the community.
> ​
>


> Based on previous discussion I understand that the rule of thumb is "as
> long as the majority of the driver logic is in the public driver" the
> community would be fine with that. Is this acceptable to the community?

No, I don't think that's true.  It's quite possible that some people make
those sorts of statements, but frankly they're missing the entire point.

In case you weren't aware, OpenStack IS an OPEN SOURCE project, not a
proprietary or hybrid project.  We are VERY clear as a community about that
fact and what we call the "4 Opens" [1].  It's my opinion that if you're in
then you're ALL in.

[1]: https://governance.openstack.org/reference/opens.html

>
>
> Regards,
> Alon
>
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-26 Thread John Griffith
On Fri, Aug 26, 2016 at 10:20 AM, Ed Leafe  wrote:

> On Aug 25, 2016, at 3:19 PM, Andrew Laski  wrote:
>
> > One other thing to note is that while a flavor constrains how much local
> > disk is used it does not constrain volume size at all. So a user can
> > specify an ephemeral/swap disk <= to what the flavor provides but can
> > have an arbitrary sized root disk if it's a remote volume.
>
> This kind of goes to the heart of the argument against flavors being the
> sole source of truth for a request. As cloud evolves, we keep packing more
> and more stuff into a concept that was originally meant to only divide up
> resources that came bundled together (CPU, RAM, and local disk). This
> hasn’t been a good solution for years, and the sooner we start accepting
> that a request can be much more complex than a flavor can adequately
> express, the better.
>
> If we have decided that remote volumes are a good thing (I don’t think
> there’s any argument there), then we should treat that part of the request
> as being as fundamental as a flavor. We need to make the scheduler smarter
> so that it doesn’t rely on flavor as being the only source of truth.
>
+1


>
> The first step to improving Nova is admitting we have a problem. :)
>
>
> -- Ed Leafe
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Nova] Reconciling flavors and block device mappings

2016-08-26 Thread John Griffith
On Fri, Aug 26, 2016 at 7:37 AM, Andrew Laski  wrote:

>
>
> On Fri, Aug 26, 2016, at 03:44 AM, kostiantyn.volenbovs...@swisscom.com
> wrote:
> > Hi,
> > option 1 (=that's what patches suggest) sounds totally fine.
> > Option 3 > Allow block device mappings, when present, to mostly determine
> > instance  packing
> > sounds like option 1+additional logic (=keyword 'mostly')
> > I think I miss to understand the part of 'undermining the purpose of the
> > flavor'
> > Why new behavior might require one more parameter to limit number of
> > instances of host?
> > Isn't it that those VMs will be under control of other flavor
> > constraints, such as CPU and RAM anyway and those will be the ones
> > controlling 'instance packing'?
>
> Yes it is possible that CPU and RAM could be controlling instance
> packing. But my understanding is that since those are often
> oversubscribed

​I don't understand why the oversubscription ratio matters here?


> while disk is not that it's actually the disk amounts
> that control the packing on some environments.

Maybe an explanation of what you mean by "packing" here would help.
Customers that I've worked with over the years have used CPU and memory as
their levers, and as the main thing they care about in terms of how many
instances go on a node.  I'd like to learn more about why that's wrong and
why disk space is the mechanism that deployers use for this.


> But that is a sub option
> here, just document that disk amounts should not be used to determine
> flavor packing on hosts and instead CPU and RAM must be used.
>
> > Does option 3 covers In case someone relied on eg. flavor root disk for
> > disk volume booted from volume - and now instance packing will change
> > once patches are implemented?
>
> That's the goal. In a simple case of having hosts with 16 CPUs, 128GB of
> RAM and 2TB of disk and a flavor with VCPU=4, RAM=32GB, root_gb=500GB,
> swap/ephemeral=0 the deployer is stating that they want only 4 instances
> on that host.

How do you arrive at that logic?  What if they actually wanted a single
VCPU=4, RAM=32GB, root_gb=500 Instance, but then wanted the remaining
resources split among Instances that were all 1 VCPU, 1 GB RAM and a 1 GB
root disk?

> If there is CPU and RAM oversubscription enabled then by
> using volumes a user could end up with more than 4 instances on that
> host. So a max_instances=4 setting could solve that. However I don't
> like the idea of adding a new config, and I think it's too simplistic to
> cover more complex use cases. But it's an option.
>

I would venture to guess that most Operators would be sad to read that.  So
rather than give them an explicit lever that does exactly what they want,
we should make it as complex as possible and have it be the result of a
four- or five-variable equation?  Not to mention it's completely dynamic
(because it seems like lots of clouds have more than one flavor).

All I know is that the current state is broken.  It's not just the
scheduling problem; I could probably live with that since it's too hard to
fix... but keep in mind that you're reporting completely wrong information
for the Instance in these cases.  My flavor says it's 5 GB, but in reality
it's 200 GB or whatever.  Rather than trying to make it perfect we should
just fix it.  Personally I thought the proposals for a scheduler check and
the addition of the Instances/Node option were a win-win for everyone.  What
am I missing?  Would you rather have a custom filter scheduler so it wasn't
a config option?

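For what it's worth, the packing arithmetic being debated above works out
roughly like this, using the numbers from the quoted example (an
illustrative sketch only, not Nova code; the helper and the ratios are made
up for the example):

# Illustrative sketch of "instance packing", not Nova code.
# Host: 16 VCPUs, 128 GB RAM, 2 TB local disk.
# Flavor: VCPU=4, RAM=32 GB, root_gb=500.
host = {"vcpus": 16, "ram_gb": 128, "disk_gb": 2000}
flavor = {"vcpus": 4, "ram_gb": 32, "root_gb": 500}

def max_instances(host, flavor, cpu_ratio=1.0, ram_ratio=1.0,
                  boot_from_volume=False):
    """How many instances of this flavor fit on the host."""
    limits = [int(host["vcpus"] * cpu_ratio) // flavor["vcpus"],
              int(host["ram_gb"] * ram_ratio) // flavor["ram_gb"]]
    if not boot_from_volume:
        # A local root disk counts against the host's disk.
        limits.append(host["disk_gb"] // flavor["root_gb"])
    return min(limits)

print(max_instances(host, flavor))                         # 4: CPU, RAM and disk all agree
print(max_instances(host, flavor, boot_from_volume=True))  # still 4: CPU/RAM bound
print(max_instances(host, flavor, cpu_ratio=4.0, ram_ratio=1.5,
                    boot_from_volume=True))                # 6: RAM bound, disk no longer limits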
>
> >
> > BR,
> > Konstantin
> >
> > > -Original Message-
> > > From: Andrew Laski [mailto:and...@lascii.com]
> > > Sent: Thursday, August 25, 2016 10:20 PM
> > > To: openstack-dev@lists.openstack.org
> > > Cc: openstack-operat...@lists.openstack.org
> > > Subject: [Openstack-operators] [Nova] Reconciling flavors and block
> device
> > > mappings
> > >
> > > Cross posting to gather some operator feedback.
> > >
> > > There have been a couple of contentious patches gathering attention
> recently
> > > about how to handle the case where a block device mapping supersedes
> flavor
> > > information. Before moving forward on either of those I think we
> should have a
> > > discussion about how best to handle the general case, and how to
> handle any
> > > changes in behavior that results from that.
> > >
> > > There are two cases presented:
> > >
> > > 1. A user boots an instance using a Cinder volume as a root disk,
> however the
> > > flavor specifies root_gb = x where x > 0. The current behavior in Nova
> is that the
> > > scheduler is given the flavor root_gb info to take into account during
> scheduling.
> > > This may disqualify some hosts from receiving the instance even though
> that disk
> > > space  is not necessary because the root disk is a remote volume.
> > > https://review.openstack.org/#/c/200870/
> > >
> > > 2. A user boots an instance and uses the block device mapping
> parameters to
> > > specify a swap or 

Re: [openstack-dev] [gate] [cinder] A current major cause for gate failure - cinder backups

2016-08-24 Thread John Griffith
Patch is up; see the LP bug reference in Lisa's message.

On Aug 24, 2016 10:35, "Jay S. Bryant" 
wrote:

> Lisa,
>
> Great debug!  Thank you!
>
> Let me know when a patch is up and I will take a look.
>
> Jay
>
> On 08/24/2016 02:24 AM, Li, Xiaoyan wrote:
>
>> Hi,
>>
>> I noticed that VolumesBackupsV1Test and VolumesBackupsV2Test use the same
>> volume for the backup creation tests, etc.
>> Creating a backup from a volume requires attaching the volume. Since both
>> tests use the same volume, they attach it at the same time, which leads to
>> failures.
>> I opened a bug https://bugs.launchpad.net/tempest/+bug/1616338, and will
>> fix it.
>>
>> Best wishes
>> Lisa
>>
>> -Original Message-
>> From: Sean Dague [mailto:s...@dague.net]
>> Sent: Wednesday, August 24, 2016 10:21 AM
>> To: OpenStack Development Mailing List (not for usage questions) <
>> openstack-dev@lists.openstack.org>
>> Subject: [openstack-dev] [gate] [cinder] A current major cause for gate
>> failure - cinder backups
>>
>> The gate is in a bad state, as people may have noticed. We're only at a
>> 50% characterization for integrated-gate right now -
>> http://status.openstack.org/elastic-recheck/data/integrated_gate.html
>> which means there are a lot of unknown bugs in there.
>>
>> Spot checking one job - gate-tempest-dsvm-postgres-full-ubuntu-xenial -
>> 6 of the 7 fails were failure of cinder backup -
>> http://logs.openstack.org/92/355392/4/gate/gate-tempest-dsvm
>> -postgres-full-ubuntu-xenial/582fbd7/console.html#_2016-08-
>> 17_04_55_24_109972
>> - though they were often different tests.
>>
>> With the current state of privsep logging (hundreds of lines at warn
>> level) it is making it difficult for me to narrow this down further. I do
>> suspect this might be another concurrency shake out from os-brick, so it
>> probably needs folks familiar to go through logs with a fine toothed comb
>> to get to root cause. If anyone can jump on that, it would be great.
>>
>> This is probably not the only big new issue, but it seems like a pretty
>> concrete one that solving would help drop out merge window (which is 16
>> hours).
>>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][drivers] Backend and volume health reporting

2016-08-14 Thread John Griffith
On Sun, Aug 14, 2016 at 2:11 AM, Avishay Traeger 
wrote:

> Hi all,
> I would like to propose working on a new feature for Ocata to provide
> health information for Cinder backends and volumes.  Currently, a volume's
> status basically reflects the last management operation performed on it -
> it will be in error state only as a result of a failed management
> operation.  There is no indication as to whether or not a backend or volume
> is "healthy" - i.e., the data exists and is accessible.
>
> The basic idea would be to add a "health" property for both backends and
> volumes.
>
> For backends, this may be something like:
> - "healthy"
> - "warning" (something is wrong and the admin should check the storage)
> - "management unavailable" (there is no management connectivity)
> - "data unavailable" (there is no data path connectivity)
>
> For volumes:
> - "healthy"
> - "degraded" (i.e., not at full redundancy)
> - "error" (in case of a data loss event)
> - "management unavailable" (there is no management connectivity)
> - "data unavailable" (there is no data path connectivity)
>
> Before I start working on a spec, I wanted to get some feedback,
> especially from driver owners:
> 1. What useful information can you provide at the backend level?
> 2. And at the volume level?
> 3. How would you obtain this information?  Querying the storage (poll)?
> Registering for events?  Something else?
> 4. Other feedback?
>
> Thank you,
> Avishay
>
> --
> *Avishay Traeger, PhD*
> *System Architect*
>
> Mobile: +972 54 447 1475
> E-mail: avis...@stratoscale.com
>
>
>
> Web  | Blog
>  | Twitter
>  | Google+
> 
>  | Linkedin 
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
I'd like to get a more detailed use case and example of a problem you
want to solve with this.  I have a number of concerns, including those I
raised in your "list manageable volumes" proposal.  Most importantly,
there's really no clear definition of what these fields mean and how they
should be interpreted.

For backends, I'm not sure what you want to solve that can't be handled
already by the scheduler and report-capabilities periodic job?  You can
already report back from your backend to the scheduler that you shouldn't
be used for any scheduling activities going forward.  More detailed info
than that might be useful, but I'm not sure it wouldn't fall into an
already existing OpenStack monitoring project like Monasca?
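For reference, that capabilities report is just the stats dictionary a
driver hands back periodically via get_volume_stats(); a rough sketch of
its shape is below (the standard keys are real, but the extra health-style
field and all of the values are illustrative, not an existing Cinder
convention):

# Sketch of the stats a backend reports to the scheduler; values are made up.
def get_volume_stats(refresh=False):
    return {
        'volume_backend_name': 'example_backend',   # illustrative name
        'vendor_name': 'ExampleVendor',             # illustrative name
        'driver_version': '1.0.0',
        'storage_protocol': 'iSCSI',
        'total_capacity_gb': 1000,
        'free_capacity_gb': 250,
        'reserved_percentage': 0,
        # Illustrative only: a coarse health flag a backend could surface so
        # that a scheduler filter can skip backends that report themselves
        # as unhealthy.
        'backend_state': 'up',
    }

print(get_volume_stats())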

As far as volumes go, I personally don't think volumes should have more
than a few states.  They're either "ok" and available for an operation or
they're not.  The list you have seems ok to me, but I don't see a ton of
value in fault prediction or going to great lengths to avoid something
failing.  The current model we have of a volume being "ok" until it's "not"
seems perfectly reasonable to me.  Typically my experience is that trying
to be clever and polling/monitoring to try and preemptively change the
status of a volume does little more than result in complexity, confusion
and false status changes of resources.  I'm pretty strongly opposed to that
level of granularity at the volume level here.  At least for now, I'd
rather see what you have in mind for the backend and nail that down to
something that's solid and basically bulletproof before trying to tackle
thousands of volumes which have transient states.  And of course the
biggest question I still have is: what problem do you hope to solve here?

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread John Griffith
On Fri, Aug 12, 2016 at 12:10 PM, Walter A. Boring IV  wrote:

>
> I was leaning towards a separate repo until I started thinking about all
> the overhead and complications this would cause. It's another repo for
> cores to watch. It would cause everyone extra complication in setting up
> their CI, which is already one of the biggest roadblocks. It would make
> it a little harder to do things like https://review.openstack.org/297140
> and https://review.openstack.org/346470 to be able to generate 
> this:http://docs.openstack.org/developer/cinder/drivers.html. Plus more infra
> setup, more moving parts to break, and just generally more
> complications.
>
> All things that can be solved for sure. I just question whether it would
> be worth having that overhead. Frankly, there are better things I'd like
> to spend my time on.
>
> I think at this point my first preference would actually be to define a
> new tag. This addresses both the driver removal issue as well as the
> backporting of driver bug fixes. I would like to see third party drivers
> recognized and treated as being different, because in reality they are
> very different than the rest of the code. Having something like
> follows_deprecation_but_has_third_party_drivers_that_dont would make a
> clear statement that there is a vendor component to this project that
> really has to be treated differently and has different concerns
> deployers need to be aware of.
>
> Barring that, I think my next choice would be to remove the tag. That
> would really be unfortunate as we do want to make it clear to users that
> Cinder will not arbitrarily break APIs or do anything between releases
> without warning when it comes to non-third party drivers. But if that is
> what we need to do to effectively communicate what to expect from
> Cinder, then I'm OK with that.
>
> My last choice (of the ones I'm favorable towards) would be marking a
> driver as untested/unstable/abandoned/etc rather than removing it. We
> could flag these a certain way and have then spam the logs like crazy
> after upgrade to make it very and painfully clear that they are not
> being maintained. But as Duncan pointed out, this doesn't have as much
> impact for getting vendor attention. It's amazing the level of executive
> involvement that can happen after a patch is put up for driver removal
> due to non-compliance.
>
> Sean
>
> __
>
> I believe there is a compromise that we could implement in Cinder that
> enables us to have a deprecation
> of unsupported drivers that aren't meeting the Cinder driver requirements
> and allow upgrades to work
> without outright immediately removing a driver.
>
>
>1. Add a 'supported = True' attribute to every driver.
>2. When a driver no longer meets Cinder community requirements, put a
>patch up against the driver
>3. When c-vol service starts, check the supported flag.  If the flag
>is False, then log an exception, and disable the driver.
>4. Allow the admin to put an entry in cinder.conf for the driver in
>question "enable_unsupported_driver = True".  This will allow the c-vol
>service to start the driver and allow it to work.  Log a warning on every
>driver call.
>5. This is a positive acknowledgement by the operator that they are
>enabling a potentially broken driver. Use at your own risk.
>6. If the vendor doesn't get the CI working in the next release, then
>remove the driver.
>7. If the vendor gets the CI working again, then set the supported
>flag back to True and all is good.
>
>
> This allows a deprecation period for a driver, and keeps operators who
> upgrade their deployment from losing access to their volumes they have on
> those back-ends.  It will give them time to contact the community and/or do
> some research, and find out what happened to the driver.   This also
> potentially gives the operator time to find a new supported backend and
> start migrating volumes.  I say potentially, because the driver may be
> broken, or it may work enough to migrate volumes off of it to a new backend.
>
> Having unsupported drivers in tree is terrible for the Cinder community,
> and in the long run terrible for operators.
> Instantly removing drivers because CI is unstable is terrible for
> operators in the short term, because as soon as they upgrade OpenStack,
> they lose all access to managing their existing volumes.   Just because we
> leave a driver in tree in this state, doesn't mean that the operator will
> be able to migrate if the drive is broken, but they'll have a chance
> depending on the state of the driver in question.  It could be horribly
> broken, but the breakage might be something fixable by someone that just
> knows Python.   If the driver is gone from tree entirely, then that's a lot
> more to overcome.
>
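A minimal sketch of the check described in steps 1-5 of the quoted proposal
is below (the enable_unsupported_driver name comes from the proposal
itself; everything else is illustrative, not actual Cinder code):

# Illustrative sketch of the proposed supported-flag behavior, not Cinder code.
import logging

LOG = logging.getLogger(__name__)

class ExampleDriver(object):
    # Step 1: every driver carries a 'supported' attribute; flipping it to
    # False is the one-line patch described in step 2.
    supported = True

def load_driver(driver_cls, conf):
    # Steps 3-5: what the c-vol service would do at startup.
    if not driver_cls.supported:
        if conf.get('enable_unsupported_driver'):
            LOG.warning("%s is unsupported; enabled at your own risk via "
                        "enable_unsupported_driver.", driver_cls.__name__)
        else:
            LOG.error("%s is unsupported and has been disabled. Set "
                      "enable_unsupported_driver = True in cinder.conf to "
                      "re-enable it at your own risk.", driver_cls.__name__)
            return None
    return driver_cls()

driver = load_driver(ExampleDriver, {'enable_unsupported_driver': False})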
> I don't think there is a way to make everyone happy all the time, but I
> 

Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread John Griffith
On Fri, Aug 12, 2016 at 7:37 AM, Sean McGinnis 
wrote:

> On Fri, Aug 12, 2016 at 03:40:47PM +0300, Duncan Thomas wrote:
> > On 12 Aug 2016 15:28, "Thierry Carrez"  wrote:
> > >
> > > Duncan Thomas wrote:
> >
> > > I agree that leaving broken drivers in tree is not significantly better
> > > from an operational perspective. But I think the best operational
> > > experience would be to have an idea of how much risk you expose
> yourself
> > > when you pick a driver, and have a number of them that are actually
> > > /covered/ by the standard deprecation policy.
> > >
> > > So ideally there would be a number of in-tree drivers (on which the
> > > Cinder team would apply the standard deprecation policy), and a
> separate
> > > repository for 3rd-party drivers that can be removed at any time (and
> > > which would /not/ have the follows-standard-deprecation-policy tag).
> >
> > So we'd certainly have to move out all of the backends requiring
> > proprietary hardware, since we couldn't commit to keeping them working if
> > their vendors turn of their CI. That leaves ceph, lvm, NFS, drdb, and
> > sheepdog, I think. There is not enough broad knowledge in the core team
> > currently to support sheepdog or drdb without 'vendor' help. That would
> > leave us with three drivers in the tree, and not actually provide much
> > useful risk information to deployers at all.
> >
> > > I understand that this kind of reorganization is a bit painful for
> > > little (developer-side) gain, but I think it would provide the most
> > > useful information to our users and therefore the best operational
> > > experience...
> >
> > In theory this might be true, but see above - in practice it doesn't work
> > that way.
>
> I was leaning towards a separate repo until I started thinking about all
> the overhead and complications this would cause. It's another repo for
> cores to watch. It would cause everyone extra complication in setting up
> their CI, which is already one of the biggest roadblocks. It would make
> it a little harder to do things like https://review.openstack.org/297140
> and https://review.openstack.org/346470 to be able to generate this:
> http://docs.openstack.org/developer/cinder/drivers.html. Plus more infra
> setup, more moving parts to break, and just generally more
> complications.
>
> All things that can be solved for sure. I just question whether it would
> be worth having that overhead. Frankly, there are better things I'd like
> to spend my time on.
>
> I think at this point my first preference would actually be to define a
> new tag. This addresses both the driver removal issue as well as the
> backporting of driver bug fixes. I would like to see third party drivers
> recognized and treated as being different, because in reality they are
> very different than the rest of the code. Having something like
> follows_deprecation_but_has_third_party_drivers_that_dont would make a
> clear statement that there is a vendor component to this project that
> really has to be treated differently and has different concerns
> deployers need to be aware of.
>
> Barring that, I think my next choice would be to remove the tag. That
> would really be unfortunate as we do want to make it clear to users that
> Cinder will not arbitrarily break APIs or do anything between releases
> without warning when it comes to non-third party drivers. But if that is
> what we need to do to effectively communicate what to expect from
> Cinder, then I'm OK with that.
>
> My last choice (of the ones I'm favorable towards) would be marking a
> driver as untested/unstable/abandoned/etc rather than removing it. We
> could flag these a certain way and have then spam the logs like crazy
> after upgrade to make it very and painfully clear that they are not
> being maintained. But as Duncan pointed out, this doesn't have as much
> impact for getting vendor attention. It's amazing the level of executive
> involvement that can happen after a patch is put up for driver removal
> due to non-compliance.
>
> Sean
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Yeah, I think something like a "passes-upstream-integration" tag per driver
would be a better option.  Whether that's collected via automation looking
at the gerrit info from 3rd party CI, or we bring back the old manual Cert
scripts (or some form of them), is another conversation worth having next
time we're all together.  Now, trying to agree on the criteria might be a
bit of work.

By going with a tag we don't remove anything, but we also don't pretend
that we know it works either.

The statement suggesting that if it's not in the infra gate then it must
be considered as maybe not there in the future 

Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-11 Thread John Griffith
On Thu, Aug 11, 2016 at 7:14 AM, Erno Kuvaja  wrote:

> On Thu, Aug 11, 2016 at 2:47 PM, Sean McGinnis 
> wrote:
> >> >>
> >> >> As follow up on the mailing list discussion [0], gerrit activity
> >> >> [1][2] and cinder 3rd party CI policy [3] I'd like to initiate
> >> >> discussion how Cinder follows, or rather does not follow, the
> standard
> >> >> deprecation policy [4] as the project has been tagged on the assert
> >> >> page [5].
> >> >>
> > 
> >> >>
> >> >> [0] http://lists.openstack.org/pipermail/openstack-dev/2016-
> August/100717.html
> >> >> [1] https://review.openstack.org/#/c/348032/
> >> >> [2] https://review.openstack.org/#/c/348042/
> >> >> [3] https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers
> >> >> [4] https://governance.openstack.org/reference/tags/assert_
> follows-standard-deprecation.html#requirements
> >> >> [5] https://governance.openstack.org/reference/tags/assert_
> follows-standard-deprecation.html#application-to-current-projects
> >> >>
> >> >
> >> > Can you be more specific about what you mean? Are you saying that
> >> > the policy isn't being followed because the drivers were removed
> >> > without a deprecation period, or is there something else to it?
> >> >
> >> > Doug
> >> >
> >>
> >> Yes, that's how I see it. Cinder's own policy is that the drivers can
> >> be removed without any warning to the consumers while the standard
> >> deprecation policy defines quite strict lines about informing the
> >> consumer of the functionality deprecation before it gets removed.
> >>
> >> - Erno
> >
> > It is a good point. I think it highlights a common thread though with
> > the other discussion that, at least so far, third party drivers are
> > treated differently than the rest of the code.
> >
> > For any other functionality we certainly follow the deprecation policy.
> > Even in existing drivers we try to enforce that any driver renames,
> > config setting changes, and similar non-backwards compatible changes go
> > through the normal deprecation cycle before being removed.
> >
> > Ideally I would love it if we could comply with the deprecation policy
> > with regards to driver removal. But the reality is, if we don't see that
> > a driver is being supported and maintained by its vendor, then that
> > burden can't fall on the wider OpenStack and Cinder community that has
> > no way of validating against physical hardware.
> >
> > I think third party drivers need to be treated differently when it comes
> > to the deprecation policy. If that is not acceptable, then I suppose we
> > do need to remove that tag. Tag removal would be the lesser of the two
> > versus keeping around drivers that we know aren't really being
> > maintained.
> >
> > If it came to that, I would also consider creating a new cinder-drivers
> > project under the Cinder umbrella and move all of the drivers not tested
> > by Jenkins over to that. That wouldn't be a trivial undertaking, so I
> > would try to avoid that if possible. But it would at least allow us to
> > still get code reviews and all of the benefits of being in tree. Just
> > some thoughts.
> >
> > Sean
> >
>
> Sean,
>
> As said on my initial opening, I do understand and agree with the
> reasoning/treatment of the 3rd party drivers. My request for that tag
> removal is out of the remains of my ops hat.
>
> Lets say I was ops evaluating different options as storage vendor for
> my cloud and I get told that "Here is the list of supported drivers
> for different OpenStack Cinder back ends delivered by Cinder team", I
> start looking what the support level of those drivers are and see that
> Cinder follows standard deprecation which is fairly user/ops friendly
> with decent warning etc. I'm happy with that, not knowing OpenStack I
> would not even look if different subcomponents of Cinder happens to
> follow different policy. Now I buy storage vendor X HW and at Oct I
> realize that the vendor's driver is not shipped, nor any remains of it
> is visible anymore, I'd be reasonably pissed off. If I knew that the
> risk is there I would select my HW based on the negotiations that my
> HW is contractually tied to maintain that driver and it's CI, and that
> would be fine as well or if not possible I'd select some other
> solution I could get reasonably guarantee that it will be
> supported/valid at it's expected life time. As said I don't think
> there is anything wrong with the 3rd party driver policy, but
> maintaining that and the tag about standard-deprecation project wide
> is sending wrong message to those who do not know better to safeguard
> their rear ends.
>
> The other option would be to leave the drivers in tree, tag them with
> deprecation message, something like "This driver has not been tested
> by vendor CI since 15.3.2016 and cannot be guaranteed working. Unless
> testing will be resumed the driver will be removed on Unicorn
> release". Which would give as clear indication that the driver 

Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread John Griffith
On Tue, Aug 9, 2016 at 10:26 PM, Matthew Treinish <mtrein...@kortar.org>
wrote:

> On Tue, Aug 09, 2016 at 09:16:02PM -0700, John Griffith wrote:
> > On Tue, Aug 9, 2016 at 7:21 PM, Matthew Treinish <mtrein...@kortar.org>
> > wrote:
> >
> > > On Tue, Aug 09, 2016 at 05:28:52PM -0700, John Griffith wrote:
> > > > On Tue, Aug 9, 2016 at 4:53 PM, Sean McGinnis <sean.mcgin...@gmx.com>
> > > wrote:
> > > >
> > > > > .
> > > > > >
> > > > > > Mike, you must have left the midcycle by the time this topic came
> > > > > > up. On the issue of out-of-tree drivers, I specifically offered
> this
> > > > > > proposal (a community managed mechanism for distributing driver
> > > > > > bugfix backports) as an compromise alternative to try to address
> the
> > > > > > needs of both camps. Everyone who was in the room at the time
> (plus
> > > > > > DuncanT who wasn't) agreed that if we had that (a way to deal
> with
> > > > > > backports) that they wouldn't want drivers out of the tree
> anymore.
> > > > > >
> > > > > > Your point of view wasn't represented so go ahead and explain
> why,
> > > > > > if we did have a reasonable way for bugfixes to get backported to
> > > > > > the releases customers actually run (leaving that mechanism
> > > > > > unspecified for the time being), that you would still want the
> > > > > > drivers out of the tree.
> > > > > >
> > > > > > -Ben Swartzlander
> > > > >
> > > > > The conversation about this started around the 30 minute point
> here if
> > > > > anyone is interested in more of the background discussion on this:
> > > > >
> > > > > https://www.youtube.com/watch?v=g3MEDFp08t4
> > > > >
> > > > > 
> > > __
> > > > > OpenStack Development Mailing List (not for usage questions)
> > > > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> > > unsubscribe
> > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > > >
> > > >
> > > > ​I don't think anybody is whining at all here, we had a fairly
> productive
> > > > discussion at the mid-cycle surrounding this topic and I do think
> there
> > > are
> > > > some valid advantages to this approach regardless of the QA question.
> > > Note
> > > > that it's been pointed out we weren't talking about or considering
> > > > advertising this *special* branch as tested by the standard means or
> gate
> > > > CI etc.
> > > >
> > > > We did discuss this though mostly in the context of helping the
> package
> > > > maintainers and distributions.  The fact is that many of us currently
> > > offer
> > > > backports of fixes in our own various github accounts.  That's fine
> and
> > > it
> > > > works well for many.  The problem we were trying to address however
> is
> > > that
> > > > this practice is rather problematic for the distros.  For example
> RHEL,
> > > > Helion or Mirantis are most certainly not going to run around cherry
> > > > picking change sets from random github repos scattered around.
> > > >
> > > > The context of the discussion was that by having a long lived
> *driver*
> > > > (emphasis on driver) branch there would be a single location and an
> > > *easy*
> > > > method of contact and communication regarding fixes to drivers that
> may
> > > be
> > > > available for stable branches that are no longer supported.  This
> puts
> > > the
> > > > burden of QA/Testing mostly on the vendors and distros, which I
> think is
> > > > fine.  They can either choose to work with the Vendor and verify the
> > > > versions for backport on a regular basis, or they can choose to
> ignore
> > > them
> > > > and NOT provide them to their customers.
> > > >
> > > > I don't think this is an awful idea, and it's very far from the
> "drivers
> > > > out of tree" discussion.  The feedback from the distro maintainers
> during
> > > > the week was that they would gladly welcome a model where they could

Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread John Griffith
On Tue, Aug 9, 2016 at 7:21 PM, Matthew Treinish <mtrein...@kortar.org>
wrote:

> On Tue, Aug 09, 2016 at 05:28:52PM -0700, John Griffith wrote:
> > On Tue, Aug 9, 2016 at 4:53 PM, Sean McGinnis <sean.mcgin...@gmx.com>
> wrote:
> >
> > > .
> > > >
> > > > Mike, you must have left the midcycle by the time this topic came
> > > > up. On the issue of out-of-tree drivers, I specifically offered this
> > > > proposal (a community managed mechanism for distributing driver
> > > > bugfix backports) as an compromise alternative to try to address the
> > > > needs of both camps. Everyone who was in the room at the time (plus
> > > > DuncanT who wasn't) agreed that if we had that (a way to deal with
> > > > backports) that they wouldn't want drivers out of the tree anymore.
> > > >
> > > > Your point of view wasn't represented so go ahead and explain why,
> > > > if we did have a reasonable way for bugfixes to get backported to
> > > > the releases customers actually run (leaving that mechanism
> > > > unspecified for the time being), that you would still want the
> > > > drivers out of the tree.
> > > >
> > > > -Ben Swartzlander
> > >
> > > The conversation about this started around the 30 minute point here if
> > > anyone is interested in more of the background discussion on this:
> > >
> > > https://www.youtube.com/watch?v=g3MEDFp08t4
> > >
> > > 
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > ​I don't think anybody is whining at all here, we had a fairly productive
> > discussion at the mid-cycle surrounding this topic and I do think there
> are
> > some valid advantages to this approach regardless of the QA question.
> Note
> > that it's been pointed out we weren't talking about or considering
> > advertising this *special* branch as tested by the standard means or gate
> > CI etc.
> >
> > We did discuss this though mostly in the context of helping the package
> > maintainers and distributions.  The fact is that many of us currently
> offer
> > backports of fixes in our own various github accounts.  That's fine and
> it
> > works well for many.  The problem we were trying to address however is
> that
> > this practice is rather problematic for the distros.  For example RHEL,
> > Helion or Mirantis are most certainly not going to run around cherry
> > picking change sets from random github repos scattered around.
> >
> > The context of the discussion was that by having a long lived *driver*
> > (emphasis on driver) branch there would be a single location and an
> *easy*
> > method of contact and communication regarding fixes to drivers that may
> be
> > available for stable branches that are no longer supported.  This puts
> the
> > burden of QA/Testing mostly on the vendors and distros, which I think is
> > fine.  They can either choose to work with the Vendor and verify the
> > versions for backport on a regular basis, or they can choose to ignore
> them
> > and NOT provide them to their customers.
> >
> > I don't think this is an awful idea, and it's very far from the "drivers
> > out of tree" discussion.  The feedback from the distro maintainers during
> > the week was that they would gladly welcome a model where they could pull
> > updates from a single driver branch on a regular basis or as needed for
> > customers that are on *unsupported* releases and for whom a fix exists.
> > Note that support cycles are not the same for the distros as they are of
> > the upstream community.  This is in no way proposing a change to the
> > existing support time frames or processes we have now, and in that way it
> > differs significantly from proposals and discussions we've had in the
> past.
> >
> > The basic idea here was to eliminate the proliferation of custom backport
> > patches scattered all over the web, and to ease the burden for distros
> and
> > vendors in supporting their customers.  I think there may be some
> concepts
> > to iron out and I certainly understand some of the comments regarding
> being
> > disingenuous regarding what we're advertising.  I think that's a
> > misunderstanding of the intent however, the proposa

Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread John Griffith
On Tue, Aug 9, 2016 at 4:53 PM, Sean McGinnis  wrote:

> .
> >
> > Mike, you must have left the midcycle by the time this topic came
> > up. On the issue of out-of-tree drivers, I specifically offered this
> > proposal (a community managed mechanism for distributing driver
> > bugfix backports) as an compromise alternative to try to address the
> > needs of both camps. Everyone who was in the room at the time (plus
> > DuncanT who wasn't) agreed that if we had that (a way to deal with
> > backports) that they wouldn't want drivers out of the tree anymore.
> >
> > Your point of view wasn't represented so go ahead and explain why,
> > if we did have a reasonable way for bugfixes to get backported to
> > the releases customers actually run (leaving that mechanism
> > unspecified for the time being), that you would still want the
> > drivers out of the tree.
> >
> > -Ben Swartzlander
>
> The conversation about this started around the 30 minute point here if
> anyone is interested in more of the background discussion on this:
>
> https://www.youtube.com/watch?v=g3MEDFp08t4
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

I don't think anybody is whining at all here. We had a fairly productive
discussion at the mid-cycle surrounding this topic, and I do think there
are some valid advantages to this approach regardless of the QA question.
Note that, as has been pointed out, we weren't talking about or considering
advertising this *special* branch as tested by the standard means, gate CI,
etc.

We did discuss this though mostly in the context of helping the package
maintainers and distributions.  The fact is that many of us currently offer
backports of fixes in our own various github accounts.  That's fine and it
works well for many.  The problem we were trying to address however is that
this practice is rather problematic for the distros.  For example RHEL,
Helion or Mirantis are most certainly not going to run around cherry
picking change sets from random github repos scattered around.

The context of the discussion was that by having a long lived *driver*
(emphasis on driver) branch there would be a single location and an *easy*
method of contact and communication regarding fixes to drivers that may be
available for stable branches that are no longer supported.  This puts the
burden of QA/Testing mostly on the vendors and distros, which I think is
fine.  They can either choose to work with the Vendor and verify the
versions for backport on a regular basis, or they can choose to ignore them
and NOT provide them to their customers.

I don't think this is an awful idea, and it's very far from the "drivers
out of tree" discussion.  The feedback from the distro maintainers during
the week was that they would gladly welcome a model where they could pull
updates from a single driver branch on a regular basis or as needed for
customers that are on *unsupported* releases and for whom a fix exists.
Note that support cycles are not the same for the distros as they are of
the upstream community.  This is in no way proposing a change to the
existing support time frames or processes we have now, and in that way it
differs significantly from proposals and discussions we've had in the past.

The basic idea here was to eliminate the proliferation of custom backport
patches scattered all over the web, and to ease the burden for distros and
vendors in supporting their customers.  I think there may be some concepts
to iron out, and I certainly understand some of the comments about being
disingenuous in what we're advertising.  I think that's a misunderstanding
of the intent, however: the proposal is not to extend the support life of
stable branches from an upstream or community perspective; instead it is
geared at consolidation and tracking of drivers.

If this isn't something we can come to an agreement on as a community, then
I'd suggest we just create our own repo on github outside of upstream and
have it serve the same purpose.

Thanks,
John ​
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread John Griffith
On Thu, Aug 4, 2016 at 9:57 AM, Fox, Kevin M  wrote:

> Ok. I'll play devils advocate here and speak to the other side of this,
> because you raised an interesting issue...
>
> Ceph is outside of the tent. It provides a (mostly) api compatible
> implementation of the swift api (radosgw), and it is commonly used in
> OpenStack deployments.
>
> Other OpenStack projects don't take it into account because its not a big
> tent thing, even though it is very common. Because of some rules about only
> testing OpenStack things, radosgw is not tested against even though it is
> so common. This causes odd breakages at times that could easily be
> prevented, but for procedural things around the Big Tent.
>
I think this statement needs some fact checking.  The reality is that Ceph
is a PERFECT example of a valuable and widely used project in the OpenStack
ecosystem that does not officially reside in the ecosystem.  I can assure
you that Cinder and Nova in particular take it into account, almost to the
point of being detrimental to other storage options.

I suspect part of your view stems from issues prior to Ceph being an active
part of CI, which it now is.  The question of testing isn't the same here
IMO.  In the case of block storage in particular we have all drivers (none
of which, apart from the reference LVM driver, are part of OpenStack
governance) running CI testing.  Granted it's not pretty, but there's
nothing keeping them from implementing CI, running and reporting.  In the
case of open source software based options like Ceph, Gluster, Sheepdog,
etc., those are all projects maintained outside of OpenStack governance,
BUT they all have Infra resources running CI.
​


>
> I do think this should be fixed before we advocate single vendor projects
> exit the big tent after some time. As the testing situation may be made
> worse.
>
> Thanks,
> Kevin
> 
> From: Thierry Carrez [thie...@openstack.org]
> Sent: Thursday, August 04, 2016 5:59 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tc] persistently single-vendor projects
>
> Thomas Goirand wrote:
> > On 08/01/2016 09:39 AM, Thierry Carrez wrote:
> >> But if a project is persistently single-vendor after some time and
> >> nobody seems interested to join it, the technical value of that project
> >> being "in" OpenStack rather than a separate project in the OpenStack
> >> ecosystem of projects is limited. It's limited for OpenStack (why
> >> provide resources to support a project that is obviously only beneficial
> >> to one organization ?), and it's limited to the organization itself (why
> >> go through the OpenStack-specific open processes when you could shortcut
> >> it with internal tools and meetings ? why accept the oversight of the
> >> Technical Committee ?).
> >
> > A project can still be useful for everyone with a single vendor
> > contributing to it, even after a long period of existence. IMO that's
> > not the issue we're trying to solve.
>
> I agree with that -- open source projects can be useful for everyone
> even if only a single vendor contributes to it.
>
> But you seem to imply that the only way an open source project can be
> useful is if it's developed as an OpenStack project under the OpenStack
> Technical Committee governance. I'm not advocating that these projects
> should stop or disappear. I'm just saying that if they are very unlikely
> to grow a more diverse affiliation in the future, they derive little
> value in being developed under the OpenStack Technical Committee
> oversight, and would probably be equally useful if developed outside of
> OpenStack official projects governance. There are plenty of projects
> that are useful to OpenStack that are not developed under the TC
> governance (libvirt, Ceph, OpenvSwitch...)
>
> What is the point for a project to submit themselves to the oversight of
> a multi-organization Technical Committee if they always will be the
> result of the efforts of a single organization ?
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting Fuel CCP (docker/k8s) kicked off

2016-07-29 Thread John Griffith
On Thu, Jul 28, 2016 at 9:24 AM, Sergey Lukjanov 
wrote:

> Hi folks,
>
> First of all, let me say that it’s a marketing announcement and as all of
> you know such announcements aren’t precise from a technical side.
> Personally I’ve seen this paper first time on TechCrunch.
>
> First of all, fuel-ccp-* are a set of OpenStack projects and everyone is
> welcome to participate. All the regular community process(es) for other
> openstack projects apply to fuel-ccp-*. At the moment, in spite of what the
> marketing announcements say, it’s a bunch of people from Mirantis working
> on the repositories. Please think of this as an incubation process to try
> and see what the next incarnation of Fuel would look like in the future.
>
> Regardless of what was written, we aren’t applying to the Big Tent right
> now (as it was initially said explicitly when we were creating repos and
> it’s still valid). The state of the repos is still experimental, but I’d
> like to make things clear and confirm that Mirantis has chosen to use
> containers for infrastructure and OpenStack components and to use
> Kubernetes as the orchestrator of those containers. In the future, the Fuel
> OpenStack installer will use these containerized OpenStack/infrastructure
> component images. There are many questions to be solved and things to be
> done first in Fuel CCP, such as:
>
> * Freeze technologies and approaches, such as repos structure, image
> layers, etc.
> * Cleanup deprecated PoC stuff from the code
> * Implement basic test coverage for all parts of the project
> * Create Release Management approach
> * Consume OpenStack CI to run tests
> * Fully implement 3rd party CI (with end-to-end integration tests only)
> * Make at least initial documentation and ensure that it’s deployable
> using this doc
>
> and etc. In general, I would not expect us to seriously consider applying
> to the Big Tent for another 5-6 months at the earliest.
>
> Regarding the Fuel mission, that is:
>
> To streamline and accelerate the process of deploying, testing and
> maintaining various configurations of OpenStack at scale.
>
> I think that it’s 100% aligned with that we’re doing in Fuel CCP.
>
​All the other stuff aside, the above was my take away from the first or
second message in this thread so I fail to understand the debate around
this.  The mission statement is simply around deploying, how that deploy
mechanism is implemented (Kubernetes, Ironic whatever) doesn't really seem
to be an issue here.

The point about API's that Jay Pipes made was spot on in my opinion as
well.  We're not talking about service or project API's that the end users
or operators deal with on a daily basis.  Until there's a standard install
API I fail to see the argument against this.

Other questions about the 4 opens etc seem to have been answered, but I
don't have any real insight here.  Personally I'm looking forward to seeing
if somebody can come up with a reliable and relatively easy deployment
tool.  If it means competition then that's great as far as I'm concerned.
I'll use whichever one doesn't make me want to rip my hair out.

​


>
> About the Kolla usage in Fuel CCP, I agree with Kevin and we can see in
> future that Fuel CCP will be potentially using Kolla containers, it’ll
> require some time anyway, but it doesn’t mean that we stop considering it.
> And as Kevin correctly noticed, we did it already one time with Fuel
> adopting upstream Puppet modules and contributing actively to them.
>
> Thanks.
>
>
> On Thu, Jul 28, 2016 at 7:43 AM, Flavio Percoco  wrote:
>
>> On 28/07/16 04:45 +, Steven Dake (stdake) wrote:
>>
>>>
>>>
>>> On 7/27/16, 2:12 PM, "Jay Pipes"  wrote:
>>>
>>> On 07/27/2016 04:42 PM, Ed Leafe wrote:

> On Jul 27, 2016, at 2:42 PM, Fox, Kevin M  wrote:
>
> Its not an "end user" facing thing, but it is an "operator" facing
>> thing.
>>
>
> Well, the end user for Kolla is an operator, no?
>
> I deploy kolla containers today on non kolla managed systems in
>> production, and rely on that api being consistent.
>>
>> I'm positive I'm not the only operator doing this either. This sounds
>> like a consumable api to me.
>>
>
> I don't think that an API has to be RESTful to be considered an
> interface for which we should avoid duplication.
>

 Application *Programming* Interface. There's nothing that is being
 *programmed* or *called* in Kolla's image definitions.

 What Kolla is/has is not an API. As Stephen said, it's more of an
 Application Binary Interface (ABI). It's not really an ABI, though, in
 the traditional sense of the term that I'm used to.

 It's an agreed set of package bases, installation procedures/directories
 and configuration recipes for OpenStack and infrastructure components.

>>>
>>> Jay,
>>>
>>> From my perspective, this isn't about 

Re: [openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

2016-06-24 Thread John Griffith
On Fri, Jun 24, 2016 at 2:19 AM, Daniel P. Berrange 
wrote:

> On Thu, Jun 23, 2016 at 09:09:44AM -0700, Walter A. Boring IV wrote:
> >
> > volumes connected to QEMU instances eventually become directly connected?
> >
> > > Our long term goal is that 100% of all network storage will be
> > > connected
>
Oh, I didn't know this at all.  Is this something Nova has been working on
for a while?  I'd love to hear more about the reasoning, the plan, etc.  It
would also be really neat to have an opportunity to participate.
​


> > > to directly by QEMU. We already have the ability to partially do this
> > > with
> > > iSCSI, but it is lacking support for multipath. As & when that gap is
> > > addressed though, we'll stop using the host OS for any iSCSI stuff.
>
​
Any chance anybody has any insight on how to make this work?  I tried
configuring this last week and it appears to be broken in a few places.

> >
> > > So if you're requiring access to host iSCSI volumes, it'll work in the
> > > short-medium term, but in the medium-long term we're not going to use
> > > that so plan accordingly.
> >
> > What is the benefit of this largely monolithic approach?  It seems that
> > moving everything into QEMU is diametrically opposed to the unix model
> > itself and
> > is just a re-implementation of what already exists in the linux world
> > outside of QEMU.
>
> There are many benefits to having it inside QEMU. First it gives us
> improved isolation between VMs, because we can control the network
> I/O directly against the VM using cgroup resource controls.

> It gives
> us improved security, particularly in combination with LUKS encryption
> since the unencrypted block device is not directly visible / accessible
> to any other process. It gives us improved reliability / managability
> since we avoid having to spawn the iscsi client tools which have poor
> error reporting and have been frequent sources of instability in our
>
True, the iSCSI tools aren't the greatest.
​


> infrastructure (eg see how we have to blindly re-run the same command
> many times over because it randomly times out). It will give us improved
> I/O performance because of a shorter I/O path to get requests from QEMU
> out to the network.
>
I'd love to hear more on the design and how it all comes together,
particularly the performance info.  Like I said, I tried to set it up
against master, but it seems I'm either missing something in the config or
it's broken.


>
> NB, this is not just about iSCSI, the same is all true for RBD where
> we've also stopped using in-kernel RBD client and do it all in QEMU.
>
> > Does QEMU support hardware initiators? iSER?


> No, this is only for case where you're doing pure software based
> iSCSI client connections. If we're relying on local hardware that's
> a different story.
>
I'm confused, then: what's the iser driver referenced in the patch commit
message, https://review.openstack.org/#/c/135854/ ?

So there's a different story for that?

>
> >
> > We regularly fix issues with iSCSI attaches in the release cycles of
> > OpenStack,
> > because it's all done in python using existing linux packages.  How often
>
> This is a great example of the benefit that in-QEMU client gives us. The
> Linux iSCSI client tools have proved very unreliable in use by OpenStack.
> This is a reflection of the very architectural approach. We have individual
> resources needed by distinct VMs, but we're having to manage them as a host
> wide resource and that's creating us unneccessary complexity and having a
> poor effect on our reliability overall.
>
> > are QEMU
> > releases done and upgraded on customer deployments vs. python packages
> > (os-brick)?
>
> We're removing the entire layer of instability by removing the need to
> deal with any command line tools, and thus greatly simplifying our
> setup on compute nodes. No matter what we might do in os-brick it'll
> never give us a simple or reliable system - we're just papering over
> the flaws by doing stuff like blindly re-trying iscsi commands upon
> failure.
>
This all sounds like it could be a good direction to go in.  I'd love to
see more info on the plan, how it works, and how to test it out a bit.  I
didn't find a spec; are there any links, reviews, or config info available?

I wish I would've caught this on the ML or IRC or wherever; I would've
loved to have participated a bit.


> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

2016-06-16 Thread John Griffith
On Wed, Jun 15, 2016 at 5:59 PM, Preston L. Bannister 
wrote:

> QEMU has the ability to directly connect to iSCSI volumes. Running the
> iSCSI connections through the nova-compute host *seems* somewhat
> inefficient.
>

I know from tests I've run in the past that virtio actually does a really
good job here.  Granted, it's been a couple of years since I've spent any
time looking at this, so I really can't say definitively without looking
again.


>
> There is a spec/blueprint and implementation that landed in Kilo:
>
>
> https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html
> https://blueprints.launchpad.net/nova/+spec/qemu-built-in-iscsi-initiator
>
> From looking at the OpenStack Nova sources ... I am not entirely clear on
> when this behavior is invoked (just for Ceph?), and how it might change in
> future.
>

I actually hadn't seen that, glad you pointed it out :)  I haven't tried
configuring it, but I will try to do so and see what sort of differences in
performance there are.  One other thing to keep in mind (I could be
mistaken, but...): the last time I looked at this it wasn't vastly
different from the model we use now.  It's not actually using an iSCSI
initiator on the Instance; it's still using an initiator on the compute
node and passing the device in, I believe.  I'm sure somebody will correct
me if I'm wrong here.

I'm not sure what your reference to Ceph has to do with this.  This appears
to be a Cinder iSCSI mechanism.  You can see how to configure it in the
commit message (https://review.openstack.org/#/c/135854/19; again, I plan
to try it out).
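For anyone trying to picture the difference: with the host-attached model
libvirt gets a plain block device that the compute node's initiator
created, while with the built-in client libvirt describes the iSCSI target
itself as a network disk and QEMU logs in directly.  A rough sketch of the
two disk element shapes is below (the IQN, portal and path are made up, and
the exact XML Nova generates depends on the volume driver in use, so treat
it as illustrative):

# Sketch only: the two libvirt <disk> shapes being contrasted in this thread.
import xml.etree.ElementTree as ET

def host_attached_disk(dev_path):
    # Host-side initiator (os-brick/iscsiadm) created the device; QEMU just
    # sees a local block device.
    disk = ET.Element("disk", type="block", device="disk")
    ET.SubElement(disk, "source", dev=dev_path)
    ET.SubElement(disk, "target", dev="vdb", bus="virtio")
    return disk

def qemu_builtin_iscsi_disk(portal, iqn, lun):
    # QEMU's built-in iSCSI client: no host-side block device at all.
    disk = ET.Element("disk", type="network", device="disk")
    src = ET.SubElement(disk, "source", protocol="iscsi",
                        name="%s/%d" % (iqn, lun))
    ET.SubElement(src, "host", name=portal, port="3260")
    ET.SubElement(disk, "target", dev="vdb", bus="virtio")
    return disk

print(ET.tostring(host_attached_disk(
    "/dev/disk/by-path/ip-10.0.0.5:3260-iscsi-iqn.2010-10.org.example:vol1-lun-1")).decode())
print(ET.tostring(qemu_builtin_iscsi_disk(
    "10.0.0.5", "iqn.2010-10.org.example:vol1", 1)).decode())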

>
> Looking for a general sense where this is headed. (If anyone knows...)
>

It seems like you should be able to configure it and run it, assuming the
work is actually done and hasn't broken while sitting.
​


>
> If there is some problem with QEMU and directly attached iSCSI volumes,
> that would explain why this is not the default. Or is this simple inertia?
>

Virtio is actually super flexible and lets us do all sorts of things with
various connector types.  I think you'd have to have some pretty compelling
data to change the default here.  Another thing to keep in mind, even if we
just consider iSCSI and leave out FC and other protocols: one thing we
absolutely wouldn't want is to give Instances direct access to the iSCSI
network.  That raises all sorts of security concerns for folks running
public clouds.  It also means heavier-weight Instances due to additional
networking requirements, the iSCSI stack, etc.  More importantly, the last
time I looked, hot-plugging didn't work with this option, but again I admit
it's been a long time since I've looked at it and my memory isn't always
that great.

>
>
> I have a concrete concern. I work for a company (EMC) that offers backup
> products, and we now have backup for instances in OpenStack. To make this
> efficient, we need to collect changed-block information from instances.
>

Ahh, ok, so you don't really have a "concrete concern" about using the
virtio driver or the way things work... or even any data showing that one
performs better or worse than the other.  What you do have, apparently, is
a solution you'd like to integrate and sell with OpenStack.  Fair enough,
but we should probably be clear about the motivation until there's some
data (there very well may be compelling reasons to change this).

>
> 1)  We could put an intercept in the Linux kernel of the nova-compute host
> to track writes at the block layer. This has the merit of working for
> containers, and potentially bare-metal instance deployments. But is not
> guaranteed for instances, if the iSCSI volumes are directly attached to
> QEMU.
>
> 2)  We could use the QEMU support for incremental backup (first bit landed
> in QEMU 2.4). This has the merit of working with any storage, by only for
> virtual machines under QEMU.
>
> As our customers are (so far) only asking about virtual machine backup, I
> long ago settled on (2) as most promising.
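
(As I understand it, option 2 is the QMP dirty-bitmap flow, which looks
roughly like the following; the node name and target path are made up for
illustration, and it assumes a QEMU new enough to have the incremental sync
mode.)

  { "execute": "block-dirty-bitmap-add",
    "arguments": { "node": "drive0", "name": "bitmap0" } }

  { "execute": "drive-backup",
    "arguments": { "device": "drive0", "bitmap": "bitmap0",
                   "sync": "incremental", "target": "/backups/inc.0.qcow2",
                   "format": "qcow2" } }

A first full backup seeds the chain; each later drive-backup with
sync=incremental copies only the blocks the bitmap has flagged since the
previous run.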
>
> What I cannot clearly determine is where (1) will fail. Will all iSCSI
> volumes connected to QEMU instances eventually become directly connected?
>
>
> Xiao's unanswered query (below) presents another question. Is this a
> site-choice? Could I require my customers to configure their OpenStack
> clouds to always route iSCSI connections through the nova-compute host? (I
> am not a fan of this approach, but I have to ask.)
>

Certainly seems like you could.  The question is: would the distro in use
support it?  Also, would it work with multi-backend configs?  Honestly, it
sounds like there's a lot of data collection and analysis that you could do
here and contribute back to the community.  Perhaps you or Xiao should try
it out?

>
> To answer Xiao's question, can a site configure their cloud to *always*
> directly connect iSCSI volumes to QEMU?
>
>
>
> On Tue, Feb 16, 2016 at 4:54 AM, Xiao Ma (xima2)  wrote:
>
>> Hi, All
>>
>> I want to 

Re: [openstack-dev] [nova] I'm going to expire open bug reports older than 18 months.

2016-05-24 Thread John Griffith
On Tue, May 24, 2016 at 1:34 AM, Duncan Thomas 
wrote:

> Cinder bugs list was far more manageable once this had been done.
>
> Is it worth sharing the tool for this? I realise it's fairly trivial to
> write one, but some standardisation on the comment format etc seems
> valuable, particularly for Q/A folks who work between different projects.
>
Consistency sure seems like a nice thing to me.
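
Something along these lines is roughly what I'd expect the tool to look like
(an untested launchpadlib sketch; the project name, cutoff and comment text
are just placeholders):

  # Untested sketch: expire old, unassigned bug reports via launchpadlib.
  from datetime import datetime, timedelta

  import pytz
  from launchpadlib.launchpad import Launchpad

  EXPIRY_COMMENT = ("This is an automated cleanup. This bug report got closed "
                    "because it is older than 18 months and there is no open "
                    "code change to fix it.")

  def expire_old_bugs(project_name='nova', months=18, dry_run=True):
      lp = Launchpad.login_with('bug-expirer', 'production')
      project = lp.projects[project_name]
      cutoff = datetime.now(pytz.UTC) - timedelta(days=months * 30)

      for task in project.searchTasks(status=['New', 'Confirmed', 'Triaged']):
          bug = task.bug
          if task.assignee is not None or bug.date_created > cutoff:
              continue
          # Skip anything somebody explicitly re-confirmed (slow, but it's
          # a batch job).
          if any('CONFIRMED FOR:' in (m.content or '') for m in bug.messages):
              continue
          print('Expiring #%s: %s' % (bug.id, bug.title))
          if not dry_run:
              bug.newMessage(content=EXPIRY_COMMENT)
              task.status = "Won't Fix"
              task.importance = 'Undecided'
              task.lp_save()

  if __name__ == '__main__':
      expire_old_bugs()

A shared version of something like that, with the comment text agreed on,
would keep the format consistent across projects.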


>
> On 23 May 2016 at 14:02, Markus Zoeller 
> wrote:
>
>> TL;DR: Automatic closing of 185 bug reports which are older than 18
>> months in the week R-13. Skipping specific bug reports is possible. A
>> bug report comment explains the reasons.
>>
>>
>> I'd like to get rid of more clutter in our bug list to make it more
>> comprehensible by a human being. For this, I'm targeting our ~185 bug
>> reports which were reported 18 months ago and still aren't in progress.
>> That's around 37% of open bug reports which aren't in progress. This
>> post is about *how* and *when* I do it. If you have very strong reasons
>> to *not* do it, let me hear them.
>>
>> When
>> 
>> I plan to do it in the week after the non-priority feature freeze.
>> That's week R-13, at the beginning of July. Until this date you can
>> comment on bug reports so they get spared from this cleanup (see below).
>> Beginning from R-13 until R-5 (Newton-3 milestone), we should have
>> enough time to gain some overview of the rest.
>>
>> I also think it makes sense to make this a repeated effort, maybe after
>> each milestone/release or monthly or daily.
>>
>> How
>> ---
>> The bug reports which will be affected are:
>> * in status: [new, confirmed, triaged]
>> * AND without assignee
>> * AND created at: > 18 months
>> A preview of them can be found at [1].
>>
>> You can spare bug reports if you leave a comment there which says
>> one of these (case-sensitive flags):
>> * CONFIRMED FOR: NEWTON
>> * CONFIRMED FOR: MITAKA
>> * CONFIRMED FOR: LIBERTY
>>
>> The expired bug report will have:
>> * status: won't fix
>> * assignee: none
>> * importance: undecided
>> * a new comment which explains *why* this was done
>>
>> The comment the expired bug reports will get:
>> This is an automated cleanup. This bug report got closed because
>> it is older than 18 months and there is no open code change to
>> fix this. After this time it is unlikely that the circumstances
>> which lead to the observed issue can be reproduced.
>> If you can reproduce it, please:
>> * reopen the bug report
>> * AND leave a comment "CONFIRMED FOR: "
>>   Only still supported release names are valid.
>>   valid example: CONFIRMED FOR: LIBERTY
>>   invalid example: CONFIRMED FOR: KILO
>> * AND add the steps to reproduce the issue (if applicable)
>>
>>
>> Let me know if you think this comment gives enough information how to
>> handle this situation.
>>
>>
>> References:
>> [1] http://45.55.105.55:8082/bugs-dashboard.html#tabExpired
>>
>> --
>> Regards, Markus Zoeller (markus_z)
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> --
> Duncan Thomas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-operators][cinder] max_concurrent_builds in Cinder

2016-05-23 Thread John Griffith
On Mon, May 23, 2016 at 8:32 AM, Ivan Kolodyazhny  wrote:

> Hi developers and operators,
> I would like to get any feedback from you about my idea before I start
> work on a spec.
>
> In Nova, we've got the max_concurrent_builds option [1] to set the 'Maximum
> number of instance builds to run concurrently' per compute node. There is no
> equivalent in Cinder.
>
> Why do we need it for Cinder? IMO, it could help us to address the following
> issues:
>
>- Creation of N volumes at the same time increases a lot of resource
>usage by cinder-volume service. Image caching feature [2] could help us a
>    bit in the case when we create a volume from an image. But we still have
>    to upload N images to the volume backend at the same time.
>    - Deletion of N volumes in parallel. Usually it's not a very hard task
>    for Cinder, but if you have to delete 100+ volumes at once, you can hit
>    different issues with DB connections, CPU and memory usage. In the case of
>    LVM, it also could use the 'dd' command to clean up volumes.
>    - It will be some kind of load balancing in HA mode: if a cinder-volume
>    process is busy with current operations, it will not pick up the message
>    from RabbitMQ and another cinder-volume service will do it.
>    - From the user's perspective, it seems better to create/delete N volumes
>    a bit more slowly than to fail after X volumes were created/deleted.
>
>
> [1]
> https://github.com/openstack/nova/blob/283da2bbb74d0a131a66703516d16698a05817c7/nova/conf/compute.py#L161-L163
> [2]
> https://specs.openstack.org/openstack/cinder-specs/specs/liberty/image-volume-cache.html
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Just curious about a couple of things: Is this attempting to solve a
problem in the actual Cinder Volume Service, or is this trying to solve
problems with backends that can't keep up and deliver resources under heavy
load?  I get the copy-image-to-volume case, that's a special case that certainly
does impact Cinder services and the Cinder node itself, but there's already
throttling going on there, at least in terms of IO allowed.

Also, I'm curious... would the existing API Rate Limit configuration achieve
the same sort of thing you want to do here?  Granted it's not selective, but
maybe it's worth mentioning.

If we did do something like this I would like to see it implemented as a
driver config; but that wouldn't help if the problem lies in the Rabbit or
RPC space.  That brings me back to wondering about exactly where we want to
solve problems and exactly which ones.  If delete is causing problems like you
describe I'd suspect we have an issue in our DB code (too many calls to
start with) and that we've got some overhead elsewhere that should be
eradicated.  Delete is a super simple operation on the Cinder side of
things (and most back ends) so I'm a bit freaked out thinking that it's
taxing resources heavily.
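
To illustrate what I mean by a driver-side knob (the option name here is
hypothetical, nothing like it exists today), it really only amounts to a
semaphore around the expensive operations:

  # Hypothetical sketch only; 'max_concurrent_volume_ops' is not a real
  # Cinder option, it's just here to show the shape of the idea.
  import eventlet

  class ThrottledOps(object):
      def __init__(self, max_concurrent_volume_ops=10):
          # One semaphore per backend/driver instance.
          self._semaphore = eventlet.semaphore.Semaphore(
              max_concurrent_volume_ops)

      def run(self, fn, *args, **kwargs):
          # Callers beyond the limit wait their turn instead of piling
          # onto the backend (or the c-vol node) all at once.
          with self._semaphore:
              return fn(*args, **kwargs)

Whether something like that belongs in the driver, the manager, or nowhere at
all is really the question I'm asking above.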

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]The backend-group concept in Cinder

2016-05-18 Thread John Griffith
On Wed, May 18, 2016 at 9:45 PM, chenying  wrote:

> Hi all:
> I want to know whether the backend-group concept has been
> discussed, or whether there are other recommendations for us?
> The backend-group concept can be regarded as a mechanism to
> manage cinder backends of the same type.
>

Sorry, I'm not quite following the analogy here.  Cinder allows multiple
backends and always has (similar to compute nodes in Nova), and we also
allow you to run them on a single node (particularly for cases like
external storage backends, where there's no need to deploy a node just to
configure them).


> (The backend-group concept is like a Nova Aggregate; the Flavor in Nova
> corresponds to the VolumeType in Cinder.)
> While backends are visible to users, backend-groups are only visible to
> the admin.
>

So actually, in Cinder, backends are not visible to regular users.  We
abstract any/all devices from the user.  Not quite sure I follow.


>
> We can use this mechanism to dynamically add/delete one backend
> from a backend-group without restarting volume services.
>
>    Use case 1:
>    The backends in backend-group-1 have SSD disks and more memory. The
>    backend-group-1 can provide higher performance to the user.
>      The other backends in backend-group-2 have HDD disks and more
> capacity. The backend-group-2 can provide more storage space to the user.
>
Not sure, but we sort of do some of this already via the filter scheduler.
An Admin can define various types (they may be set up based on performance,
SSD, spinning rust, etc.).  Those types are then given arbitrary definitions
via extra specs (again, details hidden from the end user) and he/she can
create volumes of a specific type.


>    Use case 2:
>      The backend-group is set with specific metadata/extra-specs
> (capabilities). Each node can have multiple backend-groups, each backend-group
> can have multiple key-value pairs, and the same key-value pair can be
> assigned to multiple backend-groups. This information can be used in
> the scheduler to enable advanced scheduling;
>      the scheduler will select backends from the backend-group only.
>

We have the capability to do this already, at least to an extent.  Perhaps
if you provide more details on this use case I can better understand.  It
is possible today to group multiple backends into a single Volume Type.  So,
for example, you could say "I want all backends with capability XYZ" and the
filter scheduler will handle that for you already (well, there are some
details on what those capabilities are currently).
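
To make that concrete, this is roughly what grouping multiple backends behind
one type looks like today (backend and type names are made up; nothing new is
being proposed here):

  # cinder.conf
  [DEFAULT]
  enabled_backends = lvm-ssd-1,lvm-ssd-2

  [lvm-ssd-1]
  volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
  volume_backend_name = ssd_pool

  [lvm-ssd-2]
  volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
  volume_backend_name = ssd_pool

  # then, as admin:
  $ cinder type-create fast
  $ cinder type-key fast set volume_backend_name=ssd_pool
  $ cinder create --volume-type fast 10

The scheduler picks any backend reporting that backend name (or matching
whatever other extra specs you set), which covers a good chunk of use case 1
already.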


>
>
>
> Thanks
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
I'd be interested in hearing more about what you're thinking.  The concept
of dynamically adding/removing backends I think I kinda get; the risk there
is dealing with the existing data (volumes) on the backend when you remove
it.  We could do a migration, but that gets kinda ugly sometimes.  One
thing I have always wanted to see is a way to dynamically add/remove
backends; by dynamically I mean without restarting the c-vol services.  I'm
not sure there's a great use case or need for it though, so I've never
really spent much time on it.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] disabling deprecated APIs by config?

2016-05-18 Thread John Griffith
On Wed, May 18, 2016 at 9:20 AM, Sean Dague  wrote:

> nova-net is now deprecated - https://review.openstack.org/#/c/310539/
>
> And we're in the process in Nova of doing some spring cleaning and
> deprecating the proxies to other services -
> https://review.openstack.org/#/c/312209/
>
> At some point in the future after deprecation the proxy code is going to
> stop working. Either accidentally, because we're not going to test or
> fix this forever (and we aren't going to track upstream API changes to
> the proxy targets), or intentionally when we decide to delete it to make
> it easier to address core features and bugs that everyone wants addressed.
>
> However, the world moves forward slowly. Consider the following scenario.
>
> We delete nova-net & the network proxy entirely in Peru (a not entirely
> unrealistic idea). At that release there are a bunch of people just
> getting around to Newton. Their deployments allow all these things to
> happen which are going to 100% break when they upgrade, and people are
> writing more and more OpenStack software every cycle.
>
> How do we signal to users this kind of deprecation? Can we give sites
> tools to help prevent new software being written to deprecated (and
> scheduled for deletion) APIs?
>
> One idea was a "big red switch" in the format of a config option
> ``disable_deprecated_apis=True`` (defaults to False). Which would set
> all deprecated APIs to 404 routes.
>
> One of the nice ideas here is this would allow some API servers to have
> this set, and others not. So users could point to the "clean" API
> server, figure out that they will break, but the default API server
> would still support these deprecated APIs. Or, conversely, the default
> could be the clean API server, and a legacy API server endpoint could be
> provided for projects that really needed it that included these
> deprecated things for now. Either way it would allow some site assisted
> transition. And be something like the -Werror flag in gcc.
>
> In the Nova case the kinds of things ending up in this bucket are going
> to be interfaces that people *really* shouldn't be using any more. Many
> of them data back to when OpenStack was only 2 projects, and the concept
> of splitting out function wasn't really thought about (note: we're
> getting ahead of this one for the 'placement' rest API, so it won't have
> any of these issues). At some point this house cleaning was going to
> have to happen, and now seems to be the time to do get it rolling.
>
> Feedback on this idea would be welcomed. We're going to deprecate the
> proxy APIs regardless, however disable_deprecated_apis is it's own idea
> and consequences, and we really want feedback before pushing forward on
> this.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
I like the idea of a switch in the config file.  To Dean's point, would it
also be worth considering a "list-deprecated-calls" option that could give him a
list without having to do the round trip every time?  That might not
actually solve anything for him, but perhaps something along those lines
would help?
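
For what it's worth, the "big red switch" doesn't have to be much code; a tiny
piece of WSGI middleware along these lines would do it (hypothetical sketch,
the route list is purely illustrative, and wiring it into paste is left out):

  # Hypothetical sketch of middleware that 404s deprecated routes when a
  # single config switch is flipped; the prefix list is illustrative only.
  import webob.dec
  import webob.exc

  DEPRECATED_PREFIXES = ('/os-networks', '/os-tenant-networks', '/images')

  class DisableDeprecatedAPIs(object):
      def __init__(self, app, disable_deprecated_apis=False):
          self.app = app
          self.disabled = disable_deprecated_apis

      @webob.dec.wsgify
      def __call__(self, req):
          # With the switch on, deprecated routes simply disappear.
          if self.disabled and req.path_info.startswith(DEPRECATED_PREFIXES):
              return webob.exc.HTTPNotFound()
          return req.get_response(self.app)

Running two API endpoints, one with the switch on and one without, would give
sites the "clean" server described above without a flag day.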
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] [glance] Answers to some questions about Glance

2016-05-17 Thread John Griffith
Thanks Brian

On Tue, May 17, 2016 at 6:54 AM, Brian Rosmaita <
brian.rosma...@rackspace.com> wrote:

> Subject was: Re: [openstack-dev] [tc] [all] [glance] On operating a high
> throughput or otherwise team
>
> Un-hijacking the thread.  Here are some answers to John's questions, hope
> they are helpful.
>
> On 5/16/16, 9:06 PM, "John Griffith" <john.griffi...@gmail.com> wrote:
>
> Hey,
>
> Maybe not related, but maybe it is.  After spending the past couple of
> hours trying to help a customer with a Glance issue I'm a bit... well
> annoyed with Glance.  I'd like to chime in on this thread.  I'm honesty not
> entirely sure what the goal of the thread is, but honestly there's
> something rather important to me that I don't really seem to see being
> called out.
>
> Is there any way we could stop breaking the API and its behaviors?  Is
> there any way we can fix some of the issues with respect to how things work
> when folks configure multiple Glance repos?
>
> Couple of examples:
> 1. switching from "is_public=true" to "visibility=public"
>
>
> This was a major version change in the Images API.  The 'is_public'
> boolean is in the original Images v1 API, 'visibility' was introduced with
> the Images v2 API in the Folsom release.  You just need an awareness of
> which version of the API you're talking to.
>
>
>
> Ok, cool, I'm sure there's great reasons, but it really sucks when folks
> update their client and now none of their automation works any longer
>
>
> The Images v1 API went from CURRENT to SUPPORTED in the Kilo release
> (April 30, 2015).  The python-glanceclient began using v2 as the default
> with Change-Id: I09c9e409d149e2d797785591183e06c13229b7f7 on June 21, 2015
> (and hence would have been in release 0.17.2 on July 16, 2015).  So these
> changes have been in the works for a while.
>
> 2. making virtual_size R/O
>
>
> So for some time this was a property that folks could use to set the size
> of an image needed for things like volume creation, cloning etc.  At some
> point though it was decided "this should be read only", ok... well again
> all sorts of code is now broken, including code in Cinder.  It also seems
> there's no way to set it, so it's always there and just Null.  It looked
> like I would be able to set it during image-create maybe... but then I hit
> number 3.
>
>
> The virtual_size was added to the Images v2 API with Change-Id:
> Ie4f58ee2e4da3a6c1229840295c7f62023a95b70 on February 11, 2014.  The commit
> message indicates: "This patch adds the knowledge of a virtual_size field
> to Glance's API v2. The virtual_size field should respect the same rules
> applied to the size field in terms of readability, access control and
> propagation."  The 'size' field has never been end-user modifiable, hence
> the virtual_size is read-only as well.
>
> 3. broken parsing for size and virtual_size
>
> I just started looking at this one and I'm not sure what happened here
> yet, but it seems that these inputs aren't being parsed any more and are
> now raising an exception due to trying to stuff a string into an int field
> in the json schema.
>
>
> Please file a bug with some details when you know more about this one.  It
> sounds like a client issue, but you can put details in the bug report.
>
> So I think if the project wants to move faster that's great, but please, is
> there any chance to value backwards compatibility just a bit more?  I'm
> sure I'm going to get flamed for this email, and the likely response will
> be "you're doing it wrong".  I guess if I'm the only one that has these
> sorts of issues then alright, I deserve the flames, and maybe somebody will
> enlighten me on the proper ways of using Glance so I can be happier and
> more in tune with my Universe.
>
>
> Well, since you asked for enlightenment ... it *is* helpful to make sure
> that you know which version of the Images API you're using.  The Glance
> community values backwards compatibility, but not across major releases.
>
> As I imagine you're aware, Glance is tagged "release:
> cycle-with-milestones", so you can read about any changes in the release
> notes.  Or if you want a quick overview of what major features were added
> to Glance for each release, there was an excellent presentation at the
> Tokyo summit about the evolution of the Glance APIs:
>
> https://www.openstack.org/summit/tokyo-2015/videos/presentation/the-evolution-of-glance-api-on-the-way-from-v1-to-v3
> slides only:
> http://www.slideshare.net/racker_br/the-evolution-of-glance-api-on-the-way-from-v1-to-v3
>
> Before people begin freaking out at the mention of the Images 

Re: [openstack-dev] [tc] [all] [glance] On operating a high throughput or otherwise team

2016-05-16 Thread John Griffith
On Mon, May 16, 2016 at 7:43 PM, Nikhil Komawar <nik.koma...@gmail.com>
wrote:

>
> First of all, are you serious?  This is not the right email thread for
> such issue complaints. This thread is about communication and not just
> for Glance but for [tc] and [all]!
>
> Secondly, you have not mentioned _anything_ about the upgrade
> numbers/releases. You may want to notice the *major* API upgrade to the
> glance-api server you are talking to (hint is_public vs. visibility),
> possibly the reason for all your issues.
>
> Thirdly, when the client is upgraded please check the default version it
> is going to use, it's in the release notes, etc.
>
> We are more than happy to help resolve the issues if done rightly.
> Dumping stuff on the ML after a bad customer call is a bit... sad (if I
> were to put it mildly). You can talk to us separately and we will be
> happy to point you to the operators who are running large scale
> OpenStack installation & actually happy with OpenStack services that
> include Glance.
>
> And, I think you just proved the point I'm making: is the ML the right place
> to share ideas sanely? Do we have etiquette that people actually follow,
> making sure you stay on topic and move forward, rather than diverging and
> creating even more problems?
>
>
> On 5/16/16 9:06 PM, John Griffith wrote:
> >
> > On Mon, May 16, 2016 at 10:10 AM, Flavio Percoco <fla...@redhat.com
> > <mailto:fla...@redhat.com>> wrote:
> >
> > On 16/05/16 00:23 -0700, Clint Byrum wrote:
> >
> > Excerpts from Nikhil Komawar's message of 2016-05-14 17:42:16 -0400:
> >
> > Hi all,
> >
> > Lately I have been involved in discussions that have resulted in giving
> > a wrong idea to the approach I take in operating the (Glance) team(s).
> > While my approach is consistency, coherency and agility in getting
> > things done (especially including the short, mid as well as long term
> > plans), it appears that it wasn't something evident. So, I have decided
> > to write this email so that I can collectively gather feedback and share
> > my thoughts on the right(eous) approach.
> >
> > I find it rather odd that you or anyone believes there is a "right"
> > approach that would work for over 1500 active developers and 200+
> > companies.
> >
> > We can definitely improve upon the aspects of what we have now, by
> > incremental change or revolution. But I doubt we'll ever have a
> > community that is "right" for everyone.
> >
> > Right! To that let's also add we have a bunch of smaller communities
> > within our OpenStack big tent. Each one of these teams might requires a
> > different approach.
> >
> > [snip]
> >
> > We are developing something that is usable, operationally friendly and
> > that it's easier to contribute & maintain but, many strong influencers
> > are missing on the most important need for OpenStack -- efficient way of
> > communication. I think we have the tools and right approach on paper and
> > we've mandated it in the charter too, but that's not enough to operate
> > things. Also, many people like to work on the assumption that all the
> > tools of communication are equivalent or useful and there are no
> > side-effects of using them ever. I strongly disagree. Please find the
> > reason below:
> >
> > I'd be interested to see evidence of anyone believing something close
> > to that, much less "many people".
> >
> > I d

Re: [openstack-dev] [tc] [all] [glance] On operating a high throughput or otherwise team

2016-05-16 Thread John Griffith
On Mon, May 16, 2016 at 10:10 AM, Flavio Percoco  wrote:

> On 16/05/16 00:23 -0700, Clint Byrum wrote:
>
>> Excerpts from Nikhil Komawar's message of 2016-05-14 17:42:16 -0400:
>>
>>> Hi all,
>>>
>>>
>>> Lately I have been involved in discussions that have resulted in giving
>>> a wrong idea to the approach I take in operating the (Glance) team(s).
>>> While my approach is consistency, coherency and agility in getting
>>> things done (especially including the short, mid as well as long term
>>> plans), it appears that it wasn't something evident. So, I have decided
>>> to write this email so that I can collectively gather feedback and share
>>> my thoughts on the right(eous) approach.
>>>
>>>
>> I find it rather odd that you or anyone believes there is a "right"
>> approach that would work for over 1500 active developers and 200+
>> companies.
>>
>> We can definitely improve upon the aspects of what we have now, by
>> incremental change or revolution. But I doubt we'll ever have a
>> community that is "right" for everyone.
>>
>
>
> Right! To that let's also add we have a bunch of smaller communities
> within our
> OpenStack big tent. Each one of these teams might requires a different
> approach.
>
> [snip]
>
>
>>> We are developing something that is usable, operationally friendly and
>>> that it's easier to contribute & maintain but, many strong influencers
>>> are missing on the most important need for OpenStack -- efficient way of
>>> communication. I think we have the tools and right approach on paper and
>>> we've mandated it in the charter too, but that's not enough to operate
>>> things. Also, many people like to work on the assumption that all the
>>> tools of communication are equivalent or useful and there are no
>>> side-effects of using them ever. I strongly disagree. Please find the
>>> reason below:
>>>
>>>
>> I'd be interested to see evidence of anyone believing something close
>> to that, much less "many people".
>>
>> I do believe people don't take into account everyone's perspective and
>> communication style when choosing how to communicate. But we can't really
>> know all of the ways anything we do in a distributed system affects all
>> of the parts. We can reason about it, and I think you've done a fine job
>> of reasoning through some of the points. But you can't know, nor can I,
>> and I don't think anyone is laboring under the illusion that they can
>> know this.
>>
>
> This is a good point and I believe it may explode into several smaller
> discussions. I'll try to light the bomb:
>
> - We used to be a community that assumed good faith about people,
> proposals,
>  etc. Have I been genuine enough to believe this is still true? I certainly
>  work under this *assumption* unless things stink really bad and even
> then,
>  I'd try to work around the issue without unnecessary finger-pointing.
>
> - We've always been a multi-cultural community company-wise, tz-wise,
>  culturally-wise, language-wise, etc. This is super hard to coordinate and
>  finding a right solution for everyone is even harder. For example,
> depending on
>  the communication medium you might have a bad/good impact on non-native
>  English speakers. Spending enough time understanding people's perspective
> is
>  critical to avoid the frustration of non-native English speakers. Asking
> dumb
>  questions that translate to "I don't get your English" when things are
> utterly
>  clear doesn't help, really.
>
> - Is picking the communication tool based on people's preferences rather
> than
>  based on the technical issue they are meant to solve the right thing to
> do?
>  I'm sorry if this is missing part of your point but I believe each one of
> the
>  tools we have are meant to ease specific communication issues for specific
>  cases. That is to say, I agree not all mediums are equivalent but I do
> think
>  there must be a preferred medium for "whenever you don't know where to
> send
>  $X". To me, that's the ML.
>
>
> [snip]
>
> [1] https://en.wikipedia.org/wiki/Conway's_law
>>
>>
>>> * So, what can be the blocker?
>>>
>>> Nothing, but working with these assumptions is really the blocker. That
>>> is exactly why many people in their feedback say we have a "people
>>> problem" in OpenStack. But it's not really the people problem, it is the
>>> assumption problem.
>>>
>>> Assumptions are very very bad:
>>>
>>> With 'n' problems in a domain and 'm' people working on all those
>>> problems, individually, we have the assumption problem of the order of
>>> O((m*e)^n) where you can think of 'e' as the convergence factor.
>>> Convergence factor being the ability of a group to come to an agreement
>>> of the order of 'agree to agree', 'agree to disagree' (add percentages
>>> to each for more granularity). There is also another assumption (for the
>>> convergence factor) that everyone wants to work in the best interest of
>>> solving the problems in that domain.
>>>
>>>
>>>
>> rAmen brother. We can't assume 

Re: [openstack-dev] [Cinder] Nominating Michał Dulko to Cinder Core

2016-05-03 Thread John Griffith
Definitely a +1 from me

On Tue, May 3, 2016 at 6:10 PM, Patrick East 
wrote:

> +1, Michal has done some awesome work on Cinder!
>
> -Patrick
>
> On Tue, May 3, 2016 at 11:16 AM, Sean McGinnis 
> wrote:
>
>> Hey everyone,
>>
>> I would like to nominate Michał Dulko to the Cinder core team. Michał's
>> contributions with both code reviews [0] and code contributions [1] have
>> been significant for some time now.
>>
>> His persistence with versioned objects has been instrumental in getting
>> support in the Mitaka release for rolling upgrades.
>>
>> If there are no objections from current cores by next week, I will add
>> Michał to the core group.
>>
>> [0] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt
>> [1]
>>
>> https://review.openstack.org/#/q/owner:%22Michal+Dulko+%253Cmichal.dulko%2540intel.com%253E%22++status:merged
>>
>> Thanks!
>>
>> Sean McGinnis (smcginnis)
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Cinder] Additional notes on Nova and Cinder volume managment

2016-04-28 Thread John Griffith
Hey Everyone,

I've spent a bit more time thinking through some of what we talked about in
todays session.  I wanted to summarize some things, clarify a couple points
and also add some details that I've been thinking about.

Etherpad seemed like a more collaborative way to go than super long email
message, so if you're interested have a look and provide input:

https://etherpad.openstack.org/p/cinder-nova-volume-attach-change-proposal

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] openstack client slowness / client-as-a-service

2016-04-19 Thread John Griffith
On Tue, Apr 19, 2016 at 12:17 PM, Monty Taylor  wrote:

> On 04/19/2016 10:16 AM, Daniel P. Berrange wrote:
>
>> On Tue, Apr 19, 2016 at 09:57:56AM -0500, Dean Troyer wrote:
>>
>>> On Tue, Apr 19, 2016 at 9:06 AM, Adam Young  wrote:
>>>
>>> I wonder how much of that is Token caching.  In a typical CLI use pattern,
 a new token is created each time a client is called, with no passing of
 a
 token between services.  Using a session can greatly decrease the
 number of
 round trips to Keystone.


>>> Not as much as you think (or hope?).  Persistent token caching to disk
>>> will
>>> help some, at other expenses though.  Using --timing on OSC will show how
>>> much time the Identity auth round trip cost.
>>>
>>> I don't have current numbers, the last time I instrumented OSC there were
>>> significant load times for some modules, so we went a good distance to
>>> lazy-load as much as possible.
>>>
>>> What Dan sees WRT a persistent client process, though, is a combination
>>> of
>>> those two things: saving the Python loading and the Keystone round trips.
>>>
>>
>> The 1.5sec overhead I eliminated doesn't actually have anything todo
>> with network round trips at all. Even if you turn off all network
>> services and just run 'openstack ' and let it fail due
>> to inability to connect it'll still have that 1.5 sec overhead. It
>> is all related to python runtime loading and work done during module
>> importing.
>>
>> eg run 'unstack.sh' and then compare the main openstack client:
>>
>> $ time /usr/bin/openstack server list
>> Discovering versions from the identity service failed when creating the
>> password plugin. Attempting to determine version from URL.
>> Unable to establish connection to http://192.168.122.156:5000/v2.0/tokens
>>
>> real0m1.555s
>> user0m1.407s
>> sys 0m0.147s
>>
>> Against my client-as-a-service version:
>>
>> $ time $HOME/bin/openstack server list
>> [Errno 111] Connection refused
>>
>> real0m0.045s
>> user0m0.029s
>> sys 0m0.016s
>>
>>
>> I'm sure there is scope for also optimizing network traffic / round
>> trips, but I didn't investigate that at all.
>>
>> I have (had!) a version of DevStack that put OSC into a subprocess and
>>> called it via pipes to do essentially what Dan suggests.  It saves some
>>> time, at the expense of complexity that may or may not be worth the
>>> effort.
>>>
>>
>> devstack doesn't actually really need any significant changes beyond
>> making sure $PATH pointed to the replacement client programs and that
>> the server was running - the latter could be automated as a launch on
>> demand thing which would limit devstack changes.
>>
>> It actually doesn't technically need any devstack change - these
>> replacement clients could simply be put in some 3rd party git repo
>> and let developers who want the speed benefit simply put them in
>> their $PATH before running devstack.
>>
>> One thing missing is any sort of transactional control in the I/O with the
>>> subprocess, ie, an EOT marker.  I planned to add a -0 option (think
>>> xargs)
>>> to handle that but it's still down a few slots on my priority list.
>>> Error
>>> handling is another problem, and at this point (for DevStack purposes
>>> anyway) I stopped the investigation, concluding that reliability trumped
>>> a
>>> few seconds saved here.
>>>
>>
>> For I/O I simply replaced stdout + stderr with a new StringIO handle to
>> capture the data when running each command, and for error handling I
>> ensured the exit status was fed back & likewise stderr printed.
>>
>> It is more than just a few seconds saved - almost 4 minutes, or
>> nearly 20% of entire time to run stack.sh on my machine
>>
>>
>> Ultimately, this is one of the two giant nails in the coffin of continuing
>>> to persue CLIs in Python.  The other is co-installability. (See that
>>> current thread on the ML for pain points).  Both are easily solved with
>>> native-code-generating languages.  Go and Rust are at the top of my
>>> personal list here...
>>>
>>
> Using entrypoints and plugins in python is slow, so loading them is slow,
> as is loading all of the dependent libraries. Those were choices made for
> good reason back in the day, but I'm not convinced either are great anymore.
>
> A pluginless CLI that simply used REST calls rather than the
> python-clientlibs should be able to launch and get to the business of doing
> work in 0.2 seconds - counting time to load and parse clouds.yaml. That
> time could be reduced - the time spent in occ parsing vendor json files is
> not strictly necessary and certainly could go faster. It's not as fast as
> 0.004 seconds, but with very little effort it's 6x faster.
>
> Rather than ditching python for something like go, I'd rather put together
> a CLI with no plugins and that only depended on keystoneauth and
> os-client-config as libraries. No?

Yes, it would certainly seem more pragmatic than just dumping 
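
For reference, the kind of thin client described above really can be tiny; a
rough sketch using nothing but keystoneauth1 (all the credential values below
are placeholders, and os-client-config would normally feed them in from
clouds.yaml):

  # Rough sketch of a "pluginless" client: keystoneauth1 only, no
  # python-*client libraries. Credentials/URLs are placeholders.
  from keystoneauth1.identity import v3
  from keystoneauth1 import session

  auth = v3.Password(auth_url='http://controller:5000/v3',
                     username='demo', password='secret',
                     project_name='demo',
                     user_domain_id='default', project_domain_id='default')
  sess = session.Session(auth=auth)

  # One auth round trip, then plain REST against the service catalog.
  resp = sess.get('/servers', endpoint_filter={'service_type': 'compute'})
  for server in resp.json()['servers']:
      print('%s %s' % (server['id'], server['name']))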

Re: [openstack-dev] [all] [devstack] Adding example "local.conf" files for testing?

2016-04-18 Thread John Griffith
On Thu, Apr 14, 2016 at 1:31 AM, Markus Zoeller  wrote:

> Sometimes (especially when I try to reproduce bugs) I have the need
> to set up a local environment with devstack. Every time I have to look
> at my notes to check which options in the "local.conf" have to be set
> for my needs. I'd like to add a folder in devstack's tree which hosts
> multiple example local.conf files for different, often used setups.
> Something like this:
>
> example-confs
> --- newton
> --- --- x86-ubuntu-1404
> --- --- --- minimum-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf
> --- --- --- serial-console-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf
> --- --- --- live-migration-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf.controller
> --- --- --- --- local.conf.compute1
> --- --- --- --- local.conf.compute2
> --- --- --- minimal-neutron-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf
> --- --- s390x-1.1.1-vulcan
> --- --- --- minimum-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf
> --- --- --- live-migration-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf.controller
> --- --- --- --- local.conf.compute1
> --- --- --- --- local.conf.compute2
> --- mitaka
> --- --- # same structure as master branch. omitted for brevity
> --- liberty
> --- --- # same structure as master branch. omitted for brevity
>
> Thoughts?
>
> Regards, Markus Zoeller (markus_z)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Love the idea personally.  Maybe we could start with a working Neutron
multi-node deployment!!!
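
For the minimum-setup case something this small already works (passwords and
IPs are obviously placeholders), and the multi-node case is mostly the same
file plus the controller address on each compute node:

  [[local|localrc]]
  ADMIN_PASSWORD=secret
  DATABASE_PASSWORD=$ADMIN_PASSWORD
  RABBIT_PASSWORD=$ADMIN_PASSWORD
  SERVICE_PASSWORD=$ADMIN_PASSWORD
  HOST_IP=192.168.42.11

  # On the additional compute nodes, roughly:
  # SERVICE_HOST=192.168.42.11
  # MULTI_HOST=1
  # ENABLED_SERVICES=n-cpu,q-agt,c-vol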
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Does the OpenStack community(or Cinder team) allow one driver to call another driver's public method?

2016-03-19 Thread John Griffith
On Thu, Mar 17, 2016 at 10:05 PM, liuxinguo  wrote:

> Hi Cinder team,
>
>
>
> We are going to implement storage-assisted volume migration in our driver
> between different backend storage arrays, or even different arrays from
> different vendors.
>
> This is really more efficient than the host-copy migration between
> different arrays from different vendors.
>
>
>
> To implement this, we need to call other backend’s method like
> create_volume() or initialize_connection(). We can call them like the
> cinder/volume/manage.py:
>
>
>
> rpcapi.create_volume(ctxt, new_volume, host[*'host'*],
>
>  None, None, allow_reschedule=False)
>
>
>
> or
>
> conn = rpcapi.initialize_connection(ctxt, volume, properties)
>
>
>
> And my question is: Does the OpenStack community(or Cinder team) allow
> driver to call rpcapi in order to call other driver’s method like
> create_volume() or initialize_connection()?
>
>
>
>
>
> Thanks for any input!
>
> --
>
> Wilson Liu
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Hi Wilson,

We don't have a direct version of the model you describe in place
currently.  We do have something similar by coordinating through the API
and scheduler layers; there's even some existing code in the migrate path
that does some similar things in manager.py.

It would be good to see if you could leverage some of the design that's
already in place.  I don't know that there should be an objection to having
a driver call another driver but it really depends on how in depth it ends
up being and how all the details around context and quota are dealt with.

I'd be curious to see what you've got going.  Might be something that helps
make the migrate code we already have better?

Thanks,
John​
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposal: changes to our current testing process

2016-03-07 Thread John Griffith
On Mon, Mar 7, 2016 at 8:57 AM, Knight, Clinton <clinton.kni...@netapp.com>
wrote:

>
>
> On 3/7/16, 10:45 AM, "Eric Harney" <ehar...@redhat.com> wrote:
>
> >On 03/06/2016 09:35 PM, John Griffith wrote:
> >> On Sat, Mar 5, 2016 at 4:27 PM, Jay S. Bryant
> >><jsbry...@electronicjungle.net
> >>> wrote:
> >>
> >>> Ivan,
> >>>
> >>> I agree that our testing needs improvement.  Thanks for starting this
> >>> thread.
> >>>
> >>> With regards to adding a hacking check for tests that run too long ...
> >>>are
> >>> you thinking that we would have a timer that checks for long-running
> >>>jobs or
> >>> something that checks for long sleeps in the testing code?  Just
> >>>curious
> >>> your ideas for tackling that situation.  Would be interested in helping
> >>> with that, perhaps.
> >>>
> >>> Thanks!
> >>> Jay
> >>>
> >>>
> >>> On 03/02/2016 05:25 AM, Ivan Kolodyazhny wrote:
> >>>
> >>> Hi Team,
> >>>
> >>> Here are my thoughts and proposals on how to make the Cinder testing process
> >>> better. I won't cover the "3rd party CIs" topic here. I will share my
> >>>opinion
> >>> about current and future jobs.
> >>>
> >>>
> >>> Unit-tests
> >>>
> >>>- Long-running tests. I hope everybody will agree that unit tests
> >>>must be quite simple and very fast. Unit tests which take more than
> >>>3-5 seconds should be refactored and/or moved to 'integration' tests.
> >>>Thanks to Tom Barron for several fixes like [1]. IMO, it would be
> >>>good to have some hacking checks to prevent such issues in the future.
> >>>
> >>>- Tests coverage. We don't check it in an automatic way on gates.
> >>>Usually, we require adding some unit tests during the code review
> >>>process. Why can't we add a coverage job to our CI and not merge new
> >>>patches which would decrease the test coverage rate? Maybe such a job
> >>>could be voting in the future so it is not ignored. For now, there is
> >>>no simple way to check coverage because 'tox -e cover' output is not
> >>>useful [2].
>
> The Manila project has a coverage job that may be of interest to Cinder.
> It’s not perfect, because sometimes the periodic loopingcall routines run
> during the test run and sometimes not, leading to false negatives.  But
> most of the time it’s a handy confirmation that the unit test coverage
> didn’t decline due to a patch.  Look at the manila-coverage job in this
> example:  https://review.openstack.org/#/c/287575/
>
> >>>
> >>>
> >>> Functional tests for Cinder
> >>>
> >>> We introduced some functional tests last month [3]. Here is a patch to
> >>> infra to add new job [4]. Because these tests were moved from
> >>>unit-tests, I
> >>> think we're OK to make this job voting. Such tests should not be a
> >>> replacement for Tempest. They could even test Cinder with the Fake Driver
> >>>to
> >>> make it faster and not dependent on storage backend issues.
> >>>
> >>>
> >>> Tempest in-tree tests
> >>>
> >>> Sean started work on it [5] and I think it's a good idea to get them in
> >>> Cinder repo to run them on Tempest jobs and 3-rd party CIs against a
> >>>real
> >>> backend.
> >>>
> >>>
> >>> Functional tests for python-brick-cinderclient-ext
> >>>
> >>> There are patches that introduce functional tests [6] and a new job [7].
> >>>
> >>>
> >>> Functional tests for python-cinderclient
> >>>
> >>> We've got a very limited set of such tests and non-voting job. IMO, we
> >>>can
> >>> run them even with the Cinder Fake Driver to make them not dependent on a
> >>> storage backend and make it faster. I believe, we can make this job
> >>>voting
> >>> soon. Also, we need more contributors to this kind of tests.
> >>>
> >>>
> >>> Integrated tests for python-cinderclient
> >>>
> >>> We need such tests to make sure that we won't break Nova, Heat or other
> >>> python-cinderclient cons

Re: [openstack-dev] [cinder] Proposal: changes to our current testing process

2016-03-06 Thread John Griffith
On Sat, Mar 5, 2016 at 4:27 PM, Jay S. Bryant  wrote:

> Ivan,
>
> I agree that our testing needs improvement.  Thanks for starting this
> thread.
>
> With regards to adding a hacking check for tests that run too long ... are
> you thinking that we would have a timer that checks for long-running jobs or
> something that checks for long sleeps in the testing code?  Just curious
> your ideas for tackling that situation.  Would be interested in helping
> with that, perhaps.
>
> Thanks!
> Jay
>
>
> On 03/02/2016 05:25 AM, Ivan Kolodyazhny wrote:
>
> Hi Team,
>
> Here are my thoughts and proposals on how to make the Cinder testing process
> better. I won't cover the "3rd party CIs" topic here. I will share my opinion
> about current and future jobs.
>
>
> Unit-tests
>
>    - Long-running tests. I hope everybody will agree that unit tests
>    must be quite simple and very fast. Unit tests which take more than 3-5
>    seconds should be refactored and/or moved to 'integration' tests.
>    Thanks to Tom Barron for several fixes like [1]. IMO, it would be
>    good to have some hacking checks to prevent such issues in the future.
>
>    - Tests coverage. We don't check it in an automatic way on gates.
>    Usually, we require adding some unit tests during the code review process.
>    Why can't we add a coverage job to our CI and not merge new patches which
>    would decrease the test coverage rate? Maybe such a job could be voting in
>    the future so it is not ignored. For now, there is no simple way to check
>    coverage because 'tox -e cover' output is not useful [2].
>
>
> Functional tests for Cinder
>
> We introduced some functional tests last month [3]. Here is a patch to
> infra to add new job [4]. Because these tests were moved from unit-tests, I
> think we're OK to make this job voting. Such tests should not be a
> replacement for Tempest. They could even test Cinder with the Fake Driver to
> make it faster and not dependent on storage backend issues.
>
>
> Tempest in-tree tests
>
> Sean started work on it [5] and I think it's a good idea to get them in
> Cinder repo to run them on Tempest jobs and 3-rd party CIs against a real
> backend.
>
>
> Functional tests for python-brick-cinderclient-ext
>
> There are patches that introduce functional tests [6] and a new job [7].
>
>
> Functional tests for python-cinderclient
>
> We've got a very limited set of such tests and non-voting job. IMO, we can
> run them even with the Cinder Fake Driver to make them not dependent on a
> storage backend and make it faster. I believe, we can make this job voting
> soon. Also, we need more contributors to this kind of tests.
>
>
> Integrated tests for python-cinderclient
>
> We need such tests to make sure that we won't break Nova, Heat or other
> python-cinderclient consumers with a next merged patch. There is a thread
> in openstack-dev ML about such tests [8] and proposal [9] to introduce them
> to python-cinderclient.
>
>
> Rally tests
>
> IMO, it would be good to have new Rally scenarios for every patch like
> 'improves performance', 'fixes concurrency issues', etc. Even if we as a
> Cinder community don't have enough time to implement them, we have to ask
> for them in reviews, openstack-dev ML, file Rally bugs and blueprints if
> needed.
>
>
> [1] https://review.openstack.org/#/c/282861/
> [2] http://paste.openstack.org/show/488925/
> [3] https://review.openstack.org/#/c/267801/
> [4] https://review.openstack.org/#/c/287115/
> [5] https://review.openstack.org/#/c/274471/
> [6] https://review.openstack.org/#/c/265811/
> [7] https://review.openstack.org/#/c/265925/
> [8]
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088027.html
> [9] https://review.openstack.org/#/c/279432/
>
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
We could just parse out the tox slowest-tests output we already have.  Do
something like pylint, where we look at the existing/current slowest test and
balk if that's exceeded.
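
Quick sketch of what I mean, assuming we just scrape the slowest-tests table
the runner already prints (the line format and the threshold are guesses; the
threshold would really come from the current known-slowest test):

  # Rough sketch: fail if any test in the runner's "slowest tests" output
  # exceeds a threshold. Assumes lines like
  #   cinder.tests.unit.test_foo.TestFoo.test_bar    12.345
  import re
  import sys

  THRESHOLD_SECONDS = 5.0
  LINE_RE = re.compile(r'^(?P<test>\S+tests\S+)\s+(?P<secs>\d+\.\d+)$')

  def check(stream):
      offenders = []
      for line in stream:
          match = LINE_RE.match(line.strip())
          if match and float(match.group('secs')) > THRESHOLD_SECONDS:
              offenders.append(line.strip())
      for line in offenders:
          print('Too slow: %s' % line)
      return 1 if offenders else 0

  if __name__ == '__main__':
      sys.exit(check(sys.stdin))

Pipe the test-run output through that at the end of the tox run and the job
fails as soon as somebody adds something slower than the worst we already
have.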

Thoughts?

John​
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] PTL for Newton and beyond

2016-03-03 Thread John Griffith
On Thu, Mar 3, 2016 at 8:22 AM, Ronald Bradford 
wrote:

> Dims,
>
> As my first project and cycle in OpenStack I have really appreciated your
> input and direction as I was starting out and during Mitaka cycle.
> It has been great to learn just a bit of what PTL of Oslo is and does, so
> thanks for all your hard work.
> I hope I can work with the team to help turn those "impossible tasks" to
> "possible".
>
> Best to your next pursuit.
>
> I recently watched an Oslo presentation from a prior conference, there was
> a quote while I will leave as anonymous for now.
>
> "OpenStack is like an aircraft carrier, you have to be very careful when
> you're steering it, it does not move quickly".  So, it sounds like a lot is
> possible, it just can take some time to adjust the course of a complex
> system of projects.
>
> Ronald
>
>
>
> On Thu, Mar 3, 2016 at 9:28 AM, Doug Hellmann 
> wrote:
>
>> Excerpts from Davanum Srinivas (dims)'s message of 2016-03-03 06:32:42
>> -0500:
>> > Team,
>> >
>> > It has been great working with you all as PTL for Oslo. Looks like the
>> > nominations open up next week for elections and am hoping more than
>> > one of you will step up for the next cycle(s). I can show you the
>> > ropes and help smoothen the transition process if you let me know
>> > about your interest in being the next PTL. With the move to more
>> > automated testing in our CI (periodic jobs running against oslo.*
>> > master) and the adoption of the release process (logging reviews in
>> > /releases repo) the load should be considerably less on you.
>> > especially proud of all the new people joining as both oslo cores and
>> > project cores and hitting the ground running. Big shout out to Doug
>> > Hellmann for his help and guidance when i transitioned into the PTL
>> > role.
>>
>> Thanks, Dims, you've done awesome work during your terms. It has
>> been great to see the team gel and mature under your leadership.
>>
>> > Main challenges will be to get back confidence of all the projects
>> > that use the oslo libraries, NOT be the first thing they look for when
>> > things break (Better backward compat, better test matrix) and
>> > evangelizing that Oslo is still the common play ground for *all*
>> > projects and not just the headache of some nut jobs who are willing to
>> > take up the impossible task of defining and nurturing these libraries.
>> > There's a lot of great work ahead of us and i am looking forward to
>> > continue to work with you all.
>>
>> Excellent analysis, too.  We're in a good position to build on that
>> testing and stability work and continue with the adoption and hardening
>> tasks. I'm looking forward to working out the details with the rest of
>> the team.
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Just a tip of the hat to you, Dims; you've always been a great help to me
over the years, and especially in your role as PTL.  Well done!  Not
completely sure if your name will get thrown in the ring next week or
not, but regardless, kudos!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Openstack Cinder - Wishlist

2016-03-02 Thread John Griffith
There's actually a Launchpad category for this very thing: bugs filed with
the "Wishlist" importance.

On Wed, Mar 2, 2016 at 6:27 AM,  wrote:

> Thank you Yatin!
>
>
>
> *From:* yatin kumbhare [mailto:yatinkumbh...@gmail.com]
> *Sent:* Tuesday, March 1, 2016 4:43 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] Openstack Cinder - Wishlist
>
>
>
> Hi Ashraf,
>
>
>
> you can find all such information over launchpad.
>
>
>
> https://bugs.launchpad.net/cinder
>
>
>
> Regards,
>
> Yatin
>
>
>
> On Tue, Mar 1, 2016 at 4:01 PM,  wrote:
>
> Hi,
>
>
>
> Would like to know if there's a feature wish list/enhancement request for
> OpenStack Cinder, i.e. a list of features that we would like to add to
> Cinder Block Storage but that hasn't been taken up for development yet.
>
> We have a couple of developers who are interested in working on OpenStack
> Cinder... hence we would like to take a look at that wish list…
>
>
>
> Thanks ,
>
> Ashraf
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] volumes stuck detaching attaching and force detach

2016-03-01 Thread John Griffith
On Tue, Mar 1, 2016 at 3:48 PM, Murray, Paul (HP Cloud) 
wrote:

>
> > -Original Message-
> > From: D'Angelo, Scott
> >
> > Matt, changing Nova to store the connector info at volume attach time
> does
> > help. Where the gap will remain is after Nova evacuation or live
> migration,
>
> This will happen with shelve as well I think. Volumes are not detached in
> shelve
> IIRC.
>
> > when that info will need to be updated in Cinder. We need to change the
> > Cinder API to have some mechanism to allow this.
> > We'd also like Cinder to store the appropriate info to allow a
> force-detach for
> > the cases where Nova cannot make the call to Cinder.
> > Ongoing work for this and related issues is tracked and discussed here:
> > https://etherpad.openstack.org/p/cinder-nova-api-changes
> >
> > Scott D'Angelo (scottda)
> > 
> > From: Matt Riedemann [mrie...@linux.vnet.ibm.com]
> > Sent: Monday, February 29, 2016 7:48 AM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [nova][cinder] volumes stuck detaching
> > attaching and force detach
> >
> > On 2/22/2016 4:08 PM, Walter A. Boring IV wrote:
> > > On 02/22/2016 11:24 AM, John Garbutt wrote:
> > >> Hi,
> > >>
> > >> Just came up on IRC, when nova-compute gets killed half way through a
> > >> volume attach (i.e. no graceful shutdown), things get stuck in a bad
> > >> state, like volumes stuck in the attaching state.
> > >>
> > >> This looks like a new addition to this conversation:
> > >> http://lists.openstack.org/pipermail/openstack-dev/2015-December/082683.html
> > >>
> > >> And brings us back to this discussion:
> > >> https://blueprints.launchpad.net/nova/+spec/add-force-detach-to-nova
> > >>
> > >> What if we move our attention towards automatically recovering from
> > >> the above issue? I am wondering if we can look at making our usually
> > >> recovery code deal with the above situation:
> > >>
> > >> https://github.com/openstack/nova/blob/834b5a9e3a4f8c6ee2e3387845fc24c79f4bf615/nova/compute/manager.py#L934
> > >>
> > >>
> > >> Did we get the Cinder APIs in place that enable the force-detach? I
> > >> think we did and it was this one?
> > >> https://blueprints.launchpad.net/python-cinderclient/+spec/nova-force-detach-needs-cinderclient-api
> > >>
> > >>
> > >> I think diablo_rojo might be able to help dig for any bugs we have
> > >> related to this. I just wanted to get this idea out there before I
> > >> head out.
> > >>
> > >> Thanks,
> > >> John
> > >>
> > >>
> > __
> > ___
> > >> _
> > >>
> > >> OpenStack Development Mailing List (not for usage questions)
> > >> Unsubscribe:
> > >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >> .
> > >>
> > > The problem is a little more complicated.
> > >
> > > In order for cinder backends to be able to do a force detach
> > > correctly, the Cinder driver needs to have the correct 'connector'
> > > dictionary passed in to terminate_connection.  That connector
> > > dictionary is the collection of initiator side information which is
> gleaned
> > here:
> > > https://github.com/openstack/os-brick/blob/master/os_brick/initiator/connector.py#L99-L144
> > >
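For reference, a rough illustration of what such a connector dict typically
contains (the keys vary by transport, and the values below are made up, not
taken from the thread):

    # Illustrative os-brick connector dict; iSCSI/FC fields only show up
    # when the corresponding initiator hardware/software exists on the host.
    connector = {
        'platform': 'x86_64',
        'os_type': 'linux2',
        'host': 'compute-01',                  # hostname of the compute node
        'ip': '192.168.1.10',                  # initiator IP for iSCSI traffic
        'initiator': 'iqn.1993-08.org.debian:01:abcdef123456',
        'multipath': False,
        'wwpns': ['5001438002af0001'],         # FC port WWNs, if an HBA is found
        'wwnns': ['5001438002af0000'],         # FC node WWNs, if an HBA is found
    }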
> > >
> > > The plan was to save that connector information in the Cinder
> > > volume_attachment table.  When a force detach is called, Cinder has
> > > the existing connector saved if Nova doesn't have it.  The problem was
> > > live migration.  When you migrate to the destination n-cpu host, the
> > > connector that Cinder had is now out of date.  There is no API in
> > > Cinder today to allow updating an existing attachment.
> > >
> > > So, the plan at the Mitaka summit was to add this new API, but it
> > > required microversions to land, which we still don't have in Cinder's
> > > API today.
> > >
> > >
> > > Walt
> > >
> > >
> > __
> > 
> > >  OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > Regarding storing off the initial connector information from the attach,
> does
> > this [1] help bridge the gap? That adds the connector dict to the
> > connection_info dict that is serialized and stored in the nova
> > block_device_mappings table, and then in that patch is used to pass it to
> > terminate_connection in the case that the host has changed.
> >
> > [1] https://review.openstack.org/#/c/266095/
> >
> > --
> >
> > Thanks,
> >
> > Matt Riedemann
> >
> >
> > __
> > 
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-

Re: [openstack-dev] [Cinder] [nova] Work flow for detaching a volume

2016-02-26 Thread John Griffith
On Fri, Feb 26, 2016 at 12:47 PM, Sean McGinnis 
wrote:

> On Fri, Feb 26, 2016 at 06:11:15PM +, Srinivas Sakhamuri wrote:
> > I want to confirm the correct workflow for detaching a volume. Both nova
> > and cinder (unpublished, available through cinder.volumes.detach) provide
> > a detach volume API. Only nova seems to have the correct workflow in
> > terms of detaching a volume, i.e.
> > 1) detaches the volume from the VM (libvirt.volume_detach)
> > 2) informs Cinder to do the Cinder-side detach
> > 3) deletes the BlockDeviceMapping from the Nova DB
> >
> > On the other hand cinder just modifies cinder DB entries and lets the
> > driver handle detaching the volume. There is no API call to nova to let
> > it know about the volume detaching.
> >
> > - Does that mean nova is the only workflow to use to correctly detach a
> > volume from an instance?
> > - And the cinder detach API serves only to clean up internal state in the
> > DB and cinder driver?
>
> I guess it depends on what you are trying to accomplish. If a volume is
> being used by nova, then the nova APIs are the correct ones to use to
> have it fully detached from the host and cleaned up.
>
> Internally, Nova calls the Cinder APIs to perform the storage
> management.
>
> If you are just looking to perform storage management, for example
> with bare metal and not using Nova, then you would call the
> Cinder APIs.
>
> So to answer your first question of whether the nova API is the correct
> way to detach a volume from an instance - yes.
>
> Sean
>
> >
> > TIA
> > Srini
>
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Maybe this will help: https://review.openstack.org/#/c/285471/

Additionally, I've been trying to point out lately that a Volume should be
just a "thing" or resource, and it's up to consumers of volumes to manage
connections (attach/detach) appropriately.
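To make the two paths concrete, a minimal sketch using the Python clients of
that era (the client handles and IDs are assumed to already exist and be
authenticated; this is illustrative, not taken from either project's code):

    # Path 1: the volume is attached to a Nova instance. Go through Nova,
    # which detaches it from the guest and then does the Cinder-side cleanup.
    def detach_via_nova(nova, server_id, volume_id):
        nova.volumes.delete_server_volume(server_id, volume_id)

    # Path 2: storage management only (e.g. bare metal, no Nova involved).
    # Talk to Cinder directly; this only updates Cinder/driver state.
    def detach_via_cinder(cinder, volume_id):
        cinder.volumes.detach(volume_id)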
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra][cinder] How to configure the third party CI to be triggered only when jenkins +1

2016-02-22 Thread John Griffith
On Mon, Feb 22, 2016 at 6:32 PM, liuxinguo  wrote:

> Hi,
>
> There is no need to trigger the third-party CI if a patch does not pass
> the Jenkins Verify check.
>
> I think there is a way to achieve this but I’m not sure how.
>
> So is there any reference or suggestion for configuring the third-party CI
> to be triggered only when Jenkins gives a +1?
>
> Thanks for any input!
>
> Regards,
>
> Wilson Liu
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
In my case I inspect the comments and only trigger a run on either "run
solidfire" or on a Jenkins +1.  The trick is to parse out the comments and
look for the conditions that you are interested in.  The code looks
something like this:

    if (event.get('type', 'nill') == 'comment-added' and
            'Verified+1' in event['comment'] and
            cfg['AccountInfo']['project_name'] == event['change']['project'] and
            event['author']['username'] == 'jenkins' and
            event['change']['branch'] == 'master'):
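For context, a fuller sketch of how a check like that usually sits inside a
Gerrit stream-events listener (the host, config keys, and run_ci_job() hook
below are assumptions for illustration, not part of the original snippet):

    import json

    import paramiko


    def listen_for_events(cfg, run_ci_job):
        # Read Gerrit's event stream over SSH and trigger CI when Jenkins
        # posts a Verified+1 on a master-branch patch for our project.
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.connect('review.openstack.org', port=29418,
                    username=cfg['AccountInfo']['ci_user'],       # hypothetical key
                    key_filename=cfg['AccountInfo']['key_file'])  # hypothetical key
        _stdin, stdout, _stderr = ssh.exec_command('gerrit stream-events')

        for line in stdout:
            event = json.loads(line)
            if (event.get('type', 'nill') == 'comment-added' and
                    'Verified+1' in event['comment'] and
                    cfg['AccountInfo']['project_name'] == event['change']['project'] and
                    event['author']['username'] == 'jenkins' and
                    event['change']['branch'] == 'master'):
                run_ci_job(event)  # hypothetical hook that kicks off the CI run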
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][all] Integration python-*client tests on gates

2016-02-15 Thread John Griffith
On Mon, Feb 15, 2016 at 1:02 PM, Clark Boylan <cboy...@sapwetik.org> wrote:

> On Mon, Feb 15, 2016, at 11:48 AM, Ivan Kolodyazhny wrote:
> > Hi all,
> >
> > I'll talk mostly about python-cinderclient, but the same question could
> > apply to other clients.
> >
> > Now, for python-cinderclient we've got two kinds of functional/integration
> > jobs:
> >
> > 1) gate-cinderclient-dsvm-functional - a very limited (for now) set of
> > functional tests, most of which were part of the tempest CLI tests in the
> > past.
> >
> > 2) gate-tempest-dsvm-neutron-src-python-cinderclient - if I understand
> > right, the idea of this job was to have integration tests that exercise
> > cinderclient with other projects, to verify that a new patch to
> > python-cinderclient won't break any other project.
> > But it does *not* test cinderclient at all, except for a few
> > attach-related tests,
> > because Tempest doesn't use python-*client.
>
> Tempest doesn't use python-*client to talk to the APIs but the various
> OpenStack services do use python-*client to talk to the other services.
> Using cinderclient as an example, nova consumes cinderclient to perform
> volume operations in nova/volume/cinder.py. There is value in this
> existing test if those code paths are exercised. Basically ensuring the
> next release of cinderclient does not break nova. It may be the case
> that cinderclient is a bad example because tempest doesn't do volume
> operations through nova, but I am sure for many of the other clients
> these tests do provide value.
>
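A simplified illustration of that coupling, i.e. the kind of thin wrapper a
consuming service keeps around python-cinderclient (this is a sketch of the
pattern, not the actual nova/volume/cinder.py code, and the credential
handling is purely illustrative):

    from cinderclient.v2 import client as cinder_client


    class VolumeAPI(object):
        # Minimal stand-in for the wrapper a consumer like Nova uses; any
        # behavior change in the client surfaces in code paths like these.
        def __init__(self, username, password, project, auth_url):
            self._client = cinder_client.Client(username, password,
                                                project, auth_url)

        def get(self, volume_id):
            return self._client.volumes.get(volume_id)

        def detach(self, volume_id):
            self._client.volumes.detach(volume_id)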
> >
> > The same job was added for python-heatclient but was removed because
> > devstack didn't install Heat for that job [1].
> >
> > We agreed [2] to remove this job from cinderclient gates too, once
> > functional or integration tests will be implemented.
>
> Just make sure that you don't lose exercising of the above code paths
> when this transition happens. If we don't currently test that code it
> would be a good goal for any new integration testing to do so.
>
> >
> >
> > There is a proposal for python-cinderclient to implement some
> > cross-project testing to make sure that a new python-cinderclient won't
> > break any existing project that uses it.
> >
> > After discussing in IRC with John Griffith (jgriffith) I realized that
> > this could be a cross-project initiative for this kind of integration
> > test. OpenStack Client (OSC) could cover some part of such tests, but
> > does it mean that we'll run OSC tests on every patch to python-*client?
> > We can run only cinder-related OSC tests on our gates to verify that a
> > patch doesn't break OSC and, maybe, other projects.
> >
> > The other option is to implement tests like [3] on a per-project basis
> > and call them "integration".  Such tests could cover more cases than the
> > OSC functional tests and have more project-related test cases, e.g.
> > testing some python-cinderclient-specific corner cases which are not
> > related to OSC.
> >
> > IMO, it would be good to have some cross-project decision on how to
> > implement clients' integration tests per project.
> >
> >
> > [1] https://review.openstack.org/#/c/272411/
> > [2]
> >
> http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-12-16-16.00.log.html
> > [3] https://review.openstack.org/#/c/279432/8
> >
> > Regards,
> > Ivan Kolodyazhny,
> > http://blog.e0ne.info/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Hey Everyone,

So this started after I made some comments on a patch in CinderClient
that added attach/detach integration tests to cinder's local test repo.  My
first thought was that we should focus on just Cinder functional tests
first, and that maybe the integration tests (including those with the
clients) should be centralized, or at least have a more standardized approach.

What I was getting at is that while the Tempest tests don't use the clients
directly, there are a number of places where tempest does end up calling
them indirectly. Volume attach in Nova is a good example of this: while we
don't call NovaClient to do this, the Nova API drills down into
volume/cinder.py, which just loads and calls CinderClient in order to issue
the volume-related calls that it makes.  My thought was that maybe it would
be useful to have a more cross-project effort for things like this. There
are other place

Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-12 Thread John Griffith
On Thu, Feb 11, 2016 at 10:31 AM, Walter A. Boring IV  wrote:

> There seem to be a few discussions going on here wrt detaches.  One is
> what to do on the Nova side about calling os-brick's disconnect_volume,
> and another is when to call (or not call) Cinder's terminate_connection
> and detach.
>
> My original post was simply to discuss a mechanism to try and figure out
> the first problem: when should nova call brick to remove the local
> volume, prior to calling Cinder to do something?
>
>


> Nova needs to know if it's safe to call disconnect_volume or not. Cinder
> already tracks each attachment, and it can return the connection_info for
> each attachment with a call to initialize_connection.   If 2 of those
> connection_info dicts are the same, it's a shared volume/target.  Don't
> call disconnect_volume if there are any more of those left.
>
> On the Cinder side of things, if terminate_connection, detach is called,
> the volume manager can find the list of attachments for a volume, and
> compare that to the attachments on a host.  The problem is, Cinder doesn't
> track the host along with the instance_uuid in the attachments table.  I
> plan on allowing that as an API change after microversions land, so we
> know how many times a volume is attached/used on a particular host.  The
> driver can decide what to do with it at terminate_connection/detach time.
> This helps account for
> the differences in each of the Cinder backends, which we will never get
> all aligned to the same model.  Each array/backend handles attachments
> differently, and only the driver knows if it's safe to remove the target or
> not, depending on how many attachments/usages it has
> on the host itself.  This is the same thing as a reference counter, which
> we don't need, because we have the count in the attachments table, once we
> allow setting the host and the instance_uuid at the same time.
>
Not trying to drag this out or be difficult, I promise.  But this seems
like it is in fact the same problem, and I'm not exactly following; if you
store the info on the compute side during the attach phase, why would you
need/want to then create a split-brain scenario and have Cinder do any sort
of tracking on the detach side of things?

Like the earlier posts said, just don't call terminate_connection if you
don't want to really terminate the connection.  I'm sorry, I'm just not
following the logic of why Cinder should track this and interfere with
things.  It's supposed to be providing a service to consumers and "do what
it's told", even if it's told to do the wrong thing.


> Walt
>
> On Tue, Feb 09, 2016 at 11:49:33AM -0800, Walter A. Boring IV wrote:
>>
>>> Hey folks,
>>> One of the challenges we have faced with the ability to attach a
>>> single
>>> volume to multiple instances, is how to correctly detach that volume.
>>> The
>>> issue is a bit complex, but I'll try and explain the problem, and then
>>> describe one approach to solving one part of the detach puzzle.
>>>
>>> Problem:
>>>When a volume is attached to multiple instances on the same host.
>>> There
>>> are 2 scenarios here.
>>>
>>>1) Some Cinder drivers export a new target for every attachment on a
>>> compute host.  This means that you will get a new unique volume path on a
>>> host, which is then handed off to the VM instance.
>>>
>>>2) Other Cinder drivers export a single target for all instances on a
>>> compute host.  This means that every instance on a single host, will
>>> reuse
>>> the same host volume path.
>>>
>>
>> This problem isn't actually new. It is a problem we already have in Nova
>> even with single attachments per volume.  eg, with NFS and SMBFS there
>> is a single mount setup on the host, which can serve up multiple volumes.
>> We have to avoid unmounting that until no VM is using any volume provided
>> by that mount point. Except we pretend the problem doesn't exist and just
>> try to unmount every single time a VM stops, and rely on the kernel
>> failing umount() with EBUSY.  Except this has a race condition if one VM
>> is stopping right as another VM is starting
>>
>> There is a patch up to try to solve this for SMBFS:
>>
>> https://review.openstack.org/#/c/187619/
>>
>> but I don't really much like it, because it only solves it for one
>> driver.
>>
>> I think we need a general solution that solves the problem for all
>> cases, including multi-attach.
>>
>> AFAICT, the only real answer here is to have nova record more info
>> about volume attachments, so it can reliably decide when it is safe
>> to release a connection on the host.
>>
>>
>> Proposed solution:
>>>Nova needs to determine if the volume that's being detached is a
>>> shared or
>>> non shared volume.  Here is one way to determine that.
>>>
>>>Every Cinder volume has a list of it's attachments.  In those
>>> attachments
>>> it contains the instance_uuid that the volume is attached to.  I presume
>>> Nova can find which of the 

Re: [openstack-dev] [all] tenant vs. project

2016-02-12 Thread John Griffith
On Fri, Feb 12, 2016 at 5:01 AM, Sean Dague  wrote:

> Ok... this is going to be one of those threads, but I wanted to try to
> get resolution here.
>
> OpenStack is wildly inconsistent in it's use of tenant vs. project. As
> someone that wasn't here at the beginning, I'm not even sure which one
> we are supposed to be transitioning from -> to.
>
> At a minimum I'd like to make all of devstack use 1 term, which is the
> term we're trying to get to. That will help move the needle.
>
> However, again, I'm not sure which one that is supposed to be (comments
> in various places show movement in both directions). So people with
> deeper knowledge here, can you speak up as to which is the deprecated
> term and which is the term moving forward.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
I honestly don't have any real feeling about one over the other; BUT I
applaud the fact that somebody was brave enough to raise the question again.

Sounds like Project is where we're supposed to be, so if we can get it in
Keystone we can all go work on updating it once and for all?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-10 Thread John Griffith
On Tue, Feb 9, 2016 at 3:23 PM, Ildikó Váncsa 
wrote:

> Hi Walt,
>
> > -Original Message-
> > From: Walter A. Boring IV [mailto:walter.bor...@hpe.com]
> > Sent: February 09, 2016 23:15
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining
> when to call os-brick's connector.disconnect_volume
> >
> > On 02/09/2016 02:04 PM, Ildikó Váncsa wrote:
> > > Hi Walt,
> > >
> > > Thanks for starting this thread. It is a good summary of the issue and
> the proposal also looks feasible to me.
> > >
> > > I have a quick, hopefully not too wild idea based on the earlier
> discussions we had. We were considering earlier to store the target
> > identifier together with the other items of the attachment info. The
> problem with this idea is that when we call initialize_connection
> > from Nova, Cinder does not get the relevant information, like
> instance_id, to be able to do this. This means we cannot do that using
> > the functionality we have today.
> > >
> > > My idea here is to extend the Cinder API so that Nova can send the
> missing information after a successful attach. Nova should have
> > all the information including the 'target', which means that it could
> update the attachment information through the new Cinder API.
> > I think what we need to do is to allow the connector to be passed at
> > os-attach time.  Then cinder can save it in the attachment's table
> entry.
> >
> > We will also need a new cinder API to allow that attachment to be
> updated during live migration, or the connector for the attachment
> > will get stale and incorrect.
>
> When saying below that it will be good for live migration as well I meant
> that the update is part of the API.
>
> Ildikó
>
> >
> > Walt
> > >
> > > It would mean that when we request for the volume info from Cinder at
> detach time the 'attachments' list would contain all the
> > required information for each attachments the volume has. If we don't
> have the 'target' information because of any reason we can
> > still use the approach described below as fallback. This approach could
> even be used in case of live migration I think.
> > >
> > > The Cinder API extension would need to be added with a new
> microversion to avoid problems with older Cinder versions talking to
> > new Nova.
> > >
> > > The advantage of this direction is that we can reduce the round trips
> to Cinder at detach time. The round trip after a successful
> > attach should not have an impact on the normal operation as if that
> fails the only issue we have is we need to use the fall back method
> > to be able to detach properly. This would still affect only
> multiattached volumes, where we have more than one attachments on the
> > same host. By having the information stored in Cinder as well we can
> also avoid removing a target when there are still active
> > attachments connected to it.
> > >
> > > What do you think?
> > >
> > > Thanks,
> > > Ildikó
> > >
> > >
> > >> -Original Message-
> > >> From: Walter A. Boring IV [mailto:walter.bor...@hpe.com]
> > >> Sent: February 09, 2016 20:50
> > >> To: OpenStack Development Mailing List (not for usage questions)
> > >> Subject: [openstack-dev] [Nova][Cinder] Multi-attach, determining
> > >> when to call os-brick's connector.disconnect_volume
> > >>
> > >> Hey folks,
> > >>  One of the challenges we have faced with the ability to attach a
> > >> single volume to multiple instances, is how to correctly detach that
> > >> volume.  The issue is a bit complex, but I'll try and explain the
> problem, and then describe one approach to solving one part of the
> > detach puzzle.
> > >>
> > >> Problem:
> > >> When a volume is attached to multiple instances on the same host.
> > >> There are 2 scenarios here.
> > >>
> > >> 1) Some Cinder drivers export a new target for every attachment
> > >> on a compute host.  This means that you will get a new unique volume
> path on a host, which is then handed off to the VM
> > instance.
> > >>
> > >> 2) Other Cinder drivers export a single target for all instances
> > >> on a compute host.  This means that every instance on a single host,
> will reuse the same host volume path.
> > >>
> > >>
> > >> When a user issues a request to detach a volume, the workflow boils
> > >> down to first calling os-brick's connector.disconnect_volume before
> > >> calling Cinder's terminate_connection and detach. disconnect_volume's
> job is to remove the local volume from the host OS and
> > close any sessions.
> > >>
> > >> There is no problem under scenario 1.  Each disconnect_volume only
> > >> affects the attached volume in question and doesn't affect any other
> > >> VM using that same volume, because they are using a different path
> that has shown up on the host.  It's a different target
> > exported from the Cinder backend/array.
> > >>
> > >> The problem comes under scenario 2, where that single volume is
> > >> 

Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-10 Thread John Griffith
On Wed, Feb 10, 2016 at 5:12 PM, Fox, Kevin M  wrote:

> But the issue is, when told to detach, some of the drivers do bad things.
> Then, is it the driver's job to refcount to fix the issue, or is it
> nova's to refcount so that it doesn't call the release before all users
> are done with it? I think solving it in the middle, in cinder, is probably
> not the right place to track it, but if it's to be solved on nova's side,
> nova needs to know when it needs to do it. But cinder might have to relay
> some extra info from the backend.
>
> Either way, On the driver side, there probably needs to be a mechanism on
> the driver to say it either can refcount properly so its multiattach
> compatible (or that nova should refcount), or to default to not allowing
> multiattach ever, so existing drivers don't break.
>
> Thanks,
> Kevin
> 
> From: Sean McGinnis [sean.mcgin...@gmx.com]
> Sent: Wednesday, February 10, 2016 3:25 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when
> to call os-brick's connector.disconnect_volume
>
> On Wed, Feb 10, 2016 at 11:16:28PM +, Fox, Kevin M wrote:
> > I think part of the issue is whether to count or not is cinder driver
> specific and only cinder knows if it should be done or not.
> >
> > But if cinder told nova that particular multiattach endpoints must be
> refcounted, that might resolve the issue?
> >
> > Thanks,
> > Kevin
>
> I this case (the point John and I were making at least) it doesn't
> matter. Nothing is driver specific, so it wouldn't matter which backend
> is being used.
>
> If a volume is needed, request it to be attached. When it is no longer
> needed, tell Cinder to take it away. Simple as that.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

​Hey Kevin,

So I think what Sean M pointed out is still valid in your case.  It's not
really that some drivers do bad things; the problem is actually the way
attach/detach works in OpenStack as a whole.  The original design (which we
haven't strayed very far from) was that you could only attach a single
resource to a single compute node.  That was it, there was no concept of
multi-attach etc.

Now, however, folks want to introduce multi-attach, which means all of the
old assumptions that the code was written on and designed around are kinda
"bad assumptions" now.  It's true, as you pointed out, that there
are some drivers that behave or deal with targets in a way that makes
things complicated, but they're completely in line with the SCSI standards
and aren't doing anything *wrong*.

The point Sean M and I were trying to make is that for the specific use
case of a single volume being attached to a compute node, BUT being passed
through to more than one Instance, it might be worth looking at just
ensuring that the Compute Node doesn't call detach unless it's *done* with
all of the Instances that it was passing that volume through to.
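A rough sketch of what that could look like on the compute side: per-host
ref-counting of shared targets, so the final disconnect only happens once
(all of the names here are hypothetical; this is not existing Nova code):

    from collections import defaultdict

    # How many instances on this compute node are using each target.
    _target_refcount = defaultdict(int)


    def attach(target_id, connect_volume):
        if _target_refcount[target_id] == 0:
            connect_volume()        # first user on this host: log in to the target
        _target_refcount[target_id] += 1


    def detach(target_id, disconnect_volume):
        _target_refcount[target_id] -= 1
        if _target_refcount[target_id] <= 0:
            disconnect_volume()     # last user on this host: safe to remove the target
            _target_refcount.pop(target_id, None)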

You're absolutely right, there are some *weird* things that a couple of
vendors do with targets in cases like replication, where they may
actually create a new target and attach; those sorts of things are
ABSOLUTELY Cinder's problem, and Nova should not have to know anything about
them as a consumer of the Target.

My view is that maybe we should look at addressing the multiple use of a
single target case in Nova, and then absolutely figure out how to make
things work correctly on the Cinder side for all the different behaviors
that may occur on the Cinder side from the various vendors.

Make sense?

John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-10 Thread John Griffith
On Wed, Feb 10, 2016 at 3:59 PM, Sean McGinnis <sean.mcgin...@gmx.com>
wrote:

> On Wed, Feb 10, 2016 at 03:30:42PM -0700, John Griffith wrote:
> > On Tue, Feb 9, 2016 at 3:23 PM, Ildikó Váncsa <
> ildiko.van...@ericsson.com>
> > wrote:
> >
> > >
> > ​This may still be in fact the easiest way to handle this.  The only
> other
> > thing I am still somewhat torn on here is that maybe Nova should be doing
> > ref-counting WRT shared connections and NOT send the detach in that
> > scenario to begin with?
> >
> > In the case of unique targets per-attach we already just "work", but if
> you
> > are using the same target/attachment on a compute node for multiple
> > instances, then you probably should keep track of that on the users end
> and
> > not remove it while in use.  That seems like the more "correct" way to
> deal
> > with this, but ​maybe that's just me.  Keep in mind we could also do the
> > same ref-counting on the Cinder side if we so choose.
>
> This is where I've been pushing too. It seems odd to me that the storage
> domain should need to track how the volume is being used by the
> consumer. Whether it is attached to one instance, 100 instances, or the
> host just likes to keep it around as a pet, from the storage perspective
> I don't know why we should care.
>
> Looking beyond Nova usage, does Cinder now need to start tracking
> information about containers? Bare metal hosts? Apps that are associated
> with LUNs. It just seems like concepts that the storage component
> shouldn't need to know or care about.
>
>
Well said, I agree.


> I know there's some history here and it may not be as easy as that. But
> just wanted to state my opinion that in an ideal world (which I
> recognize we don't live in) this should not be Cinder's concern.
>
> >
> > We talked about this at mid-cycle with the Nova team and I proposed
> > independent targets for each connection on Cinder's side.  We can still
> do
> > that IMO but that doesn't seem to be a very popular idea.
>
> John, I don't think folks are against this idea as a concept. I think
> the problem is I don't believe all storage vendors can support exposing
> new targets for the same volume for each attachment.
>
​
Ahh, well that's a very valid reason to take a different approach.


>
> >
> > My point here is just that it seems like there might be a way to fix this
> > without breaking compatibility in the API.  Thoughts?
>
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Nominating Patrick East to Cinder Core

2016-01-31 Thread John Griffith
On Sat, Jan 30, 2016 at 12:58 PM, Jay Bryant 
wrote:

> +1. Patrick's contributions to Cinder have been notable since he joined us
> and he is a pleasure to work with!   Welcome to the core team Patrick!
>
> Jay
>
>
> On Fri, Jan 29, 2016, 19:05 Sean McGinnis  wrote:
>
>> Patrick has been a strong contributor to Cinder over the last few
>> releases, both with great code submissions and useful reviews. He also
>> participates regularly on IRC helping answer questions and providing
>> valuable feedback.
>>
>> I would like to add Patrick to the core reviewers for Cinder. Per our
>> governance process [1], existing core reviewers please respond with any
>> feedback within the next five days. Unless there are no objections, I will
>> add Patrick to the group by February 3rd.
>>
>> Thanks!
>>
>> Sean (smcginnis)
>>
>> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
+1, Patrick would make a great addition IMHO.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] MTU configuration pain

2016-01-18 Thread John Griffith
On Sun, Jan 17, 2016 at 8:30 PM, Matt Kassawara 
wrote:

> Prior attempts to solve the MTU problem in neutron simply band-aid it or
> become too complex from feature creep or edge cases that mask the primary
> goal of a simple implementation that works for most deployments. So, I ran
> some experiments to empirically determine the root cause of MTU problems in
> common neutron deployments using the Linux bridge agent. I plan to perform
> these experiments again using the Open vSwitch agent... after sufficient
> mental recovery.
>
> I highly recommend reading further, but here's the TL;DR:
>
> Observations...
>
> 1) During creation of a VXLAN interface, Linux automatically subtracts the
> VXLAN protocol overhead from the MTU of the parent interface.
> 2) A veth pair or tap with a different MTU on each end drops packets
> larger than the smaller MTU.
> 3) Linux automatically adjusts the MTU of a bridge to the lowest MTU of
> all the ports. Therefore, Linux reduces the typical bridge MTU from 1500 to
> 1450 when neutron adds a VXLAN interface to it.
> 4) A bridge with different MTUs on each port drops packets larger than the
> MTU of the bridge.
> 5) A bridge or veth pair with an IP address can participate in path MTU
> discovery (PMTUD). However, these devices do not appear to understand
> namespaces and originate the ICMP message from the host instead of a
> namespace. Therefore, the message never reaches the destination...
> typically a host outside of the deployment.
>
> Conclusion...
>
> The MTU disparity between native and overlay networks must reside in a
> device capable of layer-3 operations that can participate in PMTUD, such as
> the neutron router between a private/project overlay network and a
> public/external native network.
>
> Some background...
>
> In a typical datacenter network, MTU must remain consistent within a
> layer-2 network because fragmentation and the mechanism indicating the need
> for it occurs at layer-3. In other words, all host interfaces and switch
> ports on the same layer-2 network must use the same MTU. If the layer-2
> network connects to a router, the router port must also use the same MTU. A
> router can contain ports on multiple layer-2 networks with different MTUs
> because it operates on those networks at layer-3. If the MTU changes
> between ports on a router and devices on those layer-2 networks attempt to
> communicate at layer-3, the router can perform a couple of actions. For
> IPv4, the router can fragment the packet. However, if the packet contains
> the "don't fragment" (DF) flag, the router can either silently drop the
> packet or return an ICMP "fragmentation needed" message to the sender. This
> ICMP message contains the MTU of the next layer-2 network in the route
> between the sender and receiver. Each router in the path can return these
> ICMP messages to the sender until it learns the maximum MTU for the entire
> path, also known as path MTU discovery (PMTUD). IPv6 does not support
> fragmentation.
>
> The cloud provides a virtual extension of a physical network. In the
> simplest sense, patch cables become veth pairs, switches become bridges,
> and routers become namespaces. Therefore, MTU implementation for virtual
> networks should mimic physical networks where MTU changes must occur within
> a router at layer-3.
>
> For these experiments, my deployment contains one controller and one
> compute node. Neutron uses the ML2 plug-in and Linux bridge agent. The
> configuration does not contain any MTU options (e.g, path_mtu). One VM with
> a floating IP address attaches to a VXLAN private network that routes to a
> flat public network. The DHCP agent does not advertise MTU to the VM. My
> lab resides on public cloud infrastructure with networks that filter
> unknown MAC addresses such as those that neutron generates for virtual
> network components. Let's talk about the implications and workarounds.
>
> The VXLAN protocol contains 50 bytes of overhead. Linux automatically
> calculates the MTU of VXLAN devices by subtracting 50 bytes from the parent
> device, in this case a standard Ethernet interface with a 1500 MTU.
> However, due the limitations of public cloud networks, I must create a
> VXLAN tunnel between the controller node and a host outside of the
> deployment to simulate traffic from a datacenter network. This tunnel
> effectively reduces the "native" MTU from 1500 to 1450. Therefore, I need
> to subtract an additional 50 bytes from neutron VXLAN network components,
> essentially emulating the 50-byte difference between conventional neutron
> VXLAN networks and native networks. The host outside of the deployment
> assumes it can send packets using a 1450 MTU. The VM also assumes it can
> send packets using a 1450 MTU because the DHCP agent does not advertise a
> 1400 MTU to it.
>
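To spell out the MTU budget described above as simple arithmetic (the
50-byte VXLAN overhead figure comes from the message; the rest is just
subtraction):

    VXLAN_OVERHEAD = 50        # bytes of VXLAN encapsulation overhead

    physical_mtu = 1500                                   # standard Ethernet
    lab_native_mtu = physical_mtu - VXLAN_OVERHEAD        # 1450: tunnel to the outside host
    tenant_network_mtu = lab_native_mtu - VXLAN_OVERHEAD  # 1400: what the VM should use

    # The outside host and the VM both assume 1450, but packets on the
    # neutron VXLAN network only have room for 1400; large packets get dropped.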
> Let's get to it!
>
> Note: The commands in these experiments often generate lengthy output, so
> please refer to the gists when necessary.
>

Re: [openstack-dev] Should Cinder have a volume_action table to track changes on a Volume?

2016-01-03 Thread John Griffith
On Wed, Dec 30, 2015 at 7:54 AM, SCHVENINGER, DOUGLAS P 
wrote:

> I was looking into a support issue and noticed that Nova has an instance
> table and an instance_action table. I was wondering whether cinder would
> consider adding a volume_action table to track changes to a volume?
>
Yes, it's been discussed, although the discussions were more around using
it as a state machine of sorts.  Since those discussions, other ideas have
been implemented (or are being implemented) that won't address what you're
asking for (DLMs, etc.), but I do think having tracking info like you
describe is useful.
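For reference, a minimal sketch of what such a table could look like,
loosely modeled on Nova's instance_actions (a hypothetical schema, not an
actual Cinder migration):

    from sqlalchemy import Column, DateTime, Integer, String, Text
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()


    class VolumeAction(Base):
        # One row per user-visible action taken against a volume.
        __tablename__ = 'volume_actions'

        id = Column(Integer, primary_key=True)
        volume_id = Column(String(36), nullable=False, index=True)
        action = Column(String(255))         # e.g. 'create', 'extend', 'attach'
        start_time = Column(DateTime)
        finish_time = Column(DateTime)
        user_id = Column(String(255))
        project_id = Column(String(255))
        message = Column(Text)               # free-form detail or error text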


> I looked in the specs and I could not see anything like this.
>
> Has this been talked about before?
>
> Does this sound like something cinder would like to add?
>
>
>
> Doug Schveninger
>
> Principal Technical Architect
>
> AIC – AT Integrated Cloud
>
> *AT* Services, Inc.
>
> *Rethink Possible*
>
>
>
> Email: ds6...@att.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Dependencies of snapshots on volumes

2015-12-09 Thread John Griffith
On Tue, Dec 8, 2015 at 9:10 PM, Li, Xiaoyan  wrote:

> Hi all,
>
> Currently when deleting a volume, it checks whether there are snapshots
> created from it. If so, deletion is prohibited.  But it allows extending
> the volume, with no check for whether there are snapshots from it.
>
Correct.


>
> The two behaviors in Cinder are not consistent from my viewpoint.
>
Well, your snapshot was taken at a point in time; if you do a create
from snapshot, the whole point is that you want what you HAD when the
snapshot command was issued and NOT what happened afterwards.  So in my
opinion this is not inconsistent at all.
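As an illustration of the dependency check being discussed, a small sketch
using python-cinderclient (the "cinder" handle is assumed to be an
already-authenticated client):

    # Mirror the check Cinder itself performs: refuse to delete a volume
    # that still has snapshots depending on it.
    def has_snapshots(cinder, volume_id):
        snaps = cinder.volume_snapshots.list(search_opts={'volume_id': volume_id})
        return len(snaps) > 0


    def safe_delete(cinder, volume_id):
        if has_snapshots(cinder, volume_id):
            raise RuntimeError('volume %s still has snapshots; delete them first'
                               % volume_id)
        cinder.volumes.delete(volume_id)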


>
> In the backend storage, their behaviors are the same.
>
Which backend storage are you referring to in this case?


> For a full snapshot, if copying is still in progress, neither extend nor
> deletion is allowed. Once the snapshot copy finishes, both extend and
> deletion are allowed.
> For an incremental snapshot, neither extend nor deletion is allowed.
>

So your particular backend has "different/specific" rules/requirements
around snapshots.  That's pretty common; I don't suppose there's any way to
hack around this internally?  In other words, do things on your backend like
clones as snaps etc. to make up for the differences in behavior?


>
> As a result, this raises two concerns here:
> 1. Make such operations behave the same in Cinder.
> 2. I prefer to let the storage driver decide the dependencies, not the
> general core code.
>

I have and always will strongly disagree with this approach and your
proposal.  Sadly, we've already started to allow more and more vendor
drivers to just "do their own thing" and implement their own special API
methods.  This is, in my opinion, a horrible path and defeats the entire
purpose of having a Cinder abstraction layer.

This will make it impossible to have compatibility between clouds for those
that care about it, and it will make it impossible for operators/deployers
to understand exactly what they can and should expect in terms of the usage
of their cloud.  Finally, it will also mean that OpenStack API functionality
is COMPLETELY dependent on the backend device.  I know people are sick of
hearing me say this, so I'll keep it short and say it one more time:
"Compatibility in the API matters and should always be our priority."


> Meanwhile, if we let the driver decide the dependencies, the following
> changes need to be made in Cinder:
> 1. When creating a snapshot from a volume, it needs to copy all metadata
> of the volume to the snapshot. Currently it doesn't.
> Please let me know of any other potential issues.
>
> Any input will be appreciated.
>
> Best wishes
> Lisa
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   3   4   5   >