Re: [openstack-dev] [manila] nominating Amit Oren for manila core

2018-10-03 Thread Ben Swartzlander

On 10/02/2018 01:58 PM, Tom Barron wrote:
Amit Oren has contributed high-quality reviews in the last couple of 
cycles, so I would like to nominate him for manila core.


Please respond with your +1 or -1 votes.  We'll hold voting open for 7 
days.


+1


Thanks,

-- Tom Barron (tbarron)




Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-05-29 Thread Ben Swartzlander

On 05/29/2018 03:43 PM, Davanum Srinivas wrote:

Agree with Ian here.

Also another problem that comes up is: "Why are you touching *MY*
review?" (probably coming from the view where stats - and stackalytics
leaderboard position - are important). So I guess we either ask permission
before editing, or file a follow-up later, or just tell folks that
this is OK to do!!


I think Stackalytics is evil and should be killed with fire. It 
encourages all kinds of pathological behavior, this being one prime 
example. Having worked as a core reviewer, I find zero value in the 
project. We know who is contributing code and who is doing reviews 
without some robot to tell us.


-Ben



I'm hoping that engaging with them will also solve yet another issue:
someone going around filing the same change in a dozen projects
(repeatedly!), but that may be wishful thinking.

-- Dims

On Tue, May 29, 2018 at 12:17 PM, Ian Wells  wrote:

If your nitpick is a spelling mistake or the need for a comment where you've
pretty much typed the text of the comment in the review comment itself, then
I have personally found it easiest to use the Gerrit online editor to
actually update the patch yourself.  There's nothing magical about the
original submitter, and no point in wasting your time and theirs to get them
to make the change.  That said, please be a grown up; if you're changing
code or messing up formatting enough for PEP8 to be a concern, it's your
responsibility, not the original submitter's, to fix it.  Also, do all your
fixes in one commit if you don't want to make Zuul cry.
--
Ian.


On 29 May 2018 at 09:00, Neil Jerram  wrote:


 From my point of view as someone who is still just an occasional
contributor (in all OpenStack projects other than my own team's networking
driver), and so I think still sensitive to the concerns being raised here:

- Nits are not actually a problem, at all, if they are uncontroversial and
quick to deal with.  For example, if it's a point of English, and most
English speakers would agree that a correction is better, it's quick and no
problem for me to make that correction.

- What is much more of a problem is:

   - Anything that is more a matter of opinion.  If a markup is just the
reviewer's personal opinion, and they can't say anything to explain more
objectively why their suggestion is better, it would be wiser to defer to
the contributor's initial choice.

   - Questioning something unconstructively or out of proportion to the
change being made.  This is a tricky one to pin down, but sometimes I've had
comments that raise some random left-field question that isn't really
related to the change being made, or where the reviewer could have done a
couple of minutes' research themselves and then either made a more precise
comment, or not made their comment at all.

   - Asking - implicitly or explicitly - the contributor to add more
cleanups to their change.  If someone usefully fixes a problem, and their
fix does not of itself impair the quality or maintainability of the
surrounding code, they should not be asked to extend their fix so as to fix
further problems that a more regular developer may be aware of in that area,
or to advance a refactoring / cleanup that another developer has in mind.
(At least, not as part of that initial change.)

(Obviously the common thread of those problem points is taking up more
time; psychologically I think one of the things that can turn a contributor
away is the feeling that they've contributed a clearly useful thing, yet the
community is stalling over accepting it for reasons that do not appear
clearcut.)

Hoping this is vaguely helpful...
  Neil


On Tue, May 29, 2018 at 4:35 PM Amy Marrich  wrote:


If I have a nit that doesn't affect things, I'll make a note of it and
say that if you do another patch I'd really like it fixed, but I'll also
give the patch a vote. Sometimes, if I know the user or they are online,
I'll offer to fix things for them; that way they can see what I've done,
I've sped things along, and I haven't caused a simple change to take a
long time and many reviews.

I think this is a great addition!

Thanks,

Amy (spotz)

On Tue, May 29, 2018 at 6:55 AM, Julia Kreger
 wrote:


During the Forum, the topic of review culture came up in session after
session. During these discussions, the subject of our use of nitpicks
was often raised as a point of contention and frustration, especially
by community members that have left the community and that were
attempting to re-engage with it. Contributors raised the point
of review feedback requiring extremely precise English, or
compliance with a particular core reviewer's style preferences, which
may not be the same as another core reviewer's.

These things are not just frustrating, but also very inhibiting for 
part-time contributors such as students, who may also be time-limited. 
Or an operator who noticed something that was clearly a bug, put 
forth a very minor fix, and doesn't have the time to 

[openstack-dev] [manila][ptl] Stepping down as Manila PTL

2018-01-30 Thread Ben Swartzlander
After leading the Manila project for 5 years, it's time for me to step 
down. I feel incredibly proud of the project and the team that's worked 
to bring Manila from an idea at the Folsom design summit to the 
successful project it is today.


Manila has reached a point of stability where I feel like it doesn't 
need me to spend all my time pushing it forward, and I can change my 
role to contributor and let someone else lead.


I'm thankful for all the support the project has received from 
contributors and from the larger OpenStack community.


-Ben Swartzlander



Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-19 Thread Ben Swartzlander
 holidays. Late
September is just too close to the October/November summit.

So the year-long cycles would ideally start at the beginning of the
year, when we would organize the yearly PTG. That said, I'm not sure we
can really afford to keep the current rhythm for one more year before
switching. That is why I'd like us to consider taking the plunge and
just doing it for *Rocky*, and have a single PTG in 2018 (in Dublin).

Who makes the call?

While traditionally the release team has been deciding the exact shape
of development cycles, we think that this significant change goes well
beyond the release team and needs to be discussed across all of the
OpenStack community, with a final decision made by the Technical Committee.

So... what do you think?


I have no problem with lengthening the official project cycles. 
Traveling 4 times a year and holding elections twice a year always felt 
a little crazy to me.


As I mentioned in a reply to a thread on the SIG list, I would like to 
see smaller and faster software releases, and the 6-month coordinated 
release cycle was actually an impediment to doing that because it would 
be challenging to pull off mini-releases inside the 6-month cycle.


It seems to me that with a 12-month coordinated release, individual 
teams could still release twice a year, or they could start doing 3 or 4 
releases a year, or even try for 5 or 6 (which is the cadence of the 
Linux kernel). The additional freedom for individual teams plus the 
continued stability for users seems like a win-win to me.


Of course as you mentioned, it doesn't really solve some of our more 
horrifying problems, such as users who persist in running ancient 
versions, and users who want to upgrade over 2+ years' worth of releases 
in a single shot. But I don't see how this makes those problems any 
worse either.


-Ben Swartzlander



Re: [openstack-dev] [manila] Nominating Zhong Jun (zhongjun) for Manila core

2017-11-22 Thread Ben Swartzlander

On 11/19/2017 06:29 PM, Ravi, Goutham wrote:

Hello Manila developers,

I would like to nominate Zhong Jun (zhongjun on irc, zhongjun2 on 
gerrit) to be part of the Manila core team. Zhongjun has been an 
important member of our community since the Kilo release and has, in 
the past few releases, made significant contributions to the 
constellation of projects related to openstack/manila [1]. She is also 
our ambassador in the APAC region/timezones. Her opinion is valued 
amongst the core team and I think, as a core reviewer and maintainer, 
she would continue to help grow and maintain our project.


Please respond with a +1/-1.

We will not be having an IRC meeting this Thursday (23rd November 
2017), so if we have sufficient quorum, PTL extraordinaire Ben 
Swartzlander will confirm her nomination here.


Welcome, Jun, to the manila core reviewer team! Your hard work and 
dedication to the Manila project are greatly appreciated! Normally we do 
these announcements during the weekly meetings, but since tomorrow's 
meeting is canceled, I'm adding you early. If you have any questions 
about your responsibilities as a core reviewer, please ask me on IRC.


-Ben Swartzlander



Re: [openstack-dev] [manila] Nominating Zhong Jun (zhongjun) for Manila core

2017-11-19 Thread Ben Swartzlander

On 11/19/2017 06:29 PM, Ravi, Goutham wrote:

Hello Manila developers,

I would like to nominate Zhong Jun (zhongjun on irc, zhongjun2 on 
gerrit) to be part of the Manila core team. Zhongjun has been an 
important member of our community since the Kilo release and has, in 
the past few releases, made significant contributions to the 
constellation of projects related to openstack/manila [1]. She is also 
our ambassador in the APAC region/timezones. Her opinion is valued 
amongst the core team and I think, as a core reviewer and maintainer, 
she would continue to help grow and maintain our project.


Please respond with a +1/-1.


+1 from me.

-Ben


We will not be having an IRC meeting this Thursday (23rd November 
2017), so if we have sufficient quorum, PTL extraordinaire Ben 
Swartzlander will confirm her nomination here.


[1] 
http://stackalytics.com/?user_id=jun-zhongjun&release=all&metric=person-day


Thanks,

Goutham





[openstack-dev] [Manila] Access rule types for protocols

2017-07-27 Thread Ben Swartzlander
Manila has a long-standing design flaw that I'd like to address in 
Queens. We've discussed this at previous PTGs and will cover it in 
detail at the upcoming PTG, but because of the potential for impacting 
users I wanted to bring it up here too.


In short, Manila allows potentially incompatible access rules to be 
added to shares. We handle this by allowing the driver to fail to apply 
the rule and reporting that error back to the user by marking the access 
rule as being in an error state.


The history of this design is that at the very beginning of the 
project, we didn't have a strong sense of what kind of access rules 
would be used in practice, or implemented by vendors, and we didn't want 
to design the API to limit what users were allowed to request. In 
retrospect, this attempt at flexibility was misguided because it's now 
very hard for users to know what is supported on any given cloud and we 
have a complete lack of standardization.


Informally, we've settled on the idea that each protocol has exactly 1 
access type that must be supported, and I'd like to formalize that 
notion by changing the API to enforce the agreed upon standard. This 
raises the specter of "backwards incompatibility" because such an API 
change would block requests that were previously allowed. My feeling 
about this specific case is that the requests were going to result in an 
error eventually, so making them result in an error earlier is an 
improvement.
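
As a concrete illustration, enforcement at the API layer could look
something like the sketch below. The protocol-to-access-type mapping
reflects the informal standard described above (IP-based rules for NFS,
user-based rules for CIFS); the function and dict names are illustrative,
not actual manila code.

    # Informal standard: each protocol has exactly one access type.
    STANDARD_ACCESS_TYPES = {
        'NFS': 'ip',     # NFS shares use IP-based access rules
        'CIFS': 'user',  # CIFS shares use user-based access rules
    }

    def validate_access_rule(share_proto, access_type):
        """Reject nonstandard rules up front instead of erroring later."""
        expected = STANDARD_ACCESS_TYPES.get(share_proto.upper())
        if expected is None:
            raise ValueError("Unknown share protocol: %s" % share_proto)
        if access_type != expected:
            raise ValueError(
                "Access type %r is not supported for %s shares; use %r"
                % (access_type, share_proto, expected))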


The concern is whether there might be any legitimate cases for using the 
nonstandard access methods that could be broken by the proposed change. 
In particular, I'd like to know if anyone actually uses IP-based access 
for CIFS or user-based access for NFS and what that looks like -- in 
particular which driver and what the use case for it is. My current 
assumption is that these are effectively broken and removing them will 
hurt nobody. If legitimate cases can be found, then we need to find a 
way to support them in a more standardized, discoverable fashion.


-Ben Swartzlander



[openstack-dev] [Manila] Bug Squash Day - Wednesday Aug 2

2017-07-27 Thread Ben Swartzlander
We have a bug squash day planned for next week! The goal of this 
event is to get together, come up with a list of bugs that need 
squashing, and work together to fix as many as we can. The timing is 
right after feature freeze, so if we're successful we can have a very 
strong impact on the RC1 release for Pike.


I encourage everyone in the community to participate, whether you're a 
developer, a user, a tester, or a deployer. We need people to help 
identify and prioritize bugs, which is easily as important as fixing them.


Like many Manila events, this one will be virtual, organized on IRC 
with Webex for audio. There is an etherpad [1] with the event details, 
which we will use to track the bugs, prioritize them, and assign people.


-Ben Swartzlander

[1] https://etherpad.openstack.org/p/pike-manila-bug-squash




Re: [openstack-dev] [Manila] Feature proposal freeze exception request for change

2017-07-19 Thread Ben Swartzlander

On 07/18/2017 04:33 PM, Ravi, Goutham wrote:

Hello Manila reviewers,

It has been a few days past the feature proposal freeze, but I would 
like to request an extension for an enhancement to the NetApp driver in 
Manila. [1] implements a low-impact blueprint [2] that was approved for 
the Pike release. The code change is contained within the driver and 
would be a worthwhile addition for users of this driver in Manila/Pike.


I have no problem with this particular feature, but given that it missed 
the deadline I would recommend prioritizing review on this lower than 
other changes which did meet the deadline.


I'll put an agenda item on the weekly meeting tomorrow to formally 
consider this FFE.


-Ben


[1] https://review.openstack.org/#/c/484933/

[2] https://blueprints.launchpad.net/openstack/?searchtext=netapp-cdot-qos

Thanks,

Goutham





[openstack-dev] [Manila] Stable-maint team members

2017-07-19 Thread Ben Swartzlander

Welcome to the following new members of the manila-stable-maint team:

Goutham Pacha Ravi
Rodrigo Barbieri
Thomas Bechtold
Tom Barron
Valeriy Ponomaryov
Xing Yang

All of you are of course familiar with the stable-maint guidelines, and 
have a good history of enforcing the rules. Please continue to exercise 
restraint when approving backports, and perhaps go over the checklist 
one more time before pressing the +2 button :-)


Thanks for keeping stable branches stable!
-Ben Swartzlander



[openstack-dev] [Manila] Weekly meeting canceled

2017-07-05 Thread Ben Swartzlander
I'm not able to chair the meeting tomorrow and nobody has offered to 
chair the meeting in my stead, so the meeting is canceled. We will meet 
as normal next week.


-Ben Swartzlander



[openstack-dev] [manila] PTG involvement

2017-05-17 Thread Ben Swartzlander
As I've mentioned in past meetings, the Manila community needs to decide 
whether the OpenStack PTG is a venue we want to take advantage of in the 
future. In particular, there's a deadline to declare our plans for the 
Denver PTG in September by the end of the week.


Personally I'm conflicted because there are pros and cons either way, 
but having attended the Summit in Boston last week, I think I have all 
the information needed to form my own opinion and I'd really like to 
hear from the rest of you at the weekly meeting tomorrow.


I believe our choice breaks down into 2 broad categories:

1) Continue to be involved with PTG. In this case we would need to lobby 
the foundation organizers to reduce the kinds of scheduling conflicts 
that made the Atlanta PTG so problematic for our team.


2) Drop out of PTG and plan a virtual PTG just for Manila a few weeks 
before or after the official PTG. In this case we would encourage team 
members to get together at the summits for face-to-face discussions.



Pros for (1):
* This is clearly what the foundation wants
* For US-based developers it would save money compared to (2)
* It ensures that timezone issues don't prevent participation in discussions

Cons for (1):
* We'd have to fight with cross-project sessions and other project 
sessions (notably Cinder) for time slots to meet. Very likely it will be 
impossible to participate in all 3 tracks, which some of us currently 
try to do.
* Some developers won't get budget for travel because it's not the kind 
of conference where there are customers and salespeople (and thus lots 
of spare money).


Pros for (2):
* Virtual meetups have worked out well for us in the past, and they save 
money.

* It allows us to easily avoid any scheduling conflicts with other tracks.
* It avoids exhaustion at the PTG itself where trying to participate in 
3 tracks would probably mean no downtime.
* It's pretty easy to get budget to travel to summits because there are 
customers and salespeople, so face-to-face time could be preserved by 
hanging out in hacking rooms and using forum sessions.


Cons for (2):
* Virtual meetups always cause problems for about 1/3 of the world's 
timezones. In the past this has meant west coast USA and Asia/Pacific 
have been greatly inconvenienced, because most participants were east 
coast USA and Europe based.
* Less chance for cross-pollination to occur at the PTG, where people 
from other projects drop in.



Based on the pros/cons I personally lean towards (2), but I look forward 
to hearing from the community.


There is one more complication affecting this decision, which is that 
the very next summit is planned for Sydney, which is uniquely far away 
and expensive to travel to (for most of the core team). For that summit 
only, I expect the argument that it's easier to travel to summits than 
PTGs to be less true, because Sydney might be simply too expensive or 
time consuming for some of us. In the long run though I expect Sydney to 
be an outlier and most summits will be relatively cheap/easy to travel to.


-Ben Swartzlander




Re: [openstack-dev] [all][tc][cinder][mistral][manila] A path forward to shiny consistent service types

2017-05-01 Thread Ben Swartzlander

On 04/28/2017 06:26 PM, Monty Taylor wrote:

Hey everybody!

Yay! (I'm sure you're all saying this, given the topic. I'll let you
collect yourself from your exuberant celebration)

== Background ==

As I'm sure you all know, we've been trying to make some headway for a
while on getting service-types that are registered in the keystone
service catalog to be consistent. The reason for this is so that API
consumers can know how to request a service from the catalog. That might
sound like a really easy task - but uh-oh, you'd be so so wrong. :)

The problem is that we have some services that went down the path of
suggesting people register a new service in the catalog with a version
appended. This pattern was actually started by nova for the v3 api -
"computev3" - which we later walked back from. The pattern was picked up
by at least cinder (volumev2, volumev3) and mistral (workflowv2) that I
am aware of. We're also suggesting in the service-types-authority that
manila go by "shared-file-system" instead of "share".

(Incidentally, this is related to a much larger topic of version
discovery, which I will not bore you with in this email, but about which
I have a giant pile of words just waiting for you in a little bit. Get
excited about that!)

== Proposed Solution ==

As a follow-up to the consuming version discovery spec, which you should
absolutely run away from and never read, I wrote these:

https://review.openstack.org/#/c/460654/ (Consuming historical aliases)
and
https://review.openstack.org/#/c/460539/ (Listing historical aliases)

It's not a particularly clever proposal - but it breaks down like this:

* Make a list of the known historical aliases we're aware of - in a
place that isn't just in one of our python libraries (460539)
* Write down a process for using them as part of finding a service from
the catalog so that there is a clear method that can be implemented by
anyone doing libraries or REST interactions. (460654)
* Get agreement on that process as the "recommended" way to look up
services by service-type in the catalog.
* Implement it in the base libraries OpenStack ships.
* Contact the authors of as many OpenStack API libraries as we can find.
* Add tempest tests to verify the mappings in both directions.
* Change things in devstack/deployer guides.

The process as described is backwards compatible. That is, once
implemented it means that a user can request "volumev2" or
"block-storage" with version=2 - and both will return the endpoint the
user expects. It also means that we're NOT asking existing clouds to run
out and break their users. New cloud deployments can do the new thing -
but the old values are handled in both directions.
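
A minimal sketch of one direction of that lookup (new name falling back
to historical aliases) is below; the alias table here is illustrative,
and the authoritative list would live in the reviews above:

    # Illustrative subset of the historical alias mapping.
    HISTORICAL_ALIASES = {
        'block-storage': ['volumev3', 'volumev2', 'volume'],
        'workflow': ['workflowv2'],
        'shared-file-system': ['share'],
    }

    def find_endpoint(catalog, service_type):
        """Find an endpoint by service type, falling back to old aliases."""
        candidates = [service_type] + HISTORICAL_ALIASES.get(service_type, [])
        for candidate in candidates:
            for entry in catalog:  # entries like {'type': ..., 'url': ...}
                if entry['type'] == candidate:
                    return entry['url']
        raise LookupError("No endpoint for %s or its aliases" % service_type)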

There is a hole, which is that people who are not using the base libs
OpenStack ships may find themselves with a new cloud that has a
different service-type in the catalog than they have used before. It's
not ideal, to be sure. BUT - hopefully active outreach to the community
libraries coupled with documentation will keep the issues to a minimum.

If we can agree on the matching and fallback model, I am volunteering to
do the work to implement in every client library in which it needs to be
implemented across OpenStack and to add the tempest tests. (it's
actually mostly a patch to keystoneauth, so that's actually not _that_
impressive of a volunteer) I will also reach out to as many of the
OpenStack API client library authors as I can find, point them at the
docs and suggest they add the support.

Thoughts? Anyone violently opposed?


I don't have any problems with this idea. My main concern would be for 
backwards-compatibility and it sounds like that's pretty well sorted out.


I do think it's important that if we make this improvement, all the 
projects really do get it done at around the same time, because if we 
only implement it in 80% of projects, it will look pretty weird.



Thanks for reading...

Monty



[openstack-dev] [Manila] Unlimited shares and share usage tracking

2017-04-21 Thread Ben Swartzlander
We had a meeting this morning to discuss the unlimited shares spec [1] 
and decided the use case wasn't compelling enough to implement this 
feature during Pike. We considered a number of different solutions to 
the proposed use case as well as other related use cases not mentioned 
in the spec but rejected most of them as unworkable, too complex, or 
solving problems nobody cares about.


However, we did decide that the use case presented in the spec was worth
solving and that it could be solved using a combination of 2 other 
features: per-share-type quotas [2], which are ready for review [3][4], 
and share usage tracking.


So far Manila hasn't cared whether shares were empty or full, or what 
fraction of the space was consumed. We inherited this behavior from 
Cinder, where it makes little sense to care about the used/free space in 
a volume, but one of the things that makes shares different from volumes 
is that their size is a lot more fluid than volumes'.


Resizing shares is trivial on many (but not all) backends, where the 
"size" of the share is nothing more than a quota enforced by the storage 
controller. A natural effect of this is that it makes less sense to bill 
by share "size" and more sense to bill by share usage.


We agreed during the meeting that we should focus on share usage 
tracking before we consider unlimited shares, because that feature is 
sufficient to solve the use case in [1], it's a prerequisite for 
unlimited shares, and it has significant value whether we end up 
implementing unlimited shares or not.


Zhongjun has volunteered to continue to lead this effort and will split 
the spec so we can consider the narrower feature for Pike. We expect 
there to be interactions with the ceilometer integration effort [5] 
which might make it hard to get both done during Pike but the current 
plan is to aim to complete both efforts.


-Ben Swartzlander

[1] https://review.openstack.org/#/c/452097/
[2] 
http://specs.openstack.org/openstack/manila-specs/specs/pike/support-quotas-per-share-type.html

[3] https://review.openstack.org/#/c/452158/
[4] https://review.openstack.org/#/c/452159/
[5] 
http://specs.openstack.org/openstack/manila-specs/specs/pike/ceilometer-integration.html




Re: [openstack-dev] [manila] share server frameworks for DHSS=False case

2017-04-04 Thread Ben Swartzlander

On 04/03/2017 03:58 PM, Valeriy Ponomaryov wrote:



On Mon, Apr 3, 2017 at 10:00 PM, Ben Swartzlander <b...@swartzlander.org> wrote:


... and we later gave up on supporting remote ZFS using SSH altogether.

-Ben


No, we didn't. It works. We just have a couple of workarounds related to
the difference between remote and local shell executors.


Thanks for this correction. While the SSH path isn't actively used, we 
are fixing bugs in that code path, so it remains a valid option.


-Ben



--
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com




Re: [openstack-dev] [manila] share server frameworks for DHSS=False case

2017-04-03 Thread Ben Swartzlander

On 04/03/2017 02:24 PM, Tom Barron wrote:

We're building an NFS frontend onto the CephFS driver and
are considering the relative merits, for the DHSS=False case,
of following (1) the lvm driver model, and (2) the generic
driver model.

With #1 export locations use a configured address in the backend,
e.g. lvm_share_export_ip and the exporting is done from the
host itself.

With #2 export locations use address from (typically at least)
a floating-IP assigned to a service VM.  The service VM must
be started up externally to manila services themselves -
e.g. by devstack plugin, tripleo, juju, whatever - prior to
configuring the backend in manila.conf.

I lean towards #1 because of its relative simplicity, and
because of its smaller resource footprint in devstack gate, but want
to make sure that I'm not missing some critical limitations.
The main limitation that occurs to me is that multiple backends
of the same type, both DHSS=False - so think of London and Paris
with the lvm jobs in gate - will typically have the same export
location IP.  They'll effectively have the same share server
as long as they run from a manila-share service on a single
host.  Offhand, that doesn't seem a show-stopper.

Am I missing something important about that limitation, and are
there other issues that I should think about?


I think either #1 or #2 could work, but #1 will be simpler and should 
have a smaller resource footprint, as you point out.


There's no rule that says that 2 different backends can't share an IP 
address. Manila intentionally hides all concepts of "servers" from end 
users, such that it's impossible to predict the IP address of the server 
that will host the share until after the share is created and the export 
location(s) are filled in by the backend. Typically backends fully own 1 
or more IPs and it's just a question of whether there will be 1 export 
location or more, but we left this up to the implementer for maximum 
flexibility.


If 2 backends were to share an IP address then they would need to avoid 
conflicting NFS export locations with some kind of namespacing scheme, 
but a simple directory prefix would be good enough.
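
For example, a minimal sketch of that prefixing (backend names borrowed
from the London/Paris example above; the helper is illustrative):

    def export_location(export_ip, backend_name, share_id):
        # e.g. 10.0.0.5:/london/share-123 vs 10.0.0.5:/paris/share-456
        return "%s:/%s/share-%s" % (export_ip, backend_name, share_id)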


While Manila doesn't care about 2 backends potentially sharing an IP, 
you do have to consider how the m-shr services interact with the daemon 
to avoid situations where they fight each other. I haven't looked into 
the potential issues around sharing a Ganesha instance, but I know that 
the LVM driver, which uses nfs-kernel-server, does have some issues in 
this area which need fixing.



Thanks,

-- Tom

p.s. When I asked this question on IRC vponmaryov suggested that I look
at the zfsforlinux driver, which allows for serving shares from the host
but which also allows for running a share server on a remote host,
accessible via ssh.  The remote host could be an appliance or a service
VM.  At the moment I'm leaning in this direction as it allows one to run
with a simple configuration as in #1 but also allows for deployment with
multiple backends, each with their own share servers.


What Valeriy was referring to is the ability to run the m-shr service on 
a node other than where the NFS daemon resides. ZFS initially 
implemented this because we were interested in managing potentially 
non-Linux-based ZFS servers (like FreeBSD and Illumos) but we never 
pursued those options due to technical challenges, and we later gave up 
on supporting remote ZFS using SSH altogether.


-Ben



Re: [openstack-dev] [all] Some information about the Forum at the Summit in Boston

2017-03-09 Thread Ben Swartzlander



On 03/09/2017 12:10 PM, Jonathan Bryce wrote:

Hi Ben,


On Mar 9, 2017, at 10:23 AM, Ben Swartzlander <b...@swartzlander.org> wrote:

I might be the only one who has negative feelings about the PTG/Forum split, 
but I suspect the foundation is suppressing negative feedback from myself and 
other developers so I'll express my feelings here. If there's anyone else who 
feels like me please reply, otherwise I'll assume I'm just an outlier.


“Suppressing negative feedback” is a pretty strong accusation (and I’m honestly 
not sure how you even imagine we are doing that). I searched through the last 2 
years of mailing list threads and couldn’t find negative feedback from you 
around the PTG. Same for googling around the internet in general. I might have 
missed a place where you had provided this feedback, so feel free to pass it 
along again. Also, if there’s some proof behind the accusation, I would love to 
see it so I can address it with whoever might be doing the suppressing. It’s 
certainly not something I would support anyone in the foundation doing. You can 
send it to me directly off-list if you feel more comfortable providing it that 
way.


I filled out the official PTG feedback survey in Atlanta and I 
completely panned the event in the survey. I don't know if my name was 
attached to that, but it doesn't matter. I wasn't able to attend the 
in-person feedback session because, again, it was scheduled on top of 
time we were using to get work done as a Manila team.


All I know is that the announcement after the PTG was that the feedback 
on the event was all pretty good. I conclude from that that either there 
are very few people who feel like me, or that there are more and their 
feedback was ignored. This ML thread is an attempt to give voice to us 
whether it's just a handful or actually a large number.


I'm not going to write a blog entry that says "OpenStack PTG sucked". 
That would be asinine. I think this developers list is small enough that 
we can have a serious and productive discussion about whether the 
current PTG/Forum event split has merits _for the developer community_.



Putting that aside, I appreciate your providing your input. The most consistent 
piece of feedback we received was around scheduling and visibility for 
sessions, so I think that is definitely an area for improvement at the next 
PTG. I heard mixed feedback on whether the ability to participate in multiple 
projects was better or worse than under the previous model, but understanding 
common conflicts ahead of time might give us a chance to schedule in a way that 
makes the multi-project work more possible. Did you participate in both Cinder 
and Manila mid-cycles in addition to the Design Summit sessions previously? 
Trying to understand which types of specific interactions you’re now less able 
to participate in.


Yes in the past I was able to attend all of the Manila and most of the 
Cinder sessions at the Design summit, and I was able to attend the 
Cinder midcycles in person and (since I'm the PTL) I was able to 
schedule the Manila midcycles to not conflict.



I’m also interested in finding ways to support remote participation, but it’s a 
hard problem that has failed more often than it’s worked when we’ve tried it. 
I’m still open to continuing to attempt new methods—we actually brainstormed 
some ideas in Atlanta and if you have any suggestions, let’s experiment the 
next time around.


My feeling on remote participation is that it's something the project 
teams can manage themselves. If we're going to have a "virtual midcycle" 
(which many projects do) then the team can set it up and nothing is 
required from the foundation to facilitate it. Trying to mix the 
in-person interactions of a summit/forum/ptg with remote attendees 
usually just leads to a poor experience for the remote participants. 
When we have in-person events we do our best to include remote people 
who can't attend but IMO that's still inferior to planning a fully 
virtual event that puts everyone on equal footing.



The PTG was actually an idea that was initiated by development teams, and 
something that we tried to organize to make it as productive as possible for 
the teams. The goal of the PTGs is to provide focused time to that help us make 
better software, and there’s really no other benefit that the Foundation gets 
from them. We did have some teams, like Kuryr, who did not participate in 
person at the PTG. I talked to Antoni before and offered to assist with 
whatever we could when they did their VTG, and we will continue to support 
teams whether they participate in future PTGs or not.


I've been part of OpenStack a long time and my perception of history is 
a bit different. I recall a frustration from developers who attended 
design summits that it was hard to attend both the design summit and the 
conference because they were scheduled on top of each other. What people 
wanted was a way to at

Re: [openstack-dev] [all] Some information about the Forum at the Summit in Boston

2017-03-09 Thread Ben Swartzlander
I might be the only one who has negative feelings about the PTG/Forum 
split, but I suspect the foundation is suppressing negative feedback 
from myself and other developers so I'll express my feelings here. If 
there's anyone else who feels like me please reply, otherwise I'll 
assume I'm just an outlier.


The new structure is asking developers to travel 4 times a year 
(minimum) and makes it impossible to participate in 2 or more vertical 
projects.


I know that most of the people working on Manila have pretty limited 
travel budgets, and meeting 4 times a year basically guarantees that a 
good number of people will be remote at any given meeting. From my 
perspective if I'm going to be meeting with people on the phone I'd 
rather be on the phone myself and have everyone on equal footing.


I also normally try to participate in Cinder as well as Manila, and the 
new PTG structure makes that impossible. I decided to try to be 
positive and to wait until after the PTG to make up my mind, but having 
attended in Atlanta it was exactly as bad as I expected in terms of my 
ability to participate in Cinder.


I will be in Boston to try to develop a firsthand opinion of the new 
Forum format, but as of now I'm pretty unhappy with the proposal. For 
Manila I'm proposing that the community either meets at PTGs and skips 
conferences, or meets at conferences and skips PTGs going forward. I'm 
not going to ask everyone to travel 4 times a year.


-Ben Swartzlander
Manila PTL


On 03/07/2017 07:35 AM, Thierry Carrez wrote:

Hi everyone,

I recently got more information about the space dedicated to the "Forum"
at the OpenStack Summit in Boston. We'll have three different types of
spaces available.

1/ "Forum" proper

There will be 3 medium-sized fishbowl rooms for cross-community
discussions. Topics for the discussions in that space will be selected
and scheduled by a committee formed of TC and UC members, facilitated by
Foundation staff members. In case you missed it, the brainstorming for
topics started last week, announced by Emilien in that email:

http://lists.openstack.org/pipermail/openstack-dev/2017-March/113115.html

2/ "On-boarding" rooms

We'll have two rooms set up in classroom style, dedicated to project
teams and workgroups who want to on-board new team members. Those can
for example be booked by project teams to run an introduction to their
codebase to prospective new contributors, in the hope that they will
join their team in the future. Those are not meant to do traditional
user-facing "project intro" talks -- there is space in the conference
for that. They are meant to provide the next logical step in
contributing after Upstream University and being involved on the
sidelines. It covers the missing link for prospective contributors
between attending Summit and coming to the PTG. Kendall Nelson and Mike
Perez will soon announce the details for this, including how projects
can sign up.

3/ Free hacking/meetup space

We'll have four or five rooms populated with roundtables for ad-hoc
discussions and hacking. We don't have specific plans for these -- we
could set up something like the PTG ethercalc for teams to book the
space, or keep it open. Maybe half/half.

More details on all this as they come up.
Hoping to see you there!





[openstack-dev] [Manila] PTG summary

2017-03-03 Thread Ben Swartzlander
d have had public extra specs were added. In particular, shrinking 
of shares is an optional feature which currently can't be discovered 
and needs a capability/extra spec. The list of supported protocols 
applies here too.


Access Groups
-
Still not going to do this. We've punted this feature for 7 releases 
now; why does it keep coming up?


RBAC

We need to get rid of hard-coded is_admin checks and replace them with 
policy checks.


OpenStack Client

No volunteers to work on this. Not a priority. There are still concerns 
about a lack of support for some things we would need if we were to do 
this integration.


Share Modify

Now that the migration feature is complete, ganso plans to open up the 
functionality to end users with the addition of the share-modify API, 
which will allow users to retype their shares, change their AZ, change 
their share-network, etc., possibly invoking migrations.


manila-image-elements vs manila-test-image
--
We clarified the goals of these 2 repos: manila-test-image is a GPL repo 
only intended for our internal testing; manila-image-elements is an 
Apache-licensed repo intended for producing reference images for end 
users. The work to add ganesha support to our service images will go 
into manila-image-elements.


Migration Testing Issues

Ben raised an issue with the way periodic tasks are run in the manager. 
Periodic tasks are serialized, not parallelized, which makes it 
impossible to increase the frequency of some checks, slowing down tests 
and ultimately causing timeouts in our test jobs. The plan is to 
parallelize these.


-Ben Swartzlander

[1] https://etherpad.openstack.org/p/manila-pike-ptg-topics




[openstack-dev] [manila] Deadlines and no meeting next week

2016-11-17 Thread Ben Swartzlander
For those who missed the weekly meeting tonight there were 2 important 
things you should know:


The meeting next week is canceled.

The low-priority spec merge deadline was extended to tomorrow midnight 
(18 Nov 23:59 UTC) because we wanted to accept more specs for Ocata, and 
there are several which are almost ready.


-Ben



Re: [openstack-dev] [manila][cinder] [api] API and entity naming consistency

2016-11-16 Thread Ben Swartzlander

On 11/16/2016 11:28 AM, Ravi, Goutham wrote:

+ [api] in the subject to attract API-WG attention.



We already have a guideline in the API-WG around resource names for "_"
vs "-":
https://specs.openstack.org/openstack/api-wg/guidelines/naming.html#rest-api-resource-names
With some exceptions (like share_instances that you mention), I see
that we have implemented "-" across other resources.

For body elements, however, we prefer underscores, i.e., we do not have
body elements that follow CamelCase or mixedCase.


My personal preference would be to retain “share-” in the resource
names. As an application developer that has to integrate with block
storage and shared file systems APIs, I would like the distinction if
possible; because at the end of the day, the typical workflow for me
would be:

-  Get the endpoint from the catalog for the specific version of
the service API I want

-  Append resource to endpoint and make my REST calls.



The distinction in the APIs would ensure my code is readable. It would
be interesting to see what the API working group prefers around this. We
have in the past realized that /capabilities could be uniform across
services because it is expected to spew a bunch of strings to the user
(warning: still under contention, see
https://review.openstack.org/#/c/386555/). However, there is a mountain
of a difference between the underlying intent of /share-networks and
neutron's /networks resources.


So you'd be in favor of renaming cinder's /snapshots URL to 
/volume-snapshots and manila's /snapshots URL to /share-snapshots?


I agree the explicitness is appealing, but we have to recognize that the 
existing API has tons of implicitness in the names, and changing the 
existing API will cause pain no matter how well-intentioned the changes are.



However, whatever we decide there, let's not overload resources within
the project; an explicit API will be appreciated for application
development. share-types and group-types are not 'types' unless
everything about these resources (i.e., database representation) is the
same and all HTTP verbs that you are planning to add correspond to both.



--

Goutham



From: Valeriy Ponomaryov
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, November 16, 2016 at 4:22 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [manila][cinder] API and entity naming consistency



For the moment the Manila project, as well as Cinder, has an
inconsistency between entity and API naming, such as:

- "share type" ("volume type" in Cinder) entity has "/types/{id}" URL

- "share snapshot" ("volume snapshot" in Cinder) entity has
"/snapshots/{id}" URL



BUT, Manila has other Manila-specific APIs as follows:



- "share network" entity and "/share-networks/{id}" API

- "share server" entity and "/share-servers/{id}" API



And with implementation of new features [1] it becomes a problem,
because we start having

"types" and "snapshots" for different things (share and share groups,
share types and share group types).



So, here is the first open question:



What is our convention in naming APIs according to entity names?



- Should APIs contain the full name, or may it be shortened?

- Should we restrict it to one of the variants (full or shortened), or
allow some APIs to follow one approach and some the other, treating it
as "don't care"? The "don't care" case is the current approach, de facto.



Then, we have a second question here:



- Should we use only "dash" ( - ) symbols in API names, or is
"underscore" ( _ ) allowed?

- Should we allow both variants at once for each API?

- Should we allow APIs to use any of the variants and have a zoo with
various approaches?



In the Manila project, mostly "dash" is used, except for one API -
"share_instances".



[1] https://review.openstack.org/#/c/315730/



--

Kind Regards
Valeriy Ponomaryov
vponomar...@mirantis.com 





Re: [openstack-dev] [manila][cinder] API and entity naming consistency

2016-11-16 Thread Ben Swartzlander

On 11/16/2016 10:22 AM, Valeriy Ponomaryov wrote:

For the moment the Manila project, as well as Cinder, has an
inconsistency between entity and API naming, such as:
- "share type" ("volume type" in Cinder) entity has "/types/{id}" URL
- "share snapshot" ("volume snapshot" in Cinder) entity has
"/snapshots/{id}" URL

BUT, Manila has other Manila-specific APIs as follows:

- "share network" entity and "/share-networks/{id}" API
- "share server" entity and "/share-servers/{id}" API

And with implementation of new features [1] it becomes a problem,
because we start having
"types" and "snapshots" for different things (share and share groups,
share types and share group types).

So, here is the first open question:

What is our convention in naming APIs according to entity names?

- Should APIs contain the full name, or may it be shortened?
- Should we restrict it to one of the variants (full or shortened), or
allow some APIs to follow one approach and some the other, treating it
as "don't care"? The "don't care" case is the current approach, de facto.


I think that consistency is important, but the question is consistency 
with what. Right now we have an inconsistent design, and it will take 
effort to change it either way. If we're going to spend that effort 
there needs to be a good reason.


Initially I had been in favor of "share-groups" over just "groups"; 
however, if we go that direction it will make all of the places where we 
don't use the share- prefix that much more glaring. Consistency with 
the past and with cinder would suggest that we should avoid using share- 
prefixes everywhere possible, and we should look into removing them from 
places where we added them somewhat gratuitously (share networks, share 
servers, share instances).



Then, we have a second question here:

- Should we use only "dash" ( - ) symbols in API names, or is
"underscore" ( _ ) allowed?


Underscores should never be used. This seems like a mistake made when 
share instances were added.



- Should we allow both variants at once for each API?


Thanks to microversions, if we change any API we can support only the 
old name for the old microversion and only the new name for the new 
microversion. There is no reason to support both at the same time for 
any microversion.
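
A minimal sketch of that idea; the microversion number and the renamed
resource here are hypothetical, just to show the gating:

    RENAME_VERSION = (2, 30)  # hypothetical microversion for the rename

    def name_is_valid(resource_name, request_version):
        old, new = 'share_instances', 'share-instances'
        if request_version >= RENAME_VERSION:
            return resource_name == new   # new name only
        return resource_name == old      # old name only; never both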



- Should we allow APIs to use any of the variants and have a zoo with
various approaches?

In the Manila project, mostly "dash" is used, except for one API -
"share_instances".

[1] https://review.openstack.org/#/c/315730/

--
Kind Regards
Valeriy Ponomaryov
vponomar...@mirantis.com 




Re: [openstack-dev] [manila] spec detail of IPv6's export locations

2016-11-14 Thread Ben Swartzlander

On 11/13/2016 10:03 PM, TommyLike Hu wrote:

Hey manila members and driver maintainers,
  I want to set up a new thread to talk about the proper, better
way for both end users and drivers who want to use or support IPv6
in manila, and I wish this can be discussed and optimized before the
spec [1] gets ready for the next review.
 The issue was initially raised by Ben Swartzlander on the manila
IPv6 spec [1] (please check PS9 line 56), and this draft only covers the
export_location part, which is currently the full focus of the spec.

Proposed changes (based on Ben's comments) are below:
1. Add a new share type extra spec 'ip_version' that can indicate the
access IP version: ip_version = '4' or '6' or '4&6'. The default without
declaration is '4'.
2. Drivers have to report capabilities with the IP versions they can
support, in the form support_ip_versions = '4' or '6' or '4&6'. The
default without override is '4'.


I think I'd prefer 2 extra specs: one for v4 and one for v6. Two 
booleans seem cleaner than an enum string.
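
To make the comparison concrete, here is a minimal sketch of the two
forms, with illustrative extra spec names and a simplified stand-in for
the scheduler's capability matching (not manila's real filter code):

    # Enum form from the draft spec (one string-valued extra spec):
    enum_extra_specs = {'ip_version': '4&6'}

    # Two-boolean form; the names here are illustrative, not final:
    bool_extra_specs = {'ipv4_support': 'True', 'ipv6_support': 'True'}

    def backend_matches(extra_specs, reported_capabilities):
        # Every requested extra spec must equal the backend's reported value.
        return all(reported_capabilities.get(key) == value
                   for key, value in extra_specs.items())

    v4_only_backend = {'ipv4_support': 'True', 'ipv6_support': 'False'}
    assert backend_matches({'ipv4_support': 'True'}, v4_only_backend)
    assert not backend_matches(bool_extra_specs, v4_only_backend)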



3. The IP versions of the share type, the driver's supported versions,
and the share network will all be checked for compatibility before a new
share is created.


It should be mentioned that this won't require any new code. The 
existing filters in the scheduler can handle this.



4. The driver will export locations according to the share network's IP
version.


Share networks only affect DHSS=true share types, and the share networks 
already have support for IPv6 because they just refer to a neutron 
subnet. What's missing is handling for IPv6 during creation of the share 
servers.



5. (About access rules) The IP versions of the share type and of the
access host will also be checked.


This is a good idea I hadn't thought about. It probably doesn't make 
sense to allow v6 rules for a v4-only share, and similarly it might not 
make sense to allow v4 rules for a v6-only share.


What makes this complicated, however, is that support can change over 
time. A share that was created on a v4-only backend could later get 
IPv6 support added. Looking at the share type won't give you the right 
answer -- you have to look at the export locations. We might want to 
enforce that drivers tell the share manager which export locations are 
v4/v6 explicitly.
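
A minimal sketch of such explicit annotation; the dict keys are
illustrative, not manila's actual export location schema:

    export_locations = [
        {'path': '10.0.0.5:/shares/share-123', 'ip_version': 4},
        {'path': '[2001:db8::5]:/shares/share-123', 'ip_version': 6},
    ]

    def allowed_rule_versions(export_locations):
        # An access rule's IP version should match at least one location.
        return {loc['ip_version'] for loc in export_locations}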



Please leave comments here about any concerns, or about anything that 
is unclear or missing.

Thanks
TommyLike.Hu

[1] https://review.openstack.org/#/c/362786/9/specs/ocata/manila-ipv6.rst




[openstack-dev] [manila] Migration options

2016-11-11 Thread Ben Swartzlander
After the long and contentious discussion about migration options, 
Rodrigo and I have reached an agreement, which works for both of us, on 
how we should handle them, and I will share it here before Rodrigo 
updates the spec. Discussion about the proposal can continue here on the 
ML or in the spec, but the final decision will be made through the spec 
approval process.


===

The main assumptions that drive this conclusion are:
* The API design must not violate our agreed-upon versioning 
(microversions) scheme as the API evolves over time.
* Actions requested by a client must result in behavior that is the same 
across server versions. It's not okay to get "bonus" behaviors due to a 
server upgrade if the client doesn't ask for them.


For the REST API, we propose that all the options are mandatory 
booleans. There will be no defaults at the API level. Values of false 
for options will always be compatible with the fallback/universal 
migration strategy. The options will be:

* writable
* preserve-metadata
* non-disruptive
* preserve-snapshots

Omitting any one of these is an error. This ensures safety by making 
clients send a value of true or false, ensuring there are no surprise 
downsides to performing a migration.
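
As an illustration, a migration request body under this proposal could
look like the sketch below; the action name, field spellings, and
destination host are assumptions for the sketch, not the final API:

    import json

    # All four booleans are required; omitting any of them is an error.
    request_body = {
        'migration_start': {                  # hypothetical action name
            'host': 'manila@beta#pool1',      # illustrative destination
            'writable': False,
            'preserve_metadata': True,
            'non_disruptive': False,
            'preserve_snapshots': False,
        }
    }
    print(json.dumps(request_body, indent=2))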


For future migration options, they will be added with a new microversion 
and they will also be required options (in the new microversion). Newer 
server versions will provide backwards compatibility by defaulting 
options to false when clients invoke older microversions where that 
option didn't exist.


For the pythonclient, we propose that options are mandatory on the CLI. 
Again this provides safety by avoiding situations where users who don't 
read the docs are surprised by the behavior they get. This ensures that 
CLI scripts that invoke migrations will never break as long as the 
client version remains the same, and that they will get consistent 
behavior across all servers versions that support that client.


Updating to newer python clients may introduce new required parameters 
though, which can break old scripts, so this will have the effect of 
tying CLI scripts to specific client versions.


The client will provide backwards compatibility with older servers by 
only allowing requests to be sent if the user specifies a value of false 
for any option that doesn't exist on the server. This ensures that users 
always get exactly what they ask for, or an error if what they asked for 
can't be provided. It again avoids surprise behavior.
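
A minimal sketch of that client-side rule, with made-up microversion
numbers purely for illustration:

    # Each option records the (hypothetical) microversion that added it.
    OPTION_INTRODUCED_IN = {
        'writable': (2, 22),
        'preserve_metadata': (2, 22),
        'non_disruptive': (2, 22),
        'preserve_snapshots': (2, 27),
    }

    def validate_options(server_version, options):
        # An option unknown to the server may only be requested as
        # False; anything else is an error, never a silent surprise.
        for name, value in options.items():
            if value and server_version < OPTION_INTRODUCED_IN[name]:
                raise ValueError('%s=True not supported by server %s.%s'
                                 % ((name,) + server_version))

    validate_options((2, 22), {'writable': True,
                               'preserve_metadata': True,
                               'non_disruptive': False,
                               'preserve_snapshots': False})  # no error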


For the UI, options will always be specified as checkboxes, which will 
default to checked (true).


===

This proposal was arrived at after thinking through a long list of 
hypothetical use cases involving newer and older clients and servers as 
well as use of apps which import the client, usage of the client's CLI 
interface, usage of the UI, and direct calls to the REST API without 
using the client.


Also we specifically considered hypothetical future migration options 
and how they would affect the API. I'm confident that this is as "safe" 
as the API and CLIs can be, and the only downside I can see to this 
approach is that it's more verbose than alternatives that include 
implicit defaults.


-Ben Swartzlander

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] Barcelona Design Summit summary

2016-11-09 Thread Ben Swartzlander

On 11/04/2016 02:00 PM, Joshua Harlow wrote:

Ben Swartzlander wrote:

Thanks to gouthamr for doing these writeups and for recording!

We had a great turn out at the manila Fishbowl and working sessions.
Important notes and Action Items are below:

===
Fishbowl 1: Race Conditions
===
Thursday 27th Oct / 11:00 - 11:40 / AC Hotel -Salon Barcelona - P1
Etherpad: https://etherpad.openstack.org/p/ocata-manila-race-conditions
Video: https://www.youtube.com/watch?v=__P7zQobAQw

Gist:
* We've some race conditions that have worsened over time:
* Deleting a share while snapshotting the share
* Two simultaneous delete-share calls
* Two simultaneous create-snapshot calls
* Though the end result of the race conditions is not terrible, we can
leave resources in untenable states, requiring administrative cleanup in
the worst scenario
* Any type of resource interaction must be protected in the database
with a test-and-set using the appropriate status fields
* Any test-and-set must be protected with a lock
* Locks must not be held over long running tasks: i.e, RPC Casts, driver
invocations etc.
* We need more granular state transitions: micro/transitional states
must be added per resource and judiciously used for state locking
* Ex: Shares need a 'snapshotting' state
* Ex: Share servers need states to signify setup phases, a la nova
compute instances


Just something that I've always wondered, and I know it's not an easy
answer, but are there any ideas on why such concurrency issues keep on
getting discovered so late in the software lifecycle, instead of at
design time? Probably not just a manila question, but it strikes me as
somewhat confusing that this keeps on popping up.


In the case of Manila the reason is historical. Manila forked from 
Cinder, and Cinder forked from Nova-Volume. Each inherited 
infrastructure and design choices, as well as design *assumptions* which 
didn't always remain true after the forks.


The basic problem is that the people who wrote (some of) the original 
code are no longer around and new people often assume that old stuff 
isn't broken, even when it is. Issues like concurrency problems can lay 
dormant for a long time before they pop up because they're hard to test.



Discussion Item:
* Locks in the manila-api service (or specifically, extending usage of
locks across all manila services)
* Desirable because:
* Adding test-and-set logic at the database layer may render the code
unmaintainably complicated as opposed to using locking abstractions
(oslo.concurrency / tooz)
* Cinder has evolved an elegant test-and-set solution but we may not be
able to benefit from that implementation because of the lack of being
able to do multi-table updates and because the code references OVO which
manila doesn't yet support.
* Un-desirable because:
* Most distributors (RedHat/Suse/Kubernetes-based/MOS) want to run more
than one API service in active-active H/A.
* If a true distributed locking mechanism isn't used/supported, the
current file-locks would be useless in the above scenario.
* Running file locks on shared file systems is a possibility, but
applies configuration/set-up burden
* Having all the locks on the share service would allow scale out of the
API service and the share manager is really the place where things are
going wrong
* With a limited form of test-and-set, atomic state changes can still be
achieved for the API service.

Agreed:
* File locks will not help

Action Items:
(bswartz): Will propose a spec for the locking strategy
(volunteers): Act on the spec ^ and help add more transitional states
and locks (or test-and-set if any)
(gouthamr): state transition diagrams for shares/share
instances/replicas, access rules / instance access rules
(volunteers): Review ^ and add state transition diagrams for
snapshots/snapshot instances, share servers
(mkoderer): will help with determining race conditions within
manila-share with tests

=
Fishbowl 2: Data Service / Jobs Table
=
Thursday 27th Oct / 11:50 - 12:30 / AC Hotel - Salon Barcelona - P1
Etherpad:
https://etherpad.openstack.org/p/ocata-manila-data-service-jobs-table
Video: https://www.youtube.com/watch?v=Sajy2Qjqbmk


Will https://review.openstack.org/#/c/260246/ help here instead?

It's the equivalent of:

http://docs.openstack.org/developer/taskflow/jobs.html

Something to think about...



Gist:
* Currently, a synchronous RPC call is made from the API to the
share-manager/data-service that's performing a migration to get the
progress of a migration
* We need a way to record progress of long running tasks: migration,
backup, data copy etc.
* We need to introduce a jobs table so that the respective service
performing the long running task can write to the database and the API
relies on the database

Discussion Items:
* There was a suggestion to extend the jobs table to all tasks on the
share: snapshotting, creating share from snapshot, extending, shrinking, etc.

Re: [openstack-dev] [manila] spec review focus

2016-11-03 Thread Ben Swartzlander

The following specs were designated as review focus specs for Ocata:
https://etherpad.openstack.org/p/manila-ocata-spec-review-focus

-Ben

On 11/03/2016 12:27 PM, Ben Swartzlander wrote:

As agreed to in the manila spec process spec, the whole core team is
expected to review certain specs and vote before they merge.

The following specs were designated as review focus specs for Ocata:

https://etherpad.openstack.org/p/manila-ocata-spec-review-focus


-Ben Swartzlander



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] spec review focus

2016-11-03 Thread Ben Swartzlander
As agreed to in the manila spec process spec, the whole core team is 
expected to review certain specs and vote before they merge.


The following specs were designated as review focus specs for Ocata:

https://etherpad.openstack.org/p/manila-ocata-spec-review-focus


-Ben Swartzlander



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] Barcelona Design Summit summary

2016-11-03 Thread Ben Swartzlander

Thanks to gouthamr for doing these writeups and for recording!

We had a great turn out at the manila Fishbowl and working sessions. 
Important notes and Action Items are below:


===
Fishbowl 1: Race Conditions
===
Thursday 27th Oct / 11:00 - 11:40 / AC Hotel -Salon Barcelona - P1
Etherpad: https://etherpad.openstack.org/p/ocata-manila-race-conditions
Video: https://www.youtube.com/watch?v=__P7zQobAQw

Gist:
* We've some race conditions that have worsened over time:
  * Deleting a share while snapshotting the share
  * Two simultaneous delete-share calls
  * Two simultaneous create-snapshot calls
* Though the end result of the race conditions is not terrible, we can 
leave resources in untenable states, requiring administrative cleanup in 
the worst scenario
* Any type of resource interaction must be protected in the database 
with a test-and-set using the appropriate status fields (see the sketch 
after this list)

* Any test-and-set must be protected with a lock
* Locks must not be held over long running tasks: i.e, RPC Casts, driver 
invocations etc.
* We need more granular state transitions: micro/transitional states 
must be added per resource and judiciously used for state locking

* Ex: Shares need a 'snapshotting' state
* Ex: Share servers need states to signify setup phases, a la nova 
compute instances
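
A minimal sketch of that test-and-set pattern, using stdlib sqlite3 only
to keep it self-contained (manila would do this through its DB API
layer):

    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute("CREATE TABLE shares (id TEXT PRIMARY KEY, status TEXT)")
    conn.execute("INSERT INTO shares VALUES ('share-1', 'available')")

    def try_transition(share_id, expected, new):
        # The UPDATE only fires if the row is still in the expected
        # state, so two racing callers cannot both win.
        cur = conn.execute(
            "UPDATE shares SET status = ? WHERE id = ? AND status = ?",
            (new, share_id, expected))
        conn.commit()
        return cur.rowcount == 1

    print(try_transition('share-1', 'available', 'deleting'))  # True
    print(try_transition('share-1', 'available', 'deleting'))  # False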


Discussion Item:
* Locks in the manila-api service (or specifically, extending usage of 
locks across all manila services)

* Desirable because:
  * Adding test-and-set logic at the database layer may render the code 
unmaintainably complicated as opposed to using locking abstractions 
(oslo.concurrency / tooz)
  * Cinder has evolved an elegant test-and-set solution but we may not 
be able to benefit from that implementation because of the lack of being 
able to do multi-table updates and because the code references OVO which 
manila doesn't yet support.

* Un-desirable because:
  * Most distributors (RedHat/Suse/Kubernetes-based/MOS) want to run 
more than one API service in active-active H/A.
  * If a true distributed locking mechanism isn't used/supported, the 
current file-locks would be useless in the above scenario.
  * Running file locks on shared file systems is a possibility, but 
applies configuration/set-up burden
  * Having all the locks on the share service would allow scale out of 
the API service and the share manager is really the place where things 
are going wrong
  * With a limited form of test-and-set, atomic state changes can still 
be achieved for the API service.


Agreed:
* File locks will not help

Action Items:
(bswartz): Will propose a spec for the locking strategy
(volunteers): Act on the spec ^ and help add more transitional states 
and locks (or test-and-set if any)
(gouthamr): state transition diagrams for shares/share 
instances/replicas, access rules / instance access rules
(volunteers): Review ^ and add state transition diagrams for 
snapshots/snapshot instances, share servers
(mkoderer): will help with determining race conditions within 
manila-share with tests


=
Fishbowl 2: Data Service / Jobs Table
=
Thursday 27th Oct / 11:50 - 12:30 / AC Hotel - Salon Barcelona - P1
Etherpad: 
https://etherpad.openstack.org/p/ocata-manila-data-service-jobs-table

Video: https://www.youtube.com/watch?v=Sajy2Qjqbmk

Gist:
* Currently, a synchronous RPC call is made from the API to the 
share-manager/data-service that's performing a migration to get the 
progress of a migration
* We need a way to record progress of long running tasks: migration, 
backup, data copy etc.
* We need to introduce a jobs table so that the respective service 
performing the long running task can write to the database and the API 
relies on the database
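
For the sake of discussion, one possible shape for such a table, sketched
as a SQLAlchemy model (the real structure is left to the spec mentioned
in the action items below):

    from sqlalchemy import Column, DateTime, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Job(Base):
        __tablename__ = 'jobs'
        id = Column(String(36), primary_key=True)         # UUID
        resource_id = Column(String(36), nullable=False)  # e.g. share id
        job_type = Column(String(32))   # e.g. 'migration', 'data_copy'
        state = Column(String(32))      # e.g. 'running', 'error'
        progress = Column(Integer, default=0)  # percent complete
        host = Column(String(255))      # service instance owning the job
        updated_at = Column(DateTime)   # lets peers detect dead owners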


Discussion Items:
* There was a suggestion to extend the jobs table to all tasks on the 
share: snapshotting, creating share from snapshot, extending, shrinking, 
etc.
* We agreed not to do this because the table can easily go out of 
control; and there isn't a solid use case to register all jobs. Maybe 
asynchronous user messages are a better answer to this feature request

* "restartable" jobs would benefit from the jobs table
* service heartbeats could be used to react to services dying while 
running long running jobs
* When running the data service in active-active mode, a service going 
down can pass on its jobs to the other data service


Action Items:
(ganso): Will determine the structure of the jobs table model in his spec
(ganso): Will determine the benefit of the data service reacting to 
additions in the database rather than acting upon RPC requests


=
Working Sessions 1: High Availability
=
Thursday 27th Oct / 14:40 - 15:20 / CCIB - Centre de Convencions 
Internacional de Barcelona - P1 - Room 130

Etherpad: https://etherpad.openstack.org/p/ocata-manila-high-availability
Video: 

Re: [openstack-dev] [manila] Relation of share types and share protocols

2016-11-02 Thread Ben Swartzlander

On 11/02/2016 06:23 AM, Arne Wiebalck wrote:

Hi Valeriy,

I wasn’t aware, thanks!

So, if each driver exposes the storage_protocols it supports, would it 
be sensible to have
manila-ui check the extra_specs for this key and limit the protocol 
choice for a given
share type to the supported protocols (in order to avoid that the user 
tries to create

incompatible type/protocol combinations)?


This is not possible today, as any extra_specs related to protocols are 
hidden from normal API users. It's possible to make sure the share type 
called "nfs_shares" always goes to a backend that supports NFS, but it's 
not possible to programmatically know that in a client, and therefore 
it's not possible to build the smarts into the UI. We intend to fix this 
though, as there is no good reason to keep that information hidden.
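
The admin-side pinning that does work today is an extra spec on the type
that the scheduler matches against the reported capability. A rough
sketch (drivers that speak several protocols report compound values like
'NFS_CIFS', which is why plain equality isn't enough):

    # Extra spec on the admin-defined type; hidden from regular users.
    share_type = {
        'name': 'nfs_shares',
        'extra_specs': {'storage_protocol': 'NFS'},
    }

    def protocol_matches(spec_value, capability):
        # Crude stand-in for the scheduler's '<in>'-style matching.
        return spec_value in capability.split('_')

    print(protocol_matches('NFS', 'NFS_CIFS'))  # True
    print(protocol_matches('NFS', 'CEPHFS'))    # False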


-Ben



Thanks again!
 Arne


On 02 Nov 2016, at 10:00, Valeriy Ponomaryov wrote:


Hello, Arne

Each share driver reports a capability called "storage_protocol". So, for 
the case you describe, you should just define an extra spec in your 
share type that matches the value reported by the desired backend[s].


That is the purpose of extra specs in share types: you (as cloud admin) 
define the connection yourself, whether it is strong or not.


Valeriy

On Wed, Nov 2, 2016 at 9:51 AM, Arne Wiebalck wrote:


Hi,

We’re preparing the use of Manila in production and noticed that
there seems to be no strong connection
between share types and share protocols.

I would think that not all backends will support all protocols.
If that’s true, wouldn’t it be sensible to establish
a stronger relation and have supported protocols defined per
type, for instance as extra_specs (which, as one
example, could then be used by the Manila UI to limit the choice
to supported protocols for a given share
type, rather than maintaining two independent and hard-coded tuples)?

Thanks!
 Arne

--
Arne Wiebalck
CERN IT

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Kind Regards
Valeriy Ponomaryov
www.mirantis.com 
vponomar...@mirantis.com 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Arne Wiebalck
CERN IT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] propose adding gouthamr to manila core

2016-11-02 Thread Ben Swartzlander

+1

-Ben


On 11/02/2016 08:09 AM, Tom Barron wrote:

I hereby propose that we add Goutham Pacha Ravi (gouthamr on IRC) to the
manila core team.  This is a clear case where he's already been doing
the review work, excelling both qualitatively and quantitatively, as
well as being a valuable committer to the project.  Goutham deserves to
be core and we need the additional bandwidth for the project.  He's
treated as a de facto core by the community already.  Let's make it
official!

-- Tom Barron

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Cross project prorietary driver code recap

2016-11-01 Thread Ben Swartzlander

On 11/01/2016 12:03 PM, Jeremy Stanley wrote:

On 2016-11-01 00:26:11 -0400 (-0400), Ben Swartzlander wrote:
[...]

As usual I'd like to point to the Linux project as a good example
for how to handle such things. Linux is older than us and has been
dealing with drivers and proprietary code for a very long time.

Linux does a few specific things that I like a lot:

[...]

3) Drivers which require proprietary stuff (typically called
"firmware" or "binary blobs" in the Linux world) can contribute
that stuff to the linux-firmware repo which supports closed-source
but freely-distributable software.


Note that the "proprietary stuff" here is generally opaque blobs the
running kernel uploads into other parts of the system to run on or
initialize different processors than the kernel itself is running
within. I'm pretty sure the Linux devs wouldn't (and legally
couldn't? but I am not a lawyer nor a kernel maintainer) allow
drivers in tree that dynamically load proprietary external libraries
into it the kernel's process space. This is the closest analogue we
have to our drivers importing proprietary Python modules.


I'm not talking about python code here. I'm talking about non-python 
binaries, which were a large part of the discussion at the summit.



Personally I think the needs of the distros could be met with
something like the linux-firmware repo. It would create a way for
vendors that require proprietary stuff to make it available for
distros to do what they need to do.

[...]

This might be a solution to the case where drivers call proprietary
local command-line utilities or load proprietary data for use in
initializing some device, as long as those things are still freely
redistributable on their own. I don't know, though, how many
services have drivers in exactly this situation. It certainly
doesn't change the legality of things like importing proprietary
Python modules within drivers, and is also a suboptimal solution I
think we should discourage in favor of providing _actual_ free
drivers. I suppose it's worth a try, but I'm skeptical it will
significantly alter the dynamics of the situation.


I think we're saying the same thing. Python code can and should be 100% 
Apache licensed. External C/C++/Java tools required to interact with a 
storage controller could be treated similarly to Linux firmware and 
allowed only if those tools could be made available under a freely 
distributable license. From the conversations I've had I believe this 
would address the major sticking point with Sean's "Option 3".


-Ben


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Cross project prorietary driver code recap

2016-10-31 Thread Ben Swartzlander

On 10/31/2016 02:23 PM, Sean McGinnis wrote:

Last Tuesday we had a cross-project session around the use of proprietary
code/libs/binaries in OpenStack drivers. The challenge we are running into
is where to draw the line with these drivers. What is the appropriate policy
to have in place to allow/disallow certain approaches.

There was a lot of good discussion and input from multiple projects that are
affected by this. I won't attempt to recap the full conversation here. The
etherpad with the notes from the discussion can be found here:

https://etherpad.openstack.org/p/ocata-xp-proprietary-drivers

The two main concerns I heard were around 1) the ability to package and
redistribute everything needed to run an OpenStack environment, and 2) the
value these drivers bring to the OpenStack project. Particularly those that
have no business logic in the code in OpenStack and just call out to third
party libraries to handle all business logic.

Proposals
=

Option 1:

- All libraries imported by the driver must be licensed such that they are
  redistributable by package maintainers.
- Existing non-compliant driver code would need to be updated by the Q
  release to stay in tree.
- Code that does not get imported into the driver at runtime (CLIs, external
  binaries, remote application servers) are acceptable to be not
  redistributable.

Issues:

- The desire for having things redistributable was to have everything fully
  functional "out of the box" no matter how the deployer configured the system.
  I don't believe this is ever possible. With requirements of setting up some
  type of application or management server to enable some solutions, it is not
  possible to package all required code and binaries for every configuration.


Option 2:

- Remove all drivers that are not completely open source and contained in the
  project repo.

Issues:

- This restricts things down to only the drivers that can currently be run in
  gate.
- This would be a big issue for many Cinder drivers and I'm sure most Ironic
  drivers.
- I strongly believe this would result in less vendor involvement in upstream
  work.


Option 3:

- Require the majority of business logic is in the open source code.
- Allow third party, non-redistributable libraries and CLIs that are used as
  more of an "RPC" type interface.
- Reviewers should be able to review the driver code and at least get some
  idea of the steps the driver is doing to perform each requested operation.

Issues:

- Does not address the desire to package all needed requirements.



My preference is actually option 3. The reality is, with most vendor solutions
there's always going to be some part of the overall solution that is
proprietary and requires some configuration and setup by the deployer.
Arbitrarily drawing that line such that a proprietary app server is OK, but a
proprietary library is not OK, just leads to poor solution architectures such
as requiring the end user to set up a separate server just so the driver can
SSH into it to run commands.

The case that instigated this discussion for me was a proposal to have a driver
that did nothing more than make a one-line call handing everything off to a library
that handled all logic. In that case, there is no benefit to the community of
being able to review the code and give some assurance that it is doing things
correctly. And the only real benefit it gives the vendor is that they have an out
of tree driver that gets advertised as being in tree.

My desire as a code reviewer and project maintainer is that I can take a look
at a driver and at least get an idea of what they are doing and how their
device works to perform some of these operations. That's not an exact number -
I'm not saying something like 10% of the code needs to be in the driver, but
9% would not be acceptable - but we need to have some visiblity to the logic
so we as a community can at least help point users in the right direction if
they come looking for help.

I think this works for Cinder and probably is necessary for Ironic.

I'd be interested in hearing feedback or other options from other folks. I'd
especially like to hear from Helion, OSP, Mirantis, and others that are doing
packaging and support for these deployments to hear how this impacts the way
they do things.


As usual I'd like to point to the Linux project as a good example for 
how to handle such things. Linux is older than us and has been dealing 
with drivers and proprietary code for a very long time.


Linux does a few specific things that I like a lot:
1) Drivers have to match the license (GPLv2 in Linux's case) which 
guarantees the code is free software.
2) Drivers have to be in tree, which makes it easier for kernel 
developers to evolve the driver interfaces without breaking most drivers.
3) Drivers which require proprietary stuff (typically called "firmware" 
or "binary blobs" in the Linux world) can contribute that stuff to the 
linux-firmware repo which supports closed-source but freely-distributable 
software.

Re: [openstack-dev] [Cinder] Proposed logo

2016-10-26 Thread Ben Swartzlander

On 10/26/2016 03:53 AM, Sean McGinnis wrote:

Hey team,

Attached is the proposed new logo for the Cinder project. I think some have 
already seen this, so making sure everyone gets a chance to see it before it's 
finalized.

Sean (smcginnis)


FTFY

-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][octavia][tacker][designate][manila][sahara][magnum][infra] servicevm working group meetup

2016-10-25 Thread Ben Swartzlander

Missed it


On October 24, 2016 8:24:11 PM Doug Wiegley wrote:


As part of a requirements mailing list thread [1], the idea of a servicevm 
working group, or a common framework for reference openstack service VMs, 
came up. It's too late to get onto the official schedule, but unofficially, 
let's meet here:


When: Tuesday, 1:30pm-2:10pm
Where: CCIB P1 Room 128

If this is too short notice, then we can retry on Friday.

Thanks,
doug

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-October/105861.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [elections] TC candidacy

2016-09-30 Thread Ben Swartzlander

I'd like to throw my hat in the ring for the TC election.

My name is Ben Swartzlander (bswartz on IRC) and I've been PTL for the 
Manila project for the entire life of the project. I'm also relatively 
active within the Cinder project, and I've been part of the OpenStack 
community since Essex.


My reasons for running for TC are fairly simple. There are some changes 
I'd like to see and I think that I'll have more ability to effect change 
if I'm part of the TC.


The first thing I'd like to change is that I'd like to see OpenStack 
start acting more mature. It *is* fairly mature now but in many ways we 
still have the habits of a shiny new project. I would like to see way 
less time spent on new features and much more time spent on stability 
and quality improvements.


Specifically I'd like to see dramatically more automated testing of 
stuff we already have. Not gate tests that run for every checkin but 
serious automated nightly regression style tests that actually cover 
realistic use cases and take several hours to run.


I would like to see more frequent releases. I hold up the Linux (kernel) 
project as an example to emulate. Linux releases a new major release 
about every 2 months. OpenStack has held to a 6 month release cycle for 
its whole life but I think we can and should move to shorter cycles. In 
a similar vein I think serious effort needs to be spent on LTS (long 
term support) -- specifically the ability to upgrade across multiple 
releases without anything breaking. The deprecation policy needs to 
change if we want to get this right.


I would like to see fewer new projects and more focus on existing 
projects and possible integration between them. Long ago it was decided 
that OpenStack should be a loose federation of related projects but many 
feel that OpenStack should be a unified product. This has created a 
cognitive dissonance that pervades nearly every discussion I have about 
architectural decisions within OpenStack and I feel the TC owes this 
topic more consideration. If we decide we're all working on a single 
product then we need to change the way we act. If we affirm that we 
really are just working on a bunch of loosely related things then we 
need to disband working groups/and cross-cutting projects that are 
trying to push for uniformity.


Lastly I feel strongly about community and the Python language. I am not 
a language zealot but I know from experience that adding more 
programming languages to an existing project is ALWAYS WRONG and I will 
fight any proposal to add more programming languages to OpenStack.


I don't expect everyone to agree with my ideas but if enough of you do, 
vote me onto the TC and I'll do my best to gradually change things for 
the better.


-Ben Swartzlander

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] moving driver to open source

2016-09-12 Thread Ben Swartzlander

On 09/09/2016 11:12 AM, Duncan Thomas wrote:

On 9 September 2016 at 17:22, Ben Swartzlander <b...@swartzlander.org> wrote:

On 09/08/2016 04:41 PM, Duncan Thomas wrote:



Despite the fact I've appeared to be slightly disagreeing with
John in
the IRC discussion on this subject, you've summarised my concern
very
well. I'm not convinced that these support tools need to be open
source,
but they absolutely need to be licensed in such a way that
distributions
can repackage them and freely distribute them. I'm not aware of any
tools currently required by cinder where this is not the case,
but a few
of us are in the process of auditing this to make sure we
understand the
situation before we clarify our rules.


I don't agree with this stance. I think the Cinder (and OpenStack)
communities should be able to dictate what form driver take,
including the code and the license, but when we start to try to
control what drivers are allowed to talk to (over and API or CLI)
then we are starting to artificially limit what kinds of storage
systems can integrate with OpenStack.

Storage systems take a wide variety of forms, including specialized
hardware systems, clusters of systems, pure software-based systems,
open source, closed source, and even other SDS abstraction layers. I
don't see the point in creating rules that specify what form a
storage system has to take if we are going to allow a driver for it.
As long as the driver itself and all of its python dependencies are
Apache licensed, we can do our job of reviewing the code and fixing
cinder-level bugs. Any other kind of restrictions just limit
customer choice and stifle competition.

Even if you don't agree with my stance, I see serious practical
problems with trying to define what is and is not permitted in terms
of "support tools". Is a proprietary binary that communicates with a
physical controller using a proprietary API a "support tool"? What
if someone creates a software-defined-storage system which is purely
a proprietary binary and nothing else?

API proxies are also very hard to nail down. Is an API proxy with a
proprietary license not allowed? What if that proxy runs on the box
itself? What if it's a separate software package you have to
install? I don't think we can write a set of rules that won't
accidentally exclude things we don't want to exclude.


So my issue is not with any of those things, it is that I believe
anybody should be able to put together a distribution of openstack, that
just works, with any supported backend, without needing to negotiate
licensing deals with vendors, and without having to have nasty hacks in
their installers that pull things down off the web on to cinder nodes to
get around licensing rules. That is one of the main 'opens' to me in
openstack.

I don't care so much whether your CLI or API proxy is open or closed
source, but I really do care if I can create a distribution, even a
novel one, with that software in it, without hitting licensing issues.
That is, as I see it, a bare minimum - anything less than that and it
does not belong in the cinder source tree.


I don't understand how you can have this stance while tolerating the 
existence of such things as the VMware driver. That software (ESXi) 
absolutely requires a license to use or distribute.


-Ben


--
Duncan Thomas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] [all] Regarding string freeze and back ports involving translatable strings

2016-09-09 Thread Ben Swartzlander

On 09/08/2016 08:37 PM, Matt Riedemann wrote:

On 9/8/2016 7:05 PM, Ravi, Goutham wrote:

Hi,



I was looking for some clarity around backports of bug fixes that
qualify the stable branch policy [1].
<http://docs.openstack.org/project-team-guide/stable-branches.html#appropriate-fixes>

What is the policy if the fix introduces a new translatable string or
modifies an existing one?

The guidelines in Release management [2]
<http://docs.openstack.org/project-team-guide/release-management.html>
regarding string freeze do not specifically call this scenario out. I
see that while translatable strings are mostly avoided, some projects
have been merging changes to stable branches with introduction of new
translatable strings.



The question is reminiscent of one posed in the ML a few releases ago
[3];
<http://lists.openstack.org/pipermail/openstack-dev/2015-September/073942.html>

but applies to stable branches. Should we allow changes to translatable
strings for bug fixes that matter, or is it better to always deny them
for the sake of translation accuracy?


The former IMO, a high severity bug fix trumps a translation. Note that
some projects are translated on the stable branches too, I know this is
the case for Nova.

If it's a user-facing change, like an error message in the API, then it
might require a bit more careful consideration, but if it's just a log
message marked for translation that an end user of the API wouldn't see
anyway, then I think it's fine to backport it.


So this stance makes sense to me, but I can't reconcile it with the 
"hard string freeze" rules. Is the theory that after we release, the 
string freeze ends for the stable branch, and that the hard string 
freeze only exists for that 3 week period between RC1 and final release, 
or is the theory that hard string freeze is always subject to exceptions 
for "critical" bug fixes?


-Ben Swartzlander





[1]
http://docs.openstack.org/project-team-guide/stable-branches.html#appropriate-fixes



[2] http://docs.openstack.org/project-team-guide/release-management.html

[3]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/073942.html





Thanks,

Goutham



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] moving driver to open source

2016-09-09 Thread Ben Swartzlander

On 09/08/2016 04:41 PM, Duncan Thomas wrote:

On 8 September 2016 at 20:17, John Griffith <john.griffi...@gmail.com> wrote:

On Thu, Sep 8, 2016 at 11:04 AM, Jeremy Stanley <fu...@yuggoth.org> wrote:






they should be able to simply install it and its free dependencies
and get a working system that can communicate with "supported"
hardware without needing to also download and install separate
proprietary tools from the hardware vendor. It's not what we say
today, but it's what I personally feel like we *should* be saying.


Your view on what you feel we *should* say, is exactly how I've
interpreted our position in previous discussions within the Cinder
project.  Perhaps I'm over reaching in my interpretation and that's
why this is so hotly debated when I do see it or voice my concerns
about it.


Despite the fact I've appeared to be slightly disagreeing with John in
the IRC discussion on this subject, you've summarised my concern very
well. I'm not convinced that these support tools need to be open source,
but they absolutely need to be licensed in such a way that distributions
can repackage them and freely distribute them. I'm not aware of any
tools currently required by cinder where this is not the case, but a few
of us are in the process of auditing this to make sure we understand the
situation before we clarify our rules.


I don't agree with this stance. I think the Cinder (and OpenStack) 
communities should be able to dictate what form driver take, including 
the code and the license, but when we start to try to control what 
drivers are allowed to talk to (over an API or CLI) then we are 
starting to artificially limit what kinds of storage systems can 
integrate with OpenStack.


Storage systems take a wide variety of forms, including specialized 
hardware systems, clusters of systems, pure software-based systems, open 
source, closed source, and even other SDS abstraction layers. I don't 
see the point in creating rules that specify what form a storage system 
has to take if we are going to allow a driver for it. As long as the 
driver itself and all of its python dependencies are Apache licensed, 
we can do our job of reviewing the code and fixing cinder-level bugs. 
Any other kind of restrictions just limit customer choice and stifle 
competition.


Even if you don't agree with my stance, I see serious practical problems 
with trying to define what is and is not permitted in terms of "support 
tools". Is a proprietary binary that communicates with a physical 
controller using a proprietary API a "support tool"? What if someone 
creates a software-defined-storage system which is purely a proprietary 
binary and nothing else?


API proxies are also very hard to nail down. Is an API proxy with a 
proprietary license not allowed? What if that proxy runs on the box 
itself? What if it's a separate software package you have to install? I 
don't think we can write a set of rules that won't accidentally exclude 
things we don't want to exclude.


-Ben Swartzlander


--
Duncan Thomas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] moving driver to open source

2016-09-08 Thread Ben Swartzlander

On 09/06/2016 11:27 AM, Alon Marx wrote:

I want to share our plans to open the IBM Storage driver source code.
Historically we started out in cinder way back (in Essex if I'm not
mistaken) with just a small piece of code in the community while keeping
most of the driver code closed. Since then the code has grown, but we
have kept the same format. We would now like to open the driver source
code, while keeping the connectivity to the storage as closed source.
I believe that there are other cinder drivers that have some stuff in
proprietary libraries. I want to propose and formalize the principles for
where we draw the line (this has also been discussed in
https://review.openstack.org/#/c/341780/) on what's acceptable to the
community.
Based on previous discussion I understand that the rule of thumb is "as
long as the majority of the driver logic is in the public driver" the
community would be fine with that. Is this acceptable to the community?


NetApp went through this about a year ago with our driver. There was a 
desire to have our cinder driver depend on a proprietary-licensed python 
library. This was soundly rejected by the community, and I personally 
think the policy is clear -- all libraries imported by cinder and cinder 
drivers must have OSI-approved licenses, period.


There is no concept of a "majority of the code" being open. It's all 
open or it violates the policy.


Where things get more grey is when external components are involved, 
such as API proxies or management tools which run in a separate process 
or over the network. The community has traditionally tolerated these 
things. What the community doesn't like is when the driver is just 
a few lines of python code that calls out to an external binary or 
service to do all the work. Personally, I have no issue whatsoever with 
that approach, but that's where the debate exists.


IMO, there is no debate about proprietary python libs. You can't use 
them. Your choices are:

1) Take the driver out of tree
2) Release the library under an OSI-approved license
3) Refactor the driver to not import the proprietary library

-Ben Swartzlander



Regards,
Alon










__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Choose a new logo

2016-09-07 Thread Ben Swartzlander

On 09/07/2016 01:20 PM, Ben Swartzlander wrote:

For reasons discussed in last week's IRC meeting, we have vacated the
previous logo choice and are redoing the vote.

http://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_6f0d111cec78c5ef&akey=d34a751f2d084d79


This time I'm using the CIVS system because it will allow us to
determine a #2 and #3 choice in case the new winner is disqualified for
any reason.

-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Choose a new logo

2016-09-07 Thread Ben Swartzlander
For reasons discussed in last week's IRC meeting, we have vacated the 
previous logo choice and are redoing the vote.


http://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_6f0d111cec78c5ef&akey=d34a751f2d084d79

This time I'm using the CIVS system because it will allow us to 
determine a #2 and #3 choice in case the new winner is disqualified for 
any reason.


-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-09-02 Thread Ben Swartzlander
nto a nerf bat. However I also think that in the 
long run the market will punish vendors who do a bad job of writing and 
maintaining drivers and the community probably doesn't need to expend as 
much effort as it does policing driver quality.


-Ben Swartzlander



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] [security] [tc] Add the vulnerability:managed tag to Manila

2016-09-01 Thread Ben Swartzlander
Thanks fungi. I misunderstood the full scope of the requirements for 
vulnerability management and since we don't yet have volunteers willing 
to perform all the required duties, I'm going to withdraw the tag request.


As soon as interested community members step up to take on the 
responsibilities I'll reapply for the tag.


-Ben Swartzlander


On 08/30/2016 01:07 PM, Jeremy Stanley wrote:

Ben has proposed[1] adding manila, manila-ui and python-manilaclient
to the list of deliverables whose vulnerability reports and
advisories are overseen by the OpenStack Vulnerability Management
Team. This proposal is an assertion that the requirements[2] for the
vulnerability:managed governance tag are met by these deliverables.
As such, I wanted to initiate a discussion evaluating each of the
listed requirements to see how far along those deliverables are in
actually fulfilling these criteria.

1. All repos for a covered deliverable must meet the criteria or
else none do. Easy enough, each deliverable has only one repo so
this isn't really a concern.

2. We need a dedicated point of contact for security issues. Our
typical point of contact would be a manila-coresec team in
Launchpad, but that doesn't exist[3] (yet). Since you have a fairly
large core review team[4], you should pick a reasonable subset of
those who are willing to act as the next line of triage after the
VMT hands off a suspected vulnerability report under embargo. You
should have at least a couple of active volunteers for this task so
there's good coverage, but more than 5 or so is probably pushing the
bounds of information safety. Not all of them need to be core
reviewers, but enough of them should be so that patches proposed as
attachments to private bugs can effectively be "pre-approved" in an
effort to avoid delays merging at time of publication.

3. The PTL needs to agree to act as a point of escalation or
delegate this responsibility to a specific liaison. This is Ben by
default, but if he's not going to have time to serve in that role
then he should record a dedicated Vulnerability Management Liaison
in the CPLs list[5].

4. Configure sharing[6][7][8] on the defect trackers for these
deliverables so that OpenStack Vulnerability Management team
(openstack-vuln-mgmt) has "Private Security: All". Once the
vulnerability:managed tag is approved for them, also remove the
"Private Security: All" sharing from any other teams (so that the
VMT can redirect incorrectly reported vulnerabilities without
prematurely disclosing them to manila reviewers).

5. Independent security review, audit, or threat analysis... this is
almost certainly the hardest to meet. After some protracted
discussion on Kolla's application for this tag, it was determined
that projects should start supplying threat analyses to a central
security-analysis[9] repo where they can be openly reviewed and
ultimately published. No projects have actually completed this yet,
but there is some process being finalized by the Security Team which
projects will hopefully be able to follow. You may want to check
with them on the possibility of being an early adopter for that
process.

6. Covered deliverables need tests we can rely on to be able to
evaluate whether privately proposed security patches will break the
software. A cursory look shows many jobs[10] running in our upstream
CI for changes to these repos, so that requirement is probably
addressed (I did not yet check whether those
unit/functional/integration tests are particularly extensive).

So in summary, it looks like there are still some outstanding
requirements not yet met for the vulnerability:managed tag but I
don't see any insurmountable challenges there. Please let me know if
any of the above is significantly off-track.

[1] https://review.openstack.org/350597
[2] 
https://governance.openstack.org/reference/tags/vulnerability_managed.html#requirements
[3] https://launchpad.net/~manila-coresec
[4] https://review.openstack.org/#/admin/groups/213,members
[5] 
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Vulnerability_management
[6] https://launchpad.net/manila/+sharing
[7] https://launchpad.net/manila-ui/+sharing
[8] https://launchpad.net/pythonmanilaclient/+sharing
[9] 
https://git.openstack.org/cgit/openstack/security-analysis/tree/doc/source/templates/
[10] 
https://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] FFE request for Manila integration

2016-08-26 Thread Ben Swartzlander
The 3 patches we need to wrap up the Manila integration for TripleO 
still haven't gotten enough review attention to merge:


https://review.openstack.org/#/c/354019
https://review.openstack.org/#/c/354014
https://review.openstack.org/#/c/355394

Since it looks like the Feature Freeze is going to come without these 
having merged, I'd like to formally request an FFE for them.


They're all related to Manila, which is new in the Newton release, and 
therefore I'd argue these don't add much risk. Worst case they affect 
deployments of Manila, but Manila won't be very usable without these 
anyways. Also these are small and hopefully not-hard-to-review patches.


If there's anything procedural I need to do to make these patches more 
acceptable please let me know, and I'll be watching them over the next 
few days and responding to review feedback.


thanks,
-Ben Swartzlander
Manila PTL

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] FFE for "HPE 3PAR Pool feature"

2016-08-26 Thread Ben Swartzlander

On 08/26/2016 02:38 PM, Mehta, jay wrote:

Hello all,

I am requesting you all to grant me an exception for the Pools feature
for the HPE 3PAR driver. The patch that implements this feature is
https://review.openstack.org/#/c/329552/, implementing blueprint
hpe3par-pool-support.





I have fixed the tempest and py34 failures, which are passing now. I also
had Jenkins failures for some unit tests in the Huawei and share drivers,
for which I have uploaded another patch that fixes those unit test
failures: https://review.openstack.org/#/c/360088/



This is a good feature for us to have in the Newton release. I have had a
few code reviews in the past and have addressed those comments. I believe
there won't be many further review comments, and this should be easy to
merge.

This is not a big feature, and most of the code changes are specific to
the 3PAR driver. Unit tests are implemented to keep code coverage at the
desired level.



Please grant an exception for this marginal delay and consider this
change for the Newton release.


This FFE is granted, however there are remaining concerns about the code 
that need to be addressed. Granting this FFE isn't a guarantee that the 
patch will merge, just that we will consider it.


For the future I'd like to remind people that the FPF deadline means you 
shouldn't continue adding features to your patch after the deadline. The 
only changes we should see to patches after FPF are responses to review 
comments and resolving of merge conflicts.


-Ben



Thanks and Regards,

Jay Mehta



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][release] Plans re newton-3 release and feature freeze exceptions

2016-08-26 Thread Ben Swartzlander

On 08/26/2016 02:04 PM, James Slagle wrote:

On Fri, Aug 26, 2016 at 12:14 PM, Steven Hardy <sha...@redhat.com> wrote:


1. Mistral API

We've made good progress on this over recent weeks, but several patches
remain - this is the umbrella BP, and it links several dependent BPs which
are mostly posted but need code reviews, please help by testing and
reviewing these:

https://blueprints.launchpad.net/tripleo/+spec/mistral-deployment-library


Based on what's linked off of that blueprint, here's what's left:

https://blueprints.launchpad.net/tripleo/+spec/cli-deployment-via-workflow
topic branch: 
https://review.openstack.org/#/q/status:open+project:openstack/python-tripleoclient+branch:master+topic:deploy
5 patches, 2 are marked WIP, all need reviews

https://blueprints.launchpad.net/tripleo-ui/+spec/tripleo-ui-mistral-refactoring
topic branch: 
https://review.openstack.org/#/q/topic:bp/tripleo-ui-mistral-refactoring
1 tripleo-ui patch
1 tripleo-common patch that is Workflow -1
1 tripleoclient patch that I just approved

https://blueprints.launchpad.net/tripleo/+spec/roles-list-action
single patch: https://review.openstack.org/#/c/330283/, needs review

From: https://etherpad.openstack.org/p/tripleo-mistral-api ---
https://review.openstack.org/#/c/355598/ (merge conflict, needs review)
https://review.openstack.org/#/c/348875/ (just approved, should merge)
https://review.openstack.org/#/c/341572/ (just approved, should merge)

Additionally, there are the validations patches:
https://review.openstack.org/#/q/topic:mistral-validations

If I missed anything, please point it out.


There are 3 small patches that need to be included in Newton. The author, 
marios, didn't tag them with a blueprint or bug so they don't show up on 
the LP milestone page, however there's a joint NetApp/Redhat 
presentation in Barcelona which assumes we have NetApp driver support 
for Manila in TripleO.


https://review.openstack.org/#/c/354019
https://review.openstack.org/#/c/354014
https://review.openstack.org/#/c/355394

I'm hoping these can go in today, but if not I'll file an FFE for them. 
AFAIK there's no problems with these.


-Ben Swartzlander






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][cinder] Clone feature toggle not in clone tests

2016-08-25 Thread Ben Swartzlander
Originally the NFS driver did support snapshots, but it was implemented by 
just 'cp'ing the file containing the raw bits. This works fine (if 
inefficiently) for unattached volumes, but if you do this on an attached 
volume the snapshot won't be crash consistent at all.


It was decided that we could do better for attached volumes by switching to 
qcow2 and relying on nova to perform the snapshots. Based on this, the bad 
snapshot implementation was removed.


However, for a variety of reasons the nova-assisted snapshot implementation 
has remained unmerged for 2+ years and the NFS driver has been an exception 
to the rules for that whole time.


I would like to see that exception end in the near future with either the 
removal of the driver or the completion of the Nova-assisted snapshot 
implementation, and it doesn't really matter to me which.


There is a 3rd alternative which would be to modify the NFS driver to 
require a specific filesystem that supports snapshots (there are a few 
choices here, but definitely NOT ext4). Unfortunately those of us who work 
for storage vendors aren't motivated to make such a modification because it 
would be effectively creating more competition for ourselves. The only way 
this could happen is if someone not working for a storage vendor takes this on.


-Ben


On August 25, 2016 10:39:35 AM Erlon Cruz  wrote:


Hi Jordan, Slade,

Currently the NFS driver supports neither cloning nor snapshots (which are
the basis for implementing cloning). AFAIK, the NFS driver was in Cinder
before the minimum requirements were discussed and set, so it just stayed
there with the features it already supported.

There is currently this job,
'gate-tempest-dsvm-full-devstack-plugin-nfs-nv' [1], which, by the way, is
failing the same test you mentioned though passing the snapshot tests
(not sure how the configuration manages that), and a work in progress [2]
to support the snapshot feature.

So, Jordan, I think it's OK to allow tempest to skip these tests, given
that, at least in the NFS driver's case, tempest isn't enforcing
Cinder's minimum feature requirements.

Erlon


[1]
http://logs.openstack.org/86/147186/25/experimental/gate-tempest-dsvm-full-devstack-plugin-nfs-nv/b149960/
[2] https://review.openstack.org/#/c/147186/

On Wed, Aug 24, 2016 at 6:34 PM, Jordan Pittier 
wrote:



On Wed, Aug 24, 2016 at 6:06 PM, Slade Baumann  wrote:


I am attempting to disable clone tests in tempest as they aren't
functioning in NFS. But the tests test_volumes_clone.py and
test_volumes_clone_negative.py don't have the "clone" feature
toggle in them. I thought it obvious that if clone is disabled
in tempest, the tests that simply clone should be disabled.

So I put up a bug and fix for it, but have been talking with
Jordan Pittier and he suggested I come to the mailing list to
get this figured out.

I'm not asking for reviews, unless you want to give them.
I'm simply asking if this is the right way to go about this
or if there is something else I need to do to get this into
Tempest.

Here are the bug and fix:
https://bugs.launchpad.net/tempest/+bug/1615770
https://review.openstack.org/#/c/358813/

I would appreciate any suggestion or direction in this problem.

For extra reference, the clone toggle flag was added here:
https://bugs.launchpad.net/tempest/+bug/1488274
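
For reference, the fix just follows tempest's usual toggle pattern --
roughly the sketch below, assuming the CONF.volume_feature_enabled.clone
option that bug added (class and test names here are illustrative):

    import testtools

    from tempest.api.volume import base
    from tempest import config

    CONF = config.CONF

    class VolumesCloneTest(base.BaseVolumeTest):

        @testtools.skipUnless(CONF.volume_feature_enabled.clone,
                              'Cinder volume clones are disabled')
        def test_create_from_volume(self):
            # Create a source volume, then clone it via source_volid.
            src = self.create_volume()
            self.create_volume(source_volid=src['id'])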

Hi,

Thanks for starting this thread. My point about this patch is, as "volume
clone" is part of the core requirements [1] every Cinder drive must
support, I don't see a need for a feature flag. The feature flag already
exists, but that doesn't mean we should encourage its usage.

Now, if this really helps the NFS driver (although I don't know why we
couldn't support clone with NFS)... I don't have a strong opinion on this
patch.

I -1ed the patch for consistency: I agree that there should be a minimum
set of features expected from a Cinder driver.

[1] http://docs.openstack.org/developer/cinder/devref/drivers.html#core-functionality

Cheers,
Jordan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Feature proposal freeze is here

2016-08-19 Thread Ben Swartzlander
So the feature proposal deadline passed (yesterday evening) and I've 
started putting -2s on things which didn't meet the deadline. The 
purpose of this is to focus attention on features which we still want to 
land in Newton.


The goal of the next 7 days is to merge all of the features which are 
ready so we can enter feature freeze and start the QA cycle. Please turn 
your attention to reviewing and merging features which met the deadline, 
and of course testing stuff.


If you're still working on any feature, you've missed the deadline 
already and you should delay that work until Ocata and help us finish 
the Newton release. The only changes to feature patches in the next 2 
weeks should be responses to review comments and resolving merge conflicts.


Also please prioritize fixing bugs that affect the gate as we'll need 
every bit of cooperation we can get from the gate to merge the backlog 
of features we have.


thanks,
-Ben Swartzlander

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] nfs-ganesha export modification issue

2016-08-16 Thread Ben Swartzlander

On 08/16/2016 08:42 AM, Ramana Raja wrote:

On Thursday, June 30, 2016 6:07 PM, Alexey Ovchinnikov 
 wrote:


Hello everyone,

here I will briefly summarize an export update problem one will encounter
when using nfs-ganesha.

While working on a driver that relies on nfs-ganesha I have discovered that
it is apparently impossible to provide interruption-free export updates. As of
version 2.3, which I am working with, it is possible to add or remove an
export without restarting the daemon, but it is not possible to modify an
existing export. In other words, if you create an export you should define
all clients before you actually export and use it; otherwise it will be
impossible to change the rules on the fly. One can come up with at least two
ways to work around this issue: either by removing, updating and re-adding an
export, or by creating multiple exports (one per client) for an exported
resource. Both ways have associated problems: the first one interrupts
clients already working with an export, which might be a big problem if a
client is doing heavy I/O; the second one creates multiple exports associated
with a single resource, which can easily lead to confusion. The second
approach is used in manila's current ganesha helper [1].
This issue has been raised now and then with the nfs-ganesha team, most
recently in [2], but apparently it will not be addressed in the near future.


Frank Filz has added support to Ganesha (upstream "next" branch) to
allow one to dynamically update exports via D-Bus. Available since,
https://github.com/nfs-ganesha/nfs-ganesha/commits/2f47e8a761f3700
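
For context, Ganesha's export manager is driven over D-Bus, and manila's
ganesha helper already shells out to dbus-send for AddExport/RemoveExport.
Using the new call would presumably look something like this sketch (the
UpdateExport method name and argument format are assumptions based on the
existing interface):

    import subprocess

    def _dbus_send_export_mgr(method, *args):
        # Mirrors how manila's ganesha helper invokes the export manager.
        cmd = ['dbus-send', '--print-reply', '--system',
               '--dest=org.ganesha.nfsd',
               '/org/ganesha/nfsd/ExportMgr',
               'org.ganesha.nfsd.exportmgr.%s' % method]
        subprocess.check_call(cmd + list(args))

    def update_export(conf_path, export_id):
        # Ask Ganesha to re-read the export block in place, picking up
        # client-list changes without dropping active mounts.
        _dbus_send_export_mgr('UpdateExport',
                              'string:%s' % conf_path,
                              'string:EXPORT(Export_Id=%d)' % export_id)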


This is awesome news! Unfortunately there's no time to update the 
container driver to use this mechanism before Newton FF, but we can 
provide feedback and plan this enhancement for Ocata.


-Ben



It'd be nice if we can test this feature and provide feedback.
Also, ML [2] was updated with more implementation details.

Thanks,
Ramana



With kind regards,
Alexey.

[1]:
https://github.com/openstack/manila/blob/master/manila/share/drivers/ganesha/__init__.py
[2]: https://sourceforge.net/p/nfs-ganesha/mailman/message/35173839

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-11 Thread Ben Swartzlander

On 08/10/2016 01:57 PM, Matthew Treinish wrote:

On Wed, Aug 10, 2016 at 09:52:55AM -0700, Clay Gerrard wrote:

On Wed, Aug 10, 2016 at 7:42 AM, Ben Swartzlander <b...@swartzlander.org>
wrote:



A big source of problems IMO is that tempest doesn't have stable branches.
We use the master branch of tempest to test stable branches of other
projects, and tempest regularly adds new features.



How come not this? +1000, just fix this.


Well, mostly because it's actually not a problem, and it ignores the history of why
tempest is branchless. We actually used to do this pre-icehouse and it actually
made things much worse. What was happening back then was we didn't have enough
activity to keep the stable branches working at all. So we'd go very long
periods where nothing actually could land. We also often wedged ourselves where
master tempest changed to a point where we couldn't sanely backport a fix to
the stable branch. This would often mean that up until right before a stable
release things just couldn't land until someone was actually motivated to try
and dig us out. But, what more often happened was we had to just disable tempest
on the branch, because we didn't have another option. It also turns out that
having different tests across a release boundary meant we weren't actually
validating that the OpenStack APIs were consistent and worked the same. We had
many instances where a projects API just changed between release boundaries,
which violates our API consistency and backwards compatibility guidelines.
Tempest is about verifying the API, and just like any other API client it should
work against any OpenStack release.

Doing this has been a huge benefit for making things actually work on the stable
branches. (in fact just thinking back about how broken everything was all the
time back then makes me appreciate it even more) We also test every incoming
tempest change on all the stable branches, and nothing can land unless it works
on all supported branches. It means we have a consistent and stable api across
releases. We do have occasional bugs where a new test or change in tempest
triggers a new race in a project's stable branch. But, that's a real bug and
normally a fix can be backported.(which is the whole point of doing stable
branches) If it can't and the race is bad enough to actively interfere with
things, we have a mechanism to skip the test. (but that's normally a last
resort) Although, these issues tend to come up pretty infrequently in practice,
especially as we slowly ramp up the stability of things over time.

FWIW, a lot of these details are covered in the original spec for implementing
this: (although it definitely assumes a bit of prior knowledge about the
state of things going on when it was written)

http://specs.openstack.org/openstack/qa-specs/specs/tempest/implemented/branchless-tempest.html


I still don't agree with this stance. Code doesn't just magically stop 
working. Code breaks when things change which aren't version controlled 
properly or when you have undeclared dependencies.


Yes, managing dependencies and doing version control is a lot of work. I 
understand that the tempest team is strapped for resources. However 
simply declaring that the solution is to stop doing version control is 
an epic failure. I read the spec above and it sounds like a cry for help 
rather than a well-thought-out idea.


Personally I would be interested in helping improve this situation, but 
I think the proper way to improve it is to actually do version control 
of everything that matters and use dependency management. If you do this 
correctly, then maintaining the old stuff stops being a chore because it 
never breaks. By definition, if a stable branch breaks without a change 
going into it, you failed at doing dependency management.


I'm not sure how I even could help with the current situation. It feels 
like we've dug ourselves into a hole and the plan of record is to get 
used to living underground. Does anyone have the will to make things better 
and get to a place where stable branches could reasonably be expected to 
remain stable for months or years without constant fixing?


-Ben Swartzlander





-Matt Treinish



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-10 Thread Ben Swartzlander

On 08/10/2016 11:33 AM, Luigi Toscano wrote:

On Wednesday, 10 August 2016 17:00:36 CEST Ihar Hrachyshka wrote:

Luigi Toscano <ltosc...@redhat.com> wrote:

On Wednesday, 10 August 2016 10:42:41 CEST Ben Swartzlander wrote:

On 08/10/2016 04:33 AM, Duncan Thomas wrote:

So I tried to get into helping with the cinder stable tree for a while,
and while I wasn't very successful (lack of time and an inability to
convince my employer it should be a priority), one thing I did notice is
that much of the breakage seemed to come from outside cinder - many of
the libraries we depend on make backwards incompatible changes by
accident, for example. Would it be possible to have a long-term-support
branch where we pinned the max version of everything for the gate, pip
and devstack? I'd have thought (and I'm very willing to be corrected)
that would make the stable gate, well, stable, such that it required far
less work to keep it able to run a basic devstack test plus unit tests.

Does that sound at all sane?


A big source of problems IMO is that tempest doesn't have stable
branches. We use the master branch of tempest to test stable branches of
other projects, and tempest regularly adds new features. This guarantees
instability if you rely on tempest anywhere in your gate (and cinder
does).


Orthogonal to the discussion, but: this is not due to the lack of a stable
branch, but to the fact that parts of the Tempest API are not stable yet. This
is being addressed right now (in scope for Newton).
Once the stable Tempest APIs are used, no breakage should happen.


Well, it’s only partially true. But what happens when you add a new test to
tempest/master? It gets executed on all branches, and maybe some of them
fail it. We can argue that it’s probably a revealed bug, but it
nevertheless requires attention from stable maintainers to solve.


The new test should work on all supported branches. As a tester I see a lot
more advantage in maintaining a unified set of tests than in fighting with
backports of tests.


I'm sure it makes YOUR life easier to not have to deal with backports. 
The rest of us who do maintain stable branches though don't appreciate 
it when tempest adds a new feature or a new dependency which breaks the 
stable gate jobs. This happens a few times each release, and tends to 
get fixed within a day or two, which is barely tolerable.


The fact that it happens at all though says that we're "doing it wrong" 
w.r.t. testing of stable branches, and completely explains why we find 
it so challenging to support stuff more than 12 months old. If 
everything we used had stable branches and proper dependency management, 
then it would be easy to keep gate jobs running for years.



There are mechanisms to skip tests based on the cloud capabilities.
So this should not be an issue, and if a bug is found that should definitely
be viewed as a good thing.


It's not new tests that cause the problem, because those would be easy 
to skip. It's changes to tempest core which force changes elsewhere.


-Ben



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-10 Thread Ben Swartzlander

On 08/10/2016 04:33 AM, Duncan Thomas wrote:

So I tried to get into helping with the cinder stable tree for a while,
and while I wasn't very successful (lack of time and an inability to
convince my employer it should be a priority), one thing I did notice is
that much of the breakage seemed to come from outside cinder - many of
the libraries we depend on make backwards incompatible changes by
accident, for example. Would it be possible to have a long-term-support
branch where we pinned the max version of everything for the gate, pips
and devtstack? I'd have thought (and I'm very willing to be corrected)
that would make the stable gate, well, stable, such that it required far
less work to keep it able to run a basic devstack test plus unit tests.

Does that sound at all sane?


A big source of problems IMO is that tempest doesn't have stable 
branches. We use the master branch of tempest to test stable branches of 
other projects, and tempest regularly adds new features. This guarantees 
instability if you rely on tempest anywhere in your gate (and cinder does).


-Ben Swartzlander



(I'm aware there are community standards for stable currently, but a lot
of this thread is the tail of standards wagging the dog of our goals.
Lets figure out what we want to achieve, and figure out how we can do
that without causing either too much extra work or an unnecessary fall
off in quality, rather than saying we can't do anything because of how
we do things now.)




On 10 August 2016 at 08:54, Tony Breeds <t...@bakeyournoodle.com> wrote:

On Tue, Aug 09, 2016 at 09:16:02PM -0700, John Griffith wrote:
> Sorry, I wasn't a part of the sessions in Austin on the topic of long
> term support of Cinder drivers.  There's a lot going on during the
> summits these days.

For the record, the sessions in Austin that I think Matt was referencing
were about stable life-cycles, not Cinder-specific.

> Yeah, ok... I do see your point here, and as I mentioned I have had this
> conversation with you and others over the years and I don't
> disagree.  I also don't have the ability to "force"
> said parties to do things differently.  So when I try and help customers
> that are having issues my only recourse is an out of tree patch, which
> then when said distro notices or finds out they don't want to support the
> customer any longer based on the code no longer being "their blessed
> code".  The fact is that the distros hold the power in these situations,
> if they happen to own the OS release and the storage then it works out
> great for them, not so much for anybody else.

Right we can't 'force' the distros to participate (if we could we
wouldn't be
having this discussion).  The community has a process and all we can
do is
encourage distros and the like to participate in that process as it
really is
best for them, and us.

> So is the consensus here that the only viable solution is for people to
> invest in keeping the stable branches in general supported longer?  How
> does that work for projects that are interested and have people willing to
> do the work vs projects that don't have the people willing to do the work?
> In other words, Cinder has a somewhat unique problem that Nova, Glance and
> Keystone don't have.  So for Cinder to try and follow the policies,
> processes and philosophies you outlined does that mean that as a project
> Cinder has to try and bend the will of "ALL" of the projects to make this
> happen?  Doesn't seem very realistic to me.​

So the 'Cinder' team won't need to do all the will bending; that's
for the Stable team to do with the support of *everyone* that cares
about the outcome. That probably doesn't fill you with hope, but that
is the reality.

> Just one last point and I'll move on from the topic.  I'm not sure where
> this illusion that we're testing all the drivers so well is coming from.
> Sure, we require the steps and facade of 3rd party CI, but dig a bit
> deeper and you soon find that we're not really testing as much as some
> might think here.

That's probably true, but if we created a 'mitaka-drivers' branch of
cinder the gate CI would rapidly degenerate to a noop; any
unit/functional tests would be *entirely* 3rd party.

Yours Tony.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Ben Swartzlander

On 08/09/2016 06:56 PM, Mike Perez wrote:

On 10:31 Aug 06, Sean McGinnis wrote:


I'm open and welcome to any feedback on this. Unless there are any major
concerns raised, I will at least instruct any Cinder stable cores to
start allowing these bugfix patches through past the security only
phase.


As others have said, and speaking as a Cinder stable core myself, the status
quo and this proposal itself are terrible practices because there is no testing
behind them, and they are therefore not up to the community's QA standards. I
will be issuing -2 on these changes in the stable branch regardless of your
instructions until the policy has changed.


I agree we can't drop the testing standards on the "stable" branches. 
I'm not in favor of that. I'd rather use differently-named branches with 
different and well-documented policies. Ideally the branches would be 
named something like driver-fixes/newton and driver-fixes/ocata, etc, to 
avoid confusion with the stable branches.


-Ben Swartzlander


If you want to change that, work with the stable team on the various options
provided. This tangent of people whining on the mailing list and in
#openstack-cinder is not going to accomplish anything.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Ben Swartzlander

On 08/09/2016 05:45 PM, Mike Perez wrote:

On 19:40 Aug 08, Duncan Thomas wrote:

On 8 August 2016 at 18:31, Matthew Treinish <mtrein...@kortar.org> wrote:



This argument comes up at least once a cycle and there is a reason we
don't do
this. When we EOL a branch all of the infrastructure for running any ci
against
it goes away. This means devstack support, job definitions, tempest skip
checks,
etc. Leaving the branch around advertises that you can still submit
patches to
it which you can't anymore. As a community we've very clearly said that we
don't
land any code without ensuring it passes tests first, and we do not
maintain any
of the infrastructure for doing that after an EOL.



Ok, to turn the question around, we (the cinder team) have recognised a
definite and strong need to have somewhere for vendors to share patches on
versions of Cinder older than the stable branch policy allows.

Given this need, what are our options?

1. We could do all this outside Openstack infrastructure. There are
significant downsides to doing so from organisational, maintenance, cost
etc points of view. Also means that the place vendors go for these patches
is not obvious, and the process for getting patches in is not standard.

2. We could have something not named 'stable' that has looser rules than
stable branches, maybe just pep8 / unit / cinder in-tree tests. No
devstack.

3. We go with the Neutron model and take drivers out of tree. This is not
something the cinder core team are in favour of - we see significant value
in the code review that drivers currently get - the code quality
improvements between when a driver is submitted and when it is merged are
sometimes very significant. Also, taking the code out of tree makes it
difficult to get all the drivers checked out in one place to analyse e.g.
how a certain driver call is implemented across all the drivers, when
reasoning or making changes to core code.


Just to set the record straight here, some Cinder core members are in favor of
out of tree.


Mike, you must have left the midcycle by the time this topic came up. On 
the issue of out-of-tree drivers, I specifically offered this proposal 
(a community managed mechanism for distributing driver bugfix backports) 
as a compromise alternative to try to address the needs of both camps. 
Everyone who was in the room at the time (plus DuncanT who wasn't) 
agreed that if we had that (a way to deal with backports) that they 
wouldn't want drivers out of the tree anymore.


Your point of view wasn't represented so go ahead and explain why, if we 
did have a reasonable way for bugfixes to get backported to the releases 
customers actually run (leaving that mechanism unspecified for the time 
being), you would still want the drivers out of the tree.


-Ben Swartzlander


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Ben Swartzlander
e driver 
subdirectory, so in practice we do it this way. It makes cherry-picking 
into and out of these repos fairly painless.


-Ben Swartzlander


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-08 Thread Ben Swartzlander

On 08/08/2016 12:36 PM, Jeremy Stanley wrote:

On 2016-08-08 13:03:51 +0200 (+0200), Ihar Hrachyshka wrote:

Sean McGinnis <sean.mcgin...@gmx.com> wrote:

[...]

The suggestion was to just change our stable policy in regards to driver
bugfix backports. No need to create and maintain more branches. No need
to set up gate jobs and things like that.


Unless you manage to get it approved for the global policy

[...]

That was the gist of my suggestion to Sean as far as bringing this
discussion to the ML as a first option. Basically, if lots of
projects see their driver maintainers and downstream distros forking
their stable branches to add driver updates for support of newer
hardware, then see if the current OpenStack Stable Branch policy
should be amended to say that bug fixes and newer hardware support
specifically in driver source code (as long as it doesn't touch the
core service in that repo) are acceptable.

As far as the tangent this thread has taken on changing when we
delete stable branches, I feel like the only solution there is
working with the Stable Branch team to find ways to properly extend
support for the branches in question (including keeping them
properly tested). There have been ongoing efforts to make stable
branch testing less problematic, so it's possible over time we'll be
able to increase the support duration for them. Stating that it's
okay to keep them open for changes with no testing sets a terrible
precedent.


The proposal isn't "no testing". The proposal is that the gate tests 
would be minimal. We would rely heavily on the 3rd party CI system to 
actually test the patch and tell us that nothing is broken. If the 3rd 
party CI systems can't be relied on for this purpose then they're 
useless IMO.


Yes a human would have to recognize that the patch affects a particular 
vendor and know which CI system to look at before putting his +2 on. 
This is an unfortunate effect of not having 3rd party CI vote.


-Ben Swartzlander


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-08 Thread Ben Swartzlander

On 08/08/2016 12:40 PM, Duncan Thomas wrote:

On 8 August 2016 at 18:31, Matthew Treinish <mtrein...@kortar.org> wrote:


This argument comes up at least once a cycle and there is a reason
we don't do
this. When we EOL a branch all of the infrastructure for running any
ci against
it goes away. This means devstack support, job definitions, tempest
skip checks,
etc. Leaving the branch around advertises that you can still submit
patches to
it which you can't anymore. As a community we've very clearly said
that we don't
land any code without ensuring it passes tests first, and we do not
maintain any
of the infrastructure for doing that after an EOL.


Ok, to turn the question around, we (the cinder team) have recognised a
definite and strong need to have somewhere for vendors to share patches
on versions of Cinder older than the stable branch policy allows.

Given this need, what are our options?

1. We could do all this outside Openstack infrastructure. There are
significant downsides to doing so from organisational, maintenance, cost
etc points of view. Also means that the place vendors go for these
patches is not obvious, and the process for getting patches in is not
standard.

2. We could have something not named 'stable' that has looser rules than
stable branches, maybe just pep8 / unit / cinder in-tree tests. No
devstack.


This. (2) is what I thought we were proposing from the beginning. Add a 
requirement for 3rd party CI from the affected vendor to pass and I 
think it works and benefits everyone.


-Ben Swartzlander


3. We go with the Neutron model and take drivers out of tree. This is
not something the cinder core team are in favour of - we see significant
value in the code review that drivers currently get - the code quality
improvements between when a driver is submitted and when it is merged
are sometimes very significant. Also, taking the code out of tree makes
it difficult to get all the drivers checked out in one place to analyse
e.g. how a certain driver call is implemented across all the drivers,
when reasoning or making changes to core code.

Given we've identified a clear need, and have repeatedly rejected one
solution (take drivers out of tree - it has been discussed at every
summit and midcycle for 3+ cycles), what positive suggestions can people
make?

--
Duncan Thomas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-06 Thread Ben Swartzlander

On 08/06/2016 06:11 PM, Jeremy Stanley wrote:

On 2016-08-06 17:51:02 -0400 (-0400), Ben Swartzlander wrote:
[...]

when it's no longer possible to run dsvm jobs on them (because those jobs
WILL eventually break as infra stops maintaining support for very
old releases) then we simply remove those jobs and rely on vendor
CI + minimal upstream tests (pep8, unit tests).


This suggestion has been resisted in the past as it's not up to our
community's QA standards, and implying there is "support" when we
can no longer test that changes don't cause breakage is effectively
dishonest. In the past we've held that if a branch is no longer
testable, then there's not much reason to collaborate on code
reviewing proposed backports in the first place. If we're reducing
these branches to merely a holding place for "fixes" that "might
work" it doesn't sound particularly beneficial.


Well this was the whole point, and the reason I suggested using a 
different branch other than stable/release. Keeping the branches open 
for driver bugfix backports is only valuable if we can go 5 releases back.


I agree the level of QA we can do gets less as releases get older, and 
nobody expects the Infra team to keep devstack-gate running on such old 
releases. However vendors and distros DO support such old releases and 
the proposal to create these branches is largely to simplify the 
distribution of bugfixes from vendors to customers and distros.


Compare this proposal to the status quo, which is that several vendors 
effectively maintain forks of Cinder on github or other public repos 
just to have a place to distribute bugfixes on old releases. Distros 
either need to know about these repos or do the backports from master 
themselves when taking bugfixes into old releases.


-Ben


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-06 Thread Ben Swartzlander

On 08/06/2016 11:31 AM, Sean McGinnis wrote:

This may mostly be a Cinder concern, but putting it out there to get
wider input.

For some time now there has been some debate about moving third party
drivers in Cinder to be out of tree. I won't go into that too much,
other than to point out one of the major drivers for this desire that
was brought up at our recent Cinder midcycle.

It turned out at least part of the desire to move drivers out of tree
came down to the difficulty in getting bug fixes out to end users that
were on older stable versions, whether because that's what their distro
was still using, or because of some other internal constraint that
prevented them from upgrading.

A lot of times what several vendors ended up doing is forking Cinder to
their own github repo and keeping that in sync with backports, plus
including driver fixes they needed to get out to their end users. This
has a few drawbacks:

1- this is more work for the vendor to keep this fork up to date
2- end users don't necessarily know where to go to find these without
   calling in to a support desk (that then troubleshoots a known issue
   and hopefully eventually ends up contacting the folks internally that
   actually work on Cinder that know it's been fixed and where to get
   the updates). Generally a bad taste for someone using Cinder and
   OpenStack.
3- Distros that package stable branches aren't able to pick up these
   changes, even if they are picking up stable branch updates for
   security fixes
4- We end up with a lot of patches proposed against security only stable
   branches that we need to either leave or abandon, just so a vendor
   can point end users to the patch to be able to grab the code changes

Proposed Solution
-

So part of our discussion at the midcycle was a desire to open up stable
restrictions for getting these driver bugfixes backported. At the time,
we had discussed having new branches created off of the stable branches
specifically for driver bugfixes. Something like:

stable/mitaka -> stable/mitaka-drivers

After talking to the infra team, this really did sound like overkill.
The suggestion was to just change our stable policy in regards to driver
bugfix backports. No need to create and maintain more branches. No need
to set up gate jobs and things like that.

So this is a divergence from our official policy. I want to propose
we officially make a change to our stable policy to call out that
driver bugfixes (NOT new driver features) be allowed at any time.

If that's not OK with other project teams that support any kind of third
party drivers, I will just implement this policy specific to Cinder
unless there is a very strong objection, with good logic behind it, why
this should not be allowed.

This would address a lot of the concerns at least within Cinder and
allow us to better support users stuck on older releases.

I'm open and welcome to any feedback on this. Unless there are any major
concerns raised, I will at least instruct any Cinder stable cores to
start allowing these bugfix patches through past the security only
phase.


The only issue I see with this modified proposal is that it doesn't 
address the lifetime of the stable branches. If the plan is to use the 
normal stable branch instead of making a special branch, then we also 
need to find a way to keep stable branches around for practically 
forever (way longer than the typical 12 months).


Those of us dealing with bugfix backports for customers inevitably are 
looking at going 3, 4, or 5 releases back with the backports. Therefore 
I'd suggest modifying the policy to keep the stable branches around more 
or less forever, and when it's no longer possible to run dsvm jobs on them 
(because those jobs WILL eventually break as infra stops maintaining 
support for very old releases) then we simply remove those jobs and rely 
on vendor CI + minimal upstream tests (pep8, unit tests).


-Ben


Thanks!

Sean McGinnis (smcginnis)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Ben Swartzlander

On 08/04/2016 03:02 PM, Fox, Kevin M wrote:

Nope. The incompatibility was for things that were never in radosgw, not things 
that regressed over time. The tmpurl differences and the namespacing things have 
been there since the beginning.

At the last summit, I started with the DefCore folks and worked backwards until 
someone said, no we won't ever add tests for compatibility for that because 
radosgw is not an OpenStack project and we only test OpenStack.

Yes, I think that's a terrible thing. I'm just relaying the message I got.


I don't see how this is terrible at all. If someone were to start up a 
clone of another OpenStack project (say, Cinder) which aimed for 100% 
API compatibility with Cinder, but outside the tent, and then they 
somehow failed to achieve true compatibility because of Cinder's 
undocumented details, nobody would proclaim that this was somehow
our (the OpenStack community's) fault.


I think the Radosgw people probably have a legitimate beef with the 
Swift team about the lack of an official API spec that they can code to, 
but that's a choice for the Swift community to make. If users of Swift 
are satisfied with a the-code-is-the-spec stance then I say good luck to 
them.


If the user community cares enough about interoperability between 
swift-like things they will demand an API spec and conformance tests and 
someone will write those and then radosgw will have something to conform 
to. None of this has anything to do with the governance model for Ceph 
though.


-Ben Swartzlander




Thanks,
Kevin

From: Ben Swartzlander [b...@swartzlander.org]
Sent: Thursday, August 04, 2016 10:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] persistently single-vendor projects

On 08/04/2016 11:57 AM, Fox, Kevin M wrote:

Ok. I'll play devil's advocate here and speak to the other side of this, because 
you raised an interesting issue...

Ceph is outside of the tent. It provides a (mostly) api compatible 
implementation of the swift api (radosgw), and it is commonly used in OpenStack 
deployments.

Other OpenStack projects don't take it into account because it's not a big tent 
thing, even though it is very common. Because of some rules about only testing 
OpenStack things, radosgw is not tested against even though it is so common.


I call BS on this assertion. We test things that are outside the tent in the
upstream gate all the time -- the only requirement is that they be
released. We won't test against unreleased stuff that's outside the big
tent and the reason for that should be obvious.


This causes odd breakages at times that could easily be prevented, were it 
not for procedural things around the Big Tent.


The only way I can see for "odd breakages" to sneak in is on the Ceph
side: if they aren't testing their changes against OpenStack and they
introduce a regression, then that's their fault (assuming of course that
we have good test coverage running against the latest stable release of
Ceph). It's reasonable to request that we increase our test coverage
with Ceph if it's not good enough and if we are the ones causing the
breakages. But their outside status isn't the problem.

-Ben Swartzlander



I do think this should be fixed before we advocate single-vendor projects exit 
the big tent after some time, as the testing situation may be made worse.

Thanks,
Kevin

From: Thierry Carrez [thie...@openstack.org]
Sent: Thursday, August 04, 2016 5:59 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] persistently single-vendor projects

Thomas Goirand wrote:

On 08/01/2016 09:39 AM, Thierry Carrez wrote:

But if a project is persistently single-vendor after some time and
nobody seems interested to join it, the technical value of that project
being "in" OpenStack rather than a separate project in the OpenStack
ecosystem of projects is limited. It's limited for OpenStack (why
provide resources to support a project that is obviously only beneficial
to one organization ?), and it's limited to the organization itself (why
go through the OpenStack-specific open processes when you could shortcut
it with internal tools and meetings ? why accept the oversight of the
Technical Committee ?).


A project can still be useful for everyone with a single vendor
contributing to it, even after a long period of existence. IMO that's
not the issue we're trying to solve.


I agree with that -- open source projects can be useful for everyone
even if only a single vendor contributes to it.

But you seem to imply that the only way an open source project can be
useful is if it's developed as an OpenStack project under the OpenStack
Technical Committee governance. I'm not advocating that these projects
should stop or disappear. I'm just saying that if they are very unlikely
to grow a more diverse affiliation in the future, they derive little value
in being developed under the OpenStack Technical Committee oversight.

Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Ben Swartzlander

On 08/04/2016 11:57 AM, Fox, Kevin M wrote:

Ok. I'll play devil's advocate here and speak to the other side of this, because 
you raised an interesting issue...

Ceph is outside of the tent. It provides a (mostly) api compatible 
implementation of the swift api (radosgw), and it is commonly used in OpenStack 
deployments.

Other OpenStack projects don't take it into account because it's not a big tent 
thing, even though it is very common. Because of some rules about only testing 
OpenStack things, radosgw is not tested against even though it is so common.


I call BS on this assertion. We test things that are outside the tent in the 
upstream gate all the time -- the only requirement is that they be 
released. We won't test against unreleased stuff that's outside the big 
tent and the reason for that should be obvious.



This causes odd breakages at times that could easily be prevented, were it 
not for procedural things around the Big Tent.


The only way I can see for "odd breakages" to sneak in is on the Ceph 
side: if they aren't testing their changes against OpenStack and they 
introduce a regression, then that's their fault (assuming of course that 
we have good test coverage running against the latest stable release of 
Ceph). It's reasonable to request that we increase our test coverage 
with Ceph if it's not good enough and if we are the ones causing the 
breakages. But their outside status isn't the problem.


-Ben Swartzlander



I do think this should be fixed before we advocate single-vendor projects exit 
the big tent after some time, as the testing situation may be made worse.

Thanks,
Kevin

From: Thierry Carrez [thie...@openstack.org]
Sent: Thursday, August 04, 2016 5:59 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] persistently single-vendor projects

Thomas Goirand wrote:

On 08/01/2016 09:39 AM, Thierry Carrez wrote:

But if a project is persistently single-vendor after some time and
nobody seems interested to join it, the technical value of that project
being "in" OpenStack rather than a separate project in the OpenStack
ecosystem of projects is limited. It's limited for OpenStack (why
provide resources to support a project that is obviously only beneficial
to one organization ?), and it's limited to the organization itself (why
go through the OpenStack-specific open processes when you could shortcut
it with internal tools and meetings ? why accept the oversight of the
Technical Committee ?).


A project can still be useful for everyone with a single vendor
contributing to it, even after a long period of existence. IMO that's
not the issue we're trying to solve.


I agree with that -- open source projects can be useful for everyone
even if only a single vendor contributes to it.

But you seem to imply that the only way an open source project can be
useful is if it's developed as an OpenStack project under the OpenStack
Technical Committee governance. I'm not advocating that these projects
should stop or disappear. I'm just saying that if they are very unlikely
to grow a more diverse affiliation in the future, they derive little
value in being developed under the OpenStack Technical Committee
oversight, and would probably be equally useful if developed outside of
OpenStack official projects governance. There are plenty of projects
that are useful to OpenStack that are not developed under the TC
governance (libvirt, Ceph, OpenvSwitch...)

What is the point for a project to submit themselves to the oversight of
a multi-organization Technical Committee if they always will be the
result of the efforts of a single organization ?

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Nominate Tom Barron for core reviewer team

2016-08-03 Thread Ben Swartzlander
Tom (tbarron on IRC) has been working on OpenStack (both cinder and 
manila) for more than 2 years and has spent a great deal of time on 
Manila reviews in the last release. Tom brings another package/distro 
point of view to the community as well as former storage vendor experience.


-Ben Swartzlander
Manila PTL

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Migration APIs 2-phase vs. 1-phase

2016-08-02 Thread Ben Swartzlander
It occurred to me that if we write the 2-phase migration APIs correctly, 
then it will be fairly trivial to implement 1-phase migration outside 
Manila (in the client, or even higher up).


I would like to propose that we change the migration API to actually 
work that way, because I think it will have positive impact on the 
driver interface and it will make the internals for migration a lot 
simpler. Specifically, I'm proposing that the Manila REST API only 
supports starting/completing migrations, and querying the status of an 
ongoing migration -- there should be no automatic looping inside Manila 
to perform a start+complete in 1 shot.
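
To illustrate why 1-phase then falls out for free, a wrapper outside 
Manila would only need something like the following sketch (hypothetical 
client methods and task_state values, not a real manilaclient API):

    import time

    def migrate_share_1phase(client, share_id, dest_host, poll_interval=5):
        # Phase 1: kick off the data copy and poll until the driver is done.
        client.migration_start(share_id, dest_host)
        while True:
            share = client.get_share(share_id)
            if share['task_state'] == 'migration_driver_phase1_done':
                break
            if share['task_state'] == 'migration_error':
                raise RuntimeError('migration of %s failed' % share_id)
            time.sleep(poll_interval)
        # Phase 2: cut over to the new location.
        client.migration_complete(share_id)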


Additionally I think it makes sense to make all the migration driver 
interfaces more asynchronous, but that change is less urgent. Getting 
the driver interface exactly right is less important than getting the 
REST API right in Newton. Nevertheless, I think we should aim for a 
driver interface that expects all the migration calls to return quickly 
and for status polling to occur automatically on long running 
operations. This will enable much better behavior when restarting 
services during a migration.


I'm going to put a topic on the meeting agenda for Thursday to discuss 
this in more detail, but if anyone has other feelings please chime in here.


-Ben Swartzlander


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread Ben Swartzlander

On 08/01/2016 03:39 AM, Thierry Carrez wrote:

Steven Dake (stdake) wrote:

On 7/31/16, 11:29 AM, "Doug Hellmann" <d...@doughellmann.com> wrote:

[...]
To be clear, I'm suggesting that projects with team:single-vendor be
given enough time to lose that tag. That does not require them to grow
diverse enough to get team:diverse-affiliation.


That makes sense and doesn't send the wrong message.  I wasn't trying to
suggest that either; was just pointing out Kevin's numbers are more in
line with diverse-affiliation than single vendor.  My personal thoughts
are single vendor projects are ok in OpenStack if they are undertaking
community-building activities to increase their diversity of contributors.


Basically my position on this is: OpenStack is about providing open
collaboration spaces so that multiple organizations and individuals can
collaborate (on a level playing ground) to solve a set of issues. It's
difficult to have a requirement of a project having a diversity of
affiliation before it can join, because of the chicken-and-egg issue
between visibility and affiliation-diversity. So we totally accept
single-vendor projects as official OpenStack projects.

But if a project is persistently single-vendor after some time and
nobody seems interested to join it, the technical value of that project
being "in" OpenStack rather than a separate project in the OpenStack
ecosystem of projects is limited. It's limited for OpenStack (why
provide resources to support a project that is obviously only beneficial
to one organization ?), and it's limited to the organization itself (why
go through the OpenStack-specific open processes when you could shortcut
it with internal tools and meetings ? why accept the oversight of the
Technical Committee ?).


Thierry I think you underestimate the value organizations perceive they 
get from projects being in the tent. Even if a project is single vendor, 
the halo effect of OpenStack and the access to free resources (the infra 
cloud, and more importantly the world-class infra TEAM) more than make 
up for any downsides associated with following established processes.


I strongly doubt any organization would choose to remove a project from 
OpenStack for the reasons you mention. If the community doesn't want 
these kinds of projects in the big tent then the community probably 
needs to push them out.


-Ben Swartzlander



So the idea is to find a way for projects who realize that they won't
attract a significant share of external contributions to move to an
externally-governed project. I'm not sure we can use a strict deadline
-- some projects might still be single-vendor after a year but without
structurally resisting contributions. But being able to trigger a review
after some time, to assess if we have reasons to think it will improve
in the future (or not), sounds like a good idea.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Service VMs, CI, etc

2016-07-29 Thread Ben Swartzlander

On 07/29/2016 09:25 AM, John Spray wrote:

Hi folks,

We're starting to look at providing NFS on top of CephFS, using NFS
daemons running in Nova instances.  Looking ahead, we can see that
this is likely to run into similar issues in the openstack CI that the
generic driver did.

I got the impression that the main issue with testing the generic
driver was that bleeding edge master versions of Nova/Neutron/Cinder
were in use when running in CI, and other stuff had a habit of
breaking.  Is that roughly correct?


The breakages related to using HEAD were mostly due to Tempest. For 
Nova, Neutron, and Cinder, the problems are more related to running a 
cloud within a cloud and having severely limited resources. Things take 
a long time and sometimes don't happen at all.


If you need a service VM to do real work, you can't create many of them 
and you can expect creation of each one to be quite slow. For the 
generic driver, we attempt to overcome the slowness by parallelizing 
tests, and sharing the VM resources between test groups, but that 
creates its own set of concurrency issues.


Our current approach to these problems is to not use the generic driver 
for most things, and to limit the tests we do run on the generic driver 
to only what's needed to ensure that driver isn't broken. Also, there is 
an effort to shrink the "service image" used by the generic driver so 
it's less resource hungry. Hopefully with those changes we can avoid the 
resource sharing in tempest while still keeping test run times 
within reason.



Assuming versions are the main issue, we're going to need to look at
solutions to that, which could mean either doing some careful pinning
of the versions of Nova/Neutron used by Manila CI in general, or
creating a separate CI setup for CephFS that had that version pinning.
My preference would be to see this done Manila wide, so that the
generic driver could benefit as well.


I don't think pinning versions of the other projects would help much, 
for the reasons I outlined above.


-Ben


Thoughts?

John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Logo Poll

2016-07-19 Thread Ben Swartzlander

https://www.surveymonkey.com/r/T9DW7G8

-Ben Swartzlander

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila][stable] liberty periodic bitrot jobs have been failing more than a week

2016-07-04 Thread Ben Swartzlander

On 07/03/2016 09:19 AM, Matt Riedemann wrote:

On 7/1/2016 8:18 PM, Ravi, Goutham wrote:

Thanks Matt.

https://review.openstack.org/#/c/334220 adds the upper constraints.

--
Goutham


On 7/1/16, 5:08 PM, "Matt Riedemann"  wrote:

The manila periodic stable/liberty jobs have been failing for at least a
week.

It looks like manila isn't using upper-constraints when running unit
tests, not even on stable/mitaka or master. So in liberty it's pulling
in uncapped oslo.utils even though the upper constraint for oslo.utils
in liberty is 3.2.

Who from the manila team is going to be working on fixing this, either
via getting upper-constraints in place in the tox.ini for manila (on all
supported branches) or performing some kind of workaround in the code?
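
For reference, the conventional fix is the install_command pattern other
projects already carry in tox.ini, roughly as below (the exact constraints
URL for each branch is worth double-checking):

    [testenv]
    install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/liberty} {opts} {packages}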



Thanks.

I noticed that there is no Tempest / devstack job run against the
stable/liberty change - why is there no integration testing of Manila in
stable/liberty outside of 3rd party CI (which is not voting)?


Matt, this is why: https://review.openstack.org/#/c/286497/

-Ben







Re: [openstack-dev] [manila]: questions on update-access() changes

2016-06-17 Thread Ben Swartzlander
Ramana, I think your questions got answered in a channel discussion last 
week, but I just wanted to double check that you weren't still expecting 
any answers here. If you were, please reply and we'll keep this thread going.



On June 2, 2016 9:30:39 AM Ramana Raja  wrote:


Hi,

There are a few changes that seem to be lined up for Newton to make manila's
share access control, update_access(), workflow better [1] --
reduce races in DB updates, avoid non-atomic state transitions, and
possibly enable the workflow to fit in an HA active-active manila
configuration (if not already possible).

The proposed changes ...

a) Switch back to per rule access state (from per share access state) to
   avoid non-atomic state transition.

   Understood problem, but no spec or BP yet.


b) Use Tooz [2] (with Zookeeper?) for distributed lock management [3]
   in the access control workflow.

   Still under investigation; for now it fits the share replication workflow 
[4]. (A minimal Tooz usage sketch follows this list.)


c) Allow drivers to update DB models in a restricted manner (only certain
   fields can be updated by a driver API).

   This topic is being actively discussed in the community, and there should be
   a consensus soon on figuring out the right approach, following which there
   might be a BP/spec targeted for Newton.
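
A minimal sketch of what the Tooz locking in b) could look like
(ZooKeeper shown only because it's the backend mentioned above; the URL,
member id, and lock name are placeholders):

    from tooz import coordination

    coord = coordination.get_coordinator('zookeeper://127.0.0.1:2181',
                                         b'manila-share-host1')
    coord.start()
    # Serialize the access-rule update for one share across services.
    with coord.get_lock(('share-access-%s' % share_id).encode()):
        pass  # serialized update_access work goes here
    coord.stop()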


Besides these changes, there's an update_access() change that I'd like to revive
(started in Mitaka): storing access keys (auth secrets) generated by a storage
backend when providing share access, i.e. during update_access(), in the
``share_access_map`` table [5]. This change, as you might have figured, is a
smaller and simpler change than the rest, but seems to depend on the
approaches that might be adopted by a) and c).

For now, I'm thinking of allowing a driver's update_access() to return a
dictionary of {access_id: access_key, ...} to the (ShareManager) access_helper's
update_access(), which would then update the DB iteratively with one access_key
per access_id. Would this approach be valid with changes a) and c) in
Newton? Change a) would make the driver report access status per rule via
the access_helper, during which an 'access_key' can also be returned;
change c) might allow the driver to directly update the `access_key` in the
DB.
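
Concretely, the shape I have in mind is roughly this (sketch only; the
method signature follows the Mitaka driver interface, and the _backend_*
helpers plus the 'access_id' key are illustrative, not real code):

    def update_access(self, context, share, access_rules, add_rules,
                      delete_rules, share_server=None):
        # Grant each new rule on the backend and collect the auth secret
        # it hands back. _backend_allow/_backend_deny are hypothetical.
        access_keys = {}
        for rule in add_rules:
            # 'access_id' follows the spec's wording; the real rule dict
            # may use a different key (e.g. 'id').
            access_keys[rule['access_id']] = self._backend_allow(share, rule)
        for rule in delete_rules:
            self._backend_deny(share, rule)
        # The manager's access helper would then persist one access_key
        # per access_id in the share_access_map table.
        return access_keys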

For now, should I proceed with implementing the approach currently outlined
in my spec [5], have the driver's update_access() return a dictionary of
{access_id: access_key, ...} or wait for approaches for changes a) and c)
to be outlined better?

Thanks,
Ramana

[1] https://etherpad.openstack.org/p/newton-manila-update-access

[2] 
https://blueprints.launchpad.net/openstack/?searchtext=distributed-locking-with-tooz


[3] https://review.openstack.org/#/c/209661/38/specs/chronicles-of-a-dlm.rst

[4] https://review.openstack.org/#/c/318336/

[5] https://review.openstack.org/#/c/322971/
http://lists.openstack.org/pipermail/openstack-dev/2015-October/077602.html



[openstack-dev] [Manila] Midcycle call for topics

2016-06-14 Thread Ben Swartzlander

Our midcycle meetup is 2 weeks away! Please propose topics on the etherpad:

https://etherpad.openstack.org/p/newton-manila-midcycle

Depending on how much material we need to cover I'll decide if we need 
the third day or not.


-Ben



[openstack-dev] [Manila] v1 API and nova-net support removal in newton

2016-06-02 Thread Ben Swartzlander
We have made the decision to remove the v1 API from Manila in Newton (it 
was deprecated in Mitaka). Only v2.0+ will be supported. For those that 
don't know, v2.0 is exactly the same as v1 but it has microversion 
support. You need a client library from Liberty or Mitaka to get 
microversion support, but scripts should work exactly the same, and 
software that imports the library should work fine with the new library.
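
For anyone importing the library, selecting the v2 API looks roughly like
this (a sketch: 'sess' is assumed to be an existing keystoneauth1 session,
and the exact constructor kwargs vary by client release):

    from manilaclient import client

    manila = client.Client('2', session=sess)  # '2' selects the v2.0+ API
    manila.shares.list()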


We also made the decision to drop the nova-net plugin from Manila 
because nova-net has been deprecated since before we added it. This 
won't affect anyone unless they're using one of the few drivers that 
support share servers (not including the Generic driver) AND they're 
still using nova-net instead of neutron. The recommended workaround for 
those users is to switch to neutron.


-Ben Swartzlander




[openstack-dev] [Manila] Newton deadlines (reminder)

2016-06-02 Thread Ben Swartzlander
At the start of the Newton release we agreed to keep the same deadlines 
we had for Mitaka. I thought everyone knew what those were but there is 
some confusion so I'll remind everyone.


As always, we will enforce a Feature Freeze on the N-3 milestone date: 
September 1st [1]. Only bugfixes and documentation changes are allowed 
to merge after that date without an explicit feature freeze exception (FFE).


Also like before, we will enforce a feature proposal freeze 2 weeks 
before the feature freeze, on Aug 18th. New feature patches must be 
submitted to gerrit, with complete test coverage, and passing Jenkins by 
this date.


New drivers must be submitted 3 weeks before the feature freeze, by Aug 
11th, with the same requirements as above, and working CI. Additionally 
driver refactor patches (any patch that significantly reworks existing 
driver code) will be subject to the same deadline, because these patches 
tend to take as much resources to review as a whole new driver.


"Large" new features must be submitted to gerrit 6 weeks before the 
feature freeze, by Jul 21 (a week after the N-2 milestone [1]). The 
definition of a "large" feature is the same as we defined it for Mitaka [2].


The manila specs repo is a new thing for Newton and for now there are no 
deadlines for specs.


Also I want to remind everyone that changes to our "library" projects, 
including python-manilaclient and manila-ui, have the same deadlines. We 
can't sneak features in the libraries after the core manila patches 
land, they need to go in together.


-Ben Swartzlander


[1] http://releases.openstack.org/newton/schedule.html
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/079901.html




[openstack-dev] [Manila] Midcycle meetup

2016-06-02 Thread Ben Swartzlander
As of now we're planning to hold our midcycle meetup virtually on 
June 28, 29, and possibly June 30 (depending on agenda).


If any core reviewers or significant contributors can't attend those 
days please let me know.


If anyone wants to travel to RTP to join those of us based here, I 
also need to know so I can get a space reserved. Given the geographic 
spread of the team, I'm prioritizing remote participation though.


-Ben Swartzlander




Re: [openstack-dev] [tripleo][manila] Moving forward with landing manila in tripleo

2016-06-01 Thread Ben Swartzlander
I think it makes sense to merge the tripleo heat templates without m-dat 
support, as including m-dat will require a bunch of dependent patches 
and slow everything down. The lack of the m-dat service won't cause any 
issues other than that the experimental share-migration APIs won't work. 
We should of course start work on all of those other patches 
immediately, and add a follow-on patch to add m-dat support to tripleo.


One other thing -- the design of the m-dat service currently doesn't 
lend itself to HA or scale-out configurations, but the whole point of 
creating this separate service is to provide for HA and scale out of 
data-oriented operations, so I expect that by the end of Newton any 
issues related to active/active m-dat will have been addressed.


-Ben


On 05/30/2016 04:48 AM, Marios Andreou wrote:

On 27/05/16 22:46, Rodrigo Barbieri wrote:

Hello Marios,



Hi Rodrigo, thanks very much for taking the time, indeed that clarifies
quite a lot:


The Data Service is needed for Share Migration feature in manila since the
Mitaka release.

There has not been any work done yet towards adding it to puppet. Since its
introduction in Mitaka, it has been made compatible only with devstack so
far.


I see so at least that confirms I didn't just miss it at puppet-manila
... so that is a prerequisite really to us being able to configure and
enable manila-data in tripleo and someone will have to look at doing that.



I have not invested time thinking about how it should fit in an HA
environment at this stage; this is a service that currently sports a single
instance, but we have plans to make it more scalable in the future.
What I have briefly thought about is the idea where there would be a
scheduler that decides whether to send the data job to m-dat1, m-dat2 or
m-dat3 and so on, based on information that indicates how busy each Data
Service instance is.

For this moment, active/passive makes sense in the context that manila
expects only a single instance of m-dat. But active/active would allow the
service to be load balanced through HAProxy and could partially accomplish
what we have plans to achieve in the future.


OK thanks, so we can proceed with a/p for manila-share and manila-data
(one thought below) for now and revisit once you've worked out the
details there.



I hope I have addressed your question. The absence of m-dat means the
Share Migration feature will not work.



thanks for the clarification. So then I wonder if this is a feature we
can live w/out for now, especially if this is an obstacle to landing
manila-anything in tripleo. I mean, if we can live w/out the Share
Migration feature until we get proper support for configuring
manila-data landed, then let's land w/out manila-data and just be really
clear about what is going on, manila-data pending etc.

thanks again, marios





Regards,

On Fri, May 27, 2016 at 10:10 AM, Marios Andreou  wrote:


Hi all, I explicitly cc'd a few folks I thought might be interested for
visibility, sorry for spam if you're not. This email is about getting
manila landed into tripleo asap, and the current obstacles to that (at
least those visible to me):

The current review [1] isn't going to land as is, regardless of the
outcome/discussion of any of the following points because all the
services are going to "composable controller services". How do people
feel about me merging my review at [2] into its parent review (which is
the current manila review at [1]). My review just takes what is in [1]
(caveats below) and makes it 'composable', and includes a dependency on
[3] which is the puppet-tripleo side for the 'composable manila'.

   ---> Proposal merge the 'composable manila' tripleo-heat-templates
review @ [2] into the parent review @ [1]. The review at [2] will be
abandoned. We will continue to try and land [1] in its new 'composable
manila' form.

WRT the 'caveats' mentioned above and why I haven't just ported
what is in the current manila review @ [1] into the composable one @
[2]... there are two main things I've changed, both of which on
guidance/discussion on the reviews.

The first is addition of manila-data (wasn't in the original/current
review at [1]). The second a change to the pacemaker constraints, which
I've corrected to make manila-data and manila-share pacemaker a/p but
everything else systemd managed, based on ongoing discussion at [3].

So IMO to move forward I need clarity on both those points. For
manila-data my concern is whether it is already available where we need it. I
looked at puppet-manila [4] and couldn't quickly find much (any) mention
of manila-data. We need it there if we are to configure anything for it
via puppet. The other unknown/concern here is whether manila-data gets
delivered with the manila package (I recall manila-share possibly, at
least one of them, had a stand-alone package); otherwise we'll need to
add it to the image. But mainly my question here is, can we live without
it? I mean can we deploy 

Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-25 Thread Ben Swartzlander

On 05/25/2016 06:48 AM, Sean Dague wrote:

I've been watching the threads, trying to digest, and find the ways
this is getting sliced don't quite slice the way I've been thinking
about it (which might just mean I've been thinking about it wrong).
However, here is my current set of thoughts on things.

1. Should OpenStack be open to more languages?

I've long thought the answer should be yes. Especially if it means we
end up with keystonemiddleware, keystoneauth, oslo.config in other
languages that let us share elements of infrastructure pretty
seamlessly. The OpenStack model of building services that register in a
service catalog and use common tokens for permissions through a bunch of
services is quite valuable. There are definitely people that have Java
applications that fit into the OpenStack model, but have no place to
collaborate on them.

(Note: nothing about the current proposal goes anywhere near this)

2. Is Go a "good" language to add to the community?

Here I am far more mixed. In programming language time, Go is super new.
It is roughly the same age as the OpenStack project. The idea that Go and
Python programmers overlap seems to be because some shops that used
to do a lot in Python now do some things in Go.

But when compared to other languages in our bag, Javascript, Bash. These
are things that go back 2 decades. Unless you have avoided Linux or the
Web successfully for 2 decades, you've done these in some form. Maybe
not being an expert, but there is vestigial bits of knowledge there. So
they *are* different. In the same way that C or Java are different, for
having age. The likelihood of finding community members that know Python
+ one of these is actually *way* higher than Python + Go, just based on
duration of existence. In a decade that probably won't be true.


Thank you for bringing up this point. My major concern boils down to the 
likelihood that Go will never be well understood by more than a small 
subset of the community. (When I say "well understood" I mean years of 
experience with thousands of lines of code -- not "I can write hello 
world").


You expect this problem to get better in the future -- I expect this 
problem to get worse. Not all programming languages survive. Google for 
"dead programming languages" some time and you'll find many examples. 
The problem is that it's never obvious when a language is young that 
something more popular will come along and kill it.


I don't want to imply that Golang is especially likely to die any time 
soon. But every time you add a new language to a community, you increase 
the *risk* that one of the programming languages used by the community 
will eventually fall out of popularity, and it will become hard or 
impossible to find people to maintain parts of the code.


I tend to take a long view of software lifecycles, having witnessed the 
death of projects due to bad decisions before. Does anyone expect 
OpenStack to still be around in 10 years? 20 years? What is the 
likelihood that Python and Golang are both still popular languages 
then? I guarantee [1] that it's lower than the likelihood that only 
Python is still a popular language.


Adding a new language adds risk that new contributors won't understand 
some parts of the code. Period. It doesn't matter what the language is.


My proposed solution is to draw the community line at the language 
barrier. People in this community are expected to understand 
Python. Anyone can start other communities, and they can overlap with 
ours, but let's make it clear that they're not the same.


-Ben Swartzlander

[1] For all X, Y in (0, 1): X * Y < X


3. Are there performance problems where python really can't get there?

This seems like a pretty clear "yes". It shouldn't be surprising. Python
has no jit (yes there is pypy, but its compat story isn't here). There
is a reason a bunch of python libs have native components for speed -
numpy, lxml, cryptography, even yaml throws a warning that you should
really compile the native version for performance when there is full
python fallback.

The Swift team did a very good job demonstrating where these issues are
with trying to get raw disk IO. It was a great analysis, and kudos to
that team for looking at so many angles here.

4. Do we want to be in the business of building data plane services that
will all run into python limitations, and will all need to be rewritten
in another language?

This is a slightly different spin on the question Thierry is asking.

Control Plane services are very unlikely to ever hit a scaling concern
where rewriting the service in another language is needed for
performance issues. These are orchestrators, and the time spent in them
is vastly less than the operations they trigger (start a vm, configure a
switch, boot a database server). There was a whole lot of talk in the
threads of "well that's not innovative, no one wil

Re: [openstack-dev] [tc] supporting Go

2016-05-09 Thread Ben Swartzlander

On 05/09/2016 07:43 PM, Rayson Ho wrote:

On Mon, May 9, 2016 at 2:35 PM, Ben Swartzlander <b...@swartzlander.org
<mailto:b...@swartzlander.org>> wrote:
 >>
 >> Perhaps for mature languages. But go is still finding its way, and that
 >> usually involves rapid changes that are needed faster than the
multi-year
 >> cycle Linux distributions offer.
 >
 >
 > This statement right here would be the nail in the coffin of this
idea if I were deciding. As a community we should not be building
software based on unstable platforms and languages.


Go is a production language used by Google, Dropbox, many many web
startups, and in fact Fortune 500 companies.

Using a package manager won't buy us anything, and like Clint raised,
the Linux distros are way too slow in picking up new Go releases. In
fact, the standard way of installing Rust also does not use a package
manager:

https://www.rust-lang.org/downloads.html


I never tried to compare Go to Rust. Rust also strikes me as a rather 
immature language that we shouldn't use. My point though is not to 
fixate on the language and start a religious war. I have bad things to 
say about every programming language in existence. My point is that 
immature unstable languages are a poor choice to pair with a stable 
mature language like Python (25 years old!) This is primarily due to 
cultural fit.


Only a small fraction of the human beings on Earth can write code at 
all. Of those, some fraction knows Python well enough to write and 
maintain complex software written in Python. Some other fraction knows 
$COOL_LANGUAGE well enough to do the same in that language. The 
intersection of these 2 groups is inevitably vanishingly small.




 > I have nothing against golang in particular but I strongly believe
that mixing 2 languages within a project is always the wrong decision

It would be nice if we only needed to write code in one language. But in
the real world the "nicer" & "easier" languages like Python & Perl are
also the slower ones. I used to work for an investment bank, and our
system was developed in Perl, with performance critical part rewritten
in C/C++, so there really is nothing wrong with mixing languages. (But
if you ask me, I would strongly prefer Go than C++.)


If you think Perl is "nice" or "easy" you better get your head checked.

Also, C is not really a programming language -- it's more like assembly 
code with a portability layer. C++ is not worth mentioning as a language 
to write anything in given the better alternatives.


My argument boils down to: don't force OpenStack to accept code written 
in $COOL_LANGUAGE because we don't all want to have to learn that 
language in addition to Python. Sure some people will, and some probably 
already have learned it, but in a functioning development team, EVERYONE 
needs to speak the same language or you get the I-can't-review-that-code 
or I-can't-fix-that-bug syndrome.


OpenStack succeeds spectacularly at being modular -- with literally 
dozens of small projects that work with each other and with other 
components in the ecosystem. Go start another project. Go use whatever 
language you want. If you don't want to use Python then don't call it 
OpenStack.


-Ben Swartzlander



Rayson

==
Open Grid Scheduler - The Official Open Source Grid Engine
http://gridscheduler.sourceforge.net/
http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html




 >
 > If you want to write code in a language that's not Python, go start
another project. Don't call it OpenStack. If it ends up being a better
implementation than the reference OpenStack Swift implementation, it
will win anyways and perhaps Swift will start to look more like the rest
of the projects in OpenStack with a standardized API and multiple
plugable implementations.
 >
 > -Ben Swartzlander
 >
 >
 >> Also worth noting, is that go is not a "language runtime" but a compiler
 >> (that happens to statically link in a runtime to the binaries it
 >> produces...).
 >>
 >> The point here though, is that the versions of Python that OpenStack
 >> has traditionally supported have been directly tied to what the Linux
 >> distributions carry in their repositories (case in point, Python 2.6
 >> was dropped from most things as soon as RHEL7 was available with Python
 >> 2.7). With Go, there might need to be similar restrictions.

Re: [openstack-dev] [tc] supporting Go

2016-05-09 Thread Ben Swartzlander

On 05/09/2016 02:15 PM, Clint Byrum wrote:

Excerpts from Pete Zaitcev's message of 2016-05-09 08:52:16 -0700:

On Mon, 9 May 2016 09:06:02 -0400
Rayson Ho <raysonlo...@gmail.com> wrote:


Since the Go toolchain is pretty self-contained, most people just follow
the official instructions to get it installed... by a one-step:

# tar -C /usr/local -xzf go$VERSION.$OS-$ARCH.tar.gz


I'm pretty certain humanity has moved on from this sort of thing.
Nowadays "most people" use packaged language runtimes that come with
the Linux they're running.



Perhaps for mature languages. But go is still finding its way, and that
usually involves rapid changes that are needed faster than the multi-year
cycle Linux distributions offer.


This statement right here would be the nail in the coffin of this idea 
if I were deciding. As a community we should not be building software 
based on unstable platforms and languages.


I have nothing against golang in particular but I strongly believe that 
mixing 2 languages within a project is always the wrong decision, and 
doubly so if one of those languages is a niche language. The reason is 
simple: it's hard enough to find programmers who are competent in one 
language -- finding programmers who know both languages well will be 
nearly impossible. You'll end up with core reviewers who can't review 
half of the code and developers who can only fix bugs in half the code.


If you want to write code in a language that's not Python, go start 
another project. Don't call it OpenStack. If it ends up being a better 
implementation than the reference OpenStack Swift implementation, it 
will win anyways and perhaps Swift will start to look more like the rest 
of the projects in OpenStack with a standardized API and multiple 
plugable implementations.


-Ben Swartzlander


Also worth noting is that go is not a "language runtime" but a compiler
(that happens to statically link in a runtime to the binaries it
produces...).

The point here though, is that the versions of Python that OpenStack
has traditionally supported have been directly tied to what the Linux
distributions carry in their repositories (case in point, Python 2.6
was dropped from most things as soon as RHEL7 was available with Python
2.7). With Go, there might need to be similar restrictions.



Re: [openstack-dev] [tc] License for specs repo

2016-05-05 Thread Ben Swartzlander

On 05/05/2016 04:01 PM, Davanum Srinivas wrote:

Ben,

Have you seen this yet?

http://lists.openstack.org/pipermail/legal-discuss/2014-March/000201.html
https://wiki.openstack.org/wiki/Governance/Foundation/15Oct2012BoardMinutes#Approval_of_the_CCBY_License_for_Documentation.


No I hadn't seen this. It's helpful to know that there is official 
support from the board for using the CCBY license but it's unclear what 
that's supposed to look like, since I can't find a single project that's 
converted their whole specs repo to the new license.


My confusion comes from how to handle the existing Apache 2.0 stuff in 
the cookie cutter. I can't just drop the Apache 2.0 license... The only 
obvious path forward is to create a gross mess like the existing specs 
repos have where there's a mix of the 2 licenses and it's not clear 
which license applies to what.


-Ben



Thanks,
Dims

On Thu, May 5, 2016 at 3:44 PM, Ben Swartzlander <b...@swartzlander.org> wrote:

On 05/05/2016 03:24 PM, Jeremy Stanley wrote:


On 2016-05-05 12:03:38 -0400 (-0400), Ben Swartzlander wrote:


It appears that many of the existing specs repos contain a
confusing mixture of Apache 2.0 licensed code and Creative Commons
licensed docs.


[...]

Recollection is that the prose was intended to be under CC Attrib.
in line with official documentation, while any sample source code
was intended to be under ASL2 so that it could be directly used in
similarly-licensed software. We likely do a terrible job of
explaining that though, and maybe dual-licensing everything in specs
repos makes more sense? This might also be a better thread to have
on the legal-discuss@ ML.



We may ultimately need to consult legal experts, but I was hoping that we
already had a clear guideline for specs licensing and it was merely being
applied inconsistently. I figured the TC would know if a decision had been
made about this.

I also have a feeling that dual-licensing would be the least-likely-to-fail
option, however I haven't seen examples of how to properly dual-license a
repo in OpenStack so I wasn't going to jump to that option first.

-Ben Swartzlander




Re: [openstack-dev] [tc] License for specs repo

2016-05-05 Thread Ben Swartzlander

On 05/05/2016 03:24 PM, Jeremy Stanley wrote:

On 2016-05-05 12:03:38 -0400 (-0400), Ben Swartzlander wrote:

It appears that many of the existing specs repos contain a
confusing mixture of Apache 2.0 licensed code and Creative Commons
licensed docs.

[...]

Recollection is that the prose was intended to be under CC Attrib.
in line with official documentation, while any sample source code
was intended to be under ASL2 so that it could be directly used in
similarly-licensed software. We likely do a terrible job of
explaining that though, and maybe dual-licensing everything in specs
repos makes more sense? This might also be a better thread to have
on the legal-discuss@ ML.


We may ultimately need to consult legal experts, but I was hoping that 
we already had a clear guideline for specs licensing and it was merely 
being applied inconsistently. I figured the TC would know if a decision 
had been made about this.


I also have a feeling that dual-licensing would be the 
least-likely-to-fail option, however I haven't seen examples of how to 
properly dual-license a repo in OpenStack so I wasn't going to jump to 
that option first.


-Ben Swartzlander




[openstack-dev] [tc] License for specs repo

2016-05-05 Thread Ben Swartzlander
It appears that many of the existing specs repos contain a confusing 
mixture of Apache 2.0 licensed code and Creative Commons licensed docs.


The official cookie-cutter for creating new specs repos [1] appears to 
also contain a mixture of the two licenses, although it's even more 
confusing because it seems an attempt was made to change the license 
from Apache to Creative Commons [2] yet there are still several [3] 
places [4] where Apache is clearly specified.


I personally have no opinion on what license should be used, but I'd 
like to clearly specify the license for the newly-created manila-specs 
repo, and I'm happy with whatever the TC is currently recommending.


-Ben Swartzlander

[1] https://github.com/openstack-dev/specs-cookiecutter
[2] 
https://github.com/openstack-dev/specs-cookiecutter/commit/8738f58981da3ad9c0f27fb545d61747213482a4#diff-053c5863d526dd5103cd9b0069074596
[3] 
https://github.com/openstack-dev/specs-cookiecutter/blob/master/%7B%7Bcookiecutter.repo_name%7D%7D/setup.cfg#L12
[4] 
https://github.com/openstack-dev/specs-cookiecutter/blob/master/README.rst




[openstack-dev] [Manila] Design summit etherpads

2016-04-28 Thread Ben Swartzlander
Here at the design summit I've been asked a few times where the 
etherpads are. Here is a link to the top level page for design summit 
etherpads:


https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads#Manila

-Ben



Re: [openstack-dev] [OpenStack-Dev][Manila] BP https://blueprints.launchpad.net/manila/+spec/access-groups

2016-03-21 Thread Ben Swartzlander

On 03/09/2016 03:51 AM, nidhi.h...@wipro.com wrote:

Hi All,

This is just a gentle reminder to the previous mail ..

PFA is revised doc.

Same is pasted here also.

https://etherpad.openstack.org/p/access_group_nidhimittalhada

Kindly share your thoughts on this..


Now that we've finally wrapped up RC1 I hope people take a look at this 
proposal. Nidhi, you should propose this topic for the design summit, 
especially if you will be able to participate in person. If not, we can 
still discuss it in Austin.


-Ben



Thanks

Nidhi

*From:* Nidhi Mittal Hada (Product Engineering Service)
*Sent:* Friday, February 26, 2016 3:22 PM
*To:* 'OpenStack Development Mailing List (not for usage questions)'
<openstack-dev@lists.openstack.org>
*Cc:* 'bswa...@netapp.com' <bswa...@netapp.com>; 'Ben Swartzlander'
<b...@swartzlander.org>
*Subject:* [OpenStack-Dev][Manila] BP
https://blueprints.launchpad.net/manila/+spec/access-groups

Hi Manila Team,

I am working on

https://blueprints.launchpad.net/manila/+spec/access-groups

For this I have created initial document as attached with the mail.

It contains the DB, CLI, and REST API related changes.

Could you please have a look and share your opinion.

Kindly let me know if there is some understanding gap or something I
have missed to document, or share your comments in general to make it better.

*Thank you.*

*Nidhi Mittal Hada*

*Architect | PES / COE*– *Kolkata India*

*Wipro Limited*

*M*+91 74 3910 9883 | *O* +91 33 3095 4767 | *VOIP* +91 33 3095 4767






[openstack-dev] [Manila] Newton design summit topics

2016-03-21 Thread Ben Swartzlander

I've started an etherpad to collect ideas for summit topics:

https://etherpad.openstack.org/p/manila-newton-summit-topics

Please add your suggestions to the top section and we'll get them 
categorized and scheduled in the bottom section in time for Austin.


-Ben Swartzlander



Re: [openstack-dev] [manila] FFE for update_access implementation for Ganesha lib and GlusterFS

2016-03-19 Thread Ben Swartzlander

On 03/16/2016 10:02 AM, Csaba Henk wrote:

Hi,

I'm asking for a Feature Freeze Exception for the update_access code for
Ganesha library and the two GlusterFS drivers (glusterfs, glusterfs-native).

This benefits the whole project in terms of getting closer to the point
when the backward compatibility hooks for the old driver access API can
be retired. There is no change in functionality, except for one thing:
the new code takes advantage of the new semantic feature of
update_access, that is, the recovery situation is explicit; thus with
Ganesha, where an actual reset/restore operation can be performed, we
can trigger this mechanism only if that's really what's being asked for
(contrary to the earlier practice where reset/restore was done on each
m-shr startup).

The impact of the change is limited to the GlusterFS drivers (Ganesha
library currently is used only for these drivers).

The following changes are proposed:

https://review.openstack.org/282602 ("ganesha: implement update_access")
https://review.openstack.org/291151 ("glusterfs: Implement
update_access() method")


Sorry for the slow response. I have discussed this with some other cores 
and while I have mixed feelings, considering Doug's input I think I must 
deny this.


I told a few different driver maintainers that I would grant FFEs for 
update_access() changes due to how late the feature went into Mitaka, 
but when I made those promises I was imagining the patches would be 
available shortly after feature freeze. These patches come when we're 
down to just a few bugs before we cut the RC, and risk is the biggest 
thing on my mind.


A major factor in this decision is the fact that glusterfs is not the 
only driver lacking an implementation for update_access(), so we 
couldn't remove the fallback path even if we did merge these changes.


Csaba, thank you for getting these changes done and we will merge them 
early in Newton. We need to decide as a community how to get the 
remaining drivers updated in Newton so we can remove the fallback path.


-Ben Swartzlander



Thanks,
Csaba





[openstack-dev] [Manila] Removal of LXD driver

2016-03-19 Thread Ben Swartzlander
After conferring with aovchinnikov about the remaining bugs against the 
LXD driver, I have decided that it's better for Manila if we remove the 
driver from the tree before the Mitaka release.


The goal of adding the driver was to create a new, faster, first party 
Manila driver which had proper support for share servers. We need this 
because the generic driver is both slow and somewhat unreliable.


While the LXD driver worked correctly, it never reached the point where 
it was completely stable with high concurrency, and thus didn't meet our 
need for a new driver to use in gate tests.


The following are specific reasons for removing it:

1) The LXD project appears to have an unstable API. We cannot find a 
client version that works with all available releases of LXD and we 
can't find a stable version of LXD to recommend to users. Evidence shows 
that the Manila LXD driver is compatible with LXD 2.0RC2 but not 2.0RC3. 
We can't recommend users run RC2, and if we change the driver to be 
compatible with RC3 we can't guarantee something else won't break with 
some future release. Until the LXD project releases something stable, it 
doesn't make sense to release a Manila driver based upon it.


2) RedHat brought up the issue that LXD is not shipped on all Linux 
distros, and in fact the pylxd library that the driver depends on is not 
even packaged on some distros. As a community we avoid libraries that 
are not widely available, so pylxd was a bad fit here. We implemented 
some workarounds that made the problem less bad for RedHat users, but 
the fact was that the LXD driver was never going to be a good solution 
for RedHat users, unless RedHat was convinced to distribute LXD (which 
is a topic I officially have no opinion about). For this reason we were 
likely to consider replacing the LXD driver with a different 
container-based driver in Newton, and that would have created upgrade 
problems for anyone who actually used the LXD driver in Mitaka. It's 
better to remove it before release to avoid the upgrade issues.



We still retain the option to bring the LXD driver back in Newton, if 
solutions to the above problems can be found -- the code and all 
unmerged bugfixes will be preserved so it can be resubmitted if we 
desire. Alternatively we may implement another container-based driver 
that doesn't suffer from the above problems. If we do that it's likely 
that a majority of the LXD driver code can be reused for that driver.


The problem which the LXD driver was created to solve has not gone away, 
so a solution is still needed in Newton. Personally I'm still very 
optimistic about using containers to solve the problem but we need to be 
more careful about selecting a more stable and widely-supported 
container platform.


-Ben Swartzlander



Re: [openstack-dev] [manila] FFE for Removing the File Tree on Delete When Using Nested Shares on 3PAR

2016-03-14 Thread Ben Swartzlander

On 03/11/2016 02:42 PM, O'Rourke, Alex Liam wrote:

Hi,

I would like to request a Feature Freeze Exception for Removing the File
Tree on Delete When Using Nested Shares with 3PAR:
https://review.openstack.org/#/c/290209/

Originally, this was filed as a blueprint and marked as a new feature
(https://blueprints.launchpad.net/manila/+spec/hpe3par-nested-deletion),
but it was reclassified as a bug fix
(https://bugs.launchpad.net/manila/+bug/1538800) soon after. When
deleting a nested share from 3PAR, the file tree is not deleted, making
it so the contents within the ‘deleted’ share can still be accessed.
This bug fix aims to solve the problem by mounting the parent share and
deleting the file tree of the share requested. The issue is isolated
only to the 3PAR driver.

This patch adds configuration options for the username, password, and
domain in order to mount a CIFS share. Config options are needed in
order to fix this issue because without valid credentials, we cannot
mount a CIFS share to delete the lingering files. Because of these
options we are adding, the fix is to be treated as a feature over a bug.
I am requesting the Feature Freeze Exception so we can get this fix for
the 3PAR driver into Mitaka.


After looking at the bug fix, I've concluded that allowing these new 
config options is the right thing to do, even though it's technically a 
new feature. There is no good alternative other than leaving the bug 
unfixed until Newton. Doug's concerns notwithstanding, I don't feel the 
risk added by this feature is any more than some of the other bug fixes 
we're allowing in.


This FFE is granted.

-Ben



Thank You,

Alex O’Rourke





Re: [openstack-dev] [manila] FFE for Removing the File Tree on Delete When Using Nested Shares on 3PAR

2016-03-11 Thread Ben Swartzlander

On 03/11/2016 02:42 PM, O'Rourke, Alex Liam wrote:

Hi,

I would like to request a Feature Freeze Exception for Removing the File
Tree on Delete When Using Nested Shares with 3PAR:
https://review.openstack.org/#/c/290209/

Originally, this was filed as a blueprint and marked as a new feature
(https://blueprints.launchpad.net/manila/+spec/hpe3par-nested-deletion),
but it was reclassified as a bug fix
(https://bugs.launchpad.net/manila/+bug/1538800) soon after. When
deleting a nested share from 3PAR, the file tree is not deleted, making
it so the contents within the ‘deleted’ share can still be accessed.
This bug fix aims to solve the problem by mounting the parent share and
deleting the file tree of the share requested. The issue is isolated
only to the 3PAR driver.

This patch adds configuration options for the username, password, and
domain in order to mount a CIFS share. Config options are needed in
order to fix this issue because without valid credentials, we cannot
mount a CIFS share to delete the lingering files. Because of these
options we are adding, the fix is to be treated as a feature over a bug.
I am requesting the Feature Freeze Exception so we can get this fix for
the 3PAR driver into Mitaka.


This is one way to handle it! Is anyone opposed? I've already found at 
least one other "bugfix" which adds a new config option. I'll be on the 
lookout for more of these over the next week.


-Ben



Thank You,

Alex O’Rourke





Re: [openstack-dev] [Manila] Concurrent execution of drivers

2016-03-06 Thread Ben Swartzlander

On 03/04/2016 08:15 AM, John Spray wrote:

On Fri, Mar 4, 2016 at 12:11 PM, Shinobu Kinjo  wrote:

What are you facing?


In this particular instance, I'm dealing with a case where we may add
some metadata in ceph that will get updated by the driver, and I need
to know how I'm going to be called.  I need to know whether e.g. I can
expect that ensure_share will only be called once at a time per share,
or whether it might be called multiple times in parallel, resulting in
a need for me to do more synchronisation at a lower level.

This is more complicated than locking, because where we update more
than one thing at a time we also have to deal with recovery (e.g.
manila crashed halfway through updating something in ceph and now I'm
recovering it), especially whether the places we do recovery will be
called concurrently or not.

My very favourite answer here would be a pointer to some
documentation, but I'm guessing much of this stuff is still at a "word of
mouth" stage.


Concurrency is the area where most of our problems are coming from. 
There was a time, I believe, when concurrency issues were largely taken 
care of, but that was before we forked Manila from Cinder and before 
Cinder forked from Nova. Over time, lack of test coverage has allowed 
race conditions to creep in, and architectural decisions have been made 
that failed to account for HA (highly available) deployments, where 
multiple services might be managing the very same backends. The Cinder 
team has been working on fixing these issues and we need to catch up.


As I start to turn my attention from wrapping up Mitaka to thinking 
about Newton, concurrency is the most urgent focus area I can see. Many 
of our gate stability problems are likely due to concurrency issues. 
This can be easily verified by changing the concurrency value to 1 when 
running tempest and noting that it runs flawlessly every time, yet when 
it's set to >1 we have occasional failures.
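
To illustrate one of the narrower tools available here: per-share work
within a single manila-share process can be serialized with
oslo.concurrency, a sketch below. It does nothing for the multi-service
HA case, which is where the real work lies.

    from oslo_concurrency import lockutils

    def ensure_share(self, context, share, share_server=None):
        # Serialize per-share work within this process; external=True
        # would extend it to other processes on the same host.
        with lockutils.lock('manila-ensure-%s' % share['id']):
            pass  # idempotent recovery logic goes here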


-Ben



John


On Fri, Mar 4, 2016 at 9:06 PM, John Spray  wrote:

Hi,

What expectations should driver authors have about multiple instances
of the driver being instantiated within different instances of
manila-share?

For example, should I assume that when one instance of a driver is
having ensure_share called during startup, another instance of the
driver might be going through the same process on the same share at
the same time?  Are there any rules at all?

Thanks,
John





--
Email:
shin...@linux.com
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource



Re: [openstack-dev] [Manila] nfs-over-vsock (was: Concurrent execution of drivers)

2016-03-06 Thread Ben Swartzlander

On 03/05/2016 05:34 AM, Shinobu Kinjo wrote:

Are we still going to think of nfs-over-vsock?
Never mind. It's just coming from my curiosity.


A lot of work outside Manila has to happen before we can actually 
deliver that feature in OpenStack. If you google about nfs over vsock 
you can learn about the current state of things. It's going to be a 
while before nfs over vsock is implemented, and it's going to be even longer 
before that support is shipping on platforms where people run OpenStack. 
I'm very optimistic, but we need to be patient.


-Ben



Cheers,
S

On Fri, Mar 4, 2016 at 11:08 PM, Valeriy Ponomaryov
 wrote:

Thanks - so if I understand you correctly, each share instance is
uniquely associated with a single instance of the driver at one time,
right?  So while I might have two concurrent calls to ensure_share,
they are guaranteed to be for different shares?


Yes.


Is this true for the whole driver interface?


Yes.



Two instances of the
driver will never both be asked to do operations on the same share at
the same time?



Yes.

Each instance of a driver will have its own unique list of shares to be
'ensure'd.

--
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com



Re: [openstack-dev] [Neutron][ml2][Manila] API to query segments used during port binding

2016-02-29 Thread Ben Swartzlander

On 02/29/2016 04:38 PM, Kevin Benton wrote:

You're correct. Right now there is no way via the HTTP API to find which
segments a port is bound to.
This is something we can certainly consider adding, but it will need an
RFE so it wouldn't land until Newton at the earliest.


I believe Newton is the target for this work. This is feature freeze 
week after all.



Have you considered writing an ML2 driver that just notifies Manila of
the port's segment info? All of this information is available to ML2
drivers in the PortContext object that is passed to them.
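
A rough sketch of that mechanism-driver approach (class and constant
names from neutron's ml2 driver_api of that era; the _notify_manila hook
is hypothetical):

    from neutron.plugins.ml2 import driver_api as api

    class ManilaSegmentNotifier(api.MechanismDriver):
        def initialize(self):
            pass

        def update_port_postcommit(self, context):
            # The bottom binding level carries the segment that a
            # ToR-connected backend cares about.
            segment = context.bottom_bound_segment
            if segment:
                self._notify_manila(context.current['id'],  # hypothetical
                                    segment[api.SEGMENTATION_ID])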


This seems gross and backwards. It makes sense as a short term hack but 
given that we have time to design this correctly I'd prefer to get this 
information in a more straightforward way.


-Ben Swartzlander



On Mon, Feb 29, 2016 at 6:48 AM, Ihar Hrachyshka <ihrac...@redhat.com
<mailto:ihrac...@redhat.com>> wrote:

Fixed neutron tag in the subject.

Marc <m...@koderer.com <mailto:m...@koderer.com>> wrote:

Hi Neutron team,

I am currently working on a feature for hierarchical port
binding support in Manila [1] [2]. Just to give some context: in the
current implementation Manila creates a neutron port but leaves it
unbound (state DOWN). Therefore Manila uses the port create only to
retrieve an IP address and segmentation ID (some drivers only support
VLAN here).

My idea is to change this behavior and do an actual port binding
action so that the configuration of VLANs isn't a manual job any
longer, and so that multi-segment and HPB are supported in the long run.

My current issue is: how can Manila retrieve the segment information
for a bound port? Manila is only interested in the last (bottom)
segmentation ID since I assume the storage is connected to a ToR switch.

Database-wise it's possible to query it using the
ml2_port_binding_levels table. But AFAIK there is no API to query this.
The only information that is exposed is the full set of segments of a
network, and this is not sufficient to identify which segments are
actually used for a port binding.

Regards
Marc
SAP SE

[1]:
https://wiki.openstack.org/wiki/Manila/design/manila-newton-hpb-support
[2]: https://review.openstack.org/#/c/277731/



[openstack-dev] [Manila] NFS and root squash

2016-02-29 Thread Ben Swartzlander
We haven't spent much time (as a community) discussing root squashing, 
but Rodrigo's migration work has made it clear that we need clearer 
definitions around NFS permissions, and root squashing in particular.


I hope it's obvious to everyone that an NFS share with root squash for 
ALL HOSTS is pretty useless because it's impossible to change ownership 
of files and to create different directories owned by different users. 
The best you can get with root squash turned on for all hosts is an NFS 
share with all files owned by a single user (presumably the "nobody" user).


Now there are use cases for shares where most clients have root squash 
turned on, as long as 1 host has root squash turned off. That 1 host 
would be the "NFS admin" host, where the admin in that case would just 
be a special user who was still a tenant from the Manila perspective. 
Unfortunately we don't have different "access levels" for root squash = 
on/off. This is something to address for Newton.
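
To put that in concrete NFS terms, the export we would want to be able to
express looks something like this (plain /etc/exports syntax; the
addresses are illustrative):

    # One "NFS admin" host keeps root access; all other clients are
    # squashed to nobody.
    /export/share-1234  10.0.0.5(rw,no_root_squash)  10.0.0.0/24(rw,root_squash)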


In the meantime, I hope that everyone agrees that the only sane option 
is for root squash to be disabled by default, and that we need a way to 
allow users to enable it optionally in the future.


If any drivers are currently turning root squash on, I would consider 
that a bug -- and it will prevent migration from working on your backend.


-Ben Swartzlander



Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-19 Thread Ben Swartzlander

On 02/19/2016 11:24 AM, Sean Dague wrote:

On 02/19/2016 11:15 AM, Ben Swartzlander wrote:

On 02/19/2016 10:57 AM, Sean Dague wrote:

On 02/18/2016 10:38 AM, D'Angelo, Scott wrote:

Cinder team is proposing to add support for API microversions [1]. It
came up at our mid-cycle that we should add a new /v3 endpoint [2].
Discussions on IRC have raised questions about this [3]

Please weigh in on the design decision to add a new /v3 endpoint for
Cinder for clients to use when they wish to have api-microversions.

PRO add new /v3 endpoint: A client should not ask for new-behaviour
against old /v2 endpoint, because that might hit an old
pre-microversion (i.e. Liberty) server, and that server might carry
on with old behaviour. The client would not know this without
checking, and so strange things happen silently.
It is possible for the client to check the response from the server, but
this requires an extra round trip.
It is possible to implement some type of caching of supported
(micro-)version, but not all clients will do this.
The basic argument is that continuing to use the /v2 endpoint either
requires an extra trip for each request (absent caching), meaning a
performance slow-down, or the possibility of unnoticed errors.

CON add new endpoint:
Downstream cost of changing endpoints is large. It took ~3 years to
move from /v1 -> /v2 and we will have to support the deprecated /v2
endpoint forever.
If we add microversions to the /v2 endpoint, old scripts will keep
working on /v2 unchanged.
We would assume that people who choose to use microversions will
check that the server supports it.


The concern as I understand it is that by extending the v2 API with
microversions the following failure scenario exists

If:

1) a client already is using the /v2 API
2) a client opts into using microversions on /v2
3) that client issues a request on a Cinder API v2 endpoint without
microversion support
4) that client fails to check if microversions are supported by a GET of /v2
or by checking the OpenStack-API-Version return header


I disagree that this (step 4) is a failure. Clients should not have to
do a check at all. The client should tell the server what it wants to do
(send the request and version) and the server should do exactly that if
and only if it can. Any requirement that the client check the server's
version is a massive violation of good API design and will cause either
performance problems or correctness problems or both.
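To illustrate that contract, here is a minimal sketch of such a client
using the requests library (the endpoint URL, token, and version value
are illustrative, not Cinder's actual ones):

    import requests

    # Opt into a microversion on the request itself and let the server
    # honor it or reject it -- no version-discovery round trip first.
    resp = requests.get(
        "http://cinder.example.com/v3/volumes",
        headers={
            "X-Auth-Token": "<token>",
            "OpenStack-API-Version": "volume 3.5",
        },
    )
    if resp.status_code == 406:
        # The server understood the header but cannot honor the version.
        raise RuntimeError("requested API version not supported")
    resp.raise_for_status()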


That is a fair concern. However, the Cinder API today doesn't do strict
input validation (in my understanding), which means it has never given
users that guarantee. Adding ?foo=bar to random resources, or extra
headers, is likely to just get silently dropped.

Strict input validation is a good thing to do, and would make a very
sensible initial microversion to get onto that path.

So this isn't really worse than the current situation. And the upside is
easier adoption.


I'm not okay with shipping a broken design just because adoption will be 
easier.


I agree the current situation could be better, but let's not let a bad 
status quo give us an excuse to build a bad future. I'm also in favor of 
input validation. Arguably it was harder to do in the past because we 
didn't have a clear versioning mechanism and we needed to give 
ourselves a way to make backwards-compatible changes to APIs. With a 
proper versioning scheme, input validation is very practical, and the 
only hurdle to getting it implemented is the amount of work.
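For what it's worth, a sketch of what microversion-gated strict validation
could look like, using the jsonschema library (the schema and the version
number here are hypothetical):

    import jsonschema

    CREATE_SCHEMA = {
        "type": "object",
        "properties": {
            "volume": {
                "type": "object",
                "properties": {
                    "size": {"type": "integer", "minimum": 1},
                    "name": {"type": "string"},
                },
                "required": ["size"],
                # Reject unknown keys instead of silently dropping them.
                "additionalProperties": False,
            },
        },
        "required": ["volume"],
        "additionalProperties": False,
    }

    def validate_create(body, req_version):
        # Only clients that opted into the (hypothetical) microversion
        # that introduced strict validation get the new behavior.
        if req_version >= (3, 1):
            jsonschema.validate(body, CREATE_SCHEMA)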


-Ben



-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-19 Thread Ben Swartzlander

On 02/19/2016 10:57 AM, Sean Dague wrote:

On 02/18/2016 10:38 AM, D'Angelo, Scott wrote:

Cinder team is proposing to add support for API microversions [1]. It came up 
at our mid-cycle that we should add a new /v3 endpoint [2]. Discussions on IRC 
have raised questions about this [3]

Please weigh in on the design decision to add a new /v3 endpoint for Cinder for 
clients to use when they wish to have api-microversions.

PRO add new /v3 endpoint: A client should not ask for new-behaviour against old 
/v2 endpoint, because that might hit an old pre-microversion (i.e. Liberty) 
server, and that server might carry on with old behaviour. The client would not 
know this without checking, and so strange things happen silently.
It is possible for a client to check the response from the server, but this 
requires an extra round trip.
It is possible to implement some type of caching of supported (micro-)version, 
but not all clients will do this.
The basic argument is that continuing to use the /v2 endpoint either requires 
an extra round trip for each request (absent caching), meaning a performance 
slow-down, or risks unnoticed errors.

CON add new endpoint:
Downstream cost of changing endpoints is large. It took ~3 years to move from /v1 
-> /v2 and we will have to support the deprecated /v2 endpoint forever.
If we add microversions with /v2 endpoint, old scripts will keep working on /v2 
and they will continue to work.
We would assume that people who choose to use microversions will check that the 
server supports it.


The concern as I understand it is that by extending the v2 API with
microversions the following failure scenario exists

If:

1) a client already is using the /v2 API
2) a client opts into using microversions on /v2
3) that client issues a request on a Cinder API v2 endpoint without
microversion support
4) that client fails to check whether microversions are supported, either by
a GET of /v2 or by checking the OpenStack-API-Version response header


I disagree that this (step 4) is a failure. Clients should not have to 
do a check at all. The client should tell the server what it wants to do 
(send the request and version) and the server should do exactly that if 
and only if it can. Any requirement that the client check the server's 
version is a massive violation of good API design and will cause either 
performance problems or correctness problems or both.


-Ben Swartzlander


5) that client issues a request against a resource on /v2 with
parameters that would create a radically different situation that would
be hard to figure out later.

And, only if all these things happen is there a concern.

So let's look at each one.

1) clients already using /v2 API

Last cycle when we tried to drop v1 from devstack we got a bunch of
explosions. In researching it, it was determined that very little
supported Cinder v2 -
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075760.html


At that point not even OpenStack Client itself, or Rally, supported it, and
definitely no libraries except python-cinderclient. So the entire space of #1
is python-cinderclient, or non-open REST clients.

2 & 4) are coupled. A good client that does 2 should do 4, and not only
depend on the 406 failure. cinderclient definitely should be made to do
that, which means we are left with only custom, non-open access code as a
concern. That's definitely still a concern, but again the problem space is
smaller.

3) can be mitigated if cinder backports patches to stable branches to
throw the 406 when sending the header (see the sketch after this list).
It's mitigation. Code already is out in the wild, however it does help.
And given other security fixes people will probably take these patches
into production.

5) Is there an example where this is expected, or is this theoretical?
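As a sketch of the mitigation in (3), assuming a webob-based WSGI API (as
the OpenStack services use) and the header name discussed above:

    import webob.exc

    def reject_microversion_requests(req):
        # Hedged sketch of a stable-branch backport: a pre-microversion
        # server that cannot honor the version header at least refuses it
        # loudly (406) instead of silently ignoring it.
        if "OpenStack-API-Version" in req.headers:
            raise webob.exc.HTTPNotAcceptable(
                explanation="API microversions are not supported")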


My very high concern is the fact that v2 adoption remains quite low, and
that a v3 will hurt that even further. Especially as it means a whole
other endpoint... "volumev2" was already a big problem in teaching a
bunch of software that it needs a new type, "volumev3" is something I
don't think anyone wants to see. I'd really like to see more of these
improvements get out there.

At the end of the day, this is the call of the Cinder team.

However, I've seen real 3rd party vendor software hitting the Nova API
that completely bypasses the service catalog, and hits /v2.1 directly
(it's not using microversions), which means that it can't work on a Kilo
cloud, for actually no reason, as /v2.1 and /v2 are semantically
equivalent. Vendors do weird things. They read the docs, say "oh this is
the latest API" and only implement to that. They don't need any new
features, don't realize the time delay in these things getting out
there. It's a big regret that we have multiple endpoints because it
means these kinds of applications basically break for no good reason.

So my recommendation is to extend out from the /v2 endpoint. This is
conceptually what you are 

[openstack-dev] [Manila] HDFS CI broken

2016-02-10 Thread Ben Swartzlander
The gate-manila-tempest-dsvm-hdfs Jenkins job has been failing for a 
long time now. It appears to be a config issue that's probably not hard 
to fix, but nobody is actively maintaining this code.


Since it's a waste of resources to continue running this broken job, I 
plan to disable it, and if nobody wants to volunteer to get it working 
again, we will need to take the HDFS driver out of the tree in Mitaka, 
since we can't ensure its quality without the CI job.


I really don't like removing drivers, especially fully open-source 
drivers, but we have too many other priorities this release to be 
distracted by fixing this kind of thing. If this driver is something 
people actively use and find valuable, then it should not be hard to 
find a volunteer to fix it.


-Ben Swartzlander

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Rodrigo Barbieri for core reviewer team

2016-02-04 Thread Ben Swartzlander

On 02/02/2016 12:30 PM, Ben Swartzlander wrote:

Rodrigo (ganso on IRC) joined the Manila project back in the Kilo
release and has been working on share migration (an important core
feature) for the last 2 releases. Since Tokyo he has dedicated himself
to reviews and community participation. I would like to nominate him to
join the Manila core reviewer team.


We announced at the weekly meeting today that Rodrigo has joined the 
core reviewer team. Welcome Rodrigo!


-Ben



-Ben Swartzlander
Manila PTL

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





[openstack-dev] [Manila] Nominate Rodrigo Barbieri for core reviewer team

2016-02-02 Thread Ben Swartzlander
Rodrigo (ganso on IRC) joined the Manila project back in the Kilo 
release and has been working on share migration (an important core 
feature) for the last 2 releases. Since Tokyo he has dedicated himself 
to reviews and community participation. I would like to nominate him to 
join the Manila core reviewer team.


-Ben Swartzlander
Manila PTL

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev][Manila] status=NONE when share is created

2016-01-07 Thread Ben Swartzlander

On 01/06/2016 02:53 AM, nidhi.h...@wipro.com wrote:

Hi All,

https://bugs.launchpad.net/manila/+bug/1526284

(snip)

Where we are intentionally giving create_share_instance=False, that
means in the db function


I think I agree it would make more sense to create the first instance at 
the same time we create the share. I'd have to look at the code to see 
if there's a good reason for doing it later on.
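If there turns out to be no good reason, the direction would be roughly
this (a sketch with hypothetical helper names, following the usual
SQLAlchemy session pattern):

    def share_create(context, values):
        # Create the share row and its first share instance in a single
        # transaction, so there is never a window where a share exists
        # with no instance (and therefore no status to report).
        session = get_session()
        with session.begin():
            share = _share_create_in_session(context, values, session)
            _share_instance_create_in_session(context, share['id'], session)
        return share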


As Shinobu suggests, though, this would probably better be handled on 
IRC or in a code review.


-Ben



share_create(), the share instance will not be created. And status is a
field in the share_instances table only.

Hence it will come as "None" only until the instance is created.

Is this an "intentional step" to show status as "None", or is this bug
not valid?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-Dev][Manila] - Need design decision help - - https://bugs.launchpad.net/manila/+bug/1503390

2015-12-21 Thread Ben Swartzlander

On 12/22/2015 12:26 AM, nidhi.h...@wipro.com wrote:

Hi all.

I am working on bug 1503390 (status=None while delete is in progress).

I was doing analysis of the problem and found that yes, it's there.

I reproduced it.

Now the two solutions proposed:

1) Either show the status as "deleting" for such snapshots?

2) Let's not list such snapshots at all?

I see that if we implement solution 1, it doesn't work, as we reach the
same situation (snapshot present but snapshot_instance absent) in two
cases: when a snapshot is deleted and when a snapshot is created.

Now if I set the status as "deleting" (to be shown in the list for
snapshots with no snapshot_instances), it will be wrong information when
we are creating the snapshot. And the list function cannot differentiate
whether the state (snapshot present but snapshot_instance absent) is due
to creation or deletion. PCIIMW…

Another way to do this: let the create/delete path set the status to a
special state, which the list operation can then interpret accordingly.

But setting the status is also not possible, as status resides in the
snapshot_instances table, the row for which is not created yet in the
create path, so we cannot set the status.

Do you think that

STATUS_NEW = 'new'

STATUS_CREATING = 'creating'

STATUS_DELETING = 'deleting'

STATUS_DELETED = 'deleted'

STATUS_ERROR = 'error'

STATUS_ERROR_DELETING = 'error_deleting'

STATUS_AVAILABLE = 'available'

STATUS_ACTIVE = 'active'

STATUS_INACTIVE = 'inactive'

STATUS_MANAGING = 'manage_starting'

STATUS_MANAGE_ERROR = 'manage_error'

STATUS_UNMANAGING = 'unmanage_starting'

STATUS_UNMANAGE_ERROR = 'unmanage_error'

STATUS_UNMANAGED = 'unmanaged'

STATUS_EXTENDING = 'extending'

Do you think that setting the state to some neutral state like
state_inactive will help?

It will be the same in both the creation and deletion paths.

So should we go with the 2nd solution? Let's not show such snapshots in
the list?


I like the second solution. Rather than making up a neutral state, just 
don't list snapshots with no instances. If the snapshot was getting 
deleted, then very soon there won't be anything to show, and if the 
snapshot was just getting created, the client can retry the query later 
and see the snapshot after a short time.
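In code terms the filter is tiny -- something like this sketch
(illustrative attribute names):

    def filter_visible_snapshots(snapshots):
        # A snapshot with no instances is either mid-create or
        # mid-delete; hide it from list results rather than inventing
        # a neutral status for it.
        return [snap for snap in snapshots if snap.instances]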


-Ben



Thanks

Nidhi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [Manila] Midcycle meetup

2015-12-09 Thread Ben Swartzlander

On 12/04/2015 04:42 PM, Ben Swartzlander wrote:

On 11/19/2015 01:00 PM, Ben Swartzlander wrote:

If you are planning to attend the midcycle in any capacity, please vote your
preferences here:

https://www.surveymonkey.com/r/BXPLDXT


The results of the survey were clear. Most people prefer the week of Jan
12-14.

There was an offer to host in Roseville, CA by HP (thanks HP) but at the
meeting yesterday most people still preferred the RTP site, so we will
be planning on hosting the meeting in RTP that week, unless someone
absolutely can't make that week.

What remains to be decided is whether we do Tuesday+Wednesday or
Wednesday+Thursday. We've tried both, and the 2 day length has worked
out very well. I personally lean towards Wednesday+Thursday, but please
reply back to me or the list if you have a different preference.

We need to finalize the dates so people can make travel arrangements.
I'll set the deadline to decide by Tuesday Dec 8 so people will have 5
weeks to make travel plans.


Okay it's final -- we will hold the midcycle meetup on Jan 13-14 at 
NetApp's RTP office.


-Ben



-Ben

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Manila] Tempest scenario tests vs. gate condition

2015-12-07 Thread Ben Swartzlander

On 12/03/2015 06:38 AM, John Spray wrote:

Hi,

We're working towards getting the devstack/CI parts ready to test the
forthcoming ceph native driver, and have a question: will a driver be
accepted into the tree if it has CI for running the api/ tempest
tests, but not the scenario/ tempest tests?

The context is that because the scenario tests require a client to
mount the shares, that's a bit more work for a new protocol such as
cephfs. Naturally we intend to get that done, but would like to
know if it will be a blocker to getting the driver in tree.


This is not currently a requirement for any of the existing 3rd party 
drivers so it wouldn't be fair to enforce it on cephfs.


It *is* something we would like to require at some point, because just 
running the API tests doesn't really ensure that the driver isn't broken, 
but I'm trying to be sensitive to vendors' limited resources and to add 
CI requirements gradually. The fact that the current generic driver is 
unstable in the gate is a much more serious issue than the fact that 
some drivers don't pass scenario tests.



Many thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [Manila] Midcycle meetup

2015-12-04 Thread Ben Swartzlander

On 11/19/2015 01:00 PM, Ben Swartzlander wrote:

If you are planning to attend the midcycle in any capacity, please vote your
preferences here:

https://www.surveymonkey.com/r/BXPLDXT


The results of the survey were clear. Most people prefer the week of Jan 
12-14.


There was an offer to host in Roseville, CA by HP (thanks HP) but at the 
meeting yesterday most people still preferred the RTP site, so we will 
be planning on hosting the meeting in RTP that week, unless someone 
absolutely can't make that week.


What remains to be decided is whether we do Tuesday+Wednesday or 
Wednesday+Thursday. We've tried both, and the 2 day length has worked 
out very well. I personally lean towards Wednesday+Thursday, but please 
reply back to me or the list if you have a different preference.


We need to finalize the dates so people can make travel arrangements. 
I'll set the deadline to decide by Tuesday Dec 8 so people will have 5
weeks to make travel plans.



-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





  1   2   >