Re: [openstack-dev] [cinder] [nova] locking concern with os-brick

2016-08-13 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2016-08-13 20:04:13 -0700:
> The larger issue here IMHO is that there is now an API around locking
> that might be better suited to an actual lock management system (say
> redis or zookeeper or etcd or ...).

The more I look at this, the more I think this is just evidence that
the compute node itself needs to be an API unto itself. Whether it's
Neutron agents, cinder volumes, or what, nova-compute has a bunch of
under-the-covers interactions with things like this. It would make more
sense to put that into its own implementation behind a real public API
than what we have now: processes that just magically expect to be run
together with shared filesystems, lock dirs, network interfaces, etc.

That would also go a long way to being able to treat the other components
more like microservices.



Re: [openstack-dev] [cinder] [nova] locking concern with os-brick

2016-08-13 Thread Joshua Harlow

Sean McGinnis wrote:

On Fri, Aug 12, 2016 at 05:55:47AM -0400, Sean Dague wrote:

A devstack patch was pushed earlier this cycle around os-brick -
https://review.openstack.org/341744

Apparently there are some os-brick operations that are only safe if the
nova and cinder lock paths are set to be the same thing. Though that
hasn't hit release notes or other documentation yet that I can see.


Patrick East submitted a patch to add a release note on the Cinder side
last night: https://review.openstack.org/#/c/354501/


Is this a thing that everyone is aware of at this point? Are project
teams ok with this new requirement? Given that lock_path has no default,
this means we're potentially shipping corruption by default to users.
The other way forward would be to revisit that lock_path by default
concern, and have a global default. Or have some way that users are
warned if we think they aren't in a compliant state.


This is a very good point that we are shipping corruption by default. I
would actually be in favor of having a global default. Other than
requiring tooz for default global locking (with a lot of extra overhead
for small deployments), I don't see a better way of making sure the
defaults are safe for those not aware of the issue.


What is this 'lot of extra overhead' you might be talking about here?

You're free when using tooz to pick (or recommend) the backend that is 
the best for the API that you're trying to develop:


http://docs.openstack.org/developer/tooz/drivers.html

http://docs.openstack.org/developer/tooz/drivers.html#file is similar to 
the one that oslo.concurrency provides (they both share the same 
underlying lock impl via https://pypi.python.org/pypi/fasteners).
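
(For reference, a minimal sketch of what an external/file-based lock looks
like via oslo.concurrency today -- the lock name and lock_path below are
just illustrative, not what nova/cinder/os-brick actually use:)

    from oslo_concurrency import lockutils

    # Every process that wants to share this lock must resolve to the
    # same lock_path; that is exactly the cross-service coupling being
    # discussed in this thread.
    lockutils.set_defaults(lock_path='/var/lib/openstack/lock')

    @lockutils.synchronized('connect_volume', external=True)
    def connect_volume():
        # critical section: serialized across processes on this host,
        # but only if they all use the same lock_path
        pass

    connect_volume()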


The larger issue here IMHO is that there is now an API around locking
that might be better suited to an actual lock management system (say
redis or zookeeper or etcd or ...).


For example we could have the following lock hierarchy convention:

openstack/
├── cinder
├── glance
├── neutron
├── nova
└── shared

The *shared* 'folder' there (not really a folder in some of the 
backends) would be where shared locks (ideally with sub-folders defining 
categories that provide useful context/names describing what is being 
shared) would go, with project-specific locks using their respective 
folders (and so on).


Using http://docs.openstack.org/developer/tooz/drivers.html#file you could 
even create the above directory structure as is (right now); 
oslo.concurrency doesn't provide the right ability to do this since it 
has only one configuration option, 'lock_path' (and IMHO although we 
could tweak oslo.concurrency more and more to do something like that, it 
starts to enter the territory of 'if all you have is a hammer, 
everything looks like a nail').
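
As a rough sketch (the backend URL, member id and lock names below are made 
up for illustration; any tooz backend could be substituted), acquiring a 
lock under such a hierarchy would look something like:

    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'file:///var/lib/openstack/locks', b'compute-host-1')
    coordinator.start()

    # A nova-only lock would live under b'openstack/nova/...'; something
    # both n-cpu and c-vol need (say around os-brick calls) would live
    # under the shared 'folder':
    shared_lock = coordinator.get_lock(b'openstack/shared/os-brick-connect')

    if shared_lock.acquire(blocking=True):
        try:
            pass  # do the critical work here
        finally:
            shared_lock.release()

    coordinator.stop()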


That's my 3 cents :-P

-Josh



Re: [openstack-dev] [Openstack-operators] [cinder] [nova] locking concern with os-brick

2016-08-13 Thread Jay Bryant
I have enough experience to know that the notes will not be read.

I think we need to pull Walt and Kendall in and come up with a safer
solution to this.

That is my 2 cents. :-)

Jay

On Sat, Aug 13, 2016 at 5:07 PM Matt Riedemann 
wrote:

> On 8/12/2016 8:54 AM, Matt Riedemann wrote:
> > On 8/12/2016 8:52 AM, Matt Riedemann wrote:
> >> On 8/12/2016 8:24 AM, Sean McGinnis wrote:
> >>> On Fri, Aug 12, 2016 at 05:55:47AM -0400, Sean Dague wrote:
>  A devstack patch was pushed earlier this cycle around os-brick -
>  https://review.openstack.org/341744
> 
>  Apparently there are some os-brick operations that are only safe if
> the
>  nova and cinder lock paths are set to be the same thing. Though that
>  hasn't hit release notes or other documentation yet that I can
> see.
> >>>
> >>> Patrick East submitted a patch to add a release note on the Cinder side
> >>> last night: https://review.openstack.org/#/c/354501/
> >>>
>  Is this a thing that everyone is aware of at this point? Are project
>  teams ok with this new requirement? Given that lock_path has no
>  default,
>  this means we're potentially shipping corruption by default to users.
>  The other way forward would be to revisit that lock_path by default
>  concern, and have a global default. Or have some way that users are
>  warned if we think they aren't in a compliant state.
> >>>
> >>> This is a very good point that we are shipping corruption by default. I
> >>> would actually be in favor of having a global default. Other than
> >>> requiring tooz for default global locking (with a lot of extra overhead
> >>> for small deployments), I don't see a better way of making sure the
> >>> defaults are safe for those not aware of the issue.
> >>>
> >>> And IMO, having the release note is just a CYA step. We can hope
> someone
> >> reads it - and understands its implications - but it likely will be
> >>> missed.
> >>>
> >>> Anyway, that's my 2 cents.
> >>>
> >>> Sean
> >>>
> 
>  I've put the devstack patch on a -2 hold until we get ACK from both
>  Nova
>  and Cinder teams that everyone's cool with this.
> 
>  -Sean
> 
>  --
>  Sean Dague
>  http://dague.net
> 
> 
> >>>
> >>
> >> I saw the nova one last night:
> >>
> >> https://review.openstack.org/#/c/354502/
> >>
> >> But I don't know the details, like what are the actual specific things
> >> that fail w/o this? Vague "trust me, you need to do this or else"
> >> release notes that impact how people deploy are not fun, so I'd like more
> >> details before we just put this out there.
> >>
> >
> > This is also probably something that should be advertised on the
> > openstack-operators ML. I would at least feel more comfortable if this
> > is a known thing that operators have already been dealing with and we
> > just didn't realize.
> >
>
> I checked a tempest-dsvm CI run upstream and we don't follow this
> recommendation for our own CI on all changes in OpenStack, so before we
> make this note in the release notes, I'd like to see us use the same
> lock_path for c-vol and n-cpu in devstack for our CI runs.
>
> Also, it should really be a note in the help text of the actual
> lock_path option IMO, since it's a latent and persistent thing that
> people are going to need to remember long after Newton is released.
> People deploying OpenStack for the first time AFTER Newton shouldn't
> have to know there was a release note telling them not to shoot
> themselves in the foot; it should be in the config option help text.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>


Re: [openstack-dev] [Openstack-operators] [cinder] [nova] locking concern with os-brick

2016-08-13 Thread Matt Riedemann

On 8/12/2016 8:54 AM, Matt Riedemann wrote:

On 8/12/2016 8:52 AM, Matt Riedemann wrote:

On 8/12/2016 8:24 AM, Sean McGinnis wrote:

On Fri, Aug 12, 2016 at 05:55:47AM -0400, Sean Dague wrote:

A devstack patch was pushed earlier this cycle around os-brick -
https://review.openstack.org/341744

Apparently there are some os-brick operations that are only safe if the
nova and cinder lock paths are set to be the same thing. Though that
hasn't hit release notes or other documentation yet that I can see.


Patrick East submitted a patch to add a release note on the Cinder side
last night: https://review.openstack.org/#/c/354501/


Is this a thing that everyone is aware of at this point? Are project
teams ok with this new requirement? Given that lock_path has no
default,
this means we're potentially shipping corruption by default to users.
The other way forward would be to revisit that lock_path by default
concern, and have a global default. Or have some way that users are
warned if we think they aren't in a compliant state.


This is a very good point that we are shipping corruption by default. I
would actually be in favor of having a global default. Other than
requiring tooz for default global locking (with a lot of extra overhead
for small deployments), I don't see a better way of making sure the
defaults are safe for those not aware of the issue.

And IMO, having the release note is just a CYA step. We can hope someone
reads it - and understands its implications - but it likely will be
missed.

Anyway, that's my 2 cents.

Sean



I've put the devstack patch on a -2 hold until we get ACK from both
Nova
and Cinder teams that everyone's cool with this.

-Sean

--
Sean Dague
http://dague.net




I saw the nova one last night:

https://review.openstack.org/#/c/354502/

But I don't know the details, like what are the actual specific things
that fail w/o this? Vague "trust me, you need to do this or else"
release notes that impact how people deploy are not fun, so I'd like more
details before we just put this out there.



This is also probably something that should be advertised on the
openstack-operators ML. I would at least feel more comfortable if this
is a known thing that operators have already been dealing with and we
just didn't realize.



I checked a tempest-dsvm CI run upstream and we don't follow this 
recommendation for our own CI on all changes in OpenStack, so before we 
make this note in the release notes, I'd like to see us use the same 
lock_path for c-vol and n-cpu in devstack for our CI runs.
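
(To illustrate what that recommendation amounts to -- the path here is only 
an example, since there is no default -- it's pointing both services at the 
same directory:)

    # nova.conf (n-cpu) and cinder.conf (c-vol) on the same host
    [oslo_concurrency]
    lock_path = /var/lib/openstack/lock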


Also, it should really be a note in the help text of the actual 
lock_path option IMO, since it's a latent and persistent thing that 
people are going to need to remember long after Newton is released. 
People deploying OpenStack for the first time AFTER Newton shouldn't 
have to know there was a release note telling them not to shoot 
themselves in the foot; it should be in the config option help text.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Zun][Higgins] Proposing Sudipta Biswas and Wenzhi Yu for Zun core reviewer team

2016-08-13 Thread Fei Long Wang

+1

On 12/08/16 19:22, taget wrote:


+1 for both, they would be a great addition to the Zun team.

On 2016年08月12日 10:26, Yanyan Hu wrote:


Both Sudipta and Wenzhi have been actively contributing to the Zun 
project for a while. Sudipta provided helpful advice for the project 
roadmap and architecture design. Wenzhi consistently contributed high 
quality patches and insightful reviews. I think both of them are 
qualified to join the core team.






--
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--




Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-13 Thread Duncan Thomas
There's so much misinformation in that email I barely know where to start.

There is nothing stopping out of tree drivers for cinder, and a few have
existed, though they don't seem to stick around. The driver is just a
python class referenced in the config file.
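
For illustration only (the module path below is hypothetical), an out of
tree driver gets wired up with nothing more than a backend section in
cinder.conf pointing at the class:

    [myvendor-iscsi]
    volume_driver = myvendor_cinder.driver.MyVendorISCSIDriver
    volume_backend_name = myvendor-iscsi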

Turning a removed driver into an out of tree driver (or patching it back
into the tree) is trivial for anybody with basic python skills. They can
even just apply a reverse patch of the removal patch directly and cleanly
most of the time, since the drivers are clearly separated.

As has been said in the thread multiple times, by multiple people, the idea
of out of tree drivers has been discussed, passionately and at vast length,
with people on both sides of the debate. We've got storage vendors,
operators and distribution packagers at every single one of these
discussions, and have had each time it is discussed, which has been at
least the last three summits and the last three mid cycles.

It is getting tiring and distracting to keep rehashing that decision in
thread with nothing new being said, with somebody who neither has a driver
nor otherwise contributes to cinder. Please have the courtesy to follow some
of the provided historical references before repeatedly derailing the
thread by expounding the virtues of out of tree drivers. They have been
discussed, and soundly (though not unanimously, as Mike points out)
rejected. We have clearly decided there is a consensus that they aren't
what we want now. That is not what we're trying to discuss here.

To spell it out one more time: we don't stop out of tree drivers. They
work, they're easy. We don't advertise them as supported because they're
not part of the review or testing process. We like in tree drivers. Vendors
like in tree drivers, and the advertised support we give them for doing so.
They will handle the burden of keeping a third party CI going to maintain
that status, though a little begrudgingly - it has repeatedly and
continuously been necessary to have the option of the (apparently
substantial, given the effect it can have) threat of removal from tree in
order to persuade them to put enough resources into keeping their CI going.

On 13 Aug 2016 16:40, "Ihar Hrachyshka"  wrote:

> Clay Gerrard  wrote:
>
> The 
> use_untested_probably_broken_deprecated_manger_so_maybe_i_can_migrate_cross_fingers
>> option sounds good!  The experiment would be then if it's still enough of a
>> stick to keep 3rd party drivers pony'd up on their commitment to the Cinder
>> team to consistently ship quality releases?
>>
>>
> This commitment, is it the only model that you allow to extend Cinder for
> vendor technology? Because if not, then, in a way, you put vendors in an
> unfortunate situation where they are forced into a very specific model of
> commitment, that may not be in the best interest of that vendor. While
> there may be a value of keeping multiple drivers closer to the core code
> (code reusability, spotting common patterns, …), I feel that the benefit
> from such collaboration is worthwhile only when it's mutual, and not forced
> upon anyone.
>
> I assume that if there were alternatives available (including walking
> independently of Cinder release cycles and practices), then some of those
> vendors that you currently try hard to police into doing things the right
> and only way would actually choose that alternative path, one that could be
> more in line with their production cycle. And maybe those vendors that
> break current centralized rules would voluntarily vote for leaving the tree
> to pursue happiness as they see it, essentially freeing you from the need
> to police code that you cannot actually maintain.
>
> What about maybe the operator just not upgrading till post migration?
>> It's the migration that sucks right?  You either get to punt a release and
>> hope it gets "back in good faith" or do it now and that 3rd party driver
>> has lost your business/trust.
>>
>
> The deprecation tag indicates a good will of the community to do whatever
> it takes to fulfill the guarantee that a solution that worked in a previous
> cycle won’t be dropped with no prior notice (read: deprecation warnings in
> logs). Explicitly removing a driver just because you *think* it may no
> longer work is not in line with this thinking. Yes, there may be bugs in
> the code, but there is at least a path forward: for one, operators may try
> to fix bugs they hit in upgrade, or they can work with the vendor to fix
> the code and backport the needed fixes to stable branches. When you don’t
> have the code in tree at all, it’s impossible to backport, because stable
> branches don’t allow new features. And it’s not possible to play with
> (potentially broken) driver to understand which bugs block you from going
> forward.
>
> Ihar
>

Re: [openstack-dev] [all][tc][ptl] establishing project-wide goals

2016-08-13 Thread Doug Hellmann
Excerpts from John Dickinson's message of 2016-08-12 16:04:42 -0700:
> 
> On 12 Aug 2016, at 13:31, Doug Hellmann wrote:
> 
> > Excerpts from John Dickinson's message of 2016-08-12 13:02:59 -0700:
> >>
> >> On 12 Aug 2016, at 7:28, Doug Hellmann wrote:
> >>
> >>> Excerpts from John Dickinson's message of 2016-08-11 15:00:56 -0700:
> 
>  On 10 Aug 2016, at 8:29, Doug Hellmann wrote:
> 
> > Excerpts from Doug Hellmann's message of 2016-07-29 16:55:22 -0400:
> >> One of the outcomes of the discussion at the leadership training
> >> session earlier this year was the idea that the TC should set some
> >> community-wide goals for accomplishing specific technical tasks to
> >> get the projects synced up and moving in the same direction.
> >>
> >> After several drafts via etherpad and input from other TC and SWG
> >> members, I've prepared the change for the governance repo [1] and
> >> am ready to open this discussion up to the broader community. Please
> >> read through the patch carefully, especially the "goals/index.rst"
> >> document which tries to lay out the expectations for what makes a
> >> good goal for this purpose and for how teams are meant to approach
> >> working on these goals.
> >>
> >> I've also prepared two patches proposing specific goals for Ocata
> >> [2][3].  I've tried to keep these suggested goals for the first
> >> iteration limited to "finish what we've started" type items, so
> >> they are small and straightforward enough to be able to be completed.
> >> That will let us experiment with the process of managing goals this
> >> time around, and set us up for discussions that may need to happen
> >> at the Ocata summit about implementation.
> >>
> >> For future cycles, we can iterate on making the goals "harder", and
> >> collecting suggestions for goals from the community during the forum
> >> discussions that will happen at summits starting in Boston.
> >>
> >> Doug
> >>
> >> [1] https://review.openstack.org/349068 describe a process for 
> >> managing community-wide goals
> >> [2] https://review.openstack.org/349069 add ocata goal "support python 
> >> 3.5"
> >> [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
> >> libraries"
> >>
> >
> > The proposal was discussed at the TC meeting yesterday [4], and
> > left open to give more time to comment. I've added all of the PTLs
> > for big tent projects as reviewers on the process patch [1] to
> > encourage comments from them.
> >
> > Please also look at the associated patches with the specific goals
> > for this cycle (python 3.5 support and cleaning up Oslo incubated
> > code).  So far most of the discussion has focused on the process,
> > but we need folks to think about the specific things they're going
> > to be asked to do during Ocata as well.
> >
> > Doug
> >
> > [4] 
> > http://eavesdrop.openstack.org/meetings/tc/2016/tc.2016-08-09-20.01.log.html
> >
> 
> 
>  Commonality in goals and vision is what unites any community. I
>  definitely support the TC's effort to define these goals for OpenStack
>  and to champion them. However, I have a few concerns about the process
>  that has been proposed.
> 
>  I'm concerned with the mandate that all projects must prioritize these
>  goals above all other work. Thinking about this from the perspective of
>  the employers of OpenStack contributors, I find it difficult
>  to imagine them (particularly smaller ones) getting behind this
>  prioritization mandate. For example, if I've got a user or deployer
>  issue that requires an upstream change, am I to prioritize Py35
>  compatibility over "broken in production"? Am I now to schedule my own
>  work on known bugs or missing features only after these goals have
>  been met? Is that what I should ask other community members to do too?
> >>>
> >>> There is a difference between priority and urgency. Clearly "broken
> >>> in production" is more urgent than other planned work. It's less
> >>> clear that, over the span of an entire 6 month release cycle, one
> >>> production outage is the most important thing the team would have
> >>> worked on.
> >>>
> >>> The point of the current wording is to make it clear that because these
> >>> are goals coming from the entire community, teams are expected to place
> >>> a high priority on completing them. In some cases that may mean
> >>> working on community goals instead of working on 

Re: [openstack-dev] [all][tc][ptl] establishing project-wide goals

2016-08-13 Thread Clint Byrum
Excerpts from John Dickinson's message of 2016-08-12 13:02:59 -0700:
> 
> On 12 Aug 2016, at 7:28, Doug Hellmann wrote:
> 
> > Excerpts from John Dickinson's message of 2016-08-11 15:00:56 -0700:
> >>
> >
> >> I agree with Hongbin Lu's comments that the resulting goals might fit
> >> into the interests of the majority but fundamentally violate the
> >> interests of a minority of project teams. As an example, should the TC
> >> decide that a future goal is for projects to implement a particular
> >> API-WG document, that may be good for several projects, but it might
> >> not be possible or advisable for others.
> >
> > Again, the goals are not coming from the TC, they are coming from the
> > entire community. There will be discussion sessions, mailing list
> > threads, experimentation, etc. before any goal is settled on. By the
> > time the goals for a given cycle are picked, everyone will have had a
> > chance to give input. That's why I'm starting this conversation now, so
> > far in advance of the summit.
> 
> This is good, and the importance and difficulty of this is not lost on
> me. I'm very glad you've included community feedback as part of the
> process.
> 
> But if a project is on the minority side of the resulting consensus,
> how do we protect that project from being negatively affected? Even if
> there are good reasons at the time for a project to not support a
> goal, that dissent can come back against the project negatively, even
> years down the road after those who dissented have left. I know this
> from experience.
> 

There is no minority side of a consensus. If you can't support that goal,
it's not a community goal.

I know it's mind blowing, but the idea is that we actually all agree that
we all should exist, and have an important shared responsibility to one
another under the OpenStack banner. Rather than a divisive voting system
where we can chuck our desires at the wall, and point fingers when more
people have different desires, the TC has a radical idea. They'd like
to actually try and have OpenStack build OpenStack _together_.

There's still a position that gives more than others. This is not an
equilibrium. Some projects will have more time than others to complete
these goals. But the point of a consensus is that we can actually find
things that we can all commit to doing. And if we can't find those things,
we should spend more time figuring out why.

This isn't fairy tales and rainbows. It's human communication. We
actually need to spend time listening to one another here. If a project
team is feeling the pressure to gain more adoption so greatly that they
simply cannot commit to a goal, then the community should hear that,
and respect it. Don't make that a community goal, even if the teams that
want it go ahead with activities, they can do so with the knowledge that
it is their own, and they cannot expect community-wide support yet.

At the same time, that very busy project team, whomever they may be, needs
to consider the effect their activities have on the greater effort. The
discussion needs to continue, and as unsatisfying as it may feel to have
an open discussion instead of a closed discussion in which there were
winners and losers, it's the burden we're going to bear to be able to
achieve something larger than what a single team can achieve.

I for one believe in this model. I think it will require leaders to step
up and help build consensus. But I think we all know that as loosely
coupled as OpenStack is, there are plenty of ties that bind us together.

We all wear the same t-shirts, and we should act like we want to keep
doing that.



Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-13 Thread Ihar Hrachyshka

Clay Gerrard  wrote:

The  
use_untested_probably_broken_deprecated_manger_so_maybe_i_can_migrate_cross_fingers  
option sounds good!  The experiment would be then if it's still enough of  
a stick to keep 3rd party drivers pony'd up on their commitment to the  
Cinder team to consistently ship quality releases?




This commitment, is it the only model that you allow to extend Cinder for  
vendor technology? Because if not, then, in a way, you put vendors in an  
unfortunate situation where they are forced into a very specific model of  
commitment, that may not be in the best interest of that vendor. While
there may be a value of keeping multiple drivers closer to the core code  
(code reusability, spotting common patterns, …), I feel that the benefit  
from such collaboration is worthwhile only when it's mutual, and not forced
upon anyone.


I assume that if there were alternatives available (including walking 
independently of Cinder release cycles and practices), then some of those 
vendors that you currently try hard to police into doing things the right 
and only way would actually choose that alternative path, one that could be 
more in line with their production cycle. And maybe those vendors that 
break current centralized rules would voluntarily vote for leaving the tree 
to pursue happiness as they see it, essentially freeing you from the need 
to police code that you cannot actually maintain.


What about maybe the operator just not upgrading till post migration?   
It's the migration that sucks right?  You either get to punt a release  
and hope it gets "back in good faith" or do it now and that 3rd party  
driver has lost your business/trust.


The deprecation tag indicates a good will of the community to do whatever  
it takes to fulfill the guarantee that a solution that worked in a previous  
cycle won’t be dropped with no prior notice (read: deprecation warnings in  
logs). Explicitly removing a driver just because you *think* it may no  
longer work is not in line with this thinking. Yes, there may be bugs in  
the code, but there is at least a path forward: for one, operators may try  
to fix bugs they hit in upgrade, or they can work with the vendor to fix  
the code and backport the needed fixes to stable branches. When you don’t  
have the code in tree at all, it’s impossible to backport, because stable  
branches don’t allow new features. And it’s not possible to play with  
(potentially broken) driver to understand which bugs block you from going  
forward.


Ihar



Re: [openstack-dev] [neutron][lbaas][api]API returns incorrectly when filtering fields in every LBaaS resource

2016-08-13 Thread Ihar Hrachyshka

zhi  wrote:


hi, all.

I have faced some strange problems when getting LBaaS resources, such as 
loadbalancers, listeners, pools, etc.

For example, when I send a request which only filters the "id" attribute, 
like this:

>>> curl -g -i -X GET \
      http://10.0.44.233:9696/v2.0/lbaas/listeners.json?fields=id \
      -H "User-Agent: python-neutronclient" \
      -H "Accept: application/json" \
      -H "X-Auth-Token: xxx"

>>> {"listeners": [{"protocol_port": 9998, "protocol": "HTTP",  
"description": "", "default_tls_container_ref": null, "admin_state_up":  
false, "loadbalancers": [{"id": "509781c5-4bab-42e6-99d5-343c991f018b"}],  
"sni_container_refs": [], "connection_limit": -1, "default_pool_id":  
null, "id": "e55cec57-060f-4d22-9b7c-1c37f612a4cd", "name": ""},  
{"protocol_port": 99, "protocol": "HTTP", "description": "",  
"default_tls_container_ref": null, "admin_state_up": true,  
"loadbalancers": [{"id": "509781c5-4bab-42e6-99d5-343c991f018b"}],  
"sni_container_refs": [], "connection_limit": -1, "default_pool_id":  
"b360fc75-b23d-46a3-b936-6c9480d35219", "id":  
"f8392236-e065-4aa2-a4ef-d6c6821cc038", "name": ""}, {"protocol_port":  
9998, "protocol": "HTTP", "description": "", "default_tls_container_ref":  
null, "admin_state_up": true, "loadbalancers": [{"id":  
"744b68a0-f08f-459a-ab7e-c43a6cb3b299"}], "sni_container_refs": [],  
"connection_limit": -1, "default_pool_id":  
"83a9d8ed-017b-412d-89c8-bd1e36295d81", "id":  
"c6ff129c-96c5-4121-b0dd-2258016b2f36", "name": ""}]}



The API returns all the information about the listeners rather than only the 
"id" attribute. This problem also exists in every LBaaS resource, such as 
loadbalancers, pools, etc.

I have already registered a bug in launchpad[1], and there is a patch to 
solve this problem for the pools resource[2]. But I don't know if my 
solution is correct. ;-(

Could someone give me some advice?


I actually believe that it's an issue in neutron itself: its API layer 
should handle the needed filtering, including for the passed-in fields list.


We have a patch in progress that guards against plugins returning fields 
that are not defined in the active attribute map: 
https://review.openstack.org/#/c/352809/


We should probably think about a similar approach in the base controller 
code to do the needed filtering based on the fields query passed.
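
As a rough sketch of the idea (this is not the actual neutron controller 
code; the helper and sample data below are hypothetical), the API layer 
would simply strip anything the client didn't ask for:

    def filter_fields(resource, fields):
        # Keep only the attributes the client asked for; if no fields
        # were requested, return the resource untouched.
        if not fields:
            return resource
        return {attr: value for attr, value in resource.items()
                if attr in fields}

    plugin_result = {
        'listeners': [
            {'id': 'e55cec57-060f-4d22-9b7c-1c37f612a4cd',
             'protocol': 'HTTP', 'protocol_port': 9998},
            {'id': 'f8392236-e065-4aa2-a4ef-d6c6821cc038',
             'protocol': 'HTTP', 'protocol_port': 99},
        ],
    }

    # Equivalent of GET /v2.0/lbaas/listeners.json?fields=id
    fields = ['id']
    response = {'listeners': [filter_fields(l, fields)
                              for l in plugin_result['listeners']]}
    # -> {'listeners': [{'id': 'e55cec57-...'}, {'id': 'f8392236-...'}]}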


Ihar
