Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-05-06 Thread Jeremy Stanley
On 2016-05-06 10:25:41 -0400 (-0400), Doug Hellmann wrote:
> Excerpts from Tony Breeds's message of 2016-05-06 09:53:11 +1000:
[...]
> > I think some of these proactive things will be key.  A quick
> > check shows we have nearly 30 items in g-r that don't seem to be
> > used by anything.  So there is some low-hanging fruit there.
> > Searching for overlapping requirements and then working with the
> > impacted teams is a big job, but again a very worthwhile goal.
> 
> Someone had a tool that looked at second-order dependencies, I
> think. I can't find the reference in my notes, but maybe someone
> else has it handy?
[...]

I'm not entirely sure what you're looking for. I added
http://git.openstack.org/cgit/openstack/requirements/tree/tools/cruft.sh
to try to find things in global requirements which no project
declares as a requirement. Richard Jones wrote
https://pypi.python.org/pypi/pip_missing_reqs to spot modules
projects are importing directly while failing to declare a
requirement on them (a fragile situation if one of your other
dependencies adjusts its own dependencies and drops whatever
provided that module). We don't usually list transitive-only
dependencies in global requirements unless we need to pin/skip
versions of them in some way, and instead rely on the constraints
list to represent the complete transitive dependency set.
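The comparison cruft.sh performs can be sketched in a few lines of shell. This is a rough illustration only: the flat file layout (one global list plus per-project requirements files side by side) is an assumption, and the real script walks the repositories tracked by the requirements project.

```shell
#!/usr/bin/env bash
# Sketch of the cruft.sh idea: report global-requirements entries that
# no surveyed project declares as a requirement.  File layout below is
# hypothetical, for illustration only.

# Reduce a requirements list on stdin to bare, lowercased package names
# (specifiers, markers, extras, and comments stripped).
normalize() {
  sed -e 's/[<>=!;#[].*//' -e 's/[[:space:]]*$//' -e '/^[[:space:]]*$/d' |
    tr '[:upper:]' '[:lower:]' | sort -u
}

# find_cruft <global list> <project requirements files...>
# Prints names present in the global list but in no project list.
find_cruft() {
  local global="$1"; shift
  comm -23 <(normalize < "$global") <(cat "$@" | normalize)
}

# Hypothetical invocation:
#   find_cruft global-requirements.txt projects/*/requirements.txt
```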
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-05-06 Thread Jesse Pretorius
On 6 May 2016 at 15:25, Doug Hellmann  wrote:

>
> Someone had a tool that looked at second-order dependencies, I think. I
> can't find the reference in my notes, but maybe someone else has it
> handy?
>

I don't have a specific tool handy, but I'm guessing that we could kludge
something together with tooling that builds a venv for each project and
grabs a pip freeze from each of them. That could at least give us a set
of data to examine and work from?
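A minimal sketch of that venv-and-freeze survey might look like the following. Everything here is hypothetical (paths, file names, and the comparison step); the freeze step needs network access, so only the offline comparison is exercised.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a per-project venv-and-freeze survey.

# Install one project checkout into a throwaway venv and record what
# pip actually resolves (requires network; shown for illustration).
freeze_project() {          # freeze_project <src dir> <output file>
  local venv; venv="$(mktemp -d)/venv"
  python3 -m venv "$venv"
  "$venv/bin/pip" install -q "$1"
  "$venv/bin/pip" freeze > "$2"
}

# Given those freezes, list packages pulled in transitively that never
# appear in the global list -- roughly the "second-order dependencies"
# asked about earlier in the thread.
second_order() {            # second_order <global list> <freeze files...>
  local global="$1"; shift
  comm -13 <(sed 's/[<>=!;#[].*//' "$global" | tr '[:upper:]' '[:lower:]' | sort -u) \
           <(cat "$@" | sed 's/==.*//' | tr '[:upper:]' '[:lower:]' | sort -u)
}
```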

We have something that kinda does this [2] although the purpose is quite
different. I would guess that we could either work out a way to make use of
this to achieve the goal through an automated process, or we could just
derive something useful from it. If this is deemed the best or only option
then I'd be happy to take this up.

If there's a better way then I'm all for it, but from what I see the pip
project has a long-standing issue [1] open for a proper dependency resolver.

[1] https://github.com/pypa/pip/issues/988
[2] https://github.com/openstack/openstack-ansible-repo_build


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-05-06 Thread Doug Hellmann
Excerpts from Tony Breeds's message of 2016-05-06 09:53:11 +1000:
> On Thu, May 05, 2016 at 02:35:46PM -0400, Doug Hellmann wrote:
> 
> > This has been a lively thread, and the summit session was similarly
> > animated. I'm glad to see so much interest in managing our dependencies!
> > 
> > As we discussed at the summit, my primary objective with dependency
> > management this cycle is actually to spin it out into its own team, like
> > we did with stable management over the last year. We discussed several
> > things that team might undertake, including reviewing all of our
> > existing dependencies to ensure they are all still actually needed;
> > reviewing any overlap between dependencies to try to remove items from
> > the list; and implementing some of the other changes we discussed such
> > as allowing overlapping ranges between the global and per-project lists.
> 
> I think some of these proactive things will be key.  A quick check shows we
> have nearly 30 items in g-r that don't seem to be used by anything.  So there
> is some low-hanging fruit there.  Searching for overlapping requirements and
> then working with the impacted teams is a big job, but again a very
> worthwhile goal.

Someone had a tool that looked at second-order dependencies, I think. I
can't find the reference in my notes, but maybe someone else has it
handy?

> 
> > We had no volunteers to serve as PTL of that new team,
> 
> I was only half joking when I said I could learn how to be a PTL of 2 things
> at the same time :)

Yeah, I wasn't going to do that to you based on an off-hand comment. If
you want it when we decide we're ready for the team to form, that's
entirely up to you and the electorate. :-)

> 
> Regardless, let's try to grow a team from the volunteers; then we can decide
> what the spin-off looks like.

+1

Doug



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-05-05 Thread Tony Breeds
On Thu, May 05, 2016 at 02:35:46PM -0400, Doug Hellmann wrote:

> This has been a lively thread, and the summit session was similarly
> animated. I'm glad to see so much interest in managing our dependencies!
> 
> As we discussed at the summit, my primary objective with dependency
> management this cycle is actually to spin it out into its own team, like
> we did with stable management over the last year. We discussed several
> things that team might undertake, including reviewing all of our
> existing dependencies to ensure they are all still actually needed;
> reviewing any overlap between dependencies to try to remove items from
> the list; and implementing some of the other changes we discussed such
> as allowing overlapping ranges between the global and per-project lists.

I think some of these proactive things will be key.  A quick check shows we
have nearly 30 items in g-r that don't seem to be used by anything.  So there
is some low-hanging fruit there.  Searching for overlapping requirements and
then working with the impacted teams is a big job, but again a very
worthwhile goal.

> We had no volunteers to serve as PTL of that new team,

I was only half joking when I said I could learn how to be a PTL of 2 things
at the same time :)

Regardless, let's try to grow a team from the volunteers; then we can decide
what the spin-off looks like.

Yours Tony.




Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-05-05 Thread Tony Breeds
On Thu, May 05, 2016 at 04:18:08PM -0400, Davanum Srinivas wrote:
> prometheanfire,
> 
> I'll get the ball rolling next week. i.e, schedule a meeting, get
> started on writing down what we do usually for vetting requirements
> changes etc.

Please try to set this for a time that isn't terrible for Eastern Australia
(UTC+1000 ATM).

I'm keen to be involved in the meetings.

Yours Tony.




Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-05-05 Thread Doug Hellmann
Excerpts from Ian Cordasco's message of 2016-05-05 16:10:09 -0500:
>  
> 
> -Original Message-
> From: Haïkel <hgue...@fedoraproject.org>
> Reply: OpenStack Development Mailing List (not for usage questions) 
> <openstack-dev@lists.openstack.org>
> Date: May 5, 2016 at 15:25:08
> To: OpenStack Development Mailing List (not for usage questions) 
> <openstack-dev@lists.openstack.org>
> Subject:  Re: [openstack-dev] [release][requirements][packaging][summit] 
> input needed on summit discussion about global requirements
> 
> > Well, I'm more in favor of having it as a sub-team of the release mgmt team.
> 
> I have to agree.
> 
> Doug, did you have specific ideas about what a PTL for the requirements team 
> would do? It's not inherently obvious to me what the benefits of having a 
> requirements PTL would be.

The dependencies list was placed under the release team sort of as a
default -- it didn't fit with any project teams, and we knew we would
have to manage freezing the list at certain times in the cycle. I have
full faith that a new standalone team will work with the release team on
that coordination.

Now that we have a lot of projects to manage, the release work takes
a bigger proportion of the release team's time over the entire span
of the cycle.  So it's not so much that a dependency management team
needs to exist in its own right as that the work such a team might do is
being mostly ignored by your current release PTL (me) because of other
priorities, and that's not a good situation for any of us.

Managing the list of dependencies is not quite a full-time job, but
it's a focused set of responsibilities that can be carved out from the
current release team duties.  As far as other work, in the short term
there's quite a bit of cleanup to be done of the current dependency
list, and there are a few ongoing projects related to constraints and
other changes to how we manage and use the list that need someone to
drive them. I think having a separate team to own that work will be more
effective than what we have now. Eventually I expect it to settle back
down into a relatively low level of effort to maintain the list.

Doug



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-05-05 Thread Doug Hellmann
Excerpts from Davanum Srinivas (dims)'s message of 2016-05-05 16:18:08 -0400:
> prometheanfire,
> 
> I'll get the ball rolling next week. i.e, schedule a meeting, get
> started on writing down what we do usually for vetting requirements
> changes etc.

Thanks, Dims!

Doug

> 
> Thanks,
> Dims
> 
> On Thu, May 5, 2016 at 4:13 PM, Matthew Thode  
> wrote:
> > On 05/05/2016 01:35 PM, Doug Hellmann wrote:
> >> This has been a lively thread, and the summit session was similarly
> >> animated. I'm glad to see so much interest in managing our dependencies!
> >>
> >> As we discussed at the summit, my primary objective with dependency
> >> management this cycle is actually to spin it out into its own team, like
> >> we did with stable management over the last year. We discussed several
> >> things that team might undertake, including reviewing all of our
> >> existing dependencies to ensure they are all still actually needed;
> >> reviewing any overlap between dependencies to try to remove items from
> >> the list; and implementing some of the other changes we discussed such
> >> as allowing overlapping ranges between the global and per-project lists.
> >>
> >> We had no volunteers to serve as PTL of that new team, but we did have
> >> several volunteers offer to help with reviewing requirements changes and
> >> implementation of some of the changes mentioned above. Thank you! Please
> >> continue with the reviews, and we will take another look at establishing
> >> a separate team somewhere around the middle of the cycle.
> >>
> >> Doug
> >
> > Thanks,
> >
> >   One of the bigger things I think we'd gain from spinning out a
> > team is having scheduled meetings on IRC.  Is there a way we can do
> > this without becoming an official project?
> >
> > --
> > Matthew Thode (prometheanfire)
> >
> >
> >
> 



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-05-05 Thread Ian Cordasco
 

-Original Message-
From: Haïkel <hgue...@fedoraproject.org>
Reply: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Date: May 5, 2016 at 15:25:08
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject:  Re: [openstack-dev] [release][requirements][packaging][summit] input 
needed on summit discussion about global requirements

> Well, I'm more in favor of having it as a sub-team of the release mgmt team.

I have to agree.

Doug, did you have specific ideas about what a PTL for the requirements team 
would do? It's not inherently obvious to me what the benefits of having a 
requirements PTL would be.

--  
Ian Cordasco




Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-05-05 Thread Haïkel
Well, I'm more in favor of having it as a sub-team of the release mgmt team.

H,



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-05-05 Thread Matthew Thode
On 05/05/2016 03:18 PM, Davanum Srinivas wrote:
> prometheanfire,
> 
> I'll get the ball rolling next week. i.e, schedule a meeting, get
> started on writing down what we do usually for vetting requirements
> changes etc.
> 
> Thanks,
> Dims

Thanks, it's nice to have just from a planning perspective.

-- 
-- Matthew Thode (prometheanfire)





Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-05-05 Thread Davanum Srinivas
prometheanfire,

I'll get the ball rolling next week. i.e, schedule a meeting, get
started on writing down what we do usually for vetting requirements
changes etc.

Thanks,
Dims

On Thu, May 5, 2016 at 4:13 PM, Matthew Thode  wrote:
> On 05/05/2016 01:35 PM, Doug Hellmann wrote:
>> This has been a lively thread, and the summit session was similarly
>> animated. I'm glad to see so much interest in managing our dependencies!
>>
>> As we discussed at the summit, my primary objective with dependency
>> management this cycle is actually to spin it out into its own team, like
>> we did with stable management over the last year. We discussed several
>> things that team might undertake, including reviewing all of our
>> existing dependencies to ensure they are all still actually needed;
>> reviewing any overlap between dependencies to try to remove items from
>> the list; and implementing some of the other changes we discussed such
>> as allowing overlapping ranges between the global and per-project lists.
>>
>> We had no volunteers to serve as PTL of that new team, but we did have
>> several volunteers offer to help with reviewing requirements changes and
>> implementation of some of the changes mentioned above. Thank you! Please
>> continue with the reviews, and we will take another look at establishing
>> a separate team somewhere around the middle of the cycle.
>>
>> Doug
>
> Thanks,
>
>   One of the bigger things I think we'd gain from spinning out a
> team is having scheduled meetings on IRC.  Is there a way we can do
> this without becoming an official project?
>
> --
> Matthew Thode (prometheanfire)
>
>
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-05-05 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2016-04-17 11:13:15 -0400:
> I am organizing a summit session for the cross-project track to
> (re)consider how we manage our list of global dependencies [1].
> Some of the changes I propose would have a big impact, and so I
> want to ensure everyone doing packaging work for distros is available
> for the discussion. Please review the etherpad [2] and pass the
> information along to colleagues who might be interested.
> 
> Doug
> 
> [1] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9473
> [2] https://etherpad.openstack.org/p/newton-global-requirements
> 

This has been a lively thread, and the summit session was similarly
animated. I'm glad to see so much interest in managing our dependencies!

As we discussed at the summit, my primary objective with dependency
management this cycle is actually to spin it out into its own team, like
we did with stable management over the last year. We discussed several
things that team might undertake, including reviewing all of our
existing dependencies to ensure they are all still actually needed;
reviewing any overlap between dependencies to try to remove items from
the list; and implementing some of the other changes we discussed such
as allowing overlapping ranges between the global and per-project lists.

We had no volunteers to serve as PTL of that new team, but we did have
several volunteers offer to help with reviewing requirements changes and
implementation of some of the changes mentioned above. Thank you! Please
continue with the reviews, and we will take another look at establishing
a separate team somewhere around the middle of the cycle.

Doug



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-21 Thread Jeremy Stanley
On 2016-04-21 14:05:17 +1200 (+1200), Robert Collins wrote:
> On 20 April 2016 at 03:00, Jeremy Stanley  wrote:
[...]
> > When we were firming up the constraints idea in Vancouver, if my
> > memory is correct (which it quite often is not these days), part of
> > the long tail Robert suggested was that once constraints usage in
> > the CI is widespread we could consider resolving it from individual
> > requirements lists in participating projects, drop the version
> > specifiers from the global requirements list entirely and stop
> > trying to actively synchronize requirement version ranges in
> > individual projects.
[...]
> 
> I think I suggested that we could remove the *versions* from
> global-requirements. Constraints being in a single place is a
> necessary tool unless (we have atomic-multi-branch commits via zuul ||
> we never depend on two projects agreeing on compatible versions of
> libraries in the CI jobs that run for any given project).
[...]

Yep, that's what I was trying to convey above. We still need to
resolve upper-constraints.txt from something, and there was debate
as to whether it would be effective to generate it from the
unversioned requirements list in global/requirements or whether we
would need to resolve it from an aggregation of the still-versioned
requirements files in participating projects. Also briefly touched
on was the option of possibly dropping version specifiers from
individual project requirements files.
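To make the options concrete: deriving the "unversioned requirements list" is mechanical (drop the version specifiers), and resolving a constraints file from it is an install plus a freeze. A hedged sketch, with file names as assumptions and environment markers and comments ignored for simplicity:

```shell
# Strip version specifiers from a requirements file, leaving only
# package names -- one reading of the "unversioned" global list.
unversion() {               # unversion <requirements file>
  sed -e 's/[<>=!].*//' -e 's/[[:space:]]*$//' "$1"
}

# Resolving upper-constraints.txt from that list would then be an
# install plus a freeze (network required; illustrative only):
#   python3 -m venv cap && cap/bin/pip install -q -r unversioned.txt
#   cap/bin/pip freeze > upper-constraints.txt
```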

> Atomic multi-branch commits in zuul would allow us to fix
> multi-project wedging issues if constraints are federated out to
> multiple trees.
[...]

This still runs counter to the desire to serialize changes proposed
on different branches for the purpose of confirming upgrades from
one branch to another aren't broken by one change and then quietly
fixed by another.
-- 
Jeremy Stanley



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-20 Thread Clark Boylan
On Wed, Apr 20, 2016, at 08:44 PM, Tony Breeds wrote:
> On Thu, Apr 21, 2016 at 02:09:24PM +1200, Robert Collins wrote:
> 
> > I also argued at the time that we should aim for entirely automated
> > check-and-update. This has stalled on not figuring out how to run e.g.
> > Neutron unit tests against requirements changes - our coverage is just
> > too low at the moment to proceed further down the automation path.
> 
> I thought we knew how to do this; it just hadn't been done.  I *think*
> mostly because it's a largish project-config change.

It isn't too bad, I went ahead and pushed
https://review.openstack.org/308739 up which *should* do it (but Andreas
will likely point out something I overlooked). It is made easier by the
fact that already mostly have an integration test between requirements
and unittests for every project using the python unittest template. I
just had to make a small adjustment to how the repos are configured.

> Aiming for entirely automated is great, but getting to the point that we
> run (say) the nova, neutron, keystone, swift and horizon unit tests on
> *all* changes to upper-constraints would be fantastic, and something I'm
> keen to work on during newton (as I suspect others are)
> 
> On a tangent, we also need to get wider adoption of constraints; I admit
> I wasn't paying close attention, but I thought this was basically the
> default.  It seems I was wrong :(
> 
> Yours Tony.



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-20 Thread Tony Breeds
On Thu, Apr 21, 2016 at 02:09:24PM +1200, Robert Collins wrote:

> I also argued at the time that we should aim for entirely automated
> check-and-update. This has stalled on not figuring out how to run e.g.
> Neutron unit tests against requirements changes - our coverage is just
> too low at the moment to proceed further down the automation path.

I thought we knew how to do this; it just hadn't been done.  I *think* mostly
because it's a largish project-config change.

Aiming for entirely automated is great, but getting to the point that we run
(say) the nova, neutron, keystone, swift and horizon unit tests on *all*
changes to upper-constraints would be fantastic, and something I'm keen to
work on during newton (as I suspect others are)

On a tangent, we also need to get wider adoption of constraints; I admit I
wasn't paying close attention, but I thought this was basically the default.
It seems I was wrong :(

Yours Tony.




Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-20 Thread Robert Collins
On 18 April 2016 at 03:13, Doug Hellmann  wrote:
> I am organizing a summit session for the cross-project track to
> (re)consider how we manage our list of global dependencies [1].
> Some of the changes I propose would have a big impact, and so I
> want to ensure everyone doing packaging work for distros is available
> for the discussion. Please review the etherpad [2] and pass the
> information along to colleagues who might be interested.


Thanks for kicking this off, Doug. It's a great topic - as the thread shows :).

I have a few thoughts - and I fully intend to be at the session as
well. I don't know if I'm pro or con the specific proposal today - and
I definitely need to understand the details of the issue a bit better;
my focus has been on various testing and packaging things for a while,
and I've neglected my requirements reviews except when prompted - sorry.

I think that federated constraints/requirements raise some risks with
multi-project gating jobs. This is separate from the co-installability
requirement and instead due to the ability to end up with a multi-tree
wedge. If something happens atomically that breaks two projects'
constraints at the same time, two distinct git changes are required to
fix that. AIUI this happens maybe once every 8 weeks? In a centralised
model we
can fix that atomically within the normal CI workflow. With a
federated approach, we will have to get infra intervention. Similarly,
if there is a needle-threading situation that can end up with multiple
projects broken at the same time, and they consume each other (or both
are present in devstack jobs for the other) we can wedge. I'm thinking
e.g. changes to Nova and Neutron go through, independently but the
combination turns out to be API incompatible on the callbacks between
services or some such. Perhaps too niche to worry about?

Co-installability has very significant impact on the backwards compat
discussion: it's a major driver of the need I perceive for library
backwards compatibility (outside of client library compat with older
clouds) and I for one think we could make a bunch of stuff simpler
with a reduced co-installability story.
https://review.openstack.org/#/c/226157/ and
https://etherpad.openstack.org/p/newton-backwards-compat-libs

I'm super worried about the impact on legacy distributions though - I
don't think they're ready for it, and I don't think we're important
enough to act as a sane forcing function: but perhaps we can find some
compromise that works for everyone - or at least get distros to commit
to moving ahead in their view of the world :).

I don't think we can ditch co-installability per se though, even in a
totally containerised world: we still have the need to make each leaf
artifact in the dependency tree co-installable with all its
dependencies. That is, we can't get to a situation where
oslo.messaging and oslo.db are not co-installable, even though they
don't depend on each other, because Nova depends on both and we want
to be able to actually install Nova.
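That leaf-artifact property can at least be checked mechanically: `pip check` reports any installed distributions whose dependency pins are mutually unsatisfiable. A hedged sketch; the install line uses the example packages from this mail and needs network access, so it is left commented out:

```shell
# Create a throwaway environment and verify the installed set is
# co-installable.  `pip check` exits non-zero if any installed
# distribution's dependency pins cannot all be satisfied at once.
python3 -m venv co-venv
# co-venv/bin/pip install oslo.messaging oslo.db   # as Nova would pull in
co-venv/bin/pip check
```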

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-20 Thread Robert Collins
On 20 April 2016 at 05:44, Clint Byrum  wrote:
> Excerpts from Michał Jastrzębski's message of 2016-04-18 10:29:20 -0700:
>> What I meant is if you have liberty Nova and liberty Cinder, and you
>> want to upgrade Nova to Mitaka, you also upgrade Oslo to Mitaka and
>> Cinder which was liberty either needs to be upgraded or is broken,
>> therefore during upgrade you need to do cinder and nova at the same
>> time. DB can be snapshotted for rollbacks.
>>
>
> If we're breaking backward compatibility even across one release, that
> is a bug.  You should be able to run Liberty components with Mitaka
> Libraries. Unfortunately, the testing matrix for all of the combinations
> is huge and nobody is suggesting we try to solve that equation.

Sadly no: we don't make that guarantee today. I think we should, but
there isn't consensus - at least amongst the folk that have been
debating the backwards compat for libraries spec - that it is actually
*desirable*. Please, come to the session and help build consensus in
Austin :).

> However, to the point of distros: partial upgrades is not the model distro
> packages work under. They upgrade what they can, whether they're a rolling
> release, or 7 year cycle LTS's. When the operator says "give me the new
> release", the packages that can be upgraded, will be upgraded. And if
> Mitaka Nova is depending on something outside the upper constraints in
> another package on the system, the distro will just hold Nova back.

And presumably all of OpenStack.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-20 Thread Robert Collins
On 20 April 2016 at 04:47, Clark Boylan  wrote:
> On Tue, Apr 19, 2016, at 08:14 AM, Doug Hellmann wrote:
>> Excerpts from Jeremy Stanley's message of 2016-04-19 15:00:24 +:
>> > On 2016-04-19 09:10:11 -0400 (-0400), Doug Hellmann wrote:
>> > [...]
>> > > We have the global list and the upper constraints list, and both
>> > > are intended to be used to signal to packaging folks what we think
>> > > ought to be used. I'm glad that signaling is working, and maybe
>> > > that means you're right that we don't need to sync the list
>> > > absolutely, just as a set of "compatible" ranges.
>> > [...]
>> >
>> > When we were firming up the constraints idea in Vancouver, if my
>> > memory is correct (which it quite often is not these days), part of
>> > the long tail Robert suggested was that once constraints usage in
>> > the CI is widespread we could consider resolving it from individual
>> > requirements lists in participating projects, drop the version
>> > specifiers from the global requirements list entirely and stop
>> > trying to actively synchronize requirement version ranges in
>> > individual projects. I don't recall any objection from those of us
>> > around the table, though it was a small ad-hoc group and we
>> > certainly didn't dig too deeply into the potential caveats that
>> > might imply.
>>
>> I have no memory of that part of the conversation, but I'll take your
>> word for it.
>>
>> If I understand your description correctly, that may be another
>> alternative. Most of the reviews I've been doing are related to the
>> constraints, though, so I'm not really sure it lowers the amount of work
>> I'm seeing.
>
> This was one of my concerns with constraints when we put them in place.
> Previously we would open requirements and things would break
> periodically and we would address them. With constraints every single
> requirements update whether centralized or decentralized needs to be
> reviewed. It does add quite a bit of overhead.
>
> The argument at the time was that the time saved by not having the gate
> explode every few weeks would offset the cost of micromanaging every
> constraint update.

I also argued at the time that we should aim for entirely automated
check-and-update. This has stalled on not figuring out how to run e.g.
Neutron unit tests against requirements changes - our coverage is just
too low at the moment to proceed further down the automation path.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-20 Thread Robert Collins
On 20 April 2016 at 03:00, Jeremy Stanley  wrote:
> On 2016-04-19 09:10:11 -0400 (-0400), Doug Hellmann wrote:
> [...]
>> We have the global list and the upper constraints list, and both
>> are intended to be used to signal to packaging folks what we think
>> ought to be used. I'm glad that signaling is working, and maybe
>> that means you're right that we don't need to sync the list
>> absolutely, just as a set of "compatible" ranges.
> [...]
>
> When we were firming up the constraints idea in Vancouver, if my
> memory is correct (which it quite often is not these days), part of
> the long tail Robert suggested was that once constraints usage in
> the CI is widespread we could consider resolving it from individual
> requirements lists in participating projects, drop the version
> specifiers from the global requirements list entirely and stop
> trying to actively synchronize requirement version ranges in
> individual projects. I don't recall any objection from those of us
> around the table, though it was a small ad-hoc group and we
> certainly didn't dig too deeply into the potential caveats that
> might imply.

I think I suggested that we could remove the *versions* from
global-requirements. Constraints being in a single place is a
necessary tool unless (we have atomic-multi-branch commits via zuul ||
we never depend on two projects agreeing on compatible versions of
libraries in the CI jobs that run for any given project).

Constraints being in a single place (not necessarily a single file)
allows us to fix multi-project wedging issues with a single git commit.
Atomic multi-branch commits in zuul would allow us to fix
multi-project wedging issues if constraints are federated out to
multiple trees.
Never needing any two projects to agree on compatible versions in CI
would allow us to change things without triggering a wedge...
possibly. *detailed* thought needed here - because consider for
instance the impact of a removed release on PyPI.
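The wedge-fixing property described above can be sketched with a toy model (a deliberate simplification for illustration, not pip's actual resolver): each project declares only a lower bound, and a single shared constraints entry pins the exact version every job installs.

```python
# Toy model of the constraints mechanism discussed above. Projects
# declare version ranges; one shared constraints entry pins the exact
# version, so a single edit to the constraints list changes what every
# consuming CI job installs at once.
def satisfies(version, minimum):
    """True if a dotted version string is >= the declared minimum."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(version) >= as_tuple(minimum)

def pick_version(declared_minimum, constrained_pin):
    """Install the single pinned version, provided it fits the range."""
    if not satisfies(constrained_pin, declared_minimum):
        raise ValueError("pin %s is below declared minimum %s"
                         % (constrained_pin, declared_minimum))
    return constrained_pin

# Two projects with different lower bounds both end up installing the
# one pinned version, so their CI jobs cannot disagree (or "wedge").
print(pick_version("3.5.0", "3.8.0"))  # → 3.8.0
print(pick_version("3.2.0", "3.8.0"))  # → 3.8.0
```

Unwedging then really is one commit: change the pin in the single constraints location and every consuming job picks it up.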

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-20 Thread Thomas Goirand
tl;dr: You're right, but the point I was making was that all distros are
understaffed.

Longer version:

On 04/19/2016 06:24 PM, Ian Cordasco wrote:
>> You can also add "Ubuntu" in the list here, as absolutely all OpenStack
>> dependencies are maintained mostly by me, within Debian, and then later
> 
> "absolutely all" of OpenStack's dependencies are not maintained by you
> in Debian. A significant number are maintained by the DPMT (Debian
> Python Modules Team). The large majority are maintained by you, but not
> "absolutely all".

That's absolutely right. Though, you're probably missing my point here.

I'll explain the situation in more details, to give justice to everyone.

Ubuntu OpenStack packaging is largely done in Debian. These days, Corey
Bryant and David Dellav (who are both Canonical employees) are pushing
directly to the Git on alioth.debian.org. Unfortunately, they don't have
upload rights to Debian, so I have to review and upload their work.
Later on, the packages are synced from Debian to Ubuntu (unless they
pushed directly to avoid waiting, as I sometimes can't be as reactive as
I would like).

What I called "absolutely all OpenStack dependencies" was referring to
all the Python modules that OpenStack produces (Oslo, python-*client,
and all the other libs). So in that way, my sentence was correct.
However, there's a few general purpose libraries maintained within the
DPMT (Debian Python Module Team). Though there's also 100+ general
purpose python modules maintained within the PKG OpenStack team on
Alioth as well. It's hard to draw a clear line anyway.

And to forget nobody, I'd like to salute Ondřej Nový's work; he
completely took over all things for Swift, and slowly shifted to do more
and more packaging work. I also just gave all ACLs to my colleague Ivan,
and we decided he would take care of all things for Horizon, which is *a
lot* of work.

So anyway, let's go back to my point of argumentation and stop
digressing! :)

The point I was making is that I know there's no more staffing in
Ubuntu either (since we package everything together except the server
packages), and we're probably at approximately the same number of
people in Debian/Ubuntu as for RDO: around two and a half people full
time, with a few contributions here and there.

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-20 Thread Jeremy Stanley
On 2016-04-19 11:30:38 -0500 (-0500), Ian Cordasco wrote:
[...]
> I've argued with different downstream distributors about their own
> judgment of what portions of the patch to apply in order to fix an
> issue with an assigned CVE. It took much longer than should have
> been necessary in at least one of those cases where it did affect
> OpenStack
[...]

I won't disagree that it's a double-edged sword, but on balance
having established, organized distros managing security backporting
for their packages helps in a lot more situations of lax upstream
security posture than it hinders responsive upstreams (probably
because there are a lot more of the former than the latter). At
least it's seemed to me that a majority of vulnerability
announcements posted on the oss-sec ML come from distro security
teams as compared to upstream security teams, though this also may
just be due to having a lot more low-popularity projects packaged in
major distros and written by small teams who don't have experience
handling vulnerability reports.
-- 
Jeremy Stanley



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-20 Thread Corey Bryant
On Tue, Apr 19, 2016 at 12:24 PM, Ian Cordasco <sigmaviru...@gmail.com>
wrote:

>
>
> -Original Message-
> From: Thomas Goirand <z...@debian.org>
> Reply: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Date: April 18, 2016 at 17:21:36
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Subject:  Re: [openstack-dev] [release][requirements][packaging][summit]
> input needed on summit discussion about global requirements
>
> > Hi Doug,
> >
> > I very much welcome opening such a thread before the discussion at the
> > summit, as often, sessions are too short. Taking the time to write
> > things down first also helps having a more constructive discussion.
> >
> > Before I reply to each individual message below, let me attempt to reply
> > to the big picture seen in your etherpad. I was tempted to insert
> > comments on each lines of it, but I'm not sure how this would be
> > received, and probably it's best to attempt to reply more globally.
> >
> > From what I understand, the biggest problem you're trying to solve is
> > that managing the global-reqs is really time consuming from the release
> > team point of view, and especially its propagation to individual
> > projects. There's IMO many things that we could do to improve the
> > situation, which would be acceptable from the package maintainers point
> > of view.
> >
> > First of all, from what I could see in the etherpad, I see a lot of
> > release work which I consider not useful for anyone: not for downstream
> > distros, not upstream projects. Mostly, the propagation of the
> > global-requirements.txt to each and every individual Python library or
> > service *for OpenStack maintained libs* could be reviewed. Because 1/
> > distros will always package the highest version available in
> > upper-constraints.txt, and 2/ it doesn't really reflect a reality. As
> > you pointed out, project A may need a new feature from lib X, but
> > project B won't care. I strongly believe that we should leave lower
> > boundaries as a responsibility of individual projects. What's important,
> > though, is to make sure that the highest version released does work,
> > because that's what we will effectively package.
> >
> > What we can then later on do, at the distribution level, is artificially
> > set the lower bounds of versions to whatever we've just uploaded for a
> > given release of OpenStack. In fact, I've been doing this a lot already.
> > For example, I uploaded Eventlet 0.17.4, and then 0.18.4. There was
> > never anything in the between. Therefore, doing a dependency like:
> >
> > Depends: python-eventlet (>= 0.18.3)
> >
> > makes no sense, and I always pushed:
> >
> > Depends: python-eventlet (>= 0.18.4)
> >
> > as this reflects the reality of distros.
> >
> > If we generalize this concept, then I could push the minimum version of
> > all oslo libs into every single package for a given version of OpenStack.
> >
> > What is a lot more annoying though, is for packages which I do not
> > control directly, and which are used by many other non-OpenStack
> > packages inside the distribution. For example, Django, SQLAlchemy or
> > jQuery, to only name a few.
> >
> > I have absolutely no problem upping the lower bounds for all of
> > OpenStack components aggressively. We don't have gate jobs for the lower
> > bounds of our requirements. If we decide that it becomes the norm, I can
> > generalize and push for doing this even more. For example, after pushing
> > the update of an oslo lib B version X, I could push such requirements
> > everywhere, which in fact, would be a good thing (as this would trigger
> > rebuilds and recheck of all unit tests). Though, all of this would
> > benefit from a lot of automation and checks.
> >
> > On your etherpad, you wrote:
> >
> > "During the lead-up to preparing the final releases, one of the tracking
> > tasks we have is to ensure all projects have synced their global
> > requirements updates. This is another area where we could reduce the
> > burden on the release team."
> >
> > Well, don't bother, this doesn't reflect a reality anyway (ie: maybe
> > service X can use an older version of oslo.utils), so that's not really
> > helpful in any way.
> >
> > You also wrote:
> >
> > "Current ranges in global-requirements are large but most projects do
> > not actively test the oldest suppo

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-20 Thread Thierry Carrez

Fox, Kevin M wrote:

Thomas,

I normally side with the distros' take on making sure there is no duplication, 
but I think Thierry's point comes from two differences coming up that the 
traditional distros don't tend to account for.


(and to be fair, I normally side with the distros' take too... If you 
asked me the same question 5 years ago I would be taking exactly the 
same side as Thomas)



[...]
To Thierry's point about newer distros, there are distros today starting to 
form around Docker as a packaging device and it does not have the same issues 
that traditional distros do. Fedora/Red Hat Atomic, CoreOS, RancherOS are some 
examples. You can run incompatible rabbits on the same server. Both can be 
patched to the latest secure version, but simply incompatible with each other. 
Say a stable v1 branch and a stable v2 branch. They probably share every 
package except 1, and at a file system level actually do share all the space 
except the change.


Yes, you could imagine a container-based server distro that would deploy 
complex stacks (beyond the base system) as official containers (or 
pods). To avoid the maintenance/security/bundling nightmare, they would 
still reproducibly build those containers from a finite collection of 
base packages, but in that collection there could be multiple versions 
of the same library. If a security issue appears, you can still 
determine which base packages are affected and update them all, then 
refresh all containers that happen to use those packages.


It is totally technically doable, it would be a "sane way to maintain 
software" (just a different one), and it would meet the needs of 
everyone (the rift between distros and upstream is not affecting just 
OpenStack).


--
Thierry Carrez (ttx)



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Thomas Goirand
On 04/19/2016 03:10 PM, Doug Hellmann wrote:
>> On your etherpad, you wrote:
>>
>> "During the lead-up to preparing the final releases, one of the tracking
>> tasks we have is to ensure all projects have synced their global
>> requirements updates. This is another area where we could reduce the
>> burden on the release team."
>>
>> Well, don't bother, this doesn't reflect a reality anyway (ie: maybe
>> service X can use an older version of oslo.utils), so that's not really
>> helpful in any way.
> 
> Right, that's why I proposed stopping the whole business with managing
> the ranges. Your proposal seems to be somewhere in the middle between
> doing what we do today (obsessively keeping in sync) and what I'm
> proposing (abandoning any pretense of keeping in sync). Is that right?

Absolutely! :)

>> Though having a single version that projects are allowed to test with is
>> super important, so we can check everything can work together. IMO,
>> that's the part which should absolutely not change. Dropping that is
>> like opening a Pandora box. Relying on containers and/or venv will
>> unfortunately, not work, from my stand point.
> 
> Again, as I pointed out elsewhere in the thread, using some sort of
> environment isolation tool in upstream CI testing does not imply that
> the same or a similar tool needs to be used by distros for packaging.

But then, we still need co-installability and to avoid lib version
conflicts. Aren't we then back to square one?

It feels like I'm missing something in the reasoning here...

>> (think: we still have loads of Python 2.6 stuff like discover, argparse
>> and such that needs to be cleaned-up).
> 
> Aside: We should figure out a way to make a list of those things,
> so we can work on the cleanup.

I've pushed a few patches here and there, but I sometimes give up
forwarding the Debian work (for lack of time). I'm hereby making the
promise to always do it for the Newton cycle (as there shouldn't be too
many remaining).

> I'm certainly not proposing that. My proposal is point out the
> economics of our current situation, which is that upstream we're
> doing a lot of work that IMHO we don't need to do *for ourselves*.
> I do see its usefulness, but it's not necessary for our own CI
> testing.

Thanks for making this explicit. I was kind of double-guessing something
like that was in your mind. I'm with you now.

>>> Another alternate that might work is for downstream folks to do
>>> their own dependency management.
>>
>> I fail to understand what you think distros could do to fix the
>> situation of conflicting versions, other than bundling, which isn't
>> acceptable. Could you explain what you mean by "downstream folks to do
>> their own dependency management"?
> 
> How do you ensure that a given version of a python lib you package
> is compatible with independent tools written to use that lib that
> you also want to include in your distro?

Well, you're pointing here at a major pain point. I can give you 2
examples where it has been extremely painful for me:
- SQLA upgrades from 0.7.x to 0.8.x, then to 0.9.x, a year ago.
- The Django upgrade to 1.9 last December.

Both SQLA and Django are widely used in Debian, and when one gets
upgraded, package maintainers have no choice but to fix their packages
ASAP, to support the new version.

I don't maintain SQLA or Django, both are maintained by package
maintainers who don't really care about OpenStack, and therefore, they
uploaded new versions to Sid before OpenStack was ready.

The result is that most of OpenStack was broken in Sid for maybe 4 or 5
months when the SQLA upgrade happened.

As for Django, this broke Horizon from last December to a few days
before the release. Without heroic efforts from Horizon contributors
(mostly by Rob, the new Horizon PTL, but not only him), Horizon
would still be broken. I did a bunch of Django 1.9 patches (blind hacks
looking at other fixes and upstream doc, really...) myself too.

> Or for that matter, anything
> written in any other language that supports shared libraries.

For shared libraries breaking ABI, we have "binNMU" that the release
team can triggers (in fact, just a rebuild of packages). For API
breakage of shared libs, we have "transitions" which the Debian release
team manages. It's a major pain point. Last summer, Matthias Klose (aka
Doko) upgraded gcc from version 4.9 to 5.x, which included (if I'm not
mistaken) a breakage in libstdc++. This broke Debian unstable for more
than a month, with hundreds of packages to fix. When all was fixed,
everything migrated at once from unstable to testing (yes, that's the
difference between Debian testing and unstable: testing isn't affected
by library transition bugs).

Oh, and a few month later, Doko decided it would be fun to upgrade
Python 3 from 3.4 to 3.5! :)

>>> I started the discussion to solicit ideas, but I would prefer to
>>> avoid proposing what downstream should do because (a) I'm sure
>>> folks want options and (b) I'm 

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Fox, Kevin M
Thomas,

I normally side with the distros' take on making sure there is no duplication, 
but I think Thierry's point comes from two differences coming up that the 
traditional distros don't tend to account for.

The traditional distro is usually about a single machine being able to install 
all the things together. This often is beneficial but sometimes has major 
conflicts. OpenStack deployments often run into issues where you might want to 
run a newer X or newer Y package that is compatible. For example, I often want 
to run a liberty nova and a mitaka horizon on the same machine. Distros 
currently make that impossible with their deployment tools.

There are a few ways around this... Some sites use multiple physical or virtual 
machines to separate the distro's components so that incompatible packages can 
be installed in the same system.

To Thierry's point about newer distros, there are distros today starting to 
form around Docker as a packaging device and it does not have the same issues 
that traditional distros do. Fedora/Red Hat Atomic, CoreOS, RancherOS are some 
examples. You can run incompatible rabbits on the same server. Both can be 
patched to the latest secure version, but simply incompatible with each other. 
Say a stable v1 branch and a stable v2 branch. They probably share every 
package except 1, and at a file system level actually do share all the space 
except the change.
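The file-level sharing point above can be illustrated with a toy example (the package lists here are made up for illustration, not real image contents):

```python
# Two container images that differ by one package share everything
# else, so the disk cost of keeping the incompatible pair around is
# only the changed part, much like shared filesystem layers in Docker.
v1_image = {"python", "erlang", "openssl", "rabbitmq-3.5"}
v2_image = {"python", "erlang", "openssl", "rabbitmq-3.6"}

shared = v1_image & v2_image   # layers both images reuse
unique = v1_image ^ v2_image   # the only parts stored twice

print(sorted(shared))  # → ['erlang', 'openssl', 'python']
print(sorted(unique))  # → ['rabbitmq-3.5', 'rabbitmq-3.6']
```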

The most interesting bit about containers, to me, is that traditional 
distros are often used as source material for the container images. So it 
doesn't eliminate debs/rpms; it just puts a wall around them to make conflicts 
not so bad.

There's a whole other discussion about keeping the containers up to date: 
it's a hard problem and not well solved yet. But I do think that's fixable. Most 
likely the traditional distros will test/ship containers based on their 
traditional distro and refresh the containers when any of the deps get updated 
too. This allows the distros to continue to do the job that they are already 
great at: packaging up, testing, and releasing great software that, as a user, 
you can trust.

Thanks,
Kevin



From: Thomas Goirand [z...@debian.org]
Sent: Tuesday, April 19, 2016 3:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [release][requirements][packaging][summit] input 
needed on summit discussion about global requirements

On 04/19/2016 11:48 AM, Thierry Carrez wrote:
>> Remember that in distros, there's only a single version of a library at
>> any given time, at the exception of transitions (yes, in Red Hat it's
>> technically possible to install multiple versions, but the policy
>> strongly advocates against this).
>
> Depends on the distro. Gentoo for example lets you have multiple
> versions of libraries installed at any given time.

Earlier, I wrote that Gentoo was probably an exception, but in his
posts Matthew Thode explained that it wasn't really a good idea
even in Gentoo to bundle libs. So let's consider that's not an option
for all distros packaging OpenStack.

> I expect in the future the one-version-of-lib-at-a-time
> will more and more be seen as a self-imposed limitation

It has always been the case that it's a self-imposed limitation, because
that's the only sane way to maintain software.

> and less as a
> distribution axiom, and next-generation distros will appear that will
> solve that limitation one way or another.

I very much doubt that distributions will attempt shooting themselves in
the foot, and allow software to use deprecated, old versions of libs,
which conflict with each other. What can be done is mitigate issues, at
most. I fail to see how hiding the issue that 2 libs are conflicting, or
that component A can't support the latest version of component B can be
seen as the "next-generation" thing that solves all troubles.

I'm very much concerned that we're here, taking the same wrong approach
of thinking that containing libs and avoiding conflicts will solve troubles.
That's not the case, it just hides it. We still need to make sure that
components will support the last versions of everything at the release
time, otherwise, we'll have serious issues maintaining stable branches.

If the problem is just relieving the release team (and infra, as you
just pointed out, TTX) of a lot of work, then I have given a list of
things we could do. These things won't IMO impact quality, and will still
allow co-installability. I'll attempt to list it again, in a clearer way:

1- stop propagating the global-requirements.txt versions *of
OpenStack components* to each and every project: this isn't helping
anyone anyway, since both the gate and downstream distros are going to
use the latest version.

Example:
Let every component decide what is the minimum version of oslo.utils
that it needs, but still force 

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Thomas Goirand
Ah! Now we're getting right on the spot, where it hurts, and we're
finding the root cause. That's a very good thing. Let's dig ! :)

On 04/19/2016 05:26 PM, Chris Dent wrote:
> I, in project A, don't want to limit my requirements because project B
> has not yet upgraded itself to be able to use the new features in
> library X.

Solution: after giving project B a reasonable time, ignore it, declare
it bad, and point fingers at the contributors of project B for not doing
the needed maintenance work.

But by the way, what is that bad library X that you're talking about,
which is constantly breaking backward compatibility? Isn't *that* the
source of the issues to begin with? Shouldn't we then just avoid library
X and find a better replacement, which does *not* break the API every
few weeks?

> At the same time, people in project B don't want to have to upgrade
> themselves, when they are not ready, simply because project A wants
> to upgrade.

They are wrong. They *must* upgrade, because upstream for library X will
stop supporting the old deprecated version next week anyway.

That's a simple easy rule: the very latest version of *any* component
should always be the winner. Everyone should adapt.

Oh... and did I mention backward compatibility? Yes I did, and yet
everyone understands why the Linux kernel never breaks userland... :)

> That's an actual problem.

Which is due to bad upstream breaking compat.

> All the rest of the discussion is coming

... as ways to avoid issues due to bad code. Let's fix the code, or
avoid the bad one.

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Thomas Goirand
On 04/19/2016 11:48 AM, Thierry Carrez wrote:
>> Remember that in distros, there's only a single version of a library at
>> any given time, at the exception of transitions (yes, in Red Hat it's
>> technically possible to install multiple versions, but the policy
>> strongly advocates against this).
> 
> Depends on the distro. Gentoo for example lets you have multiple
> versions of libraries installed at any given time.

Earlier, I wrote that Gentoo was probably an exception, but in his
posts Matthew Thode explained that it wasn't really a good idea
even in Gentoo to bundle libs. So let's consider that's not an option
for all distros packaging OpenStack.

> I expect in the future the one-version-of-lib-at-a-time
> will more and more be seen as a self-imposed limitation

It has always been the case that it's a self-imposed limitation, because
that's the only sane way to maintain software.

> and less as a
> distribution axiom, and next-generation distros will appear that will
> solve that limitation one way or another.

I very much doubt that distributions will attempt shooting themselves in
the foot, and allow software to use deprecated, old versions of libs,
which conflict with each other. What can be done is mitigate issues, at
most. I fail to see how hiding the issue that 2 libs are conflicting, or
that component A can't support the latest version of component B can be
seen as the "next-generation" thing that solves all troubles.

I'm very much concerned that we're here, taking the same wrong approach
of thinking that containing libs and avoiding conflicts will solve troubles.
That's not the case, it just hides it. We still need to make sure that
components will support the last versions of everything at the release
time, otherwise, we'll have serious issues maintaining stable branches.

If the problem is just relieving the release team (and infra, as you
just pointed out, TTX) of a lot of work, then I have given a list of
things we could do. These things won't IMO impact quality, and will still
allow co-installability. I'll attempt to list it again, in a clearer way:

1- stop propagating the global-requirements.txt versions *of
OpenStack components* to each and every project: this isn't helping
anyone anyway, since both the gate and downstream distros are going to
use the latest version.

Example:
Let every component decide what is the minimum version of oslo.utils
that it needs, but still force it to support the latest
upper-constraints.txt version.

2- stop maintaining requirements for modules which are only needed by a
single project. Simply listing it (to make sure we don't have a 2nd
component that adopts it without knowing about the first) will be
enough, and this could be done in a separate file, which would point at
the project.

Example:
Take PuLP out of global-requirements.txt, and move it to "used-by.txt"
(the name of that file can be something else):
echo "PuLP: congress" >>used-by.txt

Then let Congress decide what version it needs.
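As a rough sketch of how that "used-by.txt" bookkeeping could be enforced (the file name, format, and checker here are assumptions drawn from this proposal, not an existing openstack/requirements tool):

```python
# Hypothetical checker for the used-by.txt idea above: each line maps a
# single-consumer dependency to the one project that owns it, and any
# second project trying to adopt the same module gets flagged for review.
def parse_used_by(text):
    """Parse 'module: project' lines into a {module: project} dict."""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        module, _, project = line.partition(":")
        mapping[module.strip()] = project.strip()
    return mapping

def flag_second_adopters(used_by, project, declared_modules):
    """Return modules this project declares that another project owns."""
    return sorted(module for module in declared_modules
                  if module in used_by and used_by[module] != project)

used_by = parse_used_by("PuLP: congress\n")
print(flag_second_adopters(used_by, "congress", ["PuLP"]))  # → []
print(flag_second_adopters(used_by, "nova", ["PuLP"]))      # → ['PuLP']
```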

I'm not sure yet about the consequences for 3rd party libs, this will
probably require more thinking and brainstorming, but right now, it's my
strong believe that we still need to maintain key packages in the
global-requirements.txt (I'm thinking about Alembic, SQLA and such).

Would the above relieve both infra and release teams of a lot of work?
Is this "enough", or would there still be too much work? Maybe we
could give this new workflow a try for a cycle, and see how it works
out? Please, voice your concerns, share your view, etc.

Hoping the above is constructive enough,

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Jeremy Stanley
On 2016-04-19 16:10:12 +0100 (+0100), Chris Dent wrote:
> On Tue, 19 Apr 2016, Jeremy Stanley wrote:
[...]
> > I feel like many of the people pushing this idea simply didn't
> > get to experience the pain it causes the first time around and
> > won't believe their peers who lived through it.
> 
> I feel like I have to respond to this, because I'm one pushing this (at
> least in this thread). I'll try not to take umbrage.
[...]

Sorry, no umbrage intended! At least I said "many" and not "all." ;)

> What I would like to see is that not only do packages lag master
> (they already do to some extent) but that requirements
> co-installability resolution also lags master. At master we should
> be building the future; yes, with chaos. That chaos should be
> resolved in a stabilizing process, driven by those with a need for
> stability.
[...]

Yep, I understand the sentiment. Just remember that we _did_ already
have that, and the requirements synchronization in place now grew
out of the pain points from trying to force co-installability as
problems arose rather than getting in front of them. It might not be
as terrible the second time around since we've decoupled and
modularized a lot more of OpenStack in the time since, but I don't
want us to forget why we have the solutions we have and would hate
to see us blow a release by having to hastily put them back into
place again after thinking we could just wing it.
-- 
Jeremy Stanley



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Clint Byrum
Excerpts from Thomas Goirand's message of 2016-04-19 05:59:19 -0700:
> On 04/19/2016 01:01 PM, Chris Dent wrote:
> > We also, however, need to consider what the future might look like and
> > at least for some people and situations
> 
> I agree.
> 
> > the future does not involve
> > debs or rpms of OpenStack: packages and distributions are more trouble
> > than they are worth when you want to be managing your infrastructure
> > across multiple, isolated environments.
> 
> But here, I don't. It is my view that best, for containers, is to build
> them using distribution packages.
> 
> > In that case you want
> > puppet, ansible, chef, docker, k8s, etc.
> > 
> > Sure all those things _can_ use packages to do their install but for
> > at least _some_ people that's redundant: deploy from a tag, branch,
> > commit or head of a repo.
> > 
> > That's for _some_ people.
> 
> This thinking (ie: using pip and trunk, always) was one of the reason
> for TripleO to fail, and they went back to use packages. Can we learn
> from the past?
> 

I want to clarify something here because what you say above implies that
the reason a whole lot of us stopped contributing to TripleO was that we
"failed", or that the approach was wrong.

There was never an intention to preclude packages entirely. Those of
us initially building TripleO did so with continuous delivery as the
sole purpose that _we_ had. RedHat's contributors joined in and had a
stable release focus, and eventually as our sponsors changed their mind
about what they wanted from TripleO, we stopped pushing the CD model,
because we weren't even working on the project anymore. This left a void,
and RedHat's stable release model was obviously better served by using
the packaging tools they are already familiar with, thus the project now
looks like it pivoted. But it's really more that one set of contributors
stopped working on one use case, and another set ramped up contribution
on a different one.

So please, do not use the comment above to color this discussion.
Continuous delivery is a model that people will use to great success,
and in that model, dependency management is _very_ different.



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Matthew Thode
On 04/19/2016 12:44 PM, Clint Byrum wrote:
> Excerpts from Michał Jastrzębski's message of 2016-04-18 10:29:20 -0700:
>> What I meant is if you have liberty Nova and liberty Cinder, and you
>> want to upgrade Nova to Mitaka, you also upgrade Oslo to Mitaka and
>> Cinder which was liberty either needs to be upgraded or is broken,
>> therefore during upgrade you need to do cinder and nova at the same
>> time. DB can be snapshotted for rollbacks.
>>
> 
> If we're breaking backward compatibility even across one release, that
> is a bug.  You should be able to run Liberty components with Mitaka
> Libraries. Unfortunately, the testing matrix for all of the combinations
> is huge and nobody is suggesting we try to solve that equation.
> 
> However, to the point of distros: partial upgrades is not the model distro
> packages work under. They upgrade what they can, whether they're a rolling
> release, or 7 year cycle LTS's. When the operator says "give me the new
> release", the packages that can be upgraded, will be upgraded. And if
> Mitaka Nova is depending on something outside the upper constraints in
> another package on the system, the distro will just hold Nova back.
> 
https://etherpad.openstack.org/p/newton-backwards-compat-libs is about
the test matrix problem across releases.

Also, Gentoo at least supports partial upgrades, I tend to have two
releases packaged at a time, currently liberty and mitaka.

-- 
-- Matthew Thode (prometheanfire)



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Clint Byrum
Excerpts from Matthew Thode's message of 2016-04-18 11:22:38 -0700:
> On 04/18/2016 12:33 PM, Doug Hellmann wrote:
> > Excerpts from Matthew Thode's message of 2016-04-18 10:23:37 -0500:
> >> On 04/18/2016 08:24 AM, Hayes, Graham wrote:
> >>> On 18/04/2016 13:51, Sean Dague wrote:
>  On 04/18/2016 08:22 AM, Chris Dent wrote:
> > On Mon, 18 Apr 2016, Sean Dague wrote:
> >
> >> So if you have strong feelings and ideas, why not get them out in email
> >> now? That will help in the framing of the conversation.
> >
> > I won't be at summit and I feel pretty strongly about this topic, so
> > I'll throw out my comments:
> >
> > I agree with the basic premise: In the big tent universe co-
> > installability is holding us back and is a huge cost in terms of spent
> > energy. In a world where service isolation is desirable and common
> > (whether by virtualenv, containers, different hosts, etc) targeting an
> > all-in-one install seems only to serve the purposes of all-in-one rpm-
> > or deb-based installations.
> >
> > Many (most?) people won't be doing those kinds of installations. If
> > all-in-one installations are important to the rpm- and deb- based distributions
> > then _they_ should be resolving the dependency issues local to their own
> > infrastructure (or realizing that it is too painful and start
> > containerizing or otherwise as well).
> >
> > I think making these changes will help to improve and strengthen the
> > boundaries and contracts between services. If not technically then
> > at least socially, in the sense that the negotiations that people
> > make to get things to work are about what actually matters in their
> > services, not unwinding python dependencies and the like.
> >
> > A lot of the basics of getting this to work are already in place in
> > devstack. One challenge I've run into the past is when devstack
> > plugin A has made an assumption about having access to a python
> > script provided by devstack plugin B, but it's not on $PATH or its
> > dependencies are not in the site-packages visible to the current
> > context. The solution here is to use full paths _into_ virtenvs.
> 
>  As Chris said, doing virtualenvs on the Devstack side for services is
>  pretty much there. The team looked at doing this last year, then stopped
>  due to operator feedback.
> 
>  One of the things that gets a little weird (when using devstack for
>  development) is if you actually want to see the impact of library
>  changes on the environment. As you'll need to make sure you loop and
>  install those libraries into every venv where they are used. This
>  forward reference doesn't really exist. So some tooling there will be
>  needed.
> 
>  Middleware that's pushed from one project into another (like Ceilometer
>  -> Swift) is also a funny edge case that I think get funnier here.
> 
>  Those are mostly implementation details, that probably have work
>  arounds, but would need people on them.
> 
> 
>   From a strategic perspective this would basically make traditional Linux
>  Packaging of OpenStack a lot harder. That might be the right call,
>  because traditional Linux Packaging definitely suffers from the fact
>  that everything on a host needs to be upgraded at the same time. For
>  large installs of OpenStack (especially public cloud cases) traditional
>  packages are definitely less used.
> 
>  However Linux Packaging is how a lot of people get exposed to software.
>  The power of onboarding with apt-get / yum install is a big one.
> 
>  I've been through the ups and downs of both approaches so many times now
>  in my own head, I no longer have a strong preference beyond the fact
>  that we do one approach today, and doing a different one is effort to
>  make the transition.
> 
>  -Sean
> 
> >>>
> >>> It is also worth noting that according to the OpenStack User Survey [0]
> >>> 56% of deployments use "Unmodified packages from the operating system".
> >>>
> >>> Granted it was a small sample size (302 responses to that question)
> >>> but it is worth keeping this in mind as we talk about moving the burden
> >>> to packagers.
> >>>
> >>> 0 - 
> >>> https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf 
> >>> (page 
> >>> 36)
> >>>
> >> To add to this, I'd also note that I as a packager would likely stop
> >> packaging OpenStack at whatever release this goes into. While the
> >> option to 

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Clint Byrum
Excerpts from Michał Jastrzębski's message of 2016-04-18 10:29:20 -0700:
> What I meant is if you have liberty Nova and liberty Cinder, and you
> want to upgrade Nova to Mitaka, you also upgrade Oslo to Mitaka and
> Cinder which was liberty either needs to be upgraded or is broken,
> therefore during upgrade you need to do cinder and nova at the same
> time. DB can be snapshotted for rollbacks.
> 

If we're breaking backward compatibility even across one release, that
is a bug.  You should be able to run Liberty components with Mitaka
Libraries. Unfortunately, the testing matrix for all of the combinations
is huge and nobody is suggesting we try to solve that equation.

However, to the point of distros: partial upgrades is not the model distro
packages work under. They upgrade what they can, whether they're a rolling
release, or 7 year cycle LTS's. When the operator says "give me the new
release", the packages that can be upgraded, will be upgraded. And if
Mitaka Nova is depending on something outside the upper constraints in
another package on the system, the distro will just hold Nova back.



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Clark Boylan
On Tue, Apr 19, 2016, at 08:14 AM, Doug Hellmann wrote:
> Excerpts from Jeremy Stanley's message of 2016-04-19 15:00:24 +:
> > On 2016-04-19 09:10:11 -0400 (-0400), Doug Hellmann wrote:
> > [...]
> > > We have the global list and the upper constraints list, and both
> > > are intended to be used to signal to packaging folks what we think
> > > ought to be used. I'm glad that signaling is working, and maybe
> > > that means you're right that we don't need to sync the list
> > > absolutely, just as a set of "compatible" ranges.
> > [...]
> > 
> > When we were firming up the constraints idea in Vancouver, if my
> > memory is correct (which it quite often is not these days), part of
> > the long tail Robert suggested was that once constraints usage in
> > the CI is widespread we could consider resolving it from individual
> > requirements lists in participating projects, drop the version
> > specifiers from the global requirements list entirely and stop
> > trying to actively synchronize requirement version ranges in
> > individual projects. I don't recall any objection from those of us
> > around the table, though it was a small ad-hoc group and we
> > certainly didn't dig too deeply into the potential caveats that
> > might imply.
> 
> I have no memory of that part of the conversation, but I'll take your
> word for it.
> 
> If I understand your description correctly, that may be another
> alternative. Most of the reviews I've been doing are related to the
> constraints, though, so I'm not really sure it lowers the amount of work
> I'm seeing.

This was one of my concerns with constraints when we put them in place.
Previously we would open requirements and things would break
periodically and we would address them. With constraints every single
requirements update whether centralized or decentralized needs to be
reviewed. It does add quite a bit of overhead.

The argument at the time was that the time saved by not having the gate
explode every few weeks would offset the cost of micromanaging every
constraint update.

Clark



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Matthew Thode
On 04/19/2016 11:30 AM, Ian Cordasco wrote:
>> On 2016-04-19 14:59:19 +0200 (+0200), Thomas Goirand wrote:
>>> On 04/19/2016 01:01 PM, Chris Dent wrote:
 On Tue, 19 Apr 2016, Thomas Goirand wrote:
>> [...]
> Most users are consuming packages from distributions. Also, if
> you're using containers, probably you will also prefer using
> these packages to build your containers: that's the most easy,
> safe and fast way to build your containers.

 I predict that that is not going to last.
>>>
>>> That's what everyone says, but I'm convinced the majority will be
>>> proven wrong! :)
>> [...]
>>  
>> Could just be that my beard has gotten a little too grey, but I
>> still very much prefer using stabilized software packaged by
>> traditional Linux distributions or similar Unix derivatives and
>> covered under security patched backports. My hope has always been
>> that as the rapid pace of development at the center of OpenStack
>> starts to cool (and as the press moves on and OpenStack becomes a
>> lot more boring to talk about), we'll approach the sort of ecosystem
>> stabilization needed to make that less awkward downstream.
> 
> Perhaps my beard is not grey enough, but as a developer and maintainer of 
> several of OpenStack's dependencies (some of which have needed security 
> backports) I've argued with different downstream distributors about their own 
> judgment of what portions of the patch to apply in order to fix an issue with 
> an assigned CVE. It took much longer than should have been necessary in at 
> least one of those cases where it did affect OpenStack, so perhaps I am too 
> confident in my ability to use tooling outside of distribution provided 
> packages but to date I've had better luck using the source with the 
> *complete* fixes.
> 

Well, as one of those downstream packagers I hope I'm not on that list.
This is my ordering of how I try to remediate a security issue:

1. I try to apply the entire patch to affected versions.
2. If that doesn't work and I can remove the bad versions I do that.
3. If that doesn't work I have to start getting creative :D

-- 
-- Matthew Thode (prometheanfire)





Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Ian Cordasco
 

-Original Message-
From: Jeremy Stanley <fu...@yuggoth.org>
Reply: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Date: April 19, 2016 at 09:50:27
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject:  Re: [openstack-dev] [release][requirements][packaging][summit] input 
needed on summit discussion about global requirements

> On 2016-04-19 14:59:19 +0200 (+0200), Thomas Goirand wrote:
> > On 04/19/2016 01:01 PM, Chris Dent wrote:
> > > On Tue, 19 Apr 2016, Thomas Goirand wrote:
> [...]
> > > > Most users are consuming packages from distributions. Also, if
> > > > you're using containers, probably you will also prefer using
> > > > these packages to build your containers: that's the most easy,
> > > > safe and fast way to build your containers.
> > >
> > > I predict that that is not going to last.
> >
> > That's what everyone says, but I'm convinced the majority will be
> > proven wrong! :)
> [...]
>  
> Could just be that my beard has gotten a little too grey, but I
> still very much prefer using stabilized software packaged by
> traditional Linux distributions or similar Unix derivatives and
> covered under security patched backports. My hope has always been
> that as the rapid pace of development at the center of OpenStack
> starts to cool (and as the press moves on and OpenStack becomes a
> lot more boring to talk about), we'll approach the sort of ecosystem
> stabilization needed to make that less awkward downstream.

Perhaps my beard is not grey enough, but as a developer and maintainer of 
several of OpenStack's dependencies (some of which have needed security 
backports) I've argued with different downstream distributors about their own 
judgment of what portions of the patch to apply in order to fix an issue with 
an assigned CVE. It took much longer than should have been necessary in at 
least one of those cases where it did affect OpenStack, so perhaps I am too 
confident in my ability to use tooling outside of distribution provided 
packages but to date I've had better luck using the source with the *complete* 
fixes.

--  
Ian Cordasco




Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Matthew Thode
On 04/19/2016 04:48 AM, Thierry Carrez wrote:
> Thomas Goirand wrote:
>> [...]
>> Remember that in distros, there's only a single version of a library at
>> any given time, at the exception of transitions (yes, in Red Hat it's
>> technically possible to install multiple versions, but the policy
>> strongly advocates against this).
> 
> Depends on the distro. Gentoo for example lets you have multiple
> versions of libraries installed at any given time.

For Ruby or C libraries, where the language allows that, sure, but not
for Python, where it generally isn't allowed.

>> [...]
> I say "old", since with the advent of containers this limitation is
> slowly going away. Ubuntu supports snappy packaging for container-based
> packages, for example. They could totally package OpenStack that way if
> they wanted. I expect in the future the one-version-of-lib-at-a-time
> will more and more be seen as a self-imposed limitation and less as a
> distribution axiom, and next-generation distros will appear that will
> solve that limitation one way or another.

Even if things stay the same I'm working on getting Gentoo support into
openstack-ansible (deploys via virtualenvs/containers).  So work is
progressing there too.

> That said, I still think we benefit from global requirements for the
> second reason: it provides us a mechanism to encourage dependency
> convergence. This is very important, as it limits the knowledge required
> to operate OpenStack, facilitates contributors jumping from one code
> base to another, provides a great checkpoint for licensing checks, and
> reduce our overall security exposure by limiting the body of code we
> rely on. If we dump global requirements we would have to replace it with
> a lot of manual effort to push convergence overall.
> 

Well said :D

-- 
-- Matthew Thode (prometheanfire)





Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Ian Cordasco
 

-Original Message-
From: Thomas Goirand <z...@debian.org>
Reply: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Date: April 18, 2016 at 17:21:36
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject:  Re: [openstack-dev] [release][requirements][packaging][summit] input 
needed on summit discussion about global requirements

> Hi Doug,
>  
> I very much welcome opening such a thread before the discussion at the
> summit, as often, sessions are too short. Taking the time to write
> things down first also helps having a more constructive discussion.
>  
> Before I reply to each individual message below, let me attempt to reply
> to the big picture seen in your etherpad. I was tempted to insert
> comments on each lines of it, but I'm not sure how this would be
> received, and probably it's best to attempt to reply more globally.
>  
> From what I understand, the biggest problem you're trying to solve is
> that managing the global-reqs is really time consuming from the release
> team point of view, and especially its propagation to individual
> projects. There's IMO many things that we could do to improve the
> situation, which would be acceptable from the package maintainers point
> of view.
>  
> First of all, from what I could see in the etherpad, I see a lot of
> release work which I consider not useful for anyone: not for downstream
> distros, not upstream projects. Mostly, the propagation of the
> global-requirements.txt to each and every individual Python library or
> service *for OpenStack maintained libs* could be reviewed. Because 1/
> distros will always package the highest version available in
> upper-constraints.txt, and 2/ it doesn't really reflect a reality. As
> you pointed out, project A may need a new feature from lib X, but
> project B wont care. I strongly believe that we should leave lower
> boundaries as a responsibility of individual projects. What's important,
> though, is to make sure that the highest version released does work,
> because that's what we will effectively package.
>  
> What we can then later on do, at the distribution level, is artificially
> set the lower bounds of versions to whatever we've just uploaded for a
> given release of OpenStack. In fact, I've been doing this a lot already.
> For example, I uploaded Eventlet 0.17.4, and then 0.18.4. There was
> never anything in between. Therefore, doing a dependency like:
>  
> Depends: python-eventlet (>= 0.18.3)
>  
> makes no sense, and I always pushed:
>  
> Depends: python-eventlet (>= 0.18.4)
>  
> as this reflects the reality of distros.
>  
> If we generalize this concept, then I could push the minimum version of
> all oslo libs into every single package for a given version of OpenStack.
>  
> What is a lot more annoying though, is for packages which I do not
> control directly, and which are used by many other non-OpenStack
> packages inside the distribution. For example, Django, SQLAlchemy or
> jQuery, to only name a few.
>  
> I have absolutely no problem upping the lower bounds for all of
> OpenStack components aggressively. We don't have gate jobs for the lower
> bounds of our requirements. If we decide that it becomes the norm, I can
> generalize and push for doing this even more. For example, after pushing
> the update of an oslo lib B version X, I could push such requirements
> everywhere, which in fact, would be a good thing (as this would trigger
> rebuilds and recheck of all unit tests). Though, all of this would
> benefit from a lot of automation and checks.
>  
> On your etherpad, you wrote:
>  
> "During the lead-up to preparing the final releases, one of the tracking
> tasks we have is to ensure all projects have synced their global
> requirements updates. This is another area where we could reduce the
> burden on the release team."
>  
> Well, don't bother, this doesn't reflect a reality anyway (ie: maybe
> service X can use an older version of oslo.utils), so that's not really
> helpful in any way.
>  
> You also wrote:
>  
> "Current ranges in global-requirements are large but most projects do
> not actively test the oldest supported version (or other versions in
> between) meaning that the requirement might result in broken packages."
>  
> Yeah, that's true, I've seen this and reported a few bugs (the last I
> have in memory is Neutron requiring SQLA >= 1.0.12). Though that's still
> very useful hints for package maintainers *for 3rd party libs* (as I
> wrote, it's less important for OpenStack components). We have a few
> breakages here and there, but they are hopefully fixed.
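The check Thomas describes — that the pinned upper-constraints version actually satisfies each declared global-requirements range, since the pin is what distros effectively package — can be sketched mechanically. A minimal sketch, assuming purely numeric dotted versions and hypothetical helper names (not any real requirements tooling):

```python
import re

def parse_version(v):
    # Naive comparison: assumes purely numeric dotted versions like "0.18.4".
    return tuple(int(part) for part in v.split("."))

def pin_satisfies_range(requirement, constraint):
    """Check that an upper-constraints pin (e.g. 'eventlet===0.18.4')
    falls inside a global-requirements range
    (e.g. 'eventlet>=0.17.4,!=0.18.3')."""
    name, spec = re.match(r"([A-Za-z0-9._-]+)(.*)", requirement).groups()
    cname, _, pinned = constraint.partition("===")
    if name.lower() != cname.lower():
        raise ValueError("requirement and constraint name different packages")
    pin = parse_version(pinned)
    for clause in filter(None, spec.split(",")):
        op, ver = re.match(r"\s*(>=|<=|==|!=|>|<)\s*(.+)", clause).groups()
        bound = parse_version(ver)
        matched = {">=": pin >= bound, "<=": pin <= bound,
                   "==": pin == bound, "!=": pin != bound,
                   ">": pin > bound, "<": pin < bound}[op]
        if not matched:
            return False
    return True

# Thomas's Eventlet example: the distro ships the constrained version,
# so the declared lower bound of 0.17.4 is never exercised in practice.
pin_satisfies_range("eventlet>=0.17.4,!=0.18.3", "eventlet===0.18.4")  # True
```

A real implementation would use PEP 440 version semantics rather than integer tuples; this only illustrates the shape of the check.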
> 

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Chris Dent

On Tue, 19 Apr 2016, Thomas Goirand wrote:

On 04/19/2016 01:01 PM, Chris Dent wrote:

Do I expect it all to
happen like I want? No. Do I hope that my concerns will be
integrated in the discussion? Yes.


I fail to see what kind of concerns you have with the current situation.
Could you attempt to make me understand better? What exactly is wrong
for the type of use case you're discussing?


A genesis of this thread is questioning whether we need to
maintain co-installability as a thing that obtains in the OpenStack
universe.

I think that's a limitation on improving OpenStack, especially in a
big-tent world.

I, in project A, don't want to limit my requirements because project B
has not yet upgraded itself to be able to use the new features in
library X.

At the same time, people in project B don't want to have to upgrade
themselves, when they are not ready, simply because project A wants
to upgrade.

That's an actual problem. All the rest of the discussion is coming
up with ways to manage things if we happen to get rid of that
constraint.
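The conflict Chris describes can be made concrete: under shared version ranges, there may be no release of library X that both projects accept at once. A toy sketch, with versions as plain tuples and entirely hypothetical project data:

```python
import operator

OPS = {">=": operator.ge, "<": operator.lt, "!=": operator.ne}

def acceptable(version, clauses):
    # A version satisfies a project's requirement if every clause holds.
    return all(OPS[op](version, bound) for op, bound in clauses)

def co_installable(releases, *projects):
    """Releases of one library acceptable to every project at once."""
    return [v for v in releases
            if all(acceptable(v, clauses) for clauses in projects)]

releases = [(1, 3, 0), (1, 4, 0), (2, 0, 0)]
project_a = [(">=", (2, 0, 0))]                     # wants the new features
project_b = [(">=", (1, 3, 0)), ("<", (2, 0, 0))]   # not ready to move up

co_installable(releases, project_a, project_b)  # -> []: no shared version
```

The empty intersection is exactly the co-installability deadlock: either A waits, B is forced to upgrade, or the two stop sharing one installed copy of the library.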

--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2016-04-19 15:00:24 +:
> On 2016-04-19 09:10:11 -0400 (-0400), Doug Hellmann wrote:
> [...]
> > We have the global list and the upper constraints list, and both
> > are intended to be used to signal to packaging folks what we think
> > ought to be used. I'm glad that signaling is working, and maybe
> > that means you're right that we don't need to sync the list
> > absolutely, just as a set of "compatible" ranges.
> [...]
> 
> When we were firming up the constraints idea in Vancouver, if my
> memory is correct (which it quite often is not these days), part of
> the long tail Robert suggested was that once constraints usage in
> the CI is widespread we could consider resolving it from individual
> requirements lists in participating projects, drop the version
> specifiers from the global requirements list entirely and stop
> trying to actively synchronize requirement version ranges in
> individual projects. I don't recall any objection from those of us
> around the table, though it was a small ad-hoc group and we
> certainly didn't dig too deeply into the potential caveats that
> might imply.

I have no memory of that part of the conversation, but I'll take your
word for it.

If I understand your description correctly, that may be another
alternative. Most of the reviews I've been doing are related to the
constraints, though, so I'm not really sure it lowers the amount of work
I'm seeing.

Doug



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Chris Dent

On Tue, 19 Apr 2016, Jeremy Stanley wrote:


Could just be that my beard has gotten a little too grey, but I
still very much prefer using stabilized software packaged by
traditional Linux distributions or similar Unix derivatives and
covered under security patched backports. My hope has always been
that as the rapid pace of development at the center of OpenStack
starts to cool (and as the press moves on and OpenStack becomes a
lot more boring to talk about), we'll approach the sort of ecosystem
stabilization needed to make that less awkward downstream.


Agreed that would be nice to have, but I think we are quite a long
way from it and from my standpoint our efforts to reach a state of
quality that then allows for that stabilisation is hampered by the
time spent on co-installability.

Of course that position on quality is highly dependent on where
you're standing.

But yes: Once a tool becomes a standard piece of infrastructure then
packages totally rock.


Running a production system from a bunch of discrete containers each
with their own random versions of embedded libraries either getting
upgraded willy-nilly to the latest releases or lagging/missing
critical security patches is a regression to the bad old days when
we didn't have reliable package management. I feel like many of the
people pushing this idea simply didn't get to experience the pain it
causes the first time around and won't believe their peers who lived
through it.


I feel like I have to respond to this, because I'm one pushing this (at
least in this thread). I'll try not to take umbrage. I've been running
Unix systems, in production, since '91. When RPMs came along they were
an absolute godsend and anything that was stable and wasn't already
packaged we packaged. We automated all our builds and deploys and
packages made it rock. It was great.

But:

This thread is about removing the clamps that co-installability
has on the velocity of OpenStack _development_ and (perish the term)
independent innovation.

If we want to accelerate that velocity _and_ people use OpenStack
want to use modern OpenStack there are ways for that to happen: One
of them is to be fully buzzword compliant with your devops brostars
down at the ok container cluster.

People don't have to use that stuff. They can lag and get plenty of
benefit.

What I would like to see is that not only do packages lag master
(they already do to some extent) but that requirements
co-installability resolution also lags master. At master we should
be building the future; yes, with chaos. That chaos should be
resolved in a stabilizing process, driven by those with a need for
stability.

We need to decouple some of the constraints.

--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Jeremy Stanley
On 2016-04-19 09:10:11 -0400 (-0400), Doug Hellmann wrote:
[...]
> We have the global list and the upper constraints list, and both
> are intended to be used to signal to packaging folks what we think
> ought to be used. I'm glad that signaling is working, and maybe
> that means you're right that we don't need to sync the list
> absolutely, just as a set of "compatible" ranges.
[...]

When we were firming up the constraints idea in Vancouver, if my
memory is correct (which it quite often is not these days), part of
the long tail Robert suggested was that once constraints usage in
the CI is widespread we could consider resolving it from individual
requirements lists in participating projects, drop the version
specifiers from the global requirements list entirely and stop
trying to actively synchronize requirement version ranges in
individual projects. I don't recall any objection from those of us
around the table, though it was a small ad-hoc group and we
certainly didn't dig too deeply into the potential caveats that
might imply.
-- 
Jeremy Stanley



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Jeremy Stanley
On 2016-04-19 14:59:19 +0200 (+0200), Thomas Goirand wrote:
> On 04/19/2016 01:01 PM, Chris Dent wrote:
> > On Tue, 19 Apr 2016, Thomas Goirand wrote:
[...]
> > > Most users are consuming packages from distributions. Also, if
> > > you're using containers, probably you will also prefer using
> > > these packages to build your containers: that's the most easy,
> > > safe and fast way to build your containers.
> > 
> > I predict that that is not going to last.
> 
> That's what everyone says, but I'm convinced the majority will be
> proven wrong! :)
[...]

Could just be that my beard has gotten a little too grey, but I
still very much prefer using stabilized software packaged by
traditional Linux distributions or similar Unix derivatives and
covered under security patched backports. My hope has always been
that as the rapid pace of development at the center of OpenStack
starts to cool (and as the press moves on and OpenStack becomes a
lot more boring to talk about), we'll approach the sort of ecosystem
stabilization needed to make that less awkward downstream.

Running a production system from a bunch of discrete containers each
with their own random versions of embedded libraries either getting
upgraded willy-nilly to the latest releases or lagging/missing
critical security patches is a regression to the bad old days when
we didn't have reliable package management. I feel like many of the
people pushing this idea simply didn't get to experience the pain it
causes the first time around and won't believe their peers who lived
through it.
-- 
Jeremy Stanley



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Doug Hellmann
Excerpts from Thomas Goirand's message of 2016-04-19 00:19:03 +0200:
> Hi Doug,
> 
> I very much welcome opening such a thread before the discussion at the
> summit, as often, sessions are too short. Taking the time to write
> things down first also helps having a more constructive discussion.

Thanks for your long and thoughtful response. :-)

> Before I reply to each individual message below, let me attempt to reply
> to the big picture seen in your etherpad. I was tempted to insert
> comments on each lines of it, but I'm not sure how this would be
> received, and probably it's best to attempt to reply more globally.
> 
> From what I understand, the biggest problem you're trying to solve is
> that managing the global-reqs is really time consuming from the release
> team point of view, and especially its propagation to individual
> projects. There's IMO many things that we could do to improve the
> situation, which would be acceptable from the package maintainers point
> of view.
> 
> First of all, from what I could see in the etherpad, I see a lot of
> release work which I consider not useful for anyone: not for downstream
> distros, not upstream projects. Mostly, the propagation of the
> global-requirements.txt to each and every individual Python library or
> service *for OpenStack maintained libs* could be reviewed. Because 1/
> distros will always package the highest version available in
> upper-constraints.txt, and 2/ it doesn't really reflect a reality. As
> you pointed out, project A may need a new feature from lib X, but
> project B wont care. I strongly believe that we should leave lower
> boundaries as a responsibility of individual projects. What important
> though, is to make sure that the highest version released does work,
> because that's what we will effectively package.
> 
> What we can then later on do, at the distribution level, is artificially
> set the lower bounds of versions to whatever we've just uploaded for a
> given release of OpenStack. In fact, I've been doing this a lot already.

That's great to know. We have the global list and the upper constraints
list, and both are intended to be used to signal to packaging folks what
we think ought to be used. I'm glad that signaling is working, and maybe
that means you're right that we don't need to sync the list absolutely,
just as a set of "compatible" ranges.

> For example, I uploaded Eventlet 0.17.4, and then 0.18.4. There was
> never anything in the between. Therefore, doing a dependency like:
> 
> Depends: python-eventlet (>= 0.18.3)
> 
> makes no sense, and I always pushed:
> 
> Depends: python-eventlet (>= 0.18.4)
> 
> as this reflects the reality of distros.
> 
> If we generalize this concept, then I could push the minimum version of
> all oslo libs into every single package for a given version of OpenStack.
> 
> What is a lot more annoying though, is for packages which I do not
> control directly, and which are used by many other non-OpenStack
> packages inside the distribution. For example, Django, SQLAlchemy or
> jQuery, to only name a few.
> 
> I have absolutely no problem upping the lower bounds for all of
> OpenStack components aggressively. We don't have gate jobs for the lower
> bounds of our requirements. If we decide that it becomes the norm, I can
> generalize and push for doing this even more. For example, after pushing
> the update of an oslo lib B version X, I could push such requirements
> everywhere, which in fact, would be a good thing (as this would trigger
> rebuilds and recheck of all unit tests). Though, all of this would
> benefit from a lot of automation and checks.

We don't generally raise the minimum version of a dependency unless we
actually need a new feature in that version. At least one reason for
this is to give packagers some flexibility, in case a slightly older
version of a lib works fine and is already packaged, for example.

> 
> On your etherpad, you wrote:
> 
> "During the lead-up to preparing the final releases, one of the tracking
> tasks we have is to ensure all projects have synced their global
> requirements updates. This is another area where we could reduce the
> burden on the release team."
> 
> Well, don't bother, this doesn't reflect a reality anyway (ie: maybe
> service X can use an older version of oslo.utils), so that's not really
> helpful in any way.

Right, that's why I proposed stopping the whole business with managing
the ranges. Your proposal seems to be somewhere in the middle between
doing what we do today (obsessively keeping in sync) and what I'm
proposing (abandoning any pretense of keeping in sync). Is that right?

> 
> You also wrote:
> 
> "Current ranges in global-requirements are large but most projects do
> not actively test the oldest supported version (or other versions in
> between) meaning that the requirement might result in broken packages."
> 
> Yeah, that's true, I've seen this and reported a few bugs (the last I
> have in memory is Neutron 

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Thomas Goirand
On 04/19/2016 01:01 PM, Chris Dent wrote:
> We also, however, need to consider what the future might look like and
> at least for some people and situations

I agree.

> the future does not involve
> debs or rpms of OpenStack: packages and distributions are more trouble
> than they are worth when you want to be managing your infrastructure
> across multiple, isolated environments.

But here, I don't. It is my view that the best approach, for containers,
is to build them using distribution packages.

> In that case you want
> puppet, ansible, chef, docker, k8s, etc.
> 
> Sure all those things _can_ use packages to do their install but for
> at least _some_ people that's redundant: deploy from a tag, branch,
> commit or head of a repo.
> 
> That's for _some_ people.

This thinking (ie: using pip and trunk, always) was one of the reasons
TripleO failed, and they went back to using packages. Can we learn
from the past?

There are multiple things which are done at the packaging level, like
unit testing at build time, validation of the whole repo using Tempest,
dependencies which aren't Python modules, stuff in postinst (like
managing system users, lock folders, log files, config files, etc.).
Yes, you can hack everything with puppet or shell scripts, but you will
end up reinventing the wheel, for packages which have been proven to be
working since the beginning of OpenStack. On top of that, you will not
have all the QA that is invested in packages (piuparts, lintian, etc.).

I've also seen a lot of people attempting to do automated packaging. A
lot of people. Almost always, the outcome is that there are simply too
many exceptions to manage, so it finally turns into super-complicated
scripts. There's simply no way you can take the human reviews out.

Instead of doing that, what I am hoping to do is to automate as many
things as possible, including building packages from trunk and managing
dependencies; but instead of having it produce a definitive result which
I know has a good chance of being broken, I would like to implement a
proposal bot for packaging. What I envision is a proposal bot which will:
- Check what versions of components are available in the current
OpenStack release Debian repository (probably using madison-lite), and
propose updates so that packages always (build-)depend on the latest
version that is packaged.
- Check the (test-)requirements.txt and propose updates to
debian/control, like adding missing new dependencies, and removing
deprecated ones.
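A minimal sketch of that second check — diffing the names in requirements.txt against the Depends field in debian/control — could look like the following. The parsing here is deliberately naive and the sample data is made up for illustration; the real pkgos-parse-requirements logic is more involved:

```python
import re

def parse_requirements(text):
    """Extract canonical project names from a requirements.txt body."""
    names = set()
    for line in text.splitlines():
        line = line.split('#')[0].strip()  # drop comments and whitespace
        if not line:
            continue
        # take the project name before any version specifier or marker
        m = re.match(r'([A-Za-z0-9][A-Za-z0-9._-]*)', line)
        if m:
            names.add(m.group(1).lower().replace('_', '-'))
    return names

def parse_control_depends(text):
    """Extract the python-* package names from a debian/control snippet."""
    return set(re.findall(r'python-([a-z0-9.-]+)', text))

# Made-up sample inputs, for illustration only.
requirements = "pbr>=1.6\noslo.config>=3.7.0  # Apache-2.0\nSQLAlchemy<1.1.0,>=1.0.10\n"
control = "Depends: python-pbr (>= 1.6), python-eventlet (>= 0.18.4), ${misc:Depends}\n"

wanted = parse_requirements(requirements)
declared = parse_control_depends(control)

# These two set differences are what the bot would propose as review changes.
print("missing from debian/control:", sorted(wanted - declared))
print("possibly deprecated:", sorted(declared - wanted))
```

The two diffs map directly onto the proposal-bot behaviour described above: missing entries become proposed additions, stale ones become proposed removals.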

This is already partly implemented in misc/pkgos-parse-requirements in
openstack-pkg-tools, though it doesn't work anymore since our
requirements.txt are now a lot more complex. So it would need a refresh,
and probably a refactor.

All this will bring us closer to the "package and deploy from trunk"
which many teams (including the containers people, the puppet team, and
Fuel contributors) would like to achieve.

If we do all of the above, do you still think that we need to deploy
using virtualenv and/or pip install?

The thing is, if we continue to be able to do the above, by having
workable global-requirements, it won't change the fact that you can do
what you want with containers, and not use packages if you don't want to.

> Keep in mind that I'm presenting my own opinion here. I'm quite sure
> it is different from, for example, Doug's. It's easy to conflate
> arguments such that they appear the same. My position is radical (in
> the sense that it is seeking to resolve root causes and institute
> fundamental change), I suspect Doug's is considerably more moderate
> and evolutionary.

That's what it seems, yes.

> Do I expect it all to
> happen like I want? No. Do I hope that my concerns will be
> integrated in the discussion? Yes.

I fail to see what kind of concerns you have with the current situation.
Could you attempt to make me understand better? What exactly is wrong
for the type of use case you're discussing?

>> Remember that in distros, there's only a single version of a library at
>> any given time, at the exception of transitions (yes, in Red Hat it's
>> technically possible to install multiple versions, but the policy
>> strongly advocates against this).
> 
> Yes, which is part of why distros and packaging are limiting in the
> cloud environment. When there is a security issue or other bug we don't
> want to update those libraries via packages, nor update those libraries
> across a bunch of different containers or what not.
> 
> We simply want to destroy the deployment and redeploy. With new
> stuff. We want to do that without co-dependencies.
> 
> In other words, packaging of OpenStack into rpms and debs is a short
> branch on the tree of life.

What you're talking about (ie: using containers to be able to do atomic
upgrades and rollbacks) is also possible if you use packages inside your
containers. It has also nothing to do with allowing conflicting python
module versions within the global-requirements.

>> Most users are consuming packages from 

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2016-04-19 11:48:06 +0200:
> Thomas Goirand wrote:
> > [...]
> > From what I understand, the biggest problem you're trying to solve is
> > that managing the global-reqs is really time consuming from the release
> > team point of view, and especially its propagation to individual
> > projects. There's IMO many things that we could do to improve the
> > situation, which would be acceptable from the package maintainers point
> > of view.
> 
> It's not just the release team. The machinery around syncing global 
> requirements affects QA/Infra/Stable teams as well.
> 
> > [...]
> > On 04/18/2016 02:22 PM, Chris Dent wrote:
> >> targeting an
> >> all-in-one install seems only to serve the purposes of all-in-one rpm-
> >> or deb-based installations.
> >
> > Though that's what most everyone consumes. Or are you proposing that we
> > completely stop publishing OpenStack in distros?
> 
> No, most people consume packages, but hopefully not as an all-in-one 
> installation (where you install everything on a single host).
> 
> > Remember that in distros, there's only a single version of a library at
> > any given time, at the exception of transitions (yes, in Red Hat it's
> > technically possible to install multiple versions, but the policy
> > strongly advocates against this).
> 
> Depends on the distro. Gentoo for example lets you have multiple 
> versions of libraries installed at any given time.
> 
> > [...]
> > Please take a step back. Devstack and virtualenv are for development,
> > not for production. That's not, and should not become, our target.
> 
> All-in-one installations are also for development/test/demo, not for 
> production. That should not be our target either.
> 
> 
> My point here is that we do have global dependencies for three reasons. 
> The first one is historic: because it facilitated all-in-one 
> devstack-based testing, I think we are past that one now. The second one 
> is operational: it encourages limiting the number of our dependencies 
> and the convergence across a common set of things. The third one is that 
> it facilitates the packaging work of old Linux distributions, which 
> generally have the limitation of only supporting one version of a given 
> library.
> 
> I say "old", since with the advent of containers this limitation is 
> slowly going away. Ubuntu supports snappy packaging for container-based 
> packages, for example. They could totally package OpenStack that way if 
> they wanted. I expect in the future the one-version-of-lib-at-a-time 
> will more and more be seen as a self-imposed limitation and less as a 
> distribution axiom, and next-generation distros will appear that will 
> solve that limitation one way or another.
> 
> That said, I still think we benefit from global requirements for the 
> second reason: it provides us a mechanism to encourage dependency 
> convergence. This is very important, as it limits the knowledge required 
> to operate OpenStack, facilitates contributors jumping from one code 
> base to another, provides a great checkpoint for licensing checks, and 
> reduces our overall security exposure by limiting the body of code we 
> rely on. If we dump global requirements we would have to replace it with 
> a lot of manual effort to push convergence overall.

Right, that's why I propose we keep the list of *names* as a way
to check the packages we've approved, in terms of not having lots
of redundancy and for license compatibility. A new, looser version
of the current check, run when a project's requirements.txt file is
modified, would be needed to look only at the names.
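A looser, name-only version of the check could be sketched like this; the approved-names set and helper names here are illustrative assumptions, not the actual openstack/requirements tooling:

```python
import re

# Assumed subset of approved global names, for illustration only.
GLOBAL_NAMES = {"pbr", "oslo.config", "sqlalchemy", "eventlet"}

def requirement_name(line):
    """Return the canonical project name from one requirements.txt line."""
    line = line.split('#')[0].strip()
    if not line:
        return None
    m = re.match(r'([A-Za-z0-9][A-Za-z0-9._-]*)', line)
    return m.group(1).lower() if m else None

def check_names_only(requirements_text):
    """Flag entries whose *name* is not approved, ignoring all version
    specifiers -- projects stay free to manage their own ranges."""
    return [name for name in
            (requirement_name(l) for l in requirements_text.splitlines())
            if name and name not in GLOBAL_NAMES]

reqs = "pbr>=1.6\nSQLAlchemy>=1.0.12\nleft-pad>=1.0\n"
print(check_names_only(reqs))  # only the unapproved name is flagged
```

Because the version specifiers are never inspected, a project can pin or widen its ranges freely while the gate still catches redundant or license-problematic additions by name.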

Doug



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Chris Dent


Thomas, you've made a lot of great points here about the challenges
faced by packaging and packagers of OpenStack. They are valid,
important and need to be considered. I don't want to discount the
work you do because it is awesome, important and I'm well aware that
it can be a huge pain in the ass.

We also, however, need to consider what the future might look like and
at least for some people and situations, the future does not involve
debs or rpms of OpenStack: packages and distributions are more trouble
than they are worth when you want to be managing your infrastructure
across multiple, isolated environments. In that case you want
puppet, ansible, chef, docker, k8s, etc.

Sure all those things _can_ use packages to do their install but for
at least _some_ people that's redundant: deploy from a tag, branch,
commit or head of a repo.

That's for _some_ people. For other people packages are great.

Keep in mind that I'm presenting my own opinion here. I'm quite sure
it is different from, for example, Doug's. It's easy to conflate
arguments such that they appear the same. My position is radical (in
the sense that it is seeking to resolve root causes and institute
fundamental change), I suspect Doug's is considerably more moderate
and evolutionary.

More below.


On Tue, 19 Apr 2016, Thomas Goirand wrote:

On 04/18/2016 02:22 PM, Chris Dent wrote:

targeting an
all-in-one install seems only to serve the purposes of all-in-one rpm-
or deb-based installations.


Though that's what most everyone consumes. Or are you proposing that we
completely stop publishing OpenStack in distros?


I'm not really making a proposal. I'm presenting my thoughts on the
matter to participate, by proxy, in the conversations that will
happen at summit. I'm hoping that a proposal that makes everyone
happy will come out of that. It just so happens that my position on
the matter is probably at one of the extremes. Do I expect it all to
happen like I want? No. Do I hope that my concerns will be
integrated in the discussion? Yes.


Remember that in distros, there's only a single version of a library at
any given time, at the exception of transitions (yes, in Red Hat it's
technically possible to install multiple versions, but the policy
strongly advocates against this).


Yes, which is part of why distros and packaging are limiting in the
cloud environment. When there is a security issue or other bug we don't
want to update those libraries via packages, nor update those libraries
across a bunch of different containers or what not.

We simply want to destroy the deployment and redeploy. With new
stuff. We want to do that without co-dependencies.

In other words, packaging of OpenStack into rpms and debs is a short
branch on the tree of life.


Also, all-in-one is what I use in Debian in my CI, to make sure that
everyone works together, with whatever is uploaded (see the
openstack-deploy package in Debian).


Exactly, because this is what is needed to confirm one of your
requirements (that everything works together). Not everyone has that
requirement.


Many (most?) people won't be doing those kinds of installations.


Most users are consuming packages from distributions. Also, if you're
using containers, probably you will also prefer using these packages to
build your containers: that's the most easy, safe and fast way to build
your containers.


I predict that that is not going to last.


If all-in-one installations are important to the rpm- and deb- based
distributions  then _they_ should be resolving the dependency issues
local to their own infrastructure


Who is "they"? Well, approximately 1 person full time for Debian, 1 for
Ubuntu if you combine Corey and David (and Debian and Ubuntu
dependencies are worked on together in Debian), so that's 2 full time
for Ubuntu/Debian. Maybe 2 and a half for RDO if what Haikel told me is
still accurate.


Yes, this is a significant issue and I think is one of the very
complicated aspects of the strange economics of OpenStack. It's been
clear from the start that the amount of people involved at the
distro level has been far too low to satisfy the requirements of the
users of those distributions. Far.

However, that problem is, I think, orthogonal to the question of
effectively creating OpenStack at the upstream (pre-packaging) level.


So "we" won't handle it, even if "we" care, because we're already
understaffed.


I agree. That's a problem for employers of the packagers to solve,
not OpenStack at large. You could make a pretty strong argument, of
course, that OpenStack at large _is_ those employers. That's
unfortunate and isn't the reality we should be aspiring to.


I think making these changes will help to improve and strengthen the
boundaries and contracts between services. If not technically then
at least socially, in the sense that the negotiations that people
make to get things to work are about what actually matters in their
services, not unwinding python dependencies and the like.



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-19 Thread Thierry Carrez

Thomas Goirand wrote:

[...]
From what I understand, the biggest problem you're trying to solve is
that managing the global-reqs is really time consuming from the release
team point of view, and especially its propagation to individual
projects. There's IMO many things that we could do to improve the
situation, which would be acceptable from the package maintainers point
of view.


It's not just the release team. The machinery around syncing global 
requirements affects QA/Infra/Stable teams as well.



[...]
On 04/18/2016 02:22 PM, Chris Dent wrote:

targeting an
all-in-one install seems only to serve the purposes of all-in-one rpm-
or deb-based installations.


Though that's what most everyone consumes. Or are you proposing that we
completely stop publishing OpenStack in distros?


No, most people consume packages, but hopefully not as an all-in-one 
installation (where you install everything on a single host).



Remember that in distros, there's only a single version of a library at
any given time, at the exception of transitions (yes, in Red Hat it's
technically possible to install multiple versions, but the policy
strongly advocates against this).


Depends on the distro. Gentoo for example lets you have multiple 
versions of libraries installed at any given time.



[...]
Please take a step back. Devstack and virtualenv are for development,
not for production. That's not, and should not become, our target.


All-in-one installations are also for development/test/demo, not for 
production. That should not be our target either.



My point here is that we do have global dependencies for three reasons. 
The first one is historic: because it facilitated all-in-one 
devstack-based testing, I think we are past that one now. The second one 
is operational: it encourages limiting the number of our dependencies 
and the convergence across a common set of things. The third one is that 
it facilitates the packaging work of old Linux distributions, which 
generally have the limitation of only supporting one version of a given 
library.


I say "old", since with the advent of containers this limitation is 
slowly going away. Ubuntu supports snappy packaging for container-based 
packages, for example. They could totally package OpenStack that way if 
they wanted. I expect in the future the one-version-of-lib-at-a-time 
will more and more be seen as a self-imposed limitation and less as a 
distribution axiom, and next-generation distros will appear that will 
solve that limitation one way or another.


That said, I still think we benefit from global requirements for the 
second reason: it provides us a mechanism to encourage dependency 
convergence. This is very important, as it limits the knowledge required 
to operate OpenStack, facilitates contributors jumping from one code 
base to another, provides a great checkpoint for licensing checks, and 
reduces our overall security exposure by limiting the body of code we 
rely on. If we dump global requirements we would have to replace it with 
a lot of manual effort to push convergence overall.


--
Thierry Carrez (ttx)



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Thomas Goirand
Hi Doug,

I very much welcome opening such a thread before the discussion at the
summit, as often, sessions are too short. Taking the time to write
things down first also helps having a more constructive discussion.

Before I reply to each individual message below, let me attempt to reply
to the big picture seen in your etherpad. I was tempted to insert
comments on each lines of it, but I'm not sure how this would be
received, and probably it's best to attempt to reply more globally.

From what I understand, the biggest problem you're trying to solve is
that managing the global-reqs is really time consuming from the release
team point of view, and especially its propagation to individual
projects. There's IMO many things that we could do to improve the
situation, which would be acceptable from the package maintainers point
of view.

First of all, from what I could see in the etherpad, I see a lot of
release work which I consider not useful for anyone: not for downstream
distros, not upstream projects. Mostly, the propagation of the
global-requirements.txt to each and every individual Python library or
service *for OpenStack maintained libs* could be reviewed. Because 1/
distros will always package the highest version available in
upper-constraints.txt, and 2/ it doesn't really reflect a reality. As
you pointed out, project A may need a new feature from lib X, but
project B won't care. I strongly believe that we should leave lower
boundaries as a responsibility of individual projects. What's important,
though, is to make sure that the highest version released does work,
because that's what we will effectively package.

What we can then later on do, at the distribution level, is artificially
set the lower bounds of versions to whatever we've just uploaded for a
given release of OpenStack. In fact, I've been doing this a lot already.
For example, I uploaded Eventlet 0.17.4, and then 0.18.4. There was
never anything in the between. Therefore, doing a dependency like:

Depends: python-eventlet (>= 0.18.3)

makes no sense, and I always pushed:

Depends: python-eventlet (>= 0.18.4)

as this reflects the reality of distros.

If we generalize this concept, then I could push the minimum version of
all oslo libs into every single package for a given version of OpenStack.
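The Eventlet example generalizes naturally: a small helper could derive every package's Debian lower bound directly from the version pins that were actually uploaded. This is only an illustrative sketch with made-up constraint data, not a real packaging tool:

```python
def depends_from_constraints(constraints):
    """Turn 'name===version' pins into Debian Depends entries whose
    lower bound is exactly the version that was packaged."""
    depends = []
    for line in constraints.splitlines():
        line = line.strip()
        if not line or '===' not in line:
            continue
        name, version = line.split('===', 1)
        depends.append("python-%s (>= %s)" % (name.lower(), version))
    return depends

# Made-up sample pins in the upper-constraints style.
sample = "eventlet===0.18.4\noslo.config===3.9.0\n"
for dep in depends_from_constraints(sample):
    print(dep)
```

Each emitted entry reflects the distro reality described above: the lower bound is the version that was actually uploaded, not whatever range upstream happens to declare.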

What is a lot more annoying though, is for packages which I do not
control directly, and which are used by many other non-OpenStack
packages inside the distribution. For example, Django, SQLAlchemy or
jQuery, to only name a few.

I have absolutely no problem upping the lower bounds for all of
OpenStack components aggressively. We don't have gate jobs for the lower
bounds of our requirements. If we decide that it becomes the norm, I can
generalize and push for doing this even more. For example, after pushing
the update of an oslo lib B version X, I could push such requirements
everywhere, which in fact, would be a good thing (as this would trigger
rebuilds and recheck of all unit tests). Though, all of this would
benefit from a lot of automation and checks.
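For what it's worth, a lower-bounds gate job could be approximated by rewriting each requirement range to the lowest version it claims to allow, then installing and testing against those pins. This is a hypothetical sketch, not an existing job:

```python
import re

def lowest_version(spec):
    """Given a specifier like '<1.1.0,>=1.0.10', return the lower bound
    declared with >= or ==, or None if the spec has no lower bound."""
    for clause in spec.split(','):
        m = re.match(r'^\s*(?:>=|==)\s*([0-9][0-9a-zA-Z.]*)\s*$', clause)
        if m:
            return m.group(1)
    return None

def pin_to_lower_bounds(requirements_text):
    """Rewrite requirements lines to 'name==<lower bound>' pins suitable
    for a hypothetical lower-bounds test install."""
    pins = []
    for line in requirements_text.splitlines():
        line = line.split('#')[0].strip()  # drop comments
        if not line:
            continue
        m = re.match(r'([A-Za-z0-9][A-Za-z0-9._-]*)(.*)$', line)
        if not m:
            continue
        low = lowest_version(m.group(2))
        if low:
            pins.append("%s==%s" % (m.group(1), low))
    return pins

reqs = "SQLAlchemy<1.1.0,>=1.0.10\npbr>=1.6  # Apache-2.0\n"
print(pin_to_lower_bounds(reqs))
```

Running unit tests against such pins would catch cases like the Neutron/SQLAlchemy one mentioned below, where the declared minimum does not actually work.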

On your etherpad, you wrote:

"During the lead-up to preparing the final releases, one of the tracking
tasks we have is to ensure all projects have synced their global
requirements updates. This is another area where we could reduce the
burden on the release team."

Well, don't bother, this doesn't reflect a reality anyway (ie: maybe
service X can use an older version of oslo.utils), so that's not really
helpful in any way.

You also wrote:

"Current ranges in global-requirements are large but most projects do
not actively test the oldest supported version (or other versions in
between) meaning that the requirement might result in broken packages."

Yeah, that's true, I've seen this and reported a few bugs (the last I
have in memory is Neutron requiring SQLA >= 1.0.12). Though those are
still very useful hints for package maintainers *for 3rd party libs* (as
I wrote, it's less important for OpenStack components). We have a few
breakages here and there, but they hopefully get fixed.

Though having a single version that projects are allowed to test with is
super important, so we can check that everything works together. IMO,
that's the part which should absolutely not change. Dropping that is
like opening Pandora's box. Relying on containers and/or venvs will,
unfortunately, not work, from my standpoint.

The general rule for a distribution is that the highest version always
wins, otherwise it's never maintainable (for security and bug fixes). It
should be the case for *any program*, not just OpenStack
components. There's never a case where it's ok to use something older,
just because it feels like less work to do. This type of "laziness"
leads to very dangerous outcomes, always.

Though I don't see any issue with a project willing to keep backward
compatibility with a lower version than what other projects do. In fact,
it's highly desirable to always try to remain compatible with lower

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Matthew Thode
On 04/18/2016 02:10 PM, Jeremy Stanley wrote:
> On 2016-04-18 13:58:03 -0500 (-0500), Matthew Thode wrote:
>> Ya, I'd be happy to work more with upstream.  I already review the
>> stable-reqs updates and watch them for the stable branches I package
>> for.  Not sure what else is needed.
> 
> Reviewing the master branch openstack/requirements repository
> changes (to make sure deps being added are going to be sane things
> for someone in your distro to maintain packages of in the long term)
> would also make sense.
> 
> https://review.openstack.org/#/q/project:openstack/requirements+status:open
> 
We can (and do) maintain multiple versions of packages available to be
installed.  The problem is that dependencies might conflict.  That's
what I'd like to avoid.
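
The conflict Matthew describes can be sketched as two packages whose declared ranges for a shared dependency have an empty intersection, so no single installed version satisfies both (the ranges below are made up for illustration):

```python
# Sketch (invented ranges): a dependency conflict is an empty
# intersection between two [minimum, below) version ranges.

def parse(version):
    return tuple(int(part) for part in version.split("."))

def intersect(a, b):
    """Intersect two [minimum, below) ranges; None if they are disjoint."""
    lo = max(parse(a[0]), parse(b[0]))
    hi = min(parse(a[1]), parse(b[1]))
    return (lo, hi) if lo < hi else None

old_service = ("2.0.0", "3.0.0")   # e.g. a stable-branch requirement
new_service = ("3.5.0", "4.0.0")   # e.g. a master-branch requirement

assert intersect(old_service, new_service) is None   # unsatisfiable together
assert intersect(("2.0.0", "3.0.0"), ("2.5.0", "4.0.0")) == \
    ((2, 5, 0), (3, 0, 0))                           # overlap: co-installable
```

Having multiple versions of a package available, as Matthew notes, doesn't help here: only one can occupy the shared site-packages at a time.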

-- 
Matthew Thode (prometheanfire)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2016-04-18 19:10:52 +:
> On 2016-04-18 13:58:03 -0500 (-0500), Matthew Thode wrote:
> > Ya, I'd be happy to work more with upstream.  I already review the
> > stable-reqs updates and watch them for the stable branches I package
> > for.  Not sure what else is needed.
> 
> Reviewing the master branch openstack/requirements repository
> changes (to make sure deps being added are going to be sane things
> for someone in your distro to maintain packages of in the long term)
> would also make sense.
> 
> https://review.openstack.org/#/q/project:openstack/requirements+status:open

Right, we see far far more changes on master than on the stable
branches.

Doug



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Jeremy Stanley
On 2016-04-18 13:58:03 -0500 (-0500), Matthew Thode wrote:
> Ya, I'd be happy to work more with upstream.  I already review the
> stable-reqs updates and watch them for the stable branches I package
> for.  Not sure what else is needed.

Reviewing the master branch openstack/requirements repository
changes (to make sure deps being added are going to be sane things
for someone in your distro to maintain packages of in the long term)
would also make sense.

https://review.openstack.org/#/q/project:openstack/requirements+status:open
-- 
Jeremy Stanley



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Matthew Thode
On 04/18/2016 01:40 PM, Doug Hellmann wrote:
> Excerpts from Matthew Thode's message of 2016-04-18 13:22:38 -0500:
>> On 04/18/2016 12:33 PM, Doug Hellmann wrote:
>>> Excerpts from Matthew Thode's message of 2016-04-18 10:23:37 -0500:
 On 04/18/2016 08:24 AM, Hayes, Graham wrote:
> On 18/04/2016 13:51, Sean Dague wrote:
>> On 04/18/2016 08:22 AM, Chris Dent wrote:
>>> On Mon, 18 Apr 2016, Sean Dague wrote:
>>>
 So if you have strong feelings and ideas, why not get them out in email
 now? That will help in the framing of the conversation.
>>>
>>> I won't be at summit and I feel pretty strongly about this topic, so
>>> I'll throw out my comments:
>>>
>>> I agree with the basic premise: In the big tent universe co-
>>> installability is holding us back and is a huge cost in terms of spent
>>> energy. In a world where service isolation is desirable and common
>>> (whether by virtualenv, containers, different hosts, etc) targeting an
>>> all-in-one install seems only to serve the purposes of all-in-one rpm-
>>> or deb-based installations.
>>>
>>> Many (most?) people won't be doing those kinds of installations. If
>>> all-in-one installations are important to the rpm- and deb- based distributions
>>> then _they_ should be resolving the dependency issues local to their own
>>> infrastructure (or realizing that it is too painful and start
>>> containerizing or otherwise as well).
>>>
>>> I think making these changes will help to improve and strengthen the
>>> boundaries and contracts between services. If not technically then
>>> at least socially, in the sense that the negotiations that people
>>> make to get things to work are about what actually matters in their
>>> services, not unwinding python dependencies and the like.
>>>
>>> A lot of the basics of getting this to work are already in place in
>>> devstack. One challenge I've run into in the past is when devstack
>>> plugin A has made an assumption about having access to a python
>>> script provided by devstack plugin B, but it's not on $PATH or its
>>> dependencies are not in the site-packages visible to the current
>>> context. The solution here is to use full paths _into_ virtualenvs.
>>
>> As Chris said, doing virtualenvs on the Devstack side for services is
>> pretty much there. The team looked at doing this last year, then stopped
>> due to operator feedback.
>>
>> One of the things that gets a little weird (when using devstack for
>> development) is if you actually want to see the impact of library
>> changes on the environment. As you'll need to make sure you loop and
>> install those libraries into every venv where they are used. This
>> forward reference doesn't really exist. So some tooling there will be
>> needed.
>>
>> Middleware that's pushed from one project into another (like Ceilometer
>> -> Swift) is also a funny edge case that I think get funnier here.
>>
>> Those are mostly implementation details, that probably have work
>> arounds, but would need people on them.
>>
>>
>>  From a strategic perspective this would basically make traditional Linux
>> Packaging of OpenStack a lot harder. That might be the right call,
>> because traditional Linux Packaging definitely suffers from the fact
>> that everything on a host needs to be upgraded at the same time. For
>> large installs of OpenStack (especially public cloud cases) traditional
>> packages are definitely less used.
>>
>> However Linux Packaging is how a lot of people get exposed to software.
>> The power of onboarding with apt-get / yum install is a big one.
>>
>> I've been through the ups and downs of both approaches so many times now
>> in my own head, I no longer have a strong preference beyond the fact
>> that we do one approach today, and doing a different one is effort to
>> make the transition.
>>
>> -Sean
>>
>
> It is also worth noting that according to the OpenStack User Survey [0]
> 56% of deployments use "Unmodified packages from the operating system".
>
> Granted it was a small sample size (302 responses to that question)
> but it is worth keeping this in mind as we talk about moving the burden
> to packagers.
>
> 0 - 
> https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf 
> (page 36)
>
 To add to this, I'd also note that I as a packager would likely stop
packaging OpenStack at whatever 

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2016-04-18 13:22:38 -0500:
> On 04/18/2016 12:33 PM, Doug Hellmann wrote:
> > Excerpts from Matthew Thode's message of 2016-04-18 10:23:37 -0500:
> >> On 04/18/2016 08:24 AM, Hayes, Graham wrote:
> >>> On 18/04/2016 13:51, Sean Dague wrote:
>  On 04/18/2016 08:22 AM, Chris Dent wrote:
> > On Mon, 18 Apr 2016, Sean Dague wrote:
> >
> >> So if you have strong feelings and ideas, why not get them out in email
> >> now? That will help in the framing of the conversation.
> >
> > I won't be at summit and I feel pretty strongly about this topic, so
> > I'll throw out my comments:
> >
> > I agree with the basic premise: In the big tent universe co-
> > installability is holding us back and is a huge cost in terms of spent
> > energy. In a world where service isolation is desirable and common
> > (whether by virtualenv, containers, different hosts, etc) targeting an
> > all-in-one install seems only to serve the purposes of all-in-one rpm-
> > or deb-based installations.
> >
> > Many (most?) people won't be doing those kinds of installations. If
> > all-in-one installations are important to the rpm- and deb- based distributions
> > then _they_ should be resolving the dependency issues local to their own
> > infrastructure (or realizing that it is too painful and start
> > containerizing or otherwise as well).
> >
> > I think making these changes will help to improve and strengthen the
> > boundaries and contracts between services. If not technically then
> > at least socially, in the sense that the negotiations that people
> > make to get things to work are about what actually matters in their
> > services, not unwinding python dependencies and the like.
> >
> > A lot of the basics of getting this to work are already in place in
> > devstack. One challenge I've run into in the past is when devstack
> > plugin A has made an assumption about having access to a python
> > script provided by devstack plugin B, but it's not on $PATH or its
> > dependencies are not in the site-packages visible to the current
> > context. The solution here is to use full paths _into_ virtualenvs.
> 
>  As Chris said, doing virtualenvs on the Devstack side for services is
>  pretty much there. The team looked at doing this last year, then stopped
>  due to operator feedback.
> 
>  One of the things that gets a little weird (when using devstack for
>  development) is if you actually want to see the impact of library
>  changes on the environment. As you'll need to make sure you loop and
>  install those libraries into every venv where they are used. This
>  forward reference doesn't really exist. So some tooling there will be
>  needed.
> 
>  Middleware that's pushed from one project into another (like Ceilometer
>  -> Swift) is also a funny edge case that I think get funnier here.
> 
>  Those are mostly implementation details, that probably have work
>  arounds, but would need people on them.
> 
> 
>   From a strategic perspective this would basically make traditional Linux
>  Packaging of OpenStack a lot harder. That might be the right call,
>  because traditional Linux Packaging definitely suffers from the fact
>  that everything on a host needs to be upgraded at the same time. For
>  large installs of OpenStack (especially public cloud cases) traditional
>  packages are definitely less used.
> 
>  However Linux Packaging is how a lot of people get exposed to software.
>  The power of onboarding with apt-get / yum install is a big one.
> 
>  I've been through the ups and downs of both approaches so many times now
>  in my own head, I no longer have a strong preference beyond the fact
>  that we do one approach today, and doing a different one is effort to
>  make the transition.
> 
>  -Sean
> 
> >>>
> >>> It is also worth noting that according to the OpenStack User Survey [0]
> >>> 56% of deployments use "Unmodified packages from the operating system".
> >>>
> >>> Granted it was a small sample size (302 responses to that question)
> >>> but it is worth keeping this in mind as we talk about moving the burden
> >>> to packagers.
> >>>
> >>> 0 - 
> >>> https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf 
> >>> (page 36)
> >>>
> >> To add to this, I'd also note that I as a packager would likely stop
> >> packaging OpenStack at whatever release this goes into.  While the
> >> option to 

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2016-04-18 13:49:31 -0400:
> On 04/18/2016 01:33 PM, Doug Hellmann wrote:
> > Excerpts from Matthew Thode's message of 2016-04-18 10:23:37 -0500:
> 
> >> To add to this, I'd also note that I as a packager would likely stop
> >> packaging OpenStack at whatever release this goes into.  While the
> >> option to package and ship a virtualenv installed to /usr/local or /opt
> >> exists, bundling is not something that should be supported given the
> >> issues it can have (update cadence and security issues mainly).
> > 
> > That's a useful data point, but it comes across as a threat and I'm
> > having trouble taking it as a constructive comment.
> > 
> > Can you truly not imagine any other useful way to package OpenStack
> > other than individual packages with shared dependencies that would
> > be acceptable?
> 
> I think it's important to realize that if we go down this route, I'd
> expect a lot of community distros to take that standpoint. Only
> distros with a product will be able to take on the work.
> 
> We often get annoyed with projects in our own space being "special
> snowflakes" and doing things differently. OpenStack demanding that every
> component has a copy of its own dependencies is definitely being a
> special snowflake to the distros. And for those not building product,
> it's probably just going to be too much work. I'd rather be thankful for
> Matthew's honesty about that up front instead of not saying anything,
> and it getting quietly dropped, and people being mad later.

That's fair. It's still bothersome that the answer is "we'd walk away
from you" rather than "we understand the pressure our requirement places
on you and would like to work on a solution with you."

> 
> A lot of distros specifically have policies against this kind of
> bundling as well, because of security issues like this (which was so
> very bad) - http://www.zlib.net/advisory-2002-03-11.txt
> 
> How to mitigate that kind of issue and "fleet deploy" CVEed libraries in
> these environments is definitely an open question, especially as it
> doesn't fit into the security stream and tools that distros have built
> over the last couple of decades.

Yep. That's why I'm not trying to prescribe a solution. Our upstream
solution can be pretty light-weight, and that leaves room for
downstream folks to make different choices.

Doug

> 
> -Sean
> 



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Matthew Thode
On 04/18/2016 12:33 PM, Doug Hellmann wrote:
> Excerpts from Matthew Thode's message of 2016-04-18 10:23:37 -0500:
>> On 04/18/2016 08:24 AM, Hayes, Graham wrote:
>>> On 18/04/2016 13:51, Sean Dague wrote:
 On 04/18/2016 08:22 AM, Chris Dent wrote:
> On Mon, 18 Apr 2016, Sean Dague wrote:
>
>> So if you have strong feelings and ideas, why not get them out in email
>> now? That will help in the framing of the conversation.
>
> I won't be at summit and I feel pretty strongly about this topic, so
> I'll throw out my comments:
>
> I agree with the basic premise: In the big tent universe co-
> installability is holding us back and is a huge cost in terms of spent
> energy. In a world where service isolation is desirable and common
> (whether by virtualenv, containers, different hosts, etc) targeting an
> all-in-one install seems only to serve the purposes of all-in-one rpm-
> or deb-based installations.
>
> Many (most?) people won't be doing those kinds of installations. If
> all-in-one installations are important to the rpm- and deb- based distributions
> then _they_ should be resolving the dependency issues local to their own
> infrastructure (or realizing that it is too painful and start
> containerizing or otherwise as well).
>
> I think making these changes will help to improve and strengthen the
> boundaries and contracts between services. If not technically then
> at least socially, in the sense that the negotiations that people
> make to get things to work are about what actually matters in their
> services, not unwinding python dependencies and the like.
>
> A lot of the basics of getting this to work are already in place in
> devstack. One challenge I've run into in the past is when devstack
> plugin A has made an assumption about having access to a python
> script provided by devstack plugin B, but it's not on $PATH or its
> dependencies are not in the site-packages visible to the current
> context. The solution here is to use full paths _into_ virtualenvs.

 As Chris said, doing virtualenvs on the Devstack side for services is
 pretty much there. The team looked at doing this last year, then stopped
 due to operator feedback.

 One of the things that gets a little weird (when using devstack for
 development) is if you actually want to see the impact of library
 changes on the environment. As you'll need to make sure you loop and
 install those libraries into every venv where they are used. This
 forward reference doesn't really exist. So some tooling there will be
 needed.

 Middleware that's pushed from one project into another (like Ceilometer
 -> Swift) is also a funny edge case that I think get funnier here.

 Those are mostly implementation details, that probably have work
 arounds, but would need people on them.


  From a strategic perspective this would basically make traditional Linux
 Packaging of OpenStack a lot harder. That might be the right call,
 because traditional Linux Packaging definitely suffers from the fact
 that everything on a host needs to be upgraded at the same time. For
 large installs of OpenStack (especially public cloud cases) traditional
 packages are definitely less used.

 However Linux Packaging is how a lot of people get exposed to software.
 The power of onboarding with apt-get / yum install is a big one.

 I've been through the ups and downs of both approaches so many times now
 in my own head, I no longer have a strong preference beyond the fact
 that we do one approach today, and doing a different one is effort to
 make the transition.

 -Sean

>>>
>>> It is also worth noting that according to the OpenStack User Survey [0]
>>> 56% of deployments use "Unmodified packages from the operating system".
>>>
>>> Granted it was a small sample size (302 responses to that question)
>>> but it is worth keeping this in mind as we talk about moving the burden
>>> to packagers.
>>>
>>> 0 - 
>>> https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf 
>>> (page 36)
>>>
>> To add to this, I'd also note that I as a packager would likely stop
>> packaging OpenStack at whatever release this goes into.  While the
>> option to package and ship a virtualenv installed to /usr/local or /opt
>> exists, bundling is not something that should be supported given the
>> issues it can have (update cadence and security issues mainly).
> 
> That's a useful data point, but it comes across as a threat 

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Hayes, Graham
 On Mon, 18 Apr 2016, Sean Dague wrote:



 Many (most?) people won't be doing those kinds of installations. If all-in-
 one installations are important to the rpm- and deb- based distributions
 then _they_ should be resolving the dependency issues local to their own
 infrastructure (or realizing that it is too painful and start
 containerizing or otherwise as well).


Sorry - I was responding to the point above - I should have made that
clearer.





Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Sean Dague
On 04/18/2016 01:33 PM, Doug Hellmann wrote:
> Excerpts from Matthew Thode's message of 2016-04-18 10:23:37 -0500:

>> To add to this, I'd also note that I as a packager would likely stop
>> packaging OpenStack at whatever release this goes into.  While the
>> option to package and ship a virtualenv installed to /usr/local or /opt
>> exists, bundling is not something that should be supported given the
>> issues it can have (update cadence and security issues mainly).
> 
> That's a useful data point, but it comes across as a threat and I'm
> having trouble taking it as a constructive comment.
> 
> Can you truly not imagine any other useful way to package OpenStack
> other than individual packages with shared dependencies that would
> be acceptable?

I think it's important to realize that if we go down this route, I'd
expect a lot of community distros to take that standpoint. Only
distros with a product will be able to take on the work.

We often get annoyed with projects in our own space being "special
snowflakes" and doing things differently. OpenStack demanding that every
component has a copy of its own dependencies is definitely being a
special snowflake to the distros. And for those not building product,
it's probably just going to be too much work. I'd rather be thankful for
Matthew's honesty about that up front instead of not saying anything,
and it getting quietly dropped, and people being mad later.

A lot of distros specifically have policies against this kind of
bundling as well, because of security issues like this (which was so
very bad) - http://www.zlib.net/advisory-2002-03-11.txt

How to mitigate that kind of issue and "fleet deploy" CVEed libraries in
these environments is definitely an open question, especially as it
doesn't fit into the security stream and tools that distros have built
over the last couple of decades.
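
One way to frame the "fleet deploy" question above, as a sketch with hypothetical venv paths and an invented package/version (real tooling would read each venv's site-packages; plain dicts stand in for that here): per environment, does any venv still carry a copy of the library older than the version that fixes the CVE?

```python
# Sketch (made-up data): find venvs still carrying a vulnerable copy of
# a bundled library, i.e. a version older than the CVE fix.

def parse(version):
    return tuple(int(part) for part in version.split("."))

# Stand-in for scanning each venv's installed distributions:
venvs = {
    "/opt/nova/venv":   {"examplelib": "1.1.3", "oslo.utils": "3.7.0"},
    "/opt/cinder/venv": {"examplelib": "1.1.4", "oslo.utils": "3.7.0"},
    "/opt/glance/venv": {"oslo.utils": "3.6.0"},
}
package, fixed_in = "examplelib", "1.1.4"   # CVE fixed at this version

needs_update = sorted(
    path for path, installed in venvs.items()
    if package in installed and parse(installed[package]) < parse(fixed_in)
)
assert needs_update == ["/opt/nova/venv"]
```

The hard part distros solve today is doing this inventory once, centrally; with per-service bundling every deployment has to run the equivalent of this scan itself.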

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Doug Hellmann
Excerpts from Hayes, Graham's message of 2016-04-18 13:24:40 +:
> On 18/04/2016 13:51, Sean Dague wrote:
> > On 04/18/2016 08:22 AM, Chris Dent wrote:
> >> On Mon, 18 Apr 2016, Sean Dague wrote:
> >>
> >>> So if you have strong feelings and ideas, why not get them out in email
> >>> now? That will help in the framing of the conversation.
> >>
> >> I won't be at summit and I feel pretty strongly about this topic, so
> >> I'll throw out my comments:
> >>
> >> I agree with the basic premise: In the big tent universe co-
> >> installability is holding us back and is a huge cost in terms of spent
> >> energy. In a world where service isolation is desirable and common
> >> (whether by virtualenv, containers, different hosts, etc) targeting an
> >> all-in-one install seems only to serve the purposes of all-in-one rpm-
> >> or deb-based installations.
> >>
> >> Many (most?) people won't be doing those kinds of installations. If
> >> all-in-one installations are important to the rpm- and deb- based distributions
> >> then _they_ should be resolving the dependency issues local to their own
> >> infrastructure (or realizing that it is too painful and start
> >> containerizing or otherwise as well).
> >>
> >> I think making these changes will help to improve and strengthen the
> >> boundaries and contracts between services. If not technically then
> >> at least socially, in the sense that the negotiations that people
> >> make to get things to work are about what actually matters in their
> >> services, not unwinding python dependencies and the like.
> >>
> >> A lot of the basics of getting this to work are already in place in
> >> devstack. One challenge I've run into in the past is when devstack
> >> plugin A has made an assumption about having access to a python
> >> script provided by devstack plugin B, but it's not on $PATH or its
> >> dependencies are not in the site-packages visible to the current
> >> context. The solution here is to use full paths _into_ virtualenvs.
> >
> > As Chris said, doing virtualenvs on the Devstack side for services is
> > pretty much there. The team looked at doing this last year, then stopped
> > due to operator feedback.
> >
> > One of the things that gets a little weird (when using devstack for
> > development) is if you actually want to see the impact of library
> > changes on the environment. As you'll need to make sure you loop and
> > install those libraries into every venv where they are used. This
> > forward reference doesn't really exist. So some tooling there will be
> > needed.
> >
> > Middleware that's pushed from one project into another (like Ceilometer
> > -> Swift) is also a funny edge case that I think get funnier here.
> >
> > Those are mostly implementation details, that probably have work
> > arounds, but would need people on them.
> >
> >
> >  From a strategic perspective this would basically make traditional Linux
> > Packaging of OpenStack a lot harder. That might be the right call,
> > because traditional Linux Packaging definitely suffers from the fact
> > that everything on a host needs to be upgraded at the same time. For
> > large installs of OpenStack (especially public cloud cases) traditional
> > packages are definitely less used.
> >
> > However Linux Packaging is how a lot of people get exposed to software.
> > The power of onboarding with apt-get / yum install is a big one.
> >
> > I've been through the ups and downs of both approaches so many times now
> > in my own head, I no longer have a strong preference beyond the fact
> > that we do one approach today, and doing a different one is effort to
> > make the transition.
> >
> > -Sean
> >
> 
> It is also worth noting that according to the OpenStack User Survey [0]
> 56% of deployments use "Unmodified packages from the operating system".
> 
> Granted it was a small sample size (302 responses to that question)
> but it is worth keeping this in mind as we talk about moving the burden
> to packagers.

To be clear, "Moving the burden to packagers" is not the only option
available to us. I've proposed one option for eliminating the issue,
which has some benefits for us upstream but obviously introduces
some other issues we would need to resolve. Another option is for
more people to get involved in managing the dependency list. Some
(most? all?) of those new people may come from distros, and sharing
the effort among them would make it easier than each of them doing
all of the work individually. Sort of like an open source project.

Doug

> 
> 0 - 
> https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf 
> (page 36)
> 



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2016-04-18 10:23:37 -0500:
> On 04/18/2016 08:24 AM, Hayes, Graham wrote:
> > On 18/04/2016 13:51, Sean Dague wrote:
> >> On 04/18/2016 08:22 AM, Chris Dent wrote:
> >>> On Mon, 18 Apr 2016, Sean Dague wrote:
> >>>
>  So if you have strong feelings and ideas, why not get them out in email
>  now? That will help in the framing of the conversation.
> >>>
> >>> I won't be at summit and I feel pretty strongly about this topic, so
> >>> I'll throw out my comments:
> >>>
> >>> I agree with the basic premise: In the big tent universe co-
> >>> installability is holding us back and is a huge cost in terms of spent
> >>> energy. In a world where service isolation is desirable and common
> >>> (whether by virtualenv, containers, different hosts, etc) targeting an
> >>> all-in-one install seems only to serve the purposes of all-in-one rpm-
> >>> or deb-based installations.
> >>>
> >>> Many (most?) people won't be doing those kinds of installations. If
> >>> all-in-one installations are important to the rpm- and deb- based distributions
> >>> then _they_ should be resolving the dependency issues local to their own
> >>> infrastructure (or realizing that it is too painful and start
> >>> containerizing or otherwise as well).
> >>>
> >>> I think making these changes will help to improve and strengthen the
> >>> boundaries and contracts between services. If not technically then
> >>> at least socially, in the sense that the negotiations that people
> >>> make to get things to work are about what actually matters in their
> >>> services, not unwinding python dependencies and the like.
> >>>
> >>> A lot of the basics of getting this to work are already in place in
> >>> devstack. One challenge I've run into in the past is when devstack
> >>> plugin A has made an assumption about having access to a python
> >>> script provided by devstack plugin B, but it's not on $PATH or its
> >>> dependencies are not in the site-packages visible to the current
> >>> context. The solution here is to use full paths _into_ virtualenvs.
> >>
> >> As Chris said, doing virtualenvs on the Devstack side for services is
> >> pretty much there. The team looked at doing this last year, then stopped
> >> due to operator feedback.
> >>
> >> One of the things that gets a little weird (when using devstack for
> >> development) is if you actually want to see the impact of library
> >> changes on the environment. As you'll need to make sure you loop and
> >> install those libraries into every venv where they are used. This
> >> forward reference doesn't really exist. So some tooling there will be
> >> needed.
> >>
> >> Middleware that's pushed from one project into another (like Ceilometer
> >> -> Swift) is also a funny edge case that I think get funnier here.
> >>
> >> Those are mostly implementation details, that probably have work
> >> arounds, but would need people on them.
> >>
> >>
> >>  From a strategic perspective this would basically make traditional Linux
> >> Packaging of OpenStack a lot harder. That might be the right call,
> >> because traditional Linux Packaging definitely suffers from the fact
> >> that everything on a host needs to be upgraded at the same time. For
> >> large installs of OpenStack (especially public cloud cases) traditional
> >> packages are definitely less used.
> >>
> >> However Linux Packaging is how a lot of people get exposed to software.
> >> The power of onboarding with apt-get / yum install is a big one.
> >>
> >> I've been through the ups and downs of both approaches so many times now
> >> in my own head, I no longer have a strong preference beyond the fact
> >> that we do one approach today, and doing a different one is effort to
> >> make the transition.
> >>
> >> -Sean
> >>
> > 
> > It is also worth noting that according to the OpenStack User Survey [0]
> > 56% of deployments use "Unmodified packages from the operating system".
> > 
> > Granted it was a small sample size (302 responses to that question)
> > but it is worth keeping this in mind as we talk about moving the burden
> > to packagers.
> > 
> > 0 - https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf (page 36)
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> To add to this, I'd also note that I as a packager would likely stop
> packaging OpenStack at whatever release this goes into.  While the
> option to package and ship a virtualenv installed to /usr/local or /opt
> exists, bundling is not something that should be supported given the
> issues it can have (update cadence and security issues mainly).

That's a useful data point, but it comes across as a threat and I'm
having trouble taking it as a constructive 

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Michał Jastrzębski
What I meant is: if you have Liberty Nova and Liberty Cinder, and you
want to upgrade Nova to Mitaka, you also upgrade Oslo to Mitaka; Cinder,
which was Liberty, then either needs to be upgraded or is broken.
Therefore, during the upgrade you need to do Cinder and Nova at the same
time. The DB can be snapshotted for rollbacks.

On 18 April 2016 at 11:15, Matthew Thode  wrote:
> On 04/18/2016 10:57 AM, Michał Jastrzębski wrote:
>> So I also want to stress that shared libraries are a huge pain
>> during upgrades. While I'm not in favor of packages with embedded
>> virtualenvs (as Matt pointed out, this has a lot of issues), having a
>> shared dependency pool pretty much means that you need to upgrade
>> *everything* that is OpenStack in a single run, and that is prone to
>> errors, volatile, and nearly impossible to roll back if something goes
>> wrong. One way to address this issue is putting services in
>> containers, but that is not a solution to the problem at hand (56% use
>> apt-get install, as Graham says). Packagers have a hard time keeping up
>> already; if we add fairly complex logic to this (virtualenvs) we will
>> probably end up with a cross-compatibility hell of people not keeping up
>> with changes.
>>
>> That being said, in my opinion, this percentage is this high because
>> that's exactly what we suggest in the install docs; once we come up with
>> a solution we should fix it there as well.
>>
>>
>> On 18 April 2016 at 10:23, Matthew Thode  wrote:
>>> On 04/18/2016 08:24 AM, Hayes, Graham wrote:
 On 18/04/2016 13:51, Sean Dague wrote:
> On 04/18/2016 08:22 AM, Chris Dent wrote:
>> On Mon, 18 Apr 2016, Sean Dague wrote:
>>
>>> So if you have strong feelings and ideas, why not get them out in email
>>> now? That will help in the framing of the conversation.
>>
>> I won't be at summit and I feel pretty strongly about this topic, so
>> I'll throw out my comments:
>>
>> I agree with the basic premise: In the big tent universe co-
>> installability is holding us back and is a huge cost in terms of spent
>> energy. In a world where service isolation is desirable and common
>> (whether by virtualenv, containers, different hosts, etc) targeting an
>> all-in-one install seems only to serve the purposes of all-in-one rpm-
>> or deb-based installations.
>>
>> Many (most?) people won't be doing those kinds of installations. If all-in-one
>> installations are important to the rpm- and deb-based distributions
>> then _they_ should be resolving the dependency issues local to their own
>> infrastructure (or realizing that it is too painful and start
>> containerizing or otherwise as well).
>>
>> I think making these changes will help to improve and strengthen the
>> boundaries and contracts between services. If not technically then
>> at least socially, in the sense that the negotiations that people
>> make to get things to work are about what actually matters in their
>> services, not unwinding python dependencies and the like.
>>
>> A lot of the basics of getting this to work are already in place in
>> devstack. One challenge I've run into in the past is when devstack
>> plugin A has made an assumption about having access to a python
>> script provided by devstack plugin B, but it's not on $PATH or its
>> dependencies are not in the site-packages visible to the current
>> context. The solution here is to use full paths _into_ virtenvs.
>
> As Chris said, doing virtualenvs on the Devstack side for services is
> pretty much there. The team looked at doing this last year, then stopped
> due to operator feedback.
>
> One of the things that gets a little weird (when using devstack for
> development) is if you actually want to see the impact of library
> changes on the environment. As you'll need to make sure you loop and
> install those libraries into every venv where they are used. This
> forward reference doesn't really exist. So some tooling there will be
> needed.
>
> Middleware that's pushed from one project into another (like Ceilometer
> -> Swift) is also a funny edge case that I think gets funnier here.
>
> Those are mostly implementation details that probably have
> workarounds, but would need people on them.
>
>
>  From a strategic perspective this would basically make traditional Linux
> Packaging of OpenStack a lot harder. That might be the right call,
> because traditional Linux Packaging definitely suffers from the fact
> that everything on a host needs to be upgraded at the same time. For
> large installs of OpenStack (especially public cloud cases) traditional
> packages are definitely less used.
>
> However Linux Packaging is how a lot of people get exposed to software.
> The power of onboarding with apt-get / yum 

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Matthew Thode
On 04/18/2016 10:57 AM, Michał Jastrzębski wrote:
> So I also want to stress that shared libraries are a huge pain
> during upgrades. While I'm not in favor of packages with embedded
> virtualenvs (as Matt pointed out, this has a lot of issues), having a
> shared dependency pool pretty much means that you need to upgrade
> *everything* that is OpenStack in a single run, and that is prone to
> errors, volatile, and nearly impossible to roll back if something goes
> wrong. One way to address this issue is putting services in
> containers, but that is not a solution to the problem at hand (56% use
> apt-get install, as Graham says). Packagers have a hard time keeping up
> already; if we add fairly complex logic to this (virtualenvs) we will
> probably end up with a cross-compatibility hell of people not keeping up
> with changes.
> 
> That being said, in my opinion, this percentage is this high because
> that's exactly what we suggest in the install docs; once we come up with
> a solution we should fix it there as well.
> 
> 
> On 18 April 2016 at 10:23, Matthew Thode  wrote:
>> On 04/18/2016 08:24 AM, Hayes, Graham wrote:
>>> On 18/04/2016 13:51, Sean Dague wrote:
 On 04/18/2016 08:22 AM, Chris Dent wrote:
> On Mon, 18 Apr 2016, Sean Dague wrote:
>
>> So if you have strong feelings and ideas, why not get them out in email
>> now? That will help in the framing of the conversation.
>
> I won't be at summit and I feel pretty strongly about this topic, so
> I'll throw out my comments:
>
> I agree with the basic premise: In the big tent universe co-
> installability is holding us back and is a huge cost in terms of spent
> energy. In a world where service isolation is desirable and common
> (whether by virtualenv, containers, different hosts, etc) targeting an
> all-in-one install seems only to serve the purposes of all-in-one rpm-
> or deb-based installations.
>
> Many (most?) people won't be doing those kinds of installations. If all-in-one
> installations are important to the rpm- and deb-based distributions
> then _they_ should be resolving the dependency issues local to their own
> infrastructure (or realizing that it is too painful and start
> containerizing or otherwise as well).
>
> I think making these changes will help to improve and strengthen the
> boundaries and contracts between services. If not technically then
> at least socially, in the sense that the negotiations that people
> make to get things to work are about what actually matters in their
> services, not unwinding python dependencies and the like.
>
> A lot of the basics of getting this to work are already in place in
> devstack. One challenge I've run into in the past is when devstack
> plugin A has made an assumption about having access to a python
> script provided by devstack plugin B, but it's not on $PATH or its
> dependencies are not in the site-packages visible to the current
> context. The solution here is to use full paths _into_ virtenvs.

 As Chris said, doing virtualenvs on the Devstack side for services is
 pretty much there. The team looked at doing this last year, then stopped
 due to operator feedback.

 One of the things that gets a little weird (when using devstack for
 development) is if you actually want to see the impact of library
 changes on the environment. As you'll need to make sure you loop and
 install those libraries into every venv where they are used. This
 forward reference doesn't really exist. So some tooling there will be
 needed.

 Middleware that's pushed from one project into another (like Ceilometer
 -> Swift) is also a funny edge case that I think gets funnier here.

 Those are mostly implementation details that probably have
 workarounds, but would need people on them.


  From a strategic perspective this would basically make traditional Linux
 Packaging of OpenStack a lot harder. That might be the right call,
 because traditional Linux Packaging definitely suffers from the fact
 that everything on a host needs to be upgraded at the same time. For
 large installs of OpenStack (especially public cloud cases) traditional
 packages are definitely less used.

 However Linux Packaging is how a lot of people get exposed to software.
 The power of onboarding with apt-get / yum install is a big one.

 I've been through the ups and downs of both approaches so many times now
 in my own head, I no longer have a strong preference beyond the fact
 that we do one approach today, and doing a different one is effort to
 make the transition.

  -Sean

>>>
>>> It is also worth noting that according to the OpenStack User Survey [0]
>>> 56% of deployments use "Unmodifed packages from the operating system".
>>>
>>> 

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Michał Jastrzębski
So I also want to stress that shared libraries are a huge pain
during upgrades. While I'm not in favor of packages with embedded
virtualenvs (as Matt pointed out, this has a lot of issues), having a
shared dependency pool pretty much means that you need to upgrade
*everything* that is OpenStack in a single run, and that is prone to
errors, volatile, and nearly impossible to roll back if something goes
wrong. One way to address this issue is putting services in
containers, but that is not a solution to the problem at hand (56% use
apt-get install, as Graham says). Packagers have a hard time keeping up
already; if we add fairly complex logic to this (virtualenvs) we will
probably end up with a cross-compatibility hell of people not keeping up
with changes.

That being said, in my opinion, this percentage is this high because
that's exactly what we suggest in the install docs; once we come up with
a solution we should fix it there as well.


On 18 April 2016 at 10:23, Matthew Thode  wrote:
> On 04/18/2016 08:24 AM, Hayes, Graham wrote:
>> On 18/04/2016 13:51, Sean Dague wrote:
>>> On 04/18/2016 08:22 AM, Chris Dent wrote:
 On Mon, 18 Apr 2016, Sean Dague wrote:

> So if you have strong feelings and ideas, why not get them out in email
> now? That will help in the framing of the conversation.

 I won't be at summit and I feel pretty strongly about this topic, so
 I'll throw out my comments:

 I agree with the basic premise: In the big tent universe co-
 installability is holding us back and is a huge cost in terms of spent
 energy. In a world where service isolation is desirable and common
 (whether by virtualenv, containers, different hosts, etc) targeting an
 all-in-one install seems only to serve the purposes of all-in-one rpm-
 or deb-based installations.

 Many (most?) people won't be doing those kinds of installations. If all-in-
 one installations are important to the rpm- and deb- based distributions
 then _they_ should be resolving the dependency issues local to their own
 infrastructure (or realizing that it is too painful and start
 containerizing or otherwise as well).

 I think making these changes will help to improve and strengthen the
 boundaries and contracts between services. If not technically then
 at least socially, in the sense that the negotiations that people
 make to get things to work are about what actually matters in their
 services, not unwinding python dependencies and the like.

 A lot of the basics of getting this to work are already in place in
 devstack. One challenge I've run into in the past is when devstack
 plugin A has made an assumption about having access to a python
 script provided by devstack plugin B, but it's not on $PATH or its
 dependencies are not in the site-packages visible to the current
 context. The solution here is to use full paths _into_ virtenvs.
>>>
>>> As Chris said, doing virtualenvs on the Devstack side for services is
>>> pretty much there. The team looked at doing this last year, then stopped
>>> due to operator feedback.
>>>
>>> One of the things that gets a little weird (when using devstack for
>>> development) is if you actually want to see the impact of library
>>> changes on the environment. As you'll need to make sure you loop and
>>> install those libraries into every venv where they are used. This
>>> forward reference doesn't really exist. So some tooling there will be
>>> needed.
>>>
>>> Middleware that's pushed from one project into another (like Ceilometer
>>> -> Swift) is also a funny edge case that I think gets funnier here.
>>>
>>> Those are mostly implementation details that probably have
>>> workarounds, but would need people on them.
>>>
>>>
>>>  From a strategic perspective this would basically make traditional Linux
>>> Packaging of OpenStack a lot harder. That might be the right call,
>>> because traditional Linux Packaging definitely suffers from the fact
>>> that everything on a host needs to be upgraded at the same time. For
>>> large installs of OpenStack (especially public cloud cases) traditional
>>> packages are definitely less used.
>>>
>>> However Linux Packaging is how a lot of people get exposed to software.
>>> The power of onboarding with apt-get / yum install is a big one.
>>>
>>> I've been through the ups and downs of both approaches so many times now
>>> in my own head, I no longer have a strong preference beyond the fact
>>> that we do one approach today, and doing a different one is effort to
>>> make the transition.
>>>
>>>  -Sean
>>>
>>
>> It is also worth noting that according to the OpenStack User Survey [0]
>> 56% of deployments use "Unmodified packages from the operating system".
>>
>> Granted it was a small sample size (302 responses to that question)
>> but it is worth keeping this in mind as we talk about moving the burden
>> to packagers.
>>
>> 0 -
>> 

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Matthew Thode
On 04/18/2016 08:24 AM, Hayes, Graham wrote:
> On 18/04/2016 13:51, Sean Dague wrote:
>> On 04/18/2016 08:22 AM, Chris Dent wrote:
>>> On Mon, 18 Apr 2016, Sean Dague wrote:
>>>
 So if you have strong feelings and ideas, why not get them out in email
 now? That will help in the framing of the conversation.
>>>
>>> I won't be at summit and I feel pretty strongly about this topic, so
>>> I'll throw out my comments:
>>>
>>> I agree with the basic premise: In the big tent universe co-
>>> installability is holding us back and is a huge cost in terms of spent
>>> energy. In a world where service isolation is desirable and common
>>> (whether by virtualenv, containers, different hosts, etc) targeting an
>>> all-in-one install seems only to serve the purposes of all-in-one rpm-
>>> or deb-based installations.
>>>
>>> Many (most?) people won't be doing those kinds of installations. If all-in-
>>> one installations are important to the rpm- and deb- based distributions
>>> then _they_ should be resolving the dependency issues local to their own
>>> infrastructure (or realizing that it is too painful and start
>>> containerizing or otherwise as well).
>>>
>>> I think making these changes will help to improve and strengthen the
>>> boundaries and contracts between services. If not technically then
>>> at least socially, in the sense that the negotiations that people
>>> make to get things to work are about what actually matters in their
>>> services, not unwinding python dependencies and the like.
>>>
>>> A lot of the basics of getting this to work are already in place in
>>> devstack. One challenge I've run into in the past is when devstack
>>> plugin A has made an assumption about having access to a python
>>> script provided by devstack plugin B, but it's not on $PATH or its
>>> dependencies are not in the site-packages visible to the current
>>> context. The solution here is to use full paths _into_ virtenvs.
>>
>> As Chris said, doing virtualenvs on the Devstack side for services is
>> pretty much there. The team looked at doing this last year, then stopped
>> due to operator feedback.
>>
>> One of the things that gets a little weird (when using devstack for
>> development) is if you actually want to see the impact of library
>> changes on the environment. As you'll need to make sure you loop and
>> install those libraries into every venv where they are used. This
>> forward reference doesn't really exist. So some tooling there will be
>> needed.
>>
>> Middleware that's pushed from one project into another (like Ceilometer
>> -> Swift) is also a funny edge case that I think gets funnier here.
>>
>> Those are mostly implementation details that probably have
>> workarounds, but would need people on them.
>>
>>
>>  From a strategic perspective this would basically make traditional Linux
>> Packaging of OpenStack a lot harder. That might be the right call,
>> because traditional Linux Packaging definitely suffers from the fact
>> that everything on a host needs to be upgraded at the same time. For
>> large installs of OpenStack (especially public cloud cases) traditional
>> packages are definitely less used.
>>
>> However Linux Packaging is how a lot of people get exposed to software.
>> The power of onboarding with apt-get / yum install is a big one.
>>
>> I've been through the ups and downs of both approaches so many times now
>> in my own head, I no longer have a strong preference beyond the fact
>> that we do one approach today, and doing a different one is effort to
>> make the transition.
>>
>>  -Sean
>>
> 
> It is also worth noting that according to the OpenStack User Survey [0]
> 56% of deployments use "Unmodified packages from the operating system".
> 
> Granted it was a small sample size (302 responses to that question)
> but it is worth keeping this in mind as we talk about moving the burden
> to packagers.
> 
> 0 - https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf (page 36)
> 
> 
To add to this, I'd also note that I as a packager would likely stop
packaging OpenStack at whatever release this goes into.  While the
option to package and ship a virtualenv installed to /usr/local or /opt
exists, bundling is not something that should be supported given the
issues it can have (update cadence and security issues mainly).

-- 
-- Matthew Thode (prometheanfire)





Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Hayes, Graham
On 18/04/2016 13:51, Sean Dague wrote:
> On 04/18/2016 08:22 AM, Chris Dent wrote:
>> On Mon, 18 Apr 2016, Sean Dague wrote:
>>
>>> So if you have strong feelings and ideas, why not get them out in email
>>> now? That will help in the framing of the conversation.
>>
>> I won't be at summit and I feel pretty strongly about this topic, so
>> I'll throw out my comments:
>>
>> I agree with the basic premise: In the big tent universe co-
>> installability is holding us back and is a huge cost in terms of spent
>> energy. In a world where service isolation is desirable and common
>> (whether by virtualenv, containers, different hosts, etc) targeting an
>> all-in-one install seems only to serve the purposes of all-in-one rpm-
>> or deb-based installations.
>>
>> Many (most?) people won't be doing those kinds of installations. If all-in-
>> one installations are important to the rpm- and deb- based distributions
>> then _they_ should be resolving the dependency issues local to their own
>> infrastructure (or realizing that it is too painful and start
>> containerizing or otherwise as well).
>>
>> I think making these changes will help to improve and strengthen the
>> boundaries and contracts between services. If not technically then
>> at least socially, in the sense that the negotiations that people
>> make to get things to work are about what actually matters in their
>> services, not unwinding python dependencies and the like.
>>
>> A lot of the basics of getting this to work are already in place in
>> devstack. One challenge I've run into in the past is when devstack
>> plugin A has made an assumption about having access to a python
>> script provided by devstack plugin B, but it's not on $PATH or its
>> dependencies are not in the site-packages visible to the current
>> context. The solution here is to use full paths _into_ virtenvs.
>
> As Chris said, doing virtualenvs on the Devstack side for services is
> pretty much there. The team looked at doing this last year, then stopped
> due to operator feedback.
>
> One of the things that gets a little weird (when using devstack for
> development) is if you actually want to see the impact of library
> changes on the environment. As you'll need to make sure you loop and
> install those libraries into every venv where they are used. This
> forward reference doesn't really exist. So some tooling there will be
> needed.
>
> Middleware that's pushed from one project into another (like Ceilometer
> -> Swift) is also a funny edge case that I think gets funnier here.
>
> Those are mostly implementation details that probably have
> workarounds, but would need people on them.
>
>
>  From a strategic perspective this would basically make traditional Linux
> Packaging of OpenStack a lot harder. That might be the right call,
> because traditional Linux Packaging definitely suffers from the fact
> that everything on a host needs to be upgraded at the same time. For
> large installs of OpenStack (especially public cloud cases) traditional
> packages are definitely less used.
>
> However Linux Packaging is how a lot of people get exposed to software.
> The power of onboarding with apt-get / yum install is a big one.
>
> I've been through the ups and downs of both approaches so many times now
> in my own head, I no longer have a strong preference beyond the fact
> that we do one approach today, and doing a different one is effort to
> make the transition.
>
>   -Sean
>

It is also worth noting that according to the OpenStack User Survey [0]
56% of deployments use "Unmodified packages from the operating system".

Granted it was a small sample size (302 responses to that question)
but it is worth keeping this in mind as we talk about moving the burden
to packagers.

0 - https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf (page 36)



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Sean Dague
On 04/18/2016 08:22 AM, Chris Dent wrote:
> On Mon, 18 Apr 2016, Sean Dague wrote:
> 
>> So if you have strong feelings and ideas, why not get them out in email
>> now? That will help in the framing of the conversation.
> 
> I won't be at summit and I feel pretty strongly about this topic, so
> I'll throw out my comments:
> 
> I agree with the basic premise: In the big tent universe co-
> installability is holding us back and is a huge cost in terms of spent
> energy. In a world where service isolation is desirable and common
> (whether by virtualenv, containers, different hosts, etc) targeting an
> all-in-one install seems only to serve the purposes of all-in-one rpm-
> or deb-based installations.
> 
> Many (most?) people won't be doing those kinds of installations. If all-in-
> one installations are important to the rpm- and deb- based distributions
> then _they_ should be resolving the dependency issues local to their own
> infrastructure (or realizing that it is too painful and start
> containerizing or otherwise as well).
> 
> I think making these changes will help to improve and strengthen the
> boundaries and contracts between services. If not technically then
> at least socially, in the sense that the negotiations that people
> make to get things to work are about what actually matters in their
> services, not unwinding python dependencies and the like.
> 
> A lot of the basics of getting this to work are already in place in
> devstack. One challenge I've run into in the past is when devstack
> plugin A has made an assumption about having access to a python
> script provided by devstack plugin B, but it's not on $PATH or its
> dependencies are not in the site-packages visible to the current
> context. The solution here is to use full paths _into_ virtenvs.

As Chris said, doing virtualenvs on the Devstack side for services is
pretty much there. The team looked at doing this last year, then stopped
due to operator feedback.

One of the things that gets a little weird (when using devstack for
development) is if you actually want to see the impact of library
changes on the environment, as you'll need to make sure you loop and
install those libraries into every venv where they are used. This
forward reference doesn't really exist, so some tooling there will be
needed.
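The loop-and-reinstall tooling described above could be sketched roughly as follows; the per-service venv root, the `oslo_config` example, and the use of `pip install -e` are illustrative assumptions, not devstack's actual layout:

```shell
#!/bin/sh
# Sketch (hypothetical layout): find every service venv under a common
# root whose interpreter can already import a given library -- i.e. the
# venvs an edited checkout of that library would need reinstalling into.
list_venvs_needing() {
    lib=$1
    root=$2
    for venv in "$root"/*; do
        py="$venv/bin/python"
        [ -x "$py" ] || continue          # skip anything that isn't a venv
        if "$py" -c "import $lib" 2>/dev/null; then
            echo "refresh: $venv"
            # real tooling would then run, e.g.:
            #   "$venv/bin/pip" install -e /path/to/edited/checkout
        fi
    done
}
# e.g. list_venvs_needing oslo_config /opt/stack/venvs   (paths hypothetical)
```

The dry-run shape keeps the sketch safe to experiment with; swapping the `echo` for the commented `pip install -e` line would turn it into the actual refresh loop.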

Middleware that's pushed from one project into another (like Ceilometer
-> Swift) is also a funny edge case that I think gets funnier here.

Those are mostly implementation details that probably have
workarounds, but would need people on them.


From a strategic perspective this would basically make traditional Linux
Packaging of OpenStack a lot harder. That might be the right call,
because traditional Linux Packaging definitely suffers from the fact
that everything on a host needs to be upgraded at the same time. For
large installs of OpenStack (especially public cloud cases) traditional
packages are definitely less used.

However Linux Packaging is how a lot of people get exposed to software.
The power of onboarding with apt-get / yum install is a big one.

I've been through the ups and downs of both approaches so many times now
in my own head that I no longer have a strong preference, beyond the fact
that we do one approach today and moving to a different one takes effort.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Chris Dent

On Mon, 18 Apr 2016, Sean Dague wrote:


So if you have strong feelings and ideas, why not get them out in email
now? That will help in the framing of the conversation.


I won't be at summit and I feel pretty strongly about this topic, so
I'll throw out my comments:

I agree with the basic premise: In the big tent universe co-
installability is holding us back and is a huge cost in terms of spent
energy. In a world where service isolation is desirable and common
(whether by virtualenv, containers, different hosts, etc) targeting an
all-in-one install seems only to serve the purposes of all-in-one rpm-
or deb-based installations.

Many (most?) people won't be doing those kinds of installations. If all-in-
one installations are important to the rpm- and deb- based distributions
then _they_ should be resolving the dependency issues local to their own
infrastructure (or realizing that it is too painful and start
containerizing or otherwise as well).

I think making these changes will help to improve and strengthen the
boundaries and contracts between services. If not technically then
at least socially, in the sense that the negotiations that people
make to get things to work are about what actually matters in their
services, not unwinding python dependencies and the like.

A lot of the basics of getting this to work are already in place 
in devstack. One challenge I've run into in the past is when devstack
plugin A has made an assumption about having access to a python
script provided by devstack plugin B, but it's not on $PATH or its
dependencies are not in the site-packages visible to the current
context. The solution here is to use full paths _into_ virtenvs.
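A minimal sketch of that full-path approach (the venv path is hypothetical): nothing about the caller's $PATH or site-packages matters once the other plugin's interpreter is invoked through its own bin/ directory.

```shell
#!/bin/sh
# Sketch: a venv's interpreter locates its own site-packages from its
# location on disk, so calling it by absolute path works no matter how
# the calling plugin's environment is set up. Path is hypothetical.
python3 -m venv /tmp/plugin-b-venv
# A caller would invoke plugin B's tools as, e.g. (hypothetical script):
#   /tmp/plugin-b-venv/bin/some-console-script ...
# Demonstrate that the venv's python reports the venv as its prefix:
/tmp/plugin-b-venv/bin/python -c 'import sys; print(sys.prefix)'
```

The same pattern covers console scripts: pip installs them into the venv's bin/, and each one points back at the venv's interpreter via its shebang line.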

--
Chris Dent               (╯°□°)╯︵┻━┻            http://anticdent.org/
freenode: cdent                                  tw: @anticdent


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Sean Dague
On 04/17/2016 11:34 AM, Monty Taylor wrote:
> On 04/17/2016 10:13 AM, Doug Hellmann wrote:
>> I am organizing a summit session for the cross-project track to
>> (re)consider how we manage our list of global dependencies [1].
>> Some of the changes I propose would have a big impact, and so I
>> want to ensure everyone doing packaging work for distros is available
>> for the discussion. Please review the etherpad [2] and pass the
>> information along to colleagues who might be interested.
>>
>> Doug
>>
>> [1]
>> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9473
>> [2] https://etherpad.openstack.org/p/newton-global-requirements
> 
> Sadly the session conflicts with a different one that I'm leading, so I
> cannot be there. That, of course, makes me sad, because I think it's an
> important conversation to have, and I have some strong opinions on the
> topic in both directions.

Whether or not this session gets moved around to accommodate conflicts,
this session represents potentially the most disruptive change up for
consideration this cycle, which means this is going to have to include a
community conversation beyond just the DS session.

So if you have strong feelings and ideas, why not get them out in email
now? That will help in the framing of the conversation.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Thierry Carrez

Monty Taylor wrote:

On 04/17/2016 10:13 AM, Doug Hellmann wrote:

I am organizing a summit session for the cross-project track to
(re)consider how we manage our list of global dependencies [1].
Some of the changes I propose would have a big impact, and so I
want to ensure everyone doing packaging work for distros is available
for the discussion. Please review the etherpad [2] and pass the
information along to colleagues who might be interested.

Doug

[1]
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9473
[2] https://etherpad.openstack.org/p/newton-global-requirements


Sadly the session conflicts with a different one that I'm leading, so I
cannot be there. That, of course, makes me sad, because I think it's an
important conversation to have, and I have some strong opinions on the
topic in both directions.


We might be able to adapt the schedule to accommodate your presence... 
if we do the change ASAP and communicate it widely.


We could for example swap the "Co-installability Requirements" 
discussion with the "Stable Branch End of Life Policy" discussion.


Such a swap could help with the conflict someone reported between 
"Identity v3 API only devstack" and the "Stable Branch" discussion 
(can't remember who/where though).


--
Thierry Carrez (ttx)



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-17 Thread Matthew Thode
On 04/17/2016 10:34 AM, Monty Taylor wrote:
> On 04/17/2016 10:13 AM, Doug Hellmann wrote:
>> I am organizing a summit session for the cross-project track to
>> (re)consider how we manage our list of global dependencies [1].
>> Some of the changes I propose would have a big impact, and so I
>> want to ensure everyone doing packaging work for distros is available
>> for the discussion. Please review the etherpad [2] and pass the
>> information along to colleagues who might be interested.
>>
>> Doug
>>
>> [1]
>> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9473
>> [2] https://etherpad.openstack.org/p/newton-global-requirements
> 
> Sadly the session conflicts with a different one that I'm leading, so I
> cannot be there. That, of course, makes me sad, because I think it's an
> important conversation to have, and I have some strong opinions on the
> topic in both directions.
> 
> Monty
> 
If you will be in the other cross project sessions we can talk, we might
(or might not) share similar opinions :P

-- 
-- Matthew Thode (prometheanfire)



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-17 Thread Doug Hellmann
Excerpts from Monty Taylor's message of 2016-04-17 10:34:36 -0500:
> On 04/17/2016 10:13 AM, Doug Hellmann wrote:
> > I am organizing a summit session for the cross-project track to
> > (re)consider how we manage our list of global dependencies [1].
> > Some of the changes I propose would have a big impact, and so I
> > want to ensure everyone doing packaging work for distros is available
> > for the discussion. Please review the etherpad [2] and pass the
> > information along to colleagues who might be interested.
> >
> > Doug
> >
> > [1] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9473
> > [2] https://etherpad.openstack.org/p/newton-global-requirements
> 
> Sadly the session conflicts with a different one that I'm leading, so I 
> cannot be there. That, of course, makes me sad, because I think it's an 
> important conversation to have, and I have some strong opinions on the 
> topic in both directions.
> 
> Monty

Bummer. Please feel free to add any comments to the etherpad and I'll
try to proxy you.

Doug



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-17 Thread Monty Taylor

On 04/17/2016 10:13 AM, Doug Hellmann wrote:

I am organizing a summit session for the cross-project track to
(re)consider how we manage our list of global dependencies [1].
Some of the changes I propose would have a big impact, and so I
want to ensure everyone doing packaging work for distros is available
for the discussion. Please review the etherpad [2] and pass the
information along to colleagues who might be interested.

Doug

[1] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9473
[2] https://etherpad.openstack.org/p/newton-global-requirements


Sadly the session conflicts with a different one that I'm leading, so I 
cannot be there. That, of course, makes me sad, because I think it's an 
important conversation to have, and I have some strong opinions on the 
topic in both directions.


Monty



[openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-17 Thread Doug Hellmann
I am organizing a summit session for the cross-project track to
(re)consider how we manage our list of global dependencies [1].
Some of the changes I propose would have a big impact, and so I
want to ensure everyone doing packaging work for distros is available
for the discussion. Please review the etherpad [2] and pass the
information along to colleagues who might be interested.

Doug

[1] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9473
[2] https://etherpad.openstack.org/p/newton-global-requirements
