Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-25 Thread Joe Gordon
On Tue, Feb 24, 2015 at 7:00 AM, Doug Hellmann d...@doughellmann.com
wrote:



 On Mon, Feb 23, 2015, at 06:31 PM, Joe Gordon wrote:
  On Mon, Feb 23, 2015 at 11:04 AM, Doug Hellmann d...@doughellmann.com
  wrote:
 
  
  
   On Mon, Feb 23, 2015, at 12:26 PM, Joe Gordon wrote:
On Mon, Feb 23, 2015 at 8:49 AM, Ihar Hrachyshka 
 ihrac...@redhat.com
wrote:
   

 On 02/20/2015 07:16 PM, Joshua Harlow wrote:
  Sean Dague wrote:
  On 02/20/2015 12:26 AM, Adam Gandelman wrote:
  It's more than just the naming. In the original proposal,
  requirements.txt is the compiled list of all pinned deps
  (direct and transitive), while requirements.in reflects what
  people will actually use. Whatever is in requirements.txt
  affects the egg's requires.txt. Instead, we can keep
  requirements.txt unchanged and have it still be the canonical
  list of dependencies, while
  requirements.out/requirements.gate/requirements.whatever is an
  upstream utility we produce and use to keep things sane on our
  slaves.
 
  Maybe all we need is:
 
  * update the existing post-merge job on the requirements repo
  to produce a requirements.txt (as it does now) as well the
  compiled version.
 
  * modify devstack in some way with a toggle to have it process
  dependencies from the compiled version when necessary
 
  I'm not sure how the second bit jibes with the existing
  devstack installation code, specifically with the libraries
  from git-or-master, but we can probably add something to warm
  the system with dependencies from the compiled version prior to
  calling pip/setup.py/etc.
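 (The "compiled version" described here is essentially what pip-tools'
 pip-compile produces from an input list; a rough sketch, with
 hypothetical file names and version numbers:)

```text
# requirements.in: direct dependencies only, what people actually use
oslo.config>=1.6.0
requests>=2.2.0

# Compile the exact, transitively-closed pin list for the gate:
$ pip-compile requirements.in --output-file requirements.gate

# requirements.gate: every package the install will see, pinned
oslo.config==1.6.1
requests==2.5.1
six==1.9.0        # pulled in transitively
```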
 
  It sounds like you are suggesting we take the tool we use to
  ensure that all of OpenStack is installable together in a
  unified way, and change its installation so that it doesn't do
  that any more.
 
  Which I'm fine with.
 
  But if we are doing that we should just whole hog give up on the
  idea that OpenStack can be run all together in a single
  environment, and just double down on the devstack venv work
  instead.
 
  It'd be interesting to see what a distribution (Canonical,
  Red Hat...) would think about this movement. I know Yahoo! has
  been looking into it for similar reasons (but we are more
  flexible than I think a packager such as
  Canonical/Red Hat/Debian/... would/could be). With a move to
  venvs, that seems like it would just offload the work of
  finding the set of dependencies that work together (in a
  single install) to packagers instead.

  Is that ok/desired at this point?
 

  Honestly, I failed to track all the different proposals. Just
  saying, from a packager's perspective: we absolutely rely on
  requirements.txt not being a list of hardcoded values from pip
  freeze, but presenting us with reasonable freedom in choosing
  the versions we want to run in packaged products.


In short, the current proposal for stable branches is:

Keep requirements.txt as is, except maybe put some upper bounds on the
requirements.

Add requirements.gate to specify the *exact* versions we are gating
against (this would be a full list including all transitive
dependencies).
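To illustrate the difference (package names and version numbers here
are hypothetical):

```text
# requirements.txt -- human-maintained, with upper bounds (caps)
# that still leave packagers freedom to choose versions:
oslo.config>=1.6.0,<1.7.0
requests>=2.2.0,<2.6.0

# requirements.gate -- generated, exact pins for everything the
# gate installs, including transitive dependencies:
oslo.config==1.6.1
requests==2.5.1
six==1.9.0
```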
  
   The gate syncs requirements into projects before installing them. Would
   we change the sync script for the gate to work from the
   requirements.gate file, or keep it pulling from requirements.txt?
  
 
  We would only add requirements.gate for stable branches (because we
  don't want to cap/pin things on master). So I think the answer is
  that the sync script should work for both. I am not sure of the
  exact mechanics of how this would work. Whoever ends up driving this
  bit of work (I think Adam G) will have to sort out the details.

 OK. I think it's probably worth a spec, then, so we can think it through
 before starting work. Maybe in the cross-project specs repo, to avoid
 having to create one just for requirements? Or we could modify the
 README or something, but the specs repo seems more visible.


Start of the cross project spec https://review.openstack.org/159249



 Doug

 
 
   Doug
  
   
   
 That's why I asked before we should have caps and not pins.

 /Ihar

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-25 Thread Doug Hellmann


On Wed, Feb 25, 2015, at 04:04 PM, Joe Gordon wrote:
  [earlier quoted text snipped]
 Start of the cross project spec https://review.openstack.org/159249

Thanks, Joe!

 
 
 

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-25 Thread Robert Collins
I'll follow up on the spec, but one thing Donald has been pointing out
for a while is that we don't use requirements.txt the way that pip
anticipates: the expected use is that a specific install (e.g. the
gate) will have a very specific list of requirements, caps, etc., but
that install_requires will be as minimal as possible to ensure the
project builds and self-tests OK.

I see the issues here as being related to that.

-Rob
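A sketch of that split, with hypothetical package names and versions:

```text
# setup.py -- install_requires stays as loose as possible, just
# enough for the project to build and self-test:
install_requires=[
    'requests>=2.2.0',
    'six>=1.7.0',
]

# requirements.txt for one specific install (e.g. the gate) --
# very specific, with caps or exact pins:
requests==2.5.1
six==1.9.0
```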

On 26 February 2015 at 10:04, Joe Gordon joe.gord...@gmail.com wrote:


  [earlier quoted text snipped]

  Start of the cross project spec https://review.openstack.org/159249
 Start of the cross project spec https://review.openstack.org/159249




Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-24 Thread Doug Hellmann


On Mon, Feb 23, 2015, at 06:31 PM, Joe Gordon wrote:
 On Mon, Feb 23, 2015 at 11:04 AM, Doug Hellmann d...@doughellmann.com
 wrote:
 
 
 
   [earlier quoted text snipped]
 We would only add requirements.gate for stable branches (because we
 don't want to cap/pin things on master). So I think the answer is
 that the sync script should work for both. I am not sure of the exact
 mechanics of how this would work. Whoever ends up driving this bit of
 work (I think Adam G) will have to sort out the details.

OK. I think it's probably worth a spec, then, so we can think it through
before starting work. Maybe in the cross-project specs repo, to avoid
having to create one just for requirements? Or we could modify the
README or something, but the specs repo seems more visible.

Doug

 
 

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-23 Thread Joe Gordon
On Mon, Feb 23, 2015 at 8:49 AM, Ihar Hrachyshka ihrac...@redhat.com
wrote:

 [quoted text snipped]


In short, the current proposal for stable branches is:

Keep requirements.txt as is, except maybe put some upper bounds on the
requirements.

Add requirements.gate to specify the *exact* versions we are gating
against (this would be a full list including all transitive
dependencies).



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-23 Thread Joe Gordon
On Mon, Feb 23, 2015 at 11:04 AM, Doug Hellmann d...@doughellmann.com
wrote:



  [earlier quoted text snipped]

 The gate syncs requirements into projects before installing them. Would
 we change the sync script for the gate to work from the
 requirements.gate file, or keep it pulling from requirements.txt?


We would only add requirements.gate for stable branches (because we
don't want to cap/pin things on master). So I think the answer is that
the sync script should work for both. I am not sure of the exact
mechanics of how this would work. Whoever ends up driving this bit of
work (I think Adam G) will have to sort out the details.



Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Sean Dague
On 02/20/2015 12:26 AM, Adam Gandelman wrote:
 [earlier quoted text snipped]

It sounds like you are suggesting we take the tool we use to ensure that
all of OpenStack is installable together in a unified way, and change
its installation so that it doesn't do that any more.

Which I'm fine with.

But if we are doing that we should just whole hog give up on the idea
that OpenStack can be run all together in a single environment, and just
double down on the devstack venv work instead.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Joe Gordon
On Fri, Feb 20, 2015 at 12:10 PM, Doug Hellmann d...@doughellmann.com
wrote:



 On Fri, Feb 20, 2015, at 02:07 PM, Joe Gordon wrote:
  On Fri, Feb 20, 2015 at 7:27 AM, Doug Hellmann d...@doughellmann.com
  wrote:
 
  
  
   On Fri, Feb 20, 2015, at 06:06 AM, Sean Dague wrote:
    [earlier quoted text snipped]
But if we are doing that we should just whole hog give up on the idea
that OpenStack can be run all together in a single environment, and
 just
double down on the devstack venv work instead.
  
I don't disagree with your conclusion, but that's not how I read what he
proposed. :-)
  
  
  Sean was reading between the lines here. We are doing all this extra
  work to make sure OpenStack can be run together in a single
  environment, but it seems like more and more people are moving away
  from deploying with that model anyway. Moving to this model would
  require a little more than just installing everything in separate
  venvs. We would need to make sure we don't cap oslo libraries etc.
  in order to prevent conflicts inside a single

 Something I've noticed in this discussion: We should start talking about
 our libraries, not just Oslo libraries. Oslo isn't the only project
 managing libraries used by more than one other team any more. It never
 really was, if you consider the clients, but we have PyCADF and various
 middleware and other things now, too. We can base our policies on what
 we've learned from Oslo, but we need to apply them to *all* libraries,
 no matter which team manages them.


My mistake, you are correct. I was incorrectly using oslo as a shorthand
for all openstack libraries.


  service. And we would still need a story around what to do with stable
  branches, how do we make sure new libraries don't break stable branches --
  which in turn can break master via grenade and other jobs.

 I'm comfortable using simple caps based on minor number increments for
 stable branches. New libraries won't end up in the stable branches
 unless they are a patch release. We can set up test jobs for stable
 branches of libraries to run tempest just like we do against master, but
 using all stable branch versions of the source files (AFAIK, we don't
 have a job like that now, but I could be wrong).


In general I agree, this is the right way forward for openstack libraries.
But as made clear this week, we will have to be a little more careful about
what is a valid patch release.



 I'm less confident that we have identified all of the issues with more
 limited pins, so I'm reluctant to back that approach for now. That may
 be an excess of caution on my part, though.

 
 
 
   Joe wanted requirements.txt to be the pinned requirements computed from
   the list of all global requirements that work together. Pinning to a
   single version works in our gate, but makes installing everything else
   together *outside* of the gate harder because if the projects don't all
   sync all requirements changes pretty much at the same time they won't be
   compatible.
  
   Adam suggested leaving requirements.txt alone and creating a different
   list of pinned requirements that is *only* used in our gate. That way we
   still get the pinning for our gate, and the values are computed from the
   requirements used in the projects but they aren't propagated back out to
   the projects in a way that breaks their PyPI or distro packages.
  
   Another benefit of Adam's proposal is that we would only need to keep
   the list of pins in 

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Doug Hellmann


On Fri, Feb 20, 2015, at 03:36 PM, Joe Gordon wrote:
 On Fri, Feb 20, 2015 at 12:10 PM, Doug Hellmann d...@doughellmann.com
 wrote:
 
 
 
  On Fri, Feb 20, 2015, at 02:07 PM, Joe Gordon wrote:
   On Fri, Feb 20, 2015 at 7:27 AM, Doug Hellmann d...@doughellmann.com
   wrote:
  
   
   
On Fri, Feb 20, 2015, at 06:06 AM, Sean Dague wrote:
 On 02/20/2015 12:26 AM, Adam Gandelman wrote:
  It's more than just the naming.  In the original proposal,
  requirements.txt is the compiled list of all pinned deps (direct and
  transitive), while requirements.in reflects what people will actually
  use.  Whatever is in requirements.txt affects the egg's requires.txt.
  Instead, we can keep requirements.txt unchanged and have it still be
  the canonical list of dependencies, while
  requirements.out/requirements.gate/requirements.whatever is an upstream
  utility we produce and use to keep things sane on our slaves.
 
  Maybe all we need is:
 
  * update the existing post-merge job on the requirements repo to
    produce a requirements.txt (as it does now) as well as the compiled
    version.
 
  * modify devstack in some way with a toggle to have it process
    dependencies from the compiled version when necessary
 
  I'm not sure how the second bit jibes with the existing devstack
  installation code, specifically with the libraries from git-or-master,
  but we can probably add something to warm the system with dependencies
  from the compiled version prior to calling pip/setup.py/etc

 It sounds like you are suggesting we take the tool we use to ensure that
 all of OpenStack is installable together in a unified way, and change
 its installation so that it doesn't do that any more.

 Which I'm fine with.

 But if we are doing that we should just whole hog give up on the idea
 that OpenStack can be run all together in a single environment, and just
 double down on the devstack venv work instead.
   
I don't disagree with your conclusion, but that's not how I read what he
proposed. :-)
   
   
   Sean was reading between the lines here. We are doing all this extra work
   to make sure OpenStack can be run together in a single environment, but it
   seems like more and more people are moving away from deploying with that
   model anyway. Moving to this model would require a little more than just
   installing everything in separate venvs.  We would need to make sure we
   don't cap oslo libraries etc. in order to prevent conflicts inside a
   single
 
  Something I've noticed in this discussion: We should start talking about
  our libraries, not just Oslo libraries. Oslo isn't the only project
  managing libraries used by more than one other team any more. It never
  really was, if you consider the clients, but we have PyCADF and various
  middleware and other things now, too. We can base our policies on what
  we've learned from Oslo, but we need to apply them to *all* libraries,
  no matter which team manages them.
 
 
 My mistake, you are correct. I was incorrectly using oslo as a shorthand
 for all openstack libraries.

Yeah, I've been doing it, too, but the thing with neutronclient today
made me realize we shouldn't. :-)

 
 
   service. And we would still need a story around what to do with stable
   branches, how do we make sure new libraries don't break stable branches --
   which in turn can break master via grenade and other jobs.
 
  I'm comfortable using simple caps based on minor number increments for
  stable branches. New libraries won't end up in the stable branches
  unless they are a patch release. We can set up test jobs for stable
  branches of libraries to run tempest just like we do against master, but
  using all stable branch versions of the source files (AFAIK, we don't
  have a job like that now, but I could be wrong).
 
 
 In general I agree, this is the right way forward for openstack
 libraries. But as made clear this week, we will have to be a little
 more careful about what is a valid patch release.

Sure. With caps in place, and incrementing the minor version at the
start of each cycle, I think the issues that come up can be minimized
though.

 
 
 
  I'm less confident that we have identified all of the issues with more
  limited pins, so I'm reluctant to back that approach for now. That may
  be an excess of caution on my part, though.
 
  
  
  
Joe wanted requirements.txt to be the pinned requirements computed from
the list of all global requirements that work together. Pinning to a
single version works in our gate, but makes installing everything else
together *outside* of the gate harder because if the projects don't all
sync all requirements changes pretty much at the same time they won't be
compatible.
    
Adam suggested leaving requirements.txt 

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Doug Hellmann


On Fri, Feb 20, 2015, at 06:06 AM, Sean Dague wrote:
 On 02/20/2015 12:26 AM, Adam Gandelman wrote:
  It's more than just the naming.  In the original proposal,
  requirements.txt is the compiled list of all pinned deps (direct and
  transitive), while requirements.in reflects what people will actually
  use.  Whatever is in requirements.txt affects the egg's requires.txt.
  Instead, we can keep requirements.txt unchanged and have it still be
  the canonical list of dependencies, while
  requirements.out/requirements.gate/requirements.whatever is an upstream
  utility we produce and use to keep things sane on our slaves.
  
  Maybe all we need is:
  
  * update the existing post-merge job on the requirements repo to produce
    a requirements.txt (as it does now) as well as the compiled version.
  
  * modify devstack in some way with a toggle to have it process
    dependencies from the compiled version when necessary
  
  I'm not sure how the second bit jibes with the existing devstack
  installation code, specifically with the libraries from git-or-master,
  but we can probably add something to warm the system with dependencies
  from the compiled version prior to calling pip/setup.py/etc
 
 It sounds like you are suggesting we take the tool we use to ensure that
 all of OpenStack is installable together in a unified way, and change
 its installation so that it doesn't do that any more.
 
 Which I'm fine with.
 
 But if we are doing that we should just whole hog give up on the idea
 that OpenStack can be run all together in a single environment, and just
 double down on the devstack venv work instead.

I don't disagree with your conclusion, but that's not how I read what he
proposed. :-)

Joe wanted requirements.txt to be the pinned requirements computed from
the list of all global requirements that work together. Pinning to a
single version works in our gate, but makes installing everything else
together *outside* of the gate harder because if the projects don't all
sync all requirements changes pretty much at the same time they won't be
compatible.

Adam suggested leaving requirements.txt alone and creating a different
list of pinned requirements that is *only* used in our gate. That way we
still get the pinning for our gate, and the values are computed from the
requirements used in the projects but they aren't propagated back out to
the projects in a way that breaks their PyPI or distro packages.

Another benefit of Adam's proposal is that we would only need to keep
the list of pins in the global requirements repository, so we would have
fewer tooling changes to make.
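Concretely, the two-file split Adam proposed might look roughly like this — a hedged sketch with illustrative package names, versions, and the pinned-file name (the real global-requirements content and file naming would differ):

```text
# requirements.txt -- canonical, ranged; what PyPI and distro consumers see
requests>=1.2.1,!=2.4.0
stevedore>=1.1.0

# requirements.gate (name illustrative) -- compiled, fully pinned
# (direct + transitive); consumed only by gate jobs
requests==2.2.1
stevedore==1.1.0
```

Downstream packagers would keep working from the ranged file, while the gate installs from the pinned one.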

Doug

 
   -Sean
 
 -- 
 Sean Dague
 http://dague.net
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Adam Gandelman
On Fri, Feb 20, 2015 at 3:06 AM, Sean Dague s...@dague.net wrote:


 It sounds like you are suggesting we take the tool we use to ensure that
 all of OpenStack is installable together in a unified way, and change
 its installation so that it doesn't do that any more.

 Which I'm fine with.

 But if we are doing that we should just whole hog give up on the idea
 that OpenStack can be run all together in a single environment, and just
 double down on the devstack venv work instead.

 -Sean



Not necessarily. There'd be some tweaks to the tooling, but we'd still be
doing the same fundamental thing (installing everything openstack together),
except using a strict set of dependencies that we know won't break each
other when that happens.

This would help tremendously with testing around global-requirements, too.
Currently, a local devstack run today likely produces a set of dependencies
different from what was tested by jenkins on the last change to
global-requirements.  If proposed changes to global-requirements produced a
compiled list of pinned dependencies and tested against that, we'd know
that the next day's devstack runs are still testing against the dependency
chain produced by the last change to GR.
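The check Adam describes — making sure the environment under test matches the compiled pin list from the last global-requirements change — could be sketched like this. This is a minimal illustration, not the real project tooling; the file contents and package versions are made up:

```python
def parse_pins(text):
    """Parse 'name==version' lines (a compiled pin list) into a dict."""
    pins = {}
    for line in text.strip().splitlines():
        name, _, version = line.partition("==")
        pins[name.strip()] = version.strip()
    return pins


def matches_pins(pins_text, installed):
    """True only if every pinned package is installed at exactly the
    pinned version -- i.e. the run reproduces what the gate tested."""
    return all(installed.get(name) == ver
               for name, ver in parse_pins(pins_text).items())


# Illustrative pin list and two candidate environments:
compiled = "requests==2.2.1\nstevedore==1.1.0"
print(matches_pins(compiled, {"requests": "2.2.1", "stevedore": "1.1.0"}))  # True
print(matches_pins(compiled, {"requests": "2.5.1", "stevedore": "1.1.0"}))  # False
```

A devstack toggle could run such a check (or simply `pip install -r` the pinned file) before bringing up services.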


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Joshua Harlow

Sean Dague wrote:

On 02/20/2015 12:26 AM, Adam Gandelman wrote:

It's more than just the naming.  In the original proposal,
requirements.txt is the compiled list of all pinned deps (direct and
transitive), while requirements.in reflects what people will actually
use.  Whatever is in requirements.txt affects the egg's requires.txt.
Instead, we can keep requirements.txt unchanged and have it still be the
canonical list of dependencies, while
requirements.out/requirements.gate/requirements.whatever is an upstream
utility we produce and use to keep things sane on our slaves.

Maybe all we need is:

* update the existing post-merge job on the requirements repo to produce
  a requirements.txt (as it does now) as well as the compiled version.

* modify devstack in some way with a toggle to have it process
  dependencies from the compiled version when necessary

I'm not sure how the second bit jibes with the existing devstack
installation code, specifically with the libraries from git-or-master,
but we can probably add something to warm the system with dependencies
from the compiled version prior to calling pip/setup.py/etc


It sounds like you are suggesting we take the tool we use to ensure that
all of OpenStack is installable together in a unified way, and change
its installation so that it doesn't do that any more.

Which I'm fine with.

But if we are doing that we should just whole hog give up on the idea
that OpenStack can be run all together in a single environment, and just
double down on the devstack venv work instead.


It'd be interesting to see what a distribution (canonical, redhat...) 
would think about this movement. I know yahoo! has been looking into it 
for similar reasons (but we are more flexible than I think a packager 
such as canonical/redhat/debian/... would/could be). With a move to 
venvs, that seems like it would just offload the work of finding the set 
of dependencies that work together (in a single install) to packagers instead.


Is that ok/desired at this point?

-Josh



-Sean





Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Adam Gandelman
On Fri, Feb 20, 2015 at 10:16 AM, Joshua Harlow harlo...@outlook.com
wrote:


 It'd be interesting to see what a distribution (canonical, redhat...)
 would think about this movement. I know yahoo! has been looking into it for
 similar reasons (but we are more flexible than I think a packager such as
 canonical/redhat/debian/... would/could be). With a move to venvs, that
 seems like it would just offload the work of finding the set of dependencies
 that work together (in a single install) to packagers instead.

 Is that ok/desired at this point?

 -Josh


I share this concern, as well. I wonder if the compiled list of pinned
dependencies will be the only thing we look at upstream. Once functional on
stable branches, will we essentially forget about the non-pinned
requirements.txt that downstreams are meant to use?

One way of looking at it, though (especially wrt stable) is that the pinned
list of compiled dependencies more closely resembles how distros are
packaging this stuff.  That is, instead of providing explicit dependencies
via a pinned list, they are providing them via a frozen package archive
(ie, ubuntu 14.04) that are known to provide a working set.  It'd be up to
distros to make sure that everything is functional prior to freezing that,
and I imagine they already do that.

-Adam


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Joe Gordon
On Fri, Feb 20, 2015 at 7:27 AM, Doug Hellmann d...@doughellmann.com
wrote:



 On Fri, Feb 20, 2015, at 06:06 AM, Sean Dague wrote:
  On 02/20/2015 12:26 AM, Adam Gandelman wrote:
   It's more than just the naming.  In the original proposal,
   requirements.txt is the compiled list of all pinned deps (direct and
   transitive), while requirements.in reflects what people will actually
   use.  Whatever is in requirements.txt affects the egg's requires.txt.
   Instead, we can keep requirements.txt unchanged and have it still be
   the canonical list of dependencies, while
   requirements.out/requirements.gate/requirements.whatever is an upstream
   utility we produce and use to keep things sane on our slaves.
  
   Maybe all we need is:
  
   * update the existing post-merge job on the requirements repo to
     produce a requirements.txt (as it does now) as well as the compiled
     version.
  
   * modify devstack in some way with a toggle to have it process
     dependencies from the compiled version when necessary
  
   I'm not sure how the second bit jibes with the existing devstack
   installation code, specifically with the libraries from git-or-master,
   but we can probably add something to warm the system with dependencies
   from the compiled version prior to calling pip/setup.py/etc
 
  It sounds like you are suggesting we take the tool we use to ensure that
  all of OpenStack is installable together in a unified way, and change
  its installation so that it doesn't do that any more.
 
  Which I'm fine with.
 
  But if we are doing that we should just whole hog give up on the idea
  that OpenStack can be run all together in a single environment, and just
  double down on the devstack venv work instead.

 I don't disagree with your conclusion, but that's not how I read what he
 proposed. :-)


Sean was reading between the lines here. We are doing all this extra work
to make sure OpenStack can be run together in a single environment, but it
seems like more and more people are moving away from deploying with that
model anyway. Moving to this model would require a little more than just
installing everything in separate venvs.  We would need to make sure we
don't cap oslo libraries etc. in order to prevent conflicts inside a single
service. And we would still need a story around what to do with stable
branches, how do we make sure new libraries don't break stable branches --
which in turn can break master via grenade and other jobs.



 Joe wanted requirements.txt to be the pinned requirements computed from
 the list of all global requirements that work together. Pinning to a
 single version works in our gate, but makes installing everything else
 together *outside* of the gate harder because if the projects don't all
 sync all requirements changes pretty much at the same time they won't be
 compatible.

 Adam suggested leaving requirements.txt alone and creating a different
 list of pinned requirements that is *only* used in our gate. That way we
 still get the pinning for our gate, and the values are computed from the
 requirements used in the projects but they aren't propagated back out to
 the projects in a way that breaks their PyPI or distro packages.

 Another benefit of Adam's proposal is that we would only need to keep
 the list of pins in the global requirements repository, so we would have
 fewer tooling changes to make.

 Doug

 
-Sean
 
  --
  Sean Dague
  http://dague.net
 
 


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Doug Hellmann


On Fri, Feb 20, 2015, at 02:07 PM, Joe Gordon wrote:
 On Fri, Feb 20, 2015 at 7:27 AM, Doug Hellmann d...@doughellmann.com
 wrote:
 
 
 
  On Fri, Feb 20, 2015, at 06:06 AM, Sean Dague wrote:
   On 02/20/2015 12:26 AM, Adam Gandelman wrote:
It's more than just the naming.  In the original proposal,
requirements.txt is the compiled list of all pinned deps (direct and
transitive), while requirements.in reflects what people will actually
use.  Whatever is in requirements.txt affects the egg's requires.txt.
Instead, we can keep requirements.txt unchanged and have it still be the
canonical list of dependencies, while
requirements.out/requirements.gate/requirements.whatever is an upstream
utility we produce and use to keep things sane on our slaves.
    
Maybe all we need is:
    
* update the existing post-merge job on the requirements repo to produce
  a requirements.txt (as it does now) as well as the compiled version.
    
* modify devstack in some way with a toggle to have it process
  dependencies from the compiled version when necessary
    
I'm not sure how the second bit jibes with the existing devstack
installation code, specifically with the libraries from git-or-master,
but we can probably add something to warm the system with dependencies
from the compiled version prior to calling pip/setup.py/etc
  
   It sounds like you are suggesting we take the tool we use to ensure that
   all of OpenStack is installable together in a unified way, and change
   its installation so that it doesn't do that any more.
  
   Which I'm fine with.
  
   But if we are doing that we should just whole hog give up on the idea
   that OpenStack can be run all together in a single environment, and just
   double down on the devstack venv work instead.
 
  I don't disagree with your conclusion, but that's not how I read what he
  proposed. :-)
 
 
 Sean was reading between the lines here. We are doing all this extra work
 to make sure OpenStack can be run together in a single environment, but it
 seems like more and more people are moving away from deploying with that
 model anyway. Moving to this model would require a little more than just
 installing everything in separate venvs.  We would need to make sure we
 don't cap oslo libraries etc. in order to prevent conflicts inside a
 single

Something I've noticed in this discussion: We should start talking about
our libraries, not just Oslo libraries. Oslo isn't the only project
managing libraries used by more than one other team any more. It never
really was, if you consider the clients, but we have PyCADF and various
middleware and other things now, too. We can base our policies on what
we've learned from Oslo, but we need to apply them to *all* libraries,
no matter which team manages them.

 service. And we would still need a story around what to do with stable
 branches, how do we make sure new libraries don't break stable branches --
 which in turn can break master via grenade and other jobs.

I'm comfortable using simple caps based on minor number increments for
stable branches. New libraries won't end up in the stable branches
unless they are a patch release. We can set up test jobs for stable
branches of libraries to run tempest just like we do against master, but
using all stable branch versions of the source files (AFAIK, we don't
have a job like that now, but I could be wrong).
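The cap scheme described here — bound a stable branch to a minor series so only patch releases get in — can be illustrated with a toy version comparison. This is a minimal sketch, not the real requirements tooling; the bounds shown (a hypothetical library capped at `>=1.6.0,<1.7.0`) are illustrative:

```python
def within_cap(version, lower, upper):
    """True if lower <= version < upper, comparing simple dotted
    numeric versions component by component."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(lower) <= as_tuple(version) < as_tuple(upper)


# A stable-branch cap of ">=1.6.0,<1.7.0" admits patch releases only:
print(within_cap("1.6.2", "1.6.0", "1.7.0"))  # True: patch release allowed
print(within_cap("1.7.0", "1.6.0", "1.7.0"))  # False: new minor is blocked
```

Under this policy, master bumps the minor floor/cap at the start of each cycle, so new library releases flow to master while stable branches only see bug-fix updates.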

I'm less confident that we have identified all of the issues with more
limited pins, so I'm reluctant to back that approach for now. That may
be an excess of caution on my part, though.

 
 
 
  Joe wanted requirements.txt to be the pinned requirements computed from
  the list of all global requirements that work together. Pinning to a
  single version works in our gate, but makes installing everything else
  together *outside* of the gate harder because if the projects don't all
  sync all requirements changes pretty much at the same time they won't be
  compatible.
 
  Adam suggested leaving requirements.txt alone and creating a different
  list of pinned requirements that is *only* used in our gate. That way we
  still get the pinning for our gate, and the values are computed from the
  requirements used in the projects but they aren't propagated back out to
  the projects in a way that breaks their PyPI or distro packages.
 
  Another benefit of Adam's proposal is that we would only need to keep
  the list of pins in the global requirements repository, so we would have
  fewer tooling changes to make.
 
  Doug
 
  
 -Sean
  
   --
   Sean Dague
   http://dague.net
  
  
 
  

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-19 Thread Joe Gordon
On Wed, Feb 18, 2015 at 7:14 AM, Doug Hellmann d...@doughellmann.com
wrote:



 On Wed, Feb 18, 2015, at 10:07 AM, Donald Stufft wrote:
 
   On Feb 18, 2015, at 10:00 AM, Doug Hellmann d...@doughellmann.com
 wrote:
  
  
  
   On Tue, Feb 17, 2015, at 03:17 PM, Joe Gordon wrote:
   On Tue, Feb 17, 2015 at 4:19 AM, Sean Dague s...@dague.net wrote:
  
   On 02/16/2015 08:50 PM, Ian Cordasco wrote:
   On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:
  
   On 02/16/2015 02:08 PM, Doug Hellmann wrote:
  
  
   On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
   Hey everyone,
  
   The os-ansible-deployment team was working on updates to add
 support
   for
   the latest version of juno and noticed some interesting version
   specifiers
   introduced into global-requirements.txt in January. It
 introduced some
   version specifiers that seem a bit impossible like the one for
   requests
   [1]. There are others that equate presently to pinning the
 versions of
   the
   packages [2, 3, 4].
  
   I understand fully and support the commit because of how it
 improves
   pretty much everyone’s quality of life (no fires to put out in
 the
   middle
   of the night on the weekend). I’m also aware that a lot of the
   downstream
   redistributors tend to work from global-requirements.txt when
   determining
   what to package/support.
  
   It seems to me like there’s room to clean up some of these
   requirements
   to
   make them far more explicit and less misleading to the human eye
 (even
   though tooling like pip can easily parse/understand these).
  
   I think that's the idea. These requirements were generated
   automatically, and fixed issues that were holding back several
   projects.
   Now we can apply updates to them by hand, to either move the lower
   bounds down (as in the case Ihar pointed out with stevedore) or
 clean
   up
   the range definitions. We should not raise the limits of any Oslo
   libraries, and we should consider raising the limits of
 third-party
   libraries very carefully.
  
   We should make those changes on one library at a time, so we can
 see
   what effect each change has on the other requirements.
  
  
   I also understand that stable-maint may want to occasionally
 bump the
   caps
   to see if newer versions will not break everything, so what is
 the
   right
   way forward? What is the best way to both maintain a stable
 branch
   with
   known working dependencies while helping out those who do so
 much work
   for
   us (downstream and stable-maint) and not permanently pinning to
   certain
   working versions?
  
   Managing the upper bounds is still under discussion. Sean pointed
 out
   that we might want hard caps so that updates to stable branch were
   explicit. I can see either side of that argument and am still on
 the
   fence about the best approach.
  
   History has shown that it's too much work keeping testing
 functioning
   for stable branches if we leave dependencies uncapped. If
 particular
   people are interested in bumping versions when releases happen,
 it's
   easy enough to do with a requirements proposed update. It will
 even run
   tests that in most cases will prove that it works.
  
   It might even be possible for someone to build some automation
 that did
   that as stuff from pypi released so we could have the best of both
   worlds. But I think capping is definitely something we want as a
   project, and it reflects the way that most deployments will
 consume this
   code.
  
   -Sean
  
   --
   Sean Dague
   http://dague.net
  
   Right. No one is arguing the very clear benefits of all of this.
  
   I’m just wondering if for the example version identifiers that I
 gave in
   my original message (and others that are very similar) if we want
 to make
   the strings much simpler for people who tend to work from them
 (i.e.,
   downstream re-distributors whose jobs are already difficult
 enough). I’ve
   offered to help at least one of them in the past who maintains all
 of
   their distro’s packages themselves, but they refused so I’d like to
 help
   them anyway possible. Especially if any of them chime in as this
 being
   something that would be helpful.
  
   Ok, your links got kind of scrambled. Can you next time please inline
   the key relevant content in the email, because I think we all missed
 the
   original message intent as the key content was only in footnotes.
  
   From my point of view, normalization patches would be fine.
  
    requests>=1.2.1,!=2.4.0,<=2.2.1
  
    Is actually an odd one, because that's still there because we're using
    Trusty-level requests in the tests, and my ability to have devstack not
    install that has thus far failed.
  
   Things like:
  
    osprofiler>=0.3.0,<=0.3.0 # Apache-2.0
  
   Can clearly be normalized to osprofiler==0.3.0 if you want to propose
   the patch manually.
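The normalization Sean describes — a range whose lower and upper bounds coincide is equivalent to an exact pin — can be checked with a tiny specifier evaluator. This is an illustrative sketch handling only simple dotted versions and the `>=`, `<=`, `==`, `!=` operators, not a full PEP 440 implementation:

```python
import operator

# Map two-character specifier operators to comparisons on version tuples.
OPS = {">=": operator.ge, "<=": operator.le,
       "==": operator.eq, "!=": operator.ne}


def satisfies(version, spec):
    """True if `version` meets every comma-separated clause in `spec`."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    v = as_tuple(version)
    return all(OPS[clause[:2]](v, as_tuple(clause[2:]))
               for clause in spec.split(","))


# ">=0.3.0,<=0.3.0" admits exactly one version, so it normalizes to "==0.3.0":
for candidate in ["0.2.9", "0.3.0", "0.3.1"]:
    print(candidate, satisfies(candidate, ">=0.3.0,<=0.3.0"))
```

pip parses either form identically; the normalization only helps the humans (downstream packagers) reading global-requirements.txt.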
  
  
   global-requirements for stable branches serves two uses:
  
   1. Specify the set of dependencies 

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-19 Thread Doug Hellmann


On Thu, Feb 19, 2015, at 03:59 PM, Joe Gordon wrote:
 On Wed, Feb 18, 2015 at 7:14 AM, Doug Hellmann d...@doughellmann.com
 wrote:
 
 
 
  On Wed, Feb 18, 2015, at 10:07 AM, Donald Stufft wrote:
  
On Feb 18, 2015, at 10:00 AM, Doug Hellmann d...@doughellmann.com
  wrote:
   
   
   
On Tue, Feb 17, 2015, at 03:17 PM, Joe Gordon wrote:
On Tue, Feb 17, 2015 at 4:19 AM, Sean Dague s...@dague.net wrote:
   
On 02/16/2015 08:50 PM, Ian Cordasco wrote:
On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:
   
On 02/16/2015 02:08 PM, Doug Hellmann wrote:
   
   
On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
Hey everyone,
   
The os-ansible-deployment team was working on updates to add
  support
for
the latest version of juno and noticed some interesting version
specifiers
introduced into global-requirements.txt in January. It
  introduced some
version specifiers that seem a bit impossible like the one for
requests
[1]. There are others that equate presently to pinning the
  versions of
the
packages [2, 3, 4].
   
I understand fully and support the commit because of how it
  improves
pretty much everyone’s quality of life (no fires to put out in
  the
middle
of the night on the weekend). I’m also aware that a lot of the
downstream
redistributors tend to work from global-requirements.txt when
determining
what to package/support.
   
It seems to me like there’s room to clean up some of these
requirements
to
make them far more explicit and less misleading to the human eye
  (even
though tooling like pip can easily parse/understand these).
   
I think that's the idea. These requirements were generated
automatically, and fixed issues that were holding back several
projects.
Now we can apply updates to them by hand, to either move the lower
bounds down (as in the case Ihar pointed out with stevedore) or
  clean
up
the range definitions. We should not raise the limits of any Oslo
libraries, and we should consider raising the limits of
  third-party
libraries very carefully.
   
We should make those changes on one library at a time, so we can
  see
what effect each change has on the other requirements.
   
   
I also understand that stable-maint may want to occasionally
  bump the
caps
to see if newer versions will not break everything, so what is
  the
right
way forward? What is the best way to both maintain a stable
  branch
with
known working dependencies while helping out those who do so
  much work
for
us (downstream and stable-maint) and not permanently pinning to
certain
working versions?
   
Managing the upper bounds is still under discussion. Sean pointed
  out
that we might want hard caps so that updates to stable branch were
explicit. I can see either side of that argument and am still on
  the
fence about the best approach.
   
History has shown that it's too much work keeping testing
  functioning
for stable branches if we leave dependencies uncapped. If
  particular
people are interested in bumping versions when releases happen,
  it's
easy enough to do with a requirements proposed update. It will
  even run
tests that in most cases will prove that it works.
   
It might even be possible for someone to build some automation
  that did
that as stuff from pypi released so we could have the best of both
worlds. But I think capping is definitely something we want as a
project, and it reflects the way that most deployments will
  consume this
code.
   
-Sean
   
--
Sean Dague
http://dague.net
   
Right. No one is arguing the very clear benefits of all of this.
   
I’m just wondering if for the example version identifiers that I
  gave in
my original message (and others that are very similar) if we want
  to make
the strings much simpler for people who tend to work from them
  (i.e.,
downstream re-distributors whose jobs are already difficult
  enough). I’ve
offered to help at least one of them in the past who maintains all
  of
their distro’s packages themselves, but they refused so I’d like to
  help
them any way possible. Especially if any of them chime in as this
  being
something that would be helpful.
   
Ok, your links got kind of scrambled. Can you next time please inline
the key relevant content in the email, because I think we all missed
  the
original message intent as the key content was only in footnotes.
   
From my point of view, normalization patches would be fine.
   
requests>=1.2.1,!=2.4.0,<=2.2.1
   
Is actually an odd one, because that's still there because we're
  using
Trusty level requests in the tests, and my ability to have devstack
  not
install that has thus far failed.
   
Things like:
   
osprofiler>=0.3.0,<=0.3.0 # 

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-19 Thread Adam Gandelman
This creates a bit of a problem for downstream (packagers and probably
others). Shipping a requirements.txt with explicit pins will end up
producing an egg with a requires.txt that reflects those pins, unless there
is some other magic planned that I'm not aware of. I can't speak for all
packaging flavors, but I know debian packaging interacts quite closely with
things like requirements.txt and the resulting egg's requires.txt to
determine appropriate system-level package dependencies. This would require
a lot of tedious work on packagers' part to get something functional.

What if it's flipped? How about keeping requirements.txt with the caps, and
using that as input to produce something like requirements.gate that is
passed to 'pip install --no-deps' on our slaves? We'd end up installing and
using the explicit pinned requirements while the services/libraries
themselves remain flexible. This might be the issue Doug pointed out, where
requirements updates across projects are not synchronized.
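The flipped layout Adam describes can be sketched roughly like this (the file
names and contents are illustrative, not the actual gate tooling):

```shell
# requirements.txt keeps the human-maintained, capped ranges:
cat > requirements.txt <<'EOF'
requests>=1.2.1,!=2.4.0,<=2.2.1
osprofiler>=0.3.0,<=0.3.0
EOF

# requirements.gate would be a generated file holding exact pins for the
# direct *and* transitive dependencies actually tested in the gate:
cat > requirements.gate <<'EOF'
requests==2.2.1
osprofiler==0.3.0
six==1.9.0
EOF

# On the slaves the pinned set would be installed verbatim; --no-deps skips
# pip's dependency resolution so nothing newer sneaks in:
#   pip install --no-deps -r requirements.gate
grep -c '==' requirements.gate   # every line in the gate file is an exact pin
```

Because requirements.txt itself stays unchanged, the egg's requires.txt (and
hence downstream packaging) would still see the flexible ranges.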

Adam



On Thu, Feb 19, 2015 at 12:59 PM, Joe Gordon joe.gord...@gmail.com wrote:



 On Wed, Feb 18, 2015 at 7:14 AM, Doug Hellmann d...@doughellmann.com
 wrote:



 On Wed, Feb 18, 2015, at 10:07 AM, Donald Stufft wrote:
 
   On Feb 18, 2015, at 10:00 AM, Doug Hellmann d...@doughellmann.com
 wrote:
  
  
  
   On Tue, Feb 17, 2015, at 03:17 PM, Joe Gordon wrote:
   On Tue, Feb 17, 2015 at 4:19 AM, Sean Dague s...@dague.net wrote:
  
   On 02/16/2015 08:50 PM, Ian Cordasco wrote:
   On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:
  
   On 02/16/2015 02:08 PM, Doug Hellmann wrote:
  
  
   On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
   Hey everyone,
  
   The os-ansible-deployment team was working on updates to add
 support
   for
   the latest version of juno and noticed some interesting version
   specifiers
   introduced into global-requirements.txt in January. It
 introduced some
   version specifiers that seem a bit impossible like the one for
   requests
   [1]. There are others that equate presently to pinning the
 versions of
   the
   packages [2, 3, 4].
  
   I understand fully and support the commit because of how it
 improves
   pretty much everyone’s quality of life (no fires to put out in
 the
   middle
   of the night on the weekend). I’m also aware that a lot of the
   downstream
   redistributors tend to work from global-requirements.txt when
   determining
   what to package/support.
  
   It seems to me like there’s room to clean up some of these
   requirements
   to
   make them far more explicit and less misleading to the human
 eye (even
   though tooling like pip can easily parse/understand these).
  
   I think that's the idea. These requirements were generated
   automatically, and fixed issues that were holding back several
   projects.
   Now we can apply updates to them by hand, to either move the
 lower
   bounds down (as in the case Ihar pointed out with stevedore) or
 clean
   up
   the range definitions. We should not raise the limits of any Oslo
   libraries, and we should consider raising the limits of
 third-party
   libraries very carefully.
  
   We should make those changes on one library at a time, so we can
 see
   what effect each change has on the other requirements.
  
  
   I also understand that stable-maint may want to occasionally
 bump the
   caps
   to see if newer versions will not break everything, so what is
 the
   right
   way forward? What is the best way to both maintain a stable
 branch
   with
   known working dependencies while helping out those who do so
 much work
   for
   us (downstream and stable-maint) and not permanently pinning to
   certain
   working versions?
  
   Managing the upper bounds is still under discussion. Sean
 pointed out
   that we might want hard caps so that updates to stable branch
 were
   explicit. I can see either side of that argument and am still on
 the
   fence about the best approach.
  
   History has shown that it's too much work keeping testing
 functioning
   for stable branches if we leave dependencies uncapped. If
 particular
   people are interested in bumping versions when releases happen,
 it's
   easy enough to do with a requirements proposed update. It will
 even run
   tests that in most cases will prove that it works.
  
   It might even be possible for someone to build some automation
 that did
   that as stuff from pypi released so we could have the best of both
   worlds. But I think capping is definitely something we want as a
   project, and it reflects the way that most deployments will
 consume this
   code.
  
   -Sean
  
   --
   Sean Dague
   http://dague.net
  
   Right. No one is arguing the very clear benefits of all of this.
  
   I’m just wondering if for the example version identifiers that I
 gave in
   my original message (and others that are very similar) if we want
 to make
   the strings much simpler for people who tend to work from them
 (i.e.,
   downstream re-distributors 

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-19 Thread Joe Gordon
On Thu, Feb 19, 2015 at 1:48 PM, Adam Gandelman ad...@ubuntu.com wrote:

 This creates a bit of a problem for downstream (packagers and probably
 others). Shipping a requirements.txt with explicit pins will end up
 producing an egg with a requires.txt that reflects those pins, unless there
 is some other magic planned that I'm not aware of. I can't speak for all
 packaging flavors, but I know debian packaging interacts quite closely with
 things like requirements.txt and the resulting egg's requires.txt to
 determine appropriate system-level package dependencies. This would require
 a lot of tedious work on packagers' part to get something functional.

 What if it's flipped? How about keeping requirements.txt with the caps, and
 using that as input to produce something like requirements.gate that is
 passed to 'pip install --no-deps' on our slaves? We'd end up installing and
 using the explicit pinned requirements while the services/libraries
 themselves remain flexible. This might be the issue Doug pointed out, where
 requirements updates across projects are not synchronized.


Switching them to requirements.txt and requirements.gate works for me. If a
simple renaming makes things better, then great!

As for Doug's comment, yes we need to work something out to overwrite
requirements.gate, under your proposed naming, with global requirements.


 Adam



 On Thu, Feb 19, 2015 at 12:59 PM, Joe Gordon joe.gord...@gmail.com
 wrote:



 On Wed, Feb 18, 2015 at 7:14 AM, Doug Hellmann d...@doughellmann.com
 wrote:



 On Wed, Feb 18, 2015, at 10:07 AM, Donald Stufft wrote:
 
   On Feb 18, 2015, at 10:00 AM, Doug Hellmann d...@doughellmann.com
 wrote:
  
  
  
   On Tue, Feb 17, 2015, at 03:17 PM, Joe Gordon wrote:
   On Tue, Feb 17, 2015 at 4:19 AM, Sean Dague s...@dague.net wrote:
  
   On 02/16/2015 08:50 PM, Ian Cordasco wrote:
   On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:
  
   On 02/16/2015 02:08 PM, Doug Hellmann wrote:
  
  
   On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
   Hey everyone,
  
   The os-ansible-deployment team was working on updates to add
 support
   for
   the latest version of juno and noticed some interesting version
   specifiers
   introduced into global-requirements.txt in January. It
 introduced some
   version specifiers that seem a bit impossible like the one for
   requests
   [1]. There are others that equate presently to pinning the
 versions of
   the
   packages [2, 3, 4].
  
   I understand fully and support the commit because of how it
 improves
   pretty much everyone’s quality of life (no fires to put out in
 the
   middle
   of the night on the weekend). I’m also aware that a lot of the
   downstream
   redistributors tend to work from global-requirements.txt when
   determining
   what to package/support.
  
   It seems to me like there’s room to clean up some of these
   requirements
   to
   make them far more explicit and less misleading to the human
 eye (even
   though tooling like pip can easily parse/understand these).
  
   I think that's the idea. These requirements were generated
   automatically, and fixed issues that were holding back several
   projects.
   Now we can apply updates to them by hand, to either move the
 lower
   bounds down (as in the case Ihar pointed out with stevedore) or
 clean
   up
   the range definitions. We should not raise the limits of any
 Oslo
   libraries, and we should consider raising the limits of
 third-party
   libraries very carefully.
  
   We should make those changes on one library at a time, so we
 can see
   what effect each change has on the other requirements.
  
  
   I also understand that stable-maint may want to occasionally
 bump the
   caps
   to see if newer versions will not break everything, so what is
 the
   right
   way forward? What is the best way to both maintain a stable
 branch
   with
   known working dependencies while helping out those who do so
 much work
   for
   us (downstream and stable-maint) and not permanently pinning to
   certain
   working versions?
  
   Managing the upper bounds is still under discussion. Sean
 pointed out
   that we might want hard caps so that updates to stable branch
 were
   explicit. I can see either side of that argument and am still
 on the
   fence about the best approach.
  
   History has shown that it's too much work keeping testing
 functioning
   for stable branches if we leave dependencies uncapped. If
 particular
   people are interested in bumping versions when releases happen,
 it's
   easy enough to do with a requirements proposed update. It will
 even run
   tests that in most cases will prove that it works.
  
   It might even be possible for someone to build some automation
 that did
   that as stuff from pypi released so we could have the best of
 both
   worlds. But I think capping is definitely something we want as a
   project, and it reflects the way that most deployments will
 consume this
   code.
  
   -Sean
  
   --
   Sean 

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-19 Thread Adam Gandelman
Its more than just the naming.  In the original proposal, requirements.txt
is the compiled list of all pinned deps (direct and transitive), while
requirements.in reflects what people will actually use.  Whatever is in
requirements.txt affects the egg's requires.txt. Instead, we can keep
requirements.txt unchanged and have it still be the canonical list of
dependencies, while
requirements.out/requirements.gate/requirements.whatever is an upstream
utility we produce and use to keep things sane on our slaves.

Maybe all we need is:

* update the existing post-merge job on the requirements repo to produce a
requirements.txt (as it does now) as well as the compiled version.

* modify devstack in some way with a toggle to have it process dependencies
from the compiled version when necessary

I'm not sure how the second bit jives with the existing devstack
installation code, specifically with the libraries from git-or-master but
we can probably add something to warm the system with dependencies from the
compiled version prior to calling pip/setup.py/etc
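A minimal sketch of what that devstack toggle might look like — the variable
and file names here are hypothetical, not existing devstack options:

```shell
# Hypothetical toggle: warm the system from the compiled pin list before
# the normal pip/setup.py installation path runs.
USE_COMPILED_REQUIREMENTS=${USE_COMPILED_REQUIREMENTS:-False}
COMPILED_FILE=${COMPILED_FILE:-/opt/stack/requirements/requirements.compiled}

warm_compiled_requirements() {
    if [ "$USE_COMPILED_REQUIREMENTS" = "True" ] && [ -f "$COMPILED_FILE" ]; then
        # --no-deps keeps pip from re-resolving; the compiled file is
        # assumed to already list every transitive dependency.
        pip install --no-deps -r "$COMPILED_FILE"
    else
        echo "skipping compiled-requirements warm-up"
    fi
}

warm_compiled_requirements
```

The git-or-master library installs would still run afterwards and could
overwrite individual pre-installed pins as they do today.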

Adam



On Thu, Feb 19, 2015 at 2:31 PM, Joe Gordon joe.gord...@gmail.com wrote:



 On Thu, Feb 19, 2015 at 1:48 PM, Adam Gandelman ad...@ubuntu.com wrote:

 This creates a bit of a problem for downstream (packagers and probably
 others). Shipping a requirements.txt with explicit pins will end up
 producing an egg with a requires.txt that reflects those pins, unless there
 is some other magic planned that I'm not aware of. I can't speak for all
 packaging flavors, but I know debian packaging interacts quite closely with
 things like requirements.txt and the resulting egg's requires.txt to
 determine appropriate system-level package dependencies. This would require
 a lot of tedious work on packagers' part to get something functional.

 What if it's flipped? How about keeping requirements.txt with the caps,
 and using that as input to produce something like requirements.gate that
 is passed to 'pip install --no-deps' on our slaves? We'd end up installing
 and using the explicit pinned requirements while the services/libraries
 themselves remain flexible. This might be the issue Doug pointed out, where
 requirements updates across projects are not synchronized.


 Switching them to requirements.txt and requirements.gate works for me. If
 a simple renaming makes things better, then great!

 As for Doug's comment, yes we need to work something out to overwrite
 requirements.gate, under your proposed naming, with global requirements.


 Adam



 On Thu, Feb 19, 2015 at 12:59 PM, Joe Gordon joe.gord...@gmail.com
 wrote:



 On Wed, Feb 18, 2015 at 7:14 AM, Doug Hellmann d...@doughellmann.com
 wrote:



 On Wed, Feb 18, 2015, at 10:07 AM, Donald Stufft wrote:
 
   On Feb 18, 2015, at 10:00 AM, Doug Hellmann d...@doughellmann.com
 wrote:
  
  
  
   On Tue, Feb 17, 2015, at 03:17 PM, Joe Gordon wrote:
   On Tue, Feb 17, 2015 at 4:19 AM, Sean Dague s...@dague.net
 wrote:
  
   On 02/16/2015 08:50 PM, Ian Cordasco wrote:
   On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:
  
   On 02/16/2015 02:08 PM, Doug Hellmann wrote:
  
  
   On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
   Hey everyone,
  
   The os-ansible-deployment team was working on updates to add
 support
   for
   the latest version of juno and noticed some interesting
 version
   specifiers
   introduced into global-requirements.txt in January. It
 introduced some
   version specifiers that seem a bit impossible like the one for
   requests
   [1]. There are others that equate presently to pinning the
 versions of
   the
   packages [2, 3, 4].
  
   I understand fully and support the commit because of how it
 improves
   pretty much everyone’s quality of life (no fires to put out
 in the
   middle
   of the night on the weekend). I’m also aware that a lot of the
   downstream
   redistributors tend to work from global-requirements.txt when
   determining
   what to package/support.
  
   It seems to me like there’s room to clean up some of these
   requirements
   to
   make them far more explicit and less misleading to the human
 eye (even
   though tooling like pip can easily parse/understand these).
  
   I think that's the idea. These requirements were generated
   automatically, and fixed issues that were holding back several
   projects.
   Now we can apply updates to them by hand, to either move the
 lower
   bounds down (as in the case Ihar pointed out with stevedore)
 or clean
   up
   the range definitions. We should not raise the limits of any
 Oslo
   libraries, and we should consider raising the limits of
 third-party
   libraries very carefully.
  
   We should make those changes on one library at a time, so we
 can see
   what effect each change has on the other requirements.
  
  
   I also understand that stable-maint may want to occasionally
 bump the
   caps
   to see if newer versions will not break everything, so what
 is the
   right
   way forward? What is the best way to both 

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-18 Thread Jeremy Stanley
On 2015-02-18 10:00:31 -0500 (-0500), Doug Hellmann wrote:
 I'm interested in seeing what that list looks like. I suspect we have
 some libraries listed in the global requirements now that aren't
 actually used
[...]

Shameless plug for https://review.openstack.org/148071 . It turns up
a lot in master but each needs to be git-blame researched to confirm
why it was added so that we don't prematurely remove those with
pending changes in consuming projects. Doing the same for stable
branches would likely be a lot more clear-cut.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-18 Thread Donald Stufft

 On Feb 18, 2015, at 10:14 AM, Doug Hellmann d...@doughellmann.com wrote:
 
 
 
 On Wed, Feb 18, 2015, at 10:07 AM, Donald Stufft wrote:
 
 On Feb 18, 2015, at 10:00 AM, Doug Hellmann d...@doughellmann.com wrote:
 
 
 
 On Tue, Feb 17, 2015, at 03:17 PM, Joe Gordon wrote:
 On Tue, Feb 17, 2015 at 4:19 AM, Sean Dague s...@dague.net wrote:
 
 On 02/16/2015 08:50 PM, Ian Cordasco wrote:
 On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:
 
 On 02/16/2015 02:08 PM, Doug Hellmann wrote:
 
 
 On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
 Hey everyone,
 
 The os-ansible-deployment team was working on updates to add support
 for
 the latest version of juno and noticed some interesting version
 specifiers
 introduced into global-requirements.txt in January. It introduced some
 version specifiers that seem a bit impossible like the one for
 requests
 [1]. There are others that equate presently to pinning the versions of
 the
 packages [2, 3, 4].
 
 I understand fully and support the commit because of how it improves
 pretty much everyone’s quality of life (no fires to put out in the
 middle
 of the night on the weekend). I’m also aware that a lot of the
 downstream
 redistributors tend to work from global-requirements.txt when
 determining
 what to package/support.
 
 It seems to me like there’s room to clean up some of these
 requirements
 to
 make them far more explicit and less misleading to the human eye (even
 though tooling like pip can easily parse/understand these).
 
 I think that's the idea. These requirements were generated
 automatically, and fixed issues that were holding back several
 projects.
 Now we can apply updates to them by hand, to either move the lower
 bounds down (as in the case Ihar pointed out with stevedore) or clean
 up
 the range definitions. We should not raise the limits of any Oslo
 libraries, and we should consider raising the limits of third-party
 libraries very carefully.
 
 We should make those changes on one library at a time, so we can see
 what effect each change has on the other requirements.
 
 
 I also understand that stable-maint may want to occasionally bump the
 caps
 to see if newer versions will not break everything, so what is the
 right
 way forward? What is the best way to both maintain a stable branch
 with
 known working dependencies while helping out those who do so much work
 for
 us (downstream and stable-maint) and not permanently pinning to
 certain
 working versions?
 
 Managing the upper bounds is still under discussion. Sean pointed out
 that we might want hard caps so that updates to stable branch were
 explicit. I can see either side of that argument and am still on the
 fence about the best approach.
 
 History has shown that it's too much work keeping testing functioning
 for stable branches if we leave dependencies uncapped. If particular
 people are interested in bumping versions when releases happen, it's
 easy enough to do with a requirements proposed update. It will even run
 tests that in most cases will prove that it works.
 
 It might even be possible for someone to build some automation that did
 that as stuff from pypi released so we could have the best of both
 worlds. But I think capping is definitely something we want as a
 project, and it reflects the way that most deployments will consume this
 code.
 
-Sean
 
 --
 Sean Dague
 http://dague.net
 
 Right. No one is arguing the very clear benefits of all of this.
 
 I’m just wondering if for the example version identifiers that I gave in
 my original message (and others that are very similar) if we want to make
 the strings much simpler for people who tend to work from them (i.e.,
 downstream re-distributors whose jobs are already difficult enough). I’ve
 offered to help at least one of them in the past who maintains all of
 their distro’s packages themselves, but they refused so I’d like to help
 them any way possible. Especially if any of them chime in as this being
 something that would be helpful.
 
 Ok, your links got kind of scrambled. Can you next time please inline
 the key relevant content in the email, because I think we all missed the
 original message intent as the key content was only in footnotes.
 
 From my point of view, normalization patches would be fine.
 
 requests>=1.2.1,!=2.4.0,<=2.2.1
 
 Is actually an odd one, because that's still there because we're using
 Trusty level requests in the tests, and my ability to have devstack not
 install that has thus far failed.
 
 Things like:
 
 osprofiler>=0.3.0,<=0.3.0 # Apache-2.0
 
 Can clearly be normalized to osprofiler==0.3.0 if you want to propose
 the patch manually.
 
 
 global-requirements for stable branches serves two uses:
 
 1. Specify the set of dependencies that we would like to test against
 2.  A tool for downstream packagers to use when determining what to
 package/support.
 
 For #1, Ideally we would like a set of all dependencies, including
 transitive, with explicit versions (very similar to the 

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-18 Thread Doug Hellmann


On Tue, Feb 17, 2015, at 03:17 PM, Joe Gordon wrote:
 On Tue, Feb 17, 2015 at 4:19 AM, Sean Dague s...@dague.net wrote:
 
  On 02/16/2015 08:50 PM, Ian Cordasco wrote:
   On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:
  
   On 02/16/2015 02:08 PM, Doug Hellmann wrote:
  
  
   On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
   Hey everyone,
  
   The os-ansible-deployment team was working on updates to add support
   for
   the latest version of juno and noticed some interesting version
   specifiers
   introduced into global-requirements.txt in January. It introduced some
   version specifiers that seem a bit impossible like the one for
  requests
   [1]. There are others that equate presently to pinning the versions of
   the
   packages [2, 3, 4].
  
   I understand fully and support the commit because of how it improves
   pretty much everyone’s quality of life (no fires to put out in the
   middle
   of the night on the weekend). I’m also aware that a lot of the
   downstream
   redistributors tend to work from global-requirements.txt when
   determining
   what to package/support.
  
   It seems to me like there’s room to clean up some of these
  requirements
   to
   make them far more explicit and less misleading to the human eye (even
   though tooling like pip can easily parse/understand these).
  
   I think that's the idea. These requirements were generated
   automatically, and fixed issues that were holding back several
  projects.
   Now we can apply updates to them by hand, to either move the lower
   bounds down (as in the case Ihar pointed out with stevedore) or clean
  up
   the range definitions. We should not raise the limits of any Oslo
   libraries, and we should consider raising the limits of third-party
   libraries very carefully.
  
   We should make those changes on one library at a time, so we can see
   what effect each change has on the other requirements.
  
  
   I also understand that stable-maint may want to occasionally bump the
   caps
   to see if newer versions will not break everything, so what is the
   right
   way forward? What is the best way to both maintain a stable branch
  with
   known working dependencies while helping out those who do so much work
   for
   us (downstream and stable-maint) and not permanently pinning to
  certain
   working versions?
  
   Managing the upper bounds is still under discussion. Sean pointed out
   that we might want hard caps so that updates to stable branch were
   explicit. I can see either side of that argument and am still on the
   fence about the best approach.
  
   History has shown that it's too much work keeping testing functioning
   for stable branches if we leave dependencies uncapped. If particular
   people are interested in bumping versions when releases happen, it's
   easy enough to do with a requirements proposed update. It will even run
   tests that in most cases will prove that it works.
  
   It might even be possible for someone to build some automation that did
   that as stuff from pypi released so we could have the best of both
   worlds. But I think capping is definitely something we want as a
   project, and it reflects the way that most deployments will consume this
   code.
  
-Sean
  
   --
   Sean Dague
   http://dague.net
  
   Right. No one is arguing the very clear benefits of all of this.
  
   I’m just wondering if for the example version identifiers that I gave in
   my original message (and others that are very similar) if we want to make
   the strings much simpler for people who tend to work from them (i.e.,
   downstream re-distributors whose jobs are already difficult enough). I’ve
   offered to help at least one of them in the past who maintains all of
   their distro’s packages themselves, but they refused so I’d like to help
   them any way possible. Especially if any of them chime in as this being
   something that would be helpful.
 
  Ok, your links got kind of scrambled. Can you next time please inline
  the key relevant content in the email, because I think we all missed the
  original message intent as the key content was only in footnotes.
 
  From my point of view, normalization patches would be fine.
 
  requests>=1.2.1,!=2.4.0,<=2.2.1
 
  Is actually an odd one, because that's still there because we're using
  Trusty level requests in the tests, and my ability to have devstack not
  install that has thus far failed.
 
  Things like:
 
  osprofiler>=0.3.0,<=0.3.0 # Apache-2.0
 
  Can clearly be normalized to osprofiler==0.3.0 if you want to propose
  the patch manually.
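A quick check of why that range collapses to an exact pin — this is a toy
stand-in for pip's real specifier matching, using no third-party libraries:

```python
# Toy version comparison on dotted release numbers; pip's actual matching is
# richer, but for plain X.Y.Z releases the idea is the same.
def ver(s):
    return tuple(int(part) for part in s.split("."))

def in_range(candidate, lower="0.3.0", upper="0.3.0"):
    # Models the pair of constraints ">=0.3.0,<=0.3.0".
    return ver(lower) <= ver(candidate) <= ver(upper)

candidates = ["0.2.9", "0.3.0", "0.3.1"]
allowed = [c for c in candidates if in_range(c)]
print(allowed)  # only 0.3.0 satisfies both bounds, i.e. an exact pin
```

So ">=0.3.0,<=0.3.0" and "==0.3.0" admit the same set of versions, and the
latter is what a human-readable normalization patch would write.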
 
 
 global-requirements for stable branches serves two uses:
 
 1. Specify the set of dependencies that we would like to test against
 2.  A tool for downstream packagers to use when determining what to
 package/support.
 
 For #1, Ideally we would like a set of all dependencies, including
 transitive, with explicit versions (very similar to the output of
 pip-freeze). But for #2 the 

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-18 Thread Donald Stufft

 On Feb 18, 2015, at 10:00 AM, Doug Hellmann d...@doughellmann.com wrote:
 
 
 
 On Tue, Feb 17, 2015, at 03:17 PM, Joe Gordon wrote:
 On Tue, Feb 17, 2015 at 4:19 AM, Sean Dague s...@dague.net wrote:
 
 On 02/16/2015 08:50 PM, Ian Cordasco wrote:
 On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:
 
 On 02/16/2015 02:08 PM, Doug Hellmann wrote:
 
 
 On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
 Hey everyone,
 
 The os-ansible-deployment team was working on updates to add support
 for
 the latest version of juno and noticed some interesting version
 specifiers
 introduced into global-requirements.txt in January. It introduced some
 version specifiers that seem a bit impossible like the one for
 requests
 [1]. There are others that equate presently to pinning the versions of
 the
 packages [2, 3, 4].
 
 I understand fully and support the commit because of how it improves
 pretty much everyone’s quality of life (no fires to put out in the
 middle
 of the night on the weekend). I’m also aware that a lot of the
 downstream
 redistributors tend to work from global-requirements.txt when
 determining
 what to package/support.
 
 It seems to me like there’s room to clean up some of these
 requirements
 to
 make them far more explicit and less misleading to the human eye (even
 though tooling like pip can easily parse/understand these).
 
 I think that's the idea. These requirements were generated
 automatically, and fixed issues that were holding back several
 projects.
 Now we can apply updates to them by hand, to either move the lower
 bounds down (as in the case Ihar pointed out with stevedore) or clean
 up
 the range definitions. We should not raise the limits of any Oslo
 libraries, and we should consider raising the limits of third-party
 libraries very carefully.
 
 We should make those changes on one library at a time, so we can see
 what effect each change has on the other requirements.
 
 
I also understand that stable-maint may want to occasionally bump the
caps to see if newer versions will not break everything, so what is the
right way forward? What is the best way to both maintain a stable
branch with known working dependencies while helping out those who do
so much work for us (downstream and stable-maint) and not permanently
pinning to certain working versions?
 
 Managing the upper bounds is still under discussion. Sean pointed out
 that we might want hard caps so that updates to stable branch were
 explicit. I can see either side of that argument and am still on the
 fence about the best approach.
 
 History has shown that it's too much work keeping testing functioning
 for stable branches if we leave dependencies uncapped. If particular
 people are interested in bumping versions when releases happen, it's
 easy enough to do with a requirements proposed update. It will even run
 tests that in most cases will prove that it works.
 
 It might even be possible for someone to build some automation that did
 that as stuff from pypi released so we could have the best of both
 worlds. But I think capping is definitely something we want as a
 project, and it reflects the way that most deployments will consume this
 code.
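The automation Sean floats here, watching PyPI for new releases and proposing cap bumps, might look something like this at its core. This is a hypothetical sketch: the `parse` and `candidate_bumps` names are invented for illustration, dotted-integer comparison stands in for full PEP 440 ordering, and a real tool would follow up by submitting a requirements proposed update for review.

```python
# Hypothetical core of the cap-bumping automation: compare the current
# upper bound against versions released on PyPI and list candidate bumps.

def parse(version):
    # Simplified version ordering; real tooling would use PEP 440 rules.
    return tuple(int(part) for part in version.split("."))

def candidate_bumps(cap, released):
    """Releases newer than the current cap, oldest first, each a
    candidate for an automated requirements proposed update."""
    return sorted((v for v in released if parse(v) > parse(cap)), key=parse)

# e.g. requests capped at 2.2.1 while newer versions exist upstream:
print(candidate_bumps("2.2.1", ["1.2.3", "2.2.1", "2.4.0", "2.5.1"]))
# prints ['2.4.0', '2.5.1']
```

Each candidate would then get its own review, so the gate itself proves (in most cases) whether the bump works.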
 
 -Sean
 
 --
 Sean Dague
 http://dague.net
 
Right. No one is arguing the very clear benefits of all of this.

I’m just wondering, for the example version identifiers that I gave in
my original message (and others that are very similar), whether we want
to make the strings much simpler for people who tend to work from them
(i.e., downstream re-distributors, whose jobs are already difficult
enough). I’ve offered to help at least one of them in the past who
maintains all of their distro’s packages themselves, but they refused,
so I’d like to help them any way possible, especially if any of them
chime in that this would be helpful.
 
 Ok, your links got kind of scrambled. Can you next time please inline
 the key relevant content in the email, because I think we all missed the
 original message intent as the key content was only in footnotes.
 
 From my point of view, normalization patches would be fine.
 
requests>=1.2.1,!=2.4.0,<=2.2.1
 
That one is actually odd: it's still there because we're using
Trusty-level requests in the tests, and my ability to have devstack not
install that has thus far failed.
 
 Things like:
 
osprofiler>=0.3.0,<=0.3.0 # Apache-2.0
 
 Can clearly be normalized to osprofiler==0.3.0 if you want to propose
 the patch manually.
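For readers parsing these specifiers by eye, pip's view of the two examples Sean quotes can be checked with the `packaging` library (a sketch; `packaging` is the third-party specifier implementation that pip itself builds on, so it is assumed to be installed):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# The capped requests range: at least 1.2.1, not 2.4.0, at most 2.2.1.
requests_spec = SpecifierSet(">=1.2.1,!=2.4.0,<=2.2.1")
print(Version("2.2.1") in requests_spec)  # True
print(Version("2.4.0") in requests_spec)  # False (already above the cap)
# The !=2.4.0 exclusion is redundant under the <=2.2.1 cap, which is why
# the specifier reads as "impossible" to a human, though pip parses it fine.

# osprofiler>=0.3.0,<=0.3.0 admits exactly one version:
osprofiler_spec = SpecifierSet(">=0.3.0,<=0.3.0")
print(Version("0.3.0") in osprofiler_spec)  # True
print(Version("0.3.1") in osprofiler_spec)  # False
```

So normalizing to `osprofiler==0.3.0` changes nothing for pip; it only makes the intent obvious to humans and downstream packagers.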
 
 
global-requirements for stable branches serves two uses:

1. Specify the set of dependencies that we would like to test against.
2. A tool for downstream packagers to use when determining what to
package/support.

For #1, ideally we would like a set of all dependencies, including
transitive, with explicit versions (very similar to the output of
pip freeze). But for #2 the standard requirements file with a range is
preferred. Putting an upper bound on each dependency, instead of using
'==', was a compromise between the two use cases.

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-18 Thread Doug Hellmann


On Wed, Feb 18, 2015, at 10:07 AM, Donald Stufft wrote:
 
  On Feb 18, 2015, at 10:00 AM, Doug Hellmann d...@doughellmann.com wrote:
  
  
  

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-17 Thread Joe Gordon

global-requirements for stable branches serves two uses:

1. Specify the set of dependencies that we would like to test against.
2. A tool for downstream packagers to use when determining what to
package/support.

For #1, ideally we would like a set of all dependencies, including
transitive, with explicit versions (very similar to the output of
pip freeze). But for #2 the standard requirements file with a range is
preferred. Putting an upper bound on each dependency, instead of using
'==', was a compromise between the two use cases.
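As a rough illustration of use case #1, the fully pinned list described above is essentially what `pip freeze` emits. A minimal stdlib sketch of the same idea (not pip's actual implementation, which also handles editable installs and URL requirements):

```python
# Sketch of "pip freeze"-style output for use case #1: every installed
# distribution, transitive dependencies included, pinned with '=='.
from importlib.metadata import distributions

def freeze():
    pins = {
        f"{dist.metadata['Name']}=={dist.version}"
        for dist in distributions()
        if dist.metadata["Name"]
    }
    return sorted(pins)

for line in freeze():
    print(line)
```

A file in this form nails down exactly what the gate tested, while the ranged global-requirements form leaves downstream packagers room to choose versions.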

Going forward I 

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-17 Thread Joshua Harlow




Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-17 Thread Ian Cordasco


On 2/17/15, 16:27, Joshua Harlow harlo...@outlook.com wrote:


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-17 Thread Sean Dague
 
Right. No one is arguing the very clear benefits of all of this.

I’m just wondering, for the example version identifiers that I gave in
my original message (and others that are very similar), whether we want
to make the strings much simpler for people who tend to work from them
(i.e., downstream re-distributors, whose jobs are already difficult
enough). I’ve offered to help at least one of them in the past who
maintains all of their distro’s packages themselves, but they refused,
so I’d like to help them any way possible, especially if any of them
chime in that this would be helpful.

Ok, your links got kind of scrambled. Can you next time please inline
the key relevant content in the email, because I think we all missed the
original message intent as the key content was only in footnotes.

From my point of view, normalization patches would be fine.

requests>=1.2.1,!=2.4.0,<=2.2.1

That one is actually odd: it's still there because we're using
Trusty-level requests in the tests, and my ability to have devstack not
install that has thus far failed.

Things like:

osprofiler>=0.3.0,<=0.3.0 # Apache-2.0

Can clearly be normalized to osprofiler==0.3.0 if you want to propose
the patch manually.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-16 Thread Doug Hellmann


On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
 Hey everyone,
 
The os-ansible-deployment team was working on updates to add support
for the latest version of juno and noticed some interesting version
specifiers introduced into global-requirements.txt in January. It
introduced some version specifiers that seem a bit impossible like the
one for requests [1]. There are others that equate presently to pinning
the versions of the packages [2, 3, 4].
 packages [2, 3, 4].
 
 I understand fully and support the commit because of how it improves
 pretty much everyone’s quality of life (no fires to put out in the middle
 of the night on the weekend). I’m also aware that a lot of the downstream
 redistributors tend to work from global-requirements.txt when determining
 what to package/support.
 
It seems to me like there’s room to clean up some of these requirements
to make them far more explicit and less misleading to the human eye
(even though tooling like pip can easily parse/understand these).

I think that's the idea. These requirements were generated
automatically, and fixed issues that were holding back several projects.
Now we can apply updates to them by hand, to either move the lower
bounds down (as in the case Ihar pointed out with stevedore) or clean up
the range definitions. We should not raise the limits of any Oslo
libraries, and we should consider raising the limits of third-party
libraries very carefully.

We should make those changes on one library at a time, so we can see
what effect each change has on the other requirements.

 
I also understand that stable-maint may want to occasionally bump the
caps to see if newer versions will not break everything, so what is the
right way forward? What is the best way to both maintain a stable
branch with known working dependencies while helping out those who do
so much work for us (downstream and stable-maint) and not permanently
pinning to certain working versions?

Managing the upper bounds is still under discussion. Sean pointed out
that we might want hard caps so that updates to stable branch were
explicit. I can see either side of that argument and am still on the
fence about the best approach.

 
I’ve CC’d -operators too since I think their input will be invaluable
on this as well (since I doubt everyone is using distro packages and
some may be doing source-based installations).

I've not copied the operators list, since we try not to cross-post
threads. We should ask them to respond here on the dev list, or maybe
someone can summarize any responses from the other list.

Doug

 
 Cheers,
 Ian
 
[1]: https://github.com/openstack/requirements/commit/499db6b64071c2afa16390ad2bc024e6a96db4ff#diff-d7d5c6fa7118ea10d88f3afeaef4da77R126
[2]: https://github.com/openstack/requirements/commit/499db6b64071c2afa16390ad2bc024e6a96db4ff#diff-d7d5c6fa7118ea10d88f3afeaef4da77R128
[3]: https://github.com/openstack/requirements/commit/499db6b64071c2afa16390ad2bc024e6a96db4ff#diff-d7d5c6fa7118ea10d88f3afeaef4da77R70
[4]: https://github.com/openstack/requirements/commit/499db6b64071c2afa16390ad2bc024e6a96db4ff#diff-d7d5c6fa7118ea10d88f3afeaef4da77R189
 



Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-16 Thread Sean Dague
On 02/16/2015 02:08 PM, Doug Hellmann wrote:
 
 
 On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
 Hey everyone,

 The os-ansible-deployment team was working on updates to add support for
 the latest version of juno and noticed some interesting version
 specifiers
 introduced into global-requirements.txt in January. It introduced some
 version specifiers that seem a bit impossible like the one for requests
 [1]. There are others that equate presently to pinning the versions of
 the
 packages [2, 3, 4].

 I understand fully and support the commit because of how it improves
 pretty much everyone’s quality of life (no fires to put out in the middle
 of the night on the weekend). I’m also aware that a lot of the downstream
 redistributors tend to work from global-requirements.txt when determining
 what to package/support.

 It seems to me like there’s room to clean up some of these requirements
 to
 make them far more explicit and less misleading to the human eye (even
 though tooling like pip can easily parse/understand these).
 
 I think that's the idea. These requirements were generated
 automatically, and fixed issues that were holding back several projects.
 Now we can apply updates to them by hand, to either move the lower
 bounds down (as in the case Ihar pointed out with stevedore) or clean up
 the range definitions. We should not raise the limits of any Oslo
 libraries, and we should consider raising the limits of third-party
 libraries very carefully.
 
 We should make those changes on one library at a time, so we can see
 what effect each change has on the other requirements.
 

 I also understand that stable-maint may want to occasionally bump the caps
 to see if newer versions will not break everything, so what is the right
 way forward? What is the best way to both maintain a stable branch with
 known working dependencies while helping out those who do so much work for
 us (downstream and stable-maint) and not permanently pinning to certain
 working versions?
 
 Managing the upper bounds is still under discussion. Sean pointed out
 that we might want hard caps so that updates to stable branch were
 explicit. I can see either side of that argument and am still on the
 fence about the best approach.

History has shown that it's too much work keeping testing functioning
for stable branches if we leave dependencies uncapped. If particular
people are interested in bumping versions when releases happen, it's
easy enough to do with a requirements proposed update. It will even run
tests that in most cases will prove that it works.

It might even be possible for someone to build automation that proposed
those bumps as new releases appear on PyPI, so we could have the best of
both worlds. But I think capping is definitely something we want as a
project, and it reflects the way that most deployments will consume this
code.
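
To make the capping mechanics concrete, here is a small sketch using the
`packaging` library, which implements the same PEP 440 specifier semantics
pip uses; the bounds shown are illustrative, not taken from the actual
global-requirements.txt:

```python
from packaging.specifiers import SpecifierSet

# A capped entry of the kind global-requirements.txt now carries
# (illustrative bounds, not the real pins).
spec = SpecifierSet(">=1.2.3,<=1.9.0")

# pip only considers releases that fall inside the capped range,
# so a new 2.0.0 appearing on PyPI cannot break the stable branch.
candidates = ["1.2.2", "1.2.3", "1.9.0", "2.0.0"]
print([v for v in candidates if v in spec])  # ['1.2.3', '1.9.0']
```

Bumping the cap in a proposed requirements update then just widens that
range, and the gate tests the widened range before it merges.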

-Sean

-- 
Sean Dague
http://dague.net


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-16 Thread Ian Cordasco
On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:

History has shown that it's too much work keeping testing functioning
for stable branches if we leave dependencies uncapped. If particular
people are interested in bumping versions when releases happen, it's
easy enough to do with a requirements proposed update. It will even run
tests that in most cases will prove that it works.

It might even be possible for someone to build some automation that did
that as stuff from pypi released so we could have the best of both
worlds. But I think capping is definitely something we want as a
project, and it reflects the way that most deployments will consume this
code.

   -Sean


Right. No one is disputing the very clear benefits of all of this.

I’m just wondering whether, for the example version identifiers I gave in
my original message (and others that are very similar), we want to make the
strings much simpler for people who tend to work from them (i.e.,
downstream re-distributors, whose jobs are already difficult enough). I’ve
offered to help at least one of them in the past who maintains all of their
distro’s packages themselves, but they refused, so I’d like to help them
any way possible, especially if any of them chime in that this would be
helpful.
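
As a sketch of the kind of cleanup being suggested: an auto-generated
specifier whose exclusion lies above the cap can be rewritten without
changing which versions it admits. The bounds below are illustrative,
modeled on (not copied from) the requests entry:

```python
from packaging.specifiers import SpecifierSet

# Auto-generated form: the != exclusion is unreachable, since 2.4.0
# already lies above the <=2.2.1 cap.
generated = SpecifierSet(">=2.2.0,!=2.4.0,<=2.2.1")
# Human-friendly equivalent.
simplified = SpecifierSet(">=2.2.0,<=2.2.1")

# Both admit exactly the same releases.
versions = ["2.1.0", "2.2.0", "2.2.1", "2.4.0", "2.5.0"]
assert list(generated.filter(versions)) == list(simplified.filter(versions))
print(list(simplified.filter(versions)))  # ['2.2.0', '2.2.1']
```

Pip parses both forms identically; the simplification only helps the humans
(packagers, stable-maint) who read the file directly.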

Cheers,
Ian
