Re: [openstack-dev] [TripleO] pypi-mirror is now unsupported - what do we do now?

2014-07-13 Thread Monty Taylor
On 07/10/2014 02:44 PM, Richard Jones wrote:
 On 10 July 2014 23:27, Mulcahy, Stephen stephen.mulc...@hp.com wrote:
 When I last tested bandersnatch, it didn’t work well behind a proxy (in
 fact most of the existing pypi mirroring tools suffered from the same
 problem) – pypi-mirror has worked extremely well for mirroring a subset of
 pypi and doing so behind a proxy. I’d also echo the requirement for a tool
 that provides wheels as we have seen significant performance improvement
 from using wheels with TripleO
 
 devpi works behind a proxy. If bandersnatch doesn't then that bug should be
 addressed ASAP. I'm in contact with its author regarding that.
 
 I'm currently investigating a sensible approach to having wheels be
 automatically built (for the most sensible value of automatic that we can
 determine wink).

We're also thinking about how we continue to offer the pre-built wheels
for each of our build platforms. For infra, what I'm thinking is:

On each mirror slave (We have one for each OS combo we use), do
something similar to:

pip wheel -r global-requirements.txt
rsync $wheelhouse pypi.openstack.org/$(lsb_release)

This may require keeping pypi-mirror and using an option to only do
wheel building so that we can get the directory publication split. Ok. I
got bored and wrote that:

https://review.openstack.org/106638

So if we land that, you can do;

pip wheel -r global-requirements.txt
run-mirror --wheels-only --wheelhouse=wheelhouse --wheeldest=mirror
rsync -avz mirror pypi.openstack.org:/srv/mirror

If we went the devpi route, we could do;

pip wheel -r global-requirements.txt
for pkg in $wheelhouse; do
  devpi upload $pkg
done

And put that into a cron.
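
Roughly, as a sketch (the script name, path and schedule below are
placeholders, not anything we actually have today):

#!/bin/bash
# update-wheel-mirror.sh -- hypothetical wrapper around the commands above.
# Assumes `devpi use` / `devpi login` have already been run on the slave.
set -e
pip wheel --wheel-dir wheelhouse -r global-requirements.txt
for pkg in wheelhouse/*.whl; do
  devpi upload "$pkg"
done

# e.g. crontab entry on each mirror slave:
# 0 * * * * /usr/local/bin/update-wheel-mirror.sh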



Re: [openstack-dev] [TripleO] pypi-mirror is now unsupported - what do we do now?

2014-07-13 Thread James Polley
On Mon, Jul 14, 2014 at 2:58 AM, Monty Taylor mord...@inaugust.com wrote:

 On 07/10/2014 02:44 PM, Richard Jones wrote:
  On 10 July 2014 23:27, Mulcahy, Stephen stephen.mulc...@hp.com wrote:
  When I last tested bandersnatch, it didn’t work well behind a proxy (in
  fact most of the existing pypi mirroring tools suffered from the same
  problem) – pypi-mirror has worked extremely well for mirroring a subset
 of
  pypi and doing so behind a proxy. I’d also echo the requirement for a
 tool
  that provides wheels as we have seen significant performance improvement
  from using wheels with TripleO
 
  devpi works behind a proxy. If bandersnatch doesn't then that bug should
 be
  addressed ASAP. I'm in contact with its author regarding that.
 
  I'm currently investigating a sensible approach to having wheels be
  automatically built (for the most sensible value of automatic that we
 can
  determine wink).

 We're also thinking about how we continue to offer the pre-built wheels
 for each of our build platforms. For infra, what I'm thinking is:

 On each mirror slave (We have one for each OS combo we use), do
 something similar to:

 pip wheel -r global-requirements.txt
 rsync $wheelhouse pypi.openstack.org/$(lsb_release)

 This may require keeping pypi-mirror and using an option to only do
 wheel building so that we can get the directory publication split. Ok. I
 got bored and wrote that:

 https://review.openstack.org/106638

 So if we land that, you can do;

 pip wheel -r global-requirements.txt
 run-mirror --wheels-only --wheelhouse=wheelhouse --wheeldest=mirror
 rsync -avz mirror pypi.openstack.org:/srv/mirror

 If we went the devpi route, we could do;

 pip wheel -r global-requirements.txt
 for pkg in $wheelhouse; do
   devpi upload $pkg
 done

 And put that into a cron.


Obviously keeping pypi-mirror would require the least amount of change to
how we suggest developers set up their systems.

I think the devpi option seems fairly reasonable too. It looks like it's
easier (and faster, and less bandwidth-consuming) than setting up
bandersnatch or apt-mirror, which we currently suggest people consider. It
doesn't look any more heavyweight than having a squid proxy for caching,
which we currently suggest as a bare minimum.

For an individual dev testing their own setup, I think we need a slightly
different approach from the infra approach listed above though. I'm
assuming that it's possible to probe the package index to determine if a
wheel is available for a particular version of a package yet. If that's the
case, we should be able to tweak tools like os-svc-install to notice when
no wheel is available, and build and upload the wheel.
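
Something like this rough sketch is what I have in mind (the mirror URL,
package and version are purely illustrative, and a real check would need to
handle platform-specific wheel tags):

# Hypothetical probe: is a wheel for this package/version already on the
# mirror's simple index? If not, build one and push it up.
pkg=six
ver=1.7.3
index=https://pypi.openstack.org/simple   # placeholder URL/layout
if curl -fs "$index/$pkg/" | grep -q "$pkg-$ver-.*\.whl"; then
    echo "wheel for $pkg $ver is already available"
else
    pip wheel --wheel-dir wheelhouse "$pkg==$ver"
    devpi upload wheelhouse/*.whl    # or rsync into the mirror tree
fi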

I think this should give us a good balance between making sure that each
build (except the first) uses wheels to save time, still gets the latest
packages (since the last time the system was online at least), and the user
doesn't need to remember to manually update the wheels when they're online.


Re: [openstack-dev] [TripleO] pypi-mirror is now unsupported - what do we do now?

2014-07-13 Thread Gregory Haynes


On Sun, Jul 13, 2014, at 02:36 PM, James Polley wrote:


  We're also thinking about how we continue to offer the pre-built wheels
  for each of our build platforms. For infra, what I'm thinking is:

  On each mirror slave (We have one for each OS combo we use), do
  something similar to:

  pip wheel -r global-requirements.txt
  rsync $wheelhouse pypi.openstack.org/$(lsb_release)

  This may require keeping pypi-mirror and using an option to only do
  wheel building so that we can get the directory publication split. Ok. I
  got bored and wrote that:

  https://review.openstack.org/106638

  So if we land that, you can do;

  pip wheel -r global-requirements.txt
  run-mirror --wheels-only --wheelhouse=wheelhouse --wheeldest=mirror
  rsync -avz mirror pypi.openstack.org:/srv/mirror

  If we went the devpi route, we could do;

  pip wheel -r global-requirements.txt
  for pkg in $wheelhouse; do
    devpi upload $pkg
  done

  And put that into a cron.

 Obviously keeping pypi-mirror would require the least amount of change to
 how we suggest developers set up their systems.

 I think the devpi option seems fairly reasonable too. It looks like it's
 easier (and faster, and less bandwidth-consuming) than setting up
 bandersnatch or apt-mirror, which we currently suggest people consider. It
 doesn't look any more heavyweight than having a squid proxy for caching,
 which we currently suggest as a bare minimum.

 For an individual dev testing their own setup, I think we need a slightly
 different approach from the infra approach listed above though. I'm
 assuming that it's possible to probe the package index to determine if a
 wheel is available for a particular version of a package yet. If that's the
 case, we should be able to tweak tools like os-svc-install to notice when
 no wheel is available, and build and upload the wheel.

 I think this should give us a good balance between making sure that each
 build (except the first) uses wheels to save time, still gets the latest
 packages (since the last time the system was online at least), and the user
 doesn't need to remember to manually update the wheels when they're online.



This gave me an idea:

There was talk about pip being able to use a wheel cache
(wheelhouse). Can we bind-mount an arch-specific wheelhouse
from the hypervisor into our chroots as we build? This would
let people get most of the wheel speedup while doing almost no
special configuration.
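
Roughly, as a sketch (paths here are placeholders and none of this is wired
into our tooling yet):

# On the hypervisor: keep a per-architecture wheelhouse and expose it inside
# the image-build chroot with a bind mount.
WHEELHOUSE=/var/cache/wheelhouse/$(uname -m)
CHROOT=/path/to/build/chroot
mkdir -p "$WHEELHOUSE" "$CHROOT/mnt/wheelhouse"
mount --bind "$WHEELHOUSE" "$CHROOT/mnt/wheelhouse"

# Inside the chroot: prefer the cached wheels, fall back to the index for
# anything that isn't there yet.
chroot "$CHROOT" pip install --find-links=/mnt/wheelhouse -r requirements.txt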



-Greg



Re: [openstack-dev] [TripleO] pypi-mirror is now unsupported - what do we do now?

2014-07-13 Thread James Polley




 On 14 Jul 2014, at 08:08, Gregory Haynes g...@greghaynes.net wrote:
 
  
 On Sun, Jul 13, 2014, at 02:36 PM, James Polley wrote:
  
 We're also thinking about how we continue to offer the pre-built wheels
 for each of our build platforms. For infra, what I'm thinking is:
  
 On each mirror slave (We have one for each OS combo we use), do
 something similar to:
  
 pip wheel -r global-requirements.txt
 rsync $wheelhouse pypi.openstack.org/$(lsb_release)
  
 This may require keeping pypi-mirror and using an option to only do
 wheel building so that we can get the directory publication split. Ok. I
 got bored and wrote that:
  
 https://review.openstack.org/106638
  
 So if we land that, you can do;
  
 pip wheel -r global-requirements.txt
 run-mirror --wheels-only --wheelhouse=wheelhouse --wheeldest=mirror
 rsync -avz mirror pypi.openstack.org:/srv/mirror
  
 If we went the devpi route, we could do;
  
 pip wheel -r global-requirements.txt
 for pkg in $wheelhouse; do
   devpi upload $pkg
 done
  
 And put that into a cron.
  
 Obviously keeping pypi-mirror would require the least amount of change to 
 how we suggest developers set up their systems.
  
 I think the devpi option seems fairly reasonable too. It looks like it's 
 easier (and faster, and less bandwidth-consuming) than setting up 
 bandersnatch or apt-mirror, which we currently suggest people consider. It 
 doesn't look any more heavyweight than having a squid proxy for caching, 
 which we currently suggest as a bare minimum.
  
 For an individual dev testing their own setup, I think we need a slightly 
 different approach from the infra approach listed above though. I'm assuming 
 that it's possible to probe the package index to determine if a wheel is 
 available for a particular version of a package yet. If that's the case, we 
 should be able to tweak tools like os-svc-install to notice when no wheel is 
 available, and build and upload the wheel.
  
 I think this should give us a good balance between making sure that each 
 build (except the first) uses wheels to save time, still gets the latest 
 packages (since the last time the system was online at least), and the user 
 doesn't need to remember to manually update the wheels when they're online.
  
 This gave me an idea:
 There was talk about pip being able to use a wheel cache (wheelhouse). Can we 
 bind-mount an arch-specific wheelhouse from the hypervisor into our chroots 
 as we build? This would let people get most of the wheel speedup while doing 
 almost no special configuration.

Pip still doesn't handle case insensitivity on file:// index URLs as well as it
does for http:// - that should be fixed with the 1.6.0 release. There's some
chance that we'll run into issues with this that we wouldn't hit using http,
but I don't expect any major issues.

This still leaves us needing to build that arch-specific wheelhouse, though.
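
For reference, the two ways I can see of pointing pip at such a wheelhouse
(sketches only, with placeholder paths): a plain --find-links on the
directory avoids the file:// index-url code path entirely, or the wheelhouse
can be served over http with something trivial.

# Option 1: use the wheelhouse directory directly as a find-links target.
pip install --find-links=/path/to/wheelhouse -r requirements.txt

# Option 2: serve it over http and let pip scrape the directory listing.
(cd /path/to/wheelhouse && python -m SimpleHTTPServer 8080) &
pip install --find-links=http://localhost:8080/ -r requirements.txt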


  
 -Greg


Re: [openstack-dev] [TripleO] pypi-mirror is now unsupported - what do we do now?

2014-07-10 Thread Richard Jones
On 10 July 2014 23:27, Mulcahy, Stephen stephen.mulc...@hp.com wrote:
 When I last tested bandersnatch, it didn’t work well behind a proxy (in
fact most of the existing pypi mirroring tools suffered from the same
problem) – pypi-mirror has worked extremely well for mirroring a subset of
pypi and doing so behind a proxy. I’d also echo the requirement for a tool
that provides wheels as we have seen significant performance improvement
from using wheels with TripleO

devpi works behind a proxy. If bandersnatch doesn't then that bug should be
addressed ASAP. I'm in contact with its author regarding that.

I'm currently investigating a sensible approach to having wheels be
automatically built (for the most sensible value of automatic that we can
determine wink).


 Richard


Re: [openstack-dev] [TripleO] pypi-mirror is now unsupported - what do we do now?

2014-07-09 Thread Ben Nemec
On 07/08/2014 11:05 PM, Joe Gordon wrote:
 On Tue, Jul 8, 2014 at 8:54 PM, James Polley j...@jamezpolley.com wrote:
 
 It may not have been clear from the below email, but clarkb clarifies on
 https://bugs.launchpad.net/openstack-ci/+bug/1294381 that the infra team
 is no longer maintaining pypi-mirror

 This has been a very useful tool for tripleo. It's much simpler for new
 developers to set up and use than a full bandersnatch mirror (and requires
 less disk space), and it can create a local cache of wheels which saves
 build time.

 But it's now unsupported.

 To me it seems like we have two options:

 A) Deprecate usage of pypi-mirror; update docs to instruct new devs in
 setting up a local bandersnatch mirror instead
 or
 B) Take on care-and-feeding of the tool.
 or, I guess,
 C) Continue to recommend people use an unsupported unmaintained
 known-buggy tool (it works reasonably well for us today, but it's going to
 work less and less well as time goes by)

 Are there other options I haven't thought of?

 
 I don't know if this fits your requirements but I use
 http://doc.devpi.net/latest/quickstart-pypimirror.html for my development
 needs.

Will that also cache wheels?  In my experience, wheels are one of the
big time savers in tripleo so I would consider it an important feature
to maintain, however we decide to proceed.

 
 

 Do you have thoughts on which option is preferred?


 -- Forwarded message --
 From: Clark Boylan clark.boy...@gmail.com
 Date: Tue, Jul 8, 2014 at 8:50 AM
 Subject: Re: [openstack-dev] Policy around Requirements Adds (was: New
 class of requirements for Stackforge projects)
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org


 On Mon, Jul 7, 2014 at 3:45 PM, Joe Gordon joe.gord...@gmail.com wrote:

 On Jul 7, 2014 4:48 PM, Sean Dague s...@dague.net wrote:

 This thread was unfortunately hidden under a project specific tag (I
 have thus stripped all the tags).

 The crux of the argument here is the following:

 Is a stackforge project able to propose additions to
 global-requirements.txt that aren't used by any projects in OpenStack?

 I believe the answer is firmly *no*.

 ++


 global-requirements.txt provides a way for us to have a single point of
 vetting for requirements for OpenStack. It lets us assess licensing,
 maturity, current state of packaging, python3 support, all in one place.
 And it lets us enforce that integration of OpenStack projects all run
 under a well understood set of requirements.

 The requirements sync that happens after requirements land is basically
 just a nicety for getting openstack projects to the tested state by
 eventual consistency.

 If a stackforge project wants to be limited by global-requirements,
 that's cool. We have a mechanism for that. However, they are accepting
 that they will be limited by it. That means they live with how the
 OpenStack project establishes that list. It specifically means they
 *don't* get to propose any new requirements.

 Basically in this case Solum wants to have its cake and eat it too. Both
 be enforced on requirements, and not be enforced. Or some 3rd thing that
 means the same as that.

 The near term fix is to remove solum from projects.txt.

 The email included below mentions that an additional motivation for using
 global-requirements is to avoid using pypi.python.org and instead use
 pypi.openstack.org for speed and reliability. Perhaps there is a way we can
 support this use case for stackforge projects not in projects.txt? I thought I
 saw something the other day about adding a full pypi mirror to OpenStack
 infra.

 This is done. All tests are now run against a bandersnatch built full
 mirror of pypi. Enforcement of the global requirements is performed
 via the enforcement jobs.

 On 06/26/2014 02:00 AM, Adrian Otto wrote:
 Ok,

 I submitted and abandoned a couple of reviews[1][2] for a solution
 aimed
 to meet my goals without adding a new per-project requirements file.
 The
 flaw with this approach is that pip may install other requirements
 when
 installing the one(s) loaded from the fallback mirror, and those may
 conflict with the ones loaded from the primary mirror.

 After discussing this further in #openstack-infra this evening, we
 should give serious consideration to adding python-mistralclient to
 global requirements. I have posted a review[3] for that to get input
 from the requirements review team.

 Thanks,

 Adrian

 [1] https://review.openstack.org/102716
 [2] https://review.openstack.org/102719
 [3] https://review.openstack.org/102738

 On Jun 25, 2014, at 9:51 PM, Matthew Oliver m...@oliver.net.au wrote:


 On Jun 26, 2014 12:12 PM, Angus Salkeld angus.salk...@rackspace.com wrote:

 On 25/06/14 15:13, Clark Boylan wrote:
 On Tue, Jun 24, 2014 at 9:54 PM, Adrian Otto
 adrian.o...@rackspace.com 

Re: [openstack-dev] [TripleO] pypi-mirror is now unsupported - what do we do now?

2014-07-09 Thread Donald Stufft
You’re aware pip has a built-in command to create a local cache of wheels,
yeah?

Not sure what your requirements are or what you’re using it for, but if you
detail your requirements I can tell you how well pip can handle the use case
out of the box.
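
For the record, the flow I have in mind is just this sketch (the cache
directory and requirements file names are whatever you use):

# Build wheels for everything in a requirements file into a local cache ...
pip wheel --wheel-dir=/var/cache/wheelhouse -r requirements.txt
# ... then install entirely from that cache, without touching the network.
pip install --no-index --find-links=/var/cache/wheelhouse -r requirements.txt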

On Jul 9, 2014, at 12:19 PM, Ben Nemec openst...@nemebean.com wrote:

 On 07/08/2014 11:05 PM, Joe Gordon wrote:
 On Tue, Jul 8, 2014 at 8:54 PM, James Polley j...@jamezpolley.com wrote:
 
 It may not have been clear from the below email, but clarkb clarifies on
 https://bugs.launchpad.net/openstack-ci/+bug/1294381 that the infra team
 is no longer maintaining pypi-mirror
 
 This has been a very useful tool for tripleo. It's much simpler for new
 developers to set up and use than a full bandersnatch mirror (and requires
 less disk space), and it can create a local cache of wheels which saves
 build time.
 
 But it's now unsupported.
 
 To me it seems like we have two options:
 
 A) Deprecate usage of pypi-mirror; update docs to instruct new devs in
 setting up a local bandersnatch mirror instead
 or
 B) Take on care-and-feeding of the tool.
 or, I guess,
 C) Continue to recommend people use an unsupported unmaintained
 known-buggy tool (it works reasonably well for us today, but it's going to
 work less and less well as time goes by)
 
 Are there other options I haven't thought of?
 
 
 I don't know if this fits your requirements but I use
 http://doc.devpi.net/latest/quickstart-pypimirror.html for my development
 needs.
 
 Will that also cache wheels?  In my experience, wheels are one of the
 big time savers in tripleo so I would consider it an important feature
 to maintain, however we decide to proceed.
 
 
 
 
 Do you have thoughts on which option is preferred?
 
 
 -- Forwarded message --
 From: Clark Boylan clark.boy...@gmail.com
 Date: Tue, Jul 8, 2014 at 8:50 AM
 Subject: Re: [openstack-dev] Policy around Requirements Adds (was: New
 class of requirements for Stackforge projects)
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 
 On Mon, Jul 7, 2014 at 3:45 PM, Joe Gordon joe.gord...@gmail.com wrote:
 
 On Jul 7, 2014 4:48 PM, Sean Dague s...@dague.net wrote:
 
 This thread was unfortunately hidden under a project specific tag (I
 have thus stripped all the tags).
 
 The crux of the argument here is the following:
 
  Is a stackforge project able to propose additions to
  global-requirements.txt that aren't used by any projects in OpenStack?
 
 I believe the answer is firmly *no*.
 
 ++
 
 
 global-requirements.txt provides a way for us to have a single point of
 vetting for requirements for OpenStack. It lets us assess licensing,
 maturity, current state of packaging, python3 support, all in one place.
 And it lets us enforce that integration of OpenStack projects all run
 under a well understood set of requirements.
 
 The requirements sync that happens after requirements land is basically
 just a nicety for getting openstack projects to the tested state by
 eventual consistency.
 
 If a stackforge project wants to be limited by global-requirements,
 that's cool. We have a mechanism for that. However, they are accepting
 that they will be limited by it. That means they live with how the
 OpenStack project establishes that list. It specifically means they
 *don't* get to propose any new requirements.
 
  Basically in this case Solum wants to have its cake and eat it too. Both
 be enforced on requirements, and not be enforced. Or some 3rd thing that
 means the same as that.
 
 The near term fix is to remove solum from projects.txt.
 
  The email included below mentions that an additional motivation for using
  global-requirements is to avoid using pypi.python.org and instead use
  pypi.openstack.org for speed and reliability. Perhaps there is a way we can
  support this use case for stackforge projects not in projects.txt? I thought I
 saw something the other day about adding a full pypi mirror to OpenStack
 infra.
 
 This is done. All tests are now run against a bandersnatch built full
 mirror of pypi. Enforcement of the global requirements is performed
 via the enforcement jobs.
 
 On 06/26/2014 02:00 AM, Adrian Otto wrote:
 Ok,
 
 I submitted and abandoned a couple of reviews[1][2] for a solution
 aimed
 to meet my goals without adding a new per-project requirements file.
 The
 flaw with this approach is that pip may install other requirements
 when
 installing the one(s) loaded from the fallback mirror, and those may
 conflict with the ones loaded from the primary mirror.
 
 After discussing this further in #openstack-infra this evening, we
 should give serious consideration to adding python-mistralclient to
 global requirements. I have posted a review[3] for that to get input
 from the requirements review team.
 
 Thanks,
 
 Adrian
 
 [1] https://review.openstack.org/102716
 [2] https://review.openstack.org/102719
 [3] https://review.openstack.org/102738
 

Re: [openstack-dev] [TripleO] pypi-mirror is now unsupported - what do we do now?

2014-07-09 Thread Richard Jones
On 10 July 2014 02:19, Ben Nemec openst...@nemebean.com wrote:

 On 07/08/2014 11:05 PM, Joe Gordon wrote:
  On Tue, Jul 8, 2014 at 8:54 PM, James Polley j...@jamezpolley.com wrote:
 
  It may not have been clear from the below email, but clarkb clarifies on
  https://bugs.launchpad.net/openstack-ci/+bug/1294381 that the infra
 team
  is no longer maintaining pypi-mirror
 
  This has been a very useful tool for tripleo. It's much simpler for new
  developers to set up and use than a full bandersnatch mirror (and
 requires
  less disk space), and it can create a local cache of wheels which saves
  build time.
 
  But it's now unsupported.
 
  To me it seems like we have two options:
 
  A) Deprecate usage of pypi-mirror; update docs to instruct new devs in
  setting up a local bandersnatch mirror instead
  or
  B) Take on care-and-feeding of the tool.
  or, I guess,
  C) Continue to recommend people use an unsupported unmaintained
  known-buggy tool (it works reasonably well for us today, but it's going
 to
  work less and less well as time goes by)
 
  Are there other options I haven't thought of?
 
 
  I don't know if this fits your requirements but I use
  http://doc.devpi.net/latest/quickstart-pypimirror.html for my
 development
  needs.

 Will that also cache wheels?  In my experience, wheels are one of the
 big time savers in tripleo so I would consider it an important feature
 to maintain, however we decide to proceed.


Yes, devpi caches wheels.

I would suggest that if the pip cache approach isn't appropriate then devpi is
probably a good solution (though I don't know your full requirements).

The big difference between using devpi and pip caching would be that devpi
will allow you to install packages when you're offline.
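
For anyone wanting to try it, the client side is just a matter of pointing
pip at the local devpi server, along these lines (3141 and root/pypi should
be the devpi defaults; adjust if your setup differs):

# Start a caching devpi server, then install via its PyPI mirror index;
# anything fetched once (wheels included) stays available offline.
devpi-server --start
pip install --index-url=http://localhost:3141/root/pypi/+simple/ -r requirements.txt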


   Richard


Re: [openstack-dev] [TripleO] pypi-mirror is now unsupported - what do we do now?

2014-07-09 Thread Donald Stufft

On Jul 9, 2014, at 7:07 PM, Richard Jones r1chardj0...@gmail.com wrote:

 On 10 July 2014 02:19, Ben Nemec openst...@nemebean.com wrote:
 On 07/08/2014 11:05 PM, Joe Gordon wrote:
  On Tue, Jul 8, 2014 at 8:54 PM, James Polley j...@jamezpolley.com wrote:
 
  It may not have been clear from the below email, but clarkb clarifies on
  https://bugs.launchpad.net/openstack-ci/+bug/1294381 that the infra team
  is no longer maintaining pypi-mirror
 
  This has been a very useful tool for tripleo. It's much simpler for new
  developers to set up and use than a full bandersnatch mirror (and requires
  less disk space), and it can create a local cache of wheels which saves
  build time.
 
  But it's now unsupported.
 
  To me it seems like we have two options:
 
  A) Deprecate usage of pypi-mirror; update docs to instruct new devs in
  setting up a local bandersnatch mirror instead
  or
  B) Take on care-and-feeding of the tool.
  or, I guess,
  C) Continue to recommend people use an unsupported unmaintained
  known-buggy tool (it works reasonably well for us today, but it's going to
  work less and less well as time goes by)
 
  Are there other options I haven't thought of?
 
 
  I don't know if this fits your requirements but I use
  http://doc.devpi.net/latest/quickstart-pypimirror.html for my development
  needs.
 
 Will that also cache wheels?  In my experience, wheels are one of the
 big time savers in tripleo so I would consider it an important feature
 to maintain, however we decide to proceed.
 
 Yes, devpi caches wheels.
 
 I would suggest that if the pip cache approach isn't appropriate then devpi is
 probably a good solution (though I don't know your full requirements).
 
 The big difference between using devpi and pip caching would be that devpi 
 will allow you to install packages when you're offline.
 
 
Richard
  

It doesn’t generate wheels though, it’ll only cache them if they exist on
PyPI already.

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA





Re: [openstack-dev] [TripleO] pypi-mirror is now unsupported - what do we do now?

2014-07-08 Thread Joe Gordon
On Tue, Jul 8, 2014 at 8:54 PM, James Polley j...@jamezpolley.com wrote:

 It may not have been clear from the below email, but clarkb clarifies on
 https://bugs.launchpad.net/openstack-ci/+bug/1294381 that the infra team
 is no longer maintaining pypi-mirror

 This has been a very useful tool for tripleo. It's much simpler for new
 developers to set up and use than a full bandersnatch mirror (and requires
 less disk space), and it can create a local cache of wheels which saves
 build time.

 But it's now unsupported.

 To me it seems like we have two options:

 A) Deprecate usage of pypi-mirror; update docs to instruct new devs in
 setting up a local bandersnatch mirror instead
 or
 B) Take on care-and-feeding of the tool.
 or, I guess,
 C) Continue to recommend people use an unsupported unmaintained
 known-buggy tool (it works reasonably well for us today, but it's going to
 work less and less well as time goes by)

 Are there other options I haven't thought of?


I don't know if this fits your requirements but I use
http://doc.devpi.net/latest/quickstart-pypimirror.html for my development
needs.



 Do you have thoughts on which option is preferred?


 -- Forwarded message --
 From: Clark Boylan clark.boy...@gmail.com
 Date: Tue, Jul 8, 2014 at 8:50 AM
 Subject: Re: [openstack-dev] Policy around Requirements Adds (was: New
 class of requirements for Stackforge projects)
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org


 On Mon, Jul 7, 2014 at 3:45 PM, Joe Gordon joe.gord...@gmail.com wrote:
 
  On Jul 7, 2014 4:48 PM, Sean Dague s...@dague.net wrote:
 
  This thread was unfortunately hidden under a project specific tag (I
  have thus stripped all the tags).
 
  The crux of the argument here is the following:
 
   Is a stackforge project able to propose additions to
   global-requirements.txt that aren't used by any projects in OpenStack?
 
  I believe the answer is firmly *no*.
 
  ++
 
 
  global-requirements.txt provides a way for us to have a single point of
  vetting for requirements for OpenStack. It lets us assess licensing,
  maturity, current state of packaging, python3 support, all in one place.
  And it lets us enforce that integration of OpenStack projects all run
  under a well understood set of requirements.
 
  The requirements sync that happens after requirements land is basically
  just a nicety for getting openstack projects to the tested state by
  eventual consistency.
 
  If a stackforge project wants to be limited by global-requirements,
  that's cool. We have a mechanism for that. However, they are accepting
  that they will be limited by it. That means they live with how the
  OpenStack project establishes that list. It specifically means they
  *don't* get to propose any new requirements.
 
   Basically in this case Solum wants to have its cake and eat it too. Both
  be enforced on requirements, and not be enforced. Or some 3rd thing that
  means the same as that.
 
  The near term fix is to remove solum from projects.txt.
 
   The email included below mentions that an additional motivation for using
   global-requirements is to avoid using pypi.python.org and instead use
   pypi.openstack.org for speed and reliability. Perhaps there is a way we can
   support this use case for stackforge projects not in projects.txt? I thought I
  saw something the other day about adding a full pypi mirror to OpenStack
  infra.
 
 This is done. All tests are now run against a bandersnatch built full
 mirror of pypi. Enforcement of the global requirements is performed
 via the enforcement jobs.
 
  On 06/26/2014 02:00 AM, Adrian Otto wrote:
   Ok,
  
   I submitted and abandoned a couple of reviews[1][2] for a solution
 aimed
   to meet my goals without adding a new per-project requirements file.
 The
   flaw with this approach is that pip may install other requirements
 when
   installing the one(s) loaded from the fallback mirror, and those may
   conflict with the ones loaded from the primary mirror.
  
   After discussing this further in #openstack-infra this evening, we
   should give serious consideration to adding python-mistralclient to
   global requirements. I have posted a review[3] for that to get input
   from the requirements review team.
  
   Thanks,
  
   Adrian
  
   [1] https://review.openstack.org/102716
   [2] https://review.openstack.org/102719
   [3] https://review.openstack.org/102738
  
   On Jun 25, 2014, at 9:51 PM, Matthew Oliver m...@oliver.net.au wrote:
  
  
   On Jun 26, 2014 12:12 PM, Angus Salkeld angus.salk...@rackspace.com wrote:
   
   On 25/06/14 15:13, Clark Boylan wrote:
   On Tue, Jun 24, 2014 at 9:54 PM, Adrian Otto adrian.o...@rackspace.com wrote:
   Hello,
  
   Solum has run into a constraint with the current scheme for
   requirements management