Re: [openstack-dev] [all][release] summit session summary: Release Versioning for Server Applications

2015-06-01 Thread Donald Stufft


On June 1, 2015 at 1:03:08 PM, Jeremy Stanley (fu...@yuggoth.org) wrote:
  
 Since pip treats .postN in strange ways, it's not entirely safe to
 rely on the old PBR behavior which would have made this
 2.25.0.post8. In particular, if we were to ever upload that to PyPI
 (terrible idea I know, bear with me anyway), anyone asking pip to
 install python-novaclient==2.25.0 would get the latest 2.25.0.postN
 package rather than the actual 2.25.0 package.

That’s not exactly accurate; the only real special handling is that even
though 2.25.0.post8 is “greater than” 2.25.0, it won’t match if you
specify ==2.25.0, because it’s recommended that you only use post
releases for non-code changes (packaging issues, documentation fixes,
etc).

I wouldn’t recommend using post releases for code-change releases though;
I think you’d be better off just adding a fourth digit to the release
number and being 2.25.0.8 or so. That’s mostly just a semantic thing,
not any functional reason.
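This behavior can be checked directly with the `packaging` library (the reference implementation of PEP 440's rules; assumed available here since it ships alongside modern pip/setuptools):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# post releases sort above their base version...
assert Version("2.25.0.post8") > Version("2.25.0")

# ...but a pin to the base version does NOT match them
pin = SpecifierSet("==2.25.0")
assert not pin.contains("2.25.0.post8")

# a prefix match (==2.25.0.*) does include post releases
prefix = SpecifierSet("==2.25.0.*")
assert prefix.contains("2.25.0.post8")
```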

---  
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-15 Thread Donald Stufft


 On Apr 15, 2015, at 5:06 AM, Thierry Carrez thie...@openstack.org wrote:
 
 Joe Gordon wrote:
 On Tue, Apr 14, 2015 at 2:55 PM, Chris Dent chd...@redhat.com wrote:
On Tue, 14 Apr 2015, Joe Gordon wrote:
deploy requirements - requirements.txt - which are meant to
be *local
to a deployment*, and are commonly expected to specify very
narrow (or
even exact fit) versions.
 
Link to where this is documented? If this isn't written down
anywhere, then
that should be a pre-requisite to this conversation. Get upstream to
document this.
 
I don't know where it is documented but this was the common wisdom I
knew from the Python community since long before coming to the
OpenStack community. To me, seeing a requirements.txt in a repo that
represents a class of an app or library (rather than an instance of
a deployment) was quite a surprise.
 
(This doesn't have that much bearing on the practical aspects of
this conversation, just wanted to add some anecdata that the precedent
described above is not weird or alien in any way.)
 
 https://packaging.python.org/en/latest/requirements.html
 
 Turns out it was easier than I thought to find the documentation for this.
 
 And the doc is indeed pretty clear. I assumed requirements.txt would
 describe... well... requirements. But like Robert said, they are meant to
 describe specific deployments (should really have been named
 deployment.txt, or at least dependencies.txt).
 

The name is just a convention. They can be named anything as far as pip is 
concerned. 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][pbr] fixing up pbr's master branch

2015-03-18 Thread Donald Stufft

 On Mar 18, 2015, at 4:21 PM, Jeremy Stanley fu...@yuggoth.org wrote:
 
 On 2015-03-19 09:15:36 +1300 (+1300), Robert Collins wrote:
 [...]
 A second but also mandatory change is to synchronise on the final
 pre-release tag definitions in PEP-440, IIRC that was just 'rc' -
 'c'.
 [...]
 
 Mmmwaffles. It was for a time, then by popular demand it got
 switched back to rc again.
 
http://legacy.python.org/dev/peps/pep-0440/#pre-releases
 
 --
 Jeremy Stanley
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

To be clear, both “rc” and “c” are completely supported; the only thing
we changed is which one is the canonical representation. Other than that,
using one is equivalent to using the other.
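This equivalence is visible in the `packaging` library, which normalizes “c” to “rc” (my example, assuming that library is available):

```python
from packaging.version import Version

v = Version("1.0c1")
assert v == Version("1.0rc1")   # "c" and "rc" parse to the same version
assert str(v) == "1.0rc1"       # "rc" is the canonical spelling
```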

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] pip wheel' requires the 'wheel' package

2015-03-12 Thread Donald Stufft
Is it using an old version of setuptools? Like 0.6.28.

 On Mar 11, 2015, at 11:28 AM, Timothy Swanson (tiswanso) tiswa...@cisco.com 
 wrote:
 
 I don’t have any solution; just chiming in that I see the same error with
 devstack pulled from master on a new Ubuntu Trusty VM created last night.
 
 'pip install --upgrade wheel' indicates:
 Requirement already up-to-date: wheel in 
 /usr/local/lib/python2.7/dist-packages
 
 Haven’t gotten it cleared up.
 
 Thanks,
 
 Tim
 
 On Mar 2, 2015, at 2:11 AM, Smigiel, Dariusz dariusz.smig...@intel.com wrote:
 
 
   
 From: yuntong [mailto:yuntong...@gmail.com]
 Sent: Monday, March 2, 2015 7:35 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [devstack] pip wheel' requires the 'wheel' package
 
 Hello,
 I got an error when trying to run ./stack.sh:
 2015-03-02 05:58:20.692 | net.ipv4.ip_local_reserved_ports = 35357,35358
 2015-03-02 05:58:20.959 | New python executable in tmp-venv-NoMO/bin/python
 2015-03-02 05:58:22.056 | Installing setuptools, pip...done.
 2015-03-02 05:58:22.581 | ERROR: 'pip wheel' requires the 'wheel' package. 
 To fix this, run: pip install wheel
 
 After pip install wheel, got same error.
 In [2]: wheel.__path__
 Out[2]: ['/usr/local/lib/python2.7/dist-packages/wheel']
 In [5]: pip.__path__
 Out[5]: ['/usr/local/lib/python2.7/dist-packages/pip']
 
 $ which python
 /usr/bin/python
 
 As above, the wheel can be imported successfully,
 any hints ?
 
 Thanks.
 
 
 Did you try pip install --upgrade wheel ?
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
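A quick diagnostic sketch for this class of problem (a module visible to one interpreter but not the one pip is bound to), using only the stdlib; the module name `wheel` is taken from the thread:

```python
import importlib.util
import sys

# which interpreter is actually running?
print("interpreter:", sys.executable)

# can *this* interpreter import wheel?
spec = importlib.util.find_spec("wheel")
print("wheel found at:", spec.origin if spec else None)

# if this prints None while `pip install wheel` says "already satisfied",
# the pip on $PATH belongs to a different interpreter; prefer
# `python -m pip install --upgrade setuptools wheel` so they match.
```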

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-18 Thread Donald Stufft

 On Feb 18, 2015, at 10:14 AM, Doug Hellmann d...@doughellmann.com wrote:
 
 
 
 On Wed, Feb 18, 2015, at 10:07 AM, Donald Stufft wrote:
 
 On Feb 18, 2015, at 10:00 AM, Doug Hellmann d...@doughellmann.com wrote:
 
 
 
 On Tue, Feb 17, 2015, at 03:17 PM, Joe Gordon wrote:
 On Tue, Feb 17, 2015 at 4:19 AM, Sean Dague s...@dague.net wrote:
 
 On 02/16/2015 08:50 PM, Ian Cordasco wrote:
 On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:
 
 On 02/16/2015 02:08 PM, Doug Hellmann wrote:
 
 
 On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
 Hey everyone,
 
 The os-ansible-deployment team was working on updates to add support
 for
 the latest version of juno and noticed some interesting version
 specifiers
 introduced into global-requirements.txt in January. It introduced some
 version specifiers that seem a bit impossible like the one for
 requests
 [1]. There are others that equate presently to pinning the versions of
 the
 packages [2, 3, 4].
 
 I understand fully and support the commit because of how it improves
 pretty much everyone’s quality of life (no fires to put out in the
 middle
 of the night on the weekend). I’m also aware that a lot of the
 downstream
 redistributors tend to work from global-requirements.txt when
 determining
 what to package/support.
 
 It seems to me like there’s room to clean up some of these
 requirements
 to
 make them far more explicit and less misleading to the human eye (even
 though tooling like pip can easily parse/understand these).
 
 I think that's the idea. These requirements were generated
 automatically, and fixed issues that were holding back several
 projects.
 Now we can apply updates to them by hand, to either move the lower
 bounds down (as in the case Ihar pointed out with stevedore) or clean
 up
 the range definitions. We should not raise the limits of any Oslo
 libraries, and we should consider raising the limits of third-party
 libraries very carefully.
 
 We should make those changes on one library at a time, so we can see
 what effect each change has on the other requirements.
 
 
 I also understand that stable-maint may want to occasionally bump the
 caps
 to see if newer versions will not break everything, so what is the
 right
 way forward? What is the best way to both maintain a stable branch
 with
 known working dependencies while helping out those who do so much work
 for
 us (downstream and stable-maint) and not permanently pinning to
 certain
 working versions?
 
 Managing the upper bounds is still under discussion. Sean pointed out
 that we might want hard caps so that updates to stable branch were
 explicit. I can see either side of that argument and am still on the
 fence about the best approach.
 
 History has shown that it's too much work keeping testing functioning
 for stable branches if we leave dependencies uncapped. If particular
 people are interested in bumping versions when releases happen, it's
 easy enough to do with a requirements proposed update. It will even run
 tests that in most cases will prove that it works.
 
 It might even be possible for someone to build some automation that did
 that as stuff from pypi released so we could have the best of both
 worlds. But I think capping is definitely something we want as a
 project, and it reflects the way that most deployments will consume this
 code.
 
-Sean
 
 --
 Sean Dague
 http://dague.net
 
 Right. No one is arguing the very clear benefits of all of this.
 
 I’m just wondering if for the example version identifiers that I gave in
 my original message (and others that are very similar) if we want to make
 the strings much simpler for people who tend to work from them (i.e.,
 downstream re-distributors whose jobs are already difficult enough). I’ve
 offered to help at least one of them in the past who maintains all of
 their distro’s packages themselves, but they refused so I’d like to help
 them in any way possible, especially if any of them chime in that this is
 something that would be helpful.
 
 Ok, your links got kind of scrambled. Can you next time please inline
 the key relevant content in the email, because I think we all missed the
 original message intent as the key content was only in footnotes.
 
 From my point of view, normalization patches would be fine.
 
 requests>=1.2.1,!=2.4.0,<=2.2.1
 
 Is actually an odd one, because that's still there because we're using
 Trusty level requests in the tests, and my ability to have devstack not
 install that has thus far failed.
 
 Things like:
 
 osprofiler>=0.3.0,<=0.3.0 # Apache-2.0
 
 Can clearly be normalized to osprofiler==0.3.0 if you want to propose
 the patch manually.
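The normalization Sean describes can be sanity-checked with the `packaging` library (assumed available; not part of the original mail):

```python
from packaging.specifiers import SpecifierSet

ugly = SpecifierSet(">=0.3.0,<=0.3.0")
clean = SpecifierSet("==0.3.0")

# both admit exactly the same candidates from this sample
for candidate in ["0.2.9", "0.3.0", "0.3.1"]:
    assert ugly.contains(candidate) == clean.contains(candidate)
```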
 
 
 global-requirements for stable branches serves two uses:
 
 1. Specify the set of dependencies that we would like to test against
 2. A tool for downstream packagers to use when determining what to
 package/support.
 
 For #1, Ideally we would like a set of all dependencies, including
 transitive, with explicit versions (very similar

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-18 Thread Donald Stufft
, instead of using a
 '==' was a compromise between the two use cases.
 
 Going forward I propose we have a requirements.in and a requirements.txt
 file. The requirements.in file would contain the range of dependencies,
 and
 requirements.txt would contain the pinned set, and eventually the pinned
 set including transitive dependencies.
 
 Thoughts?
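A sketch of what that split might look like (filenames from the proposal above; the package names and pins are illustrative only, and pip-tools implements essentially this pattern today):

```
# requirements.in -- hand-maintained ranges
oslo.db>=1.0.0,<2.0.0
SQLAlchemy>=0.9.7,<=0.9.99

# requirements.txt -- generated pinned set, eventually including
# transitive dependencies
oslo.db==1.3.0
SQLAlchemy==0.9.8
```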
 
 I'm interested in seeing what that list looks like. I suspect we have
 some libraries listed in the global requirements now that aren't
 actually used, and I'm sure there is a long list of transitive
 dependencies to add to it.
 
 I'm not entirely comfortable with the idea of pinning completely, but I
 guess it's the best of two bad options. It solves the we don't have
 enough people around to manage stable branches problem in one way (by
 not letting releases outside our control break our test jobs), but if we
 don't have people around now to fix things who is going to keep up with
 updating that requirements list as new versions of projects come out? We
 can write a job to automatically detect new packages and test them, but
 who is going to review patches submitted by that bot? Maybe that's a
 small enough amount of work that it will be easier to find help.
 
 We've been playing whack-a-mole with issues because we made changes to
 the way we deal with versions and dependencies without fully
 understanding the consequences of some of the changes. They looked
 innocent at first, but because of assumptions in other jobs or other
 parts of the system they caused problems. So I think we should be
 careful about making this decision and think about some of the other
 things that might fall out before pushing more changes up.
 
 For example, if we're syncing requirements into stable branches of
 projects based on requirements.txt, and that becomes a set of pins
 instead of a set of ranges with caps, how do we update projects? Should
 we sync from requirements.in instead of requirements.txt, to allow
 projects to maintain the ranges in their own requirements files? Or do
 we want those requirements files to reflect the pins from the global
 list?

I'm not sure I fully understand what folks are proposing here with two
different files, but if you’re putting ``==`` specifiers into the
install_requires of various projects, then I believe that is going to cause a
fairly large amount of pain.
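One way to see the pain: with the `packaging` library (my illustration, not something from the thread), two projects that each pin the same dependency with `==` leave no version that satisfies both:

```python
from packaging.specifiers import SpecifierSet

# project A pins dep==1.2.0, project B pins dep==1.3.0
combined = SpecifierSet("==1.2.0") & SpecifierSet("==1.3.0")

# no release can satisfy the combined set
assert not combined.contains("1.2.0")
assert not combined.contains("1.3.0")
```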

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-11 Thread Donald Stufft

 On Feb 11, 2015, at 11:15 AM, Jeremy Stanley fu...@yuggoth.org wrote:
 
 On 2015-02-11 11:31:13 + (+), Kuvaja, Erno wrote:
 [...]
 If you don't belong to the group of privileged living in the area
 and receiving free ticket somehow or company paying your
 participation you're not included. $600 + travel + accommodation
 is quite hefty premium to be included, not really FOSS.
 [...]
 
 Here I have to respectfully disagree. Anyone who uploads a change to
 an official OpenStack source code repository for review and has it
 approved/merged since Juno release day gets a 100% discount comp
 voucher for the full conference and design summit coming up in May.
 In addition, much like a lot of other large free software projects
 do for their conferences, the OpenStack Foundation sets aside
 funding[1] to cover travel and lodging for participants who need it.
 Let's (continue to) make sure this _is_ really FOSS, and that any
 of our contributors who want to be involved can be involved.
 
 [1] https://wiki.openstack.org/wiki/Travel_Support_Program

For whatever it's worth, I totally agree that the summits don't make OpenStack
not really FOSS, and I think the travel program is great, but I do just want
to point out (as someone for whom travel is not monetarily difficult, but
logistically) that decision making which requires travel can be exclusive. I
don't personally get too bothered by it, but it feels like maybe the fundamental
issue that some are experiencing is decisions being made via a
single channel, regardless of whether that channel is a phone call, IRC, a mailing
list, or a design summit. The more channels any particular decision involves,
the less likely it is that somebody will feel like they didn't get a chance
to participate.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Artifacts] Object Version format: SemVer vs pep440

2015-02-10 Thread Donald Stufft

 On Feb 10, 2015, at 3:17 PM, Ian Cordasco ian.corda...@rackspace.com wrote:
 
 
 And of course, the chosen solution should be mappable to database, so
 we may do sorting and filtering on the DB-side.
 So, having it as a simple string and letting the user to decide what
 it means is not an option.
 
 Except for the fact that versions do typically mean more than the values
 SemVer attaches to them. SemVer is further incompatible with any
 versioning scheme using epochs and is so relatively recent compared to
 versioning practices as a whole that I don’t see how we can justify
 restricting what should be a very generic system to something so specific
 to recent history and favored almost entirely by *developers*.

Semver vs PEP 440 is largely a syntax question since PEP 440 purposely does not
have much of an opinion on how something like 2.0.0 and 2.1.0 are related other
than for sorting. We do have operators in PEP 440 that support treating these
versions in a semver like way, and some that support treating them in other
ways.
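For example, PEP 440's compatible-release operator gives a semver-style "same series" match (sketched with the `packaging` library):

```python
from packaging.specifiers import SpecifierSet

s = SpecifierSet("~=2.1")  # compatible release: equivalent to >=2.1, ==2.*
assert s.contains("2.1.0")
assert s.contains("2.9.3")      # later in the 2.x series: allowed
assert not s.contains("3.0.0")  # next major: excluded
```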

The primary purpose of PEP 440 was to define a standard way to parse, sort,
and specify versions across the several hundred thousand versions that currently
exist on PyPI. This means that it is more complicated to implement, but it is
much more powerful than semver ever could be. One example, as Ian mentioned, is
the lack of the ability to do an epoch; another example is that PEP 440 has
explicit support for someone taking version 1.0, adding some unofficial patches
to it, and then releasing that in their own distribution channels.
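Both features can be demonstrated with the `packaging` library (my illustration; the epoch and local-version labels are made up):

```python
from packaging.version import Version

# an epoch lets a project reset its ordering, e.g. moving off date-based versions
assert Version("1!1.0") > Version("2014.04")

# a local version label marks an unofficially patched rebuild of 1.0
patched = Version("1.0+downstream.1")
assert patched.local == "downstream.1"
assert patched > Version("1.0")
```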

The primary purpose of Semver was to be extremely opinionated in what meaning
you place on the *content* of the version parts and the syntax is really a
secondary concern which exists just to make it easier to parse. This means that
if you know ahead of time that something is Semver you can guess a lot more
information about the relationship of two versions.

It was our intention that PEP 440 would be (is?) aimed primarily at people
implementing tools that work with versions, and that additional PEPs or other
documentation would be written on top of PEP 440 to add opinions on what a
version looks like within the framework that PEP 440 sets up. A great example
is the pbr semver document that Monty linked.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][project-config][oslo.messaging] Pre-driver requirements split-up initiative

2015-02-06 Thread Donald Stufft

 On Feb 6, 2015, at 9:00 AM, Jeremy Stanley fu...@yuggoth.org wrote:
 
 On 2015-02-06 14:37:08 +0200 (+0200), Denis Makogon wrote:
 As part of oslo.messaging initiative to split up requirements into
 certain list of per messaging driver dependencies
 [...]
 
 I'm curious what the end goal is here... when someone does `pip
 install oslo.messaging` what do you/they expect to get installed?
 Your run-parts style requirements.d plan is sort of
 counter-intuitive to me in that I would expect it to contain
 number-prefixed sublists of requirements which should be processed
 collectively in an alphanumeric sort order, but I get the impression
 this is not the goal of the mechanism (I'll be somewhat relieved if
 you tell me I'm mistaken in that regard).
 
 Taking into account suggestion from Monty Taylor i’m bringing this
 discussion to much wider audience. And the question is: aren’t we
 doing something complex or are there any less complex ways to
 accomplish the initial idea of splitting requirements?
 
 As for taking this to a wider audience we (OpenStack) are already
 venturing into special snowflake territory with PBR, however
 requirements.txt is a convention used at least somewhat outside of
 OpenStack-related Python projects. It might make sense to get input
 from the broader Python packaging community on something like this
 before we end up alienating ourselves from them entirely.

I’m not sure what exactly is trying to be achieved here, but I still assert
that requirements.txt is the wrong place for pbr to be looking and it should
instead look for dependencies specified inside of a setup.cfg.

More on topic, I'm not sure what inner dependencies are, but if what you're
looking for is optional dependencies that are only needed in specific situations,
then you probably want extras, defined like:

setup(
    extras_require={
        "somename": [
            "dep1",
            "dep2",
        ],
    },
)

Then if you do ``pip install myproject[somename]`` it'll include dep1 and dep2
in the list of dependencies; you can also depend on this in other projects
like:

setup(
    install_requires=["myproject[somename]>=1.0"],
)

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

2015-01-29 Thread Donald Stufft

 On Jan 29, 2015, at 2:27 PM, Monty Taylor mord...@inaugust.com wrote:
 
 On 01/29/2015 11:06 AM, Morgan Fainberg wrote:
 As a quick preface, today there is the assumption you can upgrade and 
 downgrade your SQL Schema. For the most part we do our best to test all of 
 this in our unit tests (do upgrades and downgrades land us in the same 
 schema). What isn’t clearly addressed is that the concept of downgrade might 
 be inherently flawed, especially when you’re talking about the data 
 associated with the migration. The concept that there is a utility that can 
 (and in many cases willfully) cause permanent, and in some cases 
 irrevocable, data loss from a simple command line interface sounds crazy 
 when I try and explain it to someone.
 
 The more I work with the data stored in SQL, the more I think we should 
 really recommend the tried-and-true best practice when trying to revert 
 from a migration: restore your DB to a known good state.
 
 * If a migration fails in some spectacular way (that was not handled 
 gracefully) is it possible to use the downgrade mechanic to “fix” it? More 
 importantly, would you trust the data after that downgrade?
 * Once an upgrade has happened and new code is run (potentially making use 
 of the new data structures), is it really safe to downgrade and lose that 
 data without stepping back to a known consistent state?
 
 The other side of this coin is that it prevents us from collapsing data 
 types in the stable store without hints to reverse the migration.
 
 I get the feeling that the reason we provide downward migrations today is 
 because we have in the past. Due to the existence of these downward 
 migrations the expectation is that they will work, and so we’re in a weird 
 feedback-loop.
 
 I’d like to propose we stop setting the expectation that a downwards 
 migration is a “good idea” or even something we should really support. 
 Offering upwards-only migrations would also simplify the migrations in 
 general. This downward migration path is also somewhat broken by the 
 migration collapses performed in a number of projects (to limit the number 
 of migrations that need to be updated when we change a key component such as 
 oslo.db or SQL-Alchemy Migrate to Alembic).
 
 Are downward migrations really a good idea for us to support? Is this 
 downward migration path a sane expectation? In the real world, would any one 
 really trust the data after migrating downwards?
 
 I do not think downward migrations are a good idea. I think they are a
 spectacularly bad idea that is essentially designed to get users into a
 state where they are massively broken.
 
 Operators should fail forward or restore from backup. Giving them
 downgrade scripts will imply that they work, which they probably will
 not once actual data is involved.

+1 on disabling downgrades.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo.db 1.3.0 released

2014-12-15 Thread Donald Stufft

 On Dec 15, 2014, at 1:50 PM, Sean Dague s...@dague.net wrote:
 
 On 12/15/2014 12:01 PM, Jeremy Stanley wrote:
 On 2014-12-15 11:53:07 -0500 (-0500), Doug Hellmann wrote:
 [...]
 This release is primarily meant to update the SQLAlchemy dependency
 to resolve the issue with the new version of setuptools changing
 how it evaluates version range specifications.
 [...]
 
 However note that I'm in the middle of forcing a refresh on a couple
 of our PyPI mirror servers, so it may be a couple hours before we
 see the effects of this throughout all of our infrastructure.
 
 
 It looks like this change has broken the grenade jobs because now
 oslo.db 1.3.0 ends up being installed in stable/juno environments, which
 has incompatible requirements with the rest of stable juno.
 
 http://logs.openstack.org/07/137307/1/gate//gate-grenade-dsvm/048ee63//logs/old/screen-s-proxy.txt.gz
 
 pkg_resources.VersionConflict: SQLAlchemy 0.8.4 is installed but
 SQLAlchemy>=0.9.7,<=0.9.99 is required by ['oslo.db']
 
   -Sean

It should probably use the specifier from Juno which matches the old
specifier in functionality.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo.db 1.3.0 released

2014-12-15 Thread Donald Stufft

 On Dec 15, 2014, at 1:57 PM, Sean Dague s...@dague.net wrote:
 
 On 12/15/2014 01:53 PM, Donald Stufft wrote:
 
 On Dec 15, 2014, at 1:50 PM, Sean Dague s...@dague.net wrote:
 
 On 12/15/2014 12:01 PM, Jeremy Stanley wrote:
 On 2014-12-15 11:53:07 -0500 (-0500), Doug Hellmann wrote:
 [...]
 This release is primarily meant to update the SQLAlchemy dependency
 to resolve the issue with the new version of setuptools changing
 how it evaluates version range specifications.
 [...]
 
 However note that I'm in the middle of forcing a refresh on a couple
 of our PyPI mirror servers, so it may be a couple hours before we
 see the effects of this throughout all of our infrastructure.
 
 
 It looks like this change has broken the grenade jobs because now
 oslo.db 1.3.0 ends up being installed in stable/juno environments, which
 has incompatible requirements with the rest of stable juno.
 
 http://logs.openstack.org/07/137307/1/gate//gate-grenade-dsvm/048ee63//logs/old/screen-s-proxy.txt.gz
 
 pkg_resources.VersionConflict: SQLAlchemy 0.8.4 is installed but
 SQLAlchemy>=0.9.7,<=0.9.99 is required by ['oslo.db']
 
 -Sean
 
 It should probably use the specifier from Juno which matches the old
 specifier in functionality.
 
 Probably, but that was specifically reverted here -
 https://review.openstack.org/#/c/138546/2/global-requirements.txt,cm
 

Not sure I follow, that doesn’t seem to contain any SQLAlchemy changes?

I mean stable/juno has this - 
SQLAlchemy>=0.8.4,<=0.9.99,!=0.9.0,!=0.9.1,!=0.9.2,!=0.9.3,!=0.9.4,!=0.9.5,!=0.9.6
and master has this - SQLAlchemy>=0.9.7,<=0.9.99

I forget who it was but someone suggested just dropping 0.8 in global
requirements over the weekend so that’s what I did.

It appears oslo.db used the SQLAlchemy specifier from master which means that
it won’t work with SQLAlchemy in the 0.8 series. So probably oslo.db should
instead use the one from stable/juno?
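The mismatch is easy to confirm mechanically (a sketch using the `packaging` library; the two specifiers are the ones quoted above):

```python
from packaging.specifiers import SpecifierSet

juno = SpecifierSet(
    ">=0.8.4,<=0.9.99,!=0.9.0,!=0.9.1,!=0.9.2,"
    "!=0.9.3,!=0.9.4,!=0.9.5,!=0.9.6"
)
master = SpecifierSet(">=0.9.7,<=0.9.99")

assert juno.contains("0.8.4")        # Trusty-era SQLAlchemy allowed on juno
assert not master.contains("0.8.4")  # ...but rejected by the master range
assert juno.contains("0.9.7") and master.contains("0.9.7")
```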

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] suds-jurko, new in our global-requirements.txt: what is the point?!?

2014-11-26 Thread Donald Stufft

 On Nov 26, 2014, at 10:34 AM, Thomas Goirand z...@debian.org wrote:
 
 Hi,
 
 I tried to package suds-jurko. I was first happy to see that there was
 some progress to make things work with Python 3. Unfortunately, the
 reality is that suds-jurko has many issues with Python 3. For example,
 it has many:
 
 except Exception, e:
 
 as well as many:
 
 raise Exception, 'Duplicate key %s found' % k
 
 This is clearly not Python3 code. I tried quickly to fix some of these
 issues, but as I fixed a few, others appear.
 
 So I wonder, what is the point of using suds-jurko, which is half-baked,
 and which will conflict with the suds package?
 
It looks like it uses 2to3 to become Python 3 compatible.
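For reference, the py2-only forms quoted above have direct py3 spellings, which is exactly the rewrite 2to3's `except`/`raise` fixers perform (a sketch of mine, not code from suds-jurko):

```python
def insert_key(mapping, k):
    if k in mapping:
        # py2: raise Exception, 'Duplicate key %s found' % k
        raise Exception("Duplicate key %s found" % k)
    mapping[k] = True

msg = None
try:
    d = {}
    insert_key(d, "a")
    insert_key(d, "a")  # second insert of the same key raises
except Exception as e:  # py2: except Exception, e:
    msg = str(e)

assert msg == "Duplicate key a found"
```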

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-23 Thread Donald Stufft

 On Nov 23, 2014, at 6:30 PM, Monty Taylor mord...@inaugust.com wrote:
 
 On 11/23/2014 06:13 PM, Robert Collins wrote:
 On 24 November 2014 at 11:01, victor stinner
 victor.stin...@enovance.com wrote:
 Hi,
 
 I'm happy to announce you that I just finished the last piece of the puzzle 
 to add support for trollius coroutines in Oslo Messaging! See my two 
 changes:
 
 * Add a new aiogreen executor:
  https://review.openstack.org/#/c/136653/
 * Add an optional executor callback to dispatcher:
  https://review.openstack.org/#/c/136652/
 
 Related projects:
 
 * asyncio is an event loop which is now part of Python 3.4:
  http://docs.python.org/dev/library/asyncio.html
 * trollius is the port of the new asyncio module to Python 2:
  http://trollius.readthedocs.org/
 * aiogreen implements the asyncio API on top of eventlet:
  http://aiogreen.readthedocs.org/
 
 For the long story and the full history of my work on asyncio in OpenStack 
 since one year, read:
 http://aiogreen.readthedocs.org/openstack.html
 
 The last piece of the puzzle is the new aiogreen project that I released a 
 few days ago. aiogreen is well integrated and fully compatible with 
 eventlet, it can be used in OpenStack without having to modify code. It is 
 almost fully based on trollius, it just has a small glue to reuse eventlet 
 event loop (get read/write notifications of file descriptors).
 
 In the past, I tried to use the greenio project, which also implements the 
 asyncio API, but it didn't fit well with eventlet. That's why I wrote a new 
 project.
 
 Supporting trollius coroutines in Oslo Messaging is just the first part of 
 the global project. Here is my full plan to replace eventlet with asyncio.
 
 ...
 
 So - the technical bits of the plan sound fine.
 
 On WSGI - if we're in an asyncio world, I don't think WSGI has any
 relevance today - it has no async programming model. While is has
 incremental apis and supports generators, thats not close enough to
 the same thing: so we're going to have to port our glue code to
 whatever container we end up with. As you know I'm pushing on a revamp
 of WSGI right now, and I'd be delighted to help put together a
 WSGI-for-asyncio PEP, but I think its best thought of as a separate
 thing to WSGI per se. It might be a profile of WSGI2 though, since
 there is quite some interest in truely async models.
 
 However I've a bigger picture concern. OpenStack only relatively
 recently switched away from an explicit async model (Twisted) to
 eventlet.
 
 I'm worried that this is switching back to something we switched away
 from (in that Twisted and asyncio have much more in common than either
 Twisted and eventlet w/magic, or asyncio and eventlet w/magic).
 
 If Twisted was unacceptable to the community, what makes asyncio
 acceptable? [Note, I don't really understand why Twisted was moved
 away from, since our problem domain is such a great fit for reactor
 style programming - lots of networking, lots of calling of processes
 that may take some time to complete their work, and occasional DB
 calls [which are equally problematic in eventlet and in
 asyncio/Twisted]. So I'm not arguing against the move, I'm just
 concerned that doing it without addressing whatever the underlying
 thing was, will fail - and I'm also concerned that it will surprise
 folk - since there doesn't seem to be a cross project blueprint
 talking about this fairly fundamental shift in programming model.
 
 I'm not going to comment on the pros and cons - I think we all know I'm
 a fan of threads. But I have been around a while, so - for those who
 haven't been:
 
 When we started the project, nova used twisted and swift used eventlet.
 As we've consistently endeavored to not have multiple frameworks, we
 entered in to the project's first big flame war:
 
 twisted vs. eventlet
 
 It was _real_ fun, I promise. But a the heart was a question of whether
 we were going to rewrite swift in twisted or rewrite nova in eventlet.
 
 The main 'winning' answer came down to twisted being very opaque for new
 devs - while it's very powerful for experienced devs, we decided to opt
 for eventlet which does not scare new devs with a completely different
 programming model. (reactors and deferreds and whatnot)
 
 Now, I wouldn't say we _just_ ported from Twisted, I think we finished
 that work about 4 years ago. :)
 

For whatever it’s worth, I find explicit async IO to be _way_ easier to
understand, for the same reason I find threaded code to be a rat’s nest.

The co-routine style of asyncio (or Twisted’s inlineCallbacks) solves
almost all of the problems that I think most people have with explicit
asyncio (namely the callback hell) while still getting the benefits.

Glyph wrote a good post that mirrors my opinions on implicit vs explicit
here: https://glyph.twistedmatrix.com/2014/02/unyielding.html.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-23 Thread Donald Stufft

 On Nov 23, 2014, at 7:21 PM, Mike Bayer mba...@redhat.com wrote:
 
 Given that, I’ve yet to understand why a system that implicitly defers CPU 
 use when a routine encounters IO, deferring to other routines, is relegated 
 to the realm of “magic”.   Is Python reference counting and garbage 
 collection “magic”?How can I be sure that my program is only declaring 
 memory, only as much as I expect, and then freeing it only when I absolutely 
 say so, the way async advocates seem to be about IO?   Why would a high level 
 scripting language enforce this level of low-level bookkeeping of IO calls as 
 explicit, when it is 100% predictable and automatable?

The difference is that in the many years of Python programming I’ve had to 
think about garbage collection all of once. I’ve yet to write a non-trivial 
implicit IO application where the implicit context switch didn’t break 
something, forcing me to think about adding explicit locks around things.

Really that’s what it comes down to. Either you need to enable explicit context 
switches (via callbacks or yielding, or whatever) or you need to add explicit 
locks. Neither solution allows you to pretend that context switching isn’t 
going to happen nor prevents you from having to deal with it. The reason I 
prefer explicit async is because the failure mode is better (if I forget to 
yield I don’t get the actual value so my thing blows up in development) and it 
ironically works more like blocking programming because I won’t get an implicit 
context switch in the middle of a function. Compare that to the implicit async 
where the failure mode is that at runtime something weird happens.
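A minimal sketch of that difference in failure modes, using the stdlib asyncio with modern syntax for brevity (the code in this thread would have used trollius coroutines on Python 2, but the behaviour is analogous): forgetting the explicit switch marker hands you a coroutine object instead of a value, which fails loudly the moment you try to use it.

```python
import asyncio

async def fetch_value():
    await asyncio.sleep(0)  # an explicit, visible context-switch point
    return 42

async def correct():
    return await fetch_value()   # explicit switch: the value comes back

async def buggy():
    return fetch_value()         # forgot the await: a coroutine comes back

loop = asyncio.new_event_loop()
ok = loop.run_until_complete(correct())    # -> 42
oops = loop.run_until_complete(buggy())    # -> a coroutine object, not 42
print(ok, asyncio.iscoroutine(oops))       # 42 True
oops.close()  # silence the "coroutine was never awaited" warning
loop.close()
```

The buggy variant is caught immediately in development; the implicit equivalent would have returned a plausible-looking value and misbehaved only under load.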

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA




Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-23 Thread Donald Stufft

 On Nov 23, 2014, at 7:29 PM, Mike Bayer mba...@redhat.com wrote:
 
 
 Glyph wrote a good post that mirrors my opinions on implicit vs explicit
 here: https://glyph.twistedmatrix.com/2014/02/unyielding.html.
 
 this is the post that most makes me think about the garbage collector 
 analogy, re: “gevent works perfectly fine, but sorry, it just isn’t 
 “correct”.  It should be feared! ”.   Unfortunately Glyph has orders of 
 magnitude more intellectual capabilities than I do, so I am ultimately not an 
 effective advocate for my position; hence I have my fallback career as a 
 cheese maker lined up for when the async agenda finally takes over all 
 computer programming.

Like I said, I’ve had to think about garbage collecting all of once in my 
entire Python career. Implicit might be theoretically nicer but until it can 
actually live up to the “gets out of my way-ness” of the abstractions you’re 
citing I’d personally much rather pass on it.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA




Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-23 Thread Donald Stufft

 On Nov 23, 2014, at 7:55 PM, Mike Bayer mba...@redhat.com wrote:
 
 
 On Nov 23, 2014, at 7:30 PM, Donald Stufft don...@stufft.io wrote:
 
 
 On Nov 23, 2014, at 7:21 PM, Mike Bayer mba...@redhat.com wrote:
 
 Given that, I’ve yet to understand why a system that implicitly defers CPU 
 use when a routine encounters IO, deferring to other routines, is relegated 
 to the realm of “magic”.   Is Python reference counting and garbage 
 collection “magic”? How can I be sure that my program is only declaring 
 memory, only as much as I expect, and then freeing it only when I 
 absolutely say so, the way async advocates seem to be about IO?   Why would 
 a high level scripting language enforce this level of low-level bookkeeping 
 of IO calls as explicit, when it is 100% predictable and automatable?
 
 The difference is that in the many years of Python programming I’ve had to 
 think about garbage collection all of once. I’ve yet to write a non trivial 
 implicit IO application where the implicit context switch didn’t break 
 something and I had to think about adding explicit locks around things.
 
 that’s your personal experience, how is that an argument?  I deal with the 
 Python garbage collector, memory management, etc. *all the time*.   I have a 
 whole test suite dedicated to ensuring that SQLAlchemy constructs tear 
 themselves down appropriately in the face of gc and such: 
 https://github.com/zzzeek/sqlalchemy/blob/master/test/aaa_profiling/test_memusage.py
  .   This is the product of tons of different observed and reported issues 
 about this operation or that operation forming constructs that would take up 
 too much memory, wouldn’t be garbage collected when expected, etc.  
 
 Yet somehow I still value very much the work that implicit GC does for me and 
 I understand well when it is going to happen.  I don’t decide that that whole 
 world should be forced to never have GC again.  I’m sure you wouldn’t be 
 happy if I got Guido to drop garbage collection from Python because I showed 
 how sometimes it makes my life more difficult, therefore we should all be 
 managing memory explicitly.

Eh, maybe you need to do that, that’s fine I suppose. Though the choice isn’t 
between something with a very clear failure condition and something with a 
“weird things start happening” error condition. It’s between “weird things 
start happening” and “weird things start happening, just less often”. Implicit 
context switches introduce a new, harder to debug failure mode over blocking 
code that explicit context switches do not.

 
 I’m sure my agenda here is pretty transparent.  If explicit async becomes the 
 only way to go, SQLAlchemy basically closes down.   I’d have to rewrite it 
 completely (after waiting for all the DBAPIs that don’t exist to be written, 
 why doesn’t anyone ever seem to be concerned about that?) , and it would run 
 much less efficiently due to the massive amount of additional function call 
 overhead incurred by the explicit coroutines.   It’s a pointless amount of 
 verbosity within a scripting language.  

I don’t really take performance issues that seriously for CPython. If you care 
about performance you should be using PyPy. I like that argument though because 
the same argument is used against the GCs which you like to use as an example 
too.

The verbosity isn’t really pointless, you have to be verbose in either 
situation, either explicit locks or explicit context switches. If you don’t 
have explicit locks you just have buggy software instead.
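A deterministic toy model of that race: a tiny round-robin scheduler where every yield stands in for a context switch (with green threads the switch point would be hidden inside a monkeypatched IO call instead of being visible). A read-modify-write spanning the switch silently loses an update unless you add an explicit lock or remove the switch point.

```python
from collections import deque

balance = 0

def deposit(amount):
    # read-modify-write that spans a switch point
    global balance
    read = balance   # read the shared state ...
    yield            # ... context switch here (a hidden IO call in real code)
    balance = read + amount  # ... write back a now-stale value

def run(tasks):
    # trivial round-robin scheduler standing in for the event loop
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            next(task)
            queue.append(task)
        except StopIteration:
            pass

run([deposit(10), deposit(10)])
print(balance)  # 10, not 20 - one deposit was silently lost
```

With explicit switches the yield is at least visible in the function body; with implicit switching the same bug hides behind an innocent-looking library call.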

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA




Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-23 Thread Donald Stufft

 On Nov 23, 2014, at 9:09 PM, Mike Bayer mba...@redhat.com wrote:
 
 
 On Nov 23, 2014, at 8:23 PM, Donald Stufft don...@stufft.io wrote:
 
 I don’t really take performance issues that seriously for CPython. If you 
 care about performance you should be using PyPy. I like that argument though 
 because the same argument is used against the GCs which you like to use as 
 an example too.
 
 The verbosity isn’t really pointless, you have to be verbose in either 
 situation, either explicit locks or explicit context switches. If you don’t 
 have explicit locks you just have buggy software instead.
 
 Funny thing is that relational databases will lock on things whether or not 
 the calling code is using an async system.  Locks are a necessary thing in 
 many cases.  That lock-based concurrency code can’t be mathematically proven 
 bug free doesn’t detract from its vast usefulness in situations that are not 
 aeronautics or medical devices.

Sure, databases will do it regardless so they aren’t a very useful topic of 
discussion here since their operation is external to the system being developed 
and they will operate the same regardless.

There’s a long history of implicit context switches causing buggy software that 
breaks. As far as I can tell the only downsides to explicit context switches 
that don’t stem from an inferior interpreter seem to be “some particular API in 
my head isn’t as easy with it” and “I have to type more letters”. The first one 
I’d just say that constraints make the system and that there are lots of APIs 
which aren’t really possible or easy in Python because of one design decision 
or another. For the second one I’d say that Python isn’t a language which 
attempts to make code shorter, just easier to understand what is going to 
happen when.

Throwing out hyperboles like “mathematically proven” isn’t a particularly 
valuable statement. It is *easier* to reason about what’s going to happen with 
explicit context switches. Maybe you’re a better programmer than I am and 
you’re able to keep in your head every place that might do an implicit context 
switch in an implicit setup and you can look at a function and go “ah yup, 
things are going to switch here and here”. I certainly can’t. I like my 
software to maximize the ability to locally reason about a particular chunk of 
code.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA




Re: [openstack-dev] No PROTOCOL_SSLv3 in Python 2.7 in Sid since 3 days

2014-11-22 Thread Donald Stufft

 On Nov 22, 2014, at 1:45 AM, Robert Collins robe...@robertcollins.net wrote:
 
 On 22 November 2014 08:11, Jeremy Stanley fu...@yuggoth.org wrote:
 On 2014-11-21 12:31:08 -0500 (-0500), Donald Stufft wrote:
 Death to SSLv3 IMO.
 
 Sure, we should avoid releasing new versions of things which assume
 SSLv3 support is present in underlying libraries/platforms (it's
 unclear to me why anyone even thought it was good to make that
 configurable to this degree in openstack-common, but it probably
 dates back to before the nova common split). But what we're talking
 about here is fixing a deployability/usability bug where the
 software is assuming the presence of something removed from a
 dependency on some platform. I'd rather not conflate it with
 knee-jerk SSLv3 Bad rhetoric which risks giving casual readers the
 impression there's some vulnerability here.
 
 Ceasing to assume the presence of SSLv3 support is a safe choice for
 the software in question. Forcing changes to stable branches for
 this should be taken on its merits as a normal bug, and not
 prioritized because of any perceived security impact.
 
 Given the persistent risks of downgrade attacks, I think this does
 actually qualify as a security issue: not that it's breaking, but that
 SSLv3 is advertised and accepted anywhere.
 
 The lines two lower:
try:
 _SSL_PROTOCOLS["sslv2"] = ssl.PROTOCOL_SSLv2
except AttributeError:
pass
 
 Are even more concerning!
 
 That said, code like:
 https://github.com/mpaladin/python-amqpclt/blob/master/amqpclt/kombu.py#L101
 
 is truely egregious!
 
 :)
 

Yes this. SSLv3 isn’t a “Well as long as you have newer things enabled it’s
fine” it’s a “If you have this enabled at all it’s a problem”. As far as I
am aware without TLS_FALLBACK_SCSV a MITM who is willing to do active attacks
can force the connection over to the lowest protocol that a client and server
support. There is no way for the server to verify that the message sent from
the client that indicates their highest was not modified in transit.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA




Re: [openstack-dev] No PROTOCOL_SSLv3 in Python 2.7 in Sid since 3 days

2014-11-22 Thread Donald Stufft
I'm on my phone, but RFC 2246 says that there are many ways in which an attacker 
can attempt to make the two parties drop down to the least secure option they both 
support. It's in the second or third paragraph of that section. 


 On Nov 22, 2014, at 4:00 PM, Jeremy Stanley fu...@yuggoth.org wrote:
 
 On 2014-11-22 13:37:55 -0500 (-0500), Donald Stufft wrote:
 Yes this. SSLv3 isn’t a “Well as long as you have newer things
 enabled it’s fine” it’s a “If you have this enabled at all it’s a
 problem”. As far as I am aware without TLS_FALLBACK_SCSV a MITM
 who is willing to do active attacks can force the connection over
 to the lowest protocol that a client and server support. There is
 no way for the server to verify that the message sent from the
 client that indicates their highest was not modified in transit.
 
 IETF RFC 2246 disagrees with you on this. Please cite sources
 (besides interactions with Web browsers that sidestep TLS version
 negotiation a la POODLE). You're suggesting a vulnerability far
 worse than e.g. CVE-2014-3511 in OpenSSL, which would definitely be
 something I haven't seen disclosed to date. It's very easy to fall
 into the protocol shaming trap, and I don't think it's at all
 helpful.
 -- 
 Jeremy Stanley
 



Re: [openstack-dev] No PROTOCOL_SSLv3 in Python 2.7 in Sid since 3 days

2014-11-22 Thread Donald Stufft
I refreshed my memory and I was wrong about the specific attack. However the 
point still stands that both the RFC and respected folks such as Thomas Pornin 
state that you should look at version negotiation as a way to selectively 
enable new features, not as a way to ensure that a connection uses a secure 
option when both a secure and an insecure option exist. 

http://crypto.stackexchange.com/a/10496


 On Nov 22, 2014, at 4:13 PM, Donald Stufft don...@stufft.io wrote:
 
 I'm on my phone, but RFC 2246 says that there are many ways in which an 
 attacker can attempt to make the two parties drop down to the least secure option 
 they both support. It's in the second or third paragraph of that section. 
 
 
 On Nov 22, 2014, at 4:00 PM, Jeremy Stanley fu...@yuggoth.org wrote:
 
 On 2014-11-22 13:37:55 -0500 (-0500), Donald Stufft wrote:
 Yes this. SSLv3 isn’t a “Well as long as you have newer things
 enabled it’s fine” it’s a “If you have this enabled at all it’s a
 problem”. As far as I am aware without TLS_FALLBACK_SCSV a MITM
 who is willing to do active attacks can force the connection over
 to the lowest protocol that a client and server support. There is
 no way for the server to verify that the message sent from the
 client that indicates their highest was not modified in transit.
 
 IETF RFC 2246 disagrees with you on this. Please cite sources
 (besides interactions with Web browsers that sidestep TLS version
 negotiation a la POODLE). You're suggesting a vulnerability far
 worse than e.g. CVE-2014-3511 in OpenSSL, which would definitely be
 something I haven't seen disclosed to date. It's very easy to fall
 into the protocol shaming trap, and I don't think it's at all
 helpful.
 -- 
 Jeremy Stanley
 



Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-21 Thread Donald Stufft

 On Nov 21, 2014, at 3:59 AM, Thomas Goirand z...@debian.org wrote:
 
 
 I'm not sure I understand the meaning behind this question. bower
 install angular downloads a bower package called angular.
 
 Isn't there a simple URL that I may use with wget? I don't really
 want to use bower directly; I just would like to have a look at the
 content of the bower package.

You can’t. Bower doesn’t have “traditional” packages where you take a
directory, archive it using tar/zip/whatever, and then upload it to
some repo. Bower has a registry which maps names to git URLs; the bower
CLI looks up that mapping, fetches the git repository, and then
uses that as the input to the “look at metadata and do stuff with files”
part of the package manager, instead of the output of an unarchival
command.
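Concretely, a registry lookup returns little more than that name → git URL record. A sketch (the response shape shown is an assumption based on how bower's registry worked at the time):

```python
import json

# What a registry lookup (e.g. GET /packages/angular) hands back: just a
# name -> git URL record, with no archive to download.
sample_response = '{"name": "angular", "url": "git://github.com/angular/angular.js.git"}'

def resolve(payload):
    """Bower's first step: map a package name to the git URL to clone."""
    return json.loads(payload)["url"]

print(resolve(sample_response))  # git://github.com/angular/angular.js.git
```

Everything after that resolution step is an ordinary `git clone` plus metadata inspection, which is why there is no archive to fetch with wget.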

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA




Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-21 Thread Donald Stufft

 On Nov 21, 2014, at 11:32 AM, Jeremy Stanley fu...@yuggoth.org wrote:
 
 On 2014-11-21 07:31:36 -0500 (-0500), Donald Stufft wrote:
 You can’t. Bower doesn’t have “traditional” packages where you take a
 directory and archive it using tar/zip/whatever and then upload it to
 some repo. Bower has a registry which maps names to git URLs and then
 the bower CLI looks up that mapping, fetches the git repository and then
 uses that as the input to the “look at metadata and do stuff with files”
 part of the package manager instead of the output of an un-unarchival
 command.
 
 This raises interesting free software philosophy/license
 questions... how do I redistribute (or even examine) the source of
 a bower-managed package? Is there a way without actually
 reverse-engineering the toolchain?

Well it’s a git repository, so you could just clone it and look at it.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA




Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-21 Thread Donald Stufft

 On Nov 21, 2014, at 11:57 AM, Jeremy Stanley fu...@yuggoth.org wrote:
 
 On 2014-11-21 11:39:00 -0500 (-0500), Donald Stufft wrote:
 Well it’s a git repository, so you could just clone it and look at
 it.
 
 Aha, your earlier description made it sound like Bower was a file
 registry mapping to various random contents from a bunch of revision
 control repositories to assemble any one package. If Bower packages
 generally map back to one repository per package (even if there are
 multiple packages per repository) then that seems much more sane to
 deal with.
 -- 
 Jeremy Stanley
 

Yea sorry, the bower registry (bower's analogue of PyPI) is a mapping of name: git URL.


---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA




Re: [openstack-dev] No PROTOCOL_SSLv3 in Python 2.7 in Sid since 3 days

2014-11-21 Thread Donald Stufft

 On Nov 21, 2014, at 11:51 AM, Jeremy Stanley fu...@yuggoth.org wrote:
 
 On 2014-11-21 09:38:00 -0500 (-0500), Doug Hellmann wrote:
 The patch drops support entirely, but as Brant points out that
 isn’t backwards-compatible. I’d be interested to hear from the
 security team about whether the security issues trump the
 backwards compatibility issues here or if we should maintain
 optional support (that is, allow v3 if we detect that we can use
 it because the symbol is present). 
 
 Thomas, can you get one or two of the security team to comment on
 the patch?
 
 The discussion in https://launchpad.net/bugs/1381365 is relevant to
 this topic.
 -- 
 Jeremy Stanley
 

Death to SSLv3 IMO.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA




Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-14 Thread Donald Stufft

 On Nov 14, 2014, at 7:48 AM, Matthias Runge mru...@redhat.com wrote:
 
 On 13/11/14 19:11, Donald Stufft wrote:
 
 As far as I’m aware npm supports TLS the same as pip does. That secures the
 transport between the end users and the repository so you can be assured
 that there is no man in the middle. Security wise npm (and pip) are about
 ~95% (made up numbers, but you can get the gist) of the effectiveness of the
 OS package managers.
 
 Oh, e.g rpm allows packages to be cryptographically signed, and
 depending on your systems config, that is enforced. This is quite
 different from just tls'ing a connection.

You do realize that TLS provides cryptographic proof of authenticity and
integrity just like PGP does right? (It also provides the cool benefit of
privacy which PGP signing does not).

Generally even with PGP signing you still have a number of online keys sitting
on servers which are able to sign packages and the tooling will accept their
signatures. The essential difference is basically, with TLS you depend on the
web server to not be compromised, with PGP signing you depend on the build
server to not be compromised.

In theory you *can* use PGP signing in a way that all of the signing keys are
offline, however this requires having a person manually sign all artifacts that
are created (and even then, you'd want them to also generate said artifacts
to ensure that they were not compromised). However in the real world, most (if
not all) systems involve online keys.

All this isn't to say that TLS is 100% as good as using something like PGP for
signatures though. PGP does have some good benefits, the major one being that
it travels better/easier/at all. For instance a PGP signature can be
transfered alongside a package file and hosted on untrusted mirrors while
relying on TLS means that you *must* trust the machine from which you're
getting the files from.

TLS is a fairly decent way of securing a package infrastructure though, it
prevents all of the major attacks that PGP signing does in practice but it
moves the high value target from the build machines to the web servers and
makes mirroring require trusting the mirror.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA




Re: [openstack-dev] improving PyPi modules design FHS (was: the future of angularjs development in Horizon)

2014-11-14 Thread Donald Stufft

 On Nov 13, 2014, at 6:29 PM, Thomas Goirand z...@debian.org wrote:
 
 On 11/14/2014 06:40 AM, Donald Stufft wrote:
 Sure! That's how I do most of my Python modules these days. I don't just
 create them from scratch, I use my own debpypi script, which generates
 a template for packaging. But it can't be fully automated. I could
 almost do it in a fully automated manner for PEAR packages for PHP (see
 debpear in the Debian archive), but it's harder with Python and 
 pip/PyPi.
 
 I would be interested to know what makes Python harder in this regard, I
 would like to fix it.
 
 The fact that the standard from PyPi is very fuzzy is one of the issues.
 There's nothing in the format (for example in the DOAP.xml record) that
 tells if a module supports Python3 for example. Then the short and long
 descriptions aren't respected, often, you get some changelog entries
 there. Then there's no real convention for the location of the sphinx
 doc. There's also the fact that dependencies for Python have to be
 written by hand on a Debian package. See for example, dependencies on
 arparse, distribute, ordereddict, which I never put in a Debian package
 as it's available in Python 2.7. Or the fact that there's no real unique
 place where dependencies are written on a PyPi package (is it hidden
 somewhere in setup.py, or is it explicitly written in
 requirements.txt?). Etc. On the PHP world, everything is much cleaner,
 in the package.xml, which is very easily parse-able.
 
 (This is fairly off topic, so if you want to reply to this in private that’s
 fine):
 
 Let's just change the subject line, so that those not interested in the
 discussion can skip the topic entirely.
 
 Nothing that says if it supports py3:
Yea, this is a problem, you can somewhat estimate it using the Python 3
classifier though.
 
 The issue is that this is a not-mandatory tag. And often, it isn't set.
 
 Short and Long descriptions aren’t respected:
I’m not sure what you mean by isn’t respected?
 
 On my templating script, I grab what's supposed to be the short and long
 description. But this leads to importing an RST-format long
 description that includes unrelated things. In fact, I'm not even sure
 there are such things as long and short descriptions done properly, so that they
 could just be included in debian/control without manual work.

I suspect this is just a difference between the two systems then. We do have
such concepts as short and long descriptions, but we support markup (via RST)
in the long description, and obviously since PyPI is not a curated index there’s
nothing stopping people from doing whatever they want in those descriptions.

 
 Have to write dependencies by hand:
 Not sure what you mean by not depending on argparse, distribute, ordereddict,
 etc? argparse and ordereddict are often depended on because of Python 2.6,
 
 Right. I think this is an issue in Debian: we should have had a
 Provides: in python 2.7, so that it wouldn't have mater. I just hope
 this specific issue will just fade away as Python 2.6 gets older and
 less used.

For those particular cases probably, the general issue likely won’t go away 
though,
it’ll occur anytime a new version of Python adds a new module that is either 
already
available separately or that someone writes a backport package for older 
versions
of Python. On the plus side the newer formats support conditional dependencies 
so
you can say things like:

Requires-Dist: argparse; python_version == ‘2.6'

which will cause it to only be a dependency on Python 2.6. The sdist format 
doesn’t
yet support this (although since setup.py is executable you can approximate it 
by
generating a list of dependencies that varies depending on Python version).
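A sketch of that executable approximation (the dependency names here are illustrative):

```python
import sys

# Build the dependency list at setup.py execution time; the result is
# what gets passed to setuptools.setup(install_requires=...), since an
# sdist can't yet carry conditional dependencies declaratively.
install_requires = ["six"]  # hypothetical unconditional dependency
if sys.version_info < (2, 7):
    # stdlib from 2.7 onward, so only older interpreters need the backports
    install_requires += ["argparse", "ordereddict"]

print(install_requires)  # on 2.7+ this is just ['six']
```

The downside is exactly the one discussed elsewhere in this thread: the list is only knowable by executing setup.py under the target interpreter, not by static inspection.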

 
 setuptools/distribute should only be depended on if the project is using
 entry points or something similar.
 
 If only everyone was using PBR... :)
 
 No unique place where dependencies are written:
If the project is using setuptools (or is usable from pip) then 
 dependencies
should be inside of the install_requires field in the setup.py. I can send
some code for getting this information. Sadly it’s not in a static form 
 yet
so it requires executing the setup.py.
 
 Executing setup.py blindly before I can inspect it would be an issue.
 However, yes please, I'm curious on how to extract the information, so
 please do send the code!

I just woke up so I’ll extract it from pip and send it later today, however
the general gist is that you execute ``setup.py egg_info`` which will generate
a .egg-info directory alongside the setup.py file, and then inside of that
is a requires.txt file which can be parsed to extract the dependencies. The
gotchas here are that the egg_info command and the idea of dependencies at all
is a setuptools feature not distutils, so it only works if the project supports
setuptools style setup.py. Even if they don’t support it you can force the 
setup.py
to use setuptools with a nasty
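The requires.txt produced by ``setup.py egg_info`` can be parsed with a few lines. The sample content below is hypothetical, but follows the format setuptools emits (plain requirement lines, then bracketed extras sections):

```python
# Hypothetical contents of <pkg>.egg-info/requires.txt after
# running `python setup.py egg_info`.
sample = """\
pbr>=0.6
six>=1.7.0

[ssl]
pyOpenSSL
"""

def parse_requires(requires_txt):
    # Lines before the first [section] header are the unconditional
    # install_requires; the [sections] hold setuptools "extras".
    deps = []
    for line in requires_txt.splitlines():
        line = line.strip()
        if line.startswith("["):
            break
        if line:
            deps.append(line)
    return deps

print(parse_requires(sample))  # ['pbr>=0.6', 'six>=1.7.0']
```

This only works for setuptools-style projects, as noted above, since plain distutils has no concept of dependencies at all.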

Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-14 Thread Donald Stufft

 On Nov 14, 2014, at 1:57 PM, Thomas Goirand z...@debian.org wrote:
 
 On 11/14/2014 08:48 PM, Matthias Runge wrote:
 On 13/11/14 19:11, Donald Stufft wrote:
 
 As far as I’m aware npm supports TLS the same as pip does. That secures the
 transport between the end users and the repository so you can be assured
 that there is no man in the middle. Security wise npm (and pip) are about
 ~95% (made up numbers, but you can get the gist) of the effectiveness of the
 OS package managers.
 
 Oh, e.g rpm allows packages to be cryptographically signed, and
 depending on your systems config, that is enforced. This is quite
 different from just tls'ing a connection.
 
 Matthias
 
 Just like the Debian Release file is signed into a Release.gpg. So, the
 RPM system signs every package, while in Debian, it's the full
 repository that is signed. That's 2 different approaches that both
 works. pip doesn't offer this kind of security, but at the same time, is
 there any kind of check for things that are uploaded to PyPi? Is there
 at least a peer review process?

The entirety of PyPI is signed. It’s not possible to get a copy of our 
equivalent
to Release.gpg that isn’t cryptographically proven to have been sent by a server
possessing our RSA private key.

No, PyPI is not a curated repository, nor are any of the language repos that
I’m aware of. That really has nothing to do with securely fetching a particular
package, it only has to do with whether the contents of said package are safe
to use. It means that people installing a package from PyPI have to decide if
they trust the author of the package prior to installing it, but if they do
trust that author then it is roughly as safe to install from PyPI as it is to
install from Debian. The Linux distros are curated repositories so you need to
decide if you want to trust the gatekeepers of the distro instead of the authors
of the software you’re using (or really you probably need to trust both since
a malicious author could likely hide back doors that would go unnoticed during
packaging as a .deb).

 
 You do realize that TLS provides cryptographic proof of authenticity
 and integrity just like PGP does right? (It also provides the cool
 benefit of privacy which PGP signing does not).
 
 Do you realize that with the TLS system, you have to trust every and all
 CA, while with PGP, you only need to trust a single fingerprint?

You absolutely do not need to trust every single CA, or even any CAs at all.
If I recall npm pins which CA they trust. Pip doesn’t (yet) do this because
of some historical reasons but it’s on my list of things as well. It’s no
harder to limit the set of CAs or even individual certificates that are accepted
as valid than it is to limit the set of PGP keys you trust.
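
As a sketch of what limiting the trusted CA set looks like at the code level
(this uses the modern Python ssl API and a hypothetical bundle path — it is
the underlying mechanism, not what pip or npm actually ship):

```python
import ssl

def pinned_context(ca_file):
    """Build a client-side TLS context that trusts *only* the CA(s) in
    ca_file, rather than the full system/browser CA bundle."""
    # PROTOCOL_TLS_CLIENT turns on hostname checking and CERT_REQUIRED
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations(cafile=ca_file)
    return ctx

# A connection made with this context fails verification unless the
# server's chain leads back to the pinned CA, e.g.:
#   ctx = pinned_context("/etc/ssl/pinned/pypi-ca.pem")  # hypothetical path
#   with socket.create_connection((host, 443)) as sock:
#       tls = ctx.wrap_socket(sock, server_hostname=host)
```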

 
 All this isn't to say that TLS is 100% as good as using something
 like PGP for signatures though.
 
 I don't agree. I don't trust the CNNIC or the hong-kong post office,
 though their key is on every browser. I do trust the Debian PGP key
 generated by the Debian team.

See above, you’re operating under a misconception that TLS mandates using
the same set of CAs as the browsers use.

 
 TLS is a fairly decent way of securing a package infrastructure
 though, it prevents all of the major attacks that PGP signing does in
 practice but it moves the high value target from the build machines
 to the web servers [...]
 
 And ... to a huge list of root CA which you have to trust.

Already discussed above.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-14 Thread Donald Stufft

 On Nov 14, 2014, at 2:39 PM, Jeremy Stanley fu...@yuggoth.org wrote:
 
 On 2014-11-15 02:57:15 +0800 (+0800), Thomas Goirand wrote:
 [...]
 Do you realize that with the TLS system, you have to trust every
 and all CA, while with PGP, you only need to trust a single
 fingerprint?
 [...]
 
 Technically not true *if* the package retrieval tools implement
 certificate pinning rather than trusting any old CA (after all,
 they're not Web browsers, so they could in theory do that with
 minimal impact).
 
 Too bad https://github.com/pypa/pip/issues/1168 hasn't gotten much
 traction.

Yea, primary reason that hasn’t been done is up until recently we (PyPI)
were relying on the TLS certificate provided by Fastly and they were
unwilling to make a promise to also be using a particular CA for the
next N years. We now have dedicated IP addresses with them so we can
provide them with whatever certificate we want, now it’s just a matter
of selecting CAs and the political process.



Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Donald Stufft
 how that could work
 here:
 
 http://nodejs.org/api/modules.html#modules_addenda_package_manager_tips
 
 It's fun to read, but very naive. First thing that is very shocking is
 that arch independent things get installed into /usr/lib, where they
 belong in /usr/share. If that is what the NPM upstream produces, that's
 scary: he doesn't even know how the FHS (Filesystem Hierarchy Standard)
 works.

I may be wrong, but doesn’t the FHS state that /usr/share is for arch
independent *data* that is read only? I believe it also states that
/usr/lib is for object files, libraries, and internal binaries. As far as
I’m aware the things that npm installs are libraries the same as what
pip installs and should go under /usr/lib yea?

 
 Has anyone written such wrapper packages? Not the xstatic system which
 seems to incur a porting effort -- but really a wrapper system that
 can translate any node module into a system package.
 
 The xstatic packages are quite painless, from my view point. What's
 painful is to link an existing xstatic package with an already existing
 libjs-* package that may have a completely different directory
 structure. You can then end-up with a forest of symlinks, but there's no
 way around that. No wrapper can solve that problem either. And more
 generally, a wrapper that writes a $distribution source package out of a
 $language-specific package manager will never solve all, it will only
 reduce the amount of packaging work.
 
 Cheers,
 
 Thomas Goirand (zigo)
 
 



Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Donald Stufft

 On Nov 13, 2014, at 5:23 PM, Thomas Goirand z...@debian.org wrote:
 
 On 11/14/2014 02:11 AM, Donald Stufft wrote:
 On Nov 13, 2014, at 12:38 PM, Thomas Goirand z...@debian.org wrote:
 On 11/13/2014 10:56 PM, Martin Geisler wrote:
 However, the whole JavaScript ecosystem seems to be centered around the
 idea of doing local installations. That means that you no longer need
 the package manager to install the software -- you only need a package
 manager to install the base system (NodeJs and npm for JavaScript).
 
 Yeah... Just like for Java, PHP, Perl, Python, you-name-it...
 
 In what way Javascript will be any different from all of these languages?
 
 Node.js, and npm in particular tends to solve the dependency hell problem
 by making it possible to install multiple versions of a particular thing
 and use them all within the same process. As far as I know the OS tooling
 doesn’t really handle SxS installs of the same thing in multiple versions
 very well, I think the closest that you can do is multiple separate packages
 with version numbers in the package name?
 
 Yeah, and for a very good reason: having multiple version of the same
 thing is just really horrible, and should be avoided at all costs.

I don’t disagree with you that I don’t particularly like that situation, just
saying that node.js/npm *is* special in this regard because it’s entirely
possible that you can’t resolve things to a single version per dependency and
their tooling will just work for that.

 
 Also, does your $language-specific-package--manager has enough checks so
 that there's no man in the middle attack possible? Is it secured enough?
 Can a replay attack be done on it? Does it supports any kind of
 cryptography checks like yum or apt does? I'm almost sure that's not the
 case. pip is really horrible in this regard. I haven't checked, but I'm
 almost sure what we're proposing (eg: npm and such) have the same
 weakness. And here, I'm only scratching security concerns. There's other
 concerns, like how good is the dependency solver and such (remember: it
 took *years* for apt to be as good as it is right now, and it still has
 some defects).
 
 As far as I’m aware npm supports TLS the same as pip does. That secures the
 transport between the end users and the repository so you can be assured
 that there is no man in the middle. Security wise npm (and pip) are about
 ~95% (made up numbers, but you can get the gist) of the effectiveness of the
 OS package managers.
 
 I don't agree at all with this view. Using TLS is *far* from being
 enough IMO. But that's not the point. Using anything else than the
 distribution package manager is a hack (or unfinished work).

This is an argument that I don’t think either of us will convince the other of,
so I’ll just say agree to disagree.

 
 On 11/14/2014 12:59 AM, Martin Geisler wrote:
 It seems to me that it should be possible translate the node module
 into system level packages in a mechanical fashion, assuming that
 you're willing to have a system package for each version of the node
 module
 
 Sure! That's how I do most of my Python modules these days. I don't just
 create them from scratch, I use my own debpypi script, which generates
 a template for packaging. But it can't be fully automated. I could
 almost do it in a fully automated manner for PEAR packages for PHP (see
 debpear in the Debian archive), but it's harder with Python and pip/PyPi.
 
 I would be interested to know what makes Python harder in this regard, I
 would like to fix it.
 
 The fact that the standard from PyPi is very fuzzy is one of the issue.
 There's nothing in the format (for example in the DOAP.xml record) that
 tells if a module supports Python3 for example. Then the short and long
 descriptions aren't respected, often, you get some changelog entries
 there. Then there's no real convention for the location of the sphinx
 doc. There's also the fact that dependencies for Python have to be
 written by hand on a Debian package. See for example, dependencies on
 arparse, distribute, ordereddict, which I never put in a Debian package
 as it's available in Python 2.7. Or the fact that there's no real unique
 place where dependencies are written on a PyPi package (is it hidden
 somewhere in setup.py, or is it explicitly written in
 requirements.txt?). Etc. On the PHP world, everything is much cleaner,
 in the package.xml, which is very easily parse-able.

(This is fairly off topic, so if you want to reply to this in private that’s
fine):

Nothing that says if it supports py3:
Yea, this is a problem, you can somewhat estimate it using the Python 3
classifier though.

Short and Long descriptions aren’t respected:
I’m not sure what you mean by isn’t respected?

No real convention for the location of the sphinx docs:
Ok, I’ll add this to the list of things that needs work.

Have to write dependencies by hand:
Not sure what you mean by not depending on argparse, distribute,
ordereddict, etc

Re: [openstack-dev] [oslo] dropping namespace packages

2014-11-12 Thread Donald Stufft

 On Nov 12, 2014, at 3:32 PM, Doug Hellmann d...@doughellmann.com wrote:
 
 We rather quickly came to consensus at the summit that we should drop the use 
 of namespace packages in Oslo libraries [1]. As far as I could tell, everyone 
 was happy with my proposed approach [2] of moving the code from oslo.foo to 
 oslo_foo and then creating a backwards-compatibility shim in oslo.foo that 
 imports public symbols from oslo_foo. We also agreed that we would not rename 
 existing libraries, and we would continue to use the same naming convention 
 for new libraries. So the distribution and git repository both will be called 
 “oslo.foo” and the import statement would look like “from oslo_foo import 
 bar”.
 
 Doug
 
 [1] https://etherpad.openstack.org/p/kilo-oslo-namespace-packages
 [2] https://review.openstack.org/128759

+1 for whatever my vote is worth :)



Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-12 Thread Donald Stufft

 On Nov 12, 2014, at 8:29 PM, Thomas Goirand z...@debian.org wrote:
 
 On 11/12/2014 09:31 PM, Monty Taylor wrote:
 jshint is NOT free software.
 
 https://github.com/jshint/jshint/blob/master/src/jshint.js#L19
 Reasonable people disagree on this point.
 
 Feel free to have this debate with the entire Debian community. When
 you're done, then come back to us, and we can use jshint. In the mean
 while, let's not use it. (and by the way, read again the Debian Free
 Software Guideline and especially point number 5 and 6, I'm not saying
 that this is *my* point of argumentation, but it has been so within some
 threads in Debian).

The terrible line in the jshint license is going to go away in the future,
you can read https://github.com/jshint/jshint/issues/1234#issuecomment-56875247
but the tl;dr is that Doug Crockford gave Eclipse a license to his software
without that line, and eclipse is sharing it so jshint is going to rebase off
of the “Crockford” version and onto the Eclipse version.



Re: [openstack-dev] Cross distribution talks on Friday

2014-11-10 Thread Donald Stufft

 On Nov 10, 2014, at 11:43 AM, Adam Young ayo...@redhat.com wrote:
 
 On 11/01/2014 06:51 PM, Alan Pevec wrote:
 %install
 export OSLO_PACKAGE_VERSION=%{version}
 %{__python} setup.py install -O1 --skip-build --root %{buildroot}
 
 Then everything should be ok and PBR will become your friend.
 Still not my friend because I don't want a _build_ tool as runtime 
 dependency :)
 e.g. you don't ship make(1) to run C programs, do you?
 For runtime, only pbr.version part is required but unfortunately
 oslo.version was abandoned.
 
 Cheers,
 Alan
 
 Perhaps we need a top level Python Version library, not Oslo?  Is there such 
 a thing?  Seems like it should not be something specific to OpenStack

What does pbr.version do?



Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-19 Thread Donald Stufft

 On Sep 19, 2014, at 11:54 AM, Brant Knudson b...@acm.org wrote:
 
 
 I don't think anyone would be complaining if glanceclient didn't have the 
 need to reach into and monkeypatch requests's connection pool manager[1]. Is 
 there a way to tell requests to build the https connections differently 
 without monkeypatching urllib3.poolmanager?
 
 glanceclient's monkeypatching of the global variable here is dangerous since 
 it will mess with the application and every other library if the application 
 or another library uses glanceclient.
 
 [1] 
 http://git.openstack.org/cgit/openstack/python-glanceclient/tree/glanceclient/common/https.py#n75
  
 http://git.openstack.org/cgit/openstack/python-glanceclient/tree/glanceclient/common/https.py#n75
 

Why does it need to use its own VerifiedHTTPSConnection class? Ironically
reimplementing that is probably more dangerous for security than requests
bundling urllib3 ;)



Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-19 Thread Donald Stufft

 On Sep 19, 2014, at 12:42 PM, Mark Washenberger 
 mark.washenber...@markwash.net wrote:
 
 
 
 On Fri, Sep 19, 2014 at 8:59 AM, Donald Stufft don...@stufft.io 
 mailto:don...@stufft.io wrote:
 
 On Sep 19, 2014, at 11:54 AM, Brant Knudson b...@acm.org 
 mailto:b...@acm.org wrote:
 
 
 I don't think anyone would be complaining if glanceclient didn't have the 
 need to reach into and monkeypatch requests's connection pool manager[1]. Is 
 there a way to tell requests to build the https connections differently 
 without monkeypatching urllib3.poolmanager?
 
 glanceclient's monkeypatching of the global variable here is dangerous since 
 it will mess with the application and every other library if the application 
 or another library uses glanceclient.
 
 [1] 
 http://git.openstack.org/cgit/openstack/python-glanceclient/tree/glanceclient/common/https.py#n75
  
 http://git.openstack.org/cgit/openstack/python-glanceclient/tree/glanceclient/common/https.py#n75
 
 
 Why does it need to use its own VerifiedHTTPSConnection class? Ironically
 reimplementing that is probably more dangerous for security than requests
 bundling urllib3 ;)
 
 We supported the option to skip SSL compression since before adopting 
 requests (see 556082cd6632dbce52ccb67ace57410d61057d66), useful when 
 uploading already compressed images.
 

Is that all it’s used for? Probably it’s sane to just delete it then.

On Python 3.2+, 2.7.9+ Python provides the APIs to do it in the stdlib and 
urllib3 (and thus requests) will remove TLS Compression by default.

Python 2.6, and 2.7.0-2.7.8 do not provide the APIs to do so, however on Python 
2.x if you install pyOpenSSL, ndg-httpsclient, and pyasn1 then it’ll also 
disable TLS compression (automatically if you use requests, you have to do an 
import + function call with raw urllib3).
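
The version split described above can be captured in a small guard (a sketch;
the function name is mine):

```python
import sys

def stdlib_has_tls_knobs():
    """True when the standard library ssl module exposes the knobs
    needed to disable TLS compression: Python 3.2+, or the ssl
    backport that landed in Python 2.7.9."""
    if sys.version_info[0] >= 3:
        return sys.version_info[:2] >= (3, 2)
    return sys.version_info[:3] >= (2, 7, 9)

# On older interpreters the fallback is the pyOpenSSL stack mentioned
# above (pyOpenSSL + ndg-httpsclient + pyasn1), which urllib3 can be
# told to use via its contrib module.
print(stdlib_has_tls_knobs())
```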

So you can remove all that code and just let requests/urllib3 handle it on 
3.2+, 2.7.9+ and for anything less than that either use conditional 
dependencies to have glance client depend on pyOpenSSL, ndg-httpsclient, and 
pyasn1 on Python 2.x, or let them be optional and if people want to disable TLS 
compression in those versions they can install those versions themselves.

By the way, everything above holds true for SNI as well.

This seems like the best of both worlds, glance client isn’t importing stuff 
from the vendored requests.packages.*, people get TLS Compression disabled (by 
default or optional depending on the choice the project makes), and it no 
longer has to maintain its own copy of security sensitive code.



Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-19 Thread Donald Stufft

 On Sep 19, 2014, at 2:26 PM, Chmouel Boudjnah chmo...@enovance.com wrote:
 
 
 On Fri, Sep 19, 2014 at 6:58 PM, Donald Stufft don...@stufft.io 
 mailto:don...@stufft.io wrote:
 So you can remove all that code and just let requests/urllib3 handle it on 
 3.2+, 2.7.9+ and for anything less than that either use conditional 
 dependencies to have glance client depend on pyOpenSSL, ndg-httpsclient, and 
 pyasn1 on Python 2.x, or let them be optional and if people want to disable 
 TLS compression in those versions they can install those versions themselves.
 
 
 we have that issue as well for swiftclient, see the great write-up from 
 stuart here :
 
 https://answers.launchpad.net/swift/+question/196920 
 https://answers.launchpad.net/swift/+question/196920
 
 just removing this and hoping that users use bleeding edge python
 (which they don't) is not going to work for us, and the pyOpenSSL way is very
 unfriendly to the end-user as well.
 
 

Unfortunately those are the only options, short of using a different TLS
implementation than pyOpenSSL altogether.

Python 2.x standard library did not include the requisite knobs for configuring 
this, it wasn’t until Python 3.2+ that the ssl module in the standard library 
gained the ability to have these kinds of things applied to it. Python 2.7.9 
contains a back port of the 3.x ssl module to Python 2.7, so that’s the first 
time in the 2.x line that the standard library has the knobs to change these 
things.

The alternative to 3.2+ or 2.7.9+ is using an alternative TLS implementation, 
of which pyOpenSSL is by far the most popular (and it’s what glanceclient is 
using right now).



Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-18 Thread Donald Stufft

 On Sep 18, 2014, at 7:43 AM, Thomas Goirand z...@debian.org wrote:
 
 On 09/18/2014 04:01 PM, Flavio Percoco wrote:
 After having gone through the whole thread and read all the concerns,
 problems and reasonings, I think we should stick to requests as-is for
 now and deal with this particular issue.
 
 Regardless of the vendorized urllib3 package, I believe requests is a
 good library with an easy-to-consume API and it has solved several
 problems throughout OpenStack. Not to mention it's also helpped with
 making OpenStack more consistent. We've put a lot of effort to get to
 this point and I don't think we should revert all that because of the
 vendorized `urllib3` package.
 
 Cheers,
 Flavio
 
 I, at least, haven't suggested that we stop using requests. Just that we
 don't internally use any of the requests.packages.* stuff.
 
 The rest of the debate about the good or bad things with vendorizing,
 even if it is my view that it's a really bad thing, is IMO not
 interesting for the OpenStack project.

I don’t believe that’s a good idea. If you’re wanting to use urllib3 in order
to interact with requests then you *should* be using requests.packages.urllib3,
to do anything else risks having two different versions of urllib3 primitives at
play in one subsystem.

It’s not even completely possible in the case that prompted this thread 
originally
since the reason requests.packages.urllib3 was being imported from was so that
there could be an isinstance() check against one of the classes. If that wasn’t
imported from requests.packages.urllib3 but instead from just urllib3 then the
isinstance check would always fail.
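
That failure mode is easy to reproduce: load the same source as two separate
modules (standing in for top-level urllib3 and requests' bundled copy) and the
two class objects do not compare as the same type. A self-contained sketch:

```python
import importlib.util
import os
import tempfile

SRC = "class PoolManager:\n    pass\n"

def load_copy(name, path):
    """Import the file at ``path`` under ``name`` as an independent module."""
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "mini_urllib3.py")
    with open(path, "w") as f:
        f.write(SRC)
    toplevel = load_copy("urllib3_toplevel", path)  # stands in for "import urllib3"
    vendored = load_copy("urllib3_vendored", path)  # stands in for requests' copy
    pool = vendored.PoolManager()
    # Same source, but two distinct class objects:
    print(isinstance(pool, vendored.PoolManager))  # True
    print(isinstance(pool, toplevel.PoolManager))  # False
```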



Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-18 Thread Donald Stufft

 On Sep 18, 2014, at 7:54 AM, Thomas Goirand z...@debian.org wrote:
 
 
 Linux distributions are not the end be all of distribution models and
 they don’t get to dictate to upstream.
 
 Well, distributions is where the final user is, and where software gets
 consumed. Our priority should be the end users.


Distributions are not the only place that people get their software from,
unless you think that the ~3 million downloads requests has received
on PyPI in the last 30 days are distributions downloading requests to
package in their OSs.



Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-18 Thread Donald Stufft

 On Sep 18, 2014, at 9:00 AM, Chmouel Boudjnah chmo...@enovance.com wrote:
 
 
 On Thu, Sep 18, 2014 at 1:58 PM, Donald Stufft don...@stufft.io 
 mailto:don...@stufft.io wrote:
 Distributions are not the only place that people get their software from,
 unless you think that the ~3 million downloads requests has received
 on PyPI in the last 30 days are distributions downloading requests to
 package in their OSs.
 
 
 I think Thomas was speaking in the context of how OpenStack is used by the 
 end user and that probably the point of debate here, requests ships libraries 
 inside to make it easy for users that doen't use Linux distro packages, when 
 in OpenStack (or at least in prod) packagers are something we generally very 
 much care about.

Even then, my statement holds true, just with different numbers.

Every distribution modifies upstream in different ways, I think it's insane to
do contortions which will break things for people *not* getting things through
those channels. If distributions are going to modify one upstream project they
should expect to need to modify things that depend on that project in ways that
are sensitive to what they've modified.

The only real sane thing IMO is for openstack to consider requests as it is on
PyPI. If openstack wants to make it easier for downstream to de-vendor urllib3
from requests then when openstack wants to import from requests.packages.* it
can instead do:

try:
    from requests.packages import urllib3
except ImportError:
    import urllib3

This will cause it to work correctly when requests is installed in a pristine
state, and will fallback to handling the modifications that some downstream
redistributors make.



Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-18 Thread Donald Stufft

 On Sep 18, 2014, at 10:18 AM, Clint Byrum cl...@fewbar.com wrote:
 
 Excerpts from Donald Stufft's message of 2014-09-18 04:58:06 -0700:
 
 On Sep 18, 2014, at 7:54 AM, Thomas Goirand z...@debian.org wrote:
 
 
 Linux distributions are not the end be all of distribution models and
 they don’t get to dictate to upstream.
 
 Well, distributions is where the final user is, and where software gets
 consumed. Our priority should be the end users.
 
 
 Distributions are not the only place that people get their software from,
 unless you think that the ~3 million downloads requests has received
 on PyPI in the last 30 days are distributions downloading requests to
 package in their OSs.
 
 
 Do pypi users not also need to be able to detect and fix any versions
 of libraries they might have? If one has some virtualenvs with various
 libraries and apps installed and no --system-site-packages, one would
 probably still want to run 'pip freeze' in all of them and find out what
 libraries are there and need to be fixed.
 
 Anyway, generally security updates require a comprehensive strategy.
 One common comprehensive strategy is version assertion.
 
 Vendoring complicates that immensely.

It doesn’t really matter. PyPI doesn’t dictate to projects who host there what
that project is allowed to do except in some very broad circumstances. Whether
or not requests *should* do this doesn't really have any bearing on what
Openstack should do to cope with it. The facts are that requests does it, and
that people pulling things from PyPI is an actual platform that needs thought
about.

This leaves Openstack with a few reasonable/sane options:

1) Decide that vendoring in requests is unacceptable to what Openstack as a
   project is willing to support, and cease the use of requests.
2) Decide that what requests offers is good enough that it outweighs the fact
   that it vendors urllib3 and continue using it.

If the 2nd option is chosen, then doing anything but supporting the fact that
requests vendors urllib3 within the code that openstack writes is hurting the
users who fetch these projects from PyPI because you don't agree with one of
the choices that requests makes. By all means do conditional imports to lessen
the impact that the choice requests has made (and the one that Openstack has
made to use requests) on downstream distributors, but unconditionally importing
from the top level urllib3 for use within requests is flat out wrong.

Obviously neither of these options excludes the choice to lean on requests to
reverse this decision as well. However that is best done elsewhere as the
person making that decision isn't a member of these mailing lists as far as
I am aware.



Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-18 Thread Donald Stufft

 On Sep 18, 2014, at 12:29 PM, Clint Byrum cl...@fewbar.com wrote:
 
 Excerpts from Donald Stufft's message of 2014-09-18 07:30:27 -0700:
 
 On Sep 18, 2014, at 10:18 AM, Clint Byrum cl...@fewbar.com wrote:
 
 Excerpts from Donald Stufft's message of 2014-09-18 04:58:06 -0700:
 
 On Sep 18, 2014, at 7:54 AM, Thomas Goirand z...@debian.org wrote:
 
 
 Linux distributions are not the end be all of distribution models and
 they don’t get to dictate to upstream.
 
 Well, distributions is where the final user is, and where software gets
 consumed. Our priority should be the end users.
 
 
 Distributions are not the only place that people get their software from,
 unless you think that the ~3 million downloads requests has received
 on PyPI in the last 30 days are distributions downloading requests to
 package in their OSs.
 
 
 Do pypi users not also need to be able to detect and fix any versions
 of libraries they might have? If one has some virtualenvs with various
 libraries and apps installed and no --system-site-packages, one would
 probably still want to run 'pip freeze' in all of them and find out what
 libraries are there and need to be fixed.
 
 Anyway, generally security updates require a comprehensive strategy.
 One common comprehensive strategy is version assertion.
 
 Vendoring complicates that immensely.
 
 It doesn’t really matter. PyPI doesn’t dictate to projects who host there 
 what
 that project is allowed to do except in some very broad circumstances. 
 Whether
 or not requests *should* do this doesn't really have any bearing on what
 Openstack should do to cope with it. The facts are that requests does it, and
 that people pulling things from PyPI is an actual platform that needs thought
 about.
 
 This leaves Openstack with a few reasonable/sane options:
 
 1) Decide that vendoring in requests is unacceptable to what Openstack as a
   project is willing to support, and cease the use of requests.
 2) Decide that what requests offers is good enough that it outweighs the fact
   that it vendors urllib3 and continue using it.
 
 
 There's also 3) fork requests, which is the democratic way to vote out
 an upstream that isn't supporting the needs of the masses.
 
 I don't think we're anywhere near there, but I wanted to make it clear
 there _is_ a more extreme option.

Technically that’s just a specific case of option 1) ;)

But yes that’s a thing Openstack can do.

 
 If the 2nd option is chosen, then doing anything but supporting the fact that
 requests vendors urllib3 within the code that openstack writes is hurting the
 users who fetch these projects from PyPI because you don't agree with one of
 the choices that requests makes. By all means do conditional imports to 
 lessen
 the impact that the choice requests has made (and the one that Openstack has
 made to use requests) on downstream distributors, but unconditionally 
 importing
 from the top level urllib3 for use within requests is flat out wrong.
 
 Obviously neither of these options excludes the choice to lean on requests to
 reverse this decision as well. However that is best done elsewhere as the
 person making that decision isn't a member of these mailing lists as far as
 I am aware.
 
 
 To be clear, I think we should keep using requests. But we should lend
 our influence upstream and explain that our users are required to deal
 with this in a way that perhaps hasn't been considered or given the
 appropriate priority.

I think that’s completely reasonable. I don’t think there’s going to be much 
movement,
I’ve had this argument with Kenneth on more than one occasion and he’s very 
happy
with his decision to vendor urllib3, but hey maybe Openstack would have better 
luck.
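The conditional-import approach suggested above can be sketched as follows. This is a hedged example, not project code; it prefers the copy of urllib3 that requests actually uses and degrades gracefully when that copy has been un-vendored (as Debian does) or neither package is installed:

```python
# Prefer the urllib3 that requests actually uses (its vendored copy),
# falling back to the standalone package on distributions that strip
# the vendored copy out of requests.
try:
    from requests.packages.urllib3 import poolmanager
except ImportError:
    try:
        from urllib3 import poolmanager
    except ImportError:
        poolmanager = None  # neither copy available in this environment
```

Code written this way keeps working from PyPI installs (vendored copy) and from distro packages (un-vendored copy) alike.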

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-17 Thread Donald Stufft
I don't know the specific situation but it's appropriate to do this if you're 
using requests and wish to interact with the urllib3 that requests is using.

 On Sep 17, 2014, at 11:15 AM, Thomas Goirand z...@debian.org wrote:
 
 Hi,
 
 I'm horrified by what I just found. I have just found out this in
 glanceclient:
 
   File "bla/tests/test_ssl.py", line 19, in <module>
     from requests.packages.urllib3 import poolmanager
  ImportError: No module named packages.urllib3
 
 Please *DO NOT* do this. Instead, please use urllib3 from ... urllib3.
 Not from requests. The fact that requests is embedding its own version
  of urllib3 is a heresy. In Debian, the embedded version of urllib3 is
 removed from requests.
 
 In Debian, we spend a lot of time to un-vendorize stuff, because
 that's a security nightmare. I don't want to have to patch all of
 OpenStack to do it there as well.
 
 And no, there's no good excuse here...
 
 Thomas Goirand (zigo)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-17 Thread Donald Stufft
Looking at the code on my phone it looks completely correct to use the vendored 
copy here and it wouldn't actually work otherwise. 

 On Sep 17, 2014, at 11:17 AM, Donald Stufft don...@stufft.io wrote:
 
 I don't know the specific situation but it's appropriate to do this if you're 
 using requests and wish to interact with the urllib3 that requests is using.
 
 On Sep 17, 2014, at 11:15 AM, Thomas Goirand z...@debian.org wrote:
 
 Hi,
 
 I'm horrified by what I just found. I have just found out this in
 glanceclient:
 
  File "bla/tests/test_ssl.py", line 19, in <module>
    from requests.packages.urllib3 import poolmanager
  ImportError: No module named packages.urllib3
 
 Please *DO NOT* do this. Instead, please use urllib3 from ... urllib3.
 Not from requests. The fact that requests is embedding its own version
  of urllib3 is a heresy. In Debian, the embedded version of urllib3 is
 removed from requests.
 
 In Debian, we spend a lot of time to un-vendorize stuff, because
 that's a security nightmare. I don't want to have to patch all of
 OpenStack to do it there as well.
 
 And no, there's no good excuse here...
 
 Thomas Goirand (zigo)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-17 Thread Donald Stufft

 On Sep 17, 2014, at 10:24 PM, Thomas Goirand z...@debian.org wrote:
 
 On 09/18/2014 08:22 AM, Morgan Fainberg wrote:
 I think that all of the conversation to this point has been valuable,
 the general consensus is vendoring a library is not as desirable as
 using it strictly as a dependency. It would be nice in a perfect
  world if vendoring wasn’t an issue, but in this case I think the
 root of the matter is that Debian un-vendors urllib3 and we have
 referenced the vendored urllib3 instead of installing and utilizing
 urllib3 directly.
 
 This poses at least one problem for us: we are not able to guarantee
 we’re using the same urllib3 library as requests is. I am unsure how
 big of a deal this ends up being, but it is a concern and has brought
 up a question of how to handle this in the most appropriate and
 consistent way across all of the distributions we as OpenStack support. 
 
 Does this make requests a bad library we should toss aside for
 something else? Instead of being concerned with the reasons for
 vendoring urllib3 (or un-vendoring it) we should shift the conversation
 towards two questions:
 
 1. Is it a real issue if the version of urllib3 is mismatched between
 our client libraries and requests? 
 2. If it is a real issue how are we solving it?
 
 The main issue is that urllib3 in requests, as other pointed out, is not
 up-to-date, and will not be updated. In fact, that's the main reason why
 the upstream authors of requests are vendorizing: it's because they
 don't want to carry the burden of staying up-to-date.

I don’t think this is remotely true; often times requests updates itself
to versions of urllib3 which aren’t even released yet. Sometimes urllib3
might make commits and do a release that happens between requests
versions, though. Technically that means they might not be up to date
until their next release.

 
 And then, there's incompatibilities and divergences that appear, leading
 to all sorts of unexpected issues, like one thing working with pip, but
 not with the packages. These kinds of issues are very hard to understand
 and debug. Distributions may report the issue upstream, then upstream
 will say but it's working for me, and then we may lose a lot of time.
 This happened already, and may happen again if we don't care enough.

I think this is bound to happen anytime you have downstream modifying
things. It happens in pip (pip vendors things too) and yea it’s quite annoying
but part of PEP 440 is providing ways for downstream to signal they’ve
modified things so that instead of “foo 1.0” you have “foo 1.0+ubuntu1” or
whatever.
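The PEP 440 mechanism referred to here is the local version identifier: a “+label” suffix appended to the public version. A minimal stdlib sketch of pulling one apart:

```python
def split_local(version):
    """Split a PEP 440 version into (public, local) parts; the local
    part is the "+ubuntu1"-style label a downstream adds to signal it
    has modified the package."""
    public, _, local = version.partition("+")
    return public, local or None
```

So “foo 1.0+ubuntu1” still identifies itself as a build of foo 1.0 while remaining distinguishable from the upstream release.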

 
 Obviously we can work with the requests team to figure out the best
 approach.
 
 There's only a single approach that works: have the requests upstream
 authors to stop embedding foreign code, and use the dependency instead.

There are legitimate reasons for a project to vendor things. Linux distributions
are not the be-all and end-all of distribution models, and they don’t get to
dictate to upstream.

Generally I agree that requests should not vendor urllib3, but it’s not a clear
cut thing where there is one right way to do it.

 
 We should focus on the solution here rather than continuing
 down the path of whether requests should/shouldn’t be vendoring it’s
 dependencies since it is clear that the team has their reasons and
 does not want to switch to the dependency model again.
 
 I'm sure they have tons of wrong reasons. If they don't want to change
 anything, then we can only try to work around the issue, and never use
 the embedded version.

Generally you either work with the embedded versions or you don’t
use requests. You’re going to get very strange incompatibility problems
if you try to mix requests.packages.urllib3 and urllib3 in one codebase,
and if you’re using requests at all it’s going to be expecting to use
the embedded copy of urllib3.
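The incompatibility being described is concrete: a class imported from `requests.packages.urllib3` and the “same” class imported from the standalone `urllib3` are distinct objects, so `except`/`isinstance` checks against one silently miss the other. A stdlib-only illustration (the `HTTPError` classes here are stand-ins, not the real urllib3 ones):

```python
# Two stand-in classes with the same name, mimicking a vendored copy
# and a standalone copy of the same library.
VendoredHTTPError = type("HTTPError", (Exception,), {})
StandaloneHTTPError = type("HTTPError", (Exception,), {})

caught_by_other_copy = False
try:
    raise VendoredHTTPError("boom")
except StandaloneHTTPError:  # never triggers: different class object
    caught_by_other_copy = True
except VendoredHTTPError:
    pass
```

An `except` clause written against the standalone copy will let the vendored copy's exception propagate uncaught, and vice versa.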

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] pbr alpha and dev version handling

2014-09-17 Thread Donald Stufft

 On Sep 17, 2014, at 10:42 PM, Monty Taylor mord...@inaugust.com wrote:
 
 I'm more in favor of option 2. Semver doesn't really support _ANY_ of
 the PEP440 things we're doing - and I'm fine with that personally. The
 dev version of an alpha _is_ supported by PEP440.
 
 If we considered the logic to be:
 
 The most recent tag is a pre-release tag, we no longer need to GENERATE
 pre-release versions, but instead start emitting post-release versions
 of the human-generated pre-release - then I think we're good. I also
 think that since they are post-release versions of the most recent
 human-generated pre-release, the deb/rpm translation logic is likely
 largely untouched.
 
 1.3.0.0a3.dev10 would become 1.3.0.0~a3.dev10 in debian world - which
 would sort as desired.

Unrelated, but for the record, PEP 440 allows 1.3.0.a3.dev10 now, you don’t
need the extra zero anymore.
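Both spellings mentioned here are valid PEP 440 inputs. A loose regex sketch (deliberately much simpler than the full PEP 440 grammar) shows the shapes involved:

```python
import re

# Release segment, optional alpha/beta/rc pre-release (with or without
# a separating dot), optional .devN suffix. Not the full PEP 440 grammar.
VERSION = re.compile(
    r"^(?P<release>\d+(?:\.\d+)*)"
    r"(?:\.?(?P<pre>(?:a|b|rc)\d+))?"
    r"(?:\.dev(?P<dev>\d+))?$"
)
```

Under PEP 440 normalization the dotted pre-release spelling collapses to the undotted one, so the padding zero segment is no longer needed to attach an alpha.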

 
 I do think that we should a) release oslo more frequently than the cycle
 and b) ditch pre-versioning altogether, but I do not think that is
 feasible right now. It is a conversation to have in Paris though.
 
 What do you all think?
 
 Doug
 
 
 [1] https://bugs.launchpad.net/pbr/+bug/1370608 
 https://bugs.launchpad.net/pbr/+bug/1370608 
 ___ OpenStack-dev mailing
 list OpenStack-dev@lists.openstack.org 
 mailto:OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [glance] python namespaces considered harmful to development, lets not introduce more of them

2014-08-27 Thread Donald Stufft

 On Aug 27, 2014, at 11:45 AM, Doug Hellmann d...@doughellmann.com wrote:
 
 
 On Aug 27, 2014, at 10:31 AM, Sean Dague s...@dague.net 
 mailto:s...@dague.net wrote:
 
 So this change came in with adding glance.store -
 https://review.openstack.org/#/c/115265/5/lib/glance, which I think is a
 bad direction to be headed.
 
 Here is the problem when it comes to working with code from git, in
 python, that uses namespaces, it's kind of a hack that violates the
 principle of least surprise.
 
 For instance:
 
  cd /opt/stack/oslo.vmware
  pip install .
  cd /opt/stack/oslo.config
  pip install -e .
  python -m oslo.vmware
  /usr/bin/python: No module named oslo.vmware
 
 In python 2.7 (using pip) namespaces are a bolt on because of the way
 importing modules works. And depending on how you install things in a
 namespace will overwrite the base __init__.py for the top level part of
 the namespace in such a way that you can't get access to the submodules.
 
 It's well known, and every conversation with dstuft that I've had in the
  past was “don't use namespaces”.
 
 I’ve been using namespace packages on and off for 10+ years, and OpenStack is 
 the first project where I’ve encountered any issues. That doesn’t necessarily 
 mean we shouldn’t change, but it’s also not fair to paint them as completely 
 broken. Many projects continue to use them successfully.

Just for the record, there are at least 3 different ways of installing a 
package using pip (under the cover ways), and there are two different ways for 
pip to tell setuptools to handle the namespace packages. Unfortunately both 
ways of namespace package handling only work on 2/3 of the ways to install 
things. Unfortunately there's not much that can be done about this; it’s a 
fundamental flaw in the way setuptools namespace packages work.

The changes in Python 3 to enable real namespace packages should work without 
those problems, but you know, Python 3 only.
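The Python 3 mechanism referred to here is PEP 420: when no distribution ships an `__init__.py` for the shared top-level directory, the interpreter stitches the namespace together from every `sys.path` entry automatically. A self-contained demo (the package names are made up):

```python
import importlib
import os
import sys
import tempfile

# Two "distributions", each contributing one subpackage to the shared
# demo_ns namespace. Note: no __init__.py at the demo_ns level, which
# is what makes it a PEP 420 namespace package.
root = tempfile.mkdtemp()
for dist, sub in [("dist_a", "alpha"), ("dist_b", "beta")]:
    pkg = os.path.join(root, dist, "demo_ns", sub)
    os.makedirs(pkg)
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write("NAME = %r\n" % sub)
    sys.path.append(os.path.join(root, dist))

# Both subpackages import from the one logical demo_ns namespace even
# though they live under different sys.path entries.
alpha = importlib.import_module("demo_ns.alpha")
beta = importlib.import_module("demo_ns.beta")
```

No setuptools `declare_namespace` machinery is involved, which is why the Python 3 approach avoids the install-method problems described above.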

Generally it’s my opinion that ``import foo_bar`` isn’t particularly any better 
or worse than ``import foo.bar``. The only real benefit is being able to 
iterate over ``foo.*``, however I’d just recommend using entry points instead 
of trying to do magic based on the name.
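The entry-points suggestion can be sketched with `importlib.metadata` (in the setuptools era the equivalent was `pkg_resources.iter_entry_points`); the `group` handling below papers over the API change between Python 3.9 and 3.10:

```python
from importlib.metadata import entry_points

def load_plugins(group):
    """Discover plugins by entry-point group instead of importing and
    iterating over a foo.* namespace."""
    try:
        eps = entry_points(group=group)      # Python 3.10+ selection API
    except TypeError:
        eps = entry_points().get(group, [])  # Python 3.8/3.9 dict API
    return sorted({ep.name for ep in eps})

# Example: list whatever console scripts are registered here.
scripts = load_plugins("console_scripts")
```

Plugins register themselves in their own package metadata, so discovery doesn't depend on every provider sharing one importable namespace.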

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] pypi-mirror is now unsupported - what do we do now?

2014-07-09 Thread Donald Stufft
  http://pypi.openstack.org/ and then fall back to
  http://pypi.python.org/. I think the only real solution is what
  Angus mentioned: remove yourself from projects.txt at least until
  all your dependencies can be provided by pypi.openstack.org or
  another solution is put into place. In the mean time you can at
  least progress and continue development.
 the mean time you can at least progress and continue development.
 
  If your code requires a direct dependency (rather than an optional
  dependency) on some non-integrated project, then you're stuck until
  they are.
 
 
 
 
 == Call To Action ==
 
 What do you think of this approach to satisfy a balance of
 interests? Everything remains the same for OpenStack projects, and
 Stackforge projects get a new feature that allows them to require
 software that has not yet been integrated. Are there even better
 options that we should consider?
 
 Thanks,
 
 Adrian Otto
 
 
 References:
 [1] https://review.openstack.org/openstack/requirements
 
 For what it is worth the Infra team has also been looking at
 potentially using something like bandersnatch to mirror all of pypi
 which is now a possibility because OpenStack doesn't depend on
 packages that are hosted external to pypi. We would then do
 requirements enforcement via checks rather than explicit use of a
 restricted mirror. There are some things to sort out like platform
 dependent wheels (I am not sure that any OpenStack project directly
 consumes these but I have found them to be quite handy) and the
 potential need for more enforcement to keep this working, but I think
 this is a possibility.
 
 This would be neat.
 
 -Angus
 
 
 Clark
 
 [2]
 
 
 https://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/slave_scripts/select-mirror.sh#n54
 
  --
  Sean Dague
  http://dague.net


-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] pypi-mirror is now unsupported - what do we do now?

2014-07-09 Thread Donald Stufft

On Jul 9, 2014, at 7:07 PM, Richard Jones r1chardj0...@gmail.com wrote:

 On 10 July 2014 02:19, Ben Nemec openst...@nemebean.com wrote:
 On 07/08/2014 11:05 PM, Joe Gordon wrote:
  On Tue, Jul 8, 2014 at 8:54 PM, James Polley j...@jamezpolley.com wrote:
 
  It may not have been clear from the below email, but clarkb clarifies on
  https://bugs.launchpad.net/openstack-ci/+bug/1294381 that the infra team
  is no longer maintaining pypi-mirror
 
  This has been a very useful tool for tripleo. It's much simpler for new
  developers to set up and use than a full bandersnatch mirror (and requires
  less disk space), and it can create a local cache of wheels which saves
  build time.
 
  But it's now unsupported.
 
  To me it seems like we have two options:
 
  A) Deprecate usage of pypi-mirror; update docs to instruct new devs in
  setting up a local bandersnatch mirror instead
  or
  B) Take on care-and-feeding of the tool.
  or, I guess,
  C) Continue to recommend people use an unsupported unmaintained
  known-buggy tool (it works reasonably well for us today, but it's going to
  work less and less well as time goes by)
 
  Are there other options I haven't thought of?
 
 
  I don't know if this fits your requirements but I use
  http://doc.devpi.net/latest/quickstart-pypimirror.html for my development
  needs.
 
 Will that also cache wheels?  In my experience, wheels are one of the
 big time savers in tripleo so I would consider it an important feature
 to maintain, however we decide to proceed.
 
 Yes, devpi caches wheels.
 
  I would suggest that if the pip cache approach isn't appropriate then devpi 
  is probably a good solution (though I don't know your full requirements).
 
 The big difference between using devpi and pip caching would be that devpi 
 will allow you to install packages when you're offline.
 
 
Richard
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

It doesn’t generate wheels, though; it’ll only cache them if they exist on
PyPI already.

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Adopting pylockfile

2014-06-23 Thread Donald Stufft

On Jun 23, 2014, at 11:30 AM, Monty Taylor mord...@inaugust.com wrote:

 On 06/23/2014 11:24 AM, Ben Nemec wrote:
 On 06/23/2014 10:02 AM, Doug Hellmann wrote:
 On Mon, Jun 23, 2014 at 10:38 AM, Ben Nemec openst...@nemebean.com wrote:
 On 06/23/2014 08:41 AM, Julien Danjou wrote:
 Hi there,
 
 We discovered a problem in pylockfile recently, and after
 discussing with its current maintainer, it appears that more help
 and workforce would be require:
 
 https://github.com/smontanaro/pylockfile/issues/11#issuecomment-45634012
 
 Since we are using it via oslo lockutils module, I proposed to
 adopt this project under the Oslo program banner. The review to
 copy the repository to our infrastructure is up at:
 
 https://review.openstack.org/#/c/101911/
 
 We actually don't use this in lockutils - we use our own
 implementation of LockFile because there was some sort of outstanding
 bug in pylockfile that made it not work for us.  The only place I can
 see that we do use that project is in the oslo.db code because we
 didn't want to depend on incubator modules there, but once
 oslo.concurrency graduates we can switch to using our own locking
 implementation again.
 
 Basically I think this would be duplicating what we're already doing
 in lockutils, so I'm -1 on it.  I'd rather focus on getting
 oslo.concurrency graduated and remove pylockfile from
 global-requirements to make sure no one is using it anymore.
 
 Which makes more sense, continuing to maintain our library or fixing
 that bug and maintaining pylockfile? How big is pylockfile compared to
 what we have? Does it solve problems our existing locking code doesn't
 solve (and that we care about)?
 
 It looks to me like pylockfile would provide a subset of the
 functionality in lockutils (for example, I don't see anything to replace
 the @lock decorator).  So I don't think we could just drop lockutils and
 switch to it.  We'd just be switching out the underlying lock mechanism,
 and we know how well that has gone in the past. ;-)
 
 But if we had originally thought to use pylockfile except for the bug,
 and if oslo.lockutils brings in oslo.config making it not suitable for
 general usage - it seems like it would be a good thing for the wider
 community if we 'fix' pylockfile and make oslo.lockutils the
 oslo-ification of it?
 
 I mean, ultimately like everything else in OpenStack we don't REALLY
 want to just have our own set of parallel libs to what the rest of
 python uses, do we?

+100

 
 
 
 This also makes me wonder if oslo.concurrency should not be an oslo.*
 library since this functionality is more generally applicable outside
 OpenStack.  Something to discuss anyway.
 
 That makes sense. When I made the list of libraries to release this
 time, I called them all oslo.foo because I wasn't digging into the
 code deep enough to figure out whether they could be something else. I
 expected the person managing the spec for the release to do that step,
 but I may not have made that clear.
 
 The main thing I would be concerned with about using a non-oslo name
 for oslo.concurrency is whether or not it uses another oslo library
 like oslo.config. If we can completely avoid those dependencies, then
 it should be safe to release it under a name other than
 oslo.concurrency.
 
 Oh, that's probably why I didn't suggest this in the first place.
 lockutils uses oslo.config, so it does need to be in the oslo namespace.
 
 I don't think we can drop the oslo.config dep, but we might be able to
 decouple it like oslo.db did.  I think that would be messy though
 because Windows is where problems would come up and we don't test
 Windows in the gate. :-/
 
 
 Doug
 
 
 
 Cheers,
 
 
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
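For reference, the `@lock` decorator functionality discussed above (which pylockfile lacks) can be sketched in a few lines of stdlib code. This is a toy POSIX `flock`-based version, not the actual oslo lockutils implementation:

```python
import fcntl
import functools
import os
import tempfile

def interprocess_lock(name, lock_dir=None):
    """Toy lockutils-style decorator: serialize callers across
    processes via an advisory flock on a shared lock file."""
    path = os.path.join(lock_dir or tempfile.gettempdir(), name + ".lock")

    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            with open(path, "w") as handle:
                fcntl.flock(handle, fcntl.LOCK_EX)  # released when closed
                return fn(*args, **kwargs)
        return wrapper
    return decorator

@interprocess_lock("demo")
def critical_section():
    return "done"
```

The real lockutils additionally handles Windows (no `fcntl`), configurable lock directories via oslo.config, and semaphore-based in-process locking, which is where the coupling discussed above comes from.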


-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] versioning and releases

2014-06-10 Thread Donald Stufft

On Jun 10, 2014, at 5:19 PM, Mark McLoughlin mar...@redhat.com wrote:

 
 The new CI system can create packages as
 Python wheels and publish them to the appropriate servers, which means
 projects will no longer need to refer explicitly to pre-release
 tarballs.
 
 The details are a bit more nuanced here - pip won't install alpha
 libraries unless you explicitly request them with a command line flag to
 install any alphas available or you explicitly require the alpha
 version.

It doesn’t have to explicitly require the alpha, it just has to include
pre-releases, so stuff like >=1.2a0,<1.3 would include 1.2a0 and higher,
and “less than 1.3”. Unfortunately that’s a bit wonky still because
1.3a0 is technically < 1.3 so it’d match too. At least until we finish
implementing PEP 440.
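The wonkiness described here — a specifier like `>=1.2a0,<1.3` admitting `1.3a0`, because a pre-release sorts below its final release — can be seen with a toy sort key (a drastic simplification of PEP 440 ordering, alphas only):

```python
import re

def toy_key(version):
    """Toy PEP 440-style ordering for versions like '1.2' or '1.3a0':
    an alpha sorts before the final release it leads up to."""
    m = re.match(r"^(\d+(?:\.\d+)*)(?:a(\d+))?$", version)
    release = tuple(int(part) for part in m.group(1).split("."))
    pre = (0, int(m.group(2))) if m.group(2) else (1,)  # alpha < final
    return release + (pre,)
```

A naive `<1.3` upper bound therefore matches `1.3a0` too, which is why PEP 440 specifiers ended up excluding pre-releases by default.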

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] use of the oslo namespace package

2014-04-08 Thread Donald Stufft

On Apr 8, 2014, at 3:28 AM, Mark McLoughlin mar...@redhat.com wrote:

 On Mon, 2014-04-07 at 15:24 -0400, Doug Hellmann wrote:
 We can avoid adding to the problem by putting each new library in its
 own package. We still want the Oslo name attached for libraries that
 are really only meant to be used by OpenStack projects, and so we need
 a naming convention. I'm not entirely happy with the crammed
 together approach for oslotest and oslosphinx. At one point Dims and
 I talked about using a prefix oslo_ instead of just oslo, so we
 would have oslo_db, oslo_i18n, etc. That's also a bit ugly,
 though. Opinions?
 
 Uggh :)
 
 Given the number of problems we have now (I help about 1 dev per week
 unbreak their system),
 
 I've seen you do this - kudos on your patience.
 
 I think we should also consider renaming the
 existing libraries to not use the namespace package. That isn't a
 trivial change, since it will mean updating every consumer as well as
 the packaging done by distros. If we do decide to move them, I will
 need someone to help put together a migration plan. Does anyone want
 to volunteer to work on that?
 
 One thing to note for any migration plan on this - we should use a new
 pip package name for the new version so people with e.g.
 
    oslo.config>=1.2.0
 
 don't automatically get updated to a version which has the code in a
 different place. You should need to change to e.g.
 
   osloconfig>=1.4.0
 
 Before we make any changes, it would be good to know how bad this
 problem still is. Do developers still see issues on clean systems, or
 are all of the problems related to updating devstack boxes? Are people
 figuring out how to fix or work around the situation on their own? Can
 we make devstack more aggressive about deleting oslo libraries before
 re-installing them? Are there other changes we can make that would be
 less invasive?
 
 I don't have any great insight, but hope we can figure something out.
 It's crazy to think that even though namespace packages appear to work
 pretty well initially, it might end up being so unworkable we would need
 to switch.
 
 Mark.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Primarily this is because there are 3ish ways for a package to be installed, and
two methods of namespace packages (under the hood). However there is no
one single way to install a namespace package that works for all 3ish ways
to install a package.

Relevant: https://github.com/pypa/pip/issues/3

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues with Python Requests

2014-04-04 Thread Donald Stufft

On Apr 4, 2014, at 10:41 AM, Chuck Thier cth...@gmail.com wrote:

 Howdy,
 
 Now that swift has aligned with the other projects to use requests in 
 python-swiftclient, we have lost a couple of features.
 
 1.  Requests doesn't support expect: 100-continue.  This is very useful for 
 services like swift or glance where you want to make sure a request can 
 continue before you start uploading GBs of data (for example find out that 
 you need to auth).
 
 2.  Requests doesn't play nicely with eventlet or other async frameworks [1]. 
  I noticed this when suddenly swift-bench (which uses swiftclient) wasn't 
 performing as well as before.  This also means that, for example, if you are 
 using keystone with swift, the auth requests to keystone will block the proxy 
 server until they complete, which is also not desirable.

requests should work fine if you use eventlet to monkey patch the socket 
module prior to importing requests.
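A hedged sketch of that ordering requirement follows. It is guarded so it degrades when eventlet or requests is absent; `eventlet.monkey_patch()` rebinding the socket module is real eventlet behavior, while the helper name is made up for illustration:

```python
import importlib

def import_with_green_sockets(module_name):
    """Apply eventlet's monkey patch *before* importing a socket-using
    library; importing the library first would leave it holding
    references to the original, blocking socket module."""
    try:
        import eventlet
        eventlet.monkey_patch()  # patches socket, ssl, select, ...
    except ImportError:
        pass  # eventlet not installed here; fall through to a plain import
    try:
        return importlib.import_module(module_name)
    except ImportError:
        return None

requests = import_with_green_sockets("requests")
```

The key point is simply the ordering: the patch must run before the first `import requests` anywhere in the process.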

 
 Does anyone know if these issues are being addressed, or begun working on 
 them?
 
 Thanks,
 
 --
 Chuck
 
 [1] 
 http://docs.python-requests.org/en/latest/user/advanced/#blocking-or-non-blocking
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-19 Thread Donald Stufft

On Mar 19, 2014, at 10:18 AM, Kurt Griffiths kurt.griffi...@rackspace.com 
wrote:

 Thierry Carrez wrote:
 
 There was historically a lot of deviation, but as we add more projects
 that deviation is becoming more costly.
 
 I totally understand the benefits of reducing the variance between
 projects, and to be sure, I am not suggesting we have 10 different
 libraries to do X.  However, as more projects are added, the variety of
 requirements also increases, and it becomes very difficult for a single
 library to meet all the projects' needs without some projects having to
 make non-trivial compromises.
 
 One approach to this that I’ve seen work well in other communities is to
 define a small set of options that cover the major use cases.
 
 My question would be, can Pecan be improved to also cover Marconi's use
 case ? Could we have the best of both worlds (an appropriate tool *and*
 convergence) ?
 
 That would certainly be ideal, but as always, the devil is in the details.
 
 Pecan performance has been improving, so on that front there may be an
 opportunity for convergence (assuming webob also improves in performance).
 However, with respect to code paths and dependencies, I am not clear on
 the path forward. Some dependencies could be removed by creating some kind
 of “pecan-light” library, but that would need to be done in a way that
 does not break projects that rely on those extra features. That would
 still leave webob, which is an often-used selling point for Pecan. I am
 not confident that webob can be modified to address Marconi and Swift's
 needs without making backwards-incompatible changes to the library which
 would obviously not be acceptable to the broader Python community.

I’m not sure that “number of dependencies” is a useful metric at all tbh. At the
very least it’s not a very telling metric in the way it was presented in the 
review.
An example: for a tool that has to safely render untrusted HTML, you could do
it with nothing but the stdlib using, say, regex-based parsers (and get it
wrong), or you could depend on bleach, which depends on html5lib. By the
“number of dependencies” metric the first would be considered the superior
method, but that conclusion is deeply flawed.
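A minimal stdlib-only sketch of that failure mode (the filter and attack
string below are hypothetical illustrations, not taken from any real project):

```python
import re

# A naive regex-based "sanitizer" of the kind a stdlib-only approach tends
# to produce (hypothetical; real filter evasions take many more forms).
def strip_scripts(html):
    return re.sub(r"<script>.*?</script>", "", html, flags=re.S)

# Removing the inner match reassembles a brand-new <script> tag.
evil = "<scr<script></script>ipt>alert(1)</script>"
print(strip_scripts(evil))  # -> <script>alert(1)</script>
```

This is exactly the sort of bug a battle-tested parser like html5lib exists
to avoid.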

The reason given in the report is that more dependencies = larger attack
surface, but that’s not really accurate either. Oftentimes you’ll find that
when two libraries solve the same problem, one with dependencies and one
without, the one without dependencies has a less battle-tested
reimplementation of whatever functionality the other pulls in as a
dependency.

In order to accurately assess the impact of dependencies you have to
understand what the library uses those dependencies for, how well tested
those dependencies are, what their release cycles and backwards-compatibility
policies are, and what the other project does in place of a dependency for
the feature(s) that depend on them (the answer may be that it doesn’t have
that feature at all, and then you have to decide whether that feature is
useful to you and whether you’ll need to add a dependency or write less
battle-tested code yourself to get it).

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] universal wheel support

2014-02-08 Thread Donald Stufft

On Feb 8, 2014, at 7:08 PM, Monty Taylor mord...@inaugust.com wrote:

 Hey all!
 
 There are a bunch of patches adding:
 
 [wheel]
 universal = 1
 
 to setup.cfg:
 
 https://review.openstack.org/#/q/status:open+topic:wheel-publish,n,z
 
 I wanted to follow up on what the deal is with them, and what I think we 
 should do about them.
 
 universal means that a wheel can be made that can work with any python. 
 That's awesome, and we want it - it makes the wheel publishing code easier. I 
 don't think we want it turned on for any project that doesn't, in fact, 
 support python3 - because we'd be producing a wheel that says it works in 
 python3.
 
 To be fair - the wheel itself will work just fine in python3 - it's just the 
 software that doesn't - and we upload tarballs right now which don't block 
 attempts to use them in python3.
 
 SO -
 
 my pedantic side says:
 
 Let's only land universal = 1 into python3 supporting projects
 
 upon further reflection, I think my other side says:
 
 It's fine, let's land it everywhere, it doesn't hurt anything, and then we 
 can stop worrying about it
 
 Thoughts?
 
 Monty
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Technically you can upload a Wheel that supports any Python version, but I 
don’t believe it’s exposed in the Wheel software at all.

However, the side effect of publishing a py2-only wheel is that if someone 
tries to install that package using Python 3, instead of a wheel they’ll 
download the sdist and try to install that. There’s a good chance it will 
install fine, just as the wheel would, and the error won’t be discovered 
until they try to run it.

Essentially the wheel tags are supposed to be used to determine which Wheel is 
most likely to be compatible with the environment that is being installed into, 
it is not designed to restrict which environments a project supports. There is 
metadata for that in the new PEPs but nothing supports it yet.
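To make the selection mechanics concrete (the filenames below are
hypothetical, and pip’s real compatibility matching is more involved than
this sketch): `universal = 1` simply changes the python tag baked into the
wheel filename from `py2` to `py2.py3`, and that tag is what an installer
consults when deciding whether it may use the wheel at all.

```python
# Toy illustration of wheel filename tags; hypothetical filenames.
# Wheel filename format: name-version-python_tag-abi_tag-platform_tag.whl
def python_tags(wheel_name):
    py_tag = wheel_name[:-len(".whl")].split("-")[-3]
    return set(py_tag.split("."))

universal = "python_example-1.0-py2.py3-none-any.whl"
py2_only = "python_example-1.0-py2-none-any.whl"

print("py3" in python_tags(universal))  # True:  installer may use the wheel
print("py3" in python_tags(py2_only))   # False: installer falls back to sdist
```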

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA





Re: [openstack-dev] universal wheel support

2014-02-08 Thread Donald Stufft

On Feb 8, 2014, at 7:18 PM, Donald Stufft don...@stufft.io wrote:

 Technically you can upload a Wheel that supports any Python version, but I 
 don’t believe it’s exposed in the Wheel software at all.

This is supposed to be “any Python2 version”.

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA





Re: [openstack-dev] why do we put a license in every file?

2014-02-05 Thread Donald Stufft
It's nice when someone takes a single file out of the project: the license 
information gets transmitted automatically, without any extra work. 

 On Feb 5, 2014, at 10:46 AM, Jay Pipes jaypi...@gmail.com wrote:
 
 On Wed, 2014-02-05 at 16:29 +, Greg Hill wrote:
 I'm new, so I'm sure there's some history I'm missing, but I find it bizarre 
 that we have to put the same license into every single file of source code 
 in our projects.
 
 Meh, probably just habit and copy/paste behavior.
 
  In my past experience, a single LICENSE file at the root-level of the 
 project has been sufficient to declare the license chosen for a project.
 
 Agreed, and the git history is enough to figure out who worked on a
 particular file. But, there's been many discussions about this topic
 over the years, and it's just not been a priority, frankly.
 
 Github even has the capacity to choose a license and generate that file for 
 you, it's neat.
 
 True, but we don't use GitHub :) We only use it as a mirror for
 Gerrit.
 
 Best,
 -jay
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] olso.config error on running Devstack

2014-02-05 Thread Donald Stufft
Avoiding namespace packages is a good idea in general. At least until Python 
3.whatever is baseline. 
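As a sketch of why that baseline matters (the two-distribution layout below
is hypothetical, not the real oslo code): Python 3’s native namespace
packages (PEP 420) let separately installed distributions share a top-level
package with no `__init__.py` or `pkg_resources.declare_namespace()`
boilerplate at all, which is the machinery that misbehaves in the mixed
editable/regular install case described below.

```python
import os
import sys
import tempfile

# Two hypothetical distributions, each shipping one module under a shared
# "oslo" namespace; note there is no __init__.py anywhere (PEP 420).
base = tempfile.mkdtemp()
for dist, module in [("dist_a", "config"), ("dist_b", "sphinx")]:
    pkg_dir = os.path.join(base, dist, "oslo")
    os.makedirs(pkg_dir)
    with open(os.path.join(pkg_dir, module + ".py"), "w") as f:
        f.write("NAME = %r\n" % module)
    sys.path.insert(0, os.path.join(base, dist))

# Both halves of the namespace resolve, despite living in different dirs.
import oslo.config
import oslo.sphinx
print(oslo.config.NAME, oslo.sphinx.NAME)  # -> config sphinx
```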

 On Feb 5, 2014, at 10:58 AM, Doug Hellmann doug.hellm...@dreamhost.com 
 wrote:
 
 
 
 
 On Wed, Feb 5, 2014 at 11:44 AM, Ben Nemec openst...@nemebean.com wrote:
 On 2014-02-05 09:05, Doug Hellmann wrote:
 
 
 On Tue, Feb 4, 2014 at 5:14 PM, Ben Nemec openst...@nemebean.com wrote:
 On 2014-01-08 12:14, Doug Hellmann wrote:
 
 
 
 On Wed, Jan 8, 2014 at 12:37 PM, Ben Nemec openst...@nemebean.com wrote:
 On 2014-01-08 11:16, Sean Dague wrote:
 On 01/08/2014 12:06 PM, Doug Hellmann wrote:
 snip
 Yeah, that's what made me start thinking oslo.sphinx should be called
 something else.
 
 Sean, how strongly do you feel about not installing oslo.sphinx in
 devstack? I see your point, I'm just looking for alternatives to the
 hassle of renaming oslo.sphinx.
 
 Doing the git thing is definitely not the right thing. But I guess I got
 lost somewhere along the way about what the actual problem is. Can
 someone write that up concisely? With all the things that have been
 tried/failed, why certain things fail, etc.
 The problem seems to be when we pip install -e oslo.config on the system, 
 then pip install oslo.sphinx in a venv.  oslo.config is unavailable in 
 the venv, apparently because the namespace package for o.s causes the 
 egg-link for o.c to be ignored.  Pretty much every other combination I've 
 tried (regular pip install of both, or pip install -e of both, regardless 
 of where they are) works fine, but there seem to be other issues with all 
 of the other options we've explored so far.
 
 We can't remove the pip install -e of oslo.config because it has to be 
 used for gating, and we can't pip install -e oslo.sphinx because it's not 
 a runtime dep so it doesn't belong in the gate.  Changing the toplevel 
 package for oslo.sphinx was also mentioned, but has obvious drawbacks too.
 
 I think that about covers what I know so far.
 Here's a link dstufft provided to the pip bug tracking this problem: 
 https://github.com/pypa/pip/issues/3
 Doug
 This just bit me again trying to run unit tests against a fresh Nova tree. 
I don't think it's just me either - Matt Riedemann said he has been 
 disabling site-packages in tox.ini for local tox runs.  We really need to 
 do _something_ about this, even if it's just disabling site-packages by 
 default in tox.ini for the affected projects.  A different option would be 
 nice, but based on our previous discussion I'm not sure we're going to 
 find one.
 Thoughts?
  
 Is the problem isolated to oslo.sphinx? That is, do we end up with any 
 configurations where we have 2 oslo libraries installed in different modes 
 (development and regular) where one of those 2 libraries is not 
 oslo.sphinx? Because if the issue is really just oslo.sphinx, we can rename 
 that to move it out of the namespace package.
 
 oslo.sphinx is the only one that has triggered this for me so far.  I think 
 it's less likely to happen with the others because they tend to be runtime 
 dependencies so they get installed in devstack, whereas oslo.sphinx doesn't 
 because it's a build dep (AIUI anyway).
 
 That's pretty much what I expected.
 
 Can we get a volunteer to work on renaming oslo.sphinx?
 
 Doug
  
  
 Doug
 -Ben
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or straight to oslo.vmware

2014-01-28 Thread Donald Stufft

On Jan 28, 2014, at 5:01 PM, Julien Danjou jul...@danjou.info wrote:

 On Tue, Jan 28 2014, Doug Hellmann wrote:
 
 There are several reviews related to adding VMware interface code to the
 oslo-incubator so it can be shared among projects (start at
 https://review.openstack.org/#/c/65075/7 if you want to look at the code).
 
 I expect this code to be fairly stand-alone, so I wonder if we would be
 better off creating an oslo.vmware library from the beginning, instead of
 bringing it through the incubator.
 
 Thoughts?
 
 This sounds like a good idea, but it doesn't look OpenStack specific, so
 maybe building a non-oslo library would be better.
 
 Let's not zope it! :)

+1 on not making it an oslo library.

 
 -- 
 Julien Danjou
 # Free Software hacker # independent consultant
 # http://julien.danjou.info
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA





Re: [openstack-dev] a common client library

2014-01-18 Thread Donald Stufft

On Jan 18, 2014, at 12:58 AM, Robert Collins robe...@robertcollins.net wrote:

 Out of interest - whats the overhead of running tls compression
 against compressed data? Is it really noticable?

The overhead doesn’t really matter much, as you want TLS compression
disabled anyway because of CRIME. Most Linux distros and such ship with it
disabled by default now, IIRC.
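For client code the mitigation is a one-liner in Python’s stdlib (a sketch;
recent versions of `ssl.create_default_context()` already set this option
for you):

```python
import ssl

# Refuse TLS-level compression so a CRIME-style compressed-plaintext
# oracle can't be built, regardless of what the peer offers.
ctx = ssl.create_default_context()
ctx.options |= ssl.OP_NO_COMPRESSION
```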

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA





Re: [openstack-dev] a common client library

2014-01-18 Thread Donald Stufft

On Jan 18, 2014, at 9:58 AM, Jesse Noller jesse.nol...@rackspace.com wrote:

 
 On Jan 18, 2014, at 12:00 AM, Jamie Lennox jamielen...@redhat.com wrote:
 
 I can't see any reason that all of these situations can't be met. 
 
 We can finally take the openstack pypi namespace, move keystoneclient - 
 openstack.keystone and similar for the other projects. Have them all based 
 upon openstack.base and probably an openstack.transport for transport.
 
 For the all-in-one users we can then just have openstack.client which 
 depends on all of the openstack.x projects. This would satisfy the 
 requirement of keeping projects seperate, but having the one entry point for 
 newer users. Similar to the OSC project (which could acutally rely on the 
 new all-in-one).
 
 This would also satisfy a lot of the clients who have i know are looking to 
 move to a version 2 and break compatability with some of the crap from the 
 early days.
 
 I think what is most important here is deciding what we want from our 
 clients and discussing a common base that we are happy to support - not just 
 renaming the existing ones.
 
 (I don't buy the problem with large amounts of dependencies, if you have a 
 meta-package you just have one line in requirements and pip will figure the 
 rest out.)
 
 You’re assuming:
 
 1: Pip works when installing the entire dependency graph (it often doesn’t)
 2: For some of these requirements, the user has a compiler installed (they 
 don’t)
 3: Installing 1 “meta package” that install N+K dependencies makes end user 
 consumers happy (it doesn’t)
 4: All of these dependencies make shipping a single binary deployment easier 
 (it doesn’t)
 5: Installing and using all of these things makes using openstack within my 
 code conceptually simpler (it doesn’t)
 
 We can start with *not* renaming the sub clients (meaning) collapsing them 
 into the singular namespace; but the problem is that every one of those sub 
 dependencies is potential liability to someone using this single client. 
 
 If yes, we could only target fedora, and rely on yum  rpm, I’d agree with 
 you - but for python application dependencies across multiple OSes and 
 developers doing ci/cd using these systems I can’t. I also don’t want user to 
 stumble into the nuanced vagaries of the sub-clients when writing application 
 code; writing glue code to bind them all together does work very well (we 
 know this from experience).
 

As much as I would like to say (with my pip developer and PyPI admin hat on) 
that depending on 22+ libraries in a single client will be a seamless 
experience for end users, I can’t in good faith say that it would be yet. 
We’re working on trying to make that true, but honestly each dependency in a 
graph does introduce risk.

As of right now there is no real dependency solver in pip, so if someone 
depends on the openstack client themselves, and also on something else that 
depends on one of the sub-clients, and those version specs don’t match up, 
there is a very good chance that the end user will run into a very confusing 
message at runtime. OpenStack itself has run into this problem, and it was a 
big motivator for the global requirements project.
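A toy model of that first-come-first-served behaviour (all package names and
version pins here are hypothetical):

```python
# pip's historical behaviour sketched: the first spec seen for a name wins
# outright; a later, contradictory spec is never reconciled, it just
# surfaces as a confusing error at install or run time.
requirements = [
    ("exampleclient", "==2.0"),  # pulled in first, via the meta-client
    ("exampleclient", ">=3.0"),  # required later by another dependency
]

installed = {}
conflicts = []
for name, spec in requirements:
    if name not in installed:
        installed[name] = spec
    elif installed[name] != spec:
        conflicts.append((name, installed[name], spec))

print(installed)  # {'exampleclient': '==2.0'}
print(conflicts)  # [('exampleclient', '==2.0', '>=3.0')]
```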

Additionally, it’s not uncommon for users to have policy-driven requirements 
that oblige them to get every dependency they pull in checked for compliance 
(license, security, etc.). Having a 22+ node dependency graph makes that much 
harder in general.

I also believe that in general it’s asking for user confusion. It’s much 
simpler to document a single way of doing things; splitting the clients up 
and then wrapping them with a single “openstack” client means that you have 
at least two ways of doing something: the direct “use just a single library” 
approach and the “use the openstack wrapper” approach. Don’t underestimate 
the confusion this will cause end users.

Keeping them all under one project will make it far easier to have a 
cohesive API among all the various services, will reduce duplication of 
effort, and will make it easier to track security updates; I believe it also 
makes for a wholly superior end user experience.

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA





Re: [openstack-dev] a common client library

2014-01-16 Thread Donald Stufft

On Jan 16, 2014, at 2:36 PM, Joe Gordon joe.gord...@gmail.com wrote:

 2) major overhaul of client libraries so they are all based off a common base 
 library. This would cover namespace changes, and possible a push to move CLI 
 into python-openstackclient


This seems like the biggest win to me. 

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA





Re: [openstack-dev] a common client library

2014-01-16 Thread Donald Stufft

On Jan 16, 2014, at 4:06 PM, Jesse Noller jesse.nol...@rackspace.com wrote:

 
 On Jan 16, 2014, at 2:22 PM, Renat Akhmerov rakhme...@mirantis.com wrote:
 
 Since it’s pretty easy to get lost among all the opinions I’d like to 
 clarify/ask a couple of things:
 
 Keeping all the clients physically separate/combining them in to a single 
 library. Two things here:
 In case of combining them, what exact project are we considering? If this 
 list is limited to core projects like nova and keystone what policy could we 
 have for other projects to join this list? (Incubation, graduation, 
 something else?)
 In terms of granularity and easiness of development I’m for keeping them 
 separate but have them use the same boilerplate code, basically we need a 
 OpenStack Rest Client Framework which is flexible enough to address all the 
 needs in an abstract domain agnostic manner. I would assume that combining 
 them would be an additional organizational burden that every stakeholder 
 would have to deal with.
 
 Keeping them separate is awesome for *us* but really, really, really sucks 
 for users trying to use the system. 

I agree. Keeping them separate trades user usability for developer 
usability; I think user usability is the better thing to strive for.


 
 Has anyone ever considered an idea of generating a fully functional REST 
 client automatically based on an API specification (WADL could be used for 
 that)? Not sure how convenient it would be, it really depends on a 
 particular implementation, but as an idea it could be at least thought of. 
 Sounds a little bit crazy though, I recognize it :).
 
 Renat Akhmerov
 
 On 16 Jan 2014, at 11:52, Chmouel Boudjnah chmo...@enovance.com wrote:
 
 On Thu, Jan 16, 2014 at 8:40 PM, Donald Stufft don...@stufft.io wrote:
 
 On Jan 16, 2014, at 2:36 PM, Joe Gordon joe.gord...@gmail.com wrote:
 
 2) major overhaul of client libraries so they are all based off a common 
 base library. This would cover namespace changes, and possible a push to 
 move CLI into python-openstackclient
 This seems like the biggest win to me. 
 
 
 +1 
 
 Chmouel. 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA





Re: [openstack-dev] a common client library

2014-01-16 Thread Donald Stufft

On Jan 16, 2014, at 3:22 PM, Renat Akhmerov rakhme...@mirantis.com wrote:

 Has anyone ever considered an idea of generating a fully functional REST 
 client automatically based on an API specification (WADL could be used for 
 that)? Not sure how convenient it would be, it really depends on a particular 
 implementation, but as an idea it could be at least thought of. Sounds a 
 little bit crazy though, I recognize it :).


Also, please no. If you want an “automatic client”, stop being “REST” (a 
term which has been diluted to mean nothing, or at least nothing like its 
actual definition) and be actual REST and/or hypermedia. Things like WADL 
are completely antagonistic to actual REST (aka hypermedia).

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA





Re: [openstack-dev] a common client library

2014-01-16 Thread Donald Stufft

On Jan 16, 2014, at 8:42 PM, Jesse Noller jesse.nol...@rackspace.com wrote:

 
 
 On Jan 16, 2014, at 4:59 PM, Renat Akhmerov rakhme...@mirantis.com wrote:
 
 On 16 Jan 2014, at 13:06, Jesse Noller jesse.nol...@rackspace.com wrote:
 
 Since it’s pretty easy to get lost among all the opinions I’d like to 
 clarify/ask a couple of things:
 
 Keeping all the clients physically separate/combining them in to a single 
 library. Two things here:
 In case of combining them, what exact project are we considering? If this 
 list is limited to core projects like nova and keystone what policy could 
 we have for other projects to join this list? (Incubation, graduation, 
 something else?)
 In terms of granularity and easiness of development I’m for keeping them 
 separate but have them use the same boilerplate code, basically we need a 
 OpenStack Rest Client Framework which is flexible enough to address all 
 the needs in an abstract domain agnostic manner. I would assume that 
 combining them would be an additional organizational burden that every 
 stakeholder would have to deal with.
 
 Keeping them separate is awesome for *us* but really, really, really sucks 
 for users trying to use the system. 
 
 You may be right but not sure that adding another line into requirements.txt 
 is a huge loss of usability.
 
 
 It is when that 1 dependency pulls in 6 others that pull in 10 more - every 
 little barrier or potential failure from the inability to make a static 
 binary to how each tool acts different is a paper cut of frustration to an 
 end user.
 
 Most of the time the clients don't even properly install because of 
 dependencies on setuptools plugins and other things. For developers (as I've 
 said) the story is worse: you have potentially 22+ individual packages and 
 their dependencies to deal with if they want to use a complete openstack 
 install from their code.
 
 So it doesn't boil down to just 1 dependency: it's a long laundry list of 
 things that make consumers' lives more difficult and painful.
 
 This doesn't even touch on the fact there aren't blessed SDKs or tools 
 pointing users to consume openstack in their preferred programming language.
 
 Shipping an API isn't enough - but it can be fixed easily enough.

There’s also the discovery problem: it’s incredibly frustrating if, as I’m 
starting out with an OpenStack-based cloud, every time I want to touch some 
new segment of the service I need to go find out what the client lib for it 
is, possibly download its dependencies, possibly get it approved, etc. 

Splitting up services makes a lot of sense on the server side, but to the 
consumer a cloud oftentimes isn’t a disjoint set of services that happen to 
be working in parallel; it is a single unified product where they may not 
know the boundary lines, or at the very least the boundaries can be fuzzy 
for them.

 
 Renat Akhmerov
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


