Re: [openstack-dev] [publiccloud-wg] Serving vendor json from RFC 5785 well-known dir

2018-11-07 Thread Monty Taylor

On 11/5/18 3:21 AM, Mohammed Naser wrote:



Sent from my iPhone


On Nov 5, 2018, at 10:19 AM, Thierry Carrez  wrote:

Monty Taylor wrote:

[...]
What if we added support for serving vendor data files from the root of a 
primary URL as-per RFC 5785. Specifically, support deployers adding a json file 
to .well-known/openstack/client that would contain what we currently store in 
the openstacksdk repo and were just discussing splitting out.
[...]
What do people think?


I love the idea of public clouds serving that file directly, and the user 
experience you get from it. The only two drawbacks off the top of my head would be:

- it's harder to discover available compliant openstack clouds from the client.

- there is no vetting process, so there may be failures with weird clouds 
serving half-baked files that people may blame the client tooling for.

I still think it's a good idea, as in theory it aligns the incentive of 
maintaining the file with the most interested stakeholder. It just might need 
some extra communication to work seamlessly.


I’m thinking out loud here but perhaps a simple linter that a cloud provider 
can run will help them make sure that everything is functional.


I've got an initial patch up:

WIP Support remote vendor profiles https://review.openstack.org/616228

It works with vexxhost's published vendor file.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [publiccloud-wg] Serving vendor json from RFC 5785 well-known dir

2018-11-06 Thread Monty Taylor

On 11/5/18 3:21 AM, Mohammed Naser wrote:



Sent from my iPhone


On Nov 5, 2018, at 10:19 AM, Thierry Carrez  wrote:

Monty Taylor wrote:

[...]
What if we added support for serving vendor data files from the root of a 
primary URL as-per RFC 5785. Specifically, support deployers adding a json file 
to .well-known/openstack/client that would contain what we currently store in 
the openstacksdk repo and were just discussing splitting out.
[...]
What do people think?


I love the idea of public clouds serving that file directly, and the user 
experience you get from it. The only two drawbacks off the top of my head would be:

- it's harder to discover available compliant openstack clouds from the client.

- there is no vetting process, so there may be failures with weird clouds 
serving half-baked files that people may blame the client tooling for.

I still think it's a good idea, as in theory it aligns the incentive of 
maintaining the file with the most interested stakeholder. It just might need 
some extra communication to work seamlessly.


I’m thinking out loud here but perhaps a simple linter that a cloud provider 
can run will help them make sure that everything is functional.


In fact, once we get it fleshed out and support added - perhaps we could 
add a tempest test that checks for a well-known file - and include it in 
compliance testing. Basically - if your cloud publishes a vendor 
profile, then the information in it should be accurate and should work, 
right?
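
Mohammed's linter idea is straightforward to prototype. Below is a minimal sketch; the function name, the required-key set, and the error strings are all hypothetical - no such tool ships today, and a real linter would validate against a published JSON schema:

```python
import json

# Keys a vendor profile would plausibly need; hypothetical, not a published schema.
REQUIRED_PROFILE_KEYS = {"auth_type", "auth"}

def check_profile(document):
    """Return a list of problems found in a vendor profile document."""
    problems = []
    if "name" not in document:
        problems.append("missing top-level 'name'")
    profile = document.get("profile")
    if not isinstance(profile, dict):
        problems.append("missing or non-object 'profile'")
        return problems
    for key in REQUIRED_PROFILE_KEYS - set(profile):
        problems.append("profile missing '%s'" % key)
    if "auth_url" not in profile.get("auth", {}):
        problems.append("profile.auth missing 'auth_url'")
    return problems

doc = json.loads("""
{
  "name": "example",
  "profile": {
    "auth_type": "v3password",
    "auth": {"auth_url": "https://auth.example.com/v3"}
  }
}
""")
print(check_profile(doc))  # -> []
```

A tempest test along the lines Monty suggests could run the same checks against the live `.well-known/openstack/client` URL instead of a local file.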





[openstack-dev] [publiccloud-wg] Serving vendor json from RFC 5785 well-known dir

2018-11-04 Thread Monty Taylor

Heya,

I've floated a half-baked version of this idea to a few people, but 
lemme try again with some new words.


What if we added support for serving vendor data files from the root of 
a primary URL as-per RFC 5785. Specifically, support deployers adding a 
json file to .well-known/openstack/client that would contain what we 
currently store in the openstacksdk repo and were just discussing 
splitting out.


Then, an end-user could put a url into the 'cloud' parameter.

Using vexxhost as an example, if Mohammed served:

{
  "name": "vexxhost",
  "profile": {
"auth_type": "v3password",
"auth": {
  "auth_url": "https://auth.vexxhost.net/v3"
},
"regions": [
  "ca-ymq-1",
  "sjc1"
],
"identity_api_version": "3",
"image_format": "raw",
"requires_floating_ip": false
  }
}

from https://vexxhost.com/.well-known/openstack/client

And then in my local config I did:

import openstack
conn = openstack.connect(
cloud='https://vexxhost.com',
username='my-awesome-user',
...)

The client could know to go fetch 
https://vexxhost.com/.well-known/openstack/client to use as the profile 
information needed for that cloud.
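
A rough sketch of that resolution step, assuming the 'cloud' parameter doubles as a URL (function names here are illustrative only, not openstacksdk API):

```python
import json
import urllib.request

WELL_KNOWN_PATH = "/.well-known/openstack/client"

def well_known_url(cloud):
    """Map a bare cloud URL to its RFC 5785 vendor-profile location."""
    return cloud.rstrip("/") + WELL_KNOWN_PATH

def fetch_vendor_profile(cloud):
    """If 'cloud' looks like a URL, fetch its published profile;
    otherwise return None so the caller can fall back to the built-in
    named profiles shipped with openstacksdk."""
    if not cloud.startswith(("http://", "https://")):
        return None
    with urllib.request.urlopen(well_known_url(cloud)) as resp:
        return json.loads(resp.read().decode("utf-8"))

print(well_known_url("https://vexxhost.com"))
# -> https://vexxhost.com/.well-known/openstack/client
```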


If I wanted to configure a clouds.yaml entry, it would look like:

clouds:
  mordred-vexxhost:
profile: https://vexxhost.com
auth:
  username: my-awesome-user

And I could just

conn = openstack.connect(cloud='mordred-vexxhost')

The most important part being that we define the well-known structure 
and interaction. Then we don't need the info in a git repo managed by 
the publiccloud-wg - each public cloud can manage it itself. But also - 
non-public clouds can take advantage of being able to define such 
information for their users too - and can hand a user a simple global 
entrypoint for discovery. As they add regions - or if they decide to 
switch from global keystone to per-region keystone, they can just update 
their profile file and all will be good with the world.


Of course, it's a convenience, so nothing forces anyone to deploy one.

For backwards compat, as public clouds that we have vendor profiles for start 
deploying a well-known profile, we can update the baked-in named profile 
in openstacksdk to simply reference the remote url and over time 
hopefully there will cease to be any information that's useful in the 
openstacksdk repo.


What do people think?

Monty



Re: [openstack-dev] [publiccloud-wg][sdk][osc][tc] Extracting vendor profiles from openstacksdk

2018-10-29 Thread Monty Taylor

On 10/29/18 10:47 AM, Doug Hellmann wrote:

Monty Taylor  writes:


Heya,

Tobias and I were chatting at OpenStack Days Nordic about the Public
Cloud Working Group potentially taking over as custodians of the vendor
profile information [0][1] we keep in openstacksdk (and previously in
os-client-config)

I think this is a fine idea, but we've got some dancing to do I think.

A few years ago Dean and I talked about splitting the vendor data into
its own repo. We decided not to at the time because it seemed like extra
unnecessary complication. But I think we may have reached that time.

We should split out a new repo to hold the vendor data json files. We
can manage this repo pretty much how we manage the
service-types-authority [2] data now. Also similar to that (and similar
to tzdata) these are files that contain information that is true
currently and is not release specific - so it should be possible to
update to the latest vendor files without updating to the latest
openstacksdk.

If nobody objects, I'll start working through getting a couple of new
repos created. I'm thinking openstack/vendor-profile-data, owned/managed
by Public Cloud WG, with the json files, docs, json schema, etc, and a
second one, openstack/os-vendor-profiles - owned/managed by the
openstacksdk team that's just like os-service-types [3] and is a
tiny/thin library that exposes the files to python (so there's something
to depend on) and gets proposed patches from zuul when new content is
landed in openstack/vendor-profile-data.

How's that sound?


I understand the benefit of separating the data files from the SDK, but
what is the benefit of separating the data files from the code that
reads them?


I'd say primarily so that the same data files can be used from other 
languages. (similar to having the service-types-authority data exist 
separate from the python library that consumes it.)


Also - there is a separation of concerns, potentially. The review team 
for a vendor-data repo could just be public cloud sig folks - and what 
they are reviewing is the accuracy of the data. The python code to 
consume that and interpret it is likely a different set of humans.




Re: [openstack-dev] Proposal for a process to keep up with Python releases

2018-10-19 Thread Monty Taylor

On 10/19/2018 01:58 PM, Andreas Jaeger wrote:

On 19/10/2018 18.30, Clark Boylan wrote:
 > [...]
Because zuul config is branch specific we could set up every project 
to use a `openstack-python3-jobs` template then define that template 
differently on each branch. This would mean you only have to update 
the location where the template is defined and not need to update 
every other project after cutting a stable branch. I would suggest we 
take advantage of that to reduce churn.


Alternative we have a single "openstack-python3-jobs" template in an 
unbranched repo like openstack-zuul-jobs and define different jobs per 
branch.


The end result would be the same, each repo uses the same template and 
no changes are needed for the repo when branching...


Yes - I agree that we should take advantage of zuul's branching support. 
And I agree with Andreas that we should just use branch matchers in 
openstack-zuul-jobs to do it.




Re: [openstack-dev] [python3] Enabling py37 unit tests

2018-10-16 Thread Monty Taylor

On 10/15/2018 08:24 PM, Tim Burke wrote:


On Oct 15, 2018, at 5:00 PM, Monty Taylor <mord...@inaugust.com> wrote:


If we decide as a community to shift our testing of python3 to be 3.6 
- or even 3.7 - as long as we still are testing 2.7, I'd argue we're 
adequately covered for 3.5.


That's not enough for me to be willing to declare support. I'll grant 
that we'd catch the obvious SyntaxErrors, but that could be achieved 
just as easily (and probably more cheaply, resource-wise) with multiple 
linter jobs. The reason you want unit tests to actually run is to catch 
the not-so-obvious bugs.


For example: there are a bunch of places in Swift's proxy-server where 
we get a JSON response from a backend server, loads() it up, and do some 
work based on it. As I've been trying to get the proxy ported to py3, I 
keep writing json.loads(resp.body.decode()). I'll sometimes get pushback 
from reviewers saying this shouldn't be necessary, and then I need to 
point out that while json.loads() is happy to accept either bytes or 
unicode on both py27 and py36, bytes will cause a TypeError on py35. And 
since https://bugs.python.org/issue17909 was termed an enhancement and 
not a regression (I guess the contract is str-or-unicode, for whatever 
str is?), I'm not expecting a backport.
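
Tim's point can be made concrete in a few lines - a minimal, portable example rather than actual Swift code:

```python
import json

body = b'{"objects": [{"name": "a"}]}'  # bytes, as read from a backend response

# json.loads(body) works on 2.7 and 3.6+, but raises TypeError on 3.5
# (bytes support arrived in 3.6 via bpo-17909), so decode explicitly:
data = json.loads(body.decode("utf-8"))
print(data["objects"][0]["name"])  # a
```

Testing only on 2.7 and 3.6 would never exercise the failing path, which is exactly why py35 coverage matters.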


TLDR; if we want to say that something works, best to actually test that 
it works. I might be willing to believe that py35 and py37 working 
implies that py36 will work, but py27 -> py3x tells me little about 
whether py3w works for any w < x.


Fair point - you've convinced me!




Re: [openstack-dev] [python3] Enabling py37 unit tests

2018-10-15 Thread Monty Taylor

On 10/15/2018 06:39 PM, Zane Bitter wrote:


In fact, as far as we know the version we have to support in CentOS may 
actually be 3.5, which seems like a good reason to keep it working for 
long enough that we can find out for sure one way or the other.


I certainly hope this is not what ends up happening, but seeing as how I 
actually do not know - I agree, I cannot discount the possibility that 
such a thing would happen.


That said - until such a time as we get to actually drop python2, I 
don't see it as an actual issue. The reason being - if we test with 2.7 
and 3.7 - the things in 3.6 that would break 3.5 get gated by the 
existence of 2.7 for our codebase.


Case in point- the instant 3.6 is our min, I'm going to start replacing 
every instance of:


  "foo {bar}".format(bar=bar)

in any code I spend time in with:

  f"foo {bar}"

It TOTALLY won't parse on 3.5 ... but it also won't parse on 2.7.
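
For the record, the two spellings are equivalent wherever both parse (3.6 and later):

```python
bar = "world"
old = "foo {bar}".format(bar=bar)  # parses on 2.7 and every 3.x
new = f"foo {bar}"                 # SyntaxError on anything before 3.6
print(old == new)  # True
```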

If we decide as a community to shift our testing of python3 to be 3.6 - 
or even 3.7 - as long as we still are testing 2.7, I'd argue we're 
adequately covered for 3.5.


The day we decide we can drop 2.7 - if we've been testing 3.7 for 
python3 and it turns out RHEL/CentOS 8 ship with python 3.5, then 
instead of just deleting all of the openstack-tox-py27 jobs, we'd 
probably just need to replace them with openstack-tox-py35 jobs, as that 
would be our new low-water mark.


Now, maybe we'll get lucky and RHEL/CentOS 8 will be a future-looking 
release and will ship with python 3.7 AND so will the corresponding 
Ubuntu LTS - and we'll get to only care about one release of python for 
a minute. :)


Come on - I can dream, right?

Monty



Re: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo

2018-10-15 Thread Monty Taylor

On 10/10/2018 03:17 PM, Ben Nemec wrote:



On 10/10/18 1:35 PM, Greg Hill wrote:


    I'm not sure how using pull requests instead of Gerrit changesets 
would

    help "core reviewers being pulled on to other projects"?


The 2 +2 requirement works for larger projects with a lot of 
contributors. When you have only 3 regular contributors and 1 of them 
gets pulled on to a project and can no longer actively contribute, you 
have 2 developers who can +2 each other but nothing can get merged 
without that 3rd dev finding time to add another +2. This is what 
happened with Taskflow a few years back. Eventually the other 2 gave 
up and moved on also.


As the others have mentioned, this doesn't need to continue to be a 
blocker. If the alternative is nobody working on the project at all, a 
single approver policy is far better. In practice it's probably not much 
different from having a general oslo core rubber stamp +2 a patch that 
was already reviewed by a taskflow expert.


Just piling on on this. We do single-core approves in openstacksdk, 
although for REALLY hairy patches I try to get a few more people to get 
eyeballs on something.




    Is this just about preferring not having a non-human gatekeeper like
    Gerrit+Zuul and being able to just have a couple people merge 
whatever
    they want to the master HEAD without needing to talk about +2/+W 
rights?



We plan to still have a CI gatekeeper, probably Travis CI, to make 
sure PRs pass muster before being merged, so it's not like we're 
wanting to circumvent good contribution practices by committing 
whatever to HEAD. But the +2/+W rights thing was a huge PITA to deal 
with with so few contributors, for sure.


I guess this would be the one concern I'd have about moving it out. We 
still have a fair number of OpenStack projects depending on taskflow[1] 
to one degree or another, and having taskflow fully integrated into the 
OpenStack CI system is nice for catching problems with proposed changes 
early.


I second this. Especially for a library like taskflow where the value is 
in the behavior engine, describing that as an API with an API surface is 
a bit harder than just testing a published library interface.


It's also worth noting that we're working on plans to get the OpenStack 
Infra systems rebranded so that concerns people might have about brand 
association can be mitigated.


I think there was some work recently to get OpenStack CI voting 
on Github, but it seems inefficient to do work to move it out of 
OpenStack and then do more work to partially bring it back.


Zuul supports cross-source dependencies and we have select github repos 
configured in OpenStack's Zuul so that projects can do cross-project 
verification.


I suppose the other option is to just stop CI'ing on OpenStack and rely 
on the upper-constraints gating we do for our other dependencies. That 
would be unfortunate, but again if the alternative is no development at 
all then it might be a necessary compromise.


I agree - if the main roadblock is just the 2x+2 policy, which is 
solvable without moving anything, then the pain of moving the libraries 
out to github just to turn around and cobble together a cross-source 
advisory testing system seems not very worth it and I'd be more inclined 
to use upper-constraints.


By and large moving these is going to be pretty disruptive, so I'd 
personally prefer that they stayed where they are. There are PLENTY of 
things hosted in OpenStack's infrastructure that are not OpenStack - or 
even OpenStack specific.


1: 
http://codesearch.openstack.org/?q=taskflow&i=nope&files=requirements.txt&repos= 





    If it's just about preferring the pull request workflow versus the
    Gerrit rebase workflow, just say so. Same for just preferring the
    Github
    UI versus Gerrit's UI (which I agree is awful).


I mean, yes, I personally prefer the Github UI and workflow, but that 
was not a primary consideration. I got used to using gerrit well 
enough. It was mostly the +2/+W policy. There's also a sense that if a project is 
in the OpenStack umbrella, it's not useful outside OpenStack, and 
Taskflow is designed to be a general purpose library. The hope is that 
just making it a regular open source project might attract more users 
and contributors.


I think we might be intertwining a few things that don't have to be 
intertwined.


The libraries are currently part of the OpenStack umbrella, and as part 
of that are hosted in OpenStack's developer infrastructure.


They can remain "part of OpenStack" and be managed with a relaxed core 
reviewer policy. This way, should they be desired, things like the 
release management team can still be used.


They can cease being "part of OpenStack" without needing to move away 
from the OpenStack Developer Infrastructure. As I mentioned earlier 
we're working on rebranding the Developer Infrastructure, so if there is 
a concern that a git repo existing within the Developer Infrastructure 
implies being "part of 

Re: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo

2018-10-15 Thread Monty Taylor

On 10/15/2018 05:49 AM, Stephen Finucane wrote:

On Wed, 2018-10-10 at 18:51 +, Jeremy Stanley wrote:

On 2018-10-10 13:35:00 -0500 (-0500), Greg Hill wrote:
[...]

We plan to still have a CI gatekeeper, probably Travis CI, to make sure PRs
pass muster before being merged, so it's not like we're wanting to
circumvent good contribution practices by committing whatever to HEAD.


Travis CI has gained the ability to prevent you from merging changes
which fail testing? Or do you mean something else when you refer to
it as a "gatekeeper" here?


Yup, but it's a GitHub feature rather than specifically a Travis CI
feature.

   https://help.github.com/articles/about-required-status-checks/

Doesn't help the awful pull request workflow but that's neither here
nor there.


It's also not the same as gating.

The github feature is the equivalent of "Make sure the votes in check 
are green before letting someone click the merge button"


The zuul feature is "run the tests between the human decision to merge 
and actually merging with the code in the state it will actually be in 
when merged".


It sounds nitpicky, but the semantic distinction is important - and it 
catches things more frequently than you might imagine.


That said - Zuul supports github, and there are Zuuls run by 
not-openstack, so taking a project out of OpenStack's free 
infrastructure does not mean you have to also abandon Zuul. The 
OpenStack Infra team isn't going to run a zuul to gate patches on a 
GitHub project - but other people might be happy to let you use a Zuul 
so that you don't have to give up the Zuul features in place today. If 
you go down that road, I'd suggest pinging the 
softwarefactory-project.io folks or the openlab folks.




Re: [openstack-dev] [infra] Polygerrit

2018-10-15 Thread Monty Taylor

On 10/15/2018 07:08 AM, Jeremy Stanley wrote:

On 2018-10-15 11:54:28 +0100 (+0100), Stephen Finucane wrote:
[...]

As an aside, are there any plans to enable PolyGerrit [1] in the
OpenStack Gerrit instance?

[...]

I believe so, but first we need to upgrade to a newer Gerrit version
which provides it (that in turn requires a newer Java which needs a
server built from a newer distro version, which is all we've gotten
through on the upgrade plan so far).


I'm working on this now, so hopefully we should have an ETA soonish.



[openstack-dev] [publiccloud-wg][sdk][osc][tc] Extracting vendor profiles from openstacksdk

2018-10-15 Thread Monty Taylor

Heya,

Tobias and I were chatting at OpenStack Days Nordic about the Public 
Cloud Working Group potentially taking over as custodians of the vendor 
profile information [0][1] we keep in openstacksdk (and previously in 
os-client-config)


I think this is a fine idea, but we've got some dancing to do I think.

A few years ago Dean and I talked about splitting the vendor data into 
its own repo. We decided not to at the time because it seemed like extra 
unnecessary complication. But I think we may have reached that time.


We should split out a new repo to hold the vendor data json files. We 
can manage this repo pretty much how we manage the 
service-types-authority [2] data now. Also similar to that (and similar 
to tzdata) these are files that contain information that is true 
currently and is not release specific - so it should be possible to 
update to the latest vendor files without updating to the latest 
openstacksdk.


If nobody objects, I'll start working through getting a couple of new 
repos created. I'm thinking openstack/vendor-profile-data, owned/managed 
by Public Cloud WG, with the json files, docs, json schema, etc, and a 
second one, openstack/os-vendor-profiles - owned/managed by the 
openstacksdk team that's just like os-service-types [3] and is a 
tiny/thin library that exposes the files to python (so there's something 
to depend on) and gets proposed patches from zuul when new content is 
landed in openstack/vendor-profile-data.


How's that sound?

Thanks!
Monty

[0] 
http://git.openstack.org/cgit/openstack/openstacksdk/tree/openstack/config/vendors
[1] 
https://docs.openstack.org/openstacksdk/latest/user/config/vendor-support.html

[2] http://git.openstack.org/cgit/openstack/service-types-authority
[3] http://git.openstack.org/cgit/openstack/os-service-types



Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Monty Taylor

On 09/26/2018 04:33 PM, Dean Troyer wrote:

On Wed, Sep 26, 2018 at 3:44 PM, Matt Riedemann  wrote:

I started documenting the compute API gaps in OSC last release [1]. It's a
big gap and needs a lot of work, even for existing CLIs (the cold/live
migration CLIs in OSC are a mess, and you can't even boot from volume where
nova creates the volume for you). That's also why I put something into the
etherpad about the OSC core team even being able to handle an onslaught of
changes for a goal like this.


The OSC core team is very thin, yes, it seems as though companies
don't like to spend money on client-facing things...I'll be in the
hall following this thread should anyone want to talk...

The migration commands are a mess, mostly because I got them wrong to
start with and we have only tried to patch it up, this is one area I
think we need to wipe clean and fix properly.  Yay! Major version
release!


I thought the same, and we talked about this at the Austin summit, but OSC
is inconsistent about this (you can live migrate a server but you can't
evacuate it - there is no CLI for evacuation). It also came up at the Stein
PTG with Dean in the nova room giving us some direction. [2] I believe the
summary of that discussion was:



a) to deal with the core team sprawl, we could move the compute stuff out of
python-openstackclient and into an osc-compute plugin (like the
osc-placement plugin for the placement service); then we could create a new
core team which would have python-openstackclient-core as a superset


This is not my first choice but is not terrible either...


b) Dean suggested that we close the compute API gaps in the SDK first, but
that could take a long time as well...but it sounded like we could use the
SDK for things that existed in the SDK and use novaclient for things that
didn't yet exist in the SDK


Yup, this can be done in parallel.  The unit of decision for use sdk
vs use XXXclient lib is per-API call.  If the client lib can use an
SDK adapter/session it becomes even better.  I think the priority for
what to address first should be guided by complete gaps in coverage
and the need for microversion-driven changes.


This might be a candidate for one of these multi-release goals that the TC
started talking about at the Stein PTG. I could see something like this
being a goal for Stein:

"Each project owns its own osc- plugin for OSC CLIs"

That deals with the core team and sprawl issue, especially with stevemar
being gone and dtroyer being distracted by shiny x-men bird related things.
That also seems relatively manageable for all projects to do in a single
release. Having a single-release goal of "close all gaps across all service
types" is going to be extremely tough for any older projects that had CLIs
before OSC was created (nova/cinder/glance/keystone). For newer projects,
like placement, it's not a problem because they never created any other CLI
outside of OSC.


I think the major difficulty here is simply how to migrate users from
today state to future state in a reasonable manner.  If we could teach
OSC how to handle the same command being defined in multiple plugins
properly (hello entrypoints!) it could be much simpler as we could
start creating the new plugins and switch as the new command
implementations become available rather than having a hard cutover.

Or maybe the definition of OSC v4 is as above and we just work at it
until complete and cut over at the end.
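
Dean's "same command defined in multiple plugins" idea could be prototyped with a simple precedence rule over entry points. In this sketch the Command stand-in, the plugin names, and the priority table are entirely hypothetical - OSC has no such mechanism today:

```python
from collections import namedtuple

# Stand-in for setuptools entry points: each plugin registers a command name.
Command = namedtuple("Command", "name plugin loader")

# Hypothetical precedence: new split-out plugins win over the legacy in-tree code.
PLUGIN_PRIORITY = {"osc-compute": 2, "python-openstackclient": 1}

def resolve(commands, name):
    """Pick the highest-priority implementation of a duplicated command."""
    candidates = [c for c in commands if c.name == name]
    if not candidates:
        raise KeyError(name)
    return max(candidates, key=lambda c: PLUGIN_PRIORITY.get(c.plugin, 0))

registry = [
    Command("server migrate", "python-openstackclient", "legacy impl"),
    Command("server migrate", "osc-compute", "new impl"),
]
print(resolve(registry, "server migrate").plugin)  # osc-compute
```

With something like this, a new plugin's command would shadow the legacy one as soon as it lands, giving the gradual switch-over rather than a hard cutover.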


I think that sounds pretty good, actually. We can also put the 'just get 
the sdk Connection' code in.


You mentioned earlier that python-*client that can take an existing ksa 
Adapter as a constructor parameter make this easier - maybe let's put 
that down as a workitem for this? Becuase if we could do that- then we 
know we've got discovery and config working consistently across the 
board no matter if a call is using sdk or python-*client primitives 
under the cover - so everything will respond to env vars and command 
line options and clouds.yaml consistently.


For that to work, a python-*client Client that took an 
keystoneauth1.adapter.Adapter would need to take it as gospel and not do 
further processing of config, otherwise the point is defeated. But it 
should be straightforward to do in most cases, yeah?
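
To make the "take the Adapter as gospel" contract concrete, here is a sketch. FakeAdapter is a stand-in for keystoneauth1.adapter.Adapter so the example stays self-contained, and Client is hypothetical, not any real python-*client class:

```python
class FakeAdapter:
    """Stand-in for keystoneauth1.adapter.Adapter: assumed to already
    carry auth, service type, region, and discovered endpoint."""
    def __init__(self, endpoint):
        self.endpoint = endpoint

    def get(self, path):
        # The real Adapter.get() performs an authenticated HTTP request.
        return {"url": self.endpoint + path}

class Client:
    """A python-*client-style Client that, when handed an adapter,
    treats it as gospel and skips its own config processing."""
    def __init__(self, adapter=None, **config):
        if adapter is not None:
            self.adapter = adapter  # no further auth/discovery processing
        else:
            self.adapter = self._build_adapter_from_config(**config)

    def _build_adapter_from_config(self, **config):
        raise NotImplementedError("legacy config path, out of scope here")

    def list_servers(self):
        return self.adapter.get("/servers")

client = Client(adapter=FakeAdapter("https://compute.example.com/v2.1"))
print(client.list_servers()["url"])
```

The point of the contract is the `if adapter is not None` branch: everything about endpoints and auth comes from the adapter, so sdk and python-*client calls see identical discovery and config.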



 Note that the current APIs
that are in-repo (Compute, Identity, Image, Network, Object, Volume)
are all implemented using the plugin structure, OSC v4 could start as
the breaking out of those without command changes (except new
migration commands!) and then the plugins all re-write and update at
their own tempo.  Dang, did I just deconstruct my project?


Main difference is making sure these new deconstructed plugin teams 
understand the client support lifecycle - which is that we don't drop 
support for old versions of services in OSC (or SDK). It's a shift from 
the support lifecycle and POV of python-*client, but it's important and 
we just need to all be on the same page.



One thing I don't like 

Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Monty Taylor

On 09/26/2018 04:12 PM, Dean Troyer wrote:

On Wed, Sep 26, 2018 at 3:01 PM, Doug Hellmann  wrote:

Would it be useful to have the SDK work in OSC as a prerequisite to the
goal work? I would hate to have folks have to write a bunch of things
twice.


I don't think this is necessary, once we have the auth and service
discovery/version negotiation plumbing in OSC properly new things can
be done in OSC without having to wait for conversion.  Any of the
existing client libs that can utilize an adapter form the SDK makes
this even simpler for conversion.


As one might expect, I agree with Dean. I don't think we need to wait on it.



Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Monty Taylor

On 09/26/2018 01:55 PM, Tim Bell wrote:


Doug,

Thanks for raising this. I'd like to highlight the goal "Finish moving legacy 
python-*client CLIs to python-openstackclient" from the etherpad and propose this 
for a T/U series goal.

To give it some context and the motivation:

At CERN, we have more than 3000 users of the OpenStack cloud. We write an 
extensive end user facing documentation which explains how to use the OpenStack 
along with CERN specific features (such as workflows for requesting 
projects/quotas/etc.).

One regular problem we come across is that the end user experience is 
inconsistent. In some cases, we find projects which are not covered by the 
unified OpenStack client (e.g. Manila). In other cases, there are subsets of 
the function which require the native project client.

I would strongly support a goal which targets

- All new projects should have the end user facing functionality fully exposed 
via the unified client
- Existing projects should aim to close the gap within 'N' cycles (N to be 
defined)
- Many administrator actions would also benefit from integration (reader roles 
are end users too so list and show need to be covered too)
- Users should be able to use a single openrc for all interactions with the 
cloud (e.g. not switch between password for some CLIs and Kerberos for OSC)

The end user perception of a solution will be greatly enhanced by a single 
command line tool with consistent syntax and authentication framework.

It may be a multi-release goal but it would really benefit the cloud consumers 
and I feel that goals should include this audience also.


++

It's also worth noting that we're REALLY close to a 1.0 of openstacksdk 
(all the patches are in flight, we just need to land them) - and once 
we've got that we'll be in a position to start shifting 
python-openstackclient to using openstacksdk instead of python-*client.


This will have the additional benefit that, once we've migrated CLIs to 
python-openstackclient as per this goal, and once we've migrated 
openstackclient itself to openstacksdk, the number of different 
libraries one needs to install to interact with openstack will be 
_dramatically_ lower.



-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 26 September 2018 at 18:00
To: openstack-dev , openstack-operators 
, openstack-sigs 

Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T 
series

 It's time to start thinking about community-wide goals for the T series.
 
 We use community-wide goals to achieve visible common changes, push for

 basic levels of consistency and user experience, and efficiently improve
 certain areas where technical debt payments have become too high -
 across all OpenStack projects. Community input is important to ensure
 that the TC makes good decisions about the goals. We need to consider
 the timing, cycle length, priority, and feasibility of the suggested
 goals.
 
 If you are interested in proposing a goal, please make sure that before

 the summit it is described in the tracking etherpad [1] and that you
 have started a mailing list thread on the openstack-dev list about the
 proposal so that everyone in the forum session [2] has an opportunity to
 consider the details.  The forum session is only one step in the
 selection process. See [3] for more details.
 
 Doug
 
 [1] https://etherpad.openstack.org/p/community-goals

 [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
 [3] https://governance.openstack.org/tc/goals/index.html
 
 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 








Re: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"

2018-09-19 Thread Monty Taylor

On 09/19/2018 09:37 AM, Colleen Murphy wrote:

On Wed, Sep 19, 2018, at 4:23 PM, Monty Taylor wrote:

On 09/19/2018 08:25 AM, Chris Dent wrote:





also, cmurphy has been working on updating some of keystone's legacy
jobs recently:

https://review.openstack.org/602452

which might also be a source for copying from.



Disclaimer before anyone blindly copies: https://bit.ly/2vq26SR


Bah. Blindly copy all the things!!!



Re: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"

2018-09-19 Thread Monty Taylor

On 09/19/2018 09:23 AM, Monty Taylor wrote:

On 09/19/2018 08:25 AM, Chris Dent wrote:


I have a patch in progress to add some simple integration tests to
placement:

 https://review.openstack.org/#/c/601614/

They use https://github.com/cdent/gabbi-tempest . The idea is that
the method for adding more tests is to simply add more yaml in
gate/gabbits, without needing to worry about adding to or think
about tempest.

What I have at that patch works; there are two yaml files, one of
which goes through the process of confirming the existence of a
resource provider and inventory, booting a server, seeing a change
in allocations, resizing the server, seeing a change in allocations.

But this is kludgy in a variety of ways and I'm hoping to get some
help or pointers to the right way. I'm posting here instead of
asking in IRC as I assume other people confront these same
confusions. The issues:

* The associated playbooks are cargo-culted from stuff labelled
   "legacy" that I was able to find in nova's jobs. I get the
   impression that these are more verbose and duplicative than they
   need to be and are not aligned with modern zuul v3 coolness.


Yes. Your life will be much better if you do not make more legacy jobs. 
They are brittle and hard to work with.


New jobs should either use the devstack base job, the devstack-tempest 
base job or the devstack-tox-functional base job - depending on what 
things are intended.


You might want to check out:

https://docs.openstack.org/devstack/latest/zuul_ci_jobs_migration.html

also, cmurphy has been working on updating some of keystone's legacy 
jobs recently:


https://review.openstack.org/602452

which might also be a source for copying from.


* It takes an age for the underlying devstack to build, I can
   presumably save some time by installing fewer services, and making
   it obvious how to add more when more are required. What's the
   canonical way to do this? Mess with {enable,disable}_service, cook
   the ENABLED_SERVICES var, do something with required_projects?


http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n190

Has an example of disabling services, of adding a devstack plugin, and 
of adding some lines to localrc.



http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n117

Has some more complex config bits in it.

In your case, I believe you want to have parent: devstack-tempest 
instead of parent: devstack-tox-functional




* This patch, and the one that follows it [1] dynamically install
   stuff from pypi in the post test hooks, simply because that was
   the quick and dirty way to get those libs in the environment.
   What's the clean and proper way? gabbi-tempest itself needs to be
   in the tempest virtualenv.


This I don't have an answer for. I'm guessing this is something one 
could do with a tempest plugin?


K. This:

http://git.openstack.org/cgit/openstack/neutron-tempest-plugin/tree/.zuul.yaml#n184

Has an example of a job using a tempest plugin.


* The post.yaml playbook which gathers up logs seems like a common
   thing, so I would hope it could be DRYed up a bit. What's the best
   way to do that?


Yup. Legacy devstack-gate based jobs are pretty terrible.

You can delete the entire post.yaml if you move to the new devstack base 
job.


The base devstack job has a much better mechanism for gathering logs.


Thanks very much for any input.

[1] perf logging of a loaded placement: 
https://review.openstack.org/#/c/602484/













Re: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"

2018-09-19 Thread Monty Taylor

On 09/19/2018 08:25 AM, Chris Dent wrote:


I have a patch in progress to add some simple integration tests to
placement:

     https://review.openstack.org/#/c/601614/

They use https://github.com/cdent/gabbi-tempest . The idea is that
the method for adding more tests is to simply add more yaml in
gate/gabbits, without needing to worry about adding to or think
about tempest.

What I have at that patch works; there are two yaml files, one of
which goes through the process of confirming the existence of a
resource provider and inventory, booting a server, seeing a change
in allocations, resizing the server, seeing a change in allocations.

But this is kludgy in a variety of ways and I'm hoping to get some
help or pointers to the right way. I'm posting here instead of
asking in IRC as I assume other people confront these same
confusions. The issues:

* The associated playbooks are cargo-culted from stuff labelled
   "legacy" that I was able to find in nova's jobs. I get the
   impression that these are more verbose and duplicative than they
   need to be and are not aligned with modern zuul v3 coolness.


Yes. Your life will be much better if you do not make more legacy jobs. 
They are brittle and hard to work with.


New jobs should either use the devstack base job, the devstack-tempest 
base job or the devstack-tox-functional base job - depending on what 
things are intended.


You might want to check out:

https://docs.openstack.org/devstack/latest/zuul_ci_jobs_migration.html

also, cmurphy has been working on updating some of keystone's legacy 
jobs recently:


https://review.openstack.org/602452

which might also be a source for copying from.


* It takes an age for the underlying devstack to build, I can
   presumably save some time by installing fewer services, and making
   it obvious how to add more when more are required. What's the
   canonical way to do this? Mess with {enable,disable}_service, cook
   the ENABLED_SERVICES var, do something with required_projects?


http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n190

Has an example of disabling services, of adding a devstack plugin, and 
of adding some lines to localrc.



http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n117

Has some more complex config bits in it.

In your case, I believe you want to have parent: devstack-tempest 
instead of parent: devstack-tox-functional




* This patch, and the one that follows it [1] dynamically install
   stuff from pypi in the post test hooks, simply because that was
   the quick and dirty way to get those libs in the environment.
   What's the clean and proper way? gabbi-tempest itself needs to be
   in the tempest virtualenv.


This I don't have an answer for. I'm guessing this is something one 
could do with a tempest plugin?



* The post.yaml playbook which gathers up logs seems like a common
   thing, so I would hope it could be DRYed up a bit. What's the best
   way to do that?


Yup. Legacy devstack-gate based jobs are pretty terrible.

You can delete the entire post.yaml if you move to the new devstack base 
job.


The base devstack job has a much better mechanism for gathering logs.


Thanks very much for any input.

[1] perf logging of a loaded placement: 
https://review.openstack.org/#/c/602484/










Re: [openstack-dev] [glance][horizon] Issues we found when using Community Images

2018-08-22 Thread Monty Taylor

On 08/22/2018 04:31 PM, Andy Botting wrote:

Hi all,

We've recently moved to using Glance's community visibility on the 
Nectar Research Cloud. We had lots of public images (12255), and we 
found it was becoming slow to list them all and the community image 
visibility seems to fit our use-case nicely.


We moved all of our user's images over to become community images, and 
left our 'official' images as the only public ones.


We found a few issues, which I wanted to document, if anyone else is 
looking at doing the same thing.


-> Glance API has no way of returning all images available to me in a 
single API request (https://bugs.launchpad.net/glance/+bug/1779251)
The default list of images is perfect (all available to me, except 
community), but there's a heap of cases where you need to fetch all 
images including community. If we did have this, my next points would be 
a whole lot easier to solve.


-> Horizon's support for Community images is very lacking 
(https://bugs.launchpad.net/horizon/+bug/1779250)
On the surface, it looks like Community images are supported in Horizon, 
but it's only as far as listing images in the Images tab. Trying to boot 
a Community image from the Launch Instance wizard is actually 
impossible, as community images don't appear in that list at all. The 
images tab in Horizon dynamically builds the list of images on the 
Images tab through new Glance API calls when you use any filters (good).
In contrast, the source tab on the Launch Instance wizard loads all images 
at the start (slow with lots of images), then relies on javascript 
client-side filtering of the list. I've got a dirty patch to fix this 
for us by basically making two Glance API requests (one without 
specifying visibility, and another with visibility=community), then 
merging the data. This would be better handled the same way as the 
Images tab, with new Glance API requests when filtering.
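
To make the merge idea concrete, here is a rough sketch (not the actual Horizon patch) of the de-duplication step. The stub dicts below stand in for Glance image records; a real implementation would obtain the two lists via two API calls — one without a visibility filter and one with visibility=community — before merging:

```python
# Sketch of the two-request workaround described above. The lists below are
# stub dicts standing in for Glance image records; a real implementation
# would fetch them from the API and then merge, de-duplicating by image ID.

def merge_image_lists(default_images, community_images):
    """Merge two image listings, keeping the first record seen per ID."""
    seen = set()
    merged = []
    for image in default_images + community_images:
        if image["id"] not in seen:
            seen.add(image["id"])
            merged.append(image)
    return merged


default_listing = [
    {"id": "a1", "name": "official-ubuntu", "visibility": "public"},
    {"id": "b2", "name": "shared-image", "visibility": "shared"},
]
community_listing = [
    {"id": "c3", "name": "user-image", "visibility": "community"},
    {"id": "b2", "name": "shared-image", "visibility": "shared"},
]

merged = merge_image_lists(default_listing, community_listing)
print([image["id"] for image in merged])  # → ['a1', 'b2', 'c3']
```

Doing the filtering server-side per request (as the Images tab already does) avoids loading everything up front, which is the slow part with thousands of images.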


-> Users can't set their own images as Community from the dashboard
Should be relatively easy to add this. I'm hoping to look into fixing 
this soon.


-> Murano / Sahara image discovery
These projects rely on images to be chosen when creating new 
environments, and it looks like they use a glance list for their 
discovery. They both suffer from the same issue and require their images 
to be non-community for them to find their images.


-> OpenStack Client didn't support listing community images at all 
(https://storyboard.openstack.org/#!/story/2001925)
It did support setting images to community, but support for actually 
listing them was missing. Support has now been added, but not sure if 
it's made it to a release yet.


We've got a few more things I want to do related to images, sdk, 
openstackclient *and* horizon to make rollouts like this a bit better.


I'm betting when I do that I should add murano, sahara and heat to the 
list.


We're currently having to add the new support in like 5 places, which is 
where some of the holes come from. Hopefully we'll get stuff solid on 
that front soon - but thanks for the feedback!


Apart from these issues, our migration was pretty successful with 
minimal user complaints.


\o/



Re: [openstack-dev] [sdk] Pruning core team

2018-08-10 Thread Monty Taylor

On 08/10/2018 12:53 PM, Dean Troyer wrote:

On Fri, Aug 10, 2018 at 7:06 AM, Monty Taylor  wrote:

I'd like to propose removing Brian Curtin, Clint Byrum, David Simard, and
Ricardo Cruz.

Thoughts/concerns?


Reluctant +1, thanks guys for all the hard work!


+100 to the reluctant - and the thanks



[openstack-dev] [sdk] Pruning core team

2018-08-10 Thread Monty Taylor

Hey everybody,

We have some former contributors who haven't been involved in the last 
cycle that we should prune from the roster. They're all wonderful humans 
and it would be awesome to have them back if life presented them an 
opportunity to be involved again.


I'd like to propose removing Brian Curtin, Clint Byrum, David Simard, 
and Ricardo Cruz.


Thoughts/concerns?

Thanks!
Monty



[openstack-dev] [sdk] Propose adding Dmitry Tantsur to openstacksdk-core

2018-08-10 Thread Monty Taylor

Hey everybody,

I'd like to propose Dmitry Tantsur (dtantsur) as a new openstacksdk core 
team member. He's been diving in to some of the hard bits, such as 
dealing with microversions, and has a good grasp of the resource/proxy 
layer. His reviews have been super useful broadly, and he's also helping 
drive Ironic related functionality.


Thoughts/concerns?

Thanks!
Monty



Re: [openstack-dev] PTG Denver Horns

2018-08-08 Thread Monty Taylor

On 08/08/2018 09:12 AM, Jeremy Stanley wrote:

On 2018-08-08 06:51:27 -0500 (-0500), David Medberry wrote:

So basically, we have added "sl" to osc. Duly noted.

(FWIW, I frequently use "sl" as a demo of how "live" a VM is during live
migration. The train "stutters" a bit during the cutover.)

Now I can base it on PTG design work in a backronym fashion.

[...]

Speaking of which, is it too soon to put in bids to name the Denver
summit and associated release in 2019 "OpenStack Train"? I feel like
we're all honorary railroad engineers by now.


It seems like a good opportunity to apply the Brian Waldon exception.



[openstack-dev] [requirements][sdk][release] FFE request for os-service-types

2018-08-07 Thread Monty Taylor

Heya!

I'd like to request a FFE for os-service-types to release 1.3.0.

The main change is the inclusion of the qinling data from 
service-types-authority, as well as the addition of an alias for magnum.


There are also two minor changes to the python portion - a parameter was 
added to get_service_type allowing for a more permissive approach to 
unknown services - and the library now handles life correctly if a 
service type is requested with the incorrect number of _'s and -'s.
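
To illustrate the separator handling, here is a stand-in sketch — this is not the os-service-types implementation, and the known-types set below is an assumption for illustration only:

```python
# Stand-in sketch of the underscore/dash fix-up described above -- not the
# actual os-service-types code. The known-types set is assumed for the demo.

KNOWN_SERVICE_TYPES = {"block-storage", "load-balancer", "container-infra"}


def normalize_service_type(requested):
    """Tolerate the wrong mix/number of '_' and '-' in a service type."""
    candidate = requested.lower().replace("_", "-")
    while "--" in candidate:  # collapse repeated separators
        candidate = candidate.replace("--", "-")
    return candidate if candidate in KNOWN_SERVICE_TYPES else None


print(normalize_service_type("block_storage"))    # → block-storage
print(normalize_service_type("load--balancer"))   # → load-balancer
print(normalize_service_type("unknown_service"))  # → None
```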


Nothing should need a lower bounds bump - only the normal U-C bump.

Thanks!
Monty



Re: [openstack-dev] [cinder][api] strict schema validation and microversioning

2018-08-07 Thread Monty Taylor

On 08/07/2018 05:03 PM, Akihiro Motoki wrote:

Hi Cinder and API-SIG folks,

While reviewing a horizon bug [0], I noticed the behavior of Cinder API 
3.0 was changed.
Cinder introduced stricter schema validation for creating/updating 
volume encryption types during Rocky, and a new microversion 3.53 was 
introduced [1].

Previously, Cinder API 3.0 accepted unused fields in POST requests,
but after [1] landed, unused fields are now rejected even when Cinder API 
3.0 is used.
In my understanding of microversioning, the existing behavior for 
older versions should be kept.

Is that correct?


I agree with your assessment that 3.0 was used there - and also that I 
would expect the api validation to only change if 3.53 microversion was 
used.
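
As a concrete illustration (a sketch, not taken from the Cinder patch): a client opts in to newer behaviour by sending the microversion header, so a request pinned at 3.0 should keep the old, permissive validation. Building that header and the expected gating looks roughly like:

```python
# Hedged sketch: "OpenStack-API-Version" is how a client pins a Cinder
# (volume) microversion; only requests pinned at >= 3.53 should see the
# strict schema validation. Actually sending the request (keystoneauth,
# cinderclient, etc.) is omitted here.

def volume_api_version_header(microversion):
    """Build the microversion header for the Cinder v3 (volume) API."""
    return {"OpenStack-API-Version": "volume %s" % microversion}


def strict_validation_expected(microversion):
    """True if a request at this microversion should get strict validation."""
    major, minor = (int(part) for part in microversion.split("."))
    return (major, minor) >= (3, 53)


print(volume_api_version_header("3.53"))
print(strict_validation_expected("3.0"))   # → False
print(strict_validation_expected("3.53"))  # → True
```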





[openstack-dev] [requirements][release] FFE for openstacksdk 0.17.2

2018-08-07 Thread Monty Taylor

Hey all,

I'd like to request an FFE to release 0.17.2 of openstacksdk from 
stable/rocky.


Infra discovered an issue that affects the production nodepool related 
to the multi-threaded TaskManager and exception propagation. When it 
gets triggered, we lose an entire cloud of capacity (whoops) until we 
restart the associated nodepool-launcher process.


Nothing in OpenStack uses the particular feature in openstacksdk in 
question (yet), so nobody should need to even bump constraints.
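
The thread doesn't spell out the bug itself, so the following is only generic background (not the openstacksdk TaskManager code) on why exception propagation across a thread pool is easy to lose: an exception raised in a worker is stored on the Future and is only re-raised when something consumes the result.

```python
# Generic illustration: an exception raised in a pool worker is captured on
# the Future and re-raised only when .result() (or .exception()) is called.
# If nothing ever consumes the future, the failure is silently swallowed.

import concurrent.futures


def flaky_task():
    raise RuntimeError("cloud API blew up")


with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(flaky_task)

try:
    future.result()
    outcome = "ok"
except RuntimeError as exc:
    outcome = "propagated: %s" % exc

print(outcome)  # → propagated: cloud API blew up
```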


Thanks!
Monty



Re: [openstack-dev] [all] Ongoing spam in Freenode IRC channels

2018-08-01 Thread Monty Taylor

On 08/01/2018 08:17 AM, Andrey Kurilin wrote:



Wed, Aug 1, 2018 at 15:37, Monty Taylor <mailto:mord...@inaugust.com>>:


On 08/01/2018 06:22 AM, Luigi Toscano wrote:
 > On Wednesday, 1 August 2018 12:49:13 CEST Andrey Kurilin wrote:
 >> Hey Ian and stackers!
 >>
 >> Wed, Aug 1, 2018 at 8:45, Ian Wienand mailto:iwien...@redhat.com>>:
 >>> Hello,
 >>>
 >>> It seems freenode is currently receiving a lot of unsolicited traffic
 >>> across all channels.  The freenode team are aware [1] and doing their
 >>> best.
 >>>
 >>> There are not really a lot of options.  We can set "+r" on channels
 >>> which means only nickserv registered users can join channels.  We have
 >>> traditionally avoided this, because it is yet one more barrier to
 >>> communication when many are already unfamiliar with IRC access.
 >>> However, having channels filled with irrelevant messages is also not
 >>> very accessible.
 >>>
 >>> This is temporarily enabled in #openstack-infra for the time being, so
 >>> we can co-ordinate without interruption.
 >>>
 >>> Thankfully AFAIK we have not needed an abuse policy on this before;
 >>> but I guess we are at the point where we need some sort of coordinated
 >>> response.
 >>>
 >>> I'd suggest to start, people with an interest in a channel can request
 >>> +r from an IRC admin in #openstack-infra and we track it at [2]
 >>>
 >>> Longer term ... suggestions welcome? :)
 >>
 >> Move to Slack? We can provide auto-sending to emails invitations for
 >> joining by clicking the button on some page at openstack.org
 >> <http://openstack.org>. It will not add more barriers for new
 >> contributors and, at the same time, this way will give some base
 >> filtering by emails at least.

slack is pretty unworkable for many reasons. The biggest of them is that
it is not Open Source and we don't require OpenStack developers to use
proprietary software to work on OpenStack.

The quality of slack that makes it effective at fighting spam is also
the quality that makes it toxic as a community platform - the need for
an invitation and being structured as silos.

Even if we were to decide to abandon our Open Source principles and
leave behind those in our contributor base who believe that Free
Software Needs Free Tools [1] - moving to slack would be a GIANT
undertaking. As such, it would not be a very effective way to deal with
this current spam storm.

 > No, please no. If we need to move to another service, better go to a FLOSS
 > one, like Matrix.org, or others.

We had some discussion in Vancouver about investigating the use of
Matrix. We are a VERY large community, so we need to do scale and
viability testing before it's even a worthy topic to raise with the TC
and the community for consideration. If we did, we'd aim to run our own
home server.


The last paragraph is the best answer to why we will never switch from IRC:
"we are a VERY large community".

Looking back at the migration to Zuul V3: a project written by folks who
knew the potential load and usage, and a project with a great background.
Some issues appeared only after launching it in production. Fortunately,
the Zuul community quickly fixed them, and we have this great CI system now.

As for the FOSS alternatives to Slack aka modern IRC, I have not heard of
anything scalable to the size we need. Also, in case of any issues, they
will not be fixed as quickly as it was with Zuul V3 (thank you folks!).


Yes. This is an excellent point. In fact, just trying to figure out how 
to properly test that a different choice can handle the scale is ... 
very hard at best.


Another issue: the alternative should be popular, modern and usable. IRC
is used by a lot of communities (i.e. you do not need to install some
no-name tool to communicate on one more topic); the same goes for Slack,
and I suppose some other tools have the same popularity (but I do not
have them installed). If the alternative doesn't fit these criteria, a
lot of people will stay on Freenode and the migration will fail.


Yup. Totally agree.


However, it's worth noting that matrix is not immune to spam. As an open
federated protocol, it's a target as well. Running our own home server
might give us some additional tools - but it might not, and we might be
in the same scenario except now we're running another service and we had
the pain of moving.

All that to say though, matrix seems like th

Re: [openstack-dev] [all] Ongoing spam in Freenode IRC channels

2018-08-01 Thread Monty Taylor

On 08/01/2018 08:38 AM, Jeremy Stanley wrote:

On 2018-08-01 09:58:48 -0300 (-0300), Rafael Weingärtner wrote:

What about Rocket chat instead of Slack? It is open source.
https://github.com/RocketChat/Rocket.Chat

Monty, what kind of evaluation would you guys need? I might be
able to help.


Consider reading and possibly resurrecting the infra spec for it:

 https://review.openstack.org/319506

My main concern is how we'll go about authenticating and policing
whatever gateway we set up. As soon as spammers and other abusers
find out there's an open (or nearly so) proxy to a major IRC
network, they'll use it to hide their origins from the IRC server
operators and put us in the middle of the problem.


To be clear -- I was not suggesting running matrix and IRC. I was 
suggesting investigating running a matrix home server and then permanently 
moving all openstack channels to it.


matrix synapse supports federated identity providers with saml and cas 
support implemented. I would imagine we'd want to configure it to 
federate to openstackid for logging in to the home server - so that might 
involve either adding saml support to openstackid or writing an 
openid-connect driver to synapse.




Re: [openstack-dev] [all] Ongoing spam in Freenode IRC channels

2018-08-01 Thread Monty Taylor

On 08/01/2018 12:45 AM, Ian Wienand wrote:

Hello,

It seems freenode is currently receiving a lot of unsolicited traffic
across all channels.  The freenode team are aware [1] and doing their
best.

There are not really a lot of options.  We can set "+r" on channels
which means only nickserv registered users can join channels.  We have
traditionally avoided this, because it is yet one more barrier to
communication when many are already unfamiliar with IRC access.
However, having channels filled with irrelevant messages is also not
very accessible.

This is temporarily enabled in #openstack-infra for the time being, so
we can co-ordinate without interruption.

Thankfully AFAIK we have not needed an abuse policy on this before;
but I guess we are at the point where we need some sort of coordinated
response.

I'd suggest to start, people with an interest in a channel can request
+r from an IRC admin in #openstack-infra and we track it at [2]


To mitigate the pain caused by +r - we have created a channel called 
#openstack-unregistered and have configured the channels with the +r 
flag to forward people to it. We have also set an entrymsg on 
#openstack-unregistered to:


"Due to a prolonged SPAM attack on freenode, we had to configure 
OpenStack channels to require users to be registered. If you are here, 
you tried to join a channel without being logged in. Please see 
https://freenode.net/kb/answer/registration for instructions on 
registration with NickServ, and make sure you are logged in."


So anyone attempting to join a channel with +r should get that message.



Re: [openstack-dev] [all] Ongoing spam in Freenode IRC channels

2018-08-01 Thread Monty Taylor

On 08/01/2018 07:44 AM, Doug Hellmann wrote:

Excerpts from Andrey Kurilin's message of 2018-08-01 15:21:37 +0300:

Wed, Aug 1, 2018 at 14:11, Dmitry Tantsur :


On 08/01/2018 12:49 PM, Andrey Kurilin wrote:

Hey Ian and stackers!

Wed, Aug 1, 2018 at 8:45, Ian Wienand mailto:iwien...@redhat.com>>:

 Hello,

 It seems freenode is currently receiving a lot of unsolicited traffic
 across all channels.  The freenode team are aware [1] and doing their
 best.

 There are not really a lot of options.  We can set "+r" on channels
 which means only nickserv registered users can join channels.  We have
 traditionally avoided this, because it is yet one more barrier to
 communication when many are already unfamiliar with IRC access.
 However, having channels filled with irrelevant messages is also not
 very accessible.

 This is temporarily enabled in #openstack-infra for the time being, so
 we can co-ordinate without interruption.

 Thankfully AFAIK we have not needed an abuse policy on this before;
 but I guess we are at the point where we need some sort of coordinated
 response.

 I'd suggest to start, people with an interest in a channel can request
 +r from an IRC admin in #openstack-infra and we track it at [2]

 Longer term ... suggestions welcome? :)


Move to Slack? We can provide auto-sending email invitations for joining
by clicking a button on some page at openstack.org. It will not add more
barriers for new contributors and, at the same time, this way will give
some base filtering by emails at least.


A few potential barriers with slack or similar solutions: lack of FOSS
desktop
clients (correct me if I'm wrong),



The second link from a google search gives an open source client written in
python https://github.com/raelgc/scudcloud . Also, there is something which
is written in golang.


complete lack of any console clients (ditto),



Again, google gives several ones as first results -
https://github.com/evanyeung/terminal-slack
https://github.com/erroneousboat/slack-term

serious limits on free (as in beer) tariff plans.




I can make an assumption that, for marketing reasons, Slack Inc could offer
an extended Free plan.
But anyway, even with the default one, the only thing which would limit us is
`10,000 searchable messages`, which is bigger than 0 (freenode doesn't store
messages).


Why do I like Slack? Because a lot of people are familiar with it (a lot of
companies use it, as do some open source communities, like k8s).

PS: I realize that the OpenStack Community will never move away from Freenode
and IRC, but I do not want to stay silent.


We are unlikely to select slack because the platform itself is
proprietary, even if there are OSS clients. That said, there have
been some discussions about platforms such as Matrix, which is
similar to slack and also OSS.

I think the main thing that is blocking any such move right now is
the fact that we're lacking someone with time to evaluate the tool
to see what it would take for us to run it. If you're interested in
this, maybe you can work with the infrastructure team to plan and
implement that evaluation?


In Vancouver I signed up to work on this - but so far it has been lower 
in priority than other tasks. I'll circle around with people today and 
see what we think about relative priorities.


That said - Doug's invitation is quite valid - help would be welcome and 
I'd be happy to connect with someone who has time to help with this.




Re: [openstack-dev] [all] Ongoing spam in Freenode IRC channels

2018-08-01 Thread Monty Taylor

On 08/01/2018 06:22 AM, Luigi Toscano wrote:

On Wednesday, 1 August 2018 12:49:13 CEST Andrey Kurilin wrote:

Hey Ian and stackers!

ср, 1 авг. 2018 г. в 8:45, Ian Wienand :

Hello,

It seems freenode is currently receiving a lot of unsolicited traffic
across all channels.  The freenode team are aware [1] and doing their
best.

There are not really a lot of options.  We can set "+r" on channels
which means only nickserv registered users can join channels.  We have
traditionally avoided this, because it is yet one more barrier to
communication when many are already unfamiliar with IRC access.
However, having channels filled with irrelevant messages is also not
very accessible.

This is temporarily enabled in #openstack-infra for the time being, so
we can co-ordinate without interruption.

Thankfully AFAIK we have not needed an abuse policy on this before;
but I guess we are the point we need some sort of coordinated
response.

I'd suggest to start, people with an interest in a channel can request
+r from an IRC admin in #openstack-infra and we track it at [2] >>>
Longer term ... suggestions welcome? :)


Move to Slack? We can auto-send email invitations for joining via a
button on some page at openstack.org. It would not add another barrier
for new contributors and, at the same time, it would give at least some
basic filtering by email address.


slack is pretty unworkable for many reasons. The biggest of them is that 
it is not Open Source and we don't require OpenStack developers to use 
proprietary software to work on OpenStack.


The quality of slack that makes it effective at fighting spam is also 
the quality that makes it toxic as a community platform - the need for 
an invitation and being structured as silos.


Even if we were to decide to abandon our Open Source principles and 
leave behind those in our contributor base who believe that Free 
Software Needs Free Tools [1] - moving to slack would be a GIANT 
undertaking. As such, it would not be a very effective way to deal with 
this current spam storm.



No, please no. If we need to move to another service, better go to a FLOSS
one, like Matrix.org, or others.


We had some discussion in Vancouver about investigating the use of 
Matrix. We are a VERY large community, so we need to do scale and 
viability testing before it's even a worthy topic to raise with the TC 
and the community for consideration. If we did, we'd aim to run our own 
home server.


However, it's worth noting that matrix is not immune to spam. As an open 
federated protocol, it's a target as well. Running our own home server 
might give us some additional tools - but it might not, and we might be 
in the same scenario except now we're running another service and we had 
the pain of moving.


All that to say though, matrix seems like the best potential option 
available that meets the largest number of desires from our user base. 
Once we've checked it out for viability it might be worth discussing.


As above, any effort there is a pretty giant one that will require a 
large amount of planning, a pretty sizeable amount of technical 
preparation and would be disruptive at the least, I don't think that'll 
help us with the current spam storm though.


Monty

[1] https://mako.cc/writing/hill-free_tools.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements][release] FFE for openstacksdk 0.17.1

2018-07-30 Thread Monty Taylor

Heya,

I'd like to request a FFE to release 0.17.1 of openstacksdk from 
stable/rocky. The current rocky release, 0.17.0, added a feature (being 
able to pass data directly to an object upload rather than requiring a 
file or file-like object) - but it is broken if you pass an iterator 
because it (senselessly) tries to run len() on the data parameter.


The new feature is not used anywhere in OpenStack yet. The first 
consumer (and requestor of the feature) is Infra, who are looking at 
using it as part of our efforts to start uploading build log files to swift.


We should not need a g-r bump - since nothing in OpenStack uses the 
feature yet, none of the OpenStack projects need their depends changed. 
OTOH, openstacksdk is a thing we expect end-users to use, and once they 
see the shiny new feature they might use it - and then be sad that it's 
half broken.
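
The failure mode described above can be sketched in a few lines. This is an illustration, not the actual openstacksdk code: `upload` and `chunks` are hypothetical names, and the guard shown (falling back when `len()` raises) is one plausible shape of the fix.

```python
# Minimal sketch of the bug: len() works on bytes/str but raises
# TypeError on a generator, so code that unconditionally calls
# len(data) breaks for streamed uploads.
def upload(data):
    try:
        size = len(data)   # fails for iterators/generators
    except TypeError:
        size = None        # unknown length: would fall back to chunked upload
    return size

def chunks():
    yield b"part-1"
    yield b"part-2"

print(upload(b"all-at-once"))  # 11
print(upload(chunks()))        # None
```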


Thanks!
Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][release][election][adjutant] Welcome Adjutant as an official project!

2018-07-26 Thread Monty Taylor

On 07/17/2018 08:19 PM, Adrian Turjak wrote:

Thanks!

As the current project lead for Adjutant I welcome the news, and while I
know it wasn't an easy process would like to thank everyone involved in
the voting. All the feedback (good and bad) will be taken on board to
make the service as suited for OpenStack as possible in the space we've
decided it can fit.

Now to onboarding, choosing a suitable service type, and preparing for a
busy Stein cycle!


Welcome!

I believe you're already aware, but once you have chosen a service type, 
make sure to submit a patch to 
https://git.openstack.org/cgit/openstack/service-types-authority



On 18/07/18 05:52, Doug Hellmann wrote:

The Adjutant team's application [1] to become an official project
has been approved. Welcome!

As I said on the review, because it is past the deadline for Rocky
membership, Adjutant will not be considered part of the Rocky
release, but a future release can be part of Stein.

The team should complete the onboarding process for new projects,
including holding PTL elections for Stein, setting up deliverable
files in the openstack/releases repository, and adding meeting
information to eavesdrop.openstack.org.

I have left a comment on the patch setting up the Stein election
to ask that the Adjutant team be included.  We can also add Adjutant
to the list of projects on docs.openstack.org for Stein, after
updating your publishing job(s).

Doug

[1] https://review.openstack.org/553643

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sdk] PTL Candidacy for the Stein cycle

2018-07-26 Thread Monty Taylor

Hi everybody!

I'd like to run for PTL of OpenStackSDK again.

This last cycle was great. os-client-config is now just a thin wrapper 
around openstacksdk. shade still has a bunch of code, but the shade 
OpenStackCloud object is a subclass of openstack.connection.Connection, 
so we're in good position to turn shade into a thin wrapper.

Ansible and nodepool are now using openstacksdk directly rather than
shade and os-client-config. python-openstackclient is also now using
openstacksdk for config instead of os-client-config. We were able to 
push some of the special osc code down into keystoneauth so that it gets 
its session directly from openstacksdk now too.


We plumbed os-service-types in to the config layer so that people can
use any of the official aliases for a service in their config. 
Microversion discovery was added - and we actually even are using it for 
at least one method (way to be excited, right?)


I said last time that we needed to get a 1.0 out during this cycle and 
we did not accomplish that.


Moving forward my number one priority for the Stein cycle is to get the 
1.0 release cut, hopefully very early in the cycle. We need to finish 
plumbing discovery through everywhere, and we need to rationalize the 
Resource objects and the shade munch objects. As soon as those two are 
done, 1.0 here we come.


After we've got a 1.0, I think we should focus on getting 
python-openstackclient starting to use more of openstacksdk. I'd also 
like to
start getting services using openstacksdk so that we can start reducing 
the number of moving parts everywhere.


We have cross-testing with the upstream Ansible modules. We should move 
the test playbooks themselves out of the openstacksdk repo and into the 
Ansible repo.


The caching layer needs an overhaul. What's there was written with
nodepool in mind, and is **heavily** relied on in the gate. We can't 
break that, but it's not super friendly for people who are not nodepool 
(which is most people)


I'd like to start moving methods from the shade layer into the sdk
proxy layer and, where it makes sense, make the shade layer simple 
passthrough calls to the proxy layer. We really shouldn't have two 
different methods for uploading images to a cloud, for instance.


Finally, we have some AMAZING docs - but with the merging of shade and
os-client-config the overview leaves much to be desired in terms of 
leading people towards making the right choices. It would be great to 
get that cleaned up.


I'm sure there will be more things to do too. There always are.

In any case, I'd love to keep helping to pushing these rocks uphill.

Thanks!
Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions

2018-07-12 Thread Monty Taylor

On 07/12/2018 11:05 AM, Matthew Thode wrote:

On 18-07-12 13:52:56, Jeremy Stanley wrote:

On 2018-07-12 06:37:52 -0700 (-0700), Clark Boylan wrote:
[...]

I think most of the problems with Fedora stability are around
bringing up a new Fedora every 6 months or so. They tend to change
sufficiently within that time period to make this a fairly
involved exercise. But once working they work for the ~13 months
of support they offer. I know Paul Belanger would like to iterate
more quickly and just keep the most recent Fedora available
(rather than ~2).

[...]

Regardless its instability/churn makes it unsuitable for stable
branch jobs because the support lifetime of the distro release is
shorter than the maintenance lifetime of our stable branches. Would
probably be fine for master branch jobs but not beyond, right?


I'm of the opinion that we should decouple from distro supported python
versions and rely on what versions upstream python supports (longer
lifetimes than our releases iirc).


Yeah. I don't want to boil the ocean too much ... but as I mentioned in 
my other reply, I'm very pleased with pyenv. I would not be opposed to 
switching to that for all of our python installation needs. OTOH, I'm 
not going to push for it, nor do I have time to implement such a switch. 
But I'd vote for it and cheer someone on if they did.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions

2018-07-12 Thread Monty Taylor

On 07/12/2018 08:37 AM, Clark Boylan wrote:

On Wed, Jul 11, 2018, at 9:34 PM, Tony Breeds wrote:

Hi Folks,
 We have a bit of a problem in openstack/requirements and I'd like to
chat about it.

Currently when we generate constraints we create a venv for each
(system) python supplied on the command line, install all of
global-requirements into that venv and capture the pip freeze.

Where this falls down is if we want to generate a freeze for python 3.4
and 3.5 we need an image that has both of those.  We cheated and just
'clone' them so if python3 is 3.4 we copy the results to 3.5 and vice
versa.  This kinda worked for a while but it has drawbacks.
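
The per-interpreter workflow described above (build a venv, install the requirements list, capture the freeze) can be sketched roughly like this. File names and paths are illustrative, not the actual openstack/requirements tooling:

```python
# Rough sketch: for a given interpreter, build a venv, install a
# requirements file into it, and return the "pip freeze" output.
import os
import subprocess
import tempfile

def freeze_for(python, requirements):
    with tempfile.TemporaryDirectory() as tmp:
        venv = os.path.join(tmp, "venv")
        # Create a venv using the requested interpreter.
        subprocess.run([python, "-m", "venv", venv], check=True)
        pip = os.path.join(venv, "bin", "pip")
        # Install everything from the requirements list.
        subprocess.run([pip, "install", "-r", requirements], check=True)
        # Capture the resulting pinned versions.
        out = subprocess.run([pip, "freeze"], check=True,
                             capture_output=True, text=True)
        return out.stdout

# e.g. freeze_for("/usr/bin/python3.6", "global-requirements.txt")
```

The core problem in the thread is the first argument: you need each `python3.x` binary to actually exist on the image running this.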

I can see a few of options:

1. Build pythons from source and use that to construct the venv
[please no]


Fungi mentions that 3.3 and 3.4 don't build easily on modern linux distros. 
However, 3.3 and 3.4 are also unsupported by Python at this point, maybe we can 
ignore them and focus on 3.5 and forward? We don't build new freeze lists for 
the stable branches, this is just a concern for master right?


FWIW, I use pyenv for python versions on my laptop and love it. I've 
completely given up on distro-provided python for my own usage.




2. Generate the constraints in an F28 image.  My F28 has ample python
versions:
  - /usr/bin/python2.6
  - /usr/bin/python2.7
  - /usr/bin/python3.3
  - /usr/bin/python3.4
  - /usr/bin/python3.5
  - /usr/bin/python3.6
  - /usr/bin/python3.7
I don't know how valid this still is but in the past fedora images
have been seen as unstable and hard to keep current.  If that isn't
still the feeling then we could go down this path.  Currently there a
few minor problems with bindep.txt on fedora and generate-constraints
doesn't work with py3 but these are pretty minor really.


I think most of the problems with Fedora stability are around  bringing up a 
new Fedora every 6 months or so. They tend to change sufficiently within that 
time period to make this a fairly involved exercise. But once working they work 
for the ~13 months of support they offer. I know Paul Belanger would like to 
iterate more quickly and just keep the most recent Fedora available (rather 
than ~2).



3. Use docker images for python and generate the constraints with
them.  I've hacked up something we could use as a base for that in:
   https://review.openstack.org/581948

There are lots of open questions:
  - How do we make this nodepool/cloud provider friendly ?
* Currently the containers just talk to the main debian mirrors.
  Do we have debian packages? If so we could just do sed magic.


http://$MIRROR/debian (http://mirror.dfw.rax.openstack.org/debian for example) 
should be a working amd64 debian package mirror.


  - Do/Can we run a registry per provider?


We do not, but we do have a caching dockerhub registry proxy in each 
region/provider. http://$MIRROR:8081/registry-1.docker if using older docker 
and http://$MIRROR:8082 for current docker. This was a compromise between 
caching the Internet and reliability.


there is also

https://review.openstack.org/#/c/580730/

which adds a role to install docker and configure it to use the correct 
registry.



  - Can we generate and caches these images and only run pip install -U
g-r to speed up the build


Between cached upstream python docker images and prebuilt wheels mirrored in 
every cloud provider region I wonder if this will save a significant amount of 
time? May be worth starting without this and working from there if it remains 
slow.


  - Are we okay with using docker this way?


Should be fine, particularly if we are consuming the official Python images.


Agree. python:3.6 and friends are great.



I like #2 the most but I wanted to seek wider feedback.


I think each proposed option should work as long as we understand the 
limitations each presents. #2 should work fine if we have individuals 
interested and able to spin up new Fedora images and migrate jobs to that image 
after releases happen.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] swift containers.

2018-07-09 Thread Monty Taylor

On 07/09/2018 02:46 AM, jayshankar nair wrote:





Hi,

I am unable to create containers in object store.

"Unable to get the Swift service info".
"Unable to get the swift container listing".

My horizon is running on 192.168.0.19. My swift is running on 
192.168.0.12(how can i change it).


I am trying to list the container with python sdk. Is this the right api.

from openstack import connection
conn = connection.Connection(auth_url="http://192.168.0.19:5000/v3",
                       project_name="admin", username="admin",
                       password="6908a8d218f843dd",
                       user_domain_id="default",
                       project_domain_id="default",
                       identity_api_version=3)


That looks fine (although you don't need identity_api_version=3 - you 
are specifying domain_ids - it'll figure things out)


Can you add:

import openstack
openstack.enable_logging(http_debug=True)

before your current code and paste the output?


for container in conn.object_store.containers():
    print(container.name)

I need documentation of python sdk


https://docs.openstack.org/openstacksdk/latest/


Thanks,
Jayshankar


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http

2018-07-05 Thread Monty Taylor

On 07/05/2018 01:55 PM, melanie witt wrote:

+openstack-dev@

On Wed, 4 Jul 2018 14:50:26 +, Bogdan Katynski wrote:
But, I can not use nova command, endpoint nova have been redirected 
from https to http. Here: http://prntscr.com/k2e8s6 (command: nova 
--insecure service list)
First of all, it seems that the nova client is hitting /v2.1 instead 
of /v2.1/ URI and this seems to be triggering the redirect.


Since openstack CLI works, I presume it must be using the correct URL 
and hence it’s not getting redirected.


And this is error log: Unable to establish connection 
to http://192.168.30.70:8774/v2.1/: ('Connection aborted.', 
BadStatusLine("''",))
Looks to me that nova-api does a redirect to an absolute URL. I 
suspect SSL is terminated on the HAProxy and nova-api itself is 
configured without SSL so it redirects to an http URL.


In my opinion, nova would be more load-balancer friendly if it used a 
relative URI in the redirect but that’s outside of the scope of this 
question and since I don’t know the context behind choosing the 
absolute URL, I could be wrong on that.


Thanks for mentioning this. We do have a bug open in python-novaclient 
around a similar issue [1]. I've added comments based on this thread and 
will consult with the API subteam to see if there's something we can do 
about this in nova-api.


A similar thing came up the other day related to keystone and version 
discovery. Version discovery documents tend to return full urls - even 
though relative urls would make public/internal API endpoints work 
better. (also, sometimes people don't configure things properly and the 
version discovery url winds up being incorrect)


In shade/sdk - we actually construct a wholly-new discovery url based on 
the url used for the catalog and the url in the discovery document since 
we've learned that the version discovery urls are frequently broken.


This is problematic because SOMETIMES people have public urls deployed 
as a sub-url and internal urls deployed on a port - so you have:


Catalog:
public: https://example.com/compute
internal: https://compute.example.com:1234

Version discovery:
https://example.com/compute/v2.1

When we go to combine the catalog url and the versioned url, if the user 
is hitting internal, we produce 
https://compute.example.com:1234/compute/v2.1 - because we have no way 
of systematically knowing that /compute should also be stripped.
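
The join problem can be shown with just the standard library. The URLs are the hypothetical ones from this thread, and keeping the catalog's scheme/netloc plus the discovery document's path is a simplified sketch of the combination strategy, not the exact shade/sdk code:

```python
# Combine the internal catalog endpoint with the path from a discovery
# document that (incorrectly) points at the public sub-url deployment.
from urllib.parse import urlsplit, urlunsplit

catalog_internal = "https://compute.example.com:1234"
discovery_public = "https://example.com/compute/v2.1"

cat = urlsplit(catalog_internal)
disc = urlsplit(discovery_public)
# Keep scheme/host from the catalog, path from the discovery document:
combined = urlunsplit((cat.scheme, cat.netloc, disc.path, "", ""))
print(combined)
# -> https://compute.example.com:1234/compute/v2.1
# The stray "/compute" prefix can't be detected mechanically here, which
# is why relative URLs in discovery documents would be friendlier.
```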


VERY LONG WINDED WAY of saying 2 things:

a) Relative URLs would be *way* friendlier (and incidentally are 
supported by keystoneauth, openstacksdk and shade - and are written up 
as being a thing people *should* support in the documents about API 
consumption)


b) Can we get agreement that changing behavior to return or redirect to 
a relative URL would not be considered an api contract break? (it's 
possible the answer to this is 'no' - so it's a real question)
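
To illustrate point (a): HTTP clients resolve a relative Location header against the URL they actually requested, so the external scheme and host survive the redirect. A small demo with stdlib `urljoin` (the URLs are hypothetical, reusing this thread's addresses):

```python
from urllib.parse import urljoin

# What the client actually requested, through the load balancer:
requested = "https://api.example.org:443/v2.1"

# Absolute Location from a backend that only knows its internal address
# sends the client straight to the backend, over plain http:
print(urljoin(requested, "http://192.168.30.70:8774/v2.1/"))
# -> http://192.168.30.70:8774/v2.1/

# A relative Location preserves the scheme and public host:
print(urljoin(requested, "/v2.1/"))
# -> https://api.example.org:443/v2.1/
```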


Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [osc][python-openstackclient] osc-included image signing

2018-06-29 Thread Monty Taylor

On 06/29/2018 05:38 AM, Josephine Seifert wrote:

Hello Dean,

thanks for your code comments so far.


Looking at the changes you have to cursive, if that is all you need
from it those bits could easily go somewhere in osc or osc-lib if you
don't also need them elsewhere.

There lies the problem, because we also want to implement signature
generation in nova for the "server image create". Do you have a
suggestion, where we could implement this instead of cursive?


I was just chatting with Dean about this in IRC. I'd like to suggest 
putting the image signing code into openstacksdk. Users of openstacksdk 
would almost certainly also want to be able to sign images they're going 
to upload. That would take care of having it in a library and also 
having that library be something OSC depends on.


We aren't using SDK in nova yet - but it shouldn't be hard to get some 
POC patches up to include it ... and to simplify a few other things.


I'd be more than happy to work with you on getting the code in.

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstackclient][openstacksdk] why does openstackclient rely on openstacksdk for get a network client

2018-06-20 Thread Monty Taylor

On 06/20/2018 10:23 AM, Dean Troyer wrote:

On Tue, Jun 19, 2018 at 9:42 PM, zijian1...@163.com  wrote:

Thanks for replying, just want to confirm: you mentioned "We have intended to
migrate everything to use
OpenStackSDK", the current use of python-*client is:
1. OSC
2. all services that need to interact with other services (e.g.:  nova
libraries: self.volume_api = volume_api or cinder.API())
Do you mean that both of the above will be migrated to use the OpenStack
SDK?


I am only directly speaking for OSC.  Initially we did not think that
services using the SDK would be feasible, Monty has taken it to a
place where that should now be a possibility.  I am willing to find
out that doing so is a good idea. :)


Yes, I think this is a good idea to explore - but I also think we should 
be conservative with the effort. There are things we'll need to learn 
about and improve.


We're VERY close to time for making the push to get OSC converted (we 
need to finish one more patch for version discovery / microversion 
support first - and I'd really like to get an answer for the 
Resource/munch/shade interaction in - but that's honestly realistically 
like 2 or maybe 3 patches, even though they will be 2 or 3 complex patches.


I started working a bit on osc-lib patches - I'll try to get those 
pushed up soon.


Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstackclient][openstacksdk] why does openstackclient rely on openstacksdk for get a network client

2018-06-19 Thread Monty Taylor

On 06/19/2018 03:51 PM, Dean Troyer wrote:

On Tue, Jun 19, 2018 at 11:15 AM, 李健  wrote:

So, my question is, why does the network service not use the
python2-neutronclient to get the client like other core projects, but
instead uses another separate project(openstacksdk)?


There were multiple reasons to not use neutron client lib for OSC and
the SDK was good enough at the time to use in spite of not being at
a 1.0 release.  We have intended to migrate everything to use
OpenStackSDK and eliminate OSC's use of the python-*client libraries
completely.  We are waiting on an SDK 1.0 release, it has stretched on
for years longer than originally anticipated but the changes we have
had to accommodate in the network commands in the past convinced me to
wait until it was declared stable, even though it has been nearly
stable for a while now.


Soon. Really soon. I promise.


My personal opinion, openstacksdk is a project that can be used
independently, it is mainly to provide a unified sdk for developers, so
there should be no interdependence between python-xxxclient and
openstacksdk, right?


Correct, OpenStackSDK has no dependency on any of the python-*client
libraries..  Its primary dependency is on keystoneauth for the core
authentication logic, that was long ago pulled out of the keystone
client package.

dt




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] tripleo gate is blocked - please read

2018-06-14 Thread Monty Taylor

On 06/13/2018 07:50 PM, Emilien Macchi wrote:
TL;DR: gate queue was 25h+, we put all patches from gate on standby, do 
not restore/recheck until further announcement.


We recently enabled the containerized undercloud for multinode jobs and 
we believe this was a bit premature as the container download process 
wasn't optimized so it's not pulling the mirrors for the same containers 
multiple times yet.
It caused the job runtime to increase and probably caused the docker.io 
mirrors hosted by OpenStack Infra to be a bit slower 
to provide the same containers multiple times. The time taken to prepare 
containers on the undercloud and then for the overcloud caused the jobs 
to randomly timeout therefore the gate to fail in a high amount of 
times, so we decided to remove all jobs from the gate by abandoning the 
patches temporarily (I have them in my browser and will restore when 
things are stable again, please do not touch anything).


Steve Baker has been working on a series of patches that optimize the 
way we prepare the containers but basically the workflow will be:
- pull containers needed for the undercloud into a local registry, using 
infra mirror if available

- deploy the containerized undercloud
- pull containers needed for the overcloud minus the ones already pulled 
for the undercloud, using infra mirror if available

- update containers on the overcloud
- deploy the containerized undercloud


That sounds like a great improvement. Well done!

With that process, we hope to reduce the runtime of the deployment and 
therefore reduce the timeouts in the gate.
To enable it, we need to land in that order: 
https://review.openstack.org/#/c/571613/, 
https://review.openstack.org/#/c/574485/, 
https://review.openstack.org/#/c/571631/ and 
https://review.openstack.org/#/c/568403.


In the meantime, we are disabling the containerized undercloud recently 
enabled on all scenarios: https://review.openstack.org/#/c/575264/ for 
mitigation with the hope to stabilize things until Steve's patches land.
Hopefully, we can merge Steve's work tonight/tomorrow and re-enable the 
containerized undercloud on scenarios after checking that we don't have 
timeouts and reasonable deployment runtimes.


That's the plan we came with, if you have any question / feedback please 
share it.

--
Emilien, Steve and Wes


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] keystoneauth version auto discovery for internal endpoints in queens

2018-05-12 Thread Monty Taylor

On 05/11/2018 03:37 PM, Vlad Gusev wrote:

Hello.

We faced a bug in keystoneauth, which haven't existed before Queens.


Sorry about that.

In our OpenStack deployments we use urls like http://controller:5000/v3 
for internal and admin endpoints and urls like 
https://api.example.org/identity/v3 for public endpoints.


Thank you for using suburl deployment for your public interface and not 
the silly ports!!!


We set option 
public_endpoint in [default] section of the 
keystone.conf/nova.conf/cinder.conf/glance.conf/neutron.conf. For 
example, for keystone it is 
'public_endpoint=https://api.example.org/identity/'.


Since keystoneauth 3.2.0 or commit 
https://github.com/openstack/keystoneauth/commit/8b8ff830e89923ca6862362a5d16e496a0c0093c 
all internal client requests to the internal endpoints (for example, 
openstack server list from controller node) fail with 404 error, because 
it tries to do auto discovery at the http://controller:5000/v3. It gets 
{"href": "https://api.example.org/identity/v3/", "rel": "self"} because 
of the public_endpoint option, and then in function 
_combine_relative_url() (keystoneauth1/discover.py:405) keystoneauth 
combines http://controller:5000/ with the path from public href. So 
after auto discovery attempt it goes to the wrong path 
http://controller:5000/identity/v3/


Ok. I'm going to argue that there are bugs on both the server side AND 
in keystoneauth. I believe I know how to fix the keystoneauth one- but 
let me describe why I think the server is broken as well, and then we 
can figure out how to fix that. I'm going to describe it in slightly 
excruciating detail, just to make sure we're all on the same page about 
mechanics that may be going on behind the scenes.


The user has said:

  I want the internal interface of v3 of the identity service

First, the identity service has to be found in the catalog. Looking in 
the catalog, we find this:


  {
"endpoints": [
  {
"id": "4deb4d0504a044a395d4480741ba628c",
"interface": "public",
"region": "RegionOne",
"url": "https://api.example.com/identity"
  },
  {
"id": "012322eeedcd459edabb4933021112bc",
"interface": "internal",
"region": "RegionOne",
"url": "http://controller:5000/v3"
  }
],
"name": "keystone",
"type": "identity"
  },

We've found the entry for 'identity' service, and looking at the 
endpoints we see that the internal endpoint is:


  http://controller:5000/v3

The next step is version discovery, because the user wants version 3 of 
the api. (I'm skipping possible optimizations that can be applied on 
purpose)


To do version discovery, one does a GET on the endpoint found in the 
catalog, so GET http://controller:5000/v3. That returns:


{
  "versions": {
"values": [
  {
"status": "stable",
"updated": "2016-04-04T00:00:00Z",
"media-types": [
  {
"base": "application/json",
"type": "application/vnd.openstack.identity-v3+json"
  }
],
"id": "v3.6",
"links": [
  {
"href": "https://api.example.com/identity/v3/",
"rel": "self"
  }
]
  }
]
  }
}

Here is the server-side bug. A GET on the discovery document on the 
internal endpoint returned an endpoint for the public interface. That is 
incorrect information. GET http://controller:5000/v3 should return either:


{
  "versions": {
"values": [
  {
"status": "stable",
"updated": "2016-04-04T00:00:00Z",
"media-types": [
  {
"base": "application/json",
"type": "application/vnd.openstack.identity-v3+json"
  }
],
"id": "v3.6",
"links": [
  {
"href": "http://controller:5000/v3/;,
"rel": "self"
  }
]
  }
]
  }
}

or

{
  "versions": {
"values": [
  {
"status": "stable",
"updated": "2016-04-04T00:00:00Z",
"media-types": [
  {
"base": "application/json",
"type": "application/vnd.openstack.identity-v3+json"
  }
],
"id": "v3.6",
"links": [
  {
"href": "/v3/",
"rel": "self"
  }
]
  }
]
  }
}

That's because the discovery documents are maps to what the user wants. 
The user needs to be able to follow them automatically.
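To make that concrete, here is a minimal sketch (not the actual keystoneauth code) of what "following the map" amounts to: the client resolves the discovery document's 'self' href against the endpoint it pulled from the catalog, so a host-relative href keeps the user on the interface they asked for, while an absolute public href silently bounces them to the public interface:

```python
from urllib.parse import urljoin

def follow_self_link(catalog_url, self_href):
    # Resolve the discovery document's 'self' href against the
    # endpoint the client found in the catalog.
    return urljoin(catalog_url, self_href)

# Server returns a host-relative href: the internal-interface user
# stays on the internal host, as requested.
print(follow_self_link('http://controller:5000/v3', '/v3/'))
# -> http://controller:5000/v3/

# Server returns the absolute public href: the user is silently
# redirected to the public interface.
print(follow_self_link('http://controller:5000/v3',
                       'https://api.example.com/identity/v3/'))
# -> https://api.example.com/identity/v3/
```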


NOW - there is also a keystoneauth bug in play here that combined with 
this server-side bug have produced the issue you have.


That is in the way we do the catalog / discovery URL join.

First of all - we do the catalog / discovery URL join because of a 
frequently occurring deployment bug in the other direction. That is, it 
is an EXTREMELY common misconfiguration for the discovery url to return 
the internal url (this is what happens if 

Re: [openstack-dev] [sdk] issues with using OpenStack SDK Python client

2018-05-04 Thread Monty Taylor

On 05/04/2018 01:34 PM, gerard.d...@wipro.com wrote:

Hi everybody,

As a bit of a novice, I'm trying to use OpenStack SDK 0.13 in an 
OPNFV/ONAP project (Auto).


Yay! Welcome.

I'm able to use the compute and network proxies, but have problems with 
the identity proxy,


so I can't create projects and users.

With network, I can create a network, a router, router interfaces, but 
can't add a gateway to a router. Also, deleting a router fails.


With compute, I can't create flavors, and not sure if there is a 
"create_image" method ?


Specific issues are listed below with more details.

Any pointers (configuration, installation, usage, ...) and URLs to 
examples and documentation would be welcome.


For documentation, I've been looking mostly at:

https://docs.openstack.org/openstacksdk/latest/user/proxies/network.html

https://docs.openstack.org/openstacksdk/latest/user/proxies/compute.html

https://docs.openstack.org/openstacksdk/latest/user/proxies/identity_v3.html 


Yes, that's all good documentation to follow.



Thanks in advance,

Gerard

For all code, import statement and Connection creation is as follows 
(constants defined before):


import openstack

conn = openstack.connect(cloud=OPENSTACK_CLOUD_NAME, 
region_name=OPENSTACK_REGION_NAME)


1) problem adding a gateway (external network) to a router:

not sure how to build a dictionary body (couldn't find examples online)

tried this:

network_dict_body = {'network_id': public_network.id}

and this (from looking at a router printout):

network_dict_body = {

     'external_fixed_ips': [{'subnet_id' : public_subnet.id}],

     'network_id': public_network.id

}

in both cases, tried this command:

conn.network.add_gateway_to_router(onap_router,network_dict_body)

getting the error:

Exception:  add_gateway_to_router() takes 2 
positional arguments but 3 were given


The signature for add_gateway_to_router looks like this:

  def add_gateway_to_router(self, router, **body):

the ** indicate that it's looking for the body as keyword arguments. 
Change your code to:


  conn.network.add_gateway_to_router(onap_router, **network_dict_body)

and it should work.

You could also, should you feel like it, do:

  conn.network.add_gateway_to_router(
  onap_router,
  external_fixed_ips=[{'subnet_id' : public_subnet.id}],
  network_id=public_network.id
  )

which is basically what the ** in **network_dict_body is doing.
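If the positional-vs-keyword distinction is new, this standalone sketch (a stand-in signature without the self parameter, so the argument counts are one lower than in the traceback above) reproduces the error and the fix:

```python
def add_gateway(router, **body):
    # 'body' collects keyword arguments only; a dict passed
    # positionally lands in neither parameter.
    return (router, body)

payload = {'network_id': 'net-1'}

# Passing the dict positionally raises the same class of TypeError:
try:
    add_gateway('router-1', payload)
except TypeError as exc:
    print(exc)  # add_gateway() takes 1 positional argument but 2 were given

# Unpacking the dict into keyword arguments works:
router, body = add_gateway('router-1', **payload)
print(body)  # {'network_id': 'net-1'}
```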


printing the router gave this:

openstack.network.v2.router.Router(distributed=False, 
tenant_id=03aa47d3bcfd48199e0470b1c86a7f5b, 
created_at=2018-05-01T01:16:08Z, external_gateway_info=None, 
status=ACTIVE, availability_zone_hints=[], ha=False, tags=[], 
description=Router created for ONAP, admin_state_up=True, revision=1, 
flavor_id=None, id=b923fba5-5027-47b6-b679-29c331ac1aba, 
updated_at=2018-05-01T01:16:08Z, routes=[], name=onap_router, 
availability_zones=[])


2) problem deleting routers:

onap_router = conn.network.find_router(ONAP_ROUTER_NAME)

conn.network.delete_router(onap_router.id)

(same if conn.network.delete_router(onap_router))

getting the error:

Exception:  'NoneType' object has no attribute 
'_body'


I'm not sure yet what's causing this - it's the same issue you're having 
below with flavors - I'm looking in to it.


Do you have tracebacks for the exception?


printing the router that had been created gave this:

openstack.network.v2.router.Router(description=Router created for ONAP, 
status=ACTIVE, routes=[], updated_at=2018-05-01T01:16:11Z, ha=False, 
id=b923fba5-5027-47b6-b679-29c331ac1aba, external_gateway_info=None, 
admin_state_up=True, availability_zone_hints=[], 
tenant_id=03aa47d3bcfd48199e0470b1c86a7f5b, name=onap_router, 
availability_zones=['nova'], tags=[], revision=3, distributed=False, 
flavor_id=None, created_at=2018-05-01T01:16:08Z)


3) problem reaching the identity service:


There is an underlying bug/deficiency in the code that's next on my list 
to fix, but I'm waiting to land a patch to keystoneauth first. For now, 
add identity_api_version=3 to your openstack.connect line and it should 
work.


Also - sorry about that - that's a terrible experience and is definitely 
not the way it should/will work.


(although I can reach compute and network services, and although there 
are users and projects in the Openstack instance: "admin" and "service" 
projects, "ceilometer", "nova", etc. (and "admin") users)


     print("\nList Users:")

     i=1

     for user in conn.identity.users():

     print('User',str(i),'\n',user,'\n')

     i+=1

getting the error:

List Users:

Exception:  
NotFoundException: 404


     print("\nList Projects:")

     i=1

     for project in conn.identity.projects():

     print('Project',str(i),'\n',project,'\n')

     i+=1

also getting an error, but not the same as users:

List Projects:

Exception:  'Proxy' object has no attribute 
'projects'


if trying to create a project:

     onap_project 

[openstack-dev] [requirements][horizon][neutron] plugins depending on services

2018-04-25 Thread Monty Taylor

Hi everybody,

We've been working on navigating through an interesting situation 
over the past few months, but there isn't a top-level overview of what's 
going on with it. That's my bad - I've been telling AJaeger I was going 
to send an email out for a while.


projects with test requirements on git repo urls of other projects
-------------------------------------------------------------------

There are a bunch of projects that need, for testing purposes, to depend 
on other projects. The majority are either neutron or horizon plugins, 
but conceptually there is nothing neutron or horizon specific about the 
issue. The problem they're trying to deal with is that they are a plugin 
to a service and they need to be able to import code from the service 
they are a plugin to in their unit tests.


To make things even more complicated, some of the plugins actually 
depend on each other for real, not just as a "we need this for testing" 
relationship.


There is trouble in paradise though - which is that we don't allow git 
urls in requirements files. To work around this, the projects in 
question added additional pip install lines to a tox_install.sh script - 
essentially bypassing the global-requirements process and system 
completely.


This went unnoticed in a general sense until we started working through 
removing the use of zuul-cloner which is not needed any longer in Zuul v3.


unwinding things

----------------
There are a few different options, but it's important to keep in mind 
that we ultimately want all of the following:


* The code works
* Tests can run properly in CI
* "Depends-On" works in CI so that you can test changes cross-repo
* Tests can run properly locally for developers
* Deployment requirements are accurately communicated to deployers

The approach so far
-------------------

The approach so far has been releasing service projects to PyPI and 
reworking the projects to depend on those releases.


This approach takes advantage of the tox-siblings feature in the gate to 
ensure we're cross-testing master of projects with each other.


tox-siblings
------------

There is a feature in the Zuul tox jobs we refer to as "tox-siblings" 
(this is because historically - wow we have historical context for zuul 
v3 now - it was implemented as a separate role) What it does is ensure 
that if you are running a tox job and you add additional projects to 
required-projects in the job config, that the git versions of those 
projects will be installed into the tox virtualenv - but only for 
projects that would have been installed by tox otherwise. This way 
required-projects is both safe to use and has the effect you'd expect.


tox-siblings is intended to enable ADDITIONALLY cross-testing projects 
that otherwise have a normal dependency relationship in the gate. People 
have been adding jobs like cross-something-something or something-tips 
in an ad-hoc manner for a while - and in many cases the git parts of 
that were actually somewhat not correct - so this is an attempt to 
provide the thing people want in those scenarios in a consistent manner. 
But it always should be helper logic for more complex gate jobs, not as 
a de-facto part of a project's basic install.
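As a sketch of what using it looks like in Zuul job configuration (job and project names illustrative, not real jobs):

```yaml
- job:
    name: networking-foo-tox-py3-neutron-master
    parent: openstack-tox-py35
    description: Run unit tests against neutron master instead of a release.
    # tox-siblings installs the git checkouts of these projects into the
    # tox virtualenv, but only if tox would have installed them anyway.
    required-projects:
      - openstack/neutron
```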


Current Approach is wrong
-------------------------

Unfortunately, as part of trying to unwind the plugins situation, we've 
walked ourselves into a situation where the gate is the only thing that 
has the correct installation information for some projects, and that's 
not good.


From a networking plugin perspective, the "depend on release and use 
tox-siblings" approach assumes that 'depend on a release of neutron' is or can 
be the common case, with the ability to add a second tox job to check master 
against master.


If that's not a real thing, then depending on releases + tox_siblings in 
the gate is solving the wrong problem.


Specific Suggestions
--------------------

As there are a few different scenarios, I want to suggest we do a few 
different things.


* Prefer interface libraries on PyPI that projects depend on

Like python-openstackclient and osc-lib, this is the *best* approach
for projects with plugins. Such interface libraries need to be able to 
do intermediate releases - and those intermediate releases need to not 
break the released version of the projects. This is the hardest and 
longest thing to do as well, so it's most likely to be a multi-cycle effort.


* Treat inter-plugin depends as normal library depends

If networking-bgpvpn depends on networking-bagpipe and networking-odl, 
then networking-bagpipe and networking-odl need to be released to PyPI 
just like any other library in OpenStack. These are real runtime 
dependencies.


Yes, this is more coordination work, but it's work we do everywhere 
already and it's important.
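Under that model, the plugin's requirements.txt would simply list the other plugins like any other dependency (version floors illustrative):

```
networking-bagpipe>=8.0.0
networking-odl>=12.0.0
```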


If we do that for inter-plugin depends, then the normal tox jobs should 
test against the most recent release of the other plugin, and people can 
make a -tips style job like the 

[openstack-dev] [sdk][osc][openstackclient] Migration to storyboard complete

2018-04-14 Thread Monty Taylor

Hey everybody,

The migration of the openstacksdk and python-openstackclient 
repositories to storyboard has been completed. Each of the repos owned 
by those teams has been migrated, and project groups now also exist for 
each.


python-openstackclient

group: https://storyboard.openstack.org/#!/project_group/80

python-openstackclient https://storyboard.openstack.org/#!/project/975
cliff https://storyboard.openstack.org/#!/project/977
osc-lib https://storyboard.openstack.org/#!/project/974
openstackclient https://storyboard.openstack.org/#!/project/971

openstacksdk

group: https://storyboard.openstack.org/#!/project_group/78

openstacksdk https://storyboard.openstack.org/#!/project/972
os-client-config https://storyboard.openstack.org/#!/project/973
os-service-types https://storyboard.openstack.org/#!/project/904
requestsexceptions https://storyboard.openstack.org/#!/project/835
shade https://storyboard.openstack.org/#!/project/760

Happy storyboarding.

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependancies in requirements.txt

2018-04-12 Thread Monty Taylor

On 04/12/2018 11:27 AM, Clark Boylan wrote:

On Wed, Apr 11, 2018, at 5:45 PM, Matt Riedemann wrote:

I also seem to remember that [extras] was less than user-friendly for
some reason, but maybe that was just because of how our CI jobs are
setup? Or I'm just making that up. I know it's pretty simple to install
the stuff from extras for tox runs, it's just an extra set of
dependencies to list in the tox.ini.


One concern I have as a user is that extras are not very discoverable without 
reading the source setup.cfg file. This can be addressed by improving 
installation docs to explain what the extras options are and why you might want 
to use them.


Yeah - they're kind of an advanced feature that most python people don't 
seem to know exists at all.


I'm honestly worried about us expanding our use of them and would prefer 
we got rid of our usage. I don't think the upcoming Pipfile stuff 
supports them at all - and I believe that's on purpose.



Another idea was to add an 'all' extra that installs all of the more 
fine-grained extras. That way a user can just say "give me all the features"; 
I don't care, even if I can't use them all I know the ones I can use will be 
properly installed.

As for the CI jobs its just a matter of listing the extras in the appropriate 
requirements files or explicitly installing them.


How about instead of extras we just make some additional packages? Like, 
for instance make a "nova-zvm-support" repo that contains the extra 
requirements in it and that we publish to PyPI. Then a user could do 
"pip install nova nova-zvm-support" instead of "pip install nova[zvm]".


That way we can avoid installing optional things for the common case 
when they're not going to be used (including in the gate where we have 
no Z machines) but still provide a mechanism for users to easily install 
the software they need. It would also let a 3rd-party ci that DOES have 
some Z to test against to set up a zuul job that puts nova-zvm-support 
into its required-projects and test the combination of the two.


We could do a similar thing for the extras in keystoneauth. Make a 
keystoneauth-kerberos and a keystoneauth-saml2 and a keystoneauth-oauth1.


Just a thought...

Monty



Re: [openstack-dev] PBR and Pipfile

2018-04-09 Thread Monty Taylor

On 04/08/2018 04:10 AM, Gaetan wrote:

Hello OpenStack dev community,

I am currently working on the support of Pipfile for PBR ([1]), and I 
also follow actively the work on pipenv, which is now in officially 
supported by PyPA.


Awesome - welcome! This is a fun topic ...

There have been recently an intense discussion on the difficulties about 
Python libraries development, and how to spread good practices [2] on 
the pipenv community and enhance its documentation.


As a user of PBR, and big fan of it, I try to bridge the link between 
pbr and pipenv (with [1]) but I am interested in getting the feedback of 
Python developers of OpenStack that may have much more experience using 
PBR and more generally packaging python libraries than me.


Great - I'll comment more on this a little later.

The main point is that packaging an application is quite easy, or at 
least understandable by newcomers, using `requirements.txt` or 
`Pipfile` + `Pipfile.lock` with pipenv. At least it is easily "teachable".
Packaging a library is harder, and requires explaining why by default 
`requirements.txt` (or `Pipfile`) does not work. Some "advanced" 
documentation exists, but it is still hard to understand why Python ended 
up with something so complex for libraries ([3]).
One needs to ensure `install_requires` declares the dependencies so that 
pip can find them during transitive dependency installation (that is, 
installing the dependencies of a given dependency). PBR helps on this 
point, but some people do not want its other features.


In general, as you might imagine, pbr has a difference of opinion with 
the pypa community about requirements.txt and install_requires. I'm 
going to respond from my POV about how things should work - and how I 
believe they MUST work for a project such as OpenStack to be able to 
operate.


There are actually three different relevant use cases here, with some 
patterns available to draw from. I'm going to spell them out to just 
make sure we're on the same page.


* Library
* Application
* Suite of Coordinated Applications

A Library needs to declare the requirements it has along with any 
relevant ranges. Such as "this library requires 'foo' at at least 
version 2 but less than version 4". Since it's a library it needs to be 
able to handle being included in more than one application that may have 
different sets of requirements, so as a library it should attempt to 
have as wide a set of acceptable requirements as possible - but it 
should declare if there are versions of requirements it does not work 
with. In Pipfile world, this means "commit Pipfile but not 
Pipfile.lock". In pbr+requirements.txt it means "commit the 
requirements.txt with ranges and not == declared."
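As a concrete illustration, a library's committed requirements.txt under this model carries ranges rather than pins (package names and versions purely illustrative):

```
foo>=2.0,!=2.3.0,<4.0
bar>=1.1
```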


An Application isn't included in other things; it's the end point. So 
declaring the specific set of versions the application is known to work 
with, in addition to the logical requirement range, is considered a best 
practice. In Pipfile world, this is "commit both Pipfile and 
Pipfile.lock". There isn't a direct analog for pbr+requirements.txt, 
although you could simulate this by executing pip with a -c 
constraints.txt file.


A Suite of Coordinated Applications (like OpenStack) needs to 
communicate the specific versions the applications have been tested to 
work with, but they need to be the same so that all of the applications 
can be deployed side-by-side on the same machine without conflict. In 
OpenStack we do this by keeping a centrally managed constraints file [1] 
that our CI system adds to the pip install line when installing any of 
the OpenStack projects. A person who wants to install OpenStack from pip 
can also choose to do so using the upper-constraints.txt file and they 
can know they'll be getting the versions of dependencies we tested with. 
There is also no direct support for making this easier in pbr. For 
Pipfile, what I believe we'd want to see is adding support for --constraints 
to pipenv install - so that we can update our Pipfile.lock file for each 
application in the context of the global constraints file. This can be 
simulated today without any support from pipenv directly like this:


  pipenv install
  $(pipenv --venv)/bin/pip install -U -c 
https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt 
-r requirements.txt

  pipenv lock

There is also work on a PEP around pyproject.toml ([4]), which looks 
quite similar to PBR's setup.cfg. What do you think about it?


It's a bit different. There is also a philosophical disagreement about 
the use of TOML that's not worth going in to here - but from a pbr 
perspective I'd like to minimize use of pyproject.toml to the bare 
minimum needed to bootstrap things into pbr's control. In the first phase 
I expect to replace our current setup.py boilerplate:


setuptools.setup(
setup_requires=['pbr'],
pbr=True)

with:

setuptools.setup(pbr=True)

and add pyproject.toml files with:

[build-system]
requires = ["setuptools", 

Re: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way

2018-04-06 Thread Monty Taylor

On 04/05/2018 07:39 PM, Paul Belanger wrote:

On Thu, Apr 05, 2018 at 01:27:13PM -0700, Clark Boylan wrote:

On Mon, Apr 2, 2018, at 9:13 AM, Clark Boylan wrote:

On Mon, Apr 2, 2018, at 8:06 AM, Matthew Thode wrote:

On 18-03-31 15:00:27, Jeremy Stanley wrote:

According to a notice[1] posted to the pypa-announce and
distutils-sig mailing lists, pip 10.0.0.b1 is on PyPI now and 10.0.0
is expected to be released in two weeks (over the April 14/15
weekend). We know it's at least going to start breaking[2] DevStack
and we need to come up with a plan for addressing that, but we don't
know how much more widespread the problem might end up being so
encourage everyone to try it out now where they can.



I'd like to suggest locking down pip/setuptools/wheel like openstack
ansible is doing in
https://github.com/openstack/openstack-ansible/blob/master/global-requirement-pins.txt

We could maintain it as a separate constraints file (or infra could
maintian it, doesn't mater).  The file would only be used for the
initial get-pip install.


In the past we've done our best to avoid pinning these tools because 1)
we've told people they should use latest for openstack to work and 2) it
is really difficult to actually control what versions of these tools end
up on your systems if not latest.

I would strongly push towards addressing the distutils package deletion
problem that we've run into with pip10 instead. One of the approaches
thrown out that pabelanger is working on is to use a common virtualenv
for devstack and avoid the system package conflict entirely.


I was mistaken and pabelanger was working to get devstack's USE_VENV option working which 
installs each service (if the service supports it) into its own virtualenv. There are two 
big drawbacks to this. The first is that we would lose coinstallation of all the 
openstack services which is one way we ensure they all work together at the end of the 
day. The second is that not all services in "base" devstack support USE_VENV 
and I doubt many plugins do either (neutron apparently doesn't?).


Yeah, I agree your approach is the better one; I just wanted to toggle what was
supported by default. However, it is pretty broken today. I can't imagine
anybody actually using it; if so, they must be carrying downstream patches.

If we think USE_VENV is a valid use case for per-project venvs, I suggest we
continue to fix it and update neutron to support it. Otherwise, maybe we
should rip it out and replace it.


I'd vote for ripping it out.



[openstack-dev] [sdk] Repo rename complete

2018-03-23 Thread Monty Taylor
The openstack/python-openstacksdk repo has been renamed to 
openstack/openstacksdk.


The following patch:

https://review.openstack.org/#/c/555875

Updates the .gitreview file (and other things) to point at the new repo.

You'll want to update your local git remotes to pull from and submit to 
the correct location. There are git commands you can use - I personally 
just edit the .git/config file in the repo. :)


Monty

PS. Gerrit will not show lists of openstacksdk reviews until its online 
reindex has completed, which may take a few hours.




Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-03-23 Thread Monty Taylor

On 03/22/2018 05:43 AM, Stephen Finucane wrote:

On Wed, 2018-03-21 at 09:57 -0500, Sean McGinnis wrote:

On Wed, Mar 21, 2018 at 10:49:02AM +, Stephen Finucane wrote:

tl;dr: Make sure you stop using pbr's autodoc feature before converting
them to the new PTI for docs.

[snip]

I've gone through and proposed a couple of reverts to fix projects
we've already broken. However, going forward, there are two things
people should do to prevent issues like this popping up.


Unfortunately this will not work to just revert the changes. That may fix
things locally, but they will not pass in gate by going back to the old way.

Any cases of this will have to actually be updated to not use the unsupported
pieces you point out. But the doc builds will still need to be done the way
they are now, as that is what the PTI requires at this point.


That's unfortunate. What we really need is a migration path from the
'pbr' way of doing things to something else. I see three possible
avenues at this point in time:

1. Start using 'sphinx.ext.autosummary'. Apparently this can do similar
   things to 'sphinx-apidoc' but it takes the form of an extension.
   From my brief experiments, the output generated from this is
   radically different and far less comprehensive than what 'sphinx-
   apidoc' generates. However, it supports templating so we could
   probably configure this somehow and add our own special directive
   somewhere like 'openstackdocstheme'
2. Push for the 'sphinx.ext.apidoc' extension I proposed some time back
   against upstream Sphinx [1]. This essentially does what the PBR
   extension does but moves configuration into 'conf.py'. However, this
   is currently held up as I can't adequately explain the differences
   between this and 'sphinx.ext.autosummary' (there's definite overlap
   but I don't understand 'autosummary' well enough to compare them).
3. Modify the upstream jobs that detect the pbr integration and have
   them run 'sphinx-apidoc' before 'sphinx-build'. This is the least
   technically appealing approach as it still leaves us unable to build
   stuff locally and adds yet more "magic" to the gate, but it does let
   us progress.


I'd suggest a #4:

Take the sphinx.ext.apidoc extension and make it a standalone extension 
people can add to doc/requirements.txt and conf.py. That way we don't 
have to convince the sphinx folks to land it.


I'd been thinking for a while "we should just write a sphinx extension 
with the pbr logic in it" - but hadn't gotten around to doing anything 
about it. If you've already written that extension - I think we're in 
potentially great shape!



Try as I may, I don't really have the bandwidth to work on this for
another few weeks so I'd appreciate help from anyone with sufficient
Sphinx-fu to come up with a long-term solution to this issue.





Cheers,
Stephen

[1] https://github.com/sphinx-doc/sphinx/pull/4101/files


  * Firstly, you should remove the '[build_sphinx]' and '[pbr]' sections
from 'setup.cfg' in any patches that aim to convert a project to use
the new PTI. This will ensure the gate catches any potential
issues.
  * In addition, if your project uses the pbr autodoc feature, you
should either (a) remove these docs from your documentation tree or
(b) migrate to something else like the 'sphinx.ext.autosummary'
extension [5]. I aim to post instructions on the latter shortly.




Re: [openstack-dev] [sdk] git repo rename and storyboard migration

2018-03-22 Thread Monty Taylor

On 03/22/2018 02:51 AM, Jens Harbott wrote:

2018-03-21 21:44 GMT+01:00 Monty Taylor <mord...@inaugust.com>:

Hey everybody!

This upcoming Friday we're scheduled to complete the transition from
python-openstacksdk to openstacksdk. This was started a while back (Tue Jun
16 12:05:38 2015 to be exact) by changing the name of what gets published to
PyPI. Renaming the repo is to get those two back in line (and remove a hack
in devstack to deal with them not being the same)

Since this is a repo rename, it means that local git remotes will need to be
updated. This can be done either via changing urls in .git/config - or by
just re-cloning.

Once that's done, we'll be in a position to migrate to storyboard. shade is
already over there, which means we're currently split between storyboard and
launchpad for the openstacksdk team repos.

diablo_rojo has done a test migration and we're good to go there - so I'm
thinking either Friday post-repo rename - or sometime early next week. Any
thoughts or opinions?

This will migrate bugs from launchpad for python-openstacksdk and
os-client-config.


IMO this list is still much too long [0] and I expect it will make
dealing with the long backlog even more tedious if the bugs are moved.


storyboard is certainly not perfect, but there are also great features 
it does have to help deal with backlog. We can set up a board, like we 
did for zuulv3:


https://storyboard.openstack.org/#!/board/41

Jim also wrote 'boartty' which is like gertty but for doing storyboard 
things.


Which is to say - it's got issues, but it's also got a bunch of 
positives too.



Also there are lots of issues that intersect between sdk and
python-openstackclient, so moving both at the same time would also
sound reasonable.


I could see waiting until we move python-openstackclient. However, we've 
already got the issue of shade bugs being in storyboard and 
sdk bugs being in launchpad. With shade's implementation moving into 
openstacksdk over this cycle, I expect the number of bugs 
reported against shade that are actually against 
openstacksdk to increase quite a bit.


Maybe we should see if the python-openstackclient team wants to migrate too?

What do people think?

Monty



Re: [openstack-dev] Is openstacksdk backward compatible?

2018-03-22 Thread Monty Taylor

On 03/22/2018 05:14 AM, zijian1...@163.com wrote:

Hello, everyone
The OpenStack version I deployed was Newton; now 
I am ready to use openstacksdk for development. I noticed 
that the latest openstacksdk version is 0.12, 
which makes me a bit confused. How do I know if this SDK (0.12) 
is compatible with my OpenStack (Newton)?


openstacksdk should work with all existing versions of OpenStack. If you 
ever find a scenario when latest openstacksdk does not work with the 
cloud you are using, it is a bug and we will fix it as quickly as possible.


We'll hopefully be releasing a 1.0 of openstacksdk soon, however 0.12 
should be safe for you to use.


I also noticed that the 
python-xxxclient versions and OpenStack versions correspond strictly, 
such as "python-novaclient/tree/newton-eol" and 
"python-neutronclient/tree/newton-eol". So, should I use 
python-xxxclient instead of openstacksdk?


Please do not use python-xxxclient for any new work. Doing so will cause 
you nothing but pain.


Let us know if you have any issues with openstacksdk ... #openstack-sdks 
in IRC is a great place to find people.


Monty



Re: [openstack-dev] [devstack] stable/queens: How to configure devstack to use openstacksdk===0.11.3 and os-service-types===1.1.0

2018-03-21 Thread Monty Taylor

On 03/16/2018 09:29 AM, Kwan, Louie wrote:

In the stable/queens branch, since openstacksdk===0.11.3 and os-service-types===1.1.0 
are described in openstack's upper-constraints.txt,

https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L297

If I do


git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens


And then stack.sh

We will see it is using openstacksdk-0.12.0 and os_service_types-1.2.0

Having said that, we need the older versions. How do we configure devstack to use 
openstacksdk===0.11.3 and os-service-types===1.1.0?


Would you mind sharing why you need the older versions?

os-service-types is explicitly designed such that the latest version 
should always be correct.


If there is something in 1.2.0 that has broken you in some way that you 
need an older version, that's a problem and we should look in to it.


The story is intended to be similar for sdk moving forward ... but we're 
still pre-1.0, so that makes sense at the moment. I'm still interested 
in what specific issue you had, just to make sure we're aware of issues 
people are having.


Thanks!
Monty



[openstack-dev] [sdk] git repo rename and storyboard migration

2018-03-21 Thread Monty Taylor

Hey everybody!

This upcoming Friday we're scheduled to complete the transition from 
python-openstacksdk to openstacksdk. This was started a while back (Tue 
Jun 16 12:05:38 2015 to be exact) by changing the name of what gets 
published to PyPI. Renaming the repo gets those two back in line 
(and removes a hack in devstack to deal with them not being the same).


Since this is a repo rename, it means that local git remotes will need 
to be updated. This can be done either via changing urls in .git/config 
- or by just re-cloning.


Once that's done, we'll be in a position to migrate to storyboard. shade 
is already over there, which means we're currently split between 
storyboard and launchpad for the openstacksdk team repos.


diablo_rojo has done a test migration and we're good to go there - so 
I'm thinking either Friday post-repo rename - or sometime early next 
week. Any thoughts or opinions?


This will migrate bugs from launchpad for python-openstacksdk and 
os-client-config.


Thanks!
Monty



Re: [openstack-dev] [tripleo] Recap of Python 3 testing session at PTG

2018-03-20 Thread Monty Taylor

On 03/17/2018 03:34 AM, Emilien Macchi wrote:
That way, we'll be able to have some early testing on python3-only 
environments (thanks containers!) without changing the host OS.


All hail our new python3-only overlords!!!



[openstack-dev] [sdk] Cleaning up openstacksdk core team

2018-02-26 Thread Monty Taylor

Hey all,

A bunch of stuff has changed in SDK recently, and a few of the 
historical sdk core folks have also not been around. I'd like to propose 
removing the following people from the core team:


  Everett Toews
  Jesse Noller
  Richard Theis
  Terry Howe

They're all fantastic humans but they haven't had any activity in quite 
some time - and not since all the changes of the sdk/shade merge. As is 
normal in OpenStack land, they'd all be welcome back if they found 
themselves in a position to dive in again.


Any objections?

Monty



Re: [openstack-dev] [neutron][sdk] Proposal to migrate neutronclient python bindings to OpenStack SDK

2018-02-26 Thread Monty Taylor

On 02/26/2018 10:55 AM, Rabi Mishra wrote:
On Mon, Feb 26, 2018 at 3:44 PM, Monty Taylor <mord...@inaugust.com 
<mailto:mord...@inaugust.com>> wrote:


On 02/26/2018 09:57 AM, Akihiro Motoki wrote:

Hi neutron and openstacksdk team,

This mail proposes to change the first priority of neutron-related
python binding to OpenStack SDK rather than neutronclient python
bindings.
I think it is time to start this as OpenStack SDK became an official
project in Queens.


++


[Current situations and problems]

Network OSC commands are categorized into two parts: OSC and
neutronclient OSC plugin.
Commands implemented in OSC consumes OpenStack SDK
and commands implemented as neutronclient OSC plugin consumes
neutronclient python bindings.
This brings a tricky situation where some features are supported
only in
OpenStack SDK and some features are supported only in neutronclient
python bindings.

[Proposal]

The proposal is to implement all neutron features in OpenStack
SDK as
first-class citizens,
and the neutronclient OSC plugin consumes corresponding
OpenStack SDK APIs.

Once this is achieved, users of OpenStack SDK can see all
network related features.

[Migration plan]

The migration starts from Rocky (if we agree).

New features should be supported in OpenStack SDK and
OSC/neutronclient OSC plugin as the first priority. If a new feature
depends on neutronclient python bindings, it can be implemented in
neutronclient python bindings first and ported as part of the
existing feature transition.

Existing features only supported in neutronclient python
bindings are
ported into OpenStack SDK,
and neutronclient OSC plugin will consume them once they are
implemented in OpenStack SDK.


I think this is a great idea. We've got a bunch of good
functional/integrations tests in the sdk gate as well that we can
start running on neutron patches so that we don't lose cross-gating.

[FAQ]

1. Will neutronclient python bindings be removed in future?

Different from the "neutron" CLI, as of now there is no plan to
drop the
neutronclient python bindings.
Quite a few projects consume it, so it will be
maintained as-is.
The only change is that new features are implemented in
OpenStack SDK first and
enhancements of neutronclient python bindings will be minimum.

2. Should projects that consume neutronclient python bindings switch
to OpenStack SDK?

Not necessarily. It depends on individual projects.
Projects like nova that consume a small set of neutron features can
continue to use neutronclient python bindings.
Projects like horizon or heat that would like to support a wide
range
of features might be better off switching to OpenStack SDK.


We've got a PTG session with Heat to discuss potential wider-use of
SDK (and have been meaning to reach out to horizon as well). Perhaps
a good first step would be to migrate the
heat.engine.clients.os.neutron:NeutronClientPlugin code in Heat from
neutronclient to SDK.


Yeah, this would only be possible after openstacksdk supports all 
neutron features as mentioned in the proposal.


++

Note: We had initially added the OpenStackSDKPlugin in heat to support 
neutron segments and were thinking of doing all new neutron stuff with 
openstacksdk. However, we soon realised that it's not possible when 
implementing neutron trunk support and had to drop the idea.


Maybe we start converting one thing at a time and when we find something 
sdk doesn't support we should be able to add it pretty quickly... which 
should then also wind up improving the sdk layer.



There's already an
heat.engine.clients.os.openstacksdk:OpenStackSDKPlugin plugin in
Heat. I started a patch to migrate senlin from senlinclient (which
is just a thin wrapper around sdk):
https://review.openstack.org/#/c/532680/
<https://review.openstack.org/#/c/532680/>

For those of you who are at the PTG, I'll be giving an update on SDK
after lunch on Wednesday. I'd also be more than happy to come chat
about this more in the neutron room if that's useful to anybody.

Monty



[openstack-dev] [sdk] Nominating Adrian Turjak for core

2018-02-26 Thread Monty Taylor

Hey everybody,

I'd like to nominate Adrian Turjak (adriant) for openstacksdk-core. He's 
an Operator/End User and brings *excellent* deep/strange/edge-condition 
bugs. He also has a great understanding of the mechanics between 
Resource/Proxy objects and is super helpful in verifying fixes work in 
the real world.


It's worth noting that Adrian's overall review 'stats' aren't what is 
traditionally associated with a 'core', but I think this is a good 
example that life shouldn't be driven by stackalytics, and that being a 
core reviewer is about understanding the code base and being able to 
evaluate proposed changes. From my POV, Adrian more than qualifies.


Thoughts?
Monty



Re: [openstack-dev] [neutron][sdk] Proposal to migrate neutronclient python bindings to OpenStack SDK

2018-02-26 Thread Monty Taylor

On 02/26/2018 09:57 AM, Akihiro Motoki wrote:

Hi neutron and openstacksdk team,

This mail proposes to change the first priority of neutron-related
python binding to OpenStack SDK rather than neutronclient python
bindings.
I think it is time to start this as OpenStack SDK became an official
project in Queens.


++


[Current situations and problems]

Network OSC commands are categorized into two parts: OSC and
neutronclient OSC plugin.
Commands implemented in OSC consumes OpenStack SDK
and commands implemented as neutronclient OSC plugin consumes
neutronclient python bindings.
This brings a tricky situation where some features are supported only in
OpenStack SDK and some features are supported only in neutronclient
python bindings.

[Proposal]

The proposal is to implement all neutron features in OpenStack SDK as
first-class citizens,
and the neutronclient OSC plugin consumes corresponding OpenStack SDK APIs.

Once this is achieved, users of OpenStack SDK can see all
network related features.

[Migration plan]

The migration starts from Rocky (if we agree).

New features should be supported in OpenStack SDK and
OSC/neutronclient OSC plugin as the first priority. If a new feature
depends on neutronclient python bindings, it can be implemented in
neutronclient python bindings first and ported as part of the
existing feature transition.

Existing features only supported in neutronclient python bindings are
ported into OpenStack SDK,
and neutronclient OSC plugin will consume them once they are
implemented in OpenStack SDK.


I think this is a great idea. We've got a bunch of good 
functional/integrations tests in the sdk gate as well that we can start 
running on neutron patches so that we don't lose cross-gating.



[FAQ]

1. Will neutronclient python bindings be removed in future?

Different from the "neutron" CLI, as of now there is no plan to drop the
neutronclient python bindings.
Quite a few projects consume it, so it will be maintained as-is.
The only change is that new features are implemented in OpenStack SDK first and
enhancements of neutronclient python bindings will be minimum.

2. Should projects that consume neutronclient python bindings switch
to OpenStack SDK?

Not necessarily. It depends on individual projects.
Projects like nova that consume a small set of neutron features can
continue to use neutronclient python bindings.
Projects like horizon or heat that would like to support a wide range
of features might be better off switching to OpenStack SDK.


We've got a PTG session with Heat to discuss potential wider-use of SDK 
(and have been meaning to reach out to horizon as well). Perhaps a good 
first step would be to migrate the 
heat.engine.clients.os.neutron:NeutronClientPlugin code in Heat from 
neutronclient to SDK. There's already an 
heat.engine.clients.os.openstacksdk:OpenStackSDKPlugin plugin in Heat. I 
started a patch to migrate senlin from senlinclient (which is just a 
thin wrapper around sdk): https://review.openstack.org/#/c/532680/


For those of you who are at the PTG, I'll be giving an update on SDK 
after lunch on Wednesday. I'd also be more than happy to come chat about 
this more in the neutron room if that's useful to anybody.


Monty



Re: [openstack-dev] [docs] About the convention to use '.' instead of 'source'.

2018-02-18 Thread Monty Taylor

On 02/17/2018 03:03 PM, Jeremy Stanley wrote:

On 2018-02-17 13:47:02 -0500 (-0500), Hongbin Lu wrote:
[...]

If anyone can clarify the rationals of this convention, it will be
really helpful.

[...]

There's a trade-off here: while `.` is standardized in POSIX sh
(under Utilities, Dot in the specification), it's easy to miss when
reading documentation and/or cutting and pasting from examples. On
the other hand, `source` is easier to see but was originally unique
to csh (which lacks `.`) and subsequently borrowed by the bash shell
environment as an alias for `.` ostensibly to ease migration for
users of csh and its derivatives. The `source` command is not
implemented by a number of other popular shells however, which may
make it a poor interoperability choice (given csh is an arguably
less popular shell these days) unless we assume a specific shell
(e.g., bash).


I'd honestly argue in favor of assuming bash and using 'source' because 
it's more readable. We don't make allowances for alternate shells in our 
examples anyway.


I personally try to use 'source' vs . and $() vs. `` as aggressively as 
I can.


That said - I completely agree with fungi on the description of the 
tradeoffs of each direction, and I do think it's valuable to pick one 
for the docs.
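As a concrete illustration of the trade-off (file name and variable are invented for the example):

```shell
#!/bin/sh
# Create a tiny rc file and load it with the POSIX-standard dot command.
printf 'DEMO_VAR=hello\n' > /tmp/demo-rc
. /tmp/demo-rc            # works in sh, dash, ksh, bash and zsh
# 'source /tmp/demo-rc' reads better, but is bash/zsh-specific (dash lacks it).

# Similarly, POSIX $(...) is preferred over legacy `...` backticks:
today=$(date +%Y)
echo "$DEMO_VAR $today"
```

One practical advantage of $(...) beyond readability: it nests without the escaping gymnastics backticks require.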




Re: [openstack-dev] [sdk][release][requirements] FFE request for openstacksdk 0.11.3

2018-02-04 Thread Monty Taylor

On 02/04/2018 01:35 PM, Matthew Thode wrote:

On 18-02-04 09:32:25, Monty Taylor wrote:

Hi!

I'd like to request another FFE to fix several neutron commands in
python-openstackclient for queens and also to unbreak
python-openstackclient's gate.

The release proposal patch is here:

https://review.openstack.org/540657

The issue at hand was:

The osc-functional-devstack-tips job, which tests master changes of
openstackclient and openstacksdk against each other with
openstackclient's functional tests was broken and was not testing
master against master but rather master of openstackclient against
released version of SDK. Therefore, the gate that was protecting us
against breakages like these was incorrect and let us land a patch that made
invalid query parameters raise errors instead of silently not filtering -
without also adding missing but needed query parameters as valid.

The gate job has been fixed and SDK as of the proposed commit fixes the
osc-functional-devstack-tips job. That can be seen in
https://review.openstack.org/540554/ The osc-functional-devstack job, which
checks OSC master against released SDK is broken with sdk 0.11.2 because of
the bug fixed in the SDK patch.

We would want to bump the upper-constraints from 0.11.2 to 0.11.3 in both
stable/queens and master upper-constraints files.



As a UC bump you have my +2

It seems like these things require a gr bump even more now, which would
cause client re-releases iirc.  My question has more to do with having
this not happen again.  Do you cross gate with other projects (clients)?
That would allow you to check what's going into your master with what's
in the client master to ensure no breakage.


Yah - we do ... the problem was that we had a bug in this one which 
caused it to be testing against released versions not master versions. 
That has been rectified, so we SHOULD be good moving forward.


Also - as we find or grow more things that consume SDK, we'll add 
appropriate cross-gate jobs for them as well.


Thanks!
Monty



[openstack-dev] [sdk][release][requirements] FFE request for openstacksdk 0.11.3

2018-02-04 Thread Monty Taylor

Hi!

I'd like to request another FFE to fix several neutron commands in 
python-openstackclient for queens and also to unbreak 
python-openstackclient's gate.


The release proposal patch is here:

https://review.openstack.org/540657

The issue at hand was:

The osc-functional-devstack-tips job, which tests master changes of
openstackclient and openstacksdk against each other with
openstackclient's functional tests was broken and was not testing
master against master but rather master of openstackclient against
released version of SDK. Therefore, the gate that was protecting us
against breakages like these was incorrect and let us land a patch that 
made invalid query parameters raise errors instead of silently not 
filtering - without also adding missing but needed query parameters as 
valid.


The gate job has been fixed and SDK as of the proposed commit fixes the
osc-functional-devstack-tips job. That can be seen in 
https://review.openstack.org/540554/ The osc-functional-devstack job, 
which checks OSC master against released SDK is broken with sdk 0.11.2 
because of the bug fixed in the SDK patch.


We would want to bump the upper-constraints from 0.11.2 to 0.11.3 in 
both stable/queens and master upper-constraints files.
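For reference, an upper-constraints bump of this sort is a one-line change to upper-constraints.txt in the requirements repo, roughly:

```diff
-openstacksdk===0.11.2
+openstacksdk===0.11.3
```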


Thanks!
Monty



Re: [openstack-dev] Remembering Shawn Pearce (fwd)

2018-02-02 Thread Monty Taylor

On 02/02/2018 09:52 AM, Adam Spiers wrote:

Dear Stackers,

Since git and Gerrit are at the heart of our development process, I am
passing on this very sad news from the git / Gerrit communities that
Shawn Pearce has passed away after an aggressive lung cancer.

Shawn was founder of Gerrit / JGit / libgit2 / git-gui, and the third
most prolific contributor to git itself.

    https://gitenterprise.me/2018/01/30/shawn-pearce-a-true-leader/
    https://sfconservancy.org/blog/2018/jan/30/shawn-pearce/
    https://twitter.com/cdibona/status/957822400518696960

https://public-inbox.org/git/CAP8UFD0aKqT5YXJx9-MqeKCKhOVGxninRf8tv30=hkgvmhg...@mail.gmail.com/T/#mf5c158c68565c1c68c80b6543966ef2cad6d151c 


https://groups.google.com/forum/#!topic/repo-discuss/B4P7G1YirdM/discussion


He is survived by his wife and two young sons.  A memorial fund has
been set up in aid of the boys' education and future:


https://gitenterprise.me/2018/01/30/gerrithub-io-donations-to-shawns-family/ 



Thank you Shawn for enriching our lives with your great contributions
to the FLOSS community.


++

OpenStack would not be where it is today without Shawn's work.


- Forwarded message from Adam Spiers  -

Date: Fri, 2 Feb 2018 15:12:35 +
From: Adam Spiers 
To: Luca Milanesio 
Subject: Re: Fwd: Remembering Shawn Pearce

Hi Luca, that's such sad news :-(  What an incredible contribution
Shawn made to the community.  In addition to Gerrit, I use git-gui and
gitk regularly, and also my git-deps utility is based on libgit2.  I
had no idea he wrote them all, and many other things.

I will certainly donate and also ensure that the OpenStack community
is aware of the memorial fund.  Thanks a lot for letting me know!

Luca Milanesio  wrote:

Hi Adam,
you probably have received this very sad news :-(
As GerritForge we are actively supporting, contributing and promoting 
the donations to Shawn's Memorial Fund 
(https://www.gofundme.com/shawn-pearce-memorial-fund) and added a 
donation button to GerritHub.io .


Feel free to spread the sad news to the OpenStack community you are in 
touch with.

---
Luca Milanesio
GerritForge
3rd Fl. 207 Regent Street
London W1B 3HH - UK
http://www.gerritforge.com 

l...@gerritforge.com 
Tel:  +44 (0)20 3292 0677
Mob: +44 (0)792 861 7383
Skype: lucamilanesio
http://www.linkedin.com/in/lucamilanesio 



> Begin forwarded message:
> > From: "'Dave Borowitz' via Repo and Gerrit Discussion" 


> Subject: Remembering Shawn Pearce
> Date: 29 January 2018 at 15:15:05 GMT
> To: repo-discuss 
> Reply-To: Dave Borowitz 
> > Dear Gerrit community,
> > I am very saddened to report that Shawn Pearce, long-time Git 
contributor and founder of the Gerrit Code Review project, passed away 
over the weekend after being diagnosed with lung cancer last year. He 
spent his final days comfortably in his home, surrounded by family, 
friends, and colleagues.
> > Shawn was an exceptional software engineer and it is impossible to 
overstate his contributions to the Git ecosystem. He had everything 
from the driving high-level vision to the coding skills to solve any 
complex problem and bring his vision to reality. If you had the 
pleasure of collaborating with him on code reviews, as I know many of 
you did, you've seen first-hand his dedication and commitment to 
quality. You can read more about his contributions in this recent 
interview.

> > In addition to his technical contributions, Shawn truly loved the 
open-source communities he was a part of, and the Gerrit community in 
particular. Growing the Gerrit project from nothing to a global 
community with hundreds of contributors used by some of the world's 
most prominent tech companies is something he was extremely proud of.
> > Please join me in remembering Shawn Pearce and continuing his 
legacy. Feel free to use this thread to share your memories with the 
community Shawn loved.
> > If you are interested, his family has set up a GoFundMe page to put 
towards his children's future.

> > Best wishes,
> Dave Borowitz

Re: [openstack-dev] [requirements] FFE for delayed libraries

2018-02-01 Thread Monty Taylor

On 02/01/2018 01:47 PM, Matthew Thode wrote:

On 18-02-01 13:44:19, Sean McGinnis wrote:

Due to gate issues and other delays, there's quite a handful of libs that were
not released in time for the requirements freeze.

We now believe we've gotten all libraries processed for the final Queens
releases. In order to reduce the load, we have batches all upper constraints
bumps for these libs into one patch:

https://review.openstack.org/#/c/540105/

This is my official FFE request to have these updates accepted yet for Queens
past the requirements freeze.

If anyone is aware of any issues with these, please bring that to our attention
as soon as possible.

Thanks,
Sean


Affected Updates


update constraint for python-saharaclient to new release 1.5.0
update constraint for instack-undercloud to new release 8.2.0
update constraint for paunch to new release 2.2.0
update constraint for python-mistralclient to new release 3.2.0
update constraint for python-senlinclient to new release 1.7.0
update constraint for pycadf to new release 2.7.0
update constraint for os-refresh-config to new release 8.2.0
update constraint for tripleo-common to new release 8.4.0
update constraint for reno to new release 2.7.0
update constraint for os-net-config to new release 8.2.0
update constraint for os-apply-config to new release 8.2.0
update constraint for os-client-config to new release 1.29.0
update constraint for ldappool to new release 2.2.0
update constraint for aodhclient to new release 1.0.0
update constraint for python-searchlightclient to new release 1.3.0
update constraint for mistral-lib to new release 0.4.0
update constraint for os-collect-config to new release 8.2.0
update constraint for ceilometermiddleware to new release 1.2.0
update constraint for tricircleclient to new release 0.3.0
update constraint for requestsexceptions to new release 1.4.0
update constraint for python-magnumclient to new release 2.8.0
update constraint for tosca-parser to new release 0.9.0
update constraint for python-tackerclient to new release 0.11.0
update constraint for python-heatclient to new release 1.14.0



officially accepted, thanks for keeping me updated while this was going
on.



After the release of openstacksdk 0.11.1, we got a bug report:

https://bugs.launchpad.net/python-openstacksdk/+bug/1746535

about a regression with python-openstackclient and query parameters. The 
fix was written, landed, backported to stable/queens and released.


I'd like to request we add 0.11.2 to the library FFE.

Thanks!
Monty





[openstack-dev] [sdk][ptl] PTL Candidacy for Rocky

2018-01-31 Thread Monty Taylor

Hi everybody!

I'd like to run for PTL of OpenStackSDK again

This last cycle was pretty exciting. We merged the shade and 
openstacksdk projects into a single team. We shifted os-client-config to 
that team as well. We merged the code from shade and os-client-config 
into openstacksdk, and then renamed the team.


It wasn't just about merging projects though. We got some rework done to 
base the Proxy classes on keystoneauth Adapters, providing direct 
passthrough REST access for services. We finished the 
Resource2/Proxy2 transition. We updated pagination to work for all of 
the OpenStack services - and in the process uncovered a potential 
cross-project goal. And we tied services in openstacksdk to services 
listed in the Service Types Authority.


Moving forward, there's tons to do.

First and foremost we need to finish integrating the shade code into the 
sdk codebase. The sdk layer and the shade layer are currently friendly 
but separate, and that doesn't make sense long term. To do this, we need 
to figure out a plan for rationalizing the return types - shade returns 
munch.Munch objects which are dicts that support object attribute 
access. The sdk returns Resource objects.
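To make the return-type difference concrete, here is a minimal sketch of the dict-with-attribute-access behavior that munch.Munch gives shade's return values (an illustrative stand-in, not the real munch implementation):

```python
class AttrDict(dict):
    """Dict whose keys are also readable/writable as attributes."""

    def __getattr__(self, key):
        # Called only when normal attribute lookup fails.
        try:
            return self[key]
        except KeyError:
            raise AttributeError(key)

    def __setattr__(self, key, value):
        self[key] = value


server = AttrDict(name='web1', status='ACTIVE')
print(server.name)       # attribute access -> web1
print(server['status'])  # plain dict access still works -> ACTIVE
```

An sdk Resource, by contrast, is a typed object with declared fields, which is part of why rationalizing the two return styles takes real design work.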


There are also multiple places where the logic in the shade layer can 
and should move into the sdk's Proxy layer. Good examples of this are 
swift object uploads and downloads and glance image uploads.


I'd like to move masakari and tricircle's out-of-tree SDK classes in tree.

shade's caching and rate-limiting layer needs to be shifted to be able 
to apply to both levels, and the special caching for servers, ports and
floating-ips needs to be replaced with the general system. For us to do 
that though, the general system needs to be improved to handle 
nodepool's batched rate-limited use case as well.


We need to remove the guts of both shade and os-client-config in their 
repos and turn them into backwards compatibility shims.


We need to work with the python-openstackclient team to finish getting 
the current sdk usage updated to the non-Profile-based flow, and to make 
sure we're providing what they need to start replacing uses of 
python-*client with uses of sdk.


I know the folks with the shade team background are going to LOVE this 
one, but we need to migrate existing sdk tests that mock sdk objects to 
requests-mock. (We also missed a few shade tests that still mock out 
methods on OpenStackCloud that need to get transitioned)


Finally - we need to get a 1.0 out this cycle. We're very close - the 
main sticking point now is the shade/os-client-config layer, and 
specifically cleaning up a few pieces of shade's API that weren't great 
but which we couldn't change due to API contracts.


I'm sure there will be more things to do too. There always are.

In any case, I'd love to keep helping to pushing these rocks uphill.

Thanks!
Monty



Re: [openstack-dev] [legal-discuss] [tc] [all] TC Report 18-05

2018-01-30 Thread Monty Taylor

On 01/30/2018 01:05 PM, Monty Taylor wrote:

On 01/30/2018 12:53 PM, Graham Hayes wrote:

On 30/01/18 18:34, Chris Dent wrote:



## Today's Board Meeting

I attended [today's Board
Meeting](https://wiki.openstack.org/wiki/Governance/Foundation/30Jan2018BoardMeeting) 



but it seems that according to the [transparency
policy](http://www.openstack.org/legal/transparency-policy/) I can't
comment:


  No commenting on Board meeting contents and decisions until
  Executive Director publishes a meeting summary


That doesn't sound like transparency to me, but I assume there must be
reasons.


I was under the assumption that this only applied to board members, but
I am open to correction.

Can someone on legal-discuss clarify?


That is correct. As officers, information published by Board members 
could be construed as "official" communication. The approach we've taken 
on this topic is that Board members refrain from publishing their 
thoughts/feedback/take/opinion/summary of a meeting until after Jonathan 
has published an 'official' summary of the meeting, at which point, 
since there is an official document, we're free to comment as we desire.


There is nothing preventing anyone who is not a board member from 
publishing a summary or thoughts or live-tweeting the whole thing.


markmc has historically written excellent summaries and has posted them 
as soon as Jonathan's have gone out.


Gah. Didn't reply originally to openstack-dev.

Monty



Re: [openstack-dev] [infra][all] New Zuul Depends-On syntax

2018-01-24 Thread Monty Taylor

On 01/24/2018 02:25 PM, David Shrewsbury wrote:

This is a (the?) killer feature.


On Wed, Jan 24, 2018 at 11:33 AM, James E. Blair wrote:


Hi,

We recently introduced a new URL-based syntax for Depends-On: footers
in commit messages:

   Depends-On: https://review.openstack.org/535851


The old syntax will continue to work for a while, but please begin using
the new syntax on new changes.

Why are we changing this?  Zuul has grown the ability to interact with
multiple backend systems (Gerrit, GitHub, and plain Git so far), and we
have extended the cross-repo-dependency feature to support multiple
systems.  But Gerrit is the only one that uses the change-id syntax.
URLs, on the other hand, are universal.

That means you can write, as in https://review.openstack.org/535541, a
commit message such as:

   Depends-On:
https://github.com/ikalnytskyi/sphinxcontrib-openapi/pull/17


Or in a Github pull request like
https://github.com/ansible/ansible/pull/20974, you can write:

   Depends-On: https://review.openstack.org/536159


But we're getting a bit ahead of ourselves here -- we're just getting
started with Gerrit <-> GitHub dependencies and we haven't worked
everything out yet.  While you can Depends-On any GitHub URL, you can't
add any project to required-projects yet, and we need to establish a
process to actually report on GitHub projects.  But cool things are
coming.

We will continue to support the Gerrit-specific syntax for a while,
probably for several months at least, so you don't need to update the
commit messages of changes that have accumulated precious +2s.  But do
please start using the new syntax now, so that we can age the old syntax
out.

There are a few differences in using the new syntax:

* Rather than copying the change-id from a commit message, you'll need
   to get the URL from Gerrit.  That means the dependent change already
   needs to be uploaded.  In some complex situations, this may mean that
   you need to amend an existing commit message to add in the URL later.

   If you're uploading both changes, Gerrit will output the URL when you
   run git-review, and you can copy it from there.  If you are
looking at
   an existing change in Gerrit, you can copy the URL from the permalink
   at the top left of the page.  Where it says "Change 535855 - Needs
   ..." the change number itself is the permalink of the change.



Is the permalink the only valid format here for Gerrit? Or does the fully
expanded link also work? E.g.,

    Depends-On: https://review.openstack.org/536540

versus

    Depends-On: https://review.openstack.org/#/c/536540/


The fully expanded one works too. See:

  https://review.openstack.org/#/c/520812/

for an example of a patch with expanded links.
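Both forms end in the change number, which is all that ultimately matters. A toy sketch (not Zuul's actual parser) of how either URL form resolves to the same change:

```python
import re

def change_number(url):
    """Pull the trailing change number out of either Gerrit URL form.

    Both 'https://review.openstack.org/536540' (the permalink) and
    'https://review.openstack.org/#/c/536540/' (the expanded link)
    end with the change number, so either works as a target.
    """
    # Match digits at the end of the URL, tolerating a trailing slash.
    match = re.search(r'(\d+)/?$', url)
    return int(match.group(1)) if match else None
```

For example, `change_number('https://review.openstack.org/#/c/520812/')` and `change_number('https://review.openstack.org/520812')` both yield 520812.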


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sdk][masakari][tricircle] Inclusion of SDK classes in openstacksdk tree

2018-01-24 Thread Monty Taylor

On 01/21/2018 07:08 PM, joehuang wrote:

Hello, Monty,

Tricircle did not develop any extra Neutron network resources; Tricircle 
provides plugins under Neutron and supports the same resources Neutron does. To 
ease the management of multiple Neutron servers, a Tricircle Admin API is 
provided to manage the resource routings between local neutron(s) and central 
neutron. It is a standalone service, intended only for cloud administrators; 
therefore python-tricircleclient and a CLI were developed to support these 
administration functions.

do you mean to put  Tricircle Admin API sdk under openstacksdk tree?


Yes - if you want to, you are welcome to put them there.


Best Regards
Chaoyi Huang (joehuang)


From: Monty Taylor [mord...@inaugust.com]
Sent: 21 January 2018 1:22
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [sdk][masakari][tricircle] Inclusion of SDK classes in 
openstacksdk tree

Hey everybody,

Wanted to send a quick note to let people know that all OpenStack
services are more than welcome to put any openstacksdk proxy and
resource classes they have directly into the openstacksdk tree.

Looking through codesearch.openstack.org, masakariclient and tricircle
each have SDK-related classes in their trees.

You don't HAVE to put the code into openstacksdk. In fact, I wrote a
patch for masakariclient to register the classes with
openstack.connection.Connection:

https://review.openstack.org/#/c/534883/

But I wanted to be clear that the code is **welcome** directly in tree,
and that anyone working on an OpenStack service is welcome to put
support code directly in the openstacksdk tree.

Monty

PS. Joe - you've also got some classes in the tricircle test suite
extending the network service. I haven't followed all the things ... are
the tricircle network extensions done as neutron plugins now? (It looks
like they are) If so, why don't we figure out getting your network
resources in-tree as well.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][packaging] django-openstack-auth retirement

2018-01-24 Thread Monty Taylor

On 01/24/2018 08:25 AM, Jeremy Stanley wrote:

On 2018-01-23 23:19:59 +0900 (+0900), Akihiro Motoki wrote:
[...]

Horizon usually does not publish its releases to PyPI, so I think what
we can do is to document it.

P.S.
The only exceptions on PyPI horizon are 12.0.2 and 2012.2 releases.
12.0.2 was released last week but I don't know why it is available at
PyPI. In deliverables/pike/horizon.yaml in the openstack/releases
repo, we don't have "include-pypi-link: yes".

[...]

Right, my "if we were already" was a reference to the work under
discussion to eventually get all OpenStack services publishing
wheels and sdists on PyPI. There are still some logistical issues to
be ironed out (e.g., the fact that we don't control the "keystone"
entry there) so I doubt it'll happen before Queens releases but we
might have it going sometime in the Rocky cycle. I was mostly just
lamenting that it's not something we can take advantage of for the
current DOA/Horizon transition.


Horizon and neutron were updated to start publishing to PyPI already.

https://review.openstack.org/#/c/531822/

This is so that we can start working on unwinding the neutron and 
horizon specific versions of jobs for neutron and horizon plugins.


Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sdk][masakari][tricircle] Inclusion of SDK classes in openstacksdk tree

2018-01-20 Thread Monty Taylor

Hey everybody,

Wanted to send a quick note to let people know that all OpenStack 
services are more than welcome to put any openstacksdk proxy and 
resource classes they have directly into the openstacksdk tree.


Looking through codesearch.openstack.org, masakariclient and tricircle 
each have SDK-related classes in their trees.


You don't HAVE to put the code into openstacksdk. In fact, I wrote a 
patch for masakariclient to register the classes with 
openstack.connection.Connection:


https://review.openstack.org/#/c/534883/

But I wanted to be clear that the code is **welcome** directly in tree, 
and that anyone working on an OpenStack service is welcome to put 
support code directly in the openstacksdk tree.


Monty

PS. Joe - you've also got some classes in the tricircle test suite 
extending the network service. I haven't followed all the things ... are 
the tricircle network extensions done as neutron plugins now? (It looks 
like they are) If so, why don't we figure out getting your network 
resources in-tree as well.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][masakari][api] masakari service-type, docs, api-ref and releasenotes

2018-01-17 Thread Monty Taylor

Hey everybody,

I noticed today while preparing patches to projects that are using 
openstacksdk that masakari is not listed in service-types-authority. [0]


I pushed up a patch to fix that [1] as well as to add api-ref, docs and 
releasenotes jobs to the masakari repo so that each of those will be 
published appropriately.


As part of doing this, it came up that 'ha' is a pretty broad 
service-type and that perhaps it should be 'compute-ha' or 'instance-ha'.


The service-type is a unique key for identifying a service in the 
catalog, so the same service-type cannot be shared amongst openstack 
services. It is also used for api-ref publication (to
https://developer.openstack.org/api-ref/{service-type} ) - and in 
openstacksdk as the name used for the service attribute on the 
Connection object. (So the service-type 'ha' would result in having 
conn.ha on an openstack.connection.Connection)
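One practical wrinkle in picking a new name: hyphens aren't valid in Python attribute names, so SDK-style tooling maps a hyphenated service-type to an underscored attribute. A small sketch of that convention ('instance-ha' here is the hypothetical rename under discussion, not an existing type):

```python
def service_attribute(service_type):
    """Derive a Connection attribute name from a service-type.

    Hyphens in a service-type become underscores, since they can't
    appear in a Python identifier.
    """
    return service_type.replace('-', '_')
```

So 'ha' would stay `conn.ha`, while a rename to 'instance-ha' would surface as `conn.instance_ha`.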


We do support specifying historical aliases. Since masakari has been 
using ha up until now, we'll need to list it in the aliases at the very 
least.


Do we want to change it? What should we change it to?

Thanks!
Monty

[0] http://git.openstack.org/cgit/openstack/service-types-authority
[1] https://review.openstack.org/#/c/534875/
[2] https://review.openstack.org/#/c/534878/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] Retiring ceilometerclient

2018-01-10 Thread Monty Taylor

On 01/10/2018 05:44 PM, Doug Hellmann wrote:

Excerpts from Monty Taylor's message of 2018-01-10 17:40:28 -0600:

On 01/10/2018 04:10 PM, Jon Schlueter wrote:

On Thu, Nov 23, 2017 at 4:12 PM, gordon chung  wrote:




On 2017-11-22 04:18 AM, Julien Danjou wrote:

Hi,

Now that the Ceilometer API is gone, we really don't need
ceilometerclient anymore. I've proposed a set of patches to retire it:

 https://review.openstack.org/#/c/522183/




So my question here is are we missing a process check for retiring a
project that is still in
the requirements of several other OpenStack projects?

I went poking around and found that rally [4], heat [1], aodh [3] and
mistral [2] still had references to
ceilometerclient in the RPM packaging in RDO Queens, and on digging a
bit more they
were still in the requirements for at least those 4 projects.

I would think that a discussion around retiring a project should also
include at least enumerating
which projects are currently consuming it [5].  That way a little bit
of pressure on those consumers
can be exerted to evaluate their usage of an about-to-be-retired
project. It shouldn't stop the discussions around retiring a project;
it's just a data point for decision making.


It's worth pointing out that openstacksdk has ceilometer REST API
support in it, although it is special-cased since ceilometer was retired
before we even made the service-types-authority:

http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/openstack/connection.py#n234


Whoops, that's not ceilometer - that's gnocchi I think?

ceilometer support *does* have a service-types-authority reference so 
*isn't* special-cased and is here:


http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/openstack/meter


We can either keep it there indefinitely (there is no cost to keeping
it, other than that one "self._load('metric')" line) - or we could take
this opportunity to purge it from sdk as well.

BUT - if we're going to remove it from SDK I'd rather we do it in the
very-near-future because we're getting closer to a 1.0 for SDK and once
that happens if ceilometer is still there ceilometer support will remain
until the end of recorded history.

We could keep it and migrate the heat/mistral/rally/aodh
ceilometerclient uses to be SDK uses (although heaven knows how we test
that without a ceilometer in devstack)

I honestly do not have a strong opinion in either direction and welcome
input on what people would like to see done.

Monty



If ceilometer itself is deprecated, do we need to maintain support
in any of our tools?


We do not - although if we had had ceilometer support in shade I would 
be very adamant that we continue to support it to the best of our 
ability for forever, since you never know who out there is running on an 
old cloud that still has it.


This is why I could go either way personally from an SDK perspective - 
we don't have a 1.0 release of SDK yet, so if we do think it's best to 
just clean house, now's the time.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] Retiring ceilometerclient

2018-01-10 Thread Monty Taylor

On 01/10/2018 04:10 PM, Jon Schlueter wrote:

On Thu, Nov 23, 2017 at 4:12 PM, gordon chung  wrote:




On 2017-11-22 04:18 AM, Julien Danjou wrote:

Hi,

Now that the Ceilometer API is gone, we really don't need
ceilometerclient anymore. I've proposed a set of patches to retire it:

https://review.openstack.org/#/c/522183/




So my question here is are we missing a process check for retiring a
project that is still in
the requirements of several other OpenStack projects?

I went poking around and found that rally [4], heat [1], aodh [3] and
mistral [2] still had references to
ceilometerclient in the RPM packaging in RDO Queens, and on digging a
bit more they
were still in the requirements for at least those 4 projects.

I would think that a discussion around retiring a project should also
include at least enumerating
which projects are currently consuming it [5].  That way a little bit
of pressure on those consumers
can be exerted to evaluate their usage of an about-to-be-retired
project. It shouldn't stop the discussions around retiring a project;
it's just a data point for decision making.


It's worth pointing out that openstacksdk has ceilometer REST API 
support in it, although it is special-cased since ceilometer was retired 
before we even made the service-types-authority:


http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/openstack/connection.py#n234

We can either keep it there indefinitely (there is no cost to keeping 
it, other than that one "self._load('metric')" line) - or we could take 
this opportunity to purge it from sdk as well.


BUT - if we're going to remove it from SDK I'd rather we do it in the 
very-near-future because we're getting closer to a 1.0 for SDK and once 
that happens if ceilometer is still there ceilometer support will remain 
until the end of recorded history.


We could keep it and migrate the heat/mistral/rally/aodh 
ceilometerclient uses to be SDK uses (although heaven knows how we test 
that without a ceilometer in devstack)


I honestly do not have a strong opinion in either direction and welcome 
input on what people would like to see done.


Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] New documentation PTI jobs are live - some fallout possible

2017-12-18 Thread Monty Taylor

Hey everybody,

For several weeks we've been working on implementing updates to the 
Sphinx documentation jobs to use the new PTI. [0][1][2][3][4]


tl;dr
* we're no longer using tox for building sphinx docs
* if your doc build jobs break all of a sudden, ping us in #openstack-infra
* there is some optional cleanup you can now do if you want

The command that is run to build docs
-

The command that is run by the Zuul jobs is::

sphinx-build -b html doc/source doc/build/html

If the project has a setup.cfg file with:

[build_sphinx]
warnings-is-error=1

then a -W is added:

sphinx-build -W -b html doc/source doc/build/html
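The decision the job makes can be sketched in a few lines of stdlib Python — an illustration of the rule above, not the actual Zuul role:

```python
import configparser

def sphinx_command(setup_cfg_text):
    """Build the docs command, adding -W only when the project opts in
    via [build_sphinx] warnings-is-error in its setup.cfg."""
    cfg = configparser.ConfigParser()
    cfg.read_string(setup_cfg_text)
    cmd = ['sphinx-build', '-b', 'html', 'doc/source', 'doc/build/html']
    if cfg.getboolean('build_sphinx', 'warnings-is-error', fallback=False):
        cmd.insert(1, '-W')  # treat sphinx warnings as errors
    return cmd
```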

We are no longer using tox for sphinx jobs
--

There are some interesting variations in how people are doing doc 
requirements out there. We've caught many of them, but even just this 
morning found two new ways in which projects had managed to be different.


The long and short of it is that the place we *want* to be is for there 
to be a file, doc/requirements.txt, that contains requirements necessary 
for building sphinx docs. That same interface can be used by Javascript, 
Go, Rust, C++, Ruby, Ansible, Puppet, whatever really - it's not python 
specific.


To facilitate moving to that file, the jobs look for a 
doc/requirements.txt file and use it if they find it. If they don't find 
it, they look for a test-requirements.txt file (since that's what should 
have been driving the old version of the PTI anyway)
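That lookup order can be sketched like so (an illustration of the fallback described above, not the job's actual implementation):

```python
import os

def docs_requirements_file(project_root):
    """Return the requirements file the docs job would use: prefer
    doc/requirements.txt, fall back to test-requirements.txt,
    else None."""
    for candidate in ('doc/requirements.txt', 'test-requirements.txt'):
        path = os.path.join(project_root, candidate)
        if os.path.exists(path):
            return path
    return None
```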


If your doc build jobs break all of a sudden
-

Cases in which you might not be handled and will need a patch to your 
project:


* You did not have your doc requirements expressed in test-requirements.txt.

Some projects have used a [doc] extra in their setup.cfg file. Others 
just have requirements listed on the [docs] and [venv] environments in 
their tox.ini file.


The simple fix is simply to push up a patch putting your doc 
requirements into doc/requirements.txt


* Your docs depend on the pbr autodoc feature.

My bad. TOTALLY missed this one when we were prepping. There is a patch 
[5] up that should fix/workaround this issue that we'll land as soon as 
we verifty it fixes the pbr autodoc using projects.


* Your tools/tox_install.sh is doing something weird and we didn't find 
it and fix it beforehand.


These are more case-by-case - so definitely come find us.

There is some optional cleanup you can now do if you want
-

Now that the job update is live, it's no longer necessary for doc 
requirements to be listed in the normal test requirements (turns out you 
don't need to install sphinx to run your unittests)


* If you have a doc/requirements.txt already that we added recently, we 
will have put a reference to it in the [venv] tox.ini. You can remove it 
from there - or honestly from any tox venv that isn't [docs]


* If you have a normal test-requirements.txt file that has both types, 
you can move your documentation requirements to doc/requirements.txt and 
remove them from test-requirements.txt


* If you have distro-package requirements that are needed for your docs 
to build, you can (and should) add them to bindep.txt and mark them with 
the 'doc' profile. We also fall back to finding things in the 'test' 
profile - but being more explicit is better.


* If you have a 'docs' environment for tox:

  ** Reminder: the tox 'docs' environment is a developer convenience. 
It is not used in building anything in the gate. **


If you are a project that follows constraints, update your docs env to 
look something like:


[testenv:docs]
deps =
  -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
  -r{toxinidir}/requirements.txt
  -r{toxinidir}/doc/requirements.txt
commands = sphinx-build -b html doc/source doc/build/html

The constraints file and requirements.txt file entries are important to 
ensure that you're getting constraints applied. If you are not a project 
that follows the upper-constraints system, you can omit both the 
constraints reference and the requirements.txt reference:


[testenv:docs]
deps = -r{toxinidir}/doc/requirements.txt
commands = sphinx-build -b html doc/source doc/build/html

Thanks!
Monty

[0] https://review.openstack.org/#/c/508694/
[1] 
https://governance.openstack.org/tc/reference/project-testing-interface.html#documentation
[2] 
https://governance.openstack.org/tc/reference/pti/python.html#documentation
[3] 
https://governance.openstack.org/tc/reference/pti/golang.html#documentation
[4] 
https://governance.openstack.org/tc/reference/pti/javascript.html#documentation

[5] https://review.openstack.org/#/c/528796

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [python-openstacksdk][shade] New cores proposed for shade/sdk

2017-12-04 Thread Monty Taylor

On 11/30/2017 10:00 AM, Dean Troyer wrote:

On Thu, Nov 30, 2017 at 8:00 AM, Monty Taylor <mord...@inaugust.com> wrote:

So let's add them to the core team, yeah?


+1

dt



Hearing no dissent I've added both. Welcome aboard!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] [api] [sdks] [keystone] Sahara APIv2: service discovery

2017-12-01 Thread Monty Taylor

On 12/01/2017 05:04 AM, Luigi Toscano wrote:

On Friday, 1 December 2017 01:34:36 CET Monty Taylor wrote:


First and most importantly, you need to update python-saharaclient to
make sure it can handle an unversioned endpoint in the catalog (by
doing discovery) - and that if it finds an unversioned endpoint in the
catalog it knows to prepend the project-id to the URLs it sends. The
easiest/best way to do this is to make sure it's delegating version
discovery to keystoneauth ... I will be more than happy to help you get
that updated.

Then, for now, recommend that *new* deployments put the unversioned
endpoint into their catalog, but that existing deployments keep the v1
endpoint in the catalog even if they upgrade sahara to a version that
has v2 as well. (The full description of version discovery describes how
to get to a newer version even if an older version is in the catalog, so
people can opt-in to v2 if it's there with no trouble)

That gets us to a state where:

- existing deployments with users using v1 are not broken
- existing deployments that upgrade can have user's opt-in to v2 easily
- new deployments will have both v1 and v2 - but users who want to use
v1 will have to do so with a client that understands actually doing
discovery


Does it work even if we would like to keep v1 as default for a while? v2, at
least in the first release, will be marked as experimental; hopefully it
should stabilize soon, but still.


Totally. In the version discovery document returned by sahara, keep v1 
listed as "CURRENT" and list v2 as "EXPERIMENTAL". Then, when you're 
ready to declare v2 as the recommended API, change v1 to "SUPPORTED" and 
v2 to "CURRENT".
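For reference, the discovery document in question might look like this — a sketch in the shape of a standard OpenStack version discovery document, with illustrative ids and URLs rather than sahara's actual values. Promoting v2 later is just a matter of swapping the status markers:

```python
# Shape of an OpenStack version discovery document; the version ids
# and URLs here are illustrative, not sahara's actual values.
DISCOVERY = {
    "versions": [
        {"id": "v1.1", "status": "CURRENT",
         "links": [{"rel": "self",
                    "href": "https://sahara.example.com/v1.1/"}]},
        {"id": "v2.0", "status": "EXPERIMENTAL",
         "links": [{"rel": "self",
                    "href": "https://sahara.example.com/v2/"}]},
    ]
}

def versions_by_status(doc, status):
    """Return the ids of the versions carrying a given status marker."""
    return [v["id"] for v in doc["versions"] if v["status"] == status]
```

A client doing discovery would pick the "CURRENT" entry by default while still letting users opt in to the "EXPERIMENTAL" one.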



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] [api] [sdks] [keystone] Sahara APIv2: service discovery

2017-11-30 Thread Monty Taylor

On 11/30/2017 03:07 PM, Jeremy Freudberg wrote:

Hi all,

In the Sahara world, we are getting ready to expose our experimental
v2 API to real users and not just curious devs. Therefore we need to
start thinking about service/version discovery of this new API.


\o/


Earlier this year, the service types authority was created, and one of
the things it asserted was that different service types for each API
version (like Cinder and Mistral did) is bad.

So, it would entail that we should not adopt the `data-processingv2`
service type.


Yes. Please don't... the service-types data has made its way into many 
places now.



Unfortunately it's not so easy because Sahara API v1 relies on project
ID in the URL, and therefore is expected to be registered with the
project ID template in the Keystone service catalog. But API v2 does
not accept project ID in the URL.

We don't want to break existing clients' ability to discover and use
Sahara among all clouds. So if we changed the expectation of the
endpoint for the current `dataprocessing` service type to never
contain project ID, some clients might get spooked. (Correct me if I'm
wrong)


WELL - there's totally a way to do this that works, although it's gonna 
be somewhat annoying.


First and most importantly, you need to update python-saharaclient to 
make sure it can handle an unversioned endpoint in the catalog (by 
doing discovery) - and that if it finds an unversioned endpoint in the 
catalog it knows to prepend the project-id to the URLs it sends. The 
easiest/best way to do this is to make sure it's delegating version 
discovery to keystoneauth ... I will be more than happy to help you get 
that updated.


Then, for now, recommend that *new* deployments put the unversioned 
endpoint into their catalog, but that existing deployments keep the v1 
endpoint in the catalog even if they upgrade sahara to a version that 
has v2 as well. (The full description of version discovery describes how 
to get to a newer version even if an older version is in the catalog, so 
people can opt-in to v2 if it's there with no trouble)


That gets us to a state where:

- existing deployments with users using v1 are not broken
- existing deployments that upgrade can have user's opt-in to v2 easily
- new deployments will have both v1 and v2 - but users who want to use 
v1 will have to do so with a client that understands actually doing 
discovery


Then let it sit that way for a while, and we can work to make sure that 
other clients with sahara support are also up to date with version 
discovery.


There will eventually come a point where a deployer will decide they 
want to change their catalog from /v1/{project_id} to / ... but by then 
we should have all the clients able to understand discovery fully.



So, we either need to break the rules and create the
`data-processingv2` type anyway, or we can create a new type just
called, for example, `bigdata` which going forward can be used to
discover either v1 or v2 without any interop concerns.


I think renaming to bigdata is less terrible than data-processingv2 ... 
but let's see if we can't get things to work the other way first - 
there's a lot of churn otherwise.



This is not an aspect of OpenStack I know a lot about, so any guidance
is appreciated. Once we figure out a way forward I will make sure
patches get proposed to the service types authority repo.


Almost nobody does. :) But we can totally figure this one out.

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [E] Re: nova diagnostics in client library/SDK

2017-11-30 Thread Monty Taylor

On 11/29/2017 12:49 PM, Gordon, Kent S wrote:


On Tue, Nov 28, 2017 at 2:15 PM, Monty Taylor <mord...@inaugust.com> wrote:


On 11/03/2017 11:31 AM, Gordon, Kent S wrote:

Do any of the python client libraries implement the nova
diagnostics API?


https://wiki.openstack.org/wiki/Nova_VM_Diagnostics


Not to my knowledge, no. However, adding support for it should be
easy enough to accomplish and would be a welcome addition.

This is the API you're talking about?


https://developer.openstack.org/api-ref/compute/#servers-diagnostics-servers-diagnostics

yes

If you feel like hacking on it, a patch to
openstack/python-openstacksdk would be the best way to go.

However, this is microversion-protected, and this would be the first
such feature in the SDK. So if diving that far down the rabbithole
sounds like too much, either bug me until I get around to it - or do
as much of it as makes sense (like adding a Resource class based on
openstack.resource2) but ignore the microversion bit and I can help
finish it off.


It has been a while since I did a lot of development.  Let me see how 
far I can get.


No worries - I can also just knock it out for you ... mostly wanted to 
be welcoming in case you *wanted* to add it. (it's no fun if I swoop in 
and steal people's hacking projects)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-openstacksdk][shade] New cores proposed for shade/sdk

2017-11-30 Thread Monty Taylor

Hey everybody,

I'd like to propose adding Slawek Kaplonski (slaweq) and Rosario Di 
Somma (rods) to the shade/sdk core team. While they are clearly two 
different humans, I'm going to speak of them collectively since what I 
have to say about each of them is largely the same.


They were both super helpful in getting the REST transition completed 
and both of them have shown both a good understanding of the code - and 
that they know their boundaries. (Knowing when you don't know something 
turns out to be super important)


They both have exhibited an attention to detail, the ability to 
troubleshoot the weird kinds of things that we run in to in shade-land, 
and the ability to safely stage large or ugly changes in digestible chunks.


They have both shown an eagerness to jump in, figure out and handle hard 
issues.


Now that we've finished RESTification it's time to jump straight into 
shade/sdk integration. On the one hand it'll be a repeat of a bunch of 
what we just did - but on the other it's going to be infinitely easier 
because of all of the work slaweq and rods put in over the past year.


So let's add them to the core team, yeah?

Thanks!
Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oaktree] Follow up to Multi-cloud Management in OpenStack Summit session

2017-11-29 Thread Monty Taylor

On 11/28/2017 07:14 PM, Joshua Harlow wrote:

Small side-question,

Why would this just be limited to openstack clouds?


Would it be?


That's a great question. I think, at least for now, attempting to 
support non-OpenStack clouds would be too much and would cause us to 
have a thing that tries to solve all the problems and ends up solving 
none of them.


The problem is that, as much as there are deployer differences between 
OpenStack clouds, papering over them isn't THAT bad from an interface 
perspective, since the fundamental concepts are all the same.


Once you add non-OpenStack clouds, you have to deal with the extreme 
impedence mismatch between core concepts, and the use of similar names 
for different things.


For instance - an OpenStack Availability Zone and an AWS AZ are **not** 
the same thing. So you'd either need to use a different word mapped to 
each one (which would confuse either OpenStack or AWS users) or you'd 
have an oaktree concept mean different things depending on which cloud 
happened to be there.


All that said - I don't think there's anything architecturally that 
would prevent such work from happening- I just think it's fraught with 
peril and unlikely to be super successful and that we should focus on 
making sure OpenStack users can consume multi-cloud and 
multi-cloud-region sanely. Then, once we're happy with that and have 
served the needs of our OpenStack users, if someone comes up with a plan 
that adds support for non-OpenStack backend drivers for oaktree in a way 
that does not make life worse for the OpenStack users - then why not.



Monty Taylor wrote:

Hey everybody!

https://etherpad.openstack.org/p/sydney-forum-multi-cloud-management

I've CC'd everyone who listed interest directly, just in case you're not
already on the openstack-dev list. If you aren't, and you are in fact
interested in this topic, please subscribe and make sure to watch for
[oaktree] subject headings.

We had a great session in Sydney about the needs of managing resources
across multiple clouds. During the session I pointed out the work that
had been started in the Oaktree project [0][1] and offered that if the
people who were interested in the topic thought we'd make progress best
by basing the work on oaktree, that we should bootstrap a new core team
and kick off some weekly meetings. This is, therefore, the kickoff email
to get that off the ground.

All of the below is thoughts from me and a description of where we're at
right now. It should all be considered up for debate, except for two
things:

- gRPC API
- backend implementation based on shade

As those are the two defining characteristics of the project. For those
who weren't in the room, justifications for those two characteristics 
are:


gRPC API


There are several reasons why gRPC.

* Make it clear this is not a competing REST API.

OpenStack has a REST API already. This is more like a 'federation' API
that knows how to talk to one or more clouds (similar to the kubernetes
federation API)

* Streaming and async built in

One of the most costly things in using the OpenStack API is polling.
gRPC is based on HTTP/2 and thus supports streaming and other exciting
things. This means an oaktree running in or on a cloud can do its
polling loops over the local network and the client can just either wait
on a streaming call until the resource is ready, or can fire an async
call and deal with it later on a notification channel.

* Network efficiency

Protobuf over HTTP/2 is a super-streamlined binary protocol, which
should actually be really nice for our friends in Telco land who are
using OpenStack for Edge-related tasks in 1000s of sites. All those
roundtrips add up at scale.

* Multi-language out of the box

gRPC allows us to directly generate consistent consumption libs for a
bunch of languages - or people can grab the proto files and integrate
those into their own build if they prefer.

* The cool kids are doing it

To be fair, Jay Pipes and I tried to push OpenStack to use Protobuf
instead of JSON for service-to-service communication back in 2010 - so
it's not ACTUALLY a new idea... but with Google pushing it and support
from the CNCF, gRPC is actually catching on broadly. If we're writing a
new thing, let's lean forward into it.

Backend implementation in shade
---

If the service is defined by gRPC protos, why not implement the service
itself in Go or C++?

* Business logic to deal with cloud differences

Adding a federation API isn't going to magically make all of those
clouds work the same. We've got that fairly well sorted out in shade and
would need to reimplement basically all of shade in the other language.

* shade is battle tested at scale

shade is what Infra's nodepool uses. In terms of high-scale API
consumption, we've learned a TON of lessons. Much of the design inside
of shade is the result of real-world scaling issues. It's Open Source,
so we could obviously copy 

Re: [openstack-dev] [oaktree] Follow up to Multi-cloud Management in OpenStack Summit session

2017-11-28 Thread Monty Taylor
the user doesn't know about, it will delete it on error) So for "create 
many servers" the process for each will succeed or fail. Depending on 
the resource it's either safe or not safe to retry without deleting - 
but that'll be something we'll want to rationalize in an oaktree context.


For things that do need a greater amount of transactional consistency, 
like "I want 4 vms, a private network and a load balancer in each of my 
cloud-regions" ... I believe the shade/oaktree operation would be "run 
this heat template everywhere". Heat already handles convergence 
operations; shade trying to do that from the outside would be OY.


>
> What happens in the above if a third user Y is creating resources in one
> of those clouds outside the view of oaktree... ya da ya da... What
> happens if they are both targeting the same tenant...

Yup. That should actually work fine (we do this all the time)- it's why 
we assume the cloud is the source of truth for what exists and not a 
local data store (two phase commits across WAN links anybody?)


> Perhaps a decent idea to start some kind of etherpad to start listing
> these questions (and at least think about them a wee bit) down?

Sounds great!

> Monty Taylor wrote:
>> Hey everybody!
>>
>> https://etherpad.openstack.org/p/sydney-forum-multi-cloud-management
>>
>> I've CC'd everyone who listed interest directly, just in case you're not
>> already on the openstack-dev list. If you aren't, and you are in fact
>> interested in this topic, please subscribe and make sure to watch for
>> [oaktree] subject headings.
>>
>> We had a great session in Sydney about the needs of managing resources
>> across multiple clouds. During the session I pointed out the work that
>> had been started in the Oaktree project [0][1] and offered that if the
>> people who were interested in the topic thought we'd make progress best
>> by basing the work on oaktree, that we should bootstrap a new core team
>> and kick off some weekly meetings. This is, therefore, the kickoff email
>> to get that off the ground.
>>
>> All of the below is thoughts from me and a description of where we're at
>> right now. It should all be considered up for debate, except for two
>> things:
>>
>> - gRPC API
>> - backend implementation based on shade
>>
>> As those are the two defining characteristics of the project. For those
>> who weren't in the room, justifications for those two characteristics
>> are:
>>
>> gRPC API
>> 
>>
>> There are several reasons why gRPC.
>>
>> * Make it clear this is not a competing REST API.
>>
>> OpenStack has a REST API already. This is more like a 'federation' API
>> that knows how to talk to one or more clouds (similar to the kubernetes
>> federation API)
>>
>> * Streaming and async built in
>>
>> One of the most costly things in using the OpenStack API is polling.
>> gRPC is based on HTTP/2 and thus supports streaming and other exciting
>> things. This means an oaktree running in or on a cloud can do its
>> polling loops over the local network and the client can just either wait
>> on a streaming call until the resource is ready, or can fire an async
>> call and deal with it later on a notification channel.
>>
>> * Network efficiency
>>
>> Protobuf over HTTP/2 is a super-streamlined binary protocol, which
>> should actually be really nice for our friends in Telco land who are
>> using OpenStack for Edge-related tasks in 1000s of sites. All those
>> roundtrips add up at scale.
>>
>> * Multi-language out of the box
>>
>> gRPC allows us to directly generate consistent consumption libs for a
>> bunch of languages - or people can grab the proto files and integrate
>> those into their own build if they prefer.
>>
>> * The cool kids are doing it
>>
>> To be fair, Jay Pipes and I tried to push OpenStack to use Protobuf
>> instead of JSON for service-to-service communication back in 2010 - so
>> it's not ACTUALLY a new idea... but with Google pushing it and support
>> from the CNCF, gRPC is actually catching on broadly. If we're writing a
>> new thing, let's lean forward into it.
>>
>> Backend implementation in shade
>> ---
>>
>> If the service is defined by gRPC protos, why not implement the service
>> itself in Go or C++?
>>
>> * Business logic to deal with cloud differences
>>
>> Adding a federation API isn't going to magically make all of those
>> clouds work the same. We've got that fairly well sorted out in shade and
>> would need to re

[openstack-dev] [oaktree] Follow up to Multi-cloud Management in OpenStack Summit session

2017-11-28 Thread Monty Taylor

Hey everybody!

https://etherpad.openstack.org/p/sydney-forum-multi-cloud-management

I've CC'd everyone who listed interest directly, just in case you're not 
already on the openstack-dev list. If you aren't, and you are in fact 
interested in this topic, please subscribe and make sure to watch for 
[oaktree] subject headings.


We had a great session in Sydney about the needs of managing resources 
across multiple clouds. During the session I pointed out the work that 
had been started in the Oaktree project [0][1] and offered that if the 
people who were interested in the topic thought we'd make progress best 
by basing the work on oaktree, that we should bootstrap a new core team 
and kick off some weekly meetings. This is, therefore, the kickoff email 
to get that off the ground.


All of the below is thoughts from me and a description of where we're at 
right now. It should all be considered up for debate, except for two things:


- gRPC API
- backend implementation based on shade

As those are the two defining characteristics of the project. For those 
who weren't in the room, justifications for those two characteristics are:


gRPC API


There are several reasons why gRPC.

* Make it clear this is not a competing REST API.

OpenStack has a REST API already. This is more like a 'federation' API 
that knows how to talk to one or more clouds (similar to the kubernetes 
federation API)


* Streaming and async built in

One of the most costly things in using the OpenStack API is polling. 
gRPC is based on HTTP/2 and thus supports streaming and other exciting 
things. This means an oaktree running in or on a cloud can do its 
polling loops over the local network and the client can just either wait 
on a streaming call until the resource is ready, or can fire an async 
call and deal with it later on a notification channel.
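
To make the contrast concrete, here's a minimal sketch in plain Python (no 
gRPC, all names hypothetical) of the polling loop a client has to run today 
versus simply consuming a stream of pushed status updates:

```python
import time


def poll_until_active(get_status, interval=0.1, timeout=10):
    """Today's model: the client burns one round-trip per check."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if status == "ACTIVE":
            return status
        time.sleep(interval)
    raise RuntimeError("resource never became ACTIVE")


def wait_on_stream(updates):
    """Streaming model: the server pushes updates and the client just
    iterates, the way a gRPC server-side stream would be consumed."""
    for status in updates:
        if status == "ACTIVE":
            return status
    raise RuntimeError("stream ended before resource became ACTIVE")
```

In the streaming case all of the polling happens inside oaktree, next to the 
cloud, and the client holds a single connection open instead of hammering 
the API over the WAN.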


* Network efficiency

Protobuf over HTTP/2 is a super-streamlined binary protocol, which 
should actually be really nice for our friends in Telco land who are 
using OpenStack for Edge-related tasks in 1000s of sites. All those 
roundtrips add up at scale.


* Multi-language out of the box

gRPC allows us to directly generate consistent consumption libs for a 
bunch of languages - or people can grab the proto files and integrate 
those into their own build if they prefer.


* The cool kids are doing it

To be fair, Jay Pipes and I tried to push OpenStack to use Protobuf 
instead of JSON for service-to-service communication back in 2010 - so 
it's not ACTUALLY a new idea... but with Google pushing it and support 
from the CNCF, gRPC is actually catching on broadly. If we're writing a 
new thing, let's lean forward into it.


Backend implementation in shade
---

If the service is defined by gRPC protos, why not implement the service 
itself in Go or C++?


* Business logic to deal with cloud differences

Adding a federation API isn't going to magically make all of those 
clouds work the same. We've got that fairly well sorted out in shade and 
would need to reimplement basically all of shade in the other language.


* shade is battle tested at scale

shade is what Infra's nodepool uses. In terms of high-scale API 
consumption, we've learned a TON of lessons. Much of the design inside 
of shade is the result of real-world scaling issues. It's Open Source, 
so we could obviously copy all of that elsewhere - but why? It exists 
and it works, and oaktree itself should be a scale-out shared-nothing 
kind of service anyway.


The hard bits here aren't making API calls to 3 different clouds, the 
hard bits are doing that against 3 *different* clouds and presenting the 
results sanely and consistently to the original user.


Proposed Structure
==

PTL
---

As the originator of the project, I'll take on the initial PTL role. 
When the next PTL elections roll around, we should do a real election.


Initial Core Team
-

oaktree is still small enough that I don't think we need to be super 
protective - so I think if you're interested in working on it and you 
think you'll have the bandwidth to pay attention, let me know and I'll 
add you to the team.


General rules of thumb I try to follow on top of normal OpenStack 
reviewing guidelines:


* Review should mostly be about suitability of design/approach. Style 
issues should be handled by pep8/hacking (with one exception, see 
below). Functional issues should be handled with tests. Let the machines 
be machines and humans be humans.


* Use followup patches to fix minor things rather than causing an 
existing patch to get re-spun and need to be re-reviewed.


The one style exception ... I'm a big believer in not using visual 
indentation - but I can't seem to get pep8 or hacking to complain about 
its use. This isn't just about style - visual indentation causes more 
lines to be touched during a refactor than are necessary, making the 
impact of a change harder to see.


good:
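
(The example itself was cut off in the archive; the sketch below illustrates 
the hanging-indent style being described, with hypothetical names.)

```python
def make_server(name, image, flavor):
    # Stand-in function so the snippet runs; names are illustrative.
    return {"name": name, "image": image, "flavor": flavor}


# Hanging indent: renaming make_server touches only the first line.
server = make_server(
    "web1",
    "cirros",
    "m1.small",
)

# Visual indentation (the style being argued against): renaming
# make_server forces every continuation line to be re-aligned.
server_bad = make_server("web1",
                         "cirros",
                         "m1.small")
```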


Re: [openstack-dev] nova diagnostics in client library/SDK

2017-11-28 Thread Monty Taylor

On 11/03/2017 11:31 AM, Gordon, Kent S wrote:

Do any of the python client libraries implement the nova diagnostics API?

https://wiki.openstack.org/wiki/Nova_VM_Diagnostics


Not to my knowledge, no. However, adding support for it should be easy 
enough to accomplish and would be a welcome addition.


This is the API you're talking about?

https://developer.openstack.org/api-ref/compute/#servers-diagnostics-servers-diagnostics

If you feel like hacking on it, a patch to openstack/python-openstacksdk 
would be the best way to go.


However, this is microversion-protected, and this would be the first 
such feature in the SDK. So if diving that far down the rabbithole 
sounds like too much, either bug me until I get around to it - or do as 
much of it as makes sense (like adding a Resource class based on 
openstack.resource2) but ignore the microversion bit and I can help 
finish it off.


Monty



[openstack-dev] [shade][sdk] Retiring #openstack-shade IRC channel - come join #openstack-sdks

2017-11-28 Thread Monty Taylor
With the adoption of python-openstacksdk into the shade team and the 
merging of the shade code into the python-openstacksdk codebase, having 
both an #openstack-shade and an #openstack-sdks channel is more 
confusing than helpful. [0]


To solve that, I'd like to shift shade-related development discussion 
into #openstack-sdks with the rest of it.


Thanks!
Monty

[0] Why is there still shade-related development discussion if the code 
has been merged into python-openstacksdk? WELL- we still need to turn 
the existing shade codebase into a compatibility shim layer on top of the 
code in python-openstacksdk. That's probably going to take at *least* a 
cycle ... and we'll need to make sure that the shade layer continues to 
work for as long as we're still around and kicking.




[openstack-dev] [all] Changes to releasenotes and docs build jobs

2017-11-22 Thread Monty Taylor

Hey everybody!

Following recent changes [0] in the PTI [1][2] we're rolling out some 
changes to the build-openstack-releasenotes and 
build-openstack-sphinx-docs jobs. If you see patches with the topic 
'updated-pti' - they are in service of this.


The most important thing to be aware of is that we'll no longer be using 
tox for either of them. There are a few reasons for that - the one 
that's most important to me is that it allows us to use the exact same 
docs and releasenotes build jobs for python, go, javascript or any other 
language without needing to add python build tooling to non-python 
repos. We'll also align more with how readthedocs does things in the 
broader python ecosystem.


It's also worth reminding people that we've NEVER used 'tox -edocs' in 
the gate for docs jobs - so anyone who has additional things in their 
docs environment has not been having those things run. For folks running 
doc8, we recommend adding those checks to your pep8 environment instead.


It's also worth noting that we're adding support for a 
doc/requirements.txt file (location chosen for alignment with readthedocs) to 
express requirements needed for all docs (for both releasenotes and 
docs). We'll start off falling back to test-requirements.txt ... but, we 
recommend splitting doc related requirements into doc/requirements.txt - 
since that will mean not needing to install Sphinx when doing tox unittests.


Specific info
=

Releasenotes


The patches for releasenotes have been approved and merged.

* We use -W for all releasenotes builds - this means warnings are always 
errors for releasenotes. That shouldn't bother anyone, as most of the 
releasenotes content is generated by reno anyway.


* We're temporarily installing the project to get the version number. Doing 
this will be removed as soon as the changes in 
topic:releasenotes-version land. Note this only changes the version 
number on the front page, not what is shown.


* Installs dependencies via bindep for doc environment.

* doc/requirements.txt is used for installation of python dependencies. 
Things like whereto or openstackdocstheme should go there.


Documentation builds


* We use -W only if setup.cfg sets it

* Installs dependencies via bindep for doc environment. Binary deps, 
such as graphviz, should be listed in bindep and marked with a 'doc' tag.


* doc/requirements.txt is used for installation of python dependencies.
Things like whereto or openstackdocstheme should go there.

* tox_install.sh is used to install the project if it exists. Because of the 
current situation with neutron and horizon plugins it's necessary to run 
tox_install.sh if it exists as part of setup. We eventually want to make 
that go away, but that's a different effort. There are seven repos with 
a problematic tox_install.sh - patches will be arriving to fix them, and 
we won't land the build-openstack-sphinx-docs changes until they have 
all landed.
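
As an illustration, the two files might look like this (the package names 
are examples only, not a prescribed set):

```
# doc/requirements.txt -- python deps for docs and releasenotes builds
sphinx
openstackdocstheme
reno
whereto
```

```
# bindep.txt -- binary dep tagged 'doc' so only doc jobs install it
graphviz [doc]
```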



We've prepared these with a bunch of depends-on patches across a 
collection of projects, so we don't anticipate much in the way of pain 
... but life happens, so if you notice anything go south with 
releasenotes or sphinx jobs, please let us know and we can help solve 
any issues.


Thanks!
Monty

[0] https://review.openstack.org/#/c/509868
[1] https://review.openstack.org/#/c/508694
[2] 
https://governance.openstack.org/tc/reference/project-testing-interface.html




Re: [openstack-dev] [all][release][neutron][horizon] Publishing server projects to PyPI

2017-11-17 Thread Monty Taylor

On 11/17/2017 10:51 AM, Andreas Jaeger wrote:

On 2017-11-17 17:27, Monty Taylor wrote:

Hey everybody!

tl;dr - We'd like to start publishing the server projects to PyPI

Background
==

The move to Zuul v3 has highlighted an odd situation we're in with some
of our projects, most notably neutron and horizon plugin projects.
Namely, those plugins need to depend on neutron and horizon, but we do
not publish neutron and horizon releases to PyPI.

There are a couple of reasons why we haven't historically published
server projects. We were concerned that doing so would 'encourage'
installation of the services from pip. Also that, because it is pip and
not dpkg/rpm/emerge there's no chance that 'pip install nova' will
result in a functioning service. (It's worth noting that pip and PyPI in
general were in a much different state 6 years ago than today - thanks
dstufft for all the great work!)

I think it's safe to say that the 'ship has sailed' on those two issues
... which is to say we're already well past the point of no return on 
either issue. pip is used or not used by deployers as they see fit, and 
I don't think 'pip install nova' not producing a working nova is 
going to actually surprise anyone.

Moving Forward
==

Before we can do anything, we need to update some of the release
validation scripts to allow this (they currently double-check that we're
doing the right things with the right type of project):
https://review.openstack.org/#/c/521115/

Once that's done, rather than doing a big-bang transition, the plan is
to move server projects from using the release-openstack-server template
to using the publish-to-pypi template as they are ready.

This should simplify a great many things for horizon and neutron - and
allow us to get rid of the horizon and neutron specific jobs and
templates. There are a few gotchas we'll need to work through - notably
there is another project there already named "Keystone" - although it
seems not very used. I've reached out to the author to see if he's
willing to relinquish. If he's not, we'll have to get creative.


One question on this: right now the dashboard and neutron plugins test
against current git head. Wouldn't installing from pypi mean that they
test against an older stable version?


Yes - by default - so we actually probably can't get rid of all of the 
variants. However, the tox-siblings support in the tox jobs would allow 
us to remove the zuul-cloner and other magic from the install_tox.sh 
scripts in the repos and just let things work normally.


Incidentally, it would also allow for the plugin projects to decide if 
they wanted to test against latest stable or master - or both.




[openstack-dev] [all][release][neutron][horizon] Publishing server projects to PyPI

2017-11-17 Thread Monty Taylor

Hey everybody!

tl;dr - We'd like to start publishing the server projects to PyPI

Background
==

The move to Zuul v3 has highlighted an odd situation we're in with some 
of our projects, most notably neutron and horizon plugin projects. 
Namely, those plugins need to depend on neutron and horizon, but we do 
not publish neutron and horizon releases to PyPI.


There are a couple of reasons why we haven't historically published 
server projects. We were concerned that doing so would 'encourage' 
installation of the services from pip. Also that, because it is pip and 
not dpkg/rpm/emerge there's no chance that 'pip install nova' will 
result in a functioning service. (It's worth noting that pip and PyPI in 
general were in a much different state 6 years ago than today - thanks 
dstufft for all the great work!)


I think it's safe to say that the 'ship has sailed' on those two issues 
... which is to say we're already well past the point of no return on 
either issue. pip is used or not used by deployers as they see fit, and 
I don't think 'pip install nova' not producing a working nova is 
going to actually surprise anyone.


Moving Forward
==

Before we can do anything, we need to update some of the release 
validation scripts to allow this (they currently double-check that we're 
doing the right things with the right type of project): 
https://review.openstack.org/#/c/521115/


Once that's done, rather than doing a big-bang transition, the plan is 
to move server projects from using the release-openstack-server template 
to using the publish-to-pypi template as they are ready.


This should simplify a great many things for horizon and neutron - and 
allow us to get rid of the horizon and neutron specific jobs and 
templates. There are a few gotchas we'll need to work through - notably 
there is another project there already named "Keystone" - although it 
seems not very used. I've reached out to the author to see if he's 
willing to relinquish. If he's not, we'll have to get creative.


Thanks!
Monty



[openstack-dev] [python-openstacksdk] Existential question about Service/ServiceFilter

2017-11-13 Thread Monty Taylor

Hey everybody,

In the shade stack, one of the main changes is making Proxy a subclass 
of keystoneauth1.adapter.Adapter. This means we don't have to carry 
endpoint_override dicts around to pass to rest calls on the session, but 
instead let ksa handle that for us inside the Adapter structure. With 
that change, it means that Service/ServiceFilter is now *essentially* 
just doing the same job as the *_api_version code from os-client-config.


That logic is still important because we need to, at Connection 
construction time, know what version of a given service's Proxy object 
to instantiate and attach to the connection. (so that conn.image is 
openstack.image.v2._proxy.Proxy on v2 clouds and 
openstack.image.v1._proxy.Proxy on v1 clouds)


We could just remove them and let the OCC CloudConfig object and config 
settings handle it, since we currently have default versions set there.


HOWEVER - I'm writing this because we'd also like to stop having default 
versions encoded in config and instead rely on doing version discovery 
properly to find the right versions, leaving *_api_version settings as 
overrides, not required config.


This brings us to the issue:

* In the current structure, if we use version discovery, we'd need to 
run it - for every service on the cloud - in the Connection constructor. 
That'll be unworkably slow.


* If we don't use version discovery, we can't remove the default version 
settings from openstack.config/clouds.yaml - and private clouds or other 
clouds that don't have vendor settings files will require the user to 
set overrides (that could be detected) on clouds that did not have the 
version of a service that we have set as our default.


* We could add a magic multi-version Proxy object that only does version 
discovery when someone tries to use it, and once it has done discovery 
creates the correct versioned Proxy object and does getattribute 
passthrough to it. This has the potential benefit of also being able to 
have the 'magic' Proxy have a copy of each known version. For instance:


  conn = Connection(cloud='example')
  # On this access, Connection has a magic Proxy Image object at
  # conn.image. When conn.image.__getattribute__ is run, it is
  # discovered that there is no 'real' proxy. ksa discovery is run,
  # v1 and v2 are discovered and openstack.image.v2._proxy.Proxy
  # is created and added the 'real' proxy attribute. Subsequent
  # __getattribute__ calls on conn.image get passed through to the
  # v2 proxy:
  conn.image.images()
  # If someone knows they want a specific version for a call, the
  # magic proxy can also add instances for each found version as
  # attributes, allowing:
  conn.image.v1.images()
  # or
  conn.image.v2.images()

* We could add magic __getattribute__ stuff to Connection itself so that 
the first time someone tries to touch one of the service proxy 
attributes we do version discovery for that and then just attach the 
correct proxy object in place the first time. It could also attach 
versioned attributes as above:


  conn = Connection(cloud='example')
  # On this access, conn.__getattribute__ looks for image, sees
  # that it's in the list of service-types but doesn't have a
  # Proxy object yet. It does ksa discovery, finds that
  # v1 and v2 exist and makes a openstack.image.v2._proxy.Proxy
  # object that it attaches to conn.image. Subsequent
  # __getattribute__ calls on conn note that conn.image exists
  # so just work as normal.
  conn.image.images()
  # If someone knows they want a specific version for a call, the
  # original __getattribute__ call can also add instances for each
  # found version as attributes on the 'main' proxy, allowing:
  conn.image.v1.images()
  # or
  conn.image.v2.images()
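
Both lazy variants reduce to the same trick: defer discovery until the first 
real attribute access. A toy sketch (all names hypothetical, no actual SDK 
code):

```python
class LazyProxy(object):
    """Defers (expensive) version discovery until first real use."""

    def __init__(self, discover):
        self._discover = discover  # callable returning the versioned proxy
        self._real = None

    def __getattr__(self, name):
        # Only invoked for attributes not found normally - i.e. everything
        # that should be served by the real, versioned proxy object.
        if self._real is None:
            self._real = self._discover()
        return getattr(self._real, name)
```

Connection construction stays cheap; each service pays its discovery cost 
exactly once, on first touch.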

* We could do neither of those things and instead rely on persistent 
caching of version discovery. Perhaps add a 'login' command to OSC that 
will do an auth, grab the catalog and locally cache all the version 
discovery. If we find that cached data we'll use it, otherwise we'll 
generate it and cache it on our first use. I'm a bit skeptical that this 
is entirely the right call, as it would potentially be a large startup 
cost at times the user isn't expecting, and without cooperative remote 
content (such as the proposed profile document) invalidation becomes 
super hard to get right. (I DO think we should add persistent caching of 
version discovery documents to lower cost when using OSC on top of SDK - 
but I worry about having basic functionality depend on such code.)
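
For reference, persistent caching of discovery documents doesn't need much 
machinery; a minimal file-backed sketch (the function name, cache format and 
max_age are arbitrary choices, not a real SDK API):

```python
import json
import time


def get_discovery_doc(cache_path, endpoint, fetch, max_age=86400):
    """Return the version-discovery document for `endpoint`, consulting a
    JSON file cache first. `fetch` is a callable doing the real HTTP GET."""
    try:
        with open(cache_path) as f:
            cache = json.load(f)
    except (OSError, ValueError):
        cache = {}
    entry = cache.get(endpoint)
    if entry is not None and time.time() - entry["ts"] < max_age:
        return entry["doc"]  # cache hit, no network round-trip
    doc = fetch(endpoint)
    cache[endpoint] = {"ts": time.time(), "doc": doc}
    with open(cache_path, "w") as f:
        json.dump(cache, f)
    return doc
```

The hard part, as noted above, isn't the cache itself but invalidation 
without cooperative remote content.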


Does anyone have a preference for one approach over the others? Or other 
suggestions?




Re: [openstack-dev] [python-openstacksdk][shade] Reviewing the merge-shade patches

2017-11-13 Thread Monty Taylor

On 11/13/2017 09:06 AM, David Shrewsbury wrote:

Hi!

On Mon, Nov 13, 2017 at 9:54 AM, Monty Taylor <mord...@inaugust.com> wrote:


Hey everybody,

You may have noticed a giant patch series up:

https://review.openstack.org/#/q/topic:merge-shade



One thing I don't see covered here is the current set of Ansible module 
tests. Unless I've missed it,
I don't see any migration of those, nor any reference to the plan for 
them in the future. I know that
we were waiting for Zuulv3 to do some cool github integration things. 
And since the modules import
shade, not this code, we won't be breaking them. But, what's the plan 
for that?


That's a great question. I actually did add the ansible functional tests 
to the "add zuulv3 jobs" patch - but as you say, those aren't really 
testing anything yet since the ansible modules do "import shade"


That brings up a few really good things that should be pointed out:

1) The intent is to turn the code in the shade repo into a thin compat 
shim that imports python-openstacksdk and subclasses/wraps the sdk 
object. That'll let us fix some of the interface mistakes we made in 
shade but are stuck with for compat reasons in the sdk version of the 
code ... but can still keep the old defaults/behavior in the shade code. 
For instance ... we've learned that fetching extra_specs on flavors in 
shade was ... stupid. We can have the sdk version NOT fetch by default, 
then in shade do:


def list_flavors(self, fetch_extra_specs=True):
    return super(OpenStackCloud, self).list_flavors(
        fetch_extra_specs=fetch_extra_specs)

With this in place, it should mean that shade users can continue on 
without being broken.


2) Once we have a release of openstacksdk that has the shade code in a 
place we're happy with, we should update the ansible modules to do 
import openstack instead of import shade


3) Cross-testing shade, os-client-config and openstacksdk is essential, 
so that we can make sure we're not breaking anything as we work on 
compat shims. The same goes for python-openstackclient - both current v3 
and the v4 branch Dean is working on. We should probably add a shared 
change queue to the zuul v3 config for each of them. We can't add 
python-openstackclient to that shared queue since I'm pretty sure it's 
in the integrated-gate change queue. We COULD add shade, sdk and occ to 
the integrated-gate queue ... but that might slow us down at the moment, 
so maybe once it's all tied in together like we want it ...


4) openstack.cloud is the wrong home for the shade code. It's just there 
for expedience's sake for now (let's not change TOO many things all at 
once). I'm *pretty* sure the shade methods want to just live on the 
Connection object, so that we wind up with:


  conn = openstack.connection.Connection(cloud='example')
  conn.list_servers() # shade version
  conn.compute.servers() # sdk version
  conn.compute.get('/servers') # REST version

We could alternatively put it as a sub-resource, but since the shade 
methods are intended to be the 'easy' methods, doing this:


  conn = openstack.connection.Connection(cloud='example')
  conn.cloud.list_servers() # shade version
  conn.compute.servers() # sdk version
  conn.compute.get('/servers') # REST version

feels wrong. Maybe keeping the code in openstack/cloud is fine and just 
making it a mixin class (kind of like how we do in shade today with the 
normalize methods) would allow for some better source-code organization.
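A toy sketch of that mixin organization (names and stub data are
hypothetical, just to show where the methods would live):

```python
# Hypothetical sketch of organizing the shade methods as a mixin on
# Connection, as described above. All names are illustrative only.

class CloudMixin(object):
    """Would hold the 'easy' shade-style methods."""

    def list_servers(self):
        return list(self.compute.servers())


class ComputeProxy(object):
    """Stands in for the sdk compute service proxy."""

    def servers(self):
        return iter([{'name': 'server-1'}])


class Connection(CloudMixin):
    def __init__(self, cloud=None):
        self.cloud_name = cloud
        self.compute = ComputeProxy()
```

With that shape, conn.list_servers() sits directly on the Connection
object while conn.compute.servers() remains the sdk layer underneath.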


I also think that the OpenStackCloud/OperatorCloud split from shade 
wound up being more confusing than helpful.


Finally - while I'm rambling here ... after we finish the Resource -> 
Resource2 migration, I'd love to ponder whether or not we can make 
Resource2 be a subclass of munch.Munch. shade needs to be able to return 
objects that are directly json serializable (so that the ansible layer 
works easily/well) - it would be awesome if we didn't have to shove a 
to_dict() into every return on that layer. I haven't dug in to the magic 
guts of Resource2 fully, so I'm not 100% sure doing that will work. 
Since Resource2 is already doing data model definition via object 
parameters the munch part may not be super useful. We'll have to think 
about how we can pass through something so that shade users get things 
that are isinstance(munch.Munch) though - which is maybe a metaclass if 
we can't do it directly at the Resource layer.
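To make the property concrete: munch.Munch is essentially a dict subclass
with attribute access, so instances serialize with json.dumps as-is. A toy
stand-in (not the real munch library or Resource2 code) shows what the
ansible layer would get for free:

```python
import json


class FakeMunch(dict):
    """Toy stand-in for munch.Munch: a dict subclass with attribute
    access. Illustrative only - just enough to show the serialization
    property being discussed."""

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        self[name] = value


class Server(FakeMunch):
    """If Resource2 were Munch-based, returns could look like this."""


server = Server(name='test-server', status='ACTIVE')
server.status = 'SHUTOFF'
# Because it IS a dict, json.dumps works with no to_dict() step.
payload = json.dumps(server, sort_keys=True)
```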


Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-openstacksdk][shade] Reviewing the merge-shade patches

2017-11-13 Thread Monty Taylor

Hey everybody,

You may have noticed a giant patch series up:

  https://review.openstack.org/#/q/topic:merge-shade

It's a big beastie, so I thought a synopsis of what's going would be 
useful. Also, there's a pile of patches towards the end that we want to 
squash before landing but which have been pushed up in smaller patches 
because reading it all as one patch would be unpossible - but reworking 
the unittest mocks for each step would be wasted energy.


Any of the patches in this email that are green from Zuul are ready for 
review/merge.


The easy patches:

These next patches are easy because they're just mechanical - nothing 
really will have changed in SDK because of them.


https://review.openstack.org/#/c/518128 - Merge shade and 
os-client-config into the tree


You can't see the whole picture in gerrit. This merges in shade and 
os-client-config, including their entire git history. Most of that merge 
got pushed up to a throwaway branch so that the review in gerrit would 
be a single patch containing the merge. It's best to just grab the repo 
at that branch and take a look.


It's also important to note here that I'm pretty sure openstack.cloud is 
not actually where shade wants to live and that openstack.config needs a 
refactor - but we can do that after this stack, as the stack is too big 
already.


https://review.openstack.org/#/c/518129/ - Merge in recent fixes from 
the shade repo


Merges patches that already landed in shade between the time I made the 
first merge commit and proposed it. No need to actually review the 
content itself.


https://review.openstack.org/#/c/518130 - Add jobs for Zuul v3

Adds functional zuulv3 native test jobs that match what shade was doing. 
These run both shade and sdk's functional tests (they're one set of tests)


https://review.openstack.org/#/c/518946 - Fix magnum functional test

Fixes the previous one.

https://review.openstack.org/#/c/518131 - Handle glance image pagination 
links better
https://review.openstack.org/#/c/518132 - Fix regression for 
list_router_interfaces
https://review.openstack.org/#/c/518133 - Support filtering servers in 
list_servers using arbitrary parameters


Patches landed in shade in the interim. No need to review content.

Once those are landed we're ready for some fun:

https://review.openstack.org/#/c/518799 - Migrate to testtools for 
functional tests


Reworks the functional base class to use the testtools based functional 
base class from shade, along with the logger fakes and whatnot. The 
biggest change here is the move to using setUp/tearDown instead of 
setUpClass/tearDownClass.


This patch is annoyingly large, but it's also important so that we have 
one consistent approach to these things across the combined codebase.


https://review.openstack.org/#/c/519029 - Rework config and rest layers

The squashed/rollup patch. It can be viewed broken out by looking at the 
other patches that have https://review.openstack.org/#/c/518799 as a parent.


This is where all of the invasive breaking-changes happen. Most notably 
stepping away from Profile, Session and Authenticator in favor of using 
openstack.config/os-client-config and the Adapter class from 
keystoneauth so that we can rely on ksa for discovery and whatnot. It 
also adds a pure-rest interface on each service, removes the plugin 
entrypoints structure and renames 4 of the service objects to match 
their official service-type (although it also keeps aliases so that 
shouldn't cause too much carnage for people).


Removal of profile/authenticator is the biggest user-visible breaking 
change, and it's not ACTUALLY removed in this patch (although it was at 
one point). We have to keep Profile around long enough to remove its use 
from OSC - including osc3, since OSC is used to set up resources as part 
of devstack. It might be worthwhile to consider fleshing out the 
backwards compat for allowing profiles to be passed in for a short time 
since it's there now.


There's still a little more plumbing to do before we can start working 
on using this layer under the shade code - most notably we need to plumb 
in the task manager ... but to my knowledge this stack works with Dean's 
OSC4 branch so we should be in good shape.


https://review.openstack.org/#/c/501438 - Add notes about moving forward

A document with some thoughts - some of which are a good idea and some 
not - about next steps and what we need to get done.


Thanks for the patience in reading these ... they're SUPER long. In many 
cases it's about letting the test suites let us know they're ok.


Monty



[openstack-dev] [all] Zuul v3 Rollout Update - devstack-gate issues edition

2017-10-11 Thread Monty Taylor

  But, Mousie, thou art no thy-lane,
  In proving foresight may be vain;
  The best-laid schemes o' mice an' men
  Gang aft agley,
  An' lea'e us nought but grief an' pain,
  For promis'd joy!
- To a Mouse, Rabbie Burns, 1785

We have awoken this fine morning to find ourselves having two different 
devstack-gate issues[1][2] that are related neither to each other, nor 
to Zuul v3 itself (although one of them only surfaces in Zuul v3)


Given the typically long iteration time on working through base 
devstack-gate issues, it seems rather imprudent to flip the v3 switch 
until they are sorted.


Consider the rollout on hold until the devstack-gate issues are sorted. 
We'll follow up when it's a go again. 


Thanks!
Monty

[1] Ownership of the in-image artifact cache. Has a patch working 
through the gate now: https://review.openstack.org/#/c/511260/


[2] Issue with our Ubuntu mirrors being out of sync causing package 
version conflicts between mainline and UCA mirrors. Root cause and 
solutions are being worked.




[openstack-dev] [tc][election] TC non-candidacy

2017-10-10 Thread Monty Taylor

Hey everybody!

I have decided not to seek reelection for the TC.

I have had the honor and privilege of serving you on the TC and its 
predecessor the PPB since the fall of 2011 (with one six month absence 
that I blame on Jay Pipes).


There are a wealth of amazing people running for the TC this cycle, many 
of whom have a different genetic, cultural or geographic background than 
I do. I look forward to seeing how they shepherd our amazing community.


I am not going anywhere. We're just getting Zuul v3 rolled out, there's 
a pile of work to do to merge and rationalize shade and openstacksdk and 
I managed to sign myself up to implement application credentials in 
Keystone. I still haven't even managed to convince all of you that 
Floating IPs are a bad idea...


Thank you, from the bottom of my heart, for the trust you have placed in 
me. I look forward to seeing as many of you as I can in Sydney, 
Vancouver, Berlin and who knows where else.


Monty



[openstack-dev] [all] IMPORTANT Information about Zuul v3 rollout, the sequel

2017-10-10 Thread Monty Taylor

Hey everybody,

As noted by fungi yesterday:


http://lists.openstack.org/pipermail/openstack-dev/2017-October/123337.html

We are planning to roll Zuul v3 out to take over the gate again tomorrow.

Since then it has become evident that what people should be doing about 
that right now is unclear. SOO 


Things to Do Today to Prepare
=============================

* Please triage failures: As of right now we are aware of no SYSTEMIC 
issues that should cause v3 jobs to fail. If you have a job failing in 
the v3 check pipeline, you should at the very least triage it.


* Check the fixed issues and open issues etherpad when you triage:

  https://etherpad.openstack.org/p/zuulv3-fixed-issues

* Restart coming - only recheck things you're concerned about - Things 
are backed up on v3 due to capacity management, being down a cloud and 
running double jobs. We're about to restart v3 to pick up some changes. 
That will reset the v3 queues allowing you to recheck things to verify 
if they have been fixed and get a response more quickly.


* Read migration guide if you haven't already:

  https://docs.openstack.org/infra/manual/zuulv3.html

A Few Notes About Tomorrow
==========================

* The performance issues from before have been sorted out, so responding 
to issues should no longer take hours.


* The status page should also behave normally - it is currently in an 
exceptional state due to nodepool capacity management.


* We have a temporary high-priority check pipeline for project-config 
changes, so fixes to that repo will be able to merge quickly without 
blocking work in other repos.


* We'll be tracking known issues at:

  https://etherpad.openstack.org/p/zuulv3-issues

* Shifting jobs to being in-repo jobs is totally in-game so that you can 
iterate on things without us:



https://docs.openstack.org/infra/manual/zuulv3.html#moving-legacy-jobs-to-projects

* Reach out at the first sign of issues - and roll up your sleeves. 
We're in #openstack-infra and we'll be primed to jump immediately on any 
issues you're seeing, but we can't do that if we don't know about the 
issues. The teams that have been the most successful so far have been 
the ones who have been talking to us and who have had someone dive in 
and migrate jobs away from legacy jobs. We know not everyone has the 
bandwidth to have a person dive in on reworking jobs tomorrow, but at 
least let us know if you're having issues.


The most important thing is to communicate with us.

We cannot say thank you enough for your patience over the last week and 
a half. We expect tomorrow's 'rollout, the sequel' to be MUCH smoother, 
but there will surely still be issues. The fixes from the last week 
should allow us to respond to those issues much more quickly.


Thanks,
Monty



Re: [openstack-dev] [tc][docs][release] Updating the PTI for docs and tarballs

2017-10-05 Thread Monty Taylor

On 09/30/2017 10:40 AM, Doug Hellmann wrote:

Excerpts from Monty Taylor's message of 2017-09-30 10:20:08 -0500:

Hey everybody,

Oh goodie, I can hear you say, let's definitely spend some time
bikeshedding about specific command invocations related to building docs
and tarballs!!!

tl;dr I want to change the PTI for docs and tarball building to be less
OpenStack-specific

The Problem
===========

As we work on Zuul v3, there are a set of job definitions that are
rather fundamental that can totally be directly shared between Zuul
installations whether those Zuuls are working with OpenStack content or
not. As an example "tox -epy27" is a fairly standard thing, so a Zuul
job called "tox-py27" has no qualities specific to OpenStack and could
realistically be used by anyone who uses tox to manage their project.

Docs and Tarballs builds for us, however, are the following:

tox -evenv -- python setup.py sdist
tox -evenv -- python setup.py build_sphinx

Neither of those are things that are likely to work outside of
OpenStack. (The 'venv' tox environment is not a default tox thing)

I'm going to argue neither of them are actually providing us with much
value.

Tarball Creation
================

Tarball creation is super simple. setup_requires is already handled out
of band of everything else. Go clone nova into a completely empty system
and run python setup.py sdist ... and it works. (actually, nova is big.
use something smaller like gertty ...)

 docker run -it --rm python bash -c 'git clone \
   https://git.openstack.org/openstack/gertty && cd gertty \
   && python setup.py sdist'

There is not much value in that tox wrapper - and it's not like it's
making it EASIER to run the command. In fact, it's more typing.

I propose we change the PTI from:

tox -evenv -- python setup.py sdist

to:

python setup.py sdist

and then change the gate jobs to use the non-tox form of the command.

I'd also like to further change it to be explicit that we also build
wheels. So the ACTUAL commands that the project should support are:

python setup.py sdist
python setup.py bdist_wheel

All of our projects support this already, so this should be a no-op.

Notes:

* Python projects that need to build C extensions might need their pip
requirements (and bindep requirements) installed in order to run
bdist_wheel. We do not support that broadly at the moment ANYWAY - so
I'd like to leave that as an outlier and handle it when we need to
handle it.

* It's *possible* that somewhere we have a repo that has somehow done
something that would cause python setup.py sdist or python setup.py
bdist_wheel to not work without pip requirements installed. I believe we
should consider that a bug and fix it in the project if we find such a
thing - but since we use pbr in all of the OpenStack projects, I find it
extremely unlikely.

Governance patch submitted: https://review.openstack.org/508693

Sphinx Documentation
====================

Doc builds are more complex - but I think there is a high amount of
value in changing how we invoke them for a few reasons.

a) nobody uses 'tox -evenv -- python setup.py build_sphinx' but us
b) we decided to use sphinx for go and javascript - but we invoke sphinx
differently for each of those (since they naturally don't have tox),
meaning we can't just have a "build-sphinx-docs" job and even share it
with ourselves.
c) readthedocs.org is an excellent Open Source site that builds and
hosts sphinx docs for projects. They have an interface for docs
requirements documented and defined that we can align. By aligning,
projects can migrate between docs.o.o and readthedocs.org and still
have a consistent experience.

The PTI I'd like to propose for this is more complex, so I'd like to
describe it in terms of:

- OpenStack organizational requirements
- helper sugar for developers with per-language recommendations

I believe the result can be a single in-tree doc PTI that applies to
python, go and javascript - and a single Zuul job that applies to all of
our projects AND non-OpenStack projects as well.

Organizational Requirements
---------------------------

Following readthedocs.org logic we can actually support a wider range of
schemes technically, but there is human value in having consistency on
these topics across our OpenStack repos.

* docs live in doc/source
* Python requirements needed by Sphinx to build the docs live in
doc/requirements.txt

If the project is python:

* doc/requirements.txt can assume the project will have been installed
* The following should be set in setup.cfg:

[build_sphinx]
source-dir = doc/source
build-dir = doc/build

Doing the above allows the following commands to work cleanly in
automation no matter what the language is:

[ -f doc/requirements.txt ] && pip install -rdoc/requirements.txt
sphinx-build -b doc/source doc/build


I suspect you mean "sphinx-build -b html doc/source doc/build" (the
"html" arg was missing).


Yes! Please amend all 

Re: [openstack-dev] [infra][docs][i18n][ptls] PDFs for project-specific docs with unified doc builds

2017-10-05 Thread Monty Taylor

On 09/25/2017 08:50 AM, Doug Hellmann wrote:

[Topic tags added to subject line]

Excerpts from Ian Y. Choi's message of 2017-09-22 07:29:23 +0900:

Hello,

"Build PDF docs from rst-based guide documents" [1] was implemented in Ocata
cycle, and I have heard that there were a small conversation at the
Denver PTG
regarding getting PDFs for project-specific docs setup to help translations.

In my opinion, it would be a nice idea to extend [1] to project-specific
docs with unified doc builds. It seems that unified doc builds have been
much enhanced with [2]. Now I think having PDF build functionalities in
unified
doc builds would be a way to easily have PDFs for project-specific docs.

Would someone have any idea on this or help it with some good pointers?


The job-template for the unified doc build job is in
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/openstack-publish-jobs.yaml#n22

It uses the "docs" macro, which invokes the script
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/scripts/run-docs.sh

I think we would want to place any new logic for extending the build in
that script, although we should coordinate any changes with the Zuul v3
rollout because as part of that I have seen some suggestions to change
the expected interface for building documentation and we want to make
sure any changes we make will work with the updated interface.


The existing jobs for v3 are here:

http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/jobs.yaml#n159
http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/playbooks/sphinx-docs

Based on my other email about docs and the PTI:

http://lists.openstack.org/pipermail/openstack-dev/2017-September/122923.html

I'd like to make a similar one in openstack-infra/zuul-jobs that doesn't 
use tox but instead just does sphinx commands so that it can be used by 
non-OpenStack folks too.


I think if we can figure out a good way to incorporate translations that 
would be valuable. I don't know if that means a sphinx plugin, or a 
published set of guidelines of "if you are using the zuul 
build-sphinx-docs job and you have translations, put them here and it'll 
all work" ... my hunch is that a plugin would be nice because it would 
locate the logic needed in




Re: [openstack-dev] [keystone][zuul] A Sad Farewell

2017-10-03 Thread Monty Taylor

On 10/03/2017 11:17 AM, Dean Troyer wrote:

On Mon, Oct 2, 2017 at 9:13 PM, Jamie Lennox  wrote:

I'm really sad to announce that I'll be leaving the OpenStack community (at
least for a while), I've accepted a new position unrelated to OpenStack
that'll begin in a few weeks, and am going to be mostly on holiday until
then.


No, this just will not do. -2


I concur. Will a second -2 help?


Seriously, it has been a great pleasure to 'try to take over the
world' with you, at least that is what I recall as the goal we set in
Hong Kong.  The entire interaction of Python-based clients with
OpenStack has been made so much better with your contributions and
OpenStackClient would not have gotten as far as it has without them.


Your contributions and impact around these parts cannot be overstated. I 
have enjoyed our time working together and hold your work and 
contributions in extremely high regard.


Best of luck in your next endeavor - they are lucky to have you!

Monty



[openstack-dev] [all] Important information for people with in-repo Zuul v3 config

2017-10-03 Thread Monty Taylor

Hi everybody,

The partial rollback of Zuulv3 is in place now. Zuulv2 is acting as your 
gate keeper once again. The status page for Zuulv2 can be found at 
http://status.openstack.org/zuul and Zuulv3 can be found at 
http://zuulv3.openstack.org.


With the partial rollback of v3, we've left the v3 check pipeline 
configured for everyone so that new v3 jobs can be iterated on in 
preparation for rolling forward. Doing so leaves open a potential hole 
for breakage, so ...


If you propose any changes to your repos that include changes to zuul 
config files (.zuul.yaml or .zuul.d) - PLEASE make sure that Zuul v3 
runs check jobs and responds before approving the patch.


If you don't do that, you could have zuul v2 land a patch that 
contains a syntax error that would result in invalid config for v3.
Note that this would break not only your repo - but all testing using 
Zuul v3 (in which case we would have to temporarily remove your 
repository from the v3 configuration or ask for immediate revert)!


Keep in mind that as we work on diagnosing the issue that caused the 
rollback, we could be restarting v3, shutting it down for a bit or it 
could be wedged - so v3 might not respond.


Make sure you get a response from v3 on any v3 related patches. Please.

Thanks!
Monty



[openstack-dev] [all] Zuul v3 Status - and Rollback Information

2017-10-03 Thread Monty Taylor

Hey everybody!

Thank you all for your patience over the last week as we've worked 
through the v3 rollout.  While we anticipated some problems, we've run 
into a number of unexpected issues, some of which will take some time to 
resolve.  It has become unreasonable to continue in our current state 
while they are being worked.


With that in mind, we are going to perform a partial rollback to Zuul v2 
while we work on it so that teams can get work done. The details of that 
are as follows:


The project-config repo remains frozen.  Generally we don't want to make 
changes to v2 jobs.  If a change must be made, it will need to be made 
to both v2 and v3 versions.  We will not run the migration script again.


Zuul v3 will continue to run check and periodic jobs on all repos.  It 
will leave review messages, including +1/-1 votes.


Our nodepool quota will be allocated 80% to Zuul v2, and 20% to Zuul 
v3.  This will slow v2 down slightly, but allow us to continue to 
exercise v3 enough to find problems.


Zuul v2 and v3 can not both gate a project or set of projects.  In 
general, Zuul v2 will be gating all projects, except the few projects 
that are specifically v3-only: zuul-jobs, openstack-zuul-jobs, 
project-config, and zuul itself.


We appreciate that some projects would prefer to use v3 exclusively, 
even while we continue to work to stabilize it.  However, in order to 
complete our work as quickly as possible, we may need to restart 
frequently or take extended v3 downtimes.  Because of this, and the 
reduced capacity that v3 will be running with, we will keep the set of 
projects gating under v3 limited to only those necessary.  But keep in 
mind, v3 will still be running check jobs on all projects, so you can 
continue to iterate on v3 job content in check.


If you modified a script in your repo that is called from a job to work 
in v3, you may need to modify it to be compatible with both.  If you 
need to determine whether you are running under Zuul v2 or under v3 with 
legacy compatibility shims, check for the LOG_PATH environment 
variable.  It will only be present when running under Zuul v2 (and it is 
the variable that we are least likely to add to the v3 compatibility shim).
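In script form that version check is a one-liner; a sketch in Python
(adapt to whatever language your repo's job script uses):

```python
import os


def running_under_zuul_v2():
    """Per the note above, LOG_PATH is only set by Zuul v2, so its
    presence distinguishes v2 from v3-with-compat-shims. Sketch only."""
    return 'LOG_PATH' in os.environ


if running_under_zuul_v2():
    print('Zuul v2: keep the legacy behavior')
else:
    print('Zuul v3 (or a local run): use the new interface')
```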


Again - thank you all for your patience, and for all of the great work 
people have done working through the issues we've uncovered. As soon as 
we've got a handle on the most critical issues, we'll plan another 
roll-forward ... hopefully in a week or two, but we'll send out status 
updates as we have them.


Thanks!
Monty



[openstack-dev] [tc][docs][release] Updating the PTI for docs and tarballs

2017-09-30 Thread Monty Taylor

Hey everybody,

Oh goodie, I can hear you say, let's definitely spend some time 
bikeshedding about specific command invocations related to building docs 
and tarballs!!!


tl;dr I want to change the PTI for docs and tarball building to be less 
OpenStack-specific


The Problem
===========

As we work on Zuul v3, there are a set of job definitions that are 
rather fundamental that can totally be directly shared between Zuul 
installations whether those Zuuls are working with OpenStack content or 
not. As an example "tox -epy27" is a fairly standard thing, so a Zuul 
job called "tox-py27" has no qualities specific to OpenStack and could 
realistically be used by anyone who uses tox to manage their project.


Docs and Tarballs builds for us, however, are the following:

tox -evenv -- python setup.py sdist
tox -evenv -- python setup.py build_sphinx

Neither of those are things that are likely to work outside of 
OpenStack. (The 'venv' tox environment is not a default tox thing)


I'm going to argue neither of them are actually providing us with much 
value.


Tarball Creation
================

Tarball creation is super simple. setup_requires is already handled out 
of band of everything else. Go clone nova into a completely empty system 
and run python setup.py sdist ... and it works. (actually, nova is big. 
use something smaller like gertty ...)


   docker run -it --rm python bash -c 'git clone \
 https://git.openstack.org/openstack/gertty && cd gertty \
 && python setup.py sdist'

There is not much value in that tox wrapper - and it's not like it's 
making it EASIER to run the command. In fact, it's more typing.


I propose we change the PTI from:

  tox -evenv -- python setup.py sdist

to:

  python setup.py sdist

and then change the gate jobs to use the non-tox form of the command.

I'd also like to further change it to be explicit that we also build 
wheels. So the ACTUAL commands that the project should support are:


  python setup.py sdist
  python setup.py bdist_wheel

All of our projects support this already, so this should be a no-op.

Notes:

* Python projects that need to build C extensions might need their pip 
requirements (and bindep requirements) installed in order to run 
bdist_wheel. We do not support that broadly at the moment ANYWAY - so 
I'd like to leave that as an outlier and handle it when we need to 
handle it.


* It's *possible* that somewhere we have a repo that has somehow done 
something that would cause python setup.py sdist or python setup.py 
bdist_wheel to not work without pip requirements installed. I believe we 
should consider that a bug and fix it in the project if we find such a 
thing - but since we use pbr in all of the OpenStack projects, I find it 
extremely unlikely.


Governance patch submitted: https://review.openstack.org/508693

Sphinx Documentation
====================

Doc builds are more complex - but I think there is a high amount of 
value in changing how we invoke them for a few reasons.


a) nobody uses 'tox -evenv -- python setup.py build_sphinx' but us
b) we decided to use sphinx for go and javascript - but we invoke sphinx 
differently for each of those (since they naturally don't have tox), 
meaning we can't just have a "build-sphinx-docs" job and even share it 
with ourselves.
c) readthedocs.org is an excellent Open Source site that builds and 
hosts sphinx docs for projects. They have an interface for docs 
requirements documented and defined that we can align. By aligning, 
projects can migrate between docs.o.o and readthedocs.org and still 
have a consistent experience.


The PTI I'd like to propose for this is more complex, so I'd like to 
describe it in terms of:


- OpenStack organizational requirements
- helper sugar for developers with per-language recommendations

I believe the result can be a single in-tree doc PTI that applies to 
python, go and javascript - and a single Zuul job that applies to all of 
our projects AND non-OpenStack projects as well.


Organizational Requirements
---------------------------

Following readthedocs.org logic we can actually support a wider range of 
schemes technically, but there is human value in having consistency on 
these topics across our OpenStack repos.


* docs live in doc/source
* Python requirements needed by Sphinx to build the docs live in 
doc/requirements.txt


If the project is python:

* doc/requirements.txt can assume the project will have been installed
* The following should be set in setup.cfg:

  [build_sphinx]
  source-dir = doc/source
  build-dir = doc/build

Doing the above allows the following commands to work cleanly in 
automation no matter what the language is:


  [ -f doc/requirements.txt ] && pip install -rdoc/requirements.txt
  sphinx-build -b html doc/source doc/build

No additional commands should be required.
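Under those assumptions the whole language-agnostic build logic fits in a
few lines. A sketch (illustrative only, not the actual job definition),
with the html builder argument spelled out:

```python
import os


def doc_build_commands(tree='.'):
    """Commands a language-agnostic docs job could run against a
    checkout, following the interface described above. Sketch only -
    the real Zuul job may differ."""
    commands = []
    requirements = os.path.join(tree, 'doc', 'requirements.txt')
    if os.path.exists(requirements):
        commands.append(['pip', 'install', '-r', requirements])
    # Build HTML docs from doc/source into doc/build.
    commands.append(
        ['sphinx-build', '-b', 'html', 'doc/source', 'doc/build'])
    return commands
```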

The setup.cfg stanza allows:

  python setup.py build_sphinx

to continue to work. (also, everyone already has one)

Helper sugar for developers

[openstack-dev] [all] Update on Zuul v3 Migration - and what to do about issues

2017-09-29 Thread Monty Taylor

Hey everybody!

tl;dr - If you're having issues with your jobs, check the FAQ, this 
email and followups on this thread for mentions of them. If it's an 
issue with your job and you can spot it (bad config) just submit a patch 
with topic 'zuulv3'. If it's bigger/weirder/you don't know - we'd like 
to ask that you send a follow up email to this thread so that we can 
ensure we've got them all and so that others can see it too.


** Zuul v3 Migration Status **

If you haven't noticed the Zuul v3 migration - awesome, that means it's 
working perfectly for you.


If you have - sorry for the disruption. It turns out we have a REALLY 
complicated array of job content you've all created. Hopefully the pain 
of the moment will be offset by the ability for you to all take direct 
ownership of your awesome content... so bear with us, your patience is 
appreciated.


If you find yourself with some extra time on your hands while you wait 
on something, you may find it helpful to read:


  https://docs.openstack.org/infra/manual/zuulv3.html

We're adding content to it as issues arise. Unfortunately, one of the 
issues is that the infra manual publication job stopped working.


While the infra manual publication is being fixed, we're collecting FAQ 
content for it in an etherpad:


  https://etherpad.openstack.org/p/zuulv3-migration-faq

If you have a job issue, check it first to see if we've got an entry for 
it. Once manual publication is fixed, we'll update the etherpad to point 
to the FAQ section of the manual.


** Global Issues **

There are a number of outstanding issues that are being worked. As of 
right now, there are a few major/systemic ones that we're looking in to 
that are worth noting:


* Zuul Stalls

If you say to yourself "zuul doesn't seem to be doing anything, did I do 
something wrong?", know that we're having intermittent connection issues 
in the backend plumbing that jeblair and Shrews are currently tracking 
down.


When it happens it's an across-the-board issue, so fixing it is our 
number one priority.


* Incorrect node type

We've got reports of things running on trusty that should be running on 
xenial. The job definitions look correct, so this is also under 
investigation.


* Multinode jobs having POST FAILURE

There is a bug where log collection tries to collect from all nodes, 
while the old jobs were designed to collect only from the 'primary'. 
Patches are up and this should be fixed soon.


* Branch Exclusions being ignored

This has been reported and its cause is currently unknown.

Thank you all again for your patience! This is a giant rollout with a 
bunch of changes in it, so we really do appreciate everyone's 
understanding as we work through it all.


Monty


