Re: [openstack-dev] [kuryr][kuryr-kubernetes] Propose to support Kubernetes Network Custom Resource Definition De-facto Standard Version 1

2018-06-06 Thread Peng Liu
Cool.
I'll start to prepare a BP for this, so we can have a more detailed
discussion.
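
To make the idea concrete, here is a very rough sketch of how such a
vif_driver could read the NPWG pod annotation. The annotation key and the
JSON form are my reading of the draft spec and should be treated as
assumptions, and the helper below is purely illustrative:

    import json

    # Assumed annotation key from the draft NPWG spec (an assumption, not
    # confirmed wording from the final document).
    NPWG_NETWORKS_ANNOTATION = 'k8s.v1.cni.cncf.io/networks'

    def get_requested_networks(pod):
        """Return the list of additional networks requested by a pod."""
        annotations = pod.get('metadata', {}).get('annotations', {})
        raw = annotations.get(NPWG_NETWORKS_ANNOTATION)
        if not raw:
            return []
        try:
            # JSON list form, e.g. [{"name": "net-a"}, {"name": "net-b"}]
            return json.loads(raw)
        except ValueError:
            # comma-separated shorthand, e.g. "net-a,net-b"
            return [{'name': name.strip()} for name in raw.split(',')]

    # The driver would then look up the matching 'network' CRD objects and
    # create/attach the corresponding Neutron ports for each entry.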

On Wed, Jun 6, 2018 at 11:08 PM, Antoni Segura Puimedon 
wrote:

> On Wed, Jun 6, 2018 at 2:37 PM, Irena Berezovsky 
> wrote:
> > Sounds like a great initiative.
> >
> > Lets follow up on the proposal by the kuryr-kubernetes blueprint.
>
> I fully subscribe to what Irena said. Let's get on this quick!
>
> >
> > BR,
> > Irena
> >
> > On Wed, Jun 6, 2018 at 6:47 AM, Peng Liu  wrote:
> >>
> >> Hi Kuryr-kubernetes team,
> >>
> >> I'm thinking of proposing a new BP to support the Kubernetes Network Custom
> >> Resource Definition De-facto Standard Version 1 [1], which was drafted by the
> >> network plumbing working group of kubernetes-sig-network. I'll call it the
> >> NPWG spec below.
> >>
> >> The purpose of the NPWG spec is to standardize the multi-network effort
> >> around K8S by defining a CRD object 'network' which can be consumed by
> >> various CNI plugins. I know there is already a BP, VIF-Handler And Vif
> >> Drivers Design, which designed a set of mechanisms to implement the
> >> multi-network functionality. However, I think it is still worthwhile to
> >> support this widely accepted NPWG spec.
> >>
> >> My proposal is to implement a new vif_driver, which can interpret the Pod
> >> annotation and CRD defined by the NPWG spec, and attach pods to additional
> >> Neutron subnets and ports accordingly. This new driver should be mutually
> >> exclusive with the sriov and additional_subnets drivers. So end users can
> >> choose either way of using multi-network with kuryr-kubernetes.
> >>
> >> Please let me know your thought, any comments are welcome.
> >>
> >>
> >>
> >> [1]
> >> https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit#heading=h.hylsbqoj5fxd
> >>
> >>
> >> Regards,
> >>
> >> --
> >> Peng Liu
> >>



-- 
Peng Liu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api][grapql] Proof of Concept

2018-06-06 Thread Gilles Dubreuil
The branch is now available under feature/graphql on the neutron core 
repository [1].


Just to summarize our initial requirements:

- GraphQL endpoint to be added through a new WebOb/WSGI stack
- Add graphene library [2]
- Unit tests and implementation of the GraphQL schema for networks, subnets
and ports types (see the sketch below).
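
As a rough illustration of what the graphene-based schema could look like
(field names, resolvers and stub data are placeholders, not the proposed
implementation):

    import graphene

    class Subnet(graphene.ObjectType):
        id = graphene.ID()
        cidr = graphene.String()

    class Network(graphene.ObjectType):
        id = graphene.ID()
        name = graphene.String()
        subnets = graphene.List(Subnet)

    class Query(graphene.ObjectType):
        networks = graphene.List(Network)

        def resolve_networks(self, info):
            # stub data; the real resolver would call into the Neutron plugin
            return [Network(id='net-1', name='private', subnets=[])]

    schema = graphene.Schema(query=Query)
    print(schema.execute('{ networks { id name } }').data)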


I think we should support Relay by making the schema Relay-compliant and
supporting Node IDs and cursor connections.
This will offer re-fetching, automated pagination and caching out of the
box; it will not only show the power of GraphQL, but in the long run it is
also likely what complex API structures like the ones we have across the
board will need.
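
A minimal sketch of the Relay-compliant variant, assuming graphene's relay
helpers (again illustrative only, not working Neutron code):

    import graphene
    from graphene import relay

    class Network(graphene.ObjectType):
        class Meta:
            interfaces = (relay.Node,)

        name = graphene.String()

        @classmethod
        def get_node(cls, info, id):
            # re-fetch a single object by its global Node ID (stubbed here)
            return Network(id=id, name='private')

    class NetworkConnection(relay.Connection):
        class Meta:
            node = Network

    class Query(graphene.ObjectType):
        node = relay.Node.Field()
        networks = relay.ConnectionField(NetworkConnection)

        def resolve_networks(self, info, **kwargs):
            # graphene turns the returned iterable into a cursor-based connection
            return [Network(id='net-1', name='private')]

    schema = graphene.Schema(query=Query)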


Any thoughts?

[1] https://git.openstack.org/cgit/openstack/neutron/log/?h=feature/graphql
[2] http://graphene-python.org/

On 31/05/18 17:27, Flint WALRUS wrote:

Hi Gilles, Ed,

I’m really glad and thrilled to read such good news!

At this point it’s cool to see that many initiatives have the same
convergent needs regarding GraphQL, as it will give us good traction
from the beginning if our PoC manages to sufficiently convince our peers.


Let me know as soon as the branch has been created; I’ll work on it.

Regards,
Fl1nt.
On Thu, 31 May 2018 at 09:17, Gilles Dubreuil wrote:


Hi Flint,

I wish it was "my" summit ;)
If it were, I'd make the sessions an hour instead of 20 or 40
minutes, at least for the Forum part. And I would also hold
only one summit a year instead of two (which is also feedback I
got from the Marketplace). I passed that along during the user
feedback session.

Sorry for not responding earlier, @elmiko is going to send the
minutes of the API SIG forum session we had.

We confirmed Neutron to be the PoC.
We are going to use a feature branch, waiting for Miguel Lavalle
to confirm the request has been acknowledged by the Infra group.
The PoC goal is to show GraphQL's efficiency.
So we're going to make something straightforward: use Neutron's
existing server, add the GraphQL endpoint, and cover a few core
items such as networks, subnets and ports (for example).

Also, the idea of having a central point of access for OpenStack
APIs using GraphQL stitching and delegation is exciting for
everyone (and I obviously got the same feedback outside the session), and
that's something that could happen once the PoC has proven convincing.

During the meeting, Jiri Tomasek explained how GraphQL could help
the TripleO UI. Effectively, they struggle with API requests and had
to create a middleware module in JS to do API work and
reconstruction before the JavaScript client can use it. GraphQL
would simplify the process and allow them to get rid of that module. He
also explained, after the meeting, how Horizon could benefit as
well, allowing it to use only JS and avoid Django altogether!

I've also been told that Zuul needs GraphQL.

Well basically the question is who doesn't need it?

Cheers,
Gilles



On 31/05/18 03:34, Flint WALRUS wrote:

Hi Gilles, I hope you enjoyed your Summit!?

Did you have any interesting talks to report about our little
initiative?
On Sun, 6 May 2018 at 15:01, Gilles Dubreuil <gdubr...@redhat.com> wrote:


Akihiro, thank you for your precious help!

Regarding the choice of Neutron as the PoC, I'm sorry for not
providing much detail when I said "because of its specific
data model";
effectively the original mention was "its API exposes things
at an individual table level, requiring the client to join
that information to get the answers they need".
I realize now that such a description probably applies to many
OpenStack APIs.
So I'm not sure what the reason was for choosing Neutron.
I suppose Nova is also a good candidate because its API is quite
complex too, in a different way, and needs to expose both the data
API and the control plane API, as we discussed.

After all Neutron is maybe not the best candidate but it
seems good enough.

And as Flint says, the extension mechanism shouldn't be an issue.

So if someone believes there is a better candidate for the
PoC, please speak now.

Thanks,
Gilles

PS: Flint, thank you for offering to be the advocate for
Berlin. That's great!


On 06/05/18 02:23, Flint WALRUS wrote:

Hi Akihiro,

Thanks a lot for this insight on how Neutron behaves.

We would love to get support and backing from the neutron
team in order to be able to get the best PoC possible.

Someone suggested Neutron as a good choice because of its
simple database model. As GraphQL can accommodate the behavior
of an extension declaring its own schemas, I don't think it
would take that much time to implement.

@Gilles, 

[openstack-dev] [First Contact] [SIG] [PTL] Project Liaisons

2018-06-06 Thread Kendall Nelson
Hello!

As you hopefully are aware, the First Contact SIG strives to provide a place
for new contributors to come for information and advice. Part of this is
helping new contributors find more established contributors in the
community whom they can ask for help. While the group of people involved in
the FC SIG is diverse in project knowledge, we don't have every project
covered.

Over the last year we have built a list of Project Liaisons to refer new
contributors to when the project they are interested in isn't one we know
well. Unfortunately, this list[1] isn't as filled out as we would like it
to be.

So! In keeping with the conventions of other liaison roles, if there isn't
already a project liaison named, this role will default to the PTL unless
you respond to this thread with the individual you are delegating to :) or
add them to the list in the wiki[1].

Essentially the duties of the liaison are just to be willing to help out
newcomers when an FC SIG member introduces you to them and to keep an eye
out for patches that come into your project with the 'Welcome, new
contributor' bot message. It's likely you are doing this already, but
having a defined list of people to refer to would be a huge help.

Thank you!

-Kendall Nelson (diablo_rojo)

[1]https://wiki.openstack.org/wiki/First_Contact_SIG#Project_Liaisons
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Rocky-2 milestone release

2018-06-06 Thread Brian Rosmaita
Status:

glanceclient - released 2.11.1 today

glance_store - one outstanding patch that would be worth including in
the release:
- https://review.openstack.org/#/c/534745/  (use only exceptions for
uri validations)

glance - two patches we should get in:
- https://review.openstack.org/#/c/514114/ (refactor exception
handling in cmd.api) (has one +2)
- https://review.openstack.org/#/c/572534/ (remove deprecated
'enable_image_import' option)
- note: will need to regenerate the config files before proposing a release


cheers,
brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] [Infra] Terms of service for hosted projects

2018-06-06 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2018-06-06 20:09:36 +:
> On 2018-06-06 14:52:04 -0400 (-0400), Zane Bitter wrote:
> > On 29/05/18 13:37, Jeremy Stanley wrote:
> > > On 2018-05-29 10:53:03 -0400 (-0400), Zane Bitter wrote:
> [...]
> > > > * If the repo is a fork of another project, there must be (public)
> > > > evidence of an attempt to co-ordinate with the upstream first.
> > > 
> > > I don't recall this ever being mandated, though the project-config
> > > reviewers do often provide suggestions to project creators such as
> > > places in the existing community with which they might consider
> > > cooperating/collaborating.
> > 
> > We're mandating it for StarlingX, aren't we?
> 
> This goes back to depending on what you mean by "we" but assuming
> you mean those of us who were in the community track Forum room at
> the end of the day on Thursday, a number of us seemed to be in
> support of that idea including Dean (who was going to do the work to
> make it happen) and Jonathan (as OSF executive director). Far from a
> mandate, and definitely a rare enough situation that recording a
> hard and fast rule is not a useful way to spend our valuable time.
> 
> > AIUI we haven't otherwise forked anything that was still maintained
> > (although we've forked plenty of libraries after establishing that the
> > upstream was moribund).
> 
> All the Debian packaging, when we were hosting it (before it got
> retired and moved back to Debian's repository hosting) was
> implemented as forks of our Git repositories. The Infra team also
> maintains a fork of Gerrit (for the purposes of backporting bug
> fixes from later versions until we're ready to upgrade what we're
> running), and has some forks of other things which are basically
> dead upstream (lodgeit) or where we're stuck carrying support for
> very old versions of stuff that upstream has since moved on from
> (puppet-apache). Forks are not necessarily inherently bad, and
> usually the story around each one is somewhat unique.

Yeah, if I had realized the Debian packaging repos had changes beyond
packaging I wouldn't have supported hosting them at the time.

Because the gerrit fork is for the use of this community with our
deployment, we do try to upstream fixes, and we don't intend to
release it separately under our own distribution, I see that as
reasonable.

I'm trying to look at this from the perspective of the Golden Rule
[1]. We should not treat other projects in ways we don't want to be treated
ourselves, regardless of whether we're doing it out in the open.
I don't want the OpenStack community to have the reputation of
forking instead of collaborating.

[1] https://en.wikipedia.org/wiki/Golden_Rule

> > > > Neither of those appears to be documented (specifically,
> > > > https://governance.openstack.org/tc/reference/licensing.html only
> > > > specifies licensing requirements for official projects, libraries
> > > > imported by official projects, and software used by the Infra
> > > > team).
> > > 
> > > The Infrastructure team has been granted a fair amount of autonomy
> > > to determine its operating guidelines, and future plans to separate
> > > project hosting further from the OpenStack name (in an attempt to
> > > make it more clear that hosting your project in the infrastructure
> > > is not an endorsement by OpenStack and doesn't make it "part of
> > > OpenStack") make the OpenStack TC governance site a particularly
> > > poor choice of venue to document such things.
> > 
> > So clearly in the future this will be the responsibility of the
> > Winterscale Infrastructure Council assuming that proposal goes
> > ahead.
> > 
> > For now, would it be valuable for the TC to develop some
> > guidelines that will provide the WIC with a solid base it can
> > evolve from once it takes them over, or should we just leave it up
> > to infra's discretion?
> [...]
> 
> My opinion is that helping clarify the terms of service
> documentation the Infra team is already maintaining is great, but
> putting hosting terms of service in the TC governance repo is likely
> a poor choice of venue. In the past it has fallen to the Infra team
> to help people come to the right conclusions as to what sorts of
> behaviors are acceptable, but we've preferred to avoid having lots
> of proscriptive rules and beating people into submission with them.
> I think we'd all like this to remain a fun and friendly place to get
> things done.

I want it to be fun, too. One way to ensure that is to write those
policies down, so that we avoid situations where one group angers
another through some action that the broader community can generally
agree is not acceptable to us.

I agree this is ultimately going to be something we rely on the
infra team to deal with. I think it's reasonable for the rest of
the community to try to help establish the preferences about what
policies should be in place.

Doug

__

Re: [openstack-dev] [tc][ptl][python3][help-wanted] starting work on "python 3 first" transition

2018-06-06 Thread Doug Hellmann
Excerpts from Sean McGinnis's message of 2018-06-06 15:14:48 -0500:
> On 06/06/2018 03:04 PM, Doug Hellmann wrote:
> > I have started submitting a series of patches to fix up the tox.ini
> > settings for projects as a step towards running "python3 first"
> > [1]. The point of doing this now is to give teams a head start on
> > understanding the work involved as we consider whether to make this
> > a community goal.
> 
> I would ask that you stop.
> 
> While I think this is useful as a quick way of finding out which projects
> will require additional work here and which don't, this is just creating
> a lot of work and overlap.
> 
> Some teams are not ready to take this on right now. So unless you are
> planning on actually following through with making the failing ones work,
> it is just adding to the set of failing patches in their review queue.
> 
> Other teams are already working on this and working through the failures
> due to the differences between python 2 and 3. So these just end up being
> duplication and a distraction for limited review capacity.

I've already proposed all of the ones I intended to, so if folks
don't want them either abandon them or let me know and I will.

Otherwise, I will work with anyone who wants to use these as the
first step to converting their doc and release notes jobs, or to
explore what else would need to be done as part of the shift.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Zuul updating Ansible in use to Ansible 2.5

2018-06-06 Thread Clark Boylan
Zuul will be updating the version of Ansible it uses to run jobs from version 
2.3 to 2.5 tomorrow, June 7, 2018. The Infra team will follow up shortly after
and get that update deployed.

Other users have apparently checked that this works in general and we have 
tests that exercise some basic integration with Ansible so we don't expect 
major breakages. However, should you notice anything new/different/broken feel 
free to reach out to the Infra team.

You may notice there will be new deprecation warnings from Ansible, particularly
around our use of the include directive. Version 2.3 doesn't have the
non-deprecated directives available to it, so we will have to transition after
the upgrade.
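
For those wondering what that transition looks like, it is roughly a move
from the bare include to the task-level directives added in later Ansible
releases; a hand-written illustration (the file name is made up, not taken
from our job definitions):

    # Deprecated form (the only one available on Ansible 2.3):
    - include: tasks/common.yaml

    # Replacements once the executor is on Ansible >= 2.4:
    - import_tasks: tasks/common.yaml     # static include
    - include_tasks: tasks/common.yaml    # dynamic include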

Thank you for your patience,
Clark (and the Infra team)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][heat] where does ip_netmask in network_config come from?

2018-06-06 Thread Mark Hamzy
When the system boots up, the IP addresses seem correct:

Jun  6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | eno5:  | True |   .   |   .   |   .   | 6c:ae:8b:25:34:ed |
Jun  6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | eno4:  | True | 9.114.118.241 | 255.255.255.0 |   .   | 6c:ae:8b:25:34:ec |
Jun  6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | eno4:  | True |   .   |   .   |   d   | 6c:ae:8b:25:34:ec |
Jun  6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | enp0s29u1u1u5: | True |   .   |   .   |   .   | 6e:ae:8b:25:34:e9 |
Jun  6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | enp0s29u1u1u5: | True |   .   |   .   |   d   | 6e:ae:8b:25:34:e9 |
Jun  6 12:43:07 overcloud-controller-0 cloud-init: ci-info: |  lo:   | True |   127.0.0.1   |   255.0.0.0   |   .   | . |
Jun  6 12:43:07 overcloud-controller-0 cloud-init: ci-info: |  lo:   | True |   .   |   .   |   d   | . |
Jun  6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | eno3:  | True | 9.114.219.197 | 255.255.255.0 |   .   | 6c:ae:8b:25:34:eb |
Jun  6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | eno3:  | True |   .   |   .   |   d   | 6c:ae:8b:25:34:eb |
Jun  6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | eno2:  | True |  9.114.219.44 | 255.255.255.0 |   .   | 6c:ae:8b:25:34:ea |
Jun  6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | eno2:  | True |   .   |   .   |   d   | 6c:ae:8b:25:34:ea |

However, I am seeing the following when run-os-net-config.sh is run.  I 
put in a (sudo ip route; sudo ip -o address; sudo ip route get to 
${METADATA_IP}) before the ping check:

default via 9.114.219.254 dev eno3 proto dhcp metric 101
9.114.219.0/24 dev br-ex proto kernel scope link src 9.114.219.193
9.114.219.0/24 dev eno2 proto kernel scope link src 9.114.219.193
9.114.219.0/24 dev eno3 proto kernel scope link src 9.114.219.197 metric 101
169.254.95.0/24 dev enp0s29u1u1u5 proto kernel scope link src 169.254.95.120 metric 103
169.254.169.254 via 9.114.219.30 dev eno2

1: lo    inet 127.0.0.1/8 scope host lo\   valid_lft forever preferred_lft forever
1: lo    inet6 ::1/128 scope host \   valid_lft forever preferred_lft forever
2: eno2    inet 9.114.219.193/24 brd 9.114.219.255 scope global eno2\   valid_lft forever preferred_lft forever
2: eno2    inet6 fe80::6eae:8bff:fe25:34ea/64 scope link tentative \   valid_lft forever preferred_lft forever
3: eno3    inet 9.114.219.197/24 brd 9.114.219.255 scope global noprefixroute dynamic eno3\   valid_lft 538sec preferred_lft 538sec
3: eno3    inet6 fd55:faaf:e1ab:3d9:6eae:8bff:fe25:34eb/64 scope global mngtmpaddr dynamic \   valid_lft 2591961sec preferred_lft 604761sec
3: eno3    inet6 fe80::6eae:8bff:fe25:34eb/64 scope link \   valid_lft forever preferred_lft forever
6: enp0s29u1u1u5    inet 169.254.95.120/24 brd 169.254.95.255 scope link noprefixroute dynamic enp0s29u1u1u5\   valid_lft 539sec preferred_lft 539sec
6: enp0s29u1u1u5    inet6 fe80::6cae:8bff:fe25:34e9/64 scope link \   valid_lft forever preferred_lft forever
8: br-ex    inet 9.114.219.193/24 brd 9.114.219.255 scope global br-ex\   valid_lft forever preferred_lft forever
8: br-ex    inet6 fe80::6eae:8bff:fe25:34ec/64 scope link \   valid_lft forever preferred_lft forever

9.114.219.30 dev br-ex src 9.114.219.193
cache

Trying to ping metadata IP 9.114.219.30...FAILURE

It seems like the data is coming from:

[root@overcloud-controller-0 ~]# cat /etc/os-net-config/config.json
{"network_config": [
    {"addresses": [{"ip_netmask": "9.114.219.196/24"}],
     "dns_servers": ["8.8.8.8", "8.8.4.4"],
     "name": "nic1",
     "routes": [{"ip_netmask": "169.254.169.254/32", "next_hop": "9.114.219.30"}],
     "type": "interface",
     "use_dhcp": false},
    {"addresses": [{"ip_netmask": "9.114.219.196/24"}],
     "dns_servers": ["8.8.8.8", "8.8.4.4"],
     "members": [{"name": "nic3", "primary": true, "type": "interface"}],
     "name": "br-ex",
     "routes": [{"default": true, "next_hop": "9.114.118.254"}],
     "type": "ovs_bridge",
     "use_dhcp": false}]}

Also in the log I see:

...
Jun  6 12:45:15 overcloud-controller-0 os-collect-config: [2018/06/06 
12:43:53 PM] [INFO] Active nics are ['eno2', 'eno3', 'eno4']
Jun  6 12:45:15 overcloud-controller-0 os-collect-config: [2018/06/06 
12:43:53 PM] [INFO] nic1 mapped to: eno2
Jun  6 12:45:15 overcloud-controller-0 os-collect-config: [2018/06/06 
12:43:53 PM] [INFO] nic2 mapped to: eno3
Jun  6 12:45:15 overcloud-controller-0 os-collect-config: [2018/06/06 
12:43:53 PM] [INFO] nic3 mapped to: eno4
...

templates/nic-configs/controller.yaml has the following section:
...
$network_config:
  network_config:
  - type: interface
name: nic1
use_dhcp: false
dns_servers:
  

Re: [openstack-dev] [tc][ptl][python3][help-wanted] starting work on "python 3 first" transition

2018-06-06 Thread Sean McGinnis

On 06/06/2018 03:04 PM, Doug Hellmann wrote:

I have started submitting a series of patches to fix up the tox.ini
settings for projects as a step towards running "python3 first"
[1]. The point of doing this now is to give teams a head start on
understanding the work involved as we consider whether to make this
a community goal.


I would ask that you stop.

While I think this is useful as a quick way of finding out which projects
will require additional work here and which don't, this is just creating
a lot of work and overlap.

Some teams are not ready to take this on right now. So unless you are
planning on actually following through with making the failing ones work,
it is just adding to the set of failing patches in their review queue.

Other teams are already working on this and working through the failures
due to the differences between python 2 and 3. So these just end up being
duplication and a distraction for limited review capacity.



The current patches are all mechanically generated changes to the
basepython value for environments that seem to be likely candidates.
They're basically the "easy" part of the transition. I've left any
changes that will need more discussion alone for now.

In particular, I've skipped over any tox environments with "functional"
in the name, since I thought those ran functional tests. Teams will
need to decide whether to change those job definitions, or duplicate
them and run them under python 2 and 3. Since we are not dropping
python 2 support until the U cycle, I suggest going ahead and running
the jobs twice.

Note that changing the tox settings won't actually change some of the
jobs. For example, with our current PTI definition, the documentation
and releasenotes jobs do not run under tox. That means those will need
to be changed by editing the zuul configuration for the repository.

I have started to make notes for tracking the work in
https://etherpad.openstack.org/p/python3-first -- including some notes
about taking the next step to update the zuul job definitions and common
issues we've already encountered to help folks debug job failures.

I could use some help keeping an eye on these changes and getting
them through the gate. If you are interested in helping, please
leave a comment on the review you are willing to shepherd.

Doug

[1] 
https://review.openstack.org/#/q/topic:python3-first+(status:open+OR+status:merged)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] [Infra] Terms of service for hosted projects

2018-06-06 Thread Doug Hellmann
Excerpts from Anne Bertucio's message of 2018-06-06 12:28:25 -0700:
> > Either way, I would like to ensure that someone from
> > Kata is communicating with qemu upstream.
> 
> Since probably not too many Kata folks are on the OpenStack dev list 
> (something to tackle in another thread or OSF all-project meeting), chiming 
> in to say yup!, we’ve got QEMU upstream folks in the Kata community, and 
> we’re definitely committed to making sure we communicate with other 
> communities about these things (be it QEMU or another group in the future). 
> 
>  
> Anne Bertucio
> OpenStack Foundation
> a...@openstack.org | irc: annabelleB

Thanks for confirming that, Anne!

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] [Infra] Terms of service for hosted projects

2018-06-06 Thread Jeremy Stanley
On 2018-06-06 15:16:59 -0400 (-0400), Doug Hellmann wrote:
[...]
> Kata also has a qemu fork, but that is under the kata-containers
> github org and not our infrastructure. I'm not sure someone outside
> of our community would differentiate between the two, but maybe
> they would.
[...]

The Kata community (currently) hosts all their work in GitHub rather
than our infrastructure, so I'm not sure that's an altogether useful
distinction.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] [Infra] Terms of service for hosted projects

2018-06-06 Thread Jeremy Stanley
On 2018-06-06 14:52:04 -0400 (-0400), Zane Bitter wrote:
> On 29/05/18 13:37, Jeremy Stanley wrote:
> > On 2018-05-29 10:53:03 -0400 (-0400), Zane Bitter wrote:
[...]
> > > * If the repo is a fork of another project, there must be (public)
> > > evidence of an attempt to co-ordinate with the upstream first.
> > 
> > I don't recall this ever being mandated, though the project-config
> > reviewers do often provide suggestions to project creators such as
> > places in the existing community with which they might consider
> > cooperating/collaborating.
> 
> We're mandating it for StarlingX, aren't we?

This goes back to depending on what you mean by "we" but assuming
you mean those of us who were in the community track Forum room at
the end of the day on Thursday, a number of us seemed to be in
support of that idea including Dean (who was going to do the work to
make it happen) and Jonathan (as OSF executive director). Far from a
mandate, and definitely a rare enough situation that recording a
hard and fast rule is not a useful way to spend our valuable time.

> AIUI we haven't otherwise forked anything that was still maintained
> (although we've forked plenty of libraries after establishing that the
> upstream was moribund).

All the Debian packaging, when we were hosting it (before it got
retired and moved back to Debian's repository hosting) was
implemented as forks of our Git repositories. The Infra team also
maintains a fork of Gerrit (for the purposes of backporting bug
fixes from later versions until we're ready to upgrade what we're
running), and has some forks of other things which are basically
dead upstream (lodgeit) or where we're stuck carrying support for
very old versions of stuff that upstream has since moved on from
(puppet-apache). Forks are not necessarily inherently bad, and
usually the story around each one is somewhat unique.

> > > Neither of those appears to be documented (specifically,
> > > https://governance.openstack.org/tc/reference/licensing.html only
> > > specifies licensing requirements for official projects, libraries
> > > imported by official projects, and software used by the Infra
> > > team).
> > 
> > The Infrastructure team has been granted a fair amount of autonomy
> > to determine its operating guidelines, and future plans to separate
> > project hosting further from the OpenStack name (in an attempt to
> > make it more clear that hosting your project in the infrastructure
> > is not an endorsement by OpenStack and doesn't make it "part of
> > OpenStack") make the OpenStack TC governance site a particularly
> > poor choice of venue to document such things.
> 
> So clearly in the future this will be the responsibility of the
> Winterscale Infrastructure Council assuming that proposal goes
> ahead.
> 
> For now, would it be valuable for the TC to develop some
> guidelines that will provide the WIC with a solid base it can
> evolve from once it takes them over, or should we just leave it up
> to infra's discretion?
[...]

My opinion is that helping clarify the terms of service
documentation the Infra team is already maintaining is great, but
putting hosting terms of service in the TC governance repo is likely
a poor choice of venue. In the past it has fallen to the Infra team
to help people come to the right conclusions as to what sorts of
behaviors are acceptable, but we've preferred to avoid having lots
of proscriptive rules and beating people into submission with them.
I think we'd all like this to remain a fun and friendly place to get
things done.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][ptl][python3][help-wanted] starting work on "python 3 first" transition

2018-06-06 Thread Doug Hellmann
I have started submitting a series of patches to fix up the tox.ini
settings for projects as a step towards running "python3 first"
[1]. The point of doing this now is to give teams a head start on
understanding the work involved as we consider whether to make this
a community goal.

The current patches are all mechanically generated changes to the
basepython value for environments that seem to be likely candidates.
They're basically the "easy" part of the transition. I've left any
changes that will need more discussion alone for now.
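
To make the scope concrete, the mechanically generated change is essentially
adding a basepython line to environments like the following (an illustrative
tox.ini fragment, not taken from any particular project):

    [testenv:pep8]
    basepython = python3
    commands = flake8 {posargs}

    [testenv:docs]
    basepython = python3
    commands = sphinx-build -W -b html doc/source doc/build/html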

In particular, I've skipped over any tox environments with "functional"
in the name, since I thought those ran functional tests. Teams will
need to decide whether to change those job definitions, or duplicate
them and run them under python 2 and 3. Since we are not dropping
python 2 support until the U cycle, I suggest going ahead and running
the jobs twice.

Note that changing the tox settings won't actually change some of the
jobs. For example, with our current PTI definition, the documentation
and releasenotes jobs do not run under tox. That means those will need
to be changed by editing the zuul configuration for the repository.

I have started to make notes for tracking the work in
https://etherpad.openstack.org/p/python3-first -- including some notes
about taking the next step to update the zuul job definitions and common
issues we've already encountered to help folks debug job failures.

I could use some help keeping an eye on these changes and getting
them through the gate. If you are interested in helping, please
leave a comment on the review you are willing to shepherd.

Doug

[1] 
https://review.openstack.org/#/q/topic:python3-first+(status:open+OR+status:merged)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Update (swap) of multiattach volume should not be allowed

2018-06-06 Thread Matt Riedemann

Here is the nova patch for those following along:

https://review.openstack.org/#/c/572790/
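
For context, the kind of cinder-side fast-fail check being discussed below
might look roughly like this (a sketch only; the attribute and exception
names are illustrative, not cinder's actual code paths):

    class VolumeIsBusy(Exception):
        """Stand-in for the 400-level error cinder would return."""

    def fail_fast_on_busy_multiattach(volume, new_volume):
        # volume objects are assumed to expose .multiattach and an iterable of
        # attachments with .attach_mode -- illustrative names, not cinder's API
        for vol in (volume, new_volume):
            if not getattr(vol, 'multiattach', False):
                continue
            rw = [a for a in vol.volume_attachment if a.attach_mode == 'rw']
            if len(rw) > 1:
                raise VolumeIsBusy('Cannot retype/migrate a multiattach volume '
                                   'with more than one read/write attachment.')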

On 6/6/2018 9:07 AM, Jay Pipes wrote:

On 06/06/2018 10:02 AM, Matt Riedemann wrote:

On 6/6/2018 8:24 AM, Jay Pipes wrote:

On 06/06/2018 09:10 AM, Artom Lifshitz wrote:

I think regardless of how we ended up with this situation, we're still
in a position where we have a public-facing API that could lead to
data-corruption when used in a specific way. That should never be the
case. I would think re-using the already possible 400 response code to
update-volume when used with a multi-attach volume to indicate that it
can't be done, without a new microversion, would be the cleanest way of
getting out of this pickle.


That's fine, yes.

I just think it's worth noting that it's a pickle that we put 
ourselves in due to an ill-conceived feature and Compute API call. 
And that we should, you know, try to stop doing that. :)


-jay


If we're going to change something, I think it should probably happen
on the cinder side when the retype or live migration of the volume is
initiated, and do the attachment counting there.


So if you're swapping from multiattach volume A to multiattach volume 
B and either has >1 read/write attachment, then fail with a 400 in the 
cinder API.


We can check those things in the compute API when cinder calls the 
swap volume API in nova, but:


1. It's racy - cinder is the source of truth on the current state of 
the attachments.


2. The failure mode is going to be questionable - by the time cinder 
calls nova to swap the volumes on the compute host, the cinder REST 
API has long since 202'ed the response to the user and the best nova 
can do is return a 400 and then cinder has to handle that gracefully 
and rollback. It would be much cleaner if the volume API just fails fast.


+10

-jay




--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Ports port_binding attribute is changing to an iterable

2018-06-06 Thread Miguel Lavalle
Dear OpenStack Networking community of projects,

As part of the implementation of multiple port bindings in the Neutron
reference implementation (
https://specs.openstack.org/openstack/neutron-specs/specs/backlog/pike/portbinding_information_for_nova.html),
the port_binding relationship in the Port DB model is changing to be an
iterable:

https://review.openstack.org/#/c/414251/66/neutron/plugins/ml2/models.py@64

and its name is being changed to port_bindings:

https://review.openstack.org/#/c/571041/4/neutron/plugins/ml2/models.py@61

Corresponding changes are being made to the Port Oslo Versioned Object:

https://review.openstack.org/#/c/414251/66/neutron/objects/ports.py@285
https://review.openstack.org/#/c/571041/4/neutron/objects/ports.py@285
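
Illustratively, consumers that assumed a single binding will need a change
along these lines (a sketch only; how you pick the relevant binding depends
on your driver):

    # before: code written against the old single-object relationship
    binding = port_db.port_binding
    host = binding.host

    # after: port_bindings is an iterable under the new name; select the
    # binding you care about (the first one in this sketch)
    host = None
    for binding in port_db.port_bindings:
        host = binding.host
        break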

I did my best to find usages of these attributes in the Neutron Stadium
projects and only found them in networking-odl:
https://review.openstack.org/#/c/572212/2/networking_odl/ml2/mech_driver.py.
These are the other projects that I checked:

   - networking-midonet
   - networking-ovn
   - networking-bagpipe
   - networking-bgpvpn
   - neutron-dynamic-routing
   - neutron-fwaas
   - neutron-vpnaas
   - networking-sfc

I STRONGLY ENCOURAGE these project teams to double-check and see if you
might be affected. I also encourage projects in the broader OpenStack
Networking community of projects to check for possible impacts. We will be
holding these two patches until June 14th before merging them.

If you need help dealing with the change, please ping me in the Neutron
channel.

Best regards

Miguel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] [Infra] Terms of service for hosted projects

2018-06-06 Thread Anne Bertucio
> Either way, I would like to ensure that someone from
> Kata is communicating with qemu upstream.

Since probably not too many Kata folks are on the OpenStack dev list (something 
to tackle in another thread or OSF all-project meeting), chiming in to say 
yup!, we’ve got QEMU upstream folks in the Kata community, and we’re definitely 
committed to making sure we communicate with other communities about these 
things (be it QEMU or another group in the future). 

 
Anne Bertucio
OpenStack Foundation
a...@openstack.org | irc: annabelleB





> On Jun 6, 2018, at 12:16 PM, Doug Hellmann  wrote:
> 
> Excerpts from Zane Bitter's message of 2018-06-06 14:52:04 -0400:
>> On 29/05/18 13:37, Jeremy Stanley wrote:
>>> On 2018-05-29 10:53:03 -0400 (-0400), Zane Bitter wrote:
 We allow various open source projects that are not an official
 part of OpenStack or necessarily used by OpenStack to be hosted on
 OpenStack infrastructure - previously under the 'StackForge'
 branding, but now without separate branding. Do we document
 anywhere the terms of service under which we offer such hosting?
>>> 
>>> We do so minimally here:
>>> 
>>> https://docs.openstack.org/infra/system-config/unofficial_project_hosting.html
>>> 
>>> It's linked from this section of the Project Creator’s Guide in the
>>> Infra Manual:
>>> 
>>> https://docs.openstack.org/infra/manual/creators.html#decide-status-of-your-project
>>> 
>>> But yes, we should probably add some clarity to that document and
>>> see about making sure it's linked more prominently. We also maintain
>>> some guidelines for reviewers of changes to the
>>> openstack-infra/project-config repository, which has a bit to say
>>> about new repository creation changes:
>>> 
>>> https://git.openstack.org/cgit/openstack-infra/project-config/tree/REVIEWING.rst
>>> 
 It is my understanding that the infra team will enforce the
 following conditions when a repo import request is received:
 
 * The repo must be licensed under an OSI-approved open source
 license.
>>> 
>>> That has been our custom, but we should add a statement to this
>>> effect in the aforementioned document.
>>> 
 * If the repo is a fork of another project, there must be (public)
 evidence of an attempt to co-ordinate with the upstream first.
>>> 
>>> I don't recall this ever being mandated, though the project-config
>>> reviewers do often provide suggestions to project creators such as
>>> places in the existing community with which they might consider
>>> cooperating/collaborating.
>> 
>> We're mandating it for StarlingX, aren't we?
> 
> We suggested that it would make importing the repositories more
> palatable, and Dean said he would do it. Which isn't quite the same
> as making it a requirement.
> 
>> 
>> AIUI we haven't otherwise forked anything that was still maintained 
>> (although we've forked plenty of libraries after establishing that the 
>> upstream was moribund).
> 
> Kata has a fork of the kernel, but that feels less controversial
> because the kernel community expects forks as part of their contribution
> process.
> 
> Kata also has a qemu fork, but that is under the kata-containers
> github org and not our infrastructure. I'm not sure someone outside
> of our community would differentiate between the two, but maybe
> they would. Either way, I would like to ensure that someone from
> Kata is communicating with qemu upstream.
> 
>> 
 Neither of those appears to be documented (specifically,
 https://governance.openstack.org/tc/reference/licensing.html only
 specifies licensing requirements for official projects, libraries
 imported by official projects, and software used by the Infra
 team).
>>> 
>>> The Infrastructure team has been granted a fair amount of autonomy
>>> to determine its operating guidelines, and future plans to separate
>>> project hosting further from the OpenStack name (in an attempt to
>>> make it more clear that hosting your project in the infrastructure
>>> is not an endorsement by OpenStack and doesn't make it "part of
>>> OpenStack") make the OpenStack TC governance site a particularly
>>> poor choice of venue to document such things.
>> 
>> So clearly in the future this will be the responsibility of the 
>> Winterscale Infrastructure Council assuming that proposal goes ahead.
>> 
>> For now, would it be valuable for the TC to develop some guidelines that 
>> will provide the WIC with a solid base it can evolve from once it takes 
>> them over, or should we just leave it up to infra's discretion?
>> 
 In addition, I think we should require projects hosted on our
 infrastructure to agree to other policies:
 
 * Adhere to the OpenStack Foundation Code of Conduct.
>>> 
>>> This seems like a reasonable addition to our hosting requirements.
>>> 
 * Not misrepresent their relationship to the official OpenStack
 project or the Foundation. Ideally we'd come up with language that
 they *can* use to 

Re: [openstack-dev] [TC] [Infra] Terms of service for hosted projects

2018-06-06 Thread Doug Hellmann
Excerpts from Zane Bitter's message of 2018-06-06 14:52:04 -0400:
> On 29/05/18 13:37, Jeremy Stanley wrote:
> > On 2018-05-29 10:53:03 -0400 (-0400), Zane Bitter wrote:
> >> We allow various open source projects that are not an official
> >> part of OpenStack or necessarily used by OpenStack to be hosted on
> >> OpenStack infrastructure - previously under the 'StackForge'
> >> branding, but now without separate branding. Do we document
> >> anywhere the terms of service under which we offer such hosting?
> > 
> > We do so minimally here:
> > 
> > https://docs.openstack.org/infra/system-config/unofficial_project_hosting.html
> > 
> > It's linked from this section of the Project Creator’s Guide in the
> > Infra Manual:
> > 
> > https://docs.openstack.org/infra/manual/creators.html#decide-status-of-your-project
> > 
> > But yes, we should probably add some clarity to that document and
> > see about making sure it's linked more prominently. We also maintain
> > some guidelines for reviewers of changes to the
> > openstack-infra/project-config repository, which has a bit to say
> > about new repository creation changes:
> > 
> > https://git.openstack.org/cgit/openstack-infra/project-config/tree/REVIEWING.rst
> > 
> >> It is my understanding that the infra team will enforce the
> >> following conditions when a repo import request is received:
> >>
> >> * The repo must be licensed under an OSI-approved open source
> >> license.
> > 
> > That has been our custom, but we should add a statement to this
> > effect in the aforementioned document.
> > 
> >> * If the repo is a fork of another project, there must be (public)
> >> evidence of an attempt to co-ordinate with the upstream first.
> > 
> > I don't recall this ever being mandated, though the project-config
> > reviewers do often provide suggestions to project creators such as
> > places in the existing community with which they might consider
> > cooperating/collaborating.
> 
> We're mandating it for StarlingX, aren't we?

We suggested that it would make importing the repositories more
palatable, and Dean said he would do it. Which isn't quite the same
as making it a requirement.

> 
> AIUI we haven't otherwise forked anything that was still maintained 
> (although we've forked plenty of libraries after establishing that the 
> upstream was moribund).

Kata has a fork of the kernel, but that feels less controversial
because the kernel community expects forks as part of their contribution
process.

Kata also has a qemu fork, but that is under the kata-containers
github org and not our infrastructure. I'm not sure someone outside
of our community would differentiate between the two, but maybe
they would. Either way, I would like to ensure that someone from
Kata is communicating with qemu upstream.

> 
> >> Neither of those appears to be documented (specifically,
> >> https://governance.openstack.org/tc/reference/licensing.html only
> >> specifies licensing requirements for official projects, libraries
> >> imported by official projects, and software used by the Infra
> >> team).
> > 
> > The Infrastructure team has been granted a fair amount of autonomy
> > to determine its operating guidelines, and future plans to separate
> > project hosting further from the OpenStack name (in an attempt to
> > make it more clear that hosting your project in the infrastructure
> > is not an endorsement by OpenStack and doesn't make it "part of
> > OpenStack") make the OpenStack TC governance site a particularly
> > poor choice of venue to document such things.
> 
> So clearly in the future this will be the responsibility of the 
> Winterscale Infrastructure Council assuming that proposal goes ahead.
> 
> For now, would it be valuable for the TC to develop some guidelines that 
> will provide the WIC with a solid base it can evolve from once it takes 
> them over, or should we just leave it up to infra's discretion?
> 
> >> In addition, I think we should require projects hosted on our
> >> infrastructure to agree to other policies:
> >>
> >> * Adhere to the OpenStack Foundation Code of Conduct.
> > 
> > This seems like a reasonable addition to our hosting requirements.
> > 
> >> * Not misrepresent their relationship to the official OpenStack
> >> project or the Foundation. Ideally we'd come up with language that
> >> they *can* use to describe their status, such as "hosted on the
> >> OpenStack infrastructure".
> > 
> > Also a great suggestion. We sort of say that in the "what being an
> > unofficial project is not" bullet list, but it could use some
> > fleshing out.
> > 
> >> If we don't have place where this kind of thing is documented
> >> already, I'll submit a review adding one. Does anybody have any
> >> ideas about a process for ensuring that projects have read and
> >> agreed to the terms when we add them?
> > 
> > Adding process forcing active confirmation of such rules seems like
> > a lot of unnecessary overhead/red tape/bureaucracy. As it stands,
> > we're working 

Re: [openstack-dev] [TC] [Infra] Terms of service for hosted projects

2018-06-06 Thread Zane Bitter

On 29/05/18 13:37, Jeremy Stanley wrote:

On 2018-05-29 10:53:03 -0400 (-0400), Zane Bitter wrote:

We allow various open source projects that are not an official
part of OpenStack or necessarily used by OpenStack to be hosted on
OpenStack infrastructure - previously under the 'StackForge'
branding, but now without separate branding. Do we document
anywhere the terms of service under which we offer such hosting?


We do so minimally here:

https://docs.openstack.org/infra/system-config/unofficial_project_hosting.html

It's linked from this section of the Project Creator’s Guide in the
Infra Manual:

https://docs.openstack.org/infra/manual/creators.html#decide-status-of-your-project

But yes, we should probably add some clarity to that document and
see about making sure it's linked more prominently. We also maintain
some guidelines for reviewers of changes to the
openstack-infra/project-config repository, which has a bit to say
about new repository creation changes:

https://git.openstack.org/cgit/openstack-infra/project-config/tree/REVIEWING.rst


It is my understanding that the infra team will enforce the
following conditions when a repo import request is received:

* The repo must be licensed under an OSI-approved open source
license.


That has been our custom, but we should add a statement to this
effect in the aforementioned document.


* If the repo is a fork of another project, there must be (public)
evidence of an attempt to co-ordinate with the upstream first.


I don't recall this ever being mandated, though the project-config
reviewers do often provide suggestions to project creators such as
places in the existing community with which they might consider
cooperating/collaborating.


We're mandating it for StarlingX, aren't we?

AIUI we haven't otherwise forked anything that was still maintained 
(although we've forked plenty of libraries after establishing that the 
upstream was moribund).



Neither of those appears to be documented (specifically,
https://governance.openstack.org/tc/reference/licensing.html only
specifies licensing requirements for official projects, libraries
imported by official projects, and software used by the Infra
team).


The Infrastructure team has been granted a fair amount of autonomy
to determine its operating guidelines, and future plans to separate
project hosting further from the OpenStack name (in an attempt to
make it more clear that hosting your project in the infrastructure
is not an endorsement by OpenStack and doesn't make it "part of
OpenStack") make the OpenStack TC governance site a particularly
poor choice of venue to document such things.


So clearly in the future this will be the responsibility of the 
Winterscale Infrastructure Council assuming that proposal goes ahead.


For now, would it be valuable for the TC to develop some guidelines that 
will provide the WIC with a solid base it can evolve from once it takes 
them over, or should we just leave it up to infra's discretion?



In addition, I think we should require projects hosted on our
infrastructure to agree to other policies:

* Adhere to the OpenStack Foundation Code of Conduct.


This seems like a reasonable addition to our hosting requirements.


* Not misrepresent their relationship to the official OpenStack
project or the Foundation. Ideally we'd come up with language that
they *can* use to describe their status, such as "hosted on the
OpenStack infrastructure".


Also a great suggestion. We sort of say that in the "what being an
unofficial project is not" bullet list, but it could use some
fleshing out.


If we don't have place where this kind of thing is documented
already, I'll submit a review adding one. Does anybody have any
ideas about a process for ensuring that projects have read and
agreed to the terms when we add them?


Adding process forcing active confirmation of such rules seems like
a lot of unnecessary overhead/red tape/bureaucracy. As it stands,
we're working to get rid of active agreement to the ICLA in favor of
simply asserting the DCO in commit messages, so I'm not a fan of
adding some new agreement people have to directly acknowledge along
with associated automation and policing.







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-06 Thread Michael Johnson
Octavia also has an informal rule against two cores from the same
company merging patches. I support this because it makes sure we have
a diverse perspective on the patches. Specifically it has worked well
for us as all of the cores have different cloud designs, so it catches
anything that would limit/conflict with the different OpenStack
topologies.

That said, we don't hard enforce this or police it, it is just an
informal policy to make sure we get input from the wider team.
Currently we only have one company with two cores.

That said, my issue with the current diversity calculations is they
tend to be skewed by the PTL role. People have a tendency to defer to
the PTL to review/comment/merge patches, so if the PTL shares a
company with another core the diversity numbers get skewed heavily
towards that company.

Michael

On Wed, Jun 6, 2018 at 5:06 AM,   wrote:
>> -Original Message-
>> From: Doug Hellmann 
>> Sent: Monday, June 4, 2018 5:52 PM
>> To: openstack-dev 
>> Subject: Re: [openstack-dev] [tc] Organizational diversity tag
>>
>> Excerpts from Zane Bitter's message of 2018-06-04 17:41:10 -0400:
>> > On 02/06/18 13:23, Doug Hellmann wrote:
>> > > Excerpts from Zane Bitter's message of 2018-06-01 15:19:46 -0400:
>> > >> On 01/06/18 12:18, Doug Hellmann wrote:
>> > >
>> > > [snip]
>> > Apparently enough people see it the way you described that this is
>> > probably not something we want to actively spread to other projects at
>> > the moment.
>>
>> I am still curious to know which teams have the policy. If it is more
>> widespread than I realized, maybe it's reasonable to extend it and use it as
>> the basis for a health check after all.
>>
>
> A while back, Trove had this policy. When Rackspace, HP, and Tesora had core 
> reviewers, (at various times, eBay, IBM and Red Hat also had cores), the 
> agreement was that multiple cores from any one company would not merge a 
> change unless it was an emergency. It was not formally written down (to my 
> knowledge).
>
> It worked well, and ensured that the operators didn't get surprised by some 
> unexpected thing that took down their service.
>
> -amrith
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] proposal to postpone nova-network core functionality removal to Stein

2018-06-06 Thread melanie witt

On Thu, 31 May 2018 15:04:53 -0500, Matt Riedemann wrote:

On 5/31/2018 1:35 PM, melanie witt wrote:


This cycle at the PTG, we had decided to start making some progress
toward removing nova-network [1] (thanks to those who have helped!) and
so far, we've landed some patches to extract common network utilities
from nova-network core functionality into separate utility modules. And
we've started proposing removal of nova-network REST APIs [2].

At the cells v2 sync with operators forum session at the summit [3], we
learned that CERN is in the middle of migrating from nova-network to
neutron and that holding off on removal of nova-network core
functionality until Stein would help them out a lot to have a safety net
as they continue progressing through the migration.

If we recall correctly, they did say that removal of the nova-network
REST APIs would not impact their migration and Surya Seetharaman is
double-checking about that and will get back to us. If so, we were
thinking we can go ahead and work on nova-network REST API removals this
cycle to make some progress while holding off on removing the core
functionality of nova-network until Stein.

I wanted to send this to the ML to let everyone know what we were
thinking about this and to receive any additional feedback folks might
have about this plan.

Thanks,
-melanie

[1] https://etherpad.openstack.org/p/nova-ptg-rocky L301
[2] https://review.openstack.org/567682
[3]
https://etherpad.openstack.org/p/YVR18-cellsv2-migration-sync-with-operators
L30


As a reminder, this is the etherpad I started to document the nova-net
specific compute REST APIs which are candidates for removal:

https://etherpad.openstack.org/p/nova-network-removal-rocky


Update: In the cells meeting today [4], Surya confirmed that CERN is 
okay with nova-network REST API pieces being removed this cycle while 
leaving the core functionality of nova-network intact, as they continue 
their migration from nova-network to neutron. We're tracking the 
nova-net REST API removal candidates on the aforementioned 
nova-network-removal etherpad.


-melanie

[4] 
http://eavesdrop.openstack.org/meetings/nova_cells/2018/nova_cells.2018-06-06-17.00.html






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-06-06 Thread Assaf Muller
On Tue, May 29, 2018 at 12:41 PM, Mathieu Gagné  wrote:
> Hi Julia,
>
> Thanks for the follow up on this topic.
>
> On Tue, May 29, 2018 at 6:55 AM, Julia Kreger
>  wrote:
>>
>> These things are not just frustrating, but also very inhibiting for
>> part time contributors such as students who may also be time limited.
>> Or an operator who noticed something that was clearly a bug and that
>> put forth a very minor fix and doesn't have the time to revise it over
>> and over.
>>
>
> What I found frustrating is receiving *only* nitpicks, addressing them
> to only receive more nitpicks (sometimes from the same reviewer) with
> no substantial review on the change itself afterward.
> I wouldn't mind addressing nitpicks if more substantial reviews were
> made in a timely fashion.

The behavior that I've tried to promote in communities I've taken part
in is: if your review is composed solely of nits, either abandon it or
don't -1.

>
> --
> Mathieu
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] openstack-tox-validate: python setup.py check --restructuredtext --strict

2018-06-06 Thread Sean McGinnis

On 06/06/2018 11:35 AM, Jeremy Stanley wrote:

On 2018-06-06 18:24:00 +0200 (+0200), Dmitry Tantsur wrote:

In Ironic world we run doc8 on README.rst as part of the pep8 job.
Maybe we should make it a common practice?

[...]

First, the doc8 tool should be considered generally useful for any
project with Sphinx-based documentation, regardless of whether it's
a Python project. Second, doc8 isn't going to necessarily turn up
the same errors as `python setup.py check --restructuredtext
--strict` since the latter is focused on validating that the
long description (which _might_ be in a file referenced from your
documentation tree, but also might not!) for your Python package is
suitable for rendering on PyPI.


This is a good point about the README not necessarily being the
long description. Another option, for teams whose README files would
take a lot of work to make compatible, is to explicitly set the
long_description value for the project to something else:

https://pythonhosted.org/an_example_pypi_project/setuptools.html
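
For pbr-based projects that could look something like the following in
setup.cfg (PYPI_DESCRIPTION.rst is just an illustrative name for
whatever simplified file you choose as the long description):

    [metadata]
    description-file =
        PYPI_DESCRIPTION.rst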


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] openstack-tox-validate: python setup.py check --restructuredtext --strict

2018-06-06 Thread Jeremy Stanley
On 2018-06-06 18:24:00 +0200 (+0200), Dmitry Tantsur wrote:
> In Ironic world we run doc8 on README.rst as part of the pep8 job.
> Maybe we should make it a common practice?
[...]

First, the doc8 tool should be considered generally useful for any
project with Sphinx-based documentation, regardless of whether it's
a Python project. Second, doc8 isn't going to necessarily turn up
the same errors as `python setup.py check --restructuredtext
--strict` since the latter is focused on validating that the
long description (which _might_ be in a file referenced from your
documentation tree, but also might not!) for your Python package is
suitable for rendering on PyPI.
-- 
Jeremy Stanley


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] openstack-tox-validate: python setup.py check --restructuredtext --strict

2018-06-06 Thread Doug Hellmann
Excerpts from Dmitry Tantsur's message of 2018-06-06 18:24:00 +0200:
> In Ironic world we run doc8 on README.rst as part of the pep8 job. Maybe we 
> should make it a common practice?

That seems like it may be a good thing to add, but I don't know
that it is sufficient to detect all of the problems that prevent
uploading packages because of the README formatting.

> 
> On 06/06/2018 03:35 PM, Jeremy Stanley wrote:
> > On 2018-06-06 16:36:45 +0900 (+0900), Akihiro Motoki wrote:
> > [...]
> >> In addition, unfortunately such checks are not run in project gate,
> >> so there is no way to detect in advance.
> >> I think we need a way to check this when a change is made
> >> instead of detecting an error when a release patch is proposed.
> > 
> > While I hate to suggest yet another Python PTI addition, for my
> > personal projects I test every commit (essentially a check/gate
> > pipeline job) with:
> > 
> >  python setup.py check --restructuredtext --strict
> >  python setup.py bdist_wheel sdist
> > 
> > ...as proof that it hasn't broken sdist/wheel building nor regressed
> > the description-file provided in my setup.cfg. My intent is to add
> > other release artifact tests into the same set so that there are no
> > surprises come release time.
> > 
> > We sort of address this case in OpenStack projects by forcing sdist
> > builds in our standard pep8 jobs, so maybe that would be a
> > lower-overhead place to introduce the setup rst check?
> > Brainstorming.
> > 
> > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] openstack-tox-validate: python setup.py check --restructuredtext --strict

2018-06-06 Thread Dmitry Tantsur
In Ironic world we run doc8 on README.rst as part of the pep8 job. Maybe we 
should make it a common practice?
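
For anyone curious, that's roughly the following in tox.ini (an
illustrative sketch, not Ironic's exact configuration):

    [testenv:pep8]
    deps =
        flake8
        doc8
    commands =
        flake8 {posargs}
        doc8 README.rst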


On 06/06/2018 03:35 PM, Jeremy Stanley wrote:

On 2018-06-06 16:36:45 +0900 (+0900), Akihiro Motoki wrote:
[...]

In addition, unfortunately such checks are not run in project gate,
so there is no way to detect in advance.
I think we need a way to check this when a change is made
instead of detecting an error when a release patch is proposed.


While I hate to suggest yet another Python PTI addition, for my
personal projects I test every commit (essentially a check/gate
pipeline job) with:

 python setup.py check --restructuredtext --strict
 python setup.py bdist_wheel sdist

...as proof that it hasn't broken sdist/wheel building nor regressed
the description-file provided in my setup.cfg. My intent is to add
other release artifact tests into the same set so that there are no
surprises come release time.

We sort of address this case in OpenStack projects by forcing sdist
builds in our standard pep8 jobs, so maybe that would be a
lower-overhead place to introduce the setup rst check?
Brainstorming.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker

2018-06-06 Thread Davanum Srinivas
"do you think this is an area that OpenLab could help out with?" <<
YES!! please ping mrhillsman and RuiChen over on #askopenlab

-- Dims

On Wed, Jun 6, 2018 at 11:44 AM, Zane Bitter  wrote:
> On 06/06/18 11:18, Chris Hoge wrote:
>>
>> Hi Zane,
>>
>> Do you think this effort would make sense as a subproject within the Cloud
>> Provider OpenStack repository hosted within the Kubernetes org? We have
>> a solid group of people working on the cloud provider, and while it’s not
>> the same code, it’s a collection of the same expertise and test resources.
>
>
> TBH, I think it makes more sense as part of the OpenStack community. If you
> look at how the components interact, it goes:
>
> Kubernetes Service Catalog -> Automation Broker -> [this] -> OpenStack
>
> So the interfaces with k8s are already well-defined and owned by other
> teams. It's the interface with OpenStack that requires the closest
> co-ordination. (Particularly if we end up autogenerating the playbooks from
> introspection on shade.) If you look at where the other clouds host their
> service brokers or Ansible Playbook Bundles, they're not part of the
> equivalent Kubernetes Cloud Providers either.
>
> We'll definitely want testing though. Given that this is effectively another
> user interface to OpenStack, do you think this is an area that OpenLab could
> help out with?
>
>> Even if it's hosted as an OpenStack project, we should still make sure
>> we have documentation and pointers from the
>> kubernetes/cloud-provider-openstack
>> to guide users in the right direction.
>
>
> Sure, that makes sense to cross-advertise it to people we know are using k8s
> on top of OpenStack already. (Although note that k8s does not have to be
> running on top of OpenStack for the service broker to be useful, unlike the
> cloud provider.)
>
>> While I'm not in a position to directly contribute, I'm happy to offer
>> any support I can through the SIG-OpenStack and SIG-Cloud-Provider
>> roles I have in the K8s community.
>
>
> Thanks!
>
> cheers,
> Zane.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker

2018-06-06 Thread Zane Bitter

On 06/06/18 11:18, Chris Hoge wrote:

Hi Zane,

Do you think this effort would make sense as a subproject within the Cloud
Provider OpenStack repository hosted within the Kubernetes org? We have
a solid group of people working on the cloud provider, and while it’s not
the same code, it’s a collection of the same expertise and test resources.


TBH, I think it makes more sense as part of the OpenStack community. If 
you look at how the components interact, it goes:


Kubernetes Service Catalog -> Automation Broker -> [this] -> OpenStack

So the interfaces with k8s are already well-defined and owned by other 
teams. It's the interface with OpenStack that requires the closest 
co-ordination. (Particularly if we end up autogenerating the playbooks 
from introspection on shade.) If you look at where the other clouds host 
their service brokers or Ansible Playbook Bundles, they're not part of 
the equivalent Kubernetes Cloud Providers either.


We'll definitely want testing though. Given that this is effectively 
another user interface to OpenStack, do you think this is an area that 
OpenLab could help out with?



Even if it's hosted as an OpenStack project, we should still make sure
we have documentation and pointers from the kubernetes/cloud-provider-openstack
to guide users in the right direction.


Sure, that makes sense to cross-advertise it to people we know are using 
k8s on top of OpenStack already. (Although note that k8s does not have 
to be running on top of OpenStack for the service broker to be useful, 
unlike the cloud provider.)



While I'm not in a position to directly contribute, I'm happy to offer
any support I can through the SIG-OpenStack and SIG-Cloud-Provider
roles I have in the K8s community.


Thanks!

cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker

2018-06-06 Thread Chris Hoge
Hi Zane,

Do you think this effort would make sense as a subproject within the Cloud
Provider OpenStack repository hosted within the Kubernetes org? We have
a solid group of people working on the cloud provider, and while it’s not
the same code, it’s a collection of the same expertise and test resources.

Even if it's hosted as an OpenStack project, we should still make sure
we have documentation and pointers from the kubernetes/cloud-provider-openstack
to guide users in the right direction.

While I'm not in a position to directly contribute, I'm happy to offer
any support I can through the SIG-OpenStack and SIG-Cloud-Provider
roles I have in the K8s community.

-Chris

> On Jun 5, 2018, at 9:19 AM, Zane Bitter  wrote:
> 
> I've been doing some investigation into the Service Catalog in Kubernetes and 
> how we can get OpenStack resources to show up in the catalog for use by 
> applications running in Kubernetes. (The Big 3 public clouds already support 
> this.) The short answer is via an implementation of something called the Open 
> Service Broker API, but there are shortcuts available to make it easier to do.
> 
> I'm convinced that this is readily achievable and something we ought to do as 
> a community.
> 
> I've put together a (long-winded) FAQ below to answer all of your questions 
> about it.
> 
> Would you be interested in working on a new project to implement this 
> integration? Reply to this thread and let's collect a list of volunteers to 
> form the initial core review team.
> 
> cheers,
> Zane.
> 
> 
> What is the Open Service Broker API?
> 
> 
> The Open Service Broker API[1] is a standard way to expose external resources 
> to applications running in a PaaS. It was originally developed in the context 
> of CloudFoundry, but the same standard was adopted by Kubernetes (and hence 
> OpenShift) in the form of the Service Catalog extension[2]. (The Service 
> Catalog in Kubernetes is the component that calls out to a service broker.) 
> So a single implementation can cover the most popular open-source PaaS 
> offerings.
> 
> In many cases, the services take the form of simply a pre-packaged 
> application that also runs inside the PaaS. But they don't have to be - 
> services can be anything. Provisioning via the service broker ensures that 
> the services requested are tied in to the PaaS's orchestration of the 
> application's lifecycle.
> 
> (This is certainly not the be-all and end-all of integration between 
> OpenStack and containers - we also need ways to tie PaaS-based applications 
> into the OpenStack's orchestration of a larger group of resources. Some 
> applications may even use both. But it's an important part of the story.)
> 
> What sorts of services would OpenStack expose?
> --
> 
> Some example use cases might be:
> 
> * The application needs a reliable message queue. Rather than spinning up 
> multiple storage-backed containers with anti-affinity policies and dealing 
> with the overhead of managing e.g. RabbitMQ, the application requests a Zaqar 
> queue from an OpenStack cloud. The overhead of running the queueing service 
> is amortised across all of the applications in the cloud. The queue gets 
> cleaned up correctly when the application is removed, since it is tied into 
> the application definition.
> 
> * The application needs a database. Rather than spinning one up in a 
> storage-backed container and dealing with the overhead of managing it, the 
> application requests a Trove DB from an OpenStack cloud.
> 
> * The application includes a service that needs to run on bare metal for 
> performance reasons (e.g. could also be a database). The application requests 
> a bare-metal server from Nova w/ Ironic for the purpose. (The same applies to 
> requesting a VM, but there are alternatives like KubeVirt - which also 
> operates through the Service Catalog - available for getting a VM in 
> Kubernetes. There are no non-proprietary alternatives for getting a 
> bare-metal server.)
> 
> AWS[3], Azure[4], and GCP[5] all have service brokers available that support 
> these and many more services that they provide. I don't know of any reason in 
> principle not to expose every type of resource that OpenStack provides via a 
> service broker.
> 
> How is this different from cloud-provider-openstack?
> 
> 
> The Cloud Controller[6] interface in Kubernetes allows Kubernetes itself to 
> access features of the cloud to provide its service. For example, if k8s 
> needs persistent storage for a container then it can request that from Cinder 
> through cloud-provider-openstack[7]. It can also request a load balancer from 
> Octavia instead of having to start a container running HAProxy to load 
> balance between multiple instances of an application container (thus enabling 
> use of hardware load balancers via the cloud's abstraction for them).
> 
> 

[openstack-dev] [PTL] Rocky-2 Milestone Reminder

2018-06-06 Thread Sean McGinnis
Hey everyone,

Just a quick reminder that tomorrow, June 7, is the Rocky-2 milestone.

Any project following the cycle-with-milestones model should propose a patch to
the openstack/releases repo before the end of the day tomorrow to have a b2
release created.

Please see the releases repo README for details on how to request a release:

https://github.com/openstack/releases/blob/master/README.rst#requesting-a-release

Note, you can also use the new-release command rather than manually editing
files. After cloning the repo, you would then run something like the following
to prepare a milestone 2 release request:

   $ tox -e venv -- new-release rocky $PROJECT milestone

If you have any questions, please stop by #openstack-release and let us know
how we can help.

** Note on README files

There has been a recent change to PyPI uploads that rejects packages
if their long description has RST formatting errors. To guard against this, we
now have a check in place on release patches to catch such problems before it's
too late.

If you see failures with the validation job, most likely this will be the
cause. The README file in each repo will need to be fixed and the release
request will need to be updated to include the commit hash that includes that
update.

For further details, please see:

http://lists.openstack.org/pipermail/openstack-dev/2018-June/131233.html

---
Sean (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr][kuryr-kubernetes] Propose to support Kubernetes Network Custom Resource Definition De-facto Standard Version 1

2018-06-06 Thread Antoni Segura Puimedon
On Wed, Jun 6, 2018 at 2:37 PM, Irena Berezovsky  wrote:
> Sounds like a great initiative.
>
> Lets follow up on the proposal by the kuryr-kubernetes blueprint.

I fully subscribe what Irena said. Let's get on this quick!

>
> BR,
> Irena
>
> On Wed, Jun 6, 2018 at 6:47 AM, Peng Liu  wrote:
>>
>> Hi Kuryr-kubernetes team,
>>
>> I'm thinking to propose a new BP to support  Kubernetes Network Custom
>> Resource Definition De-facto Standard Version 1 [1], which was drafted by
>> network plumbing working group of kubernetes-sig-network. I'll call it NPWG
>> spec below.
>>
>> The purpose of NPWG spec is trying to standardize the multi-network effort
>> around K8S by defining a CRD object 'network' which can be consumed by
>> various CNI plugins. I know there has already been a BP VIF-Handler And Vif
>> Drivers Design, which has designed a set of mechanism to implement the
>> multi-network functionality. However I think it is still worthwhile to
>> support this widely accepted NPWG spec.
>>
>> My proposal is to implement a new vif_driver, which can interpret the PoD
>> annotation and CRD defined by NPWG spec, and attach pod to additional
>> neutron subnet and port accordingly. This new driver should be mutually
>> exclusive with the sriov and additional_subnets drivers.So the endusers can
>> choose either way of using mult-network with kuryr-kubernetes.
>>
>> Please let me know your thought, any comments are welcome.
>>
>>
>>
>> [1]
>> https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit#heading=h.hylsbqoj5fxd
>>
>>
>> Regards,
>>
>> --
>> Peng Liu
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Update (swap) of multiattach volume should not be allowed

2018-06-06 Thread Jay Pipes

On 06/06/2018 10:02 AM, Matt Riedemann wrote:

On 6/6/2018 8:24 AM, Jay Pipes wrote:

On 06/06/2018 09:10 AM, Artom Lifshitz wrote:

I think regardless of how we ended up with this situation, we're still
in a position where we have a public-facing API that could lead to
data-corruption when used in a specific way. That should never be the
case. I would think re-using the already possible 400 response code to
update-volume when used with a multi-attach volume to indicate that it
can't be done, without a new microversion, would be the cleaned way of
getting out of this pickle.


That's fine, yes.

I just think it's worth noting that it's a pickle that we put 
ourselves in due to an ill-conceived feature and Compute API call. And 
that we should, you know, try to stop doing that. :)


-jay


If we're going to change something, I think it should probably happen on 
the cinder side when the retype or live migration of the volume is 
initiated and would do the attachment counting there.


So if you're swapping from multiattach volume A to multiattach volume B 
and either has >1 read/write attachment, then fail with a 400 in the 
cinder API.


We can check those things in the compute API when cinder calls the swap 
volume API in nova, but:


1. It's racy - cinder is the source of truth on the current state of the 
attachments.


2. The failure mode is going to be questionable - by the time cinder 
calls nova to swap the volumes on the compute host, the cinder REST API 
has long since 202'ed the response to the user and the best nova can do 
is return a 400 and then cinder has to handle that gracefully and 
rollback. It would be much cleaner if the volume API just fails fast.


+10

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Update (swap) of multiattach volume should not be allowed

2018-06-06 Thread Matthew Booth
On 6 June 2018 at 13:55, Jay Pipes  wrote:
> On 06/06/2018 07:46 AM, Matthew Booth wrote:
>>
>> TL;DR I think we need to entirely disable swap volume for multiattach
>> volumes, and this will be an api breaking change with no immediate
>> workaround.
>>
>> I was looking through tempest and came across
>>
>> api.compute.admin.test_volume_swap.TestMultiAttachVolumeSwap.test_volume_swap_with_multiattach.
>> This test does:
>>
>> Create 2 multiattach volumes
>> Create 2 servers
>> Attach volume 1 to both servers
>> ** Swap volume 1 for volume 2  on server 1 **
>> Check all is attached as expected
>>
>> The problem with this is that swap volume is a copy operation.
>
>
> Is it, though? The original blueprint and implementation seem to suggest
> that the swap_volume operation was nothing more than changing the mountpoint
> for a volume to point to a different location (in a safe
> manner that didn't lose any reads or writes).
>
> https://blueprints.launchpad.net/nova/+spec/volume-swap
>
> Nothing about the description of swap_volume() in the virt driver interface
> mentions swap_volume() being a "copy operation":
>
> https://github.com/openstack/nova/blob/76ec078d3781fb55c96d7aaca4fb73a74ce94d96/nova/virt/driver.py#L476
>
>> We don't just replace one volume with another, we copy the contents
>> from one to the other and then do the swap. We do this with a qemu
>> drive mirror operation, which is able to do this copy safely without
>> needing to make the source read-only because it can also track writes
>> to the source and ensure the target is updated again. Here's a link
>> to the libvirt logs showing a drive mirror operation during the swap
>> volume of an execution of the above test:
>
> After checking the source code, the libvirt virt driver is the only virt
> driver that implements swap_volume(), so it looks to me like a public HTTP
> API method was added that was specific to libvirt's implementation of drive
> mirroring. Yay, more implementation leaking out through the API.
>
>>
>> http://logs.openstack.org/58/567258/5/check/nova-multiattach/d23fad8/logs/libvirt/libvirtd.txt.gz#_2018-06-04_10_57_05_201
>>
>> The problem is that when the volume is attached to more than one VM,
>> the hypervisor doing the drive mirror *doesn't* know about writes on
>> the other attached VMs, so it can't do that copy safely, and the
>> result is data corruption.
>
>
> Would it be possible to swap the volume by doing what Vish originally
> described in the blueprint: pause the VM, swap the volume mountpoints
> (potentially after migrating the underlying volume), start the VM?
>
>>
>  Note that swap volume isn't visible to the
>>
>> guest os, so this can't be addressed by the user. This is a data
>> corrupter, and we shouldn't allow it. However, it is in released code
>> and users might be doing it already, so disabling it would be a
>> user-visible api change with no immediate workaround.
>
>
> I'd love to know who is actually using the swap_volume() functionality,
> actually. I'd especially like to know who is using swap_volume() with
> multiattach.
>
>> However, I think we're attempting to do the wrong thing here anyway,
>> and the above tempest test is explicit testing behaviour that we don't
>> want. The use case for swap volume is that a user needs to move volume
>> data for attached volumes, e.g. to new faster/supported/maintained
>> hardware.
>
>
> Is that the use case?
>
> As was typical, there's no mention of a use case on the original blueprint.
> It just says "This feature allows a user or administrator to transparently
> swap out a cinder volume that connected to an instance." Which is hardly a
> use case since it uses the feature name in a description of the feature
> itself. :(
>
> The commit message (there was only a single commit for this functionality
> [1]) mentions overwriting data on the new volume:
>
>   Adds support for transparently swapping an attached volume with
>   another volume. Note that this overwrites all data on the new volume
>   with data from the old volume.
>
> Yes, that is the commit message in its entirety. Of course, the commit had
> no documentation at all in it, so there's no ability to understand what the
> original use case really was here.
>
> https://review.openstack.org/#/c/28995/
>
> If the use case was really "that a user needs to move volume data for
> attached volumes", why not just pause the VM, detach the volume, do a
> openstack volume migrate to the new destination, reattach the volume and
> start the VM? That would mean no libvirt/QEMU-specific implementation
> behaviour leaking out of the public HTTP API and allow the volume service
> (Cinder) to do its job properly.

I can't comment on how it was originally documented, but I'm confident
in the use case. Certainly I know this is how our customers use it.
It's the Nova-side implementation of a cinder retype operation. There
are a bunch of potential reasons to want to do this, but a specific
one that I recall from a 

Re: [openstack-dev] [nova][cinder] Update (swap) of multiattach volume should not be allowed

2018-06-06 Thread Matt Riedemann

On 6/6/2018 8:24 AM, Jay Pipes wrote:

On 06/06/2018 09:10 AM, Artom Lifshitz wrote:

I think regardless of how we ended up with this situation, we're still
in a position where we have a public-facing API that could lead to
data-corruption when used in a specific way. That should never be the
case. I would think re-using the already possible 400 response code to
update-volume when used with a multi-attach volume to indicate that it
can't be done, without a new microversion, would be the cleaned way of
getting out of this pickle.


That's fine, yes.

I just think it's worth noting that it's a pickle that we put ourselves 
in due to an ill-conceived feature and Compute API call. And that we 
should, you know, try to stop doing that. :)


-jay


If we're going to change something, I think it should probably happen on 
the cinder side when the retype or live migration of the volume is 
initiated and would do the attachment counting there.


So if you're swapping from multiattach volume A to multiattach volume B 
and either has >1 read/write attachment, then fail with a 400 in the 
cinder API.


We can check those things in the compute API when cinder calls the swap 
volume API in nova, but:


1. It's racy - cinder is the source of truth on the current state of the 
attachments.


2. The failure mode is going to be questionable - by the time cinder 
calls nova to swap the volumes on the compute host, the cinder REST API 
has long since 202'ed the response to the user and the best nova can do 
is return a 400 and then cinder has to handle that gracefully and 
rollback. It would be much cleaner if the volume API just fails fast.
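
To make the idea concrete, the guard would be something along these
lines (purely illustrative Python, not actual cinder code; it assumes
volumes are dicts carrying a 'multiattach' flag and an 'attachments'
list):

    class InvalidVolume(Exception):
        """Stand-in for the cinder exception that maps to HTTP 400."""


    def check_swap_allowed(source_volume, dest_volume):
        # Reject the retype/migration up front if either multiattach
        # volume has more than one read/write attachment.
        for volume in (source_volume, dest_volume):
            rw_attachments = [
                a for a in volume.get('attachments', [])
                if a.get('attach_mode', 'rw') == 'rw']
            if volume.get('multiattach') and len(rw_attachments) > 1:
                raise InvalidVolume(
                    'volume %s has more than one read/write attachment'
                    % volume['id'])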


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][ptl][doc] openstack-tox-validate: python setup.py check --restructuredtext --strict

2018-06-06 Thread Doug Hellmann
Excerpts from Akihiro Motoki's message of 2018-06-06 16:36:45 +0900:
> Hi the release team,
> 
> When I prepared neutron Rocky-2 deliverables, I noticed a new metadata
> syntax check
> which checks README.rst was introduced.
> 
> As of now, README.rst in networking-bagpipe and networking-ovn hit this [1].
> 
> Although they can be fixed in individual projects, what is the current
> recommended solution?
> 
> In addition, unfortunately such checks are not run in project gate,
> so there is no way to detect in advance.
> I think we need a way to check this when a change is made
> instead of detecting an error when a release patch is proposed.
> 
> Thanks,
> Akihiro (amotoki)
> 
> [1]
> http://logs.openstack.org/66/572666/1/check/openstack-tox-validate/b5dde2f/job-output.txt.gz#_2018-06-06_04_09_16_067790

I apologize for not following through with more communication when we
added this check.

We started noticing uploads to PyPI fail because of validation errors in
the README.rst files associated with the packages. We think this is a
recent change to warehouse (the software that implements PyPI).

The new check in the releases repo validation job tries to catch
the errors before the upload fails, so they can be fixed. We wanted
to start by putting it in the releases repo because it would only
block releases, and not block projects from landing other patches.

I recommend that projects update their tox.ini to modify their pep8
or linters target (whichever you are using) to add this command:

  python setup.py check --restructuredtext --strict

For the check to run, the 'docutils' package must be installed, so you
may have to add that to the test-requirements.txt list.
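
For most projects that ends up looking something like this (a sketch
only; adjust it to match your existing pep8 or linters environment):

    [testenv:pep8]
    deps = -r{toxinidir}/test-requirements.txt
    commands =
        flake8 {posargs}
        python setup.py check --restructuredtext --strict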

Be forewarned that the error messages can be scant, almost to the
point of useless. In some cases the exception has to do with
implementation details of the parser, rather than explaining what
part of the input triggered the error. Usually the problems are
caused by using RST directives that are part of Sphinx but not
"core" RST in the README.rst ("code-block" is a common one). If you
can't figure out what's wrong, please post a link to the README.rst
on the mailing list or in #openstack-docs and someone will try to
help you out.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] openstack-tox-validate: python setup.py check --restructuredtext --strict

2018-06-06 Thread Jeremy Stanley
On 2018-06-06 16:36:45 +0900 (+0900), Akihiro Motoki wrote:
[...]
> In addition, unfortunately such checks are not run in project gate,
> so there is no way to detect in advance.
> I think we need a way to check this when a change is made
> instead of detecting an error when a release patch is proposed.

While I hate to suggest yet another Python PTI addition, for my
personal projects I test every commit (essentially a check/gate
pipeline job) with:

python setup.py check --restructuredtext --strict
python setup.py bdist_wheel sdist

...as proof that it hasn't broken sdist/wheel building nor regressed
the description-file provided in my setup.cfg. My intent is to add
other release artifact tests into the same set so that there are no
surprises come release time.

We sort of address this case in OpenStack projects by forcing sdist
builds in our standard pep8 jobs, so maybe that would be a
lower-overhead place to introduce the setup rst check?
Brainstorming.
-- 
Jeremy Stanley


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules

2018-06-06 Thread Radomir Dopieralski
Some of our xstatic packages require an elaborate build process, as they
use various javascript-based tools to do the build. In this case, it's more
than just minification — it's macros and includes, basically a
pre-processor, and it would be very hard to re-create that in Python. Thus,
we made some exceptions and included the minified files in those cases. But
when it's just about minifying, then the application serving the files is
responsible for that, and the xstatic package shouldn't contain those files.

On Wed, Jun 6, 2018 at 5:45 AM, Akihiro Motoki  wrote:

> 2018年6月6日(水) 11:54 Xinni Ge :
>
>> Hi, akihiro and other guys,
>>
>> I understand why minified is considered to be non-free, but I was
>> confused about the statement
>> "At the very least, a non-minified version should be present next to the
>> minified version" [1]
>> in the documentation.
>>
>> Actually in existing xstatic repo, I observed several minified files in
>> angular_fileupload, jquery-migrate, or bootstrap_scss.
>> So, I uploaded those minified files as in the release package of
>>  angular/material.
>>
>
> Good point. My interpretation is:
> - Basically minified files should not be included in xstatic deliverables.
> - Even though not suggested, if minified files are included, corresponding
> non-minified version must be included.
>
> Considering this, I believe we should not include minified files for new
> xstatic deliverables.
> Makes sense?
>
>
>>
>> Personally I don't insist on minified files, and I will delete all
>> minified files and re-upload the patch.
>> Thanks a lot for the advice.
>>
>
> Thanks for understanding and your patience.
> Let's land pending reviews soon :)
>
> Akihiro
>
>
>>
>> [1] https://docs.openstack.org/horizon/latest/contributor/
>> topics/packaging.html#minified-javascript-policy
>>
>> 
>> Ge Xinni
>> Email: xinni.ge1...@gmail.com
>> 
>>
>> On Tue, Jun 5, 2018 at 8:59 PM, Akihiro Motoki  wrote:
>>
>>> Hi,
>>>
>>> Sorry for re-using the ancient ML thread.
>>> Looking at recent xstatic-* repo reviews, I am a bit afraid that
>>> xstatic-cores do not have a common understanding on the principle of
>>> xstatic packages.
>>> I hope all xstatic-cores re-read "Packing Software" in the horizon
>>> contributor docs [1], especially "Minified Javascript policy" [2],
>>> carefully.
>>>
>>> Thanks,
>>> Akihiro
>>>
>>> [1] https://docs.openstack.org/horizon/latest/contributor/
>>> topics/packaging.html
>>> [2] https://docs.openstack.org/horizon/latest/
>>> contributor/topics/packaging.html#minified-javascript-policy
>>>
>>>
>>> 2018年4月4日(水) 14:35 Xinni Ge :
>>>
 Hi Ivan and other Horizon team member,

 Thanks for adding us into xstatic-core group.
 But I still need your opinion and help to release the newly-added
 xstatic packages to pypi index.

 Current `xstatic-core` group doesn't have the permission to PUSH SIGNED
 TAG, and I cannot release the first non-trivial version.

 If I (or maybe Kaz) could be added into xstatic-release group, we can
 release all the 8 packages by ourselves.

 Or, we are very appreciate if any member of xstatic-release could help
 to do it.

 Just for your quick access, here is the link of access permission page
 of one xstatic package.
 https://review.openstack.org/#/admin/projects/openstack/
 xstatic-angular-material,access

 --
 Best Regards,
 Xinni

 On Thu, Mar 29, 2018 at 9:59 AM, Kaz Shinohara 
 wrote:

> Hi Ivan,
>
>
> Thank you very much.
> I've confirmed that all of us have been added to xstatic-core.
>
> As discussed, we will focus on the followings what we added for
> heat-dashboard, will not touch other xstatic repos as core.
>
> xstatic-angular-material
> xstatic-angular-notify
> xstatic-angular-uuid
> xstatic-angular-vis
> xstatic-filesaver
> xstatic-js-yaml
> xstatic-json2yaml
> xstatic-vis
>
> Regards,
> Kaz
>
> 2018-03-29 5:40 GMT+09:00 Ivan Kolodyazhny :
> > Hi Kuz,
> >
> > Don't worry, we're on the same page with you. I added both you,
> Xinni and
> > Keichii to the xstatic-core group. Thank you for your contributions!
> >
> > Regards,
> > Ivan Kolodyazhny,
> > http://blog.e0ne.info/
> >
> > On Wed, Mar 28, 2018 at 5:18 PM, Kaz Shinohara 
> wrote:
> >>
> >> Hi Ivan & Horizon folks
> >>
> >>
> >> AFAIK, Horizon team had conclusion that you will add the specific
> >> members to xstatic-core, correct ?
> >> Can I ask you to add the following members ?
> >> # All of tree are heat-dashboard core.
> >>
> >> Kazunori Shinohara / ksnhr.t...@gmail.com #myself
> >> Xinni Ge / xinni.ge1...@gmail.com
> >> Keiichi Hikita / keiichi.hik...@gmail.com
> >>
> >> Please give me a shout, if we are not on same page or any concern.
> >>
> >> 

Re: [openstack-dev] [nova][cinder] Update (swap) of multiattach volume should not be allowed

2018-06-06 Thread Jay Pipes

On 06/06/2018 09:10 AM, Artom Lifshitz wrote:

I think regardless of how we ended up with this situation, we're still
in a position where we have a public-facing API that could lead to
data-corruption when used in a specific way. That should never be the
case. I would think re-using the already possible 400 response code to
update-volume when used with a multi-attach volume to indicate that it
can't be done, without a new microversion, would be the cleaned way of
getting out of this pickle.


That's fine, yes.

I just think it's worth noting that it's a pickle that we put ourselves 
in due to an ill-conceived feature and Compute API call. And that we 
should, you know, try to stop doing that. :)


-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Update (swap) of multiattach volume should not be allowed

2018-06-06 Thread Matt Riedemann

On 6/6/2018 7:55 AM, Jay Pipes wrote:
I'd love to know who is actually using the swap_volume() functionality, 
actually. I'd especially like to know who is using swap_volume() with 
multiattach.


The swap volume API in nova only exists as a callback routine during 
volume live migration or retype operations. It's admin-only by default 
on the nova side, and shouldn't be called directly (similar to 
guest-assisted volume snapshots for NFS and GlusterFS volumes - totally 
just a callback from Cinder). So during volume retype, cinder will call 
swap volume in nova and then nova will call another admin-only API in 
Cinder to tell Cinder, yup we did it or we failed, rollback.


The cinder API reference on retype mentions the restrictions about 
multiattach volumes:


https://developer.openstack.org/api-ref/block-storage/v3/#retype-a-volume

"Retyping an in-use volume from a multiattach-capable type to a 
non-multiattach-capable type, or vice-versa, is not supported. It is 
generally not recommended to retype an in-use multiattach volume if that 
volume has more than one active read/write attachment."


There is no API reference for volume live migration, but it should 
generally be the same idea.


The Tempest test for swap volume with multiattach volumes was written 
before we realized we needed to put restrictions in place *on the cinder 
side* to limit the behavior. The Tempest test just hits the compute API 
to verify the plumbing in nova works properly, it doesn't initiate the 
flow via an actual retype (or volume live migration).


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Update (swap) of multiattach volume should not be allowed

2018-06-06 Thread Artom Lifshitz
I think regardless of how we ended up with this situation, we're still
in a position where we have a public-facing API that could lead to
data-corruption when used in a specific way. That should never be the
case. I would think re-using the already possible 400 response code to
update-volume when used with a multi-attach volume to indicate that it
can't be done, without a new microversion, would be the cleanest way of
getting out of this pickle.

On Wed, Jun 6, 2018 at 2:55 PM, Jay Pipes  wrote:
> On 06/06/2018 07:46 AM, Matthew Booth wrote:
>>
>> TL;DR I think we need to entirely disable swap volume for multiattach
>> volumes, and this will be an api breaking change with no immediate
>> workaround.
>>
>> I was looking through tempest and came across
>>
>> api.compute.admin.test_volume_swap.TestMultiAttachVolumeSwap.test_volume_swap_with_multiattach.
>> This test does:
>>
>> Create 2 multiattach volumes
>> Create 2 servers
>> Attach volume 1 to both servers
>> ** Swap volume 1 for volume 2  on server 1 **
>> Check all is attached as expected
>>
>> The problem with this is that swap volume is a copy operation.
>
>
> Is it, though? The original blueprint and implementation seem to suggest
> that the swap_volume operation was nothing more than changing the mountpoint
> for a volume to point to a different location (in a safe
> manner that didn't lose any reads or writes).
>
> https://blueprints.launchpad.net/nova/+spec/volume-swap
>
> Nothing about the description of swap_volume() in the virt driver interface
> mentions swap_volume() being a "copy operation":
>
> https://github.com/openstack/nova/blob/76ec078d3781fb55c96d7aaca4fb73a74ce94d96/nova/virt/driver.py#L476
>
>> We don't just replace one volume with another, we copy the contents
>> from one to the other and then do the swap. We do this with a qemu
>> drive mirror operation, which is able to do this copy safely without
>> needing to make the source read-only because it can also track writes
>> to the source and ensure the target is updated again. Here's a link
>> to the libvirt logs showing a drive mirror operation during the swap
>> volume of an execution of the above test:
>
> After checking the source code, the libvirt virt driver is the only virt
> driver that implements swap_volume(), so it looks to me like a public HTTP
> API method was added that was specific to libvirt's implementation of drive
> mirroring. Yay, more implementation leaking out through the API.
>
>>
>> http://logs.openstack.org/58/567258/5/check/nova-multiattach/d23fad8/logs/libvirt/libvirtd.txt.gz#_2018-06-04_10_57_05_201
>>
>> The problem is that when the volume is attached to more than one VM,
>> the hypervisor doing the drive mirror *doesn't* know about writes on
>> the other attached VMs, so it can't do that copy safely, and the
>> result is data corruption.
>
>
> Would it be possible to swap the volume by doing what Vish originally
> described in the blueprint: pause the VM, swap the volume mountpoints
> (potentially after migrating the underlying volume), start the VM?
>
>>
>  Note that swap volume isn't visible to the
>>
>> guest os, so this can't be addressed by the user. This is a data
>> corrupter, and we shouldn't allow it. However, it is in released code
>> and users might be doing it already, so disabling it would be a
>> user-visible api change with no immediate workaround.
>
>
> I'd love to know who is actually using the swap_volume() functionality,
> actually. I'd especially like to know who is using swap_volume() with
> multiattach.
>
>> However, I think we're attempting to do the wrong thing here anyway,
>> and the above tempest test is explicit testing behaviour that we don't
>> want. The use case for swap volume is that a user needs to move volume
>> data for attached volumes, e.g. to new faster/supported/maintained
>> hardware.
>
>
> Is that the use case?
>
> As was typical, there's no mention of a use case on the original blueprint.
> It just says "This feature allows a user or administrator to transparently
> swap out a cinder volume that connected to an instance." Which is hardly a
> use case since it uses the feature name in a description of the feature
> itself. :(
>
> The commit message (there was only a single commit for this functionality
> [1]) mentions overwriting data on the new volume:
>
>   Adds support for transparently swapping an attached volume with
>   another volume. Note that this overwrites all data on the new volume
>   with data from the old volume.
>
> Yes, that is the commit message in its entirety. Of course, the commit had
> no documentation at all in it, so there's no ability to understand what the
> original use case really was here.
>
> https://review.openstack.org/#/c/28995/
>
> If the use case was really "that a user needs to move volume data for
> attached volumes", why not just pause the VM, detach the volume, do a
> openstack volume migrate to the new destination, reattach the volume and
> start the VM? That would mean no 

Re: [openstack-dev] [nova][cinder] Update (swap) of multiattach volume should not be allowed

2018-06-06 Thread Jay Pipes

On 06/06/2018 07:46 AM, Matthew Booth wrote:

TL;DR I think we need to entirely disable swap volume for multiattach
volumes, and this will be an api breaking change with no immediate
workaround.

I was looking through tempest and came across
api.compute.admin.test_volume_swap.TestMultiAttachVolumeSwap.test_volume_swap_with_multiattach.
This test does:

Create 2 multiattach volumes
Create 2 servers
Attach volume 1 to both servers
** Swap volume 1 for volume 2  on server 1 **
Check all is attached as expected

The problem with this is that swap volume is a copy operation.


Is it, though? The original blueprint and implementation seem to suggest 
that the swap_volume operation was nothing more than changing the 
mountpoint for a volume to point to a different location (in a safe
manner that didn't lose any reads or writes).

https://blueprints.launchpad.net/nova/+spec/volume-swap

Nothing about the description of swap_volume() in the virt driver 
interface mentions swap_volume() being a "copy operation":


https://github.com/openstack/nova/blob/76ec078d3781fb55c96d7aaca4fb73a74ce94d96/nova/virt/driver.py#L476


We don't just replace one volume with another, we copy the contents
from one to the other and then do the swap. We do this with a qemu
drive mirror operation, which is able to do this copy safely without
needing to make the source read-only because it can also track writes
to the source and ensure the target is updated again. Here's a link
to the libvirt logs showing a drive mirror operation during the swap
volume of an execution of the above test:
After checking the source code, the libvirt virt driver is the only virt 
driver that implements swap_volume(), so it looks to me like a public 
HTTP API method was added that was specific to libvirt's implementation 
of drive mirroring. Yay, more implementation leaking out through the API.



http://logs.openstack.org/58/567258/5/check/nova-multiattach/d23fad8/logs/libvirt/libvirtd.txt.gz#_2018-06-04_10_57_05_201

The problem is that when the volume is attached to more than one VM,
the hypervisor doing the drive mirror *doesn't* know about writes on
the other attached VMs, so it can't do that copy safely, and the
result is data corruption.


Would it be possible to swap the volume by doing what Vish originally 
described in the blueprint: pause the VM, swap the volume mountpoints 
(potentially after migrating the underlying volume), start the VM?


>
 Note that swap volume isn't visible to the

guest os, so this can't be addressed by the user. This is a data
corrupter, and we shouldn't allow it. However, it is in released code
and users might be doing it already, so disabling it would be a
user-visible api change with no immediate workaround.


I'd love to know who is actually using the swap_volume() functionality, 
actually. I'd especially like to know who is using swap_volume() with 
multiattach.



However, I think we're attempting to do the wrong thing here anyway,
and the above tempest test is explicit testing behaviour that we don't
want. The use case for swap volume is that a user needs to move volume
data for attached volumes, e.g. to new faster/supported/maintained
hardware.


Is that the use case?

As was typical, there's no mention of a use case on the original 
blueprint. It just says "This feature allows a user or administrator to 
transparently swap out a cinder volume that connected to an instance." 
Which is hardly a use case since it uses the feature name in a 
description of the feature itself. :(


The commit message (there was only a single commit for this 
functionality [1]) mentions overwriting data on the new volume:


  Adds support for transparently swapping an attached volume with
  another volume. Note that this overwrites all data on the new volume
  with data from the old volume.

Yes, that is the commit message in its entirety. Of course, the commit 
had no documentation at all in it, so there's no ability to understand 
what the original use case really was here.


https://review.openstack.org/#/c/28995/

If the use case was really "that a user needs to move volume data for 
attached volumes", why not just pause the VM, detach the volume, do a 
openstack volume migrate to the new destination, reattach the volume and 
start the VM? That would mean no libvirt/QEMU-specific implementation 
behaviour leaking out of the public HTTP API and allow the volume 
service (Cinder) to do its job properly.



With single attach that's exactly what they get: the end
user should never notice. With multi-attach they don't get that. We're
basically forking the shared volume at a point in time, with the
instance which did the swap writing to the new location while all
others continue writing to the old location. Except that even the fork
is broken, because they'll get a corrupt, inconsistent copy rather
than point in time. I can't think of a use case for this behaviour,
and it certainly doesn't meet the original design intent.

What they 

Re: [openstack-dev] [kuryr][kuryr-kubernetes] Propose to support Kubernetes Network Custom Resource Definition De-facto Standard Version 1

2018-06-06 Thread Irena Berezovsky
Sounds like a great initiative.

Let's follow up on the proposal by the kuryr-kubernetes blueprint.

BR,
Irena

On Wed, Jun 6, 2018 at 6:47 AM, Peng Liu  wrote:

> Hi Kuryr-kubernetes team,
>
> I'm thinking to propose a new BP to support  Kubernetes Network Custom
> Resource Definition De-facto Standard Version 1 [1], which was drafted by
> network plumbing working group of kubernetes-sig-network. I'll call it NPWG
> spec below.
>
> The purpose of NPWG spec is trying to standardize the multi-network effort
> around K8S by defining a CRD object 'network' which can be consumed by
> various CNI plugins. I know there has already been a BP VIF-Handler And Vif
> Drivers Design, which has designed a set of mechanism to implement the
> multi-network functionality. However I think it is still worthwhile to
> support this widely accepted NPWG spec.
>
> My proposal is to implement a new vif_driver, which can interpret the PoD
> annotation and CRD defined by NPWG spec, and attach pod to additional
> neutron subnet and port accordingly. This new driver should be mutually
> exclusive with the sriov and additional_subnets drivers.So the endusers can
> choose either way of using mult-network with kuryr-kubernetes.
>
> Please let me know your thought, any comments are welcome.
>
>
>
> [1] https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR
> 7UdTPAG_RNydhVE1Kx54kFQ/edit#heading=h.hylsbqoj5fxd
>
>
> Regards,
>
> --
> Peng Liu
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-06 Thread amrith.kumar
> -Original Message-
> From: Doug Hellmann 
> Sent: Monday, June 4, 2018 5:52 PM
> To: openstack-dev 
> Subject: Re: [openstack-dev] [tc] Organizational diversity tag
> 
> Excerpts from Zane Bitter's message of 2018-06-04 17:41:10 -0400:
> > On 02/06/18 13:23, Doug Hellmann wrote:
> > > Excerpts from Zane Bitter's message of 2018-06-01 15:19:46 -0400:
> > >> On 01/06/18 12:18, Doug Hellmann wrote:
> > >
> > > [snip]
> > Apparently enough people see it the way you described that this is
> > probably not something we want to actively spread to other projects at
> > the moment.
> 
> I am still curious to know which teams have the policy. If it is more
> widespread than I realized, maybe it's reasonable to extend it and use it as
> the basis for a health check after all.
> 

A while back, Trove had this policy. When Rackspace, HP, and Tesora had core 
reviewers (at various times, eBay, IBM, and Red Hat also had cores), the 
agreement was that multiple cores from any one company would not merge a change 
unless it was an emergency. It was not formally written down (to my knowledge).

It worked well, and ensured that the operators didn't get surprised by some 
unexpected thing that took down their service.

-amrith


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][cinder] Update (swap) of multiattach volume should not be allowed

2018-06-06 Thread Matthew Booth
TL;DR I think we need to entirely disable swap volume for multiattach
volumes, and this will be an API-breaking change with no immediate
workaround.

I was looking through tempest and came across
api.compute.admin.test_volume_swap.TestMultiAttachVolumeSwap.test_volume_swap_with_multiattach.
This test does:

Create 2 multiattach volumes
Create 2 servers
Attach volume 1 to both servers
** Swap volume 1 for volume 2  on server 1 **
Check all is attached as expected

The problem with this is that swap volume is a copy operation. We
don't just replace one volume with another; we copy the contents from
one to the other and then do the swap. We do this with a qemu drive
mirror operation, which can do this copy safely without needing to
make the source read-only, because it also tracks writes to the
source and ensures the target is updated again. Here's a link to the
libvirt logs showing a drive mirror operation during the swap volume
step of an execution of the above test:

http://logs.openstack.org/58/567258/5/check/nova-multiattach/d23fad8/logs/libvirt/libvirtd.txt.gz#_2018-06-04_10_57_05_201

The problem is that when the volume is attached to more than one VM,
the hypervisor doing the drive mirror *doesn't* know about writes on
the other attached VMs, so it can't do that copy safely, and the
result is data corruption. Note that swap volume isn't visible to the
guest OS, so this can't be addressed by the user. This is a data
corrupter, and we shouldn't allow it. However, it is in released code
and users might be doing it already, so disabling it would be a
user-visible API change with no immediate workaround.

However, I think we're attempting to do the wrong thing here anyway,
and the above tempest test is explicitly testing behaviour that we don't
want. The use case for swap volume is that a user needs to move volume
data for attached volumes, e.g. to new faster/supported/maintained
hardware. With single attach that's exactly what they get: the end
user should never notice. With multi-attach they don't get that. We're
basically forking the shared volume at a point in time, with the
instance which did the swap writing to the new location while all
others continue writing to the old location. Except that even the fork
is broken, because they'll get a corrupt, inconsistent copy rather
than a point-in-time one. I can't think of a use case for this behaviour,
and it certainly doesn't meet the original design intent.

What they really want is for the multi-attached volume to be copied
from location a to location b and for all attachments to be updated.
Unfortunately I don't think we're going to be in a position to do that
any time soon, but I also think users will be unhappy if they're no
longer able to move data at all because it's multi-attach. We can
compromise, though, if we allow a multiattach volume to be moved as
long as it only has a single attachment. This means the operator can't
move this data without disruption to users, but at least it's not
fundamentally immovable.

This would require some cooperation with cinder to achieve, as we need
to be able to temporarily prevent cinder from allowing new
attachments. A natural way to achieve this would be to allow a
multi-attach volume with only a single attachment to be redesignated
not multiattach, but there might be others. The flow would then be:

Detach volume from server 2
Set multiattach=False on volume
Migrate volume on server 1
Set multiattach=True on volume
Attach volume to server 2

Combined with a patch to nova to disallow swap_volume on any
multiattach volume, this would then be possible if inconvenient.
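
To make the nova side of that concrete, here is a minimal sketch of the
kind of guard being proposed (illustrative only, not actual nova code; it
assumes the volume is exposed as a dict-like object with a boolean
'multiattach' field and an 'id'):

class MultiattachSwapNotSupported(Exception):
    """Raised when swap_volume is requested for a multiattach volume."""


def check_swap_allowed(old_volume):
    """Refuse to start a drive-mirror copy of a shared volume.

    qemu's drive mirror only sees writes from the local domain, so
    copying a volume that other instances may still be writing to can
    silently corrupt the destination.
    """
    if old_volume.get('multiattach', False):
        raise MultiattachSwapNotSupported(
            "volume %s is multiattach; swap_volume would risk data "
            "corruption" % old_volume.get('id'))


# Such a check would run before initiating the drive mirror, e.g.:
check_swap_allowed({'id': 'vol-1', 'multiattach': False})   # allowed
# check_swap_allowed({'id': 'vol-2', 'multiattach': True})  # raises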

Regardless of any other changes, though, I think it's urgent that we
disable the ability to swap_volume a multiattach volume because we
don't want users to start using this relatively new, but broken,
feature.

Matt
-- 
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG

Phone: +442070094448 (UK)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr][kuryr-kubernetes] Propose to support Kubernetes Network Custom Resource Definition De-facto Standard Version 1

2018-06-06 Thread Luis Tomas Bolivar
Hi Peng,

Thanks for the proposal! See below

On 06/06/2018 05:47 AM, Peng Liu wrote:
> Hi Kuryr-kubernetes team,
> 
> I'm thinking to propose a new BP to support  Kubernetes Network Custom
> Resource Definition De-facto Standard Version 1 [1], which was drafted
> by network plumbing working group of kubernetes-sig-network. I'll call
> it NPWG spec below.
> 
> The purpose of NPWG spec is trying to standardize the multi-network
> effort around K8S by defining a CRD object 'network' which can be
> consumed by various CNI plugins. I know there has already been a BP
> VIF-Handler And Vif Drivers Design, which has designed a set of
> mechanism to implement the multi-network functionality. However I think
> it is still worthwhile to support this widely accepted NPWG spec. 

Yes, I agree
> 
> My proposal is to implement a new vif_driver, which can interpret the
> PoD annotation and CRD defined by NPWG spec, and attach pod to
> additional neutron subnet and port accordingly. This new driver should
> be mutually exclusive with the sriov and additional_subnets drivers.So
> the endusers can choose either way of using mult-network with
> kuryr-kubernetes.

Perhaps we can move the current kuryr annotations on pods to also use CRDs,
defining a standard way (for instance, a dict of 'nic-name':
kuryr-port-crd, with the kuryr-port-crd holding the vif information).
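
As a rough illustration of what the new vif_driver would need to interpret,
reading the NPWG pod annotation could look something like the sketch below
(not Kuryr code; the annotation key and field names follow the spec draft
and should be checked against the final text):

import json

# Assumed annotation key per the NPWG spec draft.
NPWG_NETWORKS_ANNOTATION = 'k8s.v1.cni.cncf.io/networks'


def parse_requested_networks(pod_annotations):
    """Return the additional networks a pod requests.

    The spec allows either a comma-separated list of network names or a
    JSON list of objects.
    """
    raw = pod_annotations.get(NPWG_NETWORKS_ANNOTATION, '')
    if not raw:
        return []
    try:
        requested = json.loads(raw)
    except ValueError:
        requested = [{'name': name.strip()} for name in raw.split(',')]
    result = []
    for net in requested:
        if not isinstance(net, dict):
            net = {'name': net}
        # Per the spec, a missing namespace means the pod's own namespace;
        # None is used here as a placeholder for that.
        result.append({'name': net.get('name'),
                       'namespace': net.get('namespace')})
    return result

# parse_requested_networks({NPWG_NETWORKS_ANNOTATION: 'net-a, net-b'})
# -> [{'name': 'net-a', 'namespace': None},
#     {'name': 'net-b', 'namespace': None}]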

Cheers,
Luis

> 
> Please let me know your thought, any comments are welcome.
> 
> 
> 
> [1] 
> https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit#heading=h.hylsbqoj5fxd
> 
> 
> 
> Regards,
> 
> -- 
> Peng Liu
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
LUIS TOMÁS BOLÍVAR
SENIOR SOFTWARE ENGINEER
Red Hat
Madrid, Spain
ltoma...@redhat.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] Stepping down from core

2018-06-06 Thread Anna Taraday
Ihar,

Neutron would not be what it is without all your work! Thank you, and I wish
you all the best!

On Wed, Jun 6, 2018 at 11:22 AM Andreas Scheuring <
scheu...@linux.vnet.ibm.com> wrote:

> Hi Ihar, it was always a pleasure learning from and working with you. Wish
> you all the best for your new project!
>
> ---
> Andreas Scheuring (andreas_s)
>
>
>
> On 4. Jun 2018, at 22:31, Ihar Hrachyshka  wrote:
>
> Hi neutrinos and all,
>
> As some of you've already noticed, the last several months I was
> scaling down my involvement in Neutron and, more generally, OpenStack.
> I am at a point where I feel confident my disappearance won't disturb
> the project, and so I am ready to make it official.
>
> I am stepping down from all administrative roles I so far accumulated
> in Neutron and Stable teams. I shifted my focus to another project,
> and so I just removed myself from all relevant admin groups to reflect
> the change.
>
> It was a nice 4.5 year ride for me. I am very happy with what we
> achieved in all these years and a bit sad to leave. The community is
> the most brilliant and compassionate and dedicated to openness group
> of people I was lucky to work with, and I am reminded daily how
> awesome it is.
>
> I am far from leaving the industry, or networking, or the promise of
> open source infrastructure, so I am sure we will cross our paths once
> in a while with most of you. :) I also plan to hang out in our IRC
> channels and make snarky comments, be aware!
>
> Thanks for the fish,
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Regards,
Ann Taraday
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack-operators][heat] Heat Summit summary and project status

2018-06-06 Thread Rico Lin
Hi all,
The Summit has been over for a few weeks now.
I'd like to share with the team what we got out of it.

*Heat Onboarding Session*
We didn't get many people showing up at the Onboarding session this time, but
we did get many more views on our video.
Slide:
https://www.slideshare.net/GuanYuLin1/openinfra-summit-2018-vancouver-heat-onboarding
Video: https://www.youtube.com/watch?v=8rMkxdx5YKE
(You can find videos from previous Summits in Slide)

*Project Update Session*
Slide:
https://www.slideshare.net/GuanYuLin1/openinfra-summit-2018-vancouver-heat-project-update
Video: https://www.youtube.com/watch?v=h4UXBRo948k
(You can find videos from previous Summits in Slide)

*User feedback Session*
Etherpad:
https://etherpad.openstack.org/p/2018-Vancouver-Summit-heat-ops-and-users-feedback
(You can find Etherpad from the last Summit in Etherpad)

Apparently we have a lot of users, spanning a lot of different domains
(at least that's what I felt during the Summit). And according to the feedback,
I think our plans mostly match the requirements from users. (If not, it's
still not too late to give us feedback:
https://etherpad.openstack.org/p/2018-Vancouver-Summit-heat-ops-and-users-feedback
)


*Project Status*
Also, we're about to release Rocky-2, so I'd like to share the current
project status:
We got fewer bugs reported than in the last cycle. For features, we seem to
have fewer implemented or in progress. We do have a few WIP or planned features:
Blazar resource support (review in progress)
Etcd support (work in progress)
Multi-Cloud support (work in progress)
Swift store for Heat templates (Heat templates can be read from Swift)
We do need more reviewers and people willing to help with features.

For the Rocky release (about to release Rocky-2)
we got around 700 reviews
Commits: 216
Filed Bugs: 56
Resolved Bugs: 34
(For reference. Here's Queens cycle number:
around 1700 reviews, Commits: 417, Filed Bugs: 166, Resolved Bugs: 122 )


-- 
May The Force of OpenStack Be With You,

*Rico Lin*irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][networking-bgpvpn][infra] missing networking-odl repository

2018-06-06 Thread Elõd Illés

Hi,

I'm trying to create a fix for the failing networking-bgpvpn stable 
periodic sphinx-docs job [1], but meanwhile it turned out that other 
"check" (and possibly "gate") jobs are failing on stable, too, on 
networking-bgpvpn, because of a missing dependency: the networking-odl 
repository (for pep8, py27, py35, cover and even sphinx). I 
submitted a patch a couple of days ago for the stable periodic py27 job 
[2] and it solved the issue there. But now it seems that every other 
networking-bgpvpn job needs this fix when it is run against stable 
branches (something like in this patch [3]).


Question: Is there a better way to fix these issues?


The common error message of the failing jobs:

**
ERROR! /home/zuul/src/git.openstack.org/openstack/networking-odl not found
In Zuul v3 all repositories used need to be declared
in the 'required-projects' parameter on the job.
To fix this issue, add:

  openstack/networking-odl

to 'required-projects'.

While you're at it, it's worth noting that zuul-cloner itself
is deprecated and this shim is only present for transition
purposes. Start thinking about how to rework job content to
just use the git repos that zuul will place into
/home/zuul/src/git.openstack.org directly.
**


[1] https://review.openstack.org/#/c/572368/
[2] https://review.openstack.org/#/c/569111/
[3] https://review.openstack.org/#/c/572495/


Thanks,

Előd


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [watcher] weekly meeting

2018-06-06 Thread Чадин Александр Сергеевич
Hi Watcher team,

We have a meeting today at 8:00 UTC on the #openstack-meeting-alt channel.

Best Regards,

Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cyborg] [nova] Cyborg quotas

2018-06-06 Thread Zhipeng Huang
Hi Blair,

Sorry for the late reply; could you elaborate more on the proxy driver
idea?

On Mon, May 21, 2018 at 4:05 PM, Blair Bethwaite 
wrote:

> (Please excuse the top-posting)
>
> The other possibility is that the Cyborg managed devices are plumbed in
> via IP in guest network space. Then "attach" isn't so much a Nova problem
> as a Neutron one - probably similar to Manila.
>
> Has the Cyborg team considered a RESTful-API proxy driver, i.e., something
> that wraps a vendor-specific accelerator service and makes it friendly to a
> multi-tenant OpenStack cloud? Quantum co-processors might be a compelling
> example which fit this model.
>
> Cheers,
>
>
> On Sun., 20 May 2018, 23:28 Chris Friesen, 
> wrote:
>
>> On 05/19/2018 05:58 PM, Blair Bethwaite wrote:
>> > G'day Jay,
>> >
>> > On 20 May 2018 at 08:37, Jay Pipes  wrote:
>> >> If it's not the VM or baremetal machine that is using the accelerator,
>> what
>> >> is?
>> >
>> > It will be a VM or BM, but I don't think accelerators should be tied
>> > to the life of a single instance if that isn't technically necessary
>> > (i.e., they are hot-pluggable devices). I can see plenty of scope for
>> > use-cases where Cyborg is managing devices that are accessible to
>> > compute infrastructure via network/fabric (e.g. rCUDA or dedicated
>> > PCIe fabric). And even in the simple pci passthrough case (vfio or
>> > mdev) it isn't hard to imagine use-cases for workloads that only need
>> > an accelerator sometimes.
>>
>> Currently nova only supports attach/detach of volumes and network
>> interfaces.
>> Is Cyborg looking to implement new Compute API operations to support hot
>> attach/detach of various types of accelerators?
>>
>> Chris
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cyborg] [Nova] Cyborg traits

2018-06-06 Thread Alex Xu
After reading the spec
https://review.openstack.org/#/c/554717/14/doc/specs/rocky/cyborg-nova-sched.rst,
I am confused about the meaning of CUSTOM_ACCELERATOR_FPGA. Initially, I
thought it meant a region. But after reading the spec, it can be a device, a
region or a function. Is that design intentional?

Sounds like we also need agreement on the naming. We already have the
resource class `VGPU`, so we only need to add another resource class
'FPGA' (but, same as the question above, I thought it should be
FPGA_REGION?), is that right? I didn't see any requirement for the
'ACCELERATOR' prefix.

2018-05-31 4:18 GMT+08:00 Eric Fried :

> This all sounds fully reasonable to me.  One thing, though...
>
> >>   * There is a resource class per device category e.g.
> >> CUSTOM_ACCELERATOR_GPU, CUSTOM_ACCELERATOR_FPGA.
>
> Let's propose standard resource classes for these ASAP.
>
> https://github.com/openstack/nova/blob/d741f624c81baf89fc8b6b94a2bc20
> eb5355a818/nova/rc_fields.py
>
> -efried
> .
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] openstack-tox-validate: python setup.py check --restructuredtext --strict

2018-06-06 Thread Akihiro Motoki
Hi the release team,

While preparing the neutron Rocky-2 deliverables, I noticed that a new
metadata syntax check, which validates README.rst, was introduced.

As of now, README.rst in networking-bagpipe and networking-ovn hit this [1].

Although they can be fixed in individual projects, what is the current
recommended solution?

In addition, such checks are unfortunately not run in the project gates,
so there is no way to detect this in advance.
I think we need a way to check this when a change is made,
instead of detecting an error only when a release patch is proposed.
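
In the meantime, a quick way to catch this locally before a release patch is
proposed could look like the sketch below (a rough approximation of what the
validate job does, using docutils directly rather than the exact setuptools
code path; treat the script and its defaults as assumptions):

import sys

from docutils.core import publish_doctree
from docutils.utils import SystemMessage


def check_readme(path='README.rst'):
    """Return non-zero if the file has rST errors or warnings."""
    with open(path) as f:
        source = f.read()
    try:
        # halt_level=2 makes warnings fatal, similar in spirit to --strict.
        publish_doctree(source, source_path=path,
                        settings_overrides={'halt_level': 2,
                                            'report_level': 2})
    except SystemMessage as err:
        print('README check failed: %s' % err)
        return 1
    return 0


if __name__ == '__main__':
    sys.exit(check_readme(*sys.argv[1:]))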

Thanks,
Akihiro (amotoki)

[1]
http://logs.openstack.org/66/572666/1/check/openstack-tox-validate/b5dde2f/job-output.txt.gz#_2018-06-06_04_09_16_067790
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] Stepping down from core

2018-06-06 Thread Andreas Scheuring
Hi Ihar, it was always a pleasure learning from and working with you. Wish you 
all the best for your new project! 

---
Andreas Scheuring (andreas_s)



On 4. Jun 2018, at 22:31, Ihar Hrachyshka  wrote:

Hi neutrinos and all,

As some of you've already noticed, the last several months I was
scaling down my involvement in Neutron and, more generally, OpenStack.
I am at a point where I feel confident my disappearance won't disturb
the project, and so I am ready to make it official.

I am stepping down from all administrative roles I so far accumulated
in Neutron and Stable teams. I shifted my focus to another project,
and so I just removed myself from all relevant admin groups to reflect
the change.

It was a nice 4.5 year ride for me. I am very happy with what we
achieved in all these years and a bit sad to leave. The community is
the most brilliant and compassionate and dedicated to openness group
of people I was lucky to work with, and I am reminded daily how
awesome it is.

I am far from leaving the industry, or networking, or the promise of
open source infrastructure, so I am sure we will cross our paths once
in a while with most of you. :) I also plan to hang out in our IRC
channels and make snarky comments, be aware!

Thanks for the fish,
Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cyborg]Weekly Team Meeting 2018.06.06

2018-06-06 Thread Zhipeng Huang
Hi Team,

Let's resume the team meeting. At today's meeting we need to make decisions
on all Rocky critical specs in order to meet the MS2 deadline.

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev