Re: [openstack-dev] [rally] Moving OpenStack plugins into separate repo

2018-04-11 Thread Goutham Pratapa
Hi Andrey,

Great to hear this!

Cheers, and I wish you all luck.

Cheers,
Goutham.

On Wed, 11 Apr 2018 at 11:00 PM, Boris Pavlovic  wrote:

> Andrey,
>
> Great news!
>
> Best regards,
> Boris Pavlovic
>
> On Wed, Apr 11, 2018 at 9:14 AM, Andrey Kurilin 
> wrote:
>
>> Hi Stackers!
>>
>> Today I am happy to announce great news!
>>
>> Historically, Rally has been a testing (benchmarking) tool for
>> OpenStack, but that has changed. More and more users want to use Rally for
>> different platforms and environments, and our pluggable system allows
>> this.
>> To make the framework lightweight and to simplify our release model, we
>> decided to move the OpenStack plugins to a separate repository [1].
>>
>> [1] https://git.openstack.org/cgit/openstack/rally-openstack
>>
>> We cut the first release 1.0.0 two weeks ago, and it is published to
>> PyPI[2].
>>
>> [2] https://pypi.python.org/pypi/rally-openstack
>>
>> If you are a Rally consumer and do not have custom plugins, the migration
>> should be simple. Just install the rally-openstack package instead of rally
>> and everything will work as before. rally-openstack depends on rally, so
>> you need to install nothing more than that one package.
>>
>> If you have custom plugins, do not worry: the migration should be simple
>> for you too. The first release has the same structure as the plugins had in
>> the rally repository. The only thing that needs to change is importing
>> rally_openstack instead of rally.plugins.openstack.
>>
>> --
>> Best regards,
>> Andrey Kurilin.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Cheers !!!
Goutham Pratapa
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [election][tc]TC candidacy

2018-04-11 Thread Zhipeng Huang
Hi all,

I'm announcing my candidacy for the OpenStack Technical Committee.

I have been following the OpenStack community since the Portland Summit in
2013, and have been an integral part of it ever since. I'm currently serving
as the PTL for the Cyborg project [0], which provides a general management
framework for accelerators. I'm also serving as the co-chair of the Public
Cloud WG [1], am an active member of the First Contact SIG [2], and was a
contributor to the Interop WG throughout 2017 [3]. Outside of OpenStack, I'm
one of the founding co-leads of the Kubernetes Policy WG [4] and the
ecosystem lead for the OpenSDS community [5], and I served as the PTL of the
OPNFV Parser project from 2014 to 2016 [6]. I've also been involved with the
Open Service Broker API and SPDK communities, where my team members are
working.

I would like to think my strengths are in areas like cross-community
collaboration, community team building, and non-stop innovation. I believe
these are also the areas that my future work on the Technical Committee
should continue to bring forward.

** Cross Community Collaboration **

Those of you who are familiar with my work will know that I've always taken
a full-stack approach to open source community work and strongly believe in
the value of collaboration. From the very start of building the *Cyborg
project*, we collaborated with the OPNFV community and had a concrete plan
for working with communities like Kubernetes, Linaro, ONNX and so forth. In
my work on *OpenSDS*, I've repeatedly emphasized the importance of being
able to work with OpenStack and Kubernetes, rather than dropping something
in and claiming it would be better to replace an existing module that has
been built through a lot of community work. During our discussions on
multi-tenancy in the *Kubernetes Policy WG*, I've also introduced the great
work the Keystone team has done and tried to build synergy there.

Hence, if elected to the Technical Committee, I would like to push further
on community collaboration in, but not limited to, the following areas:
*- Data model alignment for accelerators between OpenStack and Kubernetes
via the Cyborg project and the Resource Management SIG.*
*- Alignment of the policy architecture between OpenStack and Kubernetes via
the Kubernetes Policy WG as well as the Keystone team.*


** Community Team Building **

With the hype bubble currently bursting, I've seen much commentary on how
OpenStack "is getting outdated" and is not "technically cool" any more.
Setting aside the absurdity of the technical claims, I think one of the core
things people can learn from the OpenStack community is the governance, the
way we work here.

Take *Cyborg* for example: from day one I've strictly followed the four
opens principle and tried to build a good team structure by learning from
great teams like Nova, Cinder, Neutron, etc. The Cyborg project started from
basically zero, and I intentionally avoided the kind of code dumping we've
seen in many open source projects. We designed the specs through open
discussion, wrote the code with public reviews, and continue on. When few
people believed even this could work, we made it happen. The reward has been
awesome; for example, on the nova-cyborg collaboration, by not mandating a
particular design philosophy we have great Nova developers joining our
project meeting from time to time, providing valuable comments on how we can
better design the interaction, and helping to review the specs. For a new
project, I dare say we've got the best and most logical architecture design
with regard to the Nova interaction.

With that said, community team building will be another important theme of
my future work on the TC:
*- Leveraging the First Contact SIG to incubate or help more projects that
know how to build their teams in a community way instead of a corporate
way.*
*- Continuing to build the Cyborg team structure, enabling reasonable
sub-team work and encouraging more developers to join and contribute.*
*- Enabling more collaboration between projects and WGs/SIGs. We have some
good experience with Cyborg working with the Scientific SIG, as well as the
Public Cloud WG working with the Nova/Keystone teams, and I think we could
make further progress on it.*

** Non Stop Innovation **

OpenStack offers the ultimate open source cloud computing infrastructure,
and there are just so many exciting new things we could do with it. I've
been experimenting with ideas on *how Cyborg could better support AI
applications, and also the possibility of utilizing blockchain for the
Passport Program [7]*. I plan to keep bringing new things like these forward
when given the opportunity to serve on the technical committee, to keep
OpenStack's edge cutting as sharp as ever :)

Thank you for taking the time to read such a long letter, and please vote
for me and any other candidate that you see value in. A great community
could not exist without your important voice.


Re: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependencies in requirements.txt

2018-04-11 Thread Matt Riedemann

On 4/11/2018 5:09 PM, Michael Still wrote:


https://review.openstack.org/#/c/523387 proposes adding a z/VM specific 
dependency to nova's requirements.txt. When I objected, the counter 
argument was that we have examples of windows specific dependencies 
(os-win) and powervm specific dependencies in that file already.


I think perhaps all three are a mistake and should be removed.

My recollection is that for drivers like ironic, which may not be 
deployed by everyone, we have the dependency documented and then loaded 
at runtime by the driver itself instead of adding it to 
requirements.txt. This is to stop pip from auto-installing the dependency 
for everyone who wants to run nova. I had assumed this was at the request 
of the deployer community.


So what do we do with z/VM? Do we clean this up? Or do we now allow 
dependencies that are only useful to a very small number of deployments 
into requirements.txt?


As Eric pointed out in the review, this came up when pypowervm was added:

https://review.openstack.org/#/c/438119/5/requirements.txt

And you're asking the same questions I did in there, which were: should 
it go into test-requirements.txt like oslo.vmware and 
python-ironicclient, should it go under [extras], or should it go into 
requirements.txt like os-win (we also have the xenapi library now too)?


I don't really think all of these optional packages should be in 
requirements.txt, but we should just be consistent with whatever we do, 
be that test-requirements.txt or [extras]. I remember caring more about 
this back in my rpm packaging days when we actually tracked what was in 
requirements.txt as the basis for what needed to go into the rpm spec, 
unlike Fedora rpm specs which just zero out requirements.txt and depend on 
their own knowledge of what needs to be installed (which is sometimes 
lacking or lagging master).


I also seem to remember that [extras] was less than user-friendly for 
some reason, but maybe that was just because of how our CI jobs are 
set up? Or maybe I'm just making that up. I know it's pretty simple to 
install the stuff from extras for tox runs; it's just an extra set of 
dependencies to list in the tox.ini.
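
For illustration, the tox side of it is roughly just the following (a rough, 
untested sketch; the "zvm" extra name and package name are hypothetical, 
since nova doesn't define such an extra today):

    # setup.cfg -- declare the optional library under an extra (PBR syntax);
    # the package name below is just a placeholder
    [extras]
    zvm =
      some-zvm-client-library

    # tox.ini -- have tox install that extra into the relevant testenv
    [testenv:py27]
    extras = zvm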


Having said all this, I don't have the energy to help push for 
consistency myself, but will happily watch you from the sidelines.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] z/VM introducing a new config drive format

2018-04-11 Thread Michael Still
The more I think about it, the more I dislike how the proposed driver also
"lies" about using iso9660. That's definitely wrong:
if CONF.config_drive_format in ['iso9660']:
    # cloud-init only supports iso9660 and vfat, but in the z/VM
    # implementation a disk can't be linked to the VM as iso9660 before
    # it boots, so create a tgz file and send it to the deployed VM;
    # during the startup process the tgz file is extracted and mounted
    # as iso9660 so that cloud-init can consume it
    self._make_tgz(path)
else:
    raise exception.ConfigDriveUnknownFormat(
        format=CONF.config_drive_format)

Michael

On Thu, Apr 12, 2018 at 9:28 AM, Dan Smith  wrote:

> > https://review.openstack.org/#/c/527658 is a z/VM patch which
> > introduces their support for config drive. They do this by attaching a
> > tarball to the instance, having pretended in the nova code that it is
> > an iso9660. This worries me.
> >
> > In the past we've been concerned about adding new filesystem formats
> > for config drives, and the long term support implications of that --
> > the filesystem formats for config drive that we use today were
> > carefully selected as being universally supported by our guest
> > operating systems.
> >
> > The previous example we've had of these issues is the parallels
> > driver, which had similar "my hypervisor doesn't support these
> > filesystem format" concerns. We worked around those concerns IIRC, and
> > certainly virt.configdrive still only supports iso9660 and vfat.
>
> Yeah, IIRC, the difference with the parallels driver was that it ends up
> mounted in the container automagically for the guest by the..uh..man
> behind the curtain. However, z/VM being much more VM-y I imagine that
> the guest is just expected to grab that blob and do something with it to
> extract it on local disk at runtime or something. That concerns me too.
>
> In the past I've likened adding filesystem (or format, in this case)
> options to configdrive as a guest ABI change. I think the stability of
> what we present to guests is second only to our external API in terms of
> importance. I know z/VM is "weird" or "different", but I wouldn't want a
> more conventional hypervisor exposing the configdrive as a tarball, so I
> don't really think it's a precedent we should set. Both vfat and iso9660
> are easily supportable by most everything on the planet so I don't think
> it's an unreasonable bar.
>
> --Dan
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] z/VM introducing a new config drive format

2018-04-11 Thread Dan Smith
> https://review.openstack.org/#/c/527658 is a z/VM patch which
> introduces their support for config drive. They do this by attaching a
> tarball to the instance, having pretended in the nova code that it is
> an iso9660. This worries me.
>
> In the past we've been concerned about adding new filesystem formats
> for config drives, and the long term support implications of that --
> the filesystem formats for config drive that we use today were
> carefully selected as being universally supported by our guest
> operating systems.
>
> The previous example we've had of these issues is the parallels
> driver, which had similar "my hypervisor doesn't support these
> filesystem format" concerns. We worked around those concerns IIRC, and
> certainly virt.configdrive still only supports iso9660 and vfat.

Yeah, IIRC, the difference with the parallels driver was that it ends up
mounted in the container automagically for the guest by the..uh..man
behind the curtain. However, z/VM being much more VM-y I imagine that
the guest is just expected to grab that blob and do something with it to
extract it on local disk at runtime or something. That concerns me too.

In the past I've likened adding filesystem (or format, in this case)
options to configdrive as a guest ABI change. I think the stability of
what we present to guests is second only to our external API in terms of
importance. I know z/VM is "weird" or "different", but I wouldn't want a
more conventional hypervisor exposing the configdrive as a tarball, so I
don't really think it's a precedent we should set. Both vfat and iso9660
are easily supportable by most everything on the planet so I don't think
it's an unreasonable bar.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] roadmap on containers workflow

2018-04-11 Thread Steve Baker



On 11/04/18 12:50, Emilien Macchi wrote:

Greetings,

Steve Baker and I had a quick chat today about the work that is being 
done around the containers workflow in the Rocky cycle.


If you're not familiar with the topic, I suggest first reading the 
blueprint to understand the context here:

https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow

One of the great outcomes of this blueprint is that in Rocky, the 
operator won't have to run all the "openstack overcloud container" 
commands to prepare the container registry and upload the containers. 
Indeed, it'll be driven by Heat and Mistral mostly.
But today our discussion extended to two use cases that we're going to 
explore and figure out how to address:
1) I'm a developer and want to deploy a containerized undercloud with 
customized containers (more or less related to the all-in-one 
discussions on another thread [1]).
2) I'm submitting a patch in tripleo-common (let's say a workflow) and 
need my patch to be tested when the undercloud is containerized (see 
[2] for an excellent example).


I'm fairly sure the only use cases for this will be developer or CI 
based. I think we need to be strongly encouraging image modifications 
for production deployments to go through some kind of image building 
pipeline. See Next Steps below for the implications of this.



Both cases would require additional things:
- The container registry needs to be deployed *before* actually 
installing the undercloud.
- We need a tool to update containers from this registry and *before* 
deploying them. We already have this tool in place in our CI for the 
overcloud (see [3] and [4]). Now we need a similar thing for the 
undercloud.


One problem I see is that we use roles and environment files to filter 
the images to be pulled/modified/uploaded. Now we would need to assemble 
a list of undercloud *and* overcloud environments, and build some kind 
of aggregate role data for both. This would need to happen before the 
undercloud is even deployed, which is quite a different order from what 
quickstart does currently.


Either that or we do no image filtering and just process every image 
regardless of whether it will be used.




Next steps:
- Agree that we need to deploy the container-registry before the 
undercloud.
- If agreed, we'll create a new Ansible role called 
ansible-role-container-registry that for now will deploy exactly what 
we have in TripleO, without extra features.

+1
- Drive the playbook runtime from tripleoclient to bootstrap the 
container registry (which of course could be disabled in undercloud.conf).
tripleoclient could switch to using this role instead of puppet-tripleo 
to install the registry; however, since the only use-cases we have are 
dev/CI driven, I wonder whether quickstart/infrared could just invoke the 
role when required, before tripleoclient is involved.


- Create another Ansible role that would re-use container-check tool 
but the idea is to provide a role to modify containers when needed, 
and we could also control it from tripleoclient. The role would be 
using the ContainerImagePrepare parameter, which Steve is working on 
right now.


Since the use cases are all upstream CI/dev, I do wonder if we should 
just have a dedicated container-check role inside 
tripleo-quickstart-extras which can continue to use the script [3] or 
whatever. Keeping the logic in quickstart will remove the temptation to 
use it instead of a proper image build pipeline for production deployments.


Alternatively it could still be a standalone role which quickstart 
invokes, just to accommodate development workflows which don't use 
quickstart.



Feedback is welcome, thanks.

[1] All-In-One thread: 
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html
[2] Bug report when undercloud is containerized 
https://bugs.launchpad.net/tripleo/+bug/1762422
[3] Tool to update containers if needed: 
https://github.com/imain/container-check
[4] Container-check running in TripleO CI: 
https://review.openstack.org/#/c/558885/ and 
https://review.openstack.org/#/c/529399/

--
Emilien Macchi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] z/VM introducing a new config drive format

2018-04-11 Thread Michael Still
Heya,

https://review.openstack.org/#/c/527658 is a z/VM patch which introduces
their support for config drive. They do this by attaching a tarball to the
instance, having pretended in the nova code that it is an iso9660. This
worries me.

In the past we've been concerned about adding new filesystem formats for
config drives, and the long term support implications of that -- the
filesystem formats for config drive that we use today were carefully
selected as being universally supported by our guest operating systems.

The previous example we've had of these issues is the parallels driver,
which had similar "my hypervisor doesn't support these filesystem format"
concerns. We worked around those concerns IIRC, and certainly
virt.configdrive still only supports iso9660 and vfat.

Discuss.

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] roadmap on containers workflow

2018-04-11 Thread Steve Baker



On 12/04/18 00:58, Wesley Hayutin wrote:



On Tue, 10 Apr 2018 at 20:51, Emilien Macchi wrote:


Greetings,

Steve Baker and I had a quick chat today about the work that is
being done around containers workflow in Rocky cycle.

If you're not familiar with the topic, I suggest first reading the
blueprint to understand the context here:
https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow

One of the great outcomes of this blueprint is that in Rocky, the
operator won't have to run all the "openstack overcloud container"
commands to prepare the container registry and upload the
containers. Indeed, it'll be driven by Heat and Mistral mostly.
But today our discussion extended to two use cases that we're going
to explore and figure out how to address:
1) I'm a developer and want to deploy a containerized undercloud
with customized containers (more or less related to the all-in-one
discussions on another thread [1]).
2) I'm submitting a patch in tripleo-common (let's say a workflow)
and need my patch to be tested when the undercloud is
containerized (see [2] for an excellent example).

Both cases would require additional things:
- The container registry needs to be deployed *before* actually
installing the undercloud.
- We need a tool to update containers from this registry and
*before* deploying them. We already have this tool in place in our
CI for the overcloud (see [3] and [4]). Now we need a similar
thing for the undercloud.

Next steps:
- Agree that we need to deploy the container-registry before the
undercloud.
- If agreed, we'll create a new Ansible role called
ansible-role-container-registry that for now will deploy exactly
what we have in TripleO, without extra features.
- Drive the playbook runtime from tripleoclient to bootstrap the
container registry (which of course could be disabled in
undercloud.conf).
- Create another Ansible role that would re-use container-check
tool but the idea is to provide a role to modify containers when
needed, and we could also control it from tripleoclient. The role
would be using the ContainerImagePrepare parameter, which Steve is
working on right now.


This all looks really good Emilien, thanks for sending it out.
Regarding the update of containers, we would just want to be 100% sure 
that we can control which yum repositories are in play for the 
update.  Maybe it will be done by the user prior to running the 
command, or maybe with some flags to whatever command Steve is 
working on.


Is it enough to retain the existing container-check behavior of just 
mounting in the undercloud's /etc/yum.repos.d?


FYI, we've noticed in CI that when the base OS updates (not baseos) are 
included, you tend to fail on at least one package download on one of 
the 50+ containers due to infra/network issues. In CI we only enable 
baseos, dlrn updates and the dependency change [1]


It would be interesting to see what speed/reliability change there would 
be if the concurrency of container-check was disabled and the 
undercloud's /var/cache/yum was mounted into each container to avoid 
duplicate package downloads.



Thanks

[1] 
https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-containers/templates/overcloud-prep-containers.sh.j2#L104-L109



Feedback is welcome, thanks.

[1] All-In-One thread:
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html
[2] Bug report when undercloud is containerized
https://bugs.launchpad.net/tripleo/+bug/1762422
[3] Tool to update containers if needed:
https://github.com/imain/container-check
[4] Container-check running in TripleO CI:
https://review.openstack.org/#/c/558885/ and
https://review.openstack.org/#/c/529399/
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependencies in requirements.txt

2018-04-11 Thread Clark Boylan
On Wed, Apr 11, 2018, at 3:09 PM, Michael Still wrote:
> Hi,
> 
> https://review.openstack.org/#/c/523387 proposes adding a z/VM specific
> dependency to nova's requirements.txt. When I objected, the counter argument
> was that we have examples of windows specific dependencies (os-win) and
> powervm specific dependencies in that file already.
> 
> I think perhaps all three are a mistake and should be removed.
> 
> My recollection is that for drivers like ironic, which may not be deployed
> by everyone, we have the dependency documented and then loaded at runtime
> by the driver itself instead of adding it to requirements.txt. This is to
> stop pip from auto-installing the dependency for everyone who wants to run
> nova. I had assumed this was at the request of the deployer community.
> 
> So what do we do with z/VM? Do we clean this up? Or do we now allow
> dependencies that are only useful to a very small number of deployments
> into requirements.txt?
> 
> Michael

I think there are two somewhat related issues here. The first is being able to 
have platform-specific dependencies so that nova can run on, say, python2 and 
python3, or Linux and Windows, using the same requirements list. To address this 
you should use environment markers [0] to specify when a specific environment 
needs additional or different packages to function, and those should probably 
all just go into requirements.txt.

The second issue is enabling optional functionality that a default install 
shouldn't reasonably have to worry about (and that is independent of the install 
platform). For this you can use setuptools extras [1]. For an example of how this 
is used along with setup.cfg and PBR you can look at swiftclient [2]. Then users 
that know they want the extra features will execute something like `pip install 
python-swiftclient[keystone]`.
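
To make that concrete, here is a rough sketch of both mechanisms (illustrative 
entries only, not nova's actual requirements or extras):

    # requirements.txt -- environment markers, evaluated at install time
    futures; python_version < '3.0'
    os-win; sys_platform == 'win32'

    # setup.cfg -- a PBR/setuptools extra for optional functionality,
    # installed only when requested, e.g. `pip install nova[zvm]`
    [extras]
    zvm =
      some-zvm-client-library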

[0] https://www.python.org/dev/peps/pep-0496/
[1] 
http://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies
[2] 
https://git.openstack.org/cgit/openstack/python-swiftclient/tree/setup.cfg#n35

Hope this helps,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Deployers] Optional, platform specific, dependencies in requirements.txt

2018-04-11 Thread Michael Still
Hi,

https://review.openstack.org/#/c/523387 proposes adding a z/VM specific
dependency to nova's requirements.txt. When I objected, the counter argument
was that we have examples of windows specific dependencies (os-win) and
powervm specific dependencies in that file already.

I think perhaps all three are a mistake and should be removed.

My recollection is that for drivers like ironic, which may not be deployed
by everyone, we have the dependency documented and then loaded at runtime
by the driver itself instead of adding it to requirements.txt. This is to
stop pip from auto-installing the dependency for everyone who wants to run
nova. I had assumed this was at the request of the deployer community.
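
(For the archives, the pattern I mean is roughly the following -- a sketch
only, with a made-up module name, not the actual ironic driver code:)

    from oslo_utils import importutils

    # 'zvmconnector' stands in for the optional library; it stays a
    # documented deploy-time dependency instead of a requirements.txt entry
    zvmconnector = importutils.try_import('zvmconnector')


    class ZVMDriver(object):
        def __init__(self):
            if zvmconnector is None:
                # only operators who actually enable this driver pay the cost
                raise RuntimeError('zvmconnector is not installed; see the '
                                   'driver documentation')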

So what do we do with z/VM? Do we clean this up? Or do we now allow
dependencies that are only useful to a very small number of deployments
into requirements.txt?

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Changes to direct tagging by projects under governance

2018-04-11 Thread Sean McGinnis
Hey all,

We've had a semi-official thing until now that when projects are accepted under
governance, they then do all of their tagging and releases via our official
release process by submitting patches to the openstack/releases repo.

From time to time we would come across projects that either were not aware of
this, or had someone new that would push up new tags. This could cause some
complications, or at least confusion. Normally when a project came under
governance, changes would be made to their gerrit ACLs, but that step was not
always remembered.

This is really kind of a clean up, but I wanted to make sure everyone was aware
of this just in case.

The TC has officially updated the new project documentation [1], and we are now
merging a patch to remove those leftover ACL rights from projects that should
no longer have rights to push tags [2].

[1] https://review.openstack.org/#/c/557737
[2] https://review.openstack.org/#/c/557730/

If there are any questions about this, please let me know, or grab someone in
the #openstack-releases channel.

Thanks!

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] Re-Reminder on the state of WSME

2018-04-11 Thread Chris Dent

On Wed, 11 Apr 2018, Michael Johnson wrote:


I am willing to help with maintenance (patch reviews/gate fixes), but
I cannot commit time to development work on it.


Michael and I also spoke in IRC and he too is now a WSME core.

Thanks to both of you for stepping up and being willing to help out.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] Re-Reminder on the state of WSME

2018-04-11 Thread Michael Johnson
I am willing to help with maintenance (patch reviews/gate fixes), but
I cannot commit time to development work on it.

Michael

On Wed, Apr 11, 2018 at 6:21 AM, Chris Dent  wrote:
> On Wed, 11 Apr 2018, Dougal Matthews wrote:
>
>> I would like to see us move away from WSME. I'm not sure I have time to
>> drive an effort in finding a replacement (and migration path) but I would
>> certainly like to help.
>
>
> Dougal and I talked about this in IRC and agreed that being able to
> merge changes in WSME would help the goal of establishing a
> migration path. So I've added him to WSME cores.
>
>
> --
> Chris Dent   ٩◔̯◔۶   https://anticdent.org/
> freenode: cdent tw: @anticdent
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon][plugins] Improve Horizon testing

2018-04-11 Thread Ivan Kolodyazhny
Hi all,

Let me introduce my proposal about Horizon testing improvements[1]. We
started this discussion at the last PTG [2] and had a good conversation at
the previous meeting [3].

The idea is simple: to have CI that verifies Horizon changes across
supported plugins. As a side-effect of this activity, we'll have a list of
maintained and supported plugins per release. For now, we have only
a static list in the Horizon Install Guide [4].
We don't have working Selenium-based tests now: the selenium-headless job
always reports success. Integration tests are totally broken and we don't
even run them in the gate. We need to fix the selenium-headless job and the
integration tests too.

It would be great to have a new gate job per plugin for every Horizon code
change, to be sure that we don't break anything. The same job, with
plugin-specific Selenium or integration tests, should be executed against
each Horizon plugin's change requests.

To make this happen, we need to fix Horizon's Selenium and integration
tests first. One of the first steps is to get rid of nose in Horizon and the
plugins. Initially, I tried to use the default Django test runner, but
XMLTestRunner [5] looks better to me because it generates a report in xunit
format. Ideally, it would be great to use pytest for this, but that requires
more effort right now. stestr requires some work to get it working with
Django too.
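
For reference, the settings change involved is small. A minimal sketch,
assuming the unittest-xml-reporting package (which provides XMLTestRunner):

    # test settings for Horizon or a plugin
    TEST_RUNNER = 'xmlrunner.extra.djangotestrunner.XMLTestRunner'
    TEST_OUTPUT_DIR = 'test-reports'      # where the xunit XML is written
    TEST_OUTPUT_FILE_NAME = 'results.xml'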

I know that the Horizon team already introduced some new things in Rocky which
require action from plugin developers, like moving to Mock (it's one of the
community goals for all projects this release) and supporting
Django<2.0,>=1.11. That's why I'm ready to help plugins with the test runner
migration and to propose a patch for each plugin in the list [4].

Since it's supposed to be a cross-project activity, I would like to get
feedback from Horizon plugin developers.



[1] https://blueprints.launchpad.net/horizon/+spec/improve-horizon-testing
[2] https://etherpad.openstack.org/p/horizon-ptg-rocky
[3]
http://eavesdrop.openstack.org/meetings/horizon/2018/horizon.2018-04-04-20.01.log.html#l-25
[4] https://docs.openstack.org/horizon/latest/install/plugin-registry.html
[5] https://review.openstack.org/#/c/544296/


Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Ironic Bug Day on Thursday April 12th @ 1:00 PM - 3:00 PM (UTC)

2018-04-11 Thread Michael Turek
Sorry this is so late but as for the format of the event I think we 
should do something like this:


1) Go through new bugs
    - This is doable in storyboard: sort by creation date.
    - Should be a nice warm-up activity!
2) Go through oldest bugs
    - Again, doable in storyboard: sort by last updated.
    - Older bugs are usually candidates for some clean up. We'll decide
      if bugs are still valid or if we need to reassign/poke owners.
3) Open floor
    - If you have a bug that you'd like to discuss, bring it up here!
4) Storyboard discussion
    - One of the reasons we are doing this is to get our feet wet in
      storyboard. Let's spend 10 to 20 minutes discussing what we need
      out of the tool after playing with it.


Originally I was hoping that we could sort by task priority, but that
currently seems to be unavailable, or well hidden, in storyboard. If someone
knows how to do this, please reply.


Does anyone else have any ideas on how to structure bug day?

Thanks!
Mike 


On 4/11/18 9:47 AM, Michael Turek wrote:

Hey all,

Ironic Bug Day is happening tomorrow, April 12th at 1:00 PM - 3:00 PM 
(UTC)


We will be meeting on Julia's bluejeans line: 
https://bluejeans.com/5548595878


Hope to see everyone there!

Thanks,
Mike Turek 





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Moving OpenStack plugins into separate repo

2018-04-11 Thread Boris Pavlovic
Andrey,

Great news!

Best regards,
Boris Pavlovic

On Wed, Apr 11, 2018 at 9:14 AM, Andrey Kurilin 
wrote:

> Hi Stackers!
>
> Today I am happy to announce great news!
>
> Historically, Rally has been a testing (benchmarking) tool for
> OpenStack, but that has changed. More and more users want to use Rally for
> different platforms and environments, and our pluggable system allows
> this.
> To make the framework lightweight and to simplify our release model, we
> decided to move the OpenStack plugins to a separate repository [1].
>
> [1] https://git.openstack.org/cgit/openstack/rally-openstack
>
> We cut the first release 1.0.0 two weeks ago, and it is published to
> PyPI[2].
>
> [2] https://pypi.python.org/pypi/rally-openstack
>
> If you are a Rally consumer and do not have custom plugins, the migration
> should be simple. Just install the rally-openstack package instead of rally
> and everything will work as before. rally-openstack depends on rally, so you
> need to install nothing more than that one package.
>
> If you have custom plugins, do not worry: the migration should be simple
> for you too. The first release has the same structure as the plugins had in
> the rally repository. The only thing that needs to change is importing
> rally_openstack instead of rally.plugins.openstack.
>
> --
> Best regards,
> Andrey Kurilin.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [election] [tc] TC candidacy for cdent

2018-04-11 Thread Chris Dent


Hi,

I'm announcing my candidacy to continue as a member of the Technical Committee.
When I ran a year ago, one of my goals was to foster more, and more
transparent, communication among the many parts of the OpenStack community. The
TC has made progress by being more overt and intentional in reaching out to
others and sharing information in an active way. I helped, with my weekly TC
Reports and other writing related to the TC [1], but there is plenty more to
do, especially as the infrastructure as a service community grows and mutates
to include CI/CD, Edge and container-related activities. Enough left to do that
I would like to continue for another term.

The growth of projects under the OpenStack Foundation umbrella will present
opportunities and challenges. We'll be able to deal with those most effectively
by having good communication hygiene: over communicating in a written and
discoverable fashion.

Changes in the shape of the community will impact the role of the TC and its
members. The TC has been something of a high-level judiciary within the
OpenStack technical community but increasingly will need to take on a role as a
representative of the community that develops what has traditionally been known
as "OpenStack" to the other nearby communities that are also now "OpenStack".

My candidacy note from last year [2] remains relevant and a good expression of
my opinions about governance and the overarching themes that concern me:
communication, openness, lowering boundaries between people and platforms,
maintaining developer sanity [3].

If I'm elected again I intend to encourage engagement by continuing with the
TC Report, making sure that we include the right people when making decisions,
and using media that is accessible to people of many languages and time zones.

I will also actively drive discussion and policy that leads to people who are
users of OpenStack in the broadest sense finding it easier to be regularly
active contributors to the open source projects which create OpenStack. We are
making progress with this, but much of OpenStack is still the domain of (often
overburdened) "professionals". Breaking into those domains needs to be simpler
and encouraged for the benefit of all concerned.

If you would like to look at my past voting record on governance changes that
can be found here:

https://review.openstack.org/#/q/project:openstack/governance+reviewedby:%22Chris+Dent+%253Ccdent%2540anticdent.org%253E%22

If you would like me to continue, please vote for me in the upcoming elections.
If you would like someone else, please vote for them. If you would like to
give it a try yourself, then please run; you have until the end of the (UTC)
day of April 17th to submit your candidacy. See the following for details:

https://governance.openstack.org/election/#how-to-submit-a-candidacy

Thanks for reading and your consideration.

[1] https://anticdent.org/tag/tc.html
[2] 
https://git.openstack.org/cgit/openstack/election/plain/candidates/pike/TC/cdent.txt
[3] https://anticdent.org/openstack-developer-satisfaction.html

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] uncapping eventlet

2018-04-11 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2018-04-11 12:20:46 -0400:
> Excerpts from Matthew Thode's message of 2018-04-05 10:47:37 -0500:
> > eventlet-0.22.1 has been out for a while now, we should try and use it.
> > Going to be fun times.
> > 
> > I have a review projects can depend upon if they wish to test.
> > https://review.openstack.org/533021
> 
> I have proposed a bunch of patches to projects to remove the cap
> for eventlet [1]. If they don't pass tests, please take them over
> and fix them up as needed (I anticipate some trouble with the new
> check-requirements rules, for example).
> 
> Doug
> 
> [1] 
> https://review.openstack.org/#/q/topic:uncap-eventlet+(status:open+OR+status:merged)

And please go ahead and abandon any that are duplicates for patches that
are already being worked on elsewhere. It was easier to just update
everything than to script something to figure out when updates were
needed.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] uncapping eventlet

2018-04-11 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2018-04-05 10:47:37 -0500:
> eventlet-0.22.1 has been out for a while now, we should try and use it.
> Going to be fun times.
> 
> I have a review projects can depend upon if they wish to test.
> https://review.openstack.org/533021

I have proposed a bunch of patches to projects to remove the cap
for eventlet [1]. If they don't pass tests, please take them over
and fix them up as needed (I anticipate some trouble with the new
check-requirements rules, for example).

Doug

[1] 
https://review.openstack.org/#/q/topic:uncap-eventlet+(status:open+OR+status:merged)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally] Moving OpenStack plugins into separate repo

2018-04-11 Thread Andrey Kurilin
Hi Stackers!

Today I am happy to announce great news!

Historically, Rally has been a testing (benchmarking) tool for
OpenStack, but that has changed. More and more users want to use Rally for
different platforms and environments, and our pluggable system allows
this.
To make the framework lightweight and to simplify our release model, we
decided to move the OpenStack plugins to a separate repository [1].

[1] https://git.openstack.org/cgit/openstack/rally-openstack

We cut the first release 1.0.0 two weeks ago, and it is published to
PyPI[2].

[2] https://pypi.python.org/pypi/rally-openstack

If you are a Rally consumer and do not have custom plugins, the migration
should be simple. Just install the rally-openstack package instead of rally
and everything will work as before. rally-openstack depends on rally, so you
need to install nothing more than that one package.

If you have custom plugins, do not worry: the migration should be simple
for you too. The first release has the same structure as the plugins had in
the rally repository. The only thing that needs to change is importing
rally_openstack instead of rally.plugins.openstack.
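
For example, a plugin that previously used an import like the first line
below now uses the second (the module path is shown just for illustration):

    # before: plugin code importing from the rally tree
    from rally.plugins.openstack.scenarios.nova import utils as nova_utils

    # after: the same module, now from the rally-openstack package
    from rally_openstack.scenarios.nova import utils as nova_utils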

-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Changes toComputeVirtAPI.wait_for_instance_event

2018-04-11 Thread Andreas Scheuring
Looks good IMO.

---
Andreas Scheuring (andreas_s)



On 11. Apr 2018, at 15:46, Eric Fried  wrote:

Jichen was able to use this information immediately, to great benefit
[1].  (If those paying attention could have a quick look at that to make
sure he used it right, it would be appreciated; I'm not an expert here.)

[1]
https://review.openstack.org/#/c/527658/31..32/nova/virt/zvm/guest.py@192

On 04/10/2018 09:06 PM, Chen CH Ji wrote:
> Thanks for your info ,really helpful
> 
> Best Regards!
> 
> Kevin (Chen) Ji 纪 晨
> 
> Engineer, zVM Development, CSTL
> Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
> Phone: +86-10-82451493
> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
> District, Beijing 100193, PRC
> 
> 
> From: Andreas Scheuring 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 04/10/2018 10:19 PM
> Subject: Re: [openstack-dev] [nova] Changes
> toComputeVirtAPI.wait_for_instance_event
> 
> 
> 
> 
> 
> Yes, that’s how it works!
> 
> ---
> Andreas Scheuring (andreas_s)
> 
> 
> 
> On 10. Apr 2018, at 16:05, Matt Riedemann <_mriedemos@gmail.com_
> > wrote:
> 
> On 4/9/2018 9:57 PM, Chen CH Ji wrote:
> 
>Could you please help to share whether this kind of event is
>sent by neutron-server or neutron agent ? I searched neutron code
>from [1][2] this means the agent itself need tell neutron server
>the device(VIF) is up then neutron server will send notification
>to nova through REST API and in turn consumed by compute node?
>
> [1]_https://github.com/openstack/neutron/tree/master/neutron/notify_port_active_direct_
>
> 
>
> [2]_https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/rpc.py#L264_
>
> 
> 
> 
> I believe the neutron agent is the one that is getting (or polling) the
> information from the underlying network backend when VIFs are plugged or
> unplugged from a host, then route that information via RPC to the
> neutron server which then sends an os-server-external-events request to
> the compute REST API, which then routes the event information down to
> the nova-compute host where the instance is currently running.
> 
> -- 
> 
> Thanks,
> 
> Matt
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: _OpenStack-dev-request@lists.openstack.org_
> ?subject:unsubscribe_
> __http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev_
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev=DwIGaQ=jf_iaSHvJObTbx-siA1ZOg=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0=tIntFpZ0ffp-_h5CsqN1I9tv64hW2xugxBXaxDn7Z_I=z2jOgMD7B3XFoNsUHTtIO6hWKYXH-Dm4L4P0-u-oSSw=
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [FEMDC ]Kubernetes IoT Edge Working Group Proposal

2018-04-11 Thread free
Dear all, 

I’m not sure the information has been shared on the MLs. 

https://groups.google.com/forum/?utm_medium=email_source=footer#!topic/kubernetes-dev/sAFIvDsvUCI
 


Regards, 
Ad_ri3n___
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] roadmap on containers workflow

2018-04-11 Thread Ben Nemec



On 04/11/2018 07:58 AM, Wesley Hayutin wrote:



On Tue, 10 Apr 2018 at 20:51, Emilien Macchi wrote:


Greetings,

Steve Baker and I had a quick chat today about the work that is
being done around containers workflow in Rocky cycle.

If you're not familiar with the topic, I suggest first reading the
blueprint to understand the context here:
https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow

One of the great outcomes of this blueprint is that in Rocky, the
operator won't have to run all the "openstack overcloud container"
commands to prepare the container registry and upload the
containers. Indeed, it'll be driven by Heat and Mistral mostly.
But today our discussion extended to two use cases that we're going
to explore and figure out how to address:
1) I'm a developer and want to deploy a containerized undercloud
with customized containers (more or less related to the all-in-one
discussions on another thread [1]).
2) I'm submitting a patch in tripleo-common (let's say a workflow)
and need my patch to be tested when the undercloud is containerized
(see [2] for an excellent example).

Both cases would require additional things:
- The container registry needs to be deployed *before* actually
installing the undercloud.
- We need a tool to update containers from this registry and
*before* deploying them. We already have this tool in place in our
CI for the overcloud (see [3] and [4]). Now we need a similar thing
for the undercloud.

Next steps:
- Agree that we need to deploy the container-registry before the
undercloud.
- If agreed, we'll create a new Ansible role called
ansible-role-container-registry that for now will deploy exactly
what we have in TripleO, without extra features.
- Drive the playbook runtime from tripleoclient to bootstrap the
container registry (which of course could be disabled in
undercloud.conf).
- Create another Ansible role that would re-use container-check tool
but the idea is to provide a role to modify containers when needed,
and we could also control it from tripleoclient. The role would be
using the ContainerImagePrepare parameter, which Steve is working on
right now.


This all looks really good Emilien, thanks for sending it out.
Regarding the update of containers, we would just want to be 100% sure 
that we can control which yum repositories are in play for the update.  
Maybe it will be done by the user prior to running the command, or maybe 
with some flags to whatever command Steve is working on.
FYI, we've noticed in CI that when the base OS updates (not baseos) are 
included, you tend to fail on at least one package download on one of 
the 50+ containers due to infra/network issues. In CI we only enable baseos, 
dlrn updates and the dependency change [1]


I will note that this was the sort of use case the -o parameter to 
tripleo-repos was intended to handle.  It can write the configured repos 
to an arbitrary location that we could then mount into the containers so 
the update repos are independent from the underlying system.


https://github.com/openstack/tripleo-repos/blob/8961edcd2d9dd1f2c50d3da51f4129daaad85ab0/tripleo_repos/main.py#L88



Thanks

[1] 
https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-containers/templates/overcloud-prep-containers.sh.j2#L104-L109



Feedback is welcome, thanks.

[1] All-In-One thread:
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html
[2] Bug report when undercloud is containerized
https://bugs.launchpad.net/tripleo/+bug/1762422
[3] Tool to update containers if needed:
https://github.com/imain/container-check
[4] Container-check running in TripleO CI:
https://review.openstack.org/#/c/558885/ and
https://review.openstack.org/#/c/529399/
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Ironic Bug Day on Thursday April 12th @ 1:00 PM - 3:00 PM (UTC)

2018-04-11 Thread Michael Turek

Hey all,

Ironic Bug Day is happening tomorrow, April 12th at 1:00 PM - 3:00 PM (UTC)

We will be meeting on Julia's bluejeans line: 
https://bluejeans.com/5548595878


Hope to see everyone there!

Thanks,
Mike Turek 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Changes toComputeVirtAPI.wait_for_instance_event

2018-04-11 Thread Eric Fried
Jichen was able to use this information immediately, to great benefit
[1].  (If those paying attention could have a quick look at that to make
sure he used it right, it would be appreciated; I'm not an expert here.)

[1]
https://review.openstack.org/#/c/527658/31..32/nova/virt/zvm/guest.py@192
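
For anyone following the thread who hasn't used this API, the pattern being
discussed looks roughly like the sketch below (names and the deadline value
are illustrative, and this is not the exact z/VM code; from memory the
context manager also takes an error_callback argument):

    # begin waiting for neutron's network-vif-plugged events *before* doing
    # the work that triggers them; the events are awaited when the body ends
    events = [('network-vif-plugged', vif['id']) for vif in network_info]
    with self.virtapi.wait_for_instance_event(instance, events, deadline=300):
        self._plug_vifs(instance, network_info)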

On 04/10/2018 09:06 PM, Chen CH Ji wrote:
> Thanks for your info ,really helpful
> 
> Best Regards!
> 
> Kevin (Chen) Ji 纪 晨
> 
> Engineer, zVM Development, CSTL
> Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
> Phone: +86-10-82451493
> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
> District, Beijing 100193, PRC
> 
> 
> From: Andreas Scheuring 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 04/10/2018 10:19 PM
> Subject: Re: [openstack-dev] [nova] Changes
> toComputeVirtAPI.wait_for_instance_event
> 
> 
> 
> 
> 
> Yes, that’s how it works!
> 
> ---
> Andreas Scheuring (andreas_s)
> 
> 
> 
> On 10. Apr 2018, at 16:05, Matt Riedemann <_mriedemos@gmail.com_
> > wrote:
> 
> On 4/9/2018 9:57 PM, Chen CH Ji wrote:
> 
> Could you please help to share whether this kind of event is
> sent by neutron-server or neutron agent ? I searched neutron code
> from [1][2] this means the agent itself need tell neutron server
> the device(VIF) is up then neutron server will send notification
> to nova through REST API and in turn consumed by compute node?
> 
> [1]_https://github.com/openstack/neutron/tree/master/neutron/notify_port_active_direct_
> 
> 
> 
> [2]_https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/rpc.py#L264_
> 
> 
> 
> 
> I believe the neutron agent is the one that is getting (or polling) the
> information from the underlying network backend when VIFs are plugged or
> unplugged from a host, then route that information via RPC to the
> neutron server which then sends an os-server-external-events request to
> the compute REST API, which then routes the event information down to
> the nova-compute host where the instance is currently running.
> 
> -- 
> 
> Thanks,
> 
> Matt
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: _OpenStack-dev-request@lists.openstack.org_
> ?subject:unsubscribe_
> __http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev_
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev=DwIGaQ=jf_iaSHvJObTbx-siA1ZOg=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0=tIntFpZ0ffp-_h5CsqN1I9tv64hW2xugxBXaxDn7Z_I=z2jOgMD7B3XFoNsUHTtIO6hWKYXH-Dm4L4P0-u-oSSw=
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [election] [tc] TC candidacy (but not for chair)

2018-04-11 Thread Thierry Carrez
Hi everyone,

Growing new leaders has been a focus of the Technical Committee over the
last year: first discussed at the leadership workshop with Board members
in March 2017, then included in the TC "vision for 2019"[1] adopted in June.

As part of this objective, we actively looked for new stewards in our
community, provided opportunities to step up, and rotated key roles to
develop a deeper bench of ready leaders. But we never applied those
ideas to the TC chair position itself: I have been the only candidate
and have held that position since the creation of that governance body
in 2012. The main reason is that tracking everything that's happening is
a significant commitment, and the Foundation is happy to have me invest
that time. That said, it's not ideal to have a role that only one person
can fill, so it's time for a change.

I am announcing my candidacy for a position on the OpenStack Technical
Committee in the upcoming election. However, if I'm elected I won't be a
candidate to the chair position for the upcoming TC session. To ensure a
seamless transition I will actively support the person who will be
chosen by the TC members. In all cases I'll be as involved with the TC
activities as I've always been.

In my opinion our vision for 2019[1] is still current. We have a lot of
work ahead of us to fully implement it, especially around the concept of
"Constellations" (representation of groups of OpenStack components that
answer a specific use case). Beyond that, our main challenge is to
continue to adapt OpenStack governance to the evolving needs of the
project. Most of our processes and structures come from back when we
doubled activity every year, when our main focus was to survive that
meteoric growth. With OpenStack getting more mature and having more
adoption, we need to rethink those processes and structures with
long-term sustainability in mind. Finally, we need to navigate a
transition where everything produced by our community will no longer
necessarily be called "OpenStack", starting with Zuul being given its
own separate branding.

If you're passionate about open source project governance and interested
in tackling those challenges, please consider running for the Technical
Committee! Several of the current members won't be running for
re-election, so seats are up for grabs.  We track current proposed
changes on a Tracker[2], track work items on StoryBoard[3], and usually
meet in person at Summits and PTGs. You can read past weekly "TC status
update" emails to get a better idea of the type of things we cover. I
would say the time commitment is between 2 and 6 hours a week. Join us!

[1]
https://governance.openstack.org/tc/resolutions/20170404-vision-2019.html
[2] https://wiki.openstack.org/wiki/Technical_Committee_Tracker
[3] https://storyboard.openstack.org/#!/project/923

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] uncapping eventlet

2018-04-11 Thread Doug Hellmann
Excerpts from Chen CH Ji's message of 2018-04-11 21:09:29 +0800:
> sorry, I didn't see any solution for following error found in [1]
> I just rechecked the patch and is this kind of issue already fixed?
> 
> ubuntu-xenial | Requirement for package eventlet : Requirement
> (package=u'eventlet', location='',
> specifiers='!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2', markers=u'', comment=u'#
> MIT', extras=frozenset([])) does not match openstack/requirements value :
> set([Requirement(package='eventlet', location='',
> specifiers='!=0.18.3,!=0.20.1,>=0.18.2', markers='', comment='# MIT',
> extras=frozenset([]))])

The error message is correct. The requirements specification does not
match and needs to be fixed by removing the cap from eventlet.
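
For illustration (taking the specifiers straight from the error above), the
project-local entry just needs to drop the upper bound so it matches the
uncapped global value, e.g.:

    # before
    eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2  # MIT
    # after
    eventlet!=0.18.3,!=0.20.1,>=0.18.2  # MIT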

Doug

> 
> [1]
> logs.openstack.org/87/523387/32/check/requirements-check/408e28c/job-output.txt.gz
> 
> Best Regards!
> 
> Kevin (Chen) Ji 纪 晨
> 
> Engineer, zVM Development, CSTL
> Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
> Phone: +86-10-82451493
> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
> Beijing 100193, PRC
> 
> 
> 
> From:Doug Hellmann 
> To:openstack-dev 
> Date:04/11/2018 08:56 PM
> Subject:Re: [openstack-dev] [all][requirements] uncapping eventlet
> 
> Excerpts from IWAMOTO Toshihiro's message of 2018-04-11 18:19:02 +0900:
> > On Mon, 09 Apr 2018 22:58:28 +0900,
> > Doug Hellmann wrote:
> > >
> > > Excerpts from Tony Breeds's message of 2018-04-09 13:39:30 +1000:
> > > > On Fri, Apr 06, 2018 at 09:41:07AM -0700, Clark Boylan wrote:
> > > >
> > > > My understanding of our use of upper constraints was that this should
> > > > (almost) always be the case for (almost) all dependencies.  We should
> > > > rely on constraints instead of requirements caps. Capping libs like
> > > > pbr or eventlet and any other that is in use globally is incredibly
> > > > difficult to work with when you want to uncap it because you have to
> > > > coordinate globally. Instead if using constraints you just bump the
> > > > constraint and are done.
> > > >
> > > Part of the reason that we have the caps it to prevent the tools that
> > > auto-generate the constraints syncs from considering these versions and
> > > then depending on the requirements team to strip that from the bot
> > > change before committing (assuming it passes CI).
> > >
> > > Once the work Doug's doing is complete we could consider tweaking the
> > > tools to use a different mechanism, but that's only part of the reason
> > > for the caps in g-r.
> > > >
> > > > Yours Tony.
> > >
> > > Now that projects don't have to match the global requirements list
> > > entries exactly we should be able to remove caps from within the
> > > projects and keep caps in the global list for cases like this where we
> > > know we frequently encounter breaking changes in new releases. The
> > > changes to support that were part of
> > >
> > https://review.openstack.org/#/c/555402/
> >
> > As eventlet has been uncapped in g-r, requirements-check is
> > complaining on unrelated project-local requirement changes.
> > I'm not quite sure but doesn't seem to be a intended behavior.
> >
> >
> > http://logs.openstack.org/57/451257/16/check/requirements-check/c32ee69/job-output.txt.gz
> >
> 
> This error is related to the change in
> https://review.openstack.org/#/c/560050/ which applies the matching
> rules to all requirements settings any time any requirements-related
> file is touched. The change was made because we are less in-sync than we
> thought and because we're allowing "bad" settings to stay in place.
> 
> To correct the problem in the log you linked to, remove the cap from
> eventlet in neutron.
> 
> Doug
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] Re-Reminder on the state of WSME

2018-04-11 Thread Chris Dent

On Wed, 11 Apr 2018, Dougal Matthews wrote:


I would like to see us move away from WSME. I'm not sure I have time to
drive an effort in finding a replacement (and migration path) but I would
certainly like to help.


Dougal and I talked about this in IRC and agreed that being able to
merge changes in WSME would help the goal of establishing a
migration path. So I've added him to WSME cores.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] uncapping eventlet

2018-04-11 Thread Chen CH Ji
Sorry, I didn't see any solution for the following error found in [1].
I just rechecked the patch; has this kind of issue already been fixed?

ubuntu-xenial | Requirement for package eventlet : Requirement
(package=u'eventlet', location='',
specifiers='!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2', markers=u'', comment=u'#
MIT', extras=frozenset([])) does not match openstack/requirements value :
set([Requirement(package='eventlet', location='',
specifiers='!=0.18.3,!=0.20.1,>=0.18.2', markers='', comment='# MIT',
extras=frozenset([]))])

[1]
logs.openstack.org/87/523387/32/check/requirements-check/408e28c/job-output.txt.gz

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Doug Hellmann 
To: openstack-dev 
Date:   04/11/2018 08:56 PM
Subject:Re: [openstack-dev] [all][requirements] uncapping eventlet



Excerpts from IWAMOTO Toshihiro's message of 2018-04-11 18:19:02 +0900:
> On Mon, 09 Apr 2018 22:58:28 +0900,
> Doug Hellmann wrote:
> >
> > Excerpts from Tony Breeds's message of 2018-04-09 13:39:30 +1000:
> > > On Fri, Apr 06, 2018 at 09:41:07AM -0700, Clark Boylan wrote:
> > >
> > > > My understanding of our use of upper constraints was that this should
> > > > (almost) always be the case for (almost) all dependencies.  We should
> > > > rely on constraints instead of requirements caps. Capping libs like
> > > > pbr or eventlet and any other that is in use globally is incredibly
> > > > difficult to work with when you want to uncap it because you have to
> > > > coordinate globally. Instead if using constraints you just bump the
> > > > constraint and are done.
> > >
> > > Part of the reason that we have the caps it to prevent the tools that
> > > auto-generate the constraints syncs from considering these versions and
> > > then depending on the requirements team to strip that from the bot
> > > change before committing (assuming it passes CI).
> > >
> > > Once the work Doug's doing is complete we could consider tweaking the
> > > tools to use a different mechanism, but that's only part of the reason
> > > for the caps in g-r.
> > >
> > > Yours Tony.
> >
> > Now that projects don't have to match the global requirements list
> > entries exactly we should be able to remove caps from within the
> > projects and keep caps in the global list for cases like this where we
> > know we frequently encounter breaking changes in new releases. The
> > changes to support that were part of
> > https://review.openstack.org/#/c/555402/
>
> As eventlet has been uncapped in g-r, requirements-check is
> complaining on unrelated project-local requirement changes.
> I'm not quite sure but doesn't seem to be a intended behavior.
>
>
> http://logs.openstack.org/57/451257/16/check/requirements-check/c32ee69/job-output.txt.gz
>

This error is related to the change in
https://review.openstack.org/#/c/560050/ which applies the matching
rules to all requirements settings any time any requirements-related
file is touched. The change was made because we are less in-sync than we
thought and because we're allowing "bad" settings to stay in place.

To correct the problem in the log you linked to, remove the cap from
eventlet in neutron.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] roadmap on containers workflow

2018-04-11 Thread Wesley Hayutin
On Tue, 10 Apr 2018 at 20:51 Emilien Macchi  wrote:

> Greetings,
>
> Steve Baker and I had a quick chat today about the work that is being done
> around containers workflow in Rocky cycle.
>
> If you're not familiar with the topic, I suggest to first read the
> blueprint to understand the context here:
> https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow
>
> One of the great outcomes of this blueprint is that in Rocky, the operator
> won't have to run all the "openstack overcloud container" commands to
> prepare the container registry and upload the containers. Indeed, it'll be
> driven by Heat and Mistral mostly.
>
> But today our discussion extended on 2 uses-cases that we're going to
> explore and find how we can address them:
> 1) I'm a developer and want to deploy a containerized undercloud with
> customized containers (more or less related to the all-in-one discussions
> on another thread [1]).
> 2) I'm submitting a patch in tripleo-common (let's say a workflow) and
> need my patch to be tested when the undercloud is containerized (see [2]
> for an excellent example).
>
> Both cases would require additional things:
> - The container registry needs to be deployed *before* actually installing
> the undercloud.
> - We need a tool to update containers from this registry and *before*
> deploying them. We already have this tool in place in our CI for the
> overcloud (see [3] and [4]). Now we need a similar thing for the undercloud.
>
> Next steps:
> - Agree that we need to deploy the container-registry before the
> undercloud.
> - If agreed, we'll create a new Ansible role called
> ansible-role-container-registry that for now will deploy exactly what we
> have in TripleO, without extra feature.
> - Drive the playbook runtime from tripleoclient to bootstrap the container
> registry (which of course could be disabled in undercloud.conf).
> - Create another Ansible role that would re-use container-check tool but
> the idea is to provide a role to modify containers when needed, and we
> could also control it from tripleoclient. The role would be using
> the ContainerImagePrepare parameter, which Steve is working on right now.
>

This all looks really good, Emilien. Thanks for sending it out.
Regarding the update of containers, we would just want to be 100% sure that
we can control which yum repositories are in play for the update. Maybe it
will be done by the user prior to running the command, or maybe with some
flags to whatever command Steve is working on.
FYI, we've noticed in CI that when the base OS updates (not baseos) are
included, you tend to fail on at least one package download in one of the 50+
containers due to infra/network issues. In CI we only enable baseos, DLRN
updates, and the dependency change [1].

Thanks

[1]
https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-containers/templates/overcloud-prep-containers.sh.j2#L104-L109


>
> Feedback is welcome, thanks.
>
> [1] All-In-One thread:
> http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html
> [2] Bug report when undercloud is containeirzed
> https://bugs.launchpad.net/tripleo/+bug/1762422
> [3] Tool to update containers if needed:
> https://github.com/imain/container-check
> [4] Container-check running in TripleO CI:
> https://review.openstack.org/#/c/558885/ and
> https://review.openstack.org/#/c/529399/
> --
> Emilien Macchi
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] uncapping eventlet

2018-04-11 Thread Doug Hellmann
Excerpts from IWAMOTO Toshihiro's message of 2018-04-11 18:19:02 +0900:
> On Mon, 09 Apr 2018 22:58:28 +0900,
> Doug Hellmann wrote:
> > 
> > Excerpts from Tony Breeds's message of 2018-04-09 13:39:30 +1000:
> > > On Fri, Apr 06, 2018 at 09:41:07AM -0700, Clark Boylan wrote:
> > > 
> > > > My understanding of our use of upper constraints was that this should
> > > > (almost) always be the case for (almost) all dependencies.  We should
> > > > rely on constraints instead of requirements caps. Capping libs like
> > > > pbr or eventlet and any other that is in use globally is incredibly
> > > > difficult to work with when you want to uncap it because you have to
> > > > coordinate globally. Instead if using constraints you just bump the
> > > > constraint and are done.
> > > 
> > > Part of the reason that we have the caps it to prevent the tools that
> > > auto-generate the constraints syncs from considering these versions and
> > > then depending on the requirements team to strip that from the bot
> > > change before committing (assuming it passes CI).
> > > 
> > > Once the work Doug's doing is complete we could consider tweaking the
> > > tools to use a different mechanism, but that's only part of the reason
> > > for the caps in g-r.
> > > 
> > > Yours Tony.
> > 
> > Now that projects don't have to match the global requirements list
> > entries exactly we should be able to remove caps from within the
> > projects and keep caps in the global list for cases like this where we
> > know we frequently encounter breaking changes in new releases. The
> > changes to support that were part of
> > https://review.openstack.org/#/c/555402/
> 
> As eventlet has been uncapped in g-r, requirements-check is
> complaining on unrelated project-local requirement changes.
> I'm not quite sure but doesn't seem to be a intended behavior.
> 
> http://logs.openstack.org/57/451257/16/check/requirements-check/c32ee69/job-output.txt.gz
> 

This error is related to the change in
https://review.openstack.org/#/c/560050/ which applies the matching
rules to all requirements settings any time any requirements-related
file is touched. The change was made because we are less in-sync than we
thought and because we're allowing "bad" settings to stay in place.

To correct the problem in the log you linked to, remove the cap from
eventlet in neutron.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Elections][TC] Announcing Rico Lin candidacy for TC

2018-04-11 Thread Rico Lin
Dear all,

I'd like to announce my candidacy for a seat on the OpenStack Technical
Committee.

I'm Rico Lin, employed by EasyStack, full-time OpenStacker.
I have been in this community since 2014 and have been deeply involved in
technical contribution [1], mostly around the Orchestration service, which
lets me work on integrating and managing resources across projects. I have
also served as PTL for three cycles, which has taught me how to bring users'
and operators' experiences and requirements into our development workflow and
technical decision processes.


Here are my major goals with this seat in TC:

- Application:
We updated our resolution with [3] to say that we care about what
applications need on top of OpenStack. Since only a few projects are taking
on that role and thinking about what applications need, we should help by
setting community goals, making resolutions, or defining which top-priority
applications (even as a short-term definition) we need to focus on, and then
deriving action items and guidelines and finding weaknesses, so that others
in the community can follow (for people who agree with the goals but have no
idea how to help, IMO this will be a good thing).

- Cooperate with Users, Operators, and Developers:
We have been losing some communication across users, operators, and
developers. It's never a good thing when users can share use cases, operators
can share experiences, and developers can share code, but none of it reaches
the other groups unless a user happens to work with developers directly. Here,
work like StoryBoard should be our first priority. We need a more solid way to
get user feedback to developers, so we can actually learn what is or is not
working for each feature. We should also consider strengthening the
communication between the TC and the UC (User Committee).

- Diversity:
The math is easy. [2] shows around one-third of our users are from Asia (with
75% of those in China) and, IIRC, around the same percentage of developers,
but we have zero representation on the TC. The actual work is hard. We need to
carry our technical guidelines to developers in Asia and provide more chances
to get feedback from them, so we can produce better technical resolutions that
tie developers together. I think I'm a good candidate for this.

- Reach out for new blood:
With cloud getting more mature, it's normal that cloud developers need to
work in multiple communities, and they may come and go (mostly based on the
job definition from their employer), so we need more new developers. Most
important is to provide more chances for them to stay. I know many newly
joined developers struggle to find ways to fit into each project. We need
ways to shorten their onboarding time, so they can do good work while they
are in our community.

- Paying the debt:
Our community has done a great job of changing our resolutions and guidelines
to adopt new trends and keep ourselves sharp. The TC tries really hard to
adjust our path and do the magic. IMO, we need more effort on some specific
jobs (like cross-project work for application infrastructure, or the
StoryBoard migration). I would like to keep that going and close out our
technical debt, so we have room for the new.


Thank you for your consideration.

Best Regards,
Rico Lin (ricolin)

[1] http://stackalytics.com/?release=all_id=rico-lin=person-day
[2] https://www.openstack.org/assets/survey/OpenStack-User-Survey-Nov17.pdf
[3]
https://review.openstack.org/#/c/447031/5/resolutions/20170317-cloud-applications-mission.rst

-- 
May The Force of OpenStack Be With You,

Rico Lin (irc: ricolin)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] Re-Reminder on the state of WSME

2018-04-11 Thread Dougal Matthews
On 11 April 2018 at 11:39, Chris Dent  wrote:

> On Tue, 10 Apr 2018, Michael Johnson wrote:
>
> I echo Ben's question about what is the recommended replacement.
>>
>
> It's a good question. Unfortunately I don't have a good answer. My
> involvement in WSME is simply the result of submitting some bug fixes
> in early 2015 and there being no one to review them. Lucas Gomes and
> I were pressganged into becoming the sole core reviews for a project
> that was already languishing.
>
> A short answer could be this: There doesn't have to be a
> replacement. There are people in the community who are active users
> of WSME, if those people would like to become maintainers of WSME,
> Lucas and I can make those people core and help them to shepherd the
> project to an active state. It may be that nothing really needs to
> change. The reason this is coming up now is because a code change
> was proposed that failed the gate because for unrelated reasons (the
> pep8 python3 thing mentioned elsewhere). If the existing feature set
> is sufficient the only real work to do is to keep those features
> working as we move to python3.
>

I would like to see us move away from WSME. I'm not sure I have time to
drive an effort in finding a replacement (and migration path) but I would
certainly like to help.


>
> Any volunteers?
>
> For new projects, I think the standby is Flask + jsonschema. They
> are both boring and common.
>
> I know some people really like django REST framework, but it appears
> to have lots of magic and magic is bad.
>
> The longer answer is just opinion so if the above is enough of an
> answer you can stop here before I go off on a ramble.
>
> I've never really been all that sure on what WSME is for. It
> describes itself with "simplifies the writing of REST web services
> by providing simple yet powerful typing, removing the need to
> directly manipulate the request and the response objects." This is
> pretty much exactly the opposite of what I want when writing a web
> service. I want to be closely aware of the request and response and
> not abstract away the details of HTTP because those details are what
> makes a web service useful and maintainable. So I tend to avoid
> typing systems like WSME and object dispatch systems like pecan in
> favor of tools that are more explicit about the data (both headers
> and body) coming in and going out, and that make the association
> between URLs and code explicit rather than implicit.
>
> That is: you want to write code for the API layer so that future
> maintainers of that code find it easy to trace the path through the
> code that a request takes without having to make a lot of guesses or
> de-serialize (in their heads) an object inheritance hierarchy.
>
> Flask can do that, if you chose to use it that way, but like many
> tools it also allows you to do things in confusing ways too.
>
> I personally don't think that consistency of web framework across
> OpenStack projects is important. What's important is:
>
> * The exposed HTTP APIs have some degree of consistency (that is,
>   they don't have glaring differences in grammar and semantics).
> * The code is low on abstraction and high on scrutability so that
>   future maintainers aren't scratching their heads.
> * Any frameworks chosen (if any) are maintained by the broader
>   Python community and are not OpenStack snowflakes.
>
> Committing to any particular framework is the same as committing to
> being wrong and calcified in some fairly short amount of time.
>
> Who wants to volunteer to help maintain WSME?
>
>
> --
> Chris Dent   ٩◔̯◔۶   https://anticdent.org/
> freenode: cdent tw: @anticdent
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [publiccloud-wg] Reminder and agenda tomorrows meeting

2018-04-11 Thread Tobias Rydberg

Hi everyone,

Time for a new meeting of the Public Cloud WG. Forum sessions for
Vancouver are the priority for this meeting, so it would be nice to see as
many of you there as possible.


Agenda can be found at https://etherpad.openstack.org/p/publiccloud-wg

Feel free to add items to the agenda!

See you all tomorrow at 1400 UTC in #openstack-publiccloud

Cheers,
Tobias

--
Tobias Rydberg
Senior Developer
Mobile: +46 733 312780

www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Elections][TC] Announcing Sean McGinnis candidacy for TC

2018-04-11 Thread Sean McGinnis
Hey everyone,

I am announcing my candidacy to continue on the OpenStack Technical Committee.

I am employed by Huawei and lucky enough to have a full-time focus on
OpenStack. I have been contributing since the Icehouse release. I served as
Cinder PTL from Mitaka through Pike, and was elected to the TC last spring. I
am currently serving my second cycle as Release Management PTL.

During the last year on the TC, I have tried to be pragmatic and open to reason
on governance changes. I do think some proposals need healthy debate with a
really long-term mindset to understand how changes now can impact our community
long term. I also think we need to pay close attention to the sometimes
seemingly minor effects proposals will have on all those currently involved,
and to how they impact developer happiness and the attraction of working on an
OpenStack project.

I've learned a lot from the other TC members and others participating in these
discussions. This last year has been very rewarding, and I've been glad to do
my part to move these conversations forward.

My voting on past changes can be perused here:

https://review.openstack.org/#/q/project:openstack/governance+reviewedby:%22Sean+McGinnis+%253Csean.mcginnis%2540gmail.com%253E%22

Outside of specific governance proposals, I have been working on getting
involved in the operators community by attending the last few Ops Meetups to be
able to get face to face with more of the folks actually using OpenStack. I've
found it very valuable to hear directly about what kinds of issues are being
run into and what kinds of things we might be able to change on the development
side to make things better.

Part of the outcome of that has led me to be more interested in our stable
policy, and helping out more with stable branch reviews. Many operators are not
able to get to a version, for one reason or another, until we have deleted the
branch upstream. I was happy to support our recent efforts to change our stable
policies to allow a bigger window, which might bring a resurgence of interest
in some of these older branches once more users are actually able to run them
and find issues.

I do think it is good to have some new faces on the TC, but I would love to
serve another term. I feel like the first year was partly just getting settled
in, and I would be very happy to keep things going. OpenStack has been one of
the best communities I've been involved in, and I would love the opportunity to
continue doing what I can to help support it and help it grow.

Thank you for your consideration.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] Re-Reminder on the state of WSME

2018-04-11 Thread Chris Dent

On Tue, 10 Apr 2018, Michael Johnson wrote:


I echo Ben's question about what is the recommended replacement.


It's a good question. Unfortunately I don't have a good answer. My
involvement in WSME is simply the result of submitting some bug fixes
in early 2015 and there being no one to review them. Lucas Gomes and
I were pressganged into becoming the sole core reviews for a project
that was already languishing.

A short answer could be this: There doesn't have to be a
replacement. There are people in the community who are active users
of WSME, if those people would like to become maintainers of WSME,
Lucas and I can make those people core and help them to shepherd the
project to an active state. It may be that nothing really needs to
change. The reason this is coming up now is because a code change
was proposed that failed the gate for unrelated reasons (the
pep8 python3 thing mentioned elsewhere). If the existing feature set
is sufficient the only real work to do is to keep those features
working as we move to python3.

Any volunteers?

For new projects, I think the standby is Flask + jsonschema. They
are both boring and common.
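
(A purely illustrative sketch of that combination, with made-up route and
schema names:)

    from flask import Flask, jsonify, request
    from jsonschema import ValidationError, validate

    app = Flask(__name__)

    # A deliberately tiny request schema; a real service would keep these
    # next to its API reference docs.
    SERVER_SCHEMA = {
        'type': 'object',
        'properties': {'name': {'type': 'string'}},
        'required': ['name'],
        'additionalProperties': False,
    }

    @app.route('/servers', methods=['POST'])
    def create_server():
        body = request.get_json(force=True)
        try:
            validate(body, SERVER_SCHEMA)
        except ValidationError as exc:
            return jsonify({'error': exc.message}), 400
        # ... do the real work, and be explicit about the response ...
        return jsonify({'name': body['name']}), 201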

I know some people really like django REST framework, but it appears
to have lots of magic and magic is bad.

The longer answer is just opinion so if the above is enough of an
answer you can stop here before I go off on a ramble.

I've never really been all that sure on what WSME is for. It
describes itself with "simplifies the writing of REST web services
by providing simple yet powerful typing, removing the need to
directly manipulate the request and the response objects." This is
pretty much exactly the opposite of what I want when writing a web
service. I want to be closely aware of the request and response and
not abstract away the details of HTTP because those details are what
makes a web service useful and maintainable. So I tend to avoid
typing systems like WSME and object dispatch systems like pecan in
favor of tools that are more explicit about the data (both headers
and body) coming in and going out, and that make the association
between URLs and code explicit rather than implicit.

That is: you want to write code for the API layer so that future
maintainers of that code find it easy to trace the path through the
code that a request takes without having to make a lot of guesses or
de-serialize (in their heads) an object inheritance hierarchy.

Flask can do that, if you chose to use it that way, but like many
tools it also allows you to do things in confusing ways too.

I personally don't think that consistency of web framework across
OpenStack projects is important. What's important is:

* The exposed HTTP APIs have some degree of consistency (that is,
  they don't have glaring differences in grammar and semantics).
* The code is low on abstraction and high on scrutability so that
  future maintainers aren't scratching their heads.
* Any frameworks chosen (if any) are maintained by the broader
  Python community and are not OpenStack snowflakes.

Committing to any particular framework is the same as committing to
being wrong and calcified in some fairly short amount of time.

Who wants to volunteer to help maintain WSME?

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cyborg]Weekly Team Meeting April 11, 2018

2018-04-11 Thread Zhipeng Huang
Hi Team,

Our weekly meeting starts at UTC 1400 in #openstack-cyborg as usual. The
initial agenda is as follows:

1. Confirmation of new core reviewer promotion,
2. Critical Rocky Spec update and discussion
3. open patch discussion

Please feel free to suggest new topics any time :)

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] uncapping eventlet

2018-04-11 Thread IWAMOTO Toshihiro
On Mon, 09 Apr 2018 22:58:28 +0900,
Doug Hellmann wrote:
> 
> Excerpts from Tony Breeds's message of 2018-04-09 13:39:30 +1000:
> > On Fri, Apr 06, 2018 at 09:41:07AM -0700, Clark Boylan wrote:
> > 
> > > My understanding of our use of upper constraints was that this should
> > > (almost) always be the case for (almost) all dependencies.  We should
> > > rely on constraints instead of requirements caps. Capping libs like
> > > pbr or eventlet and any other that is in use globally is incredibly
> > > difficult to work with when you want to uncap it because you have to
> > > coordinate globally. Instead if using constraints you just bump the
> > > constraint and are done.
> > 
> > Part of the reason that we have the caps it to prevent the tools that
> > auto-generate the constraints syncs from considering these versions and
> > then depending on the requirements team to strip that from the bot
> > change before committing (assuming it passes CI).
> > 
> > Once the work Doug's doing is complete we could consider tweaking the
> > tools to use a different mechanism, but that's only part of the reason
> > for the caps in g-r.
> > 
> > Yours Tony.
> 
> Now that projects don't have to match the global requirements list
> entries exactly we should be able to remove caps from within the
> projects and keep caps in the global list for cases like this where we
> know we frequently encounter breaking changes in new releases. The
> changes to support that were part of
> https://review.openstack.org/#/c/555402/

As eventlet has been uncapped in g-r, requirements-check is
complaining about unrelated project-local requirement changes.
I'm not quite sure, but this doesn't seem to be an intended behavior.

http://logs.openstack.org/57/451257/16/check/requirements-check/c32ee69/job-output.txt.gz

--
IWAMOTO Toshihiro

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev