[openstack-dev] sahara-image-elements 9.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for sahara-image-elements for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/sahara-image-elements/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:


http://git.openstack.org/cgit/openstack/sahara-image-elements/log/?h=stable/rocky

Release notes for sahara-image-elements can be found at:

http://docs.openstack.org/releasenotes/sahara-image-elements/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] sahara-dashboard 9.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for sahara-dashboard for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/sahara-dashboard/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/sahara-dashboard/log/?h=stable/rocky

Release notes for sahara-dashboard can be found at:

http://docs.openstack.org/releasenotes/sahara-dashboard/






[openstack-dev] sahara 9.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for sahara for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/sahara/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/sahara/log/?h=stable/rocky

Release notes for sahara can be found at:

http://docs.openstack.org/releasenotes/sahara/






[openstack-dev] glance 17.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for glance for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/glance/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/glance/log/?h=stable/rocky

Release notes for glance can be found at:

http://docs.openstack.org/releasenotes/glance/






Re: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possible deprecation of Nova's legacy notification interface

2018-08-09 Thread Masahito MUROI

Thanks for the notification!

Blazar already supports the versioned notifications. It consumes the 
service.update notification type internally, and I have checked that the 
feature works only with the versioned one.


best regards,
Masahito

On 2018/08/09 18:41, Balázs Gibizer wrote:

Dear Nova notification consumers!


The Nova team has made progress with the new versioned notification 
interface [1], and it has almost reached feature parity [2] with the 
legacy, unversioned one. The Nova team will therefore discuss the 
deprecation of the legacy interface at the upcoming PTG. Below is a list 
of the projects (that we know of) consuming the legacy interface; we would 
like to know whether any of them plan to switch over to the new interface 
in the foreseeable future, so we can make a well-informed decision about 
the deprecation.



* Searchlight [3] - it is in maintenance mode so I guess the answer is no
* Designate [4]
* Telemetry [5]
* Mistral [6]
* Blazar [7]
* Watcher [8] - it seems Watcher uses both legacy and versioned nova 
notifications

* Masakari - I'm not sure whether Masakari depends on nova notifications

Cheers,
gibi

[1] https://docs.openstack.org/nova/latest/reference/notifications.html
[2] http://burndown.peermore.com/nova-notification/

[3] 
https://github.com/openstack/searchlight/blob/master/searchlight/elasticsearch/plugins/nova/notification_handler.py 

[4] 
https://github.com/openstack/designate/blob/master/designate/notification_handler/nova.py 

[5] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline/data/event_definitions.yaml#L2 

[6] 
https://github.com/openstack/mistral/blob/master/etc/event_definitions.yml.sample#L2 

[7] 
https://github.com/openstack/blazar/blob/5526ed1f9b74d23b5881a5f73b70776ba9732da4/doc/source/user/compute-host-monitor.rst 

[8] 
https://github.com/openstack/watcher/blob/master/watcher/decision_engine/model/notification/nova.py#L335 
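For a consumer weighing the switch, a minimal sketch of dispatching on a versioned notification payload may help. The `nova_object.*` envelope keys follow the versioned notification format documented in [1]; the helper name and the sample values below are illustrative assumptions, not taken from nova's source:

```python
# Sketch: dispatching nova versioned notifications. Versioned payloads
# are wrapped in a 'nova_object.*' envelope, which is what gives
# consumers a stable, versioned schema to code against.

def handle_versioned(event_type, payload):
    """Return a (kind, version, key) tuple for a versioned notification."""
    data = payload.get('nova_object.data', {})
    version = payload.get('nova_object.version')
    if event_type == 'service.update':
        # The notification type Blazar consumes internally.
        return ('service', version, data.get('host'))
    if event_type.startswith('instance.'):
        return ('instance', version, data.get('uuid'))
    return ('ignored', version, None)

# Hypothetical sample payload in the versioned envelope format.
sample = {
    'nova_object.name': 'ServiceStatusPayload',
    'nova_object.namespace': 'nova',
    'nova_object.version': '1.1',
    'nova_object.data': {'host': 'compute-1', 'binary': 'nova-compute'},
}
print(handle_versioned('service.update', sample))  # → ('service', '1.1', 'compute-1')
```

The point of the envelope is that a consumer can check `nova_object.version` against the schema it was written for, instead of guessing at the shape of an unversioned dict.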












Re: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do

2018-08-09 Thread Paul Belanger
On Thu, Aug 09, 2018 at 08:00:13PM -0600, Wesley Hayutin wrote:
> On Thu, Aug 9, 2018 at 5:33 PM Alex Schultz  wrote:
> 
> > On Thu, Aug 9, 2018 at 2:56 PM, Doug Hellmann 
> > wrote:
> > > Excerpts from Alex Schultz's message of 2018-08-09 14:31:34 -0600:
> > >> Ahoy folks,
> > >>
> > >> I think it's time we come up with some basic rules/patterns on where
> > >> code lands when it comes to OpenStack related Ansible roles and as we
> > >> convert/export things. There was a recent proposal to create an
> > >> ansible-role-tempest[0] that would take what we use in
> > >> tripleo-quickstart-extras[1] and separate it for re-usability by
> > >> others.   So it was asked if we could work with the openstack-ansible
> > >> team and leverage the existing openstack-ansible-os_tempest[2].  It
> > >> turns out we have a few more already existing roles laying around as
> > >> well[3][4].
> > >>
> > >> What I would like to propose is that we as a community come together
> > >> to agree on specific patterns so that we can leverage the same roles
> > >> for some of the core configuration/deployment functionality while
> > >> still allowing for project-specific customization.  What I've
> > >> noticed across all the projects is that we have a few specific core
> > >> pieces of functionality that need to be handled (or skipped, as it may
> > >> be) for each service being deployed.
> > >>
> > >> 1) software installation
> > >> 2) configuration management
> > >> 3) service management
> > >> 4) misc service actions
> > >>
> > >> Depending on which flavor of the deployment you're using, the content
> > >> of each of these may be different.  Just about the only thing that is
> > >> shared between them all would be the configuration management part.
> > >
> > > Does that make the 4 things separate roles, then? Isn't the role
> > > usually the unit of sharing between playbooks?
> > >
> >
> > It can be, but it doesn't have to be.  The problem comes in with the
> > granularity at which you are defining the concept of the overall
> > action.  If you want a role to encompass all that is "nova", you could
> > have a single nova role that you invoke 5 different times to do the
> > different actions during the overall deployment. Or you could create a
> > role for nova-install, nova-config, nova-service, nova-cells, etc etc.
> > I think splitting them out into their own roles is a bit too much in
> > terms of management.  In my particular case, openstack-ansible is already
> > creating a role to manage "nova".  So is there a way that I can
> > leverage part of their process within mine without having to duplicate
> > it?  You can pull in the task files themselves from a different role, so
> > technically I think you could define an ansible-role-tripleo-nova that
> > does some include_tasks: ../../os_nova/tasks/install.yaml, but then
> > we'd have to duplicate the variables in our playbook rather than
> > invoking a role with some parameters.
> >
> > IMHO this structure is an issue with the general sharing concepts of
> > roles/tasks within ansible.  It's not really well defined and there's
> > not really a concept of inheritance so I can't really extend your
> > tasks with mine in more of a programming sense. I have to duplicate it
> > or do something like include a specific task file from another role.
> > Since I can't really extend a role in the traditional OO programming
> > sense, I would like to figure out how I can leverage only part of it.
> > This can be done by establishing ansible variables to trigger specific
> > actions, or by actually including the raw tasks themselves.  Either
> > of these approaches needs some sort of contract to be established so the
> > other won't get broken.  We had this in puppet via parameters, which
> > are checked; there isn't really a similar concept in ansible, so it
> > seems that we need to agree on some community-established rules.
> >
> > For tripleo, I would like to just invoke the os_nova role and pass in
> > like install: false, service: false, config_dir:
> > /my/special/location/, config_data: {...} and have it spit out the configs.
> > Then my roles would actually leverage these via containers/etc.  Of
> > course most of this goes away if we had a unified (not file based)
> > configuration method across all services (openstack and non-openstack)
> > but we don't. :D
> >
> 
> I like your idea here Alex.
> I agree that having a role for each of these steps is too much management;
> however, establishing a pattern of using task files for each step may be a
> really good way to cleanly handle this.
> 
> Are you saying something like the following?
> 
> openstack-nova-role/
>   tasks/
>     install.yml
>     service.yml
>     config.yml
>     main.yml
> ---
> # main.yml
> 
> - include: install.yml
>   when: nova_install|bool
> 
> - include: service.yml
>   when: nova_service|bool
> 
> - include: config.yml
>   when: nova_config|bool
> 

Re: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do

2018-08-09 Thread Wesley Hayutin
On Thu, Aug 9, 2018 at 5:33 PM Alex Schultz  wrote:

> On Thu, Aug 9, 2018 at 2:56 PM, Doug Hellmann 
> wrote:
> > Excerpts from Alex Schultz's message of 2018-08-09 14:31:34 -0600:
> >> Ahoy folks,
> >>
> >> I think it's time we come up with some basic rules/patterns on where
> >> code lands when it comes to OpenStack related Ansible roles and as we
> >> convert/export things. There was a recent proposal to create an
> >> ansible-role-tempest[0] that would take what we use in
> >> tripleo-quickstart-extras[1] and separate it for re-usability by
> >> others.   So it was asked if we could work with the openstack-ansible
> >> team and leverage the existing openstack-ansible-os_tempest[2].  It
> >> turns out we have a few more already existing roles laying around as
> >> well[3][4].
> >>
> >> What I would like to propose is that we as a community come together
> >> to agree on specific patterns so that we can leverage the same roles
> >> for some of the core configuration/deployment functionality while
> >> still allowing for project-specific customization.  What I've
> >> noticed across all the projects is that we have a few specific core
> >> pieces of functionality that need to be handled (or skipped, as it may
> >> be) for each service being deployed.
> >>
> >> 1) software installation
> >> 2) configuration management
> >> 3) service management
> >> 4) misc service actions
> >>
> >> Depending on which flavor of the deployment you're using, the content
> >> of each of these may be different.  Just about the only thing that is
> >> shared between them all would be the configuration management part.
> >
> > Does that make the 4 things separate roles, then? Isn't the role
> > usually the unit of sharing between playbooks?
> >
>
> It can be, but it doesn't have to be.  The problem comes in with the
> granularity at which you are defining the concept of the overall
> action.  If you want a role to encompass all that is "nova", you could
> have a single nova role that you invoke 5 different times to do the
> different actions during the overall deployment. Or you could create a
> role for nova-install, nova-config, nova-service, nova-cells, etc etc.
> I think splitting them out into their own roles is a bit too much in
> terms of management.  In my particular case, openstack-ansible is already
> creating a role to manage "nova".  So is there a way that I can
> leverage part of their process within mine without having to duplicate
> it?  You can pull in the task files themselves from a different role, so
> technically I think you could define an ansible-role-tripleo-nova that
> does some include_tasks: ../../os_nova/tasks/install.yaml, but then
> we'd have to duplicate the variables in our playbook rather than
> invoking a role with some parameters.
>
> IMHO this structure is an issue with the general sharing concepts of
> roles/tasks within ansible.  It's not really well defined and there's
> not really a concept of inheritance so I can't really extend your
> tasks with mine in more of a programming sense. I have to duplicate it
> or do something like include a specific task file from another role.
> Since I can't really extend a role in the traditional OO programming
> sense, I would like to figure out how I can leverage only part of it.
> This can be done by establishing ansible variables to trigger specific
> actions, or by actually including the raw tasks themselves.  Either
> of these approaches needs some sort of contract to be established so the
> other won't get broken.  We had this in puppet via parameters, which
> are checked; there isn't really a similar concept in ansible, so it
> seems that we need to agree on some community-established rules.
>
> For tripleo, I would like to just invoke the os_nova role and pass in
> like install: false, service: false, config_dir:
> /my/special/location/, config_data: {...} and have it spit out the configs.
> Then my roles would actually leverage these via containers/etc.  Of
> course most of this goes away if we had a unified (not file based)
> configuration method across all services (openstack and non-openstack)
> but we don't. :D
>

I like your idea here Alex.
I agree that having a role for each of these steps is too much management;
however, establishing a pattern of using task files for each step may be a
really good way to cleanly handle this.

Are you saying something like the following?

openstack-nova-role/
  tasks/
    install.yml
    service.yml
    config.yml
    main.yml
---
# main.yml

- include: install.yml
  when: nova_install|bool

- include: service.yml
  when: nova_service|bool

- include: config.yml
  when: nova_config|bool
--

Interested in anything other than tags :)
Thanks
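Sketching that pattern out as concrete role files may make the contract clearer (a hypothetical illustration: the role and variable names echo openstack-ansible's os_nova naming, but are assumptions, not taken from that repository):

```yaml
# roles/os_nova/defaults/main.yml - every step is on by default, so a
# consumer like tripleo can switch individual steps off via parameters.
nova_install: true
nova_config: true
nova_service: true

# roles/os_nova/tasks/main.yml - each step lives in its own task file
# and is gated by a boolean; those booleans are the "contract" callers
# rely on.
- include_tasks: install.yml
  when: nova_install | bool

- include_tasks: config.yml
  when: nova_config | bool

- include_tasks: service.yml
  when: nova_service | bool
```

A caller that only wants the config step would then invoke the role with nova_install: false and nova_service: false, along the lines Alex describes above.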



>
> Thanks,
> -Alex
>
> > Doug

[openstack-dev] networking-bgpvpn 9.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for networking-bgpvpn for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/networking-bgpvpn/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:


http://git.openstack.org/cgit/openstack/networking-bgpvpn/log/?h=stable/rocky

Release notes for networking-bgpvpn can be found at:

http://docs.openstack.org/releasenotes/networking-bgpvpn/

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/bgpvpn

and tag it *rocky-rc-potential* to bring it to the networking-bgpvpn
release crew's attention.




[openstack-dev] networking-odl 13.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for networking-odl for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/networking-odl/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/networking-odl/log/?h=stable/rocky

Release notes for networking-odl can be found at:

http://docs.openstack.org/releasenotes/networking-odl/






[openstack-dev] neutron-vpnaas 13.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for neutron-vpnaas for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/neutron-vpnaas/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/neutron-vpnaas/log/?h=stable/rocky

Release notes for neutron-vpnaas can be found at:

http://docs.openstack.org/releasenotes/neutron-vpnaas/






[openstack-dev] networking-sfc 7.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for networking-sfc for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/networking-sfc/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/networking-sfc/log/?h=stable/rocky

Release notes for networking-sfc can be found at:

http://docs.openstack.org/releasenotes/networking-sfc/

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/networking-sfc

and tag it *rocky-rc-potential* to bring it to the networking-sfc
release crew's attention.




[openstack-dev] neutron-dynamic-routing 13.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for neutron-dynamic-routing for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/neutron-dynamic-routing/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:


http://git.openstack.org/cgit/openstack/neutron-dynamic-routing/log/?h=stable/rocky

Release notes for neutron-dynamic-routing can be found at:

http://docs.openstack.org/releasenotes/neutron-dynamic-routing/






[openstack-dev] networking-midonet 7.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for networking-midonet for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/networking-midonet/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:


http://git.openstack.org/cgit/openstack/networking-midonet/log/?h=stable/rocky

Release notes for networking-midonet can be found at:

http://docs.openstack.org/releasenotes/networking-midonet/






[openstack-dev] networking-bagpipe 9.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for networking-bagpipe for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/networking-bagpipe/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:


http://git.openstack.org/cgit/openstack/networking-bagpipe/log/?h=stable/rocky

Release notes for networking-bagpipe can be found at:

http://docs.openstack.org/releasenotes/networking-bagpipe/

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/networking-bagpipe

and tag it *rocky-rc-potential* to bring it to the networking-bagpipe
release crew's attention.




[openstack-dev] networking-ovn 5.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for networking-ovn for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/networking-ovn/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/networking-ovn/log/?h=stable/rocky

Release notes for networking-ovn can be found at:

http://docs.openstack.org/releasenotes/networking-ovn/

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/networking-ovn

and tag it *rocky-rc-potential* to bring it to the networking-ovn
release crew's attention.




[openstack-dev] neutron 13.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for neutron for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/neutron/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/neutron/log/?h=stable/rocky

Release notes for neutron can be found at:

http://docs.openstack.org/releasenotes/neutron/






[openstack-dev] neutron-fwaas 13.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for neutron-fwaas for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/neutron-fwaas/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/neutron-fwaas/log/?h=stable/rocky

Release notes for neutron-fwaas can be found at:

http://docs.openstack.org/releasenotes/neutron-fwaas/






Re: [openstack-dev] [all][docs] ACTION REQUIRED for projects using readthedocs

2018-08-09 Thread Sean McGinnis
Resending the below since several projects using ReadTheDocs appear to have
missed this. If your project publishes docs to ReadTheDocs, please follow these
steps to avoid job failures.

On Fri, Aug 03, 2018 at 02:20:40PM +1000, Ian Wienand wrote:
> Hello,
> 
> tl;dr : any projects using the "docs-on-readthedocs" job template
> to trigger a build of their documentation in readthedocs needs to:
> 
>  1) add the "openstackci" user as a maintainer of the RTD project
>  2) generate a webhook integration URL for the project via RTD
>  3) provide the unique webhook ID value in the "rtd_webhook_id" project
> variable
> 
> See
> 
>  
> https://docs.openstack.org/infra/openstack-zuul-jobs/project-templates.html#project_template-docs-on-readthedocs
> 
> --
> 
> readthedocs has recently updated their API for triggering a
> documentation build.  In the old API, anyone could POST to a known URL
> for the project and it would trigger a build.  This endpoint has
> stopped responding and we now need to use an authenticated webhook to
> trigger documentation builds.
> 
> Since this is only done in the post and release pipelines, projects
> probably haven't had great feedback that current methods are failing
> and this may be a surprise.  To check your publishing, you can go to
> the zuul builds page [1] and filter by your project and the "post"
> pipeline to find recent runs.
> 
> There is now some setup required which can only be undertaken by a
> current maintainer of the RTD project.
> 
> In short: add the "openstackci" user as a maintainer, add a "generic
> webhook" integration to the project, find the last bit of the URL from
> that and put it in the project variable "rtd_webhook_id".
> 
> Luckily OpenStack infra keeps a team of highly skilled digital artists
> on retainer and they have produced a handy visual guide available at
> 
>   https://imgur.com/a/Pp4LH31
> 
> Once the RTD project is set up, you must provide the webhook ID value
> in your project variables.  This will look something like:
> 
>  - project:
> templates:
>   - docs-on-readthedocs
>   - publish-to-pypi
> vars:
>   rtd_webhook_id: '12345'
> check:
>   jobs:
>   ...
> 
> For actual examples; see pbrx [2] which keeps its config in tree, or
> gerrit-dash-creator which has its configuration in project-config [3].
> 
> Happy to help if anyone is having issues, via mail or #openstack-infra
> 
> Thanks!
> 
> -i
> 
> p.s. You don't *have* to use the jobs from the docs-on-readthedocs
> templates and hence add infra as a maintainer; you can setup your own
> credentials with zuul secrets in tree and write your playbooks and
> jobs to use the generic role [4].  We're always happy to discuss any
> concerns.
> 
> [1] https://zuul.openstack.org/builds.html
> [2] https://git.openstack.org/cgit/openstack/pbrx/tree/.zuul.yaml#n17
> [3] 
> https://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.d/projects.yaml
> [4] https://zuul-ci.org/docs/zuul-jobs/roles.html#role-trigger-readthedocs
> 



[openstack-dev] congress-dashboard 3.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for congress-dashboard for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/congress-dashboard/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:


http://git.openstack.org/cgit/openstack/congress-dashboard/log/?h=stable/rocky

Release notes for congress-dashboard can be found at:

http://docs.openstack.org/releasenotes/congress-dashboard/

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/congress

and tag it *rocky-rc-potential* to bring it to the congress-dashboard
release crew's attention.




[openstack-dev] congress 8.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for congress for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/congress/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/congress/log/?h=stable/rocky

Release notes for congress can be found at:

http://docs.openstack.org/releasenotes/congress/






[openstack-dev] nova_powervm 7.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for nova_powervm for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/nova-powervm/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/nova_powervm/log/?h=stable/rocky

Release notes for nova_powervm can be found at:

http://docs.openstack.org/releasenotes/nova_powervm/






[openstack-dev] networking-powervm 7.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for networking-powervm for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/networking-powervm/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:


http://git.openstack.org/cgit/openstack/networking-powervm/log/?h=stable/rocky

Release notes for networking-powervm can be found at:

http://docs.openstack.org/releasenotes/networking-powervm/






[openstack-dev] [neutron] [l3-sub-team] Weekly meeting cancellation

2018-08-09 Thread Miguel Lavalle
Dear L3 sub-team members,

On August 16th I will be on a business trip and other team members will be
off on vacation. As a consequence, we will cancel our weekly meeting that
day. We will resume at the usual time on August 23rd.

Best regards

Miguel


[openstack-dev] [neutron] [drivers]

2018-08-09 Thread Miguel Lavalle
Dear Neutron team members,

Tomorrow I will be on an airplane during the drivers team meeting time and
one team member is off on vacation. Next week, two of our team members will
be off on vacation. As a consequence, let's cancel the meetings on August
10th and 17th. We will resume normally on the 24th.

Best regards

Miguel


[openstack-dev] ceilometer-powervm 7.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for ceilometer-powervm for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/ceilometer-powervm/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:


http://git.openstack.org/cgit/openstack/ceilometer-powervm/log/?h=stable/rocky

Release notes for ceilometer-powervm can be found at:

http://docs.openstack.org/releasenotes/ceilometer-powervm/






Re: [openstack-dev] Stepping down as coordinator for the Outreachy internships

2018-08-09 Thread Kendall Nelson
You have done such amazing things with the program! We appreciate
everything you do :) Enjoy the little extra spare time.

-Kendall (diablo_rojo)


On Tue, Aug 7, 2018 at 4:48 PM Victoria Martínez de la Cruz <
victo...@vmartinezdelacruz.com> wrote:

> Hi all,
>
> I'm reaching out to let you know that I'll be stepping down as
> coordinator for OpenStack next round. I have been contributing to this
> effort for several rounds now and I believe it is a good moment for somebody
> else to take the lead. You all know how important Outreachy is to me and
> I'm grateful for all the amazing things I've done as part of the Outreachy
> program and all the great people I've met along the way. I plan to stay
> involved with the internships but leave the coordination tasks to somebody
> else.
>
> If you are interested in becoming an Outreachy coordinator, let me know
> and I can share my experience and provide some guidance.
>
> Thanks,
>
> Victoria


Re: [openstack-dev] [all][tc][election] Timing of the Upcoming Stein TC election

2018-08-09 Thread Tony Breeds
On Thu, Aug 09, 2018 at 05:20:53PM -0400, Doug Hellmann wrote:
> Excerpts from Tony Breeds's message of 2018-08-08 14:39:30 +1000:
> > Hello all,
> > With the PTL elections behind us it's time to start looking at the
> > TC election.  Our charter[1] says:
> > 
> >   The election is held no later than 6 weeks prior to each OpenStack
> >   Summit (on or before ‘S-6’ week), with elections held open for no less
> >   than four business days.
> > 
> > Assuming we have the same structure that gives us a timeline of:
> > 
> >   Summit is at: 2018-11-13
> >   Latest possible completion is at: 2018-10-02
> >   Moving back to Tuesday: 2018-10-02
> >   TC Election from 2018-09-25T23:45 to 2018-10-02T23:45
> >   TC Campaigning from 2018-09-18T23:45 to 2018-09-25T23:45
> >   TC Nominations from 2018-09-11T23:45 to 2018-09-18T23:45
> > 
> > This puts the bulk of the nomination period during the PTG, which is
> > sub-optimal as the nominations cause a distraction from the PTG but more
> > so because the campaigning will coincide with travel home, and some
> > community members take vacation along with the PTG.
> > 
> > So I'd like to bring up the idea of moving the election forward a
> > little so that it's actually the campaigning period that overlaps with
> > the PTG:
> > 
> >   TC Election from 2018-09-18T23:45 to 2018-09-27T23:45
> >   TC Campaigning from 2018-09-06T23:45 to 2018-09-18T23:45
> >   TC Nominations from 2018-08-30T23:45 to 2018-09-06T23:45
> > 
> > This gives us longer campaigning and election periods.
> > 
> > There are some advantages to doing this:
> > 
> >  * A panel style Q&A could be held formally or informally ;P
> >  * There's improved scope for incoming, outgoing and staying-put TC
> >members to interact in a high-bandwidth way.
> >  * In-person/private discussions with TC candidates/members.
> > 
> > However it isn't without downsides:
> > 
> >   * Election fatigue, We've just had the PTL elections and the UC
> > elections are currently running.  Less break before the TC elections
> > may not be a good thing.
> >   * TC candidates that can't travel to the PTG could be disadvantaged
> >   * The campaigning would all happen at the PTG and not on the mailing
> > list disadvantaging community members not at the PTG.
> > 
> > So thoughts?
> > 
> > Yours Tony.
> > 
> > [1] https://governance.openstack.org/tc/reference/charter.html
> 
> Who needs to make this decision? The current TC?

I believe that the TC delegated that to the Election WG [1] but the
governance here is a little gray/fuzzy.

So I kinda think that if the TC doesn't object I can propose the patch
to the election repo and you (as TC chair) can +/-1 it as you see fit.

Is it fair to ask we do that shortly after the next TC office hours?

Yours Tony.

[1] https://governance.openstack.org/tc/reference/working-groups.html




Re: [openstack-dev] [keystone][nova] Struggling with non-admin user on Queens install

2018-08-09 Thread Neil Jerram
It appears this is to do with Keystone v3-created users not having any role
assignment by default.  Big thanks to lbragstad for helping me to
understand this on IRC; he also provided this link as historical context
for this situation: https://bugs.launchpad.net/keystone/+bug/1662911.

In detail, I was creating a non-admin project and user like this:

tenant = self.keystone3.projects.create(username,
                                        "default",
                                        description=description,
                                        enabled=True)
user = self.keystone3.users.create(username,
                                   domain="default",
                                   project=tenant.id,
                                   password=password)

With just that, that user won't be able to do anything; you need to give it
a role assignment as well, for example:

admin_role = None
for role in self.keystone3.roles.list():
    _log.info("role: %r", role)
    if role.name == 'admin':
        admin_role = role
        break
assert admin_role is not None, "Couldn't find 'admin' role"
self.keystone3.roles.grant(admin_role, user=user,
                           project=tenant)

I still don't have a good understanding of what 'admin' within that project
really means, or why it means that that user can then do, e.g.
nova.images.list(); but at least I have a working system again.
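The lookup-and-grant flow above generalizes to a small, testable helper. This is only a sketch: the Role stub below is illustrative, standing in for the role objects returned by keystoneclient's roles.list():

```python
class Role:
    """Illustrative stand-in for a keystoneclient v3 role object,
    which exposes a .name attribute as used in the loop above."""
    def __init__(self, name):
        self.name = name


def find_role(roles, name):
    """Return the first role whose name matches, or None if absent."""
    for role in roles:
        if role.name == name:
            return role
    return None
```

Against a real cloud, this would be called as something like find_role(keystone3.roles.list(), 'admin') before the roles.grant() call.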

Regards,
 Neil


On Thu, Aug 9, 2018 at 4:42 PM Neil Jerram  wrote:

> I'd like to create a non-admin project and user that are able to do
> nova.images.list(), in a Queens install.  IIUC, all users should be able to
> do that.  I'm afraid I'm pretty lost and would appreciate any help.
>
> Define a function to test whether a particular set of credentials can do
> nova.images.list():
>
> from keystoneauth1 import identity
> from keystoneauth1 import session
> from novaclient.client import Client as NovaClient
>
> def attemp(auth):
> sess = session.Session(auth=auth)
> nova = NovaClient(2, session=sess)
> for i in nova.images.list():
> print i
>
> With an admin user, things work:
>
> >>> auth_url = "http://controller:5000/v3"
> >>> auth = identity.Password(auth_url=auth_url,
> >>>   username="admin",
> >>>   password="abcdef",
> >>>   project_name="admin",
> >>>   project_domain_id="default",
> >>>   user_domain_id="default")
> >>> attemp(auth)
> 
> 
>
> With a non-admin user with project_id specified, 401:
>
> >>> tauth = identity.Password(auth_url=auth_url,
> ...   username="tenant2",
> ...   password="password",
> ...   project_id="tenant2",
> ...   user_domain_id="default")
> >>> attemp(tauth)
> ...
> keystoneauth1.exceptions.http.Unauthorized: The request you have made
> requires authentication. (HTTP 401) (Request-ID:
> req-ed0630a4-7df0-4ba8-a4c4-de3ecb7b4d7d)
>
> With the same but without project_id, I get an empty service catalog
> instead:
>
> >>> tauth = identity.Password(auth_url=auth_url,
> ...   username="tenant2",
> ...   password="password",
> ...   #project_name="tenant2",
> ...   #project_domain_id="default",
> ...   user_domain_id="default")
> >>>
> >>> attemp(tauth)
> ...
> keystoneauth1.exceptions.catalog.EmptyCatalog: The service catalog is
> empty.
>
> Can anyone help?
>
> Regards,
>  Neil
>
>


Re: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do

2018-08-09 Thread Alex Schultz
On Thu, Aug 9, 2018 at 2:56 PM, Doug Hellmann  wrote:
> Excerpts from Alex Schultz's message of 2018-08-09 14:31:34 -0600:
>> Ahoy folks,
>>
>> I think it's time we come up with some basic rules/patterns on where
>> code lands when it comes to OpenStack related Ansible roles and as we
>> convert/export things. There was a recent proposal to create an
>> ansible-role-tempest[0] that would take what we use in
>> tripleo-quickstart-extras[1] and separate it for re-usability by
>> others.   So it was asked if we could work with the openstack-ansible
>> team and leverage the existing openstack-ansible-os_tempest[2].  It
>> turns out we have a few more already existing roles laying around as
>> well[3][4].
>>
>> What I would like to propose is that we as a community come together
>> to agree on specific patterns so that we can leverage the same roles
>> for some of the core configuration/deployment functionality while
>> still allowing for project-specific customization.  What I've
>> noticed across all the projects is that we have a few specific core
>> pieces of functionality that need to be handled (or skipped, as the case
>> may be) for each service being deployed.
>>
>> 1) software installation
>> 2) configuration management
>> 3) service management
>> 4) misc service actions
>>
>> Depending on which flavor of the deployment you're using, the content
>> of each of these may be different.  Just about the only thing that is
>> shared between them all would be the configuration management part.
>
> Does that make the 4 things separate roles, then? Isn't the role
> usually the unit of sharing between playbooks?
>

It can be, but it doesn't have to be.  The problem comes in with the
granularity at which you are defining the concept of the overall
action.  If you want a role to encompass all that is "nova", you could
have a single nova role that you invoke 5 different times to do the
different actions during the overall deployment. Or you could create a
role for nova-install, nova-config, nova-service, nova-cells, etc etc.
I think splitting them out into their own role is a bit too much in
terms of management.   In my particular case, openstack-ansible is already
creating a role to manage "nova".  So is there a way that I can
leverage part of their process within mine without having to duplicate
it?  You can pull in the task files themselves from a different role, so
technically I think you could define an ansible-role-tripleo-nova that
does some include_tasks: ../../os_nova/tasks/install.yaml, but then
we'd have to duplicate the variables in our playbook rather than
invoking a role with some parameters.

IMHO this structure is an issue with the general sharing concepts of
roles/tasks within ansible.  It's not really well defined and there's
not really a concept of inheritance so I can't really extend your
tasks with mine in more of a programming sense. I have to duplicate it
or do something like include a specific task file from another role.
Since I can't really extend a role in the traditional OO programming
sense, I would like to figure out how I can leverage only part of it.
This can be done by establishing ansible variables to trigger specific
actions or by actually including the raw tasks themselves.   Either
of these approaches needs some sort of contract to be established so the
other side won't get broken.   We had this in puppet via parameters which
are checked; there isn't really a similar concept in ansible, so it
seems that we need to agree on some community-established rules.

For tripleo, I would like to just invoke the os_nova role and pass in
something like install: false, service: false, config_dir:
/my/special/location/, config_data: {...} and have it spit out the configs.
Then my roles would actually leverage these via containers/etc.  Of
course most of this goes away if we had a unified (not file based)
configuration method across all services (openstack and non-openstack)
but we don't. :D
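The install/config/service toggling described here can be sketched abstractly. This is a minimal sketch in Python only: the phase names follow the 4-item list above, but phases_to_run and its flags are illustrative, not an existing openstack-ansible or tripleo API:

```python
# The four phases identified in the thread, in execution order.
PHASES = ("install", "config", "service", "misc")


def phases_to_run(overrides=None):
    """Given per-phase boolean overrides (e.g. {"install": False}),
    return the ordered list of phases a role invocation should run.
    Defaults: everything on except misc service actions."""
    flags = {"install": True, "config": True, "service": True, "misc": False}
    flags.update(overrides or {})
    return [phase for phase in PHASES if flags[phase]]
```

Under this model, a consumer like tripleo would invoke the shared role with install and service toggled off, keeping only the config-generation phase — which is the contract being proposed.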

Thanks,
-Alex

> Doug
>



Re: [openstack-dev] [all][tc][election] Timing of the Upcoming Stein TC election

2018-08-09 Thread Doug Hellmann
Excerpts from Tony Breeds's message of 2018-08-08 14:39:30 +1000:
> Hello all,
> With the PTL elections behind us it's time to start looking at the
> TC election.  Our charter[1] says:
> 
>   The election is held no later than 6 weeks prior to each OpenStack
>   Summit (on or before ‘S-6’ week), with elections held open for no less
>   than four business days.
> 
> Assuming we have the same structure that gives us a timeline of:
> 
>   Summit is at: 2018-11-13
>   Latest possible completion is at: 2018-10-02
>   Moving back to Tuesday: 2018-10-02
>   TC Election from 2018-09-25T23:45 to 2018-10-02T23:45
>   TC Campaigning from 2018-09-18T23:45 to 2018-09-25T23:45
>   TC Nominations from 2018-09-11T23:45 to 2018-09-18T23:45
> 
> This puts the bulk of the nomination period during the PTG, which is
> sub-optimal as the nominations cause a distraction from the PTG but more
> so because the campaigning will coincide with travel home, and some
> community members take vacation along with the PTG.
> 
> So I'd like to bring up the idea of moving the election forward a
> little so that it's actually the campaigning period that overlaps with
> the PTG:
> 
> >   TC Election from 2018-09-18T23:45 to 2018-09-27T23:45
>   TC Campaigning from 2018-09-06T23:45 to 2018-09-18T23:45
>   TC Nominations from 2018-08-30T23:45 to 2018-09-06T23:45
> 
> This gives us longer campaigning and election periods.
> 
> There are some advantages to doing this:
> 
>  * A panel style Q&A could be held formally or informally ;P
>  * There's improved scope for incoming, outgoing and staying-put TC
>members to interact in a high-bandwidth way.
>  * In-person/private discussions with TC candidates/members.
> 
> However it isn't without downsides:
> 
>   * Election fatigue, We've just had the PTL elections and the UC
> elections are currently running.  Less break before the TC elections
> may not be a good thing.
>   * TC candidates that can't travel to the PTG could be disadvantaged
>   * The campaigning would all happen at the PTG and not on the mailing
> list disadvantaging community members not at the PTG.
> 
> So thoughts?
> 
> Yours Tony.
> 
> [1] https://governance.openstack.org/tc/reference/charter.html

Who needs to make this decision? The current TC?

Doug



Re: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do

2018-08-09 Thread Doug Hellmann
Excerpts from Alex Schultz's message of 2018-08-09 14:31:34 -0600:
> Ahoy folks,
> 
> I think it's time we come up with some basic rules/patterns on where
> code lands when it comes to OpenStack related Ansible roles and as we
> convert/export things. There was a recent proposal to create an
> ansible-role-tempest[0] that would take what we use in
> tripleo-quickstart-extras[1] and separate it for re-usability by
> others.   So it was asked if we could work with the openstack-ansible
> team and leverage the existing openstack-ansible-os_tempest[2].  It
> turns out we have a few more already existing roles laying around as
> well[3][4].
> 
> What I would like to propose is that we as a community come together
> to agree on specific patterns so that we can leverage the same roles
> for some of the core configuration/deployment functionality while
> still allowing for project-specific customization.  What I've
> noticed across all the projects is that we have a few specific core
> pieces of functionality that need to be handled (or skipped, as the case
> may be) for each service being deployed.
> 
> 1) software installation
> 2) configuration management
> 3) service management
> 4) misc service actions
> 
> Depending on which flavor of the deployment you're using, the content
> of each of these may be different.  Just about the only thing that is
> shared between them all would be the configuration management part.

Does that make the 4 things separate roles, then? Isn't the role
usually the unit of sharing between playbooks?

Doug



Re: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do

2018-08-09 Thread Mohammed Naser
Hi Alex,

I am very much in favour of what you're bringing up.  We do have
multiple projects that leverage Ansible in different ways and we all
end up doing the same thing at the end.  The duplication of work is
not really beneficial for us as it takes away from our use-cases.

I believe that there is a certain number of steps that we all share
regardless of how we deploy, some of the things that come up to me
right away are:

- Configuring infrastructure services (i.e.: create vhosts for service
in rabbitmq, create databases for services, configure users for
rabbitmq, db, etc)
- Configuring inter-OpenStack services (i.e. keystone_authtoken
section, creating endpoints, etc and users for services)
- Configuring actual OpenStack services (i.e. the
/etc/<service>/<service>.conf file, with the ability of extending
options)
- Running CI/integration on a cloud (i.e. common role that literally
gets an admin user, password and auth endpoint and creates all
resources and does CI)

This would deduplicate a lot of work, and especially the last one, it
might be beneficial for more than Ansible-based projects, I can
imagine Puppet OpenStack leveraging this as well inside Zuul CI
(optionally)... However, I think that this is something which we should
discuss further at the PTG.  I think that there will be a tiny bit of
upfront work as we all standardize, but then it's a win for all involved
communities.

I would like to propose that deployment tools maybe sit down together
at the PTG, all share how we use Ansible to accomplish these tasks and
then perhaps we can work all together on abstracting some of these
concepts together for us to all leverage.

I'll let others chime in as well.

Regards,
Mohammed

On Thu, Aug 9, 2018 at 4:31 PM, Alex Schultz  wrote:
> Ahoy folks,
>
> I think it's time we come up with some basic rules/patterns on where
> code lands when it comes to OpenStack related Ansible roles and as we
> convert/export things. There was a recent proposal to create an
> ansible-role-tempest[0] that would take what we use in
> tripleo-quickstart-extras[1] and separate it for re-usability by
> others.   So it was asked if we could work with the openstack-ansible
> team and leverage the existing openstack-ansible-os_tempest[2].  It
> turns out we have a few more already existing roles laying around as
> well[3][4].
>
> What I would like to propose is that we as a community come together
> to agree on specific patterns so that we can leverage the same roles
> for some of the core configuration/deployment functionality while
> still allowing for project-specific customization.  What I've
> noticed across all the projects is that we have a few specific core
> pieces of functionality that need to be handled (or skipped, as the case
> may be) for each service being deployed.
>
> 1) software installation
> 2) configuration management
> 3) service management
> 4) misc service actions
>
> Depending on which flavor of the deployment you're using, the content
> of each of these may be different.  Just about the only thing that is
> shared between them all would be the configuration management part.
> To that, I was wondering if there would be a benefit to establishing a
> pattern within say openstack-ansible where we can disable items #1 and
> #3 but reuse #2 in projects like kolla/tripleo where we need to do
> some configuration generation.  If we can't establish a similar
> pattern it'll make it harder to reuse and contribute between the
> various projects.
>
> In tripleo we've recently created a bunch of ansible-role-tripleo-*
> repositories which we were planning on moving the tripleo specific
> tasks (for upgrades, etc) to and were hoping that we might be able to
> reuse the upstream ansible roles similar to how we've previously
> leverage the puppet openstack work for configurations.  So for us, it
> would be beneficial if we could maybe help align/contribute/guide the
> configuration management and maybe misc service action portions of the
> openstack-ansible roles, but be able to disable the actual software
> install/service management as that would be managed via our
> ansible-role-tripleo-* roles.
>
> Is this something that would be beneficial to further discuss at the
> PTG? Anyone have any additional suggestions/thoughts?
>
> My personal thoughts for tripleo would be that we'd have
> tripleo-ansible calls openstack-ansible-<service> for core config but
> package/service installation disabled and calls
> ansible-role-tripleo-<service> for tripleo specific actions such as
> opinionated packages/service configuration/upgrades.  Maybe this is
> too complex? But at the same time, do we need to come up with 3
> different ways to do this?
>
> Thanks,
> -Alex
>
> [0] https://review.openstack.org/#/c/589133/
> [1] 
> http://git.openstack.org/cgit/openstack/tripleo-quickstart-extras/tree/roles/validate-tempest
> [2] http://git.openstack.org/cgit/openstack/openstack-ansible-os_tempest/
> [3] 
> 

[openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do

2018-08-09 Thread Alex Schultz
Ahoy folks,

I think it's time we come up with some basic rules/patterns on where
code lands when it comes to OpenStack related Ansible roles and as we
convert/export things. There was a recent proposal to create an
ansible-role-tempest[0] that would take what we use in
tripleo-quickstart-extras[1] and separate it for re-usability by
others.   So it was asked if we could work with the openstack-ansible
team and leverage the existing openstack-ansible-os_tempest[2].  It
turns out we have a few more already existing roles laying around as
well[3][4].

What I would like to propose is that we as a community come together
to agree on specific patterns so that we can leverage the same roles
for some of the core configuration/deployment functionality while
still allowing for project-specific customization.  What I've
noticed across all the projects is that we have a few specific core
pieces of functionality that need to be handled (or skipped, as the case
may be) for each service being deployed.

1) software installation
2) configuration management
3) service management
4) misc service actions

Depending on which flavor of the deployment you're using, the content
of each of these may be different.  Just about the only thing that is
shared between them all would be the configuration management part.
To that, I was wondering if there would be a benefit to establishing a
pattern within say openstack-ansible where we can disable items #1 and
#3 but reuse #2 in projects like kolla/tripleo where we need to do
some configuration generation.  If we can't establish a similar
pattern it'll make it harder to reuse and contribute between the
various projects.

In tripleo we've recently created a bunch of ansible-role-tripleo-*
repositories which we were planning on moving the tripleo specific
tasks (for upgrades, etc) to and were hoping that we might be able to
reuse the upstream ansible roles similar to how we've previously
leverage the puppet openstack work for configurations.  So for us, it
would be beneficial if we could maybe help align/contribute/guide the
configuration management and maybe misc service action portions of the
openstack-ansible roles, but be able to disable the actual software
install/service management as that would be managed via our
ansible-role-tripleo-* roles.

Is this something that would be beneficial to further discuss at the
PTG? Anyone have any additional suggestions/thoughts?

My personal thoughts for tripleo would be that we'd have
tripleo-ansible calls openstack-ansible-<service> for core config but
package/service installation disabled and calls
ansible-role-tripleo-<service> for tripleo specific actions such as
opinionated packages/service configuration/upgrades.  Maybe this is
too complex? But at the same time, do we need to come up with 3
different ways to do this?

Thanks,
-Alex

[0] https://review.openstack.org/#/c/589133/
[1] 
http://git.openstack.org/cgit/openstack/tripleo-quickstart-extras/tree/roles/validate-tempest
[2] http://git.openstack.org/cgit/openstack/openstack-ansible-os_tempest/
[3] 
http://git.openstack.org/cgit/openstack/kolla-ansible/tree/ansible/roles/tempest
[4] http://git.openstack.org/cgit/openstack/ansible-role-tripleo-tempest



Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-09 Thread Jay Pipes
On Wed, Aug 1, 2018 at 11:15 AM, Ben Nemec  wrote:

> Hi,
>
> I'm having an issue with no valid host errors when starting instances and
> I'm struggling to figure out why.  I thought the problem was disk space,
> but I changed the disk_allocation_ratio and I'm still getting no valid
> host.  The host does have plenty of disk space free, so that shouldn't be a
> problem.
>
> However, I'm not even sure it's disk that's causing the failures because I
> can't find any information in the logs about why the no valid host is
> happening.  All I get from the scheduler is:
>
> "Got no allocation candidates from the Placement API. This may be a
> temporary occurrence as compute nodes start up and begin reporting
> inventory to the Placement service."
>
> While in placement I see:
>
> 2018-08-01 15:02:22.062 20 DEBUG nova.api.openstack.placement.requestlog
> [req-0a830ce9-e2af-413a-86cb-b47ae129b676 fc44fe5cefef43f4b921b9123c95e694
> b07e6dc2e6284b00ac7070aa3457c15e - default default] Starting request:
> 10.2.2.201 "GET /placement/allocation_candidat
> es?limit=1000=DISK_GB%3A20%2CMEMORY_MB%3A2048%2CVCPU%3A1"
> __call__ /usr/lib/python2.7/site-packages/nova/api/openstack/placemen
> t/requestlog.py:38
> 2018-08-01 15:02:22.103 20 INFO nova.api.openstack.placement.requestlog
> [req-0a830ce9-e2af-413a-86cb-b47ae129b676 fc44fe5cefef43f4b921b9123c95e694
> b07e6dc2e6284b00ac7070aa3457c15e - default default] 10.2.2.201 "GET
> /placement/allocation_candidates?limit=1000&resources=DISK_
> GB%3A20%2CMEMORY_MB%3A2048%2CVCPU%3A1" status: 200 len: 53 microversion:
> 1.25
>
> Basically it just seems to be logging that it got a request, but there's
> no information about what it did with that request.
>
> So where do I go from here?  Is there somewhere else I can look to see why
> placement returned no candidates?
>
>
Hi again, Ben, hope you are enjoying your well-earned time off! :)

I've created a patch that (hopefully) will address some of the difficulty
that folks have had in diagnosing which parts of a request caused all
providers to be filtered out from the return of GET /allocation_candidates:

https://review.openstack.org/#/c/590041

This patch changes two primary things:

1) Query-splitting

The patch splits the existing monster SQL query that was being used for
querying for all providers that matched all requested resources, required
traits, forbidden traits and required aggregate associations into doing
multiple queries, one for each requested resource. While this does increase
the number of database queries executed for each call to GET
/allocation_candidates, the changes allow better visibility into what parts
of the request cause an exhaustion of matching providers. We've benchmarked
the new patch and have shown the performance impact of doing 3 queries
versus 1 (when there is a request for 3 resources -- VCPU, RAM and disk) is
minimal (a few extra milliseconds for execution against a DB with 1K
providers having inventory of all three resource classes).
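To make the query-splitting idea above concrete, here is a minimal, self-contained sketch (hypothetical code, not the actual patch): the candidate provider set is narrowed one requested resource class at a time, and the running count is reported at each step so you can see which resource exhausted the matches. The inventory data and helper names are invented for illustration.

```python
# Fake inventory standing in for the placement DB:
# provider name -> {resource class: free capacity}
INVENTORY = {
    'cn1': {'VCPU': 8, 'MEMORY_MB': 4096, 'DISK_GB': 100},
    'cn2': {'VCPU': 2, 'MEMORY_MB': 1024, 'DISK_GB': 50},
    'cn3': {'VCPU': 4, 'MEMORY_MB': 8192, 'DISK_GB': 10},
}

def providers_with_capacity(rc, amount):
    # Stands in for one SQL query against the inventories table,
    # instead of one monster join covering every requested resource.
    return {p for p, inv in INVENTORY.items() if inv.get(rc, 0) >= amount}

def allocation_candidates(requested):
    matching = None
    for rc, amount in requested.items():
        found = providers_with_capacity(rc, amount)
        # Per-iteration diagnostic output, analogous to the patch's debug logs.
        print('found %d providers with capacity for the requested %d %s'
              % (len(found), amount, rc))
        matching = found if matching is None else matching & found
        if not matching:
            print('%s request exhausted all matching providers' % rc)
            return set()
    return matching

print(sorted(allocation_candidates(
    {'VCPU': 1, 'MEMORY_MB': 2048, 'DISK_GB': 20})))
```

Running this shows the counts shrinking (3, then 2, then 1 provider), which is exactly the visibility the real patch is after.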

2) Diagnostic logging output

The patch adds debug log output within each loop iteration, so there is now
logging output that shows how many matching providers were found for each
resource class involved in the request. The output looks like this in the
logs:

[req-2d30faa8-4190-4490-a91e-610045530140] inside VCPU request loop. before applying trait and aggregate filters, found 12 matching providers
[req-2d30faa8-4190-4490-a91e-610045530140] found 12 providers with capacity for the requested 1 VCPU.
[req-2d30faa8-4190-4490-a91e-610045530140] inside MEMORY_MB request loop. before applying trait and aggregate filters, found 9 matching providers
[req-2d30faa8-4190-4490-a91e-610045530140] found 9 providers with capacity for the requested 64 MEMORY_MB. before loop iteration we had 12 matches.
[req-2d30faa8-4190-4490-a91e-610045530140] RequestGroup(use_same_provider=False, resources={MEMORY_MB:64, VCPU:1}, traits=[], aggregates=[]) (suffix '') returned 9 matches

If a request includes required traits, forbidden traits or required
aggregate associations, there are additional log messages showing how many
matching providers were found after applying the trait or aggregate
filtering set operation (in other words, the log output shows the impact of
the trait filter or aggregate filter in much the same way that the existing
FilterScheduler logging shows the "before and after" impact that a
particular filter had on a request process).

Have a look at the patch in question and please feel free to add your
feedback and comments on ways this can be improved to meet your needs.

Best,
-jay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][election][tc] Leaderless projects.

2018-08-09 Thread Doug Hellmann
Excerpts from Tony Breeds's message of 2018-08-01 10:32:50 +1000:

[snip]

> Need appointment  : 8 (Dragonflow Freezer Loci Packaging_Rpm RefStack
>Searchlight Trove Winstackers)

To summarize the situation for these teams as it stands now:

Omer Anson volunteered to serve as PTL for Dragonflow another term. The
patch to appoint him is https://review.openstack.org/589939

Changcai Geng has offered to be PTL for Freezer
(https://review.openstack.org/#/c/590071). There is still a proposal
to remove the project from governance
(https://review.openstack.org/588645).  We need the TC members to
vote on the proposals above to indicate which direction they want
to take.  Several folks suggested retaining the project provisionally,
but we don't have a formal proposal for that.  If you want to retain
the team provisionally, please say so when you vote in favor of
confirming Changcai Geng as PTL.

Sam Yaple has volunteered to serve as the PTL of Loci. The patch to
appoint him is https://review.openstack.org/#/c/590488/

Dirk Mueller volunteered to serve as the PTL of the packaging-rpm team.
The patch to confirm him is https://review.openstack.org/#/c/588617/

The repositories currently owned by the RefStack team are being moved to
the Interop Working Group in https://review.openstack.org/#/c/590179/

The proposal to remove the Searchlight project
(https://review.openstack.org/588644) has one comment indicating
potential interest in taking over maintenance of the project. We are
waiting for a formal proposal to designate a PTL before deciding whether
to keep the team under governance.

Dariusz Krol has volunteered to serve as PTL for Trove. The patch to
confirm him is https://review.openstack.org/#/c/588510/

Claudiu Belu has volunteered to serve as PTL for the Winstackers team.
The patch to confirm him is https://review.openstack.org/#/c/590386/

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possible deprecation of Nova's legacy notification interface

2018-08-09 Thread Matt Riedemann

On 8/9/2018 1:30 PM, Graham Hayes wrote:

You could just send the deprecation notice, and we could deprecate
designate-sink if no one came forward to update it - that seems fairer
to push the burden on to the people who actually use the feature, not
other teams maintaining legacy stuff. Does that seem overly harsh?


It's harsh depending on my mood the day you ask me. :) Nova has done 
long-running deprecations for things that we know we don't want people 
building on, like nova-network and cells v1. And then we've left those 
around for a long time while we work on whatever it takes to eventually 
drop them. I think we could do the same here, and designate-sink could 
log a warning once on startup or something that says, "This service 
relies on the legacy nova notification format which is deprecated and 
being replaced with the versioned notification format. Removal is TBD 
but if you are dependent on this service/feature, we encourage you to 
help work on the transition for designate-sink to use nova versioned 
notifications." I would hold off on doing that until after we've 
actually agreed to deprecate the legacy notifications at the PTG.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] heat 11.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for heat for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/heat/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/heat/log/?h=stable/rocky

Release notes for heat can be found at:

http://docs.openstack.org/releasenotes/heat/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] cinder 13.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for cinder for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/cinder/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/cinder/log/?h=stable/rocky

Release notes for cinder can be found at:

http://docs.openstack.org/releasenotes/cinder/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possible deprecation of Nova's legacy notification interface

2018-08-09 Thread Graham Hayes
On 09/08/2018 15:27, Matt Riedemann wrote:
> On 8/9/2018 8:44 AM, Graham Hayes wrote:
>> Designate has no plans to swap or add support for the new interface in
>> the near or medium term - we are more than willing to take patches, but
>> we do not have the people power to do it ourselves.
>>
>> Some of our users do use the old interface a lot - designate-sink
>> is quite heavily embedded in some workflows.
> 
> This is what I suspected would be the answer from most projects.
> 
> I was very half-assedly wondering if we could write some kind of
> translation middleware library that allows your service to listen for
> versioned notifications and translate them to legacy notifications. Then
> we could apply that generically across projects that don't have time for
> a big re-write while allowing nova to drop the legacy compat code (after
> some period of deprecation, I was thinking at least a year).
> 
> It should be pretty simple to write a dumb versioned->unversioned
> payload mapping for each legacy notification, but there might be more
> sophisticated ways of doing that using some kind of schema or template
> instead. Just thinking out loud.
> 
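The quoted idea of a "dumb versioned->unversioned payload mapping" could be sketched as a per-event table of field paths, as below. This is purely illustrative: the event name, field paths, and payload shape are invented and do not reflect the real nova notification schemas.

```python
# Hypothetical mapping table: legacy field name -> path into the
# versioned payload. One entry per notification type.
LEGACY_FIELD_MAP = {
    'instance.create.end': {
        'instance_id': ('nova_object.data', 'uuid'),
        'display_name': ('nova_object.data', 'display_name'),
        'state': ('nova_object.data', 'state'),
    },
}

def to_legacy(event_type, versioned_payload):
    """Flatten a versioned payload back into a legacy-style dict."""
    legacy = {}
    for legacy_key, path in LEGACY_FIELD_MAP[event_type].items():
        value = versioned_payload
        for part in path:  # walk the nested structure
            value = value[part]
        legacy[legacy_key] = value
    return legacy

sample = {'nova_object.data': {'uuid': 'abc-123',
                               'display_name': 'vm1',
                               'state': 'active'}}
print(to_legacy('instance.create.end', sample))
```

A middleware library built on something like this could subscribe to versioned notifications and re-emit legacy-format ones, which is the generic compatibility shim being proposed.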

I have no objection to that - and I wish we had the people to move to
the new formats - I know maintaining legacy features like this is
extra work no-one needs.

Thinking out loud 

You could just send the deprecation notice, and we could deprecate
designate-sink if no one came forward to update it - that seems fairer
to push the burden on to the people who actually use the feature, not
other teams maintaining legacy stuff. Does that seem overly harsh?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement][oslo] Excessive WARNING level log messages in placement-api

2018-08-09 Thread Matt Riedemann

On 8/9/2018 12:53 PM, Lance Bragstad wrote:

I could have sworn I created a bug in oslo.policy for this at one
point for the same reason Jay mentions it, but I guess not.


This? https://bugs.launchpad.net/oslo.policy/+bug/1421863



Not unless I was time traveling (note the date that was reported).

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement][oslo] Excessive WARNING level log messages in placement-api

2018-08-09 Thread Doug Hellmann
Excerpts from Lance Bragstad's message of 2018-08-09 12:55:52 -0500:
> 
> On 08/09/2018 12:48 PM, Doug Hellmann wrote:
> > Excerpts from Matt Riedemann's message of 2018-08-09 12:18:14 -0500:
> >> On 8/9/2018 11:47 AM, Doug Hellmann wrote:
> >>> Excerpts from Jay Pipes's message of 2018-08-08 22:53:54 -0400:
>  For evidence, see:
> 
>  http://logs.openstack.org/41/590041/1/check/tempest-full-py3/db08dec/controller/logs/screen-placement-api.txt.gz?level=WARNING
> 
>  thousands of these are filling the logs with WARNING-level log messages,
>  making it difficult to find anything:
> 
>  Aug 08 22:17:30.837557 ubuntu-xenial-inap-mtl01-0001226060
>  devstack@placement-api.service[14403]: WARNING py.warnings
>  [req-a809b022-59af-4628-be73-488cfec3187d
>  req-d46cb1f0-431f-490f-955b-b9c2cd9f6437 service placement]
>  /usr/local/lib/python3.5/dist-packages/oslo_policy/policy.py:896:
>  UserWarning: Policy placement:resource_providers:list failed scope
>  check. The token used to make the request was project scoped but the
>  policy requires ['system'] scope. This behavior may change in the future
>  where using the intended scope is required
>  Aug 08 22:17:30.837800 ubuntu-xenial-inap-mtl01-0001226060
>  devstack@placement-api.service[14403]:   warnings.warn(msg)
>  Aug 08 22:17:30.838067 ubuntu-xenial-inap-mtl01-0001226060
>  devstack@placement-api.service[14403]:
> 
>  Is there any way we can get rid of these?
> 
>  Thanks,
>  -jay
> 
> >>> It looks like those are coming out of the policy library? Maybe file a
> >>> bug there. I added "oslo" to the subject line to get the team's
> >>> attention.
> >>>
> >>> This feels like something we could fix and backport to rocky.
> >>>
> >>> Doug
> >> I could have sworn I created a bug in oslo.policy for this at one point 
> >> for the same reason Jay mentions it, but I guess not.
> >>
> >> We could simply, on the nova side, add a warnings filter to only log 
> >> this once.
> >>
> > What level should it be logged at in the policy library? Should it be
> > logged there at all?
> 
> The initial intent behind logging was to make sure operators knew that
> they needed to make a role assignment adjustment in order to be
> compatible moving forward. I can investigate a way to log things at
> least once in oslo.policy though. I fear not logging it at all would
> cause failures in upgrade since operators wouldn't know they need to
> make that adjustment.

That sounds like a good check to add to the upgrade test tools as part
of the goal for Stein.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement][oslo] Excessive WARNING level log messages in placement-api

2018-08-09 Thread Lance Bragstad


On 08/09/2018 12:48 PM, Doug Hellmann wrote:
> Excerpts from Matt Riedemann's message of 2018-08-09 12:18:14 -0500:
>> On 8/9/2018 11:47 AM, Doug Hellmann wrote:
>>> Excerpts from Jay Pipes's message of 2018-08-08 22:53:54 -0400:
 For evidence, see:

 http://logs.openstack.org/41/590041/1/check/tempest-full-py3/db08dec/controller/logs/screen-placement-api.txt.gz?level=WARNING

 thousands of these are filling the logs with WARNING-level log messages,
 making it difficult to find anything:

 Aug 08 22:17:30.837557 ubuntu-xenial-inap-mtl01-0001226060
 devstack@placement-api.service[14403]: WARNING py.warnings
 [req-a809b022-59af-4628-be73-488cfec3187d
 req-d46cb1f0-431f-490f-955b-b9c2cd9f6437 service placement]
 /usr/local/lib/python3.5/dist-packages/oslo_policy/policy.py:896:
 UserWarning: Policy placement:resource_providers:list failed scope
 check. The token used to make the request was project scoped but the
 policy requires ['system'] scope. This behavior may change in the future
 where using the intended scope is required
 Aug 08 22:17:30.837800 ubuntu-xenial-inap-mtl01-0001226060
 devstack@placement-api.service[14403]:   warnings.warn(msg)
 Aug 08 22:17:30.838067 ubuntu-xenial-inap-mtl01-0001226060
 devstack@placement-api.service[14403]:

 Is there any way we can get rid of these?

 Thanks,
 -jay

>>> It looks like those are coming out of the policy library? Maybe file a
>>> bug there. I added "oslo" to the subject line to get the team's
>>> attention.
>>>
>>> This feels like something we could fix and backport to rocky.
>>>
>>> Doug
>> I could have sworn I created a bug in oslo.policy for this at one point 
>> for the same reason Jay mentions it, but I guess not.
>>
>> We could simply, on the nova side, add a warnings filter to only log 
>> this once.
>>
> What level should it be logged at in the policy library? Should it be
> logged there at all?

The initial intent behind logging was to make sure operators knew that
they needed to make a role assignment adjustment in order to be
compatible moving forward. I can investigate a way to log things at
least once in oslo.policy though. I fear not logging it at all would
cause failures in upgrade since operators wouldn't know they need to
make that adjustment.

>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [nova][placement][oslo] Excessive WARNING level log messages in placement-api

2018-08-09 Thread Lance Bragstad


On 08/09/2018 12:18 PM, Matt Riedemann wrote:
> On 8/9/2018 11:47 AM, Doug Hellmann wrote:
>> Excerpts from Jay Pipes's message of 2018-08-08 22:53:54 -0400:
>>> For evidence, see:
>>>
>>> http://logs.openstack.org/41/590041/1/check/tempest-full-py3/db08dec/controller/logs/screen-placement-api.txt.gz?level=WARNING
>>>
>>>
>>> thousands of these are filling the logs with WARNING-level log
>>> messages,
>>> making it difficult to find anything:
>>>
>>> Aug 08 22:17:30.837557 ubuntu-xenial-inap-mtl01-0001226060
>>> devstack@placement-api.service[14403]: WARNING py.warnings
>>> [req-a809b022-59af-4628-be73-488cfec3187d
>>> req-d46cb1f0-431f-490f-955b-b9c2cd9f6437 service placement]
>>> /usr/local/lib/python3.5/dist-packages/oslo_policy/policy.py:896:
>>> UserWarning: Policy placement:resource_providers:list failed scope
>>> check. The token used to make the request was project scoped but the
>>> policy requires ['system'] scope. This behavior may change in the
>>> future
>>> where using the intended scope is required
>>> Aug 08 22:17:30.837800 ubuntu-xenial-inap-mtl01-0001226060
>>> devstack@placement-api.service[14403]:   warnings.warn(msg)
>>> Aug 08 22:17:30.838067 ubuntu-xenial-inap-mtl01-0001226060
>>> devstack@placement-api.service[14403]:
>>>
>>> Is there any way we can get rid of these?
>>>
>>> Thanks,
>>> -jay
>>>
>> It looks like those are coming out of the policy library? Maybe file a
>> bug there. I added "oslo" to the subject line to get the team's
>> attention.
>>
>> This feels like something we could fix and backport to rocky.
>>
>> Doug
>
> I could have sworn I created a bug in oslo.policy for this at one
> point for the same reason Jay mentions it, but I guess not.
>

This? https://bugs.launchpad.net/oslo.policy/+bug/1421863

>
> We could simply, on the nova side, add a warnings filter to only log
> this once.
>






Re: [openstack-dev] [Release-job-failures][masakari][release] Pre-release of openstack/masakari failed

2018-08-09 Thread Doug Hellmann
Excerpts from zuul's message of 2018-08-09 17:23:01 +:
> Build failed.
> 
> - release-openstack-python 
> http://logs.openstack.org/84/84135048cb372cbd11080fc27151949cee4e52d1/pre-release/release-openstack-python/095990b/
>  : FAILURE in 8m 57s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED
> 

The RC1 build for Masakari failed with this error:

  error: can't copy 'etc/masakari/masakari-custom-recovery-methods.conf':
  doesn't exist or not a regular file

The packaging files need to be fixed so a new release candidate can be
prepared. The changes will need to be made on master and then backported
to the new stable/rocky branch.

Doug

http://logs.openstack.org/84/84135048cb372cbd11080fc27151949cee4e52d1/pre-release/release-openstack-python/095990b/ara-report/result/7459d483-48d8-414f-8830-d6411158f9a2/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement][oslo] Excessive WARNING level log messages in placement-api

2018-08-09 Thread Doug Hellmann
Excerpts from Matt Riedemann's message of 2018-08-09 12:18:14 -0500:
> On 8/9/2018 11:47 AM, Doug Hellmann wrote:
> > Excerpts from Jay Pipes's message of 2018-08-08 22:53:54 -0400:
> >> For evidence, see:
> >>
> >> http://logs.openstack.org/41/590041/1/check/tempest-full-py3/db08dec/controller/logs/screen-placement-api.txt.gz?level=WARNING
> >>
> >> thousands of these are filling the logs with WARNING-level log messages,
> >> making it difficult to find anything:
> >>
> >> Aug 08 22:17:30.837557 ubuntu-xenial-inap-mtl01-0001226060
> >> devstack@placement-api.service[14403]: WARNING py.warnings
> >> [req-a809b022-59af-4628-be73-488cfec3187d
> >> req-d46cb1f0-431f-490f-955b-b9c2cd9f6437 service placement]
> >> /usr/local/lib/python3.5/dist-packages/oslo_policy/policy.py:896:
> >> UserWarning: Policy placement:resource_providers:list failed scope
> >> check. The token used to make the request was project scoped but the
> >> policy requires ['system'] scope. This behavior may change in the future
> >> where using the intended scope is required
> >> Aug 08 22:17:30.837800 ubuntu-xenial-inap-mtl01-0001226060
> >> devstack@placement-api.service[14403]:   warnings.warn(msg)
> >> Aug 08 22:17:30.838067 ubuntu-xenial-inap-mtl01-0001226060
> >> devstack@placement-api.service[14403]:
> >>
> >> Is there any way we can get rid of these?
> >>
> >> Thanks,
> >> -jay
> >>
> > It looks like those are coming out of the policy library? Maybe file a
> > bug there. I added "oslo" to the subject line to get the team's
> > attention.
> > 
> > This feels like something we could fix and backport to rocky.
> > 
> > Doug
> 
> I could have sworn I created a bug in oslo.policy for this at one point 
> for the same reason Jay mentions it, but I guess not.
> 
> We could simply, on the nova side, add a warnings filter to only log 
> this once.
> 

What level should it be logged at in the policy library? Should it be
logged there at all?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement][oslo] Excessive WARNING level log messages in placement-api

2018-08-09 Thread Matt Riedemann

On 8/9/2018 12:18 PM, Matt Riedemann wrote:
We could simply, on the nova side, add a warnings filter to only log 
this once.


Let's see if this works:

https://review.openstack.org/#/c/590445/
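Independent of that patch, the kind of filter being suggested can be sketched with the stdlib warnings module: a "once" action collapses repeated matching warnings into a single emission. The message text below is copied from the log excerpt; the sketch is not the actual nova change.

```python
import warnings

# Emit warnings matching the oslo.policy scope-check message only once,
# no matter how many times they fire.
warnings.filterwarnings('once', message='.*failed scope.*check.*',
                        category=UserWarning)

for _ in range(1000):
    warnings.warn("Policy placement:resource_providers:list failed scope "
                  "check. The token used to make the request was project "
                  "scoped but the policy requires ['system'] scope.",
                  UserWarning)
# Only the first warning reaches the output; the rest are suppressed.
```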

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Patches to speed up plan operations

2018-08-09 Thread Ian Main
Hey Jirka!

I wasn't aware of the other options available.  Basically yes, you now just
need to upload a tarball of the templates to swift.  You can see in the
client:

-tarball.tarball_extract_to_swift_container(
-swift_client, tmp_tarball.name, container_name)
+_upload_file(swift_client, container_name,
+ constants.TEMPLATES_TARBALL_NAME, tmp_tarball.name)

Other than that it should be the same.  I'm not sure what files the UI
wants to look at in swift, but certainly some are still there.  Basically
any file that exists
in swift is not overwritten by the contents of the tar file.  So if a file
exists in swift it takes priority.
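The precedence rule described above (an individual swift object always wins over the copy inside the tarball) can be illustrated with a small self-contained sketch. The helper names are invented, and plain dicts plus in-memory tarballs stand in for swift itself.

```python
import io
import tarfile

def make_plan_tarball(files):
    """files: dict of path -> bytes. Returns gzipped tarball bytes,
    standing in for the single bulk object uploaded to swift."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode='w:gz') as tar:
        for path, data in files.items():
            info = tarfile.TarInfo(name=path)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def get_plan_file(path, individual_objects, tarball_bytes):
    # Any file that exists as an individual swift object takes priority
    # over the contents of the tarball.
    if path in individual_objects:
        return individual_objects[path]
    with tarfile.open(fileobj=io.BytesIO(tarball_bytes), mode='r:gz') as tar:
        return tar.extractfile(path).read()

tarball = make_plan_tarball({'overcloud.yaml': b'from-tarball',
                             'roles_data.yaml': b'defaults'})
overrides = {'roles_data.yaml': b'customized'}
print(get_plan_file('overcloud.yaml', overrides, tarball))   # tarball copy
print(get_plan_file('roles_data.yaml', overrides, tarball))  # override wins
```

This is why the change stays backwards compatible: custom per-file uploads keep working, while the common case reads one object instead of hundreds.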

I'll try to catch you on irc but I know our timezones are quite different.

Thanks for looking into it!

   Ian



On Wed, Aug 8, 2018 at 10:46 AM Jiri Tomasek  wrote:

> Hello, thanks for bringing this up.
>
> I am going to try to test this patch with TripleO UI tomorrow. Without
> properly looking at the patch, questions I would like to get answers for
> are:
>
> How is this going to affect ways to create/update deployment plan?
> Currently user is able to create deployment plan by:
> - not providing any files - creating deployment plan from default files in
> /usr/share/openstack-tripleo-heat-templates
> - providing a tarball
> - providing a local directory of files to create plan from
> - providing a git repository link
>
> These changes will have an impact on certain TripleO UI operations where
> (in rare cases) we reach directly for a swift object
>
> IIUC it seems we are deciding to consider deployment plan as a black box
> packed in a tarball, which I quite like, we'll need to provide a standard
> way how to provide custom files to the plan.
>
> How is this going to affect CLI vs GUI workflow as currently CLI creates
> the plan as part of the deploy command, rather than GUI starts its workflow
> by selecting/creating deployment plan and whole configuration of the plan
> is performed on the deployment plan. Then the deployment plan gets
> deployed. We are aiming to introduce CLI commands to consolidate the
> behaviour of both clients to what GUI workflow is currently.
>
> I am going to try to find answers to these questions and identify
> potential problems in next couple of days.
>
> -- Jirka
>
>
> On Tue, Aug 7, 2018 at 5:34 PM Dan Prince  wrote:
>
>> Thanks for taking this on Ian! I'm fully on board with the effort. I
>> like the consolidation and performance improvements. Storing t-h-t
>> templates in Swift worked okay 3-4 years ago. Now that we have more
>> templates, many of which need .j2 rendering the storage there has
>> become quite a bottleneck.
>>
>> Additionally, since we'd be sending commands to Heat via local
>> filesystem template storage we could consider using softlinks again
>> within t-h-t which should help with refactoring and deprecation
>> efforts.
>>
>> Dan
>> On Wed, Aug 1, 2018 at 7:35 PM Ian Main  wrote:
>> >
>> >
>> > Hey folks!
>> >
>> > So I've been working on some patches to speed up plan operations in
>> TripleO.  This was originally driven by the UI needing to be able to
>> perform a 'plan upload' in something less than several minutes. :)
>> >
>> > https://review.openstack.org/#/c/581153/
>> > https://review.openstack.org/#/c/581141/
>> >
>> > I have a functioning set of patches, and it actually cuts over 2
>> minutes off the overcloud deployment time.
>> >
>> > Without patch:
>> > + openstack overcloud plan create --templates
>> /home/stack/tripleo-heat-templates/ overcloud
>> > Creating Swift container to store the plan
>> > Creating plan from template files in:
>> /home/stack/tripleo-heat-templates/
>> > Plan created.
>> > real3m3.415s
>> >
>> > With patch:
>> > + openstack overcloud plan create --templates
>> /home/stack/tripleo-heat-templates/ overcloud
>> > Creating Swift container to store the plan
>> > Creating plan from template files in:
>> /home/stack/tripleo-heat-templates/
>> > Plan created.
>> > real0m44.694s
>> >
>> > This is on VMs.  On real hardware it now takes something like 15-20
>> seconds to do the plan upload which is much more manageable from the UI
>> standpoint.
>> >
>> > Some things about what this patch does:
>> >
>> > - It makes use of process-templates.py (written for the undercloud) to
>> process the jinjafied templates.  This reduces replication with the
>> existing version in the code base and is very fast as it's all done on
>> local disk.
>> > - It stores the bulk of the templates as a tarball in swift.  Any
>> individual files in swift take precedence over the contents of the tarball
>> so it should be backwards compatible.  This is a great speed up as we're
>> not accessing a lot of individual files in swift.
>> >
>> > There's still some work to do; cleaning up and fixing the unit tests,
>> testing upgrades etc.  I just wanted to get some feedback on the general
>> idea and hopefully some reviews and/or help - especially with the unit test
>> 

Re: [openstack-dev] [nova][placement][oslo] Excessive WARNING level log messages in placement-api

2018-08-09 Thread Matt Riedemann

On 8/9/2018 11:47 AM, Doug Hellmann wrote:

Excerpts from Jay Pipes's message of 2018-08-08 22:53:54 -0400:

For evidence, see:

http://logs.openstack.org/41/590041/1/check/tempest-full-py3/db08dec/controller/logs/screen-placement-api.txt.gz?level=WARNING

thousands of these are filling the logs with WARNING-level log messages,
making it difficult to find anything:

Aug 08 22:17:30.837557 ubuntu-xenial-inap-mtl01-0001226060
devstack@placement-api.service[14403]: WARNING py.warnings
[req-a809b022-59af-4628-be73-488cfec3187d
req-d46cb1f0-431f-490f-955b-b9c2cd9f6437 service placement]
/usr/local/lib/python3.5/dist-packages/oslo_policy/policy.py:896:
UserWarning: Policy placement:resource_providers:list failed scope
check. The token used to make the request was project scoped but the
policy requires ['system'] scope. This behavior may change in the future
where using the intended scope is required
Aug 08 22:17:30.837800 ubuntu-xenial-inap-mtl01-0001226060
devstack@placement-api.service[14403]:   warnings.warn(msg)
Aug 08 22:17:30.838067 ubuntu-xenial-inap-mtl01-0001226060
devstack@placement-api.service[14403]:

Is there any way we can get rid of these?

Thanks,
-jay


It looks like those are coming out of the policy library? Maybe file a
bug there. I added "oslo" to the subject line to get the team's
attention.

This feels like something we could fix and backport to rocky.

Doug


I could have sworn I created a bug in oslo.policy for this at one point 
for the same reason Jay mentions it, but I guess not.


We could simply, on the nova side, add a warnings filter to only log 
this once.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement][oslo] Excessive WARNING level log messages in placement-api

2018-08-09 Thread Doug Hellmann
Excerpts from Jay Pipes's message of 2018-08-08 22:53:54 -0400:
> For evidence, see:
> 
> http://logs.openstack.org/41/590041/1/check/tempest-full-py3/db08dec/controller/logs/screen-placement-api.txt.gz?level=WARNING
> 
> thousands of these are filling the logs with WARNING-level log messages, 
> making it difficult to find anything:
> 
> Aug 08 22:17:30.837557 ubuntu-xenial-inap-mtl01-0001226060 
> devstack@placement-api.service[14403]: WARNING py.warnings 
> [req-a809b022-59af-4628-be73-488cfec3187d 
> req-d46cb1f0-431f-490f-955b-b9c2cd9f6437 service placement] 
> /usr/local/lib/python3.5/dist-packages/oslo_policy/policy.py:896: 
> UserWarning: Policy placement:resource_providers:list failed scope 
> check. The token used to make the request was project scoped but the 
> policy requires ['system'] scope. This behavior may change in the future 
> where using the intended scope is required
> Aug 08 22:17:30.837800 ubuntu-xenial-inap-mtl01-0001226060 
> devstack@placement-api.service[14403]:   warnings.warn(msg)
> Aug 08 22:17:30.838067 ubuntu-xenial-inap-mtl01-0001226060 
> devstack@placement-api.service[14403]:
> 
> Is there any way we can get rid of these?
> 
> Thanks,
> -jay
> 

It looks like those are coming out of the policy library? Maybe file a
bug there. I added "oslo" to the subject line to get the team's
attention.

This feels like something we could fix and backport to rocky.

Doug



Re: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possible deprecation of Nova's legacy notification interface

2018-08-09 Thread Jeremy Stanley
On 2018-08-09 11:41:38 +0200 (+0200), Balázs Gibizer wrote:
[...]
> There is a list of projects (we know of) consuming the legacy
> interface and we would like to know if any of these projects plan
> to switch over to the new interface in the foreseeable future so
> we can make a well informed decision about the deprecation.
> 
> 
> * Searchlight [3] - it is in maintenance mode so I guess the answer is no
[...]

With https://review.openstack.org/588644 looking likely to merge and
Searchlight already not slated for inclusion in Rocky, I recommend
not basing your decision on what it is or isn't using at this point.
If someone wants to resurrect it, updating things like the use of
the Nova notification interface seem like the bare minimum amount of
work they should commit to doing anyway.
-- 
Jeremy Stanley




[openstack-dev] [all][api] POST /api-sig/news

2018-08-09 Thread Chris Dent



Greetings OpenStack community,

As is our recent custom, short meeting this week. Our main topic of 
conversation was discussing the planning etherpad [7] for the API-SIG gathering 
at the Denver PTG. If you will be there, and have topics of interest, please 
add them to the etherpad.

There are no new guidelines under review, but there is a stack of changes which 
do some reformatting and explicitly link to useful resources [8].

As always if you're interested in helping out, in addition to coming to the 
meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for changes 
over time. If you find something that's not quite right, submit a patch [6] to 
fix it.
* Have you done something for which you think guidance would have made things 
easier but couldn't find any? Submit a patch and help others [6].

# Newly Published Guidelines

* None

# API Guidelines Proposed for Freeze

* None

# Guidelines that are ready for wider review by the whole community.

* None

# Guidelines Currently Under Review [3]

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are 
developing or changing, please address your concerns in an email to the OpenStack 
developer mailing list[1] with the tag "[api]" in the subject. In your email, 
you should include any relevant reviews, links, and comments to help guide the discussion 
of the specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our wiki page 
[4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] https://etherpad.openstack.org/p/api-sig-stein-ptg
[8] https://review.openstack.org/#/c/589131/

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate

2018-08-09 Thread Doug Hellmann
Excerpts from Andrey Kurilin's message of 2018-08-09 13:35:53 +0300:
> Hi Doug!
> 
> I'm ready to port our job to the openstack-zuul-jobs repo, but I expect that
> the community will not accept it.
> 
> The result of the rally unit tests differs between environments with the
> Python 3.7 final release and Python 3.7.0~b3.
> There is at least one failed test on Python 3.7.0~b3 which is not
> reproducible on py27, py34, py35, py36, or py37-final,
> so I'm not sure that it is a good decision to add a py37 job based on
> ubuntu-bionic.
> 
> As for Rally, I applied the easiest thing which occurred to me: just use an
> external Python PPA (deadsnakes) to
> install the final release of Python 3.7.
> Such an approach satisfies the Rally community, but it cannot be used as the
> main one for the whole of OpenStack.

Yes, I think we don't want to use that approach for most of the jobs.
The point is to test on the Python packaged in the distro.

Doug



Re: [openstack-dev] [kolla] Dropping core reviewer

2018-08-09 Thread Michał Jastrzębski
Hello Kollegues, Koalas and Koalines,

I feel I should do the same, as my work sadly doesn't involve Kolla,
or OpenStack for that matter, any more.

It has been a wonderful time, and serving the Kolla community as core and
PTL is the achievement I'm most proud of. I thank you all for giving me
this opportunity. We've built something great!

Cheers,
Michal
On Thu, 9 Aug 2018 at 08:55, Steven Dake (stdake)  wrote:
>
> Kollians,
>
>
> Thanks for the kind words.
>
>
> I do plan to stay involved in the OpenStack community - specifically 
> targeting governance and will definitely be around - irc - mls - summits - 
> etc :)
>
>
> Cheers
>
> -steve
>
>
> 
> From: Surya Singh 
> Sent: Wednesday, August 8, 2018 10:56 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [kolla] Dropping core reviewer
>
> Words are not strong enough to appreciate your immense contribution and help
> in the OpenStack community.
> Projects like Kolla, Heat and Magnum are still rocking, and many more are to
> come from you in the future.
> Hope to see you around.
>
> Wish you all the luck !!
> -- Surya
>
> On Wed, Aug 8, 2018 at 6:15 PM Paul Bourke  wrote:
>>
>> +1. Will always have good memories of when Steve was getting the project
>> off the ground. Thanks Steve for doing a great job of building the
>> community around Kolla, and for all your help in general!
>>
>> Best of luck,
>> -Paul
>>
>> On 08/08/18 12:23, Eduardo Gonzalez wrote:
>> > Steve,
>> >
>> > It's sad to see you leaving the kolla core team; hope to still see you
>> > around IRC and Summit/PTGs.
>> >
>> > I truly appreciate your leadership, guidance and commitment to make
>> > kolla the great project it is now.
>> >
>> > Best of luck on your new projects and the board of directors.
>> >
>> > Regards
>> >
>> >
>> >
>> >
>> >
>> > 2018-08-07 16:28 GMT+02:00 Steven Dake (stdake) :
>> >
>> > Kollians,
>> >
>> >
>> > Many of you that know me well know my feelings towards participating
>> > as a core reviewer in a project.  Folks with the ability to +2/+W
>> > gerrit changes can sometimes unintentionally harm a codebase if they
>> > are not consistently reviewing and maintaining codebase context.  I
>> > also believe in leading an exception-free life, and I'm no exception
>> > to my own rules.  As I am not reviewing Kolla actively given my
>> > OpenStack individually elected board of directors service and other
>> > responsibilities, I am dropping core reviewer ability for the Kolla
>> > repositories.
>> >
>> >
>> > I want to take a moment to thank the thousands of people that have
>> > contributed and shaped Kolla into the modern deployment system for
>> > OpenStack that it is today.  I personally find Kolla to be my finest
>> > body of work as a leader.  Kolla would not have been possible
>> > without the involvement of the OpenStack global community working
>> > together to resolve the operational pain points of OpenStack.  Thank
>> > you for your contributions.
>> >
>> >
>> > Finally, quoting Thierry [1] from our initial application to
>> > OpenStack, " ... Long live Kolla!"
>> >
>> >
>> > Cheers!
>> >
>> > -steve
>> >
>> >
>> > [1] https://review.openstack.org/#/c/206789/
>> > 
>> >
>> >
>> >
>> >
>> >


[openstack-dev] keystone 14.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for keystone for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/keystone/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/keystone/log/?h=stable/rocky

Release notes for keystone can be found at:

http://docs.openstack.org/releasenotes/keystone/






[openstack-dev] manila 7.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for manila for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/manila/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/manila/log/?h=stable/rocky

Release notes for manila can be found at:

http://docs.openstack.org/releasenotes/manila/






Re: [openstack-dev] [kolla] Dropping core reviewer

2018-08-09 Thread Steven Dake (stdake)
Kollians,


Thanks for the kind words.


I do plan to stay involved in the OpenStack community - specifically targeting 
governance and will definitely be around - irc - mls - summits - etc :)


Cheers

-steve



From: Surya Singh 
Sent: Wednesday, August 8, 2018 10:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Dropping core reviewer

Words are not strong enough to appreciate your immense contribution and help in
the OpenStack community.
Projects like Kolla, Heat and Magnum are still rocking, and many more are to
come from you in the future.
Hope to see you around.

Wish you all the luck !!
-- Surya

On Wed, Aug 8, 2018 at 6:15 PM Paul Bourke  wrote:
+1. Will always have good memories of when Steve was getting the project
off the ground. Thanks Steve for doing a great job of building the
community around Kolla, and for all your help in general!

Best of luck,
-Paul

On 08/08/18 12:23, Eduardo Gonzalez wrote:
> Steve,
>
> It's sad to see you leaving the kolla core team; hope to still see you around
> IRC and Summit/PTGs.
>
> I truly appreciate your leadership, guidance and commitment to make
> kolla the great project it is now.
>
> Best of luck on your new projects and the board of directors.
>
> Regards
>
>
>
>
>
> 2018-08-07 16:28 GMT+02:00 Steven Dake (stdake) :
>
> Kollians,
>
>
> Many of you that know me well know my feelings towards participating
> as a core reviewer in a project.  Folks with the ability to +2/+W
> gerrit changes can sometimes unintentionally harm a codebase if they
> are not consistently reviewing and maintaining codebase context.  I
> also believe in leading an exception-free life, and I'm no exception
> to my own rules.  As I am not reviewing Kolla actively given my
> OpenStack individually elected board of directors service and other
> responsibilities, I am dropping core reviewer ability for the Kolla
> repositories.
>
>
> I want to take a moment to thank the thousands of people that have
> contributed and shaped Kolla into the modern deployment system for
> OpenStack that it is today.  I personally find Kolla to be my finest
> body of work as a leader.  Kolla would not have been possible
> without the involvement of the OpenStack global community working
> together to resolve the operational pain points of OpenStack.  Thank
> you for your contributions.
>
>
> Finally, quoting Thierry [1] from our initial application to
> OpenStack, " ... Long live Kolla!"
>
>
> Cheers!
>
> -steve
>
>
> [1] https://review.openstack.org/#/c/206789/
> 
>
>
>
>
>


[openstack-dev] [keystone][nova] Struggling with non-admin user on Queens install

2018-08-09 Thread Neil Jerram
I'd like to create a non-admin project and user that are able to do
nova.images.list(), in a Queens install.  IIUC, all users should be able to
do that.  I'm afraid I'm pretty lost and would appreciate any help.

Define a function to test whether a particular set of credentials can do
nova.images.list():

from keystoneauth1 import identity
from keystoneauth1 import session
from novaclient.client import Client as NovaClient

def attemp(auth):
    sess = session.Session(auth=auth)
    nova = NovaClient(2, session=sess)
    for i in nova.images.list():
        print i

With an admin user, things work:

>>> auth_url = "http://controller:5000/v3"
>>> auth = identity.Password(auth_url=auth_url,
>>>   username="admin",
>>>   password="abcdef",
>>>   project_name="admin",
>>>   project_domain_id="default",
>>>   user_domain_id="default")
>>> attemp(auth)



With a non-admin user with project_id specified, 401:

>>> tauth = identity.Password(auth_url=auth_url,
...   username="tenant2",
...   password="password",
...   project_id="tenant2",
...   user_domain_id="default")
>>> attemp(tauth)
...
keystoneauth1.exceptions.http.Unauthorized: The request you have made
requires authentication. (HTTP 401) (Request-ID:
req-ed0630a4-7df0-4ba8-a4c4-de3ecb7b4d7d)

With the same but without project_id, I get an empty service catalog
instead:

>>> tauth = identity.Password(auth_url=auth_url,
...   username="tenant2",
...   password="password",
...   #project_name="tenant2",
...   #project_domain_id="default",
...   user_domain_id="default")
>>>
>>> attemp(tauth)
...
keystoneauth1.exceptions.catalog.EmptyCatalog: The service catalog is empty.

Can anyone help?

Regards,
 Neil


[openstack-dev] [tripleo] The Weekly Owl - 28th Edition

2018-08-09 Thread Emilien Macchi
Welcome to the twenty-eighth edition of the weekly update in TripleO world!
The goal is to provide a short reading (less than 5 minutes) to learn
what's new this week.
Any contributions and feedback are welcome.
Link to the previous version:
http://lists.openstack.org/pipermail/openstack-dev/2018-July/132672.html

+-+
| General announcements |
+-+

+--> We're still preparing the first release candidate of TripleO Rocky,
please focus on Critical / High bugs.
+--> Reminder about PTG etherpad, feel free to propose topics:
https://etherpad.openstack.org/p/tripleo-ptg-stein
+--> Juan will be the next PTL for Stein cycle, congratulations!

+--+
| Continuous Integration |
+--+

+--> Sprint theme: migration to zuul v3, including migrating from legacy
bash to ansible tasks/playbooks (More on
https://trello.com/c/JikmHXSS/881-sprint-17-goals)
+--> The Ruck and Rover for this sprint are Gabriele Cerami (panda) and
Rafael Folco (rfolco). Please report any CI issues to them.
+--> Promotion on master is 2 days, 9 days on Queens, 0 days on Pike and 7
days on Ocata.
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

+-+
| Upgrades |
+-+

+--> No updates this week.


+---+
| Containers |
+---+

+--> The team is looking at podman/buildah support for Stein cycle. More
discussions at the PTG, but doing some ground work now.

+--+
| config-download |
+--+

+--> No updates this week.

+--+
| Integration |
+--+

+--> No updates this week.

+-+
| UI/CLI |
+-+

+--> No updates this week.

+---+
| Validations |
+---+

+--> No updates this week.

+---+
| Networking |
+---+

+--> No updates this week.

+--+
| Workflows |
+--+

+--> Progress on the Mistral tempest plugin and testing on the
containerized undercloud job.
+--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status

+---+
| Security |
+---+

+--> Discussion around secret management.
+--> Last meeting notes:
http://eavesdrop.openstack.org/meetings/tripleo_security_squad/2018/tripleo_security_squad.2018-08-08-12.03.log.html

+--> More: https://etherpad.openstack.org/p/tripleo-security-squad

++
| Owl fact  |
++
Owl Wings Are Helping Silence Airplanes, Fans, and Wind Turbines
Nice reading:
https://gizmodo.com/owl-wings-are-helping-silence-airplanes-fans-and-wind-1713023055
Thanks Cédric for this contribution!

Thank you all for reading and stay tuned!
--
Your fellow reporter, Emilien Macchi


Re: [openstack-dev] [release] Release countdown for week R-2, August 13-17

2018-08-09 Thread Sean McGinnis
Related to below, here is the current list of deliverables waiting for an RC1
release (as of August 9, 15:30 UTC):

barbican
ceilometer-powervm
cinder
congress-dashboard
congress
cyborg
designate-dashboard
designate
glance
heat
masakari-monitors
masakari
networking-bagpipe
networking-bgpvpn
networking-midonet
networking-odl
networking-ovn
networking-powervm
networking-sfc
neutron-dynamic-routing
neutron-fwaas
neutron-vpnaas
neutron
nova-powervm
nova
release-test
sahara-dashboard
sahara-image-elements
sahara


Today is the deadline, so please make sure you get in the RC release requests
soon.

Thanks!
Sean

On Thu, Aug 09, 2018 at 09:58:06AM -0500, Sean McGinnis wrote:
> Development Focus
> -
> 
> Teams should be working on any release critical bugs that would require 
> another
> RC before the final release, and thinking about plans for Stein.
> 
> General Information
> ---
> 
> Any cycle-with-milestones projects that missed the RC1 deadline should prepare
> an RC1 release as soon as possible.
> 
> After all of the cycle-with-milestone projects have branched we will branch
> devstack, grenade, and the requirements repos. This will effectively open them
> back up for Stein development, though the focus should still be on finishing 
> up
> Rocky until the final release.
> 
> Actions
> -
> 
> Watch for any translation patches coming through and merge them quickly.
> 
> If your project has a stable/rocky branch created, please make sure those
> patches are also merged there. Keep in mind there will need to be a final
> release candidate cut to capture any merged translations and critical bug 
> fixes
> from this branch.
> 
> Please also check for completeness in release notes and add any relevant
> "prelude" content. These notes are targeted for the downstream consumers of
> your project, so it would be great to include any useful information for those
> that are going to pick up and use or deploy the Rocky version of your
> project.
> 
> We also have the cycle-highlights information in the project deliverable 
> files.
> This one is targeted at marketing and other consumers that have typically been
> pinging PTLs every release asking for "what's new" in this release. If you 
> have
> not done so already, please add a few highlights for your team that would be
> useful for this kind of consumer.
> 
> This would be a good time for any release:independent projects to add the
> history for any releases not yet listed in their deliverable file. These files
> are under the deliverable/_independent directory in the openstack/releases
> repo.
> 
> If you have a cycle-with-intermediary release that has not done an RC yet,
> please do so as soon as possible. If we do not receive release requests for
> these repos soon we will be forced to create a release from the latest commit
> to create a stable/rocky branch. The release team would rather not be the ones
> initiating this release.
> 
> 
> Upcoming Deadlines & Dates
> --
> 
> Final RC deadline: August 23
> Rocky Release: August 29
> Stein PTG: September 10-14
> 
> -- 
> Sean McGinnis (smcginnis)
> 


[openstack-dev] [release] Release countdown for week R-2, August 13-17

2018-08-09 Thread Sean McGinnis
Development Focus
-

Teams should be working on any release critical bugs that would require another
RC before the final release, and thinking about plans for Stein.

General Information
---

Any cycle-with-milestones projects that missed the RC1 deadline should prepare
an RC1 release as soon as possible.

After all of the cycle-with-milestone projects have branched we will branch
devstack, grenade, and the requirements repos. This will effectively open them
back up for Stein development, though the focus should still be on finishing up
Rocky until the final release.

Actions
-

Watch for any translation patches coming through and merge them quickly.

If your project has a stable/rocky branch created, please make sure those
patches are also merged there. Keep in mind there will need to be a final
release candidate cut to capture any merged translations and critical bug fixes
from this branch.

Please also check for completeness in release notes and add any relevant
"prelude" content. These notes are targeted for the downstream consumers of
your project, so it would be great to include any useful information for those
that are going to pick up and use or deploy the Rocky version of your project.

We also have the cycle-highlights information in the project deliverable files.
This one is targeted at marketing and other consumers that have typically been
pinging PTLs every release asking for "what's new" in this release. If you have
not done so already, please add a few highlights for your team that would be
useful for this kind of consumer.

This would be a good time for any release:independent projects to add the
history for any releases not yet listed in their deliverable file. These files
are under the deliverable/_independent directory in the openstack/releases
repo.

If you have a cycle-with-intermediary release that has not done an RC yet,
please do so as soon as possible. If we do not receive release requests for
these repos soon we will be forced to create a release from the latest commit
to create a stable/rocky branch. The release team would rather not be the ones
initiating this release.


Upcoming Deadlines & Dates
--

Final RC deadline: August 23
Rocky Release: August 29
Stein PTG: September 10-14

-- 
Sean McGinnis (smcginnis)



Re: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possible deprecation of Nova's legacy notification interface

2018-08-09 Thread Matt Riedemann

On 8/9/2018 8:44 AM, Graham Hayes wrote:

Designate has no plans to swap or add support for the new interface in
the near or medium term - we are more than willing to take patches, but
we do not have the people power to do it ourselves.

Some of our users do use the old interface a lot - designate-sink
is quite heavily embedded in some workflows.


This is what I suspected would be the answer from most projects.

I was very half-assedly wondering if we could write some kind of 
translation middleware library that allows your service to listen for 
versioned notifications and translate them to legacy notifications. Then 
we could apply that generically across projects that don't have time for 
a big re-write while allowing nova to drop the legacy compat code (after 
some period of deprecation, I was thinking at least a year).


It should be pretty simple to write a dumb versioned->unversioned 
payload mapping for each legacy notification, but there might be more 
sophisticated ways of doing that using some kind of schema or template 
instead. Just thinking out loud.
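A dumb per-notification mapping of that sort could be sketched as follows. This is a hypothetical example: the `nova_object.data` envelope is how nova's versioned payloads are serialized, but the specific fields and legacy keys chosen here are illustrative assumptions, not an authoritative schema.

```python
# Hypothetical versioned -> legacy translation for a single notification
# type. A real middleware library would need one such mapping (or a
# schema/template) per legacy notification nova emits.
def translate_instance_update(versioned_payload):
    # Versioned payloads nest the fields under the "nova_object.data" key.
    data = versioned_payload["nova_object.data"]
    # Legacy payloads were flat dicts; pick out a few fields that legacy
    # consumers historically read. The key names here are illustrative.
    return {
        "instance_id": data.get("uuid"),
        "display_name": data.get("display_name"),
        "state": data.get("state"),
        "tenant_id": data.get("tenant_id"),
    }
```

Such a library could register one translator per event type and re-emit the translated dict on the legacy notification topic for consumers that have not switched over.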


--

Thanks,

Matt



Re: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possible deprecation of Nova's legacy notification interface

2018-08-09 Thread Matt Riedemann

On 8/9/2018 4:41 AM, Balázs Gibizer wrote:

* Masakari - I'm not sure Masakari depends on nova notifications or not


From a quick look, it looks like masakari does not rely on nova's 
rpc-based notifications and instead registers and listens for libvirt 
guest events directly (ceilometer's compute agent does something similar 
I think - or used to anyway):


https://github.com/openstack/masakari-monitors/commit/a566f8ddc6b3b46ae020d182496d153fb0c1b3e7

--

Thanks,

Matt



Re: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possible deprecation of Nova's legacy notification interface

2018-08-09 Thread Graham Hayes
Designate has no plans to swap or add support for the new interface in
the near or medium term - we are more than willing to take patches, but
we do not have the people power to do it ourselves.

Some of our users do use the old interface a lot - designate-sink
is quite heavily embedded in some workflows.

Thanks,

- Graham

On 09/08/2018 10:41, Balázs Gibizer wrote:
> Dear Nova notification consumers!
> 
> 
> The Nova team made progress with the new versioned notification
> interface [1] and it has almost reached feature parity [2] with the
> legacy, unversioned one. So the Nova team will discuss the deprecation of
> the legacy interface at the upcoming PTG. There is a list of projects (we
> know of) consuming the legacy interface and we would like to know if any
> of these projects plan to switch over to the new interface in the
> foreseeable future so we can make a well informed decision about the
> deprecation.
> 
> 
> * Searchlight [3] - it is in maintenance mode so I guess the answer is no
> * Designate [4]
> * Telemetry [5]
> * Mistral [6]
> * Blazar [7]
> * Watcher [8] - it seems Watcher uses both legacy and versioned nova
> notifications
> * Masakari - I'm not sure Masakari depends on nova notifications or not
> 
> Cheers,
> gibi
> 
> [1] https://docs.openstack.org/nova/latest/reference/notifications.html
> [2] http://burndown.peermore.com/nova-notification/
> 
> [3]
> https://github.com/openstack/searchlight/blob/master/searchlight/elasticsearch/plugins/nova/notification_handler.py
> 
> [4]
> https://github.com/openstack/designate/blob/master/designate/notification_handler/nova.py
> 
> [5]
> https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline/data/event_definitions.yaml#L2
> 
> [6]
> https://github.com/openstack/mistral/blob/master/etc/event_definitions.yml.sample#L2
> 
> [7]
> https://github.com/openstack/blazar/blob/5526ed1f9b74d23b5881a5f73b70776ba9732da4/doc/source/user/compute-host-monitor.rst
> 
> [8]
> https://github.com/openstack/watcher/blob/master/watcher/decision_engine/model/notification/nova.py#L335
> 
> 
> 
> 
> 





Re: [openstack-dev] [openstack-dev][qa][barbican][novajoin][networking-fortinet][vmware-nsx] Dependency of Tempest changes

2018-08-09 Thread Ade Lee
barbican and novajoin done.

On Mon, 2018-08-06 at 19:23 +0900, Ghanshyam Mann wrote:
> Hi All,
> 
> Tempest patch [1] removes the deprecated config option for the volume v1
> API, and it has a dependency on many plugins. I have proposed a patch to
> each plugin that uses the option [2] to stop using it, so that their
> gates will not break when the Tempest patch merges. I have also made the
> Tempest patch depend on each plugin's commit. Many of those dependent
> patches have merged, but 4 patches have been pending for a long time,
> which is blocking the Tempest change from merging.
> 
> Below are the plugins which have not yet merged the changes:
>    barbican-tempest-plugin - https://review.openstack.org/#/c/573174/
>    novajoin-tempest-plugin - https://review.openstack.org/#/c/573175/
>    networking-fortinet - https://review.openstack.org/#/c/573170/
>    vmware-nsx-tempest-plugin - https://review.openstack.org/#/c/573172/
> 
> I want to merge this Tempest patch in the Rocky release, which I am
> planning to do next week. To make that happen we have to merge the
> Tempest patch soon. If the above patches are not merged by the plugin
> teams within 2-3 days, which would suggest those plugins are not active
> or do not care about their gate, I am going to remove their dependency
> from the Tempest patch and merge it.
> 
> [1] https://review.openstack.org/#/c/573135/
> [2] https://review.openstack.org/#/q/topic:remove-support-of-cinder-v1-api+(status:open+OR+status:merged)
> 
> -gmann
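
[Editor's note] The cross-repository gating described above relies on Zuul's
Depends-On commit message footer. An illustrative sketch of how the Tempest
change can declare its dependency on the plugin patches; the Change-Id hash
below is made up, and only two of the four reviews are shown:

```
Remove deprecated volume v1 API config options

Depends-On: https://review.openstack.org/#/c/573174/
Depends-On: https://review.openstack.org/#/c/573175/
Change-Id: I0123456789abcdef0123456789abcdef01234567
```

Zuul tests the change together with the listed reviews and will not merge it
before they merge, which is why stale plugin patches block the Tempest change.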
> 
> 
> 
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] blazar 2.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for blazar for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/blazar/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/blazar/log/?h=stable/rocky

Release notes for blazar can be found at:

http://docs.openstack.org/releasenotes/blazar/

If you find an issue that could be considered release-critical, please
file it at:

https://launchpad.net/blazar

and tag it *rocky-rc-potential* to bring it to the blazar
release crew's attention.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] horizon 14.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for horizon for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/horizon/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/horizon/log/?h=stable/rocky

Release notes for horizon can be found at:

http://docs.openstack.org/releasenotes/horizon/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] mistral-dashboard 7.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for mistral-dashboard for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/mistral-dashboard/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:


http://git.openstack.org/cgit/openstack/mistral-dashboard/log/?h=stable/rocky

Release notes for mistral-dashboard can be found at:

http://docs.openstack.org/releasenotes/mistral-dashboard/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] mistral-extra 7.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for mistral-extra for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/mistral-extra/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/mistral-extra/log/?h=stable/rocky

Release notes for mistral-extra can be found at:

http://docs.openstack.org/releasenotes/mistral-extra/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] mistral 7.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for mistral for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/mistral/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/mistral/log/?h=stable/rocky

Release notes for mistral can be found at:

http://docs.openstack.org/releasenotes/mistral/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle] Nominate change in tricircle core team

2018-08-09 Thread linghucongsong


Hi team, I would like to nominate Zhuo Tang (ztang) as a tricircle core member.
ztang has actively joined the discussion of feature development in our offline
meetings and has contributed important blueprints since Rocky, such as network
deletion reliability and service function chaining. I really think his
experience will help us substantially improve tricircle.

By the way, the vote runs until 2018-08-16, Beijing time.


Best Wishes!
Baisen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] octavia-dashboard 2.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for octavia-dashboard for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/octavia-dashboard/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:


http://git.openstack.org/cgit/openstack/octavia-dashboard/log/?h=stable/rocky

Release notes for octavia-dashboard can be found at:

http://docs.openstack.org/releasenotes/octavia-dashboard/

If you find an issue that could be considered release-critical, please
file it at:

https://storyboard.openstack.org/#!/project/909

and tag it *rocky-rc-potential* to bring it to the octavia-dashboard
release crew's attention.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] octavia 3.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for octavia for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/octavia/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/octavia/log/?h=stable/rocky

Release notes for octavia can be found at:

http://docs.openstack.org/releasenotes/octavia/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] neutron-lbaas 13.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for neutron-lbaas for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/neutron-lbaas/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/neutron-lbaas/log/?h=stable/rocky

Release notes for neutron-lbaas can be found at:

http://docs.openstack.org/releasenotes/neutron-lbaas/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] trove 10.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for trove for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/trove/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/trove/log/?h=stable/rocky

Release notes for trove can be found at:

http://docs.openstack.org/releasenotes/trove/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] neutron-lbaas-dashboard 5.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for neutron-lbaas-dashboard for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/neutron-lbaas-dashboard/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:


http://git.openstack.org/cgit/openstack/neutron-lbaas-dashboard/log/?h=stable/rocky

Release notes for neutron-lbaas-dashboard can be found at:

http://docs.openstack.org/releasenotes/neutron-lbaas-dashboard/

If you find an issue that could be considered release-critical, please
file it at:

https://storyboard.openstack.org/#!/project/907

and tag it *rocky-rc-potential* to bring it to the neutron-lbaas-dashboard
release crew's attention.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possible deprecation of Nova's legacy notification interface

2018-08-09 Thread Juan Antonio Osorio
We have a small project (novajoin) that still relies on unversioned
notifications. Thanks for the notification, I hope we can migrate to
versioned notifications by Stein.

On Thu, Aug 9, 2018 at 12:41 PM, Balázs Gibizer  wrote:

> Dear Nova notification consumers!
>
>
> The Nova team has made progress with the new versioned notification
> interface [1], and it has almost reached feature parity [2] with the
> legacy, unversioned one. The Nova team will therefore discuss deprecating
> the legacy interface at the upcoming PTG. Below is a list of projects (that
> we know of) consuming the legacy interface, and we would like to know if
> any of these projects plan to switch over to the new interface in the
> foreseeable future, so we can make a well-informed decision about the
> deprecation.
>
>
> * Searchlight [3] - it is in maintenance mode so I guess the answer is no
> * Designate [4]
> * Telemetry [5]
> * Mistral [6]
> * Blazar [7]
> * Watcher [8] - it seems Watcher uses both legacy and versioned nova
> notifications
> * Masakari - I'm not sure Masakari depends on nova notifications or not
>
> Cheers,
> gibi
>
> [1] https://docs.openstack.org/nova/latest/reference/notifications.html
> [2] http://burndown.peermore.com/nova-notification/
>
> [3] https://github.com/openstack/searchlight/blob/master/searchlight/elasticsearch/plugins/nova/notification_handler.py
> [4] https://github.com/openstack/designate/blob/master/designate/notification_handler/nova.py
> [5] https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline/data/event_definitions.yaml#L2
> [6] https://github.com/openstack/mistral/blob/master/etc/event_definitions.yml.sample#L2
> [7] https://github.com/openstack/blazar/blob/5526ed1f9b74d23b5881a5f73b70776ba9732da4/doc/source/user/compute-host-monitor.rst
> [8] https://github.com/openstack/watcher/blob/master/watcher/decision_engine/model/notification/nova.py#L335
>
>
>
>
>



-- 
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] trove-dashboard 11.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for trove-dashboard for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/trove-dashboard/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/trove-dashboard/log/?h=stable/rocky

Release notes for trove-dashboard can be found at:

http://docs.openstack.org/releasenotes/trove-dashboard/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate

2018-08-09 Thread Andrey Kurilin
Hi Doug!

I'm ready to port our job to the openstack-zuul-jobs repo, but I expect
that the community will not accept it.

The result of the Rally unit tests differs between environments with the
Python 3.7 final release and Python 3.7.0~b3. There is at least one test
that fails on Python 3.7.0~b3 and is not reproducible on py27, py34,
py35, py36, or the final py37, so I'm not sure it is a good decision to
add a py37 job based on ubuntu-bionic.

As for Rally, I applied the easiest fix that occurred to me: use an
external Python PPA (deadsnakes) to install the final release of Python
3.7. Such an approach is satisfactory for the Rally community, but it
cannot be used as the main one for the whole of OpenStack.
ср, 8 авг. 2018 г. в 16:35, Doug Hellmann :

> Excerpts from Andrey Kurilin's message of 2018-08-08 15:25:01 +0300:
> > Thanks Thomas for pointing to the issue, I checked it locally and here is
> > an update for openstack/rally (rally framework without in-tree OpenStack
> > plugins) project:
> >
> > - added unittest job with py37 env
>
> It would be really useful if you could help set up a job definition in
> openstack-zuul-jobs like we have for openstack-tox-py36 [1], so that other
> projects can easily add the job, too. Do you have time to do that?
>
> Doug
>
> [1]
> http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/jobs.yaml#n354
>
>


-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] API updates week 02-08

2018-08-09 Thread Ghanshyam Mann
Hi All, 

Please find the Nova API highlights of this week. 

Weekly Office Hour: 
=== 

What we discussed this week: 
- Discussed the granular policy spec, which needs updating now that default
roles are present.

- Discussed the keypair quota usage bug; only a doc update can be done for
now. The patch is up: https://review.openstack.org/#/c/590081/

- Discussed the simple-tenant-usage bug about a ValueError. We need to
handle the 500 error for non-ISO 8601 time format input. The bug was
reported on Pike, but that was due to an environment issue, as the author
confirmed; I also tried this on master and it is not reproducible. In any
case, we need to handle the 500 in this API, and I will push a patch for
that: https://bugs.launchpad.net/nova/+bug/1783338
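
[Editor's note] The fix described for the simple-tenant-usage 500 amounts to
validating the timestamp before use. A simplified, self-contained sketch
(nova itself parses with the iso8601 library and raises a webob exception;
the names below are stand-ins, not nova's actual API):

```python
from datetime import datetime


class HTTPBadRequest(Exception):
    """Stand-in for the 400 response nova would return via webob."""


def parse_usage_time(value):
    # Reject malformed timestamps up front so the error surfaces to the
    # caller as a 400 instead of bubbling up as an unhandled 500.
    try:
        return datetime.strptime(value, "%Y-%m-%dT%H:%M:%S")
    except (TypeError, ValueError):
        raise HTTPBadRequest("Invalid timestamp for start/end: %r" % (value,))


print(parse_usage_time("2018-08-09T00:00:00"))
```

The key point is translating the parser's ValueError into the web layer's
bad-request exception at the API boundary.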

Planned Features : 
== 
Below are the API related features which did not make it into Rocky and
need to be proposed again for Stein. Not much progress to share on these
as of now.

1. Servers Ips non-unique network names : 
- 
https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names
 
- Spec Merged 
- 
https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged)
 
- Weekly Progress: No progress. Needs to be reopened for Stein.

2. Volume multiattach enhancements: 
- https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements 
- 
https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged)
 
- Weekly Progress: No progress. 

3. API Extensions merge work 
- https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-stein
- 
https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-stein
 
- Weekly Progress: Done for Rocky; a new BP is open for the remaining work.
I will remove the deprecated extension policies first, which will be
cleaner.

4. Handling a down cell 
- https://blueprints.launchpad.net/nova/+spec/handling-down-cell 
- 
https://review.openstack.org/#/q/topic:bp/handling-down-cell+(status:open+OR+status:merged)
 
- Weekly Progress: No progress. Needs to be reopened for Stein.

Bugs: 
 
This week's bug progress:
https://etherpad.openstack.org/p/nova-api-weekly-bug-report 

Critical: 0->0 
High importance: 2->1 
By Status: 
New: 1->0 
Confirmed/Triage: 30-> 32 
In-progress: 34->32
Incomplete: 4->4 
= 
Total: 68->68 

NOTE: There might be some bugs which are not tagged as 'api' or 'api-ref';
those are not in the above list. Please tag such bugs so that we can keep
an eye on them.


-gmann 





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] There is no Platform plugin with name: 'existing@openstack'"

2018-08-09 Thread Goutham Pratapa
Hi Andrey,

Thanks, it worked.


On Thu, Aug 9, 2018 at 3:10 PM, Andrey Kurilin 
wrote:

> Hi Goutham!
>
> There are two issues that can result in such an error:
>
> 1) You did not read the changelog for Rally (see
> https://github.com/openstack/rally/blob/master/CHANGELOG.rst; all
> versions are covered there). We no longer provide in-tree OpenStack
> plugins starting from Rally 1.0.0. You need to install the
> rally-openstack package (https://pypi.python.org/pypi/rally-openstack).
> It has Rally as a dependency, so if you are preparing the environment
> from scratch, just install the rally-openstack package.
>
> 2) There are one or more conflicts in the package requirements. Run
> `rally plugin show Dummy.openstack` and check the logging messages. They
> should point out plugin-loading errors, if any.
>
> чт, 9 авг. 2018 г. в 10:32, Goutham Pratapa :
>
>> Hi Rally Team,
>>
>> I have been trying to setup rally version v1.1.0
>>
>> I could successfully install rally but when i try to create the
>> deployment i am getting this error
>>
>>
>>
>> *ubuntu@ubuntu:~$  rally deployment create --file=existing.json
>> --name=existing*
>>
>> *Env manager got invalid spec:*
>>
>>
>> * ["There is no Platform plugin with name: 'existing@openstack'"]*
>>
>>
>> *ubuntu@ubuntu:~$ rally version *
>>
>> *1.1.0*
>>
>> can any one help me  the issue and the fix ?
>>
>> Thanks in advance.
>>
>> --
>> Cheers !!!
>> Goutham Pratapa
>> 
>>
>
>
> --
> Best regards,
> Andrey Kurilin.
>
>
>


-- 
Cheers !!!
Goutham Pratapa
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possible deprecation of Nova's legacy notification interface

2018-08-09 Thread Balázs Gibizer

Dear Nova notification consumers!


The Nova team has made progress with the new versioned notification
interface [1], and it has almost reached feature parity [2] with the
legacy, unversioned one. The Nova team will therefore discuss deprecating
the legacy interface at the upcoming PTG. Below is a list of projects
(that we know of) consuming the legacy interface, and we would like to
know if any of these projects plan to switch over to the new interface in
the foreseeable future, so we can make a well-informed decision about the
deprecation.



* Searchlight [3] - it is in maintenance mode so I guess the answer is 
no

* Designate [4]
* Telemetry [5]
* Mistral [6]
* Blazar [7]
* Watcher [8] - it seems Watcher uses both legacy and versioned nova 
notifications

* Masakari - I'm not sure whether Masakari depends on nova notifications

Cheers,
gibi

[1] https://docs.openstack.org/nova/latest/reference/notifications.html
[2] http://burndown.peermore.com/nova-notification/

[3] 
https://github.com/openstack/searchlight/blob/master/searchlight/elasticsearch/plugins/nova/notification_handler.py
[4] 
https://github.com/openstack/designate/blob/master/designate/notification_handler/nova.py
[5] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline/data/event_definitions.yaml#L2
[6] 
https://github.com/openstack/mistral/blob/master/etc/event_definitions.yml.sample#L2
[7] 
https://github.com/openstack/blazar/blob/5526ed1f9b74d23b5881a5f73b70776ba9732da4/doc/source/user/compute-host-monitor.rst
[8] 
https://github.com/openstack/watcher/blob/master/watcher/decision_engine/model/notification/nova.py#L335
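
[Editor's note] For consumers weighing the switch, the practical difference
is that a versioned notification wraps its data in an oslo versioned-object
envelope (`nova_object.name`, `nova_object.version`, `nova_object.data`), as
described in the notifications reference [1]. A minimal, self-contained
sketch of dispatching on that envelope; the payload values below are
illustrative, not taken from a real deployment:

```python
import json

# Trimmed example of a versioned notification payload; the uuid and state
# are made-up illustration values.
sample = json.loads("""
{
    "nova_object.name": "InstanceActionPayload",
    "nova_object.namespace": "nova",
    "nova_object.version": "1.8",
    "nova_object.data": {
        "uuid": "178b0921-8f85-4257-88b6-2e743b5a975c",
        "state": "active"
    }
}
""")


def handle_versioned(payload):
    """Dispatch on the declared payload type and major version instead of
    guessing the shape of a legacy free-form dict."""
    name = payload["nova_object.name"]
    major = int(payload["nova_object.version"].split(".")[0])
    data = payload["nova_object.data"]
    if name == "InstanceActionPayload" and major == 1:
        return data["uuid"], data["state"]
    raise ValueError("unhandled payload type: %s" % name)


print(handle_versioned(sample))
```

Because every payload declares its own type and version, a consumer can log
and skip unknown types instead of breaking when the schema evolves.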





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] There is no Platform plugin with name: 'existing@openstack'"

2018-08-09 Thread Andrey Kurilin
Hi Goutham!

There are two issues that can result in such an error:

1) You did not read the changelog for Rally (see
https://github.com/openstack/rally/blob/master/CHANGELOG.rst; all
versions are covered there). We no longer provide in-tree OpenStack
plugins starting from Rally 1.0.0. You need to install the
rally-openstack package (https://pypi.python.org/pypi/rally-openstack).
It has Rally as a dependency, so if you are preparing the environment
from scratch, just install the rally-openstack package.

2) There are one or more conflicts in the package requirements. Run
`rally plugin show Dummy.openstack` and check the logging messages. They
should point out plugin-loading errors, if any.
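
[Editor's note] Once rally-openstack is installed, the deployment spec
itself is the other common stumbling block. A hedged sketch of writing an
existing.json in the classic "ExistingCloud" format; every field value below
is a placeholder, and the exact keys for your cloud should be checked
against the rally-openstack documentation:

```python
import json
import os
import tempfile

# Placeholder credentials: replace with your cloud's real endpoint/users.
spec = {
    "type": "ExistingCloud",
    "auth_url": "http://203.0.113.10:5000/v3",
    "region_name": "RegionOne",
    "admin": {
        "username": "admin",
        "password": "secret",
        "project_name": "admin",
        "user_domain_name": "Default",
        "project_domain_name": "Default",
    },
}

path = os.path.join(tempfile.gettempdir(), "existing.json")
with open(path, "w") as f:
    json.dump(spec, f, indent=4)
print("wrote", path)
# Then:
#   pip install rally-openstack
#   rally deployment create --file=existing.json --name=existing
```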

чт, 9 авг. 2018 г. в 10:32, Goutham Pratapa :

> Hi Rally Team,
>
> I have been trying to setup rally version v1.1.0
>
> I could successfully install rally but when i try to create the deployment
> i am getting this error
>
>
>
> *ubuntu@ubuntu:~$  rally deployment create --file=existing.json
> --name=existing*
>
> *Env manager got invalid spec:*
>
>
> * ["There is no Platform plugin with name: 'existing@openstack'"]*
>
>
> *ubuntu@ubuntu:~$ rally version *
>
> *1.1.0*
>
> can any one help me  the issue and the fix ?
>
> Thanks in advance.
>
> --
> Cheers !!!
> Goutham Pratapa
>


-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [I18n][all] Translation Imports

2018-08-09 Thread Andreas Jaeger
On 2018-08-09 10:10, Frank Kloeker wrote:
> Hello,
> 
> maybe you missed it: translation import jobs are back today. Please
> merge "Imported Translations from Zanata" as soon as possible so we can
> move forward quickly with the Rocky translation.
> If you have already branched to stable/rocky, please be aware that
> release notes translations are managed only in the master branch.

This means that the import automatically deletes the release notes
translations on any stable branch. The file removals you see are fine.

> Special thanks to openstack-infra and the zuul team for the fast error
> handling.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [I18n] Translation Imports

2018-08-09 Thread Frank Kloeker

Hello,

maybe you missed it: translation import jobs are back today. Please
merge "Imported Translations from Zanata" as soon as possible so we can
move forward quickly with the Rocky translation.
If you have already branched to stable/rocky, please be aware that
release notes translations are managed only in the master branch.
Special thanks to openstack-infra and the zuul team for the fast error 
handling.


kind regards
Frank

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally] There is no Platform plugin with name: 'existing@openstack'"

2018-08-09 Thread Goutham Pratapa
Hi Rally Team,

I have been trying to set up Rally version 1.1.0.

I could successfully install Rally, but when I try to create the
deployment I get this error:

ubuntu@ubuntu:~$ rally deployment create --file=existing.json --name=existing

Env manager got invalid spec:

["There is no Platform plugin with name: 'existing@openstack'"]

ubuntu@ubuntu:~$ rally version

1.1.0

Can anyone help me with the issue and the fix?

Thanks in advance.

-- 
Cheers !!!
Goutham Pratapa
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev