Re: [openstack-dev] [nova] Default scheduler filters survey

2018-04-27 Thread Lingxian Kong
At Catalyst Cloud:

RetryFilter
AvailabilityZoneFilter
RamFilter
ComputeFilter
AggregateCoreFilter
DiskFilter
AggregateInstanceExtraSpecsFilter
ImagePropertiesFilter
ServerGroupAntiAffinityFilter
SameHostFilter
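
For reference, a list like this maps onto nova.conf roughly as follows
(illustrative; the exact option name depends on the release, as noted in
the quoted survey request below):

    [filter_scheduler]
    enabled_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,AggregateCoreFilter,DiskFilter,AggregateInstanceExtraSpecsFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,SameHostFilter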

Cheers,
Lingxian Kong


On Sat, Apr 28, 2018 at 3:04 AM Jim Rollenhagen wrote:

> On Wed, Apr 18, 2018 at 11:17 AM, Artom Lifshitz wrote:
>
>> Hi all,
>>
>> A CI issue [1] caused by tempest thinking some filters are enabled
>> when they're really not, and a proposed patch [2] to add
>> (Same|Different)HostFilter to the default filters as a workaround, has
>> led to a discussion about what filters should be enabled by default in
>> nova.
>>
>> The default filters should make sense for a majority of real world
>> deployments. Adding some filters to the defaults because CI needs them
>> is faulty logic, because the needs of CI are different to the needs of
>> operators/users, and the latter takes priority (though it's my
>> understanding that a good chunk of operators run tempest on their
>> clouds post-deployment as a way to validate that the cloud is working
>> properly, so maybe CI's and users' needs aren't that different after
>> all).
>>
>> To that end, we'd like to know what filters operators are enabling in
>> their deployment. If you can, please reply to this email with your
>> [filter_scheduler]/enabled_filters (or
>> [DEFAULT]/scheduler_default_filters if you're using an older version)
>> option from nova.conf. Any other comments are welcome as well :)
>>
>
> At Oath:
>
> AggregateImagePropertiesIsolation
> ComputeFilter
> CoreFilter
> DifferentHostFilter
> SameHostFilter
> ServerGroupAntiAffinityFilter
> ServerGroupAffinityFilter
> AvailabilityZoneFilter
> AggregateInstanceExtraSpecsFilter
>
> // jim
>


Re: [openstack-dev] [tripleo] validating overcloud config changes on a redeploy

2018-04-27 Thread Marius Cornea
Hi Ade,

To the best of my knowledge the closest place where we do similar
sequence of actions (post deployment build and append environment
files to the deploy command and re-run overcloud deploy on top of an
already deployed overcloud) is tripleo-upgrade. As we discussed on IRC
I was reluctant to adding these kind of tests to tripleo-upgrade since
it was initially created to cover only the minor update and major
upgrades use cases. Nevertheless, thinking more about your use case I
realized that configuration changes tests could fit quite well in
tripleo-upgrade for several reasons:

 - we already have a mechanism[1] in place for attaching extra
environment files to the deploy command
 - we already have tests that can be run during the stack update which
applies the config changes; this could be useful to validate that
configuration changes do not break the data plane (e.g. to validate
that a neutron config change does not leave instances without
networking during the stack update; see the sketch after this list)
 - we can easily segregate the config changes plays into their own
directory as we do with update/upgrade[2] and add the reusable ones in
the common directory
 - upgrades might benefit from the config changes tests by running
them in a pre/post minor update/major upgrade step to catch potential
parameter changes between releases
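
As a rough illustration of the second point, a data-plane validation task
in the tripleo-upgrade Ansible style might look like this (hypothetical
sketch; the host group and variable names are made up):

    - name: verify instances keep networking during the stack update
      hosts: undercloud
      tasks:
        - name: ping the instance floating IP
          command: ping -c 2 {{ instance_floating_ip }}
          register: ping_result
          retries: 5
          delay: 10
          until: ping_result.rc == 0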

I'd like to hear what others think about this and see if there could
be a better place where to host these kind of tests but personally I'm
ok with adding them to tripleo-upgrade.

Best regards,
Marius

[1] 
http://git.openstack.org/cgit/openstack/tripleo-upgrade/tree/tasks/upgrade/step_upgrade.yml
[2] http://git.openstack.org/cgit/openstack/tripleo-upgrade/tree/tasks

On Fri, Apr 27, 2018 at 11:49 AM, Ade Lee  wrote:
> Hi,
>
> Recently I started looking at how we implement password changes in an
> existing deployment, and found that there were issues.  This made me
> wonder whether we needed a test job to confirm that password changes
> (and other config changes) are in fact executed properly.
>
> As far as I understand it, the way to do password changes is to -
> 1) Create a yaml file containing the parameters to be changed and
>their new values
> 2) call openstack overcloud deploy and append -e new_params.yaml
>
> Note that the above steps can really describe the testing of setting
> any config changes (not just passwords).
>
> Of course, if we do change passwords, we'll want to validate that the
> config files have changed, the keystone/db users have been modified, the
> mistral plan has been updated, services are still running etc.
>
> After talking with many folks, it seems there is no clear consensus
> where code to do the above tasks should live.  Should it be in tripleo-
> upgrades, or in tripleo-validations or in a separate repo?
>
> Is there anyone already doing something similar?
>
> If we end up creating a role to do this, ideally it should be
> deployment tool agnostic - usable by both infrared or quickstart or
> others.
>
> What's the best way to do this?
>
> Thanks,
> Ade


Re: [openstack-dev] [puppet] Proposing Tobias Urdin to join Puppet OpenStack core

2018-04-27 Thread Alex Schultz
+1

On Fri, Apr 27, 2018 at 11:41 AM, Emilien Macchi  wrote:
> +1, thanks Tobias for your contributions!
>
> On Fri, Apr 27, 2018 at 8:21 AM, Iury Gregory  wrote:
>>
>> +1
>>
>> On Fri, Apr 27, 2018, 12:15 Mohammed Naser  wrote:
>>>
>>> Hi everyone,
>>>
>>> I'm proposing that we add Tobias Urdin to the core Puppet OpenStack
>>> team, as they've been putting in great reviews over the past few
>>> months, have directly contributed to resolving all the Ubuntu
>>> deployment issues, and helped us bring Ubuntu support back and make
>>> the jobs voting again.
>>>
>>> Thank you,
>>> Mohammed
>>>
>
>
>
> --
> Emilien Macchi


Re: [openstack-dev] [api] [lbaas] Neutron LBaaS V2 docs incompatibility

2018-04-27 Thread Artem Goncharov
Thanks a lot Michael.

On Fri, 27 Apr 2018, 19:57 Michael Johnson,  wrote:

> Hi Artem,
>
> You are correct that the API reference at
> https://developer.openstack.org/api-ref/network/v2/index.html#pools is
> incorrect. As you figured out, someone mistakenly merged the long
> dead/removed LBaaS v1 API specification into the LBaaS v2 API
> specification at that link.
>
> The current, and up to date load balancing API reference is at:
> https://developer.openstack.org/api-ref/load-balancer/v2/index.html
>
> This documents the Octavia API, which is a superset of the LBaaS v2
> API, so it should help you clarify any issues you run into.
>
> That said, due to the deprecation of neutron-lbaas and spin out from
> neutron we decided to explicitly not support neutron-lbaas in the
> OpenStack Client. neutron-lbaas is only supported using the neutron
> client.  You can continue to use the neutron client CLI with
> neutron-lbaas through the neutron-lbaas deprecation cycle.
>
> When you move to using Octavia you can switch to using the
> python-octaviaclient OSC plugin.
>
> Michael
>
> On Wed, Apr 25, 2018 at 5:51 AM, Artem Goncharov wrote:
> > Hi all,
> >
> > after working with OpenStackSDK in my cloud I have found one difference in
> > the Neutron LBaaS (yes, I know it is deprecated, but it is still used). The
> > fix would be small and fast; unfortunately, I have faced problems with the
> > API description:
> > - https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0#Pools describes
> > that the LB pool has a healthmonitor_id attribute (which also fits the
> > reality of my cloud)
> > - https://developer.openstack.org/api-ref/network/v2/index.html#pools (which
> > is referred to from the previous link in the deprecation note) describes
> > that the LB pool has healthmonitors (and healthmonitors_status) as a list
> > of IDs. Basically in this regard it is the same as the
> > https://wiki.openstack.org/wiki/Neutron/LBaaS/API_1.0#Pool description
> > - unfortunately even
> > https://github.com/openstack/neutron-lib/blob/master/api-ref/source/v2/lbaas-v2.inc
> > describes Pool.healthmonitors (however, it also contains the
> > https://github.com/openstack/neutron-lib/blob/master/api-ref/source/v2/samples/lbaas/pools-list-response2.json
> > sample with Pool.healthmonitor_id)
> > - OpenStackSDK contains network.pool.health_monitors (with underscore)
> >
> > I want to bring all of this into order and enable managing of the load
> > balancer through OSC for my OpenStack cloud, but I can't figure out what
> > the correct behavior is here.
> >
> > Can anybody, please, help in figuring out the truth here?
> >
> > Thanks,
> > Artem
> >
> >


Re: [openstack-dev] [api] [lbaas] Neutron LBaaS V2 docs incompatibility

2018-04-27 Thread Michael Johnson
Hi Artem,

You are correct that the API reference at
https://developer.openstack.org/api-ref/network/v2/index.html#pools is
incorrect. As you figured out, someone mistakenly merged the long
dead/removed LBaaS v1 API specification into the LBaaS v2 API
specification at that link.

The current, and up to date load balancing API reference is at:
https://developer.openstack.org/api-ref/load-balancer/v2/index.html

This documents the Octavia API, which is a superset of the LBaaS v2
API, so it should help you clarify any issues you run into.

That said, due to the deprecation of neutron-lbaas and spin out from
neutron we decided to explicitly not support neutron-lbaas in the
OpenStack Client. neutron-lbaas is only supported using the neutron
client.  You can continue to use the neutron client CLI with
neutron-lbaas through the neutron-lbaas deprecation cycle.

When you move to using Octavia you can switch to using the
python-octaviaclient OSC plugin.
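
For example, the equivalent pool lookups are roughly the following
(illustrative; assumes the octavia OSC plugin is installed):

    # neutron-lbaas via the neutron client
    neutron lbaas-pool-show <pool-id>

    # Octavia via the OSC plugin
    openstack loadbalancer pool show <pool-id>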

Michael

On Wed, Apr 25, 2018 at 5:51 AM, Artem Goncharov wrote:
> Hi all,
>
> after working with OpenStackSDK in my cloud I have found one difference in
> the Neutron LBaaS (yes, I know it is deprecated, but it is still used). The
> fix would be small and fast; unfortunately, I have faced problems with the
> API description:
> - https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0#Pools describes
> that the LB pool has a healthmonitor_id attribute (which also fits the
> reality of my cloud)
> - https://developer.openstack.org/api-ref/network/v2/index.html#pools (which
> is referred to from the previous link in the deprecation note) describes
> that the LB pool has healthmonitors (and healthmonitors_status) as a list of
> IDs. Basically in this regard it is the same as the
> https://wiki.openstack.org/wiki/Neutron/LBaaS/API_1.0#Pool description
> - unfortunately even
> https://github.com/openstack/neutron-lib/blob/master/api-ref/source/v2/lbaas-v2.inc
> describes Pool.healthmonitors (however, it also contains the
> https://github.com/openstack/neutron-lib/blob/master/api-ref/source/v2/samples/lbaas/pools-list-response2.json
> sample with Pool.healthmonitor_id)
> - OpenStackSDK contains network.pool.health_monitors (with underscore)
>
> I want to bring all of this into order and enable managing of the load
> balancer through OSC for my OpenStack cloud, but I can't figure out what
> the correct behavior is here.
>
> Can anybody, please, help in figuring out the truth here?
>
> Thanks,
> Artem
>


Re: [openstack-dev] [puppet] Proposing Tobias Urdin to join Puppet OpenStack core

2018-04-27 Thread Emilien Macchi
+1, thanks Tobias for your contributions!

On Fri, Apr 27, 2018 at 8:21 AM, Iury Gregory  wrote:

> +1
>
> On Fri, Apr 27, 2018, 12:15 Mohammed Naser  wrote:
>
>> Hi everyone,
>>
>> I'm proposing that we add Tobias Urdin to the core Puppet OpenStack
>> team, as they've been putting in great reviews over the past few
>> months, have directly contributed to resolving all the Ubuntu
>> deployment issues, and helped us bring Ubuntu support back and make
>> the jobs voting again.
>>
>> Thank you,
>> Mohammed
>>


-- 
Emilien Macchi


Re: [openstack-dev] [octavia] Sometimes amphoras are not re-created if they are not reached for more than heartbeat_timeout

2018-04-27 Thread Michael Johnson
Hi Mihaela,

I am sorry to hear you are having trouble with the queens release of
Octavia.  It is true that a lot of work has gone into the failover
capability, specifically working around a python threading issue and
making it more resistant to certain neutron failure situations
(missing ports, etc.).

I know of one open bug against the failover flows,
https://storyboard.openstack.org/#!/story/2001481, "failover breaks in
Active/Standby mode if both amphroae are down".

Unfortunately the log snippet above does not give me enough
information about the problem to help with this issue. From the
snippet it looks like the failovers were initiated, but the
controllers are unable to reach the amphora-agent on the replacement
amphora. It will continue those retry attempts, but eventually will
fail the amphora into ERROR if it doesn't succeed.
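
For reference, the heartbeat timeout mentioned in the quoted report below
is set in octavia.conf; a minimal sketch, assuming the queens option name:

    [health_manager]
    heartbeat_timeout = 60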

One thought I have is if you created your amphora image in the last two
weeks, you may have built an amphora using the master branch of
octavia, which had a bug that impacted active/standby images. This was
introduced while working around the new pip 10 issues. That bug has been
fixed: https://review.openstack.org/#/c/564371/

If neither of these situations match your environment, please open a
story (https://storyboard.openstack.org/#!/dashboard/stories) for us
and include the health manager logs from the point you delete the
amphora up until it starts these connection attempts.  We will dig
through those logs to see what the issue might be.

Michael (johnsom)

On Wed, Apr 25, 2018 at 4:07 AM,   wrote:
> Hello,
>
>
>
> I am testing Octavia Queens and I see that the failover behavior is very
> much different than the one in Ocata (this is the version we are currently
> running in production).
>
> One example of such behavior is:
>
>
>
> I create 4 load balancers and, after the creation is successful, I shut off
> all 8 amphoras. Sometimes the health-manager agent does not even reach the
> amphoras, and they are not deleted and re-created. The logs look like the
> ones shown below even when the heartbeat timeout is long passed. Sometimes
> the amphoras are deleted and re-created. Sometimes they are partially
> re-created: part of them remain shut off.
>
> Heartbeat_timeout is set to 60 seconds.
>
>
>
>
>
>
>
> [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:26.244 11 WARNING
> octavia.amphorae.drivers.haproxy.rest_api_driver
> [req-339b54a7-ab0c-422a-832f-a444cd710497 - a5f15235c0714365b98a50a11ec956e7
> - - -] Could not connect to instance. Retrying.: ConnectionError:
> HTTPSConnectionPool(host='192.168.0.15', port=9443): Max retries exceeded
> with url:
> /0.5/listeners/285ad342-5582-423e-b654-1f0b50d91fb2/certificates/octaviasrv2.orange.com.pem
> (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f559862c710>: Failed to establish a new connection: [Errno 113]
> No route to host',))
>
> [octavia-health-manager-3662231220-3lssd] 2018-04-25 10:57:26.464 13 WARNING
> octavia.amphorae.drivers.haproxy.rest_api_driver
> [req-a63b795a-4b4f-4b90-a201-a4c9f49ac68b - a5f15235c0714365b98a50a11ec956e7
> - - -] Could not connect to instance. Retrying.: ConnectionError:
> HTTPSConnectionPool(host='192.168.0.14', port=9443): Max retries exceeded
> with url:
> /0.5/listeners/a45bdef3-e7da-4a18-9f1f-53d5651efe0f/1615c1ec-249e-4fa8-9d73-2397e281712c/haproxy
> (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f8a0de95e10>: Failed to establish a new connection: [Errno 113]
> No route to host',))
>
> [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:27.772 11 WARNING
> octavia.amphorae.drivers.haproxy.rest_api_driver
> [req-10febb10-85ea-4082-9df7-daa48894b004 - a5f15235c0714365b98a50a11ec956e7
> - - -] Could not connect to instance. Retrying.: ConnectionError:
> HTTPSConnectionPool(host='192.168.0.19', port=9443): Max retries exceeded
> with url:
> /0.5/listeners/96ce5862-d944-46cb-8809-e1e328268a66/fc5b7940-3527-4e9b-b93f-1da3957a5b71/haproxy
> (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f5598491c90>: Failed to establish a new connection: [Errno 113]
> No route to host',))
>
> [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:34.252 11 WARNING
> octavia.amphorae.drivers.haproxy.rest_api_driver
> [req-339b54a7-ab0c-422a-832f-a444cd710497 - a5f15235c0714365b98a50a11ec956e7
> - - -] Could not connect to instance. Retrying.: ConnectionError:
> HTTPSConnectionPool(host='192.168.0.15', port=9443): Max retries exceeded
> with url:
> /0.5/listeners/285ad342-5582-423e-b654-1f0b50d91fb2/certificates/octaviasrv2.orange.com.pem
> (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f5598520790>: Failed to establish a new connection: [Errno 113]
> No route to host',))
>
> [octavia-health-manager-3662231220-3lssd] 2018-04-25 10:57:34.476 13 WARNING
> octavia.amphorae.drivers.haproxy.rest_api_driver
> [req-a63b795a-4b4f-4b90-a201-a4c9f49ac68b - a5f15235c0714365b98a50a11ec956e7
> - - -] Could not connect to instance. Retrying.: ConnectionError:
> 

Re: [openstack-dev] The Forum Schedule is now live

2018-04-27 Thread Jimmy McArthur
PS: If you have general questions on the schedule, additional updates to 
an abstract, or changes to the speaker list, please send them along to 
speakersupp...@openstack.org.



Jimmy McArthur 
April 27, 2018 at 11:04 AM
Hello all -

Please take a look here for the posted Forum schedule: 
https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224  
You should also see it update on your Summit App.


Thank you and see you in Vancouver!
Jimmy




Re: [openstack-dev] [all][reno] issue with reno 2.9.0 and duplicate anchors

2018-04-27 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2018-04-27 10:02:08 -0400:
> The latest release of reno tries to add anchors to the page in a way
> that ensures they are named consistently across builds. For projects
> with the same version number in multiple series (which can happen for
> non-milestone projects that haven't tagged for rocky yet), this causes
> duplicate anchors and causes the release notes build to fail.
> 
> There is a fix for this in https://review.openstack.org/564763 and we
> will try to get a new release of reno out as soon as that patch merges.
> 
> Doug

Reno 2.9.1 is available now and should fix this issue [1].

The constraint update is working its way through the gate [2].

[1] http://lists.openstack.org/pipermail/release-announce/2018-April/004988.html
[2] https://review.openstack.org/#/c/564794/



Re: [openstack-dev] [Openstack] OpenStack Queens for Ubuntu 18.04 LTS

2018-04-27 Thread Tobias Urdin
I got started on kuryr-libnetwork but never finished the init/systemd scripts; 
all dependencies in the control file should be OK.

I uploaded it here: https://github.com/tobias-urdin/deb-kuryr-libnetwork (not a 
working package!)

After fixing kuryr-libnetwork one can get started packaging Zun.


For Qinling you might want kuryr-kubernetes as well, but I'm unsure.

Best regards

On 04/27/2018 05:56 PM, Corey Bryant wrote:


On Fri, Apr 27, 2018 at 11:23 AM, Tobias Urdin wrote:

Hello,

I was very interested in packaging Zun for Ubuntu however I did not have the 
time to properly get started.

I was able to package kuryr-lib, I've uploaded it here for now 
https://github.com/tobias-urdin/deb-kuryr-lib


Would love to see both Zun and Qinling in Ubuntu to get a good grip on the 
container world :)

Best regards


Awesome Tobias. I can take a closer look next week if you'd like.

Thanks,
Corey

On 04/27/2018 04:59 PM, Corey Bryant wrote:
On Fri, Apr 27, 2018 at 10:20 AM, Hongbin Lu wrote:
Corey,

Thanks for the information. Would you clarify what is "working packages from 
the community"?

Best regards,
Hongbin

Sorry I guess that comment is probably a bit vague.

The OpenStack packages are open source like many other projects. They're Apache 
2 licensed and we gladly accept contributions. :)

This is a good starting point for working with the Ubuntu OpenStack packages:
https://wiki.ubuntu.com/OpenStack/CorePackages

If you or someone else were to provide package sources for zun that DTRT to 
create binary packages, and if they can test them, then I'd be happy to 
review/sponsor the Ubuntu and cloud-archive uploads.

Thanks,
Corey


On Fri, Apr 27, 2018 at 9:30 AM, Corey Bryant wrote:


On Fri, Apr 27, 2018 at 9:03 AM, Hongbin Lu wrote:
Hi Corey,

What are the requirements to include OpenStack Zun in the Ubuntu packages? We 
have a comprehensive installation guide [1] that is used by a lot of users 
when they install Zun. However, the lack of Ubuntu packages is 
inconvenient for our users. How can the Zun team help with adding Zun to Ubuntu?

[1] https://docs.openstack.org/zun/latest/install/index.html

Best regards,
Hongbin

Hi Hongbin,

If we were to get working packages from the community and commitment to test, 
I'd be happy to sponsor uploads to Ubuntu and backport to the cloud archive.

Thanks,
Corey



[openstack-dev] The Forum Schedule is now live

2018-04-27 Thread Jimmy McArthur

Hello all -

Please take a look here for the posted Forum schedule: 
https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224  
You should also see it update on your Summit App.


Thank you and see you in Vancouver!
Jimmy




Re: [openstack-dev] [Openstack] OpenStack Queens for Ubuntu 18.04 LTS

2018-04-27 Thread Corey Bryant
On Fri, Apr 27, 2018 at 11:23 AM, Tobias Urdin wrote:

> Hello,
>
> I was very interested in packaging Zun for Ubuntu however I did not have
> the time to properly get started.
>
> I was able to package kuryr-lib, I've uploaded it here for now
> https://github.com/tobias-urdin/deb-kuryr-lib
>
>
> Would love to see both Zun and Qinling in Ubuntu to get a good grip on the
> container world :)
> Best regards
>
>
Awesome Tobias. I can take a closer look next week if you'd like.

Thanks,
Corey

>
> On 04/27/2018 04:59 PM, Corey Bryant wrote:
>
> On Fri, Apr 27, 2018 at 10:20 AM, Hongbin Lu  wrote:
>
>> Corey,
>>
>> Thanks for the information. Would you clarify what is "working packages
>> from the community"?
>>
>> Best regards,
>> Hongbin
>>
>
> Sorry I guess that comment is probably a bit vague.
>
> The OpenStack packages are open source like many other projects. They're
> Apache 2 licensed and we gladly accept contributions. :)
>
> This is a good starting point for working with the Ubuntu OpenStack
> packages:
> https://wiki.ubuntu.com/OpenStack/CorePackages
>
> If you or someone else were to provide package sources for zun that DTRT
> to create binary packages, and if they can test them, then I'd be happy to
> review/sponsor the Ubuntu and cloud-archive uploads.
>
> Thanks,
> Corey
>
>
>>
>> On Fri, Apr 27, 2018 at 9:30 AM, Corey Bryant wrote:
>>>
>>>
>>> On Fri, Apr 27, 2018 at 9:03 AM, Hongbin Lu wrote:
>>>
 Hi Corey,

 What are the requirements to include OpenStack Zun in the Ubuntu
 packages? We have a comprehensive installation guide [1] that is used by
 a lot of users when they install Zun. However, the lack of Ubuntu
 packages is inconvenient for our users. How can the Zun team help with
 adding Zun to Ubuntu?

 [1] https://docs.openstack.org/zun/latest/install/index.html

 Best regards,
 Hongbin

>>>
>>> Hi Hongbin,
>>>
>>> If we were to get working packages from the community and commitment to
>>> test, I'd be happy to sponsor uploads to Ubuntu and backport to the cloud
>>> archive.
>>>
>>> Thanks,
>>> Corey
>>>


[openstack-dev] [tripleo] validating overcloud config changes on a redeploy

2018-04-27 Thread Ade Lee
Hi,

Recently I started looking at how we implement password changes in an
existing deployment, and found that there were issues.  This made me
wonder whether we needed a test job to confirm that password changes
(and other config changes) are in fact executed properly.

As far as I understand it, the way to do password changes is to - 
1) Create a yaml file containing the parameters to be changed and 
   their new values
2) call openstack overcloud deploy and append -e new_params.yaml
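
A minimal sketch of both pieces (the parameter name here is just an
example; real names depend on the service being reconfigured):

    # new_params.yaml
    parameter_defaults:
      RabbitPassword: new-secret

    openstack overcloud deploy --templates -e new_params.yaml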

Note that the above steps can really describe the testing of setting
any config changes (not just passwords).

Of course, if we do change passwords, we'll want to validate that the
config files have changed, the keystone/db users have been modified, the
mistral plan has been updated, services are still running etc.

After talking with many folks, it seems there is no clear consensus
where code to do the above tasks should live.  Should it be in tripleo-
upgrades, or in tripleo-validations or in a separate repo?

Is there anyone already doing something similar?

If we end up creating a role to do this, ideally it should be
deployment tool agnostic - usable by both infrared or quickstart or
others.

What's the best way to do this?

Thanks,
Ade



Re: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member

2018-04-27 Thread Richard Wellum
+1

On Fri, Apr 27, 2018 at 2:07 AM Paul Bourke  wrote:

> +1, always great working with Mark :)
>
> On 26/04/18 16:31, Jeffrey Zhang wrote:
> > Kolla core reviewer team,
> >
> > It is my pleasure to nominate mgoddard for the kolla core team.
> >
> > Mark has been working both upstream and downstream with kolla and
> > kolla-ansible for over two years, building bare metal compute clouds with
> > ironic for HPC. He's been involved with OpenStack since 2014. He started
> > the kayobe deployment project, which complements kolla-ansible. He is
> > also the most active non-core contributor over the last 90 days [1].
> >
> > Consider this nomination a +1 vote from me.
> >
> > A +1 vote indicates you are in favor of mgoddard as a candidate; a -1
> > is a veto. Voting is open for 7 days, until May 4th, or until a
> > unanimous response is reached or a veto vote occurs.
> >
> > [1] http://stackalytics.com/report/contribution/kolla-group/90
> >
> > --
> > Regards,
> > Jeffrey Zhang
> > Blog: http://xcodest.me 
> >
> >
> >


[openstack-dev] [all] Recent failures to use ARA or generate reports in the gate

2018-04-27 Thread David Moreau Simard
Hi,

I was made aware today that new installations of ARA were not working
or failing to generate reports in a variety of gate jobs with a stack
trace that ends with:
AttributeError: 'Blueprint' object has no attribute 'json_encoder'

The root cause was identified to be a new release of Flask, 0.12.3,
which shipped broken packages to PyPi [1].
This should be fixed momentarily once upstream ships a fixed 0.12.4 package.

In the meantime, we're going to merge a requirements.txt update to
blacklist 0.12.3 but it won't be effective until we cut a new release
of ARA which we hope to be able to do sometime next week.
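
The blacklist itself is a one-line exclusion in ARA's requirements.txt,
along these lines (sketch):

    Flask!=0.12.3  # 0.12.3 shipped broken packages to PyPI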

I'll take the opportunity to remind users of ARA that we're
transitioning away from statically generated reports [3] and you
should do that too if you haven't already.

[1]: https://github.com/pallets/flask/issues/2728
[2]: https://github.com/openstack/requirements/blob/a5537a6f4b9cc477067949e1f9136415ac216f21/upper-constraints.txt#L480
[3]: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128902.html

David Moreau Simard
Senior Software Engineer | OpenStack RDO

dmsimard = [irc, github, twitter]



Re: [openstack-dev] [Openstack] OpenStack Queens for Ubuntu 18.04 LTS

2018-04-27 Thread Tobias Urdin
Hello,

I was very interested in packaging Zun for Ubuntu however I did not have the 
time to properly get started.

I was able to package kuryr-lib, I've uploaded it here for now 
https://github.com/tobias-urdin/deb-kuryr-lib


Would love to see both Zun and Qinling in Ubuntu to get a good grip on the 
container world :)

Best regards

On 04/27/2018 04:59 PM, Corey Bryant wrote:
On Fri, Apr 27, 2018 at 10:20 AM, Hongbin Lu wrote:
Corey,

Thanks for the information. Would you clarify what is "working packages from 
the community"?

Best regards,
Hongbin

Sorry I guess that comment is probably a bit vague.

The OpenStack packages are open source like many other projects. They're Apache 
2 licensed and we gladly accept contributions. :)

This is a good starting point for working with the Ubuntu OpenStack packages:
https://wiki.ubuntu.com/OpenStack/CorePackages

If you or someone else were to provide package sources for zun that DTRT to 
create binary packages, and if they can test them, then I'd be happy to 
review/sponsor the Ubuntu and cloud-archive uploads.

Thanks,
Corey


On Fri, Apr 27, 2018 at 9:30 AM, Corey Bryant wrote:


On Fri, Apr 27, 2018 at 9:03 AM, Hongbin Lu wrote:
Hi Corey,

What are the requirements to include OpenStack Zun in the Ubuntu packages? We 
have a comprehensive installation guide [1] that is used by a lot of users 
when they install Zun. However, the lack of Ubuntu packages is 
inconvenient for our users. How can the Zun team help with adding Zun to Ubuntu?

[1] https://docs.openstack.org/zun/latest/install/index.html

Best regards,
Hongbin

Hi Hongbin,

If we were to get working packages from the community and commitment to test, 
I'd be happy to sponsor uploads to Ubuntu and backport to the cloud archive.

Thanks,
Corey



Re: [openstack-dev] [puppet] Proposing Tobias Urdin to join Puppet OpenStack core

2018-04-27 Thread Iury Gregory
+1

On Fri, Apr 27, 2018, 12:15 Mohammed Naser  wrote:

> Hi everyone,
>
> I'm proposing that we add Tobias Urdin to the core Puppet OpenStack
> team, as they've been putting in great reviews over the past few
> months, have directly contributed to resolving all the Ubuntu
> deployment issues, and helped us bring Ubuntu support back and make
> the jobs voting again.
>
> Thank you,
> Mohammed


[openstack-dev] [puppet] Proposing Tobias Urdin to join Puppet OpenStack core

2018-04-27 Thread Mohammed Naser
Hi everyone,

I'm proposing that we add Tobias Urdin to the core Puppet OpenStack
team, as they've been putting in great reviews over the past few
months, have directly contributed to resolving all the Ubuntu
deployment issues, and helped us bring Ubuntu support back and make
the jobs voting again.

Thank you,
Mohammed



[openstack-dev] [keystone] Keystone Team Update - Week of 23 April 2018

2018-04-27 Thread Colleen Murphy
# Keystone Team Update - Week of 23 April 2018

## News

### scope_types in nova

We've had some good discussions incorporating scope_types into nova [0]. Thanks 
to mriedem and jaypipes for helping out. The discussion flushed out some work 
needed in keystonemiddleware [1] and oslo.context [2], making the interaction 
between those components clearer and making it easier for other services to 
use system-scoped tokens. Jay's comments/questions are probably going to be asked 
by other people working on incorporating these changes into their service. If 
that pertains to you, please see those reviews.

[0] https://review.openstack.org/#/c/553613/
[1] https://review.openstack.org/#/c/564072/
[2] https://review.openstack.org/#/c/530509/
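
For those following along, a policy rule that declares scope_types looks
roughly like the sketch below with oslo.policy (the rule name and check
string here are made up for illustration):

    from oslo_policy import policy

    rule = policy.DocumentedRuleDefault(
        name='os_compute_api:os-services:list',
        check_str='role:reader and system_scope:all',
        description='List compute services.',
        operations=[{'path': '/os-services', 'method': 'GET'}],
        # only system-scoped tokens may satisfy this rule
        scope_types=['system'],
    )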

### Milestone 1 retrospective

We had our first team retrospective of the cycle after the meeting on Tuesday. 
We captured our thoughts on a Trello board[3].

[3] https://trello.com/b/PiJecAs4/keystone-rocky-m1-retrospective

### Forum schedule

All of the topics we submitted for the Vancouver forum were accepted[4][5][6].

[4] 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21761/default-roles
[5] 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21762/keystone-feedback-session
[6] 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21760/unified-limits

## Open Specs

Search query: https://goo.gl/eyTktx

We still have four open keystone specs as well as our cross-project spec on 
default roles[7]. At our milestone retrospective we talked about possibly 
dropping some of the lower priority specs from the roadmap for this cycle.

[7] https://review.openstack.org/#/c/523973/

## Recently Merged Changes

Search query: https://goo.gl/hdD9Kw

We merged 14 changes this week.

## Changes that need Attention

Search query: https://goo.gl/tW5PiH

There are 62 changes that are passing CI, not in merge conflict, have no 
negative reviews and aren't proposed by bots.

## Bugs

Report: https://gist.github.com/lbragstad/80862a9111ff821af07e43e217c52190

This week we opened 6 new bugs and closed 2.

## Milestone Outlook

https://releases.openstack.org/rocky/schedule.html

We're about six weeks away from spec freeze. Feature proposal freeze is just 
two weeks after that.

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad: 
https://etherpad.openstack.org/p/keystone-team-newsletter



Re: [openstack-dev] [nova] Default scheduler filters survey

2018-04-27 Thread Jim Rollenhagen
On Wed, Apr 18, 2018 at 11:17 AM, Artom Lifshitz wrote:

> Hi all,
>
> A CI issue [1] caused by tempest thinking some filters are enabled
> when they're really not, and a proposed patch [2] to add
> (Same|Different)HostFilter to the default filters as a workaround, has
> led to a discussion about what filters should be enabled by default in
> nova.
>
> The default filters should make sense for a majority of real world
> deployments. Adding some filters to the defaults because CI needs them
> is faulty logic, because the needs of CI are different to the needs of
> operators/users, and the latter takes priority (though it's my
> understanding that a good chunk of operators run tempest on their
> clouds post-deployment as a way to validate that the cloud is working
> properly, so maybe CI's and users' needs aren't that different after
> all).
>
> To that end, we'd like to know what filters operators are enabling in
> their deployment. If you can, please reply to this email with your
> [filter_scheduler]/enabled_filters (or
> [DEFAULT]/scheduler_default_filters if you're using an older version)
> option from nova.conf. Any other comments are welcome as well :)
>

At Oath:

AggregateImagePropertiesIsolation
ComputeFilter
CoreFilter
DifferentHostFilter
SameHostFilter
ServerGroupAntiAffinityFilter
ServerGroupAffinityFilter
AvailabilityZoneFilter
AggregateInstanceExtraSpecsFilter

// jim


Re: [openstack-dev] [Openstack] OpenStack Queens for Ubuntu 18.04 LTS

2018-04-27 Thread Corey Bryant
On Fri, Apr 27, 2018 at 10:20 AM, Hongbin Lu  wrote:

> Corey,
>
> Thanks for the information. Would you clarify what is "working packages
> from the community"?
>
> Best regards,
> Hongbin
>

Sorry I guess that comment is probably a bit vague.

The OpenStack packages are open source like many other projects. They're
Apache 2 licensed and we gladly accept contributions. :)

This is a good starting point for working with the Ubuntu OpenStack
packages:
https://wiki.ubuntu.com/OpenStack/CorePackages

If you or someone else were to provide package sources for zun that DTRT to
create binary packages, and if they can test them, then I'd be happy to
review/sponsor the Ubuntu and cloud-archive uploads.
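
For anyone who wants to try, building and smoke-testing a source package
locally is roughly the following (standard Debian tooling; the binary
package name here is hypothetical):

    sudo apt-get build-dep ./           # install the build dependencies
    debuild -us -uc                     # build unsigned source and binary packages
    sudo dpkg -i ../python3-zun_*.deb   # install the result for testing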

Thanks,
Corey


>
> On Fri, Apr 27, 2018 at 9:30 AM, Corey Bryant wrote:
>
>>
>>
>> On Fri, Apr 27, 2018 at 9:03 AM, Hongbin Lu  wrote:
>>
>>> Hi Corey,
>>>
>>> What are the requirements to include OpenStack Zun in the Ubuntu
>>> packages? We have a comprehensive installation guide [1] that is used by
>>> a lot of users when they install Zun. However, the lack of Ubuntu
>>> packages is inconvenient for our users. How can the Zun team help with
>>> adding Zun to Ubuntu?
>>>
>>> [1] https://docs.openstack.org/zun/latest/install/index.html
>>>
>>> Best regards,
>>> Hongbin
>>>
>>
>> Hi Hongbin,
>>
>> If we were to get working packages from the community and commitment to
>> test, I'd be happy to sponsor uploads to Ubuntu and backport to the cloud
>> archive.
>>
>> Thanks,
>> Corey
>>


Re: [openstack-dev] [Openstack-operators] [nova] Default scheduler filters survey

2018-04-27 Thread Matt Riedemann

On 4/27/2018 4:02 AM, Tomáš Vondra wrote:
Also, Windows host isolation is done using image metadata. I have filed 
a bug somewhere that it does not work correctly with Boot from Volume.


Likely because for boot from volume the instance.image_id is ''. The 
request spec, which the filter has access to, also likely doesn't have 
the backing image metadata for the volume because the instance isn't 
creating with an image directly. But nova could fetch the image metadata 
from the volume and put that into the request spec. We fixed a similar 
bug recently for the IsolatedHostsFilter:


https://review.openstack.org/#/c/543263/
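
The rough shape of that kind of fix: when the instance has no image of its
own, fall back to the image metadata cinder stores on the volume. A sketch
only, not the actual nova code (names simplified):

    # volumes created from an image expose the original image properties
    # via the 'volume_image_metadata' field of the volumes API; those
    # properties can then be placed into the request spec for the filters
    image_props = volume.get('volume_image_metadata', {})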

If you can find the bug, or report a new one, I could take a look.

--

Thanks,

Matt



Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-27 Thread Emilien Macchi
On Fri, Apr 27, 2018 at 6:43 AM, Dmitry Tantsur  wrote:
[...]

> I would like to run at least one TripleO CI job with cleaning enabled. Any
> objections to that? If not, what would be the best job (it has to run
> ironic, obviously)?
>
> [0] https://review.openstack.org/#/q/topic:cleaning+status:open


We "only" have 2 jobs in the (third party) gate: fs001 and fs035. Both are
testing the same thing the last time I checked except fs035 is ipv6. I
would pick one of them and just do it.
I'll let CI team comment on that.
-- 
Emilien Macchi


Re: [openstack-dev] [Openstack] OpenStack Queens for Ubuntu 18.04 LTS

2018-04-27 Thread Hongbin Lu
Corey,

Thanks for the information. Would you clarify what is "working packages
from the community"?

Best regards,
Hongbin

On Fri, Apr 27, 2018 at 9:30 AM, Corey Bryant wrote:

>
>
> On Fri, Apr 27, 2018 at 9:03 AM, Hongbin Lu  wrote:
>
>> Hi Corey,
>>
>> What are the requirements to include OpenStack Zun in the Ubuntu
>> packages? We have a comprehensive installation guide [1] that is used by
>> a lot of users when they install Zun. However, the lack of Ubuntu
>> packages is inconvenient for our users. How can the Zun team help with
>> adding Zun to Ubuntu?
>>
>> [1] https://docs.openstack.org/zun/latest/install/index.html
>>
>> Best regards,
>> Hongbin
>>
>
> Hi Hongbin,
>
> If we were to get working packages from the community and commitment to
> test, I'd be happy to sponsor uploads to Ubuntu and backport to the cloud
> archive.
>
> Thanks,
> Corey
>


[openstack-dev] [all][reno] issue with reno 2.9.0 and duplicate anchors

2018-04-27 Thread Doug Hellmann
The latest release of reno tries to add anchors to the page in a way
that ensures they are named consistently across builds. For projects
with the same version number in multiple series (which can happen for
non-milestone projects that haven't tagged for rocky yet), this causes
duplicate anchors and causes the release notes build to fail.

There is a fix for this in https://review.openstack.org/564763 and we
will try to get a new release of reno out as soon as that patch merges.

Doug



Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-27 Thread Dmitry Tantsur
Okay, it seems like the idea was not well received, but I do have some action 
items out of the discussion (thanks all!):


1. Simplify running cleaning per node. I've proposed patches [0] to add a new 
command (documentation to follow) to do it.


2. Consider running metadata cleaning during deployment in Ironic. This is a bit 
difficult right now, but will simplify substantially after the deploy steps work.


Any other ideas?

I would like to run at least one TripleO CI job with cleaning enabled. Any 
objections to that? If not, what would be the best job (it has to run ironic, 
obviously)?


[0] https://review.openstack.org/#/q/topic:cleaning+status:open
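
For anyone who wants to experiment before any default changes land,
automated cleaning can already be enabled by hand; a minimal sketch,
assuming the current option names:

    # ironic.conf
    [conductor]
    automated_clean = true

    # undercloud.conf (TripleO)
    clean_nodes = true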

On 04/25/2018 03:14 PM, Dmitry Tantsur wrote:

Hi all,

I'd like to restart conversation on enabling node automated cleaning by default 
for the undercloud. This process wipes partitioning tables (optionally, all the 
data) from overcloud nodes each time they move to "available" state (i.e. on 
initial enrolling and after each tear down).


We have had it disabled for a few reasons:
- it was not possible to skip the time-consuming wiping of data from disks
- the way our workflows used to work required going between manageable and 
available steps several times


However, having cleaning disabled has several issues:
- a configdrive left from a previous deployment may confuse cloud-init
- a bootable partition left from a previous deployment may take precedence in 
some BIOS
- an UEFI boot partition left from a previous deployment is likely to confuse 
UEFI firmware
- apparently ceph does not work correctly without cleaning (I'll defer to the 
storage team to comment)


For these reasons we don't recommend having cleaning disabled, and I propose to 
re-enable it.


It has the following drawbacks:
- The default workflow will require another node boot, thus becoming several 
minutes longer (incl. the CI)

- It will no longer be possible to easily restore a deleted overcloud node.

What do you think? If I don't hear principal objections, I'll prepare a patch in 
the coming days.


Dmitry





Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-27 Thread Dmitry Tantsur

Hi Tim,

On 04/26/2018 07:16 PM, Tim Bell wrote:

My worry with changing the default is that it would be like adding the 
following in /etc/environment,

alias ls=' rm -rf / --no-preserve-root'

i.e. an operation which was previously read-only now becomes irreversible.


Well, deleting instances has never been read-only :) The problem really is that 
Heat can delete instances during seemingly innocent operations. And I do agree 
that we cannot just ignore this problem.




We also have current use cases with Ironic where we are moving machines between 
projects by 'disowning' them to the spare pool and then reclaiming them (by 
UUID) into new projects with the same state.


I'd be curious to hear how exactly it works. Does it work on Nova level or on 
Ironic level?




However, other operators may feel differently which is why I suggest asking 
what people feel about changing the default.

In any case, changes in default behaviour need to be highly visible.

Tim

-Original Message-
From: "arkady.kanev...@dell.com" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 26 April 2018 at 18:48
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

 +1.
 It would be good to also identify the use cases.
 I am surprised that a node should be cleaned up automatically.
 I would expect that we want it to be a deliberate request from the
 administrator.
 Maybe by the user when they "return" a node to the free pool after bare
 metal usage.
 Thanks,
 Arkady
 
 -Original Message-

 From: Tim Bell [mailto:tim.b...@cern.ch]
 Sent: Thursday, April 26, 2018 11:17 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by 
default?
 
 How about asking the operators at the summit Forum or asking on openstack-operators to see what the users think?
 
 Tim
 
 -Original Message-

 From: Ben Nemec 
 Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

 Date: Thursday, 26 April 2018 at 17:39
 To: "OpenStack Development Mailing List (not for usage questions)" 
, Dmitry Tantsur 
 Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by 
default?
 
 
 
 On 04/26/2018 09:24 AM, Dmitry Tantsur wrote:

 > Answering to both James and Ben inline.
 >
 > On 04/25/2018 05:47 PM, Ben Nemec wrote:
 >>
 >>
 >> On 04/25/2018 10:28 AM, James Slagle wrote:
 >>> On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur
 >>>  wrote:
  On 04/25/2018 04:26 PM, James Slagle wrote:
 >
 > On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur 

 > wrote:
 >>
 >> Hi all,
 >>
 >> I'd like to restart conversation on enabling node automated
 >> cleaning by
 >> default for the undercloud. This process wipes partitioning 
tables
 >> (optionally, all the data) from overcloud nodes each time they
 >> move to
 >> "available" state (i.e. on initial enrolling and after each tear
 >> down).
 >>
 >> We have had it disabled for a few reasons:
 >> - it was not possible to skip the time-consuming wiping of data
 >> from disks
 >> - the way our workflows used to work required going between
 >> manageable
 >> and
 >> available steps several times
 >>
 >> However, having cleaning disabled has several issues:
 >> - a configdrive left from a previous deployment may confuse
 >> cloud-init
 >> - a bootable partition left from a previous deployment may take
 >> precedence
 >> in some BIOS
 >> - an UEFI boot partition left from a previous deployment is 
likely to
 >> confuse UEFI firmware
 >> - apparently ceph does not work correctly without cleaning (I'll
 >> defer to
 >> the storage team to comment)
 >>
 >> For these reasons we don't recommend having cleaning disabled, 
and I
 >> propose
 >> to re-enable it.
 >>
 >> It has the following drawbacks:
 >> - The default workflow will require another node boot, thus 
becoming
 >> several
 >> minutes longer (incl. the CI)
 >> - It will no longer be possible to easily restore a 

Re: [openstack-dev] [Openstack] OpenStack Queens for Ubuntu 18.04 LTS

2018-04-27 Thread Corey Bryant
On Fri, Apr 27, 2018 at 9:03 AM, Hongbin Lu  wrote:

> Hi Corey,
>
> What are the requirements to include OpenStack Zun in the Ubuntu
> packages? We have a comprehensive installation guide [1] that is used by
> a lot of users when they install Zun. However, the lack of Ubuntu
> packages is inconvenient for our users. How can the Zun team help with
> adding Zun to Ubuntu?
>
> [1] https://docs.openstack.org/zun/latest/install/index.html
>
> Best regards,
> Hongbin
>

Hi Hongbin,

If we were to get working packages from the community and commitment to
test, I'd be happy to sponsor uploads to Ubuntu and backport to the cloud
archive.

Thanks,
Corey


[openstack-dev] [nova] [placement] placement update 18-17

2018-04-27 Thread Chris Dent


Welcome to placement update 18-17. This is an expand update, meaning
I've gone searching for new stuff to add to the lists.

In other news: I'll be on holiday next week so there won't be one of
these next week, unless somebody else wants to do one.

# Most Important

A great deal of stuff is reliant on nested providers in allocation
candidates, so moving it forward is the most important. Next in line
are granular resource requests and consumer generations.

# What's Changed

A race condition in synchronizing os-traits has been corrected by
doing the sync in an independent transaction. Code that handles the
"local-delete" situation and cleans up allocations has been merged.

# Bugs

* Placement related bugs not yet in progress:  https://goo.gl/TgiPXb
  17, +1 on last week
* In progress placement bugs: https://goo.gl/vzGGDQ
  8, -4 (woot!) on last week

# Specs

Total last week: 12. Now: 11 (because one was abandoned)

* https://review.openstack.org/#/c/549067/
 VMware: place instances on resource pool
 (using update_provider_tree)

* https://review.openstack.org/#/c/552924/
Proposes NUMA topology with RPs

* https://review.openstack.org/#/c/544683/
Account for host agg allocation ratio in placement

* https://review.openstack.org/#/c/552105/
Support default allocation ratios

* https://review.openstack.org/#/c/438640/
Spec on preemptible servers

* https://review.openstack.org/#/c/557065/
  Proposes Multiple GPU types

* https://review.openstack.org/#/c/555081/
  Standardize CPU resource tracking

* https://review.openstack.org/#/c/502306/
  Network bandwidth resource provider

* https://review.openstack.org/#/c/509042/
  Propose counting quota usage from placement

* https://review.openstack.org/#/c/560174/
Add history behind nullable project_id and user_id

* https://review.openstack.org/#/c/559466/
Return resources of entire trees in Placement

# Main Themes

## Nested providers in allocation candidates

Representing nested providers in the response to GET
/allocation_candidates is required to actually make use of all the
topology that update provider tree will report. That work is in
progress at:

   
https://review.openstack.org/#/q/topic:bp/nested-resource-providers-allocation-candidates
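
(For illustration, my own example rather than anything lifted from the
patches: a request that has to account for nested providers could look
like

    GET /allocation_candidates?resources=VCPU:1,MEMORY_MB:512,SRIOV_NET_VF:1

where the SRIOV_NET_VF inventory may live on a child provider of the
compute node rather than on the root, and the response has to be able to
express that.)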

## Mirror nova host aggregates to placement

This makes it so some kinds of aggregate filtering can be done
"placement side" by mirroring nova host aggregates into placement
aggregates.

 https://review.openstack.org/#/q/topic:bp/placement-mirror-host-aggregates

This is still in progress but took a little attention break while
nested provider discussions took up (and destroyed) brains.

## Consumer Generations

This allows multiple agents to "safely" update allocations for a
single consumer. The code is in progress:

  https://review.openstack.org/#/q/topic:bp/add-consumer-generation

This is moving along, but is encountering some debate over how best
to represent the data and flexibly deal with the (at least) three
different ways we need to manage consumer information.
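
For the curious, the rough shape under discussion (per the in-progress
patches, so treat this as a sketch that may change) is a
consumer_generation key sent alongside the allocations, e.g.:

    PUT /allocations/{consumer_uuid}
    {
        "allocations": {"$RP_UUID": {"resources": {"VCPU": 1}}},
        "project_id": "$PROJECT_ID",
        "user_id": "$USER_ID",
        "consumer_generation": 0
    }

where a generation that doesn't match the server's view causes the write
to be rejected rather than silently clobbering another agent's update.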

## Granular

Ways and means of addressing granular requests when dealing with
nested resource providers. Granular in this sense is grouping
resource classes and traits together in their own lumps as required.
Topic is:

 https://review.openstack.org/#/q/topic:bp/granular-resource-requests

# Extraction

I've created patches that adjust devstack and zuul config to use the
separate placement database connection.

   devstack: https://review.openstack.org/#/c/564180/
   zuul: https://review.openstack.org/#/c/564067/
   db connection: https://review.openstack.org/#/c/362766/

All of these things could merge without requiring any action by
anybody. Instead they allow people to use different connections, but
don't require it.
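
For anyone wanting to experiment, the knob involved (as the db connection
patch currently stands, so consider this a sketch) is a separate
connection string in nova.conf:

    [placement_database]
    connection = mysql+pymysql://root:secret@127.0.0.1/placement?charset=utf8

When the option is unset, placement keeps using the api database exactly
as before, which is why merging the patches requires no action.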

Jay has made a first pass at an os-resource-classes library:

https://github.com/jaypipes/os-resource-classes/

which I thought was potentially more heavyweight than required, but
other people should have a look too.

The other main issue in extraction is the placement unit and
functional tests have a lot of dependence on the fixtures and base
classes used in the nova unit and functional tests. For the time
being that is okay, but it would be useful to start unwinding that,
soon. Same will be true for config.

# Other

14 entries last week; 4 of those have merged, but we've added some to
bring the total to 17.

* https://review.openstack.org/#/c/546660/
 Purge comp_node and res_prvdr records during deletion of
 cells/hosts

* https://review.openstack.org/#/q/topic:bp/placement-osc-plugin-rocky
 A huge pile of improvements to osc-placement

* https://review.openstack.org/#/c/524425/
 General policy sample file for placement

* https://review.openstack.org/#/c/527791/
Get resource provider by uuid or name (osc-placement)

* https://review.openstack.org/#/c/477478/
placement: Make API history doc more 

Re: [openstack-dev] [Openstack] OpenStack Queens for Ubuntu 18.04 LTS

2018-04-27 Thread Hongbin Lu
Hi Corey,

What are the requirements for including OpenStack Zun in the Ubuntu
packages? We have a comprehensive installation guide [1] that is used by a
lot of users when installing Zun. However, the lack of Ubuntu packages is
inconvenient for our users. How can the Zun team help with adding Zun to
Ubuntu?

[1] https://docs.openstack.org/zun/latest/install/index.html

Best regards,
Hongbin

On Fri, Apr 27, 2018 at 8:43 AM, Corey Bryant 
wrote:

> Hi All,
>
> With yesterday’s release of Ubuntu 18.04 LTS (the Bionic Beaver) the
> Ubuntu OpenStack team at Canonical is pleased to announce the general
> availability of OpenStack Queens on Ubuntu 18.04 LTS. This release of
> Ubuntu is a Long Term Support release that will be supported for 5 years.
>
> Further details for the Ubuntu 18.04 release can be found at:
> https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes.
>
> And further details for the OpenStack Queens release can be found at:
> https://www.openstack.org/software/queens.
>
> Installing on Ubuntu 18.04 LTS
> --
> No extra steps are required; just start installing OpenStack!
>
> Installing on Ubuntu 16.04 LTS
> --
> If you’re interested in OpenStack Queens on Ubuntu 16.04, please refer to
> http://lists.openstack.org/pipermail/openstack-dev/2018-March/127851.html,
> which coincided with the upstream OpenStack Queens release.
>
> Packages
> 
> The 18.04 archive includes updates for:
>
> aodh, barbican, ceilometer, ceph (12.2.4), cinder, congress, designate,
> designate-dashboard, dpdk (17.11), glance, glusterfs (3.13.2), gnocchi,
> heat, heat-dashboard, horizon, ironic, keystone, libvirt (4.0.0), magnum,
> manila, manila-ui, mistral, murano, murano-dashboard, networking-bagpipe,
> networking-bgpvpn, networking-hyperv, networking-l2gw, networking-odl,
> networking-ovn, networking-sfc, neutron, neutron-dynamic-routing,
> neutron-fwaas, neutron-lbaas, neutron-lbaas-dashboard, neutron-taas,
> neutron-vpnaas, nova, nova-lxd, openstack-trove, openvswitch (2.9.0),
> panko, qemu (2.11), rabbitmq-server (3.6.10), sahara, sahara-dashboard,
> senlin, swift, trove-dashboard, vmware-nsx, watcher, and zaqar.
>
> For a full list of packages and versions, please refer to [0].
>
> Branch Package Builds
> -
> If you want to try out the latest updates to stable branches, we are
> delivering continuously integrated packages on each upstream commit in the
> following PPA’s:
>
> sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka
> sudo add-apt-repository ppa:openstack-ubuntu-testing/ocata
> sudo add-apt-repository ppa:openstack-ubuntu-testing/pike
> sudo add-apt-repository ppa:openstack-ubuntu-testing/queens
>
> Bear in mind these are built per-commitish (30 min checks for new commits
> at the moment), so YMMV from time to time.
>
> Reporting bugs
> --
> If you run into any issues please report bugs using the ‘ubuntu-bug’ tool:
>
> sudo ubuntu-bug nova-conductor
>
> this will ensure that bugs get logged in the right place in Launchpad.
>
> Thank you to all who contributed to OpenStack Queens and Ubuntu Bionic
> both upstream and in Debian/Ubuntu packaging!
>
> Regards,
> Corey
> (on behalf of the Ubuntu OpenStack team)
>
> [0] http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-
> archive/queens_versions.html
>


[openstack-dev] [Openstack] OpenStack Queens for Ubuntu 18.04 LTS

2018-04-27 Thread Corey Bryant
Hi All,

With yesterday’s release of Ubuntu 18.04 LTS (the Bionic Beaver) the Ubuntu
OpenStack team at Canonical is pleased to announce the general availability
of OpenStack Queens on Ubuntu 18.04 LTS. This release of Ubuntu is a Long
Term Support release that will be supported for 5 years.

Further details for the Ubuntu 18.04 release can be found at:
https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes.

And further details for the OpenStack Queens release can be found at:
https://www.openstack.org/software/queens.

Installing on Ubuntu 18.04 LTS
--
No extra steps are required; just start installing OpenStack!
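
For example, on a fresh 18.04 system the Queens packages come straight
from the Ubuntu archive (nova-compute here is just a sample package;
substitute whichever services you are deploying):

    sudo apt update
    sudo apt install nova-compute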

Installing on Ubuntu 16.04 LTS
--
If you’re interested in OpenStack Queens on Ubuntu 16.04, please refer to
http://lists.openstack.org/pipermail/openstack-dev/2018-March/127851.html,
which coincided with the upstream OpenStack Queens release.

Packages

The 18.04 archive includes updates for:

aodh, barbican, ceilometer, ceph (12.2.4), cinder, congress, designate,
designate-dashboard, dpdk (17.11), glance, glusterfs (3.13.2), gnocchi,
heat, heat-dashboard, horizon, ironic, keystone, libvirt (4.0.0), magnum,
manila, manila-ui, mistral, murano, murano-dashboard, networking-bagpipe,
networking-bgpvpn, networking-hyperv, networking-l2gw, networking-odl,
networking-ovn, networking-sfc, neutron, neutron-dynamic-routing,
neutron-fwaas, neutron-lbaas, neutron-lbaas-dashboard, neutron-taas,
neutron-vpnaas, nova, nova-lxd, openstack-trove, openvswitch (2.9.0),
panko, qemu (2.11), rabbitmq-server (3.6.10), sahara, sahara-dashboard,
senlin, swift, trove-dashboard, vmware-nsx, watcher, and zaqar.

For a full list of packages and versions, please refer to [0].

Branch Package Builds
-
If you want to try out the latest updates to stable branches, we are
delivering continuously integrated packages on each upstream commit in the
following PPA’s:

sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka
sudo add-apt-repository ppa:openstack-ubuntu-testing/ocata
sudo add-apt-repository ppa:openstack-ubuntu-testing/pike
sudo add-apt-repository ppa:openstack-ubuntu-testing/queens

Bear in mind these are built per-commitish (30 min checks for new commits
at the moment), so YMMV from time to time.

Reporting bugs
--
If you run into any issues please report bugs using the ‘ubuntu-bug’ tool:

sudo ubuntu-bug nova-conductor

this will ensure that bugs get logged in the right place in Launchpad.

Thank you to all who contributed to OpenStack Queens and Ubuntu Bionic both
upstream and in Debian/Ubuntu packaging!

Regards,
Corey
(on behalf of the Ubuntu OpenStack team)

[0]
http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/queens_versions.html


Re: [openstack-dev] ​ [mistral] timeout and retry

2018-04-27 Thread Vitalii Solodilov
Thanks for confirmation. Ticket: https://bugs.launchpad.net/mistral/+bug/1767352
I will try to fix it.

27.04.2018, 12:02, "Renat Akhmerov" :
> Yep, agree that this is a bug. We need to fix that. Would you please
> create a ticket at LP?
>
> Thanks
>
> Renat Akhmerov
> @Nokia
>
> On 27 Apr 2018, 12:53 +0700, Vitalii Solodilov , wrote:
> > > No matter at what stage the task is, but if it’s still in RUNNING
> > > state or FAILED but we know that retry policy still didn’t use all
> > > attempts then the ‘timeout’ policy should force the task to fail.
> > Ok, then we have a bug, because the timeout policy doesn't force the
> > task to fail. It retries the task, and after that we have two actions
> > running in parallel.
> > https://github.com/openstack/mistral/blob/master/mistral/engine/policies.py#L537
> >
> > 27.04.2018, 07:50, "Renat Akhmerov" :
> > > Hi,
> > >
> > > I don’t clearly understand the problem you’re trying to point to.
> > >
> > > IMO, when using these two policies at the same time, the ‘timeout’
> > > policy should have higher priority. No matter what stage the task is
> > > at, if it’s still in RUNNING state, or FAILED but we know the retry
> > > policy still didn’t use all its attempts, then the ‘timeout’ policy
> > > should force the task to fail. In the second case, where it’s FAILED
> > > but the retry policy is still in play, we need to leave some ‘force’
> > > marker in the task state to make sure there’s no need to retry it
> > > further.
> > >
> > > Thanks
> > >
> > > Renat Akhmerov
> > > @Nokia
> > >
> > > On 24 Apr 2018, 05:00 +0700, Vitalii Solodilov , wrote:
> > > > Hi Renat, can you explain to me and Dougal how the timeout policy
> > > > should work with the retry policy?
> > > >
> > > > I guess there is a bug right now.
> > > > The behaviour is something like this: https://ibb.co/hhm0eH
> > > > Example: https://review.openstack.org/#/c/563759/
> > > > Logs: http://logs.openstack.org/59/563759/1/check/openstack-tox-py27/6f38808/job-output.txt.gz#_2018-04-23_20_54_55_376083
> > > > Even if we fix this bug so that after a task timeout we do not
> > > > retry the task, I don't understand which problem is solved by this
> > > > timeout-and-retry combination.
> > > > Another problem: what about task re-run, I mean via the Mistral
> > > > API? The problem is that the timeout delayed calls are not created.
> > > >
> > > > IMHO the combination of these policies should work like this:
> > > > https://ibb.co/fe5tzH
> > > > It is not a timeout per action, because when a task retries it
> > > > moves to some completed state and then back to RUNNING state. And
> > > > it will work fine with the with-items policy.
> > > > The main advantage is executor and RabbitMQ HA. I can specify a
> > > > small timeout, and if the executor dies the task is retried by
> > > > timeout and a new action is created.
> > > > The second is predictable behaviour. When I specify timeout: 10 and
> > > > retry.count: 5, I know that at most 5 actions will be created
> > > > before the SUCCESS state, and every action will execute for no
> > > > longer than 10 seconds.
> > > >
> > > > --
> > > > Best regards,
> > > > Vitalii Solodilov

--
Best regards,
Vitalii Solodilov
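
(For readers following the thread, a minimal sketch of a workflow that
combines the two policies under discussion; the workflow, task and action
names here are made up:

    ---
    version: '2.0'

    my_workflow:
      tasks:
        my_task:
          action: my_action
          timeout: 10
          retry:
            count: 5
            delay: 1

The bug above is that when the 10-second timeout fires while retry
attempts remain, the task is retried instead of being forced to fail,
leaving two actions running in parallel.)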


[openstack-dev] [valence] Valence 0.9.0 Release

2018-04-27 Thread Anusha Ramineni
Hi,

The Valence team is happy to announce the initial release of Valence 0.9.0
to PyPI. Please find the details below.

Valence PyPI URL: https://pypi.org/project/valence/

Documentation and release notes for the release can be found at:

Release notes: http://valence.readthedocs.io/en/latest/releasenotes/valence-0.9.html

Documentation: http://valence.readthedocs.io/en/latest/


Thanks,
Anusha


[openstack-dev] [nova] Next notification subteam meeting is cancelled

2018-04-27 Thread Balázs Gibizer

Hi,

I have to cancel the next notification subteam meeting, as it happens to
fall on the 1st of May, which is an (inter)national holiday. So the next
meeting is expected to be held on the 8th of May.


Cheers,
gibi




Re: [openstack-dev] [requirements][horizon][neutron] plugins depending on services

2018-04-27 Thread thomas.morin

Hi Monty,

Thanks for bringing this up.

Having run into the topic for a few combinations of deps, I'll certainly
agree that we need something better than what we currently have.
I don't feel that I have enough perspective on the whole system and
practices to give a strong opinion on what we should do, though.


A few comments... (below)


On 25/04/2018 16:40, Monty Taylor wrote:


projects with test requirements on git repo urls of other projects
--

There are a bunch of projects that need, for testing purposes, to 
depend on other projects. The majority are either neutron or horizon 
plugins, but conceptually there is nothing neutron or horizon specific 
about the issue. The problem they're trying to deal with is that they 
are a plugin to a service and they need to be able to import code from 
the service they are a plugin to in their unit tests.
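
For concreteness, the kind of line at issue in a plugin's
test-requirements.txt looks something like this (an illustrative example,
not quoted from any particular project):

    -e git+https://git.openstack.org/openstack/neutron#egg=neutron

pip can install that from git master, but such a line can never ship in a
release published to PyPI, which is a large part of the tension described
here.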


(using neutron to avoid being too abstract, but this generalizes to 
other components with plugins)


True, but sometimes a change to a neutron plugin may (with or without a
need to actually import neutron) need to run against a neutron version
from git (because the change has a Depends-On on a Neutron change, or
because the change depends on something that is in neutron master but
not in a release). We hit this when the plugin depends on a new or fixed
behavior.


While this case can in theory be fixed by moving the code introducing
the fixed or new behavior into neutron-lib, that is not always feasible
(because the work required to move this part of the code to neutron-lib
hasn't happened).







unwinding things


There are a few different options, but it's important to keep in mind 
that we ultimately want all of the following:


* The code works
* Tests can run properly in CI
* "Depends-On" works in CI so that you can test changes cross-repo


Note that this was true with tools/tox_install.sh, but broke when it was 
removed for a job such as legacy-networking-bgpvpn-dsvm-functional (see 
[1]) which does not inherit from zuul tox jobs, but still relies 
ultimately on zuul to run the test.


[1] 
http://logs.openstack.org/41/558741/11/check/legacy-networking-bgpvpn-dsvm-functional/86a743c/



* Tests can run properly locally for developers


(Broke when tools/tox_install.sh was abandoned, currently causing minor 
pain to lots of people working on neutron-plugins unless py27-dev hacks 
are in place in their project)




* Deployment requirements are accurately communicated to deployers


Was definitely improved by removing tools/tox_install.sh!




Specific Suggestions


As there are a few different scenarios, I want to suggest we do a few 
different things.


* Prefer interface libraries on PyPI that projects depend on

Like python-openstackclient and osc-lib, this is the *best* approach
for projects with plugins. Such interface libraries need to be able to 
do intermediate releases - and those intermediate releases need to not 
break the released version of the projects. This is the hardest and 
longest thing to do as well, so it's most likely to be a multi-cycle 
effort.


I would object to "best", for the following reasons:
- because this is not the starting point, the effort to librarize code 
is significant, and it's seems a fact that people don't rush to do it
- there is a practical drawback of doing that: even for project that 
have compatible release cycle, we have overhead of having to release 
e.g. neutron-lib with the change before we can consume it in neutron or 
a neutron plugin (and we have overhead to test the changes as well, with 
extra jobs to test against master or local .zuul.yaml hacks to force 
Depends-On to test what we want e.g. [x] ) ; a situation that would 
avoid this overhead would I think be superior


[x] https://review.openstack.org/#/c/557660/



* Treat inter-plugin depends as normal library depends

If networking-bgpvpn depends on networking-bagpipe and networking-odl, 
then networking-bagpipe and networking-odl need to be released to PyPI 
just like any other library in OpenStack. These are real runtime 
dependencies.


Just a side note here: networking-bagpipe and networking-odl provide
component drivers for their corresponding drivers in networking-bgpvpn;
they aren't strict runtime dependencies, but only dependencies for a
scenario where their driver is used. Given that, they were moved to
test-requirements dependencies recently (only required to run unit tests).


The situation for these drivers is a bit in flux though:
- ODL: the bgpvpn driver for ODL is a v1 driver that became obsolete;
there is a v2 driver sitting entirely in networking-odl
- bagpipe: the bgpvpn driver for bagpipe could be moved to
networking-bagpipe entirely -- the one reason why it hasn't happened
(apart from inertia) is that it is the reference driver for the
networking-bgpvpn project, and removing it from 

Re: [openstack-dev] [requirements][horizon][neutron] plugins depending on services

2018-04-27 Thread thomas.morin

On 25/04/2018 18:40, Jeremy Stanley wrote:

This came up again a few days ago for sahara-dashboard. We talked
through some obvious alternatives to keep its master branch from
depending on an unreleased state of horizon and the situation today
is that plugin developers have been relying on developing their
releases in parallel with the services. Not merging an entire
development cycle's worth of work until release day (whether that's
by way of a feature branch or by just continually rebasing and
stacking in Gerrit) would be a very painful workflow for them, and
having to wait a full release cycle before they could start
integrating support for new features in the service would be equally
unfortunate.


+1



As for merging the plugin and service repositories, they tend to be
developed by completely disparate teams so that could require a fair
amount of political work to solve. Extracting the plugin interface
into a separate library which releases more frequently than the
service does indeed sound like the sanest option, but will also
probably take quite a while for some teams to achieve (I gather
neutron-lib is getting there, but I haven't heard about any work
toward that end in Horizon yet).


+1



Re: [openstack-dev] [Nova] z/VM introducing a new config driveformat

2018-04-27 Thread Chen CH Ji
According to the requirements and comments, we have now enabled the CI runs
with run_validation = True.
And according to [1] below, tests such as [2] need the SSH validation to
pass.

There were also a couple of comments asking for some enhancements to the CI
logs, such as the format and legacy incorrect log links, etc.
The newest log samples can be found at [3] (take n-cpu as an example; those
logs end with _white.html).

Also, the blueprint [4] requested in the previous discussion is posted here
again for reference.

Please let us know whether the procedural -2 can be removed in order to
proceed. Thanks for your help.



[1]
http://extbasicopstackcilog01.podc.sl.edst.ibm.com/test_logs/jenkins-check-nova-master-17455/logs/tempest.log
2018-04-27 08:50:44.852 19582 DEBUG tempest [-] validation.run_validation
= True

http://extbasicopstackcilog01.podc.sl.edst.ibm.com/test_logs/jenkins-check-nova-master-17455/console.html
{0}
tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops
 [86.788179s] ... ok

[2]
https://github.com/openstack/tempest/blob/master/tempest/scenario/test_server_basic_ops.py

[3]
http://extbasicopstackcilog01.podc.sl.edst.ibm.com/test_logs/jenkins-check-nova-master-17455/logs/n-cpu.log_white.html

[4] https://review.openstack.org/#/c/562154/
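
(For anyone reproducing this: the switch in question lives in tempest.conf
and looks roughly like

    [validation]
    run_validation = true

which is what makes tests such as test_server_basic_ops actually SSH into
the guest to validate it.)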

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   melanie witt 
To: openstack-dev@lists.openstack.org
Date:   04/18/2018 01:41 AM
Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config driveformat



On Tue, 17 Apr 2018 16:58:22 +0800, Chen Ch Ji wrote:
> For the question on AE documentation, it's open source at [1] and the
> documentation for how to build and use it is at [2].
> Once our code is upstream, there is a set of documentation changes which
> will cover this image build process by adding some links there [3].

Thanks, that is good info.

> You are right, we need the image to have our Active Engine. I think
> different arches and platforms might have their own unique requirements,
> and our Active Engine is very similar to cloud-init, so there is no harm
> in adding it from the user's perspective.
> I think later we can upload the image somewhere so anyone is able to
> consume it as a test image if they like, because images for different
> arches (e.g. x86 and s390x) can't be shared anyway.
>
> For the config drive format you mentioned, actually, as per the previous
> explanation and discussion with Michael and Dan,
> we found that iso9660 can be used (previously we made a bad assumption)
> and we already changed the patch in [4],
> so it's exactly the same as the other virt drivers you mentioned: we
> don't need a special format, and iso9660 works perfectly for our driver.

That's good news, I'm glad that got resolved.

> It makes sense to me that we are temporarily moved out of the runway. I
> suppose we can adjust the CI to enable run_ssh = true
> with the config drive functionality very soon, and we will apply for
> review after that with the test result requested in our CI log.

Okay, sounds good. Since you expect to be up and running with
[validation]run_validation = True soon, I'm going to move the z/VM
driver blueprint back to the front of the queue and put the next
blueprint in line into the runway.

Then, when the next blueprint end date arrives (currently 2018-04-30),
if the z/VM CI is ready with cleaned up, human readable log files and is
running with run_ssh = True with the test_server_basic_ops test to
verify config drive operation, we will add the z/VM driver blueprint
back to a runway for dedicated review.

Let us know when the z/VM CI is ready, in case other runway reviews are
completed early. If other runway reviews complete early, a runway space
might be available earlier than 2018-04-30.

Thanks,
-melanie

> Thanks
>
> [1]
> https://github.com/mfcloud/python-zvm-sdk/blob/master/tools/share/zvmguestconfigure

> [2]
> http://cloudlib4zvm.readthedocs.io/en/latest/makeimage.html#configuration-of-activation-engine-ae-in-zlinux

> [3]
> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/add-zvm-driver-rocky

> [4]
>
[openstack-dev] [mistral] Help with test run

2018-04-27 Thread András Kövi
Hi,

Can someone please help me with why this build ended with TIMED_OUT?
http://logs.openstack.org/85/527085/8/check/mistral-tox-unit-mysql/3ffae9f/

Thanks,
Andras



[openstack-dev] [tc] Technical Committee Status update, April 27th

2018-04-27 Thread Thierry Carrez
Hi!

This is the weekly summary of Technical Committee initiatives. You can
find the full list of currently-considered changes at:
https://wiki.openstack.org/wiki/Technical_Committee_Tracker

We also track TC objectives for the cycle using StoryBoard at:
https://storyboard.openstack.org/#!/project/923


== Recently-approved changes ==

* New repos: ansible-role-container-registry


== Election season ==

Voting is open to renew 7 seats from the Technical Committee's 13 seats.
If you contributed changes recently to any of the official OpenStack
repositories, you should have received a ballot. Deadline to vote is
23:59 UTC on Monday, so please vote now!

You can find details on the election at:
http://lists.openstack.org/pipermail/openstack-dev/2018-April/129753.html

A number of threads have been started to discuss TC-related questions,
which may inform your vote:

* http://lists.openstack.org/pipermail/openstack-dev/2018-April/129622.html
* http://lists.openstack.org/pipermail/openstack-dev/2018-April/129658.html
* http://lists.openstack.org/pipermail/openstack-dev/2018-April/129661.html
* http://lists.openstack.org/pipermail/openstack-dev/2018-April/129664.html


== Under discussion ==

The four changes requiring formal votes from the TC members will be held
until the election concludes and new members join:

* Splitting/abandoning kolla-kubernetes [1]
* Adjutant project team addition [2]
* Allow projects to drop py27 support in the PTI [3]
* More detail about the expectations we place on goal champions [4]

[1] https://review.openstack.org/#/c/552531/
[2] https://review.openstack.org/#/c/553643/
[3] https://review.openstack.org/561922
[4] https://review.openstack.org/564060


== TC member actions/focus/discussions for the coming week(s) ==

The election closes on Monday. The new members will be inducted, and
they will select the Technical Committee chair for the upcoming 6-month
session.

Urgent topics include preparation of the agenda for the joint Board + TC
+ UC meeting in Vancouver. If you have an idea of topic that should be
discussed, it's still time to chime in on the thread at:

http://lists.openstack.org/pipermail/openstack-dev/2018-April/129428.html


== Office hours ==

To be more inclusive of all timezones and more mindful of people for
whom English is not the primary language, the Technical Committee
dropped its dependency on weekly meetings. So that you can still get
hold of TC members on IRC, we instituted a series of office hours on
#openstack-tc:

* 09:00 UTC on Tuesdays
* 01:00 UTC on Wednesdays
* 15:00 UTC on Thursdays

Feel free to add your own office hour conversation starter at:
https://etherpad.openstack.org/p/tc-office-hour-conversation-starters

Cheers,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member

2018-04-27 Thread Paul Bourke

+1, always great working with Mark :)

On 26/04/18 16:31, Jeffrey Zhang wrote:

Kolla core reviewer team,

It is my pleasure to nominate mgoddard for the kolla core team.

Mark has been working both upstream and downstream with kolla and
kolla-ansible for over two years, building bare metal compute clouds with
ironic for HPC. He's been involved with OpenStack since 2014. He started
the kayobe deployment project, which complements kolla-ansible. He is
also the most active non-core contributor for the last 90 days[1].

Consider this nomination a +1 vote from me.

A +1 vote indicates you are in favor of mgoddard as a candidate; a -1 is a
veto. Voting is open for 7 days, until May 4th, or until a unanimous
response is reached or a veto vote occurs.

[1] http://stackalytics.com/report/contribution/kolla-group/90

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me 




Re: [openstack-dev] [Openstack-operators] [nova] Default scheduler filters survey

2018-04-27 Thread Tomáš Vondra
Hi!

What we've got in our small public cloud:

scheduler_default_filters=AggregateInstanceExtraSpecsFilter,
AggregateImagePropertiesIsolation,
RetryFilter,
AvailabilityZoneFilter,
AggregateRamFilter,
AggregateDiskFilter,
AggregateCoreFilter,
ComputeFilter,
ImagePropertiesFilter,
ServerGroupAntiAffinityFilter,
ServerGroupAffinityFilter

# ComputeCapabilitiesFilter is off because of a conflict with
# AggregateInstanceExtraSpecsFilter: https://bugs.launchpad.net/nova/+bug/1279719

I really like to set resource limits using aggregate metadata.

Also, Windows host isolation is done using image metadata. I have filed a
bug somewhere that it does not work correctly with Boot from Volume. I
believe it got pretty much ignored. That's why we also use flavor metadata.
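
(For anyone curious, setting such limits via aggregate metadata looks
roughly like the following; the aggregate and host names are made up:

    openstack aggregate create ssd-hosts
    openstack aggregate add host ssd-hosts compute1
    openstack aggregate set --property cpu_allocation_ratio=4.0 ssd-hosts

AggregateCoreFilter then uses the cpu_allocation_ratio from the aggregate
in place of the global default for hosts in that aggregate.)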

 

Tomas from Homeatcloud

 

From: Massimo Sgaravatto [mailto:massimo.sgarava...@gmail.com] 
Sent: Saturday, April 21, 2018 7:49 AM
To: Simon Leinen
Cc: OpenStack Development Mailing List (not for usage questions); OpenStack 
Operators
Subject: Re: [Openstack-operators] [openstack-dev] [nova] Default scheduler 
filters survey

 

enabled_filters = AggregateInstanceExtraSpecsFilter,AggregateMultiTenancyIsolation,RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,AggregateRamFilter,AggregateCoreFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

Cheers, Massimo

On Wed, Apr 18, 2018 at 10:20 PM, Simon Leinen  wrote:

Artom Lifshitz writes:
> To that end, we'd like to know what filters operators are enabling in
> their deployment. If you can, please reply to this email with your
> [filter_scheduler]/enabled_filters (or
> [DEFAULT]/scheduler_default_filters if you're using an older version)
> option from nova.conf. Any other comments are welcome as well :)

We have the following enabled on our semi-public (academic community)
cloud, which runs on Newton:

AggregateInstanceExtraSpecsFilter
AvailabilityZoneFilter
ComputeCapabilitiesFilter
ComputeFilter
ImagePropertiesFilter
PciPassthroughFilter
RamFilter
RetryFilter
ServerGroupAffinityFilter
ServerGroupAntiAffinityFilter

(sorted alphabetically) Recently we've also been trying

AggregateImagePropertiesIsolation

...but it looks like we'll replace it with our own because it's a bit
awkward to use for our purpose (scheduling Windows instances to licensed
compute nodes).
-- 
Simon.


___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

 



Re: [openstack-dev] ​[openstack-dev] [mistral] timeout and retry

2018-04-27 Thread Renat Akhmerov
Yep, agree that this is a bug. We need to fix that. Would you please create a 
ticket at LP?

Thanks

Renat Akhmerov
@Nokia

On 27 Apr 2018, 12:53 +0700, Vitalii Solodilov , wrote:
> > No matter at what stage the task is, but if it’s still in RUNNING state or 
> > FAILED but we know that retry policy still didn’t use all attempts then the 
> > ‘timeout’ policy should force the task to fail.
> Ok, then we have a bug, because the timeout policy doesn't force the task
> to fail. It retries the task, and after that we have two actions running
> in parallel.
> https://github.com/openstack/mistral/blob/master/mistral/engine/policies.py#L537
>
> 27.04.2018, 07:50, "Renat Akhmerov" :
> > Hi,
> >
> > I don’t clearly understand the problem you’re trying to point to.
> >
> > IMO, when using these two policies at the same time, the ‘timeout’ policy
> > should have higher priority. No matter what stage the task is at, if it’s
> > still in RUNNING state, or FAILED but we know the retry policy still didn’t
> > use all its attempts, then the ‘timeout’ policy should force the task to
> > fail. In the second case, where it’s FAILED but the retry policy is still
> > in play, we need to leave some ‘force’ marker in the task state to make
> > sure there’s no need to retry it further.
> >
> > Thanks
> >
> > Renat Akhmerov
> > @Nokia
> >
> > On 24 Apr 2018, 05:00 +0700, Vitalii Solodilov , wrote:
> > > Hi Renat, can you explain to me and Dougal how the timeout policy
> > > should work with the retry policy?
> > >
> > > I guess there is a bug right now.
> > > The behaviour is something like this: https://ibb.co/hhm0eH
> > > Example: https://review.openstack.org/#/c/563759/
> > > Logs:
> > > http://logs.openstack.org/59/563759/1/check/openstack-tox-py27/6f38808/job-output.txt.gz#_2018-04-23_20_54_55_376083
> > > Even if we fix this bug so that after a task timeout we do not retry
> > > the task, I don't understand which problem is solved by this
> > > timeout-and-retry combination.
> > > Another problem: what about task re-run, I mean via the Mistral API?
> > > The problem is that the timeout delayed calls are not created.
> > >
> > > IMHO the combination of these policies should work like this:
> > > https://ibb.co/fe5tzH
> > > It is not a timeout per action, because when a task retries it moves
> > > to some completed state and then back to RUNNING state. And it will
> > > work fine with the with-items policy.
> > > The main advantage is executor and RabbitMQ HA. I can specify a small
> > > timeout, and if the executor dies the task is retried by timeout and
> > > a new action is created.
> > > The second is predictable behaviour. When I specify timeout: 10 and
> > > retry.count: 5, I know that at most 5 actions will be created before
> > > the SUCCESS state, and every action will execute for no longer than
> > > 10 seconds.
> > >
> > > --
> > > Best regards,
> > >
> > > Vitalii Solodilov
> > >
>
>
> --
> Best regards,
>
> Vitalii Solodilov
>


[openstack-dev] sqlalchemy-migrate and networking-mlnx still depends on tempest-lib

2018-04-27 Thread Thomas Goirand
Hi,

Everyone has migrated away from tempest-lib to tempest, but there are
still two packages remaining that use the old, deprecated tempest-lib.
Does anyone volunteer for the job? It'd be nice if that happened, so we
could get rid of the tempest-lib packages completely in distros and
everywhere.
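
(The mechanical part of the migration is mostly renaming imports, roughly:

    # before
    from tempest_lib import exceptions
    # after
    from tempest.lib import exceptions

plus swapping tempest-lib for tempest in test-requirements.txt, though each
project may need a bit more than that.)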

I can review patches in sqla-migrate, as I'm still core-reviewer there,
though I'm not sure I know enough to do it myself.

Cheers,

Thomas Goirand (zigo)
