Re: [openstack-dev] [tripleo] [tripleo-validations] using top-level fact vars will be deprecated in future Ansible versions

2018-07-25 Thread Cédric Jeanneret
Hello Sam,

Thanks for the clarifications.

On 07/25/2018 07:46 PM, Sam Doran wrote:
> I spoke with other Ansible Core devs to get some clarity on this change.
> 
> This is not a change that is being made quickly, lightly, or without a
> whole bunch of reservation. In fact, that PR created by agaffney may
> not be merged any time soon. He just wanted to get something started and
> there is still ongoing discussion on that PR. It is definitely a WIP at
> this point.
> 
> The main reason for this change is that pretty much all of the Ansible
> CVEs to date came from "fact injection", meaning a fact that contains
> executable Python code Jinja will merrily exec(). Vars, hostvars, and
> facts are different in Ansible (yes, this is confusing — sorry). All
> vars go through a templating step. By copying facts to vars, it means
> facts get templated controller side which could lead to controller
> compromise if malicious code exists in facts.
> 
> We created an AnsibleUnsafe class to protect against this, but stopping
> the practice of injecting facts into vars would close the door
> completely. It also alleviates some name collisions if you set a hostvar
> that has the same name as a var. We have some methods that filter out
> certain variables, but keeping facts and vars in separate spaces is much
> cleaner.
> 
> This also does not change how hostvars set via set_fact are referenced.
> (set_fact should really be called set_host_var). Variables set with
> set_fact are not facts and are therefore not inside the ansible_facts
> dict. They are in the hostvars dict, which you can reference as {{
> my_var }} or {{ hostvars['some-host']['my_var'] }} if you need to look
> it up from a different host.
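
[Editor's sketch] The distinction described above can be illustrated with a tiny, hypothetical playbook (variable names are made up):

```yaml
# Illustrative sketch only -- not from the original mail.
- hosts: all
  gather_facts: true
  tasks:
    - name: set_fact creates a host variable, not a fact
      set_fact:
        my_var: "hello"

    - name: reference it directly, or via hostvars for any host
      debug:
        msg: "{{ my_var }} == {{ hostvars[inventory_hostname]['my_var'] }}"

    - name: gathered facts live in the ansible_facts dict
      debug:
        msg: "{{ ansible_facts.hostname }}"
```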

so if, for convenience, we do this:
vars:
  a_mounts: "{{ hostvars[inventory_hostname].ansible_facts.mounts }}"

That's completely acceptable and correct, and won't create any security
issue, right?

> 
> All that being said, the setting to control this behavior as Emilien
> pointed out is inject_facts_as_vars, which defaults to True and will
> remain that way for the foreseeable future. I would not rush into
> changing all the fact references in playbooks. It can be a gradual process.
> 
> Setting inject_facts_as_vars to False means ansible_hostname becomes
> ansible_facts.hostname. You do not have to use the hostvars dictionary —
> that is for looking up facts about hosts other than the current host.
> 
> If you wanted to be proactive, you could start using the ansible_facts
> dictionary today since it is compatible with the default setting and
> will not affect others trying to use playbooks that reference ansible_facts.
> 
> In other words, with the default setting of True, you can use either
> ansible_hostname or ansible_facts.hostname. Changing it to False means
> only ansible_facts.hostname is defined.
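
[Editor's sketch] Concretely, the setting itself lives in ansible.cfg under [defaults], and the two reference styles look like this (illustrative only):

```yaml
# With inject_facts_as_vars = True (the default), both references work.
# With it set to False, only the ansible_facts form remains defined.
- hosts: all
  gather_facts: true
  tasks:
    - name: forward-compatible reference, valid under either setting
      debug:
        msg: "{{ ansible_facts.hostname }}"

    # Injected top-level form; undefined once inject_facts_as_vars = False:
    # - debug:
    #     msg: "{{ ansible_hostname }}"
```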
> 
>> Like, really. I know we can't really have a word about that kind of
>> decision, but... damn, WHY ?!
> 
> That is most certainly not the case. Ansible is developed in the open,
> and we encourage community members to attend meetings and add topics to
> the agenda for discussion. Ansible also goes through a proposal process
> for major changes.
> 
> You can always go to #ansible-devel on Freenode or start a discussion
> on the mailing list to speak with the Ansible Core devs about these
> things as well.

And now I also have the "because" to go with my "why" :). Big thanks!

Bests,

C.

> 
> ---
> 
> Respectfully,
> 
> Sam Doran
> Senior Software Engineer
> Ansible by Red Hat
> sdo...@redhat.com 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Cédric Jeanneret
Software Engineer
DFG:DF





[openstack-dev] [tripleo] Setting swift as glance backend

2018-07-25 Thread Samuel Monderer
Hi,

I would like to deploy a small overcloud with just one controller and one
compute for testing.
I want to use swift as the glance backend.
How do I configure the overcloud templates?
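
[Editor's sketch, not an authoritative answer] In tripleo-heat-templates the Glance backend is selected by the GlanceBackend parameter (swift is commonly the default), which can be set explicitly in a custom environment file passed to the deploy command with -e:

```yaml
# glance-swift.yaml -- hypothetical file name; GlanceBackend is an existing
# t-h-t parameter, and 'swift' stores images in the Swift object store.
parameter_defaults:
  GlanceBackend: swift
```

Included with something like: openstack overcloud deploy --templates -e glance-swift.yaml ...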

Samuel


Re: [openstack-dev] [kolla] ptl non candidacy

2018-07-25 Thread Steven Dake (stdake)
Jeffrey,


Thanks for your excellent service as Kolla PTL.  You have served the Kolla 
community well.


Regards,

-steve


From: Jeffrey Zhang 
Sent: Tuesday, July 24, 2018 8:48 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [kolla] ptl non candidacy

Hi all,

I just want to say that I am not running for PTL for the Stein cycle. I have
been involved in the Kolla project for almost 3 years, and recently my work
has changed a little, too, so I may not have much time for the community in
the future. Kolla is a great project and the community is also awesome. I
would encourage everyone in the community to consider running.

Thanks for your support :D.
--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


Re: [openstack-dev] [tripleo] PTL candidacy for the Stein Cycle

2018-07-25 Thread Cédric Jeanneret
+1 :).

On 07/25/2018 02:03 PM, Juan Antonio Osorio wrote:
> Hello folks!
> 
> I'd like to nominate myself for the TripleO PTL role for the Stein cycle.
> 
> Alex has done a great job as a PTL: The project is progressing nicely
> with many
> new, exciting features and uses for TripleO coming to fruition recently.
> It's a
> great time for the project. But, there's more work to be done.
> 
> I have served the TripleO community as a core-reviewer for some years
> now and,
> more recently, by driving the Security Squad. This project has been a
> great learning experience for me, both technically (I got to learn even
> more of
> OpenStack) and community-wise. Now I wish to serve the community further
> by bringing my experience to the PTL role. While I have not yet served as
> PTL for a project before, I'm eager to learn the ropes and help improve
> the community that has been so influential on me.
> 
> For Stein, I would like to focus on:
> 
> * Increasing TripleO's usage in the testing of other projects
>   Now that TripleO can deploy a standalone OpenStack installation, I hope it
>   can be leveraged to add value to other projects' testing efforts. I
> hope this
>   would subsequently help increase TripleO's testing coverage, and reduce
>   the footprint required for full-deployment testing.
> 
> * Technical Debt & simplification
>   We've been working on simplifying the deployment story and battling
>   technical debt -- let's keep this momentum going. We've been running
>   (mostly) fully containerized environments for a couple of releases now;
>   I hope we can reduce the number of stacks we create, which would in turn
>   simplify the project structure (at least on the t-h-t side). We should
>   also aim for as much convergence as we can achieve (e.g. CLI and UI
>   workflows).
> 
> * CI and testing
>   The project has made great progress regarding CI and testing; let's
>   keep this moving forward and give developers easier ways to bring up
>   testing environments to work on and to reproduce CI jobs.
> 
> Thanks!
> 
> Juan Antonio Osorio Robles
> IRC: jaosorior
> 
> 
> -- 
> Juan Antonio Osorio R.
> e-mail: jaosor...@gmail.com 
> 
> 
> 
> 

-- 
Cédric Jeanneret
Software Engineer
DFG:DF





[openstack-dev] [Senlin][PTL][Election] Candidacy for Senlin PTL for Stein

2018-07-25 Thread Duc Truong
Hello everyone,

I'd like to announce my candidacy for the Senlin PTL position during
the Stein cycle.

I've been contributing to Senlin since the Queens cycle and became a
core reviewer during the Rocky cycle.  I work for Blizzard
Entertainment where I'm an active operator and upstream developer for
Senlin.  I believe this dual role gives me a unique perspective on the
use cases for Senlin.

If elected as PTL, I will focus on the following priorities:

* Testing:
More integration tests are needed to avoid any regression due to new
feature implementations.  More rally tests are needed to cover stress
testing scenarios in HA deployments of Senlin.

* Bug fixes:
Actively monitor incoming bug reports and triage them.  Clean out old
bugs that can no longer be reproduced.

* Technical debt:
Identify areas of code that can be reimplemented more efficiently
and/or simplified.

* User documentation:
Restructure the Senlin documentation to make it easier for the users to
find the relevant information.

* Grow the Senlin community:
My goal is to grow the Senlin user base and encourage more developers
to contribute. To do so, I propose changing the weekly meetings to
office hours and hold those office hours consistently so that new users
and/or developers can ask questions and receive feedback. Moreover, I
want to increase Senlin's visibility in the developer community by more
actively using the mailing list.  One idea would be to send out Senlin
project updates to the mailing list throughout the cycle like many
other projects are doing now.

Thanks for your consideration.

Duc Truong (dtruong)




Re: [openstack-dev] [tripleo] PTL non-candidacy

2018-07-25 Thread Arkady.Kanevsky
Indeed. Thanks Alex for your great leadership of TripleO.

From: Remo Mattei [mailto:r...@rm.ht]
Sent: Wednesday, July 25, 2018 4:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tripleo] PTL non-candidacy

I publicly want to say THANK YOU, Alex. You ROCK.

Hopefully we will meet at one of those summits.

Ciao,
Remo


On Jul 25, 2018, at 6:23 AM, Alex Schultz 
mailto:aschu...@redhat.com>> wrote:

Hey folks,

So it's been great fun and we've accomplished much over the last two
cycles but I believe it is time for me to step back and let someone
else do the PTLing.  I'm not going anywhere so I'll still be around to
focus on the simplification and improvements that TripleO needs going
forward.  I look forwards to continuing our efforts with everyone.

Thanks,
-Alex




Re: [openstack-dev] Lots of slow tests timing out jobs

2018-07-25 Thread Ghanshyam Mann



  On Wed, 25 Jul 2018 22:22:24 +0900 Matt Riedemann  
wrote  
 > On 7/25/2018 1:46 AM, Ghanshyam Mann wrote:
 > > Yeah, there are many tests taking too long. I do not know the reason
 > > this time, but the last time we audited slow tests it was mainly due
 > > to ssh failures.
 > > I have created a similar ethercalc [3] to collect the time-consuming
 > > tests along with a rough figure of their average runtime over the last
 > > 14 days from the health dashboard. There is no calculated average time
 > > on o-h, so these are approximate rather than exact averages.
 > > 
 > > Maybe 14 days is too short a window to decide to mark them slow, but I
 > > think their average time over 3 months will be the same. Should we
 > > consider a 3-month period for those?
 > > 
 > > Based on the average time, I have voted (currently on the 14-day
 > > average) on the ethercalc on which tests to mark as slow. I took the
 > > criterion of >120 sec average time. Once more people vote there, we
 > > can mark them slow.
 > > 
 > > [3]https://ethercalc.openstack.org/dorupfz6s9qt
 > 
 > Thanks for this. I haven't gone through all of the tests in there yet, 
 > but noticed (yesterday) a couple of them were personality file compute 
 > API tests, which I thought was strange. Do we have any idea where the 
 > time is being spent there? I assume it must be something with ssh 
 > validation to try and read injected files off the guest. I need to dig 
 > into this one a bit more because by default, file injection is disabled 
 > in the libvirt driver so I'm not even sure how these are running (or 
 > really doing anything useful). 

That is set to True explicitly in the tempest-full job [1], and then devstack
sets it to True in nova.

 > Given we have deprecated personality
 > files in the compute API [1] I would definitely mark those as slow tests 
 > so we can still run them but don't care about them as much.

Make sense, +1.


[1] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n56

-gmann
 > 
 > [1] 
 > https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id52
 > 
 > -- 
 > 
 > Thanks,
 > 
 > Matt
 > 





Re: [openstack-dev] [nova] API updates week 19-25

2018-07-25 Thread Ghanshyam Mann



  On Wed, 25 Jul 2018 23:53:18 +0900 Surya Seetharaman 
 wrote  
 > Hi!
 > On Wed, Jul 25, 2018 at 11:53 AM, Ghanshyam Mann  
 > wrote:
 > 
 >  5. API Extensions merge work 
 >  - https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-rocky 
 >  - 
 > https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-rocky
 >  
 >  - Weekly Progress: part-1 of the schema merge and part-2 of the
 > server_create merge have been merged for Rocky. One last patch removing
 > the placeholder method is on the gate.
 >  part-3 of the view builder merge
 > cannot make it into Rocky (7 patches up for review + 5 more to push).
 > This work is postponed to Stein.
 >  
 >  6. Handling a down cell 
 >  - https://blueprints.launchpad.net/nova/+spec/handling-down-cell 
 >  - 
 > https://review.openstack.org/#/q/topic:bp/handling-down-cell+(status:open+OR+status:merged)
 >  
 >  - Weekly Progress: It is difficult to make it into Rocky. Matt has an
 > open comment on the patch about changing the service list along with the
 > server list in a single microversion, which makes
 > sense.
 > 
 > 
 > The API changes related to the handling-down-cell spec will also be
 > postponed to Stein, since the view builder merge (part-3 of the API
 > Extensions merge work) is postponed to Stein. It would be cleaner.

Yeah, I will make sure the view builder work gets in early in Stein. I am
going to push all the remaining patches and make them ready for review once
we have the Stein branch.

-gmann

 > -- 
 > 
 > Regards,
 > Surya.





Re: [openstack-dev] [nova] keypair quota usage info for user

2018-07-25 Thread Alex Xu
2018-07-26 1:43 GMT+08:00 Chris Friesen :

> On 07/25/2018 10:29 AM, William M Edmonds wrote:
>
>>
>> Ghanshyam Mann  wrote on 07/25/2018 05:44:46 AM:
>> ... snip ...
>>  > 1. Is it OK to show keypair usage info via the API? Was there an
>>  > original rationale for not doing so, or was it just like that from
>>  > the start?
>>
>> keypairs aren't tied to a tenant/project, so how could nova track/report
>> a quota
>> for them on a given tenant/project? Which is how the API is
>> constructed... note
>> the "tenant_id" in GET /os-quota-sets/{tenant_id}/detail
>>
>>  > 2. Because this change will show the keypair used-quota information
>>  > in the API's existing field 'in_use', it is an API behaviour change
>>  > (not an interface signature change in a backward-incompatible way)
>>  > which can cause interop issues. Should we bump the microversion for
>>  > this change?
>>
>> If we find a meaningful way to return in_use data for keypairs, then yes,
>> I
>> would expect a microversion bump so that callers can distinguish between
>> a)
>> talking to an older installation where in_use is always 0 vs. b) talking
>> to a
>> newer installation where in_use is 0 because there are really none in
>> use. Or if
>> we remove keypairs from the response, which at a glance seems to make more
>> sense, that should also have a microversion bump so that someone who
>> expects the
>> old response format will still get it.
>>
>
> Keypairs are weird in that they're owned by users, not projects.  This is
> arguably wrong, since it can cause problems if a user boots an instance
> with their keypair and then gets removed from a project.
>
> Nova microversion 2.54 added support for modifying the keypair associated
> with an instance when doing a rebuild.  Before that there was no clean way
> to do it.


I don't understand this. We don't count the keypair usage together with the
instance; we just count the keypair usage for a specific user.


>
>
> Chris
>
>


Re: [openstack-dev] [nova] keypair quota usage info for user

2018-07-25 Thread Alex Xu
2018-07-26 0:29 GMT+08:00 William M Edmonds :

>
> Ghanshyam Mann  wrote on 07/25/2018 05:44:46 AM:
> ... snip ...
> > 1. Is it OK to show keypair usage info via the API? Was there an
> > original rationale for not doing so, or was it just like that from
> > the start?
>
> keypairs aren't tied to a tenant/project, so how could nova track/report a
> quota for them on a given tenant/project? Which is how the API is
> constructed... note the "tenant_id" in GET /os-quota-sets/{tenant_id}/
> detail
>

Keypair usage is only meaningful for the per-user API 'GET
/os-quota-sets/{tenant_id}/detail?user_id={user_id}'.

>
>
> > 2. Because this change will show the keypair used-quota information
> > in the API's existing field 'in_use', it is an API behaviour change
> > (not an interface signature change in a backward-incompatible way)
> > which can cause interop issues. Should we bump the microversion for
> > this change?
>
> If we find a meaningful way to return in_use data for keypairs, then yes,
> I would expect a microversion bump so that callers can distinguish between
> a) talking to an older installation where in_use is always 0 vs. b) talking
> to a newer installation where in_use is 0 because there are really none in
> use. Or if we remove keypairs from the response, which at a glance seems to
> make more sense, that should also have a microversion bump so that someone
> who expects the old response format will still get it.
>
>


Re: [openstack-dev] [nova] keypair quota usage info for user

2018-07-25 Thread Alex Xu
2018-07-25 17:44 GMT+08:00 Ghanshyam Mann :

> Hi All,
>
> During today's API office hour, we were discussing the keypair quota usage
> bug (Newton) [1]. The key_pair 'in_use' quota is always 0, even when
> requested per user, because it is always being set to 0 [2].
>
> From checking the history and the review discussion on [3], it seems that
> it was like that from the start. The key_pair quota is counted when
> actually creating the keypair, but it is not shown in the API's 'in_use'
> field. Vishakha (assignee of this bug) is currently planning to work on
> it, and before that we have a few queries:
>
> 1. Is it OK to show keypair usage info via the API? Was there an original
> rationale for not doing so, or was it just like that from the start?
>

It doesn't make sense to show the usage when the user queries the project
quota, but it makes sense when the user queries a specific user's quota.
And we have no way to show usage for
server_group_members/security_group_rules: since they are limits for a
specific server group or security group, we have no way to express that in
our quota API.
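
[Editor's sketch] For illustration, here is roughly what the per-user quota detail response could look like if keypair usage were actually counted. Field names follow the existing os-quota-sets detail format, shown in YAML form for brevity; the -1 value is only a hypothetical convention, not current behaviour:

```yaml
# Hypothetical response for
#   GET /os-quota-sets/{tenant_id}/detail?user_id={user_id}
# if keypair usage were counted (today in_use is always 0).
quota_set:
  key_pairs:
    in_use: 2        # would become the user's real keypair count
    limit: 100
    reserved: 0
  server_group_members:
    in_use: -1       # -1 could signal "usage not countable" instead of 0
    limit: 10
    reserved: 0
```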



>
> 2. Because this change will show the keypair used-quota information in
> the API's existing field 'in_use', it is an API behaviour change (not an
> interface signature change in a backward-incompatible way) which can
> cause interop issues. Should we bump the microversion for this change?
>

If we are going to bump the microversion, I prefer to set the usage to -1
for server_group_members/security_group_rules, since 0 is really confusing
for the end user.


>
> [1] https://bugs.launchpad.net/nova/+bug/1644457
> [2] https://github.com/openstack/nova/blob/bf497cc47497d3a5603bf60de65205
> 4ac5ae1993/nova/quota.py#L189
> [3] https://review.openstack.org/#/c/446239/
>
> -gmann
>
>


Re: [openstack-dev] [Ironic][Octavia][Congress] The usage of Neutron API

2018-07-25 Thread Michael Johnson
Octavia is done. Thank you for the patch!

Michael
On Tue, Jul 24, 2018 at 8:35 AM Hongbin Lu  wrote:
>
> Hi folks,
>
>
>
> Neutron has landed a patch to enable strict validation of query parameters
> when listing resources [1]. I tested Neutron's change in your projects'
> gates, and the results suggested that your projects would need the fixes
> [2][3][4] to keep the gates functioning.
>
>
>
> Please feel free to reach out if there is any question or concern.
>
>
>
> [1] https://review.openstack.org/#/c/574907/
>
> [2] https://review.openstack.org/#/c/583990/
>
> [3] https://review.openstack.org/#/c/584000/
>
> [4] https://review.openstack.org/#/c/584112/
>
>
>
> Best regards,
>
> Hongbin
>
>
>



[Openstack] [Telemetry] RPC Message TTL for Ceilometer Notification Agent

2018-07-25 Thread Hang Yang
Hi there,

I have a question about rpc_message_ttl for the ceilometer service. I'm
using Queens ceilometer with Gnocchi and RabbitMQ. I recently noticed that
the ceilometer notification agent was receiving old metrics, sent by the
polling agent on a hypervisor a few days earlier. Since the default
rpc_message_ttl is set to 300s, does anyone know how that could happen?

I don't want to receive those old metrics, as they arrived in large volume
and choked the notification agent (stuck at high CPU usage). I had to purge
RabbitMQ to fix the issue, but I'm wondering if there is any configuration
I can apply to prevent it from happening again?

Any help is appreciated.

Thanks,
Hang
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [tripleo] FFE request for container-prepare-workflow

2018-07-25 Thread Alex Schultz
On Wed, Jul 25, 2018 at 3:50 PM, Steve Baker  wrote:
> I'd like to request a FFE for this blueprint[1].
>
> The remaining changes will be tracked as Depends-On on this oooq change[2].
>
> Initially the aim of this blueprint was to do all container prepare
> operations in a mistral action before the overcloud deploy. However the
> priority for delivery switched to helping blueprint containerized-undercloud
> with its container prepare. Once this was complete it was apparent that the
> overcloud prepare could share the undercloud prepare approach.
>
> The undercloud prepare does the following:
>
> 1) During undercloud_config, do a dry-run prepare to populate the image
> parameters (but don't do any image transfers)
>
> 2) During tripleo-deploy, driven by tripleo-heat-templates, do the actual
> prepare after the undercloud registry is installed but before any containers
> are required
>
> For the overcloud, 1) will be done by a mistral action[3] and 2) will be
> done during overcloud deploy[4].
>
> The vast majority of code for this blueprint has landed and is exercised by
> containerized-undercloud. I don't expect issues with the overcloud changes
> landing, but in the worst case scenario the overcloud prepare can be done
> manually by running the new command "openstack tripleo container image
> prepare" as documented in this change [5].
>

Sounds good, hopefully we can figure out the issue with the reverted
patch and get it landed.

Thanks,
-Alex

> [1]
> https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow
>
> [2] https://review.openstack.org/#/c/573476/
>
> [3] https://review.openstack.org/#/c/558972/ (landed but currently being
> reverted)
>
> [4] https://review.openstack.org/#/c/581919/ (plus the series before it)
>
> [5] https://review.openstack.org/#/c/553104/
>
>
>



[openstack-dev] [tripleo] FFE request for container-prepare-workflow

2018-07-25 Thread Steve Baker

I'd like to request a FFE for this blueprint[1].

The remaining changes will be tracked as Depends-On on this oooq change[2].

Initially the aim of this blueprint was to do all container prepare 
operations in a mistral action before the overcloud deploy. However the 
priority for delivery switched to helping blueprint 
containerized-undercloud with its container prepare. Once this was 
complete it was apparent that the overcloud prepare could share the 
undercloud prepare approach.


The undercloud prepare does the following:

1) During undercloud_config, do a dry-run prepare to populate the image 
parameters (but don't do any image transfers)


2) During tripleo-deploy, driven by tripleo-heat-templates, do the 
actual prepare after the undercloud registry is installed but before any 
containers are required


For the overcloud, 1) will be done by a mistral action[3] and 2) will be 
done during overcloud deploy[4].


The vast majority of code for this blueprint has landed and is exercised 
by containerized-undercloud. I don't expect issues with the overcloud 
changes landing, but in the worst case scenario the overcloud prepare 
can be done manually by running the new command "openstack tripleo 
container image prepare" as documented in this change [5].


[1] 
https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow


[2] https://review.openstack.org/#/c/573476/

[3] https://review.openstack.org/#/c/558972/ (landed but currently being 
reverted)


[4] https://review.openstack.org/#/c/581919/ (plus the series before it)

[5] https://review.openstack.org/#/c/553104/




[openstack-dev] [Requirements][PTL][Election] Nomination of Matthew Thode (prometheanfire) for PTL of the Requirements project

2018-07-25 Thread Matthew Thode
I would like to announce my candidacy for PTL of the Requirements project for
the Stein cycle.

The following will be my goals for the cycle, in order of importance:

1. The primary goal is to keep a tight rein on global-requirements and
upper-constraints updates.  (Keep things working well)

2. Un-cap requirements where possible (stuff like eventlet).

3. Publish constraints and requirements to streamline the freeze process.

https://bugs.launchpad.net/openstack-requirements/+bug/1719006 is the bug
tracking the publish job.

4. Audit global-requirements and upper-constraints for redundancies.  One of
the rules we have for new entrants to global-requirements and/or
upper-constraints is that they be non-redundant.  Keeping that rule in mind,
audit the list of requirements for possible redundancies and if possible,
reduce the number of requirements we manage.

5. Find more cores to smooth out the review process.

I look forward to continuing to work with you in this cycle, as your PTL or not.

-- 
Matthew Thode (prometheanfire)




[openstack-dev] [RelMgmt][PTL][Election] Candidacy for Release Management PTL for Stein

2018-07-25 Thread Sean McGinnis
Hello everyone!

I would like to submit my name to continue as the release management PTL for
the Stein release.

Since I failed to recruit someone new to take over for me, I guess I'm still
it.

But being serious, I think I've now gotten a much deeper understanding of our
release tools and process. Things with CI jobs have stabilized and we have a
lot of good checks in place that help identify issues before they become
problems.

While Doug and Thierry are now very busy with other things that prevent them
from running again, they are still around and available with a lot of great
historical information and are able to help immensely with reviews, fixes, and
keeping code rot at bay. I'm not saying this as a reason to have enough
confidence in me to continue to run things, but for anyone that might be
interested in getting involved in the Release Management team - know you would
have plenty of help getting involved.

I'm looking forward to helping out in whatever ways I can in Stein, and I
appreciate your consideration for me to continue as PTL for the Release
Management team.

Sean McGinnis (smcginnis)



Re: [openstack-dev] [tripleo] PTL non-candidacy

2018-07-25 Thread Remo Mattei
I want to publicly say THANK YOU, Alex. You ROCK.

Hopefully I will meet you at one of the summits.

Ciao, 
Remo 

> On Jul 25, 2018, at 6:23 AM, Alex Schultz  wrote:
> 
> Hey folks,
> 
> So it's been great fun and we've accomplished much over the last two
> cycles but I believe it is time for me to step back and let someone
> else do the PTLing.  I'm not going anywhere so I'll still be around to
> focus on the simplification and improvements that TripleO needs going
> forward.  I look forward to continuing our efforts with everyone.
> 
> Thanks,
> -Alex
> 



Re: [openstack-dev] [tripleo] network isolation can't find files referred to on director

2018-07-25 Thread James Slagle
On Wed, Jul 25, 2018 at 11:56 AM, Samuel Monderer
 wrote:
> Hi,
>
> I'm trying to upgrade from OSP11(Ocata) to OSP13 (Queens)
> In my network-isolation I refer to files that do not exist anymore on the
> director such as
>
>   OS::TripleO::Compute::Ports::ExternalPort:
> /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
>   OS::TripleO::Compute::Ports::InternalApiPort:
> /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
>   OS::TripleO::Compute::Ports::StoragePort:
> /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
>   OS::TripleO::Compute::Ports::StorageMgmtPort:
> /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
>   OS::TripleO::Compute::Ports::TenantPort:
> /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
>   OS::TripleO::Compute::Ports::ManagementPort:
> /usr/share/openstack-tripleo-heat-templates/network/ports/management_from_pool.yaml
>
> Where have they gone?

These files are now generated from network/ports/port.network.j2.yaml
during the jinja2 template rendering process. They will be created
automatically during the overcloud deployment based on the enabled
networks from network_data.yaml.

You still need to refer to the rendered path (as shown in your
example) in the various resource_registry entries.

This work was done to enable full customization of the created
networks used for the deployment. See:
https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/custom_networks.html
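As a rough illustration of the flow described above (all values here are made up, not taken from the thread), a network defined in network_data.yaml causes port.network.j2.yaml to render a matching port template at deploy time:

```yaml
# Hypothetical network_data.yaml entry; the jinja2 rendering step turns
# this into network/ports/internal_api.yaml automatically.
- name: InternalApi
  name_lower: internal_api
  vip: true
  ip_subnet: '172.16.2.0/24'   # illustrative subnet, not from the thread
```

The resource_registry entries keep pointing at the rendered paths, e.g. /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml, exactly as in Samuel's example.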


--
James Slagle



[openstack-dev] [tc] Technical Committee update for week of 23 July

2018-07-25 Thread Mohammed Naser
This is the weekly summary of work being done by the Technical
Committee members. The full list of active items is managed in the
wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker

Doug (who usually sends these out!) is away, so we've come up with
the idea of a vice-chair, a role I'll be filling. More information
is in the change listed below.

We also track TC objectives for the cycle using StoryBoard at:
https://storyboard.openstack.org/#!/project/923

== Recent Activity ==

Project updates:

- Remove Stable branch maintenance as a project team
https://review.openstack.org/584206
- Add ansible-role-tripleo-cookiecutter to governance
https://review.openstack.org/#/c/581428/

Reference/charter changes:

- Clarify new project requirements for community engagement
https://review.openstack.org/#/c/567944/
- add vice chair role to the tc charter https://review.openstack.org/#/c/583947/
- designate Mohammed Naser as vice chair
https://review.openstack.org/#/c/583948/

Other approved changes:

- ansible-role-tripleo-zaqar had a typo which was fixed up
https://review.openstack.org/#/c/583636/
- added validation for repo names (because of the above!)
https://review.openstack.org/#/c/583637/
- tooling improvements in this stack: https://review.openstack.org/#/c/583953/

Office hour logs:

Due to what seems to be a lack of consumption of the office hour
logs, we are no longer logging the start and end. However, we welcome
community feedback if this was something that was consumed.

== Ongoing Discussions ==

Sean McGinnis (smcginnis) has proposed pre-upgrade checks as a Stein
goal. The document is currently being worked on, with reviews already
in; please chime in:

- https://review.openstack.org/#/c/585491/

== TC member actions/focus/discussions for the coming week(s) ==

It looks like it's been a quiet past few days. However, there is a
lot of discussion around how to on-board an OpenStack project through
a specific, clear process rather than the arbitrary one used at the
moment.

We also should continue to discuss on subjects for the upcoming PTG:

- https://etherpad.openstack.org/p/tc-stein-ptg

== Contacting the TC ==

The Technical Committee uses a series of weekly "office hour" time
slots for synchronous communication. We hope that by having several
such times scheduled, we will have more opportunities to engage
with members of the community from different timezones.

Office hour times in #openstack-tc:

- 09:00 UTC on Tuesdays
- 01:00 UTC on Wednesdays
- 15:00 UTC on Thursdays

If you have something you would like the TC to discuss, you can add
it to our office hour conversation starter etherpad at:
https://etherpad.openstack.org/p/tc-office-hour-conversation-starters

Many of us also run IRC bouncers which stay in #openstack-tc most
of the time, so please do not feel that you need to wait for an
office hour time to pose a question or offer a suggestion. You can
use the string "tc-members" to alert the members to your question.

You will find channel logs with past conversations at
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/

If you expect your topic to require significant discussion or to
need input from members of the community other than the TC, please
start a mailing list discussion on openstack-dev at lists.openstack.org
and use the subject tag "[tc]" to bring it to the attention of TC
members.



[openstack-dev] [tc] Fast-track: Remove Stable branch maintenance as a project team

2018-07-25 Thread Mohammed Naser
Hi everyone:

This email is just to notify everyone on the TC and the community that
the change to remove the stable branch maintenance as a project
team[1] has been fast-tracked[2].

The change should be approved on 2018-07-28; however, it is beneficial
to remove the stable branch team (which has been moved into a SIG)
sooner so that `tonyb` is able to act as an election official.

There seem to be no opposing votes; however, a revert is always
available if any members of the TC are opposed to the change[3].

Thanks to Tony for all of his help in the elections.

Regards,
Mohammed

[1]: https://review.openstack.org/#/c/584206/
[2]: 
https://governance.openstack.org/tc/reference/house-rules.html#other-project-team-updates
[3]: 
https://governance.openstack.org/tc/reference/house-rules.html#rolling-back-fast-tracked-changes



[openstack-dev] [neutron] Neutron L3 sub-team meeting canceled on July 26th

2018-07-25 Thread Miguel Lavalle
Dear Neutron Team,

Tomorrow's L3 sub-team meeting is canceled. We will resume next week,
on August 2nd, at 1500 UTC as normal.

Best regards

Miguel


[openstack-dev] [tripleo] Editable environments with resource registry entries

2018-07-25 Thread Ben Nemec

Hi,

This came up recently on my review to add an environment to enable 
Designate in a TripleO deployment.  It needs to set both resource 
registry entries and some user-configurable parameters, which means 
users need to make a copy of it that they can edit.  However, if the 
file moves then the relative paths will break.


The suggestion for Designate was to split the environment into one part 
that contains registry entries and one that contains parameters.  This 
way the file users edit doesn't have any paths in it.  So far so good.
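For the Designate case, that split might look roughly like this (file names and values below are hypothetical, for illustration only):

```yaml
# environments/designate.yaml (hypothetical): registry entries only.
# It ships in-tree, so its relative paths stay valid and users never
# need to copy or edit it.
resource_registry:
  OS::TripleO::Services::DesignateApi: ../docker/services/designate-api.yaml

# environments/designate-parameters.yaml (hypothetical, separate file):
# the part users copy and edit; it contains no paths, so the copy can
# live anywhere.
parameter_defaults:
  DesignateWorkers: 4
```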


Then as I was writing docs[1] on how to use it I was reminded that we 
have other environments that use this pattern.  In this case, 
specifically the ips-from-pool* (like [2]) files.  I don't know if there 
are others.


So do we need to rework all of those environments too, or is there 
another option?


Thanks.

-Ben

1: https://review.openstack.org/585833
2: 
https://github.com/openstack/tripleo-heat-templates/blob/master/environments/ips-from-pool.yaml




Re: [openstack-dev] [all] [designate] [heat] [python3] deadlock with eventlet and ThreadPoolExecutor in py3.7

2018-07-25 Thread Corey Bryant
Ok thanks again for the input.

Corey

On Wed, Jul 25, 2018 at 2:15 PM, Joshua Harlow wrote:

> So the only diff is that GreenThreadPoolExecutor was customized to work
> for eventlet (with a similar/same api as ThreadPoolExecutor); as for
> performance I would expect (under eventlet) that GreenThreadPoolExecutor
> would have better performance because it can use the native eventlet green
> objects (which it does try to use) instead of having to go through the layers
> that ThreadPoolExecutor would have to use to achieve the same (and in this
> case as you found out it looks like those layers are not patched correctly
> in the newest ThreadPoolExecutor).
>
> Otherwise yes, under eventlet imho swap out the executor (assuming you can
> do this) and under threading swap in threadpool executor (ideally if done
> correctly the same stuff should 'just work').
>
> Corey Bryant wrote:
>
>> Josh,
>>
>> Thanks for the input. GreenThreadPoolExecutor does not have the deadlock
>> issue, so that is promising (at least with futurist 1.6.0).
>>
>> Does ThreadPoolExecutor have better performance than
>> GreenThreadPoolExecutor? Curious if we could just swap out
>> ThreadPoolExecutor for GreenThreadPoolExecutor.
>>
>> Thanks,
>> Corey
>>
>> On Wed, Jul 25, 2018 at 12:54 PM, Joshua Harlow wrote:
>>
>> Have you tried the following instead of threadpoolexecutor (which
>> honestly should work as well, even under eventlet + eventlet
>> patching).
>>
>> https://docs.openstack.org/futurist/latest/reference/index.html#futurist.GreenThreadPoolExecutor
>>
>> If you have the ability to specify which executor your code is
>> using, and you are running under eventlet I'd give preference to the
>> green thread pool executor under that situation (and if not running
>> under eventlet then prefer the threadpool executor variant).
>>
>> As for @tomoto question; honestly openstack was created before
>> asyncio was a thing so that was a reason and assuming eventlet
>> patching is actually working then all the existing stdlib stuff
>> should keep on working under eventlet (including
>> concurrent.futures); otherwise eventlet.monkey_patch isn't working
>> and that's breaking the eventlet api. If their contract is that only
>> certain things work when monkey patched, that's fair, but that needs
>> to be documented somewhere (honestly it's time imho to get the hell
>> off eventlet everywhere but that likely requires rewrites of a lot
>> of things, oops...).
>>
>> -Josh
>>
>> Corey Bryant wrote:
>>
>> Hi All,
>>
>> I'm trying to add Py3 packaging support for Ubuntu Rocky and
>> while there
>> are a lot of issues involved with supporting Py3.7, this is one
>> of the
>> big ones that I could use a hand with.
>>
>> With py3.7, there's a deadlock when eventlet monkeypatch of stdlib
>> thread modules is combined with use of ThreadPoolExecutor. I
>> know this
>> affects at least designate. The same or similar also affects heat
>> (though I've not dug into the code the traceback after canceling
>> tests
>> matches that seen with designate). And it may affect other
>> projects that
>> I haven't touched yet.
>>
>> How to recreate [1]:
>> * designate: Add a tox.ini py37 target and run
>> designate.tests.test_workers.test_processing.TestProcessingExecutor.test_execute_multiple_tasks
>> * heat: Add a tox.ini py37 target and run tests
>> * general: Run bpo34173-recreate.py from issue 34173 (see below).
>> [1] ubuntu cosmic has py3.7
>>
>> In issue 508 (see below) @tomoto asks "Eventlet and asyncio
>> solve same
>> problem. Why would you want concurrent.futures and eventlet in
>> same
>> application?"
>>
>> I told @tomoto that I'd seek input to that question from
>> upstream. I
>> know there've been efforts to move away from eventlet but I just
>> don't
>> have the knowledge to  provide a good answer to him.
>>
>> Here are the bugs/issues I currently have open for this:
>> https://github.com/eventlet/eventlet/issues/508
>> https://bugs.launchpad.net/designate/+bug/1782647

Re: [openstack-dev] [all] [designate] [heat] [python3] deadlock with eventlet and ThreadPoolExecutor in py3.7

2018-07-25 Thread Joshua Harlow
So the only diff is that GreenThreadPoolExecutor was customized to work 
for eventlet (with a similar/same api as ThreadPoolExecutor); as for 
performance I would expect (under eventlet) that GreenThreadPoolExecutor 
would have better performance because it can use the native eventlet 
green objects (which it does try to use) instead of having to go through
the layers that ThreadPoolExecutor would have to use to achieve the same 
(and in this case as you found out it looks like those layers are not 
patched correctly in the newest ThreadPoolExecutor).


Otherwise yes, under eventlet imho swap out the executor (assuming you 
can do this) and under threading swap in threadpool executor (ideally if 
done correctly the same stuff should 'just work').


Corey Bryant wrote:

Josh,

Thanks for the input. GreenThreadPoolExecutor does not have the deadlock
issue, so that is promising (at least with futurist 1.6.0).

Does ThreadPoolExecutor have better performance than
GreenThreadPoolExecutor? Curious if we could just swap out
ThreadPoolExecutor for GreenThreadPoolExecutor.

Thanks,
Corey

On Wed, Jul 25, 2018 at 12:54 PM, Joshua Harlow wrote:

Have you tried the following instead of threadpoolexecutor (which
honestly should work as well, even under eventlet + eventlet patching).


https://docs.openstack.org/futurist/latest/reference/index.html#futurist.GreenThreadPoolExecutor



If you have the ability to specify which executor your code is
using, and you are running under eventlet I'd give preference to the
green thread pool executor under that situation (and if not running
under eventlet then prefer the threadpool executor variant).

As for @tomoto question; honestly openstack was created before
asyncio was a thing so that was a reason and assuming eventlet
patching is actually working then all the existing stdlib stuff
should keep on working under eventlet (including
concurrent.futures); otherwise eventlet.monkey_patch isn't working
and that's breaking the eventlet api. If their contract is that only
certain things work when monkey patched, that's fair, but that needs
to be documented somewhere (honestly it's time imho to get the hell
off eventlet everywhere but that likely requires rewrites of a lot
of things, oops...).

-Josh

Corey Bryant wrote:

Hi All,

I'm trying to add Py3 packaging support for Ubuntu Rocky and
while there
are a lot of issues involved with supporting Py3.7, this is one
of the
big ones that I could use a hand with.

With py3.7, there's a deadlock when eventlet monkeypatch of stdlib
thread modules is combined with use of ThreadPoolExecutor. I
know this
affects at least designate. The same or similar also affects heat
(though I've not dug into the code the traceback after canceling
tests
matches that seen with designate). And it may affect other
projects that
I haven't touched yet.

How to recreate [1]:
* designate: Add a tox.ini py37 target and run
designate.tests.test_workers.test_processing.TestProcessingExecutor.test_execute_multiple_tasks
* heat: Add a tox.ini py37 target and run tests
* general: Run bpo34173-recreate.py from issue 34173 (see below).
[1] ubuntu cosmic has py3.7

In issue 508 (see below) @tomoto asks "Eventlet and asyncio
solve same
problem. Why would you want concurrent.futures and eventlet in same
application?"

I told @tomoto that I'd seek input to that question from upstream. I
know there've been efforts to move away from eventlet but I just
don't
have the knowledge to  provide a good answer to him.

Here are the bugs/issues I currently have open for this:
https://github.com/eventlet/eventlet/issues/508
https://bugs.launchpad.net/designate/+bug/1782647
https://bugs.python.org/issue34173

Any help with this would be greatly appreciated!

Thanks,
Corey



[openstack-announce] [OSSA-2018-002] GET /v3/OS-FEDERATION/projects leaks project information (CVE-2018-14432)

2018-07-25 Thread Matthew Thode
===
OSSA-2018-002: GET /v3/OS-FEDERATION/projects leaks project information
===

:Date: July 25, 2018
:CVE: CVE-2018-14432


Affects
~~~
- Keystone: <11.0.4, ==12.0.0, ==13.0.0


Description
~~~
Kristi Nikolla with Boston University reported a vulnerability in
Keystone federation. By doing GET /v3/OS-FEDERATION/projects an
authenticated user may discover projects they have no authority to
access, leaking all projects in the deployment and their attributes.
Only Keystone with the /v3/OS-FEDERATION endpoint enabled via
policy.json is affected.


Patches
~~~
- https://review.openstack.org/585802 (Ocata)
- https://review.openstack.org/585792 (Pike)
- https://review.openstack.org/585788 (Queens)
- https://review.openstack.org/585782 (Rocky)


Credits
~~~
- Kristi Nikolla from Boston University (CVE-2018-14432)


References
~~
- https://launchpad.net/bugs/1779205
- http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-14432


___
OpenStack-announce mailing list
OpenStack-announce@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-announce


Re: [openstack-dev] [tripleo] PTL non-candidacy

2018-07-25 Thread Tom Barron
I don't do enough in TripleO to chime in on the list, but I can't 
think of a more helpful PTL!


Thank you for your service.

On 25/07/18 10:31 -0700, Wesley Hayutin wrote:

On Wed, Jul 25, 2018 at 9:24 AM Alex Schultz  wrote:


Hey folks,

So it's been great fun and we've accomplished much over the last two
cycles but I believe it is time for me to step back and let someone
else do the PTLing.  I'm not going anywhere so I'll still be around to
focus on the simplification and improvements that TripleO needs going
forward.  I look forward to continuing our efforts with everyone.

Thanks,
-Alex



Thanks for all the hard work, long hours and leadership!
You have done a great job, congrats on a great cycle.

Thanks





--
Wes Hayutin
Associate Manager
Red Hat
hayu...@redhat.com | IRC: weshay









[Openstack] [OSSA-2018-002] GET /v3/OS-FEDERATION/projects leaks project information (CVE-2018-14432)

2018-07-25 Thread Matthew Thode
===
OSSA-2018-002: GET /v3/OS-FEDERATION/projects leaks project information
===

:Date: July 25, 2018
:CVE: CVE-2018-14432


Affects
~~~
- Keystone: <11.0.4, ==12.0.0, ==13.0.0


Description
~~~
Kristi Nikolla with Boston University reported a vulnerability in
Keystone federation. By doing GET /v3/OS-FEDERATION/projects an
authenticated user may discover projects they have no authority to
access, leaking all projects in the deployment and their attributes.
Only Keystone with the /v3/OS-FEDERATION endpoint enabled via
policy.json is affected.


Patches
~~~
- https://review.openstack.org/585802 (Ocata)
- https://review.openstack.org/585792 (Pike)
- https://review.openstack.org/585788 (Queens)
- https://review.openstack.org/585782 (Rocky)


Credits
~~~
- Kristi Nikolla from Boston University (CVE-2018-14432)


References
~~
- https://launchpad.net/bugs/1779205
- http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-14432


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [tripleo] [tripleo-validations] using using top-level fact vars will deprecated in future Ansible versions

2018-07-25 Thread Sam Doran
I spoke with other Ansible Core devs to get some clarity on this change.

This is not a change that is being made quickly, lightly, or without a whole
bunch of reservation. In fact, that PR created by agaffney may not be merged
any time soon. He just wanted to get something started and there is still 
ongoing discussion on that PR. It is definitely a WIP at this point.

The main reason for this change is that pretty much all of the Ansible CVEs to 
date came from "fact injection", meaning a fact that contains executable Python 
code Jinja will merrily exec(). Vars, hostvars, and facts are different in 
Ansible (yes, this is confusing — sorry). All vars go through a templating 
step. By copying facts to vars, it means facts get templated controller side 
which could lead to controller compromise if malicious code exists in facts.

We created an AnsibleUnsafe class to protect against this, but stopping the 
practice of injecting facts into vars would close the door completely. It also 
alleviates some name collisions if you set a hostvar that has the same name as 
a var. We have some methods that filter out certain variables, but keeping 
facts and vars in separate spaces is much cleaner.

This also does not change how hostvars set via set_fact are referenced. 
(set_fact should really be called set_host_var). Variables set with set_fact 
are not facts and are therefore not inside the ansible_facts dict. They are in 
the hostvars dict, which you can reference as {{ my_var }} or {{ 
hostvars['some-host']['my_var'] }} if you need to look it up from a different 
host.

All that being said, the setting to control this behavior, as Emilien pointed
out, is inject_facts_as_vars, which defaults to True and will remain that way
for the foreseeable future. I would not rush into changing all the fact 
references in playbooks. It can be a gradual process.

Setting inject_facts_as_vars to False means ansible_hostname becomes
ansible_facts.hostname. You do not have to use the hostvars dictionary — that 
is for looking up facts about hosts other than the current host.

If you wanted to be proactive, you could start using the ansible_facts 
dictionary today since it is compatible with the default setting and will not 
affect others trying to use playbooks that reference ansible_facts.

In other words, with the default setting of True, you can use either 
ansible_hostname or ansible_facts.hostname. Changing it to False means only 
ansible_facts.hostname is defined.
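A short playbook sketch of the naming rules above (the playbook itself, the host pattern, and the variable names are illustrative, not from the thread):

```yaml
- hosts: all
  tasks:
    # Works today and keeps working when inject_facts_as_vars=False:
    - debug:
        msg: "{{ ansible_facts.hostname }}"

    # Only defined while facts are injected as top-level vars (the default):
    - debug:
        msg: "{{ ansible_hostname }}"

    # set_fact creates a host var, not a fact; reference it directly,
    # not through the ansible_facts dict:
    - set_fact:
        my_var: example
    - debug:
        msg: "{{ my_var }}"
```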

> Like, really. I know we can't really have a word about that kind of decision, 
> but... damn, WHY ?!

That is most certainly not the case. Ansible is developed in the open, and we
encourage community members to attend meetings and add topics to the agenda
for discussion. Ansible also goes through a proposal process for major
changes.

You can always go to #ansible-devel on Freenode or start a discussion on the
mailing list to speak with the Ansible Core devs about these things as well.

---

Respectfully,

Sam Doran
Senior Software Engineer
Ansible by Red Hat
sdo...@redhat.com


Re: [openstack-dev] [nova] keypair quota usage info for user

2018-07-25 Thread Chris Friesen

On 07/25/2018 10:29 AM, William M Edmonds wrote:


Ghanshyam Mann  wrote on 07/25/2018 05:44:46 AM:
... snip ...
 > 1. is it ok to show the keypair used info via API ? any original
 > rational not to do so or it was just like that from starting.

keypairs aren't tied to a tenant/project, so how could nova track/report a quota
for them on a given tenant/project? Yet that is how the API is constructed: note
the "tenant_id" in GET /os-quota-sets/{tenant_id}/detail

 > 2. Because this change will show the keypair used quota information
 > in API's existing filed 'in_use', it is API behaviour change (not
 > interface signature change in backward incompatible way) which can
 > cause interop issue. Should we bump microversion for this change?

If we find a meaningful way to return in_use data for keypairs, then yes, I
would expect a microversion bump so that callers can distinguish between a)
talking to an older installation where in_use is always 0 vs. b) talking to a
newer installation where in_use is 0 because there are really none in use. Or if
we remove keypairs from the response, which at a glance seems to make more
sense, that should also have a microversion bump so that someone who expects the
old response format will still get it.


Keypairs are weird in that they're owned by users, not projects.  This is 
arguably wrong, since it can cause problems if a user boots an instance with 
their keypair and then gets removed from a project.


Nova microversion 2.54 added support for modifying the keypair associated with 
an instance when doing a rebuild.  Before that there was no clean way to do it.


Chris



Re: [openstack-dev] [tripleo] PTL non-candidacy

2018-07-25 Thread John Fulton
On Wed, Jul 25, 2018 at 1:38 PM Raoul Scarazzini  wrote:
>
> On 25/07/2018 15:23, Alex Schultz wrote:
> > Hey folks,
> > So it's been great fun and we've accomplished much over the last two
> > cycles but I believe it is time for me to step back and let someone
> > else do the PTLing.  I'm not going anywhere so I'll still be around to
> > focus on the simplification and improvements that TripleO needs going
> > forward.  I look forward to continuing our efforts with everyone.
> > Thanks,
> > -Alex
>
> To me you did really a great job. I know you'll be around and so on, but
> let me just say thank you.

+1000!
>
> --
> Raoul Scarazzini
> ra...@redhat.com
>



Re: [openstack-dev] [tripleo] PTL non-candidacy

2018-07-25 Thread Raoul Scarazzini
On 25/07/2018 15:23, Alex Schultz wrote:
> Hey folks,
> So it's been great fun and we've accomplished much over the last two
> cycles but I believe it is time for me to step back and let someone
> else do the PTLing.  I'm not going anywhere so I'll still be around to
> focus on the simplification and improvements that TripleO needs going
> forward.  I look forward to continuing our efforts with everyone.
> Thanks,
> -Alex

To me you did really a great job. I know you'll be around and so on, but
let me just say thank you.

-- 
Raoul Scarazzini
ra...@redhat.com



Re: [openstack-dev] [all] [designate] [heat] [python3] deadlock with eventlet and ThreadPoolExecutor in py3.7

2018-07-25 Thread Corey Bryant
Josh,

Thanks for the input. GreenThreadPoolExecutor does not have the deadlock
issue, so that is promising (at least with futurist 1.6.0).

Does ThreadPoolExecutor have better performance than
GreenThreadPoolExecutor? Curious if we could just swap out
ThreadPoolExecutor for GreenThreadPoolExecutor.

Thanks,
Corey

On Wed, Jul 25, 2018 at 12:54 PM, Joshua Harlow wrote:

> Have you tried the following instead of threadpoolexecutor (which honestly
> should work as well, even under eventlet + eventlet patching).
>
> https://docs.openstack.org/futurist/latest/reference/index.html#futurist.GreenThreadPoolExecutor
>
> If you have the ability to specify which executor your code is using, and
> you are running under eventlet I'd give preference to the green thread pool
> executor under that situation (and if not running under eventlet then
> prefer the threadpool executor variant).
>
> As for @tomoto question; honestly openstack was created before asyncio was
> a thing so that was a reason and assuming eventlet patching is actually
> working then all the existing stdlib stuff should keep on working under
> eventlet (including concurrent.futures); otherwise eventlet.monkey_patch
> isn't working and that's breaking the eventlet api. If their contract is
> that only certain things work when monkey patched, that's fair, but that
> needs to be documented somewhere (honestly it's time imho to get the hell
> off eventlet everywhere but that likely requires rewrites of a lot of
> things, oops...).
>
> -Josh
>
> Corey Bryant wrote:
>
>> Hi All,
>>
>> I'm trying to add Py3 packaging support for Ubuntu Rocky and while there
>> are a lot of issues involved with supporting Py3.7, this is one of the
>> big ones that I could use a hand with.
>>
>> With py3.7, there's a deadlock when eventlet monkeypatch of stdlib
>> thread modules is combined with use of ThreadPoolExecutor. I know this
>> affects at least designate. The same or similar also affects heat
>> (though I've not dug into the code, the traceback after canceling tests
>> matches that seen with designate). And it may affect other projects that
>> I haven't touched yet.
>>
>> How to recreate [1]:
>> * designate: Add a tox.ini py37 target and run designate.tests.test_workers.test_processing.TestProcessingExecutor.test_execute_multiple_tasks
>> * heat: Add a tox.ini py37 target and run tests
>> * general: Run bpo34173-recreate.py from issue 34173 (see below).
>> [1] ubuntu cosmic has py3.7
>>
>> In issue 508 (see below) @tomoto asks "Eventlet and asyncio solve same
>> problem. Why would you want concurrent.futures and eventlet in same
>> application?"
>>
>> I told @tomoto that I'd seek input on that question from upstream. I
>> know there've been efforts to move away from eventlet but I just don't
>> have the knowledge to provide a good answer to him.
>>
>> Here are the bugs/issues I currently have open for this:
>> https://github.com/eventlet/eventlet/issues/508
>> 
>> https://bugs.launchpad.net/designate/+bug/1782647
>> 
>> https://bugs.python.org/issue34173 
>>
>> Any help with this would be greatly appreciated!
>>
>> Thanks,
>> Corey
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] PTL non-candidacy

2018-07-25 Thread Wesley Hayutin
On Wed, Jul 25, 2018 at 9:24 AM Alex Schultz  wrote:

> Hey folks,
>
> So it's been great fun and we've accomplished much over the last two
> cycles but I believe it is time for me to step back and let someone
> else do the PTLing.  I'm not going anywhere so I'll still be around to
> focus on the simplification and improvements that TripleO needs going
> forward.  I look forwards to continuing our efforts with everyone.
>
> Thanks,
> -Alex
>

Thanks for all the hard work, long hours and leadership!
You have done a great job, congrats on a great cycle.

Thanks

>
-- 

Wes Hayutin

Associate Manager

Red Hat

hayu...@redhat.com   T: +1 919 754 4114   IRC: weshay

View my calendar and check my availability for meetings HERE



[Openstack] URGENT - Live migration error DestinationDiskExists

2018-07-25 Thread Satish Patel
I am using Pike 16.0.15 and seeing the following error during live
migration. I am using Ceph for shared storage. Any idea what is going on?

2018-07-25 13:15:00.773 52312 ERROR oslo_messaging.rpc.server
DestinationDiskExists: The supplied disk path
(/var/lib/nova/instances/5f56bc2b-74c8-47c1-834c-00796fafe6ae) already
exists, it is expected not to exist.

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [tripleo] PTL candidacy for the Stein Cycle

2018-07-25 Thread John Fulton
+1
On Wed, Jul 25, 2018 at 8:04 AM Juan Antonio Osorio  wrote:
>
> Hello folks!
>
> I'd like to nominate myself for the TripleO PTL role for the Stein cycle.
>
> Alex has done a great job as a PTL: The project is progressing nicely with
> many new, exciting features and uses for TripleO coming to fruition
> recently. It's a great time for the project. But there's more work to be
> done.
>
> I have served the TripleO community as a core-reviewer for some years now
> and, more recently, by driving the Security Squad. This project has been a
> great learning experience for me, both technically (I got to learn even
> more of OpenStack) and community-wise. Now I wish to serve the community
> further by bringing my experiences into the PTL role. While I have not yet
> served as PTL for a project before, I'm eager to learn the ropes and help
> improve the community that has been so influential on me.
>
> For Stein, I would like to focus on:
>
> * Increasing TripleO's usage in the testing of other projects
>   Now that TripleO can deploy a standalone OpenStack installation, I hope
>   it can be leveraged to add value to other projects' testing efforts. I
>   hope this would subsequently help increase TripleO's testing coverage,
>   and reduce the footprint required for full-deployment testing.
>
> * Technical Debt & simplification
>   We've been working on simplifying the deployment story and battling
>   technical debt -- let’s keep this momentum going. We've been running
>   (mostly) fully containerized environments for a couple of releases now;
>   I hope we can reduce the number of stacks we create, which would in turn
>   simplify the project structure (at least on the t-h-t side). We should
>   also aim for as much convergence as we can achieve (e.g. CLI and UI
>   workflows).
>
> * CI and testing
>   The project has made great progress regarding CI and testing; let's keep
>   this moving forward and give developers easier ways to bring up testing
>   environments to work on and to reproduce CI jobs.
>
> Thanks!
>
> Juan Antonio Osorio Robles
> IRC: jaosorior
>
>
> --
> Juan Antonio Osorio R.
> e-mail: jaosor...@gmail.com
>



Re: [openstack-dev] [all] [designate] [heat] [python3] deadlock with eventlet and ThreadPoolExecutor in py3.7

2018-07-25 Thread Joshua Harlow
Have you tried the following instead of ThreadPoolExecutor (which
honestly should work as well, even under eventlet + eventlet patching)?


https://docs.openstack.org/futurist/latest/reference/index.html#futurist.GreenThreadPoolExecutor

If you have the ability to specify which executor your code is using, 
and you are running under eventlet I'd give preference to the green 
thread pool executor under that situation (and if not running under 
eventlet then prefer the threadpool executor variant).
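As a minimal sketch of how interchangeable the two executors are: futurist's executors follow the concurrent.futures interface, so switching is roughly a one-line change. The GreenThreadPoolExecutor line is left commented out because it assumes futurist and eventlet are installed; the runnable part uses only the stdlib.

```python
from concurrent.futures import ThreadPoolExecutor

# futurist.GreenThreadPoolExecutor exposes the same submit()/result()
# interface as concurrent.futures executors, so under eventlet the swap
# is roughly (assuming futurist is installed):
#   import futurist
#   executor = futurist.GreenThreadPoolExecutor(max_workers=4)
executor = ThreadPoolExecutor(max_workers=4)

# Submit a few trivial tasks and collect their results in order.
results = [executor.submit(pow, 2, n).result() for n in range(4)]
print(results)  # [1, 2, 4, 8]
executor.shutdown()
```

Whether swapping is safe performance-wise depends on the workload: green threads avoid OS thread overhead but only yield on monkey-patched blocking calls.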


As for @tomoto's question: honestly, OpenStack was created before asyncio
was a thing, so that was a reason. And assuming eventlet patching is
actually working, then all the existing stdlib stuff should keep on
working under eventlet (including concurrent.futures); otherwise
eventlet.monkey_patch isn't working and that's breaking the eventlet
api. If their contract is that only certain things work when monkey
patched, that's fair, but that needs to be documented somewhere
(honestly it's time imho to get the hell off eventlet everywhere, but
that likely requires rewrites of a lot of things, oops...).


-Josh

Corey Bryant wrote:

Hi All,

I'm trying to add Py3 packaging support for Ubuntu Rocky and while there
are a lot of issues involved with supporting Py3.7, this is one of the
big ones that I could use a hand with.

With py3.7, there's a deadlock when eventlet monkeypatch of stdlib
thread modules is combined with use of ThreadPoolExecutor. I know this
affects at least designate. The same or similar also affects heat
(though I've not dug into the code, the traceback after canceling tests
matches that seen with designate). And it may affect other projects that
I haven't touched yet.

How to recreate [1]:
* designate: Add a tox.ini py37 target and run
designate.tests.test_workers.test_processing.TestProcessingExecutor.test_execute_multiple_tasks
* heat: Add a tox.ini py37 target and run tests
* general: Run bpo34173-recreate.py from issue 34173 (see below).
[1] ubuntu cosmic has py3.7

In issue 508 (see below) @tomoto asks "Eventlet and asyncio solve same
problem. Why would you want concurrent.futures and eventlet in same
application?"

I told @tomoto that I'd seek input on that question from upstream. I
know there've been efforts to move away from eventlet but I just don't
have the knowledge to provide a good answer to him.

Here are the bugs/issues I currently have open for this:
https://github.com/eventlet/eventlet/issues/508

https://bugs.launchpad.net/designate/+bug/1782647

https://bugs.python.org/issue34173 

Any help with this would be greatly appreciated!

Thanks,
Corey





Re: [openstack-dev] [tripleo] PTL candidacy for the Stein Cycle

2018-07-25 Thread Remo Mattei
+1 for Juan, 


> On Jul 25, 2018, at 5:03 AM, Juan Antonio Osorio  wrote:
> 
> Hello folks!
> 
> I'd like to nominate myself for the TripleO PTL role for the Stein cycle.
> 
> Alex has done a great job as a PTL: The project is progressing nicely with
> many new, exciting features and uses for TripleO coming to fruition
> recently. It's a great time for the project. But there's more work to be
> done.
> 
> I have served the TripleO community as a core-reviewer for some years now
> and, more recently, by driving the Security Squad. This project has been a
> great learning experience for me, both technically (I got to learn even
> more of OpenStack) and community-wise. Now I wish to serve the community
> further by bringing my experiences into the PTL role. While I have not yet
> served as PTL for a project before, I'm eager to learn the ropes and help
> improve the community that has been so influential on me.
> 
> For Stein, I would like to focus on:
> 
> * Increasing TripleO's usage in the testing of other projects
>   Now that TripleO can deploy a standalone OpenStack installation, I hope
>   it can be leveraged to add value to other projects' testing efforts. I
>   hope this would subsequently help increase TripleO's testing coverage,
>   and reduce the footprint required for full-deployment testing.
> 
> * Technical Debt & simplification
>   We've been working on simplifying the deployment story and battling
>   technical debt -- let’s keep this momentum going. We've been running
>   (mostly) fully containerized environments for a couple of releases now;
>   I hope we can reduce the number of stacks we create, which would in turn
>   simplify the project structure (at least on the t-h-t side). We should
>   also aim for as much convergence as we can achieve (e.g. CLI and UI
>   workflows).
> 
> * CI and testing
>   The project has made great progress regarding CI and testing; let's keep
>   this moving forward and give developers easier ways to bring up testing
>   environments to work on and to reproduce CI jobs.
> 
> Thanks!
> 
> Juan Antonio Osorio Robles
> IRC: jaosorior
> 
> 
> -- 
> Juan Antonio Osorio R.
> e-mail: jaosor...@gmail.com 
> 



Re: [openstack-dev] [kolla] ptl non candidacy

2018-07-25 Thread Surya Singh
Jeffrey, great work and great leadership during the Rocky cycle.
Hope to see you around always.

---spsurya

On Wed, Jul 25, 2018 at 9:19 AM Jeffrey Zhang 
wrote:

> Hi all,
>
> I just want to say that I am not running for PTL for the Stein cycle. I
> have been involved in the Kolla project for almost 3 years, and recently
> my work has changed a little, too, so I may not have much time for the
> community in the future. Kolla is a great project and the community is
> also awesome. I would encourage everyone in the community to consider
> running.
>
> Thanks for your support :D.
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] keypair quota usage info for user

2018-07-25 Thread William M Edmonds


Ghanshyam Mann  wrote on 07/25/2018 05:44:46 AM:
... snip ...
> 1. is it ok to show the keypair used info via API ? any original
> rational not to do so or it was just like that from starting.

keypairs aren't tied to a tenant/project, so how could nova track/report a
quota for them on a given tenant/project? Which is how the API is
constructed... note the "tenant_id" in
GET /os-quota-sets/{tenant_id}/detail

> 2. Because this change will show the keypair used quota information
> in API's existing filed 'in_use', it is API behaviour change (not
> interface signature change in backward incompatible way) which can
> cause interop issue. Should we bump microversion for this change?

If we find a meaningful way to return in_use data for keypairs, then yes, I
would expect a microversion bump so that callers can distinguish between a)
talking to an older installation where in_use is always 0 vs. b) talking to
a newer installation where in_use is 0 because there are really none in
use. Or if we remove keypairs from the response, which at a glance seems to
make more sense, that should also have a microversion bump so that someone
who expects the old response format will still get it.
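To make the ambiguity concrete, here is a small hedged sketch: the field names mirror the os-quota-sets API, but the surrounding dict and the values are illustrative, not a captured response.

```python
# Illustrative shape of GET /os-quota-sets/{tenant_id}/detail output;
# only the key_pairs entry is shown, and the values are made up.
response = {
    "quota_set": {
        "key_pairs": {"in_use": 0, "limit": 100, "reserved": 0},
    }
}

keypairs = response["quota_set"]["key_pairs"]
# Without a microversion bump, a caller cannot distinguish "in_use is
# always hard-coded to 0" from "zero keypairs are really in use".
assert keypairs["in_use"] == 0
print("in_use == 0 is ambiguous without a microversion bump")
```

This is exactly why either reporting real usage or dropping the keypairs entry needs a microversion: the response shape stays the same while its meaning changes.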


Re: [openstack-dev] [release][ptl] Deadlines this week

2018-07-25 Thread Matthew Thode
On 18-07-23 14:20:59, Sean McGinnis wrote:
> Just a quick reminder that this week is a big one for deadlines.
> 
> This Thursday, July 26, is our scheduled deadline for feature freeze, soft
> string freeze, client library freeze, and requirements freeze.
> 
> String freeze is necessary to give our i18n team a chance at translating error
> strings. You are highly encouraged not to accept proposed changes containing
> modifications in user-facing strings (with consideration for important bug
> fixes of course). Such changes should be rejected by the review team and
> postponed until the next series development opens (which should happen when
> RC1 is published).
> 
> The other freezes are to allow library changes and other code churn to settle
> down before we get to RC1. Import feature freeze exceptions should be 
> requested
> from the project's PTL for them to decide if the risk is low enough to allow
> changes to still be accepted.
> 
> Requirements updates will need a feature freeze exception from the 
> requirements
> team. Those should be requested by sending a request to openstack-dev with the
> subject line containing "[requirements][ffe]".
> 
> For more details, please refer to our published Rocky release schedule:
> 
> https://releases.openstack.org/rocky/schedule.html
> 

Final reminder, the requirements freeze starts tomorrow.  I still see
some projects trickling in, so this is your final warning.  Starting
tomorrow you will have to make a FFE request to the list first.

-- 
Matthew Thode (prometheanfire)




Re: [openstack-dev] [trove] Considering the transfer of the project leadership

2018-07-25 Thread 赵超
CC'ing the Trove team members and the folks from the Samsung R&D Center in
Krakow, Poland privately, so that anyone of them who is not reading the ML
will also be notified.

On Thu, Jul 26, 2018 at 12:09 AM, 赵超  wrote:

> Hi All,
>
> Trove currently has a really small team, and all the active team members
> are from China. We had some good discussions during the Rocky online PTG
> meetings[1], and the goals were arranged and prioritized [2][3]. But it's
> sad that none of us could focus on the project, and the number of patches
> and reviews fell a lot in this cycle compared to Queens.
>
> [1] https://etherpad.openstack.org/p/trove-ptg-rocky
> [2] https://etherpad.openstack.org/p/trove-priorities-and-specs-tracking
> [3] https://docs.google.com/spreadsheets/d/1Jz6TnmRHnhbg6J_tSBXv-SvYIrG4NLh4nWejupxqdeg/edit#gid=0
>
> For me, it has been a really great chance to serve in the PTL role for
> Trove, and I learned a lot during this cycle (from the Trove projects to
> the CI infrastructures, and more). However, in this cycle I have had no
> bandwidth to work on the project for months, and the situation does not
> seem likely to get better in the foreseeable future, so I think it's
> better to transfer the leadership and look for opportunities for more
> participation in the project.
>
> A good piece of news is that recently a team from the Samsung R&D Center
> in Krakow, Poland joined us. They're building a product on OpenStack, have
> made improvements to Trove (internally), and are now interested in
> contributing to the community, starting by migrating the integration tests
> to the tempest plugin. They're also willing and ready to take on the PTL
> role. The only problem for their nomination may be that none of them has
> had a patch merged into the Trove projects. There are some patches in the
> trove-tempest-plugin waiting for review, but given the activity level of
> the project, they may need a long time to merge (and we're at Rocky
> milestone-3; I think we could merge patches in the trove-tempest-plugin,
> as they're all about testing).
>
> I also hope and welcome that the other current active team members of
> Trove nominate themselves; that way, we could have more discussion about
> how we think about the direction of Trove.
>
> I'll still be here to help with the migration of the integration tests,
> CentOS guest image support, Cluster improvements and all the other goals
> we discussed before, and with code review.
>
> Thanks.
>
> --
> To be free as in freedom.
>



-- 
To be free as in freedom.


[openstack-dev] [trove] Considering the transfer of the project leadership

2018-07-25 Thread 赵超
Hi All,

Trove currently has a really small team, and all the active team members
are from China. We had some good discussions during the Rocky online PTG
meetings[1], and the goals were arranged and prioritized [2][3]. But it's
sad that none of us could focus on the project, and the number of patches
and reviews fell a lot in this cycle compared to Queens.

[1] https://etherpad.openstack.org/p/trove-ptg-rocky
[2] https://etherpad.openstack.org/p/trove-priorities-and-specs-tracking
[3]
https://docs.google.com/spreadsheets/d/1Jz6TnmRHnhbg6J_tSBXv-SvYIrG4NLh4nWejupxqdeg/edit#gid=0

For me, it has been a really great chance to serve in the PTL role for
Trove, and I learned a lot during this cycle (from the Trove projects to
the CI infrastructures, and more). However, in this cycle I have had no
bandwidth to work on the project for months, and the situation does not
seem likely to get better in the foreseeable future, so I think it's
better to transfer the leadership and look for opportunities for more
participation in the project.

A good piece of news is that recently a team from the Samsung R&D Center
in Krakow, Poland joined us. They're building a product on OpenStack, have
made improvements to Trove (internally), and are now interested in
contributing to the community, starting by migrating the integration tests
to the tempest plugin. They're also willing and ready to take on the PTL
role. The only problem for their nomination may be that none of them has
had a patch merged into the Trove projects. There are some patches in the
trove-tempest-plugin waiting for review, but given the activity level of
the project, they may need a long time to merge (and we're at Rocky
milestone-3; I think we could merge patches in the trove-tempest-plugin,
as they're all about testing).

I also hope and welcome that the other current active team members of
Trove nominate themselves; that way, we could have more discussion about
how we think about the direction of Trove.

I'll still be here to help with the migration of the integration tests,
CentOS guest image support, Cluster improvements and all the other goals
we discussed before, and with code review.

Thanks.

-- 
To be free as in freedom.


Re: [Openstack] Live migration failed with ceph storage

2018-07-25 Thread Satish Patel
Looks like I do have this option in whatever version of nova I am running.


[root@ostack-compute-02 site-packages]# pwd
/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages
[root@ostack-compute-02 site-packages]# grep "migrate_configure_max_speed" * -r
nova/tests/unit/virt/libvirt/test_driver.py:
guest.migrate_configure_max_speed = mock.MagicMock()
nova/tests/unit/virt/libvirt/test_driver.py:
guest.migrate_configure_max_speed.assert_called_once_with(
nova/tests/unit/virt/libvirt/test_driver.py:
guest.migrate_configure_max_speed = mock.MagicMock()
nova/tests/unit/virt/libvirt/test_driver.py:
guest.migrate_configure_max_speed.assert_called_once_with(
nova/tests/unit/virt/libvirt/test_driver.py:
guest.migrate_configure_max_speed = mock.MagicMock()
nova/tests/unit/virt/libvirt/test_driver.py:
guest.migrate_configure_max_speed.assert_not_called()
nova/tests/unit/virt/libvirt/test_driver.py:
guest.migrate_configure_max_speed = mock.MagicMock()
nova/tests/unit/virt/libvirt/test_driver.py:
guest.migrate_configure_max_speed.assert_not_called()
nova/virt/libvirt/driver.py:guest.migrate_configure_max_speed(
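The grep above shows driver.py calling migrate_configure_max_speed, while per the earlier thread the Guest class in the same tree may not define it. A standalone illustration of that mismatch follows; the class and function only mimic the shape of nova's code and are not nova code.

```python
class Guest:
    """Stand-in for a Pike-era nova Guest: migrate_configure_max_speed
    is simply not defined on it."""


def do_live_migration(guest):
    # Mimics the Queens-era driver code path, which calls the new
    # method unconditionally during live migration.
    guest.migrate_configure_max_speed(0)


err = None
try:
    do_live_migration(Guest())
except AttributeError as exc:
    err = exc  # 'Guest' object has no attribute 'migrate_configure_max_speed'

print("Live migration failed.:", err)
```

In other words, the failure points at a driver.py and guest.py pair that come from different release series, so aligning the two files (a consistent Pike or a consistent Queens tree) is the direction suggested in the thread.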



On Wed, Jul 25, 2018 at 10:15 AM, Satish Patel  wrote:
> David,
>
> I did this on compute node
>
> [root@ostack-compute-01 ~]# locate test_guest.py
> /openstack/venvs/nova-16.0.14/lib/python2.7/site-packages/nova/tests/unit/virt/libvirt/test_guest.py
> /openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/tests/unit/virt/libvirt/test_guest.py
>
>
> I didn't find option
>
> [root@ostack-compute-01 ~]# grep -i "test_migrate_configure_max_speed"
> /openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/tests/unit/virt/libvirt/test_guest.py
> [root@ostack-compute-01 ~]#
>
> On Wed, Jul 25, 2018 at 10:06 AM, Satish Patel  wrote:
>> Oh wait, I believe it's the following:
>>
>> https://github.com/openstack/openstack-ansible/blob/0e03f46a2ebb0ffc6f12384f19ec1184434e7a09/playbooks/defaults/repo_packages/openstack_services.yml#L148
>>
>> On Wed, Jul 25, 2018 at 10:04 AM, Satish Patel  wrote:
>>> David,
>>>
>>> Looks like OSAD 16.0.15 is using the following repo, if I am not wrong:
>>>
>>> - name: os_nova
>>>   scm: git
>>>   src: https://git.openstack.org/openstack/openstack-ansible-os_nova
>>>   version: 378cf6c83f9ad23c2e0d37e9df06796fee02cc27
>>>
>>> On Wed, Jul 25, 2018 at 9:45 AM, David Medberry  
>>> wrote:
 I think that nova --version is the version of the client (not of nova
 itself).

 I'm looking at OSAD 16.0.15 to see what it is pulling for nova.

 If I see anything of interest, I'll reply.

 On Wed, Jul 25, 2018 at 6:33 AM, Satish Patel  wrote:
>
> Thanks David,
>
> [root@ostack-compute-01 ~]# nova --version
> 9.1.2
>
> I am using Pike 16.0.15  (My deployment tool is openstack-ansible)
>
>
> What are my option here?
>
>
> On Wed, Jul 25, 2018 at 8:19 AM, David Medberry 
> wrote:
> > It's not clear what version of Nova you are running but perhaps it is
> > badly
> > patched. The 16.x.x (Pike) release of Nova has no
> > "migrate_configure_max_speed" but as best I can tell you are running a
> > patched version of Nova Pike so it may be inconsistent.
> >
> > This parameter was introduced on 2017-08-24:
> >
> > https://github.com/openstack/nova/commit/23446a9552b5be3b040278646149a0f481d0a005
> >
> > That parameter showed up in Queens (not Pike) initially.
> >
> > -d
> >
> > On Tue, Jul 24, 2018 at 11:22 PM, Satish Patel 
> > wrote:
> >>
> >> I have openstack with ceph storage setup and trying to test Live
> >> migration but somehow it failed and showing following error
> >>
> >> nova.conf
> >>
> >> # ceph rbd support
> >> live_migration_uri = "qemu+tcp://%s/system"
> >> live_migration_tunnelled = True
> >>
> >> libvirtd.conf
> >>
> >> listen_tls = 0
> >> listen_tcp = 1
> >> unix_sock_group = "libvirt"
> >> unix_sock_ro_perms = "0777"
> >> unix_sock_rw_perms = "0770"
> >> auth_unix_ro = "none"
> >> auth_unix_rw = "none"
> >> auth_tcp = "none"
> >>
> >>
> >> This is the error i am getting, i google it but didn't find any
> >> reference
> >>
> >>
> >>
> >> ] [instance: 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] Live migration
> >> failed.: AttributeError: 'Guest' object has no attribute
> >> 'migrate_configure_max_speed'
> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] Traceback (most recent call
> >> last):
> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]   File
> >>
> >>
> >> "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py",
> >> line 5580, in _do_live_migration
> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:

[openstack-dev] [tripleo] network isolation can't find files referred to on director

2018-07-25 Thread Samuel Monderer
Hi,

I'm trying to upgrade from OSP11 (Ocata) to OSP13 (Queens).
In my network-isolation.yaml I refer to files that no longer exist on the
director, such as:

  OS::TripleO::Compute::Ports::ExternalPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Compute::Ports::InternalApiPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Compute::Ports::StoragePort:
/usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Compute::Ports::StorageMgmtPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Compute::Ports::TenantPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Compute::Ports::ManagementPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/management_from_pool.yaml

Where have they gone?

Samuel


[openstack-dev] [all] [designate] [heat] [python3] deadlock with eventlet and ThreadPoolExecutor in py3.7

2018-07-25 Thread Corey Bryant
Hi All,

I'm trying to add Py3 packaging support for Ubuntu Rocky and while there
are a lot of issues involved with supporting Py3.7, this is one of the big
ones that I could use a hand with.

With py3.7, there's a deadlock when eventlet monkeypatch of stdlib thread
modules is combined with use of ThreadPoolExecutor. I know this affects at
least designate. The same or similar also affects heat (though I've not dug
into the code, the traceback after canceling tests matches that seen with
designate). And it may affect other projects that I haven't touched yet.

How to recreate [1]:
* designate: Add a tox.ini py37 target and run designate.tests.test_workers.test_processing.TestProcessingExecutor.test_execute_multiple_tasks
* heat: Add a tox.ini py37 target and run tests
* general: Run bpo34173-recreate.py from issue 34173 (see below).
[1] ubuntu cosmic has py3.7
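For reference, a py37 target like the ones mentioned above can be sketched as the following tox.ini fragment; the testenv body is an assumption based on common OpenStack project tox configs, not taken from designate or heat.

```ini
# Hypothetical tox.ini fragment adding a py37 target; the commands line
# assumes the project runs its unit tests via stestr.
[testenv:py37]
basepython = python3.7
commands = stestr run {posargs}
```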

In issue 508 (see below) @tomoto asks "Eventlet and asyncio solve same
problem. Why would you want concurrent.futures and eventlet in same
application?"

I told @tomoto that I'd seek input on that question from upstream. I know
there've been efforts to move away from eventlet but I just don't have the
knowledge to provide a good answer to him.

Here are the bugs/issues I currently have open for this:
https://github.com/eventlet/eventlet/issues/508
https://bugs.launchpad.net/designate/+bug/1782647
https://bugs.python.org/issue34173

Any help with this would be greatly appreciated!

Thanks,
Corey


Re: [openstack-dev] [tripleo] Mistral workflow cannot establish connection

2018-07-25 Thread Samuel Monderer
Hi Steve,

You were right, when I removed most of the roles it worked.

I've encountered another problem. It seems that the network-isolation.yaml
I used with OSP11 is pointing to files that do not exist anymore such as

  # Port assignments for the Controller role
  OS::TripleO::Controller::Ports::ExternalPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Controller::Ports::InternalApiPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Controller::Ports::StoragePort:
/usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Controller::Ports::TenantPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Controller::Ports::ManagementPort:
/usr/share/openstack-tripleo-heat-templates/network/ports/management_from_pool.yaml

Have they moved to a different location, or are they created during the
overcloud deployment?

Thanks
Samuel

On Mon, Jul 16, 2018 at 3:06 PM Steven Hardy  wrote:

> On Sun, Jul 15, 2018 at 7:50 PM, Samuel Monderer
>  wrote:
> >
> > Hi Remo,
> >
> > Attached are templates I used for the deployment. They are based on a
> deployment we did with OSP11.
> > I made the changes for it to work with OSP13.
> >
> > I do think it's the roles_data.yaml file that is causing the error
> because if remove the " -r $TEMPLATES_DIR/roles_data.yaml" from the
> deployment script the deployment passes the point it was failing before but
> fails much later because of the missing definition of the role.
>
> I can't see a problem with the roles_data.yaml you provided; it seems
> to render ok using tripleo-heat-templates/tools/process-templates.py.
> Are you sure the error isn't related to uploading the roles_data file
> to the swift container?
>
> I'd check basic CLI access to swift as a sanity check, e.g. something like:
>
> openstack container list
>
> and writing the roles data e.g:
>
> openstack object create overcloud roles_data.yaml
>
> If that works OK then it may be an haproxy timeout - you are
> specifying quite a lot of roles, so I wonder if something is timing
> out during the plan creation phase - we had some similar issues in CI
> ref https://bugs.launchpad.net/tripleo-quickstart/+bug/1638908 where
> increasing the haproxy timeouts helped.
>
> Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
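For reference, the haproxy timeout increase Steve mentions would look roughly like the following on the undercloud. The section and values here are illustrative only, not a tested configuration; tune them to your environment:

```
# /etc/haproxy/haproxy.cfg, defaults section (illustrative values)
defaults
    timeout connect 10s
    timeout client  2m
    timeout server  2m
```

Restart haproxy after editing for the new timeouts to take effect.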


Re: [openstack-dev] [nova] API updates week 19-25

2018-07-25 Thread Surya Seetharaman
Hi!

On Wed, Jul 25, 2018 at 11:53 AM, Ghanshyam Mann 
wrote:

>
> 5. API Extensions merge work
> - https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-rocky
> - https://review.openstack.org/#/q/project:openstack/nova+
> branch:master+topic:bp/api-extensions-merge-rocky
> - Weekly Progress: part-1 of schema merge and part-2 of server_create
> merge has been merged for Rocky. 1 last patch of removing the placeholder
> method are on gate.
> part-3 of view builder merge
> cannot make it to Rocky (7 patch up for review + 5 more to push)< Postponed
> this work to Stein.
>
> 6. Handling a down cell
> - https://blueprints.launchpad.net/nova/+spec/handling-down-cell
> - https://review.openstack.org/#/q/topic:bp/handling-down-
> cell+(status:open+OR+status:merged)
> - Weekly Progress: It is difficult to make it in Rocky. Matt has an open
> comment on the patch about changing the service list along with the server
> list in a single microversion, which makes sense.
>
>
The API changes for the handling-down-cell spec will also be postponed to
Stein, since the view builder merge (part-3 of the API extensions merge
work) is postponed to Stein. It would be cleaner that way.

-- 

Regards,
Surya.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Live migration failed with ceph storage

2018-07-25 Thread Satish Patel
Oh wait, I believe it's the following:

https://github.com/openstack/openstack-ansible/blob/0e03f46a2ebb0ffc6f12384f19ec1184434e7a09/playbooks/defaults/repo_packages/openstack_services.yml#L148

On Wed, Jul 25, 2018 at 10:04 AM, Satish Patel  wrote:
> David,
>
> look like OSAD 16.0.15 using following repo, if i am not wrong
>
> - name: os_nova
>   scm: git
>   src: https://git.openstack.org/openstack/openstack-ansible-os_nova
>   version: 378cf6c83f9ad23c2e0d37e9df06796fee02cc27
>
> On Wed, Jul 25, 2018 at 9:45 AM, David Medberry  
> wrote:
>> I think that nova --version is the version of the client (not of nova
>> itself).
>>
>> I'm looking at OSAD 16.0.15 to see what it is pulling for nova.
>>
>> If I see anything of interest, I'll reply.
>>
>> On Wed, Jul 25, 2018 at 6:33 AM, Satish Patel  wrote:
>>>
>>> Thanks David,
>>>
>>> [root@ostack-compute-01 ~]# nova --version
>>> 9.1.2
>>>
>>> I am using Pike 16.0.15  (My deployment tool is openstack-ansible)
>>>
>>>
>>> What are my option here?
>>>
>>>
>>> On Wed, Jul 25, 2018 at 8:19 AM, David Medberry 
>>> wrote:
>>> > It's not clear what version of Nova you are running but perhaps it is
>>> > badly
>>> > patched. The 16.x.x (Pike) release of Nova has no
>>> > "migrate_configure_max_speed" but as best I can tell you are running a
>>> > patched version of Nova Pike so it may be inconsistent.
>>> >
>>> > This parameter was introduced on 2017-08-24:
>>> >
>>> > https://github.com/openstack/nova/commit/23446a9552b5be3b040278646149a0f481d0a005
>>> >
>>> > That parameter showed up in Queens (not Pike) initially.
>>> >
>>> > -d
>>> >
>>> > On Tue, Jul 24, 2018 at 11:22 PM, Satish Patel 
>>> > wrote:
>>> >>
>>> >> I have openstack with ceph storage setup and trying to test Live
>>> >> migration but somehow it failed and showing following error
>>> >>
>>> >> nova.conf
>>> >>
>>> >> # ceph rbd support
>>> >> live_migration_uri = "qemu+tcp://%s/system"
>>> >> live_migration_tunnelled = True
>>> >>
>>> >> libvirtd.conf
>>> >>
>>> >> listen_tls = 0
>>> >> listen_tcp = 1
>>> >> unix_sock_group = "libvirt"
>>> >> unix_sock_ro_perms = "0777"
>>> >> unix_sock_rw_perms = "0770"
>>> >> auth_unix_ro = "none"
>>> >> auth_unix_rw = "none"
>>> >> auth_tcp = "none"
>>> >>
>>> >>
>>> >> This is the error i am getting, i google it but didn't find any
>>> >> reference
>>> >>
>>> >>
>>> >>
>>> >> ] [instance: 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] Live migration
>>> >> failed.: AttributeError: 'Guest' object has no attribute
>>> >> 'migrate_configure_max_speed'
>>> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>>> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] Traceback (most recent call
>>> >> last):
>>> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>>> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]   File
>>> >>
>>> >>
>>> >> "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py",
>>> >> line 5580, in _do_live_migration
>>> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>>> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] block_migration,
>>> >> migrate_data)
>>> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>>> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]   File
>>> >>
>>> >>
>>> >> "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
>>> >> line 6436, in live_migration
>>> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>>> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] migrate_data)
>>> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>>> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]   File
>>> >>
>>> >>
>>> >> "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
>>> >> line 6944, in _live_migration
>>> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>>> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]
>>> >> guest.migrate_configure_max_speed(
>>> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>>> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] AttributeError: 'Guest' object
>>> >> has no attribute 'migrate_configure_max_speed'
>>> >>
>>> >> ___
>>> >> Mailing list:
>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> >> Post to : openstack@lists.openstack.org
>>> >> Unsubscribe :
>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> >
>>> >
>>
>>

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Live migration failed with ceph storage

2018-07-25 Thread Satish Patel
David,

Looks like OSAD 16.0.15 is using the following repo, if I'm not wrong:

- name: os_nova
  scm: git
  src: https://git.openstack.org/openstack/openstack-ansible-os_nova
  version: 378cf6c83f9ad23c2e0d37e9df06796fee02cc27

On Wed, Jul 25, 2018 at 9:45 AM, David Medberry  wrote:
> I think that nova --version is the version of the client (not of nova
> itself).
>
> I'm looking at OSAD 16.0.15 to see what it is pulling for nova.
>
> If I see anything of interest, I'll reply.
>
> On Wed, Jul 25, 2018 at 6:33 AM, Satish Patel  wrote:
>>
>> Thanks David,
>>
>> [root@ostack-compute-01 ~]# nova --version
>> 9.1.2
>>
>> I am using Pike 16.0.15  (My deployment tool is openstack-ansible)
>>
>>
>> What are my option here?
>>
>>
>> On Wed, Jul 25, 2018 at 8:19 AM, David Medberry 
>> wrote:
>> > It's not clear what version of Nova you are running but perhaps it is
>> > badly
>> > patched. The 16.x.x (Pike) release of Nova has no
>> > "migrate_configure_max_speed" but as best I can tell you are running a
>> > patched version of Nova Pike so it may be inconsistent.
>> >
>> > This parameter was introduced on 2017-08-24:
>> >
>> > https://github.com/openstack/nova/commit/23446a9552b5be3b040278646149a0f481d0a005
>> >
>> > That parameter showed up in Queens (not Pike) initially.
>> >
>> > -d
>> >
>> > On Tue, Jul 24, 2018 at 11:22 PM, Satish Patel 
>> > wrote:
>> >>
>> >> I have openstack with ceph storage setup and trying to test Live
>> >> migration but somehow it failed and showing following error
>> >>
>> >> nova.conf
>> >>
>> >> # ceph rbd support
>> >> live_migration_uri = "qemu+tcp://%s/system"
>> >> live_migration_tunnelled = True
>> >>
>> >> libvirtd.conf
>> >>
>> >> listen_tls = 0
>> >> listen_tcp = 1
>> >> unix_sock_group = "libvirt"
>> >> unix_sock_ro_perms = "0777"
>> >> unix_sock_rw_perms = "0770"
>> >> auth_unix_ro = "none"
>> >> auth_unix_rw = "none"
>> >> auth_tcp = "none"
>> >>
>> >>
>> >> This is the error i am getting, i google it but didn't find any
>> >> reference
>> >>
>> >>
>> >>
>> >> ] [instance: 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] Live migration
>> >> failed.: AttributeError: 'Guest' object has no attribute
>> >> 'migrate_configure_max_speed'
>> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] Traceback (most recent call
>> >> last):
>> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]   File
>> >>
>> >>
>> >> "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py",
>> >> line 5580, in _do_live_migration
>> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] block_migration,
>> >> migrate_data)
>> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]   File
>> >>
>> >>
>> >> "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
>> >> line 6436, in live_migration
>> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] migrate_data)
>> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]   File
>> >>
>> >>
>> >> "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
>> >> line 6944, in _live_migration
>> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]
>> >> guest.migrate_configure_max_speed(
>> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] AttributeError: 'Guest' object
>> >> has no attribute 'migrate_configure_max_speed'
>> >>
>> >> ___
>> >> Mailing list:
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> >> Post to : openstack@lists.openstack.org
>> >> Unsubscribe :
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> >
>> >
>
>

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Live migration failed with ceph storage

2018-07-25 Thread Satish Patel
David,

I did this on the compute node:

[root@ostack-compute-01 ~]# locate test_guest.py
/openstack/venvs/nova-16.0.14/lib/python2.7/site-packages/nova/tests/unit/virt/libvirt/test_guest.py
/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/tests/unit/virt/libvirt/test_guest.py


I didn't find it:

[root@ostack-compute-01 ~]# grep -i "test_migrate_configure_max_speed"
/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/tests/unit/virt/libvirt/test_guest.py
[root@ostack-compute-01 ~]#

On Wed, Jul 25, 2018 at 10:06 AM, Satish Patel  wrote:
> Oh wait i believe following.
>
> https://github.com/openstack/openstack-ansible/blob/0e03f46a2ebb0ffc6f12384f19ec1184434e7a09/playbooks/defaults/repo_packages/openstack_services.yml#L148
>
> On Wed, Jul 25, 2018 at 10:04 AM, Satish Patel  wrote:
>> David,
>>
>> look like OSAD 16.0.15 using following repo, if i am not wrong
>>
>> - name: os_nova
>>   scm: git
>>   src: https://git.openstack.org/openstack/openstack-ansible-os_nova
>>   version: 378cf6c83f9ad23c2e0d37e9df06796fee02cc27
>>
>> On Wed, Jul 25, 2018 at 9:45 AM, David Medberry  
>> wrote:
>>> I think that nova --version is the version of the client (not of nova
>>> itself).
>>>
>>> I'm looking at OSAD 16.0.15 to see what it is pulling for nova.
>>>
>>> If I see anything of interest, I'll reply.
>>>
>>> On Wed, Jul 25, 2018 at 6:33 AM, Satish Patel  wrote:

 Thanks David,

 [root@ostack-compute-01 ~]# nova --version
 9.1.2

 I am using Pike 16.0.15  (My deployment tool is openstack-ansible)


 What are my option here?


 On Wed, Jul 25, 2018 at 8:19 AM, David Medberry 
 wrote:
 > It's not clear what version of Nova you are running but perhaps it is
 > badly
 > patched. The 16.x.x (Pike) release of Nova has no
 > "migrate_configure_max_speed" but as best I can tell you are running a
 > patched version of Nova Pike so it may be inconsistent.
 >
 > This parameter was introduced on 2017-08-24:
 >
 > https://github.com/openstack/nova/commit/23446a9552b5be3b040278646149a0f481d0a005
 >
 > That parameter showed up in Queens (not Pike) initially.
 >
 > -d
 >
 > On Tue, Jul 24, 2018 at 11:22 PM, Satish Patel 
 > wrote:
 >>
 >> I have openstack with ceph storage setup and trying to test Live
 >> migration but somehow it failed and showing following error
 >>
 >> nova.conf
 >>
 >> # ceph rbd support
 >> live_migration_uri = "qemu+tcp://%s/system"
 >> live_migration_tunnelled = True
 >>
 >> libvirtd.conf
 >>
 >> listen_tls = 0
 >> listen_tcp = 1
 >> unix_sock_group = "libvirt"
 >> unix_sock_ro_perms = "0777"
 >> unix_sock_rw_perms = "0770"
 >> auth_unix_ro = "none"
 >> auth_unix_rw = "none"
 >> auth_tcp = "none"
 >>
 >>
 >> This is the error i am getting, i google it but didn't find any
 >> reference
 >>
 >>
 >>
 >> ] [instance: 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] Live migration
 >> failed.: AttributeError: 'Guest' object has no attribute
 >> 'migrate_configure_max_speed'
 >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
 >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] Traceback (most recent call
 >> last):
 >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
 >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]   File
 >>
 >>
 >> "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py",
 >> line 5580, in _do_live_migration
 >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
 >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] block_migration,
 >> migrate_data)
 >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
 >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]   File
 >>
 >>
 >> "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
 >> line 6436, in live_migration
 >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
 >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] migrate_data)
 >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
 >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]   File
 >>
 >>
 >> "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
 >> line 6944, in _live_migration
 >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
 >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]
 >> guest.migrate_configure_max_speed(
 >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
 >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] AttributeError: 'Guest' object
 >> has no attribute 'migrate_configure_max_speed'
 >>
 >> ___
 >> Mailing list:
 >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Re: [Openstack] Live migration failed with ceph storage

2018-07-25 Thread David Medberry
I think that nova --version is the version of the client (not of nova
itself).

I'm looking at OSAD 16.0.15 to see what it is pulling for nova.

If I see anything of interest, I'll reply.

On Wed, Jul 25, 2018 at 6:33 AM, Satish Patel  wrote:

> Thanks David,
>
> [root@ostack-compute-01 ~]# nova --version
> 9.1.2
>
> I am using Pike 16.0.15  (My deployment tool is openstack-ansible)
>
>
> What are my option here?
>
>
> On Wed, Jul 25, 2018 at 8:19 AM, David Medberry 
> wrote:
> > It's not clear what version of Nova you are running but perhaps it is
> badly
> > patched. The 16.x.x (Pike) release of Nova has no
> > "migrate_configure_max_speed" but as best I can tell you are running a
> > patched version of Nova Pike so it may be inconsistent.
> >
> > This parameter was introduced on 2017-08-24:
> > https://github.com/openstack/nova/commit/23446a9552b5be3b040278646149a0
> f481d0a005
> >
> > That parameter showed up in Queens (not Pike) initially.
> >
> > -d
> >
> > On Tue, Jul 24, 2018 at 11:22 PM, Satish Patel 
> wrote:
> >>
> >> I have openstack with ceph storage setup and trying to test Live
> >> migration but somehow it failed and showing following error
> >>
> >> nova.conf
> >>
> >> # ceph rbd support
> >> live_migration_uri = "qemu+tcp://%s/system"
> >> live_migration_tunnelled = True
> >>
> >> libvirtd.conf
> >>
> >> listen_tls = 0
> >> listen_tcp = 1
> >> unix_sock_group = "libvirt"
> >> unix_sock_ro_perms = "0777"
> >> unix_sock_rw_perms = "0770"
> >> auth_unix_ro = "none"
> >> auth_unix_rw = "none"
> >> auth_tcp = "none"
> >>
> >>
> >> This is the error i am getting, i google it but didn't find any
> reference
> >>
> >>
> >>
> >> ] [instance: 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] Live migration
> >> failed.: AttributeError: 'Guest' object has no attribute
> >> 'migrate_configure_max_speed'
> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] Traceback (most recent call
> >> last):
> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]   File
> >>
> >> "/openstack/venvs/nova-16.0.16/lib/python2.7/site-
> packages/nova/compute/manager.py",
> >> line 5580, in _do_live_migration
> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] block_migration,
> >> migrate_data)
> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]   File
> >>
> >> "/openstack/venvs/nova-16.0.16/lib/python2.7/site-
> packages/nova/virt/libvirt/driver.py",
> >> line 6436, in live_migration
> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] migrate_data)
> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]   File
> >>
> >> "/openstack/venvs/nova-16.0.16/lib/python2.7/site-
> packages/nova/virt/libvirt/driver.py",
> >> line 6944, in _live_migration
> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]
> >> guest.migrate_configure_max_speed(
> >> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
> >> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] AttributeError: 'Guest' object
> >> has no attribute 'migrate_configure_max_speed'
> >>
> >> ___
> >> Mailing list:
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> >> Post to : openstack@lists.openstack.org
> >> Unsubscribe :
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> >
> >
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [neutron][api][grapql] Proof of Concept

2018-07-25 Thread Ed Leafe
On Jun 6, 2018, at 7:35 PM, Gilles Dubreuil  wrote:
> 
> The branch is now available under feature/graphql on the neutron core 
> repository [1].

I wanted to follow up with you on this effort. I haven’t seen any activity on 
StoryBoard for several weeks now, and wanted to be sure that there was nothing 
blocking you that we could help with.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] PTL non-candidacy

2018-07-25 Thread Alex Schultz
Hey folks,

So it's been great fun and we've accomplished much over the last two
cycles but I believe it is time for me to step back and let someone
else do the PTLing.  I'm not going anywhere so I'll still be around to
focus on the simplification and improvements that TripleO needs going
forward. I look forward to continuing our efforts with everyone.

Thanks,
-Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Lots of slow tests timing out jobs

2018-07-25 Thread Matt Riedemann

On 7/25/2018 1:46 AM, Ghanshyam Mann wrote:

Yeah, there are many tests taking too long. I do not know the reason this
time, but the last time we audited the slow tests it was mainly due to ssh
failures. I have created a similar ethercalc [3] to collect the
time-consuming tests, with a rough average of the time each has taken over
the last 14 days from the health dashboard. There is no calculated average
time on o-h, so I did not take the exact average, just a round figure.

Maybe 14 days is too short a period to base the decision to mark them slow
on, but I think their average time over 3 months will be about the same.
Should we consider a 3-month period for those?

Based on the average time, I have voted (currently on the 14-day average)
on the ethercalc on which tests to mark as slow. I took the criterion of
>120 sec average time. Once more people have voted there, we can mark them
slow.

[3] https://ethercalc.openstack.org/dorupfz6s9qt


Thanks for this. I haven't gone through all of the tests in there yet, 
but I noticed (yesterday) that a couple of them were personality file 
compute API tests, which I thought was strange. Do we have any idea where 
the time is being spent there? I assume it must be something with ssh 
validation to try and read injected files off the guest. I need to dig 
into this one a bit more because, by default, file injection is disabled 
in the libvirt driver, so I'm not even sure how these are running (or 
really doing anything useful). Given we have deprecated personality 
files in the compute API [1], I would definitely mark those as slow tests 
so we can still run them but don't care about them as much.


[1] 
https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id52
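Marking a test as slow is just a matter of tagging it and having the jobs filter on the tag. The sketch below is self-contained and illustrative only: the `attr` decorator here is a minimal stand-in for tempest's test-attribute decorator, not tempest code.

```python
def attr(**tags):
    """Minimal stand-in for tempest's attribute decorator:
    attaches tags to a test function."""
    def decorator(func):
        func.__test_tags__ = tags
        return func
    return decorator

@attr(type='slow')
def test_personality_files():
    pass

def test_list_servers():
    pass

tests = [test_personality_files, test_list_servers]

# A regular job excludes tagged tests; a dedicated slow job selects
# only the tagged ones, so slow tests still run without blocking others.
slow = [t.__name__ for t in tests
        if getattr(t, '__test_tags__', {}).get('type') == 'slow']
fast = [t.__name__ for t in tests if t.__name__ not in slow]

print(slow)  # ['test_personality_files']
print(fast)  # ['test_list_servers']
```

This is why marking a test slow, rather than deleting it, keeps coverage while getting its runtime out of the main gate jobs.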


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Live migration failed with ceph storage

2018-07-25 Thread Satish Patel
Thanks David,

[root@ostack-compute-01 ~]# nova --version
9.1.2

I am using Pike 16.0.15  (My deployment tool is openstack-ansible)


What are my options here?


On Wed, Jul 25, 2018 at 8:19 AM, David Medberry  wrote:
> It's not clear what version of Nova you are running but perhaps it is badly
> patched. The 16.x.x (Pike) release of Nova has no
> "migrate_configure_max_speed" but as best I can tell you are running a
> patched version of Nova Pike so it may be inconsistent.
>
> This parameter was introduced on 2017-08-24:
> https://github.com/openstack/nova/commit/23446a9552b5be3b040278646149a0f481d0a005
>
> That parameter showed up in Queens (not Pike) initially.
>
> -d
>
> On Tue, Jul 24, 2018 at 11:22 PM, Satish Patel  wrote:
>>
>> I have openstack with ceph storage setup and trying to test Live
>> migration but somehow it failed and showing following error
>>
>> nova.conf
>>
>> # ceph rbd support
>> live_migration_uri = "qemu+tcp://%s/system"
>> live_migration_tunnelled = True
>>
>> libvirtd.conf
>>
>> listen_tls = 0
>> listen_tcp = 1
>> unix_sock_group = "libvirt"
>> unix_sock_ro_perms = "0777"
>> unix_sock_rw_perms = "0770"
>> auth_unix_ro = "none"
>> auth_unix_rw = "none"
>> auth_tcp = "none"
>>
>>
>> This is the error i am getting, i google it but didn't find any reference
>>
>>
>>
>> ] [instance: 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] Live migration
>> failed.: AttributeError: 'Guest' object has no attribute
>> 'migrate_configure_max_speed'
>> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] Traceback (most recent call
>> last):
>> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]   File
>>
>> "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py",
>> line 5580, in _do_live_migration
>> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] block_migration,
>> migrate_data)
>> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]   File
>>
>> "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
>> line 6436, in live_migration
>> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] migrate_data)
>> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]   File
>>
>> "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
>> line 6944, in _live_migration
>> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]
>> guest.migrate_configure_max_speed(
>> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
>> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] AttributeError: 'Guest' object
>> has no attribute 'migrate_configure_max_speed'
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Live migration failed with ceph storage

2018-07-25 Thread David Medberry
It's not clear what version of Nova you are running but perhaps it is badly
patched. The 16.x.x (Pike) release of Nova has no
"migrate_configure_max_speed" but as best I can tell you are running a
patched version of Nova Pike so it may be inconsistent.

This parameter was introduced on 2017-08-24:
https://github.com/openstack/nova/commit/23446a9552b5be3b040278646149a0f481d0a005

That parameter showed up in Queens (not Pike) initially.

-d

On Tue, Jul 24, 2018 at 11:22 PM, Satish Patel  wrote:

> I have openstack with ceph storage setup and trying to test Live
> migration but somehow it failed and showing following error
>
> nova.conf
>
> # ceph rbd support
> live_migration_uri = "qemu+tcp://%s/system"
> live_migration_tunnelled = True
>
> libvirtd.conf
>
> listen_tls = 0
> listen_tcp = 1
> unix_sock_group = "libvirt"
> unix_sock_ro_perms = "0777"
> unix_sock_rw_perms = "0770"
> auth_unix_ro = "none"
> auth_unix_rw = "none"
> auth_tcp = "none"
>
>
> This is the error i am getting, i google it but didn't find any reference
>
>
>
> ] [instance: 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] Live migration
> failed.: AttributeError: 'Guest' object has no attribute
> 'migrate_configure_max_speed'
> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] Traceback (most recent call
> last):
> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]   File
> "/openstack/venvs/nova-16.0.16/lib/python2.7/site-
> packages/nova/compute/manager.py",
> line 5580, in _do_live_migration
> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] block_migration,
> migrate_data)
> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]   File
> "/openstack/venvs/nova-16.0.16/lib/python2.7/site-
> packages/nova/virt/libvirt/driver.py",
> line 6436, in live_migration
> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] migrate_data)
> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]   File
> "/openstack/venvs/nova-16.0.16/lib/python2.7/site-
> packages/nova/virt/libvirt/driver.py",
> line 6944, in _live_migration
> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97]
> guest.migrate_configure_max_speed(
> 2018-07-25 01:00:59.214 9331 ERROR nova.compute.manager [instance:
> 2b92ca5b-e433-4ac7-8dc8-619c9523ba97] AttributeError: 'Guest' object
> has no attribute 'migrate_configure_max_speed'
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
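The mismatch David describes can be boiled down to a few lines. This is a self-contained illustration; the class and method names mirror nova's, but none of this is nova code.

```python
class PikeGuest(object):
    """Stands in for nova.virt.libvirt.guest.Guest as shipped in Pike:
    migrate_configure_max_speed does not exist yet."""

class QueensGuest(PikeGuest):
    """The Queens-era Guest grew the new method."""
    def migrate_configure_max_speed(self, bandwidth):
        return "max speed configured: %s" % bandwidth

def live_migration(guest):
    """A driver.py patched with the Queens change calls the method
    unconditionally, so a Pike Guest raises exactly the AttributeError
    seen in the traceback above."""
    try:
        return guest.migrate_configure_max_speed(0)
    except AttributeError as exc:
        return "failed: %s" % exc

print(live_migration(QueensGuest()))  # max speed configured: 0
print(live_migration(PikeGuest()))    # failed: 'PikeGuest' object has no attribute 'migrate_configure_max_speed'
```

In other words, the fix is to make the installed driver.py and guest.py come from the same release, not to patch around the missing attribute.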


[openstack-dev] testing 123

2018-07-25 Thread Moshe Levi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet-openstack] [announce] puppet-openstack now has Ubuntu 18.04 Bionic support

2018-07-25 Thread Tobias Urdin
Hello Stackers,

Just a heads-up that the puppet-openstack project will, as of the Rocky
release, support Ubuntu 18.04 Bionic, and as of yesterday/today we are
testing that in the infra Zuul CI.

As a step toward adding this support we also introduced support for the
Ceph Mimic release in the puppet-ceph module.
Because of upstream packaging, Ceph Mimic cannot be used on Debian 9; note
also that Ceph Luminous cannot be used on Ubuntu 18.04 Bionic with the
upstream Ceph community packages (Canonical packages Ceph in the Bionic
main repo).

I would like to thank everybody contributing to this effort and everyone
involved in the puppet-openstack project who has reviewed all the changes.
A special thanks to all the infra people who have helped out a bunch with
mirrors, Zuul, and all the necessary bits required to work on this.

Best regards
Tobias

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] PTL candidacy for the Stein Cycle

2018-07-25 Thread Juan Antonio Osorio
Hello folks!

I'd like to nominate myself for the TripleO PTL role for the Stein cycle.

Alex has done a great job as PTL: the project is progressing nicely, with
many new, exciting features and uses for TripleO coming to fruition
recently. It's a great time for the project. But there's more work to be
done.

I have served the TripleO community as a core reviewer for some years now
and, more recently, by driving the Security Squad. This project has been a
great learning experience for me, both technically (I got to learn even
more of OpenStack) and community-wise. Now I wish to serve the community
further by bringing my experience into the PTL role. While I have not
served as PTL for a project before, I'm eager to learn the ropes and help
improve the community that has been so influential on me.

For Stein, I would like to focus on:

* Increasing TripleO's usage in the testing of other projects
  Now that TripleO can deploy a standalone OpenStack installation, I hope it
  can be leveraged to add value to other projects' testing efforts. I hope
  this would subsequently help increase TripleO's testing coverage, and
  reduce the footprint required for full-deployment testing.

* Technical debt & simplification
  We've been working on simplifying the deployment story and battling
  technical debt -- let's keep this momentum going. We've been running
  (mostly) fully containerized environments for a couple of releases now;
  I hope we can reduce the number of stacks we create, which would in turn
  simplify the project structure (at least on the t-h-t side). We should
  also aim for as much convergence as we can achieve (e.g. CLI and UI
  workflows).

* CI and testing
  The project has made great progress regarding CI and testing; let's keep
  this moving forward and give developers easier ways to bring up testing
  environments to work on and to reproduce CI jobs.

Thanks!

Juan Antonio Osorio Robles
IRC: jaosorior


-- 
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com


[openstack-dev] [cyborg]Stepping down (but stick around)

2018-07-25 Thread Zhipeng Huang
Hi Team,

For those of you who have been around since the "Nomad" days, you know
what a terrific journey we have taken together. A pure idea, through
community discussion and development, morphed into a project that is
rapidly growing and gaining industry attention.

It has been my privilege to serve as the Cyborg project's PTL for two
cycles, and I hope that despite all of my inefficiencies and occasional
shortcomings as a tech lead, I did help the project grow both in
development and governance (thank you for putting up with me btw :) ). We
got constant help from the Nova team, release team, TC, Scientific SIG
and other teams, and we could not be where we are now without this
hand-holding.

Although we have suffered a considerably high core reviewer fade-out
rate, we keep having strong new core reviewers coming in. This is what
makes me the most proud and happy, and also why I'm comfortable with the
decision of non-candidacy for Stein. A great open source project should
do without any specific leader and keep growing organically. This is what
I have been hoping for Cyborg to achieve.

Hence I want to nominate Li Liu as a candidate for PTL of the Cyborg
project in the Stein cycle. Li Liu has been involved in Cyborg
development since a very early stage and has contributed a lot of
important work: the deployable DB design, metadata standardization, FPGA
programming support, etc. As an expert in both FPGA synthesis and
software development for OpenStack, I think Li Liu, or Uncle Li as we
nicknamed him, is the best choice we could have for the S release.

I would like to emphasize that this does not mean I am done with the
Cyborg project; on the contrary, I will be spending more time building a
great ecosystem for the Cyborg project. We have four target areas (AI,
NFV, Edge, HPC) and there is an even more amazing journey in front of us.

Keep up the good work folks, and let's work even harder.

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


Re: [openstack-dev] [Openstack] [nova] [os-vif] [vif_plug_ovs] Support for OVS DB tcp socket communication.

2018-07-25 Thread Sahid Orentino Ferdjaoui
On Wed, Jul 25, 2018 at 03:22:27PM +0530, pranab boruah wrote:
> Hello folks,
> 
> I have filed a bug in os-vif:
> https://bugs.launchpad.net/os-vif/+bug/1778724 and
> working on a patch. Any feedback/comments from you guys would be extremely
> helpful.
> 
> Bug details:
> 
> OVS DB server has the feature of listening over a TCP socket for
> connections rather than just on the unix domain socket. [0]
> 
> If the OVS DB server is listening over a TCP socket, then the ovs-vsctl
> commands should include the ovsdb_connection parameter:
> # ovs-vsctl --db=tcp:IP:PORT ...
> eg:
> # ovs-vsctl --db=tcp:169.254.1.1:6640 add-port br-int eth0
> 
> Neutron supports running the ovs-vsctl commands with the ovsdb_connection
> parameter. The ovsdb_connection parameter is configured in
> openvswitch_agent.ini file. [1]
> 
> While adding a vif to the ovs bridge(br-int), Nova(os-vif) invokes the
> ovs-vsctl command. Today, there is no support to pass the ovsdb_connection
> parameter while invoking the ovs-vsctl command. The support should be
> added. This would enhance the functionality of os-vif, since it would
> support a scenario when OVS DB server is listening on a TCP socket
> connection and on functional parity with Neutron.
> 
> [0] http://www.openvswitch.org/support/dist-docs/ovsdb-server.1.html
> [1] https://docs.openstack.org/neutron/pike/configuration
> /openvswitch-agent.html
> TIA,
> Pranab

Hello Pranab,

Makes sense to me. This is really related to the OVS plugin that we
are maintaining. I guess you will have to add a new config option for
it, as we have with 'network_device_mtu' and 'ovs_vsctl_timeout'.

Don't hesitate to add me as a reviewer when the patch is ready.

Thanks,
s.
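As a rough illustration (this is not actual os-vif code; the helper name is
hypothetical, and ovsdb_connection mirrors Neutron's option of the same
name), the change boils down to conditionally injecting --db into the
ovs-vsctl command line:

```python
def build_ovs_vsctl_cmd(args, ovsdb_connection=None, timeout=120):
    """Assemble an ovs-vsctl command line, optionally pointing it at a
    remote OVSDB server instead of the default unix domain socket."""
    cmd = ['ovs-vsctl', '--timeout=%d' % timeout]
    if ovsdb_connection:
        # e.g. 'tcp:169.254.1.1:6640'
        cmd.append('--db=%s' % ovsdb_connection)
    return cmd + list(args)

# Default behaviour is unchanged when no connection is configured:
print(build_ovs_vsctl_cmd(['add-port', 'br-int', 'eth0']))
# With a TCP endpoint configured, --db is injected:
print(build_ovs_vsctl_cmd(['add-port', 'br-int', 'eth0'],
                          ovsdb_connection='tcp:169.254.1.1:6640'))
```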





Re: [openstack-dev] [kolla] ptl non candidacy

2018-07-25 Thread Mark Goddard
Thanks for your work as PTL during the Rocky cycle Jeffrey. Hope you are
able to stay part of the community.

Cheers,
Mark

On 25 July 2018 at 04:48, Jeffrey Zhang  wrote:

> Hi all,
>
> I just want to say I am not running for PTL for the Stein cycle. I have
> been involved in the Kolla project for almost 3 years, and recently my
> work has changed a little, so I may not have much time for the community
> in the future. Kolla is a great project and the community is also
> awesome. I would encourage everyone in the community to consider
> running.
>
> Thanks for your support :D.
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>


[openstack-dev] [nova] API updates week 19-25

2018-07-25 Thread Ghanshyam Mann
Hi All, 

Please find the Nova API highlights of this week. 

Weekly Office Hour: 
=== 

What we discussed this week: 
- Discussion on priority BP and remaining reviews on those. 
- Discussed keypair quota usage bug. 

Planned Features : 
== 
Below are the API-related features for the Rocky cycle. The Nova API
subteam will review these and give regular feedback. If anything is
missing, feel free to add it to the etherpad:
https://etherpad.openstack.org/p/rocky-nova-priorities-tracking

1. Servers Ips non-unique network names : 
- 
https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names
 
- Spec Merged 
- 
https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged)
 
- Weekly Progress: I did not start this due to other work. It cannot make
it into Rocky; we will plan it for early Stein.

2. Abort live migration in queued state: 
- 
https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status
 
- 
https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+(status:open+OR+status:merged)
 
- Weekly Progress: COMPLETED

3. Complex anti-affinity policies: 
- https://blueprints.launchpad.net/nova/+spec/complex-anti-affinity-policies 
- 
https://review.openstack.org/#/q/topic:bp/complex-anti-affinity-policies+(status:open+OR+status:merged)
 
- Weekly Progress: COMPLETED

4. Volume multiattach enhancements: 
- https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements 
- 
https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged)
 
- Weekly Progress: No progress. 

5. API Extensions merge work 
- https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-rocky 
- 
https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-rocky
 
- Weekly Progress: part 1 (schema merge) and part 2 (server_create merge)
have been merged for Rocky. One last patch removing the placeholder
method is in the gate.
Part 3 (view builder merge) cannot make it into Rocky (7 patches up for
review + 5 more to push); this work is postponed to Stein.

6. Handling a down cell 
- https://blueprints.launchpad.net/nova/+spec/handling-down-cell 
- 
https://review.openstack.org/#/q/topic:bp/handling-down-cell+(status:open+OR+status:merged)
 
- Weekly Progress: It will be difficult to make it into Rocky. Matt has
an open comment on the patch about changing the service list along with
the server list in a single microversion, which makes sense.

Bugs: 
 
Discussed the keypair quota bug; sent a separate mailing list thread for
more feedback [1].

This week Bug Progress:   
https://etherpad.openstack.org/p/nova-api-weekly-bug-report 

Critical: 0->0 
High importance: 3->2
By Status: 
New: 0->0 
Confirmed/Triage: 29-> 30 
In-progress: 36->34
Incomplete: 4->4 
= 
Total: 69->68

NOTE: There might be some bugs which are not tagged 'api' or 'api-ref';
those are not in the above list. Please tag such bugs so that we can keep
an eye on them.


[1] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132459.html

-gmann 








[openstack-dev] [Openstack] [nova] [os-vif] [vif_plug_ovs] Support for OVS DB tcp socket communication.

2018-07-25 Thread pranab boruah
Hello folks,

I have filed a bug in os-vif:
https://bugs.launchpad.net/os-vif/+bug/1778724 and
working on a patch. Any feedback/comments from you guys would be extremely
helpful.

Bug details:

OVS DB server has the feature of listening over a TCP socket for
connections rather than just on the unix domain socket. [0]

If the OVS DB server is listening over a TCP socket, then the ovs-vsctl
commands should include the ovsdb_connection parameter:
# ovs-vsctl --db=tcp:IP:PORT ...
eg:
# ovs-vsctl --db=tcp:169.254.1.1:6640 add-port br-int eth0

Neutron supports running the ovs-vsctl commands with the ovsdb_connection
parameter. The ovsdb_connection parameter is configured in
openvswitch_agent.ini file. [1]

While adding a VIF to the OVS bridge (br-int), Nova (os-vif) invokes the
ovs-vsctl command. Today, there is no support for passing the
ovsdb_connection parameter when invoking ovs-vsctl. This support should
be added. It would enhance the functionality of os-vif, since os-vif
would then support the scenario where the OVS DB server is listening on a
TCP socket, and reach functional parity with Neutron.

[0] http://www.openvswitch.org/support/dist-docs/ovsdb-server.1.html
[1] https://docs.openstack.org/neutron/pike/configuration/openvswitch-agent.html
TIA,
Pranab


[openstack-dev] [nova] keypair quota usage info for user

2018-07-25 Thread Ghanshyam Mann
Hi All,

During today's API office hour, we discussed the keypair quota usage bug
(Newton) [1]. The key_pair 'in_use' quota is always 0, even when
requested per user, because it is always set to 0 [2].
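For illustration only (a hypothetical helper, not actual Nova code), the
fix amounts to counting the user's keypairs instead of hardcoding zero:

```python
def keypair_usage(count_keypairs, user_id, limit=100):
    """Report keypair quota usage for a user.

    count_keypairs is a callable returning the user's keypair count;
    the current code effectively reports in_use=0 unconditionally.
    """
    return {'key_pairs': {'limit': limit,
                          'in_use': count_keypairs(user_id),
                          'reserved': 0}}

# A user owning 3 keypairs would then show in_use=3 instead of 0:
print(keypair_usage(lambda uid: 3, 'some-user-id'))
```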

From checking the history and the review discussion on [3], it seems it
has been like that from the start. The key_pair quota is counted when
actually creating the keypair, but it is not shown in the API's 'in_use'
field. Vishakha (the assignee of this bug) is currently planning to work
on it, and before that we have a few queries:

1. Is it OK to show keypair usage info via the API? Was there an original
rationale for not doing so, or was it just like that from the start?

2. Because this change will show the keypair usage quota information in
the API's existing 'in_use' field, it is an API behaviour change (not a
backward-incompatible interface signature change) which can cause interop
issues. Should we bump the microversion for this change?

[1] https://bugs.launchpad.net/nova/+bug/1644457 
[2] 
https://github.com/openstack/nova/blob/bf497cc47497d3a5603bf60de652054ac5ae1993/nova/quota.py#L189
 
[3] https://review.openstack.org/#/c/446239/

-gmann




[openstack-dev] [I18n] [PTL] [Election] Candidacy for I18n PTL in Stein Cycle

2018-07-25 Thread Frank Kloeker

[posted & mailed] https://review.openstack.org/#/c/585663/

This is my announcement for re-candidacy as I18n PTL in Stein Cycle.

At first a quick review what we've done in the last cycle:

1. Zanata upgrade: done. As one of the first communities, we're running
Zanata Release 4 in production. A big success and a great user
experience for all.
2. Translation Check Site: done. We were able to win Deutsche Telekom as
a sponsor for resources, so we are now able to host our own Translation
Check Site. From my point of view this solves several problems we had
in the past, and now we can check translation strings against our
requirements very quickly.
3. Attract more people to the team. I had great experiences during the
OpenStack Days in Krakow and Budapest. I shared information about what
our team is doing and how I18n works in the OpenStack Community.
I got many inspirations and hopefully some new team members :-)

What I like most is getting things done. In this cycle we should get
project doc translations ready. We already started with some projects
as a proof of concept and we're still working on it. Getting that done,
involving more projects, and involving more project team members in
translations is the biggest challenge for me in this cycle.
On the other hand we have the Edge Computing whitepaper and the Container
whitepaper on our translation plan, with a new technology in use to
publish the translation results very quickly on the web page.

Besides that, we have the OpenStack Summit Berlin in this cycle. For me
it is a special event, since I live and work in Berlin. I expect a lot
of collaboration and knowledge sharing with I18n and the OpenStack
Community in general.

That's my plan for Stein, I'm looking forward to your vote.

Frank

Email: eu...@arcor.de
IRC: eumel8
Twitter: eumel_8

OpenStack Profile:
https://www.openstack.org/community/members/profile/45058/frank-kloeker




Re: [openstack-dev] Lots of slow tests timing out jobs

2018-07-25 Thread jean-philippe
On Wednesday, July 25, 2018 08:46 CEST, Ghanshyam Mann 
 wrote: 
 
>   On Wed, 25 Jul 2018 05:15:53 +0900 Matt Riedemann  
> wrote  
>  > While going through our uncategorized gate failures [1] I found that we 
>  > have a lot of jobs failing (161 in 7 days) due to the tempest run timing 
>  > out [2]. I originally thought it was just the networking scenario tests, 
>  > but I was able to identify a handful of API tests that are also taking 
>  > nearly 3 minutes each, which seems like they should be moved to scenario 
>  > tests and/or marked slow so they can be run in a dedicated tempest-slow 
> job.
>  > 
>  > I'm not sure how to get the history on the longest-running tests on 
>  > average to determine where to start drilling down on the worst 
>  > offenders, but it seems like an audit is in order.
> 
> yeah, there are many tests taking too long time. I do not know the reason 
> this time but last time we did audit for slow tests was mainly due to ssh 
> failure. 
> I have created the similar ethercalc [3] to collect time taking tests and 
> then round figure of their avg time taken since last 14 days from health 
> dashboard. Yes, there is no calculated avg time on o-h so I did not take 
> exact avg time its round figure. 
> 
> May be 14 days  is too less to take decision to mark them slow but i think 
> their avg time since 3 months will be same. should we consider 3 month time 
> period for those ?
> 
> As per avg time, I have voted (currently based on 14 days avg) on ethercalc 
> which all test to mark as slow. I taken the criteria of >120 sec avg time.  
> Once we have more and more people votes there we can mark them slow. 
> 
> [3] https://ethercalc.openstack.org/dorupfz6s9qt
> 
> -gmann
> 

We have a similar observation in openstack-ansible. It is painful.
Recently a change that used to pass the gates without rechecks (though
close to the timeout) took 14 rechecks, all due to timeouts, to get in.

In OSA, we will be starting a project to refactor our testing to make it
faster, but I'd like to hear about your findings :)

Thanks,
Jean-Philippe (evrardjp)

>  > 
>  > [1] http://status.openstack.org/elastic-recheck/data/integrated_gate.html
>  > [2] https://bugs.launchpad.net/tempest/+bug/1783405
>  > 
>  > -- 
>  > 
>  > Thanks,
>  > 
>  > Matt
>  > 




[openstack-dev] [openstack-ansible] PTL non-candidacy

2018-07-25 Thread jean-philippe
Hello everyone,

If you were not at the previous OpenStack-Ansible meeting*, I'd like to inform 
you I will not be running for PTL of OSA.

It's been a pleasure being the PTL of OSA for the last 2 cycles.
We have improved in many ways: testing, stability, speed, features, 
documentation, user friendliness...
I am glad of the work we achieved, and I think it's time for a fresh view with 
a new PTL.

Thanks for being an awesome community.
Jean-Philippe Evrard (evrardjp) 


*Please join! 4PM UTC in #openstack-ansible!




Re: [openstack-dev] Lots of slow tests timing out jobs

2018-07-25 Thread Ghanshyam Mann
  On Wed, 25 Jul 2018 05:15:53 +0900 Matt Riedemann  
wrote  
 > While going through our uncategorized gate failures [1] I found that we 
 > have a lot of jobs failing (161 in 7 days) due to the tempest run timing 
 > out [2]. I originally thought it was just the networking scenario tests, 
 > but I was able to identify a handful of API tests that are also taking 
 > nearly 3 minutes each, which seems like they should be moved to scenario 
 > tests and/or marked slow so they can be run in a dedicated tempest-slow job.
 > 
 > I'm not sure how to get the history on the longest-running tests on 
 > average to determine where to start drilling down on the worst 
 > offenders, but it seems like an audit is in order.

Yeah, there are many tests taking too long. I do not know the reason this
time, but the last time we audited slow tests it was mainly due to SSH
failures.
I have created a similar ethercalc [3] collecting the time-consuming
tests along with a rough figure of their average run time over the last
14 days, taken from the health dashboard. There is no calculated average
time on openstack-health, so these are round figures rather than exact
averages.

Maybe 14 days is too short a window to decide to mark them slow, but I
think their average time over 3 months will be the same. Should we
consider a 3-month period instead?

Based on the average times (currently the 14-day averages), I have voted
on the ethercalc for which tests to mark as slow, using >120 sec average
time as the criterion. Once more people have voted there, we can mark
them slow.

[3] https://ethercalc.openstack.org/dorupfz6s9qt
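For reference, marking a test as slow in Tempest is done by tagging it
with the attr decorator from tempest.lib.decorators; the minimal
stand-in below (not the real implementation) sketches the tagging
mechanism that lets the 'slow' type be selected into a dedicated job:

```python
def attr(**kwargs):
    """Minimal stand-in for tempest.lib.decorators.attr: it tags a test
    function with a type attribute that the test runner can filter on."""
    def decorator(f):
        if 'type' in kwargs:
            f.__testtools_attrs = {kwargs['type']}
        return f
    return decorator

@attr(type='slow')
def test_something_expensive():
    pass

print(test_something_expensive.__testtools_attrs)  # {'slow'}
```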

-gmann

 > 
 > [1] http://status.openstack.org/elastic-recheck/data/integrated_gate.html
 > [2] https://bugs.launchpad.net/tempest/+bug/1783405
 > 
 > -- 
 > 
 > Thanks,
 > 
 > Matt
 > 


