Re: [openstack-dev] [Forum] Moderators needed!

2017-05-01 Thread Sukhdev Kapur
If nobody has claimed it yet, I will be happy to moderate the "Making Neutron
easy" session -
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18800/making-neutron-easy-for-people-who-want-basic-networking


I am already a presenter in two sessions. Let me know if you want me to do
this so that I can plan accordingly.

Thanks
-Sukhdev


On Fri, Apr 28, 2017 at 5:22 AM, Shamail Tahir  wrote:

> Hi everyone,
>
> Most of the proposed/accepted Forum sessions currently have moderators but
> there are six sessions that do not have a confirmed moderator yet. Please
> look at the list below and let us know if you would be willing to help
> moderate any of these sessions.
>
> The topics look really interesting but it will be difficult to keep the
> sessions on the schedule if there is not an assigned moderator. We look
> forward to seeing you at the Summit/Forum in Boston soon!
>
> Achieving Resiliency at Scales of 1000+
> 
> Feedback from users for I18n & translation - important part?
> 
> Neutron Pain Points
> 
> Making Neutron easy for people who want basic networking
> 
> High Availability in OpenStack
> 
> Cloud-Native Design/Refactoring across OpenStack
> 
>
>
> Thanks,
> Doug, Emilien, Melvin, Mike, Shamail & Tom
> Forum Scheduling Committee
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][blazar][scientific] advanced instance scheduling: reservations and preemption - Forum session

2017-05-01 Thread Sylvain Bauza
You can also count on me to discuss what Blazar was previously
and how Nova could help it ;-)

-Sylvain

On 1 May 2017 at 21:53, "Jay Pipes"  wrote:

> On 05/01/2017 03:39 PM, Blair Bethwaite wrote:
>
>> Hi all,
>>
>> Following up to the recent thread "[Openstack-operators] [scientific]
>> Resource reservation requirements (Blazar) - Forum session" and adding
>> openstack-dev.
>>
>> This is now a confirmed forum session
>> (https://www.openstack.org/summit/boston-2017/summit-schedul
>> e/events/18781/advanced-instance-scheduling-reservations-and-preemption)
>> to cover any advanced scheduling use-cases people want to talk about,
>> but in particular focusing on reservations and preemption as they are
>> big priorities particularly for scientific deployers.
>>
> >
>
>> Etherpad draft is
>> https://etherpad.openstack.org/p/BOS-forum-advanced-instance-scheduling,
>> please attend and contribute! In particular I'd appreciate background
>> spec and review links added to the etherpad.
>>
>> Jay, would you be able and interested to moderate this from the Nova side?
>>
>
> Masahito Muroi is currently marked as the moderator, but I will indeed be
> there and happy to assist Masahito in moderating, no problem.
>
> Best,
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Something seems to have regressed the live migration job

2017-05-01 Thread Matt Riedemann
I don't have the time to dig into this tonight and nothing is jumping 
out at me for obvious regressions in nova, tempest or devstack, but it 
seems something has regressed the live migration job in the last 12 hours:


https://bugs.launchpad.net/nova/+bug/1687511

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Proposal a change of Zun core team

2017-05-01 Thread Kevin Zhao
+1 for me,

Thanks

On 29 April 2017 at 12:05, Hongbin Lu  wrote:

> Hi all,
>
>
>
> I propose a change to Zun’s core team membership as below:
>
>
>
> + Feng Shengqin (feng-shengqin)
>
> - Wang Feilong (flwang)
>
>
>
> Feng Shengqin has contributed a lot to the Zun project. Her contribution
> includes BPs, bug fixes, and reviews. In particular, she completed an
> essential BP and had a lot of accepted commits in Zun’s repositories. I
> think she is qualified for the core reviewer position. I would like to
> thank Wang Feilong for his interest in joining the team when the project was
> founded. I believe we are always friends regardless of his core membership.
>
>
>
> By convention, we require a minimum of 4 +1 votes from Zun core reviewers
> within a 1 week voting window (consider this proposal as a +1 vote from
> me). A vote of -1 is a veto. If we cannot get enough votes or there is a
> veto vote prior to the end of the voting window, this proposal is rejected.
>
>
>
> Best regards,
>
> Hongbin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Proposal a change of Zun core team

2017-05-01 Thread Qiming Teng
+1

Qiming


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Boston Summit Forum and working session Log messages (please comment and participate)

2017-05-01 Thread Rochelle Grober
Hey folks!

I just wanted to raise your awareness of a forum session and a working session
on Log Messages that are happening during the Boston Summit.

Here is the link to the etherpad for these sessions:

https://etherpad.openstack.org/p/BOS-forum-log-messages

We’ve been going around in circles on some details of the log messages for
years, and Doug Hellmann has graciously stepped up to try and wrestle this
beast into submission.  So, besides giving him a warm round of applause, let’s
give him (and the small cadre of folks working with him on this) our respectful
interactions, comments, concerns, high fives, etc., and turn up to the sessions
to get this spec implementable in the here and now.

Please add your comments, topics, pre forum discussions, etc. on the etherpad 
so that we remember to review and discuss them in the sessions.


Thanks and see you soon!
--Rocky


As reference, here is Doug’s email [1] advertising the spec:
I am looking for some feedback on two new proposals to add IDs to
log messages.

The tl;dr is that we’ve been talking about adding unique IDs to log
messages for 5 years. I myself am still not 100% convinced the idea
is useful, but I would like us to either do it or definitively say
we won't ever do it so that we can stop talking about it and consider
some other improvements to logging instead.

Based on early feedback from a small group who have been involved
in the conversations about this in the past, I have drafted two new
specs with different approaches that try to avoid the pitfalls that
blocked the earlier specs:

1. A cross-project spec to add logging message IDs in (what I hope
   is) a less onerous way than has been proposed before:
   https://review.openstack.org/460110

2. An Oslo spec to add some features to oslo.log to try to achieve the
   goals of the original proposal without having to assign message IDs:
   https://review.openstack.org/460112

To understand the full history and context, you’ll want to read the
blog post I wrote last week [1].  The reference lists of the specs
also point to some older specs with different proposals that have
failed to gain traction in the past.

I expect all three proposals to be up for discussion during the
logging working group session at the summit/forum, so if you have
any interest in the topic please plan to attend [2].

Thanks!
Doug

[1] 
https://doughellmann.com/blog/2017/04/20/lessons-learned-from-working-on-large-scale-cross-project-initiatives-in-openstack/
[2] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18507/logging-working-group-working-session


[1] http://lists.openstack.org/pipermail/openstack-dev/2017-April/115958.html


华为技术有限公司 Huawei Technologies Co., Ltd.
Rochelle Grober
Sr. Staff Architect, Open Source
Office Phone: 408-330-5472
Email: rochelle.gro...@huawei.com

This e-mail and its attachments contain confidential information from HUAWEI,
which is intended only for the person or entity whose address is listed above.
Any use of the information contained herein in any way (including, but not
limited to, total or partial disclosure, reproduction, or dissemination) by
persons other than the intended recipient(s) is prohibited. If you receive this
e-mail in error, please notify the sender by phone or email immediately and
delete it!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [os-upstream-institute] Order of Slides

2017-05-01 Thread Amy Marrich
I was going over the 2 sections I'm presenting this weekend and noticed
that in

https://docs.openstack.org/upstream-training/workflow-training-contribution-process.html

we talk about submitting and taking bugs, doing reviews, and pushing up patch
sets, as it's the overview. But in the next section

https://docs.openstack.org/upstream-training/workflow-reg-and-accounts.html

we sign up for the actual accounts.

I'm not sure whether in the future we might want to change the order of these
sections so that folks can possibly work a little ahead, or if we want to
keep the order to prevent that. Either way we can always reference the other
section, in this case with 'In the next section you'll be making the
account to do this', or, if we switched the order, 'Using the username from
the last section'.

Amy (spotz)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle]poll for new weekly meeting time slot

2017-05-01 Thread joehuang
Hello,

According to the poll, most contributors prefer the option "Wed  UTC 01:00, 
Beijing 9:00 AM, PDT(-1 day) 6:00 PM". I'll submit a patch to update the weekly 
meeting time slot.

Best Regards
Chaoyi Huang (joehuang)

From: joehuang
Sent: 24 April 2017 16:12
To: openstack-dev
Subject: [openstack-dev][tricircle]poll for new weekly meeting time slot

Hello,

We'd like to reschedule our weekly meeting time according to our discussion. There are
four options after some offline communication:

Wed  UTC 01:00, Beijing 9:00 AM, PDT(-1 day) 6:00 PM
Wed  UTC 13:00, Beijing 9:00 PM, PDT 6:00 AM
Thu  UTC 01:00, Beijing 9:00 AM, PDT(-1 day) 6:00 PM
Fri  UTC 01:00, Beijing 9:00 AM, PDT(-1 day) 6:00 PM


Please vote in the doodle poll: http://doodle.com/poll/ypd2xpqppzaqek5u thanks 
a lot.

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][osc] Openstack client, unit tests & creating default empty values

2017-05-01 Thread Van Leeuwen, Robert
Hello,

The unit test fakes in network/v2/fakes.py create empty dictionaries for the
port, e.g. for allowed_address_pairs:

    port_attrs = {
        'admin_state_up': True,
        'allowed_address_pairs': [{}],
        'binding:host_id': 'binding-host-id-' + uuid.uuid4().hex,
        'binding:profile': {},

In practice this value is actually None if someone (or nova, for that matter)
creates the port without specifying anything.

This allowed at least one bug, which I hit; it tracebacks with "'NoneType'
object is not iterable":
https://review.openstack.org/#/c/461354/

I wonder how the unit tests should be modified to actually catch these things.
I can modify fakes.py to exclude the 'allowed_address_pairs': [{}] default.
However, these issues might exist in more places, so it might require a more
generic approach.
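
As a rough illustration of the kind of defensive handling and test coverage
being discussed (a sketch only; the helper and test below are simplified
stand-ins, not the actual osc code):

    import unittest


    def format_allowed_address_pairs(port_attrs):
        # Defensive: treat a missing/None value the same as an empty list.
        pairs = port_attrs.get('allowed_address_pairs') or []
        return [pair.get('ip_address') for pair in pairs]


    class TestPortFormatting(unittest.TestCase):
        def test_allowed_address_pairs_none(self):
            # This is what a port created without the attribute really
            # looks like, as opposed to the [{}] default in fakes.py.
            fake_port = {'admin_state_up': True, 'allowed_address_pairs': None}
            self.assertEqual([], format_allowed_address_pairs(fake_port))


    if __name__ == '__main__':
        unittest.main()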

What is the opinion on this?

Note that:
- I did not test against trunk neutron, so maybe trunk behavior is different and
always returns an empty dict?
- neutron cli used to work without this issue

Thx,
Robert van Leeuwen


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] The next two IRC meetings are SKIPPED

2017-05-01 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi,

Due to the Boston summit and the fact that most of the Vitrage contributors are now
busy preparing for it, we will skip the IRC meetings this week and next week. We 
will meet again on Wednesday, May 17 at 8:00 UTC.

Thanks,
Ifat.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Adding Ying Zuo to core team

2017-05-01 Thread Rob Cresswell (rcresswe)
Hey everyone,

I’m adding Ying Zuo to the Horizon Core team. She’s been contributing many 
great patches to the code base driven by operator experience, as well as 
providing solid reviews. Welcome to the team!

Rob


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Adding Ying Zuo to core team

2017-05-01 Thread David Lyle
Welcome Ying Zuo!

On Mon, May 1, 2017 at 5:19 AM, Rob Cresswell (rcresswe)
 wrote:
> Hey everyone,
>
> I’m adding Ying Zuo to the Horizon Core team. She’s been contributing many 
> great patches to the code base driven by operator experience, as well as 
> providing solid reviews. Welcome to the team!
>
> Rob
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Removal of puppet and ruby from nodepool images

2017-05-01 Thread Paul Belanger
On Mon, May 01, 2017 at 12:11:14PM -0400, Paul Belanger wrote:
> On Mon, May 01, 2017 at 09:59:01AM -0400, Paul Belanger wrote:
> > On Thu, Apr 27, 2017 at 07:39:00PM -0400, Paul Belanger wrote:
> > > On Wed, Apr 26, 2017 at 12:17:03PM -0400, Paul Belanger wrote:
> > > > Greetings,
> > > > 
> > > > We, openstack-infra, are on the final steps of removing puppet (and ruby)
> > > > from the images our jobs run on in nodepool.  At this point, I think we
> > > > are confident we shouldn't break any projects; however, I wanted to send
> > > > this email just to keep everybody up to date.
> > > > 
> > > > If you do depend on puppet or ruby for your project jobs, please make sure
> > > > to update your in-tree bindep.txt file and express the dependency.
> > > > 
> > > > If you do have a problem, please join us in #openstack-infra so we can help.
> > > > 
> > > As a heads up, we are rolling out another set of changes today. This is the
> > > final step needed to stop managing our disk images in nodepool with puppet.
> > > 
> > > As a result, jobs should now be setting up their SSH known_hosts files as
> > > needed. For example:
> > > 
> > >   ssh-keyscan  >> ~/.ssh/known_hosts
> > > 
> > > Or if you'd like to continue with disabled host checking:
> > > 
> > >   ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
> > > 
> > And today was the day we approved the change[1] \o/
> > 
> > We plan on starting rebuilds in the next 30 minutes; it will take a few hours
> > to build / upload everything.  In the meantime, if you start to see random
> > failures, please join us in #openstack-infra to troubleshoot.
> > 
> > We'll be doing our best to monitor http://status.openstack.org/zuul and
> > http://status.openstack.org/elastic-recheck/ to find any affected jobs.
> > 
> > -PB
> > 
> > [1] https://review.openstack.org/#/c/460728/
> > 
> The centos-7 and fedora-25 images are live now. We are moving forward with
> ubuntu-trusty and ubuntu-xenial next.
> 
And the ubuntu-trusty and ubuntu-xenial images are online now.  We'll continue to
watch jobs to see if any issues are found.

Moving forward, if your job requires puppet or ruby, you'll now need to use the
bindep.txt file to include these dependencies.

-PB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Blair Bethwaite
On 29 April 2017 at 01:46, Mike Dorman  wrote:
> I don’t disagree with you that the client side choose-a-server-at-random is 
> not a great load balancer.  (But isn’t this roughly the same thing that 
> oslo-messaging does when we give it a list of RMQ servers?)  For us it’s more 
> about the failure handling if one is down than it is about actually equally 
> distributing the load.

Maybe not great, but still better than making operators deploy (often
complex) full-featured external LBs when they really just want
*enough* redundancy. In many cases this seems to just create pets in
the control plane. I think it'd be useful if all OpenStack APIs and
their clients actively handled this poor-man's HA without having to
resort to haproxy etc, or e.g., assuming operators own the DNS.
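
As a rough illustration of the poor-man's HA pattern being discussed (not
nova's actual implementation; the endpoint list and retry count below are
made up):

    import random

    import requests

    # Made-up endpoint list, e.g. parsed from a [glance]api_servers-style option.
    API_SERVERS = ['http://glance1:9292', 'http://glance2:9292',
                   'http://glance3:9292']


    def get_with_failover(path, attempts=3):
        # Pick servers in random order and fall back to the next on failure,
        # so a single dead endpoint is skipped instead of surfacing to the user.
        servers = random.sample(API_SERVERS, k=min(attempts, len(API_SERVERS)))
        last_error = None
        for server in servers:
            try:
                return requests.get(server + path, timeout=5)
            except requests.RequestException as exc:
                last_error = exc
        raise last_error


    # e.g. resp = get_with_failover('/v2/images')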

-- 
Cheers,
~Blairo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Sam Morrison

> On 1 May 2017, at 4:24 pm, Sean McGinnis  wrote:
> 
> On Mon, May 01, 2017 at 10:17:43AM -0400, Matthew Treinish wrote:
>>> 
>> 
>> I thought it was just nova too, but it turns out cinder has the same exact
>> option as nova: (I hit this in my devstack patch trying to get glance 
>> deployed
>> as a wsgi app)
>> 
>> https://github.com/openstack/cinder/blob/d47eda3a3ba9971330b27beeeb471e2bc94575ca/cinder/common/config.py#L51-L55
>> 
>> Although from what I can tell you don't have to set it and it will fallback 
>> to
>> using the catalog, assuming you configured the catalog info for cinder:
>> 
>> https://github.com/openstack/cinder/blob/19d07a1f394c905c23f109c1888c019da830b49e/cinder/image/glance.py#L117-L129
>> 
>> 
>> -Matt Treinish
>> 
> 
> FWIW, that came with the original fork out of Nova. I do not have any real
> world data on whether that is used or not.

Yes this is used in cinder.

For a lot of the projects you can set the endpoints they use. This is extremely
useful in a large production OpenStack install where you want to control the
traffic.

I can understand using the catalog in certain situations and feel it’s OK for
that to be the default, but please don’t prevent operators from configuring it
differently.

Glance is the big one, as you want to control the data flow efficiently, but any
service-to-service connection should ideally be able to be manually configured.

Cheers,
Sam


> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][heat][murano][daisycloud] Removing Heat support from Tempest

2017-05-01 Thread MONTEIRO, FELIPE C
Murano currently uses the Tempest orchestration client for its scenario Tempest 
tests [0], which are not turned on by default in the Murano Tempest gate due to 
resource constraints. 

However, I'm hesitant to switch to Heat's testing client, because it is not a 
Tempest client, but rather the python-heatclient. I would like to know whether 
there are plans to change this to a Tempest-based client? 

[0] 
https://github.com/openstack/murano/blob/master/murano_tempest_tests/tests/scenario/application_catalog/base.py#L100
[1] 
https://github.com/openstack/heat/blob/master/heat_integrationtests/common/clients.py#L120
 

Felipe

-Original Message-
From: Ghanshyam Mann [mailto:ghanshyamm...@gmail.com] 
Sent: Sunday, April 30, 2017 1:53 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [qa][heat][murano][daisycloud] Removing Heat 
support from Tempest

On Fri, Apr 28, 2017 at 5:47 PM, Andrea Frittoli
 wrote:
>
>
> On Fri, Apr 28, 2017 at 10:29 AM Rabi Mishra  wrote:
>>
>> On Thu, Apr 27, 2017 at 3:55 PM, Andrea Frittoli
>>  wrote:
>>>
>>> Dear stackers,
>>>
>>> starting in the Liberty cycle Tempest has defined a set of projects which
>>> are in scope for direct
>>> testing in Tempest [0]. The current list includes keystone, nova, glance,
>>> swift, cinder and neutron.
>>> All other projects can use the same Tempest testing infrastructure (or
>>> parts of it) by taking advantage
>>> the Tempest plugin and stable interfaces.
>>>
>>> Tempest currently hosts a set of API tests as well as a service client
>>> for the Heat project.
>>> The Heat service client is used by the tests in Tempest, which run in
>>> Heat gate as part of the grenade
>>> job, as well as in the Tempest gate (check pipeline) as part of the
>>> layer4 job.
>>> According to code search [3] the Heat service client is also used by
>>> Murano and Daisycore.
>>
>>
>> For the heat grenade job, I've proposed two patches.
>>
>> 1. To run heat tree gabbi api tests as part of grenade 'post-upgrade'
>> phase
>>
>> https://review.openstack.org/#/c/460542/
>>
>> 2. To remove tempest tests from the grenade job
>>
>> https://review.openstack.org/#/c/460810/
>>
>>
>>>
>>> I proposed a patch to Tempest to start the deprecation counter for Heat /
>>> orchestration related
>>> configuration items in Tempest [4], and I would like to make sure that
>>> all tests and the service client
>>> either find a new home outside of Tempest, or are removed, by the end the
>>> Pike cycle at the latest.
>>>
>>> Heat has in-tree integration tests and Gabbi based API tests, but I don't
>>> know if those provide
>>> enough coverage to replace the tests on Tempest side.
>>>
>>
>> Yes, the heat gabbi api tests do not yet have the same coverage as the
>> tempest tree api tests (lacks tests using nova, neutron and swift
>> resources),  but I think that should not stop us from *not* running the
>> tempest tests in the grenade job.
>>
>> I also don't know if the tempest tree heat tests are used by any other
>> upstream/downstream jobs. We could surely add more tests to bridge the gap.
>>
>> Also, It's possible to run the heat integration tests (we've enough
>> coverage there) with tempest plugin after doing some initial setup, as we do
>> in all our dsvm gate jobs.
>>
>>> It would propose to move tests and client to a Tempest plugin owned /
>>> maintained by
>>> the Heat team, so that the Heat team can have full flexibility in
>>> consolidating their integration
>>> tests. For Murano and Daisycloud - and any other team that may want to
>>> use the Heat service
>>> client in their tests, even if the client is removed from Tempest, it
>>> would still be available via
>>> the Heat Tempest plugin. As long as the plugin implements the service
>>> client interface,
>>> the Heat service client will register automatically in the service client
>>> manager and be available
>>> for use as today.
>>>
>>
>> if I understand correctly, you're proposing moving the existing tempest
>> tests and service clients to a separate repo managed by heat team. Though
>> that would be collective decision, I'm not sure that's something I would
>> like to do. To start with we may look at adding some of the missing pieces
>> in heat tree itself.
>
>
> I'm proposing to move tests and the service client outside of tempest to a
> new home.
>
> I also suggested that the new home could be a dedicate repo, since that
> would allow you to maintain the
> current branchless nature 

[openstack-dev] [neutron] neutron-lib impact: Port security extension moved

2017-05-01 Thread Boden Russell
The neutron portsecurity extension has been rehomed into neutron-lib and
we are now in the process of consuming it.

Suggested actions:
- If your project consumes neutron.extensions.portsecurity [2] and
there's not an existing patch for your project in [1], please move your
imports over to neutron-lib's portsecurity API definition. You can use
[3] for reference.
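
For consumers, the change is essentially an import swap along these lines (a
rough sketch; please confirm the exact neutron-lib module path against the
reference patch in [3]):

    # Before: importing the extension module from neutron itself
    # from neutron.extensions import portsecurity as psec

    # After: importing the rehomed API definition from neutron-lib
    # (module/attribute names shown here are assumptions; see [3])
    from neutron_lib.api.definitions import port_security as psec

    print(psec.ALIAS)  # the extension alias, e.g. 'port-security'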

We'll hold off on merging [3] until consumers have a chance to switch
over, and we can discuss this topic in our weekly neutron meeting.

Thanks


[1]
https://review.openstack.org/#/q/message:%22use+neutron-lib+port+security%22
[2]
http://codesearch.openstack.org/?q=from%20neutron%5C.extensions%20import%20portsecurity
[3] https://review.openstack.org/#/c/461464/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [os-upstream-institute] Meeting reminder

2017-05-01 Thread Ildiko Vancsa
Hi Training Team,

Friendly reminder that we will have our next (and last before the Boston 
training) meeting in less than an hour at 2000 UTC on #openstack-meeting-3.

You can find the agenda for the meeting here: 
https://etherpad.openstack.org/p/openstack-upstream-institute-meetings 


Thanks and Best Regards,
Ildikó
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Goodbye ironic o/

2017-05-01 Thread John Villalovos
Mario,

So sorry you won't be working with us on Ironic anymore :( You have been a
great part of Ironic and I'm glad I got to know you.

Hopefully I will get to work with you again. Best of luck for the future!

John

On Fri, Apr 28, 2017 at 9:12 AM, Mario Villaplana <
mario.villapl...@gmail.com> wrote:

> Hi ironic team,
>
> You may have noticed a decline in my upstream contributions the past few
> weeks. Unfortunately, I'm no longer being paid to work on ironic. It's
> unlikely that I'll be contributing enough to keep up with the project in my
> new job, too, so please do feel free to remove my core access.
>
> It's been great working with all of you. I've learned so much about open
> source, baremetal provisioning, Python, and more from all of you, and I
> will definitely miss it. I hope that we all get to work together again in
> the future someday.
>
> I am not sure that I'll be at the Forum during the day, but please do ping
> me for a weekend or evening hangout if you're attending. I'd love to show
> anyone who's interested around the Boston area if our schedules align.
>
> Also feel free to contact me via IRC/email/carrier pigeon with any
> questions about work in progress I had upstream.
>
> Good luck with the project, and thanks for everything!
>
> Best wishes,
> Mario Villaplana
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Removal of puppet and ruby from nodepool images

2017-05-01 Thread Paul Belanger
On Mon, May 01, 2017 at 09:59:01AM -0400, Paul Belanger wrote:
> On Thu, Apr 27, 2017 at 07:39:00PM -0400, Paul Belanger wrote:
> > On Wed, Apr 26, 2017 at 12:17:03PM -0400, Paul Belanger wrote:
> > > Greetings,
> > > 
> > > We, openstack-infra, are on the final steps of removing puppet (and ruby)
> > > from the images our jobs run on in nodepool.  At this point, I think we are
> > > confident we shouldn't break any projects; however, I wanted to send this
> > > email just to keep everybody up to date.
> > > 
> > > If you do depend on puppet or ruby for your project jobs, please make sure
> > > to update your in-tree bindep.txt file and express the dependency.
> > > 
> > > If you do have a problem, please join us in #openstack-infra so we can help.
> > > 
> > As a heads up, we are rolling out another set of changes today. This is the
> > final step needed to stop managing our disk images in nodepool with puppet.
> > 
> > As a result, jobs should now be setting up their SSH known_hosts files as
> > needed. For example:
> > 
> >   ssh-keyscan  >> ~/.ssh/known_hosts
> > 
> > Or if you'd like to continue with disabled host checking:
> > 
> >   ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
> > 
> And today was the day we approved the change[1] \o/
> 
> We plan on starting rebuilds in the next 30 minutes; it will take a few hours
> to build / upload everything.  In the meantime, if you start to see random
> failures, please join us in #openstack-infra to troubleshoot.
> 
> We'll be doing our best to monitor http://status.openstack.org/zuul and
> http://status.openstack.org/elastic-recheck/ to find any affected jobs.
> 
> -PB
> 
> [1] https://review.openstack.org/#/c/460728/
> 
The centos-7 and fedora-25 images are live now. We are moving forward with
ubuntu-trusty and ubuntu-xenial next.

-PB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-05-01 Thread gordon chung


On 29/04/17 08:14 AM, Julien Danjou wrote:
> 1. get 'deleted' metrics
> 2. delete all things in storage
>   -> if it fails, whatever, ignore, maybe a janitor is doing the same
>   thing?
> 3. expunge from indexer

possibly? i was thinking it was possible that maybe it would partially
delete and could not delete the rest on a second go, but i guess i'll need
to look at that and see if we can do that.
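
For reference, a rough sketch of the flow described above (the indexer/storage
method names are hypothetical, not the actual gnocchi interfaces):

    def expire_deleted_metrics(indexer, storage):
        # 1. get 'deleted' metrics (hypothetical call)
        for metric in indexer.list_metrics(status='deleted'):
            # 2. delete all of the metric's data in storage; if it fails,
            #    ignore it - another janitor may be doing the same thing
            try:
                storage.delete_metric(metric)
            except Exception:
                pass
            # 3. expunge the metric from the indexer
            indexer.expunge_metric(metric)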

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [stable] ironic-stable-maint update proposal

2017-05-01 Thread Loo, Ruby
+1 to all and more sighs.

I wish I didn't have to be added. Can't we make people stay? :)

--ruby

From: Julia Kreger 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, April 27, 2017 at 5:57 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [ironic] [stable] ironic-stable-maint update 
proposal

On Thu, Apr 27, 2017 at 10:21 AM, Dmitry Tantsur 
> wrote:

1. Add Ruby Loo (rloo) to the group.

+1

2. Remove Jay Faulkner (sigh..) per his request at [2].

+1 to the sigh.


3. Remove Devananda (sigh again..)
+1 *more sighs*

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Matthew Treinish
On Mon, May 01, 2017 at 05:00:17AM -0700, Flavio Percoco wrote:
> On 28/04/17 11:19 -0500, Eric Fried wrote:
> > If it's *just* glance we're making an exception for, I prefer #1 (don't
> > deprecate/remove [glance]api_servers).  It's way less code &
> > infrastructure, and it discourages others from jumping on the
> > multiple-endpoints bandwagon.  If we provide endpoint_override_list
> > (handwave), people will think it's okay to use it.
> > 
> > Anyone aware of any other services that use multiple endpoints?
> 
> Probably a bit late but yeah, I think this makes sense. I'm not aware of other
> projects that have list of api_servers.

I thought it was just nova too, but it turns out cinder has the same exact
option as nova: (I hit this in my devstack patch trying to get glance deployed
as a wsgi app)

https://github.com/openstack/cinder/blob/d47eda3a3ba9971330b27beeeb471e2bc94575ca/cinder/common/config.py#L51-L55

Although from what I can tell you don't have to set it and it will fallback to
using the catalog, assuming you configured the catalog info for cinder:

https://github.com/openstack/cinder/blob/19d07a1f394c905c23f109c1888c019da830b49e/cinder/image/glance.py#L117-L129


-Matt Treinish


> 
> > On 04/28/2017 10:46 AM, Mike Dorman wrote:
> > > Maybe we are talking about two different things here?  I’m a bit confused.
> > > 
> > > Our Glance config in nova.conf on HV’s looks like this:
> > > 
> > > [glance]
> > > api_servers=http://glance1:9292,http://glance2:9292,http://glance3:9292,http://glance4:9292
> > > glance_api_insecure=True
> > > glance_num_retries=4
> > > glance_protocol=http
> 
> 
> FWIW, this feature is being used as intended. I'm sure there are ways to 
> achieve
> this using external tools like haproxy/nginx but that adds an extra burden to
> OPs that is probably not necessary since this functionality is already there.
> 
> Flavio
> 
> > > So we do provide the full URLs, and there is SSL support.  Right?  I am 
> > > fairly certain we tested this to ensure that if one URL fails, nova goes 
> > > on to retry the next one.  That failure does not get bubbled up to the 
> > > user (which is ultimately the goal.)
> > > 
> > > I don’t disagree with you that the client side choose-a-server-at-random 
> > > is not a great load balancer.  (But isn’t this roughly the same thing 
> > > that oslo-messaging does when we give it a list of RMQ servers?)  For us 
> > > it’s more about the failure handling if one is down than it is about 
> > > actually equally distributing the load.
> > > 
> > > In my mind options One and Two are the same, since today we are already 
> > > providing full URLs and not only server names.  At the end of the day, I 
> > > don’t feel like there is a compelling argument here to remove this 
> > > functionality (that people are actively making use of.)
> > > 
> > > To be clear, I, and I think others, are fine with nova by default getting 
> > > the Glance endpoint from Keystone.  And that in Keystone there should 
> > > exist only one Glance endpoint.  What I’d like to see remain is the 
> > > ability to override that for nova-compute and to target more than one 
> > > Glance URL for purposes of fail over.
> > > 
> > > Thanks,
> > > Mike
> > > 
> > > 
> > > 
> > > 
> > > On 4/28/17, 8:20 AM, "Monty Taylor"  wrote:
> > > 
> > > Thank you both for your feedback - that's really helpful.
> > > 
> > > Let me say a few more words about what we're trying to accomplish here
> > > overall so that maybe we can figure out what the right way forward is.
> > > (it may be keeping the glance api servers setting, but let me at least
> > > make the case real quick)
> > > 
> > >  From a 10,000 foot view, the thing we're trying to do is to get 
> > > nova's
> > > consumption of all of the OpenStack services it uses to be less 
> > > special.
> > > 
> > > The clouds have catalogs which list information about the services -
> > > public, admin and internal endpoints and whatnot - and then we're 
> > > asking
> > > admins to not only register that information with the catalog, but to
> > > also put it into the nova.conf. That means that any updating of that
> > > info needs to be an API call to keystone and also a change to 
> > > nova.conf.
> > > If we, on the other hand, use the catalog, then nova can pick up 
> > > changes
> > > in real time as they're rolled out to the cloud - and there is 
> > > hopefully
> > > a sane set of defaults we could choose (based on operator feedback 
> > > like
> > > what you've given) so that in most cases you don't have to tell nova
> > > where to find glance _at_all_ because the cloud already knows where it
> > > is. (nova would know to look in the catalog for the internal interface of
> > > the image service - for instance - there's no need to ask an operator to
> > > add to the config "what is the service_type of the image service we
> > > 

Re: [openstack-dev] [Openstack-operators] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Flavio Percoco

On 28/04/17 11:19 -0500, Eric Fried wrote:

If it's *just* glance we're making an exception for, I prefer #1 (don't
deprecate/remove [glance]api_servers).  It's way less code &
infrastructure, and it discourages others from jumping on the
multiple-endpoints bandwagon.  If we provide endpoint_override_list
(handwave), people will think it's okay to use it.

Anyone aware of any other services that use multiple endpoints?


Probably a bit late but yeah, I think this makes sense. I'm not aware of other
projects that have list of api_servers.


On 04/28/2017 10:46 AM, Mike Dorman wrote:

Maybe we are talking about two different things here?  I’m a bit confused.

Our Glance config in nova.conf on HV’s looks like this:

[glance]
api_servers=http://glance1:9292,http://glance2:9292,http://glance3:9292,http://glance4:9292
glance_api_insecure=True
glance_num_retries=4
glance_protocol=http



FWIW, this feature is being used as intended. I'm sure there are ways to achieve
this using external tools like haproxy/nginx but that adds an extra burden to
OPs that is probably not necessary since this functionality is already there.

Flavio


So we do provide the full URLs, and there is SSL support.  Right?  I am fairly 
certain we tested this to ensure that if one URL fails, nova goes on to retry 
the next one.  That failure does not get bubbled up to the user (which is 
ultimately the goal.)

I don’t disagree with you that the client side choose-a-server-at-random is not 
a great load balancer.  (But isn’t this roughly the same thing that 
oslo-messaging does when we give it a list of RMQ servers?)  For us it’s more 
about the failure handling if one is down than it is about actually equally 
distributing the load.

In my mind options One and Two are the same, since today we are already 
providing full URLs and not only server names.  At the end of the day, I don’t 
feel like there is a compelling argument here to remove this functionality 
(that people are actively making use of.)

To be clear, I, and I think others, are fine with nova by default getting the 
Glance endpoint from Keystone.  And that in Keystone there should exist only 
one Glance endpoint.  What I’d like to see remain is the ability to override 
that for nova-compute and to target more than one Glance URL for purposes of 
fail over.

Thanks,
Mike




On 4/28/17, 8:20 AM, "Monty Taylor"  wrote:

Thank you both for your feedback - that's really helpful.

Let me say a few more words about what we're trying to accomplish here
overall so that maybe we can figure out what the right way forward is.
(it may be keeping the glance api servers setting, but let me at least
make the case real quick)

 From a 10,000 foot view, the thing we're trying to do is to get nova's
consumption of all of the OpenStack services it uses to be less special.

The clouds have catalogs which list information about the services -
public, admin and internal endpoints and whatnot - and then we're asking
admins to not only register that information with the catalog, but to
also put it into the nova.conf. That means that any updating of that
info needs to be an API call to keystone and also a change to nova.conf.
If we, on the other hand, use the catalog, then nova can pick up changes
in real time as they're rolled out to the cloud - and there is hopefully
a sane set of defaults we could choose (based on operator feedback like
what you've given) so that in most cases you don't have to tell nova
where to find glance _at_all_ because the cloud already knows where it
is. (nova would know to look in the catalog for the internal interface of
the image service - for instance - there's no need to ask an operator to
add to the config "what is the service_type of the image service we
should talk to" :) )

Now - glance, and the thing you like that we don't - is especially hairy
because of the api_servers list. The list, as you know, is just a list
of servers, not even of URLs. This means it's not possible to configure
nova to talk to glance over SSL (which I know you said works for you,
but we'd like for people to be able to choose to SSL all their things).
We could add that, but it would be an additional pile of special config.
Because of all of that, we also have to attempt to make working URLs
from what is usually a list of IP addresses. This is also clunky and
prone to failure.

The implementation on the underside of the api_servers code is the
world's dumbest load balancer. It picks a server from the  list at
random and uses it. There is no facility for dealing with a server in
the list that stops working or for allowing rolling upgrades like there
would with a real load-balancer across the set. If one of the API
servers goes away, we have no context to know that, so just some of your
internal calls to glance fail.

Those 

Re: [openstack-dev] [ironic] Goodbye ironic o/

2017-05-01 Thread Loo, Ruby
Hi Mario,

I will miss you; good luck!

So long and thanks for all the metrics :)

--ruby

From: Mario Villaplana 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, April 28, 2017 at 12:12 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [ironic] Goodbye ironic o/

Hi ironic team,

You may have noticed a decline in my upstream contributions the past few weeks. 
Unfortunately, I'm no longer being paid to work on ironic. It's unlikely that 
I'll be contributing enough to keep up with the project in my new job, too, so 
please do feel free to remove my core access.

It's been great working with all of you. I've learned so much about open 
source, baremetal provisioning, Python, and more from all of you, and I will 
definitely miss it. I hope that we all get to work together again in the future 
someday.

I am not sure that I'll be at the Forum during the day, but please do ping me 
for a weekend or evening hangout if you're attending. I'd love to show anyone 
who's interested around the Boston area if our schedules align.

Also feel free to contact me via IRC/email/carrier pigeon with any questions 
about work in progress I had upstream.

Good luck with the project, and thanks for everything!

Best wishes,
Mario Villaplana
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Eric Fried
Matt-

Yeah, clearly other projects have the same issue this blueprint is
trying to solve in nova.  I think the idea is that, once the
infrastructure is in place and nova has demonstrated the concept, other
projects can climb aboard.

It's conceivable that the new get_service_url() method could be
moved to a more common lib (ksa or os-client-config perhaps) in the
future to facilitate this.
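
For reference, catalog-based endpoint lookup is already expressible with
keystoneauth today, roughly along these lines (a sketch with placeholder
credentials; get_service_url() is the blueprint's nova-side name and is not
shown here):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    # Placeholder credentials - substitute real values for an actual cloud.
    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='nova', password='secret',
                       project_name='service',
                       user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    # Ask the service catalog for the image service's internal endpoint,
    # instead of hard-coding an api_servers list in nova.conf.
    endpoint = sess.get_endpoint(service_type='image', interface='internal')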

Eric (efried)

On 05/01/2017 09:17 AM, Matthew Treinish wrote:
> On Mon, May 01, 2017 at 05:00:17AM -0700, Flavio Percoco wrote:
>> On 28/04/17 11:19 -0500, Eric Fried wrote:
>>> If it's *just* glance we're making an exception for, I prefer #1 (don't
>>> deprecate/remove [glance]api_servers).  It's way less code &
>>> infrastructure, and it discourages others from jumping on the
>>> multiple-endpoints bandwagon.  If we provide endpoint_override_list
>>> (handwave), people will think it's okay to use it.
>>>
>>> Anyone aware of any other services that use multiple endpoints?
>> Probably a bit late but yeah, I think this makes sense. I'm not aware of 
>> other
>> projects that have list of api_servers.
> I thought it was just nova too, but it turns out cinder has the same exact
> option as nova: (I hit this in my devstack patch trying to get glance deployed
> as a wsgi app)
>
> https://github.com/openstack/cinder/blob/d47eda3a3ba9971330b27beeeb471e2bc94575ca/cinder/common/config.py#L51-L55
>
> Although from what I can tell you don't have to set it and it will fallback to
> using the catalog, assuming you configured the catalog info for cinder:
>
> https://github.com/openstack/cinder/blob/19d07a1f394c905c23f109c1888c019da830b49e/cinder/image/glance.py#L117-L129
>
>
> -Matt Treinish
>
>
>>> On 04/28/2017 10:46 AM, Mike Dorman wrote:
 Maybe we are talking about two different things here?  I’m a bit confused.

 Our Glance config in nova.conf on HV’s looks like this:

 [glance]
 api_servers=http://glance1:9292,http://glance2:9292,http://glance3:9292,http://glance4:9292
 glance_api_insecure=True
 glance_num_retries=4
 glance_protocol=http
>>
>> FWIW, this feature is being used as intended. I'm sure there are ways to 
>> achieve
>> this using external tools like haproxy/nginx but that adds an extra burden to
>> OPs that is probably not necessary since this functionality is already there.
>>
>> Flavio
>>
 So we do provide the full URLs, and there is SSL support.  Right?  I am 
 fairly certain we tested this to ensure that if one URL fails, nova goes 
 on to retry the next one.  That failure does not get bubbled up to the 
 user (which is ultimately the goal.)

 I don’t disagree with you that the client side choose-a-server-at-random 
 is not a great load balancer.  (But isn’t this roughly the same thing that 
 oslo-messaging does when we give it a list of RMQ servers?)  For us it’s 
 more about the failure handling if one is down than it is about actually 
 equally distributing the load.

 In my mind options One and Two are the same, since today we are already 
 providing full URLs and not only server names.  At the end of the day, I 
 don’t feel like there is a compelling argument here to remove this 
 functionality (that people are actively making use of.)

 To be clear, I, and I think others, are fine with nova by default getting 
 the Glance endpoint from Keystone.  And that in Keystone there should 
 exist only one Glance endpoint.  What I’d like to see remain is the 
 ability to override that for nova-compute and to target more than one 
 Glance URL for purposes of fail over.

 Thanks,
 Mike




 On 4/28/17, 8:20 AM, "Monty Taylor"  wrote:

 Thank you both for your feedback - that's really helpful.

 Let me say a few more words about what we're trying to accomplish here
 overall so that maybe we can figure out what the right way forward is.
 (it may be keeping the glance api servers setting, but let me at least
 make the case real quick)

  From a 10,000 foot view, the thing we're trying to do is to get nova's
 consumption of all of the OpenStack services it uses to be less 
 special.

 The clouds have catalogs which list information about the services -
 public, admin and internal endpoints and whatnot - and then we're 
 asking
 admins to not only register that information with the catalog, but to
 also put it into the nova.conf. That means that any updating of that
 info needs to be an API call to keystone and also a change to 
 nova.conf.
 If we, on the other hand, use the catalog, then nova can pick up 
 changes
 in real time as they're rolled out to the cloud - and there is 
 hopefully
 a sane set of defaults we could choose (based on operator feedback like
 what 

Re: [openstack-dev] [all] Removal of puppet and ruby from nodepool images

2017-05-01 Thread Paul Belanger
On Thu, Apr 27, 2017 at 07:39:00PM -0400, Paul Belanger wrote:
> On Wed, Apr 26, 2017 at 12:17:03PM -0400, Paul Belanger wrote:
> > Greetings,
> > 
> > We, openstack-infra, are on the final steps of removing puppet (and ruby)
> > from the images our jobs run on in nodepool.  At this point, I think we are
> > confident we shouldn't break any projects; however, I wanted to send this
> > email just to keep everybody up to date.
> > 
> > If you do depend on puppet or ruby for your project jobs, please make sure
> > to update your in-tree bindep.txt file and express the dependency.
> > 
> > If you do have a problem, please join us in #openstack-infra so we can help.
> > 
> As a heads up, we are rolling out another set of changes today. This is the
> final step needed to stop managing our disk images in nodepool with puppet.
> 
> As a result, jobs should now be setting up their SSH known_hosts files as
> needed. For example:
> 
>   ssh-keyscan  >> ~/.ssh/known_hosts
> 
> Or if you'd like to continue with disabled host checking:
> 
>   ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
> 
And today was the day we approved the change[1] \o/

We plan on starting rebuilds in the next 30 minutes; it will take a few hours to
build / upload everything.  In the meantime, if you start to see random failures,
please join us in #openstack-infra to troubleshoot.

We'll be doing our best to monitor http://status.openstack.org/zuul and
http://status.openstack.org/elastic-recheck/ to find any affected jobs.

-PB

[1] https://review.openstack.org/#/c/460728/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][cinder][mistral][manila] A path forward to shiny consistent service types

2017-05-01 Thread Ben Swartzlander

On 04/28/2017 06:26 PM, Monty Taylor wrote:

Hey everybody!

Yay! (I'm sure you're all saying this, given the topic. I'll let you
collect yourself from your exuberant celebration)

== Background ==

As I'm sure you all know, we've been trying to make some headway for a
while on getting service-types that are registered in the keystone
service catalog to be consistent. The reason for this is so that API
Consumers can know how to request a service from the catalog. That might
sound like a really easy task - but uh-oh, you'd be so so wrong. :)

The problem is that we have some services that went down the path of
suggesting people register a new service in the catalog with a version
appended. This pattern was actually started by nova for the v3 api but
which we walked back from - with "computev3". The pattern was picked up
by at least cinder (volumev2, volumev3) and mistral (workflowv2) that I
am aware of. We're also suggesting in the service-types-authority that
manila go by "shared-file-system" instead of "share".

(Incidentally, this is related to a much larger topic of version
discovery, which I will not bore you with in this email, but about which
I have a giant pile of words just waiting for you in a little bit. Get
excited about that!)

== Proposed Solution ==

As a follow up to the consuming version discovery spec, which you should
absolutely run away from and never read, I wrote these:

https://review.openstack.org/#/c/460654/ (Consuming historical aliases)
and
https://review.openstack.org/#/c/460539/ (Listing historical aliases)

It's not a particularly clever proposal - but it breaks down like this:

* Make a list of the known historical aliases we're aware of - in a
place that isn't just in one of our python libraries (460539)
* Write down a process for using them as part of finding a service from
the catalog so that there is a clear method that can be implemented by
anyone doing libraries or REST interactions. (460654)
* Get agreement on that process as the "recommended" way to look up
services by service-type in the catalog.
* Implement it in the base libraries OpenStack ships.
* Contact the authors of as many OpenStack API libraries as we can find.
* Add tempest tests to verify the mappings in both directions.
* Change things in devstack/deployer guides.

The process as described is backwards compatible. That is, once
implemented it means that a user can request "volumev2" or
"block-storage" with version=2 - and both will return the endpoint the
user expects. It also means that we're NOT asking existing clouds to run
out and break their users. New cloud deployments can do the new thing -
but the old values are handled in both directions.
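
A minimal sketch of the matching and fallback described above (illustrative
only; the real alias data would come from the list proposed in 460539):

    # Known historical aliases, keyed by the official service-type.
    ALIASES = {
        'block-storage': ['volumev3', 'volumev2', 'volume'],
        'workflow': ['workflowv2'],
        'shared-file-system': ['share'],
    }


    def find_service_type(catalog_types, requested):
        # Try the requested type first, then its aliases, in both directions,
        # so old and new catalogs keep working for old and new clients.
        candidates = [requested] + ALIASES.get(requested, [])
        for official, aliases in ALIASES.items():
            if requested in aliases:
                candidates.append(official)
        for candidate in candidates:
            if candidate in catalog_types:
                return candidate
        raise ValueError('no endpoint found for %s' % requested)


    # e.g. find_service_type({'volumev2', 'compute'}, 'block-storage') -> 'volumev2'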

There is a hole, which is that people who are not using the base libs
OpenStack ships may find themselves with a new cloud that has a
different service-type in the catalog than they have used before. It's
not ideal, to be sure. BUT - hopefully active outreach to the community
libraries coupled with documentation will keep the issues to a minimum.

If we can agree on the matching and fallback model, I am volunteering to
do the work to implement in every client library in which it needs to be
implemented across OpenStack and to add the tempest tests. (it's
actually mostly a patch to keystoneauth, so that's actually not _that_
impressive of a volunteer) I will also reach out to as many of the
OpenStack API client library authors as I can find, point them at the
docs and suggest they add the support.

Thoughts? Anyone violently opposed?


I don't have any problems with this idea. My main concern would be for 
backwards-compatibility and it sounds like that's pretty well sorted out.


I do think it's important that, if we make this improvement, all the
projects really do get it done at around the same time, because if we
only implement it in 80% of projects, it will look pretty weird.



Thanks for reading...

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Sean McGinnis
On Mon, May 01, 2017 at 10:17:43AM -0400, Matthew Treinish wrote:
> > 
> 
> I thought it was just nova too, but it turns out cinder has the same exact
> option as nova: (I hit this in my devstack patch trying to get glance deployed
> as a wsgi app)
> 
> https://github.com/openstack/cinder/blob/d47eda3a3ba9971330b27beeeb471e2bc94575ca/cinder/common/config.py#L51-L55
> 
> Although from what I can tell you don't have to set it and it will fallback to
> using the catalog, assuming you configured the catalog info for cinder:
> 
> https://github.com/openstack/cinder/blob/19d07a1f394c905c23f109c1888c019da830b49e/cinder/image/glance.py#L117-L129
> 
> 
> -Matt Treinish
> 

FWIW, that came with the original fork out of Nova. I do not have any real
world data on whether that is used or not.
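
As a rough illustration of the fallback Matt describes (a configured list of
endpoints wins, otherwise use the catalog), something like the following - a
simplified sketch, not the actual cinder code:

import random


def get_glance_servers(configured_servers, catalog_image_endpoints):
    """Return glance endpoints: the configured list if set, else the catalog."""
    servers = list(configured_servers or catalog_image_endpoints)
    random.shuffle(servers)  # spread requests across multiple endpoints
    return servers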


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][cinder][mistral][manila] A path forward to shiny consistent service types

2017-05-01 Thread Flavio Percoco

On 29/04/17 22:40 -0500, Sean McGinnis wrote:

On Fri, Apr 28, 2017 at 05:26:16PM -0500, Monty Taylor wrote:

Hey everybody!

...

== Proposed Solution ==

... Clean things up
... Make things simple
... Don't break everybody



+1 from me. I think this is a good direction to go.



/me likes!
/me likes very much!

+1

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][blazar][scientific] advanced instance scheduling: reservations and preeemption - Forum session

2017-05-01 Thread Blair Bethwaite
Hi all,

Following up to the recent thread "[Openstack-operators] [scientific]
Resource reservation requirements (Blazar) - Forum session" and adding
openstack-dev.

This is now a confirmed forum session
(https://www.openstack.org/summit/boston-2017/summit-schedule/events/18781/advanced-instance-scheduling-reservations-and-preemption)
to cover any advanced scheduling use-cases people want to talk about,
but in particular focusing on reservations and preemption as they are
big priorities particularly for scientific deployers.

Etherpad draft is
https://etherpad.openstack.org/p/BOS-forum-advanced-instance-scheduling,
please attend and contribute! In particular I'd appreciate background
spec and review links added to the etherpad.

Jay, would you be able and interested to moderate this from the Nova side?

Cheers,

On 12 April 2017 at 05:22, Jay Pipes  wrote:
> On 04/11/2017 02:08 PM, Pierre Riteau wrote:
>>>
>>> On 4 Apr 2017, at 22:23, Jay Pipes >> > wrote:
>>>
>>> On 04/04/2017 02:48 PM, Tim Bell wrote:

 Some combination of spot/OPIE
>>>
>>>
>>> What is OPIE?
>>
>>
>> Maybe I missed a message: I didn’t see any reply to Jay’s question about
>> OPIE.
>
>
> Thanks!
>
>> OPIE is the OpenStack Preemptible Instances
>> Extension: https://github.com/indigo-dc/opie
>> I am sure other on this list can provide more information.
>
>
> Got it.
>
>> I think running OPIE instances inside Blazar reservations would be
>> doable without many changes to the implementation.
>> We’ve talked about this idea several times, this forum session would be
>> an ideal place to draw up an implementation plan.
>
>
> I just looked through the OPIE source code. One thing I'm wondering is why
> the code for killing off pre-emptible instances is being done in the
> filter_scheduler module?
>
> Why not have a separate service that merely responds to the raising of a
> NoValidHost exception being raised from the scheduler with a call to go and
> terminate one or more instances that would have allowed the original request
> to land on a host?
>
> Right here is where OPIE goes and terminates pre-emptible instances:
>
> https://github.com/indigo-dc/opie/blob/master/opie/scheduler/filter_scheduler.py#L92-L100
>
> However, that code should actually be run when line 90 raises NoValidHost:
>
> https://github.com/indigo-dc/opie/blob/master/opie/scheduler/filter_scheduler.py#L90
>
> There would be no need at all for "detecting overcommit" here:
>
> https://github.com/indigo-dc/opie/blob/master/opie/scheduler/filter_scheduler.py#L96
>
> Simply detect a NoValidHost being returned to the conductor from the
> scheduler, examine if there are pre-emptible instances currently running
> that could be terminated and terminate them, and re-run the original call to
> select_destinations() (the scheduler call) just like a Retry operation
> normally does.
>
> There'd be no need whatsoever to involve any changes to the scheduler at
> all.
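
[As an aside, a minimal sketch of the retry-on-NoValidHost approach described
above - the helper names are made up, and this is not actual Nova or OPIE
code:]

class NoValidHost(Exception):
    """Stand-in for nova.exception.NoValidHost."""


def schedule_with_preemption(select_destinations, request_spec,
                             find_preemptible, terminate, max_attempts=3):
    """Retry scheduling after freeing pre-emptible instances.

    Instead of teaching the filter scheduler about overcommit, react to a
    failed scheduling attempt by terminating some pre-emptible instances
    and re-running the original request, like a normal retry.
    """
    for _ in range(max_attempts):
        try:
            return select_destinations(request_spec)
        except NoValidHost:
            victims = find_preemptible(request_spec)
            if not victims:
                raise  # nothing left to pre-empt, give up
            for instance in victims:
                terminate(instance)
    raise NoValidHost('still no valid host after pre-empting instances')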
>
 and Blazar would seem doable as long as the resource provider
 reserves capacity appropriately (i.e. spot resources>>blazar
 committed along with no non-spot requests for the same aggregate).
 Is this feasible?
>
>
> No. :)
>
> As mentioned in previous emails and on the etherpad here:
>
> https://etherpad.openstack.org/p/new-instance-reservation
>
> I am firmly against having the resource tracker or the placement API
> represent inventory or allocations with a temporal aspect to them (i.e.
> allocations in the future).
>
> A separate system (hopefully Blazar) is needed to manage the time-based
> associations to inventories of resources over a period in the future.
>
> Best,
> -jay
>
>>> I'm not sure how the above is different from the constraints I mention
>>> below about having separate sets of resource providers for preemptible
>>> instances than for non-preemptible instances?
>>>
>>> Best,
>>> -jay
>>>
 Tim

 On 04.04.17, 19:21, "Jay Pipes" > wrote:

On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
> Hi Jay,
>
> On 4 April 2017 at 00:20, Jay Pipes > wrote:
>> However, implementing the above in any useful fashion requires
 that Blazar
>> be placed *above* Nova and essentially that the cloud operator
 turns off
>> access to Nova's  POST /servers API call for regular users.
 Because if not,
>> the information that Blazar acts upon can be simply
 circumvented by any user
>> at any time.
>
> That's something of an oversimplification. A reservation system
> outside of Nova could manipulate Nova host-aggregates to "cordon
 off"
> infrastructure from on-demand access (I believe Blazar already uses
> this approach), and it's not much of a jump to imagine operators
 being
> able to 

Re: [openstack-dev] [Openstack-operators] [nova][glance] Who needs multiple api_servers?

2017-05-01 Thread Eric Fried
Sam-

Under the current design, you can provide a specific endpoint
(singular) via the `endpoint_override` conf option.  Based on feedback
on this thread, we will also be keeping support for
`[glance]api_servers` for consumers who actually need to be able to
specify multiple endpoints.  See latest spec proposal[1] for details.

[1] https://review.openstack.org/#/c/461481/
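
Purely as an illustration of the precedence being proposed (the override
first, then the configured list, then the catalog) - names here are
approximate, see the spec for the real behaviour:

def resolve_image_endpoints(endpoint_override=None, api_servers=None,
                            catalog_endpoint=None):
    """Illustrative precedence only: override, then configured list, then catalog."""
    if endpoint_override:
        return [endpoint_override]
    if api_servers:
        return list(api_servers)
    return [catalog_endpoint] if catalog_endpoint else []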

Thanks,
Eric (efried)

On 05/01/2017 12:20 PM, Sam Morrison wrote:
> 
>> On 1 May 2017, at 4:24 pm, Sean McGinnis  wrote:
>>
>> On Mon, May 01, 2017 at 10:17:43AM -0400, Matthew Treinish wrote:
 
>>>
>>> I thought it was just nova too, but it turns out cinder has the same exact
>>> option as nova: (I hit this in my devstack patch trying to get glance 
>>> deployed
>>> as a wsgi app)
>>>
>>> https://github.com/openstack/cinder/blob/d47eda3a3ba9971330b27beeeb471e2bc94575ca/cinder/common/config.py#L51-L55
>>>
>>> Although from what I can tell you don't have to set it and it will fallback 
>>> to
>>> using the catalog, assuming you configured the catalog info for cinder:
>>>
>>> https://github.com/openstack/cinder/blob/19d07a1f394c905c23f109c1888c019da830b49e/cinder/image/glance.py#L117-L129
>>>
>>>
>>> -Matt Treinish
>>>
>>
>> FWIW, that came with the original fork out of Nova. I do not have any real
>> world data on whether that is used or not.
> 
> Yes this is used in cinder.
> 
> For a lot of the projects you can set the endpoints for them to use. This is 
> extremely useful in a large production OpenStack install where you want to 
> control the traffic.
> 
> I can understand using the catalog in certain situations and feel it’s OK for 
> that to be the default, but please don’t prevent operators from configuring it 
> differently.
> 
> Glance is the big one, as you want to control the data flow efficiently, but 
> any service-to-service communication should ideally be manually 
> configurable.
> 
> Cheers,
> Sam
> 
> 
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Defining the agenda for Kubernetes Ops on OpenStack forum session @ OpenStack Summit Boston

2017-05-01 Thread Steve Gordon
Hi all,

There will be a forum session at OpenStack Summit Boston next week on the topic 
of Kubernetes Ops on OpenStack. This session will be occurring on 
Wednesday, May 10, at 1:50pm-2:30pm [1]. If you are an operator, developer, 
or other contributor attending OpenStack Summit who would like to participate 
in this session we would love to have you. We're working on framing the agenda 
for the session in this Etherpad:

https://etherpad.openstack.org/p/BOS-forum-kubernetes-ops-on-openstack

Feel free to add your own thoughts and look forward to seeing you there. If 
this email has caused you to ask yourself what the forum is and why you'd be 
there, I'd suggest starting here:

https://wiki.openstack.org/wiki/Forum

Thanks!

Steve

[1] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18764/kubernetes-ops-on-openstack

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] this week's priorities and subteam reports

2017-05-01 Thread Yeleswarapu, Ramamani
Hi,

We are glad to present this week's priorities and subteam report for Ironic. As 
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. Pike priorities update: https://review.openstack.org/#/c/460086/
2. rolling upgrades
2.1. make a change to the grenade job to only upgrade conductor, ready for 
reviews: https://review.openstack.org/456166
2.2. the next patch is ready for reviews: 
https://review.openstack.org/#/c/412397/
3. review next BFV patch:
3.1. next: https://review.openstack.org/#/c/366197/
4. review e-tags spec: https://review.openstack.org/#/c/381991/


Bugs (dtantsur, vdrok, TheJulia)

- Stats (diff between 24 Apr 2017 and 01 May 2017)
- Ironic: 252 bugs (+2) + 251 wishlist items (-1). 22 new (+5), 201 in 
progress, 0 critical, 26 high and 31 incomplete (-2)
- Inspector: 14 bugs + 30 wishlist items (-3). 3 new (+1), 15 in progress (-5), 
0 critical, 1 high and 3 incomplete
- Nova bugs with Ironic tag: 11. 2 new, 0 critical, 0 high

Essential Priorities


CI refactoring and missing test coverage

- Standalone CI tests (vsaienk0)
- next patch to be reviewed: https://review.openstack.org/#/c/429770/
- Missing test coverage (all)
- portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476
- local boot with partition images: TODO 
https://bugs.launchpad.net/ironic/+bug/1531149
- adoption: https://review.openstack.org/#/c/344975/
- should probably be changed to use standalone tests

Generic boot-from-volume (TheJulia, dtantsur)
-
- specs and blueprints:
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/volume-connection-information.html
- code: https://review.openstack.org/#/q/topic:bug/1526231
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/boot-from-volume-reference-drivers.html
- code: https://review.openstack.org/#/q/topic:bug/1559691
- https://blueprints.launchpad.net/nova/+spec/ironic-boot-from-volume
- code: 
https://review.openstack.org/#/q/topic:bp/ironic-boot-from-volume
- status as of most recent weekly meeting:
- mjturek is looking into how to address/support a behavior of 
cinder with regard to required information where cinder expects data, but 
drivers do not necessarily use it.  The cinder driver code originally expected 
it to be optional, but hshiina found an additional check inside cinder's code 
base that requires both pieces of information.
- mjturek is working on getting together devstack config updates/script 
changes in order to support this configuration
- It's looking more like all setup can/should happen during tempest.
- hshiina is looking into Nova-side changes and is attempting to obtain 
clarity on some of the issues that tenant network separation introduced into 
the deployment workflow.
- Patch/note tracking etherpad: https://etherpad.openstack.org/p/Ironic-BFV
Ironic Patches:
https://review.openstack.org/#/c/366197/
https://review.openstack.org/#/c/406290
https://review.openstack.org/#/c/413324
https://review.openstack.org/#/c/454243/ - WIP logic changes for 
deployment process.  Tenant network separation introduced some additional 
complexity, quick conceptual feedback requested.
https://review.openstack.org/#/c/214586/ - Volume Connection 
Information Rest API Change
Additional patches exist, for python-ironicclient and one for nova.  
Links in the patch/note tracking etherpad.

Rolling upgrades and grenade-partial (rloo, jlvillal)
-
- spec approved; code patches: 
https://review.openstack.org/#/q/topic:bug/1526283
- status as of most recent weekly meeting:
- patches ready for reviews. Next one: 'Add version column' 
(https://review.openstack.org/#/c/412397/)
- Testing work:
- 27-Mar-2017: Grenade multi-node is non-voting
- need to change grenade to only upgrade the conductor, ready for 
reviews: https://review.openstack.org/456166

Reference architecture guide (jroll, dtantsur)
--
- no updates

Python 3.5 compatibility (Nisha)

- no updates
- Nisha will be taking over this work(Nisha on leave from May 5 to May 22)

Deploying with Apache and WSGI in CI (vsaienk0)
---
- ironic part seems finished (needs double-checking)
- do we have install-guide bits on how to do it?
- inspector is TODO and depends on https://review.openstack.org/435517

Driver composition (dtantsur, jroll)

- spec: 

Re: [openstack-dev] [scientific][nova][cyborg] Special Hardware Forum session

2017-05-01 Thread Blair Bethwaite
Thanks Rochelle. I encourage everyone to dump thoughts into the
etherpad (https://etherpad.openstack.org/p/BOS-forum-special-hardware
- feel free to garden it as you go!) so we can have some chance of
organising a coherent session. In particular it would be useful to
know what is going to be most useful for the Nova and Cyborg devs so
that we can give that priority before we start the show-and-tell /
knowledge-share that is often a large part of these sessions. I'd also
be very happy to have a co-moderator if anyone wants to volunteer.

On 26 April 2017 at 03:11, Rochelle Grober  wrote:
>
> I know that some cyborg folks and nova folks are planning to be there. Now
> we need to drive some ops folks.
>
>
> Sent from HUAWEI AnyOffice
> From:Blair Bethwaite
> To:openstack-dev@lists.openstack.org,openstack-oper.
> Date:2017-04-25 08:24:34
> Subject:[openstack-dev] [scientific][nova][cyborg] Special Hardware Forum
> session
>
> Hi all,
>
> A quick FYI that this Forum session exists:
> https://www.openstack.org/summit/boston-2017/summit-schedule/events/18803/special-hardware
> (https://etherpad.openstack.org/p/BOS-forum-special-hardware) is a
> thing this Forum.
>
> It would be great to see a good representation from both the Nova and
> Cyborg dev teams, and also ops ready to share their experience and
> use-cases.
>
> --
> Cheers,
> ~Blairo
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Cheers,
~Blairo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][blazar][scientific] advanced instance scheduling: reservations and preeemption - Forum session

2017-05-01 Thread Jay Pipes

On 05/01/2017 03:39 PM, Blair Bethwaite wrote:

Hi all,

Following up to the recent thread "[Openstack-operators] [scientific]
Resource reservation requirements (Blazar) - Forum session" and adding
openstack-dev.

This is now a confirmed forum session
(https://www.openstack.org/summit/boston-2017/summit-schedule/events/18781/advanced-instance-scheduling-reservations-and-preemption)
to cover any advanced scheduling use-cases people want to talk about,
but in particular focusing on reservations and preemption as they are
big priorities particularly for scientific deployers.

>

Etherpad draft is
https://etherpad.openstack.org/p/BOS-forum-advanced-instance-scheduling,
please attend and contribute! In particular I'd appreciate background
spec and review links added to the etherpad.

Jay, would you be able and interested to moderate this from the Nova side?


Masahito Muroi is currently marked as the moderator, but I will indeed 
be there and happy to assist Masahito in moderating, no problem.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] priorities for the current week (05/02-05/05)

2017-05-01 Thread ChangBo Guo
Oslo folks,

The Oslo team maintains 35 libraries, and each core reviewer focuses on a
subset of them.  Over the last several weeks we did some work to try to make
Oslo reviews more productive and efficient. A summary follows:

1. Collect core reviewers' focuses
The Oslo program brings together generalist code reviewers and
specialist API maintainers. Not all of the generalist code reviewers know all
of the libraries well enough, so you can find the list in section one of [1].

2. Generate a unified review dashboard
I posted review dashboard links in
https://wiki.openstack.org/wiki/Oslo#Review_Links and in section three of [1].
That should be helpful to the review process.

3. Highlight priorities
We discussed this in the last weekly meeting [2] and will try to post high
priorities each week.  You can edit section two in [1] to draw attention to
items. This week we need more input on Doug's logging-related
patches before the Summit.
https://review.openstack.org/#/c/460112/
https://review.openstack.org/459426
https://review.openstack.org/459424
https://review.openstack.org/461506

[1] https://etherpad.openstack.org/p/oslo-pike-tracking
[2]
http://eavesdrop.openstack.org/meetings/oslo/2017/oslo.2017-05-01-14.01.log.html

-- 
ChangBo Guo(gcb)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Goodbye ironic o/

2017-05-01 Thread Ranganathan, Shobha
Hi Mario,

Sorry to hear that you won’t be working on Ironic anymore!
Best of luck on whatever you are doing next!

Shobha

From: John Villalovos [mailto:openstack@sodarock.com]
Sent: Monday, May 1, 2017 9:14 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [ironic] Goodbye ironic o/

Mario,
So sorry you won't be working with us on Ironic anymore :( You have been a 
great part of Ironic and I'm glad I got to know you.
Hopefully I will get to work with you again. Best of luck for the future!
John

On Fri, Apr 28, 2017 at 9:12 AM, Mario Villaplana 
> wrote:
Hi ironic team,

You may have noticed a decline in my upstream contributions the past few weeks. 
Unfortunately, I'm no longer being paid to work on ironic. It's unlikely that 
I'll be contributing enough to keep up with the project in my new job, too, so 
please do feel free to remove my core access.

It's been great working with all of you. I've learned so much about open 
source, baremetal provisioning, Python, and more from all of you, and I will 
definitely miss it. I hope that we all get to work together again in the future 
someday.

I am not sure that I'll be at the Forum during the day, but please do ping me 
for a weekend or evening hangout if you're attending. I'd love to show anyone 
who's interested around the Boston area if our schedules align.

Also feel free to contact me via IRC/email/carrier pigeon with any questions 
about work in progress I had upstream.

Good luck with the project, and thanks for everything!

Best wishes,
Mario Villaplana

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][OSC] OpenStack Interpreter: A useful tool python interpreter tool for the OpenStack client libraries.

2017-05-01 Thread Adrian Turjak
Hello OpenStack folks,

As part of my dev work I recently put together a cool little tool which
lets me have much easier access to the various OpenStack python clients
in the scope of a python interpreter session. The first version was a
little rough and without os-client-config support. The current version
is now a plugin for the openstackclient and introduces a command that
simply authenticates you, sets up the environment and helper tools, and
then drops you into an ipython interactive session. The helper stuff is
fairly simple, but combined with the features of ipython it really lets
you start playing with the tools quickly, and by piggybacking onto
openstackclient I get access to a lot of the niceties and inbuilt auth
mechanisms.

It is useful for learning, testing, or development against the various
openstack client libraries, and even as an ops tool to quickly run some
basic actions without having to resort to weird or silly bash command
combinations.

I personally use it to test out commands or libraries I'm not familiar
with, or if I just need to work out what the output from something is.
Often I even use it for one-off admin actions that require parsing through and
comparing different values and resources, but aren't worth writing a
script for.

My goal was to make something easy to use, and help almost anyone pick
up and start using the various python clients without needing to dig
through too much documentation.

https://pypi.python.org/pypi/openstack-interpreter

Feedback is welcome!

Cheers,
Adrian Turjak


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Proposal a change of Zun core team

2017-05-01 Thread shubham sharma
+1

Regards
Shubham

On Tue, May 2, 2017 at 6:33 AM, Qiming Teng 
wrote:

> +1
>
> Qiming
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][cinder][ceph] should Tempest tests the backend specific feature?

2017-05-01 Thread Ghanshyam Mann
In Cinder, there are many features/APIs which are backend specific and
will return 405 or 501 if they are not implemented by a given backend [1].
If such tests are implemented in Tempest, then they will break gates
where that backend's job is voting, like the ceph job in the glance_store gate.

There have been many such cases recently where ceph jobs were broken by
such tests; most recently it was the force-delete backup feature [2].
The force-delete tests are being reverted in [3]. To resolve such cases to
some extent, Jon is going to add a white/black list of tests which can run
on the ceph job [4], depending on which features ceph implements. But this
does not resolve it completely, for several reasons:
1. External use of Tempest becomes difficult, since the user needs to know
which tests to skip for which backend.
2. Tempest tests become too specific to a backend.

Now there are a few options to resolve this:
1. Tempest should not test APIs/features which are backend
specific, as noted in the api-ref, like [1].
2. Tempest tests can be disabled/skipped based on the backend (sketched
below). This is not a good idea as it increases config options and the
overhead of setting them.
3. Tempest tests can verify behavior with if/else conditions per
backend. This is a bad idea and weakens the tests.

IMO option 1 is the better option. More feedback is welcome.
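
For concreteness, option 2 would look roughly like the sketch below. The
feature flag is hypothetical; in real Tempest it would have to become a
config option that every deployer sets per backend, which is exactly the
overhead mentioned above.

# Rough sketch only, not a real Tempest test.
import unittest

import testtools

# Stand-in for something like CONF.volume_feature_enabled.force_delete_backup
BACKEND_SUPPORTS_FORCE_DELETE_BACKUP = False


class BackupForceDeleteTest(unittest.TestCase):

    @testtools.skipUnless(BACKEND_SUPPORTS_FORCE_DELETE_BACKUP,
                          'backend does not support force-deleting backups')
    def test_force_delete_backup(self):
        # would create a backup and call the force-delete API here
        pass


if __name__ == '__main__':
    unittest.main()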

..1 
https://developer.openstack.org/api-ref/block-storage/v3/?expanded=force-delete-a-backup-detail#force-delete-a-backup
..2 https://bugs.launchpad.net/glance/+bug/1687538
..3 https://review.openstack.org/#/c/461625/
..4 http://lists.openstack.org/pipermail/openstack-dev/2017-April/115229.html

-gmann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [os-upstream-institute] Order of Slides

2017-05-01 Thread Victoria Martínez de la Cruz
Hey Amy,

IMHO this is not an issue and IIRC we have followed that flow because we
start by showing an overview (the big picture) of how you can contribute to
OpenStack (without any exercise), then we move on to creating the accounts
for contributing (with an exercise) and after that we move on to the actual
process of contributing (filing bugs and so on, with exercises). In other
words, in the overview we don't need the accounts explicitly so it should
be fine.

Cheers,

Victoria

2017-05-02 0:03 GMT-03:00 Amy Marrich :

> I was going over the 2 sections I'm presenting this weekend and noticed
> that in
>
> https://docs.openstack.org/upstream-training/workflow-
> training-contribution-process.html
>
> We talk about submitting and taking bugs, doing reviews and pushing up
> code sets as it's the overview. But the next section
>
> https://docs.openstack.org/upstream-training/workflow-
> reg-and-accounts.html
>
> we sign up for the actual accounts.
>
> I'm not sure if in the future we might want to change the order of these
> sections so that folks can possibly work a little ahead or if we want to
> keep the order to prevent it. Either way we can always reference the other
> section, in this case with a 'In the next section you'll be making the
> account to do this'  or if we switched the order 'Using the username from
> the last section'.
>
> Amy (spotz)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Proposal a change of Zun core team

2017-05-01 Thread Kumari, Madhuri
+1 for both.
Well deserved Feng!

Thanks,
Madhuri

From: Hongbin Lu [mailto:hongbin...@huawei.com]
Sent: Saturday, April 29, 2017 9:35 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [Zun] Proposal a change of Zun core team

Hi all,

I proposes a change of Zun's core team memberships as below:

+ Feng Shengqin (feng-shengqin)
- Wang Feilong (flwang)

Feng Shengqin has contributed a lot to the Zun projects. Her contribution 
includes BPs, bug fixes, and reviews. In particular, she completed an essential 
BP and had a lot of accepted commits in Zun's repositories. I think she is 
qualified for the core reviewer position. I would like to thank Wang Feilong 
for his interest in joining the team when the project was founded. I believe we are 
always friends regardless of his core membership.

By convention, we require a minimum of 4 +1 votes from Zun core reviewers 
within a 1 week voting window (consider this proposal as a +1 vote from me). A 
vote of -1 is a veto. If we cannot get enough votes or there is a veto vote 
prior to the end of the voting window, this proposal is rejected.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Proposal a change of Zun core team

2017-05-01 Thread Fei Long Wang
+1 :)


On 29/04/17 16:05, Hongbin Lu wrote:
>
> Hi all,
>
>  
>
> I proposes a change of Zun’s core team memberships as below:
>
>  
>
> + Feng Shengqin (feng-shengqin)
>
> - Wang Feilong (flwang)
>
>  
>
> Feng Shengqin has contributed a lot to the Zun projects. Her
> contribution includes BPs, bug fixes, and reviews. In particular, she
> completed an essential BP and had a lot of accepted commits in Zun’s
> repositories. I think she is qualified for the core reviewer position.
> I would like to thank Wang Feilong for his interest in joining the team
> when the project was founded. I believe we are always friends regardless
> of his core membership.
>
>  
>
> By convention, we require a minimum of 4 +1 votes from Zun core
> reviewers within a 1 week voting window (consider this proposal as a
> +1 vote from me). A vote of -1 is a veto. If we cannot get enough
> votes or there is a veto vote prior to the end of the voting window,
> this proposal is rejected.
>
>  
>
> Best regards,
>
> Hongbin
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Feilong Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][heat][murano][daisycloud] Removing Heat support from Tempest

2017-05-01 Thread Rabi Mishra
On Fri, Apr 28, 2017 at 2:17 PM, Andrea Frittoli 
wrote:

>
>
> On Fri, Apr 28, 2017 at 10:29 AM Rabi Mishra  wrote:
>
>> On Thu, Apr 27, 2017 at 3:55 PM, Andrea Frittoli <
>> andrea.fritt...@gmail.com> wrote:
>>
>>> Dear stackers,
>>>
>>> starting in the Liberty cycle Tempest has defined a set of projects
>>> which are in scope for direct
>>> testing in Tempest [0]. The current list includes keystone, nova,
>>> glance, swift, cinder and neutron.
>>> All other projects can use the same Tempest testing infrastructure (or
>>> parts of it) by taking advantage
>>> the Tempest plugin and stable interfaces.
>>>
>>> Tempest currently hosts a set of API tests as well as a service client
>>> for the Heat project.
>>> The Heat service client is used by the tests in Tempest, which run in
>>> Heat gate as part of the grenade
>>> job, as well as in the Tempest gate (check pipeline) as part of the
>>> layer4 job.
>>> According to code search [3] the Heat service client is also used by
>>> Murano and Daisycore.
>>>
>>
>> For the heat grenade job, I've proposed two patches.
>>
>> 1. To run heat tree gabbi api tests as part of grenade 'post-upgrade'
>> phase
>>
>> https://review.openstack.org/#/c/460542/
>>
>> 2. To remove tempest tests from the grenade job
>>
>> https://review.openstack.org/#/c/460810/
>>
>>
>>
>>> I proposed a patch to Tempest to start the deprecation counter for Heat
>>> / orchestration related
>>> configuration items in Tempest [4], and I would like to make sure that
>>> all tests and the service client
>>> either find a new home outside of Tempest, or are removed, by the end
>>> the Pike cycle at the latest.
>>>
>>> Heat has in-tree integration tests and Gabbi based API tests, but I
>>> don't know if those provide
>>> enough coverage to replace the tests on Tempest side.
>>>
>>>
>> Yes, the heat gabbi api tests do not yet have the same coverage as the
>> tempest tree api tests (lacks tests using nova, neutron and swift
>> resources),  but I think that should not stop us from *not* running the
>> tempest tests in the grenade job.
>>
>> I also don't know if the tempest tree heat tests are used by any other
>> upstream/downstream jobs. We could surely add more tests to bridge the gap.
>>
>> Also, It's possible to run the heat integration tests (we've enough
>> coverage there) with tempest plugin after doing some initial setup, as we
>> do in all our dsvm gate jobs.
>>
>> It would propose to move tests and client to a Tempest plugin owned /
>>> maintained by
>>> the Heat team, so that the Heat team can have full flexibility in
>>> consolidating their integration
>>> tests. For Murano and Daisycloud - and any other team that may want to
>>> use the Heat service
>>> client in their tests, even if the client is removed from Tempest, it
>>> would still be available via
>>> the Heat Tempest plugin. As long as the plugin implements the service
>>> client interface,
>>> the Heat service client will register automatically in the service
>>> client manager and be available
>>> for use as today.
>>>
>>>
>> if I understand correctly, you're proposing moving the existing tempest
>> tests and service clients to a separate repo managed by heat team. Though
>> that would be a collective decision, I'm not sure that's something I would
>> like to do. To start with we may look at adding some of the missing pieces
>> in heat tree itself.
>>
>
> I'm proposing to move tests and the service client outside of tempest to a
> new home.
>
> I also suggested that the new home could be a dedicate repo, since that
> would allow you to maintain the
> current branchless nature of those tests. A more detailed discussion about
> the topic can be found
> in the corresponding proposed queens goal [5],
>
> Using a dedicated repo *is not* a precondition for moving tests and
> service client out of Tempest.
>
>
We probably are mixing two different things here.

1. Moving the in-tree heat tempest plugin and tests to a dedicated repo

Though we don't have any plans for it now, we may have to do it when/if
it's accepted as a community goal.

2.  Moving tempest tree heat tests and heat service client to a new home
and owner.

I don't think that's something the heat team would like to do given that we
don't use these tests anywhere and would probably spend time improving the
coverage of the gabbi api tests we already have.


> andrea
>
> [5] https://review.openstack.org/#/c/369749/
>
>
>>
>> Andrea Frittoli (andreaf)
>>>
>>> [0] https://docs.openstack.org/developer/tempest/test_
>>> removal.html#tempest-scope
>>> [1] https://docs.openstack.org/developer/tempest/plugin.html
>>> [2] https://docs.openstack.org/developer/tempest/library.html
>>> [3] http://codesearch.openstack.org/?q=self.orchestration_client=nope;
>>> files==
>>> [4] https://review.openstack.org/#/c/456843/
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for 

Re: [openstack-dev] [qa][heat][murano][daisycloud] Removing Heat support from Tempest

2017-05-01 Thread Steve Baker
On Tue, May 2, 2017 at 5:27 AM, MONTEIRO, FELIPE C  wrote:

> Murano currently uses the Tempest orchestration client for its scenario
> Tempest tests [0], which are not turned on by default in the Murano Tempest
> gate due to resource constraints.
>
> However, I'm hesitant to switch to Heat's testing client, because it is
> not a Tempest client, but rather the python-heatclient. I would like to
> know whether there are plans to change this to a Tempest-based client?
>

There are no plans to switch the heat integration/functional tests to using
the tempest based client. The heat tests will use heatclient for most
tests, and gabbi for testing the REST API.

Since you're testing Murano rather than the Heat API, I think converting
your tests to heatclient would be reasonable.
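
For what it's worth, using python-heatclient directly looks roughly like
this - a minimal sketch, with placeholder auth values:

from keystoneauth1 import identity, session
from heatclient import client as heat_client

# Placeholder credentials; in a test these would come from test config.
auth = identity.Password(auth_url='http://controller:5000/v3',
                         username='demo', password='secret',
                         project_name='demo',
                         user_domain_id='default',
                         project_domain_id='default')
sess = session.Session(auth=auth)
heat = heat_client.Client('1', session=sess)

for stack in heat.stacks.list():
    print(stack.stack_name, stack.stack_status)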


> [0] https://github.com/openstack/murano/blob/master/murano_
> tempest_tests/tests/scenario/application_catalog/base.py#L100
> [1] https://github.com/openstack/heat/blob/master/heat_
> integrationtests/common/clients.py#L120
>
> Felipe
>
> -Original Message-
> From: Ghanshyam Mann [mailto:ghanshyamm...@gmail.com]
> Sent: Sunday, April 30, 2017 1:53 AM
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [qa][heat][murano][daisycloud] Removing Heat
> support from Tempest
>
> On Fri, Apr 28, 2017 at 5:47 PM, Andrea Frittoli
>  wrote:
> >
> >
> > On Fri, Apr 28, 2017 at 10:29 AM Rabi Mishra 
> wrote:
> >>
> >> On Thu, Apr 27, 2017 at 3:55 PM, Andrea Frittoli
> >>  wrote:
> >>>
> >>> Dear stackers,
> >>>
> >>> starting in the Liberty cycle Tempest has defined a set of projects
> which
> >>> are in scope for direct
> >>> testing in Tempest [0]. The current list includes keystone, nova,
> glance,
> >>> swift, cinder and neutron.
> >>> All other projects can use the same Tempest testing infrastructure (or
> >>> parts of it) by taking advantage
> >>> the Tempest plugin and stable interfaces.
> >>>
> >>> Tempest currently hosts a set of API tests as well as a service client
> >>> for the Heat project.
> >>> The Heat service client is used by the tests in Tempest, which run in
> >>> Heat gate as part of the grenade
> >>> job, as well as in the Tempest gate (check pipeline) as part of the
> >>> layer4 job.
> >>> According to code search [3] the Heat service client is also used by
> >>> Murano and Daisycore.
> >>
> >>
> >> For the heat grenade job, I've proposed two patches.
> >>
> >> 1. To run heat tree gabbi api tests as part of grenade 'post-upgrade'
> >> phase
> >>
> >> https://urldefense.proofpoint.com/v2/url?u=https-3A__review.
> openstack.org_-23_c_460542_=DwIGaQ=LFYZ-o9_HUMeMTSQicvjIg=X4GwEru-
> SJ9DRnCxhze-aw=aN-OTm6qpDxNIXC86mUeowDuZe9O-NeCWHJdSvrVsYA=
> d2pZwZ8xKsFLHxQ0YNiM4itJjUHzgE0ibHNu7v28mXM=
> >>
> >> 2. To remove tempest tests from the grenade job
> >>
> >> https://urldefense.proofpoint.com/v2/url?u=https-3A__review.
> openstack.org_-23_c_460810_=DwIGaQ=LFYZ-o9_HUMeMTSQicvjIg=X4GwEru-
> SJ9DRnCxhze-aw=aN-OTm6qpDxNIXC86mUeowDuZe9O-NeCWHJdSvrVsYA=07__
> zljUdvdtD_K5ltoKwdjaBwrs0fYJKaXSr93AAiU=
> >>
> >>
> >>>
> >>> I proposed a patch to Tempest to start the deprecation counter for
> Heat /
> >>> orchestration related
> >>> configuration items in Tempest [4], and I would like to make sure that
> >>> all tests and the service client
> >>> either find a new home outside of Tempest, or are removed, by the end
> the
> >>> Pike cycle at the latest.
> >>>
> >>> Heat has in-tree integration tests and Gabbi based API tests, but I
> don't
> >>> know if those provide
> >>> enough coverage to replace the tests on Tempest side.
> >>>
> >>
> >> Yes, the heat gabbi api tests do not yet have the same coverage as the
> >> tempest tree api tests (lacks tests using nova, neutron and swift
> >> resources),  but I think that should not stop us from *not* running the
> >> tempest tests in the grenade job.
> >>
> >> I also don't know if the tempest tree heat tests are used by any other
> >> upstream/downstream jobs. We could surely add more tests to bridge the
> gap.
> >>
> >> Also, It's possible to run the heat integration tests (we've enough
> >> coverage there) with tempest plugin after doing some initial setup, as
> we do
> >> in all our dsvm gate jobs.
> >>
> >>> It would propose to move tests and client to a Tempest plugin owned /
> >>> maintained by
> >>> the Heat team, so that the Heat team can have full flexibility in
> >>> consolidating their integration
> >>> tests. For Murano and Daisycloud - and any other team that may want to
> >>> use the Heat service
> >>> client in their tests, even if the client is removed from Tempest, it
> >>> would still be available via
> >>> the Heat Tempest plugin. As long as the plugin implements the service
> >>> client interface,
> >>> the Heat service client will register automatically in the service
> client
> >>> manager and be