[openstack-dev] [neutron]: Neutron naming legal issues

2016-03-31 Thread Jimmy Akin


Dear Neutrinos,

We've been following the project for quite some time.
To our satisfaction, the project seems to have done well; the baseline of
features available to the networking component of OpenStack
(then nova-network) has grown quite a bit, and the project seems to have gained
successful momentum with both the development and operator
communities.

However, Neutron appears to be a trademarked name [1], and
after thoroughly discussing the issue with our and Marvel's legal departments,
both sides have reached the conclusion that a naming change is an obligatory
amendment and, unfortunately, the only viable option.

An obvious resolution to this issue is reverting to the old "Quantum" name.
However, this is subject to review by the PTL and, as such, we'll
shortly propose a relevant change to the review system.
We anticipate the review process will be swift, to avoid further legal
implications.

Sincerely,
Jimmy J. Akin,
CIO,    John F. Kennedy Space Center.

[1] https://en.wikipedia.org/wiki/Neutron_(Marvel_Comics)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] API priorities in Newton

2016-03-31 Thread GHANSHYAM MANN
On Thu, Mar 31, 2016 at 4:47 AM, Matthew Treinish  wrote:
> On Wed, Mar 30, 2016 at 03:26:13PM -0400, Sean Dague wrote:
>> During the Nova API meeting we had some conversations about priorities,
>> but this feels like the thing where a mailing list conversation is more
>> inclusive to get agreement on things. I think we need to remain focused
>> on what API related work will have the highest impact on our users.
>> (some brain storming was here -
>> https://etherpad.openstack.org/p/newton-nova-api-ideas). Here is a
>> completely straw man proposal on priorities for the Newton cycle.
>>
>> * Top Priority Items *
>>
>> 1. API Reference docs in RST which include microversions (drivers: me,
>> auggy, annegentle) -
>> https://blueprints.launchpad.net/nova/+spec/api-ref-in-rst
>> 2. Discoverable Policy (drivers: laski, claudio) -
>> https://review.openstack.org/#/c/289405/
>> 3. ?? (TBD)
>>
>> I think realistically 3 priority items is about what we can sustain, and
>> I'd like to keep it there. Item #3 has a couple of options.
>>
>> * Lower Priority Background Work *
>>
>> - POC of Gabbi for additional API validation
>> - Microversion Testing in Tempest (underway)
>
> FWIW, the framework for using microversions in tempest is done (and is part of
> tempest.lib too) and the BP for that has been marked as implemented:
>
> http://specs.openstack.org/openstack/qa-specs/specs/tempest/implemented/api-microversions-testing-support.html
>
> All that's needed now is to actually start to leverage it by adding tests with
> microversions. IIRC there is only 1 right now, just a pro forma test for v2.2.
> The docs for using it are here:
>
> http://docs.openstack.org/developer/tempest/microversion_testing.html

Yes, those tests can now be implemented, along with scenario tests using
other projects' microversions where available.

Along with the version 2.2 tests running in the gate, we have 2.10, 2.20 and
2.25 up for review.

The plan is to cover as much as possible in Newton, which should also cover
most of the schema changes and make test implementation easier.

Nova functional tests will mostly cover the top, bottom and change layers of
each microversion:
https://blueprints.launchpad.net/nova/+spec/nova-microversion-functional-tests
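
For reference, a microversion-pinned Tempest test is mostly a matter of setting
the min/max microversion attributes on the test class; the sketch below is
illustrative only (the class name and test body are made up, not the actual
in-tree tests):

from tempest.api.compute import base


class ServersTestV220(base.BaseV2ComputeTest):
    """Illustrative sketch: pin every request in this class to 2.20."""

    # The microversion framework reads these attributes and sends the
    # X-OpenStack-Nova-API-Version header on requests made by this class.
    min_microversion = '2.20'
    max_microversion = '2.20'

    def test_list_servers_with_microversion(self):
        # A trivial call; the interesting part is that it is issued with
        # the 2.20 microversion negotiated from the attributes above.
        servers = self.servers_client.list_servers()['servers']
        self.assertIsInstance(servers, list)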

>
>> - Some of the API WG recommendations
>>
>> * Things we shouldn't do this cycle *
>>
>> - Tasks API - not because it's not a good idea, but because I think
>> until we get ~3 core team members who agree that it's their #1 item
>> for the cycle, it's just not going to get enough energy to go somewhere
>> useful. There are some other things on deck that we just need to clear
>> first.
>> - API wg changes for error codes - we should fix that eventually, but
>> that should come as a single microversion to minimize churn. That's
>> coordination we don't really have the bandwidth for this cycle.
>>
>> * Things we need to decide this cycle *
>>
>> - When are we deleting the legacy v2 code base in tree?
>
> I can get behind doing this. I think we've been running the 2.1 base compat
> as the default for long enough that there aren't gonna be any surprises if
> we drop the old v2 code in Newton.
>
>>
>> * Final priority item *
>>
>> For the #3 priority item one of the things that came up today was the
>> structured errors spec by the API working group. That would be really
>> nice... but in some ways really does need the entire new API reference
>> docs in place. And maybe is better in O.
>>
>> One other issue that we've been blocking on for a while has been
>> Capabilities discovery. Some API proposed adds like live resize have
>> been conceptually blocked behind this one. Once upon a time there was a
>> theory that JSON Home was a thing, and would slice our bread and julien
>> our fries, and solve all this. But it's a big thing to get right, and
>> JSON Home has an unclear future. And, we could serve our users pretty
>> well with a much simpler take on capabilities. For instance
>>
>>  GET /servers/{id}/capabilities
>>
>> {
>> "capabilities" : {
>> "resize": True,
>> "live-resize": True,
>> "live-migrate": False
>> ...
>>  }
>> }
>>
>> Effectively an actions map for servers. Lots of details would have to be
>> sorted out on this one, clearly needs a spec, however I think that this
>> would help unstick some other things people would like in Nova, without
>> making our interop story terrible. This would need a driver for this effort.
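
As an illustration of how a client might consume such a capabilities map, the
hypothetical helper below gates optional actions on it (the endpoint does not
exist yet; the URL, token and server ID are placeholders):

import requests

NOVA_ENDPOINT = 'http://nova.example.com/v2.1'   # placeholder endpoint
HEADERS = {'X-Auth-Token': 'PLACEHOLDER-TOKEN'}  # placeholder token


def server_capability(server_id, action):
    # Fetch the proposed per-server capabilities map and look up one action.
    url = '%s/servers/%s/capabilities' % (NOVA_ENDPOINT, server_id)
    resp = requests.get(url, headers=HEADERS)
    resp.raise_for_status()
    return resp.json().get('capabilities', {}).get(action, False)


if server_capability('11111111-2222-3333-4444-555555555555', 'live-resize'):
    print('live-resize is advertised for this server')
else:
    print('fall back to cold resize')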
>>
>> Every thing here is up for discussion. This is a summary of some of what
>> was in the meeting, plus some of my own thoughts. Please chime in on any
>> of this. It would be good to be in general agreement pre-summit, so we 
>> could focus conversation there more on the hows for getting things done.
>>
>>   -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

Re: [openstack-dev] [searchlight] Add Nova Keypair Plugin

2016-03-31 Thread Tripp, Travis S
Hiroyuki,

Thanks for the update. That sounds like the best course of action. As an FYI, Li 
Yingjun also has a spec up on Nova to get notifications on hypervisors [0], 
which we briefly discussed in the IRC meeting this morning [1]. The two of you 
might be able to work together in getting nova specs and patches up. For most 
simple plugins to searchlight, we don’t need to do a full spec on the 
searchlight side in addition to the launchpad blueprint. But, if you could add 
a couple of bullet points on key pairs the etherpad below [2], it would be 
helpful. We are just trying to get a quick inventory started on that status of 
various notifications in OpenStack and may use that at the summit for 
discussions. Later, we may move this to wiki in table form, but I’m just hoping 
to get the basic info captured for now.

[0] https://review.openstack.org/#/c/299807/
[1] 
http://eavesdrop.openstack.org/meetings/openstack_search/2016/openstack_search.2016-03-31-15.01.log.html
[2] https://etherpad.openstack.org/p/search-team-meeting-agenda

Thanks,
Travis




On 3/31/16, 8:05 PM, "Hiroyuki Eguchi"  wrote:

>Hi Steve
>
>Thank you for your advice.
>Currently it's impossible to sync keypair information between the DB and 
>Elasticsearch,
>and no useful notification is sent for keypair state changes (only 
>key_name).
>So I will try to propose improving keypair notifications in nova-specs.
> 
>Thanks.
>Hiroyuki
>
>
>From: McLellan, Steven [steve.mclel...@hpe.com]
>Sent: March 31, 2016 5:49
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [searchlight] Add Nova Keypair Plugin
>
>Hi Hiroyuki,
>
>It would be worth being certain about what access we have to keypairs before 
>committing to a plugin; if we cannot retrieve the initial list or receive 
>notifications on new keypairs, we likely can't index them at all. If we have 
>partial access we may be able to make a decision on whether it will be good 
>enough. Please feel free to get in touch in IRC (#openstack-searchlight) if 
>that would be useful.
>
>Steve
>
>From: Hiroyuki Eguchi >
>Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>>
>Date: Tuesday, March 29, 2016 at 7:13 PM
>To: "OpenStack Development Mailing List (not for usage questions)" 
>>
>Subject: [openstack-dev] [searchlight] Add Nova Keypair Plugin
>
>Hi Lakshmi,
>
>Thank you for your advice.
>I'm trying to index the public keys.
>I'm gonna try to discuss in searchlight-specs before starting development.
>
>Thanks
>Hiroyuki.
>
>
>
>From: Sampath, Lakshmi [lakshmi.samp...@hpe.com]
>Sent: March 29, 2016 2:03
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [searchlight] Add Nova Keypair Plugin
>
>Hi Hiroyuki,
>
>For this plugin, what data are you indexing in Elasticsearch? I mean, what do 
>you expect users to search on and retrieve? Are you trying to index the public 
>keys?
>Talking directly to DB is not advisable, but before that we need to discuss 
>what data is being indexed and the security implications of that data (RBAC) for users 
>who can/cannot access it.
>
>I would suggest starting a spec in openstack/searchlight-specs under newton for 
>reviewing/feedback.
>https://github.com/openstack/searchlight-specs.git
>
>
>Thanks
>Lakshmi.
>
>From: Hiroyuki Eguchi [mailto:h-egu...@az.jp.nec.com]
>Sent: Sunday, March 27, 2016 10:26 PM
>To: OpenStack Development Mailing List (not for usage questions) 
>[openstack-dev@lists.openstack.org] 
>>
>Subject: [openstack-dev] [searchlight] Add Nova Keypair Plugin
>
>Hi.
>
>I am developing this plugin.
>https://blueprints.launchpad.net/searchlight/+spec/nova-keypair-plugin
>
>However, I faced the problem that an admin user cannot retrieve keypair 
>information created by another user.
>So it is impossible to sync keypairs between the OpenStack DB and 
>Elasticsearch, unless we connect to the OpenStack DB directly.
>Are there any suggestions to resolve this?
>
>thanks.
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer team

2016-03-31 Thread Kumari, Madhuri
+1 from me. Thanks Eli for your contribution.

Regards,
Madhuri

From: Yuanying OTSUKA [mailto:yuany...@oeilvert.org]
Sent: Friday, April 1, 2016 8:13 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core 
reviewer team

+1 for Eli

Thanks
-Yuanying

April 1, 2016 (Fri) 10:59 王华 
>:
+1 for Eli.

Best Regards,
Wanghua

On Fri, Apr 1, 2016 at 9:51 AM, Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) 
> wrote:
+1 for Eli.

Regards,
Gary Duan

From: Hongbin Lu [mailto:hongbin...@huawei.com]
Sent: Friday, April 01, 2016 2:18 AM
To: OpenStack Development Mailing List (not for usage questions) 
>
Subject: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer 
team

Hi all,

Eli Qiao has been consistently contributing to Magnum for a while. His 
contribution started about 10 months ago. Along the way, he implemented 
several important blueprints and fixed a lot of bugs. His contribution covers 
various aspects (i.e. APIs, conductor, unit/functional tests, all the COE 
templates, etc.), which shows that he has a good understanding of almost every 
piece of the system. The feature set he contributed to has proven to be 
beneficial to the project. For example, the gate testing framework he heavily 
contributed to is what we rely on every day. His code reviews are also 
consistent and useful.

I am happy to propose Eli Qiao to be a core reviewer of the Magnum team. According 
to the OpenStack Governance process [1], we require a minimum of 4 +1 votes 
within a 1 week voting window (consider this proposal as a +1 vote from me). A 
vote of -1 is a veto. If we cannot get enough votes or there is a veto vote 
prior to the end of the voting window, Eli is not able to join the core team 
and needs to wait 30 days to reapply.

The voting is open until Thursday, April 7th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer team

2016-03-31 Thread Yuanying OTSUKA
+1 for Eli

Thanks
-Yuanying

April 1, 2016 (Fri) 10:59 王华 :

> +1 for Eli.
>
> Best Regards,
> Wanghua
>
> On Fri, Apr 1, 2016 at 9:51 AM, Duan, Li-Gong (Gary,
> HPServers-Core-OE-PSC)  wrote:
>
>> +1 for Eli.
>>
>>
>>
>> Regards,
>>
>> Gary Duan
>>
>>
>>
>> *From:* Hongbin Lu [mailto:hongbin...@huawei.com]
>> *Sent:* Friday, April 01, 2016 2:18 AM
>> *To:* OpenStack Development Mailing List (not for usage questions) <
>> openstack-dev@lists.openstack.org>
>> *Subject:* [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core
>> reviewer team
>>
>>
>>
>> Hi all,
>>
>>
>>
>> Eli Qiao has been consistently contributing to Magnum for a while. His
>> contribution started about 10 months ago. Along the way, he
>> implemented several important blueprints and fixed a lot of bugs. His
>> contribution covers various aspects (i.e. APIs, conductor, unit/functional
>> tests, all the COE templates, etc.), which shows that he has a good
>> understanding of almost every piece of the system. The feature set he
>> contributed to has proven to be beneficial to the project. For example, the
>> gate testing framework he heavily contributed to is what we rely on every
>> day. His code reviews are also consistent and useful.
>>
>>
>>
>> I am happy to propose Eli Qiao to be a core reviewer of the Magnum team.
>> According to the OpenStack Governance process [1], we require a minimum of
>> 4 +1 votes within a 1 week voting window (consider this proposal as a +1
>> vote from me). A vote of -1 is a veto. If we cannot get enough votes or
>> there is a veto vote prior to the end of the voting window, Eli is not able
>> to join the core team and needs to wait 30 days to reapply.
>>
>>
>>
>> The voting is open until Thursday, April 7th.
>>
>>
>>
>> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>>
>>
>>
>> Best regards,
>>
>> Hongbin
>>
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle] cross openstack L2 networking

2016-03-31 Thread joehuang
Hi, 

We could not find you in #openstack-tricircle. We are discussing cross-OpenStack L2 
networking as scheduled in the weekly meeting: 
https://etherpad.openstack.org/p/TricircleCrossPodL2Networking

Best Regards
Chaoyi Huang ( Joe Huang )


-Original Message-
From: Shinobu Kinjo [mailto:ski...@redhat.com] 
Sent: Thursday, March 31, 2016 4:18 PM
To: joehuang
Cc: Yipei Niu; Vega Cai; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [tricircle] playing tricircle with two node 
configuration

Very good.

Cheers,
S

- Original Message -
From: "joehuang" 
To: "Yipei Niu" , "Vega Cai" 
Cc: "OpenStack Development Mailing List (not for usage questions)" 
, ski...@redhat.com
Sent: Thursday, March 31, 2016 5:07:24 PM
Subject: RE: [openstack-dev] [tricircle] playing tricircle with two node 
configuration

Congratulation. A great step

Best Regards
Chaoyi Huang ( Joe Huang )

From: Yipei Niu [mailto:newy...@gmail.com]
Sent: Thursday, March 31, 2016 4:07 PM
To: Vega Cai
Cc: OpenStack Development Mailing List (not for usage questions); joehuang; 
ski...@redhat.com
Subject: Re: [openstack-dev] [tricircle] playing tricircle with two node 
configuration

Hi, all,

I have already finished playing tricircle with two nodes. The two VMs in 
different pods echo each other after executing ping command.

[Inline image 1]

[Inline image 2]

Thanks a lot for you guys' help.

Best regards,
Yipei

On Wed, Mar 30, 2016 at 11:56 AM, Vega Cai 
> wrote:
Hi Yipei,

The segment id is not correctly assigned to the bridge network, so you get the "None 
is not an integer" message. Which versions of neutron and tricircle do you use? 
Neutron moved the network segment table definition out of the ML2 code tree and 
tricircle has adapted to this change.

BTW, I updated devstack, nova, neutron, keystone, requirements, glance, cinder, 
tricircle to the latest version in my environment yesterday and everything 
worked fine.

BR
Zhiyuan

On 30 March 2016 at 10:19, Yipei Niu 
> wrote:
Hi all,

I have already booted two VMs and successfully created a router with Neutron. 
But I have some trouble with attaching the router to a subnet. The error in 
q-svc.log is as follows:
2016-03-29 15:41:04.065 ERROR oslo_db.api [req-0180a3f5-e34c-4d4c-bd39-3c0c714b02de admin 685f8f37363f4467bead5a375e855ccd] DB error.
2016-03-29 15:41:04.065 TRACE oslo_db.api Traceback (most recent call last):
2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 137, in wrapper
2016-03-29 15:41:04.065 TRACE oslo_db.api     return f(*args, **kwargs)
2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/opt/stack/neutron/neutron/api/v2/base.py", line 217, in _handle_action
2016-03-29 15:41:04.065 TRACE oslo_db.api     ret_value = getattr(self._plugin, name)(*arg_list, **kwargs)
2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/opt/stack/tricircle/tricircle/network/plugin.py", line 768, in add_router_interface
2016-03-29 15:41:04.065 TRACE oslo_db.api     t_bridge_port)
2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/opt/stack/tricircle/tricircle/network/plugin.py", line 686, in _get_bottom_bridge_elements
2016-03-29 15:41:04.065 TRACE oslo_db.api     t_ctx, project_id, pod, t_net, 'network', net_body)
2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/opt/stack/tricircle/tricircle/network/plugin.py", line 550, in _prepare_bottom_element
2016-03-29 15:41:04.065 TRACE oslo_db.api     list_resources, create_resources)
2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/opt/stack/tricircle/tricircle/common/lock_handle.py", line 99, in get_or_create_element
2016-03-29 15:41:04.065 TRACE oslo_db.api     ele = create_ele_method(t_ctx, q_ctx, pod, body, _type)
2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/opt/stack/tricircle/tricircle/network/plugin.py", line 545, in create_resources
2016-03-29 15:41:04.065 TRACE oslo_db.api     return client.create_resources(_type_, t_ctx_, body_)
2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/opt/stack/tricircle/tricircle/common/client.py", line 87, in handle_args
2016-03-29 15:41:04.065 TRACE oslo_db.api     return func(*args, **kwargs)
2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/opt/stack/tricircle/tricircle/common/client.py", line 358, in 

Re: [openstack-dev] [searchlight] Add Nova Keypair Plugin

2016-03-31 Thread Hiroyuki Eguchi
Hi Steve

Thank you for your advice.
Currently it's impossible to sync keypair information between the DB and 
Elasticsearch,
and no useful notification is sent for keypair state changes (only 
key_name).
So I will try to propose improving keypair notifications in nova-specs.
 
Thanks.
Hiroyuki


From: McLellan, Steven [steve.mclel...@hpe.com]
Sent: March 31, 2016 5:49
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [searchlight] Add Nova Keypair Plugin

Hi Hiroyuki,

It would be worth being certain about what access we have to keypairs before 
committing to a plugin; if we cannot retrieve the initial list or receive 
notifications on new keypairs, we likely can't index them at all. If we have 
partial access we may be able to make a decision on whether it will be good 
enough. Please feel free to get in touch in IRC (#openstack-searchlight) if 
that would be useful.

Steve

From: Hiroyuki Eguchi >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Tuesday, March 29, 2016 at 7:13 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [searchlight] Add Nova Keypair Plugin

Hi Lakshmi,

Thank you for your advice.
I'm trying to index the public keys.
I'm gonna try to discuss in searchlight-specs before starting development.

Thanks
Hiroyuki.



From: Sampath, Lakshmi [lakshmi.samp...@hpe.com]
Sent: March 29, 2016 2:03
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [searchlight] Add Nova Keypair Plugin

Hi Hiroyuki,

For this plugin, what data are you indexing in Elasticsearch? I mean, what do you 
expect users to search on and retrieve? Are you trying to index the public keys?
Talking directly to DB is not advisable, but before that we need to discuss 
what data is being indexed and the security implications of that data (RBAC) for users 
who can/cannot access it.
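
To make that concrete, an indexed keypair document would probably only carry a
handful of fields; a purely hypothetical sketch of such a document (the field
names are assumptions, not the plugin's actual mapping):

# Hypothetical example of a keypair document as it might be indexed;
# field names are assumptions, not the plugin's actual mapping.
example_keypair_document = {
    'name': 'mykey',
    'user_id': 'c62c1b3f59e44a41a3b3c628c1b3f59e',  # owner, needed for RBAC filtering
    'fingerprint': '3e:ae:12:...:bd',
    'public_key': 'ssh-rsa AAAAB3Nza... user@host',
    'type': 'ssh',          # exposed by the compute API from microversion 2.2
    'created_at': '2016-03-28T05:26:00Z',
}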

I would suggest starting a spec in openstack/searchlight-specs under newton for 
reviewing/feedback.
https://github.com/openstack/searchlight-specs.git


Thanks
Lakshmi.

From: Hiroyuki Eguchi [mailto:h-egu...@az.jp.nec.com]
Sent: Sunday, March 27, 2016 10:26 PM
To: OpenStack Development Mailing List (not for usage questions) 
[openstack-dev@lists.openstack.org] 
>
Subject: [openstack-dev] [searchlight] Add Nova Keypair Plugin

Hi.

I am developing this plugin.
https://blueprints.launchpad.net/searchlight/+spec/nova-keypair-plugin

However, I faced the problem that an admin user cannot retrieve keypair 
information created by another user.
So it is impossible to sync keypairs between the OpenStack DB and Elasticsearch, 
unless we connect to the OpenStack DB directly.
Are there any suggestions to resolve this?

thanks.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer team

2016-03-31 Thread 王华
+1 for Eli.

Best Regards,
Wanghua

On Fri, Apr 1, 2016 at 9:51 AM, Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
 wrote:

> +1 for Eli.
>
>
>
> Regards,
>
> Gary Duan
>
>
>
> *From:* Hongbin Lu [mailto:hongbin...@huawei.com]
> *Sent:* Friday, April 01, 2016 2:18 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core
> reviewer team
>
>
>
> Hi all,
>
>
>
> Eli Qiao has been consistently contributing to Magnum for a while. His
> contribution started about 10 months ago. Along the way, he
> implemented several important blueprints and fixed a lot of bugs. His
> contribution covers various aspects (i.e. APIs, conductor, unit/functional
> tests, all the COE templates, etc.), which shows that he has a good
> understanding of almost every piece of the system. The feature set he
> contributed to has proven to be beneficial to the project. For example, the
> gate testing framework he heavily contributed to is what we rely on every
> day. His code reviews are also consistent and useful.
>
>
>
> I am happy to propose Eli Qiao to be a core reviewer of the Magnum team.
> According to the OpenStack Governance process [1], we require a minimum of
> 4 +1 votes within a 1 week voting window (consider this proposal as a +1
> vote from me). A vote of -1 is a veto. If we cannot get enough votes or
> there is a veto vote prior to the end of the voting window, Eli is not able
> to join the core team and needs to wait 30 days to reapply.
>
>
>
> The voting is open until Thursday, April 7th.
>
>
>
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>
>
>
> Best regards,
>
> Hongbin
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer team

2016-03-31 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
+1 for Eli.

Regards,
Gary Duan

From: Hongbin Lu [mailto:hongbin...@huawei.com]
Sent: Friday, April 01, 2016 2:18 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer 
team

Hi all,

Eli Qiao has been consistently contributing to Magnum for a while. His 
contribution started about 10 months ago. Along the way, he implemented 
several important blueprints and fixed a lot of bugs. His contribution covers 
various aspects (i.e. APIs, conductor, unit/functional tests, all the COE 
templates, etc.), which shows that he has a good understanding of almost every 
piece of the system. The feature set he contributed to has proven to be 
beneficial to the project. For example, the gate testing framework he heavily 
contributed to is what we rely on every day. His code reviews are also 
consistent and useful.

I am happy to propose Eli Qiao to be a core reviewer of the Magnum team. According 
to the OpenStack Governance process [1], we require a minimum of 4 +1 votes 
within a 1 week voting window (consider this proposal as a +1 vote from me). A 
vote of -1 is a veto. If we cannot get enough votes or there is a veto vote 
prior to the end of the voting window, Eli is not able to join the core team 
and needs to wait 30 days to reapply.

The voting is open until Thursday, April 7th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All][Neutron] Improving development and review velocity - Is it time for a drastic change?

2016-03-31 Thread Nikhil Komawar
And now I wish I could delete my messages sent to the ML.

On 3/31/16 9:38 PM, Nikhil Komawar wrote:
> 2) is a giveaway it's Apr 1 (in some TZs)!
>
>
> On 3/31/16 9:12 PM, Assaf Muller wrote:
>> Have you been negatively impacted by slow development and review
>> velocity? Read on.
>>
>> OpenStack has had a slow review velocity for as long as I can
>> remember. This has a cascading effect where people take up multiple
>> tasks, so that they can work on something while the other is being
>> reviewed. This adds even more patches to ever growing queues. Features
>> miss releases and bugs never get fixed. Even worse, we turn away new
>> contributors due to an agonizing process.
>>
>> In the Neutron community, we've tried a few things over the years.
>> Neutron's growing scope was identified and load balancing, VPN and
>> firewall as a service were split out to their own repositories.
>> Neutron core reviewers had less load, *aaS contributors could iterate
>> faster, it was a win win. Following that, Neutron plugins were split
>> off as well. Neutron core reviewers did not have the expertise or
>> access to specialized hardware of vendors anyway, vendors could
>> iterate faster, and everybody was happy. Finally, a specialization
>> system was created. Areas of the Neutron code base were determined and
>> a "Lieutenant" was chosen for each area. That lieutenant could then
>> nominate core reviewers, and those reviewers were then expected to +2
>> only within their area. This led to doubling the core team, and for my
>> money was a great success. Leading us to today.
>>
>> Today, I think it's clear we still have a grave problem. Patches sit
>> idle for months, turning contributors away. I believe we've reached a
>> tipping point, and now is the time for out of the box thinking. I am
>> proposing two changes:
>>
>> 1) Changing what a core reviewer is. It is time to move to a system of
>> trust: Everyone have +2 rights to begin with, and the system
>> self-regulates by shaming offending individuals and eventually taking
>> away rights for repeated errors in judgement. I've proposed a Neutron
>> governance change here:
>>
>> https://review.openstack.org/300271
>>
>> 2) Now, transform yourself six to twelve months in the future. We now
>> face a new problem. Patches are flying in. You're no longer working on
>> a dozen patches in parallel. You push up something, it is reviewed
>> promptly, and you move on to the next thing. Our next issue is then CI
>> run-time. The time it takes to test (Check queue), approve and test a
>> patch again (Gate queue) is simply too long. How do we cut this down?
>> Again, by using a proven open source methodology of trust. As
>> Neutron's testing lieutenant, I hereby propose that we remove the
>> tests. Why deal with a problem you can avoid in the first place? The
>> Neutron team has been putting out fires in the form of gate issues on
>> a weekly basis, doubly so late into a release cycle. The gate has so
>> many false negatives, the tests are riddled with race conditions,
>> we've clearly failed to get testing right. Needless to say, my
>> proposal keeps pep8 in place. We all know how important a consistent
>> style is. I've proposed a patch that removes Neutron's tests here:
>>
>> https://review.openstack.org/300272
> Well played! But 2) gave it away =]
>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All][Neutron] Improving development and review velocity - Is it time for a drastic change?

2016-03-31 Thread Shinobu Kinjo
On Fri, Apr 1, 2016 at 10:12 AM, Assaf Muller  wrote:
> Have you been negatively impacted by slow development and review
> velocity? Read on.
>
> OpenStack has had a slow review velocity for as long as I can
> remember. This has a cascading effect where people take up multiple
> tasks, so that they can work on something while the other is being
> reviewed. This adds even more patches to ever growing queues. Features
> miss releases and bugs never get fixed. Even worse, we turn away new
> contributors due to an agonizing process.
>
> In the Neutron community, we've tried a few things over the years.
> Neutron's growing scope was identified and load balancing, VPN and
> firewall as a service were split out to their own repositories.
> Neutron core reviewers had less load, *aaS contributors could iterate
> faster, it was a win win. Following that, Neutron plugins were split
> off as well. Neutron core reviewers did not have the expertise or
> access to specialized hardware of vendors anyway, vendors could
> iterate faster, and everybody was happy. Finally, a specialization
> system was created. Areas of the Neutron code base were determined and
> a "Lieutenant" was chosen for each area. That lieutenant could then
> nominate core reviewers, and those reviewers were then expected to +2
> only within their area. This led to doubling the core team, and for my
> money was a great success. Leading us to today.
>
> Today, I think it's clear we still have a grave problem. Patches sit
> idle for months, turning contributors away. I believe we've reached a
> tipping point, and now is the time for out of the box thinking. I am
> proposing two changes:
>
> 1) Changing what a core reviewer is. It is time to move to a system of
> trust: Everyone has +2 rights to begin with, and the system
> self-regulates by shaming offending individuals and eventually taking
> away rights for repeated errors in judgement. I've proposed a Neutron
> governance change here:
>
> https://review.openstack.org/300271

This change would make sense. But will we have core reviewer elections?

>
> 2) Now, transform yourself six to twelve months in the future. We now
> face a new problem. Patches are flying in. You're no longer working on
> a dozen patches in parallel. You push up something, it is reviewed
> promptly, and you move on to the next thing. Our next issue is then CI
> run-time. The time it takes to test (Check queue), approve and test a
> patch again (Gate queue) is simply too long. How do we cut this down?
> Again, by using a proven open source methodology of trust. As
> Neutron's testing lieutenant, I hereby propose that we remove the
> tests. Why deal with a problem you can avoid in the first place? The
> Neutron team has been putting out fires in the form of gate issues on
> a weekly basis, doubly so late into a release cycle. The gate has so
> many false negatives, the tests are riddled with race conditions,
> we've clearly failed to get testing right. Needless to say, my
> proposal keeps pep8 in place. We all know how important a consistent
> style is. I've proposed a patch that removes Neutron's tests here:
>
> https://review.openstack.org/300272
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Email:
shin...@linux.com
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All][Neutron] Improving development and review velocity - Is it time for a drastic change?

2016-03-31 Thread Nikhil Komawar
2) is a giveaway it's Apr 1 (in some TZs)!


On 3/31/16 9:12 PM, Assaf Muller wrote:
> Have you been negatively impacted by slow development and review
> velocity? Read on.
>
> OpenStack has had a slow review velocity for as long as I can
> remember. This has a cascading effect where people take up multiple
> tasks, so that they can work on something while the other is being
> reviewed. This adds even more patches to ever growing queues. Features
> miss releases and bugs never get fixed. Even worse, we turn away new
> contributors due to an agonizing process.
>
> In the Neutron community, we've tried a few things over the years.
> Neutron's growing scope was identified and load balancing, VPN and
> firewall as a service were split out to their own repositories.
> Neutron core reviewers had less load, *aaS contributors could iterate
> faster, it was a win win. Following that, Neutron plugins were split
> off as well. Neutron core reviewers did not have the expertise or
> access to specialized hardware of vendors anyway, vendors could
> iterate faster, and everybody was happy. Finally, a specialization
> system was created. Areas of the Neutron code base were determined and
> a "Lieutenant" was chosen for each area. That lieutenant could then
> nominate core reviewers, and those reviewers were then expected to +2
> only within their area. This led to doubling the core team, and for my
> money was a great success. Leading us to today.
>
> Today, I think it's clear we still have a grave problem. Patches sit
> idle for months, turning contributors away. I believe we've reached a
> tipping point, and now is the time for out of the box thinking. I am
> proposing two changes:
>
> 1) Changing what a core reviewer is. It is time to move to a system of
> trust: Everyone has +2 rights to begin with, and the system
> self-regulates by shaming offending individuals and eventually taking
> away rights for repeated errors in judgement. I've proposed a Neutron
> governance change here:
>
> https://review.openstack.org/300271
>
> 2) Now, transform yourself six to twelve months in the future. We now
> face a new problem. Patches are flying in. You're no longer working on
> a dozen patches in parallel. You push up something, it is reviewed
> promptly, and you move on to the next thing. Our next issue is then CI
> run-time. The time it takes to test (Check queue), approve and test a
> patch again (Gate queue) is simply too long. How do we cut this down?
> Again, by using a proven open source methodology of trust. As
> Neutron's testing lieutenant, I hereby propose that we remove the
> tests. Why deal with a problem you can avoid in the first place? The
> Neutron team has been putting out fires in the form of gate issues on
> a weekly basis, doubly so late into a release cycle. The gate has so
> many false negatives, the tests are riddled with race conditions,
> we've clearly failed to get testing right. Needless to say, my
> proposal keeps pep8 in place. We all know how important a consistent
> style is. I've proposed a patch that removes Neutron's tests here:
>
> https://review.openstack.org/300272

Well played! But 2) gave it away =]

>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [refstack] configuration user story

2016-03-31 Thread Arkady_Kanevsky
Per our discussion at the midcycle,
I have submitted a user story for configuration info use cases. 
https://review.openstack.org/#/c/300057/
Looking forward to reviews.
Once the user story settles, we will start the work on blueprints.
Thanks,
Arkady

Arkady Kanevsky, Ph.D.
Director of SW Development
Dell ESG
Dell Inc. One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 723 5264

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 答复: [Heat] Re-evaluate conditions specification

2016-03-31 Thread Zane Bitter

On 31/03/16 18:10, Zane Bitter wrote:


I'm in favour of some sort of variable-based implementation for a few
reasons. One is that (5) seems to come up fairly regularly in a complex
deployment like TripleO. Another is that Fn::If feels awkward compared
to get_variable.


I actually have to revise this last part after reviewing the patches. 
get_variable can't replace Fn::If, because we'd still need to handle 
stuff of the form:


some_property: {if: [{get_variable: some_var},
 {get_resource: res1},
 {get_resource: res2}]}

where the alternatives can't come from a variable because they contain 
resource references and we have said we'd constrain variables to be static.


In fact the intrinsic functions that could be allowed in the first 
argument to the {if: } function would have to be constrained in the same 
way as the constraint field in the resource, because we should only 
validate and obtain dependencies from _one_ of the alternates, so we 
need to be able to determine which one statically and not have to wait 
until the actual value is resolved. This is possibly the strongest 
argument for keeping on the cfn implementation course.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up, Doc? Mitaka Release Edition

2016-03-31 Thread Lana Brindley
Hi everyone,

While it was very tempting to write something something shocking here for April 
Fools', I thought everyone might have already had their heart rate raised 
enough by the fact that Mitaka is only five days away! This will be my final 
docs newsletter before the release, and the docs are in great shape ready to go 
out.

I would like to sincerely thank Andreas and our release managers Brian and 
Olga, not only for the hard work they've done so far, but also for holding down 
the fort over the next few days while I'm out of contact. Please remember to 
contact them directly if you have any last minute documentation fires that need 
putting out. I'd also like to mention the hard work that the Installation Guide 
testers have been putting in over the past couple of weeks. The testing matrix 
is looking very green 
(https://wiki.openstack.org/wiki/Documentation/MitakaDocTesting) and I'm 
confident that we'll get the Mitaka guide out fully tested and on schedule. 

Don't forget you can check release progress on the etherpad here: 
https://etherpad.openstack.org/p/MitakaRelease
   
== Progress towards Mitaka ==

5 days to go!

558 bugs closed so far for this release.

Docs Testing
* https://wiki.openstack.org/wiki/Documentation/MitakaDocTesting
* Only a few items left to check off, at this stage we're good to publish the 
Ubuntu, RDO, and Suse guides on release day. 

Release Tasks:
* Release planning occurs in our etherpad here: 
https://etherpad.openstack.org/p/MitakaRelease
* All tasks that should be completed at this stage are done, and we're on track 
to start the release process in the 24-48 hours before the release drops.
* Release notes for the docs project are in a "reno-style" directory in the 
openstack-manuals repo. This directory is not being managed by Reno, however, 
so please propose patches to it directly.

== The Road to Austin ==

* First of all, thanks for the amazing feedback on what docs sessions you would 
like to see at the Austin Design Summit! We have filled our entire allocation 
(and then some), and the schedule so far is here:  
https://etherpad.openstack.org/p/Newton-DocsSessions
* There's been a really robust and exciting conversation about the Install 
Guide on the dev list over the past week or so, which I have been following 
closely. If you're interested in overhauling our Install Guide (and it 
certainly seems like a lot of people are!), then please make sure you add our 
Install Guide Design Summit session to your agenda. It is on Wednesday at 
11:50, placed so that it can run over into lunchtime if required.
* A note for those who noticed the Ops guide omission: this should take place 
in the Ops track, and I'll let you know when that session will be.

== Doc team meeting ==

Next meetings:

The US meeting was held this week, you can read the minutes here: 
https://wiki.openstack.org/wiki/Documentation/MeetingLogs#2016-03-30

Next meetings:
APAC: No APAC meeting this week
US: Wednesday 13 April, 19:00 UTC (This is after the Mitaka release)

Please go ahead and add any agenda items to the meeting page here: 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

--

Keep on doc'ing!

Lana

https://wiki.openstack.org/wiki/Documentation/WhatsUpDoc#1_April_2016

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All][Neutron] Improving development and review velocity - Is it time for a drastic change?

2016-03-31 Thread Assaf Muller
Have you been negatively impacted by slow development and review
velocity? Read on.

OpenStack has had a slow review velocity for as long as I can
remember. This has a cascading effect where people take up multiple
tasks, so that they can work on something while the other is being
reviewed. This adds even more patches to ever growing queues. Features
miss releases and bugs never get fixed. Even worse, we turn away new
contributors due to an agonizing process.

In the Neutron community, we've tried a few things over the years.
Neutron's growing scope was identified and load balancing, VPN and
firewall as a service were split out to their own repositories.
Neutron core reviewers had less load, *aaS contributors could iterate
faster, it was a win win. Following that, Neutron plugins were split
off as well. Neutron core reviewers did not have the expertise or
access to specialized hardware of vendors anyway, vendors could
iterate faster, and everybody was happy. Finally, a specialization
system was created. Areas of the Neutron code base were determined and
a "Lieutenant" was chosen for each area. That lieutenant could then
nominate core reviewers, and those reviewers were then expected to +2
only within their area. This led to doubling the core team, and for my
money was a great success. Leading us to today.

Today, I think it's clear we still have a grave problem. Patches sit
idle for months, turning contributors away. I believe we've reached a
tipping point, and now is the time for out of the box thinking. I am
proposing two changes:

1) Changing what a core reviewer is. It is time to move to a system of
trust: Everyone has +2 rights to begin with, and the system
self-regulates by shaming offending individuals and eventually taking
away rights for repeated errors in judgement. I've proposed a Neutron
governance change here:

https://review.openstack.org/300271

2) Now, transform yourself six to twelve months in the future. We now
face a new problem. Patches are flying in. You're no longer working on
a dozen patches in parallel. You push up something, it is reviewed
promptly, and you move on to the next thing. Our next issue is then CI
run-time. The time it takes to test (Check queue), approve and test a
patch again (Gate queue) is simply too long. How do we cut this down?
Again, by using a proven open source methodology of trust. As
Neutron's testing lieutenant, I hereby propose that we remove the
tests. Why deal with a problem you can avoid in the first place? The
Neutron team has been putting out fires in the form of gate issues on
a weekly basis, doubly so late into a release cycle. The gate has so
many false negatives, the tests are riddled with race conditions,
we've clearly failed to get testing right. Needless to say, my
proposal keeps pep8 in place. We all know how important a consistent
style is. I've proposed a patch that removes Neutron's tests here:

https://review.openstack.org/300272

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Adding amuller to the neutron-drivers team

2016-03-31 Thread Armando M.
Hi folks,

Assaf's tenacity is a great asset for the Neutron team at large. I believe
that the drivers team would benefit from that tenacity, and therefore I
would like to announce him as a new member of the Neutron Drivers team
[1].

At the same time, I would like to thank mestery as he steps down. Mestery
has been instrumental in many decisions taken by this team and in
spearheading the creation of the very team back in the Juno days.

As I mentioned in the past, having a propensity for attendance, and a desire
to review RFEs, puts you on the right foot to join the group, whose
members are rotated regularly so that everyone is given the opportunity to
grow, and no-one burns out.

The team [1] meets regularly on Thursdays [2], and anyone is welcome to
attend.

Please join me in welcoming Assaf to the team.

Cheers,
Armando

[1] http://docs.openstack.org/developer/neutron/policies/neutron-teams.html#
drivers-team
[2] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][elections] Voting for the TC Election is now open

2016-03-31 Thread Tristan Cacqueray
Voting for the TC Election is now open and will remain open until
23:59 April 7th, 2016 UTC.

We are selecting 7 TC members, please rank all candidates in your order
of preference.

You are eligible to vote if you are a Foundation individual member[2]
that also has committed to one of the official programs projects[3] over
the Liberty-Mitaka timeframe (March 4, 2015 00:00 UTC to
March 3, 2016 23:59 UTC) or if you are one of the extra-atcs.[4]

What to do if you don't see the email and have a commit in at least one
of the official programs projects[3]:
  * check the trash or spam folder of your gerrit Preferred Email
address[5], in case it went into trash or spam
  * wait a bit and check again, in case your email server is a bit slow
  * find the sha of at least one commit from the program project
repos[3] and email me and Tony[1]. If we can confirm that you are
entitled to vote, we will add you to the voters list and you will
be emailed a ballot.

Our democratic process is important to the health of OpenStack, please
exercise your right to vote.

Candidate statements/platforms can be found linked to Candidate names[0].

Happy voting,
Thank you,
-Tristan

[0]:
https://wiki.openstack.org/wiki/TC_Elections_April_2016#Confirmed_Candidates
[1]: Tony's email: tony at bakeyournoodle dot com
 Tristan's email: tdecacqu at redhat dot com
[2]: http://www.openstack.org/community/members/
[3]:
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=march-2016-elections
Note the tag for this repo, march-2016-elections.
[4]: Look for the extra-atcs element in [3]
[5]: Sign into review.openstack.org:
 Go to Settings > Contact Information.
 Look at the email listed as your Preferred Email.
 That is where the ballot has been sent.



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Docs] Austin Design Summit Sessions - Docs

2016-03-31 Thread Lana Brindley
Hi everyone,

First of all, thanks for the amazing feedback on what docs sessions you would 
like to see at the Austin Design Summit!

We have filled our entire allocation (and then some), and the schedule so far 
looks something like this:

Fishbowls (40 min slot) x 4

Mitaka retro - Wed 11
Toolchain/Infra session - Wed 16:30
Contributor Guide - Thu 9:50
Newton planning - Thu 13:30

Workrooms (40 min slot) x 4

API Guide Wed 9am
Install Guide - Wed 11:50
Security Guide - Thu 11
Networking Guide - Thu 11:50

Meetup (Friday 14:00-17:30)

Contributors meetup. No agenda.


And a note for those who noticed the Ops guide omission: this should take place 
in the Ops track, and I'll let you know when that session will be.

Lana

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [election][tc] tc candidacy

2016-03-31 Thread gordon chung
hi folks,

i'd like to announce my candidacy for the OpenStack Technical Committee.

as a quick introduction, i've been a contributor in OpenStack for the 
past few years, focused primarily on various Telemetry and Oslo related 
projects. i was recently the Project Team Liaison for the Telemetry 
project, and currently, i'm an engineer at Huawei Technologies Canada 
where i work with a team of developers that contribute to the OpenStack 
community.

my views on what the next steps for OpenStack should be are not unique. i 
share the idea: OpenStack needs to refine its mission. this is not to 
dissuade developers from continuing to build upon and extend the 
existing projects but i think OpenStack should concern itself with the 
core story first before worrying about extended use cases.

Also, i believe that the existing projects in OpenStack are too siloed. 
coming from a project that interacts with all other services, it's quite 
apparent that projects are focused solely on their own offerings and 
ignoring how they work globally. tighter collaboration between projects, i 
believe will help make integration easier and more efficient. similar to 
how services within a project just work together, projects should just 
work together.

in many ways i see the Technical Committee as the Cross Project Liaisons 
some of us are searching for. rather than act as Guardians of "what is 
OpenStack", i think the TC should take a more active role in the 
projects and work together with the PTLs to ensure that the "Cloud" 
story makes sense. PTLs and TC members should work side by side to 
identify gaps in the story rather than the current interaction agreement 
of "you're in. we'll talk if stuff hits the fan".

understandably the "Cloud" story is different to many people so i'll 
employ the strategy i've used previously: listen to others, share the 
decision, share the blame, and claim the success.

thanks for your time.

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][designate] Designate Mitaka RC2 available

2016-03-31 Thread Doug Hellmann
Due to release-critical issues spotted in Designate during RC1
testing, a new release candidate was created for Mitaka. You can
find the RC2 source code tarball at:

https://tarballs.openstack.org/designate/designate-2.0.0.0rc2.tar.gz

Unless new release-critical issues are found that warrant a last-minute
release candidate respin, this tarball will be formally released
as final "Mitaka" versions on April 7th. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the mitaka release branch
at:

http://git.openstack.org/cgit/openstack/designate/log/?h=stable/mitaka

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/designate/+filebug

and tag it *mitaka-rc-potential* to bring it to the Designate release crew's
attention.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Floating IPs and Public IPs are not equivalent

2016-03-31 Thread Rochelle Grober
Cross posting to the Ops ML as one/some of them might have a test cloud like 
this.

Operators:

If you respond to this thread, please respond only on the openstack-dev list.

They could use your input ;-)

--Rocky

-Original Message-
From: Sean Dague [mailto:s...@dague.net] 
Sent: Thursday, March 31, 2016 12:58 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Floating IPs and Public IPs are not equivalent

On 03/31/2016 01:23 PM, Monty Taylor wrote:
> Just a friendly reminder to everyone - floating IPs are not synonymous
> with Public IPs in OpenStack.
> 
> The most common (and growing, thank you to the beta of the new
> Dreamcompute cloud) configuration for Public Clouds is directly assign
> public IPs to VMs without requiring a user to create a floating IP.
> 
> I have heard that the require-floating-ip model is very common for
> private clouds. While I find that even stranger, as the need to run NAT
> inside of another NAT is bizarre, it is what it is.
> 
> Both models are common enough that pretty much anything that wants to
> consume OpenStack VMs needs to account for both possibilities.
> 
> It would be really great if we could get the default config in devstack
> to be to have a shared direct-attached network that can also have a
> router attached to it and provider floating ips, since that scenario
> actually allows interacting with both models (and is actually the most
> common config across the OpenStack public clouds)

If someone has the pattern for what that config looks like,
especially if it could work on single-interface machines, that would be
great.

The current defaults in devstack are mostly there for legacy reasons
(and because they work everywhere), and because of the activation energy
needed to get to a new, robust, work-everywhere setup.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Newton Summit: cross-project session for deployment tools projects

2016-03-31 Thread Ryan Hallisey
Good idea Emilien!

Working in both tripleo and Kolla, I'd like to be a part of the conversation.

-Ryan

- Original Message -
From: "Samuel Cassiba" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Thursday, March 31, 2016 6:08:28 PM
Subject: Re: [openstack-dev] [all] Newton Summit: cross-project session for 
deployment tools projects

On Thu, Mar 31, 2016 at 2:40 PM, Emilien Macchi < emil...@redhat.com > wrote: 


Hi, 

OpenStack big tent has different projects for deployments: Puppet, 
Chef, Ansible, Kolla, TripleO, (...) but we still have some topics in 
common. 
I propose we use the Cross-project day to meet and talk about our 
common topics: CI, doc, release, backward compatibility management, 
etc. 

Feel free to look at the proposed session and comment: 
https://etherpad.openstack.org/p/newton-cross-project-sessions 


+1 on this. Looking forward to meeting people and discussing our common pain 
points. 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 答复: [Heat] Re-evaluate conditions specification

2016-03-31 Thread Zane Bitter

On 31/03/16 10:10, Thomas Herve wrote:

On Thu, Mar 31, 2016 at 2:25 PM, Huangtianhua  wrote:

The conditions function has been requested for a long time, and there have been 
several previous discussions, which all ended up debating the 
implementation with no result.
https://review.openstack.org/#/c/84468/3/doc/source/template_guide/hot_spec.rst
https://review.openstack.org/#/c/153771/1/specs/kilo/resource-enabled-meta-property.rst


I wouldn't say no result; all of those discussions ended up moving in 
the direction of the variables thing that Thomas is suggesting here.



And for a reason: this is a tricky issue, and introducing imperative
constructs in a template can lead to bad practices.


Yes, the disagreement was largely over whether we should do it at all.


I think we should focus on the simplest possible way (same as AWS) to meet the 
user requirement; by following AWS, there is no doubt that we will get 
very good compatibility.
And the patches in progress are in good shape. I don't want everything to go back to zero :)
https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/support-conditions-function


I'm sympathetic to this. Happily, I don't think the change being 
proposed is even close to the level of 'back to zero' :)



I'm not saying that you should scratch everything. I'm mostly OK with
what has been done, with the exception of the top-level conditions
section. Templates are our user interface, and we need to be very
careful when we introduce new things. Three years ago following AWS was
the easy path because we didn't have much idea of what to do, but I
believe we now have enough background to be more innovative.


+1

I think looking at the cfn implementation has an unfortunate effect of 
making us look at a number of separate problems as if they were a 
single, monolithic one. The problems it solves are:


1) How to conditionally disable a resource
2) How to conditionally change data (e.g. properties) without requiring 
the user to input all of the data in a parameter
3) How to share the same condition in multiple resources without having 
to duplicate the condition logic
4) How to share the same condition for multiple pieces of data (or share 
between data and resources) without having to duplicate the condition logic


The solution cfn has is a "Conditions" section that contains the logic 
(3) & (4), a "Condition" on a resource that refers to an entry in the 
"Conditions" section (1) and a Fn::If intrinsic function that also 
refers to an entry in the "Conditions" section (2).
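
For readers less familiar with cfn, here is a minimal illustration of that 
shape, written as a Python literal for brevity (real templates are JSON/YAML, 
and the parameter/resource names are made up):

    cfn_style = {
        "Conditions": {
            # (3)/(4): the condition logic lives in one named place
            "CreateProdResources": {"Fn::Equals": [{"Ref": "EnvType"}, "prod"]},
        },
        "Resources": {
            "MountPoint": {
                "Type": "AWS::EC2::VolumeAttachment",
                # (1): conditionally disable the whole resource
                "Condition": "CreateProdResources",
                "Properties": {
                    # (2): conditionally select a property value
                    "Device": {"Fn::If": ["CreateProdResources",
                                          "/dev/sdh", "/dev/sdc"]},
                },
            },
        },
    }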


An obvious problem with (3)/(4) is that they eliminate the need for 
duplicating the condition logic in more than one place, but if you need 
the data that is selected by Fn::If in more than one place then you'll 
end up repeating _that_. Let's call this (5).


A solution using variables (I prefer "values" but I'm happy to bikeshed) 
instead of conditions would solve (2) and (5). (1) and (3) can obviously 
be solved by a "condition" in a resource that takes a value directly 
(instead of just a condition name).
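
To make the comparison concrete, a sketch of how the same fragment might look 
under the variables/values proposal (illustrative syntax only; none of this is 
an existing HOT feature):

    variables_style = {
        "variables": {
            # the condition, written once and reusable (3)
            "is_prod": {"equals": [{"get_param": "env_type"}, "prod"]},
            # data selected from the condition, reusable in many places (2)/(5);
            # note the get_variable inside a variable definition (4)
            "device_name": {"if": [{"get_variable": "is_prod"},
                                   "/dev/sdh", "/dev/sdc"]},
        },
        "resources": {
            "mount_point": {
                "type": "OS::Cinder::VolumeAttachment",
                # (1): a resource condition that takes a value directly
                "condition": {"get_variable": "is_prod"},
                "properties": {
                    "device": {"get_variable": "device_name"},
                },
            },
        },
    }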


Importantly, (4) can be achieved only if the get_variable function is 
allowed inside variable definitions, and that means we need to detect 
circular references. (That'd be a fun little programming problem for 
someone, but not a show-stopper.) You could argue that if you have (3) 
and (5) then (4) is not as important, since the condition in many cases 
might be just a parameter reference without any more complicated logic.
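
For what it's worth, the cycle detection itself is small; a minimal sketch (a 
hypothetical helper, not actual Heat code) might look like:

    def find_refs(defn):
        # Yield names referenced via {"get_variable": <name>} anywhere in a definition.
        if isinstance(defn, dict):
            for key, value in defn.items():
                if key == "get_variable" and isinstance(value, str):
                    yield value
                else:
                    for ref in find_refs(value):
                        yield ref
        elif isinstance(defn, list):
            for item in defn:
                for ref in find_refs(item):
                    yield ref

    def check_cycles(variables):
        # Depth-first walk; a name revisited while still "in progress" is a cycle.
        done, in_progress = set(), set()

        def visit(name, path):
            if name in in_progress:
                raise ValueError("circular reference: " + " -> ".join(path + [name]))
            if name in done or name not in variables:
                return
            in_progress.add(name)
            for ref in find_refs(variables[name]):
                visit(ref, path + [name])
            in_progress.remove(name)
            done.add(name)

        for name in variables:
            visit(name, [])

    # check_cycles({"a": {"get_variable": "b"}, "b": {"get_variable": "a"}})  # raises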



The reason (I speculate) that cfn went with the implementation they did, 
specifically having resource conditions contain the *name* of a 
condition variable rather than a value, is that they probably didn't 
want to have to parse the different parts of the resource definition 
differently. Specifically, the conditions must be resolved statically, and 
therefore stuff like Fn::GetAtt is not valid. It's easy to explain a 
different section of the template where those are not valid, but much 
harder to explain that they're valid in some parts of the resource 
definition but not others.


We could get around this by calling the field condition_variable or 
something and having it contain a name of a variable rather than an 
explicit get_variable function, so that any conditions you want to write 
have to go into a variable, but not all variables would have to be 
conditions.



I'm in favour of some sort of variable-based implementation for a few 
reasons. One is that (5) seems to come up fairly regularly in a complex 
deployment like TripleO. Another is that Fn::If feels awkward compared 
to get_variable.


But maybe the biggest one is that what we choose here is not only our 
public facing user-interface, as Thomas pointed out, but also the API 
interface against which people (including our future selves) can write 
their own template format plugins. The AWS solution can be trivially 
implemented in terms of the variable-based 

Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-31 Thread Steven Dake (stdake)
Kevin,

This is how our playbooks are designed to operate (using serial 30%), but it is 
unclear if workloads can be scheduled during this upgrade because I haven't 
personally seen it tested with Kolla yet.  I have only tested upgrades with VMs 
and no workload, which works fantastically well.

Regards,
-steve

From: "Fox, Kevin M" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Thursday, March 31, 2016 at 10:54 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, 
containers, and the future of TripleO

Ideally it can roll the services one instance at a time while doing the 
appropriate load balancer stuff to make it seamless. Our experience has been 
that even though services should retry, they don't always do it right. So better to 
do it with the LB proper if you can. Between Ansible/container orchestration 
and containers, it should be pretty easy to do, while doing it with just 
packages would be very hard.

Thanks,
Kevin


From: Steven Dake (stdake)
Sent: Thursday, March 31, 2016 1:22:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, 
containers, and the future of TripleO

Kevin,

I am not directly answering your question, but from the perspective of Kolla, 
our upgrades are super simple because we don't make a big mess in the first 
place to upgrade from.  In my experience, this is the number one problem with 
upgrades – everyone makes a mess of the first deployment, so upgrading from 
there is a minefield.  Better to walk straight through that minefield by not 
making a mess of the system in the first place using my favorite deployment 
tool: Kolla ;-)

Kolla upgrades rock.  I have no doubt we will have some minor issues in the 
field, but we have tested 1-month-old master to master upgrades with database 
migrations of the services we deploy, and it takes approximately 10 minutes on 
a 64-node (3 control, rest compute) cluster without VM downtime or loss of 
networking service to the virtual machines.  This is because our upgrades, 
while not totally atomic across the cluster, are pretty darn close and upgrade 
the entire filesystem runtime in one atomic action per service while rolling 
the upgrade in the controller nodes.

During the upgrade process there may be some transient failures for API service 
calls, but they are typically repeated by clients and no real harm is done.  
Note we follow each project's best practices for handling upgrades, without the mess 
of dealing with packaging or configuration on the filesystem and migration 
thereof.

Regards
-steve


From: "Fox, Kevin M" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, March 30, 2016 at 9:12 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, 
containers, and the future of TripleO

The main issue is one of upgradability, not stability. We all know TripleO is 
stable. TripleO can't do upgrades today. We're looking for ways to get there. So 
"upgrading" to Ansible isn't necessary for sure, since folks deploying TripleO 
today must assume they can't upgrade anyway.

Honestly I have doubts that any config management system, from Puppet to Heat 
software deployments, can be coerced to do a cloud upgrade without downtime 
and without a huge amount of workarounds. You really either need a workflow-oriented 
system with global knowledge like Ansible, or a container orchestration 
system like Kubernetes, to ensure you don't change too many things at once and 
break things. You need to be able to run some old things and some new, all at 
the same time, and in some cases different versions/configs of the same service 
on different machines.

Thoughts on how this may be made to work with puppet/heat?

Thanks,
Kevin


From: Dan Prince
Sent: Monday, March 28, 2016 12:07:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, 
containers, and the future of TripleO

On Wed, 2016-03-23 at 07:54 -0400, Ryan Hallisey wrote:
> *Snip*
>
> >
> > Indeed, this has literally none of the benefits of the ideal Heat
> > deployment enumerated above save one: it may be entirely the wrong
> > tool
> > in every way for the job it's being asked to do, but at least it
> > is
> > still 

Re: [openstack-dev] [all] Newton Summit: cross-project session for deployment tools projects

2016-03-31 Thread Samuel Cassiba
On Thu, Mar 31, 2016 at 2:40 PM, Emilien Macchi  wrote:

> Hi,
>
> OpenStack big tent has different projects for deployments: Puppet,
> Chef, Ansible, Kolla, TripleO, (...) but we still have some topics  in
> common.
> I propose we use the Cross-project day to meet and talk about our
> common topics: CI, doc, release, backward compatibility management,
> etc.
>
> Feel free to look at the proposed session and comment:
> https://etherpad.openstack.org/p/newton-cross-project-sessions
>
>
+1 on this. Looking forward to meeting people and discussing our common
pain points.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Newton Summit: cross-project session for deployment tools projects

2016-03-31 Thread Michał Jastrzębski
Ahh, I never run from a good flame war ;)

Just kidding, of course. I personally think this is brilliant and should
become something of a tradition each release, as we all deal with things like
upgrades and deployment changes (yes, I'm looking at you,
nova-api-database).
Please consider me a volunteer from Kolla :)

Regards,
Michal

On 31 March 2016 at 16:55, Jesse Pretorius  wrote:
> On 31 March 2016 at 22:40, Emilien Macchi  wrote:
>>
>>
>> OpenStack big tent has different projects for deployments: Puppet,
>> Chef, Ansible, Kolla, TripleO, (...) but we still have some topics  in
>> common.
>> I propose we use the Cross-project day to meet and talk about our
>> common topics: CI, doc, release, backward compatibility management,
>> etc.
>>
>> Feel free to look at the proposed session and comment:
>> https://etherpad.openstack.org/p/newton-cross-project-sessions
>
>
> Thanks for raising this Emilien. As discussed in #openstack-dev I think
> we're long overdue to have a session like this where we can compare notes
> and find ways to work together to improve the experience of deployers who
> need to implement, upgrade and maintain OpenStack.
>
> I feel strongly that deployment tooling projects play a unique role in the
> community where we touch both Operators and Developers. In doing so we can
> certainly work together to improve the quality of project documentation,
> improve integrated testing for the projects, provide improved design
> examples for common use-cases (with example configurations for our
> respective projects) and contribute generally to the architecture, security
> and other guides which inform operators.
>
> I look forward to it!
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Newton Summit: cross-project session for deployment tools projects

2016-03-31 Thread Jesse Pretorius
On 31 March 2016 at 22:40, Emilien Macchi  wrote:

>
> OpenStack big tent has different projects for deployments: Puppet,
> Chef, Ansible, Kolla, TripleO, (...) but we still have some topics  in
> common.
> I propose we use the Cross-project day to meet and talk about our
> common topics: CI, doc, release, backward compatibility management,
> etc.
>
> Feel free to look at the proposed session and comment:
> https://etherpad.openstack.org/p/newton-cross-project-sessions
>

Thanks for raising this Emilien. As discussed in #openstack-dev I think
we're long overdue to have a session like this where we can compare notes
and find ways to work together to improve the experience of deployers who
need to implement, upgrade and maintain OpenStack.

I feel strongly that deployment tooling projects play a unique role in the
community where we touch both Operators and Developers. In doing so we can
certainly work together to improve the quality of project documentation,
improve integrated testing for the projects, provide improved design
examples for common use-cases (with example configurations for our
respective projects) and contribute generally to the architecture, security
and other guides which inform operators.

I look forward to it!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Newton Summit: cross-project session for deployment tools projects

2016-03-31 Thread Kevin Carter
+1 This is great and I look forward to meeting and talking with folks at 
the summit.

On 03/31/2016 04:42 PM, Emilien Macchi wrote:
> Hi,
>
> OpenStack big tent has different projects for deployments: Puppet,
> Chef, Ansible, Kolla, TripleO, (...) but we still have some topics  in
> common.
> I propose we use the Cross-project day to meet and talk about our
> common topics: CI, doc, release, backward compatibility management,
> etc.
>
> Feel free to look at the proposed session and comment:
> https://etherpad.openstack.org/p/newton-cross-project-sessions
>
> Thanks,
>

--

Kevin Carter
IRC: cloudnull

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Newton Summit: cross-project session for deployment tools projects

2016-03-31 Thread Emilien Macchi
Hi,

OpenStack big tent has different projects for deployments: Puppet,
Chef, Ansible, Kolla, TripleO, (...) but we still have some topics  in
common.
I propose we use the Cross-project day to meet and talk about our
common topics: CI, doc, release, backward compatibility management,
etc.

Feel free to look at the proposed session and comment:
https://etherpad.openstack.org/p/newton-cross-project-sessions

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Newton virtual pre-summit sync

2016-03-31 Thread Nikhil Komawar
Hi all,

I'm excited to see all of you Glancers at the Austin summit and to discuss
our project further.

Nevertheless, I feel that our summit conversations will have a lot more
value if we establish some prior context for the discussions and prepare
ourselves for some of the sessions. Hence, I would like to call for a
virtual sync meetup during the week of April 11th.

My initial plan is to have a 4-hour sync on Tuesday April 12th, from
1400-1800 UTC (we can decide on the length of the sessions and the
required breaks later). I'm sending out this tentative plan now to gauge
feasibility and interest.

Please reply to this thread with your thoughts. If none, looking forward
to seeing you at the event.

P.S. based on the number of interested participants we can decide on the
tool to be used.

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer team

2016-03-31 Thread Adrian Otto
+1

On Mar 31, 2016, at 11:18 AM, Hongbin Lu 
> wrote:

Hi all,

Eli Qiao has been consistently contributing to Magnum for a while. His 
contribution started from about 10 months ago. Along the way, he implemented 
several important blueprints and fixed a lot of bugs. His contribution covers 
various aspects (i.e. APIs, conductor, unit/functional tests, all the COE 
templates, etc.), which shows that he has a good understanding of almost every 
piece of the system. The feature set he contributed to has proven to be 
beneficial to the project. For example, the gate testing framework he heavily 
contributed to is what we rely on every day. His code reviews are also 
consistent and useful.

I am happy to propose Eli Qiao to be a core reviewer of Magnum team. According 
to the OpenStack Governance process [1], we require a minimum of 4 +1 votes 
within a 1 week voting window (consider this proposal as a +1 vote from me). A 
vote of -1 is a veto. If we cannot get enough votes or there is a veto vote 
prior to the end of the voting window, Eli is not able to join the core team 
and needs to wait 30 days to reapply.

The voting is open until Thursday April 7th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][all][ptls] the plan for the final week of the Mitaka release

2016-03-31 Thread Matt Riedemann



On 3/30/2016 8:03 AM, Doug Hellmann wrote:


Folks,

We are approaching the final week of the Mitaka release cycle, and
the release team wants to make sure everyone is clear about what
will be happening and what the policies for changes are.

First, a few dates:

* Tomorrow 31 March is the final day for requesting release
   candidates for projects following the milestone release model.

* Friday 1 April is the last day for requesting full releases for
   service projects following the intermediary release model.

* Early next Thursday, 7 April, we will tag the most recent release
   candidate for each milestone project as the final release and
   announce that Mitaka has been released.

During the week between 31 March and 7 April, the release team will
reject or postpone requests for new library releases and new service
release candidates by default. Only truly critical bug fixes (which
could not be fixed post-release, as determined by the release team)
may end up triggering releases during this time.

Although we will be extremely picky about which changes can go into
new release candidates, critical fixes will be considered. It is
better to have no extraneous changes merged into stable/mitaka
during this period to avoid having to revert a non-critical change
in order to prepare a new release candidate for a critical issue.
Please do not approve patches unless they are for CRITICAL fixes.
The release team reserves the right to refuse to release a project
with extraneous changes added.

I have some personal travel planned early next week. While I'm
unavailable, Thierry is going to manage the release team and release
process. If he tells you "no", don't expect me to tell you "yes".


But but but that's exactly what my 4 year old does...



Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][stable][winstackers] os-win 0.4.1 release (mitaka)

2016-03-31 Thread no-reply
We are pleased to announce the release of:

os-win 0.4.1: Windows / Hyper-V library for OpenStack projects.

This release is part of the mitaka stable release series.

With source available at:

http://git.openstack.org/cgit/openstack/os-win

Please report issues through launchpad:

http://bugs.launchpad.net/os-win

For more details, please see below.

Changes in os-win 0.4.0..0.4.1
--

a8a16f8 python3: Fixes vhdutils internal VHDX size
ff81f1c Consistently raise exception if port not found
d89b2c4 Ensure vmutils honors the host argument

Diffstat (except docs and test files)
-

os_win/utils/compute/vmutils.py|  2 +-
os_win/utils/network/networkutils.py   | 44 --
os_win/utils/storage/virtdisk/vhdutils.py  |  2 +-
5 files changed, 31 insertions(+), 38 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Segments, subnet types, and IPAM

2016-03-31 Thread Brian Haley

On 03/28/2016 07:17 PM, Carl Baldwin wrote:

On Fri, Mar 11, 2016 at 10:04 PM, Salvatore Orlando
 wrote:

On 11 March 2016 at 23:15, Carl Baldwin  wrote:
I wonder if we could satisfy this requirement with tags - as it seems these
subnets are anyway operator-owned you should probably not worry about
regular tenants fiddling with them, and therefore the "helper" subnet needed
for the fip namespace could just be tagged to the purpose.


We discussed tags at the mid-cycle.  We decided against using them.
One reason is that tags are pretty new and are a moving target.
Another reason I was hesitant is that tags were designed to be user
facing.  As I recall, we didn't want the tags to affect code behavior.
But, maybe I misunderstood that.  We want service types to affect
IPAM, at least.

Brian Haley is going to put up a spec for this service type attribute
work.  Brian, would you mind posting a link to your spec review when
you've posted it?


The Subnet service-type attribute spec is at:

https://review.openstack.org/#/c/300207/

Be gentle :)

-Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Mitaka RC2 available

2016-03-31 Thread Lingxian Kong
Hi, OpenStackers:

Due to release-critical issues spotted in Mistral during RC1 testing,
a new release candidate was created for Mitaka. You can find the RC2
source code tarballs at:

https://tarballs.openstack.org/mistral/mistral-2.0.0.0rc2.tar.gz

Unless new release-critical issues are found that warrant a
last-minute release candidate respin, this tarball will be formally
released as the final "Mitaka" version on April 7th. You are therefore
strongly encouraged to test and validate this tarball!

Alternatively, you can directly test the mitaka release branches at:

http://git.openstack.org/cgit/openstack/mistral/log/?h=stable/mitaka

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/mistral/+filebug

and tag it *mitaka-rc-potential* to bring it to the Mistral release
crew's attention.

Thanks for all the effort of Mistral team members!

Cheers!

-- 
Regards!
---
Lingxian Kong

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [election][tc] Not Running for Re-Election

2016-03-31 Thread Doug Hellmann
Excerpts from Mark McClain's message of 2016-03-31 16:17:42 -0400:
> All-
> 
> I wanted to drop a note to let everyone know that I will not seek re-election 
> to the TC for this cycle.  I’ve really enjoyed working with everyone in our 
> community through the TC for the past three years. I’m super excited by the 
> candidates who are running and by where they’ll lead our community.  I’ll still 
> be around working on various networking initiatives and look forward to 
> seeing many of you in Austin.
> 
> mark
> 

Thanks for your service, Mark. It has been valuable to have your
insights, especially since Neutron is one of our larger projects.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel][ConfigDB] Separating node and cluster serialized data

2016-03-31 Thread Andrew Woodward
One of the problems we've faced in trying to plug in ConfigDB is trying
to separate the cluster attributes from the node attributes in the
serialized output (i.e. astute.yaml).

I started talking with Alex S about how we could separate them after
astute.yaml is prepared; trying to ensure which key was which, we came back
uncertain that the results would be accurate.

So I figured I'd go back to the source and see if there was a way to know
which keys belonged where. It turns out that we could solve the problem in
a simpler and more precise way than cutting them back apart later.

Looking over deployment_serializers.py [1], the serialized data follows
a simple workflow:

iterate over every node in cluster
  if node is customized:
serialized_data = node.replaced_deployment_data
  else:
serialized_data = dict_merge(
  serialize_node(node),
  get_common_attrs(cluster))

Taking this into account, we can simply construct an extension to expose these
as APIs so that we can consume them as a task in the deployment graph.

Cluster:
We can simply expose
DeploymentMultinodeSerializer().get_common_attrs(cluster)

This would then be plumbed to the cluster level in ConfigDB

Node:
if a Node has customized data, then we can return that at the node level;
this continues to work the same as native since it most likely has the
Cluster data merged into it.

otherwise we can return the node serialized with the first
'role' the node has.

We would expose DeploymentMultinodeSerializer().serialize_node(node,
objects.Node.all_roles(node)[0])

for our usage, we don't need to worry about the normal node/role
combination, as the choice only influences the 'role' and 'fail_if_error'
attributes, neither of which is consumed in the library.
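
Roughly, the extension handlers could look like the sketch below. It only uses
the calls named above; the import paths and the customized-node check are my
assumptions about Nailgun internals, not tested code.

    from nailgun import objects
    from nailgun.orchestrator.deployment_serializers import (
        DeploymentMultinodeSerializer)

    def cluster_level_data(cluster):
        # Cluster-common attributes -> ConfigDB cluster level
        return DeploymentMultinodeSerializer().get_common_attrs(cluster)

    def node_level_data(node):
        # Assumption: replaced_deployment_data is set only for customized nodes
        if node.replaced_deployment_data:
            # customized nodes already carry the merged (cluster + node) data
            return node.replaced_deployment_data
        # otherwise serialize with the node's first role; the role choice only
        # affects 'role' and 'fail_if_error', which the library does not consume
        return DeploymentMultinodeSerializer().serialize_node(
            node, objects.Node.all_roles(node)[0])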

[1] https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/orchestrator/deployment_serializers.py#L93-L121
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [election][tc] Not Running for Re-Election

2016-03-31 Thread Mark McClain
All-

I wanted to drop a note to let everyone know that I will not seek re-election 
to the TC for this cycle.  I’ve really enjoyed working with everyone in our 
community through the TC for the past three years. I’m super excited by the 
candidates who are running and by where they’ll lead our community.  I’ll still be 
around working on various networking initiatives and look forward to seeing 
many of you in Austin.

mark


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][election] TC Candidacy

2016-03-31 Thread Matthew Treinish
Hi Everyone,

I'd like to submit my candidacy for the OpenStack Technical Committee.

I've been involved with OpenStack since the Folsom cycle. In that time I've
worked on a lot of varied parts of OpenStack, and I have started
and led several key initiatives and projects (mostly centered around QA and the
gate), including:

 * Starting the elastic-recheck project with Joe Gordon
 * Enabling parallel tempest execution. This made our gate test environment more
   closely resemble a real environment by having multiple API requests happen at
   once. It shook loose a ton of race conditions in projects
 * Creating tempest-lib and the tempest plugin interface
 * Creating subunit2sql and starting the openstack-health dashboard
 * Helping debug gate issues and consistently helping with firefighting blocking
   gate issues

I also served as the QA PTL for the past 4 cycles, from Juno through Mitaka. It
was in this role that I interacted with most of the projects/teams in
OpenStack and gained an appreciation for where OpenStack works at its best and
at its worst.

As a community, I see our greatest weaknesses (and strengths) coming from having
a large and diverse platform and ecosystem. While I do believe that for
OpenStack to succeed we need a large ecosystem of projects, which the big
tent was introduced to address, I feel in the process we have lost some
concentration as a community on having a strong base and clear messaging about
OpenStack. With everyone distracted by the big tent, the things in
the small tent, which holds up the entire ecosystem, often do not get the
attention they should.

The other aspect that comes with this is the messaging around OpenStack. I've
had many conversations with people outside the community about how they choose
not to use or contribute to OpenStack because it's not clear what it is, where
to begin, or how to use it. I feel this is largely because we grew quite quickly
after converting to the big tent and we need to do a better job of helping users
bridge the gap here. Tags were a start, but I still think there is a way to
go before we can say we've solved this problem.

As a member of the TC I'd want to bring more focus to these problems. I'd like
to see the TC take a more active role and take a more hands on approach in the
technical oversight of projects. I'd also want to work on making our messaging
about what OpenStack is much clearer.

It would be my honor and privilege to serve the community if I'm lucky enough
to be elected to the TC.

Thanks,

Matthew Treinish

IRC: mtreinish
Review history: 
https://review.openstack.org/#/q/reviewer:mtreinish%2540kortar.org
Commit history: https://review.openstack.org/#/q/owner:mtreinish%2540kortar.org
Stackalytics: http://stackalytics.com/?metric=commits_id=treinish
Blog: http://blog.kortar.org/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Floating IPs and Public IPs are not equivalent

2016-03-31 Thread Sean Dague
On 03/31/2016 01:23 PM, Monty Taylor wrote:
> Just a friendly reminder to everyone - floating IPs are not synonymous
> with Public IPs in OpenStack.
> 
> The most common (and growing, thank you to the beta of the new
> Dreamcompute cloud) configuration for Public Clouds is directly assign
> public IPs to VMs without requiring a user to create a floating IP.
> 
> I have heard that the require-floating-ip model is very common for
> private clouds. While I find that even stranger, as the need to run NAT
> inside of another NAT is bizarre, it is what it is.
> 
> Both models are common enough that pretty much anything that wants to
> consume OpenStack VMs needs to account for both possibilities.
> 
> It would be really great if we could get the default config in devstack
> to be to have a shared direct-attached network that can also have a
> router attached to it and provider floating ips, since that scenario
> actually allows interacting with both models (and is actually the most
> common config across the OpenStack public clouds)

If someone has the pattern for what that config looks like,
especially if it could work on single-interface machines, that would be
great.

The current defaults in devstack are mostly there for legacy reasons
(and because they work everywhere), and because of the activation energy
needed to get to a new, robust, work-everywhere setup.
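
As an aside for anyone writing tooling against clouds of both kinds, here is a
minimal, hypothetical sketch of what "accounting for both possibilities" tends
to look like (the client object and its methods are made up for illustration,
not any particular SDK):

    def get_reachable_ip(client, server, external_network):
        # Case 1: the cloud attaches a publicly routable address directly.
        for addr in client.server_addresses(server):
            if client.is_publicly_routable(addr):
                return addr

        # Case 2: no routable address, so fall back to the floating IP model:
        # allocate one from the external network and associate it.
        fip = client.create_floating_ip(external_network)
        client.associate_floating_ip(server, fip)
        return fip.address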

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Cross Project Session proposals due by Apr 2nd

2016-03-31 Thread Sean Dague
Reminder that cross project session proposals are due by Apr 2nd -
https://etherpad.openstack.org/p/newton-cross-project-sessions

We've got 15 proposals at this point, and I expect a bunch to come in
later this week.

Also, if you aren't sure whether or not your proposal is "cross project
enough", please feel free to propose. We can only build a schedule out
of proposed content, and in talking with a number of contributors this
week I think some of our more active contributors were heavily self
censoring discussions they were putting forward. We can decide during
the selection process, and we have a bit more flexibility in rooms in
Austin than we had in Tokyo.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][tc] [security] threat analysis, tags, and the road ahead

2016-03-31 Thread Steven Dake (stdake)
Including tc and kolla

Michael,

Sounds good.  I'll get the governance changes rolling for debate at the
next TC meeting.

Note I added this cross project topic for discussion Tuesday at summit
(last item in the list)
https://etherpad.openstack.org/p/newton-cross-project-sessions

Regards,
-steve

On 3/31/16, 12:15 PM, "michael mccune"  wrote:

>hey all,
>
>at the most recent ossp meeting[1], there was some extended discussion
>about threat analysis and the work that is being done to push this
>forward.
>
>in this discussion, there were several topics brought up around the
>process for doing these analyses on current projects and how the ossp
>should proceed especially with respect to the "vulnerability:managed"
>tag[2].
>
>as for the threat analysis process, there are still several questions
>which need to be answered:
>
>* what is the process for performing an analysis
>
>* how will an analysis be formally recognized and approved
>
>* who will be doing these analyses
>
>* does it make sense to keep the analysis process strictly limited to
>the vmt
>
>* how to address commonalities and differences between a developer
>oriented analysis and a deployer oriented analysis
>
>these questions all feed into another related topic, which is the
>proposed initial threat analysis for kolla which has been suggested to
>start at the upcoming austin summit.
>
>i wanted to capture some of the discussion happening around this topic,
>and continue the ball rolling as to how we will solve these questions as
>we head to summit.
>
>one of the big questions seems to be who should be doing these analyses,
>especially given that the ossp has not formally codified the practice
>yet, and the complexity involved. although currently the
>vulnerability:managed tag suggests that a third party review should be
>done, this may prove difficult to scale in practice. i feel that it
>would be in the best interests of the wider openstack community if the
>ossp works towards creating instructional material that can empower the
>project teams to start their own analyses.
>
>ultimately, having a third-party review of a project is a worthy goal, but
>this has to be tempered with the reality that a single team will not be
>able to scale out and provide thorough analyses for all projects. to
>that extent, the ossp should work, initially, to help a few teams get
>these analyses completed and in the process create a set of useful tools
>(docs, guides, diagrams, foil-hat wearing pamphlets) to help further the
>effort.
>
>i would like to propose that the threat analysis portion of the
>vulnerability:managed tag be modified with the goal of having the
>project teams create their own analyses, with an extended third-party
>review to be performed afterwards. in this respect, the scale issue can
>be addressed, as well as the issue of project domain knowledge. it makes
>much more sense to me to have the project team creating the initial work
>here as they will know the areas, and architectures, that will need the
>most attention.
>
>as the ossp builds better tools for generating these analyses they will
>be in a much better position to guide project teams in their initial
>analyses, with the ultimate goal of having the ossp, and/or vmt, perform
>the third-party audit for application of the tag. i have a feeling we
>will also discover much overlap between the developer and deployer
>oriented analyses, and these overlaps will help to strengthen the
>process for all involved.
>
>finally, the austin summit, and proposed kolla review, provide a great
>opportunity for the ossp to put "rubber on the road" with respect to
>this process. although a full analysis may not be accomplished during
>the summit, we can definitely achieve the goal of defining this process
>much better and creating more guidance for all projects that wish to
>follow this path, as well as having kolla solidly on the way to having a
>full threat analysis ready for third-party review.
>
>thoughts?
>
>
>regards,
>mike
>
>
>[1]: 
>http://eavesdrop.openstack.org/meetings/security/2016/security.2016-03-31-
>17.00.log.txt
>
>[2]: 
>http://governance.openstack.org/reference/tags/vulnerability_managed.html
>
>[3]: https://review.openstack.org/#/c/220712/
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [election][tc] TC Candidacy

2016-03-31 Thread Morgan Fainberg
Friends, Colleagues, and Fellow Community Members, lend me your ears (or
eyes
as the case may be).

I am pleased to submit for your consideration my candidacy for the technical
committee (TC).

You may know me from such projects as Keystone, and the “From Camera to
Couch”
segment of Jonathan Bryce’s opening keynote at the Vancouver OpenStack
Summit.
I have been an active contributor to OpenStack and have served in a
leadership
role since August of 2013 (as core contributor for Keystone and PTL for two
cycles). During my time contributing to OpenStack I have been focused on
improving the interoperability between clouds, improving the cross
communication between projects, and collecting (both direct and indirect)
feedback from the many deployers and users of OpenStack.

With some distance from being PTL of Keystone, I feel that it is time to
refocus my contributions to OpenStack and work to improve the community in a
different but also very important manner.

The most important thing we can do as a community is to continue to improve
the feedback loop from those who are actively using OpenStack. This means
that we need to continue to clearly hear the deployers and improve the story
around running OpenStack (at small, medium and large scales). We also need
to
improve the voice of the end user (those who are utilizing the actual APIs
themselves on a daily basis); as a community we continually improve our
ability to hear feedback on the projects in the Big Tent, but we must also
be mindful that the demand for OpenStack (especially by the end users) is
what makes it more compelling. The TC provides an overarching vision and
encourages the continued inclusion of the many different facets of our
community.

I believe that with initiatives like the API Working Group and the
Product Working Group, we can make OpenStack significantly better and more
compelling. I hope to see the TC continue to encourage cross-project
communication, move the “recommendations” provided by the API Working
Group from advisory towards stronger “tenets of OpenStack API design”,
and work towards codifying a slightly more opinionated OpenStack. These
changes will serve the project for a long time to come.

So in short here is what aim to do as a member of the TC:

* Encourage a strong vision for OpenStack as a whole, ensuring new projects
and old are at home in the “Big Tent”; this includes helping to eliminate as
much bureaucratic red tape for inclusion in OpenStack as possible while making
sure OpenStack still feels like a cohesive ecosystem

* Continue to encourage and drive solid interoperability between deployments
(via defcore and working to drive the API working group recommendations to
something more firm).

* Provide mentorship (both directly and through reaching out to the
community) to new contributors. As a member of the TC it is important to
lead by example, especially when it comes to bringing in new members to the
community (developer, operator, and end user alike).

* Continue to be an advocate for both the deployers and the end users, and
work to continue improving the feedback loop between this part of our
community and the developers.

* Work with the rest of the TC, the Foundation, and the Board to continue
solidifying the answer to “what is OpenStack” and how it fits into the needs
of the many consumers of “cloud” technologies.

Finally, I look forward to continuing to be part of this amazing Open Source
community for many more development cycles.

Thanks for your time, consideration, and contributions to this community.

Cheers,
Morgan Fainberg

IRC: "morgan" (or "notmorgan")
Review History:
https://review.openstack.org/#/q/reviewer:morgan.fainberg%2540gmail.com
Commit History:
https://review.openstack.org/#/q/owner:morgan.fainberg%2540gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [security] threat analysis, tags, and the road ahead

2016-03-31 Thread michael mccune

hey all,

at the most recent ossp meeting[1], there was some extended discussion 
about threat analysis and the work that is being done to push this forward.


in this discussion, there were several topics brought up around the 
process for doing these analyses on current projects and how the ossp 
should proceed especially with respect to the "vulnerability:managed" 
tag[2].


as for the threat analysis process, there are still several questions 
which need to be answered:


* what is the process for performing an analysis

* how will an analysis be formally recognized and approved

* who will be doing these analyses

* does it make sense to keep the analysis process strictly limited to 
the vmt


* how to address commonalities and differences between a developer 
oriented analysis and a deployer oriented analysis


these questions all feed into another related topic, which is the 
proposed initial threat analysis for kolla which has been suggested to 
start at the upcoming austin summit.


i wanted to capture some of the discussion happening around this topic, 
and continue the ball rolling as to how we will solve these questions as 
we head to summit.


one of the big questions seems to be who should be doing these analyses, 
especially given that the ossp has not formally codified the practice 
yet, and the complexity involved. although currently the 
vulnerability:managed tag suggests that a third party review should be 
done, this may prove difficult to scale in practice. i feel that it 
would be in the best interests of the wider openstack community if the 
ossp works towards creating instructional material that can empower the 
project teams to start their own analyses.


ultimately, having a third-party review of a project is a worthy goal, but 
this has to be tempered with the reality that a single team will not be 
able to scale out and provide thorough analyses for all projects. to 
that extent, the ossp should work, initially, to help a few teams get 
these analyses completed and in the process create a set of useful tools 
(docs, guides, diagrams, foil-hat wearing pamphlets) to help further the 
effort.


i would like to propose that the threat analysis portion of the 
vulnerability:managed tag be modified with the goal of having the 
project teams create their own analyses, with an extended third-party 
review to be performed afterwards. in this respect, the scale issue can 
be addressed, as well as the issue of project domain knowledge. it makes 
much more sense to me to have the project team creating the initial work 
here as they will know the areas, and architectures, that will need the 
most attention.


as the ossp builds better tools for generating these analyses they will 
be in a much better position to guide project teams in their initial 
analyses, with the ultimate goal of having the ossp, and/or vmt, perform 
the third-party audit for application of the tag. i have a feeling we 
will also discover much overlap between the developer and deployer 
oriented analyses, and these overlaps will help to strengthen the 
process for all involved.


finally, the austin summit, and proposed kolla review, provide a great 
opportunity for the ossp to put "rubber on the road" with respect to 
this process. although a full analysis may not be accomplished during 
the summit, we can definitely achieve the goal of defining this process 
much better and creating more guidance for all projects that wish to 
follow this path, as well as having kolla solidly on the way to having a 
full threat analysis ready for third-party review.


thoughts?


regards,
mike


[1]: 
http://eavesdrop.openstack.org/meetings/security/2016/security.2016-03-31-17.00.log.txt


[2]: 
http://governance.openstack.org/reference/tags/vulnerability_managed.html


[3]: https://review.openstack.org/#/c/220712/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer team

2016-03-31 Thread Ton Ngo

+1 for Eli

Ton Ngo,



From:   Hongbin Lu 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   03/31/2016 11:21 AM
Subject:[openstack-dev] [magnum] Proposing Eli Qiao for Magnum core
reviewer team



Hi all,

Eli Qiao has been consistently contributing to Magnum for a while. His
contribution started from about 10 months ago. Along the way, he
implemented several important blueprints and fixed a lot of bugs. His
contribution covers various aspects (i.e. APIs, conductor, unit/functional
tests, all the COE templates, etc.), which shows that he has a good
understanding of almost every piece of the system. The feature set he
contributed to has proven to be beneficial to the project. For example, the
gate testing framework he heavily contributed to is what we rely on every
day. His code reviews are also consistent and useful.

I am happy to propose Eli Qiao to be a core reviewer of Magnum team.
According to the OpenStack Governance process [1], we require a minimum of
4 +1 votes within a 1 week voting window (consider this proposal as a +1
vote from me). A vote of -1 is a veto. If we cannot get enough votes or
there is a veto vote prior to the end of the voting window, Eli is not able
to join the core team and needs to wait 30 days to reapply.

The voting is open until Thursday April 7th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin

 __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] PTL communication of offline time

2016-03-31 Thread Anita Kuno
I proposed a resolution to the TC regarding PTL leaves of absence,
outlining a workflow of communication:
https://review.openstack.org/#/c/290141/
The proposal was struck down in favour of a change to the Project Team
Guide: https://review.openstack.org/#/c/293170/

I recently was contacted by a PTL who thought my proposed resolution was
a good idea and was in search of some advice. The following is the
essence of what I suggested.

If you are going offline but retaining your position of responsibility
regarding the project for which you are a PTL, my resolution wouldn't
have applied anyway since the only situation it covered was if someone
was not going to be responsible for the leadership of the program any
longer and didn't have time/leeway to work through a change of leadership
for the project/program/team (insert current word here).

If you are a PTL that is retaining responsibility for your team but will
be offline for a bit, I can see why you would want to make provisions
for your team and want to communicate them.

Personally I would like to avoid a stream of "out of office"
notifications to the mailing lists.

My suggestion is to mention your planned offline time to your team at
your weekly team meeting (logged and in a discrete unit so an onlooker
only has to read the log from that meeting, not an entire day's worth of
channel chat in case it was a conversation rather than a statement). If
you want to deputize someone on the team to make decisions in your
stead, name that person and have them acknowledge their responsibility
in the meeting log.

If the deputy is told someone needs to make a decision in a hurry but
wants to wait for the PTL's approval, the deputy can merely share the
url of the meeting log that acknowledges the agreement between PTL and
deputy, which I think is all most folks want to know anyway.

Now, these are merely my suggestions; you are bound by the provisions
outlined in the approved documentation, not by resolutions that didn't
merge. But since I was asked, I thought I would share what I had advised.

Thanks,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OVSDB native interface as default in gate jobs

2016-03-31 Thread Clark Boylan
On Tue, Mar 29, 2016, at 07:34 PM, Kevin Benton wrote:
> I'm not aware of any issues. Perhaps you can propose a patch to just
> change
> the default in Neutron to that interface and people can -1 if there are
> any
> concerns.

Yes, please do. This is one of the nice things about pre-merge testing:
we can all see how it works up front before committing to it.

As far as concerns go I got curious about this and did some digging
around documentation and found basically zero docs. It is possible I
don't know where to look, but I would expect that we should be
documenting the behavior of the various interface options? However, this
is a small concern and can be addressed if/when the changes are made to
Neutron.

In my digging I did find that there is also an of_interface that
defaults to ovs-ofctl, does this also have a native interface that could
be used?
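
For concreteness, here is a rough sketch of what flipping both defaults might
look like, assuming the two options are registered with oslo.config in the OVS
agent's config module roughly as below (the names and choices match Mitaka as
far as I know; the help strings are illustrative, not copied from Neutron):

from oslo_config import cfg

# Sketch only, not the actual Neutron patch: flip the defaults of the two
# existing [ovs] options to the native interfaces.
ovs_opts = [
    cfg.StrOpt('ovsdb_interface',
               default='native',                # previously 'vsctl'
               choices=['vsctl', 'native'],
               help='Interface used to talk to the OVSDB.'),
    cfg.StrOpt('of_interface',
               default='native',                # previously 'ovs-ofctl'
               choices=['ovs-ofctl', 'native'],
               help='OpenFlow interface to use.'),
]

cfg.CONF.register_opts(ovs_opts, group='OVS')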

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] TC Non-candidacy

2016-03-31 Thread Doug Hellmann
Excerpts from Robert Collins's message of 2016-03-31 15:56:33 +1300:
> Hi everyone - I'm not submitting my hat for the ring this cycle - I
> think its important we both share the work, and bring more folk up
> into the position of having-been-on-the-TC. I promise to still hold
> strong opinions weakly, and to discuss those in TC meetings :).
> 
> -Rob
> 

Thanks for your service, Robert. I have always appreciated your
willingness to question fundamental assumptions during TC discussions,
so I hope we don't entirely lose that.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer team

2016-03-31 Thread Davanum Srinivas
+1 from me for Eli

Thanks,
Dims

On Thu, Mar 31, 2016 at 2:18 PM, Hongbin Lu  wrote:
> Hi all,
>
>
>
> Eli Qiao has been consistently contributing to Magnum for a while. His
> contributions started about 10 months ago. Along the way, he implemented
> several important blueprints and fixed a lot of bugs. His contributions
> cover various aspects (e.g. APIs, conductor, unit/functional tests, all the
> COE templates, etc.), which shows that he has a good understanding of almost
> every piece of the system. The feature set he contributed to has proven to
> be beneficial to the project. For example, the gate testing framework he
> heavily contributed to is what we rely on every day. His code reviews are
> also consistent and useful.
>
>
>
> I am happy to propose Eli Qiao to be a core reviewer of the Magnum team.
> According to the OpenStack Governance process [1], we require a minimum of 4
> +1 votes within a 1 week voting window (consider this proposal as a +1 vote
> from me). A vote of -1 is a veto. If we cannot get enough votes or there is
> a veto vote prior to the end of the voting window, Eli is not able to join
> the core team and needs to wait 30 days to reapply.
>
>
>
> The voting is open until Thursday, April 7th.
>
>
>
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>
>
>
> Best regards,
>
> Hongbin
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer team

2016-03-31 Thread Hongbin Lu
Hi all,

Eli Qiao has been consistently contributing to Magnum for a while. His 
contributions started about 10 months ago. Along the way, he implemented 
several important blueprints and fixed a lot of bugs. His contributions cover 
various aspects (e.g. APIs, conductor, unit/functional tests, all the COE 
templates, etc.), which shows that he has a good understanding of almost every 
piece of the system. The feature set he contributed to has proven to be 
beneficial to the project. For example, the gate testing framework he heavily 
contributed to is what we rely on every day. His code reviews are also 
consistent and useful.

I am happy to propose Eli Qiao to be a core reviewer of the Magnum team. According 
to the OpenStack Governance process [1], we require a minimum of 4 +1 votes 
within a 1 week voting window (consider this proposal as a +1 vote from me). A 
vote of -1 is a veto. If we cannot get enough votes or there is a veto vote 
prior to the end of the voting window, Eli is not able to join the core team 
and needs to wait 30 days to reapply.

The voting is open until Thursday, April 7th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][horizon] Horizon Mitaka RC2 available

2016-03-31 Thread Doug Hellmann
Due to release-critical issues spotted in Horizon during RC1 testing,
a new release candidate was created for Mitaka. You can find the
RC2 source code tarball at:

https://tarballs.openstack.org/horizon/horizon-9.0.0.0rc2.tar.gz

Unless new release-critical issues are found that warrant a last-minute
release candidate respin, this tarball will be formally released
as final "Mitaka" versions on April 7th. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the mitaka release branch
at:

http://git.openstack.org/cgit/openstack/horizon/log/?h=stable/mitaka

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/horizon/+filebug

and tag it *mitaka-rc-potential* to bring it to the Horizon release
crew's attention.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Re-evaluate conditions specification

2016-03-31 Thread Fox, Kevin M
+1. This sounds good. The lack of any conditionals at all has caused a lot of 
pain.

Thanks,
Kevin


From: Huangtianhua
Sent: Thursday, March 31, 2016 5:25:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Re: [Heat] Re-evaluate conditions specification

The conditions function has been requested for a long time, and there have been 
several previous discussions, which all ended up debating the implementation 
without reaching a result.
https://review.openstack.org/#/c/84468/3/doc/source/template_guide/hot_spec.rst
https://review.openstack.org/#/c/153771/1/specs/kilo/resource-enabled-meta-property.rst

I think we should focus on the simplest possible way (the same as AWS) to meet 
the user requirement, and by following AWS there is no doubt that we will get 
very good compatibility.
Also, the patches are making good progress; I don't want everything to go back to zero :)
https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/support-conditions-function

In the example you have given of 'variables', there seems to be no relation with 
resource/output/property conditions; it looks like another function, more like 
real 'variables' to be used in the template.

-----Original Message-----
From: Thomas Herve [mailto:the...@redhat.com]
Sent: March 31, 2016 19:55
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] Re-evaluate conditions specification

On Thu, Mar 31, 2016 at 10:40 AM, Thomas Herve  wrote:
> Hi all,
>
> As the patches for conditions support are incoming, I've found
> something in the code (and the spec) I'm not really happy with. We're
> creating a new top-level section in the template called "conditions"
> which holds names that can be reused for conditionally creating
> resource.
>
> While it's fine and maps to what AWS does, I think it's a bit
> short-sighted and limited. What I have suggested in the past is to
> have a "variables" (or whatever you want to call it) section, where
> one can declare names and values. Then we can add an intrinsic
> function to retrieve data from there, and use that for examples for
> conditions.

I was asked to give examples, here's at least one that can illustrate what I 
meant:

parameters:
  host:
    type: string
  port:
    type: string

variables:
  endpoint:
    str_replace:
      template:
        http://HOST:PORT/
      params:
        HOST: {get_param: host}
        PORT: {get_param: port}

resources:
  config1:
    type: OS::Heat::StructuredConfig
    properties:
      config:
        hosts: [{get_variable: endpoint}]

--
Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Re-evaluate conditions specification

2016-03-31 Thread Fox, Kevin M
My initial reaction is that it sounds good, but how is it different from params, 
or maybe a hidden param? Maybe they are conditionally assigned? Conditional 
params would be awesome too, though. If param 1 is not set, then param 2 is 
required...

Thanks,
Kevin


From: Thomas Herve
Sent: Thursday, March 31, 2016 1:40:33 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Heat] Re-evaluate conditions specification

Hi all,

As the patches for conditions support are incoming, I've found
something in the code (and the spec) I'm not really happy with. We're
creating a new top-level section in the template called "conditions"
which holds names that can be reused for conditionally creating
resource.

While it's fine and maps to what AWS does, I think it's a bit
short-sighted and limited. What I have suggested in the past is to
have a "variables" (or whatever you want to call it) section, where
one can declare names and values. Then we can add an intrinsic
function to retrieve data from there, and use that for examples for
conditions.

It solves that particular issue, and it opens some interesting
possibilities for reducing duplication in the template, as we could
build arbitrary values out of parameters or attributes that can then
be reused several times.

Thoughts?

--
Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-31 Thread Fox, Kevin M
Ideally it can roll the services one instance at a time while doing the 
appropriate load balancer stuff to make it seamless. Our experience has been that 
even though services should retry, they don't always do it right. So it's better to 
do it with the LB proper if you can. Between Ansible/container orchestration 
and containers, it should be pretty easy to do, while doing it with just 
packages would be very hard.

Thanks,
Kevin


From: Steven Dake (stdake)
Sent: Thursday, March 31, 2016 1:22:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, 
containers, and the future of TripleO

Kevin,

I am not directly answering your question, but from the perspective of Kolla, 
our upgrades are super simple because we don't make a big mess in the first 
place to upgrade from.  In my experience, this is the number one problem with 
upgrades – everyone makes a mess of the first deployment, so upgrading from 
there is a minefield.  Better to walk straight through that minefield by not 
making a mess of the system in the first place using my favorite deployment 
tool: Kolla ;-)

Kolla upgrades rock.  I have no doubt we will have some minor issues in the 
field, but we have tested 1-month-old master to master upgrades with database 
migrations of the services we deploy, and it takes approximately 10 minutes on 
a 64-node cluster (3 control, the rest compute) without VM downtime or loss of 
networking service to the virtual machines.  This is because our upgrades, 
while not totally atomic across the cluster, are pretty darn close and upgrade 
the entire filesystem runtime in one atomic action per service while rolling 
the upgrade across the controller nodes.

During the upgrade process there may be some transient failures for API service 
calls, but they are typically repeated by clients and no real harm is done.  
Note that we follow each project's best practices for handling upgrades, without the mess 
of dealing with packaging or configuration on the filesystem and migration 
thereof.

Regards
-steve


From: "Fox, Kevin M" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, March 30, 2016 at 9:12 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, 
containers, and the future of TripleO

The main issue is one of upgradability, not stability. We all know TripleO is 
stable. TripleO can't do upgrades today. We're looking for ways to get there. So 
"upgrading" to Ansible isn't necessary for sure, since folks deploying TripleO 
today must assume they can't upgrade anyway.

Honestly, I have doubts that any config management system, from Puppet to Heat 
software deployments, can be coerced to do a cloud upgrade without downtime and 
without a huge amount of workarounds. You really either need a workflow-oriented 
system with global knowledge like Ansible or a container orchestration 
system like Kubernetes to ensure you don't change too many things at once and 
break things. You need to be able to run some old things and some new, all at 
the same time. And in some cases different versions/configs of the same service 
on different machines.

Thoughts on how this may be made to work with puppet/heat?

Thanks,
Kevin


From: Dan Prince
Sent: Monday, March 28, 2016 12:07:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, 
containers, and the future of TripleO

On Wed, 2016-03-23 at 07:54 -0400, Ryan Hallisey wrote:
> *Snip*
>
> >
> > Indeed, this has literally none of the benefits of the ideal Heat
> > deployment enumerated above save one: it may be entirely the wrong
> > tool
> > in every way for the job it's being asked to do, but at least it
> > is
> > still well-integrated with the rest of the infrastructure.
> >
> > Now, at the Mitaka summit we discussed the idea of a 'split
> > stack',
> > where we have one stack for the infrastructure and a separate one
> > for
> > the software deployments, so that there is no longer any tight
> > integration between infrastructure and software. Although it makes
> > me a
> > bit sad in some ways, I can certainly appreciate the merits of the
> > idea
> > as well. However, from the argument above we can deduce that if
> > this is
> > the *only* thing we do then we will end up in the very worst of
> > all
> > possible worlds: the wrong tool for the job, poorly integrated.
> > Every
> > single advantage of using Heat to deploy software will have
> > evaporated,
> > leaving only disadvantages.
> I think Heat is a very powerful tool having done the 

[openstack-dev] Floating IPs and Public IPs are not equivalent

2016-03-31 Thread Monty Taylor
Just a friendly reminder to everyone - floating IPs are not synonymous 
with Public IPs in OpenStack.


The most common (and growing, thanks to the beta of the new 
Dreamcompute cloud) configuration for public clouds is to directly assign 
public IPs to VMs without requiring a user to create a floating IP.


I have heard that the require-floating-ip model is very common for 
private clouds. While I find that even stranger, as the need to run NAT 
inside of another NAT is bizarre, it is what it is.


Both models are common enough that pretty much anything that wants to 
consume OpenStack VMs needs to account for both possibilities.


It would be really great if we could get the default config in devstack 
to be a shared direct-attached network that can also have a 
router attached to it and provide floating IPs, since that scenario 
actually allows interacting with both models (and is actually the most 
common config across the OpenStack public clouds).


Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Are Floating IPs really needed for all nodes?

2016-03-31 Thread Monty Taylor

A few things:

Public IPs and Floating IPs are not the same thing.
Some clouds have public IPs. Some have floating ips. Some have both.

I think it's important to be able to have magnum work with all of the above.

If the cloud does not require using a floating IP (as most do not) to 
get externally routable network access, magnum should work with that.


If the cloud does require using a floating IP (as some do) to get 
externally routable network access, magnum should be able to work with 
that.


In either case, it's also possible the user will not desire the thing 
they are deploying in magnum to be assigned an IP on a network that 
routes off of the cloud. That should also be supported.


Shade has code to properly detect most of those situations that you can 
look at for all of the edge cases - however, since magnum is installed 
by the operator, I'd suggest making a config value for it that allows 
the operator to express whether or not the cloud in question requires 
floating ips as it's EXCEPTIONALLY hard to accurately detect.
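
For illustration, such an operator-facing knob could be as small as one boolean
option; the option name and group below are hypothetical, not an existing
Magnum setting, and oslo.config is assumed only because that is what OpenStack
services generally use:

from oslo_config import cfg

# Hypothetical option name and group for illustration only; the real naming
# would be settled in the Magnum review.
floating_ip_opts = [
    cfg.BoolOpt('floating_ip_required',
                default=True,
                help='Whether this cloud requires a floating IP to give '
                     'bay nodes externally routable network access.'),
]

cfg.CONF.register_opts(floating_ip_opts, group='bay')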


On 03/31/2016 12:42 PM, Guz Egor wrote:

Hongbin,
That's correct; I was involved in two big OpenStack private cloud
deployments and we never had public IPs.
In such cases Magnum shouldn't create any private networks; the operator needs
to provide a network id/name, or
it should use a default (we used to have network selection logic in
the scheduler).

---
Egor


*From:* Hongbin Lu 
*To:* Guz Egor ; OpenStack Development Mailing List
(not for usage questions) 
*Sent:* Thursday, March 31, 2016 7:29 AM
*Subject:* RE: [openstack-dev] [magnum] Are Floating IPs really needed
for all nodes?

Egor,
I agree with what you said, but I think we need to address the problem
that some clouds lack public IP addresses. It is not uncommon
that a private cloud is running without public IP addresses, and they
have already figured out how to route traffic in and out. In such cases, a
bay doesn’t need to have floating IPs and the NodePort feature seems to
work with the private IP address.
Generally speaking, I think it is useful to have a feature that allows
bays to work without public IP addresses. I don’t want to end up in a
situation where Magnum is unusable because the clouds don’t have enough
public IP addresses.
Best regards,
Hongbin
*From:*Guz Egor [mailto:guz_e...@yahoo.com]
*Sent:* March-31-16 12:08 AM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [magnum] Are Floating IPs really needed
for all nodes?
-1
Who is going to run/support this proxy? Also keep in mind that
Kubernetes Service/NodePort
(http://kubernetes.io/docs/user-guide/services/#type-nodeport)
functionality is not going to work without a public IP, and this is a very
handy feature.
---
Egor

*From:*王华>
*To:* OpenStack Development Mailing List (not for usage questions)
>
*Sent:* Wednesday, March 30, 2016 8:41 PM
*Subject:* Re: [openstack-dev] [magnum] Are Floating IPs really needed
for all nodes?
Hi yuanying,
I agree with reducing the usage of floating IPs. But as far as I know, if we
need to pull
docker images from Docker Hub on the nodes, floating IPs are needed. To
reduce the
usage of floating IPs, we can use a proxy. Only some nodes have floating
IPs, and
the other nodes can access Docker Hub through the proxy.
Best Regards,
Wanghua
On Thu, Mar 31, 2016 at 11:19 AM, Eli Qiao wrote:

Hi Yuanying,
+1
I think we can add an option on whether to use floating IP addresses, since
IP addresses are
the kind of resource it is not wise to waste.
On March 31, 2016 at 10:40, 大塚元央 wrote:

Hi team,
Previously, we had a reason why all nodes should have floating IPs [1].
But now we have LoadBalancer features for masters [2] and minions [3],
and minions do not necessarily need to have floating IPs [4].
I think it’s time to remove floating IPs from all nodes.
I know we are using floating IPs in the gate to get log files,
so it’s not a good idea to remove floating IPs entirely.
I want to introduce a `disable-floating-ips-to-nodes` parameter to the bay
model.
Thoughts?
[1]:
http://lists.openstack.org/pipermail/openstack-dev/2015-June/067213.html
[2]: https://blueprints.launchpad.net/magnum/+spec/make-master-ha
[3]: https://blueprints.launchpad.net/magnum/+spec/external-lb
[4]:
http://lists.openstack.org/pipermail/openstack-dev/2015-June/067280.html
Thanks
-yuanying


__

OpenStack Development Mailing List (not for usage questions)

Unsubscribe:

Re: [openstack-dev] [magnum] Are Floating IPs really needed for all nodes?

2016-03-31 Thread Guz Egor
Hongbin, It's correct, I was involved in two big OpenStack private cloud 
deployments and we never had public ips.In such case Magnum shouldn't create 
any private networks, operator need to provide network id/name or it should use 
default  (we used to have networking selection logic in scheduler) .
--- Egor
  From: Hongbin Lu 
 To: Guz Egor ; OpenStack Development Mailing List (not for 
usage questions)  
 Sent: Thursday, March 31, 2016 7:29 AM
 Subject: RE: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?
   
Egor,

I agree with what you said, but I think we need to address the problem that 
some clouds lack public IP addresses. It is not uncommon that a private cloud 
is running without public IP addresses, and they have already figured out how 
to route traffic in and out. In such cases, a bay doesn't need to have floating 
IPs and the NodePort feature seems to work with the private IP address.

Generally speaking, I think it is useful to have a feature that allows bays to 
work without public IP addresses. I don't want to end up in a situation where 
Magnum is unusable because the clouds don't have enough public IP addresses.

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: March-31-16 12:08 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

-1

Who is going to run/support this proxy? Also keep in mind that Kubernetes 
Service/NodePort (http://kubernetes.io/docs/user-guide/services/#type-nodeport) 
functionality is not going to work without a public IP, and this is a very 
handy feature.

---
Egor

From: 王华
To: OpenStack Development Mailing List (not for usage questions)
Sent: Wednesday, March 30, 2016 8:41 PM
Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

Hi yuanying,

I agree with reducing the usage of floating IPs. But as far as I know, if we 
need to pull docker images from Docker Hub on the nodes, floating IPs are 
needed. To reduce the usage of floating IPs, we can use a proxy. Only some 
nodes have floating IPs, and the other nodes can access Docker Hub through the 
proxy.

Best Regards,
Wanghua

On Thu, Mar 31, 2016 at 11:19 AM, Eli Qiao wrote:

Hi Yuanying,
+1
I think we can add an option on whether to use floating IP addresses, since IP 
addresses are the kind of resource it is not wise to waste.

On March 31, 2016 at 10:40, 大塚元央 wrote:

Hi team,

Previously, we had a reason why all nodes should have floating IPs [1]. But now 
we have LoadBalancer features for masters [2] and minions [3], and minions do 
not necessarily need to have floating IPs [4]. I think it's time to remove 
floating IPs from all nodes.

I know we are using floating IPs in the gate to get log files, so it's not a 
good idea to remove floating IPs entirely.

I want to introduce a `disable-floating-ips-to-nodes` parameter to the bay 
model.

Thoughts?

[1]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067213.html
[2]: https://blueprints.launchpad.net/magnum/+spec/make-master-ha
[3]:

Re: [openstack-dev] [nova] API priorities in Newton

2016-03-31 Thread Ken'ichi Ohmichi
2016-03-30 12:54 GMT-07:00 Matt Riedemann :
>>> - Microversion Testing in Tempest (underway)
>
> How much coverage do we have today? This could be like novaclient where
> people just start hacking on adding tests for each microversion (assuming
> gmann would be working on this).

Yeah, gmann is working on this now. That is pretty good work.
Current coverage is just v2.2 only on Tempest side.
Nova v2.10 test patch is being reviewed now:
https://review.openstack.org/#/c/277763/

The difference between the Tempest tests and the novaclient tests is blocking
additional properties.
Tempest implements *response* body validation with JSON-Schema, like the
nova side, and additionalProperties is false to block unexpected
additional properties.
Per nova's microversion rules, we need to add properties to a response
body by bumping a microversion.
So, as a test, Tempest needs to block unexpected properties in the
existing microversion tests.
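
For illustration, here is a stripped-down sketch of the kind of response schema
this involves; the resource and field names below are made up rather than
copied from Tempest, the point is the additionalProperties handling:

get_server = {
    'status_code': [200],
    'response_body': {
        'type': 'object',
        'properties': {
            'server': {
                'type': 'object',
                'properties': {
                    'id': {'type': 'string'},
                    'name': {'type': 'string'},
                },
                'required': ['id', 'name'],
                # Reject any property the targeted microversion did not
                # explicitly add.
                'additionalProperties': False,
            },
        },
        'required': ['server'],
        'additionalProperties': False,
    },
}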

Thanks

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] API priorities in Newton

2016-03-31 Thread Ken'ichi Ohmichi
2016-03-30 12:26 GMT-07:00 Sean Dague :
>
> One other issue that we've been blocking on for a while has been
> Capabilities discovery. Some API proposed adds like live resize have
> been conceptually blocked behind this one. Once upon a time there was a
> theory that JSON Home was a thing, and would slice our bread and julienne
> our fries, and solve all this. But it's a big thing to get right, and
> JSON Home has an unclear future. And, we could serve our users pretty
> well with a much simpler take on capabilities. For instance
>
>  GET /servers/{id}/capabilities
>
> {
> "capabilities" : {
> "resize": True,
> "live-resize": True,
> "live-migrate": False
> ...
>  }
> }

Yeah, JSON-Home is not an option for this kind of capability discovery;
Swagger would be the option instead. Clients can learn the available capabilities by
getting the Swagger definition via the REST API.

Ex:

GET /swaggers
{
    ...
    "/action": {
        "parameters": [
            {"name": "resize", "in": "body", ...},
            {"name": "os-start", "in": "body", ...},
            {"name": "os-stop", "in": "body", ...},
            ...
        ]
    }
}

Thanks

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] [api] Oh Swagger, where art thou?

2016-03-31 Thread Jim Rollenhagen
On Thu, Mar 31, 2016 at 09:50:48AM -0400, Sean Dague wrote:
> On 03/31/2016 09:43 AM, Jim Rollenhagen wrote:
> > On Thu, Mar 31, 2016 at 08:43:29AM -0400, Sean Dague wrote:
> >> Some more details on progress, because this is getting closer every day.
> >>
> >> There is now an api-ref target on the Nova project. The entire work in
> >> progress stream has been rebased into 2 patches to a top level api-ref/
> >> directory structure in the Nova tree -
> >> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:wip_api_docs2
> >>
> >> That is a 2 patch series. The first is the infrastructure, the second is
> >> the content for 2 resources (versions and servers). The rendered output
> >> for this is at -
> >> http://docs-draft.openstack.org/63/298763/4/check/gate-nova-api-ref/6983838//api-ref/build/html/
> >> (you can also pull and build locally with tox -e api-ref)
> >>
> >> Karen, Auggy, and Anne continue to work on the wadl data translator
> >> using the wadl2rst project and fairy-slipper to get various pieces of
> >> the structured data over. Hopefully we'll see some of those translated
> >> stacks rendering soon in patch sets.
> > 
> > I assume at some point we'll be pulling the sphinx extension out to a
> > separate project so that other projects can use this too? :)
> > 
> > // jim
> 
> Yes. I feel like it's probably a milestone 2 activity. Fixing styling
> and error handling is a lot faster in tree while we sort out all the
> issues. Once it's good we can move things into something that's pip
> installable.

Sounds good, thanks for pushing on this. Looking forward to using it in
ironic. :)

// jim

> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] Implementing tempest test for Keystone federation functional tests

2016-03-31 Thread Matthew Treinish
On Thu, Mar 31, 2016 at 11:38:55AM -0400, Minying Lu wrote:
> Hi all,
> 
> I'm working on resource federation at the Massachusetts Open Cloud. We want
> to implement functional test on the k2k federation, which requires
> authentication with both a local keystone and a remote keystone (in a
> different cloud installation). It also requires a K2K/SAML assertion
> exchange with the local and remote keystones. These functions are not
> implemented in the current tempest.lib.service library, so I'm adding code
> to the service library.
> 
> My question is, is it possible to adapt keystoneauth python clients? Or do
> you prefer implementing it with http requests.

So tempest's clients have to be completely independent. That's part of tempest's
design point of testing APIs, not client implementations. If you need to add
additional functionality to the tempest clients, that's fine, but pulling in
keystoneauth isn't really an option.
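
To illustrate the direction, here is a minimal sketch of what a tempest-style
client addition for the K2K assertion exchange might look like; treat the
endpoint path and request payload as assumptions to double-check against the
Keystone federation docs rather than a finished implementation:

import json

from tempest.lib.common import rest_client


class K2KSaml2Client(rest_client.RestClient):
    """Sketch: ask the local Keystone for an ECP-wrapped assertion
    scoped to a remote service provider."""

    def get_ecp_assertion(self, token_id, service_provider_id):
        body = {
            "auth": {
                "identity": {
                    "methods": ["token"],
                    "token": {"id": token_id},
                },
                "scope": {
                    "service_provider": {"id": service_provider_id},
                },
            }
        }
        resp, resp_body = self.post(
            '/auth/OS-FEDERATION/saml2/ecp', json.dumps(body))
        self.expected_success(200, resp.status)
        # The assertion comes back as an ECP-wrapped SAML document (XML),
        # not JSON.
        return resp_body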

> 
> And since this test requires a lot of environment set up including: 2
> separate cloud installations, shibboleth, creating mapping and protocols on
> remote cloud, etc. Would it be within the scope of tempest's mission?

From the tempest perspective, it expects the environment to be set up and already
exist by the time you run the test. If it's a valid use of the API, which I'd
say this is and an important one too, then I feel it's fair game to have tests
for this live in tempest. We'll just have to make the configuration options
around how tempest will do this very explicit to make sure the necessary
environment exists before the tests are executed.

The fly in the ointment for this case will be CI though. For tests to live in
tempest they need to be verified by a CI system before they can land. So to
land the additional testing in tempest you'll have to also ensure there is a
CI job setup in infra to configure the necessary environment. While I think
this is a good thing to have in the long run, it's not necessarily a small
undertaking.

-Matt Treinish


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Mitaka RC3 available

2016-03-31 Thread Doug Hellmann
Due to release-critical issues spotted in Neutron during RC2 testing,
a new release candidate was created for Mitaka. You can find the
RC3 source code tarballs at:

https://tarballs.openstack.org/neutron/neutron-8.0.0.0rc3.tar.gz
https://tarballs.openstack.org/neutron-lbaas/neutron-lbaas-8.0.0.0rc3.tar.gz
https://tarballs.openstack.org/neutron-fwaas/neutron-fwaas-8.0.0.0rc3.tar.gz
https://tarballs.openstack.org/neutron-vpnaas/neutron-vpnaas-8.0.0.0rc3.tar.gz

Unless new release-critical issues are found that warrant a last-minute
release candidate respin, these tarballs will be formally released
as the final "Mitaka" version on April 7th. You are therefore
strongly encouraged to test and validate these tarballs!

Alternatively, you can directly test the mitaka release branches at:

http://git.openstack.org/cgit/openstack/neutron/log/?h=stable/mitaka
http://git.openstack.org/cgit/openstack/neutron-lbaas/log/?h=stable/mitaka
http://git.openstack.org/cgit/openstack/neutron-fwaas/log/?h=stable/mitaka
http://git.openstack.org/cgit/openstack/neutron-vpnaas/log/?h=stable/mitaka

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/neutron/+filebug

and tag it *mitaka-rc-potential* to bring it to the Neutron release
crew's attention.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Component Leads Elections

2016-03-31 Thread Serg Melikyan
Hi fuelers,

Only a few hours are left until the self-nomination period closes, but so
far we have neither consensus regarding how to proceed further nor
candidates.

I've extended the self-nomination period by another week (until April 7,
23:59 UTC) and expect to have either candidates for each of the three
projects or a decision about how we are going to proceed further if no one
nominates himself.

I propose to start with defining the steps that we are going to take if no one
nominates himself by April 7, and to move forward with a separate discussion
regarding governance.

P.S. I strongly believe that declaring the Component Leads role obsolete
requires agreement among all members of the Fuel team, which may take quite a
lot of time. I think we should propose a change request to the existing
governance spec [0], and have a decision by the end of the Newton cycle.

References:
[0]
https://specs.openstack.org/openstack/fuel-specs/policy/team-structure.html

On Thu, Mar 31, 2016 at 3:22 AM, Evgeniy L  wrote:

> Hi,
>
> I'm not sure if it's a right place to continue this discussion, but if
> there are doubts that such role is needed, we should not wait for another
> half a year to drop it.
>
> Also I'm not sure if a single engineer (or two engineers) can handle
> majority of upcoming patches + specs + meetings around features. Sergii and
> Igor put a lot of efforts to make it work, but does it really scale?
>
> I think it would be better to offload more responsibilities to core
> groups, and if core team (of specific project) wants to see formal or
> informal leader, let them decide.
>
> I would be really interested to see feedback from current component leads.
>
> Thanks,
>
>
> On Wed, Mar 30, 2016 at 2:20 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dmitry,
>>
>> "No need to rush" does not mean we should postpone
>> team structure changes until Ocata. IMO, CL role
>> (when it is exposed to Fuel) contradicts to our
>> modularization activities. Fuel should be an aggregator
>> of components. What if we decide to use Ironic or
>> Neutron as Fuel components? Should we chose also
>> Ironic CL? NO! Ironic is an independent
>> project with its own PTL.
>>
>> I agree with Mike that we could remove this CL
>> role in a month if have consensus. But does it
>> make any sense to chose CLs now and then
>> immediately remove this role? Probably, it is better
>> to make a decision right now. I'd really like to
>> see here in this ML thread opinions of our current
>> CLs and other people.
>>
>>
>>
>> Vladimir Kozhukalov
>>
>> On Tue, Mar 29, 2016 at 11:21 PM, Dmitry Borodaenko <
>> dborodae...@mirantis.com> wrote:
>>
>>> On Tue, Mar 29, 2016 at 03:19:27PM +0300, Vladimir Kozhukalov wrote:
>>> > > I think this call is too late to change a structure for now. I
>>> suggest
>>> > > that we always respect the policy we've accepted, and follow it.
>>> > >
>>> > > If Component Leads role is under a question, then I'd continue the
>>> > > discussion, hear opinion of current component leads, and give this a
>>> time
>>> > > to be discussed. I'd have nothing against removing this role in a
>>> month
>>> > > from now if we reach a consensus on this topic - no need to wait for
>>> the
>>> > > cycle end.
>>> >
>>> > Sure, there is no need to rush. I'd also like to see current CL
>>> opinions.
>>>
>>> Considering that, while there's an ongoing discussion on how to change
>>> Fuel team structure for Ocata, there's also an apparent consensus that
>>> we still want to have component leads for Newton, I'd like to call once
>>> again for volunteers to self-nominate for component leads of
>>> fuel-library, fuel-web, and fuel-ui. We've got 2 days left until
>>> nomination period is over, and no volunteer so far :(
>>>
>>> --
>>> Dmitry Borodaenko
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Serg Melikyan, Development Manager at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

[openstack-dev] [tempest] Implementing tempest test for Keystone federation functional tests

2016-03-31 Thread Minying Lu
Hi all,

I'm working on resource federation at the Massachusetts Open Cloud. We want
to implement functional test on the k2k federation, which requires
authentication with both a local keystone and a remote keystone (in a
different cloud installation). It also requires a K2K/SAML assertion
exchange with the local and remote keystones. These functions are not
implemented in the current tempest.lib.service library, so I'm adding code
to the service library.

My question is, is it possible to adapt keystoneauth python clients? Or do
you prefer implementing it with http requests.

And since this test requires a lot of environment set up including: 2
separate cloud installations, shibboleth, creating mapping and protocols on
remote cloud, etc. Would it be within the scope of tempest's mission?

Thank you!

Regards,
Minying Lu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] - oAuth tab proposal

2016-03-31 Thread Rob Cresswell (rcresswe)
Could you put up a blueprint for discussion? We have a weekly meeting to review 
blueprints: https://wiki.openstack.org/wiki/Meetings/HorizonDrivers

The blueprint template is here: 
https://blueprints.launchpad.net/horizon/+spec/template

Thanks!

Rob

On 31 Mar 2016, at 10:57, Marcos Fermin Lobo wrote:

Hi all,

I would like to propose a new tab in "Access and security" web page.

As you know, keystone offers an OAuth plugin for authentication. This means 
that third-party applications could access OpenStack cloud resources using 
OAuth. Right now this is possible using the CLI, but there is nothing (AFAIK) in 
Horizon.

I would propose a new tab in the "Access and security" web page to manage OAuth 
credentials. As usual, this new tab would have a list of OAuth credentials 
with buttons to approve and remove them.

Please see a simple mockup here 
https://mferminl.web.cern.ch/mferminl/mockups/horizon-oauth-mockup.png

Comments, suggestions... are very welcome!

Cheers,
Marcos.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Re-evaluate conditions specification

2016-03-31 Thread Qiming Teng
On Thu, Mar 31, 2016 at 09:21:43AM -0400, Rabi Mishra wrote:
 
> If I understand the suggestion correctly, the only relation it has with 
> conditions is,
> conditions are nothing but variables(boolean).
> 
> conditions: {
> 'for_prod': {equals: [{get_param: env_type}, 'prod']}
>   }
> 
> would be
> 
> variables:
>for_prod: {equals: [{get_param: env_type}, prod]}
> 
> 
> then you can use it in your example as:
> 
> floating_ip:
>   type: OS::Nova::FloatingIP
>   condition: {get_variable: for_prod}
> 
> so suggestion is to make it more generic, so that it can be used for other 
> things
> and reduce some of the verbosity in the templates.
> 
> However, I think the term 'variable' makes it sound more like a programming 
> thing. Maybe
> we can use something better. However, personally I kind of like the idea.

well ... now I get a better idea about the suggestion. Actually,
'variables' is not that bad in my opinion.

Regards,
 Qiming


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking guide meeting] Meeting Tomorrow!

2016-03-31 Thread Edgar Magana
Hello Folks,

Friendly reminder that we have our networking guide team meeting in a few 
minutes in #openstack-meeting.


Matt will be chairing the meeting.

Thanks,

Edgar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the blueprint"support-private-registry"

2016-03-31 Thread Hongbin Lu
Ricardo,

Thanks for the willingness to implement the blueprint. I am looking forward to 
reviewing the implementation.

Best regards,
Hongbin

From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
Sent: March-30-16 10:59 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discuss the 
blueprint"support-private-registry"

Hi.

On Wed, Mar 30, 2016 at 4:20 PM, Kai Qiang Wu 
> wrote:

I agree that support-private-registry should be secure, as insecure seems 
not very useful for production use.
Also, I understood the point that setting up the related CA could be more 
difficult than plain HTTP, but we want to know if
https://blueprints.launchpad.net/magnum/+spec/allow-user-softwareconfig

could address the issue and make the templates clearer to understand? If a related 
patch or spec is proposed, we are glad to review it and make it better.

Yes, some local customization of the node setup would be great and help with 
the CA setup - we're willing to implement that blueprint.

Cheers,
Ricardo





Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Ricardo Rocha
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 30/03/2016 09:09 pm
Subject: Re: [openstack-dev] [magnum] Discuss the blueprint 
"support-private-registry"





Hi.

On Wed, Mar 30, 2016 at 3:59 AM, Eli Qiao wrote:
>
> Hi Hongbin
>
> Thanks for starting this thread,
>
>
>
> I initially proposed this bp because I am in China, which is behind the China great
> wall, and cannot access gcr.io directly. After
> checking our
> cloud-init script, I see that
>
> lots of code is *hard coded* to use gcr.io; I personally
> think this is
> not a good idea. We cannot force users/customers to have internet access in
> their environment.
>
> I proposed to use insecure-registry to give customers/users (Chinese users or
> those who don't have gcr.io access) a chance to switch to using their own
> insecure registry to deploy
> k8s/swarm bays.
>
> For your question:
>>  Is the private registry secure or insecure? If secure, how to handle
>> the authentication secrets. If insecure, is it OK to connect a secure bay to
>> an insecure registry?
> An insecure-registry should be a 'secure' one, since the customer needs to set it up
> and make sure it's a clean one; in this case, it could be a private
> cloud.
>
>>  Should we provide an instruction for users to pre-install the private
>> registry? If not, how to verify the correctness of this feature?
>
> The simple way to pre-install a private registry is using insecure-registry,
> and docker.io has very simple steps to start it [1].
> Otherwise, docker registry v2 also supports a TLS-enabled mode, but this
> will require telling the docker client the key and crt files, which will make
> "support-private-registry" complex.
>
> [1] https://docs.docker.com/registry/
> [2]https://docs.docker.com/registry/deploying/

'support-private-registry' and 'allow-insecure-registry' sound different to me.

We're using an internal docker registry at CERN (v2, TLS enabled), and
have the magnum nodes setup to use it.

We just install our CA certificates in the nodes (cp to
etc/pki/ca-trust/source/anchors/, update-ca-trust) - had to change the
HEAT templates for that, and submitted a blueprint to be able to do
similar things in a cleaner way:
https://blueprints.launchpad.net/magnum/+spec/allow-user-softwareconfig

That's all that is needed, the images are then prefixed with the
registry dns location when referenced - example:
docker.cern.ch/my-fancy-image.

Things we found on the way:
- registry v2 doesn't seem to allow anonymous pulls (you can always
add an account with read-only access everywhere, but it means you need
to always authenticate at least with this account)
https://github.com/docker/docker/issues/17317
- swarm 1.1 and kub8s 1.0 allow authentication to the registry from
the client (which was good news, and it works fine), handy if 

[openstack-dev] [Fuel] CI may be broken for about 2 hours

2016-03-31 Thread Roman Prykhodchenko
We’ve organized a tiger team to merge the two 
patches [1,2] in order to fix a swarm blocker. Our plan is the following:

1) Merge [1] and [2]
2) Trigger a build
3) Upload the new ISO to the CI system while it passes the BVT
4) If the BVT is fine, switch the CI system to the new ISO

We anticipate the whole process to take about 2 hours.


References:

1. https://review.openstack.org/#/c/295731/2
2. https://review.openstack.org/#/c/295737/1


- romcheg



> On 31 Mar 2016, at 14:15, Roman Prykhodchenko wrote:
> 
> Folks,
> 
> There is a bug that is tagged as a swarm blocker [1] and two patches that fix 
> it. One of them is for nailgun [2] and another one is for fuel-library [3]. 
> The reason I write here is that those patches seem to fix the issue, but they 
> will never pass tests on the CI when they are tested apart from each other.
> 
> I have built an ISO [4] and pushed it for the BVT in order to check that 
> everything works and passes tests with no problems. The results of the BVT 
> [5] indeed displayed that everything works correctly. Thus my proposal is to 
> coordinate core reviewers from both fuel-web and fuel-library to merge [2] 
> and [3] together ASAP.
> 
> 
> References:
> 
> 1. https://bugs.launchpad.net/fuel/+bug/1548776
> 2. https://review.openstack.org/#/c/295731/2
> 3. https://review.openstack.org/#/c/295737/1
> 4. http://jenkins-product.srt.mirantis.net:8080/job/9.0.custom.iso/1318/
> 5. 
> http://jenkins-product.srt.mirantis.net:8080/job/9.0.custom.ubuntu.bvt_2/453
> 
> 
> - romcheg
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Are Floating IPs really needed for all nodes?

2016-03-31 Thread Hongbin Lu
Egor,

I agree with what you said, but I think we need to address the problem that 
some clouds lack public IP addresses. It is not uncommon that a private 
cloud is running without public IP addresses, and they have already figured out how 
to route traffic in and out. In such cases, a bay doesn’t need to have floating 
IPs and the NodePort feature seems to work with the private IP address.

Generally speaking, I think it is useful to have a feature that allows bays to 
work without public IP addresses. I don’t want to end up in a situation where 
Magnum is unusable because the clouds don’t have enough public IP addresses.

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: March-31-16 12:08 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

-1

Who is going to run/support this proxy? Also keep in mind that Kubernetes 
Service/NodePort (http://kubernetes.io/docs/user-guide/services/#type-nodeport)
functionality is not going to work without a public IP, and this is a very handy 
feature.

---
Egor


From: 王华 >
To: OpenStack Development Mailing List (not for usage questions) 
>
Sent: Wednesday, March 30, 2016 8:41 PM
Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

Hi yuanying,

I agree with reducing the usage of floating IPs. But as far as I know, if we need to 
pull
docker images from Docker Hub on the nodes, floating IPs are needed. To reduce the
usage of floating IPs, we can use a proxy. Only some nodes have floating IPs, and
the other nodes can access Docker Hub through the proxy.

Best Regards,
Wanghua

On Thu, Mar 31, 2016 at 11:19 AM, Eli Qiao 
> wrote:

Hi Yuanying,
+1
I think we can add an option on whether to use floating IP addresses, since IP 
addresses are
the kind of resource it is not wise to waste.

On March 31, 2016 at 10:40, 大塚元央 wrote:
Hi team,

Previously, we had a reason why all nodes should have floating IPs [1].
But now we have LoadBalancer features for masters [2] and minions [3],
and minions do not necessarily need to have floating IPs [4].
I think it’s time to remove floating IPs from all nodes.

I know we are using floating IPs in the gate to get log files,
so it’s not a good idea to remove floating IPs entirely.

I want to introduce a `disable-floating-ips-to-nodes` parameter to the bay model.

Thoughts?

[1]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067213.html
[2]: https://blueprints.launchpad.net/magnum/+spec/make-master-ha
[3]: https://blueprints.launchpad.net/magnum/+spec/external-lb
[4]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067280.html

Thanks
-yuanying



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Best Regards, Eli Qiao (乔立勇)

Intel OTC China

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the Intel SRIOV CI running and if so, what does it test?

2016-03-31 Thread Anita Kuno
On 03/31/2016 10:34 AM, Anita Kuno wrote:
> On 03/31/2016 08:31 AM, Znoinski, Waldemar wrote:
>>
>>
>>  >-Original Message-
>>  >From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
>>  >Sent: Wednesday, March 30, 2016 4:22 PM
>>  >To: OpenStack Development Mailing List (not for usage questions)
>>  >
>>  >Cc: Feng, Shaohe 
>>  >Subject: [openstack-dev] [nova] Is the Intel SRIOV CI running and if so, 
>> what
>>  >does it test?
>>  >
>>  >Intel has a few third party CIs in the third party systems wiki [1].
>>  >
>>  >I was talking with Moshe Levi today about expanding coverage for mellanox
>>  >CI in nova, today they run an SRIOV CI for vnic type 'direct'.
>>  >I'd like them to also start running their 'macvtap' CI on the same nova
>>  >changes (that job only runs in neutron today I think).
>>  >
>>  >I'm trying to see what we have for coverage on these different NFV
>>  >configurations, and because of limited resources to run NFV CI, don't want 
>> to
>>  >duplicate work here.
>>  >
>>  >So I'm wondering what the various Intel NFV CI jobs run, specifically the 
>> Intel
>>  >Networking CI [2], Intel NFV CI [3] and Intel SRIOV CI [4].
>> [WZ] See comments below about the full/small wiki, but is the below enough 
>> or would you want to see more:
>> - networking-ci runs (with exceptions): 
>> tempest.api.network 
>> tempest.scenario.test_network_basic_ops 
>>
>> - nfv-ci runs (with exceptions):
>> tempest.api.compute on standard flavors with NFV features enabled
>> tempest.scenario (including intel-nfv-ci-tests - 
>> https://github.com/openstack/intel-nfv-ci-tests) on standard flavors with 
>> NFV features enabled
>>
>>>
>>  > From the wiki it looks like the Intel Networking CI tests ovs-dpdk but 
>> only for
>>  >Neutron. Could that be expanded to also test on Nova changes that hit a 
>> sub-
>>  >set of the nova tree?
>> [WZ] Yes, Networking CI is for neutron to test ovs-dpdk. It was also 
>> configured to trigger on openstack/nova changes when they affect 
>> nova/virt/libvirt/vif.py. It's currently disabled due to an issue with a 
>> Jenkins plugin that we're seeing when two jobs point at the same project 
>> simultaneously, which causes missed comments. Example [5]. We're still 
>> investigating one last option to get it working properly with the current 
>> setup. Even if we fail, we're currently migrating to a new CI setup (OpenStack 
>> Infra's downstream-ci suite) and we'll re-enable that ovs-dpdk testing on 
>> nova changes once we're migrated, 6-8 weeks from now.
>> Is there more you feel we should be testing our ovs-dpdk on when it comes to 
>> nova changes?
>>
>>  >
>>  >I really don't know what the latter two jobs test as far as configuration 
>> is
>>  >concerned, the descriptions in the wikis are pretty empty (please update
>>  >those to be more specific).
>> [WZ] I see some CIs have a full wiki pages [6] for their CIs apart from the 
>> usual shim wiki overview table [7]. Is that what you're suggesting or 
>> extended info about tests in the 'small' wiki [2] is ok?
> 
> Every CI wikipage must include the template that is specified on the
> third party systems wikipage, I will address the powerkvm wikipage,
> thanks for letting me know.

Sorry here is the powerkvm ci wikipage linked from the third party
systems wikipage which meets requirements:
https://wiki.openstack.org/wiki/ThirdPartySystems/IBMPowerKVMCI

Thanks,
Anita.


> 
> As long as you have the template at the top of your wikipage you can put
> additional information either in the table (not changing the template)
> or below the table as long as the information developers are expecting
> for all third party ci systems is available in an easy to find and read
> format.
> 
> Thank you,
> Anita.
> 
>>
>>  >
>>  >Please also include in the wiki the recheck method for each CI so I don't 
>> have
>>  >to dig through Gerrit comments to find one.
>> [WZ] Done
>>
>>  >
>>  >[1] https://wiki.openstack.org/wiki/ThirdPartySystems
>>  >[2] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-Networking-CI
>>  >[3] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-NFV-CI
>>  >[4] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-SRIOV-CI
>> [5] https://review.openstack.org/#/c/294312 
>> [6] https://wiki.openstack.org/wiki/PowerKVM 
>> [7] https://wiki.openstack.org/wiki/ThirdPartySystems/IBMPowerKVMCI 
>>  >
>>  >--
>>  >
>>  >Thanks,
>>  >
>>  >Matt Riedemann
>> [WZ] thanks
>>>
>>  >
>>  >__
>>  >
>>  >OpenStack Development Mailing List (not for usage questions)
>>  >Unsubscribe: OpenStack-dev-
>>  >requ...@lists.openstack.org?subject:unsubscribe
>>  >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> --
>> Intel Research and Development Ireland Limited
>> Registered in Ireland
>> Registered Office: Collinstown 

Re: [openstack-dev] [Sahara][QA] Notes about the move of the Sahara Tempest API test to sahara-tests

2016-03-31 Thread Luigi Toscano
On Sunday 20 of March 2016 21:07:06 Luigi Toscano wrote:
> Hi,

Small update on the plan:
> 
> as discussed in the last (two) Sahara meetings, I'm working on moving the
> Tempest API tests from the Tempest repository to the new sahara-tests
> repository, which contains only (non-tempest) scenario tests and it's
> branchless as well. The move is a natural consequence of the Tempest focus
> on the "core six" (removing the burden of additional reviews from core
> Tempest), and of the existence of the Tempest Plugin interface.
> 

A temporary change was just merged (thanks Infra) which also disable tests for 
sahara-tests, so 
*Please don't merge anything into sahara-tests for now*

We are more or less here:
>[...] 
> 
> == Extract tempest/api/data_processing from Tempest and filter it
> Easy with git-split (https://github.com/ajdruff/git-splits, thanks Evgeny
> Sikachev) and a bit of cleanup (removal of the first empty commit with `git
> rebase -i --root`). This code should then be merged in a detached branch of
> sahara-tests (created with `git checkout --orphan `). -> See a
> preview here:
> https://github.com/ltoscano-rh/sahara-tests/commits/tempest-sahara-api
> 
> 
> == Push the sanitized history to a detached branch of sahara-tests
> It's really two substeps:
> 
> = Temporarily exclude a specific branch from the CI
> Change to openstack-infra/project-config and openstack/sahara-ci-config
> -> Requires reviews from infra and Sahara cores


Now the only member of sahara-(tests-)release, Sergey Lukjanov (Vitaly should 
be in the group too maybe), should kindly create an orphan branch (git 
checkout --orphan ) in the sahara-test repository, so that we can send 
to gerrit the imported commits.

Ciao
-- 
Luigi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the Intel SRIOV CI running and if so, what does it test?

2016-03-31 Thread Anita Kuno
On 03/31/2016 08:31 AM, Znoinski, Waldemar wrote:
> 
> 
>  >-Original Message-
>  >From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
>  >Sent: Wednesday, March 30, 2016 4:22 PM
>  >To: OpenStack Development Mailing List (not for usage questions)
>  >
>  >Cc: Feng, Shaohe 
>  >Subject: [openstack-dev] [nova] Is the Intel SRIOV CI running and if so, 
> what
>  >does it test?
>  >
>  >Intel has a few third party CIs in the third party systems wiki [1].
>  >
>  >I was talking with Moshe Levi today about expanding coverage for mellanox
>  >CI in nova, today they run an SRIOV CI for vnic type 'direct'.
>  >I'd like them to also start running their 'macvtap' CI on the same nova
>  >changes (that job only runs in neutron today I think).
>  >
>  >I'm trying to see what we have for coverage on these different NFV
>  >configurations, and because of limited resources to run NFV CI, don't want 
> to
>  >duplicate work here.
>  >
>  >So I'm wondering what the various Intel NFV CI jobs run, specifically the 
> Intel
>  >Networking CI [2], Intel NFV CI [3] and Intel SRIOV CI [4].
> [WZ] See comments below about the full/small wiki, but is the below enough 
> or would you want to see more:
> - networking-ci runs (with exceptions): 
> tempest.api.network 
> tempest.scenario.test_network_basic_ops 
> 
> - nfv-ci runs (with exceptions):
> tempest.api.compute on standard flavors with NFV features enabled
> tempest.scenario (including intel-nfv-ci-tests - 
> https://github.com/openstack/intel-nfv-ci-tests) on standard flavors with NFV 
> features enabled
> 
>>
>  > From the wiki it looks like the Intel Networking CI tests ovs-dpdk but 
> only for
>  >Neutron. Could that be expanded to also test on Nova changes that hit a sub-
>  >set of the nova tree?
> [WZ] Yes, Networking CI is for neutron to test ovs-dpdk. It was also 
> configured to trigger on openstack/nova changes when they affect 
> nova/virt/libvirt/vif.py. It's currently disabled due to an issue with a 
> Jenkins plugin that we're seeing when two jobs point at the same project 
> simultaneously, which causes missed comments. Example [5]. We're still 
> investigating one last option to get it working properly with the current 
> setup. Even if we fail, we're currently migrating to a new CI setup (OpenStack 
> Infra's downstream-ci suite) and we'll re-enable that ovs-dpdk testing on nova 
> changes once we're migrated, 6-8 weeks from now.
> Is there more you feel we should be testing our ovs-dpdk on when it comes to 
> nova changes?
> 
>  >
>  >I really don't know what the latter two jobs test as far as configuration is
>  >concerned, the descriptions in the wikis are pretty empty (please update
>  >those to be more specific).
> [WZ] I see some CIs have a full wiki pages [6] for their CIs apart from the 
> usual shim wiki overview table [7]. Is that what you're suggesting or 
> extended info about tests in the 'small' wiki [2] is ok?

Every CI wikipage must include the template that is specified on the
third party systems wikipage, I will address the powerkvm wikipage,
thanks for letting me know.

As long as you have the template at the top of your wikipage you can put
additional information either in the table (not changing the template)
or below the table as long as the information developers are expecting
for all third party ci systems is available in an easy to find and read
format.

Thank you,
Anita.

> 
>  >
>  >Please also include in the wiki the recheck method for each CI so I don't 
> have
>  >to dig through Gerrit comments to find one.
> [WZ] Done
> 
>  >
>  >[1] https://wiki.openstack.org/wiki/ThirdPartySystems
>  >[2] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-Networking-CI
>  >[3] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-NFV-CI
>  >[4] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-SRIOV-CI
> [5] https://review.openstack.org/#/c/294312 
> [6] https://wiki.openstack.org/wiki/PowerKVM 
> [7] https://wiki.openstack.org/wiki/ThirdPartySystems/IBMPowerKVMCI 
>  >
>  >--
>  >
>  >Thanks,
>  >
>  >Matt Riedemann
> [WZ] thanks
>>
>  >
>  >__
>  >
>  >OpenStack Development Mailing List (not for usage questions)
>  >Unsubscribe: OpenStack-dev-
>  >requ...@lists.openstack.org?subject:unsubscribe
>  >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> --
> Intel Research and Development Ireland Limited
> Registered in Ireland
> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
> Registered Number: 308263
> 
> 
> This e-mail and any attachments may contain confidential material for the sole
> use of the intended recipient(s). Any review or distribution by others is
> strictly prohibited. If you are not the intended recipient, please contact the
> sender and delete all copies.
> 
> 
> 

Re: [openstack-dev] [kolla][vote] Just make Mitaka deploy Liberty within the Liberty branch

2016-03-31 Thread Michał Jastrzębski
So, a plan.

My idea about this backport is to:

1. We write Mitaka code deploying Liberty, just liberty (remove
deploy_liberty nonsense from patchset above)
2. We mark this code as Kolla 1.1.0 effectively deprecating what we
call liberty today
3. Most of the bugs found in master/stable-mitaka will also be relevant to
Liberty, so brace for backports of bugfixes

So what I am going to do:
1. Make patchset above deploy Liberty (only liberty) for source builds
2. After Mitaka is cut, we do the code copying

What I need from you guys:
1. Someone to help me figure out the centos binary builds, as they don't
really use sources from build-conf
2. Someone to help me check whether new releases of mariadb, rabbitmq and
other infra changed anything that will break Liberty; I highly doubt
it, but if you guys know about anything, please shout out.
3. Testing testing testing

Thoughts?
Michal


On 30 March 2016 at 13:30, Michał Jastrzębski  wrote:
> So I made this:
> https://review.openstack.org/#/c/299563/
>
> I'm not super fond of reverting commits from the middle of release,
> because this will make a lot of mess. I'd rather re-implement keystone
> bootstrap logic and make it conditional as it is not that complicated.
>
> On 30 March 2016 at 12:37, Ryan Hallisey  wrote:
>> Agreed this needs to happen +1,
>>
>> - Original Message -
>> From: "Jeff Peeler" 
>> To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Sent: Wednesday, March 30, 2016 1:22:31 PM
>> Subject: Re: [openstack-dev] [kolla][vote] Just make Mitaka deploy Liberty 
>> within the Liberty branch
>>
>> On Wed, Mar 30, 2016 at 3:52 AM, Steven Dake (stdake)  
>> wrote:
>>>
>>>
>>> From: Jeffrey Zhang 
>>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>>> 
>>> Date: Wednesday, March 30, 2016 at 12:29 AM
>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>> 
>>> Subject: Re: [openstack-dev] [kolla][vote] Just make Mitaka deploy Liberty
>>> within the Liberty branch
>>>
>>> +1
>>>
>>> A lot of changes has been make in Mitaka. Backport is difficult.
>>>
>>> But using Mitaka deploy Liberty also has *much works*. For example,
>>> revert config file change which deprecated in Mitaka and Liberty support.
>>>
>>> An important one is the `keystone-manage bootstrap` command to create the
>>> keystone admin account. This was added recently and only exists in the Mitaka
>>> branch. So when using this method, we should revert some commits and use
>>> the old method.
>>>
>>>
>>> Agreed.
>>
>> I'm sure there will be some checking and such once all the code has
>> been shuffled around, but I think doing this work is better than
>> abandoning a branch. So +1 to proposal.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Fix nova swap volume (updating an attached volume) function

2016-03-31 Thread Matt Riedemann



On 3/31/2016 5:58 AM, Duncan Thomas wrote:

I *think* it is significantly semantically different to do detach,
attach; with swap volume, no events are generated in the guest; that is
why it is dangerous to expose to the tenant - if the volume contents are
not identical, you get weird corruption as the guest flushes caches.

I think this call only makes sense for migration, and not anything else,
and trying to do a version of it that does detach, attach is both
dangerous and unnecessary.

On 31 March 2016 at 05:14, GHANSHYAM MANN wrote:

    On Thu, Mar 31, 2016 at 10:39 AM, Matt Riedemann wrote:
 >
 >
 > On 3/30/2016 8:20 PM, Matt Riedemann wrote:
 >>
 >>
 >>
 >> On 3/30/2016 7:56 PM, Matt Riedemann wrote:
 >>>
 >>>
 >>>
 >>> On 3/30/2016 7:38 PM, Matt Riedemann wrote:
 
 
 
  On 2/25/2016 5:31 AM, Takashi Natsume wrote:
 >
 > Hi Nova and Cinder developers.
 >
 > As I reported in a bug report [1], nova swap volume
 > (updating an attached volume) function does not work
 > in the case of non admin users by default.
 > (Volumes are stuck.)
 >
 > Before I was working for fixing another swap volume bug [2][3].
 > But Ryan fixed it on the Cinder side [4].
 > As a result, admin users can execute swap volume function,
 > but it was not fixed in the case of non admin users.
 > So I reported the bug report [1].
 >
 > In the patch[5], I tried to change the default cinder's policy
 > to allow non admin users to execute migrate_volume_completion
API.
 > But it was rejected by the cinder project ('-2' was voted).
 >
 > In the patch[5], it was suggested to make the swap volume API
admin
 > only
 > on the Nova side.
 > But IMO, the swap volume function should be allowed to non
admin users
 > because attaching a volume and detaching a volume can be
performed
 > by non admin users.
 
 
  I agree with this. DuncanT said in IRC that he didn't think
non-admin
  users should be using the swap-volume API in nova because it
can be
  problematic, but I'm not sure why, is there more history or detail
  there? I'd think it shouldn't be any worse than doing a
detach/attach in
  quick succession (like in a CI test for example).
 
 >
 > If migrate_volume_completion is only allowed to admin users
 > by default on the Cinder side, attaching a new volume and
 > detaching an old volume should be performed on the Nova side
 > when swapping volumes.
 
 
  My understanding of the problem is as follows:
 
  1. Admin-initiated volume migration in Cinder calls off to Nova to
  perform the swap-volume, and then Nova calls back to Cinder's
  migrate_volume_completion API. This is fine since it's an
admin that
  initiated this series of operations on the Cinder side (that's by
  default, however, this is broken if the policy file for Cinder
is change
  to allow non-admins to migrate volumes).
 
  2. A non-admin swap-volume API call in Nova fails because Nova
blindly
  makes the migrate_volume_completion call to Cinder which fails
with a
  403 because the Cinder API policy has that as an admin action by
  default.
 
  I don't know the history around when the swap-volume API was
added to
  Nova, was it specifically for this volume migration scenario
in Cinder?
    Are there other use cases?  Knowing those would be good to
determine
  if Nova should change it's default policy for swap-volume,
although,
  again, that's only a default and can be changed per deployment
so we
  probably shouldn't rely on it.
 
  Ideally we would have implemented this like the nova/neutron
server
  events callback API in Nova during vif plugging (nova does the
vif plug
  on the host then waits for neutron to update it's database for
the port
  status and sends an event (API call) to nova to continue
booting the
  server). That server events API in nova is admin-only by
default and
  neutron is configured with admin credentials for nova to use it.
 
  Another option would be for Nova to handle a 403 response when
calling
  Cinder's migrate_volume_completion API and ignore it if we
don't have an
  admin context. This is pretty hacky though. It assumes that it's a
  non-admin user initiating the swap-volume operation. It

Re: [openstack-dev] 答复: [Heat] Re-evaluate conditions specification

2016-03-31 Thread Thomas Herve
On Thu, Mar 31, 2016 at 2:25 PM, Huangtianhua  wrote:
> The conditions function has been requested for a long time, and there have 
> been several previous discussions, which all ended up in debating the 
> implementation, and no result.
> https://review.openstack.org/#/c/84468/3/doc/source/template_guide/hot_spec.rst
> https://review.openstack.org/#/c/153771/1/specs/kilo/resource-enabled-meta-property.rst

And for a reason: this is a tricky issue, and introducing imperative
constructs in a template can lead to bad practices.

> I think we should focus on the simplest possible way(same as AWS) to meet the 
> user requirement, and follows the AWS, there is no doubt that we will get a 
> very good compatibility.
> And the patches are good in-progress. I don't want everything back to zero:)
> https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/support-conditions-function

I don't say that you should scratch everything. I'm mostly OK with
what has been done, with the exception of the top-level conditions
section. Templates are our user interface, and we need to be very
careful when we introduce new things. 3 years ago following AWS was
the easy path because we didn't have much idea on what to do, but I
believe we now have enough background to be more innovative.

It's also slightly worrying that the spec "only" got 3 cores approving
it, especially on such a touchy subject. I'm guilty as others to not
have voiced my concerns then, though.

-- 
Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Nominating Vikram Hosakot for Core Reviewer

2016-03-31 Thread Jeff Peeler
+1

On Tue, Mar 29, 2016 at 12:07 PM, Steven Dake (stdake)  wrote:
> Hey folks,
>
> Consider this proposal a +1 in favor of Vikram joining the core reviewer
> team.  His reviews are outstanding.  If he doesn’t have anything useful to
> add to a review, he doesn't pile on the review with more –1s which are
> slightly disheartening to people.  Vikram has started a trend amongst the
> core reviewers of actually diagnosing gate failures in peoples patches as
> opposed to saying gate failed please fix.  He does this diagnosis in nearly
> every review I see, and if he is stumped  he says so.  His 30 days review
> stats place him in pole position and his 90 day review stats place him in
> second position.  Of critical notice is that Vikram is ever-present on IRC
> which in my professional experience is the #1 indicator of how well a core
> reviewer will perform long term.   Besides IRC and review requirements, we
> also have code requirements for core reviewers.  Vikram has implemented only
> 10 patches so far, butI feel he could amp this up if he had feature work to
> do.  At the moment we are in a holding pattern on master development because
> we need to fix Mitaka bugs.  That said Vikram is actively working on
> diagnosing root causes of people's bugs in the IRC channel pretty much 12-18
> hours a day so we can ship Mitaka in a working bug-free state.
>
> Our core team consists of 11 people.  Vikram requires at minimum 6 +1 votes,
> with no veto –2 votes within a 7 day voting window to end on April 7th.  If
> there is a veto vote prior to April 7th I will close voting.  If there is a
> unanimous vote prior to April 7th, I will make appropriate changes in
> gerrit.
>
> Regards
> -steve
>
> [1] http://stackalytics.com/report/contribution/kolla-group/30
> [2] http://stackalytics.com/report/contribution/kolla-group/90
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the Intel SRIOV CI running and if so, what does it test?

2016-03-31 Thread Lenny Verkhovsky
As approved in [1], we've added a Nova-Macvtap job as part of the Mellanox 
non-voting Nova CI.
Our wiki page is [2]


[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2016-03-30.log.html
[2] https://wiki.openstack.org/wiki/ThirdPartySystems/Mellanox_CI


Best Regards
Lenny


-Original Message-
From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com] 
Sent: Wednesday, March 30, 2016 6:22 PM
To: OpenStack Development Mailing List (not for usage questions) 

Cc: shaohe.f...@intel.com
Subject: [openstack-dev] [nova] Is the Intel SRIOV CI running and if so, what 
does it test?

Intel has a few third party CIs in the third party systems wiki [1].

I was talking with Moshe Levi today about expanding coverage for mellanox CI in 
nova, today they run an SRIOV CI for vnic type 'direct'. 
I'd like them to also start running their 'macvtap' CI on the same nova changes 
(that job only runs in neutron today I think).

I'm trying to see what we have for coverage on these different NFV 
configurations, and because of limited resources to run NFV CI, don't want to 
duplicate work here.

So I'm wondering what the various Intel NFV CI jobs run, specifically the Intel 
Networking CI [2], Intel NFV CI [3] and Intel SRIOV CI [4].

 From the wiki it looks like the Intel Networking CI tests ovs-dpdk but only 
for Neutron. Could that be expanded to also test on Nova changes that hit a 
sub-set of the nova tree?

I really don't know what the latter two jobs test as far as configuration is 
concerned, the descriptions in the wikis are pretty empty (please update those 
to be more specific).

Please also include in the wiki the recheck method for each CI so I don't have 
to dig through Gerrit comments to find one.

[1] https://wiki.openstack.org/wiki/ThirdPartySystems
[2] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-Networking-CI
[3] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-NFV-CI
[4] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-SRIOV-CI

-- 

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] tripleo-quickstart import

2016-03-31 Thread John Trowbridge


On 03/30/2016 02:16 PM, Paul Belanger wrote:
> On Tue, Mar 29, 2016 at 08:30:22PM -0400, John Trowbridge wrote:
>> Hola,
>>
>> With the approval of the tripleo-quickstart spec[1], it is time to
>> actually start doing the work. The first work item is moving it to the
>> openstack git. The spec talks about moving it as is, and this would
>> still be fine.
>>
>> However, there are roles in the tripleo-quickstart tree that are not
>> directly related to the instack-virt-setup replacement aspect that is
>> approved in the spec (image building, deployment). I think these should
>> be split into their own ansible-role-* repos, so that they can be
>> consumed using ansible-galaxy. It would actually even make sense to do
>> that with the libvirt role responsible for setting up the virtual
>> environment. The tripleo-quickstart would then be just an integration
>> layer making consuming these roles for virtual deployments easy.
>>
>> This way if someone wanted to make a different role for say OVB
>> deployments, it would be easy to use the other roles on top of a
>> differently provisioned undercloud.
>>
>> Similarly, if we wanted to adopt ansible to drive tripleo-ci, it would
>> be very easy to only consume the roles that make sense for the tripleo
>> cloud.
>>
>> So the first question is, should we split the roles out of
>> tripleo-quickstart?
>>
>> If so, should we do that before importing it to the openstack git?
>>
>> Also, should the split out roles also be on the openstack git?
>>
> So, we actually have a few ansible roles in OpenStack, mostly imported by
> myself.  The OpenStack Ansible team has a few too.
> 
> I would propose, keep them included in your project for now and maybe start a
> different discussion with all the ansible projects (kolla, ansible-openstack,
> windmill, etc) to see how to best move forward.  I've discussed with openstack
> ansible in the past about moving the roles I have uploaded into their team and
> hope to bring it up again at Austin.
> 

Awesome, thanks for the feedback Paul. I went ahead and started the
import process:

https://review.openstack.org/#/c/299932
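
As an aside on the ansible-galaxy consumption mentioned above, split-out roles
would typically be pulled in with a requirements file along these lines (a
sketch only; the role names and repository locations are hypothetical):

    # requirements.yml -- hypothetical role names and sources
    - src: https://git.openstack.org/openstack/ansible-role-tripleo-libvirt-setup
      name: tripleo-libvirt-setup
    - src: https://git.openstack.org/openstack/ansible-role-tripleo-image-build
      name: tripleo-image-build

and then installed with `ansible-galaxy install -r requirements.yml -p roles/`.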

>> Maybe this all deserves its own spec and we tackle it after completing
>> all of the work for the first spec. I put this on the meeting agenda for
>> today, but we didn't get to it.
>>
>> - trown
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] [api] Oh Swagger, where art thou?

2016-03-31 Thread Sean Dague
On 03/31/2016 09:43 AM, Jim Rollenhagen wrote:
> On Thu, Mar 31, 2016 at 08:43:29AM -0400, Sean Dague wrote:
>> Some more details on progress, because this is getting closer every day.
>>
>> There is now an api-ref target on the Nova project. The entire work in
>> progress stream has been rebased into 2 patches to a top level api-ref/
>> directory structure in the Nova tree -
>> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:wip_api_docs2
>>
>> That is a 2 patch series. The first is the infrastructure, the second is
>> the content for 2 resources (versions and servers). The rendered output
>> for this is at -
>> http://docs-draft.openstack.org/63/298763/4/check/gate-nova-api-ref/6983838//api-ref/build/html/
>> (you can also pull and build locally with tox -e api-ref)
>>
>> Karen, Auggy, and Anne continue to work on the wadl data translator
>> using the wadl2rst project and fairy-slipper to get various pieces of
>> the structured data over. Hopefully we'll see some of those translated
>> stacks rendering soon in patch sets.
> 
> I assume at some point we'll be pulling the sphinx extension out to a
> separate project so that other projects can use this too? :)
> 
> // jim

Yes. I feel like it's probably a milestone 2 activity. Fixing styling
and error handling is a lot faster in tree while we sort out all the
issues. Once it's good we can move things into something that's pip
installable.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] What to do about booting into port_security_enabled=False networks?

2016-03-31 Thread Sahid Orentino Ferdjaoui
On Wed, Mar 30, 2016 at 09:46:45PM -0500, Matt Riedemann wrote:
> 
> 
> On 3/30/2016 5:55 PM, Armando M. wrote:
> >
> >
> >On 29 March 2016 at 18:55, Matt Riedemann  >> wrote:
> >
> >
> >
> >On 3/29/2016 4:44 PM, Armando M. wrote:
> >
> >
> >
> >On 29 March 2016 at 08:08, Matt Riedemann
> >
> > >>> wrote:
> >
> > Nova has had some long-standing bugs that Sahid is trying
> >to fix
> > here [1].
> >
> > You can create a network in neutron with
> > port_security_enabled=False. However, the bug is that since
> >Nova
> > adds the 'default' security group to the request (if none are
> > specified) when allocating networks, neutron raises an
> >error when
> > you try to create a port on that network with a 'default'
> >security
> > group.
> >
> > Sahid's patch simply checks if the network that we're going
> >to use
> > has port_security_enabled and if it does not, no security
> >groups are
> > applied when creating the port (regardless of what's
> >requested for
> > security groups, which in nova is always at least 'default').
> >
> > There has been a similar attempt at fixing this [2]. That
> >change
> > simply only added the 'default' security group when allocating
> > networks with nova-network. It omitted the default security
> >group if
> > using neutron since:
> >
> > a) If the network does not have port security enabled,
> >we'll blow up
> > trying to add a port on it with the default security group.
> >
> > b) If the network does have port security enabled, neutron will
> > automatically apply a 'default' security group to the port,
> >nova
> > doesn't need to specify one.
> >
> > The problem both Feodor's and Sahid's patches ran into was
> >that the
> > nova REST API adds a 'default' security group to the server
> >create
> > response when using neutron if specific security groups
> >weren't on
> > the server create request [3].
> >
> > This is clearly wrong in the case of
> > network.port_security_enabled=False. When listing security
> >groups
> > for an instance, they are correctly not listed, but the server
> > create response is still wrong.
> >
> > So the question is, how to resolve this?  A few options
> >come to mind:
> >
> > a) Don't return any security groups in the server create
> >response
> > when using neutron as the backend. Given by this point
> >we've cast
> > off to the compute which actually does the work of network
> > allocation, we can't call back into the network API to see what
> > security groups are being used. Since we can't be sure, don't
> > provide what could be false info.
> >
> > b) Add a new method to the network API which takes the
> >requested
> > networks from the server create request and returns a best
> >guess if
> > security groups are going to be applied or not. In the case of
> > network.port_security_enabled=False, we know a security
> >group won't
> > be applied so the method returns False. If there is
> > port_security_enabled, we return whatever security group was
> > requested (or 'default'). If there are multiple networks on the
> > request, we return the security groups that will be applied
> >to any
> > networks that have port security enabled.
> >
> > Option (b) is obviously more intensive and requires hitting the
> > neutron API from nova API before we respond, which we'd like to
> > avoid if possible. I'm also not sure what it means for the
> > auto-allocated-topology (get-me-a-network) case. With a
> >standard
> > devstack setup, a network created via the
> >auto-allocated-topology
> > API has port_security_enabled=True, but I also have the 'Port
> > Security' extension enabled and the default public external
> >network
> > has port_security_enabled=True. What if either of those are
> >False
> > (or the port security extension is disabled)? Does the
> > auto-allocated network inherit port_security_enabled=False?
> >We could
> > duplicate that logic in 

Re: [openstack-dev] [docs] [api] Oh Swagger, where art thou?

2016-03-31 Thread Jim Rollenhagen
On Thu, Mar 31, 2016 at 08:43:29AM -0400, Sean Dague wrote:
> Some more details on progress, because this is getting closer every day.
> 
> There is now an api-ref target on the Nova project. The entire work in
> progress stream has been rebased into 2 patches to a top level api-ref/
> directory structure in the Nova tree -
> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:wip_api_docs2
> 
> That is a 2 patch series. The first is the infrastructure, the second is
> the content for 2 resources (versions and servers). The rendered output
> for this is at -
> http://docs-draft.openstack.org/63/298763/4/check/gate-nova-api-ref/6983838//api-ref/build/html/
> (you can also pull and build locally with tox -e api-ref)
> 
> Karen, Auggy, and Anne continue to work on the wadl data translator
> using the wadl2rst project and fairy-slipper to get various pieces of
> the structured data over. Hopefully we'll see some of those translated
> stacks rendering soon in patch sets.

I assume at some point we'll be pulling the sphinx extension out to a
separate project so that other projects can use this too? :)

// jim

> 
>   -Sean
> 
> On 03/29/2016 07:56 AM, Sean Dague wrote:
> > Some additional links to what this sort of looks like right now -
> > http://docs-draft.openstack.org/71/298671/1/check/gate-nova-docs/c9e7f66//doc/build/html/rest_api/
> > 
> > Which is built off an RST file that looks like - http://tinyurl.com/hjqpjdm
> > 
> > The notable additions are the ".. rest_method::" stanza, which is what
> > provides the collapsing method sections (as are in the current api-ref
> > site).
> > 
> > As well as the ".. rest_parameters::" stanza which takes a list of
> > parameter names, and lookup keys to an external yaml file. This allows
> > for building the tables for these parameters much in the same way as the
> > existing WADL does but in a way where long for description can be easily
> > done. (Tables + rst are a bit cumbersome in the base case).
> > 
> > The #1 Goal here is better docs for Humans. In all cases today, Humans
> > are the primary consumers of our API, building software that uses. A
> > large number of the current API docs have inaccuracies or are generally
> > confusing because few people have been able to ventured into the WADL to
> > make more than small typographic changes.
> > 
> > Getting over to an RST base, that is content first in approach, will
> > hopefully open up the effort to more contributors.
> > 
> > Swagger is neat. But it's important to remember it's actually an API
> > Design Tool, which provides the benefit of some documentation tooling
> > around it. If used in the Design phase of an API it imposes a set of
> > constraints that help build a solid API. Retrofiting to swagger is at
> > best difficult, and some of the features our APIs use make it impossible.
> > 
> > This POC has mostly been about demonstrating what a content first
> > approach could be, where we could plug in structured / semi-structured
> > data via sphinx extensions. In phase 1, that will be our custom quick
> > and dirty markup format that's swagger inspired. In the future projects
> > could replace some of these plug points with other structured content in
> > their projects, like jsonschema / swagger / raml, whatever is appropriate.
> > 
> > However, we need to make sure we don't delay moving forward to get off a
> > WADL base until we've perfected any of those other approaches. We're
> > living on brownfield that is on fire. Getting to higher ground is
> > priority one. Iterating after we're there will continue to happen.
> > 
> > There is a cross project design summit session proposed for this work as
> > well, so we can show current status and discuss futures on this.
> > 
> > As always, comments and questions welcomed. Our giant etherpad of doom
> > in poking at this work, and what's been discovered is here -
> > https://etherpad.openstack.org/p/api-site-in-rst
> > 
> > -Sean
> > 
> > On 03/28/2016 06:21 PM, Anne Gentle wrote:
> >> Hi all,
> >>
> >> This release I’ve been communicating about moving from WADL to Swagger
> >> for the API reference information on developer.openstack.org
> >> . What we've discovered on the journey
> >> is that Swagger doesn't match well for our current API designs (details
> >> below), and while we're not completely giving up on Swagger, we're also
> >> recognizing the engineering effort to maintain and sustain those efforts
> >> isn't going to magically appear.
> >>
> >> Sean Dague has put together a proof-of-concept with Compute servers
> >> reference documentation to use Sphinx, RST, parameters files, and some
> >> Sphinx extensions to do a near-copy representation of the API reference
> >> page. We've met with the Nova API team, the API working group, and other
> >> interested developers to make sure our ideas are sound, and so far we
> >> have consensus on forging a new path forward. The 

Re: [openstack-dev] [telemetry] Rescheduling IRC meetings

2016-03-31 Thread Ildikó Váncsa
Hi Gordon,

> 
> ie. the new PTL should checkpoint with subteam leads regularly to review spec 
> status or identify missing resources on high-priority
> items?

I would say anything that we feel is useful information for people who read this 
mailing list: features that got implemented, items we are 
focusing on, and, as you mentioned, resource bottlenecks on important items, etc.

> 
> as some feedback, re: news flash mails, we need a way to promote roadmap 
> backlog items better. i'm not sure anyone looks at Road
> Map page...
> maybe we need to organise it better with priority and incentive.

I had the Roadmap page in mind as well partially, we could highlight the 
plans/tasks from that page and also track progress.

Thanks,
/Ildikó

> 
> On 31/03/2016 7:14 AM, Ildikó Váncsa wrote:
> > Hi All,
> >
> > +1 on the on demand meeting schedule. Maybe we can also have some news 
> > flash mails  week to summarize the progress in our
> sub-modules when we don't have the meeting. Just to keep people up to date.
> >
> > Will we already skip the today's meeting?
> >
> > Thanks,
> > /Ildikó
> >
> >> -Original Message-
> >> From: Julien Danjou [mailto:jul...@danjou.info]
> >> Sent: March 31, 2016 11:04
> >> To: liusheng
> >> Cc: openstack-dev@lists.openstack.org
> >> Subject: Re: [openstack-dev] [telemetry] Rescheduling IRC meetings
> >>
> >> On Thu, Mar 31 2016, liusheng wrote:
> >>
> >>> Another personal suggestion:
> >>>
>>> maybe we can have a weekly routine mail thread to present the things that
>>> need to be discussed or notified. The mail will also list
>>> the topics posted in the meeting agenda and ask the Telemetry folks if an
>>> online IRC meeting is necessary; if there are very few topics or
>>> low-priority topics, or the topics can be suitably discussed
>>> asynchronously, we can discuss them in the mail thread.
> >>>
> >>> any thoughts?
> >>
> >> Yeah I think it's more or less the same idea that was proposed,
> >> schedule a meeting only if needed. I'm going to amend the meeting wiki 
> >> page with that!
> >>
> >> --
> >> Julien Danjou
> >> -- Free Software hacker
> >> -- https://julien.danjou.info
> >
> > __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> --
> gord
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][library] CI gate for regressions detection in deployment data

2016-03-31 Thread Bogdan Dobrelya
It is time for update!
The previous idea with the committed state and automatic cross-repo
merge hooks in zuul seems too complex to implement. So, the "CI gate for
blah blah" magically becomes now a manual helper tool for
reviewers/developers, see the docs update [0], [1].

You may start using it right now, as described in the docs. Hopefully,
it will help to visualize data changes for complex patches better.

[0] https://review.openstack.org/#/c/299912/
[1] http://goo.gl/Pj3lNf
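
For reference, pulling the fixtures repository into the Noop test runs via
.fixtures.yml (as discussed in the quoted thread below) would look roughly like
this; a sketch only, and the repository location is an assumption:

    fixtures:
      repositories:
        fuel_noop_fixtures:
          repo: "https://git.openstack.org/openstack/fuel-noop-fixtures"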

> On 01.12.2015 11:28, Aleksandr Didenko wrote:
>> Hi,
>> 
>>> pregenerated catalogs for the Noop tests to become the very first
>>> committed state in the data regression process has to be put in the
>>> *separate repo*
>> 
>> +1 to that, we can put this new repo into .fixtures.yml
>> 
>>> note, we could as well move the tests/noop/astute.yaml/ there
>> 
>> +1 here too, astute.yaml files are basically configuration fixtures, we
>> can put them into .fixtures.yml as well
> 
> I found the better -and easier for patch authors- way to use the data
> regression checks. Originally suggested workflow was:
> 
> 1.
> "The check should be done for every modular component (aka deployment
> task). Data generated in the noop catalog run for all classes and
> defines of a given deployment task should be verified against its
> "acknowledged" (committed) state."
> 
> This part remains the same with the only comment that the astute.yaml
> fixtures of deployment cases should be fetched from the
> fuel-noop-fixtures repo. And the committed state for generated catalogs
> should be
> stored there as well.
> 
> 2.
> "And fail the test gate, if changes has been found, like new parameter
> with a defined value, removed a parameter, changed a parameter's value."
> 
> This should be changed as following:
> - the data checks gate should be just a non voting helper for reviewers
> and patch authors. The only its task would be to show inducted data
> changes in a pretty and fast view to help accept/update/reject a patch
> on review.
> - the data checks gate job should fetch the committed data state from
> the fuel-noop-fixtures repo and run regressions check with the patch
> under review checked out on fuel-library repo.
> - the Noop tests gate should be changed to fetch the astute.yaml
> fixtures from the fuel-noop-fixtures repo in order to run noop tests as
> usual.
> 
> 3.
> "In order to remove a regression, a patch author will have to add (and
> reviewers should acknowledge) detected changes in the committed state of
> the deployment data. This may be done manually, with a tool like [3] or
> by a pre-commit hook, or even at the CI side!"
> 
> Instead, the patch authors should do nothing additionally. Once accepted
> with wf+1, the patch on reivew should be merged with a pre-commit zuul
> hook (is it possible?). The hook should just regenerate catalogs with
> the changes introduced by the patch and update the committed state of
> data in the fuel-noop-fixtures repo. After that, the patch may be safely
> merged to the fuel-library and everything will be up to date with the
> committed data state.
> 
> 4.
> "The regression check should show the diff between committed state and a
> new state proposed in a patch. Changed state should be *reviewed* and
> accepted with a patch, to became a committed one. So the deployment data
> will evolve with *only* approved changes. And those changes would be
> very easy to be discovered for each patch under review process!"
> 
> So this part would work even better now, with no additional actions
> required from the review process sides.
> 
>> 
>> Regards,
>> Alex
>> 
>> 
>> On Mon, Nov 30, 2015 at 1:03 PM, Bogdan Dobrelya > > wrote:
>> 
>> On 20.11.2015 17:41, Bogdan Dobrelya wrote:
>> >> Hi,
>> >>
>> >> let me try to rephrase this a bit and Bogdan will correct me if
>> I'm wrong
>> >> or missing something.
>> >>
>> >> We have a set of top-scope manifests (called Fuel puppet tasks)
>> that we use
>> >> for OpenStack deployment. We execute those tasks with "puppet
>> apply". Each
>> >> task supposed to bring target system into some desired state, so
>> puppet
>> >> compiles a catalog and applies it. So basically, puppet catalog =
>> desired
>> >> system state.
>> >>
>> >> So we can compile* catalogs for all top-scope manifests in master
>> branch
>> >> and store those compiled* catalogs in fuel-library repo. Then for
>> each
>> >> proposed patch CI will compare new catalogs with stored ones and
>> print out
>> >> the difference if any. This will pretty much show what is going to be
>> >> changed in system configuration by proposed patch.
>> >>
>> >> We were discussing such checks before several times, iirc, but we
>> did not
>> >> have right tools to implement such thing before. Well, now we do
>> :) I think
>> >> it could be quite useful even in non-voting mode.
>>   

Re: [openstack-dev] 答复: [Heat] Re-evaluate conditions specification

2016-03-31 Thread Rabi Mishra
> The conditions function has been requested for a long time, and there have
> been several previous discussions, which all ended up in debating the
> implementation, and no result.
> https://review.openstack.org/#/c/84468/3/doc/source/template_guide/hot_spec.rst
> https://review.openstack.org/#/c/153771/1/specs/kilo/resource-enabled-meta-property.rst
> 
> I think we should focus on the simplest possible way(same as AWS) to meet the
> user requirement, and follows the AWS, there is no doubt that we will get a
> very good compatibility.
> And the patches are good in-progress. I don't want everything back to zero:)
> https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/support-conditions-function
> 
> In the example you have given of 'variables', there seems to be no relation with
> resource/output/property conditions; it seems like another function, really
> 'variables' to be used in the template.

If I understand the suggestion correctly, the only relation it has with 
conditions is that conditions are nothing but boolean variables.

conditions: {
'for_prod': {equals: [{get_param: env_type}, 'prod']}
  }

would be

variables:
   for_prod: {equals: [{get_param: env_type}, prod]}


then you can use it in your example as:

floating_ip:
  type: OS::Nova::FloatingIP
  condition: {get_variable: for_prod}

so the suggestion is to make it more generic, so that it can be used for other 
things and reduce some of the verbosity in the templates.

However, I think the term 'variable' makes it sound more like a programming 
thing. Maybe we can use something better. Personally, though, I kind of like the idea.
 
> -邮件原件-
> 发件人: Thomas Herve [mailto:the...@redhat.com]
> 发送时间: 2016年3月31日 19:55
> 收件人: OpenStack Development Mailing List (not for usage questions)
> 主题: Re: [openstack-dev] [Heat] Re-evaluate conditions specification
> 
> On Thu, Mar 31, 2016 at 10:40 AM, Thomas Herve  wrote:
> > Hi all,
> >
> > As the patches for conditions support are incoming, I've found
> > something in the code (and the spec) I'm not really happy with. We're
> > creating a new top-level section in the template called "conditions"
> > which holds names that can be reused for conditionally creating
> > resource.
> >
> > While it's fine and maps to what AWS does, I think it's a bit
> > short-sighted and limited. What I have suggested in the past is to
> > have a "variables" (or whatever you want to call it) section, where
> > one can declare names and values. Then we can add an intrinsic
> > function to retrieve data from there, and use that for examples for
> > conditions.
> 
> I was asked to give examples, here's at least one that can illustrate what I
> meant:
> 
> parameters:
>host:
>   type: string
>port:
>   type: string
> 
> variables:
>endpoint:
>   str_replace:
> template:
>http://HOST:PORT/
> params:
>HOST: {get_param: host}
>PORT: {get_param: port}
> 
> resources:
>config1:
>   type: OS::Heat::StructuredConfig
>   properties:
> config:
>hosts: [{get_variable: endpoint}]
> 
> --
> Thomas
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Neutron] Integration status

2016-03-31 Thread Jim Rollenhagen
On Thu, Mar 31, 2016 at 03:37:53PM +0300, Vasyl Saienko wrote:
> Hello Community,
> 
> I'm happy to announce that the new experimental job
> 'ironic-multitenant-network' is stabilized and working. This job allows testing
> Ironic multitenancy patches in the gate with the help of
> networking-generic-switch [1]. Unfortunately, Depends-On doesn't work for
> python-ironicclient [2] since it is installed from pip; there is a workaround
> for it [3].

This is so awesome. Amazing work here by everyone involved. \o/

> The full list of patches is [4].
>  I'm kindly asking for reviews, as Ironic multitenancy is a feature very much
> desired by Ironic customers for the Newton release.

+1. I'd love to get this stuff in the ironic tree before the summit, and
have the Nova stuff at least ready to land by then.

I'll be trying to review this today/tomorrow, hoping others can do the
same. :)

// jim

> [1] https://github.com/openstack/networking-generic-switch
> [2] https://review.openstack.org/206144/
> [3] https://review.openstack.org/296432/
> [4]
> https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1526403
> 
> Sincerely,
> Vasyl Saienko

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Austin Design Summit track layout

2016-03-31 Thread Thierry Carrez

OK, please find attached the final layout.

Changes:
- Swapped Oslo and Stable fishbowl sessions on Thursday afternoon.
- Swapped Fuel and Swift workroom sessions on Thursday afternoon

This will be pushed to the official schedule ASAP and then PTLs will be 
able to tweak titles and descriptions for their sessions.


--
Thierry Carrez (ttx)


DSAustin20160331.pdf
Description: Adobe PDF document
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-31 Thread Emilien Macchi
On Thu, Mar 31, 2016 at 12:12 AM, Fox, Kevin M  wrote:
> The main issue is one of upgradability, not stability. We all know tripleo
> is stable. TripleO can't do upgrades today. We're looking for ways to get
> there. So "upgrading" to Ansible isn't necessary for sure, since folks
> deploying TripleO today must assume they can't upgrade anyway.

That is wrong now: we are able to upgrade from Liberty to Mitaka.
We're even engaging some CI efforts to have an upgrade job, that will
run on each patch.

> Honestly I have doubts any config management system from puppet to heat
> software deployments can be coerced to do a cloud upgrade without downtime
> and without a huge amount of workarounds. You really either need a
> workflow-oriented system with global knowledge like Ansible or a container
> orchestration system like Kubernetes to ensure you don't change too many things
> at once and break things. You need to be able to run some old things and
> some new, all at the same time. And in some cases different versions/config
> of the same service on different machines.

> Thoughts on how this may be made to work with puppet/heat?

Yes, TripleO team is currently investigating and pushing efforts on
Mistral, that is the workflow service in OpenStack.
Some background about Mistral choice:
http://lists.openstack.org/pipermail/openstack-dev/2016-January/083757.html

The spec is discussed here:
https://review.openstack.org/#/c/280407/

We think Mistral will really help us to achieve workflows in TripleO,
including upgrades.
Dan Prince made a screencast to demonstrate Mistral usage:
https://www.youtube.com/watch?v=bnAT37O-sdw
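
For readers not familiar with Mistral, a workflow is plain YAML in its v2 DSL.
Below is a minimal sketch; the workflow and task names are made up, this is not
the actual TripleO workflow, and the heat.* action assumes Mistral's
auto-generated OpenStack actions:

    ---
    version: '2.0'

    example_update:
      description: Illustrative sketch only, not the real TripleO upgrade flow.
      input:
        - stack_name
      tasks:
        get_stack:
          # look up the deployment stack (auto-generated heatclient action)
          action: heat.stacks_get
          input:
            stack_id: <% $.stack_name %>
          on-success: report
        report:
          action: std.echo
          input:
            output: "The update of <% $.stack_name %> would be driven from here."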

> Thanks,
> Kevin
>
> 
> From: Dan Prince
> Sent: Monday, March 28, 2016 12:07:22 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat,
> containers, and the future of TripleO
>
> On Wed, 2016-03-23 at 07:54 -0400, Ryan Hallisey wrote:
>> *Snip*
>>
>> >
>> > Indeed, this has literally none of the benefits of the ideal Heat
>> > deployment enumerated above save one: it may be entirely the wrong
>> > tool
>> > in every way for the job it's being asked to do, but at least it
>> > is
>> > still well-integrated with the rest of the infrastructure.
>> >
>> > Now, at the Mitaka summit we discussed the idea of a 'split
>> > stack',
>> > where we have one stack for the infrastructure and a separate one
>> > for
>> > the software deployments, so that there is no longer any tight
>> > integration between infrastructure and software. Although it makes
>> > me a
>> > bit sad in some ways, I can certainly appreciate the merits of the
>> > idea
>> > as well. However, from the argument above we can deduce that if
>> > this is
>> > the *only* thing we do then we will end up in the very worst of
>> > all
>> > possible worlds: the wrong tool for the job, poorly integrated.
>> > Every
>> > single advantage of using Heat to deploy software will have
>> > evaporated,
>> > leaving only disadvantages.
>> I think Heat is a very powerful tool; having done the container
>> integration into the tripleo-heat-templates, I can see its appeal.
>> Something I learned from that integration was that Heat is not the best
>> tool for container deployment, at least right now.  We were able to
>> leverage the work in Kolla, but what it came down to was that we're not
>> using containers or Kolla to their max potential.
>>
>> I did an evaluation recently of tripleo and kolla to see what we
>> would gain
>> if the two were to combine. Let's look at some items on tripleo's
>> roadmap.
>> Split stack, as mentioned above, would be gained if TripleO were to
>> adopt Kolla.  TripleO holds the undercloud and Ironic, while Kolla
>> separates config and deployment, allowing each piece of the stack to be
>> decoupled.  Composable roles, that is, the ability to land services onto
>> separate hosts on demand, is something Kolla already does [1].  Finally,
>> container integration is just a given :).
>>
>> In the near term, if TripleO were to adopt Kolla as its overcloud, it
>> would gain these features, and Heat would be retired to setting up the
>> baremetal nodes and handing their IPs to Ansible.  This would be great
>> for Kolla too, because it would gain baremetal provisioning.
>>
>> Ian Main and I are currently working on a POC for this as of last
>> week [2].
>> It's just a simple heat template :).
>>
>> I think further down the road we can evaluate using Kubernetes [3].
>> For now though, kolla-ansible is rock solid and is worth using for the
>> overcloud.
>
> Yeah, well, TripleO Heat overclouds are rock solid too. They just aren't
> using containers everywhere yet. So let's fix that.
>
> I'm not a fan of replacing the TripleO overcloud configuration with
> Kolla. I don't think there is feature parity, the architectures are
> different (HA, etc.) and I 

Re: [openstack-dev] [nova] API priorities in Newton

2016-03-31 Thread Alex Xu
2016-03-31 19:21 GMT+08:00 Sean Dague :

> On 03/31/2016 04:55 AM, Alex Xu wrote:
> >
> >
> > 2016-03-31 5:36 GMT+08:00 Matt Riedemann  
> > As discussed in IRC today, first steps (I think) are removing the
> > deprecated 'osapi_v21.enabled' option in newton so v2.1 can't be
> > disabled.
> >
> > And we need to think about logging a warning if you're using v2.0.
> >
> > That sets a timetable for removal of v2.0 in the O release at the
> > earliest.
> >
> >
> > If the target is the O release, should we mark v2.0 as 'DEPRECATED' in the
> > version API? Then we would have a release to deprecate it.
> >
> > Currently it is still
> > 'SUPPORTED'
> https://github.com/openstack/nova/blob/master/doc/api_samples/versions/versions-get-resp.json#L11
>
> To be clear. We aren't actually deprecating v2.0 (though I guess we
> could do that as well). We're talking about deleting the 2.0 legacy
> code. The v2.1 on v2.0 compat layer would still be there.
>

Thanks, I got it now. Sorry for misunderstanding the point.
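
For what it's worth, whichever way we go, the status string is what clients see
when they read the versions document from the compute API root, so the change
would be easy to verify. A minimal sketch in Python (the endpoint URL is only a
placeholder), assuming python-requests is available:

    import requests

    # GET on the compute API root returns the unauthenticated versions document.
    resp = requests.get('http://controller:8774/')
    for version in resp.json()['versions']:
        # Prints e.g. "v2.0 SUPPORTED" and "v2.1 CURRENT" today.
        print(version['id'], version['status'])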


>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Mitaka RC3 available

2016-03-31 Thread Thierry Carrez
Due to new release-critical issues spotted in Heat during RC2 testing, a 
new release candidate was created for Mitaka. You can find the RC3 
source code tarball at:


https://tarballs.openstack.org/heat/heat-6.0.0.0rc3.tar.gz

Unless new release-critical issues are found that warrant a last-minute 
release candidate respin, this tarball will be formally released as the 
final "Mitaka" version on April 7th. You are therefore strongly 
encouraged to test and validate this tarball!


Alternatively, you can directly test the mitaka release branch at:

http://git.openstack.org/cgit/openstack/heat/log/?h=stable/mitaka

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/heat/+filebug

and tag it *mitaka-rc-potential* to bring it to the Heat release crew's 
attention.



--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Mitaka RC3 available

2016-03-31 Thread Thierry Carrez

Due to new release-critical issues spotted in Nova during RC2 testing, a
new release candidate was created for Mitaka. You can find the RC3 
source code tarball at:


https://tarballs.openstack.org/nova/nova-13.0.0.0rc3.tar.gz

Unless new release-critical issues are found that warrant a last-minute
release candidate respin, this tarball will be formally released as
final "Mitaka" versions on April 7th. You are therefore strongly
encouraged to test and validate this tarball !

Alternatively, you can directly test the mitaka release branch at:
http://git.openstack.org/cgit/openstack/nova/log/?h=stable/mitaka

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/nova/+filebug

and tag it *mitaka-rc-potential* to bring it to the Nova release crew's
attention.

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Sahara] Mitaka RC2 available

2016-03-31 Thread Thierry Carrez
Due to one release-critical issue spotted in Sahara during RC1 testing, 
a new release candidate was created for Mitaka. You can find the RC2 
source code tarballs at:


https://tarballs.openstack.org/sahara/sahara-4.0.0.0rc2.tar.gz
https://tarballs.openstack.org/sahara-dashboard/sahara-dashboard-4.0.0.0rc2.tar.gz
https://tarballs.openstack.org/sahara-extra/sahara-extra-4.0.0.0rc2.tar.gz
https://tarballs.openstack.org/sahara-image-elements/sahara-image-elements-4.0.0.0rc2.tar.gz

Unless new release-critical issues are found that warrant a last-minute 
release candidate respin, these tarballs will be formally released as 
the final "Mitaka" version on April 7th. You are therefore strongly 
encouraged to test and validate these tarballs!


Alternatively, you can directly test the mitaka release branches at:

http://git.openstack.org/cgit/openstack/sahara/log/?h=stable/mitaka
http://git.openstack.org/cgit/openstack/sahara-dashboard/log/?h=stable/mitaka
http://git.openstack.org/cgit/openstack/sahara-extra/log/?h=stable/mitaka
http://git.openstack.org/cgit/openstack/sahara-image-elements/log/?h=stable/mitaka

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/sahara/+filebug

and tag it *mitaka-rc-potential* to bring it to the Sahara release 
crew's attention.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic][Neutron] Integration status

2016-03-31 Thread Vasyl Saienko
Hello Community,

I'm happy to announce that the new experimental job
'ironic-multitenant-network' has stabilized and is working. This job allows
testing Ironic multitenancy patches in the gate with the help of
networking-generic-switch [1]. Unfortunately, Depends-On doesn't work for
python-ironicclient [2] since it is installed from pip; there is a workaround
for it [3].

The full list of patches is [4].
I kindly ask you to review them, as Ironic multitenancy is a highly desired
feature for Ironic users in the Newton release.

[1] https://github.com/openstack/networking-generic-switch
[2] https://review.openstack.org/206144/
[3] https://review.openstack.org/296432/
[4]
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1526403

Sincerely,
Vasyl Saienko
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Rescheduling IRC meetings

2016-03-31 Thread Julien Danjou
On Thu, Mar 31 2016, Ildikó Váncsa wrote:

> +1 on the on-demand meeting schedule. Maybe we can also have some news-flash
> mails each week to summarize the progress in our sub-modules when we don't have
> the meeting. Just to keep people up to date.

That sounds like a good idea. If anyone is motivated to do that, please go for it!

> Will we already skip today's meeting?

Yes, there's nothing on the agenda.

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Mitaka RC2 available

2016-03-31 Thread Thierry Carrez
Due to one release-critical issue spotted in Trove during RC1 testing, a 
new release candidate was created for Mitaka. You can find the RC2 
source code tarball at:


https://tarballs.openstack.org/trove/trove-5.0.0.0rc2.tar.gz

Unless new release-critical issues are found that warrant a last-minute 
release candidate respin, this tarball will be formally released as the 
final "Mitaka" version on April 7th. You are therefore strongly 
encouraged to test and validate this tarball!


Alternatively, you can directly test the mitaka release branch at:

http://git.openstack.org/cgit/openstack/trove/log/?h=stable/mitaka

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/trove/+filebug

and tag it *mitaka-rc-potential* to bring it to the Trove release crew's 
attention.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the Intel SRIOV CI running and if so, what does it test?

2016-03-31 Thread Znoinski, Waldemar


 >-Original Message-
 >From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
 >Sent: Wednesday, March 30, 2016 4:22 PM
 >To: OpenStack Development Mailing List (not for usage questions)
 >
 >Cc: Feng, Shaohe 
 >Subject: [openstack-dev] [nova] Is the Intel SRIOV CI running and if so, what
 >does it test?
 >
 >Intel has a few third party CIs in the third party systems wiki [1].
 >
 >I was talking with Moshe Levi today about expanding coverage for mellanox
 >CI in nova, today they run an SRIOV CI for vnic type 'direct'.
 >I'd like them to also start running their 'macvtap' CI on the same nova
 >changes (that job only runs in neutron today I think).
 >
 >I'm trying to see what we have for coverage on these different NFV
 >configurations, and because of limited resources to run NFV CI, don't want to
 >duplicate work here.
 >
 >So I'm wondering what the various Intel NFV CI jobs run, specifically the 
 >Intel
 >Networking CI [2], Intel NFV CI [3] and Intel SRIOV CI [4].
[WZ] See comments below about the full/small wiki question, but would the
following be enough, or would you want to see more:
- networking-ci runs (with exceptions): 
tempest.api.network 
tempest.scenario.test_network_basic_ops 

- nfv-ci runs (with exceptions):
tempest.api.compute on standard flavors with NFV features enabled
tempest.scenario (including intel-nfv-ci-tests - 
https://github.com/openstack/intel-nfv-ci-tests) on standard flavors with NFV 
features enabled
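
To be concrete about "NFV features enabled": roughly speaking, the flavors carry
NFV-oriented extra specs such as CPU pinning and hugepages. A rough illustrative
sketch of such a flavor in Python (credentials, endpoint and values below are
placeholders, not our exact CI configuration):

    from keystoneauth1 import loading, session
    from novaclient import client

    # Placeholder credentials/endpoint for illustration only.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(auth_url='http://controller:5000/v3',
                                    username='admin', password='secret',
                                    project_name='admin',
                                    user_domain_name='Default',
                                    project_domain_name='Default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    # A standard flavor with NFV extra specs (pinned CPUs, hugepages).
    flavor = nova.flavors.create(name='nfv.small', ram=2048, vcpus=2, disk=20)
    flavor.set_keys({'hw:cpu_policy': 'dedicated',
                     'hw:mem_page_size': 'large'})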

>
 > From the wiki it looks like the Intel Networking CI tests ovs-dpdk but only 
 > for
 >Neutron. Could that be expanded to also test on Nova changes that hit a sub-
 >set of the nova tree?
[WZ] Yes, Networking CI is for Neutron, to test ovs-dpdk. It was also configured
to trigger on openstack/nova changes when they affect nova/virt/libvirt/vif.py.
That trigger is currently disabled due to an issue we're seeing with a Jenkins
plugin when two jobs point at the same project simultaneously, which causes
missed comments; see [5] for an example. We're still investigating one last
option to get it working properly with the current setup. Even if that fails,
we're currently migrating to a new CI setup (OpenStack Infra's downstream-ci
suite) and we'll re-enable ovs-dpdk testing on nova changes once we've migrated,
6-8 weeks from now.
Is there anything more you feel we should be testing ovs-dpdk against when it
comes to nova changes?

 >
 >I really don't know what the latter two jobs test as far as configuration is
 >concerned, the descriptions in the wikis are pretty empty (please update
 >those to be more specific).
[WZ] I see some CIs have full wiki pages [6], apart from the usual shim wiki
overview table [7]. Is that what you're suggesting, or is extended info about
the tests in the 'small' wiki [2] OK?

 >
 >Please also include in the wiki the recheck method for each CI so I don't have
 >to dig through Gerrit comments to find one.
[WZ] Done

 >
 >[1] https://wiki.openstack.org/wiki/ThirdPartySystems
 >[2] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-Networking-CI
 >[3] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-NFV-CI
 >[4] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-SRIOV-CI
[5] https://review.openstack.org/#/c/294312 
[6] https://wiki.openstack.org/wiki/PowerKVM 
[7] https://wiki.openstack.org/wiki/ThirdPartySystems/IBMPowerKVMCI 
 >
 >--
 >
 >Thanks,
 >
 >Matt Riedemann
[WZ] thanks
>
 >
 >__
 >
 >OpenStack Development Mailing List (not for usage questions)
 >Unsubscribe: OpenStack-dev-
 >requ...@lists.openstack.org?subject:unsubscribe
 >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263


This e-mail and any attachments may contain confidential material for the sole
use of the intended recipient(s). Any review or distribution by others is
strictly prohibited. If you are not the intended recipient, please contact the
sender and delete all copies.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Barbican] Mitaka RC2 available

2016-03-31 Thread Thierry Carrez
Due to one release-critical issue spotted in Barbican during RC1 
testing, a new release candidate was created for Mitaka. You can find 
the RC2 source code tarball at:


https://tarballs.openstack.org/barbican/barbican-2.0.0.0rc2.tar.gz

Unless new release-critical issues are found that warrant a last-minute 
release candidate respin, this tarball will be formally released as the 
final "Mitaka" version on April 7th. You are therefore strongly 
encouraged to test and validate this tarball!


Alternatively, you can directly test the mitaka release branch at:

http://git.openstack.org/cgit/openstack/barbican/log/?h=stable/mitaka

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/barbican/+filebug

and tag it *mitaka-rc-potential* to bring it to the Barbican release 
crew's attention.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] No rejoin-stack.sh script in my setup

2016-03-31 Thread Jordan Pittier
Hi,
rejoin-stack.sh was removed 14 days ago by
https://review.openstack.org/#/c/291453/

You should use the "screen" command now (e.g. "screen -R" to reattach to the
detached DevStack screen session).

On Thu, Mar 31, 2016 at 1:06 PM, Ouadï Belmokhtar <
ouadi.belmokh...@gmail.com> wrote:

> Hi everyone,
>
> Could you give any help to my question here, please.
>
>
> http://stackoverflow.com/questions/36268822/no-rejoin-stack-sh-script-in-my-setup
>
> I have been blocked by the same problem for 10 days. Any help is appreciated.
>
> Regards,
>
> --
> Ouadï Belmokhtar
> Permanent Professor at EMSI-Rabat
> PhD Candidate in Computer & Information Science
> Mohammadia School of Engineering
> Mohammed V University, Rabat, Morocco
> ouadibelmokh...@research.emi.ac.ma
> Mobile: (+212) 668829641
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

