Re: [openstack-dev] [kolla][vote] Just make Mitaka deploy Liberty within the Liberty branch

2016-03-29 Thread Michal Rostecki
+1 under the condition that there will be a migration path from the "old" 
Liberty to the "new" Liberty, at least in the form of documentation.


If that doesn't require any downtime of VMs (except the Docker 
upgrade, of course) and can be achieved just by running playbooks and 
a script for volume migration, then I'm fully +1.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Nominating Vikram Hosakot for Core Reviewer

2016-03-29 Thread Michal Rostecki

+1



Re: [openstack-dev] [tricircle] playing tricircle with two node configuration

2016-03-29 Thread Shinobu Kinjo
What we did here may not be best practice, even if it is OK this time.

On Tue, Mar 29, 2016 at 5:10 PM, Yipei Niu  wrote:
> Hi, all,
>
> I followed the advice given by Zhiyuan, and it works. As for the port error,
> someone removed the line "iniset $NOVA_CONF DEFAULT network_api_class
> nova.network.neutronv2.api.API" in lib/neutron-legacy.

https://review.openstack.org/#/c/288556/
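For readers unfamiliar with devstack internals, `iniset` simply writes one key into a section of an ini file. A rough Python sketch of that behavior (not devstack's actual shell implementation), using a throwaway temp file rather than the real nova.conf:

```python
import configparser
import tempfile

# Sketch of what devstack's helper does for:
#   iniset $NOVA_CONF DEFAULT network_api_class nova.network.neutronv2.api.API
# The file path here is a throwaway temp file, not the real /etc/nova/nova.conf.
def iniset(path, section, option, value):
    cfg = configparser.ConfigParser()
    cfg.read(path)
    if section != "DEFAULT" and not cfg.has_section(section):
        cfg.add_section(section)
    cfg.set(section, option, value)
    with open(path, "w") as f:
        cfg.write(f)

conf = tempfile.NamedTemporaryFile(mode="w", suffix=".conf", delete=False)
conf.write("[DEFAULT]\n")
conf.close()

iniset(conf.name, "DEFAULT", "network_api_class",
       "nova.network.neutronv2.api.API")

check = configparser.ConfigParser()
check.read(conf.name)
print(check.get("DEFAULT", "network_api_class"))
# prints: nova.network.neutronv2.api.API
```

Removing that `iniset` call therefore leaves `network_api_class` unset in nova.conf, which is why the behavior changed after the patch above.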

> For the unknown
> auth_plugin error in the first mail, someone changed auth_plugin to
> auth_type in lib/neutron-legacy,

https://review.openstack.org/#/c/278569/

> but the loader still looks up auth_plugin
> in nova/network/neutronv2/api.py, which therefore leaves the auth_plugin
> option set to None. I changed the files back to their original versions,
> respectively. Now I have booted two VMs successfully. Is this solution OK?
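The mismatch Yipei describes can be reproduced in isolation: if a consumer still looks up auth_plugin while the file now only defines auth_type, the lookup comes back empty. A minimal sketch with plain configparser (not the actual keystoneauth loader):

```python
import configparser

# The [neutron] section as written by the patched lib/neutron-legacy,
# where the option was renamed auth_plugin -> auth_type:
new_style = """\
[neutron]
auth_type = password
"""

cfg = configparser.ConfigParser()
cfg.read_string(new_style)

# An old-style consumer still asks for auth_plugin and comes back empty,
# which matches the "auth_plugin is None" symptom described above.
plugin = cfg.get("neutron", "auth_plugin", fallback=None)
print(plugin)
# prints: None
```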
>
> BTW, I have some trouble creating a router and am trying to fix it.
>
> Best regards,
> Yipei
>
> On Tue, Mar 29, 2016 at 2:25 PM, joehuang  wrote:
>>
>> I think we can do it later. It's still a little bit strange: if
>> nova-network is disabled in local.conf before the devstack
>> installation, devstack should use Neutron instead. I'm not sure how
>> clean Yipei's environment is.
>>
>> I suggest Yipei use clean virtual machines running in VirtualBox, and
>> follow the README [1] and Pengfei's experience [2] to install Tricircle.
>>
>> [1] https://github.com/openstack/tricircle/blob/master/README.md
>> [2] http://shipengfei92.cn/play_tricircle_with_virtualbox
>>
>> Best Regards
>> Chaoyi Huang ( Joe Huang )
>>
>>
>> -Original Message-
>> From: Shinobu Kinjo [mailto:shinobu...@gmail.com]
>> Sent: Tuesday, March 29, 2016 10:48 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Cc: Yipei Niu
>> Subject: Re: [openstack-dev] [tricircle] playing tricircle with two node
>> configuration
>>
>> Sorry for interruption.
>>
>> >> I followed the README.md on GitHub; everything goes well until I
>> >> boot virtual machines with the following command
>>
>> The point here is that even though Yipei was following the README we
>> provide [1], setting up Tricircle failed. It may need to be reviewed, I think.
>>
>> [1] https://github.com/openstack/tricircle/blob/master/README.md
>>
>> Cheers,
>> Shinobu
>>
>>
>> On Tue, Mar 29, 2016 at 11:21 AM, Vega Cai  wrote:
>> > Hi Yipei,
>> >
>> > If you want to use Neutron, you need to set "network_api_class" to
>> > "nova.network.neutronv2.api.API"; "nova.network.api.API" is for nova
>> > network.
>> >
>> > Related code:
>> > https://github.com/openstack/nova/blob/master/nova/network/__init__.py#L47
>> >
>> > Simply search for "Unknown argument: port", the error shown in
>> > the log, in the nova code tree and you will find the above code.
>> >
>> > Also, if all the services are running but you want to change some of
>> > the configuration options, you can just modify the configuration files
>> > then restart the services, so you can quickly check if your
>> > modification works without restarting devstack.
>> >
>> > BR
>> > Zhiyuan
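Zhiyuan's pointer to nova/network/__init__.py boils down to a config-driven import: the string in `network_api_class` decides which network API class nova instantiates. A toy illustration of that dispatch (the real code uses importutils.import_class(); the stand-in classes and registry below are hypothetical):

```python
# Toy illustration of nova's config-driven network API selection.
# The real dispatch lives in nova/network/__init__.py; these stand-in
# classes are hypothetical.
class NeutronAPI:
    name = "neutron"

class NovaNetAPI:
    name = "nova-network"

_KNOWN = {
    "nova.network.neutronv2.api.API": NeutronAPI,
    "nova.network.api.API": NovaNetAPI,
}

def load_network_api(network_api_class):
    # A dict lookup is enough to show why the option string matters:
    # the wrong value silently selects the wrong backend.
    return _KNOWN[network_api_class]()

api = load_network_api("nova.network.neutronv2.api.API")
print(api.name)
# prints: neutron
```

This is why pointing `network_api_class` at "nova.network.api.API" keeps nova on nova-network and triggers the "Unknown argument: port" error when Neutron-style arguments are passed.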
>> >
>> > On 29 March 2016 at 08:56, Yipei Niu  wrote:
>> >>
>> >> Hi, all,
>> >>
>> >> I checked nova.conf and local.conf both.
>> >>
>> >> In nova.conf, the option "network_api_class" is missing while
>> >> "use_neutron"
>> >> is set to True. I modified lib/nova so that devstack writes
>> >> "network_api_class=nova.network.api.API" to nova.conf [1]. However,
>> >> after installing devstack with tricircle, the same error still happens
>> >> as before.
>> >>
>> >> In local.conf, n-net is disabled, which is the same as the sample
>> >> file of tricircle.
>> >>
>> >> [1] http://docs.openstack.org/developer/nova/sample_config.html
>> >>
>> >> Best regards,
>> >> Yipei
>> >>
>> >> On Mon, Mar 28, 2016 at 4:46 PM, Shinobu Kinjo 
>> >> wrote:
>> >>>
>> >>> FYI:
>> >>> The reason is that n-net is still enabled. [1]
>> >>>
>> >>> [1]
>> >>> http://docs.openstack.org/openstack-ops/content/nova-network-deprecation.html
>> >>>
>> >>> Cheers,
>> >>> S
>> >>>
>> >>> On Mon, Mar 28, 2016 at 5:08 PM, joehuang  wrote:
>> >>> > Hi,
>> >>> >
>> >>> >
>> >>> >
>> >>> > Agree, it’s quite important not to use Nova-network in devstack.
>> >>> > In devstack local.conf, make sure the Neutron service is enabled
>> >>> > and Nova-network is disabled.
>> >>> >
>> >>> >
>> >>> >
>> >>> > # Use Neutron instead of nova-network
>> >>> >
>> >>> > disable_service n-net
>> >>> >
>> >>> > enable_service q-svc
>> >>> >
>> >>> > enable_service q-svc1
>> >>> >
>> >>> > enable_service q-dhcp
>> >>> >
>> >>> > enable_service q-agt
>> >>> >
>> >>> >
>> >>> >
>> >>> > And also check the configuration in Nova to use Neutron
>> >>> >
>> >>> >
>> >>> >
>> >>> > Best Regards
>> >>> >
>> >>> > Chaoyi Huang ( Joe Huang )
>> >>> >
>> >>> >
>> >>> >
>> >>> > From: Vega Cai 

Re: [openstack-dev] [kolla][vote] Nominating Vikram Hosakot for Core Reviewer

2016-03-29 Thread MD. Nadeem
+1 
No doubt he is good at reviews and also helpful in guiding newbies joining Kolla.

-Original Message-
From: Swapnil Kulkarni [mailto:m...@coolsvap.net] 
Sent: Wednesday, March 30, 2016 9:14 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla][vote] Nominating Vikram Hosakot for Core 
Reviewer

On Tue, Mar 29, 2016 at 9:37 PM, Steven Dake (stdake)  wrote:
> Hey folks,
>
> Consider this proposal a +1 in favor of Vikram joining the core 
> reviewer team.  His reviews are outstanding.  If he doesn’t have 
> anything useful to add to a review, he doesn't pile on the review with 
> more –1s which are slightly disheartening to people.  Vikram has 
> started a trend amongst the core reviewers of actually diagnosing gate 
> failures in people's patches as opposed to just saying "gate failed, 
> please fix".  He does this diagnosis in nearly every review I see, and if he 
> is stumped he says so.  His 30-day review stats place him in pole 
> position and his 90-day review stats place him in second position.  Of 
> critical notice is that Vikram is ever-present on IRC which in my 
> professional experience is the #1 indicator of how well a core
> reviewer will perform long term.   Besides IRC and review requirements, we
> also have code requirements for core reviewers.  Vikram has 
> implemented only
> 10 patches so far, but I feel he could amp this up if he had feature 
> work to do.  At the moment we are in a holding pattern on master 
> development because we need to fix Mitaka bugs.  That said Vikram is 
> actively working on diagnosing root causes of people's bugs in the IRC 
> channel pretty much 12-18 hours a day so we can ship Mitaka in a working 
> bug-free state.
>
> Our core team consists of 11 people.  Vikram requires at minimum 6 +1 
> votes, with no veto –2 votes within a 7 day voting window to end on 
> April 7th.  If there is a veto vote prior to April 7th I will close 
> voting.  If there is a unanimous vote prior to April 7th, I will make 
> appropriate changes in gerrit.
>
> Regards
> -steve
>
> [1] http://stackalytics.com/report/contribution/kolla-group/30
> [2] http://stackalytics.com/report/contribution/kolla-group/90
>

Big +1.






Re: [openstack-dev] [tricircle] playing tricircle with two node configuration

2016-03-29 Thread Vega Cai
Hi Yipei,

The segment id is not correctly assigned to the bridge network, so you get
the "None is not an integer" message. Which versions of Neutron and Tricircle
are you using? Neutron moved the network segment table definition out of the
ML2 code tree, and Tricircle has adapted to this change.
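The "None is not an integer" symptom is just type validation firing on a segment id that was never allocated. A minimal sketch of that kind of check (the function name and exact message are illustrative, not Neutron's actual validator):

```python
def validate_segmentation_id(segmentation_id):
    # Mimics the kind of type check that produces the reported error when
    # a bridge network's segment id was never allocated (i.e. stayed None).
    if not isinstance(segmentation_id, int):
        raise ValueError("%r is not an integer" % segmentation_id)
    return segmentation_id

try:
    validate_segmentation_id(None)
except ValueError as exc:
    print(exc)
# prints: None is not an integer
```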

BTW, I updated devstack, nova, neutron, keystone, requirements, glance,
cinder, tricircle to the latest version in my environment yesterday and
everything worked fine.

BR
Zhiyuan

On 30 March 2016 at 10:19, Yipei Niu  wrote:

> Hi all,
>
> I have already booted two VMs and successfully created a router with
> Neutron. But I have some trouble with attaching the router to a subnet. The
> error in q-svc.log is as follows:
> 2016-03-29 15:41:04.065 ERROR oslo_db.api [req-0180a3f5-e34c-4d4c-bd39-3c0c714b02de admin 685f8f37363f4467bead5a375e855ccd] DB error.
> 2016-03-29 15:41:04.065 TRACE oslo_db.api Traceback (most recent call last):
> 2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 137, in wrapper
> 2016-03-29 15:41:04.065 TRACE oslo_db.api     return f(*args, **kwargs)
> 2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/opt/stack/neutron/neutron/api/v2/base.py", line 217, in _handle_action
> 2016-03-29 15:41:04.065 TRACE oslo_db.api     ret_value = getattr(self._plugin, name)(*arg_list, **kwargs)
> 2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/opt/stack/tricircle/tricircle/network/plugin.py", line 768, in add_router_interface
> 2016-03-29 15:41:04.065 TRACE oslo_db.api     t_bridge_port)
> 2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/opt/stack/tricircle/tricircle/network/plugin.py", line 686, in _get_bottom_bridge_elements
> 2016-03-29 15:41:04.065 TRACE oslo_db.api     t_ctx, project_id, pod, t_net, 'network', net_body)
> 2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/opt/stack/tricircle/tricircle/network/plugin.py", line 550, in _prepare_bottom_element
> 2016-03-29 15:41:04.065 TRACE oslo_db.api     list_resources, create_resources)
> 2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/opt/stack/tricircle/tricircle/common/lock_handle.py", line 99, in get_or_create_element
> 2016-03-29 15:41:04.065 TRACE oslo_db.api     ele = create_ele_method(t_ctx, q_ctx, pod, body, _type)
> 2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/opt/stack/tricircle/tricircle/network/plugin.py", line 545, in create_resources
> 2016-03-29 15:41:04.065 TRACE oslo_db.api     return client.create_resources(_type_, t_ctx_, body_)
> 2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/opt/stack/tricircle/tricircle/common/client.py", line 87, in handle_args
> 2016-03-29 15:41:04.065 TRACE oslo_db.api     return func(*args, **kwargs)
> 2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/opt/stack/tricircle/tricircle/common/client.py", line 358, in create_resources
> 2016-03-29 15:41:04.065 TRACE oslo_db.api     return handle.handle_create(cxt, resource, *args, **kwargs)
> 2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/opt/stack/tricircle/tricircle/common/resource_handle.py", line 149, in handle_create
> 2016-03-29 15:41:04.065 TRACE oslo_db.api     *args, **kwargs)[resource]
> 2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 100, in with_params
> 2016-03-29 15:41:04.065 TRACE oslo_db.api     ret = self.function(instance, *args, **kwargs)
> 2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 552, in create_network
> 2016-03-29 15:41:04.065 TRACE oslo_db.api     return self.post(self.networks_path, body=body)
> 2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 271, in post
> 2016-03-29 15:41:04.065 TRACE oslo_db.api     headers=headers, params=params)
> 2016-03-29 15:41:04.065 TRACE oslo_db.api   File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 206, in do_request
> 2016-03-29 15:41:04.065 TRACE oslo_db.api     self._handle_fault_response(status_code, replybody)
> 2016-03-29 15:41:04.065 TRACE oslo_db.api 

Re: [openstack-dev] [kolla][vote] Just make Mitaka deploy Liberty within the Liberty branch

2016-03-29 Thread Swapnil Kulkarni
On Wed, Mar 30, 2016 at 9:03 AM, Michał Jastrzębski  wrote:
> I think that it goes without saying that I'm +1
>
> On 29 March 2016 at 22:27, Steven Dake (stdake)  wrote:
>> Dear cores,
>>
>> inc0 has been investigating our backport options for 1.1.0 and they look
>> bleak.  At this time we would have to backport heka because of changes in
>> Docker 1.10.z which are incompatible with the syslog dev file.  We had
>> originally spoken about just taking Mitaka and placing it in Liberty,
>> validating the defaults directory of Ansible, fixing the repo pointers,
>> fixing the source pointers, and modifying containers as necessary to make
>> that happen.
>>
>> I know this is vastly different than what we have discussed in the past
>> related to Liberty management, but beyond giving up entirely on Liberty
>> which is unacceptable to me, it seems like our only option.
>>
>> The good news is we will be able to leverage all of the testing we have done
>> with liberty and all of the testing we have done developing Mitaka, and have
>> a smooth stable backport experience for Mitaka and Liberty.
>>
>> Consider this proposal a +1 vote for me.  Our core team has 11 members,
>> which means we need 6 +1 votes in order for this new plan to pass.  Note I
>> see no other options, so abstaining or voting –1 is in essence recommending
>> abandonment of the Liberty branch.
>>
>> I'll leave voting open for 7 days until Tuesday April 5th unless there is a
>> majority before then.  If there is a majority prior to the voting deadline,
>> I'll close voting early but leave discussion open for those that wish to
>> have it.
>>
>> We won't be having this happen again, as our Mitaka architecture is stable
>> and strong minus a few straggling bugs remaining.
>>
>> Regards
>> -steve
>>
>>
>

+1



Re: [openstack-dev] [kolla][vote] Nominating Vikram Hosakot for Core Reviewer

2016-03-29 Thread Swapnil Kulkarni
On Tue, Mar 29, 2016 at 9:37 PM, Steven Dake (stdake)  wrote:
> Hey folks,
>
> Consider this proposal a +1 in favor of Vikram joining the core reviewer
> team.  His reviews are outstanding.  If he doesn’t have anything useful to
> add to a review, he doesn't pile on the review with more –1s which are
> slightly disheartening to people.  Vikram has started a trend amongst the
> core reviewers of actually diagnosing gate failures in people's patches as
> opposed to just saying "gate failed, please fix".  He does this diagnosis in
> nearly every review I see, and if he is stumped he says so.  His 30-day review
> stats place him in pole position and his 90-day review stats place him in
> second position.  Of critical notice is that Vikram is ever-present on IRC
> which in my professional experience is the #1 indicator of how well a core
> reviewer will perform long term.   Besides IRC and review requirements, we
> also have code requirements for core reviewers.  Vikram has implemented only
> 10 patches so far, but I feel he could amp this up if he had feature work to
> do.  At the moment we are in a holding pattern on master development because
> we need to fix Mitaka bugs.  That said Vikram is actively working on
> diagnosing root causes of people's bugs in the IRC channel pretty much 12-18
> hours a day so we can ship Mitaka in a working bug-free state.
>
> Our core team consists of 11 people.  Vikram requires at minimum 6 +1 votes,
> with no veto –2 votes within a 7 day voting window to end on April 7th.  If
> there is a veto vote prior to April 7th I will close voting.  If there is a
> unanimous vote prior to April 7th, I will make appropriate changes in
> gerrit.
>
> Regards
> -steve
>
> [1] http://stackalytics.com/report/contribution/kolla-group/30
> [2] http://stackalytics.com/report/contribution/kolla-group/90
>

Big +1.



Re: [openstack-dev] [Neutron] Evolving the stadium concept

2016-03-29 Thread Vikram Choudhary
Hi Armando,

We want to add support for a new ML2 driver. Can you please advise on the
steps for moving forward?

Thanks
Vikram


On Fri, Mar 4, 2016 at 12:39 AM, Armando M.  wrote:

> Hi folks,
>
> Status update on this matter:
>
> Russell, Kyle and I had a number of patches out [1] to try and converge
> on how to better organize Neutron-related efforts. As a result, a number of
> patches merged and a number of patches are still pending. Because of the
> Mitaka feature freeze, other initiatives took priority.
>
> That said, some people rightly wonder what's the latest outcome of the
> discussion. Bottom line: we are still figuring this out. For now the
> marching order is unchanged: as far as Mitaka is concerned, things stay as
> they were, and new submissions for inclusion are still frozen. I aim (with
> or without the help of the new PTL) to get to a final resolution by or
> shortly after the Mitaka release [2].
>
> Please be patient and stay focussed on delivering a great Mitaka
> experience!
>
> Cheers,
> Armando
>
> [1] https://review.openstack.org/#/q/branch:master+topic:stadium-implosion
> [2] http://releases.openstack.org/mitaka/schedule.html
>


Re: [openstack-dev] [kolla][vote] Just make Mitaka deploy Liberty within the Liberty branch

2016-03-29 Thread Michał Jastrzębski
I think that it goes without saying that I'm +1

On 29 March 2016 at 22:27, Steven Dake (stdake)  wrote:
> Dear cores,
>
> inc0 has been investigating our backport options for 1.1.0 and they look
> bleak.  At this time we would have to backport heka because of changes in
> Docker 1.10.z which are incompatible with the syslog dev file.  We had
> originally spoken about just taking Mitaka and placing it in Liberty,
> validating the defaults directory of Ansible, fixing the repo pointers,
> fixing the source pointers, and modifying containers as necessary to make
> that happen.
>
> I know this is vastly different than what we have discussed in the past
> related to Liberty management, but beyond giving up entirely on Liberty
> which is unacceptable to me, it seems like our only option.
>
> The good news is we will be able to leverage all of the testing we have done
> with liberty and all of the testing we have done developing Mitaka, and have
> a smooth stable backport experience for Mitaka and Liberty.
>
> Consider this proposal a +1 vote for me.  Our core team has 11 members,
> which means we need 6 +1 votes in order for this new plan to pass.  Note I
> see no other options, so abstaining or voting –1 is in essence recommending
> abandonment of the Liberty branch.
>
> I'll leave voting open for 7 days until Tuesday April 5th unless there is a
> majority before then.  If there is a majority prior to the voting deadline,
> I'll close voting early but leave discussion open for those that wish to
> have it.
>
> We won't be having this happen again, as our Mitaka architecture is stable
> and strong minus a few straggling bugs remaining.
>
> Regards
> -steve
>
>



[openstack-dev] [kolla][vote] Just make Mitaka deploy Liberty within the Liberty branch

2016-03-29 Thread Steven Dake (stdake)
Dear cores,

inc0 has been investigating our backport options for 1.1.0 and they look bleak. 
 At this time we would have to backport heka because of changes in Docker 
1.10.z which are incompatible with the syslog dev file.  We had originally 
spoken about just taking Mitaka and placing it in Liberty, validating the 
defaults directory of Ansible, fixing the repo pointers, fixing the source 
pointers, and modifying containers as necessary to make that happen.

I know this is vastly different than what we have discussed in the past related 
to Liberty management, but beyond giving up entirely on Liberty which is 
unacceptable to me, it seems like our only option.

The good news is we will be able to leverage all of the testing we have done 
with liberty and all of the testing we have done developing Mitaka, and have a 
smooth stable backport experience for Mitaka and Liberty.

Consider this proposal a +1 vote for me.  Our core team has 11 members, which 
means we need 6 +1 votes in order for this new plan to pass.  Note I see no 
other options, so abstaining or voting -1 is in essence recommending 
abandonment of the Liberty branch.

I'll leave voting open for 7 days until Tuesday April 5th unless there is a 
majority before then.  If there is a majority prior to the voting deadline, 
I'll close voting early but leave discussion open for those that wish to have 
it.

We won't be having this happen again, as our Mitaka architecture is stable and 
strong minus a few straggling bugs remaining.

Regards
-steve



Re: [openstack-dev] [neutron] OVSDB native interface as default in gate jobs

2016-03-29 Thread Kevin Benton
I'm not aware of any issues. Perhaps you can propose a patch to just change
the default in Neutron to that interface and people can -1 if there are any
concerns.
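In the meantime, anyone who wants to try the native interface locally can flip a single agent-side option. This is my understanding of the knob as of Liberty-era Neutron, so verify the option name against your tree, and restart the OVS agent after changing it:

```ini
# In the ML2/OVS agent configuration (e.g. ml2_conf.ini); the option
# name is believed correct for Liberty-era Neutron but should be
# double-checked against your version.
[ovs]
ovsdb_interface = native
```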

On Tue, Mar 29, 2016 at 4:32 PM, Inessa Vasilevskaya <
ivasilevsk...@mirantis.com> wrote:

> Hi all,
>
> I spent some time researching the current state of the native OVSDB
> interface feature, which has been fully implemented and merged since Liberty [1][2].
> I was pretty much impressed by the expected performance improvement (as per
> spec [1], some interesting research on ovs-vsctl + rootwrap also done here
> [3]). Preliminary test results also showed that the native interface does quite
> well.
>
> So my question is - why don’t we make the native interface the default option
> for voting gate jobs? Are there any caveats or security issues that I am
> unaware of?
>
> Regards,
>
> Inessa
>
> [1]
> https://specs.openstack.org/openstack/neutron-specs/specs/kilo/vsctl-to-ovsdb.html
>
> [2] https://blueprints.launchpad.net/neutron/+spec/vsctl-to-ovsd
> 
> [3]
> http://blog.otherwiseguy.com/replacing-ovs-vsctl-calls-with-native-ovsdb-in-neutron/
>
>


Re: [openstack-dev] [ptl][kolla][release] Deploying the big tent

2016-03-29 Thread Kevin Carter


On 03/29/2016 04:46 PM, Emilien Macchi wrote:
> On Mon, Mar 28, 2016 at 9:16 PM, Steven Dake (stdake)  
> wrote:
>>
>>
>> On 3/27/16, 2:50 PM, "Matt Riedemann"  wrote:
>>
>>>
>>>
>>> On 3/26/2016 11:27 AM, Steven Dake (stdake) wrote:
 Hey fellow PTLs and core reviewers of those projects,

 Kolla at present deploys the compute kit, and some other services that
 folks have added over time, including other projects like Ironic, Heat,
 Mistral, Murano, Magnum, Manila, and Swift.

 One of my objectives from my PTL candidacy was to deploy the big tent,
 and I saw that we were not as successful as I had planned in Mitaka at
 filling out the big tent.

 While the Kolla core team is large, and we can commit to maintaining big
 tent projects that are deployed, we are at capacity every milestone of
 every cycle implementing new features that the various big tent services
 should conform to.  The idea of a plugin architecture for Kolla where
 projects could provide their own plugins has been floated, but before we
 try that, I'd prefer that the various teams in OpenStack with an
 interest in having their projects consumed by Operators involve
 themselves in containerizing their projects.

 Again, once the job is done, the Kolla community will continue to
 maintain these projects, and we hope you will stay involved in that
 process.

 It takes roughly four 4-hour blocks to learn the implementation
 architecture of Kolla and probably another two 4-hour blocks to get a good
 understanding of the Kolla deployment workflow.  Some projects (like
 Neutron for example) might fit outside this norm because containerizing
 them and deploying them is very complex.  But we have already finished
 the job on what we believe are the hard projects.

 My ask is that PTLs take responsibility or recruit someone from their
 respective community to participate in the implementation of Kolla
 deployment for their specific project.

 Only with your help can we make the vision of a deployment system that
 can deploy the big tent a reality.

 Regards
 -steve




>>>
>>> Having never looked at Kolla, is there an easy way to see what projects
>>> are already done? Like, what would Nova need to do here? Or is it a
>>> matter of keeping up with new deployment changes / upgrade impacts, like
>>> the newish nova API database?
>>>
>>> If that's the case, couldn't the puppet/ansible/chef projects be making
>>> similar requests from the project development teams?
>
> Regarding our current model, I don't think we will (Puppet). See below
> this text.
>
>>> Unless we have incentive to be contributing to Kolla, like say we
>>> replaced devstack in our CI setup with it, I'm having a hard time seeing
>>> everyone jumping in on this.
>>
>> Matt,
>>
>> The compute kit projects are well covered by the current core reviewer
>> team.  Hence, we don't really need any help with Nova.  This is more aimed
>> at the herd of new server projects in Liberty and Mitaka that want
>> deployment options which currently lack them.  There is no way to deploy
>> aodh in an automated fashion (for example) (picked because it was first in
>> the project list by alphabetical order;)
>>
>> For example this cycle community folks implemented mistral and manila,
>> which were not really top in our priority list.  Yet, the work got done
>> and now the core team will look after these projects as well.
>>
>> As for why puppet/ansible/chef couldn't make the same requests, the answer
>> is they could.  Why haven't they?  I can never speak to the motives or
>> actions of others, but perhaps they didn't think to try?
>
> Puppet OpenStack, Chef OpenStack and Ansible OpenStack took another
> approach, by having a separated module per project.
>
> This is how we started 4 years ago in Puppet modules: having one
> module that deploys one component.
> Example: openstack/puppet-nova - openstack/puppet-keystone - etc
> Note that we currently cover 27 OpenStack components, documented here:
> https://wiki.openstack.org/wiki/Puppet
>
> We have split the governance a little bit over the last 2 cycles,
> where some modules like puppet-neutron and puppet-keystone (eventually
> more in the future) have a dedicated core member group (among other
> Puppet OpenStack core members) that have a special expertise on a
> project.
>
> Our model allows anyone expert on a specific project (ex: Keystone) to
> contribute on puppet-keystone and eventually become core on the
> project (It's happening 

Re: [openstack-dev] [tricircle] playing tricircle with two node configuration

2016-03-29 Thread Yipei Niu
Hi all,

I have already booted two VMs and successfully created a router with
Neutron. But I have some trouble with attaching the router to a subnet. The
error in q-svc.log is as follows:
2016-03-29 15:41:04.065 ERROR oslo_db.api [req-0180a3f5-e34c-4d4c-bd39-3c0c714b02de admin 685f8f37363f4467bead5a375e855ccd] DB error.
2016-03-29 15:41:04.065 TRACE oslo_db.api Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 137, in wrapper
    return f(*args, **kwargs)
  File "/opt/stack/neutron/neutron/api/v2/base.py", line 217, in _handle_action
    ret_value = getattr(self._plugin, name)(*arg_list, **kwargs)
  File "/opt/stack/tricircle/tricircle/network/plugin.py", line 768, in add_router_interface
    t_bridge_port)
  File "/opt/stack/tricircle/tricircle/network/plugin.py", line 686, in _get_bottom_bridge_elements
    t_ctx, project_id, pod, t_net, 'network', net_body)
  File "/opt/stack/tricircle/tricircle/network/plugin.py", line 550, in _prepare_bottom_element
    list_resources, create_resources)
  File "/opt/stack/tricircle/tricircle/common/lock_handle.py", line 99, in get_or_create_element
    ele = create_ele_method(t_ctx, q_ctx, pod, body, _type)
  File "/opt/stack/tricircle/tricircle/network/plugin.py", line 545, in create_resources
    return client.create_resources(_type_, t_ctx_, body_)
  File "/opt/stack/tricircle/tricircle/common/client.py", line 87, in handle_args
    return func(*args, **kwargs)
  File "/opt/stack/tricircle/tricircle/common/client.py", line 358, in create_resources
    return handle.handle_create(cxt, resource, *args, **kwargs)
  File "/opt/stack/tricircle/tricircle/common/resource_handle.py", line 149, in handle_create
    *args, **kwargs)[resource]
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 100, in with_params
    ret = self.function(instance, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 552, in create_network
    return self.post(self.networks_path, body=body)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 271, in post
    headers=headers, params=params)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 206, in do_request
    self._handle_fault_response(status_code, replybody)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 182, in _handle_fault_response
    exception_handler_v20(status_code, des_error_body)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 69, in exception_handler_v20
    status_code=status_code)
BadRequest: Invalid input for operation: 'None' is not an integer.

Re: [openstack-dev] [magnum] Discuss the blueprint "support-private-registry"

2016-03-29 Thread Eli Qiao


Hi Hongbin

Thanks for starting this thread,

I initially proposed this bp because I am in China, which is behind the 
Great Firewall and cannot access gcr.io directly. After checking our 
cloud-init script, I see that lots of code is *hard coded* to use gcr.io. 
I personally think this is not a good idea: we cannot force users/customers 
to have internet access in their environment.


I proposed using insecure-registry to give customers/users (Chinese, or 
anyone else without gcr.io access) a chance to switch to their own 
insecure registry to deploy a k8s/swarm bay.

For your question:
> Is the private registry secure or insecure? If secure, how to handle 
> the authentication secrets. If insecure, is it OK to connect a secure 
> bay to an insecure registry?
An insecure registry should still be a 'secure' one in practice, since the 
customer needs to set it up and make sure it is a clean one; in this case, 
it would typically live in a private cloud.


> Should we provide an instruction for users to pre-install the private 
> registry? If not, how to verify the correctness of this feature?

The simplest way to pre-install a private registry is to use an insecure 
registry; the Docker docs have very simple steps to start one [1][2]. 
Docker registry v2 also supports a TLS-enabled mode, but that requires 
telling the docker client about the key and crt files, which would make 
"support-private-registry" more complex.


[1] https://docs.docker.com/registry/
[2]https://docs.docker.com/registry/deploying/
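
Concretely, besides running the registry itself per [1][2], each bay node's
Docker daemon also has to be told to trust the insecure registry, either via
the daemon's --insecure-registry flag or (on newer Docker) in
/etc/docker/daemon.json. The hostname below is made up for illustration:

```json
{
  "insecure-registries": ["myregistry.example.com:5000"]
}
```

After restarting the daemon, images tagged as
myregistry.example.com:5000/<image> can be pulled without TLS.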



On 2016-03-30 07:23, Hongbin Lu wrote:


Hi team,

This is the item we didn’t have time to discuss in our team meeting, 
so I started the discussion in here.


Here is the blueprint: 
https://blueprints.launchpad.net/magnum/+spec/support-private-registry 
Per my understanding, the goal of the BP is to allow users to 
specify the url of their private docker registry where the bays pull 
the kube/swarm images (if they are not able to access docker hub or 
other public registry). An assumption is that users need to 
pre-install their own private registry and upload all the required 
images to there. There are several potential issues of this proposal:


- Is the private registry secure or insecure? If secure, how to handle 
the authentication secrets. If insecure, is it OK to connect a secure 
bay to an insecure registry?


- Should we provide an instruction for users to pre-install the private 
registry? If not, how to verify the correctness of this feature?


Thoughts?

Best regards,

Hongbin



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Best Regards, Eli Qiao (乔立勇)
Intel OTC China

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] What to do about booting into port_security_enabled=False networks?

2016-03-29 Thread Matt Riedemann



On 3/29/2016 4:44 PM, Armando M. wrote:



On 29 March 2016 at 08:08, Matt Riedemann wrote:

Nova has had some long-standing bugs that Sahid is trying to fix
here [1].

You can create a network in neutron with
port_security_enabled=False. However, the bug is that since Nova
adds the 'default' security group to the request (if none are
specified) when allocating networks, neutron raises an error when
you try to create a port on that network with a 'default' security
group.

Sahid's patch simply checks if the network that we're going to use
has port_security_enabled and if it does not, no security groups are
applied when creating the port (regardless of what's requested for
security groups, which in nova is always at least 'default').

There has been a similar attempt at fixing this [2]. That change
simply only added the 'default' security group when allocating
networks with nova-network. It omitted the default security group if
using neutron since:

a) If the network does not have port security enabled, we'll blow up
trying to add a port on it with the default security group.

b) If the network does have port security enabled, neutron will
automatically apply a 'default' security group to the port, nova
doesn't need to specify one.

The problem both Feodor's and Sahid's patches ran into was that the
nova REST API adds a 'default' security group to the server create
response when using neutron if specific security groups weren't on
the server create request [3].

This is clearly wrong in the case of
network.port_security_enabled=False. When listing security groups
for an instance, they are correctly not listed, but the server
create response is still wrong.

So the question is, how to resolve this?  A few options come to mind:

a) Don't return any security groups in the server create response
when using neutron as the backend. Given by this point we've cast
off to the compute which actually does the work of network
allocation, we can't call back into the network API to see what
security groups are being used. Since we can't be sure, don't
provide what could be false info.

b) Add a new method to the network API which takes the requested
networks from the server create request and returns a best guess if
security groups are going to be applied or not. In the case of
network.port_security_enabled=False, we know a security group won't
be applied so the method returns False. If there is
port_security_enabled, we return whatever security group was
requested (or 'default'). If there are multiple networks on the
request, we return the security groups that will be applied to any
networks that have port security enabled.
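
A rough sketch of the decision logic option (b) describes (the names and
data shapes below are purely hypothetical, not nova's or neutron's actual
API):

```python
def guess_security_groups(requested_networks, requested_groups=None):
    """Best-guess which security groups will actually be applied.

    Each requested network is assumed to be a dict carrying a
    'port_security_enabled' flag, defaulting to True as neutron does
    when the port-security extension is enabled.
    """
    groups = requested_groups or ['default']
    # Security groups only attach to ports on networks with port
    # security enabled; if no requested network has it, none apply.
    if any(net.get('port_security_enabled', True)
           for net in requested_networks):
        return groups
    return []
```

For a single network with port_security_enabled=False this returns an
empty list, which is what the server create response should then reflect
instead of the bogus 'default' entry.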

Option (b) is obviously more intensive and requires hitting the
neutron API from nova API before we respond, which we'd like to
avoid if possible. I'm also not sure what it means for the
auto-allocated-topology (get-me-a-network) case. With a standard
devstack setup, a network created via the auto-allocated-topology
API has port_security_enabled=True, but I also have the 'Port
Security' extension enabled and the default public external network
has port_security_enabled=True. What if either of those are False
(or the port security extension is disabled)? Does the
auto-allocated network inherit port_security_enabled=False? We could
duplicate that logic in Nova, but it's more proxy work that we would
like to avoid.


Port security on the external network has no role in this because this
is not the network you'd be creating ports on. Even if it had
port-security=False, an auto-allocated network will still be created
with port security enabled (i.e. =True).

A user can obviously change that later on.


[1] https://review.openstack.org/#/c/284095/
[2] https://review.openstack.org/#/c/173204/
[3]

https://github.com/openstack/nova/blob/f8a01ccdffc13403df77148867ef3821100b5edb/nova/api/openstack/compute/security_groups.py#L472-L475

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Yup, HenryG walked me through the cases on IRC today.

The more I think about option 

[openstack-dev] [puppet] managed gems dependencies in a centralized way

2016-03-29 Thread Emilien Macchi
How many times did we have broken Gem dependencies, for different reasons?
A lot, maybe too many. It's time to find a way to centralize Gem
dependencies, like other OpenStack projects already do with
openstack/requirements and Python dependencies.

So I submitted this patch: https://review.openstack.org/#/c/298465/
(it is tested and works; see the comments), which aims to use
puppet-openstack_spec_helper.gemspec as a single place to manage our
dependencies.
Next time we need to change the version of a Gem for our CI jobs, we
will just have to patch this file. That's it.

And a series of patches in our modules:
https://review.openstack.org/#/q/topic:common-gemfile to make sure we
don't install the dependencies from Gemfiles.

I also propose we backport this, at least to stable/mitaka. It will make
our life much easier next time a Gem breaks and help keep the stable
branch CI in good shape.
Please give any feedback and use Gerrit to review the work in progress.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Service Catalog TNG work in Mitaka ... next steps

2016-03-29 Thread Matt Riedemann



On 3/29/2016 2:30 PM, Sean Dague wrote:

At the Mitaka Summit we had a double session on the Service Catalog,
where we stood, and where we could move forward. Even though the service
catalog isn't used nearly as much as we'd like, it's used in just enough
odd places that every change pulls on a few other threads that are
unexpected. So this is going to be a slow process going forward, but I
do have faith we'll get there.

Thanks much to Brant, Chris, and Anne for putting in time this cycle to
keep this ball moving forward.

Mitaka did a lot of fact finding.

* public / admin / internal urls - mixed results

The notion of an internal url is used in many deployments because they
believe it means they won't be charged for data transfer. There is no
definitive semantic meaning to any of these. Many sites just make all of
these the same, and use the network to ensure that internal connections
hit internal interfaces.

Next Steps: this really needs a set of user stories built from what we
currently have. That's where that one is left.

* project_id optional in projects - good progress

One of the issues with lots of things that want to be done with the
service catalog, is that we've gone and hard coded project_id into urls
in projects where they are not really semantically meaningful. That
precluded things like an anonymous service catalog.

We decided to demonstrate this on Nova first. That landed as
microversion 2.18. It means that service catalog entries no longer need
project_id to be in the url. There is a patch up for devstack to enable
this - https://review.openstack.org/#/c/233079/ - though a Tempest patch
removing errant tests needs to land first.

The only real snag we found during this was that python Routes +
keystone's ability to have project id not be a uuid (even though it
defaults to one) made for the need to add a new config option to handle
this going either way.

This is probably easy to replicate on other projects during the next cycle.
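
To illustrate the change (host and port are made up), dropping the
hard-coded project_id turns a compute catalog endpoint of the first form
into the second, with project scoping carried by the token instead of the
URL:

```
# Before: project_id baked into the catalog URL
http://compute.example.com:8774/v2.1/{project_id}
# After microversion 2.18: no project_id in the URL
http://compute.example.com:8774/v2.1
```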

Next Steps: get volunteers from additional projects to replicate this.

* service types authority

One of the things we know we need to make progress on is an actual
authority of all the service catalogue types which we recognize. We got
agreement to create this repository, I've got some outstanding patches
to restructure for starting off the repo -
https://review.openstack.org/#/q/project:openstack/service-types-authority

The thing we discovered here was even the apparently easy problems, some
times aren't. The assumption that there might be a single URL which
describes the API for a service, is an assumption we don't fulfil even
for most of the base services.

This bump in the road is part of what led to some shifted effort on the
API Reference in RST work - (see
http://lists.openstack.org/pipermail/openstack-dev/2016-March/090659.html)

Next Steps: the API doc conversion probably trumps this work, and at
some level is a requirement for it. Once we get the API reference
transition in full swing, this probably becomes top of stack.

* service catalogue tng schema

Brant has done some early work setting up a schema based on the known
knowns, and leaving some holes for the known unknowns until we get a few
of these locked down (types / allowed urls).

Next Steps: review current schema

* Weekly Meetings

We had been meeting weekly in #openstack-meeting-cp up until release
crunch, when most of us got swamped with such things.

I'd like to keep focus on the API doc conversion in the near term, as
there is a mountain to get over with getting the first API converted,
then we can start making the docs more friendly to our users. I think
this means we probably keep the weekly meeting on hiatus until post
Austin, and start it up again the week after we all get back.


Thanks to folks that helped get us this far. Hopefully we'll start
picking up steam again once we get a bit of this backlog cleared and
getting chugging during the cycle.

-Sean



Thanks for the write up.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Nominating Vikram Hosakot for Core Reviewer

2016-03-29 Thread Jeffrey Zhang
+1

On Wed, Mar 30, 2016 at 8:25 AM, Angus Salkeld 
wrote:

> +1
>
>
> On Wed, Mar 30, 2016 at 6:45 AM Michał Jastrzębski 
> wrote:
>
>> +1
>>
>> On 29 March 2016 at 11:39, Ryan Hallisey  wrote:
>> > +1
>> >
>> > - Original Message -
>> > From: "Paul Bourke" 
>> > To: openstack-dev@lists.openstack.org
>> > Sent: Tuesday, March 29, 2016 12:10:38 PM
>> > Subject: Re: [openstack-dev] [kolla][vote] Nominating Vikram Hosakot
>> for Core Reviewer
>> >
>> > +1
>> >
>> > On 29/03/16 17:07, Steven Dake (stdake) wrote:
>> >> Hey folks,
>> >>
>> >> Consider this proposal a +1 in favor of Vikram joining the core
>> reviewer
>> >> team.  His reviews are outstanding.  If he doesn’t have anything useful
>> >> to add to a review, he doesn't pile on the review with more –1s which
>> >> are slightly disheartening to people.  Vikram has started a trend
>> >> amongst the core reviewers of actually diagnosing gate failures in
>> >> peoples patches as opposed to saying gate failed please fix.  He does
>> >> this diagnosis in nearly every review I see, and if he is stumped  he
>> >> says so.  His 30 days review stats place him in pole position and his
>> 90
>> >> day review stats place him in second position.  Of critical notice is
>> >> that Vikram is ever-present on IRC which in my professional experience
>> >> is the #1 indicator of how well a core reviewer will perform long term.
>> >>Besides IRC and review requirements, we also have code requirements
>> >> for core reviewers.  Vikram has implemented only 10 patches so far,
>> but I
>> >> feel he could amp this up if he had feature work to do.  At the moment
>> >> we are in a holding pattern on master development because we need to
>> fix
>> >> Mitaka bugs.  That said Vikram is actively working on diagnosing root
>> >> causes of people's bugs in the IRC channel pretty much 12-18 hours a
>> day
>> >> so we can ship Mitaka in a working bug-free state.
>> >>
>> >> Our core team consists of 11 people.  Vikram requires at minimum 6 +1
>> >> votes, with no veto –2 votes within a 7 day voting window to end on
>> >> April 7th.  If there is a veto vote prior to April 7th I will close
>> >> voting.  If there is a unanimous vote prior to April 7th, I will make
>> >> appropriate changes in gerrit.
>> >>
>> >> Regards
>> >> -steve
>> >>
>> >> [1] http://stackalytics.com/report/contribution/kolla-group/30
>> >> [2] http://stackalytics.com/report/contribution/kolla-group/90
>> >>
>> >>
>> >>
>> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] unit test failures due to new release of Routes package (v2.3)

2016-03-29 Thread Aditya Vaja
Ah, thanks Henry!

I went ahead and filed a bug too, marked it as duplicate of the one you
linked.

  
--
Aditya

  

On Mar 29 2016, at 6:06 pm, Henry Gessau hen...@gessau.net wrote:  

> https://launchpad.net/bugs/1563028
> https://review.openstack.org/298855

> Aditya Vaja wolverine...@gmail.com wrote:
>> Hi,
>>
>> I'm seeing unit test failures when I test locally after a fresh git clone of
>> neutron master.
>>
>> $ ./run_tests.sh -V -f
>>
>> log excerpt: http://paste.openstack.org/show/492384/
>>
>> I update the requirements.txt to use 'Routes<2.0,>=1.12.3' and all the tests
>> work fine.
>> I see there was a new release (v2.3) of Routes on 28th March 2016 [1], which
>> seems to have caused the issue, specifically:
>> - Concatenation fix when using submappers with path prefixes. Multiple
>> submappers combined the path prefix inside the controller argument in
>> non-obvious ways. The controller argument will now be properly carried through
>> when using submappers. PR #28 [2].
>>
>> Is anyone else noticing the test failures?
>> Should I submit this requirements.txt change as a patch or should we pass the
>> required two args as the patch? I can do the requirements.txt change. For the
>> latter, somebody who knows what goes on in the extensions.py __init__() should
>> take a look.
>>
>> I assume this will also affect the stable branches, since the Routes package
>> version in requirements.txt in previous versions was the same as in master.
>>
>> --
>> Aditya
>> [1] https://routes.readthedocs.org/en/latest/changes.html#release-2-3-march-28-2016
>> [2] https://github.com/bbangert/routes/pull/28/files?diff=unified#diff-b54de741c3f86d76eb4bce4a223054aaL154




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] unit test failures due to new release of Routes package (v2.3)

2016-03-29 Thread Davanum Srinivas
https://github.com/bbangert/routes/pull/65

On Tue, Mar 29, 2016 at 9:04 PM, Henry Gessau  wrote:
> https://launchpad.net/bugs/1563028
> https://review.openstack.org/298855
>
> Aditya Vaja  wrote:
>> Hi,
>>
>> I'm seeing unit test failures when I test locally after a fresh git clone of
>> neutron master.
>>
>> $ ./run_tests.sh -V -f
>>
>> log excerpt: http://paste.openstack.org/show/492384/
>>
>> I update the requirements.txt to use 'Routes<2.0,>=1.12.3' and all the tests
>> work fine.
>> I see there was a new release (v2.3 ) of Routes on 28th March 2016 [1], which
>> seems to have caused the issue, specifically:
>>  - Concatenation fix when using submappers with path prefixes. Multiple
>> submappers combined the path prefix inside the controller argument in
>> non-obvious ways. The controller argument will now be properly carried 
>> through
>> when using submappers. PR #28[2].
>>
>> Is anyone else noticing the test failures?
>> Should I submit this requirements.txt change as a patch or should we pass the
>> required two args as the patch? I can do the requirements.txt change. For the
>> latter, somebody who knows what goes on in the extensions.py __init__() 
>> should
>> take a look.
>>
>> I assume this will also affect the stable branches, since the Routes package
>> version in requirements.txt in previous versions was the same as in master.
>>
>> --
>> Aditya
>> [1] 
>> https://routes.readthedocs.org/en/latest/changes.html#release-2-3-march-28-2016
>> [2] 
>> https://github.com/bbangert/routes/pull/28/files?diff=unified#diff-b54de741c3f86d76eb4bce4a223054aaL154
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]weekly meeting of Mar. 30

2016-03-29 Thread joehuang
Hi,

IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting on every 
Wednesday starting from UTC 13:00.

Agenda:
# Cross OpenStack L2 networking
# pod scheduling
# Reliable async. job
# Link: https://etherpad.openstack.org/p/TricircleToDo

If you have other topics to be discussed in the weekly meeting, please reply 
to this mail.

Best Regards
Chaoyi Huang ( Joe Huang )
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] unit test failures due to new release of Routes package (v2.3)

2016-03-29 Thread Henry Gessau
https://launchpad.net/bugs/1563028
https://review.openstack.org/298855

Aditya Vaja  wrote:
> Hi,
> 
> I'm seeing unit test failures when I test locally after a fresh git clone of
> neutron master.
> 
> $ ./run_tests.sh -V -f
> 
> log excerpt: http://paste.openstack.org/show/492384/
> 
> I update the requirements.txt to use 'Routes<2.0,>=1.12.3' and all the tests
> work fine.
> I see there was a new release (v2.3 ) of Routes on 28th March 2016 [1], which
> seems to have caused the issue, specifically:
>  - Concatenation fix when using submappers with path prefixes. Multiple
> submappers combined the path prefix inside the controller argument in
> non-obvious ways. The controller argument will now be properly carried through
> when using submappers. PR #28[2].
> 
> Is anyone else noticing the test failures?
> Should I submit this requirements.txt change as a patch or should we pass the
> required two args as the patch? I can do the requirements.txt change. For the
> latter, somebody who knows what goes on in the extensions.py __init__() should
> take a look.
> 
> I assume this will also affect the stable branches, since the Routes package
> version in requirements.txt in previous versions was the same as in master.
> 
> --
> Aditya
> [1] 
> https://routes.readthedocs.org/en/latest/changes.html#release-2-3-march-28-2016
> [2] 
> https://github.com/bbangert/routes/pull/28/files?diff=unified#diff-b54de741c3f86d76eb4bce4a223054aaL154



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] unit test failures due to new release of Routes package (v2.3)

2016-03-29 Thread Aditya Vaja
Hi,

I'm seeing unit test failures when I test locally after a fresh git clone of
neutron master.

$ ./run_tests.sh -V -f

log excerpt: http://paste.openstack.org/show/492384/

I update the requirements.txt to use 'Routes<2.0,>=1.12.3' and all the
tests work fine.
I see there was a new release (v2.3) of Routes on 28th March 2016 [1], which
seems to have caused the issue, specifically:
- Concatenation fix when using submappers with path prefixes. Multiple
submappers combined the path prefix inside the controller argument in
non-obvious ways. The controller argument will now be properly carried through
when using submappers. PR #28 [2].

Is anyone else noticing the test failures?
Should I submit this requirements.txt change as a patch or should we pass the
required two args as the patch? I can do the requirements.txt change. For the
latter, somebody who knows what goes on in the extensions.py __init__() should
take a look.

I assume this will also affect the stable branches, since the Routes package
version in requirements.txt in previous versions was the same as in master.

--
Aditya
[1] https://routes.readthedocs.org/en/latest/changes.html#release-2-3-march-28-2016
[2] https://github.com/bbangert/routes/pull/28/files?diff=unified#diff-b54de741c3f86d76eb4bce4a223054aaL154
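
Until Routes is fixed upstream, a stopgap on the neutron side could be a
requirements.txt exclusion of the broken release; the specifier below is
illustrative, not necessarily the one from the actual review:

```
# Exclude the Routes release that broke submapper path prefixes
Routes!=2.3,>=1.12.3
```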

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] tripleo-quickstart import

2016-03-29 Thread John Trowbridge


On 03/29/2016 08:30 PM, John Trowbridge wrote:
> Hola,
> 
> With the approval of the tripleo-quickstart spec[1], it is time to
> actually start doing the work. The first work item is moving it to the
> openstack git. The spec talks about moving it as is, and this would
> still be fine.
> 
> However, there are roles in the tripleo-quickstart tree that are not
> directly related to the instack-virt-setup replacement aspect that is
> approved in the spec (image building, deployment). I think these should
> be split into their own ansible-role-* repos, so that they can be
> consumed using ansible-galaxy. It would actually even make sense to do
> that with the libvirt role responsible for setting up the virtual
> environment. The tripleo-quickstart would then be just an integration
> layer making consuming these roles for virtual deployments easy.
> 
> This way if someone wanted to make a different role for say OVB
> deployments, it would be easy to use the other roles on top of a
> differently provisioned undercloud.
> 
> Similarly, if we wanted to adopt ansible to drive tripleo-ci, it would
> be very easy to only consume the roles that make sense for the tripleo
> cloud.
> 
> So the first question is, should we split the roles out of
> tripleo-quickstart?
> 
> If so, should we do that before importing it to the openstack git?
> 
> Also, should the split out roles also be on the openstack git?
> 
> Maybe this all deserves its own spec and we tackle it after completing
> all of the work for the first spec. I put this on the meeting agenda for
> today, but we didn't get to it.
> 
> - trown
> 

whoops
[1]
https://github.com/openstack/tripleo-specs/blob/master/specs/mitaka/tripleo-quickstart.rst
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] tripleo-quickstart import

2016-03-29 Thread John Trowbridge
Hola,

With the approval of the tripleo-quickstart spec[1], it is time to
actually start doing the work. The first work item is moving it to the
openstack git. The spec talks about moving it as is, and this would
still be fine.

However, there are roles in the tripleo-quickstart tree that are not
directly related to the instack-virt-setup replacement aspect that is
approved in the spec (image building, deployment). I think these should
be split into their own ansible-role-* repos, so that they can be
consumed using ansible-galaxy. It would actually even make sense to do
that with the libvirt role responsible for setting up the virtual
environment. The tripleo-quickstart would then be just an integration
layer making consuming these roles for virtual deployments easy.

This way if someone wanted to make a different role for say OVB
deployments, it would be easy to use the other roles on top of a
differently provisioned undercloud.

Similarly, if we wanted to adopt ansible to drive tripleo-ci, it would
be very easy to only consume the roles that make sense for the tripleo
cloud.

So the first question is, should we split the roles out of
tripleo-quickstart?

If so, should we do that before importing it to the openstack git?

Also, should the split out roles also be on the openstack git?

Maybe this all deserves its own spec and we tackle it after completing
all of the work for the first spec. I put this on the meeting agenda for
today, but we didn't get to it.

- trown
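If the roles were split out, consuming them via ansible-galaxy could look something like the requirements file below. Note the repo and role names here are purely hypothetical placeholders to illustrate the shape of the file, not existing repositories:

```yaml
# Hypothetical requirements.yml -- repo and role names are placeholders
# only; actual names would be decided when/if the split happens.
- src: https://git.openstack.org/openstack/ansible-role-tripleo-libvirt
  name: tripleo-libvirt
- src: https://git.openstack.org/openstack/ansible-role-tripleo-image-build
  name: tripleo-image-build
```

A user would then run `ansible-galaxy install -r requirements.yml` and compose only the roles relevant to their deployment style.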

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Nominating Vikram Hosakot for Core Reviewer

2016-03-29 Thread Angus Salkeld
+1

On Wed, Mar 30, 2016 at 6:45 AM Michał Jastrzębski  wrote:

> +1
>
> On 29 March 2016 at 11:39, Ryan Hallisey  wrote:
> > +1
> >
> > - Original Message -
> > From: "Paul Bourke" 
> > To: openstack-dev@lists.openstack.org
> > Sent: Tuesday, March 29, 2016 12:10:38 PM
> > Subject: Re: [openstack-dev] [kolla][vote] Nominating Vikram Hosakot for
> Core Reviewer
> >
> > +1
> >
> > On 29/03/16 17:07, Steven Dake (stdake) wrote:
> >> Hey folks,
> >>
> >> Consider this proposal a +1 in favor of Vikram joining the core reviewer
> >> team.  His reviews are outstanding.  If he doesn’t have anything useful
> >> to add to a review, he doesn't pile on the review with more –1s which
> >> are slightly disheartening to people.  Vikram has started a trend
> >> amongst the core reviewers of actually diagnosing gate failures in
> >> peoples patches as opposed to saying gate failed please fix.  He does
> >> this diagnosis in nearly every review I see, and if he is stumped  he
> >> says so.  His 30 days review stats place him in pole position and his 90
> >> day review stats place him in second position.  Of critical notice is
> >> that Vikram is ever-present on IRC which in my professional experience
> >> is the #1 indicator of how well a core reviewer will perform long term.
> >> Besides IRC and review requirements, we also have code requirements
> >> for core reviewers.  Vikram has implemented only 10 patches so far, but I
> >> feel he could amp this up if he had feature work to do.  At the moment
> >> we are in a holding pattern on master development because we need to fix
> >> Mitaka bugs.  That said Vikram is actively working on diagnosing root
> >> causes of people's bugs in the IRC channel pretty much 12-18 hours a day
> >> so we can ship Mitaka in a working bug-free state.
> >>
> >> Our core team consists of 11 people.  Vikram requires at minimum 6 +1
> >> votes, with no veto –2 votes within a 7 day voting window to end on
> >> April 7th.  If there is a veto vote prior to April 7th I will close
> >> voting.  If there is a unanimous vote prior to April 7th, I will make
> >> appropriate changes in gerrit.
> >>
> >> Regards
> >> -steve
> >>
> >> [1] http://stackalytics.com/report/contribution/kolla-group/30
> >> [2] http://stackalytics.com/report/contribution/kolla-group/90
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [searchlight] Add Nova Keypair Plugin

2016-03-29 Thread Hiroyuki Eguchi
Hi Lakshmi,

Thank you for your advice.
I'm trying to index the public keys.
I'm going to try to discuss it in searchlight-specs before starting development.

Thanks
Hiroyuki.



From: Sampath, Lakshmi [lakshmi.samp...@hpe.com]
Sent: March 29, 2016 2:03
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [searchlight] Add Nova Keypair Plugin

Hi Hiroyuki,

For this plugin, what data are you indexing in Elasticsearch? That is, what do you 
expect users to search on and retrieve? Are you trying to index the public keys?
Talking directly to the DB is not advisable, but before that we need to discuss 
what data is being indexed and the security implications of it (RBAC) for users 
who can/cannot access it.

I would suggest start a spec in openstack/searchlight-specs under newton for 
reviewing/feedback.
https://github.com/openstack/searchlight-specs.git


Thanks
Lakshmi.

From: Hiroyuki Eguchi [mailto:h-egu...@az.jp.nec.com]
Sent: Sunday, March 27, 2016 10:26 PM
To: OpenStack Development Mailing List (not for usage questions) 
[openstack-dev@lists.openstack.org]
Subject: [openstack-dev] [searchlight] Add Nova Keypair Plugin

Hi.

I am developing this plugin.
https://blueprints.launchpad.net/searchlight/+spec/nova-keypair-plugin

However, I faced the problem that an admin user cannot retrieve keypair 
information created by another user.
So it is impossible to sync keypairs between the OpenStack DB and Elasticsearch 
unless we connect to the OpenStack DB directly.
Are there any suggestions to resolve this?

thanks.
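For illustration only, here is a minimal sketch of the kind of document a keypair plugin might index and how an RBAC-style filter could restrict visibility. The document shape, field names, and filter logic are assumptions for discussion, not Searchlight's actual plugin API or mapping:

```python
# Hypothetical sketch: what a keypair document in Elasticsearch might
# look like, and how an RBAC filter could restrict who sees it.
# Field names and filter logic are assumptions, not Searchlight code.

def make_keypair_doc(name, user_id, public_key, fingerprint):
    """Build a candidate index document; only the *public* key is stored."""
    return {
        "name": name,
        "user_id": user_id,        # owner, used for access filtering
        "public_key": public_key,
        "fingerprint": fingerprint,
    }

def visible_keypairs(docs, user_id, is_admin=False):
    """Simulate an RBAC filter: admins see all, users see only their own."""
    return [d for d in docs if is_admin or d["user_id"] == user_id]

docs = [
    make_keypair_doc("kp1", "alice", "ssh-rsa AAAA...a", "aa:bb"),
    make_keypair_doc("kp2", "bob", "ssh-rsa AAAA...b", "cc:dd"),
]
print([d["name"] for d in visible_keypairs(docs, "alice")])        # ['kp1']
print([d["name"] for d in visible_keypairs(docs, "admin", True)])  # ['kp1', 'kp2']
```

Indexing only public keys keeps the data non-sensitive, but the admin-visibility question above still has to be answered by the RBAC filter design.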
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet-keystone] Setting additional config options:

2016-03-29 Thread Adam Young

On 03/29/2016 06:21 PM, Rich Megginson wrote:

On 03/29/2016 04:19 PM, Adam Young wrote:


Somewhere in here:

http://git.openstack.org/cgit/openstack/puppet-keystone/tree/spec/classes/keystone_spec.rb 



spec is for the rspec unit testing.  Do you mean 
http://git.openstack.org/cgit/openstack/puppet-keystone/tree/manifests/init.pp 
?


DOH.  I've done that a few times.  That really should be renamed to test 
somehow.


so, yes, I mean manifests/init.pp





I need to set these options:


admin_project_name
admin_project_domain_name

http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n450 

http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n453 




If they are unset, we should default them to 'admin' and 'Default' on 
new

installs, and leave them blank on old installs.


Can anyone point me in the right direction?


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet-keystone] Setting additional config options:

2016-03-29 Thread Adam Young

On 03/29/2016 07:43 PM, Emilien Macchi wrote:

On Tue, Mar 29, 2016 at 6:19 PM, Adam Young  wrote:

Somewhere in here:

http://git.openstack.org/cgit/openstack/puppet-keystone/tree/spec/classes/keystone_spec.rb

I need to set these options:


admin_project_name
admin_project_domain_name

http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n450
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n453


If they are unset, we should default them to 'admin' and 'Default' on new
installs, and leave them blank on old installs.


Can anyone point me in the right direction?

You'll need to patch puppet-keystone/manifests/init.pp (and unit
tests) (using $::os_service_default for the default value, which will
take the default in keystone, blank).

Important note:
If for whatever reason, puppet-keystone providers need these 2 options
loaded in the environment, please also patch [1]. Because after
initial deployment, Puppet catalog will read from /root/openrc file to
connect to Keystone API.

Ignore my last comment if you don't need these 2 params during
authentication when using openstackclient (in our providers).

So, while they do, it is for a completely unrelated reason.

The two values above make it possible to distinguish which "admin" role 
assignments grant cloud-wide administration as opposed to project-specific 
administration.  See https://bugs.launchpad.net/keystone/+bug/968696 for context.
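For reference, a sketch of what the deployed keystone.conf would contain on a new install. The `[resource]` section placement is an assumption based on the Mitaka-era keystone config referenced above; verify against the actual keystone version before relying on it:

```ini
# Hypothetical keystone.conf fragment -- section placement is an
# assumption; defaults 'admin'/'Default' apply to new installs only.
[resource]
admin_project_name = admin
admin_project_domain_name = Default
```

On upgraded installs both options would be left unset, preserving the old behavior.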








[1] 
https://github.com/openstack/puppet-openstack_extras/blob/master/manifests/auth_file.pp

Let us know if you need help,

Thanks!



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] gnocchi backport exception for stable/mitaka

2016-03-29 Thread John Trowbridge
+1 I think this is a good exception for the same reasons.

On 03/29/2016 12:27 PM, Steven Hardy wrote:
> On Tue, Mar 29, 2016 at 11:05:00AM -0400, Pradeep Kilambi wrote:
>> Hi Everyone:
>>
>> As Mitaka branch was cut yesterday, I would like to request a backport
>> exception to get gnocchi patches[1][2][3] into stable/mitaka. It
>> should be a low-risk feature, as we decided not to set ceilometer to use
>> gnocchi by default. So ceilometer would work as is and gnocchi is
>> deployed alongside as a new service but not used out of the box. So
>> this should make upgrades pretty much a non-issue, as things should
>> work exactly like before. If someone wants to use the gnocchi backend, they
>> can add an env template file to override the backend. In Newton, we'll
>> flip the switch to make gnocchi the default backend.
>>
>> If we can please vote to agree to get this in as an exception it would
>> be super useful.
> 
> +1, provided we're able to confirm this plays nicely wrt upgrades I think
> we should allow this.
> 
> We're taking a much stricter stance re backports for stable/mitaka, but I
> think this is justified for the following reasons:
> 
> - The patches have been posted in plenty of time, but have suffered from a
>   lack of reviews and a lot of issues getting CI passing, were it not for
>   those issues this should really have landed by now.
> 
> - The Ceilometer community have been moving towards replacing the database
>   dispatcher with gnocchi since kilo, and it should provide us with a
>   (better performing) alternative to the current setup AIUI.
> 
> Thus I think this is a case where an exception is probably justified, but
> to be clear I'm generally opposed to granting exceptions for mitaka beyond
> the few things we may discover in the next few days prior to the
> coordinated release (in Newton I hope we can formalize this to be more
> aligned with the normal feature-freeze and RC process).
> 
> Steve
> 
>>
>> Thanks,
>> ~ Prad
>>
>> [1] https://review.openstack.org/#/c/252032/
>> [2] https://review.openstack.org/#/c/290710/
>> [3] https://review.openstack.org/#/c/238013/
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet-keystone] Setting additional config options:

2016-03-29 Thread Emilien Macchi
On Tue, Mar 29, 2016 at 6:19 PM, Adam Young  wrote:
>
> Somewhere in here:
>
> http://git.openstack.org/cgit/openstack/puppet-keystone/tree/spec/classes/keystone_spec.rb
>
> I need to set these options:
>
>
> admin_project_name
> admin_project_domain_name
>
> http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n450
> http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n453
>
>
> If they are unset, we should default them to 'admin' and 'Default' on new
> installs, and leave them blank on old installs.
>
>
> Can anyone point me in the right direction?

You'll need to patch puppet-keystone/manifests/init.pp (and unit
tests) (using $::os_service_default for the default value, which will
take the default in keystone, blank).

Important note:
If for whatever reason, puppet-keystone providers need these 2 options
loaded in the environment, please also patch [1]. Because after
initial deployment, Puppet catalog will read from /root/openrc file to
connect to Keystone API.

Ignore my last comment if you don't need these 2 params during
authentication when using openstackclient (in our providers).

[1] 
https://github.com/openstack/puppet-openstack_extras/blob/master/manifests/auth_file.pp

Let us know if you need help,

Thanks!
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [election] [tc] TC Candidacy

2016-03-29 Thread David Lyle
I would like to announce my candidacy for election to the Technical Committee.

You may know me from being the Horizon PTL for the past five releases and a
member of the OpenStack community since 2012 as an operator, contributor and
PTL. Over my tenure, I helped guide the Horizon team through growth that has
paralleled the growth of OpenStack. During this time, I have become sensitized
to the issues that are facing OpenStack at large and specifically from a
horizontal project perspective. I decided to step away from the PTL role this
cycle as I want to focus my efforts toward addressing these issues. The main
issues I want to see progress on in Newton and Ocata are:

1) Setting and driving technical direction and project vision

I think the Technical Committee should take a more active role in driving the
direction of OpenStack. OpenStack now contains many, many projects. The
unified guiding technical direction for those projects is missing. The
OpenStack mission statement reads:

"to produce the ubiquitous Open Source Cloud Computing platform that will
meet the needs of public and private clouds regardless of size, by being
simple to implement and massively scalable"

I will argue that without a unified direction, OpenStack will be many cloudy
things that, with considerable effort, can be used to deliver a cloud computing
solution. That delivers more toward the first half of the mission, than the
latter half. But the technical direction from the TC needs to be more than making
the mission statement a reality. While that is enough for projects to make
progress, it is insufficient for end users and operators.

The current cross-project model is broken. The idea of cross-project
initiatives and specs is correct, the problem arises in getting projects to
a) participate in that process
b) actually have that initiative put items on their roadmap
c) actually implement the change

There is no motivation, carrot or stick, at this point for projects to act on
cross-project initiatives. Currently, any cross-project initiative can
effectively be pocket-vetoed by any project in OpenStack that does not find it a
priority. Additionally, the cross-project priorities vary per project. Making
progress currently relies on a few individuals doing the work in all affected
projects. With 54 projects, 36 of which are service related, this can be
a prohibitive task. I commend all those who are driving these efforts.

I propose the Technical Committee, working with the user committee and project
teams define some core objectives per release that define the release goals and
track to those. With 54 projects in OpenStack, there is no other way to move
these efforts forward without a lead time of years. One could argue that this
is the purview of the cross-project liaisons, but the TC is the elected
technical governing body in OpenStack and the only one actually defined in the
OpenStack Bylaws.


2) Addressing Big Tent ramifications

Having moved away from a relatively narrow and focused scope for OpenStack,
it is imperative that we improve at functioning as one project. Looking across
OpenStack, since the big tenting, I see a few issues. First and foremost, the
problem of maintaining consistency across projects went from bad to worse.
Previously, consistency problems were centered on APIs, logging, message
content and structure. Now, we have added items like end user documentation and
the forced proliferation of plugin formats. The large number and variety of
projects also makes it difficult to maintain an overall project vision. I think
that may be the current goal. But if we view OpenStack as merely a kit, we
will again be pushing undue burden onto end users and operators. The TC should
formally state whether OpenStack is meant to be a product or a kit,
understanding that a product can have optional and swappable parts.


3) Growth and organization

Many projects are big and unwieldy including the one I lead. The large scope
of projects and the corresponding number of contributors make these projects
sluggish and makes contributing difficult. Contributions are being shoved
through a narrow funnel where priorities are a strange mix of new feature
development and addressing operator needs. I think we need to reevaluate
project scope and governance. This is one area that the big tent provides some
relief, rather than forcing the franken-projects of yore. Breaking out
separable pieces from larger projects should be a high priority. We started
doing this work in Horizon. The consequences of not breaking the monoliths are
that we continue to frustrate new and old developers alike, drown reviewers and
make little relative forward progress. I believe the TC can help design and
drive this restructuring effort.

I still believe OpenStack has the potential to deliver on our mission
statement. And, I think that diverse views being included in the TC is to
everyone's advantage.

Thank you for your consideration,
David Lyle


Re: [openstack-dev] [tripleo] Policy Managment and distribution.

2016-03-29 Thread Emilien Macchi
On Tue, Mar 29, 2016 at 6:18 PM, Adam Young  wrote:
> Keystone has a policy API, but no one uses it.  It allows us to associate a
> policy file with an endpoint. Upload a json blob, it gets a uuid.  Associate
> the UUID with the endpoint.  It could also be associated with a service, and
> then it is associated with all endpoints for that service unless explicitly
> overwritten.
>
> Assuming all of the puppet modules for all of the services support managing
> the policy files, how hard would it be to synchronize between the database
> and what we distribute to the nodes?  Something along the lines of:  if I
> put a file in this directory, I want puppet to use it the next time I do a
> redeploy, and I also want it uploaded to Keystone and associate with the
> endpoint?
>
> As a start, I want to be able to replace the baseline policy.json file with
> the cloudsample.  We ship both.
>
>
> We have policy.pp in Puppet Keystone for this use case.
> In TripleO, we could create a parameter that users would use to
> configure specific policies. It would be a hash and puppet will
> manage the policies.  This would handle the Keystone case, but we need
> to customize all of the policy files, for all of the services, for
> example, to add the is_admin_project check.  I'd like to get this mechanism
> in place before I start that work, so I can easily test changes.

++

the keystone::policy (and generally neutron::policy, nova::policy,
etc...) class is pretty robust:

* it creates new policies or updates existing ones.
* it does not delete old policies that are already in the file.
* it notifies the keystone service on every change.

Please use it and let us know if we need to change something in
puppet-keystone, that would help you to deploy the use-case.

>
> The workflow needs to be something like this:
>
> Bring up Keystone with Bootstrap.
>
> For each service:
> Fetch its  policy file from the RPM location.
> Upload to Keystone.
> Set the service-policy association in Keystone.
> Deploy the service.
> Copy over the policy file from Keystone.
>
>
> In order to make a change, say to specialize for an endpoint:
>
> Upload new policy file to Keystone
> Set the Endpoint Association for the Policy File
> run overcloud deploy and sync all of the policy files down again.
>
> We don't have to use the Policy API, but we would end up re-implementing
> some aspect of it.
> By using the Keystone API, we also provide a way to query "what is the
> policy for this endpoint?"
>
> I don't think this would be a radical departure from what the rest of
> OpenStack would do.
>
> I can see Kolla using the same approach, or something like it.
>
> Feedback, before I write this up as a spec?
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] OVSDB native interface as default in gate jobs

2016-03-29 Thread Inessa Vasilevskaya
Hi all,

I spent some time researching the current state of the native ovsdb interface
feature, which has been fully implemented and merged since Liberty [1][2].
I was pretty much impressed by the expected performance improvement (as per
spec [1], some interesting research on ovs-vsctl + rootwrap also done here
[3]). Preliminary test results also showed that the native interface performs
quite well.

So my question is: why don’t we make the native interface the default option
for voting gate jobs? Are there any caveats or security issues that I am
unaware of?

Regards,

Inessa

[1]
https://specs.openstack.org/openstack/neutron-specs/specs/kilo/vsctl-to-ovsdb.html

[2] https://blueprints.launchpad.net/neutron/+spec/vsctl-to-ovsd

[3]
http://blog.otherwiseguy.com/replacing-ovs-vsctl-calls-with-native-ovsdb-in-neutron/
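For context, switching an agent to the native interface is a one-line configuration change in the OVS agent settings. The option and section names below follow the Liberty-era blueprint referenced above; exact placement may vary between releases, so treat this as a sketch:

```ini
# Neutron Open vSwitch agent configuration fragment -- selects the
# native python OVSDB connection instead of shelling out to ovs-vsctl.
[ovs]
ovsdb_interface = native
```

Making this the gate default would mean CI exercises the same code path that the performance results above were measured against.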
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Discuss the blueprint "support-private-registry"

2016-03-29 Thread Hongbin Lu
Hi team,

This is the item we didn't have time to discuss in our team meeting, so I 
started the discussion here.

Here is the blueprint: 
https://blueprints.launchpad.net/magnum/+spec/support-private-registry . Per my 
understanding, the goal of the BP is to allow users to specify the url of their 
private docker registry where the bays pull the kube/swarm images (if they are 
not able to access docker hub or other public registry). An assumption is that 
users need to pre-install their own private registry and upload all the 
required images there. There are several potential issues with this proposal:

* Is the private registry secure or insecure? If secure, how do we handle 
the authentication secrets? If insecure, is it OK to connect a secure bay to an 
insecure registry?

* Should we provide an instruction for users to pre-install the private 
registry? If not, how to verify the correctness of this feature?

Thoughts?

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Austin Design Summit track layout

2016-03-29 Thread John Dickinson
On Thursday, I'd like to propose swapping Swift's 1:30pm session with Fuel's 
2:20pm session. This will give Swift a contiguous time block in the afternoon, 
and Fuel's session would line up right after their full morning (albeit after 
the lunch break).

I have not had a chance to talk to anyone on the Fuel team about this.

--John



On 28 Mar 2016, at 3:28, Thierry Carrez wrote:

> Hi everyone,
>
> Please find attached in PDF the proposed layout for the various tracks at the 
> Design Summit in Austin. I tried to take into account all the talk conflicts 
> and the constraints that you communicated, although as always there is no 
> perfect solution.
>
> Let me know if you see major issues with it. It's difficult to make changes 
> at this stage as they quickly cascade into breaking all sorts of constraints, 
> but we may still be able to accommodate some.
>
> Eagle eyes readers will see that there are a number of fishbowl slots in 
> green (on Thursday early morning and end of afternoon). If your team is 
> interested in them (and that does not trigger a conflict), let me know and 
> we'll try to give them out.
>
> Once the layout is official, I'll proceed to publish it on the official event 
> schedule. Then as each project team comes up with session titles and content, 
> we'll gradually push those to the official schedule. The goal is to have all 
> the content finalized for mid-April.
>
> Thanks all!
>
> -- 
> Thierry Carrez (ttx)
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] IRC channel renamed to #openstack-kolla

2016-03-29 Thread Steven Dake (stdake)
See Subject.

Note #kolla automatically redirects to openstack-kolla now, so if folks have 
#kolla in their client list, they will join #openstack-kolla if they have a 
recent IRC client.  If you're already in the channel, leave and rejoin or restart 
your client and you will be connected with the rest of the community.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet-keystone] Setting additional config options:

2016-03-29 Thread Adam Young


Somewhere in here:

http://git.openstack.org/cgit/openstack/puppet-keystone/tree/spec/classes/keystone_spec.rb

I need to set these options:


admin_project_name
admin_project_domain_name

http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n450
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n453


If they are unset, we should default them to 'admin' and 'Default' on new
installs, and leave them blank on old installs.


Can anyone point me in the right direction?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet-keystone] Setting additional config options:

2016-03-29 Thread Rich Megginson

On 03/29/2016 04:19 PM, Adam Young wrote:


Somewhere in here:

http://git.openstack.org/cgit/openstack/puppet-keystone/tree/spec/classes/keystone_spec.rb 



spec is for the rspec unit testing.  Do you mean 
http://git.openstack.org/cgit/openstack/puppet-keystone/tree/manifests/init.pp 
?




I need to set these options:


admin_project_name
admin_project_domain_name

http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n450 

http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n453 




If they are unset, we should default them to 'admin' and 'Default' on new
installs, and leave them blank on old installs.


Can anyone point me in the right direction?


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Policy Managment and distribution.

2016-03-29 Thread Adam Young

Keystone has a policy API, but no one uses it.  It allows us to associate a
policy file with an endpoint. Upload a json blob, it gets a uuid.  Associate
the UUID with the endpoint.  It could also be associated with a service, and
then it is associated with all endpoint for that service unless explicitly
overwritten.

Assuming all of the puppet modules for all of the services support managing
the policy files, how hard would it be to synchronize between the database
and what we distribute to the nodes?  Something along the lines of:  if I
put a file in this directory, I want puppet to use it the next time I do a
redeploy, and I also want it uploaded to Keystone and associated with the
endpoint?

As a start, I want to be able to replace the baseline policy.json file with
the cloudsample.  We ship both.


We have policy.pp in Puppet Keystone for this use case.
In TripleO, we could create a parameter that users would use to
configure specific policies. It would be a hash, and Puppet would
manage the policies.  This would handle the Keystone case, but we need
to customize all of the policy files, for all of the services, for
example, to add the is_admin_project check.  I'd like to get this mechanism
in place before I start that work, so I can easily test changes.



The workflow needs to be something like this:

Bring up Keystone with Bootstrap.

For each service:
Fetch its  policy file from the RPM location.
Upload to Keystone.
Set the service-policy association in Keystone.
Deploy the service.
Copy over the policy file from Keystone.


In order to make a change, say to specialize for an endpoint:

Upload new policy file to Keystone
Set the Endpoint Association for the Policy File
run overcloud deploy and sync all of the policy files down again.

We don't have to use the Policy API, but we would end up re-implementing some 
aspect of it.
By using the Keystone API, we also provide a way to query "what is the policy for 
this endpoint?"

I don't think this would be a radical departure from what the rest of OpenStack
would do.

I can see Kolla using the same approach, or something like it.

Feedback, before I write this up as a spec?
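To make the spec concrete, here is a rough sketch of the two Keystone v3 calls involved — request shapes only, no HTTP; the endpoint URL and IDs are placeholders, and the OS-ENDPOINT-POLICY paths should be verified against the Keystone API docs:

```python
import json

KEYSTONE_URL = 'http://keystone.example.com:5000/v3'  # placeholder endpoint

def create_policy_request(policy_doc):
    """Request shape for storing a policy blob (POST /v3/policies).

    The blob is the serialized policy.json; Keystone assigns it a UUID.
    """
    body = json.dumps({'policy': {'blob': json.dumps(policy_doc),
                                  'type': 'application/json'}})
    return ('POST', KEYSTONE_URL + '/policies', body)

def associate_policy_request(policy_id, endpoint_id):
    """Request shape for binding a stored policy to an endpoint
    (OS-ENDPOINT-POLICY extension -- path assumed, verify in API docs).
    """
    url = '%s/policies/%s/OS-ENDPOINT-POLICY/endpoints/%s' % (
        KEYSTONE_URL, policy_id, endpoint_id)
    return ('PUT', url, None)
```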




Re: [openstack-dev] [Neutron]Relationship between physical networks and segment

2016-03-29 Thread Robert Kukura
My answers below are from the perspective of normal (non-routed) 
networks implemented in ML2. The support for routed networks should 
build on this without breaking it.


On 3/29/16 3:38 PM, Miguel Lavalle wrote:

Hi,

I am writing a patchset to build a mapping between hosts and network 
segments. The goal of this mapping is to be able to say whether a host 
has access to a given network segment. I am building this mapping 
assuming that if a host A has a bridges mapping containing 'physnet 1' 
and a segment has 'physnet 1' in its 'physical_network' attribute, 
then the host has access to that segment.


1) Is this assumption correct? Looking at method 
check_segment_for_agent in 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/mech_agent.py#n180 
seems to me to suggest that my assumption is correct?
This is true for certain agent-based mechanism drivers, but cannot be 
assumed to be the case for all mechanism drivers (even all those that
use agents). Any use of mapping info (i.e. from agents_db or elsewhere)
is specific to an individual mechanism driver. I'd strongly recommend 
that any functionality trying to make decisions based on connectivity do 
so by calling into the registered mechanism drivers, so they can decide 
whether whatever they manage has connectivity.


Also note that connectivity may involve hierarchical port binding, in 
which case you really need to try to bind a port to determine if you 
have connectivity. I'm not suggesting that there is a requirement to mix 
HPB and routed networks, but please try not to build assumptions into 
ML2 plugin code that don't work with HPB or that are only valid for a 
subset of mechanism drivers.


2) Furthermore, when a segment is mapped to a physical network, is 
there a one to one relationship between segments and physical nets?
Certainly different virtual networks can map to different segments (i.e. 
VLANs) on the same physical network. It is even possible for the same 
virtual network to have multiple segments on the same physical network.


-Bob
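For the agent-based case Miguel describes, the check boils down to something like the following sketch — valid only for drivers with the caveats above, since tunnelled network types don't depend on physnet mappings at all:

```python
def host_has_access(segment, bridge_mappings):
    """Return True if an agent whose bridge_mappings cover the segment's
    physical_network can reach that segment.

    Mirrors the shape of ML2's check_segment_for_agent(): flat/vlan
    segments need a matching physnet in the agent's mappings, while
    tunnelled types are reachable from any host running the agent.
    This is an illustrative sketch, not the actual neutron code.
    """
    net_type = segment.get('network_type')
    if net_type in ('vxlan', 'gre', 'geneve'):
        return True
    if net_type in ('flat', 'vlan'):
        return segment.get('physical_network') in bridge_mappings
    return False
```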


Thanks




Re: [openstack-dev] [nova][neutron] What to do about booting into port_security_enabled=False networks?

2016-03-29 Thread Armando M.
On 29 March 2016 at 08:08, Matt Riedemann 
wrote:

> Nova has had some long-standing bugs that Sahid is trying to fix here [1].
>
> You can create a network in neutron with port_security_enabled=False.
> However, the bug is that since Nova adds the 'default' security group to
> the request (if none are specified) when allocating networks, neutron
> raises an error when you try to create a port on that network with a
> 'default' security group.
>
> Sahid's patch simply checks if the network that we're going to use has
> port_security_enabled and if it does not, no security groups are applied
> when creating the port (regardless of what's requested for security groups,
> which in nova is always at least 'default').
>
> There has been a similar attempt at fixing this [2]. That change simply
> only added the 'default' security group when allocating networks with
> nova-network. It omitted the default security group if using neutron since:
>
> a) If the network does not have port security enabled, we'll blow up
> trying to add a port on it with the default security group.
>
> b) If the network does have port security enabled, neutron will
> automatically apply a 'default' security group to the port, nova doesn't
> need to specify one.
>
> The problem both Feodor's and Sahid's patches ran into was that the nova
> REST API adds a 'default' security group to the server create response when
> using neutron if specific security groups weren't on the server create
> request [3].
>
> This is clearly wrong in the case of network.port_security_enabled=False.
> When listing security groups for an instance, they are correctly not
> listed, but the server create response is still wrong.
>
> So the question is, how to resolve this?  A few options come to mind:
>
> a) Don't return any security groups in the server create response when
> using neutron as the backend. Given by this point we've cast off to the
> compute which actually does the work of network allocation, we can't call
> back into the network API to see what security groups are being used. Since
> we can't be sure, don't provide what could be false info.
>
> b) Add a new method to the network API which takes the requested networks
> from the server create request and returns a best guess if security groups
> are going to be applied or not. In the case of
> network.port_security_enabled=False, we know a security group won't be
> applied so the method returns False. If there is port_security_enabled, we
> return whatever security group was requested (or 'default'). If there are
> multiple networks on the request, we return the security groups that will
> be applied to any networks that have port security enabled.
>
> Option (b) is obviously more intensive and requires hitting the neutron
> API from nova API before we respond, which we'd like to avoid if possible.
> I'm also not sure what it means for the auto-allocated-topology
> (get-me-a-network) case. With a standard devstack setup, a network created
> via the auto-allocated-topology API has port_security_enabled=True, but I
> also have the 'Port Security' extension enabled and the default public
> external network has port_security_enabled=True. What if either of those
> are False (or the port security extension is disabled)? Does the
> auto-allocated network inherit port_security_enabled=False? We could
> duplicate that logic in Nova, but it's more proxy work that we would like
> to avoid.
>

Port security on the external network has no role in this because this is
not the network you'd be creating ports on. Even if it had
port-security=False, an auto-allocated network will still be created with
port security enabled (i.e. =True).

A user can obviously change that later on.
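For reference, the check in Sahid's patch [1] amounts to roughly the following — a sketch, not the actual nova code; the function name and data shapes are illustrative:

```python
def security_groups_for_port(network, requested_groups=('default',)):
    """Decide which security groups to pass when creating the port.

    Sketch of the fix discussed above: if the network has port security
    disabled, apply no security groups at all (regardless of what was
    requested); otherwise fall back to 'default' as nova does today.
    """
    if not network.get('port_security_enabled', True):
        return []
    return list(requested_groups) or ['default']
```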


>
> [1] https://review.openstack.org/#/c/284095/
> [2] https://review.openstack.org/#/c/173204/
> [3]
> https://github.com/openstack/nova/blob/f8a01ccdffc13403df77148867ef3821100b5edb/nova/api/openstack/compute/security_groups.py#L472-L475
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>


Re: [openstack-dev] [TripleO] gnocchi backport exception for stable/mitaka

2016-03-29 Thread Emilien Macchi
+1 from me; Gnocchi brings great features to improve Ceilometer
deployments at scale. It would be a first iteration, and having it
would bring us interesting feedback. Go for it!

On Tue, Mar 29, 2016 at 11:05 AM, Pradeep Kilambi  wrote:
> Hi Everyone:
>
> As Mitaka branch was cut yesterday, I would like to request a backport
> exception to get gnocchi patches[1][2][3] into stable/mitaka. It
> should be a low risk feature, as we decided not to set ceilometer to use
> gnocchi by default. So ceilometer would work as is and gnocchi is
> deployed alongside as a new service but not used out of the box. So
> this should make upgrades pretty much a non-issue as things should
> work exactly like before. If someone wants to use the gnocchi backend, they
> can add an env template file to override the backend. In Newton, we'll
> flip the switch to make gnocchi the default backend.
>
> If we can please vote to agree to get this in as an exception it would
> be super useful.
>
> Thanks,
> ~ Prad
>
> [1] https://review.openstack.org/#/c/252032/
> [2] https://review.openstack.org/#/c/290710/
> [3] https://review.openstack.org/#/c/238013/
>



-- 
Emilien Macchi



Re: [openstack-dev] [ptl][kolla][release] Deploying the big tent

2016-03-29 Thread Emilien Macchi
On Mon, Mar 28, 2016 at 9:16 PM, Steven Dake (stdake)  wrote:
>
>
> On 3/27/16, 2:50 PM, "Matt Riedemann"  wrote:
>
>>
>>
>>On 3/26/2016 11:27 AM, Steven Dake (stdake) wrote:
>>> Hey fellow PTLs and core reviewers of  those projects,
>>>
>>> Kolla at present deploys  the compute kit, and some other services that
>>> folks have added over time including other projects like Ironic, Heat,
>>> Mistral, Murano, Magnum, Manila, and Swift.
>>>
>>> One of my objectives from my PTL candidacy was to deploy the big tent,
>>> and I  saw how we were not super successful as I planned in Mitaka at
>>> filling out the big tent.
>>>
>>> While the Kolla core team is large, and we can commit to maintaining big
>>> tent projects that are deployed, we are at capacity every milestone of
>>> every cycle implementing new features that the various big tent services
>>> should conform to.  The idea of a plugin architecture for Kolla where
>>> projects could provide their own plugins has been floated, but before we
>>> try that, I'd prefer that the various teams in OpenStack with an
>>> interest in having their projects consumed by Operators involve
>>> themselves in containerizing their projects.
>>>
>>> Again, once the job is done, the Kolla community will continue to
>>> maintain these projects, and we hope you will stay involved in that
>>>process.
>>>
>>> It takes roughly four 4-hour blocks to learn the implementation
>>> architecture of Kolla and probably another two 4-hour blocks to get a good
>>> understanding of the Kolla deployment workflow.  Some projects (like
>>> Neutron for example) might fit outside this norm because containerizing
>>> them and deploying them is very complex.  But we have already finished
>>> the job on what we believe are the hard projects.
>>>
>>> My ask is that PTLs take responsibility or recruit someone from their
>>> respective community to participate in the implementation of Kolla
>>> deployment for their specific project.
>>>
>>> Only with your help can we make the vision of a deployment system that
>>> can deploy the big tent a reality.
>>>
>>> Regards
>>> -steve
>>>
>>>
>>>
>>
>>Having never looked at Kolla, is there an easy way to see what projects
>>are already done? Like, what would Nova need to do here? Or is it a
>>matter of keeping up with new deployment changes / upgrade impacts, like
>>the newish nova API database?
>>
>>If that's the case, couldn't the puppet/ansible/chef projects be making
>>similar requests from the project development teams?

Regarding our current model, I don't think we will (Puppet). See below
this text.

>>Unless we have incentive to be contributing to Kolla, like say we
>>replaced devstack in our CI setup with it, I'm having a hard time seeing
>>everyone jumping in on this.
>
> Matt,
>
> The compute kit projects are well covered by the current core reviewer
> team.  Hence, we don't really need any help with Nova.  This is more aimed
> at the herd of new server projects in Liberty and Mitaka that want
> deployment options which currently lack them.  There is no way to deploy
> aodh in an automated fashion (for example) (picked because it was first in
> the project list by alphabetical order;)
>
> For example this cycle community folks implemented mistral and manila,
> which were not really top in our priority list.  Yet, the work got done
> and now the core team will look after these projects as well.
>
> As for why puppet/ansible/chef couldn't make the same requests, the answer
> is they could.  Why haven't they?  I can never speak to the motives or
> actions of others, but perhaps they didn't think to try?

Puppet OpenStack, Chef OpenStack and Ansible OpenStack took another
approach, by having a separated module per project.

This is how we started 4 years ago in Puppet modules: having one
module that deploys one component.
Example: openstack/puppet-nova - openstack/puppet-keystone - etc
Note that we currently cover 27 OpenStack components, documented here:
https://wiki.openstack.org/wiki/Puppet

We have split the governance a little bit over the last 2 cycles,
where some modules like puppet-neutron and puppet-keystone (eventually
more in the future) have a dedicated core member group (among other
Puppet OpenStack core members) that have a special expertise on a
project.

Our model allows anyone who is an expert on a specific project (e.g.
Keystone) to contribute to puppet-keystone and eventually become core
on the project (it happens every cycle).
For now, I don't see an interest to have this code living in core projects.

Yes, there are devstack plugins, but historically, devstack is the
only tool right now that is used 

Re: [openstack-dev] [TripleO] gnocchi backport exception for stable/mitaka

2016-03-29 Thread James Slagle
On Tue, Mar 29, 2016 at 12:27 PM, Steven Hardy  wrote:
> On Tue, Mar 29, 2016 at 11:05:00AM -0400, Pradeep Kilambi wrote:
>> Hi Everyone:
>>
>> As Mitaka branch was cut yesterday, I would like to request a backport
>> exception to get gnocchi patches[1][2][3] into stable/mitaka. It
>> should be a low risk feature, as we decided not to set ceilometer to use
>> gnocchi by default. So ceilometer would work as is and gnocchi is
>> deployed alongside as a new service but not used out of the box. So
>> this should make upgrades pretty much a non-issue as things should
>> work exactly like before. If someone wants to use the gnocchi backend, they
>> can add an env template file to override the backend. In Newton, we'll
>> flip the switch to make gnocchi the default backend.
>>
>> If we can please vote to agree to get this in as an exception it would
>> be super useful.
>
> +1, provided we're able to confirm this plays nicely wrt upgrades I think
> we should allow this.
>
> We're taking a much stricter stance re backports for stable/mitaka, but I
> think this is justified for the following reasons:
>
> - The patches have been posted in plenty of time, but have suffered from a
>   lack of reviews and a lot of issues getting CI passing, were it not for
>   those issues this should really have landed by now.
>
> - The Ceilometer community have been moving towards replacing the database
>   dispatcher with gnocchi since kilo, and it should provide us with a
>   (better performing) alternative to the current setup AIUI.
>
> Thus I think this is a case where an exception is probably justified, but
> to be clear I'm generally opposed to granting exceptions for mitaka beyond
> the few things we may discover in the next few days prior to the
> coordinated release (in Newton I hope we can formalize this to be more
> aligned with the normal feature-freeze and RC process).

+1.

-- 
-- James Slagle
--



[openstack-dev] [election][tc] TC Candidacy for Shamail Tahir

2016-03-29 Thread Shamail Tahir
Hi everyone,

I would like to announce my candidacy for OpenStack Technical Committee
member for the upcoming term.

I am currently an offering manager for OpenStack initiatives at IBM and I
have been involved in the OpenStack community since 2013.  I spend most of
my time[1] in our working groups including the Product, Enterprise,
Application Ecosystem, and Operator Tags teams.  I have also spent a lot of
time talking with OpenStack users and organizations that are new to
open-source and OpenStack in order to help them with their journey into
becoming community members.  The OpenStack project continues to gain
adoption while also expanding in scope with new projects and
functionality.  These changes continue to make considering operator and
user experience in strategic and architectural decisions ever more
critical.  Furthermore, the TC itself is involved in a set of broad topics
lately (such as changing the mission statement, diversity, and mentoring to
name a few) and additional perspectives will be good for these discussions.

If elected to be a TC, I would like to focus/highlight on the topics
outlined below:

   * Revisit big-tent workflow for new projects.

- The big tent initiative has promoted faster and broader
innovation inside the OpenStack community and I believe we should reflect
on its first year to build a new set of best practices for both accepting
and on-boarding new projects, along with mechanisms to validate that new
projects are continuing to follow the four opens.  What worked? What
didn't?  Should there be additional criteria based on overall fit/need? And
how can we improve this further?



  * Promote a systems architecture point-of-view of the OpenStack cloud
platform

- There are several initiatives under-way (with more to come) that
can benefit multiple OpenStack projects.  Cross project specifications
allow us to build best practices and guidelines but they are currently
treated as recommendations.  If we could develop a way to make some of the
more critical needs a requirement vs. recommendation then we could have a
way to ensure that we have a way inside the community to drive
architectural changes, as necessary, for better system stability, user
experience, or performance.  The other benefit would be to agree on common
needs (e.g. the quota service discussion happening right now, the online
schema migration work that has taken place in a lot of projects) and guide
new projects through adopting the changes as they start development rather
than retrofit.

   * Increase user and technical committee exchanges

   - I would like to propose a regular cadence to discussions between
the technical and user committee members so that technical direction, and
decision points, can be shared and we could strengthen/expedite feedback
from our user community into process as needed.  I have seen both parties
appreciate the dialogue and find it valuable, therefore I think anything
that can strengthen this feedback loop is a positive thing.

   * Building an ecosystem for next-generation platforms

   - Over the last couple of years there have been multiple
complementary open-source projects/technologies that are catering to
similar application design patterns (Mesos, k8s, Cloud Foundry, etc.).  I
believe projects such as Kuryr are doing a good job in mapping OpenStack
technologies like neutron to other ecosystems (e.g Docker libnetwork) and
we should continue to solicit more projects/activity that will help
customers/users build an internal platform that meets their objectives as
they shift from traditional applications to newer applications.  In the
end, we are all trying to meet user needs and the answer probably lies in a
complementary view of adjacent technologies versus every need being
implemented by a single platform.  A successful approach to building an
ecosystem for OpenStack clouds could dramatically increase putting our code
to work through an even greater adoption rate and make OpenStack clouds
viable for additional workloads/use-cases.

My reason for focusing on these topics is based on my experience and
background in the OpenStack community.  I am a core member of the Product
working group[2], participate regularly in the ops-tags-team[3], help with
SuperuserTV[4], help with building the community-generated OpenStack
roadmap[5], and moderate sessions at ops-meetups.

I am passionate about the work our community produces and excited that it
has found relevance for so many people and organizations around the globe.
I would like to continue to do work that further strengthens the feedback
loop between the developers and consumers of our open-source cloud.  I
humbly ask for your vote to represent this need as a member of our
OpenStack Technical Committee.


[1] https://review.openstack.org/#/q/owner:ItzShamail%2540gmail.com

[2] https://wiki.openstack.org/wiki/ProductTeam

[3] https://review.openstack.org/#/q/project:openstack/ops-tags-team

[4] 

Re: [openstack-dev] [all] Austin Design Summit track layout

2016-03-29 Thread Matt Riedemann



On 3/29/2016 2:48 AM, Thierry Carrez wrote:

Matt Riedemann wrote:

I see one problem. The single stable team fishbowl session is scheduled
for the last slot on Thursday (5pm). The conflict I have with that is
the nova team does its priority/scheduling talk in the last session on
the last day for design summit fishbowl sessions (non-meetup style).

I'm wondering if we could move the stable session to 4:10 on Thursday. I
don't see any infra/QA sessions happening at that time, which is the
cross-project people we need for the stable session.


There is the release management fishbowl session at 4:10pm on Thursday
that would conflict (a lot of people are involved in both). Maybe we
could swap those two (put stable at 4:10pm and relmgt at 5:00pm). It
looks like that would work ?



That would work for me. I wouldn't be at the release mgmt one at 5pm, 
but I didn't know it was happening anyway, or that I'd be required to be 
there.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Neutron]Relationship between physical networks and segment

2016-03-29 Thread Carl Baldwin
On Tue, Mar 29, 2016 at 1:38 PM, Miguel Lavalle  wrote:
> I am writing a patchset to build a mapping between hosts and network
> segments. The goal of this mapping is to be able to say whether a host has
> access to a given network segment. I am building this mapping assuming that
> if a host A has a bridges mapping containing 'physnet 1' and a segment has
> 'physnet 1' in its 'physical_network' attribute, then the host has access to
> that segment.

Miguel, thanks for starting this.  I don't have the answer but here
are some thoughts.

First, a segment in the model is defined by the combination of network
type, physical network, and segmentation id [1].  In theory, the same
physical network name could be used with different network types and
segmentation ids.  For example, it might be natural to express
different VLANS on the same physical switch using the same physical
network name.

But, the method you linked to [2] does seem to make the same
assumption.  So, in practice it seems to be a valid assumption.  A
patch that recently merged [3] to make DHCP physnet aware also seems
consistent with the assumption.

> 1) Is this assumption correct? Looking at method check_segment_for_agent in
> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/mech_agent.py#n180

Careful, this reference can change as master updates this file.  :)

> seems to me to suggest that my assumption is correct?
>
> 2) Furthermore, when a segment is mapped to a physical network, is there a
> one to one relationship between segments and physical nets?

The routed networks use case can go either way.  We can easily live
with using a different physical network for each segment.  It might be
a bit awkward in a some situations (e.g. same router/switch serving
multiple segments) but I imagine that it might be more important to be
able to reuse VLAN ids across segments because VLAN ids can be scarce.
That would require that the physical network be unique for each.

I think this discussion is about what is the right thing to do
regardless of what the routed networks use case might or might not
need.  What are other use cases that might be relevant to this
discussion?

Carl

[1] 
https://github.com/openstack/neutron/blob/4a6d05e410/neutron/extensions/providernet.py#L33
[2] 
https://github.com/openstack/neutron/blob/4a6d05e410/neutron/plugins/ml2/drivers/mech_agent.py#L180
[3] https://review.openstack.org/#/c/205631/
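To illustrate the point from [1]: a segment's identity is the full triple, not the physical network alone, so the same VLAN id can legitimately appear under two different physnets. A small sketch:

```python
def segment_key(segment):
    # A segment is defined by the combination of network type,
    # physical network and segmentation id -- not by physnet alone.
    return (segment['network_type'],
            segment['physical_network'],
            segment['segmentation_id'])

# The same VLAN id (100) reused on two distinct physical networks
# still yields two distinct segments:
seg_a = {'network_type': 'vlan', 'physical_network': 'physnet1',
         'segmentation_id': 100}
seg_b = {'network_type': 'vlan', 'physical_network': 'physnet2',
         'segmentation_id': 100}
```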



Re: [openstack-dev] [all] Austin Design Summit track layout

2016-03-29 Thread Mark McClain

> On Mar 28, 2016, at 6:28 AM, Thierry Carrez  wrote:
> 
> Hi everyone,
> 
> Please find attached in PDF the proposed layout for the various tracks at the 
> Design Summit in Austin. I tried to take into account all the talk conflicts 
> and the constraints that you communicated, although as always there is no 
> perfect solution.
> 
> Let me know if you see major issues with it. It's difficult to make changes 
> at this stage as they quickly cascade into breaking all sorts of constraints, 
> but we may still be able to accommodate some.
> 
> Eagle eyes readers will see that there are a number of fishbowl slots in 
> green (on Thursday early morning and end of afternoon). If your team is 
> interested in them (and that does not trigger a conflict), let me know and 
> we'll try to give them out.
> 
> Once the layout is official, I'll proceed to publish it on the official event 
> schedule. Then as each project team comes up with session titles and content, 
> we'll gradually push those to the official schedule. The goal is to have all 
> the content finalized for mid-April.
> 
> Thanks all!
> 
> -- 
> Thierry Carrez (txt)


Would it be possible to move the Astara fish bowl session to Thursday morning?  
It would avoid the conflict with Tacker.  We’re hoping to see where the teams 
could cooperate across projects in the future.

mark


Re: [openstack-dev] [release][all][ptl] release process changes for official projects

2016-03-29 Thread Davanum Srinivas
Kirill,

This is prep for Newton. So definitely not rocking the boat when we
have a week left.

-- Dims

On Tue, Mar 29, 2016 at 4:08 PM, Kirill Zaitsev  wrote:
> My immediate question is — when would this be merged? Is it a good idea to
> alter this during the final RC week and before mitaka release, rather than
> implement this change early in the Newton cycle and let people release their
> final release the old way?
>
> --
> Kirill Zaitsev
> Murano Team
> Software Engineer
> Mirantis, Inc
>
> On 29 March 2016 at 19:46:08, Doug Hellmann (d...@doughellmann.com) wrote:
>
> During the Mitaka cycle, the release team worked on automation for
> tagging and documenting releases [1]. For the first phase, we focused
> on official teams with the release:managed tag for their deliverables,
> to keep the number of projects manageable as we built out the tools
> and processes we needed. That created a bit of confusion as official
> projects still had to submit openstack/releases changes in order
> to appear on the releases.openstack.org website.
>
> For the second phase during the Newton cycle, we are prepared to
> expand the use of automation to all deliverables for all official
> projects. As part of this shift, we will be updating the Gerrit
> ACLs for projects to ensure that the release team can handle the
> releases and branching. These updates will remove tagging and
> branching rights for anyone not on the central release management
> team. Instead of tagging releases and then recording them in the
> releases repository after the tag is applied, all official teams
> can now use the releases repo to request new releases. We will be
> reviewing version numbers in all tag requests to ensure SemVer is
> followed, and we won't release libraries late in the week, but we
> will process releases regularly so there is no reason this change
> should have a significant impact on your ability to release frequently.
>
> If you're not familiar with the current release process, please
> review the README.rst file in the openstack/releases repository.
> Follow up here on the mailing list or in #openstack-release if you
> have questions.
>
> The project-config change to update ACLs and correct issues with
> the build job definitions for official projects is
> https://review.openstack.org/298866
>
> Doug
>
> [1]
> http://specs.openstack.org/openstack-infra/infra-specs/specs/centralize-release-tagging.html
>



-- 
Davanum Srinivas :: https://twitter.com/dims
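As an aside, the SemVer sanity check Doug mentions can be sketched roughly like this — illustrative only, since the real validation lives in the openstack/releases jobs plus human review:

```python
import re

SEMVER = re.compile(r'^(\d+)\.(\d+)\.(\d+)$')

def next_version_ok(old, new):
    """Rough SemVer sanity check for a proposed release tag.

    The new tag must parse as X.Y.Z, be greater than the previous tag,
    and reset the lower components when a higher one is bumped.
    """
    o = tuple(int(x) for x in SEMVER.match(old).groups())
    n = tuple(int(x) for x in SEMVER.match(new).groups())
    if n <= o:
        return False          # must move forward
    if n[0] > o[0]:
        return n[1] == 0 and n[2] == 0   # major bump resets minor/patch
    if n[1] > o[1]:
        return n[2] == 0                 # minor bump resets patch
    return True                          # plain patch bump
```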



Re: [openstack-dev] [magnum] Generate atomic images using diskimage-builder

2016-03-29 Thread Ton Ngo

On multiple occasions in the past, we have had to use versions of some
software that were not yet available in the upstream image, for bug fixes
or new features (Kubernetes, Docker, Flannel, ...).  Eventually the
upstream image would catch up, but having the tool to customize lets us
push forward with development, and gate tests if it makes sense.

Ton Ngo,




From:   Yolanda Robla Mota 
To: 
Date:   03/29/2016 01:35 PM
Subject:Re: [openstack-dev] [magnum] Generate atomic images using
diskimage-builder



So the advantages I can see with diskimage-builder are:
- we reuse the same tooling that is present in other openstack projects
to generate images, rather than relying on an external image
- it improves the control we have over the contents of the image, instead
of seeing it as a black box. At the moment we can rely on the default
tree for Fedora 23, but this can be updated per Magnum's needs
- reusability: we have Atomic 23 now, but why not create Magnum images
with dib for Ubuntu or any other distro? Relying on
diskimage-builder makes it easy and flexible, because it's a matter of
adding the right elements.

Best
Yolanda

On 29/03/16 at 21:54, Steven Dake (stdake) wrote:
> Adrian,
>
> Makes sense.  Do the images have to be built to be mirrored though?  Can't
> they just be put on the mirror sites from upstream?
>
> Thanks
> -steve
>
> On 3/29/16, 11:02 AM, "Adrian Otto"  wrote:
>
>> Steve,
>>
>> I'm very interested in having an image locally cached in glance in each
>> of the clouds used by OpenStack infra. The local caching of the glance
>> images will produce much faster gate testing times. I don't care about
>> how the images are built, but we really do care about the performance
>> outcome.
>>
>> Adrian
>>
>>> On Mar 29, 2016, at 10:38 AM, Steven Dake (stdake) 
>>> wrote:
>>>
>>> Yolanda,
>>>
>>> That is a fantastic objective.  Matthieu asked why build our own images
>>> if
>>> the upstream images work and need no further customization?
>>>
>>> Regards
>>> -steve
>>>
>>> On 3/29/16, 1:57 AM, "Yolanda Robla Mota" 
>>> wrote:
>>>
 Hi
 The idea is to build our own images using diskimage-builder, rather than
 downloading the image from external sources. That way, the image can
 live in our mirrors, and is built using the same pattern as other images
 used in OpenStack.
 It also opens the door to customize the images, using custom trees, if
 there is a need for it. Currently we rely on the official tree for Fedora 23
 Atomic (https://dl.fedoraproject.org/pub/fedora/linux/atomic/23/) as
 default.

 Best,
 Yolanda

 On 29/03/16 at 10:17, Mathieu Velten wrote:
> Hi,
>
> We are using the official Fedora Atomic 23 images here (on Mitaka M1
> however) and it seems to work fine with at least Kubernetes and
Docker
> Swarm.
> Any reason to continue building a specific Magnum image?
>
> Regards,
>
> Mathieu
>
> On Wednesday, 23 March 2016 at 12:09 +0100, Yolanda Robla Mota wrote:
>> Hi
>> I wanted to start a discussion on how Fedora Atomic images are being
>> built. Currently the process for generating the atomic images used
>> on
>> Magnum is described here:
>>
http://docs.openstack.org/developer/magnum/dev/build-atomic-image.htm
>> l.
>> The image needs to be built manually, uploaded to fedorapeople, and
>> then
>> consumed from there in the magnum tests.
>> I have been working on a feature to allow diskimage-builder to
>> generate
>> these images. The code that makes it possible is here:
>> https://review.openstack.org/287167
>> This will allow magnum images to be generated on infra, using a
>> diskimage-builder element. This element also has the ability to
>> consume
>> any tree we need, so images can be customized on demand. I generated
>> one
>> image using this element, and uploaded to fedora people. The image
>> has
>> passed tests, and has been validated by several people.
>>
>> So i'm raising that topic to decide what should be the next steps.
>> This
>> change to generate fedora-atomic images has not already landed into
>> diskimage-builder. But we have two options here:
>> - add this element to generic diskimage-builder elements, as i'm
>> doing now
>> - generate this element internally on magnum. So we can have a
>> directory
>> in magnum project, called "elements", and have the fedora-atomic
>> element
>> here. This will give us more control on the element behaviour, and
>> will
>> allow to update the element without waiting for external reviews.
>>
>> Once the code for diskimage-builder has landed, another step can be
>> to
>> periodically generate images using a magnum 

Re: [openstack-dev] [Neutron] Segments, subnet types, and IPAM

2016-03-29 Thread Carl Baldwin
On Tue, Mar 29, 2016 at 12:12 PM, Carl Baldwin  wrote:
> I've been playing with this a bit on this patch set [1].  I haven't
> gotten very far yet but it has me thinking.
>
> Calico has a similar use case in mind as I do.  Essentially, we both
> want to group subnets to allow for aggregation of routes.  (a) In
> routed networks, we want to group them by segment and the boundaries
> are hard meaning that if an IP is requested from a particular segment,
> IPAM should fail if it can't allocate it from the same.
>
> (b) For Calico, I believe that the goal is to group by host.  Their
> boundaries are soft meaning that it is okay to allocate any IP on the
> network but one that allows maximum route aggregation is strongly
> preferred.
>
> (c) Brian Haley will soon post a spec to address the need to group
> subnets by service type.  This is another sort of grouping but is
> orthogonal to the need to group for routing purposes.  Here, we're
> trying to group like ports together so that we can use different types
> of addresses.  This kind of grouping could coexist with route grouping
> since they are orthogonal.
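The hard vs. soft boundary semantics above can be sketched in a few lines. This is purely illustrative -- the group names, data structures, and `allocate` function are hypothetical, not Neutron's actual IPAM interface:

```python
# Illustrative sketch of hard vs. soft subnet-group boundaries for IPAM.
# All names here are hypothetical; this is not Neutron's real interface.

def allocate(groups, preferred, hard_boundary):
    """Allocate a free IP, preferring the `preferred` group.

    groups: dict mapping group name (e.g. a segment or host) to a
    list of free addresses.  With hard_boundary=True (routed-segment
    case) allocation fails rather than crossing the boundary; with
    hard_boundary=False (Calico-style host grouping) any group is
    acceptable, but the aggregating one is strongly preferred.
    """
    if groups.get(preferred):
        return groups[preferred].pop(0)
    if hard_boundary:
        raise RuntimeError("no addresses left in group %r" % preferred)
    # Soft boundary: fall back to any group with a free address.
    for name, pool in groups.items():
        if pool:
            return pool.pop(0)
    raise RuntimeError("network exhausted")

groups = {"segment-1": [], "segment-2": ["10.0.2.4"]}
# Soft boundary: falls back to segment-2 instead of failing.
print(allocate(groups, "segment-1", hard_boundary=False))  # 10.0.2.4
```

The same function models both cases (a) and (b): only the `hard_boundary` flag differs.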

I thought of another type of grouping which could benefit pluggable
IPAM today.  It occurred to me as I was refreshing my memory on how
pluggable IPAM works when there are multiple subnets on a network.
Currently, Neutron's backend pulls the subnets and then tries to ask
the IPAM driver for an IP on each one in turn [1].  This is
inefficient and I think it is a natural opportunity to evolve the IPAM
interface to allow this to be handled within the driver itself.  The
driver could optimize it to avoid repeated round-trips to an external
server.

Anyway, it occurred to me that this is just like segment aware IPAM
except that the network is the group instead of the segment.  The IPAM
driver could consider it another orthogonal grouping of subnets (even
though it isn't really orthogonal from Neutron's point of view).  I
could provide an implementation that would provide a shim for existing
IPAM drivers to work without modification.  In fact, I could do that
for all the types of grouping I've mentioned.  Drivers could choose to
sub-class the behavior to optimize it if they have the capability.
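The shim idea can be sketched roughly as follows. The class and method names are made up for illustration -- the real pluggable-IPAM interface differs -- but it shows how a default group-level method can wrap the existing per-subnet call so current drivers keep working, while a driver backed by an external server could override it with one batched request:

```python
# Rough sketch of a group-allocation shim over a per-subnet IPAM driver.
# Names are illustrative only, not the real Neutron pluggable IPAM API.

class IpAddressGenerationFailure(Exception):
    pass

class IPAMDriver:
    def __init__(self, free):
        self.free = free  # subnet_id -> list of free addresses

    def allocate_from_subnet(self, subnet_id):
        """Existing per-subnet entry point."""
        pool = self.free.get(subnet_id)
        if not pool:
            raise IpAddressGenerationFailure(subnet_id)
        return pool.pop(0)

    def allocate_from_group(self, subnet_ids):
        """Shim: the default implementation just loops over the subnets,
        mirroring what the backend does today, so unmodified drivers
        work; a smarter driver overrides this to avoid N round-trips."""
        for subnet_id in subnet_ids:
            try:
                return subnet_id, self.allocate_from_subnet(subnet_id)
            except IpAddressGenerationFailure:
                continue
        raise IpAddressGenerationFailure(subnet_ids)

driver = IPAMDriver({"subnet-a": [], "subnet-b": ["192.0.2.7"]})
print(driver.allocate_from_group(["subnet-a", "subnet-b"]))
# ('subnet-b', '192.0.2.7')
```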

Carl

[1] 
https://github.com/openstack/neutron/blob/4a6d05e410/neutron/db/ipam_pluggable_backend.py#L88

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Nominating Vikram Hosakot for Core Reviewer

2016-03-29 Thread Michał Jastrzębski
+1

On 29 March 2016 at 11:39, Ryan Hallisey  wrote:
> +1
>
> - Original Message -
> From: "Paul Bourke" 
> To: openstack-dev@lists.openstack.org
> Sent: Tuesday, March 29, 2016 12:10:38 PM
> Subject: Re: [openstack-dev] [kolla][vote] Nominating Vikram Hosakot for Core 
> Reviewer
>
> +1
>
> On 29/03/16 17:07, Steven Dake (stdake) wrote:
>> Hey folks,
>>
>> Consider this proposal a +1 in favor of Vikram joining the core reviewer
>> team.  His reviews are outstanding.  If he doesn’t have anything useful
>> to add to a review, he doesn't pile on the review with more –1s which
>> are slightly disheartening to people.  Vikram has started a trend
>> amongst the core reviewers of actually diagnosing gate failures in
>> people's patches as opposed to saying "gate failed, please fix".  He does
>> this diagnosis in nearly every review I see, and if he is stumped he
>> says so.  His 30-day review stats place him in pole position and his
>> 90-day review stats place him in second position.  Of critical note is
>> that Vikram is ever-present on IRC which in my professional experience
>> is the #1 indicator of how well a core reviewer will perform long term.
>> Besides IRC and review requirements, we also have code requirements
>> for core reviewers.  Vikram has implemented only 10 patches so far, but I
>> feel he could amp this up if he had feature work to do.  At the moment
>> we are in a holding pattern on master development because we need to fix
>> Mitaka bugs.  That said Vikram is actively working on diagnosing root
>> causes of people's bugs in the IRC channel pretty much 12-18 hours a day
>> so we can ship Mitaka in a working bug-free state.
>>
>> Our core team consists of 11 people.  Vikram requires at minimum 6 +1
>> votes, with no veto –2 votes within a 7 day voting window to end on
>> April 7th.  If there is a veto vote prior to April 7th I will close
>> voting.  If there is a unanimous vote prior to April 7th, I will make
>> appropriate changes in gerrit.
>>
>> Regards
>> -steve
>>
>> [1] http://stackalytics.com/report/contribution/kolla-group/30
>> [2] http://stackalytics.com/report/contribution/kolla-group/90
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Generate atomic images using diskimage-builder

2016-03-29 Thread Yolanda Robla Mota

So the advantages I can see with diskimage-builder are:
- we reuse the same tooling that is present in other openstack projects 
to generate images, rather than relying on an external image
- it improves the control we have over the contents of the image, instead
of seeing it as a black box. At the moment we can rely on the default
tree for Fedora 23, but this can be updated per Magnum's needs
- reusability: we have Atomic 23 now, but why not create Magnum images
with dib for Ubuntu or any other distro? Relying on
diskimage-builder makes it easy and flexible, because it's a matter of
adding the right elements.


Best
Yolanda

On 29/03/16 at 21:54, Steven Dake (stdake) wrote:

Adrian,

Makes sense.  Do the images have to be built to be mirrored though?  Can't
they just be put on the mirror sites from upstream?

Thanks
-steve

On 3/29/16, 11:02 AM, "Adrian Otto"  wrote:


Steve,

I'm very interested in having an image locally cached in glance in each
of the clouds used by OpenStack infra. The local caching of the glance
images will produce much faster gate testing times. I don't care about
how the images are built, but we really do care about the performance
outcome.

Adrian


On Mar 29, 2016, at 10:38 AM, Steven Dake (stdake) 
wrote:

Yolanda,

That is a fantastic objective.  Matthieu asked why build our own images
if
the upstream images work and need no further customization?

Regards
-steve

On 3/29/16, 1:57 AM, "Yolanda Robla Mota" 
wrote:


Hi
The idea is to build our own images using diskimage-builder, rather than
downloading the image from external sources. That way, the image can
live in our mirrors, and is built using the same pattern as other images
used in OpenStack.
It also opens the door to customize the images, using custom trees, if
there is a need for it. Currently we rely on the official tree for Fedora 23
Atomic (https://dl.fedoraproject.org/pub/fedora/linux/atomic/23/) as
default.

Best,
Yolanda

On 29/03/16 at 10:17, Mathieu Velten wrote:

Hi,

We are using the official Fedora Atomic 23 images here (on Mitaka M1
however) and it seems to work fine with at least Kubernetes and Docker
Swarm.
Any reason to continue building a specific Magnum image?

Regards,

Mathieu

On Wednesday, 23 March 2016 at 12:09 +0100, Yolanda Robla Mota wrote:

Hi
I wanted to start a discussion on how Fedora Atomic images are being
built. Currently the process for generating the atomic images used
on
Magnum is described here:
http://docs.openstack.org/developer/magnum/dev/build-atomic-image.htm
l.
The image needs to be built manually, uploaded to fedorapeople, and
then
consumed from there in the magnum tests.
I have been working on a feature to allow diskimage-builder to
generate
these images. The code that makes it possible is here:
https://review.openstack.org/287167
This will allow magnum images to be generated on infra, using a
diskimage-builder element. This element also has the ability to
consume
any tree we need, so images can be customized on demand. I generated
one
image using this element, and uploaded to fedora people. The image
has
passed tests, and has been validated by several people.

So i'm raising that topic to decide what should be the next steps.
This
change to generate fedora-atomic images has not already landed into
diskimage-builder. But we have two options here:
- add this element to generic diskimage-builder elements, as i'm
doing now
- generate this element internally on magnum. So we can have a
directory
in magnum project, called "elements", and have the fedora-atomic
element
here. This will give us more control on the element behaviour, and
will
allow to update the element without waiting for external reviews.

Once the code for diskimage-builder has landed, another step can be
to
periodically generate images using a magnum job, and upload these
images
to OpenStack Infra mirrors. Currently the image is based on Fedora
F23,
docker-host tree. But different images can be generated if we need a
better option.

As soon as the images are available on internal infra mirrors, the tests
can be changed to consume these internal images. This way the tests
can be a bit faster (I know that the bottleneck is the functional
testing, but if we reduce the download time it can help), and tests can
be more reliable, because we will be removing an external dependency.

So I'd like to get more feedback on this topic, options and next steps
to achieve the goals. Best




--
Yolanda Robla Mota
Cloud Automation and Distribution Engineer
+34 605641639
yolanda.robla-m...@hpe.com




[openstack-dev] [magnum][kuryr] Shared session in design summit

2016-03-29 Thread Hongbin Lu
Hi all,

As discussed before, our team members want to establish a shared session 
between Magnum and Kuryr. We expect a lot of attendees in the session, so we 
need a large room (fishbowl). Currently, Kuryr has only 1 fishbowl session, and 
they possibly need it for other purposes. A solution is to promote one of the 
Magnum fishbowl sessions to be the shared session, or leverage one of the free 
fishbowl slots. The schedule is as below.

Please vote for your favorite time slot: http://doodle.com/poll/zuwercgnw2uecs5y .

Magnum fishbowl session:

* 11:00 - 11:40 (Thursday)

* 11:50 - 12:30

* 1:30 - 2:10

* 2:20 - 3:00

* 3:10 - 3:50

Free fishbowl slots:

* 9:00 - 9:40 (Thursday)

* 9:50 - 10:30

* 3:10 - 3:50 (conflict with Magnum session)

* 4:10 - 4:50 (conflict with Magnum session)

* 5:00 - 5:40 (conflict with Magnum session)

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Generate atomic images using diskimage-builder

2016-03-29 Thread Adrian Otto
Steve,

I will defer to the experts in openstack-infra on this one. As long as the 
image works without modifications, then I think it would be fine to cache the 
upstream one. Practically speaking, I do anticipate a point at which we will 
want to adjust something in the image, and it will be nice to have a 
well-defined point of customization in place for that in advance.

Adrian

On Mar 29, 2016, at 12:54 PM, Steven Dake (stdake) 
> wrote:

Adrian,

Makes sense.  Do the images have to be built to be mirrored though?  Can't
they just be put on the mirror sites from upstream?

Thanks
-steve

On 3/29/16, 11:02 AM, "Adrian Otto" 
> wrote:

Steve,

I'm very interested in having an image locally cached in glance in each
of the clouds used by OpenStack infra. The local caching of the glance
images will produce much faster gate testing times. I don't care about
how the images are built, but we really do care about the performance
outcome.

Adrian

On Mar 29, 2016, at 10:38 AM, Steven Dake (stdake) 
>
wrote:

Yolanda,

That is a fantastic objective.  Matthieu asked why build our own images
if
the upstream images work and need no further customization?

Regards
-steve

On 3/29/16, 1:57 AM, "Yolanda Robla Mota" 
>
wrote:

Hi
The idea is to build our own images using diskimage-builder, rather than
downloading the image from external sources. That way, the image can
live in our mirrors, and is built using the same pattern as other images
used in OpenStack.
It also opens the door to customize the images, using custom trees, if
there is a need for it. Currently we rely on the official tree for Fedora 23
Atomic (https://dl.fedoraproject.org/pub/fedora/linux/atomic/23/) as
default.

Best,
Yolanda

On 29/03/16 at 10:17, Mathieu Velten wrote:
Hi,

We are using the official Fedora Atomic 23 images here (on Mitaka M1
however) and it seems to work fine with at least Kubernetes and Docker
Swarm.
Any reason to continue building a specific Magnum image?

Regards,

Mathieu

On Wednesday, 23 March 2016 at 12:09 +0100, Yolanda Robla Mota wrote:
Hi
I wanted to start a discussion on how Fedora Atomic images are being
built. Currently the process for generating the atomic images used
on
Magnum is described here:
http://docs.openstack.org/developer/magnum/dev/build-atomic-image.htm
l.
The image needs to be built manually, uploaded to fedorapeople, and
then
consumed from there in the magnum tests.
I have been working on a feature to allow diskimage-builder to
generate
these images. The code that makes it possible is here:
https://review.openstack.org/287167
This will allow magnum images to be generated on infra, using a
diskimage-builder element. This element also has the ability to
consume
any tree we need, so images can be customized on demand. I generated
one
image using this element, and uploaded to fedora people. The image
has
passed tests, and has been validated by several people.

So i'm raising that topic to decide what should be the next steps.
This
change to generate fedora-atomic images has not already landed into
diskimage-builder. But we have two options here:
- add this element to generic diskimage-builder elements, as i'm
doing now
- generate this element internally on magnum. So we can have a
directory
in magnum project, called "elements", and have the fedora-atomic
element
here. This will give us more control on the element behaviour, and
will
allow to update the element without waiting for external reviews.

Once the code for diskimage-builder has landed, another step can be
to
periodically generate images using a magnum job, and upload these
images
to OpenStack Infra mirrors. Currently the image is based on Fedora
F23,
docker-host tree. But different images can be generated if we need a
better option.

As soon as the images are available on internal infra mirrors, the tests
can be changed to consume these internal images. This way the tests
can be a bit faster (I know that the bottleneck is the functional
testing, but if we reduce the download time it can help), and tests can
be more reliable, because we will be removing an external dependency.

So I'd like to get more feedback on this topic, options and next steps
to achieve the goals. Best




--
Yolanda Robla Mota
Cloud Automation and Distribution Engineer
+34 605641639
yolanda.robla-m...@hpe.com




__
OpenStack Development 

Re: [openstack-dev] [fuel] Component Leads Elections

2016-03-29 Thread Dmitry Borodaenko
On Tue, Mar 29, 2016 at 03:19:27PM +0300, Vladimir Kozhukalov wrote:
> > I think this call is too late to change a structure for now. I suggest
> > that we always respect the policy we've accepted, and follow it.
> >
> > If Component Leads role is under a question, then I'd continue the
> > discussion, hear opinion of current component leads, and give this a time
> > to be discussed. I'd have nothing against removing this role in a month
> > from now if we reach a consensus on this topic - no need to wait for the
> > cycle end.
> 
> Sure, there is no need to rush. I'd also like to see current CL opinions.

Considering that, while there's an ongoing discussion on how to change
Fuel team structure for Ocata, there's also an apparent consensus that
we still want to have component leads for Newton, I'd like to call once
again for volunteers to self-nominate for component leads of
fuel-library, fuel-web, and fuel-ui. We've got 2 days left until the
nomination period is over, and no volunteers so far :(

-- 
Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][all][ptl] release process changes for official projects

2016-03-29 Thread Kirill Zaitsev
My immediate question is — when would this be merged? Is it a good idea to 
alter this during the final RC week, right before the Mitaka release, rather than 
implementing this change early in the Newton cycle and letting people make their 
final release the old way?

-- 
Kirill Zaitsev
Murano Team
Software Engineer
Mirantis, Inc

On 29 March 2016 at 19:46:08, Doug Hellmann (d...@doughellmann.com) wrote:

During the Mitaka cycle, the release team worked on automation for  
tagging and documenting releases [1]. For the first phase, we focused  
on official teams with the release:managed tag for their deliverables,  
to keep the number of projects manageable as we built out the tools  
and processes we needed. That created a bit of confusion as official  
projects still had to submit openstack/releases changes in order  
to appear on the releases.openstack.org website.  

For the second phase during the Newton cycle, we are prepared to  
expand the use of automation to all deliverables for all official  
projects. As part of this shift, we will be updating the Gerrit  
ACLs for projects to ensure that the release team can handle the  
releases and branching. These updates will remove tagging and  
branching rights for anyone not on the central release management  
team. Instead of tagging releases and then recording them in the  
releases repository after the tag is applied, all official teams  
can now use the releases repo to request new releases. We will be  
reviewing version numbers in all tag requests to ensure SemVer is  
followed, and we won't release libraries late in the week, but we  
will process releases regularly so there is no reason this change  
should have a significant impact on your ability to release frequently.  

If you're not familiar with the current release process, please  
review the README.rst file in the openstack/releases repository.  
Follow up here on the mailing list or in #openstack-release if you  
have questions.  

The project-config change to update ACLs and correct issues with  
the build job definitions for official projects is  
https://review.openstack.org/298866  

Doug  

[1] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/centralize-release-tagging.html
  

__  
OpenStack Development Mailing List (not for usage questions)  
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][fuel] puppet-pacemaker: collaboration is starting now

2016-03-29 Thread Emilien Macchi
A few months ago, we moved redhat/puppet-pacemaker to
openstack/puppet-pacemaker for several reasons:

* We wanted to take benefits from OpenStack Infra (Gerrit, Zuul,
Jenkins jobs) and improve testing coverage.

Result: we succeed in here, changes in puppet-pacemaker no longer
break TripleO HA jobs, since we have the CI running for every patch.
Sofer is also doing incredible work on testing at this time, with
beaker jobs, and also making the module cleaner & more testable.

* TripleO is using this module and we saw an opportunity to share this
code with OpenStack community.

Result: we recently had some conversations with Fuel folks on this ML
and on IRC, who also work on a puppet-pacemaker module, and they are
willing to merge both modules.
The collaboration is starting now: https://review.openstack.org/#/c/296440/

Some actions in progress:
* Move bits from fuel-infra/puppet-pacemaker to
openstack/puppet-pacemaker (see 296440)
* Adding Fuel CI running for patches in openstack/puppet-pacemaker
* Adding Beaker tests to run on Ubuntu
* Try to find an alternative to pcs for Ubuntu platform (pcs is not in
debian/ubuntu)
* Investigate if we can follow Fuel's module, where XML is used instead of pcs.

Some requirements:
* Work will be done iteratively and test-driven, thanks to beaker
tests and Fuel CI / TripleO CI.
* We need to converge as many resources as we can, but still
keep feature parity for both the Fuel & TripleO installers.

Feel free to jump in this work / conversation if you are involved in
TripleO / Fuel / or interested by this module, we're doing this the
open way.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Cross Project Session Day Planning

2016-03-29 Thread Sean Dague
A gentle reminder that we're continuing to collect suggestions in the
etherpad for the remainder of the week. As of this email there are '8'
suggestions so far. We look forward to many more over the course of the
week.

Thanks again,

-Sean

On 03/23/2016 07:02 AM, Sean Dague wrote:
> On the Tuesday (April 26th) before Project Specific Design Summit
> Sessions kick off, we'll again have a Cross Project Session Day. This is
> a time sliced out where we can tackle some of the issues that span
> across our OpenStack Community.
> 
> We'll be doing proposals for this via etherpad, to match the way all the
> rest of the Design Summit planning by projects seems to be happening.
> Please propose items into here -
> https://etherpad.openstack.org/p/newton-cross-project-sessions
> 
> Session ideas will be open until April 2nd. After which point the TC
> will do selection and scheduling.
> 
> Basic Ground Rules:
> 
> Preference will be given to OpenStack-wide topics that impact all
> OpenStack projects. If time allows, we may have space for sessions
> affecting a smaller subset of projects, so feel free to propose such
> topics too.
> 
> Our space is a bit more flexible in Austin than in Tokyo, so the exact
> track structure will be figured out later once the content is there.
> 
> Remember, the Design Summit sessions work best when this is the middle
> of the conversation, not the beginning of one.
> 
> If you have any questions, feel free to ask on this email thread.
> 
> Happy Stacking,
> 
>   -Sean
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Generate atomic images using diskimage-builder

2016-03-29 Thread Steven Dake (stdake)
Adrian,

Makes sense.  Do the images have to be built to be mirrored though?  Can't
they just be put on the mirror sites from upstream?

Thanks
-steve

On 3/29/16, 11:02 AM, "Adrian Otto"  wrote:

>Steve,
>
>I'm very interested in having an image locally cached in glance in each
>of the clouds used by OpenStack infra. The local caching of the glance
>images will produce much faster gate testing times. I don't care about
>how the images are built, but we really do care about the performance
>outcome.
>
>Adrian
>
>> On Mar 29, 2016, at 10:38 AM, Steven Dake (stdake) 
>>wrote:
>> 
>> Yolanda,
>> 
>> That is a fantastic objective.  Matthieu asked why build our own images
>>if
>> the upstream images work and need no further customization?
>> 
>> Regards
>> -steve
>> 
>> On 3/29/16, 1:57 AM, "Yolanda Robla Mota" 
>> wrote:
>> 
>>> Hi
>>> The idea is to build our own images using diskimage-builder, rather than
>>> downloading the image from external sources. That way, the image can
>>> live in our mirrors, and is built using the same pattern as other images
>>> used in OpenStack.
>>> It also opens the door to customize the images, using custom trees, if
>>> there is a need for it. Currently we rely on the official tree for Fedora 23
>>> Atomic (https://dl.fedoraproject.org/pub/fedora/linux/atomic/23/) as
>>> default.
>>> 
>>> Best,
>>> Yolanda
>>> 
 On 29/03/16 at 10:17, Mathieu Velten wrote:
 Hi,
 
 We are using the official Fedora Atomic 23 images here (on Mitaka M1
 however) and it seems to work fine with at least Kubernetes and Docker
 Swarm.
 Any reason to continue building a specific Magnum image?
 
 Regards,
 
 Mathieu
 
 On Wednesday, 23 March 2016 at 12:09 +0100, Yolanda Robla Mota wrote:
> Hi
> I wanted to start a discussion on how Fedora Atomic images are being
> built. Currently the process for generating the atomic images used
> on
> Magnum is described here:
> http://docs.openstack.org/developer/magnum/dev/build-atomic-image.htm
> l.
> The image needs to be built manually, uploaded to fedorapeople, and
> then
> consumed from there in the magnum tests.
> I have been working on a feature to allow diskimage-builder to
> generate
> these images. The code that makes it possible is here:
> https://review.openstack.org/287167
> This will allow magnum images to be generated on infra, using a
> diskimage-builder element. This element also has the ability to
> consume
> any tree we need, so images can be customized on demand. I generated
> one
> image using this element, and uploaded to fedora people. The image
> has
> passed tests, and has been validated by several people.
> 
> So i'm raising that topic to decide what should be the next steps.
> This
> change to generate fedora-atomic images has not already landed into
> diskimage-builder. But we have two options here:
> - add this element to generic diskimage-builder elements, as i'm
> doing now
> - generate this element internally on magnum. So we can have a
> directory
> in magnum project, called "elements", and have the fedora-atomic
> element
> here. This will give us more control over the element's behaviour, and
> will
> allow us to update the element without waiting for external reviews.
> 
> Once the code for diskimage-builder has landed, another step can be
> to
> periodically generate images using a magnum job, and upload these
> images
> to OpenStack Infra mirrors. Currently the image is based on Fedora
> F23,
> docker-host tree. But different images can be generated if we need a
> better option.
> 
> As soon as the images are available on internal infra mirrors, the
> tests
> can be changed to consume these internal images. This way the
> tests
> can be a bit faster (I know that the bottleneck is the functional
> testing, but if we reduce the download time it can help), and tests
> can
> be more reliable, because we will be removing an external dependency.
> 
> So I'd like to get more feedback on this topic, options, and next
> steps
> to achieve the goals. Best
> 
 
 
__________________________________________________________________________
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> -- 
>>> Yolanda Robla Mota
>>> Cloud Automation and Distribution Engineer
>>> +34 605641639
>>> yolanda.robla-m...@hpe.com
>>> 
>>> 
>>> 
>>>

[openstack-dev] [election][tc] Announcing candidacy for the technical committee election

2016-03-29 Thread Steven Dake (stdake)
All My Peers,


TL;DR - I will increase adoption of OpenStack by removing governance

RED TAPE and mentoring individuals interested in these objectives.


A brief history of time:


I started my journey in OpenStack by doing a gap analysis of AWS to

OpenStack at my previous employer, Red Hat, Inc.  This gap analysis

turned up all kinds of gaps in OpenStack four years ago.  I

personally believed for OpenStack to be successful, it needed to expand

beyond a compute kit and deliver a complete IaaS platform.


Four years ago there was not really a way to add projects to OpenStack.

There was no big tent, but instead an incubation track.  It was very

poorly defined (half a wiki page), so I went about the efforts of

solving one of the most fundamental problems in OpenStack: Adding a

new project to OpenStack.  I did this by combining my previous gap

analysis with my experience starting and leading Open Source projects

to solve one of the most fundamental gaps in OpenStack: Orchestration.


This led to the founding of the Heat project with Angus Salkeld, for

which I served as PTL for 18 months.  At the time the bar to add

projects to OpenStack was stratospheric.  Fortunately the

dedication and perseverance of the Heat project team resulted

in the addition of Heat as an incubated and later integrated

project, as did another project, Ceilometer, led by Nick Barcet,

which went through the same process at nearly the same time as Heat.


Once Ceilometer and Heat were integrated into the integrated

release of OpenStack, a herd of projects attempted incubation

into OpenStack and the technical committee was faced with a dilemma.

In early 2014, OpenStack governance isolated projects into

"programs".  The technical committee believed it was necessary

to integrate all these new projects into existing programs.


The learning process from that led to the origination of the

Big Tent, of which I am a super hard-core fan.  Once the Big Tent

was reality, the bar for entry as a legitimate OpenStack project

was far lowered, creating a framework for new innovative projects

to flourish, evolve, and add value to the OpenStack community.


In mid 2014 I was feeling a little frustrated after recruiting a

fantastic diversely affiliated team and community and still

feeling like a track athlete for jumping all the hurdles in the

way of making Heat an integrated program.  How could others go

through this effort without all the hurdles?  I lacked an answer.

Fortunately the technical committee cut the RED TAPE by introducing

the Big Tent in late 2014 which re-energized me into solving

OpenStack's next two major gaps.  The first gap was lack of

support for container workloads (solved by Magnum), where Adrian

Otto served as PTL while I recruited a majority of the core

reviewer team and implemented much of the original architecture.


At nearly the same time, in 2015, I personally believed existing

deployment of OpenStack was too complex and error-prone, and formed

the Kolla project.


I recruited a great core review team with Kolla and trained this

young team on how to "Open Source".  I feel Kolla is one of OpenStack's

greatest successes - a team with a high degree of diverse affiliations

to solve OpenStack's #1 problem: How do I deploy the damn thing!


Now I, as the PTL of Kolla, am faced with a problem: RED TAPE!

The technical committee decided during the Big Tent process that

projects should be labeled with tags.  Whether tagging is dangerous

or not to OpenStack projects I leave for a different forum, but

there is Operator value in the tagging process.


Tags provide a mechanism to automate information transfer and serve

as a selection criterion for the OpenStack Operator community, who

represent the folks that are actually going to deploy the software

the OpenStack developer community creates.


The current tags as they are written are full of RED TAPE [1].  It

is not easy writing a tag that doesn't require onerous hurdle jumping.

I wrote a couple of type: tags myself [2][3], and it is not the fault of

the technical committee that the tags can appear so onerous to fresh

projects like Kolla that have only been in the Big Tent for ten

months.  It is a complex challenge handling all cases with a

limited document.  To correct the deficiencies of the governance

repository, we need to collectively involve the community in the

governance repository development process.


I don't want to "overthrow" the technical committee as it stands.

I think they have done a fantastic job of keeping up with rapid

pace of OpenStack's growth.  I honestly don't think I could have

done a better job in the past.  That said, I would appreciate the

opportunity to contribute to the technical committee's efforts

bringing to the table my 4 years of experience as PTL or co-PTL

of the Heat, Magnum, and Kolla projects as well as my previous

work in Open Source including leadership positions in the high

availability 

Re: [openstack-dev] [Congress] New bug for Mitaka: Glance authentication fails after token expiry

2016-03-29 Thread Nikhil Komawar
Thanks for bringing this up Eric!

On 3/29/16 4:01 PM, Eric K wrote:
> I just discovered a bug that's probably been around a long time but hidden
> by exception suppression. https://bugs.launchpad.net/congress/+bug/1563495
> When an auth attempt fails due to token expiry, the Congress Glance driver
> obtains a new token from keystone and sets it in the Glance client, but for
> some reason, the Glance client continues to use the expired token and fails to
> authenticate. Glance data stops flowing to Congress. It might explain the
> issue Bryan Sullivan ran into
> (http://lists.openstack.org/pipermail/openstack-dev/2016-February/087364.html).
>
> I haven't been able to nail down whether it's a Congress datasource driver
> issue or a Glance client issue. A few more eyes on it would be great.
> Thanks!
>
>
>

-- 

Thanks,
Nikhil




[openstack-dev] [tripleo] The Future of Service Orchestration with Puppet & Pacemaker

2016-03-29 Thread Emilien Macchi
Some TripleO folks are currently working hard on making automation for
upgrading TripleO between major OpenStack releases.
One of the biggest challenges is how to deal with Puppet, which usually
notifies services when configuration changes, and Pacemaker, which
actually manages the services.

Currently, we do this horrible thing of disabling service
management when Pacemaker manages the service.
It's terrible because Puppet wants to manage the service with
'systemd' but we disable it to let Pacemaker deal with that.
The main limitation is that most Puppet modules (especially in
OpenStack) know how to do orchestration for deployments & upgrades,
but we don't let them, because we disable service management.

A few months ago, we initiated a discussion about writing a Puppet
Resource Service Provider in openstack/puppet-pacemaker that will deal
with service management.
What will it change?
* we will enable service management again in THT.
* Puppet won't use systemd to deal with services, but use 'pcs' tool.
* We'll get the benefit of puppet-* modules' orchestration. Example:
this configuration change will restart this service (something we had
a hard time doing before).

We are currently working on it:
https://review.openstack.org/#/c/286124/

But this is work in progress:
* we need to implement the restart
* we need to create a special Puppet fact, that will make sure we only
run pcs on a single node (master by preference)
* investigate how to find which node is the master and stop hardcoding it in THT
(good for a bootstrap, but not good for upgrades).

Here is the kind of scenario we expect at the end:

Deployment of Neutron Server service
* openstack/puppet-neutron will configure neutron.conf
* openstack/puppet-neutron will notify neutron-server service
* THT is overriding the neutron-server service resource to use 'pcs'
provider, implemented in openstack/puppet-pacemaker
* once neutron.conf is configured on controllers, pcs will start
neutron-server, only from one node.

Upgrade of Neutron Server service
* we change RabbitMQ parameters in neutron.conf with THT
* openstack/puppet-neutron will update neutron.conf and notify
neutron-server service
* 'pcs' provider will restart neutron-server from one node, after the
change in the neutron.conf

That will be, in my opinion the most elegant way to make Puppet &
Pacemaker working together, any comment / feedback is highly welcome,
it will be a big change and we want to make it during Newton cycle.
Note: while designing this change, I found out we will also be able to
reduce the number of steps and drop some bash code in the upgrade
scripts, which is good news for making TripleO more consistent
and faster to deploy.
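
The single-node guard described in the scenarios above (only one designated
node drives 'pcs', and the restart happens only after the config change) can
be sketched as follows. This is a rough illustration, not the
puppet-pacemaker code under review; the helper names and the use of the
`pcs resource restart` subcommand are assumptions.

```python
def pcs_restart_cmd(resource):
    """Build the pcs command line that restarts a cluster-managed resource."""
    return ["pcs", "resource", "restart", resource]


def should_run_pcs(this_node, bootstrap_node):
    """Only one node (the bootstrap/master) should drive pcs, so the
    restart is not issued once per controller."""
    return this_node == bootstrap_node


def restart_service(resource, this_node, bootstrap_node, runner):
    """Restart `resource` via pcs, but only from the designated node.

    `runner` abstracts command execution (e.g. subprocess.check_call)."""
    if not should_run_pcs(this_node, bootstrap_node):
        # Other controllers do nothing; pcs acts cluster-wide anyway.
        return None
    cmd = pcs_restart_cmd(resource)
    runner(cmd)
    return cmd
```

The same predicate is what the proposed "special Puppet fact" would feed:
every node evaluates the provider, but only the one matching the fact
actually shells out to pcs.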

Thanks,
-- 
Emilien Macchi



Re: [openstack-dev] [release][puppet] Puppet OpenStack 8.0.0 release (mitaka)

2016-03-29 Thread Iury Gregory
Congratulations to everyone involved :D

2016-03-29 16:32 GMT-03:00 Emilien Macchi :

> On Tue, Mar 29, 2016 at 3:14 PM, Qasim Sarfraz 
> wrote:
> > Awesome.
> >
> >> > Neutron:
> >> > * Support of LBaaSv2
> >> > * More SDN integrations: OpenDayLight, PlugGrid, Midonet
> >> > * Use modern parameters for Nova notifications
> >
> > Just a minor correction, the SDN vendor is PLUMgrid here.
>
> Sorry for that, the good news is we did not write it wrong in the
> release note \o/
> http://docs.openstack.org/releasenotes/puppet-neutron/mitaka.html#id1
>
> Cheers,
>
> > --
> > Regards,
> > Qasim Sarfraz
> >
> >
> >
>
>
>
> --
> Emilien Macchi
>
>



-- 

~


Att[]'s
Iury Gregory Melo Ferreira
Master student in Computer Science at UFCG
E-mail: iurygreg...@gmail.com
~


Re: [openstack-dev] [release][puppet] Puppet OpenStack 8.0.0 release (mitaka)

2016-03-29 Thread Qasim Sarfraz
On Wed, Mar 30, 2016 at 12:32 AM, Emilien Macchi  wrote:

> On Tue, Mar 29, 2016 at 3:14 PM, Qasim Sarfraz 
> wrote:
> > Awesome.
> >
> >> > Neutron:
> >> > * Support of LBaaSv2
> >> > * More SDN integrations: OpenDayLight, PlugGrid, Midonet
> >> > * Use modern parameters for Nova notifications
> >
> > Just a minor correction, the SDN vendor is PLUMgrid here.
>
> Sorry for that, the good news is we did not write it wrong in the
> release note \o/
> http://docs.openstack.org/releasenotes/puppet-neutron/mitaka.html#id1
>
Great. Thanks!


-- 
Regards,
Qasim Sarfraz


[openstack-dev] [Neutron]Relationship between physical networks and segment

2016-03-29 Thread Miguel Lavalle
Hi,

I am writing a patchset to build a mapping between hosts and network
segments. The goal of this mapping is to be able to say whether a host has
access to a given network segment. I am building this mapping assuming that
if a host A has a bridge_mappings entry containing 'physnet 1' and a segment has
'physnet 1' in its 'physical_network' attribute, then the host has access
to that segment.
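
As a sanity check, the assumption can be written out as a small predicate.
The data shapes here (an agent's bridge_mappings dict and an ML2-style
segment dict) are assumed for illustration, not taken from actual Neutron
code:

```python
def host_has_access(bridge_mappings, segment):
    """A host reaches a segment if its L2 agent reports a bridge (or
    interface) mapping for the segment's physical_network.

    bridge_mappings: dict like {'physnet1': 'br-ex'} from the agent config.
    segment: dict with a 'physical_network' key, as in an ML2 segment dict.
    """
    physnet = segment.get("physical_network")
    # Segments with no physical_network (e.g. tunnelled ones) are
    # reachable from any host that runs the overlay.
    if physnet is None:
        return True
    return physnet in bridge_mappings
```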

1) Is this assumption correct? Looking at the method check_segment_for_agent in
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/mech_agent.py#n180
it seems to me that my assumption is correct.

2) Furthermore, when a segment is mapped to a physical network, is there a
one-to-one relationship between segments and physical nets?

Thanks


Re: [openstack-dev] [release][puppet] Puppet OpenStack 8.0.0 release (mitaka)

2016-03-29 Thread Emilien Macchi
On Tue, Mar 29, 2016 at 3:14 PM, Qasim Sarfraz  wrote:
> Awesome.
>
>> > Neutron:
>> > * Support of LBaaSv2
>> > * More SDN integrations: OpenDayLight, PlugGrid, Midonet
>> > * Use modern parameters for Nova notifications
>
> Just a minor correction, the SDN vendor is PLUMgrid here.

Sorry for that, the good news is we did not write it wrong in the
release note \o/
http://docs.openstack.org/releasenotes/puppet-neutron/mitaka.html#id1

Cheers,

> --
> Regards,
> Qasim Sarfraz
>
>



-- 
Emilien Macchi



Re: [openstack-dev] [neutron][nova][stable][sr-iov] Status of physical_device_mappings

2016-03-29 Thread Vladimir Eremin
Hi Jay,

There was no ability to set up this configuration WITH the Neutron SR-IOV ML2 agent 
in Liberty. That is what you pointed out, and you're totally correct.

But in Liberty, you're not required to use the Neutron SR-IOV ML2 agent to get this 
functionality working. And if you configure only nova-compute and neutron-server 
(WITHOUT the Neutron SR-IOV ML2 agent), you can achieve the desired configuration.

Basically:
* Liberty: you can use the agent and be limited to one NIC per physnet, or you can 
use it without the agent.
* Mitaka: you must use the agent and will be limited to one NIC per physnet.

So, the regression is introduced by making the Neutron SR-IOV ML2 agent required, 
and this easy fix removes the problem.
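
For illustration, here is a minimal sketch of the multi-valued parsing the
patch proposes; it is modeled on the examples quoted later in this thread,
not copied from the actual review:

```python
def parse_mappings_multi(mapping_str):
    """Parse 'physnet:dev[,physnet:dev,...]' into {physnet: [devices]}.

    Unlike the original parse_mappings(), a physnet key may repeat, so
    'physnet2:eth3,physnet2:eth4' yields {'physnet2': ['eth3', 'eth4']}.
    """
    mappings = {}
    for entry in mapping_str.split(","):
        entry = entry.strip()
        if not entry:
            continue
        physnet, _, device = entry.partition(":")
        if not device:
            raise ValueError("Invalid mapping: %r" % entry)
        devices = mappings.setdefault(physnet, [])
        # A duplicate device for the same physnet is still an error.
        if device in devices:
            raise ValueError("Device %s mapped twice to %s" % (device, physnet))
        devices.append(device)
    return mappings
```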

-- 
With best regards,
Vladimir Eremin,
Fuel Deployment Engineer,
Mirantis, Inc.



> On Mar 23, 2016, at 11:01 PM, Jay Pipes  wrote:
> 
> +tags for stable and nova
> 
> Hi Vladimir, comments inline. :)
> 
> On 03/21/2016 05:16 AM, Vladimir Eremin wrote:
>> Hey OpenStackers,
>> 
>> I've recently found out that changing the use of the neutron sriov-agent in Mitaka 
>> from optional to required [1] makes a kind of regression.
> 
> While I understand that it is important for you to be able to associate more 
> than one NIC to a physical network, I see no evidence that there was a 
> *regression* in Mitaka. I don't see any ability to specify more than one NIC 
> for a physical network in the Liberty Neutron SR-IOV ML2 agent:
> 
> https://github.com/openstack/neutron/blob/stable/liberty/neutron/common/utils.py#L223-L225
> 
>> Before Mitaka, it was possible to use any number of NICs with one Neutron 
>> physnet just by specifying pci_passthrough_whitelist in nova:
>> 
>> [default]
>> pci_passthrough_whitelist = { "devname": "eth3", "physical_network": 
>> "physnet2" },{ "devname": "eth4", "physical_network": "physnet2" },
>> 
>> which means that eth3 and eth4 will be used for physnet2 in some manner.
> 
> Yes, *in Nova*, however from what I can tell, this functionality never 
> existed in the parse_mappings() function in neutron.common.utils module.
> 
>> In Mitaka, it is also required to set up the neutron sriov-agent:
>> 
>> [sriov_nic]
>> physical_device_mappings = physnet2:eth3
>> 
>> The problem actually is that it is impossible to specify this mapping as 
>> "physnet2:eth3,physnet2:eth4" due to implementation details, so it is 
>> clearly a regression.
> 
> A regression means that a change broke some previously-working functionality. 
> This is not a regression, since there apparently was never such functionality 
> in Neutron.
> 
>> I've filed a bug [2] for it and proposed a patch [3]. Originally 
>> physical_device_mappings is converted to a dict, where the physnet name goes to 
>> the key, and the interface name to the value:
>> 
>> >>> parse_mappings('physnet2:eth3')
>> {'physnet2': 'eth3'}
>> >>> parse_mappings('physnet2:eth3,physnet2:eth4')
>> ValueError: Key physnet2 in mapping: 'physnet2:eth4' not unique
>> 
>> I've changed it a bit, so the interface name is stored in a list, and now this 
>> case works:
>> 
>> >>> parse_mappings_multi('physnet2:eth3,physnet2:eth4')
>> {'physnet2': ['eth3', 'eth4']}
>> 
>> I'd like to see this fix [3] in master and the Mitaka branch.
> 
> I understand you really want this functionality in Mitaka. And I will leave 
> it up to the stable team to determine whether this code should be backported 
> to stable/mitaka. However, I will point out that this is a new feature, not a 
> bug fix for a regression. There is no regression because the ability for 
> Neutron to use more than one NIC with a physnet was never supported as far as 
> I can tell.
> 
> Best,
> -jay
> 
>> Moshe Levi also proposed to refactor this part of code to remove 
>> physical_device_mappings and reuse data that nova provides somehow. I'll 
>> file the RFE as soon as I figure out how it should work.
>> 
>> [1]: http://docs.openstack.org/liberty/networking-guide/adv_config_sriov.html
>> [2]: https://bugs.launchpad.net/neutron/+bug/1558626
>> [3]: https://review.openstack.org/294188
>> 
>> --
>> With best regards,
>> Vladimir Eremin,
>> Fuel Deployment Engineer,
>> Mirantis, Inc.
>> 
>> 
>> 
>> 
>> 
>> 
> 



[openstack-dev] [all] Service Catalog TNG work in Mitaka ... next steps

2016-03-29 Thread Sean Dague
At the Mitaka Summit we had a double session on the Service Catalog:
where we stood, and where we could move forward. Even though the service
catalog isn't used nearly as much as we'd like, it's used in just enough
odd places that every change pulls on a few other threads that are
unexpected. So this is going to be a slow process going forward, but I
do have faith we'll get there.

Thanks much to Brant, Chris, and Anne for putting in time this cycle to
keep this ball moving forward.

Mitaka did a lot of fact finding.

* public / admin / internal urls - mixed results

The notion of an internal url is used in many deployments because they
believe it means they won't be charged for data transfer. There is no
definitive semantic meaning to any of these. Many sites just make all of
these the same, and use the network to ensure that internal connections
hit internal interfaces.

Next Steps: this really needs a set of user stories built from what we
currently have. That's where that one is left.

* project_id optional in projects - good progress

One of the issues with lots of things that want to be done with the
service catalog is that we've gone and hard-coded project_id into urls
in projects where it is not really semantically meaningful. That
precluded things like an anonymous service catalog.

We decided to demonstrate this on Nova first. That landed as
microversion 2.18. It means that service catalog entries no longer need
project_id to be in the url. There is a patch up for devstack to enable
this - https://review.openstack.org/#/c/233079/ - though a Tempest patch
removing errant tests needs to land first.

The only real snag we found during this was that python Routes +
keystone's ability to have project id not be a uuid (even though it
defaults to one) made for the need to add a new config option to handle
this going either way.

This is probably easy to replicate on other projects during the next cycle.

Next Steps: get volunteers from additional projects to replicate this.

* service types authority

One of the things we know we need to make progress on is an actual
authority of all the service catalogue types which we recognize. We got
agreement to create this repository, I've got some outstanding patches
to restructure for starting off the repo -
https://review.openstack.org/#/q/project:openstack/service-types-authority

The thing we discovered here was that even the apparently easy problems
sometimes aren't. The assumption that there might be a single URL which
describes the API for a service is an assumption we don't fulfil even
for most of the base services.

This bump in the road is part of what led to some shifted effort on the
API Reference in RST work - (see
http://lists.openstack.org/pipermail/openstack-dev/2016-March/090659.html)

Next Steps: the API doc conversion probably trumps this work, and at
some level is a requirement for it. Once we get the API reference
transition in full swing, this probably becomes top of stack.

* service catalogue tng schema

Brant has done some early work setting up a schema based on the known
knowns, and leaving some holes for the known unknowns until we get a few
of these locked down (types / allowed urls).

Next Steps: review current schema

* Weekly Meetings

We had been meeting weekly in #openstack-meeting-cp up until release
crunch, when most of us got swamped with such things.

I'd like to keep focus on the API doc conversion in the near term, as
there is a mountain to get over with getting the first API converted,
then we can start making the docs more friendly to our users. I think
this means we probably keep the weekly meeting on hiatus until post
Austin, and start it up again the week after we all get back.


Thanks to folks that helped get us this far. Hopefully we'll start
picking up steam again once we get a bit of this backlog cleared and
getting chugging during the cycle.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [release][puppet] Puppet OpenStack 8.0.0 release (mitaka)

2016-03-29 Thread Qasim Sarfraz
Awesome.

> Neutron:
> > * Support of LBaaSv2
> > * More SDN integrations: OpenDayLight, PlugGrid, Midonet
> > * Use modern parameters for Nova notifications
>
Just a minor correction, the SDN vendor is PLUMgrid here.

-- 
Regards,
Qasim Sarfraz


[openstack-dev] [Congress] New bug for Mitaka: Glance authentication fails after token expiry

2016-03-29 Thread Eric K
I just discovered a bug that's probably been around a long time but hidden
by exception suppression. https://bugs.launchpad.net/congress/+bug/1563495
When an auth attempt fails due to token expiry, the Congress Glance driver
obtains a new token from keystone and sets it in the Glance client, but for
some reason, the Glance client continues to use the expired token and fails to
authenticate. Glance data stops flowing to Congress. It might explain the
issue Bryan Sullivan ran into
(http://lists.openstack.org/pipermail/openstack-dev/2016-February/087364.html).

I haven't been able to nail down whether it's a Congress datasource driver
issue or a Glance client issue. A few more eyes on it would be great.
Thanks!
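
One pattern worth checking is whether the driver mutates the existing client
rather than rebuilding it, since glanceclient keeps using the token it was
constructed with. The rebuild-on-401 approach can be sketched with stand-in
classes (these are illustrative, not the real glanceclient/keystoneclient
APIs):

```python
class Unauthorized(Exception):
    """Stand-in for the HTTP 401 the real client raises."""


class FakeGlanceClient:
    """Stand-in client that, like glanceclient, keeps using the token
    it was constructed with."""
    def __init__(self, token, valid_tokens):
        self.token = token
        self.valid_tokens = valid_tokens

    def list_images(self):
        if self.token not in self.valid_tokens:
            raise Unauthorized()
        return ["cirros"]


class GlanceDriver:
    """Rebuild the client with a fresh token instead of mutating it."""
    def __init__(self, get_token, client_factory):
        self.get_token = get_token          # e.g. a keystone re-auth call
        self.client_factory = client_factory
        self.client = client_factory(get_token())

    def list_images(self):
        try:
            return self.client.list_images()
        except Unauthorized:
            # Key point: construct a *new* client with the new token;
            # setting an attribute on the old one may never take effect.
            self.client = self.client_factory(self.get_token())
            return self.client.list_images()
```

If the real driver only assigns the new token onto the old client object,
that would match the observed behavior of the expired token being reused.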





Re: [openstack-dev] [Ironic]Nova resource tracker is not able to identify resources of BM instance when fake_pxe driver used

2016-03-29 Thread Jim Rollenhagen


> On Mar 29, 2016, at 08:39, Senthilprabu Shanmugavel 
>  wrote:
> 
> Thanks Jim. The workaround did the trick. Very well explained. 
> 
> Should I raise a bug for the fake driver's power state?

That would be great, thanks in advance! :)

// jim 
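
For reference, the interaction described below can be condensed into a
small sketch; the constants and checks here are simplified stand-ins for
the actual Nova/Ironic code, not the real implementations:

```python
POWER_ON, POWER_OFF = "power on", "power off"


def fake_get_power_state(node_power_state):
    """Proposed fix for the fake power driver: never report None, so
    Nova's resource tracker does not skip the node."""
    return node_power_state or POWER_ON


def nova_counts_node(power_state, maintenance=False):
    """Simplified version of the check that made the node invisible:
    a node whose power_state is None is treated as unavailable."""
    return power_state is not None and not maintenance
```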

> 
>> On Tue, Mar 29, 2016 at 5:30 PM, Jim Rollenhagen  
>> wrote:
>> On Tue, Mar 29, 2016 at 04:59:09PM +0300, Senthilprabu Shanmugavel wrote:
>> > Hello,
>> >
>> > I am using Ironic for deploying baremetal to my openstack environment.
>> > Using Liberty version on Ubuntu 14.04. I followed Ironic documentation  to
>> > deploy x86 servers using pxe_ipmitool. Now I have a working Ironic setup
>> > for PXE boot. I also want to add my test board running ARM 64 bit CPU to
>> > ironic deployment. I would like to try using fake_pxe drivers because my
>> > board don't support IPMI or anything else for out of band communication. So
>> > idea was to do deployment without power management, eventually fake_pxe is
>> > the obvious choice. But I have problem in updating the Ironic node which I
>> > will explain below.
>> >
>> > I created ironic node using fake_pxe driver. Added all necessary parameters
>> > using node-update command. Node-show command output is given below for
>> > reference
>> >
>> > 
>> >
>> > Because of this during nova boot, scheduler failed to boot the BM instance.
>> >
>> > Can anyone help me with what's wrong in my configuration?
>> 
>> I guess probably someone has never used the fake power driver with
>> devstack. :)
>> 
>> If a node's power state is None, it will be ignored by Nova:
>> https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L171
>> 
>> And the fake power driver doesn't set the power state to on/off itself:
>> https://github.com/openstack/ironic/blob/master/ironic/drivers/modules/fake.py#L43
>> 
>> We should probably fix this by changing it to:
>> return task.node.power_state or states.POWER_ON
>> 
>> In the meantime, an easy workaround would be:
>> ironic node-set-power-state  on
>> ironic node-set-power-state  off
>> 
>> Which would have the driver 'remember' the power state is currently off,
>> allowing Nova to pick up the resources.
>> 
>> Hope that helps :)
>> 
>> // jim
>> 
>> >
>> >
>> >
>> > Thanks in advance,
>> > Senthil
>> 
>> 
>> 
> 
> 
> 
> -- 
> Best Regards,
> Senthilprabu Shanmugavel


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-29 Thread Doug Hellmann
Excerpts from Thomas Goirand's message of 2016-03-29 13:37:14 +0200:
> On 03/29/2016 11:56 AM, Neil Jerram wrote:
> > On 29/03/16 09:38, Thomas Goirand wrote:
> >> On 03/23/2016 04:06 PM, Mike Perez wrote:
> >>> Hey all,
> >>>
> >>> I've been talking to a variety of projects about lack of install guides.
> >>> This came from me not having a great experience with trying out projects
> >>> in the big tent.
> >>>
> >>> [...]
> >>
> >> Sorry to jump in this conversation a bit later, as I missed this thread.
> >>
> >> I've contributed lots of entries for Debian, and I'm a bit frustrated to
> >> still not have an active link to it.
> > 
> > I don't understand.  Do you mean that you could have published this 
> > guide yourself, but haven't yet done that;  or that you think someone 
> > else should be publishing it?
> > 
> > Neil
> 
> The documentation people decided to *not* publish the Debian
> install-guide (and remove the link to it) even though it can be
> generated and (in my opinion, which is probably biased) works well.
> 
> I'd like to know the steps needed to restore a working active link.
> 
> I am also not thrilled to see that it was decided to completely remove
> the Debian guide without even asking me about it. First, this isn't in the
> big-tent spirit (i.e., best-effort based), and this isn't the first
> time I've seen this kind of thing happen within the install-guide.
> 
> On each release it's the same: I try to first finish the Debian
> packages, test them well, and *then* work on the install-guide, but when
> this is happening, the Debian install-guide gets removed. I'm not given
> the opportunity to work on it when I have the time to (and my opinion
> isn't even asked).
> 
> I've been told multiple times to get someone else to work on the Debian
> guide. Well, it's not happening, nobody is volunteering, and there's
> only me so far. So what do you propose? Just get rid of my work?

If the core doc team isn't able to help you maintain it, maybe it's a
candidate for a separate guide, just like we're discussing for projects
that aren't part of the DefCore set included in the main guide.

Doug

> 
> Cheers,
> 
> Thomas Goirand (zigo)
> 



Re: [openstack-dev] [Neutron] Segments, subnet types, and IPAM

2016-03-29 Thread Carl Baldwin
I've been playing with this a bit on this patch set [1].  I haven't
gotten very far yet but it has me thinking.

Calico has a similar use case in mind to mine.  Essentially, we both
want to group subnets to allow for aggregation of routes.  (a) In
routed networks, we want to group them by segment, and the boundaries
are hard, meaning that if an IP is requested from a particular segment,
IPAM should fail if it can't allocate one from that segment.

(b) For Calico, I believe that the goal is to group by host.  Their
boundaries are soft, meaning that it is okay to allocate any IP on the
network, but one that allows maximum route aggregation is strongly
preferred.

(c) Brian Haley will soon post a spec to address the need to group
subnets by service type.  This is another sort of grouping but is
orthogonal to the need to group for routing purposes.  Here, we're
trying to group like ports together so that we can use different types
of addresses.  This kind of grouping could coexist with route grouping
since they are orthogonal.

Given all this grouping, it seems like it might make sense to add some
sort of grouping feature to IPAM.  Here's how I'm thinking it will
work.

1.  When a subnet is allocated, a group id(s) can be passed with the
request.  IPAM will remember the group id with the subnet.
2.  When an IP address is needed, a group id(s) can be passed with the
request.  IPAM will try to allocate from a subnet with a matching
group id(s).
3.  If no IP address is available that exactly matches the group id(s)
then IPAM may fall back to another subnet.  This behavior needs to be
different for the various use cases mentioned which is where it gets
kind of complicated.
  (a) No fallback is allowed.  The IP allocation should fail.
  (b) We can fall back to any other subnet.  There might be some
reasons to prefer some over others but this could get complicated
fast.
  (c) We can fall back to any subnet with None as its group (legacy
subnets) but not to other groups (e.g. if I'm trying to allocate a
floating IP address, I don't want to fall back to a subnet meant for
DVR gateways because those aren't public IP addresses).

I put (s) after group id in many cases above because it appears that
we can have more than one kind of orthogonal grouping to consider at
the same time.
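The policies above can be made concrete with a toy allocator. This is a hypothetical sketch only -- the function, policy names, and data shapes are illustrative and are not Neutron's actual pluggable IPAM interface:

```python
STRICT = "strict"        # case (a): routed segments, allocation fails on miss
SOFT = "soft"            # case (b): Calico-style, any subnet is acceptable
LEGACY_ONLY = "legacy"   # case (c): fall back only to ungrouped subnets

def pick_subnet(subnets, group_id, policy):
    """Return a subnet id for group_id, honoring the fallback policy.

    `subnets` maps subnet id -> group id; None marks an ungrouped,
    legacy subnet.
    """
    # An exact group match always wins.
    for sid, gid in subnets.items():
        if gid == group_id:
            return sid
    if policy == STRICT:
        raise LookupError("no subnet in group %r; allocation fails" % group_id)
    if policy == LEGACY_ONLY:
        for sid, gid in subnets.items():
            if gid is None:
                return sid
        raise LookupError("no ungrouped subnet to fall back to")
    # SOFT: any subnet will do; a real implementation would prefer
    # subnets that maximize route aggregation.
    return next(iter(subnets))
```

Supporting several orthogonal groupings would mean matching on tuples of group ids rather than a single value, but the fallback shape stays the same.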

What do folks think?

Am I trying to generalize too much and making it complicated?

What are some alternatives?

Are there other compelling use cases that I haven't considered yet?

Carl

[1] https://review.openstack.org/#/c/288774

On Fri, Mar 11, 2016 at 4:15 PM, Carl Baldwin  wrote:
> Hi,
>
> I have started to get into coding [1] for the Neutron routed networks
> specification [2].
>
> This spec proposes a new association between network segments and
> subnets.  This affects how IPAM needs to work because until we know
> where the port is going to land, we cannot allocate an IP address for
> it.  Also, IPAM will need to somehow be aware of segments.  We have
> proposed a host / segment mapping which could be transformed to a host
> / subnet mapping for IPAM purposes.
>
> I wanted to get the opinion of folks like Salvatore, John Belamaric,
> and you (if you interested) on this.  How will this affect the
> interface to pluggable IPAM and how can pluggable implementations can
> accommodate this change.  Obviously, we wouldn't require
> implementations to support it but routed networks wouldn't be very
> useful without it.  So, those implementations would not be compatible
> when routed networks are deployed.
>
> Another related topic was brought up in the recent Neutron mid-cycle.
> We talked about adding a service type attribute to to subnets.  The
> reason for this change is to allow operators to create special subnets
> on a network to be used only by certain kinds of ports.  For example,
> DVR fip namespace gateway ports burn a public IP for no good reason.
> This new feature would allow operators to create a special subnet in
> the network with private addressing only to be used by these ports.
>
> Another example would give operators the ability to use private
> subnets for router external gateway ports if shared SNAT is not needed
> or doesn't need to use public IPs.
>
> These are two ways in which subnets are taking on extra
> characteristics which distinguish them from other subnets on the same
> network.  That is why I lumped them together in to one thread.
>
> Carl



Re: [openstack-dev] [magnum] Generate atomic images using diskimage-builder

2016-03-29 Thread Adrian Otto
Steve,

I’m very interested in having an image locally cached in glance in each of the 
clouds used by OpenStack infra. The local caching of the glance images will 
produce much faster gate testing times. I don’t care about how the images are 
built, but we really do care about the performance outcome.

Adrian

> On Mar 29, 2016, at 10:38 AM, Steven Dake (stdake)  wrote:
> 
> Yolanda,
> 
> That is a fantastic objective.  Mathieu asked why we should build our own
> images if the upstream images work and need no further customization.
> 
> Regards
> -steve
> 
> On 3/29/16, 1:57 AM, "Yolanda Robla Mota" 
> wrote:
> 
>> Hi
>> The idea is to build our own images using diskimage-builder, rather than
>> downloading the image from external sources. By that way, the image can
>> live in our mirrors, and is built using the same pattern as other images
>> used in OpenStack.
>> It also opens the door to customize the images, using custom trees, if
>> there is a need for it. Currently we rely on the official tree for Fedora 23
>> Atomic (https://dl.fedoraproject.org/pub/fedora/linux/atomic/23/) as
>> default.
>> 
>> Best,
>> Yolanda
>> 
>> El 29/03/16 a las 10:17, Mathieu Velten escribió:
>>> Hi,
>>> 
>>> We are using the official Fedora Atomic 23 images here (on Mitaka M1
>>> however) and it seems to work fine with at least Kubernetes and Docker
>>> Swarm.
>>> Any reason to continue building a specific Magnum image?
>>> 
>>> Regards,
>>> 
>>> Mathieu
>>> 
>>> Le mercredi 23 mars 2016 à 12:09 +0100, Yolanda Robla Mota a écrit :
 Hi
 I wanted to start a discussion on how Fedora Atomic images are being
 built. Currently the process for generating the atomic images used
 on
 Magnum is described here:
 http://docs.openstack.org/developer/magnum/dev/build-atomic-image.htm
 l.
 The image needs to be built manually, uploaded to fedorapeople, and
 then
 consumed from there in the magnum tests.
 I have been working on a feature to allow diskimage-builder to
 generate
 these images. The code that makes it possible is here:
 https://review.openstack.org/287167
 This will allow that magnum images are generated on infra, using
 diskimage-builder element. This element also has the ability to
 consume
 any tree we need, so images can be customized on demand. I generated
 one
 image using this element, and uploaded to fedora people. The image
 has
 passed tests, and has been validated by several people.
 
 So i'm raising that topic to decide what should be the next steps.
 This
 change to generate fedora-atomic images has not already landed into
 diskimage-builder. But we have two options here:
 - add this element to generic diskimage-builder elements, as i'm
 doing now
 - generate this element internally on magnum. So we can have a
 directory
 in magnum project, called "elements", and have the fedora-atomic
 element
 here. This will give us more control on the element behaviour, and
 will
 allow to update the element without waiting for external reviews.
 
 Once the code for diskimage-builder has landed, another step can be
 to
 periodically generate images using a magnum job, and upload these
 images
 to OpenStack Infra mirrors. Currently the image is based on Fedora
 F23,
 docker-host tree. But different images can be generated if we need a
 better option.
 
 As soon as the images are available on internal infra mirrors, the
 tests
 can be changed, to consume these internals images. By this way the
 tests
 can be a bit faster (i know that the bottleneck is on the functional
 testing, but if we reduce the download time it can help), and tests
 can
 be more reliable, because we will be removing an external dependency.
 
 So i'd like to get more feedback on this topic, options and next
 steps
 to achieve the goals. Best
 
>>> 
>> 
>> -- 
>> Yolanda Robla Mota
>> Cloud Automation and Distribution Engineer
>> +34 605641639
>> yolanda.robla-m...@hpe.com
>> 
>> 
> 
> 

[openstack-dev] [release][stable][barbican] python-barbicanclient 4.0.1 release (mitaka)

2016-03-29 Thread no-reply
We are psyched to announce the release of:

python-barbicanclient 4.0.1: Client Library for OpenStack Barbican Key
Management API

This release is part of the mitaka stable release series.

With source available at:

https://git.openstack.org/cgit/openstack/python-barbicanclient/

Please report issues through launchpad:

https://bugs.launchpad.net/python-barbicanclient/

For more details, please see below.

Changes in python-barbicanclient 4.0.0..4.0.1
---------------------------------------------

085522c Updated from global requirements
b61869e Update .gitreview for stable/mitaka

Diffstat (except docs and test files)
-------------------------------------

.gitreview   | 2 +-
requirements.txt | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)


Requirements updates
--------------------

diff --git a/requirements.txt b/requirements.txt
index 2b75e2b..39eec21 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8 +8 @@ python-keystoneclient!=1.8.0,!=2.1.0,>=1.6.0 # Apache-2.0
-cliff!=1.16.0,>=1.15.0 # Apache-2.0
+cliff!=1.16.0,!=1.17.0,>=1.15.0 # Apache-2.0
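The updated cliff pin admits any version at or above 1.15.0 except the two excluded points. A naive illustration of what the specifier accepts (plain dotted-integer comparison for the sake of example -- not a reimplementation of pip's matching rules):

```python
def _v(s):
    # Parse "1.16.0" into a comparable tuple; no pre-release handling.
    return tuple(int(p) for p in s.split("."))

def cliff_allowed(version, spec="!=1.16.0,!=1.17.0,>=1.15.0"):
    """True if `version` satisfies every clause of the pin."""
    for clause in spec.split(","):
        if clause.startswith("!=") and _v(version) == _v(clause[2:]):
            return False
        if clause.startswith(">=") and _v(version) < _v(clause[2:]):
            return False
    return True
```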





Re: [openstack-dev] [magnum] Generate atomic images using diskimage-builder

2016-03-29 Thread Steven Dake (stdake)
Yolanda,

That is a fantastic objective.  Mathieu asked why we should build our own
images if the upstream images work and need no further customization.

Regards
-steve

On 3/29/16, 1:57 AM, "Yolanda Robla Mota" 
wrote:

>Hi
>The idea is to build our own images using diskimage-builder, rather than
>downloading the image from external sources. By that way, the image can
>live in our mirrors, and is built using the same pattern as other images
>used in OpenStack.
>It also opens the door to customize the images, using custom trees, if
>there is a need for it. Currently we rely on the official tree for Fedora 23
>Atomic (https://dl.fedoraproject.org/pub/fedora/linux/atomic/23/) as
>default.
>
>Best,
>Yolanda
>
>El 29/03/16 a las 10:17, Mathieu Velten escribió:
>> Hi,
>>
>> We are using the official Fedora Atomic 23 images here (on Mitaka M1
>> however) and it seems to work fine with at least Kubernetes and Docker
>> Swarm.
>> Any reason to continue building a specific Magnum image?
>>
>> Regards,
>>
>> Mathieu
>>
>> Le mercredi 23 mars 2016 à 12:09 +0100, Yolanda Robla Mota a écrit :
>>> Hi
>>> I wanted to start a discussion on how Fedora Atomic images are being
>>> built. Currently the process for generating the atomic images used
>>> on
>>> Magnum is described here:
>>> http://docs.openstack.org/developer/magnum/dev/build-atomic-image.htm
>>> l.
>>> The image needs to be built manually, uploaded to fedorapeople, and
>>> then
>>> consumed from there in the magnum tests.
>>> I have been working on a feature to allow diskimage-builder to
>>> generate
>>> these images. The code that makes it possible is here:
>>> https://review.openstack.org/287167
>>> This will allow that magnum images are generated on infra, using
>>> diskimage-builder element. This element also has the ability to
>>> consume
>>> any tree we need, so images can be customized on demand. I generated
>>> one
>>> image using this element, and uploaded to fedora people. The image
>>> has
>>> passed tests, and has been validated by several people.
>>>
>>> So i'm raising that topic to decide what should be the next steps.
>>> This
>>> change to generate fedora-atomic images has not already landed into
>>> diskimage-builder. But we have two options here:
>>> - add this element to generic diskimage-builder elements, as i'm
>>> doing now
>>> - generate this element internally on magnum. So we can have a
>>> directory
>>> in magnum project, called "elements", and have the fedora-atomic
>>> element
>>> here. This will give us more control on the element behaviour, and
>>> will
>>> allow to update the element without waiting for external reviews.
>>>
>>> Once the code for diskimage-builder has landed, another step can be
>>> to
>>> periodically generate images using a magnum job, and upload these
>>> images
>>> to OpenStack Infra mirrors. Currently the image is based on Fedora
>>> F23,
>>> docker-host tree. But different images can be generated if we need a
>>> better option.
>>>
>>> As soon as the images are available on internal infra mirrors, the
>>> tests
>>> can be changed, to consume these internals images. By this way the
>>> tests
>>> can be a bit faster (i know that the bottleneck is on the functional
>>> testing, but if we reduce the download time it can help), and tests
>>> can
>>> be more reliable, because we will be removing an external dependency.
>>>
>>> So i'd like to get more feedback on this topic, options and next
>>> steps
>>> to achieve the goals. Best
>>>
>> 
>
>-- 
>Yolanda Robla Mota
>Cloud Automation and Distribution Engineer
>+34 605641639
>yolanda.robla-m...@hpe.com
>
>




Re: [openstack-dev] [networking-ovn][Neutron] OVN support for routed networks(plugin interface for host mapping)

2016-03-29 Thread Russell Bryant
On Mon, Mar 21, 2016 at 12:26 PM, Russell Bryant  wrote:

> On Thu, Mar 17, 2016 at 1:45 AM, Hong Hui Xiao 
> wrote:
>
>> Hi Russell.
>>
>> Since the "ovn-bridge-mapping" will become accessible in OVN Southbound
>> DB, do you meant that neutron plugin can read those bridge mappings from
>> the OVN Southbound DB? I didn't think in that way because I thought
>> networking-ovn will only transact data with OVN Northbound DB.
>>
>
> ​You're right that networking-ovn currently only uses the OVN northbound
> DB.  This requirement crosses the line into physical space and needing to
> know about some physical environment details, so reading from the
> southbound DB for this info is acceptable.​
> ​
>
>> Also, do you have any link to describe the ongoing work in OVN to sync the
>> "ovn-bridge-mapping" from hypervisor?
>
>
> ​This patch introduces some new tables to the southbound DB:
>
> http://openvswitch.org/pipermail/dev/2016-March/068112.html
> ​
> I was thinking that we would be able to read the physical endpoints table
> to get what we need, but now I'm thinking it may not fit our use case.
>
> The alternative would be to just store the bridge mappings as an
> external_id on the Chassis record in the southbound database.  How quickly
> is this needed?
>

​This is now ready.

The Chassis table in OVN_Southbound now has

1) a hostname column

2) an external_ids column, including an ovn-bridge-mappings key.

Between those two, I think the Neutron plugin has all of the info it needs.

Let me know if you think of anything else that is missing.​
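A rough sketch of how the plugin side might consume those two columns. The rows below are plain dicts standing in for OVSDB Chassis rows; the `ovn-bridge-mappings` value format (comma-separated `physnet:bridge` pairs) follows the ovn-controller convention, and the helper names are made up for illustration:

```python
def bridge_mappings(chassis_row):
    """Parse the ovn-bridge-mappings external_id into a physnet->bridge dict."""
    raw = chassis_row.get("external_ids", {}).get("ovn-bridge-mappings", "")
    mappings = {}
    for pair in raw.split(","):
        if not pair:
            continue
        physnet, _, bridge = pair.partition(":")
        mappings[physnet] = bridge
    return mappings

def segment_hosts(chassis_rows, physnet):
    """Hosts that can reach `physnet` -- the host/segment mapping the
    routed-networks work needs."""
    return {row["hostname"] for row in chassis_rows
            if physnet in bridge_mappings(row)}
```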


-- 
Russell Bryant


[openstack-dev] [kolla] containers are a hot topic at the operator summit

2016-03-29 Thread Steven Dake (stdake)
Fellow Kolla developers,

If you plan to be in Austin on Monday of OpenStack Summit week, there is a 
fantastic opportunity for us to receive feedback from Operators that Carol 
Barrett pointed out to me.  The official Operator summit schedule is yet to be 
defined.

The Operator summit planning Etherpad is here:
https://etherpad.openstack.org/p/AUS-ops-meetup

Regards,
-steve




Re: [openstack-dev] [openstack-ansible] OpenVSwitch support

2016-03-29 Thread Curtis
On Tue, Mar 29, 2016 at 8:13 AM, Truman, Travis
 wrote:
> Curtis,
>
> Please take a look at https://review.openstack.org/#/c/298765/ as it's
> probably a good place to start digging into OVS.
>

Perfect, thanks, am looking into it now. :)

Thanks,
Curtis.

> Travis Truman
> IRC: automagically
>
> On 3/24/16, 12:37 PM, "Curtis"  wrote:
>
>>OK thanks everyone. I will put on my learning hat and start digging
>>into this.
>>
>>Thanks,
>>Curtis.
>>
>>On Thu, Mar 24, 2016 at 10:07 AM, Truman, Travis
>> wrote:
>>> Michael Gugino and I were going to work on OVS and I believe we both got
>>> sidetracked. I'll echo Kevin's offer to help out where possible, but at
>>> the moment, I've been tied up with other items.
>>>
>>> On 3/24/16, 11:55 AM, "Kevin Carter"  wrote:
>>>
I believe the OVS bits are being worked on, however I don't remember by
whom and I don't know the current state of the work. Personally, I'd
welcome the addition of other neutron plugin options and if you have
time
to work on any of those bits I'd be happy to help out where I can and
review the PRs.

--

Kevin Carter
IRC: cloudnull



From: Curtis 
Sent: Thursday, March 24, 2016 10:33 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [openstack-ansible] OpenVSwitch support

Hi,

I'm in the process of building OPNFV style labs [1]. I'd prefer to
manage these labs with openstack-ansible, but I will need things like
OpenVSwitch.

I know there was talk of supporting OVS in some fashion [2] but I'm
wondering what the current status or thinking is. If it's desirable by
the community to add OpenVSwitch support, and potentially other OPNFV
related features, I have time to contribute to work on them (as best I
can, at any rate).

Let me know what you think,
Curtis.

[1]: https://www.opnfv.org/
[2]: https://etherpad.openstack.org/p/osa-neutron-dvr





>>>
>>>
>>>
>>
>>
>>
>>--
>>Blog: serverascode.com
>>
>



-- 
Blog: serverascode.com



[openstack-dev] [release][stable][horizon] django_openstack_auth 2.2.1 release (mitaka)

2016-03-29 Thread no-reply
We are eager to announce the release of:

django_openstack_auth 2.2.1: Django authentication backend for use
with OpenStack Identity

This release is part of the mitaka stable release series.

With source available at:

http://git.openstack.org/cgit/openstack/django_openstack_auth/

With package available at:

https://pypi.python.org/pypi/django_openstack_auth

Please report issues through launchpad:

https://bugs.launchpad.net/django-openstack-auth

For more details, please see below.

Changes in django_openstack_auth 2.2.0..2.2.1
---------------------------------------------

9574d98 Imported Translations from Zanata
20f643e Imported Translations from Zanata
ef3a19f Update .gitreview for stable/mitaka

Diffstat (except docs and test files)
-------------------------------------

.gitreview|  1 +
openstack_auth/locale/de/LC_MESSAGES/django.po| 15 
openstack_auth/locale/fr/LC_MESSAGES/django.po| 24 +++--
openstack_auth/locale/it/LC_MESSAGES/django.po| 42 +++
openstack_auth/locale/ja/LC_MESSAGES/django.po| 17 -
openstack_auth/locale/ko_KR/LC_MESSAGES/django.po | 32 ++---
openstack_auth/locale/pl_PL/LC_MESSAGES/django.po | 15 
openstack_auth/locale/pt_BR/LC_MESSAGES/django.po | 16 +
openstack_auth/locale/tr_TR/LC_MESSAGES/django.po | 25 --
openstack_auth/locale/zh_CN/LC_MESSAGES/django.po | 32 ++---
openstack_auth/locale/zh_TW/LC_MESSAGES/django.po | 27 +++
11 files changed, 160 insertions(+), 86 deletions(-)






Re: [openstack-dev] [nova] Bug for image_cache_manager refactoring

2016-03-29 Thread Augustina Ragwitz
Thanks Markus (and Hans)!

---
Augustina Ragwitz
Sr Systems Software Engineer, HPE Cloud
Hewlett Packard Enterprise
---
irc: auggy


Re: [openstack-dev] [release][puppet] Puppet OpenStack 8.0.0 release (mitaka)

2016-03-29 Thread Paul Belanger
On Mon, Mar 28, 2016 at 06:53:25PM -0400, Emilien Macchi wrote:
> Puppet OpenStack team has the immense pleasure to announce the release
> of 24 Puppet modules.
> 
> Some highlights and major features:
> 
> Keystone:
> * Federation with Mellon support
> * Support for multiple LDAP backends
> * Usage of keystone-manage bootstrap
> 
> Neutron:
> * Support of LBaaSv2
> * More SDN integrations: OpenDayLight, PlugGrid, Midonet
> * Use modern parameters for Nova notifications
> 
> Nova:
> * Manage Nova API database
> * Nova cells support with host aggregates
> * Remove EC2 support
> * Provider to manage security groups and rules
> 
> Glance:
> * Multi-backend support
> * Glare API support
> 
> Cinder:
> * Block Device backend support
> * Allow to deploy Cinder API v3
> 
> General features:
> * IPv6 deployment support
> * CI continues to have more use-cases coverage
> 
> New stable modules:
> * puppet-mistral
> * puppet-zaqar
> 
> 
> Detailed Release notes:
> 
> http://docs.openstack.org/releasenotes/puppet-aodh
> http://docs.openstack.org/releasenotes/puppet-ceilometer
> http://docs.openstack.org/releasenotes/puppet-cinder
> http://docs.openstack.org/releasenotes/puppet-designate
> http://docs.openstack.org/releasenotes/puppet-glance
> http://docs.openstack.org/releasenotes/puppet-gnocchi
> http://docs.openstack.org/releasenotes/puppet-heat
> http://docs.openstack.org/releasenotes/puppet-horizon
> http://docs.openstack.org/releasenotes/puppet-ironic
> http://docs.openstack.org/releasenotes/puppet-keystone
> http://docs.openstack.org/releasenotes/puppet-manila
> http://docs.openstack.org/releasenotes/puppet-mistral
> http://docs.openstack.org/releasenotes/puppet-murano
> http://docs.openstack.org/releasenotes/puppet-neutron
> http://docs.openstack.org/releasenotes/puppet-nova
> http://docs.openstack.org/releasenotes/puppet-openstack_extras
> http://docs.openstack.org/releasenotes/puppet-openstacklib
> http://docs.openstack.org/releasenotes/puppet-openstack_spec_helper
> http://docs.openstack.org/releasenotes/puppet-sahara
> http://docs.openstack.org/releasenotes/puppet-swift
> http://docs.openstack.org/releasenotes/puppet-tempest
> http://docs.openstack.org/releasenotes/puppet-trove
> http://docs.openstack.org/releasenotes/puppet-vswitch
> http://docs.openstack.org/releasenotes/puppet-zaqar
> 
> 
> Big kudos to the team and also our friends from OpenStack Infra, RDO,
> UCA, and Tempest!
> -- 
> Emilien Macchi on behalf of Puppet OpenStack team
> 
Great work to all that made the release happen.



Re: [openstack-dev] [Fuel] [Shotgun] Decoupling Shotgun from Fuel

2016-03-29 Thread Roman Prykhodchenko
Please, propose your options here then: 
https://etherpad.openstack.org/p/shotgun-rename

> On 29 Mar 2016, at 18:15, Jay Pipes  wrote:
> 
> On 03/29/2016 08:41 AM, Roman Prykhodchenko wrote:
>> Should we propose options and then arrange a poll?
> 
> Yup, ++ :)
> 
>>> On 29 Mar 2016, at 16:40, Neil Jerram  wrote:
>>> 
>>> On 29/03/16 15:17, Jay Pipes wrote:
 Hi!
 
 Once Shotgun is pulled out of Fuel, may I suggest renaming it to
 something different? I know in the past that Anita and a few others
 thought the name was not something we should really be encouraging in
 the OpenStack ecosystem.
 
 Just something to consider since it's being decoupled anyway and may be
 a good opportunity to rename at that point...
 
 Best,
 -jay
>>> 
>>> +1
>>> 
>>> Neil
>>> 
>>> 
>> 
>> 
>> 
>> 
> 




[openstack-dev] [release][all][ptl] release process changes for official projects

2016-03-29 Thread Doug Hellmann
During the Mitaka cycle, the release team worked on automation for
tagging and documenting releases [1]. For the first phase, we focused
on official teams with the release:managed tag for their deliverables,
to keep the number of projects manageable as we built out the tools
and processes we needed. That created a bit of confusion as official
projects still had to submit openstack/releases changes in order
to appear on the releases.openstack.org website.

For the second phase during the Newton cycle, we are prepared to
expand the use of automation to all deliverables for all official
projects. As part of this shift, we will be updating the Gerrit
ACLs for projects to ensure that the release team can handle the
releases and branching.  These updates will remove tagging and
branching rights for anyone not on the central release management
team. Instead of tagging releases and then recording them in the
releases repository after the tag is applied, all official teams
can now use the releases repo to request new releases. We will be
reviewing version numbers in all tag requests to ensure SemVer is
followed, and we won't release libraries late in the week, but we
will process releases regularly so there is no reason this change
should have a significant impact on your ability to release frequently.
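The SemVer review mentioned above amounts to checking that a requested version is exactly one step past the previous tag. A deliberately simplified sketch (the real openstack/releases tooling also handles pre-release suffixes, new deliverables, and branch placement):

```python
def valid_next_version(old, new):
    """True if `new` is exactly one SemVer increment over `old` (X.Y.Z only)."""
    o = tuple(int(p) for p in old.split("."))
    n = tuple(int(p) for p in new.split("."))
    return n in (
        (o[0] + 1, 0, 0),        # major: incompatible change
        (o[0], o[1] + 1, 0),     # minor: backwards-compatible feature
        (o[0], o[1], o[2] + 1),  # patch: bug fixes only
    )
```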

If you're not familiar with the current release process, please
review the README.rst file in the openstack/releases repository.
Follow up here on the mailing list or in #openstack-release if you
have questions.

The project-config change to update ACLs and correct issues with
the build job definitions for official projects is
https://review.openstack.org/298866

Doug

[1] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/centralize-release-tagging.html



Re: [openstack-dev] [kolla][vote] Nominating Vikram Hosakot for Core Reviewer

2016-03-29 Thread Ryan Hallisey
+1

- Original Message -
From: "Paul Bourke" 
To: openstack-dev@lists.openstack.org
Sent: Tuesday, March 29, 2016 12:10:38 PM
Subject: Re: [openstack-dev] [kolla][vote] Nominating Vikram Hosakot for Core 
Reviewer

+1

On 29/03/16 17:07, Steven Dake (stdake) wrote:
> Hey folks,
>
> Consider this proposal a +1 in favor of Vikram joining the core reviewer
> team.  His reviews are outstanding.  If he doesn’t have anything useful
> to add to a review, he doesn't pile on the review with more -1s which
> are slightly disheartening to people.  Vikram has started a trend
> amongst the core reviewers of actually diagnosing gate failures in
> peoples patches as opposed to saying gate failed please fix.  He does
> this diagnosis in nearly every review I see, and if he is stumped  he
> says so.  His 30 days review stats place him in pole position and his 90
> day review stats place him in second position.  Of critical notice is
> that Vikram is ever-present on IRC which in my professional experience
> is the #1 indicator of how well a core reviewer will perform long term.
>Besides IRC and review requirements, we also have code requirements
> for core reviewers.  Vikram has implemented only 10 patches so far, but I
> feel he could amp this up if he had feature work to do.  At the moment
> we are in a holding pattern on master development because we need to fix
> Mitaka bugs.  That said Vikram is actively working on diagnosing root
> causes of people's bugs in the IRC channel pretty much 12-18 hours a day
> so we can ship Mitaka in a working bug-free state.
>
> Our core team consists of 11 people.  Vikram requires at minimum 6 +1
> votes, with no veto -2 votes within a 7-day voting window to end on
> April 7th.  If there is a veto vote prior to April 7th I will close
> voting.  If there is a unanimous vote prior to April 7th, I will make
> appropriate changes in gerrit.
>
> Regards
> -steve
>
> [1] http://stackalytics.com/report/contribution/kolla-group/30
> [2] http://stackalytics.com/report/contribution/kolla-group/90
>
>
>



Re: [openstack-dev] [release][puppet] Puppet OpenStack 8.0.0 release (mitaka)

2016-03-29 Thread Andrew Woodward
Absolutely fantastic. Great job Emilien and the Puppet-Openstack team.
We've made great progress cycle over cycle, and now have closed our release
before most of the projects we support.

On Mon, Mar 28, 2016 at 3:55 PM Emilien Macchi  wrote:

> Puppet OpenStack team has the immense pleasure to announce the release
> of 24 Puppet modules.
>
> Some highlights and major features:
>
> Keystone:
> * Federation with Mellon support
> * Support for multiple LDAP backends
> * Usage of keystone-manage bootstrap
>
> Neutron:
> * Support of LBaaSv2
> * More SDN integrations: OpenDayLight, PlugGrid, Midonet
> * Use modern parameters for Nova notifications
>
> Nova:
> * Manage Nova API database
> * Nova cells support with host aggregates
> * Remove EC2 support
> * Provider to manage security groups and rules
>
> Glance:
> * Multi-backend support
> * Glare API support
>
> Cinder:
> * Block Device backend support
> * Allow to deploy Cinder API v3
>
> General features:
> * IPv6 deployment support
> * CI continues to have more use-cases coverage
>
> New stable modules:
> * puppet-mistral
> * puppet-zaqar
>
>
> Detailed Release notes:
>
> http://docs.openstack.org/releasenotes/puppet-aodh
> http://docs.openstack.org/releasenotes/puppet-ceilometer
> http://docs.openstack.org/releasenotes/puppet-cinder
> http://docs.openstack.org/releasenotes/puppet-designate
> http://docs.openstack.org/releasenotes/puppet-glance
> http://docs.openstack.org/releasenotes/puppet-gnocchi
> http://docs.openstack.org/releasenotes/puppet-heat
> http://docs.openstack.org/releasenotes/puppet-horizon
> http://docs.openstack.org/releasenotes/puppet-ironic
> http://docs.openstack.org/releasenotes/puppet-keystone
> http://docs.openstack.org/releasenotes/puppet-manila
> http://docs.openstack.org/releasenotes/puppet-mistral
> http://docs.openstack.org/releasenotes/puppet-murano
> http://docs.openstack.org/releasenotes/puppet-neutron
> http://docs.openstack.org/releasenotes/puppet-nova
> http://docs.openstack.org/releasenotes/puppet-openstack_extras
> http://docs.openstack.org/releasenotes/puppet-openstacklib
> http://docs.openstack.org/releasenotes/puppet-openstack_spec_helper
> http://docs.openstack.org/releasenotes/puppet-sahara
> http://docs.openstack.org/releasenotes/puppet-swift
> http://docs.openstack.org/releasenotes/puppet-tempest
> http://docs.openstack.org/releasenotes/puppet-trove
> http://docs.openstack.org/releasenotes/puppet-vswitch
> http://docs.openstack.org/releasenotes/puppet-zaqar
>
>
> Big kudos to the team and also our friends from OpenStack Infra, RDO,
> UCA, and Tempest!
> --
> Emilien Macchi on behalf of Puppet OpenStack team
>
--
Andrew Woodward
Mirantis
Fuel Community Ambassador
Ceph Community


Re: [openstack-dev] [TripleO] gnocchi backport exception for stable/mitaka

2016-03-29 Thread Steven Hardy
On Tue, Mar 29, 2016 at 11:05:00AM -0400, Pradeep Kilambi wrote:
> Hi Everyone:
> 
> As Mitaka branch was cut yesterday, I would like to request a backport
> exception to get gnocchi patches[1][2][3] into stable/mitaka. It
> should be a low-risk feature, as we decided not to set ceilometer to use
> gnocchi by default. So ceilometer would work as is and gnocchi is
> deployed alongside as a new service but not used out of the box. So
> this should make upgrades pretty much a non-issue, as things should
> work exactly like before. If someone wants to use the gnocchi backend, they
> can add an env template file to override the backend. In Newton, we'll
> flip the switch to make gnocchi the default backend.
> 
> If we could please vote to agree to get this in as an exception, it would
> be super useful.

+1, provided we're able to confirm this plays nicely wrt upgrades I think
we should allow this.

We're taking a much stricter stance re backports for stable/mitaka, but I
think this is justified for the following reasons:

- The patches have been posted in plenty of time, but have suffered from a
  lack of reviews and a lot of issues getting CI passing; were it not for
  those issues, this would really have landed by now.

- The Ceilometer community have been moving towards replacing the database
  dispatcher with gnocchi since kilo, and it should provide us with a
  (better-performing) alternative to the current setup, AIUI.

Thus I think this is a case where an exception is probably justified, but
to be clear I'm generally opposed to granting exceptions for mitaka beyond
the few things we may discover in the next few days prior to the
coordinated release (in Newton I hope we can formalize this to be more
aligned with the normal feature-freeze and RC process).

Steve

> 
> Thanks,
> ~ Prad
> 
> [1] https://review.openstack.org/#/c/252032/
> [2] https://review.openstack.org/#/c/290710/
> [3] https://review.openstack.org/#/c/238013/



Re: [openstack-dev] [Fuel] [Shotgun] Decoupling Shotgun from Fuel

2016-03-29 Thread Jay Pipes

On 03/29/2016 08:41 AM, Roman Prykhodchenko wrote:

Should we propose options and then arrange a poll?


Yup, ++ :)


On 29 Mar 2016 at 16:40, Neil Jerram wrote:

On 29/03/16 15:17, Jay Pipes wrote:

Hi!

Once Shotgun is pulled out of Fuel, may I suggest renaming it to
something different? I know in the past that Anita and a few others
thought the name was not something we should really be encouraging in
the OpenStack ecosystem.

Just something to consider since it's being decoupled anyway and may be
a good opportunity to rename at that point...

Best,
-jay


+1

Neil




[openstack-dev] [Fuel] Component based testing pipeline

2016-03-29 Thread Vladimir Kozhukalov
Dear colleagues,

I have prepared a document [1] that describes some aspects
of a possible testing pipeline in a component-based
development environment. The main idea of the proposed
pipeline is to decouple two steps:

1) Merging the code to git
2) Adopting the changes into Fuel

By that I mean Fuel should be an aggregator of
RPM/DEB packages, not of source code. We can merge
code that successfully passes unit and functional tests.
Then we build a package, but we don't put this package into
the current master package repository. Instead, we use those packages
to run integration tests, and if the integration tests succeed,
we take this package (without rebuilding it) and put it into the
current master package repo.

Feel free to leave your comments.


[1]
https://docs.google.com/document/d/1Yp63s3ctO0FNRhqrWNpLmXxMOaJT2tb8H0W80Jr_kjM

Vladimir Kozhukalov


Re: [openstack-dev] [kolla][vote] Nominating Vikram Hosakot for Core Reviewer

2016-03-29 Thread Paul Bourke

+1

On 29/03/16 17:07, Steven Dake (stdake) wrote:

Hey folks,

Consider this proposal a +1 in favor of Vikram joining the core reviewer
team.  His reviews are outstanding.  If he doesn’t have anything useful
to add to a review, he doesn't pile on the review with more –1s which
are slightly disheartening to people.  Vikram has started a trend
amongst the core reviewers of actually diagnosing gate failures in
people's patches as opposed to saying gate failed please fix.  He does
this diagnosis in nearly every review I see, and if he is stumped he
says so.  His 30-day review stats place him in pole position and his
90-day review stats place him in second position.  Of particular note is
that Vikram is ever-present on IRC, which in my professional experience
is the #1 indicator of how well a core reviewer will perform long term.
   Besides IRC and review requirements, we also have code requirements
for core reviewers.  Vikram has implemented only 10 patches so far, but I
feel he could amp this up if he had feature work to do.  At the moment
we are in a holding pattern on master development because we need to fix
Mitaka bugs.  That said Vikram is actively working on diagnosing root
causes of people's bugs in the IRC channel pretty much 12-18 hours a day
so we can ship Mitaka in a working bug-free state.

Our core team consists of 11 people.  Vikram requires at minimum 6 +1
votes, with no veto –2 votes within a 7 day voting window to end on
April 7th.  If there is a veto vote prior to April 7th I will close
voting.  If there is a unanimous vote prior to April 7th, I will make
appropriate changes in gerrit.

Regards
-steve

[1] http://stackalytics.com/report/contribution/kolla-group/30
[2] http://stackalytics.com/report/contribution/kolla-group/90




[openstack-dev] [kolla][vote] Nominating Vikram Hosakot for Core Reviewer

2016-03-29 Thread Steven Dake (stdake)
Hey folks,

Consider this proposal a +1 in favor of Vikram joining the core reviewer team.  
His reviews are outstanding.  If he doesn't have anything useful to add to a 
review, he doesn't pile on the review with more -1s which are slightly 
disheartening to people.  Vikram has started a trend amongst the core reviewers 
of actually diagnosing gate failures in people's patches as opposed to saying
gate failed please fix.  He does this diagnosis in nearly every review I see, 
and if he is stumped he says so.  His 30-day review stats place him in pole
position and his 90-day review stats place him in second position.  Of particular
note is that Vikram is ever-present on IRC, which in my professional
experience is the #1 indicator of how well a core reviewer will perform long 
term.  Besides IRC and review requirements, we also have code requirements for
core reviewers.  Vikram has implemented only 10 patches so far, but I feel he
could amp this up if he had feature work to do.  At the moment we are in a 
holding pattern on master development because we need to fix Mitaka bugs.  That 
said Vikram is actively working on diagnosing root causes of people's bugs in 
the IRC channel pretty much 12-18 hours a day so we can ship Mitaka in a 
working bug-free state.

Our core team consists of 11 people.  Vikram requires at minimum 6 +1 votes, 
with no veto -2 votes within a 7 day voting window to end on April 7th.  If 
there is a veto vote prior to April 7th I will close voting.  If there is a 
unanimous vote prior to April 7th, I will make appropriate changes in gerrit.

Regards
-steve

[1] http://stackalytics.com/report/contribution/kolla-group/30
[2] http://stackalytics.com/report/contribution/kolla-group/90


Re: [openstack-dev] [neutron][dvr]Why keep SNAT centralized and DNAT distributed?

2016-03-29 Thread Carl Baldwin
On Mon, Mar 28, 2016 at 7:25 PM, Wang, Yalei  wrote:
> Someone is working on full distributed SNAT, like this:
>
> https://www.openstack.org/summit/tokyo-2015/videos/presentation/network-node-is-not-needed-anymore-completed-distributed-virtual-router
>
> From: Zhi Chang [mailto:chang...@unitedstack.com]
> Sent: Saturday, March 26, 2016 1:53 PM
> To: openstack-dev
> Subject: [openstack-dev] [neutron][dvr]Why keep SNAT centralized and DNAT
> distributed?
>
> hi all.
>
> I have some questions about NAT in DVR.
>
> > In Neutron, we provide two NAT types. One is SNAT: we can associate a
> > floating IP with a router so that all VMs attached to this router can
> > reach the external network. The other NAT type is DNAT: we can reach a
> > VM which has an associated floating IP from the external network.
>
>  Question A, Why keep SNAT centralized? We put the SNAT namespace in
> > the compute node which is running the DVR l3 agent, don't we?

The reason it was kept centralized was to maintain the current
behavior of a Neutron router where SNAT is through a single address
associated with the router.  Many other models were discussed on an
etherpad [1] long ago but I couldn't really get good consensus or
traction on any of the alternatives.

Folks were generally polarized around the idea of centralizing SNAT
around a compute host rather than a router.  Some people thought that
this was the obvious solution.  Yet others saw this as a non-starter
because it would mix traffic from different tenants on the same SNAT
address.

>  Question B, Why keep DNAT distributed? I think we can keep snat
> namespace and fip namespace in one node. Why not keep DNAT and SNAT
> together?

The whole point of distributing north/south traffic was to distribute
DNAT for floating IPs.  Maybe I don't understand your question.

Carl

[1] https://etherpad.openstack.org/p/decentralized-snat



Re: [openstack-dev] [puppet] weekly meeting #76

2016-03-29 Thread Emilien Macchi
We did our meeting, and you can read the notes:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-03-29-15.00.html

Thanks!

On Mon, Mar 28, 2016 at 4:21 PM, Emilien Macchi <emil...@redhat.com> wrote:
> Hi,
>
> We'll have our weekly meeting tomorrow at 3pm UTC on
> #openstack-meeting4.
>
> https://wiki.openstack.org/wiki/Meetings/PuppetOpenStack
>
> As usual, feel free to bring topics in this etherpad:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160329
>
> We'll also have open discussion for bugs & reviews, so anyone is welcome
> to join.
> --
> Emilien Macchi



-- 
Emilien Macchi



[openstack-dev] [Heat] Mitaka RC2 available

2016-03-29 Thread Thierry Carrez
Due to release-critical issues spotted in Heat during RC1 testing, a new 
release candidate was created for Mitaka. You can find the RC2 source 
code tarball at:


https://tarballs.openstack.org/heat/heat-6.0.0.0rc2.tar.gz

Unless new release-critical issues are found that warrant a last-minute 
release candidate respin, this tarball will be formally released as the 
final "Mitaka" version on April 7th. You are therefore strongly 
encouraged to test and validate this tarball!


Alternatively, you can directly test the mitaka release branches at:

http://git.openstack.org/cgit/openstack/heat/log/?h=stable/mitaka

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/heat/+filebug

and tag it *mitaka-rc-potential* to bring it to the Heat release crew's 
attention.



--
Thierry Carrez (ttx)



Re: [openstack-dev] [Ironic]Nova resource tracker is not able to identify resources of BM instance when fake_pxe driver used

2016-03-29 Thread Senthilprabu Shanmugavel
Thanks Jim. The workaround did the trick. Very well explained.

Should I raise a bug for the fake driver's power state?
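For reference, here is a tiny stand-alone model of the behaviour Jim describes in the quoted mail: Nova skips nodes whose power state is None, and the suggested one-line fix makes the fake driver report POWER_ON instead of None. These are minimal illustrative stubs, not the actual ironic or nova classes.

```python
# Minimal stubs modelling the discussion below: Nova's resource tracker
# ignores Ironic nodes whose power state is None; the suggested fix makes
# the fake power interface default to POWER_ON. Illustrative only.
POWER_ON, POWER_OFF = 'power on', 'power off'

class Node:
    def __init__(self, uuid, power_state=None):
        self.uuid = uuid
        self.power_state = power_state   # None until someone sets it

class FakePower:
    """Stub for the fake power interface, with the suggested fix applied."""
    def get_power_state(self, node):
        # Suggested fix: never report None, fall back to POWER_ON.
        return node.power_state or POWER_ON

def schedulable(nodes, driver):
    # Mirrors the Nova-side check: nodes with no power state are skipped.
    return [n.uuid for n in nodes if driver.get_power_state(n) is not None]
```

With the fallback in place, a freshly enrolled node (stored state None) is reported as powered on and is no longer filtered out by the Nova check.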

On Tue, Mar 29, 2016 at 5:30 PM, Jim Rollenhagen wrote:

> On Tue, Mar 29, 2016 at 04:59:09PM +0300, Senthilprabu Shanmugavel wrote:
> > Hello,
> >
> > I am using Ironic for deploying baremetal in my OpenStack environment,
> > using the Liberty version on Ubuntu 14.04. I followed the Ironic
> > documentation to deploy x86 servers using pxe_ipmitool. Now I have a
> > working Ironic setup for PXE boot. I also want to add my test board,
> > which runs an ARM 64-bit CPU, to the Ironic deployment. I would like to
> > try the fake_pxe driver because my board doesn't support IPMI or
> > anything else for out-of-band communication, so doing the deployment
> > without power management with fake_pxe is the obvious choice. But I
> > have a problem updating the Ironic node, which I will explain below.
> >
> > I created an ironic node using the fake_pxe driver and added all the
> > necessary parameters using the node-update command. The node-show
> > command output is given below for reference.
> >
> > 
> >
> > Because of this, during nova boot the scheduler failed to boot the BM
> > instance.
> >
> > Can anyone help me with what's wrong in my configuration?
>
> I guess probably someone has never used the fake power driver with
> devstack. :)
>
> If a node's power state is None, it will be ignored by Nova:
>
> https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L171
>
> And the fake power driver doesn't set the power state to on/off itself:
>
> https://github.com/openstack/ironic/blob/master/ironic/drivers/modules/fake.py#L43
>
> We should probably fix this by changing it to:
> return task.node.power_state or states.POWER_ON
>
> In the meantime, an easy workaround would be:
> ironic node-set-power-state  on
> ironic node-set-power-state  off
>
> Which would have the driver 'remember' the power state is currently off,
> allowing Nova to pick up the resources.
>
> Hope that helps :)
>
> // jim
>
> >
> >
> >
> > Thanks in advance,
> > Senthil
>



-- 
Best Regards,
Senthilprabu Shanmugavel


Re: [openstack-dev] [Fuel] [Shotgun] Decoupling Shotgun from Fuel

2016-03-29 Thread Roman Prykhodchenko
Should we propose options and then arrange a poll?

> On 29 Mar 2016 at 16:40, Neil Jerram wrote:
> 
> On 29/03/16 15:17, Jay Pipes wrote:
>> Hi!
>> 
>> Once Shotgun is pulled out of Fuel, may I suggest renaming it to
>> something different? I know in the past that Anita and a few others
>> thought the name was not something we should really be encouraging in
>> the OpenStack ecosystem.
>> 
>> Just something to consider since it's being decoupled anyway and may be
>> a good opportunity to rename at that point...
>> 
>> Best,
>> -jay
> 
> +1
> 
>   Neil
> 
> 





Re: [openstack-dev] [Cinder] Mitaka RC2 available

2016-03-29 Thread Gyorgy Szombathelyi
Great, thanks!

From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
Sent: 29 March 2016 4:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: openst...@lists.openstack.org
Subject: Re: [openstack-dev] [Cinder] Mitaka RC2 available

That patch is now approved and in the process of merging; once it is merged, 
you can propose a backport - if it doesn't make the release, it will at least 
be on the stable tree.

On 29 March 2016 at 16:18, Gyorgy Szombathelyi wrote:
Hi Thierry,

If a new tarball will be necessary, is it possible to get this included, too?
https://review.openstack.org/#/c/272437/

It seems this is not critical enough, but it is a straightforward fix.

Br,
Gyorgy

-----Original Message-----
From: Thierry Carrez [mailto:thie...@openstack.org]
Sent: 29 March 2016 3:03 PM
To: OpenStack Development Mailing List; openst...@lists.openstack.org
Subject: [openstack-dev] [Cinder] Mitaka RC2 available

Due to release-critical issues spotted in Cinder during RC1 testing, a new 
release candidate was created for Mitaka. You can find the RC2 source code 
tarballs at:

https://tarballs.openstack.org/cinder/cinder-8.0.0.0rc2.tar.gz

Unless new release-critical issues are found that warrant a last-minute release 
candidate respin, this tarball will be formally released as the final "Mitaka" 
version on April 7th. You are therefore strongly encouraged to test and 
validate this tarball!

Alternatively, you can directly test the mitaka release branches at:

http://git.openstack.org/cgit/openstack/cinder/log/?h=stable/mitaka

If you find an issue that could be considered release-critical, please file it 
at:

https://bugs.launchpad.net/cinder/+filebug

and tag it *mitaka-rc-potential* to bring it to the Cinder release crew's 
attention.

--
Thierry Carrez




--
Duncan Thomas


[openstack-dev] [nova][neutron] What to do about booting into port_security_enabled=False networks?

2016-03-29 Thread Matt Riedemann

Nova has had some long-standing bugs that Sahid is trying to fix here [1].

You can create a network in neutron with port_security_enabled=False. 
However, the bug is that since Nova adds the 'default' security group to 
the request (if none are specified) when allocating networks, neutron 
raises an error when you try to create a port on that network with a 
'default' security group.


Sahid's patch simply checks if the network that we're going to use has 
port_security_enabled and if it does not, no security groups are applied 
when creating the port (regardless of what's requested for security 
groups, which in nova is always at least 'default').
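In outline, the check described above might look like the following. To be clear, this is a hypothetical sketch, not Sahid's actual patch: the dict shapes and the helper name are assumptions made for the example.

```python
# Hypothetical outline of the check described above (not the real patch):
# if the target network has port security disabled, omit security groups
# from the port-create request entirely, since neutron would reject them.
def port_create_args(network, requested_groups):
    args = {'network_id': network['id']}
    if network.get('port_security_enabled', True):
        # Port security on: pass the requested groups through
        # (in nova this list is always at least ['default']).
        args['security_groups'] = requested_groups
    # Port security off: no 'security_groups' key at all.
    return args
```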


There has been a similar attempt at fixing this [2]. That change only
added the 'default' security group when allocating networks with
nova-network. It omitted the default security group if using neutron since:


a) If the network does not have port security enabled, we'll blow up 
trying to add a port on it with the default security group.


b) If the network does have port security enabled, neutron will 
automatically apply a 'default' security group to the port, nova doesn't 
need to specify one.


The problem both Feodor's and Sahid's patches ran into was that the nova 
REST API adds a 'default' security group to the server create response 
when using neutron if specific security groups weren't on the server 
create request [3].


This is clearly wrong in the case of 
network.port_security_enabled=False. When listing security groups for an 
instance, they are correctly not listed, but the server create response 
is still wrong.


So the question is, how to resolve this?  A few options come to mind:

a) Don't return any security groups in the server create response when 
using neutron as the backend. Given that by this point we've cast off to the
compute which actually does the work of network allocation, we can't 
call back into the network API to see what security groups are being 
used. Since we can't be sure, don't provide what could be false info.


b) Add a new method to the network API which takes the requested 
networks from the server create request and returns a best guess as to
whether security groups are going to be applied or not. In the case of
network.port_security_enabled=False, we know a security group won't be 
applied so the method returns False. If there is port_security_enabled, 
we return whatever security group was requested (or 'default'). If there 
are multiple networks on the request, we return the security groups that 
will be applied to any networks that have port security enabled.


Option (b) is obviously more intensive and requires hitting the neutron 
API from nova API before we respond, which we'd like to avoid if 
possible. I'm also not sure what it means for the 
auto-allocated-topology (get-me-a-network) case. With a standard 
devstack setup, a network created via the auto-allocated-topology API 
has port_security_enabled=True, but I also have the 'Port Security' 
extension enabled and the default public external network has 
port_security_enabled=True. What if either of those are False (or the 
port security extension is disabled)? Does the auto-allocated network 
inherit port_security_enabled=False? We could duplicate that logic in 
Nova, but it's more proxy work that we would like to avoid.
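A rough sketch of what the option (b) helper could compute, so the create response reports only groups that will plausibly be applied. This is purely illustrative — the network dict shape, the default of treating a missing flag as port security enabled, and the function name are all assumptions, not Nova or Neutron code.

```python
# Illustrative best-guess helper for option (b): given the requested
# networks, predict which security groups will actually be applied.
# Data shapes and defaults here are assumptions for the example.
def expected_security_groups(requested_networks, requested_groups=None):
    groups = requested_groups or ['default']   # nova's implicit default
    applied = set()
    for net in requested_networks:
        # Networks with port security disabled get no groups at all.
        if net.get('port_security_enabled', True):
            applied.update(groups)
    return sorted(applied)
```

With only port-security-disabled networks requested, the helper returns an empty list, so the server create response would no longer advertise a 'default' group that was never applied.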


[1] https://review.openstack.org/#/c/284095/
[2] https://review.openstack.org/#/c/173204/
[3] 
https://github.com/openstack/nova/blob/f8a01ccdffc13403df77148867ef3821100b5edb/nova/api/openstack/compute/security_groups.py#L472-L475


--

Thanks,

Matt Riedemann



