Re: [openstack-dev] [nova] Should be instance_dir in all nova compute node same ?

2016-05-03 Thread Michael Still
I can't think of a reason. In fact it's a bit warty, because we've changed
the way we name the instance directories at least once. It's just how this
code was written back in the day.

Cleaning this up would be a fair bit of work though. Is it really worth the
effort just so people can have different directory names for some undefined
reason?

Michael



On Wed, May 4, 2016 at 9:03 AM, Ghe Rivero  wrote:

> Is there any specific reason why this is required that way? Or is it just a
> "feature"?
>
> Ghe Rivero
>
>
> On 03/05/16 11:42, Matthew Booth wrote:
>
> On Fri, Apr 29, 2016 at 2:47 AM, Eli Qiao  wrote:
>
>> hi team,
>>
>> Is there any requirement that all compute nodes' instance_dir be the same?
>>
>
> Yes. This is assumed in many places, certainly in cold migration/resize.
>
> Matt
> --
> Matthew Booth
> Red Hat Engineering, Virtualisation Team
>
> Phone: +442070094448 (UK)
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-05-03 Thread Alex Xu
Hi,

We have weekly Nova API meeting today. The meeting is being held Wednesday
UTC1300 and irc channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]weekly meeting of May. 4

2016-05-03 Thread joehuang
Hi,

In the last virtual design summit meeting, some features were identified as
blocking tempest. So let's discuss these topics in the weekly meeting:

Virtual design summit: 
https://etherpad.openstack.org/p/TricircleNeutonDesignSummit

IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting on every 
Wednesday starting from UTC 13:00.

Agenda:
# feature blocking tempest: server group
# L2GW integration for cross pod L2 networking
# dynamic pod binding

If you have other topics to be discussed in the weekly meeting, please reply
to this mail.



Best Regards
Chaoyi Huang ( Joe Huang )

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] lock the distro version in the stable branch

2016-05-03 Thread Jeffrey Zhang
On Wed, May 4, 2016 at 1:30 AM, Hui Kang  wrote:

> This commit fixes the tag:
>
> https://github.com/openstack/kolla/commit/e2fa75fce6f90de8b2766070bb65d0b80bcad8c8
>
that fix is just a workaround. The end-user knows nothing about this and
will be confused by the result.


>
>
> But I think fixing the tag in the Dockerfile of the base container image is better
>


Yes, that will be better.
I intend to change the default config option and check the base and
base_tag in the Dockerfile of base.
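The point being made — pin base and base_tag rather than letting the tag float — can be sketched as a quick check. This is an editor's illustration with made-up file and tag values, not Kolla's actual Dockerfile or defaults:

```shell
# Editor's sketch (hypothetical throwaway Dockerfile and tag, not Kolla's
# real templates): write a base-image Dockerfile with a pinned tag, then
# check that the FROM line is not floating on "latest".
tmpdir=$(mktemp -d)
cat > "$tmpdir/Dockerfile" <<'EOF'
FROM centos:7.2.1511
RUN yum -y update && yum clean all
EOF

from_line=$(grep '^FROM' "$tmpdir/Dockerfile")
tag=${from_line##*:}                 # text after the last ':'
echo "base image tag: $tag"

if [ "$tag" = "latest" ] || [ "$tag" = "$from_line" ]; then
    # no tag at all, or the floating "latest" tag: rebuilds are not reproducible
    echo "tag not pinned"
else
    echo "tag pinned: rebuilds will pull the same base image"
fi
rm -rf "$tmpdir"
```

A floating tag means a stable-branch rebuild can silently pick up a newer distro release, which is exactly the confusion the end-user would hit.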



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-k8s] Core team

2016-05-03 Thread Jeffrey Zhang
Agreed.

At the beginning, the kolla-k8s core team should accept new core reviewers
at a very high frequency, for example 1-2 members every week, which will be
helpful for the growth of the core team.

On Wed, May 4, 2016 at 12:48 AM, Michał Jastrzębski 
wrote:

> Hello,
>
> Since it seems that we have voted for separation of kolla-k8s repos
> (yay!) I would like to table another discussion (but let's wait till
> its official).
>
> Core Team.
>
> We need to build up a new core team that will guard the gates on our
> brand new repo (when it arrives). One of the ideas Steven pointed out is
> to add people from the etherpad to the core team, but I'd like to throw a
> different idea into the mix, to keep things interesting.
>
> The idea is: let's start with the current kolla core team and for the time
> being add new cores to kolla-k8s by invitation from an existing core
> member. For example, I'm a kolla core working with k8s, and if I see someone
> doing a great job and investing time in it, I would propose him
> for core, and instead of the normal voting, he would get his +2 powers
> immediately. This would allow a quick core team buildout and would not start
> with a bunch of people who don't necessarily want to contribute or even
> know each other.
>
> Cheers,
> Michal
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] Easy Way to Test Tricircle North-South L3 Networking

2016-05-03 Thread joehuang
Hi, Shinobu,

Correct, this is not the normal deployment scenario, nor the usual way of
setting up a testbed.

Cheers

BR
Chaoyi Huang ( joehuang )


From: Shinobu Kinjo [shinobu...@gmail.com]
Sent: 04 May 2016 9:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle] Easy Way to Test Tricircle North-South 
L3 Networking

Hi Chaoyi,

I didn't consider Ronghui's environment which I have no idea about.

> That's why Zhiyuan proposed hacking way to do it.

Considering such a limited situation, I understand this solution is
for a particular situation which is not usual for a cascaded stack
environment.
Is that the same as what you are implying in your message?

I would like to avoid any misunderstanding between members as much as possible.

Cheers,
Shinobu

On Wed, May 4, 2016 at 10:25 AM, joehuang  wrote:
> Hi, Shinobu,
>
> I think Zhiyuan's suggestion is mainly for Ronghui's environment: his
> environment has very limited network interfaces, so it's difficult to
> experiment with the N-S feature. The recommendation would be to use VMs for
> setting up a Tricircle test bed with two bottom pods, so it's much easier to
> manage the networking plane for different purposes. But Ronghui's machine also
> has very limited vCPU and memory, so booting several VMs to establish the
> tricircle and two bottom pods test bed is also not possible. That's why
> Zhiyuan proposed a hacking way to do it.
>
> Best Regards
> Chaoyi Huang ( joehuang )
>
> 
> From: Shinobu Kinjo [shinobu...@gmail.com]
> Sent: 04 May 2016 6:58
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [tricircle] Easy Way to Test Tricircle 
> North-South L3 Networking
>
> Vega,
>
> On Tue, May 3, 2016 at 5:49 PM, Vega Cai  wrote:
>> Hi all,
>>
>> Just would like to share a way to test Tricircle north-south L3 networking
>> without requiring the third interface.
>>
>> In the Tricircle readme, it is said that you need to add an interface in
>> your host to br-ext bridge. One interface to access the host, one interface
>> for east-west networking and one interface for north-south networking, so
>> all together three interfaces are required.
>>
>> What if your host only has two interfaces? Here is another deployment
>> choice.
>>
>> First, change your external network type to flat type. If you are using the
>> DevStack script provided by Tricircle, do the following changes in node2
>> local.conf then run DevStack in node2.
>>
>> (1) change Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS
>> from (network_vlan_ranges=bridge:2001:3000,extern:3001:4000)
>> to (network_vlan_ranges=bridge:2001:3000)
>> (since we are going to use a flat external network, there is no need to
>> configure a VLAN range for extern)
>> (2) add PHYSICAL_NETWORK=extern
>> (3) keep OVS_BRIDGE_MAPPINGS=bridge:br-bridge,extern:br-ext
>
> Good point.
>
>>
>> Second, specify flat type when creating external network.
>>
>> curl -X POST http://127.0.0.1:9696/v2.0/networks
>>-H "Content-Type: application/json" \
>>-H "X-Auth-Token: $token" \
>>-d '{"network": {"name": "ext-net", "admin_state_up": true,
>> "router:external": true, "provider:network_type": "flat",
>> "provider:physical_network": "extern", "availability_zone_hints":
>> ["Pod2"]}}'
>
> Understood.
>
>>
>> Third, configure IP address of br-ext.
>>
>> sudo ifconfig br-ext 163.3.124.1 netmask 255.255.255.0
>>
>> Here 163.3.124.1 is your external network gateway IP, set net mask
>> according to your CIDR.
>>
>> After the above steps, you can access your VM via floating IP in node2. Also
>> your VM can ping the external gateway.
>>
>> Would you like your VM to access the Internet? (Of course, node2 should be
>> able to access the Internet.) Two more steps to follow:
>> (1) Enable packet forwarding in node2
>>
>> sudo bash
>> echo 1 >/proc/sys/net/ipv4/ip_forward
>>
>> (2) Configure SNAT in node2
>>
>> sudo iptables -t nat -I POSTROUTING -s 163.3.124.0/24 -o eth1 -j SNAT
>> --to-source 10.250.201.21
>>
>> 163.3.124.0/24 is your external network CIDR, eth1 is the interface
>> associated with your default route in node2 and 10.250.201.21 is the IP of
>> eth1.
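[Editor's note: the quoted walkthrough above boils down to a handful of commands. The sketch below consolidates them; the Neutron endpoint, gateway, CIDR, interface and SNAT address are the example values from the mail, not a real deployment, and the commands are echoed rather than executed so the sketch is safe to read through.]

```shell
# Example values taken from the quoted mail (illustrative only)
NEUTRON_URL="http://127.0.0.1:9696"   # Neutron endpoint
EXT_GW="163.3.124.1"                  # external network gateway IP
EXT_CIDR="163.3.124.0/24"             # external network CIDR
OUT_IF="eth1"                         # interface holding the default route
OUT_IP="10.250.201.21"                # IP of $OUT_IF

# Step 2: create a flat external network bound to physical network "extern"
BODY='{"network": {"name": "ext-net", "admin_state_up": true,
 "router:external": true, "provider:network_type": "flat",
 "provider:physical_network": "extern",
 "availability_zone_hints": ["Pod2"]}}'
echo "curl -X POST $NEUTRON_URL/v2.0/networks -H 'X-Auth-Token: \$token' -d '$BODY'"

# Step 3: put the external gateway address on br-ext
echo "sudo ifconfig br-ext $EXT_GW netmask 255.255.255.0"

# Optional: let VMs reach the Internet by forwarding + SNAT on node2
echo "echo 1 > /proc/sys/net/ipv4/ip_forward"
echo "sudo iptables -t nat -I POSTROUTING -s $EXT_CIDR -o $OUT_IF -j SNAT --to-source $OUT_IP"
```

Run the echoed commands by hand after substituting your own addresses; the DevStack local.conf changes (step 1) still have to be made before stacking node2.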
>
> I would like to avoid this kind of hackery as much as possible.
> I would like to see your further recommendations so that we can easily and
> quickly build a cascaded stack system, including the top.
>
>>
>> Hope this information helps.
>>
>> BR
>> Zhiyuan
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Email:
> shin...@linux.com
> GitHub:
> shinobu-x
> Blog:
> Life with Distributed 

Re: [openstack-dev] [tricircle] Easy Way to Test Tricircle North-South L3 Networking

2016-05-03 Thread Shinobu Kinjo
On Wed, May 4, 2016 at 10:38 AM, Shinobu Kinjo  wrote:
> Hi Chaoyi,
>
> I didn't consider Ronghui's environment which I have no idea about.

Anyhow this is my bad -;
Sorry for that!

Cheers,
S

>
>> That's why Zhiyuan proposed hacking way to do it.
>
> Considering such a limited situation, I understand this solution is
> for a particular situation which is not usual for a cascaded stack
> environment.
> Is that the same as what you are implying in your message?
>
> I would like to avoid any misunderstanding between members as much as 
> possible.
>
> Cheers,
> Shinobu
>
> On Wed, May 4, 2016 at 10:25 AM, joehuang  wrote:
>> Hi, Shinobu,
>>
>> I think Zhiyuan's suggestion is mainly for Ronghui's environment: his
>> environment has very limited network interfaces, so it's difficult to
>> experiment with the N-S feature. The recommendation would be to use VMs for
>> setting up a Tricircle test bed with two bottom pods, so it's much easier to
>> manage the networking plane for different purposes. But Ronghui's machine also
>> has very limited vCPU and memory, so booting several VMs to establish the
>> tricircle and two bottom pods test bed is also not possible. That's why
>> Zhiyuan proposed a hacking way to do it.
>>
>> Best Regards
>> Chaoyi Huang ( joehuang )
>>
>> 
>> From: Shinobu Kinjo [shinobu...@gmail.com]
>> Sent: 04 May 2016 6:58
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [tricircle] Easy Way to Test Tricircle 
>> North-South L3 Networking
>>
>> Vega,
>>
>> On Tue, May 3, 2016 at 5:49 PM, Vega Cai  wrote:
>>> Hi all,
>>>
>>> Just would like to share a way to test Tricircle north-south L3 networking
>>> without requiring the third interface.
>>>
>>> In the Tricircle readme, it is said that you need to add an interface in
>>> your host to br-ext bridge. One interface to access the host, one interface
>>> for east-west networking and one interface for north-south networking, so
>>> all together three interfaces are required.
>>>
>>> What if your host only has two interfaces? Here is another deployment
>>> choice.
>>>
>>> First, change your external network type to flat type. If you are using the
>>> DevStack script provided by Tricircle, do the following changes in node2
>>> local.conf then run DevStack in node2.
>>>
>>> (1) change Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS
>>> from (network_vlan_ranges=bridge:2001:3000,extern:3001:4000)
>>> to (network_vlan_ranges=bridge:2001:3000)
>>> (since we are going to use a flat external network, there is no need to
>>> configure a VLAN range for extern)
>>> (2) add PHYSICAL_NETWORK=extern
>>> (3) keep OVS_BRIDGE_MAPPINGS=bridge:br-bridge,extern:br-ext
>>
>> Good point.
>>
>>>
>>> Second, specify flat type when creating external network.
>>>
>>> curl -X POST http://127.0.0.1:9696/v2.0/networks
>>>-H "Content-Type: application/json" \
>>>-H "X-Auth-Token: $token" \
>>>-d '{"network": {"name": "ext-net", "admin_state_up": true,
>>> "router:external": true, "provider:network_type": "flat",
>>> "provider:physical_network": "extern", "availability_zone_hints":
>>> ["Pod2"]}}'
>>
>> Understood.
>>
>>>
>>> Third, configure IP address of br-ext.
>>>
>>> sudo ifconfig br-ext 163.3.124.1 netmask 255.255.255.0
>>>
>>> Here 163.3.124.1 is your external network gateway IP, set net mask
>>> according to your CIDR.
>>>
>>> After the above steps, you can access your VM via floating IP in node2. Also
>>> your VM can ping the external gateway.
>>>
>>> Would you like your VM to access the Internet? (Of course, node2 should be
>>> able to access the Internet.) Two more steps to follow:
>>> (1) Enable packet forwarding in node2
>>>
>>> sudo bash
>>> echo 1 >/proc/sys/net/ipv4/ip_forward
>>>
>>> (2) Configure SNAT in node2
>>>
>>> sudo iptables -t nat -I POSTROUTING -s 163.3.124.0/24 -o eth1 -j SNAT
>>> --to-source 10.250.201.21
>>>
>>> 163.3.124.0/24 is your external network CIDR, eth1 is the interface
>>> associated with your default route in node2 and 10.250.201.21 is the IP of
>>> eth1.
>>
>> I would like to avoid this kind of hackery as much as possible.
>> I would like to see your further recommendations so that we can easily and
>> quickly build a cascaded stack system, including the top.
>>
>>>
>>> Hope this information helps.
>>>
>>> BR
>>> Zhiyuan
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Email:
>> shin...@linux.com
>> GitHub:
>> shinobu-x
>> Blog:
>> Life with Distributed Computational System based on OpenSource
>>
>> __
>> OpenStack 

[openstack-dev] [neutron][taas] cancel meeting

2016-05-03 Thread Takashi Yamamoto
Considering trip recovery and the Japanese holiday,
this week's TaaS meeting is cancelled.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle]Proposing Shinobu Kinjo for Tricircle core reviewer

2016-05-03 Thread Vega Cai
+1, Shinobu has given me many suggestions in my patches.

On 4 May 2016 at 07:57, Zhipeng Huang  wrote:

> +1, still remember Shinobu getting started from basic concept of
> tricircle, and now becoming a maester of multisite openstack :)
>
> On Tue, May 3, 2016 at 9:03 PM, joehuang  wrote:
>
>> Hi,
>>
>> I would like to propose adding Shinobu Kinjo to the Tricircle core
>> reviewer team.
>>
>> Shinobu has been a highly valuable reviewer for Tricircle for the past few
>> months. His contributions cover every patch submitted, documents, and etherpad
>> discussions, and he always gives valuable, meaningful and helpful comments. His
>> review data can be found at http://stackalytics.com/ (but
>> unfortunately something is temporarily wrong with stackalytics; tricircle is
>> missing from the project list)
>>
>> I believe Shinobu will be a great addition to the Tricircle team.
>>
>> Please respond with +1/-1. Thank you!
>>
>> Best Regards
>> Chaoyi Huang ( joehuang )
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co,. Ltd
> Email: huangzhip...@huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipe...@uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] Easy Way to Test Tricircle North-South L3 Networking

2016-05-03 Thread Shinobu Kinjo
Hi Chaoyi,

I didn't consider Ronghui's environment which I have no idea about.

> That's why Zhiyuan proposed hacking way to do it.

Considering such a limited situation, I understand this solution is
for a particular situation which is not usual for a cascaded stack
environment.
Is that the same as what you are implying in your message?

I would like to avoid any misunderstanding between members as much as possible.

Cheers,
Shinobu

On Wed, May 4, 2016 at 10:25 AM, joehuang  wrote:
> Hi, Shinobu,
>
> I think Zhiyuan's suggestion is mainly for Ronghui's environment: his
> environment has very limited network interfaces, so it's difficult to
> experiment with the N-S feature. The recommendation would be to use VMs for
> setting up a Tricircle test bed with two bottom pods, so it's much easier to
> manage the networking plane for different purposes. But Ronghui's machine also
> has very limited vCPU and memory, so booting several VMs to establish the
> tricircle and two bottom pods test bed is also not possible. That's why
> Zhiyuan proposed a hacking way to do it.
>
> Best Regards
> Chaoyi Huang ( joehuang )
>
> 
> From: Shinobu Kinjo [shinobu...@gmail.com]
> Sent: 04 May 2016 6:58
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [tricircle] Easy Way to Test Tricircle 
> North-South L3 Networking
>
> Vega,
>
> On Tue, May 3, 2016 at 5:49 PM, Vega Cai  wrote:
>> Hi all,
>>
>> Just would like to share a way to test Tricircle north-south L3 networking
>> without requiring the third interface.
>>
>> In the Tricircle readme, it is said that you need to add an interface in
>> your host to br-ext bridge. One interface to access the host, one interface
>> for east-west networking and one interface for north-south networking, so
>> all together three interfaces are required.
>>
>> What if your host only has two interfaces? Here is another deployment
>> choice.
>>
>> First, change your external network type to flat type. If you are using the
>> DevStack script provided by Tricircle, do the following changes in node2
>> local.conf then run DevStack in node2.
>>
>> (1) change Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS
>> from (network_vlan_ranges=bridge:2001:3000,extern:3001:4000)
>> to (network_vlan_ranges=bridge:2001:3000)
>> (since we are going to use a flat external network, there is no need to
>> configure a VLAN range for extern)
>> (2) add PHYSICAL_NETWORK=extern
>> (3) keep OVS_BRIDGE_MAPPINGS=bridge:br-bridge,extern:br-ext
>
> Good point.
>
>>
>> Second, specify flat type when creating external network.
>>
>> curl -X POST http://127.0.0.1:9696/v2.0/networks
>>-H "Content-Type: application/json" \
>>-H "X-Auth-Token: $token" \
>>-d '{"network": {"name": "ext-net", "admin_state_up": true,
>> "router:external": true, "provider:network_type": "flat",
>> "provider:physical_network": "extern", "availability_zone_hints":
>> ["Pod2"]}}'
>
> Understood.
>
>>
>> Third, configure IP address of br-ext.
>>
>> sudo ifconfig br-ext 163.3.124.1 netmask 255.255.255.0
>>
>> Here 163.3.124.1 is your external network gateway IP, set net mask
>> according to your CIDR.
>>
>> After the above steps, you can access your VM via floating IP in node2. Also
>> your VM can ping the external gateway.
>>
>> Would you like your VM to access the Internet? (Of course, node2 should be
>> able to access the Internet.) Two more steps to follow:
>> (1) Enable packet forwarding in node2
>>
>> sudo bash
>> echo 1 >/proc/sys/net/ipv4/ip_forward
>>
>> (2) Configure SNAT in node2
>>
>> sudo iptables -t nat -I POSTROUTING -s 163.3.124.0/24 -o eth1 -j SNAT
>> --to-source 10.250.201.21
>>
>> 163.3.124.0/24 is your external network CIDR, eth1 is the interface
>> associated with your default route in node2 and 10.250.201.21 is the IP of
>> eth1.
>
> I would like to avoid this kind of hackery as much as possible.
> I would like to see your further recommendations so that we can easily and
> quickly build a cascaded stack system, including the top.
>
>>
>> Hope this information helps.
>>
>> BR
>> Zhiyuan
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Email:
> shin...@linux.com
> GitHub:
> shinobu-x
> Blog:
> Life with Distributed Computational System based on OpenSource
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack 

Re: [openstack-dev] [tricircle] Easy Way to Test Tricircle North-South L3 Networking

2016-05-03 Thread joehuang
Hi, Shinobu,

I think Zhiyuan's suggestion is mainly for Ronghui's environment: his
environment has very limited network interfaces, so it's difficult to experiment
with the N-S feature. The recommendation would be to use VMs for setting up a
Tricircle test bed with two bottom pods, so it's much easier to manage the
networking plane for different purposes. But Ronghui's machine also has very
limited vCPU and memory, so booting several VMs to establish the tricircle and
two bottom pods test bed is also not possible. That's why Zhiyuan proposed a
hacking way to do it.

Best Regards
Chaoyi Huang ( joehuang )


From: Shinobu Kinjo [shinobu...@gmail.com]
Sent: 04 May 2016 6:58
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle] Easy Way to Test Tricircle North-South 
L3 Networking

Vega,

On Tue, May 3, 2016 at 5:49 PM, Vega Cai  wrote:
> Hi all,
>
> Just would like to share a way to test Tricircle north-south L3 networking
> without requiring the third interface.
>
> In the Tricircle readme, it is said that you need to add an interface in
> your host to br-ext bridge. One interface to access the host, one interface
> for east-west networking and one interface for north-south networking, so
> all together three interfaces are required.
>
> What if your host only has two interfaces? Here is another deployment
> choice.
>
> First, change your external network type to flat type. If you are using the
> DevStack script provided by Tricircle, do the following changes in node2
> local.conf then run DevStack in node2.
>
> (1) change Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS
> from (network_vlan_ranges=bridge:2001:3000,extern:3001:4000)
> to (network_vlan_ranges=bridge:2001:3000)
> (since we are going to use a flat external network, there is no need to
> configure a VLAN range for extern)
> (2) add PHYSICAL_NETWORK=extern
> (3) keep OVS_BRIDGE_MAPPINGS=bridge:br-bridge,extern:br-ext

Good point.

>
> Second, specify flat type when creating external network.
>
> curl -X POST http://127.0.0.1:9696/v2.0/networks
>-H "Content-Type: application/json" \
>-H "X-Auth-Token: $token" \
>-d '{"network": {"name": "ext-net", "admin_state_up": true,
> "router:external": true, "provider:network_type": "flat",
> "provider:physical_network": "extern", "availability_zone_hints":
> ["Pod2"]}}'

Understood.

>
> Third, configure IP address of br-ext.
>
> sudo ifconfig br-ext 163.3.124.1 netmask 255.255.255.0
>
> Here 163.3.124.1 is your external network gateway IP, set net mask
> according to your CIDR.
>
> After the above steps, you can access your VM via floating IP in node2. Also
> your VM can ping the external gateway.
>
> Would you like your VM to access the Internet? (Of course, node2 should be
> able to access the Internet.) Two more steps to follow:
> (1) Enable packet forwarding in node2
>
> sudo bash
> echo 1 >/proc/sys/net/ipv4/ip_forward
>
> (2) Configure SNAT in node2
>
> sudo iptables -t nat -I POSTROUTING -s 163.3.124.0/24 -o eth1 -j SNAT
> --to-source 10.250.201.21
>
> 163.3.124.0/24 is your external network CIDR, eth1 is the interface
> associated with your default route in node2 and 10.250.201.21 is the IP of
> eth1.

I would like to avoid this kind of hackery as much as possible.
I would like to see your further recommendations so that we can easily and
quickly build a cascaded stack system, including the top.

>
> Hope this information helps.
>
> BR
> Zhiyuan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



--
Email:
shin...@linux.com
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Nominating Michał Dulko to Cinder Core

2016-05-03 Thread John Griffith
Definitely a +1 from me

On Tue, May 3, 2016 at 6:10 PM, Patrick East 
wrote:

> +1, Michal has done some awesome work on Cinder!
>
> -Patrick
>
> On Tue, May 3, 2016 at 11:16 AM, Sean McGinnis 
> wrote:
>
>> Hey everyone,
>>
>> I would like to nominate Michał Dulko to the Cinder core team. Michał's
>> contributions with both code reviews [0] and code contributions [1] have
>> been significant for some time now.
>>
>> His persistence with versioned objects has been instrumental in getting
>> support in the Mitaka release for rolling upgrades.
>>
>> If there are no objections from current cores by next week, I will add
>> Michał to the core group.
>>
>> [0] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt
>> [1]
>>
>> https://review.openstack.org/#/q/owner:%22Michal+Dulko+%253Cmichal.dulko%2540intel.com%253E%22++status:merged
>>
>> Thanks!
>>
>> Sean McGinnis (smcginnis)
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack's stackalytics is gummed up?

2016-05-03 Thread Jeremy Stanley
On 2016-05-04 10:52:56 +1000 (+1000), Michael Still wrote:
> The instance of stackalytics run by the openstack-infra team seems to be
> gummed up.
[...]

As is likely noted elsewhere, stackalytics.com is not run by the
Infra team but rather by Mirantis. There is a
stackalytics.openstack.org maintained by Infra, still under
development, but some final coordination with the current
Stackalytics maintainers at Mirantis is needed to get lifecycle
management details for the service worked out.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack's stackalytics is gummed up?

2016-05-03 Thread Michael Still
On Wed, May 4, 2016 at 11:03 AM, Davanum Srinivas  wrote:

> Michael,
>
> The stackalytics bots do not have access to gerrit at the moment. We
> noticed it last Friday and talked to the infra folks:
>
> http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2016-04-29.log.html#t2016-04-29T14:24:07
>
> Ilya Shakat helps a lot with the existing instance
> (http://stackalytics.com/), which will eventually be replaced by
> http://stackalytics.openstack.org/, owned by the Infra folks.
>
> There are a few days of national holidays, so hopefully when Ilya and
> the others get back they will take care of the problems Clark pointed to
> in the discussion above.
>

Thanks. I will bravely wait then.

Michael

-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack's stackalytics is gummed up?

2016-05-03 Thread Davanum Srinivas
Michael,

The stackalytics bots do not have access to gerrit at the moment. We
noticed it last Friday and talked to the infra folks:
http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2016-04-29.log.html#t2016-04-29T14:24:07

Ilya Shakat helps a lot with the existing instance
(http://stackalytics.com/), which will eventually be replaced by
http://stackalytics.openstack.org/, owned by the Infra folks.

There are a few days of national holidays, so hopefully when Ilya and
the others get back they will take care of the problems Clark pointed to
in the discussion above.

Thanks,
Dims


On Tue, May 3, 2016 at 8:52 PM, Michael Still  wrote:
> The instance of stackalytics run by the openstack-infra team seems to be
> gummed up. It alleges that the last time there was a nova code review was
> April 17, which seems... unlikely.
>
> Who looks after this thing so I can ping them gently?
>
> Thanks,
> Michael
>
> --
> Rackspace Australia
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack's stackalytics is gummed up?

2016-05-03 Thread Michael Still
The instance of stackalytics run by the openstack-infra team seems to be
gummed up. It alleges that the last time there was a nova code review was
April 17, which seems... unlikely.

Who looks after this thing so I can ping them gently?

Thanks,
Michael

-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Brant Knudson
On Tue, May 3, 2016 at 3:46 PM, Clint Byrum  wrote:

> Excerpts from Morgan Fainberg's message of 2016-05-03 11:13:38 -0700:
> > On Tue, May 3, 2016 at 10:28 AM, Monty Taylor 
> wrote:
> >
> > > On 05/03/2016 11:47 AM, Clint Byrum wrote:
> > >
> > >> Excerpts from Monty Taylor's message of 2016-05-03 07:59:21 -0700:
> > >>
> > >>> On 05/03/2016 08:55 AM, Clint Byrum wrote:
> > >>>
> > 
> >  Perhaps we have different perspectives. How is accepting what we
> >  previously emitted and told the user would be valid sneaky or wrong?
> >  Sounds like common sense due diligence to me.
> > 
> > >>>
> > >>> I agree - I see no reason we can't validate previously emitted
> tokens.
> > >>> But I don't agree strongly, because re-authing on invalid token is a
> > >>> thing users do hundreds of times a day. (these aren't oauth API Keys
> or
> > >>> anything)
> > >>>
> > >>>
> > >> Sure, one should definitely not be expecting everything to always work
> > >> without errors. On this we agree for sure. However, when we do decide
> to
> > >> intentionally induce errors for reasons we have not done so before, we
> > >> should weigh the cost of avoiding that with the cost of having it
> > >> happen. Consider this strawman:
> > >>
> > >> - User gets token, it says "expires_at Now+4 hours"
> > >> - User starts a brief set of automation tasks in their system
> > >>that does not use python and has not failed with invalid tokens
> thus
> > >>far.
> > >> - Keystone nodes are all updated at one time (AMAZING cloud ops team)
> > >> - User's automation jobs fail at next OpenStack REST call
> > >> - User begins debugging, wasting hours of time figuring out that
> > >>their tokens, which they stored and show should still be valid,
> were
> > >>rejected.
> > >>
> > >
> > > Ah - I guess this is where we're missing each other, which is good and
> > > helpful.
> > >
> > > I would argue that any user that is _storing_ tokens is doing way too
> much
> > > work. If they are doing short tasks, they should just treat them as
> > > ephemeral. If they are doing longer tasks, they need to deal with
> timeouts.
> > > SO, this:
> > >
> > >
> > > - User gets token, it says "expires_at Now+4 hours"
> > > - User starts a brief set of automation tasks in their system
> > >that does not use python and has not failed with invalid tokens thus
> > >far.
> > >
> > > should be:
> > >
> > > - User starts a brief set of automation tasks in their system
> > > that does not use python and has not failed with invalid tokens thus
> > > far.
> > >
> > > "Get a token" should never be an activity that anyone ever consciously
> > > performs.
> > >
> > >
> > This is my view. Never, ever, ever assume your token is good until
> > expiration. Assume the token might be broken at any request and know how
> to
> > re-auth.
> >
> > > And now they have to refactor their app, because this may happen again,
> > >> and they have to make sure that invalid token errors can bubble up to
> the
> > >> layer that has the username/password, or accept rolling back and
> > >> retrying the whole thing.
> > >>
> > >> I'm not saying anybody has this system, I'm suggesting we're putting
> > >> undue burden on users with an unknown consequence. Falling back to
> UUID
> > >> for a while has a known cost of a little bit of code and checking junk
> > >> tokens twice.
> > >>
> > >
> > Please do not advocate "falling back" to UUID. I am actually against
> making
> > fernet the default (very, very strongly), if we have to have this
> > "fallback" code. It is the wrong kind of approach, we already have
> serious
> > issues with complex code paths that produce subtly different results. If
> > the options are:
> >
> > 1) Make Fernet Default and have "fallback" code
> >
> > or
> >
> > 2) Leave UUID default and highly recommend fernet (plus gate on fernet
> > primarily, default in devstack)
> >
> > I will jump on my soapbox and be very loudly in favor of the 2nd option.
> If
> > we communicate this is a change that will happen (hey, maybe throw an
> > error/make the config option "none" so it has to be explicit) in Newton,
> > and then move to a Fernet default in O - I'd be ok with that.
> >
> > >
> > > Totally. I have no problem with the suggestion that keystone handle
> this.
> > > But I also think that users should quite honestly stop thinking about
> > > tokens at all. Tokens are an implementation detail that if any user
> thinks
> > > about while writing their app they're setting themselves up to be
> screwed -
> > > so we should make sure we're not talking about them in a primary way
> such
> > > as to suggest that people focus a lot of energy on them.
> > >
> > > (I also frequently see users who are using python libraries even get
> > > everything horribly wrong and screw themselves because they think they
> need
> > > to think about tokens)
> > >
> >
> > Better communication that tokens are ephemeral and should not 
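
The "assume the token might be broken at any request and know how to
re-auth" advice above can be sketched as a small retry wrapper.
SimpleSession, auth_fn and request_fn below are illustrative stand-ins,
not the keystoneauth API:

```python
class Unauthorized(Exception):
    """Stand-in for an HTTP 401 from any OpenStack service."""


class SimpleSession:
    """Tiny sketch of a client that never trusts a cached token.

    auth_fn returns a fresh token and request_fn(token, path) performs
    a call, raising Unauthorized when the token is rejected.  Both are
    hypothetical stand-ins for real keystone/HTTP plumbing.
    """

    def __init__(self, auth_fn, request_fn, max_retries=1):
        self._auth_fn = auth_fn
        self._request_fn = request_fn
        self._max_retries = max_retries
        self._token = None

    def request(self, path):
        if self._token is None:
            self._token = self._auth_fn()
        for attempt in range(self._max_retries + 1):
            try:
                return self._request_fn(self._token, path)
            except Unauthorized:
                if attempt == self._max_retries:
                    raise
                # The token may be rejected before its advertised
                # expires_at (e.g. after a provider-format switch), so
                # re-authenticate and retry instead of giving up.
                self._token = self._auth_fn()
```

With this shape, an early token invalidation costs one extra round trip
rather than a failed automation job.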

[openstack-dev] [nova] Austin summit "getting started in Nova" session recap

2016-05-03 Thread Matt Riedemann
On Wednesday afternoon Sean Dague led a session on getting started in 
Nova. The full etherpad is here [1].


In this session we discussed a lot of the bulk work items we have in 
Nova right now, how we can get people newer to the project involved in 
helping with these as an introduction, and how they can then 'ladder 
up' to more advanced work.


We identified that we have a lot of low-hanging-fruit type bulk work 
going on right now:


* Integrating support for python 3.
* Converting mox usage in tests to using mock.
* Cleaning up the api-ref documentation.
* Centralizing and improving the help for the config options.
* Cleaning up fake uuid usage in tests.
* Removing the NovaObjectDictCompat mixin from objects.
* Cleaning up random stacktraces from test runs.
* Addressing deprecation warnings from dependent libraries.
* Bug skimming/triage.
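
One of the items above, converting mox tests to mock, follows a fairly
mechanical pattern. Below is a sketch of what such a conversion looks
like using the stdlib's unittest.mock; the Compute class and its method
names are invented for illustration and are not real Nova code:

```python
import unittest
from unittest import mock


class Compute(object):
    """Toy stand-in for code under test (not real Nova code)."""

    def _get_instance(self, uuid):
        raise NotImplementedError("would hit the database")

    def reboot(self, uuid):
        instance = self._get_instance(uuid)
        return "rebooted %s" % instance["name"]


class TestReboot(unittest.TestCase):
    # A mox version of this test would record expectations up front:
    #   self.mox.StubOutWithMock(compute, '_get_instance')
    #   compute._get_instance('fake-uuid').AndReturn({'name': 'vm1'})
    #   self.mox.ReplayAll()
    # The mock equivalent patches with a decorator (or context manager)
    # and asserts the expectation *after* the call:
    @mock.patch.object(Compute, "_get_instance",
                       return_value={"name": "vm1"})
    def test_reboot(self, mock_get):
        result = Compute().reboot("fake-uuid")
        self.assertEqual("rebooted vm1", result)
        mock_get.assert_called_once_with("fake-uuid")
```

The record/replay step disappears entirely, which is most of what makes
these conversions good low-hanging fruit.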

We also talked about how we can better advertise these with enough 
details to on-board people that are looking to contribute.


We already have an etherpad for low-hanging-fruit [2] but etherpads get 
messy once there is a ton of content in them. They also aren't indexed 
by Google so they are hard to find. So we decided to move each major 
item to a wiki page and add a template structure for each which would 
include:


* A contact person for each effort to provide more details for new 
contributors interested in helping out with a particular item.


* An estimated difficulty level. For example, adding support for python 
3 is pretty straight-forward, as is removing fake uuids from tests, but 
removing NovaObjectDictCompat from objects is more complicated.


* Which part of the release cycle we'll focus on each effort, which is 
generally the 1st and sometimes 2nd milestones. We don't usually do 
these after the 2nd milestone, in order to focus review effort on 
features and bug fixes and to avoid regressions.


* A priority level. While most of these are low-priority, some are 
higher priority than others so we can focus the work and get them done 
so they don't drag on from release to release.


* Examples of existing changes to copy or to get an idea of the context 
of the changes involved.


* Linking to the related blueprint for tracking the work and telling 
people how to use the appropriate topic branch in Gerrit.


As far as advertising these, it's mostly going to be links from docs for 
getting started in Nova. And if anyone is asking in the nova IRC channel 
about things they can help with (it does happen), we can point them to 
this. We might also have status updates in the dev list from the contact 
people, but I'd leave that up to them and don't want to set hard rules 
about this.


The idea of doing a recorded hangout session per effort was also brought 
up so we could link that into the background context/education per item. 
This would help with on-boarding so the contact person doesn't have to 
repeat the same details every time someone new asks for help in getting 
started.


We might also create a Gerrit dashboard for these so people can easily 
see what's ready for review. Sean has already started working on this 
for the virt drivers [3] which we discussed in the Friday meetup session.


There is a list of volunteers at the bottom of the session etherpad. 
Once we get the wikis created volunteers can start digging in, although 
I already recognize some of the names as people working on at least one 
of the items above.


Tony Breeds signed up to create the Gerrit dashboard.

We'll also need people to step up and own creating a wiki for each item 
in the above list and transferring the information from [2] to the 
respective wiki. There are contacts for some of the items in the 
etherpad so I'll assume those same people will create the wikis. As for 
the others, if there is already a blueprint I'll assume the assignee or 
creator of the blueprint will create the wiki. For anything else, expect 
your friendly neighborhood PTL to gently prod some people.


We didn't really get into the 'laddering up' part of the session. In my 
mind this is something that will happen naturally with people that get 
involved, stay involved and demonstrate their ability to work independently.


[1] https://etherpad.openstack.org/p/newton-nova-getting-started
[2] https://etherpad.openstack.org/p/nova-low-hanging-fruit
[3] http://lists.openstack.org/pipermail/openstack-dev/2016-May/093753.html
[4] 
http://lists.openstack.org/pipermail/openstack-dev/2016-April/093538.html


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] - Summit session on Future of Neutron API

2016-05-03 Thread Kevin Benton
Hi all,

We had a session discussing the future of the Neutron API. The etherpad
is available here:
https://etherpad.openstack.org/p/newton-neutron-future-neutron-api

The first two topics were Keystone V3 and Pecan. Both of those were
relatively straight-forward.

For the switch to Keystone V3, we will need to accept project_id in the API
as well as tenant_id for backward compatibility. Without the adoption of
micro-versioning (more on that below), we will have to support both
indefinitely.
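
The dual project_id/tenant_id support described above boils down to a
small normalisation shim at the API layer. A sketch under that
assumption (not actual Neutron code):

```python
def normalize_project_id(body):
    """Accept 'project_id' or legacy 'tenant_id' and populate both.

    Sketch of the kind of compatibility shim the Keystone V3 switch
    needs; not actual Neutron code.  Raises ValueError when both
    attributes are present but disagree.
    """
    res = dict(body)
    project = res.get("project_id")
    tenant = res.get("tenant_id")
    if project is not None and tenant is not None and project != tenant:
        raise ValueError("project_id and tenant_id do not match")
    value = project if project is not None else tenant
    if value is not None:
        res["project_id"] = value
        res["tenant_id"] = value   # kept for backward compatibility
    return res
```

Until microversioning lands, both keys have to appear in responses as
well, which is why the shim writes both rather than dropping tenant_id.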

The plan for Pecan is to finish fixing up the last remaining bugs with some
of the service plugins. Once it is passing all tests, we will upgrade the
pecan jobs from experimental to non-voting and then voting (after maybe a
week).

Once we are comfortable that it looks stable, we will switch Neutron to
default to pecan so everything will be running it by default. This may
break some out-of-tree service plugins/extensions so we need to do it early
to address any issues.


Finally, the topic of microversioning came up again. Until we adopt it, we
will have to maintain backward compatibility in all of our responses (so we
would never be able to get rid of tenant_id).

This led to several questions that need to be answered before we can
proceed:

   - Do we have any volunteers to do the microversioning work?
   - Do we drop support for extensions on the switch to microversioning?
   - If we keep extensions, are we okay with them being un-versioned and
   possibly incompatible between implementations?


I think until we get the questions above agreed upon early on, there isn't
much point in getting down into detailed planning because the effort won't
go anywhere without high-level agreement.


Cheers,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Nominating Michał Dulko to Cinder Core

2016-05-03 Thread Patrick East
+1, Michal has done some awesome work on Cinder!

-Patrick

On Tue, May 3, 2016 at 11:16 AM, Sean McGinnis 
wrote:

> Hey everyone,
>
> I would like to nominate Michał Dulko to the Cinder core team. Michał's
> contributions with both code reviews [0] and code contributions [1] have
> been significant for some time now.
>
> His persistence with versioned objects has been instrumental in getting
> support in the Mitaka release for rolling upgrades.
>
> If there are no objections from current cores by next week, I will add
> Michał to the core group.
>
> [0] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt
> [1]
>
> https://review.openstack.org/#/q/owner:%22Michal+Dulko+%253Cmichal.dulko%2540intel.com%253E%22++status:merged
>
> Thanks!
>
> Sean McGinnis (smcginnis)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-05-03 Thread Mark Doffman

This thread has been a depressing read.

I understand that the content is supposed to be distributed databases 
but for me it has become an inquisition of cellsV2.


Our question has clearly become "Should we continue efforts on 
cellsV2?", which I will address head-on.


We shouldn't be afraid to abandon CellsV2. If there are designs that are 
proven to be a better solution, then our current momentum shouldn't keep 
us from an abrupt change. As someone who is working on this I have an 
attachment to the current design, but it's important for me to keep an 
open mind.


Here are my *main* reasons for continuing work on CellsV2.

1. It provides a proven solution to an immediate message queue problem.

Yes CellsV2 is different to CellsV1, but the previous solution showed 
that application-level sharding of the message queue can work. CellsV2 
provides this solution with a (moderately) easy upgrade path for 
existing deployments. These deployments may not be comfortable with 
changing MQ technologies or may already be using CellsV1. Application 
level sharding of the message queue is not pretty, but will work.
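
The application-level MQ sharding described above amounts to looking up
a per-cell transport (and database) before dispatching a request. A
minimal sketch, with an invented mapping layout rather than CellsV2's
actual API-database schema:

```python
# Hypothetical per-cell mapping; in CellsV2 the transport_url and
# database_connection live in per-cell mapping records in the API
# database rather than a module-level dict.
CELL_MAPPINGS = {
    "cell1": {"transport_url": "rabbit://rabbit-cell1:5672/",
              "database_connection": "mysql://db-cell1/nova"},
    "cell2": {"transport_url": "rabbit://rabbit-cell2:5672/",
              "database_connection": "mysql://db-cell2/nova"},
}


def targets_for(cell_name):
    """Return the (MQ, DB) pair a request for this cell should use."""
    try:
        cell = CELL_MAPPINGS[cell_name]
    except KeyError:
        raise LookupError("unknown cell: %s" % cell_name)
    return cell["transport_url"], cell["database_connection"]
```

Nothing in this shape cares whether a cell's transport_url points at
rabbit, 0mq, or anything else, which is the point made in item 3 below.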


2. The 'complexity' of CellsV2 is vastly overstated.

Sure there is a lot of *work* to do for CellsV2, but this doesn't imply 
increased complexity: any refactoring requires work. CellsV1 added 
complexity to our codebase; CellsV2 does not. In fact, by clearly 
separating data that is 'owned' by the different services we have, I 
believe that we are improving the modularity and encapsulation present 
in Nova.


3. CellsV2 does not prohibit *ANY* of the alternative scaling methods
   mentioned in this thread.

Really, it doesn't. Both message queue and database switching are 
completely optional. Both in the sense of running a single cell, and 
even when running multiple cells. If anything, the ability to run 
separate message queues and database connections could give us the 
ability to trial these alternative technologies within a real, running, 
cloud.


Just imagine the ability to set up a cell in your existing cloud that 
runs 0mq rather than rabbit. How about a NewSQL database integrated into 
an existing cloud? Both of these things may (with some work) be possible.




I could go on, but I won't. These are my main reasons and I'll stick to 
them.


It's difficult to be proven wrong, but sometimes necessary to get the 
best product that we can. I don't think that the existence of 
alternative message queue and database options is enough to stop CellsV2 
work now. A proven solution that meets the upgrade constraints we have 
in Nova would be a good reason to do so. We should of course explore 
other options; nothing we are doing prevents that. When they work out, 
I'll be super excited.


Thanks

Mark

On 4/29/16 12:53 AM, Clint Byrum wrote:

Excerpts from Mike Bayer's message of 2016-04-28 22:16:54 -0500:


On 04/28/2016 08:25 PM, Edward Leafe wrote:


Your own tests showed that a single RDBMS instance doesn’t even break a sweat
under your test loads. I don’t see why we need to shard it in the first
place, especially if in doing so we add another layer of complexity and
another dependency in order to compensate for that choice. Cells are a useful
concept, but this proposed implementation is adding way too much complexity
and debt to make it worthwhile.


now that is a question I have also.  Horizontal sharding is usually for
the case where you need to store say, 10B rows, and you'd like to split
it up among different silos.  Nothing that I've seen about Nova suggests
this is a system with any large data requirements, or even medium size
data (a few million rows in relational databases is nothing).I
didn't have the impression that this was the rationale behind Cells, it
seems like this is more of some kind of logical separation of some kind
that somehow suits some environments (but I don't know how).
Certainly, if you're proposing a single large namespace of data across a
partition of nonrelational databases, and then the data size itself is
not that large, as long as "a single namespace" is appropriate then
there's no reason to break out of more than one MySQL database.  There's
not much reason to transparently shard unless you are concerned about
adding limitless storage capacity.   The Cells sharding seems to be
intentionally explicit and non-transparent.



There's a bit more to it than the number of rows. There's also a desire
to limit failure domains. IMO, that is entirely unfounded, as I've run
thousands of servers that depended on a single pair of MySQL servers
using simple DRBD and pacemaker with a floating IP for failover. This
is the main reason MySQL is a thing... it can handle 100,000 concurrent
connections just fine, and the ecosystem around detecting and handling
failure/maintenance is mature.

The whole cells conversation, IMO, stems from the way we use RabbitMQ.
We should just stop doing that. I know as I move forward with our scaling
efforts, I'll be trying several RPC 

Re: [openstack-dev] [tricircle]Proposing Shinobu Kinjo for Tricircle core reviewer

2016-05-03 Thread Zhipeng Huang
+1. I still remember Shinobu getting started with the basic concepts of
Tricircle, and now he's becoming a maester of multisite OpenStack :)

On Tue, May 3, 2016 at 9:03 PM, joehuang  wrote:

> Hi,
>
> I would like to propose adding Shinobu Kinjo to the Tricircle core
> reviewer team.
>
> Shinobu has been a highly valuable reviewer for Tricircle for the past few
> months. His contributions cover every patch submitted, documents, and
> etherpad discussions, and he always gives valuable, meaningful and helpful
> comments. His review data can be found at http://stackalytics.com/ (but
> unfortunately something is temporarily wrong in Stackalytics, and Tricircle
> is missing from the project lists).
>
> I believe Shinobu will be a great addition to the Tricircle team.
>
> Please response with +1/-1. Thank you!
>
> Best Regards
> Chaoyi Huang ( joehuang )
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] neutron-lib report from the summit

2016-05-03 Thread Henry Gessau
At the Newton summit in Austin we held a session on the next steps for
neutron-lib. Here is a report on what was discussed at the session.

Etherpad:
https://etherpad.openstack.org/p/newton-neutron-lib-next-steps

Progress so far
---
The package is on PyPI and sub-projects should be using it now.

Only the very obvious and easy items have been added to neutron-lib:
  - Common constants
  - Common exceptions
  - Attribute converters and validators
  - Common hacking checks, including one to aid decoupling (N530)
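
Hacking checks such as N530 are ordinary functions that flake8 invokes
once per logical line, yielding an (offset, message) pair for each
violation. A simplified sketch of such a check; the real N530
implementation in neutron-lib is more thorough than this regex:

```python
import re

# Imports that sub-projects should take from neutron_lib instead of
# reaching back into neutron core (illustrative pattern only).
_DECOUPLE_RE = re.compile(r"\bfrom neutron(\.| import )")


def check_no_neutron_imports(logical_line):
    """N530: direct neutron imports are forbidden in decoupled code.

    Simplified sketch of a hacking/flake8 check: called once per
    logical line, yields an (offset, message) tuple on violation.
    """
    match = _DECOUPLE_RE.search(logical_line)
    if match:
        yield (match.start(),
               "N530: use neutron_lib instead of importing neutron directly")
```

Registering the function under the hacking entry point is all that is
needed to make it run as part of pep8 jobs across sub-projects.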

Adding database and DB model support

We are leaning towards a common pattern for interacting with neutron resources
using oslo versioned objects (OVOs). The OVO work in neutron core needs to
mature a bit before we start moving it to neutron-lib.

Some basic DB utility methods will be added to allow sub-projects to add and
update their own tables.

Architecture decisions
--
  - Should there be more than one library, with smaller pieces?
  - Decide what is useful before just trying to do everything.
  - Decide on Callbacks: OVO integration, or something else?
  - OVOs are new in neutron core, which means they are inherently unstable
(undergoing changes) and buggy. The goal is for neutron-lib to contain
stable and proven code, yet OVOs are permeating many of the things we
want in neutron-lib. How do we reconcile this?

Documentation
-
We need to write API documentation. Contributions are welcome.
We need to expand the devref with details on how to use the lib, how to add
things, how to work on dependent code without being blocked, etc.

Work planned for Newton (and beyond)
---
  - DB common utils (for operations not requiring OVO)
  - DB common framework with OVO integration
  - DB alembic migration interface
  - RPC common framework and utilities
  - Finalize Callbacks
  - Context support
  - Policy support
  - Config support, on top of oslo.config?
  - Agent common utils/framework?
  - Extensions common utils/framework?

We should try to determine the priority of these planned items.

We should have a Release Cadence strategy. We can decide on publishing a
release at each weekly meeting, or have a regular cadence, or have a process
similar to oslo libraries.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] User feedback track: end user and operator pain points - report

2016-05-03 Thread Carl Baldwin
Hi all,

We had a productive session with operators at the summit [1].  I
wanted to be sure to go over the notes while they were fresh in my
mind.  Some of the issues still need some discussion...

Probably the most contentious issue was that of creating HA routers
when there aren't enough agents to satisfy the minimum agents
requirement of HA [7].  We need to drive this discussion.  There were
some very convincing but conflicting points of view expressed in the
session.  I have expressed my point of view in the bug report.

There was a complaint that l3-agent-router-remove doesn't work if
there are only 2 l3 agents on the network.  Assaf thinks this was
fixed in Liberty.  Please file a bug if this is still a problem.

A request was made that, for upgrading, it would be nice if there was a
tool that took a flat config file and moved the deprecated options to
their new homes. The last upgrade we did was entirely automatable.  I did
not see if an RFE bug was filed with the upgrades team to address this.

We had a discussion about IP protocol numbers in security groups.  We
think that any IP protocol number can be specified in the API but that
is not well-documented in the API docs [9].  We need a documentation
bug filed.  Support for additional protocol names has been added to
the client [10].
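
Accepting raw IP protocol numbers alongside the well-known names is a
small resolution step. A sketch of such a resolver (the numbers are
IANA assignments, but this is not the actual Neutron validator):

```python
# A few of the IANA-assigned protocol numbers (illustrative subset).
_IP_PROTOCOL_NAMES = {
    "icmp": 1,
    "tcp": 6,
    "udp": 17,
    "icmpv6": 58,
    "vrrp": 112,
    "sctp": 132,
}


def resolve_ip_protocol(value):
    """Return an IP protocol number from a name or a numeric value."""
    if isinstance(value, str) and value.lower() in _IP_PROTOCOL_NAMES:
        return _IP_PROTOCOL_NAMES[value.lower()]
    number = int(value)   # raises ValueError for unknown names
    if not 0 <= number <= 255:
        raise ValueError("IP protocol number out of range: %r" % value)
    return number
```

The documentation bug mentioned above is essentially that the numeric
branch of this logic exists in the API but is not described in [9].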

There was discussion around routers and multiple mechanism drivers
[12].  My memory about this discussion is already fading.  Can anyone
fill in?

Some of the issues are gaining traction and have owners since the session...

  - Dougwig will explore using nginx as metadata proxy [2]
  - Kevin Benton is working on scalability of security groups changes [3].
    - However, the work can't be backported, so we may still need to
find a separate fix for stable branches.
  - Several operators expressed interest in north / south DVR for IPv6
tenant networks.
- We now have an RFE [11] to push.

Some issues were resolved in near real time during the session but
might still need final approval...  Great job!

  - Cleaning up stale flows needs a +A to stable/liberty [4]
  - A new Neutron Mitaka release [5] with this fix [6] is in the works.
  - Consume service plugins queues in RPC workers was merged [8].

If there is something that I missed, please let me know.

Carl Baldwin

[1] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9103
[2] https://bugs.launchpad.net/neutron/+bug/1524916
[3] https://bugs.launchpad.net/neutron/+bug/1576425
[4] https://review.openstack.org/#/c/300424/
[5] https://review.openstack.org/#/c/310931/
[6] 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=90b9cd334b1b33df933bf1b61b38c6e087c431af
[7] https://bugs.launchpad.net/neutron/+bug/1555042
[8] https://review.openstack.org/#/c/238745/
[9] 
http://developer.openstack.org/api-ref-networking-v2-ext.html#security_groups
[10] https://review.openstack.org/#/c/307908/
[11] https://bugs.launchpad.net/neutron/+bug/1577488
[12] https://bugs.launchpad.net/neutron/+bug/1555384

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread Hayes, Graham
On 03/05/2016 17:03, John Dickinson wrote:
> TC,
>
> In reference to 
> http://lists.openstack.org/pipermail/openstack-dev/2016-May/093680.html and 
> Thierry's reply, I'm currently drafting a TC resolution to update 
> http://governance.openstack.org/resolutions/20150901-programming-languages.html
>  to include Go as a supported language in OpenStack projects.
>
> As a starting point, what would you like to see addressed in the document I'm 
> drafting?
>
> --John
>
>
>

Great - I was about to write a thread like this :)

Designate is looking to move a single component of ours to Go - and we
were wondering what was the best way to do it.

The current policy does allow for the TC to bless different languages
on a case by case basis - do we need to go from just python and JS to
allowing all projects to use go, or should the TC approve (or
disapprove) the swift and designate requests?

I think the swift and designate changes might be a good test case to
see how the build / mirroring / packaging / artifact / library issues
shake out.

- Graham

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stackalytics]many projects missing in the "others" category

2016-05-03 Thread Anita Kuno
On 05/03/2016 09:11 AM, joehuang wrote:
> Hello,
> 
> Very sad to know that some projects are missing again in the "others" 
> category. When I want to cite some statistic data for Tricircle core reviewer 
> nomination, can't find the data for many "others" projects which usually are 
> listed "others" category. Is there any new rule in Stackalytics?
> 
> Best Regards
> Chaoyi Huang ( joehuang )
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

I use gerrit directly for evaluation of reviewing activity.

Gerrit has many different queries that can be created to evaluate
individual contributions both for ownership of patches and for reviewing
patches. Besides, while absolute numbers of patches can be useful,
evaluating the quality of an individual's reviews is vastly more
important. Taking the time to make useful inline comments and to link
to references that help contributors compose really good patches far
surpasses numbers in my book. Many of my best reviews on patches don't
include a vote, something that stackalytics doesn't count at all.

There is a whole world of valuable data on gerrit. I encourage you to
explore it: https://review.openstack.org/Documentation/user-search.html
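
Gerrit also exposes these queries over its REST API as JSON, which
makes ad-hoc review statistics easy to script. A stdlib-only sketch;
note that Gerrit prefixes JSON responses with ")]}'" as an anti-XSSI
guard, which must be stripped before parsing:

```python
import json
import urllib.parse
import urllib.request

GERRIT = "https://review.openstack.org"


def parse_gerrit_json(raw):
    """Strip Gerrit's ")]}'" anti-XSSI prefix and decode the JSON body."""
    if raw.startswith(")]}'"):
        raw = raw[4:]
    return json.loads(raw)


def query_changes(query, limit=25):
    """Fetch changes matching a Gerrit search such as
    'reviewer:someone status:merged' (same syntax as the web UI).
    """
    url = "%s/changes/?q=%s&n=%d" % (
        GERRIT, urllib.parse.quote(query), limit)
    with urllib.request.urlopen(url) as resp:
        return parse_gerrit_json(resp.read().decode("utf-8"))
```

For example, query_changes("project:openstack/nova status:merged")
returns a list of change dicts you can count or group however you like.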

Thank you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stackalytics]many projects missing in the "others" category

2016-05-03 Thread Shinobu Kinjo
AFAIK Tricircle is one of them. [1]
How can we fix this?

[1] http://stackalytics.com/report/contribution/tricircle/90

Cheers,
S

On Tue, May 3, 2016 at 10:11 PM, joehuang  wrote:
> Hello,
>
> Very sad to know that some projects are missing again in the "others" 
> category. When I want to cite some statistic data for Tricircle core reviewer 
> nomination, can't find the data for many "others" projects which usually are 
> listed "others" category. Is there any new rule in Stackalytics?
>
> Best Regards
> Chaoyi Huang ( joehuang )
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Email:
shin...@linux.com
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-05-03 Thread Clint Byrum
Excerpts from Andrew Laski's message of 2016-05-03 14:46:08 -0700:
> 
> On Mon, May 2, 2016, at 01:13 PM, Edward Leafe wrote:
> > On May 2, 2016, at 10:51 AM, Mike Bayer  wrote:
> > 
> > >> Concretely, we think that there are three possible approaches:
> > >> 1) We can use the SQLAlchemy API as the common denominator between a 
> > >> relational and non-relational implementation of the db.api component. 
> > >> These two implementation could continue to converge by sharing a large 
> > >> amount of code.
> > >> 2) We create a new non-relational implementation (from scratch) of 
> > >> the db.api component. It would require probably more work.
> > >> 3) We are also studying a last alternative: writing a SQLAlchemy 
> > >> engine that targets NewSQL databases (scalability + ACID):
> > >>  - https://github.com/cockroachdb/cockroach
> > >>  - https://github.com/pingcap/tidb
> > > 
> > > Going with a NewSQL backend is by far the best approach here.   That way, 
> > > very little needs to be reinvented and the application's approach to data 
> > > doesn't need to dramatically change.
> > 
> > I’m glad that Matthieu responded, but I did want to emphasize one thing:
> > of *course* this isn’t an ideal approach, but it *is* a practical one.
> > The biggest problem in any change like this isn’t getting it to work, or
> > to perform better, or anything else except being able to make the change
> > while disrupting as little of the existing code as possible. Taking an
> > approach that would be more efficient would be a non-starter since it
> > wouldn’t provide a clean upgrade path for existing deployments.
> 
> I would like to point out that this same logic applies to the current
> cellsv2 effort. It is a very practical set of changes which allows Nova
> to move forward with only minor effort on the part of deployers. And it
> moves towards a model that is already used and well understood by large
> deployers of Nova while also learning from the shortcomings of the
> previous architecture. In short, much of this is already battle tested
> and proven.
> 
> If we started Nova from scratch, I hear golang is lovely for this sort
> of thing, would we do things differently? Probably. However that's not
> the position we're in. And we're able to make measurable progress with
> cellsv2 at the moment and have a pretty clear idea of the end state. I
> can recall conversations about NoSQL as far back as the San Diego
> summit, which was my first so I can't say they didn't happen previously,
> and this is the first time I've seen any measurable progress on moving
> forward with it. But where it would go is not at all clear.
> 

I beg to differ about "pretty clear idea of the end state".

* There's no clear answer about scheduling. It's a high level "we'll
  give it a scheduler/resource tracker database of its own". But that's a
  massive amount of work just to design the migrations and solidify the
  API. I understand some of that work is ongoing and unrelated to cells
  v2, but it's not done or clear yet.
  
* This also doesn't address the fact that for cellsv1 users a move like
  that will _regress_ scheduler scalability since now we can only have
  one scheduler and resource tracker instead of many. For those of us
  just now ramping up, it leaves us with no way to get high throughput
  on our scheduler.

* Further, if there's a central scheduler, that means all of the sort of
  clever scheduling hacks that people have achieved with cells v1 (a
  cell of baremetal, a cell of SSD, etc) will need to be done via other
  means, which is more design work that needs to happen.

* There's no clear way to efficiently list and sort results from lots of
  cells. The discussion came up with a few experiments to try, but the
  problem is _fundamental_ to sharding, and the cells v1 answer was a
  duplication of data which obviously cells v2 wants to avoid, and I
  would assume with good reason.
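[Editor's note: the cross-cell listing problem above is essentially a k-way merge. If each cell returns its page already sorted, the core operation can be sketched as follows — toy data standing in for instance records, not actual Nova code:]

```python
import heapq

def merged_listing(cell_results, key, limit):
    """Merge per-cell result lists, each already sorted by `key`,
    into one globally sorted page of at most `limit` items."""
    merged = heapq.merge(*cell_results, key=key)
    return [item for item, _ in zip(merged, range(limit))]

# Toy data standing in for sorted instance pages from three cells.
cells = [
    [{"name": "a1"}, {"name": "c9"}],
    [{"name": "b2"}, {"name": "b7"}],
    [{"name": "a5"}],
]
page = merged_listing(cells, key=lambda i: i["name"], limit=3)
# page -> [{'name': 'a1'}, {'name': 'a5'}, {'name': 'b2'}]
```

The hard part this sketch glosses over is that each cell must be asked for a full page of candidates to guarantee a correct global page, which is exactly the efficiency problem being debated.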

I have a huge amount of respect for what has been achieved with cells v1,
and I totally understand the hesitance to promote the way it works given
what cells v1 has taught its users. However, the design of v2 is quite
a bit different than v1, enough so that I think it should be treated as
an experiment until someone has a solid design of the whole thing and
can assert that it actually addresses scale without regressing things
significantly.

Meanwhile, there are other things deployers can do to address scale that
will likely cause less churn in Nova, and may even help other projects
scale to a similar size. I intend to return to my pursuit of actual
experiment results for these things now that I understand the state of
cells v2. I hope others will consider this path as well, so we can
collaborate on things like 0mq and better database connection handling.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Morgan Fainberg
On Tue, May 3, 2016 at 1:46 PM, Clint Byrum  wrote:

> Excerpts from Morgan Fainberg's message of 2016-05-03 11:13:38 -0700:
> > On Tue, May 3, 2016 at 10:28 AM, Monty Taylor 
> wrote:
> >
> > > On 05/03/2016 11:47 AM, Clint Byrum wrote:
> > >
> > >> Excerpts from Monty Taylor's message of 2016-05-03 07:59:21 -0700:
> > >>
> > >>> On 05/03/2016 08:55 AM, Clint Byrum wrote:
> > >>>
> > 
> >  Perhaps we have different perspectives. How is accepting what we
> >  previously emitted and told the user would be valid sneaky or wrong?
> >  Sounds like common sense due diligence to me.
> > 
> > >>>
> > >>> I agree - I see no reason we can't validate previously emitted
> tokens.
> > >>> But I don't agree strongly, because re-authing on invalid token is a
> > >>> thing users do hundreds of times a day. (these aren't oauth API Keys
> or
> > >>> anything)
> > >>>
> > >>>
> > >> Sure, one should definitely not be expecting everything to always work
> > >> without errors. On this we agree for sure. However, when we do decide
> to
> > >> intentionally induce errors for reasons we have not done so before, we
> > >> should weigh the cost of avoiding that with the cost of having it
> > >> happen. Consider this strawman:
> > >>
> > >> - User gets token, it says "expires_at Now+4 hours"
> > >> - User starts a brief set of automation tasks in their system
> > >>that does not use python and has not failed with invalid tokens
> thus
> > >>far.
> > >> - Keystone nodes are all updated at one time (AMAZING cloud ops team)
> > >> - User's automation jobs fail at next OpenStack REST call
> > >> - User begins debugging, wasting hours of time figuring out that
> > >>their tokens, which they stored and show should still be valid,
> were
> > >>rejected.
> > >>
> > >
> > > Ah - I guess this is where we're missing each other, which is good and
> > > helpful.
> > >
> > > I would argue that any user that is _storing_ tokens is doing way too
> much
> > > work. If they are doing short tasks, they should just treat them as
> > > ephemeral. If they are doing longer tasks, they need to deal with
> timeouts.
> > > SO, this:
> > >
> > >
> > > - User gets token, it says "expires_at Now+4 hours"
> > > - User starts a brief set of automation tasks in their system
> > >that does not use python and has not failed with invalid tokens thus
> > >far.
> > >
> > > should be:
> > >
> > > - User starts a brief set of automation tasks in their system
> > > that does not use python and has not failed with invalid tokens thus
> > > far.
> > >
> > > "Get a token" should never be an activity that anyone ever consciously
> > > performs.
> > >
> > >
> > This is my view. Never, ever, ever assume your token is good until
> > expiration. Assume the token might be broken at any request and know how
> to
> > re-auth.
> >
> > > And now they have to refactor their app, because this may happen again,
> > >> and they have to make sure that invalid token errors can bubble up to
> the
> > >> layer that has the username/password, or accept rolling back and
> > >> retrying the whole thing.
> > >>
> > >> I'm not saying anybody has this system, I'm suggesting we're putting
> > >> undue burden on users with an unknown consequence. Falling back to
> UUID
> > >> for a while has a known cost of a little bit of code and checking junk
> > >> tokens twice.
> > >>
> > >
> > Please do not advocate "falling back" to UUID. I am actually against
> making
> > fernet the default (very, very strongly), if we have to have this
> > "fallback" code. It is the wrong kind of approach, we already have
> serious
> > issues with complex code paths that produce subtly different results. If
> > the options are:
> >
> > 1) Make Fernet Default and have "fallback" code
> >
> > or
> >
> > 2) Leave UUID default and highly recommend fernet (plus gate on fernet
> > primarily, default in devstack)
> >
> > I will jump on my soapbox and be very loudly in favor of the 2nd option.
> If
> > we communicate this is a change that will happen (hey, maybe throw an
> > error/make the config option "none" so it has to be explicit) in Newton,
> > and then move to a Fernet default in O - I'd be ok with that.
> >
> > >
> > > Totally. I have no problem with the suggestion that keystone handle
> this.
> > > But I also think that users should quite honestly stop thinking about
> > > tokens at all. Tokens are an implementation detail that if any user
> thinks
> > > about while writing their app they're setting themselves up to be
> screwed -
> > > so we should make sure we're not talking about them in a primary way
> such
> > > as to suggest that people focus a lot of energy on them.
> > >
> > > (I also frequently see users who are using python libraries even get
> > > everything horribly wrong and screw themselves because they think they
> need
> > > to think about tokens)
> > >
> >
> > Better communication that tokens are ephemeral and should not 
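
[Editor's note: the "know how to re-auth" advice in this thread amounts to wrapping every call in a retry loop. A minimal sketch, assuming a hypothetical client exposing a `reauthenticate()` method — this is not keystoneauth's actual API:]

```python
class TokenExpired(Exception):
    """Raised when a service rejects the current token (e.g. HTTP 401)."""

def call_with_reauth(client, request, max_retries=1):
    """Issue a request, re-authenticating once if the token is rejected."""
    for attempt in range(max_retries + 1):
        try:
            return request(client)
        except TokenExpired:
            if attempt == max_retries:
                raise
            client.reauthenticate()  # fetch a fresh token, then retry

# A fake client demonstrating the flow: first call fails, retry succeeds.
class FakeClient:
    def __init__(self):
        self.token_valid = False
    def reauthenticate(self):
        self.token_valid = True
    def list_servers(self):
        if not self.token_valid:
            raise TokenExpired()
        return ["vm-1"]

result = call_with_reauth(FakeClient(), lambda c: c.list_servers())
# result -> ["vm-1"]
```

A client written this way never needs to care whether a token was invalidated by expiry, revocation, or a provider format change.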

Re: [openstack-dev] [nova] Should be instance_dir in all nova compute node same ?

2016-05-03 Thread Ghe Rivero
Is there any specific reason why this is required that way? Or is it just a
"feature"?


Ghe Rivero

On 03/05/16 11:42, Matthew Booth wrote:
On Fri, Apr 29, 2016 at 2:47 AM, Eli Qiao wrote:


hi team,

Is there any requirement that all compute nodes' instance_dir should
be the same?


Yes. This is assumed in many places, certainly in cold migration/resize.

Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] Easy Way to Test Tricircle North-South L3 Networking

2016-05-03 Thread Shinobu Kinjo
Vega,

On Tue, May 3, 2016 at 5:49 PM, Vega Cai  wrote:
> Hi all,
>
> Just would like to share a way to test Tricircle north-south L3 networking
> without requiring the third interface.
>
> In the Tricircle readme, it is said that you need to add an interface in
> your host to br-ext bridge. One interface to access the host, one interface
> for east-west networking and one interface for north-south networking, so
> all together three interfaces are required.
>
> What if your host has only two interfaces? Here is another deployment
> choice.
>
> First, change your external network type to flat. If you are using the
> DevStack script provided by Tricircle, make the following changes in node2's
> local.conf, then run DevStack in node2.
>
> (1) change Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS
> from (network_vlan_ranges=bridge:2001:3000,extern:3001:4000)
> to (network_vlan_ranges=bridge:2001:3000)
> (since we are going to use a flat external network, there is no need to
> configure a VLAN
> range for extern)
> (2) add PHYSICAL_NETWORK=extern
> (3) keep OVS_BRIDGE_MAPPINGS=bridge:br-bridge,extern:br-ext

Good point.

>
> Second, specify flat type when creating external network.
>
> curl -X POST http://127.0.0.1:9696/v2.0/networks
>-H "Content-Type: application/json" \
>-H "X-Auth-Token: $token" \
>-d '{"network": {"name": "ext-net", "admin_state_up": true,
> "router:external": true, "provider:network_type": "flat",
> "provider:physical_network": "extern", "availability_zone_hints":
> ["Pod2"]}}'
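
[Editor's note: the curl call above just POSTs a JSON body. For reference, the same body built programmatically — endpoint and token handling are omitted; this is a sketch, not Tricircle code:]

```python
import json

def build_flat_extnet_body(name, physical_network, az_hints):
    """Build the request body for a flat external network,
    matching the curl example above."""
    return json.dumps({"network": {
        "name": name,
        "admin_state_up": True,
        "router:external": True,
        "provider:network_type": "flat",
        "provider:physical_network": physical_network,
        "availability_zone_hints": az_hints,
    }})

body = build_flat_extnet_body("ext-net", "extern", ["Pod2"])
```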

Understood.

>
> Third, configure IP address of br-ext.
>
> sudo ifconfig br-ext 163.3.124.1 netmask 255.255.255.0
>
> Here 163.3.124.1 is your external network gateway IP, set net mask
> according to your CIDR.
>
> After the above steps, you can access your VM via floating IP in node2. Also
> your VM can ping the external gateway.
>
> Would you like your VM to access the Internet? (Of course, node2 itself
> should be able to access the Internet.) Two more steps to follow:
> (1) Enable packet forwarding in node2
>
> sudo bash
> echo 1 >/proc/sys/net/ipv4/ip_forward
>
> (2) Configure SNAT in node2
>
> sudo iptables -t nat -I POSTROUTING -s 163.3.124.0/24 -o eth1 -j SNAT
> --to-source 10.250.201.21
>
> 163.3.124.0/24 is your external network CIDR, eth1 is the interface
> associated with your default route in node2 and 10.250.201.21 is the IP of
> eth1.

I would like to avoid this kind of hacky workaround as much as possible.
I would like to see your further recommendations so that we can easily and
quickly build a cascaded stack system, including the top layer.

>
> Hope this information helps.
>
> BR
> Zhiyuan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Email:
shin...@linux.com
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Nominate Major Hayden for core in openstack-ansible-security

2016-05-03 Thread Jim Rollenhagen

> On May 3, 2016, at 14:47, Truman, Travis  
> wrote:
> 
> Major has made an incredible number of contributions of code and reviews to 
> the OpenStack-Ansible community. Given his role as the primary author of the 
> openstack-ansible-security project, I can think of no better addition to the 
> core reviewer team.

Sounds like a major win for the team!

// jim 

> 
> Travis Truman
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread Fox, Kevin M
RadosGW has been excluded from joining the OpenStack community in part due to
its use of C++. Now that we're talking about alternate languages, might that be
on the table?

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com]
Sent: Tuesday, May 03, 2016 3:22 PM
To: openstack-dev
Subject: Re: [openstack-dev] [tc] supporting Go

Excerpts from Fox, Kevin M's message of 2016-05-03 22:11:06 +:
> If we let Go in, and there is no pluggable middleware, where do RadosGW
> and other Swift-API-compatible implementations then stand? Should we bless
> C++ too? As I understand it, there are a lot of clouds deployed with the
> RadosGW but Refstack rejects them.

RadosGW isn't part of an OpenStack project, and DefCore requires running
the code produced by our community, not just code that meets the same
APIs.

Doug

>
> Thanks,
> Kevin
> 
> From: Doug Hellmann [d...@doughellmann.com]
> Sent: Tuesday, May 03, 2016 2:50 PM
> To: openstack-dev
> Subject: Re: [openstack-dev] [tc] supporting Go
>
> Excerpts from John Dickinson's message of 2016-05-03 13:01:28 -0700:
> >
> > On 3 May 2016, at 12:19, Monty Taylor wrote:
> >
> > > On 05/03/2016 01:45 PM, Michael Krotscheck wrote:
> > >> On Tue, May 3, 2016 at 9:03 AM John Dickinson wrote:
> > >>
> > >>
> > >> As a starting point, what would you like to see addressed in the
> > >> document I'm drafting?
> > >>
> > >>
> > >> I'm going through this project with JavaScript right now. Here's some of
> > >> the things I've had to address:
> > >>
> > >> - Common language formatting rules (ensure that a pep8-like thing 
> > >> exists).
> > >> - Mirroring dependencies?
> > >> - Building Documentation
> > >
> > > Mirroring and building are the ones that we'll definitely want to work 
> > > together on in terms of figuring out how to support. go get being able to 
> > > point at any git repo for depends is neat - but it increases the amount 
> > > of internet surface-area in the gate. Last time I looked (last year) 
> > > there were options for doing just the fetch part of go get separate from 
> > > the build part.
> > >
> > > In any case, as much info as you can get about the mechanics of 
> > > downloading dependencies, especially as it relates to pre-caching or 
> > > pointing build systems at local mirrors of things holistically rather 
> > > than by modifying the source code would be useful. We've gone through a 
> > > couple of design iterations on javascript support as we've dived in 
> > > further.
> >
> > Are these the sort of things that need to be in a resolution saying that 
> > it's ok to write code in Golang? I'll definitely agree that these questions 
> > are important, and I don't have the answers yet (although I expect we will 
> > by the time any Golang code lands in Swift). We've already got the 
> > Consistent Testing Interface doc[1] which talks about having tests, a 
> > coding style, and docs (amongst other things). Does a resolution about 
> > Golang being acceptable need to describe dependency management, build 
> > tooling, and CI?
>
> There are separate interfaces described there for Python and JavaScript.
> I think it makes sense to start documenting the expected interface for
> projects written in Go, for the same reason that we have the others, and
> I don't think we would want to say "Go is fine" until we at least have a
> start on that documentation -- otherwise we have a gap where projects
> may do whatever they want, and we have to work to get them back into
> sync.
>
> Doug
>
> >
> > --John
> >
> >
> >
> >
> > [1] http://governance.openstack.org/reference/project-testing-interface.html
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] release hiatus for 2-6 May

2016-05-03 Thread Doug Hellmann
Because some of the release team is traveling this week, and the
infrastructure team is also dealing with some issues that may make
building releases unreliable, we're going to hold off on publishing any
releases this week. We will check with the infra team on 9 May and
resume publishing releases then if they give the all-clear. Please feel
free to submit release requests this week anyway, just be aware that we
won't process them before next week.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread Doug Hellmann
Excerpts from John Dickinson's message of 2016-05-03 15:05:08 -0700:
> 
> On 3 May 2016, at 14:50, Doug Hellmann wrote:
> 
> > Excerpts from John Dickinson's message of 2016-05-03 13:01:28 -0700:
> >>
> >> On 3 May 2016, at 12:19, Monty Taylor wrote:
> >>
> >>> On 05/03/2016 01:45 PM, Michael Krotscheck wrote:
>  On Tue, May 3, 2016 at 9:03 AM John Dickinson wrote:
> 
> 
>  As a starting point, what would you like to see addressed in the
>  document I'm drafting?
> 
> 
>  I'm going through this project with JavaScript right now. Here's some of
>  the things I've had to address:
> 
>  - Common language formatting rules (ensure that a pep8-like thing 
>  exists).
>  - Mirroring dependencies?
>  - Building Documentation
> >>>
> >>> Mirroring and building are the ones that we'll definitely want to work 
> >>> together on in terms of figuring out how to support. go get being able to 
> >>> point at any git repo for depends is neat - but it increases the amount 
> >>> of internet surface-area in the gate. Last time I looked (last year) 
> >>> there were options for doing just the fetch part of go get separate from 
> >>> the build part.
> >>>
> >>> In any case, as much info as you can get about the mechanics of 
> >>> downloading dependencies, especially as it relates to pre-caching or 
> >>> pointing build systems at local mirrors of things holistically rather 
> >>> than by modifying the source code would be useful. We've gone through a 
> >>> couple of design iterations on javascript support as we've dived in 
> >>> further.
> >>
> >> Are these the sort of things that need to be in a resolution saying that 
> >> it's ok to write code in Golang? I'll definitely agree that these 
> >> questions are important, and I don't have the answers yet (although I 
> >> expect we will by the time any Golang code lands in Swift). We've already 
> >> got the Consistent Testing Interface doc[1] which talks about having 
> >> tests, a coding style, and docs (amongst other things). Does a resolution 
> >> about Golang being acceptable need to describe dependency management, 
> >> build tooling, and CI?
> >
> > There are separate interfaces described there for Python and JavaScript.
> > I think it makes sense to start documenting the expected interface for
> > projects written in Go, for the same reason that we have the others, and
> > I don't think we would want to say "Go is fine" until we at least have a
> > start on that documentation -- otherwise we have a gap where projects
> > may do whatever they want, and we have to work to get them back into
> > sync.
> >
> > Doug
> >
> 
> Yeah, I see that. Can you help me come up with that list? I honestly don't 
> know the "right" way to do everything in Go. These are some of the things 
> that need to be sussed out over the next several months (see the original 
> email).
> 
> I've proposed my initial draft to https://review.openstack.org/#/c/312267/. 
> I'd be happy if you pushed over that or had a follow-on patch to help 
> describe the interfaces like with JS and Python.

I can try to work with you on that, but not being a Go programmer myself
I'm not sure I'll have answers. Your best bet is to start by identifying
the analogous operations. A few off the top of my head:

- How do you run unit tests? Do you need to do that for multiple versions
  of the language like we do with Python 2.7 and 3.x?

- How about code coverage as part of the tests?

- How do you manage dependencies? How does that fit with our global
  requirements and constraints lists pattern we have for Python apps?

- How do you build source packages?

- Should we be building binary packages in some format (what's usual
  for go?) and if so, what format?

- How will you manage in-tree documentation? Sphinx works fine
  stand-alone, so maybe stick with the existing pattern there?

- How will you handle log and message string translations? Are there new
  tools we need to add to our CI images?

- How would devstack install the project as part of a gate job?

Does Go typically use a Makefile for these things, or some other driving
tool? Can the same tool be used across all projects with the same
targets ("make test", etc.) for consistency and to make the CI system
setup easier?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [searchlight] OpenStack Newton Austin Summit

2016-05-03 Thread Tripp, Travis S
Hello everybody,

Below is my summary of the Searchlight related discussions and results from the 
Austin Summit. I apologize for the length, but just decided to include all the 
session summaries in a single email. As always, please correct, add, etc!

5 Sessions (3 Searchlight Session, 1 joint Swift Team Session, 1 Horizon 
Session)

https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=SearchLight%3A



Searchlight Notifications (Fishbowl)

This had good attendance with at least 25 people in the room. We used the 
following etherpad to communicate and share:

https://etherpad.openstack.org/p/searchlight-newton-summit-notifications

Nova notifications:

Jay Pipes and another person (didn’t catch his name) provided a lot of help 
with pointers on using versioned notifications. Jay said that they (Nova) are 
working towards providing an API where we can get a schema [0] for the 
versioned notifications and then can extract the data we need based on the 
version of the Nova data that we need.

[0] http://eavesdrop.openstack.org/#Nova_Notification_Meeting
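
[Editor's note: a consumer branching on the payload version might look like the sketch below. The payload layout and field names here are made up for illustration and are not Nova's actual versioned notification schema:]

```python
def extract_instance_fields(notification):
    """Pull the fields an indexer needs, branching on payload version.

    The layout is a made-up example; a real consumer would look up the
    schema for the reported version, as discussed above.
    """
    payload = notification["payload"]
    major = int(notification.get("version", "1.0").split(".")[0])
    if major >= 2:
        # Hypothetical newer layout nesting data under an object wrapper.
        data = payload["nova_object.data"]
    else:
        data = payload
    return {"uuid": data["uuid"], "state": data["state"]}

sample = {
    "version": "2.0",
    "payload": {"nova_object.data": {"uuid": "abc", "state": "active"}},
}
fields = extract_instance_fields(sample)
# fields -> {'uuid': 'abc', 'state': 'active'}
```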

Jay later introduced us to Balazs Gibizer (gibi - the Nova versioned 
notification sub-team lead). Balazs said he’d love to help us get any data in 
that we need and he gave us the pointer that they are restarting the weekly 
versioned notification meetings [1].
  

[1] https://review.openstack.org/#/c/311194/

It was mentioned that we should also take a look at “Stackdistiller” [2].

[2] https://github.com/openstack/stacktach-stackdistiller

Jay also mentioned that he has brought up the topic of the Nova team to stop 
trying to do their own search and proxy the Nova list API to Searchlight [3].

[3] http://lists.openstack.org/pipermail/openstack-dev/2016-April/093482.html


Ironic notifications: Three people from Ironic attended and expressed a lot of 
interest in supporting Searchlight. They noted that Ironic is currently working 
on adding notifications and their feedback was "Please look at the 
notifications patch and just tell us what you need.” [4]

[4] https://bugs.launchpad.net/searchlight/+bug/1526408


Designate notifications: Graham Hayes noted that we need to consider v1 
deprecated and can do that as soon as Horizon moves to v2.

Cinder notifications: Duncan Thomas says that the changes Searchlight needs for 
notifications should be available within a few weeks.

Heat notifications: Several people expressed interest in Heat, but there 
weren’t any concrete actions taken from this session.


Searchlight Priorities

I communicated that the most important theme for Newton is production 
readiness. We will be targeting moving from 0.x to 1.x this release (Kilo was 
an experimental glance feature, Liberty was Searchlight 0.1, and Mitaka was 
Searchlight 0.2). So scalability, security, and performance are all top 
priority. This includes moving to ElasticSearch 2.x, which is nearly complete. 
Our ongoing theme of adding plugins for additional resources will continue, but 
reviews related to production readiness should have higher priority. Richard 
Jones mentioned using OSIC for testing and would talk to somebody about 
creating OSAD scripts to deploy Searchlight in OSIC.

The session was unfortunately a bit short, leaving us with only time to walk 
the list of all blueprints listed as High and give people an opportunity to 
voice an opinion whether or not any high priorities should be bumped down and 
whether or not we were missing any high priorities. Here are a few notes from 
that session:

Melissa, the Rackspace Public Cloud control panel program manager, attended and
said that they are looking into using Searchlight to fill the API gaps. They
have tried to handle quite a few things on the front end (JavaScript), but
still have some trouble getting what they need out of the APIs. She said her
highest priorities for Searchlight are servers (instances) and doing anything we
can to ensure there is no impact to Nova when deploying Searchlight. We
mentioned the versioned notification work and I also proposed that we add a 
configuration option to disable API callbacks for any data not received via 
notifications.  I have opened a bug on this, and we’ll have to look further 
into this idea [5].

[5] https://bugs.launchpad.net/searchlight/+bug/1577947

We dropped the priority of adding support for Neutron policy based sharing to 
medium [6]. Brad Pokorny (Symantec – had a main conference presentation on 
securing APIs via policy) told us that he thought this should be a lower 
priority than our other blueprints. He also said he’d be willing to look over 
our current Policy controls on searchlight to look for holes.

[6] https://blueprints.launchpad.net/searchlight/+spec/neutron-tenant-rbac

We added back a story to further improve developer docs.  Several people asked 
how they could create a plugin and said they wanted more help. Steve has 
already started on this request [7].

[7] 

Re: [openstack-dev] [devstack][neutron] VMWare NSX CI - voting on devstack changes long after plugin decomposition

2016-05-03 Thread Salvatore Orlando
There is a job which has been turned on again by mistake and I'm working on
ensuring it's put to sleep again (for good this time).

If you can avoid disabling the whole account it would be great as the same
credentials are used by the still-voting nova CI.

Cheers,
Salvatore

On 3 May 2016 at 10:47, Sean M. Collins  wrote:

> When the VMWare plugin was decomposed from the main Neutron tree (
> https://review.openstack.org/#/c/160463/) it appears that the CI system
> was left turned on.
>
>
> http://208.91.1.172/logs/neutron/168438/48/423669-large-ops/logs/q-svc.log.2016-05-03-085740
>
> 2016-05-03 09:21:00.577 21706 ERROR neutron plugin_class =
> self.load_class_for_provider(namespace, plugin_provider)
> 2016-05-03 09:21:00.577 21706 ERROR neutron   File
> "/opt/stack/neutron/neutron/manager.py", line 145, in
> load_class_for_provider
> 2016-05-03 09:21:00.577 21706 ERROR neutron raise
> ImportError(_("Plugin '%s' not found.") % plugin_provider)
> 2016-05-03 09:21:00.577 21706 ERROR neutron ImportError: Plugin
> 'neutron.plugins.vmware.plugin.NsxPlugin' not found.
>
>
> I don't know the criteria for when this specific CI job is run; I appear
> to be the only one triggering it, for a rather long time.
>
> http://paste.openstack.org/show/495994/
>
> So, it's still voting on DevStack changes but I think we probably should
> revoke that.
>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread Doug Hellmann
Excerpts from Fox, Kevin M's message of 2016-05-03 22:11:06 +:
> If we let Go in, and there is no pluggable middleware, where do RadosGW
> and other Swift-API-compatible implementations then stand? Should we bless
> C++ too? As I understand it, there are a lot of clouds deployed with the
> RadosGW but Refstack rejects them.

RadosGW isn't part of an OpenStack project, and DefCore requires running
the code produced by our community, not just code that meets the same
APIs.

Doug

> 
> Thanks,
> Kevin 
> 
> From: Doug Hellmann [d...@doughellmann.com]
> Sent: Tuesday, May 03, 2016 2:50 PM
> To: openstack-dev
> Subject: Re: [openstack-dev] [tc] supporting Go
> 
> Excerpts from John Dickinson's message of 2016-05-03 13:01:28 -0700:
> >
> > On 3 May 2016, at 12:19, Monty Taylor wrote:
> >
> > > On 05/03/2016 01:45 PM, Michael Krotscheck wrote:
> > >> On Tue, May 3, 2016 at 9:03 AM John Dickinson wrote:
> > >>
> > >>
> > >> As a starting point, what would you like to see addressed in the
> > >> document I'm drafting?
> > >>
> > >>
> > >> I'm going through this project with JavaScript right now. Here's some of
> > >> the things I've had to address:
> > >>
> > >> - Common language formatting rules (ensure that a pep8-like thing 
> > >> exists).
> > >> - Mirroring dependencies?
> > >> - Building Documentation
> > >
> > > Mirroring and building are the ones that we'll definitely want to work 
> > > together on in terms of figuring out how to support. go get being able to 
> > > point at any git repo for depends is neat - but it increases the amount 
> > > of internet surface-area in the gate. Last time I looked (last year) 
> > > there were options for doing just the fetch part of go get separate from 
> > > the build part.
> > >
> > > In any case, as much info as you can get about the mechanics of 
> > > downloading dependencies, especially as it relates to pre-caching or 
> > > pointing build systems at local mirrors of things holistically rather 
> > > than by modifying the source code would be useful. We've gone through a 
> > > couple of design iterations on javascript support as we've dived in 
> > > further.
> >
> > Are these the sort of things that need to be in a resolution saying that 
> > it's ok to write code in Golang? I'll definitely agree that these questions 
> > are important, and I don't have the answers yet (although I expect we will 
> > by the time any Golang code lands in Swift). We've already got the 
> > Consistent Testing Interface doc[1] which talks about having tests, a 
> > coding style, and docs (amongst other things). Does a resolution about 
> > Golang being acceptable need to describe dependency management, build 
> > tooling, and CI?
> 
> There are separate interfaces described there for Python and JavaScript.
> I think it makes sense to start documenting the expected interface for
> projects written in Go, for the same reason that we have the others, and
> I don't think we would want to say "Go is fine" until we at least have a
> start on that documentation -- otherwise we have a gap where projects
> may do whatever they want, and we have to work to get them back into
> sync.
> 
> Doug
> 
> >
> > --John
> >
> >
> >
> >
> > [1] http://governance.openstack.org/reference/project-testing-interface.html
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Live Migration: Austin summit update

2016-05-03 Thread Chris Friesen

On 05/03/2016 03:14 AM, Daniel P. Berrange wrote:


There are currently many options for live migration with QEMU that can
assist in completion





Given this I've spent the last week creating an automated test harness
for QEMU upstream which triggers migration with an extreme guest CPU
load and measures the performance impact of these features on the guest,
and whether the migration actually completes.

I hope to be able to publish the results of this investigation this week
which should facilitate us in deciding which is best to use for OpenStack.
The spoiler though is that all the options are pretty terrible, except for
post-copy.


Just to be clear, it's not really CPU load that's the issue though, right?

Presumably it would be more accurate to say that the issue is the rate at which 
unique memory pages are being dirtied and the total number of dirty pages 
relative to your copy bandwidth.


This probably doesn't change the results though...at a high enough dirty rate 
you either pause the VM to keep it from dirtying more memory or you post-copy 
migrate and dirty the memory on the destination.


Chris



Re: [openstack-dev] [kolla][gate] Add a gating check job for precheck

2016-05-03 Thread Hui Kang
Hi, Lei and Steve,
I think what Lei said makes sense. When I look at the other proposed
gate jobs [1], the function can be tested against the service.
However, for the precheck roles, it depends on the deployment. If the
deployment task could not pass the precheck phase, the job should be
stopped.

So I agree with adding the precheck to the deployment gate task, rather than
adding a dedicated gate job. Any comments? Thanks.

- Hui


[1] https://etherpad.openstack.org/p/kolla-newton-summit-kolla-functional-gating

On Tue, May 3, 2016 at 12:50 AM, Jeffrey Zhang  wrote:
> does Kolla really need a new job to run the precheck?
>
> why not run the precheck before deploying Kolla in the current
> logic?
>
> On Tue, May 3, 2016 at 12:45 PM, Hui Kang  wrote:
>>
>> Steve,
>> Ok, I created a bp for this. Feel free to edit
>> https://blueprints.launchpad.net/kolla/+spec/gate-job-precheck
>>
>> Best regards,
>> - Hui
>>
>> On Mon, May 2, 2016 at 11:50 PM, Steven Dake (stdake) 
>> wrote:
>> > Hui,
>> >
>> > I am planning to add a general gating blueprint with work items for the
>> > 24
>> > gates we identified.  Just g o ahead and get started and I'll have the
>> > gate blueprint ready to go by tomorrow.
>> >
>> > Regards
>> > -steve
>> >
>> > On 5/2/16, 8:41 PM, "Hui Kang"  wrote:
>> >
>> >>Fellow kolla developers,
>> >>I am wondering if anyone is working on adding a gate job for precheck.
>> >>If not, I'd like to kick off the task by adding a bp. Any comment?
>> >>Thanks.
>> >>
>> >>- Hui Kang
>> >>IRC: huikang
>> >>
>>
>> >
>> >
>> >
>>
>
>
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
>



Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread Fox, Kevin M
If we let Go in, and there is no pluggable middleware, where do RadosGW and 
other Swift-API-compatible implementations then stand? Should we bless C++ too? 
As I understand it, there are a lot of clouds deployed with the RadosGW but 
Refstack rejects them.

Thanks,
Kevin 

From: Doug Hellmann [d...@doughellmann.com]
Sent: Tuesday, May 03, 2016 2:50 PM
To: openstack-dev
Subject: Re: [openstack-dev] [tc] supporting Go

Excerpts from John Dickinson's message of 2016-05-03 13:01:28 -0700:
>
> On 3 May 2016, at 12:19, Monty Taylor wrote:
>
> > On 05/03/2016 01:45 PM, Michael Krotscheck wrote:
> >> On Tue, May 3, 2016 at 9:03 AM John Dickinson wrote:
> >>
> >>
> >> As a starting point, what would you like to see addressed in the
> >> document I'm drafting?
> >>
> >>
> >> I'm going through this project with JavaScript right now. Here's some of
> >> the things I've had to address:
> >>
> >> - Common language formatting rules (ensure that a pep8-like thing exists).
> >> - Mirroring dependencies?
> >> - Building Documentation
> >
> > Mirroring and building are the ones that we'll definitely want to work 
> > together on in terms of figuring out how to support. go get being able to 
> > point at any git repo for depends is neat - but it increases the amount of 
> > internet surface-area in the gate. Last time I looked (last year) there 
> > were options for doing just the fetch part of go get separate from the 
> > build part.
> >
> > In any case, as much info as you can get about the mechanics of downloading 
> > dependencies, especially as it relates to pre-caching or pointing build 
> > systems at local mirrors of things holistically rather than by modifying 
> > the source code would be useful. We've gone through a couple of design 
> > iterations on javascript support as we've dived in further.
>
> Are these the sort of things that need to be in a resolution saying that it's 
> ok to write code in Golang? I'll definitely agree that these questions are 
> important, and I don't have the answers yet (although I expect we will by the 
> time any Golang code lands in Swift). We've already got the Consistent 
> Testing Interface doc[1] which talks about having tests, a coding style, and 
> docs (amongst other things). Does a resolution about Golang being acceptable 
> need to describe dependency management, build tooling, and CI?

There are separate interfaces described there for Python and JavaScript.
I think it makes sense to start documenting the expected interface for
projects written in Go, for the same reason that we have the others, and
I don't think we would want to say "Go is fine" until we at least have a
start on that documentation -- otherwise we have a gap where projects
may do whatever they want, and we have to work to get them back into
sync.

Doug

>
> --John
>
>
>
>
> [1] http://governance.openstack.org/reference/project-testing-interface.html



Re: [openstack-dev] [oslo][mistral] Saga of process than ack and where can we go from here...

2016-05-03 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2016-05-03 16:45:47 -0500:
> Excerpts from Joshua Harlow's message of 2016-05-03 14:24:13 -0700:
> > Howdy folks,
> > 
> > So I met up with *some* of the mistral folks during Friday last week at 
> > the summit and I was wondering if we as a group can find a path to help 
> > that project move forward in their desire to have some kind of 
> > process-then-ack (vs the existing ack-then-process) in their usage of 
> > the messaging layer.
> > 
> > I got to learn that the following exists in mistral (sad-face):
> > 
> > https://github.com/openstack/mistral/blob/master/mistral/engine/rpc.py#L38
> > 
> > And it got me thinking about how/if we can as a group possibly allow a 
> > variant of https://review.openstack.org/#/c/229186/ to get worked on and 
> > merged in and released so that the above 'hack' can be removed.
> 
> Based on the comments on that patch, it looks like the consensus was to
> add a new method (to the client & dispatcher & whatever else needs it)
> to implement the new semantics. That way it's clear from the caller
> side what is expected, and it even makes it possible to adopt the new
> capability in other projects without having to go all-or-nothing.
> 
> That said, I agree with Mehdi that *most* RPC calls throughout OpenStack,
> not being idempotent, should not use process-then-ack.

We also need to understand how the new semantics work for drivers other
than Rabbit. We have a lot of renewed interest in zmq for example.

Doug
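The idempotency caveat above is the crux: process-then-ack gives at-least-once delivery, so a handler can run twice for the same message. A common mitigation is to deduplicate on a message id before executing the side effect. The sketch below is a hypothetical illustration of that pattern, not an oslo.messaging feature:

```python
import functools

def idempotent(handler):
    """Skip re-execution when a redelivered message carries an id we have
    already processed successfully (at-least-once -> effectively-once)."""
    seen = set()

    @functools.wraps(handler)
    def wrapper(msg_id, payload):
        if msg_id in seen:
            return "duplicate-ignored"
        result = handler(msg_id, payload)
        seen.add(msg_id)          # record only after the handler succeeds
        return result
    return wrapper

calls = []

@idempotent
def create_port(msg_id, payload):   # hypothetical non-idempotent RPC handler
    calls.append(payload)           # side effect we must not repeat
    return "created"

print(create_port("m1", {"port": "eth0"}))  # created
print(create_port("m1", {"port": "eth0"}))  # duplicate-ignored (redelivery)
print(len(calls))                            # 1
```

In a real deployment the `seen` set would have to live somewhere durable and shared (a database, not process memory), which is part of why blanket process-then-ack is risky for non-idempotent calls.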



Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread John Dickinson


On 3 May 2016, at 14:50, Doug Hellmann wrote:

> Excerpts from John Dickinson's message of 2016-05-03 13:01:28 -0700:
>>
>> On 3 May 2016, at 12:19, Monty Taylor wrote:
>>
>>> On 05/03/2016 01:45 PM, Michael Krotscheck wrote:
 On Tue, May 3, 2016 at 9:03 AM John Dickinson wrote:


 As a starting point, what would you like to see addressed in the
 document I'm drafting?


 I'm going through this project with JavaScript right now. Here's some of
 the things I've had to address:

 - Common language formatting rules (ensure that a pep8-like thing exists).
 - Mirroring dependencies?
 - Building Documentation
>>>
>>> Mirroring and building are the ones that we'll definitely want to work 
>>> together on in terms of figuring out how to support. go get being able to 
>>> point at any git repo for depends is neat - but it increases the amount of 
>>> internet surface-area in the gate. Last time I looked (last year) there 
>>> were options for doing just the fetch part of go get separate from the 
>>> build part.
>>>
>>> In any case, as much info as you can get about the mechanics of downloading 
>>> dependencies, especially as it relates to pre-caching or pointing build 
>>> systems at local mirrors of things holistically rather than by modifying 
>>> the source code would be useful. We've gone through a couple of design 
>>> iterations on javascript support as we've dived in further.
>>
>> Are these the sort of things that need to be in a resolution saying that 
>> it's ok to write code in Golang? I'll definitely agree that these questions 
>> are important, and I don't have the answers yet (although I expect we will 
>> by the time any Golang code lands in Swift). We've already got the 
>> Consistent Testing Interface doc[1] which talks about having tests, a coding 
>> style, and docs (amongst other things). Does a resolution about Golang being 
>> acceptable need to describe dependency management, build tooling, and CI?
>
> There are separate interfaces described there for Python and JavaScript.
> I think it makes sense to start documenting the expected interface for
> projects written in Go, for the same reason that we have the others, and
> I don't think we would want to say "Go is fine" until we at least have a
> start on that documentation -- otherwise we have a gap where projects
> may do whatever they want, and we have to work to get them back into
> sync.
>
> Doug
>

Yeah, I see that. Can you help me come up with that list? I honestly don't know 
the "right" way to do everything in Go. These are some of the things that need 
to be sussed out over the next several months (see the original email).

I've proposed my initial draft to https://review.openstack.org/#/c/312267/. I'd 
be happy if you pushed over that or had a follow-on patch to help describe the 
interfaces like with JS and Python.


--John







Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread Doug Hellmann
Excerpts from John Dickinson's message of 2016-05-03 13:01:28 -0700:
> 
> On 3 May 2016, at 12:19, Monty Taylor wrote:
> 
> > On 05/03/2016 01:45 PM, Michael Krotscheck wrote:
> >> On Tue, May 3, 2016 at 9:03 AM John Dickinson wrote:
> >>
> >>
> >> As a starting point, what would you like to see addressed in the
> >> document I'm drafting?
> >>
> >>
> >> I'm going through this project with JavaScript right now. Here's some of
> >> the things I've had to address:
> >>
> >> - Common language formatting rules (ensure that a pep8-like thing exists).
> >> - Mirroring dependencies?
> >> - Building Documentation
> >
> > Mirroring and building are the ones that we'll definitely want to work 
> > together on in terms of figuring out how to support. go get being able to 
> > point at any git repo for depends is neat - but it increases the amount of 
> > internet surface-area in the gate. Last time I looked (last year) there 
> > were options for doing just the fetch part of go get separate from the 
> > build part.
> >
> > In any case, as much info as you can get about the mechanics of downloading 
> > dependencies, especially as it relates to pre-caching or pointing build 
> > systems at local mirrors of things holistically rather than by modifying 
> > the source code would be useful. We've gone through a couple of design 
> > iterations on javascript support as we've dived in further.
> 
> Are these the sort of things that need to be in a resolution saying that it's 
> ok to write code in Golang? I'll definitely agree that these questions are 
> important, and I don't have the answers yet (although I expect we will by the 
> time any Golang code lands in Swift). We've already got the Consistent 
> Testing Interface doc[1] which talks about having tests, a coding style, and 
> docs (amongst other things). Does a resolution about Golang being acceptable 
> need to describe dependency management, build tooling, and CI?

There are separate interfaces described there for Python and JavaScript.
I think it makes sense to start documenting the expected interface for
projects written in Go, for the same reason that we have the others, and
I don't think we would want to say "Go is fine" until we at least have a
start on that documentation -- otherwise we have a gap where projects
may do whatever they want, and we have to work to get them back into
sync.

Doug

> 
> --John
> 
> 
> 
> 
> [1] http://governance.openstack.org/reference/project-testing-interface.html



[openstack-dev] Fwd: [oslo][mistral] Saga of process than ack and where can we go from here...

2016-05-03 Thread Dmitriy Ukhlov
Hi Joshua.

I think Mistral already has a fast solution - they customised 
oslo.messaging RPC to achieve ack-after-process in the Mistral code base.

As for a solution in the oslo.messaging code base… I plan to write a spec for the new 
oslo.messaging driver interface soon, as was agreed during the design session (we 
need a transport-specific interface, not a user-API-specific one as we have now).
Also, we could start work on the new user API needed by Mistral in the meantime.

> Begin forwarded message:
> 
> From: Joshua Harlow 
> Subject: [openstack-dev] [oslo][mistral] Saga of process than ack and where 
> can we go from here...
> Date: May 4, 2016 at 12:24:13 AM GMT+3
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Reply-To: "OpenStack Development Mailing List \(not for usage questions\)" 
> 
> 
> Howdy folks,
> 
> So I met up with *some* of the mistral folks during Friday last week at the 
> summit and I was wondering if we as a group can find a path to help that 
> project move forward in their desire to have some kind of process-then-ack 
> (vs the existing ack-then-process) in their usage of the messaging layer.
> 
> I got to learn that the following exists in mistral (sad-face):
> 
> https://github.com/openstack/mistral/blob/master/mistral/engine/rpc.py#L38
> 
> And it got me thinking about how/if we can as a group possibly allow a 
> variant of https://review.openstack.org/#/c/229186/ to get worked on and 
> merged in and released so that the above 'hack' can be removed.
> 
> I also would like to come to some kind of understanding that we also (mistral 
> folks would hopefully help here) would remove this kind of change in the 
> future as the longer term goal (of something like 
> https://review.openstack.org/#/c/260246/) would progress.
> 
> Thoughts from folks (mistral and oslo)?
> 
> Any way we can create a solution that works in the short term (allowing for 
> that hack to be removed) and working toward the longer term goal?
> 
> -Josh
> 



Re: [openstack-dev] [nova] Distributed Database

2016-05-03 Thread Andrew Laski


On Mon, May 2, 2016, at 01:13 PM, Edward Leafe wrote:
> On May 2, 2016, at 10:51 AM, Mike Bayer  wrote:
> 
> >> Concretely, we think that there are three possible approaches:
> >> 1) We can use the SQLAlchemy API as the common denominator between a 
> >> relational and non-relational implementation of the db.api component. 
> >> These two implementation could continue to converge by sharing a large 
> >> amount of code.
> >> 2) We create a new non-relational implementation (from scratch) of the 
> >> db.api component. It would require probably more work.
> >> 3) We are also studying a last alternative: writing a SQLAlchemy 
> >> engine that targets NewSQL databases (scalability + ACID):
> >>  - https://github.com/cockroachdb/cockroach
> >>  - https://github.com/pingcap/tidb
> > 
> > Going with a NewSQL backend is by far the best approach here.   That way, 
> > very little needs to be reinvented and the application's approach to data 
> > doesn't need to dramatically change.
> 
> I’m glad that Matthieu responded, but I did want to emphasize one thing:
> of *course* this isn’t an ideal approach, but it *is* a practical one.
> The biggest problem in any change like this isn’t getting it to work, or
> to perform better, or anything else except being able to make the change
> while disrupting as little of the existing code as possible. Taking an
> approach that would be more efficient would be a non-starter since it
> wouldn’t provide a clean upgrade path for existing deployments.
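The db.api indirection mentioned in option (1) is what makes that upgrade path plausible: callers go through a single facade and never see which backend is active. A minimal hypothetical sketch of that pattern (loosely modelled on Nova's db.api IMPL style, not actual Nova code):

```python
class SQLAlchemyBackend:
    """Relational implementation of the db.api contract."""
    def instance_get(self, uuid):
        return {"uuid": uuid, "backend": "sqlalchemy"}

class NoSQLBackend:
    """Non-relational implementation honouring the same contract."""
    def instance_get(self, uuid):
        return {"uuid": uuid, "backend": "nosql"}

_BACKENDS = {"sqlalchemy": SQLAlchemyBackend, "nosql": NoSQLBackend}
IMPL = _BACKENDS["sqlalchemy"]()  # chosen once, e.g. from configuration

def instance_get(uuid):
    """Module-level API: callers import this, not a backend class."""
    return IMPL.instance_get(uuid)

print(instance_get("abc-123")["backend"])  # sqlalchemy
```

Because every caller routes through `instance_get`, swapping `IMPL` switches the whole application's persistence layer without touching caller code - which is exactly the "minimal disruption" property being argued for above.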

I would like to point out that this same logic applies to the current
cellsv2 effort. It is a very practical set of changes which allows Nova
to move forward with only minor effort on the part of deployers. And it
moves towards a model that is already used and well understood by large
deployers of Nova while also learning from the shortcomings of the
previous architecture. In short, much of this is already battle tested
and proven.

If we started Nova from scratch, I hear golang is lovely for this sort
of thing, would we do things differently? Probably. However that's not
the position we're in. And we're able to make measurable progress with
cellsv2 at the moment and have a pretty clear idea of the end state. I
can recall conversations about NoSQL as far back as the San Diego
summit, which was my first so I can't say they didn't happen previously,
and this is the first time I've seen any measurable progress on moving
forward with it. But where it would go is not at all clear.

I also want to point out that what was being solved with ROME and what
cellsv2 is solving are two very different things. I saw the talk and was
very impressed, but it was looking to improve upon db access times in a
very specific deployment type. And I didn't get the sense that the point
being made was that ROME/redis was the best solution generally, but for
very geographically distributed controllers with a shared database it
performed much better than an active/active Galera cluster with a large
number of nodes.


> 
> By getting this working without ripping out all of the data models that
> currently exist is an amazing feat. And if by doing so it shows that a
> distributed database is indeed possible, it’s done more than anything
> else that has ever been discussed in the past few years. 
> 
> 
> -- Ed Leafe
> 
> 
> 
> 
> 
> 



Re: [openstack-dev] [oslo][mistral] Saga of process than ack and where can we go from here...

2016-05-03 Thread Doug Hellmann
Excerpts from Joshua Harlow's message of 2016-05-03 14:24:13 -0700:
> Howdy folks,
> 
> So I met up with *some* of the mistral folks during Friday last week at 
> the summit and I was wondering if we as a group can find a path to help 
> that project move forward in their desire to have some kind of 
> process-then-ack (vs the existing ack-then-process) in their usage of 
> the messaging layer.
> 
> I got to learn that the following exists in mistral (sad-face):
> 
> https://github.com/openstack/mistral/blob/master/mistral/engine/rpc.py#L38
> 
> And it got me thinking about how/if we can as a group possibly allow a 
> variant of https://review.openstack.org/#/c/229186/ to get worked on and 
> merged in and released so that the above 'hack' can be removed.

Based on the comments on that patch, it looks like the consensus was to
add a new method (to the client & dispatcher & whatever else needs it)
to implement the new semantics. That way it's clear from the caller
side what is expected, and it even makes it possible to adopt the new
capability in other projects without having to go all-or-nothing.

That said, I agree with Mehdi that *most* RPC calls throughout OpenStack,
not being idempotent, should not use process-then-ack.

Doug

> I also would like to come to some kind of understanding that we also 
> (mistral folks would hopefully help here) would remove this kind of 
> change in the future as the longer term goal (of something like 
> https://review.openstack.org/#/c/260246/) would progress.
> 
> Thoughts from folks (mistral and oslo)?
> 
> Any way we can create a solution that works in the short term (allowing 
> for that hack to be removed) and working toward the longer term goal?
> 
> -Josh
> 



Re: [openstack-dev] [TripleO] Austin summit - session recap/summary

2016-05-03 Thread Jason Rist
On 05/03/2016 10:34 AM, Steven Hardy wrote:
> Hi all,
>
> Some folks have requested a summary of our summit sessions, as has been
> provided for some other projects.
>
> I'll probably go into more detail on some of these topics either via
> subsequent more focussed threads and/or some blog posts but what follows is
> an overview of our summit sessions[1] with notable actions or decisions
> highlighted.  I'm including some of my own thoughts and conclusions, folks
> are welcome/encouraged to follow up with their own clarifications or
> different perspectives :)
>
> TripleO had a total of 5 sessions in Austin I'll cover them one-by-one:
>
> -
> Upgrades - current status and roadmap
> -
>
> In this session we discussed the current state of upgrades - initial
> support for full major version upgrades has been implemented, but the
> implementation is monolithic, highly coupled to pacemaker, and inflexible
> with regard to third-party extraconfig changes.
>
> The main outcomes were that we will add support for more granular
> definition of the upgrade lifecycle to the new composable services format,
> and that we will explore moving towards the proposed lightweight HA
> architecture to reduce the need for so much pacemaker specific logic.
>
> We also agreed that investigating use of mistral to drive upgrade workflows
> was a good idea - currently we have a mixture of scripts combined with Heat
> to drive the upgrade process, and some refactoring into discrete mistral
> workflows may provide a more maintainable solution.  Potential for using
> the existing SoftwareDeployment approach directly via mistral (outside of
> the heat templates) was also discussed as something to be further
> investigated and prototyped.
>
> We also touched on the CI implications of upgrades - we've got an upgrades
> job now, but we need to ensure coverage of full release-to-release upgrades
> (not just commit to commit).
>
> ---
> Containerization status/roadmap
> ---
>
> In this session we discussed the current status of containers in TripleO
> (which is to say, the container based compute node which deploys containers
> via Heat onto an an Atomic host node that is also deployed via Heat), and
> what strategy is most appropriate to achieve a fully containerized TripleO
> deployment.
>
> Several folks from Kolla participated in the session, and there was
> significant focus on where work may happen such that further collaboration
> between communities is possible.  To some extent this discussion on where
> (as opposed to how) proved a distraction and prevented much discussion on
> supportable architectural implementation for TripleO, thus what follows is
> mostly my perspective on the issues that exist:
>
> Significant uncertainty exists wrt integration between Kolla and TripleO -
> there's largely consensus that we want to consume the container images
> defined by the Kolla community, but much less agreement that we can
> feasibly switch to the ansible-orchestrated deployment/config flow
> supported by Kolla without breaking many of our primary operator interfaces
> in a fundamentally unacceptable way, for example:
>
> - The Mistral based API is being implemented on the expectation that the
>   primary interface to TripleO deployments is a parameters schema exposed
>   by a series of Heat templates - this is no longer true in a "split stack"
>   model where we have to hand off to an alternate service orchestration tool.
>
> - The tripleo-ui (based on the Mistral based API) consumes heat parameter
>   schema to build its UI, and Ansible doesn't support the necessary
>   parameter schema definition (such as types and descriptions) to enable
>   this pattern to be replicated.  Ansible also doesn't provide a HTTP API,
>   so we'd still have to maintain and API surface for the (non python) UI to
>   consume.
>
> We also discussed ideas around integration with kubernetes (a hot topic on
> the Kolla track this summit), but again this proved inconclusive beyond
> that yes someone should try developing a PoC to stimulate further
> discussion.  Again, significant challenges exist:
>
> - We still need to maintain the Heat parameter interfaces for the API/UI,
>   and there is also a strong preference to maintain puppet as a tool for
>   generating service configuration (so that existing operator integrations
>   via puppet continue to function) - this is a barrier to directly
>   consuming the kolla-kubernetes effort directly.
>
> - A COE layer like kubernetes is a poor fit for deployments where operators
>   require strict control of service placement (e.g. exactly which nodes a 
> service
>   runs on, IP address assignments to specific nodes etc) - this is already
>   a strong requirement for TripleO users and we need to figure out if/how
>   it's possible to control container placement per node/namespace.
>
> - There 

[openstack-dev] [fuel] Release notes with reno

2016-05-03 Thread Andrew Woodward
To follow up one of the points brought up in the fuel-plugins [1] session.
We briefly discussed using reno [2]. The system appears to be quite clean
and concise and will work for this need, and should work for general
release notes.

I'd propose that we start using reno to catalog changes to the plug-in
interfaces and encourage usage elsewhere.

I'd like to start a discussion about this further.

[1] https://etherpad.openstack.org/p/austin-summit-fuel-plugins
[2] http://docs.openstack.org/developer/reno/
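For anyone who hasn't used it, a reno note is just a small YAML file dropped under releasenotes/notes/ in the repo; reno then assembles the release notes from git history. The content below is purely illustrative - the actual wording for any plugin-interface change would come from the change itself:

```yaml
---
features:
  - Plugins can now declare the interface version they were built against.
upgrade:
  - The plugin interface version was bumped; plugins targeting the previous
    version must be rebuilt before deploying this release.
fixes:
  - Fixed validation of plugin metadata during environment creation.
```

Each section key (features, upgrade, fixes, deprecations, etc.) holds a list of short note strings, so individual changes can land their notes independently without merge conflicts in a shared release-notes file.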

-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community


[openstack-dev] [oslo][mistral] Saga of process than ack and where can we go from here...

2016-05-03 Thread Joshua Harlow

Howdy folks,

So I met up with *some* of the mistral folks during Friday last week at 
the summit and I was wondering if we as a group can find a path to help 
that project move forward in their desire to have some kind of 
process-then-ack (vs the existing ack-then-process) in their usage of 
the messaging layer.
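For anyone unfamiliar with the distinction, here's a toy illustration of the two acknowledgement orderings and what happens when a worker dies mid-task. This uses a plain Python queue standing in for the broker, not oslo.messaging APIs:

```python
import queue

def run_worker(q, results, order, crash_during_process=False):
    """Toy consumer: taking the message off the queue stands in for the
    broker delivering it; 'ack' is the point after which the broker will
    never redeliver.  A crash before ack means redelivery."""
    msg = q.get()
    if order == "ack-then-process":
        if crash_during_process:
            return                 # acked first, then died -> task is lost
        results.append(msg)
    else:  # process-then-ack
        if crash_during_process:
            q.put(msg)             # no ack sent: broker redelivers later
            return
        results.append(msg)

q, results = queue.Queue(), []
q.put("run-workflow-42")
run_worker(q, results, "ack-then-process", crash_during_process=True)
print(results, q.qsize())   # [] 0  -> the task vanished

q.put("run-workflow-42")
run_worker(q, results, "process-then-ack", crash_during_process=True)
print(results, q.qsize())   # [] 1  -> still queued for redelivery
```

The trade-off is the usual one: ack-then-process can silently drop work on a crash, while process-then-ack can execute the same message twice, which is only safe for idempotent handlers.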


I got to learn that the following exists in mistral (sad-face):

https://github.com/openstack/mistral/blob/master/mistral/engine/rpc.py#L38

And it got me thinking about how/if we as a group can allow a variant
of https://review.openstack.org/#/c/229186/ to get worked on, merged,
and released so that the above 'hack' can be removed.


I would also like to come to some kind of understanding that we
(the mistral folks would hopefully help here) would remove this kind of
change in the future as the longer-term goal (something like
https://review.openstack.org/#/c/260246/) progresses.


Thoughts from folks (mistral and oslo)?

Is there any way we can create a solution that works in the short term
(allowing that hack to be removed) while working toward the longer-term
goal?


-Josh



Re: [openstack-dev] [kolla][kolla-k8s] Core team

2016-05-03 Thread Michał Jastrzębski
On 3 May 2016 at 14:36, Martin André  wrote:
> On Tue, May 3, 2016 at 6:48 PM, Michał Jastrzębski  wrote:
>> Hello,
>>
>> Since it seems that we have voted for separation of kolla-k8s repos
>> (yay!) I would like to table another discussion (but let's wait till
>> it's official).
>>
>> Core Team.
>>
>> We need to build up new core team that will guard the gates on our
>> brand new repo (when it arrives). One of ideas Steven pointed out is
>> to add people from etherpad to core team, but I'd like to throw
>> different idea to the mix, to keep things interesting.
>>
>> Idea is: let's start with current kolla core team and for the time
>> being add new cores to kolla-k8s by invitation by existing core
>> member. For example, I'm kolla core, working with k8s and I see some
>> guy doing great job and investing time into it, I would propose him
>> for core, and instead of normal voting, he will get his +2 powers
>> immediately. This would allow quick core team buildout and not start
>> with a bunch of people who don't necessarily want to contribute or even
>> know each other.
>
> Interesting idea. I wonder if this will favor diversity or on the
> contrary cause cores to nominate their friends.

Yes, that's true. We, the kolla core team, need to keep track of it ;) I
have confidence in us maintaining diversity.

> Just to put things back in context, we're in this nice situation in
> the kolla project where a couple of companies wrote their own solution
> to run containers with kubernetes and now want to share their work
> with the community. Instead of encouraging a code dump, we'll start a
> new kolla-kubernetes effort from scratch where we can confront ideas
> and incorporate the work from these companies and other contributors.
> We certainly don't want to end up in a situation where a company is
> over-represented, leading to unbalanced discussions, or even worse to
> self-approved patches.
>
> In addition to having cores we trust, I think we can avoid most of
> conflicts by following a simple rule: a company can't push a patch
> through. In other words, we ask core reviewers from different
> affiliations to validate a patch before it can be approved.

+1 to that.

> Martin
>
>> Cheers,
>> Michal
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Clint Byrum
Excerpts from Morgan Fainberg's message of 2016-05-03 11:13:38 -0700:
> On Tue, May 3, 2016 at 10:28 AM, Monty Taylor  wrote:
> 
> > On 05/03/2016 11:47 AM, Clint Byrum wrote:
> >
> >> Excerpts from Monty Taylor's message of 2016-05-03 07:59:21 -0700:
> >>
> >>> On 05/03/2016 08:55 AM, Clint Byrum wrote:
> >>>
> 
>  Perhaps we have different perspectives. How is accepting what we
>  previously emitted and told the user would be valid sneaky or wrong?
>  Sounds like common sense due diligence to me.
> 
> >>>
> >>> I agree - I see no reason we can't validate previously emitted tokens.
> >>> But I don't agree strongly, because re-authing on invalid token is a
> >>> thing users do hundreds of times a day. (these aren't oauth API Keys or
> >>> anything)
> >>>
> >>>
> >> Sure, one should definitely not be expecting everything to always work
> >> without errors. On this we agree for sure. However, when we do decide to
> >> intentionally induce errors for reasons we have not done so before, we
> >> should weigh the cost of avoiding that with the cost of having it
> >> happen. Consider this strawman:
> >>
> >> - User gets token, it says "expires_at Now+4 hours"
> >> - User starts a brief set of automation tasks in their system
> >>that does not use python and has not failed with invalid tokens thus
> >>far.
> >> - Keystone nodes are all updated at one time (AMAZING cloud ops team)
> >> - User's automation jobs fail at next OpenStack REST call
> >> - User begins debugging, wasting hours of time figuring out that
> >>their tokens, which they stored and show should still be valid, were
> >>rejected.
> >>
> >
> > Ah - I guess this is where we're missing each other, which is good and
> > helpful.
> >
> > I would argue that any user that is _storing_ tokens is doing way too much
> > work. If they are doing short tasks, they should just treat them as
> > ephemeral. If they are doing longer tasks, they need to deal with timeouts.
> > SO, this:
> >
> >
> > - User gets token, it says "expires_at Now+4 hours"
> > - User starts a brief set of automation tasks in their system
> >that does not use python and has not failed with invalid tokens thus
> >far.
> >
> > should be:
> >
> > - User starts a brief set of automation tasks in their system
> > that does not use python and has not failed with invalid tokens thus
> > far.
> >
> > "Get a token" should never be an activity that anyone ever consciously
> > performs.
> >
> >
> This is my view. Never, ever, ever assume your token is good until
> expiration. Assume the token might be broken at any request and know how to
> re-auth.
> 
> > And now they have to refactor their app, because this may happen again,
> >> and they have to make sure that invalid token errors can bubble up to the
> >> layer that has the username/password, or accept rolling back and
> >> retrying the whole thing.
> >>
> >> I'm not saying anybody has this system, I'm suggesting we're putting
> >> undue burden on users with an unknown consequence. Falling back to UUID
> >> for a while has a known cost of a little bit of code and checking junk
> >> tokens twice.
> >>
> >
> Please do not advocate "falling back" to UUID. I am actually against making
> fernet the default (very, very strongly), if we have to have this
> "fallback" code. It is the wrong kind of approach, we already have serious
> issues with complex code paths that produce subtly different results. If
> the options are:
> 
> 1) Make Fernet Default and have "fallback" code
> 
> or
> 
> 2) Leave UUID default and highly recommend fernet (plus gate on fernet
> primarily, default in devstack)
> 
> I will jump on my soapbox and be very loudly in favor of the 2nd option. If
> we communicate this is a change that will happen (hey, maybe throw an
> error/make the config option "none" so it has to be explicit) in Newton,
> and then move to a Fernet default in O - I'd be ok with that.
> 
> >
> > Totally. I have no problem with the suggestion that keystone handle this.
> > But I also think that users should quite honestly stop thinking about
> > tokens at all. Tokens are an implementation detail that if any user thinks
> > about while writing their app they're setting themselves up to be screwed -
> > so we should make sure we're not talking about them in a primary way such
> > as to suggest that people focus a lot of energy on them.
> >
> > (I also frequently see users who are using python libraries even get
> > everything horribly wrong and screw themselves because they think they need
> > to think about tokens)
> >
> 
> Better communication that tokens are ephemeral and should not assume to
> work always (even until their expiry) should be the messaging we use. It's
> simple, plan to reauth as needed and handle failures.
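A client following that advice needs only a thin retry wrapper — a minimal sketch, assuming hypothetical `get_token`/`call_api` callables rather than any real client library:

```python
def call_with_reauth(call_api, get_token, retries=1):
    """Treat tokens as ephemeral: on a 401, re-authenticate and retry
    instead of trusting the token's advertised expires_at."""
    token = get_token()
    status, body = call_api(token)
    for _ in range(retries):
        if status != 401:
            break
        token = get_token()           # token rejected early: just re-auth
        status, body = call_api(token)
    return status, body
```

With a wrapper like this, a token being rejected before its advertised expiry degrades to one extra round trip instead of hours of debugging.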

This is not what the documentation says, so yeah, we need to actually
tell users that expires_at is an upper bound, because right now it
says:

 The date and time when the 

Re: [openstack-dev] [nova] Distributed Database

2016-05-03 Thread Clint Byrum
Excerpts from Mike Bayer's message of 2016-05-03 09:04:00 -0700:
> 
> On 05/02/2016 01:48 PM, Clint Byrum wrote:
> >>
> >
> > FWIW, I agree with you. If you're going to use SQLAlchemy, use it to
> > take advantage of the relational model.
> >
> > However, how is what you describe a win? Whether you use SELECT .. FOR
> > UPDATE, or a stored procedure, the lock is not distributed, and thus, will
> > still suffer rollback failures in Galera. For single DB server setups, you
> > don't have to worry about that, and SELECT .. FOR UPDATE will work fine.
> 
> Well it's a "win" vs. the lesser approach considered which also did not 
> include a distributed locking system like Zookeeper.   It is also a win 
> even with a Zookeeper-like system in place because it allows a SQL query 
> to be much smarter about selecting data that involves IP numbers and 
> CIDRs, without the need to pull data into memory and process it there. 
> This is the most common mistake in SQL programming, not taking advantage 
> of SQL's set-based nature and instead pulling data into memory 
> unnecessarily.
> 

Indeed, we use relational databases so we don't have to deal with lots
of data that doesn't make sense to us at the time we want it.

> Also, the "federated MySQL" approach of Cells V2 would still be OK with 
> pessimistic locking, since this lock is not "distributed" across the 
> entire dataspace.   Only the usual Galera caveats apply, e.g. point to 
> only one galera "master" at a time and/or wait for Galera to support 
> "SELECT FOR UPDATE" across the cluster.
> 

Right, of course it would work. It's just a ton of code for not much
improvement in scalability or resilience.

> >
> > Furthermore, any logic that happens inside the database server is extra
> > load on a much much much harder resource to scale, using code that is
> > much more complicated to update.
> 
> So I was careful to use the term "stored function" and not "stored 
> procedure".   As ironic as it is for me to defend both the ORM 
> business-logic-in-the-application-not-the-database position, *and* the 
> let-the-database-do-things-not-the-application position at the same time, using 
> database functions to allow new kinds of math and comparison operations 
> to take place over sets is entirely reasonable, and should not be 
> confused with the old-school big-business approach of building an entire 
> business logic layer as a huge wall of stored procedures, this is 
> nothing like that.
> 

Indeed, it's a complicated and nuanced position, but I think I
understand where you're going with it. My reluctance to put intelligence
in the database is just that, reluctance, not some hard and fast rule I
can quote.

> The Postgresql database has INET and CIDR types native which include the 
> same overlap logic we are implementing here as a MySQL stored function, 
> so the addition of math functions like these shouldn't be controversial. 
> The "load" of this function is completely negligible (however I would 
> be glad to assist in load testing it to confirm), especially compared to 
> pulling the same data across the wire, processing it in Python, then 
> sending just a tiny portion of it back again after we've extracted the 
> needle from the haystack.
> 

It's death by 1000 paper cuts when you talk about scaling. Of course it
will be faster, but the slices of CPU on the database server are still a
limited resource, whereas slices of CPU on stateless API/conductor nodes
are virtually limitless and far cheaper to scale elastically.
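For reference, the in-memory overlap search being contrasted with the database-side function looks roughly like this — a sketch using the stdlib ipaddress module, not the actual OpenStack code:

```python
import ipaddress

def find_overlaps(candidate, allocated):
    """Pull every allocated CIDR into memory and test each one against
    the candidate -- the set-based SQL version does this server-side."""
    cand = ipaddress.ip_network(candidate)
    return [c for c in allocated
            if ipaddress.ip_network(c).overlaps(cand)]

print(find_overlaps("10.0.1.0/24", ["10.0.0.0/16", "192.168.0.0/24"]))
# -> ['10.0.0.0/16']
```

The cost of the Python version is proportional to how much data crosses the wire, which is exactly the "needle in the haystack" pattern the stored function avoids.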

> In pretty much every kind of load testing scenario we do with Openstack, 
> the actual "load" on the database barely pushes anything.   The only 
> database "resource" issue we have is Openstack using far more idle 
> connections than it should, which is on my end to work on improvements 
> to the connection pooling system which does not scale well across 
> Openstack's tons-of-processes model.
> 

Indeed, pooling is something we should improve upon. But even more, we
need to improve upon error handling and resilience.

> >
> > To be clear, it's not the amount of data, but the size of the failure
> > domain. We're more worried about what will happen to those 40,000 open
> > connections from our 4000 servers when we do have to violently move them.
> 
> That's a really big number and I will admit I would need to dig into 
> this particular problem domain more deeply to understand what exactly 
> the rationale of that kind of scale would be here.   But it does seem 
> like if you were using SQL databases, and the 4000 server system is in 
> fact grouped into hundreds of "silos" that only deal with strict 
> segments of the total dataspace, a federated approach would be exactly 
> what you'd want to go with.
> 

I think it would be easier to just make sure the client code handles
connection failovers gracefully than try to figure out sharding.

> >
> > That particular problem isn't as scary if you have a large
> > Cassandra/MongoDB/Riak/ROME cluster, 

Re: [openstack-dev] [glance] [glare] [heat] [tosca] [tacker] [murano] [magnum] [app-catalog] Austin summit summary: Generic cataloging and Glare v1 API

2016-05-03 Thread Flavio Percoco


I believe this model polarizes the community a bit as far as picking
reviews go. We voted to remove it in Mitaka and I was hoping we would
work out a way to bring the community together in the Glare reviews.


My goal is to have champions for each module that is being worked on in
Newton (import, micro-versions, glare, documentation, etc) . This does
have a little bit of effect in creating tribal knowledge but we do have
that even today. The iterative plan though (yet to be formalized) is
that we need some sort of knowledge sharing model. I have been trying to
do that using the dedicated Glare meetings but we may need other models
of KT (knowledge transfer) here.



++

I did the dedicated teams in Mitaka without formalizing it, as I prefer
there to be one Glance team as much as we can, with just some focused
reviews. I agree we need to move Glare forward and it's awesome there's
so much work on it.

I'm sorry I don't have a suggestion as far as transferring Glare's knowledge
goes. I'd probably recommend lots of docs and more public discussions for now
but other folks will have to dedicate time on reading the API and service to
become more familiar with it.

Thanks for clarifying, Nikhil!
Flavio

--
@flaper87
Flavio Percoco




[openstack-dev] [nova] api-ref docs cleanup review sprint 5/9 and 5/11

2016-05-03 Thread Matt Riedemann
We discussed at the summit a need for a review sprint on the api-ref 
docs cleanup effort that's going on.  See Sean's email on that from a 
few weeks ago [1].


So we plan to do a review sprint next Monday 5/9 and Wednesday 5/11.

The series to review is here [2].

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2016-April/092936.html

[2] https://review.openstack.org/#/q/status:open+topic:bp/api-ref-in-rst

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread John Dickinson


On 3 May 2016, at 12:19, Monty Taylor wrote:

> On 05/03/2016 01:45 PM, Michael Krotscheck wrote:
>> On Tue, May 3, 2016 at 9:03 AM John Dickinson wrote:
>>
>>
>> As a starting point, what would you like to see addressed in the
>> document I'm drafting?
>>
>>
>> I'm going through this project with JavaScript right now. Here's some of
>> the things I've had to address:
>>
>> - Common language formatting rules (ensure that a pep8-like thing exists).
>> - Mirroring dependencies?
>> - Building Documentation
>
> Mirroring and building are the ones that we'll definitely want to work 
> together on in terms of figuring out how to support. go get being able to 
> point at any git repo for depends is neat - but it increases the amount of 
> internet surface-area in the gate. Last time I looked (last year) there were 
> options for doing just the fetch part of go get separate from the build part.
>
> In any case, as much info as you can get about the mechanics of downloading 
> dependencies, especially as it relates to pre-caching or pointing build 
> systems at local mirrors of things holistically rather than by modifying the 
> source code would be useful. We've gone through a couple of design iterations 
> on javascript support as we've dived in further.

Are these the sort of things that need to be in a resolution saying that it's 
ok to write code in Golang? I'll definitely agree that these questions are 
important, and I don't have the answers yet (although I expect we will by the 
time any Golang code lands in Swift). We've already got the Consistent Testing 
Interface doc[1] which talks about having tests, a coding style, and docs 
(amongst other things). Does a resolution about Golang being acceptable need to 
describe dependency management, build tooling, and CI?

--John




[1] http://governance.openstack.org/reference/project-testing-interface.html



Re: [openstack-dev] [glance] [glare] [heat] [tosca] [tacker] [murano] [magnum] [app-catalog] Austin summit summary: Generic cataloging and Glare v1 API

2016-05-03 Thread Nikhil Komawar
Comment inline.

On 5/3/16 3:21 PM, Flavio Percoco wrote:
> On 02/05/16 19:09 -0400, Nikhil Komawar wrote:
>
>> Added a few more tags to the subject line.
>>
>>
>>
>> On 5/2/16 7:05 PM, Nikhil Komawar wrote:
>>
>>> Hello everyone,
>>>
>>>
>>>
>>> Just wanted to send a brief summary of the discussions at the summit.
>>>
>>> This list is not holistic however, it covers the relevant aspects that
>>>
>>> various stakeholders need to be aware of.
>>>
>>>
>>>
>>>   * Glare is useful for different use cases in OpenStack including
>>>
>>> currently being asked for in Heat, Murano and TOSCA
>>>
>>>   * Heat needs something for usage in Newton
>>>
>>>   * Murano needs the stable API to adapt the changes as they currently
>>>
>>> use experimental version
>>>
>>>   * Glance team will continue to make progress on this effort and plan
>>>
>>> to have POC after Newton R-16 [1]
>>>
>>>   * The initial plan is to focus on base artifact (no data asset
>>>
>>> associated) and then support at least one artifact type
>>>
>>>   * The first artifact can be Murano application catalogs or Heat
>>>
>>> templates depending on either team's priorities when Glare is ready
>>>
>>> for consumption
>>>
>>>   * In Newton, we will focus on the adoption of this service in at least
>>>
>>> the above mentioned two projects and getting the API in good shape
>>>
>>>   * Images compatibility is deferred for now
>>>
>>>   * Glare will be a side-priority for Newton meaning most of the cores
>>>
>>> are currently not expected to prioritize reviews on it except for
>>>
>>> those who want to focus on cross project initiatives and those
>>>
>>> involved in its adoption
>>>
>
>
> Does this mean there will be some sort of "Fast Track" again? I'm
> asking because

No, we won't have the FastTrack model. But at the same time, we want to
iterate over the code once it is consumed by the first service so that
the behavioral changes found during that phase can be corrected before
m-3. The end goal is to have a good API that can be consumed by other
services (and something compliant with OpenStack standards).

>
> I believe this model polarizes the community a bit as far as picking
> reviews go.
>
> We voted to remove it in Mitaka and I was hoping we would work out a
> way to bring
>
> the community together in the Glare reviews.

My goal is to have champions for each module that is being worked on in
Newton (import, micro-versions, glare, documentation, etc) . This does
have a little bit of effect in creating tribal knowledge but we do have
that even today. The iterative plan though (yet to be formalized) is
that we need some sort of knowledge sharing model. I have been trying to
do that using the dedicated Glare meetings but we may need other models
of KT (knowledge transfer) here.

>
>
>
> Please, don't get me wrong. As far as priorities go, I agree with what
> you've

Thanks for bringing this up. Refines the thought process for sure.

>
> said in the last point but review wise, I'm worried this would
> implicitly bring
>
> back some kind of fast track model.
>
>

Let's not go with the FastTrack model :-)

>
> Cheers,
>
> Flavio
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Thanks,
Nikhil





Re: [openstack-dev] [magnum] Notes for Magnum design summit

2016-05-03 Thread Hongbin Lu


> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> Sent: May-03-16 8:34 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit
> 
> Hi.
> 
> On Mon, May 2, 2016 at 7:11 PM, Cammann, Tom 
> wrote:
> > Thanks for the write up Hongbin and thanks to all those who
> contributed to the design summit. A few comments on the summaries below.
> >
> > 6. Ironic Integration:
> > https://etherpad.openstack.org/p/newton-magnum-ironic-integration
> > - Start the implementation immediately
> > - Prefer quick work-around for identified issues (cinder volume
> > attachment, variation of number of ports, etc.)
> >
> > We need to implement a bay template that can use a flat networking
> model as this is the only networking model Ironic currently supports.
> Multi-tenant networking is imminent. This should be done before work on
> an Ironic template starts.
> >
> > 7. Magnum adoption challenges:
> > https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
> > - The challenges are listed in the etherpad above
> >
> > Ideally we need to turn this list into a set of actions which we can
> implement over the cycle, i.e. create a BP to remove requirement for
> LBaaS.
> 
> There's one for floating IPs already:
> https://blueprints.launchpad.net/magnum/+spec/bay-with-no-floating-ips
> 
> >
> > 9. Magnum Heat template version:
> > https://etherpad.openstack.org/p/newton-magnum-heat-template-
> versionin
> > g
> > - In each bay driver, version the template and template definition.
> > - Bump template version for minor changes, and bump bay driver
> version for major changes.
> >
> > We decided only bay driver versioning was required. The template and
> template driver does not need versioning due to the fact we can get
> heat to pass back the template which it used to create the bay.
> 
> This was also my understanding. We won't use heat template versioning,
> just the bays.
> 
> > 10. Monitoring:
> > https://etherpad.openstack.org/p/newton-magnum-monitoring
> > - Add support for sending notifications to Ceilometer
> > - Revisit bay monitoring and self-healing later
> > - Container monitoring should not be done by Magnum, but it can be
> done by cAdvisor, Heapster, etc.
> >
> > We split this topic into 3 parts – bay telemetry, bay monitoring,
> container monitoring.
> > Bay telemetry is done around actions such as bay/baymodel CRUD
> operations. This is implemented using ceilometer notifications.
> > Bay monitoring is around monitoring health of individual nodes in the
> bay cluster and we decided to postpone work as more investigation is
> required on what this should look like and what users actually need.
> > Container monitoring focuses on what containers are running in the
> bay and general usage of the bay COE. We decided this will be
> completed by Magnum by baking in access to cAdvisor/heapster by
> default.
> 
> I think we're missing a blueprint for this one too.

Created a blueprint for that: 
https://blueprints.launchpad.net/magnum/+spec/container-monitoring

> 
> Ricardo
> 
> >
> > - Manually manage bay nodes (instead of being managed by Heat
> ResourceGroup): It can address the use case of heterogeneity of bay
> nodes (i.e. different availability zones, flavors), but we need to
> elaborate the details further.
> >
> > The idea revolves around creating a heat stack for each node in the
> bay. This idea shows a lot of promise but needs more investigation and
> isn’t a current priority.
> >
> > Tom
> >
> >
> > From: Hongbin Lu 
> > Reply-To: "OpenStack Development Mailing List (not for usage
> > questions)" 
> > Date: Saturday, 30 April 2016 at 05:05
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Subject: [openstack-dev] [magnum] Notes for Magnum design summit
> >
> > Hi team,
> >
> > For reference, below is a summary of the discussions/decisions in
> Austin design summit. Please feel free to point out if anything is
> incorrect or incomplete. Thanks.
> >
> > 1. Bay driver:
> > https://etherpad.openstack.org/p/newton-magnum-bay-driver
> > - Refactor existing code into bay drivers
> > - Each bay driver will be versioned
> > - Individual bay driver can have API extension and magnum CLI could
> > load the extensions dynamically
> > - Work incrementally and support same API before and after the driver
> > change
> >
> > 2. Bay lifecycle operations:
> > https://etherpad.openstack.org/p/newton-magnum-bays-lifecycle-
> operatio
> > ns
> > - Support the following operations: reset the bay, rebuild the bay,
> rotate TLS certificates in the bay, adjust storage of the bay, scale
> the bay.
> >
> > 3. Scalability:
> > https://etherpad.openstack.org/p/newton-magnum-scalability
> > - Implement Magnum plugin for Rally
> > - 

Re: [openstack-dev] [kolla][kolla-k8s] Core team

2016-05-03 Thread Martin André
On Tue, May 3, 2016 at 6:48 PM, Michał Jastrzębski  wrote:
> Hello,
>
> Since it seems that we have voted for separation of kolla-k8s repos
> (yay!) I would like to table another discussion (but let's wait till
> it's official).
>
> Core Team.
>
> We need to build up new core team that will guard the gates on our
> brand new repo (when it arrives). One of ideas Steven pointed out is
> to add people from etherpad to core team, but I'd like to throw
> different idea to the mix, to keep things interesting.
>
> Idea is: let's start with current kolla core team and for the time
> being add new cores to kolla-k8s by invitation by existing core
> member. For example, I'm kolla core, working with k8s and I see some
> guy doing great job and investing time into it, I would propose him
> for core, and instead of normal voting, he will get his +2 powers
> immediately. This would allow quick core team buildout and not start
> with a bunch of people who don't necessarily want to contribute or even
> know each other.

Interesting idea. I wonder if this will favor diversity or on the
contrary cause cores to nominate their friends.

Just to put things back in context, we're in this nice situation in
the kolla project where a couple of companies wrote their own solution
to run containers with kubernetes and now want to share their work
with the community. Instead of encouraging a code dump, we'll start a
new kolla-kubernetes effort from scratch where we can confront ideas
and incorporate the work from these companies and other contributors.
We certainly don't want to end up in a situation where a company is
over-represented, leading to unbalanced discussions, or even worse to
self-approved patches.

In addition to having cores we trust, I think we can avoid most of
conflicts by following a simple rule: a company can't push a patch
through. In other words, we ask core reviewers from different
affiliations to validate a patch before it can be approved.
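Mechanically, the rule is simple to check — a hedged sketch of what a hypothetical gate check could look like (not an actual Gerrit feature):

```python
def cross_company_approved(approvals, min_affiliations=2):
    """approvals: (reviewer, affiliation) pairs carrying +2 votes.
    Approve only when the +2s span enough distinct companies."""
    return len({affiliation for _, affiliation in approvals}) >= min_affiliations

print(cross_company_approved([("alice", "acme"), ("bob", "acme")]))
# -> False
print(cross_company_approved([("alice", "acme"), ("carol", "globex")]))
# -> True
```

In practice this would be a social convention enforced by the core team rather than tooling, but the intent is the same: no single affiliation can push a patch through alone.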

Martin

> Cheers,
> Michal
>


Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread Pete Zaitcev
On Tue, 3 May 2016 12:16:24 -0400
Rayson Ho  wrote:

> I like Go! However, Go does not offer binary compatibility between point
> releases. For those who install from source it may not be a big issue, but
> for commercial distributions that pre-package & pre-compile everything,
> then the compiled Go libs won't be compatible with old/new releases of the
> Go compiler that the user may want to install on their systems.

IMHO, it's not yet a problem worth worrying about. C++ has demonstrated
poor binary compatibility across releases, even 25 years after its creation.
And it's not a big concern. Annoying, yes, but not a deal-breaker. In the
case of Fedora, we'll ship with a pinned Golang version in each release.

-- Pete



[openstack-dev] [keystone] newton release deadlines

2016-05-03 Thread Steve Martinelli
A few keystone specific dates and definitions:

- May 30-03 -> R-18 -> **Spec proposal freeze**
  - Specs must be available for review by this week
  - keystone newton-1 will be released this week

- Jul 04-06 -> R-13 -> **Spec freeze**
  - Specs must be merged by this week

- Jul 11-15 -> R-12 -> **Feature proposal freeze**
  - Features must be available for review, passing Jenkins and not WIP by
this week
  - keystone newton-2 will be released this week

- Aug 29-02 -> R-5 -> **Feature Freeze**
  - Features must be merged by this week
  - keystone newton-3 will be released this week

A few notes:
- The midcycle meetup will likely happen on R14, R12 or R11 [1]
- As always, folks can ask for a spec or feature exemption by explaining
their case on the mailing list
- The full schedule [2]

[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-April/092298.html
[2] http://releases.openstack.org/newton/schedule.html

Thanks!
Steve Martinelli - Keystone PTL


Re: [openstack-dev] [magnum] Notes for Magnum design summit

2016-05-03 Thread Hongbin Lu


> -Original Message-
> From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> Sent: May-02-16 1:12 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit
> 
> Thanks for the write up Hongbin and thanks to all those who contributed
> to the design summit. A few comments on the summaries below.
> 
> 6. Ironic Integration: https://etherpad.openstack.org/p/newton-magnum-
> ironic-integration
> - Start the implementation immediately
> - Prefer quick work-around for identified issues (cinder volume
> attachment, variation of number of ports, etc.)
> 
> We need to implement a bay template that can use a flat networking
> model as this is the only networking model Ironic currently supports.
> Multi-tenant networking is imminent. This should be done before work on
> an Ironic template starts.
> 
> 7. Magnum adoption challenges: https://etherpad.openstack.org/p/newton-
> magnum-adoption-challenges
> - The challenges are listed in the etherpad above
> 
> Ideally we need to turn this list into a set of actions which we can
> implement over the cycle, i.e. create a BP to remove requirement for
> LBaaS.

I created a BP for that: 
https://blueprints.launchpad.net/magnum/+spec/decouple-lbaas

> 
> 9. Magnum Heat template version:
> https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning
> - In each bay driver, version the template and template definition.
> - Bump template version for minor changes, and bump bay driver version
> for major changes.
> 
> We decided only bay driver versioning was required. The template and
> template driver does not need versioning due to the fact we can get
> heat to pass back the template which it used to create the bay.

ACK. Thanks for pointing it out.

> 
> 10. Monitoring: https://etherpad.openstack.org/p/newton-magnum-
> monitoring
> - Add support for sending notifications to Ceilometer
> - Revisit bay monitoring and self-healing later
> - Container monitoring should not be done by Magnum, but it can be done
> by cAdvisor, Heapster, etc.
> 
> We split this topic into 3 parts – bay telemetry, bay monitoring,
> container monitoring.
> Bay telemetry is done around actions such as bay/baymodel CRUD
> operations. This is implemented using ceilometer notifications.
> Bay monitoring is around monitoring the health of individual nodes in the
> bay cluster, and we decided to postpone this work as more investigation is
> required on what it should look like and what users actually need.
> Container monitoring focuses on what containers are running in the bay
> and general usage of the bay COE. We decided this will be completed by
> Magnum by baking in access to cAdvisor/Heapster by default.

ACK. Thanks for the clarification.
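The bay-telemetry part above is concrete enough to sketch: each bay/baymodel CRUD action emits one Ceilometer-style notification. A minimal illustration of building such a payload follows; the helper name and payload fields are hypothetical, not Magnum's actual notification schema (which goes through oslo.messaging):

```python
import datetime
import uuid


def bay_notification(event_type, bay_id, user_id, outcome="success"):
    """Build a hypothetical Ceilometer-style notification payload for a
    bay CRUD action, e.g. event_type='magnum.bay.create.end'."""
    return {
        "event_type": event_type,
        "message_id": str(uuid.uuid4()),   # unique per emitted event
        "generated": datetime.datetime.utcnow().isoformat(),
        "payload": {
            "bay_id": bay_id,
            "user_id": user_id,
            "outcome": outcome,
        },
    }


note = bay_notification("magnum.bay.create.end", "bay-123", "user-42")
print(note["event_type"], note["payload"]["outcome"])  # -> magnum.bay.create.end success
```

In a real deployment the dict would be handed to a notifier that drops it on the message bus, where Ceilometer picks it up.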

> 
> - Manually manage bay nodes (instead of being managed by Heat
> ResourceGroup): It can address the use case of heterogeneity of bay
> nodes (i.e. different availability zones, flavors), but need to
> elaborate the details further.
> 
> The idea revolves around creating a heat stack for each node in the bay.
> This idea shows a lot of promise but needs more investigation and isn’t
> a current priority.

Yes, the idea needs a thoughtful discussion. I will start another ML thread to
discuss it. I agree this doesn't have to be a priority in the Newton cycle, but
I know of at least two requested features that would benefit from this idea:
1. The ability to specify different availability zones for bay nodes: 
https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
2. The ability to specify different flavors for bay nodes: 
http://lists.openstack.org/pipermail/openstack-dev/2016-April/092838.html

> 
> Tom
> 
> 
> From: Hongbin Lu 
> Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" 
> Date: Saturday, 30 April 2016 at 05:05
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: [openstack-dev] [magnum] Notes for Magnum design summit
> 
> Hi team,
> 
> For reference, below is a summary of the discussions/decisions in
> Austin design summit. Please feel free to point out if anything is
> incorrect or incomplete. Thanks.
> 
> 1. Bay driver: https://etherpad.openstack.org/p/newton-magnum-bay-
> driver
> - Refactor existing code into bay drivers
> - Each bay driver will be versioned
> - Individual bay driver can have API extension and magnum CLI could
> load the extensions dynamically
> - Work incrementally and support same API before and after the driver
> change
> 
> 2. Bay lifecycle operations: https://etherpad.openstack.org/p/newton-
> magnum-bays-lifecycle-operations
> - Support the following operations: reset the bay, rebuild the bay,
> rotate TLS certificates in the bay, adjust storage of the bay, scale
> the bay.
> 
> 3. Scalability: 

Re: [openstack-dev] [glance] [glare] [heat] [tosca] [tacker] [murano] [magnum] [app-catalog] Austin summit summary: Generic cataloging and Glare v1 API

2016-05-03 Thread Flavio Percoco

On 02/05/16 19:09 -0400, Nikhil Komawar wrote:

Added a few more tags to the subject line.

On 5/2/16 7:05 PM, Nikhil Komawar wrote:

Hello everyone,

Just wanted to send a brief summary of the discussions at the summit.
This list is not holistic however, it covers the relevant aspects that
various stakeholders need to be aware of.

  * Glare is useful for different use cases in OpenStack including
currently being asked for in Heat, Murano and TOSCA
  * Heat needs something for usage in Newton
  * Murano needs the stable API to adapt the changes as they currently
use experimental version
  * Glance team will continue to make progress on this effort and plan
to have POC after Newton R-16 [1]
  * The initial plan is to focus on base artifact (no data asset
associated) and then support at least one artifact type
  * The first artifact can be Murano application catalogs or Heat
templates depending on either team's priorities when Glare is ready
for consumption
  * In Newton, we will focus on the adoption of this service in at least
the above mentioned two projects and getting the API in good shape
  * Images compatibility is deferred for now
  * Glare will be a side-priority for Newton meaning most of the cores
are currently not expected to prioritize reviews on it except for
those who want to focus on cross project initiatives and those
involved in its adoption


Does this mean there will be some sort of "Fast Track" again? I'm asking because
I believe this model polarizes the community a bit as far as picking reviews goes.
We voted to remove it in Mitaka and I was hoping we would work out a way to bring
the community together in the Glare reviews.

Please, don't get me wrong. As far as priorities go, I agree with what you've
said in the last point but review wise, I'm worried this would implicitly bring
back some kind of fast track model.

Cheers,
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread Monty Taylor

On 05/03/2016 01:45 PM, Michael Krotscheck wrote:

On Tue, May 3, 2016 at 9:03 AM John Dickinson  wrote:


As a starting point, what would you like to see addressed in the
document I'm drafting?


I'm going through this project with JavaScript right now. Here's some of
the things I've had to address:

- Common language formatting rules (ensure that a pep8-like thing exists).
- Mirroring dependencies?
- Building Documentation


Mirroring and building are the ones that we'll definitely want to work 
together on in terms of figuring out how to support. go get being able 
to point at any git repo for dependencies is neat - but it increases the 
amount of internet surface area in the gate. Last time I looked (last 
year) there were options for doing just the fetch part of go get 
separately from the build part.


In any case, as much info as you can get about the mechanics of 
downloading dependencies, especially as it relates to pre-caching or 
pointing build systems at local mirrors of things holistically rather 
than by modifying the source code would be useful. We've gone through a 
couple of design iterations on javascript support as we've dived in further.



- Common tool choices for testing, coverage, etc.

Michael








Re: [openstack-dev] [neutron] OSC transition

2016-05-03 Thread Richard Theis
I've asked about the conclusion here [1].

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2016-April/093139.html

Na Zhu/China/IBM wrote on 05/01/2016 09:11:43 PM:

> From: Na Zhu/China/IBM@IBMCN
> To: rth...@us.ibm.com
> Cc: openstack-dev@lists.openstack.org
> Date: 05/01/2016 09:11 PM
> Subject: Re: [openstack-dev] [neutron] OSC transition
> 
> Hi Richard, 
> 
> So what is the conclusion of where to put *aas CLI? 
> 
> 
> 
> Regards,
> Juno Zhu
> IBM China Development Labs (CDL) Cloud IaaS Lab
> Email: na...@cn.ibm.com
> 5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong 
> New District, Shanghai, China (201203) 
> 
> 
> 
> From:Richard Theis  
> To:"OpenStack Development Mailing List \(not for usage 
> questions\)"  
> Date:2016/04/26 22:32 
> Subject:Re: [openstack-dev] [neutron] OSC transition 
> 
> 
> 
> Hi, 
> 
> The latest devref [1] would place it in python-neutronclient as 
> Henry noted. But stay tuned for results from the summit session. 
> 
> [1] https://github.com/openstack/python-neutronclient/blob/master/
> doc/source/devref/transition_to_osc.rst 
> 
> - Richard
> 
> 
> "Na Zhu"  wrote on 04/26/2016 08:29:21 AM:
> 
> > From: "Na Zhu"  
> > To: hen...@gessau.net 
> > Cc: "OpenStack Development Mailing List \(not for usage questions\)"
> >  
> > Date: 04/26/2016 08:34 AM 
> > Subject: Re: [openstack-dev] [neutron] OSC transition 
> > 
> > Hi Henry,
> > 
> > Thanks your information, why you think neutron-dynamic-routing CLI 
> > should live in python-neutronclient?
> > From this link http://docs.openstack.org/developer/python-
> > neutronclient/devref/transition_to_osc.html, section "Where does my
> > CLI belong?", *aas CLIs belong to their own projects, not to
> > python-neutronclient. BGP is also a service like *aas, so I think BGP
> > CLIs should live in neutron-dynamic-routing, or in a separate repo
> > named python-*client. Please correct me if I am wrong, thanks.
> > 
> > 
> > 
> > Regards,
> > Juno Zhu
> > IBM China Development Labs (CDL) Cloud IaaS Lab
> > Email: na...@cn.ibm.com
> > 5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong 
> > New District, Shanghai, China (201203)
> > 
> > 
> > 
> > From:Henry Gessau 
> > To:"OpenStack Development Mailing List (not for usage 
> > questions)" 
> > Date:2016/04/26 21:09
> > Subject:Re: [openstack-dev] [neutron] OSC transition
> > 
> > 
> > 
> > Adding the [neutron] tag.
> > 
> > I believe that the OSC extension for neutron-dynamic-routing should
> > live in the python-neutronclient repo. Keep in touch with Richard
> > Theis as he is the one leading the transition to OSC. He is rtheis
> > on IRC.
> > 
> > See:
> > 
http://lists.openstack.org/pipermail/openstack-dev/2016-April/093139.html
> > https://review.openstack.org/309587
> > 
> > 
> > Na Zhu  wrote:
> > > Dear All,
> > > 
> > > 
> > > I have a question about OSC transition, recently, the community 
approves
> > > moving bgp out of neutron, as a service like other *aas. The BGP 
> > CLIs need be
> > > removed from neutronclient. Because of OSC transition, I can not 
> > just move the
> > > BGP CLIs code from python-neutronclient repo to neutron-dynamic-
> > routing repo.
> > > I have to refactor the code and do transition to OSC plugin system.
> > > 
> > > From the
> > > link _http://docs.openstack.org/developer/python-openstackclient/
> > plugins.html_, the
> > > client has a separate repo, take designate as example, the CLI repo 
is
> > > python-designateclient, the project repo is designate. So for 
> BGP, should I
> > > create a repo for CLI, or leverage project repo 
neutron-dynamic-routing?
> > > 
> > > 
> > > 
> > > 
> > > Regards,
> > > Juno Zhu
> > > IBM China Development Labs (CDL) Cloud IaaS Lab
> > > Email: na...@cn.ibm.com
> > > 5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong 
New
> > > District, Shanghai, China (201203)
> > > 
> > > 
> > > 
> > > 
> > 
> > 
> > 
> > 
> > 
> > 

Re: [openstack-dev] [Cinder] Nominating Michał Dulko to Cinder Core

2016-05-03 Thread Eric Harney
On 05/03/2016 02:16 PM, Sean McGinnis wrote:
> Hey everyone,
> 
> I would like to nominate Michał Dulko to the Cinder core team. Michał's
> contributions with both code reviews [0] and code contributions [1] have
> been significant for some time now.
> 
> His persistence with versioned objects has been instrumental in getting
> support in the Mitaka release for rolling upgrades.
> 
> If there are no objections from current cores by next week, I will add
> Michał to the core group.
> 
> [0] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt
> [1]
> https://review.openstack.org/#/q/owner:%22Michal+Dulko+%253Cmichal.dulko%2540intel.com%253E%22++status:merged
> 
> Thanks!
> 
> Sean McGinnis (smcginnis)
> 
> 

+1, definitely a strong addition to the team.

Eric




Re: [openstack-dev] [Neutron] Neutron client and plan to transition to OpenStack client

2016-05-03 Thread Armando M.
On 3 May 2016 at 11:01, Richard Theis  wrote:

> Steve Martinelli  wrote on 04/22/2016 05:49:32 PM:
>
> > From: Steve Martinelli 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Date: 04/22/2016 05:52 PM
> > Subject: Re: [openstack-dev] [Neutron] Neutron client and plan to
> > transition to OpenStack client
> >
> > thanks to richard, tangchen, reepid and others for picking this up
> > and running with it; and thanks to armando for embracing OSC and
> > putting it in neutron's plan.
> >
> > On Fri, Apr 22, 2016 at 6:33 PM, reedip banerjee 
> wrote:
> > Hi Richard,
> > Thanks for the information :)
> >
> > Was waiting for it.
> >
> > On Sat, Apr 23, 2016 at 3:27 AM, Armando M.  wrote:
> >
> > On 22 April 2016 at 13:58, Richard Theis  wrote:
> > FYI: I've pushed a series of WIP patch sets [1], [2] and [3] to
> > enable python-neutronclient OSC plugins. I've used "openstack
> > network agent list" as the initial OSC plugin command example.
> > Hopefully these will help during the discussions at the summit.
> >
> > [1] https://review.openstack.org/#/c/309515/
> > [2] https://review.openstack.org/#/c/309530/
> > [3] https://review.openstack.org/#/c/309587/
> >
> > Super! Thanks for your help Richard!
> >
> > Cheers,
> > Armando
> >
> >
> > "Armando M."  wrote on 04/22/2016 12:19:45 PM:
> >
> > > From: "Armando M." 
> > > To: "OpenStack Development Mailing List (not for usage questions)"
> > > 
> > > Date: 04/22/2016 12:22 PM
> > > Subject: [openstack-dev] [Neutron] Neutron client and plan to
> > > transition to OpenStack client
> > >
> > > Hi Neutrinos,
> > >
> > > During the Mitaka release the team sat together to figure out a plan
> > > to embrace the OpenStack client and supplant the neutron CLI tool.
> > >
> > > Please note that this does not mean we will get rid of the
> > > openstack-neutronclient repo. In fact we still keep python client
> > > bindings and keep the development for features that cannot easily go
> > > in the OSC client (like the high level services).
> > >
> > > We did put together a transition plan in pace [1], but we're
> > > revising it slightly and we'll continue the discussion at the summit.
> > >
> > > If you are interested in this topic, are willing to help with the
> > > transition or have patches currently targeting the client and are
> > > unclear on what to do, please stay tuned. We'll report back after
> > the summit.
>
> Hi,
>
> Is there an update available from the summit session? I didn't see a
> resolution documented in [3].
>

I have asked the individual session moderators to come up with a report by
the end of the week. Some of us are either travelling, taking time off or
simply catching up with the backlog. Please look for a mail titled
[Neutron]  - Report.

If you don't see it by the end of the week, please come back again :)


>
> Thanks,
> Richard
>
> [3] https://etherpad.openstack.org/p/newton-neutron-future-neutron-client
>
> > >
> > > Armando
> > >
> > > [1] http://docs.openstack.org/developer/python-neutronclient/devref/
> > > transition_to_osc.html
> > > [2]
> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9096
> >
> > >
> >
> >
>
> >
> >
> >
>
> >
>
> >
> > --
> > Thanks and Regards,
> > Reedip Banerjee
> > IRC: reedip
> >
> >
> >
>
> >
> >
>
> >

Re: [openstack-dev] [openstack-ansible] Nominate Major Hayden for core in openstack-ansible-security

2016-05-03 Thread Matthew Thode
On 05/03/2016 01:47 PM, Truman, Travis wrote:
> Major has made an incredible number of contributions of code and reviews
> to the OpenStack-Ansible community. Given his role as the primary author
> of the openstack-ansible-security project, I can think of no better
> addition to the core reviewer team.
> 
> Travis Truman
> 
> 
> 
> 
> 
+1 because it still means something to me

-- 
-- Matthew Thode (prometheanfire)





[openstack-dev] [openstack-ansible] Nominate Major Hayden for core in openstack-ansible-security

2016-05-03 Thread Truman, Travis
Major has made an incredible number of contributions of code and reviews to the 
OpenStack-Ansible community. Given his role as the primary author of the 
openstack-ansible-security project, I can think of no better addition to the 
core reviewer team.

Travis Truman




Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread Michael Krotscheck
On Tue, May 3, 2016 at 9:03 AM John Dickinson  wrote:

>
> As a starting point, what would you like to see addressed in the document
> I'm drafting?
>

I'm going through this project with JavaScript right now. Here's some of
the things I've had to address:

- Common language formatting rules (ensure that a pep8-like thing exists).
- Mirroring dependencies?
- Building Documentation
- Common tool choices for testing, coverage, etc.

Michael


[openstack-dev] [Neutron] stadium evolution - report

2016-05-03 Thread Armando M.
Hi Neutrinos,

For those who could not attend or be in Austin for session [2], we've had
some recent discussions [1] and past ones in [3]. I am trying to reach
closure on this topic, and I have followed up with a spec proposal in [4]. I am
open to suggestions on how to improve the proposal and how to achieve
consensus.

I would strongly encourage you to take the opportunity to review and
provide feedback on [4].

Many thanks,
Armando

[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-April/093561.html
[2] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9097
[3]
http://lists.openstack.org/pipermail/openstack-dev/2015-December/080865.html
[4] https://review.openstack.org/#/c/312199/


[openstack-dev] [infra] [fuel] [javascript] Supporting ES6

2016-05-03 Thread Michael Krotscheck
TL/DR: Should we support EcmaScript 6?

Discussions I've had on the topic:

Vancouver:
- Browser support is not yet broad enough, so no- we shouldn't support ES6.
- TypeScript is too closely tied to Corporations (tm), not really an open
standard. Do not support TypeScript.

Austin:
- Fuel is using ES6, and is now an official project (?).
- We have non-browser projects that could use it, assuming that we have a
recent version of Node.js that we can test on.
- We now have Node4 LTS on our infra build nodes, which support _most_ of
EcmaScript 6 things.
- EcmaScript continues to be moving target (And will likely always be a
moving target).
- Xenial contains Node 4 LTS. Ubuntu does _not_ have an upgrade exception
for node (akin to Firefox).
- Node 6 LTS was released during the summit.

Body of work required:
- Discuss and enable linting rules for ES6 in eslint-config-openstack.
- Smoke-test fuel's unit and functional testing for ES6 components.
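For the linting-rules item above, enabling ES6 in an ESLint-based setup is mostly a parser-options switch. A hypothetical `.eslintrc.json` fragment, illustrative only and not the actual eslint-config-openstack contents:

```json
{
  "extends": "openstack",
  "env": {
    "es6": true,
    "node": true
  },
  "parserOptions": {
    "ecmaVersion": 6,
    "sourceType": "module"
  }
}
```

Projects would then opt in per repo, so browser-targeted and Node-targeted repos can pick different `env` settings.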

Personal Assessment:

Frankly, we already have ES6 in our infra, so that train has left the 
station. What we need to do is make sure it has the same level of support
as other languages, which, I believe, isn't going to be too difficult. I
also have some commitments of mutual assistance from Vitaly (Fuel) to keep
things sane and keep communication open. As for the upcoming Node4 vs.
Node6 question, I recommend that we _not_ upgrade to Node 6 LTS in the
Newton cycle, however strongly consider it for the Ocata cycle.

Am I missing anything? Does anyone have opinions?

Michael


[openstack-dev] [Congress] Austin recap

2016-05-03 Thread Tim Hinrichs
Hi all,

Here’s a quick summary of the Congress activities in Austin.  Everyone
should feel free to chime in with corrections and things I missed.

1. Talks

Masahito gave a talk on applying Congress for fault recovery in the context
of NFV.

https://www.openstack.org/summit/austin-2016/summit-schedule/events/7199

Fabio gave a talk on applying Congress + Monasca to enforce
application-level SLAs.

https://www.openstack.org/summit/austin-2016/summit-schedule/events/7363

2. Integrations

We had discussions, both within the Congress Integrations fishbowl session,
and outside of that session on potential integrations with other OpenStack
projects.  Here's a quick overview.

- Monasca (fabiog). The proposed integration: Monasca pushes data to
Congress using the push driver to let Congress know about the alarms
Monasca configured.  Can use multiple alarms using a single table.
Eventually we talked about having Congress analyze the policy to configure
the alarms that Monasca uses, completing the loop.
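The push-driver half of that integration can be made concrete: pushing alarms into Congress amounts to translating each Monasca alarm into a row of a single Congress table. A rough sketch of that translation step, where the function and column names are illustrative rather than the actual driver's schema:

```python
def alarms_to_rows(alarms):
    """Flatten Monasca-style alarm dicts into tuples for one
    hypothetical Congress table: alarms(id, name, severity, state)."""
    rows = []
    for a in alarms:
        rows.append((
            a["id"],
            a["alarm_definition"]["name"],
            a["alarm_definition"]["severity"],
            a["state"],
        ))
    return rows


sample = [{
    "id": "a-1",
    "state": "ALARM",
    "alarm_definition": {"name": "high_cpu", "severity": "HIGH"},
}]
print(alarms_to_rows(sample))  # -> [('a-1', 'high_cpu', 'HIGH', 'ALARM')]
```

The resulting rows would then be handed to Congress's push API so policies can reference a single `alarms` table regardless of how many alarms Monasca defines.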

- Watcher (acabot). Watcher aims to optimize the placement of VMs by
pulling data from Ceilometer/Monasca and Nova (including
affinity/anti-affinity info), computing necessary migrations for whichever
strategy is configured, and migrates the VMs.  Want to use Congress as a
source of policies that they take into account when computing the necessary
migrations.

- Nova scheduler.  There’s interest in policy-enabling the Nova scheduler,
and then integrating that with Congress in the context of delegation, both
to give Congress the ability to pull in the scheduling policy and to push
the scheduling policy.

- Mistral.  The use case for this integration is to help people create an
HA solution for VMs.  So have Congress monitor VMs, identify when they have
failed, and kick off a Mistral workflow to resurrect them.

- Vitrage.  Vitrage does root-cause analysis.  It provides a graph-based
model for the structure of the datacenter (switches attached to
hypervisors, servers attached to hypervisors, etc.) and a templating
language for defining how to create new alarms from existing alarms.  The
action item that we left is that the Vitrage team will initiate a mailing
list thread where we discuss which Vitrage data might be valuable for
Congress policies.

3. Working sessions

- The new distributed architecture is nearing completion.  There seems to
be 1 blocker for having the basic functionality ready to test: at boot,
Congress doesn’t properly spin up datasources that have already been
configured in the database.  As an experiment to see how close we were to
completion, we started up the Congress server with just the API and policy
engine and saw the basics actually working!  When we added the datasources,
we found a bug where the API was assuming the datasources could be
referenced by UUID, when in fact they can only be referenced by Name on the
message-bus.   So while there’s still quite a bit to do, we’re getting
close to having all the basics working.

- We made progress on the high-availability and high-throughput design.
This is still very much open to design and discussion, so continuing the
design on the mailing list would be great.  Here are the highlights.

   o  Policy engine: split into (i) active-active for queries to deal with
high-throughput (ii) active-passive for action-execution (requiring
leader-election, etc.).  Policy CRUD modifies DB; undecided whether API
also informs all policy-engines, or whether they all sync from the DB.

   o  Pull datasources: no obvious need for replication, since they restart
really fast and will just re-pull the latest data anyhow

   o  Push datasources: Need HA for ensuring the pusher can always push,
e.g. the pusher drops the message onto oslo-messaging.  Still up for debate
is whether we also need HA for storing the data since there is no way to
ask for it after a restart; one suggestion is that every datasource must
allow us to ask for the state.  HT does not require replication, since
syncing the state between several instances would be required and would be
less performant than a single instance.

   o  API (didn’t really discuss this, so here’s my take).  No obvious need
for replication for HT, since if the API is a bottleneck, the backend will
be an even bigger bottleneck.  For HA, could do active-active since the API
is just a front-end to the message bus + database, though we would need to
look into locking now that there is no GIL.

It was great seeing everyone in Austin!
Tim


Re: [openstack-dev] [Cinder] Nominating Michał Dulko to Cinder Core

2016-05-03 Thread Sheel Rana Insaan
Even though I am not a core member, I would like to vote for Michał as he
truly deserves it.
Huge acceptance from my side.
+1

Best Regards,
Sheel Rana

On Tue, May 3, 2016 at 11:46 PM, Sean McGinnis 
wrote:

> Hey everyone,
>
> I would like to nominate Michał Dulko to the Cinder core team. Michał's
> contributions with both code reviews [0] and code contributions [1] have
> been significant for some time now.
>
> His persistence with versioned objects has been instrumental in getting
> support in the Mitaka release for rolling upgrades.
>
> If there are no objections from current cores by next week, I will add
> Michał to the core group.
>
> [0] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt
> [1]
>
> https://review.openstack.org/#/q/owner:%22Michal+Dulko+%253Cmichal.dulko%2540intel.com%253E%22++status:merged
>
> Thanks!
>
> Sean McGinnis (smcginnis)
>
>
>


[openstack-dev] [Cinder] Nominating Michał Dulko to Cinder Core

2016-05-03 Thread Sean McGinnis
Hey everyone,

I would like to nominate Michał Dulko to the Cinder core team. Michał's
contributions with both code reviews [0] and code contributions [1] have
been significant for some time now.

His persistence with versioned objects has been instrumental in getting
support in the Mitaka release for rolling upgrades.

If there are no objections from current cores by next week, I will add
Michał to the core group.

[0] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt
[1]
https://review.openstack.org/#/q/owner:%22Michal+Dulko+%253Cmichal.dulko%2540intel.com%253E%22++status:merged

Thanks!

Sean McGinnis (smcginnis)




Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Morgan Fainberg
On Tue, May 3, 2016 at 10:28 AM, Monty Taylor  wrote:

> On 05/03/2016 11:47 AM, Clint Byrum wrote:
>
>> Excerpts from Monty Taylor's message of 2016-05-03 07:59:21 -0700:
>>
>>> On 05/03/2016 08:55 AM, Clint Byrum wrote:
>>>

 Perhaps we have different perspectives. How is accepting what we
 previously emitted and told the user would be valid sneaky or wrong?
 Sounds like common sense due diligence to me.

>>>
>>> I agree - I see no reason we can't validate previously emitted tokens.
>>> But I don't agree strongly, because re-authing on invalid token is a
>>> thing users do hundreds of times a day. (these aren't oauth API Keys or
>>> anything)
>>>
>>>
>> Sure, one should definitely not be expecting everything to always work
>> without errors. On this we agree for sure. However, when we do decide to
>> intentionally induce errors for reasons we have not done so before, we
>> should weigh the cost of avoiding that with the cost of having it
>> happen. Consider this strawman:
>>
>> - User gets token, it says "expires_at Now+4 hours"
>> - User starts a brief set of automation tasks in their system
>>that does not use python and has not failed with invalid tokens thus
>>far.
>> - Keystone nodes are all updated at one time (AMAZING cloud ops team)
>> - User's automation jobs fail at next OpenStack REST call
>> - User begins debugging, wasting hours of time figuring out that
>>their tokens, which they stored and show should still be valid, were
>>rejected.
>>
>
> Ah - I guess this is where we're missing each other, which is good and
> helpful.
>
> I would argue that any user that is _storing_ tokens is doing way too much
> work. If they are doing short tasks, they should just treat them as
> ephemeral. If they are doing longer tasks, they need to deal with timeouts.
> So, this:
>
>
> - User gets token, it says "expires_at Now+4 hours"
> - User starts a brief set of automation tasks in their system
>that does not use python and has not failed with invalid tokens thus
>far.
>
> should be:
>
> - User starts a brief set of automation tasks in their system
> that does not use python and has not failed with invalid tokens thus
> far.
>
> "Get a token" should never be an activity that anyone ever consciously
> performs.
>
>
This is my view. Never, ever, ever assume your token is good until
expiration. Assume the token might be broken at any request and know how to
re-auth.


> And now they have to refactor their app, because this may happen again,
>> and they have to make sure that invalid token errors can bubble up to the
>> layer that has the username/password, or accept rolling back and
>> retrying the whole thing.
>>
>> I'm not saying anybody has this system, I'm suggesting we're putting
>> undue burden on users with an unknown consequence. Falling back to UUID
>> for a while has a known cost of a little bit of code and checking junk
>> tokens twice.
>>
>
Please do not advocate "falling back" to UUID. I am actually against making
fernet the default (very, very strongly), if we have to have this
"fallback" code. It is the wrong kind of approach; we already have serious
issues with complex code paths that produce subtly different results. If
the options are:

1) Make Fernet Default and have "fallback" code

or

2) Leave UUID default and highly recommend fernet (plus gate on fernet
primarily, default in devstack)

I will jump on my soapbox and be very loudly in favor of the 2nd option. If
we communicate this is a change that will happen (hey, maybe throw an
error/make the config option "none" so it has to be explicit) in Newton,
and then move to a Fernet default in O - I'd be ok with that.


>
> Totally. I have no problem with the suggestion that keystone handle this.
> But I also think that users should quite honestly stop thinking about
> tokens at all. Tokens are an implementation detail that if any user thinks
> about while writing their app they're setting themselves up to be screwed -
> so we should make sure we're not talking about them in a primary way such
> as to suggest that people focus a lot of energy on them.
>
> (I also frequently see users who are using python libraries even get
> everything horribly wrong and screw themselves because they think they need
> to think about tokens)
>

Better communication that tokens are ephemeral and should not be assumed to
always work (even until their expiry) should be the messaging we use. It's
simple: plan to reauth as needed and handle failures.
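As a purely illustrative sketch of that "plan to reauth as needed" pattern (the FakeKeystone and Client names below are made up, not keystone or keystoneauth APIs), a client can treat every rejection as a cue to fetch a fresh token once and retry, instead of ever trusting expires_at:

```python
class TokenInvalid(Exception):
    """Raised when the service rejects a token (think HTTP 401)."""

class FakeKeystone:
    """Stand-in identity service whose tokens can die at any moment."""
    def __init__(self):
        self._counter = 0
        self._valid = set()

    def issue_token(self):
        self._counter += 1
        token = "token-%d" % self._counter
        self._valid.add(token)
        return token

    def revoke_all(self):
        # e.g. the operators switched token providers mid-flight
        self._valid.clear()

    def check(self, token):
        if token not in self._valid:
            raise TokenInvalid(token)

class Client:
    """Treats its token as ephemeral: re-auth and retry on rejection."""
    def __init__(self, keystone):
        self.keystone = keystone
        self.token = keystone.issue_token()

    def call(self, api):
        try:
            self.keystone.check(self.token)
        except TokenInvalid:
            # the token died early -- do not debug it, just re-auth
            self.token = self.keystone.issue_token()
            self.keystone.check(self.token)
        return api(self.token)
```

The point of the sketch is that the username/password (here, the FakeKeystone handle) stays reachable from the layer that makes calls, so an early invalidation costs one retry rather than hours of debugging.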

--Morgan


Re: [openstack-dev] [Neutron] Neutron client and plan to transition to OpenStack client

2016-05-03 Thread Richard Theis
Steve Martinelli  wrote on 04/22/2016 05:49:32 PM:

> From: Steve Martinelli 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 04/22/2016 05:52 PM
> Subject: Re: [openstack-dev] [Neutron] Neutron client and plan to 
> transition to OpenStack client
> 
> thanks to richard, tangchen, reedip and others for picking this up 
> and running with it; and thanks to armando for embracing OSC and 
> putting it in neutron's plan.
> 
> On Fri, Apr 22, 2016 at 6:33 PM, reedip banerjee  
wrote:
> Hi Richard, 
> Thanks for the information :)
> 
> Was waiting for it.
> 
> On Sat, Apr 23, 2016 at 3:27 AM, Armando M.  wrote:
> 
> On 22 April 2016 at 13:58, Richard Theis  wrote:
> FYI: I've pushed a series of WIP patch sets [1], [2] and [3] to 
> enable python-neutronclient OSC plugins. I've used "openstack 
> network agent list" as the initial OSC plugin command example.  
> Hopefully these will help during the discussions at the summit. 
> 
> [1] https://review.openstack.org/#/c/309515/ 
> [2] https://review.openstack.org/#/c/309530/ 
> [3] https://review.openstack.org/#/c/309587/ 
> 
> Super! Thanks for your help Richard!
> 
> Cheers,
> Armando
>  
> 
> "Armando M."  wrote on 04/22/2016 12:19:45 PM:
> 
> > From: "Armando M."  
> > To: "OpenStack Development Mailing List (not for usage questions)" 
> >  
> > Date: 04/22/2016 12:22 PM 
> > Subject: [openstack-dev] [Neutron] Neutron client and plan to 
> > transition to OpenStack client 
> > 
> > Hi Neutrinos, 
> > 
> > During the Mitaka release the team sat together to figure out a plan
> > to embrace the OpenStack client and supplant the neutron CLI tool. 
> > 
> > Please note that this does not mean we will get rid of the 
> > openstack-neutronclient repo. In fact we still keep python client 
> > bindings and keep the development for features that cannot easily go
> > in the OSC client (like the high level services). 
> > 
> > We did put together a transition plan in pace [1], but we're 
> > revising it slightly and we'll continue the discussion at the summit. 
> > 
> > If you are interested in this topic, are willing to help with the 
> > transition or have patches currently targeting the client and are 
> > unclear on what to do, please stay tuned. We'll report back after 
> the summit.

Hi,

Is there an update available from the summit session? I didn't see a
resolution documented in [3].

Thanks,
Richard

[3] https://etherpad.openstack.org/p/newton-neutron-future-neutron-client

> > 
> > Armando 
> > 
> > [1] http://docs.openstack.org/developer/python-neutronclient/devref/
> > transition_to_osc.html 
> > [2] 
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9096
> 
> 
> -- 
> Thanks and Regards,
> Reedip Banerjee
> IRC: reedip


[openstack-dev] [keystone] weekly meeting canceled on 5/3

2016-05-03 Thread Steve Martinelli
sorry for the late notice - there are no items on the agenda and i think
most are still decompressing from the summit


Re: [openstack-dev] [puppet] Austin summit sessions recap

2016-05-03 Thread Andrew Woodward
On Tue, May 3, 2016 at 6:38 AM Emilien Macchi  wrote:

> Here's a summary of Puppet OpenStack sessions [1] during Austin summit.
>
> * General feedback is excellent, things are stable, no major changes
> are coming during the next cycle.
> * We discussed about the work we want to do during Newton cycle [2]:
>
> Ubuntu 16.04 LTS
> Make Puppet OpenStack modules work and be gated on Ubuntu 16.04,
> starting from Newton.
> Keep stable/mitaka and before gated on Ubuntu 14.04 LTS.
>
> Release management with trailing cycle
> The release model changed to:
> http://governance.openstack.org/reference/tags/release_cycle-trailing.html
> We'll start producing milestones within a cycle, continue efforts on
> tarballs and investigate package builds (rpm, etc).
>

We spoke a bit about tagging point releases more often. On stable it sounded
like everyone was in favor of automating this. I didn't catch how we
wanted to handle dev milestones.


>
> Move documentation out from Wiki
> See [3].
>
> puppet-pacemaker unification
> Mirantis & Red Hat to continue collaboration on merging efforts on
> puppet-pacemaker module: https://review.openstack.org/#/c/296440/)
> So both Fuel & TripleO will use the same Puppet module to deploy Pacemaker.
>
> CI stabilization
> We're supporting 18 months old releases, so we will continue all
> efforts to stabilize our CI and make it robust so it does not break
> every morning.


Does this make it the last 3 releases + dev = 4, or the last 2 + dev? Since
-3 technically falls off when the next dev cycle starts.

>
> Containers
> Most container deployments have common bits (user/group
> management, config files management, etc).
> We decided that we would add the common bits in our modules, so they
> can be used by people deploying OpenStack in containers. See [4].
>
> [1] https://etherpad.openstack.org/p/newton-design-puppet
> [2] https://etherpad.openstack.org/p/newton-puppet-project-status
> [3] https://etherpad.openstack.org/p/newton-puppet-docs
> [4] https://etherpad.openstack.org/p/newton-puppet-multinode-containers
>
>
> As a retrospective, we've noticed that we had a quiet agenda &
> sessions this time, without critical items. It is a sign that things in
> our group are now very stable and that we did an excellent job to get to
> this point.
> Thanks for everyone who attended our sessions, feel free to add more
> things that I might have missed, or any questions.
> --
> Emilien Macchi
>
-- 
--
Andrew Woodward
Mirantis
Fuel Community Ambassador
Ceph Community


[openstack-dev] [devstack][neutron] VMWare NSX CI - voting on devstack changes long after plugin decomposition

2016-05-03 Thread Sean M. Collins
When the VMWare plugin was decomposed from the main Neutron tree 
(https://review.openstack.org/#/c/160463/) it appears that the CI system was 
left turned on.

http://208.91.1.172/logs/neutron/168438/48/423669-large-ops/logs/q-svc.log.2016-05-03-085740

2016-05-03 09:21:00.577 21706 ERROR neutron plugin_class = self.load_class_for_provider(namespace, plugin_provider)
2016-05-03 09:21:00.577 21706 ERROR neutron   File "/opt/stack/neutron/neutron/manager.py", line 145, in load_class_for_provider
2016-05-03 09:21:00.577 21706 ERROR neutron raise ImportError(_("Plugin '%s' not found.") % plugin_provider)
2016-05-03 09:21:00.577 21706 ERROR neutron ImportError: Plugin 'neutron.plugins.vmware.plugin.NsxPlugin' not found.


I don't know the criteria for when this specific CI job is run; I appear
to be the only one triggering it, for a rather long time:

http://paste.openstack.org/show/495994/

So, it's still voting on DevStack changes but I think we probably should
revoke that.

-- 
Sean M. Collins



Re: [openstack-dev] [api] [docs] [cinder] [swift] [glance] [keystone] [ironic] [trove] [neutron] [heat] [senlin] [manila] [sahara] RST + YAML files ready for pick up from WADL migration

2016-05-03 Thread Jim Rollenhagen
On Tue, May 03, 2016 at 08:29:16AM -0500, Anne Gentle wrote:
> Hi all,
> This patch contains all the RST + YAML for projects to bring over to their
> repos to begin building API reference information from within your repo.
> Get a copy of this patch, and pick up the files for your service in
> api-site/api-ref/source/:
> 
> https://review.openstack.org/#/c/311596/
> 
> There is required cleanup, and you'll need an index.rst, conf.py, and build
> jobs. All of these can be patterned after the nova repository api-ref
> directory. Read more at
> http://docs.openstack.org/contributor-guide/api-guides.html
> 
> It's overall in good shape thanks to Karen Bradshaw, Auggy Ragwitz, Andreas
> Jaeger, and Sean Dague. Appreciate the help over the finish line during
> Summit week, y'all.
> 
> The api-site/api-ref files are now frozen and we will not accept patches.
> The output at developer.openstack.org/api-ref.html remains frozen until we
> can provide redirects to the newly-sourced-and-built files. Please, make
> this work a priority in this release. Ideally we can get everyone ready by
> Milestone 1 (May 31).
> 
> If you would like to use a Swagger/OpenAPI file, pick that file up from
> developer.openstack.org/draft/swagger/ and create build jobs from your repo
> to publish it on developer.openstack.org.
> 
> Let me know if you have questions.
> Thanks,
> Anne

Thanks for doing this, Anne!

Don't forget to add an api-ref tox target, a la
https://github.com/openstack/nova/blob/1555736e3c1e0a66a99d0291934887250cd2e0cc/tox.ini#L106

This is necessary for the jobs to do anything.

// jim

> 
> -- 
> Anne Gentle
> www.justwriteclick.com



Re: [openstack-dev] [Kolla] lock the distro version in the stable branch

2016-05-03 Thread Hui Kang
This commit fixes the tag:
https://github.com/openstack/kolla/commit/e2fa75fce6f90de8b2766070bb65d0b80bcad8c8

But I think fixing the tag in dockerfile of base container image is better
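As an illustration of pinning the tag in the base image Dockerfile, a gate could reject mutable references with a small check like the one below (a hypothetical helper, not existing kolla code):

```python
def base_tag_is_pinned(dockerfile_text):
    """Return True only if every FROM line carries an explicit, non-latest tag."""
    for line in dockerfile_text.splitlines():
        line = line.strip()
        if not line.upper().startswith("FROM "):
            continue
        image = line.split(None, 1)[1]
        # "ubuntu" and "ubuntu:latest" are both mutable references
        if ":" not in image or image.rsplit(":", 1)[1] == "latest":
            return False
    return True
```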

- Hui

On Tue, May 3, 2016 at 12:12 PM, Jeffrey Zhang  wrote:
> Hey guys,
>
> Recently, Ubuntu 16.04 came out and it crashed kolla when using
> ubuntu:latest to build the images.
>
> Even though kolla supports multiple base tags, kolla will fail when using
> any base tag other than centos:7, ubuntu:14.04, or rhel:7.
> It is also hard to support every kind of image tag.
>
> So I suggest that kolla should restrict the base tag. The latest tag is
> mutable and we should not use it, especially in the stable branch. When
> using a mutable image, it is never a *stable* release.
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>


Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Monty Taylor

On 05/03/2016 11:47 AM, Clint Byrum wrote:

Excerpts from Monty Taylor's message of 2016-05-03 07:59:21 -0700:

On 05/03/2016 08:55 AM, Clint Byrum wrote:


Perhaps we have different perspectives. How is accepting what we
previously emitted and told the user would be valid sneaky or wrong?
Sounds like common sense due diligence to me.


I agree - I see no reason we can't validate previously emitted tokens.
But I don't agree strongly, because re-authing on invalid token is a
thing users do hundreds of times a day. (these aren't oauth API Keys or
anything)



Sure, one should definitely not be expecting everything to always work
without errors. On this we agree for sure. However, when we do decide to
intentionally induce errors for reasons we have not done so before, we
should weigh the cost of avoiding that with the cost of having it
happen. Consider this strawman:

- User gets token, it says "expires_at Now+4 hours"
- User starts a brief set of automation tasks in their system
   that does not use python and has not failed with invalid tokens thus
   far.
- Keystone nodes are all updated at one time (AMAZING cloud ops team)
- User's automation jobs fail at next OpenStack REST call
- User begins debugging, wasting hours of time figuring out that
   their tokens, which they stored and show should still be valid, were
   rejected.


Ah - I guess this is where we're missing each other, which is good and 
helpful.


I would argue that any user that is _storing_ tokens is doing way too 
much work. If they are doing short tasks, they should just treat them as 
ephemeral. If they are doing longer tasks, they need to deal with 
timeouts. So, this:



- User gets token, it says "expires_at Now+4 hours"
- User starts a brief set of automation tasks in their system
   that does not use python and has not failed with invalid tokens thus
   far.

should be:

- User starts a brief set of automation tasks in their system
that does not use python and has not failed with invalid tokens thus
far.

"Get a token" should never be an activity that anyone ever consciously 
performs.



And now they have to refactor their app, because this may happen again,
and they have to make sure that invalid token errors can bubble up to the
layer that has the username/password, or accept rolling back and
retrying the whole thing.

I'm not saying anybody has this system, I'm suggesting we're putting
undue burden on users with an unknown consequence. Falling back to UUID
for a while has a known cost of a little bit of code and checking junk
tokens twice.


Totally. I have no problem with the suggestion that keystone handle 
this. But I also think that users should quite honestly stop thinking 
about tokens at all. Tokens are an implementation detail that if any 
user thinks about while writing their app they're setting themselves up 
to be screwed - so we should make sure we're not talking about them in a 
primary way such as to suggest that people focus a lot of energy on them.


(I also frequently see users who are using python libraries even get 
everything horribly wrong and screw themselves because they think they 
need to think about tokens)




[openstack-dev] [nova] Newton mid-cycle meetup RSVP

2016-05-03 Thread Matt Riedemann
We're doing the Nova mid-cycle meetup for Newton at the Intel campus in 
Hillsboro, OR on July 19-21.


I have an RSVP form here: http://goo.gl/forms/MxrriHsABq

If you plan on attending, or think you might be able to (or are trying 
to), please fill that out.


I'd like to have RSVPs completed by Tuesday 5/10 so I can get this 
information to the event planners at Intel.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Kolla] lock the distro version in the stable branch

2016-05-03 Thread Steven Dake (stdake)


From: Jeffrey Zhang >
Reply-To: OpenStack Development Mailing List 
>
Date: Tuesday, May 3, 2016 at 9:12 AM
To: OpenStack Development Mailing List 
>
Subject: [openstack-dev] [Kolla] lock the distro version in the stable branch

Hey guys,

Recently, Ubuntu 16.04 came out and it crashed kolla when using
ubuntu:latest to build the images.

Even though kolla supports multiple base tags, kolla will fail when using
any base tag other than centos:7, ubuntu:14.04, or rhel:7.
It is also hard to support every kind of image tag.

So I suggest that kolla should restrict the base tag. The latest tag is
mutable and we should not use it, especially in the stable branch. When
using a mutable image, it is never a *stable* release.

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me

Totally agree.  File bug - fix :)

Regards
-steve



Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Clint Byrum
Excerpts from Adam Young's message of 2016-05-03 07:21:52 -0700:
> On 05/03/2016 09:55 AM, Clint Byrum wrote:
> > When the operator has configured a new token format to emit, they should
> > also be able to allow any previously emitted formats to be validated to
> > allow users a smooth transition to the new format. We can then make the
> > default behavior for one release cycle to emit Fernet, and honor both
> > Fernet and UUID.
> >
> > Perhaps ignore the other bit that I put in there about switching formats
> > just because you have fernet keys. Let's say the new pseudo code only
> > happens in validation:
> >
> > try:
> >self._validate_fernet_token()
> > except NotAFernetToken:
> >self._validate_uuid_token()
> 
> I was actually thinking of a different migration strategy, exactly the 
> opposite:  for a while, run with the uuid tokens, but store the Fernet 
> body.  After while, switch from validating the uuid token body to the 
> stored Fernet.  Finally, switch to validating the Fernet token from the 
> request.  That way, we always have only one token provider, and the 
> migration can happen step by step.
> 
> It will not help someone that migrates from Icehouse to Ocata. Then 
> again, the dual plan you laid out above will not either;  at some point, 
> people will have to dump the token table to make major migrations.
> 

Your plan has a nice aspect that it allows validating Fernet tokens on
UUID-configured nodes too, which means operators don't have to be careful
to update all nodes at one time. So I think what you describe above is
an even better plan.

Either way, the point is to avoid an immediate mass token invalidation
event on change of provider.
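For illustration, the transition-window validation that honors both formats could look roughly like the sketch below (all names are made up; real keystone code would decrypt the Fernet payload with the repository keys and query the token table):

```python
class InvalidToken(Exception):
    pass

class NotAFernetToken(Exception):
    pass

def validate_fernet_token(token):
    # stand-in for decrypting the token with the current fernet keys
    if not token.startswith("gAAAA"):
        raise NotAFernetToken(token)
    return {"format": "fernet", "token": token}

def validate_uuid_token(token, uuid_token_store):
    # stand-in for looking the token up in the persistent token table
    if token not in uuid_token_store:
        raise InvalidToken(token)
    return {"format": "uuid", "token": token}

def validate_token(token, uuid_token_store):
    """During the transition window, honor both formats on validation."""
    try:
        return validate_fernet_token(token)
    except NotAFernetToken:
        return validate_uuid_token(token, uuid_token_store)
```

The cost of the fallback is exactly what the thread describes: a little code, and junk tokens get checked twice before rejection.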



Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Clint Byrum
Excerpts from Lance Bragstad's message of 2016-05-03 07:42:43 -0700:
> If we were to write a uuid/fernet hybrid provider, it would only be
> expected to support something like stable/liberty to stable/mitaka, right?
> This is something that we could contribute to stackforge, too.
> 

If done the way Adam Young described, with Fernet content as UUIDs,
one could in theory update from any UUID-aware provider, since the
Fernet-emitting nodes would just be writing their Fernet tokens into
the database that the UUID nodes read from, allowing the UUID-only nodes
to validate the new tokens. However, we never support jumping more than
one release at a time, so that is somewhat moot.

Also, stackforge isn't a thing, but I see what you're saying. It could
live out of tree, but let's not abandon all hope that we can collaborate
on something that works for users who desire not to have a window of mass
token invalidation on update.



Re: [openstack-dev] [oslo] config: deduce related options for config generator?

2016-05-03 Thread Doug Hellmann
Excerpts from Markus Zoeller's message of 2016-05-03 18:26:50 +0200:
> While working on [1] I came across a config option ("pybasedir")
> which gets used as a base for many other options, for example
> "state_path". The option "state_path" then shows a default value of
> "state_path = $pybasedir".
> My question here is: is it possible/reasonable to enhance oslo.config
> so that "pybasedir" carries the information that it is used as a base
> for other config options?
> My concern is that one could change "pybasedir" and expect that only
> this one single value changes, but actually one changes multiple other
> config options as well. Making it explicit that "pybasedir" gets used
> multiple times as a base could prevent confusion.
> 
> References:
> [1] https://review.openstack.org/#/c/299236/7/nova/conf/paths.py
> 
> Regards, Markus Zoeller (markus_z)
> 

(Sorry if this is a dupe, I'm having mail client issues.)

We can detect interpolated values in defaults, but those can also appear
in user-provided values. There are also plenty of options that are
related to each other without using interpolation.

Given that we have to handle the explicit cases anyway, and that
interpolation isn't used all that often, I think it likely makes more
sense to start with the explicit implementation and see how far that
takes us before adding any automation.
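For reference, the coupling Markus describes comes from substituting "$option" references when a value is read. A toy model of that resolution (plain Python with string.Template, not oslo.config itself) shows how changing "pybasedir" silently moves every option whose default is built from it:

```python
import string

def resolve(opts, name):
    """Resolve "$other_option" references the way interpolated defaults behave."""
    value = opts[name]
    # keep substituting until no $reference changes the string
    while "$" in value:
        new = string.Template(value).safe_substitute(opts)
        if new == value:
            break
        value = new
    return value

opts = {
    "pybasedir": "/usr/lib/python2.7/site-packages/nova",
    "state_path": "$pybasedir",
    "lock_path": "$state_path/locks",
}
```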

Doug



[openstack-dev] [kolla][kolla-k8s] Core team

2016-05-03 Thread Michał Jastrzębski
Hello,

Since it seems that we have voted for separation of kolla-k8s repos
(yay!) I would like to raise another discussion (but let's wait till
it's official).

Core Team.

We need to build up new core team that will guard the gates on our
brand new repo (when it arrives). One of ideas Steven pointed out is
to add people from etherpad to core team, but I'd like to throw
different idea to the mix, to keep things interesting.

Idea is: let's start with current kolla core team and for the time
being add new cores to kolla-k8s by invitation by existing core
member. For example, I'm kolla core, working with k8s and I see some
guy doing great job and investing time into it, I would propose him
for core, and instead of normal voting, he will get his +2 powers
immediately. This would allow a quick core team buildout and avoid starting
with a bunch of people who don't necessarily want to contribute or even
know each other.

Cheers,
Michal



Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Clint Byrum
Excerpts from Monty Taylor's message of 2016-05-03 07:59:21 -0700:
> On 05/03/2016 08:55 AM, Clint Byrum wrote:
> >
> > Perhaps we have different perspectives. How is accepting what we
> > previously emitted and told the user would be valid sneaky or wrong?
> > Sounds like common sense due diligence to me.
> 
> I agree - I see no reason we can't validate previously emitted tokens. 
> But I don't agree strongly, because re-authing on invalid token is a 
> thing users do hundreds of times a day. (these aren't oauth API Keys or 
> anything)
> 

Sure, one should definitely not be expecting everything to always work
without errors. On this we agree for sure. However, when we do decide to
intentionally induce errors for reasons we have not done so before, we
should weigh the cost of avoiding that with the cost of having it
happen. Consider this strawman:

- User gets token, it says "expires_at Now+4 hours"
- User starts a brief set of automation tasks in their system
  that does not use python and has not failed with invalid tokens thus
  far.
- Keystone nodes are all updated at one time (AMAZING cloud ops team)
- User's automation jobs fail at next OpenStack REST call
- User begins debugging, wasting hours of time figuring out that
  their tokens, which they stored and show should still be valid, were
  rejected.

And now they have to refactor their app, because this may happen again,
and they have to make sure that invalid token errors can bubble up to the
layer that has the username/password, or accept rolling back and
retrying the whole thing.

I'm not saying anybody has this system, I'm suggesting we're putting
undue burden on users with an unknown consequence. Falling back to UUID
for a while has a known cost of a little bit of code and checking junk
tokens twice.



Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread Rayson Ho
On Tue, May 3, 2016 at 12:24 PM, John Dickinson  wrote:

> That's an interesting point. I'm not very familiar with Golang itself yet,
> and I haven't yet had to manage any Golang projects in prod. These sorts of
> questions are great!
>
>
See: https://golang.org/doc/go1compat



> If a distro is distributing pre-compiled binaries, isn't the compatibility
> issue up to the distros? OpenStack is not distributing binaries (or even
> distro packages!), so while it's an important question, how does it affect
> the question of golang being an ok language in which to write openstack
> source code?
>


I mean a commercial OpenStack distro...

OpenStack does not distribute binaries today (because Python is an
interpreted language), but Go is a compiled language. So maybe I should
simplify my question -- in what form should a commercial OpenStack distro
distribute OpenStack components written in Go?

Rayson

==
Open Grid Scheduler - The Official Open Source Grid Engine
http://gridscheduler.sourceforge.net/
http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html





>
> --John
>
>
>
>
> On 3 May 2016, at 9:16, Rayson Ho wrote:
>
> > I like Go! However, Go does not offer binary compatibility between point
> > releases. For those who install from source it may not be a big issue,
> but
> > for commercial distributions that pre-package & pre-compile everything,
> > then the compiled Go libs won't be compatible with old/new releases of
> the
> > Go compiler that the user may want to install on their systems.
> >
> > Rayson
> >
> > ==
> > Open Grid Scheduler - The Official Open Source Grid Engine
> > http://gridscheduler.sourceforge.net/
> > http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html
> >
> >
> >
> >
> > On Tue, May 3, 2016 at 11:58 AM, John Dickinson  wrote:
> >
> >> TC,
> >>
> >> In reference to
> >> http://lists.openstack.org/pipermail/openstack-dev/2016-May/093680.html
> >> and Thierry's reply, I'm currently drafting a TC resolution to update
> >>
> http://governance.openstack.org/resolutions/20150901-programming-languages.html
> >> to include Go as a supported language in OpenStack projects.
> >>
> >> As a starting point, what would you like to see addressed in the
> document
> >> I'm drafting?
> >>
> >> --John
> >>
> >>
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Austin summit - session recap/summary

2016-05-03 Thread Steven Hardy
Hi all,

Some folks have requested a summary of our summit sessions, as has been
provided for some other projects.

I'll probably go into more detail on some of these topics either via
subsequent, more focused threads and/or some blog posts, but what follows is
an overview of our summit sessions[1] with notable actions or decisions
highlighted.  I'm including some of my own thoughts and conclusions, folks
are welcome/encouraged to follow up with their own clarifications or
different perspectives :)

TripleO had a total of 5 sessions in Austin; I'll cover them one-by-one:

-
Upgrades - current status and roadmap
-

In this session we discussed the current state of upgrades - initial
support for full major version upgrades has been implemented, but the
implementation is monolithic, highly coupled to pacemaker, and inflexible
with regard to third-party extraconfig changes.

The main outcomes were that we will add support for more granular
definition of the upgrade lifecycle to the new composable services format,
and that we will explore moving towards the proposed lightweight HA
architecture to reduce the need for so much pacemaker specific logic.

We also agreed that investigating use of mistral to drive upgrade workflows
was a good idea - currently we have a mixture of scripts combined with Heat
to drive the upgrade process, and some refactoring into discrete mistral
workflows may provide a more maintainable solution.  Potential for using
the existing SoftwareDeployment approach directly via mistral (outside of
the heat templates) was also discussed as something to be further
investigated and prototyped.

We also touched on the CI implications of upgrades - we've got an upgrades
job now, but we need to ensure coverage of full release-to-release upgrades
(not just commit to commit).

---
Containerization status/roadmap
---

In this session we discussed the current status of containers in TripleO
(which is to say, the container based compute node which deploys containers
via Heat onto an Atomic host node that is also deployed via Heat), and
what strategy is most appropriate to achieve a fully containerized TripleO
deployment.

Several folks from Kolla participated in the session, and there was
significant focus on where work may happen such that further collaboration
between communities is possible.  To some extent this discussion on where
(as opposed to how) proved a distraction and prevented much discussion on
supportable architectural implementation for TripleO, thus what follows is
mostly my perspective on the issues that exist:

Significant uncertainty exists wrt integration between Kolla and TripleO -
there's largely consensus that we want to consume the container images
defined by the Kolla community, but much less agreement that we can
feasibly switch to the ansible-orchestrated deployment/config flow
supported by Kolla without breaking many of our primary operator interfaces
in a fundamentally unacceptable way, for example:

- The Mistral based API is being implemented on the expectation that the
  primary interface to TripleO deployments is a parameters schema exposed
  by a series of Heat templates - this is no longer true in a "split stack"
  model where we have to hand off to an alternate service orchestration tool.

- The tripleo-ui (based on the Mistral based API) consumes heat parameter
  schema to build its UI, and Ansible doesn't support the necessary
  parameter schema definition (such as types and descriptions) to enable
  this pattern to be replicated.  Ansible also doesn't provide an HTTP API,
  so we'd still have to maintain an API surface for the (non-python) UI to
  consume.

We also discussed ideas around integration with kubernetes (a hot topic on
the Kolla track this summit), but again this proved inconclusive beyond
agreement that, yes, someone should try developing a PoC to stimulate further
discussion.  Again, significant challenges exist:

- We still need to maintain the Heat parameter interfaces for the API/UI,
  and there is also a strong preference to maintain puppet as a tool for
  generating service configuration (so that existing operator integrations
  via puppet continue to function) - this is a barrier to directly
  consuming the kolla-kubernetes effort directly.

- A COE layer like kubernetes is a poor fit for deployments where operators
  require strict control of service placement (e.g. exactly which nodes a
  service runs on, IP address assignments to specific nodes, etc.) - this is
  already a strong requirement for TripleO users and we need to figure out
  if/how it's possible to control container placement per node/namespace.

- There are several uncertainties regarding the HA architecture, such as
  how do we achieve fencing for nodes (which is currently provided via
  pacemaker), in particular the HA model for real production deployments
  via 

[openstack-dev] [neutron][nova][SR-IOV] SR-IOV meeting May 3 2016 - update

2016-05-03 Thread Moshe Levi
Hi,



I just wanted to give a short update regarding the SR-IOV/PCI Passthrough/NFV
meeting.



* We decided to change the meeting frequency to every week, until
PCI/SR-IOV/NUMA is more stable; see [1]

* Improving SR-IOV/PCI Passthrough /NFV testing

o   With the help of wznoinsk we are working to move Mellanox CI to containers 
(owner lennyb)

o   Multi-node CI for SR-IOV/PCI Passthrough/NFV (needs an owner)

o   CI for PF passthrough (needs an owner)

* Documentation

o   Improve PCI Passthrough SR-IOV Documentation -  (owners lbeliveau, moshele)

o   Improve NUMA and cpu pinning Documentation - (owner sfinucan)

* I updated the etherpad [2] with the agenda for next week (May 10,
2016) and added the following sections:

o   Patches ready for core reviews

o   Patches for sub-team review - please try to review them for our next meeting

o   Patches that need an owner - feel free to add your irc name to the patches
you think you can continue



[1] - https://review.openstack.org/#/c/312107/

[2] - https://etherpad.openstack.org/p/sriov_meeting_agenda



Thanks,

 Moshe Levi.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] config: deduce related options for config generator?

2016-05-03 Thread Markus Zoeller
While working on [1] I came across a config option ("pybasedir")
which gets used as a base for many other options; for example,
"state_path" then shows a default value of "state_path = $pybasedir".
My question here is: is it possible/reasonable to enhance oslo.config
so that "pybasedir" can carry the information that it is used as a
base for other config options?
My concern is that one could change "pybasedir" and expect that only
this one single value changes, but actually one changes multiple other
config options as well. Making it explicit that "pybasedir" gets used
multiple times as a base could prevent confusion.
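The substitution behaviour being described can be sketched with the stdlib alone. This is an illustration of the mechanism, not oslo.config itself (real nova code registers these options via oslo_config.cfg, which resolves "$option" references at access time), and the paths shown are made up:

```python
# Stdlib-only sketch of "$pybasedir"-style interpolation: one option's
# default refers to another option's value, transitively.
from string import Template

defaults = {
    'pybasedir': '/usr/lib/python2.7/site-packages/nova',
    'state_path': '$pybasedir',
    'instances_path': '$state_path/instances',
}

def resolve(name):
    """Substitute $other_option references until none remain."""
    value = defaults[name]
    while '$' in value:
        value = Template(value).substitute(defaults)
    return value

# Changing "pybasedir" implicitly changes every option derived from it --
# exactly the hidden fan-out the proposal wants to make explicit.
print(resolve('state_path'))
print(resolve('instances_path'))
```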

References:
[1] https://review.openstack.org/#/c/299236/7/nova/conf/paths.py

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread John Dickinson
That's an interesting point. I'm not very familiar with Golang itself yet, and 
I haven't yet had to manage any Golang projects in prod. These sorts of 
questions are great!

If a distro is distributing pre-compiled binaries, isn't the compatibility 
issue up to the distros? OpenStack is not distributing binaries (or even distro 
packages!), so while it's an important question, how does it affect the 
question of golang being an ok language in which to write openstack source code?

--John




On 3 May 2016, at 9:16, Rayson Ho wrote:

> I like Go! However, Go does not offer binary compatibility between point
> releases. For those who install from source it may not be a big issue, but
> for commercial distributions that pre-package & pre-compile everything,
> then the compiled Go libs won't be compatible with old/new releases of the
> Go compiler that the user may want to install on their systems.
>
> Rayson
>
> ==
> Open Grid Scheduler - The Official Open Source Grid Engine
> http://gridscheduler.sourceforge.net/
> http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html
>
>
>
>
> On Tue, May 3, 2016 at 11:58 AM, John Dickinson  wrote:
>
>> TC,
>>
>> In reference to
>> http://lists.openstack.org/pipermail/openstack-dev/2016-May/093680.html
>> and Thierry's reply, I'm currently drafting a TC resolution to update
>> http://governance.openstack.org/resolutions/20150901-programming-languages.html
>> to include Go as a supported language in OpenStack projects.
>>
>> As a starting point, what would you like to see addressed in the document
>> I'm drafting?
>>
>> --John
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread John Dickinson
That's a good question, and I'll be sure to address it. Thanks.

In the context of "golang code in swift", any discussion around a "goslo" 
library would be up to the oslo team, I think. The proposed functionality that 
would be in golang in swift does not currently depend on any oslo library. In 
general, if the TC supports Go, I'd think it wouldn't be any different than the 
question of "where's the oslo libraries for javascript [which is already an 
approved language]?"

--John




On 3 May 2016, at 9:14, Tim Bell wrote:

> John,
>
> How would Oslo like functionality be included ? Would the aim be to produce 
> equivalent libraries ?
>
> Tim
>
>
>
>
> On 03/05/16 17:58, "John Dickinson"  wrote:
>
>> TC,
>>
>> In reference to 
>> http://lists.openstack.org/pipermail/openstack-dev/2016-May/093680.html and 
>> Thierry's reply, I'm currently drafting a TC resolution to update 
>> http://governance.openstack.org/resolutions/20150901-programming-languages.html
>>  to include Go as a supported language in OpenStack projects.
>>
>> As a starting point, what would you like to see addressed in the document 
>> I'm drafting?
>>
>> --John
>>
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-05-03 Thread Clint Byrum
Excerpts from Edward Leafe's message of 2016-05-03 08:20:36 -0700:
> On May 3, 2016, at 6:45 AM, Miles Gould  wrote:
> 
> >> This DB could be an RDBMS or Cassandra, depending on the deployer's 
> >> preferences
> > AFAICT this would mean introducing and maintaining a layer that abstracts 
> > over RDBMSes and Cassandra. That's a big abstraction, over two quite 
> > different systems, and it would be hard to write code that performs well in 
> > both cases. If performance in this layer is critical, then pick whichever 
> > DB architecture handles the expected query load better and use that.
> 
> Agreed - you simply can’t structure the data the same way. When I read 
> criticisms of Cassandra that include “you can’t do joins” or “you can’t 
> aggregate”, it highlights this fact: you have to think about (and store) your 
> data completely differently than you would in an RDBMS. You cannot simply 
> abstract out the differences.
> 

Right, once one accepts that fact, Cassandra looks a lot less like a
revolutionary database, and a lot more like a sharding toolkit.
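The "no joins, restructure your data" point can be made concrete with a toy sketch — plain Python dicts standing in for Cassandra tables, nothing here is real Cassandra or nova code:

```python
# Toy illustration: a query an RDBMS would answer with a JOIN or a WHERE
# clause is instead served by a second table denormalized at write time.
instances_by_id = {}
instances_by_host = {}    # one "table" per read path, keyed by the query

def write_instance(uuid, host, name):
    row = {'uuid': uuid, 'host': host, 'name': name}
    instances_by_id[uuid] = row
    # Duplicate the row under every key we will later need to query by.
    instances_by_host.setdefault(host, []).append(row)

write_instance('i-1', 'compute1', 'vm-a')
write_instance('i-2', 'compute1', 'vm-b')
write_instance('i-3', 'compute2', 'vm-c')

# "SELECT name FROM instances WHERE host = 'compute1'" without a join:
print(sorted(r['name'] for r in instances_by_host['compute1']))
```

The write path fans out to as many tables as there are query patterns, which is why the abstraction over an RDBMS and Cassandra discussed above is so hard to build.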

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread Rayson Ho
I like Go! However, Go does not offer binary compatibility between point
releases. For those who install from source it may not be a big issue, but
for commercial distributions that pre-package & pre-compile everything,
then the compiled Go libs won't be compatible with old/new releases of the
Go compiler that the user may want to install on their systems.

Rayson

==
Open Grid Scheduler - The Official Open Source Grid Engine
http://gridscheduler.sourceforge.net/
http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html




On Tue, May 3, 2016 at 11:58 AM, John Dickinson  wrote:

> TC,
>
> In reference to
> http://lists.openstack.org/pipermail/openstack-dev/2016-May/093680.html
> and Thierry's reply, I'm currently drafting a TC resolution to update
> http://governance.openstack.org/resolutions/20150901-programming-languages.html
> to include Go as a supported language in OpenStack projects.
>
> As a starting point, what would you like to see addressed in the document
> I'm drafting?
>
> --John
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread Tim Bell
John,

How would Oslo like functionality be included ? Would the aim be to produce 
equivalent libraries ?

Tim




On 03/05/16 17:58, "John Dickinson"  wrote:

>TC,
>
>In reference to 
>http://lists.openstack.org/pipermail/openstack-dev/2016-May/093680.html and 
>Thierry's reply, I'm currently drafting a TC resolution to update 
>http://governance.openstack.org/resolutions/20150901-programming-languages.html
> to include Go as a supported language in OpenStack projects.
>
>As a starting point, what would you like to see addressed in the document I'm 
>drafting?
>
>--John
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kolla] lock the distro version in the stable branch

2016-05-03 Thread Jeffrey Zhang
Hey guys,

Recently, Ubuntu 16.04 came out, and it broke Kolla when using
ubuntu:latest to build the images.

Even though Kolla supports multiple base tags, the build will fail when
using any base tag other than centos:7, ubuntu:14.04, or rhel:7.
It is also hard to support every possible image tag.

So I propose that Kolla restrict the base tag. The latest tag is mutable
and we should not use it, especially in the stable branch. A release
built from a mutable image is never truly *stable*.
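The proposed restriction could be enforced with a small check before a build starts. This is a hypothetical sketch only — `SUPPORTED_BASES` and `validate_base` are illustrative names, not kolla-build's real interface:

```python
# Hypothetical pre-build check: reject mutable or unsupported base tags.
SUPPORTED_BASES = {
    'centos': {'7'},
    'ubuntu': {'14.04'},
    'rhel': {'7'},
}

def validate_base(base, tag):
    """Return 'base:tag' if the pair is pinned and supported, else raise."""
    if tag == 'latest':
        raise ValueError('mutable tag "latest" is not allowed on a '
                         'stable branch')
    if tag not in SUPPORTED_BASES.get(base, set()):
        raise ValueError('unsupported base image: %s:%s' % (base, tag))
    return '%s:%s' % (base, tag)

print(validate_base('ubuntu', '14.04'))
```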

-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >