[openstack-dev] [glance] Unable to set metadata_encryption_key

2016-05-17 Thread Djimeli Konrad
Hello,

I am working on a bug
(https://bugs.launchpad.net/glance/+bug/1569937), but when trying to
reproduce it by setting

metadata_encryption_key = AoAMaVuEEJVYRvWgWrfHJoThUPmvniTi

I get the following error from glance-api

ValueError: Input strings must be a multiple of 16 in length

but the string above is actually 32 characters long. I would like to know
if there is something I am doing wrong.
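From what I can tell so far, the failing check is about the length of the data being decrypted, not the key: AES works on 16-byte blocks, so the stored ciphertext must be block-aligned, while a 32-character key is a perfectly valid AES-256 key. A minimal sketch of the constraint as I understand it (plain Python, no crypto library):

```python
# Sketch of the rule behind "Input strings must be a multiple of 16 in
# length": AES is a 16-byte block cipher, so the *ciphertext* handed to
# decrypt must be block-aligned. The key length is a separate check.
AES_BLOCK_SIZE = 16

def check_block_aligned(ciphertext: bytes) -> None:
    if len(ciphertext) % AES_BLOCK_SIZE != 0:
        raise ValueError("Input strings must be a multiple of 16 in length")

key = b"AoAMaVuEEJVYRvWgWrfHJoThUPmvniTi"  # 32 bytes: a valid AES-256 key
check_block_aligned(b"0123456789abcdef" * 2)  # 32-byte ciphertext: passes
```

So my guess is that glance-api is trying to decrypt location values that were stored before the key was set (i.e. never encrypted), which would make their length arbitrary.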


Thanks
Konrad

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L2GW] Mitaka release of L2 Gateway now available

2016-05-17 Thread Sukhdev Kapur
Hi Martinx,

L2GW is designed to bridge neutron networks with non-neutron networks to
form an L2 broadcast domain - not two neutron networks.
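For example, a typical workflow looks roughly like this (the device and interface names below are made up, and the exact flags may vary by release):

```shell
# Hedged sketch of the networking-l2gw CLI; switch/port names are
# hypothetical and flags may differ between releases.

# 1. Register a non-neutron gateway device and the interface to bridge:
neutron l2-gateway-create gw1 --device name=hw-switch-1,interface_names=eth3

# 2. Connect a neutron network to it, extending its L2 broadcast domain:
neutron l2-gateway-connection-create gw1 my-neutron-net --default-segmentation-id 100
```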

regards..

-Sukhdev


On Mon, May 16, 2016 at 8:58 PM, Martinx - ジェームズ 
wrote:

>
>
> On 11 May 2016 at 16:05, Sukhdev Kapur  wrote:
>
>>
>> Folks,
>>
>> I am happy to announce that Mitaka release for L2 Gateway is released and
>> now available at https://pypi.python.org/pypi/networking-l2gw.
>>
>> You can install it by using "pip install networking-l2gw"
>>
>> This release has several enhancements and fixes for issues discovered in
>> liberty release.
>>
>> Thanks
>> Sukhdev Kapur
>>
>>
> Sounds very interesting!
>
> Currently, I have a DPDK app that is an L2 bridge; however, when I bridge
> two Neutron networks together (under the same L2 broadcast domain),
> OpenStack itself is "not aware" of this!
>
> Basically, OpenStack doesn't "know" that a "regular instance" is an L2
> bridge, which makes things very weird for NFV applications like mine.
>
> So, my question is: can this "L2 Gateway" help my setup? I mean, can I use
> "L2 Gateway" to tell it: "Hey, OpenStack, those two networks X & Y are, in
> fact, just one"? Is this possible?
>
> Cheers!
> Thiago
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral][oslo.messaging] stable/mitaka is broken by oslo.messaging 5.0.0

2016-05-17 Thread Joshua Harlow

Doug Hellmann wrote:

Excerpts from Renat Akhmerov's message of 2016-05-17 19:10:55 +0700:

Team,

Our stable/mitaka branch is now broken by oslo.messaging 5.0.0. Global
requirements for stable/mitaka have oslo.messaging>=4.0.0, so it can fetch 5.0.0.

Just a reminder: it breaks us because we intentionally modified
RPCDispatcher, as in [1]. It was needed for “at-least-once” delivery. In
master we have already agreed to remove that hack and work towards a decent
solution (there are options). The patch is [2]. But we need to handle it in
mitaka somehow.

Options I see:
1. Constrain oslo.messaging in global-requirements.txt for stable/mitaka to
4.6.1. Hard to do, since it requires wide cross-project coordination.
2. Remove that hack in stable/mitaka as we did with master. It may be bad,
because this was wanted very much by some of the users.

Not sure what else we can do.


You could set up your test jobs to use the upper-constraints.txt file in
the requirements repo.

What was the outcome of the discussion about adding the at-least-once
semantics to oslo.messaging?



So there are a few options I am seeing so far (there might be more that
I don't see); others can hopefully correct me if these are wrong
(which they might be) ;)


Option #1

Oslo.messaging (and the dispatcher part that does this) stays as is,
doing at-most-once for RPC (notifications are in a different category
here, so let's not discuss them), and doing at-most-once well and
battle-hardened (its current goal) across the various backend drivers
it supports.


At that point at-least-once will have to be done via some other library
where this kind of semantics can be placed; that might be tooz via
https://review.openstack.org/#/c/260246/ (which has similar semantics,
but is not based on a kind of RPC; it's more like a job queue).
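(For anyone following along, a toy sketch of the semantic difference; this is not oslo.messaging's actual code, just the ack-ordering idea:)

```python
# Toy illustration: the delivery guarantee comes down to whether a
# message is acked before or after the handler runs.

def dispatch_at_most_once(msg, ack, handler):
    ack(msg)      # ack first: if handler() crashes, the message is lost
    handler(msg)

def dispatch_at_least_once(msg, ack, handler):
    handler(msg)  # handle first: if we crash before ack(), the broker
    ack(msg)      # redelivers, so handler() may run more than once

events = []
dispatch_at_least_once("m1",
                       ack=lambda m: events.append(("ack", m)),
                       handler=lambda m: events.append(("handle", m)))
# events == [("handle", "m1"), ("ack", "m1")]
```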


Option #2

Oslo.messaging (and the dispatcher part that does this) changes
(possibly allowing it to be replaced with a different type of
dispatcher, i.e. like in https://review.openstack.org/#/c/314732/); the
default class continues being great for RPC (notifications are in a
different category here, so let's not discuss them), doing
at-most-once well and battle-hardened (its current goal) across the
various backend drivers it supports. If people want to provide an
alternate class with different semantics, they are somewhat on their own
(but at least they can do this).


Issues raised: this may not be wanted, though, as some of the
oslo.messaging folks do not want the dispatcher class to be exposed at
all (and would prefer to make it totally private, so exposing it would
be against that goal); though people are already 'hacking' this kind of
functionality in, so it might be the best we can get at the current time?


Option #3

Do nothing.

Issues raised: every time oslo.messaging changes this *mostly* internal
dispatcher API, a project will have to make a new 'hack' to replace it
and hope that the semantics it has 'hacked' in will continue to be
compatible with the various drivers in oslo.messaging. Not, IMHO, a
sustainable way to keep working (and I'd be wary of doing this in a
project if I were the owner of one, because it's, ummm, 'dirty').


My thoughts on what could work:

What I'd personally like to see is a mix of options #1 and #2, where we
have commitment from folks (besides myself, lol) to work on option #1,
and we temporarily move forward with option #2 with a strict statement
that the functionality we would be enabling will only exist for, say, a
single release (and then it will be removed).


Thoughts from others?


Doug


Thoughts?

[1] 
https://github.com/openstack/mistral/blob/stable/mitaka/mistral/engine/rpc.py#L38-L88
[2] 
https://review.openstack.org/#/c/316578/

Renat Akhmerov
@Nokia


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral][oslo.messaging] stable/mitaka is broken by oslo.messaging 5.0.0

2016-05-17 Thread Renat Akhmerov

> On 17 May 2016, at 21:50, Doug Hellmann  wrote:
> 
> Excerpts from Renat Akhmerov's message of 2016-05-17 19:10:55 +0700:
>> Team,
>> 
>> Our stable/mitaka branch is now broken by oslo.messaging 5.0.0. Global 
>> requirements for stable/mitaka has oslo.messaging>=4.0.0 so it can fetch 
>> 5.0.0.
>> 
>> Just reminding that it breaks us because we intentionally modified 
>> RPCDispatcher like in [1]. It was needed for “at-least-once” delivery. In 
>> master we already agreed to remove that hack and work towards having a 
>> decent solution (there are options). The patch is [2]. But we need to handle 
>> it in mitaka somehow.
>> 
>> Options I see:
>> Constrain oslo.messaging in global-requirements.txt for stable/mitaka with 
>> 4.6.1. Hard to do since it requires wide cross-project coordination.
>> Remove that hack in stable/mitaka as we did with master. It may be bad 
>> because this was wanted very much by some of the users
>> 
>> Not sure what else we can do.
> 
> You could set up your test jobs to use the upper-constraints.txt file in
> the requirements repo.

Yes, it’s an option. I’m just thinking from a regular user’s perspective. There
will be a lot of people who don’t know about upper-constraints.txt, and they
will stumble on this just using our requirements.txt. My question here is: is
upper-constraints.txt something that’s officially promoted and should be used
by everyone, or is it mostly introduced for our internal OpenStack gating
system?
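(For reference, I believe a constrained install looks like the following; the exact raw-file URL is my assumption:)

```shell
# Install mistral with stable/mitaka upper constraints applied, so pip
# resolves oslo.messaging to the capped version instead of 5.0.0.
pip install -c "https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/mitaka" mistral
```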

> What was the outcome of the discussion about adding the at-least-once
> semantics to oslo.messaging?


No outcome yet; we’re still discussing. I expect more people to join the
thread, since some stakeholders were off after the summit.

Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-17 Thread Carl Baldwin
On May 17, 2016 2:18 PM, "Kevin Benton"  wrote:
>
> >I kind of think it makes sense to require evacuating a segment of
its ports before deleting it.
>
> Ah, I left out an important assumption I was making. We also need to
> auto-delete the DHCP port as the segment is deleted. I was thinking this
> will basically be like the delete_network case, where we auto-remove the
> network-owned ports.

I can go along with that. Thanks for the clarification.

Carl
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] driver (authentication issue)

2016-05-17 Thread Yue Xin
Hi Tim and all,

Thank you very much. I had filed the congress mail under a tag, so I missed
your email and am replying late. Sorry about that.

My problem here is that I want to use the command line to put data into
the congress datasource.
I used the command "openstack congress datasource create test test" (I have a
test_driver) and succeeded in creating a table.
Then I wanted to check the table using the command (openstack congress
datasource list table test); the error is (internal server error 501).
Then I tried to push data to the test table using

curl -g -i -X PUT http://localhost:1789/v1/data-sources/id/tables/


and the response is "authentication required", which means I can't push
data into the congress datasource. I have no idea how to fix it. Can you
give me some hints?
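For context, I understand the usual pattern is to obtain a Keystone token (e.g. via "openstack token issue") and send it in the X-Auth-Token header; a rough sketch of what I think the request should look like (the path pieces and names below are placeholders, not the exact Congress API):

```python
# Hypothetical sketch: Congress's API is Keystone-protected, so a bare
# PUT gets "authentication required". Attach a token header instead.
import json
import urllib.request

def build_push_request(base_url, ds_id, table, token, rows):
    # e.g. base_url = "http://localhost:1789/v1"
    url = f"{base_url}/data-sources/{ds_id}/tables/{table}"
    req = urllib.request.Request(url, data=json.dumps(rows).encode(),
                                 method="PUT")
    req.add_header("X-Auth-Token", token)
    req.add_header("Content-Type", "application/json")
    return req
```

With curl, the equivalent would be adding -H "X-Auth-Token: $TOKEN" to the request.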

Thank you very much.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Aodh] add "meter-list" command in Aodh CLI

2016-05-17 Thread li . yuanzhen
OK, perfect.
I like this solution. Subsequently, I would like to add this message to
aodhclient.

Thank you very much

Best Regards
Rajen


> Hi,
> 
> 
> We shouldn't have 'meter-list' in aodhclient.
> 
> Nova have API and function to show image list, and 'nova image-list' is 
talking to nova, not glance. It would be strange if '$ aodh meter-list' 
(aodhclient) made request to ceilometer.
> 
> On the other hand, I understand your concern. We can improve aodh 
document and/or add help message of aodhclient, saying like 'meter can be 
found in ceilometer'. What do you think?
> 
> 
> Cheers,
> Ryota
> 
> > -Original Message-
> > From: li.yuanz...@zte.com.cn [mailto:li.yuanz...@zte.com.cn]
> > Sent: Wednesday, May 18, 2016 11:52 AM
> > To: Julien Danjou; openstack-dev@lists.openstack.org
> > Cc: openstack-dev@lists.openstack.org; liusheng2...@gmail.com; 
aji.zq...@gmail.com; ildiko.van...@ericsson.com;
> > lianhao...@intel.com; Mibu Ryota(壬生 亮太); zhang.yuj...@zte.com.cn
> > Subject: [Openstack] [Aodh] add "meter-list" command in Aodh CLI
> > 
> > So sorry, my mistake when create the link.
> > 
> > 
> > > On Tue, May 17 2016, li.yuanz...@zte.com.cn wrote:
> > >
> > > > Now, in Aodh CLI, when create a threshold alarm, the
> > > > -m/--meter-name must be required.
> > > > But, if the user is not familiar with ceilometer or aodh,
> > > > the user is not sure what value should give to -m/--meter-name.
> > > > So the command "aodh meter list", I think, is needed.
> > 
> > > I don't think so, that's a Ceilometer command. There's no need for
> > > Aodh client to talk to Ceilometer.
> > 
> > Thank you for your suggestion, but I have a different opinion.
> > meter list, I think, is not exclusive for ceilometer command, 
especially after aodh is seperated from ceilometer.
> > Add meter list in Aodh CLI can make user more convenience and easy to 
use Aodh, such as when users create a threshold
> > alarm and don't know which kind of meter is supported, users can 
easily query it by meter-list.
> > For example, "nova image-list ", as well as "glance list"command, can 
get the image value, which is required for creating
> > a instance by "nova boot --image  ".
> > 
> > But there is really an ambiguity for the " migrate meter-list command 
from ceilometer CLI to Aodh CLI ", my miss.
> > May "add meter-list in Aodh CLI" is more appropriate.
> > 
> > 
> > 
> > 
> > 
> > >
> > > > I create a bp[1], to migrate "meter-list" command from
> > > > ceilometer CLI to Aodh CLI.
> > > > Is there any suggestion for this bp? or is the bp 
necessary?
> > > >
> > > > [1]:
> > > > 
https://blueprints.launchpad.net/cinder/+spec/migrate-meter-list-com
> > > > mand-from-ceilometer-cli-to-aodh-cli
> > > > <
https://blueprints.launchpad.net/cinder/+spec/migrate-meter-list-co
> > > > mmand-from-ceilometer-cli-to-aodh-cli>
> > 
> > > Thanks for your effort! Though I'd like to point a few problems 
here:
> > > 1. This blueprint is created in Cinder 2. You sent this mail to the
> > > *user* mailing list, and not to the
> > >developer mailing list (openstack-dev@lists.openstack.org)
> > > 3. You did not discuss anything on the dev mailing list with any of 
the
> > >   active Telemetry developer before, and just took action before we 
can
> > >   comment and state that this is a bad idea (like I just did). Not 
the
> > >   best move.
> > > 4. Cc'ing individuals is not the best move (again, dev mailing list 
is
> > >the right medium)
> > >
> > > Cheers,
> > > --
> > > Julien Danjou
> > 
> > I will obsolete the link in cinder, and thank you for your suggestion 
again.
> > 
> > 
> > 
> > 
> > 
> > ZTE Information Security Notice: The information contained in this 
mail (and any attachment transmitted herewith) is
> > privileged and confidential and is intended for the exclusive use of 
the addressee(s).  If you are not an intended recipient,
> > any disclosure, reproduction, distribution or other dissemination or 
use of the information contained is strictly prohibited.
> > If you have received this mail in error, please delete it and notify 
us immediately.
> > 
> > 
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

[openstack-dev] [Neutron][Stable][Liberty][CLI] Gate broken

2016-05-17 Thread Darek Smigiel
Hello Stable Maint Team,
It seems that the python-neutronclient gate for Liberty is broken [1][2] by an
update to keystoneclient. The OpenStack proposal bot has already sent an update
to requirements [3], but it needs to be merged.
If you have enough power, please unblock the gate.

Thanks,
Darek

[1] https://review.openstack.org/#/c/296580/
[2] https://review.openstack.org/#/c/296576/
[3] https://review.openstack.org/#/c/258336/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack] [Smaug] Use Trello to coordinate?

2016-05-17 Thread xiangxinyong
Hello everyone,

The Smaug project is considering using Trello to coordinate.

At present, there are four lists on the Smaug board:

Todo, In Progress, Pending Review and Done.

Suggestions on how to use Trello in Smaug, and on coordination in general,
are welcome.

You are welcome to join the Smaug Trello board:

https://trello.com/invite/b/Sudr4fKT/826e3dcffc7259b1447d4ecc448c1a45/smaug

Looking forward to your feedback.

Thanks very much.

Best Regards,
 xiangxinyong
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer

2016-05-17 Thread Steven Dake (stdake)
This should be obvious to core reviewers, but for Mauricio to join the core 
reviewer team, there must be no veto votes and a majority of +1s.  Apologies 
for leaving that out.

Regards
-steve

From: Steven Dake
Reply-To: "openstack-dev@lists.openstack.org"
Date: Tuesday, May 17, 2016 at 12:00 PM
To: "openstack-dev@lists.openstack.org"
Subject: [openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer

Hello core reviewers,

I am proposing Mauricio (mlima on IRC) for the core review team.  He has done a
fantastic job reviewing, appearing in the middle of the pack for 90 days [1] and
at #2 for 45 days [2].  His IRC participation is also fantastic, and he does a
good job on technical work, including implementing Manila from zero experience
:) as well as code cleanup all over the code base and documentation.  Consider
my proposal a +1 vote.

I will leave voting open for 1 week until May 24th.  Please vote +1 (approve), 
or -2 (veto), or abstain.  I will close voting early if there is a veto vote, 
or a unanimous vote is reached.

Thanks,
-steve

[1] http://stackalytics.com/report/contribution/kolla/90
[2] http://stackalytics.com/report/contribution/kolla/45
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer

2016-05-17 Thread Jeffrey Zhang
+1
nice work Mauricio Lima

On Wed, May 18, 2016 at 11:03 AM, Swapnil Kulkarni  wrote:

> On Wed, May 18, 2016 at 12:30 AM, Steven Dake (stdake) 
> wrote:
> > Hello core reviewers,
> >
> > I am proposing Mauricio (mlima on irc) for the core review team.  He has
> > done a fantastic job reviewing appearing in the middle of the pack for 90
> > days [1] and appearing as #2 in 45 days [2].  His IRC participation is
> also
> > fantastic and does a good job on technical work including implementing
> > Manila from zero experience :) as well as code cleanup all over the code
> > base and documentation.  Consider my proposal a +1 vote.
> >
> > I will leave voting open for 1 week until May 24th.  Please vote +1
> > (approve), or –2 (veto), or abstain.  I will close voting early if there
> is
> > a veto vote, or a unanimous vote is reached.
> >
> > Thanks,
> > -steve
> >
> > [1] http://stackalytics.com/report/contribution/kolla/90
> > [2] http://stackalytics.com/report/contribution/kolla/45
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> +1
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Aodh] add "meter-list" command in Aodh CLI

2016-05-17 Thread Ryota Mibu
Hi,


We shouldn't have 'meter-list' in aodhclient.

Nova has an API and function to show the image list, and 'nova image-list'
talks to nova, not glance. It would be strange if '$ aodh meter-list'
(aodhclient) made requests to ceilometer.

On the other hand, I understand your concern. We can improve the aodh
documentation and/or add a help message to aodhclient, saying something like
'meters can be found in ceilometer'. What do you think?


Cheers,
Ryota

> -Original Message-
> From: li.yuanz...@zte.com.cn [mailto:li.yuanz...@zte.com.cn]
> Sent: Wednesday, May 18, 2016 11:52 AM
> To: Julien Danjou; openstack-dev@lists.openstack.org
> Cc: openstack-dev@lists.openstack.org; liusheng2...@gmail.com; 
> aji.zq...@gmail.com; ildiko.van...@ericsson.com;
> lianhao...@intel.com; Mibu Ryota(壬生 亮太); zhang.yuj...@zte.com.cn
> Subject: [Openstack] [Aodh] add "meter-list" command in Aodh CLI
> 
> So sorry, my mistake when create the link.
> 
> 
> > On Tue, May 17 2016, li.yuanz...@zte.com.cn wrote:
> >
> > > Now, in Aodh CLI, when create a threshold alarm, the
> > > -m/--meter-name must be required.
> > > But, if the user is not familiar with ceilometer or aodh,
> > > the user is not sure what value should give to -m/--meter-name.
> > > So the command "aodh meter list", I think, is needed.
> 
> > I don't think so, that's a Ceilometer command. There's no need for
> > Aodh client to talk to Ceilometer.
> 
> Thank you for your suggestion, but I have a different opinion.
> meter list, I think, is not exclusive for ceilometer command, especially 
> after aodh is seperated from ceilometer.
> Add meter list in Aodh CLI can make user more convenience and easy to use 
> Aodh, such as when users create a threshold
> alarm and don't know which kind of meter is supported, users can easily query 
> it by meter-list.
> For example, "nova image-list ", as well as "glance list"command, can get the 
> image value, which is required for creating
> a instance by "nova boot --image  ".
> 
> But there is really an ambiguity for the " migrate meter-list command from 
> ceilometer CLI to Aodh CLI ", my miss.
> May "add meter-list in Aodh CLI" is more appropriate.
> 
> 
> 
> 
> 
> >
> > > I create a bp[1], to migrate "meter-list" command from
> > > ceilometer CLI to Aodh CLI.
> > > Is there any suggestion for this bp? or is the bp necessary?
> > >
> > > [1]:
> > > https://blueprints.launchpad.net/cinder/+spec/migrate-meter-list-com
> > > mand-from-ceilometer-cli-to-aodh-cli
> > >  > > mmand-from-ceilometer-cli-to-aodh-cli>
> 
> > Thanks for your effort! Though I'd like to point a few problems here:
> > 1. This blueprint is created in Cinder 2. You sent this mail to the
> > *user* mailing list, and not to the
> >developer mailing list (openstack-dev@lists.openstack.org)
> > 3. You did not discuss anything on the dev mailing list with any of the
> >   active Telemetry developer before, and just took action before we can
> >   comment and state that this is a bad idea (like I just did). Not the
> >   best move.
> > 4. Cc'ing individuals is not the best move (again, dev mailing list is
> >the right medium)
> >
> > Cheers,
> > --
> > Julien Danjou
> 
> I will obsolete the link in cinder, and thank you for your suggestion again.
> 
> 
> 
> 
> 
> 
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][all] First Requirements Team Meeting

2016-05-17 Thread Robert Collins
On 17 May 2016 at 23:47, Davanum Srinivas  wrote:
> Team,
>
> Let's meet in #openstack-meeting-cp channel at 12:00 UTC on May 18th:
> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016=5=18=12=0=0=43=240=195=166=83=281=141
>
> We can decide if we need a better time/day for regular meetings.
>
> MUST read before the meeting :
> https://etherpad.openstack.org/p/requirements-tasks :)

FWIW I can't make that (its midnight here), but if you need anything
from me, drop me a mail :)

-Rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer

2016-05-17 Thread Swapnil Kulkarni
On Wed, May 18, 2016 at 12:30 AM, Steven Dake (stdake)  wrote:
> Hello core reviewers,
>
> I am proposing Mauricio (mlima on irc) for the core review team.  He has
> done a fantastic job reviewing appearing in the middle of the pack for 90
> days [1] and appearing as #2 in 45 days [2].  His IRC participation is also
> fantastic and does a good job on technical work including implementing
> Manila from zero experience :) as well as code cleanup all over the code
> base and documentation.  Consider my proposal a +1 vote.
>
> I will leave voting open for 1 week until May 24th.  Please vote +1
> (approve), or –2 (veto), or abstain.  I will close voting early if there is
> a veto vote, or a unanimous vote is reached.
>
> Thanks,
> -steve
>
> [1] http://stackalytics.com/report/contribution/kolla/90
> [2] http://stackalytics.com/report/contribution/kolla/45
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] [glance] Answers to some questions about Glance

2016-05-17 Thread Robert Collins
On 18 May 2016 at 00:54, Brian Rosmaita  wrote:

>> Couple of examples:
>> 1. switching from "is_public=true" to "visibility=public"
>
>
> This was a major version change in the Images API.  The 'is_public' boolean
> is in the original Images v1 API, 'visibility' was introduced with the
> Images v2 API in the Folsom release.  You just need an awareness of which
> version of the API you're talking to.

So I realise this is ancient history, but this is really a good
example of why Monty has been pushing on 'never break our APIs': API
breaks hurt users, major versions or not. Keeping the old attribute as
an alias to the new one would have avoided the user pain for a very
small amount of code.
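A sketch of that aliasing idea (illustrative only, not Glance's actual code):

```python
# Illustrative sketch: keep the legacy is_public attribute alive as a
# view over the newer visibility field, so old callers keep working.
class Image:
    def __init__(self, visibility="private"):
        self.visibility = visibility

    @property
    def is_public(self):
        # legacy alias: True exactly when visibility == "public"
        return self.visibility == "public"

    @is_public.setter
    def is_public(self, value):
        self.visibility = "public" if value else "private"
```

The cost is a few lines per renamed field, versus every client being forced to branch on the API version.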

We are by definition an API - doesn't matter that its HTTP vs Python -
when we break compatibility, there's a very long tail of folk that
will have to spend time updating their code; 'Microversions' are a
good answer to this, as long as we never raise the minimum version we
support. glibc does a very similar thing with versioned symbols - and
they support things approximately indefinitely.

-Rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack] [Aodh] add "meter-list" command in Aodh CLI

2016-05-17 Thread li . yuanzhen
So sorry, my mistake when creating the link.


> On Tue, May 17 2016, li.yuanz...@zte.com.cn wrote:
> 
> > Now, in Aodh CLI, when creating a threshold alarm,
> > -m/--meter-name is required.
> > But, if the user is not familiar with ceilometer or aodh, the user
> > is not sure what value to give to -m/--meter-name.
> > So the command "aodh meter list", I think, is needed.

> I don't think so, that's a Ceilometer command. There's no need for Aodh
> client to talk to Ceilometer.

Thank you for your suggestion, but I have a different opinion.
Meter list, I think, is not exclusive to ceilometer, especially
after aodh was separated from ceilometer.
Adding meter list to the Aodh CLI would make Aodh more convenient and
easier to use; when users create a threshold alarm and don't know which
kinds of meters are supported, they can easily query them with meter-list.
For example, "nova image-list ", as well as the "glance list" command, can
get the image value, which is required for creating an instance by
"nova boot --image  ".

But there is really an ambiguity in "migrate meter-list command from
ceilometer CLI to Aodh CLI"; my mistake.
Maybe "add meter-list in Aodh CLI" is more appropriate.





> 
> > I created a bp [1] to migrate the "meter-list" command from ceilometer
> > CLI to Aodh CLI.
> > Is there any suggestion for this bp? Or is the bp necessary?
> >
> > [1]:
> > https://blueprints.launchpad.net/cinder/+spec/migrate-meter-list-command-from-ceilometer-cli-to-aodh-cli


> Thanks for your effort! Though I'd like to point out a few problems here:
> 1. This blueprint is created in Cinder
> 2. You sent this mail to the *user* mailing list, and not to the
>developer mailing list (openstack-dev@lists.openstack.org)
> 3. You did not discuss anything on the dev mailing list with any of the
>   active Telemetry developer before, and just took action before we can
>   comment and state that this is a bad idea (like I just did). Not the
>   best move.
> 4. Cc'ing individuals is not the best move (again, dev mailing list is
>the right medium)
>
> Cheers,
> -- 
> Julien Danjou

I will mark the blueprint link in cinder as obsolete, and thank you again for
your suggestion.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] How to setup hyper-converged nodes (compute+ceph)?

2016-05-17 Thread Gerard Braad
Hi all,


Here is an update on the status of this thread:

On Mon, May 16, 2016 at 9:55 AM, Gerard Braad  wrote:
> we would like
> to deploy Compute nodes with Ceph installed on them. This will
> probably be a change to the tripleo-heat-templates and the compute,
> and cephstorage resources

I noticed a review enabling the deployment of Ceph OSDs on the compute
node: https://review.openstack.org/#/c/273754/5
At the moment, it is marked Workflow -1 because this feature may instead
be implemented via composable roles.

regards,


Gerard

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-05-17 Thread Alex Xu
Hi,

We have our weekly Nova API meeting today. The meeting is held Wednesdays at
13:00 UTC, and the IRC channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks


Re: [openstack-dev] [all] Deprecated options in sample configs?

2016-05-17 Thread Matt Fischer
>
>
> If config sample files are being used as a living document then that would
> be a reason to leave the deprecated options in there. In my experience as a
> cloud deployer I never once used them in that manner so it didn't occur to
> me that people might, hence my question to the list.
>
> This may also indicate that people aren't checking release notes as we
> hope they are. A release note is where I would expect to find this
> information aggregated with all the other changes I should be aware of.
> That seems easier to me than aggregating that data myself by checking
> various sources.
>


One way to think about this is that the config file has to be accurate or
the code won't work, but release notes can miss things with no consequence
other than perhaps an annoyed operator. So config files are a source of
truth about the state of options in a release or branch.


>
> Anyways, I have no strong cause for removing the deprecated options. I
> just wondered if it was a low hanging fruit and thought I would ask.
>

It's always good to have these kind of conversations, thanks for starting
it.


Re: [openstack-dev] [Heat] Versioned objects upgrade patterns

2016-05-17 Thread Crag Wolfe
Now getting very Heat-specific. With respect to
https://review.openstack.org/#/c/303692/ , the goal is to de-duplicate
raw_template.files (this is a dict of template filename to contents),
both in the DB and in RAM. The approach this patch is taking is that,
when one template is created by reference to another, we just re-use the
original template's files (ultimately in a new table,
raw_template_files). In the case of nested stacks, this saves on quite a
bit of duplication.

If we follow the 3-step pattern discussed earlier in this thread, we
would be looking at P release as to when we start seeing DB storage
improvements. As far as RAM is concerned, we would see improvement in
the O release since that is when we would start reading from the new
column location (and could cache the template files object by its ID).
It also means that for the N release, we wouldn't see any RAM or DB
improvements, we'll just start writing template files to the new
location (in addition to the old location). Is this acceptable, or do we
impose some sort of downtime restrictions on the next Heat upgrade?

A compromise could be to introduce a little bit of downtime:

For the N release:
 1. Add the new column (no need to shut down heat-engine).
 2. Shut down all heat-engines.
 3. Upgrade code base to N throughout cluster.
 4. Start all heat-engines. Read from the new and old template file
locations, but only write to the new one.

For the O release, we could perform a rolling upgrade with no downtime
where we are only reading and writing to the new location, and then drop
the old column as a post-upgrade migration (i.e, the typical N+2 pattern
[1] that Michal referenced earlier and I'm re-referencing :-).

The advantage to the compromise is we would immediately start seeing RAM
and DB improvements with the N-release.

[1]
http://docs.openstack.org/developer/cinder/devref/rolling.upgrades.html#database-schema-and-data-migrations
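For illustration, the N-release dual-write / read-with-fallback step could be
sketched roughly as follows. All names and structures here are invented for
the sketch (two dicts stand in for the old inline column and the new
raw_template_files table); the actual change is the review above.

```python
# Schematic only: N-release compatibility step for de-duplicating
# raw_template.files. Write template files to the new referenced location
# while keeping the old inline column populated, and read from the new
# location with a fallback to the old one for pre-upgrade rows.

class RawTemplate:
    files_store = {}   # stand-in for the new raw_template_files table: id -> files
    next_id = 1

    def __init__(self):
        self.files = None      # old inline column
        self.files_id = None   # new column referencing the shared files row

    def set_files(self, files):
        # N release: dual-write so both old and new code can read the data.
        RawTemplate.files_store[RawTemplate.next_id] = files
        self.files_id = RawTemplate.next_id
        RawTemplate.next_id += 1
        self.files = files

    def get_files(self):
        # Prefer the new location; fall back for rows written before upgrade.
        if self.files_id is not None:
            return RawTemplate.files_store[self.files_id]
        return self.files

tpl = RawTemplate()
tpl.set_files({'nested.yaml': 'heat_template_version: 2015-10-15'})
print(tpl.get_files())
```

In the O release the fallback (and the dual-write to the old column) would be
dropped, matching the N+2 pattern referenced below.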



Re: [openstack-dev] [all] Deprecated options in sample configs?

2016-05-17 Thread Andrew Laski
 
 
 
On Tue, May 17, 2016, at 03:00 PM, Dolph Mathews wrote:
> I think the metadata_manager is one of many terrible examples of
> deprecated configuration options. The documentation surrounding a
> deprecation should *always* tell you why something is being
> deprecated, and what you should be using instead to achieve the
> same, or better, result moving forward. But instead, we get "you
> don't need to see any documentation... these aren't the configs
> you're looking for."
 
In this case there is no replacement to point towards; Nova is just
removing a plug point. But it is fair to say that the reasoning behind
the removal should be captured in the comment there.
 
>
> If all our deprecated config options provided useful pointers in their
> descriptions, there would be tremendous value in retaining deprecated
> configuration options in sample config files -- in which case, I don't
> believe we would be having this conversation which questions their
> value in the first place.
 
If config sample files are being used as a living document then that
would be a reason to leave the deprecated options in there. In my
experience as a cloud deployer I never once used them in that manner so
it didn't occur to me that people might, hence my question to the list.
 
This may also indicate that people aren't checking release notes as we
hope they are. A release note is where I would expect to find this
information aggregated with all the other changes I should be aware of.
That seems easier to me than aggregating that data myself by checking
various sources.
 
Anyways, I have no strong cause for removing the deprecated options. I
just wondered if it was a low hanging fruit and thought I would ask.
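For what it's worth, the filtering I have in mind could look roughly like
this toy sketch. The Opt class and sample_lines helper below are simplified
stand-ins invented for illustration, not the real oslo.config or
oslo-config-generator API:

```python
# Toy sketch: render a sample-config fragment, optionally hiding options
# that are marked deprecated for removal. Opt is a mock, not oslo.config.

class Opt:
    def __init__(self, name, help='', deprecated_for_removal=False):
        self.name = name
        self.help = help
        self.deprecated_for_removal = deprecated_for_removal

def sample_lines(opts, include_deprecated=True):
    """Return commented-out sample entries for a list of options."""
    lines = []
    for opt in opts:
        if opt.deprecated_for_removal and not include_deprecated:
            continue  # the proposed behavior: drop it from the sample file
        if opt.deprecated_for_removal:
            lines.append('# DEPRECATED: %s' % opt.help)
            lines.append('# This option is deprecated for removal.')
        else:
            lines.append('# %s' % opt.help)
        lines.append('#%s = <None>' % opt.name)
    return lines

opts = [
    Opt('log_config_append', help='Path to a logging config file'),
    Opt('metadata_manager', help='OpenStack metadata service manager',
        deprecated_for_removal=True),
]
print('\n'.join(sample_lines(opts, include_deprecated=False)))
```

With include_deprecated=False the metadata_manager stanza simply never
appears in the generated sample.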
 
 
 
>
> On Tue, May 17, 2016 at 1:49 PM Andrew Laski
>  wrote:
>>
>>
>>
>> On Tue, May 17, 2016, at 02:36 PM, Matt Fischer wrote:
>>> On Tue, May 17, 2016 at 12:25 PM, Andrew Laski 
>>> wrote:
 I was in a discussion earlier about discouraging deployers from
 using
 deprecated options and the question came up about why we put
 deprecated
 options into the sample files generated in the various projects.
 So, why
 do we do that?

 I view the sample file as a reference to be used when setting up a
 service for the first time, or when looking to configure
 something for
 the first time. In neither of those cases do I see a benefit to
 advertising options that are marked deprecated.

 Is there some other case I'm not considering here? And how does
 everyone
 feel about modifying the sample file generation to exclude options
 which
 are marked with "deprecated_for_removal"?

>>>
>>>
>>> Can you clarify what you mean by having them? The way they are now
>>> is great for deployers I think and people (like me) who work on
>>> things like puppet and need to update options sometimes. For
>>> example, I like this way, example from keystone:
>>>
>>> # Deprecated group/name - [DEFAULT]/log_config
>>> #log_config_append = 
>>>
>>> Are you proposing removing that top line?
>>
>> That is a different type of deprecation which I didn't do a great job
>> of distinguishing.
>>
>> There is deprecation of where a config option is defined, as in your
>> example. I am not proposing to touch that at all. That simply
>> indicates that a config option used to be in a different group or
>> used to be named something else. That's very useful.
>>
>> There is deprecation of a config option in the sense that it is going
>> away completely. An example would be:
>>
>> # DEPRECATED: OpenStack metadata service manager (string value)
>> # This option is deprecated for removal.
>> # Its value may be silently ignored in the future.
>> #metadata_manager = nova.api.manager.MetadataManager
>>
>> I'm wondering if anyone sees a benefit to including that in the
>> sample file when it is clearly not meant for use.
>>
>>
>>
> --
> -Dolph
 

Re: [openstack-dev] [TripleO] [diskimage-builder] LVM in diskimage-builder

2016-05-17 Thread Robert Collins
On 18 May 2016 at 07:32, Andre Florath  wrote:
> Hi All!
>
> AFAIK the diskimage-builder started as a testing tool, but it looks
> like it is evolving more and more into a general-purpose tool for creating
> Docker and VM disk images.

We started as a fast, production, targeted image builder suitable for
cloud images - the other existing tools were either overly hard to
customise for different use cases, or too slow.

>   Problem: I have no idea how to boot from a pure LVM image.

Install grub2 in the boot block, it can deal with some layout metadata.

-Rob



Re: [openstack-dev] [release] Re: [Neutron][L2GW] Mitaka release of L2 Gateway now available

2016-05-17 Thread Armando M.
On 17 May 2016 at 03:25, Ihar Hrachyshka  wrote:

>
> > On 16 May 2016, at 21:16, Armando M.  wrote:
> >
> >
> >
> > On 16 May 2016 at 05:15, Ihar Hrachyshka  wrote:
> >
> > > On 11 May 2016, at 22:05, Sukhdev Kapur 
> wrote:
> > >
> > >
> > > Folks,
> > >
> > > I am happy to announce that Mitaka release for L2 Gateway is released
> and now available at https://pypi.python.org/pypi/networking-l2gw.
> > >
> > > You can install it by using "pip install networking-l2gw"
> > >
> > > This release has several enhancements and fixes for issues discovered
> in liberty release.
> >
> > How do you release it? I think the way to release new deliverables as of
> Newton dev cycle is thru openstack/releases repository, as documented in
> https://review.openstack.org/#/c/308984/
> >
> > Have you pushed git tag manually?
> >
> > I can only see the stable branch, tags can only be pushed by the
> neutron-release team.
>
> 2016.1.0 tag is in the repo, and is part of stable/mitaka branch.
>

Weren't we supposed to use semver?


>
> Git tag history suggests that Carl pushed it (manually, I guess?). It seems
> that we release some independent deliverables thru openstack/releases, and
> some by manually pushing tags into repos.
>
> I would love if we can consolidate all our releases to use a single
> automation mechanism (openstack/releases patches), independent of release
> model. For that, I would like to hear from release folks whether we are
> allowed to use openstack/releases repo for release:independent deliverables
> that are part of an official project (neutron).
>
> [Note that it would not mean we move the oversight burden for those
> deliverables onto release team; neutron-release folks would still need to
> approve them; it’s only about technicalities, not governance]
>
> The existence of the following git directory suggests that it’s supported:
>
> https://github.com/openstack/releases/tree/master/deliverables/_independent
>
> We already have some networking-* subprojects there, like
> networking-bgpvpn or networking-odl. I would love to see all new releases
> tracked there.
>
> >
> > Ihar
> >
>


Re: [openstack-dev] [TripleO] Aodh upgrades - Request backport exception for stable/liberty

2016-05-17 Thread Pradeep Kilambi
On Tue, May 17, 2016 at 1:31 PM, James Slagle 
wrote:

> On Tue, May 17, 2016 at 12:04 PM, Pradeep Kilambi  wrote:
> > Thanks Steve. I was under the impression we cannot run puppet at this
> > stage. Hence my suggestion to run bash or some script here, but if we
> > can find a way to easily wire the existing aodh puppet manifests into
> > the upgrade process and get aodh up and running, then even better: we
> > don't have to duplicate what puppet already gives us, and can reuse that.
>
> We could add any SoftwareDeployment resource(s) to the templates that
> trigger either scripts or puppet.
>
> >
> >
> >>> At most, it seems we'd have to surround the puppet apply with some
> >>> pacemaker commands to possibly set maintenance mode and migrate
> >>> constraints.
> >>>
> >>> The puppet manifest itself would just be the includes and classes for
> aodh.
> >>
> >> +1
> >>
> >>> One complication might be that the aodh packages from Mitaka might
> >>> pull in new deps that required updating other OpenStack services,
> >>> which we wouldn't yet want to do. That is probably worth confirming
> >>> though.
> >>
> >> It seems like we should at least investigate this approach before going
> >> ahead with the backport proposed - I'll -2 the backports pending further
> >> discussion and investigation into this alternate approach.
> >>
> >
> > Makes sense to me. I understand the hesitation behind backports. I'm
> > happy to work with jistr and slagle to see if this is a viable
> > alternative. If we can get this working without too much effort, I'm
> > all for dumping the backports and going with this.
>
> Using a liberty overcloud-full image, I enabled the mitaka repos and
> tried to install aodh:
> http://paste.openstack.org/show/497395/
>
> It looks like it will cleanly pull in just aodh packages, and there
> aren't any transitive dependencies that require updating any other
> OpenStack services.
>
> That means that we ought to be able to take a liberty cloud and update
> it to use aodh from mitaka. That could be step 1 of the upgrade. The
> operator could pause there for as long as they wanted, and then
> continue on with the rest of the upgrade of the other services to
> Mitaka. It may even be possible to implement them as separate stack
> updates.
>
> Does that sound like it could work? Would we have to update some parts
> of Ceilometer as well, or does Liberty Ceilometer and Mitaka Aodh work
> together nicely?
>


To install Aodh alongside Ceilometer in Liberty, we have to explicitly
disable or remove the ceilometer-alarm services before Aodh is installed;
otherwise both evaluators will step on each other's alarms. Other than
that, they should work.


~ Prad




>
> --
> -- James Slagle
> --
>
>


Re: [openstack-dev] [searchlight] Searchlight Core Nomination - Lei Zhang

2016-05-17 Thread McLellan, Steven
+1, Lei's made some great contributions.

On 5/17/16, 3:56 PM, "Brian Rosmaita"  wrote:

>+1
>
>I second the motion!
>
>On 5/17/16, 3:42 PM, "Tripp, Travis S"  wrote:
>
>>I am nominating Lei Zhang from Intel (lei-zh on IRC) to join the
>>Searchlight core reviewers team. He has been actively participating with
>>thoughtful patches and reviews demonstrating his depth of understanding
>>in a variety of areas. He also participates in meetings regularly,
>>despite a difficult time zone. You may review his Searchlight activity
>>reports below [0] [1].
>>
>>[1] (~Mitaka + Newton)
>>  http://stackalytics.com/report/contribution/searchlight-group/200
>>[0] (~Newton) 
>>  http://stackalytics.com/report/contribution/searchlight-group/40
>>
>>Please vote for this change in reply to this message.
>>
>>Thank you,
>>Travis
>>
>>
>
>




Re: [openstack-dev] [searchlight] Searchlight Core Nomination - Lei Zhang

2016-05-17 Thread Brian Rosmaita
+1

I second the motion!

On 5/17/16, 3:42 PM, "Tripp, Travis S"  wrote:

>I am nominating Lei Zhang from Intel (lei-zh on IRC) to join the
>Searchlight core reviewers team. He has been actively participating with
>thoughtful patches and reviews demonstrating his depth of understanding
>in a variety of areas. He also participates in meetings regularly,
>despite a difficult time zone. You may review his Searchlight activity
>reports below [0] [1].
>
>[1] (~Mitaka + Newton)
>   http://stackalytics.com/report/contribution/searchlight-group/200
>[0] (~Newton) 
>   http://stackalytics.com/report/contribution/searchlight-group/40
>
>Please vote for this change in reply to this message.
>
>Thank you,
>Travis
>
>




Re: [openstack-dev] [neutron] DHCP Agent Scheduling for Segments

2016-05-17 Thread Kevin Benton
I'm leaning towards option A because it keeps things cleanly separated.
Also, if a cloud is using a plugin that supports segments, an operator
could use the new API for everything (including single segment networks) so
it shouldn't be that unfriendly.

However...

>If there's some other option that I somehow missed please suggest it.

The other option is to not make an API for this at all. In a multi-segment
use-case, a DHCP agent will normally have access to only one segment of a
network. By using the current API we can still assign/un-assign an agent
from a network and leave the segment selection details to the scheduler.
What is the use case for exposing this all the way up to the operator?


On Tue, May 17, 2016 at 1:07 PM, Brandon Logan 
wrote:

> As part of the routed networks work [1], the DHCP agent and scheduling
> needs to be segment aware.  Right now, the dhcpagentscheduler extension
> exposes API resources to manage networks:
>
> - List networks hosted by an agent
> - GET /agents/{agent_id}/dhcp-networks
> - Response Body: {"networks": [{...}]}
>
> - List agents hosting a network - GET /network
> - GET /networks/{network_id}/dhcp-agents
> - Response Body: {"agents": [{...}]}
>
> - Add a network to an agent
> - POST /agents/{agent_id}/dhcp-networks
> - Request Body: {"network_id": "NETWORK_UUID"}
>
> - Remove a network from an agent
> - DELETE /agents/{agent_id}/dhcp-networks/{network_id}
>
> This same functionality needs to also be exposed for working with
> segments.  We need some opinions on the best way to do this.  The
> options are:
>
> A) Expose new resources for segment dhcp agent manipulation
> - GET /agents/{agent_id}/dhcp-segments
> - GET /segments/{segment_id}/dhcp-agents
> - POST /agents/{agent_id}/dhcp-segments
> - DELETE /agents/{agent_id}/dhcp-segments/{segment_id}
>
> B) Allow segment info gathering and manipulation via the current network
> dhcp agent API resources. No new API resources.
>
> C) Some combination of A and B.
>
> My current opinion is that option C shouldn't even be an option, but I
> put it here just in case someone has a strong argument.  If
> we're going to add new resources, we may as well go all the way,
> which is what C implies would happen.
>
> Option B would be great to use if segment support could easily be added
> in while maintaining backwards compatibility.  I'm not sure if that is
> going to be possible in a clean way.  Regardless, an extension will have
> to be created for this.
>
> Option A is the cleanest strategy IMHO.  It may not be the most user
> friendly though because some networks may have multiple segments while
> others may not.  If a network is made up of just a single segment then
> the current network dhcp agent calls will be fine.  However, once a
> network is made up of multiple segments, it wouldn't make sense for the
> current network dhcp agent calls to be valid, they'd need to be made to
> the new segment resources.  This same line of thinking would probably
> have to be considered with Option B as well, so it may be a problem for
> both.
>
> Anyway, I'd like to gather suggestions and opinions on this.  If there's
> some other option that I somehow missed please suggest it.
>
> Thanks,
> Brandon
>
> [1]
>
> https://specs.openstack.org/openstack/neutron-specs/specs/newton/routed-networks.html#dhcp-scheduling
>
>


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-17 Thread Kevin Benton
>I kind of think it makes sense to require evacuating a segment of its ports
before deleting it.

Ah, I left out an important assumption I was making. We also need to
auto-delete the DHCP port as the segment is deleted. I was thinking this
will basically be like the delete_network case, where we auto-remove the
network-owned ports.
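Roughly, the check could look like this sketch. The callback body and names
below are invented for illustration (they mimic the Neutron callback-registry
pattern for a BEFORE_DELETE event, but are not the real API):

```python
# Sketch of a BEFORE_DELETE-style check for segment deletion: veto while
# any non-DHCP port is still bound to the segment, and auto-remove the
# segment's own DHCP ports as part of the delete.

DEVICE_OWNER_DHCP = 'network:dhcp'

class SegmentInUse(Exception):
    pass

def before_segment_delete(segment_id, ports):
    """Raise if any tenant port remains on the segment; otherwise drop
    the segment's network-owned DHCP ports."""
    remaining = [p for p in ports
                 if p['segment_id'] == segment_id
                 and p['device_owner'] != DEVICE_OWNER_DHCP]
    if remaining:
        raise SegmentInUse('segment %s still has bound ports' % segment_id)
    # Auto-delete DHCP ports so enabled DHCP does not block segment removal.
    ports[:] = [p for p in ports
                if not (p['segment_id'] == segment_id
                        and p['device_owner'] == DEVICE_OWNER_DHCP)]

ports = [{'segment_id': 'seg-1', 'device_owner': DEVICE_OWNER_DHCP}]
before_segment_delete('seg-1', ports)  # allowed: only a DHCP port remains
```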

On Tue, May 17, 2016 at 12:29 PM, Carl Baldwin  wrote:

> On Tue, May 17, 2016 at 10:56 AM, Kevin Benton  wrote:
> >>a) Deleting network's last segment will be prevented. Every network
> should
> >> have at least one segment to let the port to bind.
> >
> > This seems a bit arbitrary to me. If a segment is limited to a small
> part of
> > the datacenter, it being able to bind for one section of the datacenter
> and
> > not the rest is not much different than being able to bind from no
> sections.
> > Just allow it to be deleted because we need to have logic to deal with
> the
> > unbindable port case anyway. Especially since it's a racy check that is
> hard
> > to get correct for little gain.
>
> I agree with Kevin here.
>
> >>b) Deleting the segment that has been associated with subnet will be
> >> prevented.
> >
> > +1
>
> ++
>
> >>c) Deleting the segment that has been bound to port will be prevented.
> >
> > +1.
>
> ++
>
> >>d) Based on c), we need to query ml2_port_binding_levels, I think
> >> neutron.plugins.ml2.models.PortBindingLevel should be moved out of ml2.
> This
> >> is also because port and segment are both neutron server resources, no
> need
> >> to keep PortBindingLevel at ml2.
> >
> > There are things in this model that make sense only to ML2 (level and
> > driver), especially since ML2 allows a single port_id to appear multiple
> > times in the table (primary key is port_id + level).  To achieve your
> goals
> > in 'C' above, just emit a BEFORE_DELETE event in the callback registry
> for
> > segments. Then ML2 can query this table with a registered callback and
> other
> > plugins can register a callback to prevent this however they want.
>
> Sounds reasonable.
>
> > However, be sure to ignore the DHCP port when preventing segment deletion
> > otherwise having DHCP enabled will make it difficult to get rid of a
> > segment.
>
> They will be left somewhat defunct, won't they?  I think a foreign key
> constraint would be violated if you tried to delete a segment with
> even a DHCP port on it.
>
>   port <- ipallocations (FK) -> subnets (FK) -> networksegments
>
> I guess there is no foreign key constraint holding the ipallocations
> to the port.  So, the ipallocations could be deleted.  But, that is
> effectively stripping an existing port of its IP addresses which would
> be weird.
>
> I kind of think it makes sense to require evacuating a segment of its
> ports before deleting it.
>
> >>e) Is it possible to update a segment(physical_network, segmentation_id,
> or
> >> even network_type), when the segment is being used?
> >
> > I would defer this for future work and not allow it for now. If the
> segment
> > details change, we need to ask the drivers responsible for every bound
> port
> > to make they can support it under the new conditions. It will be quite a
> bit
> > of logic to deal with that I don't think we need to support up front.
>
> ++ Simplify!  We don't have a use case for this now.
>
> Carl
>
>


Re: [openstack-dev] [all] Deprecated options in sample configs?

2016-05-17 Thread Ian Cordasco
 

-Original Message-
From: Dolph Mathews 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: May 17, 2016 at 14:02:00
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [all] Deprecated options in sample configs?

> I think the metadata_manager is one of many terrible examples of deprecated
> configuration options. The documentation surrounding a deprecation should
> *always* tell you why something is being deprecated, and what you should be
> using instead to achieve the same, or better, result moving forward. But
> instead, we get "you don't need to see any documentation... these aren't
> the configs you're looking for."
>  
> If all our deprecated config options provided useful pointers in their
> descriptions, there would be tremendous value in retaining deprecated
> configuration options in sample config files -- in which case, I don't
> believe we would be having this conversation which questions their value in
> the first place.

I have to agree that better documentation is the way forward, not removing 
information. I think we're more likely to frustrate operators who see options 
suddenly disappear as soon as they're marked as deprecated for removal.

I don't disagree that some operators might want a way to exclude options that 
are deprecated for removal when generating a sample config file, but I don't 
think that should be the default.
--  
Ian Cordasco




Re: [openstack-dev] [all] Deprecated options in sample configs?

2016-05-17 Thread Mathieu Gagné
On Tue, May 17, 2016 at 3:51 PM, Ben Nemec  wrote:
>
> I'm +1 on removing them from sample configs.  This feels like release
> note information to me, and if someone happens to miss the release note
> then they'll get deprecation warnings in their logs.  The sample config
> entries are redundant and just clutter things up for new deployments.
>

As Matt explained, we use the sample config file to get information
about existing, deprecated, and soon-to-be-removed configuration.
I'm not reading the release notes, as they are not a definitive source
of truth: someone could very well forget to include a note about a
deprecation.
Furthermore, each set of release notes only covers a single version.
This means I end up having to browse through multiple releases' notes
to make sure I don't miss one.



[openstack-dev] [neutron] DHCP Agent Scheduling for Segments

2016-05-17 Thread Brandon Logan
As part of the routed networks work [1], the DHCP agent and scheduling
needs to be segment aware.  Right now, the dhcpagentscheduler extension
exposes API resources to manage networks:

- List networks hosted by an agent
- GET /agents/{agent_id}/dhcp-networks
- Response Body: {"networks": [{...}]}

- List agents hosting a network - GET /network
- GET /networks/{network_id}/dhcp-agents
- Response Body: {"agents": [{...}]}

- Add a network to an agent
- POST /agents/{agent_id}/dhcp-networks
- Request Body: {"network_id": "NETWORK_UUID"}

- Remove a network from an agent
- DELETE /agents/{agent_id}/dhcp-networks/{network_id}

This same functionality needs to also be exposed for working with
segments.  We need some opinions on the best way to do this.  The
options are:

A) Expose new resources for segment dhcp agent manipulation
- GET /agents/{agent_id}/dhcp-segments
- GET /segments/{segment_id}/dhcp-agents
- POST /agents/{agent_id}/dhcp-segments
- DELETE /agents/{agent_id}/dhcp-segments/{segment_id}

B) Allow segment info gathering and manipulation via the current network
dhcp agent API resources. No new API resources.

C) Some combination of A and B.

My current opinion is that option C shouldn't even be an option, but I
put it here just in case someone has a strong argument.  If
we're going to add new resources, we may as well go all the way,
which is what C implies would happen.

Option B would be great to use if segment support could easily be added
in while maintaining backwards compatibility.  I'm not sure if that is
going to be possible in a clean way.  Regardless, an extension will have
to be created for this.

Option A is the cleanest strategy IMHO.  It may not be the most user
friendly though because some networks may have multiple segments while
others may not.  If a network is made up of just a single segment then
the current network dhcp agent calls will be fine.  However, once a
network is made up of multiple segments, it wouldn't make sense for the
current network dhcp agent calls to be valid, they'd need to be made to
the new segment resources.  This same line of thinking would probably
have to be considered with Option B as well, so it may be a problem for
both.

Anyway, I'd like to gather suggestions and opinions on this.  If there's
some other option that I somehow missed please suggest it.

Thanks,
Brandon

[1]
https://specs.openstack.org/openstack/neutron-specs/specs/newton/routed-networks.html#dhcp-scheduling
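For reference, the Option A resources can be laid out as path templates
mirroring the existing dhcp-networks endpoints. This is purely illustrative
of the proposal under discussion; none of these URLs exist yet:

```python
# The proposed Option A segment/agent resources, as (method, path) templates
# that parallel the current dhcp-networks API. Hypothetical, not implemented.

OPTION_A_ROUTES = {
    'list_segments_on_agent':  ('GET',    '/agents/{agent_id}/dhcp-segments'),
    'list_agents_for_segment': ('GET',    '/segments/{segment_id}/dhcp-agents'),
    'add_segment_to_agent':    ('POST',   '/agents/{agent_id}/dhcp-segments'),
    'remove_segment':          ('DELETE', '/agents/{agent_id}/dhcp-segments/{segment_id}'),
}

def url_for(name, **ids):
    """Expand one of the proposed routes with concrete IDs."""
    method, template = OPTION_A_ROUTES[name]
    return method, template.format(**ids)

print(url_for('remove_segment', agent_id='a1', segment_id='s1'))
# -> ('DELETE', '/agents/a1/dhcp-segments/s1')
```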



Re: [openstack-dev] [TripleO] [diskimage-builder] LVM in diskimage-builder

2016-05-17 Thread Monty Taylor
I have nothing useful to add, except that I am excited about getting
good LVM support in diskimage-builder.

On 05/17/2016 02:32 PM, Andre Florath wrote:
> Hi All!
> 
> AFAIK the diskimage-builder started as a testing tool, but it looks
> like it is evolving more and more into a general purpose tool for creating
> Docker and VM disk images.
> 
> Currently there are ongoing efforts to add LVM [1]. But because some
> features that I need are missing, I created my own prototype to get a
> 'feeling' for the complexity and a possible way of doing things [2]. I
> contacted Yolanda (the author of the original patch) and we agreed to
> join forces here to implement a patch that fits both our needs.
> 
> Yolanda made the proposal that, before starting to implement things, we
> could contact OpenStack developers via this mailing list and ask
> about possible additional requirements and comments.
> 
> Here is a short list of my requirements - and as far as I understood
> Yolanda, her requirements are a subset:
> 
> MUST be able to
> o use one partition as PV
> o use one VG
> o use many LV (up to 10)
> o specify the mount point for each of the LVs
> o specify mount points that 'overlap', e.g.
>   /, /var, /var/log, /var/spool
> o use the default file system (options) as specified via command line
> o survive everyday life - and not only in a dedicated test
>   environment: must be robust and handle error scenarios
> o use '/' (root fs) as LV
> o run within different distributions - like Debian, Ubuntu, Centos7.
> 
> NICE TO HAVE
> o Multiple partitions as PVs
> o Multiple VGs
> o LVM without any partition
>   Or: why do we need partitions these days? ;-)
>   Problem: I have no idea how to boot from a pure LVM image.
> 
> Every idea or comment will help!  Please feel invited to have a
> (short) look / review at the implementation [1] and the design study
> [2].
> 
> Kind regards
> 
> Andre
> 
> 
> [1] https://review.openstack.org/#/c/252041/
> [2] https://review.openstack.org/#/c/316529/
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Versioned objects upgrade patterns

2016-05-17 Thread Zane Bitter

On 17/05/16 15:40, Crag Wolfe wrote:

Another thing I am wondering about: if my particular object is not
exposed by RPC, is it worth making it a full blown o.vo or not? I.e, I
can do the 3 steps over 3 releases just in the object's .py file -- what
additional value do I get from o.vo?


It's more of a cargo-cult thing ;)

(Given that *none* of the versioned objects in Heat are exposed over 
RPC, the additional value you get is consistency and a nice place to 
abstract whatever weirdness is going on in the database at any given 
time so that it doesn't leak into the rest of the code.)


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Deprecated options in sample configs?

2016-05-17 Thread Ben Nemec
On 05/17/2016 01:47 PM, Andrew Laski wrote:
>  
>  
>  
> On Tue, May 17, 2016, at 02:36 PM, Matt Fischer wrote:
>> On Tue, May 17, 2016 at 12:25 PM, Andrew Laski wrote:
>>
>> I was in a discussion earlier about discouraging deployers from using
>> deprecated options and the question came up about why we put
>> deprecated
>> options into the sample files generated in the various projects.
>> So, why
>> do we do that?
>>  
>> I view the sample file as a reference to be used when setting up a
>> service for the first time, or when looking to configure something for
>> the first time. In neither of those cases do I see a benefit to
>> advertising options that are marked deprecated.
>>  
>> Is there some other case I'm not considering here? And how does
>> everyone
>> feel about modifying the sample file generation to exclude options
>> which
>> are marked with "deprecated_for_removal"?
>>  
>>
>>  
>>  
>> Can you clarify what you mean by having them? The way they are now is
>> great for deployers I think and people (like me) who work on things
>> like puppet and need to update options sometimes. For example, I like
>> this way, example from keystone:
>>  
>> # Deprecated group/name - [DEFAULT]/log_config
>> #log_config_append = 
>>  
>> Are you proposing removing that top line?
>  
> That is a different type of deprecation which I didn't do a great job of
> distinguishing.
>  
> There is deprecation of where a config option is defined, as in your
> example. I am not proposing to touch that at all. That simply indicates
> that a config option used to be in a different group or used to be named
> something else. That's very useful.
>  
> There is deprecation of a config option in the sense that it is going
> away completely. An example would be:
>  
> # DEPRECATED: OpenStack metadata service manager (string value)
> # This option is deprecated for removal.
> # Its value may be silently ignored in the future.
> #metadata_manager = nova.api.manager.MetadataManager
>  
> I'm wondering if anyone sees a benefit to including that in the sample
> file when it is clearly not meant for use.

I'm +1 on removing them from sample configs.  This feels like release
note information to me, and if someone happens to miss the release note
then they'll get deprecation warnings in their logs.  The sample config
entries are redundant and just clutter things up for new deployments.

-Ben
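The distinction Andrew draws — options merely *renamed* versus options going away entirely — maps to two different flags, and the filtering being proposed only touches the second. A minimal sketch: the `Opt` class here is a simplified stand-in, not the real oslo.config API, and wiring such a filter into the sample-config generator is assumed rather than shown.

```python
from dataclasses import dataclass

@dataclass
class Opt:
    # Stand-in for a config option; real oslo.config options carry
    # analogous deprecation metadata.
    name: str
    deprecated_for_removal: bool = False  # option is going away entirely
    deprecated_opts: tuple = ()           # old group/name aliases

def sample_config_opts(opts, include_removals=False):
    """Select the options a generated sample config should show.

    Renamed options (deprecated_opts set) are kept -- the
    'Deprecated group/name' hint is useful to deployers -- while
    options marked for removal are dropped unless explicitly requested.
    """
    return [o for o in opts
            if include_removals or not o.deprecated_for_removal]
```

An `include_removals` switch would let anyone who still wants the removal notices in their sample file keep them.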

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [searchlight] Searchlight Core Nomination - Lei Zhang

2016-05-17 Thread Tripp, Travis S
I am nominating Lei Zhang from Intel (lei-zh on IRC) to join the Searchlight 
core reviewers team. He has been actively participating with thoughtful patches 
and reviews demonstrating his depth of understanding in a variety of areas. He 
also participates in meetings regularly, despite a difficult time zone. You may 
review his Searchlight activity reports below [0] [1].

[0] (~Newton) http://stackalytics.com/report/contribution/searchlight-group/40
[1] (~Mitaka + Newton) http://stackalytics.com/report/contribution/searchlight-group/200

Please vote for this change in reply to this message.

Thank you,
Travis


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Versioned objects upgrade patterns

2016-05-17 Thread Crag Wolfe
On 05/17/2016 10:34 AM, Michał Dulko wrote:
> On 05/17/2016 06:30 PM, Crag Wolfe wrote:
>> Hi all,
>>
>> I've read that versioned objects are favored for supporting different
>> versions between RPC services and to support rolling upgrades. I'm
>> looking to follow the pattern for Heat. Basically, it is the classic
>> problem where we want to migrate from writing to a column in one table
>> to having that column live in a different table. Looking at nova code,
>> the version for a given versioned object is a constant in the given
>> object/.py file. To properly support rolling upgrades
>> where we have older and newer heat-engine processes running
>> simultaneously (thus avoiding downtime), we have to write to both the
>> old column and the new column. Once all processes have been upgraded,
>> we can upgrade again to only write to the new location (but still able
>> to read from the old location of course). Following the existing
>> pattern, this means the operator has to upgrade 
>> twice (it may be possible to increment VERSION in 
>> only once, however, the first time).
>>
>> The drawback of the above is it means cutting two releases (since two
>> different .py files). However, I wanted to check if anyone has gone
>> with a different approach so only one release is required. One way to
>> do that would be by specifying a version (or some other flag) in
>> heat.conf. Then, only one .py release would be
>> required -- the logic of whether to write to both the old and new
>> location (the intermediate step) versus just the new location (the
>> final step) would be in .py, dictated by the config
>> value. The advantage to this approach is now there is only one .py
>> file released, though the operator would still have to make a config
>> change and restart heat processes a second time to move from the
>> intermediate step to the final step.
> 
> Nova has the pattern of being able to do all that in one release by
> exercising o.vo, but there are assumptions they are relying on (details
> [1]):
> 
>   * nova-compute accesses the DB through nova-conductor.
>   * nova-conductor gets upgraded atomically.
>   * nova-conductor is able to backport an object if nova-compute is
> older and doesn't understand it.
> 
> Now if you want to have heat-engines running in different versions and
> all of them are freely accessing the DB, then that approach won't work
> as there's no one who can do a backport.
> 
> We've faced same issue in Cinder and developed a way to do such
> modifications in three releases for columns that are writable and two
> releases for columns that are read-only. This is explained in spec [2]
> and devref [3]. And yes, it's a little painful.
> 
> If I got everything correctly, your idea of two-step upgrade will work
> only for read-only columns. Consider this situation:
> 
>  1. We have deployment running h-eng (A and B) in version X.
>  2. We apply X+1 migration moving column `foo` to `bar`.
>  3. We upgrade h-eng A to X+1. Now it writes to both `foo` and `bar`.
>  4. A updates `foo` and `bar`.
>  5. B updates `foo`. Now correct value is in `foo` only.
>  6. A want to read the value. But is latest one in `foo` or `bar`? No
> way to tell that.
> 
> 
> I know Keystone team is trying to solve that with some SQLAlchemy magic,
> but I don't think the design is agreed on yet. There was a presentation
> at the summit [4] that mentions it (and attempts clarification of
> approaches taken by different projects).
> 
> Hopefully this helps a little.
> 
> Thanks,
> Michal (dulek on freenode)
> 
> [1] 
> http://www.danplanet.com/blog/2015/10/07/upgrades-in-nova-database-migrations/
> 
> [2] 
> http://specs.openstack.org/openstack/cinder-specs/specs/mitaka/online-schema-upgrades.html
> 
> [3] 
> http://docs.openstack.org/developer/cinder/devref/rolling.upgrades.html#database-schema-and-data-migrations
> 
> [4] https://www.youtube.com/watch?v=ivcNI7EHyAY
> 

That helps a lot, thanks! You are right, it would have to be a 3-step
upgrade to avoid the issue you mentioned in step 6.

Another thing I am wondering about: if my particular object is not
exposed by RPC, is it worth making it a full blown o.vo or not? I.e, I
can do the 3 steps over 3 releases just in the object's .py file -- what
additional value do I get from o.vo?

I'm also shying away from the idea of allowing for config-driven
upgrades. The reason is, suppose an operator updates a config, then does
a rolling restart to go from X to X+1. Then again (and probably again)
as needed. Everything works great, run a victory lap. A few weeks later,
some ansible or puppet automation accidentally blows away the config
value saying that heat-engine should be running at the X+3 version for
my_object. Ouch. Probably unlikely, but more likely than say
accidentally deploying a .py file from three releases ago.
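The three-release column move discussed in this thread can be sketched as follows. All names are invented and rows are plain dicts; the point is only the read/write discipline: release X reads and writes only `foo`, the X+1 code writes both columns but still *reads* `foo` (since X engines may still be writing it), and only the X+2 code treats `bar` as authoritative.

```python
class ResourceDataX1:
    """Intermediate release (X+1): double-write, read the old column."""
    def save(self, row, value):
        row["foo"] = value   # old column: source of truth while X runs
        row["bar"] = value   # new column: primed for X+2

    def load(self, row):
        return row["foo"]

class ResourceDataX2:
    """Final release (X+2): the new column is authoritative."""
    def save(self, row, value):
        row["bar"] = value

    def load(self, row):
        # tolerate rows last touched before the data migration ran
        return row["bar"] if "bar" in row else row.get("foo")
```

This is exactly where the two-step version fails: if an X engine updates only `foo` after an X+1 engine double-wrote, no reader can tell which column holds the latest value, hence the extra release.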

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [neutron][stable] proposing Brian Haley for neutron-stable-maint

2016-05-17 Thread Brandon Logan
+1 +1
On Tue, 2016-05-17 at 18:58 +, Vasudevan, Swaminathan (PNB
Roseville) wrote:
> +1 for both.
> 
>  
> 
> From: Kevin Benton [mailto:ke...@benton.pub] 
> Sent: Tuesday, May 17, 2016 11:06 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [neutron][stable] proposing Brian Haley
> for neutron-stable-maint
> 
>  
> 
> +1 for both
> 
> 
>  
> 
> On Tue, May 17, 2016 at 5:57 AM, Kyle Mestery wrote:
> 
> +1 (Also +1 for Cedric).
> 
> 
> On Tue, May 17, 2016 at 6:07 AM, Ihar Hrachyshka wrote:
> > Hi stable-maint-core and all,
> >
> > I would like to propose Brian for neutron specific stable
> team.
> >
> > His stats for neutron stable branches are (last 120 days):
> >
> > mitaka: 19 reviews; liberty: 68 reviews (3rd place in the
> top); kilo: 16 reviews.
> >
> > Brian helped the project with stabilizing liberty
> neutron/DVR jobs, and with other L3 related stable matters. In
> his stable reviews, he shows attention to details.
> >
> > If Brian is added to the team, I will make sure he is aware
> of all stable policy intricacies.
> >
> > Side note: recently I added another person to the team
> (Cedric Brandilly), and now I realize that I haven’t followed
> the usual approval process. That said, the person also has
> decent stable review stats, and is aware of the policy. If
> someone has doubts about that addition to the team, please
> ping me and we will discuss how to proceed.
> >
> > Ihar
> >
> 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
>  
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] [diskimage-builder] LVM in diskimage-builder

2016-05-17 Thread Andre Florath
Hi All!

AFAIK the diskimage-builder started as a testing tool, but it looks
like it is evolving more and more into a general purpose tool for creating
Docker and VM disk images.

Currently there are ongoing efforts to add LVM [1]. But because some
features that I need are missing, I created my own prototype to get a
'feeling' for the complexity and a possible way of doing things [2]. I
contacted Yolanda (the author of the original patch) and we agreed to
join forces here to implement a patch that fits both our needs.

Yolanda made the proposal that, before starting to implement things, we
could contact OpenStack developers via this mailing list and ask
about possible additional requirements and comments.

Here is a short list of my requirements - and as far as I understood
Yolanda, her requirements are a subset:

MUST be able to
o use one partition as PV
o use one VG
o use many LV (up to 10)
o specify the mount point for each of the LVs
o specify mount points that 'overlap', e.g.
  /, /var, /var/log, /var/spool
o use the default file system (options) as specified via command line
o survive everyday life - and not only in a dedicated test
  environment: must be robust and handle error scenarios
o use '/' (root fs) as LV
o run within different distributions - like Debian, Ubuntu, Centos7.
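One subtlety behind the "overlapping mount points" requirement above: the filesystems have to be mounted parent-first (/, then /var, then /var/log) or the deeper mounts end up shadowed. A minimal sketch of a safe ordering, assuming plain absolute paths:

```python
def mount_order(mount_points):
    """Order mount points so each parent is mounted before its children.

    For overlapping mount points like /, /var, /var/log, /var/spool,
    sorting by path depth (component count) is enough to guarantee a
    filesystem is never mounted over one of its descendants.
    """
    def depth(path):
        return 0 if path == "/" else path.rstrip("/").count("/")
    return sorted(mount_points, key=depth)
```

The reverse of this ordering is likewise the safe unmount order during image finalization.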

NICE TO HAVE
o Multiple partitions as PVs
o Multiple VGs
o LVM without any partition
  Or: why do we need partitions these days? ;-)
  Problem: I have no idea how to boot from a pure LVM image.

Every idea or comment will help!  Please feel invited to have a
(short) look / review at the implementation [1] and the design study
[2].

Kind regards

Andre


[1] https://review.openstack.org/#/c/252041/
[2] https://review.openstack.org/#/c/316529/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-17 Thread Carl Baldwin
On Tue, May 17, 2016 at 10:56 AM, Kevin Benton  wrote:
>>a) Deleting network's last segment will be prevented. Every network should
>> have at least one segment to let the port to bind.
>
> This seems a bit arbitrary to me. If a segment is limited to a small part of
> the datacenter, it being able to bind for one section of the datacenter and
> not the rest is not much different than being able to bind from no sections.
> Just allow it to be deleted because we need to have logic to deal with the
> unbindable port case anyway. Especially since it's a racy check that is hard
> to get correct for little gain.

I agree with Kevin here.

>>b) Deleting the segment that has been associated with subnet will be
>> prevented.
>
> +1

++

>>c) Deleting the segment that has been bound to port will be prevented.
>
> +1.

++

>>d) Based on c), we need to query ml2_port_binding_levels, I think
>> neutron.plugins.ml2.models.PortBindingLevel should be moved out of ml2. This
>> is also because port and segment are both neutron server resources, no need
>> to keep PortBindingLevel at ml2.
>
> There are things in this model that make sense only to ML2 (level and
> driver), especially since ML2 allows a single port_id to appear multiple
> times in the table (primary key is port_id + level).  To achieve your goals
> in 'C' above, just emit a BEFORE_DELETE event in the callback registry for
> segments. Then ML2 can query this table with a registered callback and other
> plugins can register a callback to prevent this however they want.

Sounds reasonable.
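A toy model of the pattern Kevin describes — ML2 (or any plugin) vetoing a segment delete via a BEFORE_DELETE callback, while skipping DHCP ports. Neutron's actual callback registry has a different interface; the names and dict-based "database" here are invented purely to illustrate the shape.

```python
class SegmentInUse(Exception):
    pass

_callbacks = {}

def subscribe(resource, event, fn):
    _callbacks.setdefault((resource, event), []).append(fn)

def notify(resource, event, **payload):
    # a subscriber vetoes the operation by raising
    for fn in _callbacks.get((resource, event), []):
        fn(**payload)

def delete_segment(db, segment_id):
    notify("segment", "before_delete", db=db, segment_id=segment_id)
    db["segments"].remove(segment_id)

def prevent_delete_if_bound(db, segment_id):
    # ML2-style guard; DHCP ports are ignored so that merely enabling
    # DHCP on a network does not pin its segments forever
    bound = [p for p in db["ports"]
             if p["segment"] == segment_id and p["owner"] != "network:dhcp"]
    if bound:
        raise SegmentInUse(segment_id)

subscribe("segment", "before_delete", prevent_delete_if_bound)
```

This keeps the veto logic in the plugin that owns the binding data instead of hard-coding it in the segments code.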

> However, be sure to ignore the DHCP port when preventing segment deletion
> otherwise having DHCP enabled will make it difficult to get rid of a
> segment.

They will be left somewhat defunct, won't they?  I think a foreign key
constraint would be violated if you tried to delete a segment with
even a DHCP port on it.

  port <- ipallocations (FK) -> subnets (FK) -> networksegments

I guess there is no foreign key constraint holding the ipallocations
to the port.  So, the ipallocations could be deleted.  But, that is
effectively stripping an existing port of its IP addresses which would
be weird.

I kind of think it makes sense to require evacuating a segment of its
ports before deleting it.

>>e) Is it possible to update a segment(physical_network, segmentation_id, or
>> even network_type), when the segment is being used?
>
> I would defer this for future work and not allow it for now. If the segment
> details change, we need to ask the drivers responsible for every bound port
> to make sure they can support it under the new conditions. It will be quite a bit
> of logic to deal with that I don't think we need to support up front.

++ Simplify!  We don't have a use case for this now.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Deprecated options in sample configs?

2016-05-17 Thread Matt Fischer
On Tue, May 17, 2016 at 12:47 PM, Andrew Laski  wrote:

>
>
>
> On Tue, May 17, 2016, at 02:36 PM, Matt Fischer wrote:
>
> On Tue, May 17, 2016 at 12:25 PM, Andrew Laski  wrote:
>
> I was in a discussion earlier about discouraging deployers from using
> deprecated options and the question came up about why we put deprecated
> options into the sample files generated in the various projects. So, why
> do we do that?
>
> I view the sample file as a reference to be used when setting up a
> service for the first time, or when looking to configure something for
> the first time. In neither of those cases do I see a benefit to
> advertising options that are marked deprecated.
>
> Is there some other case I'm not considering here? And how does everyone
> feel about modifying the sample file generation to exclude options which
> are marked with "deprecated_for_removal"?
>
>
>
>
> Can you clarify what you mean by having them? The way they are now is
> great for deployers I think and people (like me) who work on things like
> puppet and need to update options sometimes. For example, I like this way,
> example from keystone:
>
> # Deprecated group/name - [DEFAULT]/log_config
> #log_config_append = 
>
> Are you proposing removing that top line?
>
>
> That is a different type of deprecation which I didn't do a great job of
> distinguishing.
>
> There is deprecation of where a config option is defined, as in your
> example. I am not proposing to touch that at all. That simply indicates
> that a config option used to be in a different group or used to be named
> something else. That's very useful.
>
> There is deprecation of a config option in the sense that it is going away
> completely. An example would be:
>
> # DEPRECATED: OpenStack metadata service manager (string value)
> # This option is deprecated for removal.
> # Its value may be silently ignored in the future.
> #metadata_manager = nova.api.manager.MetadataManager
>
> I'm wondering if anyone sees a benefit to including that in the sample
> file when it is clearly not meant for use.
>
>
I believe it still has value and the use case is similar. It conveys
that a feature which I might be using is going away. The
release notes and log files provide similar info, and if I saw this I'd
probably head there next.

If this is confusing, what if instead the warning were stronger? "This
feature will not work in release X or after". Also what's the confusion
issue, is it just due to a sheer number of config options to dig through as
a new operator?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Telemetry] Time to test and import Panko

2016-05-17 Thread gordon chung
what needs to be tested here? that Ceilometer can dispatch to Panko? do 
we need to worry about any redirection like we had with Aodh?



On 17/05/2016 8:29 AM, Julien Danjou wrote:
> Hi fellows,
>
> I'm done creating Panko, our new project born from cutting off the event
> part of Ceilometer. It's at: https://github.com/jd/panko
>
> There are only a few commits as you can see:
>
>https://github.com/jd/panko/commits/master
>
> The code has been created in a 4 steps process:
> 1. Remove code that is not related to events storage and API
> 2. Rename to Panko
> 3. Remove base class for dispatcher
> 4. Rename database event dispatcher to panko
>
> Some testing would be welcome. It should be pretty straightforward, it
> provides `panko-api' that has a /v2/events endpoint and a "panko"
> event dispatcher for ceilometer-collector.
>
> The devstack plugin might need some love to integrate with Ceilometer,
> but I imagine we can do that in a later pass.
>
> I'm gonna create the openstack-infra patch to import the project unless
> someone tells me not to.
>
> Cheers,
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] proposing Brian Haley for neutron-stable-maint

2016-05-17 Thread Vasudevan, Swaminathan (PNB Roseville)
+1 for both.

From: Kevin Benton [mailto:ke...@benton.pub]
Sent: Tuesday, May 17, 2016 11:06 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [neutron][stable] proposing Brian Haley for 
neutron-stable-maint

+1 for both

On Tue, May 17, 2016 at 5:57 AM, Kyle Mestery wrote:
+1 (Also +1 for Cedric).

On Tue, May 17, 2016 at 6:07 AM, Ihar Hrachyshka wrote:
> Hi stable-maint-core and all,
>
> I would like to propose Brian for neutron specific stable team.
>
> His stats for neutron stable branches are (last 120 days):
>
> mitaka: 19 reviews; liberty: 68 reviews (3rd place in the top); kilo: 16 
> reviews.
>
> Brian helped the project with stabilizing liberty neutron/DVR jobs, and with 
> other L3 related stable matters. In his stable reviews, he shows attention to 
> details.
>
> If Brian is added to the team, I will make sure he is aware of all stable 
> policy intricacies.
>
> Side note: recently I added another person to the team (Cedric Brandilly), 
> and now I realize that I haven’t followed the usual approval process. That 
> said, the person also has decent stable review stats, and is aware of the 
> policy. If someone has doubts about that addition to the team, please ping me 
> and we will discuss how to proceed.
>
> Ihar
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer

2016-05-17 Thread Steven Dake (stdake)
Hello core reviewers,

I am proposing Mauricio (mlima on irc) for the core review team.  He has done a
fantastic job reviewing, appearing in the middle of the pack over 90 days [1] and
as #2 over 45 days [2].  His IRC participation is also fantastic, and he does
good technical work, including implementing Manila support from zero experience :)
as well as code cleanup all over the code base and documentation.  Consider my
proposal a +1 vote.

I will leave voting open for 1 week until May 24th.  Please vote +1 (approve), 
or -2 (veto), or abstain.  I will close voting early if there is a veto vote, 
or a unanimous vote is reached.

Thanks,
-steve

[1] http://stackalytics.com/report/contribution/kolla/90
[2] http://stackalytics.com/report/contribution/kolla/45
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Deprecated options in sample configs?

2016-05-17 Thread Dolph Mathews
I think the metadata_manager is one of many terrible examples of deprecated
configuration options. The documentation surrounding a deprecation should
*always* tell you why something is being deprecated, and what you should be
using instead to achieve the same, or better, result moving forward. But
instead, we get "you don't need to see any documentation... these aren't
the configs you're looking for."

If all our deprecated config options provided useful pointers in their
descriptions, there would be tremendous value in retaining deprecated
configuration options in sample config files -- in which case, I don't
believe we would be having this conversation which questions their value in
the first place.

On Tue, May 17, 2016 at 1:49 PM Andrew Laski  wrote:

>
>
>
> On Tue, May 17, 2016, at 02:36 PM, Matt Fischer wrote:
>
> On Tue, May 17, 2016 at 12:25 PM, Andrew Laski  wrote:
>
> I was in a discussion earlier about discouraging deployers from using
> deprecated options and the question came up about why we put deprecated
> options into the sample files generated in the various projects. So, why
> do we do that?
>
> I view the sample file as a reference to be used when setting up a
> service for the first time, or when looking to configure something for
> the first time. In neither of those cases do I see a benefit to
> advertising options that are marked deprecated.
>
> Is there some other case I'm not considering here? And how does everyone
> feel about modifying the sample file generation to exclude options which
> are marked with "deprecated_for_removal"?
>
>
>
>
> Can you clarify what you mean by having them? The way they are now is
> great for deployers I think and people (like me) who work on things like
> puppet and need to update options sometimes. For example, I like this way,
> example from keystone:
>
> # Deprecated group/name - [DEFAULT]/log_config
> #log_config_append = 
>
> Are you proposing removing that top line?
>
>
> That is a different type of deprecation which I didn't do a great job of
> distinguishing.
>
> There is deprecation of where a config option is defined, as in your
> example. I am not proposing to touch that at all. That simply indicates
> that a config option used to be in a different group or used to be named
> something else. That's very useful.
>
> There is deprecation of a config option in the sense that it is going away
> completely. An example would be:
>
> # DEPRECATED: OpenStack metadata service manager (string value)
> # This option is deprecated for removal.
> # Its value may be silently ignored in the future.
> #metadata_manager = nova.api.manager.MetadataManager
>
> I'm wondering if anyone sees a benefit to including that in the sample
> file when it is clearly not meant for use.
>
>
>
>
> *__*
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
-Dolph
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][MySQL][DLM][Oslo][DB][Trove][Galera][operators] Multi-master writes look OK, OCF RA and more things

2016-05-17 Thread Clint Byrum
I missed your reply originally, so sorry for the 2 week lag...

Excerpts from Mike Bayer's message of 2016-04-30 15:14:05 -0500:
> 
> On 04/30/2016 10:50 AM, Clint Byrum wrote:
> > Excerpts from Roman Podoliaka's message of 2016-04-29 12:04:49 -0700:
> >>
> >
> > I'm curious why you think setting wsrep_sync_wait=1 wouldn't help.
> >
> > The exact example appears in the Galera documentation:
> >
> > http://galeracluster.com/documentation-webpages/mysqlwsrepoptions.html#wsrep-sync-wait
> >
> > The moment you say 'SET SESSION wsrep_sync_wait=1', the behavior should
> > prevent the list problem you see, and it should not matter that it is
> > a separate session, as that is the entire point of the variable:
> 
> 
> we prefer to keep it off and just point applications at a single node 
> using master/passive/passive in HAProxy, so that we don't have the 
> unnecessary performance hit of waiting for all transactions to 
> propagate; we just stick on one node at a time.   We've fixed a lot of 
> issues in our config in ensuring that HAProxy definitely keeps all 
> clients on exactly one Galera node at a time.
> 

Indeed, haproxy does a good job of shifting over rapidly. But it's not
atomic, so you will likely have a few seconds where commits land on the
newly demoted backup.

> >
> > "When you enable this parameter, the node triggers causality checks in
> > response to certain types of queries. During the check, the node blocks
> > new queries while the database server catches up with all updates made
> > in the cluster to the point where the check was begun. Once it reaches
> > this point, the node executes the original query."
> >
> > In the active/passive case where you never use the passive node as a
> > read slave, one could actually set wsrep_sync_wait=1 globally. This will
> > cause a ton of lag while new queries happen on the new active and old
> > transactions are still being applied, but that's exactly what you want,
> > so that when you fail over, nothing proceeds until all writes from the
> > original active node are applied and available on the new active node.
> > It would help if your failover technology actually _breaks_ connections
> > to a presumed dead node, so writes stop happening on the old one.
> 
> If HAProxy is failing over from the master, which is no longer 
> reachable, to another passive node, which is reachable, that means that 
> master is partitioned and will leave the Galera primary component.   It 
> also means all current database connections are going to be bounced off, 
> which will cause errors for those clients either in the middle of an 
> operation, or if a pooled connection is reused before it is known that 
> the connection has been reset.  So failover is usually not an error-free 
> situation in any case from a database client perspective and retry 
> schemes are always going to be needed.
> 

There are some really big assumptions above, so I want to enumerate
them:

1. You assume that a partition between haproxy and a node is a partition
   between that node and the other galera nodes.
2. You assume that I never want to failover on purpose, smoothly.

In the case of (1), there are absolutely times where the load balancer
thinks a node is dead, and it is quite happily chugging along doing its
job. Transactions will be already committed in this scenario that have
not propagated, and there may be more than one load balancer, and only
one of them thinks that node is dead.

For the limited partition problem, having wsrep_sync_wait turned on
would result in consistency, and the lag would only be minimal as the
transactions propagate onto the new primary server.
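To make that concrete, here is a hypothetical sketch (not code from any OpenStack project) of enabling the causality check for a single read from application code; `causal_read` works with any DB-API-style cursor, and the variable name `wsrep_sync_wait` is the real Galera session variable quoted above:

```python
def causal_read(cursor, query, params=()):
    """Run one read with Galera causality checks enabled for the session.

    With wsrep_sync_wait = 1 the node blocks the query until it has
    applied all cluster updates known at the moment the check began,
    then executes the original query.
    """
    cursor.execute("SET SESSION wsrep_sync_wait = 1")
    try:
        cursor.execute(query, params)
        return cursor.fetchall()
    finally:
        # Restore the default so other queries on this session do not
        # pay the causality-wait cost.
        cursor.execute("SET SESSION wsrep_sync_wait = 0")
```

This keeps the wait scoped to the reads that actually need read-after-write consistency, rather than paying it globally.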

For the multiple haproxy problem, lag would be _horrible_ on all nodes
that are getting reads as long as there's another one getting writes,
so a solution for making sure only one is specified would need to be
developed using a leader election strategy. If haproxy is able to query
wsrep status, that might be ideal, as galera will in fact elect leaders
for you (assuming all of your wsrep nodes are also mysql nodes, which
is not the case if you're using 2 nodes + garbd for example).

This is, however, a bit of a strawman, as most people don't need
active/active haproxy nodes, so the simplest solution is to go
active/passive on your haproxy nodes with something like UCARP handling
the failover there. As long as they all use the same primary/backup
ordering, then a new UCARP target should just result in using the same
node, and a very tiny window for inconsistency and connection errors.
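For illustration only, a sketch of the "exactly one writable node" haproxy backend ordering described above. The addresses, the `galera` listener name, and the port-9200 health check (e.g. a clustercheck-style agent) are assumptions, not details from this thread:

```
# Hypothetical haproxy fragment: one active Galera backend, the rest
# marked "backup" in an ordering shared by every haproxy instance.
listen galera
    bind 10.0.0.10:3306
    mode tcp
    option httpchk                       # pair with a clustercheck agent
    server node1 10.0.0.1:3306 check port 9200
    server node2 10.0.0.2:3306 check port 9200 backup
    server node3 10.0.0.3:3306 check port 9200 backup
```

As long as every haproxy instance lists the nodes in the same order, failing the VIP over between haproxy instances lands clients on the same writable node.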

The second assumption is handled by leader election as well. If there's
always one leader node that load balancers send traffic to, then one
should be able to force promotion of a different node as the leader,
and all new transactions and queries go to the new leader. The window
for that would be pretty small, and so wsrep_sync_wait time should
be able to be very low, if not 0. I'm not super familiar with the way
haproxy gracefully 

Re: [openstack-dev] [all] Deprecated options in sample configs?

2016-05-17 Thread Andrew Laski
 
 
 
On Tue, May 17, 2016, at 02:36 PM, Matt Fischer wrote:
> On Tue, May 17, 2016 at 12:25 PM, Andrew Laski
>  wrote:
>> I was in a discussion earlier about discouraging deployers from using
>>  deprecated options and the question came up about why we put
>>  deprecated
>>  options into the sample files generated in the various projects.
>>  So, why
>>  do we do that?
>>
>>  I view the sample file as a reference to be used when setting up a
>>  service for the first time, or when looking to configure
>>  something for
>>  the first time. In neither of those cases do I see a benefit to
>>  advertising options that are marked deprecated.
>>
>>  Is there some other case I'm not considering here? And how does
>>  everyone
>>  feel about modifying the sample file generation to exclude
>>  options which
>>  are marked with "deprecated_for_removal"?
>>
>
>
> Can you clarify what you mean by having them? The way they are now is
> great for deployers I think and people (like me) who work on things
> like puppet and need to update options sometimes. For example, I like
> this way, example from keystone:
>
> # Deprecated group/name - [DEFAULT]/log_config
> #log_config_append = 
>
> Are you proposing removing that top line?
 
That is a different type of deprecation which I didn't do a great job of
distinguishing.
 
There is deprecation of where a config option is defined, as in your
example. I am not proposing to touch that at all. That simply indicates
that a config option used to be in a different group or used to be named
something else. That's very useful.
 
There is deprecation of a config option in the sense that it is going
away completely. An example would be:
 
# DEPRECATED: OpenStack metadata service manager (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#metadata_manager = nova.api.manager.MetadataManager
 
I'm wondering if anyone sees a benefit to including that in the sample
file when it is clearly not meant for use.
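As a minimal sketch of the proposed filtering step (this is illustrative only, not the real oslo.config sample generator; `Opt` is a namedtuple stand-in that models just the `deprecated_for_removal` attribute discussed in this thread):

```python
from collections import namedtuple

# Stand-in for an oslo.config option object; only the attribute the
# proposal cares about is modeled here.
Opt = namedtuple('Opt', ['name', 'deprecated_for_removal'])

def sample_file_opts(opts):
    """Drop options slated for removal from the generated sample file."""
    return [o for o in opts if not o.deprecated_for_removal]

opts = [Opt('log_config_append', False),   # stays: merely renamed/moved
        Opt('metadata_manager', True)]     # hidden: going away entirely
print([o.name for o in sample_file_opts(opts)])  # ['log_config_append']
```

Options that only changed group/name would keep appearing with their "Deprecated group/name" note; options marked for removal would simply be absent.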
 
 


Re: [openstack-dev] [all] Deprecated options in sample configs?

2016-05-17 Thread Matt Fischer
On Tue, May 17, 2016 at 12:25 PM, Andrew Laski  wrote:

> I was in a discussion earlier about discouraging deployers from using
> deprecated options and the question came up about why we put deprecated
> options into the sample files generated in the various projects. So, why
> do we do that?
>
> I view the sample file as a reference to be used when setting up a
> service for the first time, or when looking to configure something for
> the first time. In neither of those cases do I see a benefit to
> advertising options that are marked deprecated.
>
> Is there some other case I'm not considering here? And how does everyone
> feel about modifying the sample file generation to exclude options which
> are marked with "deprecated_for_removal"?
>
>
>

Can you clarify what you mean by having them? The way they are now is great
for deployers I think and people (like me) who work on things like puppet
and need to update options sometimes. For example, I like this way, example
from keystone:

# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = 

Are you proposing removing that top line?


[openstack-dev] [all] Deprecated options in sample configs?

2016-05-17 Thread Andrew Laski
I was in a discussion earlier about discouraging deployers from using
deprecated options and the question came up about why we put deprecated
options into the sample files generated in the various projects. So, why
do we do that?

I view the sample file as a reference to be used when setting up a
service for the first time, or when looking to configure something for
the first time. In neither of those cases do I see a benefit to
advertising options that are marked deprecated.

Is there some other case I'm not considering here? And how does everyone
feel about modifying the sample file generation to exclude options which
are marked with "deprecated_for_removal"?



Re: [openstack-dev] [release] Re: [Neutron][L2GW] Mitaka release of L2 Gateway now available

2016-05-17 Thread Carl Baldwin
tl;dr  Merge changes to process documentation before expecting them to
be followed!  :)

On Tue, May 17, 2016 at 4:25 AM, Ihar Hrachyshka  wrote:
> 2016.1.0 tag is in the repo, and is part of stable/mitaka branch.
>
> Git tag history suggests that Carl pushed it (manually I guess?) It seems 
> that we release some independent deliverables thru openstack/releases, and 
> some manually pushing tags into repos.

Yes, I did this manually.  I wondered about the new process because I'd
seen the review [1], but it has not merged, so I did not attempt to
follow it.  Merging these things signals changes in process and keeps
things clear and consistent for everyone.

> I would love if we can consolidate all our releases to use a single 
> automation mechanism (openstack/releases patches), independent of release 
> model. For that, I would like to hear from release folks whether we are 
> allowed to use openstack/releases repo for release:independent deliverables 
> that are part of an official project (neutron).

This would be great!  We just need to merge the changes to the
process.  I will continue to follow the process documented here [3].
(I even asked for some stale pages to be removed so that we wouldn't
be confused [2]).

Carl

[1] https://review.openstack.org/#/c/308984
[2] http://lists.openstack.org/pipermail/openstack-dev/2016-May/094698.html
[3] 
http://docs.openstack.org/developer/neutron/stadium/sub_project_guidelines.html



Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-05-17 Thread Kevin Benton
Yeah, no meetings in #openstack-neutron please. It leaves us nowhere to
discuss development stuff during that hour.

On Tue, May 17, 2016 at 2:54 AM, Miguel Angel Ajo Pelayo <
majop...@redhat.com> wrote:

> I agree, let's try to find a timeslot that works.
>
> using #openstack-neutron with the meetbot works, but it's going to
> generate a lot of noise.
>
> On Tue, May 17, 2016 at 11:47 AM, Ihar Hrachyshka 
> wrote:
>
>>
>> > On 16 May 2016, at 15:47, Takashi Yamamoto 
>> wrote:
>> >
>> > On Mon, May 16, 2016 at 10:25 PM, Takashi Yamamoto
>> >  wrote:
>> >> hi,
>> >>
>> >> On Mon, May 16, 2016 at 9:00 PM, Ihar Hrachyshka 
>> wrote:
>> >>> +1 for earlier time. But also, have we booked any channel for the
>> meeting? Hijacking #openstack-neutron may not work fine during such a busy
>> (US) time. I suggest we propose a patch for
>> https://github.com/openstack-infra/irc-meetings
>> >>
>> >> i agree and submitted a patch.
>> >> https://review.openstack.org/#/c/316830/
>> >
>> > oops, unfortunately there seems no meeting channel free at the time
>> slot.
>>
>> This should be solved either by changing the slot, or by getting a new
>> channel registered for meetings. Using unregistered channels, especially
>> during busy hours, is not effective, and is prone to overlaps for relevant
>> meetings. The meetings will also not get a proper slot at
>> eavesdrop.openstack.org.
>>
>> Ihar


Re: [openstack-dev] [neutron][stable] proposing Brian Haley for neutron-stable-maint

2016-05-17 Thread Kevin Benton
+1 for both

On Tue, May 17, 2016 at 5:57 AM, Kyle Mestery  wrote:

> +1 (Also +1 for Cedric).
>
> On Tue, May 17, 2016 at 6:07 AM, Ihar Hrachyshka 
> wrote:
> > Hi stable-maint-core and all,
> >
> > I would like to propose Brian for neutron specific stable team.
> >
> > His stats for neutron stable branches are (last 120 days):
> >
> > mitaka: 19 reviews; liberty: 68 reviews (3rd place in the top); kilo: 16
> reviews.
> >
> > Brian helped the project with stabilizing liberty neutron/DVR jobs, and
> with other L3 related stable matters. In his stable reviews, he shows
> attention to details.
> >
> > If Brian is added to the team, I will make sure he is aware of all
> stable policy intricacies.
> >
> > Side note: recently I added another person to the team (Cedric
> Brandilly), and now I realize that I haven’t followed the usual approval
> process. That said, the person also has decent stable review stats, and is
> aware of the policy. If someone has doubts about that addition to the team,
> please ping me and we will discuss how to proceed.
> >
> > Ihar
> >


[openstack-dev] [neutron][fwaas] neutron-fwaas meeting reminder

2016-05-17 Thread Nate Johnston
All,

Now that the FWaaS team has a brand new sparkling meeting time, I wanted to
remind anyone who is interested to join us on #openstack-meeting-3 for a
discussion about the state of Firewall as a Service.  The meeting is Wednesday
at 0400 UTC.  For your convenience, this is what that translates to:


   - Wednesday 01:00pm JST (GMT+9): Tokyo
   - Wednesday 09:30am IST (GMT+5.5): New Delhi, Chennai
   - Wednesday midnight EDT (GMT-4): Washington DC, Philadelphia
   - Tuesday 9:00pm PDT (GMT-7): San Jose
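The conversions above can be reproduced with fixed UTC offsets (a quick sketch; the concrete date is arbitrary and the offsets are hard-coded for illustration rather than taken from the tz database):

```python
from datetime import datetime, timedelta, timezone

# 0400 UTC on a Wednesday; fixed offsets already account for DST
# where it applies (EDT, PDT).
meeting = datetime(2016, 5, 18, 4, 0, tzinfo=timezone.utc)

for name, hours in [('JST', 9), ('IST', 5.5), ('EDT', -4), ('PDT', -7)]:
    local = meeting.astimezone(timezone(timedelta(hours=hours)))
    print(name, local.strftime('%A %H:%M'))
# JST Wednesday 13:00
# IST Wednesday 09:30
# EDT Wednesday 00:00
# PDT Tuesday 21:00
```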


If you have items to add, please add them to the agenda here:
https://etherpad.openstack.org/p/fwaas-meeting

Thanks very much, and see you in ten hours.

--N.


Re: [openstack-dev] [horizon] [javascript] Async file uploads in anew JS-centric UI

2016-05-17 Thread Thai Q Tran

Thanks for the great explanation, Timur. I have bookmarked both and will
look into them shortly.




Re: [openstack-dev] [horizon] [javascript] Async file uploads in a new JS-centric UI

2016-05-17 Thread Timur Sufiev
Since 10 lines of code > 1000 words, here are references to 2 patch chains:

* New Swift UI file upload https://review.openstack.org/#/c/316143/
* New Angular Create Image file upload
https://review.openstack.org/#/c/317456/

I like the Create Image solution more because it doesn't use the Django
csrf_exempt and single-FileField form hacks just to accept a binary stream
on the Django side. So CORS offers us a shorter and more elegant solution
to the task of file uploads.

I got off-ML feedback that the question / intention of the original mail
is not clear. My intention / proposal is to adopt the approach used for
uploading files in Create Image workflow as the standard way for other
interactions (which include file uploads) between Web Clients and OpenStack
services.
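The direct-to-service upload described here targets Glance's v2 image-data endpoint (`PUT /v2/images/{image_id}/file`). A small hypothetical helper, with the endpoint, image ID, and token values as placeholder assumptions, shows the "metadata call vs. binary stream" split:

```python
def image_upload_target(glance_endpoint, image_id, token):
    """Build the URL and headers for streaming raw image bytes straight
    to Glance's v2 data endpoint, so a CORS-enabled browser client can
    bypass the Django web server for the heavyweight part entirely."""
    url = '%s/v2/images/%s/file' % (glance_endpoint.rstrip('/'), image_id)
    headers = {'X-Auth-Token': token,
               'Content-Type': 'application/octet-stream'}
    return url, headers

url, headers = image_upload_target('http://glance:9292', 'abc123', 'token')
print(url)  # http://glance:9292/v2/images/abc123/file
```

The lightweight JSON metadata call (creating the image record) would go through the usual API path; only the raw byte stream is sent to the target computed here.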

On Sat, May 14, 2016 at 2:48 AM Timur Sufiev  wrote:

> Hello, fellow web developers!
>
> I'd like to share some of my thoughts and findings that I made during
> playing with ng-file-upload [1] for file uploads in Swift UI and Create
> Image wizard in Horizon (both Angular). Sorry for a long letter, couldn't
> put it shorter (TL;DR => go to CONCLUSION section).
>
> As a foreword: it's a really useful library, both for customizing stubborn
>  widget appearance (hello, themability!) and behavior
> and for the actual file transfer. More on the file transfer below, since
> it's the topic that really interests me.
>
> MAIN PART
>
> First, in modern browsers (by modern I mean HTML5 support and particularly
> FileReader API) you don't need a single-purposed library to upload file
> asynchronously, both jQuery $.ajax() and vanilla Angular $http calls
> support it - just pass File()/Blob() object as data (no wrapping {} please)
> and it works - browser transparently reads data chunk by chunk  from a
> local file system and sends it to the server. There is even a working
> solution for Django and jQuery-based 'Create Image' form [2]. There are a
> few subtleties though. Most importantly, there should be no other data
> (like other key-value pairs from form fields), just the file blob - and
> then the server endpoint must support raw byte stream as well. This rules
> out Django views which expect certain headers and body structure.
>
> (Second,) What ng-file-upload offers us to solve the challenge of file
> transfers? There are 2 methods in Upload service: .http() and .upload().
> First one is a very thin wrapper around Angular $http, with one difference
> that it allows to notify() browser of file upload progress (when just a
> single file blob is passed in .http(), as in case of $http() as well). The
> second method offers more features, like abortable/resumable uploads and
> transparent handling of data like {key1: value1, key2: value2, file:
> FileBlob}. Uploading such data is implemented using standard
> multipart/form-data content type, so actually, it's just a convenient
> wrapper around facilities we've already seen. Anyways it's better to just
> feed the data into Upload.upload() than to construct FormData() on your own
> (still the same is happening under the bonnet).
>
> Third, and most important point, we still have to couple Upload.http() /
> Upload.upload() with a server-side machinery. If it's a single file upload
> with Upload.http(), then the server must be able to work with raw binary
> stream (I'm repeating myself here). If it's a form data including file
> blob, it's easily handled on front-end with Upload.upload(), then the
> server must be able to parse multipart/form-data (Django perfectly does
> that). What's bad in this situation is that it also needs to store any
> sufficiently sized file in a web server's file system - which is both
> bug-prone [4] and suboptimal from performance POV. First we need to send a
> file (possibly GB-sized) from browser to web server, then from web server
> to the Glance/Swift/any other service host. So, blindly using
> Upload.upload() won't solve our _real_ problems with file uploads.
>
> CONCLUSION
>
> What can be done here to help JS UI to handle really large uploads? Split
> any API calls / views / whatever server things we have into 2 parts:
> lightweight JSON metadata + heavyweight binary stream. Moreover, use CORS
> for the second part to send binary streams directly to that data consumers
> (I know of 2 services atm - Glance & Swift, maybe there are more?). That
> will reduce the processing time, increasing the possibility that an
> operation will complete successfully before Keystone token expires :). IMO
> any new Angular wizard in Horizon should be designed with this thing in
> mind: a separate API call for binary stream transfer.
>
> Thoughts, suggestions?
>
> P.S. None of above means that we shouldn't use ng-file-upload, it's still
> a very convenient tool.
>
> [1] https://github.com/danialfarid/ng-file-upload
> [2] https://review.openstack.org/#/c/230434/
> [3]
> https://github.com/openstack/horizon/blob/master/horizon/static/horizon/js/horizon.modals.js#L216
> [4] 

Re: [openstack-dev] [Heat] Versioned objects upgrade patterns

2016-05-17 Thread Michał Dulko
On 05/17/2016 06:30 PM, Crag Wolfe wrote:
> Hi all,
>
> I've read that versioned objects are favored for supporting different
> versions between RPC services and to support rolling upgrades. I'm
> looking to follow the pattern for Heat. Basically, it is the classic
> problem where we want to migrate from writing to a column in one table
> to having that column live in a different table. Looking at nova code,
> the version for a given versioned object is a constant in the given
> object/.py file. To properly support rolling upgrades
> where we have older and newer heat-engine processes running
> simultaneously (thus avoiding downtime), we have to write to both the
> old column and the new column. Once all processes have been upgraded,
> we can upgrade again to only write to the new location (but still able
> to read from the old location of course). Following the existing
> pattern, this means the operator has to upgrade 
> twice (it may be possible to increment VERSION in 
> only once, however, the first time).
>
> The drawback of the above is it means cutting two releases (since two
> different .py files). However, I wanted to check if anyone has gone
> with a different approach so only one release is required. One way to
> do that would be by specifying a version (or some other flag) in
> heat.conf. Then, only one .py release would be
> required -- the logic of whether to write to both the old and new
> location (the intermediate step) versus just the new location (the
> final step) would be in .py, dictated by the config
> value. The advantage to this approach is now there is only one .py
> file released, though the operator would still have to make a config
> change and restart heat processes a second time to move from the
> intermediate step to the final step.

Nova has the pattern of being able to do all that in one release by
exercising o.vo, but there are assumptions they are relying on (details
[1]):

  * nova-compute accesses the DB through nova-conductor.
  * nova-conductor gets upgraded atomically.
  * nova-conductor is able to backport an object if nova-compute is
older and doesn't understand it.

Now if you want to have heat-engines running in different versions and
all of them are freely accessing the DB, then that approach won't work
as there's no one who can do a backport.

We've faced same issue in Cinder and developed a way to do such
modifications in three releases for columns that are writable and two
releases for columns that are read-only. This is explained in spec [2]
and devref [3]. And yes, it's a little painful.

If I got everything correctly, your idea of a two-step upgrade will work
only for read-only columns. Consider this situation:

 1. We have deployment running h-eng (A and B) in version X.
 2. We apply X+1 migration moving column `foo` to `bar`.
 3. We upgrade h-eng A to X+1. Now it writes to both `foo` and `bar`.
 4. A updates `foo` and `bar`.
 5. B updates `foo`. Now the correct value is in `foo` only.
 6. A wants to read the value. But is the latest one in `foo` or `bar`?
    There is no way to tell.
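That race can be reproduced in a toy model (plain dicts standing in for DB rows; this is illustrative only, not Heat code):

```python
# `row` stands in for a DB row; the two writers stand in for
# heat-engines at versions X and X+1.
row = {'foo': None, 'bar': None}

def engine_b_write(value):   # version X: knows only the old column
    row['foo'] = value

def engine_a_write(value):   # version X+1: dual-writes old and new
    row['foo'] = value
    row['bar'] = value

engine_a_write('v1')
engine_b_write('v2')         # the latest value now lives only in `foo`

# A reader preferring the new column sees stale data:
print(row['bar'])  # v1  (the latest write was v2)
```

Nothing in the row says which column was written last, which is why the dual-write phase only works once every writer has been upgraded.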


I know Keystone team is trying to solve that with some SQLAlchemy magic,
but I don't think the design is agreed on yet. There was a presentation
at the summit [4] that mentions it (and attempts clarification of
approaches taken by different projects).

Hopefully this helps a little.

Thanks,
Michal (dulek on freenode)

[1] 
http://www.danplanet.com/blog/2015/10/07/upgrades-in-nova-database-migrations/

[2] 
http://specs.openstack.org/openstack/cinder-specs/specs/mitaka/online-schema-upgrades.html

[3] 
http://docs.openstack.org/developer/cinder/devref/rolling.upgrades.html#database-schema-and-data-migrations

[4] https://www.youtube.com/watch?v=ivcNI7EHyAY




Re: [openstack-dev] [TripleO] Aodh upgrades - Request backport exception for stable/liberty

2016-05-17 Thread James Slagle
On Tue, May 17, 2016 at 12:04 PM, Pradeep Kilambi  wrote:
> Thanks Steve. I was under the impression we cannot run puppet at this
> stage. Hence my suggestion to run bash or some script here, but if we
> can find a way to easily wire the existing aodh puppet manifests into
> the upgrade process and get aodh up and running then even better, we
> don't have to duplicate what puppet gives us already, and reuse that.

We could add any SoftwareDeployment resource(s) to the templates that
trigger either scripts or puppet.

>
>
>>> At most, it seems we'd have to surround the puppet apply with some
>>> pacemaker commands to possibly set maintenance mode and migrate
>>> constraints.
>>>
>>> The puppet manifest itself would just be the includes and classes for aodh.
>>
>> +1
>>
>>> One complication might be that the aodh packages from Mitaka might
>>> pull in new deps that required updating other OpenStack services,
>>> which we wouldn't yet want to do. That is probably worth confirming
>>> though.
>>
>> It seems like we should at least investigate this approach before going
>> ahead with the backport proposed - I'll -2 the backports pending further
>> discussion and investigation into this alternate approach.
>>
>
> Makes sense to me. I understand the hesitation behind backports. I'm
> happy to work with jistr and slagle to see if this is a viable
> alternative. If we can get this working without too much effort, I'm
> all for dumping the backports and going with this.

Using a liberty overcloud-full image, I enabled the mitaka repos and
tried to install aodh:
http://paste.openstack.org/show/497395/

It looks like it will cleanly pull in just aodh packages, and there
aren't any transitive dependencies that require updating any other
OpenStack services.

That means that we ought to be able to take a liberty cloud and update
it to use aodh from mitaka. That could be step 1 of the upgrade. The
operator could pause there for as long as they wanted, and then
continue on with the rest of the upgrade of the other services to
Mitaka. It may even be possible to implement them as separate stack
updates.

Does that sound like it could work? Would we have to update some parts
of Ceilometer as well, or do Liberty Ceilometer and Mitaka Aodh work
together nicely?

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][CI] Tempest on periodic jobs

2016-05-17 Thread Sagi Shnaidman
Hi,
I'm raising again the question of running tempest in TripleO CI, as it
was discussed in the last TripleO meeting.

I'd like to draw your attention to the fact that these tests, which I ran
just to make sure they work, discovered real bugs, and these weren't
corner cases but actual failures of the TripleO installation. Like this
one for Sahara: https://review.openstack.org/#/c/309042/
I'm sorry, I should have prepared these bugs for the meeting as proof of
the testing's value.

The second issue that was a blocker before is wall time, and now, as we
can see from job lengths after the hardware upgrade of CI, it is not an
issue anymore. We can run tempest without any fear of hitting the timeout
problem, certainly for the "nonha" job, which is the shortest of all.

So I'd insist on running tempest on the promotion job specifically, in
order not to promote images with bugs, especially critical ones like a
whole service not being available at all. The pingtest is not enough for
this purpose, as we can see from the bugs above: it checks very basic
things, and not all services are covered. I think we aren't interested in
just seeing the jobs green, but in guaranteeing basic working
functionality and the quality of what we promote. Maybe it's the
influence of my previous QA roles, but I don't see any value in promoting
something with bugs.

On the point about CI stability: the latest issues that CI faces are not
really connected to the tempest tests or the CI code at all; they are
bugs in underlying projects, and whether tempest runs or not doesn't
really matter in this case. These issues fail everything before any
testing even starts. Catching such issues before they leak into TripleO
is a different topic and approach.

So my main point for running tempest tests on the "nonha" periodic jobs
is: quality and guaranteed basic functionality of the installed overcloud
services. At the very least, all of them are up and can accept
connections. We avoid, and discover early, critical bugs that are not
visible in the pingtest. I remind you that we are going to run only the
smoke tests, which take little time and check basic functionality only.

P.S. If there is interest, we can run the whole tempest set or specific
sets in experimental or third-party jobs just for indication. And I mean
not only tempest tests, but project scenario tests as well, for example
the Heat integration tests, both for the undercloud and the overcloud.

P.P.S. Just ping me if you have any unclear points or would like to
discuss this in a separate meeting; I'll provide all the required info.

Thanks
-- 
Best regards
Sagi Shnaidman
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-17 Thread Kevin Benton
>a) Deleting a network's last segment will be prevented. Every network should have
at least one segment to let ports bind.

This seems a bit arbitrary to me. If a segment is limited to a small part
of the datacenter, being able to bind in one section of the datacenter
and not the rest is not much different from being able to bind in no
sections. Just allow it to be deleted, because we need to have logic to
deal with the unbindable port case anyway. Especially since it's a racy
check that is hard to get correct for little gain.

>b) Deleting the segment that has been associated with subnet will be
prevented.

+1

>c) Deleting the segment that has been bound to port will be prevented.

+1.

>d) Based on c), we need to query ml2_port_binding_levels, I think
neutron.plugins.ml2.models.PortBindingLevel should be moved out of ml2. This
is also because port and segment are both neutron server resources, no need
to keep PortBindingLevel at ml2.

There are things in this model that make sense only to ML2 (level and
driver), especially since ML2 allows a single port_id to appear multiple
times in the table (the primary key is port_id + level). To achieve your
goals in 'c' above, just emit a BEFORE_DELETE event in the callback
registry for segments. Then ML2 can query this table with a registered
callback, and other plugins can register callbacks to prevent this
however they want.

However, be sure to ignore the DHCP port when preventing segment
deletion; otherwise, having DHCP enabled will make it difficult to get
rid of a segment.
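The veto pattern described above can be illustrated with a small,
self-contained sketch: a callback subscribed to a BEFORE_DELETE event for
segments raises to block the deletion, and DHCP ports are excluded from
the in-use check. The registry below is a stand-in written for
illustration only; it is not Neutron's actual callback API, and all names
are assumptions.

```python
# Illustrative stand-in for a callback registry; not Neutron's real API.
BEFORE_DELETE = 'before_delete'
SEGMENT = 'segment'

_callbacks = {}

def subscribe(callback, resource, event):
    _callbacks.setdefault((resource, event), []).append(callback)

def notify(resource, event, **kwargs):
    # A callback raises to veto the operation.
    for cb in _callbacks.get((resource, event), []):
        cb(resource, event, **kwargs)

class SegmentInUse(Exception):
    pass

def _prevent_segment_delete(resource, event, segment_id=None, bound_ports=()):
    # Refuse deletion while any non-DHCP port is still bound to the segment.
    ports = [p for p in bound_ports if p.get('device_owner') != 'network:dhcp']
    if ports:
        raise SegmentInUse('segment %s has bound ports' % segment_id)

subscribe(_prevent_segment_delete, SEGMENT, BEFORE_DELETE)

def delete_segment(segment_id, bound_ports):
    # BEFORE_DELETE gives registered plugins a chance to veto.
    notify(SEGMENT, BEFORE_DELETE, segment_id=segment_id,
           bound_ports=bound_ports)
    return 'deleted %s' % segment_id
```

In a real implementation, the callback would query the binding table
(e.g. ml2_port_binding_levels) itself rather than receiving the bound
ports as an argument.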

>e) Is it possible to update a segment (physical_network, segmentation_id, or
even network_type) when the segment is being used?

I would defer this to future work and not allow it for now. If the
segment details change, we need to ask the drivers responsible for every
bound port to make sure they can support it under the new conditions.
Dealing with that will take quite a bit of logic that I don't think we
need to support up front.

On Tue, May 17, 2016 at 8:39 AM, Hong Hui Xiao  wrote:

> Hi,
>
> I created this patch [1] to allow multi-segmented routed provider networks
> to grow and shrink over time; reviews are welcome. I found these points
> while working on the patch, and I think it is good to bring them out for
> discussion.
>
> a) Deleting a network's last segment will be prevented. Every network should
> have at least one segment to let ports bind.
>
> b) Deleting the segment that has been associated with subnet will be
> prevented.
>
> c) Deleting the segment that has been bound to port will be prevented.
>
> d) Based on c), we need to query ml2_port_binding_levels, I think
> neutron.plugins.ml2.models.PortBindingLevel should be moved out of ml2.
> This is also because port and segment are both neutron server resources,
> no need to keep PortBindingLevel at ml2.
>
> e) Is it possible to update a segment (physical_network, segmentation_id,
> or even network_type) when the segment is being used?
>
> [1] https://review.openstack.org/#/c/317358
>
> HongHui Xiao(肖宏辉)
>
>
>
> From:   Carl Baldwin 
> To: OpenStack Development Mailing List
> 
> Date:   05/12/2016 23:36
> Subject:[openstack-dev] [Neutron][ML2][Routed Networks]
>
>
>
> Hi,
>
> Segments are now a first class thing in Neutron with the merge of this
> patch [1].  It exposes API for segments directly.  With ML2, it is
> currently only possible to view segments that have been created
> through the provider net or multi-provider net extensions.  This can
> only be done at network creation time.
>
> In order to allow multi-segmented routed provider networks to grow and
> shrink over time, it is necessary to allow creation and deletion of
> segments through the new segment endpoint.  Hong Hui Xiao has offered
> to help with this.
>
> We need to provide the integration between the service plugin that
> provides the segments endpoint with ML2 to allow the creates and
> deletes to work properly.  We'd like to hear from ML2 experts out
> there on how this integration can proceed.  Is there any caution that
> we need to take?  What are the non-obvious aspects of this that we're
> not thinking about?
>
> Carl Baldwin
>
> [1] https://review.openstack.org/#/c/296603/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List 

[openstack-dev] Questions about Openstack Mitaka Dashboard

2016-05-17 Thread zhihao wang
Dear OpenStack Dev Team

May I ask you some questions about the OpenStack Mitaka Horizon Dashboard?
I have followed the OpenStack online installation document and your
installation guide http://docs.openstack.org/mitaka/install-guide-ubuntu/
But after I installed Horizon, I cannot access it. It showed the following
error. I am not sure what was wrong, but I have followed the same steps
and installed OpenStack Mitaka twice, and I still got the same problem
again.
I use two physical servers, 1 controller and 1 compute node. Both are
Ubuntu 14.04.
I followed the same steps as in the documents, so I think it should be an
Apache configuration problem.
Could you please give me a hand with this?

==
Internal Server Error

The server encountered an internal error or misconfiguration and was
unable to complete your request.

Please contact the server administrator at webmaster@localhost to inform
them of the time this error occurred, and the actions you performed just
before this error.

More information about this error may be available in the server error
log.

Apache/2.4.7 (Ubuntu) Server at 10.145.213.30 Port 80
==

Thanks a lot,
Wally

  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Versioned objects upgrade patterns

2016-05-17 Thread Crag Wolfe
Hi all,

I've read that versioned objects are favored for supporting different
versions between RPC services and for supporting rolling upgrades. I'm
looking to follow that pattern for Heat. Basically, it is the classic
problem where we want to migrate from writing to a column in one table
to having that column live in a different table. Looking at the nova
code, the version for a given versioned object is a constant in the given
object/.py file. To properly support rolling upgrades,
where we have older and newer heat-engine processes running
simultaneously (thus avoiding downtime), we have to write to both the
old column and the new column. Once all processes have been upgraded,
we can upgrade again to only write to the new location (but still be able
to read from the old location, of course). Following the existing
pattern, this means the operator has to upgrade 
twice (it may be possible to increment VERSION in 
only once, however, the first time).

The drawback of the above is it means cutting two releases (since two
different .py files). However, I wanted to check if anyone has gone
with a different approach so only one release is required. One way to
do that would be by specifying a version (or some other flag) in
heat.conf. Then, only one .py release would be
required -- the logic of whether to write to both the old and new
location (the intermediate step) versus just the new location (the
final step) would be in .py, dictated by the config
value. The advantage to this approach is now there is only one .py
file released, though the operator would still have to make a config
change and restart heat processes a second time to move from the
intermediate step to the final step.
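The config-driven approach can be sketched in a few lines; below, plain
dicts stand in for the old and new tables, and the flag name is an
assumption rather than anything from Heat's actual code. The intermediate
step (dual_write=True) writes to both locations so engines still running
the old code see the data; the final step writes only to the new one,
while reads always prefer the new location and fall back to the old.

```python
# Two dicts stand in for the old column's table and the new table.
old_table = {}
new_table = {}

def save(obj_id, value, dual_write=True):
    """dual_write=True models the intermediate step (old engines still
    running); dual_write=False models the final step, after the config
    flag is flipped and all heat processes are restarted."""
    new_table[obj_id] = value
    if dual_write:
        old_table[obj_id] = value

def load(obj_id):
    # Prefer the new location, but fall back to rows written by engines
    # still running the old code.
    if obj_id in new_table:
        return new_table[obj_id]
    return old_table.get(obj_id)
```

The operator still performs two restarts (once to deploy the new code,
once to flip the flag), but only one code release is needed.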

Thanks,
--Crag

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][MySQL][DLM][Oslo][DB][Trove][Galera][operators] Multi-master writes look OK, OCF RA and more things

2016-05-17 Thread Bogdan Dobrelya
On 04/22/2016 05:42 PM, Bogdan Dobrelya wrote:
> [crossposting to openstack-operat...@lists.openstack.org]
> 
> Hello.
> I wrote this paper [0] to demonstrate an approach for how we can leverage
> the Jepsen framework in a QA/CI/CD pipeline for OpenStack projects like
> Oslo (DB) or Trove, the Tooz DLM, and perhaps any integration projects
> which rely on distributed systems. Although all tests are yet to be
> finished, the results are quite visible, so I'd better share early for
> review, discussion and comments.
> 
> I have similar tests done for the RabbitMQ OCF RA clusterers as well,
> although I have yet to write a report.
> 
> P.S. I'm sorry for the many tags I placed in the topic header; should I
> have used just "all" :) ? Have a nice weekend and take care!
> 
> [0] https://goo.gl/VHyIIE
> 

[ cross posting to operators ]

An update.
I added Appendix B, where I made a few more tests, dancing mostly around
that funny topic [0] full of interesting nuances; there I tried to cover
some generic patterns OpenStack uses for transactions constructed by
SQLAlchemy's ORM (I hope so).

Those test cases cover A5A read skew, the SERIALIZABLE / RR / RC
transaction isolation levels, lock modes for SELECT, and the
wsrep_sync_wait Galera setting. I also reworked the conclusions and
recommendations sections based on the new test results.

For now, I've finished all the items I had on my TODO list for that paper.
If anyone would like to do more test runs, re-use the given approach for
the clusterlabs upstream OCF RA, or re-check with other configuration
tunings (mostly wsrep-related things, perhaps), you're welcome! I'm open
to questions, if any.

Also note that, with all the submitted fixes for those multiple
testing-discovered bugs, cluster recovery after network partitions has
been working almost seamlessly for me :-) For those who are interested,
the full list of related bugs is easy to locate in this backport's commit
message [2].

The link is the same [1].

[0] https://goo.gl/YWEc5A
[1] https://goo.gl/VHyIIE
[2] https://review.openstack.org/#/c/315989/

-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Newton midycle planning

2016-05-17 Thread Morgan Fainberg
On Tue, May 10, 2016 at 4:26 PM, Morgan Fainberg 
wrote:

> On Wed, Apr 13, 2016 at 7:07 PM, Morgan Fainberg <
> morgan.fainb...@gmail.com> wrote:
>
>> It is that time again, the time to plan the Keystone midcycle! Looking at
>> the schedule [1] for Newton, the weeks that make the most sense look to be
>> (not in preferential order):
>>
>> R-14 June 27-01
>> R-12 July 11-15
>> R-11 July 18-22
>>
>> As usual this will be a 3 day event (probably Wed, Thurs, Fri), and based
>> on previous attendance we can expect ~30 people to attend. Based upon all
>> the information (other midcycles, other events, the US July4th holiday), I
>> am thinking that week R-12 (the week of the newton-2 milestone) would be
>> the best offering. Weeks before or after these three tend to push too close
>> to the summit or too far into the development cycle.
>>
>> I am trying to arrange for a venue in the Bay Area (most likely will be
>> South Bay, such as Mountain View, Sunnyvale, Palo Alto, San Jose) since we
>> have done east coast and central over the last few midcycles.
>>
>> Please let me know your thoughts / preferences. In summary:
>>
>> * Venue will be Bay Area (more info to come soon)
>>
>> * Options of weeks (in general subjective order of preference): R-12,
>> R-11, R-14
>>
>> Cheers,
>> --Morgan
>>
>> [1] http://releases.openstack.org/newton/schedule.html
>>
>
> We have an update for the midcycle planning!
>
> First of all, I want to thank Cisco for hosting us for this midcycle. The
> Dates will be R-11[1], Wed-Friday: July 20-22 (expect to be around for a
> full day on the 20th and at least 1/2 day on the 22nd). The address will be
> 170 W Tasman Dr, San Jose, CA 95134 . The exact building and room # will be
> determined soon. Expect a place (wiki, google form, etc) to be posted this
> week so we can collect real numbers of those who will be joining us.
>
> Thanks for being patient with the planning. We should have ~35 spots for
> this midcycle.
>
> Cheers,
> --Morgan
>
>
RSVP Form for the Keystone Newton MidCycle:
http://goo.gl/forms/NfFMpJe6MSCXSNhr2

Please RSVP Early, maximum attendance is 35.

We look forward to seeing you there!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Aodh upgrades - Request backport exception for stable/liberty

2016-05-17 Thread Pradeep Kilambi
On Tue, May 17, 2016 at 4:59 AM, Steven Hardy  wrote:
> Hi Pradeep,
>
> Firstly, as discussed on IRC, I echo all of bnemec's concerns, this is not
> well aligned with our stable branch policy[1], or the stable-maint
> direction towards "critical bugfixes only"[2], so if possible I'd rather we
> figured out a general way to solve this problem that doesn't involve
> invasive/risky feature backports.
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2016-March/090855.html
> [2] http://lists.openstack.org/pipermail/openstack-dev/2016-May/094440.html
>
> On Mon, May 16, 2016 at 03:33:29PM -0400, James Slagle wrote:
>> On Mon, May 16, 2016 at 10:34 AM, Pradeep Kilambi  wrote:
>> > Hi Everyone:
>> >
>> > I wanted to start a discussion around considering backporting Aodh to
>> > stable/liberty for upgrades. We have been discussing quite a bit on whats
>> > the best way for our users to upgrade ceilometer alarms to Aodh when moving
>> > from liberty to mitaka. A quick refresh on what changed, In Mitaka,
>> > ceilometer alarms were replaced by Aodh. So only way to get alarms
>> > functionality is use aodh. Now when the user kicks off upgrades from 
>> > liberty
>> > to Mitaka, we want to make sure alarms continue to function as expected
>> > during the process which could take multiple days. To accomplish this I
>> > propose the following approach:
>> >
>> > * Backport Aodh functionality to stable/liberty. Note, Aodh functionality 
>> > is
>> > backwards compatible, so with Aodh running, ceilometer api and client will
>> > redirect requests to Aodh api. So this should not impact existing users who
>> > are using ceilometer api or client.
>> >
>> > * As part of Aodh deployed via heat stack update, ceilometer alarms 
>> > services
>> > will be replaced by openstack-aodh-*. This will be done by the puppet apply
>> > as part of stack convergence phase.
>> >
>> > * Add checks in the Mitaka pre upgrade steps when overcloud install kicks
>> > off to check and warn the user to update to liberty + aodh to ensure aodh 
>> > is
>> > running. This will ensure heat stack update is run and, if alarming is 
>> > used,
>> > Aodh is running as expected.
>> >
>> > The upgrade scenarios between various releases would work as follows:
>> >
>> > Liberty -> Mitaka
>> >
>> > * Upgrade starts with ceilometer alarms running
>> > * A pre-flight check will kick in to make sure Liberty is upgraded to
>> > liberty + aodh with stack update
>> > * Run heat stack update to upgrade to aodh
>> > * Now ceilometer alarms should be removed and Aodh should be running
>> > * Proceed with mitaka upgrade
>> > * End result, Aodh continue to run as expected
>> >
>> > Liberty + aodh -> Mitaka:
>> >
>> > * Upgrade starts with Aodh running
>> > * A pre-flight check will kick in to make sure Liberty is upgraded to Aodh
>> > with stack update
>> > * Confirming Aodh is indeed running, proceed with Mitaka upgrade with Aodh
>> > running
>> > * End result, Aodh continue to be run as expected
>> >
>> >
>> > This seems to be a good way to get the upgrades working for aodh. Other 
>> > less
>> > effective options I can think of are:
>> >
>> > 1. Let the Mitaka upgrade kick off and do "yum update" which replace aodh
>> > during migration, alarm functionality will be down until puppet converge
>> > runs and configures Aodh. This means alarms will be down during upgrade
>> > which is not ideal.
>> >
>> > 2. During Mitaka upgrades, replace with Aodh and add a bash script that
>> > fully configures Aodh and ensures aodh is functioning. This will involve
>> > significant work and results in duplicating everything puppet does today.
>>
>> How much duplication would this really be? Why would it have to be in bash?
>>
>> Could it be:
>>
>> Liberty -> Mitaka
>>
>> * Upgrade starts with ceilometer alarms running
>> * Add a new hook for the first step of Mitaka upgrade that does:
>>   ** sets up mitaka repos
>>   ** migrates from ceilometer alarms to aodh, can use puppet
>>   ** ensures aodh is running
>> * Proceed with rest of mitaka upgrade
>
> +1, I was thinking the same thing - I also don't get why it has to be bash,
> surely we could have a script that can apply a puppet manifest that uses our
> existing puppet profile/module implementation to bring up aodh?
>
> If we can figure out a clean way to do that, we could add a pre-upgrade
> interface to composable services that allows the same thing, e.g implement
> a supportable upgrade workflow that can be reused vs a one-off workaround.
>

Thanks Steve. I was under the impression we cannot run puppet at this
stage, hence my suggestion to run bash or some script here. But if we
can find a way to easily wire the existing aodh puppet manifests into
the upgrade process and get aodh up and running, then even better: we
don't have to duplicate what puppet gives us already and can reuse that.


>> At most, it seems we'd have to surround the puppet apply with some
>> pacemaker commands to possibly 

[openstack-dev] [glance] Cross project and all other liaisons

2016-05-17 Thread Nikhil Komawar
Team,


Please make sure the Cross project liaisons page [1] is up to date with
your Newton commitments. 


* If you are planning to stick around with certain duties and your name
is already on the wiki page [1], please make sure the contact
information is up to date. Also, please note any changes that may have
happened to the responsibilities mentioned in the wiki.


* If you no longer wish to contribute to a specific duty that you signed
up for earlier, or if you wish to change your duties (for example, if you
wish to be a QA liaison and you have been a release liaison until now),
please let me know first and update the page [1] after the ack -- this
can be racy (two or more people interested in the same role), so I'm
trying to avoid conflicts of interest.


* If you are interested in contributing to any of the roles, or in
signing up for more than one role, you are welcome to do so. You do not
need to be a core on the project for this sign-up, but cores are
encouraged to sign up. If you are looking to make your way into the core
group, this could be a good opportunity as well. The responsibilities for
each role are described in the wiki [1], but the first and foremost
responsibility is being Glance's primary point of contact for that
cross-project group and keeping a regular sync with the PTL about
updates. You are strongly encouraged to mention updates at our weekly
meetings (except in some cases, like security liaisons, where you will
send updates privately via email to the glance-core-sec group or
sometimes to the VMT team).


All the names against these roles will be considered final tomorrow
(Wednesday May 18th 23:59 UTC).


Let me know if you have concerns.


[1] https://wiki.openstack.org/wiki/CrossProjectLiaisons




-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-17 Thread Hong Hui Xiao
Hi,

I created this patch [1] to allow multi-segmented routed provider
networks to grow and shrink over time; reviews are welcome. I found these
points while working on the patch, and I think it is good to bring them
out for discussion.

a) Deleting a network's last segment will be prevented. Every network
should have at least one segment to let ports bind.

b) Deleting the segment that has been associated with subnet will be 
prevented.

c) Deleting the segment that has been bound to port will be prevented. 

d) Based on c), we need to query ml2_port_binding_levels, I think 
neutron.plugins.ml2.models.PortBindingLevel should be moved out of ml2. 
This is also because port and segment are both neutron server resources, 
no need to keep PortBindingLevel at ml2.

e) Is it possible to update a segment (physical_network, segmentation_id,
or even network_type) when the segment is being used?

[1] https://review.openstack.org/#/c/317358

HongHui Xiao(肖宏辉)



From:   Carl Baldwin 
To: OpenStack Development Mailing List 

Date:   05/12/2016 23:36
Subject:[openstack-dev] [Neutron][ML2][Routed Networks]



Hi,

Segments are now a first class thing in Neutron with the merge of this
patch [1].  It exposes API for segments directly.  With ML2, it is
currently only possible to view segments that have been created
through the provider net or multi-provider net extensions.  This can
only be done at network creation time.

In order to allow multi-segmented routed provider networks to grow and
shrink over time, it is necessary to allow creation and deletion of
segments through the new segment endpoint.  Hong Hui Xiao has offered
to help with this.

We need to provide the integration between the service plugin that
provides the segments endpoint with ML2 to allow the creates and
deletes to work properly.  We'd like to hear from ML2 experts out
there on how this integration can proceed.  Is there any caution that
we need to take?  What are the non-obvious aspects of this that we're
not thinking about?

Carl Baldwin

[1] https://review.openstack.org/#/c/296603/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Tasks ordering between plugins

2016-05-17 Thread Matthew Mosesohn
Hi Simon,

For 8.0 and earlier, I would deploy Elasticsearch before deploy_end
and the LMA collector after post_deploy_start.

For the Mitaka and Newton releases, the task graph now skips dependencies
that are not found for the role being processed, so this "requires"
dependency, which previously errored, will now work.
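The graph behavior described here can be sketched as follows. This is an
illustrative stand-in, not Fuel's actual code: "requires" entries whose
target task is absent from the deployment (for example, a plugin that
isn't installed) are dropped instead of raising the "don't exist in the
graph" error.

```python
def build_graph(tasks):
    """tasks maps a task name to the list of task names it requires.
    Requirements referencing tasks absent from this deployment are
    silently skipped rather than treated as an error."""
    return {name: [r for r in requires if r in tasks]
            for name, requires in tasks.items()}

def execution_order(graph):
    """Return a task order that satisfies the pruned dependencies."""
    order, seen = [], set()

    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in graph[node]:
            visit(dep)
        order.append(node)

    for node in graph:
        visit(node)
    return order
```

With this behavior, a task like 'lma-backends' can keep a requirement on
'elasticsearch-kibana-configuration' even in environments where that
plugin is not deployed.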

Best Regards,
Matthew Mosesohn

On Tue, May 17, 2016 at 6:27 PM, Simon Pasquier  wrote:
> I'm resurrecting this thread because I didn't manage to find a satisfying
> solution to deal with this issue.
>
> First let me provide more context on the use case. The Elasticsearch/Kibana
> and LMA collector plugins need to synchronize their deployment. Without too
> many details, here is the workflow when both plugins are deployed:
> 1. [Deployment] Install the Elasticsearch/Kibana primary node.
> 2. [Deployment] Install the other Elasticsearch/Kibana nodes.
> 3. [Post-Deployment] Configure the Elasticsearch cluster.
> 4. [Post-Deployment] Install and configure the LMA collector.
>
> Task #4 should happen after #3 so we've specified the dependency in
> deployment_tasks.yaml [0], but when the Elasticsearch/Kibana plugin isn't
> deployed in the same environment (which is a valid case), it fails [1] with:
>
> Tasks 'elasticsearch-kibana-configuration, influxdb-configuration' can't be
> in requires|required_for|groups|tasks for [lma-backends] because they don't
> exist in the graph
>
> To work around this restriction, we're using 'upload_nodes_info' as an anchor
> task [2][3] since it is always present in the graph, but this isn't really
> elegant. Any suggestions to improve this?
>
> BR,
> Simon
>
> [0]
> https://github.com/openstack/fuel-plugin-lma-collector/blob/fd9337b43b6bdae6012f421e22847a1b0307ead0/deployment_tasks.yaml#L123-L139
> [1] https://bugs.launchpad.net/lma-toolchain/+bug/1573087
> [2]
> https://github.com/openstack/fuel-plugin-lma-collector/blob/56ef5c42f4cd719958c4c2ac3fded1b08fe2b90f/deployment_tasks.yaml#L25-L37
> [3]
> https://github.com/openstack/fuel-plugin-elasticsearch-kibana/blob/4c5736dadf457b693c30e20d1a2679165ae1155a/deployment_tasks.yaml#L156-L173
>
> On Fri, Jan 29, 2016 at 4:27 PM, Igor Kalnitsky 
> wrote:
>>
>> Hey folks,
>>
>> Simon P. wrote:
>> > 1. Run task X for plugin A (if installed).
>> > 2. Run task Y for plugin B (if installed).
>> > 3. Run task Z for plugin A (if installed).
>>
>> Simon, could you please explain why you need this in the first place? I
>> can imagine this case only if your two plugins are somewhat dependent on
>> each other. In this case, it's better to do what Andrew W. said:
>> set 'Task Y' to require 'Task X', and that requirement will be
>> satisfied anyway (even if Task X doesn't exist in the graph).
>>
>>
>> Alex S. wrote:
>> > Before we get rid of tasks.yaml, can we provide a mechanism that plugin
>> > devs could leverage to have tasks execute at specific points in the
>> > deploy process.
>>
>> Yeah, I think that may be useful sometime. However, I'd prefer to
>> avoid anchor usage as much as possible. There's no guarantee that
>> another plugin didn't take destructive actions earlier that break
>> you later. Anchors are a good way to resolve possible conflicts, but
>> they aren't bulletproof.
>>
>> - igor
>>
>> On Thu, Jan 28, 2016 at 1:31 PM, Bogdan Dobrelya 
>> wrote:
>> > On 27.01.2016 14:44, Simon Pasquier wrote:
>> >> Hi,
>> >>
>> >> I see that tasks.yaml is going to be deprecated in the future MOS
>> >> versions [1]. I've got one question regarding the ordering of tasks
>> >> between different plugins.
>> >> With tasks.yaml, it was possible to coordinate the execution of tasks
>> >> between plugins without prior knowledge of which plugins were installed
>> >> [2].
>> >> For example, lets say we have 2 plugins: A and B. The plugins may or
>> >> may
>> >> not be installed in the same environment and the tasks execution should
>> >> be:
>> >> 1. Run task X for plugin A (if installed).
>> >> 2. Run task Y for plugin B (if installed).
>> >> 3. Run task Z for plugin A (if installed).
>> >>
>> >> Right now, we can set task priorities like:
>> >>
>> >> # tasks.yaml for plugin A
>> >> - role: ['*']
>> >>   stage: post_deployment/1000
>> >>   type: puppet
>> >>   parameters:
>> >> puppet_manifest: puppet/manifests/task_X.pp
>> >> puppet_modules: puppet/modules
>> >>
>> >> - role: ['*']
>> >>   stage: post_deployment/3000
>> >>   type: puppet
>> >>   parameters:
>> >> puppet_manifest: puppet/manifests/task_Z.pp
>> >> puppet_modules: puppet/modules
>> >>
>> >> # tasks.yaml for plugin B
>> >> - role: ['*']
>> >>   stage: post_deployment/2000
>> >>   type: puppet
>> >>   parameters:
>> >> puppet_manifest: puppet/manifests/task_Y.pp
>> >> puppet_modules: puppet/modules
>> >>
>> >> How would it be handled without tasks.yaml?
>> >
>> > I created a kinda related bug [0] and submitted a patch [1] to MOS docs
>> > [2] to kill some entropy on the topic of tasks schema 

Re: [openstack-dev] [tripleo] tripleo-common docs

2016-05-17 Thread Ryan Brady
On Tue, May 17, 2016 at 11:01 AM, Ben Nemec  wrote:

> On 05/17/2016 08:16 AM, Marius Cornea wrote:
> > Hi everyone,
> >
> > The tripleo-common readme[1] points to a broken documentation link[2].
> > What is the correct link for the docs?
>
> At the moment it should probably point at
> http://docs.openstack.org/developer/tripleo-docs/
>
> Although it would be nice to have some real developer docs in
> tripleo-common at some point too.  We're adding a bunch of stuff to that
repo, and it's going to get unwieldy if there are no API docs for people to
> look at.
>


There's a start at https://review.openstack.org/#/c/313109/.



>
> >
> > Thanks
> >
> > [1] https://github.com/openstack/tripleo-common/blob/master/README.rst
> > [2] http://docs.openstack.org/developer/tripleo-common
> >
> >
>
>
>



-- 
Ryan Brady
Cloud Engineering
rbr...@redhat.com
919.890.8925
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Tasks ordering between plugins

2016-05-17 Thread Simon Pasquier
I'm resurrecting this thread because I didn't manage to find a satisfying
solution to deal with this issue.

First let me provide more context on the use case. The Elasticsearch/Kibana
and LMA collector plugins need to synchronize their deployment. Without too
many details, here is the workflow when both plugins are deployed:
1. [Deployment] Install the Elasticsearch/Kibana primary node.
2. [Deployment] Install the other Elasticsearch/Kibana nodes.
3. [Post-Deployment] Configure the Elasticsearch cluster.
4. [Post-Deployment] Install and configure the LMA collector.

Task #4 should happen after #3 so we've specified the dependency in
deployment_tasks.yaml [0], but when the Elasticsearch/Kibana plugin isn't
deployed in the same environment (which is a valid case), it fails [1] with:

Tasks 'elasticsearch-kibana-configuration, influxdb-configuration' can't be
in requires|required_for|groups|tasks for [lma-backends] because they don't
exist in the graph

To workaround this restriction, we're using 'upload_nodes_info' as an
anchor task [2][3] since it is always present in the graph but this isn't
really elegant. Any suggestion to improve this?
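
For readers hitting the same problem, a minimal sketch of the anchor pattern (task ids, roles, and manifest paths below are illustrative, not the plugins' actual definitions):

```yaml
# Elasticsearch/Kibana plugin: hook cluster configuration *before* the anchor
- id: elasticsearch-kibana-configuration
  type: puppet
  role: [elasticsearch_kibana]
  required_for: [upload_nodes_info]
  parameters:
    puppet_manifest: puppet/manifests/cluster_configuration.pp
    puppet_modules: puppet/modules

# LMA collector plugin: hook *after* the anchor; upload_nodes_info is always
# present in the graph, so this validates whether or not the other plugin
# is deployed in the environment
- id: lma-backends
  type: puppet
  role: ['*']
  requires: [upload_nodes_info]
  parameters:
    puppet_manifest: puppet/manifests/backends.pp
    puppet_modules: puppet/modules
```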

BR,
Simon

[0]
https://github.com/openstack/fuel-plugin-lma-collector/blob/fd9337b43b6bdae6012f421e22847a1b0307ead0/deployment_tasks.yaml#L123-L139
[1] https://bugs.launchpad.net/lma-toolchain/+bug/1573087
[2]
https://github.com/openstack/fuel-plugin-lma-collector/blob/56ef5c42f4cd719958c4c2ac3fded1b08fe2b90f/deployment_tasks.yaml#L25-L37
[3]
https://github.com/openstack/fuel-plugin-elasticsearch-kibana/blob/4c5736dadf457b693c30e20d1a2679165ae1155a/deployment_tasks.yaml#L156-L173

On Fri, Jan 29, 2016 at 4:27 PM, Igor Kalnitsky 
wrote:

> Hey folks,
>
> Simon P. wrote:
> > 1. Run task X for plugin A (if installed).
> > 2. Run task Y for plugin B (if installed).
> > 3. Run task Z for plugin A (if installed).
>
> Simon, could you please explain why you need this in the first place? I
> can imagine this case only if your two plugins are kinda dependent on
> each other. In this case, it's better to do what was said by Andrew W.
> - set 'Task Y' to require 'Task X' and that requirement will be
> satisfied anyway (even if Task X doesn't exist in the graph).
>
>
> Alex S. wrote:
> > Before we get rid of tasks.yaml, can we provide a mechanism that plugin
> > devs could leverage to have tasks execute at specific points in the
> > deploy process.
>
> Yeah, I think that may be useful sometime. However, I'd prefer to
> avoid anchor usage as much as possible. There's no guarantees that
> other plugin didn't make any destructive actions early, that breaks
> you later. Anchors is good way to resolve possible conflicts, but they
> aren't bulletproof.
>
> - igor
>
> On Thu, Jan 28, 2016 at 1:31 PM, Bogdan Dobrelya 
> wrote:
> > On 27.01.2016 14:44, Simon Pasquier wrote:
> >> Hi,
> >>
> >> I see that tasks.yaml is going to be deprecated in the future MOS
> >> versions [1]. I've got one question regarding the ordering of tasks
> >> between different plugins.
> >> With tasks.yaml, it was possible to coordinate the execution of tasks
> >> between plugins without prior knowledge of which plugins were installed
> [2].
> >> For example, let's say we have 2 plugins: A and B. The plugins may or may
> >> not be installed in the same environment and the tasks execution should
> be:
> >> 1. Run task X for plugin A (if installed).
> >> 2. Run task Y for plugin B (if installed).
> >> 3. Run task Z for plugin A (if installed).
> >>
> >> Right now, we can set task priorities like:
> >>
> >> # tasks.yaml for plugin A
> >> - role: ['*']
> >>   stage: post_deployment/1000
> >>   type: puppet
> >>   parameters:
> >> puppet_manifest: puppet/manifests/task_X.pp
> >> puppet_modules: puppet/modules
> >>
> >> - role: ['*']
> >>   stage: post_deployment/3000
> >>   type: puppet
> >>   parameters:
> >> puppet_manifest: puppet/manifests/task_Z.pp
> >> puppet_modules: puppet/modules
> >>
> >> # tasks.yaml for plugin B
> >> - role: ['*']
> >>   stage: post_deployment/2000
> >>   type: puppet
> >>   parameters:
> >> puppet_manifest: puppet/manifests/task_Y.pp
> >> puppet_modules: puppet/modules
> >>
> >> How would it be handled without tasks.yaml?
> >
> > I created a kinda related bug [0] and submitted a patch [1] to MOS docs
> > [2] to kill some entropy on the topic of tasks schema roles versus
> > groups and using wildcards for basic and custom roles from plugins as
> > well. There is also a fancy picture to clarify things a bit. Would be
> > nice to put more details there about custom stages as well!
> >
> > If plugins are not aware of each other, they cannot be strictly ordered
> > with a constraint like "run the very last in the deployment", since only
> > one task can be last. That is why "coordinating the execution of tasks
> > between plugins without prior knowledge of which plugins were installed"
> > looks very confusing for me. Though, maybe wildcards with 

Re: [openstack-dev] [puppet] weekly meeting #81

2016-05-17 Thread Emilien Macchi
On Mon, May 16, 2016 at 9:43 AM, Emilien Macchi <emil...@redhat.com> wrote:
> Hi Puppeteers!
>
> We'll have our weekly meeting tomorrow at 3pm UTC on
> #openstack-meeting4.

Sorry for the typo, it was #openstack-meeting-4.

> Here's a first agenda:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160517
>
> Feel free to add more topics, and any outstanding bug and patch.
>
> See you tomorrow!

Here are the notes:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-05-17-15.00.html

Thanks,
-- 
Emilien Macchi



[openstack-dev] [glance] Mid-cycle: Response required ASAP

2016-05-17 Thread Nikhil Komawar
Hello,


If you have booked travel for Glance midcycle or are looking to book the
same in the near future, please respond back to me ASAP. I am going to
take a final decision on whether we need to conduct the physically
co-located Glance midcycle or not and if we have a quorum (5 people) for
the same. I am not considering the etherpad as a source of truth for
someone who has done the booking but forgotten to RSVP [1].


Let me know _today_ (in next 24 hours).


[1] https://etherpad.openstack.org/p/newton-glance-midcycle-meetup


-- 

Thanks,
Nikhil




Re: [openstack-dev] [docs] [api] Swagger limitations?

2016-05-17 Thread Chris Dent

On Tue, 17 May 2016, Jamie Hannaford wrote:


Completely agree. The good news is that the API-WG have a spec in
review (https://review.openstack.org/#/c/234994/) which might solve
these problems. They want to make actions first-class resources in the
API. From my initial understanding, this could be represented in
Swagger without the need for vendor extensions.


For clarity, you might note a few things about that review:

* Last real work on it was back in January
* There are 4 -1 votes and only one +1 (which is from October)

Suffice it to say that that proposal (and the entire concept of how
to represent actions in a RESTful API) was contentious enough that we
stalled out. The main people who were in favor of trying to get some
kind of pro-actions guideline in place have had to step back from
participation in the API-WG [1]. Until we are able to have a new
conversation about the issues, with a diverse set of participants,
it's not likely to see much progress.

[1] This is an all too common happening in cross-project work.

--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [docs] [api] Swagger limitations?

2016-05-17 Thread Anne Gentle
On Tue, May 17, 2016 at 9:51 AM, Jamie Hannaford <
jamie.hannaf...@rackspace.com> wrote:

> Hi Anne,
>
>
> So it seems that the only remaining issues are:
>
>
> - using $ref, which is not supported by some upstream tools
>
> - embedding Heat templates in a Swagger doc (not applicable to most
> OpenStack services)
>
>
> I responded to the other two in my e-mail to Sean. For the $ref problem,
> what is the problem with using NPM packages like swagger-tools or
> swagger-parser [1][2]? They can dereference Swagger files into one unified
> file, which the build tool can then use for HTML rendering. The alternative
> is to let each project choose whether to use $ref or not. If they do want
> to spread Swagger docs out into separate documents and reference them, they
> will need to use a tool in whatever language that works.
>
>
>
It's not that these mechanisms are not supported. It's that we don't have
assigned development resources to work on Swagger/OpenAPI solutions.

In my first reply I sent you a link to an example using OpenStack
infrastructure already in place to use npm tooling to build. Please feel
free to test with that example: https://review.openstack.org/#/c/286659/

Anne


> I agree that Swagger is a new tool in a new ecosystem, but it definitely
> solves a lot of our use cases. I think it'd be a lot stronger if we adopted
> it - at least partially - and contributed our ideas for improvement back
> upstream.
>
>
> Were there any cons/disadvantages that I missed which would prevent us
> incorporating it?
>
>
> Jamie
>
>
> [1] https://www.npmjs.com/package/swagger-parser
>
> [2] https://github.com/jamiehannaford/swagger-magnum/blob/master/deref.js
>
>
> --
> *From:* Anne Gentle 
> *Sent:* 17 May 2016 15:35
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [docs] [api] Swagger limitations?
>
>
>
> On Tue, May 17, 2016 at 7:57 AM, Jamie Hannaford <
> jamie.hannaf...@rackspace.com> wrote:
>
>> Hi all,
>>
>>
>> Back in March Anne wrote a great summary on the current state of Swagger
>> for OpenStack docs:
>> http://lists.openstack.org/pipermail/openstack-dev/2016-March/090659.html.
>> 
>>  Is this still the current state of things? Magnum is looking to
>> document its API soon, so I want to look at the feasibility of doing it in
>> Swagger. If it's not possible, RST should suffice.
>>
>>
>>
> Hi Jamie, thanks for asking. There are a few more limitations/concerns
> that I'll outline below.
>
>
>> In terms of the specific limitations that were listed:
>>
>>
>> - the /actions endpoint. Would it be possible to use vendor extensions to
>> solve this problem? For example, imagine that you want to document reboot
>> and rebuild operations. You could create custom verb properties for these
>> operations (`x-post-reboot`, `x-post-rebuild`) that would allow
>> differentiation of the JSON payload under the same URL category. The HTML
>> rendering would need to be adapted to take this into consideration. [1]
>>
>>
>> Yes, our first goal is to migrate off of WADL, but that won't prevent use
> of Swagger, it just means the first priority is to get all projects off of
> WADL.
>
> I think the details and nuances here were fairly well communicated in the
> rest of the thread [1] and in the etherpads, but of course that requires a
> LOT of reading. :)
>
> The main points I'd also like to ensure you know we discussed are [2]:
> 
> The current Plan of Record is to move to swagger. However this doesn't
> address a number of key issues:
> - swagger is young, most community tools around swagger are 50% - 80%
> solutions
> - swagger provides no facility for nova's 'action' interfaces or
> microversions
>
> NOTE: actions are used by nova, cinder, manila, trove, neutron (in
> extensions) - so 40% of the 12 APIs documented on
> http://developer.openstack.org/ but only 5 of 19 (26%) REST APIs in
> OpenStack's governance
>
> NOTE: microversions are used by nova, manila, ironic, and cinder (and
> neutron probably in the future unless it proves awful to document and use)
>
> NOTE: while Heat uses neither actions nor microversions, its content
> templates as inline post requests make the swagger definition potentially
> really tough. Not that there shouldn't be full docs for those formats, but
> they just don't do that today. This same issue will hit other APIs that
> support post requests of 3rd party content format. Tacker being a good
> instance.
>
> - the swagger-tools npm package supports $ref, which will clearly be
> required for maintaining our APIs (autoconverted Nova API is 21KLOC, and
> probably would be triple that once missing parts are filled in).
>
> NOTE: this is a brand new toolchain that is yet another speed bump in
> getting people to contribute and maintain, though it is maintained outside
> of OpenStack
> ---
>
> Please do 

Re: [openstack-dev] [tripleo] tripleo-common docs

2016-05-17 Thread Ben Nemec
On 05/17/2016 08:16 AM, Marius Cornea wrote:
> Hi everyone,
> 
> The tripleo-common readme[1] points to a broken documentation link[2].
> What is the correct link for the docs?

At the moment it should probably point at
http://docs.openstack.org/developer/tripleo-docs/

Although it would be nice to have some real developer docs in
tripleo-common at some point too.  We're adding a bunch of stuff to that
repo and it's going to get unwieldy if there's no API docs for people to
look at.

> 
> Thanks
> 
> [1] https://github.com/openstack/tripleo-common/blob/master/README.rst
> [2] http://docs.openstack.org/developer/tripleo-common
> 
> 




Re: [openstack-dev] [release] Re: [Neutron][L2GW] Mitaka release of L2 Gateway now available

2016-05-17 Thread Ihar Hrachyshka

> On 17 May 2016, at 16:46, Doug Hellmann  wrote:
> 
> Excerpts from Ihar Hrachyshka's message of 2016-05-17 14:39:40 +0200:
>> 
>>> On 17 May 2016, at 14:27, Doug Hellmann  wrote:
>>> 
>>> Excerpts from Ihar Hrachyshka's message of 2016-05-17 12:25:55 +0200:
 
> On 16 May 2016, at 21:16, Armando M.  wrote:
> 
> 
> 
> On 16 May 2016 at 05:15, Ihar Hrachyshka  wrote:
> 
>> On 11 May 2016, at 22:05, Sukhdev Kapur  wrote:
>> 
>> 
>> Folks,
>> 
>> I am happy to announce that Mitaka release for L2 Gateway is released 
>> and now available at https://pypi.python.org/pypi/networking-l2gw.
>> 
>> You can install it by using "pip install networking-l2gw"
>> 
>> This release has several enhancements and fixes for issues discovered in 
>> liberty release.
> 
> How do you release it? I think the way to release new deliverables as of 
> Newton dev cycle is thru openstack/releases repository, as documented in 
> https://review.openstack.org/#/c/308984/
> 
> Have you pushed git tag manually?
> 
> I can only see the stable branch, tags can only be pushed by the 
> neutron-release team.  
 
 2016.1.0 tag is in the repo, and is part of stable/mitaka branch.
 
 Git tag history suggests that Carl pushed it (manually I guess?) It seems 
 that we release some independent deliverables thru openstack/releases, and 
 some by manually pushing tags into repos.
 
 I would love if we can consolidate all our releases to use a single 
 automation mechanism (openstack/releases patches), independent of release 
 model. For that, I would like to hear from release folks whether we are 
 allowed to use openstack/releases repo for release:independent 
 deliverables that are part of an official project (neutron).
>>> 
>>> We're working on it. This cycle we've expanded coverage of the releases
>>> repo to all official projects using cycle-based models.
>>> 
 
 [Note that it would not mean we move the oversight burden for those 
 deliverables onto release team; neutron-release folks would still need to 
 approve them; it’s only about technicalities, not governance]
 
 The existence of the following git directory suggests that it’s supported:
 
 https://github.com/openstack/releases/tree/master/deliverables/_independent
 
 We already have some networking-* subprojects there, like 
 networking-bgpvpn or networking-odl. I would love to see all new releases 
 tracked there.
>>> 
>>> Any official projects following the independent model may use the
>>> openstack/releases repository to record their releases after they
>>> are tagged.
>> 
>> You mean we should follow the steps:
>> - first, push git tag manually;
>> - then, propose openstack/releases patch describing the new tag?
> 
> Yes.
> 
>> 
>> If that’s the case, it was not exactly followed for some late neutron 
>> stadium releases, like: https://review.openstack.org/#/c/308962/
>> 
>> Actually, that later one created the tag in the repo for us. So what’s the 
>> reason to take the first step manually?
> 
> Creating the tags is still a manual step, even when you submit the
> request and the release team does it for you. We've said we would help a
> few teams who asked, but most teams should tag themselves before
> recording the results in the releases repo. I expect this to change
> after newton, when the automation work is done.

Oh, that’s great to know. I hadn’t realized that it’s not automated yet, and 
that release folks expect tags to be created before openstack/releases patches 
are sent for release:independent. I assume it’s documented somewhere?..

Ihar


Re: [openstack-dev] [docs] [api] Swagger limitations?

2016-05-17 Thread Jamie Hannaford
Hi Anne,


So it seems that the only remaining issues are:


- using $ref, which is not supported by some upstream tools

- embedding Heat templates in a Swagger doc (not applicable to most OpenStack 
services)


I responded to the other two in my e-mail to Sean. For the $ref problem, what 
is the problem with using NPM packages like swagger-tools or swagger-parser 
[1][2]? They can dereference Swagger files into one unified file, which the build 
tool can then use for HTML rendering. The alternative is to let each project 
choose whether to use $ref or not. If they do want to spread Swagger docs out 
into separate documents and reference them, they will need to use a tool in 
whatever language that works.
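
The npm tools above do this for full Swagger documents; purely as an illustration of the underlying operation, a toy resolver for local "#/" pointers fits in a few lines of plain Python (a sketch only, not a replacement for swagger-parser: no remote refs, no circular-ref handling):

```python
import copy

def deref(node, root):
    """Recursively replace local '#/...' $ref pointers with their targets.

    Toy illustration only: handles local refs, no cycle protection.
    """
    if isinstance(node, dict):
        ref = node.get("$ref")
        if isinstance(ref, str) and ref.startswith("#/"):
            # Walk the path segments from the document root
            target = root
            for part in ref[2:].split("/"):
                target = target[part]
            return deref(copy.deepcopy(target), root)
        return {k: deref(v, root) for k, v in node.items()}
    if isinstance(node, list):
        return [deref(v, root) for v in node]
    return node

# A minimal Swagger-style document with one local reference
spec = {
    "definitions": {"Cluster": {"type": "object"}},
    "paths": {"/clusters": {"get": {"schema": {"$ref": "#/definitions/Cluster"}}}},
}
flat = deref(spec, spec)
print(flat["paths"]["/clusters"]["get"]["schema"])  # {'type': 'object'}
```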


I agree that Swagger is a new tool in a new ecosystem, but it definitely solves 
a lot of our use cases. I think it'd be a lot stronger if we adopted it - at 
least partially - and contributed our ideas for improvement back upstream.


Were there any cons/disadvantages that I missed which would prevent us
incorporating it?


Jamie


[1] https://www.npmjs.com/package/swagger-parser

[2] https://github.com/jamiehannaford/swagger-magnum/blob/master/deref.js



From: Anne Gentle 
Sent: 17 May 2016 15:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [docs] [api] Swagger limitations?



On Tue, May 17, 2016 at 7:57 AM, Jamie Hannaford 
> wrote:

Hi all,


Back in March Anne wrote a great summary on the current state of Swagger for 
OpenStack docs: 
http://lists.openstack.org/pipermail/openstack-dev/2016-March/090659.html.
  Is this still the current state of things? Magnum is looking to document its 
API soon, so I want to look at the feasibility of doing it in Swagger. If it's 
not possible, RST should suffice.


Hi Jamie, thanks for asking. There are a few more limitations/concerns that 
I'll outline below.


In terms of the specific limitations that were listed:


- the /actions endpoint. Would it be possible to use vendor extensions to solve 
this problem? For example, imagine that you want to document reboot and rebuild 
operations. You could create custom verb properties for these operations 
(`x-post-reboot`, `x-post-rebuild`) that would allow differentiation of the 
JSON payload under the same URL category. The HTML rendering would need to be 
adapted to take this into consideration. [1]
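
Concretely, that idea might look something like this in a Swagger file (hypothetical paths, extension names, and payload schemas; standard Swagger tooling would not recognize these custom x- verbs, which is the trade-off with stepping outside the spec):

```yaml
paths:
  /servers/{server_id}/action:
    # Hypothetical vendor extensions standing in for the single POST verb;
    # each x- key carries the schema of one action's JSON payload.
    x-post-reboot:
      summary: Reboot the server
      parameters:
        - name: reboot
          in: body
          schema:
            type: object
            properties:
              reboot:
                type: object
                properties:
                  type:
                    type: string
                    enum: [SOFT, HARD]
    x-post-rebuild:
      summary: Rebuild the server from an image
      parameters:
        - name: rebuild
          in: body
          schema:
            type: object
            properties:
              rebuild:
                type: object
                properties:
                  imageRef:
                    type: string
```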


Yes, our first goal is to migrate off of WADL, but that won't prevent use of 
Swagger, it just means the first priority is to get all projects off of WADL.

I think the details and nuances here were fairly well communicated in the rest 
of the thread [1] and in the etherpads, but of course that requires a LOT of 
reading. :)

The main points I'd also like to ensure you know we discussed are [2]:

The current Plan of Record is to move to swagger. However this doesn't address 
a number of key issues:
- swagger is young, most community tools around swagger are 50% - 80% solutions
- swagger provides no facility for nova's 'action' interfaces or microversions

NOTE: actions are used by nova, cinder, manila, trove, neutron (in extensions) 
- so 40% of the 12 APIs documented on http://developer.openstack.org/ but only 
5 of 19 (26%) REST APIs in OpenStack's governance

NOTE: microversions are used by nova, manila, ironic, and cinder (and neutron 
probably in the future unless it proves awful to document and use)

NOTE: while Heat uses neither actions nor microversions, its content templates 
as inline post requests make the swagger definition potentially really tough. 
Not that there shouldn't be full docs for those formats, but they just don't do 
that today. This same issue will hit other APIs that support post requests of 
3rd party content format. Tacker being a good instance.

- the swagger-tools npm package supports $ref, which will clearly be required 
for maintaining our APIs (autoconverted Nova API is 21KLOC, and probably would 
be triple that once missing parts are filled in).

NOTE: this is a brand new toolchain that is yet another speed bump in getting 
people to contribute and maintain, though it is maintained outside of OpenStack
---

Please do consider the additional concerns about $ref and tooling listed above.

In all the communication, I have not prevented the use of Swagger/OpenAPI but I 
want to make it clear that focused efforts are best for the earliest projects.

- microversions. The more I think about it, the more I realise that this is not 
really a Swagger limitation but more of a consideration that *any* 
documentation format needs to solve. It seems that per microversion release, a 
separate doc artefact is required which encapsulates the API at that point in 
time. This would be very easy to do in Swagger compared to RST (single file vs 

Re: [openstack-dev] [docs] [api] Swagger limitations?

2016-05-17 Thread Anne Gentle
On Tue, May 17, 2016 at 9:13 AM, Jamie Hannaford <
jamie.hannaf...@rackspace.com> wrote:

> All great points, thanks Anne and Sean. My responses are inline.
>
> 
> From: Sean Dague 
> Sent: 17 May 2016 15:30
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [docs] [api] Swagger limitations?
>
> On 05/17/2016 08:57 AM, Jamie Hannaford wrote:
> > Hi all,
> >
> >
> > Back in March Anne wrote a great summary on the current state of Swagger
> > for OpenStack
> > docs:
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/090659.html.
> > <
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/090659.html>
> >  Is this still the current state of things? Magnum is looking to
> > document its API soon, so I want to look at the feasibility of doing it
> > in Swagger. If it's not possible, RST should suffice.
> >
> >
> > In terms of the specific limitations that were listed:
> >
> >
> > - the /actions endpoint. Would it be possible to use vendor extensions
> > to solve this problem? For example, imagine that you want to document
> > reboot and rebuild operations. You could create custom verb properties
> > for these operations (`x-post-reboot`, `x-post-rebuild`) that would
> > allow differentiation of the JSON payload under the same URL category.
> > The HTML rendering would need to be adapted to take this into
> > consideration. [1]
>
> If you do this you're now outside the swagger spec, which means that all
> the toolchains in swagger that exist to:
>
> * build docs
> * build clients
> * build tests
>
> No longer apply to your API.
>
> So yes, you could do this, but in doing so you abandon most of the
> ecosystem that makes swagger interesting, and now have to maintain all
> your own dedicated tooling.
>
> There is no swagger HTML rendering project in OpenStack that can be
> simply modified for this. One of the advantages for swagger was that
> would be an upstream ecosystem tool to build things like the API site.
> You now give that up and take on a multi person year engineering effort
> to build a custom documentation site rendering engine (probably in
> javascript, as that's where most of the starting components are).
> Swagger is also pretty non trivial in the way references are resolved,
> so this may end up a *really* big engineering effort.
>
> I agree, but I don't think it's an unreasonable compromise to make
> because, at least in the context of OpenStack's ecosystem, we already
> provide much of that tooling to end-users. We have a plethora of clients
> already out there and a project underway to unify them into a single
> client. So I don't think we'd be putting users at a disadvantage by using
> our own vendor extensions, so long as they're well designed and well
> documented.
>
> The only inconvenience would be to adapt the build process to take custom
> verbs into consideration. In my opinion that's more of a short-term
> software challenge for contributors rather than an insurmountable one that
> affects end-users. Then again, I don't have much experience with the
> build/infra process, so please call me out if that's a complete
> mischaracterisation :)
>
> While I accept that it's definitely preferable to use upstream tools when
> they're useful, with such a complex beast like OpenStack, how long do you
> think we could carry on using vanilla upstream tools? At some point we
> would probably need to roll out our own modifications to satisfy our use
> cases. In terms of it requiring a multi-person engineering effort, isn't
> that already the remit of the current docs team?
>

In a word, No. We have not had dedicated development resources specifically
for API docs since 2013.

Do you have ideas for how to bring this type of development resource into
the OpenStack community? I've tried. I have worked on this for
fairy-slipper, with the API working group, and with the docs tools team,
and it leaves me with the sense that we must prioritize due to lack of
development resources.


> > - microversions. The more I think about it, the more I realise that this
> > is not really a Swagger limitation but more of a consideration that
> > *any* documentation format needs to solve. It seems that per
> > microversion release, a separate doc artefact is required which
> > encapsulates the API at that point in time. This would be very easy to
> > do in Swagger compared to RST (single file vs directory of RST files).
>
> You could go that route, and depending on how often you change your
> format, and how big your API is, it may or may not be cumbersome. Fixes
> to a base resource would require fixing it in N copies of the file.
>
> If my understanding is correct, this would not happen. Each microversion,
> when released, is locked and would be preserved in that doc artefact. If a
> new software release is needed to fix a bug, that change would be
> encapsulated in a microversion with its own doc file.
>
> It would mean 

Re: [openstack-dev] [oslo][all] oslo.log `verbose` and $your project

2016-05-17 Thread Alexis Lee
Dmitry Tantsur said on Tue, May 17, 2016 at 09:54:02AM +0200:
> >I'd like to retry 314573 in a few weeks, so cooperation and helping
> >getting any leftover cases of 'verbose' out of source trees would be
> >much appreciated.
> 
> Let me use this thread as a reminder: the goal of that deprecation
> was to make INFO the default log level. I've fixed it this time, but
> as it gets reverted, please make sure that the next time we don't
> end up with WARNING again.

Thanks Joshua, Dmitry.

I've put up https://review.openstack.org/#/c/317547 (with a -2 on it) so
hopefully we remember to merge that fixed patch instead of re-proposing
the broken one.


Alexis (lxsli)
-- 
Nova developer, Hewlett-Packard Limited.
Registered Office: Cain Road, Bracknell, Berkshire RG12 1HN.
Registered Number: 00690597 England
VAT number: GB 314 1496 79



Re: [openstack-dev] [mistral][oslo.messaging] stable/mitaka is broken by oslo.messaging 5.0.0

2016-05-17 Thread Doug Hellmann
Excerpts from Renat Akhmerov's message of 2016-05-17 19:10:55 +0700:
> Team,
> 
> Our stable/mitaka branch is now broken by oslo.messaging 5.0.0. Global 
> requirements for stable/mitaka has oslo.messaging>=4.0.0 so it can fetch 
> 5.0.0.
> 
> Just reminding that it breaks us because we intentionally modified 
> RPCDispatcher like in [1]. It was needed for “at-least-once” delivery. In 
> master we already agreed to remove that hack and work towards having a decent 
> solution (there are options). The patch is [2]. But we need to handle it in 
> mitaka somehow.
> 
> Options I see:
> 1. Constrain oslo.messaging in global-requirements.txt for stable/mitaka to
> 4.6.1. Hard to do since it requires wide cross-project coordination.
> 2. Remove that hack in stable/mitaka as we did with master. It may be bad
> because this was wanted very much by some of the users.
> 
> Not sure what else we can do.

You could set up your test jobs to use the upper-constraints.txt file in
the requirements repo.
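
For instance (a sketch only; the constraints URL and branch name are assumptions to verify against the requirements repo), a tox install_command pinned to the Mitaka constraints would keep oslo.messaging at the version tested for that branch:

```ini
# tox.ini (sketch): install with the stable/mitaka upper constraints so
# pip cannot pull in oslo.messaging 5.0.0
[testenv]
install_command =
    pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/mitaka {opts} {packages}
```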

What was the outcome of the discussion about adding the at-least-once
semantics to oslo.messaging?

Doug

> 
> Thoughts?
> 
> [1] 
> https://github.com/openstack/mistral/blob/stable/mitaka/mistral/engine/rpc.py#L38-L88
>  
> 
> [2] https://review.openstack.org/#/c/316578/ 
> 
> 
> Renat Akhmerov
> @Nokia



[openstack-dev] [nova] Thoughts on deprecating the legacy bdm v1 API support

2016-05-17 Thread Matt Riedemann
In the live migration meeting today mdbooth and I were chatting about 
how hard it is to follow the various BDM code through nova, because you 
have the three block_device modules:


* nova.block_device - dict that does some translation magic
* nova.objects.block_device - contains the BDM(List) objects for RPC and 
DB access
* nova.virt.block_device - dict that wraps a BDM object, used for 
attaching volumes to instances, updates the BDM.connection_info field in 
the DB via the wrapper on the BDM object. This module also has 
translation logic in it.


The BDM v1 extension translates that type of request to the BDM v2 model 
before it gets to server create, and is then passed down to
nova.compute.api. But there is still a lot of legacy BDM v1 translation
logic spread through the code.
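As a rough illustration of that v1 -> v2 translation (field names follow the public block_device_mapping request formats; nova's real rules in nova.block_device handle many more cases, such as boot_index ordering and image mappings):

```python
def legacy_bdm_to_v2(bdm):
    """Sketch of the BDM v1 -> v2 translation described above.

    This is an illustration only; the authoritative logic lives in
    nova.block_device and is considerably more involved.
    """
    new = {
        'device_name': bdm.get('device_name'),
        'delete_on_termination': bdm.get('delete_on_termination', False),
        'destination_type': 'volume',
        'boot_index': -1,
    }
    # v1 requests identify the source implicitly via which id is present
    if bdm.get('snapshot_id'):
        new['source_type'] = 'snapshot'
        new['uuid'] = bdm['snapshot_id']
    elif bdm.get('volume_id'):
        new['source_type'] = 'volume'
        new['uuid'] = bdm['volume_id']
    else:
        new['source_type'] = 'blank'
    return new
```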


So I'd like to propose that we deprecate the v1 BDM API in the same vein 
that we're deprecating other untested things, like agent-builds, 
cloudpipe, certificates, and the proxy APIs. We can't remove the code, 
but we can signal to users to not use the API and eventually when we 
raise the minimum required microversion >= the deprecation, we can drop 
that code. Since that's a long ways off, the earlier we start a 
deprecation clock on this the better - if we're going to do it.


Does anyone have any issues with this?

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Re: [Neutron][L2GW] Mitaka release of L2 Gateway now available

2016-05-17 Thread Doug Hellmann
Excerpts from Ihar Hrachyshka's message of 2016-05-17 14:39:40 +0200:
> 
> > On 17 May 2016, at 14:27, Doug Hellmann  wrote:
> > 
> > Excerpts from Ihar Hrachyshka's message of 2016-05-17 12:25:55 +0200:
> >> 
> >>> On 16 May 2016, at 21:16, Armando M.  wrote:
> >>> 
> >>> 
> >>> 
> >>> On 16 May 2016 at 05:15, Ihar Hrachyshka  wrote:
> >>> 
>  On 11 May 2016, at 22:05, Sukhdev Kapur  wrote:
>  
>  
>  Folks,
>  
>  I am happy to announce that Mitaka release for L2 Gateway is released 
>  and now available at https://pypi.python.org/pypi/networking-l2gw.
>  
>  You can install it by using "pip install networking-l2gw"
>  
>  This release has several enhancements and fixes for issues discovered in 
>  liberty release.
> >>> 
> >>> How do you release it? I think the way to release new deliverables as of 
> >>> Newton dev cycle is thru openstack/releases repository, as documented in 
> >>> https://review.openstack.org/#/c/308984/
> >>> 
> >>> Have you pushed git tag manually?
> >>> 
> >>> I can only see the stable branch, tags can only be pushed by the 
> >>> neutron-release team.  
> >> 
> >> 2016.1.0 tag is in the repo, and is part of stable/mitaka branch.
> >> 
> >> Git tag history suggests that Carl pushed it (manually, I guess?). It seems
> >> that we release some independent deliverables thru openstack/releases, and 
> >> some manually pushing tags into repos.
> >> 
> >> I would love if we can consolidate all our releases to use a single 
> >> automation mechanism (openstack/releases patches), independent of release 
> >> model. For that, I would like to hear from release folks whether we are 
> >> allowed to use openstack/releases repo for release:independent 
> >> deliverables that are part of an official project (neutron).
> > 
> > We're working on it. This cycle we've expanded coverage of the releases
> > repo to all official projects using cycle-based models.
> > 
> >> 
> >> [Note that it would not mean we move the oversight burden for those 
> >> deliverables onto release team; neutron-release folks would still need to 
> >> approve them; it’s only about technicalities, not governance]
> >> 
> >> The existence of the following git directory suggests that it’s supported:
> >> 
> >> https://github.com/openstack/releases/tree/master/deliverables/_independent
> >> 
> >> We already have some networking-* subprojects there, like 
> >> networking-bgpvpn or networking-odl. I would love to see all new releases 
> >> tracked there.
> > 
> > Any official projects following the independent model may use the
> > openstack/releases repository to record their releases after they
> > are tagged.
> 
> You mean we should follow the steps:
> - first, push git tag manually;
> - then, propose openstack/releases patch describing the new tag?

Yes.

> 
> If that’s the case, it was not exactly followed for some late neutron stadium 
> releases, like: https://review.openstack.org/#/c/308962/
> 
> Actually, that later one created the tag in the repo for us. So what’s the 
> reason to take the first step manually?

Creating the tags is still a manual step, even when you submit the
request and the release team does it for you. We've said we would help a
few teams who asked, but most teams should tag themselves before
recording the results in the releases repo. I expect this to change
after newton, when the automation work is done.
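The two-step flow Doug describes might look like this in practice (repo name, version, and YAML layout are illustrative assumptions; demonstrated against a throwaway local repo rather than a real gerrit remote):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "initial"
# Step 1: tag (against the real repo this would be a signed tag pushed
# to gerrit, e.g.: git tag -s 2016.1.0 && git push gerrit 2016.1.0)
git tag 2016.1.0
git tag --list
# Step 2: propose a patch to openstack/releases recording the tag under
# deliverables/_independent/<project>.yaml, e.g.:
#   releases:
#     - version: 2016.1.0
#       projects:
#         - repo: openstack/networking-l2gw
#           hash: <commit sha the tag points at>
```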

Doug

> 
> Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tacker] Can not launch new VNF

2016-05-17 Thread Alioune
Hi all,
It's my first time using Tacker; I've installed it with OpenStack/Liberty
using [1].

I've created a VNFD, but I always get the following error when I try to
launch a new VNF.

Does someone know how to correct that error?

Regards,


tacker vnfd-list
+--------------------------------------+---------+--------------+--------------+-------------+
| id                                   | name    | description  | infra_driver | mgmt_driver |
+--------------------------------------+---------+--------------+--------------+-------------+
| 10813e8e-1375-4751-ab49-db2dfcf1cb27 | my_vnfd | demo-example | heat         | noop        |
+--------------------------------------+---------+--------------+--------------+-------------+


vnf-create --name my_vnf --vnfd-id 10813e8e-1375-4751-ab49-db2dfcf1cb27

2 ERROR root [-] Original exception being dropped: ['Traceback (most recent
call last):\n',
 '  File "/usr/local/lib/python2.7/dist-packages/tacker/vm/plugin.py", line
249, in _create_device\ncontext=context, device=device_dict)\n',
 '  File
"/usr/local/lib/python2.7/dist-packages/tacker/common/driver_manager.py",
line 75, in invoke\nreturn getattr(driver, method_name)(**kwargs)\n',
 '  File "/usr/local/lib/python2.7/dist-packages/tacker/common/log.py",
line 34, in wrapper\nreturn method(*args, **kwargs)\n',
 '  File
"/usr/local/lib/python2.7/dist-packages/tacker/vm/drivers/heat/heat.py",
line 320, in create\nstack = heatclient_.create(fields)\n',
 '  File
"/usr/local/lib/python2.7/dist-packages/tacker/vm/drivers/heat/heat.py",
line 486, in create\nreturn self.stacks.create(**fields)\n',
 '  File "/usr/lib/python2.7/dist-packages/heatclient/v1/stacks.py", line
136, in create\ndata=kwargs, headers=headers)\n',
 '  File "/usr/lib/python2.7/dist-packages/heatclient/common/http.py", line
287, in post\nreturn self.client_request("POST", url, **kwargs)\n',
 '  File "/usr/lib/python2.7/dist-packages/heatclient/common/http.py", line
277, in client_request\nresp, body = self.json_request(method, url,
**kwargs)\n',
 '  File "/usr/lib/python2.7/dist-packages/heatclient/common/http.py", line
266, in json_request\nresp = self._http_request(url, method,
**kwargs)\n',
 '  File "/usr/lib/python2.7/dist-packages/heatclient/common/http.py", line
196, in _http_request\n**kwargs)\n',
 '  File "/usr/lib/python2.7/dist-packages/requests/api.py", line 50, in
request\nresponse = session.request(method=method, url=url,
**kwargs)\n',
 '  File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 465,
in request\nresp = self.send(prep, **send_kwargs)\n',
 '  File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 573,
in send\nr = adapter.send(request, **kwargs)\n',
 '  File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 415,
in send\nraise ConnectionError(err, request=request)\n',
 "ConnectionError: ('Connection aborted.', error(111, 'ECONNREFUSED'))\n"]
2016-05-17 14:19:55.814 29892 ERROR tacker.api.v1.resource
[req-9a0ae91c-6e98-4f95-8548-697874339b1e None] create failed
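The ECONNREFUSED in the traceback means tacker reached the host but nothing was listening on the heat API port, so it is worth checking the heat endpoint tacker is configured with (heat_uri in tacker.conf and the keystone catalog) and that heat-api is actually running. A quick probe along these lines can confirm reachability (host name and the default heat-api port 8004 are assumptions; substitute your own):

```python
import socket

def heat_api_reachable(host, port=8004, timeout=2.0):
    """Probe whether anything is listening on the heat-api endpoint.

    ECONNREFUSED here matches the traceback above: the host was
    reachable, but no process was accepting connections on that port.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print("connect to %s:%s failed: %s" % (host, port, exc))
        return False
```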


[1]
http://tacker-docs.readthedocs.io/en/latest/install/manual_installation.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [dhcp] DHCP Agent resync

2016-05-17 Thread Randy Tuttle
Hi DHCP Agent SMEs

 

We are reviewing the DHCP Agent in Icehouse, specifically for an issue where 
the dnsmasq host file has multiple stale or replicated (IP address) entries (as 
compared to the DB allocations for the subnet). We also checked the other HA 
dnsmasq instance and it is correctly sync’d with the DB. Right now we can only 
guess that the network cache of the DHCP Agent has somehow gotten out of sync 
with the DB due to Rabbit message loss of port_create_end or port_update_end 
(but there might be other reasons), which then results in a stale agent cache. 
A DHCP Agent restart will, of course, clear the problem.
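In the meantime, stale entries can be located by diffing the dnsmasq host file against the subnet's DB allocations; a sketch (the 'mac,hostname,ip' line format matches what the DHCP agent writes under /var/lib/neutron/dhcp/<network-id>/host, but verify against your deployment):

```python
def stale_dnsmasq_entries(host_lines, db_ips):
    """Flag host-file entries whose IP no longer matches the subnet's
    DB allocations (line format assumed: 'mac,hostname,ip')."""
    stale = []
    for line in host_lines:
        parts = line.strip().split(',')
        if len(parts) < 3:
            continue  # skip malformed or blank lines
        ip = parts[-1]
        if ip not in db_ips:
            stale.append(line.strip())
    return stale
```

Feeding it the host file contents plus the IPs reported by `neutron port-list` for the subnet would identify exactly which entries drifted.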

 

Historically speaking, we are wondering what the reasoning / thoughts might 
have been at that time as to why a cache refresh on the DHCP Agent was not 
implemented. Our thoughts were that it could generate a lot of messaging 
traffic across Rabbit, but are there other reasons? NOTE: We can see that the 
code could support it under some exception handling cases, but it seems the 
“agent_updated” method is not triggered.

 

Does anyone have the history on this, and is there any thought on enabling such 
a sync? 

 

Thanks for any insights,

Randy

 

 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] no meeting 20 May

2016-05-17 Thread Doug Hellmann
The release team will skip our meeting this week (20 May).

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][nova] os_vif 1.0.0 release (newton)

2016-05-17 Thread no-reply
We are pumped to announce the release of:

os_vif 1.0.0: A library for plugging and unplugging virtual interfaces
in OpenStack.

This is the first release of os_vif. This release is part of the
newton release series.

With package available at:

https://pypi.python.org/pypi/os_vif

For more details, please see below.

1.0.0
^^^^^

Initial release of os-vif


New Features


* There is an object model describing the different ways a virtual
  network interface can be configured on the host. There is a plugin
  API contract defined to enable configuration of the host OS to match
  a desired VIF setup There is an object model describing the plugins
  available on the host. Two built-in plugins provide support for
  Linux Bridge and OpenVSwitch integration on Linux compute hosts.
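The plug/unplug contract can be sketched abstractly like this (a toy illustration, not the actual os_vif.plugin interface, which passes richer VIF and InstanceInfo objects and is configured via stevedore):

```python
import abc

class PluginBase(abc.ABC):
    """Toy sketch of the plug/unplug contract described above."""

    @abc.abstractmethod
    def plug(self, vif, instance_info):
        """Configure the host OS so *vif* is usable by the guest."""

    @abc.abstractmethod
    def unplug(self, vif, instance_info):
        """Undo whatever plug() did for *vif*."""

class NoopPlugin(PluginBase):
    """Trivial implementation that only records what was plugged."""

    def __init__(self):
        self.plugged = []

    def plug(self, vif, instance_info):
        self.plugged.append(vif)

    def unplug(self, vif, instance_info):
        self.plugged.remove(vif)
```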

Changes in os_vif deb06cfe2401a50c5ce96f08483f16a9777c61fa..1.0.0
-

e5227ec Start using reno for release notes
722b02d vif_plug_ovs: merge both plugins into one
3f60c8e ovs: convert over to use privsep module
48d9e64 ovs: move code from plugin into linux_net helper
bf845fe linux_bridge: convert over to use privsep module
76994a0 test: use real UUID in all UUID fields
ae034f5 test: add workaround for non-deterministic ovo object comparison
9686110 os-vif: introduce a ComputeInfo object to represent compute info
2d9c516 linux_bridge: actually apply the iptables rules
5e72ecd Fix calls to create_ovs_vif_port
e91941a Remove vlan from hostdev and direct vif
b2a910c Change network vlan to integer
40f6f49 VIFDirect: replace dev_name with dev_address
c32d2a6 Use names() method of ExtensionManager insted of keys()
678c34e Remove obsolete obj_relationships attribute
1717625 os-vif: add test for versioned object fingerprints
c091359 os_vif: ensure objects are in an 'os_vif' namespace
88db713 vif_plug_ovs: Disable IPv6 on bridge devices
f874a46 import openvswitch plugin implementation
5888af0 import linux bridge plugin implementation
72d5dfb Provide plugins an oslo_config group for their setup
6a099ff Adding dev_type field to VIFHostDevice
fb2c061 Fix PciAddress regex
46045ca Update the test_os_vif.test_initialize documentation
faf3a0a tox: ignore E126, E127, E128 indentation checks
7ae8163 Fix logic getting access to stevedore loaded plugin instance
992b409 plugin: fix typo in method annotation
d15fe91 Pass InstanceInfo to the plug/unplug methods
d8b14f1 Fix definition of subnet object to not be untyped strings
11af752 Add formal classes for each of the types of VIF backend config
1339a45 don't catch ProcessExecutionError exception as special case
664d363 remove dependancy on nova object model
e030fa9 actually register the various objects we define
9370d74 remove obsolete requirements
fab8d72 Remove raise NotImplementedError from abstractmethods
65c0f37 remove python 2.6 trove classifier
2deb99e reorder tox envlist to run python 3.4 before 2.7
8023f33 Import of code from https://github.com/jaypipes/os_vif



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Nominating Anusha Ramineni and Eric Kao for core reviewer

2016-05-17 Thread Anusha Ramineni
Thanks. It's great working with you guys and I'm glad to be part of this
team :)
+1.

On 2016/05/14 9:16, Tim Hinrichs wrote:

> Hi all,
>
> I'm writing to nominate Anusha Ramineni and Eric Kao as Congress core
> reviewers.  Both Anusha and Eric have been active and consistent
> contributors in terms of code, reviewing, and interacting with the
> community since September--for all of Mitaka and a few months before that.
>
> Anusha was so active in Mitaka that she committed more code than the
> other core reviewers, and wrote the 2nd most reviews over all.  She took
> on stable-maintenance, is the first person to fix gate breakages, and
> manages to keep Congress synchronized with the rest of the OpenStack
> projects we depend on.  She's taken on numerous tasks in migrating to
> our new distributed architecture, especially around the API.  She
> manages to write honest yet kind reviews, and has discussions at the
> same level as the rest of the cores.
>
> Eric also committed more code in Mitaka than the other core reviewers.
> He has demonstrated his ability to design and implement solutions and
> work well with the community through the review process.  In particular,
> he completed the Congress migration to Python3 (including dealing with
> the antlr grammar), worked through difficult problems with the new
> distributed architecture (e.g. message sequencing, test-nondeterminism),
> and is now designing an HA deployment architecture.  His reviews and
> responses are both thoughtful and thorough and engages in discussions at
> the same level as the rest of the core team.
>
> Anusha and Eric: it's great working with you both!
>
> Tim
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

-- 
室井 雅仁(Masahito MUROI)
Software Innovation Center, NTT
Tel: +81-422-59-4539



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] [api] Swagger limitations?

2016-05-17 Thread Jamie Hannaford
All great points, thanks Anne and Sean. My responses are inline.


From: Sean Dague 
Sent: 17 May 2016 15:30
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [docs] [api] Swagger limitations?

On 05/17/2016 08:57 AM, Jamie Hannaford wrote:
> Hi all,
>
>
> Back in March Anne wrote a great summary on the current state of Swagger
> for OpenStack
> docs: 
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/090659.html.
> 
>  Is this still the current state of things? Magnum is looking to
> document its API soon, so I want to look at the feasibility of doing it
> in Swagger. If it's not possible, RST should suffice.
>
>
> In terms of the specific limitations that were listed:
>
>
> - the /actions endpoint. Would it be possible to use vendor extensions
> to solve this problem? For example, imagine that you want to document
> reboot and rebuild operations. You could create custom verb properties
> for these operations (`x-post-reboot`, `x-post-rebuild`) that would
> allow differentiation of the JSON payload under the same URL category.
> The HTML rendering would need to be adapted to take this into
> consideration. [1]

If you do this you're now outside the swagger spec, which means that all
the toolchains in swagger that exist to:

* build docs
* build clients
* build tests

No longer apply to your API.

So yes, you could do this, but in doing so you abandon most of the
ecosystem that makes swagger interesting, and now have to maintain all
your own dedicated tooling.

There is no swagger HTML rendering project in OpenStack that can be
simply modified for this. One of the advantages for swagger was that
would be an upstream ecosystem tool to build things like the API site.
You now give that up and take on a multi person year engineering effort
to build a custom documentation site rendering engine (probably in
javascript, as that's where most of the starting components are).
Swagger is also pretty non trivial in the way references are resolved,
so this may end up a *really* big engineering effort.

I agree, but I don't think it's an unreasonable compromise to make because, at 
least in the context of OpenStack's ecosystem, we already provide much of that 
tooling to end-users. We have a plethora of clients already out there and a 
project underway to unify them into a single client. So I don't think we'd be 
putting users at a disadvantage by using our own vendor extensions, so long as 
they're well designed and well documented.

The only inconvenience would be to adapt the build process to take custom verbs 
into consideration. In my opinion that's more of a short-term software 
challenge for contributors rather than an insurmountable one that affects 
end-users. Then again, I don't have much experience with the build/infra 
process, so please call me out if that's a complete mischaracterisation :)
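For what it's worth, a hypothetical path item with such extensions might look like the following (the x-post-* keys are invented for illustration and are not valid OpenAPI operations, which is exactly why generic tooling would ignore them and custom rendering would be needed):

```python
# A hypothetical Swagger path item carrying vendor-extension "actions".
# Standard verbs (get, post, ...) would sit alongside these keys.
path_item = {
    "x-post-reboot": {"summary": "Reboot a server", "parameters": []},
    "x-post-rebuild": {"summary": "Rebuild a server", "parameters": []},
}

def extension_actions(path_item):
    """Collect the action operations a custom renderer would document."""
    prefix = "x-post-"
    return {k[len(prefix):]: v for k, v in path_item.items()
            if k.startswith(prefix)}
```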

While I accept that it's definitely preferable to use upstream tools when 
they're useful, with such a complex beast like OpenStack, how long do you think 
we could carry on using vanilla upstream tools? At some point we would probably 
need to roll out our own modifications to satisfy our use cases. In terms of it 
requiring a multi-person engineering effort, isn't that already the remit of 
the current docs team?

> - microversions. The more I think about it, the more I realise that this
> is not really a Swagger limitation but more of a consideration that
> *any* documentation format needs to solve. It seems that per
> microversion release, a separate doc artefact is required which
> encapsulates the API at that point in time. This would be very easy to
> do in Swagger compared to RST (single file vs directory of RST files).

You could go that route, and depending on how often you change your
format, and how big your API is, it may or may not be cumbersome. Fixes
to a base resource would require fixing it in N copies of the file.

If my understanding is correct, this would not happen. Each microversion, when 
released, is locked and would be preserved in that doc artefact. If a new 
software release is needed to fix a bug, that change would be encapsulated in a 
microversion with its own doc file.
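One way a doc site could map a requested microversion onto those locked artefacts (a sketch, not an existing tool) is to serve the newest documented version not exceeding the request:

```python
def pick_doc(requested, available):
    """Return the newest documented microversion <= the requested one.

    Versions are 'MAJOR.MINOR' strings; returns None if nothing
    documented is old enough.
    """
    def key(v):
        major, minor = v.split('.')
        return (int(major), int(minor))

    candidates = [v for v in available if key(v) <= key(requested)]
    return max(candidates, key=key) if candidates else None
```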

It would mean that users of your API must preselect the API version
first before seeing anything. And they will only see a pristine document
at that version.

They will not have access to change notes inline (i.e. "New in 2.12")
when looking at consuming an API. IMHO, that is something which is
incredibly useful in reading an API. Its existence in the Python
standard lib makes it much clearer whether you want to take
advantage of feature X or decide you'd rather have compatibility instead.

I definitely agree that this information is useful for the user, but I think it 
belongs in a RELEASENOTES format, not API docs. I think we should get to a 

Re: [openstack-dev] [tc] [all] [glance] Answers to some questions about Glance

2016-05-17 Thread John Griffith
Thanks Brian

On Tue, May 17, 2016 at 6:54 AM, Brian Rosmaita <
brian.rosma...@rackspace.com> wrote:

> Subject was: Re: [openstack-dev] [tc] [all] [glance] On operating a high
> throughput or otherwise team
>
> Un-hijacking the thread.  Here are some answers to John's questions, hope
> they are helpful.
>
> On 5/16/16, 9:06 PM, "John Griffith"  wrote:
>
> Hey,
>
> Maybe not related, but maybe it is.  After spending the past couple of
> hours trying to help a customer with a Glance issue I'm a bit... well
> annoyed with Glance.  I'd like to chime in on this thread.  I'm honestly not
> entirely sure what the goal of the thread is, but honestly there's
> something rather important to me that I don't really seem to see being
> called out.
>
> Is there any way we could stop breaking the API and it's behaviors?  Is
> there any way we can fix some of the issues with respect to how things work
> when folks configure multiple Glance repos?
>
> Couple of examples:
> 1. switching from "is_public=true" to "visibility=public"
>
>
> This was a major version change in the Images API.  The 'is_public'
> boolean is in the original Images v1 API, 'visibility' was introduced with
> the Images v2 API in the Folsom release.  You just need an awareness of
> which version of the API you're talking to.
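For automation that still sends v1-style filters, a small shim along these lines could ease the transition (simplified for illustration; the real v1 semantics also cover shared images and owner visibility):

```python
def v1_filters_to_v2(filters):
    """Translate the v1 'is_public' filter into the v2 'visibility'
    filter, leaving other filters untouched."""
    filters = dict(filters)  # don't mutate the caller's dict
    if 'is_public' in filters:
        is_public = filters.pop('is_public')
        filters['visibility'] = 'public' if is_public else 'private'
    return filters
```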
>
>
>
> Ok, cool, I'm sure there's great reasons, but it really sucks when folks
> update their client and now none of their automation works any longer
>
>
> The Images v1 API went from CURRENT to SUPPORTED in the Kilo release
> (April 30, 2015).  The python-glanceclient began using v2 as the default
> with Change-Id: I09c9e409d149e2d797785591183e06c13229b7f7 on June 21, 2015
> (and hence would have been in release 0.17.2 on July 16, 2015).  So these
> changes have been in the works for a while.
>
> 2. making virtual_size R/O
>
>
> So for some time this was a property that folks could use to set the size
> of an image needed for things like volume creation, cloning etc.  At some
> point though it was decided "this should be read only", ok... well again
> all sorts of code is now broken, including code in Cinder.  It also seems
> there's no way to set it, so it's always there and just Null.  It looked
> like I would be able to set it during image-create maybe... but then I hit
> number 3.
>
>
> The virtual_size was added to the Images v2 API with Change-Id:
> Ie4f58ee2e4da3a6c1229840295c7f62023a95b70 on February 11, 2014.  The commit
> message indicates: "This patch adds the knowledge of a virtual_size field
> to Glance's API v2. The virtual_size field should respect the same rules
> applied to the size field in terms of readability, access control and
> propagation."  The 'size' field has never been end-user modifiable, hence
> the virtual_size is read-only as well.
>
> 3. broken parsing for size and virtual_size
>
> I just started looking at this one and I'm not sure what happened here
> yet, but it seems that these inputs aren't being parsed any more and are
> now raising an exception due to trying to stuff a string into an int field
> in the json schema.
>
>
> Please file a bug with some details when you know more about this one.  It
> sounds like a client issue, but you can put details in the bug report.
>
> So I think if the project wants to move faster that's great, but please, is
> there any chance to value backwards compatibility just a bit more?  I'm
> sure I'm going to get flamed for this email, and the likely response will
> be "you're doing it wrong".  I guess if I'm the only one that has these
> sorts of issues then alright, I deserve the flames, and maybe somebody will
> enlighten me on the proper ways of using Glance so I can be happier and
> more in tune with my Universe.
>
>
> Well, since you asked for enlightenment ... it *is* helpful to make sure
> that you know which version of the Images API you're using.  The Glance
> community values backwards compatibility, but not across major releases.
>
> As I imagine you're aware, Glance is tagged "release:
> cycle-with-milestones", so you can read about any changes in the release
> notes.  Or if you want a quick overview of what major features were added
> to Glance for each release, there was an excellent presentation at the
> Tokyo summit about the evolution of the Glance APIs:
>
> https://www.openstack.org/summit/tokyo-2015/videos/presentation/the-evolution-of-glance-api-on-the-way-from-v1-to-v3
> slides only:
> http://www.slideshare.net/racker_br/the-evolution-of-glance-api-on-the-way-from-v1-to-v3
>
> Before people begin freaking out at the mention of the Images v3 API,
> please note that the presentation above described the state of Glance as of
> October 2015.  The Glance documentation has a statement about the two
> Images APIs and the current plans for The Future that was updated shortly
> after the Austin summit:
>
> http://docs.openstack.org/developer/glance/glanceapi.html#glance-and-the-images-apis-past-present-and-future
> 

[openstack-dev] [cross-project][ptl] Update your liaisons

2016-05-17 Thread Mike Perez
Hi PTL's,

Please make sure your cross-project spec liaisons are up-to-date [1]. This role
defaults to the PTL if no liaison is selected. See the list of responsibilities [2].

As agreed by the TC, the cross-project spec liaison team will have voting
rights on the openstack/openstack-specs repo [3]. Next week I will be adding
people from the cross-project spec liaison list to the gerrit group with the
appropriate ACLs.


[1] - 
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Cross-Project_Spec_Liaisons
[2] - 
http://docs.openstack.org/project-team-guide/cross-project.html#cross-project-specification-liaisons
[3] - 
http://governance.openstack.org/resolutions/20160414-grant-cross-project-spec-team-voting.html

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] [api] Swagger limitations?

2016-05-17 Thread Anne Gentle
On Tue, May 17, 2016 at 7:57 AM, Jamie Hannaford <
jamie.hannaf...@rackspace.com> wrote:

> Hi all,
>
>
> Back in March Anne wrote a great summary on the current state of Swagger
> for OpenStack docs:
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/090659.html.
> 
>  Is this still the current state of things? Magnum is looking to document
> its API soon, so I want to look at the feasibility of doing it in Swagger.
> If it's not possible, RST should suffice.
>
>
>
Hi Jamie, thanks for asking. There are a few more limitations/concerns that
I'll outline below.


> In terms of the specific limitations that were listed:
>
>
> - the /actions endpoint. Would it be possible to use vendor extensions to
> solve this problem? For example, imagine that you want to document reboot
> and rebuild operations. You could create custom verb properties for these
> operations (`x-post-reboot`, `x-post-rebuild`) that would allow
> differentiation of the JSON payload under the same URL category. The HTML
> rendering would need to be adapted to take this into consideration. [1]
>
>
Yes, our first goal is to migrate off of WADL, but that won't prevent use
of Swagger, it just means the first priority is to get all projects off of
WADL.

I think the details and nuances here were fairly well communicated in the
rest of the thread [1] and in the etherpads, but of course that requires a
LOT of reading. :)

The main points I'd also like to ensure you know we discussed are [2]:

The current Plan of Record is to move to swagger. However this doesn't
address a number of key issues:
- swagger is young, most community tools around swagger are 50% - 80%
solutions
- swagger provides no facility for nova's 'action' interfaces or
microversions

NOTE: actions are used by nova, cinder, manila, trove, neutron (in
extensions) - so 40% of the 12 APIs documented on
http://developer.openstack.org/ but only 5 of 19 (26%) REST APIs in
OpenStack's governance

NOTE: microversions are used by nova, manila, ironic, and cinder (and
neutron probably in the future unless it proves awful to document and use)

NOTE: while Heat uses neither actions nor microversions, its content
templates as inline POST requests make the swagger definition potentially
really tough. Not that there shouldn't be full docs for those formats, but
they just don't do that today. This same issue will hit other APIs that
support POST requests of 3rd-party content formats, Tacker being a good
instance.

- the swagger-tools npm package supports $ref, which will clearly be
required for maintaining our APIs (autoconverted Nova API is 21KLOC, and
probably would be triple that once missing parts are filled in).

NOTE: this is a brand new toolchain that is yet another speed bump in
getting people to contribute and maintain, though it is maintained outside
of OpenStack
---

Please do consider the additional concerns about $ref and tooling listed
above.
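To give a sense of why $ref handling is non-trivial, here is a deliberately minimal resolver that inlines local '#/definitions/...' pointers only (real Swagger tooling must also handle remote references, cycles, and sibling keys, none of which this sketch attempts):

```python
def resolve_refs(node, root):
    """Recursively inline local JSON pointers of the form '#/a/b/c'.

    *root* is the whole spec document; cyclic references would recurse
    forever, so this is strictly an illustration.
    """
    if isinstance(node, dict):
        if '$ref' in node and node['$ref'].startswith('#/'):
            target = root
            for part in node['$ref'][2:].split('/'):
                target = target[part]  # walk the pointer segments
            return resolve_refs(target, root)
        return {k: resolve_refs(v, root) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_refs(v, root) for v in node]
    return node
```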

In all the communication, I have not prevented the use of Swagger/OpenAPI
but I want to make it clear that focused efforts are best for the earliest
projects.

> - microversions. The more I think about it, the more I realise that this
> is not really a Swagger limitation but more of a consideration that *any*
> documentation format needs to solve. It seems that per microversion
> release, a separate doc artefact is required which encapsulates the API at
> that point in time. This would be very easy to do in Swagger compared to
> RST (single file vs directory of RST files).
>
>
Agreed. But are we going to solve the microversions display problem once
or twice? Right now, I'm focusing on RST + YAML, and microversions need to
be solved.

Plus I'd like your thoughts on maintenance of the $ref mechanism and HTML
publishing toolchain. We have a patch that lets us build to HTML using
npm-provided tooling [3], but I really need to focus on getting a
navigation for all OpenStack APIs to be read in a unified way on
developer.openstack.org so I haven't tried to make a nice header/footer on
that output.

Heh, I see Sean replying also, so I'll go ahead and send. Thanks Jamie.

Anne

1. http://lists.openstack.org/pipermail/openstack-dev/2016-March/090704.html
2. https://etherpad.openstack.org/p/api-site-in-rst
3. https://review.openstack.org/#/c/286659/

> Am I way off here? I would really like to hear others' opinions on
> this. Thanks for all the great work!
>
>
> Jamie
>
>
> [1]
> https://github.com/OAI/OpenAPI-Specification/blob/master/guidelines/EXTENSIONS.md
> ​
>
> --
> Rackspace International GmbH a company registered in the Canton of Zurich,
> Switzerland (company identification number CH-020.4.047.077-1) whose
> registered office is at Pfingstweidstrasse 60, 8005 Zurich, Switzerland.
> Rackspace International GmbH privacy policy can be viewed at
> www.rackspace.co.uk/legal/swiss-privacy-policy - This e-mail message may
> 

Re: [openstack-dev] [docs] [api] Swagger limitations?

2016-05-17 Thread Sean Dague
On 05/17/2016 08:57 AM, Jamie Hannaford wrote:
> Hi all,
> 
> 
> Back in March Anne wrote a great summary on the current state of Swagger
> for OpenStack
> docs: 
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/090659.html.
> 
>  Is this still the current state of things? Magnum is looking to
> document its API soon, so I want to look at the feasibility of doing it
> in Swagger. If it's not possible, RST should suffice.
> 
> 
> In terms of the specific limitations that were listed:
> 
> 
> - the /actions endpoint. Would it be possible to use vendor extensions
> to solve this problem? For example, imagine that you want to document
> reboot and rebuild operations. You could create custom verb properties
> for these operations (`x-post-reboot`, `x-post-rebuild`) that would
> allow differentiation of the JSON payload under the same URL category.
> The HTML rendering would need to be adapted to take this into
> consideration. [1]

If you do this you're now outside the swagger spec, which means that all
the toolchains in swagger that exist to:

* build docs
* build clients
* build tests

No longer apply to your API.

So yes, you could do this, but in doing so you abandon most of the
ecosystem that makes swagger interesting, and now have to maintain all
your own dedicated tooling.

There is no swagger HTML rendering project in OpenStack that can be
simply modified for this. One of the advantages of swagger was that there
would be an upstream ecosystem tool to build things like the API site.
You now give that up and take on a multi person year engineering effort
to build a custom documentation site rendering engine (probably in
javascript, as that's where most of the starting components are).
Swagger is also pretty non-trivial in the way references are resolved,
so this may end up a *really* big engineering effort.

> - microversions. The more I think about it, the more I realise that this
> is not really a Swagger limitation but more of a consideration that
> *any* documentation format needs to solve. It seems that per
> microversion release, a separate doc artefact is required which
> encapsulates the API at that point in time. This would be very easy to
> do in Swagger compared to RST (single file vs directory of RST files). 

You could go that route, and depending on how often you change your
format, and how big your API is, it may or may not be cumbersome. Fixes
to a base resource would require fixing it in N copies of the file.

It would mean that users of your API must preselect the API version
first before seeing anything. And they will only see a pristine document
at that version.

They will not have access to inline change notes (i.e. "New in 2.12")
when looking at consuming an API. IMHO, that is something which is
incredibly useful when reading an API. Its existence in the python
standard lib makes it much clearer whether you want to take advantage
of feature X, or would rather keep compatibility and skip it.
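
To make that negotiation concrete: clients pin a version per request via a
header. A minimal sketch, assuming the `OpenStack-API-Version` header
convention (the helper function itself is invented for illustration):

```python
def microversion_headers(service, version):
    """Build request headers pinning a call to one API microversion.

    Sketch only: assumes the "OpenStack-API-Version: <service> <version>"
    convention; individual services may also honor older service-specific
    headers such as X-OpenStack-Nova-API-Version.
    """
    return {
        "OpenStack-API-Version": "%s %s" % (service, version),
        "Accept": "application/json",
    }

print(microversion_headers("compute", "2.12")["OpenStack-API-Version"])
# -> compute 2.12
```

When the header is absent, a service typically falls back to its minimum
microversion, which is why those inline "New in X" notes matter so much
when reading the docs.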

As with everything, it's trade offs, and what you want to get out of it.

Swagger is a really interesting API Design Language. And if you start
with Swagger when designing your API, you impose a set of constraints
that give you a bunch of benefits. If you go beyond its constraints,
you have imposed a very large and novel DSL on your environment (which
is strictly no longer swagger), which means you get all the costs of
conforming to an external standard without any of the benefits.

I think that our actions interface turned out to be a mistake, but it's
one made so long ago that it's pretty embedded. Long term getting
ourselves out of that hole would be good.

The microversions model is something that I think would be worth
engaging the OpenAPI upstream community. Our current implementation is
pretty OpenStack specific, but the concept would apply to any open
source project with a REST API that is CDed at different rates by
different deployers.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Performance] IRC meetings starting today after a break

2016-05-17 Thread Dina Belova
Folks,

just a reminder - today we'll restart our periodic Performance Team
meetings after the post-summit break we had. *#openstack-performance channel
and 16:00 UTC* as usual :)

See you!

-- Dina
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] tripleo-common docs

2016-05-17 Thread Marius Cornea
Hi everyone,

The tripleo-common readme[1] points to a broken documentation link[2].
What is the correct link for the docs?

Thanks

[1] https://github.com/openstack/tripleo-common/blob/master/README.rst
[2] http://docs.openstack.org/developer/tripleo-common

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Nominating new members to Sahara Core

2016-05-17 Thread Chad Roberts
Thanks everyone.  Always a pleasure to work with you all.

Chad

On Tue, May 17, 2016 at 4:42 AM, Vitaly Gridnev wrote:

> Ok, seems that we have a quorum. Welcome folks!
>
> On Tue, May 17, 2016 at 10:44 AM, Alexander Ignatov wrote:
>
>> +2 to all! Thank you for your contributions, folks!
>>
>> Regards,
>> Alexander Ignatov
>>
>> On 13 May 2016, at 19:26, Ethan Gafford  wrote:
>>
>> On Fri, May 13, 2016 at 11:33 AM, Vitaly Gridnev wrote:
>>
>>> Hello Sahara core folks!
>>>
>>> I'd like to bring the following folks to Sahara Core:
>>>
>>> 1. Lu Huichun
>>> 2. Nikita Konovalov
>>> 3. Chad Roberts
>>>
>>> Let's vote with +2/-2 for additions above.
>>>
>>> [0] http://stackalytics.com/?module=sahara-group
>>> [1] http://stackalytics.com/?module=sahara-group=mitaka
>>>
>>> --
>>> Best Regards,
>>> Vitaly Gridnev,
>>> Project Technical Lead of OpenStack DataProcessing Program (Sahara)
>>> Mirantis, Inc
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> Lu Huichin: +2
>> Nikita Konovalov: +2
>> Chad Roberts: +2
>>
>> All deeply well deserved after a great deal of work. Thanks!
>>
>> - egafford
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best Regards,
> Vitaly Gridnev,
> Project Technical Lead of OpenStack DataProcessing Program (Sahara)
> Mirantis, Inc
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs] [api] Swagger limitations?

2016-05-17 Thread Jamie Hannaford
Hi all,


Back in March Anne wrote a great summary on the current state of Swagger for 
OpenStack docs: 
http://lists.openstack.org/pipermail/openstack-dev/2016-March/090659.html.
  Is this still the current state of things? Magnum is looking to document its 
API soon, so I want to look at the feasibility of doing it in Swagger. If it's 
not possible, RST should suffice.


In terms of the specific limitations that were listed:


- the /actions endpoint. Would it be possible to use vendor extensions to solve 
this problem? For example, imagine that you want to document reboot and rebuild 
operations. You could create custom verb properties for these operations 
(`x-post-reboot`, `x-post-rebuild`) that would allow differentiation of the 
JSON payload under the same URL category. The HTML rendering would need to be 
adapted to take this into consideration. [1]
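
For concreteness, here is a sketch of what such an extension might look
like, written as a Python dict mirroring a Swagger path item. The
`x-post-reboot` key and its payload fields are invented for illustration
and are not part of any spec:

```python
# Hypothetical Swagger path item using a vendor extension ("x-post-reboot"
# is an invented key) to describe one of the payloads hiding behind a
# generic /action endpoint.
servers_action = {
    "/servers/{server_id}/action": {
        "x-post-reboot": {
            "summary": "Reboot a server (SOFT or HARD)",
            "parameters": [{
                "name": "reboot",
                "in": "body",
                "schema": {
                    "type": "object",
                    "properties": {
                        "type": {"type": "string", "enum": ["SOFT", "HARD"]},
                    },
                },
            }],
            "responses": {"202": {"description": "Reboot request accepted"}},
        },
    },
}

print("x-post-reboot" in servers_action["/servers/{server_id}/action"])
# -> True
```

Standard Swagger tooling would ignore the `x-` key, so the HTML rendering
would have to be taught to display it.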


- microversions. The more I think about it, the more I realise that this is not 
really a Swagger limitation but more of a consideration that *any* 
documentation format needs to solve. It seems that per microversion release, a 
separate doc artefact is required which encapsulates the API at that point in 
time. This would be very easy to do in Swagger compared to RST (single file vs 
directory of RST files).


Am I way off here? I would really like to hear others' opinions on this. Thanks 
for all the great work!


Jamie


[1] 
https://github.com/OAI/OpenAPI-Specification/blob/master/guidelines/EXTENSIONS.md



Rackspace International GmbH a company registered in the Canton of Zurich, 
Switzerland (company identification number CH-020.4.047.077-1) whose registered 
office is at Pfingstweidstrasse 60, 8005 Zurich, Switzerland. Rackspace 
International GmbH privacy policy can be viewed at 
www.rackspace.co.uk/legal/swiss-privacy-policy - This e-mail message may 
contain confidential or privileged information intended for the recipient. Any 
dissemination, distribution or copying of the enclosed material is prohibited. 
If you receive this transmission in error, please notify us immediately by 
e-mail at ab...@rackspace.com and delete the original message. Your cooperation 
is appreciated.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] proposing Brian Haley for neutron-stable-maint

2016-05-17 Thread Kyle Mestery
+1 (Also +1 for Cedric).

On Tue, May 17, 2016 at 6:07 AM, Ihar Hrachyshka  wrote:
> Hi stable-maint-core and all,
>
> I would like to propose Brian for neutron specific stable team.
>
> His stats for neutron stable branches are (last 120 days):
>
> mitaka: 19 reviews; liberty: 68 reviews (3rd place in the top); kilo: 16 
> reviews.
>
> Brian helped the project with stabilizing liberty neutron/DVR jobs, and with 
> other L3 related stable matters. In his stable reviews, he shows attention to 
> details.
>
> If Brian is added to the team, I will make sure he is aware of all stable 
> policy intricacies.
>
> Side note: recently I added another person to the team (Cedric Brandilly), 
> and now I realize that I haven’t followed the usual approval process. That 
> said, the person also has decent stable review stats, and is aware of the 
> policy. If someone has doubts about that addition to the team, please ping me 
> and we will discuss how to proceed.
>
> Ihar
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] [glance] Answers to some questions about Glance

2016-05-17 Thread Brian Rosmaita
Subject was: Re: [openstack-dev] [tc] [all] [glance] On operating a high 
throughput or otherwise team

Un-hijacking the thread.  Here are some answers to John's questions, hope they 
are helpful.

On 5/16/16, 9:06 PM, "John Griffith" wrote:
Hey,

Maybe not related, but maybe it is.  After spending the past couple of hours 
trying to help a customer with a Glance issue I'm a bit... well annoyed with 
Glance.  I'd like to chime in on this thread.  I'm honesty not entirely sure 
what the goal of the thread is, but honestly there's something rather important 
to me that I don't really seem to see being called out.

Is there any way we could stop breaking the API and its behaviors?  Is there 
any way we can fix some of the issues with respect to how things work when 
folks configure multiple Glance repos?

Couple of examples:
1. switching from "is_public=true" to "visibility=public"

This was a major version change in the Images API.  The 'is_public' boolean is 
in the original Images v1 API, 'visibility' was introduced with the Images v2 
API in the Folsom release.  You just need an awareness of which version of the 
API you're talking to.


Ok, cool, I'm sure there's great reasons, but it really sucks when folks update 
their client and now none of their automation works any longer

The Images v1 API went from CURRENT to SUPPORTED in the Kilo release (April 30, 
2015).  The python-glanceclient began using v2 as the default with Change-Id: 
I09c9e409d149e2d797785591183e06c13229b7f7 on June 21, 2015 (and hence would 
have been in release 0.17.2 on July 16, 2015).  So these changes have been in 
the works for a while.

2. making virtual_size R/O

So for some time this was a property that folks could use to set the size of an 
image needed for things like volume creation, cloning etc.  At some point 
though it was decided "this should be read only", ok... well again all sorts of 
code is now broken, including code in Cinder.  It also seems there's no way to 
set it, so it's always there and just Null.  It looked like I would be able to 
set it during image-create maybe... but then I hit number 3.

The virtual_size was added to the Images v2 API with Change-Id: 
Ie4f58ee2e4da3a6c1229840295c7f62023a95b70 on February 11, 2014.  The commit 
message indicates: "This patch adds the knowledge of a virtual_size field to 
Glance's API v2. The virtual_size field should respect the same rules applied 
to the size field in terms of readability, access control and propagation."  
The 'size' field has never been end-user modifiable, hence the virtual_size is 
read-only as well.

3. broken parsing for size and virtual_size

I just started looking at this one and I'm not sure what happened here yet, but 
it seems that these inputs aren't being parsed any more and are now raising an 
exception due to trying to stuff a string into an int field in the json schema.

Please file a bug with some details when you know more about this one.  It 
sounds like a client issue, but you can put details in the bug report.

So I think if the project wants to move faster that's great, but please, is 
there any chance to value backwards compatibility just a bit more?  I'm sure 
I'm going to get flamed for this email, and the likely response will be "you're 
doing it wrong".  I guess if I'm the only one that has these sorts of issues 
then alright, I deserve the flames, and maybe somebody will enlighten me on the 
proper ways of using Glance so I can be happier and more in tune with my 
Universe.

Well, since you asked for enlightenment ... it *is* helpful to make sure that 
you know which version of the Images API you're using.  The Glance community 
values backwards compatibility, but not across major releases.

As I imagine you're aware, Glance is tagged "release: cycle-with-milestones", 
so you can read about any changes in the release notes.  Or if you want a quick 
overview of what major features were added to Glance for each release, there 
was an excellent presentation at the Tokyo summit about the evolution of the 
Glance APIs:
https://www.openstack.org/summit/tokyo-2015/videos/presentation/the-evolution-of-glance-api-on-the-way-from-v1-to-v3
slides only: 
http://www.slideshare.net/racker_br/the-evolution-of-glance-api-on-the-way-from-v1-to-v3

Before people begin freaking out at the mention of the Images v3 API, please 
note that the presentation above described the state of Glance as of October 
2015.  The Glance documentation has a statement about the two Images APIs and 
the current plans for The Future that was updated shortly after the Austin 
summit:
http://docs.openstack.org/developer/glance/glanceapi.html#glance-and-the-images-apis-past-present-and-future
(Spoiler alert: no plans for Images v3 API at this point.)

Thanks,
John

Hope that was helpful,
brian

__
OpenStack Development Mailing List 

Re: [openstack-dev] [fuel][plugins][lma] Leveraging OpenStack logstash grok filters in StackLight?

2016-05-17 Thread Simon Pasquier
The short answer is no. StackLight is based on Heka for log processing and
parsing. Heka itself uses Lua Parsing Expression Grammars [1].
For now the patterns are maintained in the LMA collector repository [2] but
it's on our to-do list to have it available in a dedicated repo.
One advantage of having Lua-based parsing is that it's fairly easy to unit
test the patterns.
BR,
Simon

[1] http://www.inf.puc-rio.br/~roberto/lpeg/lpeg.html
[2]
https://github.com/openstack/fuel-plugin-lma-collector/blob/master/deployment_scripts/puppet/modules/lma_collector/files/plugins/common/patterns.lua
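
To illustrate the unit-testing point in a portable way, here is a sketch
using a plain Python regex as a stand-in for the Lua LPEG grammar. The
pattern and the sample line are invented examples, not the actual LMA
patterns:

```python
import re

# Invented stand-in for one of the LMA log patterns: an oslo.log-style
# line of the form "<timestamp> <pid> <severity> <logger> <message>".
LOG_LINE = re.compile(
    r"^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) "
    r"(?P<pid>\d+) (?P<severity>[A-Z]+) (?P<logger>\S+) (?P<message>.*)$"
)

def parse(line):
    """Return the named fields of a matching line, or None."""
    match = LOG_LINE.match(line)
    return match.groupdict() if match else None

sample = "2016-05-17 14:00:00.123 1234 INFO nova.compute.manager Starting node"
print(parse(sample)["severity"])  # -> INFO
print(parse("not a log line"))    # -> None
```

The same shape works for the Lua patterns: feed known-good and known-bad
lines through the grammar and assert on the extracted fields.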

On Tue, May 17, 2016 at 2:23 PM, Bogdan Dobrelya wrote:

> Hi.
> Are there plans to align the StackLight (LMA plugin) [0] with that
> recently announced source of Logstash filters [1]? I found no fast info
> if the plugin supports Logstash input log shippers, so I'm just asking
> as well.
>
> Writing grok filters is... hard. I had a sad experience [2] with that
> some time ago, and that is not something I'd like to repeat or maintain on my
> own, so writing those is something that should definitely be done
> collaboratively :)
>
> [0] https://launchpad.net/lma-toolchain
> [1] https://github.com/openstack-infra/logstash-filters
> [2] https://goo.gl/bG6EwX
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Re: [Neutron][L2GW] Mitaka release of L2 Gateway now available

2016-05-17 Thread Ihar Hrachyshka

> On 17 May 2016, at 14:27, Doug Hellmann  wrote:
> 
> Excerpts from Ihar Hrachyshka's message of 2016-05-17 12:25:55 +0200:
>> 
>>> On 16 May 2016, at 21:16, Armando M.  wrote:
>>> 
>>> 
>>> 
>>> On 16 May 2016 at 05:15, Ihar Hrachyshka  wrote:
>>> 
 On 11 May 2016, at 22:05, Sukhdev Kapur  wrote:
 
 
 Folks,
 
 I am happy to announce that Mitaka release for L2 Gateway is released and 
 now available at https://pypi.python.org/pypi/networking-l2gw.
 
 You can install it by using "pip install networking-l2gw"
 
 This release has several enhancements and fixes for issues discovered in 
 liberty release.
>>> 
>>> How do you release it? I think the way to release new deliverables as of 
>>> Newton dev cycle is thru openstack/releases repository, as documented in 
>>> https://review.openstack.org/#/c/308984/
>>> 
>>> Have you pushed git tag manually?
>>> 
>>> I can only see the stable branch, tags can only be pushed by the 
>>> neutron-release team.  
>> 
>> 2016.1.0 tag is in the repo, and is part of stable/mitaka branch.
>> 
>> Git tag history suggests that Carl pushed it (manually I guess?) It seems 
>> that we release some independent deliverables thru openstack/releases, and 
>> some manually pushing tags into repos.
>> 
>> I would love if we can consolidate all our releases to use a single 
>> automation mechanism (openstack/releases patches), independent of release 
>> model. For that, I would like to hear from release folks whether we are 
>> allowed to use openstack/releases repo for release:independent deliverables 
>> that are part of an official project (neutron).
> 
> We're working on it. This cycle we've expanded coverage of the releases
> repo to all official projects using cycle-based models.
> 
>> 
>> [Note that it would not mean we move the oversight burden for those 
>> deliverables onto release team; neutron-release folks would still need to 
>> approve them; it’s only about technicalities, not governance]
>> 
>> The existence of the following git directory suggests that it’s supported:
>> 
>> https://github.com/openstack/releases/tree/master/deliverables/_independent
>> 
>> We already have some networking-* subprojects there, like networking-bgpvpn 
>> or networking-odl. I would love to see all new releases tracked there.
> 
> Any official projects following the independent model may use the
> openstack/releases repository to record their releases after they
> are tagged.

You mean we should follow the steps:
- first, push git tag manually;
- then, propose openstack/releases patch describing the new tag?

If that’s the case, it was not exactly followed for some late neutron stadium 
releases, like: https://review.openstack.org/#/c/308962/

Actually, that latter one created the tag in the repo for us. So what’s the 
reason to take the first step manually?

Ihar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Smaug]- IRC Meeting today (05/17) - 1400 UTC

2016-05-17 Thread Saggi Mizrahi
Hi All,

We will hold our bi-weekly IRC meeting today (Tuesday, 05/17) at 1400
UTC in #openstack-meeting


Please review the proposed meeting agenda here:
https://wiki.openstack.org/wiki/Meetings/smaug

Please feel free to add to the agenda any subject you would like to discuss.


We will also be discussing the time change for the meeting so either
attend or hit me up with an email before the meeting so I can take your
requests into consideration.

Thanks,
Saggi
-
This email and any files transmitted and/or attachments with it are 
confidential and proprietary information of
Toga Networks Ltd., and intended solely for the use of the individual or entity 
to whom they are addressed.
If you have received this email in error please notify the system manager. This 
message contains confidential
information of Toga Networks Ltd., and is intended only for the individual 
named. If you are not the named
addressee you should not disseminate, distribute or copy this e-mail. Please 
notify the sender immediately
by e-mail if you have received this e-mail by mistake and delete this e-mail 
from your system. If you are not
the intended recipient you are notified that disclosing, copying, distributing 
or taking any action in reliance on
the contents of this information is strictly prohibited.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Telemetry] Time to test and import Panko

2016-05-17 Thread Julien Danjou
Hi fellows,

I'm done creating Panko, our new project born from cutting off the event
part of Ceilometer. It's at: https://github.com/jd/panko

There are only a few commits as you can see:

  https://github.com/jd/panko/commits/master

The code has been created in a 4 steps process:
1. Remove code that is not related to events storage and API
2. Rename to Panko
3. Remove base class for dispatcher 
4. Rename database event dispatcher to panko

Some testing would be welcome. It should be pretty straightforward: it
provides `panko-api', which has a /v2/events endpoint, and a "panko"
event dispatcher for ceilometer-collector.
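
For anyone poking at the API, a hedged sketch of building a request URL
for that endpoint (the host/port and the `limit` filter parameter are
assumptions on my part, not a documented contract yet):

```python
from urllib.parse import urlencode

def events_url(base, limit=None):
    """Build the URL for the /v2/events endpoint described above.

    Sketch only: check the Panko API once it lands for the real
    filtering parameters.
    """
    url = base.rstrip("/") + "/v2/events"
    if limit is not None:
        url += "?" + urlencode({"limit": limit})
    return url

print(events_url("http://localhost:8977", limit=10))
# -> http://localhost:8977/v2/events?limit=10
```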

The devstack plugin might need some love to integrate with Ceilometer,
but I imagine we can do that in a later pass.

I'm gonna create the openstack-infra patch to import the project unless
someone tells me not to.

Cheers,
-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

