Re: [openstack-dev] [nova] Non-priority feature freeze and FFEs

2016-07-04 Thread Claudiu Belu
Hi, 

The Hyper-V implementation of the bp virt-device-role-tagging is mergeable [1]. 
The patch is quite simple, it got some reviews, and the tempest test 
test_device_tagging [2] passed. [3]

[1] https://review.openstack.org/#/c/331889/
[2] https://review.openstack.org/#/c/305120/
[3] http://64.119.130.115/debug/nova/331889/8/04-07-2016_19-43/results.html.gz

Best regards,

Claudiu Belu


From: Markus Zoeller [mzoel...@linux.vnet.ibm.com]
Sent: Monday, July 04, 2016 2:24 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Non-priority feature freeze and FFEs

On 01.07.2016 23:03, Matt Riedemann wrote:
> We're now past non-priority feature freeze. I've started going through
> some blueprints and -2ing them if they still have outstanding changes. I
> haven't gone through the full list yet (we started with 100).
>
> I'm also building a list of potential FFE candidates based on:
>
> 1. How far along the change is (how ready is it?), e.g. does it still
> require a lot of change? Does it require a Tempest test and is that passing
> already? How much of the series has already merged and what's left?
>
> 2. How much core reviewer attention has it already gotten?
>
> 3. What kind of priority does it have, i.e. if we don't get it done in
> Newton do we miss something in Ocata? Think things that start
> deprecation/removal timers.
>
> The plan is for the nova core team to have an informal meeting in the
> #openstack-nova IRC channel early next week, either Tuesday or
> Wednesday, and go through the list of potential FFE candidates.
>
> Blueprints that get exceptions will be checked against the above
> criteria, along with who on the core team is actually going to push the
> changes through.
>
> I'm looking to get any exceptions completed within a week, so targeting
> Wednesday 7/13. That leaves a few days for preparing for the meetup.
>

FWIW, bp "libvirt-virtlogd" [1] is basically ready to merge. The two
changes [2] and [3] have already received a lot of attention from danpb.

References:
[1] https://blueprints.launchpad.net/openstack/?searchtext=libvirt-virtlogd
[2] https://review.openstack.org/#/c/334480/
[3] https://review.openstack.org/#/c/323765/

--
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] [murano] [yaql] yaqluator bug

2016-07-04 Thread Stan Lagun
Hi!

The issue with join is just a yaql bug that is already fixed. The problem
with yaqluator is that it doesn't use the latest yaql library.

Another problem is that it doesn't set options correctly. As a result it is
possible to bring the site down with a query that produces an endless
collection.
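
To illustrate, here is a minimal sketch (the option values are only examples,
not yaqluator's actual settings) of creating a yaql engine with the limiting
options set, so that an endless collection fails fast instead of hanging the
evaluator:

    import yaql

    # Without these options a query that produces an endless collection
    # can iterate forever and take the whole site down with it.
    engine_options = {
        'yaql.limitIterators': 1000,   # cap items any iterator may yield
        'yaql.memoryQuota': 100000,    # cap memory for intermediate results
    }
    engine = yaql.YaqlFactory().create(options=engine_options)

    expression = engine('[1, 2].join([3], true, [$1, $2])')
    print(expression.evaluate())  # with a fixed yaql: [[1, 3], [2, 3]]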

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis



On Tue, Jun 28, 2016 at 9:46 AM, Elisha, Moshe (Nokia - IL) <
moshe.eli...@nokia.com> wrote:

> Hi,
>
> Thank you for the kind words, Alexey.
>
> I was able to reproduce your bug and I have also found the issue.
>
> The problem is that we did not create the parser with the engine_options
> used in the yaql library by default when using the CLI.
> Specifically, the "yaql.limitIterators" was missing… I am not sure that
> this setting should have this effect, but maybe the Yaql guys can comment
> on that.
>
> If we change yaqluator to use this setting, it will mean that
> yaqluator is not consistent with Mistral, because Mistral uses YAQL
> without this engine option (if I use your example in a workflow, Mistral
> returns exactly what the yaqluator returns).
>
>
> Workflow:
>
> ---
> version: '2.0'
>
> test_yaql:
>   tasks:
> test_yaql:
>   action: std.noop
>   publish:
> output_expr: <% [1,2].join([3], true, [$1, $2]) %>
>
>
> Workflow result:
>
>
> [root@s53-19 ~(keystone_admin)]# mistral task-get-published
> 01d2bce3-20d0-47b2-84f2-7bd1cb2bf9f7
> {
> "output_expr": [
> [
> 1,
> 3
> ]
> ]
> }
>
>
> As Matthews pointed out, the yaqluator is indeed open source, and
> contributions are welcome.
>
> [1]
> https://github.com/ALU-CloudBand/yaqluator/commit/e523dacdde716d200b5ed1015543d4c4680c98c2
>
>
>
> From: Dougal Matthews 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Monday, 27 June 2016 at 16:44
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [mistral] [murano] [yaql] yaqluator bug
>
> On 27 June 2016 at 14:30, Alexey Khivin  wrote:
>
>> Hello, Moshe
>>
>> I recently discovered yaqluator.com for myself! Thanks for the useful
>> tool!
>>
>> But I was surprised to find that the expression
>> [1,2].join([3], true, [$1, $2])
>> evaluates to [[1,3]] on the yaqluator.
>>
>> At the same time, this expression evaluates correctly when I use the raw
>> yaql interpreter.
>>
>> Could we fix this issue?
>>
>> By the way, don't you want to make yaqluator open source? If you
>> transferred yaqluator to the OpenStack Foundation, the community would be
>> able to fix this kind of bug.
>>
>
> It looks like it is open source, there is a link in the footer:
> https://github.com/ALU-CloudBand/yaqluator
>
>
>>
>> Thank you!
>> Best regards, Alexey Khivin
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] [murano] [yaql] yaqluator bug

2016-07-04 Thread Renat Akhmerov
If I understand the meaning of the “join” function correctly, then from the
user perspective this behavior in Mistral and Yaqluator is a bug: we’re
joining two collections similar to how it works in SQL, so the correct result
should be:

[[1, 3], [2, 3]]

I.e. a collection consisting of two collections, where each element of the
first one is combined with each element of the second one.

If so, we need to fix this in both Mistral and Yaqluator.
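
As a plain-Python sketch of the SQL-like semantics I mean (the predicate
"true" keeps every pair, and the selector builds [$1, $2]):

    left, right = [1, 2], [3]
    # cross join: every element of the first collection combined with
    # every element of the second one
    joined = [[l, r] for l in left for r in right]
    print(joined)  # [[1, 3], [2, 3]]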


Alex, Stan, do you agree?

Renat Akhmerov
@Nokia

> On 28 Jun 2016, at 23:46, Elisha, Moshe (Nokia - IL)  
> wrote:
> 
> Hi,
> 
> Thank you for the kind words, Alexey.
> 
> I was able to reproduce your bug and I have also found the issue.
> 
> The problem is that we did not create the parser with the engine_options used 
> in the yaql library by default when using the CLI.
> Specifically, the "yaql.limitIterators" was missing… I am not sure that this
> setting should have this effect, but maybe the Yaql guys can comment on that.
> 
> If we change yaqluator to use this setting, it will mean that yaqluator
> is not consistent with Mistral, because Mistral uses YAQL without
> this engine option (if I use your example in a workflow, Mistral returns
> exactly what the yaqluator returns).
> 
> 
> Workflow:
> 
>> ---
>> version: '2.0'
>> 
>> test_yaql:
>>   tasks:
>> test_yaql:
>>   action: std.noop
>>   publish:
>> output_expr: <% [1,2].join([3], true, [$1, $2]) %>
> 
> Workflow result:
> 
> 
> [root@s53-19 ~(keystone_admin)]# mistral task-get-published 
> 01d2bce3-20d0-47b2-84f2-7bd1cb2bf9f7
> {
> "output_expr": [
> [
> 1,
> 3
> ]
> ]
> }
> 
> 
> As Matthews pointed out, the yaqluator is indeed open source, and
> contributions are welcome.
> 
> [1] 
> https://github.com/ALU-CloudBand/yaqluator/commit/e523dacdde716d200b5ed1015543d4c4680c98c2
>  
> 
> 
> 
> 
> From: Dougal Matthews <dou...@redhat.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Date: Monday, 27 June 2016 at 16:44
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [mistral] [murano] [yaql] yaqluator bug
> 
> On 27 June 2016 at 14:30, Alexey Khivin wrote:
>> Hello, Moshe 
>> 
>> I recently discovered yaqluator.com for myself!
>> Thanks for the useful tool!
>> 
>> But I was surprised to find that the expression
>> [1,2].join([3], true, [$1, $2])
>> evaluates to [[1,3]] on the yaqluator.
>> 
>> At the same time, this expression evaluates correctly when I use the raw
>> yaql interpreter.
>> 
>> Could we fix this issue?
>> 
>> By the way, don't you want to make yaqluator open source? If you
>> transferred yaqluator to the OpenStack Foundation, the community would be
>> able to fix this kind of bug.
> 
> It looks like it is open source, there is a link in the footer: 
> https://github.com/ALU-CloudBand/yaqluator 
> 
>  
>> 
>> Thank you!
>> Best regards, Alexey Khivin
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
>> 
>> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] kuryr-libnetwork split

2016-07-04 Thread Vikas Choudhary
Hello Everybody !!!

As we discussed in the meeting yesterday, I have submitted restructured
patches in kuryr-lib and kuryr-libnetwork to address only dropping
non-relevant code and adding kuryr-lib as a dependency of kuryr-libnetwork.

Please provide review comments so that I can quickly iterate on the patches;
eventually we will be able to move our existing under-review patches from
kuryr to kuryr-libnetwork as well.

Since RPC support is also ready, I will be submitting it to both repos as
separate patches (linked with a dependency).

Here is the link for base patchsets:
Kuryr-libnetwork:  https://review.openstack.org/#/c/337350/
kuryr-lib: https://review.openstack.org/#/c/336784/



Regards
Vikas

On Fri, Jul 1, 2016 at 6:40 PM, Antoni Segura Puimedon <
toni+openstac...@midokura.com> wrote:

> Hi fellow kuryrs!
>
> In order to proceed with the split of kuryr into a main lib and its kuryr
> libnetwork component, we've cloned the contents of openstack/kuryr over to
> openstack/kuryr-libnetwork.
>
> The idea is that after this, the patches that will go to openstack/kuryr
> will be to trim out the kuryr/kuryr-libnetwork specific parts and make a
> release of the common parts so that openstack/kuryr-libnetwork can start
> using it.
>
> I propose that we use python namespaces and the current common code in
> kuryr is moved to:
> kuryr/lib/
>
>
> which openstack/kuryr-libnetwork would import like so:
>
> from kuryr.lib import binding
>
> So, right now, patches in review that are for the Docker ipam or remote
> driver should be moved to openstack/kuryr-libnetwork, and soon we should
> make openstack/kuryr-libnetwork add kuryr-lib to the requirements.
>
> Regards,
>
> Toni
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Jamie Lennox
On 4 July 2016 at 19:58, Julien Danjou  wrote:

> On Mon, Jul 04 2016, Denis Makogon wrote:
>
> > What would be the best place to submit spec?
>
> The cross project spec repo?
>
>   https://git.openstack.org/cgit/openstack/openstack-specs/
>
> --
> Julien Danjou
> /* Free Software hacker
>https://julien.danjou.info */


The cross-project repo might be a good place to submit the spec - however,
keystoneauth is where you will likely want to implement this so that it is
available to all clients.
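
As a rough illustration only (keystoneauth has no async API today; the
Session and Password classes below are real, but the wrapper itself is
hypothetical), one possible shape is an asyncio helper that delegates the
synchronous session to an executor until a native async transport exists:

    import asyncio

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='demo', password='secret',
                       project_name='demo',
                       user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)

    async def async_get(url, **kwargs):
        # keystoneauth is synchronous, so run the call in the default
        # executor to keep the event loop responsive.
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None, lambda: sess.get(url, **kwargs))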


>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [gnocchi] profiling and benchmarking 2.1.x

2016-07-04 Thread gordon chung
hi folks,

i didn't get as far as i'd hoped, so i've decided to release what i have
currently and create another deck for more 'future enhancements'
benchmarking.

the following deck aims to show how performance changes as you scale out 
Ceph and some configuration options that can help stabilise Gnocchi+Ceph 
performance: http://www.slideshare.net/GordonChung/gnocchi-profiling-v2.

this is by no means a large scale architecture -- the tests are run
against tens of thousands of metrics currently -- but it's a start until we
get more data. i'm hoping to test against a larger dataset going
forward. i will also be testing some enhancements we've been discussing
for Gnocchi 3.x.

hope it helps.

cheers,


On 25/06/2016 8:50 AM, Curtis wrote:
> On Fri, Jun 24, 2016 at 2:09 PM, gordon chung  wrote:
>> hi,
>>
>> i realised i didn't post this beyond IRC, so here are some initial
>> numbers for some performance/benchmarking i did on Gnocchi.
>>
>> http://www.slideshare.net/GordonChung/gnocchi-profiling-21x
>>
>> as a headsup, the data above is using Ceph and with pretty much a
>> default configuration. i'm currently doing more tests to see how it
>> works if you actually start turning some knobs on Ceph (spoiler: it gets
>> better).
>
> Thanks Gordon. I will definitely take a look through your slides.
> Looking forward to what test results you get when you start turning
> some knobs.
>
> Thanks,
> Curtis.
>
>>
>> cheers,
>>
>> --
>> gord
>> ___
>> OpenStack-operators mailing list
>> openstack-operat...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tacker] Weekly meeting agenda - July 5, 2016

2016-07-04 Thread Sridhar Ramaswamy
https://wiki.openstack.org/wiki/Meetings/Tacker

1600 UTC Tuesday @ #openstack-meeting

   - Announcements
   - Audit Event Log - https://review.openstack.org/321370
   - Network Services Descriptor (NSD) - https://review.openstack.org/304667
   - Grooming Midcycle Meetup topics -
     https://etherpad.openstack.org/p/tacker-newton-midcycle
   - Open Discussion
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][alembic] Upgrade of db with alembic migration script

2016-07-04 Thread Sławek Kapłoński
Hello,

I'm currently working on a patch to add a QoS ingress bandwidth limit to
Neutron (https://review.openstack.org/303626) and I have a small problem
with the db upgrade with alembic.
Problem description:
In the qos_bandwidth_limit_rules table there is currently a foreign key
"qos_policy_id" with a unique constraint.
I need to add a new column called "direction" to this table and then remove
the unique constraint on qos_policy_id. At the end I need to add a new
unique constraint on the pair (direction, qos_policy_id).
To do that I need to:
1. remove the qos_policy_id foreign key
2. remove the unique constraint on qos_policy_id (because it is not removed
automatically)
3. add the new column
4. add the new unique constraint

Points 3 and 4 are easy and there is no problem with them.

The problem is with point 2 (removing the unique constraint).
To remove the qos_policy_id fk I used the function
neutron.db.migration.remove_fks_from_table(), and it works fine, but it does
not remove the unique constraint.
I made some modifications to this function:
https://review.openstack.org/#/c/303626/21/neutron/db/migration/__init__.py
and these modifications work fine on MySQL, but on PostgreSQL the unique
constraint is not removed, so in the end there are two constraints on the
table, which is wrong.

I'm not an expert in PostgreSQL or alembic. Maybe someone with more
experience can look at it and help me write such a migration script?
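
A minimal sketch of the upgrade I am after (the constraint and column names
are illustrative; reflection is used because MySQL and PostgreSQL
auto-generate different constraint names):

    from alembic import op
    import sqlalchemy as sa

    TABLE = 'qos_bandwidth_limit_rules'

    def upgrade():
        insp = sa.inspect(op.get_bind())

        # 1. drop the foreign key on qos_policy_id (its name differs per
        #    backend, so look it up instead of hard-coding it)
        for fk in insp.get_foreign_keys(TABLE):
            if fk['constrained_columns'] == ['qos_policy_id']:
                op.drop_constraint(fk['name'], TABLE, type_='foreignkey')

        # 2. drop the old unique constraint explicitly -- on PostgreSQL
        #    it is not removed together with the foreign key
        for uc in insp.get_unique_constraints(TABLE):
            if uc['column_names'] == ['qos_policy_id']:
                op.drop_constraint(uc['name'], TABLE, type_='unique')

        # 3. add the new column
        op.add_column(TABLE, sa.Column('direction', sa.String(length=8),
                                       nullable=False,
                                       server_default='egress'))

        # 4. add the new composite unique constraint and restore the fk
        op.create_unique_constraint(
            'uniq_qos_rules0qos_policy_id0direction', TABLE,
            ['qos_policy_id', 'direction'])
        op.create_foreign_key(
            'qos_bandwidth_limit_rules_qos_policy_id_fkey', TABLE,
            'qos_policies', ['qos_policy_id'], ['id'], ondelete='CASCADE')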

Thx in advance for any help.

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #86

2016-07-04 Thread Emilien Macchi
If you have any topic for our weekly meeting tomorrow, please add it here:
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160705

If no topic, we will postpone the meeting to next week.
Thanks,

On Tue, Jun 28, 2016 at 11:06 AM, Emilien Macchi  wrote:
> Meeting cancelled again, no topic this week.
> See you next week!
>
> On Mon, Jun 27, 2016 at 8:39 AM, Emilien Macchi  wrote:
>> Hi,
>>
>> If you have any topic for our meeting tomorrow, please add them on the 
>> etherpad:
>> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160628
>>
>> See you tomorrow,
>>
>> On Tue, Jun 21, 2016 at 10:59 AM, Emilien Macchi  wrote:
>>> Meeting cancelled, no topics this week.
>>>
>>> See you next week!
>>>
>>> On Mon, Jun 20, 2016 at 9:44 AM, Emilien Macchi  wrote:
 Hi Puppeteers!

 We'll have our weekly meeting tomorrow at 3pm UTC on
 #openstack-meeting-4.

 Here's a first agenda:
 https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160621

 Feel free to add more topics, and any outstanding bug and patch.

 See you tomorrow!
 Thanks,
 --
 Emilien Macchi
>>>
>>>
>>>
>>> --
>>> Emilien Macchi
>>
>>
>>
>> --
>> Emilien Macchi
>
>
>
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Stability and reliability of gate jobs

2016-07-04 Thread David Moreau Simard
I mentioned this on IRC to some extent but I'm going to post it here
for posterity.

I think we can all agree that Integration tests are pretty darn
important and I'm convinced I don't need to remind you why.
I'm going to reiterate that I am very concerned about the state of
the jobs but also about their coverage.

Kolla provides an implementation for a lot of the big tent projects,
but they are not properly (if at all) tested in the gate.
Only the core services are tested in an "all-in-one" fashion and if a
commit happens to break a project that isn't tested in that all-in-one
test, no one will know about it.

This is very dangerous territory -- you can't guarantee that what
Kolla supports really works on every commit.
Both Packstack [1] and Puppet-OpenStack [2] have an extensive matrix
of test coverage across different jobs and different operating systems
to work around the memory constraints of the gate virtual machines.
They test themselves with their project implementations in different
ways (i.e, glance with file, glance with swift, cinder with lvm,
cinder with ceph, neutron with ovs, neutron with linuxbridge, etc.)
and do so successfully.

I don't see why Kolla should be different if it is to be taken seriously.
My apologies if it feels I am being harsh - I am being open and honest
about Kolla's loss of credibility from my perspective.

I've put my attempts to put Kolla in RDO's testing pipeline on hold
for the Newton cycle.
I hope we can straighten out all of this -- I care about Kolla and I
want it to succeed, which is why I started this thread in the first
place.

While I don't really have the bandwidth to contribute to Kolla, I hope
you can at least consider my feedback and you can also find me on IRC
if you have questions.

[1]: https://github.com/openstack/packstack#packstack-integration-tests
[2]: https://github.com/openstack/puppet-openstack-integration#description

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]


On Thu, Jun 16, 2016 at 8:20 AM, Steven Dake (stdake)  wrote:
> David,
>
> The gates are unreliable for a variety of reasons - some we can fix - some
> we can't directly.
>
> RDO rabbitmq introduced IPv6 support to erlang, which caused our gate
> reliability to drop dramatically.  Prior to this change, our gate was running
> at 95% reliability or better - assuming the code wasn't busted.
> The gate gear is different - meaning different setup.  We have been
> working on debugging all these various gate provider issues with infra
> team and I think that is mostly concluded.
> The gate changed to something called bindep which has been less reliable
> for us.
> We do not have mirrors of CentOS repos - although it is in the works.
> Mirrors will ensure that images always get built.  At the moment many of
> the gate failures are triggered by build failures (the mirrors are too
> busy).
> We do not have mirrors of the other 5-10 repos and files we use.  This
> causes more build failures.
>
> Complicating matters, any of these 5 things above can crater one gate job
> of which we run about 15 jobs, which causes the entire gate to fail (if
> they were voting).  I really want a voting gate for kolla's jobs.  I super
> want it.  The reason we can't make the gates voting at this time is
> because of the sheer unreliability of the gate.
>
> If anyone is up for a thorough analysis of *why* the gates are failing,
> that would help us fix them.
>
> Regards
> -steve
>
> On 6/15/16, 3:27 AM, "Paul Bourke"  wrote:
>
>>Hi David,
>>
>>I agree with this completely. Gates continue to be a problem for Kolla,
>>reasons why have been discussed in the past but at least for me it's not
>>clear what the key issues are.
>>
>>I've added this item to agenda for todays IRC meeting (16:00 UTC -
>>https://wiki.openstack.org/wiki/Meetings/Kolla). It may help if before
>>hand we can brainstorm a list of the most common problems here beforehand.
>>
>>To kick things off, rabbitmq seems to cause a disproportionate amount of
>>issues, and the problems are difficult to diagnose, particularly when
>>the only way to debug is to submit "DO NOT MERGE" patch sets over and
>>over. Here's an example of a failed centos binary gate from a simple
>>patch set I was reviewing this morning:
>>http://logs.openstack.org/06/329506/1/check/gate-kolla-dsvm-deploy-centos-
>>binary/3486d03/console.html#_2016-06-14_15_36_19_425413
>>
>>Cheers,
>>-Paul
>>
>>On 15/06/16 04:26, David Moreau Simard wrote:
>>> Hi Kolla o/
>>>
>>> I'm writing to you because I'm concerned.
>>>
>>> In case you didn't already know, the RDO community collaborates with
>>> upstream deployment and installation projects to test it's packaging.
>>>
>>> This relationship is beneficial in a lot of ways for both parties, in
>>>summary:
>>> - RDO has improved test coverage (because it's otherwise hard to test
>>> different ways of installing, configuring and deploying OpenStack by
>>> ourselves)
>>> - The RDO community works with upstream 

Re: [openstack-dev] [Fuel] - Nominate Maksim Malchuk to Fuel Library Core

2016-07-04 Thread Maksim Malchuk
Hi All,

I really appreciate being a part of the Fuel project's core reviewer team!
Thank you all for the nomination and for the +1s!

On Mon, Jul 4, 2016 at 12:17 PM, Sergii Golovatiuk  wrote:

> Hi,
>
> Please welcome Maksim as he's just joined fuel-library Core Team!
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> On Tue, Jun 28, 2016 at 1:06 PM, Adam Heczko  wrote:
>
>> Although I'm not Fuel core, +1 from me to Maksim. Maksim is not only an
>> excellent engineer but also a very friendly and helpful person.
>>
>> On Tue, Jun 28, 2016 at 12:19 PM, Georgy Kibardin > > wrote:
>>
>>> +1
>>>
>>> On Tue, Jun 28, 2016 at 1:12 PM, Kyrylo Galanov 
>>> wrote:
>>>
 +1

 On Tue, Jun 28, 2016 at 1:16 AM, Matthew Mosesohn <
 mmoses...@mirantis.com> wrote:

> +1. Maksim is an excellent reviewer.
>
> On Mon, Jun 27, 2016 at 6:06 PM, Alex Schultz 
> wrote:
> > +1
> >
> > On Mon, Jun 27, 2016 at 9:04 AM, Bogdan Dobrelya <
> bdobre...@mirantis.com>
> > wrote:
> >>
> >> On 06/27/2016 04:54 PM, Sergii Golovatiuk wrote:
> >> > I am very sorry for sending without subject. I am adding subject
> to
> >> > voting and my +1
> >>
> >> +1 from my side!
> >>
> >> >
> >> > --
> >> > Best regards,
> >> > Sergii Golovatiuk,
> >> > Skype #golserge
> >> > IRC #holser
> >> >
> >> > On Mon, Jun 27, 2016 at 4:42 PM, Sergii Golovatiuk
> >> > mailto:sgolovat...@mirantis.com>>
> wrote:
> >> >
> >> > Hi,
> >> >
> >> > I would like to nominate Maksim Malchuk to Fuel-Library Core
> team.
> >> > He’s been doing a great job so far [0]. He’s #2 reviewer and
> #2
> >> > contributor with 28 commits for last 90 days [1][2].
> >> >
> >> > Fuelers, please vote with +1/-1 for approval/objection.
> Voting will
> >> > be open until July of 4th. This will go forward after voting
> is
> >> > closed if there are no objections.
> >> >
> >> > Overall contribution:
> >> > [0] http://stackalytics.com/?user_id=mmalchuk
> >> > Fuel library contribution for last 90 days:
> >> > [1]
> >> > 
> http://stackalytics.com/report/contribution/fuel-library/90
> >> > http://stackalytics.com/report/users/mmalchuk
> >> > List of reviews:
> >> > [2]
> >> >
> https://review.openstack.org/#/q/reviewer:%22Maksim+Malchuk%22+status:merged,n,z
> >> > --
> >> > Best regards,
> >> > Sergii Golovatiuk,
> >> > Skype #golserge
> >> > IRC #holser
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >
> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> >> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >>
> >> --
> >> Best regards,
> >> Bogdan Dobrelya,
> >> Irc #bogdando
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Adam Heczko
>> Security Engineer @ Mirantis Inc.
>>

Re: [openstack-dev] [nova] gate testing with lvm imagebackend

2016-07-04 Thread Chris Friesen

On 05/10/2016 04:47 PM, Matt Riedemann wrote:

On 5/10/2016 5:14 PM, Chris Friesen wrote:



For what it's worth, we've got internal patches to enable cold migration
and resize for LVM-backed instances.  We've also got a proof of concept
to enable thin-provisioned LVM to get rid of the huge
wipe-volume-on-deletion cost.



I'd be interested if you want to push the changes up as a WIP, then we could run
my devstack-gate change with your series as a dependency and see what kind of
fallout there is.

I also need to check if there are any tempest tests that test resize/migrate
from a volume-backed instance and if those are passing on this.



I finally got around to massaging our "allow resize/migration for LVM-backed
instances" patch into something that should work upstream:


https://review.openstack.org/#/c/337334

I'm not totally sure the logic is 100% correct; I haven't worked through all
the cases.  It's possible there are still some assumptions in there based on
our configuration.


Let me know what you think...maybe this could be O-release material?

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glare] No meeting today - 07/04/2016

2016-07-04 Thread Mikhail Fedosin
Hi,

We’re cancelling the team meeting today because a number of team members are
on holiday or won’t be able to attend.

Best regards,
Mikhail Fedosin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday July 5th at 19:00 UTC

2016-07-04 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday July 5th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting

Anyone is welcome to add agenda items and everyone interested in
the project infrastructure and process surrounding automated testing
and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-06-28-19.03.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-06-28-19.03.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-06-28-19.03.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila][stable] liberty periodic bitrot jobs have been failing more than a week

2016-07-04 Thread Ben Swartzlander

On 07/03/2016 09:19 AM, Matt Riedemann wrote:

On 7/1/2016 8:18 PM, Ravi, Goutham wrote:

Thanks Matt.

https://review.openstack.org/#/c/334220 adds the upper constraints.

--
Goutham


On 7/1/16, 5:08 PM, "Matt Riedemann"  wrote:

The manila periodic stable/liberty jobs have been failing for at least a
week.

It looks like manila isn't using upper-constraints when running unit
tests, not even on stable/mitaka or master. So in liberty it's pulling
in uncapped oslo.utils even though the upper constraint for oslo.utils
in liberty is 3.2.

Who from the manila team is going to be working on fixing this, either
via getting upper-constraints in place in the tox.ini for manila (on all
supported branches) or performing some kind of workaround in the code?



Thanks.

I noticed that there is no Tempest / devstack job run against the
stable/liberty change - why is there no integration testing of Manila in
stable/liberty outside of 3rd party CI (which is not voting)?


Matt, this is why: https://review.openstack.org/#/c/286497/

-Ben





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Smaug] - IRC Meeting today (05/07) - 0900 UTC

2016-07-04 Thread Saggi Mizrahi
Hi All,

We will hold our weekly IRC meeting today (Tuesday, 05/07) at 0900
UTC in #openstack-meeting

Please review the proposed meeting agenda here:
https://wiki.openstack.org/wiki/Meetings/smaug

Please feel free to add to the agenda any subject you would like to
discuss.

Thanks,
Saggi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] Fail to install murano in Pro environment / Ansible cookbook for murano

2016-07-04 Thread Kirill Zaitsev
Hi Alioune!

Can you please share which versions of murano and of your existing cloud you
have?

I would bet that you’re using stable/liberty murano, or earlier. The code you
mentioned was refactored in stable/mitaka [1]. This was done because
oslo.messaging merged a commit [2] that changed a couple of interfaces
murano used. AFAIK this commit triggered a major version release of
oslo.messaging (5.0.0); at least according to gerrit, 5.0.0 is the first
release it is included in.
Then again, in stable/liberty the upper constraint for oslo.messaging was 2.5.0
[3]. However, it is not capped in requirements [4].

So the problem here (and that’s my educated guess =)) is that tox is installing
a version of the oslo.messaging library that is too recent for murano to use on
stable/liberty. A quick solution for your case would be to downgrade your
oslo.messaging to 2.5.0.

A longer and better solution would be to teach our tox to honor
upper-constraints for stable branches, I guess, since they’re not always
backward compatible, as we can see here.
Hope this helps!

Also, please note that this ML is not for usage questions — the question is
better suited for openst...@lists.openstack.org
Or you can come to #murano on IRC =) We try to keep it alive and you’ll be able
to get answers quicker.

[1] 
https://review.openstack.org/#/q/Ied7acaeed6edde17d2d6f073c304021e22811409,n,z 
[2] https://review.openstack.org/#/c/297988/ 
[3] 
https://github.com/openstack/requirements/blob/stable/liberty/upper-constraints.txt#L196
[4] 
https://github.com/openstack/requirements/blob/stable/liberty/global-requirements.txt#L90
 
-- 
Kirill Zaitsev
Murano Project Tech Lead
Software Engineer at
Mirantis, Inc

On 4 July 2016 at 17:34:40, Alioune (baliou...@gmail.com) wrote:

Hi all,

I'm trying to install murano on an existing OpenStack platform following [1].
The installation process fails at step [2] with the error below.
Any suggestions for solving this?
Is there an existing murano playbook for easy deployment with Ansible?


You can find my murano.conf in [3]

[1] http://docs.openstack.org/developer/murano/install/manual.html

[2] sudo  tox -e venv -- murano-api --config-file ./etc/murano/murano.conf

[3] 
https://drive.google.com/file/d/0B-bfET74f0WZd2N6WGR0Szh1UUE/view?usp=sharing


ERROR:
2016-07-04 14:06:59.960 9251 WARNING keystonemiddleware.auth_token [-] Use of 
the auth_admin_prefix, auth_host, auth_port, auth_protocol, identity_uri, 
admin_token, admin_user, admin_password, and admin_tenant_name configuration 
options was deprecated in the Mitaka release in favor of an auth_plugin and its 
related options. This class may be removed in a future release.
2016-07-04 14:06:59.960 9251 WARNING keystonemiddleware.auth_token [-] 
Configuring admin URI using auth fragments was deprecated in the Kilo release, 
and will be removed in the N release, use 'identity_uri\ instead.
2016-07-04 14:07:00.252 9251 CRITICAL murano [-] TypeError: __init__() takes 
exactly 3 arguments (5 given)
2016-07-04 14:07:00.252 9251 ERROR murano Traceback (most recent call last):
2016-07-04 14:07:00.252 9251 ERROR murano   File ".tox/venv/bin/murano-api", 
line 10, in <module>
2016-07-04 14:07:00.252 9251 ERROR murano sys.exit(main())
2016-07-04 14:07:00.252 9251 ERROR murano   File 
"/home/vagrant/murano/murano/cmd/api.py", line 66, in main
2016-07-04 14:07:00.252 9251 ERROR murano 
launcher.launch_service(server.get_notification_service())
2016-07-04 14:07:00.252 9251 ERROR murano   File 
"/home/vagrant/murano/murano/common/server.py", line 235, in 
get_notification_service
2016-07-04 14:07:00.252 9251 ERROR murano NOTIFICATION_SERVICE = 
_prepare_notification_service(str(uuid.uuid4()))
2016-07-04 14:07:00.252 9251 ERROR murano   File 
"/home/vagrant/murano/murano/common/server.py", line 219, in 
_prepare_notification_service
2016-07-04 14:07:00.252 9251 ERROR murano [s_target], endpoints, None, True)
2016-07-04 14:07:00.252 9251 ERROR murano TypeError: __init__() takes exactly 3 
arguments (5 given)
2016-07-04 14:07:00.252 9251 ERROR murano
ERROR: InvocationError: '/home/vagrant/murano/.tox/venv/bin/murano-api 
--config-file etc/murano/murano.conf'
_ summary 
_
ERROR:   venv: commands failed
__  
OpenStack Development Mailing List (not for usage questions)  
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][ironic] My thoughts on Kolla + BiFrost integration

2016-07-04 Thread Michał Jastrzębski
I'd be cautious about how much customization we allow. But don't
forget that Kolla itself and BiFrost will be effectively separate. Run
bifrost, run every customization playbook you want on the host, run the
kolla-host-bootstrap playbook to install docker and related pieces, and
then run kolla. This will not be a single-step operation, so you can do
stuff in between.

Cheers,
Michal

On 4 July 2016 at 03:17, Stephen Hindle  wrote:
> Hi Steve
>
>   I'm just suggesting the bi-frost stuff allow sufficient 'hooks' for
> operators to insert site specific setup.  Not that Kolla/Bi-Frost try
> to handle 'everything'.
>
>   For instance LDAP... Horizon integration with LDAP would certainly
> be within Kolla's purview.  However, operators also use LDAP for login
> access to the host via PAM.  This is site-specific, and outside of
> Kolla's mission.
>
>   As an example of 'respecting existing configuration' - some sites
> may use OpenVSwitch for host level networking.  Kolla currently starts
> a new openvswitchdb container without checking if OpenVSwitch is
> already running - this kills the host networking.
>
>   If you'll pardon the pun, there are a 'host' of situations like
> this, where operators will have to provide
> (possibly many/detailed) site specific configurations to a bare metal
> host.  Networking, Security, Backups, Monitoring/Logging, etc.  These
> may all be subject to corporate wide policies that are non-negotiable
> and have to be followed.
>
>   Again, I realize Kolla/BiFrost can not be everything for everyone.
> I just want to suggest that we provide well documented methods for
> operators to insert site-specific roles/plays/whatever, and that we
> take care to avoid 'stepping on' things.
>
>   I have no idea as to what/how-many 'hooks' would be required.  I
> tend to think a simple 'before-bifrost' role and 'after-bifrost' role
> would be enough.  However, others may have more input on that...
> I like the idea of using roles, as that would allow you to centralize
> all your 'site-specific' bits.  This way operators don't have to
> modify the existing kolla/BiFrost stuff.
>
>
> On Sat, Jul 2, 2016 at 3:10 PM, Steven Dake (stdake)  wrote:
>> Stephen,
>>
>> Responses inline.
>>
>> On 7/1/16, 11:35 AM, "Stephen Hindle"  wrote:
>>
>>>Maybe I missed it - but is there a way to provide site specific
>>>configurations?  Things we will run into in the wild include:
>>>Configuring multiple non-openstack nics
>>
>> We don't have anything like this at present or planned.  Would you mind
>> explaining the use case?  Typically we in the Kolla community expect a
>> production deployment to only deploy OpenStack, and not other stacks on
>> top of the bare metal hardware.  This is generally considered best
>> practice at this time, unless of course you're deploying something on top of
>> OpenStack that may need these nics.  The reason is that OpenStack itself
>> managed alongside another application doesn't know what it doesn't know
>> and can't handle capacity management or any of a number of other things
>> required to make an OpenStack cloud operate.
>>
>>> IPMI configuration
>>
>> BiFrost includes IPMI integration - assumption being we will just use
>> whatever BiFrost requires here for configuration.
>>
>>> Password integration with Corporate LDAP etc.
>>
>> We have been asked several times for this functionality, and it will come
>> naturally during either Newton or Occata.
>>
>>> Integration with existing SANs
>>
>> Cinder integrates with SANs, and in Newton, we have integration with
>> iSCSI.  Unfortunately because of some controversy around how glance should
>> provide images with regards to cinder, using existing SAN gear with iSCSI
>> integration as is done by Cinder may not work as expected in a HA setup.
>>
>>> Integration with existing corporate IPAM
>>
>> No idea
>>
>>> Corporate Security policy (firewall rules, sudo groups,
>>>hosts.allow, ssh configs,etc)
>>
>> This is environment specific and it's hard to make a promise on what we
>> could deliver in a generic way that would be usable by everyone.
>> Therefore our generic implementation will be the "bare minimum" to get the
>> system into an operational state.  The things listed above are outside the
>> "bare minimum" iiuc.
>>
>>>
>>>Thats just off the top of my head - I'm sure we'll run into others.  I
>>>tend to think the best way
>>>to approach this is to allow some sort of 'bootstrap' role, that could
>>>be populated by the
>>>operators.  This should initially be empty (Kolla specific 'bootstrap'
>>
>> Our bootstrap playbook is for launching BiFrost and bringing up the bare
>> metal machines with an SSH credential.  It appears from this thread we
>> will have another playbook to do the bare metal initialization (things
>> like turning off firewalld, turning on chrony, i.e. making the bare metal
>> environment operational for OpenStack)
>>
>> I think what you want is a third playbook which really belongs in the
>> domain of 

Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Igor Kalnitsky
Denis Makogon wrote:
> I'm using an event loop with the uvloop policy, so I must stay non-blocked
> within the main thread and not mess with the GIL by instantiating a new
> thread.

You won't be blocked or "messed up" with the GIL as long as the new thread is
I/O-bound. A background thread is a good solution in such cases, and
it's used by many libraries under the hood (for instance, pymongo).

- Igor

On Mon, Jul 4, 2016 at 5:09 PM, Denis Makogon  wrote:
>
>
> 2016-07-04 16:47 GMT+03:00 Roman Podoliaka :
>>
>> That's exactly what https://github.com/koder-ua/os_api is for: it
>> polls status changes in a separate thread and then updates the
>> futures, so that you can wait on multiple futures at once.
>>
>
> This is exactly what I want to avoid - a new thread. I'm using an event loop
> with the uvloop policy, so I must stay non-blocked within the main thread and
> not mess with the GIL by instantiating a new thread. With the coroutines
> concept from asyncio I can do non-blocking operations relying on EPOLL under
> the hood.
>
> Kind regards,
> Denys Makogon
>
>>
>> On Mon, Jul 4, 2016 at 2:19 PM, Denis Makogon 
>> wrote:
>> >
>> >
>> > 2016-07-04 13:22 GMT+03:00 Roman Podoliaka :
>> >>
>> >> Denis,
>> >>
>> >> > A major problem appears when you try to provision a resource that
>> >> > requires some time to reach an ACTIVE/COMPLETED state (like a nova
>> >> > instance, stack, trove database, etc.) and you have to poll for status
>> >> > changes; in general, polling requires sending HTTP requests within a
>> >> > specific time frame defined by the number of polling retries and the
>> >> > delays between them (almost all PaaS solutions in OpenStack do this,
>> >> > which might be acceptable for distributed backend services, but not
>> >> > for async frameworks).
>> >>
>> >> How would an asynchronous client help you avoid polling here? You'd
>> >> need some sort of a streaming API producing events on the server side.
>> >>
>> >
>> > No, it would not help me to get rid of polling, but using async requests
>> > will allow me to proceed with the next independent async tasks while
>> > awaiting the result of an async HTTP request.
>> >
>> >>
>> >> If you are simply looking for a better API around polling in OS
>> >> clients, take a look at https://github.com/koder-ua/os_api , which is
>> >> based on futures (be aware that HTTP requests are still *synchronous*
>> >> under the hood).
>> >>
>> >> Thanks,
>> >> Roman
>> >>
>> >>
>> >> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano] Fail to install murano in Pro environment / Ansible cookbook for murano

2016-07-04 Thread Alioune
Hi all,

I'm trying to install murano on an existing OpenStack platform following [1].
The installation process fails at step [2] with the error below.
Any suggestions for solving this?
Is there an existing murano playbook for easy deployment with Ansible?


You can find my murano.conf in [3]

[1] http://docs.openstack.org/developer/murano/install/manual.html

[2] sudo  tox -e venv -- murano-api --config-file ./etc/murano/murano.conf

[3]
https://drive.google.com/file/d/0B-bfET74f0WZd2N6WGR0Szh1UUE/view?usp=sharing


ERROR:
2016-07-04 14:06:59.960 9251 WARNING keystonemiddleware.auth_token [-] Use
of the auth_admin_prefix, auth_host, auth_port, auth_protocol,
identity_uri, admin_token, admin_user, admin_password, and
admin_tenant_name configuration options was deprecated in the Mitaka
release in favor of an auth_plugin and its related options. This class may
be removed in a future release.
2016-07-04 14:06:59.960 9251 WARNING keystonemiddleware.auth_token [-]
Configuring admin URI using auth fragments was deprecated in the Kilo
release, and will be removed in the N release, use 'identity_uri' instead.
2016-07-04 14:07:00.252 9251 CRITICAL murano [-] TypeError: __init__()
takes exactly 3 arguments (5 given)
2016-07-04 14:07:00.252 9251 ERROR murano Traceback (most recent call last):
2016-07-04 14:07:00.252 9251 ERROR murano   File
".tox/venv/bin/murano-api", line 10, in <module>
2016-07-04 14:07:00.252 9251 ERROR murano sys.exit(main())
2016-07-04 14:07:00.252 9251 ERROR murano   File
"/home/vagrant/murano/murano/cmd/api.py", line 66, in main
2016-07-04 14:07:00.252 9251 ERROR murano
launcher.launch_service(server.get_notification_service())
2016-07-04 14:07:00.252 9251 ERROR murano   File
"/home/vagrant/murano/murano/common/server.py", line 235, in
get_notification_service
2016-07-04 14:07:00.252 9251 ERROR murano NOTIFICATION_SERVICE =
_prepare_notification_service(str(uuid.uuid4()))
2016-07-04 14:07:00.252 9251 ERROR murano   File
"/home/vagrant/murano/murano/common/server.py", line 219, in
_prepare_notification_service
2016-07-04 14:07:00.252 9251 ERROR murano [s_target], endpoints, None,
True)
2016-07-04 14:07:00.252 9251 ERROR murano TypeError: __init__() takes
exactly 3 arguments (5 given)
2016-07-04 14:07:00.252 9251 ERROR murano
ERROR: InvocationError: '/home/vagrant/murano/.tox/venv/bin/murano-api
--config-file etc/murano/murano.conf'
_ summary
_
ERROR:   venv: commands failed
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Denis Makogon
2016-07-04 16:47 GMT+03:00 Roman Podoliaka :

> That's exactly what https://github.com/koder-ua/os_api is for: it
> polls status changes in a separate thread and then updates the
> futures, so that you can wait on multiple futures at once.
>
>
This is exactly what I want to avoid - a new thread. I'm using an event loop
with the uvloop policy, so I must stay non-blocked within the main thread and
not mess with the GIL by instantiating a new thread. With the coroutines
concept from asyncio I can do non-blocking operations relying on EPOLL under
the hood.
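
A minimal sketch of what I mean (the fetch_status callable is illustrative --
no such async OpenStack client exists today):

    import asyncio

    import uvloop

    asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())

    async def wait_for_active(fetch_status, retries=60, delay=5):
        # poll the resource's status without blocking the event loop;
        # other coroutines keep running during each sleep
        for _ in range(retries):
            status = await fetch_status()   # a non-blocking HTTP call
            if status == 'ACTIVE':
                return status
            await asyncio.sleep(delay)
        raise TimeoutError('resource did not become ACTIVE in time')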

Kind regards,
Denys Makogon


> On Mon, Jul 4, 2016 at 2:19 PM, Denis Makogon 
> wrote:
> >
> >
> > 2016-07-04 13:22 GMT+03:00 Roman Podoliaka :
> >>
> >> Denis,
> >>
> >> > A major problem appears when you try to provision a resource that
> >> > requires some time to reach an ACTIVE/COMPLETED state (like a nova
> >> > instance, stack, trove database, etc.) and you have to poll for status
> >> > changes; in general, polling requires sending HTTP requests within a
> >> > specific time frame defined by the number of polling retries and the
> >> > delays between them (almost all PaaS solutions in OpenStack do this,
> >> > which might be acceptable for distributed backend services, but not
> >> > for async frameworks).
> >>
> >> How would an asynchronous client help you avoid polling here? You'd
> >> need some sort of a streaming API producing events on the server side.
> >>
> >
> > No, it would not help me to get rid of polling, but using async requests
> > will allow me to proceed with the next independent async tasks while
> > awaiting the result of an async HTTP request.
> >
> >>
> >> If you are simply looking for a better API around polling in OS
> >> clients, take a look at https://github.com/koder-ua/os_api , which is
> >> based on futures (be aware that HTTP requests are still *synchronous*
> >> under the hood).
> >>
> >> Thanks,
> >> Roman
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [storlets] changes to .functests

2016-07-04 Thread eran

Hi all,
patch https://review.openstack.org/#/c/334878/
introduces a (bonus) change to the functests.
The functests script need to get a flavour: jenkins | dev
the dev flavour skips slow tests such as 1GB PUT.
When invoked from tox it uses the jenkins flavour which executes everything.
this is implemented using python's unit test attributes.

Comments are welcome.
Thanks,
Eran


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] No meeting today - 07/04/2016

2016-07-04 Thread Renat Akhmerov
Hi,

We’re cancelling the team meeting today because a number of team members are
on holiday or won’t be able to attend.

Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Roman Podoliaka
That's exactly what https://github.com/koder-ua/os_api is for: it
polls status changes in a separate thread and then updates the
futures, so that you can wait on multiple futures at once.

On Mon, Jul 4, 2016 at 2:19 PM, Denis Makogon  wrote:
>
>
> 2016-07-04 13:22 GMT+03:00 Roman Podoliaka :
>>
>> Denis,
>>
>> > A major problem appears when you try to provision a resource that
>> > requires some time to reach an ACTIVE/COMPLETED state (like a nova
>> > instance, stack, trove database, etc.) and you have to poll for status
>> > changes; in general, polling requires sending HTTP requests within a
>> > specific time frame defined by the number of polling retries and the
>> > delays between them (almost all PaaS solutions in OpenStack do this,
>> > which might be acceptable for distributed backend services, but not
>> > for async frameworks).
>>
>> How would an asynchronous client help you avoid polling here? You'd
>> need some sort of a streaming API producing events on the server side.
>>
>
> No, it would not help me to get rid of polling, but using async requests
> will allow me to proceed with the next independent async tasks while
> awaiting the result of an async HTTP request.
>
>>
>> If you are simply looking for a better API around polling in OS
>> clients, take a look at https://github.com/koder-ua/os_api , which is
>> based on futures (be aware that HTTP requests are still *synchronous*
>> under the hood).
>>
>> Thanks,
>> Roman
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Fail build request if we can't inject files?

2016-07-04 Thread Matt Riedemann

On 7/4/2016 3:40 AM, Daniel P. Berrange wrote:


Won't the user provided files also get made available by the config drive /
metadata service ?  I think that's the primary reason for file injection not
being a fatal problem. Oh that and the fact that we've wanted to kill it for
at least 3 years now :-)

Regards,
Daniel



Ugh, good point, except force_config_drive defaults to False and running 
the metadata service is optional.


In the case of this failing in the tempest-dsvm-neutron-full-ssh job, 
the instance is not created with a config drive, but the metadata 
service is running. Tempest doesn't check for the files there though 
because it's configured to expect file injection to work, so it ssh's 
into the guest and looks for the files.


I have several changes up related to this:

https://review.openstack.org/#/q/topic:bug/1598581

One is making Tempest disable file injection tests by default since Nova 
disables file injection by default (at least for the libvirt driver).


Another is changing devstack to actually configure nova/tempest for file 
injection which is what the job should have been doing anyway.
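A sketch of the settings involved (option names as I understand them;
treat this as illustrative, not authoritative):

    # nova.conf -- enable file injection in the libvirt driver
    [libvirt]
    inject_partition = -1   # the default of -2 disables injection entirely

    # tempest.conf -- tell Tempest whether personality/file injection works
    [compute-feature-enabled]
    personality = true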


My nova fix is not going to fly because of config drive (which I could 
check from the virt driver) and the metadata service (which I can't from 
the virt driver). So I guess the best we can do is log something...


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Fail build request if we can't inject files?

2016-07-04 Thread Roman Podoliaka
Hi all,

> Won't the user provided files also get made available by the config drive /
> metadata service ?

I believe, they should.

Not sure it's the same problem, so just FYI: we recently encountered
an issue with VFAT formatted config drives when nova-compute is
deployed on CentOS or RHEL:

https://bugs.launchpad.net/cirros/+bug/1598783
https://bugs.launchpad.net/mos/+bug/1587960

Thanks,
Roman

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Senlin] Deprecation roadmap for Heat's Autoscaling resources?

2016-07-04 Thread Johannes Grassler

Hello,

On 07/04/2016 02:52 PM, Thomas Herve wrote:

On Mon, Jul 4, 2016 at 2:06 PM, Johannes Grassler  wrote:

[...]

Right now OS::Heat::AutoScalingGroup and friends still exist, even in the
master branch. Are they slated for removal at some stage or will they remain
available even as Senlin becomes the go-to mechanism for autoscaling?


They will remain available for the foreseeable future. We just don't
have any great plans to improve them currently.


Ok, good to know that. Thank you!

Cheers,

Johannes

--
Johannes Grassler, Cloud Developer
SUSE Linux GmbH, HRB 21284 (AG Nürnberg)
GF: Felix Imendörffer, Jane Smithard, Graham Norton
Maxfeldstr. 5, 90409 Nürnberg, Germany

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Senlin] Deprecation roadmap for Heat's Autoscaling resources?

2016-07-04 Thread Thomas Herve
On Mon, Jul 4, 2016 at 2:06 PM, Johannes Grassler  wrote:
> Hello,
>
> I just noticed the note at the top of
> :
>
>  | The content on this page is obsolete. The autoscaling solution is
> offloaded from Heat to Senlin since Mitaka.
>
> Right now OS::Heat::AutoScalingGroup and friends still exist, even in the
> master branch. Are they slated for removal at some stage or will they remain
> available even as Senlin becomes the go-to mechanism for autoscaling?

Hi,

They will remain available for the foreseeable future. We just don't
have any great plans to improve them currently.

-- 
Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Nodes management in our shiny new TripleO API

2016-07-04 Thread Sanjay Upadhyay
On Mon, Jul 4, 2016 at 5:12 PM, Steven Hardy  wrote:

> Hi Dmitry,
>
> I wanted to revisit this thread, as I see some of these interfaces
> are now posted for review, and I have a couple of questions around
> the naming (specifically for the "provide" action):
>
> On Thu, May 19, 2016 at 03:31:36PM +0200, Dmitry Tantsur wrote:
> 
> > The last step before the deployment it to make nodes "available" using
> the
> > "provide" provisioning action. Such nodes are exposed to nova, and can be
> > deployed to at any moment. No long-running configuration actions should
> be
> > run in this state. The "manage" action can be used to bring nodes back to
> > "manageable" state for configuration (e.g. reintrospection).
>
> So, I've been reviewing https://review.openstack.org/#/c/334411/ which
> implements support for "openstack overcloud node provide"
>
> I really hate to be the one nitpicking over openstackclient verbiage, but
> I'm a little unsure if the literal translation of this results in an
> intuitive understanding of what happens to the nodes as a result of this
> action. So I wanted to have a broader discussion before we land the code
> and commit to this interface.
>
> More info below:
> 
> > what do you propose?
> > 
> >
> > I would like the new TripleO mistral workflows to start following the
> ironic
> > state machine closer. Imagine the following workflows:
> >
> > 1. register: take JSON, create nodes in "manageable" state. I do believe
> we
> > can automate the enroll->manageable transition, as it serves the purpose
> of
> > validation (and discovery, but lets put it aside).
> >
> > 2. provide: take a list of nodes or all "manageable" nodes and move them
> to
> > "available". By using this workflow an operator will make a *conscious*
> > decision to add some nodes to the cloud.
>
> Here, I think the problem is that while the dictionary definition of
> "provide" is "make available for use, supply" (according to google), it
> implies obtaining the node, not just activating it.
>
> So, to me "provide node" implies going and physically getting the node that
> does not yet exist, but AFAICT what this action actually does is take an
> existing node, and activates it (sets it to "available" state)
>
> I'm worried this could be a source of operator confusion - has this
> discussion already happened in the Ironic community, or is this a TripleO
> specific term?
>
> To me, something like "openstack overcloud node enable" or maybe "node
> activate" would be more intuitive, as it implies taking an existing node
> from the inventory and making it active/available in the context of the
> overcloud deployment?
>


My 2 cents: as an operator, the part wherein a node is enrolled, manageable,
or available is a bit confusing to first-timers.
It would be simpler to have just two groups, i.e. all baremetal nodes (the
nodes in the enrolled or manageable states) and all cluster nodes (the nodes
in either the available or deployed states).

I do not know if there is a deployed state :)
regards
/sanjay


>
> Anyway, not a huge issue, but given that this is a new step in our nodes
> workflow, I wanted to ensure folks are comfortable with the terminology
> before we commit to it in code.
>
> Thanks!
>
> Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [charms] inclusion of charm-helpers (LGPL licensed)

2016-07-04 Thread James Page
Hi Billy

On Thu, 30 Jun 2016 at 17:24 Billy Olsen  wrote:

> I suspect that the reactive charming model wouldn't have this issue
> due to the ability to essentially statically link the libraries via
> wheels/pip packages. If that's the case, it's likely possible to
> follow along the same lines as the base-layer charm and bootstrap the
> environment using pip/wheel libraries included at build time. As I see
> it, this would require:
>
> * Updates to the process/tooling for pushing to the charm store
> * Update the install/upgrade-charm hook to bootstrap the environment
> with the requirements files
> * If using virtualenv (not a requirement in my mind), then each of the
> hooks needs to be bootstrapped to ensure that they are running within
> the virtualenv.
>

I was thinking of something along those lines as well.  I'll spike on
something this week to figure out exactly how this might work.
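As a starting point, the bootstrap could look something like this (a rough
sketch; the paths, the wheelhouse layout, and the virtualenv use are all
assumptions):

    #!/bin/bash
    # install/upgrade-charm hook: install the charm's Python deps from
    # wheels shipped inside the charm, without touching the network
    CHARM_DIR=${CHARM_DIR:-$(dirname "$0")/..}
    python -m virtualenv "$CHARM_DIR/.venv"
    "$CHARM_DIR/.venv/bin/pip" install --no-index \
        --find-links "$CHARM_DIR/wheelhouse" -r "$CHARM_DIR/requirements.txt"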


> To make life easier in development mode, the charms can download the
> libraries in a build step before deployment, though certainly for the published
> versions the statically linked libraries should be included (which,
> from my understanding, I believe the licensing allows and why the
> reactive charming/layered model wouldn't have this issue).
>

That sounds like a neat idea (although building out a layered charm is
pretty easy as well).
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] retiring puppet-openstack-release-tools

2016-07-04 Thread Emilien Macchi
Hi,

Before managing releases with the OpenStack release team, we had our own
scripts. I suggest we retire the repo to avoid confusion and do some
cleanup.
I'll propose a patch that retires
openstack/puppet-openstack-release-tools; feel free to give feedback
on this proposal.

Note: most of the tools were written by Mathieu Gagné, thanks again
for his work, it was super useful for us.
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] upgrades vs rootwrap

2016-07-04 Thread Sean McGinnis
On Mon, Jul 04, 2016 at 01:59:09PM +0200, Thierry Carrez wrote:
[...]
> The issue here is that oslo.rootwrap uses config files to determine
> what to allow, but those are not really configuration files as far
> as the application using them is concerned. Those must match the
> code being executed.
> 
> So from Grenade perspective, those should really not be considered
> configuration files, but application files.
[...]

+1

I have to agree with this perspective. They are config files, but they
are a special type of config file that is closely tied in to the code. I
think we should treat them as application files.

I say we allow these changes for grenade and move forward on this. I
think we all agree we want to move to privsep. As long as we document
this very clearly that these changes need to be made for upgrades, I'm
OK with that.

I would really like to be able to decide on this and move forward. I'm
afraid sticking with rootwrap for another cycle will just confuse things
and compound our issues.

Sean


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat][Senlin] Deprecation roadmap for Heat's Autoscaling resources?

2016-07-04 Thread Johannes Grassler

Hello,

I just noticed the note at the top of
:

 | The content on this page is obsolete. The autoscaling solution is offloaded 
from Heat to Senlin since Mitaka.

Right now OS::Heat::AutoScalingGroup and friends still exist, even in the 
master branch. Are they slated for removal at some stage or will they remain 
available even as Senlin becomes the go-to mechanism for autoscaling?

Cheers,

Johannes

--
Johannes Grassler, Cloud Developer
SUSE Linux GmbH, HRB 21284 (AG Nürnberg)
GF: Felix Imendörffer, Jane Smithard, Graham Norton
Maxfeldstr. 5, 90409 Nürnberg, Germany

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] upgrades vs rootwrap

2016-07-04 Thread Thierry Carrez

Angus Lees wrote:

Ack. Ok .. so what's the additional difficulty around config files?  It
sounds like we can replace all the config files with something
completely different during the update phase, if we wanted to do so.
Indeed, it sounds like there isn't even a need to manage a deprecation
period for config files since there will never be mismatched code+config
(good, means fewer poorly tested legacy combinations in code).

Specifically, it seems grenade in both doc and code currently describes
something quite a bit stricter than this.  I think what we want is more
like "use devstack to deploy old; run/test; **use devstack to deploy
new** but pointing at existing DB/state_path from old; run/test,
interact with things created with old, etc".

A solution to our immediate rootwrap issue would be to just copy over
the rootwrap configs from 'new' during upgrade always, and this
shouldn't even be controversial.  I can't read body language over email,
so .. is everyone ok with this?  Why is this not what everyone was
jumping to suggest before now?


The issue here is that oslo.rootwrap uses config files to determine what 
to allow, but those are not really configuration files as far as the 
application using them is concerned. Those must match the code being 
executed.


So from Grenade perspective, those should really not be considered 
configuration files, but application files. They should be updated 
together with the code. Some distributions ship them in 
/usr/share/$project IIRC to reinforce that point. If anyone adds their 
own filters they would/should do that in separate files (and probably 
separate directories).


So you can hardcode that subtle distinction in Grenade (copy over the 
rootwrap config files from new). We could maybe also move (during 
Newton) rootwrap "config files" into a location that looks less like 
config files and more like code (and make rootwrap magically find that 
location). That way on upgrade rootwrap could use the Mitaka config 
files + the Newton code-like location (allowing both old and new 
commands to run until the Mitaka files are cleaned up). That might be an 
option if we don't want to special-case rootwrap files (the way they 
should always have been special-cased) in Grenade. Making rootwrap find 
that new location without requiring altering its configuration is a bit 
tricky though.
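Concretely, the special case in a Grenade upgrade script could be as small
as this (a sketch; the paths are illustrative):

    # rootwrap filters are application files, not configuration:
    # always install the new release's filters on upgrade
    cp -pr $TARGET_RELEASE_DIR/nova/etc/nova/rootwrap.d/* /etc/nova/rootwrap.d/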


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][heat] Global stack-list for Magnum service user

2016-07-04 Thread Johannes Grassler

Hello,

Thanks for the exhaustive comment on the issue. Won't help much in the short
term, but it's good to see there will eventually be a way to sort this out
properly!

On 07/04/2016 12:50 PM, Steven Hardy wrote:

On Mon, Jul 04, 2016 at 11:43:47AM +0200, Johannes Grassler wrote:

[Magnum's global stack-list versus Heat's policy.json]


Yes, this sort of problem is the reason we disabled global_index by
default[1] - because of the somewhat notorious keystone bug #968696[2], we
could not create a default rule which correctly communicated that this
should be a cloud-admin only action.

So, instead of creating an insecure-by-default rule, we disabled it for
everybody, so that deployers could make an explicit choice about how to
enable access to this potentially sensitive API.


Ok, that's fair enough.


| stacks:global_index: "role:service",

[...]

I don't think this is feasible, because it implies a level of admin-ness
for service users that I think isn't desirable (even if it may be the
status-quo, I don't personally think trusting all services to have global
access to heat by default is a good model from a security isolation
perspective).


Yes...also, it just shifts the problem as Pavlo pointed out: an admin user can
just assign themselves the 'service' role in their own tenant. So that's no
advantage over using role:admin :-)
 
[...]



So, in summary, I think we should fix our integration with the new keystone
is_admin_project stuff, then potentially switch the global_index to use the
is_admin rule as defined by our policy.json.


Indeed, that sounds a lot better.


Then, you'd just need to add the magnum service user to whatever project is
defined in keystone as being the admin project, but it would not require
exposing this API to every other service by default.

Hope that helps!


Definitely does, thanks!

Cheers,

Johannes

--
Johannes Grassler, Cloud Developer
SUSE Linux GmbH, HRB 21284 (AG Nürnberg)
GF: Felix Imendörffer, Jane Smithard, Graham Norton
Maxfeldstr. 5, 90409 Nürnberg, Germany

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Nodes management in our shiny new TripleO API

2016-07-04 Thread Steven Hardy
Hi Dmitry,

I wanted to revisit this thread, as I see some of these interfaces
are now posted for review, and I have a couple of questions around
the naming (specifically for the "provide" action):

On Thu, May 19, 2016 at 03:31:36PM +0200, Dmitry Tantsur wrote:

> The last step before the deployment it to make nodes "available" using the
> "provide" provisioning action. Such nodes are exposed to nova, and can be
> deployed to at any moment. No long-running configuration actions should be
> run in this state. The "manage" action can be used to bring nodes back to
> "manageable" state for configuration (e.g. reintrospection).

So, I've been reviewing https://review.openstack.org/#/c/334411/ which
implements support for "openstack overcloud node provide"

I really hate to be the one nitpicking over openstackclient verbiage, but
I'm a little unsure if the literal translation of this results in an
intuitive understanding of what happens to the nodes as a result of this
action. So I wanted to have a broader discussion before we land the code
and commit to this interface.

More info below:

> what do you propose?
> 
> 
> I would like the new TripleO mistral workflows to start following the ironic
> state machine closer. Imagine the following workflows:
> 
> 1. register: take JSON, create nodes in "manageable" state. I do believe we
> can automate the enroll->manageable transition, as it serves the purpose of
> validation (and discovery, but lets put it aside).
> 
> 2. provide: take a list of nodes or all "manageable" nodes and move them to
> "available". By using this workflow an operator will make a *conscious*
> decision to add some nodes to the cloud.

Here, I think the problem is that while the dictionary definition of
"provide" is "make available for use, supply" (according to google), it
implies obtaining the node, not just activating it.

So, to me "provide node" implies going and physically getting the node that
does not yet exist, but AFAICT what this action actually does is take an
existing node, and activates it (sets it to "available" state)
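For context, the underlying transitions map onto plain ironic provision
state calls, roughly like this (a sketch; assumes an authenticated
python-ironicclient instance `ironic` and a node UUID):

    # enroll -> manageable: validate/configure the node
    ironic.node.set_provision_state(node_uuid, 'manage')
    # manageable -> available: expose the node to nova for scheduling
    ironic.node.set_provision_state(node_uuid, 'provide')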

I'm worried this could be a source of operator confusion - has this
discussion already happened in the Ironic community, or is this a TripleO
specific term?

To me, something like "openstack overcloud node enable" or maybe "node
activate" would be more intuitive, as it implies taking an existing node
from the inventory and making it active/available in the context of the
overcloud deployment?

Anyway, not a huge issue, but given that this is a new step in our nodes
workflow, I wanted to ensure folks are comfortable with the terminology
before we commit to it in code.

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack]-Help required for configuring GBP in Openstack Kilo

2016-07-04 Thread Kiruthiga R
Hello Team,

I have an Openstack Kilo three node setup. It has one controller, one compute 
and one neutron node.

I am using OVS switch as soft switch between compute and neutron nodes.

Now, I need GBP to be configured on this existing Openstack. All the documents 
that I could find have only explained about enabling GBP in a devstack 
environment. I am not sure about the pre-requisites required, packages to be 
installed, configuration files to be changed.

Can anyone please help me with some information regarding GBP installation and 
configuration?

Thanks & Regards,
Kiruthiga

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Non-priority feature freeze and FFEs

2016-07-04 Thread Markus Zoeller
On 01.07.2016 23:03, Matt Riedemann wrote:
> We're now past non-priority feature freeze. I've started going through 
> some blueprints and -2ing them if they still have outstanding changes. I 
> haven't gone through the full list yet (we started with 100).
> 
> I'm also building a list of potential FFE candidates based on:
> 
> 1. How far along the change is (how ready is it?), e.g. does it require 
> a lot of change yet? Does it require a Tempest test and is that passing 
> already? How much of the series has already merged and what's left?
> 
> 2. How much core reviewer attention has it already gotten?
> 
> 3. What kind of priority does it have, i.e. if we don't get it done in 
> Newton do we miss something in Ocata? Think things that start 
> deprecation/removal timers.
> 
> The plan is for the nova core team to have an informal meeting in the 
> #openstack-nova IRC channel early next week, either Tuesday or 
> Wednesday, and go through the list of potential FFE candidates.
> 
> Blueprints that get exceptions will be checked against the above 
> criteria and who on the core team is actually going to push the changes 
> through.
> 
> I'm looking to get any exceptions completed within a week, so targeting 
> Wednesday 7/13. That leaves a few days for preparing for the meetup.
> 

FWIW, bp "libvirt-virtlogd" [1] is basically ready to merge. The two
changes [2] and [3] did already get a lot of attention from danpb.

References:
[1] https://blueprints.launchpad.net/openstack/?searchtext=libvirt-virtlogd
[2] https://review.openstack.org/#/c/334480/
[3] https://review.openstack.org/#/c/323765/

-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Denis Makogon
2016-07-04 13:22 GMT+03:00 Roman Podoliaka :

> Denis,
>
> >  A major problem
> > appears when you try to provision a resource that requires some
> > time to reach an ACTIVE/COMPLETED state (like a nova instance, stack, trove
> > database, etc.), and you have to poll for status changes; in
> > general, polling requires sending HTTP requests within a specific time frame
> > defined by the number of polling retries and the delays between them (almost all
> > PaaS solutions in OpenStack do this, which might be the case for
> > distributed backend services, but not for async frameworks).
>
> How would an asynchronous client help you avoid polling here? You'd
> need some sort of a streaming API producing events on the server side.
>
>
No, it would not help me get rid of polling, but using async requests
would allow proceeding with the next independent async tasks while awaiting
the result of an async HTTP request.
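As a sketch of what that could look like with a hypothetical async client
(no such OpenStack client exists today; `client.servers.get` is assumed to
be a coroutine):

    import asyncio

    async def wait_active(client, server_id, interval=5):
        while True:
            server = await client.servers.get(server_id)  # async HTTP call
            if server.status in ('ACTIVE', 'ERROR'):
                return server.status
            await asyncio.sleep(interval)

    async def main(client, server_ids):
        # All polls (and any other independent coroutines) interleave
        # on one event loop instead of blocking each other.
        return await asyncio.gather(
            *(wait_active(client, sid) for sid in server_ids))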


> If you are simply looking for a better API around polling in OS
> clients, take a look at https://github.com/koder-ua/os_api , which is
> based on futures (be aware that HTTP requests are still *synchronous*
> under the hood).
>
> Thanks,
> Roman
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][heat] Global stack-list for Magnum service user

2016-07-04 Thread Steven Hardy
On Mon, Jul 04, 2016 at 11:43:47AM +0200, Johannes Grassler wrote:
> Hello,
> 
> Magnum has a periodic task that checks the state of the Heat stacks it creates
> for its bays. It does this across all users/tenants that have Magnum bays.
> Currently it uses a global stack-list operation to query these Heat stacks:
> 
> https://github.com/openstack/magnum/blob/master/magnum/service/periodic.py#L83
> 
> Now the Magnum service user does not normally have permission to perform this 
> operation,
> hence the Magnum documentation currently suggests the following change to
> Heat's policy.json:
> 
> | stacks:global_index: "role:admin",
> 
> This is less than optimal since it allows any tenant's admin user to perform a
> global stack-list. Would it be an option to have something like this in Heat's
> default policy.json?

Yes, this sort of problem is the reason we disabled global_index by
default[1] - because of the somewhat notorious keystone bug #968696[2], we
could not create a default rule which correctly communicated that this
should be a cloud-admin only action.

So, instead of creating an insecure-by-default rule, we disabled it for
everybody, so that deployers could make an explicit choice about how to
enable access to this potentially sensitive API.

> | stacks:global_index: "role:service",
> 
> That way the global stack-list would be restricted to service users, and setting
> up Magnum (or other services that use Heat internally) wouldn't need a change to
> Heat's policy.json.
> 
> If that kind of approach is feasible I'd be happy to submit a change.

I don't think this is feasible, because it implies a level of admin-ness
for service users that I think isn't desirable (even if it may be the
status-quo, I don't personally think trusting all services to have global
access to heat by default is a good model from a security isolation
perspective).

This topic was discussed[3] recently in the context of another RFE bug[4] that
wanted additional global-admin capabilities, and the outcome of that
discussion was a reccomendation that we use the new "is_admin_project"
capability added to keystone[5] in order to fix bug #968696.

The net result of this is a redefinition of "is_admin" in our context to
look like:

"role:admin and auth_token_info.token.is_admin_project:True",

However, per https://review.openstack.org/#/c/08/ there are problems
with backwards compatibility when accessing this directly from the
token_info, so we need https://review.openstack.org/#/c/331916/ which will
enable access of this attribute in a backwards compatible way.

I assume the net result will be that anyone not configuring an admin
project in keystone will get the old buggy #968696 behaviour, but at least
then we won't have codified any assumptions around admin scope inside heat,
and that access can be controlled centrally for all services via keystone
in a consistent way.

So, in summary, I think we should fix our integration with the new keystone
is_admin_project stuff, then potentially switch the global_index to use the
is_admin rule as defined by our policy.json.
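For illustration, the resulting policy.json entries could end up looking
roughly like this (a sketch, not the final wording):

    {
        "context_is_admin": "role:admin and is_admin_project:True",
        "stacks:global_index": "rule:context_is_admin"
    }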

Then, you'd just need to add the magnum service user to whatever project is
defined in keystone as being the admin project, but it would not require
exposing this API to every other service by default.

Hope that helps!

Steve

[1] https://github.com/openstack/heat/blob/master/etc/heat/policy.json#L47
[2] https://bugs.launchpad.net/keystone/+bug/968696
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-November/079006.html
[4] https://bugs.launchpad.net/heat/+bug/1466694
[5] https://review.openstack.org/#/c/240719/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova] Today's Cinder-Nova API changes meeting is cancelled

2016-07-04 Thread Ildikó Váncsa
Hi,

Today's meeting is cancelled due to the holiday in the U.S. The next 
meeting will be held next Monday (July 11th), 1700 UTC on #openstack-meeting-cp.

Thanks and Best Regards,
/Ildikó

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Troubleshooting and ask.openstack.org

2016-07-04 Thread Thierry Carrez

Adam Young wrote:

[...]
I know we are getting nuked on the Wiki.  What I would like to be able to
generate is a Frequently Asked Questions (FAQ) page, but as a living
document.


Small correction... As explained in my summary of wiki use cases[1], we 
are removing "reference" documentation from the wiki but we'll likely 
keep a wiki (or some other light publication platform) for things like 
team pages (internal team organization documents). So you could post 
your FAQ there, but users would likely miss it.


[1] http://lists.openstack.org/pipermail/openstack-dev/2016-June/096481.html


I think that ask.openstack.org is the right forum for this, but we need
some more help:
[...]


I agree that would be the best place. Users already expect to find 
answers on that website. Current content is of varying quality, which 
can discourage question-askers and question-answerers alike. If we can 
get more quality/curated content there we might start a virtuous circle 
-- getting developers more present there can only help.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][heat] Global stack-list for Magnum service user

2016-07-04 Thread Pavlo Shchelokovskyy
Hi Johannes,

this is still not optimal: AFAIK the admin role is still global, so an
admin in one tenant is also an admin of the whole OpenStack deployment, and
thus can still assign himself/whomever the 'service' role and get access to
the global stack list.

The best solution would probably be to create a separate domain in Keystone
with a service user in it, and check in the policy JSON for the actual
domain + tenant + some role (or username) in that tenant. This domain and
tenant would then be completely controlled by the Magnum service (the creds
are in the magnum config) - all similar to how Heat works.

Cheers,

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com

On Mon, Jul 4, 2016 at 12:43 PM, Johannes Grassler 
wrote:

> Hello,
>
> Magnum has a periodic task that checks the state of the Heat stacks it
> creates
> for its bays. It does this across all users/tenants that have Magnum bays.
> Currently it uses a global stack-list operation to query these Heat stacks:
>
>
> https://github.com/openstack/magnum/blob/master/magnum/service/periodic.py#L83
>
> Now the Magnum service user does not normally have permission to perform
> this operation,
> hence the Magnum documentation currently suggests the following change to
> Heat's policy.json:
>
> | stacks:global_index: "role:admin",
>
> This is less than optimal since it allows any tenant's admin user to
> perform a
> global stack-list. Would it be an option to have something like this in
> Heat's
> default policy.json?
>
> | stacks:global_index: "role:service",
>
> That way the global stack-list would be restricted to service users, and
> setting up
> Magnum (or other services that use Heat internally) wouldn't need a change
> to
> Heat's policy.json.
>
> If that kind of approach is feasible I'd be happy to submit a change.
>
> Cheers,
>
> Johannes
>
> --
> Johannes Grassler, Cloud Developer
> SUSE Linux GmbH, HRB 21284 (AG Nürnberg)
> GF: Felix Imendörffer, Jane Smithard, Graham Norton
> Maxfeldstr. 5, 90409 Nürnberg, Germany
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Roman Podoliaka
Denis,

>  A major problem
> appears when you try to provision a resource that requires some
> time to reach an ACTIVE/COMPLETED state (like a nova instance, stack, trove
> database, etc.), and you have to poll for status changes; in general,
> polling requires sending HTTP requests within a specific time frame
> defined by the number of polling retries and the delays between them (almost all
> PaaS solutions in OpenStack do this, which might be the case for
> distributed backend services, but not for async frameworks).

How would an asynchronous client help you avoid polling here? You'd
need some sort of a streaming API producing events on the server side.

If you are simply looking for a better API around polling in OS
clients, take a look at https://github.com/koder-ua/os_api , which is
based on futures (be aware that HTTP requests are still *synchronous*
under the hood).

Thanks,
Roman

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Antoni Segura Puimedon
On Mon, Jul 4, 2016 at 11:59 AM, Julien Danjou  wrote:

> On Mon, Jul 04 2016, Antoni Segura Puimedon wrote:
>
> > for the neutron clients now we use a thread executor from the asyncio
> loop
> > any time
> > we do neutron client request.
>
> It's a good trade-off, but it won't be as good as going full on async
> I/O. :)
>

Sure, if the neutronclient doesn't grow async support we'll most likely add
the calls
to neutron we need in our API watcher using that aio lib I linked. Using
the thread
executor is more of a workaround than a definitive solution.
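For reference, the workaround is essentially this (a sketch; assumes an
authenticated synchronous neutronclient instance is passed in):

    import asyncio

    async def list_ports(neutron):
        # neutron is a synchronous neutronclient.v2_0.client.Client;
        # its blocking list_ports() call runs in the default thread pool
        # so the event loop stays responsive.
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(None, neutron.list_ports)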


>
> --
> Julien Danjou
> # Free Software hacker
> # https://julien.danjou.info
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Denis Makogon
2016-07-04 12:40 GMT+03:00 Antoni Segura Puimedon <
toni+openstac...@midokura.com>:

>
>
> On Mon, Jul 4, 2016 at 11:16 AM, Julien Danjou  wrote:
>
>> On Sun, Jun 26 2016, Denis Makogon wrote:
>>
>> > I know that some work in progress to bring Python 3.4 compatibility to
>> > backend services and it is kinda hard question to answer, but i'd like
>> to
>> > know if there are any plans to support asynchronous HTTP API client in
>> the
>> > nearest future using aiohttp [1] (PEP-3156)?
>>
>
> We were not sure if aiohttp would be taken in as a requirement, so in our
> kuryr kubernetes
> prototype we did our own asyncio http request library (it only does GET
> for now)[2]
>
> 
>

Good to see that someone is already using async features. But I'd be wary
of re-inventing the wheel: aiohttp is a good choice for implementing both
clients and servers, and going deeper into asyncio's core parts is only
required when you're doing something very protocol-specific on the
transport layer.

So the current question is about having a common piece of code that provides
async HTTP coroutines for the current SDKs. Unfortunately, backward
compatibility has to be kept, because not all projects have shifted to Py34
or greater yet, while the pace of feature delivery should remain as is.
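For anyone unfamiliar, the client side of aiohttp is pleasantly small. A
minimal GET using the standard aiohttp API (Python 3.5 syntax; on 3.4 you
would use @asyncio.coroutine and `yield from` instead; the URL is just an
example):

    import asyncio
    import aiohttp

    async def fetch(url):
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as resp:
                return await resp.text()

    body = asyncio.get_event_loop().run_until_complete(
        fetch('http://controller:8774/v2.1/'))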


>
>
>>
>> I don't think there is, unfortunately. Most clients now rely on
>> `requests', and unfortunately it's not async, nor does it seem ready to be,
>> last time I checked.
>>
>
> for the neutron clients now we use a thread executor from the asyncio loop
> any time
> we do neutron client request.
>
> [2] https://github.com/midonet/kuryr/blob/k8s/kuryr/raven/aio/methods.py
>
>>
>> --
>> Julien Danjou
>> // Free Software hacker
>> // https://julien.danjou.info
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Julien Danjou
On Mon, Jul 04 2016, Antoni Segura Puimedon wrote:

> for the neutron clients now we use a thread executor from the asyncio loop
> any time
> we do neutron client request.

It's a good trade-off, but it won't be as good as going full on async
I/O. :)

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Julien Danjou
On Mon, Jul 04 2016, Denis Makogon wrote:

> What would be the best place to submit spec?

The cross project spec repo?

  https://git.openstack.org/cgit/openstack/openstack-specs/

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum][heat] Global stack-list for Magnum service user

2016-07-04 Thread Johannes Grassler

Hello,

Magnum has a periodic task that checks the state of the Heat stacks it creates
for its bays. It does this across all users/tenants that have Magnum bays.
Currently it uses a global stack-list operation to query these Heat stacks:

https://github.com/openstack/magnum/blob/master/magnum/service/periodic.py#L83
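Roughly, the call in question looks like this (a sketch; `hclient` is
assumed to be an authenticated heatclient instance and sync_bay_status a
hypothetical helper):

    # global_tenant=True is what triggers the stacks:global_index
    # policy check on the Heat side
    for stack in hclient.stacks.list(global_tenant=True):
        sync_bay_status(stack)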

Now the Magnum service user does not normally have permission to perform this 
operation,
hence the Magnum documentation currently suggests the following change to
Heat's policy.json:

| stacks:global_index: "role:admin",

This is less than optimal since it allows any tenant's admin user to perform a
global stack-list. Would it be an option to have something like this in Heat's
default policy.json?

| stacks:global_index: "role:service",

That way the global stack-list would be restricted to service users, and setting
up Magnum (or other services that use Heat internally) wouldn't need a change to
Heat's policy.json.

If that kind of approach is feasible I'd be happy to submit a change.

Cheers,

Johannes

--
Johannes Grassler, Cloud Developer
SUSE Linux GmbH, HRB 21284 (AG Nürnberg)
GF: Felix Imendörffer, Jane Smithard, Graham Norton
Maxfeldstr. 5, 90409 Nürnberg, Germany

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Antoni Segura Puimedon
On Mon, Jul 4, 2016 at 11:16 AM, Julien Danjou  wrote:

> On Sun, Jun 26 2016, Denis Makogon wrote:
>
> > I know that some work in progress to bring Python 3.4 compatibility to
> > backend services and it is kinda hard question to answer, but i'd like to
> > know if there are any plans to support asynchronous HTTP API client in
> the
> > nearest future using aiohttp [1] (PEP-3156)?
>

We were not sure if aiohttp would be taken in as a requirement, so in our
kuryr kubernetes
prototype we did our own asyncio http request library (it only does GET for
now)[2]




>
> I don't think there is, unfortunately. Most clients now rely on
> `requests', and unfortunately it's not async, nor does it seem ready to be,
> last time I checked.
>

for the neutron clients now we use a thread executor from the asyncio loop
any time
we do neutron client request.

[2] https://github.com/midonet/kuryr/blob/k8s/kuryr/raven/aio/methods.py

>
> --
> Julien Danjou
> // Free Software hacker
> // https://julien.danjou.info
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Denis Makogon
2016-07-04 12:16 GMT+03:00 Julien Danjou :

> On Sun, Jun 26 2016, Denis Makogon wrote:
>
> > I know that some work in progress to bring Python 3.4 compatibility to
> > backend services and it is kinda hard question to answer, but i'd like to
> > know if there are any plans to support asynchronous HTTP API client in
> the
> > nearest future using aiohttp [1] (PEP-3156)?
>
> I don't think there is, unfortunately. Most clients now rely on
> `requests', and unfortunately it's not async, nor does it seem ready to be,
> last time I checked.
>
>
Unfortunately, it is what it is. So I guess this is something that is
worth discussing during the summit, to find the way and the capacity to
support an async HTTP API during the next release. I'll start work on a
general concept that would satisfy both 2.7 and 3.4-or-greater Python
versions.

What would be the best place to submit a spec?


--
> Julien Danjou
> // Free Software hacker
> // https://julien.danjou.info
>

Kind regards,
Denys Makogon
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] - Nominate Maksim Malchuk to Fuel Library Core

2016-07-04 Thread Sergii Golovatiuk
Hi,

Please welcome Maksim as he's just joined the fuel-library Core Team!

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Tue, Jun 28, 2016 at 1:06 PM, Adam Heczko  wrote:

> Although I'm not a Fuel core, +1 from me to Maksim. Maksim is not only an
> excellent engineer but also a very friendly and helpful person.
>
> On Tue, Jun 28, 2016 at 12:19 PM, Georgy Kibardin 
> wrote:
>
>> +1
>>
>> On Tue, Jun 28, 2016 at 1:12 PM, Kyrylo Galanov 
>> wrote:
>>
>>> +1
>>>
>>> On Tue, Jun 28, 2016 at 1:16 AM, Matthew Mosesohn <
>>> mmoses...@mirantis.com> wrote:
>>>
 +1. Maksim is an excellent reviewer.

 On Mon, Jun 27, 2016 at 6:06 PM, Alex Schultz 
 wrote:
 > +1
 >
 > On Mon, Jun 27, 2016 at 9:04 AM, Bogdan Dobrelya <
 bdobre...@mirantis.com>
 > wrote:
 >>
 >> On 06/27/2016 04:54 PM, Sergii Golovatiuk wrote:
 >> > I am very sorry for sending without subject. I am adding subject to
 >> > voting and my +1
 >>
 >> +1 from my side!
 >>
 >> >
 >> > --
 >> > Best regards,
 >> > Sergii Golovatiuk,
 >> > Skype #golserge
 >> > IRC #holser
 >> >
 >> > On Mon, Jun 27, 2016 at 4:42 PM, Sergii Golovatiuk
 >> > mailto:sgolovat...@mirantis.com>>
 wrote:
 >> >
 >> > Hi,
 >> >
 >> > I would like to nominate Maksim Malchuk to Fuel-Library Core
 team.
 >> > He’s been doing a great job so far [0]. He’s #2 reviewer and #2
 >> > contributor with 28 commits for last 90 days [1][2].
 >> >
 >> > Fuelers, please vote with +1/-1 for approval/objection. Voting
 will
 >> > be open until July of 4th. This will go forward after voting is
 >> > closed if there are no objections.
 >> >
 >> > Overall contribution:
 >> > [0] http://stackalytics.com/?user_id=mmalchuk
 >> > Fuel library contribution for last 90 days:
 >> > [1]
 >> > 
 http://stackalytics.com/report/contribution/fuel-library/90
 >> > http://stackalytics.com/report/users/mmalchuk
 >> > List of reviews:
 >> > [2]
 >> >
 https://review.openstack.org/#/q/reviewer:%22Maksim+Malchuk%22+status:merged,n,z
 >> > --
 >> > Best regards,
 >> > Sergii Golovatiuk,
 >> > Skype #golserge
 >> > IRC #holser
 >> >
 >> >
 >> >
 >> >
 >> >
 >> >
 __
 >> > OpenStack Development Mailing List (not for usage questions)
 >> > Unsubscribe:
 >> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >> >
 >>
 >>
 >> --
 >> Best regards,
 >> Bogdan Dobrelya,
 >> Irc #bogdando
 >>
 >>
 __
 >> OpenStack Development Mailing List (not for usage questions)
 >> Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >
 >
 >
 >
 __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Adam Heczko
> Security Engineer @ Mirantis Inc.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Julien Danjou
On Sun, Jun 26 2016, Denis Makogon wrote:

> I know that some work is in progress to bring Python 3.4 compatibility to
> the backend services, and it is a kinda hard question to answer, but I'd like to
> know if there are any plans to support an asynchronous HTTP API client in the
> near future using aiohttp [1] (PEP-3156)?

I don't think there is, unfortunately. Most clients now rely on
`requests', and unfortunately it's not async, nor does it seem ready to be,
last time I checked.

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Fail build request if we can't inject files?

2016-07-04 Thread Daniel P. Berrange
On Sun, Jul 03, 2016 at 10:08:04AM -0500, Matt Riedemann wrote:
> I want to use the gate-tempest-dsvm-neutron-full-ssh in nova since it runs
> ssh validation + neutron + config drive + metadata service, which will test
> the virtual device tagging 2.32 microversion API (added last week).
> 
> The job has a file injection test that fails consistently which is keeping
> it from being voting.
> 
> After debugging, the problem is the files to inject are silently ignored
> because n-cpu is configured with libvirt.inject_partition=-2 by default.
> That disables file injection:
> 
> https://github.com/openstack/nova/blob/faf50a747e03873c3741dac89263a80112da915a/nova/virt/libvirt/driver.py#L3030
> 
> We don't even log a warning if the user requested files to inject and we
> can't honor it. If I were a user and tried to inject files when creating a
> server but they didn't show up in the guest, I'd open a support ticket
> against my cloud provider. So I don't think a warning (that only the admin
> sees) is sufficient here. This isn't something that's discoverable from the
> API either, it's really host configuration / capability (something we still
> need to tackle).

Won't the user provided files also get made available by the config drive /
metadata service ?  I think that's the primary reason for file injection not
being a fatal problem. Oh that and the fact that we've wanted to kill it for
at least 3 years now :-)
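For example (a sketch, assuming an authenticated novaclient instance `nova`
and existing image/flavor objects), the same personality files requested at
boot can also be exposed via the config drive:

    server = nova.servers.create(
        name='test', image=image, flavor=flavor,
        files={'/etc/test_file': 'some content'},  # file injection request
        config_drive=True)                         # expose via config drive too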

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [daisycloud-core] [kolla] Kolla Mitaka requirements supported by CentOS

2016-07-04 Thread Gerard Braad
Hi,

> From: Haïkel 
> As one of RDO maintainer, I strongly invite kolla, not to use EPEL.
> It's proven very hard to prevent EPEL pushing broken updates, or push
> updates to fit OpenStack requirements.
> Actually, all the dependency above but ansible, docker and git python
> modules are in CentOS Cloud SIG repositories.
> If you are interested to work w/ CentOS Cloud SIG, we can add missing
> dependencies in our repositories.

Interesting point. Currently the preference is to use the Docker project's
own packages for installation, which means that `docker-storage-setup` is
not available.
Having it available could actually be very helpful for CentOS-based
deployments to get a production-ready environment set up.
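For reference, docker-storage-setup is driven by a small config file, e.g.
(values illustrative):

    # /etc/sysconfig/docker-storage-setup
    VG=docker-vg      # dedicated volume group for the storage pool
    DEVS=/dev/sdb     # block device(s) handed to docker storage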


Gerard

--

   Gerard Braad | http://gbraad.nl
   [ Doing Open Source Matters ]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: [probably forged email] Re: [daisycloud-core] Kolla Mitaka requirements supported by CentOS

2016-07-04 Thread hu . zhijiang
Hi Haïkel

> Actually, all the dependency above but ansible, docker and git python
> modules are in CentOS Cloud SIG repositories.
> If you are interested to work w/ CentOS Cloud SIG, we can add missing
> dependencies in our repositories.

So Jinja2 version >= 2.8 is already in the CentOS Cloud SIG 
repository. Could you please point out a way to get it? That would be a great 
help for us in using Kolla, because currently we are using the Fedora repo 
http://koji.fedoraproject.org/koji/packageinfo?packageID=6506 to get 
Jinja2 version >= 2.8, and we are sure that the CentOS Cloud SIG repository 
will be a much better choice than the Fedora repo.




From: Haïkel 
To: "OpenStack Development Mailing List (not for usage 
questions)" , 
Date:   2016-07-03 05:18
Subject:   [probably forged email] Re: [openstack-dev] 
[daisycloud-core] Kolla Mitaka requirements supported by CentOS



2016-07-02 20:42 GMT+02:00 jason :
> Pip Package Name   Supported By CentOS   CentOS Name                   Repo Name
> ================================================================================
> ansible            yes                   ansible1.9.noarch             epel
> docker-py          yes                   python-docker-py.noarch       extras
> gitdb              yes                   python-gitdb.x86_64           epel
> GitPython          yes                   GitPython.noarch              epel
> oslo.config        yes                   python2-oslo-config.noarch    centos-openstack-mitaka
> pbr                yes                   python-pbr.noarch             epel
> setuptools         yes                   python-setuptools.noarch      base
> six                yes                   python-six.noarch             base
> pycrypto           yes                   python2-crypto                epel
> graphviz           no
> Jinja2             no (Note: Jinja2 2.7.2 will be installed as a
>                        dependency by ansible)
>

As one of RDO maintainer, I strongly invite kolla, not to use EPEL.
It's proven very hard to prevent EPEL pushing broken updates, or push
updates to fit OpenStack requirements.

Actually, all the dependency above but ansible, docker and git python
modules are in CentOS Cloud SIG repositories.
If you are interested to work w/ CentOS Cloud SIG, we can add missing
dependencies in our repositories.


>
> As the above table shows, only two packages (graphviz and Jinja2) are not
> supported by CentOS currently, and those unsupported packages are definitely
> not used by OpenStack itself, nor by Daisy. So basically we can use pip to
> install them after installing the other packages with yum. But note that
> Jinja2 2.7.2 will be installed as a dependency when yum installs
> ansible, so we need to use pip to install Jinja2 2.8 after that to
> override the old one. Also note that we must make sure pip is ONLY used
> for installing those two unsupported packages.
>
> But before you trying to use pip, please consider these:
>
> 1) graphviz is just for saving image depend graph text file and is not
> used by default and only used in build process if it is configured to
> be used.
>
> 2) Jinja2 rpm can be found at
> http://koji.fedoraproject.org/koji/packageinfo?packageID=6506, which I
> think is suitable for CentOS. I have tested it.
>
> So, as far as the Kolla deploy process is concerned, there is no need to use
> pip to install graphviz and Jinja2. Furthermore, if we do not install
> Kolla either, then we can get rid of pip totally!
>
> I encourage all of you to think about not using pip any more for
> Daisy+Kolla, because pip has a lot of overlap with yum/rpm; files
> may be overridden back and forth if they are not used carefully. So not
> using pip will make things easier and make the jump server cleaner.
> Any ideas?
>
>
> Thanks,
> Zhijiang
>
> --
> Yours,
> Jason
>
> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



ZTE Information Security Notice: The information contained in this mail (and 
any attachment transmitted herewith) is privileged and confidential and is 
intended for the exclusive use of the addressee(s).  If you are not an intended 
recipient, any disclosure, reproduction, distribution or other dissemination or 
use of the information contained is strictly prohibited.  If you have received 
this mail in error, please delete it and notify us immediately.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [daisycloud-core] [kolla] Kolla Mitaka requirements supported by CentOS

2016-07-04 Thread hu . zhijiang
> As one of RDO maintainer, I strongly invite kolla, not to use EPEL.
> It's proven very hard to prevent EPEL pushing broken updates, or push
> updates to fit OpenStack requirements.

> Actually, all the dependency above but ansible, docker and git python
> modules are in CentOS Cloud SIG repositories.
> If you are interested to work w/ CentOS Cloud SIG, we can add missing
> dependencies in our repositories.

I added the [kolla] keyword to the mail subject. Hopefully we can get a 
response from the Kolla team about how to choose repos.


Thanks,
Zhijiang



From: Haïkel 
To: "OpenStack Development Mailing List (not for usage 
questions)" , 
Date:   2016-07-03 05:18
Subject:   [probably forged email] Re: [openstack-dev] 
[daisycloud-core] Kolla Mitaka requirements supported by CentOS



2016-07-02 20:42 GMT+02:00 jason :
> Pip Package Name   Supported By CentOS   CentOS Name                   Repo Name
> ================================================================================
> ansible            yes                   ansible1.9.noarch             epel
> docker-py          yes                   python-docker-py.noarch       extras
> gitdb              yes                   python-gitdb.x86_64           epel
> GitPython          yes                   GitPython.noarch              epel
> oslo.config        yes                   python2-oslo-config.noarch    centos-openstack-mitaka
> pbr                yes                   python-pbr.noarch             epel
> setuptools         yes                   python-setuptools.noarch      base
> six                yes                   python-six.noarch             base
> pycrypto           yes                   python2-crypto                epel
> graphviz           no
> Jinja2             no (Note: Jinja2 2.7.2 will be installed as a
>                        dependency by ansible)

As one of the RDO maintainers, I strongly invite Kolla not to use EPEL.
It has proven very hard to prevent EPEL from pushing broken updates, or to
push updates that fit OpenStack requirements.

Actually, all the dependencies above but the ansible, docker and git Python
modules are in CentOS Cloud SIG repositories.
If you are interested in working with the CentOS Cloud SIG, we can add
missing dependencies in our repositories.


>
> As the above table shows, only two packages (graphviz and Jinja2) are not
> currently supported by CentOS. Since those unsupported packages are
> definitely not used by OpenStack or by Daisy, we can basically use pip to
> install them after installing the other packages with yum. But note that
> Jinja2 2.7.2 will be installed as a dependency when yum installs
> ansible, so we need to use pip to install Jinja2 2.8 after that to
> override the old one. Also note that we must make sure pip is ONLY used
> for installing those two unsupported packages.
>
> But before trying to use pip, please consider these points:
>
> 1) graphviz is just for saving the image dependency graph to a text file;
> it is not used by default and is only used in the build process if it is
> configured to be used.
>
> 2) A Jinja2 RPM can be found at
> http://koji.fedoraproject.org/koji/packageinfo?packageID=6506, which I
> think is suitable for CentOS. I have tested it.
>
> So, as far as the Kolla deploy process is concerned, there is no need to use
> pip to install graphviz and Jinja2. Furthermore, if we do not install
> Kolla either, then we can get rid of pip totally!
>
> I encourage all of you to think about not using pip any more for
> Daisy+Kolla, because pip has a lot of overlap with yum/rpm; files
> may be overridden back and forth if the two are not used carefully. So not
> using pip will make things easier and keep the jump server cleaner.
> Any ideas?
>
>
> Thanks,
> Zhijiang
>
> --
> Yours,
> Jason
>
> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [kolla][ironic] My thoughts on Kolla + BiFrost integration

2016-07-04 Thread Stephen Hindle
Hi Steve

  I'm just suggesting that the BiFrost stuff allow sufficient 'hooks' for
operators to insert site-specific setup, not that Kolla/BiFrost try
to handle 'everything'.

  For instance, LDAP... Horizon integration with LDAP would certainly
be within Kolla's purview.  However, operators also use LDAP for login
access to the host via PAM.  This is site-specific, and outside of
Kolla's mission.

  As an example of 'respecting existing configuration' - some sites
may use OpenVSwitch for host level networking.  Kolla currently starts
a new openvswitchdb container without checking if OpenVSwitch is
already running - this kills the host networking.
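
A guard of roughly this shape is what I have in mind — just a minimal
sketch, with the socket path assumed to be the stock Open vSwitch default:

    # Sketch: skip bootstrapping the openvswitchdb container when the host
    # already runs its own ovsdb-server.
    import os

    OVS_DB_SOCK = '/var/run/openvswitch/db.sock'

    def host_runs_ovs():
        # ovsdb-server creates this unix socket, so its presence is a cheap
        # "OVS already owns this host" signal.
        return os.path.exists(OVS_DB_SOCK)

    if host_runs_ovs():
        print('host-level Open vSwitch detected; leaving it alone')
    else:
        print('safe to start the openvswitchdb container')
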

  If you'll pardon the pun, there are a 'host' of situations like
this, where operators will have to provide (possibly many and detailed)
site-specific configurations to a bare metal host.  Networking, Security,
Backups, Monitoring/Logging, etc.  These may all be subject to
corporate-wide policies that are non-negotiable and have to be followed.

  Again, I realize Kolla/BiFrost cannot be everything for everyone.
I just want to suggest that we provide well-documented methods for
operators to insert site-specific roles/plays/whatever, and that we
take care to avoid 'stepping on' things.

  I have no idea as to what or how many 'hooks' would be required.  I
tend to think a simple 'before-bifrost' role and an 'after-bifrost' role
would be enough.  However, others may have more input on that...
I like the idea of using roles, as that would allow you to centralize
all your 'site-specific' bits.  This way operators don't have to
modify the existing kolla/BiFrost stuff.


On Sat, Jul 2, 2016 at 3:10 PM, Steven Dake (stdake)  wrote:
> Stephen,
>
> Responses inline.
>
> On 7/1/16, 11:35 AM, "Stephen Hindle"  wrote:
>
>>Maybe I missed it - but is there a way to provide site specific
>>configurations?  Things we will run into in the wild include:
>>Configuring multiple non-openstack nics
>
> We don't have anything like this at present or planned.  Would you mind
> explaining the use case?  Typically we in the Kolla community expect a
> production deployment to only deploy OpenStack, and not other stacks on
> top of the bare metal hardware.  This is generally considered best
> practice at this time, unless of course you're deploying something on top of
> OpenStack that may need these NICs.  The reason is that OpenStack itself
> managed alongside another application doesn't know what it doesn't know
> and can't handle capacity management or any of a number of other things
> required to make an OpenStack cloud operate.
>
>> IPMI configuration
>
> BiFrost includes IPMI integration - assumption being we will just use
> whatever BiFrost requires here for configuration.
>
>> Password integration with Corporate LDAP etc.
>
> We have been asked several times for this functionality, and it will come
> naturally during either Newton or Ocata.
>
>> Integration with existing SANs
>
> Cinder integrates with SANs, and in Newton we have integration with
> iSCSI.  Unfortunately, because of some controversy around how glance should
> provide images with regard to cinder, using existing SAN gear with iSCSI
> integration as is done by Cinder may not work as expected in an HA setup.
>
>> Integration with existing corporate IPAM
>
> No idea
>
>> Corporate Security policy (firewall rules, sudo groups,
>>hosts.allow, ssh configs, etc.)
>
> This is environment-specific, and it's hard to make a promise on what we
> could deliver in a generic way that would be usable by everyone.
> Therefore our generic implementation will be the "bare minimum" to get the
> system into an operational state.  The things listed above are outside the
> "bare minimum" IIUC.
>
>>
>>That's just off the top of my head - I'm sure we'll run into others.  I
>>tend to think the best way to approach this is to allow some sort of
>>'bootstrap' role, that could be populated by the operators.  This should
>>initially be empty (Kolla-specific 'bootstrap'
>
> Our bootstrap playbook is for launching BiFrost and bringing up the bare
> metal machines with an SSH credential.  It appears from this thread we
> will have another playbook to do the bare metal initialization (things
> like turning off firewalld, turning on chrony, i.e. making the bare metal
> environment operational for OpenStack)
>
> I think what you want is a third playbook which really belongs in the
> domain of the operators to handle site-specific configuration as required
> by corporate rules and the like.
>
>
>>actions should be
>>in another role) to prevent confusion.
>>
>>We also have to be careful that kolla doesn't stomp on any non-kolla
>>configuration...
>
> Could you expand here.  Kolla currently expects the machines under its
> control to be only OpenStack machines, and not have other applications
> running on them.
>
> Hope that was helpful.
>
> Regards
> -steve
>
>>
>>
>>On Thu, Jun 30, 2016 at 12:43 PM, Mooney, Sean K
>> wrote:
>>>
>>>
 

Re: [openstack-dev] [neutron][upgrades] Bi-weekly upgrades work status. 6/20/2016

2016-07-04 Thread Damon Wang
Very glad to see the *Bi-weekly Upgrades Work Status*. Besides, we
(UnitedStack) are also writing a Chinese version of a weekly Neutron status:

http://bit.ly/29nTorX

We have been writing it since 4.3 and have now written 12 pieces. :-D

Wei Wang

2016-06-20 21:58 GMT+08:00 Ihar Hrachyshka :

> Hi all,
>
> (It’s not really bi-weekly since I missed it the previous week. This
> report is for the last 3 weeks. I will try to keep a more regular schedule
> for those updates in the future.)
>
> OK. What’s new in neutron upgrades since last update?
>
> 1. For the most part, the team works on migrating the existing code base to
> using versioned objects (a short illustration follows after this report).
>
> What landed:
>
> - base db plugin switched to objects for subnetpools:
> https://review.openstack.org/300056
> - get_object(s) API now allows to pass renamed fields as filters:
> https://review.openstack.org/327249
>
> What’s in the queue:
> - utilizing DNSNameServer object in the code:
> https://review.openstack.org/326477
> - security groups object: https://review.openstack.org/284738
> - *PortSecurity objects: https://review.openstack.org/327257
>
> There are things still crafting worth mentioning:
> - subnet adoption in db code: https://review.openstack.org/321001
> - subnet object adjustments: https://review.openstack.org/331009
> - address scope adoption: https://review.openstack.org/#/c/308005/
>
> A lot of api test coverage for sorting and pagination happened. That is
> something that we push for before we switch resources to using objects to
> avoid potential regressions. Things that landed:
> - next/prev href links tests: https://review.openstack.org/318270
> - subnet tests: https://review.openstack.org/329340
> - subnetpools tests: https://review.openstack.org/327081
>
> We have a lot more related patches though, including test coverage as well
> as enabling sorting/pagination for all installations. All those are tracked
> under:
>
>
> https://review.openstack.org/#/q/status:open++(topic:bug/1566514+OR+topic:bug/1591981)
>
> Reviews for ^ are highly welcome!
>
> There were other related changes that landed in master:
> - migrated code from using private ._context attributes to .obj_context:
> https://review.openstack.org/283616
> - added type information to ObjectNotFound exception:
> https://review.openstack.org/327582
> - NetworkDhcpAgentBinding model moved to a separate module:
> https://review.openstack.org/328452
> - get_object() switched to using _query_model to support RBAC filtering:
> https://review.openstack.org/#/c/326361/
> - query filter hook added to objects: https://review.openstack.org/328304
> - qos policy filtering by ‘shared’ field is fixed by utilizing ^:
> https://review.openstack.org/328313
>
> 2. As for multinode grenade testing, there was little progress on getting
> voting for the DVR job. This is something that I plan to tackle in the near
> future.
>
> ===
>
> Team info:
> Upgrades Subteam has the weekly meetings on Mondays, 2PM UTC, wiki page:
> https://wiki.openstack.org/wiki/Meetings/Neutron-Upgrades-Subteam
>
> New patches are generally tracked under the following topic:
> https://review.openstack.org/#/q/topic:bp/adopt-oslo-versioned-objects-for-db
>
> Ihar
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
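
For readers new to the effort: "versioned objects" above means
oslo.versionedobjects. A minimal sketch of what an adopted object looks like
follows; the field set here is simplified for illustration and is not the
actual Neutron definition (see the real classes under neutron/objects/ in
the linked reviews):

    # Sketch of an oslo.versionedobjects class like the ones being adopted.
    from oslo_versionedobjects import base as obj_base
    from oslo_versionedobjects import fields as obj_fields

    @obj_base.VersionedObjectRegistry.register
    class DNSNameServer(obj_base.VersionedObject):
        # VERSION is bumped on any field change so that nodes running
        # different code levels can negotiate during a rolling upgrade.
        VERSION = '1.0'

        fields = {
            'address': obj_fields.StringField(),
            'subnet_id': obj_fields.UUIDField(),
            'order': obj_fields.IntegerField(),
        }

    server = DNSNameServer(address='8.8.8.8',
                           subnet_id='8b105d5e-0000-4000-8000-000000000001',
                           order=1)
    # obj_to_primitive() produces the serializable dict that travels over
    # RPC between old and new nodes.
    print(server.obj_to_primitive())
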
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] kuryr-libnetwork split

2016-07-04 Thread Antoni Segura Puimedon
On Fri, Jul 1, 2016 at 7:58 PM, Doug Hellmann  wrote:

> Excerpts from Jeremy Stanley's message of 2016-07-01 15:05:30 +:
> > On 2016-07-01 08:26:13 -0500 (-0500), Monty Taylor wrote:
> > [...]
> > > Check with Doug Hellman about namespaces. We used to use them in some
> > > oslo things and had to step away from them because of some pretty weird
> > > and horrible breakage issues.
> > [...]
> >
> > Or read the associated Oslo spec from when that was done:
> >
> >  https://specs.openstack.org/openstack/oslo-specs/specs/kilo/drop-namespace-packages.html
> >
> >
>
> Yes, please don't use python namespaces. It's a cool feature, as you
> say, but the setuptools implementation available for Python 2 has some
> buggy edge cases that we hit on a regular basis before moving back to
> regular packages. It might be something we could look into again when
> we're running only on Python 3, since at that point the feature is built
> into the language.
>

For kuryr-kubernetes we target only Python 3. I wonder if we could move
kuryr-libnetwork to be Python 3 only and, if that were the case, how that
would alter the situation for namespace packages.
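
For context, the Python 3 behaviour Doug refers to is PEP 420 implicit
namespace packages. A rough illustration of the difference (the layout is
invented for illustration, not the actual kuryr tree):

    # Python 2 / setuptools style: every distribution sharing the "kuryr"
    # namespace ships a kuryr/__init__.py containing exactly this line,
    # which is where the buggy edge cases come from:
    __import__('pkg_resources').declare_namespace(__name__)

    # Python 3 (PEP 420): ship NO kuryr/__init__.py at all. One package
    # installs kuryr/lib/, another installs kuryr/libnetwork/, and the
    # interpreter merges the two halves of the namespace automatically.
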


>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Openstack Mitaka Neutron LBaaS Question

2016-07-04 Thread Elena Ezhova
Hi!

You also have to configure Octavia on your controller. The most
straightforward way would be to follow the steps that are done in the
Octavia DevStack plugin. There is also an overview presentation which
has some troubleshooting tips.
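
Since the error you quote below is a plain connection failure to the Neutron
endpoint, it is worth first confirming that neutron-server came back up after
your config changes. A minimal check — assuming python-requests is installed
and "controller" resolves on the box where you run it:

    # Sketch: confirm the Neutron API answers on port 9696 before
    # debugging LBaaS/Octavia any further.
    import requests

    try:
        resp = requests.get("http://controller:9696/", timeout=5)
        print("neutron-server is up; API versions: %s" % resp.text)
    except requests.exceptions.ConnectionError as exc:
        print("cannot reach neutron-server: %s" % exc)
        # A failure here usually means neutron-server crashed on startup,
        # e.g. because the new service_plugins entry could not be loaded;
        # check the neutron-server log for import errors.
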

Thanks,
Elena

On Sat, Jul 2, 2016 at 1:24 AM, zhihao wang 
wrote:

> Dear OpenStack Dev member:
>
> May I ask you some questions about Neutron LBaaS?
>
> How do I install Neutron LBaaS with Octavia in Mitaka?
> I followed these two guides, but which one should I use? (My OpenStack is
> Mitaka, 1 controller, 2 compute nodes)
>
> https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun  --  Ubuntu
> Packages Setup
> http://docs.openstack.org/mitaka/networking-guide/adv-config-lbaas.html
> -- Configuring LBaaS v2 with Octavia
>
> Here is what I did:
>
> pip install octavia
>
> and then :
> vim /etc/neutron/neutron.conf
> service_plugins =
> router,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
>
> [service_providers]
> service_provider =
> LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default
>
> /etc/openstack-dashboard/local_settings.py
>
>
> OPENSTACK_NEUTRON_NETWORK = {
> 'enable_lb': True
> }
>
>
> And then I restarted all the Neutron services and the Apache server
>   service neutron-server restart
>   service neutron-dhcp-agent restart
>   service neutron-metadata-agent restart
>   service neutron-l3-agent restart
>
> But then I ran the command neutron agent-list, and it returned this. I am
> wondering what is wrong here? How can I install Neutron LBaaS?
>
> root@controller:~# neutron agent-list
> Unable to establish connection to http://controller:9696/v2.0/agents.json
>
>
> Please help
>
> Thanks so much
>
> Thanks
> Wally
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New Python35 Jobs coming

2016-07-04 Thread Victor Stinner

On 04/07/2016 at 07:59, Denis Makogon wrote:

Then let it run for a day or two in our CI, discuss with neutron team,
and send a patch for project-config to change the setup,

Can confirm that nova, glance, cinder, heat clients are py35 compatible.


The tox.ini files of Nova, Swift and Trove need to be modified to copy/paste
the whitelist of tests running on Python 3. The fix for Nova:


   https://review.openstack.org/#/c/336432/

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]

2016-07-04 Thread Luck Dog
Hello everyone,

I am trying to run DevStack on Ubuntu 14.04 in a single VirtualBox VM. An
error turns up before it successfully starts. Yesterday I did not state this
question clearly enough, so I am adding more detail here. My steps are:
1) Git clone DevStack.
2) Copy devstack/local.conf.sample to the DevStack folder and rename it to
local.conf.

The finished steps before the error turns up are listed as follows:

2016-06-29 09:11:53.081 | stack.sh log
/opt/stack/logs/stack.sh.log.2016-06-29-171152
2016-06-29 09:12:19.797 | Installing package prerequisites
2016-06-29 09:15:27.224 | Installing OpenStack project source
2016-06-29 09:24:43.323 | Installing Tricircle
2016-06-29 09:24:55.979 | Starting RabbitMQ
2016-06-29 09:25:00.731 | Configuring and starting MySQL
2016-06-29 09:25:20.143 | Starting Keystone
2016-06-29 09:43:18.591 | Configuring Glance
2016-06-29 09:43:59.667 | Configuring Neutron
2016-06-29 09:46:30.646 | Configuring Cinder
2016-06-29 09:46:54.719 | Configuring Nova
2016-06-29 09:48:23.175 | Configuring Tricircle
2016-06-29 09:51:24.143 | Starting Glance
2016-06-29 09:52:11.133 | Uploading images
2016-06-29 09:52:45.460 | Starting Nova API
2016-06-29 09:53:27.511 | Starting Neutron
2016-06-29 09:54:21.476 | Creating initial neutron network elements

The last errors when it stops running are:

Request body: {u'network': {u'router:external': True,
u'provider:network_type': u'flat', u'name': u'public',
u'provider:physical_network': u'public', u'admin_state_up': True}}
from (pid=29980) prepare_request_body
/opt/stack/neutron/neutron/api/v2/base.py:674
2016-06-29 17:56:04.359 DEBUG neutron.db.quota.driver
[req-e97f6276-8e19-408b-829a-004a31256453 admin
13869ba8005b480bbcbe17b2695fd5e2] Resources subnetpool have unlimited
quota limit. It is not required to calculate headroom from (pid=29980)
make_reservation /opt/stack/neutron/neutron/db/quota/driver.py:191
2016-06-29 17:56:04.381 DEBUG neutron.db.quota.driver
[req-e97f6276-8e19-408b-829a-004a31256453 admin
13869ba8005b480bbcbe17b2695fd5e2] Attempting to reserve 1 items for
resource network. Total usage: 0; quota limit: 10; headroom:10 from
(pid=29980) make_reservation
/opt/stack/neutron/neutron/db/quota/driver.py:223
2016-06-29 17:56:04.425 ERROR neutron.api.v2.resource
[req-e97f6276-8e19-408b-829a-004a31256453 admin
13869ba8005b480bbcbe17b2695fd5e2] create failed
2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource Traceback (most
recent call last):
2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource   File
"/opt/stack/neutron/neutron/api/v2/resource.py", line 78, in resource
2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource     result =
method(request=request, **args)
2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource   File
"/opt/stack/neutron/neutron/api/v2/base.py", line 424, in create
2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource     return
self._create(request, body, **kwargs)
2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource   File
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in
wrapper
2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource     ectxt.value =
e.inner_exc
2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource   File
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221,
in __exit__
2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource
self.force_reraise()
2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource   File
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197,
in force_reraise
2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource
six.reraise(self.type_, self.value, self.tb)
2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource   File
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in
wrapper
2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource     return f(*args,
**kwargs)
2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource   File
"/opt/stack/neutron/neutron/api/v2/base.py", line 535, in _create
2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource     return
obj_creator(request.context, **kwargs)
2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource   File
"/opt/stack/tricircle/tricircle/network/plugin.py", line 238, in
create_network
2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource     is_external =
self._ensure_az_set_for_external_network(net_data)
2016-06-29 17:5