Re: [openstack-dev] [mistral] [murano] [yaql] yaqluator bug

2016-07-05 Thread Renat Akhmerov
Great! Alex, enjoy using yaqluator :)

Renat Akhmerov
@Nokia

> On 05 Jul 2016, at 23:16, Elisha, Moshe (Nokia - IL)  
> wrote:
> 
> Thank you all for assisting.
> 
> When I tested Mistral I used an older version of Mistral (meaning an older 
> version of yaql).
> 
> I have verified that latest Mistral is working as expected.
> I have upgraded the yaql library in yaqluator to 1.1.0 and it is now working 
> as expected.
> 
> Thanks!
> 
> From: Dougal Matthews >
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> >
> Date: Tuesday, 5 July 2016 at 17:53
> To: "OpenStack Development Mailing List (not for usage questions)" 
> >
> Subject: Re: [openstack-dev] [mistral] [murano] [yaql] yaqluator bug
> 
> 
> 
> On 5 July 2016 at 08:32, Renat Akhmerov  > wrote:
>> Stan, thanks for the clarification. What’s the latest stable version that
>> we’re supposed to use? global-requirements.txt has yaql>=1.1.0; I wonder
>> if it’s correct.
> 
> It is also worth looking at the upper-constraints.txt. It has 1.1.1 which is 
> the latest release. So it all seems up to date.
> 
> https://github.com/openstack/requirements/blob/master/upper-constraints.txt#L376
>  
> 
> 
> I think the problem is that this external project isn't being updated. 
> Assuming they have not deployed anything that isn't committed, they are 
> running yaql 1.0.2, which is almost a year old.
> 
> https://github.com/ALU-CloudBand/yaqluator/blob/master/requirements.txt#L3 
> 
>  
>> 
>> Renat Akhmerov
>> @Nokia
>> 
>>> On 05 Jul 2016, at 12:12, Stan Lagun >> > wrote:
>>> 
>>> Hi!
>>> 
>>> The issue with join is just a yaql bug that is already fixed. The problem 
>>> with yaqluator is that it doesn't use the latest yaql library.
>>> 
>>> Another problem is that it doesn't set options correctly. As a result, it 
>>> is possible to bring the site down with a query that produces an endless 
>>> collection.
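[Editor's note: a small, self-contained sketch in plain Python — not the actual yaql implementation — of what an iterator limit such as yaql's "yaql.limitIterators" option guards against:]

```python
import itertools


def take_limited(iterable, limit):
    # Stop consuming after `limit` items; raise instead of hanging forever.
    items = list(itertools.islice(iterable, limit + 1))
    if len(items) > limit:
        raise OverflowError('collection exceeded limit of %d items' % limit)
    return items


# A finite collection passes through unchanged.
print(take_limited([1, 2, 3], 5))  # [1, 2, 3]

# An endless collection would otherwise consume unbounded memory and
# bring the site down; the guard trips instead.
try:
    take_limited(itertools.count(1), 5)
except OverflowError as err:
    print(err)  # collection exceeded limit of 5 items
```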
>>> 
>>> Sincerely yours,
>>> Stan Lagun
>>> Principal Software Engineer @ Mirantis
>>> 
>>>  
>>> On Tue, Jun 28, 2016 at 9:46 AM, Elisha, Moshe (Nokia - IL) 
>>> > wrote:
 Hi,
 
 Thank you for the kind words, Alexey.
 
 I was able to reproduce your bug and I have also found the issue.
 
 The problem is that we did not create the parser with the engine_options 
 that the yaql library uses by default in its CLI.
 Specifically, the "yaql.limitIterators" option was missing… I am not sure 
 that this setting should have this effect, but maybe the yaql guys can 
 comment on that.
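[Editor's note: for reference, a sketch of the engine options involved. The option names are the ones the yaql library recognizes; the values here are illustrative assumptions, not yaqluator's actual configuration:]

```python
# Illustrative engine options for a yaql parser. Option names are yaql's;
# the values are assumptions for the sake of the example.
engine_options = {
    'yaql.limitIterators': 1000,     # cap on items drawn from any iterator
    'yaql.memoryQuota': 10000,       # rough cap on memory used by a query
    'yaql.convertTuplesToLists': True,
    'yaql.convertSetsToLists': True,
}

# With the yaql library these would be passed to the factory, e.g.:
#   engine = yaql.factory.YaqlFactory().create(options=engine_options)
print(sorted(engine_options))
```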
 
 If we change yaqluator to use this setting, it will no longer be consistent 
 with Mistral, because Mistral uses YAQL without this engine option (if I 
 use your example in a workflow, Mistral returns exactly what yaqluator 
 returns).
 
 
 Workflow:
 
> ---
> version: '2.0'
> 
> test_yaql:
>   tasks:
> test_yaql:
>   action: std.noop
>   publish:
> output_expr: <% [1,2].join([3], true, [$1, $2]) %>
 
 Workflow result:
 
 
 [root@s53-19 ~(keystone_admin)]# mistral task-get-published 
 01d2bce3-20d0-47b2-84f2-7bd1cb2bf9f7
 {
 "output_expr": [
 [
 1,
 3
 ]
 ]
 }
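[Editor's note: the expected cross-join semantics of that expression can be sketched in plain Python. This illustrates what `[1,2].join([3], true, [$1, $2])` should presumably evaluate to with a fixed yaql — every left/right pair mapped through the selector — not the yaql implementation itself:]

```python
def cross_join(left, right, predicate=lambda l, r: True,
               selector=lambda l, r: [l, r]):
    # Plain-Python sketch of join semantics: every pair passing the
    # predicate is mapped through the selector.
    return [selector(l, r) for l in left for r in right if predicate(l, r)]


# Mirrors the yaql expression [1,2].join([3], true, [$1, $2]).
print(cross_join([1, 2], [3]))  # [[1, 3], [2, 3]]
```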
 
 
 As Dougal Matthews pointed out, yaqluator is indeed open source and 
 contributions are welcome.
 
 [1] 
 https://github.com/ALU-CloudBand/yaqluator/commit/e523dacdde716d200b5ed1015543d4c4680c98c2
  
 
 
 
 
 From: Dougal Matthews >
 Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
 >
 Date: Monday, 27 June 2016 at 16:44
 To: "OpenStack Development Mailing List (not for usage questions)" 
 >
 Subject: Re: [openstack-dev] [mistral] [murano] [yaql] yaqluator bug
 
 On 27 June 2016 at 14:30, Alexey Khivin > wrote:
> Hello, Moshe 
> 
> Recently I discovered yaqluator.com for myself!
> 

Re: [openstack-dev] Question about Openstack Magnum

2016-07-05 Thread Ton Ngo

Hi Wally,
You can try using an IP address instead of the name "controller" for
the "host" attribute.  This should be in /etc/magnum/magnum.conf in the
section [api].
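[Editor's note: a sketch of the suggested change. The IP address is a placeholder for the controller's actual management IP, and 9511 is Magnum's usual API port — adjust both to your deployment:]

```ini
# /etc/magnum/magnum.conf (sketch)
[api]
# Use an IP address; a hostname such as "controller" is rejected by
# this option's validation ("controller is not IPv4 or IPv6 address").
host = 10.0.0.11
port = 9511
```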
Ton Ngo,



From:   zhihao wang 
To: "openst...@lists.openstack.org"
,
"openstack-dev@lists.openstack.org"

Date:   07/05/2016 11:52 AM
Subject:[openstack-dev] Question about Openstack Magnum



Dear Magnum team member

May I ask you some questions about Magnum?

I have OpenStack Mitaka (1 controller and 2 compute nodes).
I have installed all the components for Magnum, and also installed LBaaS v2.

I have installed Magnum following this guide:
http://docs.openstack.org/developer/magnum/install-guide-from-source.html

I then tried to use Magnum, but I am not sure how to use it to create a k8s
cluster.

When I try to list the bay models it returns this. I sourced
admin-openrc, but the result is still the same:

root@controller:/var/lib/magnum/magnum# magnum --version
2.0.0
root@controller:/var/lib/magnum/magnum# magnum servie-list
ERROR: You must provide a tenant name or tenant id via --os-tenant-name,
--os-tenant-id, env[OS_TENANT_NAME] or env[OS_TENANT_ID]
root@controller:/var/lib/magnum/magnum#
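[Editor's note: the tenant error above usually means the credential environment variables were not picked up by the client. Exporting them before running `magnum` is one way around it — the values below are placeholders, use the ones from your own admin-openrc:]

```shell
# Placeholder values -- substitute the tenant/user from your admin-openrc.
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=secret
export OS_AUTH_URL=http://controller:35357/v2.0
echo "OS_TENANT_NAME=${OS_TENANT_NAME}"
```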

Also, I see lots of errors in magnum-api.log:

2016-07-05 11:45:47.239 85426 WARNING oslo_reports.guru_meditation_report
[-] Guru meditation now registers SIGUSR1 and SIGUSR2 by default for
backward compatibility. SIGUSR1 will no longer be registered in a future
release, so please use SIGUSR2 to generate reports.
2016-07-05 11:45:47.240 85426 INFO magnum.api.app [-] Full WSGI config
used: /etc/magnum/api-paste.ini
2016-07-05 11:45:47.303 85426 CRITICAL magnum [-] ConfigFileValueError:
Value for option host is not valid: controller is not IPv4 or IPv6 address
2016-07-05 11:45:47.303 85426 ERROR magnum Traceback (most recent call
last):
2016-07-05 11:45:47.303 85426 ERROR magnum   File
"/var/lib/magnum/env/bin/magnum-api", line 10, in 
2016-07-05 11:45:47.303 85426 ERROR magnum sys.exit(main())
2016-07-05 11:45:47.303 85426 ERROR magnum   File
"/var/lib/magnum/env/local/lib/python2.7/site-packages/magnum/cmd/api.py",
line 46, in main
2016-07-05 11:45:47.303 85426 ERROR magnum host, port =
cfg.CONF.api.host, cfg.CONF.api.port
2016-07-05 11:45:47.303 85426 ERROR magnum   File
"/var/lib/magnum/env/local/lib/python2.7/site-packages/oslo_config/cfg.py",
line 3004, in __getattr__
2016-07-05 11:45:47.303 85426 ERROR magnum return self._conf._get(name,
self._group)
2016-07-05 11:45:47.303 85426 ERROR magnum   File
"/var/lib/magnum/env/local/lib/python2.7/site-packages/oslo_config/cfg.py",
line 2615, in _get
2016-07-05 11:45:47.303 85426 ERROR magnum value = self._do_get(name,
group, namespace)
2016-07-05 11:45:47.303 85426 ERROR magnum   File
"/var/lib/magnum/env/local/lib/python2.7/site-packages/oslo_config/cfg.py",
line 2658, in _do_get
2016-07-05 11:45:47.303 85426 ERROR magnum % (opt.name, str(ve)))
2016-07-05 11:45:47.303 85426 ERROR magnum ConfigFileValueError: Value for
option host is not valid: controller is not IPv4 or IPv6 address
2016-07-05 11:45:47.303 85426 ERROR magnum
^C

I am wondering: is there anything else I need to configure for Magnum?

thanks so much

wally
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [puppet] [designate] An update on the state of puppet-designate (and designate in RDO)

2016-07-05 Thread Matt Fischer
We're using Designate, but still on Juno, with Puppet code from around that
time (summer of 2015). We'll likely try to upgrade to Mitaka at some point,
but Juno Designate "just works" so it's been low priority. Looking forward to
your efforts here.

On Tue, Jul 5, 2016 at 7:47 PM, David Moreau Simard  wrote:

> Hi !
>
> tl;dr
> puppet-designate is undergoing some significant updates to bring it
> up to par right now.
> While I will try to ensure it is well tested and backwards compatible,
> things *could* break. I would like feedback.
>
> I cc'd -operators because I'm interested in knowing if there are any
> users of puppet-designate right now: which distro and release of
> OpenStack?
>
> I'm an RDO maintainer and I took an interest in puppet-designate because
> we did not have any proper test coverage for designate in RDO
> packaging until now.
>
> The RDO community mostly relies on collaboration with installation and
> deployment projects such as Puppet OpenStack to test our packaging.
> We can, in turn, provide some level of guarantee that packages built
> out of trunk branches (and eventually stable releases) should work.
> The idea is to make puppet-designate work with RDO, then integrate it
> in the puppet-openstack-integration CI scenarios and we can leverage
> that in RDO CI afterwards.
>
> Both puppet-designate and designate RDO packaging were unfortunately
> in quite a sad state after not being maintained very well and a lot of
> work was required to even get basic tests to pass.
> The good news is that it didn't work with RDO before and now it does,
> for newton.
> Testing coverage has been improved and will be improved even further
> for both RDO and Ubuntu Cloud Archive.
>
> If you'd like to follow the progress of the work, the reviews are
> tagged with the topic "designate-with-rdo" [1].
>
> Let me know if you have any questions !
>
> [1]: https://review.openstack.org/#/q/topic:designate-with-rdo
>
> David Moreau Simard
> Senior Software Engineer | Openstack RDO
>
> dmsimard = [irc, github, twitter]
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [designate] An update on the state of puppet-designate (and designate in RDO)

2016-07-05 Thread Sam Morrison
We (NeCTAR) use puppet-designate on Ubuntu 14.04 with Liberty.

Cheers,
Sam


> On 6 Jul 2016, at 11:47 AM, David Moreau Simard  wrote:
> 
> Hi !
> 
> tl;dr
> puppet-designate is undergoing some significant updates to bring it
> up to par right now.
> While I will try to ensure it is well tested and backwards compatible,
> things *could* break. I would like feedback.
> 
> I cc'd -operators because I'm interested in knowing if there are any
> users of puppet-designate right now: which distro and release of
> OpenStack?
> 
> I'm an RDO maintainer and I took an interest in puppet-designate because
> we did not have any proper test coverage for designate in RDO
> packaging until now.
> 
> The RDO community mostly relies on collaboration with installation and
> deployment projects such as Puppet OpenStack to test our packaging.
> We can, in turn, provide some level of guarantee that packages built
> out of trunk branches (and eventually stable releases) should work.
> The idea is to make puppet-designate work with RDO, then integrate it
> in the puppet-openstack-integration CI scenarios and we can leverage
> that in RDO CI afterwards.
> 
> Both puppet-designate and designate RDO packaging were unfortunately
> in quite a sad state after not being maintained very well and a lot of
> work was required to even get basic tests to pass.
> The good news is that it didn't work with RDO before and now it does,
> for newton.
> Testing coverage has been improved and will be improved even further
> for both RDO and Ubuntu Cloud Archive.
> 
> If you'd like to follow the progress of the work, the reviews are
> tagged with the topic "designate-with-rdo" [1].
> 
> Let me know if you have any questions !
> 
> [1]: https://review.openstack.org/#/q/topic:designate-with-rdo
> 
> David Moreau Simard
> Senior Software Engineer | Openstack RDO
> 
> dmsimard = [irc, github, twitter]
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] [designate] An update on the state of puppet-designate (and designate in RDO)

2016-07-05 Thread David Moreau Simard
Hi !

tl;dr
puppet-designate is undergoing some significant updates to bring it
up to par right now.
While I will try to ensure it is well tested and backwards compatible,
things *could* break. I would like feedback.

I cc'd -operators because I'm interested in knowing if there are any
users of puppet-designate right now: which distro and release of
OpenStack?

I'm an RDO maintainer and I took an interest in puppet-designate because
we did not have any proper test coverage for designate in RDO
packaging until now.

The RDO community mostly relies on collaboration with installation and
deployment projects such as Puppet OpenStack to test our packaging.
We can, in turn, provide some level of guarantee that packages built
out of trunk branches (and eventually stable releases) should work.
The idea is to make puppet-designate work with RDO, then integrate it
in the puppet-openstack-integration CI scenarios and we can leverage
that in RDO CI afterwards.

Both puppet-designate and designate RDO packaging were unfortunately
in quite a sad state after not being maintained very well and a lot of
work was required to even get basic tests to pass.
The good news is that it didn't work with RDO before and now it does,
for newton.
Testing coverage has been improved and will be improved even further
for both RDO and Ubuntu Cloud Archive.

If you'd like to follow the progress of the work, the reviews are
tagged with the topic "designate-with-rdo" [1].

Let me know if you have any questions !

[1]: https://review.openstack.org/#/q/topic:designate-with-rdo

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila][stable] liberty periodic bitrot jobs have been failing more than a week

2016-07-05 Thread Matt Riedemann

On 7/4/2016 11:30 AM, Ben Swartzlander wrote:

On 07/03/2016 09:19 AM, Matt Riedemann wrote:

On 7/1/2016 8:18 PM, Ravi, Goutham wrote:

Thanks Matt.

https://review.openstack.org/#/c/334220 adds the upper constraints.

--
Goutham


On 7/1/16, 5:08 PM, "Matt Riedemann"  wrote:

The manila periodic stable/liberty jobs have been failing for at least a
week.

It looks like manila isn't using upper-constraints when running unit
tests, not even on stable/mitaka or master. So in liberty it's pulling
in uncapped oslo.utils even though the upper constraint for oslo.utils
in liberty is 3.2.

Who from the manila team is going to be working on fixing this, either
via getting upper-constraints in place in the tox.ini for manila (on all
supported branches) or performing some kind of workaround in the code?
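[Editor's note: for reference, the usual way projects applied upper constraints at the time was through tox's install_command. A sketch follows — the constraints URL/branch would need to match each stable branch, so treat this as illustrative rather than the exact manila change:]

```ini
# tox.ini (sketch)
[testenv]
install_command =
    pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
```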



Thanks.

I noticed that there is no Tempest / devstack job run against the
stable/liberty change - why is there no integration testing of Manila in
stable/liberty outside of 3rd party CI (which is not voting)?


Matt, this is why: https://review.openstack.org/#/c/286497/

-Ben





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Ah great, thanks for pointing that out.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] TripleO CI mentoring

2016-07-05 Thread Sanjay Upadhyay
On Tue, Jul 5, 2016 at 10:59 PM, Wesley Hayutin  wrote:

>
>
> On Tue, Jul 5, 2016 at 1:06 PM, Steven Hardy  wrote:
>
>> Hi all,
>>
>> At last weeks meeting, we discussed the idea of some sort of rotation
>> where
>> folks would volunteer their time to both help fix CI when it breaks, and
>> also pass on some of the accrued knowledge within the team to newer folks
>> wishing to learn.
>>
>> I'm hoping this will achieve a few things:
>> - Reduce the load on the subset of folks constantly fixing CI by getting
>>   more people involved and familiar
>> - Identify areas where we need to document better so 1-1 mentoring isn't
>>   needed in the future.
>>
>> Note that this is explicitly *not* about volunteering to be the one person
>> that fixes all-the-things in CI, everyone is still encouraged to do that,
>> it's more about finding folks willing to set aside some time to be
>> responsive on IRC, act as a point of contact, and take some extra time to
>> pass on knowledge around the series of steps we take when a trunk
>> regression or other CI related issue occurs.
>>
>> I started this etherpad:
>>
>> https://etherpad.openstack.org/p/tripleo-ci-mentoring
>>
>> I'd suggest we start from the week after the n-2 milestone, and I've
>> volunteered as the first mentor for that week.
>>
>> Feel free to update if you're willing to participate in the ongoing task
>> of keeping TripleO CI running smoothly in any capacity, and hopefully we
>> can get more folks involved and communicating.
>>
>>
+1 to all the effort you guys are putting in. I have added myself under
mentees.

regards
/sanjay

> If anyone has any thoughts around this process feel free to reply here and
>> we can hopefully refine things so they are helpful to folks.
>>
>> Thanks!
>>
>> Steve
>>
>
> Awesome, thanks Steve!
>
>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Rabbit-mq 3.4 crashing (anyone else seen this?)

2016-07-05 Thread Sam Morrison
We had some issues related to this too; we ended up changing our 
collect_statistics_interval to 30 seconds, as opposed to the default, which 
I think is 5.
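[Editor's note: for reference, a sketch of where that knob lives, in the classic Erlang rabbitmq.config format. The interval is in milliseconds (30000 = 30 s; the default is 5000), and the rates_mode line is the related management-plugin tuning — both values are illustrative:]

```erlang
%% /etc/rabbitmq/rabbitmq.config (sketch)
[
  {rabbit, [
    {collect_statistics_interval, 30000}   %% 30 s; default is 5000 (5 s)
  ]},
  {rabbitmq_management, [
    {rates_mode, none}                     %% disable per-object rate stats
  ]}
].
```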

We also upgraded to 3.6.2, but that version is very buggy and we wouldn’t 
recommend it to anyone. It has a memory leak and some other nasty bugs that we encountered.

3.6.1 on the other hand is very stable for us and we’ve been using it in 
production for several months now. 

Sam


> On 6 Jul 2016, at 3:50 AM, Alexey Lebedev  wrote:
> 
> Hi Joshua,
> 
> Does this happen with `rates_mode` set to `none` and tuned 
> `collect_statistics_interval`? Like in 
> https://bugs.launchpad.net/fuel/+bug/1510835 
> 
> 
> High connection/channel churn during upgrade can cause such issues.
> 
> BTW, soon-to-be-released rabbitmq 3.6.3 contains several improvements related 
> to management plugin statistics handling. And almost every version before 
> that also contained some related fixes. And I think that the upstream devs' 
> response will include some mention of upgrading =)
> 
> Best,
> Alexey
> 
> On Tue, Jul 5, 2016 at 8:02 PM, Joshua Harlow  > wrote:
> Hi ops and dev-folks,
> 
> We over at GoDaddy (running RabbitMQ with OpenStack) have been hitting an 
> issue that causes `rabbit_mgmt_db` to consume nearly all of the process 
> memory (after a given amount of time).
> 
> We've been thinking that this bug (or bugs?) may have existed for a while and 
> our dual-version-path (where we upgrade the control plane and then 
> slowly/eventually upgrade the compute nodes to the same version) has somehow 
> triggered this memory leaking bug/issue since it has happened most 
> prominently on our cloud which was running nova-compute at kilo and the other 
> services at liberty (thus using the versioned objects code path more 
> frequently due to needing translations of objects).
> 
> The rabbit we are running is 3.4.0 on CentOS Linux release 7.2.1511 with 
> kernel 3.10.0-327.4.4.el7.x86_64 (do note that upgrading to 3.6.2 seems to 
> make the issue go away),
> 
> # rpm -qa | grep rabbit
> 
> rabbitmq-server-3.4.0-1.noarch
> 
> The logs that seem relevant:
> 
> ```
> **
> *** Publishers will be blocked until this alarm clears ***
> **
> 
> =INFO REPORT 1-Jul-2016::16:37:46 ===
> accepting AMQP connection <0.23638.342> (127.0.0.1:51932 -> 127.0.0.1:5671)
> 
> =INFO REPORT 1-Jul-2016::16:37:47 ===
> vm_memory_high_watermark clear. Memory used:29910180640 allowed:47126781542
> ```
> 
> This happens quite often, the crashes have been affecting our cloud over the 
> weekend (which made some dev/ops not so happy especially due to the july 4th 
> mini-vacation),
> 
> Looking to see if anyone else has seen anything similar?
> 
> For those interested this is the upstream bug/mail that I'm also seeing about 
> getting confirmation from the upstream users/devs (which also has erlang 
> crash dumps attached/linked),
> 
> https://groups.google.com/forum/#!topic/rabbitmq-users/FeBK7iXUcLg 
> 
> 
> Thanks,
> 
> -Josh
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> 
> -- 
> Best,
> Alexey
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][infra][ci] bulk repeating a test job on a single review in parallel ?

2016-07-05 Thread James E. Blair
Kashyap Chamarthy  writes:

> If it reduces nondeterministic spam for the CI Infra, and makes us
> achieve the task at hand, sure.  [/me need to educate himself a
> bit more on the Zuul pipeline infrastructure.]
>
> Worth filing this (and your 'idle pipeline' thought below) in the Zuul
> tracker here?
>
> https://storyboard.openstack.org/#!/project/679
>
>> In the past we've discussed the option of having an "idle pipeline"
>> which repeatedly runs specified jobs only when there are unused
>> resources available, so that it doesn't significantly cut into our
>> resource pool when we're under high demand but still allows to
>> automatically collect a large amount of statistical data.
>> 
>> Anyway, hopefully James Blair can weigh in on this, since Zuul is
>> basically in a feature freeze for a while to limit the number of
>> significant changes we'll need to forward-port into the v3 branch.
>> We'd want to discuss these new features in the context of Zuul v3
>> instead.

Yes, I think there is more that we can do around having specific jobs
run, and also more types of pipeline managers that understand load
conditions -- or at least more fine-grained priority specification so
they don't have to.  But I also think what Jeremy said is correct --
we're in the middle of a push toward Zuul v3 and need to stay focused on
that.  These are good suggestions with well articulated use-cases, so I
think adding this to the issue tracker for now so that we can address it
later is the thing to do.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Fail build request if we can't inject files?

2016-07-05 Thread Matt Riedemann

On 7/5/2016 4:36 PM, Matt Riedemann wrote:

On 7/4/2016 8:45 AM, Matt Riedemann wrote:

On 7/4/2016 3:40 AM, Daniel P. Berrange wrote:


Won't the user provided files also get made available by the config
drive /
metadata service ?  I think that's the primary reason for file
injection not
being a fatal problem. Oh that and the fact that we've wanted to kill
it for
at least 3 years now :-)

Regards,
Daniel



Ugh, good point, except force_config_drive defaults to False and running
the metadata service is optional.

In the case of this failing in the tempest-dsvm-neutron-full-ssh job,
the instance is not created with a config drive, but the metadata
service is running. Tempest doesn't check for the files there though
because it's configured to expect file injection to work, so it ssh's
into the guest and looks for the files.

I have several changes up related to this:

https://review.openstack.org/#/q/topic:bug/1598581

One is making Tempest disable file injection tests by default since Nova
disables file injection by default (at least for the libvirt driver).

Another is changing devstack to actually configure nova/tempest for file
injection which is what the job should have been doing anyway.

My nova fix is not going to fly because of config drive (which I could
check from the virt driver) and the metadata service (which I can't from
the virt driver). So I guess the best we can do is log something...



I think I can still fail the server create from the libvirt driver if we
can't honor the request.

For config drive, I can just check configdrive.required_by(instance)
like we normally do.

For the metadata API, it's a bit uglier, but I could check if 'metadata'
is in CONF.enabled_apis and if not, and no config drive and files were
requested for injection but it's disabled, then we fail.


mtreinish pointed out that this would rely on having enabled_apis 
configured in nova.conf on the nova-compute nodes, which might not be 
the case, and I'm not sure we have a way to tell with oslo.config if an 
option is not present vs getting the default value when not specified. 
So I probably can't rely on that.
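[Editor's note: as a sketch, the proposed driver-side guard would look roughly like this. The names are illustrative, not actual Nova code, and per the caveat above the metadata-API check may not be reliable on compute nodes:]

```python
def must_fail_file_injection(injected_files, injection_enabled,
                             config_drive_required, metadata_api_enabled):
    """Illustrative sketch of the proposed check, not actual Nova code.

    Fail the server create only when files were requested but no
    mechanism (direct injection, config drive, metadata API) can
    deliver them to the guest.
    """
    if not injected_files:
        return False          # nothing was requested
    if injection_enabled:
        return False          # driver can inject directly
    if config_drive_required or metadata_api_enabled:
        return False          # files still reachable another way
    return True


# Files requested but every delivery path disabled -> cannot be honored.
print(must_fail_file_injection(['/etc/foo'], False, False, False))  # True
# Config drive picks up the slack -> proceed.
print(must_fail_file_injection(['/etc/foo'], False, True, False))   # False
```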




How terrible is that?




--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Fail build request if we can't inject files?

2016-07-05 Thread Matt Riedemann

On 7/4/2016 8:45 AM, Matt Riedemann wrote:

On 7/4/2016 3:40 AM, Daniel P. Berrange wrote:


Won't the user provided files also get made available by the config
drive /
metadata service ?  I think that's the primary reason for file
injection not
being a fatal problem. Oh that and the fact that we've wanted to kill
it for
at least 3 years now :-)

Regards,
Daniel



Ugh, good point, except force_config_drive defaults to False and running
the metadata service is optional.

In the case of this failing in the tempest-dsvm-neutron-full-ssh job,
the instance is not created with a config drive, but the metadata
service is running. Tempest doesn't check for the files there though
because it's configured to expect file injection to work, so it ssh's
into the guest and looks for the files.

I have several changes up related to this:

https://review.openstack.org/#/q/topic:bug/1598581

One is making Tempest disable file injection tests by default since Nova
disables file injection by default (at least for the libvirt driver).

Another is changing devstack to actually configure nova/tempest for file
injection which is what the job should have been doing anyway.

My nova fix is not going to fly because of config drive (which I could
check from the virt driver) and the metadata service (which I can't from
the virt driver). So I guess the best we can do is log something...



I think I can still fail the server create from the libvirt driver if we 
can't honor the request.


For config drive, I can just check configdrive.required_by(instance) 
like we normally do.


For the metadata API, it's a bit uglier, but I could check if 'metadata' 
is in CONF.enabled_apis and if not, and no config drive and files were 
requested for injection but it's disabled, then we fail.


How terrible is that?

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Nominating Scott D'Angelo to Cinder core

2016-07-05 Thread Walter A. Boring IV
This is great! I know I'm a bit late replying to this on the ML due to my 
vacation, but I wholeheartedly agree!

+1

Walt
On 06/27/2016 10:27 AM, Sean McGinnis wrote:

I would like to nominate Scott D'Angelo to core. Scott has been very
involved in the project for a long time now and is always ready to help
folks out on IRC. His contributions [1] have been very valuable and he
is a thorough reviewer [2].

Please let me know if there are any objections to this within the next
week. If there are none I will switch Scott over by next week, unless
all cores approve prior to then.

Thanks!

Sean McGinnis (smcginnis)

[1] 
https://review.openstack.org/#/q/owner:%22Scott+DAngelo+%253Cscott.dangelo%2540hpe.com%253E%22+status:merged
[2] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Charm][Congress] Upstreaming JuJu Charm for Openstack Congress

2016-07-05 Thread SULLIVAN, BRYAN L
Hi,

I've been working in OPNFV (https://wiki.opnfv.org/) on a JuJu Charm to 
install the OpenStack Congress service on the OPNFV reference platform. This 
charm should be useful for anyone who wants to install Congress for use with 
an OpenStack deployment using the JuJu tool. I want to get the charm 
upstreamed as an official openstack.org git repo, similar to the other repos for 
JuJu Charms for OpenStack services. I participate in the Congress team in 
OpenStack, but we don't know the process for getting this charm upstreamed into an 
OpenStack repo, so I am reaching out to anyone on this list for help.

I have been working with gnuoy on #juju and narindergupta on #opnfv-joid on 
this. The charm has been tested and used to successfully install Congress on 
the OPNFV Colorado release (Mitaka-based, re OpenStack). While there are some 
features we need to continue to work on (e.g. Horizon integration), the charm 
is largely complete and stable. We are currently integrating it into the OPNFV 
CI/CD system through which it will be regularly/automatically tested in any 
OPNFV JuJu-based deployment (with this charm, Congress will become a default 
service deployed in OPNFV thru JuJu).

Any pointers on how to get started toward creating an OpenStack git repo for 
this are appreciated.

Thanks,
Bryan Sullivan | AT&T


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][taas] IRC Meeting Canceled (week of 7/4/2016)

2016-07-05 Thread Anil Rao
Hi,

This week's TaaS IRC meeting is being canceled since some of the team members 
are planning to attend OpenStack Days in Tokyo. We will reconvene next week as 
usual.

Thanks,
Anil


Re: [openstack-dev] New Python35 Jobs coming

2016-07-05 Thread Andreas Jaeger
On 07/05/2016 02:52 PM, Andreas Jaeger wrote:
> [...]
> The change has merged and the python35 non-voting tests are run as part
> of new changes now.
> 
> The database setup for the python35-db variant is not working yet, and
> needs adjustment on the infra side.


The python35-db jobs are working now as well.

Happy Python35 testing,
Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [magnum][heat] Global stack-list for Magnum service user

2016-07-05 Thread Fox, Kevin M
+1.  I'd like to see a similar thing for Keystone validating user tokens.

Thanks,
Kevin


From: Johannes Grassler
Sent: Monday, July 04, 2016 2:43:47 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum][heat] Global stack-list for Magnum service 
user

Hello,

Magnum has a periodic task that checks the state of the Heat stacks it creates
for its bays. It does this across all users/tenants that have Magnum bays.
Currently it uses a global stack-list operation to query these Heat stacks:

https://github.com/openstack/magnum/blob/master/magnum/service/periodic.py#L83

Now the Magnum service user does not normally have permission to perform this 
operation,
hence the Magnum documentation currently suggests the following change to
Heat's policy.json:

| stacks:global_index: "role:admin",

This is less than optimal since it allows any tenant's admin user to perform a
global stack-list. Would it be an option to have something like this in Heat's
default policy.json?

| stacks:global_index: "role:service",

That way the global stack-list would be restricted to service users, and setting
up Magnum (or other services that use Heat internally) wouldn't require a change
to Heat's policy.json.
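
Spelled out, the proposed default would be a single entry in Heat's policy.json
(fragment only, for illustration; the rest of the file stays unchanged):

```json
{
    "stacks:global_index": "role:service"
}
```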

If that kind of approach is feasible I'd be happy to submit a change.

Cheers,

Johannes

--
Johannes Grassler, Cloud Developer
SUSE Linux GmbH, HRB 21284 (AG Nürnberg)
GF: Felix Imendörffer, Jane Smithard, Graham Norton
Maxfeldstr. 5, 90409 Nürnberg, Germany



Re: [openstack-dev] [Kolla] [docker] Storage-driver and loopback usage?

2016-07-05 Thread Steven Dake (stdake)
Actually our documentation recommends AUFS on Ubuntu and BTRFS on CentOS.  See:

http://docs.openstack.org/developer/kolla/operating-kolla.html

Search for BTRFS.

From: Jeffrey Zhang
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Sunday, July 3, 2016 at 8:58 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Kolla] [docker] Storage-driver and loopback usage?

Hi Gerard,

Here is what Docker officially recommends[0]: in a production environment, they 
recommend using the direct-lvm driver.

Kolla has no recommendation right now. In development, I know some people use 
OverlayFS and some use btrfs; these two are both faster than the others.


[0] https://docs.docker.com/engine/userguide/storagedriver/selectadriver/

On Mon, Jul 4, 2016 at 11:00 AM, Gerard Braad wrote:
Hi guys,


This weekend I have been looking into some issues I encountered with
`ostree` inside a Docker container, and this seemed to have been
caused by the use of loopback storage with device mapper. After this
experience I was wondering what Kolla did...

Usually for development purposes, or on a laptop, it is easy to just work
out-of-the-box. But after this experience I would not consider devicemapper
pleasant to use. I have moved all my development environments to OverlayFS and
will evaluate it for the time being...

What do you guys think or use? And what about the quickstart? I was
unable to find a statement about this. I did find a change of the
storage-driver in `kolla/tools/setup_RedHat.sh` to btrfs... and what
is used in CI?

regards,


Gerard

--

   Gerard Braad | http://gbraad.nl
   [ Doing Open Source Matters ]




--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


Re: [openstack-dev] [kolla][horizon] Out of branch horizon plugins?

2016-07-05 Thread Fox, Kevin M
I wrote the app-catalog-ui plugin. I was going to bring this up but hadn't 
gotten to it yet. Thanks for bringing it up.

We do package it up in an RPM, so if it's installed with the rest of the 
packages it should just work. The Horizon compress/collect RPM hook does the 
right thing already. It does cause the plugin to be enabled though, so I was 
thinking: maybe we make a Docker environment variable ENABLED_PLUGINS that 
contains a list of plugins to be enabled?

The value of ENABLED_PLUGINS could be written into 
/etc/openstack-dashboard/local-settings.d and the plugins could enable 
themselves based on it?
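
One possible shape for that idea — purely a sketch, where ENABLED_PLUGINS, the
snippet naming, and the settings directory are all assumptions rather than
current Kolla behaviour — is an entrypoint helper that splits the variable and
drops one settings snippet per plugin into local_settings.d:

```shell
#!/bin/sh
# Hypothetical entrypoint helper: $1 is a comma-separated plugin list (the
# proposed ENABLED_PLUGINS value), $2 the local_settings.d directory. One
# snippet is written per plugin so each plugin can enable itself based on it.
# The snippet contents are illustrative, not a real Horizon API.
enable_plugins() {
    dir="$2"
    mkdir -p "$dir"
    old_ifs=$IFS
    IFS=','
    for plugin in $1; do
        printf 'ENABLED_PLUGINS.append("%s")\n' "$plugin" \
            > "$dir/_90_enable_${plugin}.py"
    done
    IFS=$old_ifs
}

# Demo against a temporary directory; a real entrypoint would pass
# /etc/openstack-dashboard/local_settings.d instead.
demo_dir=$(mktemp -d)
enable_plugins "app-catalog-ui,manila-ui" "$demo_dir"
ls "$demo_dir"
```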

I'd rather have the horizon container be bigger than needed and have all the 
plugins tested/ready to go as needed, instead of trying to slide the plugins in 
myself. It's kind of a pain.

Thanks,
Kevin

From: Dave Walker [em...@daviey.com]
Sent: Sunday, July 03, 2016 1:15 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [kolla][horizon] Out of branch horizon plugins?

Hi,

Whilst writing a Kolla plugin, I noticed some issues with the way Horizon is 
configured in Kolla.

Horizon is increasingly embracing a plugin architecture with UI's and 
Dashboards being maintained outside of Horizon's tree.

These can be found with the type:horizon-plugin tag in openstack/governance 
[0], with 16 projects at present.

This isn't really addressed in Kolla, and is a little awkward to integrate as 
the horizon docker image is pure horizon.

Some projects have a tools/register_plugin.sh which performs the grunt work, 
whereas others require a workflow similar to:

cp /path/to/projects/new/panel openstack_dashboard/local/enabled/
cp /path/to/local/default/settings openstack_dashboard/local/local_settings.d/
cp /path/to/*policy.json openstack_dashboard/conf/
# compress if environment wants it
./manage.py collectstatic
./manage.py compress
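
Rolled into a helper, those steps might look like the sketch below — the plugin
tree layout (enabled/, local_settings.d/, conf/) and the Horizon root path are
assumptions for illustration, not a real standard:

```shell
#!/bin/sh
# Hypothetical install helper: $1 is the plugin source tree, $2 the Horizon
# root. Copies panels, default settings, and policy files, then rebuilds
# static assets. All paths/layout are illustrative.
install_plugin() {
    src="$1"
    horizon="$2"
    cp "$src"/enabled/_*.py "$horizon/openstack_dashboard/local/enabled/"
    # Settings and policy files are optional in this sketch
    cp "$src"/local_settings.d/*.py \
        "$horizon/openstack_dashboard/local/local_settings.d/" 2>/dev/null || true
    cp "$src"/conf/*policy.json \
        "$horizon/openstack_dashboard/conf/" 2>/dev/null || true
    ( cd "$horizon" && ./manage.py collectstatic --noinput && ./manage.py compress )
}
```

Called as `install_plugin /path/to/plugin /usr/share/openstack-dashboard`, it
would replace the manual copy-and-compress sequence above.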

(Separately, it would be nice if this was standardised.. but not the topic of 
this thread)

It would seem logical to pack all of these into the horizon docker image, and 
add a symlink into dashboard/local/enabled via ansible, policy.json and default 
settings with some conditionals if enabled_$service... but this will make the 
horizon docker image larger and more complicated.

What are your thoughts?

[0] 
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml

--
Kind Regards,
Dave Walker


Re: [openstack-dev] [nova] Rabbit-mq 3.4 crashing (anyone else seen this?)

2016-07-05 Thread Alexey Lebedev
Hi Joshua,

Does this happen with `rates_mode` set to `none` and tuned
`collect_statistics_interval`? Like in
https://bugs.launchpad.net/fuel/+bug/1510835
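
For reference, those two knobs live in rabbitmq.config — a fragment might look
like this (values are illustrative; `collect_statistics_interval` is in
milliseconds):

```erlang
%% rabbitmq.config fragment (sketch): stop computing per-message rates and
%% poll statistics every 30s instead of the default, as the bug above suggests.
[
  {rabbit, [{collect_statistics_interval, 30000}]},
  {rabbitmq_management, [{rates_mode, none}]}
].
```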

High connection/channel churn during upgrade can cause such issues.

BTW, soon-to-be-released rabbitmq 3.6.3 contains several improvements
related to management plugin statistics handling. And almost every version
before that also contained some related fixes. And I think that upstream
devs response will have some mention of upgrade =)

Best,
Alexey

On Tue, Jul 5, 2016 at 8:02 PM, Joshua Harlow  wrote:

> Hi ops and dev-folks,
>
> We over at godaddy (running rabbitmq with openstack) have been hitting a
> issue that has been causing the `rabbit_mgmt_db` consuming nearly all the
> processes memory (after a given amount of time),
>
> We've been thinking that this bug (or bugs?) may have existed for a while
> and our dual-version-path (where we upgrade the control plane and then
> slowly/eventually upgrade the compute nodes to the same version) has
> somehow triggered this memory leaking bug/issue since it has happened most
> prominently on our cloud which was running nova-compute at kilo and the
> other services at liberty (thus using the versioned objects code path more
> frequently due to needing translations of objects).
>
> The rabbit we are running is 3.4.0 on CentOS Linux release 7.2.1511 with
> kernel 3.10.0-327.4.4.el7.x86_64 (do note that upgrading to 3.6.2 seems to
> make the issue go away),
>
> # rpm -qa | grep rabbit
>
> rabbitmq-server-3.4.0-1.noarch
>
> The logs that seem relevant:
>
> ```
> **
> *** Publishers will be blocked until this alarm clears ***
> **
>
> =INFO REPORT 1-Jul-2016::16:37:46 ===
> accepting AMQP connection <0.23638.342> (127.0.0.1:51932 -> 127.0.0.1:5671
> )
>
> =INFO REPORT 1-Jul-2016::16:37:47 ===
> vm_memory_high_watermark clear. Memory used:29910180640 allowed:47126781542
> ```
>
> This happens quite often, the crashes have been affecting our cloud over
> the weekend (which made some dev/ops not so happy especially due to the
> july 4th mini-vacation),
>
> Looking to see if anyone else has seen anything similar?
>
> For those interested this is the upstream bug/mail that I'm also seeing
> about getting confirmation from the upstream users/devs (which also has
> erlang crash dumps attached/linked),
>
> https://groups.google.com/forum/#!topic/rabbitmq-users/FeBK7iXUcLg
>
> Thanks,
>
> -Josh
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best,
Alexey


Re: [openstack-dev] OpenStack Kolla - External Ceph

2016-07-05 Thread Steven Dake (stdake)
Matthias,

Copying the response to the mailing list since this is all technical in nature 
and I don't have all the answers.

From: Mathias Ewald
Date: Sunday, July 3, 2016 at 12:19 PM
To: Steven Dake
Subject: OpenStack Kolla - External Ceph

Hi Steven,

I am beginning to work on the external Ceph integration for OpenStack Kolla and 
wanted to share some thoughts with someone more involved in the project. I've 
been digging into the code a bit and the work that has already been done in this 
direction. From the code I have read so far, the current approach uses global.yml 
to configure everything necessary to connect to external Ceph; service config, 
ceph.conf, and keyring are then generated via Ansible.

At this point, I feel we are going in a direction where we try to wrap 
everything anybody could possibly want to configure with Kolla into extensive 
use of global.yml. We would have to introduce flags indicating a couple of 
different scenarios:

1. Deploy Ceph (already there: enable_ceph="yes")
2. Use Ceph with Glance (enable_ceph_glance="yes")
3. Use Ceph with Cinder (enable_ceph_cinder="yes")
4. Use Ceph with Nova (enable_ceph_nova="yes")

I disagree.  If ceph is enabled, then ceph should be used, if ceph is not 
enabled, then ceph shouldn't be used.  That implies all of OpenStack either 
uses Ceph or not.  So we really just need enable_ceph.


If enable_ceph is "yes" and enable_ceph_X="yes" we can follow the code that is 
active right now: Generate ceph.conf, create cephx credentials, generate 
keyring file, generate e.g. cinder.conf and configure backend.

Ack


If enable_ceph is "no" but enable_ceph_X are set to "yes", we need many more 
parameters in global.yml to tell Kolla the username, password, monitor nodes 
and some other stuff.

You see how adding configuration for each option creates a whole lot of 
configuration options.  I just don't see the use case, and it's a violation of 
our philosophy.


Now what if I wanted to have some custom parameter in my ceph.conf? As far as I 
understand Kolla now, we can provide custom configuration that is merged into 
the generated default file, but that's only true for the standard config files 
of services, right? (cinder.conf, nova.conf, neutron.conf, ...)

Kolla can only merge ini files.  I don't know what format ceph.conf is in (on 
PTO atm and don't have access to my lab).  If the format of ceph.conf is ini it 
could be merged.


Pretty much all we need is:

1. Make custom configs to (glance|cinder|nova).conf to enable RBD backend
2. Create /etc/ceph/ceph.conf
3. Create keyring in /etc/ceph

We already have (1), as Kolla has that INI merging functionality, so 
/etc/cinder/cinder.conf and others are already taken care of. (2) and (3) can be 
dealt with by allowing arbitrary files to be copied into the container. Only 
Nova is a bit more complex, as a Libvirt secret must be created.
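
For the Cinder case, (1) would just be the usual merge-in override an operator
already supplies — something like the following fragment (values illustrative):

```ini
# /etc/kolla/config/cinder.conf (sketch) -- merged into the generated file
[DEFAULT]
enabled_backends = rbd-1

[rbd-1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
```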

Allowing arbitrary files to be copied into the container would also solve 
another problem at the same time: Keystone allows per-domain configuration of 
identity and assignment backends. This is typically used to connect Keystone to 
different LDAP directories for different tenants. It involves creating the 
domain via the API and placing a file at 
/etc/keystone/domains/keystone.<domain_name>.conf for each domain.
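
For illustration, such a per-domain file typically looks like this (all values
are placeholders):

```ini
# /etc/keystone/domains/keystone.<domain_name>.conf (sketch)
[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.com
user = cn=admin,dc=example,dc=com
suffix = dc=example,dc=com
```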

We clearly need to handle the LDAP case - we have been asked as a community by 
5+ people for LDAP integration.  I think to prioritize this we need someone 
with an LDAP setup to do the actual development.  I don't have an LDAP 
environment.  I'm not opposed to setting one up, but I have a lot of high 
priority stuff and my plate is already full.  Maybe another developer (or you) 
would be interested?

Regards
-steve



What do you think?

cheers
Mathias

--
Mobil: +49 176 10567592
E-Mail: mew...@evoila.de

evoila GmbH
Wilhelm-Theodor-Römheld-Str. 34
55130 Mainz
Germany

Geschäftsführer: Johannes Hiemer

Amtsgericht Mainz HRB 42719

Diese E-Mail enthält vertrauliche und/oder rechtlich geschützte Informationen. 
Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrtümlich erhalten 
haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Mail. 
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail ist nicht 
gestattet.

This e-mail may contain confidential and/or privileged information. If You are 
not the intended recipient (or have received this e-mail in error) please 
notify the sender immediately and destroy this e-mail. Any unauthorised 
copying, disclosure or distribution of the material in this e-mail is strictly 
forbidden.


Re: [openstack-dev] Re-licensing OpenStack charms under Apache 2.0

2016-07-05 Thread James Beedy
James,

This is fine by me! +1

~James



Hi
>
> We're currently blocked on becoming an OpenStack project under the
> big-tent by the licensing of the 26 OpenStack charms under GPL v3.
>
> I'm proposing that we re-license the following code repositories as Apache
> 2.0:
>
>   charm-ceilometer
>   charm-ceilometer-agent
>   charm-ceph
>   charm-ceph-mon
>   charm-ceph-osd
>   charm-ceph-radosgw
>   charm-cinder
>   charm-cinder-backup
>   charm-cinder-ceph
>   charm-glance
>   charm-hacluster
>   charm-heat
>   charm-keystone
>   charm-lxd
>   charm-neutron-api
>   charm-neutron-api-odl
>   charm-neutron-gateway
>   charm-neutron-openvswitch
>   charm-nova-cloud-controller
>   charm-nova-compute
>   charm-odl-controller
>   charm-openstack-dashboard
>   charm-openvswitch-odl
>   charm-percona-cluster
>   charm-rabbitmq-server
>   charm-swift-proxy
>   charm-swift-storage
>
> The majority of contributors are from Canonical (from whom I have
> permission to make this switch) with a further 18 contributors from outside
> of Canonical who I will be directly contacting for approval in gerrit as
> reviews are raised for each repository.
>
> Any new charms and supporting repositories will be licensed under Apache
> 2.0 from the outset.
>
> Cheers
>
> James
>
>


Re: [openstack-dev] [Openstack-operators] [nova] Rabbit-mq 3.4 crashing (anyone else seen this?)

2016-07-05 Thread Joshua Harlow

Ah, those sets of commands sound pretty nice to run periodically.

Sounds like a useful script that could be placed in the ops tools repo 
(I forget where that repo lives, but I'm pretty sure it does exist?).


Some other oddness though is that this issue seems to go away when we 
don't run cross-release; do you see that also?


Another hypothesis was that the following fix may be triggering part of 
this @ https://bugs.launchpad.net/oslo.messaging/+bug/1495568


So that if we have some queues being set up as auto-delete and some 
being set up with expiry, perhaps the combination of these causes 
more work for the management database (and therefore it eventually 
falls behind and falls over).
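
Such a script might wrap the mgmt-db memory check in a periodic watchdog — a
sketch only, where the `rabbitmqctl status` parsing and the 2 GB threshold are
assumptions, and the stop/start pair is taken from Matt's runbook quoted below:

```shell
#!/bin/sh
# Parse the mgmt_db entry out of `rabbitmqctl status` output on stdin,
# e.g. "{mgmt_db,29910180640}," -> 29910180640
mgmt_db_bytes() {
    grep -o 'mgmt_db,[0-9]*' | cut -d, -f2
}

# Hypothetical watchdog: bounce just the management application when the
# stats DB grows past a threshold (2 GB here, purely illustrative).
threshold=2000000000
used=$(/usr/sbin/rabbitmqctl status 2>/dev/null | mgmt_db_bytes)
if [ -n "$used" ] && [ "$used" -gt "$threshold" ]; then
    rabbitmqctl eval 'application:stop(rabbitmq_management).'
    rabbitmqctl eval 'application:start(rabbitmq_management).'
fi
```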


Matt Fischer wrote:

Yes! This happens often but I'd not call it a crash, just the mgmt db
gets behind then eats all the memory. We've started monitoring it and
have runbooks on how to bounce just the mgmt db. Here are my notes on that:

restart rabbitmq mgmt server - this seems to clear the memory usage.

rabbitmqctl eval 'application:stop(rabbitmq_management).'
rabbitmqctl eval 'application:start(rabbitmq_management).'

run GC on rabbit_mgmt_db:
rabbitmqctl eval
'(erlang:garbage_collect(global:whereis_name(rabbit_mgmt_db)))'

status of rabbit_mgmt_db:
rabbitmqctl eval 'sys:get_status(global:whereis_name(rabbit_mgmt_db)).'

Rabbitmq mgmt DB how much memory is used:
/usr/sbin/rabbitmqctl status | grep mgmt_db

Unfortunately I didn't see that an upgrade would fix for sure and any
settings changes to reduce the number of monitored events also require a
restart of the cluster. The other issue with an upgrade for us is the
ancient version of erlang shipped with trusty. When we upgrade to Xenial
we'll upgrade erlang and rabbit and hope it goes away. I'll also
probably tweak the settings on retention of events then too.

Also for the record the GC doesn't seem to help at all.

On Jul 5, 2016 11:05 AM, "Joshua Harlow" wrote:

Hi ops and dev-folks,

We over at godaddy (running rabbitmq with openstack) have been
hitting a issue that has been causing the `rabbit_mgmt_db` consuming
nearly all the processes memory (after a given amount of time),

We've been thinking that this bug (or bugs?) may have existed for a
while and our dual-version-path (where we upgrade the control plane
and then slowly/eventually upgrade the compute nodes to the same
version) has somehow triggered this memory leaking bug/issue since
it has happened most prominently on our cloud which was running
nova-compute at kilo and the other services at liberty (thus using
the versioned objects code path more frequently due to needing
translations of objects).

The rabbit we are running is 3.4.0 on CentOS Linux release 7.2.1511
with kernel 3.10.0-327.4.4.el7.x86_64 (do note that upgrading to
3.6.2 seems to make the issue go away),

# rpm -qa | grep rabbit

rabbitmq-server-3.4.0-1.noarch

The logs that seem relevant:

```
**
*** Publishers will be blocked until this alarm clears ***
**

=INFO REPORT 1-Jul-2016::16:37:46 ===
accepting AMQP connection <0.23638.342> (127.0.0.1:51932
 -> 127.0.0.1:5671 )

=INFO REPORT 1-Jul-2016::16:37:47 ===
vm_memory_high_watermark clear. Memory used:29910180640
allowed:47126781542
```

This happens quite often, the crashes have been affecting our cloud
over the weekend (which made some dev/ops not so happy especially
due to the july 4th mini-vacation),

Looking to see if anyone else has seen anything similar?

For those interested this is the upstream bug/mail that I'm also
seeing about getting confirmation from the upstream users/devs
(which also has erlang crash dumps attached/linked),

https://groups.google.com/forum/#!topic/rabbitmq-users/FeBK7iXUcLg

Thanks,

-Josh

___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators





Re: [openstack-dev] [Openstack-operators] [nova] Rabbit-mq 3.4 crashing (anyone else seen this?)

2016-07-05 Thread Kris G. Lindgren
We tried some of these (well, I did last night), but the issue was that 
eventually rabbitmq actually died.  I was trying some of the eval commands to 
try to get what was in the mgmt_db, but any get-status call eventually led to a 
timeout error.  Part of the problem is that we can go from a warning to a zomg 
out-of-memory in under 2 minutes.  Last night it was taking only 2 hours to 
chew through 40GB of RAM.  Messaging rates were in the 150-300/s range, which 
is not all that high (another cell is doing a constant 1k-2k).

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: Matt Fischer
Date: Tuesday, July 5, 2016 at 11:25 AM
To: Joshua Harlow
Cc: "openstack-dev@lists.openstack.org", OpenStack Operators
Subject: Re: [Openstack-operators] [nova] Rabbit-mq 3.4 crashing (anyone else 
seen this?)


Yes! This happens often but I'd not call it a crash, just the mgmt db gets 
behind then eats all the memory. We've started monitoring it and have runbooks 
on how to bounce just the mgmt db. Here are my notes on that:

restart rabbitmq mgmt server - this seems to clear the memory usage.

rabbitmqctl eval 'application:stop(rabbitmq_management).'
rabbitmqctl eval 'application:start(rabbitmq_management).'

run GC on rabbit_mgmt_db:
rabbitmqctl eval '(erlang:garbage_collect(global:whereis_name(rabbit_mgmt_db)))'

status of rabbit_mgmt_db:
rabbitmqctl eval 'sys:get_status(global:whereis_name(rabbit_mgmt_db)).'

Rabbitmq mgmt DB how much memory is used:
/usr/sbin/rabbitmqctl status | grep mgmt_db

Unfortunately I didn't see that an upgrade would fix for sure and any settings 
changes to reduce the number of monitored events also require a restart of the 
cluster. The other issue with an upgrade for us is the ancient version of 
erlang shipped with trusty. When we upgrade to Xenial we'll upgrade erlang and 
rabbit and hope it goes away. I'll also probably tweak the settings on 
retention of events then too.

Also for the record the GC doesn't seem to help at all.

On Jul 5, 2016 11:05 AM, "Joshua Harlow" wrote:
Hi ops and dev-folks,

We over at godaddy (running rabbitmq with openstack) have been hitting a issue 
that has been causing the `rabbit_mgmt_db` consuming nearly all the processes 
memory (after a given amount of time),

We've been thinking that this bug (or bugs?) may have existed for a while and 
our dual-version-path (where we upgrade the control plane and then 
slowly/eventually upgrade the compute nodes to the same version) has somehow 
triggered this memory leaking bug/issue since it has happened most prominently 
on our cloud which was running nova-compute at kilo and the other services at 
liberty (thus using the versioned objects code path more frequently due to 
needing translations of objects).

The rabbit we are running is 3.4.0 on CentOS Linux release 7.2.1511 with kernel 
3.10.0-327.4.4.el7.x86_64 (do note that upgrading to 3.6.2 seems to make the 
issue go away),

# rpm -qa | grep rabbit

rabbitmq-server-3.4.0-1.noarch

The logs that seem relevant:

```
**
*** Publishers will be blocked until this alarm clears ***
**

=INFO REPORT 1-Jul-2016::16:37:46 ===
accepting AMQP connection <0.23638.342> 
(127.0.0.1:51932 -> 
127.0.0.1:5671)

=INFO REPORT 1-Jul-2016::16:37:47 ===
vm_memory_high_watermark clear. Memory used:29910180640 allowed:47126781542
```

This happens quite often, the crashes have been affecting our cloud over the 
weekend (which made some dev/ops not so happy especially due to the july 4th 
mini-vacation),

Looking to see if anyone else has seen anything similar?

For those interested this is the upstream bug/mail that I'm also seeing about 
getting confirmation from the upstream users/devs (which also has erlang crash 
dumps attached/linked),

https://groups.google.com/forum/#!topic/rabbitmq-users/FeBK7iXUcLg

Thanks,

-Josh



Re: [openstack-dev] [tripleo] TripleO CI mentoring

2016-07-05 Thread Wesley Hayutin
On Tue, Jul 5, 2016 at 1:06 PM, Steven Hardy  wrote:

> Hi all,
>
> At last weeks meeting, we discussed the idea of some sort of rotation where
> folks would volunteer their time to both help fix CI when it breaks, and
> also pass on some of the accrued knowledge within the team to newer folks
> wishing to learn.
>
> I'm hoping this will achieve a few things:
> - Reduce the load on the subset of folks constantly fixing CI by getting
>   more people involved and familiar
> - Identify areas where we need to document better so 1-1 mentoring isn't
>   needed in the future.
>
> Note that this is explicitly *not* about volunteering to be the one person
> that fixes all-the-things in CI, everyone is still encouraged to do that,
> it's more about finding folks willing to set aside some time to be
> responsive on IRC, act as a point of contact, and take some extra time to
> pass on knowledge around the series of steps we take when a trunk
> regression or other CI related issue occurs.
>
> I started this etherpad:
>
> https://etherpad.openstack.org/p/tripleo-ci-mentoring
>
> I'd suggest we start from the week after the n-2 milestone, and I've
> volunteered as the first mentor for that week.
>
> Feel free to update if you're willing to participate in the ongoing task
> of keeping TripleO CI running smoothly in any capacity, and hopefully we
> can get more folks involved and communicating.
>
> If anyone has any thoughts around this process feel free to reply here and
> we can hopefully refine things so they are helpful to folks.
>
> Thanks!
>
> Steve
>

Awesome, thanks Steve!


>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [Openstack-operators] [nova] Rabbit-mq 3.4 crashing (anyone else seen this?)

2016-07-05 Thread Matt Fischer
For the record we're on 3.5.6-1.
On Jul 5, 2016 11:27 AM, "Mike Lowe"  wrote:

> I was having just this problem last week.  We updated to 3.6.2 from 3.5.4
> on ubuntu and started seeing crashes due to excessive memory usage. I did
> this on each node of my rabbit cluster and haven’t had any problems since:
> 'rabbitmq-plugins disable rabbitmq_management’.  From what I could gather
> from rabbitmq mailing lists, the stats collection part of the management
> console is single threaded and can’t keep up, thus the ever-growing memory
> usage from the ever-growing backlog of stats to be processed.
>
>
> > On Jul 5, 2016, at 1:02 PM, Joshua Harlow  wrote:
> >
> > Hi ops and dev-folks,
> >
> > We over at godaddy (running rabbitmq with openstack) have been hitting a
> issue that has been causing the `rabbit_mgmt_db` consuming nearly all the
> processes memory (after a given amount of time),
> >
> > We've been thinking that this bug (or bugs?) may have existed for a
> while and our dual-version-path (where we upgrade the control plane and
> then slowly/eventually upgrade the compute nodes to the same version) has
> somehow triggered this memory leaking bug/issue since it has happened most
> prominently on our cloud which was running nova-compute at kilo and the
> other services at liberty (thus using the versioned objects code path more
> frequently due to needing translations of objects).
> >
> > The rabbit we are running is 3.4.0 on CentOS Linux release 7.2.1511 with
> kernel 3.10.0-327.4.4.el7.x86_64 (do note that upgrading to 3.6.2 seems to
> make the issue go away),
> >
> > # rpm -qa | grep rabbit
> >
> > rabbitmq-server-3.4.0-1.noarch
> >
> > The logs that seem relevant:
> >
> > ```
> > **
> > *** Publishers will be blocked until this alarm clears ***
> > **
> >
> > =INFO REPORT 1-Jul-2016::16:37:46 ===
> > accepting AMQP connection <0.23638.342> (127.0.0.1:51932 ->
> 127.0.0.1:5671)
> >
> > =INFO REPORT 1-Jul-2016::16:37:47 ===
> > vm_memory_high_watermark clear. Memory used:29910180640
> allowed:47126781542
> > ```
> >
> > This happens quite often, the crashes have been affecting our cloud over
> the weekend (which made some dev/ops not so happy especially due to the
> july 4th mini-vacation),
> >
> > Looking to see if anyone else has seen anything similar?
> >
> > For those interested this is the upstream bug/mail that I'm also seeing
> about getting confirmation from the upstream users/devs (which also has
> erlang crash dumps attached/linked),
> >
> > https://groups.google.com/forum/#!topic/rabbitmq-users/FeBK7iXUcLg
> >
> > Thanks,
> >
> > -Josh
> >
> > ___
> > OpenStack-operators mailing list
> > openstack-operat...@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>


Re: [openstack-dev] [tripleo] TripleO CI mentoring

2016-07-05 Thread Emilien Macchi
Excellent idea overall.

On Tue, Jul 5, 2016 at 1:06 PM, Steven Hardy  wrote:
> Hi all,
>
> At last weeks meeting, we discussed the idea of some sort of rotation where
> folks would volunteer their time to both help fix CI when it breaks, and
> also pass on some of the accrued knowledge within the team to newer folks
> wishing to learn.
>
> I'm hoping this will achieve a few things:
> - Reduce the load on the subset of folks constantly fixing CI by getting
>   more people involved and familiar
> - Identify areas where we need to document better so 1-1 mentoring isn't
>   needed in the future.
>
> Note that this is explicitly *not* about volunteering to be the one person
> that fixes all-the-things in CI, everyone is still encouraged to do that,
> it's more about finding folks willing to set aside some time to be
> responsive on IRC, act as a point of contact, and take some extra time to
> pass on knowledge around the series of steps we take when a trunk
> regression or other CI related issue occurs.
>
> I started this etherpad:
>
> https://etherpad.openstack.org/p/tripleo-ci-mentoring
>
> I'd suggest we start from the week after the n-2 milestone, and I've
> volunteered as the first mentor for that week.

I added my name for the week before, so we can give it a try next week. Feel
free to reach me on IRC anytime.

> Feel free to update if you're willing to participate in the ongoing task
> of keeping TripleO CI running smoothly in any capacity, and hopefully we
> can get more folks involved and communicating.
>
> If anyone has any thoughts around this process feel free to reply here and
> we can hopefully refine things so they are helpful to folks.
>
> Thanks!
>
> Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][ironic] My thoughts on Kolla + BiFrost integration

2016-07-05 Thread Stephen Hindle
On Mon, Jul 4, 2016 at 7:53 AM, Michał Jastrzębski  wrote:
> I'd be cautious about how much of customization we allow. But don't
> forget that Kolla itself and BiFrost will be effectively separate. Run
> bifrost, run every customization playbook you want on host, run
> kolla-host-bootstrap playbook for installation of docker and stuff and
> then run kolla. This will not be single-step operation, so you can do
> stuff in between.
>

So it sounds like we effectively have a 'post-bifrost' hook already
(run another playbook after bifrost). Having a 'pre-bifrost' hook at
the start of the bifrost playbook to set up things like Open vSwitch
networking (so kolla/bifrost know you're using it for host networking)
would cover most things I can think of.

Steve



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-07-05 Thread Jesse Pretorius
From: Joshua Hesketh

I assume you want to wait for the tag to merge before removing the branch?

The only tag job I can see for openstack-ansible* projects is the releasenotes 
one. This should be harmless as it just generates the notes for mitaka and 
liberty branches. I'm going to hold off until the final tag has merged anyway 
if you want to confirm this first.

Thanks Josh – The final Kilo tag has merged so we’re good to go. We’re happy to 
also go straight ahead with the eol tags for the icehouse and juno branches too.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Rabbit-mq 3.4 crashing (anyone else seen this?)

2016-07-05 Thread Matt Fischer
Yes! This happens often, but I'd not call it a crash; the mgmt db just gets
behind and then eats all the memory. We've started monitoring it and have
runbooks on how to bounce just the mgmt db. Here are my notes on that:

restart rabbitmq mgmt server - this seems to clear the memory usage.

rabbitmqctl eval 'application:stop(rabbitmq_management).'
rabbitmqctl eval 'application:start(rabbitmq_management).'

run GC on rabbit_mgmt_db:
rabbitmqctl eval
'(erlang:garbage_collect(global:whereis_name(rabbit_mgmt_db)))'

status of rabbit_mgmt_db:
rabbitmqctl eval 'sys:get_status(global:whereis_name(rabbit_mgmt_db)).'

Rabbitmq mgmt DB how much memory is used:
/usr/sbin/rabbitmqctl status | grep mgmt_db

Unfortunately I didn't see that an upgrade would fix it for sure, and any
settings changes to reduce the number of monitored events also require a
restart of the cluster. The other issue with an upgrade for us is the
ancient version of erlang shipped with trusty. When we upgrade to Xenial
we'll upgrade erlang and rabbit and hope it goes away. I'll also probably
tweak the settings on retention of events then too.

Also for the record the GC doesn't seem to help at all.
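For anyone wiring the runbook above into monitoring, here is a minimal sketch of the check. The `{mgmt_db,N}` parsing and the 8 GiB threshold are assumptions; adjust both to match your actual `rabbitmqctl status` output before relying on it.

```python
import re
import subprocess

def mgmt_db_bytes(status_output):
    """Pull the rabbit_mgmt_db byte count out of `rabbitmqctl status` output.
    Assumes the 3.4.x-style "{mgmt_db,N}" memory entry."""
    match = re.search(r"\{mgmt_db,(\d+)\}", status_output)
    return int(match.group(1)) if match else 0

LIMIT = 8 * 1024 ** 3  # 8 GiB; arbitrary example threshold

# Demo against a captured sample line; a live check would instead run:
#   status = subprocess.check_output(
#       ["/usr/sbin/rabbitmqctl", "status"]).decode()
status = "      {mgmt_db,29910180640},"
used = mgmt_db_bytes(status)
print(used)  # -> 29910180640
if used > LIMIT:
    print("mgmt_db over limit; time to bounce the management plugin")
```

When the check fires, the two `rabbitmqctl eval` commands above (stop/start of `rabbitmq_management`) are what actually reclaim the memory.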
On Jul 5, 2016 11:05 AM, "Joshua Harlow"  wrote:

> Hi ops and dev-folks,
>
> We over at godaddy (running rabbitmq with openstack) have been hitting an
> issue that has been causing the `rabbit_mgmt_db` to consume nearly all the
> process's memory (after a given amount of time),
>
> We've been thinking that this bug (or bugs?) may have existed for a while
> and our dual-version-path (where we upgrade the control plane and then
> slowly/eventually upgrade the compute nodes to the same version) has
> somehow triggered this memory leaking bug/issue since it has happened most
> prominently on our cloud which was running nova-compute at kilo and the
> other services at liberty (thus using the versioned objects code path more
> frequently due to needing translations of objects).
>
> The rabbit we are running is 3.4.0 on CentOS Linux release 7.2.1511 with
> kernel 3.10.0-327.4.4.el7.x86_64 (do note that upgrading to 3.6.2 seems to
> make the issue go away),
>
> # rpm -qa | grep rabbit
>
> rabbitmq-server-3.4.0-1.noarch
>
> The logs that seem relevant:
>
> ```
> **
> *** Publishers will be blocked until this alarm clears ***
> **
>
> =INFO REPORT 1-Jul-2016::16:37:46 ===
> accepting AMQP connection <0.23638.342> (127.0.0.1:51932 -> 127.0.0.1:5671
> )
>
> =INFO REPORT 1-Jul-2016::16:37:47 ===
> vm_memory_high_watermark clear. Memory used:29910180640 allowed:47126781542
> ```
>
> This happens quite often, the crashes have been affecting our cloud over
> the weekend (which made some dev/ops not so happy especially due to the
> july 4th mini-vacation),
>
> Looking to see if anyone else has seen anything similar?
>
> For those interested this is the upstream bug/mail that I'm also seeing
> about getting confirmation from the upstream users/devs (which also has
> erlang crash dumps attached/linked),
>
> https://groups.google.com/forum/#!topic/rabbitmq-users/FeBK7iXUcLg
>
> Thanks,
>
> -Josh
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][keystone] keystonemiddleware 4.6.0 release (newton)

2016-07-05 Thread no-reply
We are satisfied to announce the release of:

keystonemiddleware 4.6.0: Middleware for OpenStack Identity

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/keystonemiddleware

With package available at:

https://pypi.python.org/pypi/keystonemiddleware

Please report issues through launchpad:

http://bugs.launchpad.net/keystonemiddleware

For more details, please see below.

4.6.0
^^^^^

* Add the *X_IS_ADMIN_PROJECT* header.


New Features
************

* [bug 1583690
  (https://bugs.launchpad.net/keystonemiddleware/+bug/1583690)] For
  services such as Swift, which may not be utilizing oslo_config, we
  need to be able to determine the project name from local config. If
  project name is specified in both local config and oslo_config, the
  one in local config will be used instead. In case project is
  undetermined (i.e. not set), we use taxonomy.UNKNOWN as an indicator
  so operators can take corrective actions.

* [bug 1540115
  (https://bugs.launchpad.net/keystonemiddleware/+bug/1540115)]
  Optional dependencies can now be installed using *extras*. To
  install audit related libraries, use "pip install
  keystonemiddleware[audit_notifications]". Refer to keystonemiddleware
  documentation for further information.

* Added the *X_IS_ADMIN_PROJECT* header to authenticated headers.
  This has the string value of 'True' or 'False' and can be used to
  enforce admin project policies.
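A minimal sketch of consuming that header in a WSGI service behind auth_token. The header-to-environ name mapping is standard WSGI; the helper name and the policy use are illustrative, not part of keystonemiddleware.

```python
def is_admin_project(environ):
    """auth_token forwards X-Is-Admin-Project to the service; WSGI exposes
    it as HTTP_X_IS_ADMIN_PROJECT with the literal string 'True' or 'False'."""
    return environ.get("HTTP_X_IS_ADMIN_PROJECT") == "True"

print(is_admin_project({"HTTP_X_IS_ADMIN_PROJECT": "True"}))   # -> True
print(is_admin_project({"HTTP_X_IS_ADMIN_PROJECT": "False"}))  # -> False
print(is_admin_project({}))  # absent header treated as not admin -> False
```

Treating a missing header the same as 'False' is a deliberately conservative default for policy checks.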


Bug Fixes
*********

* [bug 1583699
  (https://bugs.launchpad.net/keystonemiddleware/+bug/1583699)] Some
  service APIs (such as Swift list public containers) do not require a
  token. Therefore, there will be no identity or service catalog
  information available. In these cases, audit now fills in the
  default (i.e. taxonomy.UNKNOWN) for both initiator and target
  instead of raising an exception.

* [bug 1583702
  (https://bugs.launchpad.net/keystonemiddleware/+bug/1583702)] Some
  services, such as Swift, do not use Oslo (global) config. In that
  case, the options are conveyed via local config. This patch utilized
  an established pattern from the auth_token middleware, which is to
  first look for the given option in local config, then in the Oslo
  global config.

Changes in keystonemiddleware 4.5.1..4.6.0
------------------------------------------

85ce086 Updated from global requirements
ef29dfc Use extras for oslo.messaging dependency
3ee96f1 Refactor API tests to not run middleware
46f831e Refactor audit api tests into their own file
bf80779 Refactor create_event onto the api object.
515a990 Extract a common notifier pattern
aa2cde7 Break out the API piece into its own file
b49449f Use createfile fixture in audit test
9c67fee Move audit into its own folder
8859345 use local config options if available in audit middleware
ed76943 Use oslo.config fixture in audit tests
adcdecb Pop oslo_config_config before doing paste convert
7a6af0b Updated from global requirements
adb59a7 Fix typo 'olso' to 'oslo'
31c8582 Config: no need to set default=None
2798b2e Fix an issue with oslo_config_project paste config
1f4a8fa Updated from global requirements
0562670 Pass X_IS_ADMIN_PROJECT header from auth_token
6f53905 Clean up middleware architecture
627ec92 Updated from global requirements
b5a2535 Add a fixture method to add your own token data
cc58b62 Move auth token opts calculation into auth_token
63f83ce Make audit middleware use common config object
5cabfc1 Consolidate user agent calculation
f8c150a Create a Config object
20b4a87 Updated from global requirements
68c9514 Updated from global requirements
38a5f79 Improve documentation for auth_uri
2387f9b PEP257: Ignore D203 because it was deprecated
fead001 Updated from global requirements
59fef23 Use method split_path from oslo.utils
cebebd2 Updated from global requirements
d8cb5a3 Make sure audit can handle API requests which does not require a token
06fb469 Updated from global requirements
ae891c1 Updated from global requirements
f864dc2 Updated from global requirements
619dbf3 Determine project name from oslo_config or local config


Diffstat (except docs and test files)
-------------------------------------

keystonemiddleware/_common/__init__.py |   0
keystonemiddleware/_common/config.py   | 157 +
keystonemiddleware/audit.py| 485 ---
keystonemiddleware/audit/__init__.py   | 193 ++
keystonemiddleware/audit/_api.py   | 312 ++
keystonemiddleware/audit/_notifier.py  |  77 +++
keystonemiddleware/auth_token/__init__.py  | 393 +++-
keystonemiddleware/auth_token/_auth.py |   3 -
keystonemiddleware/auth_token/_exceptions.py   |  11 +-
keystonemiddleware/auth_token/_opts.py | 162 -
keystonemiddleware/auth_token/_request.py  |  13 +
keystonemiddleware/auth_token/_user_plugin.py  |   8 +
keystonemiddleware/exceptions.py   |  19 +

Re: [openstack-dev] [StoryBoard] Thanks for the bugsquash, plus a new-things roundup

2016-07-05 Thread David Moreau Simard
I appreciate all the great work that's been going into StoryBoard lately.

Thanks everyone involved.

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]


On Tue, Jul 5, 2016 at 12:50 PM, Zara Zaimeche
 wrote:
> Hi all,
>
> A big thank you to everyone who came and helped out in the spectacular*
> StoryBoard bug squash! We look forward to the next. :) Here are some
> highlights from the last couple of weeks:
>
> * BEAUTIFUL NEW COMMENTS AND EVENTS TIMELINE
>
> It's so beautiful, it requires all-caps. SotK has transformed the barebones
> events timeline into an elegant swan. Well, that's a weird mixed metaphor,
> but it *is* lovely! Furthermore, this magnificent gentleman has removed
> pagination so that comments are no longer lost on the second page of the
> results, and has made it possible to link comments directly. Extra thanks to
> ttx for fixing some of the css during the bugsquash! :) Here's an example:
>
> https://storyboard.openstack.org/#!/story/2000464#comment-7029
>
> There is a WIP patch in review for editing one's own comments, for anyone
> interested in trying it out and giving feedback:
>
> https://review.openstack.org/#/c/333418/
>
>
> * Email threading
>
> The kindly pedroalvarez has worked some magic on the emails StoryBoard
> sends, so that they are threaded according to story. It should now be easier
> to see what an email refers to at a glance.
>
>
> * API Docs example commands
>
> anteaya has made it easier for people to interact with StoryBoard via the
> API with these examples. This should be good news for anyone who wants to
> use scripts with StoryBoard. You can see them here:
>
> http://docs.openstack.org/infra/storyboard/webapi/v1.html#stories
>
>
> * Gerrit integration for storyboard-dev
>
> Review-dev can now post comments on storyboard-dev (our test instance)!
> Thanks so much, zaro! You can see an example patch here:
> https://review-dev.openstack.org/#/c/5454/
>
>
> * Tags search upgraded
>
> Tags search now suggests existing tags! This should make searching-by-tag
> much easier.
>
> I hope to build on this to change task-statuses in the next couple of weeks.
>
>
>
> It's been a pretty busy time... which is why I'm over a week late with this
> email \o/. Anyway, yes, thanks again to everyone who helped out. If you'd
> like to get involved in the project, we're always available in #storyboard
> on freenode; the project is a mix of python and angularjs. We have a
> worklist of stories that contain easy tasks here:
> https://storyboard.openstack.org/#!/worklist/76 , so you can see if anything
> takes your interest, then it's best to ask in the channel for the specifics.
> :)
>
> Hope to see you there! If I've missed anything, please let me know.
>
> Best Wishes,
>
> Zara
>
> *I haven't personally written any interesting patches of late, so I am
> allowed to call it 'spectacular'. :)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Nodes management in our shiny new TripleO API

2016-07-05 Thread Steven Hardy
On Tue, Jul 05, 2016 at 12:22:33PM +0200, Dmitry Tantsur wrote:
> On 07/04/2016 01:42 PM, Steven Hardy wrote:
> > Hi Dmitry,
> > 
> > I wanted to revisit this thread, as I see some of these interfaces
> > are now posted for review, and I have a couple of questions around
> > the naming (specifically for the "provide" action):
> > 
> > On Thu, May 19, 2016 at 03:31:36PM +0200, Dmitry Tantsur wrote:
> > 
> > > The last step before the deployment is to make nodes "available" using the
> > > "provide" provisioning action. Such nodes are exposed to nova, and can be
> > > deployed to at any moment. No long-running configuration actions should be
> > > run in this state. The "manage" action can be used to bring nodes back to
> > > "manageable" state for configuration (e.g. reintrospection).
> > 
> > So, I've been reviewing https://review.openstack.org/#/c/334411/ which
> > implements support for "openstack overcloud node provide"
> > 
> > I really hate to be the one nitpicking over openstackclient verbiage, but
> > I'm a little unsure if the literal translation of this results in an
> > intuitive understanding of what happens to the nodes as a result of this
> > action. So I wanted to have a broader discussion before we land the code
> > and commit to this interface.
> > 
> 
> > 
> > Here, I think the problem is that while the dictionary definition of
> > "provide" is "make available for use, supply" (according to google), it
> > implies obtaining the node, not just activating it.
> > 
> > So, to me "provide node" implies going and physically getting the node that
> > does not yet exist, but AFAICT what this action actually does is takes an
> > existing node, and activates it (sets it to "available" state)
> > 
> > I'm worried this could be a source of operator confusion - has this
> > discussion already happened in the Ironic community, or is this a TripleO
> > specific term?
> 
> Hi, and thanks for the great question.
> 
> As I've already responded on the patch, this term is settled in our OSC
> plugin spec [1], and we feel like it reflects the reality pretty well. But I
> clearly understand that naming things is really hard, and what feels obvious
> to me does not feel obvious to the others. Anyway, I'd prefer if we stay
> consistent with how Ironic names things now.
> 
> [1] 
> http://specs.openstack.org/openstack/ironic-specs/specs/approved/ironicclient-osc-plugin.html

Thanks, this is the context I was missing - If the term is already accepted
by the ironic community then I agree, let's keep things consistent.

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] TripleO CI mentoring

2016-07-05 Thread Steven Hardy
Hi all,

At last weeks meeting, we discussed the idea of some sort of rotation where
folks would volunteer their time to both help fix CI when it breaks, and
also pass on some of the accrued knowledge within the team to newer folks
wishing to learn.

I'm hoping this will achieve a few things:
- Reduce the load on the subset of folks constantly fixing CI by getting
  more people involved and familiar
- Identify areas where we need to document better so 1-1 mentoring isn't
  needed in the future.

Note that this is explicitly *not* about volunteering to be the one person
that fixes all-the-things in CI, everyone is still encouraged to do that,
it's more about finding folks willing to set aside some time to be
responsive on IRC, act as a point of contact, and take some extra time to
pass on knowledge around the series of steps we take when a trunk
regression or other CI related issue occurs.

I started this etherpad:

https://etherpad.openstack.org/p/tripleo-ci-mentoring

I'd suggest we start from the week after the n-2 milestone, and I've
volunteered as the first mentor for that week.

Feel free to update if you're willing to participate in the ongoing task
of keeping TripleO CI running smoothly in any capacity, and hopefully we
can get more folks involved and communicating.

If anyone has any thoughts around this process feel free to reply here and
we can hopefully refine things so they are helpful to folks.

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Rabbit-mq 3.4 crashing (anyone else seen this?)

2016-07-05 Thread Joshua Harlow

Hi ops and dev-folks,

We over at godaddy (running rabbitmq with openstack) have been hitting an
issue that has been causing the `rabbit_mgmt_db` to consume nearly all
the process's memory (after a given amount of time),


We've been thinking that this bug (or bugs?) may have existed for a 
while and our dual-version-path (where we upgrade the control plane and 
then slowly/eventually upgrade the compute nodes to the same version) 
has somehow triggered this memory leaking bug/issue since it has 
happened most prominently on our cloud which was running nova-compute at 
kilo and the other services at liberty (thus using the versioned objects 
code path more frequently due to needing translations of objects).


The rabbit we are running is 3.4.0 on CentOS Linux release 7.2.1511 with 
kernel 3.10.0-327.4.4.el7.x86_64 (do note that upgrading to 3.6.2 seems 
to make the issue go away),


# rpm -qa | grep rabbit

rabbitmq-server-3.4.0-1.noarch

The logs that seem relevant:

```
**
*** Publishers will be blocked until this alarm clears ***
**

=INFO REPORT 1-Jul-2016::16:37:46 ===
accepting AMQP connection <0.23638.342> (127.0.0.1:51932 -> 127.0.0.1:5671)

=INFO REPORT 1-Jul-2016::16:37:47 ===
vm_memory_high_watermark clear. Memory used:29910180640 allowed:47126781542
```

This happens quite often, the crashes have been affecting our cloud over 
the weekend (which made some dev/ops not so happy especially due to the 
july 4th mini-vacation),


Looking to see if anyone else has seen anything similar?

For those interested this is the upstream bug/mail that I'm also seeing 
about getting confirmation from the upstream users/devs (which also has 
erlang crash dumps attached/linked),


https://groups.google.com/forum/#!topic/rabbitmq-users/FeBK7iXUcLg

Thanks,

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][keystone] keystoneauth1 2.9.0 release (newton)

2016-07-05 Thread no-reply
We are amped to announce the release of:

keystoneauth1 2.9.0: Authentication Library for OpenStack Identity

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/keystoneauth

With package available at:

https://pypi.python.org/pypi/keystoneauth1

Please report issues through launchpad:

http://bugs.launchpad.net/keystoneauth

For more details, please see below.

2.9.0
^^^^^


New Features
************

* [blueprint totp-auth
  (https://blueprints.launchpad.net/keystone/+spec/totp-auth)] Add an
  auth plugin to handle Time-Based One-Time Password (TOTP)
  authentication via the "totp" method. This new plugin will accept
  the following identity options:

  - "user-id": user ID
  - "username": username
  - "user-domain-id": user's domain ID
  - "user-domain-name": user's domain name
  - "passcode": passcode generated by a TOTP app or device

  A user is uniquely identified by either "user-id" or a combination
  of "username" and "user-domain-id" or "user-domain-name".
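The passcode itself is a standard RFC 6238 value. For reference, a stdlib-only sketch of generating one, assuming the common authenticator-app defaults (SHA-1, 30-second step, 6 digits); this is illustrative background, not keystoneauth code.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP with the usual SHA-1/30s/6-digit defaults."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector (secret "12345678901234567890", T=59s), truncated
# to 6 digits:
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))  # -> 287082
```

A value generated this way is what would be passed as the plugin's "passcode" option.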


Bug Fixes
*********

* Fix passing scope parameters in Oidc* auth plugins. [Bug 1582774
  (https://bugs.launchpad.net/bugs/1582774)]

Changes in keystoneauth1 2.8.0..2.9.0
-------------------------------------

464aaac Updated from global requirements
ea2b2a4 move release note to correct directory
bd18bc3 oidc: fix OpenID Connect scope option
aa01e5f oidc: add tests for plugin loader
61b49d0 Don't mock the session.request function
049cf2d Updated from global requirements
280cde2 oidc: refactor unit tests
0515513 Updated from global requirements
ad54777 Fix code example for OAuth1 authentication
d86df86 Add entrypoint for Federated Kerberos
bf53e7e Fix kerberos available property
3e24beb Document named kerberos plugin
9e29e6e Support TOTP auth plugin
fc95d25 Make the kerberos plugin loadable
bc61428 Add available flag to plugin loaders
a607e71 Updated from global requirements
cf520d9 PEP257: Ignore D203 because it was deprecated
c7ceb42 Updated from global requirements
2a8133c Apply a heuristic for product name if a user_agent is not provided


Diffstat (except docs and test files)
-------------------------------------

keystoneauth1/extras/kerberos.py   |  67 
keystoneauth1/extras/kerberos/__init__.py  |  87 ++
keystoneauth1/extras/kerberos/_loading.py  |  36 
keystoneauth1/identity/__init__.py |   6 +-
keystoneauth1/identity/v3/__init__.py  |   8 +-
keystoneauth1/identity/v3/totp.py  |  81 +
keystoneauth1/loading/_plugins/identity/v3.py  |  59 +--
keystoneauth1/loading/base.py  |  29 +++-
.../notes/bug-1582774-49af731b6dfc6f2f.yaml|   4 -
keystoneauth1/session.py   |  82 -
.../unit/extras/kerberos/test_kerberos_loading.py  |  33 
.../add-totp-auth-plugin-0650d220899c25b7.yaml |  16 ++
.../notes/bug-1582774-49af731b6dfc6f2f.yaml|   4 +
setup.cfg  |   5 +-
test-requirements.txt  |  12 +-
tox.ini|   3 +-
24 files changed, 726 insertions(+), 199 deletions(-)


Requirements updates
--------------------

diff --git a/test-requirements.txt b/test-requirements.txt
index 6d08bf7..df4f591 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -11 +11 @@ discover # BSD
-fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
+fixtures>=3.0.0 # Apache-2.0/BSD
@@ -13 +13 @@ mock>=2.0 # BSD
-oslo.config>=3.9.0 # Apache-2.0
+oslo.config>=3.10.0 # Apache-2.0
@@ -15 +15 @@ oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
-oslo.utils>=3.11.0 # Apache-2.0
+oslo.utils>=3.14.0 # Apache-2.0
@@ -20,3 +20,3 @@ pycrypto>=2.6 # Public Domain
-reno>=1.6.2 # Apache2
-requests-mock>=0.7.0 # Apache-2.0
-sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
+reno>=1.8.0 # Apache2
+requests-mock>=1.0 # Apache-2.0
+sphinx!=1.3b1,<1.3,>=1.2.1 # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [StoryBoard] Thanks for the bugsquash, plus a new-things roundup

2016-07-05 Thread Zara Zaimeche

Hi all,

A big thank you to everyone who came and helped out in the spectacular* 
StoryBoard bug squash! We look forward to the next. :) Here are some 
highlights from the last couple of weeks:


* BEAUTIFUL NEW COMMENTS AND EVENTS TIMELINE

It's so beautiful, it requires all-caps. SotK has transformed the 
barebones events timeline into an elegant swan. Well, that's a weird 
mixed metaphor, but it *is* lovely! Furthermore, this magnificent 
gentleman has removed pagination so that comments are no longer lost on 
the second page of the results, and has made it possible to link 
comments directly. Extra thanks to ttx for fixing some of the css during 
the bugsquash! :) Here's an example:


https://storyboard.openstack.org/#!/story/2000464#comment-7029

There is a WIP patch in review for editing one's own comments, for 
anyone interested in trying it out and giving feedback:


https://review.openstack.org/#/c/333418/


* Email threading

The kindly pedroalvarez has worked some magic on the emails StoryBoard 
sends, so that they are threaded according to story. It should now be 
easier to see what an email refers to at a glance.



* API Docs example commands

anteaya has made it easier for people to interact with StoryBoard via 
the API with these examples. This should be good news for anyone who 
wants to use scripts with StoryBoard. You can see them here:


http://docs.openstack.org/infra/storyboard/webapi/v1.html#stories


* Gerrit integration for storyboard-dev

Review-dev can now post comments on storyboard-dev (our test instance)! 
Thanks so much, zaro! You can see an example patch here: 
https://review-dev.openstack.org/#/c/5454/



* Tags search upgraded

Tags search now suggests existing tags! This should make 
searching-by-tag much easier.


I hope to build on this to change task-statuses in the next couple of weeks.



It's been a pretty busy time... which is why I'm over a week late with 
this email \o/. Anyway, yes, thanks again to everyone who helped out. If 
you'd like to get involved in the project, we're always available in 
#storyboard on freenode; the project is a mix of python and angularjs. 
We have a worklist of stories that contain easy tasks here: 
https://storyboard.openstack.org/#!/worklist/76 , so you can see if 
anything takes your interest, then it's best to ask in the channel for 
the specifics. :)


Hope to see you there! If I've missed anything, please let me know.

Best Wishes,

Zara

*I haven't personally written any interesting patches of late, so I am 
allowed to call it 'spectacular'. :)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] [murano] [yaql] yaqluator bug

2016-07-05 Thread Elisha, Moshe (Nokia - IL)
Thank you all for assisting.

When I tested Mistral I used an older version of Mistral (meaning an older 
version of yaql).

I have verified that the latest Mistral is working as expected.
I have upgraded the yaql library in yaqluator to 1.1.0 and it is now working as 
expected.

Thanks!

From: Dougal Matthews
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, 5 July 2016 at 17:53
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [mistral] [murano] [yaql] yaqluator bug



On 5 July 2016 at 08:32, Renat Akhmerov wrote:
Stan, thanks for clarification. What’s the latest stable version that we’re 
supposed to use? global-requirements.txt has yaql>=1.1.0, I wonder if it’s 
correct.

It is also worth looking at the upper-constraints.txt. It has 1.1.1 which is 
the latest release. So it all seems up to date.

https://github.com/openstack/requirements/blob/master/upper-constraints.txt#L376

I think the problem is that this external project isn't being updated. Assuming 
they have not deployed anything that isn't committed, then they are running 
YAQL 1.0.2 which is almost a year old.

https://github.com/ALU-CloudBand/yaqluator/blob/master/requirements.txt#L3


Renat Akhmerov
@Nokia

On 05 Jul 2016, at 12:12, Stan Lagun wrote:

Hi!

The issue with join is just a yaql bug that is already fixed. The problem with 
yaqluator is that it doesn't use the latest yaql library.

Another problem is that it doesn't set options correctly. As a result, it is 
possible to bring the site down with a query that produces an endless collection.

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis



On Tue, Jun 28, 2016 at 9:46 AM, Elisha, Moshe (Nokia - IL) wrote:
Hi,

Thank you for the kind words, Alexey.

I was able to reproduce your bug and I have also found the issue.

The problem is that we did not create the parser with the engine_options that
the yaql library uses by default in its CLI.
Specifically, the "yaql.limitIterators" option was missing… I am not sure this
setting should have this effect, but maybe the yaql guys can comment on that.

If we change yaqluator to use this setting, yaqluator will not be consistent
with Mistral, because Mistral uses YAQL without this engine option (if I use
your example in a workflow, Mistral returns exactly what yaqluator returns).
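For readers following along: the discussion hinges on yaql's cross-join semantics, where `[1,2].join([3], true, [$1, $2])` should pair every element of the left collection with every element of the right. The following is a minimal pure-Python model of that behaviour for illustration only — it is not the real yaql implementation (which evaluates collections lazily, which is exactly why the limitIterators option matters):

```python
# Pure-Python model of yaql's collection.join(other, predicate, selector).
# Illustrative assumption only -- not the real yaql library, which is lazy.
def yaql_join(left, right, predicate, selector):
    # Keep every (l, r) pair the predicate accepts, mapped through selector.
    return [selector(l, r) for l in left for r in right if predicate(l, r)]

# Mirrors the expression [1,2].join([3], true, [$1, $2])
result = yaql_join([1, 2], [3],
                   predicate=lambda a, b: True,
                   selector=lambda a, b: [a, b])
print(result)  # [[1, 3], [2, 3]] -- the stale yaql 1.0.2 returned only [[1, 3]]
```

A fixed yaql returns both pairs; the truncated `[[1, 3]]` is the symptom of the old library.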


Workflow:


---
version: '2.0'

test_yaql:
  tasks:
test_yaql:
  action: std.noop
  publish:
output_expr: <% [1,2].join([3], true, [$1, $2]) %>

Workflow result:


[root@s53-19 ~(keystone_admin)]# mistral task-get-published 
01d2bce3-20d0-47b2-84f2-7bd1cb2bf9f7
{
"output_expr": [
[
1,
3
]
]
}


As Matthews pointed out, yaqluator is indeed open source and contributions
are welcome.

[1] 
https://github.com/ALU-CloudBand/yaqluator/commit/e523dacdde716d200b5ed1015543d4c4680c98c2



From: Dougal Matthews
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, 27 June 2016 at 16:44
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [mistral] [murano] [yaql] yaqluator bug

On 27 June 2016 at 14:30, Alexey Khivin wrote:
Hello, Moshe

Recently I discovered yaqluator.com for myself! Thanks
for the useful tool!

But I noticed that the expression
[1,2].join([3], true, [$1, $2])
evaluates to [[1,3]] on yaqluator.

At the same time, this expression evaluates correctly when I use the raw yaql
interpreter.

Could we fix this issue?

By the way, don't you want to make yaqluator open source? If you transferred
yaqluator to the OpenStack Foundation, the community would be able to fix this
kind of bug.

It looks like it is open source, there is a link in the footer: 
https://github.com/ALU-CloudBand/yaqluator


Thank you!
Best regards, Alexey Khivin



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [new][openstack] osc-lib 0.2.1 release (newton)

2016-07-05 Thread no-reply
We are jubilant to announce the release of:

osc-lib 0.2.1: OpenStackClient Library

This release is part of the newton release series.

With source available at:

https://git.openstack.org/cgit/openstack/osc-lib

With package available at:

https://pypi.python.org/pypi/osc-lib

Please report issues through launchpad:

https://bugs.launchpad.net/python-openstackclient

For more details, please see below.

Changes in osc-lib 0.2.0..0.2.1
---

812e074 Get VersionInfo of "osc-lib"
7a5987c Attempt to find resource by ID, without kwargs


Diffstat (except docs and test files)
-

osc_lib/__init__.py |  2 +-
osc_lib/utils.py| 30 ++
3 files changed, 37 insertions(+), 10 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Non-priority feature freeze and FFEs

2016-07-05 Thread Artom Lifshitz
> The Hyper-V implementation of the bp virt-device-role-tagging is mergeable 
> [1]. The patch is quite simple, it got some reviews, and the tempest test 
> test_device_tagging [2] passed. [3]
>
> [1] https://review.openstack.org/#/c/331889/
> [2] https://review.openstack.org/#/c/305120/
> [3] http://64.119.130.115/debug/nova/331889/8/04-07-2016_19-43/results.html.gz

For what it's worth, the implementation for libvirt and all the
plumbing in the API, metadata API, compute manager, etc, has merged,
so this can be thought of as a continuation of that same patch series.

There's the XenAPI implementation [4] as well, but that's not
mergeable in its current state.

[4] https://review.openstack.org/#/c/333781/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] When to purge the DB, and when not to purge the DB?

2016-07-05 Thread Carter, Kevin
+1 to #4 -- As an operator I have archive and audit requirements that
proactive automated DB pruning would likely get in the way of. If we
do produce pruning tools I think they should exist in the OPS repo
and, as a rule, should not be part of the general deployment/upgrade
framework.

On Sat, Jul 2, 2016 at 5:20 PM, Ian Cordasco  wrote:
>
> On Jul 2, 2016 10:37 AM, "Dan Smith"  wrote:
>>
>> > The question is whether we should do something like this:
>> >
>> > 1) As part of the normal execution of the service playbooks;
>> > 2) As part of the automated major upgrade (i.e. The step is not
>> > optional);
>> > 3) As part of the manual major upgrade (i.e. The step is optional);
>> > 4) Never.
>>
>> I am not an operator, but I would think that #4 is the right thing to
>> do. If I want to purge the database, it's going to be based on billing
>> reasons (or lack thereof) and be tied to another archival, audit, etc
>> policy that the "business people" are involved with. Install and
>> configuration of my services shouldn't really ever touch my data other
>> than mechanical upgrade scripts and the like, IMHO.
>>
>> Purging the database only during upgrades is not sufficient for large
>> installs, so why artificially tie it to that process? In Nova we don't
>> do data migrations as part of schema updates anymore, so it's not like a
>> purge is going to make the upgrade any faster...
>
> I agree with this sentiment. If OSA feels like it must provide automation
> for purging databases, it should be in the ops repo mentioned earlier.
>
> I see no reason to over extend upgrades with something not inherently
> necessary or appropriate for upgrades.
>
> --
> Ian
>
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hangzhou Bug Smash will be held from July 6 to 8

2016-07-05 Thread Liyongle (Fred)
Hi OpenStackers,

The 4th China OpenStack Bug Smash will start on July 6 in Hangzhou, Beijing 
Time. Please find this bug smash home page at [1], and the bugs list in [2].

[1] https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Newton-Hangzhou
[2] https://etherpad.openstack.org/p/hackathon4_all_list

Fred (李永乐)

-Original Message-
From: Liyongle (Fred) 
Sent: 2016年6月30日 23:00
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Hangzhou Bug Smash will be held from July 6 to 8

Hi OpenStackers,

The 4th China OpenStack Bug Smash, hosted by CESI, Huawei, and Intel, will be
held in Hangzhou, China from July 6 to 8 (Beijing time), i.e. from 01:00 July 6
to 06:00 July 8 UTC. The target is to get bugs fixed before the newton-2
milestone [1].

Around 50 stackers will fix bugs in nova, cinder, neutron, magnum,
ceilometer, heat, ironic, smaug, freezer, oslo, murano and kolla. Any support
is appreciated, and you are welcome to work remotely with the team.

Please find this bug smash home page at [2], and the bugs list in [3] (under 
preparation). 

[1] http://releases.openstack.org/newton/schedule.html
[2] https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Newton-Hangzhou
[3] https://etherpad.openstack.org/p/hackathon4_all_list

Best Regards

Fred (李永乐)

China OpenStack Bug Smash Team
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #86

2016-07-05 Thread Emilien Macchi
No topic this week, meeting cancelled.

See you next week!

On Mon, Jul 4, 2016 at 4:34 PM, Emilien Macchi <emil...@redhat.com> wrote:
> If you have any topic for our weekly meeting tomorrow, please add it here:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160705
>
> If no topic, we will postpone the meeting to next week.
> Thanks,
>
> On Tue, Jun 28, 2016 at 11:06 AM, Emilien Macchi <emil...@redhat.com> wrote:
>> Meeting cancelled again, no topic this week.
>> See you next week!
>>
>> On Mon, Jun 27, 2016 at 8:39 AM, Emilien Macchi <emil...@redhat.com> wrote:
>>> Hi,
>>>
>>> If you have any topic for our meeting tomorrow, please add them on the 
>>> etherpad:
>>> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160628
>>>
>>> See you tomorrow,
>>>
>>> On Tue, Jun 21, 2016 at 10:59 AM, Emilien Macchi <emil...@redhat.com> wrote:
>>>> Meeting cancelled, no topics this week.
>>>>
>>>> See you next week!
>>>>
>>>> On Mon, Jun 20, 2016 at 9:44 AM, Emilien Macchi <emil...@redhat.com> wrote:
>>>>> Hi Puppeteers!
>>>>>
>>>>> We'll have our weekly meeting tomorrow at 3pm UTC on
>>>>> #openstack-meeting-4.
>>>>>
>>>>> Here's a first agenda:
>>>>> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160621
>>>>>
>>>>> Feel free to add more topics, and any outstanding bug and patch.
>>>>>
>>>>> See you tomorrow!
>>>>> Thanks,
>>>>> --
>>>>> Emilien Macchi
>>>>
>>>>
>>>>
>>>> --
>>>> Emilien Macchi
>>>
>>>
>>>
>>> --
>>> Emilien Macchi
>>
>>
>>
>> --
>> Emilien Macchi
>
>
>
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] [murano] [yaql] yaqluator bug

2016-07-05 Thread Dougal Matthews
On 5 July 2016 at 08:32, Renat Akhmerov  wrote:

> Stan, thanks for clarification. What’s the latest stable version that
> we’re supposed to use? global-requirements.txt has yaql>=1.1.0, I wonder
> if it’s correct.
>

It is also worth looking at the upper-constraints.txt. It has 1.1.1 which
is the latest release. So it all seems up to date.

https://github.com/openstack/requirements/blob/master/upper-constraints.txt#L376

I think the problem is that this external project isn't being updated.
Assuming they have not deployed anything that isn't committed, then they
are running YAQL 1.0.2 which is almost a year old.

https://github.com/ALU-CloudBand/yaqluator/blob/master/requirements.txt#L3


>
> Renat Akhmerov
> @Nokia
>
> On 05 Jul 2016, at 12:12, Stan Lagun  wrote:
>
> Hi!
>
> The issue with join is just a yaql bug that is already fixed. The problem
> with yaqluator is that it doesn't use the latest yaql library.
>
> Another problem is that it doesn't set options correctly. As a result it
> is possible to bring the site down with a query that produces an endless
> collection.
>
> Sincerely yours,
> Stan Lagun
> Principal Software Engineer @ Mirantis
>
> 
>
> On Tue, Jun 28, 2016 at 9:46 AM, Elisha, Moshe (Nokia - IL) <
> moshe.eli...@nokia.com> wrote:
>
>> Hi,
>>
>> Thank you for the kind words, Alexey.
>>
>> I was able to reproduce your bug and I have also found the issue.
>>
>> The problem is that we did not create the parser with the engine_options
>> that the yaql library uses by default in its CLI.
>> Specifically, the "yaql.limitIterators" option was missing… I am not sure
>> this setting should have this effect, but maybe the yaql guys can comment
>> on that.
>>
>> If we change yaqluator to use this setting, yaqluator will not be
>> consistent with Mistral, because Mistral uses YAQL without this engine
>> option (if I use your example in a workflow, Mistral returns exactly what
>> yaqluator returns).
>>
>>
>> Workflow:
>>
>> ---
>> version: '2.0'
>>
>> test_yaql:
>>   tasks:
>> test_yaql:
>>   action: std.noop
>>   publish:
>> output_expr: <% [1,2].join([3], true, [$1, $2]) %>
>>
>>
>> Workflow result:
>>
>>
>> [root@s53-19 ~(keystone_admin)]# mistral task-get-published
>> 01d2bce3-20d0-47b2-84f2-7bd1cb2bf9f7
>> {
>> "output_expr": [
>> [
>> 1,
>> 3
>> ]
>> ]
>> }
>>
>>
>> As Matthews pointed out, yaqluator is indeed open source and
>> contributions are welcome.
>>
>> [1]
>> https://github.com/ALU-CloudBand/yaqluator/commit/e523dacdde716d200b5ed1015543d4c4680c98c2
>>
>>
>>
>> From: Dougal Matthews 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Monday, 27 June 2016 at 16:44
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Subject: Re: [openstack-dev] [mistral] [murano] [yaql] yaqluator bug
>>
>> On 27 June 2016 at 14:30, Alexey Khivin  wrote:
>>
>>> Hello, Moshe
>>>
>>> Recently I discovered yaqluator.com for myself! Thanks for the useful
>>> tool!
>>>
>>> But I noticed that the expression
>>> [1,2].join([3], true, [$1, $2])
>>> evaluates to [[1,3]] on yaqluator.
>>>
>>> At the same time, this expression evaluates correctly when I use the raw
>>> yaql interpreter.
>>>
>>> Could we fix this issue?
>>>
>>> By the way, don't you want to make yaqluator open source? If you
>>> transferred yaqluator to the OpenStack Foundation, the community would be
>>> able to fix this kind of bug.
>>>
>>
>> It looks like it is open source, there is a link in the footer:
>> https://github.com/ALU-CloudBand/yaqluator
>>
>>
>>>
>>> Thank you!
>>> Best regards, Alexey Khivin
>>>
>>>
>>>
>>>
>>>
>>>
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [nova][cinder] Integration testing for Nova API os-assisted-volume-snapshots

2016-07-05 Thread Silvan Kaiser
Hi Matt!
I'm not sure which tempest tests use os-assisted-volume-snapshots, and
therefore I'm not sure our CI covers it.
Our CIs run volume-related tests against cinder and nova changes,
but currently there are two bugs resulting in false negatives on all
changes [1],[2].
The latter concerns snapshots of instances with volumes in use; do these
possibly use the os-assisted-volume-snapshots API call?
Best
Silvan

[1] https://bugs.launchpad.net/nova/+bug/1597644
[2] https://bugs.launchpad.net/nova/+bug/1598833

2016-06-17 15:37 GMT+02:00 Matt Riedemann :

> On 6/17/2016 2:44 AM, Silvan Kaiser wrote:
>
>> I'd be happy to help, too. Please drop e.g. a bug link in this thread we
>> can use to follow up on things, that would be great.
>> Best
>> Silvan
>>
>> 2016-06-15 22:44 GMT+02:00 Sean McGinnis > >:
>>
>> On Wed, Jun 15, 2016 at 07:01:17PM +0200, Jordan Pittier wrote:
>> > On Wed, Jun 15, 2016 at 6:21 PM, Matt Riedemann <
>> mrie...@linux.vnet.ibm.com >
>>
>> > wrote:
>> >
>> ...
>> > > Does someone have a link to a successful job run for one of those
>> drivers?
>> > > I'd like to see if they are testing volume snapshot and that it's
>> properly
>> > > calling the nova API and everything is working. Because this is
>> also
>> > > something that Nova could totally unknowingly break to that flow
>> since we
>> > > have no CI coverage for it (we don't have those cinder 3rd party
>> CI jobs
>> > > running against nova changes).
>> > >
>> > > --
>> > >
>> >
>> > Hi Matt,
>> > I am in charge of the Scality CI. It used to report to changes in
>> Cinder. A
>> > change in devstack broke us a couple of months ago, so I had to
>> turn off my
>> > CI (because it was reporting false negative) while developing a
>> patch. The
>> > patch took a long time to develop and merge but was merged finally:
>> > https://review.openstack.org/#/c/310204/
>> >
>> > But in the mean time, something else crept in, hidden by the first
>> failure.
>> > So the Scality CI is still broken, but it is my intention to find
>> the
>> > commit that broke it and come up with a patch.
>> >
>> Jordan, please ping me when you have a patch for that and I will try
>> to
>> make it a priority.
>>
>> Thanks,
>> Sean
>>
>>
>>
>>
>>
>>
>> --
>> Dr. Silvan Kaiser
>> Quobyte GmbH
>> Hardenbergplatz 2, 10623 Berlin - Germany
>> +49-30-814 591 800 - www.quobyte.com
>> 
>> Amtsgericht Berlin-Charlottenburg, HRB 149012B
>> Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> Getting back to the original intent of the thread, are there CI results
> with instance/volume snapshot for these volume drivers so we can see that
> os-assisted-volume-snapshots is passing?
>
> I'm not particularly interested in the volume-backed instance snapshot
> scenario because unless I'm mistaken, that would do something like:
>
> 1. nova-api to snapshot the instance
> 2. calls to nova-compute to quiesce the instance
> 3. then calls to cinder to snapshot the volume
> 4. which then cinder calls back to nova's os-assisted-volume-snapshots API
>
> If that's the actual flow it's quite a complicated back and forth between
> the two services where lots of things could break down and I doubt get
> rolled back properly, similar to how cinder volume migration / retype has
> to call the nova swap-volume API which then calls back to cinder to tell it
> that the migration is complete.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Dr. Silvan Kaiser
Quobyte GmbH
Hardenbergplatz 2, 10623 Berlin - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender

[openstack-dev] [Performance] Team meeting canceled for today. Next meeting: July 19th

2016-07-05 Thread Dina Belova
Folks,

Unfortunately, I am unable to chair today's meeting as I am feeling
sick, and as discussed last Tuesday, there is no chance I'll be
available on July 12th. Let's assume our next meeting will be on Tuesday,
July 19th at 16:00 UTC.

Sorry for the inconvenience!

Cheers,
Dina
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New Python35 Jobs coming

2016-07-05 Thread Andreas Jaeger
On 2016-07-01 21:51, Clark Boylan wrote:
> The infra team is working on taking advantage of the new Ubuntu Xenial
> release including running unittests on python35. The current plan is to
> get https://review.openstack.org/#/c/336272/ merged next Tuesday (July
> 5, 2016). This will add non voting python35 tests restricted to >=
> master/Newton on all projects that had python34 testing.
> 
> The expectation is that in many cases python35 tests will just work if
> python34 testing was also working. If this is the case for your project
> you can propose a change to openstack-infra/project-config to make these
> jobs voting against your project. You should only need to edit
> jenkins/jobs/projects.yaml and zuul/layout.yaml and remove the '-nv'
> portion of the python35 jobs to do this.
> 
> We do however expect that there will be a large group of failed tests
> too. If your project has a specific tox.ini py34 target to restrict
> python3 testing to a specific list of tests you will need to add a tox
> target for py35 that does the same thing as the py34 target. We have
> also seen bug reports against some projects whose tests rely on stable
> error messages from Python itself which isn't always the case across
> version changes so these tests will need to be updated as well.
> 
> Note this change will not add python35 jobs for cases where projects
> have special tox targets. This is restricted just to the default py35
> unittesting.
> 
> As always let us know if you have questions,
> Clark

The change has merged and the python35 non-voting tests are run as part
of new changes now.

The database setup for the python35-db variant is not working yet, and
needs adjustment on the infra side.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][taas] Taas can not capture the packet, if the two VM on the same host. Is it a Bug?

2016-07-05 Thread SUZUKI, Kazuhiro

Hi Jimmy,

I guess that it has not been resolved yet.
You should try to ask it on IRC meeting, I think.

http://eavesdrop.openstack.org/#Tap_as_a_Service_Meeting

Regards,
Kaz


From: 张广明 
Subject: Re: [openstack-dev] [neutron][taas] Taas can not capture the
packet, if the two VM on the same host. Is it a Bug?
Date: Tue, 5 Jul 2016 19:31:14 +0800

Hi Kaz,
Thanks for your answer. But in the log, I cannot find how to resolve
this issue. In fact, this issue is not related to br-ex.
In OVS, the NORMAL action adds or removes the VLAN ID when outputting the
packet. So we should add another rule in br-int that uses as a match
condition an in_port belonging to the same VLAN as the mirror port.

Jimmy

2016-07-05 17:01 GMT+08:00 SUZUKI, Kazuhiro :

Hi,

I also have seen the same situation.
The same issue is discussed at the IRC meeting of TaaS.
Please take a look at the log.

http://eavesdrop.openstack.org/meetings/taas/2016/taas.2016-04-13-06.30.log.html

Regards,
Kaz


From: 张广明 
Subject: [openstack-dev] [neutron][taas] Taas can not capture the packet,
if the two VM on the same host. Is it a Bug?
Date: Fri, 1 Jul 2016 16:03:53 +0800

> Hi,
> I found a limitation when using TaaS. My test case is described as
> follows:
> VM1 and VM2 are running on the same host and belong to the same VLAN.
> The monitor VM is on the same host or another host. I want to monitor
> only the INPUT flow to VM1.
> So I configure the tap-flow like this: "neutron tap-flow-create
> --port 2a5a4382-a600-4fb1-8955-00d0fc9f648f --tap-service
> c510e5db-4ba8-48e3-bfc8-1f0b61f8f41b --direction IN".
> When pinging from VM2 to VM1, I cannot get the flow in the monitor VM.
> The reason is that the flow from VM2 to VM1 in br-int has no VLAN
> information. The VLAN tag is added to the flow when the packet is output
> in OVS.
> So this code in the file ovs_taas.py did not work in this case:
>
>  if direction == 'IN' or direction == 'BOTH':
>      port_mac = tap_flow['port_mac']
>      self.int_br.add_flow(table=0,
>                           priority=20,
>                           dl_vlan=port_vlan_id,
>                           dl_dst=port_mac,
>                           actions="normal,mod_vlan_vid:%s,output:%s" %
>                                   (str(taas_id), str(patch_int_tap_id)))
>
> Is this a Bug or by Design?
>
> Thanks.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Failed to install fuel master

2016-07-05 Thread Vladimir Kuklin
Hi, Alioune

Let's start with the most basic steps. Which Fuel are you using?
And how do you install the node? Do you use a USB stick, a DVD, or some IPMI
virtual media? From what I see here, it pretty much looks like this media is
not inserted into the host.

On Tue, Jul 5, 2016 at 3:11 PM, Alioune  wrote:

> Hi all,
>
> I'm trying to install fuel master using [1] on a physical server but the
> installation process fails and switches to a dracut console with the
> following error:
>
> dracut-initqueue[595] floppy0: no floppy controllers found
> dracut-initqueue[595] Warning: failed to fetch kickstart from
> hd:sr0:/ks.cfg
> dracut-initqueue[595] mount: no medium found on /dev/sr0
> dracut-initqueue[595] Warning: Could not boot
> dracut-initqueue[595] Warning: /dev/root does not exist
>
>
> Starting Dracut Emergency Shell
> Warning: /dev/root does not exist
> Generating "/run/initramfs/rdsosreport.txt"
> dracut:/#
>
> Any suggestion for solving that error ?
>
> Regards,
>
> [1]
> http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-install-guide/install/install_prepare_install_media.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Failed to install fuel master

2016-07-05 Thread Alioune
Hi all,

I'm trying to install fuel master using [1] on a physical server but the
installation process fails and switches to a dracut console with the
following error:

dracut-initqueue[595] floppy0: no floppy controllers found
dracut-initqueue[595] Warning: failed to fetch kickstart from hd:sr0:/ks.cfg
dracut-initqueue[595] mount: no medium found on /dev/sr0
dracut-initqueue[595] Warning: Could not boot
dracut-initqueue[595] Warning: /dev/root does not exist


Starting Dracut Emergency Shell
Warning: /dev/root does not exist
Generating "/run/initramfs/rdsosreport.txt"
dracut:/#

Any suggestion for solving that error ?

Regards,

[1]
http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-install-guide/install/install_prepare_install_media.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][taas] Taas can not capture the packet, if the two VM on the same host. Is it a Bug?

2016-07-05 Thread 张广明
Hi Kaz,
Thanks for your answer. But in the log, I cannot find how to resolve
this issue. In fact, this issue is not related to br-ex.
In OVS, the NORMAL action adds or removes the VLAN ID when outputting the
packet. So we should add another rule in br-int that uses as a match
condition an in_port belonging to the same VLAN as the mirror port.



 Jimmy

2016-07-05 17:01 GMT+08:00 SUZUKI, Kazuhiro :

> Hi,
>
> I also have seen the same situation.
> The same issue is discussed at the IRC meeting of TaaS.
> Please take a look at the log.
>
>
> http://eavesdrop.openstack.org/meetings/taas/2016/taas.2016-04-13-06.30.log.html
>
> Regards,
> Kaz
>
>
> From: 张广明 
> Subject: [openstack-dev] [neutron][taas] Taas can not capture the packet,
> if the two VM on the same host. Is it a Bug?
> Date: Fri, 1 Jul 2016 16:03:53 +0800
>
> > Hi,
> > I found a limitation when using TaaS. My test case is described as
> > follows:
> > VM1 and VM2 are running on the same host and belong to the same VLAN.
> > The monitor VM is on the same host or another host. I want to monitor
> > only the INPUT flow to VM1.
> > So I configure the tap-flow like this: "neutron tap-flow-create
> > --port 2a5a4382-a600-4fb1-8955-00d0fc9f648f --tap-service
> > c510e5db-4ba8-48e3-bfc8-1f0b61f8f41b --direction IN".
> > When pinging from VM2 to VM1, I cannot get the flow in the monitor VM.
> > The reason is that the flow from VM2 to VM1 in br-int has no VLAN
> > information. The VLAN tag is added to the flow when the packet is output
> > in OVS.
> > So this code in the file ovs_taas.py did not work in this case:
> >
> >  if direction == 'IN' or direction == 'BOTH':
> >      port_mac = tap_flow['port_mac']
> >      self.int_br.add_flow(table=0,
> >                           priority=20,
> >                           dl_vlan=port_vlan_id,
> >                           dl_dst=port_mac,
> >                           actions="normal,mod_vlan_vid:%s,output:%s" %
> >                                   (str(taas_id), str(patch_int_tap_id)))
> >
> > Is this a Bug or by Design?
> >
> > Thanks.
>
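Jimmy's proposed fix — matching on the source port's in_port instead of dl_vlan, since traffic switched locally inside br-int has not been tagged yet — could look roughly like the following. The FakeBridge class and all concrete port/VLAN values are assumptions for illustration only; this is not neutron or TaaS code:

```python
# Sketch of the extra br-int rule Jimmy proposes: match on the in_port of a
# same-VLAN source port instead of dl_vlan, because packets switched locally
# inside br-int carry no VLAN tag yet. FakeBridge and all values below are
# illustrative assumptions, not real neutron/TaaS objects.
class FakeBridge:
    def __init__(self):
        self.flows = []
    def add_flow(self, **kwargs):
        # Record the flow spec instead of programming a real OVS bridge.
        self.flows.append(kwargs)

int_br = FakeBridge()
taas_id = 3900                   # mirror VLAN id (illustrative)
patch_int_tap_id = 5             # ofport of the br-int -> br-tap patch port
src_ofport = 7                   # VM2's ofport on br-int (same VLAN as VM1)
port_mac = 'fa:16:3e:00:00:01'   # mirrored port's (VM1's) MAC

int_br.add_flow(table=0,
                priority=20,
                in_port=src_ofport,  # instead of dl_vlan=port_vlan_id
                dl_dst=port_mac,
                actions="normal,mod_vlan_vid:%s,output:%s" %
                        (taas_id, patch_int_tap_id))
print(int_br.flows[0]['in_port'])  # 7
```

One such rule would be needed per local port sharing the mirrored port's VLAN, which is the practical cost of this approach.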
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Nodes management in our shiny new TripleO API

2016-07-05 Thread Dmitry Tantsur

On 07/04/2016 01:42 PM, Steven Hardy wrote:

Hi Dmitry,

I wanted to revisit this thread, as I see some of these interfaces
are now posted for review, and I have a couple of questions around
the naming (specifically for the "provide" action):

On Thu, May 19, 2016 at 03:31:36PM +0200, Dmitry Tantsur wrote:


The last step before the deployment it to make nodes "available" using the
"provide" provisioning action. Such nodes are exposed to nova, and can be
deployed to at any moment. No long-running configuration actions should be
run in this state. The "manage" action can be used to bring nodes back to
"manageable" state for configuration (e.g. reintrospection).


So, I've been reviewing https://review.openstack.org/#/c/334411/ which
implements support for "openstack overcloud node provide"

I really hate to be the one nitpicking over openstackclient verbiage, but
I'm a little unsure if the literal translation of this results in an
intuitive understanding of what happens to the nodes as a result of this
action. So I wanted to have a broader discussion before we land the code
and commit to this interface.





Here, I think the problem is that while the dictionary definition of
"provide" is "make available for use, supply" (according to google), it
implies obtaining the node, not just activating it.

So, to me "provide node" implies going and physically getting the node that
does not yet exist, but AFAICT what this action actually does is takes an
existing node, and activates it (sets it to "available" state)

I'm worried this could be a source of operator confusion - has this
discussion already happened in the Ironic community, or is this a TripleO
specific term?


Hi, and thanks for the great question.

As I've already responded on the patch, this term is settled in our OSC 
plugin spec [1], and we feel like it reflects the reality pretty well. 
But I clearly understand that naming things is really hard, and what 
feels obvious to me does not feel obvious to the others. Anyway, I'd 
prefer if we stay consistent with how Ironic names things now.


[1] 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/ironicclient-osc-plugin.html




To me, something like "openstack overcloud node enable" or maybe "node
activate" would be more intuitive, as it implies taking an existing node
from the inventory and making it active/available in the context of the
overcloud deployment?


The problem here is that "provide" does not just "enable" nodes. It also 
makes nodes pass through cleaning, which may be a pretty complex and 
long process (we have it disabled for TripleO for this reason).
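The distinction can be sketched as a tiny state machine (a simplified illustration only — the real machine lives in Ironic's `ironic.common.states` and has many more states and transitions): "provide" routes through cleaning before the node becomes available, which is why it is more than a plain "enable".

```python
# Simplified sketch of the provisioning transitions discussed above.
# This is an illustration, not Ironic's actual implementation.
TRANSITIONS = {
    ("manageable", "provide"): "cleaning",   # "provide" first triggers cleaning
    ("cleaning", "done"): "available",       # only then is the node available
    ("available", "manage"): "manageable",   # bring it back for reconfiguration
}

def apply(state, action):
    """Return the next state, or raise if the action is invalid here."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError("action %r invalid in state %r" % (action, state))

state = "manageable"
state = apply(state, "provide")   # -> "cleaning" (may be long-running)
state = apply(state, "done")      # -> "available" (exposed to nova)
print(state)                      # available
```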




Anyway, not a huge issue, but given that this is a new step in our nodes
workflow, I wanted to ensure folks are comfortable with the terminology
before we commit to it in code.

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] I'm going to expire open bug reports older than 18 months.

2016-07-05 Thread Markus Zoeller
The expiration of old (stale) bug reports happened right now.

Stats:
# of bug reports before expiration: 826
# of expired bug reports:   188
# of bug reports after expiration:  638

That affected ~22% of our overall open bug reports and ~36% of bug
reports which are not in progress. You can see a graphical impact at
[1]. The list of affected bug reports is at [2] and as an attached file.

References:
[1] http://45.55.105.55:3000/dashboard/db/openstack-bugs
[2] http://paste.openstack.org/show/525937/
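For clarity, the selection criteria used for this cleanup (open status, no assignee, reported more than 18 months ago) can be sketched in a few lines of Python. This is an illustration of the filter only, not the actual cleanup script; the bug records are made-up examples.

```python
from datetime import datetime, timedelta

# A report expires if it is still in an open status, has no assignee,
# and was reported more than 18 months ago.
OPEN_STATUSES = {"new", "confirmed", "triaged"}

def is_expirable(bug, now):
    return (bug["status"] in OPEN_STATUSES
            and bug["assignee"] is None
            and now - bug["created"] > timedelta(days=18 * 30))

bugs = [
    {"id": 933498, "status": "confirmed", "assignee": None,
     "created": datetime(2012, 2, 16)},
    {"id": 1600000, "status": "in progress", "assignee": "someone",
     "created": datetime(2016, 6, 1)},
]
now = datetime(2016, 7, 5)
expired = [b["id"] for b in bugs if is_expirable(b, now)]
print(expired)  # [933498]
```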

-- 
Regards, Markus Zoeller (markus_z)

On 21.06.2016 14:43, Markus Zoeller wrote:
> A reminder that this will happen in ~2 weeks.
> 
> Please note that you can spare bug reports if you leave a comment there
> which says one of these (case-sensitive flags):
> * CONFIRMED FOR: NEWTON
> * CONFIRMED FOR: MITAKA
> * CONFIRMED FOR: LIBERTY
> 
> On 23.05.2016 13:02, Markus Zoeller wrote:
>> TL;DR: Automatic closing of 185 bug reports which are older than 18
>> months in the week R-13. Skipping specific bug reports is possible. A
>> bug report comment explains the reasons.
>>
>>
>> I'd like to get rid of more clutter in our bug list to make it more
>> comprehensible by a human being. For this, I'm targeting our ~185 bug
>> reports which were reported more than 18 months ago and still aren't in
>> progress.
>> That's around 37% of open bug reports which aren't in progress. This
>> post is about *how* and *when* I do it. If you have very strong reasons
>> to *not* do it, let me hear them.
>>
>> When
>> 
>> I plan to do it in the week after the non-priority feature freeze.
>> That's week R-13, at the beginning of July. Until this date you can
>> comment on bug reports so they get spared from this cleanup (see below).
>> Beginning from R-13 until R-5 (Newton-3 milestone), we should have
>> enough time to gain some overview of the rest.
>>
>> I also think it makes sense to make this a repeated effort, maybe after
>> each milestone/release or monthly or daily.
>>
>> How
>> ---
>> The bug reports which will be affected are:
>> * in status: [new, confirmed, triaged]
>> * AND without assignee
>> * AND created at: > 18 months
>> A preview of them can be found at [1].
>>
>> You can spare bug reports if you leave a comment there which says
>> one of these (case-sensitive flags):
>> * CONFIRMED FOR: NEWTON
>> * CONFIRMED FOR: MITAKA
>> * CONFIRMED FOR: LIBERTY
>>
>> The expired bug report will have:
>> * status: won't fix
>> * assignee: none
>> * importance: undecided
>> * a new comment which explains *why* this was done
>>
>> The comment the expired bug reports will get:
>> This is an automated cleanup. This bug report got closed because
>> it is older than 18 months and there is no open code change to
>> fix this. After this time it is unlikely that the circumstances
>> which lead to the observed issue can be reproduced.
>> If you can reproduce it, please:
>> * reopen the bug report
>> * AND leave a comment "CONFIRMED FOR: "
>>   Only still supported release names are valid.
>>   valid example: CONFIRMED FOR: LIBERTY
>>   invalid example: CONFIRMED FOR: KILO
>> * AND add the steps to reproduce the issue (if applicable)
>>
>>
>> Let me know if you think this comment gives enough information on how to
>> handle this situation.
>>
>>
>> References:
>> [1] http://45.55.105.55:8082/bugs-dashboard.html#tabExpired
>>
> 
> 


+---------+-----------------------------------------------------------------------------------+---------+
|   Bug # | Title                                                                             | Age (d) |
+---------+-----------------------------------------------------------------------------------+---------+
|  933498 | "List Volumes" should support filtering                                           |    1599 |
|  955792 | No public IP addresses listed in server representation                            |    1573 |
|  956589 | Device is busy error on lxc instance shutdown                                     |    1572 |
| 1018253 | No error message prompt during attaching when mountpoint is occupied              |    1462 |
| 1039665 | Creating Neutron L2 networks (without subnets) doesn't work as expected           |    1413 |
| 1045248 | dhcp server defaults to gateway for filtering when unset                          |    1401 |
| 1072734 | Improve instance state recovery for Compute service failure during Create Server  |    1344 |
| 1080278 | addFloatingIp multi-network
Re: [openstack-dev] [nova] Non-priority feature freeze and FFEs

2016-07-05 Thread Balázs Gibizer
> -Original Message-
> From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
> Sent: July 01, 2016 23:03
> 
> We're now past non-priority feature freeze. I've started going through
> some blueprints and -2ing them if they still have outstanding changes. I
> haven't gone through the full list yet (we started with 100).
> 
> I'm also building a list of potential FFE candidates based on:

I'm proposing 5 of the remaining notification transformation patches 
as FFE candidates [1].

> 
> 1. How far along the change is (how ready is it?), e.g. does it require
> a lot of change yet? Does it require a Tempest test and is that passing
> already? How much of the series has already merged and what's left?

The patches below are all ready, but they needed a rebase after the
last-minute changes on the instance.delete patch which they depend on.
A Tempest test is not required for these patches.

already had +2 +W:
 * https://review.openstack.org/329089 Transform instance.suspend notifications

already had +2:
* https://review.openstack.org/331972 Transform instance.restore notifications
* https://review.openstack.org/332696 Transform instance.shelve notifications

ready for core review 
* https://review.openstack.org/329141 Transform instance.pause notifications
* https://review.openstack.org/329255 Transform instance.resize notifications

The scope spec'd for the bp [1] has already merged, but these remaining
patches are fairly trivial.

> 
> 2. How much core reviewer attention has it already gotten?

See above.

> 
> 3. What kind of priority does it have, i.e. if we don't get it done in
> Newton do we miss something in Ocata? Think things that start
> deprecation/removal timers.

If we move these to Ocata then we slow down the notification transformation
work, which means the deprecation of the legacy notifications
also moves further into the future. 

Cheers,
Gibi


[1] 
https://blueprints.launchpad.net/nova/+spec/versioned-notification-transformation-newton
 

> 
> The plan is for the nova core team to have an informal meeting in the
> #openstack-nova IRC channel early next week, either Tuesday or
> Wednesday, and go through the list of potential FFE candidates.
> 
> Blueprints that get exceptions will be checked against the above
> criteria and who on the core team is actually going to push the changes
> through.
> 
> I'm looking to get any exceptions completed within a week, so targeting
> Wednesday 7/13. That leaves a few days for preparing for the meetup.
> 
> --
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][taas] Taas can not capture the packet, if the two VM on the same host. Is it a Bug?

2016-07-05 Thread SUZUKI, Kazuhiro
Hi,

I also have seen the same situation.
The same issue is discussed at the IRC meeting of TaaS.
Please take a look at the log.

http://eavesdrop.openstack.org/meetings/taas/2016/taas.2016-04-13-06.30.log.html

Regards,
Kaz


From: 张广明 
Subject: [openstack-dev] [neutron][taas] Taas can not capture the packet, if 
the two VM on the same host. Is it a Bug?
Date: Fri, 1 Jul 2016 16:03:53 +0800

> Hi,
> I found a limitation when using TaaS. My test case is described as
> follows:
> VM1 and VM2 are running on the same host and belong to the same VLAN.
> The monitor VM is on the same host or on another host. I want to monitor
> only the incoming (IN) traffic to VM1.
> So I configured the tap-flow like this: "neutron tap-flow-create --port
> 2a5a4382-a600-4fb1-8955-00d0fc9f648f --tap-service
> c510e5db-4ba8-48e3-bfc8-1f0b61f8f41b --direction IN".
> When pinging from VM2 to VM1, I cannot see the traffic in the monitor VM.
> The reason is that the flow from VM2 to VM1 in br-int carries no VLAN
> information; the VLAN tag is only added when OVS outputs the packet.
> So this code in ovs_taas.py does not work in this case:
> 
>     if direction == 'IN' or direction == 'BOTH':
>         port_mac = tap_flow['port_mac']
>         self.int_br.add_flow(table=0,
>                              priority=20,
>                              dl_vlan=port_vlan_id,
>                              dl_dst=port_mac,
>                              actions="normal,mod_vlan_vid:%s,output:%s" %
>                              (str(taas_id), str(patch_int_tap_id)))
> 
> 
> 
> 
> Is this a bug or by design?
> 
> 
> 
> Thanks.
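A toy model makes the mismatch concrete. This simulates only the match logic described above; it is not OVS or TaaS code, and the MAC and VLAN values are made up:

```python
# The TaaS rule matches on dl_vlan + dl_dst, but a packet going
# VM2 -> VM1 on the same host traverses br-int untagged, so the rule
# never fires and nothing is mirrored.
TAAS_RULE = {"dl_vlan": 5, "dl_dst": "fa:16:3e:00:00:01"}

def rule_matches(rule, packet):
    """Return True if every field in the rule matches the packet."""
    return all(packet.get(field) == value for field, value in rule.items())

# Packet from a remote host: it arrives tagged with the network's VLAN.
remote_pkt = {"dl_vlan": 5, "dl_dst": "fa:16:3e:00:00:01"}
# Packet from VM2 on the same host: no VLAN tag yet inside br-int.
local_pkt = {"dl_vlan": None, "dl_dst": "fa:16:3e:00:00:01"}

print(rule_matches(TAAS_RULE, remote_pkt))  # True  -> mirrored
print(rule_matches(TAAS_RULE, local_pkt))   # False -> not mirrored
```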
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][alembic] Upgrade of db with alembic migration script

2016-07-05 Thread slawek

Hello,

Thanks a lot for the help. I will check this review and your comments.

Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On 05.07.2016 09:23, Anna Kamyshnikova wrote:


Hi!

I've posted a comment on your change, please check it out. I think I've 
seen a similar issue in https://review.openstack.org/#/c/283802.


On Mon, Jul 4, 2016 at 11:54 PM, Sławek Kapłoński  
wrote:



Hello,

I'm working on a patch to add a QoS ingress bandwidth limit to Neutron
(https://review.openstack.org/303626) and I have a small problem with
the db upgrade with alembic.

Problem description:
In the qos_bandwidth_limit_rules table there is currently a foreign key
"qos_policy_id" with a unique constraint.
I need to add a new column called "direction" to this table and then
remove the unique constraint on qos_policy_id. Finally, I need to add a
new unique constraint on the pair (direction, qos_policy_id).
To do that I need to:
1. remove the qos_policy_id foreign key
2. remove the unique constraint on qos_policy_id (because it is not
removed automatically)
3. add the new column
4. add the new unique constraint

Points 3 and 4 are easy and there is no problem with them.

The problem is with point 2 (removing the unique constraint).
To remove the qos_policy_id fk I used the function
neutron.db.migration.remove_fks_from_table() and it works fine, but it
does not remove the unique constraint.
I made some modifications to this function:
https://review.openstack.org/#/c/303626/21/neutron/db/migration/__init__.py
and these modifications work fine for MySQL, but in PostgreSQL the
unique constraint is not removed, so in the end there are two
constraints in the table, which is wrong.

I'm not an expert in PostgreSQL or alembic. Maybe someone with more
experience can look at it and help me write such a migration script?

Thx in advance for any help.
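For what it's worth, the migration described above could look roughly like the sketch below. This is untested, and the constraint names are assumptions — the auto-generated names differ between MySQL and PostgreSQL (which is exactly where the trouble comes from), so in practice they should be discovered via SQLAlchemy inspection rather than hard-coded:

```python
# Sketch of an alembic upgrade for the four steps listed above.
# Constraint names below are placeholders, not the real ones.
import sqlalchemy as sa
from alembic import op

TABLE = 'qos_bandwidth_limit_rules'

def upgrade():
    # 1. + 2. drop the foreign key, then the unique constraint that
    #    otherwise survives the FK removal
    op.drop_constraint('qos_bandwidth_limit_rules_ibfk_1', TABLE,
                       type_='foreignkey')
    op.drop_constraint('uniq_qos_bandwidth_limit_rules0qos_policy_id',
                       TABLE, type_='unique')
    # 3. add the new column
    op.add_column(TABLE, sa.Column('direction', sa.String(16),
                                   server_default='egress'))
    # 4. re-add the FK and create the composite unique constraint
    op.create_foreign_key(None, TABLE, 'qos_policies',
                          ['qos_policy_id'], ['id'], ondelete='CASCADE')
    op.create_unique_constraint(
        'uniq_qos_bw_rules0qos_policy_id0direction',
        TABLE, ['qos_policy_id', 'direction'])
```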

--
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--

Regards,
Ann Kamyshnikova
Mirantis, Inc

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] [murano] [yaql] yaqluator bug

2016-07-05 Thread Renat Akhmerov
Stan, thanks for clarification. What’s the latest stable version that we’re 
supposed to use? global-requirements.txt has yaql>=1.1.0, I wonder if it’s 
correct.

Renat Akhmerov
@Nokia
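For reference, the join semantics at issue in the quoted thread below can be sketched in plain Python. This is an illustration of the expected behaviour, not yaql's implementation:

```python
# Plain-Python sketch of yaql's collection.join(other, predicate,
# selector): every pair from the cross product that satisfies the
# predicate is passed to the selector.
def join(left, right, predicate, selector):
    return [selector(a, b) for a in left for b in right if predicate(a, b)]

# Equivalent of the yaql expression [1, 2].join([3], true, [$1, $2])
result = join([1, 2], [3],
              predicate=lambda a, b: True,
              selector=lambda a, b: [a, b])
print(result)  # [[1, 3], [2, 3]] -- the buggy yaqluator returned only [[1, 3]]
```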

> On 05 Jul 2016, at 12:12, Stan Lagun  wrote:
> 
> Hi!
> 
> The issue with join is just a yaql bug that is already fixed. The problem 
> with yaqluator is that it doesn't use the latest yaql library.
> 
> Another problem is that it doesn't set options correctly. As a result it is 
> possible to bring the site down with a query that produces an endless collection.
> 
> Sincerely yours,
> Stan Lagun
> Principal Software Engineer @ Mirantis
> 
>  
> On Tue, Jun 28, 2016 at 9:46 AM, Elisha, Moshe (Nokia - IL) 
> > wrote:
> Hi,
> 
> Thank you for the kind words, Alexey.
> 
> I was able to reproduce your bug and I have also found the issue.
> 
> The problem is that we did not create the parser with the engine_options that 
> the yaql library uses by default in its CLI.
> Specifically, the "yaql.limitIterators" option was missing… I am not sure that 
> this setting should have this effect, but maybe the yaql guys can comment on that.
> 
> If we change yaqluator to use this setting, yaqluator will no longer be 
> consistent with Mistral, because Mistral uses YAQL without this engine option 
> (if I use your example in a workflow, Mistral returns exactly what yaqluator 
> returns).
> 
> 
> Workflow:
> 
> ---
> version: '2.0'
> 
> test_yaql:
>   tasks:
> test_yaql:
>   action: std.noop
>   publish:
> output_expr: <% [1,2].join([3], true, [$1, $2]) %>
> 
> Workflow result:
> 
> 
> [root@s53-19 ~(keystone_admin)]# mistral task-get-published 
> 01d2bce3-20d0-47b2-84f2-7bd1cb2bf9f7
> {
> "output_expr": [
> [
> 1,
> 3
> ]
> ]
> }
> 
> 
> As Dougal Matthews pointed out, yaqluator is indeed open source and contributions 
> are welcome.
> 
> [1] 
> https://github.com/ALU-CloudBand/yaqluator/commit/e523dacdde716d200b5ed1015543d4c4680c98c2
>  
> 
> 
> 
> 
> From: Dougal Matthews >
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> >
> Date: Monday, 27 June 2016 at 16:44
> To: "OpenStack Development Mailing List (not for usage questions)" 
> >
> Subject: Re: [openstack-dev] [mistral] [murano] [yaql] yaqluator bug
> 
> On 27 June 2016 at 14:30, Alexey Khivin  > wrote:
> Hello, Moshe 
> 
> Tomorrow I discovered yaqluator.com  for myself! 
> Thanks for the useful tool!
> 
> But I was surprised to find that the expression 
> [1,2].join([3], true, [$1, $2]) 
> evaluates to [[1,3]] on yaqluator.
> 
> At the same time, this expression evaluates correctly when I use the raw 
> yaql interpreter.
> 
> Could we fix this issue?
> 
> By the way, don't you want to make yaqluator open source? If you 
> transferred yaqluator to the OpenStack Foundation, the community would be 
> able to fix this kind of bug.
> 
> It looks like it is open source, there is a link in the footer: 
> https://github.com/ALU-CloudBand/yaqluator 
> 
>  
> 
> Thank you!
> Best regards, Alexey Khivin
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][alembic] Upgrade of db with alembic migration script

2016-07-05 Thread Anna Kamyshnikova
Hi!

I've posted a comment on your change, please check it out. I think I've
seen a similar issue in https://review.openstack.org/#/c/283802.

On Mon, Jul 4, 2016 at 11:54 PM, Sławek Kapłoński 
wrote:

> Hello,
>
> I'm working on a patch to add a QoS ingress bandwidth limit to Neutron
> (https://review.openstack.org/303626) and I have a small problem with
> the db upgrade with alembic.
> Problem description:
> In the qos_bandwidth_limit_rules table there is currently a foreign key
> "qos_policy_id" with a unique constraint.
> I need to add a new column called "direction" to this table and then remove
> the unique constraint on qos_policy_id. Finally, I need to add a new unique
> constraint on the pair (direction, qos_policy_id).
> To do that I need to:
> 1. remove the qos_policy_id foreign key
> 2. remove the unique constraint on qos_policy_id (because it is not removed
> automatically)
> 3. add the new column
> 4. add the new unique constraint
>
> Points 3 and 4 are easy and there is no problem with them.
>
> The problem is with point 2 (removing the unique constraint).
> To remove the qos_policy_id fk I used the function
> neutron.db.migration.remove_fks_from_table() and it works fine, but it does
> not remove the unique constraint.
> I made some modifications to this function:
> https://review.openstack.org/#/c/303626/21/neutron/db/migration/__init__.py
> and these modifications work fine for MySQL, but in PostgreSQL the unique
> constraint is not removed, so in the end there are two constraints in the
> table, which is wrong.
>
> I'm not an expert in PostgreSQL or alembic. Maybe someone with more
> experience can look at it and help me write such a migration script?
>
> Thx in advance for any help.
>
> --
> Best regards / Pozdrawiam
> Sławek Kapłoński
> sla...@kaplonski.pl
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Non-priority feature freeze and FFEs

2016-07-05 Thread tie...@vn.fujitsu.com
Hi folks,

I want to give more information about our Nova patch for bp 
ironic-serial-console-support. The whole feature requires work in both Nova 
and Ironic. The Nova bp [1] has been approved, and the Ironic spec [2] has 
been merged.

This Nova patch [3] is simple, and it has had reviews from some Nova and 
Ironic core reviewers. The dependent patches in Ironic are [4] and [5]; [4] 
will be merged soon and [5] is still in review.

Hope the Nova core team considers adding this case to the exception list.

[1] https://blueprints.launchpad.net/nova/+spec/ironic-serial-console-support  
(Nova bp, approved by dansmith)
[2] https://review.openstack.org/#/c/319505/  (Ironic spec, merged)

[3] https://review.openstack.org/#/c/328157/  (Nova patch, in review)
[4] https://review.openstack.org/#/c/328168/  (Ironic patch 1st, got two +2, 
will get merged soon)
[5] https://review.openstack.org/#/c/293873/  (Ironic patch 2nd, in review)

Thanks and Regards
Dao Cong Tien


-Original Message-
From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com] 
Sent: Saturday, July 02, 2016 4:03 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova] Non-priority feature freeze and FFEs

We're now past non-priority feature freeze. I've started going through some 
blueprints and -2ing them if they still have outstanding changes. I haven't 
gone through the full list yet (we started with 100).

I'm also building a list of potential FFE candidates based on:

1. How far along the change is (how ready is it?), e.g. does it require a lot 
of change yet? Does it require a Tempest test and is that passing already? How 
much of the series has already merged and what's left?

2. How much core reviewer attention has it already gotten?

3. What kind of priority does it have, i.e. if we don't get it done in Newton 
do we miss something in Ocata? Think things that start deprecation/removal 
timers.

The plan is for the nova core team to have an informal meeting in the 
#openstack-nova IRC channel early next week, either Tuesday or Wednesday, and 
go through the list of potential FFE candidates.

Blueprints that get exceptions will be checked against the above criteria and 
who on the core team is actually going to push the changes through.

I'm looking to get any exceptions completed within a week, so targeting 
Wednesday 7/13. That leaves a few days for preparing for the meetup.

-- 

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle]

2016-07-05 Thread Shinobu Kinjo
# This email is true to reply.
Would you attach the local.conf you're using?

On Mon, Jul 4, 2016 at 4:10 PM, Luck Dog  wrote:

> Hello everyone,
>
> I am trying to run DevStack on Ubuntu 14.04 in a single VirtualBox VM. An
> error turns up before it successfully starts. Yesterday I did not explain
> this question clearly enough, so I am adding more detail. My steps are:
> 1) Git clone DevStack.
> 2) Copy devstack/local.conf.sample to the DevStack folder and rename it to
> local.conf.
>
> The steps that completed before the error turns up are listed as follows:
> 2016-06-29 09:11:53.081 | stack.sh log
> /opt/stack/logs/stack.sh.log.2016-06-29-171152
> 2016-06-29 09:12:19.797 | Installing package prerequisites
> 2016-06-29 09:15:27.224 | Installing OpenStack project source
> 2016-06-29 09:24:43.323 | Installing Tricircle
> 2016-06-29 09:24:55.979 | Starting RabbitMQ
> 2016-06-29 09:25:00.731 | Configuring and starting MySQL
> 2016-06-29 09:25:20.143 | Starting Keystone
> 2016-06-29 09:43:18.591 | Configuring Glance
> 2016-06-29 09:43:59.667 | Configuring Neutron
> 2016-06-29 09:46:30.646 | Configuring Cinder
> 2016-06-29 09:46:54.719 | Configuring Nova
> 2016-06-29 09:48:23.175 | Configuring Tricircle
> 2016-06-29 09:51:24.143 | Starting Glance
> 2016-06-29 09:52:11.133 | Uploading images
> 2016-06-29 09:52:45.460 | Starting Nova API
> 2016-06-29 09:53:27.511 | Starting Neutron
> 2016-06-29 09:54:21.476 | Creating initial neutron network elements
>
> The last errors when it stops running are:
>
> Request body: {u'network': {u'router:external': True,
> u'provider:network_type': u'flat', u'name': u'public',
> u'provider:physical_network': u'public', u'admin_state_up': True}}
> from (pid=29980) prepare_request_body
> /opt/stack/neutron/neutron/api/v2/base.py:674
> 2016-06-29 17:56:04.359 DEBUG neutron.db.quota.driver
> [req-e97f6276-8e19-408b-829a-004a31256453 admin
> 13869ba8005b480bbcbe17b2695fd5e2] Resources
> subnetpool have unlimited quota limit. It is not required to calculate
> headroom from (pid=29980) make_reservation
> /opt/stack/neutron/neutron/db/quota/driver.py:191
> 2016-06-29 17:56:04.381 DEBUG neutron.db.quota.driver
> [req-e97f6276-8e19-408b-829a-004a31256453 admin
> 13869ba8005b480bbcbe17b2695fd5e2] Attempting to
> reserve 1 items for resource network. Total usage: 0; quota limit: 10;
> headroom:10 from (pid=29980) make_reservation
> /opt/stack/neutron/neutron/db/quota/driver.py:223
> 2016-06-29 17:56:04.425 ERROR neutron.api.v2.resource
> [req-e97f6276-8e19-408b-829a-004a31256453 admin
> 13869ba8005b480bbcbe17b2695fd5e2] create failed
> 2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource Traceback (most recent call last):
> 2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/resource.py", line 78, in resource
> 2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource     result = method(request=request, **args)
> 2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 424, in create
> 2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource     return self._create(request, body, **kwargs)
> 2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper
> 2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource     ectxt.value = e.inner_exc
> 2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in __exit__
> 2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource     self.force_reraise()
> 2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in force_reraise
> 2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
> 2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
> 2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource     return f(*args, **kwargs)
> 2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 535, in _create
> 2016-06-29 17:56:04.425 TRACE neutron.api.v2.resource     return obj_creator(request.context, **kwargs)
> 

Re: [openstack-dev] [nova] Non-priority feature freeze and FFEs

2016-07-05 Thread Claudiu Belu
Hi, 

The Hyper-V implementation of bp virt-device-role-tagging is mergeable [1]. 
The patch is quite simple, it has had some reviews, and the tempest test 
test_device_tagging [2] passes [3].

[1] https://review.openstack.org/#/c/331889/
[2] https://review.openstack.org/#/c/305120/
[3] http://64.119.130.115/debug/nova/331889/8/04-07-2016_19-43/results.html.gz

Best regards,

Claudiu Belu


From: Markus Zoeller [mzoel...@linux.vnet.ibm.com]
Sent: Monday, July 04, 2016 2:24 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Non-priority feature freeze and FFEs

On 01.07.2016 23:03, Matt Riedemann wrote:
> We're now past non-priority feature freeze. I've started going through
> some blueprints and -2ing them if they still have outstanding changes. I
> haven't gone through the full list yet (we started with 100).
>
> I'm also building a list of potential FFE candidates based on:
>
> 1. How far along the change is (how ready is it?), e.g. does it require
> a lot of change yet? Does it require a Tempest test and is that passing
> already? How much of the series has already merged and what's left?
>
> 2. How much core reviewer attention has it already gotten?
>
> 3. What kind of priority does it have, i.e. if we don't get it done in
> Newton do we miss something in Ocata? Think things that start
> deprecation/removal timers.
>
> The plan is for the nova core team to have an informal meeting in the
> #openstack-nova IRC channel early next week, either Tuesday or
> Wednesday, and go through the list of potential FFE candidates.
>
> Blueprints that get exceptions will be checked against the above
> criteria and who on the core team is actually going to push the changes
> through.
>
> I'm looking to get any exceptions completed within a week, so targeting
> Wednesday 7/13. That leaves a few days for preparing for the meetup.
>

FWIW, bp "libvirt-virtlogd" [1] is basically ready to merge. The two
changes [2] and [3] did already get a lot of attention from danpb.

References:
[1] https://blueprints.launchpad.net/openstack/?searchtext=libvirt-virtlogd
[2] https://review.openstack.org/#/c/334480/
[3] https://review.openstack.org/#/c/323765/

--
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev