Re: [openstack-dev] [Bilean][CloudKitty][Telemetry] Open discussion around Bilean and existing OpenStack components

2016-07-06 Thread 吕冬兵
Hi,


I'm sorry to be seeing this discussion so late :). Thanks for the attention.


I don't oppose contributing a trigger-based solution to CloudKitty; I
just want to know whether trigger-based billing is possible on CloudKitty's
current architecture, and if so, a general description of how. The other thing
I want to confirm is whether it is a good idea to mix two different solutions
in one component.




 
 
-- Original --
From:  "Stéphan Albert";
Date:  Fri, Jul 1, 2016 11:37 PM
To:  "openstack-dev"; 

Subject:  [openstack-dev] [Bilean][CloudKitty][Telemetry] Open discussion around
Bilean and existing OpenStack components

 
Hi,

I would like to continue the discussion that started in the [review][1]
for the Big-Tent integration of the project Bilean.

In the [review][1] the Bilean team stated that a new project was needed
to overcome limitations of existing components.
In this thread, I would like to have an open discussion about what
features are lacking in the available components and what needs to be
done to integrate the Bilean use case with the current components.

I'm not opposed to changes and new features in CloudKitty, and I'm
pretty sure that trigger-based billing can be integrated in CloudKitty's
codebase.

From my perspective, the CloudKitty team is a small team, and having two
teams working on rating/billing just scatters contributions and is
detrimental to both projects. It also brings confusion to users' minds about
which components should be used.

We can add this topic to our meeting agenda, so we can have a talk on
IRC.

I'm hoping we'll find a solution that benefits existing projects and
enables you to implement your trigger-based billing solution.

Cheers,
Stéphane

[1]: https://review.openstack.org/#/c/334350/

OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] upgrades vs rootwrap

2016-07-06 Thread Angus Lees
On Thu, 7 Jul 2016 at 03:06 Matthew Treinish  wrote:

> On Wed, Jul 06, 2016 at 11:41:56AM -0500, Matt Riedemann wrote:
> > I just wonder how many deployments are actually relying on this, since as
> > noted elsewhere in this thread we don't really enforce this for all
> things,
> > only what happens to get tested in our CI system, e.g. the virtuozzo
> > rootwrap filters that don't have grenade testing.
>
> Sure, our testing coverage here is far from perfect, that's never been in
> dispute. It's always been best effort (and there has been limited effort in this
> space); for example, I'm not aware of anything doing any upgrade testing with
> virtuozzo enabled, or any of the other random ephemeral storage backends,
> **cough** ceph **cough**.  But, as I said before just because we don't
> catch all
> the issues isn't a reason to throw everything out the window.
>

So now we have identified some other examples, recently added to the
codebase, that were not noticed by grenade for one reason or another.

Do we:
A) revert+postpone the virtuozzo changes until the next release?
B) add a releasenote saying you need to update the rootwrap filter first?

(Yes, this is a test)

It's boring, but not that hard to manually diff filters between releases -
I can do an audit if we'd like to build a list of other such changes.
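
To make that concrete, a rough sketch of the kind of audit I have in mind
(repo URL, branch names and paths below are illustrative; the same idea
applies to any project shipping rootwrap filters):

    # compare the rootwrap filters shipped by two nova releases
    git clone git://git.openstack.org/openstack/nova
    cd nova
    # any hunk in this diff is a candidate "update your filters first" note
    git diff stable/mitaka..master -- etc/nova/rootwrap.d/ etc/nova/rootwrap.conf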

 - Gus
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Stability and reliability of gate jobs

2016-07-06 Thread Paul Belanger
On Thu, Jun 16, 2016 at 12:20:06PM +, Steven Dake (stdake) wrote:
> David,
> 
> The gates are unreliable for a variety of reasons - some we can fix - some
> we can't directly.
> 
> RDO rabbitmq introduced IPv6 support to erlang, which caused our gate
> reliability to drop dramatically.  Prior to this change, our gate was running
> at 95% reliability or better - assuming the code wasn't busted.
> The gate gear is different - meaning different setup.  We have been
> working on debugging all these various gate provider issues with infra
> team and I think that is mostly concluded.
> The gate changed to something called bindeps which has been less reliable
> for us.

I would be curious to hear your issues with bindep. A quick look at kolla shows
you are not using other-requirements.txt yet, so you are using our default
fallback.txt file. I am unsure how that could be impacting you.
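
For reference, other-requirements.txt is just the list of distro packages the
jobs should install, with optional platform selectors that bindep understands.
A minimal, purely illustrative sketch (the package names are examples, not a
recommendation for kolla):

    # other-requirements.txt at the repo root
    gcc
    libffi-dev [platform:dpkg]
    libffi-devel [platform:rpm]
    libssl-dev [platform:dpkg]
    openssl-devel [platform:rpm]

With that file present, the jobs install only what is listed instead of the
much larger default fallback set.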

> We do not have mirrors of CentOS repos - although it is in the works.
> Mirrors will ensure that images always get built.  At the moment many of
> the gate failures are triggered by build failures (the mirrors are too
> busy).

This is no longer the case; openstack-infra is now mirroring both centos-7[1]
and epel-7[2], and just this week we brought Ubuntu Cloud Archive[3] online. It
would be pretty trivial to update kolla to start using them.

[1] http://mirror.dfw.rax.openstack.org/centos/7/
[2] http://mirror.dfw.rax.openstack.org/epel/7/
[3] http://mirror.dfw.rax.openstack.org/ubuntu-cloud-archive/
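
As a rough illustration of what using them could look like on a centos-7 node
(a sketch only - in a real job the mirror hostname would be the per-region
mirror selected at runtime rather than dfw.rax hard-coded, and the path layout
simply follows the standard CentOS mirror tree):

    # /etc/yum.repos.d/infra-centos-base.repo
    [infra-base]
    name=CentOS-7 - Base - OpenStack infra mirror
    baseurl=http://mirror.dfw.rax.openstack.org/centos/7/os/$basearch/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7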

> We do not have mirrors of the other 5-10 repos and files we use.  This
> causes more build failures.
> 
We do have the infrastructure in AFS to do this; it would require you to write
the patch and submit it to openstack-infra so we can bring it online.  In fact,
the OpenStack Ansible team was responsible for the UCA mirror above; I simply
did the last 5% to bring it into production.

> Complicating matters, any of these 5 things above can crater one gate job
> of which we run about 15 jobs, which causes the entire gate to fail (if
> they were voting).  I really want a voting gate for kolla's jobs.  I super
> want it.  The reason we can't make the gates voting at this time is
> because of the sheer unreliability of the gate.
> 
> If anyone is up for a thorough analysis of *why* the gates are failing,
> that would help us fix them.
> 
> Regards
> -steve
> 
> On 6/15/16, 3:27 AM, "Paul Bourke"  wrote:
> 
> >Hi David,
> >
> >I agree with this completely. Gates continue to be a problem for Kolla,
> >reasons why have been discussed in the past but at least for me it's not
> >clear what the key issues are.
> >
> >I've added this item to the agenda for today's IRC meeting (16:00 UTC -
> >https://wiki.openstack.org/wiki/Meetings/Kolla). It may help if we can
> >brainstorm a list of the most common problems here beforehand.
> >
> >To kick things off, rabbitmq seems to cause a disproportionate amount of
> >issues, and the problems are difficult to diagnose, particularly when
> >the only way to debug is to submit "DO NOT MERGE" patch sets over and
> >over. Here's an example of a failed centos binary gate from a simple
> >patch set I was reviewing this morning:
> >http://logs.openstack.org/06/329506/1/check/gate-kolla-dsvm-deploy-centos-
> >binary/3486d03/console.html#_2016-06-14_15_36_19_425413
> >
> >Cheers,
> >-Paul
> >
> >On 15/06/16 04:26, David Moreau Simard wrote:
> >> Hi Kolla o/
> >>
> >> I'm writing to you because I'm concerned.
> >>
> >> In case you didn't already know, the RDO community collaborates with
> >> upstream deployment and installation projects to test its packaging.
> >>
> >> This relationship is beneficial in a lot of ways for both parties, in
> >>summary:
> >> - RDO has improved test coverage (because it's otherwise hard to test
> >> different ways of installing, configuring and deploying OpenStack by
> >> ourselves)
> >> - The RDO community works with upstream projects (deployment or core
> >> projects) to fix issues that we find
> >> - In return, the collaborating deployment project can feel more
> >> confident that the RDO packages it consumes have already been tested
> >> using its platform and should work
> >>
> >> To make a long story short, we do this with a project called WeIRDO
> >> [1] which essentially runs gate jobs outside of the gate.
> >>
> >> I tried to get Kolla in our testing pipeline during the Mitaka cycle.
> >> I really did.
> >> I contributed the necessary features I needed in Kolla in order to
> >> make this work, like the configurable Yum repositories for example.
> >>
> >> However, in the end, I had to put off the initiative because the gate
> >> jobs were very flappy and unreliable.
> >> We cannot afford to have a job that is *expected* to flap in our
> >> testing pipeline, it leads to a lot of wasted time, effort and
> >> resources.
> >>
> >> I think there's been a lot of improvements since my last attempt but
> >> to get a sample of data, I looked at ~30 recently 

Re: [openstack-dev] [security] [horizon] Security implications of exposing a keystone token to a JS client

2016-07-06 Thread David Stanek
On 07/01 at 19:41, Fox, Kevin M wrote:
> Hi David,
> 
> How do you feel about the approach here:
> https://review.openstack.org/#/c/311189/
> 
> It lets the existing angular js module:
> horizon.app.core.openstack-service-api.keystone
> 
> access the current token via getCurrentUserSession().token
> 

Hey Kevin,

It's hard to tell without a lot of the context. From what I can tell the
token is pulled down as part of the data of an API request. As long as
that's not cached I think you are OK.

-- 
David Stanek
web: http://dstanek.com
blog: http://traceback.org

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [app-catalog][murano] Dashboards

2016-07-06 Thread Kirill Zaitsev
Back in Austin we agreed to start bringing the murano-dashboard and
app-catalog-ui horizon dashboards closer to each other. See the etherpad [1]
for more context and to refresh what we’ve been talking about. Since
murano-dashboard is mostly a python project and app-catalog-ui is mostly a
javascript/angular project, moving them to a single code base now doesn’t make
much sense to me =) However, we can start working towards moving under a common
namespace/dashboard in horizon.

In Austin we agreed to see if that is possible/feasible to do. I’ve uploaded
two patches for review [2] that do exactly that. Both of them are currently
WIP, but despite that they show that we can fit our panels under one
horizon dashboard (I took the liberty of naming it "Applications"). There is
also a short gif that shows my dev horizon environment [3].

We haven’t decided the exact layout in Austin, so I would like to kickstart 
that discussion. I’ve drafted a small etherpad[4], that I suggest we could use 
to share ideas.

I would really appreciate it if you could find a couple of minutes of your time
to think about the layout for our dashboards and put those thoughts in the
etherpad.

P.S. The other part where we agreed to collaborate on naming was the OSC 
command namespaces. I do remember talking about the names, but can’t remember 
if we agreed on anything specific. So this might be a good opportunity to 
also figure this question out.

[1] https://etherpad.openstack.org/p/AUS-app-catalog 
[2] https://review.openstack.org/#/q/topic:applications-dashboard  
[3] https://www.dropbox.com/s/26sfxpoc9hd8gi7/shared_dashboard.gif?dl=0  
[4] https://etherpad.openstack.org/p/apps-dashboard-structure 

-- 
Kirill Zaitsev
Murano Project Tech Lead
Software Engineer at
Mirantis, Inc

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][networking-sfc] meeting topics for 7/7/2016 networking-sfc project IRC meeting

2016-07-06 Thread Cathy Zhang
Hi everyone,
The following link shows the topics I have for this week's project meeting 
discussion. Feel free to add more.
https://wiki.openstack.org/wiki/Meetings/ServiceFunctionChainingMeeting
Meeting Info: Every Thursday 1700 UTC on #openstack-meeting-4
Thanks,
Cathy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Rabbit-mq 3.4 crashing (anyone else seen this?)

2016-07-06 Thread Rochelle Grober
repository is:  http://git.openstack.org/cgit/openstack/osops-tools-contrib/

FYI, there are also:  osops-tools-generic, osops-tools-logging, 
osops-tools-monitoring, osops-example-configs and osops-coda

Wish I could help more,

--Rocky

-Original Message-
From: Joshua Harlow [mailto:harlo...@fastmail.com] 
Sent: Tuesday, July 05, 2016 10:44 AM
To: Matt Fischer
Cc: openstack-dev@lists.openstack.org; OpenStack Operators
Subject: Re: [openstack-dev] [Openstack-operators] [nova] Rabbit-mq 3.4 
crashing (anyone else seen this?)

Ah, those sets of commands sound pretty nice to run periodically,

Sounds like a useful script that could be placed in the ops tools repo 
(I forget where that repo lives, but pretty sure it does exist?).

Some other oddness though is that this issue seems to go away when we 
don't run cross-release; do you see that also?

Another hypothesis was that the following fix may be triggering part of 
this @ https://bugs.launchpad.net/oslo.messaging/+bug/1495568

So if we have some queues being set up as auto-delete and some being set up 
with expiry, perhaps the combination of these causes more work for the 
management database (and therefore it eventually falls behind and falls 
over).
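
For what it's worth, a first pass for that ops tools repo could be as small as
the sketch below, just wrapping the rabbitmqctl invocations Matt lists further
down; the threshold and the status-output parsing are assumptions to be tuned
per deployment:

    #!/bin/bash
    # bounce the rabbitmq management plugin when its DB eats too much memory
    set -eu
    THRESHOLD_BYTES=${THRESHOLD_BYTES:-2000000000}

    # memory attributed to the mgmt db, e.g. a line like "{mgmt_db,7208535424},"
    used=$(rabbitmqctl status | grep mgmt_db | grep -o '[0-9]\+' | head -1)

    if [ "${used:-0}" -gt "$THRESHOLD_BYTES" ]; then
        # restarting only the management plugin rebuilds its database
        rabbitmqctl eval 'application:stop(rabbitmq_management).'
        rabbitmqctl eval 'application:start(rabbitmq_management).'
    fi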

Matt Fischer wrote:
> Yes! This happens often but I'd not call it a crash, just the mgmt db
> gets behind then eats all the memory. We've started monitoring it and
> have runbooks on how to bounce just the mgmt db. Here are my notes on that:
>
> restart rabbitmq mgmt server - this seems to clear the memory usage.
>
> rabbitmqctl eval 'application:stop(rabbitmq_management).'
> rabbitmqctl eval 'application:start(rabbitmq_management).'
>
> run GC on rabbit_mgmt_db:
> rabbitmqctl eval
> '(erlang:garbage_collect(global:whereis_name(rabbit_mgmt_db)))'
>
> status of rabbit_mgmt_db:
> rabbitmqctl eval 'sys:get_status(global:whereis_name(rabbit_mgmt_db)).'
>
> Rabbitmq mgmt DB how much memory is used:
> /usr/sbin/rabbitmqctl status | grep mgmt_db
>
> Unfortunately I didn't see anything confirming an upgrade would fix it for sure, and any
> settings changes to reduce the number of monitored events also require a
> restart of the cluster. The other issue with an upgrade for us is the
> ancient version of erlang shipped with trusty. When we upgrade to Xenial
> we'll upgrade erlang and rabbit and hope it goes away. I'll also
> probably tweak the settings on retention of events then too.
>
> Also for the record the GC doesn't seem to help at all.
>
> On Jul 5, 2016 11:05 AM, "Joshua Harlow"  > wrote:
>
> Hi ops and dev-folks,
>
> We over at godaddy (running rabbitmq with openstack) have been
> hitting an issue that has been causing the `rabbit_mgmt_db` to consume
> nearly all of the process's memory (after a given amount of time),
>
> We've been thinking that this bug (or bugs?) may have existed for a
> while and our dual-version-path (where we upgrade the control plane
> and then slowly/eventually upgrade the compute nodes to the same
> version) has somehow triggered this memory leaking bug/issue since
> it has happened most prominently on our cloud which was running
> nova-compute at kilo and the other services at liberty (thus using
> the versioned objects code path more frequently due to needing
> translations of objects).
>
> The rabbit we are running is 3.4.0 on CentOS Linux release 7.2.1511
> with kernel 3.10.0-327.4.4.el7.x86_64 (do note that upgrading to
> 3.6.2 seems to make the issue go away),
>
> # rpm -qa | grep rabbit
>
> rabbitmq-server-3.4.0-1.noarch
>
> The logs that seem relevant:
>
> ```
> **
> *** Publishers will be blocked until this alarm clears ***
> **
>
> =INFO REPORT 1-Jul-2016::16:37:46 ===
> accepting AMQP connection <0.23638.342> (127.0.0.1:51932
>  -> 127.0.0.1:5671 )
>
> =INFO REPORT 1-Jul-2016::16:37:47 ===
> vm_memory_high_watermark clear. Memory used:29910180640
> allowed:47126781542
> ```
>
> This happens quite often, the crashes have been affecting our cloud
> over the weekend (which made some dev/ops not so happy especially
> due to the july 4th mini-vacation),
>
> Looking to see if anyone else has seen anything similar?
>
> For those interested this is the upstream bug/mail that I'm also
> seeing about getting confirmation from the upstream users/devs
> (which also has erlang crash dumps attached/linked),
>
> https://groups.google.com/forum/#!topic/rabbitmq-users/FeBK7iXUcLg
>
> Thanks,
>
> -Josh
>

Re: [openstack-dev] [kolla] Stability and reliability of gate jobs

2016-07-06 Thread Steven Dake (stdake)
David,

Thanks for the feedback.  We know we have more work to do on our
integration gate.  It is a matter of finding people that have been trained
on gating development to do gate work.

Regards
-steve

On 7/4/16, 12:39 PM, "David Moreau Simard"  wrote:

>I mentioned this on IRC to some extent but I'm going to post it here
>for posterity.
>
>I think we can all agree that Integration tests are pretty darn
>important and I'm convinced I don't need to remind you why.
>I'm going to re-iterate that I am very concerned about the state of
>the jobs but also their coverage.
>
>Kolla provides an implementation for a lot of the big tents projects
>but they are not properly (if at all) tested in the gate.
>Only the core services are tested in an "all-in-one" fashion and if a
>commit happens to break a project that isn't tested in that all-in-one
>test, no one will know about it.
>
>This is very dangerous territory -- you can't guarantee that what
>Kolla supports really works on every commit.
>Both Packstack [1] and Puppet-OpenStack [2] have an extensive matrix
>of test coverage across different jobs and different operating systems
>to work around the memory constraints of the gate virtual machines.
>They test themselves with their project implementations in different
>ways (i.e, glance with file, glance with swift, cinder with lvm,
>cinder with ceph, neutron with ovs, neutron with linuxbridge, etc.)
>and do so successfully.
>
>I don't see why Kolla should be different if it is to be taken seriously.
>My apologies if it feels I am being harsh - I am being open and honest
>about Kolla's loss of credibility from my perspective.
>
>I've put my attempts to put Kolla in RDO's testing pipeline on hold
>for the Newton cycle.
>I hope we can straighten out all of this -- I care about Kolla and I
>want it to succeed, which is why I started this thread in the first
>place.
>
>While I don't really have the bandwidth to contribute to Kolla, I hope
>you can at least consider my feedback and you can also find me on IRC
>if you have questions.
>
>[1]: https://github.com/openstack/packstack#packstack-integration-tests
>[2]: https://github.com/openstack/puppet-openstack-integration#description
>
>David Moreau Simard
>Senior Software Engineer | Openstack RDO
>
>dmsimard = [irc, github, twitter]
>
>
>On Thu, Jun 16, 2016 at 8:20 AM, Steven Dake (stdake) 
>wrote:
>> David,
>>
>> The gates are unreliable for a variety of reasons - some we can fix -
>>some
>> we can't directly.
>>
>> RDO rabbitmq introduced IPv6 support to erlang, which caused our gate
>> reliability to drop dramatically.  Prior to this change, our gate was
>> running at 95% reliability or better - assuming the code wasn't busted.
>> The gate gear is different - meaning different setup.  We have been
>> working on debugging all these various gate provider issues with infra
>> team and I think that is mostly concluded.
>> The gate changed to something called bindeps which has been less
>>reliable
>> for us.
>> We do not have mirrors of CentOS repos - although it is in the works.
>> Mirrors will ensure that images always get built.  At the moment many of
>> the gate failures are triggered by build failures (the mirrors are too
>> busy).
>> We do not have mirrors of the other 5-10 repos and files we use.  This
>> causes more build failures.
>>
>> Complicating matters, any of these 5 things above can crater one gate
>>job
>> of which we run about 15 jobs, which causes the entire gate to fail (if
>> they were voting).  I really want a voting gate for kolla's jobs.  I
>>super
>> want it.  The reason we can't make the gates voting at this time is
>> because of the sheer unreliability of the gate.
>>
>> If anyone is up for a thorough analysis of *why* the gates are failing,
>> that would help us fix them.
>>
>> Regards
>> -steve
>>
>> On 6/15/16, 3:27 AM, "Paul Bourke"  wrote:
>>
>>>Hi David,
>>>
>>>I agree with this completely. Gates continue to be a problem for Kolla,
>>>reasons why have been discussed in the past but at least for me it's not
>>>clear what the key issues are.
>>>
>>>I've added this item to the agenda for today's IRC meeting (16:00 UTC -
>>>https://wiki.openstack.org/wiki/Meetings/Kolla). It may help if we can
>>>brainstorm a list of the most common problems here beforehand.
>>>
>>>To kick things off, rabbitmq seems to cause a disproportionate amount of
>>>issues, and the problems are difficult to diagnose, particularly when
>>>the only way to debug is to submit "DO NOT MERGE" patch sets over and
>>>over. Here's an example of a failed centos binary gate from a simple
>>>patch set I was reviewing this morning:
>>>http://logs.openstack.org/06/329506/1/check/gate-kolla-dsvm-deploy-cento
>>>s-
>>>binary/3486d03/console.html#_2016-06-14_15_36_19_425413
>>>
>>>Cheers,
>>>-Paul
>>>
>>>On 15/06/16 04:26, David Moreau Simard wrote:
 Hi Kolla o/

 I'm writing to you because I'm concerned.


Re: [openstack-dev] Re: [probably forged email] Re: [daisycloud-core] [kolla] Kolla Mitaka requirements supported by CentOS

2016-07-06 Thread Steven Dake (stdake)
I understand chasing global requirements is a bit painful, but they exist so 
that the entire release (Newton for example) operates correctly together.  I 
don't want to be in the business of defining custom requirement version numbers 
just because upstream is "too fast".

Regards
-steve

From: "hu.zhiji...@zte.com.cn" 
>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, July 6, 2016 at 2:56 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] 答复: [probably forge email可能是仿冒邮件]Re: [daisycloud-core] 
[kolla] Kolla Mitakarequirementssupported by CentOS

Steve,

I agree with you that Kolla can use EPEL, because Kolla is not a functional 
part of OpenStack but a deployment tool. In the same vein, does that mean Kolla 
does not need to catch up with the global requirements of OpenStack? For 
example, instead of using the oslo versions required by OpenStack, can Kolla be 
more open and stay compatible with an older oslo? This is a big deal because if 
installers such as daisycloud-core want to use Kolla as the underlying 
deployment tool, they have to keep the same requirements-update cadence as 
Kolla, but sometimes that is not necessary for OpenStack installers.




发件人: "Steven Dake (stdake)" >
收件人: "OpenStack Development Mailing List (not for usage questions)" 
>,
日期: 2016-07-06 02:03
主题:[probably forge email可能是仿冒邮件]Re: [openstack-dev] [daisycloud-core] 
[kolla] Kolla Mitakarequirementssupported by CentOS




Hu,

I am open to not using EPEL in the containers themselves.  I didn't even know 
all the dependencies were available without EPEL.  I am on vacation at present, 
but I'll try a build without EPEL and see if it actually builds correctly.  
This does raise the question of where ansible, docker, and git tools come from.  I 
am not keen to pull them from COPRs because I want them distributed via the CDN 
for HA purposes.

Are you planning to pull ansible, docker, and the git tools into delorean-deps? 
 One thing I am not keen on doing is yum install with a URL.

Also for source builds we absolutely require EPEL because we need gcc and other 
development tools.



As for the deployment host, we do require EPEL which is perfectly normal.

From: "hu.zhiji...@zte.com.cn" 
>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Monday, July 4, 2016 at 1:02 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [daisycloud-core] [kolla] Kolla Mitaka 
requirementssupported by CentOS

> As one of the RDO maintainers, I strongly urge kolla not to use EPEL.
> It has proven very hard to prevent EPEL from pushing broken updates, or to
> push updates that fit OpenStack requirements.

> Actually, all the dependency above but ansible, docker and git python
> modules are in CentOS Cloud SIG repositories.
> If you are interested to work w/ CentOS Cloud SIG, we can add missing
> dependencies in our repositories.

I added the [kolla] keyword to the mail subject. Hope we can get a response from 
the Kolla team about how to choose repos.


Thanks,
Zhijiang



From: Haïkel
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 2016-07-03 05:18
Subject: [probably forged email] Re: [openstack-dev] [daisycloud-core]
Kolla Mitaka requirements supported by CentOS




2016-07-02 20:42 GMT+02:00 jason:
> Pip Package Name   Supported By CentOS   CentOS Name                  Repo Name
> ================================================================================
> ansible            yes                   ansible1.9.noarch            epel
> docker-py          yes                   python-docker-py.noarch      extras
> gitdb              yes                   python-gitdb.x86_64          epel
> GitPython          yes                   GitPython.noarch             epel
> oslo.config        yes                   python2-oslo-config.noarch   centos-openstack-mitaka
> pbr                yes                   python-pbr.noarch            epel
> setuptools         yes                   python-setuptools.noarch     base
> six

[openstack-dev] [horizon] Next week's meeting cancelled (2016-07-13)

2016-07-06 Thread Rob Cresswell
Next week's meeting is cancelled due to the midcycle. Meetings will resume as 
normal on the 20th of July.

Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Charm][Congress] Upstreaming JuJu Charm for Openstack Congress

2016-07-06 Thread Bryan Sullivan
 Hi James, responses inline.
Sorry for the weird formatting of this - I found that I was not subscribed to 
openstack-dev for some reason (probably inbox overload at some point). That's 
fixed now.

Thanks,
Bryan Sullivan 
 
James Page james.page at ubuntu.com Wed Jul 6 09:12:08 UTC 2016
Hi Bryan

On Tue, 5 Jul 2016 at 21:21 SULLIVAN, BRYAN L  wrote:

> I've been working in the OPNFV (https://wiki.opnfv.org/) on a JuJu Charm
> to install the OpenStack Congress service on the OPNFV reference platform.
> This charm should be useful for anyone that wants to install Congress for
> use with an OpenStack deployment, using the JuJu tool. I want to get the
> charm upstreamed as an official openstack.org git repo similar to the
> other repos for JuJu Charms for OpenStack services. I participate in the
> Congress team in OpenStack, who don't know the process for getting this
> charm upstreamed into an OpenStack repo, so I am reaching out to anyone on
> this list for help.
>

I can help you with that as I did most of the project-config changes for
the original move of OpenStack charms to git/gerrit a few months back.

[bryan] Thanks, your help will be appreciated!


> I have been working with gnuoy on #juju and narindergupta on #opnfv-joid
> on this. The charm has been tested and used to successfully install
> Congress on the OPNFV Colorado release (Mitaka-based, re OpenStack). While
> there are some features we need to continue to work on (e.g. Horizon
> integration), the charm is largely complete and stable. We are currently
> integrating it into the OPNFV CI/CD system through which it will be
> regularly/automatically tested in any OPNFV JuJu-based deployment (with
> this charm, Congress will become a default service deployed in OPNFV thru
> JuJu).
>

That all sounds awesome; it would also be great to get that system doing
testing of reviews and providing feedback on proposed changes; we can cover
some of that using Canonical CI (you'll notice it feeding back on reviews
on the charms already being developed under git/gerrit), but having other
3rd party CI systems provide feedback would also be great!

[bryan] For now I don't expect OPNFV CI to be acting as an OpenStack 3rd party
CI system (though that's being discussed for some cases, e.g. Nova-EPA
features). The CI/CD will, though, verify that the charm as published in the
Charm Store (when it is) deploys Congress successfully on the OPNFV system.

Any pointers on how to get started toward creating an OpenStack git repo
> for this are appreciated.
>

https://review.openstack.org/#/c/323716/ is probably a good start; this was
for the hacluster charm which got missed during the original migration.

[bryan] Thanks, that looks like a good template for what's needed to add it.

You can add an additional field to the gerrit/projects.yaml entry to detail
the repository to do a one-time import from:

 upstream: https://github.com/gnuoy/charm-congress


Let's make sure we get any outstanding changes merged into that repository
first, or we can use your git repo for the one-time import if that's easier
for now.

[bryan] I'll make sure with Narinder etc. that the key outstanding changes are
merged first.
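
For anyone else following along, the gerrit/projects.yaml entry being discussed
would look roughly like the sketch below (field values are illustrative and the
acl path is an assumption; the hacluster review above is the authoritative
template):

    - project: openstack/charm-congress
      description: Juju Charm - Congress
      upstream: https://github.com/gnuoy/charm-congress
      acl-config: /home/gerrit2/acls/openstack/charm-congress.config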

I'd really love to have congress as part of the OpenStack Charms project
(we're currently working towards official project status at the moment) but
I'd also like to ensure that you're engaged directly with reviews and
changes to the congress charm specifically, so I think it's worth having a
slightly different ACL configuration that includes the core charm acl config,
but also adds an additional charms-congress-core group with permissions to
Code-Review: +2 and Workflow: +1 for the congress charm repository.

Does that make sense? You can reach out to me in #openstack-charms to
discuss in more detail if you like.

[bryan] I will. I'm new to the OpenStack review system; though we use Gerrit in
OPNFV, I need to get familiar with the workflow as it is typically used in
OpenStack. Just to level-set, I'm not really a Charm developer, though I have
found that Canonical is very good at helping me work through the issues, so I
can reasonably expect to act as a maintainer/reviewer. This will help me get
that direct experience. In the process I hope to pick up enough of the Charm
design approach to be able to maintain the charm directly (currently it's a
bit magical to me, having been a linear/imperative programmer in the past).

Cheers

James
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nfv][tacker] Virtual Midcycle Meetup - Mark your calendars

2016-07-06 Thread Sridhar Ramaswamy
As finalized in the recent weekly meeting [1], the Tacker Virtual Midcycle
Meetup is planned for July 27th & 28th. The meetup is spread over two days,
5 hours each day, with a time slot that tries to spread the pain evenly :)

Here is the event etherpad,

https://etherpad.openstack.org/p/tacker-newton-midcycle

Please RSVP in the pad and add your topics of interest.

thanks,
Sridhar

[1]
http://eavesdrop.openstack.org/meetings/tacker/2016/tacker.2016-07-05-16.00.log.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Non-priority feature freeze and FFEs

2016-07-06 Thread Matt Riedemann

On 7/5/2016 2:14 AM, tie...@vn.fujitsu.com wrote:

Hi folks,

I want to give more information about our nova patch for bp 
ironic-serial-console-support. The whole feature requires work in both Nova 
and Ironic. The nova bp [1] has been approved, and the Ironic spec [2] has been 
merged.

This nova patch [3] is simple; we got some reviews from Nova and Ironic core 
reviewers. The patches it depends on in Ironic are [4][5], of which [4] will 
get merged soon and [5] is still in review.

Hope Nova core team considers adding this case to the exception list.

[1] https://blueprints.launchpad.net/nova/+spec/ironic-serial-console-support  
(Nova bp, approved by dansmith)
[2] https://review.openstack.org/#/c/319505/  (Ironic spec, merged)

[3] https://review.openstack.org/#/c/328157/  (Nova patch, in review)
[4] https://review.openstack.org/#/c/328168/  (Ironic patch 1st, got two +2, 
will get merged soon)
[5] https://review.openstack.org/#/c/293873/  (Ironic patch 2nd, in review)

Thanks and Regards
Dao Cong Tien




When I looked last week the nova change was dependent on multiple ironic 
patches which weren't merged yet, so it wasn't ready to go for the 
non-priority feature freeze. The ironic changes weren't all merged 
either when we were going over FFE candidates. So this is going to have 
to wait for Ocata.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Higgins][Zun] Team meeting next week

2016-07-06 Thread Hongbin Lu
Hi all,

FYI, I won't be able to chair the next team meeting because I will be on a 
flight at that time. Madhuri will chair the next meeting on my behalf. Thanks.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Non-priority FFEs granted...and more!

2016-07-06 Thread Matt Riedemann

We've gone through the list of non-priority FFE candidates.

For those that got an FFE you have a week to get your change merged, so 
EOD for dansmith (Pacific Time) on Wednesday 7/13.


This is the list that the core team should be focusing on for the next 
week (in no particular order):


* https://blueprints.launchpad.net/nova/+spec/keypairs-pagination

* https://blueprints.launchpad.net/nova/+spec/refresh-quotas-usage

* 
https://blueprints.launchpad.net/nova/+spec/pci-passthrough-whitelist-regex


* 
https://blueprints.launchpad.net/nova/+spec/async-live-migration-rest-check


* https://blueprints.launchpad.net/nova/+spec/vendordata-reboot (note: 
this might extend beyond 7/13 for sdague to review once he's back from 
vacation and discuss with mikal at the midcycle)


* https://blueprints.launchpad.net/nova/+spec/neutron-routed-networks - 
really just the deferred IP allocation part of this, but that might also 
be held up until the midcycle (which carl_baldwin and armax are attending).


* 
https://blueprints.launchpad.net/nova/+spec/hyper-v-block-device-mapping-support 
- just the bottom change: https://review.openstack.org/#/c/246299/


* https://blueprints.launchpad.net/nova/+spec/hyper-v-uefi-secureboot

* https://blueprints.launchpad.net/nova/+spec/virt-device-role-tagging - 
this is really just for hyper-v https://review.openstack.org/#/c/331889/


The "and more" part is that we also talked about what to do about some 
of the long-running mass cleanup blueprints.


1. 
https://blueprints.launchpad.net/nova/+spec/centralize-config-options-newton


This is (mostly) a documentation effort, so this can continue.

2. https://blueprints.launchpad.net/nova/+spec/nova-python3-newton

The deadline for this is 7/28. Anything left after that will have to 
happen in Ocata.


3. https://blueprints.launchpad.net/nova/+spec/remove-mox-newton

Same as #2 for python 3.

4. 
https://blueprints.launchpad.net/nova/+spec/versioned-notification-transformation-newton


Anything that has a +2 on it already can get in, but anything that's not 
ready as of today needs to wait for Ocata.


5. https://blueprints.launchpad.net/nova/+spec/rm-object-dict-compat-newton

This is now closed for Newton. We'll pick this up again in Ocata.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [app-catalog] App Catalog IRC meeting Thursday July 7th

2016-07-06 Thread Christopher Aedo
Join us Thursday for our weekly meeting, scheduled for July 7th at
17:00UTC in #openstack-meeting-3

The agenda can be found here, and please add to it if you want to discuss
something with the Community App Catalog team:
https://wiki.openstack.org/wiki/Meetings/app-catalog

Tomorrow we will be talking more about our plan to implement GLARE as a
back-end for the Community App Catalog, and what we'll need to merge
in the next few weeks to make this a reality.

Hope to see you there tomorrow!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] trove weekly meeting notes

2016-07-06 Thread Amrith Kumar
Notes from today's trove meeting are at [1].
Action items

  1.  [done] vgnbkr to figure some way of updating the spec to get the 
check/gate jobs to pass; maybe simplify the script
  2.  [all, esp amrith] watch https://www.youtube.com/watch?v=-5wpm-gesOY
  3.  [amrith] review logstash to see how the redis scenario tests perform
  4.  [amrith] revisit https://review.openstack.org/#/c/237710/ and figure out 
some way to get the code merged even if it is disabled but available for 
debugging?
  5.  [amrith] review what kind(s) of machines we get in the gate and see if we 
can get some more beefy horses ...
Also please update the mid-cycle etherpad with topics.


Thanks,



-amrith



[1] 
http://eavesdrop.openstack.org/meetings/trove/2016/trove.2016-07-06-18.00.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 35 jobs currently broken (was Re: New Python35 Jobs coming)

2016-07-06 Thread Andreas Jaeger
On 07/06/2016 08:27 AM, Andreas Jaeger wrote:
> On 2016-07-05 21:38, Andreas Jaeger wrote:
>> On 07/05/2016 02:52 PM, Andreas Jaeger wrote:
>>> [...]
>>> The change has merged and the python35 non-voting tests are run as part
>>> of new changes now.
>>>
>>> The database setup for the python35-db variant is not working yet, and
>>> needs adjustment on the infra side.
>>
>>
>> The python35-db jobs are working now as well.
> 
> Note that currently *all* python35 and python35-db jobs will fail, we
> need to build new xenial images first. This might take until tomorrow
> this time.
> 
> Error message is
> "Package php5-cli is not available, but is referred to by another package."

New images for Xenial have been built and uploaded, and the first
tests are passing. You should now be able to check python35 jobs for
anything that's starting now.

Note: If it fails with the above error, you might have had an older
image and need to recheck.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] OpenStack-Ansible and Open vSwitch

2016-07-06 Thread Major Hayden

On 07/06/2016 06:52 AM, Truman, Travis wrote:
> Please find the post here:
> http://trumant.github.io/openstack-ansible-openvswitch.html
> 
> I hope others find this useful and that it may serve as a good reference
> point when the community begins building scenario-based documentation.

Thanks for writing this, Travis!  It's really easy to follow along and I plan 
to give this a run-through in the lab in the next week or two. :)

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Juju Charm Usability/Functionality Enhancements

2016-07-06 Thread James Beedy
Hello Team,

I've a few things I want to bring up concerning usability/functionality in
nova-cloud-controller and openstack-dashboard.

1. Request-timeout configurability for openstack-dashboard.
- Everyone who accesses horizon asks me for this. I think it would be
smart to set the timeout to the keystone token session timeout value.
- I've filed a feature request/bug, and proposed a merge for this in
the past, to no avail. This is a must for usability, and so simple to add.
What gives?

2. 'cross_az_attach=True'
- Without "cross_az_attach=True" under the cinder context in nova.conf,
live migration across availability_zones, alongside a handful of other
critical ops fail.
- This is a critical config param, the absence of which has blocked me
on multiple levels for a long time now.
- I've previously filed a bug/feature request for this, and also have
proposed a merge that adds this functionality.
- This is critical. ATTN HERE ASAP, PLEASE!
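
For concreteness, the two asks above boil down to configuration along these
lines (a sketch of the rendered config the charms would need to expose; the
values shown are assumptions, not recommendations):

    # horizon local_settings.py - align the dashboard timeout with the
    # keystone token lifetime (keystone's [token] expiration, 3600s by default)
    SESSION_TIMEOUT = 3600

    # nova.conf - allow volume attach / live migration across availability zones
    [cinder]
    cross_az_attach = True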


I would greatly appreciate some <3 in these areas, or a logical reason why
not to have these features.

Thanks!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [searchlight] [ceilometer] [fuel] [freezer] [monasca] Elasticsearch 2.x gate support

2016-07-06 Thread Aulino, Rick
Folks,

A follow-up to Steve's earlier email.

Elasticsearch recommends that the major version numbers match between 
Elasticsearch and the Elasticsearch python client [1]. To accommodate this, we 
have submitted a requirements patch to update the Elasticsearch client to 2.3.X.

If this is a concern, please comment on the patch:

https://review.openstack.org/#/c/338425

[1] https://elasticsearch-py.readthedocs.io/en/master
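
For anyone skimming, the change essentially boils down to a one-line bump of
the client pin in the requirements files, along the lines of the sketch below
(the exact bounds are whatever the review settles on):

    elasticsearch>=2.3.0,<3.0.0  # was capped at <2.0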

> I'm looking into supporting and testing Elasticsearch 2.x in Searchlight's 
> test jobs. Currently I don't know of any other projects that run tests 
> against Elasticsearch in the gate (since we had to add it to Jenkins [1]). 
> Several projects install the python elasticsearch client in requirements.txt, 
> and it is currently capped to <2.0 in global-requirements [2], and others 
> consume it directly through HTTP requests. Searchlight needs to move to 
> support Elasticsearch 2.x in Newton but we are aware that doing so will 
> affect other projects.
>
> Elasticsearch 2.x is backwards incompatible [3] with 1.x in some ways. The 
> python client library is similarly backwards-incompatible; it is strongly 
> recommended the client major version matches the server major version. In 
> testing Searchlight we found only a couple of fairly minor changes were 
> needed (and the 1.x client library seems to continue to work against a 2.x 
> server) , but YMMV. Devstack's default ES version is 1.4.2 [4] (which should 
> be changed to 1.7 in any case) and we obviously cannot change that until all 
> projects support 2.x.
>
> A wholesale change to move to Elasticsearch 2.x would require changing 
> global-requirements, but this may obviously break projects not ready for the 
> change. My questions for the projects affected are:
>
> * Have you tested with ES 2.x at all?
> * Do you have plans to move to ES 2.x?
>
> Our likely fallback is testing with the 1.x client until we can move devstack 
> and global-requirements to 2.x; if we discover issues in the meantime we will 
> include a deployer note that the python library needs to be updated if 
> Elasticsearch 2.x is in use.
>
> Thanks,
>
> Steve

--
Rick Aulino

HPCS R  
rick.aul...@hpe.com
Hewlett Packard Enterprise 970-898-0575

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [puppet] [desginate] An update on the state of puppet-designate (and designate in RDO)

2016-07-06 Thread David Moreau Simard
I drafted some tentative release notes that summarizes the work that
has been done so far [1].

I asked input from #openstack-dns but would love if users could chime
in on a deprecation currently in review [2].

This change also makes it so designate will stop maintaining a
directory in /var/lib/designate/bind9.
This directory was introduced in puppet-designate in 2013 and
doesn't seem relevant anymore according to upstream and designate
documentation.

[1]: 
http://docs-draft.openstack.org/04/338404/1/check/gate-puppet-designate-releasenotes/273e921//releasenotes/build/html/unreleased.html#id1
[2]: https://review.openstack.org/#/c/337951/

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]


On Wed, Jul 6, 2016 at 11:51 AM, David Moreau Simard  wrote:
> Thanks Matt, if you don't mind I might add you to some puppet reviews.
>
> David Moreau Simard
> Senior Software Engineer | Openstack RDO
>
> dmsimard = [irc, github, twitter]
>
>
> On Tue, Jul 5, 2016 at 10:22 PM, Matt Fischer  wrote:
>> We're using Designate but still on Juno. We're running puppet from around
>> then, summer of 2015. We'll likely try to upgrade to Mitaka at some point
>> but Juno Designate "just works" so it's been low priority. Look forward to
>> your efforts here.
>>
>> On Tue, Jul 5, 2016 at 7:47 PM, David Moreau Simard  wrote:
>>>
>>> Hi !
>>>
>>> tl;dr
>>> puppet-designate is going under some significant updates to bring it
>>> up to par right now.
>>> While I will try to ensure it is well tested and backwards compatible,
>>> things *could* break. Would like feedback.
>>>
>>> I cc'd -operators because I'm interested in knowing if there are any
>>> users of puppet-designate right now: which distro and release of
>>> OpenStack?
>>>
>>> I'm a RDO maintainer and I took interest in puppet-designate because
>>> we did not have any proper test coverage for designate in RDO
>>> packaging until now.
>>>
>>> The RDO community mostly relies on collaboration with installation and
>>> deployment projects such as Puppet OpenStack to test our packaging.
>>> We can, in turn, provide some level of guarantee that packages built
>>> out of trunk branches (and eventually stable releases) should work.
>>> The idea is to make puppet-designate work with RDO, then integrate it
>>> in the puppet-openstack-integration CI scenarios and we can leverage
>>> that in RDO CI afterwards.
>>>
>>> Both puppet-designate and designate RDO packaging were unfortunately
>>> in quite a sad state after not being maintained very well and a lot of
>>> work was required to even get basic tests to pass.
>>> The good news is that it didn't work with RDO before and now it does,
>>> for newton.
>>> Testing coverage has been improved and will be improved even further
>>> for both RDO and Ubuntu Cloud Archive.
>>>
>>> If you'd like to follow the progress of the work, the reviews are
>>> tagged with the topic "designate-with-rdo" [1].
>>>
>>> Let me know if you have any questions !
>>>
>>> [1]: https://review.openstack.org/#/q/topic:designate-with-rdo
>>>
>>> David Moreau Simard
>>> Senior Software Engineer | Openstack RDO
>>>
>>> dmsimard = [irc, github, twitter]
>>>
>>> ___
>>> OpenStack-operators mailing list
>>> openstack-operat...@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] upgrades vs rootwrap

2016-07-06 Thread Matthew Treinish
On Wed, Jul 06, 2016 at 11:41:56AM -0500, Matt Riedemann wrote:
> On 7/6/2016 10:55 AM, Matthew Treinish wrote:
> > 
> > Well, for better or worse rootwrap filters are put in /etc and treated like 
> > a
> > config file. What you're essentially saying is that it shouldn't be config 
> > and
> > just be in code. I completely agree with that being what we want 
> > eventually, but
> > it's not how we advertise it today. Privsep sounds like it's our way of 
> > making
> > this migration. But, it doesn't change the status quo where it's this hybrid
> > config/code thing today, like policy was in nova before:
> > 
> > http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/policy-in-code.html
> > 
> > (which has come up before as another tension point in the past during 
> > upgrades)
> > I don't think we should break what we're currently enforcing today because 
> > we
> > don't like the model we've built. We need to handle the migration to the new
> > better thing gracefully so we don't break people who are relying on our 
> > current
> > guarantees, regardless of how bad they are.
> > 
> > -Matt Treinish
> > 
> > 
> 
> I just wonder how many deployments are actually relying on this, since as
> noted elsewhere in this thread we don't really enforce this for all things,
> only what happens to get tested in our CI system, e.g. the virtuozzo
> rootwrap filters that don't have grenade testing.

Sure, our testing coverage here is far from perfect, that's never been in
dispute. It's always been best effort (and there has been limited effort in this
space); for example, I'm not aware of anything doing any upgrade testing with
virtuozzo enabled, or any of the other random ephemeral storage backends,
**cough** ceph **cough**.  But, as I said before, just because we don't catch all
the issues isn't a reason to throw everything out the window.

> 
> Which is also why I'd like to get some operator perspective on this.
> 

I think what we'll find is the people who rely on this don't even realize it.
(which is kinda the point) I expect the people on the ops list are knowledgeable
enough and have enough experience to figure this kind of issue out and just
expect it during the course of an upgrade. This is more likely a trap for young
players who haven't even thought about this as being a potential issue before.
I don't think there is any disagreement we should move to something better in
this space. But, this is something we've said we would guarantee and I don't
think we should break that in the process of moving to the new better thing.

-Matt Treinish


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [designate] Doc Smash - 13th July 2016 @ 14:30 UTC

2016-07-06 Thread Hayes, Graham
Hi All,

As we have seen over the last cycle or so, our docs need a bit of a refresh.

We decided at the last IRC meeting that we should have a Doc Smash on 
the 13th July 2016 @ 14:30 UTC.

The placeholder bug is here - 
https://bugs.launchpad.net/designate/+bug/1590937 and I will break it 
out into smaller bugs for the day itself.

So - if you are interested in helping us give the docs a bit of TLC,
please pop along to #openstack-dns on 13th July 2016 @ 14:30 UTC.

Thanks,

Graham

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] existing centralized syslog server

2016-07-06 Thread Michael Gugino
Hello Fabrice,

  You should also set the rsyslog affinity to 0 for the infra_hosts.  I’m not 
sure if the affinity is necessary for the log hosts, but I don’t think it will 
hurt.  I recommend trying a few different ways and verifying the corresponding 
openstack_inventory.json file to ensure it meets your requirements.
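
In openstack_user_config.yml terms that would look something like the sketch
below (the host name and IP are placeholders for your own infra_hosts entries):

    infra_hosts:
      infra1:
        affinity:
          rsyslog_container: 0
        ip: 172.29.236.11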


Michael Gugino
Cloud Powered
(540) 846-0304 Mobile

Walmart ✻
Saving people money so they can live better.


From: fabrice grelaud
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, July 6, 2016 at 12:15 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [openstack-ansible] existing centralized syslog 
server

Hi Michael,

On 6 Jul 2016, at 17:07, Michael Gugino wrote:

Hello Fabrice,

 I think the easiest way would be to set the affinity for each
controller/infrastructure host using 'rsyslog_container: 0' as seen in the
following section of the install guide:
http://docs.openstack.org/developer/openstack-ansible/install-guide-revised
-draft/configure-initial.html#affinity

ok, i look at this.

Actually (first deploy with a dedicated log server), I have:
./scripts/inventory-manage.py -G
rsyslog_container | log1_rsyslog_container-81441bbb

So, from what you write, to avoid creating a log container on the log server, 
I can modify my openstack_user_config.yml to be:

log_hosts:
  log1:
affinity:
  rsyslog_container: 0
ip: 172.29.236.240

Is that right?

 Next, you should add your actual logging hosts to your
openstack_user_config as seen here:
https://github.com/openstack/openstack-ansible/blob/master/etc/openstack_de
ploy/openstack_user_config.yml.aio#L122-L124

 Be sure to comment out the rsyslog-install.yml line of
setup-infrastructure.yml, and be sure that you make any necessary
modifications to the openstack-ansible-rsyslog_client role.  Modifications
may not be necessary, depending on your needs, and you may be able to
specify certain variables in user_variables.yml to achieve the desired
results.

In openstack-ansible-rsyslog_client, the template 99-rsyslog.conf.j2 uses:
*.* @{{ hostvars[server]['ansible_ssh_host'] }}:{{ rsyslog_client_udp_port 
}};RFC3164fmt

I will test to ensure the IP is the one from log1.
If so, no further modification is needed.
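
If it helps, with log1 as the only rsyslog target, the rendered
/etc/rsyslog.d/99-rsyslog.conf on each client should end up as a single line
along these lines (assuming the role's default UDP port of 514):

    *.* @172.29.236.240:514;RFC3164fmt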

Thanks again.
Regards

 As always, make sure you test these modifications in a non-production
environment to ensure you achieve the desired results.


Michael Gugino
Cloud Powered
(540) 846-0304 Mobile

Walmart ✻
Saving people money so they can live better.





On 7/6/16, 10:24 AM, "fabrice grelaud" wrote:

Hi,

I would like to know the best approach to customizing our openstack-ansible
deployment if we want to use our existing ELK and centralized rsyslog server
setup.

We deployed openstack-ansible (Mitaka 13.1.2 release) on our infrastructure
with, for convenience and to avoid risk, a VM for the syslog server role. That
is OK. But what if I now want to use our centralized syslog server?

What I need is to point the rsyslog clients (containers + metal) at the IP
address of our existing server and, of course, configure our rsyslog.conf to
handle the OpenStack template.
So:
- no need to create an LXC container on the log server (setup-hosts.yml:
lxc-hosts-setup, lxc-containers-create)
- no need to install a syslog server (setup-infrastructure.yml:
rsyslog-install.yml)

How can I modify my openstack-ansible environment (/etc/openstack_deploy,
env.d, conf.d, openstack_user_config.yml, user_variables.yml, playbooks?) in
the most transparent manner, while still allowing minor release updates to be
done simply?

Thanks.

Fabrice Grelaud
Université de Bordeaux




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


This email and any files transmitted with it are confidential and intended 
solely for the individual or entity to whom they are addressed. If you have 
received this email in error destroy it immediately. *** Walmart Confidential 
***
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [Gluon] Project Creation and Weekly Team Meeting starts on 07/13/2016 at 1800UTC Wednesdays

2016-07-06 Thread HU, BIN
Hello team,

I am glad to announce that we have created a new, although still unofficial, 
project: Gluon. Congratulations.

Our team wiki is [1], and our general meeting page is [2], which is intended to 
archive the history of all meetings. Because the project was just created, both [3] 
and [4] are still under construction. More information will come soon.

I am also pleased to announce that we will start our weekly team meeting on 
07/13/2016, at 1800 UTC on Wednesdays.

Please see [3] for details of meeting logistics, and [4] for tentative agenda. 
Feel free to propose other items for agenda.

Thanks
Bin

[1] https://wiki.openstack.org/wiki/Gluon
[2] https://wiki.openstack.org/wiki/Meetings/Gluon
[3] http://eavesdrop.openstack.org/#Gluon_Meeting
[4] https://wiki.openstack.org/wiki/Meetings/Gluon/Agenda-20160713

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] upgrades vs rootwrap

2016-07-06 Thread Matthew Treinish
On Wed, Jul 06, 2016 at 06:20:30PM +0200, Thierry Carrez wrote:
> Matthew Treinish wrote:
> > > [...]
> > > Am I missing something else here?
> > 
> > Well, for better or worse rootwrap filters are put in /etc and treated like 
> > a
> > config file. What you're essentially saying is that it shouldn't be config 
> > and
> > just be in code. I completely agree with that being what we want 
> > eventually, but
> > it's not how we advertise it today.
> 
> Well, some (most ?) distros ship them as code rather than configuration,
> under /usr/share rather than under /etc. So one may argue that the issue is
> that devstack is installing them under /etc :)
> 

Devstack doesn't do anything special here, it just uses the project defaults.
In most cases that's what devstack strives to do wherever possible. Your
issue is with nova and pretty much everything using rootwrap then. The fact that
most distros do this is just further indication that how we have things set up
today is the wrong way to handle this.
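
To make the distinction concrete: devstack copies the filters into
/etc/nova/rootwrap.d/, while distro packages typically ship them under
/usr/share/nova/rootwrap/. A quick way to see which one a given node is
actually using (illustrative only, paths vary by distro):

    grep filters_path /etc/nova/rootwrap.conf
    ls /etc/nova/rootwrap.d/ /usr/share/nova/rootwrap/ 2>/dev/null

Either way, today we effectively treat that directory as operator-owned config.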

-Matt Treinish


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Leadership training recap & steps forward

2016-07-06 Thread Amrith Kumar
A huge "THANK YOU" to Colette for all the time and effort that went into making 
this a reality. What she doesn't mention is that this training (which lasted 
just a couple of days) was a year in the making!

In addition to the great sandwiches and BBQ (and beer), I think it was great to 
have so many of us in a room at once. There is something about meeting in 
person, breaking bread (and jokes) that is totally lost in a virtual 
environment.

I think the training and the discussions were well worth it and I encourage 
anyone who is serious about leadership in OpenStack to consider attending the 
next session. And while you are there, make it a point to go to Maker Works and 
meet Tom Root. It is truly inspiring to see what he's doing there.

-amrith

> -Original Message-
> From: Colette Alexander [mailto:colettealexan...@gmail.com]
> Sent: Wednesday, July 06, 2016 12:05 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: [openstack-dev] [tc] Leadership training recap & steps forward
> 
> Hello Stackers!
> 
> I wanted to send an update about what happened last week at the
> leadership training session in Ann Arbor, Michigan and some of the
> things moving out of the training that we're hoping help improve the
> community as a whole.
> 
> 17 people from the community (including 8 from the TC) and 2 trainers
> from ZingTrain spent 2 days covering a range of materials and
> subjects: servant leadership! Visioning! Stages of learning! Good
> practices for leading organizational change! Also there were delicious
> sandwiches and BBQ.
> 
> Reviews and reflections from the training have been overwhelmingly
> positive, and some blog posts about it are starting to pop up.[0]
> 
> On the day after training, a slightly smaller group than the full 17
> met to discuss how some of the ideas presented might help the
> OpenStack community, and identified some areas of work that are now
> being initiated by the TC. Some of the work identified involves the TC
> examining and where necessary, altering the ways it works, and some of
> it involves a general support for and development of leadership in all
> areas of the OpenStack community. To more clearly define and
> accomplish that work, a Stewardship Working Group has been
> proposed.[1]
> 
> Because of the success of the training, and the fact that 5 TC members
> weren't able to attend this past instance, I'm working to arrange a
> repeated offering. It's still very much in the planning stages, but
> I'm hoping it will be in September, and follow the same 2-day
> training/1 day reflection/planning structure as before. The training
> will be enrolled similarly to the last one - with TC and Board members
> having first crack at the sign up list, and then opening remaining
> seats to the rest of the community. I will post to the -dev list as
> dates and details are finalized for that.
> 
> Ideally there will be some discussion and work on this (including a
> potential panel discussion) at the Ocata Summit in Barcelona as well -
> stay tuned for information on that as it becomes available!
> 
> A huge thanks to all who attended, and to the Foundation who sponsored
> the training for everyone - I'm incredibly excited to work on all of
> this in such a great environment, with such great people.
> 
> Sincerely,
> 
> -colette/gothicmindfood
> 
> [0] http://www.tesora.com/openstack-tc-will-not-opening-deli/
> [1] https://review.openstack.org/#/c/337895/
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] upgrades vs rootwrap

2016-07-06 Thread Clint Byrum
Excerpts from Matthew Treinish's message of 2016-07-06 11:55:53 -0400:
> On Wed, Jul 06, 2016 at 10:34:49AM -0500, Matt Riedemann wrote:
> > On 6/27/2016 6:24 AM, Sean Dague wrote:
> > > On 06/26/2016 10:02 PM, Angus Lees wrote:
> > > > On Fri, 24 Jun 2016 at 20:48 Sean Dague  > > > > wrote:
> > > > 
> > > > On 06/24/2016 05:12 AM, Thierry Carrez wrote:
> > > > > I'm adding Possibility (0): change Grenade so that rootwrap
> > > > filters from
> > > > > N+1 are put in place before you upgrade.
> > > > 
> > > > If you do that as general course what you are saying is that every
> > > > installer and install process includes overwriting all of rootwrap
> > > > before every upgrade. Keep in mind we do upstream upgrade as 
> > > > offline,
> > > > which means that we've fully shut down the cloud. This would remove 
> > > > the
> > > > testing requirement that rootwrap configs were even compatible 
> > > > between N
> > > > and N+1. And you think this is theoretical, you should see the 
> > > > patches
> > > > I've gotten over the years to grenade because people didn't see an 
> > > > issue
> > > > with that at all. :)
> > > > 
> > > > I do get that people don't like the constraints we've self imposed, 
> > > > but
> > > > we've done that for very good reasons. The #1 complaint from 
> > > > operators,
> > > > for ever, has been the pain and danger of upgrading. That's why we 
> > > > are
> > > > still trademarking new Juno clouds. When you upgrade Apache, you 
> > > > don't
> > > > have to change your config files.
> > > > 
> > > > 
> > > > In case it got lost, I'm 100% on board with making upgrades safe and
> > > > straightforward, and I understand that grenade is merely a tool to help
> > > > us test ourselves against our process and not an enemy to be worked
> > > > around.  I'm an ops guy proud and true and hate you all for making
> > > > openstack hard to upgrade in the first place :P
> > > > 
> > > > Rootwrap configs need to be updated in line with new rootwrap-using code
> > > > - that's just the way the rootwrap security mechanism works, since the
> > > > security "trust" flows from the root-installed rootwrap config files.
> > > > 
> > > > I would like to clarify what our self-imposed upgrade rules are so that
> > > > I can design code within those constraints, and no-one is answering my
> > > > question so I'm just getting more confused as this thread progresses...
> > > > 
> > > > ***
> > > > What are we trying to impose on ourselves for upgrades for the present
> > > > and near future (ie: while rootwrap is still a thing)?
> > > > ***
> > > > 
> > > > A. Sean says above that we do "offline" upgrades, by which I _think_ he
> > > > means a host-by-host (or even global?) "turn everything (on the same
> > > > host/container) off, upgrade all files on disk for that host/container,
> > > > turn it all back on again".  If this is the model, then we can trivially
> > > > update rootwrap files during the "upgrade" step, and I don't see any
> > > > reason why we need to discuss anything further - except how we implement
> > > > this in grenade.
> > > > 
> > > > B. We need to support a mix of old + new code running on the same
> > > > host/container, running against the same config files (presumably
> > > > because we're updating service-by-service, or want to minimise the
> > > > service-unavailability during upgrades to literally just a process
> > > > restart).  So we need to think about how and when we stage config vs
> > > > code updates, and make sure that any overlap is appropriately allowed
> > > > for (expand-contract, etc).
> > > > 
> > > > C. We would like to just never upgrade rootwrap (or other config) files
> > > > ever again (implying a freeze in as_root command lines, effective ~a
> > > > year ago).  Any config update is an exception dealt with through
> > > > case-by-case process and release notes.
> > > > 
> > > > 
> > > > I feel like the grenade check currently implements (B) with a 6 month
> > > > lead time on config changes, but the "theory of upgrade" doc and our
> > > > verbal policy might actually be (C) (see this thread, eg), and Sean
> > > > above introduced the phrase "offline" which threw me completely into
> > > > thinking maybe we're aiming for (A).  You can see why I'm looking for
> > > > clarification  ;)
> > > 
> > > Ok, there is theory of what we are striving for, and there is what is
> > > viable to test consistently.
> > > 
> > > The thing we are shooting for is making the code Continuously
> > > Deployable. Which means the upgrade process should be "pip install -U
> > > $foo && $foo-manage db-sync" on the API surfaces and "pip install -U
> > > $foo; service restart" on everything else.
> > > 
> > > Logic we can put into the python install process is common logic shared
> > > by all deployment tools, and we can encode it in there. So all
> > > installers just get it.
> 

Re: [openstack-dev] [grenade] upgrades vs rootwrap

2016-07-06 Thread Matt Riedemann

On 7/6/2016 10:55 AM, Matthew Treinish wrote:


Well, for better or worse rootwrap filters are put in /etc and treated like a
config file. What you're essentially saying is that it shouldn't be config and
just be in code. I completely agree with that being what we want eventually, but
it's not how we advertise it today. Privsep sounds like it's our way of making
this migration. But, it doesn't change the status quo where it's this hybrid
config/code thing today, like policy was in nova before:

http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/policy-in-code.html

(which has come up before as another tension point in the past during upgrades)
I don't think we should break what we're currently enforcing today because we
don't like the model we've built. We need to handle the migration to the new
better thing gracefully so we don't break people who are relying on our current
guarantees, regardless of how bad they are.

-Matt Treinish




I just wonder how many deployments are actually relying on this, since 
as noted elsewhere in this thread we don't really enforce this for all 
things, only what happens to get tested in our CI system, e.g. the 
virtuozzo rootwrap filters that don't have grenade testing.


Which is also why I'd like to get some operator perspective on this.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] TripleO deep dive hour?

2016-07-06 Thread James Slagle
I've updated the etherpad with details about the deep dive tomorrow:
https://etherpad.openstack.org/p/tripleo-deep-dive-topics

Talk to everyone at 1400 UTC tomorrow.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] upgrades vs rootwrap

2016-07-06 Thread Thierry Carrez

Matthew Treinish wrote:

[...]
Am I missing something else here?


Well, for better or worse rootwrap filters are put in /etc and treated like a
config file. What you're essentially saying is that it shouldn't be config and
just be in code. I completely agree with that being what we want eventually, but
it's not how we advertise it today.


Well, some (most ?) distros ship them as code rather than configuration, 
under /usr/share rather than under /etc. So one may argue that the issue 
is that devstack is installing them under /etc :)


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Non-priority feature freeze and FFEs

2016-07-06 Thread Matt Riedemann

On 7/5/2016 4:31 AM, Balázs Gibizer wrote:

-Original Message-
From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
Sent: July 01, 2016 23:03

We're now past non-priority feature freeze. I've started going through
some blueprints and -2ing them if they still have outstanding changes. I
haven't gone through the full list yet (we started with 100).

I'm also building a list of potential FFE candidates based on:


I'm proposing 5 of the remaining notification transformations patches
as FFE candidates. [1]



1. How far along the change is (how ready is it?), e.g. does it require
a lot of change yet? Does it require a Tempest test and is that passing
already? How much of the series has already merged and what's left?


The patches below are all ready, but they needed a rebase after the
last-minute changes to the instance.delete patch, which they depend on.
Tempest tests are not required for these patches.

already had +2 +W:
 * https://review.openstack.org/329089 Transform instance.suspend notifications

already had +2:
* https://review.openstack.org/331972 Transform instance.restore notifications
* https://review.openstack.org/332696 Transform instance.shelve notifications

ready for core review
* https://review.openstack.org/329141 Transform instance.pause notifications
* https://review.openstack.org/329255 Transform instance.resize notifications

The spec'ed scope of the bp [1] is already merged, but these are fairly trivial
patches.



2. How much core reviewer attention has it already gotten?


See above.



3. What kind of priority does it have, i.e. if we don't get it done in
Newton do we miss something in Ocata? Think things that start
deprecation/removal timers.


If we move these to Ocata then we slow down the notification transformation
work, which means the deprecation of the legacy notifications
also moves further into the future.

Cheers,
Gibi


[1] 
https://blueprints.launchpad.net/nova/+spec/versioned-notification-transformation-newton



The plan is for the nova core team to have an informal meeting in the
#openstack-nova IRC channel early next week, either Tuesday or
Wednesday, and go through the list of potential FFE candidates.

Blueprints that get exceptions will be checked against the above
criteria and who on the core team is actually going to push the changes
through.

I'm looking to get any exceptions completed within a week, so targeting
Wednesday 7/13. That leaves a few days for preparing for the meetup.

--

Thanks,

Matt Riedemann


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-
requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



We agreed to FFE anything that's already got a +2, but anything else 
that's not ready needs to wait for Ocata, including new notification 
transformation patches.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] existing centralized syslog server

2016-07-06 Thread fabrice grelaud
Hi Michael,

> On 6 Jul 2016, at 17:07, Michael Gugino wrote:
> 
> Hello Fabrice,
> 
>  I think the easiest way would be to set the affinity for each
> controller/infrastructure host using 'rsyslog_container: 0' as seen in the
> following section of the install guide:
> http://docs.openstack.org/developer/openstack-ansible/install-guide-revised
> -draft/configure-initial.html#affinity
> 
OK, I'll look at this.

Actually (first deploy, with a dedicated log server), I have:
./scripts/inventory-manage.py -G
rsyslog_container   |   log1_rsyslog_container-81441bbb

So, from what you write, to avoid creating a log container on the log server, I can 
modify my openstack_user_config.yml to be:

log_hosts:
  log1:
    affinity:
      rsyslog_container: 0
    ip: 172.29.236.240

Is that right?

>  Next, you should add your actual logging hosts to your
> openstack_user_config as seen here:
> https://github.com/openstack/openstack-ansible/blob/master/etc/openstack_de
> ploy/openstack_user_config.yml.aio#L122-L124
> 
>  Be sure to comment out the rsyslog-install.yml line of
> setup-infrastructure.yml, and be sure that you make any necessary
> modifications to the openstack-ansible-rsyslog_client role.  Modifications
> may not be necessary, depending on your needs, and you may be able to
> specify certain variables in user_variables.yml to achieve the desired
> results.
> 
In openstack-ansible-rsyslog_client, the template 99-rsyslog.conf.j2 uses:
*.* @{{ hostvars[server]['ansible_ssh_host'] }}:{{ rsyslog_client_udp_port 
}};RFC3164fmt

I will test to make sure the IP it renders is log1's.
If so, no further modification is needed.
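
Once the playbooks have run I'll also spot-check the rendered file on one of the
containers, something along these lines (the container name is just an example
from my inventory, and I'm assuming the template lands in /etc/rsyslog.d/):

    lxc-attach -n controller1_nova_api_container-xxxxxxxx -- \
        cat /etc/rsyslog.d/99-rsyslog.conf

and confirm the *.* @<ip>:<port> line points at our central server rather than
the log1 VM.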

Thanks again.
Regards

>  As always, make sure you test these modifications in a non-production
> environment to ensure you achieve the desired results.
> 
> 
> Michael Gugino
> Cloud Powered
> (540) 846-0304 Mobile
> 
> Walmart ✻
> Saving people money so they can live better.
> 
> 
> 
> 
> 
> On 7/6/16, 10:24 AM, "fabrice grelaud" 
> wrote:
> 
>> Hi,
>> 
>> I would like to know what is the best approach to customize our
>> openstack-ansible deployment if we want to use our existing solution of
>> ELK and centralized rsyslog server.
>> 
>> We deploy openstack-ansible (mitaka 13.1.2 release) on our infrastructure
>> with for convenient and no risk a vm for syslog server role. That is ok.
>> But now if i want to use our centralized syslog server ?
>> 
>> What i need is to set ip address of our existing server to the rsyslog
>> client (containers + metal) and of course configure our rsyslog.conf to
>> manage openstack template.
>> So:
>> - no need to create on the log server a lxc container (setup-hosts.yml:
>> lxc-hosts-setup, lxc-containers-create)
>> - no need to install syslog server (setup-infrastructure.yml:
>> rsyslog-install.yml)
>> 
>> How can i modify my openstack-ansible environment (/etc/openstack_deploy,
>> env.d, conf.d, openstack_user_config.yml, user_variables.yml, playbook ?)
>> the most transparent manner and that permits minor release update simply ?
>> 
>> Thanks.
>> 
>> Fabrice Grelaud
>> Université de Bordeaux
>> 
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> This email and any files transmitted with it are confidential and intended 
> solely for the individual or entity to whom they are addressed. If you have 
> received this email in error destroy it immediately. *** Walmart Confidential 
> ***
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Questions about instance actions' update and finish

2016-07-06 Thread Matt Riedemann

On 7/4/2016 1:12 AM, Zhenyu Zheng wrote:

I'm willing to work on this, should this be a Blueprint for O?



The spec will need to be re-proposed for Ocata and any adjustments for 
the sorting/paging/marker discussions from this thread and/or the review 
should be laid out in the spec.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Leadership training recap & steps forward

2016-07-06 Thread Colette Alexander
Hello Stackers!

I wanted to send an update about what happened last week at the
leadership training session in Ann Arbor, Michigan and some of the
things moving out of the training that we're hoping help improve the
community as a whole.

17 people from the community (including 8 from the TC) and 2 trainers
from ZingTrain spent 2 days covering a range of materials and
subjects: servant leadership! Visioning! Stages of learning! Good
practices for leading organizational change! Also there were delicious
sandwiches and BBQ.

Reviews and reflections from the training have been overwhelmingly
positive, and some blog posts about it are starting to pop up.[0]

On the day after training, a slightly smaller group than the full 17
met to discuss how some of the ideas presented might help the
OpenStack community, and identified some areas of work that are now
being initiated by the TC. Some of the work identified involves the TC
examining and where necessary, altering the ways it works, and some of
it involves a general support for and development of leadership in all
areas of the OpenStack community. To more clearly define and
accomplish that work, a Stewardship Working Group has been
proposed.[1]

Because of the success of the training, and the fact that 5 TC members
weren't able to attend this past instance, I'm working to arrange a
repeated offering. It's still very much in the planning stages, but
I'm hoping it will be in September, and follow the same 2-day
training/1 day reflection/planning structure as before. The training
will be enrolled similarly to the last one - with TC and Board members
having first crack at the sign up list, and then opening remaining
seats to the rest of the community. I will post to the -dev list as
dates and details are finalized for that.

Ideally there will be some discussion and work on this (including a
potential panel discussion) at the Ocata Summit in Barcelona as well -
stay tuned for information on that as it becomes available!

A huge thanks to all who attended, and to the Foundation who sponsored
the training for everyone - I'm incredibly excited to work on all of
this in such a great environment, with such great people.

Sincerely,

-colette/gothicmindfood

[0] http://www.tesora.com/openstack-tc-will-not-opening-deli/
[1] https://review.openstack.org/#/c/337895/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] upgrades vs rootwrap

2016-07-06 Thread Matthew Treinish
On Wed, Jul 06, 2016 at 10:34:49AM -0500, Matt Riedemann wrote:
> On 6/27/2016 6:24 AM, Sean Dague wrote:
> > On 06/26/2016 10:02 PM, Angus Lees wrote:
> > > On Fri, 24 Jun 2016 at 20:48 Sean Dague  > > > wrote:
> > > 
> > > On 06/24/2016 05:12 AM, Thierry Carrez wrote:
> > > > I'm adding Possibility (0): change Grenade so that rootwrap
> > > filters from
> > > > N+1 are put in place before you upgrade.
> > > 
> > > If you do that as general course what you are saying is that every
> > > installer and install process includes overwriting all of rootwrap
> > > before every upgrade. Keep in mind we do upstream upgrade as offline,
> > > which means that we've fully shut down the cloud. This would remove 
> > > the
> > > testing requirement that rootwrap configs were even compatible 
> > > between N
> > > and N+1. And you think this is theoretical, you should see the patches
> > > I've gotten over the years to grenade because people didn't see an 
> > > issue
> > > with that at all. :)
> > > 
> > > I do get that people don't like the constraints we've self imposed, 
> > > but
> > > we've done that for very good reasons. The #1 complaint from 
> > > operators,
> > > for ever, has been the pain and danger of upgrading. That's why we are
> > > still trademarking new Juno clouds. When you upgrade Apache, you don't
> > > have to change your config files.
> > > 
> > > 
> > > In case it got lost, I'm 100% on board with making upgrades safe and
> > > straightforward, and I understand that grenade is merely a tool to help
> > > us test ourselves against our process and not an enemy to be worked
> > > around.  I'm an ops guy proud and true and hate you all for making
> > > openstack hard to upgrade in the first place :P
> > > 
> > > Rootwrap configs need to be updated in line with new rootwrap-using code
> > > - that's just the way the rootwrap security mechanism works, since the
> > > security "trust" flows from the root-installed rootwrap config files.
> > > 
> > > I would like to clarify what our self-imposed upgrade rules are so that
> > > I can design code within those constraints, and no-one is answering my
> > > question so I'm just getting more confused as this thread progresses...
> > > 
> > > ***
> > > What are we trying to impose on ourselves for upgrades for the present
> > > and near future (ie: while rootwrap is still a thing)?
> > > ***
> > > 
> > > A. Sean says above that we do "offline" upgrades, by which I _think_ he
> > > means a host-by-host (or even global?) "turn everything (on the same
> > > host/container) off, upgrade all files on disk for that host/container,
> > > turn it all back on again".  If this is the model, then we can trivially
> > > update rootwrap files during the "upgrade" step, and I don't see any
> > > reason why we need to discuss anything further - except how we implement
> > > this in grenade.
> > > 
> > > B. We need to support a mix of old + new code running on the same
> > > host/container, running against the same config files (presumably
> > > because we're updating service-by-service, or want to minimise the
> > > service-unavailability during upgrades to literally just a process
> > > restart).  So we need to think about how and when we stage config vs
> > > code updates, and make sure that any overlap is appropriately allowed
> > > for (expand-contract, etc).
> > > 
> > > C. We would like to just never upgrade rootwrap (or other config) files
> > > ever again (implying a freeze in as_root command lines, effective ~a
> > > year ago).  Any config update is an exception dealt with through
> > > case-by-case process and release notes.
> > > 
> > > 
> > > I feel like the grenade check currently implements (B) with a 6 month
> > > lead time on config changes, but the "theory of upgrade" doc and our
> > > verbal policy might actually be (C) (see this thread, eg), and Sean
> > > above introduced the phrase "offline" which threw me completely into
> > > thinking maybe we're aiming for (A).  You can see why I'm looking for
> > > clarification  ;)
> > 
> > Ok, there is theory of what we are striving for, and there is what is
> > viable to test consistently.
> > 
> > The thing we are shooting for is making the code Continuously
> > Deployable. Which means the upgrade process should be "pip install -U
> > $foo && $foo-manage db-sync" on the API surfaces and "pip install -U
> > $foo; service restart" on everything else.
> > 
> > Logic we can put into the python install process is common logic shared
> > by all deployment tools, and we can encode it in there. So all
> > installers just get it.
> > 
> > The challenge is there is no facility for config file management in
> > python native packaging. Which means that software which *depends* on
> > config files for new or even working features now moves from the camp of
> > CDable to manual upgrade needed. What you need to do is in 

Re: [openstack-dev] [Openstack-operators] [puppet] [designate] An update on the state of puppet-designate (and designate in RDO)

2016-07-06 Thread David Moreau Simard
Thanks Matt, if you don't mind I might add you to some puppet reviews.

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]


On Tue, Jul 5, 2016 at 10:22 PM, Matt Fischer  wrote:
> We're using Designate but still on Juno. We're running puppet from around
> then, summer of 2015. We'll likely try to upgrade to Mitaka at some point
> but Juno Designate "just works" so it's been low priority. Look forward to
> your efforts here.
>
> On Tue, Jul 5, 2016 at 7:47 PM, David Moreau Simard  wrote:
>>
>> Hi !
>>
>> tl;dr
>> puppet-designate is undergoing some significant updates to bring it
>> up to par right now.
>> While I will try to ensure it is well tested and backwards compatible,
>> things *could* break. Would like feedback.
>>
>> I cc'd -operators because I'm interested in knowing if there are any
>> users of puppet-designate right now: which distro and release of
>> OpenStack?
>>
>> I'm a RDO maintainer and I took interest in puppet-designate because
>> we did not have any proper test coverage for designate in RDO
>> packaging until now.
>>
>> The RDO community mostly relies on collaboration with installation and
>> deployment projects such as Puppet OpenStack to test our packaging.
>> We can, in turn, provide some level of guarantee that packages built
>> out of trunk branches (and eventually stable releases) should work.
>> The idea is to make puppet-designate work with RDO, then integrate it
>> in the puppet-openstack-integration CI scenarios and we can leverage
>> that in RDO CI afterwards.
>>
>> Both puppet-designate and designate RDO packaging were unfortunately
>> in quite a sad state after not being maintained very well and a lot of
>> work was required to even get basic tests to pass.
>> The good news is that it didn't work with RDO before and now it does,
>> for newton.
>> Testing coverage has been improved and will be improved even further
>> for both RDO and Ubuntu Cloud Archive.
>>
>> If you'd like to follow the progress of the work, the reviews are
>> tagged with the topic "designate-with-rdo" [1].
>>
>> Let me know if you have any questions !
>>
>> [1]: https://review.openstack.org/#/q/topic:designate-with-rdo
>>
>> David Moreau Simard
>> Senior Software Engineer | Openstack RDO
>>
>> dmsimard = [irc, github, twitter]
>>
>> ___
>> OpenStack-operators mailing list
>> openstack-operat...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] upgrades vs rootwrap

2016-07-06 Thread Matt Riedemann

On 6/27/2016 6:24 AM, Sean Dague wrote:

On 06/26/2016 10:02 PM, Angus Lees wrote:

On Fri, 24 Jun 2016 at 20:48 Sean Dague > wrote:

On 06/24/2016 05:12 AM, Thierry Carrez wrote:
> I'm adding Possibility (0): change Grenade so that rootwrap
filters from
> N+1 are put in place before you upgrade.

If you do that as general course what you are saying is that every
installer and install process includes overwriting all of rootwrap
before every upgrade. Keep in mind we do upstream upgrade as offline,
which means that we've fully shut down the cloud. This would remove the
testing requirement that rootwrap configs were even compatible between N
and N+1. And you think this is theoretical, you should see the patches
I've gotten over the years to grenade because people didn't see an issue
with that at all. :)

I do get that people don't like the constraints we've self imposed, but
we've done that for very good reasons. The #1 complaint from operators,
for ever, has been the pain and danger of upgrading. That's why we are
still trademarking new Juno clouds. When you upgrade Apache, you don't
have to change your config files.


In case it got lost, I'm 100% on board with making upgrades safe and
straightforward, and I understand that grenade is merely a tool to help
us test ourselves against our process and not an enemy to be worked
around.  I'm an ops guy proud and true and hate you all for making
openstack hard to upgrade in the first place :P

Rootwrap configs need to be updated in line with new rootwrap-using code
- that's just the way the rootwrap security mechanism works, since the
security "trust" flows from the root-installed rootwrap config files.

I would like to clarify what our self-imposed upgrade rules are so that
I can design code within those constraints, and no-one is answering my
question so I'm just getting more confused as this thread progresses...

***
What are we trying to impose on ourselves for upgrades for the present
and near future (ie: while rootwrap is still a thing)?
***

A. Sean says above that we do "offline" upgrades, by which I _think_ he
means a host-by-host (or even global?) "turn everything (on the same
host/container) off, upgrade all files on disk for that host/container,
turn it all back on again".  If this is the model, then we can trivially
update rootwrap files during the "upgrade" step, and I don't see any
reason why we need to discuss anything further - except how we implement
this in grenade.

B. We need to support a mix of old + new code running on the same
host/container, running against the same config files (presumably
because we're updating service-by-service, or want to minimise the
service-unavailability during upgrades to literally just a process
restart).  So we need to think about how and when we stage config vs
code updates, and make sure that any overlap is appropriately allowed
for (expand-contract, etc).

C. We would like to just never upgrade rootwrap (or other config) files
ever again (implying a freeze in as_root command lines, effective ~a
year ago).  Any config update is an exception dealt with through
case-by-case process and release notes.


I feel like the grenade check currently implements (B) with a 6 month
lead time on config changes, but the "theory of upgrade" doc and our
verbal policy might actually be (C) (see this thread, eg), and Sean
above introduced the phrase "offline" which threw me completely into
thinking maybe we're aiming for (A).  You can see why I'm looking for
clarification  ;)


Ok, there is theory of what we are striving for, and there is what is
viable to test consistently.

The thing we are shooting for is making the code Continuously
Deployable. Which means the upgrade process should be "pip install -U
$foo && $foo-manage db-sync" on the API surfaces and "pip install -U
$foo; service restart" on everything else.

Logic we can put into the python install process is common logic shared
by all deployment tools, and we can encode it in there. So all
installers just get it.

The challenge is there is no facility for config file management in
python native packaging. Which means that software which *depends* on
config files for new or even working features now moves from the camp of
CDable to manual upgrade needed. What you need to do is in releasenotes,
not in code that's shipped with your software. Release notes are not
scriptable.

So, we've said, doing that has to be the exception and not the rule.
It's also the same reasoning behind our deprecation phase for all config
options. Things move from working (in N), to working with warnings (in
N+1), to not working (in N+2). Which allows people to CD across this
boundary, and do config file fixing in their Config Management tools
*post* upgrade.


rootwrap filters aren't config options, but I get the feeling we're 
shoe-horning grenade to treat them as such.


I get why 

Re: [openstack-dev] [grenade] upgrades vs rootwrap

2016-07-06 Thread Matt Riedemann

On 7/3/2016 10:25 PM, Angus Lees wrote:


I see there are already a few other additions to the rootwrap filters in
nova/cinder (the comments suggest (nova) libvirt/imagebackend.py,
(cinder) remotefs.py, and (both) vzstorage.py).  The various
privsep-only suggestions about fallback strategies don't help in these
other examples.  Any corresponding code changes that rely on these new
filters will also need to be reverted and resubmitted during next cycle
- or do what usually happens and slip under the radar as they are not
exercised by grenade.


This is a good point - there were a couple of rootwrap filters added to 
nova already for virtuozzo features (vz volume attach support and 
rescue/resize support using the prl_disk_tool binary). These would fail 
grenade if we ran it with resize and the virtuozzo config with libvirt.
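
For context, the filters in question are just new CommandFilter entries in
nova's etc/nova/rootwrap.d/compute.filters, something of this shape
(illustrative only, not the exact committed lines):

    prl_disk_tool: CommandFilter, prl_disk_tool, root

So "shipping the feature" also means shipping a new line in a root-owned filter
file that deployers have to roll out alongside the code.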


It seems a bit crazy to me to have to land rootwrap filters 6 months 
ahead of the code that uses them though, which is why I didn't block 
those changes from getting in.




 - Gus


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I haven't noticed anyone from the operators community weigh in on this 
thread, but I'm very curious to how they handle rootwrap filters when 
doing upgrades. I might start a separate thread in the operators list 
about that.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] existing centralized syslog server

2016-07-06 Thread Michael Gugino
Hello Fabrice,

  I think the easiest way would be to set the affinity for each
controller/infrastructure host using 'rsyslog_container: 0' as seen in the
following section of the install guide:
http://docs.openstack.org/developer/openstack-ansible/install-guide-revised
-draft/configure-initial.html#affinity

  Next, you should add your actual logging hosts to your
openstack_user_config as seen here:
https://github.com/openstack/openstack-ansible/blob/master/etc/openstack_de
ploy/openstack_user_config.yml.aio#L122-L124

  Be sure to comment out the rsyslog-install.yml line of
setup-infrastructure.yml, and be sure that you make any necessary
modifications to the openstack-ansible-rsyslog_client role.  Modifications
may not be necessary, depending on your needs, and you may be able to
specify certain variables in user_variables.yml to achieve the desired
results.

  As always, make sure you test these modifications in a non-production
environment to ensure you achieve the desired results.


Michael Gugino
Cloud Powered
(540) 846-0304 Mobile
 
Walmart ✻
Saving people money so they can live better.
 




On 7/6/16, 10:24 AM, "fabrice grelaud" 
wrote:

>Hi,
>
>I would like to know what is the best approach to customize our
>openstack-ansible deployment if we want to use our existing solution of
>ELK and centralized rsyslog server.
>
>We deploy openstack-ansible (mitaka 13.1.2 release) on our infrastructure
>with for convenient and no risk a vm for syslog server role. That is ok.
>But now if i want to use our centralized syslog server ?
>
>What i need is to set ip address of our existing server to the rsyslog
>client (containers + metal) and of course configure our rsyslog.conf to
>manage openstack template.
>So:
>- no need to create on the log server a lxc container (setup-hosts.yml:
>lxc-hosts-setup, lxc-containers-create)
>- no need to install syslog server (setup-infrastructure.yml:
>rsyslog-install.yml)
>
>How can i modify my openstack-ansible environment (/etc/openstack_deploy,
>env.d, conf.d, openstack_user_config.yml, user_variables.yml, playbook ?)
>the most transparent manner and that permits minor release update simply ?
>
>Thanks.
>
>Fabrice Grelaud
>Université de Bordeaux
>
>
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


This email and any files transmitted with it are confidential and intended 
solely for the individual or entity to whom they are addressed. If you have 
received this email in error destroy it immediately. *** Walmart Confidential 
***
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova] Microversions support coverage

2016-07-06 Thread Timofei Durakov
I think we could at least fix-up schema validation.

On Wed, Jul 6, 2016 at 5:51 PM, Matt Riedemann 
wrote:

> On 7/6/2016 9:37 AM, Timofei Durakov wrote:
>
>> Hi,
>>
>> there are several patches in a tempest that improves micro
>> versions coverage for Nova REST API:
>> https://review.openstack.org/#/c/337598
>> https://review.openstack.org/#/c/338248
>> https://review.openstack.org/#/c/287605
>> https://review.openstack.org/#/c/338256
>> https://review.openstack.org/#/c/233176
>> https://review.openstack.org/#/c/327191
>>
>> The purpose of this thread is to align our efforts in merging these and
>> also ask other contributors to join this activity, as it will improve
>> nova tempest coverage.
>>
>>
>> Timofey.
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> Are these all needed in Tempest? We don't need 100% nova microversion REST
> API coverage in Tempest if there are things that we can test inside nova,
> like in the functional tests. The only thing we don't have in nova is the
> response schema validation. For the most part I only ask for a microversion
> test in Tempest when it requires other services like cinder or neutron, or
> it hits new behavior in the virt drivers.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [storlets] Starting storlets releases

2016-07-06 Thread eran

Hi,
Its high time we started to have storlet releases.
For one thing it will help us resolve the problem we face with the new  
COPY middleware: fixing this on master is not backward compatible.


Some random thoughts:
1. The releases will - at first - follow those of openstack, and the  
first release will be mitaka

2. A release will be a branch.
3. Future releases may include artifacts such as the common jar  
required for writing storlet, and perhaps a docker image on docker hub  
with s2aio allowing to test the storlet.


Here is the initial plan:
1. land the single range to run locally on object server
2. land the storlet request refactoring work done by Takashi.
3. update the install scripts and docs to use keystone v3.
4. create the mitaka branch.

Any comments/additions are mostly welcome.
Thanks,
Eran


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova] Microversions support coverage

2016-07-06 Thread Matt Riedemann

On 7/6/2016 9:37 AM, Timofei Durakov wrote:

Hi,

there are several patches in Tempest that improve microversion
coverage for the Nova REST API:
https://review.openstack.org/#/c/337598
https://review.openstack.org/#/c/338248
https://review.openstack.org/#/c/287605
https://review.openstack.org/#/c/338256
https://review.openstack.org/#/c/233176
https://review.openstack.org/#/c/327191

The purpose of this thread is to align our efforts in merging these and
also ask other contributors to join this activity, as it will improve
nova tempest coverage.


Timofey.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Are these all needed in Tempest? We don't need 100% nova microversion 
REST API coverage in Tempest if there are things that we can test inside 
nova, like in the functional tests. The only thing we don't have in nova 
is the response schema validation. For the most part I only ask for a 
microversion test in Tempest when it requires other services like cinder 
or neutron, or it hits new behavior in the virt drivers.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tempest][nova] Microversions support coverage

2016-07-06 Thread Timofei Durakov
Hi,

there are several patches in Tempest that improve microversion
coverage for the Nova REST API:
https://review.openstack.org/#/c/337598
https://review.openstack.org/#/c/338248
https://review.openstack.org/#/c/287605
https://review.openstack.org/#/c/338256
https://review.openstack.org/#/c/233176
https://review.openstack.org/#/c/327191

The purpose of this thread is to align our efforts in merging these and
also ask other contributors to join this activity, as it will improve nova
tempest coverage.


Timofey.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] existing centralized syslog server

2016-07-06 Thread fabrice grelaud
Hi,

I would like to know the best approach to customize our openstack-ansible 
deployment if we want to use our existing ELK and centralized rsyslog setup.

We deployed openstack-ansible (Mitaka 13.1.2 release) on our infrastructure with, 
for convenience and to avoid risk, a VM in the syslog server role. That is OK. But 
what if I now want to use our centralized syslog server?

What I need is to point the rsyslog clients (containers + metal) at the IP address 
of our existing server and, of course, configure our rsyslog.conf to handle the 
OpenStack template.
So:
- no need to create an lxc container on the log server (setup-hosts.yml: 
lxc-hosts-setup, lxc-containers-create)
- no need to install a syslog server (setup-infrastructure.yml: 
rsyslog-install.yml)

How can I modify my openstack-ansible environment (/etc/openstack_deploy, 
env.d, conf.d, openstack_user_config.yml, user_variables.yml, playbooks?) in the 
most transparent manner, while still keeping minor release updates simple?

Thanks.

Fabrice Grelaud
Université de Bordeaux




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Use Keystone trusts in Magnum?

2016-07-06 Thread Johannes Grassler

Hello,

I submitted https://review.openstack.org/#/c/326428 a while ago to get around
having to configure Heat's policy.json in a very permissive manner[0]. I
naively only tested it as one user, but gating caught that omission and
dutifully failed (a user cannot stack-get another user's Heat stack, even if
it's the Magnum service user). Ordinarily, that is.

Beyond the ordinary, Heat uses[1] Keystone trusts[2] to handle what is
basically the same problem (acting on a user's behalf way past the time of the
stack-create when the token used for the stack-create may have expired
already).

I propose doing the same thing in Magnum to get the Magnum service user the
ability to perform a stack-get on all of its bays' stacks. That way the hairy
problems with the wide-open permissions necessary for a global stack-list can
be avoided entirely.
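
The rough shape of it, expressed with the plain OpenStack CLI just to illustrate
the trust itself (the real implementation would create the trust through
keystoneclient inside Magnum at bay-create time, and the exact role list would
need discussion; the names below are only placeholders):

    openstack trust create --project $PROJECT_ID \
        --role heat_stack_owner --impersonate \
        $BAY_OWNER_USER_ID $MAGNUM_SERVICE_USER_ID

Magnum would then store the trust id with the bay and use a trust-scoped token
for the later stack-get calls, much as Heat does for deferred operations.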

I'd be willing to implement this, either as part of the existing change
referenced above or with a blueprint and all the bells and whistles.

So I have two questions:

1) Is this an acceptable way to handle the issue?

2) If so, is it blueprint material or can I get away with adding the code
   required for Keystone trusts to the existing change?

Cheers,

Johannes


Footnotes:

[0] See Steven Hardy's excellent dissection of the problem at the root of it:

http://lists.openstack.org/pipermail/openstack-dev/2016-July/098742.html


[1] 
http://hardysteven.blogspot.de/2014/04/heat-auth-model-updates-part-1-trusts.html

[2] https://wiki.openstack.org/wiki/Keystone/Trusts

--
Johannes Grassler, Cloud Developer
SUSE Linux GmbH, HRB 21284 (AG Nürnberg)
GF: Felix Imendörffer, Jane Smithard, Graham Norton
Maxfeldstr. 5, 90409 Nürnberg, Germany

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Nominating Scott D'Angelo to Cinder core

2016-07-06 Thread D'Angelo, Scott
Thanks Everyone!


Scott(da)


From: Sean McGinnis 
Sent: Wednesday, July 6, 2016 3:12:57 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Cinder] Nominating Scott D'Angelo to Cinder core

I'm a little late following through on this, but since Scott is on
vacation right now anyway I suppose that's OK.

Since there were no objections and all respondents were positive, I've
now added Scott to the cinder-core group.

Welcome Scott!

Sean

On Mon, Jun 27, 2016 at 12:27:06PM -0500, Sean McGinnis wrote:
> I would like to nominate Scott D'Angelo to core. Scott has been very
> involved in the project for a long time now and is always ready to help
> folks out on IRC. His contributions [1] have been very valuable and he
> is a thorough reviewer [2].
>
> Please let me know if there are any objects to this within the next
> week. If there are none I will switch Scott over by next week, unless
> all cores approve prior to then.
>
> Thanks!
>
> Sean McGinnis (smcginnis)
>
> [1] 
> https://review.openstack.org/#/q/owner:%22Scott+DAngelo+%253Cscott.dangelo%2540hpe.com%253E%22+status:merged
> [2] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Nominate Thomas Herve for Zaqar core

2016-07-06 Thread Flavio Percoco

On 01/07/16 07:18 +1200, Fei Long Wang wrote:

Hi team,

I would like to propose adding Thomas Herve (therve) to the Zaqar core 
team. TBH, I drafted this mail about 6 months ago; the reason you are only 
seeing it now is that I was not sure Thomas could dedicate his time to 
Zaqar (he is a very busy man). But as you can see, I was wrong. 
He continually contributes a lot of high-quality patches to Zaqar[1] 
and a lot of inspiring comments for this project and team. I'm sure he 
would make an excellent addition to the team. If no one objects, I'll 
proceed and add him a week from now.


[1] 
http://stackalytics.com/?module=zaqar-group=commits=all_id=therve


+1 makes total sense to me!

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] pbr potential breaking change coming

2016-07-06 Thread Flavio Percoco

On 21/06/16 09:01 -0400, Doug Hellmann wrote:

A while back pbr had a feature that let projects pass "warnerror"
through to Sphinx during documentation builds, causing any warnings in
that build to be treated as an error and fail the build. This lets us
avoid things like links to places that don't exist in the docs, bad but
renderable rst, typos in directive or role names, etc.

Somewhat more recently, but still a while ago, that feature "broke"
with a Sphinx release that was not API compatible. Sachi King has
fixed this issue within pbr, and so the next release of pbr will
fix the broken behavior and start correctly passing warnerror again.
That may result in doc builds breaking where they didn't before.

The short-term solution is to turn off warnerrors (look in your
setup.cfg), then fix the issues and turn it back on. Or you could
preemptively fix any existing warnings in your doc builds before the
release, but it's simple enough to turn off the feature if there isn't
time.

Josh, Sachi, & other Oslo folks, I think we should hold off on
releasing until next week to give folks time. Is that OK?

Doug

PS - Thanks, Sachi, I know that bug wasn't a ton of fun to fix!


Glance should be ok and I've just proposed a patch like Ihar's for Zaqar.
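
For anyone who needs the quick workaround, the flag being discussed lives in
each project's setup.cfg, roughly:

    [pbr]
    warnerrors = true

Setting it to false (or removing it) unblocks the doc builds; then fix the
warnings and flip it back on.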

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] SFC stable/mitaka version

2016-07-06 Thread Gary Kotton
Hi,
Is anyone looking at creating a stable/mitaka version? What if someone wants to 
use this for stable/mitaka?
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] OpenStack-Ansible and Open vSwitch

2016-07-06 Thread Truman, Travis
I’ve been anxiously awaiting Open vSwitch support in the OpenStack-Ansible
project and it looks like we've made great progress in the Newton cycle
thanks to efforts from many in the community.

After spending some time testing the current implementation in my lab, I
wanted to throw together a quick blog post to explain how I structured
my OSA configuration to achieve the end results I was looking for.

Please find the post here:
http://trumant.github.io/openstack-ansible-openvswitch.html
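
For anyone who just wants the gist before reading the post: the switch mostly
boils down to a Neutron plugin override plus a provider network entry. A rough
sketch (the bridge name, VLAN range and net_name below are illustrative
placeholders, not values taken from the post):

    # /etc/openstack_deploy/user_variables.yml
    neutron_plugin_type: "ml2.ovs"

    # /etc/openstack_deploy/openstack_user_config.yml (provider network excerpt)
    - network:
        container_bridge: "br-provider"
        container_type: "veth"
        type: "vlan"
        range: "101:200"
        net_name: "physnet1"
        group_binds:
          - neutron_openvswitch_agent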

I hope others find this useful and that it may serve as a good reference
point when the community begins building scenario-based documentation.

- Travis Truman
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: [probably forged email] Re: [daisycloud-core] [kolla] Kolla Mitaka requirements supported by CentOS

2016-07-06 Thread hu . zhijiang
Steve,

I agree with you that Kolla can use EPEL, because Kolla is not a 
functional part of OpenStack but a deployment tool. In the same vein, does 
that mean Kolla does not need to keep up with the global requirements of 
OpenStack? For example, instead of using the oslo versions required by 
OpenStack, could Kolla be more open to staying compatible with an older 
oslo? This is a big deal, because if installers such as daisycloud-core 
want to use Kolla as the underlying deployment tool, they have to follow 
the same requirements-update cadence as Kolla, and sometimes that is not 
necessary for OpenStack installers. 




From: "Steven Dake (stdake)" 
To: "OpenStack Development Mailing List (not for usage 
questions)" , 
Date:   2016-07-06 02:03
Subject:   [probably forged email] Re: [openstack-dev] 
[daisycloud-core] [kolla] Kolla Mitaka requirements supported by CentOS



Hu,

I am open to not using EPEL in the containers themselves.  I didn't even 
know all the dependencies were available without EPEL.  I am on vacation 
at present, but I'll try a build without EPEL and see if it actually 
builds correctly.  This does raise the question of where ansible, docker, and 
git tools come from.  I am not keen to pull them from COPRs because I want 
them distributed via the CDN for HA purposes.

Are you planning to pull ansible, docker, and the git tools into 
delorean-deps?  One thing I am not keen on doing is yum install with a URL.

Also for source builds we absolutely require EPEL because we need gcc and 
other development tools.



As for the deployment host, we do require EPEL, which is perfectly normal. 

From: "hu.zhiji...@zte.com.cn" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Date: Monday, July 4, 2016 at 1:02 AM
To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [daisycloud-core] [kolla] Kolla Mitaka 
requirements supported by CentOS

> As one of the RDO maintainers, I strongly invite Kolla not to use EPEL.
> It's proven very hard to prevent EPEL from pushing broken updates, or to
> get updates pushed that fit OpenStack requirements.

> Actually, all the dependencies above except the ansible, docker and git
> python modules are in the CentOS Cloud SIG repositories.
> If you are interested in working with the CentOS Cloud SIG, we can add
> the missing dependencies to our repositories.

I added the [kolla] keyword to the mail subject. Hope we can get a response 
from the Kolla team about how to choose repos.


Thanks, 
Zhijiang 



From: Haïkel 
To: "OpenStack Development Mailing List (not for usage 
questions)" , 
Date: 2016-07-03 05:18
Subject: [probably forged email] Re: [openstack-dev] 
[daisycloud-core] Kolla Mitaka requirements supported by CentOS



2016-07-02 20:42 GMT+02:00 jason :
> Pip Package Name   Supported By CentOS   CentOS Name                  Repo Name
> ================================================================================
> ansible            yes                   ansible1.9.noarch            epel
> docker-py          yes                   python-docker-py.noarch      extras
> gitdb              yes                   python-gitdb.x86_64          epel
> GitPython          yes                   GitPython.noarch             epel
> oslo.config        yes                   python2-oslo-config.noarch   centos-openstack-mitaka
> pbr                yes                   python-pbr.noarch            epel
> setuptools         yes                   python-setuptools.noarch     base
> six                yes                   python-six.noarch            base
> pycrypto           yes                   python2-crypto               epel
> graphviz           no
> Jinja2             no (Note: Jinja2 2.7.2 will be installed as a dependency by ansible)
>

As one of the RDO maintainers, I strongly invite Kolla not to use EPEL.
It's proven very hard to prevent EPEL from pushing broken updates, or to
get updates pushed that fit OpenStack requirements.

Actually, all the dependencies above except the ansible, docker and git
python modules are in the CentOS Cloud SIG repositories.
If you are interested in working with the CentOS Cloud SIG, we can add
the missing dependencies to our repositories.


>
> As above table shows, only two (graphviz and Jinja2) are not supported
> by centos currently. As those not supported packages are definitly not
> used by OpenStack as well as Daisy. So basicaly we can use pip to
> install them after installing other packages by yum. But note that
> Jinja2 2.7.2 will be installed as dependency while yum install
> ansible, so we need to using pip to install jinja2 2.8 after that to
> overide the old one. Also note that we must make sure pip is ONLY used
> for installing those two not supported packages.
>
> But before you trying to use pip, please consider these:
>
> 1) graphviz is just for 
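
For what it's worth, a minimal sketch of the yum-then-pip flow described above
(package names taken from the table; assuming a CentOS 7 host with the base,
extras, epel and centos-openstack-mitaka repos enabled):

    # Install everything CentOS provides; this pulls in Jinja2 2.7.2 as a
    # dependency of ansible.
    yum install -y ansible python-docker-py python-gitdb GitPython \
        python2-oslo-config python-pbr python-setuptools python-six python2-crypto

    # Use pip ONLY for the two packages CentOS does not ship, and to override
    # the old Jinja2 pulled in by ansible.
    pip install 'Jinja2>=2.8' graphviz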

Re: [openstack-dev] [all] Status of the OpenStack port to Python 3

2016-07-06 Thread Flavio Percoco

On 24/06/16 12:17 -0400, Sean Dague wrote:

On 06/24/2016 11:48 AM, Doug Hellmann wrote:

Excerpts from Dmitry Tantsur's message of 2016-06-24 10:59:14 +0200:

On 06/23/2016 11:21 PM, Clark Boylan wrote:

On Thu, Jun 23, 2016, at 02:15 PM, Doug Hellmann wrote:

Excerpts from Thomas Goirand's message of 2016-06-23 23:04:28 +0200:

On 06/23/2016 06:11 PM, Doug Hellmann wrote:

I'd like for the community to set a goal for Ocata to have Python
3 functional tests running for all projects.

As Tony points out, it's a bit late to have this as a priority for
Newton, though work can and should continue. But given how close
we are to having the initial phase of the port done (thanks Victor!),
and how far we are from discussions of priorities for Ocata, it
seems very reasonable to set a community-wide goal for our next
release cycle.

Thoughts?

Doug


+1

Just think about it for a while. If we get Nova to work with Py3, and
everything else is working, including all functional tests in Tempest,
then after Ocata, we could even start to *REMOVE* Py2 support after
Ocata+1. That would be really awesome to stop all the compat layer
madness and use the new features available in Py3.


We'll need to get some input from other distros and from deployers
before we decide on a timeline for dropping Python 2. For now, let's
focus on making Python 3 work. Then we can all rejoice while having the
discussion of how much longer to support Python 2. :-)



I really would love to ship a full stack running Py3 for Debian Stretch.
However, for this, it'd be super helpful to have as much visibility as
possible. Are we setting a hard deadline for the Ocata release? Or is
this just a goal we only "would like" to reach, but it's not really a
big deal if we don't reach it?


Let's see what PTLs have to say about planning, but I think if not
Ocata then we'd want to set one for the P release. We're running
out of supported lifetime for Python 2.7.


Keep in mind that there is interest in running OpenStack on PyPy which
is python 2.7. We don't have to continue supporting CPython 2.7
necessarily but we may want to support python 2.7 by way of PyPy.


PyPy folks have been working on python 3 support for some time already:
http://doc.pypy.org/en/latest/release-pypy3.3-v5.2-alpha1.html
It's an alpha, but by the time we consider dropping Python 2 it will
probably be released :)


We're targeting Python >=3.4, right now.  We'll have to decide as
a community whether PyPy support is a sufficient reason to keep
support for "older" versions (either 2.x or earlier versions of 3).
Before we can have that discussion, though, we need to actually run on
Python 3, so let's focus on that and evaluate the landscape of other
interpreters when the porting work is done.


+1, please don't get ahead of things until there is real full stack
testing running on python3.

It would also be good if some of our operators were running on python 3
and providing feedback that it works in the real world before we even
talk about dropping. Because our upstream testing (even the full stack
testing) only can catch so much.

So next steps:

1) full stack testing of everything we've got on python3 - (are there
volunteers to get that going?)
2) complete Nova port to enable full stack testing on python3 for iaas base
3) encourage operators to deploy with python3 in production
4) gather real world feedback, develop rest of plan



Just want to +1 the above steps. I'd be very hesitant to make any plan until we
are able to get not only nova but all the projects in the starter-kit:compute[0]
running on python3 (and w/ a full stack test).

[0] https://governance.openstack.org/reference/tags/starter-kit_compute.html

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Nominating Scott D'Angelo to Cinder core

2016-07-06 Thread M Ranga Swami Reddy
+1
Good luck, Scott, in your next role!



On Wed, Jul 6, 2016 at 2:36 AM, Walter A. Boring IV
 wrote:
> This is great!  I know I'm a bit late replying to this on the ML, due to
> my vacation, but I wholeheartedly agree!
>
> +1
>
> Walt
>
> On 06/27/2016 10:27 AM, Sean McGinnis wrote:
>>
>> I would like to nominate Scott D'Angelo to core. Scott has been very
>> involved in the project for a long time now and is always ready to help
>> folks out on IRC. His contributions [1] have been very valuable and he
>> is a thorough reviewer [2].
>>
>> Please let me know if there are any objections to this within the next
>> week. If there are none I will switch Scott over by next week, unless
>> all cores approve prior to then.
>>
>> Thanks!
>>
>> Sean McGinnis (smcginnis)
>>
>> [1]
>> https://review.openstack.org/#/q/owner:%22Scott+DAngelo+%253Cscott.dangelo%2540hpe.com%253E%22+status:merged
>> [2] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Keep / Remove 800 UTC Meeting time

2016-07-06 Thread Rob Cresswell
There don't seem to have been any responses here, so I've gone ahead and 
proposed a patch to alter the meeting times: 
https://review.openstack.org/#/c/338130/

Rob


On 30 June 2016 at 09:29, Rob Cresswell 
> wrote:
Hi everyone,

I've mentioned in the past few meetings that attendance for the 800 UTC meeting 
time has been dwindling. As it stands, there are really only 3 regular 
attendees (myself included). I think we should consider scrapping it, and just 
use the 2000 UTC slot each week as a combined Horizon / Drivers meeting.

Does anyone have any strong objections to this? I'm more than happy to run the 
meeting if people would like to attend, but it seems wasteful to drag people 
into it each week if it's empty.

Rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Charm][Congress] Upstreaming JuJu Charm for Openstack Congress

2016-07-06 Thread James Page
Hi Bryan

On Tue, 5 Jul 2016 at 21:21 SULLIVAN, BRYAN L  wrote:

> I've been working in the OPNFV (https://wiki.opnfv.org/) on a JuJu Charm
> to install the OpenStack Congress service on the OPNFV reference platform.
> This charm should be useful for anyone that wants to install Congress for
> use with an OpenStack deployment, using the JuJu tool. I want to get the
> charm upstreamed as an official openstack.org git repo similar to the
> other repos for JuJu Charms for OpenStack services. I participate in the
> Congress team in OpenStack, but we don't know the process for getting this
> charm upstreamed into an OpenStack repo, so I am reaching out to anyone on
> this list for help.
>

I can help you with that as I did most of the project-config changes for
the original move of OpenStack charms to git/gerrit a few months back.


> I have been working with gnuoy on #juju and narindergupta on #opnfv-joid
> on this. The charm has been tested and used to successfully install
> Congress on the OPNFV Colorado release (Mitaka-based, re OpenStack). While
> there are some features we need to continue to work on (e.g. Horizon
> integration), the charm is largely complete and stable. We are currently
> integrating it into the OPNFV CI/CD system through which it will be
> regularly/automatically tested in any OPNFV JuJu-based deployment (with
> this charm, Congress will become a default service deployed in OPNFV thru
> JuJu).
>

That all sounds awesome; it would also be great to get that system testing
reviews and providing feedback on proposed changes; we can cover some of that
using Canonical CI (you'll notice it feeding back on reviews on the charms
already being developed under git/gerrit), but having other 3rd party CI
systems provide feedback would also be great!

Any pointers on how to get started toward creating an OpenStack git repo
> for this are appreciated.
>

https://review.openstack.org/#/c/323716/ is probably a good start; this was
for the hacluster charm which got missed during the original migration.

You can add an additional field to the gerrit/projects.yaml entry to detail
the repository to do a one-time import from:

 upstream: https://github.com/gnuoy/charm-congress
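
For illustration, a complete entry might look roughly like the following (the
description and acl-config path here are assumptions; follow the existing
charm-* entries in openstack-infra/project-config for the exact layout):

    - project: openstack/charm-congress
      description: Juju Charm - OpenStack Congress
      upstream: https://github.com/gnuoy/charm-congress
      acl-config: /home/gerrit2/acls/openstack/charm-congress.config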


Let's make sure we get any outstanding changes merged into that repository
first, or we can use your git repo for the one-time import if that's easier
for now.

I'd really love to have congress as part of the OpenStack Charms project
(we're currently working towards official project status at the moment) but
I'd also like to ensure that you're engaged directly with reviews and
changes to the congress charm specifically, so I think it's worth having a
slightly different ACL configuration that includes the core charm acl config,
but also adds an additional charms-congress-core group with permissions to
Code-Review: +2 and Workflow: +1 for the congress charm repository.

Does that make sense? You can reach out to me in #openstack-charms to
discuss in more detail if you like.

Cheers

James
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Nominating Scott D'Angelo to Cinder core

2016-07-06 Thread Sean McGinnis
I'm a little late following through on this, but since Scott is on
vacation right now anyway I suppose that's OK.

Since there were no objections and all respondents were positive, I've
now added Scott to the cinder-core group.

Welcome Scott!

Sean

On Mon, Jun 27, 2016 at 12:27:06PM -0500, Sean McGinnis wrote:
> I would like to nominate Scott D'Angelo to core. Scott has been very
> involved in the project for a long time now and is always ready to help
> folks out on IRC. His contributions [1] have been very valuable and he
> is a thorough reviewer [2].
> 
> Please let me know if there are any objections to this within the next
> week. If there are none I will switch Scott over by next week, unless
> all cores approve prior to then.
> 
> Thanks!
> 
> Sean McGinnis (smcginnis)
> 
> [1] 
> https://review.openstack.org/#/q/owner:%22Scott+DAngelo+%253Cscott.dangelo%2540hpe.com%253E%22+status:merged
> [2] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] It is impossible to queue UpdateDnsmasqTask

2016-07-06 Thread Georgy Kibardin
Bulat is suggesting we move forward with option 4. He suggests merging all
actions of UpdateDnsmasqTask into one puppet task. There are three actions:
syncing the admin network list to hiera, updating the dhcp ranges, and
cobbler sync. The problem I see with this approach is that the current
implementation does not allow passing any additional data to "puppet apply".
Cobbler sync seems to be a reasonable part of updating the dhcp ranges config.

Best,
Georgy

On Thu, Jun 16, 2016 at 7:25 PM, Georgy Kibardin 
wrote:

> Hi All,
>
> Currently we can only run one instance of subj. at a time. An attempt to
> run a second one causes an exception. This behaviour may, at the least,
> cause a cluster to get stuck forever in the "removing" state (reproduced
> here: https://bugs.launchpad.net/fuel/+bug/1544493) or just produce an
> incomprehensible "task already running" message. So we need to address
> the problem somehow. I see the following ways to fix it:
>
> 1. Just put the cluster into the "error" state, which would allow the
> user to remove it later.
>   pros: simple and fixes the problem at hand (#1544493)
>   cons: it would be hard to detect the "come again later" situation;
> quite a lame behavior: why don't you "come again later" yourself.
>
> 2. Implement generic queueing in nailgun.
>   pros: quite simple
>   cons: it doesn't look like nailgun's responsibility
>
> 3. Implement generic queueing in astute.
>   pros: this behaviour makes sense for astute.
>   cons: the implementation would be quite complex; we need to synchronize
> execution between separate worker processes.
>
> 4. Split the task so that each part works with a particular cluster.
>   pros: we don't extend our execution model
>   cons: nontrivial implementation; no guarantee that we are always able
> to split master node tasks on a per-cluster basis.
>
> Best,
> Georgy
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Python 35 jobs currently broken (was Re: New Python35 Jobs coming)

2016-07-06 Thread Andreas Jaeger
On 2016-07-05 21:38, Andreas Jaeger wrote:
> On 07/05/2016 02:52 PM, Andreas Jaeger wrote:
>> [...]
>> The change has merged and the python35 non-voting tests are run as part
>> of new changes now.
>>
>> The database setup for the python35-db variant is not working yet, and
>> needs adjustment on the infra side.
> 
> 
> The python35-db jobs are working now as well.

Note that currently *all* python35 and python35-db jobs will fail; we
need to build new xenial images first. This might take until around
this time tomorrow.

Error message is
"Package php5-cli is not available, but is referred to by another package."

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]agenda of weekly meeting of Jul.6

2016-07-06 Thread joehuang
Hi, team,


IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting every 
Wednesday starting at 13:00 UTC.



The agenda of this weekly meeting is:

# bugs review: https://bugs.launchpad.net/tricircle

# bump to Nova 2.1: https://review.openstack.org/#/c/324226/

# L2/L3 networking with the help of 'Add Unicast Flooding Vteps to Provider 
Network': https://review.openstack.org/#/c/282180/

# todo list review https://etherpad.openstack.org/p/TricircleToDo


If you have other topics to be discussed in the weekly meeting, please reply 
to this mail.

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev