[openstack-dev] [oslo][all] Oslo specs review weeks [Sept 27 - Oct 7]

2016-09-26 Thread ChangBo Guo
Hi ALL,

We will release Newton on Oct 6, and we have been working hard to fix bugs
and prepare the release. I think it's a good time to review good ideas. As we
discussed in the Oslo weekly meeting [1], we would like to propose an Oslo
specs review event (Sept 27 - Oct 7). Oslo folks and others are welcome to
review the Oslo specs during these two weeks; we hope we can merge these
specs [2].

I also created an etherpad [3] to collect requirements from consuming
projects; please add an item if you have any questions or ideas.

[1]
http://eavesdrop.openstack.org/meetings/oslo/2016/oslo.2016-09-26-16.00.log.html
[2]
https://review.openstack.org/#/q/project:openstack/oslo-specs++%28status:open++OR+status:abandoned%29
[3] https://etherpad.openstack.org/p/ocata-oslo-ideas


Thanks

ChangBo Guo(gcb)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Latest news on placement API and Ocata rough goals

2016-09-26 Thread Jay Pipes

On 09/23/2016 05:07 PM, Sylvain Bauza wrote:

Le 23/09/2016 18:41, Jay Pipes a écrit :

5. Nested resource providers

Things like SR-IOV PCI devices are actually resource providers that
are embedded within another resource provider (the compute node
itself). In order to tag things like SR-IOV PFs or VFs with a set of
traits, we need to have discovery code run on the compute node that
registers things like SR-IOV PF/VFs or SR-IOV FPGAs as nested resource
providers.

Some steps needed here:

a) agreement on schema for placement DB for representing this nesting
relationship
b) write the discovery code in nova-compute for adding these resource
providers to the placement API when found



Again, that looks like a stretch goal to me, given how little we have
discussed it so far. But sure, Ocata would be fine for a first discussion.


OK, so I was able to carve out some coding time today and pushed up the 
initial code that implements the nested resource providers modeling:


https://review.openstack.org/#/c/377138/

Basically, it adds a couple of fields to represent the hierarchy of 
providers, plus logic to ensure that the tree of providers isn't broken 
accidentally. The modeling allows for an unlimited number of nesting levels 
(parent -> child relations) while remaining efficient for *most* tree 
operations (get me all nodes in a tree, get me the direct children, get my 
parent, etc.).


A followup patch adds a new method 
ResourceProviderList.get_all_by_root_provider_uuid() to get all resource 
providers in a single tree.


https://review.openstack.org/377215

More patches will follow that create the resource provider records for 
SR-IOV PFs as children of the compute host's resource provider.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2016-09-26 Thread Joshua Harlow

Huang Zhiteng wrote:
> On Tue, Sep 27, 2016 at 12:00 AM, Joshua Harlow wrote:
>
>> Huang Zhiteng wrote:
>>
>>> On Mon, Sep 26, 2016 at 12:05 PM, Joshua Harlow wrote:
>>>
>>>> Huang Zhiteng wrote:
>>>>
>>>>> At eBay, we made some in-house changes to Nova so that our big data
>>>>> type of use case can have physical disks as ephemeral disks for this
>>>>> type of flavor. It works well so far. My 2 cents.
>>>>
>>>> Is there a published patch (or patchset) anywhere that people can
>>>> look at for said in-house changes?
>>>
>>> Unfortunately no, but I think we can publish it if there is enough
>>> interest. However, I don't think it can be easily adopted into
>>> upstream Nova, since it depends on other in-house changes we've
>>> made to Nova.
>>
>> Is there any blog or other write-up that explains the full set of
>> changes that eBay has done (you got me curious)?
>>
>> The nice thing about OSS is that if you just get the patchsets out
>> (even to github or somewhere), those patches may trigger things to
>> change to match your use case better, just by the nature of people
>> being able to read them; but if they are never put out there, then
>> well, it's a little hard to get anything to change.
>>
>> Anything stopping a full release of all in-house changes?
>>
>> Even if they are not 'super great quality' it really doesn't matter :)
>
> Apologies for sidetracking the topic a bit. While we encourage our
> engineers to embrace community and open source, I think we didn't do a
> good job of actually emphasizing that. 'Time To Market' is another
> factor: usually a feature requirement becomes a deployed service in 2-3
> sprints (4-6 weeks), but you know how much can be done in the same
> amount of time in the community, especially with Nova. :)


Ya, sorry for side-tracking.

Overall, yes, I do know that getting changes done upstream is not a 4-6 week 
process (though maybe someday it could be). I don't want to turn this into a 
rant, and thankfully there is a decent LWN article about this kind of 
situation already. You might like it :)

https://lwn.net/Articles/647524/ (replace embedded Linux/kernel in this with 
OpenStack and IMHO it's equally useful/relevant)


-Josh






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle] Tricircle session planning for Barcelona design summit

2016-09-26 Thread joehuang
Hello,

Fortunately, Tricircle will have two workroom slots for design discussion at 
the Barcelona design summit, so let's plan the design topics to be discussed 
there. Note: if some topics still need to be discussed afterwards, we can 
continue the discussion at a round-table with an online etherpad on Friday, 
Oct 28.

Please add your candidate topics to the etherpad: 
https://etherpad.openstack.org/p/ocata-tricircle-sessions-planning

Best Regards
Chaoyi Huang(joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] ironic-inspector-core team update

2016-09-26 Thread Zhenguo Niu
+1, well deserved!

On Mon, Sep 26, 2016 at 8:55 PM, milanisko k  wrote:

> Thanks guys! :D
>
> --
> milan
>
> On Mon, Sep 26, 2016 at 14:46, Jim Rollenhagen wrote:
>
>> // jim
>>
>>
>> On Mon, Sep 26, 2016 at 5:24 AM, Dmitry Tantsur 
>> wrote:
>> > Hi folks!
>> >
>> > As you probably know, Imre has decided to leave us for other
>> challenges, so
>> > our small core team has become even smaller. I'm removing him on his
>> > request.
>> >
>> > I suggest adding Milan Kovacik (milan or mkovacik on IRC) to the
>> > ironic-inspector-core team. He's been pretty active on ironic-inspector
>> > recently, doing meaningful reviews, and he's driving our HA work
>> forward.
>> >
>> > Please vote with +1/-1. If no objections are recorded, the change will
>> be in
>> > effect next Monday.
>> >
>> > Thanks!
>>
>> +1, Milan seems to be killing it. :)
>>
>> // jim
>>


-- 
Best Regards,
Zhenguo Niu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2016-09-26 Thread Huang Zhiteng
On Tue, Sep 27, 2016 at 9:21 AM, Zhenyu Zheng 
wrote:

> Hi,
>
> Thanks for the reply. Actually, approach one is not what we are looking
> for; what we need is to attach a real physical volume from the compute node
> to the VM. That way we can achieve the performance we need for use cases
> such as big data, and it can be done by Cinder using the BlockDeviceDriver.
> It is quite different from the approach one you mentioned. The only problem
> now is that we cannot practically ensure the compute resource is located on
> the same host as the volume. As Matt mentioned above, currently we have to
> arrange 1:1 AZs in Cinder and Nova to do this, which is not practical in
> commercial deployments.
>
That's exactly what I suggested: you don't need Cinder to 'pass through' a
physical drive to a VM. Doing it with Nova (with some changes) is much easier
than trying to coordinate between two services.

>
> Thanks.
>
> On Mon, Sep 26, 2016 at 9:48 PM, Erlon Cruz  wrote:
>
>>
>>
>> On Fri, Sep 23, 2016 at 10:19 PM, Zhenyu Zheng wrote:
>>> Hi,
>>>
>>> Thanks all for the information. As for the filter Erlon mentioned
>>> (InstanceLocalityFilter), this only solves part of the problem: we can
>>> create new volumes for existing instances using this filter and then
>>> attach them, but the root volume still cannot be guaranteed to be on the
>>> same host as the compute resource, right?
>>>
>>>
>> You have two options to use a disk on the same node as the instance.
>> 1 - The easiest: just don't use Cinder volumes. When you create an
>> instance from an image, the default behavior in Nova is to create the root
>> disk on the local host (/var/lib/nova/instances). This has the advantage
>> that Nova will cache the image locally, avoiding the need to copy the
>> image over the wire (or to configure image caching in Cinder).
>>
>> 2 - Use Cinder volumes as the root disk. Nova will somehow have to pass
>> the hints to the scheduler so it can properly use the
>> InstanceLocalityFilter. If you put this in Nova and make sure that all
>> requests have the proper hint, then the volumes created will be scheduled
>> to the right host.
>>
>> Is there any reason why you can't use the first approach?
>>
>>
>>
>>
>>> The idea here is that all the volumes use local disks.
>>> I was wondering if we already have such a plan for after the Resource
>>> Provider structure is completed?
>>>
>>> Thanks
>>>
>>> On Sat, Sep 24, 2016 at 2:05 AM, Erlon Cruz  wrote:
>>>
 Not sure exactly what you mean, but in Cinder, using the
 InstanceLocalityFilter [1], you can schedule a volume to the same compute
 node where the instance is located. Is this what you need?

 [1] http://docs.openstack.org/developer/cinder/scheduler-filters.html#instancelocalityfilter

 On Fri, Sep 23, 2016 at 12:19 PM, Jay S. Bryant <
 jsbry...@electronicjungle.net> wrote:

> Kevin,
>
> This is functionality that has been requested in the past but has
> never been implemented.
>
> The best way to proceed would likely be to propose a blueprint/spec
> for this and start working this through that.
>
> -Jay
>
>
> On 09/23/2016 02:51 AM, Zhenyu Zheng wrote:
>
> Hi Novaers and Cinders:
>
> Quite often, application requirements demand using locally attached disks
> (or direct-attached disks) for OpenStack compute instances. One such
> example is running virtual Hadoop clusters on OpenStack.
>
> We can achieve this today by using the BlockDeviceDriver as the Cinder
> driver and matching AZs in Nova and Cinder, as illustrated in [1], but
> that is not very feasible in large-scale production deployments.
>
> Now that Nova is working on resource providers, trying to build a
> generic-resource-pool, is it possible to perform "volume-based scheduling"
> to build instances according to the volume? That could make it much easier
> to build instances like those mentioned above.
>
> Or do we have any other ways of doing this?
>
> References:
> [1] http://cloudgeekz.com/71/how-to-setup-openstack-to-use-local-disks-for-instances.html
>
> Thanks,
>
> Kevin Zheng
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.op
> enstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

 

Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2016-09-26 Thread Huang Zhiteng
On Tue, Sep 27, 2016 at 12:00 AM, Joshua Harlow wrote:

> Huang Zhiteng wrote:
>
>>
>> On Mon, Sep 26, 2016 at 12:05 PM, Joshua Harlow wrote:
>> Huang Zhiteng wrote:
>>
>> At eBay, we made some in-house changes to Nova so that our big data
>> type of use case can have physical disks as ephemeral disks for this
>> type of flavor. It works well so far. My 2 cents.
>>
>>
>> Is there a published patch (or patchset) anywhere that people can
>> look at for said in-house changes?
>>
>>
>> Unfortunately no, but I think we can publish it if there is enough
>> interest. However, I don't think it can be easily adopted into
>> upstream Nova, since it depends on other in-house changes we've made
>> to Nova.
>>
>>
>>
> Is there any blog or other write-up that explains the full set of changes
> that eBay has done (you got me curious)?
>
> The nice thing about OSS is that if you just get the patchsets out (even to
> github or somewhere), those patches may trigger things to change to match
> your use case better, just by the nature of people being able to read them;
> but if they are never put out there, then well, it's a little hard to get
> anything to change.
>
> Anything stopping a full release of all in-house changes?
>
> Even if they are not 'super great quality' it really doesn't matter :)

Apologies for sidetracking the topic a bit. While we encourage our engineers
to embrace community and open source, I think we didn't do a good job of
actually emphasizing that. 'Time To Market' is another factor: usually a
feature requirement becomes a deployed service in 2-3 sprints (4-6 weeks),
but you know how much can be done in the same amount of time in the
community, especially with Nova. :)

>
>
> -Josh
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards
Huang Zhiteng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2016-09-26 Thread Zhenyu Zheng
Hi,

Thanks for the reply. Actually, approach one is not what we are looking for;
what we need is to attach a real physical volume from the compute node to the
VM. That way we can achieve the performance we need for use cases such as big
data, and it can be done by Cinder using the BlockDeviceDriver. It is quite
different from the approach one you mentioned. The only problem now is that
we cannot practically ensure the compute resource is located on the same host
as the volume. As Matt mentioned above, currently we have to arrange 1:1 AZs
in Cinder and Nova to do this, which is not practical in commercial
deployments.

Thanks.
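As a reference for readers following the thread: the InstanceLocalityFilter
discussed below essentially keeps only the storage backends co-located with
the instance's host. A deliberately simplified sketch (illustrative only; the
real filter lives in Cinder's scheduler, resolves the instance's host via
Nova, and is requested per volume with a scheduler hint, e.g.
`cinder create --hint local_to_instance=<instance-uuid> <size>` per the
filter's documentation):

```python
# Simplified model of what the InstanceLocalityFilter achieves (assumption:
# Cinder backend hosts use the "host@backend" naming convention).
def filter_local_backends(backends, instance_host):
    """Keep only backends whose host part matches the instance's host."""
    return [b for b in backends if b["host"].split("@")[0] == instance_host]


backends = [
    {"host": "node-1@lvm"},
    {"host": "node-2@lvm"},
]

# Only the backend on the instance's host survives the filter.
local = filter_local_backends(backends, "node-1")
assert [b["host"] for b in local] == ["node-1@lvm"]
```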

On Mon, Sep 26, 2016 at 9:48 PM, Erlon Cruz  wrote:

>
>
> On Fri, Sep 23, 2016 at 10:19 PM, Zhenyu Zheng wrote:
>
>> Hi,
>>
>> Thanks all for the information. As for the filter Erlon mentioned
>> (InstanceLocalityFilter), this only solves part of the problem: we can
>> create new volumes for existing instances using this filter and then
>> attach them, but the root volume still cannot be guaranteed to be on the
>> same host as the compute resource, right?
>>
>>
> You have two options to use a disk on the same node as the instance.
> 1 - The easiest: just don't use Cinder volumes. When you create an instance
> from an image, the default behavior in Nova is to create the root disk on
> the local host (/var/lib/nova/instances). This has the advantage that Nova
> will cache the image locally, avoiding the need to copy the image over the
> wire (or to configure image caching in Cinder).
>
> 2 - Use Cinder volumes as the root disk. Nova will somehow have to pass the
> hints to the scheduler so it can properly use the InstanceLocalityFilter.
> If you put this in Nova and make sure that all requests have the proper
> hint, then the volumes created will be scheduled to the right host.
>
> Is there any reason why you can't use the first approach?
>
>
>
>
>> The idea here is that all the volumes use local disks.
>> I was wondering if we already have such a plan for after the Resource
>> Provider structure is completed?
>>
>> Thanks
>>
>> On Sat, Sep 24, 2016 at 2:05 AM, Erlon Cruz  wrote:
>>
>>> Not sure exactly what you mean, but in Cinder, using the
>>> InstanceLocalityFilter [1], you can schedule a volume to the same compute
>>> node where the instance is located. Is this what you need?
>>>
>>> [1] http://docs.openstack.org/developer/cinder/scheduler-filters.html#instancelocalityfilter
>>>
>>> On Fri, Sep 23, 2016 at 12:19 PM, Jay S. Bryant <
>>> jsbry...@electronicjungle.net> wrote:
>>>
 Kevin,

 This is functionality that has been requested in the past but has never
 been implemented.

 The best way to proceed would likely be to propose a blueprint/spec for
 this and start working this through that.

 -Jay


 On 09/23/2016 02:51 AM, Zhenyu Zheng wrote:

 Hi Novaers and Cinders:

 Quite often, application requirements demand using locally attached disks
 (or direct-attached disks) for OpenStack compute instances. One such
 example is running virtual Hadoop clusters on OpenStack.

 We can achieve this today by using the BlockDeviceDriver as the Cinder
 driver and matching AZs in Nova and Cinder, as illustrated in [1], but
 that is not very feasible in large-scale production deployments.

 Now that Nova is working on resource providers, trying to build a
 generic-resource-pool, is it possible to perform "volume-based scheduling"
 to build instances according to the volume? That could make it much easier
 to build instances like those mentioned above.

 Or do we have any other ways of doing this?

 References:
 [1] http://cloudgeekz.com/71/how-to-setup-openstack-to-use-local-disks-for-instances.html

 Thanks,

 Kevin Zheng


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.op
 enstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage 

Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to deploy OpenStack on k8s

2016-09-26 Thread Fox, Kevin M
I think some of the disconnect here is a potential misunderstanding about 
what kolla-kubernetes is.

Ultimately, to me, kolla-kubernetes is a database of architecture bits to 
successfully deploy and manage OpenStack on k8s. It's building blocks: pretty 
much what you asked for.

There are a bunch of ways of building OpenStack clouds. There is no one true 
way; it really depends on what the operator wants the cloud to do. Is a 
daemonset or a petset the best way to deploy a cinder-volume pod in k8s? The 
answer is: it depends. (We now have an example where each one is better in a 
different case.)

kolla-kubernetes is taking the building block approach. It takes a bit of 
information in from the operator or other tool, along with their main openstack 
configs, and generates k8s templates that are optimized for that case.

Who builds the configs, who tells it when to build what templates, and in what 
order they are started is a separate thing.

You should be able to do a 'kollakube template pod nova-api' and just see what 
it thinks is best.

If you want a nice set of documents, it should be easy to loop across them and 
dump them to html.

I think doing them in a machine-readable way rather than as a document makes 
much more sense, as it can be reused in multiple projects such as tripleo, 
fuel, and others, and we can all share a common database. We're trying to 
build a community around this database.

Asking us to basically make a new project that is just a human-only-readable 
version of the same database seems like a lot of work, with many fewer useful 
outcomes.

Please help the community build a great machine- and human-readable reference 
architecture system by contributing to the kolla-kubernetes project. There 
are plenty of opportunities to help out.

Maybe making some tools to make the data contained in the database more 
human-friendly would suit your interests? Maybe a nice web frontend that asks 
a few questions and renders templates out in nice, human-friendly ways?

Thanks,
Kevin

From: Flavio Percoco [fla...@redhat.com]
Sent: Monday, September 26, 2016 9:42 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to 
deploy OpenStack on k8s

On 23/09/16 17:47 +, Steven Dake (stdake) wrote:
>Flavio,
>
>Forgive the top post and lack of responding inline – I am dealing with 
>Outlook 2016, which apparently has a bug here [0].
>
>Your question:
>
>I can contribute to kolla-kubernetes all you want but that won't give me what I
>asked for in my original email and I'm pretty sure there are opinions about the
>"recommended" way for running OpenStack on kubernetes. Questions like: Should I
>run rabbit in a container? Should I put my database in there too? Now with
>PetSets it might be possible. Can we be smarter on how we place the services in
>the cluster? Or should we go with the traditional controller/compute/storage
>architecture.
>
>You may argue that I should just read the yaml files from kolla-kubernetes and
>start from there. May be true but that's why I asked if there was something
>written already.
>Your question ^
>
>My answer:
>I think what you are really after is why kolla-kubernetes has made the choices 
>we have made.  I would not argue that reading the code would answer that 
>question because it does not.  Instead it answers how those choices were 
>implemented.
>
>You are mistaken in thinking that contributing to kolla-kubernetes won’t give 
>you what you really want.  Participation in the Kolla community will answer 
>for you *why* choices were made as they were.  Many choices are left 
>unanswered as of yet and Red Hat can make a big impact in the future of the 
>decision making about *why*.  You have to participate to have your voice 
>heard.  If you are expecting the Kolla team to write a bunch of documentation 
>to explain *why* we have made the choices we have, we frankly don’t have time 
>for that.  Ryan and Michal may promise it with architecture diagrams and other 
>forms of incomplete documentation, but that won’t provide you a holistic view 
>of *why* and is wasted efforts on their part (no offense Michal and Ryan – I 
>think it’s a worthy goal.  The timing for such a request is terrible and I 
>don’t want to derail the team into endless discussions about the best way to 
>do things).
>
>The best way to do things is sorted out via the gerrit review process using 
>the standard OpenStack workflow through an open development process.

Steve,

Thanks for getting back on this. Unfortunately, I think you keep missing my
point and my goal.

I'd like to document the architectural choices and see if there's common
ground on which different teams can collaborate. In addition, we'll also see
at what point these teams start diverging in their architectural choices.
Will the time invested in this be entirely wasted? Maybe.
I'm failing to see what is wrong about my 

Re: [openstack-dev] Pecan Version 1.2

2016-09-26 Thread Ryan Petrello
Apologies for the trouble this caused.  As Dave mentioned, this change 
warranted a new major version of pecan, and I missed it.  I've reverted the 
offending commit and re-released a new version of pecan (1.2.1) to PyPI:


https://github.com/pecan/pecan/commit/4cfe319738304ca5dcc97694e12b3d2b2e24b1bb
https://github.com/pecan/pecan/commit/b3699aeae1f70b223a84308894523a64ede2b083
https://pypi.python.org/pypi/pecan/1.2.1

Once the dust settles in a few days, I'll re-release the new functionality in 
a new major release of pecan.
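For anyone who needs to keep the broken release out of an environment in the
meantime, a version exclusion in a requirements or constraints file is the
usual tool. A sketch only (hypothetical pins, not an actual
global-requirements change):

```
# requirements sketch: skip the release that changed empty-body responses
# from 200 to 204, while allowing the reverted 1.2.1.
pecan>=1.0.0,!=1.2  # 1.2 broke stable-API return codes; fixed in 1.2.1
```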


On 09/26/16 09:21 PM, Dave McCowan (dmccowan) wrote:


The Barbican project uses Pecan as our web framework.

At some point recently, OpenStack started picking up their new version 1.2.  
This version [1] changed one of their APIs such that certain calls that used to 
return 200 now return 204.  This has caused immediate problems for Barbican 
(our gates for /master, stable/newton, and stable/mitaka all fail) and a 
potential larger impact (changing the return code of REST calls is not 
acceptable for a stable API).

Before I start hacking three releases of Barbican to work around Pecan's 
change, I'd like to ask:  are any other projects having trouble with
Pecan Version 1.2?  Would it be possible/appropriate to block this version as 
not working for OpenStack?

Thanks,
Dave McCowan


[1]
http://pecan.readthedocs.io/en/latest/changes.html
https://github.com/pecan/pecan/issues/72




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Meeting Tuesday September 27th at 19:00 UTC

2016-09-26 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday September 27th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting

Anyone is welcome to add agenda items and everyone interested in
the project infrastructure and process surrounding automated testing
and deployment is encouraged to attend.

We didn't have a meeting last week because much of the team was at the
late-cycle sprint in Germany. In case you missed it or would like a
refresher, the meeting minutes and full logs from our last meeting, two
weeks ago, are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-09-13-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-09-13-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-09-13-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] First Ocata blueprint completion

2016-09-26 Thread Matt Riedemann

The award goes to the hyper-v storage QoS feature:

https://blueprints.launchpad.net/nova/+spec/hyperv-storage-qos

It's been around for a long time; sorry it took this long to get it in, but 
thanks to the hyper-v subteam for persisting with that one and being quick 
to respond to comments and questions.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pecan Version 1.2

2016-09-26 Thread Matt Riedemann

On 9/26/2016 5:49 PM, Matt Riedemann wrote:

On 9/26/2016 5:15 PM, Dave McCowan (dmccowan) wrote:

I don't know what triggered the update.  Our gates started breaking on
September 23, but I can't find a commit around that time that would have
caused this to happen.

From: Clay Gerrard
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, September 26, 2016 at 6:03 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] Pecan Version 1.2

I'm interested to hear how this works out.

I thought upper-constraints was somehow supposed to work to prevent this?
Like maybe don't install a brand new shiny upstream version on the gate
infrastructure test jobs until it passes all our tests? Prevent a fire
drill? That bug was active back in July, but I guess 1.2 was released
pretty recently? Maybe I don't understand the timeline.

-Clay

On Mon, Sep 26, 2016 at 2:21 PM, Dave McCowan (dmccowan) wrote:


The Barbican project uses Pecan as our web framework.

At some point recently, OpenStack started picking up their new
version 1.2.  This version [1] changed one of their APIs such that
certain calls that used to return 200 now return 204.  This has
caused immediate problems for Barbican (our gates for /master,
stable/newton, and stable/mitaka all fail) and a potential larger
impact (changing the return code of REST calls is not acceptable for
a stable API).

Before I start hacking three releases of Barbican to work around
Pecan's change, I'd like to ask:  are any other projects having
trouble with
Pecan Version 1.2?  Would it be possible/appropriate to block this
version as not working for OpenStack?

Thanks,
Dave McCowan


[1]
http://pecan.readthedocs.io/en/latest/changes.html

https://github.com/pecan/pecan/issues/72




__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe


http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



There is a bot that updates upper-constraints, so it was updated here:

https://github.com/openstack/requirements/commit/21015dfb3c3e9365721f589d11910a366f83


Reviews on these are basically: if they pass CI, they get merged, unless
we're in release candidate mode, which for master we aren't anymore
(since master is now Ocata).

As fungi pointed out, there are some representative jobs run on these
changes, but it's not an exhaustive list; it's mostly the integrated-gate
jobs, which barbican is not a part of, which is how this slipped through.

By the way, you're broken on stable/mitaka because barbican isn't using
upper-constraints there. Note the version of pecan in stable/mitaka is
1.0.4. Same story for stable/newton: pecan is 1.1.2 in stable/newton and
is frozen.

So a large part of the fix here is for barbican to use upper-constraints
in its unit test jobs. Looks like you can thank tonyb for doing this
for you:

https://review.openstack.org/#/c/358404/

Which says it's also in stable/newton, so I don't know how you're busted
in stable/newton.



This is why stable/newton is broken for you; you don't have this merged yet:

https://review.openstack.org/#/c/371695/
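For projects in the same situation, the fix described above usually amounts to
pointing tox's install command at the branch's upper-constraints file. A
sketch (the env var and fallback URL shown here are the commonly used ones;
adjust per branch):

```ini
# tox.ini sketch: apply upper-constraints to every pip install in the env.
[testenv]
install_command =
    pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
```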

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pecan Version 1.2

2016-09-26 Thread Matt Riedemann

On 9/26/2016 5:15 PM, Dave McCowan (dmccowan) wrote:

I don't know what triggered the update.  Our gates started breaking on
September 23, but I can't find a commit around that time that would have
caused this to happen.

From: Clay Gerrard
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, September 26, 2016 at 6:03 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] Pecan Version 1.2

I'm interested to hear how this works out.

I thought upper-constraints was somehow supposed to work to prevent
this?  Like maybe don't install a brand new shiny upstream version on
the gate infrastructure test jobs until it passes all our tests?
Prevent a fire drill?  That bug was active back in July - but I guess
1.2 was released pretty recently?   maybe I don't understand the
timeline.

-Clay

On Mon, Sep 26, 2016 at 2:21 PM, Dave McCowan (dmccowan) wrote:


The Barbican project uses Pecan as our web framework.

At some point recently, OpenStack started picking up their new
version 1.2.  This version [1] changed one of their APIs such that
certain calls that used to return 200 now return 204.  This has
caused immediate problems for Barbican (our gates for /master,
stable/newton, and stable/mitaka all fail) and a potential larger
impact (changing the return code of REST calls is not acceptable for
a stable API).

Before I start hacking three releases of Barbican to work around
Pecan's change, I'd like to ask:  are any other projects having
trouble with
Pecan Version 1.2?  Would it be possible/appropriate to block this
version as not working for OpenStack?

Thanks,
Dave McCowan


[1]
http://pecan.readthedocs.io/en/latest/changes.html

https://github.com/pecan/pecan/issues/72



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








There is a bot that updates upper-constraints, so it was updated here:

https://github.com/openstack/requirements/commit/21015dfb3c3e9365721f589d11910a366f83

Reviews on these are basically: if they pass CI they get merged, unless
we're in release candidate mode, which for master we aren't anymore
(since master is now ocata).


As fungi pointed out, there are some representative jobs run on these
changes but it's not an exhaustive list; it's mostly the integrated-gate
jobs, which barbican is not a part of, which is how it slipped through.


By the way, you're broken on stable/mitaka because barbican isn't using
upper-constraints. Note the version of pecan in stable/mitaka is 1.0.4.
Same story for stable/newton: pecan is 1.1.2 in stable/newton and is
frozen.


So a large part of the fix here is for barbican to use upper-constraints
in its unit test jobs. Looks like you can thank tonyb for doing this
for you:


https://review.openstack.org/#/c/358404/

Which says it's also in stable/newton, so I don't know how you're busted 
in stable/newton.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pecan Version 1.2

2016-09-26 Thread Jeremy Stanley
On 2016-09-26 15:03:19 -0700 (-0700), Clay Gerrard wrote:
[...]
> I thought upper-constraints was somehow supposed to work to prevent this?
> Like maybe don't install a brand new shiny upstream version on the gate
> infrastructure test jobs until it passes all our tests?  Prevent a fire
> drill?
[...]

There are some hopefully-representative jobs run against proposed
changes to upper-constraints.txt, but no way we could conceivably
run every job against them. Those jobs mostly attempt to determine
whether an update will wedge most projects but aren't likely to
catch an issue that impacts only one or a few.

What the upper constraints implementation _does_ give us, however,
is a central location we can quickly block breaking dep updates once
discovered rather than having to wait for them to propagate through
global requirements and get merged into tons of individual project
repos.
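Blocking a breaking dep release in that central location amounts to capping the version (e.g. excluding pecan 1.2 until projects adapt). A naive sketch of the comparison involved, assuming plain numeric dotted versions only and nothing like full PEP 440 semantics:

```python
def version_key(version):
    """Turn a dotted version like '1.1.2' into a comparable tuple (1, 1, 2)."""
    return tuple(int(part) for part in version.split('.'))


def satisfies_cap(installed, cap):
    """True if the installed version is strictly below the cap."""
    return version_key(installed) < version_key(cap)
```

With a cap of `1.2`, an environment holding pecan 1.1.2 passes while 1.2 (or 1.2.1) is rejected, which is exactly the kind of quick central block described above.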
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #91

2016-09-26 Thread Iury Gregory
Hi Puppeteers,

If you have any topic to add for this week, please use the etherpad:
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160927

See you tomorrow =)

2016-09-20 12:07 GMT-03:00 Iury Gregory :

> No topic/discussion in our agenda. Meeting cancelled, see you next week!
>
>
> 2016-09-19 18:00 GMT-03:00 Iury Gregory :
>
>> Hi Puppeteers,
>>
>> If you have any topic to add for this week, please use the etherpad:
>> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160920
>>
>> See you tomorrow =)
>>
>> 2016-09-12 13:23 GMT-03:00 Emilien Macchi :
>>
>>> Hi,
>>>
>>> Tomorrow we'll have our weekly meeting:
>>> https://etherpad.openstack.org/p/puppet-openstack-weekly-mee
>>> ting-20160913
>>> Feel free to add topics as usual,
>>>
>>> Thanks!
>>>
>>> On Thu, Sep 8, 2016 at 9:57 AM, Iury Gregory 
>>> wrote:
>>> > No topic/discussion in our agenda, we cancelled the meeting, see you
>>> next
>>> > week!
>>> >
>>> > 2016-09-05 15:39 GMT-03:00 Iury Gregory :
>>> >>
>>> >> Hi Puppeteers,
>>> >>
>>> >> If you have any topic to add for this week, please use the etherpad:
>>> >> https://etherpad.openstack.org/p/puppet-openstack-weekly-mee
>>> ting-20160906
>>> >>
>>> >> See you tomorrow =)
>>> >>
>>> >> 2016-08-30 12:02 GMT-03:00 Emilien Macchi :
>>> >>>
>>> >>> No topic this week, meeting cancelled!
>>> >>>
>>> >>> See you next week :)
>>> >>>
>>> >>> On Mon, Aug 29, 2016 at 1:45 PM, Emilien Macchi 
>>> >>> wrote:
>>> >>> > Hi,
>>> >>> >
>>> >>> > If you have any topic to add for this week, please use the
>>> etherpad:
>>> >>> >
>>> >>> > https://etherpad.openstack.org/p/puppet-openstack-weekly-mee
>>> ting-20160830
>>> >>> >
>>> >>> > See you tomorrow,
>>> >>> >
>>> >>> > On Tue, Aug 23, 2016 at 1:08 PM, Iury Gregory <
>>> iurygreg...@gmail.com>
>>> >>> > wrote:
>>> >>> >> No topic/discussion in our agenda, we cancelled the meeting, see
>>> you
>>> >>> >> next
>>> >>> >> week!
>>> >>> >>
>>> >>> >>
>>> >>> >>
>>> >>> >> 2016-08-22 16:19 GMT-03:00 Iury Gregory :
>>> >>> >>>
>>> >>> >>> Hi Puppeteers!
>>> >>> >>>
>>> >>> >>> We'll have our weekly meeting tomorrow at 3pm UTC on
>>> >>> >>> #openstack-meeting-4
>>> >>> >>>
>>> >>> >>> Here's a first agenda:
>>> >>> >>>
>>> >>> >>> https://etherpad.openstack.org/p/puppet-openstack-weekly-mee
>>> ting-20160823
>>> >>> >>>
>>> >>> >>> Feel free to add topics, and any outstanding bug and patch.
>>> >>> >>>
>>> >>> >>> See you tomorrow!
>>> >>> >>> Thanks,
>>> >>> >>
>>> >>> >>
>>> >>> >>
>>> >>> >>
>>> >>> >> --
>>> >>> >>
>>> >>> >> ~
>>> >>> >> Att[]'s
>>> >>> >> Iury Gregory Melo Ferreira
>>> >>> >> Master student in Computer Science at UFCG
>>> >>> >> E-mail:  iurygreg...@gmail.com
>>> >>> >> ~
>>> >>> >>
>>> >>> >>
>>> >>> >> 
>>> __
>>> >>> >> OpenStack Development Mailing List (not for usage questions)
>>> >>> >> Unsubscribe:
>>> >>> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >>> >>
>>> >>> >
>>> >>> >
>>> >>> >
>>> >>> > --
>>> >>> > Emilien Macchi
>>> >>>
>>> >>>
>>> >>>
>>> >>> --
>>> >>> Emilien Macchi
>>> >>>
>>> >>>
>>> >>> 
>>> __
>>> >>> OpenStack Development Mailing List (not for usage questions)
>>> >>> Unsubscribe:
>>> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >>
>>> >> ~
>>> >> Att[]'s
>>> >> Iury Gregory Melo Ferreira
>>> >> Master student in Computer Science at UFCG
>>> >> E-mail:  iurygreg...@gmail.com
>>> >> ~
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> >
>>> > ~
>>> > Att[]'s
>>> > Iury Gregory Melo Ferreira
>>> > Master student in Computer Science at UFCG
>>> > E-mail:  iurygreg...@gmail.com
>>> > ~
>>> >
>>> > 
>>> __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>>
>>>
>>> --
>>> Emilien Macchi
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> 

Re: [openstack-dev] Pecan Version 1.2

2016-09-26 Thread Dave McCowan (dmccowan)
I don't know what triggered the update.  Our gates started breaking on 
September 23, but I can't find a commit around that time that would have caused 
this to happen.

From: Clay Gerrard
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, September 26, 2016 at 6:03 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] Pecan Version 1.2

I'm interested to hear how this works out.

I thought upper-constraints was somehow supposed to work to prevent this?  Like 
maybe don't install a brand new shiny upstream version on the gate 
infrastructure test jobs until it passes all our tests?  Prevent a fire drill?  
That bug was active back in July - but I guess 1.2 was released pretty 
recently?   maybe I don't understand the timeline.

-Clay

On Mon, Sep 26, 2016 at 2:21 PM, Dave McCowan (dmccowan) wrote:

The Barbican project uses Pecan as our web framework.

At some point recently, OpenStack started picking up their new version 1.2.  
This version [1] changed one of their APIs such that certain calls that used to 
return 200 now return 204.  This has caused immediate problems for Barbican 
(our gates for /master, stable/newton, and stable/mitaka all fail) and a 
potential larger impact (changing the return code of REST calls is not 
acceptable for a stable API).

Before I start hacking three releases of Barbican to work around Pecan's 
change, I'd like to ask:  are any other projects having trouble with
Pecan Version 1.2?  Would it be possible/appropriate to block this version as 
not working for OpenStack?

Thanks,
Dave McCowan


[1]
http://pecan.readthedocs.io/en/latest/changes.html
https://github.com/pecan/pecan/issues/72




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pecan Version 1.2

2016-09-26 Thread Clay Gerrard
I'm interested to hear how this works out.

I thought upper-constraints was somehow supposed to work to prevent this?
Like maybe don't install a brand new shiny upstream version on the gate
infrastructure test jobs until it passes all our tests?  Prevent a fire
drill?  That bug was active back in July - but I guess 1.2 was released
pretty recently?   maybe I don't understand the timeline.

-Clay

On Mon, Sep 26, 2016 at 2:21 PM, Dave McCowan (dmccowan)  wrote:

>
> The Barbican project uses Pecan as our web framework.
>
> At some point recently, OpenStack started picking up their new version
> 1.2.  This version [1] changed one of their APIs such that certain calls
> that used to return 200 now return 204.  This has caused immediate problems
> for Barbican (our gates for /master, stable/newton, and stable/mitaka all
> fail) and a potential larger impact (changing the return code of REST calls
> is not acceptable for a stable API).
>
> Before I start hacking three releases of Barbican to work around Pecan's
> change, I'd like to ask:  are any other projects having trouble with
> Pecan Version 1.2?  Would it be possible/appropriate to block this version
> as not working for OpenStack?
>
> Thanks,
> Dave McCowan
>
>
> [1]
> http://pecan.readthedocs.io/en/latest/changes.html
> https://github.com/pecan/pecan/issues/72
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] [Congress] ceilometer client `alarms.list()` HTTPException (HTTP N/A)

2016-09-26 Thread Eric K
Hi all,

Looking for some help to resolve a breakage in Congress ceilometer-driver.
When Congress attempts to perform `alarms.list()` on ceilometer client,
the following exception is generated.
First suspicion is a Congress-end configuration problem, but it's only
`alarms.list()` that fails. Others such as `events.list()` succeed.

I've reproduced it here in a small example. Any thoughts on how we could
resolve or work around it? Thanks a ton!

On a fresh devstack install with ceilometer clients version 2.6.1,
client.alarms.list() errors when other things (like client.events.list())
succeeds.


Python 2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from keystoneauth1 import session # Version 2.12.1
>>> from keystoneauth1.identity import v2
>>> import ceilometerclient.client as cc
>>> auth = v2.Password(auth_url='http://192.168.218.145:5000/v2.0',
...                    username='admin', password='password', tenant_name='admin')
>>> sess = session.Session(auth=auth)
>>> client = cc.get_client(version='2', session=sess)
>>> client.events.list()  # succeeds
[]
>>> client.alarms.list()  # fails
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/ceilometerclient/v2/alarms.py", line 83, in list
    return self._list(options.build_url(self._path(), q))
  File "/usr/local/lib/python2.7/dist-packages/ceilometerclient/common/base.py", line 63, in _list
    resp = self.api.get(url)
  File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 187, in get
    return self.request(url, 'GET', **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/ceilometerclient/client.py", line 473, in request
    raise exc.from_response(resp, body)
ceilometerclient.exc.HTTPException: HTTPException (HTTP N/A)


Same information filed in bug:
https://bugs.launchpad.net/python-ceilometerclient/+bug/1626404
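Until the root cause is pinned down, one workaround on the Congress side is to degrade gracefully when a single datasource call fails, so one broken table doesn't take the whole driver down. This is a sketch with a hypothetical helper name, not actual Congress driver code:

```python
def safe_list(fetch, default=()):
    """Call a zero-argument client method, e.g. client.alarms.list.

    On any client exception, log it and return a copy of `default`
    instead of propagating the failure to the rest of the driver.
    """
    try:
        return fetch()
    except Exception as exc:  # e.g. ceilometerclient.exc.HTTPException
        print('datasource call failed: %s' % exc)
        return list(default)
```

Usage would look like `alarms = safe_list(client.alarms.list)`, leaving `events.list()` and the other working tables unaffected.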



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Pecan Version 1.2

2016-09-26 Thread Dave McCowan (dmccowan)

The Barbican project uses Pecan as our web framework.

At some point recently, OpenStack started picking up their new version 1.2.  
This version [1] changed one of their APIs such that certain calls that used to 
return 200 now return 204.  This has caused immediate problems for Barbican 
(our gates for /master, stable/newton, and stable/mitaka all fail) and a 
potential larger impact (changing the return code of REST calls is not 
acceptable for a stable API).

Before I start hacking three releases of Barbican to work around Pecan's 
change, I'd like to ask:  are any other projects having trouble with
Pecan Version 1.2?  Would it be possible/appropriate to block this version as 
not working for OpenStack?

Thanks,
Dave McCowan


[1]
http://pecan.readthedocs.io/en/latest/changes.html
https://github.com/pecan/pecan/issues/72
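The 200-versus-204 breakage is partly a client-strictness issue: tests that compare against a single status code break when the framework legitimately shifts within the 2xx range. One defensive option while the version question is settled is to treat any 2xx as success. A minimal sketch, not Barbican's actual client code:

```python
def is_success(status_code):
    """Treat any 2xx response as success, so a 200 -> 204 shift doesn't break callers."""
    return 200 <= status_code < 300
```

This only helps callers that don't care about the response body; for a stable public API contract, pinning the framework version (as discussed in this thread) is still the right fix.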

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][elections][TC] TC Candidacy

2016-09-26 Thread Jeremy Stanley
On 2016-09-26 15:39:23 -0400 (-0400), Jay Pipes wrote:
[...]
> >* Reinstate Stackforge as the primary incubator for new projects
> 
> Having Stackforge as a separate Github organization and set of repositories
> was a maintenance nightmare due to the awkwardness of renaming projects when
> they "moved into OpenStack".
> 
> Also, reminder: The Big Tent != the dissolution of Stackforge.
[...]

Also a reminder that this wouldn't be a reinstatement. While it was
sometimes common for people without a clear understanding of
community process to assume there was a requirement that projects
had to start in Stackforge and then graduate to being Official,
that's not actually how incubation worked. In fact, Stackforge was
(and still is though not by that name any longer as we retired the
branding around it) just a label for unofficial projects. The
"incubated" projects before the big tent were semi-official, could
use the "openstack" Git namespace (back when we still restricted
it), could publish to docs.openstack.org, and so on.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Call for Mentors and Mentees!

2016-09-26 Thread Kendall Nelson
Hello All!

We are excited to kick off the next round of mentoring sponsored by the
Women of OpenStack! Anyone can sign up who is interested in mentoring or in
need of a mentor!

Mentors! We can set you up as a technical mentor based on your areas of
expertise within the community or we can match you with a mentee looking
for more career advice.

Mentees! We have a variety of Stackers interested in giving you both
technical and career advice! We have mentors from a wide range of projects
and working groups willing to help you get involved in different aspects of
the community.

We would like to make as many matches before the Barcelona Summit as we can
so you can meet with your mentee/mentor there if you are both attending.

Sign-up to get involved here:

https://openstackfoundation.formstack.com/forms/mentor_mentee_signup_pre_barcelona


All the Best,

Kendall Nelson (diablo_rojo)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Developer Mailing List Digest September 17-23

2016-09-26 Thread Mike Perez
HTML version: 
http://www.openstack.org/blog/2016/09/openstack-developer-mailing-list-digest-20160923/

Announcing firehose.openstack.org
=
* An MQTT-based unified message bus for infra services.
* This allows a single place to go for consuming messages of events from infra
  services.
* Two interfaces for subscribing to topics:
  - MQTT protocol on the default port
  - Websockets over port 80
* Launchpad and gerrit events are the only things currently sending messages to
  firehose, but the plan is to expand this.
* An example [1] of gerritbot on the consuming side, which has support for
  subscribing to gerrit event stream over MQTT.
* A spec giving details on firehose [2].
* Docs on firehose [3].
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2016-September/103985.html
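The topic-based subscription model above follows ordinary MQTT filter semantics: `+` matches exactly one topic level, `#` matches the remainder. A small illustrative sketch of those matching rules (not firehose-specific code, and topic names here are made up):

```python
def topic_matches(topic_filter, topic):
    """Return True if an MQTT topic filter matches a concrete topic."""
    f_parts = topic_filter.split('/')
    t_parts = topic.split('/')
    for i, part in enumerate(f_parts):
        if part == '#':          # multi-level wildcard: matches the rest
            return True
        if i >= len(t_parts):    # filter is longer than the topic
            return False
        if part != '+' and part != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)
```

So a consumer could subscribe to everything under a service with `gerrit/#`, or to one event type across projects with a `+` segment.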


Release countdown for week R-1, 26-30
=
* Focus: All teams should be working on release-critical bugs before the final
  release.
* General
  - 29 September is the deadline for new release candidates or releases
    from intermediary projects.
  - Quiet period to follow before the last release candidates on 6th October.
* Release actions:
  - Projects not following the milestone-based release model who want
a stable/newton branch created should talk to the release team.
  - Watch for translation patches and merge them quickly to ensure we have as
many user-facing strings translated as possible in the release candidates.
-- If your project has already been branched, make sure those patches are
applied to the stable branch.
  - Liaisons for projects with independent deliverables should import the
release history by preparing patches to openstack/release.
* Important Dates:
  - Newton last RC, 29 September
  - Newton final release, 6 October
  - Newton release schedule [4]
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2016-September/103252.html


Removal of Security and OpenStackSalt Project Teams From the Big Tent
=
* The Security and OpenStackSalt projects are without PTLs. Leaderless projects
  default to the Technical Committee for a decision on what to do with the
  project [5]. A majority of the Technical Committee has agreed to have these
  projects removed.
* OpenStackSalt is a relatively new addition to the Big Tent, so if they got
  their act together, they could be reproposed.
* We still need to care about security, and we still need a home for the
  vulnerability management team (VMT). The suggested way forward is to have the
  VMT apply to be its own official project team, and have security be a working
  group.
* The Mitaka PTL for the Security project mentions missing the election date,
  but provides some things the team has been working on:
  - Issuing Security Notes for Glance, Nova, Horizon, Bandit, Neutron and
Barbican.
  - Updating the security guide (the book we wrote on securing OpenStack)
  - Hosting a midcycle and inducting new members
  - Supporting the VMT with several embargoed and complex vulnerabilities
  - Building up a security blog
  - Making OpenStack the biggest open source project to ever receive the Core
    Infrastructure Initiative Best Practices Badge
  - Working on the OpenStack Security Whitepaper
  - Developing CI security tooling such as Bandit
* One of the Technical Committee members privately received information that
  explains why the security PTL was not on top of things. With ~60 teams around,
  there will always be one or two that miss, but here we're not sure it passes
  the bar of “non-alignment with the community” that would make the security
  team unfit to be an official OpenStack team.
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2016-September/thread.html#104170


[1] - 
http://git.openstack.org/cgit/openstack-infra/gerritbot/commit/?id=7c6e57983d499b16b3fabb864cf3b
[2] - http://specs.openstack.org/openstack-infra/infra-specs/specs/firehose.html
[3] - http://docs.openstack.org/infra/system-config/firehose.html
[4] - http://releases.openstack.org/newton/schedule.html
[5] - 
http://docs.openstack.org/project-team-guide/open-community.html#technical-committee-and-ptl-elections

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][elections][TC] TC Candidacy

2016-09-26 Thread Jay Pipes

John, appreciate your candor and candidacy. Some questions inline for you...

On 09/26/2016 06:57 AM, John Davidge wrote:

Last year, the TC moved OpenStack away from the Integrated Release, and
into The Big Tent. This removed the separation between those projects
considered integral to OpenStack, and those which enhance it.


Who decides what is integral to OpenStack and what merely "enhances" it, 
though? The TC? The DefCore group? The Board of Directors? One might say 
all three groups have a say in defining what "is OpenStack", no? And 
therefore all three groups would decide what is "integral" to OpenStack.



Since then, the number of official projects has gone from ~12 to 60.
While this was a fantastic move for community inclusivity, it has
also made life harder for operators and customers


We do indeed have a long way to go in improving the operator's 
experience for many OpenStack projects.


However, remember that many of the OpenStack projects came into 
existence because operators were asking for a certain use case to be 
fulfilled. I'm uncertain how putting some projects into a 
not-really-OpenStack-but-related bucket will help operators much. Is the 
pain for operators that there are too many projects in OpenStack, or is 
the pain that those projects are not universally installable or usable 
in the same fashion?



and has diminished the focus on OpenStack’s core
purpose.


What is OpenStack's core purpose? :) The OpenStack mission is 
intentionally encompassing of a wide variety of users and use cases. The 
Big Tent, by the way, did not affect this fact. The OpenStack mission 
pre-exists the Big Tent. All the Big Tent did was say that projects that 
wanted to be official OpenStack projects needed to follow the 4 Opens, 
submit to TC governance, and further the mission of OpenStack.


It sounds like you would like to limit the scope of the OpenStack 
mission, which is not the same as getting rid of the Big Tent. If that's 
the case, hey, totally cool with me :) But let's be specific about what 
it is you are suggesting.



Managing every team under one roof has led to issues for both the core and
the newer projects. The experience so far has taught us that there isn’t a
single set of rules that can be helpfully applied to both.


Hmm, I disagree about that. I think that experience actually *has* shown 
us that there is a single set of rules that can/should be applied to all 
projects that wish to be called an OpenStack project.



I believe that now is the time to take The Big Tent’s ideas and
iterate upon them to create a new model that can promote inclusivity,
while still preserving a clear focus for the core of OpenStack. The
main points of this new model are:

* Define OpenStack as its core components


Which components would these be? Folks can (and will) argue with you 
that a particular service is critical and should be considered core. But 
differing opinions here will lead to a decision-making inertia that will 
be difficult to overcome. You've been warned. :)



* Introduce a new home for complementary projects - The OpenStack Family
* Reinstate Stackforge as the primary incubator for new projects


Having Stackforge as a separate Github organization and set of 
repositories was a maintenance nightmare due to the awkwardness of 
renaming projects when they "moved into OpenStack".


Also, reminder: The Big Tent != the dissolution of Stackforge.


OpenStack will once again be a focused set of closely aligned projects
working together to provide an operating system for the datacenter.


As I've said before, this was never really reality, even since the 
beginning of OpenStack. :)


> The

OpenStack Family will provide a home for projects that work to improve the
experience of an OpenStack cloud (think Ceilometer, Heat, etc), while
protecting them from some of the more prescriptive rules that go with being
a core OpenStack component. Stackforge will be the main focus of
early-stage innovation, with a clearly defined path towards graduation into
The OpenStack Family.


Who gets to decide this graduation? The TC? The DefCore committee? The 
Board of Directors? What criteria would you use in the graduation 
requirements? Would they be technical criteria or governance/process 
criteria?


What technical benefits would graduating give to a project? If no 
technical benefits -- the benefits would be entirely marketing, 
political or reputational -- then should the *Technical* Committee be 
the group that decides whether a project graduates?


These are all questions that you will inevitably be asked to consider 
when you go (back down) the route you suggest. So, I think it's worth 
responding here in your TC candidacy mail.


All the best,
-jay

> I believe that this model[4] can go a long way

towards solving many of the pain points that we are seeing with OpenStack
today.

This transformation is one that I think is very important for the future
of OpenStack. We have a fantastic project 

Re: [openstack-dev] [kolla] the user in container should NOT have write permission for configuration file

2016-09-26 Thread Sam Yaple
On Mon, Sep 26, 2016 at 4:32 PM, Jeffrey Zhang 
wrote:

> Hey Sam,
>
> Yes, world-readable is bad. But writable by the current running service is
> also bad.
>
> But in nova.conf, the rootwrap_config is configurable. It can be changed
> to a custom file to gain root permission.
>
> # nova.conf
> rootwrap_config = /tmp/rootwrap.conf
>
> # /tmp/rootwrap.conf
> [DEFAULT]
> filters_path = /tmp/rootwrap.conf.d/
>

Sorry Jeffrey, you are mistaken about this. Changing the filters_path
means nothing, because the filters path is hardcoded in the
/etc/sudoers.d/nova file. Sudo will not work with any arbitrary path you
specify. If you want to make the service config files nova:nova 0400 you
can, though there is no added benefit in doing this in my opinion. It is
not a bad precaution I suppose, but it may affect some people's
development cycle with Kolla. I remember I personally changed the config
from inside the running docker container once or twice while testing.
SamYaple


> so, for the file should be
>
> 0640 root:nova nova.conf
>
>
> On Mon, Sep 26, 2016 at 10:43 PM, Sam Yaple  wrote:
>
>> On Mon, Sep 26, 2016 at 1:18 PM, Jeffrey Zhang 
>> wrote:
>>
>>> Using the same user for running service and the configuration files is
>>> a danger. i.e. the service running user shouldn't change the
>>> configuration files.
>>>
>>> a simple attack like:
>>> * a hacker hacked into nova-api container with nova user
>>> * he can change the /etc/nova/rootwrap.conf file and the
>>> /etc/nova/rootwrap.d files, which gives him much greater authority
>>> via sudo
>>> * he also can change the /etc/nova/nova.conf file to use another
>>> privsep_command.helper_command to get greater authority
>>> [privsep_entrypoint]
>>> helper_command=sudo nova-rootwrap /etc/nova/rootwrap.conf
>>> privsep-helper --config-file /etc/nova/nova.conf
>>>
>>> This is not true. The helper command requires the /etc/sudoers.d/*
>> configuration files to work. So just because it was changed to something
>> else doesn't mean an attacker could actually do anything to adjust that,
>> considering /etc/nova/rootwrap* is already owned by root. This was fixed
>> early on in the Kolla lifecycle, pre-liberty.
>>
>> Feel free to adjust /etc/nova/nova.conf to root:root, but you won't be
>> gaining any security advantage in doing so, you will be making it worse
>> (see below). I don't know of a need for it to be owned by the service user,
>> other than that is how all openstack things are packaged and those are the
>> permissions in the repo and other deploy tools.
>>
>>
>>> So right rule should be: do not let the service running user have
>>> write permission to configuration files,
>>>
>>> about for the nova.conf file, i think root:root with 644 permission
>>> is enough
>>> for the directory file, root:root with 755 is enough.
>>>
>>
>> So this actually makes it _less_ secure. The 0600 permissions were chosen
>> for a reason.  The nova.conf file has passwords to the DB and rabbitmq. If
>> the configuration files are world readable then those passwords could leak
>> to an unprivileged user on the host.
>>
>>
>>> A related BP[0] and PS[1] is created
>>>
>>> [0] https://blueprints.launchpad.net/kolla/+spec/config-readonly
>>> [1] https://review.openstack.org/376465
>>>
>>> On Sat, Sep 24, 2016 at 11:08 PM, 1392607554 <1392607...@qq.com> wrote:
>>>
 configuration file owner and permission in container

 --
 Regards,
 zhubingbing

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.op
 enstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> --
>>> Regards,
>>> Jeffrey Zhang
>>> Blog: http://xcodest.me
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] the user in container should NOT have write permission for configuration file

2016-09-26 Thread Sam Yaple
On Mon, Sep 26, 2016 at 3:03 PM, Christian Berendt <
bere...@betacloud-solutions.de> wrote:

> > On 26 Sep 2016, at 16:43, Sam Yaple  wrote:
> >
> > So this actually makes it _less_ secure. The 0600 permissions were
> chosen for a reason.  The nova.conf file has passwords to the DB and
> rabbitmq. If the configuration files are world readable then those
> passwords could leak to an unprivileged user on the host.
>
> Confirmed. Please do not make configuration files world readable.
>
> We use volumes for the configuration file directories. Why do we not
> simply use read only volumes? This way we do not have to touch the current
> implementation (files are owned by the service user with 0600 permissions)
> and can make the configuration files read only.
>

This is already done. When I first setup the config bind mounting we did
make sure it was read only. See [1]. The way configs work in Kolla is the
files from that readonly bind mount are copied into the appropriate
directory in the container on container startup.

[1]
https://github.com/openstack/kolla/blob/b1f986c3492faa2d5386fc7baabbd6d8e370554a/ansible/roles/nova/tasks/start_compute.yml#L11
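
The copy-at-startup pattern described above can be emulated in a few lines of shell. This is a rough, illustrative sketch only (the directory names are stand-ins, not Kolla's real paths or its actual entrypoint logic):

```shell
#!/bin/sh
# Rough emulation of the pattern described above: configs live in a
# read-only staging directory (the :ro bind mount) and are copied into
# the runtime location with tight permissions at container start.
# All paths here are illustrative stand-ins.

staging=$(mktemp -d)   # stands in for the read-only bind mount
runtime=$(mktemp -d)   # stands in for /etc/nova inside the container

printf '[DEFAULT]\ndebug = False\n' > "$staging/nova.conf"
chmod -R a-w "$staging"            # emulate the :ro mount

# What a container entrypoint would do on startup:
cp "$staging/nova.conf" "$runtime/nova.conf"
chmod 0600 "$runtime/nova.conf"    # readable only by the service user

stat -c '%a %n' "$runtime/nova.conf"   # prints: 600 .../nova.conf

chmod -R u+w "$staging"; rm -rf "$staging" "$runtime"
```

The staged copy stays immutable even if the container is compromised, while the runtime copy keeps the 0600 visibility discussed in this thread.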

>
> Christian.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][keystone] keystone Newton RC2 available

2016-09-26 Thread Doug Hellmann
Hello everyone,

A new release candidate for keystone for the end of the Newton cycle
is available!  You can find the source code tarball at:

https://tarballs.openstack.org/keystone/keystone-10.0.0.0rc2.tar.gz

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the final
Newton release on 6 October. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/newton release
branch at:

http://git.openstack.org/cgit/openstack/keystone/log/?h=stable/newton

If you find an issue that could be considered release-critical,
please file it at:

https://bugs.launchpad.net/keystone/+filebug

and tag it *newton-rc-potential* to bring it to the keystone release
crew's attention.

Thanks,
Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Preparing RC2

2016-09-26 Thread Armando M.
Neutrinos,

At this point, please consider the list of fixes for RC2 [1] final. We are
no longer considering adding anything to the list unless it's critical and
preventing merges.

Zuul is pretty backed up, so we need to clear the current backlog before
considering adding anything else. We have a WIP [2] from which we'll release.
Ihar and I will refresh it by EOB Tue Sept 27.

If there is anything else you would like to address, please reach out on
IRC.

Thanks,
Armando

[1] https://launchpad.net/neutron/+milestone/newton-rc2
[2] https://review.openstack.org/#/c/376998/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Security] New blog post - "Secure Development in Python"

2016-09-26 Thread Travis McPeak
For those that aren't aware the OSSP maintains a blog:
http://openstack-security.github.io/.

I published a new post today about resources created by the OSSP to help
developers write secure Python code.  You can view it here:
http://openstack-security.github.io/organization/2016/09/26/python-secure-development.html
.

We hope you find it useful, and as always, if you have questions please
send them to the Dev ML (with the [Security] tag) or visit us on
#openstack-security on IRC.

-Travis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] weekly subteam report

2016-09-26 Thread Loo, Ruby
Hi,

Here is this week's subteam report for Ironic. As usual, this is pulled 
directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
===
- Stats (diff between 16 Sep 2016 and 26 Sep 2016)
- Ironic: 195 bugs (+9) + 216 wishlist items (+1). 0 new (-1), 159 in progress 
(+14), 0 critical, 31 high (-1) and 17 incomplete (+2)
- Inspector: 11 bugs + 19 wishlist items (+1). 1 new, 10 in progress (-2), 0 
critical, 1 high (-1) and 1 incomplete
- Nova bugs with Ironic tag: 6 (-2). 0 new, 0 critical, 0 high

Gate improvements (jlvillal, lucasagomes, dtantsur)
===
* trello: 
https://trello.com/c/HWVHhxOj/1-multi-tenant-networking-network-isolation
- switching jobs to xenial is still in progress
- one non-voting inspector job was switched
- vasyl to switch other non-voting jobs

Notifications (mariojv)
===
* trello: https://trello.com/c/MD8HNcwJ/17-notifications
- Small spec updates landed
- Patches still pending review:
- https://review.openstack.org/#/c/347242/ yuriyz has a spec up for 
CRUD notifications, needs reviewers
- https://review.openstack.org/#/c/321865/ power state notifications 
ready for review; outstanding issues seem to be related to deciding on what to 
include in payload

Serial console (yossy, hshiina, yuikotakadamori)

* trello: https://trello.com/c/nm3I8djr/20-serial-console
- Nova patch: needs review (Ocata is open) https://review.openstack.org/#/c/328157/
- ironic bug: needs review https://review.openstack.org/#/c/363647/

Enhanced root device hints (lucasagomes)

* trello: https://trello.com/c/f9DTEvDB/21-enhanced-root-device-hints
- Ocata is now open, waiting on an ironic-lib release to unblock https://review.openstack.org/#/c/366742/

Install guide migration (JayF and mat128)
=
* trello: https://trello.com/c/PwH1pexJ/23-rescue-mode
- Waiting for https://review.openstack.org/#/c/376603/ "Skip slow ironic tests on install-guide changes"
- otherwise, the patches are ready to land :)

Inspector (dtantsur)
===
- we've made the final release, thanks all
- we've got a tempest equivalent of all our bash-based jobs
- I've just switched the non-voting discovery job to Xenial to see if it 
works
- if it does, it will be the last to replace the bash job
- early ocata changes that need attention/are good to review:
- the states patch (HA)
- Listing introspection statuses endpoint spec

Bifrost (TheJulia)
===
- Documentation on how to pronounce Bifrost merged :)

.

Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] TripleO Core nominations

2016-09-26 Thread Carlos Camacho Gonzalez
Hi guys!!!,

This is awesome, I really appreciate all the support/guidance and help
from you! The list is quite long, so, once again, a big thanks to all of
you.

Cheers,
Carlos


On Mon, Sep 26, 2016 at 6:04 PM, Steven Hardy  wrote:
> On Thu, Sep 15, 2016 at 10:20:07AM +0100, Steven Hardy wrote:
>> Hi all,
>>
>> As we work to finish the last remaining tasks for Newton, it's a good time
>> to look back over the cycle, and recognize the excellent work done by
>> several new contributors.
>>
>> We've seen a different contributor pattern develop recently, where many
>> folks are subsystem experts and mostly focus on a particular project or
>> area of functionality.  I think this is a good thing, and it's hopefully
>> going to allow our community to scale more effectively over time (and it
>> fits pretty nicely with our new composable/modular architecture).
>>
>> We do still need folks who can review with the entire TripleO architecture
>> in mind, but I'm very confident folks will start out as subsystem experts
>> and over time broaden their area of experience to encompass more of
>> the TripleO projects (we're already starting to see this IMO).
>>
>> We've had some discussion in the past[1] about strictly defining subteams,
>> vs just adding folks to tripleo-core and expecting good judgement to be
>> used (e.g only approve/+2 stuff you're familiar with - and note that it's
>> totally fine for a core reviewer to continue to +1 things if the patch
>> looks OK but is outside their area of experience).
>>
>> So, I'm in favor of continuing that pattern and just welcoming some of our
>> subsystem expert friends to tripleo-core, let me know if folks feel
>> strongly otherwise :)
>>
>> The nominations, are based partly on the stats[2] and partly on my own
>> experience looking at reviews, patches and IRC discussion with these folks
>> - I've included details of the subsystems I expect these folks to focus
>> their +2A power on (at least initially):
>>
>> 1. Brent Eagles
>>
>> Brent has been doing some excellent work mostly related to Neutron this
>> cycle - his reviews have been increasingly detailed, and show a solid
>> understanding of our composable services architecture.  He's also provided
>> a lot of valuable feedback on specs such as dpdk and sr-iov.  I propose
>> Brent continues this excellent Neutron focussed work, while also expanding
>> his review focus such as the good feedback he's been providing on new
>> Mistral actions in tripleo-common for custom-roles.
>>
>> 2. Pradeep Kilambi
>>
>> Pradeep has done a large amount of pretty complex work around Ceilometer
>> and Aodh over the last two cycles - he's dealt with some pretty tough
>> challenges around upgrades and has consistently provided good review
>> feedback and solid analysis via discussion on IRC.  I propose Prad
>> continues this excellent Ceilometer/Aodh focussed work, while also
>> expanding review focus aiming to cover more of t-h-t and other repos over
>> time.
>>
>> 3. Carlos Camacho
>>
>> Carlos has been mostly focussed on composability, and has done a great job
>> of working through the initial architecture implementation, including
>> writing some very detailed initial docs[3] to help folks make the transition
>> to the new architecture.  I'd suggest that Carlos looks to maintain this
>> focus on composable services, while also building depth of reviews in other
>> repos.
>>
>> 4. Ryan Brady
>>
>> Ryan has been one of the main contributors implementing the new Mistral
>> based API in tripleo-common.  His reviews, patches and IRC discussion have
>> consistently demonstrated that he's an expert on the mistral
>> actions/workflows and I think it makes sense for him to help with review
>> velocity in this area, and also look to help with those subsystems
>> interacting with the API such as tripleoclient.
>>
>> 5. Dan Sneddon
>>
>> For many cycles, Dan has been driving direction around our network
>> architecture, and he's been consistently doing a relatively small number of
>> very high-quality and insightful reviews on both os-net-config and the
>> network templates for tripleo-heat-templates.  I'd suggest Dan continues
>> this focus, and he's indicated he may have more bandwidth to help with
>> reviews around networking in future.
>>
>> Please can I get feedback from existing core reviewers - you're free to +1
>> these nominations (or abstain), but any -1 will veto the process.  I'll
>> wait one week, and if we have consensus add the above folks to
>> tripleo-core.
>
> Ok, so we got quite a few +1s and no objections, so I will go ahead and add
> the folks listed above to tripleo-core, congratulations (and thanks!) guys,
> keep up the great work! :)
>
> Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [qa] tempest-cores update

2016-09-26 Thread Ken'ichi Ohmichi
Hi Hugh,


2016-09-25 22:50 GMT-07:00 Hugh Blemings :
> Hi Ohmichi-san,
>
> Firstly congratulations on becoming PTL for Quality Assurance!

Thanks

>> As previous mail, Marc has resigned from tempest-cores.
>> In addition, David also has done from tempest-cores to concentrate on new
>> work.
>> Thank you two for many contributions to the project and I wish your
>> continuous successes.
>
>
> I'm including a link for your email about tempest-cores in this week's
> Lwood.  Could you please tell David's surname so I can include that in the
> item ?

Oh, sorry about that.
He is David Kranz.

Thanks
Ken Ohmichi

---

> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to deploy OpenStack on k8s

2016-09-26 Thread Flavio Percoco

On 23/09/16 17:47 +, Steven Dake (stdake) wrote:

Flavio,

Forgive the top post and lack of responding inline – I am dealing with Outlook 
2016, which apparently has a bug here [0].

Your question:

I can contribute to kolla-kubernetes all you want but that won't give me what I
asked for in my original email and I'm pretty sure there are opinions about the
"recommended" way for running OpenStack on kubernetes. Questions like: Should I
run rabbit in a container? Should I put my database in there too? Now with
PetSets it might be possible. Can we be smarter on how we place the services in
the cluster? Or should we go with the traditional controller/compute/storage
architecture.

You may argue that I should just read the yaml files from kolla-kubernetes and
start from there. May be true but that's why I asked if there was something
written already.
Your question ^

My answer:
I think what you are really after is why kolla-kubernetes has made the choices 
we have made.  I would not argue that reading the code would answer that 
question because it does not.  Instead it answers how those choices were 
implemented.

You are mistaken in thinking that contributing to kolla-kubernetes won’t give 
you what you really want.  Participation in the Kolla community will answer for 
you *why* choices were made as they were.  Many choices are left unanswered as 
of yet and Red Hat can make a big impact in the future of the decision making 
about *why*.  You have to participate to have your voice heard.  If you are 
expecting the Kolla team to write a bunch of documentation to explain *why* we 
have made the choices we have, we frankly don’t have time for that.  Ryan and 
Michal may promise it with architecture diagrams and other forms of incomplete 
documentation, but that won’t provide you a holistic view of *why* and is 
wasted efforts on their part (no offense Michal and Ryan – I think it’s a 
worthy goal.  The timing for such a request is terrible and I don’t want to 
derail the team into endless discussions about the best way to do things).

The best way to do things is sorted out via the gerrit review process using the 
standard OpenStack workflow through an open development process.


Steve,

Thanks for getting back on this. Unfortunately, I think you keep missing my
point and my goal.

I'd like to document the architectural choices and see if there's a common
ground in which different teams can collaborate on. In addition to this, we'll
also see at what point these teams will start diverging in architectural
choices. Will the time invested on this be entirely wasted? Maybe.

I'm failing to see what is wrong about my request. You mention that I need to
contribute to have my voice heard in Kolla as if I'm trying to change anything
in it. Spoiler alert: I'm not.

I'd like to first work on what I've mentioned in my email and then take the next
step. It's also important to note that I've not asked the Kolla team to do this
themselves. I've said that I'd like to hear thoughts and friendly discussions on
this from different teams (not just kolla), which could easily happen over
email. For example, we could stop arguing whether my email makes sense or not
and perhaps start dropping some ideas here.

Anyway, I appreciate your input and you taking the time to explain the status
and efforts of the Kolla team. As far as my contributions go, this is a way for
me to start contributing on the deployment on containers efforts around the
community and more specifically on the kubernetes side. It might not be what
everyone wants but I believe it does help and it'll create a common place for
collaboration on this topic amongst different communities (including Ops).

Flavio


Flavio,

Consider this an invitation to come join us – we want Red Hat’s participation.

Regards
-steve


[0]  
http://answers.microsoft.com/en-us/msoffice/forum/msoffice_outlook-mso_mac/outlook-for-mac-2016-replying-inline-with-html-no/298b830e-11ea-416c-b951-918d8f9562cb

From: Flavio Percoco 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, September 23, 2016 at 3:10 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to 
deploy OpenStack on k8s

On 22/09/16 20:55 +, Steven Dake (stdake) wrote:
Flavio,

Apologies for delay in response – my backlog is large.

Forgive me if I parsed your message incorrectly.

It's probably me failing to communicate my intent or just the intent not being
good enough or worth it at all.

It came across to me as “How do I blaze a trail for OpenStack on Kubernetes?”.  
That was asked of me personally 3 years ago which led to the formation of the 
Kolla project inside Red Hat.  Our initial effort at that activity failed.  
Instead we decided kubernetes wasn’t ready for trailblazing in this space and 

Re: [openstack-dev] [kolla] the user in container should NOT have write permission for configuration file

2016-09-26 Thread Jeffrey Zhang
Hey Sam,

Yes, world-readable is bad. But writable by the currently running service is
also bad.

But in nova.conf, the rootwrap_config is configurable. It can be changed to
a custom file to gain root permission.

# nova.conf
rootwrap_config = /tmp/rootwrap.conf

# /tmp/rootwrap.conf
[DEFAULT]
filters_path = /tmp/rootwrap.conf.d/

So, the file permissions should be:

0640 root:nova nova.conf
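
That rule can be expressed as a small shell check. This is a hypothetical sketch, not Kolla's actual tooling; the temporary file stands in for /etc/nova/nova.conf:

```shell
#!/bin/sh
# Hypothetical check, not part of Kolla: flag config files that anyone
# besides root could modify via the group/other write bits.
# A 0640 root:nova nova.conf passes; a 0664 one fails.
check_config() {
    f="$1"
    # GNU find's -perm /022 matches if any of the group/other write bits are set.
    if [ -n "$(find "$f" -maxdepth 0 -perm /022)" ]; then
        echo "WARN: $f is writable by group/other"
    else
        echo "OK: $f"
    fi
}

# Self-contained demo on a temporary file standing in for nova.conf.
demo=$(mktemp)
chmod 0640 "$demo"
check_config "$demo"      # OK

chmod 0664 "$demo"
check_config "$demo"      # WARN: group-writable
rm -f "$demo"
```

A check like this could run at container start to enforce "service user can read, but not write, its own config".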


On Mon, Sep 26, 2016 at 10:43 PM, Sam Yaple  wrote:

> On Mon, Sep 26, 2016 at 1:18 PM, Jeffrey Zhang 
> wrote:
>
>> Using the same user for running service and the configuration files is
>> a danger. i.e. the service running user shouldn't change the
>> configuration files.
>>
>> a simple attack like:
>> * a hacker hacked into nova-api container with nova user
>> * he can change the /etc/nova/rootwrap.conf file and
>> /etc/nova/rootwrap.d file, which he can get much greater authority
>> with sudo
>> * he also can change the /etc/nova/nova.conf file to use another
>> privsep_command.helper_command to get greater authority
>> [privsep_entrypoint]
>> helper_command=sudo nova-rootwrap /etc/nova/rootwrap.conf
>> privsep-helper --config-file /etc/nova/nova.conf
>>
>> This is not true. The helper command required /etc/sudoers.d/*
> configuration files to work. So just because it was changed to something
> else, doesn't mean an attacker could actually do anything to adjust that,
> considering /etc/nova/rootwrap* is already owned by root. This was fixed
> early on in the Kolla lifecycle, pre-liberty.
>
> Feel free to adjust /etc/nova/nova.conf to root:root, but you won't be
> gaining any security advantage in doing so, you will be making it worse
> (see below). I don't know of a need for it to be owned by the service user,
> other than that is how all openstack things are packaged and those are the
> permissions in the repo and other deploy tools.
>
>
>> So right rule should be: do not let the service running user have
>> write permission to configuration files,
>>
>> about for the nova.conf file, i think root:root with 644 permission
>> is enough
>> for the directory file, root:root with 755 is enough.
>>
>
> So this actually makes it _less_ secure. The 0600 permissions were chosen
> for a reason.  The nova.conf file has passwords to the DB and rabbitmq. If
> the configuration files are world readable then those passwords could leak
> to an unprivileged user on the host.
>
>
>> A related BP[0] and PS[1] is created
>>
>> [0] https://blueprints.launchpad.net/kolla/+spec/config-readonly
>> [1] https://review.openstack.org/376465
>>
>> On Sat, Sep 24, 2016 at 11:08 PM, 1392607554 <1392607...@qq.com> wrote:
>>
>>> configuration file owner and permission in container
>>>
>>> --
>>> Regards,
>>> zhubingbing
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Regards,
>> Jeffrey Zhang
>> Blog: http://xcodest.me
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] the user in container should NOT have write permission for configuration file

2016-09-26 Thread Jeffrey Zhang
On Mon, Sep 26, 2016 at 11:03 PM, Christian Berendt <
bere...@betacloud-solutions.de> wrote:

> Confirmed. Please do not make configuration files world readable.
>
> We use volumes for the configuration file directories. Why do we not
> simply use read only volumes? This way we do not have to touch the current
> implementation (files are owned by the service user with 0600 permissions)
> and can make the configuration files read only.
>

What do you mean here?

Use the /var/lib/kolla/config_file/nova.conf file directly rather than copy it
to /etc/nova/nova.conf,
or mount nova.conf to /etc/nova/nova.conf in the container directly?



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][release][nova][magnum][murano][horizon][searchlight][trove] newton translations

2016-09-26 Thread Doug Hellmann
There are several projects with open translations from the import
job this morning. These may trigger additional release candidates,
which will need to be filed by Thursday. Please keep an eye on the
existing patches [1], as well as any others coming this week, and
merge them quickly. Then set up another release candidate by Thursday
to include them in your final releases (remember, we tag the final
from an existing tagged candidate).

Doug

[1] 
https://review.openstack.org/#/q/owner:%22OpenStack+Proposal+Bot%22+status:open+branch:stable/newton+topic:zanata/translations

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Gerrit login issues and zuul-cloner related job failures

2016-09-26 Thread Clark Boylan
Hello,

Just a heads up that Launchpad/Ubuntu One's openid services is
functioning again and logins for Gerrit and other services should now be
functional.

We also saw a bunch of failing jobs this morning due to a
misconfiguration of zuul-cloner. This change has been reverted and jobs
are happy again. It is now safe to recheck these.

Let us know if you continue to see problems or have questions,

Clark 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] newton rc-2 deadline

2016-09-26 Thread Nikhil Komawar
Added notes:

For all the bugs that are identified likely candidates need to be tagged
*newton-rc-potential*.

This tag has been added to the official glance bug tags list and can
also be used as a filter to view all of the potential rc bugs.

References:
https://bugs.launchpad.net/glance/+bugs?field.tag=newton-rc-potential
http://lists.openstack.org/pipermail/openstack-dev/2016-September/103827.html

On 9/23/16 4:05 PM, Brian Rosmaita wrote:
> We're going to need to cut rc-2 for Glance to accommodate some new
> translations, so there is an opportunity to include some conservative
> bugfixes.  Any such must be merged before 16:00 UTC on Wed 28 Sept, so I
> am setting a deadline of 12:00 UTC on Tue 27 Sept for approval.  If you
> have a bugfix that is a worthy candidate, please reply to this email with
> the appropriate info.
>
> cheers,
> brian
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Thanks,
Nikhil



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][zaqar] zaqar Newton RC2 available

2016-09-26 Thread Doug Hellmann
Hello everyone,

A new release candidate for zaqar for the end of the Newton cycle
is available!  You can find the source code tarball at:

https://tarballs.openstack.org/zaqar/zaqar-3.0.0.0rc2.tar.gz

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the final
Newton release on 6 October. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/newton release
branch at:

http://git.openstack.org/cgit/openstack/zaqar/log/?h=stable/newton

If you find an issue that could be considered release-critical,
please file it at:

https://bugs.launchpad.net/zaqar/+filebug

and tag it *newton-rc-potential* to bring it to the zaqar release
crew's attention.

Thanks,
Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] TripleO Core nominations

2016-09-26 Thread Steven Hardy
On Thu, Sep 15, 2016 at 10:20:07AM +0100, Steven Hardy wrote:
> Hi all,
> 
> As we work to finish the last remaining tasks for Newton, it's a good time
> to look back over the cycle, and recognize the excellent work done by
> several new contributors.
> 
> We've seen a different contributor pattern develop recently, where many
> folks are subsystem experts and mostly focus on a particular project or
> area of functionality.  I think this is a good thing, and it's hopefully
> going to allow our community to scale more effectively over time (and it
> fits pretty nicely with our new composable/modular architecture).
> 
> We do still need folks who can review with the entire TripleO architecture
> in mind, but I'm very confident folks will start out as subsystem experts
> and over time broaden their area of experience to encompass more of
> the TripleO projects (we're already starting to see this IMO).
> 
> We've had some discussion in the past[1] about strictly defining subteams,
> vs just adding folks to tripleo-core and expecting good judgement to be
> used (e.g only approve/+2 stuff you're familiar with - and note that it's
> totally fine for a core reviewer to continue to +1 things if the patch
> looks OK but is outside their area of experience).
> 
> So, I'm in favor of continuing that pattern and just welcoming some of our
> subsystem expert friends to tripleo-core, let me know if folks feel
> strongly otherwise :)
> 
> The nominations, are based partly on the stats[2] and partly on my own
> experience looking at reviews, patches and IRC discussion with these folks
> - I've included details of the subsystems I expect these folks to focus
> their +2A power on (at least initially):
> 
> 1. Brent Eagles
> 
> Brent has been doing some excellent work mostly related to Neutron this
> cycle - his reviews have been increasingly detailed, and show a solid
> understanding of our composable services architecture.  He's also provided
> a lot of valuable feedback on specs such as dpdk and sr-iov.  I propose
> Brent continues this excellent Neutron focussed work, while also expanding
> his review focus such as the good feedback he's been providing on new
> Mistral actions in tripleo-common for custom-roles.
> 
> 2. Pradeep Kilambi
> 
> Pradeep has done a large amount of pretty complex work around Ceilometer
> and Aodh over the last two cycles - he's dealt with some pretty tough
> challenges around upgrades and has consistently provided good review
> feedback and solid analysis via discussion on IRC.  I propose Prad
> continues this excellent Ceilometer/Aodh focussed work, while also
> expanding review focus aiming to cover more of t-h-t and other repos over
> time.
> 
> 3. Carlos Camacho
> 
> Carlos has been mostly focussed on composability, and has done a great job
> of working through the initial architecture implementation, including
> writing some very detailed initial docs[3] to help folks make the transition
> to the new architecture.  I'd suggest that Carlos looks to maintain this
> focus on composable services, while also building depth of reviews in other
> repos.
> 
> 4. Ryan Brady
> 
> Ryan has been one of the main contributors implementing the new Mistral
> based API in tripleo-common.  His reviews, patches and IRC discussion have
> consistently demonstrated that he's an expert on the mistral
> actions/workflows and I think it makes sense for him to help with review
> velocity in this area, and also look to help with those subsystems
> interacting with the API such as tripleoclient.
> 
> 5. Dan Sneddon
> 
> For many cycles, Dan has been driving direction around our network
> architecture, and he's been consistently doing a relatively small number of
> very high-quality and insightful reviews on both os-net-config and the
> network templates for tripleo-heat-templates.  I'd suggest Dan continues
> this focus, and he's indicated he may have more bandwidth to help with
> reviews around networking in future.
> 
Please can I get feedback from existing core reviewers - you're free to +1
> these nominations (or abstain), but any -1 will veto the process.  I'll
> wait one week, and if we have consensus add the above folks to
> tripleo-core.

Ok, so we got quite a few +1s and no objections, so I will go ahead and add
the folks listed above to tripleo-core, congratulations (and thanks!) guys,
keep up the great work! :)

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2016-09-26 Thread Joshua Harlow

Huang Zhiteng wrote:


On Mon, Sep 26, 2016 at 12:05 PM, Joshua Harlow wrote:

Huang Zhiteng wrote:

In eBay, we did some inhouse change to Nova so that our big data
type of
use case can have physical disks as ephemeral disk for this type of
flavors.  It works well so far.   My 2 cents.


Is there a published patch (or patchset) anywhere that people can
look at for said in-house changes?


Unfortunately no, but I think we can publish it if there is enough
interest.  However, I don't think it can be easily adopted into
upstream Nova since it depends on other in-house changes we've done to Nova.



Is there any blog or other write-up that explains the full set of changes 
that eBay has done (you've got me curious)?


The nice thing about OSS is that if you just get the patchsets out (even 
to github or somewhere), those patches may trigger things to change to 
match your use case better just by the nature of people being able to 
read them; but if they are never put out there, then, well, it's a 
little hard to get anything to change.


Anything stopping a full release of all in-house changes?

Even if they are not 'super great quality' it really doesn't matter :)

-Josh



Re: [openstack-dev] [Glance] Cores using -2 votes

2016-09-26 Thread Flavio Percoco

On 23/09/16 10:12 -0500, Ian Cordasco wrote:

 

-Original Message-
From: Nikhil Komawar 
Reply: Nikhil Komawar 
Date: September 23, 2016 at 10:04:51
To: OpenStack Development Mailing List (not for usage questions) 

Cc: Ian Cordasco 
Subject:  Re: [openstack-dev] [Glance] Cores using -2 votes


thanks Ian, this is great info.

Just a side question: do you have an example for -Workflow, say in cases
where I'd +2'd a change but, to keep a check on the process and approve it
after the freeze, -W'ed it?


So the important thing to keep in mind is that: "Code-Review", "Verified", and 
"Workflow" are all labels. And they all have different values (-2, -1, 0, +1, +2). So you could 
absolutely have a search for

    label:Code-Review=+2 AND label:Workflow=-1
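For reference, a few combined label searches of that shape, as a sketch
(exact syntax per the Gerrit search operators documentation):

```
label:Code-Review=+2 AND label:Workflow=-1    # approved but workflow-blocked (-W)
label:Code-Review=-2 owner:self               # changes you have -2'd
label:Code-Review=+2 AND NOT label:Verified=-1
```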



FWIW, back in the Mitaka cycle I created a dashboard that seemed to work well
for the release. I'll drop the link here in case you guys want to use it:

http://bit.ly/glance-dashboard

This link contains a section with patches "I've" -2'd. 


Hope this helps, let me know if the link still works (it seemed to work here 
and you need to be logged in),
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [kolla] the user in container should NOT have write permission for configuration file

2016-09-26 Thread Steven Dake (stdake)
Sam is correct here.  This is the why behind the how ☺

Regards
-steve

From: Sam Yaple 
Reply-To: "s...@yaple.net" , "OpenStack Development Mailing 
List (not for usage questions)" 
Date: Monday, September 26, 2016 at 7:43 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [kolla] the user in container should NOT have 
write permission for configuration file

On Mon, Sep 26, 2016 at 1:18 PM, Jeffrey Zhang wrote:
Using the same user for running the service and owning the configuration
files is dangerous, i.e. the service's running user shouldn't be able to
change the configuration files.

a simple attack would go like this:
* an attacker breaks into the nova-api container as the nova user
* he can change the /etc/nova/rootwrap.conf file and the
/etc/nova/rootwrap.d directory, through which he can gain much greater
authority via sudo
* he can also change the /etc/nova/nova.conf file to use another
privsep_command.helper_command to gain greater authority
[privsep_entrypoint]
helper_command=sudo nova-rootwrap /etc/nova/rootwrap.conf
privsep-helper --config-file /etc/nova/nova.conf
This is not true. The helper command requires /etc/sudoers.d/* configuration 
files to work. So just because it was changed to something else doesn't mean 
an attacker could actually gain anything by adjusting it, considering 
/etc/nova/rootwrap* is already owned by root. This was fixed early on in the 
Kolla lifecycle, pre-Liberty.

Feel free to adjust /etc/nova/nova.conf to root:root, but you won't be gaining 
any security advantage in doing so, you will be making it worse (see below). I 
don't know of a need for it to be owned by the service user, other than that is 
how all openstack things are packaged and those are the permissions in the repo 
and other deploy tools.

So the right rule should be: do not give the service's running user
write permission to the configuration files.

As for the nova.conf file, I think root:root with 644 permissions is
enough; for the directory, root:root with 755 is enough.

So this actually makes it _less_ secure. The 0600 permissions were chosen for a 
reason.  The nova.conf file has passwords to the DB and rabbitmq. If the 
configuration files are world readable then those passwords could leak to an 
unprivileged user on the host.
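As a concrete illustration of the trade-off being described, here is a
minimal sketch using a temporary directory (the real files live under
/etc/nova/, and the rootwrap config would additionally be root-owned):

```python
# Sketch: owner-only mode for the secret-bearing service config, and a
# world-readable (but not service-writable) rootwrap config. The paths
# stand in for /etc/nova/nova.conf and /etc/nova/rootwrap.conf.
import os
import stat
import tempfile

d = tempfile.mkdtemp()
nova_conf = os.path.join(d, "nova.conf")
rootwrap_conf = os.path.join(d, "rootwrap.conf")

with open(nova_conf, "w") as f:
    f.write("[DEFAULT]\n")      # would hold DB/rabbitmq passwords
os.chmod(nova_conf, 0o600)      # owner-only: no leak to other host users

with open(rootwrap_conf, "w") as f:
    f.write("[Filters]\n")      # no secrets; just must not be service-writable
os.chmod(rootwrap_conf, 0o644)

print(oct(stat.S_IMODE(os.stat(nova_conf).st_mode)))      # 0o600
print(oct(stat.S_IMODE(os.stat(rootwrap_conf).st_mode)))  # 0o644
```

The point of the thread in one line: 0600 protects the secrets in
nova.conf, while write-protection (root ownership) protects rootwrap.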


A related BP[0] and PS[1] is created

[0] https://blueprints.launchpad.net/kolla/+spec/config-readonly
[1] https://review.openstack.org/376465

On Sat, Sep 24, 2016 at 11:08 PM, 1392607554 
<1392607...@qq.com> wrote:
configuration file owner and permission in container

--
Regrad,
zhubingbing




--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me



Re: [openstack-dev] [keystone][nova] "admin" role and "rule:admin_or_owner" confusion

2016-09-26 Thread rezroo
I am still confused about how the "cloud admin" role is fulfilled in the 
Liberty release. For example, I used "nova --debug delete" to see how 
project:admin/user:admin deletes an instance of the demo project. 
Basically, we use the project:admin/user:admin token to get a list of 
instances for all tenants and then reference the demo project's instance 
using the admin project tenant-id in:


curl -g -i -X DELETE 
http://172.31.5.216:8774/v2.1/85b0992a5845455083db84d909c218ab/servers/6c876149-ecc4-4467-b727-9dff7b059390


So 85b0992a5845455083db84d909c218ab is the admin tenant id, and 
6c876149-ecc4-4467-b727-9dff7b059390 is owned by the demo project.


I am able to reproduce this using curl commands - but what's confusing 
me is that the token I get from keystone clearly shows is_admin is 0:


"user": {"username": "admin", "roles_links": [], "id": 
"9b29c721bc3844a784dcffbb8c8a47f8", "roles": [{"name": "admin"}], 
"name": "admin"}, "metadata": {"is_admin": 0, "roles": 
["6a6893ea36394a2ab0b93d225ab01e25"]}}}


And the rules for compute:delete seem to require is_admin to be true. 
nova/policy.json has two rules for "compute:delete":


Line 81: "compute:delete": "rule:admin_or_owner",
Line 88: "compute:delete": "",

First question - why is line 88 needed?

Second, on line 3, the admin_or_owner definition requires is_admin to be true:

"admin_or_owner": "is_admin:True or project_id:%(project_id)s",

which, if my understanding is correct, is never true unless the keystone 
admin_token is used, and is certainly not true for the token I got using 
curl. So why is my curl request using this token able to delete the 
instance?
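To make the rule's semantics concrete, here is a minimal illustration of
how a check string like this evaluates, written as plain Python rather
than oslo.policy itself (so the function and dict keys are illustrative
only, not the real API):

```python
# Illustrative only -- NOT oslo.policy's implementation. It mimics how a
# rule like "is_admin:True or project_id:%(project_id)s" evaluates against
# the request credentials and the target resource.
def admin_or_owner(creds, target):
    # "is_admin:True" is only set for special contexts (e.g. the keystone
    # admin_token path); a normal token, even for the admin user, carries
    # is_admin == 0/False, as the quoted token data shows.
    if creds.get("is_admin") is True:
        return True
    # "project_id:%(project_id)s": the token's project must own the target.
    return creds.get("project_id") == target.get("project_id")

print(admin_or_owner({"is_admin": False, "project_id": "demo"},
                     {"project_id": "demo"}))   # True (owner)
print(admin_or_owner({"is_admin": False, "project_id": "admin"},
                     {"project_id": "demo"}))   # False (neither admin nor owner)
```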


Thanks,

Reza


On 9/2/2016 12:51 PM, Morgan Fainberg wrote:


On Sep 2, 2016 09:39, "rezroo" wrote:

>
> Hello - I'm using Liberty release devstack for the below scenario. I 
have created project "abcd" with "john" as Member. I've launched one 
instance, I can use curl to list the instance. No problem.

>
> I then modify /etc/nova/policy.json and redefine "admin_or_owner" as 
follows:

>
> "admin_or_owner":  "role:admin or is_admin:True or 
project_id:%(project_id)s",

>
> My expectation was that I would be able to list the instance in abcd 
using a token of admin. However, when I use the token of user "admin" 
in project "admin" to list the instances I get the following error:

>
> stack@vlab:~/token$ curl 
http://localhost:8774/v2.1/378a4b9e0b594c24a8a753cfa40ecc14/servers/detail 
-H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-OpenStack-Nova-API-Version: 2.6" -H "X-Auth-Token: 
f221164cd9b44da6beec70d6e1f3382f"
> {"badRequest": {"message": "Malformed request URL: URL's project_id 
'378a4b9e0b594c24a8a753cfa40ecc14' doesn't match Context's project_id 
'f73175d9cc8b4fb58ad22021f03bfef5'", "code": 400}}

>
> 378a4b9e0b594c24a8a753cfa40ecc14 is project id of abcd and 
f73175d9cc8b4fb58ad22021f03bfef5 is project id of admin.

>
> I'm confused by this behavior and the reported error, because if the 
project id used to acquire the token is the same as the project id in 
/servers/detail then I would be an "owner". So where is the "admin" in 
"admin_or_owner"? Shouldn't the "role:admin" allow me to do whatever 
functionality "rule:admin_or_owner" allows in policy.json, regardless 
of the project id used to acquire the token?

>
> I do understand that I can use the admin user and project to get all 
instances of all tenants:
> curl 
http://localhost:8774/v2.1/f73175d9cc8b4fb58ad22021f03bfef5/servers/detail?all_tenants=1 
-H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-OpenStack-Nova-API-Version: 2.6" -H "X-Auth-Token: $1"

>
> My question is more centered around why nova has the additional 
check to make sure that the token project id matches the url project 
id - and whether this is a keystone requirement, or only nova/cinder 
and programs that have a project-id in their API choose to do this. In 
other words, is it the developers of each project that decide to only 
expose some APIs for administrative functionality (such all-tenants), 
but restrict everything else to owners, or keystone requires this check?

>
> Thanks,
>
> Reza
>
>
>

I believe this is a nova specific extra check. There is (iirc) a way 
to list out the instances for a given tenant but I do not recall the 
specifics.


Keystone does not know anything about the resource ownership in Nova. 
The Nova check is fully self-contained.


--Morgan
Please excuse brevity and typos, sent from a mobile device.




Re: [openstack-dev] [kolla] the user in container should NOT have write permission for configuration file

2016-09-26 Thread Christian Berendt
> On 26 Sep 2016, at 16:43, Sam Yaple  wrote:
> 
> So this actually makes it _less_ secure. The 0600 permissions were chosen for 
> a reason.  The nova.conf file has passwords to the DB and rabbitmq. If the 
> configuration files are world readable then those passwords could leak to an 
> unprivileged user on the host.

Confirmed. Please do not make configuration files world readable.

We use volumes for the configuration file directories. Why do we not simply use 
read only volumes? This way we do not have to touch the current implementation 
(files are owned by the service user with 0600 permissions) and can make the 
configuration files read only.
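As a sketch of that idea (image name and paths are assumed from kolla's
config_files convention, not a tested invocation):

```
docker run -d --name nova_api \
  -v /etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro \
  kolla/centos-binary-nova-api:latest
```

With the `:ro` flag, even a compromised service user inside the container
cannot modify the mounted configuration, regardless of file ownership.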

Christian.




[openstack-dev] [murano] IRC Meeting time suggestion

2016-09-26 Thread Omar Shykhkerimov
Hello team!

As every big project Murano is getting more contributors working on it. Our
current IRC-meetings are held from 5pm till 6pm UTC every Tuesday, but it
looks like this time isn't comfortable for every Murano contributor.

Here is the suggestion: have alternating meetings: at 5PM UTC on Tuesday on
the first week and at 12PM UTC (5 hours earlier) on Tuesday on the second
week. I hope it will allow more people to attend the meetings.

So the suggested schedule is:

27 of September: from 5pm to 6pm UTC at #openstack-meeting-alt (as usual)

4 of October: from 12pm to 1pm UTC at #openstack-meeting-alt

11 of October: from 5pm to 6pm UTC at #openstack-meeting-alt

18 of October: from 12pm to 1pm UTC at #openstack-meeting-alt

...

and so on.

Looking forward to your opinions on whether this timetable is more
comfortable.
Please tell us in this thread if you have objections to the suggested schedule.


Thanks,

Omar Shykhkerimov


[openstack-dev] [puppet] [infra] Request for old branches removal

2016-09-26 Thread Emilien Macchi
Greetings Infra,

This is an official request to remove old branches for Puppet OpenStack modules:

puppet-ceilometer
puppet-cinder
puppet-glance
puppet-heat
puppet-horizon
puppet-keystone
puppet-neutron
puppet-nova
puppet-openstack_extras
puppet-openstacklib
puppet-swift
puppet-tempest

Please remove all branches before Kilo (Kilo was already removed).

Thanks,
-- 
Emilien Macchi



Re: [openstack-dev] [kolla] the user in container should NOT have write permission for configuration file

2016-09-26 Thread Sam Yaple
On Mon, Sep 26, 2016 at 1:18 PM, Jeffrey Zhang wrote:

> Using the same user for running the service and owning the configuration
> files is dangerous, i.e. the service's running user shouldn't be able to
> change the configuration files.
>
> a simple attack would go like this:
> * an attacker breaks into the nova-api container as the nova user
> * he can change the /etc/nova/rootwrap.conf file and the
> /etc/nova/rootwrap.d directory, through which he can gain much greater
> authority via sudo
> * he can also change the /etc/nova/nova.conf file to use another
> privsep_command.helper_command to gain greater authority
> [privsep_entrypoint]
> helper_command=sudo nova-rootwrap /etc/nova/rootwrap.conf
> privsep-helper --config-file /etc/nova/nova.conf
>
This is not true. The helper command requires /etc/sudoers.d/*
configuration files to work. So just because it was changed to something
else doesn't mean an attacker could actually gain anything by adjusting it,
considering /etc/nova/rootwrap* is already owned by root. This was fixed
early on in the Kolla lifecycle, pre-Liberty.

Feel free to adjust /etc/nova/nova.conf to root:root, but you won't be
gaining any security advantage in doing so, you will be making it worse
(see below). I don't know of a need for it to be owned by the service user,
other than that is how all openstack things are packaged and those are the
permissions in the repo and other deploy tools.


> So the right rule should be: do not give the service's running user
> write permission to the configuration files.
>
> As for the nova.conf file, I think root:root with 644 permissions is
> enough; for the directory, root:root with 755 is enough.
>

So this actually makes it _less_ secure. The 0600 permissions were chosen
for a reason.  The nova.conf file has passwords to the DB and rabbitmq. If
the configuration files are world readable then those passwords could leak
to an unprivileged user on the host.


> A related BP[0] and PS[1] is created
>
> [0] https://blueprints.launchpad.net/kolla/+spec/config-readonly
> [1] https://review.openstack.org/376465
>
> On Sat, Sep 24, 2016 at 11:08 PM, 1392607554 <1392607...@qq.com> wrote:
>
>> configuration file owner and permission in container
>>
>> --
>> Regrad,
>> zhubingbing
>>
>>
>>
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
>
>


[openstack-dev] [ironic] Ocata is open

2016-09-26 Thread Jim Rollenhagen
Hi friends,

The newton branch was cut last week. It was an amazing cycle,
with tons of awesome features. Thanks to everyone that contributed
and focused to get so much done! :)

Details are here:
http://docs.openstack.org/releasenotes/ironic/newton.html

Now, on to Ocata. The branch is open, and we have lots to do.
Let's make this one as good as the last!

// jim



Re: [openstack-dev] [Nova] Proposal for Nova Integration tests

2016-09-26 Thread Prasanth Anbalagan
Stephen,

Yes. These will be additional to functional/unit and exist outside
tempest (due to the low-level nature of the tests). Unlike third-party
CI, the plan is to keep all efforts upstream since the tests would focus
completely on open technology. So we are open to where the tests need to
reside. This could be in-tree, like a directory called
"integration" (alongside "unit" and "functional") here -
https://github.com/openstack/nova/tree/master/nova/tests.

Thanks
Prasanth

On Mon, 2016-09-26 at 13:08 +0100, Stephen Finucane wrote:

> On Fri, 2016-09-23 at 13:33 -0400, Prasanth Anbalagan wrote:
> > Adding the project to email subject.
> > 
> > Thanks
> > Prasanth Anbalagan
> > 
> > On Fri, 2016-09-23 at 12:56 -0400, Prasanth Anbalagan wrote:
> > > 
> > > Hi,
> > > 
> > > Continuing the topic on the need for integration style tests for
> > > Nova
> > > (brought up earlier during the weekly meeting at #openstack-
> > > meeting,
> > > Sep 22). The proposal is for a project to hold integration tests
> > > that
> > > includes low-level testing and runs against a devstack backend. I
> > > have
> > > included more details here -
> > > https://etherpad.openstack.org/p/integration-tests
> > > 
> > > Please comment on the need for the project, whether or not any
> > > similar
> > > efforts are in place, approaches suggested, taking forward the
> > > initiative, etc.
> 
> I missed that conversation, so to clarify, these would be additional
> integration tests (as opposed to functional, unit tests) but kept
> outside of tempest, correct? If so, how do these differ from what
> third-party CIs already provide? [1] If they do provide something
> different (i.e. they're run upstream), could these be kept in-tree
> (like neutron's scenario tests [2]) rather than in a different project?
> 
> Stephen
> 
> [1] 
> http://docs.openstack.org/developer/nova/test_strategy.html#infra-vs-third-party
> [2] 
> https://github.com/openstack/neutron/blob/master/TESTING.rst#scenario-tests
> 




Re: [openstack-dev] [packaging][rpm] 3rd-party gates promotion to voting gates

2016-09-26 Thread Anita Kuno

On 16-09-26 07:48 AM, Haïkel wrote:

Hi,

following our discussions about 3rd party gates in RPM packaging project,
I suggest that we vote in order to promote the following gates as voting:
- MOS CI
- SUSE CI

After promotion, all submitted patchsets will have to pass these gates
in order to be merged, and the gate maintainers should ensure that the gates
are running properly.

Please vote before (and/or during) our thursday meeting.


+1 to promote both MOS and SUSE CI as voting gates.

Regards,
H.


I'm not sure what you mean by voting gates. Gates don't vote; an 
individual job can leave a Verified +1 in the check queue and/or a 
Verified +2 in the gate queue.


Third-party CI systems do not vote Verified +2 in Gerrit. They may, if 
the project chooses, vote Verified +1 on a project.


If you need clarification in what third party ci systems may do in 
gerrit, you are welcome to reply to this email, join the 
#openstack-infra channel or participate in a third party meeting: 
http://eavesdrop.openstack.org/#Third_Party_Meeting


Thank you,
Anita.



Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2016-09-26 Thread Erlon Cruz
On Fri, Sep 23, 2016 at 10:19 PM, Zhenyu Zheng 
wrote:

> Hi,
>
> Thanks all for the information, as for the filter Erlon(
> InstanceLocalityFilter) mentioned, this only solves a part of the problem,
> we can create new volumes for existing instances using this filter and
> then attach to it, but the root volume still cannot
> be guaranteed to be on the same host as the compute resource, right?
>
>
You have two options to use a disk in the same node as the instance.
1 - The easiest, just don't use Cinder volumes. When you create an instance
from an image, the default behavior in Nova, is to create the root disk in
the local host (/var/lib/nova/instances). This have the advantage that Nova
will cache the image locally and will avoid the need of copying the image
over the wire (or having to configure image caching in Cinder).

2 - Use Cinder volumes as the root disk. Nova will somehow have to pass the
hints to the scheduler so it can properly use the InstanceLocalityFilter.
If you place this in Nova and make sure that all requests carry the proper
hint, then the volumes created will be scheduled to the same host.
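A sketch of what option 2 looks like today when driven from the client
side (filter list and hint name per the Cinder scheduler filter docs;
the filter set and volume size are illustrative):

```
# cinder.conf
[DEFAULT]
scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,InstanceLocalityFilter

# then, when creating a volume for an existing instance:
#   cinder create --hint local_to_instance=<instance-uuid> 10
```

The gap Kevin describes is that nothing passes such a hint automatically
for the root volume at boot time.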

Is there any reason why you can't use the first approach?




> The idea here is that all the volumes uses local disks.
> I was wondering if we already have such a plan after the Resource Provider
> structure has accomplished?
>
> Thanks
>
> On Sat, Sep 24, 2016 at 2:05 AM, Erlon Cruz  wrote:
>
>> Not sure exactly what you mean, but in Cinder using the
>> InstanceLocalityFilter[1], you can  schedule a volume to the same compute
>> node the instance is located. Is this what you need?
>>
>> [1] http://docs.openstack.org/developer/cinder/scheduler-fil
>> ters.html#instancelocalityfilter
>>
>> On Fri, Sep 23, 2016 at 12:19 PM, Jay S. Bryant <
>> jsbry...@electronicjungle.net> wrote:
>>
>>> Kevin,
>>>
>>> This is functionality that has been requested in the past but has never
>>> been implemented.
>>>
>>> The best way to proceed would likely be to propose a blueprint/spec for
>>> this and start working this through that.
>>>
>>> -Jay
>>>
>>>
>>> On 09/23/2016 02:51 AM, Zhenyu Zheng wrote:
>>>
>>> Hi Novaers and Cinders:
>>>
>>> Quite often application requirements would demand using locally attached
>>> disks (or direct attached disks) for OpenStack compute instances. One such
>>> example is running virtual hadoop clusters via OpenStack.
>>>
>>> We can now achieve this by using BlockDeviceDriver as Cinder driver and
>>> using AZ in Nova and Cinder, illustrated in[1], which is not very feasible
>>> in large scale production deployment.
>>>
>>> Now that Nova is working on resource provider trying to build an
>>> generic-resource-pool, is it possible to perform "volume-based-scheduling"
>>> to build instances according to volume? As this could be much easier to
>>> build instances like mentioned above.
>>>
>>> Or do we have any other ways of doing this?
>>>
>>> References:
>>> [1] http://cloudgeekz.com/71/how-to-setup-openstack-to-use-l
>>> ocal-disks-for-instances.html
>>>
>>> Thanks,
>>>
>>> Kevin Zheng
>>>
>>>


Re: [openstack-dev] [oslo][nova] Anyone interested in writing a policy generator sphinx extension?

2016-09-26 Thread Stephen Finucane
On Wed, 2016-09-21 at 11:16 -0400, Doug Hellmann wrote:
> Excerpts from Matt Riedemann's message of 2016-09-21 09:49:29 -0500:
> > 
> > Nova has policy defaults in code now and we can generate the
> > sample 
> > using oslopolicy-sample-generator but we'd like to get the default 
> > policy sample in the Nova developer documentation also, like we
> > have for 
> > nova.conf.sample.
> > 
> > I see we use the sphinxconfiggen extension for building the 
> > nova.conf.sample in our docs, but I don't see anything like that
> > for 
> > generating docs for a sample policy file.
> > 
> > Has anyone already started working on that, or is interested in
> > working 
> > on that? I've never written a sphinx extension before but I'm
> > guessing 
> > it could be borrowed a bit from how sphinxconfiggen was written in 
> > oslo.config.
> > 
> 
> I don't have time to do it myself, but I can help get someone else
> started and work with them on code reviews in oslo.policy.

I can take a look at this, though I'll only pursue a "sphinxpolicygen"
module for now (we don't use oslo_config's "sphinxext" module yet).
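For comparison, the existing oslo.config extension is wired into a
project's docs conf.py roughly like this (option names per oslo.config's
sphinxconfiggen docs; the paths are illustrative). A "sphinxpolicygen"
would presumably mirror the pattern:

```python
# doc/source/conf.py (sketch)
extensions = ['oslo_config.sphinxconfiggen']
config_generator_config_file = '../../etc/nova/nova-config-generator.conf'
sample_config_basename = '_static/nova'
```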

Stephen



Re: [openstack-dev] [nova] Can all virt drivers provide a disk 'id' for the diagnostics API?

2016-09-26 Thread Daniel P. Berrange
On Mon, Sep 26, 2016 at 09:31:39PM +0800, Alex Xu wrote:
> 2016-09-23 20:38 GMT+08:00 Daniel P. Berrange:
> 
> > On Fri, Sep 23, 2016 at 07:32:36AM -0500, Matt Riedemann wrote:
> > > On 9/23/2016 3:54 AM, Daniel P. Berrange wrote:
> > > > On Thu, Sep 22, 2016 at 01:54:21PM -0500, Matt Riedemann wrote:
> > > > > Sergey is working on a spec to use the standardized virt driver
> > instance
> > > > > diagnostics in the os-diagnostics API. A question came up during
> > review of
> > > > > the spec about how to define a disk 'id':
> > > > >
> > > > > https://review.openstack.org/#/c/357884/2/specs/ocata/
> > approved/restore-vm-diagnostics.rst@140
> > > > >
> > > > > The existing diagnostics code doesn't set a disk id in the list of
> > disk
> > > > > dicts, but I think with at least libvirt we can set that to the
> > target
> > > > > device from the disk device xml.
> > > > >
> > > > > The xenapi code for getting this info is a bit confusing for me at
> > least,
> > > > > but it looks like it's possible to get the disks, but the id might
> > need to
> > > > > be parsed out (as a side note, it looks like the cpu/memory/disk
> > diagnostics
> > > > > are not even populated in the get_instance_diagnostics method for
> > xen).
> > > > >
> > > > > vmware is in the same boat as xen, it's not fully implemented:
> > > > >
> > > > > https://github.com/openstack/nova/blob/
> > 64cbd7c51a5a82b965dab53eccfaecba45be9c27/nova/virt/
> > vmwareapi/vmops.py#L1561
> > > > >
> > > > > Hyper-v and Ironic virt drivers haven't implemented
> > get_instance_diagnostics
> > > > > yet.
> > > >
> > > > The key value of this field (which we should call "device_name", not
> > "id"),
> > > > is to allow the stats data to be correlated with the entries in the
> > block
> > > > device mapping list used to configure storage when bootin the VM. As
> > such
> > > > we should declare its value to match the corresponding field in BDM.
> > > >
> > > > Regards,
> > > > Daniel
> > > >
> > >
> > > Well, except that we don't want people specifying a device name in the
> > block
> > > device list when creating a server, and the libvirt driver ignores that
> > > altogether. In fact, I think Dan Smith was planning on adding a
> > microversion
> > > in Ocata to remove that field from the server create request since we
> > can't
> > > guarantee it's what you'll end up with for all virt drivers.
> >
> > We don't want people specifying it, but we should report the auto-allocated
> > names back when you query the data after instance creation, don't we ? If
> > we don't, then there's no way for users to correlate the disks that they
> > requested with the instance diagnostic stats, which severely limits their
> > usefulness.
> >
> 
> So what use-case for this API? I thought it is used by admin user to
> diagnose the cloud. If that is the right use-case, we can expose the disk
> image path in the API for admin user to correlate the disks. In the
> libvirt, it would looks like
> "/opt/stack/data/nova/instances/cbc7985c-434d-4ec3-8d96-d99ad6afb618/disk".
> As this is admin-only API, and for diagnostics, this info is safe to expose
> in this API.

You can't assume that all disks have a local path in the filesystem.
Any disk using a QEMU built-in network client (e.g. rbd) will not
appear there.
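For example, an rbd-backed disk in the libvirt domain XML has only a
network source plus the auto-assigned target device name (a simplified
sketch; the pool/volume names are placeholders):

```xml
<disk type='network' device='disk'>
  <source protocol='rbd' name='pool/volume'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

The `dev='vda'` target is the kind of per-disk name the diagnostics
could report, since no filesystem path exists for such disks.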


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Can all virt drivers provide a disk 'id' for the diagnostics API?

2016-09-26 Thread Alex Xu
2016-09-23 20:38 GMT+08:00 Daniel P. Berrange :

> On Fri, Sep 23, 2016 at 07:32:36AM -0500, Matt Riedemann wrote:
> > On 9/23/2016 3:54 AM, Daniel P. Berrange wrote:
> > > On Thu, Sep 22, 2016 at 01:54:21PM -0500, Matt Riedemann wrote:
> > > > Sergey is working on a spec to use the standardized virt driver
> instance
> > > > diagnostics in the os-diagnostics API. A question came up during
> review of
> > > > the spec about how to define a disk 'id':
> > > >
> > > > https://review.openstack.org/#/c/357884/2/specs/ocata/
> approved/restore-vm-diagnostics.rst@140
> > > >
> > > > The existing diagnostics code doesn't set a disk id in the list of
> disk
> > > > dicts, but I think with at least libvirt we can set that to the
> target
> > > > device from the disk device xml.
> > > >
> > > > The xenapi code for getting this info is a bit confusing for me at
> least,
> > > > but it looks like it's possible to get the disks, but the id might
> need to
> > > > be parsed out (as a side note, it looks like the cpu/memory/disk
> diagnostics
> > > > are not even populated in the get_instance_diagnostics method for
> xen).
> > > >
> > > > vmware is in the same boat as xen, it's not fully implemented:
> > > >
> > > > https://github.com/openstack/nova/blob/
> 64cbd7c51a5a82b965dab53eccfaecba45be9c27/nova/virt/
> vmwareapi/vmops.py#L1561
> > > >
> > > > Hyper-v and Ironic virt drivers haven't implemented
> get_instance_diagnostics
> > > > yet.
> > >
> > > The key value of this field (which we should call "device_name", not
> "id"),
> > > is to allow the stats data to be correlated with the entries in the
> block
> > > device mapping list used to configure storage when bootin the VM. As
> such
> > > we should declare its value to match the corresponding field in BDM.
> > >
> > > Regards,
> > > Daniel
> > >
> >
> > Well, except that we don't want people specifying a device name in the
> block
> > device list when creating a server, and the libvirt driver ignores that
> > altogether. In fact, I think Dan Smith was planning on adding a
> microversion
> > in Ocata to remove that field from the server create request since we
> can't
> > guarantee it's what you'll end up with for all virt drivers.
>
> We don't want people specifying it, but we should report the auto-allocated
> names back when you query the data after instance creation, don't we ? If
> we don't, then there's no way for users to correlate the disks that they
> requested with the instance diagnostic stats, which severely limits their
> usefulness.
>

So what is the use-case for this API? I thought it is used by admin users to
diagnose the cloud. If that is the right use-case, we can expose the disk
image path in the API so that admin users can correlate the disks. In
libvirt, it would look like
"/opt/stack/data/nova/instances/cbc7985c-434d-4ec3-8d96-d99ad6afb618/disk".
As this is an admin-only API, and for diagnostics, this info is safe to expose
in this API.



>
> > I'm fine with calling the field device_name though.
>
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [kolla] the user in container should NOT have write permission for configuration file

2016-09-26 Thread Jeffrey Zhang
Using the same user to run a service and to own its configuration files
is dangerous, i.e. the user running the service shouldn't be able to
change the configuration files.

A simple attack could look like this:
* an attacker breaks into the nova-api container as the nova user
* he can change the /etc/nova/rootwrap.conf file and the
/etc/nova/rootwrap.d directory, through which he can gain much greater
authority via sudo
* he can also change the /etc/nova/nova.conf file to use a different
privsep helper_command to gain greater authority:
[privsep_entrypoint]
helper_command=sudo nova-rootwrap /etc/nova/rootwrap.conf
privsep-helper --config-file /etc/nova/nova.conf

So the right rule should be: do not give the user running the service
write permission to the configuration files.

For the nova.conf file, I think root:root with 644 permissions is enough;
for the configuration directory, root:root with 755 is enough.
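As a concrete sketch of that layout (demonstrated on a scratch directory
rather than a live /etc/nova; in a real image the chown to root:root
would happen at build time):

```shell
# Sketch of the proposed permissions, shown on a scratch directory.
# In the real image the files would be owned by root:root.
CONF_DIR=$(mktemp -d)
touch "$CONF_DIR/nova.conf"
chmod 755 "$CONF_DIR"            # directory: rwxr-xr-x
chmod 644 "$CONF_DIR/nova.conf"  # file: rw-r--r-- (service user reads only)
stat -c '%a' "$CONF_DIR" "$CONF_DIR/nova.conf"   # prints 755 then 644
```

With this in place the nova user can read the config but any write
attempt fails, which blocks the rootwrap/privsep attacks above.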

A related BP [0] and PS [1] have been created:

[0] https://blueprints.launchpad.net/kolla/+spec/config-readonly
[1] https://review.openstack.org/376465

On Sat, Sep 24, 2016 at 11:08 PM, 1392607554 <1392607...@qq.com> wrote:

> configuration file owner and permission in container
>
> --
> Regrad,
> zhubingbing
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


[openstack-dev] [nova] Setting up vmware workstation 12 vm to have numa nodes and pci devices

2016-09-26 Thread Carlton, Paul (Cloud Services)

Gary/All


I run devstack environments in VMware Workstation and I'd like to create a VM 
that has multiple NUMA nodes and PCI devices so I can test nova code that 
utilizes these features.  I've tried playing with the settings documented in the 
VMware documentation, i.e. adding numa.vcpu.maxPerVirtualNode etc. to the 
configuration file, without success.  I wondered if you had any experience of 
doing this, or could point me at any information that might help?
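For reference, the sort of .vmx settings I've been experimenting with look
like this (values are examples only; numa.vcpu.maxPerVirtualNode is the knob
from the VMware docs, the CPU topology lines are standard .vmx options):

```text
numvcpus = "8"
cpuid.coresPerSocket = "2"
numa.vcpu.maxPerVirtualNode = "2"
```

The intent is to split the 8 vCPUs across multiple virtual NUMA nodes.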


Thanks






Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard Enterprise
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Office: +44 (0) 1173 162189
Mobile:+44 (0)7768 994283
Email:paul.carl...@hpe.com
Hewlett-Packard Enterprise Limited registered Office: Cain Road, Bracknell, 
Berks RG12 1HN Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may 
be legally privileged. If you have received this message in error, you should 
delete it from your system immediately and advise the sender. To any recipient 
of this message within HP, unless otherwise stated you should consider this 
message and attachments as "HP CONFIDENTIAL".



Re: [openstack-dev] [ironic] ironic-inspector-core team update

2016-09-26 Thread milanisko k
Thanks guys! :D

--
milan

po 26. 9. 2016 v 14:46 odesílatel Jim Rollenhagen 
napsal:

> // jim
>
>
> On Mon, Sep 26, 2016 at 5:24 AM, Dmitry Tantsur 
> wrote:
> > Hi folks!
> >
> > As you probably know, Imre has decided to leave us for other challenges,
> so
> > our small core team has become even smaller. I'm removing him on his
> > request.
> >
> > I suggest adding Milan Kovacik (milan or mkovacik on IRC) to the
> > ironic-inspector-core team. He's been pretty active on ironic-inspector
> > recently, doing meaningful reviews, and he's driving our HA work forward.
> >
> > Please vote with +1/-1. If no objections are recorded, the change will
> be in
> > effect next Monday.
> >
> > Thanks!
>
> +1, Milan seems to be killing it. :)
>
> // jim
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [ironic] ironic-inspector-core team update

2016-09-26 Thread Jim Rollenhagen
// jim


On Mon, Sep 26, 2016 at 5:24 AM, Dmitry Tantsur  wrote:
> Hi folks!
>
> As you probably know, Imre has decided to leave us for other challenges, so
> our small core team has become even smaller. I'm removing him on his
> request.
>
> I suggest adding Milan Kovacik (milan or mkovacik on IRC) to the
> ironic-inspector-core team. He's been pretty active on ironic-inspector
> recently, doing meaningful reviews, and he's driving our HA work forward.
>
> Please vote with +1/-1. If no objections are recorded, the change will be in
> effect next Monday.
>
> Thanks!

+1, Milan seems to be killing it. :)

// jim



Re: [openstack-dev] [ironic] [qa] Testing config drive creation in our CI

2016-09-26 Thread Jim Rollenhagen
On Mon, Sep 26, 2016 at 7:29 AM, Sean Dague  wrote:
> On 09/26/2016 07:15 AM, Dmitry Tantsur wrote:
>> On 09/26/2016 01:09 PM, Sean Dague wrote:
>>> This should probably be set at the job level, and not buried inside
>>> devstack to be different based on hypervisor. That's going to be a lot
>>> more confusing to unwind later.
>>
>> Fair. So should we just set DEVSTACK_GATE_CONFIGDRIVE for all our jobs?
>> Do you think it should go somewhere here:
>> https://github.com/openstack-infra/devstack-gate/blob/7ecc7dd4067d99e0fa7525a9fffc8b05e1a7b58f/devstack-vm-gate.sh#L343-L370
>> ?
>
> The devstack-gate change is probably sufficient.
>
> That being said, you can also add "config_drive": True to the server
> create json per request, and it will use a config drive. That may be a
> better option for testing, as it will let you specify at the test level
> what needs config drive testing.
>
> http://developer.openstack.org/api-ref/compute/?expanded=create-server-detail#id7

Well, we use some of the nova tests from tempest's tree, so that
probably isn't an option here. :)

We've been trying to move things out of d-s-g and into project-config,
so I'd rather put it into project-config unless we have a good reason
not to.

// jim



[openstack-dev] [OSC] Bug, spec, BP or something else for adding new commands to OSC?

2016-09-26 Thread Sergey Belous
Hello everyone.

I started working on the blueprint "Implement neutron quota commands" [1],
and there is now a patch that adds a quota delete command to
python-openstackclient [2] (I would also very much appreciate it if you
could look at it when you have some free time, thanks :)

But some time ago I had a discussion on IRC about this blueprint, and as I
understand it, adding a new command (for example, quota delete) only for
neutron (the networking part of quota management in os-client) is not the
best way. For example, with quota delete it's better to add support for
this command across all the parts for which quota management is currently
implemented in os-client (networking, volume, compute). I think that's a
good idea, and the patch mentioned above adds the quota delete command for
all of these parts (networking, volume, compute), but… the blueprint exists
only for the neutron quota commands.

So, my main question is: how should we track this work? I mean, should I
(or someone else) create a bug, spec, RFE, or another blueprint with the
same proposal (something like "add quota delete command for nova/cinder")
and mention it in my patch and in the next patches?


[1]
https://blueprints.launchpad.net/python-openstackclient/+spec/neutron-client-quota
[2] https://review.openstack.org/#/c/376311/
-- 
Best Regards,
Sergey Belous


Re: [openstack-dev] [tripleo] what's up in upgrades?

2016-09-26 Thread mathieu bultel
Hi all,

Since we decided to switch to the OVB provisioner for the CI, the upgrade
jobs are now working much better and more consistently.
Unfortunately, I can't yet say whether we will reach the timeout with those
jobs, because the current state of the upgrade (Mitaka to Newton) is
currently broken due to a number of issues which we are working on.

I'm doing some cleanup in the (huge) review
(https://review.openstack.org/#/c/323750) at the moment, but if you are all
OK with it, it would be great to merge this work, even if the upgrade (full
overcloud) does not complete successfully.
It would allow us, at least, to trigger the experimental pipeline and
see what's going on with the upgrade.

(An example of the state of the full upgrade job:
http://logs.openstack.org/50/323750/54/experimental-tripleo/gate-tripleo-ci-centos-7-ovb-nonha-upgrades-nv/45916e6/console.html)

On 09/09/2016 09:29 AM, mathieu bultel wrote:
> On 09/09/2016 12:42 AM, Emilien Macchi wrote:
>> On Thu, Sep 8, 2016 at 4:18 PM, David Moreau Simard  wrote:
>>> How long does the upgrade job take ?
>> undercloud upgrade is a matter of 10 min max.
>> overcloud upgrade will be much more, I don't have metrics right now.
>> matbu maybe?
> It really depend on the infra which run the upgrade. I don't know much
> about the upstream infra but
> on my local box, with a SSD, 8 cores and 32Go of RAM, It could take
> around 1h30 2hours for a full upgrade.
> On centos ci infra and with RDO, some jobs can takes 4hours or so ...
>
> I'm really curious to see how long a full upgrade will take with upstream.
>
> Right now, the full upgrade job didn't go far from the controller
> upgrade (step 2).
> AFAIK, the timeout in upstream is 3 hours minus 10 minutes ...
> I think if we keep a 2 nodes deployment with only pacemaker, it would be
> enough... I will keep you in touch of my progress here..
>
> But, even if the jobs takes 2 or 3 hours to vote, I think it would be a
> real huge gain for the tripleo work.
>>> David Moreau Simard
>>> Senior Software Engineer | Openstack RDO
>>>
>>> dmsimard = [irc, github, twitter]
>>>
>>>
>>> On Thu, Sep 8, 2016 at 2:27 PM, Emilien Macchi  wrote:
 What's up in upgrades?

 1) Undercloud: Mitaka to Newton

 We just approved a patch in openstack-infra/tripleo-ci that test
 undercloud upgrades.
 I don't think we should make it vote as for now it's quite
 experimental. Though I'm wondering if we should move it to the check
 pipeline as non-voting (currently in experimental queue).

 This is a first iteration and if you plan to upgrade your undercloud,
 you'll still have to do manual steps that we do in tripleo-ci. They
 are described here:
 https://github.com/openstack-infra/tripleo-ci/blob/41e8560cf3d313f2be69df64e4c95a3240dfa402/scripts/tripleo.sh#L554-L577

 We need to decide where to put these bits: in tripleoclient? in
 instack-undercloud? Let's talk about it.


 2) Overcloud: Mitaka to Newton

 matbu and myself are working together on a CI job that will test the
 upgrade of an undercloud + overcloud from Mitaka to Newton.
 Work is here: https://review.openstack.org/#/c/364859 and
 https://review.openstack.org/#/c/323750/ (both patches are going to
 merge together so we have one single patch for review).
 The idea is to use multinode job for now as a first iteration, with
 the simplest scenario possible so we can easily iterate later.


 3) Overcloud: Newton to Newton
 I'm working on a simple patch that would test updates from Newton to
 Newton: https://review.openstack.org/#/c/351330/ like we had with OVB
 jobs but this time using multinode.


 Any feedback, help, is highly welcome.
 --
 Emilien Macchi

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [packaging][rpm] 3rd-party gates promotion to voting gates

2016-09-26 Thread Ivan Udovichenko
Hi,

Thank you for bringing this topic up!

+1 from me.

We will do our best to keep the services up and running on the MOS
(Mirantis) side.

On 09/26/2016 02:48 PM, Haïkel wrote:
> Hi,
> 
> following our discussions about 3rd party gates in RPM packaging project,
> I suggest that we vote in order to promote the following gates as voting:
> - MOS CI
> - SUSE CI
> 
> After promotion, all patchsets submitted will have to validate these gates
> in order to get merged. And gates maintainers should ensure that the gates
> are running properly.
> 
> Please vote before (and/or during) our thursday meeting.
> 
> 
> +1 to promote both MOS and SUSE CI as voting gates.
> 
> Regards,
> H.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



Re: [openstack-dev] [Nova] Proposal for Nova Integration tests

2016-09-26 Thread Stephen Finucane
On Fri, 2016-09-23 at 13:33 -0400, Prasanth Anbalagan wrote:
> Adding the project to email subject.
> 
> Thanks
> Prasanth Anbalagan
> 
> On Fri, 2016-09-23 at 12:56 -0400, Prasanth Anbalagan wrote:
> > 
> > Hi,
> > 
> > Continuing the topic on the need for integration style tests for
> > Nova
> > (brought up earlier during the weekly meeting at #openstack-
> > meeting,
> > Sep 22). The proposal is for a project to hold integration tests
> > that
> > includes low-level testing and runs against a devstack backend. I
> > have
> > included more details here -
> > https://etherpad.openstack.org/p/integration-tests
> > 
> > Please comment on the need for the project, whether or not any
> > similar
> > efforts are in place, approaches suggested, taking forward the
> > initiative, etc.

I missed that conversation, so to clarify: these would be additional
integration tests (as opposed to functional or unit tests) but kept
outside of tempest, correct? If so, how do they differ from what
third-party CIs already provide? [1] If they do provide something
different (i.e. they're run upstream), could they be kept in-tree
(like neutron's scenario tests [2]) rather than in a separate project?

Stephen

[1] http://docs.openstack.org/developer/nova/test_strategy.html#infra-vs-third-party
[2] https://github.com/openstack/neutron/blob/master/TESTING.rst#scenario-tests



[openstack-dev] [packaging][rpm] 3rd-party gates promotion to voting gates

2016-09-26 Thread Haïkel
Hi,

Following our discussions about third-party gates in the RPM packaging
project, I suggest that we vote on promoting the following gates to voting:
- MOS CI
- SUSE CI

After promotion, all submitted patchsets will have to pass these gates
in order to get merged, and gate maintainers should ensure that the gates
are running properly.

Please vote before (and/or during) our Thursday meeting.


+1 to promote both MOS and SUSE CI as voting gates.

Regards,
H.



Re: [openstack-dev] [ironic] ironic-inspector-core team update

2016-09-26 Thread Miles Gould

On 26/09/16 10:24, Dmitry Tantsur wrote:

I suggest adding Milan Kovacik (milan or mkovacik on IRC) to the
ironic-inspector-core team. He's been pretty active on ironic-inspector
recently, doing meaningful reviews, and he's driving our HA work forward.

Please vote with +1/-1. If no objections are recorded, the change will
be in effect next Monday.


Do I get a vote? +1 if so :-)

Miles



Re: [openstack-dev] [ironic] [qa] Testing config drive creation in our CI

2016-09-26 Thread Sean Dague
On 09/26/2016 07:15 AM, Dmitry Tantsur wrote:
> On 09/26/2016 01:09 PM, Sean Dague wrote:
>> This should probably be set at the job level, and not buried inside
>> devstack to be different based on hypervisor. That's going to be a lot
>> more confusing to unwind later.
> 
> Fair. So should we just set DEVSTACK_GATE_CONFIGDRIVE for all our jobs?
> Do you think it should go somewhere here:
> https://github.com/openstack-infra/devstack-gate/blob/7ecc7dd4067d99e0fa7525a9fffc8b05e1a7b58f/devstack-vm-gate.sh#L343-L370
> ?

The devstack-gate change is probably sufficient.

That being said, you can also add "config_drive": True to the server
create json per request, and it will use a config drive. That may be a
better option for testing, as it will let you specify at the test level
what needs config drive testing.

http://developer.openstack.org/api-ref/compute/?expanded=create-server-detail#id7
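For instance, a minimal request body with that flag set would look like the
following (the imageRef/flavorRef values are placeholders):

```shell
# Server-create request body asking Nova for a config drive, per the
# compute API reference above. IMAGE_UUID / FLAVOR_ID are placeholders.
BODY=$(cat <<'EOF'
{
  "server": {
    "name": "configdrive-test",
    "imageRef": "IMAGE_UUID",
    "flavorRef": "FLAVOR_ID",
    "config_drive": true
  }
}
EOF
)
echo "$BODY"
```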

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [ironic] [qa] Testing config drive creation in our CI

2016-09-26 Thread Dmitry Tantsur

On 09/26/2016 01:09 PM, Sean Dague wrote:

This should probably be set at the job level, and not buried inside
devstack to be different based on hypervisor. That's going to be a lot
more confusing to unwind later.


Fair. So should we just set DEVSTACK_GATE_CONFIGDRIVE for all our jobs? Do you 
think it should go somewhere here: 
https://github.com/openstack-infra/devstack-gate/blob/7ecc7dd4067d99e0fa7525a9fffc8b05e1a7b58f/devstack-vm-gate.sh#L343-L370 
?
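If we do set it for all jobs, the change is essentially a one-liner per job
definition; a minimal sketch, assuming the flag still maps through to
devstack's FORCE_CONFIG_DRIVE option:

```shell
# Sketch: enable config drive testing for a job via devstack-gate.
# (Assumes DEVSTACK_GATE_CONFIGDRIVE still maps to devstack's
# FORCE_CONFIG_DRIVE setting.)
export DEVSTACK_GATE_CONFIGDRIVE=1
echo "DEVSTACK_GATE_CONFIGDRIVE=$DEVSTACK_GATE_CONFIGDRIVE"
```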




-Sean

On 09/26/2016 04:55 AM, Dmitry Tantsur wrote:

Just bringing QA folks attention: please merge
https://review.openstack.org/#/c/375467/ as we've regressed in our
testing coverage (see below for details).

On 09/23/2016 08:21 PM, Jim Rollenhagen wrote:

On Fri, Sep 23, 2016 at 7:37 AM, Dmitry Tantsur 
wrote:

Hi folks!

We've found out that we're not testing creating of config drives in
our CI.
It ended up in one combination being actually broken (pxe_* +
wholedisk +
configdrive). I would like to cover this testing gap. Is there any
benefit
in NOT using config drives in all jobs? I assume we should not bother
too
much testing the metadata service, as it's not within our code base
(unlike
config drive).

I've proposed https://review.openstack.org/375362 to switch our tempest
plugin to testing config drives, please vote. As you see one job
fails on it
- this is the breakage I was talking about. It will (hopefully) get
fixed
with the next release of ironic-lib.


Right, so as Pavlo mentioned in the patch, configdrive used to be the
default
for devstack, and as such we forced configdrive for all tests. When
that was
changed, we didn't notice because somehow metadata service worked.
https://github.com/openstack-dev/devstack/commit/7682ea88a6ab8693b215646f16748dbbc2476cc4


I agree, we should go back to using configdrive for all tests.

// jim



Finally, we need to run all jobs on ironic-lib, not only one, as
ironic-lib
is not the basis for all deployment variants. This will probably happen
after we switch our DSVM jobs to Xenial though.

-- Dmitry







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








Re: [openstack-dev] [ironic] [qa] Testing config drive creation in our CI

2016-09-26 Thread Sean Dague
This should probably be set at the job level, and not buried inside
devstack to be different based on hypervisor. That's going to be a lot
more confusing to unwind later.

-Sean

On 09/26/2016 04:55 AM, Dmitry Tantsur wrote:
> Just bringing QA folks attention: please merge
> https://review.openstack.org/#/c/375467/ as we've regressed in our
> testing coverage (see below for details).
> 
> On 09/23/2016 08:21 PM, Jim Rollenhagen wrote:
>> On Fri, Sep 23, 2016 at 7:37 AM, Dmitry Tantsur 
>> wrote:
>>> Hi folks!
>>>
>>> We've found out that we're not testing creating of config drives in
>>> our CI.
>>> It ended up in one combination being actually broken (pxe_* +
>>> wholedisk +
>>> configdrive). I would like to cover this testing gap. Is there any
>>> benefit
>>> in NOT using config drives in all jobs? I assume we should not bother
>>> too
>>> much testing the metadata service, as it's not within our code base
>>> (unlike
>>> config drive).
>>>
>>> I've proposed https://review.openstack.org/375362 to switch our tempest
>>> plugin to testing config drives, please vote. As you see one job
>>> fails on it
>>> - this is the breakage I was talking about. It will (hopefully) get
>>> fixed
>>> with the next release of ironic-lib.
>>
>> Right, so as Pavlo mentioned in the patch, configdrive used to be the
>> default
>> for devstack, and as such we forced configdrive for all tests. When
>> that was
>> changed, we didn't notice because somehow metadata service worked.
>> https://github.com/openstack-dev/devstack/commit/7682ea88a6ab8693b215646f16748dbbc2476cc4
>>
>>
>> I agree, we should go back to using configdrive for all tests.
>>
>> // jim
>>
>>>
>>> Finally, we need to run all jobs on ironic-lib, not only one, as
>>> ironic-lib
>>> is not the basis for all deployment variants. This will probably happen
>>> after we switch our DSVM jobs to Xenial though.
>>>
>>> -- Dmitry
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Sean Dague
http://dague.net



[openstack-dev] [all][elections][TC] TC Candidacy

2016-09-26 Thread John Davidge
Hi everyone,

I'd like to submit my candidacy for the OpenStack Technical Committee. You
may know me as john-davidge on IRC.

I've been an active member of the OpenStack community since 2012 (Folsom).
I met many of you for the first time while presenting the Curvature Network
Visualization Dashboard[1] at the Grizzly Summit in Portland. Since then
I've been working 100% upstream, mostly in neutron[2][3], and for a couple
of different sponsors. Right now I'm employed in the OpenStack Innovation
Center (OSIC) - a joint venture between Rackspace and Intel - where I'm
leading a small team contributing to neutron, and helping to shape the OSIC
development roadmap. Over the years I have seen and participated in a lot
of the changes that have led our community to where it is today. I've
experienced the things that have improved our lives as developers and
operators, as well as the things that have caused us difficulties.

I know where I would like to see OpenStack head in the future, and I feel
strongly about making it a success. The most important thing that I would
like to see changed is the overarching framework under which all of
OpenStack operates:

The Big Tent.

Last year, the TC moved OpenStack away from the Integrated Release, and
into The Big Tent. This removed the separation between those projects
considered integral to OpenStack, and those which enhance it. Since then,
the number of official projects has gone from ~12 to 60. While this was a
fantastic move for community inclusivity, it has also made life harder for
operators and customers, and has diminished the focus on OpenStack's core
purpose.

Managing every team under one roof has led to issues for both the core and
the newer projects. The experience so far has taught us that there isn't a
single set of rules that can be helpfully applied to both. I believe that
now is the time to take The Big Tent's ideas and iterate upon them to
create a new model that can promote inclusivity, while still preserving a
clear focus for the core of OpenStack. The main points of this new model
are:

* Define OpenStack as its core components
* Introduce a new home for complementary projects - The OpenStack Family
* Reinstate Stackforge as the primary incubator for new projects

OpenStack will once again be a focused set of closely aligned projects
working together to provide an operating system for the datacenter. The
OpenStack Family will provide a home for projects that work to improve the
experience of an OpenStack cloud (think Ceilometer, Heat, etc), while
protecting them from some of the more prescriptive rules that go with being
a core OpenStack component. Stackforge will be the main focus of
early-stage innovation, with a clearly defined path towards graduation into
The OpenStack Family. I believe that this model[4] can go a long way
towards solving many of the pain points that we are seeing with OpenStack
today.

This transformation is one that I think is very important for the future
of OpenStack. We have a fantastic project surrounded by a talented
community, of which I am very proud to call myself a member. Trust me with
your vote, and I'll work hard to ensure its continued success.

Thank you for your consideration,

John

[1] https://www.youtube.com/watch?v=pmpRhcwyJIo - Curvature
[2] https://www.youtube.com/watch?v=GjuF-3fB0IQ - IPv6 Prefix Delegation
[3] https://www.youtube.com/watch?v=4ag1NiCVBDo - Neutron Purge
[4] https://johndavidge.wordpress.com/mr-openstack-tear-down-this-tent/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] ironic-inspector-core team update

2016-09-26 Thread Anton Arefiev
+1. Thanks for your work Milan and congrats!

On Mon, Sep 26, 2016 at 12:41 PM, Sam Betts (sambetts) wrote:

> +1 from me! Thanks for the contributions Milan :D
>
> Sam
>
> On 26/09/2016 10:24, "Dmitry Tantsur"  wrote:
>
> >Hi folks!
> >
> >As you probably know, Imre has decided to leave us for other challenges,
> >so our
> >small core team has become even smaller. I'm removing him on his request.
> >
> >I suggest adding Milan Kovacik (milan or mkovacik on IRC) to the
> >ironic-inspector-core team. He's been pretty active on ironic-inspector
> >recently, doing meaningful reviews, and he's driving our HA work forward.
> >
> >Please vote with +1/-1. If no objections are recorded, the change will be
> >in
> >effect next Monday.
> >
> >Thanks!
> >



-- 
Best regards,
Anton Arefiev
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] ironic-inspector-core team update

2016-09-26 Thread Sam Betts (sambetts)
+1 from me! Thanks for the contributions Milan :D

Sam

On 26/09/2016 10:24, "Dmitry Tantsur"  wrote:

>Hi folks!
>
>As you probably know, Imre has decided to leave us for other challenges,
>so our 
>small core team has become even smaller. I'm removing him on his request.
>
>I suggest adding Milan Kovacik (milan or mkovacik on IRC) to the
>ironic-inspector-core team. He's been pretty active on ironic-inspector
>recently, doing meaningful reviews, and he's driving our HA work forward.
>
>Please vote with +1/-1. If no objections are recorded, the change will be
>in 
>effect next Monday.
>
>Thanks!
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] ironic-inspector-core team update

2016-09-26 Thread Dmitry Tantsur

Hi folks!

As you probably know, Imre has decided to leave us for other challenges, so our 
small core team has become even smaller. I'm removing him on his request.


I suggest adding Milan Kovacik (milan or mkovacik on IRC) to the 
ironic-inspector-core team. He's been pretty active on ironic-inspector 
recently, doing meaningful reviews, and he's driving our HA work forward.


Please vote with +1/-1. If no objections are recorded, the change will be in 
effect next Monday.


Thanks!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [qa] Testing config drive creation in our CI

2016-09-26 Thread Dmitry Tantsur
Just bringing this to the QA folks' attention: please merge
https://review.openstack.org/#/c/375467/ as we've regressed in our test
coverage (see below for details).


On 09/23/2016 08:21 PM, Jim Rollenhagen wrote:

On Fri, Sep 23, 2016 at 7:37 AM, Dmitry Tantsur  wrote:

Hi folks!

We've found out that we're not testing creation of config drives in our CI.
That left one combination actually broken (pxe_* + wholedisk +
configdrive). I would like to close this testing gap. Is there any benefit
in NOT using config drives in all jobs? I assume we should not bother too
much with testing the metadata service, as it's not within our code base (unlike
config drive).

I've proposed https://review.openstack.org/375362 to switch our tempest
plugin to testing config drives; please vote. As you can see, one job fails on
it - this is the breakage I was talking about. It will (hopefully) be fixed
with the next release of ironic-lib.


Right, so as Pavlo mentioned in the patch, configdrive used to be the default
for devstack, and as such we forced configdrive for all tests. When that was
changed, we didn't notice because somehow metadata service worked.
https://github.com/openstack-dev/devstack/commit/7682ea88a6ab8693b215646f16748dbbc2476cc4

I agree, we should go back to using configdrive for all tests.

// jim
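For reference, the devstack toggle in question can be pinned in a job's local.conf so the gate exercises config drives regardless of the upstream default - a sketch, assuming the standard FORCE_CONFIG_DRIVE variable:

```
[[local|localrc]]
# Force nova to build a config drive for every instance, so the ironic
# jobs exercise the config drive code path instead of relying on the
# metadata service.
FORCE_CONFIG_DRIVE=True
```

The devstack change linked above appears to have stopped forcing this by default, which is how the coverage gap crept in unnoticed.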



Finally, we need to run all jobs on ironic-lib, not only one, as ironic-lib
is now the basis for all deployment variants. This will probably happen
after we switch our DSVM jobs to Xenial though.

-- Dmitry

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting - 09/26/2016

2016-09-26 Thread Renat Akhmerov
Hi team,

I’m inviting you to a team meeting today. It will be at #openstack-meeting at
16.00 UTC as usual.

Agenda:
* Review action items
* Current status (progress, issues, roadblocks, further plans)
* RC2 release readiness
* Maintaining Launchpad projects (mistral and python-mistralclient)
  * We need to maintain a client project also (bugs, BPs, releases)
* Open discussion

Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Ocata Design Summit - Proposed slot allocation

2016-09-26 Thread Thierry Carrez
Zhipeng Huang wrote:
> Hi Thierry,
> 
> Nomad [1] (not an official project yet :) ) team would like to request
> one room for developer meetup, if you have any given-back room available
> that would be fantastic :)
> 
> [1]https://wiki.openstack.org/wiki/Nomad  

Hi Zhipeng,

We didn't get that much space back, so I wasn't able to satisfy your
late request. I'll keep you posted if we get more space back in the future.

Note that in Barcelona we'll have a big room set up with roundtables all
Friday -- so it is easy for your team to gather there if they want to.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Devstack, Tempest, and TLS

2016-09-26 Thread Dmitry Tantsur

On 09/24/2016 02:04 AM, Clark Boylan wrote:

Earlier this month there was a thread on replacing stud in devstack for
the tls-proxy service [0]. Over the last week or so a bunch of work has
happened around this so I figured I would send an update.





Also noticed that Ironic's devstack plugin isn't configured to deal with
a devstack that runs the other services with TLS. This is mostly
addressed by a small change to set the correct glance protocol and swift
url [4]. However tests for this continue to fail if TLS is enabled
because the IPA image does not trust the devstack created CA which has
signed the cert in front of glance.


There is a patch to implement such trust: https://review.openstack.org/358457

However, we lack a similar change for ironic-inspector still.



It would be great if people could review these. Assuming reviews happen, we
should be able to run the core set of tempest jobs with TLS enabled real
soon now. This will help us avoid regressions like the one that hit OSC,
in which it could no longer speak to a neutron fronted with a proxy
terminating TLS.

Also, I am learning that many of our services require redundant and
confusing configuration. Ironic for example needs to have
glance_protocol set even though it appears to get the actual glance
endpoint from the keystone catalog. You also have to tell it where to
find swift except that if it is already using the catalog why can't it
find swift there? Many service configs have an auth_url and auth_uri
under [keystone_authtoken]. The values for them are different, but I am
not sure why we need to have both an auth_uri and an auth_url, and why they
should be different URLs (yes, both are URLs). Cinder requires you to set
both osapi_volume_base_URL and public_endpoint to get proper https
happening.


Note: I think everything in [keystone_authtoken] sections comes from 
keystonemiddleware, not from services.
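To illustrate the confusion Clark mentions, a typical [keystone_authtoken]
section of that era looks roughly like this (hostnames and credentials are
placeholders). auth_uri is the public identity endpoint that the middleware
advertises to unauthenticated clients, while auth_url is the endpoint the
middleware itself uses to validate tokens - historically the admin endpoint
on a different port:

```
[keystone_authtoken]
# Public endpoint, returned to clients in WWW-Authenticate headers
auth_uri = https://controller:5000
# Endpoint the middleware authenticates against to validate tokens
auth_url = https://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = ironic
password = PLACEHOLDER
```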




Should I be filing bugs for these things? are they known issues? is
anyone interested in simplifying our configs?


+1, please do. Thanks for looking into it.



[0]
http://lists.openstack.org/pipermail/openstack-dev/2016-September/102843.html
[1] https://review.openstack.org/#/c/374328/
[2] https://review.openstack.org/373219
[3] https://review.openstack.org/375724
[4] https://review.openstack.org/375649

Thanks,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [sahara] Unable to ping virtual machines with floating IPs

2016-09-26 Thread Sudipta Biswas

Hi,

I am trying out OpenStack Sahara on an OpenStack Mitaka deployment (a lab
experiment) using a single controller and a single compute node.


The controller node runs inside a virtual machine on top of the
compute node (this VM runs Sahara as well).
The controller node has two interfaces: one public via br0 (LB) on the
compute node and one private via br-ex (OVS). Both IPs are reachable
from the controller to the compute host. I use the public interface as
the management network.


I run the neutron-l3-agent with br-ex (configured with the 192.x range) as
the external_bridge on the compute host.
I see that the neutron router port for the 192.x network remains in
BUILD state, even though the interfaces (namespaces) are all created
properly on the compute node, and the router IPs are even reachable.


I am running the neutron-openvswitch-agent with bridge_mappings set to
default:br-ex.
I have created an external FLAT network on the controller with the same
subnet range as br-ex (that is, 192.x.x.x) to use for floating IPs.


The reason I did this is that I don't have free public floating IPs,
hence I created a network topology roughly like the one below:

[topology diagram attached to the original mail; not preserved here]
With this setup, every time I boot a virtual machine and attach a floating IP
(192.x range), the IP doesn't respond to ping.
However, if I restart iptables on the compute node (which runs the
l3-agent and the openvswitch agent), the floating IP becomes pingable
and I can also log in to the virtual machine from either the controller
or the compute node.


Can someone help me understand this behavior?
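For comparison, the agent-side configuration for a flat external network of
this shape usually looks something like the following (illustrative values;
file locations vary by distribution):

```
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_type_flat]
flat_networks = default

# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
bridge_mappings = default:br-ex

# /etc/neutron/l3_agent.ini
# Leaving external_network_bridge empty makes the L3 agent plug external
# ports through the OVS bridge mapping above, rather than directly into
# a named bridge.
[DEFAULT]
external_network_bridge =
```

A mismatch between these two attachment modes (a named external_network_bridge
versus the bridge mapping) can produce odd floating IP reachability behavior,
so it may be worth checking which mode the L3 agent is actually using.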

Thanks,
Sudipto
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-26 Thread Flavio Percoco

On 22/09/16 17:15 -0400, Anita Kuno wrote:

On 16-09-21 01:11 PM, Doug Hellmann wrote:

Excerpts from Clint Byrum's message of 2016-09-21 08:56:24 -0700:

I think it might also be useful if we could make the meeting bot remind
teams of any pending actions they need to take, such as elections, upon
#startmeeting.

I could see that being useful, yes.

I am not convinced this situation arose due to lack of available 
information.


You may be right here but I don't think having other means to spread this
information is a bad thing, if there's a way to automate this, of course.
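If someone did want to automate it, the reminder logic itself is tiny - a
hypothetical sketch (not actual meetbot code; the data source and names are
invented for illustration), where a hook fired on #startmeeting looks up
pending actions for the team:

```python
from datetime import date

# Hypothetical data source: pending actions per meeting/team name.
# In a real deployment this would be fetched from the governance repo
# or an elections calendar rather than hard-coded.
PENDING_ACTIONS = {
    "oslo": [("submit PTL candidacy", date(2016, 9, 18))],
    "security": [("submit PTL candidacy", date(2016, 9, 18))],
}


def startmeeting_reminders(meeting_name, today):
    """Return reminder strings to announce when #startmeeting fires."""
    reminders = []
    for action, deadline in PENDING_ACTIONS.get(meeting_name, []):
        days_left = (deadline - today).days
        if days_left >= 0:  # skip deadlines that have already passed
            reminders.append(
                "Reminder: %s by %s (%d day(s) left)"
                % (action, deadline.isoformat(), days_left))
    return reminders


print(startmeeting_reminders("oslo", date(2016, 9, 16)))
# -> ['Reminder: submit PTL candidacy by 2016-09-18 (2 day(s) left)']
```

The interesting part would be keeping the data source current, not the bot
plumbing itself.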

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-26 Thread Hugh Blemings

Hiya,

On 24/09/2016 03:46, Mike Perez wrote:

On 11:03 Sep 21, Doug Hellmann wrote:



A separate mailing list just for “important announcements” would
need someone to decide what is “important”. It would also need
everyone to be subscribed, or we would have to cross-post to the
existing list. That’s why we use topic tags on the mailing list, so
that it is possible to filter messages based on what is important
to the reader, rather than the sender.


This has come up in the past, and I have suggested that people who
can't spend much time on the lists refer to the Dev Digest at
blog.openstack.org, which mentioned the PTL elections being open.


Fwiw, I'd endorse Mike's comments about the Dev digest - it's an easily
digestible (sorry!) and concise summary of what's happening on
openstack-dev - I refer to it regularly myself.

Two other sources that come to mind for less detailed but topical
summaries of traffic are Jason Baker's summary on opensource.com [0] and
Lwood [1], which I put together each week. Both flag upcoming
election-related topics pretty reliably and might suit some folks.

For what my $0.20 is worth, I don't think splitting out into further
logistics- or announcement-oriented lists would be beneficial in the long
term.

Cheers,
Hugh


[0] https://opensource.com/business/16/9/openstack-news-september-26
[1] http://hugh.blemings.id.au/openstack/lwood/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev