Re: [openstack-dev] [nova] Latest news on placement API and Ocata rough goals

2016-09-23 Thread Alex Xu
2016-09-24 5:07 GMT+08:00 Sylvain Bauza :

>
>
> Le 23/09/2016 18:41, Jay Pipes a écrit :
>
>> Hi Stackers,
>>
>> In Newton, we had a major goal of having Nova sending inventory and
>> allocation records from the nova-compute daemon to the new placement API
>> service over HTTP (i.e. not RPC). I'm happy to say we achieved this goal.
>> We had a stretch goal from the mid-cycle of implementing the custom
>> resource class support. I'm sorry to say that we did not reach this goal,
>> though Ironic did indeed get its part merged and we should be able to
>> complete this work before the summit in Nova.
>>
>> Through the hard work of many folks [1] we were able to merge code that
>> added a brand new REST API service (/placement) with endpoints for
>> read/write operations against resource providers, inventories, allocations,
>> and usage records. We were able to get patches merged that modified the
>> resource tracker in the nova-compute to write the compute node's inventory
>> and allocation records to the placement API in a fashion that avoided
>> required action on the part of the operator to keep the nova-computes up
>> and running.
>>
>>
> Thanks Jay for giving us again your views.
>
>> For Ocata AND BEYOND, here are a number of rough priorities and goals
>> that we need to work on...
>>
>> 1. Shared storage properly implemented
>>
>> To fulfill the original use case around accurate reporting of shared
>> resources, we need to complete a few subtasks:
>>
>> a) complete the aggregates/ endpoints in the placement API so that
>> resource providers can be associated with aggregates
>> b) have the scheduler reporting client tracking more than just the
>> resource provider for the compute node
>>
>>
> I saw some patches about that. Let me know the changes so I could review
> them.
> For the client one, lemme know if you need some help.
>
> 2. Custom resource classes
>>
>> This actually isn't all that much work, but just needs some focus. We
>> need the following done in this area:
>>
>> a) (very simple) REST API added to the placement API for GET/PUT resource
>> class names
>> b) modify the ResourceClass Enum field to be a StringField -- which is
>> wire-compatible with Enum -- and add some code on each side of the
>> client/server communication that caches the standard resource classes as
>> constants that Nova and placement code can share
>> c) modify the Ironic virt driver to pass the new node_class attribute on
>> nodes into the resource tracker and have the resource tracker create
>> resource provider records for each Ironic node with a single inventory
>> record for each of those resource providers for the node class
>> d) modify the resource tracker to track the allocation of instances to
>> resource providers
>>
>>
> So, first about that, sorry. I said during the midcycle that I could
> implement the above REST API, but my August turned out to be very short and I
> ended up having no time for it. Now that we're in September, I can resume my
> implementation for a) and b).
>
> That said, we still have the spec to be approved by Ocata.
>
>
> 3. Integration of Nova scheduler with Placement API
>>
>> We would like the Nova scheduler to be able to query the placement API
>> for quantitative information in Ocata. So, code will need to be pushed that
>> adds a call to the placement API for resource provider UUIDs that meet a
>> given request for some amount of resources. This result will then be used
>> to filter a request in the Nova scheduler for ComputeNode objects to
>> satisfy the qualitative side of the request.
>>
>>
> We tried to discuss that during the midcycle, but it seemed we had some
> confusion about what could be calling the placement API and where.
> From my perspective, I was thinking the current scheduler would call out to
> the placement API (or even use the Nova objects directly) during the
> HostManager call so that it would return fewer hosts for the filters to
> process. Thoughts?
>
> 4. Progress on qualitative request components (traits)
>>
>> A number of things can be done in this area:
>>
>> a) get os-traits interface stable and include all catalogued standardized
>> trait strings
>> b) agree on schema in placement DB for storing and querying traits
>> against resource providers
>>
>>
> Given Ocata is a short cycle, and given the current specs are not yet
> fully discussed, I wonder if we would have time to get the above
> implemented?
> I don't want to say we won't do it, just that it looks like a stretch
> goal for Ocata. At least, I think the discussion in the spec is a priority
> for Ocata, sure.



Yes, a very short cycle. I plan to update the spec; the update is about
hiding the standard traits validation behind the placement API. Yingxin and I
are working on a PoC to show what that would look like, and we hope to have it
done next week. Then I hope we will have enough for people to discuss.
It will also help people measure what is worth doing in Ocata.

Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2016-09-23 Thread Zhenyu Zheng
Hi,

Thanks all for the information. As for the filter Erlon mentioned
(InstanceLocalityFilter), it only solves part of the problem: we can create
new volumes for existing instances using this filter and then attach them,
but the root volume still cannot be guaranteed to be on the same host as the
compute resource, right?

The idea here is that all the volumes use local disks.
I was wondering whether we already have such a plan for after the Resource
Provider structure has been completed?

Thanks

On Sat, Sep 24, 2016 at 2:05 AM, Erlon Cruz  wrote:

> Not sure exactly what you mean, but in Cinder, using the
> InstanceLocalityFilter [1], you can schedule a volume to the same compute
> node the instance is located on. Is this what you need?
>
> [1] http://docs.openstack.org/developer/cinder/scheduler-filters.html#instancelocalityfilter
>
> On Fri, Sep 23, 2016 at 12:19 PM, Jay S. Bryant <
> jsbry...@electronicjungle.net> wrote:
>
>> Kevin,
>>
>> This is functionality that has been requested in the past but has never
>> been implemented.
>>
>> The best way to proceed would likely be to propose a blueprint/spec for
>> this and start working this through that.
>>
>> -Jay
>>
>>
>> On 09/23/2016 02:51 AM, Zhenyu Zheng wrote:
>>
>> Hi Novaers and Cinders:
>>
>> Quite often application requirements would demand using locally attached
>> disks (or direct attached disks) for OpenStack compute instances. One such
>> example is running virtual hadoop clusters via OpenStack.
>>
>> We can now achieve this by using BlockDeviceDriver as the Cinder driver and
>> using AZs in Nova and Cinder, as illustrated in [1], but this is not very
>> feasible in large-scale production deployments.
>>
>> Now that Nova is working on resource providers, trying to build a generic
>> resource pool, is it possible to perform "volume-based scheduling" to build
>> instances according to volumes? This could make it much easier to build
>> instances like those mentioned above.
>>
>> Or do we have any other ways of doing this?
>>
>> References:
>> [1] http://cloudgeekz.com/71/how-to-setup-openstack-to-use-local-disks-for-instances.html
>>
>> Thanks,
>>
>> Kevin Zheng
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-23 Thread Steven Dake (stdake)
+1!  The security project adds tremendous value to OpenStack.

Regards
-steve


From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, September 23, 2016 at 10:35 AM
To: openstack-dev 
Subject: Re: [openstack-dev] [security] [salt] Removal of Security and 
OpenStackSalt project teams from the Big Tent

Excerpts from Rob C's message of 2016-09-23 17:46:46 +0100:
I wanted to provide a quick update from Security.
We had our weekly IRC meeting yesterday; dhellmann was kind enough to attend
to help broker some of the discussion. In advance of the meeting I prepared
a blog post where I tried to articulate my position and where I think
things need to go next [1]. This was discussed at length during the IRC
meeting [2]. We discussed the option of becoming a WG or staying in the big
tent; this resulted in a vote in which the team all indicated their desire to
stay within the big tent.
My proposal for the future is outlined in some depth in [1], but the
summary is that we've identified the areas we need to improve on in
order to be better members of the community, that we want to stay within the
big tent, and that I will maintain leadership through this transformational
process with a view to having multiple candidates stand in the next
election.
Cheers
-Rob

Thanks, Rob. Based on the discussions yesterday I think the team has a
better understanding of the communication issues and I'm convinced that
everyone is committed to improving. I support keeping the team in the
tent.

Doug

[1]
https://openstack-security.github.io/organization/2016/09/22/maturing-the-security-project.html
[2]
http://eavesdrop.openstack.org/meetings/security/2016/security.2016-09-22-17.00.log.html
On Fri, Sep 23, 2016 at 4:23 AM, Davanum Srinivas 
> wrote:
> Steven,
>
> Fair point.
>
> Thanks,
> Dims
>
> On Thu, Sep 22, 2016 at 11:04 PM, Steven Dake (stdake) 
> >
> wrote:
> > Dims,
> >
> > This isn’t any of my particular business except it could affect emerging
> technology projects (which I find important to OpenStack’s future)
> negatively – so I thought I’d chime in.
> >
> > A lack of activity in a specs repo doesn’t mean much to me.  For
> example, as Kolla was an emerging project we didn’t use any specs process
> at all (or very rarely).  There is a reason behind this. Now that Kolla is
> stable and reliable and we feel we are not an emerging project, we plan to
> make use of a specs repo starting in Ocata.
> >
> > I have no particular concerns with the other commentary – but please
> don’t judge a project by activity or lack of activity in one repo of its
> deliverables.  Judge it holistically (You are judging holistically.  I
> believe a lack of one repo’s activity shouldn’t be part of that judgement).
> >
> > Regards
> > -steve
> >
> >
> > On 9/21/16, 2:08 PM, "Davanum Srinivas" 
> > > wrote:
> >
> > Jakub,
> >
> > Please see below.
> >
> > On Wed, Sep 21, 2016 at 3:46 PM, Jakub Pavlik <
> jakub.pav...@tcpcloud.eu> wrote:
> > > Hello all,
> > >
> > > it took us 2 years of hard work to get these official. OpenStack-Salt is
> > > now used by around 40 production deployments, it is very focused on
> > > operations, and its popularity is growing. You are removing the project a
> > > week after one of the top contributors announced that they will use it as
> > > part of their solution. We made mistakes, however I do not think that is a
> > > reason to remove us. I do not think that the quality of the project is
> > > measured like this. Our PTL got ill and could not do his job properly for
> > > the last 3 weeks, but this can happen to anybody.
> > >
> > > It is up to you. If you think that we are useless to the community, then
> > > remove us and we will have to continue outside of this community. However,
> > > growing successful use cases will then not be under the official OpenStack
> > > community, which makes me feel bad.
> >
> > Data points so far are:
> > 1. No response during Barcelona planning for rooms
> > 2. Lack of candidates for PTL election
> > 3. No activity in the releases/ repository hence no entries in
> > https://releases.openstack.org/
> > 4. Meetings are not so regular?
> > http://eavesdrop.openstack.org/meetings/openstack_salt/2016/
> (supposed
> > to be weekly)
> > 5. Is the specs repo really active?
> > http://git.openstack.org/cgit/openstack/openstack-salt-specs/ is the
> > work being done elsewhere?
> > 6. Is there an effort to add stuff to the CI jobs running on
> openstack
> > infrastructure? (can't seem to find much
> > 
> > 

Re: [openstack-dev] [neutron][stadium] Query regarding procedure for inclusion in Neutron Stadium

2016-09-23 Thread Armando M.
On 22 September 2016 at 22:36, Takashi Yamamoto 
wrote:

> hi,
>
> On Fri, Sep 23, 2016 at 4:20 AM, Armando M.  wrote:
> >
> >
> > On 22 September 2016 at 00:46, reedip banerjee 
> wrote:
> >>
> >> Dear Neutron Core members,
> >>
> >> I have a query regarding the procedure for inclusion in the Neutron
> >> Stadium.
> >> I wanted to know if a project can apply for Big Tent and Neutron Stadium
> >> together ( means can a project be accepted in the Neutron Stadium and
> as a
> >> result into the Big Tent )
> >>
> >> I was checking out the checklist in [1], and IMO we need to conform to
> >> the checklist to be added to the Neutron Stadium (along with the other
> >> requirements, like keeping in sync with the core Neutron concepts).
> >>
> >> But IIUC, certain items in the checklist would be completed if a project
> >> is already included in the Big Tent.
> >
> >
> > I would not worry about those, as these are rather trivial to implement
> in
> > conjunction with Stadium inclusion. I'd worry about the fork that the
> > project relies on, which is a big show stopper for Stadium inclusion.
> >
> > [1] https://github.com/openstack/tap-as-a-service/blob/master/setup.cfg#L50
>
> just a clarification; this is not a fork of ovs-agent. it's a separate
> agent.
>

It may not strictly be a fork, but I am not grasping the technical reason
as to why one needs yet another agent on the node. Besides, this might end
up interfering with the OVS agent as it is handling the same resources [1],
without any level of coordination.

[1]
https://github.com/openstack/tap-as-a-service/blob/master/neutron_taas/services/taas/drivers/linux/ovs_taas.py#L43:L44


> >
> >>
> >>
> >> So my doubt is ,should a project apply for the Big Tent first, and after
> >> inclusion, apply for Neutron Stadium ? Or can a project be integrated to
> >> Neutron Stadium and Big Tent simultaneously ( I am a bit sceptical about
> >> this though)?
> >
> >
> > You are confusing the two things. A project either belongs to list [1] or
> > list [2], and you can't be in both at the same time. To be part of either
> > list a project must comply with a set of criteria. As for the latter, a
> > couple of steps may sound like a catch 22: for instance you can't make
> docs
> > available on docs.o.o unless you are in [2] and you can't be in [2]
> > unless...and here's the trick, unless you are a point where you can
> > successfully demonstrate that the project has substantial documentation
> (of
> > any sort, API included). The process of 'demonstrating'/application for
> > inclusion in list [2] follows the RFE submission process that we have
> > adopted for a while, and that means submitting a spec. Since the request
> > does not require a conventional design process, I was going to prepare an
> > ad-hoc template and make it available soon. So watch the neutron-specs
> repo
> > for updates.
> >
> > In the meantime, I'd urge you to go over the checklist and make sure you
> can
> > address the hard parts.
> >
> > If you ask me, if you go with [1], it makes no sense to go and apply to
> be
> > in [2].
> >
> > HTH
> > Armando
> >
> > [1] http://governance.openstack.org/reference/projects/
> > [2] http://governance.openstack.org/reference/projects/neutron.html
> >
> >>
> >>
> >>
> >> [1]
> >> http://docs.openstack.org/developer/neutron/stadium/governance.html#checklist
> >> --
> >> Thanks and Regards,
> >> Reedip Banerjee
> >> IRC: reedip
> >>
> >>
> >>
> >>
> >>
> >> 
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Devstack, Tempest, and TLS

2016-09-23 Thread Clark Boylan
Earlier this month there was a thread on replacing stud in devstack for
the tls-proxy service [0]. Over the last week or so a bunch of work has
happened around this so I figured I would send an update.

Tempest passes against devstack with some edits to one of the object
tests to properly handle 304s [1].

Multinode devstack and tempest pass with a small change to devstack-gate
[2] to copy the CA to all test nodes which needs a small change to
devstack [3] to avoid overwriting the CA. Note the devstack-gate change
needs to deal with some new ansible issues so isn't ready for merging
just yet.

Also noticed that Ironic's devstack plugin isn't configured to deal with
a devstack that runs the other services with TLS. This is mostly
addressed by a small change to set the correct glance protocol and swift
url [4]. However tests for this continue to fail if TLS is enabled
because the IPA image does not trust the devstack created CA which has
signed the cert in front of glance.

Would be great if people could review these. Assuming reviews happen we
should be able to run the core set of tempest jobs with TLS enabled real
soon now. This will help us avoid regressions like the one that hit OSC
in which it could no longer speak to a neutron fronted with a proxy
terminating TLS.

Also, I am learning that many of our services require redundant and
confusing configuration. Ironic for example needs to have
glance_protocol set even though it appears to get the actual glance
endpoint from the keystone catalog. You also have to tell it where to
find swift except that if it is already using the catalog why can't it
find swift there? Many service configs have an auth_url and an auth_uri
under [keystone_authtoken]. The values for them are different, but I am
not sure why we need both an auth_uri and an auth_url, or why they
should be different URLs (yes, both are URLs). Cinder requires you to set
both osapi_volume_base_URL and public_endpoint to get proper https
happening.
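
To make that concrete, the kind of duplication I mean looks roughly like this
(host/port values are made up; only the option names come from the services'
configs, and the exact section placement is from memory):

  # keystone_authtoken: two similar-sounding options, two different URLs
  [keystone_authtoken]
  auth_uri = https://192.0.2.10:5000
  auth_url = https://192.0.2.10:35357

  # cinder.conf: both of these appear to be needed to get https URLs emitted
  [DEFAULT]
  osapi_volume_base_URL = https://192.0.2.10:8776
  public_endpoint = https://192.0.2.10:8776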

Should I be filing bugs for these things? are they known issues? is
anyone interested in simplifying our configs?

[0]
http://lists.openstack.org/pipermail/openstack-dev/2016-September/102843.html
[1] https://review.openstack.org/#/c/374328/
[2] https://review.openstack.org/373219
[3] https://review.openstack.org/375724
[4] https://review.openstack.org/375649

Thanks,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] gate failures (fragile integrated tests)

2016-09-23 Thread Richard Jones
This also seems to be an issue connecting to Firefox, just in a
different place:
http://logs.openstack.org/69/375669/1/check/gate-horizon-dsvm-integration-deprecated-ubuntu-xenial/468d333/console.html#_2016-09-23_19_38_05_221999


 Richard


On 23 September 2016 at 22:18, Akihiro Motoki  wrote:

> Hi horizoners,
>
> As you may have noticed, the main fixes have been merged and the horizon
> gate failure rate seems to have recovered.
> There are still some remaining issues around the integration tests, but the
> top failure-rate problems have now been fixed or have a workaround.
> Thanks for your patience.
>
>
> 2016-09-23 9:30 GMT+09:00 Akihiro Motoki :
>
>> Hi horizoners,
>>
>> The current horizon gate is half broken, as both integration tests have a
>> 30-40% failure rate.
>> (See https://bugs.launchpad.net/horizon/+bug/1626536 and
>> https://bugs.launchpad.net/horizon/+bug/1626643)
>> Fixes for these bugs are now in the gate.
>>
>> Please avoid using 'recheck' if one of the integration tests fails.
>>
>> Cores, let these fixes be merged first.
>> Until then, avoid giving +A to other patches so that these merge quickly.
>>
>> Thanks,
>> Akihiro
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] newton rc-2 deadline

2016-09-23 Thread Timothy Symanczyk
https://bugs.launchpad.net/glance/+bug/1620833

This would be my suggestion for a conservative / quick-hit / high-impact bug
to consider for inclusion. Especially since this cycle is when the
documentation for using db.simple was added, it seems like it'd be nice if
this were also fixed.


Tim



On 9/23/16, 1:05 PM, "Brian Rosmaita"  wrote:

>We're going to need to cut rc-2 for Glance to accommodate some new
>translations, so there is an opportunity to include some conservative
>bugfixes.  Any such must be merged before 16:00 UTC on Wed 28 Sept, so I
>am setting a deadline of 12:00 UTC on Tue 27 Sept for approval.  If you
>have a bugfix that is a worthy candidate, please reply to this email with
>the appropriate info.
>
>cheers,
>brian
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Latest news on placement API and Ocata rough goals

2016-09-23 Thread Sylvain Bauza



Le 23/09/2016 18:41, Jay Pipes a écrit :

Hi Stackers,

In Newton, we had a major goal of having Nova sending inventory and 
allocation records from the nova-compute daemon to the new placement 
API service over HTTP (i.e. not RPC). I'm happy to say we achieved 
this goal. We had a stretch goal from the mid-cycle of implementing 
the custom resource class support. I'm sorry to say that we did not 
reach this goal, though Ironic did indeed get its part merged and we 
should be able to complete this work before the summit in Nova.


Through the hard work of many folks [1] we were able to merge code 
that added a brand new REST API service (/placement) with endpoints 
for read/write operations against resource providers, inventories, 
allocations, and usage records. We were able to get patches merged 
that modified the resource tracker in the nova-compute to write the 
compute node's inventory and allocation records to the placement API 
in a fashion that avoided required action on the part of the operator 
to keep the nova-computes up and running.




Thanks Jay for giving us again your views.

For Ocata AND BEYOND, here are a number of rough priorities and
goals that we need to work on...


1. Shared storage properly implemented

To fulfill the original use case around accurate reporting of shared 
resources, we need to complete a few subtasks:


a) complete the aggregates/ endpoints in the placement API so that 
resource providers can be associated with aggregates
b) have the scheduler reporting client tracking more than just the 
resource provider for the compute node




I saw some patches about that. Let me know the changes so I could review 
them.

For the client one, lemme know if you need some help.
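
For anyone else reviewing, my rough understanding of the shape of the
aggregates endpoints is something like the following (purely illustrative;
the exact request/response format is whatever the patches settle on, and the
UUIDs are placeholders):

  # associate a resource provider with a set of aggregates
  curl -X PUT -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
       -d '["<agg-uuid-1>", "<agg-uuid-2>"]' \
       http://<placement>/resource_providers/<rp-uuid>/aggregates

  # list the aggregates a resource provider is associated with
  curl -H "X-Auth-Token: $TOKEN" \
       http://<placement>/resource_providers/<rp-uuid>/aggregates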


2. Custom resource classes

This actually isn't all that much work, but just needs some focus. We 
need the following done in this area:


a) (very simple) REST API added to the placement API for GET/PUT 
resource class names
b) modify the ResourceClass Enum field to be a StringField -- which is 
wire-compatible with Enum -- and add some code on each side of the 
client/server communication that caches the standard resource classes 
as constants that Nova and placement code can share
c) modify the Ironic virt driver to pass the new node_class attribute 
on nodes into the resource tracker and have the resource tracker 
create resource provider records for each Ironic node with a single 
inventory record for each of those resource providers for the node class
d) modify the resource tracker to track the allocation of instances to 
resource providers




So, first about that, sorry. I said during the midcycle that I could
implement the above REST API, but my August turned out to be very short and
I ended up having no time for it. Now that we're in September, I can
resume my implementation for a) and b).
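
Just so we are talking about the same thing, the kind of interface I have in
mind for a) is roughly the following (the names and the CUSTOM_ prefix are
purely illustrative; the spec is what will fix the details):

  # list the known resource classes (standard plus custom)
  curl -H "X-Auth-Token: $TOKEN" http://<placement>/resource_classes

  # register a custom resource class, e.g. for an Ironic node class
  curl -X PUT -H "X-Auth-Token: $TOKEN" \
       http://<placement>/resource_classes/CUSTOM_BAREMETAL_GOLD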


That said, we still have the spec to be approved by Ocata.



3. Integration of Nova scheduler with Placement API

We would like the Nova scheduler to be able to query the placement API 
for quantitative information in Ocata. So, code will need to be pushed 
that adds a call to the placement API for resource provider UUIDs that 
meet a given request for some amount of resources. This result will 
then be used to filter a request in the Nova scheduler for ComputeNode 
objects to satisfy the qualitative side of the request.




We tried to discuss that during the midcycle, but it seemed we had
some confusion about what could be calling the placement API and where.
From my perspective, I was thinking the current scheduler would call
out to the placement API (or even use the Nova objects directly) during
the HostManager call so that it would return fewer hosts for the filters
to process. Thoughts?
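
To be concrete about what I mean by "return fewer hosts", something along
these lines, where the query syntax is purely illustrative since it is
exactly what still needs to be designed:

  # the scheduler asks placement for providers that can satisfy the resources
  curl -H "X-Auth-Token: $TOKEN" \
       "http://<placement>/resource_providers?resources=VCPU:2,MEMORY_MB:2048,DISK_GB:20"

  # -> a list of resource provider UUIDs; the HostManager would then load only
  #    the matching ComputeNode objects and run the qualitative filters on them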



4. Progress on qualitative request components (traits)

A number of things can be done in this area:

a) get os-traits interface stable and include all catalogued 
standardized trait strings
b) agree on schema in placement DB for storing and querying traits 
against resource providers




Given Ocata is a short cycle, and given the current specs are not yet
fully discussed, I wonder if we would have time to get the above
implemented?
I don't want to say we won't do it, just that it looks like a stretch
goal for Ocata. At least, I think the discussion in the spec is a
priority for Ocata, sure.



5. Nested resource providers

Things like SR-IOV PCI devices are actually resource providers that 
are embedded within another resource provider (the compute node 
itself). In order to tag things like SR-IOV PFs or VFs with a set of 
traits, we need to have discovery code run on the compute node that 
registers things like SR-IOV PF/VFs or SR-IOV FPGAs as nested resource 
providers.


Some steps needed here:

a) agreement on schema for placement DB for representing this nesting 
relationship
b) write the discovery code in nova-compute for adding these resource 
providers to the placement API when found



Re: [openstack-dev] [tripleo] Fernet Key rotation

2016-09-23 Thread Adam Young

On 08/11/2016 06:25 AM, Steven Hardy wrote:

On Wed, Aug 10, 2016 at 11:31:29AM -0400, Zane Bitter wrote:

On 09/08/16 21:21, Adam Young wrote:

On 08/09/2016 06:00 PM, Zane Bitter wrote:

In either case a good mechanism might be to use a Heat Software
Deployment via the Heat API directly (i.e. not as part of a stack) to
push changes to the servers. (I say 'push' but it's more a case of
making the data available for os-collect-config to grab it.)

This is the part that interests me most.  The rest, I'll code in python
and we can call either from mistral or from Cron.  What would a stack
like this look like?  Are there comparable examples?

Basically use the "openstack software config create" command to upload a
script and the "openstack software deployment create" command to deploy it
to a server. I don't have an example I can point you at, but the data is in
essentially the same format as the properties of the corresponding Heat
resources.[1][2] Steve Baker would know if we have any more detailed docs.

Actually we wrapped a mistral workflow and CLI interface around this for
operator convenience, so you can just do:

[stack@instack ~]$ cat run_ls.sh
#!/bin/sh
ls /tmp

[stack@instack ~]$ openstack overcloud execute -s overcloud-controller-0 
run_ls.sh

This runs a mistral workflow that creates the heat software config and
software deployment, waits for the deployment to complete, then returns the
result.

Wiring in a periodic mistral workflow which does the same should be
possible, but tbh I've not yet looked into the deferred authentication
method in that case (e.g I assume it uses trusts but I've not tried it
yet).

This is the mistral workflow, it could pretty easily be reused or adapted
for the use-case described I think:

https://github.com/openstack/tripleo-common/blob/master/workbooks/deployment.yaml

Again, thanks for the stellar blogging, Steve.  The POC was posted earlier
this month.


http://adam.younglogic.com/2016/09/fernet-overcloud/

Packing up the tarball on the undercloud is the easy part.  I would like
to come up with a general approach for securely distributing
keys/secrets from the undercloud to the overcloud.  It might make sense to
make use of Barbican for that in a future release.
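
For reference, the undercloud side is roughly just the following (assuming
the staging key repository lives in /etc/keystone/fernet-keys on the
undercloud; the distribution step is then the "openstack overcloud execute"
wrapper Steve describes above):

  # rotate the keys in the local staging repository
  keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone

  # pack them up for distribution to the overcloud controllers
  tar -czf fernet-keys.tar.gz -C /etc/keystone fernet-keys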






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How people handle the ML (bring your patterns/best practices)

2016-09-23 Thread Doug Hellmann
Excerpts from Joshua Harlow's message of 2016-09-23 11:03:24 -0700:
> Since I've heard this a few times over the years (the ML is too hard to
> read, too much volume, too hard to keep track of, too hard/this or that and
> so on),
> 
> I thought it would make sense to start to document what folks that have 
> been able to keep track of the ML have been doing and perhaps we can 
> then move such a document to a wiki or other more official document at 
> some stage.
> 
> So to help get these ideas out there I started:
> 
> https://etherpad.openstack.org/p/how-people-read-the-ml
> 
> And
> 

Great idea, Josh, thanks for starting this.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] newton rc-2 deadline

2016-09-23 Thread Brian Rosmaita
We're going to need to cut rc-2 for Glance to accommodate some new
translations, so there is an opportunity to include some conservative
bugfixes.  Any such must be merged before 16:00 UTC on Wed 28 Sept, so I
am setting a deadline of 12:00 UTC on Tue 27 Sept for approval.  If you
have a bugfix that is a worthy candidate, please reply to this email with
the appropriate info.

cheers,
brian


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to deploy OpenStack on k8s

2016-09-23 Thread Steven Dake (stdake)
Jay, apologies.

That was an overgeneralization.  The fuel team was not part of the kolla-mesos 
team to my knowledge.  To my knowledge the kolla-mesos team has moved on to 
kubernetes upstream work and isn’t all that involved in fuel work.

Cheers
-steve


From: Jay Pipes 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, September 23, 2016 at 11:43 AM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to 
deploy OpenStack on k8s

On 09/23/2016 01:04 PM, Steven Dake (stdake) wrote:
I also fail to see how training the Fuel team with the choices Kolla
has made in implementation puts OpenStack first.

Sorry, could you elaborate on what exactly you mean above? What do you
mean by "training the Fuel team with the choices Kolla has made"?

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to deploy OpenStack on k8s

2016-09-23 Thread Fox, Kevin M
There's a bit of why documentation here (though somewhat dated now):
https://review.openstack.org/#/c/361186/

And here:
https://github.com/openstack/kolla-kubernetes/blob/master/specs/ansible-deployment.rst

But there is still a bunch of stuff that we're still figuring out via the 
review process.

We've got an opinion on how to do 0 downtime minor rolling upgrades of api 
services fully automated by kubernetes, for example. Is our solution the best 
way? who knows. But its the best way we can think of currently. The only way to 
really gain this knowledge right now is to review stuff as we all come to a 
common understanding about the "best way". I'm guessing in a number of months, 
when we have the majority of openstack services working smoothly we will have 
enough knowledge to really document it well. We only currently have the compute 
kit stuff working well. For now, we can only revise our best practices as we 
find new issues when we add new services.

Please do join us and lets work together to find the best solution for 
openstack on kubernetes. We all want that.

One other thing.

We've been trying to keep workflow/config generation separate from the parts
that generate the templates and hand them over to kubernetes. This means that
you can use something other than ansible to generate the config and step
through/orchestrate the deployment. Not all of the templates are 100% doing
this yet, but we're actively working on it.

So, if you wanted to do config management and workflow in, say, mistral and/or
heat, it should work. I think it would fit very well with TripleO's current
architecture. We designed kolla-kubernetes here to be flexible to these sorts
of needs. I'd also be happy to talk more about this if you'd like.

Thanks,
Kevin



From: Steven Dake (stdake) [std...@cisco.com]
Sent: Friday, September 23, 2016 10:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to 
deploy OpenStack on k8s

Flavio,

Forgive the top post and lack of responding inline – I am dealing with lookout 
2016 which apparently has a bug here [0].

Your question:

I can contribute to kolla-kubernetes all you want but that won't give me what I
asked for in my original email and I'm pretty sure there are opinions about the
"recommended" way for running OpenStack on kubernetes. Questions like: Should I
run rabbit in a container? Should I put my database in there too? Now with
PetSets it might be possible. Can we be smarter on how we place the services in
the cluster? Or should we go with the traditional controller/compute/storage
architecture.

You may argue that I should just read the yaml files from kolla-kubernetes and
start from there. May be true but that's why I asked if there was something
written already.
Your question ^

My answer:
I think what you are really after is why kolla-kubernetes has made the choices 
we have made.  I would not argue that reading the code would answer that 
question because it does not.  Instead it answers how those choices were 
implemented.

You are mistaken in thinking that contributing to kolla-kubernetes won’t give 
you what you really want.  Participation in the Kolla community will answer for 
you *why* choices were made as they were.  Many choices are left unanswered as 
of yet and Red Hat can make a big impact in the future of the decision making 
about *why*.  You have to participate to have your voice heard.  If you are 
expecting the Kolla team to write a bunch of documentation to explain *why* we 
have made the choices we have, we frankly don’t have time for that.  Ryan and 
Michal may promise it with architecture diagrams and other forms of incomplete 
documentation, but that won’t provide you a holistic view of *why* and is 
wasted efforts on their part (no offense Michal and Ryan – I think it’s a 
worthy goal.  The timing for such a request is terrible and I don’t want to 
derail the team into endless discussions about the best way to do things).

The best way to do things is sorted out via the gerrit review process using the 
standard OpenStack workflow through an open development process.

Flavio,

Consider this an invitation to come join us – we want Red Hat’s participation.

Regards
-steve


[0]  
http://answers.microsoft.com/en-us/msoffice/forum/msoffice_outlook-mso_mac/outlook-for-mac-2016-replying-inline-with-html-no/298b830e-11ea-416c-b951-918d8f9562cb

From: Flavio Percoco 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, September 23, 2016 at 3:10 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to 
deploy OpenStack on k8s

On 22/09/16 20:55 +, Steven Dake (stdake) wrote:
Flavio,

Apologies for delay in 

Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to deploy OpenStack on k8s

2016-09-23 Thread Jay Pipes

On 09/23/2016 01:04 PM, Steven Dake (stdake) wrote:

I also fail to see how training the Fuel team with the choices Kolla
has made in implementation puts OpenStack first.


Sorry, could you elaborate on what exactly you mean above? What do you 
mean by "training the Fuel team with the choices Kolla has made"?


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Testing config drive creation in our CI

2016-09-23 Thread Jim Rollenhagen
On Fri, Sep 23, 2016 at 7:37 AM, Dmitry Tantsur  wrote:
> Hi folks!
>
> We've found out that we're not testing config drive creation in our CI.
> This ended up with one combination actually being broken (pxe_* + wholedisk +
> configdrive). I would like to cover this testing gap. Is there any benefit
> in NOT using config drives in all jobs? I assume we should not bother too
> much with testing the metadata service, as it's not within our code base
> (unlike the config drive).
>
> I've proposed https://review.openstack.org/375362 to switch our tempest
> plugin to testing config drives, please vote. As you see one job fails on it
> - this is the breakage I was talking about. It will (hopefully) get fixed
> with the next release of ironic-lib.

Right, so as Pavlo mentioned in the patch, configdrive used to be the default
for devstack, and as such we forced configdrive for all tests. When that was
changed, we didn't notice because somehow the metadata service worked.
https://github.com/openstack-dev/devstack/commit/7682ea88a6ab8693b215646f16748dbbc2476cc4

I agree, we should go back to using configdrive for all tests.
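
Concretely, that means something like this in the devstack local.conf our
jobs use (FORCE_CONFIG_DRIVE is the existing devstack knob; whether we set it
there or in the job definitions is an implementation detail):

  [[local|localrc]]
  # build a config drive for every instance instead of relying on the
  # metadata service
  FORCE_CONFIG_DRIVE=True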

// jim

>
> Finally, we need to run all jobs on ironic-lib, not only one, as ironic-lib
> is now the basis for all deployment variants. This will probably happen
> after we switch our DSVM jobs to Xenial, though.
>
> -- Dmitry
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2016-09-23 Thread Erlon Cruz
Not sure exactly what you mean, but in Cinder, using the
InstanceLocalityFilter [1], you can schedule a volume to the same compute
node the instance is located on. Is this what you need?

[1]
http://docs.openstack.org/developer/cinder/scheduler-filters.html#instancelocalityfilter
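
For reference, using the filter looks roughly like this (the scheduler hint
key is local_to_instance; the UUID is a placeholder):

  # create a 10 GB volume on the same compute node as the given instance
  cinder create --hint local_to_instance=<instance-uuid> 10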

On Fri, Sep 23, 2016 at 12:19 PM, Jay S. Bryant <
jsbry...@electronicjungle.net> wrote:

> Kevin,
>
> This is functionality that has been requested in the past but has never
> been implemented.
>
> The best way to proceed would likely be to propose a blueprint/spec for
> this and start working this through that.
>
> -Jay
>
>
> On 09/23/2016 02:51 AM, Zhenyu Zheng wrote:
>
> Hi Novaers and Cinders:
>
> Quite often application requirements would demand using locally attached
> disks (or direct attached disks) for OpenStack compute instances. One such
> example is running virtual hadoop clusters via OpenStack.
>
> We can now achieve this by using BlockDeviceDriver as the Cinder driver and
> using AZs in Nova and Cinder, as illustrated in [1], but this is not very
> feasible in large-scale production deployments.
>
> Now that Nova is working on resource providers, trying to build a generic
> resource pool, is it possible to perform "volume-based scheduling" to build
> instances according to volumes? This could make it much easier to build
> instances like those mentioned above.
>
> Or do we have any other ways of doing this?
>
> References:
> [1] http://cloudgeekz.com/71/how-to-setup-openstack-to-use-local-disks-for-instances.html
>
> Thanks,
>
> Kevin Zheng
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] How people handle the ML (bring your patterns/best practices)

2016-09-23 Thread Joshua Harlow
Since I've heard this a few times over the years (the ML is too hard to
read, too much volume, too hard to keep track of, too hard/this or that and
so on),


I thought it would make sense to start to document what folks that have 
been able to keep track of the ML have been doing and perhaps we can 
then move such a document to a wiki or other more official document at 
some stage.


So to help get these ideas out there I started:

https://etherpad.openstack.org/p/how-people-read-the-ml

And

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to deploy OpenStack on k8s

2016-09-23 Thread Steven Dake (stdake)
Flavio,

Forgive the top post and lack of responding inline – I am dealing with lookout 
2016 which apparently has a bug here [0].

Your question:

I can contribute to kolla-kubernetes all you want but that won't give me what I
asked for in my original email and I'm pretty sure there are opinions about the
"recommended" way for running OpenStack on kubernetes. Questions like: Should I
run rabbit in a container? Should I put my database in there too? Now with
PetSets it might be possible. Can we be smarter on how we place the services in
the cluster? Or should we go with the traditional controller/compute/storage
architecture.

You may argue that I should just read the yaml files from kolla-kubernetes and
start from there. May be true but that's why I asked if there was something
written already.
Your question ^

My answer:
I think what you are really after is why kolla-kubernetes has made the choices 
we have made.  I would not argue that reading the code would answer that 
question because it does not.  Instead it answers how those choices were 
implemented.

You are mistaken in thinking that contributing to kolla-kubernetes won’t give 
you what you really want.  Participation in the Kolla community will answer for 
you *why* choices were made as they were.  Many choices are left unanswered as 
of yet and Red Hat can make a big impact in the future of the decision making 
about *why*.  You have to participate to have your voice heard.  If you are 
expecting the Kolla team to write a bunch of documentation to explain *why* we 
have made the choices we have, we frankly don’t have time for that.  Ryan and 
Michal may promise it with architecture diagrams and other forms of incomplete 
documentation, but that won’t provide you a holistic view of *why* and is 
wasted efforts on their part (no offense Michal and Ryan – I think it’s a 
worthy goal.  The timing for such a request is terrible and I don’t want to 
derail the team into endless discussions about the best way to do things).

The best way to do things is sorted out via the gerrit review process using the 
standard OpenStack workflow through an open development process.

Flavio,

Consider this an invitation to come join us – we want Red Hat’s participation.

Regards
-steve


[0]  
http://answers.microsoft.com/en-us/msoffice/forum/msoffice_outlook-mso_mac/outlook-for-mac-2016-replying-inline-with-html-no/298b830e-11ea-416c-b951-918d8f9562cb

From: Flavio Percoco 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, September 23, 2016 at 3:10 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to 
deploy OpenStack on k8s

On 22/09/16 20:55 +, Steven Dake (stdake) wrote:
Flavio,

Apologies for delay in response – my backlog is large.

Forgive me if I parsed your message incorrectly.

It's probably me failing to communicate my intent or just the intent not being
good enough or worth it at all.

It came across to me as “How do I blaze a trail for OpenStack on Kubernetes?”.  
That was asked of me personally 3 years ago which led to the formation of the 
Kolla project inside Red Hat.  Our initial effort at that activity failed.  
Instead we decided kubernetes wasn’t ready for trailblazing in this space and 
used a far more mature project (Ansible) to solve the “OpenStack in Containers” 
problems and build from there.

We have since expanded our scope to re-solve the “How do I blaze a trail for 
Openstack on Kubernetes?” question since Kubernetes is now ready for this sort 
of trailblazing.  Fuel and several other folks decided to create derived works 
of the Kolla community’s innovations in this area.  I would contend that Fuel 
didn’t need to behave in such a way because the Kolla community is open, 
friendly, mature, diversely affiliated, has a reasonable philosophy and good 
set of principles as well as a strong leadership pipeline.

Rather than go blaze a trail when one already exists or create a derived work, 
why not increase your footprint in Kolla instead?  Red Hat has invested in 
Kolla for some time now, and their footprint hasn’t magically disappeared over 
night.   We will give you what you want within reasonable boundaries (the 
boundaries all open-source projects set of their contributors).  We also accept 
more work than the typical OpenStack project might, so it’s not like you will 
have to bring donuts into the office for every patch you merge into Kolla.

As to your more direct question of reference architecture, that is a totally 
loaded term that I’ll leave untouched.

To answer your question of “Does Kolla have a set of best practices” the answer 
is yes in kolla-ansible and kolla itself and strongly forming set of best 
practices in kolla-kubernetes.

As I mentioned in my email, I don't really care about the 

[openstack-dev] [oslo] Ocata summit planning (reminder!)

2016-09-23 Thread Joshua Harlow

Just a reminder for any folks interested in oslo and the summit,

It'd be great to have any ideas or other input on:

https://etherpad.openstack.org/p/ocata-oslo-summit-planning

So just wanted to make sure people are aware of that URL/etherpad and 
have been thinking about what to add to it (I have some ideas but I 
don't want to be the sole one that talks about things at the summit, 
because it might just turn into me rambling about something, ha).


So please write down anything you (or others) may be thinking :)

Much appreciated :)

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-23 Thread Mike Perez
On 11:03 Sep 21, Doug Hellmann wrote:
> 
> > On Sep 21, 2016, at 8:58 AM, Filip Pytloun  
> > wrote:
> > 
> > Hello,
> > 
> > it's definately our bad that we missed elections in OpenStackSalt
> > project. Reason is similar to Rob's - we are active on different
> > channels (mostly IRC as we keep regular meetings) and don't used to
> > reading mailing lists with lots of generic topics (it would be good to
> > have separate mailing list for such calls and critical topics or
> > individual mails to project's core members).
> 
> With 59 separate teams, even emailing the PTLs directly is becoming
> impractical. I can’t imagine trying to email all of the core members
> directly.
> 
> A separate mailing list just for “important announcements” would need someone
> to decide what is “important”. It would also need everyone to be subscribed,
> or we would have to cross-post to the existing list. That’s why we use topic
> tags on the mailing list, so that it is possible to filter messages based on
> what is important to the reader, rather than the sender.

This has come up in the past, and I have suggested that people who can't spend
that much time on the lists refer to the Dev Digest at blog.openstack.org,
which mentioned the PTL elections being open.

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-23 Thread Doug Hellmann
Excerpts from Rob C's message of 2016-09-23 17:46:46 +0100:
> I wanted to provide a quick update from Security.
> 
> We had our weekly IRC meeting yesterday; dhellmann was kind enough to attend
> to help broker some of the discussion. In advance of the meeting I prepared
> a blog post where I tried to articulate my position and where I think
> things need to go next [1]. This was discussed at length during the IRC
> meeting [2]. We discussed the option of becoming a WG or staying in the big
> tent; this resulted in a vote in which the team all indicated their desire to
> stay within the big tent.
> 
> My proposal for the future is outlined in some depth in [1], but the
> summary is that we've identified the areas we need to improve on in
> order to be better members of the community, that we want to stay within the
> big tent, and that I will maintain leadership through this transformational
> process with a view to having multiple candidates stand in the next
> election.
> 
> Cheers
> -Rob

Thanks, Rob. Based on the discussions yesterday I think the team has a
better understanding of the communication issues and I'm convinced that
everyone is committed to improving. I support keeping the team in the
tent.

Doug

> 
> [1]
> https://openstack-security.github.io/organization/2016/09/22/maturing-the-security-project.html
> [2]
> http://eavesdrop.openstack.org/meetings/security/2016/security.2016-09-22-17.00.log.html
> 
> On Fri, Sep 23, 2016 at 4:23 AM, Davanum Srinivas  wrote:
> 
> > Steven,
> >
> > Fair point.
> >
> > Thanks,
> > Dims
> >
> > On Thu, Sep 22, 2016 at 11:04 PM, Steven Dake (stdake) 
> > wrote:
> > > Dims,
> > >
> > > This isn’t any of my particular business except it could affect emerging
> > technology projects (which I find important to OpenStack’s future)
> > negatively – so I thought I’d chime in.
> > >
> > > A lack of activity in a specs repo doesn’t mean much to me.  For
> > example, as Kolla was an emerging project we didn’t use any specs process
> > at all (or very rarely).  There is a reason behind this. Now that Kolla is
> > stable and reliable and we feel we are not an emerging project, we plan to
> > make use of a specs repo starting in Ocata.
> > >
> > > I have no particular concerns with the other commentary – but please
> > don’t judge a project by activity or lack of activity in one repo of its
> > deliverables.  Judge it holistically (You are judging holistically.  I
> > believe a lack of one repo’s activity shouldn’t be part of that judgement).
> > >
> > > Regards
> > > -steve
> > >
> > >
> > > On 9/21/16, 2:08 PM, "Davanum Srinivas"  wrote:
> > >
> > > Jakub,
> > >
> > > Please see below.
> > >
> > > On Wed, Sep 21, 2016 at 3:46 PM, Jakub Pavlik <
> > jakub.pav...@tcpcloud.eu> wrote:
> > > > Hello all,
> > > >
> > > > it took us 2 years of hard work to get these official. OpenStack-Salt is
> > > > now used by around 40 production deployments, it is very focused on
> > > > operations, and its popularity is growing. You are removing the project a
> > > > week after one of the top contributors announced that they will use it as
> > > > part of their solution. We made mistakes, however I do not think that is a
> > > > reason to remove us. I do not think that the quality of the project is
> > > > measured like this. Our PTL got ill and could not do his job properly for
> > > > the last 3 weeks, but this can happen to anybody.
> > > >
> > > > It is up to you. If you think that we are useless to the community, then
> > > > remove us and we will have to continue outside of this community. However,
> > > > growing successful use cases will then not be under the official OpenStack
> > > > community, which makes me feel bad.
> > >
> > > Data points so far are:
> > > 1. No response during Barcelona planning for rooms
> > > 2. Lack of candidates for PTL election
> > > 3. No activity in the releases/ repository hence no entries in
> > > https://releases.openstack.org/
> > > 4. Meetings are not so regular?
> > > http://eavesdrop.openstack.org/meetings/openstack_salt/2016/
> > (supposed
> > > to be weekly)
> > > 5. Is the specs repo really active?
> > > http://git.openstack.org/cgit/openstack/openstack-salt-specs/ is the
> > > work being done elsewhere?
> > > 6. Is there an effort to add stuff to the CI jobs running on
> > openstack
> > > infrastructure? (can't seem to find much
> > > http://codesearch.openstack.org/?q=salt=nope=zuul%
> > 2Flayout.yaml=project-config)
> > >
> > > I'll stop here and switch to the #openstack-salt channel to help you
> > > all work through this if there is a consensus/willingness from the
> > > openstack-salt team that there's significant work to be done. If you
> > > think you are better off not under governance, that would be your
> > > call as well.
> > >
> > > 

Re: [openstack-dev] [Nova] Proposal for Nova Integration tests

2016-09-23 Thread Prasanth Anbalagan
Adding the project to email subject.

Thanks
Prasanth Anbalagan

On Fri, 2016-09-23 at 12:56 -0400, Prasanth Anbalagan wrote:
> Hi,
> 
> Continuing the topic on the need for integration style tests for Nova
> (brought up earlier during the weekly meeting at #openstack-meeting,
> Sep 22). The proposal is for a project to hold integration tests that
> includes low-level testing and runs against a devstack backend. I have
> included more details here -
> https://etherpad.openstack.org/p/integration-tests
> 
> Please comment on the need for the project, whether or not any similar
> efforts are in place, approaches suggested, taking forward the
> initiative, etc.
> 
> Thanks
> Prasanth Anbalagan
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-23 Thread Mike Perez
On 13:17 Sep 21, Rob C wrote:
> For my part, I missed the elections, that's my bad. I normally put a
> calendar item in for that issue. I don't think that my missing the election
> date should result in the group being treated in this way. Members of the
> TC have contacted me about unrelated things recently, I have always been
> available however my schedule has made it hard for me to sift through -dev
> recently and I missed the volley of nomination emails. This is certainly a
> failing on my part.
> 
> It's certainly true that the security team, and our cores tend not to pay
> as much attention to the -dev mailing list as we should. The list is pretty
> noisy and  traditionally we always had a separate list that we used for
> security and since moving away from that we tend to focus on IRC or direct
> emails. Though as can be seen with our core announcements etc, we do try to
> do things the "openstack way"

Yes the list can be a bit much. I write a digest of some important threads from
the list. For example the elections being open:

http://www.openstack.org/blog/2016/09/openstack-developer-mailing-list-digest-20160916/

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to deploy OpenStack on k8s

2016-09-23 Thread Steven Dake (stdake)
Bogdan,

I recognize English isn’t your first language, so forgive me if I have 
mis-parsed your message.  I think the question you are asking is “Can we have 
cooperation to standardize on how best to do OpenStack on Kubernetes”.  We 
tried an analog of that with Mirantis around Mesos, and that resulted in many 
derived works, one of which was fuel-ccp.  Fuel has made it abundantly clear 
they intend to compete with Kolla, which is fine.  I recognize we are one 
community and need to put OpenStack first here, and project teams second, but I 
also fail to see how training the Fuel team with the choices Kolla has made in 
implementation puts OpenStack first.

Standard organizational best practice with competitive teams in any 
organization is either to stamp out the competition or let them compete 
independently to grow the pie for everyone. The activity you propose would not 
put OpenStack first because of this organizational best practice.

Our code base is completely open.  Our irc channels are completely open.  Our 
mailing list participation is completely open.  Our architecture discussions 
are completely open.  Our project is OPEN.  If you really want to participate 
in Kolla the door remains open.  I find it hard to see a way for that to happen 
given the history and Mirantis’s stated intent, but anything is possible if the 
right people change their minds.

Regards
-steve

On 9/23/16, 8:37 AM, "Bogdan Dobrelya"  wrote:

Yeah, would be very nice to have/reuse a place for highest level and
projects independent specs to outline key architecture decisions like
(WARN [tl;dr]: biased examples from a non existent prototype go below):

* Shared nothing for stateful DB/MQ components (this means no shared
storage for state keeping, but replicas instead)
* And maybe place stateful and SDN/NFV/HW bound components *out* of COE
scope (they are well known to like only
stateless/serverless/schemaless/overlay only unicorns. A joke!..)
* CM tools-agnostic containers build pipelines
* Building images from sources but ship only artifacts w/o build deps
* No entry points magic in build pipeline for containers images to be
spawned by COE platforms as apps.
* Rework components to support 12 factor apps requirements, e.g.
redirect to stdout/stderr only, do not use implicit communication
channels etc.
* Runtime only data driven approach (no j2 templates for build pipeline
please!)
and more things...

On 22.09.2016 16:49, Flavio Percoco wrote:
> On 22/09/16 10:09 -0400, Davanum Srinivas wrote:
>> Flavio
>>
>> Please see below:
>>
>> On Thu, Sep 22, 2016 at 7:04 AM, Flavio Percoco 
>> wrote:
>>> Greetings,
>>>
>>> I've recently started looking into the container technologies around
>>> OpenStack.
>>> More specifically, I've been looking into the tools that allow for
>>> deploying
>>> OpenStack on containers, which is what I'm the most interested in
>>> right now
>>> as
>>> part of the TripleO efforts.
>>>
>>> I'm familiar with the Kolla project and the tools managed by this
>>> team. In
>>> fact,
>>> TripleO currently uses kolla images for the containerized nova-compute
>>> deployment.
>>>
>>> I am, however, looking beyond a docker based deployment. I'd like to
>>> explore
>>> in
>>> more depth a Kubernetes based deployment of OpenStack. I'm familiar with
>>> both
>>> kolla-kubernetes and fuel-ccp, their structure and direction*. Both
>>> projects
>>> have now advanced a bit in their implementations and made some
>>> decisions.
>>>
>>> As someone that started looking into this topic just recently, I'd
>>> love to
>>> see
>>> our communities collaborate more wherever possible. For example, it'd be
>>> great
>>> to see us working on a reference architecture for deploying OpenStack on
>>> kubernetes, letting the implementation details aside for a bit. I'd
>>> assume
>>> some
>>> folks have done this already and I bet we can all learn more from it
>>> if we
>>> work
>>> on this together.
>>>
>>> So, let me go ahead and ask some further questions here, I might be
>>> missing
>>> some
>>> history and/or context:
>>>
>>> - Is there any public documentation that acts as a reference
>>> architecture
>>> for
>>>  deploying OpenStack on kubernetes?
>>> - Is this something the architecture working group could help with?
>>> Or would
>>> it
>>>  be better to hijack one of kolla meetings?
>>>
> >> The result I'd love to see from this collaboration is a reference
>>> architecture
>>> explaining how OpenStack should be run on Kubernetes.
>>
>> At this moment, fuel-ccp-* is an experiment, it's not under
>> governance, there is no expectation of any 

[openstack-dev] Proposal for Nova Integration tests

2016-09-23 Thread Prasanth Anbalagan
Hi,

Continuing the topic on the need for integration style tests for Nova
(brought up earlier during the weekly meeting at #openstack-meeting, Sep
22). The proposal is for a project to hold integration tests that
includes low-level testing and runs against a devstack backend. I have
included more details here -
https://etherpad.openstack.org/p/integration-tests

Please comment on the need for the project, whether or not any similar
efforts are in place, approaches suggested, taking forward the
initiative, etc.

Thanks
Prasanth Anbalagan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-23 Thread Rob C
I wanted to provide a quick update from Security.

We had our weekly IRC meeting yesterday, dhellman was kind enough to attend
to help broker some of the discussion. In advance of the meeting I prepared
a blog post where I tried to articulate my position and where I think
things need to go next [1]. This was discussed at length during the IRC
meeting [2]. We discussed the option of becoming a WG or staying in the big
tent; this resulted in a vote in which the team all indicated their desire to
stay within the big tent.

My proposal for the future is outlined in some depth in [1], but the
summary is that we've identified the areas we need to improve on in
order to be better members of the community: we want to stay within the
big tent, with me maintaining leadership through this transformational
process, with a view to having multiple candidates stand in the next
election.

Cheers
-Rob

[1]
https://openstack-security.github.io/organization/2016/09/22/maturing-the-security-project.html
[2]
http://eavesdrop.openstack.org/meetings/security/2016/security.2016-09-22-17.00.log.html

On Fri, Sep 23, 2016 at 4:23 AM, Davanum Srinivas  wrote:

> Steven,
>
> Fair point.
>
> Thanks,
> Dims
>
> On Thu, Sep 22, 2016 at 11:04 PM, Steven Dake (stdake) 
> wrote:
> > Dims,
> >
> > This isn’t any of my particular business except it could affect emerging
> technology projects (which I find important to OpenStack’s future)
> negatively – so I thought I’d chime in.
> >
> > A lack of activity in a specs repo doesn’t mean much to me.  For
> example, as Kolla was an emerging project we didn’t use any specs process
> at all (or very rarely).  There is a reason behind this. Now that Kolla is
> stable and reliable and we feel we are not an emerging project, we plan to
> make use of a specs repo starting in Ocata.
> >
> > I have no particular concerns with the other commentary – but please
> don’t judge a project by activity or lack of activity in one repo of its
> deliverables.  Judge it holistically (You are judging holistically.  I
> believe a lack of one repo’s activity shouldn’t be part of that judgement).
> >
> > Regards
> > -steve
> >
> >
> > On 9/21/16, 2:08 PM, "Davanum Srinivas"  wrote:
> >
> > Jakub,
> >
> > Please see below.
> >
> > On Wed, Sep 21, 2016 at 3:46 PM, Jakub Pavlik <
> jakub.pav...@tcpcloud.eu> wrote:
> > > Hello all,
> > >
> > > it took us 2 years of hard work to get these official.
> OpenStack-Salt is
> > > now used by around 40 production deployments and it is focused
> very much on
> > > operations, and its popularity is growing. You are removing the project a
> week after
> > > one of the top contributors announced that they will use it as part of a
> > > solution. We made mistakes, however I do not think that is a
> reason to
> > > remove us. I do not think that the quality of a project is measured
> like this.
> > > Our PTL got ill and did not do his job properly for the last 3 weeks,
> but this
> > > can happen to anybody.
> > >
> > >  It is up to you. If you think that we are useless for the community,
> then
> > > remove us and we will have to continue outside of this community.
> However,
> > > growing successful use cases will not be under the official openstack
> community,
> > > which makes me feel bad.
> >
> > Data points so far are:
> > 1. No response during Barcelona planning for rooms
> > 2. Lack of candidates for PTL election
> > 3. No activity in the releases/ repository hence no entries in
> > https://releases.openstack.org/
> > 4. Meetings are not so regular?
> > http://eavesdrop.openstack.org/meetings/openstack_salt/2016/
> (supposed
> > to be weekly)
> > 5. Is the specs repo really active?
> > http://git.openstack.org/cgit/openstack/openstack-salt-specs/ is the
> > work being done elsewhere?
> > 6. Is there an effort to add stuff to the CI jobs running on
> openstack
> > infrastructure? (can't seem to find much
> > http://codesearch.openstack.org/?q=salt=nope=zuul%
> 2Flayout.yaml=project-config)
> >
> > I'll stop here and switch to the #openstack-salt channel to help you
> > all work through this if there is a consensus/willingness from the
> > openstack-salt team that there's significant work to be done. If you
> > think you are better off not under governance, that would be your
> > call as well.
> >
> > Thanks,
> > Dims
> >
> > > Thanks,
> > >
> > > Jakub
> > >
> > >
> > > On 21.9.2016 21:03, Doug Hellmann wrote:
> > >>
> > >> Excerpts from Filip Pytloun's message of 2016-09-21 20:36:42
> +0200:
> > >>>
> > >>> On 2016/09/21 13:23, Doug Hellmann wrote:
> > 
> >  The idea of splitting the contributor list comes up pretty
> regularly
> >  and we rehash the same suggestions each time.  Given that what
> we
> >  have now worked fine for 57 

[openstack-dev] [nova] Latest news on placement API and Ocata rough goals

2016-09-23 Thread Jay Pipes

Hi Stackers,

In Newton, we had a major goal of having Nova sending inventory and 
allocation records from the nova-compute daemon to the new placement API 
service over HTTP (i.e. not RPC). I'm happy to say we achieved this 
goal. We had a stretch goal from the mid-cycle of implementing the 
custom resource class support. I'm sorry to say that we did not reach 
this goal, though Ironic did indeed get its part merged and we should be 
able to complete this work before the summit in Nova.


Through the hard work of many folks [1] we were able to merge code that 
added a brand new REST API service (/placement) with endpoints for 
read/write operations against resource providers, inventories, 
allocations, and usage records. We were able to get patches merged that 
modified the resource tracker in the nova-compute to write the compute 
node's inventory and allocation records to the placement API in a 
fashion that avoided required action on the part of the operator to keep 
the nova-computes up and running.


For Ocata AND BEYOND, I'd here are a number of rough priorities and 
goals that we need to work on...


1. Shared storage properly implemented

To fulfill the original use case around accurate reporting of shared 
resources, we need to complete a few subtasks:


a) complete the aggregates/ endpoints in the placement API so that 
resource providers can be associated with aggregates
b) have the scheduler reporting client tracking more than just the 
resource provider for the compute node


2. Custom resource classes

This actually isn't all that much work, but just needs some focus. We 
need the following done in this area:


a) (very simple) REST API added to the placement API for GET/PUT 
resource class names
b) modify the ResourceClass Enum field to be a StringField -- which is 
wire-compatible with Enum -- and add some code on each side of the 
client/server communication that caches the standard resource classes as 
constants that Nova and placement code can share
c) modify the Ironic virt driver to pass the new node_class attribute on 
nodes into the resource tracker and have the resource tracker create 
resource provider records for each Ironic node with a single inventory 
record for each of those resource providers for the node class
d) modify the resource tracker to track the allocation of instances to 
resource providers
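
(For illustration only, a minimal Python sketch of what the GET/PUT resource
class calls in (a) might look like; the endpoint URL, token handling and
response shape below are assumptions, not the merged API:)

    import requests

    PLACEMENT = 'http://placement.example.com/placement'  # assumed endpoint URL
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN'}             # assumed admin token

    # Ensure a custom resource class exists; the CUSTOM_ name prefix and the
    # /resource_classes path are assumptions based on item (a) above.
    resp = requests.put(PLACEMENT + '/resource_classes/CUSTOM_BAREMETAL_GOLD',
                        headers=HEADERS)
    print(resp.status_code)

    # List the resource classes the placement service knows about.
    resp = requests.get(PLACEMENT + '/resource_classes', headers=HEADERS)
    print(resp.json())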


3. Integration of Nova scheduler with Placement API

We would like the Nova scheduler to be able to query the placement API 
for quantitative information in Ocata. So, code will need to be pushed 
that adds a call to the placement API for resource provider UUIDs that 
meet a given request for some amount of resources. This result will then 
be used to filter a request in the Nova scheduler for ComputeNode 
objects to satisfy the qualitative side of the request.
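
(For illustration only, a rough Python sketch of such a call; the query-string
format and response shape are assumptions, not a settled API:)

    import requests

    PLACEMENT = 'http://placement.example.com/placement'  # assumed endpoint URL
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN'}             # assumed admin token

    # Ask placement which resource providers can satisfy a quantitative
    # request; the scheduler would then filter its ComputeNode candidates
    # down to these UUIDs before applying the qualitative side.
    resp = requests.get(PLACEMENT + '/resource_providers',
                        params={'resources': 'VCPU:2,MEMORY_MB:4096,DISK_GB:20'},
                        headers=HEADERS)
    uuids = [rp['uuid'] for rp in resp.json().get('resource_providers', [])]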


4. Progress on qualitative request components (traits)

A number of things can be done in this area:

a) get os-traits interface stable and include all catalogued 
standardized trait strings
b) agree on schema in placement DB for storing and querying traits 
against resource providers
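
(For illustration only, the kind of standardized trait strings os-traits is
expected to catalogue; the exact constant names here are assumptions:)

    # Hypothetical examples of catalogued, standardized trait strings that
    # could be associated with a resource provider once (b) is agreed.
    STANDARD_TRAITS = (
        'HW_CPU_X86_AVX2',
        'HW_CPU_X86_SSE42',
        'STORAGE_DISK_SSD',
    )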


5. Nested resource providers

Things like SR-IOV PCI devices are actually resource providers that are 
embedded within another resource provider (the compute node itself). In 
order to tag things like SR-IOV PFs or VFs with a set of traits, we need 
to have discovery code run on the compute node that registers things 
like SR-IOV PF/VFs or SR-IOV FPGAs as nested resource providers.


Some steps needed here:

a) agreement on schema for placement DB for representing this nesting 
relationship
b) write the discovery code in nova-compute for adding these resource 
providers to the placement API when found
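
(For illustration only, one possible shape of registering a nested provider
from the discovery code; the 'parent_provider_uuid' attribute and the endpoint
are assumptions, not a settled design:)

    import uuid
    import requests

    PLACEMENT = 'http://placement.example.com/placement'  # assumed endpoint URL
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN'}             # assumed admin token

    compute_node_uuid = '...'  # the compute node's own resource provider UUID

    # Register an SR-IOV PF discovered on the compute node as a child
    # resource provider of the compute node itself.
    requests.post(PLACEMENT + '/resource_providers', headers=HEADERS, json={
        'uuid': str(uuid.uuid4()),
        'name': 'compute-1:enp3s0f0 (SR-IOV PF)',
        'parent_provider_uuid': compute_node_uuid,
    })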


Anyway, in conclusion, we've got a ton of work to do and I'm going to 
spend time before the summit trying to get good agreement on direction 
and proposed implementation for a number of the items listed above. 
Hopefully by mid-October we'll have a good idea of assignees for various 
work and what is going to be realistic to complete in Ocata.


Best,
-jay

[1] I'd like to personally thank Chris Dent, Dan Smith, Sean Dague, Ed 
Leafe, Sylvain Bauza, Andrew Laski, Alex Xu and Matt Riedemann for 
tolerating my sometimes lengthy absences and for pushing through 
communication breakdowns resulting from my inability to adequately 
express my ideas or document agreed solutions.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Cores using -2 votes

2016-09-23 Thread Nikhil Komawar
I empathize with you, Erno. I think this is a case of sheer
misunderstanding, and probably (I think) the mention of that instance /
situation in this email is only for referential purposes.


There are some critical deadlines one sub-set of people have to meet and
there are other priorities another sub-set has. Co-ordination across
continents and low-bandwidth conversations that do not necessarily
communicate the /intent/ every time result in such situations.


Let's /all/ move on and not regress on it. We do need improvement in the
process for /sure/, and I've already communicated my intentions to
rosmaita about them. You can expect something later next week as time
permits.



On 9/23/16 12:30 PM, Erno Kuvaja wrote:
> On Fri, Sep 23, 2016 at 3:42 PM, Ian Cordasco  wrote:
>> Hi all,
>>
>> A few weeks ago, there was a controversy in which a patch had been
>> -2'd until other concerns were resolved and then the core who used
>> their -2 powers disappeared and could not lift it after those concerns
>> had been resolved. This led to a situation where the -2'd patch was
>> abandoned and then resubmitted with a new Change-Id so it could be
>> approved in time for a milestone.
>>
>> In chatting with some folks, it's become apparent that all of us
>> Glance cores need to keep a dashboard around of items that we've -2'd.
>>
>> The basic form of that is:
>>
>> https://review.openstack.org/#/q/reviewer:self+AND+label:code-review-2+AND+(project:openstack/glance+OR+project:openstack/glance_store+OR+project:openstack/python-glanceclient)
>>
>> Or the query in particular is:
>>
>> reviewer:self AND label:code-review-2 AND
>> (project:openstack/glance OR project:openstack/glance_store OR
>> project:openstack/python-glanceclient)
>>
>> That said, this will show any patch you have reviewed that has a -2 on
>> it. (This also ignores specs.)
>>
>> To find what *you* have -2'd, the query is a little bit different:
>>
>> label:code-review-2, AND
>> (project:openstack/glance OR project:openstack/glance_store OR
>> project:openstack/python-glanceclient)
>>
>> For example,
>>
>> label:code-review-2,sigmavirus24 AND (project:openstack/glance OR
>> project:openstack/glance_store OR
>> project:openstack/python-glanceclient)
>>
>> is my query.
>>
>> I think we would all appreciate it if as cores we could keep an eye on
>> our own -2's and keep them up-to-date.
>>
>> I suspect people here will want to ignore anything that was abandoned
>> so you can also do:
>>
>> label:code-review-2, AND -status:abandoned AND
>> (project:openstack/glance OR project:openstack/glance_store OR
>> project:openstack/python-glanceclient)
>>
>> Finally, if you use Gertty, you can use this query to do the same thing:
>>
>> label:Code-Review=-2, AND -status:abandoned AND
>> project:^openstack/.*glance.*
>>
>> Cheers,
>> --
>> Ian Cordasco
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> Ok, as it's becoming clear that the one wrong assumption did not slip
> from people (as I originally did not want to point it out), let's
> clarify this.
>
> There is a difference between "not being able to", as referred to in the
> original mail chain, and "not willing to before verifying, as the issues
> were flagged and their corrections slipped under the radar a long time
> before". I actually followed up on that situation on a daily basis and got
> online on my holidays to make sure those -2s were not left hanging
> there without reason. That was the cause of the frustration and the
> initial e-mail. Yes, it's important to track -2s; it's equally
> important not to assume a -2 is no longer relevant just because you happen to
> think someone has addressed the reason for it.
>
> No bad feelings,
> Erno "jokke" Kuvaja
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Thanks,
Nikhil



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Cores using -2 votes

2016-09-23 Thread Ian Cordasco
 

-Original Message-
From: Erno Kuvaja 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: September 23, 2016 at 11:33:46
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [Glance] Cores using -2 votes

> Ok, as it's becoming clear that the one wrong assumption did not slip
> from people (as I originally did not want to point it out), let's
> clarify this.
>
> There is a difference between "not being able to", as referred to in the
> original mail chain, and "not willing to before verifying, as the issues
> were flagged and their corrections slipped under the radar a long time
> before". I actually followed up on that situation on a daily basis and got
> online on my holidays to make sure those -2s were not left hanging
> there without reason. That was the cause of the frustration and the
> initial e-mail. Yes, it's important to track -2s; it's equally
> important not to assume a -2 is no longer relevant just because you happen to
> think someone has addressed the reason for it.

Hey Erno,

That wasn't apparent from what I saw in those interactions. Thank you for 
clarifying.

To be entirely transparent, the reason I wrote this is because I kept 
forgetting to make these dashboards for myself and Brian asked me this morning 
on IRC to revisit a -2 I had left on a review. Since it was fresh in my mind, I 
thought I'd leave it here. Sorry I didn't include that originally.

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Cores using -2 votes

2016-09-23 Thread Ian Cordasco
 

-Original Message-
From: Bashmakov, Alexander 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: September 23, 2016 at 11:08:55
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [Glance] Cores using -2 votes

> These queries might be a good addition to the Glance dashboard in 
> https://github.com/openstack/gerrit-dash-creator  
> under the "Patches I -2'd" section: 
> https://github.com/openstack/gerrit-dash-creator/blob/master/dashboards/glance.dash#L33
>   

Heh, I didn't know that section existed there. I must not have updated my 
dashboard in a long time. I also wasn't aware "self" would work there as the 
Gerrit docs describe using a "group or individual name".

Thanks Alexander!

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to deploy OpenStack on k8s

2016-09-23 Thread Ryan Hallisey
Thanks for starting the discussion Fabio.

> As someone that started looking into this topic just recently, I'd love to see
> our communities collaborate more wherever possible. For example, it'd be great
> to see us working on a reference architecture for deploying OpenStack on
> kubernetes, letting the implementation details aside for a bit. I'd assume 
> some
> folks have done this already and I bet we can all learn more from it if we 
> work
> on this together.

Agreed Flavio. Members of the kolla-kubernetes community have some ideas of
how this will look.  I can put together some diagrams over the weekend to depict
this and maybe others that have some ideas can comment and share theirs.

> So, let me go ahead and ask some further questions here, I might be missing 
> some
> history and/or context:
> - Is there any public documentation that acts as a reference architecture for
>  deploying OpenStack on kubernetes?

These specs [1][2] might be a good start.

> - Is this something the architecture working group could help with? Or would 
> it
>  be better to hijack one of kolla meetings?

kolla-kubernetes has a booked slot in the weekly kolla meetings. This could be
discussed there.

>> So issue is, I know of few other openstacks on k8s and everyone does
>> that slightly differently. So far we lack proof points and real world
>> data to determine best approaches to stuff. This is still not-to-well
>> researched field. Right now it's mostly opinions and assumptions.
>> We're not ready to make document without having a flame war around
>> it;) Not enough knowledge in our collective brains.

> Awesome input, thanks.

Michal is right, there are a bunch of implementations that exist. The tricky
part is pulling together all the groups to figure out the best solution.

When the kolla-kubernetes project was created, my hope was that this new repo would
be a place where anyone curious about the OpenStack and Kubernetes interaction
could come and express their opinion in code or conversation. The community still
remains open to any changes to its implementation, and the current
implementation is a reflection of who is participating.

I agree that it would be ideal to have a single place to collaborate. It would be
awesome to bring together the community that is looking to solve this
problem around a single project. It doesn't matter what that project is, but I'd
like to see more collaboration :).

>> As for Kolla-k8s we are still deep in development, so we are free to
>> take best course of action we know of. We don't have any technical
>> debt now. Current state of stuff represents what we thing is best
>> approach.

> I wonder if we can start writing these assumptions down and update them as we
> go. I don't expect you to do it, I'm happy to help with this. We could put it 
> in
> kolla-k8s docs if that makes sense to other kolla-k8s folks.

It's not that Kolla-k8s has tech debt, but rather the community is still 
testing the
waters with its implementation. For instance, the community is looking at a 
workflow
that will execute the deployment of OpenStack and hand off to Kubernetes to 
manage it.
This solution raises some questions: why do you need a workflow at all? Why not
use Kubernetes, a Container Orchestration Engine, to orchestrate the services?  
A lot
of these fundamental questions were outlined in this spec [1] and the answers 
to them
are still WIP [3].

> I'll probably start pinging you guys on IRC with questions so I can help 
> writing
> this down.

That would be fantastic! There's also room for collaboration at summit too.
Kolla-kubernetes will have a design session/fishbowl scheduled.

>> There is also part that k8s is constantly growing and it lacks certain
>> features that created these issues in the first place, if k8s solves
>> them on their side, that will affect decision on our side.

> Thanks a lot, Michal. This is indeed the kind of info I was looking for and
> where I'd love to start from.

Agreed Michal.  The community has been adapting on the fly based on features 
coming
out of Kubernetes.  Things like init containers and petsets were recent features
that have found their way into kolla-kubernetes.

The flow of work in kolla-kubernetes has been following the work items in the
spec [1], but in a different order.  The basic outline for putting OpenStack on
Kubernetes will follow a similar path: things like the templates will
be similar, but the orchestration method can vary. I think that's where the
biggest controversy lies.

Thanks!
-Ryan

[1] - https://review.openstack.org/#/c/304182/
[2] - https://specs.openstack.org/openstack/fuel-specs/specs/10.0/ccp.html
[3] - https://review.openstack.org/#/c/335279/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Cores using -2 votes

2016-09-23 Thread Erno Kuvaja
On Fri, Sep 23, 2016 at 3:42 PM, Ian Cordasco  wrote:
> Hi all,
>
> A few weeks ago, there was a controversy in which a patch had been
> -2'd until other concerns were resolved and then the core who used
> their -2 powers disappeared and could not lift it after those concerns
> had been resolved. This led to a situation where the -2'd patch was
> abandoned and then resubmitted with a new Change-Id so it could be
> approved in time for a milestone.
>
> In chatting with some folks, it's become apparent that all of us
> Glance cores need to keep a dashboard around of items that we've -2'd.
>
> The basic form of that is:
>
> https://review.openstack.org/#/q/reviewer:self+AND+label:code-review-2+AND+(project:openstack/glance+OR+project:openstack/glance_store+OR+project:openstack/python-glanceclient)
>
> Or the query in particular is:
>
> reviewer:self AND label:code-review-2 AND
> (project:openstack/glance OR project:openstack/glance_store OR
> project:openstack/python-glanceclient)
>
> That said, this will show any patch you have reviewed that has a -2 on
> it. (This also ignores specs.)
>
> To find what *you* have -2'd, the query is a little bit different:
>
> label:code-review-2, AND
> (project:openstack/glance OR project:openstack/glance_store OR
> project:openstack/python-glanceclient)
>
> For example,
>
> label:code-review-2,sigmavirus24 AND (project:openstack/glance OR
> project:openstack/glance_store OR
> project:openstack/python-glanceclient)
>
> is my query.
>
> I think we would all appreciate it if as cores we could keep an eye on
> our own -2's and keep them up-to-date.
>
> I suspect people here will want to ignore anything that was abandoned
> so you can also do:
>
> label:code-review-2, AND -status:abandoned AND
> (project:openstack/glance OR project:openstack/glance_store OR
> project:openstack/python-glanceclient)
>
> Finally, if you use Gertty, you can use this query to do the same thing:
>
> label:Code-Review=-2, AND -status:abandoned AND
> project:^openstack/.*glance.*
>
> Cheers,
> --
> Ian Cordasco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Ok, as it's becoming clear that the one wrong assumption did not slip
from people (as I originally did not want to point it out), let's
clarify this.

There is a difference between "not being able to", as referred to in the
original mail chain, and "not willing to before verifying, as the issues
were flagged and their corrections slipped under the radar a long time
before". I actually followed up on that situation on a daily basis and got
online on my holidays to make sure those -2s were not left hanging
there without reason. That was the cause of the frustration and the
initial e-mail. Yes, it's important to track -2s; it's equally
important not to assume a -2 is no longer relevant just because you happen to
think someone has addressed the reason for it.

No bad feelings,
Erno "jokke" Kuvaja

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Cores using -2 votes

2016-09-23 Thread Bashmakov, Alexander
These queries might be a good addition to the Glance dashboard in 
https://github.com/openstack/gerrit-dash-creator under the "Patches I -2'd" 
section: 
https://github.com/openstack/gerrit-dash-creator/blob/master/dashboards/glance.dash#L33

> -Original Message-
> From: Ian Cordasco [mailto:sigmaviru...@gmail.com]
> Sent: Friday, September 23, 2016 8:12 AM
> To: Nikhil Komawar ; OpenStack Development
> Mailing List (not for usage questions) 
> Subject: Re: [openstack-dev] [Glance] Cores using -2 votes
> 
> 
> 
> -Original Message-
> From: Nikhil Komawar 
> Reply: Nikhil Komawar 
> Date: September 23, 2016 at 10:04:51
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Cc: Ian Cordasco 
> Subject:  Re: [openstack-dev] [Glance] Cores using -2 votes
> 
> > thanks Ian, this is great info.
> >
> > Just a side question: do you have an example for -Workflow, say in cases
> > where I'd +2'ed a patch but, to keep a check on the process and approve it
> > after the freeze, -W'ed it?
> 
> So the important thing to keep in mind is that: "Code-Review", "Verified",
> and "Workflow" are all labels. And they all have different values (-2, -1, 0, 
> +1,
> +2). So you could absolutely have a search for
> 
>     label:Code-Review=+2, AND label:Workflow=-1,
> 
> That could combine with other portions of the queries I wrote in my first
> email. :-)
> 
> Cheers,
> --
> Ian Cordasco
> 
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to deploy OpenStack on k8s

2016-09-23 Thread Joshua Harlow

Flavio Percoco wrote:

Greetings,

I've recently started looking into the container technologies around
OpenStack.
More specifically, I've been looking into the tools that allow for
deploying
OpenStack on containers, which is what I'm the most interested in right
now as
part of the TripleO efforts.

I'm familiar with the Kolla project and the tools managed by this team.
In fact,
TripleO currently uses kolla images for the containerized nova-compute
deployment.

I am, however, looking beyond a docker based deployment. I'd like to
explore in
more depth a Kubernetes based deployment of OpenStack. I'm familiar with
both
kolla-kubernetes and fuel-ccp, their structure and direction*. Both
projects
have now advanced a bit in their implementations and made some decisions.

As someone that started looking into this topic just recently, I'd love
to see
our communities collaborate more wherever possible. For example, it'd be
great
to see us working on a reference architecture for deploying OpenStack on
kubernetes, letting the implementation details aside for a bit. I'd
assume some
folks have done this already and I bet we can all learn more from it if
we work
on this together.



Can you describe here what you think 'deploying OpenStack on kubernetes'
means to you, and what the boundary of OpenStack and the boundary of
kubernetes are in your mind? For example, where does ironic fit in your
view; where does nova fit in your view also? If nova is going to be
deployed on top of kubernetes, where will the VMs be spun up? What about
the baremetal (or VMs?) that kubernetes would need to run on (where is
that coming from?).


To me, 'OpenStack on kubernetes' is not really something a simple
statement can answer, so I'd like to know what you think that statement
means :)


Btw, there is a sig-openstack in k8s, they also have a slack channel, 
and a google group 
https://groups.google.com/d/forum/kubernetes-sig-openstack (I'm not such 
a big fan of requiring people to find slack or google groups, but it is 
what it is...)


Overall +1 to 'communities collaborate more wherever possible'

I was trying to set up a keystone meeting with the sig-auth folks (I
guess sig-auth is pretty much the equivalent of the keystone group in
k8s); someone is more than welcome to take that over (but again it
depends on where in your mind a thing like keystone lives once it is put
on top of/underneath/inside/all around k8s+openstack).


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to deploy OpenStack on k8s

2016-09-23 Thread Bogdan Dobrelya
Yeah, would be very nice to have/reuse a place for highest level and
projects independent specs to outline key architecture decisions like
(WARN [tl;dr]: biased examples from a non existent prototype go below):

* Shared nothing for stateful DB/MQ components (this means no shared
storage for state keeping, but replicas instead)
* And maybe place stateful and SDN/NFV/HW bound components *out* of COE
scope (they are well known to like only
stateless/serverless/schemaless/overlay only unicorns. A joke!..)
* CM tools-agnostic containers build pipelines
* Building images from sources but ship only artifacts w/o build deps
* No entry points magic in build pipeline for containers images to be
spawned by COE platforms as apps.
* Rework components to support 12 factor apps requirements, e.g.
redirect to stdout/stderr only, do not use implicit communication
channels etc.
* Runtime only data driven approach (no j2 templates for build pipeline
please!)
and more things...
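
(For illustration only, a minimal Python sketch of the "stdout/stderr only"
point above; service and logger names are made up:)

    import logging
    import sys

    # 12-factor style logging for a containerized service: write only to
    # stdout/stderr and let the COE collect the stream; no log files or
    # implicit side channels baked into the image.
    logging.basicConfig(stream=sys.stdout,
                        level=logging.INFO,
                        format='%(asctime)s %(levelname)s %(name)s: %(message)s')
    logging.getLogger('demo-service').info('service started')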

On 22.09.2016 16:49, Flavio Percoco wrote:
> On 22/09/16 10:09 -0400, Davanum Srinivas wrote:
>> Flavio
>>
>> Please see below:
>>
>> On Thu, Sep 22, 2016 at 7:04 AM, Flavio Percoco 
>> wrote:
>>> Greetings,
>>>
>>> I've recently started looking into the container technologies around
>>> OpenStack.
>>> More specifically, I've been looking into the tools that allow for
>>> deploying
>>> OpenStack on containers, which is what I'm the most interested in
>>> right now
>>> as
>>> part of the TripleO efforts.
>>>
>>> I'm familiar with the Kolla project and the tools managed by this
>>> team. In
>>> fact,
>>> TripleO currently uses kolla images for the containerized nova-compute
>>> deployment.
>>>
>>> I am, however, looking beyond a docker based deployment. I'd like to
>>> explore
>>> in
>>> more depth a Kubernetes based deployment of OpenStack. I'm familiar with
>>> both
>>> kolla-kubernetes and fuel-ccp, their structure and direction*. Both
>>> projects
>>> have now advanced a bit in their implementations and made some
>>> decisions.
>>>
>>> As someone that started looking into this topic just recently, I'd
>>> love to
>>> see
>>> our communities collaborate more wherever possible. For example, it'd be
>>> great
>>> to see us working on a reference architecture for deploying OpenStack on
>>> kubernetes, letting the implementation details aside for a bit. I'd
>>> assume
>>> some
>>> folks have done this already and I bet we can all learn more from it
>>> if we
>>> work
>>> on this together.
>>>
>>> So, let me go ahead and ask some further questions here, I might be
>>> missing
>>> some
>>> history and/or context:
>>>
>>> - Is there any public documentation that acts as a reference
>>> architecture
>>> for
>>>  deploying OpenStack on kubernetes?
>>> - Is this something the architecture working group could help with?
>>> Or would
>>> it
>>>  be better to hijack one of kolla meetings?
>>>
> >>> The result I'd love to see from this collaboration is a reference
>>> architecture
>>> explaining how OpenStack should be run on Kubernetes.
>>
>> At this moment, fuel-ccp-* is an experiment, it's not under
>> governance, there is no expectation of any releases, there are no
>> specs or docs that i know of. So kolla/kolla-kubernetes is probably
>> the best accumulator of kubernetes knowledge specifically about
>> running openstack.
>>
>> Note that tcpcloud folks may also have something, but haven't seen any
>> public information or reference architecture from them. Definitely
>> don't know of any plans from that team as well to open up and share.
> 
> Yeah, I know all of the above, which is why I said I don't really care
> about the
> implementation detail of things. I think the knowledge the folks in
> fuel-ccp
> have and the knowledge folks in the kolla team have could produce a base
> knowledge for folks looking into deploying OpenStack on kubernetes.
> 
> It'd be great to see this happening and I'm sure teams would benefit
> from it
> too.
> 
> Flavio
> 
>>> Thanks in advance. I look forward to see us collaborate more on this
>>> area,
>>> Flavio
>>>
>>> * thanks to all fuel and kolla contributors that helped me understand
>>> better
>>> the
>>>  work in each of these projects and the direction they are headed
>>> .
>>> -- 
>>> @flaper87
>>> Flavio Percoco
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> Thanks,
>> Dims
>>
>> -- 
>> Davanum Srinivas :: https://twitter.com/dims
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 

[openstack-dev] [neutron][upgrades] Bi-weekly upgrades work status. 9/19/2016

2016-09-23 Thread Morales, Victor
Hi neutrinos,

The idea of this email is to summarize the effort that we're making during the
implementation of rolling upgrades in Neutron, as well as
to share the upcoming changes.

Announcements


Neutron Newton RC1 has been created and this contains the following changes 
related to OVO:
https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fneutron%29%0Astatus%3Amerged+after%3A2016%2D04%2D04+before%3A2016%2D09%2D19=Neutron+Upgrades+%2D+Newton%2DVersioned+Object+Creation+and+Integration=%28topic%3Abp%2Fadopt%2Doslo%2Dversioned%2Dobjects%2Dfor%2Ddb+OR+topic%3Aovo%29+DB+model+classes=topic%3Abug%2F1597913

Here, let's just outline the general plan:
- Move DB model classes to avoid cyclic imports.
https://review.openstack.org/#/q/status:open+topic:bug/1597913
- Land Oslo Versioned Objects
- Adopt them in plugin code; this means replacing the existing
calls with the corresponding OVO functions.

The Ocata release will last only 4.5 months. Though the cycle is short, the plan is
to make it the first release that supports partial upgrades
of neutron-servers. This means we will need to forbid contract alembic scripts
during this cycle.

Model Relocation 
=

The SubnetServiceType, FlatAllocation, GreAllocation and GreEndpoints models have
already been moved into the neutron/db/models folder.  The
plan is to move the DB model classes that share a file with a mixin class (
https://review.openstack.org/#/q/status:open+topic:bug/1597913 )

OVO Neutron Framework
===

There are some cases where the API receives filters which are not defined in
the model (e.g. the query to filter the Subnet model class
uses 'admin_state_up' as a filter).  This behaviour is not allowed in the
strict OVO implementation, so it was necessary to make
this restriction optional. https://review.openstack.org/#/c/365659/

Subnet OVO has been created but its integration is in progress, so any feedback 
is welcome
https://review.openstack.org/#/c/321001/ 
 https://review.openstack.org/#/c/351740/

The way to replace inner and outer joins in the current model
implementation is something that has not
been defined yet.  The initial approach is to create a new
classmethod on the most relevant OVO class and move that logic
into the OVO class.  Obviously, this varies case by case.
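
(For illustration only, a minimal oslo.versionedobjects sketch; this is not
the real Neutron Subnet object, just the general shape of an OVO class:)

    from oslo_versionedobjects import base as obj_base
    from oslo_versionedobjects import fields as obj_fields

    # Purely illustrative: an OVO class declares a VERSION and typed fields,
    # and plugin code manipulates these objects instead of touching the DB
    # models directly.
    @obj_base.VersionedObjectRegistry.register
    class ExampleSubnet(obj_base.VersionedObject):
        VERSION = '1.0'

        fields = {
            'id': obj_fields.UUIDField(),
            'cidr': obj_fields.StringField(),
            'enable_dhcp': obj_fields.BooleanField(default=True),
        }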

Some cases have been identified where methods pass a DB session as an
argument instead of an application context.  This has a direct impact on the way
code is replaced with OVO classes, because the objects use the context for doing DB changes
internally.  It was decided to consider changes to method signatures whenever
possible, with the only exception being not to modify any method that
affects the API.

OVO Implementation Dashboard  ->  
https://docs.google.com/spreadsheets/d/1FeeQlQITsZSj_wpOXiLbS36dirb_arX0XEWBdFVPMB8

http://eavesdrop.openstack.org/meetings/neutron_upgrades/2016/neutron_upgrades.2016-09-12-15.01.log.html
http://eavesdrop.openstack.org/meetings/neutron_upgrades/2016/neutron_upgrades.2016-09-19-15.00.log.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][trove] trove and trove-dashboard Newton RC2 available

2016-09-23 Thread Doug Hellmann
Hello everyone,

The release candidates for trove and trove-dashboard for the end
of the Newton cycle is available!  You can find the RC2 source code
tarballs at:

https://tarballs.openstack.org/trove/trove-6.0.0.0rc2.tar.gz
https://tarballs.openstack.org/trove-dashboard/trove-dashboard-7.0.0.0rc2.tar.gz

Unless release-critical issues are found that warrant a release
candidate respin, these candidates will be formally released as the
final Newton release on 6 October. You are therefore strongly
encouraged to test and validate these tarballs!

Alternatively, you can directly test the stable/newton release
branch at:

http://git.openstack.org/cgit/openstack/trove/log/?h=stable/newton
http://git.openstack.org/cgit/openstack/trove-dashboard/log/?h=stable/newton

If you find an issue that could be considered release-critical,
please file it at:

https://bugs.launchpad.net/trove/+filebug

and tag it *newton-rc-potential* to bring it to the trove release
crew's attention.

Thanks,
Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] TripleO Core nominations

2016-09-23 Thread Dan Prince
On Thu, 2016-09-15 at 10:20 +0100, Steven Hardy wrote:
> Hi all,
> 
> As we work to finish the last remaining tasks for Newton, it's a good
> time
> to look back over the cycle, and recognize the excellent work done by
> several new contributors.
> 
> We've seen a different contributor pattern develop recently, where
> many
> folks are subsystem experts and mostly focus on a particular project
> or
> area of functionality.  I think this is a good thing, and it's
> hopefully
> going to allow our community to scale more effectively over time (and
> it
> fits pretty nicely with our new composable/modular architecture).
> 
> We do still need folks who can review with the entire TripleO
> architecture
> in mind, but I'm very confident folks will start out as subsystem
> experts
> and over time broaden their area of experience to encompass more of
> the TripleO projects (we're already starting to see this IMO).
> 
> We've had some discussion in the past[1] about strictly defining
> subteams,
> vs just adding folks to tripleo-core and expecting good judgement to
> be
> used (e.g only approve/+2 stuff you're familiar with - and note that
> it's
> totally fine for a core reviewer to continue to +1 things if the
> patch
> looks OK but is outside their area of experience).
> 
> So, I'm in favor of continuing that pattern and just welcoming some
> of our
> subsystem expert friends to tripleo-core, let me know if folks feel
> strongly otherwise :)
> 
> The nominations, are based partly on the stats[2] and partly on my
> own
> experience looking at reviews, patches and IRC discussion with these
> folks
> - I've included details of the subsystems I expect these folks to
> focus
> their +2A power on (at least initially):
> 
> 1. Brent Eagles
> 
> Brent has been doing some excellent work mostly related to Neutron
> this
> cycle - his reviews have been increasingly detailed, and show a solid
> understanding of our composable services architecture.  He's also
> provided
> a lot of valuable feedback on specs such as dpdk and sr-iov.  I
> propose
> Brent continues this excellent Neutron-focussed work, while also
> expanding
> his review focus such as the good feedback he's been providing on new
> Mistral actions in tripleo-common for custom-roles.
> 
> 2. Pradeep Kilambi
> 
> Pradeep has done a large amount of pretty complex work around
> Ceilometer
> and Aodh over the last two cycles - he's dealt with some pretty tough
> challenges around upgrades and has consistently provided good review
> feedback and solid analysis via discussion on IRC.  I propose Prad
> continues this excellent Ceilometer/Aodh-focussed work, while also
> expanding review focus aiming to cover more of t-h-t and other repos
> over
> time.
> 
> 3. Carlos Camacho
> 
> Carlos has been mostly focussed on composability, and has done a
> great job
> of working through the initial architecture implementation, including
> writing some very detailed initial docs[3] to help folks make the
> transition
> to the new architecture.  I'd suggest that Carlos looks to maintain
> this
> focus on composable services, while also building depth of reviews in
> other
> repos.
> 
> 4. Ryan Brady
> 
> Ryan has been one of the main contributors implementing the new
> Mistral
> based API in tripleo-common.  His reviews, patches and IRC discussion
> have
> consistently demonstrated that he's an expert on the mistral
> actions/workflows and I think it makes sense for him to help with
> review
> velocity in this area, and also look to help with those subsystems
> interacting with the API such as tripleoclient.
> 
> 5. Dan Sneddon
> 
> For many cycles, Dan has been driving direction around our network
> architecture, and he's been consistently doing a relatively small
> number of
> very high-quality and insightful reviews on both os-net-config and
> the
> network templates for tripleo-heat-templates.  I'd suggest Dan
> continues
> this focus, and he's indicated he may have more bandwidth to help
> with
> reviews around networking in future.
> 
> Please can I get feedback from existing core reviewers - you're free
> to +1
> these nominations (or abstain), but any -1 will veto the
> process.  I'll
> wait one week, and if we have consensus add the above folks to
> tripleo-core.
> 
> Finally, there are quite a few folks doing great work that are not on
> this
> list, but seem to be well on track towards core status.  Some of
> those
> folks I've already reached out to, but if you're not nominated now,
> please
> don't be disheartened, and feel free to chat to me on IRC about
> it.  Also
> note the following:
> 
>  - We need folks to regularly show up, establishing a long-term
> pattern of
>    doing useful reviews, but core status isn't about raw number of
> reviews,
>    it's about consistent downvotes and detailed, well considered and
>    insightful feedback that helps increase quality and catch issues
> early.
> 
>  - Try to spend some time reviewing stuff outside your normal area of

Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2016-09-23 Thread Jay S. Bryant

Kevin,

This is functionality that has been requested in the past but has never 
been implemented.


The best way to proceed would likely be to propose a blueprint/spec for
this and start working it through that process.


-Jay


On 09/23/2016 02:51 AM, Zhenyu Zheng wrote:

Hi Novaers and Cinders:

Quite often application requirements would demand using locally 
attached disks (or direct attached disks) for OpenStack compute 
instances. One such example is running virtual hadoop clusters via 
OpenStack.


We can now achieve this by using BlockDeviceDriver as the Cinder driver
and using AZs in Nova and Cinder, as illustrated in [1], which is not very
feasible in a large-scale production deployment.


Now that Nova is working on resource providers, trying to build a
generic resource pool, is it possible to perform
"volume-based scheduling" to build instances according to the volume? This
would make it much easier to build instances like those mentioned above.


Or do we have any other ways of doing this?

References:
[1] 
http://cloudgeekz.com/71/how-to-setup-openstack-to-use-local-disks-for-instances.html


Thanks,

Kevin Zheng


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Cores using -2 votes

2016-09-23 Thread Ian Cordasco
 

-Original Message-
From: Nikhil Komawar 
Reply: Nikhil Komawar 
Date: September 23, 2016 at 10:04:51
To: OpenStack Development Mailing List (not for usage questions) 

Cc: Ian Cordasco 
Subject:  Re: [openstack-dev] [Glance] Cores using -2 votes

> thanks Ian, this is great info.
>  
> Just a side question: do you have an example for -Workflow, say in cases
> where I'd +2'ed a patch but, to keep a check on the process and approve it
> after the freeze, -W'ed it?

So the important thing to keep in mind is that: "Code-Review", "Verified", and 
"Workflow" are all labels. And they all have different values (-2, -1, 0, +1, 
+2). So you could absolutely have a search for

    label:Code-Review=+2, AND label:Workflow=-1,

That could combine with other portions of the queries I wrote in my first 
email. :-)

Cheers,
--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Cores using -2 votes

2016-09-23 Thread Nikhil Komawar
thanks Ian, this is great info.

Just a side question: do you have an example for -Workflow, say in cases
where I'd +2'ed a patch but, to keep a check on the process and approve it
after the freeze, -W'ed it?

Nonetheless, I have been using the labels and owner/reviewer/project
fields to good advantage, so I would like to acknowledge what you have to
say -- again, this is very effective and I too will encourage folks to
use such a bookmark.

If you need more options for querying, please read up the gerrit docs
https://review.openstack.org/Documentation/user-search.html

On 9/23/16 10:42 AM, Ian Cordasco wrote:
> Hi all,
>
> A few weeks ago, there was a controversy in which a patch had been
> -2'd until other concerns were resolved and then the core who used
> their -2 powers disappeared and could not lift it after those concerns
> had been resolved. This led to a situation where the -2'd patch was
> abandoned and then resubmitted with a new Change-Id so it could be
> approved in time for a milestone.
>
> In chatting with some folks, it's become apparent that all of us
> Glance cores need to keep a dashboard around of items that we've -2'd.
>
> The basic form of that is:
>
> https://review.openstack.org/#/q/reviewer:self+AND+label:code-review-2+AND+(project:openstack/glance+OR+project:openstack/glance_store+OR+project:openstack/python-glanceclient)
>
> Or the query in particular is:
>
> reviewer:self AND label:code-review-2 AND
> (project:openstack/glance OR project:openstack/glance_store OR
> project:openstack/python-glanceclient)
>
> That said, this will show any patch you have reviewed that has a -2 on
> it. (This also ignores specs.)
>
> To find what *you* have -2'd, the query is a little bit different:
>
> label:code-review-2, AND
> (project:openstack/glance OR project:openstack/glance_store OR
> project:openstack/python-glanceclient)
>
> For example,
>
> label:code-review-2,sigmavirus24 AND (project:openstack/glance OR
> project:openstack/glance_store OR
> project:openstack/python-glanceclient)
>
> is my query.
>
> I think we would all appreciate it if as cores we could keep an eye on
> our own -2's and keep them up-to-date.
>
> I suspect people here will want to ignore anything that was abandoned
> so you can also do:
>
> label:code-review-2, AND -status:abandoned AND
> (project:openstack/glance OR project:openstack/glance_store OR
> project:openstack/python-glanceclient)
>
> Finally, if you use Gertty, you can use this query to do the same thing:
>
> label:Code-Review=-2, AND -status:abandoned AND
> project:^openstack/.*glance.*
>
> Cheers,
> --
> Ian Cordasco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][scheduler] Next scheduler subteam meeting

2016-09-23 Thread Ed Leafe
The next meeting of the Nova scheduler subteam will be on Monday, September 26 
at 1400 UTC in #openstack-meeting-alt
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20160926T14

The agenda is here: https://wiki.openstack.org/wiki/Meetings/NovaScheduler

Please add any issues you would like to discuss to the agenda.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Cores using -2 votes

2016-09-23 Thread Ian Cordasco
Hi all,

A few weeks ago, there was a controversy in which a patch had been
-2'd until other concerns were resolved and then the core who used
their -2 powers disappeared and could not lift it after those concerns
had been resolved. This led to a situation where the -2'd patch was
abandoned and then resubmitted with a new Change-Id so it could be
approved in time for a milestone.

In chatting with some folks, it's become apparent that all of us
Glance cores need to keep a dashboard around of items that we've -2'd.

The basic form of that is:

https://review.openstack.org/#/q/reviewer:self+AND+label:code-review-2+AND+(project:openstack/glance+OR+project:openstack/glance_store+OR+project:openstack/python-glanceclient)

Or the query in particular is:

    reviewer:self AND label:code-review-2 AND
(project:openstack/glance OR project:openstack/glance_store OR
project:openstack/python-glanceclient)

That said, this will show any patch you have reviewed that has a -2 on
it. (This also ignores specs.)

To find what *you* have -2'd, the query is a little bit different:

    label:code-review-2, AND
(project:openstack/glance OR project:openstack/glance_store OR
project:openstack/python-glanceclient)

For example,

    label:code-review-2,sigmavirus24 AND (project:openstack/glance OR
project:openstack/glance_store OR
project:openstack/python-glanceclient)

is my query.

I think we would all appreciate it if as cores we could keep an eye on
our own -2's and keep them up-to-date.

I suspect people here will want to ignore anything that was abandoned
so you can also do:

    label:code-review-2, AND -status:abandoned AND
(project:openstack/glance OR project:openstack/glance_store OR
project:openstack/python-glanceclient)

Finally, if you use Gertty, you can use this query to do the same thing:

    label:Code-Review=-2, AND -status:abandoned AND
project:^openstack/.*glance.*

Cheers,
--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Ocata Design Summit - Proposed slot allocation

2016-09-23 Thread Thierry Carrez
e...@itsonlyme.name wrote:
> Also if you don't plan to use all of your
>> allocated slots, let us know so that we can propose them to other teams.
>>
> Just so that we are not forgotten (in case there is some space left),
> the storlets dev team would greatly appreciate 2fb and 2-3wr.

Would you be open to slots on Friday? Most of the slots I recovered are
on the Friday morning.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][summit][kolla] Returning one workroom session to the foundation

2016-09-23 Thread Thierry Carrez
Steven Dake (stdake) wrote:
> [...]
> The session we would like to return to the foundation is the *_Kolla
> Workroom Session Wednesday 3:05-3:45 session_*.  We have no conflicts in
> this time slot and aren’t returning it as a result of some conflict in
> the core team’s schedule that I am aware of.  I also believe the time
> slot and day are really fantastic (Wednesday – when people are fresh –
> 3pm after lunch settles) (from my 4+ years of planning summits for
various projects).  My hope is that it is put to good use, perhaps to give an
> emerging technology project some planning time at summit.

Thanks! This will certainly help. On Monday I'll collect the given-back
rooms and try to fulfill as many extra requests as we can. If another
team wants to give back slots, please do so before Sunday so that I can
include those as well.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] Centralizing policy (was: Anyone interested in writing a policy generator sphinx extension?)

2016-09-23 Thread Andrew Laski


On Thu, Sep 22, 2016, at 01:34 PM, Alexander Makarov wrote:
> Andrew,
> 
> the idea is to shift the existing RBAC implementation:
> currently policy is enforced in the service (Nova, for instance)
> against the result of token validation, which is, in general, an access 
> check;
> I'm thinking about performing policy enforcement along with access check
> in a single operation and only if necessary -
> not every operation is protected and requires token validation,
> though now keystone middleware validates a token on every request.

There's a lot of information necessary to determine which policy checks
to run, and even more information necessary in order to run the policy
check.

For example: When booting an instance there are policies that are run
based on the data included in the POST body of the request. Should
Keystone be passed that POST body and contain logic to determine which
checks to run? Or would Nova pass a list of checks to Keystone and say
"do these policies pass?" I don't see an advantage to either of those.

Also, a lot of policy checks rely on data that must be looked up from
the Nova db. Checking policy for deletion of an instance requires
looking up that instance to check the context user_id/project_id against
the instance user_id/project_id. So Nova would need to do that lookup
and pass that information to Keystone to use for a policy check. Which
also means that Keystone can't do the check at the time of token
validation because at that point it doesn't have enough data for the
policy check.


> 
> AFAIK Nova is using some custom logic to change local policies at
> run-time,
> so I assume there may be a benefit in dynamic centralized storage 
> managed via API,
> so that Horizon can even provide a UI for that.

There is no special mechanism in Nova for changing policies at run time.
Policy defaults are now embedded in the Nova code so it's possible to
run without a policy file, and it's possible to override those policies
with a policy file. But once Nova is running the only way to update
policies is to update the policy file. That works the same in every
project because the reload mechanism is handled automatically in
oslo.policy.
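
For illustration, the in-code default pattern looks roughly like this with
oslo.policy (a sketch only; the rule name and check string are examples, not
Nova's actual defaults):

    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.CONF
    enforcer = policy.Enforcer(CONF)

    # Register an in-code default; an entry with the same name in a policy
    # file overrides it, and oslo.policy reloads that file when it changes.
    enforcer.register_defaults([
        policy.RuleDefault(
            name='os_compute_api:servers:delete',
            check_str='role:admin or project_id:%(project_id)s',
            description='Delete a server'),
    ])

    # The target carries data looked up from the service's own DB (e.g. the
    # instance's project_id), which is also why the check can't happen at
    # token-validation time in keystone.
    target = {'project_id': 'demo', 'user_id': 'alice'}
    creds = {'project_id': 'demo', 'user_id': 'alice', 'roles': ['member']}
    allowed = enforcer.authorize('os_compute_api:servers:delete',
                                 target, creds, do_raise=False)

With no policy file present the registered default applies; dropping a file in
later overrides it without restarting the service.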

Furthermore, what's more interesting to Horizon I think is the question
of what is a user allowed to do with the API. And that question requires
more information than just policy to answer. An example is that it's
possible to resize an instance to a smaller size if the instance is on a
Xen hypervisor. So in order for Horizon to allow/disallow that option
for a user it needs information that Nova is in possession of, and
Keystone never will be. That's why the Nova community has been
discussing the idea of exposing "capabilities" in the Nova API, which would
let a user know what they can do based on policy settings and many other
factors.

> 
> There are many questions in this matter, and my main one is:
> do we do RBAC in OpenStack the best way?

I'm sure we don't do RBAC in the best way. However, just like quota
enforcement, I think it's something that is going to need to be handled
in each project individually. But unlike quota administration I don't
see the benefit of centralizing policy administration. It's fairly
static data, and on the occasions that it needs to change it can be
updated with a configuration management solution and the live policy
reloading that oslo.policy has.

> 
> 
> On 21.09.2016 20:16, Andrew Laski wrote:
> >
> > On Wed, Sep 21, 2016, at 12:02 PM, Joshua Harlow wrote:
> >> Andrew Laski wrote:
> >>> However, I have asked twice now on the review what the benefit of doing
> >>> this is and haven't received a response so I'll ask here. The proposal
> >>> would add additional latency to nearly every API operation in a service
> >>> and in return what do they get? Now that it's possible to register sane
> >>> policy defaults within a project most operators do not even need to
> >>> think about policy for projects that do that. And any policy changes
> >>> that are necessary are easily handled by a config management system.
> >>>
> >>> I would expect to see a pretty significant benefit in exchange for
> >>> moving policy control out of Nova, and so far it's not clear to me what
> >>> that would be.
> >> One way to do this is to set up something like etcd or zookeeper and
> >> have policy files be placed into certain 'keys' in there by keystone,
> >> then consuming projects would 'watch' those keys for being changed (and
> >> get notified when they are changed); the project would then reload its
> >> policy when the other service (keystone) writes a new key/policy.
> >>
> >> https://coreos.com/etcd/docs/latest/api.html#waiting-for-a-change
> >>
> >> or
> >> https://zookeeper.apache.org/doc/r3.4.5/zookeeperProgrammers.html#ch_zkWatches
> >>
> >> or (pretty sure consul has something similar),
> >>
> >> This is pretty standard stuff folks :-/ and it's how afaik things like
> >> https://github.com/skynetservices/skydns work (and more), and it would
> 

Re: [openstack-dev] [AODH] event-alarm timeout discussion

2016-09-23 Thread gordon chung


On 23/09/2016 2:18 AM, Zhai, Edwin wrote:

>
> There are many targets(topics)/endpoints in above ceilometer code. But
> in AODH, we just have one topic, 'alarm.all', and one endpoint. If it is
> still multi-threaded, there is already a potential race condition here,
> but the event-alarm timeout makes it worse.
>
> https://github.com/openstack/aodh/blob/master/aodh/event.py#L61-L63

see my reply to the other message, but yes, it is multithreaded. there's no 
race currently because we don't do anything that needs to honour ordering.

>
> the event evaluator is triggered by events only, that is, it's not called at
> all until the next event comes. If no event comes, the evaluator just sleeps,
> so it can't check the timeout and update_alarm. In other words, 'timeout.end'
> is just for waking up the evaluator.
>

what's the purpose of the thread being created? i thought the idea was 
to receive alarm.timeout.start event -> creates a thread? can we not:
1. receive alarm.timeout.start -> create an alarm with timeout thread
2a. if event received, kill timeout thread, update alarm.
2b. if timeout reached, send alarm notification, update alarm.

^ that is just a random thought, i didn't think about exactly how to 
implement. right now i'm not clear who is generating this 
alarm.timeout.end event and why it needs to do that at all.
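
a rough sketch of that 1/2a/2b flow, assuming a plain threading.Timer per alarm
(purely illustrative, not aodh code; the callback names are made up):

    import threading

    class EventAlarmTimeout(object):
        """arm a timer on alarm.timeout.start; cancel it if the awaited
        event shows up first."""

        def __init__(self, alarm_id, timeout, notify, update_alarm):
            self.alarm_id = alarm_id
            self.notify = notify              # e.g. send the alarm notification
            self.update_alarm = update_alarm  # e.g. persist the new alarm state
            self._timer = threading.Timer(timeout, self._on_timeout)

        def start(self):                      # 1. alarm.timeout.start received
            self._timer.start()

        def event_received(self):             # 2a. awaited event arrived in time
            self._timer.cancel()
            self.update_alarm(self.alarm_id, 'ok')

        def _on_timeout(self):                # 2b. timeout reached
            self.notify(self.alarm_id)
            self.update_alarm(self.alarm_id, 'alarm')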

cheers,
-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Testing config drive creation in our CI

2016-09-23 Thread Vladyslav Drok
This makes sense to me. Another approach is here -
https://review.openstack.org/375467

Thanks,
Vlad

On Fri, Sep 23, 2016 at 2:37 PM, Dmitry Tantsur  wrote:

> Hi folks!
>
> We've found out that we're not testing the creation of config drives in our
> CI. It ended up with one combination actually being broken (pxe_* + wholedisk
> + configdrive). I would like to cover this testing gap. Is there any
> benefit in NOT using config drives in all jobs? I assume we should not
> bother too much testing the metadata service, as it's not within our code
> base (unlike config drive).
>
> I've proposed https://review.openstack.org/375362 to switch our tempest
> plugin to testing config drives, please vote. As you see one job fails on
> it - this is the breakage I was talking about. It will (hopefully) get
> fixed with the next release of ironic-lib.
>
> Finally, we need to run all jobs on ironic-lib, not only one, as
> ironic-lib is not the basis for all deployment variants. This will probably
> happen after we switch our DSVM jobs to Xenial though.
>
> -- Dmitry
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [AODH] event-alarm timeout discussion

2016-09-23 Thread gordon chung


On 23/09/2016 3:19 AM, Zhai, Edwin wrote:
> "Each notification listener is associated with an executor which
> controls how incoming notification messages will be received and
> dispatched. By default, the most simple executor is used - the blocking
> executor. This executor processes inbound notifications on the server’s
> thread, blocking it from processing additional notifications until it
> finishes with the current one."

we use threading executor[1] not blocking. unless something has changed 
with oslo.messaging recently, the behaviour is: you have 64 threads 
grabbing messages off queue. because it's all threads and we don't 
really know when they context switch so you can't guarantee order 
(unless you switch to one thread).

[1] 
https://github.com/openstack/aodh/blob/9c5df8400b621141db93a259900efe3702eb6241/aodh/messaging.py#L54
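
for reference, the listener setup being described is roughly the following (a
sketch only; AlarmEndpoint is an illustrative stand-in, not the real aodh
endpoint class):

    import oslo_messaging
    from oslo_config import cfg

    class AlarmEndpoint(object):
        # notification endpoints expose methods named after priorities
        def sample(self, ctxt, publisher_id, event_type, payload, metadata):
            pass

    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic='alarm.all')]
    # executor='threading' is what gives the pool of worker threads pulling
    # messages off the queue; a single-threaded executor would preserve
    # ordering but slow things down.
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [AlarmEndpoint()], executor='threading')
    listener.start()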

cheers,
-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Can all virt drivers provide a disk 'id' for the diagnostics API?

2016-09-23 Thread Daniel P. Berrange
On Fri, Sep 23, 2016 at 07:32:36AM -0500, Matt Riedemann wrote:
> On 9/23/2016 3:54 AM, Daniel P. Berrange wrote:
> > On Thu, Sep 22, 2016 at 01:54:21PM -0500, Matt Riedemann wrote:
> > > Sergey is working on a spec to use the standardized virt driver instance
> > > diagnostics in the os-diagnostics API. A question came up during review of
> > > the spec about how to define a disk 'id':
> > > 
> > > https://review.openstack.org/#/c/357884/2/specs/ocata/approved/restore-vm-diagnostics.rst@140
> > > 
> > > The existing diagnostics code doesn't set a disk id in the list of disk
> > > dicts, but I think with at least libvirt we can set that to the target
> > > device from the disk device xml.
> > > 
> > > The xenapi code for getting this info is a bit confusing for me at least,
> > > but it looks like it's possible to get the disks, but the id might need to
> > > be parsed out (as a side note, it looks like the cpu/memory/disk 
> > > diagnostics
> > > are not even populated in the get_instance_diagnostics method for xen).
> > > 
> > > vmware is in the same boat as xen, it's not fully implemented:
> > > 
> > > https://github.com/openstack/nova/blob/64cbd7c51a5a82b965dab53eccfaecba45be9c27/nova/virt/vmwareapi/vmops.py#L1561
> > > 
> > > Hyper-v and Ironic virt drivers haven't implemented 
> > > get_instance_diagnostics
> > > yet.
> > 
> > The key value of this field (which we should call "device_name", not "id"),
> > is to allow the stats data to be correlated with the entries in the block
> > device mapping list used to configure storage when booting the VM. As such
> > we should declare its value to match the corresponding field in BDM.
> > 
> > Regards,
> > Daniel
> > 
> 
> Well, except that we don't want people specifying a device name in the block
> device list when creating a server, and the libvirt driver ignores that
> altogether. In fact, I think Dan Smith was planning on adding a microversion
> in Ocata to remove that field from the server create request since we can't
> guarantee it's what you'll end up with for all virt drivers.

We don't want people specifying it, but we should report the auto-allocated
names back when you query the data after instance creation, shouldn't we? If
we don't, then there's no way for users to correlate the disks that they
requested with the instance diagnostic stats, which severely limits their
usefulness.

> I'm fine with calling the field device_name though.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Can all virt drivers provide a disk 'id' for the diagnostics API?

2016-09-23 Thread Matt Riedemann

On 9/23/2016 3:54 AM, Daniel P. Berrange wrote:

On Thu, Sep 22, 2016 at 01:54:21PM -0500, Matt Riedemann wrote:

Sergey is working on a spec to use the standardized virt driver instance
diagnostics in the os-diagnostics API. A question came up during review of
the spec about how to define a disk 'id':

https://review.openstack.org/#/c/357884/2/specs/ocata/approved/restore-vm-diagnostics.rst@140

The existing diagnostics code doesn't set a disk id in the list of disk
dicts, but I think with at least libvirt we can set that to the target
device from the disk device xml.

The xenapi code for getting this info is a bit confusing for me at least,
but it looks like it's possible to get the disks, but the id might need to
be parsed out (as a side note, it looks like the cpu/memory/disk diagnostics
are not even populated in the get_instance_diagnostics method for xen).

vmware is in the same boat as xen, it's not fully implemented:

https://github.com/openstack/nova/blob/64cbd7c51a5a82b965dab53eccfaecba45be9c27/nova/virt/vmwareapi/vmops.py#L1561

Hyper-v and Ironic virt drivers haven't implemented get_instance_diagnostics
yet.


The key value of this field (which we should call "device_name", not "id"),
is to allow the stats data to be correlated with the entries in the block
device mapping list used to configure storage when booting the VM. As such
we should declare its value to match the corresponding field in BDM.

Regards,
Daniel



Well, except that we don't want people specifying a device name in the 
block device list when creating a server, and the libvirt driver ignores 
that altogether. In fact, I think Dan Smith was planning on adding a 
microversion in Ocata to remove that field from the server create 
request since we can't guarantee it's what you'll end up with for all 
virt drivers.


I'm fine with calling the field device_name though.
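
For the libvirt case mentioned above, pulling the target device out of the
domain XML would look roughly like this (a sketch, not the actual driver code):

    import xml.etree.ElementTree as etree

    def disk_device_names(domain_xml):
        """Return the target device names (e.g. 'vda') for each disk
        defined in a libvirt domain XML document."""
        doc = etree.fromstring(domain_xml)
        return [target.get('dev')
                for target in doc.findall('./devices/disk/target')]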

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] gate failures (fragile integrated tests)

2016-09-23 Thread Akihiro Motoki
Hi horizoners,

As you may have noticed, the main fixes have been merged and the horizon
gate failure rate seems to have recovered.
There are still some remaining issues around the integration tests, but the top
failure-rate problems have now been fixed or have a workaround.
Thanks for your patience.


2016-09-23 9:30 GMT+09:00 Akihiro Motoki :

> Hi horizoners,
>
> The current horizon gate is half broken, as both integration tests have a
> 30-40% failure rate.
> (See https://bugs.launchpad.net/horizon/+bug/1626536 and
> https://bugs.launchpad.net/horizon/+bug/1626643)
> Fixes for these bugs are now under the gate.
>
> Please avoid using 'recheck' if one of the integration tests fails.
>
> Cores, please let these fixes be merged first.
> Until then, please avoid giving +A to other patches so that these can merge
> quickly.
>
> Thanks,
> Akihiro
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Massively Distributed] Working sessions in Barcelona

2016-09-23 Thread lebre . adrien
Dear all

The Massively Distributed Cloud Working Group [1] will meet on Thursday morning 
during the next summit in Barcelona. 
We have three slots. The current proposal is to discuss distributed 
clouds/Fog/Edge use-cases and requirements during the first two and to use 
the last one to define concrete actions for the Ocata cycle. 

If you are interested in taking part in the exchanges, please take a look at 
[2] and do not hesitate to complete/amend the agenda proposal. 

Thanks,  
Ad_rien_ 

[1] https://wiki.openstack.org/wiki/Massively_Distributed_Clouds
[2] 
https://etherpad.openstack.org/p/massively_distribute-barcelona_working_sessions

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Testing config drive creation in our CI

2016-09-23 Thread Dmitry Tantsur

Hi folks!

We've found out that we're not testing the creation of config drives in our CI. It 
ended up with one combination actually being broken (pxe_* + wholedisk + 
configdrive). I would like to cover this testing gap. Is there any benefit in 
NOT using config drives in all jobs? I assume we should not bother too much 
testing the metadata service, as it's not within our code base (unlike config 
drive).
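
For reference, the knobs involved are roughly the following (illustrative only; 
these are the option names as I recall them from the Newton-era defaults, so 
please double-check before relying on them):

    # devstack local.conf
    FORCE_CONFIG_DRIVE=True

    # nova.conf
    [DEFAULT]
    force_config_drive = True

    # tempest.conf
    [compute-feature-enabled]
    config_drive = True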


I've proposed https://review.openstack.org/375362 to switch our tempest plugin 
to testing config drives, please vote. As you see one job fails on it - this is 
the breakage I was talking about. It will (hopefully) get fixed with the next 
release of ironic-lib.


Finally, we need to run all jobs on ironic-lib, not only one, as ironic-lib is 
not the basis for all deployment variants. This will probably happen after we 
switch our DSVM jobs to Xenial though.


-- Dmitry

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] Picking a new tag

2016-09-23 Thread Rob C
I agree that sometimes simply filtering for "security" can get a bit noisy
because only very occasionally is an email mentioning it or even using the
[security] tag actually trying to get the attention of the OSSP. Most of
the time (from my filters anyway) it's either a Neutron Security Groups
issue or someone simply using [Security] as a bit of metadata.

However, I'm hesitant to move away from it, as we should be paying
attention to things that do come through with the [Security] tag, it's the
easiest way for someone to try to get us involved if they're having an
issue.

Speaking personally, I think that if we have a sec-project tag, or
something similar, I'll simply end up having twice as many filters, one for
the new tag and one for anyone who's using [security].

I'm interested to know what other users might think though.

On Fri, Sep 23, 2016 at 7:00 AM, Tony Breeds 
wrote:

> On Fri, Sep 23, 2016 at 12:12:53AM +, Jeremy Stanley wrote:
>
> > It actually is, but Mailman (unhelpfully) lists tags by their long
> > descriptions. Go ahead and click on the Details link next to the
> > Cross-project coordination topic and you'll see that's actually the
> > name for the [all] tag.
>
> Gah!  I should have clicked all the links.
>
> Thanks.
>
> Yours Tony.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to deploy OpenStack on k8s

2016-09-23 Thread Flavio Percoco

On 22/09/16 10:49 -0500, Michał Jastrzębski wrote:

So the issue is, I know of a few other OpenStacks on k8s and everyone does
it slightly differently. So far we lack proof points and real-world data
to determine the best approaches. This is still a not-too-well-researched
field. Right now it's mostly opinions and assumptions.
We're not ready to make a document without having a flame war around
it ;) Not enough knowledge in our collective brains.


Awesome input, thanks.


As for kolla-k8s, we are still deep in development, so we are free to
take the best course of action we know of. We don't have any technical
debt now. The current state of things represents what we think is the
best approach.


I wonder if we can start writing these assumptions down and update them as we
go. I don't expect you to do it, I'm happy to help with this. We could put it in
kolla-k8s docs if that makes sense to other kolla-k8s folks.

I'll probably start pinging you guys on IRC with questions so I can help write
this down.


There is also the fact that k8s is constantly growing and still lacks certain
features, which created these issues in the first place; if k8s solves
them on their side, that will affect decisions on our side.


Thanks a lot, Michal. This is indeed the kind of info I was looking for and
where I'd love to start from.

Flavio


Welcome to the Chaos;)

On 22 September 2016 at 09:53, Flavio Percoco  wrote:

On 22/09/16 09:39 -0500, Michał Jastrzębski wrote:


Flavio,

So as you surely know, k8s is an orchestration tool and docker is a
container engine. If you are running k8s, you still run docker :)



I know this (although, if we really want to nitpick, you could technically use
something other than docker :P). In my email I mentioned that I'm interested in
documenting how one would deploy OpenStack on kubernetes, which is likely
different from how you'd deploy OpenStack on docker (or any other container
runtime).


Kolla-kubernetes is a part of Big Tent, is project developed by Kolla
community and we're close to our big showdown:) Come over to our
session in Barcelona, in the meantime I suggest you look at
https://github.com/openstack/kolla-kubernetes and join us at
#openstack-kolla



Thanks for the info. As I mentioned in my email, I know kolla-kubernetes; I've
reviewed some specs and patches, etc. I am, however, interested in something
different, which is how you'd deploy OpenStack on k8s. Is kolla-kubernetes doing
this the right way? Is there a better way to do it? These are the kinds of things
I'd love to document. I know some OPs have contributed to kolla-kubernetes too.

Thanks for getting back,
Flavio




On 22 September 2016 at 09:09, Davanum Srinivas  wrote:


Flavio

Please see below:

On Thu, Sep 22, 2016 at 7:04 AM, Flavio Percoco 
wrote:


Greetings,

I've recently started looking into the container technologies around OpenStack.
More specifically, I've been looking into the tools that allow for deploying
OpenStack on containers, which is what I'm the most interested in right now as
part of the TripleO efforts.

I'm familiar with the Kolla project and the tools managed by this team. In fact,
TripleO currently uses kolla images for the containerized nova-compute
deployment.

I am, however, looking beyond a docker based deployment. I'd like to explore in
more depth a Kubernetes based deployment of OpenStack. I'm familiar with both
kolla-kubernetes and fuel-ccp, their structure and direction*. Both projects
have now advanced a bit in their implementations and made some decisions.

As someone that started looking into this topic just recently, I'd love to see
our communities collaborate more wherever possible. For example, it'd be great
to see us working on a reference architecture for deploying OpenStack on
kubernetes, leaving the implementation details aside for a bit. I'd assume some
folks have done this already and I bet we can all learn more from it if we work
on this together.

So, let me go ahead and ask some further questions here, I might be missing some
history and/or context:

- Is there any public documentation that acts as a reference architecture for
  deploying OpenStack on kubernetes?
- Is this something the architecture working group could help with? Or would it
  be better to hijack one of kolla meetings?

The result I'd love to see from this collaboration is a reference architecture
explaining how OpenStack should be run on Kubernetes.



At this moment, fuel-ccp-* is an experiment, it's not under
governance, there is no expectation of any releases, there are no
specs or docs that I know of. So kolla/kolla-kubernetes is probably
the best accumulator of kubernetes knowledge specifically about
running OpenStack.

Note that tcpcloud folks may also have something, but haven't seen any
public information or reference architecture from them. Definitely
don't know of any plans from that team as well to open up and share.


Thanks in advance. I look forward to see us 

Re: [openstack-dev] [kolla][fuel][tripleo] Reference architecture to deploy OpenStack on k8s

2016-09-23 Thread Flavio Percoco

On 22/09/16 20:55 +, Steven Dake (stdake) wrote:

Flavio,

Apologies for delay in response – my backlog is large.

Forgive me if I parsed your message incorrectly.


It's probably me failing to communicate my intent or just the intent not being
good enough or worth it at all.


It came across to me as “How do I blaze a trail for OpenStack on Kubernetes?”.  
That was asked of me personally 3 years ago which led to the formation of the 
Kolla project inside Red Hat.  Our initial effort at that activity failed.  
Instead we decided kubernetes wasn’t ready for trailblazing in this space and 
used a far more mature project (Ansible) to solve the “OpenStack in Containers” 
problems and build from there.

We have since expanded our scope to re-solve the “How do I blaze a trail for 
Openstack on Kubernetes?” question since Kubernetes is now ready for this sort 
of trailblazing.  Fuel and several other folks decided to create derived works 
of the Kolla community’s innovations in this area.  I would contend that Fuel 
didn’t need to behave in such a way because the Kolla community is open, 
friendly, mature, diversely affiliated, has a reasonable philosophy and good 
set of principles as well as a strong leadership pipeline.

Rather than go blaze a trail when one already exists or create a derived work, 
why not increase your footprint in Kolla instead?  Red Hat has invested in 
Kolla for some time now, and their footprint hasn’t magically disappeared over 
night.   We will give you what you want within reasonable boundaries (the 
boundaries all open-source projects set of their contributors).  We also accept 
more work than the typical OpenStack project might, so it’s not like you will 
have to bring donuts into the office for every patch you merge into Kolla.

As to your more direct question of reference architecture, that is a totally 
loaded term that I’ll leave untouched.

To answer your question of “Does Kolla have a set of best practices”: the answer 
is yes for kolla-ansible and kolla itself, and there is a strongly forming set of 
best practices in kolla-kubernetes.


As I mentioned in my email, I don't really care about the implementation right
now. I'm not trying to change the current teams, goals, or anything. I would go
as far as saying that the acknowledgement of the existing teams in my original
email was merely a way to identify a set of teams that might be interested in
writing this reference architecture.

Is it a loaded term? Maybe. Is this point relevant to my original question? I'd
say no. It doesn't matter what we call this, not to me, not right now.

Don't get me wrong, I understand where you're coming from and I appreciate your
input. Unfortunately, I think you addressed my email from the wrong angle as I'm
a step (or many steps) away from doing any kind of implementation, and I tried
to be clear about this in my original email.

I can contribute to kolla-kubernetes all you want but that won't give me what I
asked for in my original email and I'm pretty sure there are opinions about the
"recommended" way for running OpenStack on kubernetes. Questions like: Should I
run rabbit in a container? Should I put my database in there too? Now with
PetSets it might be possible. Can we be smarter on how we place the services in
the cluster? Or should we go with the traditional controller/compute/storage
architecture.

You may argue that I should just read the yaml files from kolla-kubernetes and
start from there. That may be true, but that's why I asked if there was something
written already.

Thanks for your email,
Flavio


Regards
-steve



On 9/22/16, 4:04 AM, "Flavio Percoco"  wrote:

   Greetings,

   I've recently started looking into the container technologies around 
OpenStack.
   More specifically, I've been looking into the tools that allow for deploying
   OpenStack on containers, which is what I'm the most interested in right now 
as
   part of the TripleO efforts.

   I'm familiar with the Kolla project and the tools managed by this team. In 
fact,
   TripleO currently uses kolla images for the containerized nova-compute
   deployment.

   I am, however, looking beyond a docker based deployment. I'd like to explore 
in
   more depth a Kubernetes based deployment of OpenStack. I'm familiar with both
   kolla-kubernetes and fuel-ccp, their structure and direction*. Both projects
   have now advanced a bit in their implementations and made some decisions.

   As someone that started looking into this topic just recently, I'd love to 
see
   our communities collaborate more wherever possible. For example, it'd be 
great
   to see us working on a reference architecture for deploying OpenStack on
    kubernetes, leaving the implementation details aside for a bit. I'd assume 
some
   folks have done this already and I bet we can all learn more from it if we 
work
   on this together.

   So, let me go ahead and ask some further questions here, I might be missing 
some
   history and/or 

Re: [openstack-dev] [vote][kolla] deprecation for fedora distro support

2016-09-23 Thread Haïkel
2016-09-21 16:34 GMT+02:00 Steven Dake (stdake) :
>
>
>
> On 9/20/16, 11:18 AM, "Haïkel"  wrote:
>
> 2016-09-19 19:40 GMT+02:00 Jeffrey Zhang :
> > Kolla core reviewer team,
> >
> > Kolla supports multiple Linux distros now, including
> >
> > * Ubuntu
> > * CentOS
> > * RHEL
> > * Fedora
> > * Debian
> > * OracleLinux
> >
> > But only Ubuntu, CentOS, and OracleLinux are widely used and we have
> > robust gate to ensure the quality.
> >
> > For fedora, Kolla hasn't any test for it and nobody reports any bug
> > about it( i.e. nobody use fedora as base distro image). We (kolla
> > team) also do not have enough resources to support so many Linux
> > distros. I prefer to deprecate fedora support now.  This is talked in
> > past but inconclusive[0].
> >
> > Please vote:
> >
> > 1. Kolla needs support fedora( if so, we need some guys to set up the
> > gate and fix all the issues ASAP in O cycle)
> > 2. Kolla should deprecate fedora support
> >
> > [0] 
> http://lists.openstack.org/pipermail/openstack-dev/2016-June/098526.html
> >
>
>
> /me has no voting rights
>
> As RDO maintainer and Fedora developer, I support option 2. as it'd be
> very time-consuming to maintain Fedora support..
>
>
> >
> > --
> > Regards,
> > Jeffrey Zhang
> > Blog: http://xcodest.me
> >
>
> Haikel,
>
> Quick Q – are you saying maintaining fedora in kolla is time consuming or that 
> maintaining rdo for fedora is time consuming (and something that is being 
> dropped)?
>

Both. In my experience maintaining RDO on Fedora, I encountered
similar issues to Kolla's. It's doable but a lot of work.
One of the biggest problems is updates: you may have disruptive
updates of python module packages quite frequently or, more rarely,
get some updates reverted.
So keeping Fedora in good shape would require a decent amount of effort.

Regards,
H.



> Thanks for improving clarity on this situation.
>
> Regards
> -steve
>
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What is definition of critical bugfixes?

2016-09-23 Thread Rikimaru Honjo

On 2016/09/20 23:42, Matt Riedemann wrote:

On 9/20/2016 4:25 AM, Rikimaru Honjo wrote:

Hi All,

I requested to review my patch in the last Weekly Nova team meeting.[1]
In this meeting, Mr. Dan Smith said following things about my patch.

* This patch is too large to merge in rc2.[2]
* Fix after Newton and backport to newton and mitaka.[3]

In my understanding, we can backport only critical bugfixes and security
patches
in Phase II.[4]
And, stable/mitaka move to Phase II after newton.

What is definition of critical bugfixes?
And, can I backport my patch to mitaka after newton?

[1]http://eavesdrop.openstack.org/meetings/nova/2016/nova.2016-09-15-21.00.log.html#l-178

[2]http://eavesdrop.openstack.org/meetings/nova/2016/nova.2016-09-15-21.00.log.html#l-194

[3]http://eavesdrop.openstack.org/meetings/nova/2016/nova.2016-09-15-21.00.log.html#l-185

[4]http://docs.openstack.org/project-team-guide/stable-branches.html#support-phases


Best regards,


Critical generally means data loss, security issues, or upgrade impacts, i.e. 
does a bug cause data loss or prevent upgrades to a given release?

Thank you for explaining!
IMO, the bug I reported has the potential for data loss due to an unexpected volume detach.


Latent known issues are generally not considered critical bug fixes, especially 
if they are large and complicated which means they are prone to introduce 
regressions.

When is it decided whether an issue is a critical bug or not?
Is it after committing to gerrit?
(In other words, can I commit to the N-2 branch after Newton? Of course, 
whether it is considered critical is a separate problem.)

Sorry for repeating my questions.
--
Rikimaru Honjo
E-mail:honjo.rikim...@po.ntts.co.jp



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [neutron][lbaas][barbican] Listener create fails

2016-09-23 Thread Jayanthi Jeyakumar
-- Forwarded message --
From: 
Date: Fri, Sep 23, 2016 at 11:50 AM
Subject: [neutron][lbaas][barbican] Listener create fails
To: jeyakumar@gmail.com


You have to be a subscriber to post to this mailing list, so your
message has been automatically rejected. You can subscribe at:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

If you think that your messages are being rejected in error, contact
the mailing list owner at %(listowner)



-- Forwarded message --
From: Jayanthi Jeyakumar 
To: openstack-dev@lists.openstack.org
Cc:
Date: Fri, 23 Sep 2016 11:50:29 +0530
Subject: [neutron][lbaas][barbican] Listener create fails
Hello All,

Setup : Liberty with Barbican

When I create a listener it fails with the error "could not process TLS
container, invalid user/password"

DEBUG: keystoneclient.session REQ: curl -g -i -X POST
http://x.x.x.x:9696/v2.0/lbaas/listeners.json -H "User-Agent:
python-neutronclient" -H "Content-Type: application/json" -H "Accept:
application/json" -H "X-Auth-Token:
{SHA1}2b74be94ec992cd8d53d930d743b344428eb1a4f"
-d '{"listener": {"protocol": "TERMINATED_HTTPS", "name": "ebay_lb1_list1",
"default_tls_container_ref": "http://10.106.100.55:9311/v1/
containers/05b750e5-ef14-4afc-b4fe-2b4949cf3356", "admin_state_up": true,
"protocol_port": "443", "loadbalancer_id": "773b8813-9325-43bf-8147-
69de1424fed5"}}'

RESP BODY: {"NeutronError": {"message": "Could not process TLS container
http://x.x.x.x:9311/v1/containers/05b750e5-ef14-4afc-b4fe-2b4949cf3356,
Invalid user / password (Disable debug mode to suppress these details.)",
"type": "CertManagerError", "detail": ""}}

added this to my neutron.conf
service_plugins = neutron_lbaas.services.loadbalancer.plugin.
LoadBalancerPluginv2

neutron_lbaas.conf
[service_auth]
auth_uri = http://localhost:35357/v2.0
admin_tenant_name = admin
admin_user = admin
admin_password = password
auth_version = 2

[service_providers]
service_provider=LOADBALANCERV2:NetScaler:neutron_lbaas.drivers.
netscaler.netscaler_driver_v2.NetScalerLoadBalancerDriverV2:default

Please let me know if any more configuration needs to be done.

Thanks,
Jay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [daisycloud-core] Meeting minutes for IRC meeting 0800UTC Sep. 23 2016

2016-09-23 Thread hu . zhijiang
Minutes:
http://eavesdrop.openstack.org/meetings/daisycloud/2016/daisycloud.2016-09-23-07.59.html
 

Minutes (text): 
http://eavesdrop.openstack.org/meetings/daisycloud/2016/daisycloud.2016-09-23-07.59.txt
 

Log:
http://eavesdrop.openstack.org/meetings/daisycloud/2016/daisycloud.2016-09-23-07.59.log.html
 


Have a good weekend!

B.R.,
Zhijiang


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Can all virt drivers provide a disk 'id' for the diagnostics API?

2016-09-23 Thread Daniel P. Berrange
On Thu, Sep 22, 2016 at 01:54:21PM -0500, Matt Riedemann wrote:
> Sergey is working on a spec to use the standardized virt driver instance
> diagnostics in the os-diagnostics API. A question came up during review of
> the spec about how to define a disk 'id':
> 
> https://review.openstack.org/#/c/357884/2/specs/ocata/approved/restore-vm-diagnostics.rst@140
> 
> The existing diagnostics code doesn't set a disk id in the list of disk
> dicts, but I think with at least libvirt we can set that to the target
> device from the disk device xml.
> 
> The xenapi code for getting this info is a bit confusing for me at least,
> but it looks like it's possible to get the disks, but the id might need to
> be parsed out (as a side note, it looks like the cpu/memory/disk diagnostics
> are not even populated in the get_instance_diagnostics method for xen).
> 
> vmware is in the same boat as xen, it's not fully implemented:
> 
> https://github.com/openstack/nova/blob/64cbd7c51a5a82b965dab53eccfaecba45be9c27/nova/virt/vmwareapi/vmops.py#L1561
> 
> Hyper-v and Ironic virt drivers haven't implemented get_instance_diagnostics
> yet.

The key value of this field (which we should call "device_name", not "id"),
is to allow the stats data to be correlated with the entries in the block
device mapping list used to configure storage when booting the VM. As such
we should declare its value to match the corresponding field in BDM.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vote][kolla] deprecation for debian distro support

2016-09-23 Thread Christian Berendt
> On 22 Sep 2016, at 17:16, Ryan Hallisey  wrote:
> 
> I agree with Michal and Martin. I was a little reluctant to respond here 
> because the Debian additions are new, while Fedora has been around since the 
> beginning and never got a ton of testing.
> 
> Berendt what's your take here?

It is fine with me to keep Debian if someone is committed to continuing to work on it.

Christian.


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2016-09-23 Thread Zhenyu Zheng
Hi Novaers and Cinders:

Quite often application requirements would demand using locally attached
disks (or direct attached disks) for OpenStack compute instances. One such
example is running virtual hadoop clusters via OpenStack.

We can currently achieve this by using BlockDeviceDriver as the Cinder driver and
matching AZs in Nova and Cinder, as illustrated in [1], but this is not very
feasible in a large-scale production deployment.
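
For reference, the AZ-based local-disk setup in [1] boils down to roughly the
following cinder.conf on each compute host (a sketch; the device paths, backend
name and AZ name are only examples):

    [DEFAULT]
    enabled_backends = local-blk
    storage_availability_zone = az-compute-1

    [local-blk]
    volume_driver = cinder.volume.drivers.block_device.BlockDeviceDriver
    available_devices = /dev/sdb,/dev/sdc
    volume_backend_name = local-blk

plus booting each instance into the matching Nova AZ so it lands on the same
host as its volume.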

Now that Nova is working on resource providers and trying to build a
generic resource pool, is it possible to perform "volume-based scheduling"
to build instances according to their volumes? That would make it much easier
to build instances like the ones mentioned above.

Or do we have any other ways of doing this?

References:
[1]
http://cloudgeekz.com/71/how-to-setup-openstack-to-use-local-disks-for-instances.html

Thanks,

Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [AODH] event-alarm timeout discussion

2016-09-23 Thread Zhai, Edwin

I just checked the oslo.messaging doc; I don't know if it's out of date.

"Each notification listener is associated with an executor which controls how 
incoming notification messages will be received and dispatched. By default, the 
most simple executor is used - the blocking executor. This executor processes 
inbound notifications on the server’s thread, blocking it from processing 
additional notifications until it finishes with the current one."


Note: If the “eventlet” executor is used, the threading and time library need to 
be monkeypatched.



On Fri, 23 Sep 2016, Zhai, Edwin wrote:


Thanks for your clarification, see my comments below.

On Thu, 22 Sep 2016, gordon chung wrote:




On 22/09/2016 2:40 AM, Zhai, Edwin wrote:


See
https://github.com/openstack/aodh/blob/master/aodh/evaluator/event.py#L158

evaluate_events is the handler of the endpoint for 'alarm.all', it
iterates the event list and evaluate them one by one with project
alarms. If both 'timeout.end' and 'X' are in the event list, I assume
they are handled in sequence at different iterations of for loop. Am I
right?


not exactly. the code above is actually an endpoint for event listener.
the event listener itself is threaded so in theory, we have 64 of these
endpoints/loops. you can override the threads to have just one but
that's where things slow down a lot. we handle this in ceilometer by
having many single thread listeners each handling it's own queue[1]. i
still need to publish diagram on how that works.

[1]
https://github.com/openstack/ceilometer/blob/master/ceilometer/notification.py#L308


There are many targets(topics)/endpoints in above ceilometer code. But in 
AODH, we just have one topic, 'alarm.all', and one endpoint. If it is still 
multi-threaded, there is already a potential race condition here, but the 
event-alarm timeout makes it worse.


https://github.com/openstack/aodh/blob/master/aodh/event.py#L61-L63



deleted your sequence diagram since it's malformed in my response but
that is pretty cool.

a few questions:
- when alarm creation event arrives at evaluator it creates a thread to
process alarm. this thread will timeout and raise a new event if it
doesn't receive event in time? i don't understand why we need a
timeout.end event? can the evaluator not just update_alarm and notify if
we timeout? or update_alarm and skip notify if we receive event on time?


The event evaluator is triggered by events only, that is, it's not called at all 
until the next event comes. If no event comes, the evaluator just sleeps, so it 
can't check the timeout and update_alarm. In other words, 'timeout.end' is just 
for waking up the evaluator.




cheers,

--
gord



Best Rgds,
Edwin



Best Rgds,
Edwin__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [AODH] event-alarm timeout discussion

2016-09-23 Thread Zhai, Edwin

Thanks for your clarification, see my comments below.

On Thu, 22 Sep 2016, gordon chung wrote:




On 22/09/2016 2:40 AM, Zhai, Edwin wrote:


See
https://github.com/openstack/aodh/blob/master/aodh/evaluator/event.py#L158

evaluate_events is the handler of the endpoint for 'alarm.all', it
iterates the event list and evaluate them one by one with project
alarms. If both 'timeout.end' and 'X' are in the event list, I assume
they are handled in sequence at different iterations of for loop. Am I
right?


not exactly. the code above is actually an endpoint for event listener.
the event listener itself is threaded so in theory, we have 64 of these
endpoints/loops. you can override the threads to have just one but
that's where things slow down a lot. we handle this in ceilometer by
having many single thread listeners each handling it's own queue[1]. i
still need to publish diagram on how that works.

[1]
https://github.com/openstack/ceilometer/blob/master/ceilometer/notification.py#L308


There are many targets(topics)/endpoints in above ceilometer code. But in AODH, 
we just have one topic, 'alarm.all', and one endpoint. If it is still 
multi-threaded, there is already a potential race condition here, but the 
event-alarm timeout makes it worse.


https://github.com/openstack/aodh/blob/master/aodh/event.py#L61-L63



deleted your sequence diagram since it's malformed in my response but
that is pretty cool.

a few questions:
- when alarm creation event arrives at evaluator it creates a thread to
process alarm. this thread will timeout and raise a new event if it
doesn't receive event in time? i don't understand why we need a
timeout.end event? can the evaluator not just update_alarm and notify if
we timeout? or update_alarm and skip notify if we receive event on time?


The event evaluator is triggered by events only, that is, it's not called at all 
until the next event comes. If no event comes, the evaluator just sleeps, so it 
can't check the timeout and update_alarm. In other words, 'timeout.end' is just 
for waking up the evaluator.




cheers,

--
gord



Best Rgds,
Edwin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] Picking a new tag

2016-09-23 Thread Tony Breeds
On Fri, Sep 23, 2016 at 12:12:53AM +, Jeremy Stanley wrote:

> It actually is, but Mailman (unhelpfully) lists tags by their long
> descriptions. Go ahead and click on the Details link next to the
> Cross-project coordination topic and you'll see that's actually the
> name for the [all] tag.

Gah!  I should have clicked all the links.

Thanks.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev