[openstack-dev] [nova] About response of Get server details API v2

2014-08-07 Thread Kanno, Masaki
Hi all,

jclouds printed a stack trace when I called "Get server details" on API v2.
I looked into the response body of the API and found that the value of "image"
was an empty string, as follows.
I think the value of "image" should be an empty dictionary, as it is in
"Get server details" of API v3.
What do you think?

{"server": {
  "status": "ACTIVE", 
  "updated": "2014-08-07T09:54:26Z", 
<>
  "key_name": null, 
  "image": "",  <-- here
  "OS-EXT-STS:task_state": null, 
  "OS-EXT-STS:vm_state": "active", 
<>
  "config_drive": "", 
  "metadata": {}
  }
}
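Until the v2 response is changed, a client can defend against both shapes. A minimal Python sketch (the helper is illustrative, not SDK code):

```python
def image_ref(server):
    """Return the image ID from a "Get server details" response body.

    API v2 may return "" for "image" when the server was not booted from
    a Glance image (e.g. boot from volume); API v3 returns {}. Treating
    both (and a missing key) as "no image" keeps clients from choking on
    the inconsistency.
    """
    image = server.get("image")
    if not image:                    # covers "", {}, and None
        return None
    if isinstance(image, dict):      # dict form: {"id": ..., ...}
        return image.get("id")
    return image                     # a bare image ID string

assert image_ref({"status": "ACTIVE", "image": ""}) is None
assert image_ref({"image": {"id": "70a599e0"}}) == "70a599e0"
```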


Best regards,
 Kanno


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-07 Thread Kevin Benton
Can you link to the etherpad you mentioned?

In the meantime, apologies in advance for another analogy. :-)

If I give you an API to sort a list, I'm free to implement it however I
want as long as I return a sorted list. However, there is no way for me to
know from a call to this API that you might only be looking for the second
largest element, so it won't be the most efficient approach: I will always
have to sort the entire list.
If I give you a higher-level API to declare that you want the elements of a
list that match a criterion in a certain order, then the API can make the
optimization of not actually sorting the whole list if you just need the
second-largest element.

The former is analogous to the security groups API, and the latter to the
GBP API.
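The analogy can be sketched in a few lines of Python (an illustration of the imperative-vs-declarative point only, not of either Neutron API):

```python
import heapq

# Imperative API: the caller asks for a fully sorted list, so the
# implementation must sort everything even when the caller only wanted
# the second-largest element.
def sorted_list(items):
    return sorted(items)

# Declarative API: the caller states intent ("the n largest, descending"),
# so the implementation is free to optimize -- here heapq.nlargest does a
# heap-based selection instead of a full sort.
def largest(items, n):
    return heapq.nlargest(n, items)

data = [7, 3, 9, 1, 5]
assert sorted_list(data)[-2] == 7   # imperative: sort all, then index
assert largest(data, 2)[1] == 7     # declarative: select only the top two
```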
On Aug 7, 2014 4:00 PM, "Aaron Rosen"  wrote:

>
>
>
> On Thu, Aug 7, 2014 at 12:08 PM, Kevin Benton  wrote:
>
>> >I meant 'side stepping' why GBP allows for the comment you made
>> previous, "With the latter, a mapping driver could determine that
>> communication between these two hosts can be prevented by using an ACL on a
>> router or a switch, which doesn't violate the user's intent and buys a
>> performance improvement and works with ports that don't support security
>> groups.".
>>
>> >Neutron's current API is a logical abstraction and enforcement can be
>> done however one chooses to implement it. I'm really trying to understand
>> at the network level why GBP allows for these optimizations and performance
>> improvements you talked about.
>>
>> You absolutely cannot enforce security groups on a firewall/router that
>> sits at the boundary between networks. If you try, you are lying to the
>> end-user because it's not enforced at the port level. The current neutron
>> APIs force you to decide where things like that are implemented.
>>
>
> The current neutron APIs are just logical abstractions. Where and how
> things are actually enforced is 100% an implementation detail of a vendor's
> system.  Anyway, moving the discussion to the etherpad...
>
>>
>
>> The higher level abstractions give you the freedom to move the
>> enforcement by allowing the expression of broad connectivity requirements.
>>
> >Why are you bringing up logging connections?
>>
>> This was brought up as a feature proposal to FWaaS because this is a
>> basic firewall feature missing from OpenStack. However, this does not
>> preclude a FWaaS vendor from logging.
>>
>> >Personally, I think one could easily write up a very short document,
>> probably less than one page, with examples showing how the current
>> neutron API works even without much of a networking background.
>>
>> The difficulty of the API for establishing basic connectivity isn't
>> really the problem. It's when you have to compose a bunch of requirements
>> and make sure nothing is violating auditing and connectivity constraints
>> that it becomes a problem. We are arguing about the levels of abstraction.
>> You could also write up a short document explaining to novice programmers
>> how to use C to read and write database entries to an sqlite database, but
>> that doesn't mean it's the best level of abstraction for what the users are
>> trying to accomplish.
>>
>> I'll let someone else explain the current GBP API because I'm not working
>> on that. I'm just trying to convince you of the value of declarative
>> network configuration.
>>
>>
>> On Thu, Aug 7, 2014 at 12:02 PM, Aaron Rosen 
>> wrote:
>>
>>>
>>>
>>>
>>> On Thu, Aug 7, 2014 at 9:54 AM, Kevin Benton  wrote:
>>>
 You said you had no idea what group based policy was buying us so I
 tried to illustrate what the difference between declarative and imperative
 network configuration looks like. That's the major selling point of GBP so
 I'm not sure how that's 'side stepping' any points. It removes the need for
 the user to pick between implementation details like security
 groups/FWaaS/ACLs.

>>>
>>> I meant 'side stepping' why GBP allows for the comment you made
>>> previous, "With the latter, a mapping driver could determine that
>>> communication between these two hosts can be prevented by using an ACL on a
>>> router or a switch, which doesn't violate the user's intent and buys a
>>> performance improvement and works with ports that don't support security
>>> groups.".
>>>
>>> Neutron's current API is a logical abstraction and enforcement can be
>>> done however one chooses to implement it. I'm really trying to understand
>>> at the network level why GBP allows for these optimizations and performance
>>> improvements you talked about.
>>>
>>>
>>>
 >So are you saying that GBP allows someone to configure an
 application that at the end of the day is equivalent to
 networks/router/FWaaS rules without understanding networking concepts?

 It's one thing to understand the ports an application leverages and
 another to understand the differences between configuring VM firewalls,
 s

Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-08-07 Thread Luke Gorrie
On 8 August 2014 02:06, Michael Still  wrote:

> 1: I think that ultimately should live in infra as part of check, but
> I'd be ok with it starting as a third party if that delivers us
> something faster. I'd be happy enough to donate resources to get that
> going if we decide to go with this plan.
>

Can we cooperate somehow?

We are already working on bringing up a third-party CI covering QEMU 2.1
and libvirt 1.2.7. The intention of this CI is to test the software
configuration that we are recommending for NFV deployments (including the
vhost-user feature, which appeared in those releases), and to provide CI
cover for the code we are offering for Neutron.

Michele Paolino is working on this and the relevant nova/devstack changes.


[openstack-dev] [cinder] Bug#1231298 - size parameter for volume creation

2014-08-07 Thread Ganapathy, Sandhya
Hi,

This is to discuss Bug #1231298 - https://bugs.launchpad.net/cinder/+bug/1231298

Bug description : When one creates a volume from a snapshot or another volume, 
the size argument is calculated automatically. In the case of an image it needs 
to be specified though, for something larger than the image min_disk attribute. 
It would be nice to automatically get that size if it's not passed.

That is the behavior of the Cinder API.

The conclusion reached in this bug is that we need to modify the cinder
client to accept an optional size parameter (as Cinder's API allows) and
calculate the size automatically during volume creation from an image.
There is also an opinion that size should not be an optional parameter
during volume creation - does this mean Cinder's API should be changed to
make size a mandatory parameter?
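Whichever direction is taken, the size calculation itself is small. A Python sketch (the field names follow Glance image attributes, but the helper and its rounding policy are assumptions, not cinderclient code):

```python
import math

GiB = 1024 ** 3

def default_volume_size_gb(image):
    """Choose a volume size (in GiB) for create-from-image when the user
    omits the size: the image's virtual size rounded up to a whole GiB,
    never smaller than its min_disk attribute, and at least 1 GiB.

    `image` is a dict-like Glance image record.
    """
    virtual_size = image.get("virtual_size") or image.get("size") or 0
    size_gb = int(math.ceil(virtual_size / float(GiB)))
    return max(size_gb, int(image.get("min_disk") or 0), 1)

# A 3 GiB + 1 byte image with min_disk=2 needs a 4 GiB volume:
assert default_volume_size_gb({"size": 3 * GiB + 1, "min_disk": 2}) == 4
```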

Which direction should I take to fix this bug?

Thanks,
Sandhya.


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-07 Thread Robert Collins
On 8 August 2014 10:52, Yuriy Taraday  wrote:

> I don't dislike rebases because I sometimes use a bit longer version of it.
> I would be glad to avoid them because they destroy history that can help me
> later.

rebase doesn't destroy any history. gc destroys history.

See git reflog - you can recover all of your history in high fidelity
(and there are options to let you control just how deep the rabbit
hole goes).

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Kashyap Chamarthy
On Thu, Aug 07, 2014 at 03:56:04AM -0700, Jay Pipes wrote:
> On 08/07/2014 02:12 AM, Kashyap Chamarthy wrote:

[. . .]

> >
> >Excellent suggestion. I've wondered multiple times whether we could
> >dedicate a good chunk (or the whole) of a specific release to heads-down
> >bug fixing/stabilization. As has been stated elsewhere on this list:
> >there's no pressing need for a whole lot of new code submissions; rather,
> >we should focus on fixing issues that affect _existing_ users/operators.
> 
> There's a whole world of GBP/NFV/VPN/DVR/TLA folks that would beg to differ
> on that viewpoint. :)

Sure. New code submissions might be exciting, and people might not find it
an unalloyed joy to fix someone *else*'s bugs. People can differ, as long
as there's a clear indication of commitment to stand by their work when
bugs occur and to help investigate cross-project issues involving it -- to
me this shows that they care about the project long-term, and it earns more
karma. Not just throwing some half-assed code over the wall (not implying
they do) and going about their ways, while users/operators find out the
hard way that it's a pain in the neck to even set up, or so fragile that
you sneeze and it all falls apart.

I like Nikola's response[1] and the 'snippet' he posted, which sets the
expectations in crystal-clear language.

> That said, I entirely agree with you and wish efforts to stabilize would
> take precedence over feature work.


  [1] http://lists.openstack.org/pipermail/openstack-dev/2014-August/042299.html

-- 
/kashyap



Re: [openstack-dev] Proposal for instance-level snapshots in Nova

2014-08-07 Thread Preston L. Bannister
Did this ever go anywhere?

http://lists.openstack.org/pipermail/openstack-dev/2014-January/024315.html

I'm looking at what is needed to get backup working in OpenStack, and this
seems to be the most recent reference.


[openstack-dev] [oslo.db]A proposal for DB read/write separation

2014-08-07 Thread Li Ma
Getting a massive amount of information out of data storage for display is
where most of the activity happens in OpenStack. The two activities of
reading data and writing (creating, updating and deleting) data are
fundamentally different.

These two opposite database activities can be optimized by physically
separating the databases that service them. All the writes go to one set of
database servers, which then replicate the written data to the database
server(s) dedicated to servicing reads.

Currently, AFAIK, many OpenStack deployments in production take advantage
of a MySQL (including Percona or MariaDB) multi-master Galera cluster.
It is possible to design and implement a read/write separation scheme
for such a DB cluster.

Actually, OpenStack has a method for read scalability via defining
master_connection and slave_connection in configuration, but this method
lacks flexibility because the choice of master or slave is made in the
logical context (code). It's not transparent to the application developer.
As a result, it is not widely used across the OpenStack projects.

So, I'd like to propose a transparent read/write separation method
for oslo.db that every project can happily take advantage of
without any code modification.
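To make the idea concrete, here is a minimal Python sketch of such transparent routing (illustrative only -- this is not the oslo.db API, and the engines are faked):

```python
class RoutingSession:
    """Route writes to the master engine and reads to a replica.

    A session that has written stays pinned to the master so that it
    always reads its own writes, hiding replication lag from the caller.
    """

    def __init__(self, master, replica):
        self._master = master
        self._replica = replica
        self._pinned_to_master = False

    def execute(self, statement):
        if self._is_write(statement):
            self._pinned_to_master = True
        engine = self._master if self._pinned_to_master else self._replica
        return engine.execute(statement)

    @staticmethod
    def _is_write(statement):
        # Crude SQL classification, good enough for the sketch.
        return statement.lstrip().split(None, 1)[0].upper() in (
            "INSERT", "UPDATE", "DELETE")


class FakeEngine:
    """Stand-in for a real DB engine; just records what it ran."""

    def __init__(self, name):
        self.name = name
        self.log = []

    def execute(self, statement):
        self.log.append(statement)
        return self.name

master, replica = FakeEngine("master"), FakeEngine("replica")
session = RoutingSession(master, replica)
assert session.execute("SELECT * FROM instances") == "replica"
assert session.execute("UPDATE instances SET vm_state='active'") == "master"
# After a write the session is pinned, so it reads its own writes:
assert session.execute("SELECT * FROM instances") == "master"
```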

Moreover, I'd like to raise it on the mailing list in advance to
make sure it is acceptable for oslo.db.

I'd appreciate any comments.

br.
Li Ma




[openstack-dev] [Ironic] Multi-ironic-conductor issue

2014-08-07 Thread Jander lu
Hi, all

If I have more than one Ironic conductor, should each conductor have its
own PXE server and DHCP namespace, or do they share one centralized PXE
server or DHCP server? If they share one centralized PXE and DHCP server,
how do they support HA?


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-08-07 Thread Michael Still
It seems to me that the tension here is that there are groups who
would really like to use features in newer libvirts that we don't CI
on in the gate. Is it naive to think that a possible solution here is
to do the following:

 - revert the libvirt version_cap flag
 - instead implement a third party CI with the latest available
libvirt release [1]
 - document clearly in the release notes the versions of dependencies
that we tested against in a given release: hypervisor versions (gate
and third party), etc.

Michael

1: I think that ultimately should live in infra as part of check, but
I'd be ok with it starting as a third party if that delivers us
something faster. I'd be happy enough to donate resources to get that
going if we decide to go with this plan.

On Fri, Aug 8, 2014 at 12:38 AM, Matt Riedemann
 wrote:
>
>
> On 7/18/2014 2:55 AM, Daniel P. Berrange wrote:
>>
>> On Thu, Jul 17, 2014 at 12:13:13PM -0700, Johannes Erdfelt wrote:
>>>
>>> On Thu, Jul 17, 2014, Russell Bryant  wrote:

 On 07/17/2014 02:31 PM, Johannes Erdfelt wrote:
>
> It kind of helps. It's still implicit in that you need to look at what
> features are enabled at what version and determine if it is being
> tested.
>
> But the behavior is still broken since code is still getting merged
> that
> isn't tested. Saying that is by design doesn't help the fact that
> potentially broken code exists.


 Well, it may not be tested in our CI yet, but that doesn't mean it's not
 tested some other way, at least.
>>>
>>>
>>> I'm skeptical. Unless it's tested continuously, it'll likely break at
>>> some time.
>>>
>>> We seem to be selectively choosing the continuous part of CI. I'd
>>> understand if it was reluctantly because of immediate problems but
>>> this reads like it's acceptable long-term too.
>>>
 I think there are some good ideas in other parts of this thread to look
 at how we can more reguarly rev libvirt in the gate to mitigate this.

 There's also been work going on to get Fedora enabled in the gate, which
 is a distro that regularly carries a much more recent version of libvirt
 (among other things), so that's another angle that may help.
>>>
>>>
>>> That's an improvement, but I'm still not sure I understand what the
>>> workflow will be for developers.
>>
>>
>> That's exactly why we want to have the CI system using newer libvirt
>> than it does today. The patch to cap the version doesn't change what
>> is tested - it just avoids users hitting untested paths by default
>> so they're not exposed to any potential instability until we actually
>> get a more updated CI system
>>
>>> Do they need to now wait for Fedora to ship a new version of libvirt?
>>> Fedora is likely to help the problem because of how quickly it generally
>>> ships new packages and their release schedule but it would still hold
>>> back some features?
>>
>>
>> Fedora has an add-on repository ("virt-preview") which contains the
>> latest QEMU + libvirt RPMs for the current stable release - this lags
>> upstream by a matter of days, so there would be no appreciable delay
>> in getting access to the newest possible releases.
>>
> Also, this explanation doesn't answer my question about what happens
> when the gate finally gets around to actually testing those potentially
> broken code paths.


 I think we would just test out the bump and make sure it's working fine
 before it's enabled for every job.  That would keep potential breakage
 localized to people working on debugging/fixing it until it's ready to
 go.
>>>
>>>
>>> The downside is that new features for libvirt could be held back by
>>> needing to fix other unrelated features. This is certainly not a bigger
>>> problem than users potentially running untested code simply because they
>>> are on a newer version of libvirt.
>>>
>>> I understand we have an immediate problem and I see the short-term value
>>> in the libvirt version cap.
>>>
>>> I try to look at the long-term and unless it's clear to me that a
>>> solution is proposed to be short-term and there are some understood
>>> trade-offs then I'll question the long-term implications of it.
>>
>>
>> Once the CI system is regularly tracking upstream releases within a matter of
>> days, then the version cap is a total non-issue from a feature
>> availability POV. It is none the less useful in the long term, for
>> example,
>> if there were a problem we miss in testing, which a deployer then hits in
>> the field, the version cap would allow them to get their deployment to
>> avoid use of the newer libvirt feature, which could be a useful workaround
>> for them until a fix is available.
>>
>> Regards,
>> Daniel
>>
>
> FYI, there is a proposed revert of the libvirt version cap change mentioned
> previously in this thread [1].
>
> Just bringing it up again here since the discussion should happen in the ML
> rather than gerrit.
>
> [1] https://review.openst

Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Michael Still
On Thu, Aug 7, 2014 at 11:20 PM, Russell Bryant  wrote:
> On 08/07/2014 09:07 AM, Sean Dague wrote:
>> I think the difference is slot selection would just be Nova drivers. I
>> think there is an assumption in the old system that everyone in Nova
>> core wants to prioritize the blueprints. I think there are a bunch of
>> folks in Nova core that are happy having signaling from Nova drivers on
>> high priority things to review. (I know I'm in that camp.)
>>
>> Lacking that, we all have our own picking algorithms to hack away at the 500
>> open reviews. Which basically means it's a giant random queue.
>>
>> Having a few blueprints that *everyone* is looking at also has the
>> advantage that the context for the bits in question will tend to be
>> loaded into multiple people's heads at the same time, so is something
>> that's discussable.
>>
>> Will it fix the issue, not sure, but it's an idea.
>
> OK, got it.  So, success critically depends on nova-core being willing
> to take review direction and priority setting from nova-drivers.  That
> sort of assumption is part of why I think agile processes typically
> don't work in open source.  We don't have the ability to direct people
> with consistent and reliable results.
>
> I'm afraid if people doing the review are not directly involved in at
> least ACKing the selection and committing to review something, putting
> stuff in slots seems futile.

I think some of this discussion is because I haven't had a chance to
write a summary of the meetup yet for the public mailing list. That's
something I will try and do today.

We talked about having a regular discussion in our weekly meeting of
what reviews were strategic at a given point in time. In my mind if we
do the runway thing, then that list of reviews would be important bug
fixes and slot occupying features. I think an implied side effect of
the runway system is that nova-drivers would -2 blueprint reviews
which were not occupying a slot.

(If we start doing more -2's I think we will need to explore how to
not block on someone with -2's taking a vacation. Some sort of role
account perhaps).

I think at the moment nova is lost in the tactical instead of rising
above that to solve strategic problems. That's a big risk to the
project, because it's not how we handle the big issues our users
actually care about.

Michael

-- 
Rackspace Australia



Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-07 Thread Yuriy Taraday
On Fri, Aug 8, 2014 at 3:03 AM, Chris Friesen 
wrote:

> On 08/07/2014 04:52 PM, Yuriy Taraday wrote:
>
>> I hope you don't think that this thread was about rebases vs merges.
>> It's about keeping track of your changes without impact on review process.
>>
>
> But if you rebase, what is stopping you from keeping whatever private
> history you want and then rebase the desired changes onto the version that
> the current review tools are using?


That's almost what my proposal is about - allowing the developer to keep
private history and store uploaded changes separately.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-07 Thread Chris Friesen

On 08/07/2014 04:52 PM, Yuriy Taraday wrote:


I hope you don't think that this thread was about rebases vs merges.
It's about keeping track of your changes without impact on review process.


But if you rebase, what is stopping you from keeping whatever private 
history you want and then rebase the desired changes onto the version 
that the current review tools are using?


Chris




Re: [openstack-dev] [Heat] Passing a list of ResourceGroup's attributes back to its members

2014-08-07 Thread Zane Bitter

On 07/08/14 13:22, Tomas Sedovic wrote:

Hi all,

I have a ResourceGroup which wraps a custom resource defined in another
template:

  servers:
    type: OS::Heat::ResourceGroup
    properties:
      count: 10
      resource_def:
        type: my_custom_server
        properties:
          prop_1: "..."
          prop_2: "..."
          ...

And a corresponding provider template and environment file.

Now I can get, say, the list of IP addresses or any custom value of each
server from the ResourceGroup by using `{get_attr: [servers,
ip_address]}` and outputs defined in the provider template.

But I can't figure out how to pass that list back to each server in the
group.

This is something we use in TripleO for things like building a MySQL
cluster, where each node in the cluster (the ResourceGroup) needs the
addresses of all the other nodes.


Yeah, this is kind of the perpetual problem with clusters. I've been 
hoping that DNSaaS will show up in OpenStack soon and that that will be 
a way to fix this issue.


The other option is to have the cluster members discover each other 
somehow (mDNS?), but people seem loath to do that.



Right now, we have the servers ungrouped in the top-level template so we
can build this list manually. But if we move to ResourceGroups (or any
other scaling mechanism, I think), this is no longer possible.


So I believe the current solution is to abuse a Launch Config resource 
as a store for the data, and then later retrieve it somehow? Possibly 
you could do something along similar lines, but it's unclear how the 
'later retrieval' part would work... presumably it would have to involve 
something outside of Heat closing the loop :(



We can't pass the list to ResourceGroup's `resource_def` section because
that causes a circular dependency.

And I'm not aware of a way to attach a SoftwareConfig to a
ResourceGroup. SoftwareDeployment only allows attaching a config to a
single server.


Yeah, and that would be a tricky thing to implement well, because a 
resource group may not be a group of servers (but in many cases it may 
be a group of nested stacks that each contain one or more servers, and 
you'd want to be able to handle that too).



Is there a way to do this that I'm missing? And if there isn't, is this
something we could add to Heat? E.g. extending a SoftwareDeployment to
accept ResourceGroups or adding another resource for that purpose.

Thanks,
Tomas







Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-07 Thread Yuriy Taraday
On Thu, Aug 7, 2014 at 7:36 PM, Ben Nemec  wrote:

> On 08/06/2014 05:35 PM, Yuriy Taraday wrote:
> > On Wed, Aug 6, 2014 at 11:00 PM, Ben Nemec 
> wrote:
> >> You keep mentioning detached HEAD and reflog.  I have never had to deal
> >> with either when doing a rebase, so I think there's a disconnect here.
> >> The only time I see a detached HEAD is when I check out a change from
> >> Gerrit (and I immediately stick it in a local branch, so it's a
> >> transitive state), and the reflog is basically a safety net for when I
> >> horribly botch something, not a standard tool that I use on a daily
> basis.
> >>
> >
> > It usually takes some time for me to build trust in utility that does a
> lot
> > of different things at once while I need only one small piece of that.
> So I
> > usually do smth like:
> > $ git checkout HEAD~2
> > $ vim
> > $ git commit
> > $ git checkout mybranch
> > $ git rebase --onto HEAD@{1} HEAD~2
> > instead of almost the same workflow with interactive rebase.
>
> I'm sorry, but "I don't trust the well-tested, widely used tool that Git
> provides to make this easier so I'm going to reimplement essentially the
> same thing in a messier way myself" is a non-starter for me.  I'm not
> surprised you dislike rebases if you're doing this, but it's a solved
> problem.  Use git rebase -i.
>

I'm sorry, I must've misled you by using the word 'trust' in that sentence.
It's more like understanding. I like to understand how things work. I don't
like treating tools as black boxes. And I also don't like when a tool does a
lot of things at once with no way back. So yes, I decompose 'rebase -i' a
bit and get a slightly (1 command, really) longer workflow. But at least I
can stop at any point and think about whether I'm really finished with this
step. Sometimes interactive rebase works better for me than this, sometimes
it doesn't. It all depends on the situation.

I don't dislike rebases because I sometimes use a bit longer version of it.
I would be glad to avoid them because they destroy history that can help me
later.

I think I've said all I'm going to say on this.


I hope you don't think that this thread was about rebases vs merges. It's
about keeping track of your changes without impact on review process.

-- 

Kind regards, Yuriy.


[openstack-dev] [Ironic] [Infra] Devstack and Testing for ironic-python-agent``

2014-08-07 Thread Jay Faulkner
Hi all,


At the recent Ironic mid-cycle meetup, we got the first version of the 
ironic-python-agent (IPA) driver merged. There are a few reviews we need merged 
(and their dependencies) across a few other projects in order to begin testing 
it automatically. We would like to eventually gate IPA and Ironic with tempest 
testing similar to what the pxe driver does today.


For IPA to work in devstack (openstack-dev/devstack repo):

 - https://review.openstack.org/#/c/112095 Adds swift temp URL support to 
Devstack

 - https://review.openstack.org/#/c/108457 Adds IPA support to Devstack



Docs on running IPA in devstack (openstack/ironic repo):

 - https://review.openstack.org/#/c/112136/



For IPA to work in the devstack-gate environment (openstack-infra/config & 
openstack-infra/devstack-gate repos):

 - https://review.openstack.org/#/c/112143 Add IPA support to devstack-gate

 - https://review.openstack.org/#/c/112134 Consolidate and rename Ironic jobs

 - https://review.openstack.org/#/c/112693 Add check job for IPA + tempest


Once these are all merged, we'll have IPA testing via a nonvoting check job, 
using the IPA-CoreOS deploy ramdisk, in both the ironic and ironic-python-agent 
projects. This will be promoted to voting once proven stable.


However, this is only one of many possible IPA deploy ramdisk images. We're 
currently publishing a CoreOS ramdisk, but we also have an effort to create a 
ramdisk with diskimage-builder (https://review.openstack.org/#/c/110487/) , as 
well as plans for an ISO image (for use with things like iLo). As we gain 
additional images, we'd like to run those images through the same suite of 
tests prior to publishing them, so that images which would break IPA's gate 
wouldn't get published. The final state testing matrix should look something 
like this, with check and gate jobs in each project covering the variations 
unique to that project, and one representative test in each consuming
project's test pipeline.


IPA:

 - tempest runs against Ironic+agent_ssh with CoreOS ramdisk

 - tempest runs against Ironic+agent_ssh with DIB ramdisk

 - (other IPA tests)



IPA would then, as a post job, generate and publish the images, as we currently 
do with IPA-CoreOS ( 
http://tarballs.openstack.org/ironic-python-agent/coreos/ipa-coreos.tar.gz ). 
Because IPA would gate on tempest tests against each image, we'd avoid ever 
publishing a bad deploy ramdisk.


Ironic:

 - tempest runs against Ironic+agent_ssh with most suitable ramdisk (due to 
significantly decreased ram requirements, this will likely be an image created 
by DIB once it exists)

 - tempest runs against Ironic+pxe_ssh

 - (what ever else Ironic runs)



Nova and other integrated projects will continue to run a single job, using 
Ironic with its default deploy driver (currently pxe_ssh).





Using this testing matrix, we'll ensure that there is coverage of each 
cross-project dependency, without bloating each project's test matrix 
unnecessarily. If, for instance, a change in Nova passes the Ironic pxe_ssh job 
and lands, but then breaks the agent_ssh job and thus blocks Ironic's gate, 
this would indicate a layering violation between Ironic and its deploy drivers 
(from Nova's perspective, nothing should change between those drivers). 
Similarly, if IPA tests failed against the CoreOS image (due to Ironic OR Nova 
change), but the DIB image passed in both Ironic and Nova tests, then it's 
almost certainly an *IPA* bug.


Thanks so much for your time, and to the OpenStack Ironic community for
being welcoming to us as we have worked towards this alternate deploy
driver; we will keep working to improve it even further as Kilo opens.


--

Jay Faulkner


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-07 Thread Yuriy Taraday
On Thu, Aug 7, 2014 at 10:28 AM, Chris Friesen 
wrote:

> On 08/06/2014 05:41 PM, Zane Bitter wrote:
>
>> On 06/08/14 18:12, Yuriy Taraday wrote:
>>>
>>> Well, as per Git author, that's how you should do with not-CVS. You have
>>> cheap merges - use them instead of erasing parts of history.
>>>
>>
>> This is just not true.
>>
>> http://www.mail-archive.com/dri-devel@lists.sourceforge.net/msg39091.html
>>
>> Choice quotes from the author of Git:
>>
>> * 'People can (and probably should) rebase their _private_ trees'
>> * 'you can go wild on the "git rebase" thing'
>> * 'we use "git rebase" etc while we work on our problems.'
>> * '"git rebase" is not wrong.'
>>
>
> Also relevant:
>
> "...you must never pull into a branch that isn't already
> in good shape."
>
> "Don't merge upstream code at random points."
>
> "keep your own history clean"


And in the very same thread he says "I don't like how you always rebased
patches" and "none of these rules should be absolutely black-and-white".
But let's not get drawn into a discussion of what Linus said (or I'll have
to rewatch his ages-old talk at Google to get proper quotes).
In no way do I want to promote exposing private trees with all those
intermediate changes. And my proposal is not against rebasing (although we
could use the -R option for git-review more often, to publish what we've
tested and to let reviewers see diffs between patchsets). It is about
letting people keep the history of their work while still giving you a
crystal-clean change request series.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Eoghan Glynn


> On 08/07/2014 01:41 PM, Eoghan Glynn wrote:
> > My point was simply that we don't have direct control over the
> > contributors' activities
> 
> This is not correct and I've seen it repeated too often to let it go
> uncorrected: we (the OpenStack project as a whole) have a lot of control
> over contributors to OpenStack. There is a Technical Committee and a
> Board of Directors, corporate members and sponsors... all of these can
> do a lot to make things happen. For example, the Platinum members of the
> Foundation are required at the moment to have at least 'two full time
> equivalents' and I don't see why the board couldn't change that
> requirement, make it more specific.
> 
> OpenStack is not an amateurish project done by volunteers in their free
> time.  We have lots of leverage we can apply to get things done.

There was no suggestion of amateurish-ness, or even volunteerism,
in my post.

Simply a recognition of the reality that we are not operating in
a traditional command & control environment.

TBH I'm surprised such an assertion would be considered controversial.

But I'd be happy to hear how you envisage rate-limiting WIP would
play out in practice?

Cheers,
Eoghan



Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-07 Thread Brant Knudson
On Thu, Aug 7, 2014 at 12:54 PM, Kevin L. Mitchell <
kevin.mitch...@rackspace.com> wrote:

> On Thu, 2014-08-07 at 17:46 +0100, Matthew Booth wrote:
> > > In any case, the operative point is that CONF. must
> > always be
> > > evaluated inside run-time code, never at module load time.
> >
> > ...unless you call register_opts() safely, which is what I'm
> > proposing.
>
> No, calling register_opts() at a different point only fixes the import
> issue you originally complained about; it does not fix the problem that
> the configuration option is evaluated at the wrong time.  The example
> code you included in your original email evaluates the configuration
> option at module load time, BEFORE the configuration has been loaded,
> which means that the argument default will be the default of the
> configuration option, rather than the configured value of the
> configuration option.  Configuration options must be evaluated at
> RUN-TIME, after configuration is loaded; they must not be evaluated at
> LOAD-TIME, which is what your original code does.
> --
> Kevin L. Mitchell 
> Rackspace
>

We had this problem in Keystone[1]. There were some config parameters
passed to a function decorator (it was the cache timeout time). You'd
change the value in the config file and it would have no effect... the
default was still used. Luckily the cache decorator also took a function so
it was an easy fix, just pass `lambda: CONF.foo`. The mistaken code was
made possible because the config options were registered at import time.
Keystone now registers its config options at run-time so using CONF.foo at
import-time fails with an error that the option isn't registered.

[1] https://bugs.launchpad.net/keystone/+bug/1265670
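The pitfall and its fix can be sketched without oslo.config at all. The `Conf` stand-in and all names below are illustrative, not Keystone's actual code or oslo.config's real API:

```python
# Illustrative sketch of the import-time vs run-time pitfall described
# above; the Conf stand-in and names are not oslo.config's real API.

def cache(timeout):
    """Decorator factory: 'timeout' may be a value or a callable."""
    def deco(fn):
        def wrapper(*args, **kwargs):
            # Evaluate callables on every call, so configured values win.
            t = timeout() if callable(timeout) else timeout
            return fn(*args, **kwargs), t
        return wrapper
    return deco

class Conf:
    cache_time = 600            # registered default

CONF = Conf()

@cache(CONF.cache_time)         # BROKEN: value captured at import time
def lookup_broken(key):
    return key

@cache(lambda: CONF.cache_time) # FIXED: evaluated on every call
def lookup_fixed(key):
    return key

CONF.cache_time = 30            # simulate the config file being loaded

print(lookup_broken("x"))       # ('x', 600) -- stale default
print(lookup_fixed("x"))        # ('x', 30)  -- configured value
```

Registering options only at run-time, as Keystone now does, turns the broken form into an immediate error instead of a silently ignored config value.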

- Brant


Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Stefano Maffulli
On 08/07/2014 01:41 PM, Eoghan Glynn wrote:
> My point was simply that we don't have direct control over the
> contributors' activities

This is not correct and I've seen it repeated too often to let it go
uncorrected: we (the OpenStack project as a whole) have a lot of control
over contributors to OpenStack. There is a Technical Committee and a
Board of Directors, corporate members and sponsors... all of these can
do a lot to make things happen. For example, the Platinum members of the
Foundation are required at the moment to have at least 'two full time
equivalents' and I don't see why the board couldn't change that
requirement, make it more specific.

OpenStack is not an amateurish project done by volunteers in their free
time.  We have lots of leverage we can apply to get things done.

/stef



Re: [openstack-dev] introducing cyclops

2014-08-07 Thread Eoghan Glynn



> Dear All,
> 
> Let me use my first post to this list to introduce Cyclops and initiate a
> discussion towards possibility of this platform as a future incubated
> project in OpenStack.
> 
> We at Zurich university of Applied Sciences have a python project in open
> source (Apache 2 Licensing) that aims to provide a platform to do
> rating-charging-billing over ceilometer. We call it Cyclops (A Charging
> platform for OPenStack CLouds).
> 
> The initial proof of concept code can be accessed here:
> https://github.com/icclab/cyclops-web &
> https://github.com/icclab/cyclops-tmanager
> 
> Disclaimer: This is not the best code out there, but will be refined and
> documented properly very soon!
> 
> A demo video from really early days of the project is here:
> https://www.youtube.com/watch?v=ZIwwVxqCio0 and since this video was made,
> several bug fixes and features were added.
> 
> The idea presentation was done at Swiss Open Cloud Day at Bern and the talk
> slides can be accessed here:
> http://piyush-harsh.info/content/ocd-bern2014.pdf , and more recently the
> research paper on the idea was published in 2014 World Congress in Computer
> Science (Las Vegas), which can be accessed here:
> http://piyush-harsh.info/content/GCA2014-rcb.pdf
> 
> I was wondering, if our effort is something that OpenStack
> Ceilometer/Telemetry release team would be interested in?
> 
> I do understand that initially rating-charging-billing service may have been
> left out by choice as they would need to be tightly coupled with existing
> CRM/Billing systems, but Cyclops design (intended) is distributed, service
> oriented architecture with each component allowing for possible integration
> with external software via REST APIs. And therefore Cyclops by design is
> CRM/Billing platform agnostic. Although Cyclops PoC implementation does
> include a basic bill generation module.
> 
> We in our team are committed to this development effort and we will have
> resources (interns, students, researchers) work on features and improve the
> code-base for a foreseeable number of years to come.
> 
> Do you see a chance if our efforts could make in as an incubated project in
> OpenStack within Ceilometer?

Hi Piyush,

Thanks for bringing this up!

I should preface my remarks by setting out a little OpenStack
history, in terms of the original decision not to include the
rating and billing stages of the pipeline under the ambit of
the ceilometer project.

IIUC, the logic was that such rating/billing policies were very
likely to be:

  (a) commercially sensitive for competing cloud operators

and:

  (b) already built-out via existing custom/proprietary systems

The folks who were directly involved at the outset of ceilometer
can correct me if I've misrepresented the thinking that pertained
at the time.

While that logic seems to still apply, I would be happy to learn
more about the work you've done already on this, and would be
open to hearing arguments for and against. Are you planning to
attend the Kilo summit in Paris (Nov 3-7)? If so, it would be a
good opportunity to discuss further in person.

In the meantime, stackforge provides a low-bar-to-entry for
projects in the OpenStack ecosystem that may, or may not, end up
as incubated projects or as dependencies taken by graduated
projects. So you might consider moving your code there?

Cheers,
Eoghan


 
> I really would like to hear back from you, comments, suggestions, etc.
> 
> Kind regards,
> Piyush.
> ___
> Dr. Piyush Harsh, Ph.D.
> Researcher, InIT Cloud Computing Lab
> Zurich University of Applied Sciences (ZHAW)
> [Site] http://piyush-harsh.info
> [Research Lab] http://www.cloudcomp.ch/
> Fax: +41(0)58.935.7403 GPG Keyid: 9C5A8838
> 



Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-07 Thread Mohammad Banikazemi


Thierry Carrez  wrote on 08/07/2014 06:23:56 AM:

> From: Thierry Carrez 
> To: openstack-dev@lists.openstack.org
> Date: 08/07/2014 06:25 AM
> Subject: Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy
> and the way forward
>
> Armando M. wrote:
> > This thread is moving so fast I can't keep up!
> >
> > The fact that troubles me is that I am unable to grasp how we move
> > forward, which was the point of this thread to start with. It seems we
> > have 2 options:
> >
> > - We make GBP to merge as is, in the Neutron tree, with some minor
> > revision (e.g. naming?);
> > - We make GBP a stackforge project, that integrates with Neutron in some
> > shape or form;
> >
> > Another option, might be something in between, where GBP is in tree, but
> > in some sort of experimental staging area (even though I am not sure how
> > well baked this idea is).
> >
> > Now, as a community we all need make a decision; arguing about the fact
> > that the blueprint was approved is pointless.
> I agree with you: it is possible to change your mind on a topic and
> revisit past decisions.
> In past OpenStack history we did revert merged
> commits and remove existing functionality because we felt it wasn't that
> much of a great idea after all. Here we are talking about making the
> right decision *before* the final merging and shipping into a release,
> which is kind of an improvement. The spec system was supposed to help
> limit such cases, but it's not bullet-proof.
>
> In the end, if there is no consensus on that question within the Neutron
> project (and I hear both sides have good arguments), our governance
> gives the elected Neutron PTL the power to make the final call. If the
> disagreement is between projects (like if Nova disagreed with the
> Neutron decision), then the issue could be escalated to the TC.
>

It is good to know that the OpenStack governance provides a way to resolve
these issues but I really hope that we can reach a consensus.

Best,

Mohammad



> Regards,
>
> --
> Thierry Carrez (ttx)
>


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-07 Thread Aaron Rosen
On Thu, Aug 7, 2014 at 12:08 PM, Kevin Benton  wrote:

> >I mean't 'side stepping' why GBP allows for the comment you made
> previous, "With the latter, a mapping driver could determine that
> communication between these two hosts can be prevented by using an ACL on a
> router or a switch, which doesn't violate the user's intent and buys a
> performance improvement and works with ports that don't support security
> groups.".
>
> >Neutron's current API is a logical abstraction and enforcement can be
> done however one chooses to implement it. I'm really trying to understand
> at the network level why GBP allows for these optimizations and performance
> improvements you talked about.
>
> You absolutely cannot enforce security groups on a firewall/router that
> sits at the boundary between networks. If you try, you are lying to the
> end-user because it's not enforced at the port level. The current neutron
> APIs force you to decide where things like that are implemented.
>

The current Neutron APIs are just logical abstractions. Where and how
things are actually enforced is 100% an implementation detail of a vendor's
system.  Anyways, moving the discussion to the etherpad...

>

> The higher level abstractions give you the freedom to move the enforcement
> by allowing the expression of broad connectivity requirements.
>
>Why are you bringing up logging connections?
>
> This was brought up as a feature proposal to FWaaS because this is a basic
> firewall feature missing from OpenStack. However, this does not preclude a
> FWaaS vendor from logging.
>
> >Personally, I think one could easily write up a very short document
> probably less than one page with examples showing/exampling how the current
> neutron API works even without a much networking background.
>
> The difficulty of the API for establishing basic connectivity isn't really
> the problem. It's when you have to compose a bunch of requirements and make
> sure nothing is violating auditing and connectivity constraints that it
> becomes a problem. We are arguing about the levels of abstraction. You
> could also write up a short document explaining to novice programmers how
> to use C to read and write database entries to an sqlite database, but that
> doesn't mean it's the best level of abstraction for what the users are
> trying to accomplish.
>
> I'll let someone else explain the current GBP API because I'm not working
> on that. I'm just trying to convince you of the value of declarative
> network configuration.
>
>
> On Thu, Aug 7, 2014 at 12:02 PM, Aaron Rosen 
> wrote:
>
>>
>>
>>
>> On Thu, Aug 7, 2014 at 9:54 AM, Kevin Benton  wrote:
>>
>>> You said you had no idea what group based policy was buying us so I
>>> tried to illustrate what the difference between declarative and imperative
>>> network configuration looks like. That's the major selling point of GBP so
>>> I'm not sure how that's 'side stepping' any points. It removes the need for
>>> the user to pick between implementation details like security
>>> groups/FWaaS/ACLs.
>>>
>>
>> I mean't 'side stepping' why GBP allows for the comment you made
>> previous, "With the latter, a mapping driver could determine that
>> communication between these two hosts can be prevented by using an ACL on a
>> router or a switch, which doesn't violate the user's intent and buys a
>> performance improvement and works with ports that don't support security
>> groups.".
>>
>> Neutron's current API is a logical abstraction and enforcement can be
>> done however one chooses to implement it. I'm really trying to understand
>> at the network level why GBP allows for these optimizations and performance
>> improvements you talked about.
>>
>>
>>
>>> >So are you saying that GBP allows someone to be able to configure an
>>> application that at the end of the day is equivalent  to
>>> networks/router/FWaaS rules without understanding networking concepts?
>>>
>>> It's one thing to understand the ports an application leverages and
>>> another to understand the differences between configuring VM firewalls,
>>> security groups, FWaaS, and router ACLs.
>>>
>>
>> Sure, but how does group based policy solve this. Security Groups and
>> FWaaS are just different places of enforcement. Say I want different
>> security enforcement on my router than on my instances. One still needs to
>> know enough to tell group based policy this right?  They need to know
>> enough that there are different enforcement points? How is doing this with
>> Group based policy make it easier?
>>
>>
>>
>>> > I'm also curious how this GBP is really less error prone than the
>>> model we have today as it seems the user will basically have to tell
>>> neutron the same information about how he wants his networking to function.
>>>
>>> With GBP, the user just gives the desired end result (e.g. allow
>>> connectivity between endpoint groups via TCP port 22 with all connections
>>> logged). Without it, the user has to do the following:
>>>
>>
>> Why are you b
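The imperative-versus-declarative contrast argued back and forth in this thread can be sketched in code. This is purely illustrative: none of the class or method names below are real Neutron or GBP APIs, and the two drivers are hypothetical stand-ins for the mapping drivers discussed above.

```python
# Purely illustrative sketch of declarative intent vs enforcement choice;
# none of these names are real Neutron/GBP APIs.

class Intent:
    """What the user wants: allow traffic between two groups."""
    def __init__(self, src, dst, proto, port):
        self.src, self.dst, self.proto, self.port = src, dst, proto, port

# The user only states the desired end result...
intent = Intent("web", "db", "tcp", 22)

# ...and a mapping driver chooses the enforcement point.  Two drivers
# realize the same intent differently without any change to user input.
class SecurityGroupDriver:
    def render(self, i):
        return f"port-level SG rule: allow {i.proto}/{i.port} from {i.src} on {i.dst}"

class RouterAclDriver:
    def render(self, i):
        return f"boundary ACL: permit {i.proto}/{i.port} {i.src} -> {i.dst}"

for driver in (SecurityGroupDriver(), RouterAclDriver()):
    print(driver.render(intent))
```

With today's imperative APIs the user picks security groups, FWaaS, or router ACLs directly; in the declarative model that choice moves into the driver, which is the optimization freedom being claimed for GBP.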

Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-07 Thread Edgar Magana
That I understand it!
Thanks for the clarification.

Edgar

From: Ryan Moats <rmo...@us.ibm.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Thursday, August 7, 2014 at 2:45 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming


Edgar Magana <edgar.mag...@workday.com> wrote 
on 08/07/2014 04:37:39 PM:

> From: Edgar Magana <edgar.mag...@workday.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Date: 08/07/2014 04:40 PM
> Subject: Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming
>
> Ryan,
>
> COPS implies a common protocol to communicate with PEPs, which
> implies the same communication mechanism basically.
> So, you are implying that "endpoints" in GBP will use "different"
> protocol to communicate with "decisions" entities?

Nope, I'm saying that the members of groups are not *required* to do 
enforcement.
They *could* (based on the implementation), but calling them PEPs means they 
would *have* to.

Ryan


Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-07 Thread Ryan Moats


Edgar Magana  wrote on 08/07/2014 04:37:39 PM:

> From: Edgar Magana 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 08/07/2014 04:40 PM
> Subject: Re: [openstack-dev] [Neutron][policy] Group Based Policy -
Renaming
>
> Ryan,
>
> COPS implies a common protocol to communicate with PEPs, which
> implies the same communication mechanism basically.
> So, you are implying that “endpoints” in GBP will use “different”
> protocol to communicate with “decisions” entities?

Nope, I'm saying that the members of groups are not *required* to do
enforcement.
They *could* (based on the implementation), but calling them PEPs means
they would *have* to.

Ryan


Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-07 Thread Edgar Magana
Thanks for sharing this Sumit.
Again, my apologies for not attending the meeting; I just couldn’t.

It seems you had a good discussion about the naming and I do respect the
decision.

Cheers,

Edgar


On 8/7/14, 2:32 PM, "Sumit Naiksatam"  wrote:

>Ryan, point well taken. I am paraphrasing the discussion from today's
>GBP sub team meeting on the options considered and the eventual
>proposal for "policy-point" and "policy-group":
>
>18:36:50  so regarding the endpoint terminology
>18:36:53  any suggestions?
>18:36:56  ivar-lazzaro:  If you are expressing your intent of
>doing enforcement at both points you do care then.
>18:37:09  regXboi: Edgar Magana suggested using the IETF
>phrasing -- enforcement point
>18:37:31  i was thinking “edgar point” would be good.  and we
>won’t have to change our slides from EP.
>18:37:44  ivar-lazzaro:  would be great to see an example
>using the CLI how one sets something up that in GBP that does
>enforcement at the instance and router.
>18:37:44  mschoen ++
>18:37:55  rockyg: although enforcement point tends to
>be used in a slightly different context
>18:38:02  mscohen ++
>18:38:04  I was involved in the early IETF policy days, and
>I'm not a big from of ep
>18:38:04  mscohen: we dont want to overload the
>terminology
>18:38:13  regXboi: +1
>18:38:17  I’m not entirely sure “enforcement point” is the
>same as our usage of endpoint
>18:38:25  rkukura: exactly
>18:38:28  SumitNaiksatam: i am joking of course
>18:38:42  mscohen: :-)
>18:38:54  Yeah.  that's the problem with endpoint.  It's right
>for networking, but it already has another definition in
>virtualization world.
>18:38:54  how about network-endpoint (someone else
>suggested that)?
>18:38:55  I think enforcement point is more like the SG or
>FWaaS that is used to render the intent
>18:39:07  rkukura: agree
>18:39:09  so... let's hit the thesaurus
>18:39:16  Rkukara, agree
>18:39:38  I had always throught endpoint was the right word
>for both our usage and for keystone, with similar meanings, but
>different meta-levels
>18:40:01  rkukura: if we can find something different, let's
>consider it
>18:40:11  there is enough of a hill to climb
>18:40:35  how about terminus?
>18:40:52 * regXboi keeps reading synonyms
>18:41:06  network-endpoint?
>18:41:12  um... no
>18:41:27  I think that won't help
>18:41:58  policy-point/policy groups?
>18:42:07  group member?
>18:42:14  termination-point, gbp-id, policy point maybe
>18:42:18  sorry i dropped off again!
>18:42:23  I think member
>18:42:31  unless that's already used somewhere
>18:42:33  i was saying earlier, what about policy-point?
>18:42:36  #chair SumitNaiksatam
>18:42:37  Current chairs: SumitNaiksatam SumitNaiksatam_
>banix rkukura s3wong
>18:42:41  regXboi: Just “member” and “group”?
>18:42:44  s3wong: :-)
>18:43:04  SumitNaiksatam: so now either way works for you :-)
>18:43:09  rkurkura: too general I think...
>18:43:15  policy-provider, policy-consumer
>18:43:16  er rkukura ... sorry
>18:43:17  i still like endpoint better.
>18:43:23  bourn or bourne 1  (bɔːn)
>18:43:23 
>18:43:23  — n
>18:43:23  1.  a destination; goal
>18:43:23  2.  a boundary
>18:43:25  I think policy-point and policy-group
>18:43:27  yyywu: :-)
>18:43:34  Bourne-point?
>18:43:40  rockyg: :-)
>18:44:04  more in favor of policy-point and policy-group?
>18:44:36  i thnk LouisF suggested as well
>18:44:49  +1 to policy-point
>18:44:50  +1 to policy-point and policy-group
>18:44:55  +1
>18:44:56  SumitNaiksatam: +1 too
>18:45:07  +1
>18:45:08  FINALLY... YEAH
>18:45:18  okay so how about we float this in the ML?
>18:45:21  +1
>18:45:31  +1
>18:45:35  Yes... lets do that
>18:45:37  +1
>18:45:44  so that we dont end up picking up an
>overlapping terminology again
>18:45:55  who wants to do it? as in send to the ML?
>18:46:07 * SumitNaiksatam waiting to hand out an AI :-P
>18:46:16  regXboi: ?
>18:46:17  I can do it
>18:46:26  hmm?
>18:46:31  rms_13: ah you put your hand up first
>18:46:36 * regXboi apologies - bouncing between multiple IRC meetings
>18:46:47  policy-endpoint ?
>18:46:57  #action rms_13 to send “policy-point”
>“policy-group” suggestion to mailing list
>
>On Thu, Aug 7, 2014 at 2:18 PM, Ryan Moats  wrote:
>> Edgar-
>>
>> I can't speak for anyone else, but in my mind at least (and having been
>> involved in the work that led up to 3198),
>> the members of the groups being discussed here are not PEPs.   As 3198
>> states, being a PEP implies running COPS
>> and I don't see that as necessary for membership in GBP groups.
>>
>> Ryan Moats
>>
>> Edgar Magana  wrote on 08/07/2014 04:02:43 PM:
>>
>>> From: Edgar Magana 
>>
>>
>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>> 
>>> Date: 08/07/2014 04:03 PM
>>> Subject: Re: [openstack-dev] [Neutron][policy] Group Based Policy -
>>> Renaming
>>
>>>
>>> I am sorry that I could not attend the GBP meeting.
>>> Is there any reason why the IETF standard is not considered?
>>> http://tools.ietf.org/html/rfc3198
>>>
>>>

Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-07 Thread Edgar Magana
Ryan,

COPS implies a common protocol to communicate with PEPs, which implies the same 
communication mechanism basically.
So, you are implying that "endpoints" in GBP will use "different" protocol to 
communicate with "decisions" entities?

If that is the case... well, it sounds very complex for a simple initial GBP 
project. Then the discussion will be at a different level.

Edgar

From: Ryan Moats <rmo...@us.ibm.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Thursday, August 7, 2014 at 2:18 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming


Edgar-

I can't speak for anyone else, but in my mind at least (and having been 
involved in the work that led up to 3198),
the members of the groups being discussed here are not PEPs.   As 3198 states, 
being a PEP implies running COPS
and I don't see that as necessary for membership in GBP groups.

Ryan Moats

Edgar Magana <edgar.mag...@workday.com> wrote 
on 08/07/2014 04:02:43 PM:

> From: Edgar Magana <edgar.mag...@workday.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Date: 08/07/2014 04:03 PM
> Subject: Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming
>
> I am sorry that I could not attend the GBP meeting.
> Is there any reason why the IETF standard is not considered?
> http://tools.ietf.org/html/rfc3198
>
> I would like to understand the argument why we are creating new
> names instead of using the standard ones.
>
> Edgar
>
> From: Ronak Shah 
> <ronak.malav.s...@gmail.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Thursday, August 7, 2014 at 1:17 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming
>
> Hi,
> Following a very interesting and vocal thread on GBP for last couple
> of days and the GBP meeting today, GBP sub-team proposes following
> name changes to the resource.
>

> policy-point for endpoint
> policy-group for endpointgroup (epg)
>
> Please reply if you feel that it is not ok with reason and suggestion.
>
> I hope that it wont be another 150 messages thread :)
>
> Ronak


Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-07 Thread Sumit Naiksatam
Ryan, point well taken. I am paraphrasing the discussion from today's
GBP sub team meeting on the options considered and the eventual
proposal for "policy-point" and "policy-group":

18:36:50  so regarding the endpoint terminology
18:36:53  any suggestions?
18:36:56  ivar-lazzaro:  If you are expressing your intent of
doing enforcement at both points you do care then.
18:37:09  regXboi: Edgar Magana suggested using the IETF
phrasing -- enforcement point
18:37:31  i was thinking “edgar point” would be good.  and we
won’t have to change our slides from EP.
18:37:44  ivar-lazzaro:  would be great to see an example
using the CLI how one sets something up that in GBP that does
enforcement at the instance and router.
18:37:44  mschoen ++
18:37:55  rockyg: although enforcement point tends to
be used in a slightly different context
18:38:02  mscohen ++
18:38:04  I was involved in the early IETF policy days, and
I'm not a big from of ep
18:38:04  mscohen: we dont want to overload the terminology
18:38:13  regXboi: +1
18:38:17  I’m not entirely sure “enforcement point” is the
same as our usage of endpoint
18:38:25  rkukura: exactly
18:38:28  SumitNaiksatam: i am joking of course
18:38:42  mscohen: :-)
18:38:54  Yeah.  that's the problem with endpoint.  It's right
for networking, but it already has another definition in
virtualization world.
18:38:54  how about network-endpoint (someone else
suggested that)?
18:38:55  I think enforcement point is more like the SG or
FWaaS that is used to render the intent
18:39:07  rkukura: agree
18:39:09  so... let's hit the thesaurus
18:39:16  Rkukara, agree
18:39:38  I had always throught endpoint was the right word
for both our usage and for keystone, with similar meanings, but
different meta-levels
18:40:01  rkukura: if we can find something different, let's
consider it
18:40:11  there is enough of a hill to climb
18:40:35  how about terminus?
18:40:52 * regXboi keeps reading synonyms
18:41:06  network-endpoint?
18:41:12  um... no
18:41:27  I think that won't help
18:41:58  policy-point/policy groups?
18:42:07  group member?
18:42:14  termination-point, gbp-id, policy point maybe
18:42:18  sorry i dropped off again!
18:42:23  I think member
18:42:31  unless that's already used somewhere
18:42:33  i was saying earlier, what about policy-point?
18:42:36  #chair SumitNaiksatam
18:42:37  Current chairs: SumitNaiksatam SumitNaiksatam_
banix rkukura s3wong
18:42:41  regXboi: Just “member” and “group”?
18:42:44  s3wong: :-)
18:43:04  SumitNaiksatam: so now either way works for you :-)
18:43:09  rkurkura: too general I think...
18:43:15  policy-provider, policy-consumer
18:43:16  er rkukura ... sorry
18:43:17  i still like endpoint better.
18:43:23  bourn or bourne 1  (bɔːn)
18:43:23 
18:43:23  — n
18:43:23  1.  a destination; goal
18:43:23  2.  a boundary
18:43:25  I think policy-point and policy-group
18:43:27  yyywu: :-)
18:43:34  Bourne-point?
18:43:40  rockyg: :-)
18:44:04  more in favor of policy-point and policy-group?
18:44:36  i thnk LouisF suggested as well
18:44:49  +1 to policy-point
18:44:50  +1 to policy-point and policy-group
18:44:55  +1
18:44:56  SumitNaiksatam: +1 too
18:45:07  +1
18:45:08  FINALLY... YEAH
18:45:18  okay so how about we float this in the ML?
18:45:21  +1
18:45:31  +1
18:45:35  Yes... lets do that
18:45:37  +1
18:45:44  so that we dont end up picking up an
overlapping terminology again
18:45:55  who wants to do it? as in send to the ML?
18:46:07 * SumitNaiksatam waiting to hand out an AI :-P
18:46:16  regXboi: ?
18:46:17  I can do it
18:46:26  hmm?
18:46:31  rms_13: ah you put your hand up first
18:46:36 * regXboi apologies - bouncing between multiple IRC meetings
18:46:47  policy-endpoint ?
18:46:57  #action rms_13 to send “policy-point”
“policy-group” suggestion to mailing list


Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-07 Thread Ryan Moats
Edgar-

I can't speak for anyone else, but in my mind at least (and having been
involved in the work that led up to 3198),
the members of the groups being discussed here are not PEPs.   As 3198
states, being a PEP implies running COPS
and I don't see that as necessary for membership in GBP groups.

Ryan Moats

Edgar Magana  wrote on 08/07/2014 04:02:43 PM:

> From: Edgar Magana 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 08/07/2014 04:03 PM
> Subject: Re: [openstack-dev] [Neutron][policy] Group Based Policy -
Renaming
>
> I am sorry that I could not attend the GBP meeting.
> Is there any reason why the IETF standard is not considered?
> http://tools.ietf.org/html/rfc3198
>
> I would like to understand the argument why we are creating new
> names instead of using the standard ones.
>
> Edgar
>
> From: Ronak Shah 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<
> openstack-dev@lists.openstack.org>
> Date: Thursday, August 7, 2014 at 1:17 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming
>
> Hi,
> Following a very interesting and vocal thread on GBP for last couple
> of days and the GBP meeting today, GBP sub-team proposes following
> name changes to the resource.
>

> policy-point for endpoint
> policy-group for endpointgroup (epg)
>
> Please reply with reasons and suggestions if you feel this is not OK.
>
> I hope this won't be another 150-message thread :)
>
> Ronak
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Win The Enterprise Work Group Update

2014-08-07 Thread Anne Gentle
Hi Carol, thanks for the summary presentation. I listened in to the board
meeting for this portion. More below.


On Wed, Aug 6, 2014 at 4:55 PM, Barrett, Carol L 
wrote:

>  I want to provide the community an update on the Win The Enterprise work
> group that came together in a BoF session in Atlanta.
>
> The work group led a discussion with the OpenStack Board at their 7/22
> meeting on the findings of our analysis of Enterprise IT requirements gaps.
> A summary of the presentation and next steps can be found here:
> *https://drive.google.com/file/d/0BxtM4AiszlEySmJwMHpDTGFDZHc/edit?usp=sharing*
> 
>
> Based upon the analysis and discussion, the actions for the work group are:
>
>    1. Form a Deployment team to take on the Deployment oriented
>    requirements that came up from the different teams. This team will have
>    both Technical and Marketing members. *Please let me know if you’re
>    interested in joining.*
>
>    2. Form a Monitoring team to take on the Monitoring oriented
>    requirements that came up from the different teams. This team will have
>    both Technical and Marketing members. *Please let me know if you’re
>    interested in joining.*
>
>    3. For Technical gaps, we need to assess final accepted Juno blueprints
>    versus requirements and develop additional blueprints through community
>    participation and implementation support to bring into the Kilo Design
>    Summit.
>
>    4. For Documentation gaps, we need to work with either existing
>    documentation teams or the Marketing team to create.
>
>

Yes, I'd love to work with you on this. There are definitely marketing
deliverables that do not belong in the docs program, but there are also
docs that exist in the docs program already. Looks like the enterprise
group identified:

Security Guide http://docs.openstack.org/security-guide/content/
High Availability Guide
http://docs.openstack.org/high-availability-guide/content/
 Upgrades
http://docs.openstack.org/openstack-ops/content/ch_ops_upgrades.html

The newest is the Architecture Design Guide
http://docs.openstack.org/arch-design/content/ - just a few weeks old. I'd
like to get some technical reviewers to take a look at that guide. Ideally
we can repurpose that content for marketing deliverables or enhance it in
place.

I can go on and on, so what's the best way for me to work with you on
priorities and expectations?

Let me know - perhaps a phone call is best for starters.
Thanks,
Anne



>
>    5. For Marketing Perceptions, we need to create a content and
>    collateral plan with owners and execute.
>
>
> Our goals are:
>
>1. Prepare and intercept the Kilo Design Summit pre-planning and
>sessions in Paris with new BPs that implement the requirements
>2. Intercept Paris Summit Analyst and Press outreach plans with
>content addressing top perception issues
>3. Complete the needed documentation/collateral ahead of the Paris
>summit
>4. Target the Enterprise IT Strategy track in Paris on the key
>Enterprise IT requirements to address documentation gaps, and provide
>how-to info for deployments.
>
>
> *Call to Action:* Please let me know if you want to be involved in any of
> the work group activities. Lots of opportunities for you to help advance
> OpenStack adoption in this segment!
>
> If you have any questions or want more info, pls get in touch.
> Carol Barrett
> Intel Corp
> +1 503 712 7623
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-07 Thread Ben Nemec
LGTM.  Plenty of things I could add to your list, but they're all
post-import. :-)

-Ben

On 08/07/2014 01:58 PM, Yuriy Taraday wrote:
> Hello, oslo cores.
> 
> I've finished polishing up oslo.concurrency repo at [0] - please take a
> look at it. I used my new version of graduate.sh [1] to generate it, so
> history looks a bit different from what you might be used to.
> 
> I've made as little changes as possible, so there're still some steps left
> that should be done after new repo is created:
> - fix PEP8 errors H405 and E126;
> - use strutils from oslo.utils;
> - remove eventlet dependency (along with random sleeps), but proper testing
> with eventlet should remain;
> - fix for bug [2] should be applied from [3] (although it needs some
> improvements);
> - oh, there's really no limit for this...
> 
> I'll finalize and publish relevant change request to openstack-infra/config
> soon.
> 
> Looking forward to any feedback!
> 
> [0] https://github.com/YorikSar/oslo.concurrency
> [1] https://review.openstack.org/109779
> [2] https://bugs.launchpad.net/oslo/+bug/1327946
> [3] https://review.openstack.org/108954
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
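
For anyone not familiar with those two pep8 codes (as I understand them): H405, from hacking, flags multi-line docstrings whose one-line summary is not separated from the body by a blank line, and E126 flags over-indented continuation lines. A toy illustration of the fixed style — illustrative only, not code from the repo:

```python
def acquire_lock(name, timeout=None):
    """Acquire the named lock (toy example).

    H405 wants the one-line summary above separated from the rest
    of a multi-line docstring by a blank line, exactly as here.
    """
    # E126 is about continuation-line indentation; a plain visual
    # indent aligned with the opening paren keeps pep8 quiet:
    message = ("acquiring lock {name} with timeout "
               "{timeout}").format(name=name, timeout=timeout)
    return message


print(acquire_lock("external", timeout=5))
```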




[openstack-dev] [Sahara] Swift trust authentication, status and concerns

2014-08-07 Thread Michael McCune
hi Sahara folks,

This serves as a detailed status update for the Swift trust authentication 
spec[1], and to bring up concerns about integration for the Juno cycle.

So far I have pushed a few reviews that start to lay the groundwork for the 
infrastructure needed to complete this blueprint. I have tried to keep the 
changes as low impact as possible so as not to create incompatible commits. I 
will continue this for as long as makes sense, but we are approaching the point 
at which disruptive changes will be introduced.

Currently, I am working on delegating and revoking trusts for job executions. 
The next steps will be to finish the periodic updater that will distribute 
authentication tokens to cluster instances. After this I plan to start 
integrating the job binaries to use the authentication tokens as this will all 
be contained within Sahara.
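
For readers following along: the delegation step above boils down to creating a Keystone v3 trust, which is a small JSON payload against the Identity OS-TRUST API. This is an illustrative sketch only — the user/project IDs are made up, and the exact field set should be checked against the Identity v3 spec:

```python
def build_trust_request(trustor_id, trustee_id, project_id, roles,
                        impersonation=True, expires_at=None):
    """Build the body for POST /v3/OS-TRUST/trusts.

    The proxy user (trustee) can then authenticate on behalf of the
    job owner (trustor) for Swift access, and the trust can be
    revoked when the job execution finishes.
    """
    trust = {
        "trustor_user_id": trustor_id,
        "trustee_user_id": trustee_id,
        "project_id": project_id,
        "impersonation": impersonation,
        # roles are delegated by name; the trustor must hold them
        "roles": [{"name": name} for name in roles],
    }
    if expires_at is not None:
        trust["expires_at"] = expires_at
    return {"trust": trust}


# Hypothetical IDs, purely for illustration:
body = build_trust_request("job-owner-id", "sahara-proxy-id",
                           "project-id", ["Member"])
```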

Once these pieces are done I will focus on the Swift-Hadoop component and 
finalize the workflow creation to support the new Swift references. I will hold 
these changes until we are ready to switch to this new style of authentication 
as this will be disruptive to our current deployments. I would like to get some 
assistance understanding the Swift-Hadoop component, any guidance would be 
greatly appreciated.

That's the status update, I'm confident that over the next few weeks much of 
this will be implemented and getting ready for review.

I do have concerns around how we will integrate and release this update. Once 
the trust authentication is in place we will be changing the way Swift 
information is distributed to the cluster instances. This means that existing 
VM images will need to be updated with the new Swift-Hadoop component. We will 
need to create new public images for all plugins that use Hadoop and Swift. We 
will also need to update the publicly available versions of the Swift-Hadoop 
component to ensure that sahara-image-elements continues to work.

We will also need to upgrade the gate testing machines to incorporate these 
changes and most likely I will need to be able to run these tests on a local 
cluster I can control before I push them for review. I am soliciting any advice 
about how I could run the gate tests from my local machine or cluster.

For the new Swift-Hadoop component I propose that we bump the version to 2.0 to 
indicate the incompatibility between it and the 1.0 version.

regards,
mike


[1]: 
https://blueprints.launchpad.net/sahara/+spec/edp-swift-trust-authentication



Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-07 Thread Edgar Magana
I am sorry that I could not attend the GBP meeting.
Is there any reason why the IETF standard is not considered?
http://tools.ietf.org/html/rfc3198

I would like to understand the argument why we are creating new names instead 
of using the standard ones.

Edgar

From: Ronak Shah mailto:ronak.malav.s...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, August 7, 2014 at 1:17 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

Hi,
Following a very interesting and vocal thread on GBP for last couple of days 
and the GBP meeting today, GBP sub-team proposes following name changes to the 
resource.


policy-point for endpoint
policy-group for endpointgroup (epg)

Please reply with reasons and suggestions if you feel this is not OK.

I hope this won't be another 150-message thread :)

Ronak


Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-07 Thread Andrew Mann
Can you include the definition/description of what each is here as well?  I
think there was a description in the 100+ thread of doom, but I don't want
to go back there :)


On Thu, Aug 7, 2014 at 3:17 PM, Ronak Shah 
wrote:

> Hi,
> Following a very interesting and vocal thread on GBP for last couple of
> days and the GBP meeting today, GBP sub-team proposes following name
> changes to the resource.
>
>
> policy-point for endpoint
> policy-group for endpointgroup (epg)
>
> Please reply with reasons and suggestions if you feel this is not OK.
>
> I hope this won't be another 150-message thread :)
>
> Ronak
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Andrew Mann
DivvyCloud Inc.
www.divvycloud.com


Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Eoghan Glynn

> > If we try to limit the number of WIP slots, then surely aspiring
> > contributors will simply work around that restriction by preparing
> > the code they're interested in on their own private branches, or
> > in their github forks?
> >
> > OK, some pragmatic contributors will adjust their priorities to
> > align with the available slots. And some companies employing
> > large numbers of contributors will enforce policies to align
> > their developers' effort with the gatekeepers' priorities.
> >
> > But I suspect we'd also have a good number who would take the
> > risk that their code never lands and work on it anyway. Given
> > that such efforts would really be flying beneath the radar and
> > may never see the light of day, that would seem like true waste
> > to me.
> 
> Is that a problem? 

Well I guess it wouldn't be, if we're willing to tolerate waste.

But IIUC the motivation behind applying the ideas of kanban
is to minimize waste piling up at bottlenecks in the pipeline.

My point was simply that we don't have direct control over the
contributors' activities, so limiting WIP slots wouldn't cut
out the waste; rather, it would force it underground.

This seems worse to me because either:

 (a) lots of good ideas end up being lost, as a critical mass
 of other contributors don't get to see them

and/or:

 (b) contributors figure out ways to by-pass the rate-limiting
 on gerrit and share their code in other ways

Just a thought ...

Cheers,
Eoghan


> If such developers are going to work on their pet
> project anyway, it's really up to the core team whether or not they
> think it makes sense to merge the changes upstream.
> 
> If the core team doesn't think they're worth merging (given the
> constraints on reviewer/approver time) then so be it.  At that point
> either we accept that we're going to leave possible contributions by the
> wayside or else we increase the core team (and infrastructure, and other
> strategic resources)  to be able to handle the load.
> 
> Chris
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



[openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-07 Thread Ronak Shah
Hi,
Following a very interesting and vocal thread on GBP for last couple of
days and the GBP meeting today, GBP sub-team proposes following name
changes to the resource.


policy-point for endpoint
policy-group for endpointgroup (epg)

Please reply with reasons and suggestions if you feel this is not OK.

I hope this won't be another 150-message thread :)

Ronak


Re: [openstack-dev] [Octavia] Weekly meetings resuming + agenda

2014-08-07 Thread Doug Wiegley
Personally, I prefer IRC for general meeting stuff, with separate
breakouts to voice for topics that warrant it.

Doug


On 8/7/14, 2:28 AM, "Stephen Balukoff"  wrote:

>Hi Brandon,
>
>
>I don't think we've set a specific date to make the transition to IRC
>meetings. Is there a particular urgency about this that we should be
>aware of?
>
>
>Stephen
>
>
>
>On Wed, Aug 6, 2014 at 7:58 PM, Brandon Logan
> wrote:
>
>When is the plan to move the meeting to IRC?
>
>On Wed, 2014-08-06 at 15:30 -0700, Stephen Balukoff wrote:
>> Action items from today's Octavia meeting:
>>
>>
>> 1. We're going to hold off for a couple days on merging the
>> constitution and preliminary road map to give people (and in
>> particular Ebay) a chance to review and comment.
>> 2. Stephen is going to try to get Octavia v0.5 design docs into gerrit
>> review by the end of the week, or early next week at the latest.
>>
>> 3. If those with specific networking concerns could codify this and/or
>> figure out a way to write these down and share with the list, that
>> would be great. This is going to be important to ensure that our
>> "operator-grade load balancer" solution can actually meet the needs of
>> the operators developing it.
>>
>> Thanks,
>>
>> Stephen
>>
>>
>>
>>
>>
>>
>>
>>
>> On Tue, Aug 5, 2014 at 2:34 PM, Stephen Balukoff
>>  wrote:
>> Hello!
>>
>>
>> We plan on resuming weekly meetings to discuss things related
>> to the Octavia project starting tomorrow: August 6th at
>> 13:00PDT (20:00UTC). In order to facilitate high-bandwidth
>> discussion as we bootstrap the project, we have decided to
>> hold these meetings via webex, with the plan to eventually
>> transition to IRC. Please contact me directly if you would
>> like to get in on the webex.
>>
>>
>> Tomorrow's meeting agenda is currently as follows:
>>
>>
>> * Discuss Octavia constitution and project direction documents
>> currently under gerrit review:
>> https://review.openstack.org/#/c/110563/
>>
>>
>>
>> * Discuss reviews of design proposals currently under gerrit
>> review:
>> https://review.openstack.org/#/c/111440/
>> https://review.openstack.org/#/c/111445/
>>
>>
>> * Discuss operator network topology requirements based on data
>> currently being collected by HP, Rackspace and Blue Box.
>> (Other operators are certainly welcome to collect and share
>> their data as well! I'm looking at you, Ebay. ;) )
>>
>>
>> Please feel free to respond with additional agenda items!
>>
>>
>> Stephen
>>
>>
>> --
>> Stephen Balukoff
>> Blue Box Group, LLC
>> (800)613-4305 x807 
>>
>>
>>
>>
>> --
>> Stephen Balukoff
>> Blue Box Group, LLC
>> (800)613-4305 x807 
>
>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> 
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
>
>
>
>
>-- 
>Stephen Balukoff 
>Blue Box Group, LLC
>(800)613-4305 x807 




Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-07 Thread Yuriy Taraday
On Thu, Aug 7, 2014 at 10:58 PM, Yuriy Taraday  wrote:

> Hello, oslo cores.
>
> I've finished polishing up oslo.concurrency repo at [0] - please take a
> look at it. I used my new version of graduate.sh [1] to generate it, so
> history looks a bit different from what you might be used to.
>
> I've made as little changes as possible, so there're still some steps left
> that should be done after new repo is created:
> - fix PEP8 errors H405 and E126;
> - use strutils from oslo.utils;
> - remove eventlet dependency (along with random sleeps), but proper
> testing with eventlet should remain;
> - fix for bug [2] should be applied from [3] (although it needs some
> improvements);
> - oh, there's really no limit for this...
>
> I'll finalize and publish relevant change request to
> openstack-infra/config soon.
>

Here it is: https://review.openstack.org/112666

Looking forward to any feedback!
>
> [0] https://github.com/YorikSar/oslo.concurrency
> [1] https://review.openstack.org/109779
> [2] https://bugs.launchpad.net/oslo/+bug/1327946
>  [3] https://review.openstack.org/108954
>
> --
>
> Kind regards, Yuriy.
>



-- 

Kind regards, Yuriy.


Re: [openstack-dev] [Octavia] Weekly meetings resuming + agenda

2014-08-07 Thread Brandon Logan
Those are definitely other big reasons, and probably the reason it is
planned to move to IRC in the future, no matter what.  I was just
wondering how soon, if soon at all.

On Thu, 2014-08-07 at 12:35 -0700, Stefano Maffulli wrote:
> On Thu 07 Aug 2014 12:12:26 PM PDT, Brandon Logan wrote:
> > It's just my own preference.  Others like webex/hangouts because it can
> > be easier to talk about topics than in IRC, but with this many people
> > and the latency delays, it can become quite cumbersome.  Plus, it makes
> > it easier for meeting notes.  I'll deal with it while the majority
> > really prefer it.
> 
> Most of all, if you're interested in including people whose primary 
> language is not English, IRC (or text-based communication) is a lot 
> more accessible than voice.
> 
> Also, skimming through written logs from IRC is a lot easier/faster 
> than listening to audio recordings for those that couldn't join in real 
> time.
> 
> /stef



Re: [openstack-dev] [Octavia] Weekly meetings resuming + agenda

2014-08-07 Thread Stefano Maffulli
On Thu 07 Aug 2014 12:12:26 PM PDT, Brandon Logan wrote:
> It's just my own preference.  Others like webex/hangouts because it can
> be easier to talk about topics than in IRC, but with this many people
> and the latency delays, it can become quite cumbersome.  Plus, it makes
> it easier for meeting notes.  I'll deal with it while the majority
> really prefer it.

Most of all, if you're interested in including people whose primary 
language is not English, IRC (or text-based communication) is a lot 
more accessible than voice.

Also, skimming through written logs from IRC is a lot easier/faster 
than listening to audio recordings for those that couldn't join in real 
time.

/stef



Re: [openstack-dev] [devstack] Core team proposals

2014-08-07 Thread Sean Dague
On 08/07/2014 02:09 PM, Dean Troyer wrote:
> I want to nominate Ian Wienand (IRC: ianw) to the DevStack core team.
>  Ian has been a consistent contributor and reviewer for some time now.
>  He also manages the Red Hat CI that runs tests on Fedora, RHEL and
> CentOS so those platforms have been a particular point of interest for
> him.  Ian has also been active in the config and devstack-gate projects
> among others.
> 
> Reviews: https://review.openstack.org/#/q/reviewer:%22Ian+Wienand+%22,n,z
> 
> Stackalytics:
> http://stackalytics.com/?user_id=iwienand&metric=marks&module=devstack&release=all
> 
> I also want to (finally?) remove long-standing core team members Vish
> Ishaya and Jesse Andrews, who between them were responsible for
> instigating the whole 'build a stack script' back in the day.
> 
> Please respond in the usual manner, +1 or concerns.
> 
> Thanks
> dt

+1, happy to have Ian on the team.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Doug Hellmann

On Aug 6, 2014, at 5:10 PM, Michael Still  wrote:

> On Wed, Aug 6, 2014 at 2:03 AM, Thierry Carrez  wrote:
> 
>> We seem to be unable to address some key issues in the software we
>> produce, and part of it is due to strategic contributors (and core
>> reviewers) being overwhelmed just trying to stay afloat of what's
>> happening. For such projects, is it time for a pause ? Is it time to
>> define key cycle goals and defer everything else ?
> 
> The nova team has been thinking about these issues recently too --
> especially at our mid cycle meetup last week. We are drawing similar
> conclusions to be honest.
> 
> Two nova cores were going to go away and write up a proposal for how
> nova could handle a more focussed attempt to land code in Kilo, but
> they haven't had a chance to do that yet. To keep this conversation
> rolling, here's a quick summary of what they proposed:
> 
> - we rate limit the total number of blueprints under code review at
> any one time to a fixed number of "slots". I secretly prefer the term
> "runway", so I am going to use that for the rest of this email. A
> suggested initial number of runways was proposed at ten.
> 
> - the development process would be much like juno for a blueprint --
> you propose a spec, get it approved, write some code, and then you
> request a runway to land the code in. Depending on your relative
> priority compared to other code attempting to land, you queue until
> traffic control assigns you a runway.
> 
> - code occupying a runway gets nova core review attention, with the
> expectation of fast iteration. If we find a blueprint has stalled in a
> runway, it is removed and put back onto the queue based on its
> priority (you don't get punished for being bumped).
> 
> This proposal is limiting the number of simultaneous proposals a core
> needs to track, not the total number landed. The expectation is that
> the time taken on a runway is short, and then someone else will occupy
> it. Its mostly about focus -- instead of doing 100 core reviews on 100
> patches so they never land, trying to do those reviews on the 10
> patches so they all land.
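
(Aside: the mechanics Michael describes — a fixed number of runways, a priority queue of waiting blueprints, and re-queuing a stalled blueprint without penalty — amount to a small scheduler. A toy sketch, with entirely hypothetical names, just to make the proposal concrete:)

```python
import heapq


class Runways:
    """Toy model of the proposed review 'runways' (illustrative only)."""

    def __init__(self, slots=10):
        self.slots = slots
        self.active = set()
        self.queue = []  # min-heap of (priority, blueprint); lower = sooner

    def request(self, blueprint, priority):
        """A blueprint with approved spec and code asks for a runway."""
        heapq.heappush(self.queue, (priority, blueprint))
        self._assign()

    def bump(self, blueprint, priority):
        """A stalled blueprint is re-queued, keeping its priority."""
        self.active.discard(blueprint)
        heapq.heappush(self.queue, (priority, blueprint))
        self._assign()

    def _assign(self):
        # Fill free runways with the highest-priority waiting blueprints.
        while self.queue and len(self.active) < self.slots:
            _, bp = heapq.heappop(self.queue)
            self.active.add(bp)
```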

I’ve been trying to highlight “review priorities” each week in the Oslo 
meeting, with moderate success. Our load is a lot lower than nova’s, but so is 
our review team. Perhaps having a more explicit cap on new feature work like 
this would work better.

I’m looking forward to seeing what you come up with as an approach.

Doug

> 
> We also talked about tweaking the ratio of "tech debt" runways vs
> 'feature" runways. So, perhaps every second release is focussed on
> burning down tech debt and stability, whilst the others are focussed
> on adding features. I would suggest if we do such a thing, Kilo should
> be a "stability" release.
> 
> Michael
> 
> -- 
> Rackspace Australia
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Octavia] Weekly meetings resuming + agenda

2014-08-07 Thread Brandon Logan
It's just my own preference.  Others like webex/hangouts because it can
be easier to talk about topics than in IRC, but with this many people
and the latency delays, it can become quite cumbersome.  Plus, it makes
it easier for meeting notes.  I'll deal with it while the majority
really prefer it.

Thanks,
Brandon

On Thu, 2014-08-07 at 01:28 -0700, Stephen Balukoff wrote:
> Hi Brandon,
> 
> 
> I don't think we've set a specific date to make the transition to IRC
> meetings. Is there a particular urgency about this that we should be
> aware of?
> 
> 
> Stephen
> 
> 
> On Wed, Aug 6, 2014 at 7:58 PM, Brandon Logan
>  wrote:
> When is the plan to move the meeting to IRC?
> 
> On Wed, 2014-08-06 at 15:30 -0700, Stephen Balukoff wrote:
> > Action items from today's Octavia meeting:
> >
> >
> > 1. We're going to hold off for a couple days on merging the
> > constitution and preliminary road map to give people (and in
> > particular Ebay) a chance to review and comment.
> > 2. Stephen is going to try to get Octavia v0.5 design docs
> into gerrit
> > review by the end of the week, or early next week at the
> latest.
> >
> > 3. If those with specific networking concerns could codify
> this and/or
> > figure out a way to write these down and share with the
> list, that
> > would be great. This is going to be important to ensure that
> our
> > "operator-grade load balancer" solution can actually meet
> the needs of
> > the operators developing it.
> >
> > Thanks,
> >
> > Stephen
> >
> >
> >
> >
> >
> >
> >
> >
> > On Tue, Aug 5, 2014 at 2:34 PM, Stephen Balukoff
> >  wrote:
> > Hello!
> >
> > We plan on resuming weekly meetings to discuss things related
> > to the Octavia project starting tomorrow: August 6th at
> > 13:00PDT (20:00UTC). In order to facilitate high-bandwidth
> > discussion as we bootstrap the project, we have decided to
> > hold these meetings via webex, with the plan to eventually
> > transition to IRC. Please contact me directly if you would
> > like to get in on the webex.
> >
> > Tomorrow's meeting agenda is currently as follows:
> >
> > * Discuss Octavia constitution and project direction documents
> > currently under gerrit review:
> > https://review.openstack.org/#/c/110563/
> >
> > * Discuss reviews of design proposals currently under gerrit
> > review:
> > https://review.openstack.org/#/c/111440/
> > https://review.openstack.org/#/c/111445/
> >
> > * Discuss operator network topology requirements based on data
> > currently being collected by HP, Rackspace and Blue Box.
> > (Other operators are certainly welcome to collect and share
> > their data as well! I'm looking at you, Ebay. ;) )
> >
> > Please feel free to respond with additional agenda items!
> >
> > Stephen
> >
> > --
> > Stephen Balukoff
> > Blue Box Group, LLC
> > (800)613-4305 x807
> >
> >
> > --
> > Stephen Balukoff
> > Blue Box Group, LLC
> > (800)613-4305 x807
>
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> 
> -- 
> Stephen Balukoff 
> Blue Box Group, LLC 
> (800)613-4305 x807
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-07 Thread Kevin Benton
>I meant 'side stepping' why GBP allows for the comment you made previously,
"With the latter, a mapping driver could determine that communication
between these two hosts can be prevented by using an ACL on a router or a
switch, which doesn't violate the user's intent and buys a performance
improvement and works with ports that don't support security groups.".

>Neutron's current API is a logical abstraction and enforcement can be done
however one chooses to implement it. I'm really trying to understand at the
network level why GBP allows for these optimizations and performance
improvements you talked about.

You absolutely cannot enforce security groups on a firewall/router that
sits at the boundary between networks. If you try, you are lying to the
end-user because it's not enforced at the port level. The current neutron
APIs force you to decide where things like that are implemented. The higher
level abstractions give you the freedom to move the enforcement by allowing
the expression of broad connectivity requirements.

>Why are you bringing up logging connections?

This was brought up as a feature proposal to FWaaS because this is a basic
firewall feature missing from OpenStack. However, this does not preclude a
FWaaS vendor from logging.

>Personally, I think one could easily write up a very short document,
probably less than one page, with examples showing/explaining how the current
neutron API works, even without much of a networking background.

The difficulty of the API for establishing basic connectivity isn't really
the problem. It's when you have to compose a bunch of requirements and make
sure nothing is violating auditing and connectivity constraints that it
becomes a problem. We are arguing about the levels of abstraction. You
could also write up a short document explaining to novice programmers how
to use C to read and write database entries to an sqlite database, but that
doesn't mean it's the best level of abstraction for what the users are
trying to accomplish.

I'll let someone else explain the current GBP API because I'm not working
on that. I'm just trying to convince you of the value of declarative
network configuration.


On Thu, Aug 7, 2014 at 12:02 PM, Aaron Rosen  wrote:

>
>
>
> On Thu, Aug 7, 2014 at 9:54 AM, Kevin Benton  wrote:
>
>> You said you had no idea what group based policy was buying us so I tried
>> to illustrate what the difference between declarative and imperative
>> network configuration looks like. That's the major selling point of GBP so
>> I'm not sure how that's 'side stepping' any points. It removes the need for
>> the user to pick between implementation details like security
>> groups/FWaaS/ACLs.
>>
>
> I meant 'side-stepping' why GBP allows for the comment you made previously,
> "With the latter, a mapping driver could determine that communication
> between these two hosts can be prevented by using an ACL on a router or a
> switch, which doesn't violate the user's intent and buys a performance
> improvement and works with ports that don't support security groups.".
>
> Neutron's current API is a logical abstraction and enforcement can be done
> however one chooses to implement it. I'm really trying to understand at the
> network level why GBP allows for these optimizations and performance
> improvements you talked about.
>
>
>
>> >So are you saying that GBP allows someone to be able to configure an
>> application that at the end of the day is equivalent  to
>> networks/router/FWaaS rules without understanding networking concepts?
>>
>> It's one thing to understand the ports an application leverages and
>> another to understand the differences between configuring VM firewalls,
>> security groups, FWaaS, and router ACLs.
>>
>
> Sure, but how does group based policy solve this? Security Groups and
> FWaaS are just different places of enforcement. Say I want different
> security enforcement on my router than on my instances. One still needs to
> know enough to tell group based policy this, right? They need to know
> enough that there are different enforcement points. How does doing this
> with group based policy make it easier?
>
>
>
>> > I'm also curious how this GBP is really less error prone than the
>> model we have today as it seems the user will basically have to tell
>> neutron the same information about how he wants his networking to function.
>>
>> With GBP, the user just gives the desired end result (e.g. allow
>> connectivity between endpoint groups via TCP port 22 with all connections
>> logged). Without it, the user has to do the following:
>>
>
> Why are you bringing up logging connections? Neutron has no concept of
> this at all today in its code base. Is logging something related to GBP?
>
>>
>>1. create a network/subnet for each endpoint group
>>2. allow all traffic on the security groups since the logging would
>>need to be accomplished with FWaaS
>>3. create an FWaaS instance
>>4. attach the FWaaS to both networks
>>
>> Today FWaaS

Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-07 Thread Doug Hellmann

On Aug 7, 2014, at 12:39 PM, Kevin L. Mitchell  
wrote:

> On Thu, 2014-08-07 at 17:27 +0100, Matthew Booth wrote:
>> On 07/08/14 16:27, Kevin L. Mitchell wrote:
>>> On Thu, 2014-08-07 at 12:15 +0100, Matthew Booth wrote:
 A (the?) solution is to register_opts() in foo before importing any
 modules which might also use oslo.config.
>>> 
>>> Actually, I disagree.  The real problem here is the definition of
>>> bar_func().  The default value of the parameter "arg" will likely always
>>> be the default value of foo_opt, rather than the configured value,
>>> because "CONF.foo_opt" will be evaluated at module load time.  The way
>>> bar_func() should be defined would be:
>>> 
>>>def bar_func(arg=None):
>>>if not arg:
>>>arg = CONF.foo_opt
>>>…
>>> 
>>> That ensures that arg will be the configured value, and should also
>>> solve the import conflict.
>> 
>> That's different behaviour, because you can no longer pass arg=None. The
>> fix isn't to change the behaviour of the code.
> 
> Well, the point is that the code as written is incorrect.  And if 'None'
> is an input you want to allow, then use an incantation like:
> 
>_unset = object()
> 
>def bar_func(arg=_unset):
>if arg is _unset:
>arg = CONF.foo_opt
>…
> 
> In any case, the operative point is that CONF.<option> must always be
> evaluated inside run-time code, never at module load time.

It would be even better to take the extra step of registering the option at 
runtime, at the point it is about to be used, by calling register_opt() inside 
bar_func() instead of when bar is imported. That avoids import-order concerns, 
and reinforces the idea that options should be declared local to the code that 
uses them and their values should be passed to other code, rather than having 
two modules tightly bound together through a global configuration value.
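A self-contained sketch of that pattern (a tiny stand-in Conf class replaces oslo.config so the example runs on its own; with the real library the equivalent calls are CONF.register_opts() with cfg.StrOpt objects, and the option name here is made up):

```python
class Conf(object):
    """Minimal stand-in for oslo.config's CONF object."""
    def __init__(self):
        self._opts = {}

    def register_opts(self, opts):
        # Like oslo.config, registering the same option twice is harmless.
        for name, default in opts:
            self._opts.setdefault(name, default)

    def __getattr__(self, name):
        try:
            return self._opts[name]
        except KeyError:
            raise AttributeError(name)

CONF = Conf()

# Hypothetical option, declared next to the code that uses it.
FOO_OPTS = [('foo_opt', 'foo-default')]

def bar_func(arg=None):
    # Register at the point of use: no import-order dependency, and the
    # option's value is read at run time, never at module load time.
    CONF.register_opts(FOO_OPTS)
    if arg is None:
        arg = CONF.foo_opt
    return arg

print(bar_func())         # 'foo-default'
print(bar_func('given'))  # 'given'
```

Callers that need the value elsewhere receive it as an argument rather than reading the option themselves, which keeps the coupling between modules explicit.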

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] oslo.concurrency repo review

2014-08-07 Thread Yuriy Taraday
Hello, oslo cores.

I've finished polishing up the oslo.concurrency repo at [0] - please take a
look at it. I used my new version of graduate.sh [1] to generate it, so the
history looks a bit different from what you might be used to.

I've made as few changes as possible, so there are still some steps left
that should be done after the new repo is created:
- fix PEP8 errors H405 and E126;
- use strutils from oslo.utils;
- remove eventlet dependency (along with random sleeps), but proper testing
with eventlet should remain;
- fix for bug [2] should be applied from [3] (although it needs some
improvements);
- oh, there's really no limit for this...

I'll finalize and publish the relevant change request to
openstack-infra/config soon.

Looking forward to any feedback!

[0] https://github.com/YorikSar/oslo.concurrency
[1] https://review.openstack.org/109779
[2] https://bugs.launchpad.net/oslo/+bug/1327946
[3] https://review.openstack.org/108954

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Chris Friesen

On 08/07/2014 12:32 PM, Eoghan Glynn wrote:


If we try to limit the number of WIP slots, then surely aspiring
contributors will simply work around that restriction by preparing
the code they're interested in on their own private branches, or
in their github forks?

OK, some pragmatic contributors will adjust their priorities to
align with the available slots. And some companies employing
large numbers of contributors will enforce policies to align
their developers' effort with the gatekeepers' priorities.

But I suspect we'd also have a good number who would take the
risk that their code never lands and work on it anyway. Given
that such efforts would really be flying beneath the radar and
may never see the light of day, that would seem like true waste
to me.


Is that a problem?  If such developers are going to work on their pet 
project anyway, it's really up to the core team whether or not they 
think it makes sense to merge the changes upstream.


If the core team doesn't think they're worth merging (given the 
constraints on reviewer/approver time) then so be it.  At that point 
either we accept that we're going to leave possible contributions by the 
wayside or else we increase the core team (and infrastructure, and other 
strategic resources)  to be able to handle the load.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Backports for 2014.1.2

2014-08-07 Thread Sergey Lukjanov
Hey sahara folks,

I'm going to push the 2014.1.2 tag to the stable/icehouse branch next week,
so please propose backports before the weekend and ping us to
backport any sensitive fixes.

Thank you!
-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting minutes Aug 7

2014-08-07 Thread Sergey Lukjanov
Thanks to everyone who joined the Sahara meeting.

Here are the logs from the meeting:

http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-08-07-18.02.html
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-08-07-18.02.log.html

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Eoghan Glynn


> Multidisciplinary training rules! As an architect with field experience
> building roads, sidewalks, roofs, city planning (and training in lean
> manufacturing and services) I think I can have a say ;)
> 
> > You're not really introducing a successful Kanban here, you're just
> > clarifying that there should be a set number of workstations.
> 
> Right, and to clarify I'm really thinking kanban here, expanding on the
> few lines Mikal used to explain the 'slots' concept.
> 
> > Our current system is like a gigantic open space with hundreds of
> > half-finished pieces, and a dozen workers keep on going from one to
> > another with no strong pattern. The proposed system is to limit the
> > number of half-finished pieces fighting for the workers attention at any
> > given time, by setting a clear number of workstations.
> 
> Correct, and I think we should add a limit to the amount of WIP, too. So
> we have a visible limit to people, workstations and Work In Progress.
> This way we can *immediately*, at any given time, identify problems.

If there's a trend in the replies from folks with experience
of running manufacturing or construction pipelines out in the
wild, it seems to be:

  extend the reach of the back-pressure further up the funnel 

This makes logical sense, but IMO simply doesn't apply in our
case, given the lack of direct command & control over the stuff
that contributors actually want to work on.

If we try to limit the number of WIP slots, then surely aspiring
contributors will simply work around that restriction by preparing
the code they're interested in on their own private branches, or
in their github forks?

OK, some pragmatic contributors will adjust their priorities to
align with the available slots. And some companies employing
large numbers of contributors will enforce policies to align
their developers' effort with the gatekeepers' priorities.

But I suspect we'd also have a good number who would take the
risk that their code never lands and work on it anyway. Given
that such efforts would really be flying beneath the radar and
may never see the light of day, that would seem like true waste
to me.

I don't have a good solution, just wanted to point out that
aspect.

Cheers,
Eoghan

> > A true Kanban would be an interface between developers and reviewers,
> > where reviewers define what type of change they have to review to
> > complete production objectives, *and* developers would strive to produce
> > enough to keep the kanban above the red line, but not too much (which
> > would be piling up waste).
> 
> Exactly what I am aiming at: reducing waste, which we already have but
> which nobody (or only a few people, at different times) sees. By switching
> to a pull 'Just In Time' mode we'd see waste accumulate much earlier than
> we do now.
> 
> > Without that discipline, Kanbans are useless. Unless the developers
> > adapt what they work on based on release objectives, you don't really
> > reduce waste/inventory at all, it just piles up waiting for available
> > "runway slots". As I said in my original email, the main issue here is
> > the imbalance between too many people proposing changes and not enough
> > people caring about the project itself enough to be trusted with core
> > reviewers rights.
> 
> I agree with you. Right now we're accumulating waste in the form of code
> proposals (raw pieces that need to be processed) but reviewers and core
> reviewers' attention span is limited (the number of 'workstations' is
> finite but we don't have such limit exposed) and nobody sees the
> accumulation of backlog until it's very late, at the end of the release
> cycle.
> 
> A lot of the complaints I hear, and the worsening time to merge patches,
> seem to indicate that we're over capacity and didn't notice.
> 
> > The only way to be truly pull-based is
> > to define a set of production objectives and have those objectives
> > trickle up to the developers so that they don't work on something else.
> 
> Yeah, don't we try to do that with blueprints/specs and priority? But we
> don't set a limit, it's almost a 'free for all' send your patches in and
> someone will evaluate them. Except there is a limit to what we can produce.
> 
> I think fundamentally we need to admit that there are 24 hours in a day
> and that core reviewers have to sleep, sometimes. There is a finite
> amount of patches that can be processed in a given time interval.
> 
> It's about finding a way to keep producing OpenStack at the highest
> speed possible, keeping quality, listening to 'downstream' first.
> 
> > The solution is about setting release cycle goals and strongly
> > communicating that everything out of those goals is clearly priority 2.
> 
> I don't think there is any 'proposal' just yet, only a half-baked idea
> thrown out there by the nova team during a meeting and fluffed up by me
> on the list. Still only a half-baked idea.
> 
> I realized this is a digression from the original thread though. I'll
> talk to Russell and Nikola

Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Joe Gordon
On Tue, Aug 5, 2014 at 9:03 AM, Thierry Carrez 
wrote:

> Hi everyone,
>
> With the incredible growth of OpenStack, our development community is
> facing complex challenges. How we handle those might determine the
> ultimate success or failure of OpenStack.
>
> With this cycle we hit new limits in our processes, tools and cultural
> setup. This resulted in new limiting factors on our overall velocity,
> which is frustrating for developers. This resulted in the burnout of key
> firefighting resources. This resulted in tension between people who try
> to get specific work done and people who try to keep a handle on the big
> picture.
>
> It all boils down to an imbalance between strategic and tactical
> contributions. At the beginning of this project, we had a strong inner
> group of people dedicated to fixing all loose ends. Then a lot of
> companies got interested in OpenStack and there was a surge in tactical,
> short-term contributions. We put on a call for more resources to be
> dedicated to strategic contributions like critical bugfixing,
> vulnerability management, QA, infrastructure... and that call was
> answered by a lot of companies that are now key members of the OpenStack
> Foundation, and all was fine again. But OpenStack contributors kept on
> growing, and we grew the narrowly-focused population way faster than the
> cross-project population.


> At the same time, we kept on adding new projects to incubation and to
> the integrated release, which is great... but the new developers you get
> on board with this are much more likely to be tactical than strategic
> contributors. This also contributed to the imbalance. The penalty for
> that imbalance is twofold: we don't have enough resources available to
> solve old, known OpenStack-wide issues; but we also don't have enough
> resources to identify and fix new issues.
>
> We have several efforts under way, like calling for new strategic
> contributors, driving towards in-project functional testing, making
> solving rare issues a more attractive endeavor, or hiring resources
> directly at the Foundation level to help address those. But there is a
> topic we haven't raised yet: should we concentrate on fixing what is
> currently in the integrated release rather than adding new projects ?
>

TL;DR: Our development model is having growing pains. Until we sort out
those growing pains, adding more projects spreads us too thin.

In addition to the issues mentioned above, with the scale of OpenStack
today we have many major cross-project issues to address and no good place
to discuss them.


>
> We seem to be unable to address some key issues in the software we
> produce, and part of it is due to strategic contributors (and core
> reviewers) being overwhelmed just trying to stay afloat of what's
> happening. For such projects, is it time for a pause ? Is it time to
> define key cycle goals and defer everything else ?
>


I really like this idea. As Michael and others alluded to above, we are
attempting to set cycle goals for Kilo in Nova, but I think it is worth
doing for all of OpenStack. We would like to make a list of key goals
before the summit so that we can plan our summit sessions around the goals.
At a really high level, one way to look at this is: in Kilo we need to pay
down our technical debt.

The slots/runway idea is somewhat separate from defining key cycle goals;
we can approve blueprints based on key cycle goals without doing slots.
But with so many concurrent blueprints up for review at any given time,
the review teams are doing a lot of multitasking, and humans are not very
good at multitasking. Hopefully slots can help address this issue and
allow us to actually merge more blueprints in a given cycle.


>
> On the integrated release side, "more projects" means stretching our
> limited strategic resources more. Is it time for the Technical Committee
> to more aggressively define what is "in" and what is "out" ? If we go
> through such a redefinition, shall we push currently-integrated projects
> that fail to match that definition out of the "integrated release" inner
> circle ?
>
> The TC discussion on what the integrated release should or should not
> include has always been informally going on. Some people would like to
> strictly limit to end-user-facing projects. Some others suggest that
> "OpenStack" should just be about integrating/exposing/scaling smart
> functionality that lives in specialized external projects, rather than
> trying to outsmart those by writing our own implementation. Some others
> are advocates of carefully moving up the stack, and to resist from
> further addressing IaaS+ services until we "complete" the pure IaaS
> space in a satisfactory manner. Some others would like to build a
> roadmap based on AWS services. Some others would just add anything that
> fits the incubation/integration requirements.


> On one side this is a long-term discussion, but on the other we also
> need to make quick decis

[openstack-dev] [devstack] Core team proposals

2014-08-07 Thread Dean Troyer
I want to nominate Ian Wienand (IRC: ianw) to the DevStack core team.  Ian
has been a consistent contributor and reviewer for some time now.  He also
manages the Red Hat CI that runs tests on Fedora, RHEL and CentOS so those
platforms have been a particular point of interest for him.  Ian has also
been active in the config and devstack-gate projects among others.

Reviews: https://review.openstack.org/#/q/reviewer:%22Ian+Wienand+%22,n,z

Stackalytics:
http://stackalytics.com/?user_id=iwienand&metric=marks&module=devstack&release=all

I also want to (finally?) remove long-standing core team members Vish
Ishaya and Jesse Andrews, who between them were responsible for instigating
the whole 'build a stack script' back in the day.

Please respond in the usual manner, +1 or concerns.

Thanks
dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Exceptional approval request for Cisco Driver Blueprint

2014-08-07 Thread Jay Faulkner
Hey,

I agree with Dmitry that this spec has a huge scope. If you resubmitted one 
with only the power interface, that could be considered for an exception.

A few specific reasons:

1) Auto-enrollment -- should probably be held off at the moment
 - This is something that was talked about extensively at the mid-cycle meetup 
and will be a topic of much debate in Ironic for Kilo. Whatever comes out of that 
debate, if it ends up being considered within scope, would be what your spec 
would want to integrate with.
 - I'd suggest you come in IRC, say hello, and work with us as we go into kilo 
figuring out if auto-enrollment belongs in Ironic and if so, how your hardware 
could integrate with that system.

2) Power driver
 - If you split this out into another spec and resubmitted, it'd be a small 
enough scope to be considered. Just as a note, though: Ironic has very 
specific priorities for Juno, the top of which is getting graduated. This means 
some new features have fallen by the wayside in favor of graduation requirements.

Thanks,
Jay

From: Dmitry Tantsur 
Sent: Thursday, August 07, 2014 4:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic] Exceptional approval request for Cisco 
Driver Blueprint

Hi!

I didn't read the spec thoroughly, but I'm concerned by its huge scope.
It's actually several specs squashed into one (and not too detailed). My
vote is to split it into a chain of specs (at least 3: power driver,
discovery, other configurations) and seek exceptions separately.
Actually, I'm +1 on making an exception for the power driver, but -0 on the
others, until I see separate specs for them.

Dmitry.

On Thu, 2014-08-07 at 09:30 +0530, GopiKrishna Saripuri wrote:
> Hi,
>
>
> I've submitted the Ironic Cisco driver blueprint after the proposal freeze
> date. This driver is critical for Cisco and a few customers to test as
> part of their private cloud expansion. The driver implementation is
> ready, along with unit tests. I will submit the code for review once the
> blueprint is accepted.
>
>
> The Blueprint review link: https://review.openstack.org/#/c/110217/
>
>
> Please let me know if it's possible to include this in the Juno release.
>
>
>
> Regards
> GopiKrishna S

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-07 Thread Aaron Rosen
On Thu, Aug 7, 2014 at 9:54 AM, Kevin Benton  wrote:

> You said you had no idea what group based policy was buying us so I tried
> to illustrate what the difference between declarative and imperative
> network configuration looks like. That's the major selling point of GBP so
> I'm not sure how that's 'side stepping' any points. It removes the need for
> the user to pick between implementation details like security
> groups/FWaaS/ACLs.
>

I meant 'side-stepping' why GBP allows for the comment you made previously,
"With the latter, a mapping driver could determine that communication
between these two hosts can be prevented by using an ACL on a router or a
switch, which doesn't violate the user's intent and buys a performance
improvement and works with ports that don't support security groups.".

Neutron's current API is a logical abstraction and enforcement can be done
however one chooses to implement it. I'm really trying to understand at the
network level why GBP allows for these optimizations and performance
improvements you talked about.



> >So are you saying that GBP allows someone to be able to configure an
> application that at the end of the day is equivalent  to
> networks/router/FWaaS rules without understanding networking concepts?
>
> It's one thing to understand the ports an application leverages and
> another to understand the differences between configuring VM firewalls,
> security groups, FWaaS, and router ACLs.
>

Sure, but how does group based policy solve this? Security Groups and FWaaS
are just different places of enforcement. Say I want different security
enforcement on my router than on my instances. One still needs to know
enough to tell group based policy this, right? They need to know enough
that there are different enforcement points. How does doing this with group
based policy make it easier?



> > I'm also curious how this GBP is really less error prone than the model
> we have today as it seems the user will basically have to tell neutron the
> same information about how he wants his networking to function.
>
> With GBP, the user just gives the desired end result (e.g. allow
> connectivity between endpoint groups via TCP port 22 with all connections
> logged). Without it, the user has to do the following:
>

Why are you bringing up logging connections? Neutron has no concept of this
at all today in its code base. Is logging something related to GBP?

>
>1. create a network/subnet for each endpoint group
>2. allow all traffic on the security groups since the logging would
>need to be accomplished with FWaaS
>3. create an FWaaS instance
>4. attach the FWaaS to both networks
>
> Today the FWaaS API is still incomplete, as there is no real point of
enforcement in its API (though it really seems that it should just be
router ports); it's just global on the router right now.


>
>1. add an FWaaS policy and the FWaaS rules to allow the correct traffic
>
> I'd more or less agree with these steps. Would you mind also giving the
steps involved in group based policy so we can compare?



> The declarative approach is less error prone because the user can give
> neutron the desired state of connectivity rather than a mentally compiled
> set of instructions describing how to configure a bunch of individual
> network components. How well do you think someone will handle the latter
> approach that got all of their networking knowledge from one college course
> 5 years ago?
>

Personally, I think one could easily write up a very short document,
probably less than one page, with examples showing/explaining how the current
neutron API works, even without much of a networking background. The funny
thing is I've been trying to completely understand the proposed group
policy API for a little while now and I'm still having trouble. It seems like
we're taking abstractions that are quite well known/understood and changing
them to a different model that requires one to know about this new
terminology:


Endpoint (EP): An L2/L3 addressable entity.
Endpoint Group (EPG): A collection of endpoints.
Contract: It defines how the application services provided by an EPG can be
accessed. In effect it specifies how an EPG communicates with other EPGs. A
Contract consists of Policy Rules.
Policy Rule: These are individual rules used to define the communication
criteria between EPGs. Each rule contains a Filter, Classifier, and Action.
Classifier: Characterizes the traffic that a particular Policy Rule acts
on. Corresponding action is taken on traffic that satisfies this
classification criteria.
Action: The action that is taken for a matching Policy Rule defined in a
Contract.
Filter: Provides a way to tag a Policy Rule with Capability and Role labels.
Capability: It is a Policy Label that defines what part of a Contract a
particular EPG provides.
Role: It is a Policy Label that defines what part of a Contract an EPG
wants to consume.
Contract Scope: An EPG conveys its intent to provide or consume a Co
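The glossary above maps onto a small object model; a hypothetical Python sketch (illustrative only — the class and field names mirror the terminology, not the actual GBP API):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Classifier:
    protocol: str   # e.g. 'tcp'
    port: int       # e.g. 22

@dataclass
class PolicyRule:
    classifier: Classifier
    action: str     # e.g. 'allow', 'redirect', 'log'

@dataclass
class Contract:
    name: str
    rules: List[PolicyRule] = field(default_factory=list)

@dataclass
class EndpointGroup:
    name: str
    provides: List[Contract] = field(default_factory=list)
    consumes: List[Contract] = field(default_factory=list)

# "web" provides SSH access and "admin" consumes it; where that intent
# is enforced (port security groups, a router ACL, ...) is left to the
# mapping driver rather than stated by the user.
ssh = Contract('ssh', [PolicyRule(Classifier('tcp', 22), 'allow')])
web = EndpointGroup('web', provides=[ssh])
admin = EndpointGroup('admin', consumes=[ssh])
```

Note that nothing in this model names an enforcement point — that is exactly the degree of freedom the declarative approach is claiming.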

Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-07 Thread Kevin L. Mitchell
On Thu, 2014-08-07 at 17:41 +0100, Matthew Booth wrote:
> ... or arg is an object which defines __nonzero__(), or defines
> __getattr__() and then explodes because of the unexpected lookup of a
> __nonzero__ attribute. Or it's False (no quotes when printed by the
> debugger), but has a unicode type and therefore evaluates to True[1].

If you're passing such exotic objects as parameters that could
potentially be drawn from configuration instead, maybe that code needs
to be refactored a bit :)

> However, if you want to compare a value with None and write 'foo is
> None' it will always do exactly what you expect, regardless what you
> pass to it. I think it's also nicer to the reviewer and the
> maintainer,
> who then don't need to go looking for context to check if anything
> invalid might be passed in.

In the vast majority of cases, however, we use a value that evaluates to
False to indicate "use the default", where "default" may be drawn from
configuration.  Yes, there are cases where we must treat, say, 0 as
distinct from None, but when we don't need to, we should keep the code
as simple as possible.  After all, I doubt anyone would seriously
suggest that we must always use something like the "_unset" sentinel,
even when None has no special meaning…
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-07 Thread Kevin L. Mitchell
On Thu, 2014-08-07 at 17:46 +0100, Matthew Booth wrote:
> > In any case, the operative point is that CONF.<option> must always be
> > evaluated inside run-time code, never at module load time.
> 
> ...unless you call register_opts() safely, which is what I'm
> proposing.

No, calling register_opts() at a different point only fixes the import
issue you originally complained about; it does not fix the problem that
the configuration option is evaluated at the wrong time.  The example
code you included in your original email evaluates the configuration
option at module load time, BEFORE the configuration has been loaded,
which means that the argument default will be the default of the
configuration option, rather than the configured value of the
configuration option.  Configuration options must be evaluated at
RUN-TIME, after configuration is loaded; they must not be evaluated at
LOAD-TIME, which is what your original code does.
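A runnable illustration of the two evaluation times (a plain object stands in for oslo.config's CONF, and the option name is made up; the timing behaviour is the same with the real library):

```python
class FakeConf(object):
    """Stand-in for oslo.config's CONF; holds one hypothetical option."""
    foo_opt = 'default-value'

CONF = FakeConf()

# LOAD-TIME evaluation: CONF.foo_opt is read once, when 'def' executes,
# so the default is frozen before any configuration is loaded.
def bar_broken(arg=CONF.foo_opt):
    return arg

# RUN-TIME evaluation: the sentinel defers the lookup to each call.
_unset = object()

def bar_fixed(arg=_unset):
    if arg is _unset:
        arg = CONF.foo_opt
    return arg

CONF.foo_opt = 'configured-value'   # configuration file parsed later

print(bar_broken())  # 'default-value' -- stale default
print(bar_fixed())   # 'configured-value'
```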
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-07 Thread Kevin L. Mitchell
On Thu, 2014-08-07 at 17:44 +0100, Matthew Booth wrote:
> These are tricky, case-by-case workarounds to a general problem which
> can be solved by simply calling register_opts() in a place where it's
> guaranteed to be safe. 

No, THE CODE IS WRONG.  It is evaluating a configuration value at
_module load time_ rather than at run time.  It is wrong for the same
reason that "def foo(arg={})" is wrong.  If you fix the time at which
evaluation occurs, importing is no longer a problem.
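For readers unfamiliar with the `def foo(arg={})` pitfall being referenced, a quick plain-Python sketch of why it is wrong for the same reason:

```python
def append_bad(item, bucket=[]):
    # The default list is created ONCE, when 'def' executes,
    # and is then shared across every call that omits 'bucket'.
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):
    # Evaluate the default at call time instead.
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(append_bad(1))   # [1]
print(append_bad(2))   # [1, 2]  <- surprising: state leaked between calls
print(append_good(1))  # [1]
print(append_good(2))  # [2]
```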

> Is there any reason not to call register_opts()
> before importing other modules?

-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Stefano Maffulli
On 08/07/2014 01:49 AM, Thierry Carrez wrote:
> As an ex factory IT manager, I feel compelled to comment on that :)

Multidisciplinary training rules! As an architect with field experience
building roads, sidewalks, roofs, city planning (and training in lean
manufacturing and services) I think I can have a say ;)

> You're not really introducing a successful Kanban here, you're just
> clarifying that there should be a set number of workstations.

Right, and to clarify I'm really thinking kanban here, expanding on the
few lines Mikal used to explain the 'slots' concept.

> Our current system is like a gigantic open space with hundreds of
> half-finished pieces, and a dozen workers keep on going from one to
> another with no strong pattern. The proposed system is to limit the
> number of half-finished pieces fighting for the workers attention at any
> given time, by setting a clear number of workstations.

Correct, and I think we should add a limit to the amount of WIP, too. So
we have a visible limit to people, workstations and Work In Progress.
This way we can *immediately*, at any given time, identify problems.

> A true Kanban would be an interface between developers and reviewers,
> where reviewers define what type of change they have to review to
> complete production objectives, *and* developers would strive to produce
> enough to keep the kanban above the red line, but not too much (which
> would be piling up waste).

Exactly where I am aiming at: reducing waste, which we already have but
nobody (few, at different times) sees. By switching to a pull 'Just In
Time' mode we'd see waste accumulate much earlier than we do now.

> Without that discipline, Kanbans are useless. Unless the developers
> adapt what they work on based on release objectives, you don't really
> reduce waste/inventory at all, it just piles up waiting for available
> "runway slots". As I said in my original email, the main issue here is
> the imbalance between too many people proposing changes and not enough
> people caring about the project itself enough to be trusted with core
> reviewers rights.

I agree with you. Right now we're accumulating waste in the form of code
proposals (raw pieces that need to be processed) but reviewers and core
reviewers' attention span is limited (the number of 'workstations' is
finite but we don't have such limit exposed) and nobody sees the
accumulation of backlog until it's very late, at the end of the release
cycle.

A lot of the complaints I hear, and the worsening time to merge patches,
seem to indicate that we're over capacity and didn't notice.

> The only way to be truly pull-based is
> to define a set of production objectives and have those objectives
> trickle up to the developers so that they don't work on something else.

Yeah, don't we try to do that with blueprints/specs and priority? But we
don't set a limit; it's almost a 'free for all': send your patches in and
someone will evaluate them. Except there is a limit to what we can produce.

I think fundamentally we need to admit that there are 24 hours in a day
and that core reviewers have to sleep, sometimes. There is a finite
amount of patches that can be processed in a given time interval.

It's about finding a way to keep producing OpenStack at the highest
speed possible, keeping quality, listening to 'downstream' first.

> The solution is about setting release cycle goals and strongly
> communicating that everything out of those goals is clearly priority 2.

I don't think there is any 'proposal' just yet, only a half-baked idea
thrown out there by the nova team during a meeting and fluffed up by me
on the list. Still only a half-baked idea.

I realized this is a digression from the original thread though. I'll
talk to Russell and Nikola off-list (since they sent interesting
comments, too) and John and Dan to see if they're still interested in
formulating a comprehensive proposal.

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Bug squashing day

2014-08-07 Thread Sumit Naiksatam
Indeed, thanks much Eugene for taking on this critical activity.
Please let me know if I can help in any way as well.

On Thu, Aug 7, 2014 at 7:39 AM, Kyle Mestery  wrote:
> On Thu, Aug 7, 2014 at 9:31 AM, Eugene Nikanorov
>  wrote:
>> Hi neutron folks,
>>
>> Today should have been 'Bug squashing day' where we go over existing bugs
>> filed for the project and triage/prioritize/comment on them.
>>
>> I've created an etherpad with (hopefully) full list of neutron bugs:
>> https://etherpad.openstack.org/p/neutron-bug-squashing-day-2014-08-07
>>
>> I was able to walk through a couple of the almost thousand bugs we have.
>> My target was to reduce the number of open bugs, so some of them I moved to
>> incomplete/invalid/won't fix state (not many though); then, to reduce the
>> number of high importance bugs, especially if they're hanging for too long.
>>
>> As you can see, bugs in the etherpad are sorted by importance.
>> Some of my observations include:
>> - almost all bugs with High priority really seem like issues we should be
>> fixing.
>> In many cases submitter or initial contributor abandoned his work on the
>> bug...
>> - there are a couple of important bugs related to DVR where previously
>> working stuff
>> is broken, but in all cases there are DVR subteam members working on those,
>> so we're good here so far.
>>
>> I also briefly described resolution for each bug, where 'n/a' means that bug
>> just needs to be fixed/work should be continued without any change to state.
>> I'm planning to continue to go over this list and expect more bugs will go
>> away which previously have been marked as medium/low or wishlist.
>>
>> If anyone is willing to help - you're welcome!
>>
> Thanks for setting this up Eugene! I've been down with a nasty cold
> yesterday and still not feeling well today, I will try to be on
> #openstack-neutron off and on today. I encourage other Neutron cores
> to be there as well so we can coordinate the work and help new bugfix
> submitters.
>
> Thanks!
> Kyle
>
>> Thanks,
>> Eugene.
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Passing a list of ResourceGroup's attributes back to its members

2014-08-07 Thread Tomas Sedovic
Hi all,

I have a ResourceGroup which wraps a custom resource defined in another
template:

servers:
  type: OS::Heat::ResourceGroup
  properties:
    count: 10
    resource_def:
      type: my_custom_server
      properties:
        prop_1: "..."
        prop_2: "..."
        ...

And a corresponding provider template and environment file.

Now I can get, say, the list of IP addresses or any custom value of each
server from the ResourceGroup by using `{get_attr: [servers,
ip_address]}` and outputs defined in the provider template.

But I can't figure out how to pass that list back to each server in the
group.

This is something we use in TripleO for things like building a MySQL
cluster, where each node in the cluster (the ResourceGroup) needs the
addresses of all the other nodes.

Right now, we have the servers ungrouped in the top-level template so we
can build this list manually. But if we move to ResourceGroups (or any
other scaling mechanism, I think), this is no longer possible.

We can't pass the list to ResourceGroup's `resource_def` section because
that causes a circular dependency.

And I'm not aware of a way to attach a SoftwareConfig to a
ResourceGroup. SoftwareDeployment only allows attaching a config to a
single server.


Is there a way to do this that I'm missing? And if there isn't, is this
something we could add to Heat? E.g. extending a SoftwareDeployment to
accept ResourceGroups or adding another resource for that purpose.

Thanks,
Tomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Bug squashing day

2014-08-07 Thread Edgar Magana
I will be around today. I can help on bug triaging.. Catch me on IRC
(emagana)

Thank for coordinating this Eugene!

Edgar

On 8/7/14, 9:19 AM, "Akihiro Motoki"  wrote:

>Thanks Eugene for initiating Neutron bug squashing day!
>I would like to focus on bug triaging and checking bug status rather
>than fixing bugs.
>There seem to be many bugs which are not valid anymore.
>
>Thanks,
>Akihiro
>
>On Thu, Aug 7, 2014 at 11:31 PM, Eugene Nikanorov
> wrote:
>> Hi neutron folks,
>>
>> Today should have been 'Bug squashing day' where we go over existing
>>bugs
>> filed for the project and triage/prioritize/comment on them.
>>
>> I've created an etherpad with (hopefully) full list of neutron bugs:
>> https://etherpad.openstack.org/p/neutron-bug-squashing-day-2014-08-07
>>
>> I was able to walk through a couple of the almost thousand bugs we have.
>> My target was to reduce the number of open bugs, so some of them I
>>moved to
>> incomplete/invalid/won't fix state (not many though); then, to reduce
>>the
>> number of high importance bugs, especially if they're hanging for too
>>long.
>>
>> As you can see, bugs in the etherpad are sorted by importance.
>> Some of my observations include:
>> - almost all bugs with High priority really seem like issues we should
>>be
>> fixing.
>> In many cases submitter or initial contributor abandoned his work on the
>> bug...
>> - there are a couple of important bugs related to DVR where previously
>> working stuff
>> is broken, but in all cases there are DVR subteam members working on
>>those,
>> so we're good here so far.
>>
>> I also briefly described resolution for each bug, where 'n/a' means
>>that bug
>> just needs to be fixed/work should be continued without any change to
>>state.
>> I'm planning to continue to go over this list and expect more bugs will
>>go
>> away which previously have been marked as medium/low or wishlist.
>>
>> If anyone is willing to help - you're welcome!
>>
>> Thanks,
>> Eugene.
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-07 Thread John Griffith
On Thu, Aug 7, 2014 at 9:02 AM, John Griffith 
wrote:

>
>
>
> On Thu, Aug 7, 2014 at 6:20 AM, Sean Dague  wrote:
>
>> On 08/07/2014 07:58 AM, Angus Salkeld wrote:
>> > On Wed, 2014-08-06 at 15:48 -0600, John Griffith wrote:
>> >> I have to agree with Duncan here.  I also don't know if I fully
>> >> understand the limit in options.  Stress test seems like it
>> >> could/should be different (again overlap isn't a horrible thing) and I
>> >> don't see it as siphoning off resources so not sure of the issue.
>> >>  We've become quite wrapped up in projects, programs and the like
>> >> lately and it seems to hinder forward progress more than anything
>> >> else.
>> >>
>> >> I'm also not convinced that Tempest is where all things belong, in
>> >> fact I've been thinking more and more that a good bit of what Tempest
>> >> does today should fall more on the responsibility of the projects
>> >> themselves.  For example functional testing of features etc, ideally
>> >> I'd love to have more of that fall on the projects and their
>> >> respective teams.  That might even be something as simple to start as
>> >> saying "if you contribute a new feature, you have to also provide a
>> >> link to a contribution to the Tempest test-suite that checks it".
>> >>  Sort of like we do for unit tests, cross-project tracking is
>> >> difficult of course, but it's a start.  The other idea is maybe
>> >> functional test harnesses live in their respective projects.
>> >>
>> >
>> > Couldn't we reduce the scope of tempest (and rally): make tempest the
>> > API verification and rally the scenario/performance tester? Make each
>> > tool do less, but better. My point being to split the projects by
>> > functionality so there is less need to share code and stomp on each
>> > other's toes.
>>
>> Who is going to propose the split? Who is going to manage the
>> coordination of the split? What happens when there is disagreement about
>> the location of something like booting and listing a server -
>>
>> https://github.com/stackforge/rally/blob/master/rally/benchmark/scenarios/nova/servers.py#L44-L64
>>
>> Because today we've got fundamental disagreements between the teams on
>> scope, long standing (as seen in these threads), so this won't
>> organically solve itself.
>>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> last paragraph regarding the "split" wasn't mine, but...  I think it's
> good for people to express ideas on the ML like this.  It may not be
> feasible, but I think having more people thinking about how to move
> forward and expressing their ideas (even if they don't work out) is a good
> and healthy thing.
>
> As far as proposing a split, there's obviously a ton of detail that needs
> to be considered here and honestly it may just be a horrible idea right
> from the start.  That being said, to answer some of your questions, quite
> honestly IMO these are some of the things that I think it would be good for
> the TC to take an active role in.  Seems reasonable to have bodies like the
> TC work on governing and laying out technical process and direction.
>
> Anyway, ​I think the bottom line is that better collaboration is something
> we need to work on.  That in and of itself would've have likely thwarted
> this this thread to begin with (and I think that was one of the key points
> it tried to make).
>
> As far as the question at hand of Rally... I would surely hope that
> there's a way to for QA and Rally teams to actually collaborate and work
> together on this.  I also understand completely that lack of collaboration
> is probably what got us to this point in the first place.  It just seems to
> me that there's a middle ground somewhere but it's going to require some
> give and take from both sides.
>
> By the way, personally I feel that the movement over the last year that
> everybody needs to have their own program or project is a big problem.  The
> other thing that nobody wants to consider is why not just put some code on
> github independent of OpenStack?  Contribute things to the projects and
> build cool things for OpenStack outside of OpenStack.  Make sense?
>
> Questions about functional test responsibilities for projects etc should
> probably be a future discussion if there's interest and if it makes any
> sense at all (ie summit topic?).
>
> Just a note, I don't mean for the above to point fingers or even remotely
suggest that I think I have all the answers etc.  I just would like to spur
some serious thought on how we scale and grow going forward and that
includes Tempest and its role.

Currently I have zero complaints (really... zero) about Tempest, the QA or
Infra teams.  I do see more snags like the one we currently have in our
future though, and I think we need to come up with some way of 

Re: [openstack-dev] [Ironic] Proposal for slight change in our spec process

2014-08-07 Thread Jim Rollenhagen
On August 7, 2014 at 8:36:16 AM, Matt Wagner (matt.wag...@redhat.com) wrote:
On 07/08/14 14:17 +0200, Dmitry Tantsur wrote: 
 
>2. We'll need to speed up spec reviews, because we're adding one more 
>blocker on the way to the code being merged :) Maybe it's no longer a 
>problem actually, we're doing it faster now. 

I'm not sure if this will actually delay stuff getting merged. 

It certainly has the potential to do so. If people submit a draft in 
Launchpad and it takes us a week to review it, that's adding a lot of 
latency. 

OTOH, if we're on top of things, I think it could actually increase 
overall throughput. We'd spent less time reviewing specs that are just 
entirely out of scope, and we'd be able to help steer spec-writers 
onto the right path sooner. They, in turn, would waste less time 
writing specs that are then rejected wholesale, or deemed to need 
significant reworking. 

I'm not really disagreeing with you--we need to be vigilant and make 
sure we don't introduce a bottleneck with this. But I also think that, 
as long as we do that, we might actually speed things up overall. 
I agree. I think we have been doing much better with specs, especially since 
growing that core team and explicitly defining our priorities for Juno.

I don’t think this will increase latency in reviews - the initial review is 
quick and easy to do, as it’s just deciding overall if we want the feature. I 
think this may actually *reduce* latency - specs that are not wanted will get 
-2’d quickly, and specs that are wanted will have at least one core that is 
excited about the feature (if no cores care about the feature, it likely won’t 
be “pre-approved” or whatever we’re calling it).

I +1’d this at the meetup, doing it here again for public consumption. :)

// jim



-- Matt 
___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-07 Thread Kevin Benton
You said you had no idea what group based policy was buying us so I tried
to illustrate what the difference between declarative and imperative
network configuration looks like. That's the major selling point of GBP so
I'm not sure how that's 'side stepping' any points. It removes the need for
the user to pick between implementation details like security
groups/FWaaS/ACLs.

>So are you saying that GBP allows someone to be able to configure an
application that at the end of the day is equivalent  to
networks/router/FWaaS rules without understanding networking concepts?

It's one thing to understand the ports an application leverages and another
to understand the differences between configuring VM firewalls, security
groups, FWaaS, and router ACLs.

> I'm also curious how this GBP is really less error prone than the model
we have today as it seems the user will basically have to tell neutron the
same information about how he wants his networking to function.

With GBP, the user just gives the desired end result (e.g. allow
connectivity between endpoint groups via TCP port 22 with all connections
logged). Without it, the user has to do the following:

   1. create a network/subnet for each endpoint group
   2. allow all traffic on the security groups since the logging would need
   to be accomplished with FWaaS
   3. create an FWaaS instance
   4. attach the FWaaS to both networks
   5. add an FWaaS policy and the FWaaS rules to allow the correct traffic

The declarative approach is less error prone because the user can give
neutron the desired state of connectivity rather than a mentally compiled
set of instructions describing how to configure a bunch of individual
network components. How well do you think someone will handle the latter
approach that got all of their networking knowledge from one college course
5 years ago?
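The optimization point can be sketched with a deliberately tiny, non-Neutron analogy (echoing the list-sorting analogy earlier in this thread): a low-level API forces the caller to over-specify the work, while an intent-level API lets the implementation pick a cheaper strategy. Names here are purely illustrative:

```python
import heapq

def second_largest_imperative(xs):
    # Low-level contract: "sort the list for me" -- the implementation
    # must sort everything even though the caller only wants one element.
    return sorted(xs)[-2]

def second_largest_declarative(xs):
    # Intent-level contract: "give me the 2nd-largest element" -- the
    # implementation is free to use an O(n) selection instead of a sort.
    return heapq.nlargest(2, xs)[1]

data = [3, 1, 4, 1, 5, 9, 2, 6]
print(second_largest_imperative(data))   # 6
print(second_largest_declarative(data))  # 6
```

Both calls produce the same answer; only the declarative one is free to avoid the full sort, which is the same freedom a GBP mapping driver gets to choose between security groups, router ACLs, or FWaaS.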

IIRC one of the early nova parity requests was an API to automagically
setup a neutron network and router so no extra work would be required to
get instances connected to the Internet. That wasn't requested because
people thought neutron networks were too easy to setup already. :-)


On Thu, Aug 7, 2014 at 9:10 AM, Aaron Rosen  wrote:

> Hi Kevin,
>
> I feel as if your latest response is completely side stepping the points we
> have been trying to get to in the last series of emails. At the end of the
> day I don't believe we are changing the laws of networking (or perhaps we
> are?).  Thus I think it's important to actually get down to the networking
> level to actually figure out why optimizations such as this one are enabled
> via GBP and not the model we have:
>
>
> "With the latter, a mapping driver could determine that communication
> between these two hosts can be prevented by using an ACL on a router or a
> switch, which doesn't violate the user's intent and buys a performance
> improvement and works with ports that don't support security groups."
>
>
>
> On Wed, Aug 6, 2014 at 7:48 PM, Kevin Benton  wrote:
>
>> Do you not see a difference between explicitly configuring networks, a
>> router and FWaaS rules with logging and just stating that two groups of
>> servers can only communicate via one TCP port with all connections logged?
>> The first is very prone to errors for someone deploying an application
>> without a strong networking background, and the latter is basically just
>> stating the requirements and letting Neutron figure out how to implement
>> it.
>>
>> So are you saying that GBP allows someone to be able to configure an
> application that at the end of the day is equivalent  to
> networks/router/FWaaS rules without understanding networking concepts? I'm
> also curious how this GBP is really less error prone than the model we have
> today as it seems the user will basically have to tell neutron the same
> information about how he wants his networking to function.
>
>
>
>> Just stating requirements becomes even more important when something like
>> the logging requirement comes from someone other than the app deployer
>> (e.g. a security team). In the above example, someone could set everything
>> up using security groups; however, when the logging requirement came in
>> from the security team, they would have to undo all of that work and
>> replace it with something like FWaaS that can centrally log all of the
>> connections.
>>
>> It's the difference between using puppet and bash scripts. Sure you can
>> write a script that uses awk/sed to ensure that an ini file has a
>> particular setting and then restart a service if the setting is changed,
>> but it's much easier and less error prone to just write a puppet manifest
>> that uses the INI module with a pointer to the file, the section name, the
>> key, and the value with a notification to restart the service.
>>
>>
>>
>> On Wed, Aug 6, 2014 at 7:40 PM, Aaron Rosen 
>> wrote:
>>
>>>
>>> On Wed, Aug 6, 2014 at 5:27 PM, Kevin Benton  wrote:
>>>
 Web tier can communicate with anything except for the DB.
 App tier c

Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-07 Thread Matthew Booth
On 07/08/14 17:39, Kevin L. Mitchell wrote:
> On Thu, 2014-08-07 at 17:27 +0100, Matthew Booth wrote:
>> On 07/08/14 16:27, Kevin L. Mitchell wrote:
>>> On Thu, 2014-08-07 at 12:15 +0100, Matthew Booth wrote:
 A (the?) solution is to register_opts() in foo before importing any
 modules which might also use oslo.config.
>>>
>>> Actually, I disagree.  The real problem here is the definition of
>>> bar_func().  The default value of the parameter "arg" will likely always
>>> be the default value of foo_opt, rather than the configured value,
>>> because "CONF.foo_opt" will be evaluated at module load time.  The way
>>> bar_func() should be defined would be:
>>>
>>> def bar_func(arg=None):
>>> if not arg:
>>> arg = CONF.foo_opt
>>> …
>>>
>>> That ensures that arg will be the configured value, and should also
>>> solve the import conflict.
>>
>> That's different behaviour, because you can no longer pass arg=None. The
>> fix isn't to change the behaviour of the code.
> 
> Well, the point is that the code as written is incorrect.  And if 'None'
> is an input you want to allow, then use an incantation like:
> 
> _unset = object()
> 
> def bar_func(arg=_unset):
> if arg is _unset:
> arg = CONF.foo_opt
> …
> 
> In any case, the operative point is that CONF.<option> must always be
> evaluated inside run-time code, never at module load time.

...unless you call register_opts() safely, which is what I'm proposing.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-07 Thread Matthew Booth
On 07/08/14 17:34, Dan Smith wrote:
>> That's different behaviour, because you can no longer pass arg=None. The
>> fix isn't to change the behaviour of the code.
> 
> So use a sentinel...

That would also be a change to the behaviour of the code, because you
can no longer pass in the sentinel.

These are tricky, case-by-case workarounds to a general problem which
can be solved by simply calling register_opts() in a place where it's
guaranteed to be safe. Is there any reason not to call register_opts()
before importing other modules?

Matt

-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-07 Thread Matthew Booth
On 07/08/14 17:11, Kevin L. Mitchell wrote:
> On Thu, 2014-08-07 at 10:55 -0500, Matt Riedemann wrote:
>>
>> On 8/7/2014 10:27 AM, Kevin L. Mitchell wrote:
>>> On Thu, 2014-08-07 at 12:15 +0100, Matthew Booth wrote:
 A (the?) solution is to register_opts() in foo before importing any
 modules which might also use oslo.config.
>>>
>>> Actually, I disagree.  The real problem here is the definition of
>>> bar_func().  The default value of the parameter "arg" will likely always
>>> be the default value of foo_opt, rather than the configured value,
>>> because "CONF.foo_opt" will be evaluated at module load time.  The way
>>> bar_func() should be defined would be:
>>>
>>>  def bar_func(arg=None):
>>>  if not arg:
>>>  arg = CONF.foo_opt
>>>  …
>>>
>>> That ensures that arg will be the configured value, and should also
>>> solve the import conflict.
>>>
>>
>> Surely you mean:
>>
>> if arg is not None:
>>
>> right?! I'm pretty sure there is a hacking check for that now too...
> 
> No, I meant "if not arg", which would be true if arg is None—or 0, or
> empty string, or False.  If those alternate false values are potential
> values of arg, then clearly an "if arg is None" would be the correct
> incantation.  However, an "if arg is not None" would never be
> appropriate for this logic :)  (And the hacking check is against "if arg
> == None" or "if arg != None"…)

... or arg is an object which defines __nonzero__(), or defines
__getattr__() and then explodes because of the unexpected lookup of a
__nonzero__ attribute. Or it's False (no quotes when printed by the
debugger), but has a unicode type and therefore evaluates to True[1].

However, if you want to compare a value with None and write 'foo is
None' it will always do exactly what you expect, regardless what you
pass to it. I think it's also nicer to the reviewer and the maintainer,
who then don't need to go looking for context to check if anything
invalid might be passed in.
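A quick illustration of the difference (hypothetical values, nothing oslo-specific):

```python
CONFIGURED = 'from-config'

def with_truthiness(arg=None):
    if not arg:            # treats 0, '', False, [] the same as None
        arg = CONFIGURED
    return arg

def with_identity(arg=None):
    if arg is None:        # only the missing-argument case
        arg = CONFIGURED
    return arg

print(with_truthiness(0))  # 'from-config' -- caller's 0 silently discarded
print(with_identity(0))    # 0
```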

Matt

[1] I actually hit this. I still don't understand it.
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-07 Thread Kevin L. Mitchell
On Thu, 2014-08-07 at 17:27 +0100, Matthew Booth wrote:
> On 07/08/14 16:27, Kevin L. Mitchell wrote:
> > On Thu, 2014-08-07 at 12:15 +0100, Matthew Booth wrote:
> >> A (the?) solution is to register_opts() in foo before importing any
> >> modules which might also use oslo.config.
> > 
> > Actually, I disagree.  The real problem here is the definition of
> > bar_func().  The default value of the parameter "arg" will likely always
> > be the default value of foo_opt, rather than the configured value,
> > because "CONF.foo_opt" will be evaluated at module load time.  The way
> > bar_func() should be defined would be:
> > 
> > def bar_func(arg=None):
> > if not arg:
> > arg = CONF.foo_opt
> > …
> > 
> > That ensures that arg will be the configured value, and should also
> > solve the import conflict.
> 
> That's different behaviour, because you can no longer pass arg=None. The
> fix isn't to change the behaviour of the code.

Well, the point is that the code as written is incorrect.  And if 'None'
is an input you want to allow, then use an incantation like:

_unset = object()

def bar_func(arg=_unset):
if arg is _unset:
arg = CONF.foo_opt
…

In any case, the operative point is that CONF.<option> must always be
evaluated inside run-time code, never at module load time.
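Spelled out as a runnable sketch (again with a stand-in `Conf` class, since real oslo.config needs option registration and config loading):

```python
class Conf:
    """Stand-in for oslo.config's CONF."""
    foo_opt = 'configured-value'

CONF = Conf()

_unset = object()   # private sentinel: no caller can pass it by accident

def bar_func(arg=_unset):
    if arg is _unset:
        arg = CONF.foo_opt   # evaluated at call time, after config loads
    return arg

print(bar_func())      # 'configured-value'
print(bar_func(None))  # None -- None remains a legal explicit argument
```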
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-07 Thread Dan Smith
> That's different behaviour, because you can no longer pass arg=None. The
> fix isn't to change the behaviour of the code.

So use a sentinel...

--Dan



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-07 Thread Matthew Booth
On 07/08/14 16:27, Kevin L. Mitchell wrote:
> On Thu, 2014-08-07 at 12:15 +0100, Matthew Booth wrote:
>> A (the?) solution is to register_opts() in foo before importing any
>> modules which might also use oslo.config.
> 
> Actually, I disagree.  The real problem here is the definition of
> bar_func().  The default value of the parameter "arg" will likely always
> be the default value of foo_opt, rather than the configured value,
> because "CONF.foo_opt" will be evaluated at module load time.  The way
> bar_func() should be defined would be:
> 
> def bar_func(arg=None):
> if not arg:
> arg = CONF.foo_opt
> …
> 
> That ensures that arg will be the configured value, and should also
> solve the import conflict.

That's different behaviour, because you can no longer pass arg=None. The
fix isn't to change the behaviour of the code.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Bug squashing day

2014-08-07 Thread Akihiro Motoki
Thanks Eugene for initiating Neutron bug squashing day!
I would like to focus on bug triaging and checking bug status rather
than fixing bugs.
There seem be many bugs which are not valid anymore.

Thanks,
Akihiro

On Thu, Aug 7, 2014 at 11:31 PM, Eugene Nikanorov
 wrote:
> Hi neutron folks,
>
> Today should have been 'Bug squashing day' where we go over existing bugs
> filed for the project and triage/prioritize/comment on them.
>
> I've created an etherpad with (hopefully) full list of neutron bugs:
> https://etherpad.openstack.org/p/neutron-bug-squashing-day-2014-08-07
>
> I was able to walk through a portion of the almost one thousand bugs we have.
> My target was to reduce the number of open bugs, so some of them I moved to
> incomplete/invalid/won't fix state (not many though); then, to reduce the
> number of high importance bugs, especially if they're hanging for too long.
>
> As you can see, bugs in the etherpad are sorted by importance.
> Some of my observations include:
> - almost all bugs with High priority really seem like issues we should be
> fixing.
> In many cases the submitter or initial contributor has abandoned their
> work on the bug...
> - there are a couple of important bugs related to DVR where previously
> working stuff
> is broken, but in all cases there are DVR subteam members working on those,
> so we're good here so far.
>
> I also briefly described resolution for each bug, where 'n/a' means that bug
> just needs to be fixed/work should be continued without any change to state.
> I'm planning to continue to go over this list and expect more bugs will go
> away which previously have been marked as medium/low or wishlist.
>
> If anyone is willing to help - you're welcome!
>
> Thanks,
> Eugene.
>
>
>



Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-07 Thread Kevin L. Mitchell
On Thu, 2014-08-07 at 10:55 -0500, Matt Riedemann wrote:
> 
> On 8/7/2014 10:27 AM, Kevin L. Mitchell wrote:
> > On Thu, 2014-08-07 at 12:15 +0100, Matthew Booth wrote:
> >> A (the?) solution is to register_opts() in foo before importing any
> >> modules which might also use oslo.config.
> >
> > Actually, I disagree.  The real problem here is the definition of
> > bar_func().  The default value of the parameter "arg" will likely always
> > be the default value of foo_opt, rather than the configured value,
> > because "CONF.foo_opt" will be evaluated at module load time.  The way
> > bar_func() should be defined would be:
> >
> >  def bar_func(arg=None):
> >  if not arg:
> >  arg = CONF.foo_opt
> >  …
> >
> > That ensures that arg will be the configured value, and should also
> > solve the import conflict.
> >
> 
> Surely you mean:
> 
> if arg is not None:
> 
> right?! I'm pretty sure there is a hacking check for that now too...

No, I meant "if not arg", which would be true if arg is None—or 0, or
empty string, or False.  If those alternate false values are potential
values of arg, then clearly an "if arg is None" would be the correct
incantation.  However, an "if arg is not None" would never be
appropriate for this logic :)  (And the hacking check is against "if arg
== None" or "if arg != None"…)
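The distinction can be shown with a tiny stand-alone sketch (the function names and the "fallback" value are invented for illustration):

```python
def with_truthiness(arg=None):
    # "if not arg": None, 0, "", False and [] all trigger the fallback
    if not arg:
        arg = "fallback"
    return arg


def with_identity(arg=None):
    # "if arg is None": only an omitted (or explicit None) argument does
    if arg is None:
        arg = "fallback"
    return arg


print(with_truthiness(0))  # fallback -- the caller's 0 is silently replaced
print(with_identity(0))    # 0 -- the caller's 0 survives
```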
-- 
Kevin L. Mitchell 
Rackspace




Re: [openstack-dev] [cinder] The future of the integrated release

2014-08-07 Thread Duncan Thomas
On 7 August 2014 16:39, John Griffith  wrote:
> On Thu, Aug 7, 2014 at 9:28 AM, Eric Harney  wrote:
>> On 08/07/2014 09:55 AM, John Griffith wrote:
>> > There are three things that have just crushed productivity and
>> > motivation
>> > in Cinder this release (IMO):
>> > 1. Overwhelming number of drivers (tactical contributions)
>> > 2. Overwhelming amount of churn, literally hundreds of little changes to
>> > modify docstrings, comments etc but no real improvements to code

>> I'm not sure that there is much data to support that this has been a
>> problem to the point of impacting productivity.  Even if some patches
>> make changes that aren't too significant, those tend to be quick to
>> review.  Personally, I haven't found this to be a troublesome area, and
>> it's been clear that Cinder does need some cleanup/refactoring work in
>> some areas.

> Ok...
> s/There are three things that have just crushed productivity and
> motivation/There are three things that have just crushed MY productivity and
> motivation/g

For the record, I feel the same way as John... Cinder has become a
great big, complex API wrapper for other products... it is a direction
that I think means we aren't developing anything new or novel within
cinder, we're making it harder to even think about doing that, and
what we have is becoming flakier and flakier.

>> Just going on my gut feeling, I'd argue that we too often have patchsets
>> that are too large and should be split into a series of smaller commits,
>> and that concerns me more, because these are both harder to review and
>> harder to catch bugs in.
>
> I totally agree with you on this, no argument at all.  The never ending
> stream of six additions, typo fixes and new hacking adds however is a
> different category for me.

I started the code cleanup tag to help reduce the impact of these, we
can hopefully punt them to a couple of one week periods per release.
Have to see how that works out over time.

>> I'd add:
>> 4. Quite a few people have put time into working on third-party driver
>> CI, presumably at the expense of the other usual efforts.  This is fine,
>> and a good thing, but it surely impacted the amount of attention given
>> to other efforts with our small team.

> I do think this has certainly had a significant impact on some folks for
> sure.  But I've already ranted about that and won't do it again here :)

I'm one of the people who's been pushing hard on CI, to the point I'm
sure that some people are really fed up with my emails, but I do feel
that if we can get it done and working then the future benefits are
substantial. And all of the people complaining that openstack /
devstack is too flaky to get CI working I think are realising a
problem that really needs to be fixed. If we can't stand up a simple
system to run basic tests on, we have clear problems.



Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Nikola Đipanov
On 08/07/2014 03:20 PM, Russell Bryant wrote:
> On 08/07/2014 09:07 AM, Sean Dague wrote:
>> I think the difference is
>> slot selection would just be Nova drivers. I
>> think there is an assumption in the old system that everyone in Nova
>> core wants to prioritize the blueprints. I think there are a bunch of
>> folks in Nova core that are happy having signaling from Nova drivers on
>> high priority things to review. (I know I'm in that camp.)
>>
>> Lacking that we all have picking algorithms to hack away at the 500 open
>> reviews. Which basically means it's a giant random queue.
>>
>> Having a few blueprints that *everyone* is looking at also has the
>> advantage that the context for the bits in question will tend to be
>> loaded into multiple people's heads at the same time, so is something
>> that's discussable.
>>
>> Will it fix the issue, not sure, but it's an idea.
> 
> OK, got it.  So, success critically depends on nova-core being willing
> to take review direction and priority setting from nova-drivers.  That
> sort of assumption is part of why I think agile processes typically
> don't work in open source.  We don't have the ability to direct people
> with consistent and reliable results.
> 
> I'm afraid if people doing the review are not directly involved in at
> least ACKing the selection and committing to review something, putting
> stuff in slots seems futile.
> 

Forgive my bluntness here, but here's how I read the slots proposal:

If there is a limited number of slots, and let's say there is a group of
people (nova-core or nova-drivers) that can decide which ones get
attention for a given cycle based on "technical merit alone" (and
disregard what Russell notices above - there is no real way to make
people review things), I can easily see this as a very polite way of
saying - core gets their stuff in and then maybe if there is a slot or 2
left, we'll consider other proposals... but we can talk about them -
sure :).

So why not own up to it? As ttx points out elsewhere in the thread - it
is about setting expectations really.

Now for a small intermezzo - here's a snippet from a wiki of an
OpenStack-like project being run in a slightly different parallel
universe (not called OpenStack tho - trademarks may cross boundaries of
parallel universes):

"""
We (The Core) are The Upstream and have control of what gets in. We will
land no less than 0 outside features this cycle and that is all that we
will guarantee! If you want your stuff to be considered:

1) write up a BP and a nice spec so we know what you're thinking, ping
us to read it and comment on it.
2) Write some code and post it in our Gerrit code review system - ping
us to read and comment on it.
3) (Ideally) run your code in production and tell us how it solves the
particular problem you have, and why other people may have the same
problem, the code is public on Gerrit so get other people to use the
code as well and comment how useful it is - maybe they will propose
changes and fixes.
4) If The Core sees this feature as something that fits the direction of
the project (you got that feedback hopefully at 1) and gets the feel
that there is enough community interest (for some easy things non-0 is
fine for others it may not be), The Core will review it and land it at
some point, after you've fixed their nits too. We are landing code
that's been battle tested at this point, so we won't have so many of
those hopefully.
5) For additional karma - work on bugs and help us, The Core, review and
fix those, and as we see that you are an awesome contributor, we are
likely to take your pings and code seriously.
"""

In the parallel universe the velocity has been capped in a more natural
way, there are no arbitrary numbers (of slots for example) that cannot
be backed up by actual metrics - there are no numbers at all for that
matter. Also - cores do not worry about the number of open reviews in
Gerrit - the more is better, as it means there is more stuff being tried
out!

They focus on stability and fixing bugs, until a time when there is a
feature that is really interesting and ready for them to land it.

If you think about it - the only real difference between the two is how
expectations are communicated.

Now I have no idea if we as individual projects, or as a community are
ready to set such expectations, but reading all the discussions around
review times and gate failures over the last 2 cycles or so - it seems
to me that "velocity" is something we want to control, and stability is
something we want to increase and we seem to be coming up with very
round-about ways of doing it, which we back up with arbitrary metrics
like "number of (open) reviews" that seem to me to be just a distraction.
Why not just do it and say _very loudly_ that we are doing it, and
then... do it :).

We would need to give up trying to set expectations to our
"stakeholders" but the reality is (it seems to me) that we are in the
business of making cloud software (and are reasonably

Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-07 Thread Matt Riedemann



On 8/7/2014 10:27 AM, Kevin L. Mitchell wrote:

On Thu, 2014-08-07 at 12:15 +0100, Matthew Booth wrote:

A (the?) solution is to register_opts() in foo before importing any
modules which might also use oslo.config.


Actually, I disagree.  The real problem here is the definition of
bar_func().  The default value of the parameter "arg" will likely always
be the default value of foo_opt, rather than the configured value,
because "CONF.foo_opt" will be evaluated at module load time.  The way
bar_func() should be defined would be:

 def bar_func(arg=None):
 if not arg:
 arg = CONF.foo_opt
 …

That ensures that arg will be the configured value, and should also
solve the import conflict.



Surely you mean:

if arg is not None:

right?! I'm pretty sure there is a hacking check for that now too...

:)

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Neutron][QA] Enabling full neutron Job

2014-08-07 Thread Salvatore Orlando
Thanks Armando,

The fix for the bug you pointed out was the reason of the failure we've
been seeing.
The follow-up patch merged and I've removed the wip status from the patch
for the full job [1]

Salvatore

[1] https://review.openstack.org/#/c/88289/


On 7 August 2014 16:50, Armando M.  wrote:

> Hi Salvatore,
>
> I did notice the issue and I flagged this bug report:
>
> https://bugs.launchpad.net/nova/+bug/1352141
>
> I'll follow up.
>
> Cheers,
> Armando
>
>
> On 7 August 2014 01:34, Salvatore Orlando  wrote:
>
>> I had to put the patch back on WIP because yesterday a bug causing a 100%
>> failure rate slipped in.
>> It should be an easy fix, and I'm already working on it.
>> Situations like this, exemplified by [1] are a bit frustrating for all
>> the people working on improving neutron quality.
>> Now, if you allow me a little rant, as Neutron is receiving a lot of
>> attention for all the ongoing discussion regarding this group policy stuff,
>> would it be possible for us to receive a bit of attention to ensure both
>> the full job and the grenade one are switched to voting before the juno-3
>> review crunch?
>>
>> We've already had the attention of the QA team, it would probably good if
>> we could get the attention of the infra core team to ensure:
>> 1) the jobs are also deemed by them stable enough to be switched to voting
>> 2) the relevant patches for openstack-infra/config are reviewed
>>
>> Regards,
>> Salvatore
>>
>> [1]
>> http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwie3UnbWVzc2FnZSc6IHUnRmxvYXRpbmcgaXAgcG9vbCBub3QgZm91bmQuJywgdSdjb2RlJzogNDAwfVwiIEFORCBidWlsZF9uYW1lOlwiY2hlY2stdGVtcGVzdC1kc3ZtLW5ldXRyb24tZnVsbFwiIEFORCBidWlsZF9icmFuY2g6XCJtYXN0ZXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiMTcyODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNzQwMDExMDIwNywibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==
>>
>>
>> On 23 July 2014 14:59, Matthew Treinish  wrote:
>>
>>> On Wed, Jul 23, 2014 at 02:40:02PM +0200, Salvatore Orlando wrote:
>>> > Here I am again bothering you with the state of the full job for
>>> Neutron.
>>> >
>>> > The patch for fixing an issue in nova's server external events
>>> extension
>>> > merged yesterday [1]
>>> > We do not have yet enough data points to make a reliable assessment,
>>> but of
>>> > out 37 runs since the patch merged, we had "only" 5 failures, which
>>> puts
>>> > the failure rate at about 13%
>>> >
>>> > This is ugly compared with the current failure rate of the smoketest
>>> (3%).
>>> > However, I think it is good enough to start making the full job voting
>>> at
>>> > least for neutron patches.
>>> > Once we'll be able to bring down failure rate to anything around 5%,
>>> we can
>>> > then enable the job everywhere.
>>>
>>> I think that sounds like a good plan. I'm also curious how the failure
>>> rates
>>> compare to the other non-neutron jobs, that might be a useful comparison
>>> too
>>> for deciding when to flip the switch everywhere.
>>>
>>> >
>>> > As much as I hate asymmetric gating, I think this is a good compromise
>>> for
>>> > avoiding developers working on other projects are badly affected by the
>>> > higher failure rate in the neutron full job.
>>>
>>> So we discussed this during the project meeting a couple of weeks ago
>>> [3] and
>>> there was a general agreement that doing it asymmetrically at first
>>> would be
>>> better. Everyone should be wary of the potential harms with doing it
>>> asymmetrically and I think priority will be given to fixing issues that
>>> block
>>> the neutron gate should they arise.
>>>
>>> > I will therefore resume work on [2] and remove the WIP status as soon
>>> as I
>>> > can confirm a failure rate below 15% with more data points.
>>> >
>>>
>>> Thanks for keeping on top of this Salvatore. It'll be good to finally be
>>> at
>>> least partially gating with a parallel job.
>>>
>>> -Matt Treinish
>>>
>>> >
>>> > [1] https://review.openstack.org/#/c/103865/
>>> > [2] https://review.openstack.org/#/c/88289/
>>> [3]
>>> http://eavesdrop.openstack.org/meetings/project/2014/project.2014-07-08-21.03.log.html#l-28
>>>
>>> >
>>> >
>>> > On 10 July 2014 11:49, Salvatore Orlando  wrote:
>>> >
>>> > >
>>> > >
>>> > >
>>> > > On 10 July 2014 11:27, Ihar Hrachyshka  wrote:
>>> > >
>>> > >> -BEGIN PGP SIGNED MESSAGE-
>>> > >> Hash: SHA512
>>> > >>
>>> > >> On 10/07/14 11:07, Salvatore Orlando wrote:
>>> > >> > The patch for bug 1329564 [1] merged about 11 hours ago. From [2]
>>> > >> > it seems there has been an improvement on the failure rate, which
>>> > >> > seem to have dropped to 25% from over 40%. Still, since the patch
>>> > >> > merged there have been 11 failures already in the full job out of
>>> > >> > 42 jobs executed in total. Of these 11 failures: - 3 were due to
>>> > >> > problems in the patches being tested - 1 had the same root cause
>>> as
>>> > >> > bug 1329564. Indeed the related job started before the patch
>>> merged
>>> > >> > but finished 

Re: [openstack-dev] [cinder] The future of the integrated release

2014-08-07 Thread John Griffith
On Thu, Aug 7, 2014 at 9:28 AM, Eric Harney  wrote:

> On 08/07/2014 09:55 AM, John Griffith wrote:
> > Seems everybody that's been around a while has noticed "issues" this
> > release and have talked about it, thanks Thierry for putting it together
> > so
> > well and kicking off the ML thread here.
> >
> > I'd agree with everything that you stated, I've also floated the idea
> > this
> > past week with a few members of the Core Cinder team to have an "every
> > other" release for new drivers submissions in Cinder (I'm expecting this
> > to
> > be a HUGELY popular proposal [note sarcastic tone]).
> >
> > There are three things that have just crushed productivity and motivation
> > in Cinder this release (IMO):
> > 1. Overwhelming number of drivers (tactical contributions)
> > 2. Overwhelming amount of churn, literally hundreds of little changes to
> > modify docstrings, comments etc but no real improvements to code
>
> I'm not sure that there is much data to support that this has been a
> problem to the point of impacting productivity.  Even if some patches
> make changes that aren't too significant, those tend to be quick to
> review.  Personally, I haven't found this to be a troublesome area, and
> it's been clear that Cinder does need some cleanup/refactoring work in
> some areas.
>

Ok...
s/There are three things that have just crushed productivity and
motivation/There
are three things that have just crushed MY productivity and motivation/g

better?


> Just going on my gut feeling, I'd argue that we too often have patchsets
> that are too large and should be split into a series of smaller commits,
> and that concerns me more, because these are both harder to review and
> harder to catch bugs in.
>
I totally agree with you on this, no argument at all.  The never ending
stream of six additions, typo fixes and new hacking adds however is a
different category for me.

>
> > 3. A new sense of pride in hitting the -1 button on reviews.  A large
> > number of reviews these days seem to be -1 due to punctuation or
> > misspelling in comments and docstrings.  There's also a lot of "my way of
> > writing this method is better because it's *clever*" taking place.
>
> I still don't really have a good sense of how much this happens and what
> the impact is.  But, the basic problem with this argument is that if we
> feel that #2 and #3 are both problems, we are effectively inviting the
> code/documentation to get sloppier and rot over time.  It needs to
> either be cleaned up in review or patched later.
>
See my search/replace above, guess it's just me.  I see it quite often, I
could try and gather some numbers but honestly it seems like almost every
other patch I review has a -1 for something along these lines.

>
> (Or if there's a dispute about "need" there, we at least need to be ok
> with letting people who feel that this is worthwhile fix it up.)
>
> I'd add:
> 4. Quite a few people have put time into working on third-party driver
> CI, presumably at the expense of the other usual efforts.  This is fine,
> and a good thing, but it surely impacted the amount of attention given
> to other efforts with our small team.
>
I do think this has certainly had a significant impact on some folks for
sure.  But I've already ranted about that and won't do it again here :)

>
> > In Cinder's case I don't think new features is a problem, in fact we
> can't
> > seem to get new features worked on and released because of all the other
> > distractions.  That being said doing a maintenance or hardening only type
> > of release is for sure good with me.
> >
> > Anyway, I've had some plans to talk about how we might fix some of this
> in
> > Cinder at next week's sprint.  If there's a broader community effort
> along
> > these lines that's even better.
> >
> > Thanks,
> > John
>
>


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-07 Thread Ben Nemec
On 08/06/2014 05:35 PM, Yuriy Taraday wrote:
> Oh, looks like we got a bit of a race condition in messages. I hope you
> don't mind.
> 
> 
> On Wed, Aug 6, 2014 at 11:00 PM, Ben Nemec  wrote:
> 
>> On 08/06/2014 01:42 PM, Yuriy Taraday wrote:
>>> On Wed, Aug 6, 2014 at 6:20 PM, Ben Nemec 
>> wrote:
>>>
 On 08/06/2014 03:35 AM, Yuriy Taraday wrote:
> I'd like to stress this to everyone: I DO NOT propose squashing
>> together
> commits that should belong to separate change requests. I DO NOT
>> propose
 to
> upload all your changes at once. I DO propose letting developers to
>> keep
> local history of all iterations they have with a change request. The
> history that absolutely doesn't matter to anyone but this developer.

 Right, I understand that may not be the intent, but it's almost
 certainly going to be the end result.  You can't control how people are
 going to use this feature, and history suggests if it can be abused, it
 will be.

>>>
>>> Can you please outline the abuse scenario that isn't present nowadays?
>>> People upload huge changes and are encouraged to split them during
>> review.
>>> The same will happen within proposed workflow. More experienced
>> developers
>>> split their change into a set of change requests. The very same will
>> happen
>>> within proposed workflow.
>>
>> There will be a documented option in git-review that automatically
>> squashes all commits.  People _will_ use that incorrectly because from a
>> submitter perspective it's easier to deal with one review than multiple,
>> but from a reviewer perspective it's exactly the opposite.
>>
> 
> It won't be documented as such. It will include "use with care" and "years
> of Git experience: 3+" stickers. Autosquashing will never be mentioned
> there. Only a detailed explanation of how to work with it and (probably)
> how it works. No rogue dev will get through it without getting the true
> understanding.
> 
> By the way, currently git-review suggests to squash your outstanding
> commits but there is no overwhelming flow of overly huge change requests,
> is there?
> 
 On Wed, Aug 6, 2014 at 12:03 PM, Martin Geisler 
 wrote:
>
>> Ben Nemec  writes:
>>
>>> On 08/05/2014 03:14 PM, Yuriy Taraday wrote:

 When you're developing some big change you'll end up with trying
 dozens of different approaches and make thousands of mistakes. For
 reviewers this is just unnecessary noise (commit title "Scratch my
 last CR, that was bullshit") while for you it's a precious history
 that can provide basis for future research or bug-hunting.
>>>
>>> So basically keeping a record of how not to do it?  I get that, but I
>>> think I'm more onboard with the suggestion of sticking those dead end
>>> changes into a separate branch.  There's no particular reason to keep
>>> them on your working branch anyway since they'll never merge to
>> master.
>>>  They're basically unnecessary conflicts waiting to happen.
>>
>> Yeah, I would never keep broken or unfinished commits around like
>> this.
>> In my opinion (as a core Mercurial developer), the best workflow is to
>> work on a feature and make small and large commits as you go along.
>> When
>> the feature works, you begin squashing/splitting the commits to make
>> them into logical pieces, if they aren't already in good shape. You
>> then
>> submit the branch for review and iterate on it until it is accepted.
>>
>
> Absolutely true. And it's mostly the same workflow that happens in
> OpenStack: you do your cool feature, you carve meaningful small
> self-contained pieces out of it, you submit series of change requests.
> And nothing in my proposal conflicts with it. It just provides a way to
> make developer's side of this simpler (which is the intent of
>> git-review,
> isn't it?) while not changing external artifacts of one's work: the
>> same
> change requests, with the same granularity.
>
>
>> As a reviewer, it cannot be stressed enough how much small, atomic,
>> commits help. Squashing things together into large commits make
>> reviews
>> very tricky and removes the possibility of me accepting a later commit
>> while still discussing or rejecting earlier commits (cherry-picking).
>>
>
> That's true, too. But please don't think I'm proposing to squash
 everything
> together and push 10k-loc patches. I hate that, too. I'm proposing to
>> let
> developer use one's tools (Git) in a simpler way.
> And the simpler way (for some of us) would be to have one local branch
 for
> every change request, not one branch for the whole series. Switching
> between branches is very well supported by Git and doesn't require
>> extra
> thinking. Jumping around in detached HEAD state and editing commits
 during
> rebase requires rememberi

Re: [openstack-dev] [Ironic] Proposal for slight change in our spec process

2014-08-07 Thread Matt Wagner

On 07/08/14 14:17 +0200, Dmitry Tantsur wrote:


2. We'll need to speed up spec reviews, because we're adding one more
blocker on the way to the code being merged :) Maybe it's no longer a
problem actually, we're doing it faster now.


I'm not sure if this will actually delay stuff getting merged.

It certainly has the potential to do so. If people submit a draft in
Launchpad and it takes us a week to review it, that's adding a lot of
latency.

OTOH, if we're on top of things, I think it could actually increase
overall throughput. We'd spent less time reviewing specs that are just
entirely out of scope, and we'd be able to help steer spec-writers
onto the right path sooner. They, in turn, would waste less time
writing specs that are then rejected wholesale, or deemed to need
significant reworking.

I'm not really disagreeing with you--we need to be vigilant and make
sure we don't introduce a bottleneck with this. But I also think that,
as long as we do that, we might actually speed things up overall.

-- Matt




Re: [openstack-dev] [cinder] The future of the integrated release

2014-08-07 Thread Eric Harney
On 08/07/2014 09:55 AM, John Griffith wrote:
> Seems everybody that's been around a while has noticed "issues" this
> release and have talked about it, thanks Thierry for putting it together so
> well and kicking off the ML thread here.
> 
> I'd agree with everything that you stated, I've also floated the idea this
> past week with a few members of the Core Cinder team to have an "every
> other" release for new drivers submissions in Cinder (I'm expecting this to
> be a HUGELY popular proposal [note sarcastic tone]).
> 
> There are three things that have just crushed productivity and motivation
> in Cinder this release (IMO):
> 1. Overwhelming number of drivers (tactical contributions)
> 2. Overwhelming amount of churn, literally hundreds of little changes to
> modify docstrings, comments etc but no real improvements to code

I'm not sure that there is much data to support that this has been a
problem to the point of impacting productivity.  Even if some patches
make changes that aren't too significant, those tend to be quick to
review.  Personally, I haven't found this to be a troublesome area, and
it's been clear that Cinder does need some cleanup/refactoring work in
some areas.

Just going on my gut feeling, I'd argue that we too often have patchsets
that are too large and should be split into a series of smaller commits,
and that concerns me more, because these are both harder to review and
harder to catch bugs in.

> 3. A new sense of pride in hitting the -1 button on reviews.  A large
> number of reviews these days seem to be -1 due to punctuation or
> misspelling in comments and docstrings.  There's also a lot of "my way of
> writing this method is better because it's *clever*" taking place.

I still don't really have a good sense of how much this happens and what
the impact is.  But, the basic problem with this argument is that if we
feel that #2 and #3 are both problems, we are effectively inviting the
code/documentation to get sloppier and rot over time.  It needs to
either be cleaned up in review or patched later.

(Or if there's a dispute about "need" there, we at least need to be ok
with letting people who feel that this is worthwhile fix it up.)

I'd add:
4. Quite a few people have put time into working on third-party driver
CI, presumably at the expense of the other usual efforts.  This is fine,
and a good thing, but it surely impacted the amount of attention given
to other efforts with our small team.

> In Cinder's case I don't think new features is a problem, in fact we can't
> seem to get new features worked on and released because of all the other
> distractions.  That being said doing a maintenance or hardening only type
> of release is for sure good with me.
> 
> Anyway, I've had some plans to talk about how we might fix some of this in
> Cinder at next week's sprint.  If there's a broader community effort along
> these lines that's even better.
> 
> Thanks,
> John




Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-07 Thread Kevin L. Mitchell
On Thu, 2014-08-07 at 12:15 +0100, Matthew Booth wrote:
> A (the?) solution is to register_opts() in foo before importing any
> modules which might also use oslo.config.

Actually, I disagree.  The real problem here is the definition of
bar_func().  The default value of the parameter "arg" will likely always
be the default value of foo_opt, rather than the configured value,
because "CONF.foo_opt" will be evaluated at module load time.  The way
bar_func() should be defined would be:

def bar_func(arg=None):
    if not arg:
        arg = CONF.foo_opt
    …

That ensures that arg will be the configured value, and should also
solve the import conflict.
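As a quick standalone illustration of the pitfall (a plain-Python sketch — the `Conf` class below just stands in for oslo.config's global CONF object, it is not the real API):

```python
# Default argument values are evaluated once, when "def" executes,
# so a default of CONF.foo_opt captures whatever value the option
# had at import time -- usually before the config file is parsed.
class Conf:
    foo_opt = "default"

CONF = Conf()

def bad(arg=CONF.foo_opt):      # snapshot taken at definition time
    return arg

def good(arg=None):             # value looked up at call time
    if arg is None:
        arg = CONF.foo_opt
    return arg

CONF.foo_opt = "configured"     # simulates parsing the config file

print(bad())    # -> default    (stale snapshot)
print(good())   # -> configured
```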
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] oslo.config and import chains (a proposal to update the style guidelines for imports)

2014-08-07 Thread Matthew Booth
On 07/08/14 12:15, Matthew Booth wrote:
> I'm sure this is well known, but I recently encountered this problem for
> the second time.
> 
> ---
> foo:
> import oslo.config as cfg
> 
> import bar
> 
> CONF = cfg.CONF
> CONF.register_opts('foo_opt')
> 
> ---
> bar:
> import oslo.config as cfg
> 
> CONF = cfg.CONF
> 
> def bar_func(arg=CONF.foo_opt):
>   pass
> ---
> 
> importing foo results in an error in bar because CONF.foo_opt doesn't
> exist. This is because bar is imported before CONF.register_opts.
> CONF.import_opt() fails in the same way because it just imports foo and
> hits the exact same problem when foo imports bar.
> 
> A (the?) solution is to register_opts() in foo before importing any
> modules which might also use oslo.config. This also allows import_opt()
> to work in bar, which you should do to remove any dependency on import
> order:
> 
> ---
> foo:
> import oslo.config as cfg
> 
> CONF = cfg.CONF
> CONF.register_opts('foo_opt')
> 
> import bar
> 
> ---
> bar:
> import oslo.config as cfg
> 
> CONF = cfg.CONF
> CONF.import_opt('foo_opt', 'foo')
> 
> def bar_func(arg=CONF.foo_opt):
>   pass
> ---
> 
> Even if it's old news it's worth a refresher because it was a bit of a
> headscratcher.

This became pertinent, because it's now blocking:
https://review.openstack.org/#/c/104145/. It has a -1 for not following
the style guidelines in the ordering of imports.

I think we should update our style guidelines to recommend that any
module which calls register_opts() or import_opt() do so *before*
importing another module which might also call one of those functions.
If this is done consistently, it means that any module can import_opt()
from another module and be certain that it won't be re-imported itself
before register_opts() has been called.
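To make the guideline concrete, here is a toy registry (a hypothetical stand-in for oslo.config's shared CONF, not the real implementation) showing why registration must run before any module that does module-level option access is imported:

```python
# Toy stand-in for oslo.config's shared option registry.  Accessing an
# option before it is registered fails, which is what happens when a
# module like "bar" is imported before foo's register_opts() call runs.
class Registry:
    def __init__(self):
        self._opts = {}

    def register_opt(self, name, default):
        self._opts[name] = default

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails.
        opts = self._opts
        if name in opts:
            return opts[name]
        raise AttributeError("option %s not registered" % name)

CONF = Registry()

# foo must register its options *before* importing bar...
CONF.register_opt("foo_opt", "value")

# ...so that bar's module-level code (simulated here) works:
print(CONF.foo_opt)  # -> value
```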

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] dns for overcloud nodes

2014-08-07 Thread Giulio Fidente

On 08/07/2014 01:32 PM, Jan Provaznik wrote:

Hi,
by default we don't set a nameserver when setting up the neutron subnet used
by overcloud nodes; the nameserver then points to the machine where the
undercloud's dnsmasq is running.

I wonder if we should change the *default* devtest setup to allow DNS
resolution not only for the local network but for the internet. Proper DNS
resolution is handy, e.g. for the package update scenario.

This would mean:

a) explicitly set the nameserver when configuring the neutron subnet (as it's
done for network in overcloud [1])


I'd +1 this one

We allow for customization of the OC gateway [1], so it seems reasonable 
to me to just add the config bits for customization of the OC DNS servers.


1. 
https://github.com/openstack/tripleo-incubator/blob/master/scripts/devtest_undercloud.sh#L325

--
Giulio Fidente
GPG KEY: 08D733BA

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-07 Thread Aaron Rosen
Hi Kevin,

I feel that your latest response completely side-steps the points we have
been trying to get to in the last series of emails. At the end of the
day I don't believe we are changing the laws of networking (or perhaps we
are?).  Thus I think it's important to actually get down to the networking
level to figure out why optimizations such as this one are enabled
via GBP and not by the model we have:

"With the latter, a mapping driver could determine that communication
between these two hosts can be prevented by using an ACL on a router or a
switch, which doesn't violate the user's intent and buys a performance
improvement and works with ports that don't support security groups."



On Wed, Aug 6, 2014 at 7:48 PM, Kevin Benton  wrote:

> Do you not see a difference between explicitly configuring networks, a
> router and FWaaS rules with logging and just stating that two groups of
> servers can only communicate via one TCP port with all connections logged?
> The first is very prone to errors for someone deploying an application
> without a strong networking background, and the latter is basically just
> stating the requirements and letting Neutron figure out how to implement
> it.
>
> So are you saying that GBP allows someone to be able to configure an
application that at the end of the day is equivalent to
networks/router/FWaaS rules without understanding networking concepts? I'm
also curious how GBP is really less error-prone than the model we have
today, as it seems the user will basically have to tell Neutron the same
information about how he wants his networking to function.



> Just stating requirements becomes even more important when something like
> the logging requirement comes from someone other than the app deployer
> (e.g. a security team). In the above example, someone could set everything
> up using security groups; however, when the logging requirement came in
> from the security team, they would have to undo all of that work and
> replace it with something like FWaaS that can centrally log all of the
> connections.
>
> It's the difference between using puppet and bash scripts. Sure you can
> write a script that uses awk/sed to ensure that an ini file has a
> particular setting and then restart a service if the setting is changed,
> but it's much easier and less error prone to just write a puppet manifest
> that uses the INI module with a pointer to the file, the section name, the
> key, and the value with a notification to restart the service.
>
>
>
> On Wed, Aug 6, 2014 at 7:40 PM, Aaron Rosen  wrote:
>
>>
>> On Wed, Aug 6, 2014 at 5:27 PM, Kevin Benton  wrote:
>>
>>> Web tier can communicate with anything except for the DB.
>>> App tier can only communicate with Web and DB.
>>> DB can communicate with App.
>>>
>>> These high-level constraints can then be implemented as security groups
>>> like you showed, or network hardware ACLs like I had shown.
>>> But if you start with the security groups API, you are forcing it to be
>>> implemented there.
>>>
>>>
>> I still have no idea what group based policy is buying us then. It seems
>> to me that the key point we've identified going back and forth here is that
>> the difference between the current model and the GBP model is that GBP
>> constrains topology, which allows us to write these types of enforcement
>> rules. The reason we want this is that it yields performance
>> optimizations (for some reason, which I don't think we've gotten into).
>> Would you agree this is accurate?
>>
>> Honestly, I know a lot of work has been put into this. I haven't said I'm
>> for or against it either. I'm really just trying to understand what the
>> motivation for this is and why it makes Neutron better.
>>
>> Best,
>>
>> Aaron
>>
>>
>>
>>>
>>> On Wed, Aug 6, 2014 at 6:06 PM, Aaron Rosen 
>>> wrote:
>>>



 On Wed, Aug 6, 2014 at 4:46 PM, Kevin Benton  wrote:

> That's the point. By using security groups, you are forcing a certain
> kind of enforcement that must be honored and might not be necessary if the
> original intent was just to isolate between groups. In the example you
> gave, it cannot be implemented on the router without violating the
> constraints of the security group.
>
>
> Hi Kevin,

 Mind proposing an alternative example then. The only way I can see this
 claim to be made is that Group Based Policy is actually limiting what a
 tenant's desired topology can be?

 Thanks,

 Aaron



>  On Wed, Aug 6, 2014 at 5:39 PM, Aaron Rosen 
> wrote:
>
>>
>>
>>
>> On Wed, Aug 6, 2014 at 4:18 PM, Kevin Benton 
>> wrote:
>>
>>> >Given this information I don't see any reason why the backend
>>> system couldn't do enforcement at the logical router and if it did so
>>> neither parties would know.
>>>
>>> With security groups you are specifying that nothing can contact
>>> these dev

Re: [openstack-dev] introducing cyclops

2014-08-07 Thread Endre Karlson
Hi, are you on IRC? :)

Endre


2014-08-07 12:01 GMT+02:00 Piyush Harsh :

> Dear All,
>
> Let me use my first post to this list to introduce Cyclops and initiate a
> discussion about the possibility of this platform becoming a future incubated
> project in OpenStack.
>
> We at Zurich University of Applied Sciences have an open-source Python
> project (Apache 2 license) that aims to provide a platform to do
> rating-charging-billing over ceilometer. We call it Cyclops (A Charging
> platform for OPenStack CLouds).
>
> The initial proof of concept code can be accessed here:
> https://github.com/icclab/cyclops-web &
> https://github.com/icclab/cyclops-tmanager
>
> *Disclaimer: This is not the best code out there, but will be refined and
> documented properly very soon!*
>
> A demo video from really early days of the project is here:
> https://www.youtube.com/watch?v=ZIwwVxqCio0 and since this video was
> made, several bug fixes and features were added.
>
> The idea presentation was done at Swiss Open Cloud Day at Bern and the
> talk slides can be accessed here:
> http://piyush-harsh.info/content/ocd-bern2014.pdf, and more recently the
> research paper on the idea was published in 2014 World Congress in Computer
> Science (Las Vegas), which can be accessed here:
> http://piyush-harsh.info/content/GCA2014-rcb.pdf
>
> I was wondering if our effort is something that the OpenStack
> Ceilometer/Telemetry release team would be interested in?
>
> I do understand that initially a rating-charging-billing service may have
> been left out by choice, as it would need to be tightly coupled with
> existing CRM/Billing systems, but the intended Cyclops design is a distributed,
> service-oriented architecture with each component allowing for possible
> integration with external software via REST APIs. And therefore Cyclops by
> design is CRM/Billing platform agnostic. Although Cyclops PoC
> implementation does include a basic bill generation module.
>
> We in our team are committed to this development effort and we will have
> resources (interns, students, researchers) work on features and improve the
> code-base for a foreseeable number of years to come.
>
> Do you see a chance that our efforts could make it in as an incubated project
> in OpenStack within Ceilometer?
>
> I really would like to hear back from you, comments, suggestions, etc.
>
> Kind regards,
> Piyush.
> ___
> Dr. Piyush Harsh, Ph.D.
> Researcher, InIT Cloud Computing Lab
> Zurich University of Applied Sciences (ZHAW)
> [Site] http://piyush-harsh.info
> [Research Lab] http://www.cloudcomp.ch/
> Fax: +41(0)58.935.7403 GPG Keyid: 9C5A8838
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-07 Thread John Griffith
On Thu, Aug 7, 2014 at 6:20 AM, Sean Dague  wrote:

> On 08/07/2014 07:58 AM, Angus Salkeld wrote:
> > On Wed, 2014-08-06 at 15:48 -0600, John Griffith wrote:
> >> I have to agree with Duncan here.  I also don't know if I fully
> >> understand the limit in options.  Stress test seems like it
> >> could/should be different (again overlap isn't a horrible thing) and I
> >> don't see it as siphoning off resources so not sure of the issue.
> >>  We've become quite wrapped up in projects, programs and the like
> >> lately and it seems to hinder forward progress more than anything
> >> else.
> > h
> >>
> >> I'm also not convinced that Tempest is where all things belong, in
> >> fact I've been thinking more and more that a good bit of what Tempest
> >> does today should fall more on the responsibility of the projects
> >> themselves.  For example functional testing of features etc, ideally
> >> I'd love to have more of that fall on the projects and their
> >> respective teams.  That might even be something as simple to start as
> >> saying "if you contribute a new feature, you have to also provide a
> >> link to a contribution to the Tempest test-suite that checks it".
> >>  Sort of like we do for unit tests, cross-project tracking is
> >> difficult of course, but it's a start.  The other idea is maybe
> >> functional test harnesses live in their respective projects.
> >>
> >
> > Couldn't we reduce the scope of tempest (and rally): make tempest the
> > API verification tool and rally the scenario/performance tester? Make each
> > tool do less, but better. My point being to split the projects by
> > functionality so there is less need to share code and stomp on each
> > other's toes.
>
> Who is going to propose the split? Who is going to manage the
> coordination of the split? What happens when there is disagreement about
> the location of something like booting and listing a server -
>
> https://github.com/stackforge/rally/blob/master/rally/benchmark/scenarios/nova/servers.py#L44-L64
>
> Because today we've got fundamental disagreements between the teams on
> scope, long standing (as seen in these threads), so this won't
> organically solve itself.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
Last paragraph regarding the "split" wasn't mine, but...  I think it's
good for people to express ideas on the ML like this.  It may not be
feasible, but I think the more people you have thinking about how to move
forward and expressing their ideas (even if they don't work) is a good and
healthy thing.

As far as proposing a split, there's obviously a ton of detail that needs
to be considered here and honestly it may just be a horrible idea right
from the start.  That being said, to answer some of your questions, quite
honestly IMO these are some of the things that I think it would be good for
the TC to take an active role in.  Seems reasonable to have bodies like the
TC work on governing and laying out technical process and direction.

Anyway, I think the bottom line is that better collaboration is something
we need to work on.  That in and of itself would likely have thwarted
this thread to begin with (and I think that was one of the key points
it tried to make).

As far as the question at hand of Rally... I would surely hope that there's
a way to for QA and Rally teams to actually collaborate and work together
on this.  I also understand completely that lack of collaboration is
probably what got us to this point in the first place.  It just seems to me
that there's a middle ground somewhere but it's going to require some give
and take from both sides.

By the way, personally I feel that the movement over the last year that
everybody needs to have their own program or project is a big problem.  The
other thing that nobody wants to consider is why not just put some code on
github independent of OpenStack?  Contribute things to the projects and
build cool things for OpenStack outside of OpenStack.  Make sense?

Questions about functional test responsibilities for projects etc should
probably be a future discussion if there's interest and if it makes any
sense at all (ie summit topic?).
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QA] Enabling full neutron Job

2014-08-07 Thread Armando M.
Hi Salvatore,

I did notice the issue and I flagged this bug report:

https://bugs.launchpad.net/nova/+bug/1352141

I'll follow up.

Cheers,
Armando


On 7 August 2014 01:34, Salvatore Orlando  wrote:

> I had to put the patch back on WIP because yesterday a bug causing a 100%
> failure rate slipped in.
> It should be an easy fix, and I'm already working on it.
> Situations like this, exemplified by [1], are a bit frustrating for all the
> people working on improving neutron quality.
> Now, if you allow me a little rant: as Neutron is receiving a lot of
> attention for all the ongoing discussion regarding this group policy stuff,
> would it be possible for us to also receive a bit of attention to ensure both
> the full job and the grenade one are switched to voting before the juno-3
> review crunch?
>
> We've already had the attention of the QA team, it would probably good if
> we could get the attention of the infra core team to ensure:
> 1) the jobs are also deemed by them stable enough to be switched to voting
> 2) the relevant patches for openstack-infra/config are reviewed
>
> Regards,
> Salvatore
>
> [1]
> http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwie3UnbWVzc2FnZSc6IHUnRmxvYXRpbmcgaXAgcG9vbCBub3QgZm91bmQuJywgdSdjb2RlJzogNDAwfVwiIEFORCBidWlsZF9uYW1lOlwiY2hlY2stdGVtcGVzdC1kc3ZtLW5ldXRyb24tZnVsbFwiIEFORCBidWlsZF9icmFuY2g6XCJtYXN0ZXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiMTcyODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNzQwMDExMDIwNywibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==
>
>
> On 23 July 2014 14:59, Matthew Treinish  wrote:
>
>> On Wed, Jul 23, 2014 at 02:40:02PM +0200, Salvatore Orlando wrote:
>> > Here I am again bothering you with the state of the full job for
>> Neutron.
>> >
>> > The patch for fixing an issue in nova's server external events extension
>> > merged yesterday [1]
>> > We do not have yet enough data points to make a reliable assessment,
>> but of
>> > out 37 runs since the patch merged, we had "only" 5 failures, which puts
>> > the failure rate at about 13%
>> >
>> > This is ugly compared with the current failure rate of the smoketest
>> (3%).
>> > However, I think it is good enough to start making the full job voting
>> at
>> > least for neutron patches.
>> > Once we'll be able to bring down failure rate to anything around 5%, we
>> can
>> > then enable the job everywhere.
>>
>> I think that sounds like a good plan. I'm also curious how the failure
>> rates
>> compare to the other non-neutron jobs; that might be a useful comparison
>> too
>> for deciding when to flip the switch everywhere.
>>
>> >
>> > As much as I hate asymmetric gating, I think this is a good compromise
>> for
>> > avoiding developers working on other projects are badly affected by the
>> > higher failure rate in the neutron full job.
>>
>> So we discussed this during the project meeting a couple of weeks ago [3]
>> and
>> there was a general agreement that doing it asymmetrically at first would
>> be
>> better. Everyone should be wary of the potential harms with doing it
>> asymmetrically and I think priority will be given to fixing issues that
>> block
>> the neutron gate should they arise.
>>
>> > I will therefore resume work on [2] and remove the WIP status as soon
>> as I
>> > can confirm a failure rate below 15% with more data points.
>> >
>>
>> Thanks for keeping on top of this Salvatore. It'll be good to finally be
>> at
>> least partially gating with a parallel job.
>>
>> -Matt Treinish
>>
>> >
>> > [1] https://review.openstack.org/#/c/103865/
>> > [2] https://review.openstack.org/#/c/88289/
>> [3]
>> http://eavesdrop.openstack.org/meetings/project/2014/project.2014-07-08-21.03.log.html#l-28
>>
>> >
>> >
>> > On 10 July 2014 11:49, Salvatore Orlando  wrote:
>> >
>> > >
>> > >
>> > >
>> > > On 10 July 2014 11:27, Ihar Hrachyshka  wrote:
>> > >
>> > >> -BEGIN PGP SIGNED MESSAGE-
>> > >> Hash: SHA512
>> > >>
>> > >> On 10/07/14 11:07, Salvatore Orlando wrote:
>> > >> > The patch for bug 1329564 [1] merged about 11 hours ago. From [2]
>> > >> > it seems there has been an improvement on the failure rate, which
>> > >> > seem to have dropped to 25% from over 40%. Still, since the patch
>> > >> > merged there have been 11 failures already in the full job out of
>> > >> > 42 jobs executed in total. Of these 11 failures: - 3 were due to
>> > >> > problems in the patches being tested - 1 had the same root cause as
>> > >> > bug 1329564. Indeed the related job started before the patch merged
>> > >> > but finished after. So this failure "doesn't count". - 1 was for an
>> > >> > issue introduced about a week ago which actually causing a lot of
>> > >> > failures in the full job [3]. Fix should be easy for it; however
>> > >> > given the nature of the test we might even skip it while it's
>> > >> > fixed. - 3 were for bug 1333654 [4]; for this bug discussion is
>> > >> > going on on gerrit regarding the most suitable approach. - 3 were
>> > >> > fo

Re: [openstack-dev] [Neutron] Bug squashing day

2014-08-07 Thread Kyle Mestery
On Thu, Aug 7, 2014 at 9:31 AM, Eugene Nikanorov
 wrote:
> Hi neutron folks,
>
> Today should have been 'Bug squashing day' where we go over existing bugs
> filed for the project and triage/prioritize/comment on them.
>
> I've created an etherpad with (hopefully) full list of neutron bugs:
> https://etherpad.openstack.org/p/neutron-bug-squashing-day-2014-08-07
>
> I was able to walk through a portion of the almost one thousand bugs we have.
> My target was to reduce the number of open bugs, so some of them I moved to
> incomplete/invalid/won't fix state (not many though); then, to reduce the
> number of high importance bugs, especially if they're hanging for too long.
>
> As you can see, bugs in the etherpad are sorted by importance.
> Some of my observations include:
> - almost all bugs with High priority really seem like issues we should be
> fixing.
> In many cases submitter or initial contributor abandoned his work on the
> bug...
> - there are a couple of important bugs related to DVR where previously
> working stuff
> is broken, but in all cases there are DVR subteam members working on those,
> so we're good here so far.
>
> I also briefly described resolution for each bug, where 'n/a' means that bug
> just needs to be fixed/work should be continued without any change to state.
> I'm planning to continue to go over this list and expect more bugs will go
> away which previously have been marked as medium/low or wishlist.
>
> If anyone is willing to help - you're welcome!
>
Thanks for setting this up, Eugene! I've been down with a nasty cold
since yesterday and am still not feeling well today. I will try to be on
#openstack-neutron off and on today. I encourage other Neutron cores
to be there as well so we can coordinate the work and help new bugfix
submitters.

Thanks!
Kyle

> Thanks,
> Eugene.
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-08-07 Thread Matt Riedemann



On 7/18/2014 2:55 AM, Daniel P. Berrange wrote:

On Thu, Jul 17, 2014 at 12:13:13PM -0700, Johannes Erdfelt wrote:

On Thu, Jul 17, 2014, Russell Bryant  wrote:

On 07/17/2014 02:31 PM, Johannes Erdfelt wrote:

It kind of helps. It's still implicit in that you need to look at what
features are enabled at what version and determine if it is being
tested.

But the behavior is still broken since code is still getting merged that
isn't tested. Saying that is by design doesn't help the fact that
potentially broken code exists.


Well, it may not be tested in our CI yet, but that doesn't mean it's not
tested some other way, at least.


I'm skeptical. Unless it's tested continuously, it'll likely break at
some time.

We seem to be selectively choosing the continuous part of CI. I'd
understand if it was reluctantly because of immediate problems but
this reads like it's acceptable long-term too.


I think there are some good ideas in other parts of this thread to look
at how we can more reguarly rev libvirt in the gate to mitigate this.

There's also been work going on to get Fedora enabled in the gate, which
is a distro that regularly carries a much more recent version of libvirt
(among other things), so that's another angle that may help.


That's an improvement, but I'm still not sure I understand what the
workflow will be for developers.


That's exactly why we want to have the CI system using newer libvirt
than it does today. The patch to cap the version doesn't change what
is tested - it just avoids users hitting untested paths by default
so they're not exposed to any potential instability until we actually
get a more updated CI system


Do they need to now wait for Fedora to ship a new version of libvirt?
Fedora is likely to help the problem because of how quickly it generally
ships new packages and because of its release schedule, but it would still
hold back some features?


Fedora has an add-on repository ("virt-preview") which contains the
latest QEMU + libvirt RPMs for the current stable release - this lags
upstream by a matter of days, so there would be no appreciable delay
in getting access to newest possible releases.


Also, this explanation doesn't answer my question about what happens
when the gate finally gets around to actually testing those potentially
broken code paths.


I think we would just test out the bump and make sure it's working fine
before it's enabled for every job.  That would keep potential breakage
localized to people working on debugging/fixing it until it's ready to go.


The downside is that new features for libvirt could be held back by
needing to fix other unrelated features. This is certainly not a bigger
problem than users potentially running untested code simply because they
are on a newer version of libvirt.

I understand we have an immediate problem and I see the short-term value
in the libvirt version cap.

I try to look at the long-term and unless it's clear to me that a
solution is proposed to be short-term and there are some understood
trade-offs then I'll question the long-term implications of it.


Once CI system is regularly tracking upstream releases within a matter of
days, then the version cap is a total non-issue from a feature
availability POV. It is none the less useful in the long term, for example,
if there were a problem we miss in testing, which a deployer then hits in
the field, the version cap would allow them to get their deployment to
avoid use of the newer libvirt feature, which could be a useful workaround
for them until a fix is available.

Regards,
Daniel



FYI, there is a proposed revert of the libvirt version cap change 
mentioned previously in this thread [1].


Just bringing it up again here since the discussion should happen in the 
ML rather than gerrit.


[1] https://review.openstack.org/#/c/110754/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Package python-django-pyscss dependencies on CentOS

2014-08-07 Thread Matthias Runge

On 07/08/14 11:11, Timur Sufiev wrote:

Thanks,

now it is clear that this requirement can be safely dropped.


As I said, it's required during build time, if you execute the tests
during build.
It's not a runtime dependency; the page you were referring to is from 
the build system.


Matthias

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Bug squashing day

2014-08-07 Thread Eugene Nikanorov
Hi neutron folks,

Today should have been 'Bug squashing day' where we go over existing bugs
filed for the project and triage/prioritize/comment on them.

I've created an etherpad with (hopefully) full list of neutron bugs:
https://etherpad.openstack.org/p/neutron-bug-squashing-day-2014-08-07

I was able to walk through a portion of the almost one thousand bugs we have.
My target was to reduce the number of open bugs, so some of them I moved to
incomplete/invalid/won't fix state (not many though); then, to reduce the
number of high importance bugs, especially if they're hanging for too long.

As you can see, bugs in the etherpad are sorted by importance.
Some of my observations include:
- almost all bugs with High priority really seem like issues we should be
fixing.
In many cases submitter or initial contributor abandoned his work on the
bug...
- there are a couple of important bugs related to DVR where previously
working stuff
is broken, but in all cases there are DVR subteam members working on those,
so we're good here so far.

I also briefly described resolution for each bug, where 'n/a' means that
bug just needs to be fixed/work should be continued without any change to
state.
I'm planning to continue to go over this list and expect more bugs will go
away which previously have been marked as medium/low or wishlist.

If anyone is willing to help - you're welcome!

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-network stuck at get semaphores lock when startup

2014-08-07 Thread Alex Xu

Oops, thanks

On 2014-08-07 22:08, Ben Nemec wrote:

Unfortunately this is a known issue.  We're working on a fix:
https://bugs.launchpad.net/oslo/+bug/1327946

On 08/07/2014 03:57 AM, Alex Xu wrote:

When I start up nova-network, it gets stuck trying to acquire the lock for
ebtables.

@utils.synchronized('ebtables', external=True)
def ensure_ebtables_rules(rules, table='filter'):
    .

Checking the code, I found that when utils.synchronized is invoked without
the lock_path parameter, the code will try to use a POSIX semaphore.

But a POSIX semaphore won't be released even if the process crashes. Should
we fix it? I see a lot of calls to synchronized without lock_path.

Thanks
Alex


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-network stuck at get semaphores lock when startup

2014-08-07 Thread Ben Nemec
Unfortunately this is a known issue.  We're working on a fix:
https://bugs.launchpad.net/oslo/+bug/1327946
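In the meantime, the difference between the two lock types can be sketched like this — a standalone illustration using a file-based `fcntl` lock, which the kernel drops automatically if the holder dies. This is not the actual oslo fix, and the lock path is made up:

```python
import fcntl

LOCK_PATH = "/tmp/example-ebtables.lock"  # hypothetical lock_path location

def synchronized_by_file(func):
    """Serialize calls via an fcntl lock on LOCK_PATH.

    Unlike a POSIX semaphore, an fcntl lock is released by the kernel
    when the holding process exits for any reason, so a crash while
    holding the lock cannot leave other processes stuck forever.
    """
    def wrapper(*args, **kwargs):
        with open(LOCK_PATH, "w") as f:
            fcntl.flock(f, fcntl.LOCK_EX)   # blocks until acquired
            try:
                return func(*args, **kwargs)
            finally:
                fcntl.flock(f, fcntl.LOCK_UN)
    return wrapper

@synchronized_by_file
def ensure_rules():
    return "rules applied"

print(ensure_rules())   # -> rules applied
```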

On 08/07/2014 03:57 AM, Alex Xu wrote:
> When I start up nova-network, it gets stuck trying to acquire the lock for
> ebtables.
> 
> @utils.synchronized('ebtables', external=True)
> def ensure_ebtables_rules(rules, table='filter'):
>     .
> 
> Checking the code, I found that when utils.synchronized is invoked without
> the lock_path parameter, the code will try to use a POSIX semaphore.
> 
> But a POSIX semaphore won't be released even if the process crashes. Should
> we fix it? I see a lot of calls to synchronized without lock_path.
> 
> Thanks
> Alex
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread John Griffith
On Thu, Aug 7, 2014 at 7:33 AM, Anne Gentle  wrote:

>
>
>
> On Thu, Aug 7, 2014 at 8:20 AM, Russell Bryant  wrote:
>
>> On 08/07/2014 09:07 AM, Sean Dague wrote:
>> > I think the difference is
>> > slot selection would just be Nova drivers. I
>> > think there is an assumption in the old system that everyone in Nova
>> > core wants to prioritize the blueprints. I think there are a bunch of
>> > folks in Nova core that are happy having signaling from Nova drivers on
>> > high priority things to review. (I know I'm in that camp.)
>> >
>> > Lacking that we all have picking algorithms to hack away at the 500 open
>> > reviews. Which basically means it's a giant random queue.
>> >
>> > Having a few blueprints that *everyone* is looking at also has the
>> > advantage that the context for the bits in question will tend to be
>> > loaded into multiple people's heads at the same time, so is something
>> > that's discussable.
>> >
>> > Will it fix the issue, not sure, but it's an idea.
>>
>> OK, got it.  So, success critically depends on nova-core being willing
>> to take review direction and priority setting from nova-drivers.  That
>> sort of assumption is part of why I think agile processes typically
>> don't work in open source.  We don't have the ability to direct people
>> with consistent and reliable results.
>>
>> I'm afraid if people doing the review are not directly involved in at
>> least ACKing the selection and committing to review something, putting
>> stuff in slots seems futile.
>>
>>
> My original thinking was I'd set aside a "meeting time" to review specs
> especially for doc issues and API designs. What I found quickly was that
> the 400+ queue in one project alone was not only daunting but felt like I
> wasn't going to make a dent as a single person, try as I may.
>
> I did my best but would appreciate any change in process to help with
> prioritization. I'm pretty sure it will help someone like me, looking at
> cross-project queues of specs, to know what to review first, second, third,
> and what to circle back on.
>
>
>> --
>> Russell Bryant
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Seems everybody that's been around a while has noticed "issues" this
release and have talked about it, thanks Thierry for putting it together so
well and kicking off the ML thread here.

I'd agree with everything that you stated, I've also floated the idea this
past week with a few members of the Core Cinder team to have an "every
other" release for new drivers submissions in Cinder (I'm expecting this to
be a HUGELY popular proposal [note sarcastic tone]).

There are three things that have just crushed productivity and motivation
in Cinder this release (IMO):
1. Overwhelming number of drivers (tactical contributions)
2. Overwhelming amount of churn, literally hundreds of little changes to
modify docstrings, comments etc but no real improvements to code
3. A new sense of pride in hitting the -1 button on reviews.  A large
number of reviews these days seem to be -1 due to punctuation or
misspelling in comments and docstrings.  There's also a lot of "my way of
writing this method is better because it's *clever*" taking place.

In Cinder's case I don't think new features is a problem, in fact we can't
seem to get new features worked on and released because of all the other
distractions.  That being said doing a maintenance or hardening only type
of release is for sure good with me.

Anyway, I've had some plans to talk about how we might fix some of this in
Cinder at next week's sprint.  If there's a broader community effort along
these lines that's even better.

Thanks,
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Anne Gentle
On Thu, Aug 7, 2014 at 8:20 AM, Russell Bryant  wrote:

> On 08/07/2014 09:07 AM, Sean Dague wrote:
> > I think the difference is
> > slot selection would just be Nova drivers. I
> > think there is an assumption in the old system that everyone in Nova
> > core wants to prioritize the blueprints. I think there are a bunch of
> > folks in Nova core that are happy having signaling from Nova drivers on
> > high priority things to review. (I know I'm in that camp.)
> >
> > Lacking that we all have picking algorithms to hack away at the 500 open
> > reviews. Which basically means it's a giant random queue.
> >
> > Having a few blueprints that *everyone* is looking at also has the
> > advantage that the context for the bits in question will tend to be
> > loaded into multiple people's heads at the same time, so is something
> > that's discussable.
> >
> > Will it fix the issue, not sure, but it's an idea.
>
> OK, got it.  So, success critically depends on nova-core being willing
> to take review direction and priority setting from nova-drivers.  That
> sort of assumption is part of why I think agile processes typically
> don't work in open source.  We don't have the ability to direct people
> with consistent and reliable results.
>
> I'm afraid if people doing the review are not directly involved in at
> least ACKing the selection and committing to review something, putting
> stuff in slots seems futile.
>
>
My original thinking was I'd set aside a "meeting time" to review specs
especially for doc issues and API designs. What I found quickly was that
the 400+ queue in one project alone was not only daunting but felt like I
wasn't going to make a dent as a single person, try as I may.

I did my best but would appreciate any change in process to help with
prioritization. I'm pretty sure it will help someone like me, looking at
cross-project queues of specs, to know what to review first, second, third,
and what to circle back on.


> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Sean Dague
On 08/06/2014 11:51 AM, Eoghan Glynn wrote:
> 
> 
>> Hi everyone,
>>
>> With the incredible growth of OpenStack, our development community is
>> facing complex challenges. How we handle those might determine the
>> ultimate success or failure of OpenStack.
>>
>> With this cycle we hit new limits in our processes, tools and cultural
>> setup. This resulted in new limiting factors on our overall velocity,
>> which is frustrating for developers. This resulted in the burnout of key
>> firefighting resources. This resulted in tension between people who try
>> to get specific work done and people who try to keep a handle on the big
>> picture.
>>
>> It all boils down to an imbalance between strategic and tactical
>> contributions. At the beginning of this project, we had a strong inner
>> group of people dedicated to fixing all loose ends. Then a lot of
>> companies got interested in OpenStack and there was a surge in tactical,
>> short-term contributions. We put on a call for more resources to be
>> dedicated to strategic contributions like critical bugfixing,
>> vulnerability management, QA, infrastructure... and that call was
>> answered by a lot of companies that are now key members of the OpenStack
>> Foundation, and all was fine again. But OpenStack contributors kept on
>> growing, and we grew the narrowly-focused population way faster than the
>> cross-project population.
>>
>> At the same time, we kept on adding new projects to incubation and to
>> the integrated release, which is great... but the new developers you get
>> on board with this are much more likely to be tactical than strategic
>> contributors. This also contributed to the imbalance. The penalty for
>> that imbalance is twofold: we don't have enough resources available to
>> solve old, known OpenStack-wide issues; but we also don't have enough
>> resources to identify and fix new issues.
>>
>> We have several efforts under way, like calling for new strategic
>> contributors, driving towards in-project functional testing, making
>> solving rare issues a more attractive endeavor, or hiring resources
>> directly at the Foundation level to help address those. But there is a
>> topic we haven't raised yet: should we concentrate on fixing what is
>> currently in the integrated release rather than adding new projects ?
>>
>> We seem to be unable to address some key issues in the software we
>> produce, and part of it is due to strategic contributors (and core
>> reviewers) being overwhelmed just trying to stay afloat of what's
>> happening. For such projects, is it time for a pause ? Is it time to
>> define key cycle goals and defer everything else ?
>>
>> On the integrated release side, "more projects" means stretching our
>> limited strategic resources more. Is it time for the Technical Committee
>> to more aggressively define what is "in" and what is "out" ? If we go
>> through such a redefinition, shall we push currently-integrated projects
>> that fail to match that definition out of the "integrated release" inner
>> circle ?
>>
>> The TC discussion on what the integrated release should or should not
>> include has always been informally going on. Some people would like to
>> strictly limit to end-user-facing projects. Some others suggest that
>> "OpenStack" should just be about integrating/exposing/scaling smart
>> functionality that lives in specialized external projects, rather than
>> trying to outsmart those by writing our own implementation. Some others
>> are advocates of carefully moving up the stack, and to resist from
>> further addressing IaaS+ services until we "complete" the pure IaaS
>> space in a satisfactory manner. Some others would like to build a
>> roadmap based on AWS services. Some others would just add anything that
>> fits the incubation/integration requirements.
>>
>> On one side this is a long-term discussion, but on the other we also
>> need to make quick decisions. With 4 incubated projects, and 2 new ones
>> currently being proposed, there are a lot of people knocking at the door.
>>
>> Thanks for reading this braindump this far. I hope this will trigger the
>> open discussions we need to have, as an open source project, to reach
>> the next level.
> 
> 
> Thanks Thierry, for this timely post.
> 
> You've touched on multiple trains-of-thought that could indeed
> justify separate threads of their own.
> 
> I agree with your read on the diverging growth rates in the
> strategic versus the tactical elements of the community.
> 
> I would also be supportive of the notion of taking a cycle out to
> fully concentrate on solving existing quality/scaling/performance
> issues, if that's what you meant by pausing to define key cycle
> goals while deferring everything else.
> 
> Though FWIW I think scaling back the set of currently integrated
> projects is not the appropriate solution to the problem of over-
> stretched strategic resources on the QA/infra side of the house.
> 
> Rather, I think the proposed move to in-projec

Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Russell Bryant
On 08/07/2014 09:07 AM, Sean Dague wrote:
> I think the difference is
> slot selection would just be Nova drivers. I
> think there is an assumption in the old system that everyone in Nova
> core wants to prioritize the blueprints. I think there are a bunch of
> folks in Nova core that are happy having signaling from Nova drivers on
> high priority things to review. (I know I'm in that camp.)
> 
> Lacking that we all have picking algorithms to hack away at the 500 open
> reviews. Which basically means it's a giant random queue.
> 
> Having a few blueprints that *everyone* is looking at also has the
> advantage that the context for the bits in question will tend to be
> loaded into multiple people's heads at the same time, so is something
> that's discussable.
> 
> Will it fix the issue, not sure, but it's an idea.

OK, got it.  So, success critically depends on nova-core being willing
to take review direction and priority setting from nova-drivers.  That
sort of assumption is part of why I think agile processes typically
don't work in open source.  We don't have the ability to direct people
with consistent and reliable results.

I'm afraid if people doing the review are not directly involved in at
least ACKing the selection and committing to review something, putting
stuff in slots seems futile.
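For what it's worth, the mechanics of the "runway slots" proposal are just a
WIP limit: at most N blueprints are in flight, and a new one is pulled in only
when a slot frees up. A toy sketch (names and behaviour entirely illustrative,
not any proposed tooling):

```python
import threading


class Runways:
    """Toy model of the slots/runways proposal: a fixed review WIP limit."""

    def __init__(self, slots=10):
        # Bounded so that releasing more than we acquired raises an error.
        self._slots = threading.BoundedSemaphore(slots)

    def start_review(self, blueprint):
        # Non-blocking acquire: a blueprint either gets a runway now
        # or waits in the backlog until one frees up.
        if self._slots.acquire(blocking=False):
            return f'{blueprint}: under review'
        return f'{blueprint}: queued, no free runway'

    def finish_review(self):
        # Merging or abandoning a blueprint frees its runway.
        self._slots.release()
```

The sketch makes the trade-off concrete: the limit guarantees focus on at most
N items, but says nothing about who pulls what — which is exactly the
prioritization/ACK problem discussed above.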

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Sean Dague
On 08/07/2014 08:54 AM, Russell Bryant wrote:
> On 08/07/2014 04:49 AM, Thierry Carrez wrote:
>> Stefano Maffulli wrote:
>>> On Wed 06 Aug 2014 02:10:23 PM PDT, Michael Still wrote:
  - we rate limit the total number of blueprints under code review at
 any one time to a fixed number of "slots". I secretly prefer the term
 "runway", so I am going to use that for the rest of this email. A
 suggested initial number of runways was proposed at ten.
>>>
>>> oh, I like the 'slots/runway model'. Sounds to me like kanban (in 
>>> the Toyota sense not the hipster developer sense).
>>>
>>> A light in my head just went on.
>>>
>>> Let me translate what you're thinking about in other terms: the 
>>> slot/runway model would switch what is now a push model into a "pull" 
>>> model. Currently we have patches coming in, pushed up for review. We 
>>> have then on gerrit reviewers and core reviewers shuffling through 
>>> these changesets, doing work and approve/comment. The reviewers have 
>>> little to no way to notice when they're overloaded and managers have no 
>>> way either. There is no way to identify when the process is suffering, 
>>> slowing down or not satisfying demand, if not when the backlog blows 
>>> up. As recent discussions demonstrate, this model is failing under our 
>>> growth.
>>>
>>> By switching to a model where we have a set of slots/runway (buckets, 
>>> in Toyota's terminology) reviewers would have a clear way to *pull* new 
>>> reviews into their workstations to be processed. It's as simple as a 
>>> supermarket aisle: when there is no more pasta on the shelf, a clerk 
>>> goes in the backend and gets more pasta to refurbish the shelf. There 
>>> is no sophisticated algorithm to predict demand: it's the demand of 
>>> pasta that drives new pull requests (of pasta or changes to review).
>>>
>>> This pull mechanism would help make it very visible where the 
>>> bottlenecks are. At Toyota, for example, the amount of kanbans is the 
>>> visible way to understand the capacity of the plant. The amount of 
>>> slots/runways would probably give us similar overview of the capacity 
>>> of each project and give us tools to solve bottlenecks before they 
>>> become emergencies.
>>
>> As an ex factory IT manager, I feel compelled to comment on that :)
>> You're not really introducing a successful Kanban here, you're just
>> clarifying that there should be a set number of workstations.
>>
>> Our current system is like a gigantic open space with hundreds of
>> half-finished pieces, and a dozen workers keep on going from one to
>> another with no strong pattern. The proposed system is to limit the
>> number of half-finished pieces fighting for the workers attention at any
>> given time, by setting a clear number of workstations.
>>
>> A true Kanban would be an interface between developers and reviewers,
>> where reviewers define what type of change they have to review to
>> complete production objectives, *and* developers would strive to produce
>> enough to keep the kanban above the red line, but not too much (which
>> would be piling up waste).
>>
>> Without that discipline, Kanbans are useless. Unless the developers
>> adapt what they work on based on release objectives, you don't really
>> reduce waste/inventory at all, it just piles up waiting for available
>> "runway slots". As I said in my original email, the main issue here is
>> the imbalance between too many people proposing changes and not enough
>> people caring about the project itself enough to be trusted with core
>> reviewers rights.
>>
>> This proposal is not solving that, so it is not the miracle cure that
>> will end all developers frustration, nor is it turning our push-based
>> model into a sane pull-based one. The only way to be truly pull-based is
>> to define a set of production objectives and have those objectives
>> trickle up to the developers so that they don't work on something else.
>> The solution is about setting release cycle goals and strongly
>> communicating that everything out of those goals is clearly priority 2.
>>
>> Now I'm not saying this is a bad idea. Having too many reviews to
>> consider at the same time dilutes review attention to the point where we
>> don't finalize anything. Having runway slots makes sure there is a focus
>> on a limited set of features at a time, which increases the chances that
>> those get finalized.
>>
> 
> I found this response to be very insightful, thank you.
> 
> I feel like this idea is essentially trying to figure out how to apply
> an agile process to nova.  Lots and lots of people have tried to figure
> out how to make it work for open source, and there's several reasons
> that it just doesn't.  This came up in a thread last year here:
> 
> http://lists.openstack.org/pipermail/openstack-dev/2013-April/007872.html/
> 
> With that said, I really do appreciate the hunger to find new and better
> ways to manage our work.  It's certainly needed and I hope to
> conti

Re: [openstack-dev] [all] The future of the integrated release

2014-08-07 Thread Russell Bryant
On 08/07/2014 04:49 AM, Thierry Carrez wrote:
> Stefano Maffulli wrote:
>> On Wed 06 Aug 2014 02:10:23 PM PDT, Michael Still wrote:
>>>  - we rate limit the total number of blueprints under code review at
>>> any one time to a fixed number of "slots". I secretly prefer the term
>>> "runway", so I am going to use that for the rest of this email. A
>>> suggested initial number of runways was proposed at ten.
>>
>> oh, I like the 'slots/runway model'. Sounds to me like kanban (in 
>> the Toyota sense not the hipster developer sense).
>>
>> A light in my head just went on.
>>
>> Let me translate what you're thinking about in other terms: the 
>> slot/runway model would switch what is now a push model into a "pull" 
>> model. Currently we have patches coming in, pushed up for review. We 
>> have then on gerrit reviewers and core reviewers shuffling through 
>> these changesets, doing work and approve/comment. The reviewers have 
>> little to no way to notice when they're overloaded and managers have no 
>> way either. There is no way to identify when the process is suffering, 
>> slowing down or not satisfying demand, if not when the backlog blows 
>> up. As recent discussions demonstrate, this model is failing under our 
>> growth.
>>
>> By switching to a model where we have a set of slots/runway (buckets, 
>> in Toyota's terminology) reviewers would have a clear way to *pull* new 
>> reviews into their workstations to be processed. It's as simple as a 
>> supermarket aisle: when there is no more pasta on the shelf, a clerk 
>> goes in the backend and gets more pasta to refurbish the shelf. There 
>> is no sophisticated algorithm to predict demand: it's the demand of 
>> pasta that drives new pull requests (of pasta or changes to review).
>>
>> This pull mechanism would help make it very visible where the 
>> bottlenecks are. At Toyota, for example, the amount of kanbans is the 
>> visible way to understand the capacity of the plant. The amount of 
>> slots/runways would probably give us similar overview of the capacity 
>> of each project and give us tools to solve bottlenecks before they 
>> become emergencies.
> 
> As an ex factory IT manager, I feel compelled to comment on that :)
> You're not really introducing a successful Kanban here, you're just
> clarifying that there should be a set number of workstations.
> 
> Our current system is like a gigantic open space with hundreds of
> half-finished pieces, and a dozen workers keep on going from one to
> another with no strong pattern. The proposed system is to limit the
> number of half-finished pieces fighting for the workers attention at any
> given time, by setting a clear number of workstations.
> 
> A true Kanban would be an interface between developers and reviewers,
> where reviewers define what type of change they have to review to
> complete production objectives, *and* developers would strive to produce
> enough to keep the kanban above the red line, but not too much (which
> would be piling up waste).
> 
> Without that discipline, Kanbans are useless. Unless the developers
> adapt what they work on based on release objectives, you don't really
> reduce waste/inventory at all, it just piles up waiting for available
> "runway slots". As I said in my original email, the main issue here is
> the imbalance between too many people proposing changes and not enough
> people caring about the project itself enough to be trusted with core
> reviewers rights.
> 
> This proposal is not solving that, so it is not the miracle cure that
> will end all developers frustration, nor is it turning our push-based
> model into a sane pull-based one. The only way to be truly pull-based is
> to define a set of production objectives and have those objectives
> trickle up to the developers so that they don't work on something else.
> The solution is about setting release cycle goals and strongly
> communicating that everything out of those goals is clearly priority 2.
> 
> Now I'm not saying this is a bad idea. Having too many reviews to
> consider at the same time dilutes review attention to the point where we
> don't finalize anything. Having runway slots makes sure there is a focus
> on a limited set of features at a time, which increases the chances that
> those get finalized.
> 

I found this response to be very insightful, thank you.

I feel like this idea is essentially trying to figure out how to apply
an agile process to nova.  Lots and lots of people have tried to figure
out how to make it work for open source, and there's several reasons
that it just doesn't.  This came up in a thread last year here:

http://lists.openstack.org/pipermail/openstack-dev/2013-April/007872.html/

With that said, I really do appreciate the hunger to find new and better
ways to manage our work.  It's certainly needed and I hope to
continuously improve.

It seems one of the biggest benefits of this sort of proposal is rate
limiting how often we say "yes" so that we have more co

Re: [openstack-dev] [nova] Deprecating CONF.block_device_allocate_retries_interval

2014-08-07 Thread John Garbutt
On 6 August 2014 18:54, Jay Pipes  wrote:
> So, Liyi Meng has an interesting patch up for Nova:
>
> https://review.openstack.org/#/c/104876
>
> 1) We should just deprecate both the options, with a note in the option help
> text that these options are not used when volume size is not 0, and that the
> interval is calculated based on volume size

This feels bad.

> 2) We should deprecate the CONF.block_device_allocate_retries_interval
> option only, and keep the CONF.block_device_allocate_retries configuration
> option as-is, changing the help text to read something like "Max number of
> retries. We calculate the interval of the retry based on the size of the
> volume."

What about a slight modification to (2)...

3) CONF.block_device_allocate_retries_interval=-1 means calculate
using volume size, and we make it the default, so people can still
override it if they want to. But we also deprecate the option with a
view of removing it during Kilo? Keep
CONF.block_device_allocate_retries as the max number of retries.

> I bring this up on the mailing list because I think Liyi's patch offers an
> interesting future direction to the way that we think about our retry
> approach in Nova. Instead of having hard-coded or configurable interval
> times, I think Liyi's approach of calculating the interval length based on
> some input values is a good direction to take.

Seems like the right direction.

But I do worry that its quite dependent on the storage backend.
Sometimes the volume create is almost "free" regardless of the volume
size (with certain types of CoW). So maybe we end up needing some kind
of scaling factor on the weights. I kinda hope I am over thinking
that, and in reality it all works fine. I suspect that is the case.
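As a sketch of what option (3) could look like — interval of -1 meaning
"derive the polling interval from the volume size" — something along these
lines. The function name and the scaling rule are assumptions for
illustration, not Nova's actual implementation or Liyi's patch:

```python
import time


def wait_for_volume_allocation(is_available, volume_size_gb,
                               max_retries=60, interval=-1):
    """Poll until the volume is available, or give up after max_retries.

    interval == -1 (the proposed default) derives the polling interval
    from the volume size; any other value preserves today's explicit
    operator override during the deprecation window.
    """
    if interval == -1:
        # Assume larger volumes take longer to allocate, so poll them
        # less aggressively; clamp between 1s and 10s. A CoW backend
        # that allocates instantly still passes on the first check.
        interval = min(max(volume_size_gb // 10, 1), 10)
    for attempt in range(max_retries):
        if is_available():
            return attempt
        time.sleep(interval)
    raise RuntimeError('Volume allocation timed out')
```

A per-backend scaling factor, as worried about above, could be folded into
the clamp bounds without changing the config surface.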

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

