Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic classifiers

2015-11-12 Thread Ihar Hrachyshka

Cathy Zhang  wrote:

Agree with Paul and Louis. The networking-sfc repo should be preserved to  
support the service function chaining functionality. The flow classifier is  
just needed to specify which flows will go through the service port chain.


The flow classifier API is designed as a separate plugin which is  
independent of the port chain plugin. We will support the effort of  
evolving it to a common service classifier API and moving it out of the  
networking-sfc repo when the time comes.


The problem with that kind of thinking is that TC is needed for other  
features beyond SFC, some of them with strict compatibility requirements  
(like the new FWaaS API, or security groups). Once we adopt some API for  
those features, we can’t just drop support for it, saying we are still  
iterating on it. I am all for being more flexible and not getting into the  
backwards-compatibility van if we are really explicit that those APIs based  
on the new TC are not supported whatsoever (I know FWaaS had that  
EXPERIMENTAL tag throughout its whole history and lived with that, but I am  
not sure it helped the project’s adoption; that said, for new projects with  
no binding guarantees to users, like SFC, it may be a burden to consider  
this API stable).


All I am saying is that IF we merge some classifier API into neutron core  
and start using it for core, non-experimental features, we cannot later  
move to some newer version of this API [that you will iterate on] without  
leaving behind a huge pile of compatibility code that would not exist in the  
first place if only we had thought about the proper API in advance. If that’s  
what you envision, fine; but I believe it will make adoption of the  
‘evolving’ API a lot slower than it could otherwise be.


Are experimental features the only ones we expect to adopt the TC API in the  
near future?


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade

2015-11-12 Thread Korzeniewski, Artur
Hi Sean,
I'm interested in introducing to Neutron the multinode partial upgrade job in 
Grenade.

Can you explain how multinode is currently working in Grenade and how Nova is 
doing the partial upgrade?

Regards,
Artur Korzeniewski
IRC: korzen

Intel Technology Poland sp. z o.o.
KRS 101882
ul. Slowackiego 173, 80-298 Gdansk


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] [nova] Image Signature Verification

2015-11-12 Thread Poulos, Brianna L.
Hello,

There has recently been additional discussion about the best way to handle
image signature verification in glance and nova [1].  There are two
options being discussed for the signature (the examples below using
'RSA-PSS' as the type, and SHA-256 as the hash method):

1. The signature is of the glance checksum of the image data (currently a
hash which is hardcoded to be MD5)
signature = RSA-PSS(SHA-256(MD5(IMAGE-CONTENT)))

2. The signature of the image data directly
signature = RSA-PSS(SHA-256(IMAGE-CONTENT))
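For concreteness, the difference between the two options can be sketched with stdlib hashing alone (a sketch, not glance's actual code: the real implementation signs the resulting digest with RSA-PSS, and the image bytes here are a made-up placeholder):

```python
import hashlib

# Hypothetical image payload, for illustration only.
image_content = b"example image bytes"

# Option 1: the value fed to RSA-PSS is SHA-256 of the (hex) MD5
# checksum glance already stores, i.e. RSA-PSS(SHA-256(MD5(IMAGE-CONTENT))).
md5_checksum = hashlib.md5(image_content).hexdigest()
option1_digest = hashlib.sha256(md5_checksum.encode()).hexdigest()

# Option 2: the value fed to RSA-PSS is SHA-256 of the raw image data,
# i.e. RSA-PSS(SHA-256(IMAGE-CONTENT)).
option2_digest = hashlib.sha256(image_content).hexdigest()

# The two digests differ, which is why a signature created under one
# option cannot verify under the other.
assert option1_digest != option2_digest
```

Note that under option 1 the signature's collision resistance is bounded by MD5 regardless of the outer SHA-256, which is the core of the nova objection described below.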

The 1st option is what is currently in glance's liberty release [2].  This
approach was chosen with the understanding that the glance checksum would
be updated to be configurable [3].  Although the 2nd option was initially
proposed, the glance community opposed it during the pre-Liberty virtual
mini-summit in May 2015 (due to the performance impact of doing two hashes
of the image data--one for the 'checksum' and the other for the
signature), and it was decided to proceed with the 1st option during the
Liberty summit [4].

During the Mitaka Summit, making the glance checksum configurable was
discussed during a design session [5].  It was decided that instead of
making the 'checksum' image field configurable, it would be preferable to
compute a separate, configurable (on a per-image basis, with a site-wide
default) hash, and then use that hash when MD5 wasn't sufficient (such as
in the case of signature verification). This second hash would be computed
at the same time the MD5 'checksum' was computed.

Which brings us to the nova spec which is under discussion [1], which is
to add the ability to verify signatures in nova.  The nova community has
made it clear that the promise of providing a configurable hash in glance
is not good enough--they never want to support any signatures that use MD5
in any way, shape, or form; nor do they want to rely on asking glance for
what hash option was used.  To that end, the push is to use the 2nd option
to verify signatures in nova from the start.

Since the glance community no longer seems opposed to the idea of
computing two hashes (the second hash being optional, of course), the 2nd
option has now become valid from the glance perspective.  This would
require modifying the existing implementation in glance to verify a
signature of the image data, rather than verifying a checksum of the image
data, but would have no additional performance hit beyond the cost to
compute the second hash.  Note that the image data would still only be
read once -- the checksum update (for the MD5 hash) and the signature
verification update (for the signature hash) would occur in the same loop.
Although this would mean that signatures generated using option 1 would no
longer verify, since signatures generated using option 1 are based on an
MD5 hash (and were waiting for the checksum configurability before
becoming a viable cryptographic option anyway), this does not pose a
significant issue.
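That single-read loop can be sketched as follows (names and the chunk interface are illustrative assumptions, not glance's actual code):

```python
import hashlib

def hash_image_once(chunks, second_algo="sha256"):
    """Compute the legacy MD5 'checksum' and a second, configurable
    hash in a single pass, so the image data is only read once."""
    checksum = hashlib.md5()
    second = hashlib.new(second_algo)
    for chunk in chunks:
        checksum.update(chunk)
        second.update(chunk)
    return checksum.hexdigest(), second.hexdigest()

# The signature would then be verified against the second digest,
# never against the MD5 checksum.
md5_hex, sha256_hex = hash_image_once([b"example ", b"image ", b"bytes"])
```

The extra cost is one additional `update()` per chunk; the I/O, which dominates for large images, is unchanged.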

Also note that the verification in glance is provided as a benefit to the
user, so that the user can know that the signature properties were defined
correctly at upload, rather than having to wait until the image is booted
by nova to see a signature verification fail due to an improperly-defined
signature property.  However, the main purpose of the image signature
verification feature is to provide a guarantee between when a user signs
it and when nova boots it, and so it is more important to have the
verification occur in nova.

It would be beneficial to have a consistent approach between both the nova
and glance projects (and any future projects that make use of signature
verification).  Otherwise, the feature is not likely to be used by anyone.

Is anyone opposed to proceeding with using option 2, in both glance and
nova?  


[1] https://review.openstack.org/#/c/188874/19/specs/mitaka/approved/image-verification.rst
[2] http://specs.openstack.org/openstack/glance-specs/specs/liberty/image-signing-and-verification-support.html
[3] https://review.openstack.org/#/c/191542/
[4] https://etherpad.openstack.org/p/liberty-glance-image-signing-and-encryption
[5] 
https://etherpad.openstack.org/p/mitaka-glance-image-signing-and-encryption

Thanks,
~Brianna


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Report from Gerrit User Summit

2015-11-12 Thread David Pursehouse
On Mon, Nov 9, 2015 at 10:40 PM David Pursehouse 
wrote:

<...>


> * As noted in another recent thread by Khai, the hashtags support
>>   (user-defined tags applied to changes) exists but depends on notedb
>>   which is not ready for use yet (targeted for 3.0 which is probably
>>   at least 6 months off).
>>
>>
> We're looking into the possibility of enabling only enough of the notedb
> to make hashtags work in 2.12.
>
>
>

Unfortunately it looks like it's not going to be possible to do this.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][upgrade] new 'all things upgrade' subteam

2015-11-12 Thread Ihar Hrachyshka

Artur  wrote:


My TZ is UTC +1:00.
Do we have any favorite day? Maybe Tuesday?


I believe Tue is already too packed with irc meetings to be considered (we  
have, at the least, the main neutron meetings and the neutron-drivers  
meetings there).


We have folks in US and Central Europe and Russia and Japan… I believe the  
best time would be somewhere around 13:00 to 15:00 UTC (that time would  
still be ‘before midnight' for Japan; afternoon for Europe, and morning for  
US East Coast).


I have checked neutron meetings at [1], and I see that we have 13:00 UTC  
slots free for all days; 14:00 UTC slot available for Thu; and 15:00 UTC  
slots for Mon and Fri (I don’t believe we want to have it on Fri though).  
Also overall Mondays are all free.


Should I create a doodle for those options? Or are there any alternative  
suggestions?


[1]: http://git.openstack.org/cgit/openstack-infra/irc-meetings/tree/meetings

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][upgrade] new 'all things upgrade' subteam

2015-11-12 Thread Martin Hickey

Hi Ihar,

Any of those options would suit me, thanks.

Cheers,
Martin




From:   Ihar Hrachyshka 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   12/11/2015 21:39
Subject:Re: [openstack-dev] [neutron][upgrade] new 'all things
upgrade'subteam



Artur  wrote:

> My TZ is UTC +1:00.
> Do we have any favorite day? Maybe Tuesday?

I believe Tue is already too packed with irc meetings to be considered (we
have, for the least, main neutron meetings and neutron-drivers meetings
there).

We have folks in US and Central Europe and Russia and Japan… I believe the
best time would be somewhere around 13:00 to 15:00 UTC (that time would
still be ‘before midnight' for Japan; afternoon for Europe, and morning for
US East Coast).

I have checked neutron meetings at [1], and I see that we have 13:00 UTC
slots free for all days; 14:00 UTC slot available for Thu; and 15:00 UTC
slots for Mon and Fri (I don’t believe we want to have it on Fri though).
Also overall Mondays are all free.

Should I create a doodle for those options? Or are there any alternative
suggestions?

[1]:
http://git.openstack.org/cgit/openstack-infra/irc-meetings/tree/meetings

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-12 Thread Zane Bitter

On 10/11/15 10:11, Alan Pevec wrote:

Hi,

while we continue discussion about the future of stable branches in
general and stable/juno in particular, I'd like to execute the current
plan which was[1]

2014.2.4 (eol) early November, 2015. release manager: apevec

Iff there are enough folks interested (I'm not) in keeping Juno alive
longer, they could resurrect it, but until a concrete plan is done let's
be honest and stick to the agreed plan.

This is a call to stable-maint teams for Nova, Keystone, Glance,
Cinder, Neutron, Horizon, Heat, Ceilometer, Trove and Sahara to review
open stable/juno changes[2] and approve/abandon them as appropriate.
Proposed timeline is:
* Thursday Nov 12 stable/juno freeze[3]
* Thursday Nov 19 release 2014.2.4


We're currently working through a (substantial, unfortunately) backlog 
of juno-backport-potential Heat bugs that were accidentally overlooked 
at the time when they should have been backported:


https://bugs.launchpad.net/heat/+bugs?field.searchtext==-importance%3Alist=FIXCOMMITTED%3Alist=FIXRELEASED_option=any=juno-backport-potential+_combinator=ANY=Search

Some have been merged already. Some are up for review now:

https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:stable/juno,n,z

It would be nice to treat the remaining 'High' priority one as a 
blocker. The rest aren't blockers for the release, but it would be 
really nice to at least have time to get them all backported and merged 
before the stable branch gets deleted.


thanks,
Zane.


Cheers,
Alan

[1] 
https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Fjuno_releases_.2812_months.29

[2] 
https://review.openstack.org/#/q/status:open+AND+branch:stable/juno+AND+%28project:openstack/nova+OR+project:openstack/keystone+OR+project:openstack/glance+OR+project:openstack/cinder+OR+project:openstack/neutron+OR+project:openstack/horizon+OR+project:openstack/heat+OR+project:openstack/ceilometer+OR+project:openstack/trove+OR+project:openstack/sahara%29,n,z

[3] documented  in
https://wiki.openstack.org/wiki/StableBranch#Stable_release_managers
TODO add in new location
http://docs.openstack.org/project-team-guide/stable-branches.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [HA][RabbitMQ][messaging][Pacemaker][operators] Improved OCF resource agent for dynamic active-active mirrored clustering

2015-11-12 Thread Andrew Beekhof

> On 12 Nov 2015, at 10:44 PM, Vladimir Kuklin  wrote:
> 
> Hi, Andrew
> 
> >Ah good, I understood it correctly then :)
> > I would be interested in your opinion of how the other agent does the 
> > bootstrapping (ie. without notifications or master/slave).
> >That makes sense, the part I’m struggling with is that it sounds like the 
> >other agent shouldn’t work at all.
> > Yet we’ve used it extensively and not experienced these kinds of hangs.
> Regarding other scripts - I am not aware of any other scripts that actually 
> handle a cloned rabbitmq server. I may be mistaken, of course. So if you are 
> aware of scripts that succeed in creating a rabbitmq cluster which actually 
> survives 1-node or all-node failure scenarios and reassembles the cluster 
> automatically - please, let us know.

The one I linked to in my original reply does:

   
https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/rabbitmq-cluster

> 
> > Changing the state isn’t ideal but there is precedent, the part that has me 
> > concerned is the error codes coming out of notify.
> > Apart from producing some log messages, I can’t think how it would produce 
> > any recovery.
> 
> > Unless you’re relying on the subsequent monitor operation to notice the 
> > error state.
> > I guess that would work but you might be waiting a while for it to notice.
> 
> Yes, we are relying on subsequent monitor operations. We also have several 
> OCF check levels to catch a case when one node does not have rabbitmq 
> application started properly (btw, there was a strange bug that we had to 
> wait for several non-zero checks to fail to get the resource to restart 
> http://bugs.clusterlabs.org/show_bug.cgi?id=5243) .

It appears I misunderstood your bug the first time around :-(
Do you still have logs of this occurring?

> I now remember, why we did notify errors - for error logging, I guess.
>  
> 
> On Thu, Nov 12, 2015 at 1:30 AM, Andrew Beekhof  wrote:
> 
> > On 11 Nov 2015, at 11:35 PM, Vladimir Kuklin  wrote:
> >
> > Hi, Andrew
> >
> > Let me answer your questions.
> >
> > This agent is active/active which actually marks one of the node as 
> > 'pseudo'-master which is used as a target for other nodes to join to. We 
> > also check which node is a master and use it in monitor action to check 
> > whether this node is clustered with this 'master' node. When we do cluster 
> > bootstrap, we need to decide which node to mark as a master node. Then, 
> > when it starts (actually, promotes), we can finally pick its name through 
> > notification mechanism and ask other nodes to join this cluster.
> 
> Ah good, I understood it correctly then :)
> I would be interested in your opinion of how the other agent does the 
> bootstrapping (ie. without notifications or master/slave).
> 
> >
> > Regarding disconnect_node+forget_cluster_node this is quite simple - we 
> > need to eject node from the cluster. Otherwise it is mentioned in the list 
> > of cluster nodes and a lot of cluster actions, e.g. list_queues, will hang 
> > forever as well as forget_cluster_node action.
> 
> That makes sense, the part I’m struggling with is that it sounds like the 
> other agent shouldn’t work at all.
> Yet we’ve used it extensively and not experienced these kinds of hangs.
> 
> >
> > We also handle this case whenever a node leaves the cluster. If you 
> > remember, I wrote an email to Pacemaker ML regarding getting notifications 
> > on node unjoin event '[openstack-dev] [Fuel][Pacemaker][HA] Notifying 
> > clones of offline nodes’.
> 
> Oh, I recall that now.
> 
> > So we went another way and added a dbus daemon listener that does the same 
> > when node lefts corosync cluster (we know that this is a little bit racy, 
> > but disconnect+forget actions pair is idempotent).
> >
> > Regarding notification commands - we changed behaviour to the one that 
> > fitter our use cases better and passed our destructive tests. It could be 
> > Pacemaker-version dependent, so I agree we should consider changing this 
> > behaviour. But so far it worked for us.
> 
> Changing the state isn’t ideal but there is precedent, the part that has me 
> concerned is the error codes coming out of notify.
> Apart from producing some log messages, I can’t think how it would produce 
> any recovery.
> 
> Unless you’re relying on the subsequent monitor operation to notice the error 
> state.
> I guess that would work but you might be waiting a while for it to notice.
> 
> >
> > On Wed, Nov 11, 2015 at 2:12 PM, Andrew Beekhof  wrote:
> >
> > > On 11 Nov 2015, at 6:26 PM, bdobre...@mirantis.com wrote:
> > >
> > > Thank you Andrew.
> > > Answers below.
> > > >>>
> > > Sounds interesting, can you give any comment about how it differs to the 
> > > other[i] upstream agent?
> > > Am I right that this one is effectively A/P and won't function without 
> > > some kind of shared storage?
> > > Any particular 

Re: [openstack-dev] [HA][RabbitMQ][messaging][Pacemaker][operators] Improved OCF resource agent for dynamic active-active mirrored clustering

2015-11-12 Thread Vladimir Kuklin
Hi, Andrew

Thanks for a quick turnaround.

> The one I linked to in my original reply does:
>
https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/rabbitmq-cluster

I do not have logs of testing of this script. Maybe Bogdan has something
to tell about the results of testing it. At first glance it does not
contain the gigantic amount of workarounds we injected into our script to
handle various situations when a node fails to join, or tries to join a
cluster that does not want to accept it (in this case you need to kick it
from the cluster with forget_cluster_node, and it starts an RPC multicall in
rabbitmq internals to all cluster nodes, including the dead one, hanging
forever). Actually, we started a long time ago with an approach similar to
the one in the script above, but we faced a lot of issues in the case when
a node tries to join a cluster after a dirty failover or a long time of
being out of the cluster. I do not have all the logs of which particular
cases we were handling while introducing that additional logic (it was an
agile process, if you know what I mean :-) ), but we finally came up with
this almost 2K-line script. We are actively communicating with
Pivotal folks on improving methods of monitoring RabbitMQ cluster nodes, or
even switching to the RabbitMQ clusterer+autocluster plugins and writing a
new, smaller and fancier OCF script, but this is only in plans for further
Fuel releases, I guess.

>
> > Changing the state isn’t ideal but there is precedent, the part that
has me concerned is the error codes coming out of notify.
> > Apart from producing some log messages, I can’t think how it would
produce any recovery.
>
> > Unless you’re relying on the subsequent monitor operation to notice the
error state.
> > I guess that would work but you might be waiting a while for it to
notice.
>
> Yes, we are relying on subsequent monitor operations. We also have
several OCF check levels to catch a case when one node does not have
rabbitmq application started properly (btw, there was a strange bug that we
had to wait for several non-zero checks to fail to get the resource to
restart http://bugs.clusterlabs.org/show_bug.cgi?id=5243) .

Regarding this bug - it was very easy to reproduce: just add an additional
check to the 'Dummy' resource, with a non-intersecting interval, returning
an ERR_GENERIC code while the default check returns a SUCCESS code. You will
find that the resource restarts only after 2 consecutive failures of the
non-zero-level check.

On Thu, Nov 12, 2015 at 10:58 PM, Andrew Beekhof 
wrote:

>
> > On 12 Nov 2015, at 10:44 PM, Vladimir Kuklin 
> wrote:
> >
> > Hi, Andrew
> >
> > >Ah good, I understood it correctly then :)
> > > I would be interested in your opinion of how the other agent does the
> bootstrapping (ie. without notifications or master/slave).
> > >That makes sense, the part I’m struggling with is that it sounds like
> the other agent shouldn’t work at all.
> > > Yet we’ve used it extensively and not experienced these kinds of hangs.
> > Regarding other scripts - I am not aware of any other scripts that
> actually handle a cloned rabbitmq server. I may be mistaken, of course. So
> if you are aware of scripts that succeed in creating a rabbitmq cluster
> which actually survives 1-node or all-node failure scenarios and
> reassembles the cluster automatically - please, let us know.
>
> The one I linked to in my original reply does:
>
>
> https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/rabbitmq-cluster
>
> >
> > > Changing the state isn’t ideal but there is precedent, the part that
> has me concerned is the error codes coming out of notify.
> > > Apart from producing some log messages, I can’t think how it would
> produce any recovery.
> >
> > > Unless you’re relying on the subsequent monitor operation to notice
> the error state.
> > > I guess that would work but you might be waiting a while for it to
> notice.
> >
> > Yes, we are relying on subsequent monitor operations. We also have
> several OCF check levels to catch a case when one node does not have
> rabbitmq application started properly (btw, there was a strange bug that we
> had to wait for several non-zero checks to fail to get the resource to
> restart http://bugs.clusterlabs.org/show_bug.cgi?id=5243) .
>
> It appears I misunderstood your bug the first time around :-(
> Do you still have logs of this occurring?
>
> > I now remember, why we did notify errors - for error logging, I guess.
> >
> >
> > On Thu, Nov 12, 2015 at 1:30 AM, Andrew Beekhof 
> wrote:
> >
> > > On 11 Nov 2015, at 11:35 PM, Vladimir Kuklin 
> wrote:
> > >
> > > Hi, Andrew
> > >
> > > Let me answer your questions.
> > >
> > > This agent is active/active which actually marks one of the node as
> 'pseudo'-master which is used as a target for other nodes to join to. We
> also check which node is a master and use it in monitor action to check
> 

Re: [openstack-dev] Last sync from oslo-incubator

2015-11-12 Thread Matt Riedemann



On 11/12/2015 1:30 PM, Davanum Srinivas wrote:

Hi,

A long time ago, oslo-incubator had a lot of code; we have moved most
of the code to oslo.* libraries. There's very little code left, and
we'd like to stop hosting common code in the oslo-incubator repository. We
encourage everyone to adopt the code they have in their repos under
openstack/common into their own namespace/packaging, as we will be
getting rid of any remaining python modules in oslo-incubator.

If there are a couple of projects sharing code, like say cinder and
manila (HT to bswartz), one of those projects can choose to host a
common library between them, similar to how nova and cinder share
os-brick with its own core team. The overall guide to starting
projects is documented here [1], so you will need bits of that for
existing projects. When in doubt, look in the governance, project-config,
and requirements repos and model your own.

Thanks,
Dims

[1] http://docs.openstack.org/infra/manual/creators.html




*single tear for days past of doing awful oslo-incubator sync changes*

sniff

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Proposing Michal Rostecki for Core Reviewer (nihilfer on IRC)

2015-11-12 Thread Steven Dake (stdake)
Hey folks,

It's been a while since we have had a core reviewer nomination, but I really 
feel like Michal has the right stuff.  If you look at the 30 day stats for 
kolla[1], he is #3 in reviews (70 reviews) with 6 commits in a 30 day period.  
He is beating 2/3rds of our core reviewer team on all stats.  I think his 
reviews, while they could use a little more depth, are solid and well 
considered.  On top of that, he participates on the mailing list more than 
others and has been very active in IRC.  He has come up to speed on the code 
base in quick order, and I expect if he keeps pace, the top reviewers in Kolla 
will be challenged to maintain their spots :)  Consider this proposal as a +1 
vote from me.

As a reminder, our core review process requires 3 core reviewer +1 votes, with 
no core reviewer -1 (veto) votes within a 1 week period.  If you're on the 
fence, best to abstain :)  I'll close voting November 20th, or sooner if there 
is a veto vote or a unanimous vote.

Regards
-steve

[1] http://stackalytics.com/report/contribution/kolla-group/30
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Neutron] SR-IOV subteam

2015-11-12 Thread Nikola Đipanov
Top posting since I wanted to just add the [Neutron] tag to the subject
as I imagine there are a few folks in Neutron-land who will be
interested in this.

We had the first meeting this week [1] and there were some cross project
topics mentioned (especially around scheduling) so feel free to review
and comment.

[1]
http://eavesdrop.openstack.org/meetings/sriov/2015/sriov.2015-11-10-13.09.log.html

On 11/10/2015 01:42 AM, Nikola Đipanov wrote:
> On 11/04/2015 07:56 AM, Moshe Levi wrote:
>> Maybe we can use the pci-passthrough meeting slot 
>> http://eavesdrop.openstack.org/#PCI_Passthrough_Meeting 
>> It's been a long time since we had a meeting. 
>>
> 
> I think that slot works well (at least for me). I'd maybe change the
> cadence to bi-weekly in the beginning and see if we need to increase it
> as the cycle progresses.
> 
> Here's the patch proposing the said changes:
> 
> https://review.openstack.org/243382
> 
> On 11/09/2015 06:33 PM, Beliveau, Ludovic wrote:
>> Is there a meeting planned for this week ?
>>
>> Thanks,
>> /ludovic
> 
> Why not - let's have it today 13:00 UTC as the above patch suggests and
> chat more on there.
> 
> Thanks,
> N.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][TaskFlow] Proposal for new core reviewer (greg hill)

2015-11-12 Thread Julien Danjou
On Wed, Nov 11 2015, Joshua Harlow wrote:

> Greetings all stackers,
>
> I propose that we add Greg Hill[1] to the taskflow-core[2] team.
>
> Greg (aka jimbo) has been actively contributing to taskflow for a
> while now, both in helping make taskflow better via code
> contribution(s) and by helping spread more usage/knowledge of taskflow
> at rackspace (since the big-data[3] team uses taskflow internally).
> He has helped provided quality reviews and is doing an awesome job
> with the various taskflow concepts and helping make taskflow the best
> it can be!
>
> Overall I think he would make a great addition to the core review team.

Sure, I wonder why you even ask our opinion. :)

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposing Michal Rostecki for Core Reviewer (nihilfer on IRC)

2015-11-12 Thread Paul Bourke

+1

On 12/11/15 08:41, Steven Dake (stdake) wrote:

Hey folks,

It's been a while since we have had a core reviewer nomination, but I
really feel like Michal has the right stuff.  If you look at the 30 day
stats for kolla[1], he is #3 in reviews (70 reviews) with 6 commits in
a 30 day period.  He is beating 2/3rds of our core reviewer team on all
stats.  I think his reviews, while they could use a little more depth, are
solid and well considered.  On top of that, he participates on the mailing
list more than others and has been very active in IRC.  He has come up
to speed on the code base in quick order, and I expect if he keeps pace,
the top reviewers in Kolla will be challenged to maintain their spots :)
  Consider this proposal as a +1 vote from me.

As a reminder, our core review process requires 3 core reviewer +1
votes, with no core reviewer –1 (veto) votes within a 1 week period.  If
you're on the fence, best to abstain :)  I'll close voting November 20th,
or sooner if there is a veto vote or a unanimous vote.

Regards
-steve

[1] http://stackalytics.com/report/contribution/kolla-group/30


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [HA] weekly High Availability meetings on IRC start next Monday

2015-11-12 Thread Sergii Golovatiuk
Hi Adam,

It's great we are moving forward with the HA community. Thank you so much for
bringing HA to the next level. However, I have a couple of comments.

[1] contains the agenda. I guess we should move it to
https://etherpad.openstack.org. That would allow people to add their own
topics to discuss. Action items can be put there as well.

[2] declares meetings at 9am UTC which might be tough for US based folks. I
might be wrong here as I don't know the location of HA experts.

 [1] https://wiki.openstack.org/wiki/Meetings/HATeamMeeting
 [2] http://eavesdrop.openstack.org/#High_Availability_Meeting

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Tue, Nov 10, 2015 at 6:18 PM, Adam Spiers  wrote:

> Hi all,
>
> After some discussion in Tokyo by stakeholders in OpenStack High
> Availability, I'm pleased to announce that from next Monday we're
> starting a series of weekly meetings on IRC.  Details are here:
>
>   https://wiki.openstack.org/wiki/Meetings/HATeamMeeting
>   http://eavesdrop.openstack.org/#High_Availability_Meeting
>
> The agenda for the first meeting is set and will focus on
>
>   1. the pros and cons of the existing approaches to hypervisor HA
>  which rely on automatic resurrection[0] of VMs, and
>
>   2. how we might be able to converge on a best-of-breed solution.
>
> All are welcome to join!
>
> On a related note, even if you can't attend the meeting, you can still
> use the new FreeNode IRC channel #openstack-ha for all HA-related
> discussion.
>
> Cheers,
> Adam
>
> [0] In the OpenStack community resurrection is commonly referred to
> as "evacuation", which is a slightly unfortunate misnomer.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade

2015-11-12 Thread Ihar Hrachyshka

Artur  wrote:


Hi Sean,
I’m interested in introducing to Neutron the multinode partial upgrade  
job in Grenade.


Can you explain how multinode is currently working in Grenade and how  
Nova is doing the partial upgrade?




Let’s work on this in the upgrade subteam first, and reach out to folks  
only if we are not clear about some specifics. Sean Collins already  
expressed his interest in it, so we should make sure you sync on the  
feature.


I also believe that the first step to getting the job set up is making neutron
own its grenade future by migrating to a grenade plugin maintained in the
neutron tree. We were probably not clear about that before, hence your
immediate interest in the ‘partial’ details.


Let’s sync tomorrow on IRC on the matter.

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Help with getting keystone to migrate to Debian testing: fixing repoze.what and friends

2015-11-12 Thread Clint Byrum
Excerpts from Jamie Lennox's message of 2015-11-12 15:28:06 -0800:
> On 12 November 2015 at 15:09, Clint Byrum  wrote:
> 
> > Excerpts from Clint Byrum's message of 2015-11-11 10:57:26 -0800:
> > > Excerpts from Morgan Fainberg's message of 2015-11-10 20:17:12 -0800:
> > > > On Nov 10, 2015 16:48, "Clint Byrum"  wrote:
> > > > >
> > > > > Excerpts from Morgan Fainberg's message of 2015-11-10 15:31:16 -0800:
> > > > > > On Tue, Nov 10, 2015 at 3:20 PM, Thomas Goirand 
> > wrote:
> > > > > >
> > > > > > > Hi there!
> > > > > > >
> > > > > > > All of Liberty would be migrating from Sid to Testing (which is
> > the
> > > > > > > pre-condition for an upload to official Debian backports) if I
> > didn't
> > > > > > > have a really annoying situation with the repoze.{what,who}
> > packages.
> > > > I
> > > > > > > feel like I could get some help from the Python expert folks
> > here.
> > > > > > >
> > > > > > > What is it about?
> > > > > > > =
> > > > > > >
> > > > > > > Here's the dependency chain:
> > > > > > >
> > > > > > > - Keystone depends on pysaml2.
> > > > > > > - Pysaml2 depends on python-repoze.who >=2, which I uploaded to
> > Sid.
> > > > > > > - python-repoze.what depends on python-repoze.who < 1.99
> > > > > > >
> > > > > > > Unfortunately, python-repoze.who doesn't migrate to Debian
> > Testing
> > > > > > > because it would make python-repoze.what broken.
> > > > > > >
> > > > > > > To make the situation worse, python-repoze.what build-depends on
> > > > > > > python-repoze.who-testutil, which itself doesn't work with
> > > > > > > python-repoze.who >= 2.
> > > > > > >
> > > > > > > Note: repoze.who-testutil is within the package
> > > > > > > python-repoze.who-plugins who also contains 4 other plugins
> > which are
> > > > > > > all broken with repoze.who >= 2, but the others could be dropped
> > from
> > > > > > > Debian easily). We can't drop repoze.what completely, because
> > there's
> > > > > > > turbogears2 and another package who needs it.
> > > > > > >
> > > > > > > There's no hope from upstream, as all of these seem to be
> > abandoned
> > > > > > > projects.
> > > > > > >
> > > > > > > So I'm a bit stuck here, helpless, and I don't know how to fix
> > the
> > > > > > > situation... :(
> > > > > > >
> > > > > > > What to fix?
> > > > > > > 
> > > > > > > Make repoze.what and repoze.who-testutil work with repoze.who >=
> > 2.
> > > > > > >
> > > > > > > Call for help
> > > > > > > =
> > > > > > > I'm a fairly experienced package maintainer, but I still consider
> > > > myself
> > > > > > > a poor Python coder (probably because I spend all my time
> > packaging
> > > > > > > rather than programming in Python: I know other programming
> > > > > > > languages way better).
> > > > > > >
> > > > > > > So I would enjoy a lot having some help here, also because my
> > time is
> > > > > > > very limited and probably better invested working on packages to
> > > > assist
> > > > > > > the whole OpenStack project, rather than upstream code on some
> > weirdo
> > > > > > > dependencies that I don't fully understand.
> > > > > > >
> > > > > > > So, would anyone be able to invest a bit of time, and help me
> > fix the
> > > > > > > problems with repoze.what / repoze.who in Debian? If you can
> > help,
> > > > > > > please ping me on IRC.
> > > > > > >
> > > > > > > Cheers,
> > > > > > >
> > > > > > > Thomas Goirand (zigo)
> > > > > > >
> > > > > > >
> > > > > > It looks like pysaml2 might be ok with < 1.99 of repoze.who here:
> > > > > > https://github.com/rohe/pysaml2/blob/master/setup.py#L30
> > > > > >
> > > > > > I admit I haven't tested it, but the requirements declaration
> > doesn't
> > > > seem
> > > > > > to enforce the need for > 2. If it is in fact the case that > 2
> > > > > > is needed, we are at somewhat of an impasse with dead/abandonware
> > > > > > holding us
> > > > > > ransom. I'm not sure what the proper handling of that ends up
> > being in
> > > > the
> > > > > > debian world.
> > > > >
> > > > > repoze.who doesn't look abandoned to me, so it is just repoze.what:
> > > > >
> > > > > https://github.com/repoze/repoze.who/commits/master
> > > > >
> > > > > who's just not being released (does anybody else smell a Laurel and
> > > > > Hardy skit coming on?)
> > > >
> > > > Seriously!
> > > >
> > > > >
> > > > > Also, this may have been something temporary, that then got left
> > around
> > > > > because nobody bothered to try the released versions:
> > > > >
> > > > >
> > > >
> > https://github.com/repoze/repoze.what/commit/b9fc014c0e174540679678af99f04b01756618de
> > > > >
> > > > > note, 2.0a1 wasn't compatible.. but perhaps 2.2 would work fine?
> > > > >
> > > > >
> > > >
> > > > Def something to try out. If this is still an outstanding issue next
> > week
> > > > (when I have a bit more time) I'll see what I can do to test out the
> > > > variations.
> > >
> > > FYI, I tried 2.0 and 

Re: [openstack-dev] [HA] weekly High Availability meetings on IRC start next Monday

2015-11-12 Thread Adam Spiers
Sergii Golovatiuk  wrote:
> > > [2] declares meetings at 9am UTC which might be tough for US based
> > folks. I
> > > might be wrong here as I don't know the location of HA experts.
> > >
> > >  [2] http://eavesdrop.openstack.org/#High_Availability_Meeting
> >
> > Yes, I was aware of this :-/  The problem is that the agenda for the
> > first meeting will focus on hypervisor HA, and the interested parties
> > who met in Tokyo are all based in either Europe or Asia (Japan and
> > Australia).  It's hard but possible to find a time which accommodates
> > two continents, but almost impossible to find a time which
> > accommodates three :-/
> >
> >
> I ran into issues setting an event in UTC. Use Ghana/Accra in Google
> Calendar, as it doesn't have a UTC time zone.

Speaking on behalf of my home town, United Kingdom/London would also
work ;-)

But even easier, just add this URL to your Google Calendar:

  http://eavesdrop.openstack.org/calendars/high-availability-meeting.ics

or if you want to really spam your calendar, you can add all OpenStack
meetings in one go :-)

  http://eavesdrop.openstack.org/irc-meetings.ical
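For anyone scripting against those feeds, the meeting entries are ordinary
iCalendar VEVENTs. A minimal stdlib-only sketch of pulling names and start
times out of such text (the sample below is illustrative, not the real feed
contents; a real client would fetch the URL first):

```python
# Extract (summary, start) pairs from simple iCalendar text, e.g. the
# eavesdrop.openstack.org meeting feeds. Sample data is made up.
SAMPLE_ICS = """BEGIN:VCALENDAR
BEGIN:VEVENT
SUMMARY:High Availability Meeting
DTSTART:20151116T090000Z
END:VEVENT
BEGIN:VEVENT
SUMMARY:Performance Team Meeting
DTSTART:20151117T150000Z
END:VEVENT
END:VCALENDAR"""

def list_meetings(ics_text):
    """Return (summary, start) pairs, one per VEVENT."""
    meetings, summary = [], None
    for line in ics_text.splitlines():
        if line.startswith("SUMMARY:"):
            summary = line[len("SUMMARY:"):]
        elif line.startswith("DTSTART:") and summary is not None:
            meetings.append((summary, line[len("DTSTART:"):]))
            summary = None
    return meetings

for name, start in list_meetings(SAMPLE_ICS):
    print(name, "@", start)
```

A full parser would also handle line folding and time zones; for a quick
"when is my meeting" script this line-oriented pass is usually enough.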

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Could we highlight +0 reviews in review.openstack.org

2015-11-12 Thread David Pursehouse
On Thu, Nov 12, 2015 at 1:14 PM Amrith Kumar  wrote:

> Thanks Jeremy!
>
> I'll go look and see where one logs this request on upstream Gerrit
>
>
https://code.google.com/p/gerrit/issues/list
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade

2015-11-12 Thread Armando M.
On 12 November 2015 at 11:41, Korzeniewski, Artur <
artur.korzeniew...@intel.com> wrote:

> Hi Sean,
>
> I’m interested in introducing to Neutron the multinode partial upgrade job
> in Grenade.
>

Great to hear!


>
>
> Can you explain how multinode is currently working in Grenade and how Nova
> is doing the partial upgrade?
>

sc68cal and garyk may also be good contacts to reach for help on this
(well overdue) initiative; there were a number of patches, started by russellb,
that needed some love. Did you by any chance look into them?

https://review.openstack.org/#/c/220649/
https://review.openstack.org/#/c/189710/
https://review.openstack.org/#/c/189417/
https://review.openstack.org/#/c/189712/

I hope I have not forgotten any.

Cheers,
Armando



>
>
> Regards,
>
> Artur Korzeniewski
>
> IRC: korzen
>
> 
>
> Intel Technology Poland sp. z o.o.
>
> KRS 101882
>
> ul. Slowackiego 173, 80-298 Gdansk
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Could we highlight +0 reviews in review.openstack.org

2015-11-12 Thread Amrith Kumar
Thanks Jeremy!

I'll go look and see where one logs this request on upstream Gerrit

-amrith

> -Original Message-
> From: Jeremy Stanley [mailto:fu...@yuggoth.org]
> Sent: Thursday, November 12, 2015 11:44 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] Could we highlight +0 reviews in
> review.openstack.org
> 
> On 2015-11-12 16:33:52 + (+), Amrith Kumar wrote:
> > When you look at a review on https://review.openstack.org, you see a
> > section of the screen that shows +2's, +1's, -1's and -2's above
> > Verified and Workflow. But we don't show +0's there.
> [...]
> 
> This is because Gerrit tries to treat a 0 vote as "unset" or "absent" so your
> suggestion needs to be filed upstream with Google's Gerrit developer team
> as a feature request (and may already be). The reviewer list is always carried
> over from one patchset to the next even if their associated votes are not, so
> there is no way in the reviewers box of the Web UI right now to distinguish
> someone who commented without voting (a.k.a. a 0 vote) on the current
> patchset from someone who only voted on previous patchsets.
> --
> Jeremy Stanley
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
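Jeremy's explanation above can be illustrated with a small client-side sketch:
even though the web UI hides 0 votes, the Gerrit REST API does report a
reviewer's current vote, so a script can list them. The payload below is an
illustrative shape of a `GET /changes/{id}/reviewers/` response, not output
from a real server; note the `)]}'` prefix Gerrit adds to JSON to prevent
XSSI.

```python
import json

# Illustrative sample of a Gerrit reviewers response (not real data).
RAW = """)]}'
[
  {"name": "Alice", "approvals": {"Code-Review": "+1"}},
  {"name": "Bob",   "approvals": {"Code-Review": " 0"}}
]"""

def zero_voters(raw_response):
    """Return reviewers whose current Code-Review vote is exactly 0."""
    payload = json.loads(raw_response.split("\n", 1)[1])  # strip )]}' line
    return [r["name"] for r in payload
            if r.get("approvals", {}).get("Code-Review", "").strip() == "0"]

print(zero_voters(RAW))
```

As Jeremy notes, even this cannot distinguish a deliberate 0 vote on the
current patchset from a reviewer carried over from an earlier patchset, which
is exactly why the feature request belongs upstream.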

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][sfc] How could an L2 agent extension access agent methods ?

2015-11-12 Thread Ihar Hrachyshka

Henry Fourie  wrote:


Ihar,
   Networking-sfc installs flows on br-int and br-tun for steering
traffic to the SFC port-pairs. On each bridge additional tables are used
for an egress forwarding pipeline (from the service VM port) and an
ingress pipeline (to the service VM port). RPC operations between the
OVS driver and agents are used to initiate the flow installation.

We'd like to work with you on defining the L2 extensions.


Hi Henry,

thanks for taking time to specify your needs. Could you please update the  
etherpad [1] with these details?


Speaking of the new OVS tables you need, how do you reference them? Are there  
any ordering guarantees for the flows you will need to set that should be  
provided in scope of that public API of the agent? I wonder whether just  
pushing flows into the existing tables at random points in time can be  
unstable and break the usual flow assumed by the main agent loop.
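One way to picture the kind of guarantee in question is for the agent to hand
each extension a reserved table range rather than letting extensions push
flows into arbitrary tables. This is only a sketch of the idea; the table
numbers and the `TableAllocator` name are hypothetical, not the real
br-int/br-tun layout or the agent's actual API.

```python
class TableAllocator:
    """Reserve disjoint OpenFlow table ranges for L2 agent extensions."""

    def __init__(self, first_free_table=10, tables_per_extension=5):
        self._next = first_free_table
        self._step = tables_per_extension
        self.allocations = {}

    def allocate(self, extension_name):
        # Each extension gets a stable, non-overlapping range.
        if extension_name not in self.allocations:
            start = self._next
            self.allocations[extension_name] = range(start, start + self._step)
            self._next += self._step
        return self.allocations[extension_name]

    def add_flow_cmd(self, extension_name, table, match, actions):
        # Refuse flows outside the extension's reserved range, so extensions
        # cannot clobber the main agent pipeline or each other.
        if table not in self.allocate(extension_name):
            raise ValueError("table %d not reserved for %s"
                             % (table, extension_name))
        return ("ovs-ofctl add-flow br-int table=%d,%s,actions=%s"
                % (table, match, actions))

alloc = TableAllocator()
print(list(alloc.allocate("sfc")))
print(alloc.add_flow_cmd("sfc", 11, "in_port=5", "resubmit(,12)"))
```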


Am I making sense?

[1] https://etherpad.openstack.org/p/l2-agent-extensions-api-expansion

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Help with getting keystone to migrate to Debian testing: fixing repoze.what and friends

2015-11-12 Thread Jamie Lennox
On 12 November 2015 at 15:09, Clint Byrum  wrote:

> Excerpts from Clint Byrum's message of 2015-11-11 10:57:26 -0800:
> > Excerpts from Morgan Fainberg's message of 2015-11-10 20:17:12 -0800:
> > > On Nov 10, 2015 16:48, "Clint Byrum"  wrote:
> > > >
> > > > Excerpts from Morgan Fainberg's message of 2015-11-10 15:31:16 -0800:
> > > > > On Tue, Nov 10, 2015 at 3:20 PM, Thomas Goirand 
> wrote:
> > > > >
> > > > > > Hi there!
> > > > > >
> > > > > > All of Liberty would be migrating from Sid to Testing (which is
> the
> > > > > > pre-condition for an upload to official Debian backports) if I
> didn't
> > > > > > have a really annoying situation with the repoze.{what,who}
> packages.
> > > I
> > > > > > feel like I could get some help from the Python expert folks
> here.
> > > > > >
> > > > > > What is it about?
> > > > > > =
> > > > > >
> > > > > > Here's the dependency chain:
> > > > > >
> > > > > > - Keystone depends on pysaml2.
> > > > > > - Pysaml2 depends on python-repoze.who >=2, which I uploaded to
> Sid.
> > > > > > - python-repoze.what depends on python-repoze.who < 1.99
> > > > > >
> > > > > > Unfortunately, python-repoze.who doesn't migrate to Debian
> Testing
> > > > > > because it would make python-repoze.what broken.
> > > > > >
> > > > > > To make the situation worse, python-repoze.what build-depends on
> > > > > > python-repoze.who-testutil, which itself doesn't work with
> > > > > > python-repoze.who >= 2.
> > > > > >
> > > > > > Note: repoze.who-testutil is within the package
> > > > > > python-repoze.who-plugins who also contains 4 other plugins
> which are
> > > > > > all broken with repoze.who >= 2, but the others could be dropped
> from
> > > > > > Debian easily). We can't drop repoze.what completely, because
> there's
> > > > > > turbogears2 and another package who needs it.
> > > > > >
> > > > > > There's no hope from upstream, as all of these seem to be
> abandoned
> > > > > > projects.
> > > > > >
> > > > > > So I'm a bit stuck here, helpless, and I don't know how to fix
> the
> > > > > > situation... :(
> > > > > >
> > > > > > What to fix?
> > > > > > 
> > > > > > Make repoze.what and repoze.who-testutil work with repoze.who >=
> 2.
> > > > > >
> > > > > > Call for help
> > > > > > =
> > > > > > I'm a fairly experienced package maintainer, but I still consider
> > > myself
> > > > > > a poor Python coder (probably because I spend all my time
> packaging
> > > > > > rather than programming in Python: I know other programming
> > > > > > languages way better).
> > > > > >
> > > > > > So I would enjoy a lot having some help here, also because my
> time is
> > > > > > very limited and probably better invested working on packages to
> > > assist
> > > > > > the whole OpenStack project, rather than upstream code on some
> weirdo
> > > > > > dependencies that I don't fully understand.
> > > > > >
> > > > > > So, would anyone be able to invest a bit of time, and help me
> fix the
> > > > > > problems with repoze.what / repoze.who in Debian? If you can
> help,
> > > > > > please ping me on IRC.
> > > > > >
> > > > > > Cheers,
> > > > > >
> > > > > > Thomas Goirand (zigo)
> > > > > >
> > > > > >
> > > > > It looks like pysaml2 might be ok with < 1.99 of repoze.who here:
> > > > > https://github.com/rohe/pysaml2/blob/master/setup.py#L30
> > > > >
> > > > > I admit I haven't tested it, but the requirements declaration
> doesn't
> > > seem
> > > > > to enforce the need for > 2. If it is in fact the case that > 2
> > > > > is needed, we are at somewhat of an impasse with dead/abandonware
> > > > > holding us
> > > > > ransom. I'm not sure what the proper handling of that ends up
> being in
> > > the
> > > > > debian world.
> > > >
> > > > repoze.who doesn't look abandoned to me, so it is just repoze.what:
> > > >
> > > > https://github.com/repoze/repoze.who/commits/master
> > > >
> > > > who's just not being released (does anybody else smell a Laurel and
> > > > Hardy skit coming on?)
> > >
> > > Seriously!
> > >
> > > >
> > > > Also, this may have been something temporary, that then got left
> around
> > > > because nobody bothered to try the released versions:
> > > >
> > > >
> > >
> https://github.com/repoze/repoze.what/commit/b9fc014c0e174540679678af99f04b01756618de
> > > >
> > > > note, 2.0a1 wasn't compatible.. but perhaps 2.2 would work fine?
> > > >
> > > >
> > >
> > > Def something to try out. If this is still an outstanding issue next
> week
> > > (when I have a bit more time) I'll see what I can do to test out the
> > > variations.
> >
> > FYI, I tried 2.0 and it definitely broke repoze.what's test suite. The
> API
> > is simply incompatible (shake your fists at whoever did that please). For
> > those not following along: please make a _NEW_ module when you break
> > your API.
> >
> > On the off chance it could just be dropped, I looked at turbogears2, and
> > this seems to be the only line 

Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

2015-11-12 Thread Egor Guz
Eli,

First of all I would like to say thank you for your effort (I have never seen 
so many patch sets ;)), but I don’t think we should remove “tls_disabled=True” 
tests from the gates now (maybe in L).
It’s still a very commonly used feature, and a backup plan if TLS doesn’t work 
for some reason.

I think grouping tests per pipeline is a good idea; we should definitely 
follow it.

—
Egor

From: "Qiao,Liyong"
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, November 11, 2015 at 23:02
To: "openstack-dev@lists.openstack.org"
Subject: [openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

hello all:

I will give an update on the status of Magnum functional testing. 
Functional/integration testing is important to us: since we change/modify the 
Heat templates rapidly, we need to verify that the modifications are correct, 
so we need to cover all the templates Magnum has.
Currently we only have k8s testing (only tested with the atomic image); we 
need to add more, like swarm (WIP) and mesos (under planning), and we may also 
need to support the COS image.
Lots of work needs to be done.

Regarding the time cost of functional testing, we discussed this during the 
Tokyo summit; Adrian expected that we can reduce it to 20 minutes.

I did some analysis of the functional/integration testing in the gate pipeline.
Taking k8s functional testing as an example, we run the following test cases:

1) baymodel creation
2) bay(tls_disabled=True) creation/deletion
3) bay(tls_disabled=False) creation to test the k8s API, deleting it after 
testing.

For each stage, the time cost is as follows:

  *   devstack prepare: 5-6 mins
  *   Running devstack: 15 mins(include downloading atomic image)
  *   1) and 2) 15 mins
  *   3) 15 +3 mins

In total about 60 minutes; a current example is 1h 05m 57s.
see 
http://logs.openstack.org/10/243910/1/check/gate-functional-dsvm-magnum-k8s/5e61039/console.html
for all time stamps.

I don't think it is possible to reduce the time to 20 minutes, since the 
devstack setup alone already takes 20 minutes.

To reduce time, I suggest creating only one bay per pipeline and doing various 
kinds of testing on that bay; if we want to test a specific bay configuration 
(for example, network_driver, etc.), we can create a new pipeline.

So I think we can delete 2), since 3) does similar things (create/delete); the 
difference is that 3) uses tls_disabled=False. What do you think?
See https://review.openstack.org/244378 for the time cost; it will be reduced 
to 45 minutes (48m 50s in the example).
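The arithmetic above can be sanity-checked quickly. The per-stage numbers are
the rough estimates from the breakdown (taking 6 minutes for "5-6 mins"), so
the totals come out as lower bounds of the observed runs:

```python
# Stage estimates in minutes, from the breakdown above.
stages = {
    "devstack prepare": 6,
    "running devstack": 15,
    "baymodel + tls_disabled bay, steps 1) and 2)": 15,
    "tls-enabled bay + k8s API test, step 3)": 15 + 3,
}

total = sum(stages.values())
without_stage_2 = total - stages["baymodel + tls_disabled bay, steps 1) and 2)"]

print("estimated total: ~%d min (observed run: 1h 05m 57s)" % total)
print("dropping step 2) saves 15 min: ~%d min" % without_stage_2)
```

Either way, the fixed ~20 minutes of devstack setup dominates, which is why
the 20-minute target looks out of reach without cutting a whole bay.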

=
For other related functional testing work:
I've done the split of functional testing per COE; we have these pipelines:

  *   gate-functional-dsvm-magnum-api 30 mins
  *   gate-functional-dsvm-magnum-k8s 60 mins

And for the swarm pipeline, the patches are done and under review now (they 
work fine on the gate):
https://review.openstack.org/244391
https://review.openstack.org/226125



--
BR, Eli(Li Yong)Qiao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-12 Thread Tony Breeds
On Thu, Nov 12, 2015 at 02:40:19PM +0100, Alan Pevec wrote:

> AFAICT there are at least two blockers for 2014.2.4:
> - horizon - django_openstack_auth issue Tony mentions in
> https://review.openstack.org/172826

Horizon itself is fine BUT gets caught up in a mess of g-r updates.  The issue
at hand is that DOA has an uncapped requirement on oslo.config which then gets
a 2.x release which doesn't contain the oslo namespace.  The *right* way to fix
it is to cut a new release of DOA including a g-r sync.

There was some discussion on this in #openstack-stable:
http://eavesdrop.openstack.org/irclogs/%23openstack-stable/%23openstack-stable.2015-11-12.log.html

An alternate hack would be: https://review.openstack.org/#/c/244950/
but that seems fragile to me.
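The uncapped-requirement failure mode described above is easy to demonstrate.
A real resolver uses requirement specifiers (e.g. the `packaging` library);
this stdlib sketch just compares dotted version tuples, and the minimum
version shown is made up for illustration:

```python
def ver(s):
    """Turn '1.11.0' into a comparable tuple (1, 11, 0)."""
    return tuple(int(p) for p in s.split("."))

def pick(available, minimum, cap=None):
    """Choose the highest available version with minimum <= v < cap."""
    ok = [v for v in available
          if ver(v) >= ver(minimum) and (cap is None or ver(v) < ver(cap))]
    return max(ok, key=ver) if ok else None

available = ["1.11.0", "2.0.0", "2.1.0"]

# Uncapped: the resolver happily takes 2.x, which dropped the oslo namespace.
print(pick(available, "1.4.0"))
# Capped (as a g-r sync would do): stays on the namespaced 1.x series.
print(pick(available, "1.4.0", cap="2.0.0"))
```

This is why cutting a new DOA release with the synced (capped) requirement is
the durable fix, while pinning elsewhere is the fragile hack.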

> - nova - gate-grenade-dsvm-ironic-sideways failures e.g. in
> https://review.openstack.org/227800

Matt took care of this: https://review.openstack.org/#/c/244688/
 
Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][upgrade] new 'all things upgrade' subteam

2015-11-12 Thread Korzeniewski, Artur
My TZ is UTC +1:00.
Do we have any favorite day? Maybe Tuesday?
It is a good idea to have at least a half-hour sync each week.

Regards,
Artur

From: Anna Kamyshnikova [mailto:akamyshnik...@mirantis.com]
Sent: Wednesday, November 11, 2015 10:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][upgrade] new 'all things upgrade' subteam

Great news! Thanks Ihar!

I'm interested in working on this :) My TZ is UTC+3:00.

On Wed, Nov 11, 2015 at 12:14 AM, Martin Hickey 
> wrote:
I am interested too and will be available to the subteam.

On Tue, Nov 10, 2015 at 9:03 PM, Sean M. Collins 
> wrote:
I'm excited. I plan on attending and being part of the subteam. I think
the tags that Dan Smith recently introduced could be our deliverables,
where this subteam focuses on working towards Neutron being tagged with
these tags.

https://review.openstack.org/239771 - Introduce assert:supports-upgrade tag

https://review.openstack.org/239778 - Introduce assert:supports-rolling-upgrade 
tag
--
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Regards,
Ann Kamyshnikova
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] hackathon day

2015-11-12 Thread Rosa, Andrea (HP Cloud Services)
Hi

People in China had a 3-day hackathon a few months ago, and I was thinking of 
having a similar thing in Europe.
My original idea was to propose adding an extra day after the mid-cycle, but I 
am not sure that is a good idea anymore:

CONS:
- the next mid-cycle is going to be the first one outside the USA and, as with 
any new thing, it has some level of uncertainty; we know that we could have 
fewer participants than at the other meetups, so it is a risk to add the 
hackathon at this time
- it is probably better to have more than one day to get the most out of a 
hackathon event
- people attending the hackathon are not necessarily the same people attending 
a mid-cycle event; I think of the hackathon as a very good opportunity for new 
contributors
- it is already late for this proposal; I know I should have proposed it at 
the last summit, my fault

PROS:
- having the hackathon follow the mid-cycle gives us the opportunity to have 
more core reviewers available, which is a key point for getting the right 
direction and getting stuff done
- cost effective: people interested in attending both events can save a trip

It'd be good to have feedback about the Chinese hackathon experience to 
understand whether it's worth putting effort into making a similar event in 
other parts of the world.
If it's not the mid-cycle, I think there are other events where it could be 
good to add a couple of extra days for the hackathon; I am thinking for 
example of FOSDEM [1]... well, not for 2016, as it is on the weekend after the 
mid-cycle :)

Thanks
--
Andrea Rosa

[1] https://fosdem.org/






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] ::db classes

2015-11-12 Thread Denis Egorenko
Hi Clayton!

I would like to use the same solution for this problem as we have for the
::logging class: we just leave those default parameters
without $::os_service_default, keeping the old values (10, 20, etc.) [1], and
we should also raise warnings, as Alex Schultz suggests (for example,
[2]). To me, it is the best solution to the problem.

[1]
https://github.com/openstack/puppet-cinder/blob/master/manifests/logging.pp#L98
[2] https://review.openstack.org/#/c/239800/5/manifests/backend/eqlx.pp
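The $::os_service_default idea referenced above can be sketched outside of
puppet: a sentinel value means "do not write this option at all", so the
service's own internal default (or, as with oslo_db's max_pool_size, the lack
of one) applies instead of a hardcoded 10/20. The function and option names
here are illustrative, not the puppet modules' actual API:

```python
SERVICE_DEFAULT = object()  # plays the role of $::os_service_default

def render_database_section(max_pool_size=SERVICE_DEFAULT,
                            max_overflow=SERVICE_DEFAULT):
    """Render a [database] config section, omitting unset options."""
    opts = {"max_pool_size": max_pool_size, "max_overflow": max_overflow}
    lines = ["[database]"]
    for key, value in sorted(opts.items()):
        if value is not SERVICE_DEFAULT:  # unset options are simply omitted
            lines.append("%s = %s" % (key, value))
    return "\n".join(lines)

print(render_database_section())                  # only the section header
print(render_database_section(max_pool_size=10))  # explicit operator override
```

The alternative chosen for ::logging — keeping the old explicit values plus a
deprecation warning — trades this transparency for backward compatibility.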

2015-11-11 18:04 GMT+03:00 Clayton O'Neill :

> On Wed, Nov 11, 2015 at 9:50 AM, Clayton O'Neill 
> wrote:
>
>> I discovered this issue last night and opened a bug on it (
>> https://bugs.launchpad.net/puppet-tuskar/+bug/1515273).
>>
>> This affects most of the modules, and the short version of it is that the
>> defaults in all the ::db classes are wrong for max_pool_size
>> and max_overflow.  We’re setting test to 10 and 20, but oslo_db actually
>> has no internal default.
>>
>
> To clarify: The modules following this pattern are setting max_pool_size
> and max_overflow to 10 and 20 respectively, but oslo_db has no internal
> default.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards,
Egorenko Denis,
Deployment Engineer
Mirantis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Performance] Please add your discussion items to the next meeting agenda

2015-11-12 Thread Dina Belova
Folks,

Last time we had a Performance team meeting, it was a bit messy due to the
lack of time spent by team members filling in the meeting agenda. Let's fix
that this time! :)

Please add your items to the agenda
https://wiki.openstack.org/wiki/Meetings/Performance#Agenda_for_next_meeting
(I intentionally keep it empty right now to discuss points interesting to
*you*) and let's go through them next time :)


Cheers,
Dina
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [HA][RabbitMQ][messaging][Pacemaker][operators] Improved OCF resource agent for dynamic active-active mirrored clustering

2015-11-12 Thread Vladimir Kuklin
Hi, Andrew

>Ah good, I understood it correctly then :)
> I would be interested in your opinion of how the other agent does the
bootstrapping (ie. without notifications or master/slave).
>That makes sense, the part I’m struggling with is that it sounds like the
other agent shouldn’t work at all.
> Yet we’ve used it extensively and not experienced these kinds of hangs.
Regarding other scripts - I am not aware of any other scripts that actually
handle a cloned RabbitMQ server. I may be mistaken, of course. So if you are
aware of scripts that succeed in creating a RabbitMQ cluster which actually
survives 1-node or all-node failure scenarios and reassembles the cluster
automatically - please let us know.

> Changing the state isn’t ideal but there is precedent, the part that has
me concerned is the error codes coming out of notify.
> Apart from producing some log messages, I can’t think how it would
produce any recovery.

> Unless you’re relying on the subsequent monitor operation to notice the
error state.
> I guess that would work but you might be waiting a while for it to notice.

Yes, we are relying on subsequent monitor operations. We also have several
OCF check levels to catch the case when one node does not have the rabbitmq
application started properly (btw, there was a strange bug where we had to
wait for several non-zero checks to fail to get the resource to restart:
http://bugs.clusterlabs.org/show_bug.cgi?id=5243). I now remember why we
notify errors - for error logging, I guess.
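The "recover via subsequent monitors" pattern described above can be sketched
as a tiny state machine: instead of acting on a failed notify, the agent keeps
reporting failure from monitor and lets the cluster manager restart the
resource once enough consecutive checks have failed. The threshold and names
are illustrative, not the real OCF agent's behaviour:

```python
OCF_SUCCESS, OCF_ERR_GENERIC = 0, 1

class MonitorDriven:
    """Model recovery driven purely by repeated monitor results."""

    def __init__(self, failures_before_restart=3):
        self.threshold = failures_before_restart
        self.failed_checks = 0
        self.restarts = 0

    def monitor(self, app_running):
        if app_running:
            self.failed_checks = 0
            return OCF_SUCCESS
        self.failed_checks += 1
        if self.failed_checks >= self.threshold:
            self.restarts += 1      # stands in for the cluster manager's restart
            self.failed_checks = 0
        return OCF_ERR_GENERIC

m = MonitorDriven()
results = [m.monitor(ok) for ok in (True, False, False, False)]
print(results, m.restarts)
```

The downside Andrew raises falls out of the model: with monitor-driven
recovery, nothing happens until several monitor intervals have elapsed, so
recovery latency is bounded below by interval × threshold.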


On Thu, Nov 12, 2015 at 1:30 AM, Andrew Beekhof  wrote:

>
> > On 11 Nov 2015, at 11:35 PM, Vladimir Kuklin 
> wrote:
> >
> > Hi, Andrew
> >
> > Let me answer your questions.
> >
> > This agent is active/active which actually marks one of the nodes as
> 'pseudo'-master which is used as a target for other nodes to join to. We
> also check which node is a master and use it in monitor action to check
> whether this node is clustered with this 'master' node. When we do cluster
> bootstrap, we need to decide which node to mark as a master node. Then,
> when it starts (actually, promotes), we can finally pick its name through
> the notification mechanism and ask other nodes to join this cluster.
>
> Ah good, I understood it correctly then :)
> I would be interested in your opinion of how the other agent does the
> bootstrapping (ie. without notifications or master/slave).
>
> >
> > Regarding disconnect_node+forget_cluster_node this is quite simple - we
> need to eject the node from the cluster. Otherwise it is mentioned in the list
> of cluster nodes and a lot of cluster actions, e.g. list_queues, will hang
> forever, as will the forget_cluster_node action.
>
> That makes sense, the part I’m struggling with is that it sounds like the
> other agent shouldn’t work at all.
> Yet we’ve used it extensively and not experienced these kinds of hangs.
>
> >
> > We also handle this case whenever a node leaves the cluster. If you
> remember, I wrote an email to Pacemaker ML regarding getting notifications
> on node unjoin event '[openstack-dev] [Fuel][Pacemaker][HA] Notifying
> clones of offline nodes’.
>
> Oh, I recall that now.
>
> > So we went another way and added a dbus daemon listener that does the
> same when a node leaves the corosync cluster (we know that this is a little bit
> racy, but disconnect+forget actions pair is idempotent).
> >
> > Regarding notification commands - we changed behaviour to the one that
> fit our use cases better and passed our destructive tests. It could be
> Pacemaker-version dependent, so I agree we should consider changing this
> behaviour. But so far it worked for us.
>
> Changing the state isn’t ideal but there is precedent, the part that has
> me concerned is the error codes coming out of notify.
> Apart from producing some log messages, I can’t think how it would produce
> any recovery.
>
> Unless you’re relying on the subsequent monitor operation to notice the
> error state.
> I guess that would work but you might be waiting a while for it to notice.
>
> >
> > On Wed, Nov 11, 2015 at 2:12 PM, Andrew Beekhof 
> wrote:
> >
> > > On 11 Nov 2015, at 6:26 PM, bdobre...@mirantis.com wrote:
> > >
> > > Thank you Andrew.
> > > Answers below.
> > > >>>
> > > Sounds interesting, can you give any comment about how it differs to
> the other[i] upstream agent?
> > > Am I right that this one is effectively A/P and wont function without
> some kind of shared storage?
> > > Any particular reason you went down this path instead of full A/A?
> > >
> > > [i]
> > >
> https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/rabbitmq-cluster
> > > <<<
> > > It is based on multistate clone notifications. It requires nothing
> shared but the Corosync information base (CIB), where all Pacemaker
> resources are stored anyway.
> > > And it is fully A/A.
> >
> > Oh!  So I should skip the A/P parts before "Auto-configuration of a
> cluster with a Pacemaker”?
> > Is the idea that the 

[openstack-dev] What's Up, Doc? 13 November 2015

2015-11-12 Thread Lana Brindley
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi everyone,

Well, things are starting to ramp up again after Summit. 

This week, I've spent some time catching up with other PTLs, and also worked on 
the DocImpact spec. I've also implemented my plan to get the Speciality Team 
leads to provide content on their projects individually, so many thanks to them 
for replying with so much great info!

== Progress towards Mitaka ==

145 days to go!

100 bugs closed so far for this release.

DocImpact
* I've updated the spec and made contact with Infra to nut out the finer 
technical details: 
https://blueprints.launchpad.net/openstack-manuals/+spec/review-docimpact

RST Conversions
* Arch Guide
** https://blueprints.launchpad.net/openstack-manuals/+spec/archguide-mitaka-rst
** Contact the Ops Guide Speciality team: 
https://wiki.openstack.org/wiki/Documentation/OpsGuide
* Ops Guide
** https://blueprints.launchpad.net/openstack-manuals/+spec/ops-guide-rst
** Lana will reach out to O'Reilly to discuss the printed book before this work 
begins
* Config Ref
** Contact the Config Ref Speciality team: 
https://wiki.openstack.org/wiki/Documentation/ConfigRef
* Virtual Machine Image Guide
** https://blueprints.launchpad.net/openstack-manuals/+spec/image-guide-rst and 
https://review.openstack.org/#/c/244598/

Reorganisations
* Arch Guide
** 
https://blueprints.launchpad.net/openstack-manuals/+spec/archguide-mitaka-reorg
** Contact the Ops Guide Speciality team: 
https://wiki.openstack.org/wiki/Documentation/OpsGuide
* User Guides
** 
https://blueprints.launchpad.net/openstack-manuals/+spec/user-guides-reorganised
** Contact the User Guide Speciality team: 
https://wiki.openstack.org/wiki/User_Guides

Document openstack-doc-tools and reorganise index page
* Thanks to Brian for volunteering to take this on!

Horizon UX work
* There's an interesting opportunity to improve the wording in Horizon, 
specifically around LBaaS, and also to develop a set of standards around the 
language used in the dashboard. If you're interested, let me know and I'll get 
you in contact with the right people.

== Speciality Teams ==

'''HA Guide - Bogdan Dobrelya'''
Created a poll to shift HA guide IRC meeting time 
(http://lists.openstack.org/pipermail/openstack-docs/2015-November/007830.html).
 Submitted a status for HA guide testing 
(http://lists.openstack.org/pipermail/openstack-docs/2015-October/007718.html), 
hoping to see more participants in testing. Trying to push a few patches, and 
initiated related discussion: https://review.openstack.org/#/c/225418/ and 
https://review.openstack.org/235893. Thinking about how to effectively collaborate 
with Adam Spiers and his OpenStack HA initiative 
(http://lists.openstack.org/pipermail/openstack-dev/2015-November/079127.html) 
to benefit the HA guide project as well.

'''Installation Guide - Christian Berendt'''
Moved the team meeting from Google Hangouts to IRC (online at 
http://eavesdrop.openstack.org/#Documentation_Install_Team_Meeting). We had our 
first team meeting on IRC 
(http://eavesdrop.openstack.org/meetings/docinstallteam/2015/docinstallteam.2015-11-11-01.03.html),
 where we discussed the inclusion of hypervisor-specific appendices (no 
consensus at the moment). Really good progress on the Debian install guide; 
zigo is doing a good job. 

'''Networking Guide - Edgar Magana'''
We have a new spec for OVN: https://review.openstack.org/#/c/244367/. I am going 
to start pulling people into specific patches.

'''Security Guide - Nathaniel Dillon'''
We have added a new chapter on Manila and have completed the reworking of the 
case studies.

'''User Guides - Joseph Robinson'''
Our meeting format has changed from hangouts to IRC meetings, with the 
appropriate changes made to the meetbot script thanks to Brian. The mitaka 
specification: the priority at the moment is checking in with project docs 
liaisons to confirm what services are in scope, and then add a short statement 
to the specification outlining this scope. Responding to Cloud Admin Guide 
patches. Thanks to Andreas for the link.

'''Ops and Arch Guides - Shilla Saebi'''
The operations guide RST migration is on hold until we find out if we are going 
to move forward with DocBook or RST. In the meantime we are looking to add 
content and update it. The architecture guide RST conversion is underway. The 
progress on chapters and pages can be checked via the wiki page: 
https://wiki.openstack.org/wiki/Documentation/Migrate

'''API Docs - Anne Gentle'''
Russell Sim plans to revise the templates for developer.openstack.org landing 
page to better match docs.openstack.org. We'll also map out a plan for 
fairy-slipper and the WADL migration. Spec to be updated soon.

'''Config Ref - Gauvain Pocentek'''
The initial structure for the Config Reference RST migration is up, the 
automation tools are ready to handle RST table generation, and a couple of pages 
have already been converted. We're waiting for the liberty branching to start 
pushing more 

Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade

2015-11-12 Thread Sean M. Collins
On Thu, Nov 12, 2015 at 05:55:51PM EST, Ihar Hrachyshka wrote:
> I also believe that the first step to get the job set is making neutron own
> its grenade future, by migrating to a grenade plugin maintained in the
> neutron tree.

I'd like to see what Sean Dague thinks of this - my worry is that if we
start pulling things into Neutron we lose valuable insight from people
who know a lot about Grenade.

Not to mention, Sean and I have had conversations about trying to get
Neutron as the default for DevStack - we can't just take our ball and go
off to our own corner.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic classifiers

2015-11-12 Thread Paul Carver

On 11/12/2015 3:50 PM, Ihar Hrachyshka wrote:


All I am saying is that IF we merge some classifier API into neutron
core and start using it for core, non-experimental, features, we cannot
later move to some newer version of this API [that you will iterate on]
without leaving a huge pile of compatibility code that would not exist
in the first place if only we thought about proper API in advance. If
that’s what you envision, fine; but I believe it will make adoption of
the ‘evolving’ API a lot slower than it could otherwise be.


I don't think I disagree at all. But we don't have a classifier API in 
neutron core (unless we consider security groups to be it) and I don't 
think anyone is saying that the classifier in networking-sfc should be 
merged straight into core as-is. In fact I think we're saying exactly 
the opposite, that *a* classifier will sit in networking-sfc, outside of 
core neutron, until *some* classifier is merged into core neutron.


The point of networking-sfc isn't the classifier. A classifier is simply 
a prerequisite. So by all means let's work on defining and merging into 
core neutron a classifier that we can consider non-experimental and 
stable for all features to share and depend on, but we don't want SFC to 
be non-functional while we wait for that to happen. We can call the 
networking-sfc classifier experimental and slap a warning on it that it'll be 
replaced with the core neutron classifier once such a thing has been 
implemented.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-12 Thread Matthias Runge
On 13/11/15 02:49, Tony Breeds wrote:
> On Thu, Nov 12, 2015 at 02:40:19PM +0100, Alan Pevec wrote:
> 
>> AFAICT there are at least two blockers for 2014.2.4: - horizon -
>> django_openstack_auth issue Tony mentions in 
>> https://review.openstack.org/172826
> 
> Horizon itself is fine BUT gets caught up in a mess of g-r updates.
> The issue at hand is that DOA has an uncapped requirement on
> oslo.config which then gets a 2.x release which doesn't contain the
> oslo namespace.  The *right* way to fix it is to cut a new release
> of DOA including a g-r sync.
> 
> There was some discussion on this in #openstack-stable: 
> http://eavesdrop.openstack.org/irclogs/%23openstack-stable/%23openstack-stable.2015-11-12.log.html
>
>  An alternate hack would be:
> https://review.openstack.org/#/c/244950/ but that seems fragile to
> me.
> 

Looking at
http://logs.openstack.org/periodic-stable/periodic-horizon-python27-juno/1f5b282/console.html

it's obvious that horizon falls down the same way as doa.

Speaking of your "hacky" patch: yes and no. It makes the gate pass, but
it doesn't change the code itself. For most people, this will work fine.

The right way to do it would be to create a juno branch for doa and
cap requirements there.
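The effect of the uncapped requirement can be illustrated with the `packaging` library (the version numbers below are made up for illustration; the real DOA specifier may differ): an uncapped specifier admits the 2.x release that dropped the oslo namespace, while the cap a stable/juno branch would carry keeps resolution on the 1.x series.

```python
# Illustration only: hypothetical version numbers standing in for
# oslo.config releases, not the actual juno requirement lines.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

available = [Version(v) for v in ["1.6.0", "1.11.0", "2.0.0", "2.6.0"]]

uncapped = SpecifierSet(">=1.6.0")          # what an uncapped requirement allows
capped = SpecifierSet(">=1.6.0,<2.0.0")     # what a juno branch would pin

# pip-style resolution picks the newest matching release:
print(max(v for v in available if v in uncapped))  # 2.6.0 -> no oslo namespace
print(max(v for v in available if v in capped))    # 1.11.0 -> safe
```

This is why branching and capping fixes the problem at the source, while a gate-only workaround leaves every fresh install exposed to the same resolution.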

How to do this? Is there a procedure to request a branch?

Matthias


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic classifiers

2015-11-12 Thread Paul Carver

On 11/3/2015 1:03 PM, Sean M. Collins wrote:

Anyway, the code is currently up on GitHub - I just threw it on there
because I wanted to scratch my hacking itch quickly.

https://github.com/sc68cal/neutron-classifier



Sean,

How much is needed to turn your models into something runnable to the 
extent of populating a database? I'm not really all that proficient with 
SQLAlchemy or SQL in general, so I can't really visualize what the 
polymorphism statements in your model actually create.


I'd like to create a few classifier rules and see what gets populated 
into the database, and also to understand how complicated an SQL query 
SQLAlchemy generates in order to reassemble each rule from its 
polymorphic representation in the database.
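For anyone who wants to see what such polymorphic models produce without reading the actual neutron-classifier code, here is a minimal, hypothetical joined-table inheritance sketch in SQLAlchemy (the model names are invented, not Sean's): inserting a subclass writes a base-table row plus a subtable row, and querying the base class transparently JOINs them back together.

```python
# Hypothetical models for illustration; not the neutron-classifier schema.
from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Classifier(Base):
    __tablename__ = "classifiers"
    id = Column(Integer, primary_key=True)
    type = Column(String(20))           # discriminator column
    __mapper_args__ = {"polymorphic_identity": "classifier",
                       "polymorphic_on": type}

class IpClassifier(Classifier):
    __tablename__ = "ip_classifiers"
    id = Column(Integer, ForeignKey("classifiers.id"), primary_key=True)
    source_ip = Column(String(64))
    __mapper_args__ = {"polymorphic_identity": "ip"}

engine = create_engine("sqlite://")     # in-memory database
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# One add() populates both the base table and the subtable.
session.add(IpClassifier(source_ip="10.0.0.0/24"))
session.commit()

# Querying the base class JOINs classifiers to ip_classifiers behind the
# scenes and hands back the subclass instance.
rule = session.query(Classifier).first()
print(type(rule).__name__, rule.source_ip)
```

Turning on `echo=True` in `create_engine` shows the exact SELECT/JOIN SQLAlchemy emits, which answers the "how complicated is the query" question directly.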




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][upgrade] new 'all things upgrade' subteam

2015-11-12 Thread Anna Kamyshnikova
All options sound good, but 15 UTC on Monday and Friday overlaps with
some of my local meetings; I'll try to handle this somehow if it suits
others.

On Fri, Nov 13, 2015 at 7:29 AM, Sean M. Collins  wrote:

> Monday sounds good to me. 1500 utc sounds good too.
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] zombie process after ceilometer-api

2015-11-12 Thread Madhu Mohan Nelemane
Hi,

I see a zombie process left out by ceilometer-api after a client command
like "ceilometer meter-list". This seems to occur only when I change the
number of workers in the [api] section of ceilometer.conf to 2 or more.
With the default value of workers = 1, there is no zombie.

Has anybody encountered this problem? Or is there already a bug tracking
this?

Thanks,
Madhu Mohan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] hackathon day

2015-11-12 Thread Wang, Shane
Hi Rosa,

I am one of the organizers of China Hackathons.
I think it is a good idea to host a Hackathon in Europe right after the 
mid-cycle meetup. However, I suggest allowing more days instead of only one to 
fix bugs, or you have to be well prepared, because fixing bugs is about 
reproducing, fixing and reviewing - not that easy.
When we did the Hackathon in August, we had half a day to share techniques, 
meetup-style. So possibly you can combine the mid-cycle meetup with the 
hackathon, which is really a good idea to save costs. I would also like to note 
that inviting cores is one of the key aspects of making the event succeed. 
Because the event will definitely attract many developers, new or old, and you 
will have many fixes, it needs more cores with the bandwidth to review those 
fixes within 1-2 days.

Best Regards.
--
Shane

-Original Message-
From: Rosa, Andrea (HP Cloud Services) [mailto:andrea.r...@hpe.com] 
Sent: Thursday, November 12, 2015 7:15 PM
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [nova] hackathon day

Hi

I knew that people in China had a 3-day hackathon a few months ago, and I was 
thinking of having a similar thing in Europe.
My original idea was to propose adding an extra day after the mid-cycle, but I 
am not sure if that is a good idea anymore:

CONS:
- the next mid-cycle is going to be the first one outside the USA, and as for any 
new thing it has some level of uncertainty; we know that we could have fewer 
participants than the other meetups, so it is a risk to add the hackathon at 
this time
- probably it is better to have more than one day to get the most out of 
a hackathon event
- people attending the hackathon are not necessarily the same people attending a 
mid-cycle event; I think of the hackathon as a very good opportunity for new 
contributors
- it is already late for this proposal, I know I had to propose it at the last 
summit, my fault

PROS:
- having the hackathon follow the mid-cycle gives us the opportunity to 
have more core reviewers available, which is key for setting the right 
direction and getting stuff done
- cost effective: people interested in attending both events can save a trip 

It'd be good to have feedback about the Chinese hackathon experience to 
understand if it's worth putting effort into making a similar event in other parts 
of the world.
If it's not the mid-cycle, I think there are other events where it could be good 
to have a couple of extra days for the hackathon; I am thinking for example of 
FOSDEM [1]... well, not for 2016, as it is on the weekend after the mid-cycle :)

Thanks
--
Andrea Rosa

[1] https://fosdem.org/






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-12 Thread Alan Pevec
> This is a call to stable-maint teams for Nova, Keystone, Glance,
> Cinder, Neutron, Horizon, Heat, Ceilometer, Trove and Sahara to review
> open stable/juno changes[2] and approve/abandon them as appropriate.

CCing CPLs listed in
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Stable_Branch

I got only an ACK from Erno for Glance; is any other project ready?


> [2] 
> https://review.openstack.org/#/q/status:open+AND+branch:stable/juno+AND+%28project:openstack/nova+OR+project:openstack/keystone+OR+project:openstack/glance+OR+project:openstack/cinder+OR+project:openstack/neutron+OR+project:openstack/horizon+OR+project:openstack/heat+OR+project:openstack/ceilometer+OR+project:openstack/trove+OR+project:openstack/sahara%29,n,z

AFAICT there are at least two blockers for 2014.2.4:
- horizon - django_openstack_auth issue Tony mentions in
https://review.openstack.org/172826
- nova - gate-grenade-dsvm-ironic-sideways failures e.g. in
https://review.openstack.org/227800


Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] zombie process after ceilometer-api

2015-11-12 Thread Madhu Mohan Nelemane
Here is the output of "ps aux | grep ceilometer-api" after "ceilometer
meter-list"

ceilome+ 27192  1.0  1.1 247400 60388 ?Ss   14:10   0:00
/usr/bin/python /usr/bin/ceilometer-api
--config-file=/etc/ceilometer/ceilometer.conf
--logfile=/var/log/ceilometer/api.log
ceilome+ 27655 15.3  0.0  0 0 ?Z14:11   0:00
[ceilometer-api] 

It's the second line I am concerned about.
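For what it's worth, the 'Z' state itself is easy to reproduce outside ceilometer (a hypothetical stand-alone sketch, Linux-specific because it reads /proc): a forked worker that exits stays in the process table as a zombie until the parent calls waitpid() on it, which is consistent with a parent process that forks extra API workers but never reaps them.

```python
# Minimal zombie reproduction, unrelated to ceilometer itself.
import os
import time

pid = os.fork()
if pid == 0:
    os._exit(0)          # worker finishes immediately

time.sleep(0.2)          # child has exited but has not been reaped yet
with open(f"/proc/{pid}/stat") as f:
    state = f.read().split()[2]
print("state before reaping:", state)   # 'Z' on Linux, like the ps output above

reaped, status = os.waitpid(pid, 0)     # reaping removes the <defunct> entry
print("reaped the zombie:", reaped == pid)
```

If the master process in question installs a SIGCHLD handler (or waits on its workers) the zombies disappear, which is one reason the mod_wsgi deployment is often recommended over running the standalone API with multiple workers.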
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [group-based-policy] QoS support in GBP

2015-11-12 Thread Duarte Cardoso, Igor
Hi OpenStack Group-based Policy team,

As I unofficially said before, I am interested in bringing basic QoS to GBP via 
the Neutron QoS API, which has offered bandwidth limiting at the port and 
network level since Liberty.

I have added the item to today's Meeting Agenda for an initial discussion about 
this and where it would best fit for GBP.

Best regards,



Igor Duarte Cardoso
Software Engineer
+353 61 777 858
SIE1-2-170
Intel Shannon Ltd.
Dromore House, East Park
Shannon, Co. Clare
IRELAND

--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263


This e-mail and any attachments may contain confidential material for the sole
use of the intended recipient(s). Any review or distribution by others is
strictly prohibited. If you are not the intended recipient, please contact the
sender and delete all copies.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-12 Thread Matthias Runge
On 12/11/15 14:40, Alan Pevec wrote:
>> This is a call to stable-maint teams for Nova, Keystone, Glance,
>> Cinder, Neutron, Horizon, Heat, Ceilometer, Trove and Sahara to review
>> open stable/juno changes[2] and approve/abandon them as appropriate.
> 
> CCing CPLs listed in
> https://wiki.openstack.org/wiki/CrossProjectLiaisons#Stable_Branch
> 
> I got only an ACK from Erno for Glance; is any other project ready?
> 
> 
>> [2] 
>> https://review.openstack.org/#/q/status:open+AND+branch:stable/juno+AND+%28project:openstack/nova+OR+project:openstack/keystone+OR+project:openstack/glance+OR+project:openstack/cinder+OR+project:openstack/neutron+OR+project:openstack/horizon+OR+project:openstack/heat+OR+project:openstack/ceilometer+OR+project:openstack/trove+OR+project:openstack/sahara%29,n,z
> 
> AFAICT there are at least two blockers for 2014.2.4:
> - horizon - django_openstack_auth issue Tony mentions in
> https://review.openstack.org/172826
> - nova - gate-grenade-dsvm-ironic-sideways failures e.g. in
> https://review.openstack.org/227800
> 
> 

Horizon itself *should* be fine. For me, the issues Tony mentioned are
actually inherited from keystoneclient.

Matthias


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] zombie process after ceilometer-api

2015-11-12 Thread gord chung
can you clarify what you mean by 'zombie process'? i'm not aware of a 
bug relating to this, so could you open one?


just for reference, this is the suggested method for deploying the 
ceilometer-api: 
http://docs.openstack.org/developer/ceilometer/install/mod_wsgi.html


cheers,

On 12/11/2015 8:37 AM, Madhu Mohan Nelemane wrote:

Hi,

I see a zombie process left out by ceilometer-api after a client 
command like "ceilometer meter-list". This seems to occur only when I 
change the number of workers in the [api] section of ceilometer.conf to 2 
or more.

With the default value of workers = 1, there is no zombie.

Has anybody encountered this problem? Or is there already a bug 
tracking this?


Thanks,
Madhu Mohan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] python-novaclient 2.35.0 release freeze

2015-11-12 Thread Matt Riedemann
We're planning to do a novaclient minor release before we start looking 
at some backwards incompatible changes which would force a major version 
change to 3.0 (like dropping the novaclient.v1_1 module).


The plan is to get that requested today.

This is the list of open reviews:

https://review.openstack.org/#/q/status:open+project:openstack/python-novaclient,n,z

These are the changes since 2.34.0:

mriedem@ubuntu:~/git/python-novaclient$ git log --oneline --no-merges 
2.34.0..

5f5ec35 Add v2 support for optional project_id in version discovery urls
4f16fe6 make project_id optional in urls for version discovery
09bb834 Refactor parsing metadata to a common function

If there are any high-priority fixes that you think we should get into 
2.35.0 this week, please ping/speak up in #openstack-nova on IRC.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][gnocchi] defining scope/domain

2015-11-12 Thread gord chung



On 11/11/2015 4:19 AM, Julien Danjou wrote:

Yes, I think there is. We should write typical use cases and examples in
the documentation so that people may get a grasp of what Gnocchi is and
what kind of problem it can or cannot solve.
cool stuff... how should we start this? etherpad[1]? i think we can 
publish it to the main docs page as we come to agreement on each use case.


[1] https://etherpad.openstack.org/p/telemetry-gnocchi-use-cases

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-12 Thread Ihar Hrachyshka

Alan Pevec  wrote:


This is a call to stable-maint teams for Nova, Keystone, Glance,
Cinder, Neutron, Horizon, Heat, Ceilometer, Trove and Sahara to review
open stable/juno changes[2] and approve/abandon them as appropriate.


CCing CPLs listed in
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Stable_Branch

I got only an ACK from Erno for Glance; is any other project ready?


Neutron does not plan any more backports, and the queue is empty. We are  
good to go.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] hackathon day

2015-11-12 Thread Daniel P. Berrange
On Thu, Nov 12, 2015 at 11:15:09AM +, Rosa, Andrea (HP Cloud Services) 
wrote:
> Hi
> 
> I knew that people in China had a 3-day hackathon a few months ago, and I was 
> thinking of having a similar thing in Europe.
> My original idea was to propose adding an extra day after the mid-cycle, but I 
> am not sure if that is a good idea anymore:

The day after the mid-cycle is the main day for travelling to FOSDEM, so
a bad choice for people who want to attend FOSDEM too.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [HA] weekly High Availability meetings on IRC start next Monday

2015-11-12 Thread Adam Spiers
Hi Sergii,

Thanks a lot for the feedback!

Sergii Golovatiuk  wrote:
> Hi Adam,
> 
> It's great we are moving forward with the HA community. Thank you so much for
> bringing HA to the next level. However, I have a couple of comments.
> 
> [1] contains the agenda. I guess we should move it to
> https://etherpad.openstack.org. That will allow people to add their own topics to
> discuss. Action items can be put there as well.
>
>  [1] https://wiki.openstack.org/wiki/Meetings/HATeamMeeting

It's a wiki, so anyone can already add their own topics, and in fact
the page already encourages people to do that :-)

I'd prefer to keep it as a wiki because that is consistent with the
approach of all the other OpenStack meetings, as recommended by

  https://wiki.openstack.org/wiki/Meetings/CreateaMeeting#Add_a_Meeting

It also results in a better audit trail than etherpad (where changes
can be made anonymously).

Action items will be captured by the MeetBot:

  https://git.openstack.org/cgit/openstack-infra/meetbot/tree/doc/Manual.txt

> [2] declares meetings at 9am UTC which might be tough for US-based folks. I
> might be wrong here as I don't know the location of HA experts.
> 
>  [2] http://eavesdrop.openstack.org/#High_Availability_Meeting

Yes, I was aware of this :-/  The problem is that the agenda for the
first meeting will focus on hypervisor HA, and the interested parties
who met in Tokyo are all based in either Europe or Asia (Japan and
Australia).  It's hard but possible to find a time which accommodates
two continents, but almost impossible to find a time which
accommodates three :-/

If it's any consolation, the meeting logs will be available
afterwards, and also the meeting is expected to be short (around 30
minutes) since the majority of work will continue via email and IRC
outside the meeting.  This first meeting is mainly to set a direction
for future collaboration.

However suggestions for how to handle this better are always welcome,
and if the geographical distribution of attendees of future meetings
changes then of course we can consider changing the time in order to
accommodate.  I want it to be an inclusive sub-community.

Cheers,
Adam

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Last sync from oslo-incubator

2015-11-12 Thread Davanum Srinivas
Hi,

A long time ago, oslo-incubator had a lot of code; we have since moved most
of it into oslo.* libraries. There's very little code left, and
we'd like to stop hosting common code in the oslo-incubator repository. We
encourage everyone to adopt the code they have in their repos under
openstack/common into their own namespace/packaging, as we will be
getting rid of any remaining python modules in oslo-incubator.

If there are a couple of projects sharing code, like say cinder and
manila (HT to bswartz), one of those projects can choose to host a
common library between them, similar to how nova and cinder share
os-brick with its own core team. The overall guide to starting
projects is documented here [1], so you will need bits of that for
existing projects. When in doubt, look at the governance, project-config,
and requirements repos and model your own.

Thanks,
Dims

[1] http://docs.openstack.org/infra/manual/creators.html


-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How can I contribute to the scheduler codebase?

2015-11-12 Thread Cheng, Yingxin
Hi Sylvain,

Could you assign me scheduler-driver-use-stevedore? I can take that as my first
step into Nova. I wish I could help more, but I still have many things to learn,
so the simplest work comes first.

Yours,
-Yingxin

> -Original Message-
> From: Sylvain Bauza [mailto:sba...@redhat.com]
> Sent: Monday, November 9, 2015 11:09 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [nova] How can I contribute to the scheduler 
> codebase?
> 
> Hi,
> 
> During the last Nova scheduler meeting (held every Monday at 1400 UTC on
> #openstack-meeting-alt), we identified some ongoing efforts that could
> possibly be addressed by anyone wanting to step in. For the moment, we are
> still polishing the last bits of agreement, but those blueprints should be
> split into small actionable items that could be seen as low-hanging fruit.
> 
> Given those tasks require a bit of context understanding, the best way to
> join us is to attend the Nova scheduler weekly meeting (see the timing
> above). We'll try to provide a bit of guidance and explanation whenever
> needed so that you can get some work assigned to you.
> 
> From an overall point of view, there are still many ways to begin your Nova
> journey by reading
> https://wiki.openstack.org/wiki/Nova/Mentoring#What_should_I_work_on.3F
> 
> HTH,
> -Sylvain
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][upgrade] new 'all things upgrade' subteam

2015-11-12 Thread Sean M. Collins
Monday sounds good to me. 1500 UTC sounds good too.
-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

2015-11-12 Thread Ton Ngo

Thanks Eli for the analysis.  I notice that the time to download the image
is only around 1:15 mins out of some 21 mins to set up devstack.  So it
seems trying to reduce the size of the image won't make a significant
improvement in the devstack time.   I wonder how the image size affects the
VM creation time for the cluster.  If we can look at the Heat event stream,
we might get an idea.
Ton,




From:   Egor Guz
To: "openstack-dev@lists.openstack.org"
Date:   11/12/2015 05:25 PM
Subject:Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing
on gate.



Eli,

First of all I would like to say thank you for your effort (I have never seen
so many patch sets ;)), but I don’t think we should remove “tls_disabled=True”
tests from the gates now (maybe in L).
It’s still a very commonly used feature and a backup plan if TLS doesn’t work
for some reason.

I think it’s a good idea to group tests per pipeline; we should definitely
follow it.

--
Egor

From: "Qiao,Liyong"
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, November 11, 2015 at 23:02
To: "openstack-dev@lists.openstack.org"
Subject: [openstack-dev] [Magnum][Testing] Reduce Functional testing on
gate.

hello all:

I'd like to share some status on Magnum functional testing.
Functional/integration testing is important to us: since we change/modify
the Heat templates rapidly, we need to verify that each modification is
correct, so we need to cover all the templates Magnum has. Currently we
only have k8s testing (and only with the Atomic image); we need to add
more, like Swarm (WIP) and Mesos (planned), and we may also need to support
a COS image. Lots of work needs to be done.

Regarding the functional testing time cost: as we discussed during the Tokyo
summit, Adrian expected that we can reduce it to 20 minutes.

I did some analysis of the functional/integration testing in the gate
pipeline. Taking the k8s functional testing as an example, we run the
following test cases:

1) baymodel creation
2) bay (tls_disabled=True) creation/deletion
3) bay (tls_disabled=False) creation to test the k8s API, deleted after
testing.

For each stage, the time cost is as follows:

  *   devstack prepare: 5-6 mins
  *   running devstack: 15 mins (includes downloading the Atomic image)
  *   1) and 2): 15 mins
  *   3): 15 + 3 mins

In total, about 60 mins; a current example is 1h 05m 57s. See
http://logs.openstack.org/10/243910/1/check/gate-functional-dsvm-magnum-k8s/5e61039/console.html
for all the timestamps.

I don't think it is possible to reduce the time to 20 mins, since the
devstack setup alone already takes 20 mins.

To reduce the time, I suggest creating only 1 bay per pipeline and doing
various kinds of testing on that bay; if we want to test some specific bay
configuration (for example, network_driver etc.), we can create a new
pipeline.

So, I think we can delete 2), since 3) does similar things (create/delete);
the difference is that 3) uses tls_disabled=False. What do you think?
See https://review.openstack.org/244378 for the time cost; it reduces the run
to 45 min (48m 50s in the example).
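As a rough sanity check, the per-stage estimates above add up to about the observed total; a quick sketch (the numbers are the upper-bound estimates quoted above, not fresh measurements):

```python
# Rough gate-time arithmetic from the per-stage estimates above (minutes).
stage_minutes = {
    "devstack_prepare": 6,                  # "5-6 mins", upper bound
    "devstack_run": 15,                     # includes the Atomic image download
    "baymodel_and_tls_disabled_bay": 15,    # cases 1) and 2)
    "tls_enabled_bay_and_k8s_api": 18,      # case 3): 15 + 3 mins
}

total = sum(stage_minutes.values())
print(total)  # 54 -- in the same ballpark as the observed 1h 05m 57s
```

Dropping case 2) removes only part of one stage, which matches the observed drop from roughly 65 to roughly 49 minutes rather than a full 15-minute saving.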

=
For other related functional testing work:
I've done the split of functional testing per COE; we have the following
pipelines:

  *   gate-functional-dsvm-magnum-api 30 mins
  *   gate-functional-dsvm-magnum-k8s 60 mins

And for the Swarm pipeline, the patches are done and under review now (they
work fine on the gate):
https://review.openstack.org/244391
https://review.openstack.org/226125



--
BR, Eli(Li Yong)Qiao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Regarding Designate-Virtualenv Issues

2015-11-12 Thread Sharma Swati6
 Hi Folks,

I am trying to use Designate via the virtualenv approach from the link:
http://docs.openstack.org/developer/designate/install/ubuntu-kilo.html#exercising-the-api

I was able to get the different Designate services started successfully.

I also have these environment variables set in the virtualenv:
export OS_USERNAME=openstack
export OS_PASSWORD=password
export OS_TENANT_NAME=myproject
export OS_TENANT_ID=123456789
export OS_AUTH_URL=https://127.0.0.1:9001/v2.0/

I then began exercising the API calls. When I try to hit
http://IP.Address:9001/api_version/command, I am unsure what IP address it
should be. It did not work with https://127.0.0.1:9001/v2.0/

I also tried 'curl -s checkip.dyndns.org | sed -e 's/.*Current IP Address: //'
-e 's/<.*$//'' to find out the IP address, but it gives a null result for the
server address.

Can anyone please let me know where the problem lies? Am I specifying the
IP address incorrectly somewhere?
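If it helps, the URL for Designate API calls points at wherever the designate-api process itself is listening, not at the Keystone endpoint held in OS_AUTH_URL. A small sketch of composing it (the host, port, version, and resource below are illustrative assumptions, not values taken from your setup):

```python
def designate_url(host, port, version, resource):
    """Compose a Designate API URL.

    host/port are where the designate-api service listens (127.0.0.1:9001
    when it runs on the local machine), which is distinct from the
    Keystone auth URL in OS_AUTH_URL.
    """
    return "http://%s:%d/%s/%s" % (host, port, version, resource)

print(designate_url("127.0.0.1", 9001, "v1", "domains"))
# http://127.0.0.1:9001/v1/domains
```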

Thanks & Regards
 Swati Sharma
 System Engineer
 Tata Consultancy Services
 Cell:- +91-9717238784
 Mailto: sharma.swa...@tcs.com
 Website: http://www.tcs.com
 
 Experience certainty.  IT Services
Business Solutions
Consulting
 
 
=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade

2015-11-12 Thread Armando M.
On 12 November 2015 at 20:24, Sean M. Collins  wrote:

> On Thu, Nov 12, 2015 at 05:55:51PM EST, Ihar Hrachyshka wrote:
> > I also believe that the first step to get the job set is making neutron
> own
> > its grenade future, by migrating to grenade plugin maintained in neutron
> > tree.
>
> I'd like to see what Sean Dague thinks of this - my worry is that if we
> start pulling things into Neutron we lose valuable insight from people
> who know a lot about Grenade.


> Not to mention, Sean and I have had conversations about trying to get
> Neutron as the default for DevStack - we can't just take our ball and go
> in our own corner.
>

Agreed. (I feel like) we had a good discussion at the summit about this: we
clearly have key pieces that are and will stay within the realm of both
devstack and grenade.


>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

2015-11-12 Thread Kai Qiang Wu
Right now, it seems we cannot reduce the devstack runtime. And @Ton, yes, the
image download time seems OK in the Jenkins job; it took about 4-5 mins.

But the bay-creation time is an interesting topic; it seems related to Heat
or VM setup time, and needs some investigation.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!




Re: [openstack-dev] [ceilometer] zombie process after ceilometer-api

2015-11-12 Thread gord chung
if you have configured two workers and you have two ceilometer-api 
processes and an extra 'zombie' process, then it's correct.


someone else might be able to articulate this better but when you use 
workers, it creates child processes that represent the workers you set, 
and there is a parent process which manages them.


you'll probably notice the same if you configure additional
collector/notification agent workers... and the same for other projects.
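The mechanics can be demonstrated in a few lines of Python: a child process that has exited shows up in ps with state 'Z' (defunct) until its parent wait()s on it. This is a Linux-specific sketch (it reads /proc), not ceilometer code:

```python
import os
import time

def observe_zombie():
    """Fork a child that exits immediately, then watch its state in
    /proc before the parent reaps it.  Linux-specific."""
    pid = os.fork()
    if pid == 0:
        os._exit(0)                    # child: exit right away
    state = "?"
    for _ in range(100):               # parent: poll until the child is 'Z'
        with open("/proc/%d/stat" % pid) as f:
            # /proc/<pid>/stat looks like: "1234 (comm) Z ..."
            state = f.read().split(") ")[1].split()[0]
        if state == "Z":
            break
        time.sleep(0.01)
    os.waitpid(pid, 0)                 # reap it: the zombie disappears
    return state

print(observe_zombie())  # Z
```

A master process that holds on to an exited worker before wait()ing leaves exactly such an entry behind, which is what the ps output above shows.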


cheers,

On 12/11/15 09:26 AM, Madhu Mohan Nelemane wrote:
Here is the output of "ps aux | grep ceilometer-api" after "ceilometer 
meter-list"


ceilome+ 27192  1.0  1.1 247400 60388 ?Ss   14:10   0:00 
/usr/bin/python /usr/bin/ceilometer-api 
--config-file=/etc/ceilometer/ceilometer.conf 
--logfile=/var/log/ceilometer/api.log
ceilome+ 27655 15.3  0.0  0 0 ?Z14:11   0:00 
[ceilometer-api] 


It's the second line I am concerned about.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stackalytics] [metrics] Review metrics: average numbers

2015-11-12 Thread Ilya Shakhat
Hi Mike,

> Do I understand right, that average numbers
> > here are calculated out of open reviews, not total number of reviews?


Average numbers are calculated for reviews within the group. But I'd expect
them to be "time since the last vote", not "time since the patch was proposed"
as they are now.

> The most important number which I'm trying to get, is an average time
> > change requests waiting for reviewers since last vote or mark, from
> > all requests (not only those which remain in open state, like it is
> > now, I believe).


Do you mean calculating stats not only for open reviews, but also for those
that are already closed? Should it be for all time, or over a specified period?
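For what it's worth, the metric Mike asks for, the average time a change has been waiting since its last vote or mark, could be computed along these lines (the data shape here is an illustrative stand-in, not Stackalytics' actual schema):

```python
from datetime import datetime, timedelta

def avg_wait_since_last_vote(reviews, now):
    """Average time each change has waited for reviewers, measured from
    its most recent vote/mark (or from submission if never voted on)."""
    waits = []
    for r in reviews:
        last_event = max(r["votes"]) if r["votes"] else r["submitted"]
        waits.append(now - last_event)
    return sum(waits, timedelta()) / len(waits)

reviews = [
    {"submitted": datetime(2015, 11, 8), "votes": [datetime(2015, 11, 10)]},
    {"submitted": datetime(2015, 11, 8), "votes": []},   # never voted on
]
print(avg_wait_since_last_vote(reviews, datetime(2015, 11, 12)))
# 3 days, 0:00:00  (the mean of 2 days and 4 days)
```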

Regards,
--Ilya


2015-11-12 1:14 GMT+03:00 Jesus M. Gonzalez-Barahona :

> Hi, Mike,
>
> I'm not sure what you are looking for exactly, but maybe you can have a
> look at the quarterly reports. AFAIK, currently there is none specific
> to Fuel, but for example for Nova, you have:
>
> http://activity.openstack.org/dash/reports/2015-q3/pdf/projects/nova.pd
> f
>
> In page 6, you have "time waiting for reviewer" (from the moment a new
> patchset is produced, to the time a conclusive review vote is found in
> Gerrit), and "time waiting for developer" (from the conclusive review
> vote to next patchset).
>
> We're working now in a visualization for that kind of information. For
> now, we only have complete changeset values, check if you're
> interested:
>
> http://blog.bitergia.com/2015/10/22/understanding-the-code-review-proce
> ss-in-openstack/
>
> Saludos,
>
> Jesus.
>
> On Wed, 2015-11-11 at 21:45 +, Mike Scherbakov wrote:
> > Hi stackers,
> > I have a question about Stackalytics.
> > I'm trying to get some more data from code review stats. For Fuel,
> > for instance,
> > http://stackalytics.com/report/reviews/fuel-group/open
> > shows some useful stats. Do I understand right, that average numbers
> > here are calculated out of open reviews, not total number of reviews?
> >
> > The most important number which I'm trying to get, is an average time
> > change requests waiting for reviewers since last vote or mark, from
> > all requests (not only those which remain in open state, like it is
> > now, I believe).
> >
> > How hard would it be to get / extend Stackalytics to make it..?
> >
> > Thanks!
> > --
> > Mike Scherbakov
> > #mihgen
> > _
> > _
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> > cribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> --
> Bitergia: http://bitergia.com
> /me at Twitter: https://twitter.com/jgbarah
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] hackathon day

2015-11-12 Thread Wang, Shane
Would it be feasible to open the mid-cycle one day early?

-Original Message-
From: Daniel P. Berrange [mailto:berra...@redhat.com] 
Sent: Thursday, November 12, 2015 10:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] hackathon day

On Thu, Nov 12, 2015 at 11:15:09AM +, Rosa, Andrea (HP Cloud Services) 
wrote:
> Hi
> 
> I knew that people in China had a 3-day hackathon a few months ago, and I was
> thinking of having a similar thing in Europe.
> My original idea was to propose adding an extra day after the mid-cycle, but I
> am not sure that is a good idea anymore:

The day after the mid-cycle is the main day for travelling to FOSDEM, so a bad 
choice for people who want to attend FOSDEM too.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Running Fuel node as non-superuser

2015-11-12 Thread Dmitry Nikishov
Stanislaw,

I agree that this approach would work well. However, does Puppet allow
managing capabilities and/or file ACLs? Or can they be easily set up when
installing an RPM package? (Is there a way to specify capabilities/ACLs in
the RPM spec file?) This doesn't seem to be supported out of the box.

I'm going to research whether it is possible to manage capabilities and ACLs
with what we have out of the box (RPM, Puppet).
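For reference, RPM itself can attach file capabilities at install time via the %caps macro in the %files section (supported in newer RPM versions); a hypothetical fragment, with the path and capability chosen purely for illustration:

```spec
%files
%caps(cap_net_bind_service=+ep) %attr(0755,root,root) /usr/bin/some-fuel-daemon
```

Alternatively, a %post scriptlet could call setcap on the installed binary. Whether the RPM version shipped on the master node supports %caps would need to be checked.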

On Wed, Nov 11, 2015 at 4:29 AM, Stanislaw Bogatkin 
wrote:

> Dmitry, I propose to give needed linux capabilities
> (like CAP_NET_BIND_SERVICE) to processes (services) which needs them and
> then start these processes from non-privileged user. It will give you
> ability to run each process without 'sudo' at all with well fine-grained
> permissions.
>
> On Tue, Nov 10, 2015 at 11:06 PM, Dmitry Nikishov 
> wrote:
>
>> Stanislaw,
>>
>> I've been experimenting with 'capsh' on the 6.1 master node and it
>> doesn't seem to preserve any capabilities when setting SECURE_NOROOT bit,
>> even if explicitely told to do so (via either --keep=1 or
>> "SECURE_KEEP_CAPS" bit).
>>
>> On Tue, Nov 10, 2015 at 11:20 AM, Dmitry Nikishov > > wrote:
>>
>>> Bartolomiej, Adam,
>>> Stanislaw is correct. And this is going to be ported to master. The goal
>>> currently is to reach an agreement on the implementation so that there's
>>> going to be a some kinf of compatibility during upgrades.
>>>
>>> Stanislaw,
>>> Do I understand correctly that you propose using something like sucap to
>>> launch from root, switch to a different user and then drop capabilities
>>> which are not required?
>>>
>>> On Tue, Nov 10, 2015 at 3:11 AM, Stanislaw Bogatkin <
>>> sbogat...@mirantis.com> wrote:
>>>
 Bartolomiej, it's customer-related patches, they, I think, have to be
 done for 6.1 prior to 8+ release.

 Dmitry, it's nice to hear about it. Did you consider to use linux
 capabilities on fuel-related processes instead of just using non-extended
 POSIX privileged/non-privileged permission checks?

 On Tue, Nov 10, 2015 at 10:11 AM, Bartlomiej Piotrowski <
 bpiotrow...@mirantis.com> wrote:

> We don't develop features for already released versions… It should be
> done for master instead.
>
> BP
>
> On Tue, Nov 10, 2015 at 7:02 AM, Adam Heczko 
> wrote:
>
>> Dmitry,
>> +1
>>
>> Do you plan to port your patchset to future Fuel releases?
>>
>> A.
>>
>> On Tue, Nov 10, 2015 at 12:14 AM, Dmitry Nikishov <
>> dnikis...@mirantis.com> wrote:
>>
>>> Hey guys.
>>>
>>> I've been working on making Fuel not to rely on superuser privileges
>>> at least for day-to-day operations. These include:
>>> a) running Fuel services (nailgun, astute etc)
>>> b) user operations (create env, deploy, update, log in)
>>>
>>> The reason for this is that many security policies simply do not
>>> allow root access (especially remote) to servers/environments.
>>>
>>> This feature/enhancement means that anything that currently is being
>>> run under root, will be evaluated and, if possible, put under a
>>> non-privileged
>>> user. This also means that remote root access will be disabled.
>>> Instead, users will have to log in with "fueladmin" user.
>>>
>>> Together with Omar  we've put together a blueprint[0]
>>> and a
>>> spec[1] for this feature. I've been developing this for Fuel 6.1, so
>>> there
>>> are two patches into fuel-main[2] and fuel-library[3] that can give
>>> you an
>>> impression of current approach.
>>>
>>> These patches do following:
>>> - Add fuel-admin-user package, which creates 'fueladmin'
>>> - Make all other fuel-* packages depend on fuel-admin-user
>>> - Put supervisord under 'fueladmin' user.
>>>
>>> Please review the spec/patches and let's have a discussion on the
>>> approach to
>>> this feature.
>>>
>>> Thank you.
>>>
>>> [0] https://blueprints.launchpad.net/fuel/+spec/fuel-nonsuperuser
>>> [1] https://review.openstack.org/243340
>>> [2] https://review.openstack.org/243337
>>> [3] https://review.openstack.org/243313
>>>
>>> --
>>> Dmitry Nikishov,
>>> Deployment Engineer,
>>> Mirantis, Inc.
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Adam Heczko
>> Security Engineer @ Mirantis Inc.
>>
>>
>> __
>> OpenStack Development Mailing List (not for 

Re: [openstack-dev] [Fuel] Running Fuel node as non-superuser

2015-11-12 Thread Matthew Mosesohn
Dmitry,

We really shouldn't put "user" creation into a single package and then
depend on it for daemons. If we want the nailgun service to run as the
nailgun user, that user should be created in the fuel-nailgun package.
I think it makes the most sense to create multiple users, one for each
service.

Lastly, it makes a lot of sense to tie a "fuel" CLI user to the
python-fuelclient package.


[openstack-dev] [telemetry] Bug opening policy

2015-11-12 Thread Julien Danjou
Hi telemetry team and contributors,

We are starting to notice a troubling trend in our bug tracking
system. Some people tend to:

1. Open a bug for a (trivial) issue (e.g. a typo in the documentation)
2. Submit a patch 2 minutes later fixing the bug opened at step 1.

We'd like to emphasize that this is useless and actually a waste of time,
both for the bug opener and for us Launchpad guardians.

We do highly appreciate contributions, in any form, bug opening or bug
fixing – we just don't require having a bug opened for every change.

The bug tracking system is made for registering issues that you don't
have a solution for. If you already have the fix, just send it – we'll
be glad to review it!

Thanks a lot!

Happy hacking,
-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][vpnaas] VPNaaS project status

2015-11-12 Thread Sridar Kandaswamy (skandasw)
Could not agree more. Thanks very much Paul.  And thanks also for always being 
a sounding board for common things across FWaaS and VPNaaS.

Thanks

Sridar

From: Madhusudhan Kandadai
Reply-To: OpenStack List
Date: Thursday, November 12, 2015 at 7:28 AM
To: OpenStack List
Subject: Re: [openstack-dev] [neutron][vpnaas] VPNaaS project status

Thanks Paul for leading over the previous releases. Looking forward to having
your guidance in Neutron.

On Thu, Nov 12, 2015 at 8:48 PM, Kyle Mestery wrote:
On Thu, Nov 12, 2015 at 9:56 AM, Paul Michali wrote:
Neutron community,

During the past several releases, while leading the VPNaaS project, I've seen 
many great enhancements and features added to the VPNaaS project by the 
community, including support for StrongSwan, Libreswan, completion of the 
project split out, functional and rally tests, endpoint groups, multiple local 
subnets, vendor drivers, etc.

There is still work needed (certificate support the most important, followed by 
documentation and scale testing), but there is a solid (in my bias and 
subjective opinion :) foundation in place for people to play with this 
capability.

As I mentioned to Armando at the summit, it's time for me to move on to other 
areas of Neutron, and as such, I'm stepping down as VPNaaS chair and wrapping 
up work on the project over the next few weeks. I'll still try to review VPNaaS 
commits as much as possible, and be available to advise in this area.

Towards that end, I've updated the VPNaaS wiki page 
(https://wiki.openstack.org/wiki/Meetings/VPNaaS) to list what I think are 
outstanding work that can be done in this area, from important to wish items.  
Meetings have transitioned to on-demand, and future meetings can either be done 
as an on-demand topic in the Neutron IRC meeting, or as an on-demand special 
meeting.

I'll go through the VPNaaS bugs in Launchpad and comment on them, as to my 
opinion of importance, priority, relevance, etc.

Regards,


Thanks for all your hard work over the previous releases Paul! Looking forward 
to what you'll be doing next in Neutron.

Thanks,
Kyle

PCM (pc_m)



[openstack-dev] [neutron][vpnaas] VPNaaS project status

2015-11-12 Thread Paul Michali
Neutron community,

During the past several releases, while leading the VPNaaS project, I've
seen many great enhancements and features added to the VPNaaS project by
the community, including support for StrongSwan, Libreswan, completion of
the project split out, functional and rally tests, endpoint groups,
multiple local subnets, vendor drivers, etc.

There is still work needed (certificate support the most important,
followed by documentation and scale testing), but there is a solid (in my
biased and subjective opinion :) foundation in place for people to play with
this capability.

As I mentioned to Armando at the summit, it's time for me to move on to
other areas of Neutron, and as such, I'm stepping down as VPNaaS chair and
wrapping up work on the project over the next few weeks. I'll still try to
review VPNaaS commits as much as possible, and be available to advise in
this area.

Towards that end, I've updated the VPNaaS wiki page (
https://wiki.openstack.org/wiki/Meetings/VPNaaS) to list what I think is
outstanding work that can be done in this area, from important to wish-list
items.  Meetings have transitioned to on-demand, and future meetings can
either be done as an on-demand topic in the Neutron IRC meeting, or as an
on-demand special meeting.

I'll go through the VPNaaS bugs in Launchpad and comment on them, as to my
opinion of importance, priority, relevance, etc.

Regards,

PCM (pc_m)


Re: [openstack-dev] [neutron][vpnaas] VPNaaS project status

2015-11-12 Thread Kyle Mestery
On Thu, Nov 12, 2015 at 9:56 AM, Paul Michali  wrote:

> Neutron community,
>
> During the past several releases, while leading the VPNaaS project, I've
> seen many great enhancements and features added to the VPNaaS project by
> the community, including support for StrongSwan, Libreswan, completion of
> the project split out, functional and rally tests, endpoint groups,
> multiple local subnets, vendor drivers, etc.
>
> There is still work needed (certificate support the most important,
> followed by documentation and scale testing), but there is a solid (in my
> bias and subjective opinion :) foundation in place for people to play with
> this capability.
>
> As I mentioned to Armando at the summit, it's time for me to move on to
> other areas of Neutron, and as such, I'm stepping down as VPNaaS chair and
> wrapping up work on the project over the next few weeks. I'll still try to
> review VPNaaS commits as much as possible, and be available to advise in
> this area.
>
> Towards that end, I've updated the VPNaaS wiki page (
> https://wiki.openstack.org/wiki/Meetings/VPNaaS) to list what I think are
> outstanding work that can be done in this area, from important to wish
> items.  Meetings have transitioned to on-demand, and future meetings can
> either be done as an on-demand topic in the Neutron IRC meeting, or as an
> on-demand special meeting.
>
> I'll go through the VPNaaS bugs in Launchpad and comment on them, as to my
> opinion of importance, priority, relevance, etc.
>
> Regards,
>
>
Thanks for all your hard work over the previous releases Paul! Looking
forward to what you'll be doing next in Neutron.

Thanks,
Kyle


> PCM (pc_m)
>


Re: [openstack-dev] [neutron][vpnaas] VPNaaS project status

2015-11-12 Thread Madhusudhan Kandadai
Thanks, Paul, for leading over the previous releases. Looking forward to having
your guidance in Neutron.

On Thu, Nov 12, 2015 at 8:48 PM, Kyle Mestery  wrote:

> On Thu, Nov 12, 2015 at 9:56 AM, Paul Michali  wrote:
>
>> Neutron community,
>>
>> During the past several releases, while leading the VPNaaS project, I've
>> seen many great enhancements and features added to the VPNaaS project by
>> the community, including support for StrongSwan, Libreswan, completion of
>> the project split out, functional and rally tests, endpoint groups,
>> multiple local subnets, vendor drivers, etc.
>>
>> There is still work needed (certificate support the most important,
>> followed by documentation and scale testing), but there is a solid (in my
>> bias and subjective opinion :) foundation in place for people to play with
>> this capability.
>>
>> As I mentioned to Armando at the summit, it's time for me to move on to
>> other areas of Neutron, and as such, I'm stepping down as VPNaaS chair and
>> wrapping up work on the project over the next few weeks. I'll still try to
>> review VPNaaS commits as much as possible, and be available to advise in
>> this area.
>>
>> Towards that end, I've updated the VPNaaS wiki page (
>> https://wiki.openstack.org/wiki/Meetings/VPNaaS) to list what I think
>> are outstanding work that can be done in this area, from important to wish
>> items.  Meetings have transitioned to on-demand, and future meetings can
>> either be done as an on-demand topic in the Neutron IRC meeting, or as an
>> on-demand special meeting.
>>
>> I'll go through the VPNaaS bugs in Launchpad and comment on them, as to
>> my opinion of importance, priority, relevance, etc.
>>
>> Regards,
>>
>>
> Thanks for all your hard work over the previous releases Paul! Looking
> forward to what you'll be doing next in Neutron.
>
> Thanks,
> Kyle
>
>
>> PCM (pc_m)
>>


Re: [openstack-dev] [keystone] [Mistral] [Heat] Autoprovisioning, per-user projects, and Federation

2015-11-12 Thread Renat Akhmerov


> On 12 Nov 2015, at 00:11, Clint Byrum  wrote:
> 
> Excerpts from Zane Bitter's message of 2015-11-11 09:43:43 -0800:
>> 1. Keystone (or some Rabbit->Zaqar proxy service reading notifications 
>> from Keystone) sends "new federated user" notification out via Zaqar.
>> 2. Mistral picks up the message and checks policy to see what should be 
>> done.
>> 3. Mistral calls either Heat or Keystone to autoprovision user.
>> 
> 
> Zane I like most of what you said here, and agree with nearly all of it.
> I actually started typing a question asking why Zaqar, but I think I
> understand, and you can correct me if I'm wrong.
> 
> There's a notification bus. It is generally accessible to all of the
> things run by the operator if the operator wants it to be. Zaqar is for
> communication toward the user, whether from user hosted apps or operator
> hosted services. The thing we're discussing seems entirely operator
> hosted, to operator hosted. Which to me, at first, meant we should just
> teach Mistral to listen to Keystone notifications and to run workflows
> using trusts acquired similarly to the way Heat acquires them.

Mistral uses trusts, I think, in the same way that Heat does.

Renat



Re: [openstack-dev] [neutron][vpnaas] VPNaaS project status

2015-11-12 Thread Edgar Magana
Great Work Done Paul!

Thanks a lot!

Edgar




On 11/12/15, 9:35 AM, "Eichberger, German"  wrote:

>Thanks for your hard work. Really appreciated!!
>
>Looking forward what you tackle next :-)
>
>All the best.
>German
>
>From: "Sridar Kandaswamy (skandasw)"
>Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>Date: Thursday, November 12, 2015 at 7:55 AM
>To: "OpenStack Development Mailing List (not for usage questions)"
>Subject: Re: [openstack-dev] [neutron][vpnaas] VPNaaS project status
>
>Could not agree more. Thanks very much Paul.  And thanks also for always being 
>a sounding board for common things across FWaaS and VPNaaS.
>
>Thanks
>
>Sridar
>
>From: Madhusudhan Kandadai
>Reply-To: OpenStack List
>Date: Thursday, November 12, 2015 at 7:28 AM
>To: OpenStack List
>Subject: Re: [openstack-dev] [neutron][vpnaas] VPNaaS project status
>
>Thanks Paul for leading over the previous releases. Looking forward to have 
>your guidance in neutron.
>
>On Thu, Nov 12, 2015 at 8:48 PM, Kyle Mestery wrote:
>On Thu, Nov 12, 2015 at 9:56 AM, Paul Michali wrote:
>Neutron community,
>
>During the past several releases, while leading the VPNaaS project, I've seen 
>many great enhancements and features added to the VPNaaS project by the 
>community, including support for StrongSwan, Libreswan, completion of the 
>project split out, functional and rally tests, endpoint groups, multiple local 
>subnets, vendor drivers, etc.
>
>There is still work needed (certificate support the most important, followed 
>by documentation and scale testing), but there is a solid (in my bias and 
>subjective opinion :) foundation in place for people to play with this 
>capability.
>
>As I mentioned to Armando at the summit, it's time for me to move on to other 
>areas of Neutron, and as such, I'm stepping down as VPNaaS chair and wrapping 
>up work on the project over the next few weeks. I'll still try to review 
>VPNaaS commits as much as possible, and be available to advise in this area.
>
>Towards that end, I've updated the VPNaaS wiki page 
>(https://wiki.openstack.org/wiki/Meetings/VPNaaS) to list what I think are 
>outstanding work that can be done in this area, from important to wish items.  
>Meetings have transitioned to on-demand, and future meetings can either be 
>done as an on-demand topic in the Neutron IRC meeting, or as an on-demand 
>special meeting.
>
>I'll go through the VPNaaS bugs in Launchpad and comment on them, as to my 
>opinion of importance, priority, relevance, etc.
>
>Regards,
>
>
>Thanks for all your hard work over the previous releases Paul! Looking forward 
>to what you'll be doing next in Neutron.
>
>Thanks,
>Kyle
>
>PCM (pc_m)
>


Re: [openstack-dev] [keystone] [Mistral] [Heat] Autoprovisioning, per-user projects, and Federation

2015-11-12 Thread Zane Bitter

On 11/11/15 13:11, Clint Byrum wrote:

Excerpts from Zane Bitter's message of 2015-11-11 09:43:43 -0800:

1. Keystone (or some Rabbit->Zaqar proxy service reading notifications
from Keystone) sends "new federated user" notification out via Zaqar.
2. Mistral picks up the message and checks policy to see what should be
done.
3. Mistral calls either Heat or Keystone to autoprovision user.



Zane I like most of what you said here, and agree with nearly all of it.
I actually started typing a question asking why Zaqar, but I think I
understand, and you can correct me if I'm wrong.

There's a notification bus. It is generally accessible to all of the
things run by the operator if the operator wants it to be. Zaqar is for
communication toward the user, whether from user hosted apps or operator
hosted services. The thing we're discussing seems entirely operator
hosted, to operator hosted. Which to me, at first, meant we should just
teach Mistral to listen to Keystone notifications and to run workflows
using trusts acquired similarly to the way Heat acquires them.


Candidly, I'm using this as an opportunity to push my pet feature ;)

I genuinely think it could go either way in this case. What makes the 
difference for me is that if we teach Mistral to read notifications from 
Rabbit then only operators can use that feature, but if we teach Mistral 
to read messages from a Zaqar queue then users can make use of this 
feature too. We get more bang for our buck.


I think if this feature already existed, it would be an obvious 
candidate for this use case. And since it should exist anyway, we should 
just implement it instead of implementing another thing that isn't 
really useful for any other purposes.


As a (theoretical) user I really, really want this. A couple of example 
use cases:
- Heat sends a message and pauses for a response before updating or 
deleting a server resource. The user can configure Mistral to run a 
workflow to quiesce the server and send the response that allows Heat to 
continue.
- Ceilometer sends a message in the event of an alarm. Instead of piping 
it directly to Heat autoscaling, the user configures Mistral to listen 
on the queue and do signal conditioning on the alarm.


Some of the requirements that fall out of this make Zaqar an obvious choice:
- Needs to be user-facing (Keystone auth)
- We MUST NOT lose the message...
- even if the listener is temporarily down


However, it just occurred to me that if we teach Mistral to read messages
in a Zaqar queue belonging to a user, then there's no weirdness around
user authentication and admin powers. Messages in a user's queue are
entirely acted on using trusts for that user.


Yes, that is also a very nice property.


That said, I think this is overly abstracted. I'd rather just see
operator hosted services listen to the notification bus and react to the
notifications they care about. You have to teach Mistral about trusts
either way so it can do things as a user, and having the notification
go an extra step:

Keystone->[notifications]->Zaqar->Mistral

vs.

Keystone->[notifications]->Mistral

Doesn't make a ton of sense to me.


I agree to some extent, I think I would prefer:

 Keystone->Zaqar->Mistral

Although, honestly, it's only Keystone's desire not to be tightly 
coupled with this that would prevent them from calling Mistral directly.


However, I think:

  Keystone->[notifications]->Zaqar->Mistral

would make sense if:

- Keystone were ordinarily connected to the notification bus; and
- everyone were doing it (i.e. some service existed to routinely pick 
notifications off the notification bus firehose, filter and sanitise 
them for user consumption and redirect them to Zaqar)


Kevin indicates in another part of the thread that the former may not be 
the case in a multi-region deployment, so that is one point against it.


The latter certainly isn't the case *yet*. But it has been discussed 
quite a bit. (Most recently 
https://etherpad.openstack.org/p/mitaka-horizon-async) A lot of things 
need it (it would be really useful for Heat, for example). I wouldn't 
say it's looking inevitable, but there's a good chance it may happen at 
some point.


cheers,
Zane.


But as Renat mentioned, the part about triggering Mistral workflows from
a message does not yet exist. As Tim pointed out, Congress could be a
solution to that (listening for a message and then starting the Mistral
workflow). That may be OK in the short term, but in the long term I'd
prefer that we implement the triggering thing in Mistral (since there
are *lots* of end-user use cases for this too), and have the workflow
optionally query Congress for the policy rather than having Congress in
the loop.



I agree 100% on the positioning of Congress vs. Mistral here.
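As a rough, self-contained illustration of the flow discussed in this thread (Keystone posts a "new federated user" event, Mistral claims it from a Zaqar queue and runs a workflow under that user's trust), here is a Python sketch. Since the Zaqar-triggered Mistral feature does not exist yet, the queue class, the policy check, and run_workflow_as_user are all in-memory stand-ins invented for illustration — they are not real zaqarclient or mistralclient APIs:

```python
# Sketch of the proposed Keystone -> Zaqar -> Mistral loop, using
# hypothetical stand-ins (FakeZaqarQueue, run_workflow_as_user) rather
# than the real client libraries.
from collections import deque


class FakeZaqarQueue:
    """Stands in for a per-user Zaqar queue; messages persist until claimed."""

    def __init__(self):
        self._messages = deque()

    def post(self, body):
        self._messages.append(body)

    def claim(self):
        # A real Zaqar claim has a TTL/grace period so messages are not
        # lost if the consumer dies mid-processing; here we just drain.
        claimed = list(self._messages)
        self._messages.clear()
        return claimed


def run_workflow_as_user(workflow, user_id, wf_input):
    """Hypothetical Mistral entry point: execute `workflow` under a trust
    scoped to `user_id` (the way Heat acquires and uses trusts)."""
    return {"workflow": workflow, "acting_as": user_id, "input": wf_input}


# 1. Keystone (or a notification->Zaqar proxy) posts the event.
queue = FakeZaqarQueue()
queue.post({"event": "identity.user.created", "user_id": "u-123",
            "domain": "federated"})

# 2./3. Mistral claims the message, checks policy, runs the workflow.
executions = []
for msg in queue.claim():
    if msg["event"] == "identity.user.created" and msg["domain"] == "federated":
        executions.append(run_workflow_as_user(
            "autoprovision_project", msg["user_id"], {"notification": msg}))

print(executions[0]["acting_as"])  # u-123
```

The properties argued for in the thread show up in the claim semantics: messages persist until a consumer claims them, so nothing is lost while Mistral is temporarily down, and because the queue belongs to the user, the workflow runs entirely under that user's trust with no admin weirdness.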


Re: [openstack-dev] [keystone] [Mistral] [Heat] Autoprovisioning, per-user projects, and Federation

2015-11-12 Thread Clint Byrum
Excerpts from Renat Akhmerov's message of 2015-11-12 07:52:42 -0800:
> 
> > On 12 Nov 2015, at 00:11, Clint Byrum  wrote:
> > 
> > Excerpts from Zane Bitter's message of 2015-11-11 09:43:43 -0800:
> >> 1. Keystone (or some Rabbit->Zaqar proxy service reading notifications 
> >> from Keystone) sends "new federated user" notification out via Zaqar.
> >> 2. Mistral picks up the message and checks policy to see what should be 
> >> done.
> >> 3. Mistral calls either Heat or Keystone to autoprovision user.
> >> 
> > 
> > Zane I like most of what you said here, and agree with nearly all of it.
> > I actually started typing a question asking why Zaqar, but I think I
> > understand, and you can correct me if I'm wrong.
> > 
> > There's a notification bus. It is generally accessible to all of the
> > things run by the operator if the operator wants it to be. Zaqar is for
> > communication toward the user, whether from user hosted apps or operator
> > hosted services. The thing we're discussing seems entirely operator
> > hosted, to operator hosted. Which to me, at first, meant we should just
> > teach Mistral to listen to Keystone notifications and to run workflows
> > using trusts acquired similarly to the way Heat acquires them.
> 
> Mistral uses trusts I think the same way that Heat does.
> 

Well thats good news. So, keeping things simple, I think this is a case
of just being able to let admins set up a workflow that gets run based
on notifications, and that would need to include setting the user to
run as from content in the notification.



Re: [openstack-dev] [group-based-policy] QoS support in GBP

2015-11-12 Thread Sumit Naiksatam
Thanks Igor. This is certainly of interest, let’s discuss during the IRC
meeting today.

Just a friendly reminder - for those in the US time zones, we
start an hour earlier today on account of the fall time change.

~Sumit.

On Thu, Nov 12, 2015 at 6:47 AM, Duarte Cardoso, Igor <
igor.duarte.card...@intel.com> wrote:

> Hi OpenStack Group-based Policy team,
>
>
>
> As I unofficially said before, I am interested in bringing basic QoS to
> GBP via the Neutron QoS API which currently offers bandwidth limitation at
> the port and network level, since Liberty.
>
>
>
> I have added the item to today’s Meeting Agenda for an initial discussion
> about this and where it would best fit for GBP.
>
>
>
> Best regards,
>
>
>
>
>
>
>
>
> *Igor Duarte Cardoso*
>
> Software Engineer
>
> +353 61 777 858
>
> SIE1-2-170
>
> Intel Shannon Ltd.
>
> Dromore House, East Park
>
> Shannon, Co. Clare
>
> IRELAND
>
>
>
> --
> Intel Research and Development Ireland Limited
> Registered in Ireland
> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
> Registered Number: 308263
>
> This e-mail and any attachments may contain confidential material for the
> sole use of the intended recipient(s). Any review or distribution by others
> is strictly prohibited. If you are not the intended recipient, please
> contact the sender and delete all copies.
>
>


[openstack-dev] [Freezer] Roadmap

2015-11-12 Thread Mathieu, Pierre-Arthur
Hello,

The Freezer roadmap is now available at 
https://wiki.openstack.org/wiki/FreezerRoadmap

Please feel free to comment or send any feedback.


Many Thanks,

Pierre




[openstack-dev] [HA] ANNOUNCE: new "[HA]" topic category in mailman configuration

2015-11-12 Thread Adam Spiers
Hi all,

As you may know, Mailman allows server-side filtering of mailing list
traffic by topic categories:

  http://lists.openstack.org/cgi-bin/mailman/options/openstack-dev

(N.B. needs authentication)

Thierry has kindly added "[HA]" as a new topic category in the mailman
configuration for this list, so please tag all mails related to High
Availability with this prefix so that it can be detected by both
server-side and client-side mail filters belonging to people
interested in HA discussions.

Thanks!
Adam



Re: [openstack-dev] [neutron][vpnaas] VPNaaS project status

2015-11-12 Thread Eichberger, German
Thanks for your hard work. Really appreciated!!

Looking forward to what you tackle next :-)

All the best.
German

From: "Sridar Kandaswamy (skandasw)"
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, November 12, 2015 at 7:55 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [neutron][vpnaas] VPNaaS project status

Could not agree more. Thanks very much Paul.  And thanks also for always being 
a sounding board for common things across FWaaS and VPNaaS.

Thanks

Sridar

From: Madhusudhan Kandadai
Reply-To: OpenStack List
Date: Thursday, November 12, 2015 at 7:28 AM
To: OpenStack List
Subject: Re: [openstack-dev] [neutron][vpnaas] VPNaaS project status

Thanks Paul for leading over the previous releases. Looking forward to have 
your guidance in neutron.

On Thu, Nov 12, 2015 at 8:48 PM, Kyle Mestery wrote:
On Thu, Nov 12, 2015 at 9:56 AM, Paul Michali wrote:
Neutron community,

During the past several releases, while leading the VPNaaS project, I've seen 
many great enhancements and features added to the VPNaaS project by the 
community, including support for StrongSwan, Libreswan, completion of the 
project split out, functional and rally tests, endpoint groups, multiple local 
subnets, vendor drivers, etc.

There is still work needed (certificate support the most important, followed by 
documentation and scale testing), but there is a solid (in my bias and 
subjective opinion :) foundation in place for people to play with this 
capability.

As I mentioned to Armando at the summit, it's time for me to move on to other 
areas of Neutron, and as such, I'm stepping down as VPNaaS chair and wrapping 
up work on the project over the next few weeks. I'll still try to review VPNaaS 
commits as much as possible, and be available to advise in this area.

Towards that end, I've updated the VPNaaS wiki page 
(https://wiki.openstack.org/wiki/Meetings/VPNaaS) to list what I think are 
outstanding work that can be done in this area, from important to wish items.  
Meetings have transitioned to on-demand, and future meetings can either be done 
as an on-demand topic in the Neutron IRC meeting, or as an on-demand special 
meeting.

I'll go through the VPNaaS bugs in Launchpad and comment on them, as to my 
opinion of importance, priority, relevance, etc.

Regards,


Thanks for all your hard work over the previous releases Paul! Looking forward 
to what you'll be doing next in Neutron.

Thanks,
Kyle

PCM (pc_m)



[openstack-dev] Could we highlight +0 reviews in review.openstack.org

2015-11-12 Thread Amrith Kumar
When you look at a review on https://review.openstack.org, you see a section of 
the screen that shows +2's, +1's, -1's and -2's above Verified and Workflow. 
But we don't show +0's there.

It would be useful, when looking at a review, to also be able to see whether 
there are any 0's there, perhaps in a different color. Otherwise, it is easy to 
miss a perfectly legitimate review with a 0 score.

I think the UI would be more user-friendly with that; maybe we could make it a 
user preference whether this is shown or not.

Thanks,

-amrith

--
Amrith Kumar
Tesora, Inc
amr...@tesora.com
amrith on Freenode






Re: [openstack-dev] [Fuel] Running Fuel node as non-superuser

2015-11-12 Thread Dmitry Nikishov
Matther,

I totally agree that each daemon should have its own user, which should be
created during installation of the relevant package. Probably I didn't
state this clearly enough in the spec.

However, there are security requirements in place that root should not be
used at all. This means that there should be some kind of maintenance or
system user ('fueladmin'), which would have enough privileges to configure
and manage Fuel node (e.g. run "sudo puppet apply" without password, create
mirrors etc). This also means that certain fuel- packages would be required
to have their files accessible to that user. That's the idea behind having
a package which would create 'fueladmin' user and including it into other
fuel- packages requirements lists.

So this part of the feature comes down to having a non-root user with sudo
privileges and passwordless sudo for certain commands (like 'puppet apply
') for scripting.
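The "passwordless sudo for certain commands" scheme described above could look like the following sudoers fragment. This is a sketch under the assumptions in this thread: the 'fueladmin' user name comes from the proposal, and the exact command list is hypothetical, not an actual Fuel package's policy.

```
# Hypothetical /etc/sudoers.d/fueladmin, installed by the package that
# creates the 'fueladmin' user. Only the whitelisted maintenance commands
# run as root without a password; everything else still prompts.
fueladmin ALL=(root) NOPASSWD: /usr/bin/puppet apply *
fueladmin ALL=(root) NOPASSWD: /usr/bin/fuel-createmirror
```

Note that argument wildcards in sudoers are easy to get wrong from a security standpoint (the `*` also matches extra flags and arguments), so a real policy would likely pin down the arguments or wrap the allowed commands in an audited script.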

On Thu, Nov 12, 2015 at 9:52 AM, Matthew Mosesohn 
wrote:

> Dmitry,
>
> We really shouldn't put "user" creation into a single package and then
> depend on it for daemons. If we want nailgun service to run as nailgun
> user, it should be created in the fuel-nailgun package.
> I think it makes the most sense to create multiple users, one for each
> service.
>
> Lastly, it makes a lot of sense to tie a "fuel" CLI user to
> python-fuelclient package.
>
> On Thu, Nov 12, 2015 at 6:42 PM, Dmitry Nikishov 
> wrote:
>
>> Stanislaw,
>>
>> I agree that this approach would work well. However, does Puppet allow
>> managing capabilities and/or file ACLs? Or can they be easily set up when
>> installing RPM package? (is there a way to specify capabilities/ACLs in the
>> RPM spec file?) This doesn't seem to be supported out of the box.
>>
>> I'm going to research if it is possible to manage capabilities and  ACLs
>> with what we have out of the box (RPM, Puppet).
>>
>> On Wed, Nov 11, 2015 at 4:29 AM, Stanislaw Bogatkin <
>> sbogat...@mirantis.com> wrote:
>>
>>> Dmitry, I propose to give needed linux capabilities
>>> (like CAP_NET_BIND_SERVICE) to processes (services) which needs them and
>>> then start these processes from non-privileged user. It will give you
>>> ability to run each process without 'sudo' at all with well fine-grained
>>> permissions.
>>>
>>> On Tue, Nov 10, 2015 at 11:06 PM, Dmitry Nikishov <
>>> dnikis...@mirantis.com> wrote:
>>>
 Stanislaw,

 I've been experimenting with 'capsh' on the 6.1 master node and it
 doesn't seem to preserve any capabilities when setting SECURE_NOROOT bit,
 even if explicitly told to do so (via either --keep=1 or
 "SECURE_KEEP_CAPS" bit).

 On Tue, Nov 10, 2015 at 11:20 AM, Dmitry Nikishov <
 dnikis...@mirantis.com> wrote:

> Bartolomiej, Adam,
> Stanislaw is correct. And this is going to be ported to master. The
> goal currently is to reach an agreement on the implementation so that
> there's going to be some kind of compatibility during upgrades.
>
> Stanislaw,
> Do I understand correctly that you propose using something like sucap
> to launch from root, switch to a different user and then drop capabilities
> which are not required?
>
> On Tue, Nov 10, 2015 at 3:11 AM, Stanislaw Bogatkin <
> sbogat...@mirantis.com> wrote:
>
>> Bartolomiej, it's customer-related patches, they, I think, have to be
>> done for 6.1 prior to 8+ release.
>>
>> Dmitry, it's nice to hear about it. Did you consider to use linux
>> capabilities on fuel-related processes instead of just using non-extended
>> POSIX privileged/non-privileged permission checks?
>>
>> On Tue, Nov 10, 2015 at 10:11 AM, Bartlomiej Piotrowski <
>> bpiotrow...@mirantis.com> wrote:
>>
>>> We don't develop features for already released versions… It should
>>> be done for master instead.
>>>
>>> BP
>>>
>>> On Tue, Nov 10, 2015 at 7:02 AM, Adam Heczko 
>>> wrote:
>>>
 Dmitry,
 +1

 Do you plan to port your patchset to future Fuel releases?

 A.

 On Tue, Nov 10, 2015 at 12:14 AM, Dmitry Nikishov <
 dnikis...@mirantis.com> wrote:

> Hey guys.
>
> I've been working on making Fuel not to rely on superuser
> privileges
> at least for day-to-day operations. These include:
> a) running Fuel services (nailgun, astute etc)
> b) user operations (create env, deploy, update, log in)
>
> The reason for this is that many security policies simply do not
> allow root access (especially remote) to servers/environments.
>
> This feature/enhancement means that anything that currently is
> being
> run under root, will be evaluated and, if possible, put under a
> non-privileged
> user. This also means that 

Re: [openstack-dev] [Fuel] Running Fuel node as non-superuser

2015-11-12 Thread Dmitry Nikishov
Matthew,

sorry, didn't mean to butcher your name :(

On Thu, Nov 12, 2015 at 10:41 AM, Dmitry Nikishov 
wrote:

> Matther,
>
> I totally agree that each daemon should have its own user, which should be
> created during installation of the relevant package. Probably I didn't
> state this clearly enough in the spec.
>
> However, there are security requirements in place that root should not be
> used at all. This means that there should be some kind of maintenance or
> system user ('fueladmin'), which would have enough privileges to configure
> and manage the Fuel node (e.g. run "sudo puppet apply" without a password,
> create mirrors etc). This also means that certain fuel- packages would be
> required to have their files accessible to that user. That's the idea behind
> having a package which would create the 'fueladmin' user and including it in
> other fuel- packages' requirements lists.
>
> So this part of the feature comes down to having a non-root user with sudo
> privileges and passwordless sudo for certain commands (like 'puppet apply
> ') for scripting.
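A minimal sketch of such a sudoers entry (the user name and command whitelist are illustrative only; it is written to the current directory here so it can be inspected without root):

```shell
# Hypothetical sudoers.d fragment for the 'fueladmin' maintenance user.
cat > ./fueladmin.sudoers <<'EOF'
# allow only the whitelisted commands, without a password prompt
fueladmin ALL=(root) NOPASSWD: /usr/bin/puppet apply *
fueladmin ALL=(root) NOPASSWD: /usr/bin/fuel-createmirror
EOF
# a real deployment would syntax-check before installing, e.g.:
#   visudo -cf ./fueladmin.sudoers && install -m 0440 ./fueladmin.sudoers /etc/sudoers.d/fueladmin
grep -c 'NOPASSWD' ./fueladmin.sudoers   # counts the two rule lines -> 2
```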
>
> On Thu, Nov 12, 2015 at 9:52 AM, Matthew Mosesohn 
> wrote:
>
>> Dmitry,
>>
>> We really shouldn't put "user" creation into a single package and then
>> depend on it for daemons. If we want the nailgun service to run as the
>> nailgun user, it should be created in the fuel-nailgun package.
>> I think it makes the most sense to create multiple users, one for each
>> service.
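A sketch of that per-package pattern as a %pre scriptlet would run it (the service name and useradd flags are examples; the useradd is echoed rather than executed since it needs root):

```shell
# Idempotent service-user creation, as an RPM %pre scriptlet would do it.
create_service_user() {
    user="$1"
    # getent exits non-zero when the account does not exist yet
    if getent passwd "$user" >/dev/null; then
        echo "user $user already exists"
    else
        # dry run here; a real scriptlet runs this as root at install time
        echo "useradd --system --no-create-home --shell /sbin/nologin $user"
    fi
}
create_service_user nailgun
```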
>>
>> Lastly, it makes a lot of sense to tie a "fuel" CLI user to
>> python-fuelclient package.
>>
>> On Thu, Nov 12, 2015 at 6:42 PM, Dmitry Nikishov 
>> wrote:
>>
>>> Stanislaw,
>>>
>>> I agree that this approach would work well. However, does Puppet allow
>>> managing capabilities and/or file ACLs? Or can they be easily set up when
>>> installing RPM package? (is there a way to specify capabilities/ACLs in the
>>> RPM spec file?) This doesn't seem to be supported out of the box.
>>>
>>> I'm going to research if it is possible to manage capabilities and ACLs
>>> with what we have out of the box (RPM, Puppet).
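For what it's worth, rpm itself (>= 4.7) can assign file capabilities straight from the %files list via %caps; ACLs have no native spec syntax, but a %post scriptlet can apply them with setfacl. A sketch of both mechanisms (paths and names are illustrative):

```shell
# Generate an illustrative spec fragment showing the two mechanisms.
cat > ./nailgun.spec.fragment <<'EOF'
%post
# ACLs: no native rpm syntax, so apply them in a scriptlet
setfacl -m u:fueladmin:rx /var/log/nailgun

%files
# file capabilities set directly by rpm at install time
%caps(cap_net_bind_service=ep) %{_bindir}/nailgun-server
EOF
grep -c '%caps' ./nailgun.spec.fragment   # the fragment carries one %caps entry -> 1
```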
>>>
>>> On Wed, Nov 11, 2015 at 4:29 AM, Stanislaw Bogatkin <
>>> sbogat...@mirantis.com> wrote:
>>>
 Dmitry, I propose to give the needed linux capabilities
 (like CAP_NET_BIND_SERVICE) to the processes (services) which need them and
 then start these processes as a non-privileged user. It will give you the
 ability to run each process without 'sudo' at all, with fine-grained
 permissions.
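Concretely, the one-time root step this implies could be as small as the following (the binary path is an example; generated as a script here because actually running setcap requires root):

```shell
# One-time grant: let the service binary bind ports < 1024, after which it
# can be started by a regular user with no sudo involved.
cat > ./grant-caps.sh <<'EOF'
#!/bin/sh
set -e
setcap cap_net_bind_service=+ep /usr/bin/nailgun-server  # run once as root
getcap /usr/bin/nailgun-server                           # verify the grant
EOF
sh -n ./grant-caps.sh && echo "syntax ok"
```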

 On Tue, Nov 10, 2015 at 11:06 PM, Dmitry Nikishov <
 dnikis...@mirantis.com> wrote:

> Stanislaw,
>
> I've been experimenting with 'capsh' on the 6.1 master node and it
> doesn't seem to preserve any capabilities when setting the SECURE_NOROOT bit,
> even if explicitly told to do so (via either --keep=1 or the
> "SECURE_KEEP_CAPS" bit).
>
> On Tue, Nov 10, 2015 at 11:20 AM, Dmitry Nikishov <
> dnikis...@mirantis.com> wrote:
>
>> Bartolomiej, Adam,
>> Stanislaw is correct. And this is going to be ported to master. The
>> goal currently is to reach an agreement on the implementation so that
>> there's going to be some kind of compatibility during upgrades.
>>
>> Stanislaw,
>> Do I understand correctly that you propose using something like sucap
>> to launch from root, switch to a different user and then drop 
>> capabilities
>> which are not required?
>>
>> On Tue, Nov 10, 2015 at 3:11 AM, Stanislaw Bogatkin <
>> sbogat...@mirantis.com> wrote:
>>
>>> Bartolomiej, these are customer-related patches; I think they have to
>>> be done for 6.1 prior to the 8+ release.
>>>
>>> Dmitry, it's nice to hear about it. Did you consider using linux
>>> capabilities on fuel-related processes instead of just using
>>> non-extended POSIX privileged/non-privileged permission checks?
>>>
>>> On Tue, Nov 10, 2015 at 10:11 AM, Bartlomiej Piotrowski <
>>> bpiotrow...@mirantis.com> wrote:
>>>
 We don't develop features for already released versions… It should
 be done for master instead.

 BP

 On Tue, Nov 10, 2015 at 7:02 AM, Adam Heczko 
 wrote:

> Dmitry,
> +1
>
> Do you plan to port your patchset to future Fuel releases?
>
> A.
>
> On Tue, Nov 10, 2015 at 12:14 AM, Dmitry Nikishov <
> dnikis...@mirantis.com> wrote:
>
>> Hey guys.
>>
>> I've been working on making Fuel not rely on superuser
>> privileges
>> at least for day-to-day operations. These include:
>> a) running Fuel services (nailgun, astute etc)
>> b) user operations (create env, deploy, update, log in)
>>
>> The reason for this is that many security policies simply do not
>> allow 

Re: [openstack-dev] Could we highlight +0 reviews in review.openstack.org

2015-11-12 Thread Jeremy Stanley
On 2015-11-12 16:33:52 + (+), Amrith Kumar wrote:
> When you look at a review on https://review.openstack.org, you see
> a section of the screen that shows +2's, +1's, -1's and -2's above
> Verified and Workflow. But we don't show +0's there.
[...]

This is because Gerrit tries to treat a 0 vote as "unset" or
"absent" so your suggestion needs to be filed upstream with Google's
Gerrit developer team as a feature request (and may already be). The
reviewer list is always carried over from one patchset to the next
even if their associated votes are not, so there is no way in the
reviewers box of the Web UI right now to distinguish someone who
commented without voting (a.k.a. a 0 vote) on the current patchset
from someone who only voted on previous patchsets.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] About rebuilding volume-backed instances.

2015-11-12 Thread Andrew Laski

On 11/11/15 at 07:48pm, Jeremy Stanley wrote:

On 2015-11-11 11:25:09 -0600 (-0600), Chris Friesen wrote:

I didn't think that the overhead of deleting/creating an instance was *that*
much different than rebuilding an instance.

Do you have any information about where the "significant performance
advantage" was coming from?


The main reason I recall the suggestion coming up is that, due to
IPv4 address starvation in some provider regions, nova boot was
taking an hour or more to return waiting for an IP address to be
assigned. Using nova rebuild would have supposedly avoided the
address churn and thus improved instance turnaround for us.

There may also have been other reasons for recommending rebuild to
us, but if so I don't recall what they were.


In addition to waiting for IP allocation the significant differences 
between boot and rebuild are likely to be around scheduling and 
potential build failures.  Scheduling should take seconds at most and 
shouldn't be a significant factor in the general case.  However, 
differences between the scheduler view of resources and the compute view 
of resources mean there's a greater chance of a build failing on a
compute host, and needing a reschedule and potentially another image 
download to a host, than a rebuild.


Under ideal conditions there should not be much of a difference between 
a fresh boot and a rebuild, but booting a fresh instance has more edge 
cases that can be hit.



--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Bug opening policy

2015-11-12 Thread gord chung



On 12/11/2015 10:44 AM, Julien Danjou wrote:

Hi telemetry team and contributors,

We are starting to distinguish a terrible trend in our bug tracking
system. Some people tend to:

1. Open a bug for a (trivial) issue (e.g. a typo in the documentation)
2. Submit a patch 2 minutes later fixing the bug opened at step 1.

We'd like to emphasize that this is useless and actually represents a waste
of time both for the bug opener and for us Launchpad guardians.

We do highly appreciate contributions, in any form, bug opening or bug
fixing – we just don't require having a bug opened for each of our
changes.

The bug tracking system is made for registering issues that you don't
have solutions for. If you already have the fix for it, just send the
fix – we'll just be glad and we'll review it!


or someone will ask you to open a bug depending on proximity to release.

but yes, please no typo bugs. feel free to forward this to managers -- 
my nickname is cdent on irc.*


* this is a joke.

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] hackathon day

2015-11-12 Thread Matt Riedemann



On 11/12/2015 5:15 AM, Rosa, Andrea (HP Cloud Services) wrote:

Hi

I know that people in China had a 3-day hackathon a few months ago, and I was
thinking of having a similar thing in Europe.
My original idea was to propose adding an extra day after the mid-cycle, but I
am not sure that is a good idea anymore:

CONS:
- the next mid-cycle is going to be the first one outside the USA and, as with
anything new, it carries some uncertainty; we know we could have fewer
participants than at the other meetups, so it is a risk to add the hackathon at
this time
- probably it is better to have more than one day to get the most out of a
hackathon event
- people attending the hackathon are not necessarily the same people attending a
mid-cycle event; I think of the hackathon as a very good opportunity for new
contributors
- it is already late for this proposal; I know I should have proposed it at the
last summit, my fault

PROS:
- having the hackathon follow the mid-cycle gives us the opportunity to have
more core reviewers available, which is a key point for setting the right
direction and getting stuff done
- cost effective: people interested in attending both events can save a trip

It'd be good to have feedback about the Chinese hackathon experience, to
understand whether it's worth putting effort into organizing a similar event in
other parts of the world.
If it's not the mid-cycle, I think there are other events where it could be good
to have a couple of extra days for the hackathon; I am thinking for example of
FOSDEM [1]... well, not for 2016, as it is on the weekend after the mid-cycle :)

Thanks
--
Andrea Rosa

[1] https://fosdem.org/






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I wouldn't want to mix this into the meetup. There are already the 
travel restrictions (I'll probably be leaving on Friday if I can go). 
Plus as you say, it's better for new contributors and that's not really 
what we're shooting for with the meetups since those are meant to make 
progress on big issues, not write/review code.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] hackathon day

2015-11-12 Thread Wang, Shane
Hey, how about driving a hackathon on the same dates in 3 geos right before the
release? Who is interested?

Best Regards.
--
Shane
-Original Message-
From: Wang, Shane [mailto:shane.w...@intel.com] 
Sent: Thursday, November 12, 2015 11:03 PM
To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [nova] hackathon day

Would opening the mid-cycle one day early be feasible?

-Original Message-
From: Daniel P. Berrange [mailto:berra...@redhat.com] 
Sent: Thursday, November 12, 2015 10:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] hackathon day

On Thu, Nov 12, 2015 at 11:15:09AM +, Rosa, Andrea (HP Cloud Services) 
wrote:
> Hi
> 
> I knew that people in China had a 3 days hackathon  few months ago I was 
> thinking to have a similar thing in Europe.
> My original idea was to propose to add an extra day after the mid-cycle but I 
> am not sure if that is a good idea anymore:

The day after the mid-cycle is the main day for travelling to FOSDEM, so a bad 
choice for people who want to attend FOSDEM too.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposing Ian Wienand as core reviewer on diskimage-builder

2015-11-12 Thread Brad P. Crochet
On Tue, Nov 3, 2015 at 10:25 AM, Gregory Haynes  wrote:
> Hello everyone,
>
> I would like to propose adding Ian Wienand as a core reviewer on the
> diskimage-builder project. Ian has been making a significant number of
> contributions for some time to the project, and has been a great help in
> reviews lately. Thus, I think we could benefit greatly by adding him as
> a core reviewer.
>
> Current cores - Please respond with any approvals/objections by next Friday
> (November 13th).
>

+1

> Cheers,
> Greg
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Brad P. Crochet, RHCA, RHCE, RHCVA, RHCDS
Principal Software Engineer
(c) 704.236.9385 (w) 919.301.3231

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][TaskFlow] Proposal for new core reviewer (greg hill)

2015-11-12 Thread Joshua Harlow

Julien Danjou wrote:

On Wed, Nov 11 2015, Joshua Harlow wrote:


Greetings all stackers,

I propose that we add Greg Hill[1] to the taskflow-core[2] team.

Greg (aka jimbo) has been actively contributing to taskflow for a
while now, both in helping make taskflow better via code
contribution(s) and by helping spread more usage/knowledge of taskflow
at rackspace (since the big-data[3] team uses taskflow internally).
He has helped provide quality reviews and is doing an awesome job
with the various taskflow concepts and helping make taskflow the best
it can be!

Overall I think he would make a great addition to the core review team.


Sure, I wonder why you even ask our opinion. :)



Because I'm such a nice, respectful and friendly guy (obviously)... ;)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Backlog Specs: a way to send requirements to the developer community

2015-11-12 Thread Steve Gordon
- Original Message -
> From: "John Garbutt" 
> To: maishsk+openst...@maishsk.com

[SNIP]

> On 14 May 2015 at 19:52, Maish Saidel-Keesing  wrote:
> > Can we please have this as a default template and the default way to allow
> > Operators to submit a feature request for EVERY and ALL the OpenStack
> > projects ??
> 
> +1
> 
> Thats exactly how backlog specs came about.
> 
> I ran a cross project session at the last summit to try and
> standardise how all the different projects deal with specs.
> https://etherpad.openstack.org/p/kilo-crossproject-specs
> 
> From that, we agreed to introduce "Backlog" specs to hold ideas and
> problem statements, and un-targeted or un-assigned specs. We intended
> to roughly match what keystone was already doing, although our intent
> appears to have diverged a little.
> 
> This is the first design summit where we have the "concept" in place,
> and I would love your help road testing this.
> 
> If an operator working session agreed on a problem statement, or set
> of problem statements, putting those up as backlog specs reviews would
> be a great way to get feedback from the developer community.
> 
> >> On Thu, May 14, 2015 at 8:47 PM, John Garbutt 
> >>> You can read more about Nova's backlog specs here:
> >>> http://specs.openstack.org/openstack/nova-specs/specs/backlog/
> 
> Let me give more detail. To quote the above page:
> 
> "
> If you have a problem that needs solving, but you are not currently
> planning on implementing it, this is the place we can store your
> ideas.
> These specifications have the problem description completed, but all
> other sections are optional.
> "
> 
> So, you can have all details in the spec, or you can have only the
> problem statement complete. Its up to you as a submitter how much
> detail you want to provide. I would recommend adding rough ideas into
> the alternatives section, and leaving everything else blank except the
> problem statement.
> 
> We are trying to use a single process for "parked" developer ideas and
> operator ideas, so they are on an equal footing.
> 
> The reason its not just a "bug" or similar, is so we are able to
> review the idea with the submitter, to ensure we have a good enough
> problem description, so a developer can pick up and run with the idea,
> and turn it into a more complete spec targeted at a specific release.
> In addition, we are ensuring that the problem description is in scope.
> 
> With that extra detail, do you think this could work?
> 
> Thanks,
> John

Hi all,

I get the impression from the feedback on a recently submitted item [1] and 
also a read of a clarification that was made to the backlog page [2] that this 
is no longer the way the backlog works. Specifically, a proposed or desired 
solution is now required and the page now talks about it being a place for the 
project team to record decisions only (originally it seemed more focused on 
recording ideas).

From an NFV/Telco working group perspective we have been trying to convince
folks for some time to stop leading with pre-defined solutions and focus more
- at least in the first instance - on documenting their use cases in a way
that they could be shared with the relevant OpenStack project teams via their
respective RFE/backlog processes. It seems to me, though, based on my
experience having actually tried to submit one of these ideas as a dry run,
that there is not in fact a working process for recording these as originally
advertised [3][4][5]. Am I misinterpreting the intent of the change here?

Thanks,

Steve

[1] https://review.openstack.org/#/c/224325/
[2] 
http://git.openstack.org/cgit/openstack/nova-specs/commit/doc/source/specs/backlog/index.rst?id=525af38a5ce27bed70d950234e94a48584820943
[3] 
http://git.openstack.org/cgit/openstack/nova-specs/tree/doc/source/specs/backlog/index.rst?id=41bf5302e7ff2b9305b0e8a459e9fe715fba0c38
[4] http://lists.openstack.org/pipermail/openstack-dev/2015-May/064284.html
[5] http://lists.openstack.org/pipermail/openstack-dev/2015-May/064180.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [Mistral] [Heat] Autoprovisioning, per-user projects, and Federation

2015-11-12 Thread Fox, Kevin M
+1 for Zane's suggestion. Getting Mistral+Zaqar working together has many other 
side benefits that are great enough that it will help push both Mistral and 
Zaqar to be more commonly deployed.

It also may be able to solve the single keystone, multiple region case more 
easily.

If you have keystone sending only to one zaqar in only one region, you could 
still have Mistral in other regions accessing that zaqar to get the 
notifications, so long as the user had the choice of which region to talk to in 
their workflow. This would be much better than the rabbit solution, which might 
require rabbit shovels or other cleverness to try and deal with the fact there 
might be multiple regions.

Thanks,
Kevin


From: Zane Bitter [zbit...@redhat.com]
Sent: Thursday, November 12, 2015 10:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone] [Mistral] [Heat] Autoprovisioning, 
per-user projects, and Federation

On 11/11/15 13:11, Clint Byrum wrote:
> Excerpts from Zane Bitter's message of 2015-11-11 09:43:43 -0800:
>> 1. Keystone (or some Rabbit->Zaqar proxy service reading notifications
>> from Keystone) sends "new federated user" notification out via Zaqar.
>> 2. Mistral picks up the message and checks policy to see what should be
>> done.
>> 3. Mistral calls either Heat or Keystone to autoprovision user.
>>
>
> Zane I like most of what you said here, and agree with nearly all of it.
> I actually started typing a question asking why Zaqar, but I think I
> understand, and you can correct me if I'm wrong.
>
> There's a notification bus. It is generally accessible to all of the
> things run by the operator if the operator wants it to be. Zaqar is for
> communication toward the user, whether from user hosted apps or operator
> hosted services. The thing we're discussiong seems entirely operator
> hosted, to operator hosted. Which to me, at first, meant we should just
> teach Mistral to listen to Keystone notifications and to run workflows
> using trusts acquired similarly to the way Heat acquires them.

Candidly, I'm using this as an opportunity to push my pet feature ;)

I genuinely think it could go either way in this case. What makes the
difference for me is that if we teach Mistral to read notifications from
Rabbit then only operators can use that feature, but if we teach Mistral
to read messages from a Zaqar queue then users can make use of this
feature too. We get more bang for our buck.

I think if this feature already existed, it would be an obvious
candidate for this use case. And since it should exist anyway, we should
just implement it instead of implementing another thing that isn't
really useful for any other purposes.

As a (theoretical) user I really, really want this. A couple of example
use cases:
- Heat sends a message and pauses for a response before updating or
deleting a server resource. The user can configure Mistral to run a
workflow to quiesce the server and send the response that allows Heat to
continue.
- Ceilometer sends a message in the event of an alarm. Instead of piping
it directly to Heat autoscaling, the user configures Mistral to listen
on the queue and do signal conditioning on the alarm.

Some of the requirements that fall out of this make Zaqar an obvious choice:
- Needs to be user-facing (Keystone auth)
- We MUST NOT lose the message...
- even if the listener is temporarily down

> However, it just ocurred to me that if we teach Mistral to read messages
> in a Zaqar queue belonging to a user, then there's no weirdness around
> user authentication and admin powers. Messages in a user's queue are
> entirely acted on using trusts for that user.

Yes, that is also a very nice property.

> That said, I think this is overly abstracted. I'd rather just see
> operator hosted services listen to the notification bus and react to the
> notifications they care about. You have to teach Mistral about trusts
> either way so it can do things as a user, and having the notification
> go an extra step:
>
> Keystone->[notifications]->Zaqar->Mistral
>
> vs.
>
> Keystone->[notifications]->Mistral
>
> Doesn't make a ton of sense to me.

I agree to some extent, I think I would prefer:

  Keystone->Zaqar->Mistral

Although, honestly, it's only Keystone's desire not to be tightly
coupled with this that would prevent them from calling Mistral directly.

However, I think:

   Keystone->[notifications]->Zaqar->Mistral

would make sense if:

- Keystone were ordinarily connected to the notification bus; and
- everyone were doing it (i.e. some service existed to routinely pick
notifications off the notification bus firehose, filter and sanitise
them for user consumption and redirect them to Zaqar)

Kevin indicates in another part of the thread that the former may not be
the case in a multi-region deployment, so that is one point against it.

The latter certainly isn't the case *yet*. But it has been discussed

Re: [openstack-dev] [stable] Making stable maintenance its own OpenStack project team

2015-11-12 Thread Flavio Percoco

On 11/11/15 13:39 -0600, Matt Riedemann wrote:



On 11/11/2015 8:51 AM, Flavio Percoco wrote:

On 09/11/15 21:30 -0600, Matt Riedemann wrote:

On 11/9/2015 9:12 PM, Matthew Treinish wrote:

On Mon, Nov 09, 2015 at 10:54:43PM +, Kuvaja, Erno wrote:

On Mon, Nov 09, 2015 at 05:28:45PM -0500, Doug Hellmann wrote:

Excerpts from Matt Riedemann's message of 2015-11-09 16:05:29 -0600:


On 11/9/2015 10:41 AM, Thierry Carrez wrote:

Hi everyone,

A few cycles ago we set up the Release Cycle Management team which
was a bit of a frankenteam of the things I happened to be leading:
release management, stable branch maintenance and vulnerability

management.

While you could argue that there was some overlap between those
functions (as in, "all these things need to be released") logic
was not the primary reason they were put together.

When the Security Team was created, the VMT was spinned out of the
Release Cycle Management team and joined there. Now I think we
should spin out stable branch maintenance as well:

* A good chunk of the stable team work used to be stable point
release management, but as of stable/liberty this is now done by
the release management team and triggered by the project-specific
stable maintenance teams, so there is no more overlap in tooling
used there

* Following the kilo reform, the stable team is now focused on
defining and enforcing a common stable branch policy[1], rather
than approving every patch. Being more visible and having more
dedicated members can only help in that very specific mission

* The release team is now headed by Doug Hellmann, who is focused
on release management and does not have the history I had with
stable branch policy. So it might be the right moment to refocus
release management solely on release management and get the stable
team its own leadership

* Empowering that team to make its own decisions, giving it more
visibility and recognition will hopefully lead to more resources
being dedicated to it

* If the team expands, it could finally own stable branch health
and gate fixing. If that ends up all falling under the same roof,
that team could make decisions on support timeframes as well,
since it will be the primary resource to make that work


Isn't this kind of already what the stable maint team does? Well,
that and some QA people like mtreinish and sdague.



So.. good idea ? bad idea ? What do current stable-maint-core[2]
members think of that ? Who thinks they could step up to lead that

team ?


[1]
http://docs.openstack.org/project-team-guide/stable-branches.html
[2] https://review.openstack.org/#/admin/groups/530,members



With the decentralizing of the stable branch stuff in Liberty [1] it
seems like there would be less use for a PTL for stable branch
maintenance - the cats are now herding themselves, right? Or at
least that's the plan as far as I understood it. And the existing
stable branch wizards are more or less around for help and answering

questions.


The same might be said about releasing from master and the release
management team. There's still some benefit to having people
dedicated
to making sure projects all agree to sane policies and to keep up
with
deliverables that need to be released.


Except the distinction is that relmgt is actually producing
something. Relmgt
has the releases repo which does centralize library releases, reno
to do the
release notes, etc. What does the global stable core do? Right now
it's there
almost entirely to just add people to the project specific stable
core teams.

-Matt Treinish



I'd like to move the discussion from what are the roles of the
current stable-maint-core and more towards what the benefits would
be having a stable-maint team rather than the -core group alone.

Personally I think the stable maintenance should be quite a lot more
than unblocking gate and approving people allowed to merge to the
stable branches.



Sure, but that's not we're talking about here are we? The other
tasks, like
backporting changes for example, have been taken on by project teams.
Even in
your other email you mentioned that you've been doing backports and
other tasks
that you consider stable maint in a glance only context. That's
something we
changed in kilo which ttx referenced in [1] to enable that to happen,
and it was
the only way to scale things.

The discussion here is about the cross project effort around stable
branches,
which by design is a more limited scope now. Right now the cross
project effort
around stable branch policy is really 2 things (both of which ttx
already
mentioned):

1. Keeping the gates working on the stable branches
2. Defining and enforcing stable branch policy.

The only lever on #2 is that the global stable-maint-core is the only
group
which has add permissions to the per project stable core groups.
(also the
stable branch policy wiki, but that rarely changes) We specifically
shrunk it to
these 2 things in. [1] Well, really 3 things there, but since we're
not doing
integrated 

Re: [openstack-dev] [HA] weekly High Availability meetings on IRC start next Monday

2015-11-12 Thread Sergii Golovatiuk
Hi,


On Thu, Nov 12, 2015 at 3:06 PM, Adam Spiers  wrote:

> Hi Sergii,
>
> Thanks a lot for the feedback!
>
> Sergii Golovatiuk  wrote:
> > Hi Adam,
> >
> > It's great we are moving forward with HA community. Thank you so much for
> > brining HA to next level. However, I have couple of comments
> >
> > [1] contains agenda. I guess we should move it to
> > https://etherpad.openstack.org. That will allow people to add own
> topics to
> > discuss. Action items can be put there as well.
> >
> >  [1] https://wiki.openstack.org/wiki/Meetings/HATeamMeeting
>
> It's a wiki, so anyone can already add their own topics, and in fact
> the page already encourages people to do that :-)
>
> I'd prefer to keep it as a wiki because that is consistent with the
> approach of all the other OpenStack meetings, as recommended by
>
>   https://wiki.openstack.org/wiki/Meetings/CreateaMeeting#Add_a_Meeting
>
> It also results in a better audit trail than etherpad (where changes
> can be made anonymously).
>

I agree. Let's follow the rules.

>
> Action items will be captured by the MeetBot:
>
>
> https://git.openstack.org/cgit/openstack-infra/meetbot/tree/doc/Manual.txt
>
> > [2] declares meetings at 9am UTC which might be tough for US based
> folks. I
> > might be wrong here as I don't know the location of HA experts.
> >
> >  [2] http://eavesdrop.openstack.org/#High_Availability_Meeting
>
> Yes, I was aware of this :-/  The problem is that the agenda for the
> first meeting will focus on hypervisor HA, and the interested parties
> who met in Tokyo are all based in either Europe or Asia (Japan and
> Australia).  It's hard but possible to find a time which accommodates
> two continents, but almost impossible to find a time which
> accommodates three :-/
>
>
I ran into issues setting the event in UTC. Use Africa/Accra in Google
Calendar, as it doesn't have a UTC time zone.


> If it's any consolation, the meeting logs will be available
> afterwards, and also the meeting is expected to be short (around 30
> minutes) since the majority of work will continue via email and IRC
> outside the meeting.  This first meeting is mainly to set a direction
> for future collaboration.
>
> However, suggestions for how to handle this better are always welcome,
> and if the geographical distribution of attendees of future meetings
> changes then of course we can consider changing the time in order to
> accommodate.  I want it to be an inclusive sub-community.
>
> Cheers,
> Adam
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [stackalytics] [metrics] Review metrics: average numbers

2015-11-12 Thread Mike Scherbakov
Jesus,
thanks for sharing this. Looks like you've got quite a comprehensive data
analysis tool for code review. Is there a way to get Fuel added somehow?

Ilya,
> Do you mean to calculate stats not only for open, but also for those that
are already closed?
Yes. I need to calculate stats for ALL change requests, including those
already closed, over some period of time. For instance, the last 30, 60, or
90 days.
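To make the metric concrete, here is a rough sketch of what I mean. The dict
keys (`uploaded`, `votes`, `closed`) are hypothetical, not the actual
Stackalytics or Gerrit schema:

```python
from datetime import datetime, timedelta

# Sketch of the metric described above: for every change request, open or
# closed, measure how long it has been waiting for reviewers since the last
# vote/mark (or since upload, if it was never voted on), then average over
# ALL changes in the set rather than only the open ones.

def wait_since_last_vote(change, now):
    """Time from the last vote (or the upload, if there were no votes)
    until the change was closed, or until `now` if it is still open."""
    last_activity = max([change["uploaded"]] + change.get("votes", []))
    end = change.get("closed") or now
    return end - last_activity

def average_wait(changes, now):
    """Average waiting time across all changes, open and closed."""
    waits = [wait_since_last_vote(c, now) for c in changes]
    return sum(waits, timedelta()) / len(waits)
```

For example, a change last voted on Nov 5 and closed Nov 9 contributes 4
days; an open change uploaded Nov 10 with no votes contributes the time
elapsed up to `now`.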

btw, I updated the list of fuel-group repos, please review:
https://review.openstack.org/#/c/244936/.

Thank you,

On Thu, Nov 12, 2015 at 6:59 AM Ilya Shakhat  wrote:

> Hi Mike,
>
>> > Do I understand right, that average numbers
>> > here are calculated out of open reviews, not total number of reviews?
>
>
> Average numbers are calculated for reviews within the group. But I'd
> expect them to be "time since the last vote", not "time since the patch
> was proposed", as they are now.
>
>
>> > The most important number which I'm trying to get is the average time
>> > change requests spend waiting for reviewers since the last vote or mark,
>> > across all requests (not only those which remain in open state, as it
>> > is now, I believe).
>
>
> Do you mean to calculate stats not only for open, but also for those that
> are already closed? Should it be for all times, or during specified period?
>
> Regards,
> --Ilya
>
>
>
> 2015-11-12 1:14 GMT+03:00 Jesus M. Gonzalez-Barahona :
>
>> Hi, Mike,
>>
>> I'm not sure what you are looking for exactly, but maybe you can have a
>> look at the quarterly reports. AFAIK, currently there is none specific
>> to Fuel, but for example for Nova, you have:
>>
>> http://activity.openstack.org/dash/reports/2015-q3/pdf/projects/nova.pdf
>>
>>
>> On page 6, you have "time waiting for reviewer" (from the moment a new
>> patchset is produced, to the time a conclusive review vote is found in
>> Gerrit), and "time waiting for developer" (from the conclusive review
>> vote to next patchset).
>>
>> We're now working on a visualization for that kind of information. For
>> now, we only have complete changeset values; check it out if you're
>> interested:
>>
>> http://blog.bitergia.com/2015/10/22/understanding-the-code-review-process-in-openstack/
>>
>>
>> Saludos,
>>
>> Jesus.
>>
>> On Wed, 2015-11-11 at 21:45 +, Mike Scherbakov wrote:
>> > Hi stackers,
>> > I have a question about Stackalytics.
>> > I'm trying to get some more data from code review stats. For Fuel,
>> > for instance,
>> > http://stackalytics.com/report/reviews/fuel-group/open
>> > shows some useful stats. Do I understand right, that average numbers
>> > here are calculated out of open reviews, not total number of reviews?
>> >
>> > The most important number which I'm trying to get is the average time
>> > change requests spend waiting for reviewers since the last vote or mark,
>> > across all requests (not only those which remain in open state, as it
>> > is now, I believe).
>> >
>> > How hard would it be to extend Stackalytics to make this happen?
>> >
>> > Thanks!
>> > --
>> > Mike Scherbakov
>> > #mihgen
>> --
>> Bitergia: http://bitergia.com
>> /me at Twitter: https://twitter.com/jgbarah
>>
>>
>
-- 
Mike Scherbakov
#mihgen