Re: [openstack-dev] DVR and FWaaS integration

2014-07-02 Thread Yi Sun
The NS FW will be on a centralized node for sure. The DVR + FWaaS 
solution is really for EW traffic. If you are interested in the topic, 
please propose your preferred meeting time and join the meeting so that 
we can discuss it.


Yi

On 7/2/14, 7:05 PM, joehuang wrote:


Hello,

It's hard to integrate DVR and FWaaS. My proposal is to split 
FWaaS into two parts: one part is east-west FWaaS, which could be 
handled on the DVR side in a distributed manner. The other 
part is north-south FWaaS, which could be handled on the Network Node 
side, i.e. in a centralized manner. After the split, north-south 
FWaaS could be implemented in software or hardware, while 
east-west FWaaS is better implemented in software given its 
distributed nature.


Chaoyi Huang ( Joe Huang )

OpenStack Solution Architect

IT Product Line

Tel: 0086 755-28423202 Cell: 0086 158 118 117 96 Email: 
joehu...@huawei.com


Huawei Area B2-3-D018S Bantian, Longgang District,Shenzhen 518129, 
P.R.China


*From:* Yi Sun [mailto:beyo...@gmail.com]
*Sent:* July 3, 2014 4:42
*To:* OpenStack Development Mailing List (not for usage questions)
*Cc:* Kyle Mestery (kmestery); Rajeev; Gary Duan; Carl (OpenStack Neutron)
*Subject:* Re: [openstack-dev] DVR and FWaaS integration

All,

After talking to Carl and the FWaaS team, both sides suggested calling a 
meeting to discuss this topic in deeper detail. I heard that 
Swami is traveling this week, so I guess the earliest time we can have 
a meeting is sometime next week. I will be out of town on Monday, so 
any day after Monday should work for me. We can do IRC, a Google 
Hangout, GMT, or even a face to face.


For anyone interested, please propose your preferred time.

Thanks

Yi

On Sun, Jun 29, 2014 at 12:43 PM, Carl Baldwin > wrote:


In line...

On Jun 25, 2014 2:02 PM, "Yi Sun" > wrote:

>
> All,
> During last summit, we were talking about the integration issues 
between DVR and FWaaS. After the summit, I had one IRC meeting with the 
DVR team. But after that meeting I was tied up with my work and did 
not get time to continue following up on the issue. To not slow down the 
discussion, I'm forwarding the email that I sent out as the follow-up 
to the IRC meeting here, so that whoever is interested in the 
topic can continue to discuss it.

>
> First some background about the issue:
> In the normal case, FW and router are running together inside the 
same box so that the FW can get route and NAT information from the router 
component. And in order for the FW to function correctly, the FW needs to 
see both directions of the traffic.
> DVR is designed in an asymmetric way such that each DVR only sees one leg 
of the traffic. If we build the FW on top of DVR, then FW functionality 
will be broken. We need to find a good method to make the FW work with DVR.

>
> ---forwarding email---
>  During the IRC meeting, we thought that we could force the traffic to 
the FW before DVR. Vivek had more detail; he thinks that since 
br-int knows whether a packet is routed or switched, it is possible 
for br-int to forward traffic to the FW before it forwards to DVR. The 
whole forwarding process can be operated as part of a service-chain 
operation. And there could be a FWaaS driver that understands the DVR 
configuration and sets up OVS flows on br-int.
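
For readers less familiar with the mechanics under discussion, a rough sketch of the kind of flow programming such a FWaaS driver might do on br-int is shown below. The bridge name, router MAC and firewall port number are placeholder values, and this is only an illustration of steering routed traffic to a firewall port ahead of the DVR flows, not an actual Neutron driver.

    import subprocess

    def redirect_routed_traffic_to_fw(bridge, router_mac, fw_ofport):
        # Hypothetical illustration: packets addressed to the distributed
        # router's MAC (i.e. traffic that is about to be routed) are sent
        # to the firewall's OVS port before the normal DVR flows see them.
        flow = ('table=0,priority=200,dl_dst=%s,actions=output:%d'
                % (router_mac, fw_ofport))
        subprocess.check_call(['ovs-ofctl', 'add-flow', bridge, flow])

    # Placeholder values, for illustration only.
    redirect_routed_traffic_to_fw('br-int', 'fa:16:3e:aa:bb:cc', 42)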


I'm not sure what this solution would look like.  I'll have to get the 
details from Vivek.  It seems like this would effectively centralize 
the traffic that we worked so hard to decentralize.


It did cause me to wonder about something:  would it be possible to 
regain symmetry in the traffic by directing any response traffic 
back to the DVR component which handled the request traffic?  I guess 
this would require running conntrack on the target side to track and 
identify return traffic.  I'm not sure how this would be inserted into 
the data path yet.  This is a half-baked idea here.


> The concern is that normally firewall and router are integrated together so that the firewall can make 
the right decision based on the routing result. But what we are suggesting 
is to split the firewall and router into two separate components, 
hence there could be issues. For example, the FW will not be able to get 
enough information to set up zones. Normally a zone contains a group of 
interfaces that can be used in the firewall policy to enforce the 
direction of the policy. If we forward traffic to the firewall before DVR, 
then we can only create policy based on subnets, not the interface.
> Also, I'm not sure if we have ever planned to support SNAT on the 
DVR, but if we do, then depending on at which point we forward 
traffic to the FW, the subnet may not even work for us anymore (even 
DNAT could have a problem too).


I agree that splitting the firewall from routing presents some 
problems that may be difficult to overcome.  I don't know how it would 
be done while maintaining the benefits of DVR.

Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-02 Thread Qiming Teng
On Wed, Jul 02, 2014 at 10:54:49AM -0700, Clint Byrum wrote:
> Excerpts from Qiming Teng's message of 2014-07-02 00:02:14 -0700:
> > Just some random thoughts below ...
> > 
> > On Tue, Jul 01, 2014 at 03:47:03PM -0400, Mike Spreitzer wrote:
> > > In AWS, an autoscaling group includes health maintenance functionality 
> > > --- 
> > > both an ability to detect basic forms of failures and an ability to react 
> > > properly to failures detected by itself or by a load balancer.  What is 
> > > the thinking about how to get this functionality in OpenStack?  Since 
> > 
> > We are prototyping a solution to this problem at IBM Research - China
> > lab.  The idea is to leverage oslo.messaging and ceilometer events for
> > instance (possibly other resource such as port, securitygroup ...)
> > failure detection and handling.
> > 
> 
> Hm.. perhaps you should be contributing some reviews here as you may
> have some real insight:
> 
> https://review.openstack.org/#/c/100012/
> 
> This sounds a lot like what we're working on for continuous convergence.

Great.  I will look into this spec and see if I can contribute some
ideas.

> > > OpenStack's OS::Heat::AutoScalingGroup has a more general member type, 
> > > what is the thinking about what failure detection means (and how it would 
> > > be accomplished, communicated)?
> > 
> > When most OpenStack services are making use of oslo.notify, in theory, a
> > service should be able to send/receive events related to resource
> > status.  In our current prototype, at least host failure (detected in
> > Nova and reported with a patch), VM failure (detected by nova), and some
> > lifecycle events of other resources can be detected and then collected
> > by Ceilometer.  There is certainly a possibility to listen to the
> > message queue directly from Heat, but we only implemented the Ceilometer
> > centric approach.
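
As a rough illustration of what "listening to the message queue directly" could look like, the sketch below uses an oslo.messaging notification listener. Module paths and transport options have shifted between releases, and handle_member_failure is a made-up callback, so treat this as a sketch of the pattern rather than exact API usage.

    from oslo_config import cfg
    import oslo_messaging

    def handle_member_failure(instance_id):
        # Placeholder: trigger whatever recovery action is appropriate.
        print("member %s reported down" % instance_id)

    class InstanceEventEndpoint(object):
        # Called for notifications emitted at the 'info' priority.
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            if event_type == 'compute.instance.delete.end':
                handle_member_failure(payload.get('instance_id'))

    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [InstanceEventEndpoint()], executor='threading')
    listener.start()
    listener.wait()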
> > 
> > > 
> > > I have not found design discussion of this; have I missed something?
> > > 
> > > I suppose the natural answer for OpenStack would be centered around 
> > > webhooks.  An OpenStack scaling group (OS SG = OS::Heat::AutoScalingGroup 
> > > or AWS::AutoScaling::AutoScalingGroup or OS::Heat::ResourceGroup or 
> > > OS::Heat::InstanceGroup) could generate a webhook per member, with the 
> > > meaning of the webhook being that the member has been detected as dead 
> > > and 
> > > should be deleted and removed from the group --- and a replacement member 
> > > created if needed to respect the group's minimum size.  
> > 
> > Well, I would suggest we generalize this into a event messaging or
> > signaling solution, instead of just 'webhooks'.  The reason is that
> > webhooks as it is implemented today is not carrying a payload of useful
> > information -- I'm referring to the alarms in Ceilometer.
> > 
> > There are other cases as well.  A member failure could be caused by a 
> > temporary communication problem, which means it may show up quickly when
> > a replacement member is already being created.  It may mean that we have
> > to respond to an 'online' event in addition to an 'offline' event?
> > 
> 
> The ideas behind convergence help a lot here. Skew happens in distributed
> systems, so we expect it constantly. In the extra-capacity situation
> above, we would just deal with it by scaling back down. There are also
> situations where we might accidentally create two physical resources
> because we got a 500 from the API but it was after the resource was
> being created. This is the same problem, and has the same answer: pick
> one and scale down (and if this is a critical server like a database,
> we'll need lifecycle callbacks that will prevent suddenly killing a node
> that would cost you uptime or recovery time).

Glad to know this is considered and handled with a generic solution.
As for recovering a server, I still suggest we have per-resource-type
restart logic.  In the case of a nested stack, callbacks seem like the
right way to go.

> > > When the member is 
> > > a Compute instance and Ceilometer exists, the OS SG could define a 
> > > Ceilometer alarm for each member (by including these alarms in the 
> > > template generated for the nested stack that is the SG), programmed to 
> > > hit 
> > > the member's deletion webhook when death is detected (I imagine there are 
> > > a few ways to write a Ceilometer condition that detects instance death). 
> > 
> > Yes.  Compute instance failure can be detected with a Ceilometer plugin.
> > In our prototype, we developed a Dispatcher plugin that can handle
> > events like 'compute.instance.delete.end', 'compute.instance.create.end'
> > after they have been processed based on a event_definitions.yaml file.
> > There could be other ways, I think.
> > 
> > The problem here today is about the recovery of SG member.  If it is a
> > compute instance, we can 'reboot', 'rebuild', 'evacuate', 'migrate' it,
> > just to name a few options.  The most brutal way to do this is like what
> > HARestarter is doing to

Re: [openstack-dev] Using tmux instead of screen in devstack

2014-07-02 Thread James Polley
On Thu, Jul 3, 2014 at 12:55 AM, Anita Kuno  wrote:

> On 07/01/2014 08:52 AM, CARVER, PAUL wrote:
> > Anant Patil wrote:
> >> I use tmux (an alternative to screen) a lot and I believe lot of other
> developers use it.
> >> I have been using devstack for some time now and would like to add the
> option of
> >> using tmux instead of screen for creating sessions for openstack
> services.
> >> I couldn't find a way to do that in current implementation of devstack.
> >
> > Is it just for familiarity or are there specific features lacking in
> screen that you think
> > would benefit devstack? I’ve tried tmux a couple of times but didn’t
> find any
> > compelling reason to switch from screen. I wouldn’t argue against anyone
> who
> > wants to use it for their day to day needs. But don’t just change
> devstack on a whim,
> > list out the objective benefits.
> >
> > Having a configuration option to switch between devstack-screen and
> devstack-tmux
> > seems like it would probably add more complexity than benefit,
> especially if there
> > are any functional differences. If there are functional differences it
> would be better
> > to decide which one is best (for devstack, not necessarily best for
> everyone in the world)
> > and go with that one only.
> >
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> Actually as a tmux user I really like that devstack uses screen, that
> way when I have screen running in tmux, I can decide (since they have
> different default key combinations, Control+A for screen and Control+B
> for tmux) which utility I am talking to, screen or tmux.
>
> Thanks,
> Anita.
>

As a long-time nested-screen user I've run into this a lot. I've always run
a local screen session on my desktop/laptop; and then another session on
the machine I use to do real work.

When I used screen I always used to use "escape ^ee" on my local session
(as I used it less) and left the remote at the default, so that I could
choose which session I was controlling.

Now that I've switched to tmux, I do the same - but with tmux the command
is "set prefix C-e". I find tmux easier to get set up than screen because I
can have a simple shell script on my local machine that sets it up as a
"master" session.

If we did move from screen to tmux (it sounds like we have a long way to go
before we're convinced it's worth the effort), I don't think the key
bindings will be something that should sway the decision one way or the
other - they're very easy to change.


> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-02 Thread Qiming Teng
On Wed, Jul 02, 2014 at 12:29:31PM -0400, Mike Spreitzer wrote:
> Qiming Teng  wrote on 07/02/2014 03:02:14 AM:
> 
> > Just some random thoughts below ...
> > 
> > On Tue, Jul 01, 2014 at 03:47:03PM -0400, Mike Spreitzer wrote:
> > > ...
> > > I have not found design discussion of this; have I missed something?
> > > 
> > > I suppose the natural answer for OpenStack would be centered around 
> > > webhooks... 
> > 
> > Well, I would suggest we generalize this into a event messaging or
> > signaling solution, instead of just 'webhooks'.  The reason is that
> > webhooks as it is implemented today is not carrying a payload of useful
> > information -- I'm referring to the alarms in Ceilometer.
> 
> OK, this is great (and Steve Hardy provided more details in his reply), I 
> did not know about the existing abilities to have a payload.  However 
> Ceilometer alarms are still deficient in that way, right?  A Ceilometer 
> alarm's action list is simply a list of URLs, right?  I would be happy to 
> say let's generalize Ceilometer alarms to allow a payload in an action.

Yes. Steve kindly pointed out that an alarm could be used to carry a
payload, though this is not yet implemented.  My concern is actually about
'flexibility'.  For different purposes, an alarm may be required to
carry payloads of different formats.  We need a specification/protocol
between Heat and Ceilometer so that Heat can specify in an alarm:
  - tell me when/which instance is down/up when sending me an alarm
about instance lifecycle.
  - tell me which instances from my group are affected when a host is
down
  - (other use cases?)
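
To make the kind of specification being asked for a little more concrete, a purely hypothetical payload for the first two cases could look like the following; none of these field names are an existing Heat/Ceilometer contract.

    # Hypothetical alarm notification payload -- field names are illustrative only.
    alarm_notification = {
        'alarm_id': 'a1b2c3',
        'state': 'alarm',
        'reason': 'group member went offline',
        'reason_data': {
            'event_type': 'compute.instance.delete.end',
            'instance_id': 'UUID-OF-AFFECTED-MEMBER',
            'host': 'compute-01',         # which host failed, for the host-down case
            'lifecycle_state': 'down',    # down/up, for the instance lifecycle case
        },
    }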

> > There are other cases as well.  A member failure could be caused by a 
> > temporary communication problem, which means it may show up quickly when
> > a replacement member is already being created.  It may mean that we have
> > to respond to an 'online' event in addition to an 'offline' event?
> > ...
> > The problem here today is about the recovery of SG member.  If it is a
> > compute instance, we can 'reboot', 'rebuild', 'evacuate', 'migrate' it,
> > just to name a few options.  The most brutal way to do this is like what
> > HARestarter is doing today -- delete followed by a create.
> 
> We could get into arbitrary subtlety, and maybe eventually will do better, 
> but I think we can start with a simple solution that is widely applicable. 
>  The simple solution is that once the decision has been made to do 
> convergence on a member (note that this is distinct from merely detecting 
> and noting a divergence) then it will be done regardless of whether the 
> doomed member later appears to have recovered, and the convergence action 
> for a scaling group member is to delete the old member and create a 
> replacement (not in that order).

Umh ... For transient errors, it won't be uncommon that some members may
appear unreachable (e.g. from a load balancer), as a result of, say, image
downloading saturating network bandwidth.  Solving this using
convergence logic?  The observer sees only 2 members running instead of
the 3 which is the desired state, then the convergence engine starts to
create a new member.  Now, the previously disappeared member shows up again.
What should the observer do?  Would it be smart enough to know that this
is the old member coming back to life and thus cancel the creation of the new
member?  Would it be able to recognize that this instance was part of
a Resource Group at all?

> > > When the member is a nested stack and Ceilometer exists, it could be 
> the 
> > > member stack's responsibility to include a Ceilometer alarm that 
> detects 
> > > the member stack's death and hit the member stack's deletion webhook. 
> > 
> > This is difficult.  A '(nested) stack' is a Heat specific abstraction --
> > recall that we have to annotate a nova server resource in its metadata
> > to which stack this server belongs.  Besides the 'visible' resources
> > specified in a template, Heat may create internal data structures and/or
> > resources (e.g. users) for a stack.  I am not quite sure a stack's death
> > can be easily detected from outside Heat.  It would be at least
> > cumbersome to have Heat notify Ceilometer that a stack is dead, and then
> > have Ceilometer send back a signal.
> 
> A (nested) stack is not only a heat-specific abstraction but its semantics 
> and failure modes are specific to the stack (at least, its template).  I 
> think we have no practical choice but to let the template author declare 
> how failure is detected.  It could be as simple as creating Ceilometer 
> alarms that detect the death of one or more resources in the nested stack; it 
> could be more complicated Ceilometer stuff; it could be based on something 
> other than, or in addition to, Ceilometer.  If today there are not enough 
> sensors to detect failures of all kinds of resources, I consider that a 
> gap in telemetry (and think it is small enough that we can proceed 
> usefully today, and should plan on filling that gap 

Re: [openstack-dev] Using tmux instead of screen in devstack

2014-07-02 Thread Dean Troyer
I'm not going to get into a screen vs tmux debate; we removed tmux support
two years ago and changing that now is going to be a high bar to get
over... but it seems some expectations should be set here.


On Wed, Jul 2, 2014 at 11:05 PM, Anant Patil  wrote:
[...]

> I understand that the changes will introduce complexity, but I will try
> to abstract out this complexity so that we don't deal with it everywhere
> in devstack but in one place.
>
> As Sean suggested I will go through all the screen calls in devstack and
> see if we have equivalent tmux calls and how we can abstract these
> things out and put them in one place. I don't think there's going to be any
> functional differences, but I will investigate and point them out in the
> blueprint.
>

DevStack is an opinionated installer, and as such will not be all things to
all people.

Most of the screen usage is in the set of functions that handle process
start and stop in functions-common.  There is also some logging and support
for rejoin-stack.sh set up in stack.sh (I think -- too lazy to look this late
at night).

As I said above, and am going to repeat here, making this change is going
to be a high bar to get over.  screen has certainly been a challenge for us
but at the moment we have about 90% of our issues with it solved.  Changing
the devil we know for a bag of unknowns is not appealing to me at this point.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Using tmux instead of screen in devstack

2014-07-02 Thread Anant Patil
I am sure there are many developers learning screen in order to work
with devstack and I don't want to rob them of their efforts.

However, there are developers (like me :) ) who use tmux every day. I had
used screen for some duration and moved to tmux as it provided some
features that helped me debug and develop effectively. Every single day
when I use devstack, the thought of having tmux supported crosses my
mind.

While this is not the place to discuss pros and cons of tmux or screen,
I would still like to provide links to get a glimpse what tmux has to
offer.

http://en.wikipedia.org/wiki/Tmux
http://www.wikivs.com/wiki/Screen_vs_tmux
http://dominik.honnef.co/posts/2010/10/why_you_should_try_tmux_instead_of_screen/

How does it benefit devstack?

Developers already using tmux can adopt it easily. devstack will bring all the
advantages that tmux brings. And I hope that the choice will even help
devstackers unhappy with screen to try, IMHO, a newer and more elegant
session manager.

Why optional?

Both screen and tmux are under active development. devstack with tmux
brings all the advantages that tmux brings, and also the limitations that
tmux brings. An apples-to-apples comparison is not going to help any of us.

We spend time learning and using tools of our choice so that we can work
effectively. Having no choice of tools in a piece of software only makes
the adoption of that software more difficult.

Code Complexity

I understand that the changes will introduce complexity, but I will try
to abstract out this complexity so that we don't deal with it everywhere
in devstack but in one place.

As Sean suggested I will go through all the screen calls in devstack and
see if we have equivalent tmux calls and how we can abstract these
things out and put them in one place. I don't think there's going to be any
functional differences, but I will investigate and point them out in the
blueprint.

Your feedback is valuable to me. Looking forward to more of it!



On Thu, Jul 3, 2014 at 5:55 AM, Mathieu Gagné  wrote:

> On 2014-07-02 2:10 PM, Yuriy Taraday wrote:
>
>> One problem that comes to mind is that screen tries to reopen your
>> terminal when you attach to existing session or run a new one.
>> So if you have one user you log in to a test server (ubuntu? root?) and
>> another user that runs screen session (stack), you won't be able to
>> attach to the session unless you do some dance with access rights (like
>> "chmod a+rw `tty`" or smth).
>>
>>
> I run this command to fix this issue instead of messing with chmod:
>
>   script /dev/null
>
> --
> Mathieu
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-02 Thread Qiming Teng
On Wed, Jul 02, 2014 at 11:02:36AM +0100, Steven Hardy wrote:
> On Wed, Jul 02, 2014 at 03:02:14PM +0800, Qiming Teng wrote:
> > Just some random thoughts below ...
> > 
> > On Tue, Jul 01, 2014 at 03:47:03PM -0400, Mike Spreitzer wrote:
> > > In AWS, an autoscaling group includes health maintenance functionality 
> > > --- 
> > > both an ability to detect basic forms of failures and an ability to react 
> > > properly to failures detected by itself or by a load balancer.  What is 
> > > the thinking about how to get this functionality in OpenStack?  Since 
> > 
> > We are prototyping a solution to this problem at IBM Research - China
> > lab.  The idea is to leverage oslo.messaging and ceilometer events for
> > instance (possibly other resource such as port, securitygroup ...)
> > failure detection and handling.
> 
> This sounds interesting, are you planning to propose a spec for heat
> describing this work and submit your patches to heat?

Steve, this work is still a prototype; the loop has yet to be closed.
The basic idea is:

1. Ensure nova server redundancy by providing a VMCluster resource type
in the form of a Heat plugin.  It could be contributed back to the
community if it proves useful.  I have two concerns: 1) it is not a generic
solution yet, due to lacking support for template resources; 2) instead
of a new resource type, maybe a better approach is to add an optional
group of properties to the Nova server resource specifying its HA
requirements.  (A bare-bones skeleton of such a plugin is sketched below,
after item 3.)

2. Detection of Host/VM failures.  Currently we rely on Nova's detection of
VM lifecycle events.  I'm not sure it is applicable to hypervisors other
than KVM.  We have some patches to the ServiceGroup service in Nova so
that Host failures can be detected and reported too.  This can be a
patch to Nova.

3. Recovery from Host/VM failures.  We can either use the Events
collected by Ceilometer directly, or have Ceilometer convert Events into
Samples so that we can reuse the Alarm service (evaluator + notifier).
Neither way is working now.  For the Event path, we are blocked by the
authentication problem; for the Alarm path, we don't know how to carry a
payload via the AlarmUrl.
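
As a point of reference for what "a VMCluster resource type in the form of a Heat plugin" (item 1 above) involves, below is a bare-bones skeleton of a Heat resource plugin. The type name, properties and creation logic are placeholders meant only to show the plugin shape, not the actual prototype described here.

    from heat.engine import properties
    from heat.engine import resource


    class VMCluster(resource.Resource):
        # Skeleton only -- properties and logic are placeholders.

        properties_schema = {
            'size': properties.Schema(
                properties.Schema.INTEGER,
                'Number of redundant servers to maintain.',
                default=2),
            'flavor': properties.Schema(
                properties.Schema.STRING,
                'Nova flavor for the cluster members.',
                required=True),
        }

        def handle_create(self):
            # Placeholder: create the member servers and record their IDs
            # so that health monitoring and recovery can act on them later.
            pass


    def resource_mapping():
        # Registers the custom type with the Heat engine when this module
        # is dropped into the engine's plugin directory.
        return {'My::HA::VMCluster': VMCluster}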

Some help and guidance would be highly appreciated.

> > 
> > > OpenStack's OS::Heat::AutoScalingGroup has a more general member type, 
> > > what is the thinking about what failure detection means (and how it would 
> > > be accomplished, communicated)?
> > 
> > When most OpenStack services are making use of oslo.notify, in theory, a
> > service should be able to send/receive events related to resource
> > status.  In our current prototype, at least host failure (detected in
> > Nova and reported with a patch), VM failure (detected by nova), and some
> > lifecycle events of other resources can be detected and then collected
> > by Ceilometer.  There is certainly a possibility to listen to the
> > message queue directly from Heat, but we only implemented the Ceilometer
> > centric approach.
> 
> It has been pointed out a few times that in large deployments, different
> services may not share the same message bus.  So while *an* option could be
> heat listenting to the message bus, I'd prefer that we maintain the alarm
> notifications via the ReST API as the primary signalling mechanism.

Agreed. IIRC, somewhere in the Ceilometer documentation, it was
suggested to use different queues for different purposes.
No objection to keeping alarms as the primary mechanism till we have a
compelling reason to change.

> > > 
> > > I have not found design discussion of this; have I missed something?
> > > 
> > > I suppose the natural answer for OpenStack would be centered around 
> > > webhooks.  An OpenStack scaling group (OS SG = OS::Heat::AutoScalingGroup 
> > > or AWS::AutoScaling::AutoScalingGroup or OS::Heat::ResourceGroup or 
> > > OS::Heat::InstanceGroup) could generate a webhook per member, with the 
> > > meaning of the webhook being that the member has been detected as dead 
> > > and 
> > > should be deleted and removed from the group --- and a replacement member 
> > > created if needed to respect the group's minimum size.  
> > 
> > Well, I would suggest we generalize this into a event messaging or
> > signaling solution, instead of just 'webhooks'.  The reason is that
> > webhooks as it is implemented today is not carrying a payload of useful
> > information -- I'm referring to the alarms in Ceilometer.
> 
> The resource signal interface used by ceilometer can carry whatever data
> you like, so the existing solution works fine, we don't need a new one IMO.
> 
> For example look at this patch which converts WaitConditions to use the
> resource_signal interface:
> 
> https://review.openstack.org/#/c/101351/2/heat/engine/resources/wait_condition.py
> 
> We pass the data to the WaitCondition via a resource signal, the exact same
> transport that is used for alarm notifications from ceilometer.
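
For the triggering side discussed below, a minimal sketch of sending a resource signal with an arbitrary JSON body is shown here. The URL layout follows the usual Heat REST pattern, but the endpoint path, token and payload are assumptions for illustration rather than copied from the patch above.

    import json
    import requests

    # Assumed layout of Heat's resource signal endpoint; substitute real values.
    url = ('http://heat-api:8004/v1/TENANT_ID/stacks/web_asg/STACK_UUID'
           '/resources/scale_down_policy/signal')

    payload = {'unhealthy_member': 'INSTANCE_UUID'}  # arbitrary data, per the thread

    resp = requests.post(
        url,
        data=json.dumps(payload),
        headers={'X-Auth-Token': 'TOKEN', 'Content-Type': 'application/json'})
    resp.raise_for_status()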

I can understand the Heat side processing of signal payload.  From the
triggering side, I haven't se

[openstack-dev] [Neutron] [ML2] [L2GW] multiple l2gw project in ml2 team?

2014-07-02 Thread loy wolfe
I read the ML2 tracking reviews and found two similar specs for L2 gateway:

1) GW API: L2 bridging API - Piece 1: Basic use cases
https://review.openstack.org/#/c/93613/


2) API Extension for l2-gateway
https://review.openstack.org/#/c/100278/

The Neutron external port spec is also related:
https://review.openstack.org/#/c/87825/

All these specs address the same problem: how to establish a bridging
connection between a natively created Neutron VIF and an external port on a
physical node. Is there any unification effort to merge these proposals?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Flavor framework: Conclusion

2014-07-02 Thread Kyle Mestery
We're coming down to the wire here with regards to Neutron BPs in
Juno, and I wanted to bring up the topic of the flavor framework BP.
This is a critical BP for things like LBaaS, FWaaS, etc. We need this
work to land in Juno, as these other work items are dependent on it.
There are still two proposals [1] [2], and after the meeting last week
[3] it appeared we were close to conclusion on this. I now see a bunch
of comments on both proposals.

I'm going to again suggest we spend some time discussing this at the
Neutron meeting on Monday to come to a closure on this. I think we're
close. I'd like to ask Mark and Eugene to both look at the latest
comments, hopefully address them before the meeting, and then we can
move forward with this work for Juno.

Thanks for all the work by all involved on this feature! I think we're
close and I hope we can close on it Monday at the Neutron meeting!

Kyle

[1] https://review.openstack.org/#/c/90070/
[2] https://review.openstack.org/102723
[3] 
http://eavesdrop.openstack.org/meetings/networking_advanced_services/2014/networking_advanced_services.2014-06-27-17.30.log.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DVR and FWaaS integration

2014-07-02 Thread joehuang
Hello,

It's hard to integrate DVR and FWaaS. My proposal is to split FWaaS into 
two parts: one part is east-west FWaaS, which could be handled on the DVR 
side in a distributed manner. The other part is north-south FWaaS, which could 
be handled on the Network Node side, i.e. in a centralized manner. After the 
split, north-south FWaaS could be implemented in software or hardware, while 
east-west FWaaS is better implemented in software given its distributed 
nature.

Chaoyi Huang ( Joe Huang )
OpenStack Solution Architect
IT Product Line
Tel: 0086 755-28423202 Cell: 0086 158 118 117 96 Email: joehu...@huawei.com
Huawei Area B2-3-D018S Bantian, Longgang District,Shenzhen 518129, P.R.China

From: Yi Sun [mailto:beyo...@gmail.com]
Sent: July 3, 2014 4:42
To: OpenStack Development Mailing List (not for usage questions)
Cc: Kyle Mestery (kmestery); Rajeev; Gary Duan; Carl (OpenStack Neutron)
Subject: Re: [openstack-dev] DVR and FWaaS integration

All,
After talking to Carl and the FWaaS team, both sides suggested calling a meeting to 
discuss this topic in deeper detail. I heard that Swami is traveling this 
week, so I guess the earliest time we can have a meeting is sometime next week. 
I will be out of town on Monday, so any day after Monday should work for me. We 
can do IRC, a Google Hangout, GMT, or even a face to face.
For anyone interested, please propose your preferred time.
Thanks
Yi

On Sun, Jun 29, 2014 at 12:43 PM, Carl Baldwin 
mailto:c...@ecbaldwin.net>> wrote:

In line...

On Jun 25, 2014 2:02 PM, "Yi Sun" mailto:beyo...@gmail.com>> 
wrote:
>
> All,
> During last summit, we were talking about the integration issues between DVR 
> and FWaaS. After the summit, I had one IRC meeting with the DVR team. But after 
> that meeting I was tied up with my work and did not get time to continue 
> following up on the issue. To not slow down the discussion, I'm forwarding the 
> email that I sent out as the follow-up to the IRC meeting here, so that 
> whoever is interested in the topic can continue to discuss it.
>
> First some background about the issue:
> In the normal case, FW and router are running together inside the same box so 
> that the FW can get route and NAT information from the router component. And in 
> order for the FW to function correctly, the FW needs to see both directions 
> of the traffic.
> DVR is designed in an asymmetric way such that each DVR only sees one leg of the 
> traffic. If we build the FW on top of DVR, then FW functionality will be broken. 
> We need to find a good method to make the FW work with DVR.
>
> ---forwarding email---
>  During the IRC meeting, we thought that we could force the traffic to the FW 
> before DVR. Vivek had more detail; he thinks that since br-int knows 
> whether a packet is routed or switched, it is possible for br-int to 
> forward traffic to the FW before it forwards to DVR. The whole forwarding process 
> can be operated as part of a service-chain operation. And there could be a 
> FWaaS driver that understands the DVR configuration and sets up OVS flows on 
> br-int.

I'm not sure what this solution would look like.  I'll have to get the details 
from Vivek.  It seems like this would effectively centralize the traffic that 
we worked so hard to decentralize.

It did cause me to wonder about something:  would it be possible to regain 
symmetry in the traffic by directing any response traffic back to the DVR 
component which handled the request traffic?  I guess this would require 
running conntrack on the target side to track and identify return traffic.  I'm 
not sure how this would be inserted into the data path yet.  This is a 
half-baked idea here.

> The concern is that normally firewall and router are integrated together so 
> that the firewall can make the right decision based on the routing result. But what 
> we are suggesting is to split the firewall and router into two separate 
> components, hence there could be issues. For example, the FW will not be able to 
> get enough information to set up zones. Normally a zone contains a group of 
> interfaces that can be used in the firewall policy to enforce the direction 
> of the policy. If we forward traffic to the firewall before DVR, then we can only 
> create policy based on subnets, not the interface.
> Also, I'm not sure if we have ever planned to support SNAT on the DVR, but if 
> we do, then depending on at which point we forward traffic to the FW, the 
> subnet may not even work for us anymore (even DNAT could have a problem too).

I agree that splitting the firewall from routing presents some problems that 
may be difficult to overcome.  I don't know how it would be done while 
maintaining the benefits of DVR.

Another half-baked idea:  could multi-primary state replication be used between 
DVR components to enable firewall operation?  Maybe work on the HA router 
blueprint -- which is long overdue to be merged Btw -- could be leveraged.  The 
number of DVR "pieces" co

[openstack-dev] [barbican] Consumer Registration API

2014-07-02 Thread Douglas Mendizabal
I was looking through some Keystone docs and noticed that for version 3.0 of
their API [1] Keystone merged the Service and Admin API into a single core
API.  I haven’t gone digging through mail archives, but I imagine they had a
pretty good reason to do that.

Adam, I know you’ve already implemented quite a bit of this, and I hate to
ask this, but how do you feel about adding this to the regular API instead
of building out the Service API for Barbican?

[1] https://github.com/openstack/identity-api/blob/master/v3/src/markdown/identity-api-v3.md#whats-new-in-version-30


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Openstack and SQLAlchemy

2014-07-02 Thread Mike Bayer

I've just added a new section to this wiki, "MySQLdb + eventlet = sad",
summarizing some discussions I've had in the past couple of days about
the ongoing issue that MySQLdb and eventlet were not meant to be used
together.   This is a big one to solve as well (though I think it's
pretty easy to solve).

https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#MySQLdb_.2B_eventlet_.3D_sad
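
For anyone who has not hit it, the incompatibility boils down to MySQLdb being a C extension whose network I/O cannot be made cooperative by eventlet's monkey patching, so one long query blocks every greenthread in the process. A minimal illustration, assuming a local MySQL server is reachable with the placeholder credentials shown:

    import time

    import eventlet
    eventlet.monkey_patch()  # patches pure-Python sockets; MySQLdb's C-level I/O is untouched

    import MySQLdb

    def slow_query():
        conn = MySQLdb.connect(host='127.0.0.1', user='root', passwd='secret', db='test')
        cur = conn.cursor()
        cur.execute("SELECT SLEEP(5)")  # blocks the whole process, not just this greenthread

    def heartbeat():
        for _ in range(5):
            print("tick %s" % time.time())  # starved until the query returns
            eventlet.sleep(1)

    pool = eventlet.GreenPool()
    pool.spawn(slow_query)
    pool.spawn(heartbeat)
    pool.waitall()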



On 6/30/14, 12:56 PM, Mike Bayer wrote:
> Hi all -
>
> For those who don't know me, I'm Mike Bayer, creator/maintainer of
> SQLAlchemy, Alembic migrations and Dogpile caching.   In the past month
> I've become a full time Openstack developer working for Red Hat, given
> the task of carrying Openstack's database integration story forward.  
> To that extent I am focused on the oslo.db project which going forward
> will serve as the basis for database patterns used by other Openstack
> applications.
>
> I've summarized what I've learned from the community over the past month
> in a wiki entry at:
>
> https://wiki.openstack.org/wiki/Openstack_and_SQLAlchemy 
>
> The page also refers to an ORM performance proof of concept which you
> can see at https://github.com/zzzeek/nova_poc.
>
> The goal of this wiki page is to publish to the community what's come up
> for me so far, to get additional information and comments, and finally
> to help me narrow down the areas in which the community would most
> benefit by my contributions.
>
> I'd like to get a discussion going here, on the wiki, on IRC (where I am
> on freenode with the nickname zzzeek) with the goal of solidifying the
> blueprints, issues, and SQLAlchemy / Alembic features I'll be focusing
> on as well as recruiting contributors to help in all those areas.  I
> would welcome contributors on the SQLAlchemy / Alembic projects directly
> as well, as we have many areas that are directly applicable to Openstack.
>
> I'd like to thank Red Hat and the Openstack community for welcoming me
> on board and I'm looking forward to digging in more deeply in the coming
> months!
>
> - mike
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] Do we have IRC meeting this week?

2014-07-02 Thread Lu, Lianhao
Looks like many are in Paris midcycle meet-up. Do we have the weekly IRC 
meeting today?

-Lianhao

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] milestone-proposed is dead, long lives proposed/foo

2014-07-02 Thread Jeremy Stanley
On 2014-07-02 22:19:29 +0400 (+0400), Yuriy Taraday wrote:
[...]
> It looks like mirrors will have to bear having a number of dead branches in
> them - one for each release.

A release manager will delete proposed/juno when stable/juno is
branched from it, and branch deletions properly propagate to our
official mirrors (you may have to manually remove any local tracking
branches you've created, but that shouldn't be much of a concern).
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third party] - minimum response time for 3rd party CI responses

2014-07-02 Thread Michael Still
On Thu, Jul 3, 2014 at 4:33 AM, Luke Gorrie  wrote:
>
> On 30 June 2014 21:04, Kevin Benton  wrote:
>>
>> As a maintainer of a small CI system that tends to get backed up during
>> milestone rush hours, it would be nice if we were allowed up to 12 hours.
>> However, as a developer this seems like too long to have to wait for the
>> results of a patch.
>
> Interesting question!
>
> Taking one hundred steps back :-) what is the main purpose of the 3rd party
> reviews, and what are the practical consequences when they are not promptly
> available?

The main purpose is to let change reviewers know that a change might
be problematic for a piece of code not well tested by the gate -- that
might be a driver we don't have hardware for, but it might also simply
be a scenario that is hard to express in the gate (for example, the
nova schema update testing turbo hipster does). In a perfect universe,
change reviewers would use these votes to decide if they should
approve a change or not.

> Is the main purpose to allow 3rd parties to automatically object to changes
> that will cause them problems, and the practical consequence of a slow
> review being that OpenStack may merge code that will cause a problem for
> that third party?

This is also true, but I feel that out of tree code objecting is a
secondary use case and not as persuasive as the first.

> How do genuine negative reviews by 3rd party CIs play out in practice? (Do
> the change author and the 3rd party get together offline to work out the
> problem? Or does the change-author treat Gerrit as an edit-compile-run loop
> to fix the problem themselves?) I'd love to see links to such reviews, if
> anybody has some? (I've only seen positive reviews and false-negative
> reviews from 3rd party CIs so far in my limited experience.)

I have seen both. Normally there's a failure, reviewers notice, and
then the developer spins trying out fixes by uploading new patch sets.

> Generally it seems like 12 hours is the blink of an eye in terms of the
> whole lifecycle of a change, or alternatively an eternity in terms of
> somebody sitting around waiting to take action on the result.

12 hours is way too long... Mostly because a 12 hour delay means
you're not keeping up with the workload (unless the test actually runs
for 12 hours, which I find hard to imagine).

My rule of thumb is three hours by the way. I'd like to say something
like "not significantly slower than jenkins", but that's hard to
quantify.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova} NFV patches

2014-07-02 Thread Ian Wells
1. [NFV] is a tag.
2. This would appear to be a set of 'review me' mails to the mailing list,
which I believe is frowned upon.
3. garyk's stuff is only questionably [NFV], I would argue, though all
worthwhile patches. (That's a completely subjective judgement, so take it
as you will.)

Might be a better idea in future to join the NFV meeting (which happened,
as ever, 7am PST this morning) and raise it there, or in the #openstack-nfv
channel on freenode (although it's a bit quiet there, admittedly).
-- 
Ian.


On 2 July 2014 05:05, Luke Gorrie  wrote:

> On 2 July 2014 10:39, Gary Kotton  wrote:
>
>>  There are some patches that are relevant to the NFV support. There are
>> as follows:
>>
>
> Additionally, we who are building Deutsche Telekom's open source NFV
> implementation will be able to make that available to the whole community
> if the VIF_VHOSTUSER spec is approved for Juno. Then we could help others
> to deploy this too which would be great.
>
> Blueprint: https://blueprints.launchpad.net/nova/+spec/vif-vhostuser
>
> Cheers,
> -Luke
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Using tmux instead of screen in devstack

2014-07-02 Thread Mathieu Gagné

On 2014-07-02 2:10 PM, Yuriy Taraday wrote:

One problem that comes to mind is that screen tries to reopen your
terminal when you attach to existing session or run a new one.
So if you have one user you log in to a test server (ubuntu? root?) and
another user that runs screen session (stack), you won't be able to
attach to the session unless you do some dance with access rights (like
"chmod a+rw `tty`" or smth).



I run this command to fix this issue instead of messing with chmod:

  script /dev/null

--
Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Anyone using owner_is_tenant = False with image members?

2014-07-02 Thread Scott Devoid
Hi folks,

Background:

Among all services, I think glance is unique in only having a single
'owner' field for each image. Most other services include a 'user_id' and a
'tenant_id' for things that are scoped this way. Glance provides a way to
change this behavior by setting "owner_is_tenant" to false, which implies
that owner is user_id. This works great: new images are owned by the user
that created them.

Why do we want this?

We would like to make sure that the only person who can delete an image
(besides admins) is the person who uploaded said image. This achieves that
goal nicely. Images are private to the user, who may share them with other
users using the image-member API.

However, one problem is that we'd like to allow users to share with entire
projects / tenants. Additionally, we have a number of images (~400)
migrated over from a different OpenStack deployment, that are owned by the
tenant and we would like to make sure that users in that tenant can see
those images.

Solution?

I've implemented a small patch to the "is_image_visible" API call [1] which
checks the image.owner and image.members against context.owner and
context.tenant. This appears to work well, at least in my testing.
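
To give a flavour of the change (the real version works against the SQLAlchemy models in glance/db/sqlalchemy/api.py), a simplified sketch of the visibility check being described might look like this; it is an illustration, not the actual Glance code:

    def is_image_visible(context, image, image_members):
        # With owner_is_tenant=False, image['owner'] holds a user id, so both
        # the requesting user and the requesting tenant are checked against
        # the owner and the membership list.
        if context.is_admin:
            return True
        if image.get('is_public'):
            return True
        if image['owner'] is None:
            return True
        if image['owner'] in (context.owner, context.tenant):
            return True
        # Shared with this user directly, or with the user's whole tenant.
        return any(member in (context.owner, context.tenant)
                   for member in image_members)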

I am wondering if this is something folks would like to see integrated.
Also, for glance developers: is there a cleaner way to go about solving
this problem? [2]

~ Scott

[1]
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L209
[2] https://review.openstack.org/104377
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3] Team Meeting Thursday at 1500 UTC

2014-07-02 Thread Carl Baldwin
The Neutron L3 Subteam will meet tomorrow at the regular time in
#openstack-meeting-3.  The agenda [1] is posted, please update as
needed.

DVR is our priority.  I have had some encouraging success this week
deploying my own two-node devstack with distributed routers.  I would
like to discuss that as well as next steps.

I would like for the IPAM folks to come and discuss the status of that
work.  If you are interested in the neutron-ipam work please join the
meeting.  Given the new deadlines for posting and accepting blueprint
proposals we need to figure out where we need to make an effort to get
ours merged.  This also includes the BGP bp proposal.

We will not hold this meeting next week on the 10th.

Carl

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Agenda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Time to break backwards compatibility for *cloud-password file location?

2014-07-02 Thread James Polley
On Thu, Jul 3, 2014 at 5:33 AM, Sullivan, Jon Paul 
wrote:

> > -Original Message-
> > From: Giulio Fidente [mailto:gfide...@redhat.com]
> > Sent: 01 July 2014 13:08
> > Subject: Re: [openstack-dev] [TripleO] Time to break backwards
> > compatibility for *cloud-password file location?
> >
> > On 06/25/2014 11:25 AM, mar...@redhat.com wrote:
> > > On 25/06/14 10:52, James Polley wrote:
> > >> Until https://review.openstack.org/#/c/83250/, the setup-*-password
> > >> scripts used to drop password files into $CWD, which meant that if
> > >> you ran the script from a different location next time, your old
> > >> passwords wouldn't be found.
> > >>
> > >> https://review.openstack.org/#/c/83250/ changed this so that the
> > >> default behaviour is to put the password files in $TRIPLEO_ROOT; but
> > >> for backwards compatibility we left the script checking to see if
> > >> there's a file in the current directory, and using that file in
> > >> preference to $TRIPLEO_ROOT if it exists.
> > >>
> > >> However, this behaviour is still confusing to people. I'm not
> > >> entirely clear on why it's confusing (it makes perfect sense to
> > >> me...) but I imagine it's because we still have the problem that the
> > >> code works fine if run from one directory, but run from a different
> > directory it can't find passwords.
> > >>
> > >> There are two open patches which would break backwards compatibility
> > >> and only ever use the files in $TRIPLEO_ROOT:
> > >>
> > >> https://review.openstack.org/#/c/93981/
> > >> https://review.openstack.org/#/c/97657/
> > >>
> > >> The latter review is under more active development, and has
> > >> suggestions that the directory containing the password files should
> > >> be parameterised, defaulting to $TRIPLEO_ROOT. This would still break
> > >> for anyone who relies on the password files being in the directory
> > >> they run the script from, but at least there would be a fairly easy
> > fix for them.
> > >>
> > >
> > > How about we:
> > >
> > > * parameterize as suggested by Fabio in the review @
> > > https://review.openstack.org/#/c/97657/
>
> +1
>
> > >
> > > * move setting of this param to more visible location (setup, like
> > > devtest_variables or testenv). We can then give this better visibility
> > > in the dev/test autodocs with a warning about the 'old' behaviour
>
> +1
>
> > >
> > > * add a deprecation warning to the code that reads from
> > > $CWD/tripleo-overcloud-passwords to say that this will now need to be
> > > set as a parameter in ... wherever. How long is a good period for
> > this?
> >
> > +1
>
> +1
>
> Would it make sense to copy the passwords across such that the users
> behaviour is not changed were they to delete their old passwords file.  The
> deprecation warning would read that they can set  to point to the
> passwords file they are currently using, or delete  to pick
> up the new default location of  (which has defaulted to TRIPLEO_ROOT)
>

This sounds like something I was trying in the first 10 revisions of
https://review.openstack.org/#/c/83250. I ended up ditching it because it
seemed like the logic was getting too complex.

Eg, if someone has multiple sets of password files, this is fine when we
see the first set  - we just copy it to $TRIPLEO_ROOT and print the
deprecation warning.

But later on if we see a second file - do we clobber the one in
$TRIPLEO_ROOT? Do we skip the copy and just print the deprecation warning,
maybe with an addendum to point out that we've seen two different files? Do
we diff the one in $CWD and the one in $TRIPLEO_ROOT to check if they're
the same?

At the time I was working on 83250 I decided it was simplest to not make
any attempt to clean up, but maybe it's time to revisit that decision.

>
> actually, I have probably been the first to suggest that we should
> parametrize the path to the password files, so I want to add my
> motivations here
>
> the big win that I see here is that people may want to customize only
> some of the passwords, for example, the undercloud admin
>
> the script creating the password files is *already* capable of pushing
> in the file only new passwords, without regenerating passwords which
> could have been manually set in there already
>
> this basically implements the 'feature' I mentioned, except people just
> don't know it!
>
> so I'd like us to expose this as a feature, from the early stages as
> Marios suggests too, maybe from devtest_variables
> --
> Giulio Fidente
> GPG KEY: 08D733BA
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Thanks,
> Jon-Paul Sullivan ☺ Cloud Services - @hpcloud
>
> Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park,
> Galway.
> Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John
> Rogerson's Quay, Dublin 2.
> Registered Number: 361933
>
> The contents of this message and any atta

Re: [openstack-dev] [Neutron][LBaaS] Agenda for weekly IRC meeting

2014-07-02 Thread Brandon Logan
Some things I can think of for the agenda:

New API
- Are shim layers really needed for Juno?
  - If the old API and new API will coexist independently, why is a
shim layer needed?
- Has the caveat that the pools resource can exist
  independently in both APIs.  This can be accomplished by
  renaming new API pools to something different or by doing a
  shim for the pool resource.
- Should the agent refactoring be included in the main object
  model refactor?  The plugin might be able to just call the
  namespace driver directly, with some modification.
- Status of entities that only exist in the database and not in
  a backend (i.e. they are not linked to an existing load balancer)

On Wed, 2014-07-02 at 20:19 +, Jorge Miramontes wrote:
> Hey LBaaS folks!
> 
> 
> Please send me any agenda items you would like discussed tomorrow so I
> can organize the meeting. And as usual, please update the weekly
> standup etherpad. Everything should be organized on the main wiki page
> now ==> https://wiki.openstack.org/wiki/Neutron/LBaaS :)
> 
> 
> Cheers,
> --Jorge
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.vmware 0.4.0 released

2014-07-02 Thread Vui Chiap Lam


This is an announcement that oslo.vmware 0.4.0 has been released 
for the Juno cycle. It is available on PyPI and should show up 
in our mirror shortly. 

The changes since the previous release include: 

$ git log --abbrev-commit --pretty=oneline --no-merges 0.3..0.4.0 
df06729 Sync excutils from Oslo 
34ccbe0 Updated from global requirements 
8075690 Use assertIsNone 
633686a Bump hacking to 0.9.x series 
c01e1ec replace iterator.next() with next(iterator) 
64d855f remove definitions of Python Source Code Encoding 
7d71775 Setup for translation 
c4e0294 Updated from global requirements 
46f9661 cleaning up index.rst file 
7c3bcff Add networkFolder in the traversal spec 
8c56715 Ensure port support does not break backward compatibility 
caababc replace string format arguments with function parameters 
a76bd25 Support for IPv6 and Non-standard ports 
c4d2d95 Support 'InvalidPowerState' exception 
77f7f53 Don't translate debug level logs in oslo-vmware 
264aafa Updated from global requirements 
7993b03 Sync changes from Nova error_util.py 
d10f029 Updated from global requirements 
9b12617 Remove __del__ usage in oslo.vmware driver 
506a4e0 Add a test to oslo.vmware test_image_transfer 
228c6b5 import run_cross_tests.sh from incubator 
e641467 Fix vim25:InvalidRequest when no profiles exist 
e39196b VMware: treat cases when SOAP reply does not have a body 
49097c0 Add unittest method "test_download_flat_image" 
80d6bfb Add missing unit tests for VMwareAPISession 

Please report bugs at: 
https://bugs.launchpad.net/oslo.vmware
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Agenda for weekly IRC meeting

2014-07-02 Thread Eichberger, German
Hi,

Please take the time to fill out the weekly standup info: 
https://etherpad.openstack.org/p/neutron-lbaas-weekly-standup

Thanks,
German

From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Wednesday, July 02, 2014 1:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] Agenda for weekly IRC meeting

Hey LBaaS folks!

Please send me any agenda items you would like discussed tomorrow so I can 
organize the meeting. And as usual, please update the weekly standup etherpad. 
Everything should be organized on the main wiki page now ==> 
https://wiki.openstack.org/wiki/Neutron/LBaaS :)

Cheers,
--Jorge
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-02 Thread Zane Bitter

On 02/07/14 02:41, Mike Spreitzer wrote:

Zane Bitter  wrote on 07/01/2014 06:58:47 PM:

 > On 01/07/14 15:47, Mike Spreitzer wrote:
 > > In AWS, an autoscaling group includes health maintenance functionality
 > > --- both an ability to detect basic forms of failures and an ability to
 > > react properly to failures detected by itself or by a load balancer.
 > >   What is the thinking about how to get this functionality in
OpenStack?
 > >   Since OpenStack's OS::Heat::AutoScalingGroup has a more general
member
 > > type, what is the thinking about what failure detection means (and how
 > > it would be accomplished, communicated)?
 > >
 > > I have not found design discussion of this; have I missed something?
 >
 > Yes :)
 >
 > https://review.openstack.org/#/c/95907/
 >
 > The idea is that Convergence will provide health maintenance for _all_
 > forms of resources in Heat. Once this is implemented, autoscaling gets
 > it for free by virtue of that fact that it manages resources using Heat
 > stacks.

Ah, right.  My reading of that design is not quite so simple.  Note that
in the User Stories section it calls for different treatment of Compute
instances depending on whether they are in a scaling group.


I don't believe that is a correct reading.

- ZB

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] One more lifecycle plug point - in scaling groups

2014-07-02 Thread Zane Bitter

On 01/07/14 21:09, Mike Spreitzer wrote:

Zane Bitter  wrote on 07/01/2014 07:05:15 PM:

 > On 01/07/14 16:30, Mike Spreitzer wrote:
 > > Thinking about my favorite use case for lifecycle plug points for cloud
 > > providers (i.e., giving something a chance to make a holistic placement
 > > decision), it occurs to me that one more is needed: a scale-down plug
 > > point.  A plugin for this point has a distinctive job: to decide which
 > > group member(s) to remove from a scaling group (i.e.,
 > > OS::Heat::AutoScalingGroup or OS::Heat::InstanceGroup or
 > > OS::Heat::ResourceGroup or AWS::AutoScaling::AutoScalingGroup).  The
 > > plugin's signature could be something like this: given a list of group
 > > members and a number to remove, return the list of members to remove
 > > (or, equivalently, return the list of members to keep).  What do
you think?
 >
 > I think you're not thinking big enough ;)

I agree, I was taking only a small bite in hopes of a quick success.

 > There exist a whole class of applications that would benefit from
 > autoscaling but which are not quite stateless. (For example, a PaaS.) So
 > it's not enough to have plugins that place the choice of which node to
 > scale down under operator control; in fact it needs to be under
 > _application_ control.

Exactly.  There are two different roles that want such control; in
general, neither is happy if only the other gets it.  Now the question
becomes, how do we get them to play nice together?  In the case of
TripleO there may be an exceptionally easy out: the owner of an
application deployed on the undercloud may well be the same as the
provider of the undercloud (i.e., the operator whose end goal is to
provide the overcloud(s) ).


Let's assume that this feature takes the form of some additional data 
in the alarm trigger (i.e. the input to the scaling policy) that 
specifies which server(s) to delete first. The application would handle 
this by receiving the trigger from Ceilometer (or wherever) itself, and 
then inserting the additional data before passing it to Heat/autoscaling.
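
(To make that concrete: a purely illustrative sketch, not an agreed interface,
of what such an annotated trigger might look like as a Python dict; every
field name below is an assumption.)

    scale_down_signal = {
        'alarm_id': 'cpu_low',       # assumed alarm identifier
        'current': 'alarm',          # alarm state as delivered by Ceilometer
        # hint inserted by the application/scheduler about which members
        # should be removed first when scaling down
        'victims': ['instance-0003', 'instance-0007'],
    }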


That gives us three options for e.g. a holistic scheduler to insert 
hints as to which servers to delete:


(a) Insert them into the outgoing triggers from Ceilometer. The 
application has the choice to override.


(b) Let the user explicitly configure the flow of notifications. So it 
could be any of:

Ceilometer -> Heat
Ceilometer -> Scheduler -> Heat
Ceilometer -> Application -> Heat
Ceilometer -> Scheduler -> Application -> Heat

(c) Insert them into incoming triggers in Heat whenever the application 
has not specified them. This is basically your original proposal.


I'm guessing that, of those, (c) is probably the winner. But we'd need 
to have that debate.


Another possible implementation is to do it with a notification and 
reply, rather than including it in the existing datapath.



 > This is on the roadmap, and TripleO really needs it, so hopefully it
 > will happen in Juno.

I assume you mean giving this control to the application, which I
presume amounts to giving it to the template author.  Is this written up
somewhere?


I had a quick look, but it doesn't appear we have a blueprint for it 
yet, unless you count the notifications blueprint that Steve mentioned 
(but I don't think that addresses this case specifically).


cheers,
Zane.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] no meeting this week

2014-07-02 Thread Doug Hellmann
July 4 is a holiday here in the US, and I think most of the core team
and liaisons will be off. Our next meeting will be 11 July 2014.

Thanks,
Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [blazar] py26 and py27 jobs failing

2014-07-02 Thread Ryan Petrello
That's a pretty notable review to accommodate the pecan change.  In the
meantime, wouldn't something like this get the py26 and py27 tests
passing?

index 9aced3f..4051fad 100644
--- a/climate/tests/api/test_root.py
+++ b/climate/tests/api/test_root.py
@@ -22,8 +22,7 @@ class TestRoot(api.APITest):
 response = self.get_json('/',
  expect_errors=True,
  path_prefix='')
-self.assertEqual(response.status_int, 200)
-self.assertEqual(response.content_type, "text/html")
+self.assertEqual(response.status_int, 204)
 self.assertEqual(response.body, '')

On 07/02/14 09:08:35 PM, Fuente, Pablo A wrote:
> Blazar cores,
>   Please review https://review.openstack.org/99389. We need this merged
> ASAP in order to get a +1 from Jenkins in our py26 and py27 jobs. The new
> Pecan version returns a 204 instead of 200 in one case (when the API returns
> an empty dictionary) and one test case is failing for this reason. This
> patch solves the bug as a side effect, because it returns the versions
> instead of an empty dictionary.
> 
> Pablo.
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [blazar] py26 and py27 jobs failing

2014-07-02 Thread Fuente, Pablo A
Blazar cores,
Please review https://review.openstack.org/99389. We need this merged
ASAP in order to get a +1 from Jenkins in our py26 and py27 jobs. The new
Pecan version returns a 204 instead of 200 in one case (when the API returns
an empty dictionary) and one test case is failing for this reason. This
patch solves the bug as a side effect, because it returns the versions
instead of an empty dictionary.

Pablo.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DVR and FWaaS integration

2014-07-02 Thread Yi Sun
All,
After talk to Carl and FWaaS team , Both sides suggested to call a meeting
to discuss about this topic in deeper detail. I heard that Swami is
traveling this week. So I guess the earliest time we can have a meeting is
sometime next week. I will be out of town on monday, so any day after
Monday should work for me. We can do either IRC, google hang out, GMT or
even a face to face.
For anyone interested, please propose your preferred time.
Thanks
Yi


On Sun, Jun 29, 2014 at 12:43 PM, Carl Baldwin  wrote:

> In line...
>
> On Jun 25, 2014 2:02 PM, "Yi Sun"  wrote:
> >
> > All,
> > During last summit, we were talking about the integration issues between
> DVR and FWaaS. After the summit, I had one IRC meeting with DVR team. But
> after that meeting I was tight up with my work and did not get time to
> continue to follow up the issue. To not slow down the discussion, I'm
> forwarding out the email that I sent out as the follow up to the IRC
> meeting here, so that whoever may be interested on the topic can continue
> to discuss about it.
> >
> > First some background about the issue:
> > In the normal case, FW and router are running together inside the same
> box so that FW can get route and NAT information from the router component.
> And in order to have FW to function correctly, FW needs to see the both
> directions of the traffic.
> > DVR is designed in an asymmetric way that each DVR only sees one leg of
> the traffic. If we build FW on top of DVR, then FW functionality will be
> broken. We need to find a good method to have FW to work with DVR.
> >
> > ---forwarding email---
> >  During the IRC meeting, we think that we could force the traffic to the
> FW before DVR. Vivek had more detail; He thinks that since the br-int
> knowns whether a packet is routed or switched, it is possible for the
> br-int to forward traffic to FW before it forwards to DVR. The whole
> forwarding process can be operated as part of service-chain operation. And
> there could be a FWaaS driver that understands the DVR configuration to
> setup OVS flows on the br-int.
>
> I'm not sure what this solution would look like.  I'll have to get the
> details from Vivek.  It seems like this would effectively centralize the
> traffic that we worked so hard to decentralize.
>
> It did cause me to wonder about something:  would it be possible to restore
> the symmetry to the traffic by directing any response traffic back to the
> DVR component which handled the request traffic?  I guess this would
> require running conntrack on the target side to track and identify return
> traffic.  I'm not sure how this would be inserted into the data path yet.
> This is a half-baked idea here.
>
> > The concern is that normally firewall and router are integrated together
> so that firewall can make right decision based on the routing result. But
> what we are suggesting is to split the firewall and router into two
> separated components, hence there could be issues. For example, FW will not
> be able to get enough information to setup zone. Normally Zone contains a
> group of interfaces that can be used in the firewall policy to enforce the
> direction of the policy. If we forward traffic to firewall before DVR, then
> we can only create policy based on subnets not the interface.
> > Also, I’m not sure if we have ever planned to support SNAT on the DVR,
> but if we do, then it depends on at which point we forward traffic to the
> FW, the subnet may not even work for us anymore (even DNAT could have
> problem too).
>
> I agree that splitting the firewall from routing presents some problems
> that may be difficult to overcome.  I don't know how it would be done while
> maintaining the benefits of DVR.
>
> Another half-baked idea:  could multi-primary state replication be used
> between DVR components to enable firewall operation?  Maybe work on the HA
> router blueprint -- which is long overdue to be merged, btw -- could be
> leveraged.  The number of DVR "pieces" could easily far exceed that of
> active firewall components normally used in such a configuration so there
> could be a major scaling problem.  I'm really just thinking out loud here.
>
> Maybe you (or others) have other ideas?
>
> > Another thing that I may have to get detail is that how we handle the
> overlap subnet, it seems that the new namespaces are required.
>
> Can you elaborate here?
>
> Carl
>
> >
> > --- end of forwarding 
> >
> > YI
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Android-x86
http://www.android-x86.org
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [Neutron][LBaaS] Agenda for weekly IRC meeting

2014-07-02 Thread Jorge Miramontes
Hey LBaaS folks!

Please send me any agenda items you would like discussed tomorrow so I can 
organize the meeting. And as usual, please update the weekly standup etherpad. 
Everything should be organized on the main wiki page now ==> 
https://wiki.openstack.org/wiki/Neutron/LBaaS :)

Cheers,
--Jorge
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Nomination for python-jenkins

2014-07-02 Thread Antoine Musso
On 23/06/2014 10:42, Antoine Musso wrote:
> Hello,
> 
> The python-jenkins module is a thin wrapper to interact with Jenkins. It
> was migrated from Launchpad to Stackforge a couple of months ago to
> attract more developers and easily upstream work being done in other
> OpenStack projects (such as NodePool or Jenkins Job Builder).
> 
> I would like to propose Marc Abramowitz as a core reviewer to the
> python-jenkins module.  He wrote a test suite with 100% coverage and
> added support for python 3.
> 
> https://review.openstack.org/#/q/reviewer:%22Marc+Abramowitz%22+project:stackforge/python-jenkins,n,z
> 
> Please endorse/reject the nomination by replying to this thread.  I will
> find out a way to get him added if consensus is reached.

Hello,

After a week with no objection and after pinging James E. Blair and
Clark Boylan, I have added Marc Abramowitz as a core reviewer of
python-jenkins.

Congratulations!

-- 
Antoine "hashar" Musso

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Analyzing the critical path

2014-07-02 Thread Doug Wiegley
Hi Sam,

> Anything else?

I assume you mean for Juno.  Under “In addition”:

  *   Contact current driver owners to update to the new driver interface (some 
things will break with the shim, e.g. when the drivers are reaching around the 
plugin to the neutron db.)  Also let them know about the upcoming L7/TLS 
interfaces.
  *   New A10 driver being submitted

Thanks,
doug

From: Samuel Bercovici <samu...@radware.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Wednesday, July 2, 2014 at 9:29 AM
To: "OpenStack Development Mailing List" <openstack-dev@lists.openstack.org>, 
"Brandon Logan" <brandon.lo...@rackspace.com>
Subject: [openstack-dev] [Neutron][LBaaS] Analyzing the critical path

To reiterate the Juno release plan from: 
https://wiki.openstack.org/wiki/Juno_Release_Schedule
Feature freeze is at: 21st August.

I am listing tasks which we should consider to be done for Juno and who should 
handle them.

The following might be considered as critical path to get anything for Juno:

1.   LBaaS Plugin, API, DB – when is a code commit expected?

2.   CLI

3.   LBaaS Driver Interfaces – done - 
https://review.openstack.org/#/c/100690/

4.   Connecting the Plugin calls to the driver – I have not seen any 
reference to this. I think we should use the “provider” capabilities until 
“flavor” gets implemented. Is this addressed by item 1 above or does it 
required an additional task?

5.   HA proxy reference implementation – when is a code commit expected?

6.   Tempest Tests

Additional “Core” features

1.   Horizon UI

2.   Quota update/fix

3.   API Compatibility

a.   Connecting the “old” API Plugin calls to “old/new” drivers. Is this 
still planned?

4.   Driver Compatibility

a.   Connecting the Plugin calls to “old” drivers. Is this still planned?

In addition/parallel

1.   TLS  –

a.   BP is approved.

b.  WIP code was committed and waiting for the code of the basic API/model 
to be available for start of review.

c.   HA Proxy reference implementation

d.  CLI

e.  Horizon Support

f.  Tempest Tests

2.   L7 context switching

a.   BP in review

b.  WIP code in progress and waiting for the code of the basic API/model to 
be available for initial commit

c.   HA Proxy reference implementation

d.  CLI

e.  Horizon Support

f.  Tempest Tests

Anything else?


Regards,
-Sam.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third party] - minimum response time for 3rd party CI responses

2014-07-02 Thread Luke Gorrie
On 2 July 2014 20:33, Luke Gorrie  wrote:

> I'd love to see links to such reviews, if anybody has some? (I've only
> seen positive reviews and false-negative reviews from 3rd party CIs so far
> in my limited experience.)
>

I didn't say what I meant: reviews where a 3rd party CI has genuinely
objected to a change that was acceptable to all the other CIs i.e. a change
that created a problem specifically for that third party.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Time to break backwards compatibility for *cloud-password file location?

2014-07-02 Thread Sullivan, Jon Paul
> -Original Message-
> From: Giulio Fidente [mailto:gfide...@redhat.com]
> Sent: 01 July 2014 13:08
> Subject: Re: [openstack-dev] [TripleO] Time to break backwards
> compatibility for *cloud-password file location?
> 
> On 06/25/2014 11:25 AM, mar...@redhat.com wrote:
> > On 25/06/14 10:52, James Polley wrote:
> >> Until https://review.openstack.org/#/c/83250/, the setup-*-password
> >> scripts used to drop password files into $CWD, which meant that if
> >> you ran the script from a different location next time, your old
> >> passwords wouldn't be found.
> >>
> >> https://review.openstack.org/#/c/83250/ changed this so that the
> >> default behaviour is to put the password files in $TRIPLEO_ROOT; but
> >> for backwards compatibility we left the script checking to see if
> >> there's a file in the current directory, and using that file in
> >> preference to $TRIPLEO_ROOT if it exists.
> >>
> >> However, this behaviour is still confusing to people. I'm not
> >> entirely clear on why it's confusing (it makes perfect sense to
> >> me...) but I imagine it's because we still have the problem that the
> >> code works fine if run from one directory, but run from a different
> directory it can't find passwords.
> >>
> >> There are two open patches which would break backwards compatibility
> >> and only ever use the files in $TRIPLEO_ROOT:
> >>
> >> https://review.openstack.org/#/c/93981/
> >> https://review.openstack.org/#/c/97657/
> >>
> >> The latter review is under more active development, and has
> >> suggestions that the directory containing the password files should
> >> be parameterised, defaulting to $TRIPLEO_ROOT. This would still break
> >> for anyone who relies on the password files being in the directory
> >> they run the script from, but at least there would be a fairly easy
> fix for them.
> >>
> >
> > How about we:
> >
> > * parameterize as suggested by Fabio in the review @
> > https://review.openstack.org/#/c/97657/

+1

> >
> > * move setting of this param to more visible location (setup, like
> > devtest_variables or testenv). We can then give this better visibility
> > in the dev/test autodocs with a warning about the 'old' behaviour

+1

> >
> > * add a deprecation warning to the code that reads from
> > $CWD/tripleo-overcloud-passwords to say that this will now need to be
> > set as a parameter in ... wherever. How long is a good period for
> this?
> 
> +1

+1

Would it make sense to copy the passwords across such that the user's behaviour 
is not changed were they to delete their old passwords file?  The deprecation 
warning would read that they can set  to point to the passwords file they 
are currently using, or delete  to pick up the new default 
location of  (which has defaulted to TRIPLEO_ROOT).

> 
> actually, I have probably been the first to suggest that we should
> parametrize the path to the password files, so I want to add my
> motivations here
> 
> the big win that I see here is that people may want to customize only
> some of the passwords, for example, the undercloud admin
> 
> the script creating the password files is *already* capable of pushing
> only new passwords into the file, without regenerating passwords which
> may already have been set there manually
> 
> this basically implements the 'feature' I mentioned, except people just
> don't know it!
> 
> so I'd like us to expose this as a feature, from the early stages as
> Marios suggests too, maybe from devtest_variables
> --
> Giulio Fidente
> GPG KEY: 08D733BA
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Thanks, 
Jon-Paul Sullivan ☺ Cloud Services - @hpcloud

Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park, Galway.
Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John Rogerson's 
Quay, Dublin 2. 
Registered Number: 361933
 
The contents of this message and any attachments to it are confidential and may 
be legally privileged. If you have received this message in error you should 
delete it from your system immediately and advise the sender.

To any recipient of this message within HP, unless otherwise stated, you should 
consider this message and attachments as "HP CONFIDENTIAL".
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.i18n 0.1.0 released

2014-07-02 Thread Doug Hellmann
The Oslo team is pleased to announce the first release of oslo.i18n,
the library that replaces the gettextutils module from oslo-incubator.

The new library has been uploaded to PyPI, and there is a changeset in
the queue to update the global requirements list and our package mirror:
https://review.openstack.org/104304

Documentation for the library is available on our developer docs site:
http://docs.openstack.org/developer/oslo.i18n/

The spec for the graduation blueprint includes some advice for
migrating to the new library:
http://git.openstack.org/cgit/openstack/oslo-specs/tree/specs/juno/graduate-oslo-i18n.rst
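
For those starting the migration, a minimal sketch of the per-project
integration module described in that guidance might look like the following;
the module and domain names are illustrative, not prescribed:

    # myproject/_i18n.py (illustrative)
    from oslo import i18n

    _translators = i18n.TranslatorFactory(domain='myproject')

    # Primary translation function for user-facing messages.
    _ = _translators.primary

    # Translators for log messages at specific levels.
    _LI = _translators.log_info
    _LW = _translators.log_warning
    _LE = _translators.log_error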

Please report bugs using the Oslo bug tracker in launchpad:
http://bugs.launchpad.net/oslo

Thanks to everyone who helped with reviews and patches to make this
release possible!

Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][nova] Please run 'check experimental' on changes to nova v3 tests in tempest

2014-07-02 Thread David Kranz
Due to the status of nova v3, to save time, running the tempest v3 tests 
has been moved out of the gate/check jobs to the experimental queue. So 
please run 'check experimental' on v3-related patches.


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Two questions about 'backup' API

2014-07-02 Thread Vishvananda Ishaya

On Jun 26, 2014, at 6:58 PM, wu jiang  wrote:

> Hi Vish, thanks for your reply.
> 
> About Q1, I mean that Nova doesn't do any extra processing for 
> 'daily'/'weekly' compared to other backup_types like '123'/'test'.
> The 'daily' & 'weekly' don't have a unique place in the API compared to 
> anything else.
> 
> But we gave them as examples in code comments especially in novaclient.
> 
> A few users asked me why their instances were not backed up automatically; 
> they thought we have a timed task to do this if the 'backup_type' equals 
> 'daily'/'weekly', because we prompt them to use it. 
> Therefore, it's useless and likely to cause confusion for this API, IMO. No 
> need to show them in code comments & novaclient.
> 
> P.S. So maybe 'backup_name'/'backup_tag' is a better name, but we can't 
> modify the API for compatibility..

Yes the name is confusing.

Vish

> 
> 
> Thanks.
> 
> WingWJ
> 
> 
> On Fri, Jun 27, 2014 at 5:20 AM, Vishvananda Ishaya  
> wrote:
> On Jun 26, 2014, at 5:07 AM, wu jiang  wrote:
> 
> > Hi all,
> >
> > I tested the 'backup' API recently and got two questions about it:
> >
> > 1. Why 'daily' & 'weekly' appear in code comments & novaclient about 
> > 'backup_type' parameter?
> >
> > The 'backup_type' parameter is only a tag for this backup(image).
> > And there isn't corresponding validation for 'backup_type' about these two 
> > types.
> >
> > Moreover, there is also no periodic_task for 'backup' in compute host.
> > (It's fair to leave the choice to other third-parts system)
> >
> > So, why we leave 'daily | weekly' example in code comments and novaclient?
> > IMO it may lead confusion that Nova will do more actions for 'daily|weekly' 
> > backup request.
> 
> The tag affects the cleanup of old copies, so if you do a tag of ‘weekly’ and
> the rotation is 3, it will ensure you only have 3 copies that are tagged 
> weekly.
> You could also have 3 copies of the daily tag as well.
> 
> >
> > 2. Is it necessary to backup instance when 'rotation' is equal to 0?
> >
> > Let's look at related codes in nova/compute/manager.py:
> > # def backup_instance(self, context, image_id, instance, backup_type, 
> > rotation):
> > #
> > #self._do_snapshot_instance(context, image_id, instance, rotation)
> > #self._rotate_backups(context, instance, backup_type, rotation)
> >
> > I knew Nova will delete all backup images according the 'backup_type' 
> > parameter when 'rotation' equals to 0.
> >
> > But according the logic above, Nova will generate one new backup in 
> > _do_snapshot_instance(), and delete it in _rotate_backups()..
> >
> > It's weird to snapshot a useless backup firstly IMO.
> > We need to add one new branch here: if 'rotation' is equal to 0, no need to 
> > backup, just rotate it.
> 
> That makes sense I suppose.
> 
> Vish
> 
> >
> >
> > So, what's your opinions? Look forward to your suggestion.
> > Thanks.
> >
> > WingWJ
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
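
For reference, a minimal sketch (not actual Nova code) of the guard proposed
in the quoted thread, mirroring the pseudo-code quoted from
nova/compute/manager.py:

    def backup_instance(self, context, image_id, instance, backup_type,
                        rotation):
        # Skip the snapshot entirely when rotation is 0; only rotate away
        # the existing backups for this backup_type.
        if rotation > 0:
            self._do_snapshot_instance(context, image_id, instance, rotation)
        self._rotate_backups(context, instance, backup_type, rotation)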
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third party] - minimum response time for 3rd party CI responses

2014-07-02 Thread Luke Gorrie
On 30 June 2014 21:04, Kevin Benton  wrote:

> As a maintainer of a small CI system that tends to get backed up during
> milestone rush hours, it would be nice if we were allowed up to 12 hours.
> However, as a developer this seems like too long to have to wait for the
> results of a patch.
>
Interesting question!

Taking one hundred steps back :-) what is the main purpose of the 3rd party
reviews, and what are the practical consequences when they are not promptly
available?

Is the main purpose to allow 3rd parties to automatically object to changes
that will cause them problems, and the practical consequence of a slow
review being that OpenStack may merge code that will cause a problem for
that third party?

How do genuine negative reviews by 3rd party CIs play out in practice? (Do
the change author and the 3rd party get together offline to work out the
problem? Or does the change-author treat Gerrit as an edit-compile-run loop
to fix the problem themselves?) I'd love to see links to such reviews, if
anybody has some? (I've only seen positive reviews and false-negative
reviews from 3rd party CIs so far in my limited experience.)

Generally it seems like 12 hours is the blink of an eye in terms of the
whole lifecycle of a change, or alternatively an eternity in terms of
somebody sitting around waiting to take action on the result.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up Doc? July 2, 2014

2014-07-02 Thread Anne Gentle
The doc team met this week for the APAC time zone.
Minutes:
http://eavesdrop.openstack.org/meetings/docteam/2014/docteam.2014-07-02-03.06.html
Log:
http://eavesdrop.openstack.org/meetings/docteam/2014/docteam.2014-07-02-03.06.log.html

__In review and merged this past week__

The security guide has been removed from the openstack-manuals repo and
moved to the security-doc repo.

The docs-specs repository is now active at
https://github.com/openstack/docs-specs. I'd like to see these three specs
written for Juno for docs:

* https://blueprints.launchpad.net/openstack-manuals/+spec/python-client-docs-for-app-devs
I requested more discussion on exactly what work should be done. Tom, do you
have time to write a spec? If not you, who is interested?

* https://blueprints.launchpad.net/openstack-manuals/+spec/redesign-docs-site
I plan to write a spec for this once we get a web design plan laid out in more detail.

* https://blueprints.launchpad.net/openstack-manuals/+spec/heat-templates
We've discussed a lot on the ML but it doesn't have a linked wiki page and could
use further description and decisions. Gauvain, could you write a spec
based on what we've discussed so far?

* https://blueprints.launchpad.net/openstack-manuals/+spec/create-networking-guide
We've got a lot of interest in this new book -- Lana's talking about a doc swarm
around it, and Phil Hopkins and Matt Kassawara know this area. Matt or
Phil, up for a doc spec for this one?

Training team, do you have "must have" specs to write?

I think the review team for docs-specs can be as small as:
Anne Gentle
Sean Roberts
Andreas Jaeger
Bryan Payne
or as large as docs-core. Projects are going either way with it. Our setup
is a separate team, but we can populate as we need.

Any input on docs-specs? Please let us know.

__High priority doc work__

I've been fielding questions about good areas to work in the docs. I'd like
these to be high priority:
- Networking in admin guide and the separate guide that needs to have a
spec written
- Identity, especially v2 vs. v3 in the admin and ops docs
- Orchestration templates: reference inclusion and how-to information
integrated in admin user guide

__Ongoing doc work__

We're reconsidering the priorities for web design work -- whether
docs.openstack.org or developer.openstack.org needs the overhaul. So stay
tuned as we work with the Foundation on the pre-work and planning.

__New incoming doc requests__

None this week.

__Doc tools updates__

Once this patch to oslosphinx merges, projects can indicate that they're
incubating with a change to their sphinx conf.py, setting html_theme_options
equal to {'incubating': True} (see the conf.py sketch after the project list below).
https://review.openstack.org/#/c/103935/
Thanks to Graham Hayes for that! The projects that need to set it when it's
available are:
-Designate
-Ironic
-Sahara
-Barbican
-Marconi
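
A minimal sketch of the relevant conf.py fragment follows; only the
html_theme_options line comes from the patch above, the surrounding theme
setup is an assumption and may differ per project:

    # Sphinx conf.py fragment (illustrative)
    extensions = [
        'oslosphinx',  # assumed way of enabling the oslosphinx theme
    ]

    # Mark the project as incubating once the linked patch has merged.
    html_theme_options = {'incubating': True}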

The latest version of openstack-doc-tools is 1.16.1 and has the latest
RelaxNG schema included.

__Other doc news__

 Still working on the naming for glance-provided services. Thierry noted
today that the mission can evolve without the program name changing yet in
https://review.openstack.org/#/c/98002/. So we hope to keep looking for the
right wording as the evolution happens.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] milestone-proposed is dead, long lives proposed/foo

2014-07-02 Thread Yuriy Taraday
Thanks for clarification.

On Wed, Jul 2, 2014 at 6:59 PM, Jeremy Stanley  wrote:

> On 2014-07-02 16:14:52 +0400 (+0400), Yuriy Taraday wrote:
> > Why do we need these short-lived 'proposed' branches in any form?
> > Why can't we just use release branches for this and treat them as
> > stable when appropriate tag is added to some commit in them?
>
> The primary reasons are:
>
> 1. People interpret "stable/juno" as an indication that it is a
> stable released branch, so "proposed/juno" makes it a little more
> obvious to those people that it isn't yet.
>

That could be dealt with by naming them "release/juno" instead, I think.

> 2. Current process delegates pre-release change approval to a
> different group of reviewers than post-release change approval, and
> the easiest way to enforce this is through Gerrit ACL matches on
> different git ref patterns for their respective target branches.
>

But this one is rather hard to overcome without a temporary branch or
constant ACL changes.

It looks like mirrors will have to bear having a number of dead branches in
them - one for each release.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday July 3rd at 17:00 UTC

2014-07-02 Thread Matthew Treinish
Hi Everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
this Thursday, July 3rd at 17:00 UTC in the #openstack-meeting channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 17:00 UTC is in other timezones tomorrow's
meeting will be at:

13:00 EDT
02:00 JST
02:30 ACST
19:00 CEST
12:00 CDT
10:00 PDT

-Matt Treinish


pgp_7axhVvr7l.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Using tmux instead of screen in devstack

2014-07-02 Thread Yuriy Taraday
One problem that comes to mind is that screen tries to reopen your terminal
when you attach to an existing session or run a new one.
So if you log in to a test server as one user (ubuntu? root?) and the screen
session runs as another user (stack), you won't be able to attach to the
session unless you do some dance with access rights (like "chmod a+rw `tty`"
or something).

Another downside of screen is its not-so-good defaults. For example, the
invisible status bar makes it harder for newcomers to realize that they are
already in screen and don't need to reattach to see logs (I had such issues
with almost every intern I was mentoring).

Overall, tmux gives me the impression of being more robust than screen. This
and all the other things that make it better than screen (the license, for
example ;) or free nesting) don't have much impact on Devstack though.

On Wed, Jul 2, 2014 at 6:55 PM, Anita Kuno  wrote:

> Actually as a tmux user I really like that devstack uses screen, that
> way when I have screen running in tmux, I can decide (since they have
> different default key combinations, Control+A for screen and Control+B
> for tmux) which utility I am talking to, screen or tmux.
>

I think if we ever make an effort to switch to tmux, we can use ctrl-A as
prefix key so that people won't need to rewire their finger reflexes.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Infra] Mid-Cycle Meet-up Registration Closed

2014-07-02 Thread Matthew Treinish

On Mon, Jun 09, 2014 at 02:08:58PM -0400, Matthew Treinish wrote:
> On Thu, May 29, 2014 at 12:07:07PM -0400, Matthew Treinish wrote:
> > 
> > Hi Everyone,
> > 
> > So we'd like to announce to everyone that we're going to be doing a combined
> > Infra and QA program mid-cycle meet-up. It will be the week of July 14th in
> > Darmstadt, Germany at Deutsche Telekom who has graciously offered to 
> > sponsor the
> > event. The plan is to use the week as both a time for face to face 
> > collaboration
> > for both programs respectively as well as having a couple days of 
> > bootstrapping
> > for new users/contributors. The intent was that this would be useful for 
> > people
> > who are interested in contributing to either Infra or QA, and those who are
> > running third party CI systems.
> > 
> > The current break down for the week that we're looking at is:
> > 
> > July 14th: Infra
> > July 15th: Infra
> > July 16th: Bootstrapping for new users
> > July 17th: More bootstrapping
> > July 18th: QA
> > 
> > We still have to work out more details, and will follow up once we have 
> > them.
> > But, we thought it would be better to announce the event earlier so people 
> > can
> > start to plan travel if they need it.
> > 
> > 
> > Thanks,
> > 
> > Matt Treinish
> > Jim Blair
> 
> 
> Just a quick follow-up, the agenda has changed slightly based on room
> availability since I first sent out the announcement. You can find up-to-date
> information on the meet-up wiki page:
> 
> https://wiki.openstack.org/wiki/Qa_Infra_Meetup_2014
> 
> Once we work out a detailed agenda of discussion topics/work items for the 3
> discussion days I'll update the wiki page.
> 
> Also, if you're intending to attend please put your name on the wiki page's
> registration section.
> 
> Thanks,
> 
> Matt Treinish

Hi Everyone,

Just a quick update, we have to close registration for the Infra/QA mid-cycle
meet-up. Based on the number of people who have signed up on the wiki page [1]
we are basically at the maximum capacity for the rooms we reserved. So if you
had intended to come but didn't sign up on the wiki unfortunately there isn't
any space left.

Thanks,

Matt Treinish

[1] https://wiki.openstack.org/wiki/Qa_Infra_Meetup_2014#Registration


pgpxJJzGjqWAl.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] guestagent config for overriding managers

2014-07-02 Thread Tim Simpson
I’m a fan of the latter suggestion.
Thanks,

Tim

From: Craig Vyvial [mailto:cp16...@gmail.com]
Sent: Tuesday, July 01, 2014 11:35 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [trove] guestagent config for overriding managers

If you want to override the trove guestagent managers, it looks really nasty to 
have EVERY manager on a single line here.

datastore_registry_ext = 
mysql:my.guestagent.datastore.mysql.manager.Manager,percona:my.guestagent.datastore.mysql.manager.Manager,...

This needs to be tidied up and split out some way.
Ideally each of these should be on its own line.

datastore_registry_ext = mysql:my.guestagent.datastore.mysql.manager.Manager
datastore_registry_ext = percona:my.guestagent.datastore.mysql.manager.Manager

or maybe...

datastores = mysql,precona
[mysql]
manager = my.guestagent.datastore.mysql.manager.Manager
[percona]
manager = my.guestagent.datastore.percona.manager.Manager

After typing out the second idea, I don't like it as much as something like the 
first way.
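
For what it's worth, a minimal sketch -- not actual Trove code -- of how the
"one entry per line" style could be supported with oslo.config's MultiStrOpt,
which collects every occurrence of an option into a list (option and function
names below are illustrative assumptions):

    from oslo.config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.MultiStrOpt('datastore_registry_ext', default=[],
                        help='datastore:manager pairs, one per line.'),
    ])

    def build_registry(conf):
        # Turn the repeated 'datastore:manager' entries into a dict.
        registry = {}
        for entry in conf.datastore_registry_ext:
            datastore, manager = entry.split(':', 1)
            registry[datastore.strip()] = manager.strip()
        return registry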

Thoughts?

Thanks,
- Craig Vyvial
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-02 Thread Clint Byrum
Excerpts from Qiming Teng's message of 2014-07-02 00:02:14 -0700:
> Just some random thoughts below ...
> 
> On Tue, Jul 01, 2014 at 03:47:03PM -0400, Mike Spreitzer wrote:
> > In AWS, an autoscaling group includes health maintenance functionality --- 
> > both an ability to detect basic forms of failures and an ability to react 
> > properly to failures detected by itself or by a load balancer.  What is 
> > the thinking about how to get this functionality in OpenStack?  Since 
> 
> We are prototyping a solution to this problem at IBM Research - China
> lab.  The idea is to leverage oslo.messaging and ceilometer events for
> instance (possibly other resource such as port, securitygroup ...)
> failure detection and handling.
> 

Hm.. perhaps you should be contributing some reviews here as you may
have some real insight:

https://review.openstack.org/#/c/100012/

This sounds a lot like what we're working on for continuous convergence.

> > OpenStack's OS::Heat::AutoScalingGroup has a more general member type, 
> > what is the thinking about what failure detection means (and how it would 
> > be accomplished, communicated)?
> 
> When most OpenStack services are making use of oslo.notify, in theory, a
> service should be able to send/receive events related to resource
> status.  In our current prototype, at least host failure (detected in
> Nova and reported with a patch), VM failure (detected by nova), and some
> lifecycle events of other resources can be detected and then collected
> by Ceilometer.  There is certainly a possibility to listen to the
> message queue directly from Heat, but we only implemented the Ceilometer
> centric approach.
> 
> > 
> > I have not found design discussion of this; have I missed something?
> > 
> > I suppose the natural answer for OpenStack would be centered around 
> > webhooks.  An OpenStack scaling group (OS SG = OS::Heat::AutoScalingGroup 
> > or AWS::AutoScaling::AutoScalingGroup or OS::Heat::ResourceGroup or 
> > OS::Heat::InstanceGroup) could generate a webhook per member, with the 
> > meaning of the webhook being that the member has been detected as dead and 
> > should be deleted and removed from the group --- and a replacement member 
> > created if needed to respect the group's minimum size.  
> 
> Well, I would suggest we generalize this into a event messaging or
> signaling solution, instead of just 'webhooks'.  The reason is that
> webhooks as it is implemented today is not carrying a payload of useful
> information -- I'm referring to the alarms in Ceilometer.
> 
> There are other cases as well.  A member failure could be caused by a 
> temporary communication problem, which means it may show up quickly when
> a replacement member is already being created.  It may mean that we have
> to respond to an 'online' event in addition to an 'offline' event?
> 

The ideas behind convergence help a lot here. Skew happens in distributed
systems, so we expect it constantly. In the extra-capacity situation
above, we would just deal with it by scaling back down. There are also
situations where we might accidentally create two physical resources
because we got a 500 from the API even though the resource was actually
created. This is the same problem, and has the same answer: pick
one and scale down (and if this is a critical server like a database,
we'll need lifecycle callbacks that will prevent suddenly killing a node
that would cost you uptime or recovery time).

> > When the member is 
> > a Compute instance and Ceilometer exists, the OS SG could define a 
> > Ceilometer alarm for each member (by including these alarms in the 
> > template generated for the nested stack that is the SG), programmed to hit 
> > the member's deletion webhook when death is detected (I imagine there are 
> > a few ways to write a Ceilometer condition that detects instance death). 
> 
> Yes.  Compute instance failure can be detected with a Ceilometer plugin.
> In our prototype, we developed a Dispatcher plugin that can handle
> events like 'compute.instance.delete.end', 'compute.instance.create.end'
> after they have been processed based on a event_definitions.yaml file.
> There could be other ways, I think.
> 
> The problem here today is about the recovery of SG member.  If it is a
> compute instance, we can 'reboot', 'rebuild', 'evacuate', 'migrate' it,
> just to name a few options.  The most brutal way to do this is like what
> HARestarter is doing today -- delete followed by a create.
> 

Right, so lifecycle callbacks are useful here, as we can expose an
interface for delaying and even cancelling a lifecycle event.
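
As a concrete illustration of the event-driven detection approach described
in the quoted prototype, here is a minimal sketch of listening for compute
lifecycle notifications with oslo.messaging. This is not the prototype's
code; the topic, executor and handling logic are all assumptions:

    from oslo import messaging
    from oslo.config import cfg

    class InstanceEventEndpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata=None):
            # React only to instance deletion notifications from Nova.
            if event_type == 'compute.instance.delete.end':
                instance_id = payload.get('instance_id')
                # A health-maintenance service could check here whether the
                # instance belonged to a scaling group and trigger recovery.
                print('instance %s deleted' % instance_id)

    transport = messaging.get_transport(cfg.CONF)
    targets = [messaging.Target(topic='notifications')]
    listener = messaging.get_notification_listener(
        transport, targets, [InstanceEventEndpoint()])
    listener.start()
    listener.wait()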

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-07-02 Thread Clint Byrum
Excerpts from Dolph Mathews's message of 2014-07-01 10:02:13 -0700:
> The argument has been made in the past that small features will require
> correspondingly small specs. If there's a counter-argument to this example
> (a "small" feature requiring a relatively large amount of spec effort), I'd
> love to have links to both the spec and the resulting implementation so we
> can discuss exactly why the spec was an unnecessary additional effort.
> 

Indeed. The line to be drawn isn't around the size, IMO, but around
communication.  Nobody has the bandwidth to watch all of the git
logs. Nobody has the bandwidth to poll all of the developers about what has
changed in the interfaces available. So the line for me is whether or
not users and operators will need to know something is under way and
may want to comment _before_ a change to an interface is made.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-02 Thread Mike Spreitzer
Steven Hardy  wrote on 07/02/2014 06:02:36 AM:

> On Wed, Jul 02, 2014 at 03:02:14PM +0800, Qiming Teng wrote:
> > Just some random thoughts below ...
> > 
> > On Tue, Jul 01, 2014 at 03:47:03PM -0400, Mike Spreitzer wrote:
> > > ...
> >
> The resource signal interface used by ceilometer can carry whatever data
> you like, so the existing solution works fine, we don't need a new one 
> IMO.

That's great, I did not know about it.  Thanks for the details on this, I do
think it is an improvement.  Yes, it is a slight security downgrade --- the
signed URL is effectively a capability, and allowing a payload increases
the cross section of what an attacker can do with one stolen capability.
But I am willing to live with it.
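
To make the payload idea concrete, here is a minimal sketch of signaling a
Heat resource with extra data; the URL and field names are placeholders, not
an agreed format:

    import json
    import requests

    # Pre-signed signal URL generated for the scaling policy (placeholder).
    SIGNED_URL = 'https://heat.example.com/v1/signal/...'

    payload = {
        'alarm': 'member_unhealthy',         # illustrative only
        'unhealthy_members': ['server-0'],   # hint about which member failed
    }
    resp = requests.post(SIGNED_URL,
                         data=json.dumps(payload),
                         headers={'Content-Type': 'application/json'})
    resp.raise_for_status()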

> ...
> Are you aware of the "Existence of instance" meter in ceilometer?

I am, and was thinking it might be very directly usable.  Has anybody 
tested or demonstrated this?

> ...
> > How about just one signal responder per ScalingGroup?  A SG is supposed
> > to be in a better position to make the judgement: do I have to recreate
> > a failed member? am I recreating it right now or wait a few seconds?
> > maybe I should recreate the member on some specific AZs?
> 
> This is what we have already - you have one ScalingPolicy (which is a
> SignalResponder), and the ScalingPolicy is the place where you make the
> decision about what to do with the data provided from the alarm.

I think the existing ScalingPolicy is about a different issue. 
ScalingPolicy is mis-named; it is really only a ScalingAction, and it is 
about how to adjust the desired size.  It does not address the key missing 
piece here, which is how the scaling group updates its accounting of the 
number of members it has.  That accounting is done simply by counting 
members.  So if a member becomes dysfunctional but remains extant, the 
scaling group logic continues to count it.

Hmm, can a scaling group today properly cope with member deletion if 
prodded to do a ScalingPolicy(Action) that is 'add 1 member'?  (I had 
considered 'add 0 members' but that fails to produce a change in an 
important case --- when the size is now below minimum (fun fact about the 
code!). )

> ...
> > I am not in favor of the per-member webhook design.  But I vote for an
> > additional *implicit* parameter to a nested stack of any groups.  It
> > could be an index or a name.
> 
> I agree, we just need appropriate metadata in ceilometer, which can then be
> passed back to heat via the resource signal when the alarm happens.

We need to get the relevant meter samples in Ceilometer tagged with 
something that is unique to the [scaling group, member] and referencable 
in the template source.  For the case of a scaling group whose member type 
is nested stack, you could invent a way to implicitly pass such tagging 
down through all the intervening abstractions.  I was supposing the 
preferred solution would be for the template author to explicitly do this.

> ...

Thanks,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-02 Thread Mike Spreitzer
Qiming Teng  wrote on 07/02/2014 03:02:14 AM:

> Just some random thoughts below ...
> 
> On Tue, Jul 01, 2014 at 03:47:03PM -0400, Mike Spreitzer wrote:
> > ...
> > I have not found design discussion of this; have I missed something?
> > 
> > I suppose the natural answer for OpenStack would be centered around 
> > webhooks... 
> 
> Well, I would suggest we generalize this into a event messaging or
> signaling solution, instead of just 'webhooks'.  The reason is that
> webhooks as it is implemented today is not carrying a payload of useful
> information -- I'm referring to the alarms in Ceilometer.

OK, this is great (and Steve Hardy provided more details in his reply), I 
did not know about the existing abilities to have a payload.  However 
Ceilometer alarms are still deficient in that way, right?  A Ceilometer 
alarm's action list is simply a list of URLs, right?  I would be happy to 
say let's generalize Ceilometer alarms to allow a payload in an action.

> There are other cases as well.  A member failure could be caused by a 
> temporary communication problem, which means it may show up quickly when
> a replacement member is already being created.  It may mean that we have
> to respond to an 'online' event in addition to an 'offline' event?
> ...
> The problem here today is about the recovery of SG member.  If it is a
> compute instance, we can 'reboot', 'rebuild', 'evacuate', 'migrate' it,
> just to name a few options.  The most brutal way to do this is like what
> HARestarter is doing today -- delete followed by a create.

We could get into arbitrary subtlety, and maybe eventually will do better, 
but I think we can start with a simple solution that is widely applicable. 
 The simple solution is that once the decision has been made to do 
convergence on a member (note that this is distinct from merely detecting 
and noting a divergence) then it will be done regardless of whether the 
doomed member later appears to have recovered, and the convergence action 
for a scaling group member is to delete the old member and create a 
replacement (not in that order).

> > When the member is a nested stack and Ceilometer exists, it could be the 
> > member stack's responsibility to include a Ceilometer alarm that detects 
> > the member stack's death and hit the member stack's deletion webhook. 
> 
> This is difficult.  A '(nested) stack' is a Heat specific abstraction --
> recall that we have to annotate a nova server resource in its metadata
> to which stack this server belongs.  Besides the 'visible' resources
> specified in a template, Heat may create internal data structures and/or
> resources (e.g. users) for a stack.  I am not quite sure a stack's death
> can be easily detected from outside Heat.  It would be at least
> cumbersome to have Heat notify Ceilometer that a stack is dead, and then
> have Ceilometer send back a signal.

A (nested) stack is not only a heat-specific abstraction but its semantics 
and failure modes are specific to the stack (at least, its template).  I 
think we have no practical choice but to let the template author declare 
how failure is detected.  It could be as simple as creating Ceilometer 
alarms that detect the death of one or more resources in the nested stack; it 
could be more complicated Ceilometer stuff; it could be based on something 
other than, or in addition to, Ceilometer.  If today there are not enough 
sensors to detect failures of all kinds of resources, I consider that a 
gap in telemetry (and think it is small enough that we can proceed 
usefully today, and should plan on filling that gap over time).

> > There is a small matter of how the author of the template used to create 
> > the member stack writes some template snippet that creates a Ceilometer 
> > alarm that is specific to a member stack that does not exist yet. 
> 
> How about just one signal responder per ScalingGroup?  A SG is supposed
> to be in a better position to make the judgement: do I have to recreate
> a failed member? am I recreating it right now or wait a few seconds?
> maybe I should recreate the member on some specific AZs?

That is confusing two issues.  The thing that is new here is making the 
scaling group recognize member failure; the primary reaction is to update 
its accounting of members (which, in the current code, must be done by 
making sure the failed member is deleted); recovery of other scaling group 
aspects is fairly old-hat, it is analogous to the problems that the 
scaling group already solves when asked to increase its size.

> ...
> > I suppose we could stipulate that if the member template includes a 
> > parameter with name "member_name" and type "string" then the OS SG takes 
> > care of supplying the correct value of that parameter; as illustrated in 
> > the asg_of_stacks.yaml of https://review.openstack.org/#/c/97366/ , a 
> > member template can use a template parameter to tag Ceilometer data for 
> > querying.  The URL of the member stack's deletion webhook

Re: [openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-07-02 Thread Dolph Mathews
On Wed, Jul 2, 2014 at 7:08 AM, Lingxian Kong  wrote:

> IMO, 'spec' is indeed a good idea and indeed useful for tracking
> features, although it's a little tough for those of us not using English as
> a native language. But we still need to identify these 'small features',
> and core reviewers should do some review, then approve them ASAP, so that we
> can avoid wasting a lot of time waiting for the code implementation.
>

If you are confident in the acceptability of your spec, I don't think you
should necessarily wait to begin work on an implementation. In fact, I'd
suggest that you not wait at all.

An implementation can help illustrate a spec, find its weak points, or
even identify unimplementable parts of a spec.

That said, you should maintain such an implementation in Gerrit as a Work
In Progress so that it is not accidentally merged ahead of the spec.


>
> 2014-07-02 2:08 GMT+08:00 Devananda van der Veen  >:
> > On Tue, Jul 1, 2014 at 10:02 AM, Dolph Mathews 
> wrote:
> >> The argument has been made in the past that small features will require
> >> correspondingly small specs. If there's a counter-argument to this
> example
> >> (a "small" feature requiring a relatively large amount of spec effort),
> I'd
> >> love to have links to both the spec and the resulting implementation so
> we
> >> can discuss exactly why the spec was an unnecessary additional effort.
> >>
> >>
> >> On Tue, Jul 1, 2014 at 10:30 AM, Jason Dunsmore
> >>  wrote:
> >>>
> >>> On Mon, Jun 30 2014, Joshua Harlow wrote:
> >>>
> >>> > There is a balance here that needs to be worked out and I've seen
> >>> > specs start to turn into requirements for every single patch (even if
> >>> > the patch is pretty small). I hope we can rework the 'balance in the
> >>> > force' to avoid being so strict that every little thing requires a
> >>> > spec. This will not end well for us as a community.
> >>> >
> >>> > How have others thought the spec process has worked out so far? To
> >>> > much overhead, to little…?
> >>> >
> >>> > I personally am of the opinion that specs should be used for large
> >>> > topics (defining large is of course arbitrary); and I hope we find
> the
> >>> > right balance to avoid scaring everyone away from working with
> >>> > openstack. Maybe all of this is part of openstack maturing, I'm not
> >>> > sure, but it'd be great if we could have some guidelines around when
> >>> > is a spec needed and when isn't it and take it into consideration
> when
> >>> > requesting a spec that the person you have requested may get
> >>> > frustrated and just leave the community (and we must not have this
> >>> > happen) if you ask for it without explaining why and how clearly.
> >>>
> >>> +1 I think specs are too much overhead for small features.  A set of
> >>> guidelines about when specs are needed would be sufficient.  Leave the
> >>> option about when to submit a design vs. when to submit code to the
> >>> contributor.
> >>>
> >>> Jason
> >>>
> >
> > Yes, there needs to be balance, but as far as I have seen, folks are
> > finding the balance around when to require specs within each of the
> > project teams. I am curious if there are any specific examples where a
> > project's core team required a "large spec" for what they considered
> > to be a "small feature".
> >
> > I also feel strongly that the spec process has been very helpful for
> > the projects that I'm involved in for fleshing out the implications of
> > changes which may at first glance seem small, by requiring both
> > proposers and reviewers to think about and discuss the wider
> > ramifications for changes in a way that simply reviewing code often
> > does not.
> >
> > Just my 2c,
> > Devananda
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Regards!
> ---
> Lingxian Kong
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Analyzing the critical path

2014-07-02 Thread Brandon Logan
Hi Sam,
I'll comment on what I know in-line.

On Wed, 2014-07-02 at 15:29 +, Samuel Bercovici wrote:
> To reiterate the Juno release plan from:
> https://wiki.openstack.org/wiki/Juno_Release_Schedule
> 
> Feature freeze is at: 21st August.
> 
>  
> 
> I am listing tasks which we should consider to be done for Juno and
> who should handle them.
> 
>  
> 
> The following might be considered as critical path to get anything for
> Juno:
> 
> 1.  LBaaS Plugin, API, DB – when is a code commit expected?
The extension, plugin, and DB code are all implemented for the most
part.  Right now we are working on the "db" unit tests.  The "plugin"
tests have been converted from the old version.  The reason I am using
quotes is because the old plugin unit tests actually tested the
extension, and the old db unit tests actually tested the plugin (though
I can digress into a rant about how they're not really unit tests, but I
won't bog this down).  So I'm probably just going to rename the modules.

After that is done I'll probably put this up on gerrit as a WIP because
there are still things left to be included that others are working on, such
as shim layers, agent tree refactor, namespace_driver refactor.
> 
> 2.  CLI

Craig Tracey has been working on this.  From what I understand he is
close to completion.

> 
> 3.  LBaaS Driver Interfaces – done -
> https://review.openstack.org/#/c/100690/
> 
> 4.  Connecting the Plugin calls to the drive – I have not seen any
> reference to this. I think we should use the “provider” capabilities
> until “flavor” gets implemented. Is this addressed by item 1 above or
> does it required an additional task?

Yes that is what is being done in #1.

> 
> 5.  HA proxy reference implementation – when is a code commit
> expected?

I believe Dustin Lundquist was/is working on this.  I'll let him give an
update.

> 
> 6.  Tempest Tests

Miguel Lavalle was working on this.

> 
>  
> 
> Additional “Core” features
> 
> 1.  Horizon UI

This one should be discussed more.  I think we agreed that this
can wait because the v1 API should remain unchanged, and so Horizon can
use that at first until a change is made.

> 
> 2.  Quota update/fix

If by this you mean quota for loadbalancers, listeners, pools, members,
health monitors, then this will be done in the refactor.

> 
> 3.  API Compatibility

If by this you mean the new API and old API requests and responses work
simultaneously then that will be done in the refactor.

> 
> a.  Connecting the “OLD API Plugin calls to “old/new” drivers. Is
> this still planned?

The old API plugin will not be calling the new driver interface.  There
may be a shim layer that turns the new object model into the old one and
passes it to the drivers that do not follow the new driver interface.
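
Just to illustrate the idea, a purely hypothetical sketch of such a shim (this
is not code from the refactor; all class and attribute names below are made
up):

# Hypothetical sketch of a shim; names are illustrative only.
class OldDriverShim(object):
    """Adapts the new object model to a driver written for the old one."""

    def __init__(self, old_driver):
        self.old_driver = old_driver

    def create_loadbalancer(self, context, loadbalancer):
        # Translate the new-style loadbalancer into the old vip-style
        # dictionary the legacy driver expects, then delegate to it.
        listener = loadbalancer.listeners[0]
        old_vip = {
            'id': loadbalancer.id,
            'address': loadbalancer.vip_address,
            'protocol_port': listener.protocol_port,
            'admin_state_up': loadbalancer.admin_state_up,
        }
        self.old_driver.create_vip(context, old_vip)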

> 
> 4.  Driver Compatibility

Mentioned this above.

> 
> a.  Connecting the Plugin calls to “old” drivers. Is this still
> planned?

Touched on it above, but there may be a shim layer that does this
translation.  I also think if the old API and new API can live side by
side this shim layer may not be needed.  That needs to be a different
discussion though.

> 
>  
> 

Everything below does indeed depend on the refactor so I haven't been as
involved as I should be in those discussions.  I'll let those who are
involved give updates.

> In addition/parallel
> 
> 1.  TLS  – 
> 
> a.  BP is approved. 
> 
> b. WIP code was committed and waiting for the code of the basic
> API/model to be available for start of review. 
> 
> c.  HA Proxy reference implementation
> 
> d. CLI
> 
> e. Horizon Support
> 
> f.   Tempest Tests
> 
> 2.  L7 context switching
> 
> a.  BP in review
> 
> b. WIP code in progress and waiting for the code of the basic
> API/model to be available for initial commit
> 
> c.  HA Proxy reference implementation
> 
> d. CLI
> 
> e. Horizon Support
> 
> f.   Tempest Tests
> 
>  
> 
> Anything else?
> 
>  
> 
>  
> 
> Regards,
> 
> -Sam.
> 
>  
> 
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] SpecProposalDeadline and SpecApprovalDeadline

2014-07-02 Thread Thierry Carrez
Hi everyone,

During this cycle we experimented with using -specs repositories. Those
proved very popular, and busy projects have now built a healthy backlog
of -specs reviews.

However, there will come a time when reviewing specs will become a
distraction. At some point in the cycle, we'll know that the code
corresponding to the spec under review has no chance to get implemented
and reviewed in time for landing before feature freeze in the current
development cycle. Some projects therefore suggested using a clear
deadline in the release cycle for spec proposal and acceptance.

Those optional steps in the release cycle have been documented in the
wiki at:

https://wiki.openstack.org/wiki/SpecProposalDeadline
https://wiki.openstack.org/wiki/SpecApprovalDeadline

Specs-using projects may elect to use them in the Juno cycle. If there
is global consensus around them in the future, they may become an
integral part of our cycle (like FeatureFreeze is).

At this point, afaik only Neutron announced[1] dates for it:
Neutron SPD: July 10
Neutron SAD: July 20

If multiple projects converge on the same dates, I'll probably add those
on the release schedule for clearer communication.

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/039138.html
[2] https://wiki.openstack.org/wiki/Juno_Release_Schedule

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QA] Enabling full neutron Job

2014-07-02 Thread Salvatore Orlando
Hi again,

From my analysis most of the failures affecting the neutron full job are
because of bugs [1] and [2] for which patch [3] and [4] have been proposed.
Both patches address the nova side of the neutron/nova notification system
for vif plugging.
It is worth noting that these bugs did manifest only in the neutron full
job not because of its "full" nature, but because of its "parallel" nature.

Openstackers with a good memory will probably remember we fixed the
parallel job back in January, before the massive "kernel bug" gate outage
[5]. However, since parallel testing was unfortunately never enabled on the
smoke job we run on the gate, we allowed new bugs to slip in.
For this reason I would recommend the following:
- once patches [3] and [4] have been reviewed and merged, re-assess the neutron
full job failure rate over a period of 48 hours (72 if the period includes
at least 24 hours within a weekend - GMT time)
- make the neutron full job voting if the previous step reveals a failure
rate below 10%, otherwise go back to the drawing board

In my opinion whether the full job should be enabled in an asymmetric
fashion or not should be a decision for the QA and Infra teams. Once the
full job is made voting there will inevitably be a higher failure rate. An
asymmetric gate will not cause backlogs on other projects, so less angry
people, but as Matt said it will still allow other bugs to slip in.
Personally I'm ok either way.

The reason why we're expecting a higher failure rate on the full job is
that we have already observed that some "known" bugs, such as the various
lock timeout issues affecting neutron tend to show with a higher frequency
on the full job because of its parallel nature.

Salvatore

[1] https://launchpad.net/bugs/1329546
[2] https://launchpad.net/bugs/1333654
[3] https://review.openstack.org/#/c/99182/
[4] https://review.openstack.org/#/c/103865/
[5] https://bugs.launchpad.net/neutron/+bug/1273386




On 25 June 2014 23:38, Matthew Treinish  wrote:

> On Tue, Jun 24, 2014 at 02:14:16PM +0200, Salvatore Orlando wrote:
> > There is a long standing patch [1] for enabling the neutron full job.
> > Little before the Icehouse release date, when we first pushed this, the
> > neutron full job had a failure rate of less than 10%. However, time has
> > gone by since then, and perceived failure rates were higher, so we ran this
> > analysis again.
>
> So I'm not exactly a fan of having the gates be asymmetrical.  It's very
> easy
> for breaks to slip in blocking the neutron gate if it's not voting
> everywhere.
> Especially because I think most people have been trained to ignore the full
> job because it's been nonvoting for so long. Is there a particular reason
> we
> just don't switch everything all at once? I think having a little bit of
> friction everywhere during the migration is fine. Especially if we do it
> way
> before a milestone. (as opposed to the original parallel switch which was
> right
> before H-3)
>
> >
> > Here are the findings in a nutshell.
> > 1) If we were to enable the job today we might expect about a 3-fold
> > increase in neutron job failures when compared with the smoke test. This
> is
> > unfortunately not acceptable and we therefore need to identify and fix
> the
> > issues causing the additional failure rate.
> > 2) However this also puts us in a position where if we wait until the
> > failure rate drops under a given threshold we might end up chasing a
> moving
> > target as new issues might be introduced at any time since the job is not
> > voting.
> > 3) When it comes to evaluating failure rates for a non-voting job, taking
> > the rough numbers does not mean anything, as that will take into account
> > patches 'in progress' which end up failing the tests because of problems
> in
> > the patches themselves.
> >
> > Well, that was pretty much a lot for a "nutshell"; however if you're not
> > yet bored to death please go on reading.
> >
> > The data in this post are a bit skewed because of a rise in neutron job
> > failures in the past 36 hours. However, this rise affects both the full
> and
> > the smoke job so it does not invalidate what we say here. The results
> shown
> > below are representative of the gate status 12 hours ago.
> >
> > - Neutron smoke job failure rates (all queues)
> >   24 hours: 22.4% 48 hours: 19.3% 7 days: 8.96%
> > - Neutron smoke job failure rates (gate queue only):
> >   24 hours: 10.41% 48 hours: 10.20% 7 days: 3.53%
> > - Neutron full job failure rate (check queue only as it's non voting):
> >   24 hours: 31.54% 48 hours: 28.87% 7 days: 25.73%
> >
> > Check/Gate Ratio between neutron smoke failures
> > 24 hours: 2.15 48 hours: 1.89 7 days: 2.53
> >
> > Estimated job failure rate for neutron full job if it were to run in the
> > gate:
> > 24 hours: 14.67% 48 hours: 15.27% 7 days: 10.16%
> >
> > The numbers are therefore not terrible, but definitely not good enough;
> > looking at the last 7 days the full job will have a failure rate about 3
> > times higher than the smoke job.
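
(For anyone who wants to re-run the estimate above, the arithmetic is roughly
the full job's check-queue failure rate divided by the smoke job's check/gate
ratio; a quick sketch, with small differences due to rounding:)

# Rough sketch of the arithmetic behind the estimate quoted above.
check_gate_ratio = {'24h': 2.15, '48h': 1.89, '7d': 2.53}
full_check_failure = {'24h': 31.54, '48h': 28.87, '7d': 25.73}  # percent

for period in ('24h', '48h', '7d'):
    estimate = full_check_failure[period] / check_gate_ratio[period]
    print("%s: estimated gate failure rate %.2f%%" % (period, estimate))
# -> roughly 14.67%, 15.28% and 10.17%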

Re: [openstack-dev] [neutron]Performance of security group

2014-07-02 Thread Kyle Mestery
On Wed, Jul 2, 2014 at 3:43 AM, Ihar Hrachyshka  wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA512
>
> On 02/07/14 10:12, Miguel Angel Ajo wrote:
>>
>> Shihazhang,
>>
>> I really believe we need the RPC refactor done for this cycle, and
>> given the close deadlines we have (July 10 for spec submission and
>> July 20 for spec approval).
>>
>> Don't you think it's going to be better to split the work in
>> several specs?
>>
>> 1) ipset optimization   (you) 2) sg rpc optimization (without
>> fanout) (me) 3) sg rpc optimization (with fanout) (edouard, you ,
>> me)
>>
>>
>> This way we increase the chances of having part of this for the
>> Juno cycle. If we go for something too complicated is going to take
>> more time for approval.
>>
>
> I agree. And it not only increases the chances of getting at least some of
> those highly demanded performance enhancements into Juno, it's
> also "the right thing to do" (c). It's counterproductive to put
> multiple vaguely related enhancements in single spec. This would dim
> review focus and put us into position of getting 'all-or-nothing'. We
> can't afford that.
>
> Let's leave one spec per enhancement. @Shihazhang, what do you think?
>
+100

File these as separate specs, and lets see how much of this we can get
into Juno.

Thanks for taking on this enhancement and performance improvement, everyone!

Kyle

>>
>> Also, I proposed the details of "2", trying to bring awareness on
>> the topic, as I have been working with the scale lab in Red Hat to
>> find and understand those issues, I have a very good knowledge of
>> the problem and I believe I could make a very fast advance on the
>> issue at the RPC level.
>>
>> Given that, I'd like to work on this specific part, whether or not
>> we split the specs, as it's something we believe critical for
>> neutron scalability and thus, *nova parity*.
>>
>> I will start a separate spec for "2", later on, if you find it ok,
>> we keep them as separate ones, if you believe having just 1 spec
>> (for 1 & 2) is going be safer for juno-* approval, then we can
>> incorporate my spec in yours, but then "add-ipset-to-security" is
>> not a good spec title to put all this together.
>>
>>
>> Best regards, Miguel Ángel.
>>
>>
>> On 07/02/2014 03:37 AM, shihanzhang wrote:
>>>
> >>> hi Miguel Angel Ajo Pelayo! I agree with you and will modify my spec,
> >>> but I will also optimize the RPC from the security group agent to the
> >>> neutron server. Now the model is 'port[rule1,rule2...], port...';
> >>> I will change it to 'port[sg1, sg2..]', which can reduce the size
> >>> of the RPC response message from the neutron server to the security group
> >>> agent.
>>>
>>> At 2014-07-01 09:06:17, "Miguel Angel Ajo Pelayo"
>>>  wrote:


 Ok, I was talking with Édouard @ IRC, and as I have time to
 work into this problem, I could file an specific spec for the
 security group RPC optimization, a masterplan in two steps:

 1) Refactor the current RPC communication for
 security_groups_for_devices, which could be used for full
 syncs, etc..

 2) Benchmark && make use of a fanout queue per security group
 to make sure only the hosts with instances on a certain
 security group get the updates as they happen.

 @shihanzhang do you find it reasonable?



 - Original Message -
> - Original Message -
>> @Nachi: Yes that could a good improvement to factorize the
>> RPC
> mechanism.
>>
>> Another idea: What about creating a RPC topic per security
>> group (quid of the
> RPC topic
>> scalability) on which an agent subscribes if one of its
>> ports is
> associated
>> to the security group?
>>
>> Regards, Édouard.
>>
>>
>
>
> Hmm, Interesting,
>
> @Nachi, I'm not sure I fully understood:
>
>
> SG_LIST [ SG1, SG2] SG_RULE_LIST = [SG_Rule1, SG_Rule2] ..
> port[SG_ID1, SG_ID2], port2 , port3
>
>
> Probably we may need to include also the SG_IP_LIST =
> [SG_IP1, SG_IP2] ...
>
>
> and let the agent do all the combination work.
>
> Something like this could make sense?
>
> Security_Groups = {SG1:{IPs:[],RULES:[]},
> SG2:{IPs:[],RULES:[]} }
>
> Ports = {Port1:[SG1, SG2], Port2: [SG1]  }
>
>
> @Edouard, actually I like the idea of having the agent
> subscribed to security groups they have ports on... That
> would remove the need to include all the security groups
> information on every call...
>
> But would need another call to get the full information of a
> set of security groups at start/resync if we don't already
> have any.
>
>
>>
>> On Fri, Jun 20, 2014 at 4:04 AM, shihanzhang <
> ayshihanzh...@126.com >
>> wrote:
>>
>>
>>
>> hi Miguel Ángel, I am very agree with you about the
>> following point:
>>> * physical implementation on the hosts (ipsets, nf

Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-02 Thread Mike Spreitzer
Mike Spreitzer/Watson/IBM@IBMUS wrote on 07/02/2014 02:41:48 AM:

> Zane Bitter  wrote on 07/01/2014 06:58:47 PM:
> 
> > On 01/07/14 15:47, Mike Spreitzer wrote:
> > > In AWS, an autoscaling group includes health maintenance 
functionality
> > > --- both an ability to detect basic forms of failures and an ability 
to
> > > react properly to failures detected by itself or by a load balancer.
> > >   What is the thinking about how to get this functionality in 
OpenStack?
> > >   Since OpenStack's OS::Heat::AutoScalingGroup has a more general 
member
> > > type, what is the thinking about what failure detection means (and 
how
> > > it would be accomplished, communicated)?
> > >
> > > I have not found design discussion of this; have I missed something?
> > 
> > Yes :)
> > 
> > https://review.openstack.org/#/c/95907/
> > 
> > The idea is that Convergence will provide health maintenance for _all_ 

> > forms of resources in Heat. Once this is implemented, autoscaling gets 

> > it for free by virtue of that fact that it manages resources using 
Heat 
> > stacks.
> 
> Ah, right.  My reading of that design is not quite so simple...

There are a couple more issues that arise in this approach.  The biggest 
one is how to integrate application-level failure detection.  I added a 
comment to this effect on the Convergence spec.

Another issue is that, at least initially, Convergence is not "always on"; 
rather, it is an operation that can be invoked on a stack.  When would a 
scaling group invoke this action on itself (more precisely, itself 
considered as a nested stack)?  One obvious possibility is on a periodic 
basis.  If the convergence operation is pretty cheap when no divergence 
has been detected, then that might be acceptable.  Otherwise we might want 
the scaling group to set up some sort of notification, but that would be 
another batch of member-type specific code.

Thanks,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third party] - minimum response time for 3rd party CI responses

2014-07-02 Thread Anita Kuno
On 06/30/2014 03:04 PM, Kevin Benton wrote:
> Hello all,
> 
> The subject of 3rd party CI voting responses came up in the 3rd-party IRC
> meeting today.[1] We would like to get feedback from the larger dev
> community on what acceptable response times are for third party CI systems.
> 
> As a maintainer of a small CI system that tends to get backed up during
> milestone rush hours, it would be nice if we were allowed up to 12 hours.
> However, as a developer this seems like too long to have to wait for the
> results of a patch.
> 
> One option might be to peg the response time to the main CI response
> assuming that it gets backed up in a similar manner.
> 
> What would be acceptable to the community?
> 
> 1.
> http://eavesdrop.openstack.org/meetings/third_party/2014/third_party.2014-06-30-18.01.html
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
Thank you for starting this discussion, Kevin.

I look forward to getting some responses from developers. Even if you
don't have a time suggestion, please offer your description of your
workflow and how third party ci responses play a role in that workflow
so we can try to make a good decision here.

Third party ci admins are trying to come up with a standard for testing
response times that the community can accept. If we don't have any input
from developers here, third party ci admins are going to conclude that
developers don't care (I know you do) and pick a time frame that works
for them, which likely won't work for developers.

I look forward to having some options to draw from here. I hope you
consider posting your input.

My thanks,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Analyzing the critical path

2014-07-02 Thread Samuel Bercovici
To reiterate the Juno release plan from: 
https://wiki.openstack.org/wiki/Juno_Release_Schedule
Feature freeze is at: 21st August.

I am listing tasks which we should consider to be done for Juno and who should 
handle them.

The following might be considered as critical path to get anything for Juno:

1.   LBaaS Plugin, API, DB - when is a code commit expected?

2.   CLI

3.   LBaaS Driver Interfaces - done - 
https://review.openstack.org/#/c/100690/

4.   Connecting the Plugin calls to the drive - I have not seen any 
reference to this. I think we should use the "provider" capabilities until 
"flavor" gets implemented. Is this addressed by item 1 above or does it 
required an additional task?

5.   HA proxy reference implementation - when is a code commit expected?

6.   Tempest Tests

Additional "Core" features

1.   Horizon UI

2.   Quota update/fix

3.   API Compatibility

a.   Connecting the "OLD API Plugin calls to "old/new" drivers. Is this 
still planned?

4.   Driver Compatibility

a.   Connecting the Plugin calls to "old" drivers. Is this still planned?

In addition/parallel

1.   TLS  -

a.   BP is approved.

b.  WIP code was committed and waiting for the code of the basic API/model 
to be available for start of review.

c.   HA Proxy reference implementation

d.  CLI

e.  Horizon Support

f.Tempest Tests

2.   L7 context switching

a.   BP in review

b.  WIP code in progress and waiting for the code of the basic API/model to 
be available for initial commit

c.   HA Proxy reference implementation

d.  CLI

e.  Horizon Support

f.Tempest Tests

Anything else?


Regards,
-Sam.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [Swift] Question re. keystone domains

2014-07-02 Thread Nathan Kinder


On 07/01/2014 12:15 PM, Dolph Mathews wrote:
> 
> On Tue, Jul 1, 2014 at 11:20 AM, Coles, Alistair  > wrote:
> 
> We have a change [1] under review in Swift to make access control
> lists compatible with migration to keystone v3 domains. The change
> makes two assumptions that I’d like to double-check with keystone
> folks:
> 
> 
> 1.  That a project can never move from one domain to another.
> 
> We're moving in this direction, at least. In Grizzly and Havana, we made
> no such restriction. In Icehouse, we introduced such a restriction by
> default, but it can be disabled. So far, we haven't gotten any
> complaints about adding the restriction, so maybe we should just add
> additional help text to the option in our config about why you would
> never want to disable the restriction, citing how it would break swift?
> 
> 
> 
> 2.  That the underscore character cannot appear in a valid
> domain id – more specifically, that the string ‘_unknown’ cannot be
> confused with a domain id.
> 
> That's fairly sound. All of our domain ID's are system-assigned as
> UUIDs, except for the "default" domain which has an explicit
> id='default'. We don't do anything to validate the assumption, though.

I don't like the idea of making this assumption without explicit
validation.  If there is a need for a blacklisted domain id space, we
should enforce it to prevent problems down the road.
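
Something along these lines would be enough (a minimal sketch, not actual
keystone code; the names are made up):

# Minimal sketch of enforcing a reserved domain-id space; not actual
# keystone code, names are illustrative.
RESERVED_DOMAIN_IDS = frozenset(['_unknown'])

def validate_domain_id(domain_id):
    if domain_id in RESERVED_DOMAIN_IDS or domain_id.startswith('_'):
        raise ValueError("domain id %r is reserved" % domain_id)
    return domain_id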

-NGK

> 
> 
> 
> 
> Are those safe assumptions?
> 
> 
> Thanks,
> 
> Alistair
> 
> 
> [1] https://review.openstack.org/86430
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-07-02 Thread Mathieu Rohon
Hi, sorry for the late reply, I was out of office for 3 weeks.

I also love the idea of having a single thread in charge of writing
dataplane actions.
As Zang described, this thread would read events in a queue, which could be
populated by agent_drivers.
The main goal would be to avoid desynchronization. While a first
greenthread can yield while performing a dataplane action (for instance,
running an ofctl command), another greenthread could potentially process
another action that should only be done after the first one has finished.
Enqueuing those actions and processing them in a single thread would avoid
such behavior.

I also think this is orthogonal to the agent/resource architecture. I
think that the agent driver could populate the queue, and the singleton thread
would call the correct resource driver, depending on the impacted port, to
interpret the order filed in the queue.
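
Something like the following is what I have in mind (a very rough sketch with
made-up names, just to show the enqueue/dispatch split):

# Very rough sketch: agent drivers only enqueue events, and a single
# worker thread dequeues them and calls the matching resource driver.
# All names here are illustrative.
import Queue

event_queue = Queue.Queue()

def enqueue(event):
    # called from agent drivers (RPC handlers, OVSDB monitor, ...)
    event_queue.put(event)

def worker(resource_drivers):
    # the only thread allowed to touch the dataplane
    while True:
        event = event_queue.get()
        driver = resource_drivers[event.resource_type]
        driver.handle(event)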

regards
Mathieu


On Fri, Jun 20, 2014 at 10:38 PM, Mohammad Banikazemi  wrote:

> Zang, thanks for your comments.
>
> I think what you are suggesting is perhaps orthogonal to having Resource
> and Agent drivers. By that I mean we can have what you are suggesting and
> keep the Resource and Agent drivers. The reason for having Resource drivers
> is to provide the means for possibly extending what an agent does in
> response to say changes to a port in a modular way. We can restrict the
> access to Resource drivers from the events loop only. That restriction is
> not there in the current model but would adding that address your concerns?
> What are your thoughts? As Salvatore has mentioned in his email in this
> thread, that is what the current OVS agent does wrt port updates. That is,
> the update to ports get processed from the events loop.
>
> As a separate but relevant issue, we can and should discuss whether having
> the Resource and Agent drivers is useful in making the agent more modular.
> The idea behind using these drivers is to have the agent use a collection
> of drivers rather than mixin classes so we can more easily select what
>  (and how) functionalities an agent support and reuse as much as we can
> across L2 agents. Are there better ways of achieving this? Any thoughts?
>
> Best,
>
> Mohammad
>
>
>
>
> From: Zang MingJie 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>,
> Date: 06/19/2014 06:27 AM
>
> Subject: Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture
> --
>
>
>
> Hi:
>
> I don't like the idea of ResourceDriver and AgentDriver. I suggested
> using a singleton worker thread to manage all underlying setup, so the
> driver should do nothing other than fire an update event to the worker.
>
> The worker thread may looks like this one:
>
> # the only variable storing all local state which survives between
> # different events, including lvm, fdb or whatever
> state = {}
>
> # loop forever
> while True:
>     event = ev_queue.pop()
>     if not event:
>         sleep()  # may be interrupted when new event comes
>         continue
>
>     old_state = state
>     new_state = event.merge_state(state)
>
>     if event.is_ovsdb_changed():
>         if event.is_tunnel_changed():
>             setup_tunnel(new_state, old_state, event)
>         if event.is_port_tags_changed():
>             setup_port_tags(new_state, old_state, event)
>
>     if event.is_flow_changed():
>         if event.is_flow_table_1_changed():
>             setup_flow_table_1(new_state, old_state, event)
>         if event.is_flow_table_2_changed():
>             setup_flow_table_2(new_state, old_state, event)
>         if event.is_flow_table_3_changed():
>             setup_flow_table_3(new_state, old_state, event)
>         if event.is_flow_table_4_changed():
>             setup_flow_table_4(new_state, old_state, event)
>
>     if event.is_iptable_changed():
>         if event.is_iptable_nat_changed():
>             setup_iptable_nat(new_state, old_state, event)
>         if event.is_iptable_filter_changed():
>             setup_iptable_filter(new_state, old_state, event)
>
>     state = new_state
>
> when any part has been changed by an event, the corresponding setup_xxx
> function rebuilds the whole part, then uses a restore command like
> `iptables-restore` or `ovs-ofctl replace-flows` to reset the whole
> part.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-07-02 Thread Doug Hellmann
On Wed, Jul 2, 2014 at 5:46 AM, Ihar Hrachyshka  wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA512
>
> On 01/07/14 17:14, Alexei Kornienko wrote:
>> Hi,
>>
>> Please see some minor comments inline. Do you think we can schedule
>> some time to discuss this topic on one of the upcoming meetings? We
>> can come out with some kind of the summary and actions plan to
>> start working on.
>
> Please check: https://wiki.openstack.org/wiki/Meetings/Oslo I think
> you can add your case in the agenda. That's how it works in e.g. Neutron.

I've been trying to keep the Oslo meetings focused on status updates
and issue reports from our liaisons. Let's work on reaching consensus
on this change here on the mailing list where we have the time and
space for a long-form conversation.

Doug

>
 There are a lot of questions that should be answered
 to implement this: Where would such tests run (jenkins,
 local PC, devstack VM)?
> I would expect it to be exposed to jenkins thru 'tox'. We
> then can set up a separate job to run them and compare with a
> base line [TBD: what *is* baseline?] to make sure we don't
> introduce performance regressions.
>> Such tests cannot be exposed thru 'tox' since they require
>> some environment setup (rabbitmq-server, zeromq matchmaker,
>> etc.). Such setup is way out of scope for tox. Because of
>> this we should find some other way to run such tests.
>> You may just assume server is already set and available thru a
>> common socket.
>>> Assuming that something is already set up is not an option. If we
>>> add a new env to tox I assume that anyone can run it locally and
>>> get the same results as he would get from jenkins.
>
> Fair enough. You can look into tempest then. I don't know whether they
> already have some framework for performance testing though.
>
> /Ihar
> -BEGIN PGP SIGNATURE-
> Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
> Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
>
> iQEcBAEBCgAGBQJTs9UIAAoJEC5aWaUY1u57USMH/j4dfCdAl2QKCMLDWzopLqon
> 41gqq8DsKDhc/I24z2KNYqRPk43tQvR9dWhbnbs1XlqXk0keozQ2YNE8pvuZyqok
> VN4/l435ewHJ3OyZePQLANgDBrAVdWRmmLWJwGJZ8ceTfZRV1MN+vsLHGwCMFinU
> J/8UAR8XUF5UVrd/VONsvVSwDnx1v7vs8zM9aICfos0F4ByMU9bPThdeUiuJOTuU
> xNP+FEdlNwwlc4sAwm3qHKJIVe7gYSojIfQGhmBlXWpPKA1SYoT5I6qMZibsbtlM
> VE092du9APc8LLnE3DdMS6CuUuABbljUtAxUr5z8CirIDoh9n/c5+e8Jfbb8zHU=
> =OwOm
> -END PGP SIGNATURE-
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

2014-07-02 Thread Anita Kuno
On 07/01/2014 10:03 AM, Duncan Thomas wrote:
> On 1 July 2014 14:44, Anita Kuno  wrote:
> 
>> On 07/01/2014 05:56 AM, Duncan Thomas wrote:
>>> For the record, cinder gave a very clear definition of success in our
>>> 3rd party guidelines: Passes every test in tempest-dsm-full. If that
>>> needs documenting somewhere else, please let me know. It may of course
>>> change as we learn more about how 3rd party CI works out, so the fewer
>>> places it is duplicated the better, maybe?
>>>
>> Thanks Duncan, I wasn't aware of this. Can we start with a url for those
>> guidelines in your reply to this post and then go from there?
> 
> https://wiki.openstack.org/wiki/Cinder/certified-drivers should make
> it clear but doesn't, I'll get that cleared up.
> 
> https://etherpad.openstack.org/p/juno-cinder-3rd-party-cert-and-verification
> mentions it, and various weekly meeting minutes also mention it.
> 
> 
Thanks for sharing those links, Duncan.

Hmmm, my first response - given that long chew we had on the ml[0] about
the use of the word certified as well as the short confirmation we had
in the tc meeting[1] that the word certified would not be used, but
rather some version of the word 'tested' - how long until edits can be
made to the cinder wiki to comply with that agreement?

Thanks Duncan,
Anita.

[0] http://lists.openstack.org/pipermail/openstack-dev/2014-June/036933.html
[1]
http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-06-17-20.03.html 2.b

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

2014-07-02 Thread Anita Kuno
On 07/01/2014 12:27 PM, Asselin, Ramy wrote:
> Anita,
> 
> This line [1] is effectively a sub-set of tempest-dsm-full,  and what we're 
> currently running manually now. As far as I understood, this is the current 
> minimum. The exact sub-set (or full set, or if additional tests are allowed) 
> is still under discussion. 
> 
Fair enough, I need to make some time to attend Cinder weekly meetings.
Wednesdays are tough for me, but I need to make more of an effort.
Thanks for making me aware.
> I created a WIP reference patch [2] for the cinder team that mimics the above 
> script to run these tests based off the similar  Jenkins job-template used by 
> Openstack-Jenkins "{pipeline}-tempest-dsvm-*"
> 
Thanks for sharing the link to your patch, I have given it an initial
review for formatting. I'll be interested to see the discussion from
this, I'm not sure where a design like this fits but I appreciate you
putting up a patch so we can gather some input.

Thanks Ramy,
Anita.

> Ramy
> 
> [1] 
> https://github.com/openstack-dev/devstack/blob/master/driver_certs/cinder_driver_cert.sh#L97
> [2] 
> https://review.openstack.org/#/c/93141/1/modules/openstack_project/files/jenkins_job_builder/config/devstack-gate.yaml
> 
> -Original Message-
> From: Duncan Thomas [mailto:duncan.tho...@gmail.com] 
> Sent: Tuesday, July 01, 2014 7:03 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit
> 
> On 1 July 2014 14:44, Anita Kuno  wrote:
> 
>> On 07/01/2014 05:56 AM, Duncan Thomas wrote:
>>> For the record, cinder gave a very clear definition of success in our 
>>> 3rd party guidelines: Passes every test in tempest-dsm-full. If that 
>>> needs documenting somewhere else, please let me know. It may of 
>>> course change as we learn more about how 3rd party CI works out, so 
>>> the fewer places it is duplicated the better, maybe?
>>>
>> Thanks Duncan, I wasn't aware of this. Can we start with a url for 
>> those guidelines in your reply to this post and then go from there?
> 
> https://wiki.openstack.org/wiki/Cinder/certified-drivers should make it clear 
> but doesn't, I'll get that cleared up.
> 
> https://etherpad.openstack.org/p/juno-cinder-3rd-party-cert-and-verification
> mentions it, and various weekly meeting minutes also mention it.
> 
> 
> --
> Duncan Thomas
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] 3rd party ci names for use by official cinder mandated tests

2014-07-02 Thread Anita Kuno
On 07/01/2014 01:13 PM, Asselin, Ramy wrote:
> 3rd party ci naming is currently becoming a bit controversial for what we're 
> trying to do in cinder: https://review.openstack.org/#/c/101013/
> The motivation for the above change is to aid developers understand what the 
> 3rd party ci systems are testing in order to avoid confusion.
> The goal is to aid developers reviewing cinder changes to understand which 
> 3rd party ci systems are running official cinder-mandated tests and which are 
> running unofficial/proprietary tests.
> Since the use of "cinder" is proposed to be "reserved" (per change under 
> review above), I'd like to propose the following for Cinder third-party names 
> under the following conditions:
> {Company-Name}-cinder-ci
> * This CI account name is to be used strictly for official 
> cinder-defined dsvm-full-{driver} tests.
> * No additional tests allowed on this account.
> o A different account name will be used for unofficial / proprietary tests.
> * Account will only post reviews to cinder patches.
> o A different account name will be used to post reviews in all other 
> projects.
> * Format of comments will be (as jgriffith commented in that review):
> 
> {company name}-cinder-ci
> 
>dsvm-full-{driver-name}   pass/fail
> 
> 
>dsvm-full-{other-driver-name} pass/fail
> 
> 
>dsvm-full-{yet-another-driver-name}   pass/fail
> 
> 
> Thoughts?
> 
> Ramy
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
Thanks for starting this thread, Ramy.

I too would like Cinder third party ci systems (and systems that might
test Cinder now or in the future) to weigh in and share their thoughts.

We do need to agree on a naming policy and whatever that policy is will
frame future discussions with new accounts (and existing ones) so let's
get some thoughts offered here so we all can live with the outcome.

Thanks again, Ramy, I appreciate your help on this as we work toward a
resolution.

Thank you,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] milestone-proposed is dead, long lives proposed/foo

2014-07-02 Thread Jeremy Stanley
On 2014-07-02 16:14:52 +0400 (+0400), Yuriy Taraday wrote:
> Why do we need these short-lived 'proposed' branches in any form?
> Why can't we just use release branches for this and treat them as
> stable when appropriate tag is added to some commit in them?

The primary reasons are:

1. People interpret "stable/juno" as an indication that it is a
stable released branch, so "proposed/juno" makes it a little more
obvious to those people that it isn't yet.

2. Current process delegates pre-release change approval to a
different group of reviewers than post-release change approval, and
the easiest way to enforce this is through Gerrit ACL matches on
different git ref patterns for their respective target branches.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-07-02 Thread Gordon Sim

On 07/01/2014 04:14 PM, Alexei Kornienko wrote:

A lot of driver details leak outside the API and it makes it hard to
improve driver without changing the API.


I agree that some aspects of specific driver implementations leak into 
the public API for the messaging library as a whole. There are also some 
ambiguities around the intended and actual contracts.


However, I do also think that the existing driver abstraction, while not 
perfect, can be used to implement drivers using quite different approaches.


The current impl_rabbit and impl_qpid drivers may be hard(er) to change, 
but that's not because of the API (in my view).


[...]

This would allow us to change drivers without touching the API and test
their performance separately


I think you can do this already. It is true that there are some semantic 
differences when switching drivers, and it would be ideal to minimise 
those (or at least make them more explicit) at some point.


I did some experiments measuring the scalability of rpc calls for 
different drivers - mainly as an initial stress test for the proposed 
AMQP 1.0 driver - which showed significant differences even where the 
same broker was used, just by changing the driver.


I'm not arguing that there will never be a good reason to change either 
the driver API or indeed the public API, but I think a lot can be 
accomplished within the current framework if desired. (With more 
specific changes to APIs then proposed with some context as needed).


--Gordon

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Using tmux instead of screen in devstack

2014-07-02 Thread Anita Kuno
On 07/01/2014 08:52 AM, CARVER, PAUL wrote:
> Anant Patil wrote:
>> I use tmux (an alternative to screen) a lot and I believe a lot of other 
>> developers use it.
>> I have been using devstack for some time now and would like to add the 
>> option of
>> using tmux instead of screen for creating sessions for openstack services.
>> I couldn't find a way to do that in current implementation of devstack.
> 
> Is it just for familiarity or are there specific features lacking in screen 
> that you think
> would benefit devstack? I’ve tried tmux a couple of times but didn’t find any
> compelling reason to switch from screen. I wouldn’t argue against anyone who
> wants to use it for their day to day needs. But don’t just change devstack on 
> a whim,
> list out the objective benefits.
> 
> Having a configuration option to switch between devstack-screen and 
> devstack-tmux
> seems like it would probably add more complexity than benefit, especially if 
> there
> are any functional differences. If there are functional differences it would 
> be better
> to decide which one is best (for devstack, not necessarily best for everyone 
> in the world)
> and go with that one only.
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
Actually as a tmux user I really like that devstack uses screen, that
way when I have screen running in tmux, I can decide (since they have
different default key combinations, Control+A for screen and Control+B
for tmux) which utility I am talking to, screen or tmux.

Thanks,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting July 3 1800 UTC

2014-07-02 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140703T18

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Flavor meeting log captured?

2014-07-02 Thread Stephen Wong
Hi Vijay,

Yes, it was logged under advanced service:

http://eavesdrop.openstack.org/meetings/networking_advanced_services/2014/networking_advanced_services.2014-06-27-17.30.log.html

http://eavesdrop.openstack.org/meetings/networking_advanced_services/2014/networking_advanced_services.2014-06-27-17.30.html

- Stephen


On Wed, Jul 2, 2014 at 6:56 AM, Vijay Venkatachalam <
vijay.venkatacha...@citrix.com> wrote:

>  Hi,
>
>I didn't attend the flavor framework meeting that was scheduled on irc
> #openstack-meeting-3  last Friday. Will be interested to see the meeting
> log/minutes. Was it captured?
>
> Thanks,
> Vijay V
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VMware networking

2014-07-02 Thread Kyle Mestery
1400UTC works for me, and I've put it on my calendar. Thanks!

On Wed, Jul 2, 2014 at 6:27 AM, Gary Kotton  wrote:
> Hi,
> Sadly last night night we did not have enough people to make any progress.
> Lets try again next week Monday at 14:00 UTC. The meeting will take place
> on #openstack-vmware channel
> Alut a continua
> Gary
>
> On 6/30/14, 6:38 PM, "Kyle Mestery"  wrote:
>
>>On Mon, Jun 30, 2014 at 10:18 AM, Armando M.  wrote:
>>> Hi Gary,
>>>
>>> Thanks for sending this out, comments inline.
>>>
>>Indeed, thanks Gary!
>>
>>> On 29 June 2014 00:15, Gary Kotton  wrote:

 Hi,
 At the moment there are a number of different BP's that are proposed to
 enable different VMware network management solutions. The following
specs
 are in review:

 VMware NSX-vSphere plugin: https://review.openstack.org/102720
 Neutron mechanism driver for VMWare vCenter DVS network
 creation:https://review.openstack.org/#/c/101124/
 VMware dvSwitch/vSphere API support for Neutron ML2:
 https://review.openstack.org/#/c/100810/

>>I've commented in these reviews about combining efforts here, I'm glad
>>you're taking the lead to make this happen Gary. This is much
>>appreciated!
>>
 In addition to this there is also talk about HP proposing some for of
 VMware network management.
>>>
>>>
>>> I believe this is blueprint [1]. This was proposed a while ago, but now
>>>it
>>> needs to go through the new BP review process.
>>>
>>> [1] -
>>>https://blueprints.launchpad.net/neutron/+spec/ovsvapp-esxi-vxlan
>>>

 Each of the above has specific use case and will enable existing
vSphere
 users to adopt and make use of Neutron.

 Items #2 and #3 offer a use case where the user is able to leverage and
 manage VMware DVS networks. This support will have the following
 limitations:

 Only VLANs are supported (there is no VXLAN support)
 No security groups
 #3 ­ the spec indicates that it will make use of pyvmomi

(https://github.com/vmware/pyvmomi).
 There are a number of disclaimers here:

 This is currently blocked regarding the integration into the
requirements
 project (https://review.openstack.org/#/c/69964/)
 The idea was to have oslo.vmware leverage this in the future

(https://github.com/openstack/oslo.vmware)

 Item #1 will offer support for all of the existing Neutron API's and
their
 functionality. This solution will require an additional component
called NSX
 (https://www.vmware.com/support/pubs/nsx_pubs.html).

>>>
>>> It's great to see this breakdown, it's very useful in order to identify
>>>the
>>> potential gaps and overlaps amongst the various efforts around ESX and
>>> Neutron. This will also ensure a path towards a coherent code
>>>contribution.
>>>
 It would be great if we could all align our efforts and have some clear
 development items for the community. In order to do this I¹d like
suggest
 that we meet to sync and discuss all efforts. Please let me know if the
 following sounds ok for an initial meeting to discuss how we can move
 forwards:
  - Tuesday 15:00 UTC
  - IRC channel #openstack-vmware
>>>
>>>
>>> I am available to join.
>>>


 We can discuss the following:

 Different proposals
 Combining efforts
 Setting a formal time for meetings and follow ups

 Looking forwards to working on this stuff with the community and
providing
 a gateway to using Neutron and further enabling the adaption of
OpenStack.
>>>
>>>
>>> I think code contribution is only one aspect of this story; my other
>>>concern
>>> is that from a usability standpoint we would need to provide a clear
>>> framework for users to understand what these solutions can do for them
>>>and
>>> which one to choose.
>>>
>>> Going forward I think it would be useful if we produced an overarching
>>> blueprint that outlines all the ESX options being proposed for OpenStack
>>> Networking (and the existing ones, like NSX - formerly known as NVP, or
>>> nova-network), their benefits and drawbacks, their technical
>>>dependencie

Re: [openstack-dev] [neutron]Performance of security group

2014-07-02 Thread Miguel Angel Ajo


Nice Shihanzhang,

  Do you mean the ipset implementation is ready, or just the spec?


  For the SG refactor, I don't worry about who does it, or who
takes the credit, but I believe it's important we address this 
bottleneck during Juno, trying to match nova's scalability.


Best regards,
Miguel Ángel.


On 07/02/2014 02:50 PM, shihanzhang wrote:

hi Miguel Ángel and Ihar Hrachyshka,
I agree with you that we should split the work into several specs. I have
finished my part (the ipset optimization), and you can do 'sg rpc optimization
(without fanout)'.
As for the third part (sg rpc optimization (with fanout)), I think we need to
talk about it, because just using ipset to optimize the security group agent
code does not bring the best results!

 Best regards,
shihanzhang.








At 2014-07-02 04:43:24, "Ihar Hrachyshka"  wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 02/07/14 10:12, Miguel Angel Ajo wrote:


Shihazhang,

I really believe we need the RPC refactor done for this cycle, and
given the close deadlines we have (July 10 for spec submission and
July 20 for spec approval).

Don't you think it's going to be better to split the work in
several specs?

1) ipset optimization   (you) 2) sg rpc optimization (without
fanout) (me) 3) sg rpc optimization (with fanout) (edouard, you ,
me)


This way we increase the chances of having part of this for the
Juno cycle. If we go for something too complicated is going to take
more time for approval.



I agree. And it not only increases the chances of getting at least some of
those highly demanded performance enhancements into Juno, it's
also "the right thing to do" (c). It's counterproductive to put
multiple vaguely related enhancements in single spec. This would dim
review focus and put us into position of getting 'all-or-nothing'. We
can't afford that.

Let's leave one spec per enhancement. @Shihazhang, what do you think?



Also, I proposed the details of "2", trying to bring awareness on
the topic, as I have been working with the scale lab in Red Hat to
find and understand those issues, I have a very good knowledge of
the problem and I believe I could make a very fast advance on the
issue at the RPC level.

Given that, I'd like to work on this specific part, whether or not
we split the specs, as it's something we believe critical for
neutron scalability and thus, *nova parity*.

I will start a separate spec for "2", later on, if you find it ok,
we keep them as separate ones, if you believe having just 1 spec
(for 1 & 2) is going be safer for juno-* approval, then we can
incorporate my spec in yours, but then "add-ipset-to-security" is
not a good spec title to put all this together.


Best regards, Miguel Ángel.


On 07/02/2014 03:37 AM, shihanzhang wrote:


hi Miguel Angel Ajo Pelayo! I agree with you and will modify my spec,
but I will also optimize the RPC from the security group agent to the
neutron server. Now the model is 'port[rule1,rule2...], port...';
I will change it to 'port[sg1, sg2..]', which can reduce the size
of the RPC response message from the neutron server to the security group
agent.
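
Roughly speaking, the change would look something like this (an illustrative
sketch only, not the final wire format):

# Illustrative sketch only -- not the actual RPC payload format.
rule_ssh = {'protocol': 'tcp', 'port_range_min': 22, 'port_range_max': 22}
rule_web = {'protocol': 'tcp', 'port_range_min': 80, 'port_range_max': 80}

# Today every port carries its fully expanded rule list, so the same
# rules are repeated for every port in the same group:
current_response = {
    'port1': [rule_ssh, rule_web],
    'port2': [rule_ssh, rule_web],
}

# Proposed: ports only reference security group ids and each group
# (rules, member IPs) is sent once; the agent combines them locally:
proposed_response = {
    'ports': {'port1': ['sg1', 'sg2'], 'port2': ['sg1']},
    'security_groups': {
        'sg1': {'rules': [rule_ssh], 'member_ips': ['10.0.0.5']},
        'sg2': {'rules': [rule_web], 'member_ips': []},
    },
}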

At 2014-07-01 09:06:17, "Miguel Angel Ajo Pelayo"
 wrote:



Ok, I was talking with Édouard @ IRC, and as I have time to
work into this problem, I could file an specific spec for the
security group RPC optimization, a masterplan in two steps:

1) Refactor the current RPC communication for
security_groups_for_devices, which could be used for full
syncs, etc..

2) Benchmark && make use of a fanout queue per security group
to make sure only the hosts with instances on a certain
security group get the updates as they happen.

@shihanzhang do you find it reasonable?



- Original Message -

- Original Message -

@Nachi: Yes that could a good improvement to factorize the
RPC

mechanism.


Another idea: What about creating a RPC topic per security
group (quid of the

RPC topic

scalability) on which an agent subscribes if one of its
ports is

associated

to the security group?

Regards, Édouard.





Hmm, Interesting,

@Nachi, I'm not sure I fully understood:


SG_LIST [ SG1, SG2] SG_RULE_LIST = [SG_Rule1, SG_Rule2] ..
port[SG_ID1, SG_ID2], port2 , port3


Probably we may need to include also the SG_IP_LIST =
[SG_IP1, SG_IP2] ...


and let the agent do all the combination work.

Something like this could make sense?

Security_Groups = {SG1:{IPs:[],RULES:[]},
SG2:{IPs:[],RULES:[]} }

Ports = {Port1:[SG1, SG2], Port2: [SG1]  }


@Edouard, actually I like the idea of having the agent
subscribed to security groups they have ports on... That
would remove the need to include all the security groups
information on every call...

But would need another call to get the full information of a
set of security groups at start/resync if we don't already
have any.




On Fri, Jun 20, 2014 at 4:04 AM, shihanzhang <

ayshihanzh...@126.com >

wrote:



hi Miguel Ángel, I am very agree with you about the
following point:

* physical implementation on the hosts (ipsets, nftable

Re: [openstack-dev] Neutron ML2 Blueprints

2014-07-02 Thread Vladimir Kuklin
Sorry, guys, forgot to prepend [Fuel] prefix.


On Wed, Jul 2, 2014 at 5:24 PM, Vladimir Kuklin 
wrote:

> Guys, as we are working on 2 separate implementations of ML2 plugin
> support, I suggest to split it into 2 blueprints:
>
> 1. https://blueprints.launchpad.net/fuel/+spec/ml2-neutron - this
> blueprint is initial ML2 support and is being implemented by Sergey
> Vasilenko (@xenolog) and is almost ready and being tested right now. This
> implementation allows us to issue 5.1 release with ml2 support and unblock
> other teams that need ML2 support. HIs blueprint design document should be
> attached to this blueprint while current fuel-spec should be attached to
> the blueprint I suggest to create.
>
> 2. Create blueprint called "Merge Neutron and ML2 plugins upstream
> support" and target it for future releases. This implementation spec is in
> fuel-specs here: https://review.openstack.org/#/c/99807/  under work by
> Andrew Woodward (@xarses) and should be moved to this blueprint. AFAIK, he
> is experiencing problems with implementation and it will not make it to
> this release.
>
>
>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 45bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com 
> www.mirantis.ru
> vkuk...@mirantis.com
>



-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Neutron ML2 Blueprints

2014-07-02 Thread Vladimir Kuklin
Guys, as we are working on 2 separate implementations of ML2 plugin
support, I suggest to split it into 2 blueprints:

1. https://blueprints.launchpad.net/fuel/+spec/ml2-neutron - this blueprint
is initial ML2 support and is being implemented by Sergey Vasilenko
(@xenolog) and is almost ready and being tested right now. This
implementation allows us to issue 5.1 release with ml2 support and unblock
other teams that need ML2 support. His blueprint design document should be
attached to this blueprint while current fuel-spec should be attached to
the blueprint I suggest to create.

2. Create blueprint called "Merge Neutron and ML2 plugins upstream support"
and target it for future releases. This implementation spec is in
fuel-specs here: https://review.openstack.org/#/c/99807/  under work by
Andrew Woodward (@xarses) and should be moved to this blueprint. AFAIK, he
is experiencing problems with implementation and it will not make it to
this release.

-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Flavor meeting log captured?

2014-07-02 Thread Vijay Venkatachalam
Hi,

   I didn't attend the flavor framework meeting that was scheduled on irc 
#openstack-meeting-3  last Friday. Will be interested to see the meeting 
log/minutes. Was it captured?

Thanks,
Vijay V

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-07-02 Thread Gordon Sim

On 07/01/2014 03:52 PM, Ihar Hrachyshka wrote:

On 01/07/14 15:55, Alexei Kornienko wrote:

And in addition I've provided some links to existing
implementation with places that IMHO cause bottlenecks. From my
point of view that code is doing obviously stupid things (like
closing/opening sockets for each message sent).


That indeed sounds bad.


That is enough for me to rewrite it even without additional
proofs that it's wrong.


[Full disclosure: I'm not as involved into oslo.messaging internals as
you probably are, so I may speak out dumb things.]

I wonder whether there are easier ways to fix that particular issue
without rewriting everything from scratch. Like, provide a pool of
connections and make send() functions use it instead of creating new
connections (?)


I did a little investigation, using the test scripts from Alexei. 
Looking at the protocol trace with wireshark, you can certainly see lots 
of redundant stuff. However I did *not* see connections being opened and 
closed.


What I did see was the AMQP (0-9-1) channel being opened and closed, and 
the exchange being declared for every published message. Both these 
actions are synchronous. This clearly impacts throughput and, in my view 
more importantly, will limit scalability by giving the broker redundant 
work.


However, I think this could be avoided (at least to the extreme extent 
it occurs at present) by a simple patch. I posted this up to gerrit in 
case anyone is interested: https://review.openstack.org/#/c/104194/
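
For illustration (this is not the patch above), a kombu-level sketch of the general idea: reuse one channel and let the producer declare the exchange a single time, so publishing does not repeat the channel open/close and exchange declaration for every message:

    # Sketch with assumed broker URL and exchange name, for illustration only.
    from kombu import Connection, Exchange, Producer

    exchange = Exchange('test_exchange', type='topic', durable=False)

    with Connection('amqp://guest:guest@localhost//') as conn:
        # The producer declares the exchange once, when it is created.
        producer = Producer(conn.channel(), exchange=exchange)
        for i in range(1000):
            # No per-message channel churn or exchange re-declaration here.
            producer.publish({'seq': i}, routing_key='test')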


With this in place the ingress rate improves significantly [1]. The bottleneck 
then appears to be the acking of messages. This means the queue depth 
tends to get higher than before, and the traffic appears more 'bursty'. 
Each message is acked individually, whereas some degree of batching 
might be possible which would likely improve the efficiency there a 
little bit.


Anyway, just an observation I thought it would be interesting to share.

--Gordon.

[1] From less than 400 msgs/sec to well over 1000 msgs/sec on my laptop

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Neutron ML2 Blueprints

2014-07-02 Thread Vladimir Kuklin
Guys, as we are working on 2 separate implementations of ML2 plugin
support, I suggest to split it into 2 blueprints:

1. https://blueprints.launchpad.net/fuel/+spec/ml2-neutron - this blueprint
is initial ML2 support and is being implemented by Sergey Vasilenko
(@xenolog) and is almost ready and being tested right now. This
implementation allows us to issue 5.1 release with ml2 support and unblock
other teams that need ML2 support. His blueprint design document should be
attached to this blueprint while current fuel-spec should be attached to
the blueprint I suggest to create.

2. Create blueprint called "Merge Neutron and ML2 plugins upstream support"
and target it for future releases. This implementation spec is in
fuel-specs here: https://review.openstack.org/#/c/99807/  under work by
Andrew Woodward (@xarses) and should be moved to this blueprint. AFAIK, he
is experiencing problems with implementation and it will not make it to
this release.





-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Mid-cycle sprint for remote people ?

2014-07-02 Thread Sylvain Bauza
Le 02/07/2014 14:51, Andrew Laski a écrit :
>
> On 07/02/2014 06:09 AM, Thierry Carrez wrote:
>> Sylvain Bauza wrote:
>>> Hi,
>>>
>>> I won't be able to attend the mid-cycle sprint due to a good family
>>> reason (a new baby 2.0 release expected to land by these dates), so I'm
>>> wondering if it's possible to webcast some of the sessions so people
>>> who
>>> are not there can still share their voices ?
>> It's the same issues we have with remote participation to design summit
>> sessions. It's possible to arrange a specific person (or set of persons)
>> to attend a specific session (via some combination of tools). It's
>> impossible to let everyone attend everything without significantly
>> impacting the quality of the in-person discussion.
>
> As a remote participant at the last mid-cycle meeting I'm going to
> agree with this.  I successfully participated in a few topics and
> otherwise just tried to listen in to what was being discussed. Getting
> involved with most discussions was difficult because although the
> audio was quite good for a remote setup there were two challenges.  I
> couldn't catch everything that was said and was likely to mention
> something that had already been discussed, which regressed the
> conversation.  And because one participant in the call is a room and
> not someone close to a microphone you end up having to treat it like a
> half-duplex connection which significantly slows down discussion.
>
> So while I enjoyed listening in and am very thankful for the
> opportunity to have done so, it is not at all a substitute for being
> there.
>
>> So ideally the agenda would identify key missing stakeholders for
>> specific sessions, and try to patch them in. And in all cases, you must
>> make sure any discussion is documented and no decision is final, so that
>> excluded people can still chime in.
>>
>> Regards,
>>
>

Hi Andrew and Thierry,

Thanks for your feedback. I fully understand that, most of the time,
remote participants slow down the live conversation because of the audio
quality or misunderstandings.

With regards to that, I would propose the following:
 - key missing participants need to be identified for each session
(possibly based on voting) and only those would participate
remotely. Others could still listen to the stream, but no
voice could be shared.
 - for each key participant, a local peer has to be proposed who will be
attending the session and proxying the remote voice. No audio questions from
the remote person: they type their questions to the peer, who then speaks
on their behalf.
 - the key missing participant has to accept that the peer may not
convey their ideas exactly. That's a "voting delegation"
(well, sort of...), so if questions come back to the questioner,
it's the peer who answers, without waiting for the remote reply.
 - the peer has to accept the extra load that a mixed
IRC+live communication represents.

The idea above is just to find a way to allow some interactivity, while
still keeping in-person discussion as the primary way to
communicate.

-Sylvain


>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [Swift] Question re. keystone domains

2014-07-02 Thread Coles, Alistair
Thanks Dolph, that’s helpful.

For backwards compatibility reasons we need to preserve legacy access control 
list behavior for projects/users in the default domain only. To achieve that, 
we need to persist the project’s domain id in Swift. If a project subsequently 
moved to/from the default domain then Swift wouldn’t ‘break’ but would 
potentially (dis)allow legacy ACLs based on stale domain information.

If the backwards compatibility feature in Swift were configurable (on/off) 
then, as you suggest, some text alongside the Keystone config option (and 
corresponding in Swift) can act as a warning not to enable both options.
Alistair


From: Dolph Mathews [mailto:dolph.math...@gmail.com]
Sent: 01 July 2014 20:15
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Keystone] [Swift] Question re. keystone domains


On Tue, Jul 1, 2014 at 11:20 AM, Coles, Alistair 
mailto:alistair.co...@hp.com>> wrote:
We have a change [1] under review in Swift to make access control lists 
compatible with migration to keystone v3 domains. The change makes two 
assumptions that I’d like to double-check with keystone folks:


1.  That a project can never move from one domain to another.
We're moving in this direction, at least. In Grizzly and Havana, we made no 
such restriction. In Icehouse, we introduced such a restriction by default, but 
it can be disabled. So far, we haven't gotten any complaints about adding the 
restriction, so maybe we should just add additional help text to the option in 
our config about why you would never want to disable the restriction, citing 
how it would break swift?

2.  That the underscore character cannot appear in a valid domain id – more 
specifically, that the string ‘_unknown’ cannot be confused with a domain id.
That's fairly sound. All of our domain ID's are system-assigned as UUIDs, 
except for the "default" domain which has an explicit id='default'. We don't do 
anything to validate the assumption, though.
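
A small sketch of the assumption being relied on here, purely for illustration (this check does not exist in keystone or swift): a real domain ID is either the literal 'default' or a system-assigned UUID, so a sentinel like '_unknown' can never collide with one.

    import uuid

    def looks_like_domain_id(value):
        # Hypothetical check: keystone assigns UUIDs as domain IDs, with
        # 'default' as the only non-UUID exception.
        if value == 'default':
            return True
        try:
            uuid.UUID(value)
            return True
        except ValueError:
            return False

    assert looks_like_domain_id(uuid.uuid4().hex)
    assert not looks_like_domain_id('_unknown')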

Are those safe assumptions?

Thanks,
Alistair

[1] https://review.openstack.org/86430

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Mid-cycle sprint for remote people ?

2014-07-02 Thread Andrew Laski


On 07/02/2014 06:09 AM, Thierry Carrez wrote:

Sylvain Bauza wrote:

Hi,

I won't be able to attend the mid-cycle sprint due to a good family
reason (a new baby 2.0 release expected to land by these dates), so I'm
wondering if it's possible to webcast some of the sessions so people who
are not there can still share their voices ?

It's the same issues we have with remote participation to design summit
sessions. It's possible to arrange a specific person (or set of persons)
to attend a specific session (via some combination of tools). It's
impossible to let everyone attend everything without significantly
impacting the quality of the in-person discussion.


As a remote participant at the last mid-cycle meeting I'm going to agree 
with this.  I successfully participated in a few topics and otherwise 
just tried to listen in to what was being discussed. Getting involved 
with most discussions was difficult because although the audio was quite 
good for a remote setup there were two challenges.  I couldn't catch 
everything that was said and was likely to mention something that had 
already been discussed, which regressed the conversation.  And because 
one participant in the call is a room and not someone close to a 
microphone you end up having to treat it like a half-duplex connection 
which significantly slows down discussion.


So while I enjoyed listening in and am very thankful for the opportunity 
to have done so, it is not at all a substitute for being there.



So ideally the agenda would identify key missing stakeholders for
specific sessions, and try to patch them in. And in all cases, you must
make sure any discussion is documented and no decision is final, so that
excluded people can still chime in.

Regards,




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Performance of security group

2014-07-02 Thread shihanzhang
hi Miguel Ángel and Ihar Hrachyshka,
I agree with you that we should split the work into several specs. I have finished my
part of the work (ipset optimization), and you can do 'sg rpc optimization (without fanout)'.
As for the third part (sg rpc optimization (with fanout)), I think we need to talk
about it, because just using ipset to optimize the security group agent code does
not bring the best results!


Best regards,
shihanzhang.
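
To make the ipset idea concrete, a rough sketch only (the naming and chain below are hypothetical, not the actual agent code): the agent could keep one ipset per security group and reference it from a single iptables rule, instead of one rule per member address.

    import subprocess

    def sync_sg_ipset(sg_id, member_ips):
        # Hypothetical sketch: one ipset per security group, so the number of
        # iptables rules no longer grows with the number of member IPs.
        set_name = 'sg-%s' % sg_id[:8]
        subprocess.check_call(['ipset', '-exist', 'create', set_name, 'hash:ip'])
        for ip in member_ips:
            subprocess.check_call(['ipset', '-exist', 'add', set_name, ip])
        # A single iptables rule can then match the whole group, e.g.:
        #   iptables -I <sg-chain> -m set --match-set sg-XXXX src -j RETURN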










At 2014-07-02 04:43:24, "Ihar Hrachyshka"  wrote:
>-BEGIN PGP SIGNED MESSAGE-
>Hash: SHA512
>
>On 02/07/14 10:12, Miguel Angel Ajo wrote:
>> 
>> Shihazhang,
>> 
>> I really believe we need the RPC refactor done for this cycle, and
>> given the close deadlines we have (July 10 for spec submission and 
>> July 20 for spec approval).
>> 
>> Don't you think it's going to be better to split the work in
>> several specs?
>> 
>> 1) ipset optimization   (you) 2) sg rpc optimization (without
>> fanout) (me) 3) sg rpc optimization (with fanout) (edouard, you ,
>> me)
>> 
>> 
>> This way we increase the chances of having part of this for the
>> Juno cycle. If we go for something too complicated is going to take
>> more time for approval.
>> 
>
>I agree. And it not only increases chances to get at least some of
>those highly demanded performance enhancements to get into Juno, it's
>also "the right thing to do" (c). It's counterproductive to put
>multiple vaguely related enhancements in single spec. This would dim
>review focus and put us into position of getting 'all-or-nothing'. We
>can't afford that.
>
>Let's leave one spec per enhancement. @Shihazhang, what do you think?
>
>> 
>> Also, I proposed the details of "2", trying to bring awareness on
>> the topic, as I have been working with the scale lab in Red Hat to
>> find and understand those issues, I have a very good knowledge of
>> the problem and I believe I could make a very fast advance on the
>> issue at the RPC level.
>> 
>> Given that, I'd like to work on this specific part, whether or not
>> we split the specs, as it's something we believe critical for 
>> neutron scalability and thus, *nova parity*.
>> 
>> I will start a separate spec for "2", later on, if you find it ok, 
>> we keep them as separate ones, if you believe having just 1 spec
>> (for 1 & 2) is going be safer for juno-* approval, then we can
>> incorporate my spec in yours, but then "add-ipset-to-security" is
>> not a good spec title to put all this together.
>> 
>> 
>> Best regards, Miguel Ángel.
>> 
>> 
>> On 07/02/2014 03:37 AM, shihanzhang wrote:
>>> 
>>> hi Miguel Angel Ajo Pelayo! I agree with you and will modify my spec,
>>> but I will also optimize the RPC from the security group agent to
>>> the neutron server. Now the model is 'port[rule1,rule2...], port...';
>>> I will change it to 'port[sg1, sg2..]'. This can reduce the size
>>> of the RPC response message from the neutron server to the security group
>>> agent.
>>> 
>>> At 2014-07-01 09:06:17, "Miguel Angel Ajo Pelayo" 
>>>  wrote:
 
 
 Ok, I was talking with Édouard @ IRC, and as I have time to
 work into this problem, I could file an specific spec for the
 security group RPC optimization, a masterplan in two steps:
 
 1) Refactor the current RPC communication for 
 security_groups_for_devices, which could be used for full
 syncs, etc..
 
 2) Benchmark && make use of a fanout queue per security group
 to make sure only the hosts with instances on a certain
 security group get the updates as they happen.
 
 @shihanzhang do you find it reasonable?
 
 
 
 - Original Message -
> - Original Message -
>> @Nachi: Yes, that could be a good improvement to factorize the
>> RPC
> mechanism.
>> 
>> Another idea: What about creating a RPC topic per security
>> group (quid of the
> RPC topic
>> scalability) on which an agent subscribes if one of its
>> ports is
> associated
>> to the security group?
>> 
>> Regards, Édouard.
>> 
>> 
> 
> 
> Hmm, Interesting,
> 
> @Nachi, I'm not sure I fully understood:
> 
> 
> SG_LIST [ SG1, SG2] SG_RULE_LIST = [SG_Rule1, SG_Rule2] .. 
> port[SG_ID1, SG_ID2], port2 , port3
> 
> 
> Probably we may need to include also the SG_IP_LIST =
> [SG_IP1, SG_IP2] ...
> 
> 
> and let the agent do all the combination work.
> 
> Something like this could make sense?
> 
> Security_Groups = {SG1:{IPs:[],RULES:[]}, 
> SG2:{IPs:[],RULES:[]} }
> 
> Ports = {Port1:[SG1, SG2], Port2: [SG1]  }
> 
> 
> @Edouard, actually I like the idea of having the agent
> subscribed to security groups they have ports on... That
> would remove the need to include all the security groups
> information on every call...
> 
> But would need another call to get the full information of a
> set of security groups at start/resync if we don't already
> have any.

Re: [openstack-dev] [all] milestone-proposed is dead, long lives proposed/foo

2014-07-02 Thread Yuriy Taraday
Hello.

On Fri, Jun 27, 2014 at 4:44 PM, Thierry Carrez 
wrote:

> For all those reasons, we decided at the last summit to use unique
> pre-release branches, named after the series (for example,
> "proposed/juno"). That branch finally becomes "stable/juno" at release
> time. In parallel, we abandoned the usage of release branches for
> development milestones, which are now tagged directly on the master
> development branch.
>

I know that this question has been raised before but I still would like to
clarify this.
Why do we need these short-lived 'proposed' branches in any form? Why can't
we just use release branches for this and treat them as stable when
appropriate tag is added to some commit in them?

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova} NFV patches

2014-07-02 Thread Luke Gorrie
On 2 July 2014 10:39, Gary Kotton  wrote:

>  There are some patches that are relevant to the NFV support. There are
> as follows:
>

Additionally, we who are building Deutsche Telekom's open source NFV
implementation will be able to make that available to the whole community
if the VIF_VHOSTUSER spec is approved for Juno. Then we could help others
to deploy this too which would be great.

Blueprint: https://blueprints.launchpad.net/nova/+spec/vif-vhostuser

Cheers,
-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-07-02 Thread Lingxian Kong
IMO, 'spec' is indeed a good idea and useful for tracking
features, although it's a little tough for those of us who do not use
English as a native language. But we still need to identify these 'small
features', and core reviewers should review and approve them ASAP, so that
we can avoid wasting a lot of time waiting for the code implementation.

2014-07-02 2:08 GMT+08:00 Devananda van der Veen :
> On Tue, Jul 1, 2014 at 10:02 AM, Dolph Mathews  
> wrote:
>> The argument has been made in the past that small features will require
>> correspondingly small specs. If there's a counter-argument to this example
>> (a "small" feature requiring a relatively large amount of spec effort), I'd
>> love to have links to both the spec and the resulting implementation so we
>> can discuss exactly why the spec was an unnecessary additional effort.
>>
>>
>> On Tue, Jul 1, 2014 at 10:30 AM, Jason Dunsmore
>>  wrote:
>>>
>>> On Mon, Jun 30 2014, Joshua Harlow wrote:
>>>
>>> > There is a balance here that needs to be worked out and I've seen
>>> > specs start to turn into requirements for every single patch (even if
>>> > the patch is pretty small). I hope we can rework the 'balance in the
>>> > force' to avoid being so strict that every little thing requires a
>>> > spec. This will not end well for us as a community.
>>> >
>>> > How have others thought the spec process has worked out so far? To
>>> > much overhead, to little…?
>>> >
>>> > I personally am of the opinion that specs should be used for large
>>> > topics (defining large is of course arbitrary); and I hope we find the
>>> > right balance to avoid scaring everyone away from working with
>>> > openstack. Maybe all of this is part of openstack maturing, I'm not
>>> > sure, but it'd be great if we could have some guidelines around when
>>> > is a spec needed and when isn't it and take it into consideration when
>>> > requesting a spec that the person you have requested may get
>>> > frustrated and just leave the community (and we must not have this
>>> > happen) if you ask for it without explaining why and how clearly.
>>>
>>> +1 I think specs are too much overhead for small features.  A set of
>>> guidelines about when specs are needed would be sufficient.  Leave the
>>> option about when to submit a design vs. when to submit code to the
>>> contributor.
>>>
>>> Jason
>>>
>
> Yes, there needs to be balance, but as far as I have seen, folks are
> finding the balance around when to require specs within each of the
> project teams. I am curious if there are any specific examples where a
> project's core team required a "large spec" for what they considered
> to be a "small feature".
>
> I also feel strongly that the spec process has been very helpful for
> the projects that I'm involved in for fleshing out the implications of
> changes which may at first glance seem small, by requiring both
> proposers and reviewers to think about and discuss the wider
> ramifications for changes in a way that simply reviewing code often
> does not.
>
> Just my 2c,
> Devananda
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Regards!
---
Lingxian Kong

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] guestagent config for overriding managers

2014-07-02 Thread Denis Makogon
Hi Craig.

This seems like a perfect task for stevedore and its plugin system. I do agree
that it looks very nasty to have a huge dict of managers.
I don't like the idea of placing 'manager' under config groups, because
each config group has to be registered before its options can be used.

There should be another way to deal with it. As I already said, we should
take a look at stevedore.

Best regards,
Denis Makogon
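
As a sketch of the stevedore approach (the namespace and entry-point names below are made up for illustration, not an agreed interface), each datastore manager would be registered as an entry point and loaded by name, so no giant one-line mapping is needed in the config file:

    # Illustration only. Packages would register managers in setup.cfg, e.g.:
    #   [entry_points]
    #   trove.guestagent.managers =
    #       mysql = my.guestagent.datastore.mysql.manager:Manager
    #       percona = my.guestagent.datastore.mysql.manager:Manager
    from stevedore import driver

    def load_datastore_manager(datastore_name):
        mgr = driver.DriverManager(
            namespace='trove.guestagent.managers',
            name=datastore_name,
            invoke_on_load=True,  # instantiate the Manager class on load
        )
        return mgr.driver

    manager = load_datastore_manager('mysql')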


On Wed, Jul 2, 2014 at 7:34 AM, Craig Vyvial  wrote:

> If you want to override the trove guestagent managers it looks really
> nasty to have EVERY manager on a single line here.
>
> datastore_registry_ext =
> mysql:my.guestagent.datastore.mysql.manager.Manager,percona:my.guestagent.datastore.mysql.manager.Manager,...
>
> This needs to be tidied up and split out some way.
> Ideally each of these should be on a single line.
>
> datastore_registry_ext =
> mysql:my.guestagent.datastore.mysql.manager.Manager
> datastore_registry_ext =
> percona:my.guestagent.datastore.mysql.manager.Manager
>
> or maybe...
>
> datastores = mysql,percona
> [mysql]
> manager = my.guestagent.datastore.mysql.manager.Manager
> [percona]
> manager = my.guestagent.datastore.percona.manager.Manager
>
> After typing out the second idea I don't like it as much as something like
> the first way.
>
> Thoughts?
>
> Thanks,
> - Craig Vyvial
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] CI setup related request (devstack-gate)

2014-07-02 Thread daya kamath
all,
I need to make some changes in my 3rd-party CI system. Specifically,
I need to override devstack-gate-vm.sh for the mysql path - I can do this by
setting SKIP_DEVSTACK_GATE_PROJECT in my examples.yaml file, although this
means I will need to manually update the devstack-gate directory.
However, I also need to do this in the tempest directory
(I need to overwrite the python-subunit and testtools versions in the
requirements.txt and test-requirements.txt files).


The problem is that the scripts are set up to do a clean install for every run:
git does a hard reset, checkout and clean (functions.sh).

I am working on writing my own version of setup_project for some specific
projects.

Can this be better handled by allowing specific projects to be merged
instead of overwritten while setting up the workspace?

thanks!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VMware networking

2014-07-02 Thread Gary Kotton
Hi,
Sadly, last night we did not have enough people to make any progress.
Let's try again next week on Monday at 14:00 UTC. The meeting will take place
on the #openstack-vmware channel.
Alut a continua
Gary

On 6/30/14, 6:38 PM, "Kyle Mestery"  wrote:

>On Mon, Jun 30, 2014 at 10:18 AM, Armando M.  wrote:
>> Hi Gary,
>>
>> Thanks for sending this out, comments inline.
>>
>Indeed, thanks Gary!
>
>> On 29 June 2014 00:15, Gary Kotton  wrote:
>>>
>>> Hi,
>>> At the moment there are a number of different BPs that are proposed to
>>> enable different VMware network management solutions. The following
>>>specs
>>> are in review:
>>>
>>> VMware NSX-vSphere plugin: https://review.openstack.org/102720
>>> Neutron mechanism driver for VMWare vCenter DVS network
>>> creation:https://review.openstack.org/#/c/101124/
>>> VMware dvSwitch/vSphere API support for Neutron ML2:
>>> https://review.openstack.org/#/c/100810/
>>>
>I've commented in these reviews about combining efforts here, I'm glad
>you're taking the lead to make this happen Gary. This is much
>appreciated!
>
>>> In addition to this there is also talk about HP proposing some for of
>>> VMware network management.
>>
>>
>> I believe this is blueprint [1]. This was proposed a while ago, but now
>>it
>> needs to go through the new BP review process.
>>
>> [1] - https://blueprints.launchpad.net/neutron/+spec/ovsvapp-esxi-vxlan
>>
>>>
>>> Each of the above has a specific use case and will enable existing vSphere
>>> users to adopt and make use of Neutron.
>>>
>>> Items #2 and #3 offer a use case where the user is able to leverage and
>>> manage VMware DVS networks. This support will have the following
>>> limitations:
>>>
>>> Only VLANs are supported (there is no VXLAN support)
>>> No security groups
>>> #3 - the spec indicates that it will make use of pyvmomi
>>> (https://github.com/vmware/pyvmomi).
>>> There are a number of disclaimers here:
>>>
>>> This is currently blocked regarding the integration into the
>>>requirements
>>> project (https://review.openstack.org/#/c/69964/)
>>> The idea was to have oslo.vmware leverage this in the future
>>> (https://github.com/openstack/oslo.vmware)
>>>
>>> Item #1 will offer support for all of the existing Neutron APIs and their
>>> functionality. This solution will require an additional component called NSX
>>> (https://www.vmware.com/support/pubs/nsx_pubs.html).
>>>
>>
>> It's great to see this breakdown, it's very useful in order to identify
>>the
>> potential gaps and overlaps amongst the various efforts around ESX and
>> Neutron. This will also ensure a path towards a coherent code
>>contribution.
>>
>>> It would be great if we could all align our efforts and have some clear
>>> development items for the community. In order to do this I'd like to suggest
>>> that we meet to sync and discuss all efforts. Please let me know if the
>>> following sounds ok for an initial meeting to discuss how we can move
>>> forwards:
>>>  - Tuesday 15:00 UTC
>>>  - IRC channel #openstack-vmware
>>
>>
>> I am available to join.
>>
>>>
>>>
>>> We can discuss the following:
>>>
>>> Different proposals
>>> Combining efforts
>>> Setting a formal time for meetings and follow ups
>>>
>>> Looking forwards to working on this stuff with the community and
>>>providing
>>> a gateway to using Neutron and further enabling the adaption of
>>>OpenStack.
>>
>>
>> I think code contribution is only one aspect of this story; my other
>>concern
>> is that from a usability standpoint we would need to provide a clear
>> framework for users to understand what these solutions can do for them
>>and
>> which one to choose.
>>
>> Going forward I think it would be useful if we produced an overarching
>> blueprint that outlines all the ESX options being proposed for OpenStack
>> Networking (and the existing ones, like NSX - formerly known as NVP, or
>> nova-network), their benefits and drawbacks, their technical
>>dependencies,
>> system requirements, API supported etc. so that a user can make an
>>informed
>> decision when looking at ESX deployments in OpenStack.
>>
>>>
>>>
>>> Thanks
>>> Gary
>>>
>>
>> Cheers,
>> Armando
>>
>>>
>>> 

Re: [openstack-dev] [OpenStack][DevStack] Failed to install OpenStack with DevStack

2014-07-02 Thread Jay Lau
Thanks Ken'ichi, it's working for me ;-)

Eli, perhaps you can try again with Ken'ichi's solution.


2014-07-02 14:35 GMT+08:00 Ken'ichi Ohmichi :

> Hi Jay,
>
> I faced the same problem and can pass it with adding the following line
> into localrc:
>
> LOGFILE=/opt/stack/logs/stack.sh.log
>
> Thanks
> Ken'ichi Ohmichi
>
> ---
> 2014-07-02 14:58 GMT+09:00 Jay Lau :
> > Hi,
> >
> > Does any one encounter this error when install devstack? How did you
> resolve
> > this issue?
> >
> > + [[ 1 -ne 0 ]]
> > + echo 'Error on exit'
> > Error on exit
> > + ./tools/worlddump.py -d
> > usage: worlddump.py [-h] [-d DIR]
> > worlddump.py: error: argument -d/--dir: expected one argument
> > 317.292u 180.092s 14:40.93 56.4%0+0k 195042+2987608io 1003pf+0w
> >
> > BTW: I was using ubuntu 12.04
> >
> >  gy...@mesos014.eng.platformlab.ibm.com-84: cat  /etc/*release
> > DISTRIB_ID=Ubuntu
> > DISTRIB_RELEASE=12.04
> > DISTRIB_CODENAME=precise
> > DISTRIB_DESCRIPTION="Ubuntu 12.04.1 LTS"
> >
> > --
> > Thanks,
> >
> > Jay
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Multiple API/RPC process behavior

2014-07-02 Thread Miguel Angel Ajo

I'd recommend you go with ML2, as the standalone openvswitch
plugin will be removed during Juno, making the upgrade
path harder on your side.

I suppose there could be a bug or missing implementation for
multiple RPC workers within the openvswitch plugin, but
as it's deprecated there is a high chance that
it won't be fixed.

On 07/02/2014 09:18 AM, tfre...@redhat.com wrote:

Hello,

I've installed the latest version of RHOS5 and set the multiple API/RPC worker
options in the neutron.conf file to 8 (the number of cores on the host).

After I ran "openstack-service restart" I've got different result for
ML2 and Openvswitch deployments.

In ML2 deployment all 17 (1-parent and 16 child processes)
neutron-server processes exist as was expected.
In Openvswitch deployment only 9 processes of API are spawned, RPC
process didn't spawn.

Should there be any difference between those two installations?



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] One more lifecycle plug point - in scaling groups

2014-07-02 Thread Steven Hardy
On Wed, Jul 02, 2014 at 02:41:19AM +, Adrian Otto wrote:
>Zane,
>If you happen to have a link to this blueprint, could you reply with it? I
>took a look, but did not find it.
>I'd like to suggest that the implementation allow apps to call
>unauthenticated (signed) webhook URLs in order to trigger a scale up/down
>event within a scaling group. This is about the simplest possible API to
>allow any application to control it's own elasticity.

This is already possible, you just need to get the scaling policy alarm URL
into the instance where the application is running.
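
For example, a minimal sketch assuming the pre-signed scaling policy alarm URL has already been delivered to the instance (the URL below is a placeholder, not a real endpoint):

    # Placeholder URL; in practice the pre-signed alarm URL of the scaling
    # policy would be injected into the instance, e.g. via user-data/metadata.
    import requests

    SCALE_UP_URL = 'http://heat-cfn.example.com:8000/v1/signal/...presigned...'

    def trigger_scale_up():
        # POSTing to the pre-signed URL fires the scaling policy; no token needed.
        resp = requests.post(SCALE_UP_URL)
        resp.raise_for_status()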

I believe Zane was referring to:

https://blueprints.launchpad.net/heat/+spec/update-hooks

This is also related to the action aware software config spec:

https://review.openstack.org/#/c/98742/

So in future, you might autoscale nested stacks containing action-aware
software config resources, then you could define specific actions which
happen e.g on scale-down (on action DELETE).

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Mid-cycle sprint for remote people ?

2014-07-02 Thread Thierry Carrez
Sylvain Bauza wrote:
> Hi,
> 
> I won't be able to attend the mid-cycle sprint due to a good family
> reason (a new baby 2.0 release expected to land by these dates), so I'm
> wondering if it's possible to webcast some of the sessions so people who
> are not there can still share their voices ?

It's the same issues we have with remote participation to design summit
sessions. It's possible to arrange a specific person (or set of persons)
to attend a specific session (via some combination of tools). It's
impossible to let everyone attend everything without significantly
impacting the quality of the in-person discussion.

So ideally the agenda would identify key missing stakeholders for
specific sessions, and try to patch them in. And in all cases, you must
make sure any discussion is documented and no decision is final, so that
excluded people can still chime in.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-02 Thread Steven Hardy
On Wed, Jul 02, 2014 at 03:02:14PM +0800, Qiming Teng wrote:
> Just some random thoughts below ...
> 
> On Tue, Jul 01, 2014 at 03:47:03PM -0400, Mike Spreitzer wrote:
> > In AWS, an autoscaling group includes health maintenance functionality --- 
> > both an ability to detect basic forms of failures and an ability to react 
> > properly to failures detected by itself or by a load balancer.  What is 
> > the thinking about how to get this functionality in OpenStack?  Since 
> 
> We are prototyping a solution to this problem at IBM Research - China
> lab.  The idea is to leverage oslo.messaging and ceilometer events for
> instance (possibly other resource such as port, securitygroup ...)
> failure detection and handling.

This sounds interesting, are you planning to propose a spec for heat
describing this work and submit your patches to heat?

> 
> > OpenStack's OS::Heat::AutoScalingGroup has a more general member type, 
> > what is the thinking about what failure detection means (and how it would 
> > be accomplished, communicated)?
> 
> When most OpenStack services are making use of oslo.notify, in theory, a
> service should be able to send/receive events related to resource
> status.  In our current prototype, at least host failure (detected in
> Nova and reported with a patch), VM failure (detected by nova), and some
> lifecycle events of other resources can be detected and then collected
> by Ceilometer.  There is certainly a possibility to listen to the
> message queue directly from Heat, but we only implemented the Ceilometer
> centric approach.

It has been pointed out a few times that in large deployments, different
services may not share the same message bus.  So while *an* option could be
heat listening to the message bus, I'd prefer that we maintain the alarm
notifications via the ReST API as the primary signalling mechanism.

> > 
> > I have not found design discussion of this; have I missed something?
> > 
> > I suppose the natural answer for OpenStack would be centered around 
> > webhooks.  An OpenStack scaling group (OS SG = OS::Heat::AutoScalingGroup 
> > or AWS::AutoScaling::AutoScalingGroup or OS::Heat::ResourceGroup or 
> > OS::Heat::InstanceGroup) could generate a webhook per member, with the 
> > meaning of the webhook being that the member has been detected as dead and 
> > should be deleted and removed from the group --- and a replacement member 
> > created if needed to respect the group's minimum size.  
> 
> Well, I would suggest we generalize this into a event messaging or
> signaling solution, instead of just 'webhooks'.  The reason is that
> webhooks as it is implemented today is not carrying a payload of useful
> information -- I'm referring to the alarms in Ceilometer.

The resource signal interface used by ceilometer can carry whatever data
you like, so the existing solution works fine, we don't need a new one IMO.

For example look at this patch which converts WaitConditions to use the
resource_signal interface:

https://review.openstack.org/#/c/101351/2/heat/engine/resources/wait_condition.py

We pass the data to the WaitCondition via a resource signal, the exact same
transport that is used for alarm notifications from ceilometer.

Note the "webhook" thing really just means a pre-signed request, which
using the v2 AWS style signed requests (currently the only option for heat
pre-signed requests) does not sign the request body.

This is a security disadvantage (addressed by the v3 AWS signing scheme),
but it does mean you can pass data via the pre-signed URL.

An alternative to pre-signed URLs is simply to make an authenticated call
to the native ReST API, but then whatever is signalling requires either
credentials, a token, or a trust to impersonate the stack owner. Again, you
can pass whatever data you want via this interface.

> There are other cases as well.  A member failure could be caused by a 
> temporary communication problem, which means it may show up quickly when
> a replacement member is already being created.  It may mean that we have
> to respond to an 'online' event in addition to an 'offline' event?
> 
> > When the member is 
> > a Compute instance and Ceilometer exists, the OS SG could define a 
> > Ceilometer alarm for each member (by including these alarms in the 
> > template generated for the nested stack that is the SG), programmed to hit 
> > the member's deletion webhook when death is detected (I imagine there are 
> > a few ways to write a Ceilometer condition that detects instance death). 
> 
> Yes.  Compute instance failure can be detected with a Ceilometer plugin.
> In our prototype, we developed a Dispatcher plugin that can handle
> events like 'compute.instance.delete.end', 'compute.instance.create.end'
> after they have been processed based on a event_definitions.yaml file.
> There could be other ways, I think.

Are you aware of the "Existence of instance" meter in ceilometer?

http://docs.openstack.org/developer/ceilometer/measurements.htm

Re: [openstack-dev] Using tmux instead of screen in devstack

2014-07-02 Thread Thierry Carrez
Sean Dague wrote:
> On 07/01/2014 08:11 AM, Anant Patil wrote:
>> Hi,
>>
>> I use tmux (an alternative to screen) a lot and I believe lot of other
>> developers use it. I have been using devstack for some time now and
>> would like to add the option of using tmux instead of screen for
>> creating sessions for openstack services. I couldn't find a way to do
>> that in current implementation of devstack. 
>>
>> I have submitted an initial blueprint here:
>> https://blueprints.launchpad.net/devstack/+spec/enable-tmux-option-for-screen
>>
>> Any comments are welcome! It will be helpful if someone can review it
>> and provide comments.
> 
> Honestly, making this optional isn't really interesting. It's just code
> complexity for very little benefit. Especially when you look into the
> service stop functions.
> 
> If you could do a full replacement of *all* existing functionality used
> in screen with tmux, that might be. Screen -X stuff has some interesting
> failure modes on loaded environments, and I'd be in favor of switching
> to something else that didn't. However in looking through tmux man page,
> I'm not sure I see the equivalents to the logfile stanzas.
> 
> I'd like the blueprint to address all the screen calls throughout the
> codebase before acking on whether or not we'd accept such a thing.
> Because the screen use is more complex than you might realize.

I agree with Sean, supporting both options sounds like gratuitous added
complexity and maintenance burden. tmux could completely replace screen
in devstack though, as long as the full feature set is covered and there
is some key benefit in using it instead of screen.

-- 
Thierry Carrez (ttx)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-07-02 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 01/07/14 17:14, Alexei Kornienko wrote:
> Hi,
> 
> Please see some minor comments inline. Do you think we can schedule
> some time to discuss this topic at one of the upcoming meetings? We
> could come out with some kind of summary and an action plan to
> start working on.

Please check: https://wiki.openstack.org/wiki/Meetings/Oslo I think
you can add your case in the agenda. That's how it works in e.g. Neutron.

>>> There are a lot of questions that should be answered
>>> to implement this: Where would such tests run (jenkins,
>>> local PC, devstack VM)?
 I would expect it to be exposed to jenkins thru 'tox'. We
 then can set up a separate job to run them and compare with a
 base line [TBD: what *is* baseline?] to make sure we don't
 introduce performance regressions.
> Such tests cannot be exposed thru 'tox' since they require
> some environment setup (rabbitmq-server, zeromq matchmaker,
> etc.). Such setup is way out of scope for tox. Because of
> this we should find some other way to run such tests.
> You may just assume server is already set and available thru a
> common socket.
>> Assuming that something is already setup is not an option. If we
>> add new env to tox I assume that anyone can run it locally and
>> get the same results as he would get from jenkins.

Fair enough. You can look into tempest then. I don't know whether they
already have some framework for performance testing though.

/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBCgAGBQJTs9UIAAoJEC5aWaUY1u57USMH/j4dfCdAl2QKCMLDWzopLqon
41gqq8DsKDhc/I24z2KNYqRPk43tQvR9dWhbnbs1XlqXk0keozQ2YNE8pvuZyqok
VN4/l435ewHJ3OyZePQLANgDBrAVdWRmmLWJwGJZ8ceTfZRV1MN+vsLHGwCMFinU
J/8UAR8XUF5UVrd/VONsvVSwDnx1v7vs8zM9aICfos0F4ByMU9bPThdeUiuJOTuU
xNP+FEdlNwwlc4sAwm3qHKJIVe7gYSojIfQGhmBlXWpPKA1SYoT5I6qMZibsbtlM
VE092du9APc8LLnE3DdMS6CuUuABbljUtAxUr5z8CirIDoh9n/c5+e8Jfbb8zHU=
=OwOm
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DVR and FWaaS integration

2014-07-02 Thread Yi Sun

Carl,
For the overlapping IP case, I was thinking about whether we could have a case 
where two VMs have the same subnet but belong to different networks. So 
if we create a policy based on subnet, how will it work?


Yi

On 6/29/14, 12:43 PM, Carl Baldwin wrote:


In line...

On Jun 25, 2014 2:02 PM, "Yi Sun" > wrote:

>
> All,
> During last summit, we were talking about the integration issues 
between DVR and FWaaS. After the summit, I had one IRC meeting with 
DVR team. But after that meeting I was tight up with my work and did 
not get time to continue to follow up the issue. To not slow down the 
discussion, I'm forwarding out the email that I sent out as the follow 
up to the IRC meeting here, so that whoever may be interested on the 
topic can continue to discuss about it.

>
> First some background about the issue:
> In the normal case, FW and router are running together inside the 
same box so that FW can get route and NAT information from the router 
component. And in order to have FW to function correctly, FW needs to 
see the both directions of the traffic.
> DVR is designed in an asymmetric way that each DVR only sees one leg 
of the traffic. If we build FW on top of DVR, then FW functionality 
will be broken. We need to find a good method to have FW to work with DVR.

>
> ---forwarding email---
>  During the IRC meeting, we think that we could force the traffic to 
the FW before DVR. Vivek had more detail; He thinks that since the 
br-int knowns whether a packet is routed or switched, it is possible 
for the br-int to forward traffic to FW before it forwards to DVR. The 
whole forwarding process can be operated as part of service-chain 
operation. And there could be a FWaaS driver that understands the DVR 
configuration to setup OVS flows on the br-int.


I'm not sure what this solution would look like. I'll have to get the 
details from Vivek.  It seems like this would effectively centralize 
the traffic that we worked so hard to decentralize.


It did cause me to wonder about something:  would it be possible to 
restore symmetry to the traffic by directing any response traffic 
back to the DVR component which handled the request traffic?  I guess 
this would require running conntrack on the target side to track and 
identify return traffic.  I'm not sure how this would be inserted into 
the data path yet. This is a half-baked idea here.


> The concern is that normally firewall and router are integrated 
together so that firewall can make right decision based on the routing 
result. But what we are suggesting is to split the firewall and router 
into two separated components, hence there could be issues. For 
example, FW will not be able to get enough information to setup zone. 
Normally Zone contains a group of interfaces that can be used in the 
firewall policy to enforce the direction of the policy. If we forward 
traffic to firewall before DVR, then we can only create policy based 
on subnets not the interface.
> Also, I'm not sure if we have ever planed to support SNAT on the 
DVR, but if we do, then it depends on at which point we forward 
traffic to the FW, the subnet may not even work for us anymore (even 
DNAT could have problem too).


I agree that splitting the firewall from routing presents some 
problems that may be difficult to overcome.  I don't know how it would 
be done while maintaining the benefits of DVR.


Another half-baked idea:  could multi-primary state replication be 
used between DVR components to enable firewall operation?  Maybe work 
on the HA router blueprint -- which is long overdue to be merged Btw 
-- could be leveraged.  The number of DVR "pieces" could easily far 
exceed that of active firewall components normally used in such a 
configuration so there could be a major scaling problem.  I'm really 
just thinking out loud here.


Maybe you (or others) have other ideas?

> Another thing that I may have to get detail is that how we handle 
the overlap subnet, it seems that the new namespaces are required.


Can you elaborate here?

Carl

>
> --- end of forwarding 
>
> YI
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org 


> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova} NFV patches

2014-07-02 Thread Sylvain Bauza
Le 02/07/2014 10:39, Gary Kotton a écrit :
> Hi,
> There are some patches that are relevant to the NFV support. There are
> as follows:
>
>  1. Current exception handling for interface attachment is broken
> - https://review.openstack.org/103091
>  2. The V2 port attach is a blocking call. This cannot be changed so a
> proposal to have the V3 as non blocking has been posted
> - https://review.openstack.org/103094
>  3. Logging hints have been added in Neutron api
> - https://review.openstack.org/#/c/100258/
>

Hi Gary,

Thanks for that. Could you please update
https://wiki.openstack.org/wiki/Teams/NFV ?

I'm almost done with my script for automatically updating a tinified
URL, but until then, I'll reply back to this thread with a new URL once
you update the page.

-Sylvain


> Thanks
> Gary
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Few hot questions related to patching for openstack

2014-07-02 Thread Igor Kalnitsky
Hello,

> Could you please clarify what exactly you mean by  "our patches" /
> "our first patch"?

I mean which version should we use in 5.0.1, for example? As far as I
understand @DmitryB, it has to be "2014.1-5.0.1". Am I right?

Thanks,
Igor



On Tue, Jul 1, 2014 at 8:47 PM, Aleksandr Didenko 
wrote:

> Hi,
>
> my 2 cents:
>
> 1) Fuel version (+1 to Dmitry)
> 2) Could you please clarify what exactly you mean by "our patches" / "our
> first patch"?
>
>
>
>
> On Tue, Jul 1, 2014 at 8:04 PM, Dmitry Borodaenko <
> dborodae...@mirantis.com> wrote:
>
>> 1) Puppet manifests are part of Fuel so the version of Fuel should be
>> used. It is possible to have more than one version of Fuel per
>> OpenStack version, but not the other way around: if we upgrade
>> OpenStack version we also increase version of Fuel.
>>
>> 2) Should be a combination of both: it should indicate which OpenStack
>> version it is based on (2014.1.1), and version of Fuel it's included
>> in (5.0.1), e.g. 2014.1.1-5.0.1. Between Fuel versions, we can have
>> additional bugfix patches added to shipped OpenStack components.
>>
>> my 2c,
>> -DmitryB
>>
>>
>> On Tue, Jul 1, 2014 at 9:50 AM, Igor Kalnitsky 
>> wrote:
>> > Hi fuelers,
>> >
>> > I'm working on Patching for OpenStack and I have the following
>> questions:
>> >
>> > 1/ We need to save new puppets and repos under some versioned folder:
>> >
>> > /etc/puppet/{version}/ or /var/www/nailgun/{version}/centos.
>> >
>> > So the question is which version to use? Fuel or OpenStack?
>> >
>> > 2/ Which version do we have to use for our patches? We have an OpenStack
>> 2014.1.
>> > Should we use 2014.1.1 for our first patch? Or we have to use another
>> > format?
>> >
>> > I need a quick reply since these questions have to be solved for 5.0.1
>> too.
>> >
>> > Thanks,
>> > Igor
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>>
>> --
>> Dmitry Borodaenko
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Mid-cycle sprint for remote people ?

2014-07-02 Thread Nikola Đipanov
On 07/01/2014 03:40 PM, Sylvain Bauza wrote:
> Hi,
> 
> I won't be able to attend the mid-cycle sprint due to a good family
> reason (a new baby 2.0 release expected to land by these dates), so I'm
> wondering if it's possible to webcast some of the sessions so people who
> are not there can still share their voices ?
> 

+1

I would be very interested in this as well, as I too won't be able to
make it this time sadly.

Thanks,

N.

> -Sylvain
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][DevStack] Failed to install OpenStack with DevStack

2014-07-02 Thread Eli Qiao(Li Yong Qiao)

On 2014/7/2 13:58, Jay Lau wrote:

Hi,

Does any one encounter this error when install devstack? How did you 
resolve this issue?


+ [[ 1 -ne 0 ]]
+ echo 'Error on exit'
Error on exit
+ ./tools/worlddump.py -d
usage: worlddump.py [-h] [-d DIR]
worlddump.py: error: argument -d/--dir: expected one argument
317.292u 180.092s 14:40.93 56.4%0+0k 195042+2987608io 1003pf+0w

BTW: I was using ubuntu 12.04

 gy...@mesos014.eng.platformlab.ibm.com-84: cat  /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION="Ubuntu 12.04.1 LTS"


I got the same error. I am using the latest devstack branch; more logs:

+ EC2_URL=http://localhost:8773/services/Cloud
++ openstack endpoint show -f value -c publicurl s3
INFO: urllib3.connectionpool Starting new HTTP connection (1): 
cloudcontroller

usage: openstack endpoint show [-h] [-f {shell,table}] [-c COLUMN]
   [--variable VARIABLE] [--prefix PREFIX]
   [--type ]
   [--attr ]
   [--value ] [--all]
   
openstack endpoint show: error: argument -f/--format: invalid choice: 
'value' (choose from 'shell', 'table')

++ true
+ S3_URL=
+ [[ -z '' ]]
+ S3_URL=http://localhost:
+ mkdir -p /home/taget/code/devstack/accrc
++ readlink -f /home/taget/code/devstack/accrc
+ ACCOUNT_DIR=/home/taget/code/devstack/accrc
+ EUCALYPTUS_CERT=/home/taget/code/devstack/accrc/cacert.pem
+ '[' -e /home/taget/code/devstack/accrc/cacert.pem ']'
+ mv /home/taget/code/devstack/accrc/cacert.pem 
/home/taget/code/devstack/accrc/cacert.pem.old

+ nova x509-get-root-cert /home/taget/code/devstack/accrc/cacert.pem
ERROR: Unauthorized (HTTP 401)
+ echo 'Failed to update the root certificate: 
/home/taget/code/devstack/accrc/cacert.pem'
Failed to update the root certificate: 
/home/taget/code/devstack/accrc/cacert.pem

+ '[' -e /home/taget/code/devstack/accrc/cacert.pem.old ']'
+ mv /home/taget/code/devstack/accrc/cacert.pem.old 
/home/taget/code/devstack/accrc/cacert.pem

+ '[' all '!=' create ']'
+ openstack project list --long --quote none -f csv
+ IFS=,
+ read tenant_id tenant_name desc enabled
+ grep -v service
+ grep ,True
INFO: urllib3.connectionpool Starting new HTTP connection (1): 
cloudcontroller
+ openstack user list --project 1eae3b82938945ab9000e54051fbdf0e --long 
--quote none -f csv

+ IFS=,
+ read user_id user_name project email enabled
+ grep ,True
INFO: urllib3.connectionpool Starting new HTTP connection (1): 
cloudcontroller

+ '[' all = one -a heat_domain_admin '!=' admin ']'
+ eval 'SPECIFIC_UPASSWORD=$ADMIN_PASSWORD'
++ SPECIFIC_UPASSWORD=
+ '[' -n '' ']'
+ add_entry 0eeceb08919c43eaaa2f40e1c5e2eaa4 heat_domain_admin 
1eae3b82938945ab9000e54051fbdf0e demo 123

+ local user_id=0eeceb08919c43eaaa2f40e1c5e2eaa4
+ local user_name=heat_domain_admin
+ local tenant_id=1eae3b82938945ab9000e54051fbdf0e
+ local tenant_name=demo
+ local user_passwd=123
++ grep ' 1eae3b82938945ab9000e54051fbdf0e '
++ openstack ec2 credentials list --user 0eeceb08919c43eaaa2f40e1c5e2eaa4
INFO: urllib3.connectionpool Starting new HTTP connection (1): 
cloudcontroller

+ local line=
+ '[' -z '' ']'
+ openstack ec2 credentials create --user 
0eeceb08919c43eaaa2f40e1c5e2eaa4 --project 1eae3b82938945ab9000e54051fbdf0e
INFO: urllib3.connectionpool Starting new HTTP connection (1): 
cloudcontroller

+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| access    | 09dce26fc4924912947566d4ac5335af |
| secret    | e5947d632361457184d4e1e8862f58b7 |
| tenant_id | 1eae3b82938945ab9000e54051fbdf0e |
| trust_id  | None                             |
| user_id   | 0eeceb08919c43eaaa2f40e1c5e2eaa4 |
+-----------+----------------------------------+
++ openstack ec2 credentials list --user 0eeceb08919c43eaaa2f40e1c5e2eaa4
++ grep ' 1eae3b82938945ab9000e54051fbdf0e '
INFO: urllib3.connectionpool Starting new HTTP connection (1): 
cloudcontroller

+ line=
++ err_trap
++ local r=1
++ set +o xtrace
stack.sh failed
Error on exit
usage: worlddump.py [-h] [-d DIR]
worlddump.py: error: argument -d/--dir: expected one argument

--
Thanks,

Jay





--
Thanks Eli Qiao(qia...@cn.ibm.com)



Re: [openstack-dev] [neutron]Performance of security group

2014-07-02 Thread Ihar Hrachyshka

On 02/07/14 10:12, Miguel Angel Ajo wrote:
> 
> Shihanzhang,
> 
> I really believe we need the RPC refactor done for this cycle,
> especially given the close deadlines we have (July 10 for spec
> submission and July 20 for spec approval).
> 
> Don't you think it's going to be better to split the work in
> several specs?
> 
> 1) ipset optimization (you)
> 2) sg rpc optimization (without fanout) (me)
> 3) sg rpc optimization (with fanout) (edouard, you, me)
> 
> 
> This way we increase the chances of having part of this land in the
> Juno cycle. If we go for something too complicated, it is going to take
> more time for approval.
> 

I agree. It not only increases the chances of getting at least some of
those highly demanded performance enhancements into Juno, it's also
"the right thing to do" (c). It's counterproductive to put multiple
vaguely related enhancements in a single spec. This would dilute review
focus and put us in the position of getting 'all-or-nothing'. We can't
afford that.

Let's leave one spec per enhancement. @Shihanzhang, what do you think?

> 
> Also, I proposed the details of "2" to bring awareness to the topic.
> As I have been working with the scale lab in Red Hat to find and
> understand those issues, I have a very good knowledge of the problem
> and I believe I could make very fast progress on the issue at the
> RPC level.
> 
> Given that, I'd like to work on this specific part, whether or not
> we split the specs, as it's something we believe is critical for
> neutron scalability and thus *nova parity*.
> 
> I will start a separate spec for "2" later on. If you find it ok,
> we keep them as separate ones; if you believe having just 1 spec
> (for 1 & 2) is going to be safer for juno-* approval, then we can
> incorporate my spec in yours, but then "add-ipset-to-security" is
> not a good spec title to put all this together.
> 
> 
> Best regards, Miguel Ángel.
> 
> 
> On 07/02/2014 03:37 AM, shihanzhang wrote:
>> 
>> hi Miguel Angel Ajo Pelayo! I agree with you and will modify my spec,
>> but I will also optimize the RPC from the security group agent to the
>> neutron server. Now the model is 'port[rule1,rule2...], port...';
>> I will change it to 'port[sg1, sg2..]'. This can reduce the size
>> of the RPC response message from the neutron server to the security
>> group agent.
>> 
>> At 2014-07-01 09:06:17, "Miguel Angel Ajo Pelayo" 
>>  wrote:
>>> 
>>> 
>>> Ok, I was talking with Édouard @ IRC, and as I have time to
>>> work on this problem, I could file a specific spec for the
>>> security group RPC optimization, a masterplan in two steps:
>>> 
>>> 1) Refactor the current RPC communication for 
>>> security_groups_for_devices, which could be used for full
>>> syncs, etc..
>>> 
>>> 2) Benchmark && make use of a fanout queue per security group
>>> to make sure only the hosts with instances on a certain
>>> security group get the updates as they happen.
>>> 
>>> @shihanzhang do you find it reasonable?
>>> 
>>> 
>>> 
>>> - Original Message -
 - Original Message -
> @Nachi: Yes, that could be a good improvement to factorize the
> RPC mechanism.
> 
> Another idea: What about creating an RPC topic per security
> group (quid of the RPC topic scalability) on which an agent
> subscribes if one of its ports is associated to the security
> group?
> 
> Regards, Édouard.
> 
 
 
 Hmm, Interesting,
 
 @Nachi, I'm not sure I fully understood:
 
 
 SG_LIST [ SG1, SG2]
 SG_RULE_LIST = [SG_Rule1, SG_Rule2] ..
 port[SG_ID1, SG_ID2], port2 , port3
 
 
 Probably we may need to include also the SG_IP_LIST =
 [SG_IP1, SG_IP2] ...
 
 
 and let the agent do all the combination work.
 
 Something like this could make sense?
 
 Security_Groups = {SG1: {IPs: [], RULES: []},
 SG2: {IPs: [], RULES: []}}
 
 Ports = {Port1:[SG1, SG2], Port2: [SG1]  }
 
 
 @Edouard, actually I like the idea of having the agents
 subscribed to the security groups they have ports on... That
 would remove the need to include all the security group
 information on every call...
 
 But we would need another call to get the full information of a
 set of security groups at start/resync if we don't already
 have any.
 
 
> 
> On Fri, Jun 20, 2014 at 4:04 AM, shihanzhang <
 ayshihanzh...@126.com >
> wrote:
> 
> 
> 
> hi Miguel Ángel, I very much agree with you about the
> following points:
>> * physical implementation on the hosts (ipsets, nftables,
>> ... )
> --this can reduce the load of the compute node.
>> * rpc communication mechanisms.
> -- this can reduce the load of the neutron server. Can you
> help me to review my BP specs?
> 
> 
> 
> 
> 
> 
> 
> At 2014-06-19 10:11:34, "Miguel Angel Ajo Pelayo" <
 mangel..

[openstack-dev] [Nova} NFV patches

2014-07-02 Thread Gary Kotton
Hi,
There are some patches that are relevant to the NFV support. They are as 
follows:

  1.  Current exception handling for interface attachment is broken - 
https://review.openstack.org/103091
  2.  The V2 port attach is a blocking call. This cannot be changed, so a 
proposal to make the V3 attach non-blocking has been posted - 
https://review.openstack.org/103094
  3.  Logging hints have been added in the Neutron API - 
https://review.openstack.org/#/c/100258/

Thanks
Gary


Re: [openstack-dev] [neutron]Performance of security group

2014-07-02 Thread Miguel Angel Ajo


Shihanzhang,

I really believe we need the RPC refactor done for this cycle,
especially given the close deadlines we have (July 10 for spec submission
and July 20 for spec approval).


Don't you think it's going to be better to split the work in several
specs?

1) ipset optimization   (you)
2) sg rpc optimization (without fanout) (me)
3) sg rpc optimization (with fanout) (edouard, you , me)


   This way we increase the chances of having part of this land in the Juno
cycle. If we go for something too complicated, it is going to take more time
for approval.



  Also, I proposed the details of "2" to bring awareness to the topic.
As I have been working with the scale lab in Red Hat to find and
understand those issues, I have a very good knowledge of the problem
and I believe I could make very fast progress on the issue at the
RPC level.

  Given that, I'd like to work on this specific part, whether or
not we split the specs, as it's something we believe is critical for
neutron scalability and thus *nova parity*.

   I will start a separate spec for "2" later on. If you find it ok,
we keep them as separate ones; if you believe having just 1 spec (for 1
 & 2) is going to be safer for juno-* approval, then we can incorporate my
spec in yours, but then "add-ipset-to-security" is not a good spec title
to put all this together.



   Best regards,
Miguel Ángel.


On 07/02/2014 03:37 AM, shihanzhang wrote:


hi Miguel Angel Ajo Pelayo!
I agree with you and will modify my spec, but I will also optimize the
RPC from the security group agent to the neutron server.
Now the model is 'port[rule1,rule2...], port...'; I will change it to
'port[sg1, sg2..]'. This can reduce the size of the RPC response message
from the neutron server to the security group agent.

At 2014-07-01 09:06:17, "Miguel Angel Ajo Pelayo"  wrote:



Ok, I was talking with Édouard @ IRC, and as I have time to work
on this problem, I could file a specific spec for the security
group RPC optimization, a masterplan in two steps:

1) Refactor the current RPC communication for security_groups_for_devices,
  which could be used for full syncs, etc..

2) Benchmark && make use of a fanout queue per security group to make
  sure only the hosts with instances on a certain security group get
  the updates as they happen.

@shihanzhang do you find it reasonable?
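
(To make step "2" above a bit more concrete, here is a rough, purely
illustrative sketch using plain oslo.messaging primitives. The topic
format, method name and endpoint wiring are invented for illustration
and are not the actual Neutron RPC API; on older releases the import
path is "from oslo import messaging".)

import oslo_messaging
from oslo_config import cfg

TOPIC_FMT = 'securitygroup-update-%s'   # hypothetical per-SG topic name

transport = oslo_messaging.get_transport(cfg.CONF)

# Agent side: consume updates only for the groups this host has ports in.
# 'endpoints' is a list of objects exposing a security_group_updated()
# method matching the cast below.
def listen_for_sg(sg_id, host, endpoints):
    target = oslo_messaging.Target(topic=TOPIC_FMT % sg_id, server=host)
    server = oslo_messaging.get_rpc_server(transport, target, endpoints)
    server.start()
    return server

# neutron-server side: fan the change out only to the subscribed agents.
def notify_sg_updated(context, sg_id):
    target = oslo_messaging.Target(topic=TOPIC_FMT % sg_id)
    client = oslo_messaging.RPCClient(transport, target)
    client.prepare(fanout=True).cast(context, 'security_group_updated',
                                     security_group_id=sg_id)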



- Original Message -

- Original Message -
> @Nachi: Yes, that could be a good improvement to factorize the RPC mechanism.
>
> Another idea:
> What about creating an RPC topic per security group (quid of the RPC topic
> scalability) on which an agent subscribes if one of its ports is associated
> to the security group?
>
> Regards,
> Édouard.
>
>


Hmm, Interesting,

@Nachi, I'm not sure I fully understood:


SG_LIST [ SG1, SG2]
SG_RULE_LIST = [SG_Rule1, SG_Rule2] ..
port[SG_ID1, SG_ID2], port2 , port3


Probably we may need to include also the
SG_IP_LIST = [SG_IP1, SG_IP2] ...


and let the agent do all the combination work.

Something like this could make sense?

Security_Groups = {SG1: {IPs: [], RULES: []},
   SG2: {IPs: [], RULES: []}
  }

Ports = {Port1:[SG1, SG2], Port2: [SG1]  }
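
(A purely illustrative sketch of that agent-side "combination work",
assuming compact maps shaped like the ones above; the names and payload
layout are made up and are not the real security_group_rules_for_devices
format.)

# Hypothetical compact payload: one shared table of groups plus a
# port -> group-membership map, instead of fully expanded per-port rules.
security_groups = {
    'SG1': {'ips': ['10.0.0.2', '10.0.0.3'],
            'rules': [{'direction': 'ingress', 'protocol': 'tcp'}]},
    'SG2': {'ips': ['10.0.1.5'],
            'rules': [{'direction': 'ingress', 'protocol': 'icmp'}]},
}
ports = {'port1': ['SG1', 'SG2'], 'port2': ['SG1']}

def expand_port_rules(ports, security_groups):
    """Expand the compact maps into per-port rules locally on the agent."""
    expanded = {}
    for port_id, sg_ids in ports.items():
        rules = []
        for sg_id in sg_ids:
            sg = security_groups[sg_id]
            for rule in sg['rules']:
                # Attach the group's member IPs so the firewall driver
                # can build its filters without another RPC call.
                rules.append(dict(rule, remote_ips=list(sg['ips'])))
        expanded[port_id] = rules
    return expanded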


@Edouard, actually I like the idea of having the agents subscribed
to the security groups they have ports on... That would remove the need
to include all the security group information on every call...

But we would need another call to get the full information of a set of
security groups at start/resync if we don't already have any.


>
> On Fri, Jun 20, 2014 at 4:04 AM, shihanzhang < ayshihanzh...@126.com >
> wrote:
>
>
>
> hi Miguel Ángel,
> I very much agree with you about the following points:
> >  * physical implementation on the hosts (ipsets, nftables, ... )
> --this can reduce the load of the compute node.
> >  * rpc communication mechanisms.
> -- this can reduce the load of the neutron server
> Can you help me to review my BP specs?
>
>
>
>
>
>
>
> At 2014-06-19 10:11:34, "Miguel Angel Ajo Pelayo" < mangel...@redhat.com >
> wrote:
> >
> >  Hi, it's a very interesting topic. I was getting ready to raise
> >the same concerns about our security groups implementation; shihanzhang,
> >thank you for starting this topic.
> >
> >  Not only at the low level, where with our default security group
> >rules (allow all incoming from the 'default' sg) the iptables rules
> >will grow in ~X^2 for a tenant, but also the
> >"security_group_rules_for_devices"
> >rpc call from ovs-agent to neutron-server grows to message sizes of
> >>100MB,
> >generating serious scalability issues or timeouts/retries that
> >totally break neutron service.
> >
> >   (example trace of that RPC call with a few instances
> > http://www.fpaste.org/104401/14008522/ )
> >
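(A back-of-the-envelope illustration of the ~X^2 growth described above,
under the simplifying assumption that the "allow all from 'default'"
rule expands into one iptables rule per member IP on every member's
port:)

members = 300                 # ports in the tenant's 'default' group
rules_per_port = members      # one source-IP match per group member
tenant_rules = members * rules_per_port
print(tenant_rules)           # 90000 iptables rules for a single tenant
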
> >  I believe that we also need to review the RPC calling mechanism
> >for the OVS agent here; there are several possible approaches to breaking
> >down (and/or CIDR-compressing) the information we return via this api
> >call.
> >
> >
> >   So we have to look at two things here

[openstack-dev] Multiple API/RPC process behavior

2014-07-02 Thread tfreger

Hello,

I've installed the latest version of RHOS5 and set the API and RPC worker
counts in the neutron.conf file to 8 (the number of cores on the host).
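
(For reference, this assumes the standard worker options in the [DEFAULT]
section of neutron.conf; the exact option names may differ by release:)

[DEFAULT]
api_workers = 8
rpc_workers = 8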


After I ran "openstack-service restart" I got different results for the
ML2 and Openvswitch deployments.


In the ML2 deployment all 17 neutron-server processes (1 parent and 16
children) exist, as expected.
In the Openvswitch deployment only 9 API processes are spawned; the RPC
processes didn't spawn.


Should there be any difference between those two installations?

--
Thanks,
Toni



