Re: [openstack-dev] [nova][neutron] VIF event callbacks implementation

2014-05-01 Thread Adam Gandelman
On Tue, Apr 29, 2014 at 12:23 PM, Dan Smith  wrote:

>
> Yeah, we've already got plans in place to get Cinder to use the
> interface to provide us more detailed information and eliminate some
> polling. We also have a very purpose-built notification scheme between
> nova and cinder that facilitates a callback for a very specific
> scenario. I'd like to get that converted to use this mechanism as well,
> so that it becomes "the way you tell nova that things it's waiting for
> have happened."
>
> --Dan
>
>
We actually need something *very* similar in Ironic right now to address
many of the same issues that os-external-events solves for Nova <-> Neutron
coordination.  I've been looking at implementing an almost identical thing
in Ironic and was hoping to file a BP to get some discussion going in
Atlanta.  There are a few places currently where the same mechanism would
fix bugs or be a general improvement, and more stuff coming in Juno where
this will be required. I would love to find out whether parts of what is
currently in Nova can be factored out and shared across projects to
make this easier, and to provide all projects with "a way you tell some
other service that things it's waiting for have happened."
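For readers unfamiliar with the mechanism under discussion: Neutron reports VIF events to Nova through the os-server-external-events API. The sketch below is a hedged illustration only (the helper name and example IDs are invented here; the payload fields follow the Nova os-server-external-events call):

```python
# Illustrative sketch of a Nova external-event payload, as used for
# "network-vif-plugged" notifications. The helper name and example IDs
# are invented for this sketch; the payload fields follow the Nova
# os-server-external-events API.

def build_external_event(server_uuid, name="network-vif-plugged",
                         status="completed", tag=None):
    """Build the body for POST /v2.1/os-server-external-events."""
    event = {"server_uuid": server_uuid, "name": name, "status": status}
    if tag is not None:
        event["tag"] = tag  # e.g. the Neutron port ID for VIF events
    return {"events": [event]}

payload = build_external_event(
    "9f23a1c6-1111-2222-3333-444455556666",
    tag="example-port-id")
print(payload["events"][0]["name"])  # network-vif-plugged
```

The same payload shape is what a service like Ironic could reuse to tell Nova (or another service) that an awaited event has happened.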

Cheers,
Adam
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] openstack/tripleo-specs repository online

2014-05-01 Thread Robert Collins
Thanks to Derek and the infra folk we now have a tripleo-specs repo - yay.

https://review.openstack.org/#/c/91741/ needs to land before it will
DTRT after cloning - but! - please add any outstanding unapproved
blueprints there.

Thanks,
Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] Objects not getting distributed across the swift cluster...

2014-05-01 Thread Shyam Prasad N
Hi John,

Thanks for the explanation. Have a couple of more questions on this subject
though.

1. "pretend_min_part_hours_passed" sounds like something that I could use. I'm
okay if there is a chance of interruption in services to the user at this
time, as long as it does not cause any data-loss or data-corruption.
2. It would have been really useful if the rebalancing operations could be
logged by swift somewhere and automatically run later (after
min_part_hours).

Regards,
Shyam


On Thu, May 1, 2014 at 11:15 PM, John Dickinson  wrote:

>
> On May 1, 2014, at 10:32 AM, Shyam Prasad N 
> wrote:
>
> > Hi Chuck,
> > Thanks for the reply.
> >
> > The reason for such weight distribution seems to have to do with the ring
> rebalance command. I've scripted the disk addition (and rebalance) process
> to the ring using a wrapper command. When I trigger the rebalance after
> each disk addition, only the first rebalance seems to take effect.
> >
> > Is there any other way to adjust the weights other than rebalance? Or is
> there a way to force a rebalance, even if the frequency of the rebalance
> (as a part of disk addition) is under an hour (the min_part_hours value in
> ring creation).
>
> Rebalancing only moves one replica at a time to ensure that your data
> remains available, even if you have a hardware failure while you are adding
> capacity. This is why it may take multiple rebalances to get everything
> evenly balanced.
>
> The min_part_hours setting (perhaps poorly named) should match how long a
> replication pass takes in your cluster. You can understand this because of
> what I said above. By ensuring that replication has completed before
> putting another partition "in flight", Swift can ensure that you keep your
> data highly available.
>
> For completeness to answer your question, there is an (intentionally)
> undocumented option in swift-ring-builder called
> "pretend_min_part_hours_passed", but it should ALMOST NEVER be used in a
> production cluster, unless you really, really know what you are doing.
> Using that option will very likely cause service interruptions to your
> users. The better option is to correctly set the min_part_hours value to
> match your replication pass time (with set_min_part_hours), and then wait
> for swift to move things around.
>
> Here's some more info on how and why to add capacity to a running Swift
> cluster: https://swiftstack.com/blog/2012/04/09/swift-capacity-management/
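To illustrate Chuck's and John's points with a rough sketch (this is not Swift's actual ring code; it is a simplification under the assumption that partition-replicas are assigned proportionally to device weight):

```python
# Simplified sketch: each device's target share of partition-replicas is
# proportional to its weight. A device whose weight was never set (or
# whose replicas haven't been allowed to move yet because of
# min_part_hours) holds nothing.

def target_partitions(devices, partitions=1024, replicas=3):
    total_weight = sum(d["weight"] for d in devices) or 1
    per_weight = partitions * replicas / total_weight
    return {d["name"]: round(d["weight"] * per_weight) for d in devices}

devs = [{"name": "xvdb", "weight": 1.0},
        {"name": "xvdc", "weight": 1.0},
        {"name": "xvde", "weight": 0.0}]   # weight left at zero
print(target_partitions(devs))  # {'xvdb': 1536, 'xvdc': 1536, 'xvde': 0}
```

In the real builder, reaching these targets can take several rebalance passes, since only one replica of any partition is moved per pass.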
>
> --John
>
>
>
>
>
> > On May 1, 2014 9:00 PM, "Chuck Thier"  wrote:
> > Hi Shyam,
> >
> > If I am reading your ring output correctly, it looks like only the
> devices in node .202 have a weight set, and thus why all of your objects
> are going to that one node.  You can update the weight of the other
> devices, and rebalance, and things should get distributed correctly.
> >
> > --
> > Chuck
> >
> >
> > On Thu, May 1, 2014 at 5:28 AM, Shyam Prasad N 
> wrote:
> > Hi,
> >
> > I created a swift cluster and configured the rings like this...
> >
> > swift-ring-builder object.builder create 10 3 1
> >
> > ubuntu-202:/etc/swift$ swift-ring-builder object.builder
> > object.builder, build version 12
> > 1024 partitions, 3.00 replicas, 1 regions, 4 zones, 12 devices, 300.00 balance
> > The minimum number of hours before a partition can be reassigned is 1
> > Devices: id  region  zone  ip address  port  replication ip  replication port  name  weight  partitions  balance  meta
> >           0       1     1  10.3.0.202  6010  10.3.0.202      6010              xvdb    1.00        1024   300.00
> >           1       1     1  10.3.0.202  6020  10.3.0.202      6020              xvdc    1.00        1024   300.00
> >           2       1     1  10.3.0.202  6030  10.3.0.202      6030              xvde    1.00        1024   300.00
> >           3       1     2  10.3.0.212  6010  10.3.0.212      6010              xvdb    1.00           0  -100.00
> >           4       1     2  10.3.0.212  6020  10.3.0.212      6020              xvdc    1.00           0  -100.00
> >           5       1     2  10.3.0.212  6030  10.3.0.212      6030              xvde    1.00           0  -100.00
> >           6       1     3  10.3.0.222  6010  10.3.0.222      6010              xvdb    1.00           0  -100.00
> >           7       1     3  10.3.0.222  6020  10.3.0.222      6020              xvdc    1.00           0  -100.00
> >           8       1     3  10.3.0.222  6030  10.3.0.222      6030              xvde    1.00           0  -100.00
> >           9       1     4  10.3.0.232  6010  10.3.0.232      6010              xvdb    1.00           0  -100.00
> >          10       1     4  10.3.0.232  6020  10.3.0.232      6020              xvdc    1.00           0  -100.00
> >          11       1     4  10.3.0.232  6030  10.3.0.232      6030              xvde    1.00           0  -100.00
> >
> > Container and account rings have a similar configuration.
> > Once the rings were created and all

[openstack-dev] [Neutron] ServiceVM IRC meeting(May 6 Tuesday 5:00(AM)UTC-)

2014-05-01 Thread Isaku Yamahata
Hi. This is a reminder mail for the servicevm IRC meeting:
May 6, 2014 (Tuesday) 5:00 AM UTC
#openstack-meeting on freenode
(May 13 will be skipped due to the design summit)

* design summit plan
  - unconference
* status update
* new project planning
  - project name
code name: virtue, ginie, jeeve,...
topic name: servicevm, hosting device,...
  - design API/model
  - way to review: gerrit or google-doc?
  - design strategy
* open discussion
-- 
Isaku Yamahata 



Re: [openstack-dev] [Neutron][LBaaS] Use-Cases with VPNs Distinction

2014-05-01 Thread Carlos Garza

On May 1, 2014, at 7:48 PM, Stephen Balukoff <sbaluk...@bluebox.net> wrote:

Hi Trevor,

I was the one who wrote that use case based on discussion that came out of the 
question I wrote the list last week about SSL re-encryption:  Someone had 
stated that sometimes pool members are local, and sometimes they are hosts 
across the internet, accessible either through the usual default route, or via 
a VPN tunnel.

The point of this use case is to make the distinction that if we associate a 
neutron_subnet with the pool (rather than with the member), then some members 
of the pool that don't exist in that neutron_subnet might not be accessible 
from that neutron_subnet.  However, if the behavior of the system is such that 
attempting to reach a host through the subnet's "default route" still works 
(whether that leads to communication over a VPN or the usual internet routes), 
then this might not be a problem.

The other option is to associate the neutron_subnet with a pool member. But in 
this case there might be problems too. Namely:

  *   The device or software that does the load balancing may need to have an 
interface on each of the member subnets, and presumably an IP address from 
which to originate requests.
  *   How does one resolve cases where subnets have overlapping IP ranges?

In the end, it may be simpler not to associate neutron_subnet with a pool at 
all. Maybe it only makes sense to do this for a VIP, and then the assumption 
would be that any member addresses one adds to pools must be accessible from 
the VIP subnet.  (Which is easy, if the VIP exists on the same neutron_subnet. 
But this might require special routing within Neutron itself if it doesn't.)
This topology question (ie. what is feasible, what do people actually want to 
do, and what is supported by the model) is one of the more difficult ones to 
answer, especially given that users of OpenStack that I've come in contact with 
barely understand the Neutron networking model, if at all.

I would think we'd want to use a single subnet with a pool, and if the user 
specifies a pool member that's not routable there's not much we can do. Should 
we introduce the concept of routers into the pool object to bridge the 
subnets if need be? Or we leave it up to the user to add the appropriate 
host_routes on their loadbalancer's subnet, and have an interface or 
port_id (with an IP) specified on the pool object.  I don't know if attaching a 
neutron port to a pool and using host_routes makes the flow any easier, but 
routing constructs in Neutron are available. I know networking, but not a whole 
lot of the neutron perspective on it; I've yet to look over how the VPN stuff 
is handled. If the pools do happen to have IP collisions, then the first match 
in the LoadBalancer's subnet host_routes wins.

subnet.host_routes = [{'destination': <CIDR>, 'nexthop': <IP address>}, ...], according to 
https://wiki.openstack.org/wiki/Neutron/APIv2-specification#High-level_flow
with the nexthop IP address being the pool's neutron port on your side of the 
loadbalancer.
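As a hedged sketch of what Carlos describes above (the addresses are invented; the body shape follows the Neutron v2 subnet-update request with its host_routes attribute):

```python
# Sketch of a Neutron subnet-update body adding a host route so the
# loadbalancer's subnet can reach members behind a VPN. Addresses are
# examples only.

def subnet_update_body(routes):
    for route in routes:
        # each route needs both keys, per the Neutron v2 API
        assert {"destination", "nexthop"} <= set(route)
    return {"subnet": {"host_routes": routes}}

body = subnet_update_body([
    {"destination": "10.20.0.0/24",   # members' subnet across the VPN
     "nexthop": "10.3.0.1"}])         # router/port IP on the LB's subnet
```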




On Thu, May 1, 2014 at 1:52 PM, Trevor Vardeman <trevor.varde...@rackspace.com> wrote:
Hello,

After going back through the use-cases to double check some of my
understanding, I realized I didn't quite understand the ones I had
already answered.  I'll use a specific use-case as an example of my
misunderstanding here, and hopefully the clarification can be easily
adapted to the rest of the use-cases that are similar.

Use Case 13:  A project-user has an HTTPS application in which some of
the back-end servers serving this application are in the same subnet,
and others are across the internet, accessible via VPN. He wants this
HTTPS application to be available to web clients via a single IP
address.

In this use-case, is the Load Balancer going to act as a node in the
VPN?  What I mean here, is the Load Balancer supposed to establish a
connection to this VPN for the client, and simulate itself as a computer
on the VPN?  If this is not the case, wouldn't the VPN have a subnet ID,
and simply be added to a pool during its creation?  If the latter is
accurate, would this not just be a basic HTTPS Load Balancer creation?
After looking through the VPNaaS API, you would provide a subnet ID to
the create VPN service request, and it establishes a VPN on said subnet.
Couldn't this be provided to the Load Balancer pool as its subnet?

Forgive me for requiring so much distinction here, but what may be clear
to the creator of this use-case has left me confused.  This same
type of clarity would be very helpful across many of the other
VPN-related use-cases.  Thanks again!

-Trevor



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807

Re: [openstack-dev] [Neutron][LBaaS] Use-Cases with VPNs Distinction

2014-05-01 Thread Adam Harwell
Comments in red. I'm tired, so hopefully most of what I say makes sense. :)

From: Stephen Balukoff <sbaluk...@bluebox.net>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Thursday, May 1, 2014 7:48 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Use-Cases with VPNs Distinction

Hi Trevor,

I was the one who wrote that use case based on discussion that came out of the 
question I wrote the list last week about SSL re-encryption:  Someone had 
stated that sometimes pool members are local, and sometimes they are hosts 
across the internet, accessible either through the usual default route, or via 
a VPN tunnel.

The point of this use case is to make the distinction that if we associate a 
neutron_subnet with the pool (rather than with the member), then some members 
of the pool that don't exist in that neutron_subnet might not be accessible 
from that neutron_subnet.  However, if the behavior of the system is such that 
attempting to reach a host through the subnet's "default route" still works 
(whether that leads to communication over a VPN or the usual internet routes), 
then this might not be a problem.

Right, we list a subnet that theoretically most of the pool members use, but 
it's not STRICTLY enforced. As long as there is a route out from the host, they 
should work (and assuming your VIP is public, you will have a route to the 
internet, so any external member should be fine). Really, the subnet is just 
used as a hint for assigning VIFs to whatever device is handling the load 
balancing (e.g., if the LB is HAProxy running on a Nova VM, we will know to 
create the VM with an IP on the VIP's subnet and an IP on the Pool's subnet).

The other option is to associate the neutron_subnet with a pool member. But in 
this case there might be problems too. Namely:

  *   The device or software that does the load balancing may need to have an 
interface on each of the member subnets, and presumably an IP address from 
which to originate requests.
  *   How does one resolve cases where subnets have overlapping IP ranges?

This would also work, and is more flexible, in the case that you wanted members 
that are on multiple private subnets. When deciding which VIFs to assign a 
machine, you'd just make a set of subnet_ids from all members, and assign one 
VIF for each. As for overlapping IP ranges, I honestly don't think this needs 
to be a use-case we should consider. If you're setting up your network topology 
using overlapping CIDRs, you deserve whatever messed up result you get. I don't 
think there's ANY way to handle that properly, just given how routing works on 
the machine…
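Adam's suggestion above, deriving one VIF per distinct subnet, can be sketched like this (an illustration only; the data model shown is an assumption, not actual Neutron LBaaS code):

```python
# Sketch: compute the set of subnets the load balancing device needs an
# interface on -- the VIP's subnet plus every member's subnet.

def required_subnet_ids(vip, members):
    subnets = {vip["subnet_id"]}
    subnets.update(m["subnet_id"] for m in members if m.get("subnet_id"))
    return subnets

vifs = required_subnet_ids(
    {"subnet_id": "public-net"},
    [{"subnet_id": "private-a"}, {"subnet_id": "private-a"},
     {"subnet_id": "private-b"}])
print(vifs)  # one VIF each for public-net, private-a, private-b
```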

In the end, it may be simpler not to associate neutron_subnet with a pool at 
all. Maybe it only makes sense to do this for a VIP, and then the assumption 
would be that any member addresses one adds to pools must be accessible from 
the VIP subnet.  (Which is easy, if the VIP exists on the same neutron_subnet. 
But this might require special routing within Neutron itself if it doesn't.)

I don't think it's safe to assume all members are accessible on the same subnet 
as the VIP, as I'd assume the most common use case would actually be a VIP on a 
public network and members on private networks. We will need the subnet_id 
somewhere.

This topology question (ie. what is feasible, what do people actually want to 
do, and what is supported by the model) is one of the more difficult ones to 
answer, especially given that users of OpenStack that I've come in contact with 
barely understand the Neutron networking model, if at all.

In our case, we don't actually have any users in the scenario of having members 
spread across different subnets that might not be routable, so the use case 
is somewhat contrived, but I thought it was worth mentioning based on what 
people were saying in the SSL re-encryption discussion last week.

I believe one of the things we were really hoping to do is exactly that — allow 
member nodes to be on private networks so they are only accessible to the 
public via the public VIP. I'd recommend we maintain this case (at least, 
allowing ONE private subnet, so at a minimum attaching subnet_id to the pool).

--Adam


On Thu, May 1, 2014 at 1:52 PM, Trevor Vardeman <trevor.varde...@rackspace.com> wrote:
Hello,

After going back through the use-cases to double check some of my
understanding, I realized I didn't quite understand the ones I had
already answered.  I'll use a specific use-case as an example of my
misunderstanding here, and hopefully the clarification can be easily
adapted to the rest of the use-cases that are similar.

Use Case 13:  A project-user has an HTTPS application in which some of
the back-end servers serving this application are in the same subnet,
and others are across the internet, 

Re: [openstack-dev] [Neutron][LBaaS] L7 content switching APIs

2014-05-01 Thread Adam Harwell
My thoughts are inline (in red, since I can't figure out how to get Outlook to 
properly format the email the way I want).

From: Stephen Balukoff <sbaluk...@bluebox.net>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Thursday, May 1, 2014 6:52 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS] L7 content switching APIs

Hi Samuel,

We talked a bit in chat about this, but I wanted to reiterate a few things here 
for the rest of the group.  Comments in-line:


On Wed, Apr 30, 2014 at 6:10 AM, Samuel Bercovici <samu...@radware.com> wrote:
Hi,

We have compared the API that is in the blueprint to the one described in 
Stephen's documents.
Following are the differences we have found:

1)  L7PolicyVipAssoc is gone; this means that L7 policy reuse is not 
possible. I have added use cases 42 and 43 to show where such reuse makes sense.

Yep, my thoughts were that:

  *   The number of times L7 policies will actually get re-used is pretty 
minimal. And in the case of use cases 42 and 43, these can be accomplished by 
duplicating the L7policies and rules (with differing actions) for each type of 
connection.
  *   Fewer new objects is usually better and less confusing for the user. 
Having said this, a user advanced enough to use L7 features like this at all is 
likely going to be able to understand what the 'association' policy does.

The main counterpoint you shared with me was (if I remember correctly):

  *   For different load balancer vendors, it's much easier to code for the 
case where a specific entire feature set isn't available (ie. L7 switching 
or content modification functionality) by making that entire feature set 
modular. A driver in this case can simply return a "feature not supported" 
error if anyone tries using L7 policies at all.

 I agree that re-use should not be required for L7 policies, which should 
simplify things.

2)  There is a mix between L7 content switching and L7 content 
modification, the API in the blue print only addresses L7 content switching. I 
think that we should separate the APIs from each other. I think that we should 
review/add use cases targeting L7 content modifications to the use cases 
document.

Fair enough. There aren't many such use cases in there yet.

a.   You can see this in L7Policy: APPEND_HEADER, DELETE_HEADER 
actions

3)  The action to redirect to a URL is missing in Stephen’s document. The 
'redirect' action in Stephen’s document is equivalent to the “pool” action in 
the blue print/code.

Yep it is. But this is actually pretty easily added.  We would just add the 
'action' of "URL_REDIRECT" and the action_argument would then be the URL to 
which to redirect.


4)  All the objects have their parent id as an optional argument 
(L7Rule.l7_policy_id, L7Policy.listener_id), is this a mistake?

That's actually not a mistake--  a user can create "orphaned" rules in this 
model. However, the point was raised earlier by Brandon that it may make sense 
for members to be child objects of a specific pool since they can't be shared. 
If we do this for members, it also makes sense to do it for L7Rules since they 
also can't be shared. At which point the API for manipulating L7Rules would 
shift to:

/l7_policy/{policy_uuid}/l7_rules

And in this case, the parent L7Policy ID would be implicit.

(I'm all for this change, by the way.)

Sounds good to me too!

5)  There is also the additional behavior based on L3 information (matching 
the client/source IP to a subnet). This is addressed by L7Rule.type with a 
value of 'CLIENT_IP' and L7Rule.compare_type with a value of 'SUBNET'. I think 
that using Layer 3 information should not be part of L7 content switching, 
as the use cases I am aware of might require more than just selecting a 
different pool (ex: a user with an IP from the internet browsing to an HTTPS-based 
application might need to be secured using 2K SSL keys, while internal users 
could use weaker keys).

While it's true that having a way to manipulate this without being part of an 
HTTP or unwrapped HTTPS session is also useful--  it's still useful to be able 
to create L7 rules which also make decisions based on subnet.  (Notice also 
with TLS_SNI_Policies there is a 'hostname' attribute, and also with L7 rules 
there is a 'hostname' type of rule? Again, useful to have in two places, eh!)

I would like to state that although the WIKI describes the solution from a high 
level, it is not totally in sync with the actual code.
The key thing which is missing is that L7 policies in a specific listener/vip 
are ordered (an ordered list) and are processed in order, so that the 1st policy 
that has a match will be activated and traversal of the L7 policy list is 
stopped, as the processing is final (ex: redirect, pool, reject).
This in effect 
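The ordered, first-match-wins traversal Samuel describes can be sketched as follows (names and structures are illustrative only, not the actual Neutron code):

```python
# Sketch of ordered L7 policy evaluation: policies are walked in position
# order; the first policy whose rules all match determines the action,
# and traversal stops there because the action is final.

def evaluate_l7_policies(policies, request):
    """Return (action, action_argument) of the first matching policy."""
    for policy in sorted(policies, key=lambda p: p["position"]):
        if all(rule(request) for rule in policy["rules"]):
            return policy["action"], policy.get("action_argument")
    return "DEFAULT_POOL", None  # no policy matched

policies = [
    {"position": 1, "action": "REJECT", "action_argument": None,
     "rules": [lambda r: r["path"].startswith("/admin")]},
    {"position": 2, "action": "REDIRECT_TO_POOL", "action_argument": "api-pool",
     "rules": [lambda r: r["host"] == "api.example.com"]},
]
print(evaluate_l7_policies(policies, {"path": "/index", "host": "api.example.com"}))
# -> ('REDIRECT_TO_POOL', 'api-pool')
```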

Re: [openstack-dev] [Neutron][LBaaS] Fulfilling Operator Requirements: Driver / Management API

2014-05-01 Thread Stephen Balukoff
Hi Adam,

Thank you very much for starting this discussion!  In answer to your
questions from my perspective:

1. I think that it makes sense to start at least one new driver that
focuses on running software virtual appliances on Nova nodes (the NovaHA
you referred to above). The existing haproxy driver should not go away as I
think it solves problems for small to medium size deployments, and does
well for setting up, for example, a 'development' or 'QA' load balancer
that won't need to scale, but needs to duplicate much of the functionality
of the production load balancer(s).

On this note, we may want to actually create several different drivers
depending on the appliance model that operators are using. From the
discussion about HA that I started a couple weeks ago, it sounds like HP is
using an HA model that concentrates on pulling additional instances from a
waiting pool. The stingray solution you're using sounds like "raid 5"
redundancy for load balancing. And what we've been using is more like "raid
1" redundancy.

It probably makes sense to collaborate on a new driver and model if we
agree on the topologies we want to support at our individual organizations.
Even if we can't agree on this, it still makes sense for us to collaborate
on determining that "basic set of operator features" that all drivers
should support, from an operator perspective.

I think a management API is necessary--  operators and their support
personnel need to be able to troubleshoot problems down to the device
level, and I think it makes sense to do this through an OpenStack interface
if possible. In order to accommodate each vendor's differences here,
though, this may only be possible if we allow for different drivers to
expose "operator controls" in their own way.

I do not think any of this should be exposed to the user API we have been
discussing.

I think it's going to be important to come to some kind of agreement on the
user API and object model changes before it's going to be possible to start
to really talk about how to do the management API.

I am completely on board with this! As I have said in a couple other places
on this list, Blue Box actually wrote our own software appliance based load
balancing system based on HAProxy, stunnel, corosync/pacemaker, and a
series of glue scripts (mostly written in perl, ruby, and shell) that
provide a "back-end API" and whatnot. We've actually done this (almost)
from scratch twice now, and have plans and some work underway to do it a
third time-- this time to be compatible with OpenStack (and specifically
the Neutron LBaaS API, hopefully as a driver for the same). This will be
completely open source, and hopefully compliant with OpenStack standards
(equivalent licensing, everything written in python, etc.)  So far, I've
only had time to port over the back-end API and a couple design docs, but
if you want to see what we have in mind, here's the documentation on this
so far:

https://github.com/blueboxgroup/octavia/

In particular, probably the theory of operation document will give you the
best overview of how it works:

https://github.com/blueboxgroup/octavia/blob/master/doc/theory-of-operation.md

And the virtual appliance API (as it was two months ago. Some things will
definitely change based on discussions of the last couple months):
https://github.com/blueboxgroup/octavia/blob/master/doc/virtual-appliance-api.md

Thanks,
Stephen



On Thu, May 1, 2014 at 2:33 PM, Adam Harwell wrote:

>  I am sending this now to gauge interest and get feedback on what I see
> as an impending necessity — updating the existing "haproxy" driver,
> replacing it, or both. Though we're not there yet, it is probably best to
> at least start the discussion now, to hopefully limit some fragmentation
> that may be starting around this concept already.
>
>  To begin with, I should probably define some terms. Following is a list
> of the major things I'll be referencing and what I mean by them, since I
> would like to avoid ambiguity as much as possible.
>
>  --
>  Glossary
> --
> *HAProxy*: This references two things currently, and I feel this is a
> source of some misunderstanding. When I refer to  HAProxy (capitalized), I
> will be referring to the official software package (found here:
> http://haproxy.1wt.eu/ ), and when I refer to "haproxy" (lowercase, and
> in quotes) I will be referring to the neutron-lbaas driver (found here:
> https://github.com/openstack/neutron/tree/master/neutron/services/loadbalancer/drivers/haproxy
>  ).
> The fact that the neutron-lbaas driver is named directly after the software
> package seems very unfortunate, and while it is not directly in the scope
> of what I'd like to discuss here, I would love to see it changed to more
> accurately reflect what it is --  one specific driver implementation that
> coincidentally uses HAProxy as a backend. More on this later.
>
>  *Operator Requirements*: The require

Re: [openstack-dev] [Neutron][LBaaS] Use Case Question

2014-05-01 Thread Carlos Garza
   Our Stingray nodes don't allow you to specify this. It's just an enable or disable 
option.
On May 1, 2014, at 7:35 PM, Stephen Balukoff <sbaluk...@bluebox.net> wrote:

Question for those of you using the SSL session ID for persistency: About how 
long do you typically set these sessions to persist?

Also, I think this is a cool way to handle this kind of persistence 
efficiency-- I'd never seen it done that way before, eh!

It should also almost go without saying that of course in the case where the 
SSL session is not terminated on the load balancer, you can't do anything else 
with the content (like insert X-Forwarded-For headers or do anything else that 
has to do with L7).

Stephen


On Wed, Apr 30, 2014 at 9:39 AM, Samuel Bercovici <samu...@radware.com> wrote:
Hi,

As stated, this could either be handled by SSL session ID persistency or by SSL 
termination and using cookie based persistency options.
If there is no need to inspect the content, hence to terminate the SSL 
connection on the load balancer for this sake, then using SSL session ID based 
persistency is obviously a much more efficient way.
The reference to source client IP changing was to negate the use of source IP 
as the stickiness algorithm.
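A minimal sketch of SSL session ID persistence as described above: the balancer never decrypts the payload; it keys stickiness on the cleartext Session ID from the TLS handshake (RFC 5246, section 7.4.1.2), so a changing client IP does not break affinity. The hash fallback for unseen session IDs is an assumption of this sketch, not a statement about any particular balancer:

```python
# Illustrative persistence table keyed by the TLS session ID. The member
# chosen for a new session ID is picked by hashing (an assumption here);
# repeat handshakes with the same session ID stick to the same member.

import hashlib

class SessionIdPersistence:
    def __init__(self, members):
        self.members = members
        self.table = {}            # session_id -> member address

    def pick(self, session_id):
        if session_id not in self.table:
            idx = int(hashlib.sha256(session_id).hexdigest(), 16) % len(self.members)
            self.table[session_id] = self.members[idx]
        return self.table[session_id]

lb = SessionIdPersistence(["10.0.0.11", "10.0.0.12"])
first = lb.pick(b"\x01\x02\x03")
assert lb.pick(b"\x01\x02\x03") == first   # same session -> same member
```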


-Sam.


From: Trevor Vardeman 
[mailto:trevor.varde...@rackspace.com]
Sent: Thursday, April 24, 2014 7:26 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][LBaaS] Use Case Question

Hey,

I'm looking through the use-cases doc for review, and I'm confused about one of 
them.  I'm familiar with HTTP cookie based session persistence, but to satisfy 
secure-traffic for this case would there be decryption of content, injection of 
the cookie, and then re-encryption?  Is there another session persistence type 
that solves this issue already?  I'm copying the doc link and the use case 
specifically; not sure if the document order would change so I thought it would 
be easiest to include both :)

Use Cases:  
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis

Specific Use Case:  A project-user wants to make his secured web based 
application (HTTPS) highly available. He has n VMs deployed on the same private 
subnet/network. Each VM is installed with a web server (ex: apache) and 
content. The application requires that a transaction which has started on a 
specific VM will continue to run against the same VM. The application is also 
available to end-users via smart phones, a case in which the end user IP might 
change. The project-user wishes to represent them to the application users as a 
web application available via a single IP.

-Trevor Vardeman





--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey

2014-05-01 Thread Carlos Garza
 Balukoff: I'm liking your API spec so far, but can you elaborate on what 
this loadbalancer object you refer to is? You declare it immutable and 
refer to it like an actual primitive object, yet I don't
see a schema for it. I see a loadbalancer_id in the vip request that references it. 
The top part of the doc declares a loadbalancer is the first object created, 
according to the definition in the glossary
(https://wiki.openstack.org/wiki/Neutron/LBaaS/Glossary#Loadbalancer), where it is 
defined as the root object that is first created or can be fully populated, but 
in your API proposal it looks like the vip object is the created top-level 
primitive, with the flavor attribute a part of the VIP. Are you intending to rename 
what we call a loadbalancer to a VIP? Could you provide a workflow for a 
created loadbalancer? It looks good either way.

Is it cool if we rename ca_certificate_id to client_ca or client_ca_certificate 
to make it clear the purpose of the CA is to vet clients? Later on, if we need 
to do encryption to back-end pool members that have x509s signed by their own 
CA, we can then use a parameter like reencryption_ca_certificate.

Consider the following cases.

The user wants SSL_ID based persistence on an HTTPS loadbalancer where the 
loadbalancer does not know the key or cert but has access to the unencrypted 
Session ID (RFC 5246, section 7.4.1.2)
to identify persistence to the back-end HTTPS pool member?

On the pool side of the load balancer, can a load balancer still encrypt if no 
ca_certificate_id or client_certificate_id is present? How would the user 
signal to the API that they intend to encrypt without hostname validation, or 
even without cert validation at all? I'm not sure why they would want to, other 
than not feeling the need to pay for certs on their back-end nodes, or worse 
yet, to pay for a signing cert.

The user feels secure on their network and wants SSL termination at the load 
balancer, so the load balancer has the cert and key and intends to use plain 
old HTTP to the pool members with some headers injected. What would the 
protocol on the listener be, "HTTPS"? And would placing a cert and key imply 
that decryption should happen?

Also, I've been burned in an earlier project when I started noticing that some 
CAs were using ECDSA certs instead of RSA. Should we take non-RSA x509s into 
account as well? Right now it looks like the API assumes everything is RSA.


On May 1, 2014, at 5:35 PM, Stephen Balukoff 
mailto:sbaluk...@bluebox.net>> wrote:

German,

They certainly are essential-- but as far as I can tell, we haven't been 
concentrating on them, so the list there is likely very incomplete.

Stephen


On Thu, May 1, 2014 at 1:04 PM, Eichberger, German 
mailto:german.eichber...@hp.com>> wrote:
Stephen,

I would prefer if we can vote on them, too. They are essential and I would like 
to make sure they are considered first-class citizen when it comes to use cases.

Thanks,
German

From: Stephen Balukoff 
[mailto:sbaluk...@bluebox.net]
Sent: Thursday, May 01, 2014 12:52 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey


Yep, I'm all for this as well!

Note: We're just talking about "user" use cases in this survey, correct?  
(We'll leave the operator use cases for later when we have more of a story 
and/or model to work with on how we're going to approach those, yes?)

Thanks,
Stephen

On Thu, May 1, 2014 at 11:54 AM, Jorge Miramontes 
mailto:jorge.miramon...@rackspace.com>> wrote:
That sounds good to me. The only thing I would caution is that we have 
prioritized certain requirements (like HA and SSL Termination) and I want to 
ensure we use the survey to complement what we have already mutually agreed 
upon. Thanks for spearheading this!

Cheers,
--Jorge

From: Samuel Bercovici mailto:samu...@radware.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, May 1, 2014 12:39 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Neutron][LBaaS] User Stories and survey

Hi Everyone!

To assist in evaluating the use cases that matter and since we now have ~45 use 
cases, I would like to propose to conduct a survey using something like 
surveymonkey.
The idea is to have a non-anonymous survey listing the use cases and asking you 
to identify and vote.
Then we will publish the results and can prioritize based on this.

To do so in a timely manner, I would like to freeze the document for editing 
and allow only comments by Monday, May 5th, 08:00 AM UTC, and publish the 
survey link to the ML ASAP after that.

Please let me know if this is acceptable.

Regards,
-Sam.





Re: [openstack-dev] [Neutron][LBaaS] Updated Use Cases Assessment and Questions

2014-05-01 Thread Stephen Balukoff
Hi Trevor,

Some of these use cases are mine, I will try to clarify the ones that are
in-line:


On Thu, May 1, 2014 at 9:20 AM, Trevor Vardeman <
trevor.varde...@rackspace.com> wrote:

>
> Use-Case 10:  I assumed this was referring to the source-IP that
> accesses the Load Balancer.  As far as I know the X-Forwarded-For header
> includes this.  To satisfy this use-case, was there some expectation to
> retrieve this information through an API request?  Also, with the
> trusted-proxy evaluation, is that being handled by the pool member, or
> was this in reference to an "access list" so-to-speak defined on the
> load balancer?
>

Actually, this would be the source IP of the load balancer itself.  That is
to say, any client on the internet can insert an X-Forwarded-For header
which, with the right server configuration, may cause an application
to attribute its actions to some other IP on the internet. To solve this
potential security problem, a lot of web application software will only
trust the X-Forwarded-For header if the request comes from a trusted proxy.
So, in order for the back-end application to know which IPs constitute this
group of "trusted proxies" (and therefore, which requests it can trust the
X-Forwarded-For header in), the application needs to have some way to know
what IPs the trusted proxies will be using to originate requests. (More
info on how this works is here: http://en.wikipedia.org/wiki/X-Forwarded-For)

In the case of LBaaS, there are a couple ways to handle this problem:

   1. Provide an API interface that a user can use to get a list of the
   possible source IPs for a given load balancer configuration. This is
   somewhat problematic, because this list might change without notice, and
   therefore the back-end application is going to have to check this with some
   regularity.
   2. Make sure that the load balancer also originates requests to the
   back-end from the VIP IP(s). This works pretty well for medium-sized
   deployments, but may break when moving to an active-active topology (ie.
   when each load balancer originating requests needs to do so from a unique IP.)

Does that clear things up a bit?
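For what it's worth, the backend-side check from option 1 can be sketched in a
few lines of Python. The function name, the trusted-network list, and the
fallback behavior here are illustrative assumptions, not part of any proposed
API:

```python
import ipaddress

def effective_client_ip(remote_addr, xff_header, trusted_proxy_nets):
    """Return the client IP an application should trust.

    Only honour X-Forwarded-For when the directly connected peer
    (remote_addr) is one of the trusted load balancer source IPs;
    otherwise any client could spoof the header.
    """
    trusted = [ipaddress.ip_network(n) for n in trusted_proxy_nets]
    peer = ipaddress.ip_address(remote_addr)
    if not any(peer in net for net in trusted) or not xff_header:
        # Header is absent or not trustworthy: fall back to the peer IP.
        return remote_addr
    # The left-most entry is the original client; later entries were
    # appended by each proxy the request passed through.
    return xff_header.split(",")[0].strip()
```

A backend relying on option 2 would simply populate trusted_proxy_nets with the
VIP IP(s), since those are the addresses the load balancer originates from.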



> Use-Case 20:  I do not believe much of this is handled within the LBaaS
> API, but with a different service that provides auto-scaling
> functionality.  Especially the "on-the-fly" updating of properties.
> This also becomes incredibly difficult when considering TCP session
> persistence when the possible pool member could be removed at any
> automated time.
>

This is an example of how one might handle SSH load balancing to an array
of back-end servers. It's somewhat contrived in that these were the
parameters that a potential client inquired about with us, but that we
couldn't at that time deliver in our load balancing infrastructure.

Is anyone else doing this kind of (rather convoluted) load balancing? If
not, obviously feel free to strike this one down as unnecessary in the
up-coming survey. :)


> Use-Case 25:  I think this one is referring to the functionality of a
> "draining" status for a pool member; the pool member will not receive
> any new connections, and will not force any active connection closed.
> Is that the right way to understand that use-case?
>

This was meant to be more of a "continuous deployment" or "rolling
deployment" use case.


> Use-Case 26:  Is this functionally wanting something like an "error
> page" to come up during the maintenance window?  Also, to accept only
> connections from a specific set of IPs only during the maintenance
> window, one would manually have to create an access list for the load
> balancer during the time for testing, and then either modify or remove
> it after maintenance is complete.  Does this sound like an accurate
> understanding/solution?
>

Correct-- we've seen this a number of times from our customers:  They want
a 'maintenance page' to show up for anyone connecting to the service except
their own people during a maintenance window. Having the ability of their
own people hitting the site is actually really important because they need
to make sure that the deployment went well and the site is ready for
production traffic before they open up the flood gates again. If they make
the site generally accessible too early (ie. there was still a problem that
could have been detected with testing if their people could have tested)
this has the potential of introducing bad data into their database that's
impossible to root out afterward.

Just denying connections to the general public (ie. dropping packets or
returning 'connection refused' as a firewall would do) is not acceptable in
these kinds of scenarios to these customers (ie. it's unprofessional to not
show a maintenance page.)


> Use-Case 37:  I'm not entirely sure what this one would mean.  I know I
> included it in the section that sounded more like features, but I was
> still curious what this one referred to.  Does this have to do wit

Re: [openstack-dev] [infra][neutron]SystemExit() vs sys.exit()?

2014-05-01 Thread Robert Collins
Raising SystemExit *or* calling sys.exit() are poor ideas: only outer
layer code should do that. Plumbing should only be raising semantic,
normally catchable exceptions IMO.

-Rob

On 2 May 2014 07:09, Kevin L. Mitchell  wrote:
> On Thu, 2014-05-01 at 18:41 +, Paul Michali (pcm) wrote:
>> So, I tried to reproduce, but I actually see the same results with
>> both of these. However, they both show the issue I was hitting,
>> namely, I got no information on where the failure was located:
>
> So, this is pretty much by design.  A SystemExit extends BaseException,
> rather than Exception.  The tests will catch Exception, but not
> typically BaseException, as you generally want things like ^C to work
> (raises a different BaseException).  So, your tests that might possibly
> trigger a SystemExit (or sys.exit()) that you don't want to actually
> exit from must either explicitly catch the SystemExit or—assuming the
> code uses sys.exit()—must mock sys.exit() to inhibit the normal exit
> behavior.
>
> (Also, because SystemExit is the exception that is usually raised for a
> normal exit condition, the traceback would not typically be printed, as
> that could confuse users; no one expects a successfully executed script
> to print a traceback, after all :)
> --
> Kevin L. Mitchell 
> Rackspace
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
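
Kevin's two options -- catching SystemExit explicitly, or mocking sys.exit() --
can be sketched as follows. This is a minimal illustration, not code from any
project's test suite:

```python
import sys
import unittest
from unittest import mock

def risky_main():
    # Plumbing code that (questionably) exits directly on error.
    sys.exit(2)

class TestRiskyMain(unittest.TestCase):
    def test_catch_systemexit(self):
        # SystemExit extends BaseException, so a bare `except Exception`
        # will not catch it -- assert on it explicitly instead.
        with self.assertRaises(SystemExit) as ctx:
            risky_main()
        self.assertEqual(ctx.exception.code, 2)

    def test_mock_sys_exit(self):
        # Alternatively, mock sys.exit() to inhibit the normal exit
        # behavior entirely.
        with mock.patch.object(sys, "exit") as fake_exit:
            risky_main()
        fake_exit.assert_called_once_with(2)
```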



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [Neutron][LBaaS] Use-Cases with VPNs Distinction

2014-05-01 Thread Stephen Balukoff
Hi Trevor,

I was the one who wrote that use case based on discussion that came out of
the question I wrote the list last week about SSL re-encryption:  Someone
had stated that sometimes pool members are local, and sometimes they are
hosts across the internet, accessible either through the usual default
route, or via a VPN tunnel.

The point of this use case is to make the distinction that if we associate
a neutron_subnet with the pool (rather than with the member), then some
members of the pool that don't exist in that neutron_subnet might not be
accessible from that neutron_subnet.  However, if the behavior of the
system is such that attempting to reach a host through the subnet's
"default route" still works (whether that leads to communication over a VPN
or the usual internet routes), then this might not be a problem.

The other option is to associate the neutron_subnet with a pool member. But
in this case there might be problems too. Namely:

   - The device or software that does the load balancing may need to have
   an interface on each of the member subnets, and presumably an IP address
   from which to originate requests.
   - How does one resolve cases where subnets have overlapping IP ranges?

In the end, it may be simpler not to associate neutron_subnet with a pool
at all. Maybe it only makes sense to do this for a VIP, and then the
assumption would be that any member addresses one adds to pools must be
accessible from the VIP subnet.  (Which is easy, if the VIP exists on the
same neutron_subnet. But this might require special routing within Neutron
itself if it doesn't.)

This topology question (ie. what is feasible, what do people actually want
to do, and what is supported by the model) is one of the more difficult
ones to answer, especially given that users of OpenStack that I've come in
contact with barely understand the Neutron networking model, if at all.

In our case, we don't actually have any users in the scenario of having
members spread across different subnets that might not be routable, so
the use case is somewhat contrived, but I thought it was worth mentioning
based on what people were saying in the SSL re-encryption discussion last
week.


On Thu, May 1, 2014 at 1:52 PM, Trevor Vardeman <
trevor.varde...@rackspace.com> wrote:

> Hello,
>
> After going back through the use-cases to double check some of my
> understanding, I realized I didn't quite understand the ones I had
> already answered.  I'll use a specific use-case as an example of my
> misunderstanding here, and hopefully the clarification can be easily
> adapted to the rest of the use-cases that are similar.
>
> Use Case 13:  A project-user has an HTTPS application in which some of
> the back-end servers serving this application are in the same subnet,
> and others are across the internet, accessible via VPN. He wants this
> HTTPS application to be available to web clients via a single IP
> address.
>
> In this use-case, is the Load Balancer going to act as a node in the
> VPN?  What I mean here, is the Load Balancer supposed to establish a
> connection to this VPN for the client, and simulate itself as a computer
> on the VPN?  If this is not the case, wouldn't the VPN have a subnet ID,
> and simply be added to a pool during its creation?  If the latter is
> accurate, would this not just be a basic HTTPS Load Balancer creation?
> After looking through the VPNaaS API, you would provide a subnet ID to
> the create VPN service request, and it establishes a VPN on said subnet.
> Couldn't this be provided to the Load Balancer pool as its subnet?
>
> Forgive me for requiring so much distinction here, but what may be clear
> to the creator of this use-case, it has left me confused.  This same
> type of clarity would be very helpful across many of the other
> VPN-related use-cases.  Thanks again!
>
> -Trevor
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] [Neutron][LBaaS] Use Case Question

2014-05-01 Thread Stephen Balukoff
Question for those of you using the SSL session ID for persistency: About
how long do you typically set these sessions to persist?

Also, I think this is a cool way to handle this kind of persistence
efficiency-- I'd never seen it done that way before, eh!

It should also almost go without saying that of course in the case where
the SSL session is not terminated on the load balancer, you can't do
anything else with the content (like insert X-Forwarded-For headers or do
anything else that has to do with L7).
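
As a rough illustration, the persistence mechanism being discussed amounts to a
lookup table on the load balancer keyed by the opaque session ID, expired after
an idle timeout. Here is a toy sketch -- the class, its API, and the 300-second
default are assumptions of mine, not anything from a real implementation:

```python
import time

class SessionIdStickiness:
    """Map an opaque TLS session ID to a backend member, expiring
    entries that have been idle longer than the timeout."""

    def __init__(self, timeout_s=300):
        self.timeout_s = timeout_s
        self._table = {}  # session_id -> (member, last_seen)

    def member_for(self, session_id, pick_member, now=None):
        now = time.time() if now is None else now
        entry = self._table.get(session_id)
        if entry is not None and now - entry[1] <= self.timeout_s:
            member = entry[0]           # still sticky
        else:
            member = pick_member()      # fall back to the LB algorithm
        self._table[session_id] = (member, now)
        return member
```

Note the load balancer never needs the key or cert for this: the session ID is
exchanged in the clear during the TLS handshake.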

Stephen


On Wed, Apr 30, 2014 at 9:39 AM, Samuel Bercovici wrote:

>  Hi,
>
>
>
> As stated, this could either be handled by SSL session ID persistency or
> by SSL termination and using cookie based persistency options.
>
> If there is no need to inspect the content hence to terminate the SSL
> connection on the load balancer for this sake, than using SSL session ID
> based persistency is obviously a much more efficient way.
>
> The reference to source client IP changing was to negate the use of source
> IP as the stickiness algorithm.
>
>
>
>
>
> -Sam.
>
>
>
>
>
> *From:* Trevor Vardeman [mailto:trevor.varde...@rackspace.com]
> *Sent:* Thursday, April 24, 2014 7:26 PM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* [openstack-dev] [Neutron][LBaaS] Use Case Question
>
>
>
> Hey,
>
>
>
> I'm looking through the use-cases doc for review, and I'm confused about
> one of them.  I'm familiar with HTTP cookie based session persistence, but
> to satisfy secure-traffic for this case would there be decryption of
> content, injection of the cookie, and then re-encryption?  Is there another
> session persistence type that solves this issue already?  I'm copying the
> doc link and the use case specifically; not sure if the document order
> would change so I thought it would be easiest to include both :)
>
>
>
> Use Cases:
> https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis
>
>
>
> Specific Use Case:  A project-user wants to make his *secured *web based
> application (HTTPS) highly available. He has n VMs deployed on the same
> private subnet/network. Each VM is installed with a web server (ex: apache)
> and content. The application requires that a transaction which has started
> on a specific VM will continue to run against the same VM. The application
> is also available to end-users via smart phones, a case in which the end
> user IP might change. The project-user wishes to represent them to the
> application users as a web application available via a single IP.
>
>
>
> -Trevor Vardeman
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] [Neutron][LBaaS] L7 content switching APIs

2014-05-01 Thread Stephen Balukoff
Hi Samuel,

We talked a bit in chat about this, but I wanted to reiterate a few things
here for the rest of the group.  Comments in-line:


On Wed, Apr 30, 2014 at 6:10 AM, Samuel Bercovici wrote:

>  Hi,
>
>
>
> We have compared the API that is in the blueprint to the one described in
> Stephen's documents.
>
> Follows the differences we have found:
>
> 1)  L7PolicyVipAssoc is gone, this means that L7 policy reuse is not
> possible. I have added use cases 42 and 43 to show where such reuse makes
> sense.
>

Yep, my thoughts were that:

   - The number of times L7 policies will actually get re-used is pretty
   minimal. And in the case of use cases 42 and 43, these can be accomplished
   by duplicating the L7policies and rules (with differing actions) for each
   type of connection.
   - Fewer new objects is usually better and less confusing for the user.
   Having said this, a user advanced enough to use L7 features like this at
   all is likely going to be able to understand what the 'association' policy
   does.

The main counterpoint you shared with me was (if I remember correctly):

   - For different load balancer vendors, it's much easier to code for the
   case where a specific entire feature set that isn't available (ie. L7
   switching or content modification functionality) by making that entire
   feature set modular. A driver in this case can simply return with a
   "feature not supported" error if anyone tries using L7 policies at all.



>  2)  There is a mix between L7 content switching and L7 content
> modification, the API in the blue print only addresses L7 content
> switching. I think that we should separate the APIs from each other. I
> think that we should review/add use cases targeting L7 content
> modifications to the use cases document.
>
Fair enough. There aren't many such use cases in there yet.

>  a.   You can see this in L7Policy: APPEND_HEADER,
> DELETE_HEADER actions
>
> 3)  The action to redirect to a URL is missing in Stephen’s document.
> The 'redirect' action in Stephen’s document is equivalent to the “pool”
> action in the blueprint/code.
>
Yep it is. But this is actually pretty easily added.  We would just add the
'action' of "URL_REDIRECT" and the action_argument would then be the URL to
which to redirect.


>  4)  All the objects have their parent id as an optional argument
> (L7Rule.l7_policy_id, L7Policy.listener_id), is this a mistake?
>
That's actually not a mistake--  a user can create "orphaned" rules in this
model. However, the point was raised earlier by Brandon that it may make
sense for members to be child objects of a specific pool since they can't
be shared. If we do this for members, it also makes sense to do it for
L7Rules since they also can't be shared. At which point the API for
manipulating L7Rules would shift to:

/l7_policy/{policy_uuid}/l7_rules

And in this case, the parent L7Policy ID would be implicit.

(I'm all for this change, by the way.)

>  5)  There is also the additional behavior based on L3 information
> (matching the client/source IP to a subnet). This is addressed by
> L7Rule.type with a value of 'CLIENT_IP' and L7Rule.compare_type with a
> value of 'SUBNET'. I think that using Layer 3 type information should not
> be part of L7 content switching as the use cases I am aware of, might
> require more than just selecting a different pool (ex: user with ip from
> internet browsing to an https based application, might need to be secured
> using 2K SSL keys while internal users could use weaker keys)
>
While it's true that having a way to manipulate this without being part of
an HTTP or unwrapped HTTPS session is also useful--  it's still useful to
be able to create L7 rules which also make decisions based on subnet.
 (Notice also with TLS_SNI_Policies there is a 'hostname' attribute, and
also with L7 rules there is a 'hostname' type of rule? Again, useful to
have in two places, eh!)


> I would like to state that although the WIKI describes the solution from a
> high level it is not totally in sync with the actual code.
>
> The key thing which is missing is that, L7 Policies in a specific
> listener/vip are ordered (ordered list) and are processed in order so that
> the 1st policy that has a match will be activated and traversal of the L7
> policy list is stopped, as the processing is final (ex: redirect, pool,
> reject).
>
> This in effect means that L7 Policy form an ‘or’ condition between them.
>
> L7 Policies have an ordered list of L7 Rules, L7 Rules are processed by
> this order and also form an ‘or’ condition.
>

Agreed, and I think my API works the same way. I will say though:  I did
remove the 'order' attribute from L7Rules because if all the conditions
that make up a policy are OR'ed together, then order no longer matters.  If
we want to define a more feature-rich DSL here, then rule order would
matter.  (Note that the order in which entire L7Policies appear still
matters. The first one to match w

Re: [openstack-dev] [nova] No meeting this week

2014-05-01 Thread Michael Still
Yeah, I feel bad that we haven't had one in three weeks now -- it's
definitely a thing I am not happy with. Next week for sure.

Michael

On Fri, May 2, 2014 at 12:15 AM, Matt Riedemann
 wrote:
>
>
> On 5/1/2014 1:58 AM, Michael Still wrote:
>>
>> Hi.
>>
>> I was intending to run a nova meeting this week, but I don't think its
>> worth a mutiny over the "off week" that the rest of the project is
>> respecting. The only agenda items I can think of are:
>>
>>   - please prepare your summit sessions
>>   - I've attempted to fix the clashes in scheduling that are reported
>>   - please fix some bugs!
>>
>> I think those are self explanatory to be honest. If any discussion is
>> required, please use this thread for it. So... keep at it!
>>
>> Cheers,
>> Michael
>>
>
> I might be in the minority, but I'm still "on" this week and while there
> might not be a ton of content to talk about or on the agenda (people rarely
> update the agenda wiki directly I've found anyway), I feel like we should
> still have a meeting at some point - we haven't had one in about a month
> now.  I realize people are either burned out on Icehouse or getting ready
> for Juno, but I suspect people would at least still show up to a meeting and
> topics would come up, especially around people with nova-specs up for
> review.
>
> Maybe I'm just lonely :) but would be nice to have a Nova meeting soon since
> I don't think email spurs the same constructive discussion that can happen
> in the meetings, and those are usually off-topic anyway.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia



Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey

2014-05-01 Thread Stephen Balukoff
German,

They certainly are essential-- but as far as I can tell, we haven't been
concentrating on them, so the list there is likely very incomplete.

Stephen


On Thu, May 1, 2014 at 1:04 PM, Eichberger, German  wrote:

>  Stephen,
>
>
>
> I would prefer if we can vote on them, too. They are essential and I would
> like to make sure they are considered first-class citizen when it comes to
> use cases.
>
>
>
> Thanks,
>
> German
>
>
>
> *From:* Stephen Balukoff [mailto:sbaluk...@bluebox.net]
> *Sent:* Thursday, May 01, 2014 12:52 PM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey
>
>
>
> Yep, I'm all for this as well!
>
>
>
> Note: We're just talking about "user" use cases in this survey, correct?
>  (We'll leave the operator use cases for later when we have more of a story
> and/or model to work with on how we're going to approach those, yes?)
>
>
>
> Thanks,
>
> Stephen
>
>
>
> On Thu, May 1, 2014 at 11:54 AM, Jorge Miramontes <
> jorge.miramon...@rackspace.com> wrote:
>
> That sounds good to me. The only thing I would caution is that we have
> prioritized certain requirements (like HA and SSL Termination) and I want
> to ensure we use the survey to complement what we have already mutually
> agreed upon. Thanks for spearheading this!
>
>
>
> Cheers,
>
> --Jorge
>
>
>
> *From: *Samuel Bercovici 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Thursday, May 1, 2014 12:39 PM
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *[openstack-dev] [Neutron][LBaaS] User Stories and survey
>
>
>
>   Hi Everyone!
>
>
>
> To assist in evaluating the use cases that matter and since we now have
> ~45 use cases, I would like to propose to conduct a survey using something
> like surveymonkey.
>
> The idea is to have a non-anonymous survey listing the use cases and asking
> you to identify and vote.
>
> Then we will publish the results and can prioritize based on this.
>
>
>
> To do so in a timely manner, I would like to freeze the document for
> editing and allow only comments by Monday, May 5th, 08:00 AM UTC, and publish
> the survey link to the ML ASAP after that.
>
>
>
> Please let me know if this is acceptable.
>
>
>
> Regards,
>
> -Sam.
>
>
>
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] [oslo] need feedback on steps for adding oslo libs to projects

2014-05-01 Thread Ben Nemec

On 04/09/2014 11:11 AM, Doug Hellmann wrote:

I have started writing up some general steps for adding oslo libs to
projects, and I would like some feedback about the results. They can't
go into too much detail about specific changes in a project, because
those will vary by library and project. I would like to know if the
order makes sense and if the instructions for the infra updates are
detailed enough. Also, of course, if you think I'm missing any steps.

https://wiki.openstack.org/wiki/Oslo/UsingALibrary

Thanks,
Doug


(finally getting to some e-mail threads I had left in my inbox...)

I did not have any particular problems integrating oslotest with the 
existing instructions, although I know the question of cross-testing is 
still kind of up in the air so of course that part of it may need changes.


-Ben



Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process

2014-05-01 Thread Jorge Miramontes
As usual, comments are inline.

Cheers,
--Jorge

From: Eugene Nikanorov mailto:enikano...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, May 1, 2014 3:10 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process

Hi,


On Thu, May 1, 2014 at 10:46 PM, Jorge Miramontes 
mailto:jorge.miramon...@rackspace.com>> wrote:
Hey Eugene,

I think there is a misunderstanding on what iterative development means to you 
and me and I want to make sure we are on the same page. First of all, I'll try 
not to use the term "duct-taping" even though it's a widely used term in the 
industry.
I'm not against the term itself.
It was applied several times to the existing code base, apparently without ANY 
real code analysis.
That's especially clearly seen because all API proposals so far are focusing on 
managing the same set of lb primitives.
Yes, the proposals introduce some new primitives; yes some attributes and 
relationships differ from what is in the code.
But nothing has been proposed so far that would require completely throwing 
away the existing code, not a single requirement.

I understand that writing something from scratch can be more convenient for 
developers than studying existing code, but that's something we all have to do 
when working on opensource project.

To be perfectly clear we are not advocating "starting from scratch". If it has 
come out that way then let me be the first to correct that on behalf of my 
team. In reality, defining a brand new API specification is irrelevant to 
implementation. I like to see defining a spec as similar to defining an RFC. 
The reason why I don't even want to think about implementation is that it does 
not allow discussion to be open-minded. I agree that a particular API proposal 
might be easier to mold existing code to than another. However, this goes 
against the mentality of comparing on equal footing. You, for example, seem 
biased toward Stephen's proposal because you understand the current code base 
the best (since you wrote the majority of it) and see his proposal as most 
inline with said code. However, I ask that you try not to let current 
implementation cloud your judgement. If Stephen's proposal is what the 
community agrees upon then great! All I ask is that we compare fairly and 
without implementation in mind since we are defining what we want, not what we 
currently have in place. Once an API specification is agreed upon, then and 
only then, should we figure out how to mold the existing implementation towards 
the state the spec defines. Does that make sense?


My main concern is that implementing code on top of the current codebase to 
meet the smorgasbord of new requirements without thinking about overall design 
(since we know we will eventually want all the requirements satisfied at some 
point per your words)
The overall design was thought out long before we started having all these 
discussions.
And things are not quick in the neutron project, regardless of the amount of dev 
resources the lbaas subteam may have.

While the overall design may have been thought out long ago, it doesn't mean that 
the discussion should be closed. By saying this, you are implying that 
newcomers are not welcome in those discussions. At least, that is how your 
statement comes across to me. I'll give you the benefit of the doubt to correct my 
understanding of that.


is that some requirement implemented 6 months from now may change code 
architecture. Since we know we want to meet all requirements eventually, it 
makes logical sense to design for what we know we need and then figure out how 
to iteratively implement code over time.
That was initially done at the Icehouse summit, and we just had to reiterate the 
discussion for new subteam members who have joined recently.
I agree that we should "design for what we know we need", but the primary option 
should be to continue the existing work and analyse it to find gaps; that is what 
Samuel and I were focusing on. Stephen's proposal also goes along with this idea 
because everything in his doc can be implemented gradually, starting from the 
existing code.

That being said, if it makes sense to use existing code first, then fine. In 
fact, I am a fan of trying to manipulate as little code as possible unless we 
absolutely have to. I just want to be a smart developer and design knowing I 
will eventually have to implement something. Not keeping things in mind can be 
dangerous.
I fully agree and that's well understood.

In short, I want to avoid having to perform multiple code refactors if possible 
and design upfront with the list of requirements the community has spent time 
fleshing out.

Also, it seems like you have some implicit developer requirements that I'd like 
written somewhere. This may ease confusion as well. For example, you stated 
"Consist

Re: [openstack-dev] [oslo] preparing oslo.i18n for graduation

2014-05-01 Thread Ben Nemec

On 04/29/2014 02:48 PM, Doug Hellmann wrote:

I have exported the gettextutils code and related files to a new git
repository, ready to be imported as oslo.i18n. Please take a few
minutes to look over the files and give it a sanity check.

https://github.com/dhellmann/oslo.i18n

Thanks,
Doug


No functional issues, just a few cleanups:

Would be nice to fix up:
https://github.com/dhellmann/oslo.i18n/blob/master/tests/fakes.py#L17

Also:
https://github.com/dhellmann/oslo.i18n/blob/master/oslo/i18n/gettextutils.py#L22

Are we leaving the globals in
https://github.com/dhellmann/oslo.i18n/blob/master/oslo/i18n/gettextutils.py#L118 
until the integration modules are done?


That's all I noticed looking through the repo.  None of it's a big deal 
(we can fix it all after import if necessary) and the unit tests are 
passing locally for me.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Wrong Test Cases

2014-05-01 Thread Ben Nemec

On 05/01/2014 12:27 PM, Hao Wang wrote:

Hi,

I have got one question: if there is something wrong with a test case,
and it prevents the review from proceeding, what should I do?


You mean you hit a bug in a test case?  If so, see the link Jenkins 
posted to the review: 
https://wiki.openstack.org/wiki/GerritJenkinsGit#Test_Failures




Thanks,
Hao


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [Neutron] SSL VPN Implemenatation

2014-05-01 Thread Nachi Ueno
Hi folks

> Clint
Thanks, things get clear for me now :)





2014-05-01 13:21 GMT-07:00 John Wood :
> I was going to bring up Postern [1] as well, Clint. Unfortunately not much 
> work has been done on it though.
>
> [1] https://github.com/cloudkeep/postern
>
> Thanks,
> John
>
>
>
> 
> From: Clint Byrum [cl...@fewbar.com]
> Sent: Thursday, May 01, 2014 2:22 PM
> To: openstack-dev
> Subject: Re: [openstack-dev] [Neutron] SSL VPN Implemenatation
>
> Excerpts from Nachi Ueno's message of 2014-05-01 12:04:23 -0700:
>> Ah I got it now!
>> so even if we get stolen HDD, we can keep password safe.
>>
>> However, I'm still not sure why this is more secure..
>> anyway, the ID/PW to access barbican will be written in neutron.conf, right?
>>
>
> Yes. However, you can surround the secret in policies. You'll have an
> audit trail of when and where it was accessed, and you can even restrict
> access, so that out of band you have to open up access with barbican.
>
> So while the server may have access, that access is now audited and
> limited by policy, instead of just being dependent on the security
> measures you can take to protect a file.
>
>> Furthermore,  ID/PW for mysql will be written in conf file..
>> so if we can't trust unix file system protection, there is no security
>> in OpenStack.
>
> The ID/PW for mysql only grants you access to mysql for as long as that
> id/pw are enabled for access. However, the encryption keys for OpenVPN
> will grant any passive listener access for as long as they keep any
> sniffed traffic. They'll also grant an attacker the ability to MITM
> traffic between peers.
>
> So when an encryption key has been accessed, from where, etc, is quite
> a bit more crucial than knowing when a username/password combo have
> been accessed.
>
> Producing a trustworthy audit log for access to /etc/neutron/neutron.conf
> is a lot harder than producing an audit log for a REST API.
>
> So it isn't so much that file system permissions aren't enough, it is
> that file system observability is expensive.
>
> Note that at some point there was a POC to have a FUSE driver backed by
> Barbican called 'Postern' I think. That would make these discussions a
> lot simpler. :)
>
>>
>> 2014-05-01 10:31 GMT-07:00 Clint Byrum :
>> > I think you'd do something like this (Note that I don't know off the top
>> > of my head the barbican CLI or openvpn cli switches... just
>> > pseudo-code):
>> >
>> > oconf=$(mktemp -d /tmp/openvpnconfig.XXXXXX)
>> > mount -t tmpfs -o size=1M tmpfs $oconf
>> > barbican get my-secret-openvpn-conf > $oconf/foo.conf
>> > openvpn --config-dir $oconf foo --daemonize
>> > umount $oconf
>> > rmdir $oconf
>> >
>> > Excerpts from Nachi Ueno's message of 2014-05-01 10:15:26 -0700:
>> >> Hi Robert
>> >>
>> >> Thank you for your suggestion.
>> >> so your suggestion is let OpenVPN process download key to memory
>> >> directly from Barbican?
>> >>
>> >> 2014-05-01 9:42 GMT-07:00 Clark, Robert Graham :
>> >> > Excuse me interrupting but couldn't you treat the key as largely
>> >> > ephemeral, pull it down from Barbican, start the OpenVPN process and
>> >> > then purge the key?  It would of course still be resident in the memory
>> >> > of the OpenVPN process but should otherwise be protected against
>> >> > filesystem disk-residency issues.
>> >> >
>> >> >
>> >> >> -Original Message-
>> >> >> From: Nachi Ueno [mailto:na...@ntti3.com]
>> >> >> Sent: 01 May 2014 17:36
>> >> >> To: OpenStack Development Mailing List (not for usage questions)
>> >> >> Subject: Re: [openstack-dev] [Neutron] SSL VPN Implemenatation
>> >> >>
>> >> >> Hi Jarret
>> >> >>
>> >> >> IMO, Zang point is the issue saving plain private key in the
>> >> > filesystem for
>> >> >> OpenVPN.
>> >> >> Isn't this same even if we use Barbican?
>> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >> >> 2014-05-01 2:56 GMT-07:00 Jarret Raim :
>> >> >> > Zang mentioned that part of the issue is that the private key has to
>> >> >> > be stored in the OpenVPN config file. If the config files are
>> >> >> > generated and can be stored, then storing the whole config file in
>> >> >> > Barbican protects the private key (and any other settings) without
>> >> >> > having to try to deliver the key to the OpenVPN endpoint in some
>> >> > non-
>> >> >> standard way.
>> >> >> >
>> >> >> >
>> >> >> > Jarret
>> >> >> >
>> >> >> > On 4/30/14, 6:08 PM, "Nachi Ueno"  wrote:
>> >> >> >
>> >> >> >>> Jarret
>> >> >> >>
>> >> >> >>Thanks!
>> >> >> >>Currently, the config will be generated on demand by the agent.
>> >> >> >>What's merit storing entire config in the Barbican?
>> >> >> >>
>> >> >> >>> Kyle
>> >> >> >>Thanks!
>> >> >> >>
>> >> >> >>2014-04-30 7:05 GMT-07:00 Kyle Mestery
>> >> >> :
>> >> >> >>> On Tue, Apr 29, 2014 at 6:11 PM, Nachi Ueno 
>> >> >> wrote:
>> >> >>  Hi Clint
>> >> >> 
>> >> >>  Thank you for your suggestion. Your point get taken :)
>> >> >> 
>> >> >> > Kyle
>> >> >>  This is also a s

[openstack-dev] [Neutron][LBaaS] Fulfilling Operator Requirements: Driver / Management API

2014-05-01 Thread Adam Harwell
I am sending this now to gauge interest and get feedback on what I see as an 
impending necessity — updating the existing "haproxy" driver, replacing it, or 
both. Though we're not there yet, it is probably best to at least start the 
discussion now, to hopefully limit some fragmentation that may be starting 
around this concept already.

To begin with, I should probably define some terms. Following is a list of the 
major things I'll be referencing and what I mean by them, since I would like to 
avoid ambiguity as much as possible.

--
 Glossary
--
HAProxy: This references two things currently, and I feel this is a source of 
some misunderstanding. When I refer to  HAProxy (capitalized), I will be 
referring to the official software package (found here: http://haproxy.1wt.eu/ 
), and when I refer to "haproxy" (lowercase, and in quotes) I will be referring 
to the neutron-lbaas driver (found here: 
https://github.com/openstack/neutron/tree/master/neutron/services/loadbalancer/drivers/haproxy
 ). The fact that the neutron-lbaas driver is named directly after the software 
package seems very unfortunate, and while it is not directly in the scope of 
what I'd like to discuss here, I would love to see it changed to more 
accurately reflect what it is --  one specific driver implementation that 
coincidentally uses HAProxy as a backend. More on this later.

Operator Requirements: The requirements that can be found on the wiki page 
here:  
https://wiki.openstack.org/wiki/Neutron/LBaaS/requirements#Operator_Requirements
 and focusing on (but not limited to) the following list:
* Scalability
* DDoS Mitigation
* Diagnostics
* Logging and Alerting
* Recoverability
* High Availability (this is in the User Requirements section, but will be 
largely up to the operator to handle, so I would include it when discussing 
Operator Requirements)

Management API: A restricted API containing resources that Cloud Operators 
could access, including most of the list of Operator Requirements (above).

Load Balancer (LB): I use this term very generically — essentially a logical 
entity that represents one "use case". As used in the sentence: "I have a Load 
Balancer in front of my website." or "The Load Balancer I set up to offload SSL 
Decryption is lowering my CPU load nicely."

--
 Overview
--
What we've all been discussing for the past month or two (the API, Object 
Model, etc) is being directly driven by the User and Operator Requirements that 
have somewhat recently been enumerated (many thanks to everyone who has 
contributed to that discussion!). With that in mind, it is hopefully apparent 
that the current API proposals don't directly address many (or really, any) of 
the Operator requirements! Where in either of our API proposals are logging, 
high availability, scalability, DDoS mitigation, etc? I believe the answer is 
that none of these things can possibly be handled by the API, but are really 
implementation details at the driver level. Radware, NetScaler, Stingray, F5 
and HAProxy of any flavour would all have very different ways of handling these 
things (these are just some of the possible backends I can think of). At the 
end of the day, what we really have are the requirements for a driver, which 
may or may not use HAProxy, that we hope will satisfy all of our concerns. That 
said, we may also want to have some form of "Management API" to expose these 
features in a common way.

In this case, we really need to discuss two things:

  1.  Whether to update the existing "haproxy" driver to accommodate these 
Operator Requirements, or whether to start from scratch with a new driver 
(possibly both).
  2.  How to expose these Operator features at the (Management?) API level.

--
 1) Driver
--
I believe the current "haproxy" driver serves a very specific purpose, and 
while it will need some incremental updates, it would be in the best interest 
of the community to also create and maintain a new driver (which it sounds like 
several groups have already begun work on — ack!) that could support a 
different approach. For instance, the current "haproxy" driver is implemented 
by initializing HAProxy processes on a set of shared hosts, whereas there has 
been some momentum behind creating individual Virtual Machines (via Nova) for 
each Load Balancer created, similar to Libra's approach. Alternatively, we 
could use LXC or a similar technology to more effectively isolate LBs and 
assuage concerns about tenant cross-talk (real or imaginary, this has been an 
issue for some customers). Either way, we'd probably need a brand new driver, 
to avoid breaking backwards compatibility with the existing driver (which does 
work perfectly fine in many cases). In fact, it's possible that when we begin 
discussing this as a broader c

[openstack-dev] Monitoring as a Service

2014-05-01 Thread Alexandre Viau
Hello Everyone!

My name is Alexandre Viau from Savoir-Faire Linux.

We have submitted a Monitoring as a Service blueprint and need feedback.

Problem to solve: Ceilometer's purpose is to track and *measure/meter* usage 
information collected from OpenStack components (originally for billing). While 
Ceilometer is useful for cloud operators and infrastructure metering, it 
is not a *monitoring* solution for the tenants and their services/applications 
running in the cloud because it does not allow for service/application-level 
monitoring and it ignores detailed and precise guest system metrics.

Proposed solution: We would like to add Monitoring as a Service to OpenStack.

Just like Rackspace's Cloud Monitoring, the new monitoring service - let's call 
it OpenStackMonitor for now - would let users/tenants keep track of their 
resources on the cloud and receive instant notifications when they require 
attention.

This RESTful API would enable users to create multiple monitors with predefined 
checks, such as PING, CPU usage, HTTPS and SMTP or custom checks performed by a 
Monitoring Agent on the instance they want to monitor.

Predefined checks such as CPU and disk usage could be polled from Ceilometer. 
Other predefined checks would be performed by the new monitoring service 
itself. Checks such as PING could be flagged to be performed from multiple 
sites.

Custom checks would be performed by an optional Monitoring Agent. Their results 
would be polled by the monitoring service and stored in Ceilometer.
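To make the proposal above more concrete, here is a minimal sketch of what a tenant-facing monitor definition could look like. Everything in it is an assumption for illustration: the resource fields, check types, site names, and the `make_monitor` helper are invented here and are not part of any existing OpenStack API or of the blueprint itself.

```python
# Illustrative sketch only: field names, check types, and site names are
# assumptions about how the proposed monitoring API could look.
import json


def make_monitor(name, target_instance, checks):
    """Build the JSON body a tenant might POST to a hypothetical /monitors."""
    return {
        "name": name,
        "target": target_instance,  # the instance to monitor
        "checks": checks,           # list of check definitions
    }


checks = [
    # A predefined check whose data could be polled from Ceilometer.
    {"type": "cpu", "source": "ceilometer", "threshold_percent": 90},
    # A PING check flagged to be performed from multiple sites.
    {"type": "ping", "from_sites": ["site-a", "site-b"], "timeout_s": 5},
    # A custom check executed by the optional on-instance Monitoring Agent;
    # its result would be polled by the service and stored in Ceilometer.
    {"type": "agent.custom", "command": "check_app_health.sh"},
]

body = make_monitor("web-tier", "instance-1234", checks)
print(json.dumps(body, indent=2))
```

The point of the sketch is only to show the split the proposal describes: Ceilometer-backed predefined checks, service-side remote checks, and agent-side custom checks, all grouped under one monitor resource.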

If you wish to collaborate, feel free to contact me at 
alexandre.v...@savoirfairelinux.com
The blueprint is available here: 
https://blueprints.launchpad.net/openstack-ci/+spec/monitoring-as-a-service

Thanks!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process

2014-05-01 Thread Eugene Nikanorov
>
>
> We wanted a discussion to happen on whether the existing object model
> would work with both API proposals.  That blueprint being pushed to
> gerrit the same time as Stephen mailing out his proposal made it seem
> like this was not going to happen.
>
I'm sorry about that. In fact, I was just planning to propose a more detailed
design of what could be treated as a part of Stephen's proposal.
I also think that we'll converge on Stephen's and Rackspace's proposals
eventually.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Use-Cases with VPNs Distinction

2014-05-01 Thread Trevor Vardeman
Hello,

After going back through the use-cases to double check some of my
understanding, I realized I didn't quite understand the ones I had
already answered.  I'll use a specific use-case as an example of my
misunderstanding here, and hopefully the clarification can be easily
adapted to the rest of the use-cases that are similar.

Use Case 13:  A project-user has an HTTPS application in which some of
the back-end servers serving this application are in the same subnet,
and others are across the internet, accessible via VPN. He wants this
HTTPS application to be available to web clients via a single IP
address.

In this use-case, is the Load Balancer going to act as a node in the
VPN?  What I mean here is: is the Load Balancer supposed to establish a
connection to this VPN for the client, and simulate itself as a computer
on the VPN?  If this is not the case, wouldn't the VPN have a subnet ID,
and simply be added to a pool during its creation?  If the latter is
accurate, would this not just be a basic HTTPS Load Balancer creation?
After looking through the VPNaaS API, you would provide a subnet ID to
the create VPN service request, and it establishes a VPN on said subnet.
Couldn't this be provided to the Load Balancer pool as its subnet?

Forgive me for requiring so much distinction here, but what may be clear
to the creator of this use-case has left me confused.  This same
type of clarity would be very helpful across many of the other
VPN-related use-cases.  Thanks again!

-Trevor
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process

2014-05-01 Thread Brandon Logan
Hi Eugene,

On Fri, 2014-05-02 at 00:10 +0400, Eugene Nikanorov wrote:
> Hi,
> 
> 
> On Thu, May 1, 2014 at 10:46 PM, Jorge Miramontes
>  wrote:
> Hey Eugene,
> 
> 
> I think there is a misunderstanding on what iterative
> development means to you and me and I want to make sure we are
> on the same page. First of all, I'll try not to use the term
> "duct-taping" even though it's a widely used term in the
> industry. 
> I'm not against the term itself. 
> It was applied several times to existing code base, apparently without
> ANY real code analysis.
> That's especially clearly seen because all API proposals so far are
> focusing on managing the same set of lb primitives.
> Yes, the proposals introduce some new primitives; yes some attributes
> and relationships differ from what is in the code. 
> But nothing was proposed so far that would require to completely throw
> away existing code, not a single requirement.
Just to make it clear, no one said the existing code base was duct taped
together.  What happened was that you pushed that object model
improvements blueprint into the neutron-specs the same time that Stephen
sent out his API proposal.  This made it seem like that no discussion or
analysis of the existing object model was going to take place and verify
that the existing object model would work with either of the API
proposals.  So we were saying that duct-taping the API on top of the
existing object model was not a good idea if the existing object model
did not fit well.  This would result in more duct-taping and so forth.
I'm sure we've all been in a maintenance nightmare where the code is
duct-taped together and one minor change causes major issues.

We wanted a discussion to happen on whether the existing object model
would work with both API proposals.  That blueprint being pushed to
gerrit the same time as Stephen mailing out his proposal made it seem
like this was not going to happen.  No one ever said the existing code
was duct-taped together, and I am sorry you got that impression.

Thanks,
Brandon




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert implementation for LBaaS and VPN

2014-05-01 Thread John Wood
Hello Samuel,

Just noting that the link below shows current-state Barbican. We are in the 
process of designing SSL certificate support for Barbican via blueprints such 
as this one: 
https://wiki.openstack.org/wiki/Barbican/Blueprints/ssl-certificates
We intend to discuss this feature in Atlanta to enable coding in earnest for 
Juno.

The Container resource is intended to capture/store the final certificate 
details.

Thanks,
John



From: Samuel Bercovici [samu...@radware.com]
Sent: Thursday, May 01, 2014 12:50 PM
To: OpenStack Development Mailing List (not for usage questions); 
os.v...@gmail.com
Subject: Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert 
implementation for LBaaS and VPN

Hi Vijay,

I have looked at the Barbican APIs – 
https://github.com/cloudkeep/barbican/wiki/Application-Programming-Interface
I was not able to see a “native” API that will accept an SSL certificate 
(private key, public key, CSR, etc.) and store it.
We can either store the whole certificate as a single file as a secret or use a 
container and store all the certificate parts as secrets.

I think that having LBaaS reference certificates as IDs using some service is 
the right way to go, so this might be achieved by one of the following:

1.   Adding to Barbican an API to store/generate certificates

2.   Creating a new “module”, perhaps initially hosted in neutron or 
keystone, that will allow managing certificates and will use Barbican behind 
the scenes to store them.

3.   Deciding on a container structure to use in Barbican, but implementing the 
way to access and arrange it as a neutron library

Was any decision made on how to proceed?

Regards,
-Sam.




From: Vijay B [mailto:os.v...@gmail.com]
Sent: Wednesday, April 30, 2014 3:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron] [LBaaS][VPN] SSL cert implementation for 
LBaaS and VPN

Hi,

It looks like there are areas of common effort in multiple efforts that are 
proceeding in parallel to implement SSL for LBaaS as well as VPN SSL in neutron.

Two relevant efforts are listed below:


https://review.openstack.org/#/c/74031/   
(https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL)

https://review.openstack.org/#/c/58897/   
(https://blueprints.launchpad.net/openstack/?searchtext=neutron-ssl-vpn)



Both VPN and LBaaS will use SSL certificates and keys, and this makes it better 
to implement SSL entities as first class citizens in the OS world. So, three 
points need to be discussed here:

1. The VPN SSL implementation above is putting the SSL cert content in a 
mapping table, instead of maintaining certs separately and referencing them 
using IDs. The LBaaS implementation stores certificates in a separate table, 
but implements the necessary extensions and logic under LBaaS. We propose that 
both these implementations move away from this and refer to SSL entities using 
IDs, and that the SSL entities themselves are implemented as their own 
resources, serviced either by a core plugin or a new SSL plugin (assuming 
neutron; please also see point 3 below).

2. The actual data store where the certs and keys are stored should be 
configurable at least globally, such that the SSL plugin code will singularly 
refer to that store alone when working with the SSL entities. The data store 
candidates currently are Barbican and a sql db. Each should have a separate 
backend driver, along with the required config values. If further evaluation of 
Barbican shows that it fits all SSL needs, we should make it a priority over a 
sqldb driver.

3. Where should the primary entries for the SSL entities be stored? While the 
actual certs themselves will reside on Barbican or SQLdb, the entities 
themselves are currently being implemented in Neutron since they are being 
used/referenced there. However, we feel that implementing them in keystone 
would be most appropriate. We could also follow a federated model where a 
subset of keys can reside on another service such as Neutron. We are fine with 
starting an initial implementation in neutron, in a modular manner, and move it 
later to keystone.


Please provide your inputs on this.


Thanks,
Regards,
Vijay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] SSL VPN Implemenatation

2014-05-01 Thread John Wood
I was going to bring up Postern [1] as well, Clint. Unfortunately not much work 
has been done on it though. 

[1] https://github.com/cloudkeep/postern

Thanks,
John




From: Clint Byrum [cl...@fewbar.com]
Sent: Thursday, May 01, 2014 2:22 PM
To: openstack-dev
Subject: Re: [openstack-dev] [Neutron] SSL VPN Implemenatation

Excerpts from Nachi Ueno's message of 2014-05-01 12:04:23 -0700:
> Ah I got it now!
> so even if we get stolen HDD, we can keep password safe.
>
> However, I'm still not sure why this is more secure..
> anyway, the ID/PW to access barbican will be written in neutron.conf, right?
>

Yes. However, you can surround the secret in policies. You'll have an
audit trail of when and where it was accessed, and you can even restrict
access, so that out of band you have to open up access with barbican.

So while the server may have access, that access is now audited and
limited by policy, instead of just being dependent on the security
measures you can take to protect a file.

> Furthermore,  ID/PW for mysql will be written in conf file..
> so if we can't trust unix file system protection, there is no security
> in OpenStack.

The ID/PW for mysql only grants you access to mysql for as long as that
id/pw are enabled for access. However, the encryption keys for OpenVPN
will grant any passive listener access for as long as they keep any
sniffed traffic. They'll also grant an attacker the ability to MITM
traffic between peers.

So when an encryption key has been accessed, from where, etc, is quite
a bit more crucial than knowing when a username/password combo have
been accessed.

Producing a trustworthy audit log for access to /etc/neutron/neutron.conf
is a lot harder than producing an audit log for a REST API.

So it isn't so much that file system permissions aren't enough, it is
that file system observability is expensive.

Note that at some point there was a POC to have a FUSE driver backed by
Barbican called 'Postern' I think. That would make these discussions a
lot simpler. :)

>
> 2014-05-01 10:31 GMT-07:00 Clint Byrum :
> > I think you'd do something like this (Note that I don't know off the top
> > of my head the barbican CLI or openvpn cli switches... just
> > pseudo-code):
> >
> > oconf=$(mktemp -d /tmp/openvpnconfig.XXXXXX)
> > mount -t tmpfs -o size=1M tmpfs $oconf
> > barbican get my-secret-openvpn-conf > $oconf/foo.conf
> > openvpn --config-dir $oconf foo --daemonize
> > umount $oconf
> > rmdir $oconf
> >
> > Excerpts from Nachi Ueno's message of 2014-05-01 10:15:26 -0700:
> >> Hi Robert
> >>
> >> Thank you for your suggestion.
> >> so your suggestion is let OpenVPN process download key to memory
> >> directly from Barbican?
> >>
> >> 2014-05-01 9:42 GMT-07:00 Clark, Robert Graham :
> >> > Excuse me interrupting but couldn't you treat the key as largely
> >> > ephemeral, pull it down from Barbican, start the OpenVPN process and
> >> > then purge the key?  It would of course still be resident in the memory
> >> > of the OpenVPN process but should otherwise be protected against
> >> > filesystem disk-residency issues.
> >> >
> >> >
> >> >> -Original Message-
> >> >> From: Nachi Ueno [mailto:na...@ntti3.com]
> >> >> Sent: 01 May 2014 17:36
> >> >> To: OpenStack Development Mailing List (not for usage questions)
> >> >> Subject: Re: [openstack-dev] [Neutron] SSL VPN Implemenatation
> >> >>
> >> >> Hi Jarret
> >> >>
> >> >> IMO, Zang point is the issue saving plain private key in the
> >> > filesystem for
> >> >> OpenVPN.
> >> >> Isn't this same even if we use Barbican?
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> 2014-05-01 2:56 GMT-07:00 Jarret Raim :
> >> >> > Zang mentioned that part of the issue is that the private key has to
> >> >> > be stored in the OpenVPN config file. If the config files are
> >> >> > generated and can be stored, then storing the whole config file in
> >> >> > Barbican protects the private key (and any other settings) without
> >> >> > having to try to deliver the key to the OpenVPN endpoint in some
> >> > non-
> >> >> standard way.
> >> >> >
> >> >> >
> >> >> > Jarret
> >> >> >
> >> >> > On 4/30/14, 6:08 PM, "Nachi Ueno"  wrote:
> >> >> >
> >> >> >>> Jarret
> >> >> >>
> >> >> >>Thanks!
> >> >> >>Currently, the config will be generated on demand by the agent.
> >> >> >>What's merit storing entire config in the Barbican?
> >> >> >>
> >> >> >>> Kyle
> >> >> >>Thanks!
> >> >> >>
> >> >> >>2014-04-30 7:05 GMT-07:00 Kyle Mestery
> >> >> :
> >> >> >>> On Tue, Apr 29, 2014 at 6:11 PM, Nachi Ueno 
> >> >> wrote:
> >> >>  Hi Clint
> >> >> 
> >> >>  Thank you for your suggestion. Your point get taken :)
> >> >> 
> >> >> > Kyle
> >> >>  This is also a same discussion for LBaaS Can we discuss this in
> >> >>  advanced service meeting?
> >> >> 
> >> >> >>> Yes! I think we should definitely discuss this in the advanced
> >> >> >>> services meeting today. I've added it to the agenda [1].
> >> >> >>>
> >> >> >>

Re: [openstack-dev] [Ironic] should we have an IRC meeting next week ?

2014-05-01 Thread Ruby Loo

Hi all,

Just a reminder that May 5th is our next scheduled meeting day, but I probably 
won't make it, because I'll be just getting back from one trip and start two 
consecutive weeks of conference travel early the next morning. Chris Krelle 
(nobodycam) has offered to chair that meeting in my absence. The agenda looks 
pretty light at this point, and any serious discussions should just be punted 
to the summit anyway, so if folks want to cancel the meeting, I think that's 
fine.

Also, if there are summit or scheduling related matters that anyone needs to 
discuss with me, please use email (either direct to me, or on this list) and I 
will respond, as my IRC availability for the next ~10 days will be limited due 
to travel.

We won't have a meeting on May 12th... because we'll all be in Atlanta :)

Regards,
Devananda

+1 for cancelling.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process

2014-05-01 Thread Eugene Nikanorov
Hi,


On Thu, May 1, 2014 at 10:46 PM, Jorge Miramontes <
jorge.miramon...@rackspace.com> wrote:

>   Hey Eugene,
>
>  I think there is a misunderstanding on what iterative development means
> to you and me and I want to make sure we are on the same page. First of
> all, I'll try not to use the term "duct-taping" even though it's a widely
> used term in the industry.
>
I'm not against the term itself.
It was applied several times to existing code base, apparently without ANY
real code analysis.
That's especially clearly seen because all API proposals so far are
focusing on managing the same set of lb primitives.
Yes, the proposals introduce some new primitives; yes some attributes and
relationships differ from what is in the code.
But nothing was proposed so far that would require to completely throw away
existing code, not a single requirement.

I understand that writing something from scratch can be more convenient for
developers than studying existing code, but that's something we all have to
do when working on an open-source project.

My main concern is that implementing code on top of the current codebase to
> meet the smorgasbord of new requirements without thinking about overall
> design (since we know we will eventually want all the requirements
> satisfied at some point per your words)
>
Overall design was thought out long before we started having all these
discussions.
And things are not quick in neutron project, that's regardless of amount of
dev resources lbaas subteam may have.

is that some requirement implemented 6 months from now may change code
> architecture. Since we know we want to meet all requirements eventually,
> its makes logical sense to design for what we know we need and then figure
> out how to iteratively implement code over time.
>
That was initially done on Icehouse summit, and we just had to reiterate
the discussion for new subteam members who have joined recently.
I agree that "to design for what we know we need", but the primary option
should be to continue the existing work and analyse it to find gaps; that is
what Samuel and I were focusing on. Stephen's proposal also goes along this
idea because everything in his doc can be implemented gradually starting
from existing code.

That being said, if it makes sense to use existing code first then fine. In
> fact, I am a fan of trying to manipulate as little code as possible unless we
> absolutely have to. I just want to be a smart developer and design knowing
> I will eventually have to implement something. Not keeping things in mind
> can be dangerous.
>
I fully agree and that's well understood.


>  In short, I want to avoid having to perform multiple code refactors if
> possible and design upfront with the list of requirements the community has
> spent time fleshing out.
>
>  Also, it seems like you have some implicit developer requirements that
> I'd like written somewhere. This may ease confusion as well. For example,
> you stated "Consistency is important". A clear definition in the form of a
> developer requirement would be nice so that the community understands your
> expectations.
>
It might be a bit difficult to formalize. So you know, we're not the only
ones who will make decisions on the implementation.
There is a core team, which is mostly out of lbaas discussions right now (and
that fact will not change), who have their own views on how neutron API
should look like, what is allowed and what is not. To get a sense of it,
one really needs to contribute to neutron: push the code through 10-20-50
review iterations, see what other developers are concerned about.
Obviously we can't get everyone to our discussions, but other core dev may
(or may not, I don't know for sure) just -1 your implementation because you
go to /object1/id/object2/id/object3/id instead of the flat REST API that neutron
has, or something like that.
Then you'll probably spend another month or two trying to discuss these
issues again with other group of folks.
We don't have rigid guidelines on how the code should be written;
understanding of that comes with experience and with discussions on gerrit.


 Lastly, in relation to operator requirements I didn't see you comment on
> whether you are fan of working on an open-source driver together. Just so
> you know, operator requirements are very important for us and I honestly
> don't see how we can use any current driver without major modifications.
> This leads me to want to create a new driver with operator requirements
> being central to the design.
>
The driver itself, IMO, is the most flexible part of the system. If you
think it needs to be improved or even rewritten (once it does what user
asks it to do via API) - I'd be glad to discuss that. I think rm_work (is
that Adam Harwell?) was going to start a thread on this in ML.

Btw, is my understanding correct that you (as a cloud operator) are mostly
interested in haproxy as a backend?

Thanks,
Eugene.

Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey

2014-05-01 Thread Eichberger, German
Stephen,

I would prefer if we can vote on them, too. They are essential and I would like 
to make sure they are considered first-class citizen when it comes to use cases.

Thanks,
German

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Thursday, May 01, 2014 12:52 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey

Yep, I'm all for this as well!

Note: We're just talking about "user" use cases in this survey, correct?  
(We'll leave the operator use cases for later when we have more of a story 
and/or model to work with on how we're going to approach those, yes?)

Thanks,
Stephen

On Thu, May 1, 2014 at 11:54 AM, Jorge Miramontes 
mailto:jorge.miramon...@rackspace.com>> wrote:
That sounds good to me. The only thing I would caution is that we have 
prioritized certain requirements (like HA and SSL Termination) and I want to 
ensure we use the survey to complement what we have already mutually agreed
upon. Thanks for spearheading this!

Cheers,
--Jorge

From: Samuel Bercovici mailto:samu...@radware.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, May 1, 2014 12:39 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Neutron][LBaaS] User Stories and survey

Hi Everyone!

To assist in evaluating the use cases that matter and since we now have ~45 use 
cases, I would like to propose to conduct a survey using something like 
surveymonkey.
The idea is to have a non-anonymous survey listing the use cases and asking
you to identify and vote.
Then we will publish the results and can prioritize based on this.

To do so in a timely manner, I would like to freeze the document for editing 
and allow only comments by Monday May 5th 08:00 AM UTC and publish the survey
link to ML ASAP after that.

Please let me know if this is acceptable.

Regards,
-Sam.







--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] [oslo] preparing oslo.i18n for graduation

2014-05-01 Thread Doug Hellmann
On Thu, May 1, 2014 at 3:09 PM, Sergey Lukjanov  wrote:
> Hey Doug,
>
> it looks nice, only two questions:
>
> * why tests aren't located inside the main package (oslo/i18n/tests for ex.)?

That's the way the other oslo libs are. I frankly don't remember the reason.

> * when are you planning to make first release?

The blueprint is targeted for J1, so I would like to have a release
done by then. There are a few changes we need to make after the new
repository is imported.

Doug

>
> Thanks.
>
> On Tue, Apr 29, 2014 at 11:48 PM, Doug Hellmann
>  wrote:
>> I have exported the gettextutils code and related files to a new git
>> repository, ready to be imported as oslo.i18n. Please take a few
>> minutes to look over the files and give it a sanity check.
>>
>> https://github.com/dhellmann/oslo.i18n
>>
>> Thanks,
>> Doug
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Mirantis Inc.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey

2014-05-01 Thread Stephen Balukoff
Yep, I'm all for this as well!

Note: We're just talking about "user" use cases in this survey, correct?
 (We'll leave the operator use cases for later when we have more of a story
and/or model to work with on how we're going to approach those, yes?)

Thanks,
Stephen


On Thu, May 1, 2014 at 11:54 AM, Jorge Miramontes <
jorge.miramon...@rackspace.com> wrote:

>   That sounds good to me. The only thing I would caution is that we have
> prioritized certain requirements (like HA and SSL Termination) and I want
> to ensure we use the survey to complement what we have already mutually
> agreed upon. Thanks for spearheading this!
>
>  Cheers,
> --Jorge
>
>   From: Samuel Bercovici 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Thursday, May 1, 2014 12:39 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [Neutron][LBaaS] User Stories and survey
>
>Hi Everyone!
>
>
>
> To assist in evaluating the use cases that matter and since we now have
> ~45 use cases, I would like to propose to conduct a survey using something
> like surveymonkey.
>
> The idea is to have a non-anonymous survey listing the use cases and asking
> you to identify and vote.
>
> Then we will publish the results and can prioritize based on this.
>
>
>
> To do so in a timely manner, I would like to freeze the document for
> editing and allow only comments by Monday May 5th 08:00 AM UTC and publish
> the survey link to ML ASAP after that.
>
>
>
> Please let me know if this is acceptable.
>
>
>
> Regards,
>
> -Sam.
>
>
>
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] [climate] Client and REST API versioning

2014-05-01 Thread Sylvain Bauza
Le 1 mai 2014 19:11, "Dolph Mathews"  a écrit :
>
>
> On Thu, May 1, 2014 at 8:50 AM, Fuente, Pablo A 
wrote:
>>
>> Hi,
>> We recently implemented our V2 REST API, and at this moment we are
>> trying to get our python client working against this new version. For
>> this reason, we start a discussion about how the client will choose/set
>> the REST API version to use. BTW, we are not deprecating our V1 REST
>> API, so we need our client to still support it.
>> These are our discussion points:
>>
>> 1 - Should the URL stored in Keystone service catalog have the version?
>> In this case, our client will get the REST API URL from Keystone, parse
>> it, select the correct version of the client code and then start
>> performing requests. But if we choose this path, if a user of the client
>> decides to use the V1 REST API version using
>> --os-reservation-api-version, the client should strip the version from the
>> URL and then append the version that the user wants. The thing here is
>> that we are storing a version on a URL that we could not use in some
>> cases. In other words, the version on the URL could be overridden.
>
>
> No - avoid bloating the service catalog with redundant data.
>
>>
>>
>> 2 - Should Climate store only one URL in Keystone catalog without
>> version?
>> Here, the client will know the default version to use, appending that
>> version to the service catalog URL. When the client user requests
>> another version, the client simply appends that version to the end. The
>> cons of this option is that if someone plans to use the REST API without
>> our client needs to know about how we handle the version. Here we can
>> provide /versions in order to tell how we are handling/naming versions.
>
>
> This is by far the best option you've presented, but the client should
> also perform discovery on the endpoint as specified in the catalog, without
> trying to arbitrarily manipulate it first. If you return the versioning
> information in response to / instead of /versions then you're not forcing
> clients to have prior knowledge of an arbitrary path.

I agree with Dolph, that's the best way. If you look at the first version I
wrote for the API, it was planned to be served at the root path. I haven't
yet implemented the discovery feature in the API, but that's something
quick to do, as Pecan is the default WSGI app for root.

-Sylvain
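For what it's worth, the client-side half of that discovery flow is small. A rough sketch follows; the `versions` payload shape is borrowed from other OpenStack services and is only an assumption here, since Climate would define its own document:

```python
import json

def pick_version(discovery_json, requested=None):
    """Pick an API endpoint from a root-path version-discovery document.

    ``requested`` models a --os-reservation-api-version override; when it is
    None, prefer the version marked CURRENT.  The payload shape is an
    assumption for illustration, not Climate's actual format.
    """
    versions = json.loads(discovery_json)["versions"]
    if requested is not None:
        for v in versions:
            if v["id"] == requested:
                return v["links"][0]["href"]
        raise ValueError("version %s not supported by server" % requested)
    current = [v for v in versions if v.get("status") == "CURRENT"]
    chosen = current[0] if current else sorted(versions, key=lambda v: v["id"])[-1]
    return chosen["links"][0]["href"]

# A hypothetical response to GET / on the service endpoint.
doc = json.dumps({"versions": [
    {"id": "v1", "status": "SUPPORTED",
     "links": [{"rel": "self", "href": "http://climate/v1"}]},
    {"id": "v2", "status": "CURRENT",
     "links": [{"rel": "self", "href": "http://climate/v2"}]},
]})

print(pick_version(doc))        # http://climate/v2 (discovery default)
print(pick_version(doc, "v1"))  # http://climate/v1 (explicit override)
```

The catalog then needs only one unversioned URL per service, which is the point of option 2.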

>
>>
>>
>> 3 - Should Climate store all the REST API URLs with the version at the
>> end and use versions in service types? e.g. reservation and
>> reservationV2
>> Here the client will get the version it needs by querying the service
>> type by version. Seems that some projects do this, but to me it seems that
>> this option is similar to 2, with the con that when Climate deprecates
>> V1, the only service type will be reservationV2, which sounds weird for
>> me.
>
>
> No - a different version of a service does not represent a different type
of service. Similar to option 1, this also bloats the service catalog
unnecessarily.
>
>>
>>
>> We would like to get your feedback about this points (or new ones) in
>> order to get this implemented in the right way.
>>
>> Pablo.
>> P.S. I hope that all the options in this email reflect correctly what we
>> discussed at Climate. If not, please add/clarify/remove what you want.
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [Neutron] SSL VPN Implemenatation

2014-05-01 Thread Clint Byrum
Excerpts from Nachi Ueno's message of 2014-05-01 12:04:23 -0700:
> Ah I got it now!
> so even if an HDD gets stolen, we can keep the password safe.
> 
> However, I'm still not sure why this is more secure..
> anyway, the ID/PW to access barbican will be written in neutron.conf, right?
> 

Yes. However, you can surround the secret in policies. You'll have an
audit trail of when and where it was accessed, and you can even restrict
access, so that out of band you have to open up access with barbican.

So while the server may have access, that access is now audited and
limited by policy, instead of just being dependent on the security
measures you can take to protect a file.

> Furthermore, the ID/PW for MySQL will be written in a conf file..
> so if we can't trust unix file system protection, there is no security
> in OpenStack.

The ID/PW for mysql only grants you access to mysql for as long as that
id/pw are enabled for access. However, the encryption keys for OpenVPN
will grant any passive listener access for as long as they keep any
sniffed traffic. They'll also grant an attacker the ability to MITM
traffic between peers.

So when an encryption key has been accessed, from where, etc, is quite
a bit more crucial than knowing when a username/password combo have
been accessed.

Producing a trustworthy audit log for access to /etc/neutron/neutron.conf
is a lot harder than producing an audit log for a REST API.

So it isn't so much that file system permissions aren't enough, it is
that file system observability is expensive.

Note that at some point there was a POC to have a FUSE driver backed by
Barbican called 'Postern' I think. That would make these discussions a
lot simpler. :)

> 
> 2014-05-01 10:31 GMT-07:00 Clint Byrum :
> > I think you'd do something like this (Note that I don't know off the top
> > of my head the barbican CLI or openvpn cli switches... just
> > pseudo-code):
> >
> > oconf=$(mktemp -d /tmp/openvpnconfig.XXXXXX)
> > mount -t tmpfs -o size=1M tmpfs $oconf
> > barbican get my-secret-openvpn-conf > $oconf/foo.conf
> > openvpn --config-dir $oconf foo --daemonize
> > umount $oconf
> > rmdir $oconf
> >
> > Excerpts from Nachi Ueno's message of 2014-05-01 10:15:26 -0700:
> >> Hi Robert
> >>
> >> Thank you for your suggestion.
> >> so your suggestion is let OpenVPN process download key to memory
> >> directly from Babican?
> >>
> >> 2014-05-01 9:42 GMT-07:00 Clark, Robert Graham :
> >> > Excuse me interrupting but couldn't you treat the key as largely
> >> > ephemeral, pull it down from Barbican, start the OpenVPN process and
> >> > then purge the key?  It would of course still be resident in the memory
> >> > of the OpenVPN process but should otherwise be protected against
> >> > filesystem disk-residency issues.
> >> >
> >> >
> >> >> -Original Message-
> >> >> From: Nachi Ueno [mailto:na...@ntti3.com]
> >> >> Sent: 01 May 2014 17:36
> >> >> To: OpenStack Development Mailing List (not for usage questions)
> >> >> Subject: Re: [openstack-dev] [Neutron] SSL VPN Implemenatation
> >> >>
> >> >> Hi Jarret
> >> >>
> >> >> IMO, Zang point is the issue saving plain private key in the
> >> > filesystem for
> >> >> OpenVPN.
> >> >> Isn't this same even if we use Barbican?
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> 2014-05-01 2:56 GMT-07:00 Jarret Raim :
> >> >> > Zang mentioned that part of the issue is that the private key has to
> >> >> > be stored in the OpenVPN config file. If the config files are
> >> >> > generated and can be stored, then storing the whole config file in
> >> >> > Barbican protects the private key (and any other settings) without
> >> >> > having to try to deliver the key to the OpenVPN endpoint in some
> >> > non-
> >> >> standard way.
> >> >> >
> >> >> >
> >> >> > Jarret
> >> >> >
> >> >> > On 4/30/14, 6:08 PM, "Nachi Ueno"  wrote:
> >> >> >
> >> >> >>> Jarret
> >> >> >>
> >> >> >>Thanks!
> >> >> >>Currently, the config will be generated on demand by the agent.
> >> >> >>What's merit storing entire config in the Barbican?
> >> >> >>
> >> >> >>> Kyle
> >> >> >>Thanks!
> >> >> >>
> >> >> >>2014-04-30 7:05 GMT-07:00 Kyle Mestery
> >> >> :
> >> >> >>> On Tue, Apr 29, 2014 at 6:11 PM, Nachi Ueno 
> >> >> wrote:
> >> >>  Hi Clint
> >> >> 
> >> >>  Thank you for your suggestion. Your point get taken :)
> >> >> 
> >> >> > Kyle
> >> >>  This is also a same discussion for LBaaS Can we discuss this in
> >> >>  advanced service meeting?
> >> >> 
> >> >> >>> Yes! I think we should definitely discuss this in the advanced
> >> >> >>> services meeting today. I've added it to the agenda [1].
> >> >> >>>
> >> >> >>> Thanks,
> >> >> >>> Kyle
> >> >> >>>
> >> >> >>> [1]
> >> >> >>>https://wiki.openstack.org/wiki/Meetings/AdvancedServices#Agenda_f
> >> >> or_
> >> >> >>>next
> >> >> >>>_meeting
> >> >> >>>
> >> >> > Zang
> >> >>  Could you join the discussion?
> >> >> 
> >> >> 
> >> >> 
> >> >>  2014-04-29 15:48 GMT-07:00 Clint Byrum :
> >> >> > 

Re: [openstack-dev] [infra][neutron]SystemExit() vs sys.exit()?

2014-05-01 Thread Kevin L. Mitchell
On Thu, 2014-05-01 at 18:41 +, Paul Michali (pcm) wrote:
> So, I tried to reproduce, but I actually see the same results with
> both of these. However, they both show the issue I was hitting,
> namely, I got no information on where the failure was located:

So, this is pretty much by design.  A SystemExit extends BaseException,
rather than Exception.  The tests will catch Exception, but not
typically BaseException, as you generally want things like ^C to work
(raises a different BaseException).  So, your tests that might possibly
trigger a SystemExit (or sys.exit()) that you don't want to actually
exit from must either explicitly catch the SystemExit or—assuming the
code uses sys.exit()—must mock sys.exit() to inhibit the normal exit
behavior.

(Also, because SystemExit is the exception that is usually raised for a
normal exit condition, the traceback would not typically be printed, as
that could confuse users; no one expects a successfully executed script
to print a traceback, after all :)
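Both points can be shown in a few lines. The toy `cli_entry_point` below is invented for the example, and stdlib `unittest.mock` stands in for the external `mock` library the projects used at the time:

```python
import sys
import unittest
from unittest import mock

def cli_entry_point(argv):
    """Toy CLI main (hypothetical): exits with status 2 on a usage error."""
    if not argv:
        sys.exit(2)
    return 0

class CliTests(unittest.TestCase):
    def test_systemexit_bypasses_except_exception(self):
        # SystemExit derives from BaseException, not Exception, so the
        # blanket handler below does NOT catch it -- exactly why an
        # unguarded sys.exit() escapes a test runner.
        try:
            cli_entry_point([])
        except Exception:
            self.fail("unreachable: SystemExit is not an Exception")
        except SystemExit as exc:
            self.assertEqual(2, exc.code)

    def test_mocking_sys_exit_inhibits_the_exit(self):
        # Alternative: mock sys.exit() so the code falls through instead
        # of exiting, then assert on how it was called.
        with mock.patch.object(sys, "exit") as fake_exit:
            cli_entry_point([])
        fake_exit.assert_called_once_with(2)

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(CliTests).run(result)
print(result.wasSuccessful())  # True
```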
-- 
Kevin L. Mitchell 
Rackspace




Re: [openstack-dev] [oslo] preparing oslo.i18n for graduation

2014-05-01 Thread Sergey Lukjanov
Hey Doug,

it looks nice, only two questions:

* why tests aren't located inside the main package (oslo/i18n/tests for ex.)?
* when are you planning to make first release?

Thanks.

On Tue, Apr 29, 2014 at 11:48 PM, Doug Hellmann
 wrote:
> I have exported the gettextutils code and related files to a new git
> repository, ready to be imported as oslo.i18n. Please take a few
> minutes to look over the files and give it a sanity check.
>
> https://github.com/dhellmann/oslo.i18n
>
> Thanks,
> Doug
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.



Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-05-01 Thread Jay Pipes
On Thu, 2014-05-01 at 20:49 +0800, Jay Lau wrote:
> Jay Pipes and all, I'm planning to merge this topic to
> http://junodesignsummit.sched.org/event/77801877aa42b595f14ae8b020cd1999 
> after some discussion in this week's Gantt IRC meeting, hope it is OK.

I'll be there :)

Thanks!
-jay

> 
> 
> 2014-05-01 19:56 GMT+08:00 Day, Phil :
> > >
> > > In the original API there was a way to remove members from
> the group.
> > > This didn't make it into the code that was submitted.
> >
> > Well, it didn't make it in because it was broken. If you add
> an instance to a
> > group after it's running, a migration may need to take place
> in order to keep
> > the semantics of the group. That means that for a while the
> policy will be
> > being violated, and if we can't migrate the instance
> somewhere to satisfy the
> > policy then we need to either drop it back out, or be in
> violation. Either some
> > additional states (such as being queued for inclusion in a
> group, etc) may be
> > required, or some additional footnotes on what it means to
> be in a group
> > might have to be made.
> >
> > It was for the above reasons, IIRC, that we decided to leave
> that bit out since
> > the semantics and consequences clearly hadn't been fully
> thought-out.
> > Obviously they can be addressed, but I fear the result will
> be ... ugly. I think
> > there's a definite possibility that leaving out those
> dynamic functions will look
> > more desirable than an actual implementation.
> >
> 
> If we look at a server group as a general container of
> servers, that may have an attribute that expresses scheduling
> policy, then it doesn't seem too ugly to restrict the
> conditions on which an add is allowed to only those that don't
> break the (optional) policy. Wouldn't even have to go to
> the scheduler to work this out.
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> -- 
> Thanks,
> 
> 
> Jay
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
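Phil's check can indeed be done without a scheduler round-trip. A rough sketch against the two policies the feature shipped with follows; the function and host names are made up for illustration and ignore real-world wrinkles such as in-flight migrations:

```python
def add_allowed(policy, group_hosts, instance_host):
    """Would adding a running instance keep the group's policy intact?

    Uses only host information the API layer already has (no scheduler
    call).  ``group_hosts`` is the set of hosts current members run on.
    """
    if policy == "affinity":
        # Every member must share a host; an empty group accepts anything.
        return not group_hosts or group_hosts == {instance_host}
    if policy == "anti-affinity":
        # The new member must not land on a host the group already uses.
        return instance_host not in group_hosts
    return True  # no policy attribute: a plain container of servers

print(add_allowed("anti-affinity", {"host-a", "host-b"}, "host-c"))  # True
print(add_allowed("anti-affinity", {"host-a"}, "host-a"))            # False
print(add_allowed("affinity", {"host-a"}, "host-b"))                 # False
```

Rejecting the add up front avoids the queued/violating intermediate states Dan describes, at the cost of refusing some adds a migration could have satisfied.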





Re: [openstack-dev] [Neutron] SSL VPN Implemenatation

2014-05-01 Thread Nachi Ueno
Ah I got it now!
so even if an HDD gets stolen, we can keep the password safe.

However, I'm still not sure why this is more secure..
anyway, the ID/PW to access barbican will be written in neutron.conf, right?

Furthermore, the ID/PW for MySQL will be written in a conf file..
so if we can't trust unix file system protection, there is no security
in OpenStack.






2014-05-01 10:31 GMT-07:00 Clint Byrum :
> I think you'd do something like this (Note that I don't know off the top
> of my head the barbican CLI or openvpn cli switches... just
> pseudo-code):
>
> oconf=$(mktemp -d /tmp/openvpnconfig.XXXXXX)
> mount -t tmpfs -o size=1M tmpfs $oconf
> barbican get my-secret-openvpn-conf > $oconf/foo.conf
> openvpn --config-dir $oconf foo --daemonize
> umount $oconf
> rmdir $oconf
>
> Excerpts from Nachi Ueno's message of 2014-05-01 10:15:26 -0700:
>> Hi Robert
>>
>> Thank you for your suggestion.
>> so your suggestion is let OpenVPN process download key to memory
>> directly from Babican?
>>
>> 2014-05-01 9:42 GMT-07:00 Clark, Robert Graham :
>> > Excuse me interrupting but couldn't you treat the key as largely
>> > ephemeral, pull it down from Barbican, start the OpenVPN process and
>> > then purge the key?  It would of course still be resident in the memory
>> > of the OpenVPN process but should otherwise be protected against
>> > filesystem disk-residency issues.
>> >
>> >
>> >> -Original Message-
>> >> From: Nachi Ueno [mailto:na...@ntti3.com]
>> >> Sent: 01 May 2014 17:36
>> >> To: OpenStack Development Mailing List (not for usage questions)
>> >> Subject: Re: [openstack-dev] [Neutron] SSL VPN Implemenatation
>> >>
>> >> Hi Jarret
>> >>
>> >> IMO, Zang point is the issue saving plain private key in the
>> > filesystem for
>> >> OpenVPN.
>> >> Isn't this same even if we use Barbican?
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> 2014-05-01 2:56 GMT-07:00 Jarret Raim :
>> >> > Zang mentioned that part of the issue is that the private key has to
>> >> > be stored in the OpenVPN config file. If the config files are
>> >> > generated and can be stored, then storing the whole config file in
>> >> > Barbican protects the private key (and any other settings) without
>> >> > having to try to deliver the key to the OpenVPN endpoint in some
>> > non-
>> >> standard way.
>> >> >
>> >> >
>> >> > Jarret
>> >> >
>> >> > On 4/30/14, 6:08 PM, "Nachi Ueno"  wrote:
>> >> >
>> >> >>> Jarret
>> >> >>
>> >> >>Thanks!
>> >> >>Currently, the config will be generated on demand by the agent.
>> >> >>What's merit storing entire config in the Barbican?
>> >> >>
>> >> >>> Kyle
>> >> >>Thanks!
>> >> >>
>> >> >>2014-04-30 7:05 GMT-07:00 Kyle Mestery
>> >> :
>> >> >>> On Tue, Apr 29, 2014 at 6:11 PM, Nachi Ueno 
>> >> wrote:
>> >>  Hi Clint
>> >> 
>> >>  Thank you for your suggestion. Your point get taken :)
>> >> 
>> >> > Kyle
>> >>  This is also a same discussion for LBaaS Can we discuss this in
>> >>  advanced service meeting?
>> >> 
>> >> >>> Yes! I think we should definitely discuss this in the advanced
>> >> >>> services meeting today. I've added it to the agenda [1].
>> >> >>>
>> >> >>> Thanks,
>> >> >>> Kyle
>> >> >>>
>> >> >>> [1]
>> >> >>>https://wiki.openstack.org/wiki/Meetings/AdvancedServices#Agenda_f
>> >> or_
>> >> >>>next
>> >> >>>_meeting
>> >> >>>
>> >> > Zang
>> >>  Could you join the discussion?
>> >> 
>> >> 
>> >> 
>> >>  2014-04-29 15:48 GMT-07:00 Clint Byrum :
>> >> > Excerpts from Nachi Ueno's message of 2014-04-29 10:58:53 -0700:
>> >> >> Hi Kyle
>> >> >>
>> >> >> 2014-04-29 10:52 GMT-07:00 Kyle Mestery
>> >> :
>> >> >> > On Tue, Apr 29, 2014 at 12:42 PM, Nachi Ueno
>> >> 
>> >> >>wrote:
>> >> >> >> Hi Zang
>> >> >> >>
>> >> >> >> Thank you for your contribution on this!
>> >> >> >> The private key management is what I want to discuss in the
>> >> >>summit.
>> >> >> >>
>> >> >> > Has the idea of using Barbican been discussed before? There
>> > are
>> >> >>many
>> >> >> > reasons why using Barbican for this may be better than
>> >> >> > developing
>> >> >>key
>> >> >> > management ourselves.
>> >> >>
>> >> >> No, however I'm +1 for using Barbican. Let's discuss this in
>> >> >> certificate management topic in advanced service session.
>> >> >>
>> >> >
>> >> > Just a suggestion: Don't defer that until the summit. Sounds
>> > like
>> >> >you've  already got some consensus, so you don't need the summit
>> >> >just to rubber  stamp it. I suggest discussing as much as you can
>> >> >right now on the mailing  list, and using the time at the summit
>> > to
>> >> >resolve any complicated issues  including any "a or b" things
>> > that
>> >> >need crowd-sourced idea making. You  can also use the summit time
>> >> >to communicate your requirements to the  Barbican developers.
>> >> >
>> >> > Point is: just because you'll have face time, doesn't mean you
>> >> 

Re: [openstack-dev] Where to report bugs on oslo.config?

2014-05-01 Thread Ben Nemec

On 05/01/2014 12:11 PM, Thomas Goirand wrote:

Hi,

I've searched launchpad, and didn't find out. This didn't work:
https://launchpad.net/oslo.config/+bugs

Should I report bugs at:
https://launchpad.net/oslo/+bugs


Yes.  All of the Oslo projects are tracked under the same LP project 
because Launchpad doesn't have very good support for tracking multiple 
related projects, so it would be next to impossible to keep track of 
them if they were all separate.


-Ben



???

Anyway, the bug is this:

==
FAIL: tests.test_cfg.CliSpecialOptsTestCase.test_version
tests.test_cfg.CliSpecialOptsTestCase.test_version
--
_StringException: Empty attachments:
   stderr

stdout: {{{1.0}}}

Traceback (most recent call last):
  File "/home/zigo/sources/openstack/icehouse/oslo-config/build-area/oslo-config-1.3.0/tests/test_cfg.py", line 484, in test_version
    self.assertTrue('1.0' in sys.stderr.getvalue())
  File "/usr/lib/python3.4/unittest/case.py", line 651, in assertTrue
    raise self.failureException(msg)
AssertionError: False is not true

if running with Python 3.4.

Thomas Goirand (zigo)
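For what it's worth, the `stdout: {{{1.0}}}` attachment above hints at the likely cause: in Python 3.4, argparse started printing `--version` output to stdout, where earlier versions wrote it to stderr, so an assertion pinned to `sys.stderr` fails. A stream-agnostic check can be sketched as follows (the `oslo-demo` prog name is invented; this is not oslo.config's actual test):

```python
import argparse
import contextlib
import io

def captured_version(version="1.0"):
    """Run ``--version`` and capture both streams, since argparse moved
    the output from stderr to stdout in Python 3.4."""
    parser = argparse.ArgumentParser(prog="oslo-demo")
    parser.add_argument("--version", action="version", version=version)
    out, err = io.StringIO(), io.StringIO()
    with contextlib.redirect_stdout(out), contextlib.redirect_stderr(err):
        try:
            parser.parse_args(["--version"])
        except SystemExit:
            pass  # the version action exits after printing
    return out.getvalue() + err.getvalue()

print("1.0" in captured_version())  # True under either stream behavior
```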







Re: [openstack-dev] [infra][neutron]SystemExit() vs sys.exit()?

2014-05-01 Thread Yuriy Taraday
On Thu, May 1, 2014 at 10:41 PM, Paul Michali (pcm)  wrote:

> ==
> FAIL: process-returncode
> tags: worker-1
> --
> *Binary content:*
> *  traceback (test/plain; charset="utf8")*
> ==
> FAIL: process-returncode
> tags: worker-0
> --
> *Binary content:*
> *  traceback (test/plain; charset="utf8")*
>

process-returncode failures mean that the child process (the subunit one)
exited with a nonzero code.


> It looks like there was some traceback, but it doesn’t show it. Any ideas
> how to get around this, as it makes it hard to troubleshoot these types of
> failures?
>

Somehow traceback got MIME type "test/plain". I guess, testr doesn't push
this type of attachments to the screen. You can try to see what's there in
.testrepository dir but I doubt there will be anything useful there.

I think this behavior is expected. Subunit process gets terminated because
of uncaught SystemExit exception and testr reports that as an error.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process

2014-05-01 Thread Adam Harwell
With regard to the last paragraph/sentence about a new driver, I am writing a 
lengthy analysis of that specific topic currently — hopefully we will be able 
to start an in-depth discussion on that later today.

--Adam

From: Jorge Miramontes 
mailto:jorge.miramon...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, May 1, 2014 1:46 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process

Hey Eugene,

I think there is a misunderstanding on what iterative development means to you 
and me and I want to make sure we are on the same page. First of all, I'll try 
not to use the term "duct-taping" even though it's a widely used term in the 
industry. My main concern with implementing code on top of the current 
codebase to meet the smorgasbord of new requirements without thinking about 
overall design (since, per your words, we know we will eventually want all the 
requirements satisfied at some point) is that some requirement implemented 6 
months from now may change the code architecture. Since we know we want to meet 
all requirements eventually, it makes logical sense to design for what we know 
we need and then figure out how to implement the code iteratively over time. That 
being said, if it makes sense to use existing code first then fine. In fact, I 
am a fan of trying to manipulate as little code as possible unless we absolutely 
have to. I just want to be a smart developer and design knowing I will 
eventually have to implement something. Not keeping things in mind can be 
dangerous. In short, I want to avoid having to perform multiple code refactors 
if possible and design upfront with the list of requirements the community has 
spent time fleshing out.

Also, it seems like you have some implicit developer requirements that I'd like 
written somewhere. This may ease confusion as well. For example, you stated 
"Consistency is important". A clear definition in the form of a developer 
requirement would be nice so that the community understands your expectations.

Lastly, in relation to operator requirements I didn't see you comment on 
whether you are a fan of working on an open-source driver together. Just so you 
know, operator requirements are very important for us and I honestly don't see 
how we can use any current driver without major modifications. This leads me to 
want to create a new driver with operator requirements being central to the 
design.

Cheers,
--Jorge

From: Eugene Nikanorov mailto:enikano...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, May 1, 2014 8:12 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process

Hi Jorge,

A couple of inline comments:

Now that we have a set of requirements the next question to ask is, "How
do we prioritize requirements so that we can start designing and
implementing them"?
Prioritization basically means that we want to support everything and only 
choose what is
more important right now and what is less important and can be implemented 
later.

Assuming requirements are prioritized (which as of today we have a pretty
good idea of these priorities) the next step is to design before laying
down any actual code.
That's true. I would only like to note that there actually was a road map, and 
requirements with a design, before the code was written; that applies both to 
the features that are already implemented and to those which are now hanging 
in limbo.

I agree with Samuel that pushing the cart before the
horse is a bad idea in this case (and it usually is the case in software
development), especially since we have a pretty clear idea on what we need
to be designing for. I understand that the current code base has been
worked on by many individuals and the work done thus far is the reason why
so many new faces are getting involved. However, we now have a completely
updated set of requirements that the community has put together and trying
to fit the requirements to existing code may or may not work.

In my experience, I would argue that 99% of the time duct-taping existing code
I really don't like the term "duct-taping" here.
Here's the problem: you will never be able to implement everything at once; 
you have to do it incrementally.
That's how the ecosystem works.
Each step can then be considered 'duct-taping' because each state you reach 
does not account for everything that was planned.
And for sure, there will be design mistakes that need to be fixed.
In the end there will be another cloud provider with another set of 
requirements...

So in order to deal with that in a productive way there are 

Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey

2014-05-01 Thread Jorge Miramontes
That sounds good to me. The only thing I would caution is that we have 
prioritized certain requirements (like HA and SSL Termination), and I want to 
ensure we use the survey to complement what we have already mutually agreed 
upon. Thanks for spearheading this!

Cheers,
--Jorge

From: Samuel Bercovici mailto:samu...@radware.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, May 1, 2014 12:39 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Neutron][LBaaS] User Stories and survey

Hi Everyone!

To assist in evaluating the use cases that matter and since we now have ~45 use 
cases, I would like to propose to conduct a survey using something like 
surveymonkey.
The idea is to have a non-anonymous survey listing the use cases and ask you 
to identify and vote on them.
Then we will publish the results and can prioritize based on this.

To do so in a timely manner, I would like to freeze the document for editing 
and allow only comments by Monday, May 5th, 08:00 UTC, and publish the survey 
link to the ML ASAP after that.

Please let me know if this is acceptable.

Regards,
-Sam.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process

2014-05-01 Thread Jorge Miramontes
Hey Eugene,

I think there is a misunderstanding on what iterative development means to you 
and me and I want to make sure we are on the same page. First of all, I'll try 
not to use the term "duct-taping" even though it's a widely used term in the 
industry. My main concern with implementing code on top of the current 
codebase to meet the smorgasbord of new requirements without thinking about 
overall design (since, per your words, we know we will eventually want all the 
requirements satisfied at some point) is that some requirement implemented 6 
months from now may change the code architecture. Since we know we want to meet 
all requirements eventually, it makes logical sense to design for what we know 
we need and then figure out how to implement the code iteratively over time. That 
being said, if it makes sense to use existing code first then fine. In fact, I 
am a fan of trying to manipulate as little code as possible unless we absolutely 
have to. I just want to be a smart developer and design knowing I will 
eventually have to implement something. Not keeping things in mind can be 
dangerous. In short, I want to avoid having to perform multiple code refactors 
if possible and design upfront with the list of requirements the community has 
spent time fleshing out.

Also, it seems like you have some implicit developer requirements that I'd like 
written somewhere. This may ease confusion as well. For example, you stated 
"Consistency is important". A clear definition in the form of a developer 
requirement would be nice so that the community understands your expectations.

Lastly, in relation to operator requirements I didn't see you comment on 
whether you are a fan of working on an open-source driver together. Just so you 
know, operator requirements are very important for us and I honestly don't see 
how we can use any current driver without major modifications. This leads me to 
want to create a new driver with operator requirements being central to the 
design.

Cheers,
--Jorge

From: Eugene Nikanorov mailto:enikano...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, May 1, 2014 8:12 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process

Hi Jorge,

A couple of inline comments:

Now that we have a set of requirements the next question to ask is, "How
do we prioritize requirements so that we can start designing and
implementing them"?
Prioritization basically means that we want to support everything and only 
choose what is
more important right now and what is less important and can be implemented 
later.

Assuming requirements are prioritized (which as of today we have a pretty
good idea of these priorities) the next step is to design before laying
down any actual code.
That's true. I would only like to note that there actually was a road map, and 
requirements with a design, before the code was written; that applies both to 
the features that are already implemented and to those which are now hanging 
in limbo.

I agree with Samuel that pushing the cart before the
horse is a bad idea in this case (and it usually is the case in software
development), especially since we have a pretty clear idea on what we need
to be designing for. I understand that the current code base has been
worked on by many individuals and the work done thus far is the reason why
so many new faces are getting involved. However, we now have a completely
updated set of requirements that the community has put together and trying
to fit the requirements to existing code may or may not work.

In my experience, I would argue that 99% of the time duct-taping existing code
I really don't like the term "duct-taping" here.
Here's the problem: you will never be able to implement everything at once; 
you have to do it incrementally.
That's how the ecosystem works.
Each step can then be considered 'duct-taping' because each state you reach 
does not account for everything that was planned.
And for sure, there will be design mistakes that need to be fixed.
In the end there will be another cloud provider with another set of 
requirements...

So in order to deal with that in a productive way there are a few guidelines:
1) Follow the style of the ecosystem. Consistency is important. Keeping the 
style helps developers, reviewers, and users of the product.
2) Preserve backward compatibility whenever possible.
That's a very important point, which can however be 'relaxed' if the existing 
code base is completely unable to evolve to support the new requirements.

to fit in new requirements results in buggy software. That being said, I
usually don't like to rebuild a project from scratch. If I can I try to
refactor as much as possible first. However, in this case we have a
particular set of requirements that changes the game. P

Re: [openstack-dev] [infra][neutron]SystemExit() vs sys.exit()?

2014-05-01 Thread Paul Michali (pcm)
So, I tried to reproduce, but I actually see the same results with both of 
these. However, they both show the issue I was hitting, namely, I got no 
information on where the failure was located:

root@devstack-32:/opt/stack/neutron# tox -e py27 -v -- 
neutron.tests.unit.pcm.test_pcm
using tox.ini: /opt/stack/neutron/tox.ini
using tox-1.6.1 from /usr/local/lib/python2.7/dist-packages/tox/__init__.pyc
py27 reusing: /opt/stack/neutron/.tox/py27
  /opt/stack/neutron$ /opt/stack/neutron/.tox/py27/bin/python 
/opt/stack/neutron/setup.py --name
py27 develop-inst-nodeps: /opt/stack/neutron
  /opt/stack/neutron$ /opt/stack/neutron/.tox/py27/bin/pip install -U -e 
/opt/stack/neutron --no-deps >/opt/stack/neutron/.tox/py27/log/py27-163.log
py27 runtests: commands[0] | python -m neutron.openstack.common.lockutils 
python setup.py testr --slowest --testr-args=neutron.tests.unit.pcm.test_pcm
  /opt/stack/neutron$ /opt/stack/neutron/.tox/py27/bin/python -m 
neutron.openstack.common.lockutils python setup.py testr --slowest 
--testr-args=neutron.tests.unit.pcm.test_pcm
running testr
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./neutron/tests/unit} --list
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpkYugPE
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpPoDsN3
ouch
==
FAIL: process-returncode
tags: worker-1
--
Binary content:
  traceback (test/plain; charset="utf8")
==
FAIL: process-returncode
tags: worker-0
--
Binary content:
  traceback (test/plain; charset="utf8")
Ran 4 (+4) tests in 0.348s
FAILED (id=150, failures=2 (+2))
error: testr failed (1)
ERROR: InvocationError: '/opt/stack/neutron/.tox/py27/bin/python -m 
neutron.openstack.common.lockutils python setup.py testr --slowest 
--testr-args=neutron.tests.unit.pcm.test_pcm'
__
 summary 
___
ERROR:   py27: commands failed

It looks like there was some traceback, but it doesn’t show it. Any ideas how 
to get around this, as it makes it hard to troubleshoot these types of failures?

Here is the code:

# Copyright 2014 Cisco Systems, Inc.  All rights reserved.
#
#Licensed under the Apache License, Version 2.0 (the "License"); you may
#not use this file except in compliance with the License. You may obtain
#a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#License for the specific language governing permissions and limitations
#under the License.
#
# @author: Paul Michali, Cisco Systems, Inc.

import sys

from neutron.tests import base


def using_sys_exit():
    sys.exit(1)


def using_SystemExit():
    raise SystemExit("ouch")


class TestSystemExit(base.BaseTestCase):

    def test_using_sys_exit(self):
        # sys.exit(1) raises SystemExit, which escapes the test.
        self.assertIsNone(using_sys_exit())

    def test_using_SystemExit(self):
        # Raising SystemExit directly has the same effect.
        self.assertIsNone(using_SystemExit())


Regards,


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On May 1, 2014, at 1:23 PM, Yuriy Taraday 
mailto:yorik@gmail.com>> wrote:

On Thu, May 1, 2014 at 8:17 PM, Salvatore Orlando 
mailto:sorla...@nicira.com>> wrote:
The patch you've been looking at just changes the way in which SystemExit is 
used, it does not replace it with sys.exit.
In my experience sys.exit was causing unit test threads to interrupt abruptly, 
whereas SystemExit was being caught by the test runner and handled.

According to https://docs.python.org/2.7/library/sys.html#sys.exit , 
sys.exit(n) is an equivalent for raise SystemExit(n), it can be confirmed in 
the source code here: 
http://hg.python.org/cpython/file/2.7/Python/sysmodule.c#l206
If there's any difference in behavior, it seems to be a problem with the test 
runner. For example, it could be mocking sys.exit somehow.
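The equivalence (and the mocking possibility mentioned above) can be checked directly; a small illustration:

```python
import sys
from unittest import mock

# sys.exit(n) is just sugar for `raise SystemExit(n)`.
caught = None
try:
    sys.exit(3)
except SystemExit as exc:
    caught = exc.code

# A runner or fixture could mock sys.exit so nothing is raised at all,
# which would make the two spellings behave differently under test.
with mock.patch.object(sys, 'exit') as fake_exit:
    sys.exit(1)  # no exception: the mock absorbs the call
fake_exit.assert_called_once_with(1)
```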

I find therefore a bit strange that you're reporting what appears to be the 
opposite behaviour.

Maybe if you could share the code you're working on we

Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

2014-05-01 Thread Sean Dague
On 05/01/2014 01:30 PM, David Kranz wrote:
> On 05/01/2014 11:36 AM, Matthew Treinish wrote:
>> On Thu, May 01, 2014 at 06:18:10PM +0900, Ken'ichi Ohmichi wrote:
>>> # Sorry for sending this again, previous mail was unreadable.
>>>
>>> 2014-04-28 11:54 GMT+09:00 Ken'ichi Ohmichi :
> This is also why there are a bunch of nova v2 extensions that just add
> properties to an existing API. I think in v3 the proposal was to do
> this with
> microversioning of the plugins. (we don't have a way to configure
> microversioned v3 api plugins in tempest yet, but we can cross that
> bridge when
> the time comes) Either way it will allow tempest to have in config
> which
> behavior to expect.
 Good point, my current understanding is:
 When adding new API parameters to the existing APIs, these
 parameters should
 be API extensions according to the above guidelines. So we have
 three options
 for handling API extensions in Tempest:

 1. Consider them as optional, and cannot block the incompatible
 changes of them. (Current)
 2. Consider them as required based on tempest.conf, and can block the
 incompatible changes.
 3. Consider them as required automatically with microversioning, and
 can block the incompatible changes.
>>> I investigated option 3 above, and have one question
>>> about the current Tempest implementation.
>>>
>>> Now the verify_tempest_config tool gets the API extension list from each
>>> service, including Nova, and verifies the API extension config of
>>> tempest.conf against the list.
>>> Can we use the list for selecting which extension tests run, instead of
>>> for verification?
>>> As you said in the previous IRC meeting, current API tests will be
>>> skipped if a test is decorated with requires_ext() and the
>>> extension is not specified in tempest.conf. I feel it would be nice
>>> if Tempest got the API extension list and selected API tests automatically
>>> based on the list.
>> So we used to do this type of autodiscovery in tempest, but we stopped
>> because
>> it let bugs slip through the gate. This topic has come up several
>> times in the
>> past, most recently in discussing reorganizing the config file. [1]
>> This is why
>> we put [2] in the tempest README. I agree autodiscovery would be
>> simpler, but the problem is that, because we use tempest as the gate,
>> if a bug caused autodiscovery to be different from what was expected,
>> the tests would just silently skip. This would often go unnoticed
>> because of the sheer volume of tempest tests. (I think we're currently
>> at ~2300.) I also feel that explicitly defining what is expected to be
>> enabled is a key requirement for branchless tempest for the same reason.
> 
>>
>> The verify_tempest_config tool was an attempt at a compromise between
>> being
>> explicit and also using auto discovery. By using the APIs to help
>> create a
>> config file that reflected the current configuration state of the
>> services. It's
>> still a WIP though, and it's really just meant to be a user tool. I
>> don't ever
>> see it being included in our gate workflow.
> I think we have to accept that there are two legitimate use cases for
> tempest configuration:
> 
> 1. The entity configuring tempest is the same as the entity that
> deployed. This is the gate case.
>
> 2. Tempest is to be pointed at an existing cloud but was not part of a
> deployment process. We want to run the tests for the supported
> services/extensions.
> 
> We should modularize the code around discovery so that the discovery
> functions return the changes to conf that would have to be made. The
> callers can then decide how that information is to be used. This would
> support both use cases. I have some changes to the verify_tempest_config
> code that does this which I will push up if the concept is agreed.

Discovery is a separate thing from testing. We've seen the discovery
issue go wrong and return "everything passed" when *much less* than you
expected to be run was run.

Matt's got some tooling to build some of that out of band, which is good.

But this is a part where adding friction here is good user experience.
Because user experience is not only about making it easy to run things
correctly, but making it *hard* to run them incorrectly.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Prototype code for discussion for BP l3-svcs-vendor-validation

2014-05-01 Thread Paul Michali (pcm)
Please take a gander and let me know your thoughts!

Prototype: https://review.openstack.org/#/c/91437/

Blueprint: 
https://blueprints.launchpad.net/neutron/+spec/l3-svcs-vendor-validation
Spec: https://review.openstack.org/#/c/88406/


Thanks!

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

2014-05-01 Thread David Kranz

On 05/01/2014 11:36 AM, Matthew Treinish wrote:

On Thu, May 01, 2014 at 06:18:10PM +0900, Ken'ichi Ohmichi wrote:

# Sorry for sending this again, previous mail was unreadable.

2014-04-28 11:54 GMT+09:00 Ken'ichi Ohmichi :

This is also why there are a bunch of nova v2 extensions that just add
properties to an existing API. I think in v3 the proposal was to do this with
microversioning of the plugins. (we don't have a way to configure
microversioned v3 api plugins in tempest yet, but we can cross that bridge when
the time comes) Either way it will allow tempest to have in config which
behavior to expect.

Good point, my current understanding is:
When adding new API parameters to the existing APIs, these parameters should
be API extensions according to the above guidelines. So we have three options
for handling API extensions in Tempest:

1. Consider them as optional, and cannot block the incompatible
changes of them. (Current)
2. Consider them as required based on tempest.conf, and can block the
incompatible changes.
3. Consider them as required automatically with microversioning, and
can block the incompatible changes.

I investigated option 3 above, and have one question
about the current Tempest implementation.

Now the verify_tempest_config tool gets the API extension list from each
service, including Nova, and verifies the API extension config of tempest.conf
against the list.
Can we use the list for selecting which extension tests run, instead of
for verification?
As you said in the previous IRC meeting, current API tests will be
skipped if a test is decorated with requires_ext() and the
extension is not specified in tempest.conf. I feel it would be nice
if Tempest got the API extension list and selected API tests automatically
based on the list.

So we used to do this type of autodiscovery in tempest, but we stopped because
it let bugs slip through the gate. This topic has come up several times in the
past, most recently in discussing reorganizing the config file. [1] This is why
we put [2] in the tempest README. I agree autodiscovery would be simpler, but
the problem is that, because we use tempest as the gate, if a bug caused
autodiscovery to be different from what was expected, the tests would just
silently skip. This would often go unnoticed because of the sheer volume of
tempest tests. (I think we're currently at ~2300.) I also feel that explicitly
defining what is expected to be enabled is a key requirement for branchless
tempest for the same reason.




The verify_tempest_config tool was an attempt at a compromise between being
explicit and also using auto discovery. By using the APIs to help create a
config file that reflected the current configuration state of the services. It's
still a WIP though, and it's really just meant to be a user tool. I don't ever
see it being included in our gate workflow.
I think we have to accept that there are two legitimate use cases for 
tempest configuration:


1. The entity configuring tempest is the same as the entity that 
deployed. This is the gate case.
2. Tempest is to be pointed at an existing cloud but was not part of a 
deployment process. We want to run the tests for the supported 
services/extensions.


We should modularize the code around discovery so that the discovery 
functions return the changes to conf that would have to be made. The 
callers can then decide how that information is to be used. This would 
support both use cases. I have some changes to the verify_tempest_config 
code that does this which I will push up if the concept is agreed.
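As a sketch of that split (the names are illustrative, not actual Tempest code): the discovery function only computes the config diff, and each caller decides whether a diff is an error (gate case) or something to apply (existing-cloud case):

```python
def discover_extension_changes(available, configured):
    """Return the conf changes discovery would make; touch nothing."""
    changes = {}
    if sorted(available) != sorted(configured):
        changes['api_extensions'] = sorted(available)
    return changes


def verify_config(available, configured):
    # Gate case: a deployer wrote the config, so any diff is a failure.
    return discover_extension_changes(available, configured) == {}


def autoconfigure(conf, available):
    # Existing-cloud case: accept whatever the cloud reports.
    conf.update(discover_extension_changes(
        available, conf.get('api_extensions', [])))
    return conf
```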


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert implementation for LBaaS and VPN

2014-05-01 Thread Samuel Bercovici
Hi Vijay,

I have looked at the Barbican APIs – 
https://github.com/cloudkeep/barbican/wiki/Application-Programming-Interface
I was not able to see a “native” API that will accept an SSL certificate 
(private key, public key, CSR, etc.) and store it.
We can either store the whole certificate as a single file as a secret or use a 
container and store all the certificate parts as secrets.

I think that having LBaaS reference certificates as IDs via some service is 
the right way to go, so this might be achieved by one of the following:

1. Adding to Barbican an API to store / generate certificates.

2. Creating a new “module”, perhaps initially hosted in Neutron or Keystone, 
that will allow managing certificates and will use Barbican behind the scenes 
to store them.

3. Deciding on a container structure to use in Barbican, but implementing the 
way to access and arrange it as a Neutron library.

Was any decision made on how to proceed?

Regards,
-Sam.




From: Vijay B [mailto:os.v...@gmail.com]
Sent: Wednesday, April 30, 2014 3:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron] [LBaaS][VPN] SSL cert implementation for 
LBaaS and VPN

Hi,

It looks like there are areas of overlap among multiple efforts proceeding in 
parallel to implement SSL for LBaaS as well as SSL VPN in Neutron.

Two relevant efforts are listed below:


https://review.openstack.org/#/c/74031/   
(https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL)

https://review.openstack.org/#/c/58897/   
(https://blueprints.launchpad.net/openstack/?searchtext=neutron-ssl-vpn)



Both VPN and LBaaS will use SSL certificates and keys, and this makes it better 
to implement SSL entities as first class citizens in the OS world. So, three 
points need to be discussed here:

1. The VPN SSL implementation above is putting the SSL cert content in a 
mapping table, instead of maintaining certs separately and referencing them 
using IDs. The LBaaS implementation stores certificates in a separate table, 
but implements the necessary extensions and logic under LBaaS. We propose that 
both these implementations move away from this and refer to SSL entities using 
IDs, and that the SSL entities themselves are implemented as their own 
resources, serviced either by a core plugin or a new SSL plugin (assuming 
neutron; please also see point 3 below).

2. The actual data store where the certs and keys are stored should be 
configurable at least globally, such that the SSL plugin code will singularly 
refer to that store alone when working with the SSL entities. The data store 
candidates currently are Barbican and a sql db. Each should have a separate 
backend driver, along with the required config values. If further evaluation of 
Barbican shows that it fits all SSL needs, we should make it a priority over a 
sqldb driver.
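Point 2 might look roughly like the following; the class and option names are purely illustrative assumptions, not an existing Neutron or Barbican API:

```python
import abc


class CertStoreDriver(abc.ABC):
    """Backend-agnostic interface all SSL entities go through."""

    @abc.abstractmethod
    def store(self, cert_id, private_key, certificate):
        ...

    @abc.abstractmethod
    def get(self, cert_id):
        ...


class InMemoryDriver(CertStoreDriver):
    """Stand-in for a real Barbican- or SQL-backed driver."""

    def __init__(self):
        self._certs = {}

    def store(self, cert_id, private_key, certificate):
        self._certs[cert_id] = (private_key, certificate)

    def get(self, cert_id):
        return self._certs[cert_id]


def load_driver(conf):
    # A single global config option picks the backend for the whole
    # deployment, as proposed above.
    drivers = {'memory': InMemoryDriver}
    return drivers[conf['cert_store_backend']]()
```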

3. Where should the primary entries for the SSL entities be stored? While the 
actual certs themselves will reside on Barbican or SQLdb, the entities 
themselves are currently being implemented in Neutron since they are being 
used/referenced there. However, we feel that implementing them in keystone 
would be most appropriate. We could also follow a federated model where a 
subset of keys can reside on another service such as Neutron. We are fine with 
starting an initial implementation in neutron, in a modular manner, and move it 
later to keystone.


Please provide your inputs on this.


Thanks,
Regards,
Vijay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Objects not getting distributed across the swift cluster...

2014-05-01 Thread John Dickinson

On May 1, 2014, at 10:32 AM, Shyam Prasad N  wrote:

> Hi Chuck, 
> Thanks for the reply.
> 
> The reason for such weight distribution seems to have to do with the ring 
> rebalance command. I've scripted the disk addition (and rebalance) process 
> for the ring using a wrapper command. When I trigger the rebalance after 
> each disk addition, only the first rebalance seems to take effect.
> 
> Is there any other way to adjust the weights other than rebalance? Or is 
> there a way to force a rebalance, even if the frequency of the rebalance (as 
> a part of disk addition) is under an hour (the min_part_hours value in ring 
> creation)?

Rebalancing only moves one replica at a time to ensure that your data remains 
available, even if you have a hardware failure while you are adding capacity. 
This is why it may take multiple rebalances to get everything evenly balanced.

The min_part_hours setting (perhaps poorly named) should match how long a 
replication pass takes in your cluster. You can understand this because of what 
I said above. By ensuring that replication has completed before putting another 
partition "in flight", Swift can ensure that you keep your data highly 
available.

For completeness to answer your question, there is an (intentionally) 
undocumented option in swift-ring-builder called 
"pretend_min_part_hours_passed", but it should ALMOST NEVER be used in a 
production cluster, unless you really, really know what you are doing. Using 
that option will very likely cause service interruptions to your users. The 
better option is to correctly set the min_part_hours value to match your 
replication pass time (with set_min_part_hours), and then wait for swift to 
move things around.

Here's some more info on how and why to add capacity to a running Swift 
cluster: https://swiftstack.com/blog/2012/04/09/swift-capacity-management/

--John





> On May 1, 2014 9:00 PM, "Chuck Thier"  wrote:
> Hi Shyam,
> 
> If I am reading your ring output correctly, it looks like only the devices in 
> node .202 have a weight set, and thus why all of your objects are going to 
> that one node.  You can update the weight of the other devices, and 
> rebalance, and things should get distributed correctly.
> 
> --
> Chuck
> 
> 
> On Thu, May 1, 2014 at 5:28 AM, Shyam Prasad N  wrote:
> Hi,
> 
> I created a swift cluster and configured the rings like this...
> 
> swift-ring-builder object.builder create 10 3 1
> 
> ubuntu-202:/etc/swift$ swift-ring-builder object.builder 
> object.builder, build version 12
> 1024 partitions, 3.00 replicas, 1 regions, 4 zones, 12 devices, 300.00 balance
> The minimum number of hours before a partition can be reassigned is 1
> Devices:  id  region  zone  ip address  port  replication ip  replication port  name  weight  partitions  balance  meta
>            0       1     1  10.3.0.202  6010      10.3.0.202              6010  xvdb    1.00        1024   300.00
>            1       1     1  10.3.0.202  6020      10.3.0.202              6020  xvdc    1.00        1024   300.00
>            2       1     1  10.3.0.202  6030      10.3.0.202              6030  xvde    1.00        1024   300.00
>            3       1     2  10.3.0.212  6010      10.3.0.212              6010  xvdb    1.00           0  -100.00
>            4       1     2  10.3.0.212  6020      10.3.0.212              6020  xvdc    1.00           0  -100.00
>            5       1     2  10.3.0.212  6030      10.3.0.212              6030  xvde    1.00           0  -100.00
>            6       1     3  10.3.0.222  6010      10.3.0.222              6010  xvdb    1.00           0  -100.00
>            7       1     3  10.3.0.222  6020      10.3.0.222              6020  xvdc    1.00           0  -100.00
>            8       1     3  10.3.0.222  6030      10.3.0.222              6030  xvde    1.00           0  -100.00
>            9       1     4  10.3.0.232  6010      10.3.0.232              6010  xvdb    1.00           0  -100.00
>           10       1     4  10.3.0.232  6020      10.3.0.232              6020  xvdc    1.00           0  -100.00
>           11       1     4  10.3.0.232  6030      10.3.0.232              6030  xvde    1.00           0  -100.00
> 
> Container and account rings have a similar configuration.
> Once the rings were created and all the disks were added to the rings like 
> above, I ran rebalance on each ring. (I ran rebalance after adding each of 
> the node above.)
> Then I immediately scp the rings to all other nodes in the cluster.
> 
> I now observe that the objects are all going to 10.3.0.202. I don't see the 
> objects being replicated to the other nodes. So much so that 202 is 
> approaching 100% disk usage, while other nodes are almost completely empty.
> What am I doing wrong? Am I not supposed to run rebalance operation after 
> addition of each disk/node?
> 

[openstack-dev] [Neutron][LBaaS] User Stories and survey

2014-05-01 Thread Samuel Bercovici
Hi Everyone!

To assist in evaluating the use cases that matter and since we now have ~45 use 
cases, I would like to propose to conduct a survey using something like 
surveymonkey.
The idea is to have a non-anonymous survey listing the use cases and ask you 
to identify and vote.
Then we will publish the results and can prioritize based on this.

To do so in a timely manner, I would like to freeze the document for editing 
and allow only comments by Monday May 5th 08:00AMUTC and publish the survey 
link to ML ASAP after that.

Please let me know if this is acceptable.

Regards,
-Sam.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] SSL VPN Implementation

2014-05-01 Thread Clint Byrum
I think you'd do something like this (Note that I don't know off the top
of my head the barbican CLI or openvpn cli switches... just
pseudo-code):

oconf=$(mktemp -d /tmp/openvpnconfig.XX)
mount -t tmpfs -o size=1M tmpfs $oconf
barbican get my-secret-openvpn-conf > $oconf/foo.conf
openvpn --config $oconf/foo.conf --daemon
umount $oconf
rmdir $oconf

Excerpts from Nachi Ueno's message of 2014-05-01 10:15:26 -0700:
> Hi Robert
> 
> Thank you for your suggestion.
> So your suggestion is to let the OpenVPN process download the key into memory
> directly from Barbican?
> 
> 2014-05-01 9:42 GMT-07:00 Clark, Robert Graham :
> > Excuse me interrupting but couldn't you treat the key as largely
> > ephemeral, pull it down from Barbican, start the OpenVPN process and
> > then purge the key?  It would of course still be resident in the memory
> > of the OpenVPN process but should otherwise be protected against
> > filesystem disk-residency issues.
> >
> >
> >> -Original Message-
> >> From: Nachi Ueno [mailto:na...@ntti3.com]
> >> Sent: 01 May 2014 17:36
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [Neutron] SSL VPN Implemenatation
> >>
> >> Hi Jarret
> >>
> >> IMO, Zang point is the issue saving plain private key in the
> > filesystem for
> >> OpenVPN.
> >> Isn't this same even if we use Barbican?
> >>
> >>
> >>
> >>
> >>
> >> 2014-05-01 2:56 GMT-07:00 Jarret Raim :
> >> > Zang mentioned that part of the issue is that the private key has to
> >> > be stored in the OpenVPN config file. If the config files are
> >> > generated and can be stored, then storing the whole config file in
> >> > Barbican protects the private key (and any other settings) without
> >> > having to try to deliver the key to the OpenVPN endpoint in some
> > non-
> >> standard way.
> >> >
> >> >
> >> > Jarret
> >> >
> >> > On 4/30/14, 6:08 PM, "Nachi Ueno"  wrote:
> >> >
> >> >>> Jarret
> >> >>
> >> >>Thanks!
> >> >>Currently, the config will be generated on demand by the agent.
> >> >>What's merit storing entire config in the Barbican?
> >> >>
> >> >>> Kyle
> >> >>Thanks!
> >> >>
> >> >>2014-04-30 7:05 GMT-07:00 Kyle Mestery
> >> :
> >> >>> On Tue, Apr 29, 2014 at 6:11 PM, Nachi Ueno 
> >> wrote:
> >>  Hi Clint
> >> 
> >>  Thank you for your suggestion. Your point get taken :)
> >> 
> >> > Kyle
> >>  This is also a same discussion for LBaaS Can we discuss this in
> >>  advanced service meeting?
> >> 
> >> >>> Yes! I think we should definitely discuss this in the advanced
> >> >>> services meeting today. I've added it to the agenda [1].
> >> >>>
> >> >>> Thanks,
> >> >>> Kyle
> >> >>>
> >> >>> [1]
> >> >>>https://wiki.openstack.org/wiki/Meetings/AdvancedServices#Agenda_f
> >> or_
> >> >>>next
> >> >>>_meeting
> >> >>>
> >> > Zang
> >>  Could you join the discussion?
> >> 
> >> 
> >> 
> >>  2014-04-29 15:48 GMT-07:00 Clint Byrum :
> >> > Excerpts from Nachi Ueno's message of 2014-04-29 10:58:53 -0700:
> >> >> Hi Kyle
> >> >>
> >> >> 2014-04-29 10:52 GMT-07:00 Kyle Mestery
> >> :
> >> >> > On Tue, Apr 29, 2014 at 12:42 PM, Nachi Ueno
> >> 
> >> >>wrote:
> >> >> >> Hi Zang
> >> >> >>
> >> >> >> Thank you for your contribution on this!
> >> >> >> The private key management is what I want to discuss in the
> >> >>summit.
> >> >> >>
> >> >> > Has the idea of using Barbican been discussed before? There
> > are
> >> >>many
> >> >> > reasons why using Barbican for this may be better than
> >> >> > developing
> >> >>key
> >> >> > management ourselves.
> >> >>
> >> >> No, however I'm +1 for using Barbican. Let's discuss this in
> >> >> certificate management topic in advanced service session.
> >> >>
> >> >
> >> > Just a suggestion: Don't defer that until the summit. Sounds
> > like
> >> >you've  already got some consensus, so you don't need the summit
> >> >just to rubber  stamp it. I suggest discussing as much as you can
> >> >right now on the mailing  list, and using the time at the summit
> > to
> >> >resolve any complicated issues  including any "a or b" things
> > that
> >> >need crowd-sourced idea making. You  can also use the summit time
> >> >to communicate your requirements to the  Barbican developers.
> >> >
> >> > Point is: just because you'll have face time, doesn't mean you
> >> > should use it for what can be done via the mailing list.
> >> >
> >> > ___
> >> > OpenStack-dev mailing list
> >> > OpenStack-dev@lists.openstack.org
> >> >
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> 
> >>  ___
> >>  OpenStack-dev mailing list
> >>  OpenStack-dev@lists.openstack.org
> >>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >>>
> >> >>> ___

Re: [openstack-dev] Objects not getting distributed across the swift cluster...

2014-05-01 Thread Shyam Prasad N
Hi Chuck,
Thanks for the reply.

The reason for this weight distribution seems to have to do with the ring
rebalance command. I've scripted the disk addition (and rebalance) process
using a wrapper command. When I trigger the rebalance after each disk
addition, only the first rebalance seems to take effect.

Is there any other way to adjust the weights other than rebalance? Or is
there a way to force a rebalance, even if the time since the last rebalance
(run as part of disk addition) is under an hour (the min_part_hours value
set at ring creation)?
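For reference, a sketch of how this can be done with swift-ring-builder alone
(the device search value `d3` is illustrative; searches by ip/port/device name
should work too):

```shell
# Explicitly set the weight on a device (here device id 3), then rebalance.
swift-ring-builder object.builder set_weight d3 1.00

# min_part_hours blocks partition reassignment within the window chosen at
# ring creation; this overrides the timer for the next rebalance only.
# Use with care on a live cluster.
swift-ring-builder object.builder pretend_min_part_hours_passed
swift-ring-builder object.builder rebalance
```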
 On May 1, 2014 9:00 PM, "Chuck Thier"  wrote:

> Hi Shyam,
>
> If I am reading your ring output correctly, it looks like only the devices
> in node .202 have a weight set, and thus why all of your objects are going
> to that one node.  You can update the weight of the other devices, and
> rebalance, and things should get distributed correctly.
>
> --
> Chuck
>
>
> On Thu, May 1, 2014 at 5:28 AM, Shyam Prasad N wrote:
>
>> Hi,
>>
>> I created a swift cluster and configured the rings like this...
>>
>> swift-ring-builder object.builder create 10 3 1
>>
>> ubuntu-202:/etc/swift$ swift-ring-builder object.builder
>> object.builder, build version 12
>> 1024 partitions, 3.00 replicas, 1 regions, 4 zones, 12 devices, 300.00 balance
>> The minimum number of hours before a partition can be reassigned is 1
>> Devices: id  region  zone  ip address   port  replication ip  replication port  name  weight  partitions  balance  meta
>>           0       1     1  10.3.0.202   6010  10.3.0.202      6010              xvdb    1.00        1024   300.00
>>           1       1     1  10.3.0.202   6020  10.3.0.202      6020              xvdc    1.00        1024   300.00
>>           2       1     1  10.3.0.202   6030  10.3.0.202      6030              xvde    1.00        1024   300.00
>>           3       1     2  10.3.0.212   6010  10.3.0.212      6010              xvdb    1.00           0  -100.00
>>           4       1     2  10.3.0.212   6020  10.3.0.212      6020              xvdc    1.00           0  -100.00
>>           5       1     2  10.3.0.212   6030  10.3.0.212      6030              xvde    1.00           0  -100.00
>>           6       1     3  10.3.0.222   6010  10.3.0.222      6010              xvdb    1.00           0  -100.00
>>           7       1     3  10.3.0.222   6020  10.3.0.222      6020              xvdc    1.00           0  -100.00
>>           8       1     3  10.3.0.222   6030  10.3.0.222      6030              xvde    1.00           0  -100.00
>>           9       1     4  10.3.0.232   6010  10.3.0.232      6010              xvdb    1.00           0  -100.00
>>          10       1     4  10.3.0.232   6020  10.3.0.232      6020              xvdc    1.00           0  -100.00
>>          11       1     4  10.3.0.232   6030  10.3.0.232      6030              xvde    1.00           0  -100.00
>>
>> Container and account rings have a similar configuration.
>> Once the rings were created and all the disks were added to the rings
>> like above, I ran rebalance on each ring. (I ran rebalance after adding
>> each of the node above.)
>> Then I immediately scp the rings to all other nodes in the cluster.
>>
>> I now observe that the objects are all going to 10.3.0.202. I don't see
>> the objects being replicated to the other nodes. So much so that 202 is
>> approaching 100% disk usage, while other nodes are almost completely empty.
>> What am I doing wrong? Am I not supposed to run rebalance operation after
>> addition of each disk/node?
>>
>> Thanks in advance for the help.
>>
>> --
>> -Shyam
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Wrong Test Cases

2014-05-01 Thread Hao Wang
Hi,

I have a question: if there is something wrong with a test case, and it
causes the review to proceed, what should I do?

Thanks,
Hao
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron]SystemExit() vs sys.exit()?

2014-05-01 Thread Yuriy Taraday
On Thu, May 1, 2014 at 8:17 PM, Salvatore Orlando wrote:

> The patch you've been looking at just changes the way in which SystemExit
> is used, it does not replace it with sys.exit.
> In my experience sys.exit was causing unit test threads to interrupt
> abruptly, whereas SystemExit was being caught by the test runner and
> handled.
>

According to https://docs.python.org/2.7/library/sys.html#sys.exit ,
sys.exit(n) is an equivalent for raise SystemExit(n), it can be confirmed
in the source code here:
http://hg.python.org/cpython/file/2.7/Python/sysmodule.c#l206
If there's any difference in behavior, it is likely a problem with the test
runner. For example, it could be mocking sys.exit somehow.

 I find therefore a bit strange that you're reporting what appears to be
> the opposite behaviour.
>
> Maybe if you could share the code you're working on we can have a look at
> it and see what's going on.
>

I'd suggest finding out what the difference is between your two cases.

Coming back to the topic, I'd prefer using the standard library call because
it can be mocked for testing.
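The equivalence, and the mockability argument, can be checked directly with a
minimal sketch:

```python
import sys
from unittest import mock

def exits_with(code):
    """Call sys.exit and report the code carried by the caught SystemExit."""
    try:
        sys.exit(code)
    except SystemExit as e:
        return e.code

# sys.exit(n) is equivalent to `raise SystemExit(n)`: a test runner can
# catch it like any other exception.
assert exits_with(2) == 2

# And being a plain function, sys.exit is trivially mockable:
with mock.patch.object(sys, "exit") as fake_exit:
    sys.exit(1)                       # swallowed by the mock, nothing raised
    fake_exit.assert_called_once_with(1)
```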

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] SSL VPN Implementation

2014-05-01 Thread Nachi Ueno
Hi Robert

Thank you for your suggestion.
So your suggestion is to let the OpenVPN process download the key into memory
directly from Barbican?



2014-05-01 9:42 GMT-07:00 Clark, Robert Graham :
> Excuse me interrupting but couldn't you treat the key as largely
> ephemeral, pull it down from Barbican, start the OpenVPN process and
> then purge the key?  It would of course still be resident in the memory
> of the OpenVPN process but should otherwise be protected against
> filesystem disk-residency issues.
>
>
>> -Original Message-
>> From: Nachi Ueno [mailto:na...@ntti3.com]
>> Sent: 01 May 2014 17:36
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Neutron] SSL VPN Implemenatation
>>
>> Hi Jarret
>>
>> IMO, Zang point is the issue saving plain private key in the
> filesystem for
>> OpenVPN.
>> Isn't this same even if we use Barbican?
>>
>>
>>
>>
>>
>> 2014-05-01 2:56 GMT-07:00 Jarret Raim :
>> > Zang mentioned that part of the issue is that the private key has to
>> > be stored in the OpenVPN config file. If the config files are
>> > generated and can be stored, then storing the whole config file in
>> > Barbican protects the private key (and any other settings) without
>> > having to try to deliver the key to the OpenVPN endpoint in some
> non-
>> standard way.
>> >
>> >
>> > Jarret
>> >
>> > On 4/30/14, 6:08 PM, "Nachi Ueno"  wrote:
>> >
>> >>> Jarret
>> >>
>> >>Thanks!
>> >>Currently, the config will be generated on demand by the agent.
>> >>What's merit storing entire config in the Barbican?
>> >>
>> >>> Kyle
>> >>Thanks!
>> >>
>> >>2014-04-30 7:05 GMT-07:00 Kyle Mestery
>> :
>> >>> On Tue, Apr 29, 2014 at 6:11 PM, Nachi Ueno 
>> wrote:
>>  Hi Clint
>> 
>>  Thank you for your suggestion. Your point get taken :)
>> 
>> > Kyle
>>  This is also a same discussion for LBaaS Can we discuss this in
>>  advanced service meeting?
>> 
>> >>> Yes! I think we should definitely discuss this in the advanced
>> >>> services meeting today. I've added it to the agenda [1].
>> >>>
>> >>> Thanks,
>> >>> Kyle
>> >>>
>> >>> [1]
>> >>>https://wiki.openstack.org/wiki/Meetings/AdvancedServices#Agenda_f
>> or_
>> >>>next
>> >>>_meeting
>> >>>
>> > Zang
>>  Could you join the discussion?
>> 
>> 
>> 
>>  2014-04-29 15:48 GMT-07:00 Clint Byrum :
>> > Excerpts from Nachi Ueno's message of 2014-04-29 10:58:53 -0700:
>> >> Hi Kyle
>> >>
>> >> 2014-04-29 10:52 GMT-07:00 Kyle Mestery
>> :
>> >> > On Tue, Apr 29, 2014 at 12:42 PM, Nachi Ueno
>> 
>> >>wrote:
>> >> >> Hi Zang
>> >> >>
>> >> >> Thank you for your contribution on this!
>> >> >> The private key management is what I want to discuss in the
>> >>summit.
>> >> >>
>> >> > Has the idea of using Barbican been discussed before? There
> are
>> >>many
>> >> > reasons why using Barbican for this may be better than
>> >> > developing
>> >>key
>> >> > management ourselves.
>> >>
>> >> No, however I'm +1 for using Barbican. Let's discuss this in
>> >> certificate management topic in advanced service session.
>> >>
>> >
>> > Just a suggestion: Don't defer that until the summit. Sounds
> like
>> >you've  already got some consensus, so you don't need the summit
>> >just to rubber  stamp it. I suggest discussing as much as you can
>> >right now on the mailing  list, and using the time at the summit
> to
>> >resolve any complicated issues  including any "a or b" things
> that
>> >need crowd-sourced idea making. You  can also use the summit time
>> >to communicate your requirements to the  Barbican developers.
>> >
>> > Point is: just because you'll have face time, doesn't mean you
>> > should use it for what can be done via the mailing list.
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>>  ___
>>  OpenStack-dev mailing list
>>  OpenStack-dev@lists.openstack.org
>>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>>
>> >>> ___
>> >>> OpenStack-dev mailing list
>> >>> OpenStack-dev@lists.openstack.org
>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >>___
>> >>OpenStack-dev mailing list
>> >>OpenStack-dev@lists.openstack.org
>> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> ___
>> OpenStack-dev mailing list
>> Op

[openstack-dev] Where to report bugs on oslo.config?

2014-05-01 Thread Thomas Goirand
Hi,

I've searched Launchpad and didn't find it. This didn't work:
https://launchpad.net/oslo.config/+bugs

Should I report bugs at:
https://launchpad.net/oslo/+bugs

???

Anyway, the bug is this:

==
FAIL: tests.test_cfg.CliSpecialOptsTestCase.test_version
tests.test_cfg.CliSpecialOptsTestCase.test_version
--
_StringException: Empty attachments:
  stderr

stdout: {{{1.0}}}

Traceback (most recent call last):
  File
"/home/zigo/sources/openstack/icehouse/oslo-config/build-area/oslo-config-1.3.0/tests/test_cfg.py",
line 484, in test_version
self.assertTrue('1.0' in sys.stderr.getvalue())
  File "/usr/lib/python3.4/unittest/case.py", line 651, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true

This happens when running with Python 3.4.
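For what it's worth, the `stdout: {{{1.0}}}` attachment hints at the cause:
starting with Python 3.4, argparse writes --version output to stdout rather
than stderr. A minimal reproduction (the parser below is illustrative, not
oslo.config's own):

```python
import argparse
import contextlib
import io

parser = argparse.ArgumentParser(prog="demo")
parser.add_argument("--version", action="version", version="1.0")

out, err = io.StringIO(), io.StringIO()
with contextlib.redirect_stdout(out), contextlib.redirect_stderr(err):
    try:
        parser.parse_args(["--version"])   # prints the version, then exits
    except SystemExit:
        pass

# On Python >= 3.4 the version string lands on stdout, so an assertion
# against sys.stderr.getvalue() (as in the failing test) comes up empty.
assert "1.0" in out.getvalue()
```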

Thomas Goirand (zigo)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate] Client and REST API versioning

2014-05-01 Thread Dolph Mathews
On Thu, May 1, 2014 at 8:50 AM, Fuente, Pablo A wrote:

> Hi,
> We recently implemented our V2 REST API, and at this moment we are
> trying to get working our python client against this new version. For
> this reason, we start a discussion about how the client will choose/set
> the REST API version to use. BTW, we are not deprecating our V1 REST
> API, so we need that our client still support it.
> These are the options we discussed:
>
> 1 - Should the URL stored in Keystone service catalog have the version?
> In this case, our client will get the REST API URL from Keystone,
> parse
> it, select the correct version of the client code and then start
> performing requests. But if we choose this path, if a user of the client
> decides to use the V1 REST API version using
> --os-reservation-api-version, the client should strip the version of the
> URL and then append the version that the user wants. The thing here is
> that we are storing a version on a URL that we could not use in some
> cases. In other words, the version in the URL could be overridden.
>

No - avoid bloating the service catalog with redundant data.


>
> 2 - Should Climate store only one URL in Keystone catalog without
> version?
> Here, the client, will know the default version to use, appending
> that
> version to the service catalog version. When the client user request
> another version, the client simply append that version to the end. The
> cons of this option, is that if someone plan to use the REST API without
> our client needs to know about how we handle the version. Here we can
> provide /versions in order to tell how we are handling/naming versions.
>

This is by far the best option you've presented, but the client should also
perform discovery on the endpoint as specified in the catalog, without
trying to arbitrarily manipulate it first. If you return the versioning
information in response to / instead of /versions then you're not forcing
clients to have prior knowledge of an arbitrary path.
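A sketch of what that client-side discovery could look like. The document
shape below is illustrative, modeled on the versions documents other
OpenStack services return from their root URL; it is not Climate's actual
format:

```python
def pick_version_href(discovery_doc, requested="v2"):
    """Pick the 'self' link for the requested API version from a parsed
    GET / response; raise if the server does not advertise it."""
    for version in discovery_doc.get("versions", []):
        if version["id"].startswith(requested):
            for link in version["links"]:
                if link.get("rel") == "self":
                    return link["href"]
    raise ValueError("API version %r not advertised by endpoint" % requested)

# Illustrative discovery document, not Climate's real one:
doc = {"versions": [
    {"id": "v1.0", "links": [{"rel": "self", "href": "http://climate:1234/v1/"}]},
    {"id": "v2.0", "links": [{"rel": "self", "href": "http://climate:1234/v2/"}]},
]}

assert pick_version_href(doc, "v2") == "http://climate:1234/v2/"
assert pick_version_href(doc, "v1") == "http://climate:1234/v1/"
```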


>
> 3 - Should Climate store all the REST API URLs with the version at the
> end and using versions in service types? e.g reservation and
> reservationV2
> Here the client will get the version that needs querying the
> service
> type by version. Seems that some projects do this, but for me seems that
> this option is similar to 2, with the con that when Climate deprecate
> V1, the only service type will be reservationV2, which sounds weird for
> me.
>

No - a different version of a service does not represent a different type
of service. Similar to option 1, this also bloats the service catalog
unnecessarily.


>
> We would like to get your feedback about this points (or new ones) in
> order to get this implemented in the right way.
>
> Pablo.
> P.S. I hope that all the options in this email reflect correctly what we
> discussed at Climate. If not, please add/clarify/remove what you want.
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] RackSpace API review (multi-call)

2014-05-01 Thread Trevor Vardeman
Vijay, I'm following suit: Replies in line :D

On Thu, 2014-05-01 at 16:11 +, Vijay Venkatachalam wrote:
> Thanks Trevor. Replies inline!
> 
> > -Original Message-
> > From: Trevor Vardeman [mailto:trevor.varde...@rackspace.com]
> > Sent: Thursday, May 1, 2014 7:30 PM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [Neutron][LBaaS] RackSpace API review (multi-
> > call)
> > 
> > Vijay,
> > 
> > Comments in-line, hope I can clear some of this up for you :)
> > 
> > -Trevor
> > 
> > On Thu, 2014-05-01 at 13:16 +, Vijay Venkatachalam wrote:
> > > I am expecting to be more active on community on the LBaaS front.
> > >
> > > May be reviewing and picking-up a few items to  work as well.
> > >
> > > I had a look at the proposal. Seeing Single & Multi-Call approach for
> > > each workflow makes it easy to understand.
> > >
> > > Thanks for the clear documentation, it is welcoming to review :-). I was 
> > > not
> > allowed to comment on WorkFlow doc, can you enable comments?
> > >
> > > The single-call approach essentially creates the global pool/VIP. Once
> > VIP/Pool is created using single call, are they reusable in multi-call?
> > > For example: Can a pool created for HTTP endpoint/loadbalancer be used
> > in HTTPS endpoint LB where termination occurs as well?
> > 
> > From what I remember discussing with my team (being a developer under
> > Jorge's umbrella) There is a 1-M relationship between load balancer and
> > pool.  Also, the protocol is specified on the Load Balancer, not the pool,
> > meaning you could expose TCP traffic via one Load Balancer to a pool, and
> > HTTP traffic via another Load Balancer to that same pool.
> > This is easily modified such
> > 
> 
> Ok. Thanks! Should there be a separate use case for covering this (If it is 
> not already present)?

This is already reflected in at least one use case. I've been documenting
"solutions", so to speak, to many of the use cases with regard to the
Rackspace API proposal. If you'd like to see some of those examples (keep in
mind they are a WIP), here is a link to them:
https://drive.google.com/#folders/0B2r4apUP7uPwRVc2MzQ2MHNpcE0

> 
> > >
> > > Also, would it be useful to include PUT as a single call? I see PUT only 
> > > for
> > POOL not for LB.
> > > A user who started with single-call  POST, might like to continue to use 
> > > the
> > same approach for PUT/update as well.
> > 
> > On the fifth page of the document found here:
> > https://docs.google.com/document/d/1mTfkkdnPAd4tWOMZAdwHEx7IuFZ
> > DULjG9bTmWyXe-zo/edit
> > There is a PUT detailed for a Load Balancer.  There should be support for 
> > PUT
> > on any parent object assuming the fields one would update are not read-
> > only.
> > 
> 
> My mistake, didn't explain properly.
> I see PUT of loadbalancer containing only loadbalancer properties. 
> I was wondering if it makes sense for PUT of LOADBALANCER to contain 
> pool+members also. Similar to the POST payload.

For this API proposal, we wanted to enforce the updating of properties
as single requests to the resource, where the POST context includes
creations/attachments of resources to one another.  To update a pool/its
members you would use the "/pools" or "/pools/{pool_id}/members"
endpoints accordingly.  Also a POST to
"/loadbalancers/{loadbalancer_id}/pools" will create/attach a pool to
the Load Balancer, however PUT would not be supported at this endpoint.

> 
> Also, will delete of loadbalancer  DELETE the pool/vip, if they are no more 
> referenced by another loadbalancer.
> 
> Or, they have to be cleaned up separately?

Following the concept of the Neutron port, a delete in essence "detaches"
rather than removes the references, leaving the extra pieces intact but
disconnected from a Load Balancer. One would delete the Load Balancer and
still be able to retrieve the VIP or Pool from their root-resource
references. This would allow someone to delete a specific Load Balancer and
then create an entirely new one while referencing the original pool and VIP.
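As a sketch of that flow (the endpoint paths and IDs below are illustrative,
following the proposal doc, not a final API):

```shell
# Delete a specific load balancer; its pool and VIP survive as root resources.
curl -X DELETE http://lbaas.example/v2/loadbalancers/LB_ID

# The detached pool and VIP are still retrievable...
curl http://lbaas.example/v2/pools/POOL_ID
curl http://lbaas.example/v2/vips/VIP_ID

# ...and can be referenced when creating an entirely new load balancer.
curl -X POST http://lbaas.example/v2/loadbalancers \
     -H 'Content-Type: application/json' \
     -d '{"vip_id": "VIP_ID", "pool_ids": ["POOL_ID"]}'
```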

> 
> > >
> > > Thanks,
> > > Vijay V.
> > >
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] SSL VPN Implementation

2014-05-01 Thread Clark, Robert Graham
Excuse me interrupting but couldn't you treat the key as largely
ephemeral, pull it down from Barbican, start the OpenVPN process and
then purge the key?  It would of course still be resident in the memory
of the OpenVPN process but should otherwise be protected against
filesystem disk-residency issues.


> -Original Message-
> From: Nachi Ueno [mailto:na...@ntti3.com]
> Sent: 01 May 2014 17:36
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron] SSL VPN Implemenatation
> 
> Hi Jarret
> 
> IMO, Zang point is the issue saving plain private key in the
filesystem for
> OpenVPN.
> Isn't this same even if we use Barbican?
> 
> 
> 
> 
> 
> 2014-05-01 2:56 GMT-07:00 Jarret Raim :
> > Zang mentioned that part of the issue is that the private key has to
> > be stored in the OpenVPN config file. If the config files are
> > generated and can be stored, then storing the whole config file in
> > Barbican protects the private key (and any other settings) without
> > having to try to deliver the key to the OpenVPN endpoint in some
non-
> standard way.
> >
> >
> > Jarret
> >
> > On 4/30/14, 6:08 PM, "Nachi Ueno"  wrote:
> >
> >>> Jarret
> >>
> >>Thanks!
> >>Currently, the config will be generated on demand by the agent.
> >>What's merit storing entire config in the Barbican?
> >>
> >>> Kyle
> >>Thanks!
> >>
> >>2014-04-30 7:05 GMT-07:00 Kyle Mestery
> :
> >>> On Tue, Apr 29, 2014 at 6:11 PM, Nachi Ueno 
> wrote:
>  Hi Clint
> 
>  Thank you for your suggestion. Your point get taken :)
> 
> > Kyle
>  This is also a same discussion for LBaaS Can we discuss this in
>  advanced service meeting?
> 
> >>> Yes! I think we should definitely discuss this in the advanced
> >>> services meeting today. I've added it to the agenda [1].
> >>>
> >>> Thanks,
> >>> Kyle
> >>>
> >>> [1]
> >>>https://wiki.openstack.org/wiki/Meetings/AdvancedServices#Agenda_f
> or_
> >>>next
> >>>_meeting
> >>>
> > Zang
>  Could you join the discussion?
> 
> 
> 
>  2014-04-29 15:48 GMT-07:00 Clint Byrum :
> > Excerpts from Nachi Ueno's message of 2014-04-29 10:58:53 -0700:
> >> Hi Kyle
> >>
> >> 2014-04-29 10:52 GMT-07:00 Kyle Mestery
> :
> >> > On Tue, Apr 29, 2014 at 12:42 PM, Nachi Ueno
> 
> >>wrote:
> >> >> Hi Zang
> >> >>
> >> >> Thank you for your contribution on this!
> >> >> The private key management is what I want to discuss in the
> >>summit.
> >> >>
> >> > Has the idea of using Barbican been discussed before? There
are
> >>many
> >> > reasons why using Barbican for this may be better than
> >> > developing
> >>key
> >> > management ourselves.
> >>
> >> No, however I'm +1 for using Barbican. Let's discuss this in
> >> certificate management topic in advanced service session.
> >>
> >
> > Just a suggestion: Don't defer that until the summit. Sounds
like
> >you've  already got some consensus, so you don't need the summit
> >just to rubber  stamp it. I suggest discussing as much as you can
> >right now on the mailing  list, and using the time at the summit
to
> >resolve any complicated issues  including any "a or b" things
that
> >need crowd-sourced idea making. You  can also use the summit time
> >to communicate your requirements to the  Barbican developers.
> >
> > Point is: just because you'll have face time, doesn't mean you
> > should use it for what can be done via the mailing list.
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> >
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
>  ___
>  OpenStack-dev mailing list
>  OpenStack-dev@lists.openstack.org
>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>> ___
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev@lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>___
> >>OpenStack-dev mailing list
> >>OpenStack-dev@lists.openstack.org
> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta

2014-05-01 Thread Frittoli, Andrea (HP Cloud)
I will arrive Sunday late.

If you meet on Monday I’ll see you there ^_^

 

From: Miguel Lavalle [mailto:mig...@mlavalle.com] 
Sent: 01 May 2014 17:28
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta

 

I arrive Sunday at 3:30pm. Either Sunday or Monday is fine with me. Looking
forward to it :-)

 

On Wed, Apr 30, 2014 at 5:11 AM, Koderer, Marc <m.kode...@telekom.de> wrote:

Hi folks,

last time we met one day before the Summit started for a short meet-up.
Should we do the same this time?

I will arrive Saturday to recover from the jet lag ;) So Sunday 11th would be 
fine for me.

Regards,
Marc
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] SSL VPN Implementation

2014-05-01 Thread Nachi Ueno
Hi Jarret

IMO, Zang's point is the issue of saving the plain private key in the
filesystem for OpenVPN.
Isn't this the same even if we use Barbican?





2014-05-01 2:56 GMT-07:00 Jarret Raim :
> Zang mentioned that part of the issue is that the private key has to be
> stored in the OpenVPN config file. If the config files are generated and
> can be stored, then storing the whole config file in Barbican protects the
> private key (and any other settings) without having to try to deliver the
> key to the OpenVPN endpoint in some non-standard way.
>
>
> Jarret
>
> On 4/30/14, 6:08 PM, "Nachi Ueno"  wrote:
>
>>> Jarret
>>
>>Thanks!
>>Currently, the config will be generated on demand by the agent.
>>What's merit storing entire config in the Barbican?
>>
>>> Kyle
>>Thanks!
>>
>>2014-04-30 7:05 GMT-07:00 Kyle Mestery :
>>> On Tue, Apr 29, 2014 at 6:11 PM, Nachi Ueno  wrote:
 Hi Clint

 Thank you for your suggestion. Your point get taken :)

> Kyle
 This is also a same discussion for LBaaS
 Can we discuss this in advanced service meeting?

>>> Yes! I think we should definitely discuss this in the advanced
>>> services meeting today. I've added it to the agenda [1].
>>>
>>> Thanks,
>>> Kyle
>>>
>>> [1]
>>>https://wiki.openstack.org/wiki/Meetings/AdvancedServices#Agenda_for_next
>>>_meeting
>>>
> Zang
 Could you join the discussion?



 2014-04-29 15:48 GMT-07:00 Clint Byrum :
> Excerpts from Nachi Ueno's message of 2014-04-29 10:58:53 -0700:
>> Hi Kyle
>>
>> 2014-04-29 10:52 GMT-07:00 Kyle Mestery :
>> > On Tue, Apr 29, 2014 at 12:42 PM, Nachi Ueno 
>>wrote:
>> >> Hi Zang
>> >>
>> >> Thank you for your contribution on this!
>> >> The private key management is what I want to discuss in the
>>summit.
>> >>
>> > Has the idea of using Barbican been discussed before? There are
>>many
>> > reasons why using Barbican for this may be better than developing
>>key
>> > management ourselves.
>>
>> No, however I'm +1 for using Barbican. Let's discuss this in
>> certificate management topic in advanced service session.
>>
>
> Just a suggestion: Don't defer that until the summit. Sounds like
>you've
> already got some consensus, so you don't need the summit just to
>rubber
> stamp it. I suggest discussing as much as you can right now on the
>mailing
> list, and using the time at the summit to resolve any complicated
>issues
> including any "a or b" things that need crowd-sourced idea making. You
> can also use the summit time to communicate your requirements to the
> Barbican developers.
>
> Point is: just because you'll have face time, doesn't mean you should
> use it for what can be done via the mailing list.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>___
>>OpenStack-dev mailing list
>>OpenStack-dev@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta

2014-05-01 Thread Miguel Lavalle
I arrive Sunday at 3:30pm. Either Sunday or Monday is fine with me.
Looking forward to it :-)


On Wed, Apr 30, 2014 at 5:11 AM, Koderer, Marc  wrote:

> Hi folks,
>
> last time we met one day before the Summit started for a short meet-up.
> Should we do the same this time?
>
> I will arrive Saturday to recover from the jet lag ;) So Sunday 11th would
> be fine for me.
>
> Regards,
> Marc
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Updated Use Cases Assessment and Questions

2014-05-01 Thread Trevor Vardeman
Hello,

I've been going through the 40+ use cases, and I couldn't help but
notice some additions that are either unclear or not descriptive.

For ease of reference, I'll link the document: 
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit#

I took a high-level look at most of them to evaluate their feasibility
against the Rackspace API proposal, and began documenting them for the
purpose of comparison.  However, I've run into some issues understanding
and/or evaluating them.

One section of the use-cases comes to mind specifically.  Numbers 31
through 39 are not very descriptive.  Many of these don't seem like
use-cases as much as they seem like feature requests.  Ideally there
would be more information, or an example of a problem to solve including
the use-case, similar to many of the others.

On that same note, there are some use-cases I simply don't understand,
be it due to my own naivety or the wording of the use-case.

Use-Case 10:  I assumed this was referring to the source-IP that
accesses the Load Balancer.  As far as I know the X-Forwarded-For header
includes this.  To satisfy this use-case, was there some expectation to
retrieve this information through an API request?  Also, with the
trusted-proxy evaluation, is that being handled by the pool member, or
was this in reference to an "access list" so-to-speak defined on the
load balancer?

Use-Case 20:  I do not believe much of this is handled within the LBaaS
API, but with a different service that provides auto-scaling
functionality.  Especially the "on-the-fly" updating of properties.
This also becomes incredibly difficult when considering TCP session
persistence when the possible pool member could be removed at any
automated time.

Use-Case 25:  I think this one is referring to the functionality of a
"draining" status for a pool member; the pool member will not receive
any new connections, and will not force any active connection closed.
Is that the right way to understand that use-case?

Use-Case 26:  Is this functionally wanting something like an "error
page" to come up during the maintenance window?  Also, to accept only
connections from a specific set of IPs only during the maintenance
window, one would manually have to create an access list for the load
balancer during the time for testing, and then either modify or remove
it after maintenance is complete.  Does this sound like an accurate
understanding/solution?

Use-Case 37:  I'm not entirely sure what this one would mean.  I know I
included it in the section that sounded more like features, but I was
still curious what this one referred to.  Does this have to do with the
desire for auto-scaling?  When a pool member gains a certain threshold
of connections another pool member is created or chosen to handle the
next connection(s) as they come?

Please feel free to correct me anywhere I've blundered here, and if my
proposed "solution" is inaccurate or not easily understood, I'd be more
than happy to explain in further detail.  Thanks for any help you can
offer!

-Trevor Vardeman
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron]SystemExit() vs sys.exit()?

2014-05-01 Thread Salvatore Orlando
The patch you've been looking at just changes the way in which SystemExit
is used, it does not replace it with sys.exit.
In my experience sys.exit was causing unit test threads to interrupt
abruptly, whereas SystemExit was being caught by the test runner and
handled.
I therefore find it a bit strange that you're reporting what appears to be
the opposite behaviour.

Maybe if you could share the code you're working on we can have a look at
it and see what's going on.
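For reference, sys.exit() is nothing more than raising SystemExit, so a runner that catches SystemExit intercepts both spellings. A minimal illustration in plain Python (not the Neutron code in question):

```python
import sys

def exit_code(fn):
    """Run fn and return the code of the SystemExit it raised, if any."""
    try:
        fn()
    except SystemExit as exc:
        return exc.code

def via_sys_exit():
    sys.exit(2)          # raises SystemExit(2) under the hood

def via_raise():
    raise SystemExit(2)  # the explicit equivalent

print(exit_code(via_sys_exit))  # 2
print(exit_code(via_raise))     # 2
```

Since the two forms raise the same exception type, any difference in observed behaviour most likely comes from how and where the traceback was reported by the runner, not from the exception itself.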

Salvatore


On 30 April 2014 21:08, Paul Michali (pcm)  wrote:

>  Hi,
>
>  In Neutron I see SystemExit() being raised in some cases. Is this
> preferred over calling sys.exit()?
>
>  I ask, because I recall having a TOX failure where all I was getting was
> the return code, with no traceback or indication at all of where the
> failure occurred. In that case, I changed from SystemExit() to sys.exit()
> and I then got the traceback and was able to see what was going wrong in
> the test case (it’s been weeks, so I don’t recall where this was at).
>
>  I see currently, there is some changes to use of SystemExit() being
> reviewed (https://review.openstack.org/91185), and it reminded me of the
> concern I had.
>
>  Can anyone enlighten me?
>
>
>  Thanks!
>
>  PCM (Paul Michali)
>
>  MAIL …..…. p...@cisco.com
> IRC ……..… pcm_ (irc.freenode.com)
> TW ………... @pmichali
> GPG Key … 4525ECC253E31A83
> Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] RackSpace API review (multi-call)

2014-05-01 Thread Vijay Venkatachalam
Thanks Trevor. Replies inline!

> -Original Message-
> From: Trevor Vardeman [mailto:trevor.varde...@rackspace.com]
> Sent: Thursday, May 1, 2014 7:30 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Neutron][LBaaS] RackSpace API review (multi-
> call)
> 
> Vijay,
> 
> Comments in-line, hope I can clear some of this up for you :)
> 
> -Trevor
> 
> On Thu, 2014-05-01 at 13:16 +, Vijay Venkatachalam wrote:
> > I am expecting to be more active on community on the LBaaS front.
> >
> > May be reviewing and picking-up a few items to  work as well.
> >
> > I had a look at the proposal. Seeing Single & Multi-Call approach for
> > each workflow makes it easy to understand.
> >
> > Thanks for the clear documentation, it is welcoming to review :-). I was not
> allowed to comment on WorkFlow doc, can you enable comments?
> >
> > The single-call approach essentially creates the global pool/VIP. Once
> VIP/Pool is created using single call, are they reusable in multi-call?
> > For example: Can a pool created for HTTP endpoint/loadbalancer be used
> in HTTPS endpoint LB where termination occurs as well?
> 
> From what I remember discussing with my team (being a developer under
> Jorge's umbrella) There is a 1-M relationship between load balancer and
> pool.  Also, the protocol is specified on the Load Balancer, not the pool,
> meaning you could expose TCP traffic via one Load Balancer to a pool, and
> HTTP traffic via another Load Balancer to that same pool.
> This is easily modified such
> 

Ok, thanks! Should there be a separate use case covering this (if one is not
already present)?

> >
> > Also, would it be useful to include PUT as a single call? I see PUT only for
> POOL not for LB.
> > A user who started with single-call  POST, might like to continue to use the
> same approach for PUT/update as well.
> 
> On the fifth page of the document found here:
> https://docs.google.com/document/d/1mTfkkdnPAd4tWOMZAdwHEx7IuFZ
> DULjG9bTmWyXe-zo/edit
> There is a PUT detailed for a Load Balancer.  There should be support for PUT
> on any parent object assuming the fields one would update are not read-
> only.
> 

My mistake, I didn't explain properly.
I see the PUT of a loadbalancer containing only loadbalancer properties.
I was wondering if it makes sense for a PUT of a LOADBALANCER to also contain
pool+members, similar to the POST payload.

Also, will a DELETE of a loadbalancer also delete the pool/VIP if they are no
longer referenced by another loadbalancer?

Or do they have to be cleaned up separately?

> >
> > Thanks,
> > Vijay V.
> >

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] VIF event callbacks implementation

2014-05-01 Thread Duncan Thomas
On 29 April 2014 20:23, Dan Smith  wrote:
> Yeah, we've already got plans in place to get Cinder to use the
> interface to provide us more detailed information and eliminate some
> polling. We also have a very purpose-built notification scheme between
> nova and cinder that facilitates a callback for a very specific
> scenario. I'd like to get that converted to use this mechanism as well,
> so that it becomes "the way you tell nova that things it's waiting for
> have happened."

I'm gently but firmly pushing for the cinder event interface to be
made rather more generic than what is currently being looked at - nova
is not the only project that could benefit from better status updates
than polling. Dashboards and even the CLI could also potentially
benefit.


-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][globalization] Need input on how to proceed .

2014-05-01 Thread Duncan Thomas
That sounds like a sensible way forward, yes.

If the dependency is not needed, then great; that makes review and merge even easier.

Thanks
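(For readers following the thread quoted below: the "explicit import of _" change amounts to roughly this per-module pattern. This is a hedged sketch using stdlib gettext with an illustrative function name, not Cinder's actual oslo-based setup:)

```python
import gettext
import logging

# Bind _ explicitly instead of relying on the builtin that
# gettext.install() injects into every module.  NullTranslations
# makes the sketch runnable without compiled message catalogs.
_ = gettext.NullTranslations().gettext

LOG = logging.getLogger(__name__)

def extend_volume(requested_gb, available_gb):
    # Per the plan discussed in this thread, debug messages stay
    # untranslated.
    LOG.debug("extend requested=%s available=%s",
              requested_gb, available_gb)
    if requested_gb > available_gb:
        # User-facing messages keep the _() translation marker.
        raise ValueError(_("Insufficient free space to extend volume"))
    return requested_gb
```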

On 28 April 2014 17:03, Jay S. Bryant  wrote:
> Duncan,
>
> Thanks for the response.  Have some additional thoughts, in-line, below:
>
>
> On Mon, 2014-04-28 at 12:15 +0100, Duncan Thomas wrote:
>> Two separate patches, or even two chains of separate patches, will
>> make reviewing and more importantly (hopefully temporary) backouts
>> easier. It will also reduce the number of merge conflicts, which are
>> still likely to be substantial.
>
> True, I suppose we need to keep in mind the fact that we might want to
> make this be easy to back-out in the future.  Hopefully it isn't an
> issue this time around though.
>
>> There's no benefit at all to all of this being done in one patch, and
>> substantial costs. Doing the conversion by sections seems like the way
>> forward.
>
> So, let me propose a different process here.  Handling the i18n and
> removal of debug separately instead.  First, propose one patch that will
> add the explicit import of '_' to all files.  There will be a lot of
> files touched, but they all will be 1 liners.  Then make the patch for
> the re-enablement of lazy translation a second patch that is dependent
> upon the first patch.
>
> Then handle removal of _() from DEBUG logs as a separate issue once the
> one above has merged.  For that change do it in multiple patches divided
> by section.  Make the sections be the top level directories under
> cinder/ ?  Does that sound like a reasonable plan?
>
>>
>> Doing both around the same time (maybe as dependant patches) seems reasonable
>>
>
> As I think about it, I don't know that the debug translation removal
> needs to be dependent, but we could work it out that way if you feel
> that is important.
>
> Let me know what you think.
>
> Thanks!
>
>> On 27 April 2014 00:20, Jay S. Bryant  wrote:
>> > All,
>> >
>> > I am looking for feedback on how to complete implementation of i18n
>> > support for Cinder.  I need to open a new BluePrint for Juno as soon as
>> > the cinder-specs process is available.  In the mean time I would like to
>> > start working on this and need feedback on the scope I should undertake
>> > with this.
>> >
>> > First, the majority of the code for i18n support went in with Icehouse.
>> > There is just a small change that is needed to actually enable Lazy
>> > Translation again.  I want to get this enabled as soon as possible to
>> > get plenty of runtime on the code for Icehouse.
>> >
>> > The second change is to add an explicit export for '_' to all of our
>> > files to be consistent with other projects. [1]  This is also the safer
>> > way to implement i18n.  My plan is to integrate the change as part of
>> > the i18n work.  Unfortunately this will touch many of the files in
>> > Cinder.
>> >
>> > Given that fact, this brings me to the item I need feedback upon.  It
>> > appears that Nova is moving forward with the plan to remove translation
>> > of debug messages as there was a recent patch submitted to enable a
>> > check for translated DEBUG messages.  Given that fact, would it be an
>> > appropriate time, while adding the explicit import of '_' to also remove
>> > translation of debug messages.  It is going to make the commit for
>> > enabling Lazy Translation much bigger, but it would also take out
>> > several work items that need to be addressed at once.  I am willing to
>> > undertake the effort if I have support for the changes.
>> >
>> > Please let me know your thoughts.
>> >
>> > Thanks!
>> > Jay
>> > (jungleboyj on freenode)
>> >
>> > [1] https://bugs.launchpad.net/cinder/+bug/1306275
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

2014-05-01 Thread Matthew Treinish
On Thu, May 01, 2014 at 06:18:10PM +0900, Ken'ichi Ohmichi wrote:
> # Sorry for sending this again, previous mail was unreadable.
> 
> 2014-04-28 11:54 GMT+09:00 Ken'ichi Ohmichi :
> >>
> >> This is also why there are a bunch of nova v2 extensions that just add
> >> properties to an existing API. I think in v3 the proposal was to do this 
> >> with
> >> microversioning of the plugins. (we don't have a way to configure
> >> microversioned v3 api plugins in tempest yet, but we can cross that bridge 
> >> when
> >> the time comes) Either way it will allow tempest to have in config which
> >> behavior to expect.
> >
> > Good point, my current understanding is:
> > When adding new API parameters to the existing APIs, these parameters should
> > be API extensions according to the above guidelines. So we have three 
> > options
> > for handling API extensions in Tempest:
> >
> > 1. Consider them as optional, and cannot block the incompatible
> > changes of them. (Current)
> > 2. Consider them as required based on tempest.conf, and can block the
> > incompatible changes.
> > 3. Consider them as required automatically with microversioning, and
> > can block the incompatible changes.
> 
> I investigated the way of the above option 3, then have one question
> about current Tempest implementation.
> 
> Now verify_tempest_config tool gets API extension list from each
> service including Nova and verifies API extension config of tempest.conf
> based on the list.
> Can we use the list for selecting what extension tests run instead of
> the verification?
> As you said In the previous IRC meeting, current API tests will be
> skipped if the test which is decorated with requires_ext() and the
> extension is not specified in tempest.conf. I feel it would be nice
> that Tempest gets API extension list and selects API tests automatically
> based on the list.

So we used to do this type of autodiscovery in tempest, but we stopped because
it let bugs slip through the gate. This topic has come up several times in the
past, most recently in discussing reorganizing the config file. [1] This is why
we put [2] in the tempest README. I agree autodiscovery would be simpler, but
the problem is that, because we use tempest as the gate, a bug that caused
autodiscovery to differ from what was expected would make the tests just
silently skip. This would often go unnoticed because of the sheer volume of
tempest tests (I think we're currently at ~2300). I also feel that explicitly
defining what is expected to be enabled is a key requirement for branchless
tempest for the same reason.

The verify_tempest_config tool was an attempt at a compromise between being
explicit and also using auto discovery: it uses the APIs to help create a
config file that reflects the current configuration state of the services. It's
still a WIP though, and it's really just meant to be a user tool. I don't ever
see it being included in our gate workflow.

> In addition, The methods which are decorated with requires_ext() are
> test methods now, but I think it would be better to decorate client
> methods(get_hypervisor_list, etc.) because each extension loading
> condition affects available APIs.

So my concern with decorating the client methods directly is that it might raise
the skip too late and we'll end up leaking resources. But, I haven't tried it so
it might work fine without leaking anything. I agree that it would make skipping
based on extensions easier because it's really the client methods that depend on
the extensions. So give it a shot and lets see if it works. The only other
complication is the scenario, and cli tests because they don't use the tempest
clients. But, we can just handle that by decorating the test methods like we do
now.
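For context, the skip logic under discussion can be modeled loosely like this. It is a simplified sketch: tempest's real requires_ext reads the extension list from tempest.conf rather than a module-level set, and the decorated client functions here are stand-ins:

```python
import functools
import unittest

# Stand-in for the per-service extension list parsed from tempest.conf.
ENABLED_EXTENSIONS = {"os-hypervisors"}

def requires_ext(extension):
    """Skip the wrapped callable when the extension is not configured."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if extension not in ENABLED_EXTENSIONS:
                raise unittest.SkipTest(
                    "%s extension not enabled" % extension)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@requires_ext("os-hypervisors")
def get_hypervisor_list():
    return ["hv1"]     # pretend client call

@requires_ext("os-agents")
def get_agent_list():
    return ["agent1"]  # would skip: extension not enabled
```

Decorating the client method, as suggested above, means SkipTest fires on the first client call; the open question is whether that happens early enough to avoid leaking resources the test has already created.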


Thanks,

Matt Treinish

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-October/016859.html
[2] http://git.openstack.org/cgit/openstack/tempest/tree/README.rst#n16

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Objects not getting distributed across the swift cluster...

2014-05-01 Thread Chuck Thier
Hi Shyam,

If I am reading your ring output correctly, it looks like only the devices
in node .202 have a weight set, and thus why all of your objects are going
to that one node.  You can update the weight of the other devices, and
rebalance, and things should get distributed correctly.

--
Chuck


On Thu, May 1, 2014 at 5:28 AM, Shyam Prasad N wrote:

> Hi,
>
> I created a swift cluster and configured the rings like this...
>
> swift-ring-builder object.builder create 10 3 1
>
> ubuntu-202:/etc/swift$ swift-ring-builder object.builder
> object.builder, build version 12
> 1024 partitions, 3.00 replicas, 1 regions, 4 zones, 12 devices, 300.00 balance
> The minimum number of hours before a partition can be reassigned is 1
> Devices: id region zone  ip address  port  replication ip  repl. port  name  weight  partitions  balance  meta
>           0      1    1  10.3.0.202  6010      10.3.0.202        6010  xvdb    1.00        1024   300.00
>           1      1    1  10.3.0.202  6020      10.3.0.202        6020  xvdc    1.00        1024   300.00
>           2      1    1  10.3.0.202  6030      10.3.0.202        6030  xvde    1.00        1024   300.00
>           3      1    2  10.3.0.212  6010      10.3.0.212        6010  xvdb    1.00           0  -100.00
>           4      1    2  10.3.0.212  6020      10.3.0.212        6020  xvdc    1.00           0  -100.00
>           5      1    2  10.3.0.212  6030      10.3.0.212        6030  xvde    1.00           0  -100.00
>           6      1    3  10.3.0.222  6010      10.3.0.222        6010  xvdb    1.00           0  -100.00
>           7      1    3  10.3.0.222  6020      10.3.0.222        6020  xvdc    1.00           0  -100.00
>           8      1    3  10.3.0.222  6030      10.3.0.222        6030  xvde    1.00           0  -100.00
>           9      1    4  10.3.0.232  6010      10.3.0.232        6010  xvdb    1.00           0  -100.00
>          10      1    4  10.3.0.232  6020      10.3.0.232        6020  xvdc    1.00           0  -100.00
>          11      1    4  10.3.0.232  6030      10.3.0.232        6030  xvde    1.00           0  -100.00
>
> Container and account rings have a similar configuration.
> Once the rings were created and all the disks were added to the rings like
> above, I ran rebalance on each ring. (I ran rebalance after adding each of
> the node above.)
> Then I immediately scp the rings to all other nodes in the cluster.
>
> I now observe that the objects are all going to 10.3.0.202. I don't see
> the objects being replicated to the other nodes. So much so that 202 is
> approaching 100% disk usage, while other nodes are almost completely empty.
> What am I doing wrong? Am I not supposed to run rebalance operation after
> addition of each disk/node?
>
> Thanks in advance for the help.
>
> --
> -Shyam
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] No meeting this week

2014-05-01 Thread Matt Riedemann



On 5/1/2014 1:58 AM, Michael Still wrote:

Hi.

I was intending to run a nova meeting this week, but I don't think it's
worth a mutiny over the "off week" that the rest of the project is
respecting. The only agenda items I can think of are:

  - please prepare your summit sessions
  - I've attempted to fix the clashes in scheduling that are reported
  - please fix some bugs!

I think those are self explanatory to be honest. If any discussion is
required, please use this thread for it. So... keep at it!

Cheers,
Michael



I might be in the minority, but I'm still "on" this week and while there 
might not be a ton of content to talk about or on the agenda (people 
rarely update the agenda wiki directly I've found anyway), I feel like 
we should still have a meeting at some point - we haven't had one in 
about a month now.  I realize people are either burned out on Icehouse 
or getting ready for Juno, but I suspect people would at least still 
show up to a meeting and topics would come up, especially around people 
with nova-specs up for review.


Maybe I'm just lonely :) but would be nice to have a Nova meeting soon 
since I don't think email spurs the same constructive discussion that 
can happen in the meetings, and those are usually off-topic anyway.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate] Friday Meeting

2014-05-01 Thread Nikolay Starodubtsev
Same for Russia, but I'm not sure if it's just 1 and 2 May, or more.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1


2014-05-01 5:59 GMT-07:00 Fuente, Pablo A :

> +1
> I can't attend the meeting either; I will be at the wedding of a
> friend. BTW, I would like to tell you guys that for us (Argentina
> folks), today and tomorrow are days off.
>
> On Thu, 2014-05-01 at 00:11 +0400, Dina Belova wrote:
> > +1
> >
> >
> > On Wed, Apr 30, 2014 at 11:41 PM, Sylvain Bauza
> >  wrote:
> > Hi Dina,
> >
> >
> > I forgot yesterday to mention it was my last day at Bull, so
> > the end of week was off-work until Monday.
> > As a corollar, I won't be able to attend Friday meeting.
> >
> >
> > Let's cancel this meeting and raise topics in mailing-list if
> > needed.
> >
> >
> > -Sylvain
> >
> >
> > 2014-04-30 19:17 GMT+02:00 Dina Belova :
> > Folks, o/
> >
> >
> > I finally got my dates for the US trip, and I have to
> > say, that I won't be able to attend our closest Friday
> > meeting as I'll be flying at this moment)
> >
> >
> > Sylvain, will you be able to hold the meeting?
> >
> >
> > Best regards,
> >
> > Dina Belova
> >
> > Software Engineer
> >
> > Mirantis Inc.
> >
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> >
> > --
> > Best regards,
> >
> > Dina Belova
> >
> > Software Engineer
> >
> > Mirantis Inc.
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] should we have an IRC meeting next week ?

2014-05-01 Thread Matt Wagner

On 30/04/14 15:37 -0700, Devananda van der Veen wrote:

Hi all,

Just a reminder that May 5th is our next scheduled meeting day, but I
probably won't make it, because I'll be just getting back from one trip and
start two consecutive weeks of conference travel early the next morning.
Chris Krelle (nobodycam) has offered to chair that meeting in my absence.
The agenda looks pretty light at this point, and any serious discussions
should just be punted to the summit anyway, so if folks want to cancel the
meeting, I think that's fine.


I would attend, though I personally have nothing to propose.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] RackSpace API review (multi-call)

2014-05-01 Thread Trevor Vardeman
Vijay,

Comments in-line, hope I can clear some of this up for you :)

-Trevor

On Thu, 2014-05-01 at 13:16 +, Vijay Venkatachalam wrote:
> I am expecting to be more active on community on the LBaaS front. 
> 
> May be reviewing and picking-up a few items to  work as well.
> 
> I had a look at the proposal. Seeing Single & Multi-Call approach for each 
> workflow 
> makes it easy to understand. 
> 
> Thanks for the clear documentation, it is welcoming to review :-). I was not 
> allowed to comment on WorkFlow doc, can you enable comments?
> 
> The single-call approach essentially creates the global pool/VIP. Once 
> VIP/Pool is created using single call, are they reusable in multi-call?
> For example: Can a pool created for HTTP endpoint/loadbalancer be used in 
> HTTPS endpoint LB where termination occurs as well?

From what I remember discussing with my team (being a developer under
Jorge's umbrella) There is a 1-M relationship between load balancer and
pool.  Also, the protocol is specified on the Load Balancer, not the
pool, meaning you could expose TCP traffic via one Load Balancer to a
pool, and HTTP traffic via another Load Balancer to that same pool.
This is easily modified such 

> 
> Also, would it be useful to include PUT as a single call? I see PUT only for 
> POOL not for LB.
> A user who started with single-call  POST, might like to continue to use the 
> same approach for PUT/update as well.

On the fifth page of the document found here:
https://docs.google.com/document/d/1mTfkkdnPAd4tWOMZAdwHEx7IuFZDULjG9bTmWyXe-zo/edit
There is a PUT detailed for a Load Balancer.  There should be support
for PUT on any parent object assuming the fields one would update are
not read-only.

> 
> Thanks,
> Vijay V.
> 
> -Original Message-
> From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com] 
> Sent: Thursday, May 1, 2014 3:57 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process
> 
> Oops! Everywhere I said Samuel I meant Stephen. Sorry you both have SB as you 
> initials so I got confused. :)
> 
> Cheers,
> --Jorge
> 
> 
> 
> 
> On 4/30/14 5:17 PM, "Jorge Miramontes" 
> wrote:
> 
> >Hey everyone,
> >
> >I agree that we need to be preparing for the summit. Using Google docs 
> >mixed with Openstack wiki works for me right now. I need to become more 
> >familiar the gerrit process and I agree with Samuel that it is not 
> >conducive to "large" design discussions. That being said I'd like to 
> >add my thoughts on how I think we can most effectively get stuff done.
> >
> >As everyone knows there are many new players from across the industry 
> >that have an interest in Neutron LBaaS. Companies I currently see 
> >involved/interested are Mirantis, Blue Box Group, HP, PNNL, Citrix, 
> >eBay/Paypal and Rackspace. We also have individuals involved as well. I 
> >echo Kyle's sentiment on the passion everyone is bringing to the project!
> >Coming into this project a few months ago I saw that a few things 
> >needed to be done. Most notably, I realized that gathering everyone's 
> >expectations on what they wanted Neutron LBaaS to be was going to be 
> >crucial. Hence, I created the requirements document. Written 
> >requirements are important within a single organization. They are even 
> >more important when multiple organizations are working together because 
> >everyone is spread out across the world and every organization has a 
> >different development process. Again, my goal with the requirements 
> >document is to make sure that everyone's voice in the community is 
> >taken into consideration. The benefit I've seen from this document is 
> >that we ask "Why?" to each other, iterate on the document and in the 
> >end have a clear understanding of everyone's motives. We also learn 
> >from each other by doing this which is one of the great benefits of open 
> >source.
> >
> >Now that we have a set of requirements the next question to ask is, 
> >"How do we prioritize requirements so that we can start designing and 
> >implementing them"? If this project were a completely new piece of 
> >software I would argue that we iterate on individual features based on 
> >anecdotal information. In essence I would argue an agile approach.
> >However, most of the companies involved have been operating LBaaS for a 
> >while now. Rackspace, for example, has been operating LBaaS for the 
> >better part of 4 years. We have a clear understanding of what features 
> >our customers want and how to operate at scale. I believe other 
> >operators of LBaaS have the same understanding of their customers and 
> >their operational needs. I guess my main point is that, collectively, 
> >we have data to back up which requirements we should be working on. 
> >That doesn't mean we preclude requirements based on anecdotal 
> >information (i.e. "Our customers are saying they want new shiny feature 
> >X"). At the end of the da

[openstack-dev] [climate] Client and REST API versioning

2014-05-01 Thread Fuente, Pablo A
Hi,
We recently implemented our V2 REST API, and at the moment we are
trying to get our python client working against this new version. For
this reason, we started a discussion about how the client will choose/set
the REST API version to use. BTW, we are not deprecating our V1 REST
API, so our client still needs to support it.
These are our discussion points:

1 - Should the URL stored in the Keystone service catalog have the version?
In this case, our client will get the REST API URL from Keystone, parse
it, select the correct version of the client code and then start
performing requests. But if we choose this path and a user of the client
decides to use the V1 REST API version via
--os-reservation-api-version, the client has to strip the version from the
URL and then append the version that the user wants. The issue here is
that we would be storing a version in a URL that we cannot use in some
cases. In other words, the version in the URL could be overridden.

2 - Should Climate store only one URL in the Keystone catalog, without a
version?
Here the client will know the default version to use, appending that
version to the service catalog URL. When the client user requests
another version, the client simply appends that version to the end. The
con of this option is that anyone planning to use the REST API without
our client needs to know how we handle the version. Here we could
provide /versions in order to tell how we are handling/naming versions.

3 - Should Climate store all the REST API URLs with the version at the
end, using versions in service types? e.g. reservation and
reservationV2
Here the client will get the version it needs by querying the service
type by version. Some projects do this, but to me this option seems
similar to 2, with the con that when Climate deprecates
V1, the only service type will be reservationV2, which sounds weird to
me.
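For illustration, options 1 and 2 boil down to the same client-side URL handling: normalize whatever the catalog returns, then append the version the user asked for, the catalog's own version, or a default. A minimal sketch of that logic (the `v1` default and the function name are assumptions here, not Climate's actual client code):

```python
import re

DEFAULT_VERSION = 'v1'  # assumed client-side default, not Climate's actual one


def versioned_endpoint(catalog_url, requested_version=None):
    """Return the endpoint to use, whether or not the service catalog
    URL already carries a /vN suffix (options 1 and 2 above)."""
    url = catalog_url.rstrip('/')
    match = re.search(r'/(v\d+)$', url)
    catalog_version = match.group(1) if match else None
    if match:
        url = url[:match.start()]  # strip the version stored in the catalog
    # --os-reservation-api-version (requested_version) wins over the
    # catalog's version, which wins over the client default.
    version = requested_version or catalog_version or DEFAULT_VERSION
    return '%s/%s' % (url, version)
```

With a versioned catalog entry the user's flag overrides it; with an unversioned entry the default is appended, so both storage schemes end up handled by the same code path.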

We would like to get your feedback about these points (or new ones) in
order to get this implemented in the right way.

Pablo.
P.S. I hope that all the options in this email reflect correctly what we
discussed at Climate. If not, please add/clarify/remove what you want.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process

2014-05-01 Thread Eugene Nikanorov
Sorry, missed the phrase ending:

> It's not because developers of lbaas have not thought about it, it's
> because we were limited in dev and core reviewing
> resources, so implement
>
so implementing some of the operators requirements was always in our plans.

Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] RackSpace API review (multi-call)

2014-05-01 Thread Vijay Venkatachalam

I am expecting to be more active in the community on the LBaaS front,
maybe reviewing and picking up a few items to work on as well.

I had a look at the proposal. Seeing the single- and multi-call approaches
for each workflow makes it easy to understand.

Thanks for the clear documentation; it is inviting to review :-). I was not
allowed to comment on the WorkFlow doc; can you enable comments?

The single-call approach essentially creates the global pool/VIP. Once a
VIP/pool is created using a single call, is it reusable in multi-call mode?
For example: can a pool created for an HTTP endpoint/loadbalancer be used in
an HTTPS endpoint LB where termination occurs as well?

Also, would it be useful to include PUT as a single call? I see PUT only for
POOL, not for LB.
A user who started with a single-call POST might like to continue using the
same approach for PUT/update as well.

Thanks,
Vijay V.

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com] 
Sent: Thursday, May 1, 2014 3:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process

Oops! Everywhere I said Samuel I meant Stephen. Sorry, you both have SB as your
initials so I got confused. :)

Cheers,
--Jorge




On 4/30/14 5:17 PM, "Jorge Miramontes" 
wrote:

>Hey everyone,
>
>I agree that we need to be preparing for the summit. Using Google docs 
>mixed with Openstack wiki works for me right now. I need to become more 
>familiar with the gerrit process and I agree with Samuel that it is not 
>conducive to "large" design discussions. That being said I'd like to 
>add my thoughts on how I think we can most effectively get stuff done.
>
>As everyone knows there are many new players from across the industry 
>that have an interest in Neutron LBaaS. Companies I currently see 
>involved/interested are Mirantis, Blue Box Group, HP, PNNL, Citrix, 
>eBay/Paypal and Rackspace. We also have individuals involved as well. I 
>echo Kyle's sentiment on the passion everyone is bringing to the project!
>Coming into this project a few months ago I saw that a few things 
>needed to be done. Most notably, I realized that gathering everyone's 
>expectations on what they wanted Neutron LBaaS to be was going to be 
>crucial. Hence, I created the requirements document. Written 
>requirements are important within a single organization. They are even 
>more important when multiple organizations are working together because 
>everyone is spread out across the world and every organization has a 
>different development process. Again, my goal with the requirements 
>document is to make sure that everyone's voice in the community is 
>taken into consideration. The benefit I've seen from this document is 
>that we ask "Why?" to each other, iterate on the document and in the 
>end have a clear understanding of everyone's motives. We also learn 
>from each other by doing this which is one of the great benefits of open 
>source.
>
>Now that we have a set of requirements the next question to ask is, 
>"How do we prioritize requirements so that we can start designing and 
>implementing them"? If this project were a completely new piece of 
>software I would argue that we iterate on individual features based on 
>anecdotal information. In essence I would argue an agile approach.
>However, most of the companies involved have been operating LBaaS for a 
>while now. Rackspace, for example, has been operating LBaaS for the 
>better part of 4 years. We have a clear understanding of what features 
>our customers want and how to operate at scale. I believe other 
>operators of LBaaS have the same understanding of their customers and 
>their operational needs. I guess my main point is that, collectively, 
>we have data to back up which requirements we should be working on. 
>That doesn't mean we preclude requirements based on anecdotal 
>information (i.e. "Our customers are saying they want new shiny feature 
>X"). At the end of the day I want to prioritize the community's 
>requirements based on factual data and anecdotal information.
>
>Assuming requirements are prioritized (which as of today we have a 
>pretty good idea of these priorities) the next step is to design before 
>laying down any actual code. I agree with Samuel that pushing the cart 
>before the horse is a bad idea in this case (and it usually is the case 
>in software development), especially since we have a pretty clear idea 
>on what we need to be designing for. I understand that the current code 
>base has been worked on by many individuals and the work done thus far 
>is the reason why so many new faces are getting involved. However, we 
>now have a completely updated set of requirements that the community 
>has put together and trying to fit the requirements to existing code 
>may or may not work. In my experience, I would argue that 99% of the 
>time duct-taping existing code to fit in new requirements results in 
>buggy software.

Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process

2014-05-01 Thread Eugene Nikanorov
Hi Jorge,

A couple of inline comments:

>
> Now that we have a set of requirements the next question to ask is, "How
> do we prioritize requirements so that we can start designing and
> implementing them"?

Prioritization basically means that we want to support everything, and we only
choose what is
more important right now versus what is less important and can be implemented
later.

Assuming requirements are prioritized (which as of today we have a pretty
> good idea of these priorities) the next step is to design before laying
> down any actual code.

That's true. I'd only like to note that there actually were a road map and
requirements
with a design before the code was written, both for the features that
are already implemented
and for those which are now hanging in limbo.

I agree with Samuel that pushing the cart before the
> horse is a bad idea in this case (and it usually is the case in software
> development), especially since we have a pretty clear idea on what we need
> to be designing for. I understand that the current code base has been
> worked on by many individuals and the work done thus far is the reason why
> so many new faces are getting involved. However, we now have a completely
> updated set of requirements that the community has put together and trying
> to fit the requirements to existing code may or may not work.



> In my experience, I would argue that 99% of the time duct-taping existing
> code
>
I really don't like the term "duct-taping" here.
Here's the problem: you'll never be able to implement everything at
once; you have to do it incrementally.
That's how an ecosystem works.
Each step can then be considered 'duct-taping', because each state you
get to
does not account for everything that was planned.
And for sure, there will be design mistakes that need to be fixed.
In the end there will be another cloud provider with another set of
requirements...

So in order to deal with that in a productive way, there are a few
guidelines:
1) Follow the style of the ecosystem. Consistency is important. Keeping the
style helps developers, reviewers and users of the product.
2) Preserve backward compatibility whenever possible.
That's a very important point, which however can be 'relaxed' if the existing
code base is completely unable to evolve to support new requirements.


> to fit in new requirements results in buggy software. That being said, I
> usually don't like to rebuild a project from scratch. If I can I try to
> refactor as much as possible first. However, in this case we have a
> particular set of requirements that changes the game. Particularly,
> operator requirements have not been given the attention they deserve.
>
Operator requirements really don't change the game here.
You're right that operator requirements were not given the attention.
It's not because the developers of lbaas have not thought about them; it's
because we were limited in dev and core reviewing
resources, so implementing some of them was always in our plans.
But what is more important, operator requirements mostly don't affect the
tenant API that we were discussing.
It's true that almost none of them are addressed by the existing code base,
but that only means they should be implemented.

When talking about the existing code base, I'd expect the following questions
to be asked before any decision is made:

1) How can we do (implement) X with the existing code base?
2) If we can't do X, is it possible to fix the code in a simple way and
just implement X on top of what exists?

If both answers are "No", and X is really impossible with the existing code
base, that could be a reason to deeply revise it.
Looking at the operator requirements, I don't see a single one that would lead
to that.

Because several of us have been spending large amounts of time on API
> proposals, and because we can safely assume that most operational
> requirements are abstracted into the driver layer I say we continue the
> conversation around the different proposals since this is the area we
> definitely need consensus on. So far there are three proposals--Stephen's,
> Rackspace's and Eugene's.

I'd like to comment that my proposal is actually a small part of Stephen's,
touching only the core lbaas API,
so I would not treat it separately in this context.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate] Friday Meeting

2014-05-01 Thread Fuente, Pablo A
+1
I can't attend the meeting either; I will be at a friend's
wedding. BTW, I would like to tell you guys that for us (Argentina
folks), today and tomorrow are days off.

On Thu, 2014-05-01 at 00:11 +0400, Dina Belova wrote:
> +1
> 
> 
> On Wed, Apr 30, 2014 at 11:41 PM, Sylvain Bauza
>  wrote:
> Hi Dina,
> 
> 
> I forgot yesterday to mention that it was my last day at Bull, so
> the end of the week is off-work until Monday.
> As a corollary, I won't be able to attend the Friday meeting.
> 
> 
> Let's cancel this meeting and raise topics in mailing-list if
> needed.
> 
> 
> -Sylvain
> 
> 
> 2014-04-30 19:17 GMT+02:00 Dina Belova :
> Folks, o/
> 
> 
> I finally got my dates for the US trip, and I have to
> say, that I won't be able to attend our closest Friday
> meeting as I'll be flying at this moment)
> 
> 
> Sylvain, will you be able to hold the meeting?
> 
> 
> Best regards,
> 
> Dina Belova
> 
> Software Engineer
> 
> Mirantis Inc.
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> 
> -- 
> Best regards,
> 
> Dina Belova
> 
> Software Engineer
> 
> Mirantis Inc.
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-05-01 Thread Jay Lau
Jay Pipes and all, I'm planning to merge this topic into
http://junodesignsummit.sched.org/event/77801877aa42b595f14ae8b020cd1999
after some discussion in this week's Gantt IRC meeting; hope that is OK.

Thanks!


2014-05-01 19:56 GMT+08:00 Day, Phil :

> > >
> > > In the original API there was a way to remove members from the group.
> > > This didn't make it into the code that was submitted.
> >
> > Well, it didn't make it in because it was broken. If you add an instance
> to a
> > group after it's running, a migration may need to take place in order to
> keep
> > the semantics of the group. That means that for a while the policy will
> be
> > being violated, and if we can't migrate the instance somewhere to
> satisfy the
> > policy then we need to either drop it back out, or be in violation.
> Either some
> > additional states (such as being queued for inclusion in a group, etc)
> may be
> > required, or some additional footnotes on what it means to be in a group
> > might have to be made.
> >
> > It was for the above reasons, IIRC, that we decided to leave that bit
> out since
> > the semantics and consequences clearly hadn't been fully thought-out.
> > Obviously they can be addressed, but I fear the result will be ... ugly.
> I think
> > there's a definite possibility that leaving out those dynamic functions
> will look
> > more desirable than an actual implementation.
> >
> If we look at a server group as a general container of servers, that may
> have an attribute that expresses scheduling policy, then it doesn't seem too
> ugly to restrict the conditions under which an add is allowed to only those
> that don't break the (optional) policy. We wouldn't even have to go to the
> scheduler to work this out.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]Conforming to Open Stack API style in LBaaS

2014-05-01 Thread Eugene Nikanorov
Hi,

My opinion is that keeping the neutron API style is very important, but it
doesn't prevent a single-call API from being implemented.
A flat, fine-grained API is obviously the most flexible, but that doesn't mean
we can't support a single-call API as well.

By the way, looking at the implementation I see that such a (single-call) API
should also be supported in the drivers, so it is not just something 'on
top' of the fine-grained API. That requirement comes from the fact that the
fine-grained API is asynchronous.
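To make the contrast concrete, a single-call create bundles the whole load balancer tree into one request body, while the fine-grained style issues one call per object with children referencing parent IDs. The shapes below are purely illustrative (field names, paths, and the `<pool-id>` placeholder do not come from any of the actual proposals):

```python
# Single-call style: the whole load balancer graph in one request body.
single_call_payload = {
    'loadbalancer': {
        'name': 'web-lb',
        'vip': {'address': '203.0.113.10', 'protocol_port': 80},
        'pool': {
            'protocol': 'HTTP',
            'members': [
                {'address': '10.0.0.11', 'protocol_port': 8080},
                {'address': '10.0.0.12', 'protocol_port': 8080},
            ],
        },
    },
}

# Fine-grained style: one call per object; '<pool-id>' stands in for the
# id returned by the first (asynchronous) call, which is exactly why the
# drivers have to be aware of which style is in use.
fine_grained_calls = [
    ('POST', '/v2.0/pools', {'pool': {'protocol': 'HTTP'}}),
    ('POST', '/v2.0/members', {'member': {'pool_id': '<pool-id>',
                                          'address': '10.0.0.11',
                                          'protocol_port': 8080}}),
    ('POST', '/v2.0/vips', {'vip': {'pool_id': '<pool-id>',
                                    'address': '203.0.113.10',
                                    'protocol_port': 80}}),
]
```

The dependency of later calls on earlier ids is what makes the asynchronous fine-grained flow harder for a UI, and why single-call support ends up touching the drivers rather than sitting purely on top.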

Thanks,
Eugene.


On Thu, May 1, 2014 at 5:18 AM, Kyle Mestery wrote:

> I am fully onboard with the single-call approach as well, per this thread.
>
> On Wed, Apr 30, 2014 at 6:54 PM, Stephen Balukoff 
> wrote:
> > It's also worth stating that coding a web UI to deploy a new service is
> > often easier done when single-call is an option. (ie. only one failure
> > scenario to deal with.) I don't see a strong reason we shouldn't allow
> both
> > single-call creation of whole bunch of related objects, as well as a
> > workflow involving the creation of these objects individually.
> >
> >
> > On Wed, Apr 30, 2014 at 3:50 PM, Jorge Miramontes
> >  wrote:
> >>
> >> I agree it may be odd, but is that a strong argument? To me, following
> >> RESTful style/constructs is the main thing to consider. If people can
> >> specify everything in the parent resource then let them (i.e. single
> call).
> >> If they want to specify at a more granular level then let them do that
> too
> >> (i.e. multiple calls). At the end of the day the API user can choose the
> >> style they want.
> >>
> >> Cheers,
> >> --Jorge
> >>
> >> From: Youcef Laribi 
> >> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> >> 
> >> Date: Wednesday, April 30, 2014 1:35 PM
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >> 
> >> Subject: Re: [openstack-dev] [Neutron][LBaaS]Conforming to Open Stack
> API
> >> style in LBaaS
> >>
> >> Sam,
> >>
> >>
> >>
> >> I think it’s important to keep the Neutron API style consistent. It
> would
> >> be odd if LBaaS uses a different style than the rest of the Neutron
> APIs.
> >>
> >>
> >>
> >> Youcef
> >>
> >>
> >>
> >> From: Samuel Bercovici [mailto:samu...@radware.com]
> >> Sent: Wednesday, April 30, 2014 10:59 AM
> >> To: openstack-dev@lists.openstack.org
> >> Subject: [openstack-dev] [Neutron][LBaaS]Conforming to Open Stack API
> >> style in LBaaS
> >>
> >>
> >>
> >> Hi Everyone,
> >>
> >>
> >>
> >> During the last few days I have looked into the different LBaaS API
> >> proposals.
> >>
> >> I have also looked on the API style used in Neutron. I wanted to see how
> >> Neutron APIs addressed “tree” like object models.
> >>
> >> Follows my observation:
> >>
> >> 1.   Security groups -
> >>
> http://docs.openstack.org/api/openstack-network/2.0/content/security-groups-ext.html
> )
> >> –
> >>
> >> a.   security-group-rules are children of security-groups, the
> >> capability to create a security group with its children in a single
> call is
> >> not possible.
> >>
> >> b.   The capability to create security-group-rules using the
> following
> >> URI path v2.0/security-groups/{SG-ID}/security-group-rules is not
> supported
> >>
> >> c.The capability to update security-group-rules using the
> >> following URI path
> >> v2.0/security-groups/{SG-ID}/security-group-rules/{SGR-ID} is not
> supported
> >>
> >> d.   The notion of creating security-group-rules (child object)
> >> without providing the parent {SG-ID} is not supported
> >>
> >> 2.   Firewall as a service -
> >>
> http://docs.openstack.org/api/openstack-network/2.0/content/fwaas_ext.html-
> >> the API to manage firewall_policy and firewall_rule which have parent
> child
> >> relationships behaves the same way as Security groups
> >>
> >> 3.   Group Policy – this is work in progress -
> >> https://wiki.openstack.org/wiki/Neutron/GroupPolicy - If I understand
> >> correctly, this API has a complex object model while the API adheres to
> the
> >> way other neutron APIs are done (ex: flat model, granular api, etc.)
> >>
> >>
> >>
> >> How critical is it to preserve a consistent API style for LBaaS?
> >>
> >> Should this be a consideration when evaluating API proposals?
> >>
> >>
> >>
> >> Regards,
> >>
> >> -Sam.
> >>
> >>
> >>
> >>
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> > --
> > Stephen Balukoff
> > Blue Box Group, LLC
> > (800)613-4305 x807
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [qa] [cinder] Do we now require schema response validation in tempest clients?

2014-05-01 Thread Ghanshyam Mann
Hi David, Ken'ichi,

Thursday, May 1, 2014 1:02 PM Ken'ichi Ohmichi : 
>Hi David,

>2014-05-01 5:44 GMT+09:00 David Kranz :
>> There have been a lot of patches that add the validation of response dicts.
>> We need a policy on whether this is required or not. For example, this 
>> patch
>>
>> https://review.openstack.org/#/c/87438/5
>>
>> is for the equivalent of 'cinder service-list' and is basically a 
>> copy of the nova test, which now does the validation. So two questions:
>>
>> Is cinder going to do this kind of checking?
>> If so, should new tests be required to do it on submission?

>I'm not sure someone will add the same validation that we are adding to the 
>Nova API tests to the Cinder API tests as well, but it would be nice for Cinder 
>and Tempest. The validation can be applied to the other projects (Cinder, etc.) 
>easily because the base framework is implemented in the common rest client of 
>Tempest.

Yes, it will be nice if we start implementing the validation part for other 
components' API tests as well. I can take this and start implementing the 
validation for the existing tests of Cinder. That can continue whenever new 
test cases are added.
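For reference, the Nova-style schemas under discussion are plain dicts pairing expected status codes with a JSON-schema-like body description, fed to a validator in the common rest client. A stripped-down illustration (the field names and the tiny checker are made up for this sketch; they are not the real Cinder service-list schema or Tempest's validator):

```python
# Illustrative response schema in the Nova API test style.
list_services_schema = {
    'status_code': [200],
    'response_body': {
        'type': 'object',
        'required': ['services'],
    },
}


def check_response(schema, status, body):
    """Tiny stand-in for the common rest client's validation step:
    check the status code and the required top-level keys."""
    if status not in schema['status_code']:
        raise AssertionError('unexpected status code: %s' % status)
    missing = [key for key in schema['response_body'].get('required', [])
               if key not in body]
    if missing:
        raise AssertionError('missing keys in response body: %s' % missing)
    return True
```

Because the schema is pure data, it can be reviewed (and landed) in a separate patch from the test that uses it, which is the split described above.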

>When adding new tests like https://review.openstack.org/#/c/87438 , I don't 
>have a strong opinion on including the validation as well. These schemas will 
>sometimes be large, and combining them in the same patch would make reviews 
>difficult. In the current Nova API test implementations, we separate them 
>into different patches.

I agree too.

Thanks
Ghanshyam Mann
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [nova][service group]improve host state detection

2014-05-01 Thread Day, Phil
>Nova now can detect host unreachable. But it fails to distinguish host 
>isolation, host death and the nova-compute service being down. When host 
>unreachable is reported, users have to find out the exact state by themselves 
>and then take the appropriate measure to recover. Therefore we'd like to 
>improve host detection for nova.

I guess this depends on the service group driver that you use. For example, if 
you use the DB driver, then there is a thread running on the compute manager 
that periodically updates the "alive" status - which includes both a liveness 
check (to the extent that the thread is still running) of the compute manager 
and a check that it can contact the DB. If the compute manager is using 
conductor, then it also implicitly includes a check that the compute manager 
can talk to the MQ (a nice side effect of conductor - before, a node could be 
"Up" because it could talk to the DB but not be able to process any messages).

So to me the DB driver already kind of covers "send network heartbeat to the 
central agent and write a timestamp in shared storage periodically" - so maybe 
this is more of a specific ServiceGroup driver issue rather than a generic 
ServiceGroup change?

Phil



From: Jiangying (Jenny) [mailto:jenny.jiangy...@huawei.com]
Sent: 28 April 2014 13:31
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova][service group]improve host state detection

Nova now can detect host unreachable. But it fails to distinguish host 
isolation, host death and the nova-compute service being down. When host 
unreachable is reported, users have to find out the exact state by themselves 
and then take the appropriate measure to recover. Therefore we'd like to 
improve host detection for nova.

Currently the service group API factors out the host detection and makes it a 
set of abstract internal APIs with a pluggable backend implementation. The 
backend we designed is as follows:

A central detection agent is introduced. When a member joins the service 
group, the member host starts sending network heartbeats to the central agent 
and writes a timestamp to shared storage periodically. When the central agent 
stops receiving network heartbeats from a member, it pings the member and 
checks the storage heartbeat before declaring the host to have failed.


network heartbeat | network ping | storage heartbeat | state               | reason
------------------|--------------|-------------------|---------------------|-------
OK                | -            | -                 | Running             | -
Not OK            | Not OK       | Not OK            | Dead                | hardware failure / abnormal host shutdown
Not OK            | OK           | Not OK            | Service unreachable | service process crashed
Not OK            | Not OK       | OK                | Isolated            | network unreachable

Based on the state recognition table, nova can discern the exact host state and 
assign the reasons.
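The recognition table translates directly into a small decision function; a sketch of that mapping (the signal and state names here are illustrative, not a proposed interface):

```python
def host_state(net_heartbeat_ok, ping_ok, storage_heartbeat_ok):
    """Map the three liveness signals to a host state per the table above."""
    if net_heartbeat_ok:
        # Network heartbeats arriving: ping/storage results don't matter.
        return 'Running'
    if not ping_ok and not storage_heartbeat_ok:
        return 'Dead'                 # hardware failure / abnormal shutdown
    if ping_ok and not storage_heartbeat_ok:
        return 'Service unreachable'  # service process crashed
    # Storage heartbeat still alive: the host runs but can't be reached.
    return 'Isolated'                 # network unreachable
```

With something like this, the service group backend can report the exact state and reason instead of a bare "unreachable".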

Thoughts?

Jenny

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-05-01 Thread Day, Phil
> >
> > In the original API there was a way to remove members from the group.
> > This didn't make it into the code that was submitted.
> 
> Well, it didn't make it in because it was broken. If you add an instance to a
> group after it's running, a migration may need to take place in order to keep
> the semantics of the group. That means that for a while the policy will be
> being violated, and if we can't migrate the instance somewhere to satisfy the
> policy then we need to either drop it back out, or be in violation. Either 
> some
> additional states (such as being queued for inclusion in a group, etc) may be
> required, or some additional footnotes on what it means to be in a group
> might have to be made.
> 
> It was for the above reasons, IIRC, that we decided to leave that bit out 
> since
> the semantics and consequences clearly hadn't been fully thought-out.
> Obviously they can be addressed, but I fear the result will be ... ugly. I 
> think
> there's a definite possibility that leaving out those dynamic functions will 
> look
> more desirable than an actual implementation.
> 
If we look at a server group as a general container of servers, that may have 
an attribute that expresses scheduling policy, then it doesn't seem too ugly to 
restrict the conditions under which an add is allowed to only those that don't 
break the (optional) policy. We wouldn't even have to go to the scheduler to 
work this out.
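That restriction could be as simple as a host-set check at add time, with no scheduler involvement and no migration ever needed. A sketch (the policy names follow the existing affinity/anti-affinity group policies; the function and its arguments are illustrative, not proposed Nova code):

```python
def can_add_to_group(policy, group_hosts, instance_host):
    """Allow an add only when it cannot violate the group's (optional)
    scheduling policy, so the instance never has to be migrated."""
    if policy is None:
        # A plain container of servers: no policy to break.
        return True
    if policy == 'anti-affinity':
        # The instance's host must not already be used by the group.
        return instance_host not in group_hosts
    if policy == 'affinity':
        # The instance must be on the group's host (or the group is empty).
        return not group_hosts or instance_host in group_hosts
    raise ValueError('unknown policy: %s' % policy)
```

Rejecting the add up front avoids the queued/violating intermediate states discussed above, at the cost of refusing some adds that a migration could have satisfied.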


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Objects not getting distributed across the swift cluster...

2014-05-01 Thread Shyam Prasad N
Hi,

I created a swift cluster and configured the rings like this...

swift-ring-builder object.builder create 10 3 1

ubuntu-202:/etc/swift$ swift-ring-builder object.builder
object.builder, build version 12
1024 partitions, 3.00 replicas, 1 regions, 4 zones, 12 devices, 300.00 balance
The minimum number of hours before a partition can be reassigned is 1
Devices:  id  region  zone  ip address  port  replication ip  replication port  name  weight  partitions  balance  meta
           0       1     1  10.3.0.202  6010      10.3.0.202              6010  xvdb    1.00        1024   300.00
           1       1     1  10.3.0.202  6020      10.3.0.202              6020  xvdc    1.00        1024   300.00
           2       1     1  10.3.0.202  6030      10.3.0.202              6030  xvde    1.00        1024   300.00
           3       1     2  10.3.0.212  6010      10.3.0.212              6010  xvdb    1.00           0  -100.00
           4       1     2  10.3.0.212  6020      10.3.0.212              6020  xvdc    1.00           0  -100.00
           5       1     2  10.3.0.212  6030      10.3.0.212              6030  xvde    1.00           0  -100.00
           6       1     3  10.3.0.222  6010      10.3.0.222              6010  xvdb    1.00           0  -100.00
           7       1     3  10.3.0.222  6020      10.3.0.222              6020  xvdc    1.00           0  -100.00
           8       1     3  10.3.0.222  6030      10.3.0.222              6030  xvde    1.00           0  -100.00
           9       1     4  10.3.0.232  6010      10.3.0.232              6010  xvdb    1.00           0  -100.00
          10       1     4  10.3.0.232  6020      10.3.0.232              6020  xvdc    1.00           0  -100.00
          11       1     4  10.3.0.232  6030      10.3.0.232              6030  xvde    1.00           0  -100.00

Container and account rings have a similar configuration.
Once the rings were created and all the disks were added to the rings as
above, I ran rebalance on each ring. (I ran rebalance after adding each of
the nodes above.)
Then I immediately scp'd the rings to all the other nodes in the cluster.

I now observe that the objects are all going to 10.3.0.202. I don't see the
objects being replicated to the other nodes, so much so that 202 is
approaching 100% disk usage while the other nodes are almost completely empty.
What am I doing wrong? Am I not supposed to run the rebalance operation after
the addition of each disk/node?

Thanks in advance for the help.

-- 
-Shyam
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] SSL VPN Implemenatation

2014-05-01 Thread Jarret Raim
Zang mentioned that part of the issue is that the private key has to be
stored in the OpenVPN config file. If the config files are generated and
can be stored, then storing the whole config file in Barbican protects the
private key (and any other settings) without having to try to deliver the
key to the OpenVPN endpoint in some non-standard way.
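One way to picture it: the agent renders the whole server config with the key inlined, and that single rendered blob is what would be stored as one Barbican secret. The template and field names below are illustrative, not the actual agent's template:

```python
# Minimal OpenVPN-style server config with the private key inlined.
# Storing this rendered blob as a single secret protects the key without
# a separate, non-standard key-delivery channel to the endpoint.
CONFIG_TEMPLATE = """\
port {port}
proto {proto}
dev tun
<key>
{private_key_pem}</key>
"""


def render_config(port, proto, private_key_pem):
    """Render the on-demand config the agent would hand to OpenVPN."""
    return CONFIG_TEMPLATE.format(port=port, proto=proto,
                                  private_key_pem=private_key_pem)
```

The merit over storing only the key is that the secret is then exactly what the endpoint consumes, so the key never has to be spliced into a config outside Barbican's protection.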


Jarret

On 4/30/14, 6:08 PM, "Nachi Ueno"  wrote:

>> Jarret
>
>Thanks!
>Currently, the config will be generated on demand by the agent.
>What's the merit of storing the entire config in Barbican?
>
>> Kyle
>Thanks!
>
>2014-04-30 7:05 GMT-07:00 Kyle Mestery :
>> On Tue, Apr 29, 2014 at 6:11 PM, Nachi Ueno  wrote:
>>> Hi Clint
>>>
>>> Thank you for your suggestion. Your point get taken :)
>>>
 Kyle
>>> This is also the same discussion as for LBaaS.
>>> Can we discuss this in the advanced services meeting?
>>>
>> Yes! I think we should definitely discuss this in the advanced
>> services meeting today. I've added it to the agenda [1].
>>
>> Thanks,
>> Kyle
>>
>> [1] https://wiki.openstack.org/wiki/Meetings/AdvancedServices#Agenda_for_next_meeting
>>
 Zang
>>> Could you join the discussion?
>>>
>>>
>>>
>>> 2014-04-29 15:48 GMT-07:00 Clint Byrum :
 Excerpts from Nachi Ueno's message of 2014-04-29 10:58:53 -0700:
> Hi Kyle
>
> 2014-04-29 10:52 GMT-07:00 Kyle Mestery :
> > On Tue, Apr 29, 2014 at 12:42 PM, Nachi Ueno wrote:
> >> Hi Zang
> >>
> >> Thank you for your contribution on this!
> >> The private key management is what I want to discuss in the summit.
> >>
> > Has the idea of using Barbican been discussed before? There are many
> > reasons why using Barbican for this may be better than developing key
> > management ourselves.
>
> No, however I'm +1 for using Barbican. Let's discuss this in the
> certificate management topic in the advanced services session.
>

 Just a suggestion: Don't defer that until the summit. Sounds like you've
 already got some consensus, so you don't need the summit just to rubber
 stamp it. I suggest discussing as much as you can right now on the mailing
 list, and using the time at the summit to resolve any complicated issues,
 including any "a or b" things that need crowd-sourced idea making. You
 can also use the summit time to communicate your requirements to the
 Barbican developers.

 Point is: just because you'll have face time, doesn't mean you should
 use it for what can be done via the mailing list.



Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

2014-05-01 Thread Ken'ichi Ohmichi
# Sorry for sending this again, previous mail was unreadable.

2014-04-28 11:54 GMT+09:00 Ken'ichi Ohmichi :
>>
>> This is also why there are a bunch of nova v2 extensions that just add
>> properties to an existing API. I think in v3 the proposal was to do this with
>> microversioning of the plugins. (we don't have a way to configure
>> microversioned v3 api plugins in tempest yet, but we can cross that bridge
>> when the time comes.) Either way it will allow tempest to have in config which
>> behavior to expect.
>
> Good point, my current understanding is:
> When adding new API parameters to the existing APIs, these parameters should
> be API extensions according to the above guidelines. So we have three options
> for handling API extensions in Tempest:
>
> 1. Consider them optional; then we cannot block incompatible
> changes to them. (Current)
> 2. Consider them required based on tempest.conf, and block
> incompatible changes.
> 3. Consider them required automatically with microversioning, and
> block incompatible changes.

I investigated option 3 above, and have one question about the current
Tempest implementation.

Currently the verify_tempest_config tool gets the API extension list from
each service, including Nova, and verifies the API extension config in
tempest.conf against that list.
Can we use that list to select which extension tests run, instead of only
verifying the config?
As you said in the previous IRC meeting, an API test is currently skipped
if it is decorated with requires_ext() and the extension is not specified
in tempest.conf. I feel it would be nice for Tempest to get the API
extension list and select API tests automatically based on it.
In addition, the methods decorated with requires_ext() are test methods
now, but I think it would be better to decorate client methods
(get_hypervisor_list, etc.), because each extension's loading condition
affects which APIs are available.
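For readers unfamiliar with the mechanism being discussed, here is a minimal sketch of a requires_ext()-style decorator. This is a simplified illustration, not tempest's actual implementation, and CONFIGURED_EXTENSIONS stands in for the extension list from tempest.conf (or one fetched automatically from each service):

```python
import functools
import unittest

# Stand-in for the extension list from tempest.conf, or a list fetched
# automatically from each service's API, as proposed above.
CONFIGURED_EXTENSIONS = {"compute": {"os-hypervisors", "os-keypairs"}}


def requires_ext(extension, service):
    """Skip the decorated callable unless the extension is enabled."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if extension not in CONFIGURED_EXTENSIONS.get(service, set()):
                raise unittest.SkipTest(
                    "%s extension not enabled in %s" % (extension, service))
            return func(*args, **kwargs)
        return wrapper
    return decorator


# Decorating the client method (rather than each test) would skip every
# test that calls it, which is the change suggested in the mail above.
@requires_ext(extension="os-hypervisors", service="compute")
def get_hypervisor_list():
    return ["hv1", "hv2"]  # placeholder for a real API call


@requires_ext(extension="os-flavor-rxtx", service="compute")
def get_flavor_rxtx():
    return []  # never reached: the extension is not in the configured set
```

With the decorator on the client method, any test calling get_flavor_rxtx() raises SkipTest automatically, so the skip condition lives in one place per API rather than on every test.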

Thanks
Ken'ichi Ohmichi



Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

2014-05-01 Thread Ken'ichi Ohmichi
Hi Matthew,

2014-04-28 11:54 GMT+09:00 Ken'ichi Ohmichi :
> 2014-04-28 11:02 GMT+09:00 Matthew Treinish :
>> On Mon, Apr 28, 2014 at 01:01:00AM +, Kenichi Oomichi wrote:
>>>
>>> Now we are working on adding Nova API response checks to Tempest[1] to
>>> block backward-incompatible changes.
>>> With this work, Tempest checks each response (status code, response body)
>>> and raises a test failure exception if it detects something unexpected.
>>> For example, if some API parameter which is defined as 'required' on the
>>> Tempest side does not exist in the response body, the Tempest test fails.
>>>
>>> We are defining API parameters as 'required' if they are not API extensions
>>> and are not dependent on Nova configuration. In addition, Tempest currently
>>> allows additional API parameters, which means Tempest does not fail even if
>>> a Nova response includes unexpected API parameters, because I think the
>>> removal of an API parameter causes a backward-incompatibility issue but the
>>> addition does not.
>>
>> So, AIUI we can only add parameters to an API with a new extension. The API
>> change guidelines also say that adding new properties must be conditional:
>>
>> https://wiki.openstack.org/wiki/APIChangeGuidelines#Generally_Considered_OK
>>
>> Adding or removing a parameter to an API is a backwards-incompatible change
>> IMO, for the exact reasons you mentioned here. If we have to worry about it in
>> tempest then end users do as well.
>>
>> This is also why there are a bunch of nova v2 extensions that just add
>> properties to an existing API. I think in v3 the proposal was to do this with
>> microversioning of the plugins. (we don't have a way to configure
>> microversioned v3 api plugins in tempest yet, but we can cross that bridge
>> when the time comes.) Either way it will allow tempest to have in config which
>> behavior to expect.
>
> Good point, my current understanding is:
> When adding new API parameters to the existing APIs, these parameters should
> be API extensions according to the above guidelines. So we have three options
> for handling API extensions in Tempest:
>
> 1. Consider them optional; then we cannot block incompatible
> changes to them. (Current)
> 2. Consider them required based on tempest.conf, and block
> incompatible changes.
> 3. Consider them required automatically with microversioning, and
> block incompatible changes.
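The response check described in the quoted mail (required parameters must be present, while additional parameters are tolerated) can be sketched roughly like this. It is a simplified illustration, not Tempest's actual JSON-schema validation code:

```python
def check_response(body, required, allow_additional=True):
    """Check that every required key is present in a response body.

    Extra keys are tolerated by default: removing a documented parameter
    is treated as backward-incompatible, while adding one is not.
    """
    missing = [key for key in required if key not in body]
    if missing:
        raise AssertionError("missing required keys: %s" % missing)
    if not allow_additional:
        extra = [key for key in body if key not in required]
        if extra:
            raise AssertionError("unexpected keys: %s" % extra)


# A hypothetical server response in which 'os-ext:new_field' was added
# by an extension; the check still passes.
server = {"id": "42", "name": "vm1", "status": "ACTIVE",
          "os-ext:new_field": "x"}
check_response(server, required=["id", "name", "status"])
```

If a 'required' key were ever dropped from the response, the check raises and the test fails, which is exactly the incompatible-change signal being discussed.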

I investigated option 3 above, and have one question about the current
Tempest implementation.

Currently the verify_tempest_config tool gets the API extension list from
each service, including Nova, and verifies the API extension config in
tempest.conf against that list.
Can we use that list to select which extension tests run, instead of only
verifying the config?
As you said in the previous IRC meeting, an API test is currently skipped
if it is decorated with requires_ext() and the extension is not specified
in tempest.conf. I feel it would be nice for Tempest to get the API
extension list and select API tests automatically based on it.
In addition, the methods decorated with requires_ext() are test methods
now, but I think it would be better to decorate client methods
(get_hypervisor_list, etc.), because each extension's loading condition
affects which APIs are available.

Thanks
Ken'ichi Ohmichi



Re: [openstack-dev] [Ironic] should we have an IRC meeting next week ?

2014-05-01 Thread Lucas Alvares Gomes
> Hi all,
>
> Just a reminder that May 5th is our next scheduled meeting day, but I
> probably won't make it, because I'll be just getting back from one trip and
> start two consecutive weeks of conference travel early the next morning.
> Chris Krelle (nobodycam) has offered to chair that meeting in my absence.
> The agenda looks pretty light at this point, and any serious discussions
> should just be punted to the summit anyway, so if folks want to cancel the
> meeting, I think that's fine.

Next Monday is a holiday here in Ireland, so I probably won't attend the
meeting either.



[openstack-dev] [TripleO][Summit] Neutron etherpad

2014-05-01 Thread Roman Podoliaka
Hi all,

Following the mailing list thread started by Marios, I've put some
initial questions to discuss into this etherpad document:

https://etherpad.openstack.org/p/juno-summit-tripleo-neutron

You are encouraged to take a look at it and add your thoughts and/or
questions :)

Thanks,
Roman
