Re: [openstack-dev] [all] How can we get more feedback from users?

2014-10-24 Thread Stefano Maffulli
Hi Angus,

quite a noble intent, and one that requires lots of attempts like the one you
have started.

On 10/23/2014 09:32 PM, Angus Salkeld wrote:
 I have felt some grumblings about usability issues with Heat
 templates/client/etc..
 and wanted a way that users could come and give us feedback easily (low
 barrier). I started an etherpad
 (https://etherpad.openstack.org/p/heat-useablity-improvements) - the
 first win is it is spelt wrong :-O

:)

 We now have some great feedback there in a very short time, most of this
 we should be able to solve.

 This led me to think: should OpenStack have a more general mechanism
 for users to provide feedback? The idea is this is not for bugs or
 support, but for users to express pain points, requests for features and
 docs/howtos.

One place to start is to pay attention to what happens on the operators
mailing list. Posting this message there would probably help since lots
of users hang out there.

In Paris there will be another operators mini-summit, the fourth IIRC,
one every 3 months more or less (I can't find the details at the moment,
I assume they'll be published soon -- Ideas are being collected on
https://etherpad.openstack.org/p/PAR-ops-meetup).

Another effort to close this 'feedback loop' is the new working group
temporarily named 'influencers' that will meet in Paris for the first
time:
https://openstacksummitnovember2014paris.sched.org/event/268a9853812c22ca8d0636b9d8f0c831

It's great to see lots of efforts going in the same direction. Keep 'em
coming.

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] How can we get more feedback from users?

2014-10-24 Thread Angus Salkeld
On Fri, Oct 24, 2014 at 4:00 PM, Stefano Maffulli stef...@openstack.org
wrote:

 Hi Angus,

 quite a noble intent, one that requires lots of attempts like this you
 have started.

 On 10/23/2014 09:32 PM, Angus Salkeld wrote:
  I have felt some grumblings about usability issues with Heat
  templates/client/etc..
  and wanted a way that users could come and give us feedback easily (low
  barrier). I started an etherpad
  (https://etherpad.openstack.org/p/heat-useablity-improvements) - the
  first win is it is spelt wrong :-O

 :)

  We now have some great feedback there in a very short time, most of this
  we should be able to solve.
 
  This lead me to think, should OpenStack have a more general mechanism
  for users to provide feedback. The idea is this is not for bugs or
  support, but for users to express pain points, requests for features and
  docs/howtos.

 One place to start is to pay attention to what happens on the operators
 mailing list. Posting this message there would probably help since lots
 of users hang out there.

 In Paris there will be another operators mini-summit, the fourth IIRC,
 one every 3 months more or less (I can't find the details at the moment,
 I assume they'll be published soon -- Ideas are being collected on
 https://etherpad.openstack.org/p/PAR-ops-meetup).


Thanks for those pointers. We are very interested in feedback from operators,
but in this case I am talking more about end users, not operators (people
who actually use our API).

-Angus


 Another effort to close this 'feedback loop' is the new working group
 temporarily named 'influencers' that will meet in Paris for the first
 time:

 https://openstacksummitnovember2014paris.sched.org/event/268a9853812c22ca8d0636b9d8f0c831

 It's great to see lots of efforts going in the same direction. Keep 'em
 coming.

 /stef

 --
 Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] How can we get more feedback from users?

2014-10-24 Thread Tim Bell
Angus,

There are two groups which may be relevant regarding ‘consumers’ of Heat


- Application ecosystem working group at
https://wiki.openstack.org/wiki/Application_Ecosystem_Working_Group

- API working group at https://wiki.openstack.org/wiki/API_Working_Group

There are some discussions planned as part of the breakouts in the Kilo design 
summit (http://kilodesignsummit.sched.org/)

So, there are frameworks in place and we would welcome volunteers to help 
advance these in a consistent way across the OpenStack programs.

Tim

From: Angus Salkeld [mailto:asalk...@mirantis.com]
Sent: 24 October 2014 08:16
To: Stefano Maffulli
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] How can we get more feedback from users?

On Fri, Oct 24, 2014 at 4:00 PM, Stefano Maffulli
stef...@openstack.org wrote:
Hi Angus,

quite a noble intent, one that requires lots of attempts like this you
have started.

On 10/23/2014 09:32 PM, Angus Salkeld wrote:
 I have felt some grumblings about usability issues with Heat
 templates/client/etc..
 and wanted a way that users could come and give us feedback easily (low
 barrier). I started an etherpad
 (https://etherpad.openstack.org/p/heat-useablity-improvements) - the
 first win is it is spelt wrong :-O

:)

 We now have some great feedback there in a very short time, most of this
 we should be able to solve.

 This lead me to think, should OpenStack have a more general mechanism
 for users to provide feedback. The idea is this is not for bugs or
 support, but for users to express pain points, requests for features and
 docs/howtos.

One place to start is to pay attention to what happens on the operators
mailing list. Posting this message there would probably help since lots
of users hang out there.

In Paris there will be another operators mini-summit, the fourth IIRC,
one every 3 months more or less (I can't find the details at the moment,
I assume they'll be published soon -- Ideas are being collected on
https://etherpad.openstack.org/p/PAR-ops-meetup).

Thanks for those pointers. We are very interested in feedback from operators, but
in this case I am talking more about end users, not operators (people who
actually use our API).
-Angus

Another effort to close this 'feedback loop' is the new working group
temporarily named 'influencers' that will meet in Paris for the first
time:
https://openstacksummitnovember2014paris.sched.org/event/268a9853812c22ca8d0636b9d8f0c831

It's great to see lots of efforts going in the same direction. Keep 'em
coming.

/stef

--
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Summit] proposed item for the crossproject and/or Nova meetings in the Design summit

2014-10-24 Thread Sylvain Bauza
Damn, there are multiple discussions on the scheduler split while I'm on
vacation, so I can't respond as well as I would like to...

On Oct 24, 2014 03:11, Jay Pipes jaypi...@gmail.com wrote:

 On 10/23/2014 07:57 PM, Elzur, Uri wrote:

 Today, OpenStack makes placement decisions mainly based on Compute
 demands (the Scheduler is part of Nova). It also uses some info provided
 about the platform’s Compute capabilities. But for a given application
 (consisting of some VMs, some network appliances, some storage, etc.),
 Nova/Scheduler has no way to figure out the relative placement of network
 devices (virtual appliances, SFC) and/or storage devices (which are also
 network-borne in many cases) in reference to the Compute elements. This
 makes it harder to provide SLAs and support certain policies (e.g. HA, or
 keeping all of these elements within a physical boundary of your choice,
 or within a given network physical boundary with guaranteed storage
 proximity). It also makes it harder to optimize resource
 utilization levels, which increases the cost and may cause OpenStack to
 be less competitive on TCO.

 Another aspect of the issue is that, in order to lower the cost per
 unit of compute (or, better said, per unit of application), it is
 essential to pack tighter. This increases infrastructure utilization but
 also makes interference a more important phenomenon (aka noisy neighbor).
 SLA requests, SLA guarantees, and placement based on the ability to provide
 the desired SLA are required.

 We’d like to suggest moving a bit faster on making OpenStack a more
 compelling stack for Compute/Network/Storage, capable of supporting
 Telco/NFV and other usage models, and creating the foundation for
 providing a very low cost platform, more competitive with large cloud
 deployments.


 How do you suggest moving faster?

 Also, when you say things like "more competitive with large cloud
deployment" you need to tell us what you are comparing OpenStack to, and
what cost factors you are using. Otherwise, it's just a statement with no
context.


 The concern is that any scheduler change will take a long time. Folks
 closer to the Scheduler work have already pointed out we first need to
 stabilize the API between Nova and the Scheduler before we can talk
 about a split (e.g. Gantt). So it may take till late in 2016 (best
 case?) to get this kind of broader Application-level functionality into
 the OpenStack scheduler.


 I'm not entirely sure where "late in 2016" comes from. Could you elaborate?


 We’d like to bring it up at the coming design summit. Where do you think
 it needs to be discussed: the cross-project track? Scheduler discussion?
Other?

 I’ve just added a proposed item 17.1 to
 https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

 “present Application’s Network and Storage requirements, coupled with
 infrastructure capabilities and status (e.g. up/dn


 This is the kind of thing that was nixed as an idea last go around with
the nic-state-aware-scheduler:

 https://review.openstack.org/#/c/87978/

 You are coupling service state monitoring with placement decisions, and
by doing so, you will limit the scale of the system considerably. We need
improvements to our service state monitoring, for sure, including the
ability to have much more fine-grained definition of what a service is. But
I am 100% against adding the concept of checking service state *during*
placement decisions.

 Service state monitoring (it's called the servicegroup API in Nova) can
and should notify the scheduler of important changes to the state of
resource providers, but I'm opposed to making changes to the scheduler that
would essentially make a placement decision and then immediately go and
check a link for UP/DOWN state before finalizing the claim of resources
on the resource provider.


 , utilization levels) and placement policy (e.g. proximity, HA)


 I understand proximity (affinity/anti-affinity), but what does HA have to
do with placement policy? Could you elaborate a bit more on that?


  to get optimized placement decisions accounting for all application
 elements (VMs, virt Network appliances, Storage) vs. Compute only”


 Yep. These are all simply inputs to the scheduler's placement decision
engine. We need:

  a) A way of providing these inputs to the launch request without
polluting a cloud user's view of the cloud -- remember we do NOT want users
of the Nova API to essentially need to understand the exact layout of the
cloud provider's datacenter. That's definitely anti-cloudy :) So, we need
a way of providing generic inputs to the scheduler that the scheduler can
translate into specific inputs because the scheduler would know the layout
of the datacenter...

  b) Simple condition engine that would be able to understand the inputs
(requested proximity to a storage cluster used by applications running in
the instance, for example) with information the scheduler can query for
about the topology of the datacenter's network and 

Re: [openstack-dev] [all] How can we get more feedback from users?

2014-10-24 Thread Meg McRoberts
Terrific idea!  So what happens with the feedback you receive?
I am, of course, most interested in issues that might help us improve
the documentation, but it's a general question about how
this information flows to the Product Engineering team.

meg

On Thu, Oct 23, 2014 at 11:46 PM, Tim Bell tim.b...@cern.ch wrote:

  Angus,



 There are two groups which may be relevant regarding ‘consumers’ of Heat



 - Application ecosystem working group at
 https://wiki.openstack.org/wiki/Application_Ecosystem_Working_Group

 - API working group at
 https://wiki.openstack.org/wiki/API_Working_Group



 There are some discussions planned as part of the breakouts in the Kilo
 design summit (http://kilodesignsummit.sched.org/)



 So, there are frameworks in place and we would welcome volunteers to help
 advance these in a consistent way across the OpenStack programs.



 Tim



 *From:* Angus Salkeld [mailto:asalk...@mirantis.com]
 *Sent:* 24 October 2014 08:16
 *To:* Stefano Maffulli
 *Cc:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [all] How can we get more feedback from
 users?



 On Fri, Oct 24, 2014 at 4:00 PM, Stefano Maffulli stef...@openstack.org
 wrote:

 Hi Angus,

 quite a noble intent, one that requires lots of attempts like this you
 have started.

 On 10/23/2014 09:32 PM, Angus Salkeld wrote:
  I have felt some grumblings about usability issues with Heat
  templates/client/etc..
  and wanted a way that users could come and give us feedback easily (low
  barrier). I started an etherpad
  (https://etherpad.openstack.org/p/heat-useablity-improvements) - the
  first win is it is spelt wrong :-O

 :)

  We now have some great feedback there in a very short time, most of this
  we should be able to solve.
 
  This lead me to think, should OpenStack have a more general mechanism
  for users to provide feedback. The idea is this is not for bugs or
  support, but for users to express pain points, requests for features and
  docs/howtos.

 One place to start is to pay attention to what happens on the operators
 mailing list. Posting this message there would probably help since lots
 of users hang out there.

 In Paris there will be another operators mini-summit, the fourth IIRC,
 one every 3 months more or less (I can't find the details at the moment,
 I assume they'll be published soon -- Ideas are being collected on
 https://etherpad.openstack.org/p/PAR-ops-meetup).



 Thanks for those pointers. We are very interested in feedback from operators,
 but in this case I am talking more about end users, not operators (people
 who actually use our API).

 -Angus



 Another effort to close this 'feedback loop' is the new working group
 temporarily named 'influencers' that will meet in Paris for the first
 time:

 https://openstacksummitnovember2014paris.sched.org/event/268a9853812c22ca8d0636b9d8f0c831

 It's great to see lots of efforts going in the same direction. Keep 'em
 coming.

 /stef

 --
 Ask and answer questions on https://ask.openstack.org



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][doc] project testing interface

2014-10-24 Thread Angelo Matarazzo

Hi all,
I have a question for you devs.
I don't understand the difference between this link
http://git.openstack.org/cgit/openstack/governance/tree/reference/project-testing-interface.rst
and
https://wiki.openstack.org/wiki/ProjectTestingInterface

Some parts don't match (e.g. unittest running section).
If the git link is the right doc, should we update the wiki page?

I found the reference to the wiki page here:
https://lists.launchpad.net/openstack/msg08058.html

Best regards,
Angelo

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-24 Thread shihanzhang
Hi, Elena Ezhova, thanks for your work on this problem!
I agree with your analysis; this is why I reported this bug but didn't submit a
patch for it.
I wanted to use conntrack to solve this bug, but I also hit the problem you
described:
"The problem here is that it is sometimes impossible to tell which
connection should be killed. For example there may be two instances running in
different namespaces that have the same ip addresses. As a compute node doesn't
know anything about namespaces, it cannot distinguish between the two seemingly
identical connections:
 $ sudo conntrack -L | grep 10.0.0.5
 tcp  6 431954 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 sport=60723 dport=22 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60723 [ASSURED] mark=0 use=1
 tcp  6 431976 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 sport=60729 dport=22 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60729 [ASSURED] mark=0 use=1"


1. I think this problem is due to the fact that on a compute node all tenants'
instances are in the same namespace (if the ovs agent is used, it uses vlans to
isolate different tenants' instances), so it can't distinguish the connections
in the use case above.
2. ip_conntrack works at L3 and above, so it can't search for a connection by
destination MAC.


I am not clear about what ajo said:
"I'm not sure if removing all the conntrack rules that match the certain
filter would be OK enough, as it may only lead to full reevaluation of rules
for the next packet of the cleared connections (may be I'm missing some corner
detail, which could be)."








At 2014-10-23 18:22:46, Elena Ezhova eezh...@mirantis.com wrote:

Hi!


I am working on a bug "ping still working once connected even after related
security group rule is deleted"
(https://bugs.launchpad.net/neutron/+bug/1335375). The gist of the problem is
the following: when we delete a security group rule the corresponding rule in
iptables is also deleted, but the connection that was allowed by that rule is
not being destroyed.
The reason for such behavior is that in iptables we have the following
structure of a chain that filters input packets for an interface of an instance:


Chain neutron-openvswi-i830fa99f-3 (1 references)
 pkts bytes target prot opt in out source   destination
    0     0 DROP   all  --  *  *   0.0.0.0/0   0.0.0.0/0   state INVALID /* Drop packets that are not associated with a state. */
    0     0 RETURN all  --  *  *   0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED /* Direct packets associated with a known session to the RETURN chain. */
    0     0 RETURN udp  --  *  *   10.0.0.3    0.0.0.0/0   udp spt:67 dpt:68
    0     0 RETURN all  --  *  *   0.0.0.0/0   0.0.0.0/0   match-set IPv43a0d3610-8b38-43f2-8 src
    0     0 RETURN tcp  --  *  *   0.0.0.0/0   0.0.0.0/0   tcp dpt:22  <-- rule that allows ssh on port 22
    1    84 RETURN icmp --  *  *   0.0.0.0/0   0.0.0.0/0
    0     0 neutron-openvswi-sg-fallback  all  --  *  *   0.0.0.0/0   0.0.0.0/0   /* Send unmatched traffic to the fallback chain. */


So, if we delete the rule that allows tcp on port 22, then all connections that
are already established won't be closed, because all packets would satisfy the
rule:
    0     0 RETURN all  --  *  *   0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED /* Direct packets associated with a known session to the RETURN chain. */



I seek advice on the way to deal with the problem. There are a couple of
ideas how to do it (more or less realistic):

- Kill the connection using conntrack
  The problem here is that it is sometimes impossible to tell which
connection should be killed. For example there may be two instances running in
different namespaces that have the same ip addresses. As a compute node doesn't
know anything about namespaces, it cannot distinguish between the two seemingly
identical connections:
 $ sudo conntrack -L | grep 10.0.0.5
 tcp  6 431954 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 sport=60723 dport=22 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60723 [ASSURED] mark=0 use=1
 tcp  6 431976 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 sport=60729 dport=22 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60729 [ASSURED] mark=0 use=1

I wonder whether there is any way to search for a connection by destination MAC?

- Delete the iptables rule that directs packets associated with a known session
to the RETURN chain
  It will force all packets to go through the full chain each time and
this will definitely make the connection close. But this will strongly affect
the performance. Probably a timeout may be created after which this rule will
be restored, but it is uncertain how long it should be.

Re: [openstack-dev] [sahara] team meeting Oct 23 1800 UTC

2014-10-24 Thread Sergey Lukjanov
Thanks for attending the meeting!

Logs:
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-10-23-18.01.html




On Wed, Oct 22, 2014 at 10:33 PM, Sergey Lukjanov slukja...@mirantis.com
wrote:

 Hi folks,

 We'll be having the Sahara team meeting as usual in
 #openstack-meeting-alt channel.

 Agenda:
 https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings


 http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20141023T18

 P.S. The main topic is finalisation of the design summit schedule.

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-24 Thread Kevin Benton
I think the root cause of the problem here is that we are losing
segregation between tenants at the conntrack level. The compute side plugs
everything into the same namespace and we have no guarantees about
uniqueness of any other fields kept by conntrack.

Because of this loss of uniqueness, I think there may be another lurking
bug here as well. One tenant establishing connections between IPs that
overlap with another tenant will create the possibility that a connection
the other tenant attempts will match the conntrack entry from the original
connection. Then whichever closes the connection first will result in the
conntrack entry being removed and the return traffic from the remaining
connection being dropped.

I think the correct way forward here is to isolate each tenant (or even
compute interface) into its own conntrack zone.[1] This will provide
isolation against that imaginary unlikely scenario I just presented. :-)
More importantly, it will allow us to clear connections for a specific
tenant (or compute interface) without interfering with others because
conntrack can delete by zone.[2]


1.
https://github.com/torvalds/linux/commit/5d0aa2ccd4699a01cfdf14886191c249d7b45a01
2. see the -w option.
http://manpages.ubuntu.com/manpages/raring/man8/conntrack.8.html
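
To make [2] concrete, a minimal sketch of what clearing a zone could look
like from Python (assuming conntrack-tools with zone support is installed;
the function name is illustrative, not existing Neutron code):

    import subprocess

    def clear_conntrack_zone(zone):
        # Delete every conntrack entry tagged with this zone, leaving
        # the state of all other ports/tenants untouched (-w is the
        # zone filter from [2]).
        subprocess.check_call(['conntrack', '-D', '-w', str(zone)])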

On Thu, Oct 23, 2014 at 3:22 AM, Elena Ezhova eezh...@mirantis.com wrote:

 Hi!

 I am working on a bug "ping still working once connected even after
 related security group rule is deleted" (
 https://bugs.launchpad.net/neutron/+bug/1335375). The gist of the problem
 is the following: when we delete a security group rule the corresponding
 rule in iptables is also deleted, but the connection, that was allowed by
 that rule, is not being destroyed.
 The reason for such behavior is that in iptables we have the following
 structure of a chain that filters input packets for an interface of an
 instance:

 Chain neutron-openvswi-i830fa99f-3 (1 references)
  pkts bytes target prot opt in out source
 destination
 0 0 DROP   all  --  *  *   0.0.0.0/0
 0.0.0.0/0state INVALID /* Drop packets that are not
 associated with a state. */
 0 0 RETURN all  --  *  *   0.0.0.0/0
 0.0.0.0/0state RELATED,ESTABLISHED /* Direct packets
 associated with a known session to the RETURN chain. */
 0 0 RETURN udp  --  *  *   10.0.0.3
 0.0.0.0/0udp spt:67 dpt:68
 0 0 RETURN all  --  *  *   0.0.0.0/0
 0.0.0.0/0match-set IPv43a0d3610-8b38-43f2-8 src
 0 0 RETURN tcp  --  *  *   0.0.0.0/0
 0.0.0.0/0    tcp dpt:22  <-- rule that allows ssh on port
 22
 1    84 RETURN icmp --  *  *   0.0.0.0/0
 0.0.0.0/0
 0 0 neutron-openvswi-sg-fallback  all  --  *  *
 0.0.0.0/00.0.0.0/0/* Send unmatched traffic to
 the fallback chain. */

 So, if we delete rule that allows tcp on port 22, then all connections
 that are already established won't be closed, because all packets would
 satisfy the rule:
 0 0 RETURN all  --  *  *   0.0.0.0/00.0.0.0/0
state RELATED,ESTABLISHED /* Direct packets associated with a
 known session to the RETURN chain. */

 I seek advice on the way how to deal with the problem. There are a couple
 of ideas how to do it (more or less realistic):

- Kill the connection using conntrack

   The problem here is that it is sometimes impossible to tell
 which connection should be killed. For example there may be two instances
 running in different namespaces that have the same ip addresses. As a
 compute doesn't know anything about namespaces, it cannot distinguish
 between the two seemingly identical connections:
  $ sudo conntrack -L  | grep 10.0.0.5
  tcp  6 431954 ESTABLISHED src=10.0.0.3 dst=10.0.0.5
 sport=60723 dport=22 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60723
 [ASSURED] mark=0 use=1
  tcp  6 431976 ESTABLISHED src=10.0.0.3 dst=10.0.0.5
 sport=60729 dport=22 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60729
 [ASSURED] mark=0 use=1

 I wonder whether there is any way to search for a connection by
 destination MAC?

- Delete iptables rule that directs packets associated with a known
session to the RETURN chain

It will force all packets to go through the full chain each
 time and this will definitely make the connection close. But this will
 strongly affect the performance. Probably there may be created a timeout
 after which this rule will be restored, but it is uncertain how long should
 it be.

 Please share your thoughts on how it would be better to handle it.

 Thanks in advance,
 Elena



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___

[openstack-dev] [Neutron][ServiceVM] servicevm and Paris summit(BoF)

2014-10-24 Thread Isaku Yamahata
Hi, there are several efforts to run service in VM.[1][2][3]
Although we focused on router service, other services like lbaas have
similar requirements. So it will be good to have a BoF or something.
I created a poll page(following NFV BoF), and please register
if you're interested
https://doodle.com/5tyi2bifag2f5ud4

Etherpad page is to manage topics.
https://etherpad.openstack.org/p/servicevm

This is also a reminder mail for the servicevm IRC meeting.
This is the last irc meeting before Paris Summit and discuss for the planning
Oct 28, 2014 Tuesdays 5:00(AM)UTC-
#openstack-meeting on freenode


links:
[0] https://wiki.openstack.org/wiki/ServiceVM

[1]
https://blueprints.launchpad.net/neutron/+spec/cisco-routing-service-vm
https://blueprints.launchpad.net/neutron/+spec/cisco-config-agent
https://blueprints.launchpad.net/neutron/+spec/fwaas-cisco

[2]
https://blueprints.launchpad.net/neutron/+spec/l3-plugin-brocade-vyatta-vrouter
https://blueprints.launchpad.net/neutron/+spec/firewall-plugin-for-brocade-vyatta-vrouter

[3]
https://blueprints.launchpad.net/neutron/+spec/tcs-fwaas-netconf-host-plugin
-- 
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-24 Thread Salvatore Orlando
Just like Kevin I was considering using conntrack zones to segregate
connections.
However, I don't know whether this would be feasible as I've never used
the iptables CT target in real applications.

Segregation should probably happen at the security group level - or even at
the rule level - rather than the tenant level.
Indeed the same situation could occur even with two security groups
belonging to the same tenant.

Probably each rule can be associated with a different conntrack zone. So
when it's matched, the corresponding conntrack entries will be added to the
appropriate zone. And therefore when the rules are removed the
corresponding connections to kill can be filtered by zone as explained by
Kevin.

This approach will add a good number of rules to the RAW table, however, so
its impact on control/data plane scalability should be assessed, as it
might turn out as bad as the solution where connections were explicitly
dropped with an ad-hoc iptables rule.

Salvatore


On 24 October 2014 09:32, Kevin Benton blak...@gmail.com wrote:

 I think the root cause of the problem here is that we are losing
 segregation between tenants at the conntrack level. The compute side plugs
 everything into the same namespace and we have no guarantees about
 uniqueness of any other fields kept by conntrack.

 Because of this loss of uniqueness, I think there may be another lurking
 bug here as well. One tenant establishing connections between IPs that
 overlap with another tenant will create the possibility that a connection
 the other tenant attempts will match the conntrack entry from the original
 connection. Then whichever closes the connection first will result in the
 conntrack entry being removed and the return traffic from the remaining
 connection being dropped.

 I think the correct way forward here is to isolate each tenant (or even
 compute interface) into its own conntrack zone.[1] This will provide
 isolation against that imaginary unlikely scenario I just presented. :-)
 More importantly, it will allow us to clear connections for a specific
 tenant (or compute interface) without interfering with others because
 conntrack can delete by zone.[2]


 1.
 https://github.com/torvalds/linux/commit/5d0aa2ccd4699a01cfdf14886191c249d7b45a01
 2. see the -w option.
 http://manpages.ubuntu.com/manpages/raring/man8/conntrack.8.html

 On Thu, Oct 23, 2014 at 3:22 AM, Elena Ezhova eezh...@mirantis.com
 wrote:

 Hi!

 I am working on a bug ping still working once connected even after
 related security group rule is deleted (
 https://bugs.launchpad.net/neutron/+bug/1335375). The gist of the
 problem is the following: when we delete a security group rule the
 corresponding rule in iptables is also deleted, but the connection, that
 was allowed by that rule, is not being destroyed.
 The reason for such behavior is that in iptables we have the following
 structure of a chain that filters input packets for an interface of an
 istance:

 Chain neutron-openvswi-i830fa99f-3 (1 references)
  pkts bytes target prot opt in out source
 destination
 0 0 DROP   all  --  *  *   0.0.0.0/0
 0.0.0.0/0state INVALID /* Drop packets that are not
 associated with a state. */
 0 0 RETURN all  --  *  *   0.0.0.0/0
 0.0.0.0/0state RELATED,ESTABLISHED /* Direct packets
 associated with a known session to the RETURN chain. */
 0 0 RETURN udp  --  *  *   10.0.0.3
 0.0.0.0/0udp spt:67 dpt:68
 0 0 RETURN all  --  *  *   0.0.0.0/0
 0.0.0.0/0match-set IPv43a0d3610-8b38-43f2-8 src
 0 0 RETURN tcp  --  *  *   0.0.0.0/0
 0.0.0.0/0tcp dpt:22   rule that allows ssh on port
 22
 184 RETURN icmp --  *  *   0.0.0.0/0
 0.0.0.0/0
 0 0 neutron-openvswi-sg-fallback  all  --  *  *
 0.0.0.0/00.0.0.0/0/* Send unmatched traffic to
 the fallback chain. */

 So, if we delete rule that allows tcp on port 22, then all connections
 that are already established won't be closed, because all packets would
 satisfy the rule:
 0 0 RETURN all  --  *  *   0.0.0.0/00.0.0.0/0
state RELATED,ESTABLISHED /* Direct packets associated with a
 known session to the RETURN chain. */

 I seek advice on the way how to deal with the problem. There are a couple
 of ideas how to do it (more or less realistic):

- Kill the connection using conntrack

   The problem here is that it is sometimes impossible to tell
 which connection should be killed. For example there may be two instances
 running in different namespaces that have the same ip addresses. As a
 compute doesn't know anything about namespaces, it cannot distinguish
 between the two seemingly identical connections:
  $ sudo conntrack -L  | grep 10.0.0.5
  tcp  6 431954 ESTABLISHED src=10.0.0.3 dst=10.0.0.5
 sport=60723 dport=22 src=10.0.0.5 

Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-24 Thread Miguel Angel Ajo Pelayo

Nice! It sounds like a good mechanism to handle this.


Defining a good mechanism here is crucial: we must be aware of the
2^16 zone limit [1], and of the fact that ipset rules will coalesce connections
to lots of different IPs under the same rule.

Maybe a good option is to tag connections per rule (limiting ourselves to 2^16
rules) AND per ip address / port / protocol, etc. (with an average of 5 in /
5 out rules per port, that's a limit of about 6553 ports per machine).

Or, if we need this to scale to more ports, tag connections per port, and target
them by rule AND ip address / port / protocol.

[1] 
https://github.com/torvalds/linux/commit/5d0aa2ccd4699a01cfdf14886191c249d7b45a01#diff-4d53dd1f3ad5275bc2e79f2c12af6e68R8


Best,
Miguel Ángel.


- Original Message - 

 Just like Kevin I was considering using conntrack zones to segregate
 connections.
 However, I don't know whether this would be feasible as I've never used
 iptables CT target in real applications.

 Segregation should probably happen at the security group level - or even at
 the rule level - rather than the tenant level.
 Indeed the same situation could occur even with two security groups belonging
 to the same tenant.

 Probably each rule can be associated with a different conntrack zone. So when
 it's matched, the corresponding conntrack entries will be added to the
 appropriate zone. And therefore when the rules are removed the corresponding
 connections to kill can be filtered by zone as explained by Kevin.

 This approach will add a good number of rules to the RAW table however, so
 its impact on control/data plane scalability should be assessed, as it might
 turn as bad as the solution where connections where explicitly dropped with
 an ad-hoc iptables rule.

 Salvatore

 On 24 October 2014 09:32, Kevin Benton  blak...@gmail.com  wrote:

  I think the root cause of the problem here is that we are losing
  segregation
  between tenants at the conntrack level. The compute side plugs everything
  into the same namespace and we have no guarantees about uniqueness of any
  other fields kept by conntrack.
 

  Because of this loss of uniqueness, I think there may be another lurking
  bug
  here as well. One tenant establishing connections between IPs that overlap
  with another tenant will create the possibility that a connection the other
  tenant attempts will match the conntrack entry from the original
  connection.
  Then whichever closes the connection first will result in the conntrack
  entry being removed and the return traffic from the remaining connection
  being dropped.
 

  I think the correct way forward here is to isolate each tenant (or even
  compute interface) into its own conntrack zone.[1] This will provide
  isolation against that imaginary unlikely scenario I just presented. :-)
 
  More importantly, it will allow us to clear connections for a specific
  tenant
  (or compute interface) without interfering with others because conntrack
  can
  delete by zone.[2]
 

  1.
  https://github.com/torvalds/linux/commit/5d0aa2ccd4699a01cfdf14886191c249d7b45a01
 
  2. see the -w option.
  http://manpages.ubuntu.com/manpages/raring/man8/conntrack.8.html
 

  On Thu, Oct 23, 2014 at 3:22 AM, Elena Ezhova  eezh...@mirantis.com 
  wrote:
 

   Hi!
  
 

   I am working on a bug  ping still working once connected even after
   related
   security group rule is deleted (
   https://bugs.launchpad.net/neutron/+bug/1335375 ). The gist of the
   problem
   is the following: when we delete a security group rule the corresponding
   rule in iptables is also deleted, but the connection, that was allowed by
   that rule, is not being destroyed.
  
 
   The reason for such behavior is that in iptables we have the following
   structure of a chain that filters input packets for an interface of an
   istance:
  
 

   Chain neutron-openvswi-i830fa99f-3 (1 references)
  
 
   pkts bytes target prot opt in out source destination
  
 
   0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID /* Drop packets
   that
   are not associated with a state. */
  
 
   0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED /*
   Direct
   packets associated with a known session to the RETURN chain. */
  
 
   0 0 RETURN udp -- * * 10.0.0.3 0.0.0.0/0 udp spt:67 dpt:68
  
 
   0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 match-set
   IPv43a0d3610-8b38-43f2-8
   src
  
 
   0 0 RETURN tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22  rule that
   allows
   ssh on port 22
  
 
   1 84 RETURN icmp -- * * 0.0.0.0/0 0.0.0.0/0
  
 
   0 0 neutron-openvswi-sg-fallback all -- * * 0.0.0.0/0 0.0.0.0/0 /* Send
   unmatched traffic to the fallback chain. */
  
 

   So, if we delete rule that allows tcp on port 22, then all connections
   that
   are already established won't be closed, because all packets would
   satisfy
   the rule:
  
 
   0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED /*
   Direct
   packets associated with a known session 

Re: [openstack-dev] [neutron] [stable] Tool to aid in scalability problems mitigation.

2014-10-24 Thread Miguel Angel Ajo Pelayo


- Original Message -
 Hi Miguel,
 
 while we'd need to hear from the stable team, I think it's not such a bad
 idea to make this tool available to users of pre-juno openstack releases.
 As far as upstream repos are concerned, I don't know if this tool violates
 the criteria for stable branches. Even if it would be a rather large change
 for stable/icehouse, it is pretty much orthogonal to the existing code, so
 it could be ok. However, please note that stable/havana has now reached its
 EOL, so there will be no more stable release for it.

Sure, I was mentioning havana as affected, but I understand it's already
past upstream EOL; downstream distributions would always be free to backport,
especially an orthogonal change like this.

About stable/icehouse, I'd like to hear from the stable maintainers.

 
 The orthogonal nature of this tool however also make the case for making it
 widely available on pypi. I think it should be ok to describe the
 scalability issue in the official OpenStack Icehouse docs and point out to
 this tool for mitigation.

Yes, of course, I consider that a second option; my point here is that
direct upstream review would result in better quality code, could
certainly spot any hidden bugs, and would increase testing quality.

It also reduces packaging time across distributions by making the tool available
via the standard neutron repository.


Thanks for the feedback!,

 
 Salvatore
 
 On 23 October 2014 14:03, Miguel Angel Ajo Pelayo  mangel...@redhat.com 
 wrote:
 
 
 
 
 Recently, we have identified clients with problems due to the
 bad scalability of security groups in Havana and Icehouse, that
 was addressed during juno here [1] [2]
 
 This situation is identified by blinking agents (going UP/DOWN),
 high AMQP load, high neutron-server load, and timeouts from openvswitch
 agents when trying to contact neutron-server
 security_group_rules_for_devices.
 
 Doing a [1] backport involves many dependent patches related
 to the general RPC refactor in neutron (which modifies all plugins),
 and subsequent ones fixing a few bugs. Sounds risky to me. [2] Introduces
 new features and it's dependent on features which aren't available on
 all systems.
 
 To remediate this on production systems, I wrote a quick tool
 to help on reporting security groups and mitigating the problem
 by writing almost-equivalent rules [3].
 
 We believe this tool would be better available to the wider community,
 and under better review and testing, and, since it doesn't modify any
 behavior
 or actual code in neutron, I'd like to propose it for inclusion into, at
 least,
 Icehouse stable branch where it's more relevant.
 
 I know the usual way is to go master -> Juno -> Icehouse, but at this moment
 the tool is only interesting for Icehouse (and Havana), although I believe
 it could be extended to clean up orphaned resources, or any other cleanup
 tasks; in that case it could make sense to be available for K -> J -> I.
 
 As a reference, I'm leaving links to outputs from the tool [4][5]
 
 Looking forward to get some feedback,
 Miguel Ángel.
 
 
 [1] https://review.openstack.org/#/c/111876/ security group rpc refactor
 [2] https://review.openstack.org/#/c/111877/ ipset support
 [3] https://github.com/mangelajo/neutrontool
 [4] http://paste.openstack.org/show/123519/
 [5] http://paste.openstack.org/show/123525/
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-24 Thread Kevin Benton
While a zone per rule would be nice because we can easily delete connection
state by only referencing a zone, that's probably overkill. We only need
enough to disambiguate between overlapping IPs so we can then delete
connection state by matching standard L3/4 headers again, right?

I think a conntrack zone per port would be the easiest from an accounting
perspective. We already setup an iptables chain per port so the grouping is
already there (/me sweeps the complexity of choosing zone numbers under the
rug).

On Fri, Oct 24, 2014 at 2:25 AM, Salvatore Orlando sorla...@nicira.com
wrote:

 Just like Kevin I was considering using conntrack zones to segregate
 connections.
 However, I don't know whether this would be feasible as I've never used
 iptables CT target in real applications.

 Segregation should probably happen at the security group level - or even
 at the rule level - rather than the tenant level.
 Indeed the same situation could occur even with two security groups
 belonging to the same tenant.

 Probably each rule can be associated with a different conntrack zone. So
 when it's matched, the corresponding conntrack entries will be added to the
 appropriate zone. And therefore when the rules are removed the
 corresponding connections to kill can be filtered by zone as explained by
 Kevin.

 This approach will add a good number of rules to the RAW table however, so
 its impact on control/data plane scalability should be assessed, as it
 might turn as bad as the solution where connections where explicitly
 dropped with an ad-hoc iptables rule.

 Salvatore


 On 24 October 2014 09:32, Kevin Benton blak...@gmail.com wrote:

 I think the root cause of the problem here is that we are losing
 segregation between tenants at the conntrack level. The compute side plugs
 everything into the same namespace and we have no guarantees about
 uniqueness of any other fields kept by conntrack.

 Because of this loss of uniqueness, I think there may be another lurking
 bug here as well. One tenant establishing connections between IPs that
 overlap with another tenant will create the possibility that a connection
 the other tenant attempts will match the conntrack entry from the original
 connection. Then whichever closes the connection first will result in the
 conntrack entry being removed and the return traffic from the remaining
 connection being dropped.

 I think the correct way forward here is to isolate each tenant (or even
 compute interface) into its own conntrack zone.[1] This will provide
 isolation against that imaginary unlikely scenario I just presented. :-)
 More importantly, it will allow us to clear connections for a specific
 tenant (or compute interface) without interfering with others because
 conntrack can delete by zone.[2]


 1.
 https://github.com/torvalds/linux/commit/5d0aa2ccd4699a01cfdf14886191c249d7b45a01
 2. see the -w option.
 http://manpages.ubuntu.com/manpages/raring/man8/conntrack.8.html

 On Thu, Oct 23, 2014 at 3:22 AM, Elena Ezhova eezh...@mirantis.com
 wrote:

 Hi!

 I am working on a bug ping still working once connected even after
 related security group rule is deleted (
 https://bugs.launchpad.net/neutron/+bug/1335375). The gist of the
 problem is the following: when we delete a security group rule the
 corresponding rule in iptables is also deleted, but the connection, that
 was allowed by that rule, is not being destroyed.
 The reason for such behavior is that in iptables we have the following
 structure of a chain that filters input packets for an interface of an
 istance:

 Chain neutron-openvswi-i830fa99f-3 (1 references)
  pkts bytes target prot opt in out source
 destination
 0 0 DROP   all  --  *  *   0.0.0.0/0
 0.0.0.0/0state INVALID /* Drop packets that are not
 associated with a state. */
 0 0 RETURN all  --  *  *   0.0.0.0/0
 0.0.0.0/0state RELATED,ESTABLISHED /* Direct packets
 associated with a known session to the RETURN chain. */
 0 0 RETURN udp  --  *  *   10.0.0.3
 0.0.0.0/0udp spt:67 dpt:68
 0 0 RETURN all  --  *  *   0.0.0.0/0
 0.0.0.0/0match-set IPv43a0d3610-8b38-43f2-8 src
 0 0 RETURN tcp  --  *  *   0.0.0.0/0
 0.0.0.0/0tcp dpt:22   rule that allows ssh on port
 22
 184 RETURN icmp --  *  *   0.0.0.0/0
 0.0.0.0/0
 0 0 neutron-openvswi-sg-fallback  all  --  *  *
 0.0.0.0/00.0.0.0/0/* Send unmatched traffic to
 the fallback chain. */

 So, if we delete rule that allows tcp on port 22, then all connections
 that are already established won't be closed, because all packets would
 satisfy the rule:
 0 0 RETURN all  --  *  *   0.0.0.0/0
 0.0.0.0/0state RELATED,ESTABLISHED /* Direct packets
 associated with a known session to the RETURN chain. */

 I seek advice on the way how to deal with the 

Re: [openstack-dev] [rally][users]: Synchronizing between multiple scenario instances.

2014-10-24 Thread Boris Pavlovic
Behzad,

Ok, for now we can do it that way.

What I am thinking is that this *logic* should be implemented at the
benchmark.runner level, because different load generators use different
approaches to generate load.
In the case of the serial runner, for example, locking is not even possible.

So what about adding two abstract methods (one for incrementing, another for
waiting) in
https://github.com/stackforge/rally/blob/master/rally/benchmark/runners/base.py#L140
and implementing them for the different runners?
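
Something like this, perhaps (just a sketch against the runner base class;
the method names and signatures are invented for illustration, not existing
Rally code):

    import abc

    class ScenarioRunner(object):

        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def _mark_sync_point(self, name):
            """Record that this runner process reached the sync point."""

        @abc.abstractmethod
        def _wait_sync_point(self, name):
            """Block until all concurrent runner processes reach it."""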


Best regards,
Boris Pavlovic


On Thu, Oct 23, 2014 at 12:05 PM, Behzad Dastur (bdastur) bdas...@cisco.com
 wrote:

  Hi Boris,

 I am still getting my feet wet with rally so some concepts are new, and
 did not quite get your statement regarding the different load generators. I
 am presuming you are referring to the Scenario runner and the different
 “types” of runs.



 What I was looking at is the runner, where we specify the type, times and
 concurrency. We could have an additional field (or fields) specifying the
 synchronization property.



 Essentially, what I have found most useful in the cases where we run
 scenarios/tests in parallel is some sort of “barrier”, where at a certain
 point in the run you want all the parallel tasks to reach a specific point
 before continuing.



 Also, I am considering cases where synchronization is needed within a
 single benchmark case, where the same benchmark scenario
 creates some vms, performs some tasks, and deletes the vms.



 Just for simplicity as a POC, I tried something with shared memory
 (multiprocessing.Value), which looks something like this:



 import multiprocessing
 import time

 class Barrier(object):

     def __init__(self, concurrency):
         # Shared counter of runners that have not yet reached the barrier.
         self.shmem = multiprocessing.Value('I', concurrency)
         self.lock = multiprocessing.Lock()

     def wait_at_barrier(self):
         # Spin until every runner has checked in.
         while self.shmem.value > 0:
             time.sleep(1)

     def decrement_shm_concurrency_cnt(self):
         with self.lock:
             self.shmem.value -= 1



 And from the scenario, it can be called as:

 scenario:
     -- do some action --
     barrobj.decrement_shm_concurrency_cnt()
     barrobj.wait_at_barrier()
     -- do some action --   <-- all processes will do this action at almost the
 same time.



 I would be happy to discuss more to get a good common solution.



 regards,

 Behzad











 *From:* bo...@pavlovic.ru [mailto:bo...@pavlovic.ru] *On Behalf Of *Boris
 Pavlovic
 *Sent:* Tuesday, October 21, 2014 3:23 PM
 *To:* Behzad Dastur (bdastur)
 *Cc:* OpenStack Development Mailing List (not for usage questions);
 Pradeep Chandrasekar (pradeech); John Wei-I Wu (weiwu)
 *Subject:* Re: [openstack-dev] [rally][users]: Synchronizing between
 multiple scenario instances.



 Behzad,



 Unfortunately at this point there is no support of locking between
 scenarios.





 It will be quite tricky for implementation, because we have different load
 generators, and we will need to find

 common solutions for all of them.



 If you have any ideas about how to implement it in such way, I will be
 more than happy to get this in upstream.





 One of the way that I see is to having some kind of chain-of-benchmarks:



 1) The first benchmark runs N VMs

 2) The second benchmark does something with all those VMs

 3) The third benchmark deletes all these VMs



 (where chain elements are atomic actions)



 Probably this will be better long term solution.

 Only thing that we should understand is how to store those results and how
 to display them.





 If you would like to help with this let's start discussing it, in some
 kind of google docs.



 Thoughts?





 Best regards,

 Boris Pavlovic





 On Wed, Oct 22, 2014 at 2:13 AM, Behzad Dastur (bdastur) 
 bdas...@cisco.com wrote:

 Does rally provide any synchronization mechanism to synchronize between
 multiple scenarios when running in parallel? Rally spawns multiple
 processes, with each process running the scenario. We need a way to
 synchronize between these to start a perf test operation at the same time.





 regards,

 Behzad





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-24 Thread Miguel Angel Ajo Pelayo

Kevin, I agree with you, 1 zone per port should be reasonable.

The 2^16 zone limit will force us into keeping state (to tie
ports to zones across reboots); maybe this state can just be
recovered by reading the iptables rules at boot and reconstructing
the current openvswitch-agent local port/zone association.
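
A rough sketch of that bookkeeping (assuming the recovered port/zone pairs
can be parsed back out of the existing iptables rules at agent startup; all
names here are illustrative, not existing agent code):

    class ZoneAllocator(object):
        # Conntrack zones are 16-bit, so ids must stay below 2^16.
        MAX_ZONE = 2 ** 16 - 1

        def __init__(self, recovered=None):
            # recovered: {port_id: zone} map rebuilt from iptables.
            self.port_zone = dict(recovered or {})

        def zone_for_port(self, port_id):
            if port_id not in self.port_zone:
                used = set(self.port_zone.values())
                # Lowest free zone id; StopIteration means all are in use.
                self.port_zone[port_id] = next(
                    z for z in range(1, self.MAX_ZONE + 1) if z not in used)
            return self.port_zone[port_id]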

Best,
Miguel Ángel.

- Original Message - 

 While a zone per rule would be nice because we can easily delete connection
 state by only referencing a zone, that's probably overkill. We only need
 enough to disambiguate between overlapping IPs so we can then delete
 connection state by matching standard L3/4 headers again, right?

 I think a conntrack zone per port would be the easiest from an accounting
 perspective. We already setup an iptables chain per port so the grouping is
 already there (/me sweeps the complexity of choosing zone numbers under the
 rug).

 On Fri, Oct 24, 2014 at 2:25 AM, Salvatore Orlando  sorla...@nicira.com 
 wrote:

  Just like Kevin I was considering using conntrack zones to segregate
  connections.
 
  However, I don't know whether this would be feasible as I've never used
  iptables CT target in real applications.
 

  Segregation should probably happen at the security group level - or even at
  the rule level - rather than the tenant level.
 
  Indeed the same situation could occur even with two security groups
  belonging
  to the same tenant.
 

  Probably each rule can be associated with a different conntrack zone. So
  when
  it's matched, the corresponding conntrack entries will be added to the
  appropriate zone. And therefore when the rules are removed the
  corresponding
  connections to kill can be filtered by zone as explained by Kevin.
 

  This approach will add a good number of rules to the RAW table however, so
  its impact on control/data plane scalability should be assessed, as it
  might
  turn as bad as the solution where connections where explicitly dropped with
  an ad-hoc iptables rule.
 

  Salvatore
 

  On 24 October 2014 09:32, Kevin Benton  blak...@gmail.com  wrote:
 

   I think the root cause of the problem here is that we are losing
   segregation
   between tenants at the conntrack level. The compute side plugs everything
   into the same namespace and we have no guarantees about uniqueness of any
   other fields kept by conntrack.
  
 

   Because of this loss of uniqueness, I think there may be another lurking
   bug
   here as well. One tenant establishing connections between IPs that
   overlap
   with another tenant will create the possibility that a connection the
   other
   tenant attempts will match the conntrack entry from the original
   connection.
   Then whichever closes the connection first will result in the conntrack
   entry being removed and the return traffic from the remaining connection
   being dropped.
  
 

   I think the correct way forward here is to isolate each tenant (or even
   compute interface) into its own conntrack zone.[1] This will provide
   isolation against that imaginary unlikely scenario I just presented. :-)
  
 
   More importantly, it will allow us to clear connections for a specific
   tenant
   (or compute interface) without interfering with others because conntrack
   can
   delete by zone.[2]
  
 

   1.
   https://github.com/torvalds/linux/commit/5d0aa2ccd4699a01cfdf14886191c249d7b45a01
  
 
   2. see the -w option.
   http://manpages.ubuntu.com/manpages/raring/man8/conntrack.8.html
  
 

   On Thu, Oct 23, 2014 at 3:22 AM, Elena Ezhova  eezh...@mirantis.com 
   wrote:
  
 

Hi!
   
  
 

I am working on a bug  ping still working once connected even after
related
security group rule is deleted (
https://bugs.launchpad.net/neutron/+bug/1335375 ). The gist of the
problem
is the following: when we delete a security group rule the
corresponding
rule in iptables is also deleted, but the connection, that was allowed
by
that rule, is not being destroyed.
   
  
 
The reason for such behavior is that in iptables we have the following
structure of a chain that filters input packets for an interface of an
istance:
   
  
 

Chain neutron-openvswi-i830fa99f-3 (1 references)
   
  
 
pkts bytes target prot opt in out source destination
   
  
 
0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID /* Drop packets
that
are not associated with a state. */
   
  
 
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED /*
Direct
packets associated with a known session to the RETURN chain. */
   
  
 
0 0 RETURN udp -- * * 10.0.0.3 0.0.0.0/0 udp spt:67 dpt:68
   
  
 
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 match-set
IPv43a0d3610-8b38-43f2-8
src
   
  
 
0 0 RETURN tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22  rule that
allows
ssh on port 22
   
  
 
1 84 RETURN icmp -- * * 0.0.0.0/0 0.0.0.0/0
   
  
 
0 0 neutron-openvswi-sg-fallback all -- * * 0.0.0.0/0 

Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-24 Thread Miguel Angel Ajo Pelayo
Sorry: when I said boot, I meant openvswitch agent restart.

- Original Message -
 
 Kevin, I agree, with you, 1 zone per port should be reasonable.
 
 The 2^16 zone limit will force us into keeping state (to tie
 ports to zones across reboots); maybe this state can be just
 recovered by reading the iptables rules at boot, and reconstructing
 the current openvswitch-agent local port/zone association.
 
 Best,
 Miguel Ángel.
 
 - Original Message -
 
  While a zone per rule would be nice because we can easily delete connection
  state by only referencing a zone, that's probably overkill. We only need
  enough to disambiguate between overlapping IPs so we can then delete
  connection state by matching standard L3/4 headers again, right?
 
  I think a conntrack zone per port would be the easiest from an accounting
  perspective. We already setup an iptables chain per port so the grouping is
  already there (/me sweeps the complexity of choosing zone numbers under the
  rug).
 
  On Fri, Oct 24, 2014 at 2:25 AM, Salvatore Orlando  sorla...@nicira.com 
  wrote:
 
   Just like Kevin I was considering using conntrack zones to segregate
   connections.
  
   However, I don't know whether this would be feasible as I've never used
   iptables CT target in real applications.
  
 
   Segregation should probably happen at the security group level - or
   even at the rule level - rather than the tenant level.

   Indeed the same situation could occur even with two security groups
   belonging to the same tenant.

   Probably each rule can be associated with a different conntrack zone.
   So when it's matched, the corresponding conntrack entries will be
   added to the appropriate zone. And therefore when the rules are
   removed the corresponding connections to kill can be filtered by zone
   as explained by Kevin.

   This approach will add a good number of rules to the RAW table
   however, so its impact on control/data plane scalability should be
   assessed, as it might turn out as bad as the solution where
   connections were explicitly dropped with an ad-hoc iptables rule.
  
 
   Salvatore
  
 
   On 24 October 2014 09:32, Kevin Benton  blak...@gmail.com  wrote:
  
 
I think the root cause of the problem here is that we are losing
segregation between tenants at the conntrack level. The compute side
plugs everything into the same namespace and we have no guarantees
about uniqueness of any other fields kept by conntrack.

Because of this loss of uniqueness, I think there may be another
lurking bug here as well. One tenant establishing connections between
IPs that overlap with another tenant will create the possibility that
a connection the other tenant attempts will match the conntrack entry
from the original connection. Then whichever closes the connection
first will result in the conntrack entry being removed and the return
traffic from the remaining connection being dropped.

I think the correct way forward here is to isolate each tenant (or
even compute interface) into its own conntrack zone.[1] This will
provide isolation against that imaginary unlikely scenario I just
presented. :-)

More importantly, it will allow us to clear connections for a specific
tenant (or compute interface) without interfering with others because
conntrack can delete by zone.[2]
   
  
 
1. https://github.com/torvalds/linux/commit/5d0aa2ccd4699a01cfdf14886191c249d7b45a01
2. see the -w option: http://manpages.ubuntu.com/manpages/raring/man8/conntrack.8.html
   
  
 
On Thu, Oct 23, 2014 at 3:22 AM, Elena Ezhova  eezh...@mirantis.com 
wrote:
   
  
 
 Hi!

   
  
 
 I am working on a bug "ping still working once connected even after
 related security group rule is deleted"
 (https://bugs.launchpad.net/neutron/+bug/1335375). The gist of the
 problem is the following: when we delete a security group rule the
 corresponding rule in iptables is also deleted, but the connection
 that was allowed by that rule is not destroyed.

   
  
 The reason for such behavior is that in iptables we have the
 following
 structure of a chain that filters input packets for an interface of
 an
 instance:

   
  
 
 Chain neutron-openvswi-i830fa99f-3 (1 references)
 pkts bytes target prot opt in out source destination
 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID /* Drop packets that are not associated with a state. */
 0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED /* Direct packets associated with a known session to the RETURN chain. */
 0 0 RETURN udp -- * * 10.0.0.3 0.0.0.0/0 udp spt:67 dpt:68
 0 0 RETURN all -- * * 

Re: [openstack-dev] [Horizon] [Devstack]

2014-10-24 Thread Yves-Gwenaël Bourhis
On 23/10/2014 23:55, Gabriel Hurley wrote:
 1)  If you’re going to store very large amounts of data in the
 session, then session cleanup is going to become an important issue to
 prevent excessive data growth from old sessions.
 
 2)  SQLite is far worse to go into production with than cookie-based
 sessions (which are far from perfect). The more we can do to ensure
 people don’t make that mistake, the better.

memcache can be distributed (so usable in HA) and has far better
performance than db sessions.
Why not use memcache by default?
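
For example, something like this in Horizon's local_settings.py (these
are standard Django settings, but the memcached hosts here are made up
and deployment-specific; 'cached_db' is the hybrid engine if you want
sessions to survive a memcache restart):

    SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            # several nodes can be listed here for an HA deployment
            'LOCATION': ['controller1:11211', 'controller2:11211'],
        }
    }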

Just a suggestion.

-- 
Yves-Gwenaël Bourhis

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] [qa] [oslo] Declarative HTTP Tests

2014-10-24 Thread Chris Dent

On Thu, 23 Oct 2014, Doug Hellmann wrote:

Thanks for the feedback Doug, it's useful.


WebTest isn’t quite what you’re talking about, but does provide a
way to talk to a WSGI app from within a test suite rather simply. Can
you expand a little on why “declarative” tests are better suited
for this than the more usual sorts of tests we write?


I'll add a bit more on why I think declarative tests are useful, but
it basically comes down to explicit transparency of the on-the-wire
HTTP requests and responses. At least within Ceilometer and at least
for me, unittest-style tests are very difficult to read, as a
significant portion of the action is somewhere in test setup or a
superclass. This may not have much impact on the effectiveness of the
tests for computers, but it's a total killer of the effectiveness of
the tests as tools for discovery by a developer who needs to make
changes or, heaven forbid, is merely curious. I think we can agree
that a more informed developer, via a more learnable codebase, is a
good thing?

I did look at WebTest and while it looks pretty good I don't particularly
care for its grammar. This might be seen as a frivolous point (because
under the hood the same thing is happening, and it's not this aspect that
would be exposed in the declarations) but calling a method on the `app`
provided by WebTest feels different than using an http library to make
a request of what appears to be a web server hosting the app.
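
To make that concrete, a minimal wsgi-intercept sketch - the hostname
and the trivial app are stand-ins of mine, not Ceilometer's API:

    import httplib2
    import wsgi_intercept
    from wsgi_intercept import httplib2_intercept

    def app(environ, start_response):
        # stand-in WSGI app; in real use this would be the service's app
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['hello\n']

    # requests to this host/port are routed to the in-process app, but
    # the test still reads as plain HTTP against a web server
    httplib2_intercept.install()
    wsgi_intercept.add_wsgi_intercept('api.example.test', 80, lambda: app)
    resp, content = httplib2.Http().request('http://api.example.test/v2/meters')
    assert resp.status == 200
    httplib2_intercept.uninstall()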

If people prefer WebTest over wsgi-intercept, that's fine, it's not
something worth fighting about. The positions I'm taking in the spec,
as you'll have seen from my responses to Eoghan, are trying to emphasize
the focus on HTTP rather than the app itself. I think this could lead,
eventually, to better HTTP APIs.


I definitely don’t think the ceilometer team should build
something completely new for this without a lot more detail in the
spec about which projects on PyPI were evaluated and rejected as not
meeting the requirements. If we do need/want something like this I
would expect it to be built within the QA program. I don’t know if
it’s appropriate to put it in tempestlib or if we need a
completely new tool.


I don't think anyone wants to build a new thing unless that turns out
to be necessary, thus this thread. I'm hoping to get input from people
who have thought about or explored this before. I'm hoping we can
build on the shoulders of giants and all that. I'm also hoping to
short circuit extensive personal research by collaborating with others
who may have already done it before.

I have an implementation of the concept that I've used in a previous
project (linked from the spec) which could work as a starting point
from which we could iterate but if there is something better out there
I'd prefer to start there.

So the questions from the original post still stand, all input
welcome, please and thank you.


* Is this a good idea?
* Do other projects have similar ideas in progress?
* Is this concept something for which a generic tool should be
  created _prior_ to implementation in an individual project?
* Is there prior art? What's a good format?


--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [stable] Tool to aid in scalability problems mitigation.

2014-10-24 Thread Ihar Hrachyshka

On 24/10/14 11:56, Miguel Angel Ajo Pelayo wrote:
 
 
 - Original Message -
 Hi Miguel,
 
 while we'd need to hear from the stable team, I think it's not
 such a bad idea to make this tool available to users of pre-juno
 openstack releases.

It's a great idea actually. It's great when code that emerged from real
life downstream support cases eventually flows up to upstream for all
operators' benefit (and not just those who pay huge money for
commercial service).

 As far as upstream repos are concerned, I don't know if this tool
 violates the criteria for stable branches. Even if it would be a
 rather large change for stable/icehouse, it is pretty much
 orthogonal to the existing code, so it could be ok. However,
 please note that stable/havana has now reached its EOL, so there
 will be no more stable release for it.
 
 Sure, I was mentioning havana as affected, but I understand it's
 already under U/S EOL, D/S distributions would always be free to
 backport, specially on an orthogonal change like this.
 
 About stable/icehouse, I'd like to hear from the stable
 maintainers.

I'm for inclusion of the tool in the main neutron package. Though it's
possible to publish it on pypi as a separate package, I would better
apply formal review process to it, plus reduce packaging efforts for
distributions (and myself). The tool may be later expanded for other
useful operator hooks, so I'm for inclusion of the tool in master and
backporting it back to all supported branches.

Though official stable maintainership rules state that 'New features'
are no-go for stable branch [1], I think they should not apply in this
case since the tool does not touch production code in any way and just
provides a way to heal security groups on operator demand. Also, rules
are there to be broken. ;) Quoting the same document: "Proposed backports
breaking any of above guidelines can be discussed as exception
requests on openstack-stable-maint list where stable-maint team will
try to reach consensus."

Operators should be more happy if we ship such a tool as part of
neutron release and not as another third-party tool from pypi of
potentially unsafe origin.

BTW I wonder whether the tool can be useful for Juno+ setups too.
Though we mostly mitigated the problem by RPC interface rework and
ipset, some operators may still hit some limitation that could be
worked around by optimizing their rules. Also, I think the idea of
having a tool with miscellaneous operator hooks in the master tree is
quite interesting. I would recommend to still go with pushing it to
master and then backporting to stable branches. That would also help
to get more review attention from cores than stable branch requests
usually receive. ;)

[1]: https://wiki.openstack.org/wiki/StableBranch#Appropriate_Fixes

 
 
 The orthogonal nature of this tool however also makes the case for
 making it widely available on pypi. I think it should be ok to
 describe the scalability issue in the official OpenStack Icehouse
 docs and point out to this tool for mitigation.
 
 Yes, of course, I consider that as a second option, my point here
 is that direct upstream review time would result in better quality
 code here, and could certainly spot any hidden bugs, and increase
 testing quality.
 
 It also reduces packaging time all across distributions making it
 available via the standard neutron repository.
 
 
 Thanks for the feedback!,
 
 
 Salvatore
 
 On 23 October 2014 14:03, Miguel Angel Ajo Pelayo 
 mangel...@redhat.com  wrote:
 
 
 
 
 Recently, we have identified clients with problems due to the bad
 scalability of security groups in Havana and Icehouse, that was
 addressed during juno here [1] [2]
 
 This situation is identified by blinking agents (going UP/DOWN), 
 high AMQP load, high neutron-server load, and timeouts from
 openvswitch agents when trying to contact neutron-server 
 security_group_rules_for_devices.
 
 Doing a [1] backport involves many dependent patches related to
 the general RPC refactor in neutron (which modifies all
 plugins), and subsequent ones fixing a few bugs. Sounds risky to
 me. [2] Introduces new features and it's dependent on features
 which aren't available on all systems.
 
 To remediate this on production systems, I wrote a quick tool to
 help report on security groups and mitigate the problem by
 writing almost-equivalent rules [3].
 
 We believe this tool would be better available to the wider
 community, and under better review and testing, and, since it
 doesn't modify any behavior or actual code in neutron, I'd like
 to propose it for inclusion into, at least, Icehouse stable
 branch where it's more relevant.
 
 I know the usual way is to go master-Juno-Icehouse, but at this
 moment the tool is only interesting for Icehouse (and Havana),
 although I believe it could be extended to cleanup orphaned
 resources, or any other cleanup tasks, in that case it could make
 sense to be available for K-J-I.
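
For illustration, the kind of collapsing such a rewrite could do (only a
guess at the approach using netaddr, not the actual tool's code):

    import netaddr

    # e.g. the member IPs of a remote security group, one iptables rule each
    member_ips = ['10.0.0.%d' % i for i in range(2, 255)]
    # a single covering CIDR instead of ~250 per-IP rules; only "almost"
    # equivalent because the CIDR may also match a few non-member addresses
    print(netaddr.spanning_cidr(member_ips))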

[openstack-dev] [tempest] [devstack] Generic scripts for Tempest configuration

2014-10-24 Thread Timur Nurlygayanov
Hi all,

we are using Tempest tests to verify every change in different OpenStack
components, and we have scripts in devstack which allow us to configure
Tempest.
We want to use Tempest tests to verify different clouds, not only those
installed with devstack, and to do this we need to configure Tempest
manually (or with some non-generic scripts which configure Tempest for a
specific lab configuration).

Looks like we can improve these Tempest configuration scripts, which we
have in the devstack repository now, and create generic scripts for
Tempest which can be used by devstack or manually, to configure Tempest
for any private/public OpenStack cloud. These scripts should make it easy
to configure Tempest: the user should provide only the Keystone endpoint
and logins/passwords; other parameters can be optional and can be
configured automatically.

The idea is to have generic scripts which allow Tempest to be easily
configured out of the box, without deep inspection of the lab
configuration (but with the ability to change optional parameters too, if
required).
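
To sketch the idea (the option names below are from tempest.conf's
[identity] section; endpoint discovery and everything else is left out):

    import ConfigParser

    def write_minimal_tempest_conf(auth_url, user, password, tenant,
                                   path='tempest.conf'):
        # only the credentials the user must provide; remaining options
        # would be discovered from the service catalog or defaulted
        cfg = ConfigParser.SafeConfigParser()
        cfg.add_section('identity')
        cfg.set('identity', 'uri', auth_url)
        cfg.set('identity', 'username', user)
        cfg.set('identity', 'password', password)
        cfg.set('identity', 'tenant_name', tenant)
        with open(path, 'w') as f:
            cfg.write(f)

    write_minimal_tempest_conf('http://keystone:5000/v2.0',
                               'demo', 'secret', 'demo')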

-- 

Timur,
Senior QA Engineer
OpenStack Projects
Mirantis Inc
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] [devstack] Generic scripts for Tempest configuration

2014-10-24 Thread Andrey Kurilin
Hi Timur!
https://review.openstack.org/#/c/94473/ - this is what you need :)

On Fri, Oct 24, 2014 at 2:05 PM, Timur Nurlygayanov 
tnurlygaya...@mirantis.com wrote:

 Hi all,

 we are using Tempest tests to verify every change in different OpenStack
 components and we have scripts in devstack, which allow to configure
 Tempest.
 We want to use Tempest tests to verify different clouds, not only
 installed with devstack and to do this we need to configure Tempest
 manually (or with some non-generic scripts, which allow to configure tempest
 for specific lab configuration).

 Looks like we can improve these scripts for configuration of the Tempest,
 which we have in devstack repository now and create generic scripts for
 Tempest, which can be used by devstack scripts or manually, to configure
 Tempest for any private/public OpenStack clouds. These scripts should allow
 to easily configure Tempest: user should provide only Keystone endpoint and
 logins/passwords, other parameters can be optional and can be configured
 automatically.

 The idea is to have the generic scripts, which will allow to easily
 configure Tempest out of the box, without deep inspection of lab
 configuration (but with the ability to change optional parameters too, if
 it is required).

 --

 Timur,
 Senior QA Engineer
 OpenStack Projects
 Mirantis Inc

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best regards,
Andrey Kurilin.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [Devstack]

2014-10-24 Thread Chmouel Boudjnah
On Fri, Oct 24, 2014 at 12:27 PM, Yves-Gwenaël Bourhis 
yves-gwenael.bour...@cloudwatt.com wrote:

 On 23/10/2014 23:55, Gabriel Hurley wrote:
  1)  If you’re going to store very large amounts of data in the
  session, then session cleanup is going to become an important issue to
  prevent excessive data growth from old sessions.
 
  2)  SQLite is far worse to go into production with than cookie-based
  sessions (which are far from perfect). The more we can do to ensure
  people don’t make that mistake, the better.

 memcache can be distributed (so usable in HA) and has far better
 performance than db sessions.
 Why not use memcache by default?


I guess for the simple reason that if you restart your memcache you lose
all the sessions?

Chmouel



 Just a suggestion.

 --
 Yves-Gwenaël Bourhis

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] multi-tenant store in Juno

2014-10-24 Thread stuart . mclaren

All,

On my devstack setup neither image upload nor image download works
for the Swift multi-tenant store in Juno (I'm getting E500s).

I'd be interested if someone can confirm.

With these changes upload/download worked for me:

glance upload
https://review.openstack.org/#/c/130757/

glance store upload
https://review.openstack.org/#/c/130755/

glance store download
https://review.openstack.org/#/c/130743/

-Stuart

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] multi-tenant store in Juno

2014-10-24 Thread Flavio Percoco
On 10/24/2014 01:40 PM, stuart.mcla...@hp.com wrote:
 All,
 
 On my devstack setup neither image upload nor image download works
 for the Swift multi-tenant store in Juno (I'm getting E500s).
 
 I'd be interested if someone can confirm.
 
 With these changes upload/download worked for me:
 
 glance upload
 https://review.openstack.org/#/c/130757/
 
 glance store upload
 https://review.openstack.org/#/c/130755/
 
 glance store download
 https://review.openstack.org/#/c/130743/

The fixes look legit.

I guess this was not caught in the gate because we're not testing
multi-tenant upload there, right? Any chance we can enable this?


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-24 Thread Flavio Percoco
On 10/24/2014 02:14 AM, Jeremy Stanley wrote:
 On 2014-10-23 17:18:04 -0400 (-0400), Doug Hellmann wrote:
 I think we have to actually wait for M, don’t we? (K & L represent
 1 year where J is supported, M is the first release where J is not
 supported and 2.6 can be fully dropped.)
 [...]
 
 Roughly speaking, probably. It's more accurate to say we need to
 keep it until stable/juno reaches end of support, which won't
 necessarily coincide exactly with any particular release cycle
 ending (it will instead coincide with whenever the stable branch
 management team decides the final 2014.2.x point release is, which I
 don't think has been settled quite yet).
 

Agreed! I think this is a safer and easier way to drop support for py26
in Oslo. Otherwise, I think we'll have to fight with backports, pinned
versions and whatnot.

Thoughts?
Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-24 Thread Salvatore Orlando
Assigning a distinct ct zone to each port sounds more scalable. This should
keep the number of zones per host contained.

What should the workflow be when rules are updated or deleted?
1) From the rule's security group, find the ports on the host where it's applied
2) Kill all matching connections for those ports

I'm just thinking aloud here, but can #1 be achieved without doing a call
from the agent to the server?
Otherwise one could pack the set of affected ports into messages for security
group updates.

Once we identify the ports, and therefore the ct zones, we'd still need to
find the connections matching the rules which were removed. This does not
sound too difficult, but it can result in searches over long lists - think
about an instance hosting a DB or web server.

The above two considerations made me suggest the idea of associating ct
zones with rules, but it is probably true that this can cause us to go
beyond the 2^16 limit.
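
To make the mechanics concrete, a sketch of what the agent could do (the
iptables CT target and conntrack's zone filter are real; the device name
and zone bookkeeping here are purely illustrative):

    import subprocess

    def tag_port_with_zone(device, zone):
        # send traffic entering through this tap device to a dedicated
        # conntrack zone via the CT target in the raw table
        subprocess.check_call(
            ['iptables', '-t', 'raw', '-A', 'PREROUTING',
             '-i', device, '-j', 'CT', '--zone', str(zone)])

    def kill_zone_connections(zone):
        # delete every conntrack entry in the zone; conntrack exits
        # non-zero when nothing matched, hence call() not check_call()
        subprocess.call(['conntrack', '-D', '-w', str(zone)])

    tag_port_with_zone('tap830fa99f-3e', 1)  # when wiring the port
    kill_zone_connections(1)                 # when its rules change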

Salvatore


On 24 October 2014 11:16, Miguel Angel Ajo Pelayo mangel...@redhat.com
wrote:

 sorry: when I said boot, I meant openvswitch agent restart.

 - Original Message -
 
  Kevin, I agree with you, 1 zone per port should be reasonable.
 
  The 2^16 zone limit will force us into keeping state (to tie
  ports to zones across reboots); maybe this state can be just
  recovered by reading the iptables rules at boot, and reconstructing
  the current openvswitch-agent local port/zone association.
 
  Best,
  Miguel Ángel.
 
  - Original Message -
 
   While a zone per rule would be nice because we can easily delete
   connection state by only referencing a zone, that's probably
   overkill. We only need enough to disambiguate between overlapping IPs
   so we can then delete connection state by matching standard L3/4
   headers again, right?

   I think a conntrack zone per port would be the easiest from an
   accounting perspective. We already set up an iptables chain per port
   so the grouping is already there (/me sweeps the complexity of
   choosing zone numbers under the rug).
 
   On Fri, Oct 24, 2014 at 2:25 AM, Salvatore Orlando 
 sorla...@nicira.com 
   wrote:
 
Just like Kevin I was considering using conntrack zones to segregate
connections.

However, I don't know whether this would be feasible as I've never used
the iptables CT target in real applications.

Segregation should probably happen at the security group level - or even
at the rule level - rather than the tenant level.

Indeed the same situation could occur even with two security groups
belonging to the same tenant.

Probably each rule can be associated with a different conntrack zone. So
when it's matched, the corresponding conntrack entries will be added to
the appropriate zone. And therefore when the rules are removed the
corresponding connections to kill can be filtered by zone as explained
by Kevin.

This approach will add a good number of rules to the RAW table however,
so its impact on control/data plane scalability should be assessed, as
it might turn out as bad as the solution where connections were
explicitly dropped with an ad-hoc iptables rule.
  
 
Salvatore
  
 
On 24 October 2014 09:32, Kevin Benton  blak...@gmail.com  wrote:
  
 
 I think the root cause of the problem here is that we are losing
 segregation between tenants at the conntrack level. The compute side
 plugs everything into the same namespace and we have no guarantees
 about uniqueness of any other fields kept by conntrack.

 Because of this loss of uniqueness, I think there may be another
 lurking bug here as well. One tenant establishing connections between
 IPs that overlap with another tenant will create the possibility that
 a connection the other tenant attempts will match the conntrack entry
 from the original connection. Then whichever closes the connection
 first will result in the conntrack entry being removed and the return
 traffic from the remaining connection being dropped.

 I think the correct way forward here is to isolate each tenant (or
 even compute interface) into its own conntrack zone.[1] This will
 provide isolation against that imaginary unlikely scenario I just
 presented. :-)

 More importantly, it will allow us to clear connections for a
 specific tenant (or compute interface) without interfering with
 others because conntrack can delete by zone.[2]
   
  
 
 1. https://github.com/torvalds/linux/commit/5d0aa2ccd4699a01cfdf14886191c249d7b45a01
 2. see the -w option: http://manpages.ubuntu.com/manpages/raring/man8/conntrack.8.html
   
  
 
 On Thu, Oct 23, 2014 at 3:22 AM, Elena Ezhova 
 eezh...@mirantis.com 
 wrote:
   
  
 
  Hi!

   
  
 
  I am working on a 

[openstack-dev] [MagnetoDB] MagnetoDB Juno release announce

2014-10-24 Thread Ilya Sviridov
Hello openstackers,

The MagnetoDB team is proud to announce the release of its Juno milestone [1]

Thanks to everyone who participated and contributed!

[1] https://launchpad.net/magnetodb/juno/2014.2

--
Ilya Sviridov
isviridov @ FreeNode
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] [devstack] Generic scripts for Tempest configuration

2014-10-24 Thread Timur Nurlygayanov
Yes, got it. I will review this spec.

Thank you!

On Fri, Oct 24, 2014 at 3:29 PM, Andrey Kurilin akuri...@mirantis.com
wrote:

 Hi Timur!
 https://review.openstack.org/#/c/94473/ - this is what you need :)

 On Fri, Oct 24, 2014 at 2:05 PM, Timur Nurlygayanov 
 tnurlygaya...@mirantis.com wrote:

 Hi all,

 we are using Tempest tests to verify every change in different OpenStack
 components and we have scripts in devstack, which allow to configure
 Tempest.
 We want to use Tempest tests to verify different clouds, not only
 installed with devstack and to do this we need to configure Tempest
 manually (or with some non-generic scripts, which allow to configure tempest
 for specific lab configuration).

 Looks like we can improve these scripts for configuration of the Tempest,
 which we have in devstack repository now and create generic scripts for
 Tempest, which can be used by devstack scripts or manually, to configure
 Tempest for any private/public OpenStack clouds. These scripts should allow
 to easily configure Tempest: user should provide only Keystone endpoint and
 logins/passwords, other parameters can be optional and can be configured
 automatically.

 The idea is to have the generic scripts, which will allow to easily
 configure Tempest out of the box, without deep inspection of lab
 configuration (but with the ability to change optional parameters too, if
 it is required).

 --

 Timur,
 Senior QA Engineer
 OpenStack Projects
 Mirantis Inc

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Best regards,
 Andrey Kurilin.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Timur,
Senior QA Engineer
OpenStack Projects
Mirantis Inc
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MagnetoDB] PTL Candidacy

2014-10-24 Thread Ilya Sviridov
Hello openstackers,

I'd like to announce my candidacy as PTL of the MagnetoDB[1][2][3] project.

As PTL of MagnetoDB I'll continue my work on building a great environment
for contributors and making MagnetoDB a well-known and great software product.

[1] https://launchpad.net/magnetodb
[2] http://stackalytics.com/report/contribution/magnetodb/90
[3]
http://stackalytics.com/?release=juno&metric=commits&project_type=stackforge&module=magnetodb-group

Thank you,
Ilya Sviridov
isviridov @ FreeNode
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [stable] Tool to aid in scalability problems mitigation.

2014-10-24 Thread Miguel Angel Ajo Pelayo
Thanks for your feedback too Ihar, comments inline.

- Original Message -
 
 On 24/10/14 11:56, Miguel Angel Ajo Pelayo wrote:
  
  
  - Original Message -
  Hi Miguel,
  
  while we'd need to hear from the stable team, I think it's not
  such a bad idea to make this tool available to users of pre-juno
  openstack releases.
 
 It's a great idea actually. It's great when code that emerged from real
 life downstream support cases eventually flows up to upstream for all
 operators' benefit (and not just those who pay huge money for
 commercial service).
 
  As far as upstream repos are concerned, I don't know if this tool
  violates the criteria for stable branches. Even if it would be a
  rather large change for stable/icehouse, it is pretty much
  orthogonal to the existing code, so it could be ok. However,
  please note that stable/havana has now reached its EOL, so there
  will be no more stable release for it.
  
  Sure, I was mentioning havana as affected, but I understand it's
  already under U/S EOL, D/S distributions would always be free to
  backport, specially on an orthogonal change like this.
  
  About stable/icehouse, I'd like to hear from the stable
  maintainers.
 
 I'm for inclusion of the tool in the main neutron package. Though it's
 possible to publish it on pypi as a separate package, I would better
 apply formal review process to it, plus reduce packaging efforts for
 distributions (and myself). The tool may be later expanded for other
 useful operator hooks, so I'm for inclusion of the tool in master and
 backporting it back to all supported branches.
 
 Though official stable maintainership rules state that 'New features'
 are no-go for stable branch [1], I think they should not apply in this
 case since the tool does not touch production code in any way and just
 provides a way to heal security groups on operator demand. Also, rules
 are there to be broken. ;) Quoting the same document: "Proposed backports
 breaking any of above guidelines can be discussed as exception
 requests on openstack-stable-maint list where stable-maint team will
 try to reach consensus."
 
 Operators should be more happy if we ship such a tool as part of
 neutron release and not as another third-party tool from pypi of
 potentially unsafe origin.
 
 BTW I wonder whether the tool can be useful for Juno+ setups too.
 Though we mostly mitigated the problem by RPC interface rework and
 ipset, some operators may still hit some limitation that could be
 workarounded by optimizing their rules. Also, I think the idea of
 having a tool with miscellaneous operator hooks in the master tree is
 quite interesting. I would recommend to still go with pushing it to
 master and then backporting to stable branches. That would also help
 to get more review attention from cores than stable branch requests
 usually receive. ;)


I believe the tool could also be expanded to report on, and equally generate
scripts to clean up, orphaned resources: those happen when
you remove an instance and the port is not deleted, or you delete a
tenant but its resources are kept, etc.

I know there are efforts to do proper cleanup when tenants are deleted,
but still, I see production databases plagued with orphaned resources.
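
For instance, a rough sketch of an orphaned-port report (client usage
from memory and the credentials made up - an illustration, not the tool):

    from neutronclient.v2_0 import client as neutron_client
    from novaclient.v1_1 import client as nova_client

    AUTH = dict(username='admin', password='secret',
                tenant_name='admin',
                auth_url='http://127.0.0.1:5000/v2.0')

    neutron = neutron_client.Client(**AUTH)
    nova = nova_client.Client(AUTH['username'], AUTH['password'],
                              AUTH['tenant_name'], AUTH['auth_url'])

    server_ids = set(s.id for s in
                     nova.servers.list(search_opts={'all_tenants': 1}))
    for port in neutron.list_ports()['ports']:
        # compute ports whose instance no longer exists are orphans
        if (port['device_owner'].startswith('compute:')
                and port['device_id'] not in server_ids):
            print('orphaned port %s (device_id %s)'
                  % (port['id'], port['device_id']))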

 
 [1]: https://wiki.openstack.org/wiki/StableBranch#Appropriate_Fixes
 
  
  
  The orthogonal nature of this tool however also makes the case for
  making it widely available on pypi. I think it should be ok to
  describe the scalability issue in the official OpenStack Icehouse
  docs and point out to this tool for mitigation.
  
  Yes, of course, I consider that as a second option, my point here
  is that direct upstream review time would result in better quality
  code here, and could certainly spot any hidden bugs, and increase
  testing quality.
  
  It also reduces packaging time all across distributions making it
  available via the standard neutron repository.
  
  
  Thanks for the feedback!,
  
  
  Salvatore
  
  On 23 October 2014 14:03, Miguel Angel Ajo Pelayo 
  mangel...@redhat.com  wrote:
  
  
  
  
  Recently, we have identified clients with problems due to the bad
  scalability of security groups in Havana and Icehouse, that was
  addressed during juno here [1] [2]
  
  This situation is identified by blinking agents (going UP/DOWN),
  high AMQP load, high neutron-server load, and timeouts from
  openvswitch agents when trying to contact neutron-server
  security_group_rules_for_devices.
  
  Doing a [1] backport involves many dependent patches related to
  the general RPC refactor in neutron (which modifies all
  plugins), and subsequent ones fixing a few bugs. Sounds risky to
  me. [2] Introduces new features and it's dependent on features
  which aren't available on all systems.
  
  To remediate this on production systems, I wrote a quick tool to
  help report on security groups and mitigate the problem by
  writing almost-equivalent rules 

Re: [openstack-dev] [Glance][Cinder] The sorry state of cinder's driver in Glance

2014-10-24 Thread Mike Perez
On 09:11 Thu 23 Oct , Flavio Percoco wrote:
 According to the use-cases explained in this thread (also in the emails
 from John and Mathieu) this is something that'd be good having. I'm
 looking forward to seeing the driver completed.
 
 As John mentioned in his email, we should probably sync again in K-1 to
 see if there's been some progress on the bricks side and the other
 things this driver depends on. If there hasn't, we should probably get
 rid of it and add it back once it can actually be full-featured.

I'm unsure if Brick [1] will be completed in time. With that in mind, even if
we were to deprecate the glance driver for Kilo, Brick will likely be done by
then and we would just be removing the deprecation in L, assuming the driver is
completed in L. I think that would be confusing to users. It's unfortunate this
was merged in the current state, but I would just say leave things as is with
intentions at the latest to have the driver completed in L. If we're afraid no
one is going to complete the driver, deprecate it now.

[1] - https://github.com/hemna/cinder-brick

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][nova] New specs on routed networking

2014-10-24 Thread Cory Benfield
All,

Project Calico [1] is an open source approach to virtual networking based on L3 
routing as opposed to L2 bridging.  In order to accommodate this approach 
within OpenStack, we've just submitted 3 blueprints that cover:

-  minor changes to nova to add a new VIF type [2]
-  some changes to neutron to add DHCP support for routed interfaces [3]
-  an ML2 mechanism driver that adds support for Project Calico [4].

We feel that allowing for routed network interfaces is of general use within 
OpenStack, which was our motivation for submitting [2] and [3].  We also 
recognise that there is an open question over the future of 3rd party ML2 
drivers in OpenStack, but until that is finally resolved in Paris, we felt 
submitting our driver spec [4] was appropriate (not least to provide more 
context on the changes proposed in [2] and [3]).

We're extremely keen to hear any and all feedback on these proposals from the 
community.  We'll be around at the Paris summit in a couple of weeks and would 
love to discuss with anyone else who is interested in this direction. 

Regards,

Cory Benfield (on behalf of the entire Project Calico team)

[1] http://www.projectcalico.org 
[2] https://blueprints.launchpad.net/nova/+spec/vif-type-routed
[3] https://blueprints.launchpad.net/neutron/+spec/dhcp-for-routed-ifs
[4] https://blueprints.launchpad.net/neutron/+spec/calico-mechanism-driver

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Mistral 0.1.1 released

2014-10-24 Thread Renat Akhmerov
Hi,

Mistral 0.1.1 and Mistral Client 0.1.1 have just been released. They are bugfix 
releases for version 0.1.

The most important bugs fixed:
* Data Flow algorithm for direct workflows didn’t work correctly in all cases
* Workflow pretty logging was broken
* 'action_context' parameter wasn't passed to mistral_http action and other
context-dependent actions
* A number of usability bugs in CLI

Please also try the new examples (tenant statistics, registering VM in Zabbix 
and Vyatta, and redesigned calculator example) located at 
https://github.com/stackforge/mistral-extra/tree/master/examples/v2/

Mistral release page: https://launchpad.net/mistral/juno/0.1.1
Mistral Client release page: https://launchpad.net/python-mistralclient/juno/0.1.1

Thanks!

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] How can we get more feedback from users?

2014-10-24 Thread Sandy Walsh
Nice work Angus ... great idea. Would love to see more of this.

-S


From: Angus Salkeld [asalk...@mirantis.com]
Sent: Friday, October 24, 2014 1:32 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [all] How can we get more feedback from users?

Hi all

I have felt some grumblings about usability issues with Heat 
templates/client/etc..
and wanted a way that users could come and give us feedback easily (low 
barrier). I started an etherpad 
(https://etherpad.openstack.org/p/heat-useablity-improvements) - the first win 
is it is spelt wrong :-O

We now have some great feedback there in a very short time, most of this we 
should be able to solve.

This led me to think, should OpenStack have a more general mechanism for 
users to provide feedback. The idea is this is not for bugs or support, but 
for users to express pain points, requests for features and docs/howtos.

It's not easy to improve your software unless you are listening to your users.

Ideas?

-Angus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] [qa] [oslo] Declarative HTTP Tests

2014-10-24 Thread David Kranz

On 10/23/2014 06:27 AM, Chris Dent wrote:


I've proposed a spec to Ceilometer

   https://review.openstack.org/#/c/129669/

for a suite of declarative HTTP tests that would be runnable both in
gate check jobs and in local dev environments.

There's been some discussion that this may be generally applicable
and could be best served by a generic tool. My original assertion
was let's make something work and then see if people like it but I
thought I also better check with the larger world:

* Is this a good idea?

I think so


* Do other projects have similar ideas in progress?
Tempest faced a similar problem around negative tests in particular. We 
have code in tempest that automatically generates a series of negative
test cases based on illegal variations of a schema. If you want to look
at it, the NegativeAutoTest class is probably a good place to start. We have
discussed using a similar methodology for positive test cases but never 
did anything with that.


Currently only a few of the previous negative tests have been replaced 
with auto-gen tests. In addition to the issue of how to represent the 
schema, the other major issue we encountered was the need to create 
resources used by the auto-generated tests and a way to integrate a 
resource description into the schema. We use json for the schema and 
hoped one day to be able to receive base schemas from the projects 
themselves.
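
To give a flavor of the approach (a toy sketch only - nothing like the
actual NegativeAutoTest internals - with a made-up schema):

    def negative_cases(schema):
        # emit illegal variations per property: a wrong type, and a
        # string exceeding maxLength where one is declared
        for name, spec in schema['properties'].items():
            yield {name: object()}
            if 'maxLength' in spec:
                yield {name: 'x' * (spec['maxLength'] + 1)}

    schema = {'properties': {'name': {'type': 'string', 'maxLength': 255}}}
    for case in negative_cases(schema):
        # each generated case would be sent to the API and asserted to
        # produce a 4xx response
        print(case)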


* Is this concept something for which a generic tool should be
  created _prior_ to implementation in an individual project?

* Is there prior art? What's a good format?
Marc Koderer and I did a lot of searching and asking folks if there was 
some python code that we could use as a starting point but in the end 
did not find anything. I do not have a list of what we considered and 
rejected.


 -David


Thanks.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] StackTach users?

2014-10-24 Thread Sandy Walsh
Hey y'all!

I'm taking a page from Angus and trying to pull together a list of StackTach 
users. We're moving quickly on our V3 implementation and I'd like to ensure 
we're addressing the problems you've faced/are facing with older versions. 

For example, I know initial setup has been a concern and we're starting with an 
ansible installer in V3. Would that help?

We're also ditching the web gui (for now) and buffing up the REST API and 
client tools. Is that a bad thing?

Feel free to contact me directly if you don't like the public forums. Or we can 
chat at the summit. 

Cheers!
-S

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Cinder] The sorry state of cinder's driver in Glance

2014-10-24 Thread Flavio Percoco
On 10/24/2014 03:29 PM, Mike Perez wrote:
 On 09:11 Thu 23 Oct , Flavio Percoco wrote:
 According to the use-cases explained in this thread (also in the emails
 from John and Mathieu) this is something that'd be good having. I'm
 looking forward to seeing the driver completed.

 As John mentioned in his email, we should probably sync again in K-1 to
 see if there's been some progress on the bricks side and the other
 things this driver depends on. If there hasn't, we should probably get
 rid of it and add it back once it can actually be full-featured.
 
 I'm unsure if Brick [1] will be completed in time. With that in mind, even if
 we were to deprecate the glance driver for Kilo, Brick will likely be done by
 then and we would just be removing the deprecation in L, assuming the driver 
 is
 completed in L. I think that would be confusing to users. It's unfortunate 
 this
 was merged in the current state, but I would just say leave things as is with
 intentions at the latest to have the driver completed in L. If we're afraid no
 one is going to complete the driver, deprecate it now.
 
 [1] - https://github.com/hemna/cinder-brick

Thanks, Mike. This is great feedback.

I wonder how strong the dependency between the cinder driver and
bricks is. I mean, it'd be cool if we could complete the implementation in
a perhaps not so optimized way - on top of cinder's API? - and then use
the bricks library when it's done.

Thoughts on the above? It sounds hacky, I know. :)

Cheers,
Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [Devstack]

2014-10-24 Thread Yves-Gwenaël Bourhis
On 24/10/2014 13:30, Chmouel Boudjnah wrote:
 On Fri, Oct 24, 2014 at 12:27 PM, Yves-Gwenaël Bourhis
 yves-gwenael.bour...@cloudwatt.com wrote:
 memcache can be distributed (so usable in HA) and has far better
 performance than db sessions.
 Why not use memcache by default?
 
 
 I guess for the simple reason that if you restart your memcache you
 lose all the sessions?

Indeed, and for devstack that's an easy way to do a cleanup of old
sessions :-)

We are indeed talking about devstack in this thread, where losing
sessions after a memcache restart is not an issue and looks more like a
very handy feature.

For production it's another matter, and operators have the choice.

-- 
Yves-Gwenaël Bourhis

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] Scheduler sessions at the summit

2014-10-24 Thread Dugger, Donald D
(I hijacked this thread to save the context but changed the subject so more 
people might read it.)

Looks like we have a scheduler-specific session on Thurs. at 11:00AM for 90
min., so we can use that for working out some of the specific scheduler changes
we want to do (e.g. the split for sure).

On the Nova summit etherpad:

https://etherpad.openstack.org/p/kilo-nova-summit-topics

there is a scheduler section but it's not very cross-project focused.  I would 
like a cross project session on Tues. so we can get requirements from other 
projects on what they need from a common scheduler.  I don't know that I have 
specifics that I can put down on an etherpad yet, I'm hoping to get that info 
from other projects.  Specifically, I think:

1) Cinder
2) Containers
3) Neutron
4) Ceilometer
5) Heat

are specific projects that have scheduling requirements (either to use the
scheduler or provide input to it), and it would be really good to find out
exactly what they need.  My suggestion would be an etherpad with a big
title `Requirements' on top and sections for each of those projects; that
would be enough to get us started.

PS: Note that Sylvain and I are both on vacation next week so we'll have to 
miss the Gantt meeting.  It would be great if you could lead the meeting Jay, 
otherwise we'll just meet up at the summit.

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Wednesday, October 22, 2014 10:10 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [gantt] Scheduler group meeting - cancelled this 
week INTERNAL

The regular meeting time is Tuesdays at 15:00 UTC/11am EST:

https://wiki.openstack.org/wiki/Meetings#Gantt_.28Scheduler.29_team_meeting

Generally, we don't do slides for design summit sessions -- we use etherpads 
instead and the sessions are discussions, not presentations.

Next week's meeting we can and should create etherpads for the cross-project 
session(s) that we will get allocated for Gantt topics.

Best,
-jay

On 10/22/2014 11:54 AM, Elzur, Uri wrote:
 Don

 Will there be a meeting next week? What is the regular time slot for 
 the meeting?

 I'd like to work with you on a technical slide to use in Paris

 Do we need to socialize the Gantt topic more?

 Thx

 Uri (Oo-Ree)

 C: 949-378-7568

 *From:* Dugger, Donald D [mailto:donald.d.dug...@intel.com]
 *Sent:* Wednesday, October 22, 2014 6:04 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] [gantt] Scheduler group meeting - cancelled 
 this week

 Just a reminder that, as we mentioned last week, no meeting today.

 --

 Don Dugger

 Censeo Toto nos in Kansa esse decisse. - D. Gale

 Ph: 303/443-3786



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] periodic jobs for master

2014-10-24 Thread Ihar Hrachyshka

On 22/10/14 12:07, Thierry Carrez wrote:
 Ihar Hrachyshka wrote:
 [...] For stable branches, we have so called periodic jobs that
 are triggered once in a while against the current code in a
 stable branch, and report to openstack-stable-maint@ mailing
 list. An example of failing periodic job report can be found at
 [2]. I envision that similar approach can be applied to test
 auxiliary features in gate. So once something is broken in
 master, the interested parties behind the auxiliary feature will
 be informed in due time. [...]
 
 The main issue with periodic jobs is that since they are
 non-blocking, they can get ignored really easily. It takes a bit of
 organization and process to get those failures addressed.
 
 It's only recently (and a lot thanks to you) that failures in the 
 periodic jobs for stable branches are being taken into account
 quickly and seriously. For years the failures just lingered until
 they blocked someone's work enough for that person to go and fix
 them.
 
 So while I think periodic jobs are a good way to increase corner
 case testing coverage, I am skeptical of our collective ability to
 have the discipline necessary for them not to become a pain. We'll
 need a strict process around them: identified groups of people
 signed up to act on failure, and failure stats so that we can
 remove jobs that don't get enough attention.
 

There should be interest groups behind each of the periodic jobs (maybe
sometimes consisting of one person). Yes, jobs should be tracked,
though I assume that if the group is really interested in a job, it will
track it on a daily basis. Otherwise, we'll see it rot and eventually be
removed. Let's say anyone can propose a job for removal on the mailing
list, and we'll assess case by case whether it's ok to remove it
instead of e.g. fixing it (because we have no interested parties to
track it).

Another question to solve is how we disseminate the state of those jobs.
Do we create a separate mailing list for that? Obviously we should not
reuse the -dev one, and it's overkill to create one mailing list per
interest group.

/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Cinder] The sorry state of cinder's driver in Glance

2014-10-24 Thread John Griffith


On Fri, Oct 24, 2014, at 07:59 AM, Flavio Percoco wrote:
 On 10/24/2014 03:29 PM, Mike Perez wrote:
  On 09:11 Thu 23 Oct , Flavio Percoco wrote:
  According to the use-cases explained in this thread (also in the emails
  from John and Mathieu) this is something that'd be good having. I'm
  looking forward to seeing the driver completed.
 
  As John mentioned in his email, we should probably sync again in K-1 to
  see if there's been some progress on the bricks side and the other
  things this driver depends on. If there hasn't, we should probably get
  rid of it and add it back once it can actually be full-featured.
  
  I'm unsure if Brick [1] will be completed in time. With that in mind, even 
  if
  we were to deprecate the glance driver for Kilo, Brick will likely be done 
  by
  then and we would just be removing the deprecation in L, assuming the 
  driver is
  completed in L. I think that would be confusing to users. It's unfortunate 
  this
  was merged in the current state, but I would just say leave things as is 
  with
  intentions at the latest to have the driver completed in L. If we're afraid 
  no
  one is going to complete the driver, deprecate it now.
  
  [1] - https://github.com/hemna/cinder-brick
 
 Thanks, Mike. This is great feedback.
 
 I wonder how strong is the dependency between the cinder driver and
 bricks. I mean, it'd be cool if we could complete the implementation in
 a perhaps not so optimized way - on top of cinder's API? - and then use
 the bricks library when it's done.

I fear my mention of Brick may have thrown things out of whack here. 
Frankly for now I think that whole thing should be ignored as it
pertains to this topic.  We've made the mistake of deferring things
based on that whole idea in the past and it hasn't gone well. 

Glance shouldn't need most of the stuff that's going on in there anyway
I don't think.  I think Flavio keyed in on the main point IMO: how
strong of a dependency? Ideally I hope the answer is not very.  The
other thing about this is my conversations with folks in the past were
that Glance should be kept pretty darn light in terms of any data path
type stuff.  We really should intend for it to be not much more than a
registry and serve as a conduit.  If that's changed or I'm wrong on that
feel free to shout, but previous conversations with Mark suggested that
moving things like initiators and targets in to Glance was not ideal
(and I agree). 

 
 Thoughts on the above? It sounds hacky, I know. :)

Yes, please, let's move forward with it like we were saying earlier. 
There's still a number of gaps and some hand waving I think so really
it's something I think we just need to dive into.

 
 Cheers,
 Flavio
 
 -- 
 @flaper87
 Flavio Percoco
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][keystone] why is lxml only in test-requirements.txt?

2014-10-24 Thread Xu (Simon) Chen
Great. Thanks Dolph..

On Wed, Oct 22, 2014 at 7:20 PM, Dolph Mathews dolph.math...@gmail.com
wrote:

 Great question!

 For some backstory, the community interest in supporting XML has always
 been lackluster, so the XML translation middleware has been on a slow road
 of decline. It's a burden for everyone to maintain, and only works for
 certain API calls. For the bulk of Keystone's documented APIs, XML support
 is largely untested, undocumented, and unsupported. Given all that, I
 wouldn't recommend anyone deploy the XML middleware unless you *really*
 need some aspect of it's tested functionality.

 In both Icehouse and Juno, we shipped the XML translation middleware with
 a deprecation warning, but kept it in the default pipeline. That was
 basically my fault, because both Keystone's functional tests and tempest
 are hardcoded to expect XML support, and we didn't have time during
 Icehouse to break those expectations... but still wanted to communicate out
 the fact that XML was on the road to deprecation.

 So, to remedy that, we have now have a bunch of patches (thanks for your
 help, Lance!) which complete the work we started back in Icehouse.

 Tempest:
 - Make XML support optional https://review.openstack.org/#/c/126564/

 Devstack:
 - Make XML support optional moving forward
 https://review.openstack.org/#/c/126672/
 - stable/icehouse continue testing XML support
 https://review.openstack.org/#/c/127641/

 Keystone:
 - Remove XML support from keystone's default paste config (this makes lxml
 truly a test-requirement) https://review.openstack.org/#/c/130371/
 - (Potentially) remove XML support altogether
 https://review.openstack.org/#/c/125738/

 The patches to Tempest and Devstack should definitely land, and now we
 need to have a conversation about our desire to continue support for XML in
 Kilo (i.e. choose from the last two Keystone patches).

 -Dolph

 On Mon, Oct 20, 2014 at 8:05 AM, Xu (Simon) Chen xche...@gmail.com
 wrote:

 I am trying to understand why lxml is only in test-requirements.txt...
 The default pipelines do contain xml_body and xml_body_v2 filters, which
 depend on lxml to function properly.

 Since lxml is not in requirements.txt, my packaging system won't include
 lxml in the deployment drop. At the same time, my environment involves
 using browsers to directly authenticate with keystone - and browsers
 (Firefox/Chrome alike) send Accept: application/xml in their request
 headers, which triggers xml_body to perform json to xml conversion, which
 fails because lxml is not there.

 My opinion is that if xml_body filters are in the example/default
 paste.ini file, lxml should be included in requirements.txt.

 Comments?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-24 Thread Carl Baldwin
Hi Elena,

On Thu, Oct 23, 2014 at 4:22 AM, Elena Ezhova eezh...@mirantis.com wrote:
 Kill the connection using conntrack

   The problem here is that it is sometimes impossible to tell which
 connection should be killed. For example, there may be two instances running
 in different namespaces that have the same IP addresses. As a compute node
 doesn't know anything about namespaces, it cannot distinguish between the
 two seemingly identical connections:

If it really were different namespaces -- Linux network namespaces --
then it wouldn't matter.  The conntrack context is isolated between
namespaces.  A similar fix for floating IPs takes advantage of this
fact [1].  However, I don't think you meant Linux network namespaces
when you said "different namespaces" above.  I think you probably just
meant to use "different namespaces" as a more generic term for
different network traffic domains that are isolated from each other.
So, moving on...

In the case of security groups on compute nodes, isolation is
accomplished using VLANs, Linux bridges, etc.  So, yes, the conntrack
context is shared.  I can't think of a way to use conntrack to solve
this problem.

I'm not advocating the following approach at the moment, just
brainstorming here...  I wonder if we could just ignore the potential
IP address overlap and kill the connections anyway.  What would
happen?  Some connections that were supposed to survive will actually
recover if nf_conntrack_tcp_loose is enabled.  But, some may not
recover.  So, maybe this isn't a brilliant idea.

 I wonder whether there is any way to search for a connection by destination
 MAC?

Well, for one thing, the MAC is not guaranteed to be unique either.  For
another, I don't even know if conntrack is aware of or cares about any
L2 details of the connection, since it is kind of an L3/L4 sort of
thing.

 Delete iptables rule that directs packets associated with a known session to
 the RETURN chain

It will force all packets to go through the full chain each time,
 and this will definitely make the connection close. But this will strongly
 affect performance. A timeout could be created after which
 this rule is restored, but it is uncertain how long it should be.

I don't think this is just about performance; this will cause other
connections to stop working.  If I'm not mistaken, removing these
rules makes it behave more like a stateless firewall.  In stateless
firewalls, rules must be spelled out independently for ingress and
egress packets, whereas in a stateful firewall one typically thinks of
the whole connection as ingress or egress (after adding those
RELATED/ESTABLISHED rules to allow the other direction through based
on conntrack state).

Here's another brainstorming idea...  I wonder if something could be
done with the NOTRACK target in the raw table.  Could such a rule be
added to squash active connections by hiding them from conntrack?
Rules could be added with a timeout so that they don't stay in the
table forever.  The rule would have to be written to match the security
group rules that were just deleted.  This could be tricky.  But, once
added, they could be removed on the next iptables update after a
given period of time or on agent restart.
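
As a strawman, such a rule might look like the following (address and
port invented for illustration; a real rule would have to be derived
from the just-deleted security group rule):

  # raw table rules run before conntrack, so matching packets go untracked
  iptables -t raw -A PREROUTING -d 10.0.0.5 -p tcp --dport 80 -j NOTRACK

Untracked packets won't match the RELATED/ESTABLISHED rule, so they fall
through to the full chain and are dropped unless a remaining rule allows
them.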

Carl

[1] https://review.openstack.org/#/c/103475/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-24 Thread Carl Baldwin
Miguel Ángel,

On Thu, Oct 23, 2014 at 5:56 AM, Miguel Angel Ajo Pelayo
mangel...@redhat.com wrote:
 Temporarily removing this entry doesn't seem like a good solution
 to me, as we can't really know how long we need to remove this rule for to
 induce the connection to close at both ends (it will only close if any
 new activity happens and the timeout is exhausted afterwards).

I think you're right here.  I think any activity will keep the
connection alive in conntrack.  So, we are at the mercy of the
timeouts at both ends.  Assuming an attacker has control over at least
the external endpoint, the connection could be kept open indefinitely by
generating activity.

Carl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] New screencasts for Mistral 0.1

2014-10-24 Thread Renat Akhmerov
Hi,

In addition to the Mistral 0.1.1 bugfix release, here are the first two videos
from the series of screencasts that we started this week, aiming to highlight
new functionality introduced in Mistral 0.1, released at the end of September.

Mistral 0.1, part 1: http://www.youtube.com/watch?v=9kaac_AfNow
Mistral 0.1, part 2: http://www.youtube.com/watch?v=u6ki_DIpsg0

More screencasts coming soon...

Thanks

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] How can we get more feedback from users?-- Great idea Angus!

2014-10-24 Thread Brad Topol
+100!   Angus, this is awesome!!!   Any way to get one of these for each
project?

Thanks,

Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   Sandy Walsh sandy.wa...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   10/24/2014 09:46 AM
Subject:    Re: [openstack-dev] [all] How can we get more feedback 
from users?



Nice work Angus ... great idea. Would love to see more of this. 

-S


From: Angus Salkeld [asalk...@mirantis.com]
Sent: Friday, October 24, 2014 1:32 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [all] How can we get more feedback from users?

Hi all

I have felt some grumblings about usability issues with Heat 
templates/client/etc..
and wanted a way that users could come and give us feedback easily (low 
barrier). I started an etherpad (
https://etherpad.openstack.org/p/heat-useablity-improvements) - the first 
win is it is spelt wrong :-O

We now have some great feedback there in a very short time, most of this 
we should be able to solve.

This led me to think: should OpenStack have a more general mechanism for 
users to provide feedback. The idea is this is not for bugs or support, 
but for users to express pain points, requests for features and 
docs/howtos.

It's not easy to improve your software unless you are listening to your 
users.

Ideas?

-Angus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] making Daneyon Hansen core

2014-10-24 Thread Steven Dake

On 10/23/2014 07:31 AM, Jeff Peeler wrote:

On 10/22/2014 11:04 AM, Steven Dake wrote:

A few weeks ago in IRC we discussed the criteria for joining the core
team in Kolla.  I believe Daneyon has met all of these requirements by
reviewing patches along with the rest of the core team and providing
valuable comments, as well as implementing neutron and helping get
nova-networking implementation rolling.

Please vote +1 or -1 if you're a Kolla core.  Recall that a -1 is a veto.  It
takes 3 votes.  This email counts as one vote ;)


definitely +1


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Well, that is 4 votes, so Daneyon - welcome to the core team of Kolla!

Regards
-steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-24 Thread Rick Jones

On 10/23/2014 08:57 PM, Brian Haley wrote:

On 10/23/14 6:22 AM, Elena Ezhova wrote:

Hi!

I am working on a bug ping still working once connected even after
related security group rule is
deleted (https://bugs.launchpad.net/neutron/+bug/1335375). The gist of
the problem is the following: when we delete a security group rule the
corresponding rule in iptables is also deleted, but the connection, that
was allowed by that rule, is not being destroyed.
The reason for such behavior is that in iptables we have the following
structure of a chain that filters input packets for an interface of an
istance:

snip

Like Miguel said, there's no easy way to identify this on the compute
node since neither the MAC nor the interface are going to be in the
conntrack command output.  And you don't want to drop the wrong tenant's
connections.

Just wondering, if you remove the conntrack entries using the IP/port
from the router namespace does it drop the connection?  Or will it just
start working again on the next packet?  Doesn't work for VM to VM
packets, but those packets are probably less interesting.  It's just my
first guess.


Presumably this issue affects other conntrack users, no?  What does 
upstream conntrack have to say about the matter?


I tend to avoid such things where I can, but what do real firewalls do 
with such matters?  If one removes a rule which allowed a given 
connection through, do they actually go ahead and nuke existing connections?


rick jones


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] Scheduler sessions at the summit

2014-10-24 Thread Jay Pipes

On 10/24/2014 10:39 AM, Dugger, Donald D wrote:

(I hijacked this thread to save the context but changed the subject
so more people might read it.)

Looks like we have a scheduler specific session on Thurs. at 11:00AM
for 90 min. so we can use that for working out some of the specific
scheduler changes we want to do (e.g. the split for sure).


I'd prefer we had much of that done by the time the summit rolls around, 
thus the focus on the various blueprints that involve the resource 
tracker and scheduler. Please do review them all:


resource object models: https://review.openstack.org/#/c/127609/
request spec: https://review.openstack.org/#/c/127610/
select_destinations(): https://review.openstack.org/#/c/127612/
detach service from compute: https://review.openstack.org/#/c/126895/
isolate scheduler db: https://review.openstack.org/#/c/89893/

Note that I have proposed breaking the last one down into further, more 
specific blueprints that cover the aggregate and instance group DB calls 
separately...



On the Nova summit etherpad:

https://etherpad.openstack.org/p/kilo-nova-summit-topics

there is a scheduler section but it's not very cross-project focused.
I would like a cross project session on Tues. so we can get
requirements from other projects on what they need from a common
scheduler.  I don't know that I have specifics that I can put down on
an etherpad yet, I'm hoping to get that info from other projects.
Specifically, I think:


There is a Gantt session in the cross-project summit etherpad (#17 on 
the list):


https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

My guess is that we will indeed have the Gantt session on Tuesday, as 
there is quite a bit of interest in it. The remainder of the TC members 
are voting today on those cross-project sessions and ttx will take it 
from there.



1) Cinder 2) Containers 3) Neutron 4) Ceilometer 5) Heat

are specific projects that have scheduling requirements (either to
use the scheduler or provide input to it) that they need and it would
be really good to find out exactly what they need.  My suggestion
would be an etherpad with a big title "Requirements" on top and
sections for each of those projects would be enough to get us
started.


As I mentioned in the previous response, I would greatly prefer if the 
Gantt cross-project session was actually NOT a "we want XYZ from the 
scheduler" session. I'd prefer to have an etherpad already containing 
all those requirements from people before the summit and use the summit 
time to discuss prioritization and implementation/design proposals for 
actually getting the work done.



PS: Note that Sylvain and I are both on vacation next week so we'll
have to miss the Gantt meeting.  It would be great if you could lead
the meeting Jay, otherwise we'll just meet up at the summit.


Sure, I can do that, no problem. Paul and I will ponder the vast 
expanse of time and space together and then try to unclutter the 
resource tracker. :)


-jay


--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

-----Original Message-----
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Wednesday, October 22, 2014 10:10 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [gantt] Scheduler group meeting - cancelled
this week INTERNAL

The regular meeting time is Tuesdays at 15:00 UTC/11am EST:

https://wiki.openstack.org/wiki/Meetings#Gantt_.28Scheduler.29_team_meeting

 Generally, we don't do slides for design summit sessions -- we use
etherpads instead and the sessions are discussions, not
presentations.

Next week's meeting we can and should create etherpads for the
cross-project session(s) that we will get allocated for Gantt
topics.

Best, -jay

On 10/22/2014 11:54 AM, Elzur, Uri wrote:

Don

Will there be a meeting next week? What is the regular time slot
for the meeting?

I'd like to work with you on a technical slide to use in Paris

Do we need to socialize the Gantt topic more?

Thx

Uri (Oo-Ree)

C: 949-378-7568

*From:* Dugger, Donald D [mailto:donald.d.dug...@intel.com] *Sent:*
Wednesday, October 22, 2014 6:04 AM *To:* OpenStack Development
Mailing List (not for usage questions) *Subject:* [openstack-dev]
[gantt] Scheduler group meeting - cancelled this week

Just a reminder that, as we mentioned last week, no meeting today.

--

Don Dugger

Censeo Toto nos in Kansa esse decisse. - D. Gale

Ph: 303/443-3786



___ OpenStack-dev
mailing list OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___ OpenStack-dev mailing
list OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___ OpenStack-dev mailing
list OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Ceilometer] [qa] [oslo] Declarative HTTP Tests

2014-10-24 Thread Doug Hellmann

On Oct 24, 2014, at 6:58 AM, Chris Dent chd...@redhat.com wrote:

 On Thu, 23 Oct 2014, Doug Hellmann wrote:
 
 Thanks for the feedback Doug, it's useful.
 
 WebTest isn’t quite what you’re talking about, but does provide a
 way to talk to a WSGI app from within a test suite rather simply. Can
 you expand a little on why “declarative” tests are better suited
 for this than the more usual sorts of tests we write?
 
 I'll add a bit more on why I think declarative tests are useful, but
 it basically comes down to explicit transparency of the on-the-wire
 HTTP requests and responses. At least within Ceilometer and at least
 for me, unittest form tests are very difficult to read as a
 significant portion of the action is somewhere in test setup or a
 super class. This may not have much impact on the effectiveness of the
 tests for computers, but it's a total killer of the effectiveness of
 the tests as tools for discovery by a developer who needs to make
 changes or, heaven forbid, is merely curious. I think we can agree
 that a more informed developer, via a more learnable codebase, is a
 good thing?

OK, at first I thought you were talking about writing out literal HTTP 
request/response sets, but looking at the example YAML file in [1] I see you’re 
doing something more abstract. I was worried about minor changes in a 
serialization library somewhere breaking a bunch of tests by changing their 
formatting in insignificant ways, but that shouldn’t be a problem if you’re 
testing the semantic contents rather than the literal contents of the response.

[1] https://github.com/tiddlyweb/tiddlyweb/blob/master/test/httptest.yaml
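
For anyone who doesn't want to chase the link: a test in that style is a
sequence of YAML documents, each describing one request plus assertions
about the response. The key names below are invented for illustration
(they don't exactly match the tiddlyweb file or any OpenStack tool):

  - name: meter list returns JSON
    GET: /v2/meters
    status: 200
    response_headers:
      content-type: application/json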

 
 I did look at WebTest and while it looks pretty good I don't particularly
 care for its grammar. This might be seen as a frivolous point (because
 under the hood the same thing is happening, and it's not this aspect that
 would be exposed in the declarations) but calling a method on the `app`
 provided by WebTest feels different than using an http library to make
 a request of what appears to be a web server hosting the app.
 
 If people prefer WebTest over wsgi-intercept, that's fine, it's not
 something worth fighting about. The positions I'm taking in the spec,
 as you'll have seen from my responses to Eoghan, are trying to emphasize
 the focus on HTTP rather than the app itself. I think this could lead,
 eventually, to better HTTP APIs.

I’m not familiar with wsgi-intercept, so I can’t really comment on the 
difference there. I find WebTest tests to be reasonably easy to read, but since 
(as I understand it) I would write YAML files instead of Python tests, I’m not 
sure I care which library is used to build the tool as long as we don’t have to 
actually spin up a web server listening on a network port in order to run tests.

 
 I definitely don’t think the ceilometer team should build
 something completely new for this without a lot more detail in the
 spec about which projects on PyPI were evaluated and rejected as not
 meeting the requirements. If we do need/want something like this I
 would expect it to be built within the QA program. I don’t know if
 it’s appropriate to put it in tempestlib or if we need a
 completely new tool.
 
 I don't think anyone wants to build a new thing unless that turns out
 to be necessary, thus this thread. I'm hoping to get input from people
 who have thought about or explored this before. I'm hoping we can
 build on the shoulders of giants and all that. I'm also hoping to
 short circuit extensive personal research by collaborating with others
 who may have already done it before.
 
 I have an implementation of the concept that I've used in a previous
 project (linked from the spec) which could work as a starting point
 from which we could iterate but if there is something better out there
 I'd prefer to start there.
 
 So the questions from the original post still stand, all input
 welcome, please and thank you.
 
 * Is this a good idea?
 * Do other projects have similar ideas in progress?
 * Is this concept something for which a generic tool should be
  created _prior_ to implementation in an individual project?
 * Is there prior art? What's a good format?
 
 -- 
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FreeBSD host support

2014-10-24 Thread Roman Bogorodskiy
  Roman Bogorodskiy wrote:

 On Mon, Oct 20, 2014 at 10:19 PM, Joe Gordon joe.gord...@gmail.com wrote:
 
 
 
  On Sat, Oct 18, 2014 at 10:04 AM, Roman Bogorodskiy 
  rbogorods...@mirantis.com wrote:
 

ping?

Roman Bogorodskiy


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] making Daneyon Hansen core

2014-10-24 Thread Daneyon Hansen (danehans)

Thanks. I appreciate the opportunity to help the team develop Kolla. Have a 
great weekend!

Regards,
Daneyon Hansen, CCIE 9950
Software Engineer
Office of the Cloud CTO
Mobile: 303-718-0400
Office: 720-875-2936
Email: daneh...@cisco.com

 On Oct 24, 2014, at 8:33 AM, Steven Dake sd...@redhat.com wrote:
 
 On 10/23/2014 07:31 AM, Jeff Peeler wrote:
 On 10/22/2014 11:04 AM, Steven Dake wrote:
 A few weeks ago in IRC we discussed the criteria for joining the core
 team in Kolla.  I believe Daneyon has met all of these requirements by
 reviewing patches along with the rest of the core team and providing
 valuable comments, as well as implementing neutron and helping get
 nova-networking implementation rolling.
 
 Please vote +1 or -1 if you're a Kolla core.  Recall that a -1 is a veto.  It
 takes 3 votes.  This email counts as one vote ;)
 
 definitely +1
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 Well, that is 4 votes, so Daneyon - welcome to the core team of Kolla!
 
 Regards
 -steve
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] How can we get more feedback from users?

2014-10-24 Thread Stefano Maffulli
On 10/23/2014 11:16 PM, Angus Salkeld wrote:
 Thanks for those pointers; we're very interested in feedback from
 operators, but
 in this case I am talking more about end users, not operators (people
 who actually use our API).
Great! There is a working group being formed also for that.

I would suggest you put these sessions in your calendar:

http://kilodesignsummit.sched.org/event/f8f76884ce1d7fb7a39f3f6c2f1bb3d4
http://kilodesignsummit.sched.org/event/57f6fc4f2ffd0cc47b216f7bf936c115

/stef

-- 
Ask and answer questions on https://ask.openstack.org


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra][all] Synchronizing local Git and Gerrit repositories

2014-10-24 Thread Ondrej Wisniewski

Hi,

I am trying to set up an OpenStack development workflow in our company. 
We have an internal Git repository mirror of all OpenStack projects we 
are working on. It is periodically updated from the upstream OpenStack 
community servers. This is used to share the code among developers.


Furthermore I have set up a Gerrit server for the internal code review. 
The Gerrit server also works with repository mirrors of the community 
repositories which should be updated periodically. Or at least that's 
the idea. I ran into lots of problems and couldn't find a good way of 
synchronizing the developer mirrors with the Gerrit repositories.


So to cut a long story short, here are my questions:
How is the synchronization of the OpenStack community Git repositories 
and the Gerrit server done?
How can I import an OpenStack project into my Gerrit system from my 
local Git mirror and keep both synchronized (at least the master branch)?


I would really appreciate it if someone could shed some light on this.
Thanks, Ondrej
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] periodic jobs for master

2014-10-24 Thread Thierry Carrez
Ihar Hrachyshka wrote:
 Another question to solve is how we disseminate the state of those jobs.
 Do we create a separate mailing list for that? Obviously we should not
 reuse the -dev one, and it's overkill to create one mailing list per
 interest group.

Should we explore avenues other than email for this? If we plan to do
opt-in anyway, would some status website/RSS not work better?

The ideal system imho would be a status website where we could see
failures and close them as handled so that everyone knows that a past
FAIL result has already been fixed. That could help avoid duplication of
painful debugging work.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-24 Thread racha
Hi Ian,

Here are some details about integrating the L2-gateway (supporting multiple
plugins/MDs and not limited to the OVS agent implementation) as a trunking
gateway:

It's a building block that has multiple access ports (tenant Neutron
network ports, with the block's type/uuid as device_owner/device_id), each
in a different Neutron network, and up to one gateway port (a provider
external network port).

Adding the following two constraints ensures that Neutron networks and
blocks are stubby, with no way to loop the networks, thus simply and
easily providing one of several means of alleviating the raised concern:
1) Each Neutron network cannot have more than one port that could be
bound/added to any block as an access port.
2) Each block cannot own more than one gateway port that can be set/unset
on that block.

If the type of the block is "learning bridge", then the gateway port is a
Neutron port on a specific provider external network (with the segmentation
details provided as with the existing Neutron API), and the block forwards
between access ports and the gateway port in broadcast isolation (as with
private VLANs) or broadcast merge (community VLANs). A very simple
implementation of this was submitted for review a long time ago.

If the type of the block is "trunking bridge", then the gateway port is a
trunk port as in the VLAN-aware VMs BP, or a dynamic collection of Neutron
ports as in a suggested extension of the "networks collection" idea, with
each port in a different provider external network (and a 1-to-1
transparent patching hook service between one access-port@tenant_net_x and
one external-port@provider_net_y, which could be the placeholder for a
cross-network summarized/factorized security group for tenant networks, or
whatever else). We can then further abstract a trunk as a mix of VLANs,
GREs, VxLANs, etc. (i.e. Neutron networks) next to each other on the same
networks trunk, not limited to the usual VLAN trunks. What happens (match/
block/forward/...) to this trunk in the provider external networks, as
well as in the transparent patching hooks within the block, is up to the
provider, I guess. It's just a tiny abstract idea off the top of my head
that I can detail in the specs if there's any interest/match with what is
required.


Thanks,

Best Regards,
Racha


On Thu, Oct 23, 2014 at 2:58 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 There are two categories of problems:

 1. some networks don't pass VLAN tagged traffic, and it's impossible to
 detect this from the API
 2. it's not possible to pass traffic from multiple networks to one port on
 one machine as (e.g.) VLAN tagged traffic

 (1) is addressed by the VLAN trunking network blueprint, XXX. Nothing else
 addresses this, particularly in the case that one VM is emitting tagged
 packets that another one should receive and Openstack knows nothing about
 what's going on.

  We should get this in, ideally quickly and in a simple form where
 it simply tells you if a network is capable of passing tagged traffic.  In
 general, this is possible to calculate but a bit tricky in ML2 - anything
 using the OVS mechanism driver won't pass VLAN traffic, anything using
 VLANs should probably also claim it doesn't pass VLAN traffic (though
 actually it depends a little on the switch), and combinations of L3 tunnels
 plus Linuxbridge seem to pass VLAN traffic just fine.  Beyond that, it's
 got a backward compatibility mode, so it's possible to ensure that any
 plugin that doesn't implement VLAN reporting is still behaving correctly
 per the specification.

 (2) is addressed by several blueprints, and these have overlapping ideas
 that all solve the problem.  I would summarise the possibilities as follows:

 A. Racha's L2 gateway blueprint,
 https://blueprints.launchpad.net/neutron/+spec/gateway-api-extension,
 which (at its simplest, though it's had features added on and is somewhat
 OVS-specific in its detail) acts as a concentrator to multiplex multiple
 networks onto one as a trunk.  This is a very simple approach and doesn't
 attempt to resolve any of the hairier questions like making DHCP work as
 you might want it to on the ports attached to the trunk network.
 B. Isaku's L2 gateway blueprint, https://review.openstack.org/#/c/100278/,
 which is more limited in that it refers only to external connections.
 C. Erik's VLAN port blueprint,
 https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms, which
 tries to solve the addressing problem mentioned above by having ports
 within ports (much as, on the VM side, interfaces passing trunk traffic
 tend to have subinterfaces that deal with the traffic streams).
 D. Not a blueprint, but an idea I've come across: create a network that is
 a collection of other networks, each 'subnetwork' being a VLAN in the
 network trunk.
 E. Kyle's very old blueprint,
 https://blueprints.launchpad.net/neutron/+spec/quantum-network-bundle-api
 - where we attach a port, not a network, to multiple 

Re: [openstack-dev] [Infra][all] Synchronizing local Git and Gerrit repositories

2014-10-24 Thread Ricardo Carrillo Cruz
Hi Ondrej

The replication between Gerrit and git mirrors is done by the Gerrit
replication mechanism.

If you look at this line in the gerrit manifest:

http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/gerrit/manifests/init.pp#n255

you will see that it deploys a 'replication.config' file based on template:

http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/gerrit/templates/replication.config.erb

You can find more information about how Gerrit replication works here:

http://gerrit.googlecode.com/svn/documentation/2.0/config-replication.html
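
For illustration, a minimal replication.config that pushes all branches
and tags to an internal mirror might look like this (host and path are
placeholders; ${name} is expanded by Gerrit to the project name):

  [remote "internal-mirror"]
    url = gerrit@mirror.example.com:/var/lib/git/${name}.git
    push = +refs/heads/*:refs/heads/*
    push = +refs/tags/*:refs/tags/*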

HTH

Regards

2014-10-24 18:25 GMT+02:00 Ondrej Wisniewski 
ondrej.wisniew...@dektech.com.au:

  Hi,

 I am trying to set up an OpenStack development workflow in our company. We
 have an internal Git repository mirror of all OpenStack projects we are
 working on. It is periodically updated from the upstream OpenStack
 community servers. This is used to share the code among developers.

 Furthermore I have set up a Gerrit server for the internal code review.
 The Gerrit server also works with repository mirrors of the community
 repositories which should be updated periodically. Or at least that's the
 idea. I ran into lots of problems and couldn't find a good way of
 synchronizing the developer mirrors with the Gerrit repositories.

 So to cut a long story short, here are my questions:
 How is the synchronization of the OpenStack community Git repositories and
 the Gerrit server done?
 How can I import an OpenStack project into my Gerrit system from my local
  Git mirror and keep both synchronized (at least the master branch)?

  I would really appreciate it if someone could shed some light on this.
 Thanks, Ondrej

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] stevedore 1.1.0 released

2014-10-24 Thread Doug Hellmann
The Oslo team is pleased to announce the release of stevedore version 1.1.0, 
the first release in the Kilo series for stevedore.

This release includes the following fixes:

$ git log --oneline --no-merges 1.0.0..1.1.0
5749d54 Add pbr to dependency list
bb43bc4 Updated from global requirements
82fa318 Add more detail to the README
4f2b647 Migrate tox to use testr
4a410fe Update repository location in docs
e28939f Work toward Python 3.4 support and testing
9817db0 warn against sorting requirements

Please report issues through launchpad: https://launchpad.net/python-stevedore

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Travels tips for the Paris summit

2014-10-24 Thread Mathieu Gagné

On 2014-10-14 11:35 AM, Adrien Cunin wrote:

Hi everyone,

Inspired by the travels tips published for the HK summit, the French
OpenStack user group wrote a similar wiki page for Paris:

https://wiki.openstack.org/wiki/Summit/Kilo/Travel_Tips



Can someone add information about pre-paid SIM cards like someone did 
for Hong Kong?


My cellphone provider's travel packages aren't that great...

Thanks!

--
Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] In tree testing summit prep

2014-10-24 Thread Chris Dent


Since I'm not going to be at summit and since I care about the
forthcoming in-tree testing stuff I was asked to write down some
thoughts prior to summit so my notions didn't get missed. Since it
is a _design_ summit after all I figured I'd take a pretty far out
position so we can go from there to some kind of middle ground. The
notes are at:

   https://tank.peermore.com/tanks/cdent-rhat/SummitFunctionalTesting

TL;DR: death to `unittest` and _unit_ tests.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Travels tips for the Paris summit

2014-10-24 Thread Carl Baldwin
+1

It would be great to know where to go in the airport and what to ask
for to get a good 1 to 1.5 week prepaid GSM data plan.

Carl

On Fri, Oct 24, 2014 at 11:01 AM, Mathieu Gagné mga...@iweb.com wrote:
 On 2014-10-14 11:35 AM, Adrien Cunin wrote:

 Hi everyone,

 Inspired by the travels tips published for the HK summit, the French
 OpenStack user group wrote a similar wiki page for Paris:

 https://wiki.openstack.org/wiki/Summit/Kilo/Travel_Tips


 Can someone add information about pre-paid SIM cards like someone did for
 Hong Kong?

 My cellphone provider's travel packages aren't that great...

 Thanks!

 --
 Mathieu


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] periodic jobs for master

2014-10-24 Thread Andrea Frittoli
I also believe we can find ways to make post-merge / periodic checks useful.
We need to do that to keep the gate at a sane scale.

On 24 October 2014 17:33, Thierry Carrez thie...@openstack.org wrote:
 Ihar Hrachyshka wrote:
 Another question to solve is how we disseminate state of those jobs.
 Do we create a separate mailing list for that? Obviously we should not
 reuse -dev one, and it's overkill to create one mailing list per
 interest group.

 Should we explore other avenues than email for this ? If we plan to do
 opt-in anyway, would some status website/RSS not work better ?

+1


 The ideal system imho would be a status website where we could see
 failures and close them as handled so that everyone knows that a past
 FAIL result has already been fixed. That could help avoid duplication of
 painful debugging work.

+1

Publicizing the test results better, and to the interested audience,
will help a lot.
So will keeping a track record of fixed issues and solutions.

Tracking result history at the test level (using subunit2sql), and
building and analyzing trends, would be a great tool to identify and
troubleshoot failures.

Also beneficial IMO would be extracting whatever information can be
gathered automatically from the test results.
Rather than saying "job X failed", we could have tools that allow us to
tell "test X started failing in a specific time range, and this is the
list of sha1s that were merged around that time".

We will also discuss this topic in Paris in the QA track.

Andrea Frittoli (andreaf)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.concurrency 0.1.0

2014-10-24 Thread Doug Hellmann
The Oslo team is pleased to announce the release of oslo.concurrency 0.1.0, 
the first development release of the new library containing lockutils and 
processutils.

Documentation for the library is available at 
http://docs.openstack.org/developer/oslo.concurrency/
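
A minimal usage sketch, assuming the 'oslo' namespace package import path
used by this first release (the lock name and command are invented):

  from oslo.concurrency import lockutils, processutils

  @lockutils.synchronized('my-resource')
  def update_resource():
      # only one thread at a time (per process) runs this function
      pass

  # run a command and capture its output as (stdout, stderr)
  out, err = processutils.execute('echo', 'hello')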

Please report issues via launchpad: https://launchpad.net/oslo.concurrency

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] How can we get more feedback from users?

2014-10-24 Thread Anne Gentle
On Fri, Oct 24, 2014 at 11:09 AM, Stefano Maffulli stef...@openstack.org
wrote:

 On 10/23/2014 11:16 PM, Angus Salkeld wrote:
  Thanks for those pointers, we very interested in feedback from
  operators, but
  in this case I am talking more about end users not operators (people
  that actually use our API).
 Great! There is a working group being formed also for that.

 I would suggest you to put these sessions in your calendar:

 http://kilodesignsummit.sched.org/event/f8f76884ce1d7fb7a39f3f6c2f1bb3d4
 http://kilodesignsummit.sched.org/event/57f6fc4f2ffd0cc47b216f7bf936c115


Another relevant one is my Developer Support session:
https://openstacksummitnovember2014paris.sched.org/event/792d87161d517ca27ce1b212c06d695d

Monday November 3, 2014 11:40 - 12:20
Room 242AB

I've been gathering data from SDK issues on github, Stack Overflow,
ask.openstack.org, API doc comments, and other sources to share what it's
like to support developers using OpenStack APIs and related tooling. It's
really interesting info! Some of it you'd already guess (hint: MOAR DOCS),
but some of it surprised me, though it probably shouldn't have (everyone
wants measurements).

See you there,
Anne


/stef

 --
 Ask and answer questions on https://ask.openstack.org


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] How can we get more feedback from users?

2014-10-24 Thread Mike Spreitzer
Angus Salkeld asalk...@mirantis.com wrote on 10/24/2014 12:32:04 AM:

 I have felt some grumblings about usability issues with Heat 
 templates/client/etc..
 and wanted a way that users could come and give us feedback easily 
 (low barrier). I started an etherpad (https://
 etherpad.openstack.org/p/heat-useablity-improvements) - the first 
 win is it is spelt wrong :-O

 We now have some great feedback there in a very short time, most of 
 this we should be able to solve.

 This led me to think: should OpenStack have a more general 
 mechanism for users to provide feedback. The idea is this is not 
 for bugs or support, but for users to express pain points, requests 
 for features and docs/howtos.

 It's not easy to improve your software unless you are listening to your 
users.

 Ideas?

I very much agree with this.

I am actually surprised that OpenStack does not have something fairly 
formal and organized about this.  I suppose it is part of the TC's job, 
but I think we need more than they can do.  I would suggest some sort of 
users' council that gets involved in blueprint and change reviews. Perhaps 
after first working toward some degree of consensus and some degree of 
shaping what the developers work on in each release (this latter is part 
of the overall program of improving review queue speed, by better focusing 
developers and reviewers on some shared agenda).

Other products have discussion fora, some explicitly dedicated to 
feedback.  You could approximate this with etherpads, or use some real 
discussion forum platform.

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-24 Thread Carl Baldwin
On Fri, Oct 24, 2014 at 6:17 AM, Salvatore Orlando sorla...@nicira.com wrote:
 Assigning a distinct ct zone to each port sounds more scalable. This should
 keep the number of zones per host

Agree that zones could be a good solution to this problem.  +1 to a zone
per port for scalability.  Though it will take a bit more code and
complexity to kill the right connections than it would with a zone per
rule.

 Once we identify the ports, and therefore the ct zones, then we'd still need
 to find the connections matching the rules which were removed. This does not
 sound too difficult, but it can result in searches over long
 lists - think about an instance hosting a DB or web server.

Are you thinking of listing all connections and then iterating over
the list with some code to match it to a security group rule being
removed/updated?  Or, map the security group rule to conntrack filter
arguments to send to a call to conntrack -D?
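
If the latter, a deleted rule might map to a deletion command along these
lines (all values invented; --zone assumes the zone-per-port scheme
discussed above):

  conntrack -D --orig-dst 10.0.0.5 -p tcp --dport 80 --zone 4097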

This could be problematic.  What if security group rules were
redundant and an update or removal of a rule should not really affect
any existing connections?  If all connections were compared instead
against the set of *remaining* security group rules then this wouldn't
be a problem.  This sounds non-trivial to me.  I'm probably thinking
too hard about this.  ;)

 The above two considerations made me suggest the idea of associating ct
 zones with rules, but it is probably true that this can cause us to go
 beyond the 2^16 limit.

I agree we'd hit this limit.

Carl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Networking API Create network missing Request parameters

2014-10-24 Thread Manish Godara
Provider network functionality is an extension.  The API details are at [1].

To get this to work, your plugin should support the 'provider' extension.
ML2 supports it [2].  More details can be found at [3].

There is also support for multi-segment provider networks.  Details for
that are at [4], as mentioned by Mathieu.

[1] http://developer.openstack.org/api-ref-networking-v2.html#network_provider-ext
[2] https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L102
[3] http://docs.openstack.org/api/openstack-network/2.0/content/provider_ext.html
[4] http://developer.openstack.org/api-ref-networking-v2.html#network_multi_provider-ext
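
For a concrete illustration, the request body for creating such a network
via POST /v2.0/networks would look roughly like this (values chosen to
match the CLI example quoted below):

  {
      "network": {
          "name": "test-network",
          "provider:network_type": "vlan",
          "provider:physical_network": "physnet1",
          "provider:segmentation_id": 400
      }
  }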


On 10/23/14, 4:12 PM, Mathieu Gagné mga...@iweb.com wrote:

On 2014-10-23 7:00 PM, Danny Choi (dannchoi) wrote:

 In neutron, a user with the "admin" role can specify the provider network
 parameters when creating a network:

 --provider:network_type
 --provider:physical_network
 --provider:segmentation_id

 localadmin@qa4:~/devstack$ neutron net-create test-network
 --provider:network_type vlan --provider:physical_network physnet1
 --provider:segmentation_id 400

 However, the Networking API v2.0
 (http://developer.openstack.org/api-ref-networking-v2.html) "Create
network"
 does not list them as Request parameters.

 Is this a print error?


I see them under the Networks multiple provider extension (networks)
section. [1]

Open the detail for "Create network with multiple segment mappings" to
see them.

Is this what you were looking for?

[1] 
http://developer.openstack.org/api-ref-networking-v2.html#network_multi_provider-ext

-- 
Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Notifications as a contract summit prep

2014-10-24 Thread Chris Dent


Since I'm not going to be at summit and since I care about
notifications I was asked to write down some thoughts prior to
summit so my notions didn't get missed. The notes are at:

   https://tank.peermore.com/tanks/cdent-rhat/SummitNotifications

TL;DR: make sure that adding new stuff (producers, consumers,
notifications) is easy.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [poppy] Proposal to add Malini Kamalambal (malini) as a Core Reviewer

2014-10-24 Thread Amit Gandhi
Hi folks,

I’d like to propose adding Malini Kamalambal (malini) as a core reviewer on the 
Poppy team. She has been contributing regularly since the start of Poppy, and 
has proven to be a careful reviewer with good judgment.  She also brings a lot 
of insight into OpenStack best practices from her experience working on Zaqar, 
where she is also a core reviewer.

All Poppy ATCs, please respond with a +1 or -1.

Thanks

Amit Gandhi
@amitgandhinz
Poppy PTL


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-24 Thread Bob Melander (bmelande)
What scares me a bit about the “let’s find a common solution for both external 
devices and VMs” approach is the challenge of reaching agreement. I remember a 
rather long discussion in the dev lounge in Hong Kong about trunking support 
that ended up going in all kinds of directions.

I work on implementing services in VMs, so my opinion is definitely colored by 
that. Personally, proposal C is the most appealing to me for the following 
reasons: it is “good enough”; a trunk port notion is semantically easy to take 
in (at least to me); by doing it all within the port resource, Nova 
implications are minimal; it can seemingly handle multiple network types 
(VLAN, GRE, VXLAN, … they are all mapped to different trunk-port-local VLAN 
tags); DHCP should work to the trunk ports and their sub-ports (unless I 
overlook something); the spec already elaborates a lot on details; and there 
is also already code available that can be inspected.

Thanks,
Bob

From: Ian Wells ijw.ubu...@cack.org.uk
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, 23 October 2014 23:58
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

There are two categories of problems:

1. some networks don't pass VLAN tagged traffic, and it's impossible to detect 
this from the API
2. it's not possible to pass traffic from multiple networks to one port on one 
machine as (e.g.) VLAN tagged traffic

(1) is addressed by the VLAN trunking network blueprint, XXX. Nothing else 
addresses this, particularly in the case that one VM is emitting tagged packets 
that another one should receive and Openstack knows nothing about what's going 
on.

We should get this in, ideally quickly and in a simple form where it 
simply tells you if a network is capable of passing tagged traffic.  In 
general, this is possible to calculate but a bit tricky in ML2 - anything using 
the OVS mechanism driver won't pass VLAN traffic, anything using VLANs should 
probably also claim it doesn't pass VLAN traffic (though actually it depends a 
little on the switch), and combinations of L3 tunnels plus Linuxbridge seem to 
pass VLAN traffic just fine.  Beyond that, it's got a backward compatibility 
mode, so it's possible to ensure that any plugin that doesn't implement VLAN 
reporting is still behaving correctly per the specification.

(2) is addressed by several blueprints, and these have overlapping ideas that 
all solve the problem.  I would summarise the possibilities as follows:

A. Racha's L2 gateway blueprint, 
https://blueprints.launchpad.net/neutron/+spec/gateway-api-extension, which (at 
its simplest, though it's had features added on and is somewhat OVS-specific in 
its detail) acts as a concentrator to multiplex multiple networks onto one as a 
trunk.  This is a very simple approach and doesn't attempt to resolve any of 
the hairier questions like making DHCP work as you might want it to on the 
ports attached to the trunk network.
B. Isaku's L2 gateway blueprint, https://review.openstack.org/#/c/100278/, 
which is more limited in that it refers only to external connections.
C. Erik's VLAN port blueprint, 
https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms, which tries to 
solve the addressing problem mentioned above by having ports within ports (much 
as, on the VM side, interfaces passing trunk traffic tend to have subinterfaces 
that deal with the traffic streams).
D. Not a blueprint, but an idea I've come across: create a network that is a 
collection of other networks, each 'subnetwork' being a VLAN in the network 
trunk.
E. Kyle's very old blueprint, 
https://blueprints.launchpad.net/neutron/+spec/quantum-network-bundle-api - 
where we attach a port, not a network, to multiple networks.  Probably doesn't 
work with appliances.

I would recommend we try and find a solution that works with both external 
hardware and internal networks.  (B) is only a partial solution.

Considering the others, note that (C) and (D) add significant complexity to the 
data model, independently of the benefits they bring.  (A) adds one new 
functional block to networking (similar to today's routers, or even today's 
Nova instances).

Finally, I suggest we consider the most prominent use case for multiplexing 
networks.  This seems to be condensing traffic from many networks to either a 
service VM or a service appliance.  It's useful, but not essential, to have 
Neutron control the addresses on the trunk port subinterfaces.

So, that said, I personally favour (A) as the simplest way to solve our current 
needs, and I recommend paring (A) right down to its basics: a block that has 
access ports that we tag with a VLAN ID, and one trunk port that has all of the 
access 

Re: [openstack-dev] [Ceilometer] Notifications as a contract summit prep

2014-10-24 Thread Sandy Walsh
Thanks ... we'll be sure to address your concerns. 

And there's the list we've compiled here:
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics
(section 4)

-S


From: Chris Dent [chd...@redhat.com]
Sent: Friday, October 24, 2014 2:45 PM
To: OpenStack-dev@lists.openstack.org
Subject: [openstack-dev] [Ceilometer] Notifications as a contract summit prep

Since I'm not going to be at summit and since I care about
notifications I was asked to write down some thoughts prior to
summit so my notions didn't get missed. The notes are at:

https://tank.peermore.com/tanks/cdent-rhat/SummitNotifications

TL;DR: make sure that adding new stuff (producers, consumers,
notifications) is easy.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] How can we get more feedback from users?

2014-10-24 Thread Tim Bell
There is a formal structure via the OpenStack user committee and the associated 
working groups 
(https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee). We have 
Monday afternoon and all of Thursday in the summit time dedicated to 
discussions. The specs review process was prepared with the TC to allow easy 
ways for deployers and consumers of OpenStack to give input before code gets 
written.

Further volunteers and suggestions on how to improve the process in this area 
would be more than welcome in the design summit tracks around the Ops summit 
and working groups. Please see http://kilodesignsummit.sched.org/.

Tim

From: Mike Spreitzer [mailto:mspre...@us.ibm.com]
Sent: 24 October 2014 07:05
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] How can we get more feedback from users?

Angus Salkeld asalk...@mirantis.com wrote on 
10/24/2014 12:32:04 AM:

 I have felt some grumblings about usability issues with Heat
 templates/client/etc..
 and wanted a way that users could come and give us feedback easily
 (low barrier). I started an etherpad (https://
 etherpad.openstack.org/p/heat-useablity-improvements) - the first
 win is it is spelt wrong :-O

 We now have some great feedback there in a very short time, most of
 this we should be able to solve.

 This led me to think: should OpenStack have a more general
 mechanism for users to provide feedback. The idea is this is not
 for bugs or support, but for users to express pain points, requests
 for features and docs/howtos.

 It's not easy to improve your software unless you are listening to your users.

 Ideas?

I very much agree with this.

I am actually surprised that OpenStack does not have something fairly formal 
and organized about this.  I suppose it is part of the TC's job, but I think we 
need more than they can do.  I would suggest some sort of user's council that 
gets involved in blueprint and change reviews.  Perhaps after first working 
toward some degree of consensus and some degree of shaping what the developers 
work on in each release (this latter is part of the overall program of 
improving review queue speed, by better focusing developers and reviewers on 
some shared agenda).

Other products have discussion fora, some explicitly dedicated to feedback.  
You could approximate this with etherpads, or use some real discussion forum 
platform.

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence prototyping

2014-10-24 Thread Joshua Harlow
For persisting the graph, there is also:

https://github.com/LogicalDash/gorm#gorm

The above sounds interesting...

'Object relational mapper for graphs with in-built revision control.'

It then hooks into the https://networkx.github.io/ library, which provides a 
ton of useful graph functionality.

As you guys get more and more into doing graph manipulation I am betting that 
library (networkx) will become more and more useful...
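
As a tiny sketch of why (resource names and edges invented), a dependency
graph and a valid creation order fall out of networkx almost for free:

  import networkx as nx

  g = nx.DiGraph()
  # an edge a -> b means a must be created before b
  g.add_edge('network', 'subnet')
  g.add_edge('network', 'port')
  g.add_edge('subnet', 'server')
  g.add_edge('port', 'server')

  # returns nodes in dependency order, e.g.
  # ['network', 'subnet', 'port', 'server']
  print(nx.topological_sort(g))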

-Josh

On Oct 23, 2014, at 11:10 AM, Zane Bitter zbit...@redhat.com wrote:

 Hi folks,
 I've been looking at the convergence stuff, and become a bit concerned that 
 we're more or less flying blind (or at least I have been) in trying to figure 
 out the design, and also that some of the first implementation efforts seem 
 to be around the stuff that is _most_ expensive to change (e.g. database 
 schemata).
 
 What we really want is to experiment on stuff that is cheap to change with a 
 view to figuring out the big picture without having to iterate on the 
 expensive stuff. To that end, I started last week to write a little prototype 
 system to demonstrate the concepts of convergence. (Note that none of this 
 code is intended to end up in Heat!) You can find the code here:
 
 https://github.com/zaneb/heat-convergence-prototype
 
 Note that this is a *very* early prototype. At the moment it can create 
 resources, and not much else. I plan to continue working on it to implement 
 updates and so forth. My hope is that we can develop a test framework and 
 scenarios around this that can eventually be transplanted into Heat's 
 functional tests. So the prototype code is throwaway, but the tests we might 
 write against it in future should be useful.
 
 I'd like to encourage anyone who needs to figure out any part of the design 
 of convergence to fork the repo and try out some alternatives - it should be 
 very lightweight to do so. I will also entertain pull requests (though I see 
 my branch primarily as a vehicle for my own learning at this early stage, so 
 if you want to go in a different direction it may be best to do so on your 
 own branch), and the issue tracker is enabled if there is something you want 
 to track.
 
 I have learned a bunch of stuff already:
 
 * The proposed spec for persisting the dependency graph 
 (https://review.openstack.org/#/c/123749/1) is really well done. Kudos to 
 Anant and the other folks who had input to it. I have left comments based on 
 what I learned so far from trying it out.
 
 
 * We should isolate the problem of merging two branches of execution (i.e. 
 knowing when to trigger a check on one resource that depends on multiple 
 others). Either in a library (like taskflow) or just a separate database 
 table (like my current prototype). Baking it into the orchestration 
 algorithms (e.g. by marking nodes in the dependency graph) would be a 
 colossal mistake IMHO.
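 
 To make that concrete, here is a rough in-memory sketch of the kind of 
 bookkeeping I mean (illustrative only, not Heat code; a real system would do 
 the decrement as an atomic UPDATE on a database row rather than behind a 
 lock):
 
     import threading
 
     class SyncPoints(object):
         def __init__(self):
             self._lock = threading.Lock()
             self._outstanding = {}
 
         def expect(self, resource, requirement_count):
             # How many requirements must complete before `resource`
             # can be checked.
             self._outstanding[resource] = requirement_count
 
         def satisfy(self, resource):
             # Returns True for exactly one caller: the one whose
             # notification satisfied the final outstanding requirement
             # and who should therefore trigger the check on `resource`.
             with self._lock:
                 self._outstanding[resource] -= 1
                 return self._outstanding[resource] == 0
 
     sync = SyncPoints()
     sync.expect('server', 2)       # depends on 'subnet' and 'sec_group'
     sync.satisfy('server')         # False: one requirement still open
     print(sync.satisfy('server'))  # True: now trigger the check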
 
 
 * Our overarching plan is backwards.
 
 There are two quite separable parts to this architecture - the worker and the 
 observer. Up until now, we have been assuming that implementing the observer 
 would be the first step. Originally we thought that this would give us the 
 best incremental benefits. At the mid-cycle meetup we came to the conclusion 
 that there were actually no real incremental benefits to be had until 
 everything was close to completion. I am now of the opinion that we had it 
 exactly backwards - the observer implementation should come last. That will 
 allow us to deliver incremental benefits from the observer sooner.
 
 The problem with the observer is that it requires new plugins. (That sucks 
 BTW, because a lot of the value of Heat is in having all of these tested, 
 working plugins. I'd love it if we could take the opportunity to design a 
 plugin framework such that plugins would require much less custom code, but 
 it looks like a really hard job.) Basically this means that convergence would 
 be stalled until we could rewrite all the plugins. I think it's much better 
 to implement a first stage that can work with existing plugins *or* the new 
 ones we'll eventually have with the observer. That allows us to get some 
 benefits soon and further incremental benefits as we convert plugins one at a 
 time. It should also provide a transition period (possibly with a performance 
 penalty) for existing plugin authors, and for things like HARestarter (can we 
 please, please deprecate it now?).
 
 So the two phases I'm proposing are:
 1. (Workers) Distribute tasks for individual resources among workers; 
 implement update-during-update (no more locking).
 2. (Observers) Compare against real-world values instead of template values 
 to determine when updates are needed. Make use of notifications and such.
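 
 As a trivial illustration of what #2 boils down to on the observer side 
 (property names here are made up):
 
     def needs_update(observed, desired):
         # Compare live resource properties against the template's.
         return any(observed.get(key) != value
                    for key, value in desired.items())
 
     desired = {'flavor': 'm1.small', 'image': 'fedora-20'}
     observed = {'flavor': 'm1.large', 'image': 'fedora-20'}
     print(needs_update(observed, desired))  # True: reality has diverged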
 
 I believe it's quite realistic to aim to get #1 done for Kilo. There could 
 also be a phase 1.5, where we use the existing stack-check mechanism to 
 detect the most egregious divergences between template and reality (e.g. 
 whole resource is 

Re: [openstack-dev] [poppy] Proposal to add Malini Kamalambal (malini) as a Core Reviewer

2014-10-24 Thread Obulapathi Challa
She is a very good team member, contributor and reviewer.
I vote +1!

Thanks and regards,
 Obul.


From: Amit Gandhi amit.gan...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, October 24, 2014 at 2:11 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [poppy] Proposal to add Malini Kamalambal (malini) as 
a Core Reviewer

Hi folks,

I’d like to propose adding Malini Kamalambal (malini) as a core reviewer on the 
Poppy team. She has been contributing regularly since the start of Poppy, and 
has proven to be a careful reviewer with good judgment.  She also brings a lot 
of insight into OpenStack best practices from her experience working on Zaqar, 
where she is also a Core Reviewer.

All Poppy ATCs, please respond with a +1 or -1.

Thanks

Amit Gandhi
@amitgandhinz
Poppy PTL


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Travels tips for the Paris summit

2014-10-24 Thread Tim Bell
 -Original Message-
 From: Carl Baldwin [mailto:c...@ecbaldwin.net]
 Sent: 24 October 2014 19:05
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Travels tips for the Paris summit
 
 +1
 
 It would be great to know where to go in the airport and what to ask for for a
 good 1 - 1.5 week prepaid GSM data plan.
 

Excellent idea to add to the standard travel tips for each summit. As a French 
resident, I've not been able to find a classic pay-as-you-go offering such as 
in Hong Kong. 

In my area of France (Pays de Gex), the cheapest is Leclerc pay-as-you-go, but 
I suspect that Paris has better offers.

SFR and Bouygues are common providers, but I don't know which has the best 
deals for stays of under a month.

Tim


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Backup of information about nodes.

2014-10-24 Thread Adam Lawson
Okay, and one other question which I must have missed: is it possible to
expand Swift capacity through Fuel? I notice Swift is installed on the
Controllers but if we need to expand Swift capacity, does that necessarily
mean Fuel requires the addition of more Controllers?


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


On Thu, Oct 23, 2014 at 2:59 AM, Tomasz Napierala tnapier...@mirantis.com
wrote:


 On 22 Oct 2014, at 21:03, Adam Lawson alaw...@aqorn.com wrote:

  What is current best practice to restore a failed Fuel node?

 It’s documented here:

 http://docs.mirantis.com/openstack/fuel/fuel-5.1/operations.html#restoring-fuel-master

 Regards,
 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] Scheduler sessions at the summit

2014-10-24 Thread Dugger, Donald D
Jay-

Your faith in the ability of everyone to get concrete ideas down before the 
summit is a joy to behold - I'm a little more cynical :-)  I think it'll happen 
in Paris.

1) Gantt specific work.  Your list of BPs is great; if we can come to an 
implementation plan for those on Thurs., I will be ecstatic.

2) Replies to this thread on cross project requirements would be wonderful.  
Let's see what we can get.

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Friday, October 24, 2014 9:40 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [gantt] Scheduler sessions at the summit

On 10/24/2014 10:39 AM, Dugger, Donald D wrote:
 (I hijacked this thread to save the context but changed the subject so 
 more people might read it.)

 Looks like we have a scheduler specific session on Thurs. at 11:00AM 
 for 90 min. so we can use that for working out some of the specific 
 scheduler changes we want to do (e.g. the split for sure).

I'd prefer we had much of that done by the time the summit rolls around, thus 
the focus on the various blueprints that involve the resource tracker and 
scheduler. Please do review these all:

resource object models: https://review.openstack.org/#/c/127609/
request spec: https://review.openstack.org/#/c/127610/
select_destinations(): https://review.openstack.org/#/c/127612/
detach service from compute: https://review.openstack.org/#/c/126895/
isolate scheduler db: https://review.openstack.org/#/c/89893/

Note that the last one I have proposed to break down into further more specific 
blueprints that cover the aggregate and instance group DB calls separately...

 On the Nova summit etherpad:

 https://etherpad.openstack.org/p/kilo-nova-summit-topics

 there is a scheduler section but it's not very cross-project focused.
 I would like a cross project session on Tues. so we can get 
 requirements from other projects on what they need from a common 
 scheduler.  I don't know that I have specifics that I can put down on 
 an etherpad yet, I'm hoping to get that info from other projects.
 Specifically, I think:

There is a Gantt session in the cross-project summit etherpad (#17 on the list):

https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

My guess is that we will indeed have the Gantt session on Tuesday, as there is 
quite a bit of interest in it. The remainder of the TC members are voting today 
on those cross-project sessions and ttx will take it from there.

 1) Cinder 2) Containers 3) Neutron 4) Ceilometer 5) Heat

 are specific projects that have scheduling requirements (either to use 
 the scheduler or provide input to it) that they need and it would be 
 really good to find out exactly what they need.  My suggestion would 
 be an etherpad with a big title `Requirements' on top and sections for 
 each of those projects would be enough to get us started.

As I mentioned in the previous response, I would greatly prefer if the Gantt 
cross-project session was actually NOT a "we want XYZ from the scheduler" 
session. I'd prefer to have an etherpad already containing all those 
requirements from people before the summit and use the summit time to discuss 
prioritization and implementation/design proposals for actually getting the 
work done.

 PS: Note that Sylvain and I are both on vacation next week so we'll 
 have to miss the Gantt meeting.  It would be great if you could lead 
 the meeting Jay, otherwise we'll just meet up at the summit.

Sure, I can do that, no problem. Me and Paul will ponder the vast expanse of 
time and space together and then try to unclutter the resource tracker. :)

-jay

 -- Don Dugger Censeo Toto nos in Kansa esse decisse. - D. Gale Ph:
 303/443-3786

 -Original Message- From: Jay Pipes [mailto:jaypi...@gmail.com] 
 Sent: Wednesday, October 22, 2014 10:10 AM To: 
 openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [gantt] 
 Scheduler group meeting - cancelled this week INTERNAL

 The regular meeting time is Tuesdays at 15:00 UTC/11am EST:

 https://wiki.openstack.org/wiki/Meetings#Gantt_.28Scheduler.29_team_me
 eting

  Generally, we don't do slides for design summit sessions -- we use 
 etherpads instead and the sessions are discussions, not presentations.

 Next week's meeting we can and should create etherpads for the 
 cross-project session(s) that we will get allocated for Gantt topics.

 Best, -jay

 On 10/22/2014 11:54 AM, Elzur, Uri wrote:
 Don

 Will there be a meeting next week? What is the regular time slot for 
 the meeting?

 I'd like to work w you on a technical slide to use in Paris

 Do we need to socialize the Gantt topic more?

 Thx

 Uri (Oo-Ree)

 C: 949-378-7568

 *From:* Dugger, Donald D [mailto:donald.d.dug...@intel.com] *Sent:* 
 Wednesday, October 22, 2014 6:04 AM *To:* OpenStack Development 
 Mailing List (not for usage questions) *Subject:* [openstack-dev] 
 [gantt] 

[openstack-dev] What's Up Doc? Oct 24 2014 [training] [api]

2014-10-24 Thread Anne Gentle
Here are some docs highlights for this week.

HOT Guides: Gauvain wants a little more time with the HOT Guides before
making them publicly available, so those will probably come out after the
Summit.

Cutting a stable/juno branch for docs: As publicized in this week's meeting
notes, our goal is to cut stable/juno branch for openstack-manuals on Oct
29 2014. Notes:
http://eavesdrop.openstack.org/meetings/docteam/2014/docteam.2014-10-22-14.01.html

HA Guide: Looks like there's progress towards reviewing and getting that
team together, thanks.

Upgrade guide: There are patches under discussion to pull the Upgrades
chapter out of the Operations Guide and put into a separate guide. Please
join in the discussion on the openstack-docs mailing list thread if you
have input.
http://lists.openstack.org/pipermail/openstack-docs/2014-October/005370.html

Soft freeze on project-api repos: Now that we're pulling the content out
of those and putting it into respective project-specs repos, we are
declaring a freeze on those repos. Reviewers, be aware of that.

Summit plans: Looks like we'll get a cross-project slot for docs, likely to
talk about how to scale docs efforts across many projects. Stay tuned for
the schedule. The TC is working through
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics to get
those scheduled. We'll also have a pod, so we're using this etherpad:
https://etherpad.openstack.org/p/docstopicsparissummit to organize those.
I've marked some topics there as pod topics.

Training team: I do want to talk about progress on the four things we
talked about after the last Summit:
http://lists.openstack.org/pipermail/openstack-docs/2014-March/004171.html
I think you're at 2 out of 4, so I've added a note to the docs/training
topics etherpad linked above.

Enjoy your weekend.
Anne
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [poppy] Proposal to add Malini Kamalambal (malini) as a Core Reviewer

2014-10-24 Thread Allan Metts
+1


From: Amit Gandhi [mailto:amit.gan...@rackspace.com]
Sent: Friday, October 24, 2014 2:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [poppy] Proposal to add Malini Kamalambal (malini) as 
a Core Reviewer

Hi folks,

I'd like to propose adding Malini Kamalambal (malini) as a core reviewer on the 
Poppy team. She has been contributing regularly since the start of Poppy, and 
has proven to be a careful reviewer with good judgment.  She also brings a lot 
of insight into OpenStack best practices from her experience working on Zaqar, 
where she is also a Core Reviewer.

All Poppy ATCs, please respond with a +1 or -1.

Thanks

Amit Gandhi
@amitgandhinz
Poppy PTL


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-24 Thread Kevin Benton
Hi,

Thanks for posting this. I am interested in this use case as well.

I didn't find a link to a review for the ML2 driver. Do you have any more
details for that available?
It seems like not providing L2 connectivity between members of the same
Neutron network conflicts with assumptions ML2 will make about segmentation
IDs, etc. So I am interested in seeing how exactly the ML2 driver will bind
ports, segments, etc.
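
For reference, here is roughly the surface area I mean, as a sketch only (this 
is not the proposed driver, the VIF type is a placeholder, and I'm assuming 
the stock PortContext API):

    from neutron.plugins.ml2 import driver_api as api

    class RoutedMechanismDriver(api.MechanismDriver):

        def initialize(self):
            pass

        def bind_port(self, context):
            # A routed driver would accept whatever segment is offered
            # and hand nova a routed VIF type rather than wiring up an
            # L2 bridge for it.
            for segment in context.network.network_segments:
                context.set_binding(segment[api.ID],
                                    'routed',  # hypothetical VIF type
                                    {'port_filter': True})
                return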


Cheers,
Kevin Benton

On Fri, Oct 24, 2014 at 6:38 AM, Cory Benfield cory.benfi...@metaswitch.com
 wrote:

 All,

 Project Calico [1] is an open source approach to virtual networking based
 on L3 routing as opposed to L2 bridging.  In order to accommodate this
 approach within OpenStack, we've just submitted 3 blueprints that cover

 -  minor changes to nova to add a new VIF type [2]
 -  some changes to neutron to add DHCP support for routed interfaces [3]
 -  an ML2 mechanism driver that adds support for Project Calico [4].

 We feel that allowing for routed network interfaces is of general use
 within OpenStack, which was our motivation for submitting [2] and [3].  We
 also recognise that there is an open question over the future of 3rd party
 ML2 drivers in OpenStack, but until that is finally resolved in Paris, we
 felt submitting our driver spec [4] was appropriate (not least to provide
 more context on the changes proposed in [2] and [3]).

 We're extremely keen to hear any and all feedback on these proposals from
 the community.  We'll be around at the Paris summit in a couple of weeks
 and would love to discuss with anyone else who is interested in this
 direction.

 Regards,

 Cory Benfield (on behalf of the entire Project Calico team)

 [1] http://www.projectcalico.org
 [2] https://blueprints.launchpad.net/nova/+spec/vif-type-routed
 [3] https://blueprints.launchpad.net/neutron/+spec/dhcp-for-routed-ifs
 [4] https://blueprints.launchpad.net/neutron/+spec/calico-mechanism-driver

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [Devstack]

2014-10-24 Thread Gabriel Hurley
SQLite doesn't introduce any additional dependencies; memcached requires 
installation of memcached itself (admittedly not hard on most distros, but it 
*is* yet another step) and, in most cases, the installation of another Python 
module to interface with it.

Memcached might be a good choice for devstack, but it may or may not be the 
right thing to recommend for Horizon by default.
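
For concreteness, the two alternatives as they would appear in Horizon's 
local_settings.py (standard Django settings; the memcached address is just an 
example):

    # Alternative 1: database-backed sessions. No extra services, but
    # slower, and the session table needs periodic cleanup.
    SESSION_ENGINE = 'django.contrib.sessions.backends.db'

    # Alternative 2: memcached-backed sessions. Faster and shareable
    # across web heads, but requires memcached plus a Python binding,
    # and sessions are lost when memcached restarts.
    SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            'LOCATION': '127.0.0.1:11211',
        }
    }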

- Gabriel

-Original Message-
From: Yves-Gwenaël Bourhis [mailto:yves-gwenael.bour...@cloudwatt.com] 
Sent: Friday, October 24, 2014 7:06 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Horizon] [Devstack]

On 24/10/2014 13:30, Chmouel Boudjnah wrote:
 On Fri, Oct 24, 2014 at 12:27 PM, Yves-Gwenaël Bourhis 
 yves-gwenael.bour...@cloudwatt.com
 mailto:yves-gwenael.bour...@cloudwatt.com wrote:
 memcache can be distributed (so usable in HA) and has far better
 performances then db sessions.
 Why not use memcache by default?
 
 
 I guess for the simple reason that if you restart your memcache you 
 lose all the sessions?

Indeed, and for devstack that's an easy way to do a cleanup of old sessions :-)

We are indeed talking about devstack in this thread, where losing sessions 
after a memcache restart is not an issue and looks more like a very handy 
feature.

For production it's another matter, and operators have the choice.

--
Yves-Gwenaël Bourhis

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-24 Thread Brandon Logan
With the recent talk about advanced services spinning out of Neutron,
and the fact most of the LBaaS community has wanted LBaaS to spin out of
Neutron, I wanted to bring up a possibility and gauge interest and
opinion on this possibility.

Octavia is going to have (and already has) an API.  The current thinking
is that an Octavia driver will be created in Neutron LBaaS that will make
requests to the Octavia API.  When LBaaS spins out of Neutron, it will
need a standalone API.  Octavia's API seems to be a good solution to
this.  It will support vendor drivers much like the current Neutron
LBaaS does.  It has a similar API to Neutron LBaaS v2, but it's not an
exact duplicate.  Octavia will be growing more mature in stackforge at a
higher velocity than an OpenStack project, so I expect by the time Kilo
comes around its API will be very mature.

Octavia's API doesn't have to be called Octavia either.  It can be
separated out and called OpenStack LBaaS, and the rest of Octavia (the
actual brains of it) will just be another driver to OpenStack LBaaS,
which would retain the Octavia name.

This is my PROS and CONS list to using Octavia's API as the spun out
LBaaS:

PROS
1. Time will need to be spent on a spun out LBaaS's API anyway.  Octavia
will already have this done.
2. Most of the same people working on Octavia have worked on Neutron
LBaaS v2.
3. It's out of Neutron faster, which is good for Neutron and LBaaS.

CONS
1. The Octavia API is dissimilar enough from Neutron LBaaS v2 to be yet
another version of an LBaaS API.
2. The Octavia API will also have a separate Operator API which will
most likely only work with Octavia, not any vendors.

The CONS are easily solvable, and IMHO the PROS greatly outweigh the
CONS.

This is just my opinion though and I'd like to hear back from as many as
possible.  Add on to the PROS and CONS if wanted.

If it is direction we can agree on going then we can add as a talking
point in the advanced services spin out meeting:

http://kilodesignsummit.sched.org/event/8a0b7c1d64883c08286e4446e163f1a6#.VEq66HWx3UY

Thanks,
Brandon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-24 Thread Stephen Balukoff
+1 to this, eh!

Though it sounds more like you're talking about spinning the Octavia user
API out of Octavia to become its own thing (i.e. OpenStack LBaaS), and
then ensuring a standardized driver interface that vendors (including
Octavia) will interface with. It's sort of a six of one, half a dozen of
the other kind of deal.

To the pros, I would add:  Spinning out from Neutron ensures that LBaaS uses
clean interfaces to the networking layer, and separation of concerns here
means that Neutron and LBaaS can evolve independently. (And testing and
failure modes, etc. all become easier with separation of concerns.)

One other thing to consider (not sure if pro or con): I know at Atlanta
there was a lot of talk around using the Neutron flavor framework to allow
for multiple vendors in a single installation as well as differentiated
product offerings for Operators. If / when LBaaS is spun out of Neutron,
LBaaS will still probably have need for something like Neutron flavors,
even if it isn't an equivalent implementation. (Noting of course, that no
implementation of Neutron flavors actually presently exists. XD )

Stephen


On Fri, Oct 24, 2014 at 2:47 PM, Brandon Logan brandon.lo...@rackspace.com
wrote:

 With the recent talk about advanced services spinning out of Neutron,
 and the fact most of the LBaaS community has wanted LBaaS to spin out of
 Neutron, I wanted to bring up a possibility and gauge interest and
 opinion on this possibility.

 Octavia is going to have (and already has) an API.  The current thinking
 is that an Octavia driver will be created in Neutron LBaaS that will make
 requests to the Octavia API.  When LBaaS spins out of Neutron, it will
 need a standalone API.  Octavia's API seems to be a good solution to
 this.  It will support vendor drivers much like the current Neutron
 LBaaS does.  It has a similar API to Neutron LBaaS v2, but it's not an
 exact duplicate.  Octavia will be growing more mature in stackforge at a
 higher velocity than an OpenStack project, so I expect by the time Kilo
 comes around its API will be very mature.

 Octavia's API doesn't have to be called Octavia either.  It can be
 separated out and called OpenStack LBaaS, and the rest of Octavia (the
 actual brains of it) will just be another driver to OpenStack LBaaS,
 which would retain the Octavia name.

 This is my PROS and CONS list to using Octavia's API as the spun out
 LBaaS:

 PROS
 1. Time will need to be spent on a spun out LBaaS's API anyway.  Octavia
 will already have this done.
 2. Most of the same people working on Octavia have worked on Neutron
 LBaaS v2.
 3. It's out of Neutron faster, which is good for Neutron and LBaaS.

 CONS
 1. The Octavia API is dissimilar enough from Neutron LBaaS v2 to be yet
 another version of an LBaaS API.
 2. The Octavia API will also have a separate Operator API which will
 most likely only work with Octavia, not any vendors.

 The CONS are easily solvable, and IMHO the PROS greatly outweigh the
 CONS.

 This is just my opinion though and I'd like to hear back from as many as
 possible.  Add on to the PROS and CONS if wanted.

 If it is direction we can agree on going then we can add as a talking
 point in the advanced services spin out meeting:


 http://kilodesignsummit.sched.org/event/8a0b7c1d64883c08286e4446e163f1a6#.VEq66HWx3UY

 Thanks,
 Brandon
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] periodic jobs for master

2014-10-24 Thread James E. Blair
Andrea Frittoli andrea.fritt...@gmail.com writes:

 I also believe we can find ways to make post-merge / periodic checks useful.
 We need to do that to keep the gate at a sane scale.

Yes, we have a plan to do that that we outlined at the infra/QA meetup
this summer and described to this list in this email:

http://lists.openstack.org/pipermail/openstack-dev/2014-July/041057.html

Particularly this part, but please read the whole message if you have
not already, or have forgotten it:

  * For all non gold standard configurations, we'll dedicate a part of
our infrastructure to running them in a continuous background loop,
as well as making these configs available as experimental jobs. The
idea here is that we'll actually be able to provide more
configurations that are operating in a more traditional CI (post
merge) context. People that are interested in keeping these bits
functional can monitor those jobs and help with fixes when needed.
The experimental jobs mean that if developers are concerned about
the effect of a particular change on one of these configs, it's easy
to request a pre-merge test run.  In the near term we might imagine
this would allow for things like ceph, mongodb, docker, and possibly
very new libvirt to be validated in some way upstream.

  * Provide some kind of easy to view dashboards of these jobs, as well
as a policy that if some job is failing for  some period of time,
it's removed from the system. We want to provide whatever feedback
we can to engaged parties, but people do need to realize that
engagement is key. The biggest part of putting tests into OpenStack
isn't landing the tests, but dealing with their failures.

I'm glad to see people interested in this.  If you're ready to
contribute to it, please stop by #openstack-infra or join our next team
meeting[1] to discuss how you can help.

-Jim

[1] https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][doc]

2014-10-24 Thread Doug Hellmann

On Oct 24, 2014, at 3:33 AM, Angelo Matarazzo angelo.matara...@dektech.com.au 
wrote:

 Hi all,
 I have a question for you devs.
 I don't understand the difference between this link 
 http://git.openstack.org/cgit/openstack/governance/tree/reference/project-testing-interface.rst
 and
 https://wiki.openstack.org/wiki/ProjectTestingInterface
 
 Some parts don't match (e.g. unittest running section).
 If the git link is the right doc should we update the wiki page?  
 
 I found the reference to the wiki page here:
 https://lists.launchpad.net/openstack/msg08058.html
 
 Best regards,
 Angelo

We’re working on setting things up so the rst files in that git repository can 
be published as HTML. When we do that, we should update the wiki page to link 
to the HTML page. For now, the rst file should be considered the canonical 
reference, since it is the version managed by the TC.

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-24 Thread Eichberger, German
Hi Jorge,

I agree completely with the points you make about the logs. We still feel that 
metering and logging are two different problems. The ceilometer community has 
a proposal on how to meter lbaas (see 
http://specs.openstack.org/openstack/ceilometer-specs/specs/juno/lbaas_metering.html)
 and we at HP think that those values are sufficient for us for the time 
being. 

I think our discussion is mostly about connection logs which are emitted in 
some way from the amphora (e.g. haproxy logs). Since they are customers' logs 
we need to explore on our end the privacy implications (I assume at RAX you 
have controls in place to make sure that there is no violation :-). Also I 
need to check if our central logging system is scalable enough and whether we 
can send logs there without creating security holes.

Another possibility is to ship our amphora agent logs, syslog-style, to a 
central system to help with troubleshooting and debugging. Those could be 
sufficiently anonymized to avoid privacy issues. What are your thoughts on 
logging those?

Thanks,
German

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com] 
Sent: Thursday, October 23, 2014 3:30 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hey German/Susanne,

To continue our conversation from our IRC meeting could you all provide more 
insight into you usage requirements? Also, I'd like to clarify a few points 
related to using logging.

I am advocating that logs be used for multiple purposes, including billing. 
Billing requirements are different than connection logging requirements. 
However, connection logging is a very accurate mechanism to capture billable 
metrics and thus is related. My vision for this is something like the 
following:

- Capture logs in a scalable way (i.e. capture logs and put them on a separate 
scalable store somewhere so that it doesn't affect the amphora).
- Every X amount of time (every hour, for example) process the logs and send 
them on their merry way to ceilometer or whatever service an operator will be 
using for billing purposes.
- Keep logs for some configurable amount of time. This could be anything from 
indefinitely to not at all. Rackspace is planning on keeping them for a certain 
period of time for the following reasons:

A) We have connection logging as a planned feature. If a customer turns 
on the connection logging feature for their load balancer it will already have 
a history. One important aspect of this is that customers (at least
ours) tend to turn on logging after they realize they need it (usually after a 
tragic lb event). By already capturing the logs I'm sure customers will be 
extremely happy to see that there are already X days worth of logs they can 
immediately sift through.
B) Operators and their support teams can leverage logs when providing 
service to their customers. This is huge for finding issues and resolving them 
quickly.
C) Albeit a minor point, building support for logs from the get-go 
mitigates capacity management uncertainty. My example earlier was the extreme 
case of every customer turning on logging at the same time. While unlikely, I 
would hate to manage that!

I agree that there are other ways to capture billing metrics but, from my 
experience, those tend to be more complex than what I am advocating and without 
the added benefits listed above. An understanding of HP's desires on this 
matter will hopefully get this to a point where we can start working on a spec.
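
To sketch the hourly processing step I describe above (the log format and 
field positions are made up for illustration; real haproxy logs would need 
real parsing):

    import collections

    def rollup(log_lines):
        # Aggregate per-load-balancer bytes out of raw connection logs.
        usage = collections.Counter()
        for line in log_lines:
            fields = line.split()
            lb_id = fields[0]           # assumed: load balancer id
            bytes_out = int(fields[1])  # assumed: bytes sent to client
            usage[lb_id] += bytes_out
        return usage

    sample = ['lb-123 2048', 'lb-123 4096', 'lb-456 512']
    for lb_id, total in rollup(sample).items():
        # Hand off to the operator's metering pipeline (ceilometer or
        # whatever else) here.
        print('%s transferred %d bytes this period' % (lb_id, total))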

Cheers,
--Jorge

P.S. Real-time stats is a different beast and I envision there being an API 
call that returns real-time data such as this == 
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.


From:  Eichberger, German german.eichber...@hp.com
Reply-To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:  Wednesday, October 22, 2014 2:41 PM
To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements


Hi Jorge,
 
Good discussion so far + glad to have you back :-)
 
I am not a big fan of using logs for billing information since 
ultimately (at least at HP) we need to pump it into ceilometer. So I am 
envisioning either having the amphora (via a proxy) pump it straight into 
that system, or collecting it on the controller and pumping it from there.
 
Allowing/enabling logging creates some requirements on the hardware, 
mainly that it can handle the IO coming from logging. Some operators 
might choose to hook up very cheap, poorly performing disks which 
might not be able to deal with the log traffic. So I would suggest that 
there is some rate limiting on the log output to help with that.

 
Thanks,
German
 
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]

Sent: Wednesday, 

Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-24 Thread Doug Hellmann

On Oct 23, 2014, at 8:14 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-10-23 17:18:04 -0400 (-0400), Doug Hellmann wrote:
 I think we have to actually wait for M, don’t we (K & L represent
 1 year where J is supported, M is the first release where J is not
 supported and 2.6 can be fully dropped).
 [...]
 
 Roughly speaking, probably. It's more accurate to say we need to
 keep it until stable/juno reaches end of support, which won't
 necessarily coincide exactly with any particular release cycle
 ending (it will instead coincide with whenever the stable branch
 management team decides the final 2014.2.x point release is, which I
 don't think has been settled quite yet).

Good point. I’ve been conflating those two things, but they are separate events.

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][doc] project testing interface

2014-10-24 Thread Stefano Maffulli
On 10/24/2014 03:03 PM, Anne Gentle wrote:
 The git link is the reference, and we're working on publishing those,
 just have to get a URL/home sorted out.

 In the meantime, yes, you can update the wiki page.

Why not delete the wiki page altogether? I think stale content on the wiki is 
damaging us. If we all agree that it's better to use rst files in git, then 
let's get rid of old content on the wiki and put redirects or warnings on 
those pages.

/stef

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][doc] project testing interface

2014-10-24 Thread James E. Blair
Angelo Matarazzo angelo.matara...@dektech.com.au writes:

 Hi all,
 I have a question for you devs.
 I don't understand the difference between this link
 http://git.openstack.org/cgit/openstack/governance/tree/reference/project-testing-interface.rst
 and
 https://wiki.openstack.org/wiki/ProjectTestingInterface

 Some parts don't match (e.g. unittest running section).
 If the git link is the right doc should we update the wiki page?

 I found the reference to the wiki page here:
 https://lists.launchpad.net/openstack/msg08058.html

The git repo is authoritative now, and the wiki is out of date.  Feel
free to update the wiki to point to the git repo.  We're working on
publishing the governance repo and so should have a nicer looking page
and URL for that soon.  Thanks!

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] [devstack] Generic scripts for Tempest configuration

2014-10-24 Thread Rochelle.RochelleGrober
Hi, Timur.

Check out [1].   Boris Pavlovic has been working towards what you want for more 
than a full release cycle.  There are still major issues to be conquered, but 
having something that gets us part of the way there and can identify what can’t 
be determined so that the humans have only a subset to work out would be a 
great first step.

There are also other reviews out there that need to come together to really 
make this work.  And projects that would be the better for it (Refstack and 
Rally).  These are [2] allowing Tempest tests to run as non-admin, [3] making 
Tempest pluggable, [4] refactoring the client manager to be more flexible.

I think some others may have merged already.  The bottom line is to refactor 
Tempest such that there is a test server with the necessary tools and 
components to make it work, and a tempest lib such that writing tests can 
benefit from common procedures.

Enjoy the reading.

--Rocky


[1] https://review.openstack.org/#/c/94473/
[2] https://review.openstack.org/#/c/86967/
[3] https://review.openstack.org/#/c/89322/
[4] https://review.openstack.org/#/c/92804/


From: Timur Nurlygayanov [mailto:tnurlygaya...@mirantis.com]
Sent: Friday, October 24, 2014 4:05 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [tempest] [devstack] Generic scripts for Tempest 
configuration

Hi all,
we are using Tempest tests to verify every change in different OpenStack 
components, and we have scripts in devstack which allow configuring Tempest.
We want to use Tempest tests to verify different clouds, not only those 
installed with devstack, and to do this we need to configure Tempest manually 
(or with non-generic scripts tied to a specific lab configuration).
It looks like we can improve the Tempest configuration scripts we have in the 
devstack repository now and create generic scripts for Tempest, usable by 
devstack or manually, to configure Tempest for any private/public OpenStack 
cloud. These scripts should make Tempest easy to configure: the user should 
provide only the Keystone endpoint and logins/passwords; other parameters can 
be optional and configured automatically.

The idea is to have generic scripts which allow configuring Tempest easily, 
out of the box, without deep inspection of the lab configuration (but with 
the ability to change optional parameters too, if required).
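
As a very rough sketch of the idea (the option names are illustrative, a real 
generator would fill in far more, and the exception handling is deliberately 
crude):

    import ConfigParser

    from keystoneclient.v2_0 import client as ks_client

    def generate_tempest_conf(auth_url, username, password, tenant, path):
        keystone = ks_client.Client(auth_url=auth_url, username=username,
                                    password=password, tenant_name=tenant)
        conf = ConfigParser.ConfigParser()
        conf.add_section('identity')
        conf.set('identity', 'uri', auth_url)
        conf.set('identity', 'username', username)
        conf.set('identity', 'password', password)
        conf.set('identity', 'tenant_name', tenant)

        # Everything else can be discovered instead of asked for, e.g.
        # which services the cloud actually offers.
        conf.add_section('service_available')
        for service in ('compute', 'image', 'network', 'volume'):
            try:
                keystone.service_catalog.url_for(service_type=service)
                available = True
            except Exception:
                available = False
            conf.set('service_available', service, str(available))

        with open(path, 'w') as f:
            conf.write(f)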

--

Timur,
Senior QA Engineer
OpenStack Projects
Mirantis Inc
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] doc8 0.5.0 released

2014-10-24 Thread Joshua Harlow
The doc8 team is pleased to announce the release of doc8 0.5.0, the first 
release in the kilo series for doc8.

Documentation for the library is available at http://pypi.python.org/pypi/doc8

This release includes the following changes:

$ git log 0.4.3..HEAD --oneline --no-merges

666805d Fix the 'ignore-path' config option
04a710c Allow overriding file encoding
5a43417 Add check D005 - no newline at end of file

Please report issues via launchpad: https://bugs.launchpad.net/doc8

Thanks,

Josh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev