Re: [openstack-dev] [all] cross project communication: Return request-id to caller

2015-06-03 Thread Miguel Ángel Ajo
Doesn’t this overlap with the work done for OSProfiler?


More comments inline.  

Miguel Ángel Ajo


On Wednesday, 3 June 2015 at 11:43, Kekane, Abhishek wrote:

 Hi Devs,
  
 So far I have got the following responses on the proposed solutions:
  
 Solution 1: Return tuple containing headers and body from respective clients - 3 +1s
 Solution 2: Use thread local storage to store 'x-openstack-request-id'
 returned from headers - 0 +1s
 Solution 3: Unique request-id across OpenStack Services - 1 +1
  
  


I’d vote for Solution 3, without involving keystone (the first caller with no
req-id generates one randomly); the req-id would contain a call/hop count,
which is incremented on every new call...
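
Purely as an illustration of that idea (the format and helper below are made
up, not part of any spec), the generate/increment logic could look roughly
like this:

    import uuid

    def next_request_id(incoming=None):
        # Hypothetical format: req-<uuid>/<hop count>
        if incoming is None:
            # The first caller in the chain generates a fresh id, hop count 0.
            return 'req-%s/0' % uuid.uuid4()
        base, hops = incoming.rsplit('/', 1)
        # Each subsequent service bumps the hop count before passing it on.
        return '%s/%d' % (base, int(hops) + 1)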
 
   
  
  

  
 Requesting community people, cross-project members and PTLs to go through
 this mailing thread [1] and give your suggestions/opinions about the
 solutions proposed so that it will be easy to finalize the solution.
  
 [1] http://lists.openstack.org/pipermail/openstack-dev/2015-May/064842.html
  
 Thanks & Regards,
  
 Abhishek Kekane
  
 -Original Message-
 From: Nikhil Komawar [mailto:nik.koma...@gmail.com]  
 Sent: 28 May 2015 12:34
 To: openstack-dev@lists.openstack.org 
 (mailto:openstack-dev@lists.openstack.org)
 Subject: Re: [openstack-dev] [all] cross project communication: Return 
 request-id to caller
  
 Did you get to talk with anyone in the LogWG
 ( https://wiki.openstack.org/wiki/LogWorkingGroup )? I wonder what kind of
 recommendations and standards we can come up with while adopting a cross-project
 solution. If our logs follow a certain prefix and/or suffix style across
 projects, that would go a long way.
  
 Personally: +1 on Solution 1
  
 On 5/28/15 2:14 AM, Kekane, Abhishek wrote:
   
  Hi Devs,
   
   
  Thank you for your opinions/thoughts.
   
  However, I would like to suggest that you please give a +1 for the
  solution you would like to propose, so that at the end it will be
  easy for us to consolidate the votes for each solution and
  make a decision.
   
   
  Thanks in advance.
   
   
  Abhishek Kekane
   
   
   
  *From:*Joe Gordon [mailto:joe.gord...@gmail.com]
  *Sent:* 28 May 2015 00:31
  *To:* OpenStack Development Mailing List (not for usage questions)
  *Subject:* Re: [openstack-dev] [all] cross project communication:
  Return request-id to caller
   
   
   
   
  On Wed, May 27, 2015 at 12:06 AM, Kekane, Abhishek  
  abhishek.kek...@nttdata.com mailto:abhishek.kek...@nttdata.com wrote:
   
  Hi Devs,
   
   
  Each OpenStack service sends a request ID header with HTTP responses.
  This request ID can be useful for tracking down problems in the logs.
   However, when an operation crosses service boundaries, this tracking can
   become difficult, as each service has its own request ID. The request ID
   is not returned to the caller, so it is not easy to track the request.
   This becomes especially problematic when requests are coming in
   parallel. For example, glance will call cinder when creating an image, but
   that cinder instance may be handling several other requests at the
   same time. By logging the same request ID, a user can easily find
   the cinder request ID that corresponds to the glance request ID in the g-api
   log. It will help operators/developers analyse logs effectively.
   
   
  Thank you for writing this up.
   
   
   
  To address this issue we have come up with following solutions:
   
   
  Solution 1: Return tuple containing headers and body from
  respective clients (also favoured by Joe Gordon)
   
  Reference:
   
   https://review.openstack.org/#/c/156508/6/specs/log-request-id-mappings.rst
   
   
  Pros:
   
  1. Maintains backward compatibility
   
   2. Effective debugging/analysing of the problem as both the calling
   service request-id and the called service request-id are logged in the
   same log message
   
  3. Build a full call graph
   
   4. End user will be able to know the request-id of the request and
   can approach the service provider to know the cause of failure of a
   particular request.
   
   
  Cons:
   
  1. The changes need to be done first in cross-projects before
  making changes in clients
   
   2. Applications which are using python-*clients need to make the
   required changes (check the return type of the response)
   
   
  Additional cons:
   
   
   3. Cannot simply search all logs (a la logstash) using the request-id
   returned to the user without any post-processing of the logs.
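
As a rough illustration of what Solution 1 would mean for a caller (the client
call below is made up, not the real python-*client API; it is just a sketch of
the (headers, body) return style):

    import logging

    LOG = logging.getLogger(__name__)

    def show_server(client, server_id):
        # Hypothetical client whose calls return (headers, body) instead of
        # only the body -- the essence of Solution 1.
        headers, body = client.get('/servers/%s' % server_id)
        # The called service's request-id comes back in the response headers,
        # so the caller can log it next to its own request-id.
        LOG.info("called service request-id: %s",
                 headers.get('x-openstack-request-id'))
        return body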
  

   
   
   
   
   
  Solution 2: Use thread local storage to store
  'x-openstack-request-id' returned from headers (suggested by Doug
  Hellmann)
   
  Reference:
   
   https://review.openstack.org/#/c/156508/9/specs/log-request-id-mappings.rst
   
   
  Add new method 'get_openstack_request_id' to return this
  request-id to the caller.
   
   
  Pros:
   
  1. Doesn't break compatibility
   
  2. Minimal changes are required in client
   
  3. Build a full call graph
   
   
  Cons:
   
   1. A malicious user can send an overly long request-id
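
For reference, a minimal sketch of the thread-local idea behind Solution 2
(illustrative only, not the actual spec code):

    import threading

    _local = threading.local()

    def record_request_id(response_headers):
        # A client would call this after every HTTP response it receives.
        _local.request_id = response_headers.get('x-openstack-request-id')

    def get_openstack_request_id():
        # Accessor proposed in the spec: returns the request-id of the last
        # call made from the current thread, or None if nothing was recorded.
        return getattr(_local, 'request_id', None)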

Re: [openstack-dev] [Neutron] virtual machine can not get DHCP lease due packet has no checksum

2015-06-02 Thread Miguel Ángel Ajo
The backport seems reasonable IMO.

Is this tested in a multihost environment?

I ask because, given Ian's explanation (which I probably got wrong), the
issue is in the
NET-NIC-VM path, while the patch fixes the path in the network node (this is
run in the
dhcp agent): dhcp-NIC-NET.
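
(For anyone who hasn't read the patch: my understanding is that it boils down
to a checksum-fill mangle rule for DHCP replies, applied by the DHCP agent in
its namespace, roughly along these lines; the exact match criteria may differ
from the merged change:

    iptables -t mangle -A POSTROUTING -p udp --dport 68 -j CHECKSUM --checksum-fill
)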


Best,
Miguel Ángel Ajo


On Tuesday, 2 June 2015 at 9:32, Ian Wells wrote:

 The fix should work fine.  It is technically a workaround for the way 
 checksums work in virtualised systems, and the unfortunate fact that some 
 DHCP clients check checksums on packets where the hardware has checksum 
 offload enabled.  (This doesn't work due to an optimisation in the way QEMU 
 treats packet checksums.  You'll see the problem if your machine is running 
 the VM on the same host as its DHCP server and the VM has a vulnerable 
 client.)
  
 I haven't tried it myself but I have confidence in it and would recommend a 
 backport.
 --  
 Ian.
  
 On 1 June 2015 at 21:32, Kevin Benton blak...@gmail.com 
 (mailto:blak...@gmail.com) wrote:
  I would propose a back-port of it and then continue the discussion on the 
  patch. I don't see any major blockers for back-porting it.
   
  On Mon, Jun 1, 2015 at 7:01 PM, Tidwell, Ryan ryan.tidw...@hp.com 
  (mailto:ryan.tidw...@hp.com) wrote:
   Not seeing this on Kilo, we're seeing this on Juno builds (that's 
    expected).  I'm interested in a Juno backport, but mainly wanted to
    see if others had confidence in the fix.  The discussion in the bug 
   report also seemed to indicate there were other alternative solutions 
   others might be looking into that didn't involve an iptables rule.

   -Ryan

   -Original Message-
   From: Mark McClain [mailto:m...@mcclain.xyz]
   Sent: Monday, June 01, 2015 6:47 PM
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: Re: [openstack-dev] [Neutron] virtual machine can not get DHCP 
   lease due packet has no checksum


On Jun 1, 2015, at 7:26 PM, Tidwell, Ryan ryan.tidw...@hp.com 
(mailto:ryan.tidw...@hp.com) wrote:
   
I see a fix for https://bugs.launchpad.net/neutron/+bug/1244589 merged 
during Kilo.  I'm wondering if we think we have identified a root cause 
and have merged an appropriate long-term fix, or if 
https://review.openstack.org/148718 was merged just so there's at least 
a fix available while we investigate other alternatives.  Does anyone 
have an update to provide?
   
-Ryan

   The fix works in environments we’ve tested in.  Are you still seeing 
   problems?

   mark
   
   
   
  --  
  Kevin Benton  
   
  
  
  




Re: [openstack-dev] [Neutron] virtual machine can not get DHCP lease due packet has no checksum

2015-06-02 Thread Miguel Ángel Ajo
Ooook, fully understood now. Thanks Ihar & Ian for the clarification :)


Miguel Ángel Ajo


On Tuesday, 2 June 2015 at 13:33, Ihar Hrachyshka wrote:

  
 On 06/02/2015 10:10 AM, Miguel Ángel Ajo wrote:
  The backport seems reasonable IMO.
   
  Is this tested in a multihost environment?.
   
  I ask, because given the Ian explanation (which probably I got
  wrong), the issue is in the NET-NIC-VM path while the patch fixes
  the path in the network node (this is ran in the dhcp agent).
  dhcp-NIC-NET.
   
  
  
 If a packet goes out of your real NIC, then it gets a proper checksum
 attached. So the issue is single host only.
  
 Ihar
  
  




Re: [openstack-dev] [neutron] [fwaas] - Collecting use cases for API improvements

2015-05-28 Thread Miguel Ángel Ajo
Hi! ;)

On Thursday, 28 May 2015 at 9:03, Gal Sagie wrote:

 Hello All,
  
 The session talk was mainly about merging the Security Group API and the
 FWaaS API into the same one or keeping them separate,
 on which I don't think we reached agreement (or did we? :) )

I can’t tell either ;)
  
  
 I personally think (and a few people approached us at the summit to express
 that they feel the same) that we need to allow the user the full set of
 features, configurable both on the perimeter firewall and on the VM port.
 We can make the UI easier for applying a subset of the features (similar to
 security groups, for example) on the VM ports, but I feel merging the APIs
 would make things easier in the long run (and then you can apply it either on
 the VM port or the router ports, and of course choose to apply only a simpler
 subset of the features).

I guess the complexity here lies in deprecating the security group API while
offering a migration path, probably proxying security group calls to
FWaaS? For full deprecation I guess you may need to coordinate with nova.

FWaaS also lacks the ability to reference groups in rules, or is it capable of
such a thing? (ingress all from ‘sg-id’)

Also you may want to make the FirewallDriver used for security groups part of 
FWaaS, and reuse/redesign the messaging mechanisms.

I have mixed feelings about it, but I guess it could be a reasonable path so 
all the Firewall things are in one place, and not two.
  
  
  
 There are many security use cases that are detected and handled better on the 
 Hypervisor / VM level which the current
 security groups API doesn’t cover.
  
 As for the staying-compatible-with-Amazon argument, I understand and think it's
 an important point, but there are already differences between different cloud
 providers (for example, if you take Azure, its Security Group features
 include actions) and I don’t think we need to hold off features and
 innovation because others aren’t doing it.

I agree, that we shouldn’t stop innovation even if others aren’t doing it, 
otherwise we’re putting a ceiling on our capabilities, and we’re always going 
to be behind proprietary/closed source public clouds...
  
  
  
 We have already suggested and implemented (in review) an easy way to extend
 security groups with new rule classes [1], [2]
 and have implemented one use case for this [3], [4] (brute force prevention)
 (for which we did some nice
 research/analysis to create templates for common protocols' login
 rates/retries and how to detect brute-force probabilities -
 I can extend in private if anyone is interested, but everything can be seen in
 the code).
  
 I also feel that auditing visibility is important for security groups, and I
 have a joint (with Roey Chen from VMware) API spec [5]
 and reference implementation spec [6] to extend security group auditing
 capabilities.
  
That came up in the “ovs sg ludicrous speed” talk; we may need auditing
actions support in OVS at some point.
  
Miguel Ángel Ajo,

 Nonetheless, I would love to help and contribute code in any effort around
 this area and would like to see this move forward; I believe we have
 an opportunity to give added value to the users with this.
  
 Thanks
 Gal.
  
 [1] https://review.openstack.org/#/c/169784/
 [2] https://review.openstack.org/#/c/154535/
 [3] https://review.openstack.org/#/c/151247/
 [4] https://review.openstack.org/#/c/184243/
 [5] https://review.openstack.org/#/c/180078/
 [6] https://review.openstack.org/#/c/180419/
  
 On Thu, May 28, 2015 at 4:51 AM, Sridar Kandaswamy (skandasw) 
 skand...@cisco.com (mailto:skand...@cisco.com) wrote:
  Hi All:  
   
  Thanks German for articulating this – we did have this discussion last
  Fri as well on the need to have more user input. FWaaS has been in a bit
  of a Catch-22 situation with the experimental state. Regarding feature
  velocity – it has definitely been frustrating, and we also lost
  contributors along the journey due to their frustration with moving things
  forward, which made things worse.
   
  Kilo has been interesting in that there are more new contributors, repo 
  split and more in terms of vendor support has gone in than ever before. We 
  hope that this will improve traction for the customers they represent as 
  well. Adding more user inputs and having a concerted conversation will 
  definitely help. I echo Kyle and can certainly speak for all the current 
  contributors in also helping out in any way possible to get this going. New 
   Contributors are always welcome – Slawek & Vikram, among the most recent
   new contributors, know this well.
   
  Thanks  
   
  Sridar  
   
  From: Vikram Choudhary viks...@gmail.com (mailto:viks...@gmail.com)
  Date: Wednesday, May 27, 2015 at 5:54 PM
  To: OpenStack List openstack-dev@lists.openstack.org 
  (mailto:openstack-dev@lists.openstack.org)
  Subject: Re: [openstack-dev] [neutron] [fwaas] - Collecting use cases for 
  API improvements
   
  Hi

Re: [openstack-dev] [Ironic][Neutron] Ironic/Neutron Integration weekly meeting kick off

2015-05-27 Thread Miguel Ángel Ajo
Thanks for sharing Sukhdev, I’ll join the meetings.

Miguel Ángel Ajo


On Thursday, 28 May 2015 at 6:59, Sukhdev Kapur wrote:

 Folks,  
  
 Starting next Monday (June 1, 2015), we are kicking off a weekly meeting to
 discuss and track the integration of Ironic and Neutron (ML2).
 We are hoping to implement the networking support within the Liberty cycle. Come
 join and help us achieve this goal.

 Anybody who is interested in this topic, wants to contribute, or wants to share their
 wisdom with the team is welcome to join us. Here are the details of the
 meeting:
  
  Weekly on Mondays at 1600 UTC (9am Pacific Time)
  IRC Channel - #openstack-meeting-4
  
  Meeting Agenda and team charter - 
  https://wiki.openstack.org/wiki/Meetings/Ironic-neutron
   
 Feel free to add a topic to the agenda for discussion.  
  
 Looking forward to meeting you in the channel.  
  
 regards..
 -Sukhdev
  
  
  




Re: [openstack-dev] [Neutron] Stepping down from Neutron core team

2015-05-25 Thread Miguel Ángel Ajo
Ahh, I missed this email while I was at the summit.

Thank you for so many years of hard work, Salvatore; as Edgar said, your
“pedant” comments
made it better.  I will miss your sense of humor ;)

Best,
Miguel Ángel Ajo


On Thursday, 21 May 2015 at 21:04, Gary Kotton wrote:

 -1
  
 From: Carl Baldwin c...@ecbaldwin.net (mailto:c...@ecbaldwin.net)
 Reply-To: OpenStack List openstack-dev@lists.openstack.org 
 (mailto:openstack-dev@lists.openstack.org)
 Date: Thursday, May 21, 2015 at 11:32 AM
 To: OpenStack List openstack-dev@lists.openstack.org 
 (mailto:openstack-dev@lists.openstack.org)
 Subject: Re: [openstack-dev] [Neutron] Stepping down from Neutron core team
  
  
 On May 21, 2015 9:06 AM, Kyle Mestery mest...@mestery.com 
 (mailto:mest...@mestery.com) wrote:
 
  On Thu, May 21, 2015 at 8:58 AM, Salvatore Orlando sorla...@nicira.com 
  (mailto:sorla...@nicira.com) wrote:
 
  After putting the whole OpenStack networking contributors community 
  through almost 8 cycles of pedant comments and annoying what if 
  questions, it is probably time for me to relieve neutron contributors from 
  this burden.
 
   It has been a pleasure for me serving the Neutron community (or Quantum as
   it was called at the time), and now it feels right - and probably overdue -
   to relinquish my position as a core team member in a spirit of rotation
   and alternation between contributors.
  
   Note: Before you uncork your champagne bottles, please be aware that I
   will stay in the Neutron community as a contributor and I might still end
   up reviewing patches.
 
  Thanks for being so understanding with my pedant remarks,
  Salvatore
 
 
  If I could -1 this as a patch, I would.
 
  But seriously. Salvatore, thanks for all the years of work you've put into 
  Neutron. Please note that just because you've stepped down from the core 
  reviewer team doesn't mean we won't be relying on you to be a part of the 
  community.
 
  Thanks,
  Kyle  
 Kyle, I think you should -2 this one.  ;)  Maybe we should put the core team 
 list in a repo just so you can.
 All joking aside, @Salv, I have appreciated your feedback much more than you 
 give yourself credit for, maybe more than I let on.  You don't need to be a 
 core to give it, so keep it coming.
 Carl
  
  
  
  
  




Re: [openstack-dev] [Neutron][QoS] Neutron QoS (Quality Of Service) update

2015-05-07 Thread Miguel Ángel Ajo
Gal, thank you very much for the update to the list, I believe it’s very 
helpful,
I’ll add some inline notes.

On Thursday, 7 May 2015 at 8:51, Gal Sagie wrote:

 Hello All,
  
 I think that the Neutron QoS effort is progressing to a critical point and I
 asked Miguel if I could post an update on our progress.
  
 First, I would like to thank Sean and Miguel for running this effort and
 everyone else that is involved; I personally think it's on the right track.
 However, I would like to see more people involved, especially more
 Neutron-experienced members, because I believe we want to make the right decisions and
 learn from past mistakes when making the API design.
  
 Feel free to update in the meeting wiki [1], and the spec review [2]
  
 Topics
  
 API microversioning spec implications [3]
 QoS can benefit from this design; however, some concerns were raised that this
 will
 only be available at mid L cycle.
 I think a better view is needed of how this aligns with the new QoS design, and
 any feedback/recommendation is useful.
I guess a strategy here could be: go on with an extension, and translate that
into
an experimental API once microversioning is ready; then after one cycle we
could
“graduate it” to get versioned.
  
  
 Changes to the QoS API spec: scoping into bandwidth limiting
 At this point the concentration is on the API and implementation
 of bandwidth limiting.
  
 However it is important to keep the design easily extensible for some next 
 steps
 like traffic classification and marking.
  
 Changes to the QoS API spec: modeling of rules (class hierarchy) (Guarantee 
 split out)
  There is a QoSPolicy which is composed of different QoSRules; there is
  a discussion about splitting the rules into different types like QoSTypeRule
  (this is in order to support easy extension of this model by adding new types
  of rules which extend the possible parameters).
  
 Plugins can then separate optional aspects into separate rules.
 Any feedback on this approach is appreciated.
  
 Discuss multiple API end points (per rule type) vs single

Here the topic name was incorrect: where I said API endpoints, we
meant URLs or REST resources (thanks Irena for the correction).

  
 In summary this means that in the above model, do we want to support
 /v1/qosrule/..  or  /v1/qostyperule/ APIs?
 I think the consensus right now is that the latter is more flexible.
  
 Miguel is also checking the possibility of using something like:
 /v1/qosrule/type/... kind of parsing
 Feedback is desired here too :)
  
 Traffic Classification considerations
 The idea right now is to extract the TC classification into another data model
 and attach it to a rule;
 that way there is no need to repeat the same filters for the same kind of traffic.
  
 Of course we need to consider here what it means to update a classifier
 and not to introduce too many dependencies.
About this, the intention is not to fully model this, or to include it in the
data model now,
but to try to see how we could do it in future iterations and whether it fits the
current data model
and APIs we’re proposing.
  
  
  
 The ingress vs egress differences and issues
 Egress bandwidth limiting is much more useful and better supported.
 There is still doubt about support for ingress bandwidth limiting in OVS;
 anyone
 who knows whether ingress QoS is supported in OVS, we want your feedback :)
 (for example, implementing the OF1.3 Metering spec)
  
 Thanks all (Miguel, Sean or anyone else, please update this if I forgot
 anything)
  
 [1] https://wiki.openstack.org/wiki/Meetings/QoS
 [2] https://review.openstack.org/#/c/88599/
 [3] https://review.openstack.org/#/c/136760/
  
  




Re: [openstack-dev] [neutron] replace external ip monitor

2015-05-04 Thread Miguel Ángel Ajo
Does the library require root privileges to work
for the operations you’re planning to do?

That would be a stopper, since all the agents run unprivileged, and all the
operations are filtered by the oslo root wrap daemon or cmdline tool.
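
As a rough, untested sketch of what monitoring netlink events with pyroute2
could look like (just to show it is plain Python underneath, with no extra
process to manage or text output to parse):

    from pyroute2 import IPRoute

    ipr = IPRoute()
    ipr.bind()  # subscribe to the netlink broadcast groups
    try:
        while True:
            for msg in ipr.get():  # blocks until events arrive
                # e.g. RTM_NEWADDR / RTM_DELADDR messages carry the address
                print(msg['event'], msg.get_attr('IFA_ADDRESS'))
    finally:
        ipr.close()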

Best,
Miguel Ángel.


Miguel Ángel Ajo


On Monday, 4 May 2015 at 14:58, Assaf Muller wrote:

 .
  
 - Original Message -
  …
   
  Hello.
   
  I would like to discuss the possibility of replacing the external ip monitor
  in the neutron code [1] with internal native Python code [2]
   
  The issues of the current implementation:
  * an external process management
  * text output parsing (possibly buffered)
   
  The proposed library:
  * pure Python code
  * threadless (by default) socket-like objects to work with netlink
  * optional eventlet optimization
  * netlink messages as native python objects
  * compatible license
   
  
  
 How's packaging looking on all supported platforms?
  
 On a related note, ip_monitor.py is 87 lines of code, so I'd be wary of getting
 rid of it and using a full-blown library instead. Then again, using pyroute2,
 we might want to replace other pieces of code (such as parts of ip_lib).
  
  If it's ok, I would prepare a patchset this week.
   
  [1] neutron/agent/linux/ip_monitor.py
  [2] https://github.com/svinota/pyroute2
   
  --
  Peter V. Saveliev
   
   
  
  
  
  




Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting

2015-04-06 Thread Miguel Ángel Ajo
On Tuesday, 7 April 2015 at 3:14, Kyle Mestery wrote:
 On Mon, Apr 6, 2015 at 6:04 PM, Salvatore Orlando sorla...@nicira.com 
 (mailto:sorla...@nicira.com) wrote:
   
   
  On 7 April 2015 at 00:33, Armando M. arma...@gmail.com 
  (mailto:arma...@gmail.com) wrote:

   On 6 April 2015 at 08:56, Miguel Ángel Ajo majop...@redhat.com 
   (mailto:majop...@redhat.com) wrote:
I’d like to co-organize a QoS weekly meeting with Sean M. Collins,
 
In the last few years, the interest for QoS support has increased, 
Sean has been leading
this effort [1] and we believe we should get into a consensus about how 
to model an extension
to let vendor plugins implement QoS capabilities on network ports and 
tenant networks, and
how to extend agents, and the reference implementation & others [2]
 




   
   
  As you surely know, so far every attempt to achieve a consensus has failed 
  in a pretty miserable way.
  This is mostly because QoS can be interpreted in a lot of different ways,
  both from the conceptual and the practical perspective.
   
   
   
  
  
  
  
  
  

Yes, I’m fully aware of it, it was also a new feature, so it was out of scope 
for Kilo.  
  It is important in my opinion to clearly define the goals first. For 
  instance a simple extension for bandwidth limiting could be a reasonable 
  target for the Liberty release.
   
   
   
  
  
  
  
  
  

I quite agree here, but IMHO, as you said, it’s a quite open field (limiting,
guaranteeing,
marking, traffic shaping..); we should do our best to define a model
allowing us
to build that up in the future without huge changes. On the API side I guess
microversioning
is going to help with the API evolution.

Also, at some point, we may need to involve the nova folks, for
example, to define
port flavors that can be associated with nova
instance flavors, providing them
1) different types of network port speeds/guarantees/priorities,
2) the ability to schedule instances/ports in coordination, to be able to meet
specified guarantees.

Yes, complexity can skyrocket fast.
  Moving things such as ECN into future works is the right thing to do in 
  my opinion. Attempting to define a flexible framework that can deal with 
  advanced QoS policies specification is a laudable effort, but I am a bit 
  skeptical about its feasibility.
   
 ++, I think focusing on perhaps bandwidth limiting may make a lot of sense  
Yes, I believe we should look into the future, but at the same time pick our very
first feature (or a
very simple set of them) for L, stick to it, and try to make a design that can
be extended.
  
   

 
As per discussion we’ve had during the last few months [3], I 
believe we should start simple, but
prepare a model allowing future extendibility, to allow for example 
specific traffic rules (per port,
per IP, etc..), congestion notification support [4], …
 




   
   
  Simple in my mind is even more extreme than what you're proposing here... 
  I'd start with bare APIs for specifying bandwidth limiting, and then phase 
  them out once this framework is in place.
  Also note that this kind of design bears some overlap with the flavor 
  framework which is probably going to be another goal for Liberty.
   
 Indeed, and the flavor framework is something I'm hoping we can land by 
 Liberty-1 (yes, I just said Liberty-1).
Yes, it’s something I looked at; I must admit I wasn’t able to see how they work
together (it doesn’t
mean it doesn’t play well, but most probably I was silly enough not to see it
:) ).

I didn’t want to distract attention from the Kilo cycle focus by asking questions,
so it should
be a good thing to talk about during the first meetings.

Who are the flavor fathers/mothers? ;)
  
   
  Moreover, consider using common tools such as the specs repo to share and 
  discuss design documents.

   
   
   
  
 Also a good idea.
Yes, that was the plan now, we didn’t use it before to avoid creating 
unnecessary noise during this cycle.

   
 
It’s the first time I’m trying to organize an openstack/neutron 
meeting, so, I don’t know what’s the
best way to do it, or find the best timeslot. I guess interested people 
may have a saying, so I’ve  
looped anybody I know is interested in the CC of this mail.  
 


    I think that's a good idea. Incidentally I was speaking with Sean
    regarding the Summit session [1], and we were hoping we could get some folks
    together either prior to or during the summit, to try and get some momentum
    going behind this initiative, once again.
Very interesting [1]! Nice to see we're starting to have a bunch of people with an
interest in QoS.
   
  I think it is a good idea as well.  I was thinking that perhaps it might be a 
  good idea to grab a design summit session as well (surely not a fishbowl 
  one as they're totally unfit for this purpose

[openstack-dev] [neutron] [QoS] QoS weekly meeting

2015-04-06 Thread Miguel Ángel Ajo
I’d like to co-organize a QoS weekly meeting with Sean M. Collins,

In the last few years, the interest for QoS support has increased, Sean has 
been leading
this effort [1] and we believe we should get into a consensus about how to 
model an extension
to let vendor plugins implement QoS capabilities on network ports and tenant 
networks, and
how to extend agents, and the reference implementation & others [2]

As per discussion we’ve had during the last few months [3], I believe we 
should start simple, but
prepare a model allowing future extendibility, to allow for example specific 
traffic rules (per port,
per IP, etc..), congestion notification support [4], …

It’s the first time I’m trying to organize an openstack/neutron meeting,
so I don’t know what’s the
best way to do it, or how to find the best timeslot. I guess interested people may
have a say, so I’ve
looped everybody I know is interested into the CC of this mail.


Miguel Ángel Ajo

[1] https://blueprints.launchpad.net/neutron/+spec/quantum-qos-api
[2] 
https://drive.google.com/file/d/0B2XATqL7DxHFRHNjU3k1UFNYRjQ/view?usp=sharing
[3] 
https://docs.google.com/document/d/1xUx0Oq-txz_qVA2eYE1kIAJlwxGCSqXHgQEEGylwlZE/edit#heading=h.2pdgqfl3a231
[4] 
https://blueprints.launchpad.net/neutron/+spec/explicit-congestion-notification



Re: [openstack-dev] [Openstack-operators] [Neutron] Deprecating the use_namespaces option - Now's the time to speak up!

2015-03-31 Thread Miguel Ángel Ajo
On Tuesday, 31 March 2015 at 7:14, George Shuklin wrote:
  
  
 On 03/30/2015 11:18 AM, Kevin Benton wrote:
  What does fog do? Is it just a client to the Neutron HTTP API? If so,  
  it should not have broken like that because the API has remained  
  pretty stable. If it's a deployment tool, then I could see that  
   because the configuration options do tend to suffer quite a bit of  
  churn as tools used by the reference implementation evolve.
   
  
 As far as I understand (I'm not a ruby guy, I'm an openstack guy, but I've been
 peeking at ruby guys' attempts to use openstack with fog as a replacement
 for vagrant/virtualbox), the problem lies in the default network selection.
  
 Fog expects to have one network and use it, and a neutron network-rich
 environment is simply too complex for it. Maybe fog is to blame, but the
 result is simple: some user library worked fine with nova networks but
 struggles after the update to neutron.
  
 Linux usually covers all those cases to make the transition between versions
 very smooth. OpenStack does not.
  
  I agree that these changes are an unpleasant experience for the end  
  users, but that's what the deprecation timeline is for. This feature  
  won't break in L, it will just result in deprecation warnings. If we  
  get feedback from users that this serves an important use case that  
  can't be addressed another way, we can always stop the deprecation at  
  that point.
   
  
 In my opinion it happens too fast and is too cruel. For example: it is deprecated
 in the 'L' release and will be kept only if 'L' users complain. But for
 that, many users would have to switch from havana to a newer version. That is not
 true; many skip a few versions before moving to the new one.
  
 Openstack releases are too wild and untested to be used 'after release'
 (simple example: the VLAN id bug in neutron, which completely breaks hard
 reboots in neutron, was fixed in the last update of havana; that means all
 havanas were broken from the moment of release to the very last moment),
 so users wait until bugs are fixed. And they deploy the new version after
 that. So it is something like half a year between a new version and
 deployment. And no one wants to do an upgrade right after they have done a
 deployment. Add one or two more years. And only then do users find that
 everything is deprecated and removed and openstack is new and shiny
 again, and everyone needs to learn it from scratch. I'm exaggerating a
 bit, but that's true - the older and more mature the installation (like a big
 public cloud), the less they want to upgrade every half a year to
 the shiny new bugs.
  
 TL;DR: The deprecation cycle should take at least a few years to get proper
 feedback from real heavy users.
  
  


From the user POV I can’t do anything but agree; you pictured it right:
currently we mark something for deprecation, and by the middle/end of the next
cycle it’s deprecated. But most users won’t realize it’s deprecated until
it’s too late, either because they jump to use a stable version after a few
stable
releases to be safe, or because they skip versions.

From the code point of view, it can sometimes become messy, but we
should take care of our customers…



Re: [openstack-dev] [Neutron] initial OVN testing

2015-03-31 Thread Miguel Ángel Ajo
That’s super nice ;) !!! :D

I’m prototyping over here [1] to gather some benchmarks for the summit 
presentation
about “Taking Security Groups To Ludicrous Speed with Open vSwitch” with Ivar, 
Justin
and Thomas.


I know Justin and Joe have been making good advances on it ;) [3] lately.

[1] https://review.openstack.org/#/c/167671/
[2] https://github.com/justinpettit/ovs/tree/conntrack
[3] https://github.com/justinpettit/ovs/commits/conntrack

Miguel Ángel Ajo


On Tuesday, 31 March 2015 at 9:34, Kevin Benton wrote:

 Very cool. What's the latest status on data-plane support for the conntrack 
 based things like firewall rules and conntrack integration?
  
 On Mon, Mar 30, 2015 at 7:19 PM, Russell Bryant rbry...@redhat.com 
 (mailto:rbry...@redhat.com) wrote:
  On 03/26/2015 07:54 PM, Russell Bryant wrote:
   Gary and Kyle, I saw in my IRC backlog that you guys were briefly
   talking about testing the Neutron ovn ml2 driver.  I suppose it's time
   to add some more code to the devstack integration to install the current
   ovn branch and set up ovsdb-server to serve up the right database for
   this.  I'll try to work on that tomorrow.  Of course, note that all we
   can set up right now is the northbound database.  None of the code that
   reacts to updates to that database is merged yet.  We can still go ahead
   and test our code and make sure the expected data makes it there, though.
   
  With help from Kyle Mestery, Gary Kotton, and Gal Sagie, some great
  progress has been made over the last few days.  Devstack support has
  merged and the ML2 driver seems to be doing the right thing.
   
  After devstack runs, you can see that the default networks created by
  devstack are in the OVN db:
   
   $ neutron net-list
    +--------------------------------------+---------+----------------------------------------------------+
    | id                                   | name    | subnets                                            |
    +--------------------------------------+---------+----------------------------------------------------+
    | 1c4c9a38-afae-40aa-a890-17cd460b314b | private | 115f27d1-5330-489e-b81f-e7f7da123a31 10.0.0.0/24   |
    | 69fc7d7c-6906-43e7-b5e2-77c059cf4143 | public  | 6b5c1597-4af8-4ad3-b28b-a4e83a07121b               |
    +--------------------------------------+---------+----------------------------------------------------+
   
   $ ovn-nbctl lswitch-list
   47135494-6b36-4db9-8ced-3bdc9b711ca9 
   (neutron-1c4c9a38-afae-40aa-a890-17cd460b314b)
   03494923-48cf-4af5-a391-ed48fe180c0b 
   (neutron-69fc7d7c-6906-43e7-b5e2-77c059cf4143)
   
   $ ovn-nbctl lswitch-get-external-id 
   neutron-1c4c9a38-afae-40aa-a890-17cd460b314b
   neutron:network_id=1c4c9a38-afae-40aa-a890-17cd460b314b
   neutron:network_name=private
   
   $ ovn-nbctl lswitch-get-external-id 
   neutron-69fc7d7c-6906-43e7-b5e2-77c059cf4143
   neutron:network_id=69fc7d7c-6906-43e7-b5e2-77c059cf4143
   neutron:network_name=public
   
  You can also create ports and see those reflected in the OVN db:
   
   $ neutron port-create 1c4c9a38-afae-40aa-a890-17cd460b314b
   Created a new port:
   +---+-+
   | Field | Value   
   |
   +---+-+
   | admin_state_up| True
   |
   | allowed_address_pairs | 
   |
   | binding:vnic_type | normal  
   |
   | device_id | 
   |
   | device_owner  | 
   |
   | fixed_ips | {subnet_id: 
   115f27d1-5330-489e-b81f-e7f7da123a31, ip_address: 10.0.0.3} |
   | id| e7c080ad-213d-4839-aa02-1af217a6548c
   |
   | mac_address   | fa:16:3e:07:9e:68   
   |
   | name  | 
   |
   | network_id| 1c4c9a38-afae-40aa-a890-17cd460b314b
   |
   | security_groups   | be68fd4e-48d8-46f2-8204-8a916ea6f348
   |
   | status| DOWN
   |
   | tenant_id | ed782253a54c4e0a8b46e275480896c9

Re: [openstack-dev] [Neutron] [TripleO] [Ironic] Deprecating the use_namespaces option - Now's the time to speak up!

2015-03-23 Thread Miguel Ángel Ajo
Looking at project Calico, I don’t know if what they do is similar to what the 
people from
triple-o & ironic do with the neutron-dhcp-agent. I believe we should ask them
before deprecation.

I added their tags to the subject.  

AFAIK TripleO/Ironic was patching neutron-dhcp-agent too.

For project Calico, why do you need no netns and why do you patch it?  

Kevin, thanks for pointing that out.

Best,
Miguel Ángel Ajo


On Monday, 23 March 2015 at 7:34, Miguel Ángel Ajo wrote:

 +1 for deprecation if people don’t have use cases / good reasons to keep it, 
 I don’t know  
  and I can’t think of any, but that doesn’t mean they don’t exist.
  
 Miguel Ángel Ajo
  
  
 On Monday, 23 March 2015 at 2:34, shihanzhang wrote:
  
  +1 to deprecate this option
   
   At 2015-03-21 02:57:09, Assaf Muller amul...@redhat.com
   (mailto:amul...@redhat.com) wrote:

   Hello everyone,

   The use_namespaces option in the L3 and DHCP Neutron agents controls if
   you can create multiple routers and DHCP networks managed by a single
   L3/DHCP agent, or if the agent manages only a single resource.

   Are the setups out there *not* using the use_namespaces option? I'm
   curious as to why, and if it would be difficult to migrate such a setup
   to use namespaces. I'm asking because use_namespaces complicates Neutron
   code for what I gather is an option that has not been relevant for years.
   I'd like to deprecate the option for Kilo and remove it in Liberty.

   Assaf Muller, Cloud Networking Engineer
   Red Hat
   
   
   
   
  
  
  
  




Re: [openstack-dev] [Neutron] Deprecating the use_namespaces option - Now's the time to speak up!

2015-03-23 Thread Miguel Ángel Ajo
+1 for deprecation if people don’t have use cases / good reasons to keep it, I 
don’t know  
 and I can’t think of any, but that doesn’t mean they don’t exist.

Miguel Ángel Ajo


On Monday, 23 March 2015 at 2:34, shihanzhang wrote:

 +1 to deprecate this option
  
 At 2015-03-21 02:57:09, Assaf Muller amul...@redhat.com wrote:

 Hello everyone,

 The use_namespaces option in the L3 and DHCP Neutron agents controls if you
 can create multiple routers and DHCP networks managed by a single L3/DHCP
 agent, or if the agent manages only a single resource.

 Are the setups out there *not* using the use_namespaces option? I'm curious
 as to why, and if it would be difficult to migrate such a setup to use
 namespaces. I'm asking because use_namespaces complicates Neutron code for
 what I gather is an option that has not been relevant for years. I'd like to
 deprecate the option for Kilo and remove it in Liberty.

 Assaf Muller, Cloud Networking Engineer
 Red Hat
  
  
  




Re: [openstack-dev] [Openstack-operators] [Neutron] Deprecating the use_namespaces option - Now's the time to speak up!

2015-03-23 Thread Miguel Ángel Ajo



On Monday, 23 March 2015 at 8:20, Van Leeuwen, Robert wrote:

Are the setups out there *not* using the use_namespaces option? I'm
curious as
to why, and if it would be difficult to migrate such a setup to use
namespaces.
  
 At my previous employer we did not use namespaces.
 This was due to an installation a few years ago on SL6 which did not have
 namespace support at that time.
  
 I think there are valid reasons not to use namespaces:
 Fewer moving parts == less can potentially fail
 Troubleshooting is easier due to fewer places to look / no need for familiarity
 with namespaces & tools
 If I remember correctly, setting up a namespace can get really slow when you
 have a lot of them on a single machine
  
 If you have a requirement for all networks to be routable, disabling
 namespaces does make sense.
 Since I’m currently in the design phase for such an install, I’d surely like
 to know if it is going to be deprecated.
 Thx for letting us know about this :)
  
  

Hi Van, thanks for pointing those out

IMHO, those shouldn’t be valid reasons anymore, since they were due to iproute or
sudo issues
that have been corrected long ago, and all distros installing neutron are
supporting netns at this
point AFAIK.

Best regards,
Miguel Angel Ajo


Re: [openstack-dev] [Openstack-operators] [Neutron] Deprecating the use_namespaces option - Now's the time to speak up!

2015-03-23 Thread Miguel Ángel Ajo
I see your point, Van,

On the other hand, removing it cleans up a lot of conditional code parts (moving
parts on the other side),
and also the non-netns case is not tested by upstream CI, AFAIK, so it could be
broken at any time
and we would not notice it.



Miguel Ángel Ajo


On Monday, 23 March 2015 at 9:25, Van Leeuwen, Robert wrote:

  I think there are valid reasons to not use namespaces:
  Fewer moving parts == less can potentialy fail
  Troubleshooting is easier due to less places to look / need no familiarity 
  with namespaces  tools
  If I remember correctly setting up a namespace can get really slow when you 
  have a lot of them on a single machine
   
   
   
   
  
  
  IMHO, those shouldn’t be valid reasons anymore, since they were due 
  iproute, or sudo issues  
  that have been corrected long ago, and all distros installing neutron are 
  supporting netns at this
  
 Well, you exactly made my point:  
 There is lots that can and will go wrong with more moving parts.
 That they are fixed at the moment does not mean that there will not be a new 
 bug in the future…
  
 Cheers,  
 Robert van Leeuwen
  
  
  




Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-10 Thread Miguel Ángel Ajo
Thanks to everybody working on this,

Answers inline:  

On Tuesday, 10 March 2015 at 0:34, Tidwell, Ryan wrote:

 Thanks Salvatore.  Here are my thoughts, hopefully there’s some merit to them:
   
 With implicit allocations, the thinking is that this is where a subnet is 
 created in a backward-compatible way with no subnetpool_id and the subnets 
 APIs continue to work as they always have.
   
 In the case of a specific subnet allocation request (create-subnet passing a 
 pool ID and specific CIDR), I would look in the pool’s available prefix list 
 and carve out a subnet from one of those prefixes and ask for it to be 
 reserved for me.  In that case I know the CIDR I’ll be getting up front.  In 
 such a case, I’m not sure I’d ever specify my gateway using notation like 
 0.0.0.1, even if I was allowed to.  If I know I’ll be getting 10.10.10.0/24, 
 I can simply pass gateway_ip as 10.10.10.1 and be done with it.  I see no 
 added value in supporting that wildcard notation for a gateway on a specific 
 subnet allocation.
   
 In the case of an “any” subnet allocation request (create-subnet passing a 
 pool ID, but no specific CIDR), I’m already delegating responsibility for 
 addressing my subnet to Neutron.  As such, it seems reasonable to not have 
 strong opinions about details like gateway_ip when making the request to 
 create a subnet in this manner.
   
 To me, this all points to not supporting wildcards for gateway_ip and 
 allocation_pools on subnet create (even though it found its way into the 
 spec).  My opinion (which I think lines up with yours) is that on an any 
 request it makes sense to let the pool fill in allocation_pools and 
 gateway_ip when requesting an “any” allocation from a subnet pool.  When 
 creating a specific subnet from a pool, gateway IP and allocation pools could 
 still be passed explicitly by the user.
   
 -Ryan
   
 From: Salvatore Orlando [mailto:sorla...@nicira.com]  
 Sent: Monday, March 09, 2015 6:06 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [api][neutron] Best API for generating subnets from 
 pool  
   
 Greetings!
   
  
 Neutron is adding a new concept of subnet pool. To put it simply, it is a 
 collection of IP prefixes from which subnets can be allocated. In this way a 
 user does not have to specify a full CIDR, but simply a desired prefix 
 length, and then let the pool generate a CIDR from its prefixes. The full 
 spec is available at [1], whereas two patches are up for review at [2] (CRUD) 
 and [3] (integration between subnets and subnet pools).
  
 While [2] is quite straightforward, I must admit I am not really sure that 
 the current approach chosen for generating subnets from a pool might be the 
 best one, and I'm therefore seeking your advice on this matter.
  
   
  
 A subnet can be created with or without a pool.
  
 Without a pool the user will pass the desired cidr:
  
   
  
 POST /v2.0/subnets
  
 {'network_id': 'meh',
  
    'cidr': '192.168.0.0/24'}
  
   
  
 Instead with a pool the user will pass pool id and desired prefix lenght:
  
 POST /v2.0/subnets
  
 {'network_id': 'meh',
  
  'prefix_len': 24,
  
  'pool_id': 'some_pool'}
  
   
  
 The response to the previous call would populate the subnet cidr.
  
 So far it looks quite good. Prefix_len is a bit of duplicated information, 
 but that's tolerable.
  
 It gets a bit awkward when the user also specifies attributes such as a desired 
 gateway ip or allocation pools, as they have to be specified in a 
 cidr-agnostic way. For instance:
  
   
  
 POST /v2.0/subnets
  
 {'network_id': 'meh',
  
  'gateway_ip': '0.0.0.1',
  
  'prefix_len': 24,
  
  'pool_id': 'some_pool'}
  
  
   
  
 would indicate that the user wishes to use the first address in the range as 
 the gateway IP, and the API would return something like this:
  
   
  
 POST /v2.0/subnets
  
 {'network_id': 'meh',
  
   'cidr': '10.10.10.0/24'
  
  'gateway_ip': '10.10.10.1',
  
  'prefix_len': 24,
  
  'pool_id': 'some_pool'}
  
  
  
   
  
  The problem with this approach is, in my opinion, that attributes such as
  gateway_ip are used with different semantics in requests and responses; this
  might also require users to write client applications expecting that the values in
  the response might differ from those in the request.
  
   
  
 I have been considering alternatives, but could not find any that I would 
 regard as winner.
  
 I therefore have some questions for the neutron community and the API working 
 group:
  
   
  
 1) (this is more for neutron people) Is there a real use case for requesting 
 specific gateway IPs and allocation pools when allocating from a pool? If 
 not, maybe we should let the pool set a default gateway IP and allocation 
 pools. The user can then update them with another call. Another option would 
 be to provide subnet templates from which a user can choose. For instance 
 one template could have the gateway 

Re: [openstack-dev] [Neutron] Issue when upgrading from Juno to Kilo due to agent report_state RPC namespace patch

2015-03-04 Thread Miguel Ángel Ajo
I agree with Assaf, this is an issue across upgrades, and
we may want (if that’s technically possible) to provide
access to those functions both with and without the namespace.

Or otherwise think about reverting for now until we find a
migration strategy

https://review.openstack.org/#/q/status:merged+project:openstack/neutron+branch:master+topic:bp/rpc-docs-and-namespaces,n,z
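
A minimal sketch of what accepting report_state both ways could look like on
the server side, assuming oslo.messaging dispatches by per-endpoint namespace
(the namespace string below is an assumption, use the constant the patch
defines):

    import oslo_messaging as messaging

    class StateReportEndpoint(object):
        # Handles report_state sent inside the new RPC namespace.
        target = messaging.Target(namespace='state_report', version='1.0')

        def report_state(self, context, **kwargs):
            # The real handler lives in neutron; this only shows the shape.
            pass

    class LegacyStateReportEndpoint(StateReportEndpoint):
        # Same handler exposed without a namespace, so agents that have not
        # been upgraded yet keep reporting state; to be dropped in L.
        target = messaging.Target(version='1.0')

    endpoints = [StateReportEndpoint(), LegacyStateReportEndpoint()]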


Best regards,
Miguel Ángel Ajo


On Wednesday, 4 de March de 2015 at 17:00, Assaf Muller wrote:

 Hello everyone,
  
 I'd like to highlight an issue with:
 https://review.openstack.org/#/c/154670/
  
 According to my understanding, most deployments upgrade the controllers first
 and compute/network nodes later. During that time period, all agents will
 fail to report state as they're sending the report_state message outside
 of any namespace while the server is expecting that message in a namespace.
 This is a show stopper as the Neutron server will think all of its agents are 
 dead.
  
 I think the obvious solution is to modify the Neutron server code so that
 it accepts the report_state method both in and outside of the report_state
 RPC namespace and chuck that code away in L (Assuming we support rolling 
 upgrades
 only from version N to N+1, which while is unfortunate, is the behavior I've
 seen in multiple places in the code).
  
 Finally, are there additional similar issues for other RPC methods placed in 
 a namespace
 this cycle?
  
  
 Assaf Muller, Cloud Networking Engineer
 Red Hat
  
  
  




Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-03-03 Thread Miguel Ángel Ajo
https://review.openstack.org/#/c/159840/1/doc/source/testing/openflow-firewall.rst
  

I may need some help from the OVS experts to answer the questions from  
henry.hly.

Ben, Thomas, could you please? (let me know if you are not registered to
the openstack review system, I could answer in your name).


Best,
Miguel Ángel Ajo


On Friday, 27 February 2015 at 14:50, Miguel Ángel Ajo wrote:

 Ok, I moved the document here [1], and I will eventually submit another patch
 with the testing scripts when those are ready.
  
 Let’s move the discussion to the review!,
  
  
 Best,
 Miguel Ángel Ajo
 [1] https://review.openstack.org/#/c/159840/
  
  
  On Friday, 27 February 2015 at 7:03, Kevin Benton wrote:
  
  Sounds promising. We'll have to evaluate it for feature parity when the 
  time comes.
   
  On Thu, Feb 26, 2015 at 8:21 PM, Ben Pfaff b...@nicira.com 
  (mailto:b...@nicira.com) wrote:
   This sounds quite similar to the planned support in OVN to gateway a
   logical network to a particular VLAN on a physical port, so perhaps it
   will be sufficient.

   On Thu, Feb 26, 2015 at 05:58:40PM -0800, Kevin Benton wrote:
If a port is bound with a VLAN segmentation type, it will get a VLAN id 
and
a name of a physical network that it corresponds to. In the current 
plugin,
each agent is configured with a mapping between physical networks and 
OVS
bridges. The agent takes the bound port information and sets up rules to
forward traffic from the VM port to the OVS bridge corresponding to the
physical network. The bridge usually then has a physical interface 
added to
it for the tagged traffic to use to reach the rest of the network.
   
On Thu, Feb 26, 2015 at 4:19 PM, Ben Pfaff b...@nicira.com 
(mailto:b...@nicira.com) wrote:
   
 What kind of VLAN support would you need?

 On Thu, Feb 26, 2015 at 02:05:41PM -0800, Kevin Benton wrote:
  If OVN chooses not to support VLANs, we will still need the current 
  OVS
  reference anyway so it definitely won't be wasted work.
 
  On Thu, Feb 26, 2015 at 2:56 AM, Miguel Angel Ajo Pelayo 
  majop...@redhat.com (mailto:majop...@redhat.com) wrote:
 
  
   Sharing thoughts that I was having:
  
    May be during the next summit it’s worth discussing the future of the
    reference agent(s), I feel we’ll be replicating a lot of work across
    OVN/OVS/RYU(ofagent) and may be other plugins,
   
    I guess until OVN and its integration are ready we can’t stop, so it makes
    sense to keep development at our side, also having an independent plugin
    can help us iterate faster for new features, yet I expect that OVN will be
    more fluent at working with OVS and OpenFlow, as their designers have
    a very deep knowledge of OVS under the hood, and it’s C. ;)
  
   Best regards,
  
    On 26/2/2015, at 7:57, Miguel Ángel Ajo majop...@redhat.com 
    (mailto:majop...@redhat.com) wrote:
   
    On Thursday, 26 de February de 2015 at 7:48, Miguel Ángel Ajo 
    wrote:
  
    Inline comments follow after this, but I wanted to respond to Brian
    question which has been cut out:

    We’re talking here of doing a preliminary analysis of the networking
    performance, before writing any real code at neutron level.

    If that looks right, then we should go into a preliminary (and orthogonal
    to iptables/LB) implementation. At that moment we will be able to examine
    the scalability of the solution in regards of switching openflow rules,
    which is going to be severely affected by the way we use to handle OF
    rules in the bridge:

       * via OpenFlow, making the agent a “real” OF controller, with the
         current effort to use the ryu framework plugin to do that.
       * via cmdline (would be alleviated with the current rootwrap work,
         but the former one would be preferred).

    Also, ipset groups can be moved into conjunctive groups in OF (thanks Ben
    Pfaff for the explanation, if you’re reading this ;-))

    Best,
    Miguel Ángel
  
  
   On Wednesday, 25 de February de 2015 at 20:34, Tapio Tallgren 
   wrote:
  
   Hi,
  
   The RFC2544 with near zero packet loss is a pretty standard 
   performance
   benchmark. It is also used in the OPNFV project (
  
 https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
   ).
  
   Does this mean that OpenStack will have stateful firewalls (or 
   security
   groups)? Any other ideas planned, like ebtables type filtering?
  
   What I am proposing is in the terms of maintaining the 
   statefulness we
    have now regards security groups

Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-27 Thread Miguel Ángel Ajo
Ok, I moved the document here [1], and I will eventually submit another patch
with the testing scripts when those are ready.

Let’s move the discussion to the review!,


Best,
Miguel Ángel Ajo
[1] https://review.openstack.org/#/c/159840/


On Friday, 27 de February de 2015 at 7:03, Kevin Benton wrote:

 Sounds promising. We'll have to evaluate it for feature parity when the time 
 comes.
  
 On Thu, Feb 26, 2015 at 8:21 PM, Ben Pfaff b...@nicira.com 
 (mailto:b...@nicira.com) wrote:
  This sounds quite similar to the planned support in OVN to gateway a
  logical network to a particular VLAN on a physical port, so perhaps it
  will be sufficient.
   
  On Thu, Feb 26, 2015 at 05:58:40PM -0800, Kevin Benton wrote:
   If a port is bound with a VLAN segmentation type, it will get a VLAN id 
   and
   a name of a physical network that it corresponds to. In the current 
   plugin,
   each agent is configured with a mapping between physical networks and OVS
   bridges. The agent takes the bound port information and sets up rules to
   forward traffic from the VM port to the OVS bridge corresponding to the
   physical network. The bridge usually then has a physical interface added 
   to
   it for the tagged traffic to use to reach the rest of the network.
  
   On Thu, Feb 26, 2015 at 4:19 PM, Ben Pfaff b...@nicira.com 
   (mailto:b...@nicira.com) wrote:
  
What kind of VLAN support would you need?
   
On Thu, Feb 26, 2015 at 02:05:41PM -0800, Kevin Benton wrote:
 If OVN chooses not to support VLANs, we will still need the current 
 OVS
 reference anyway so it definitely won't be wasted work.

 On Thu, Feb 26, 2015 at 2:56 AM, Miguel Angel Ajo Pelayo 
 majop...@redhat.com (mailto:majop...@redhat.com) wrote:

 
  Sharing thoughts that I was having:
 
   May be during the next summit it’s worth discussing the future of the
   reference agent(s), I feel we’ll be replicating a lot of work across
   OVN/OVS/RYU(ofagent) and may be other plugins,
  
   I guess until OVN and its integration are ready we can’t stop, so it makes
   sense to keep development at our side, also having an independent plugin
   can help us iterate faster for new features, yet I expect that OVN will be
   more fluent at working with OVS and OpenFlow, as their designers have
   a very deep knowledge of OVS under the hood, and it’s C. ;)
 
  Best regards,
 
   On 26/2/2015, at 7:57, Miguel Ángel Ajo majop...@redhat.com 
   (mailto:majop...@redhat.com) wrote:
  
   On Thursday, 26 de February de 2015 at 7:48, Miguel Ángel Ajo 
   wrote:
 
   Inline comments follow after this, but I wanted to respond to Brian
   question which has been cut out:

   We’re talking here of doing a preliminary analysis of the networking
   performance, before writing any real code at neutron level.

   If that looks right, then we should go into a preliminary (and orthogonal
   to iptables/LB) implementation. At that moment we will be able to examine
   the scalability of the solution in regards of switching openflow rules,
   which is going to be severely affected by the way we use to handle OF
   rules in the bridge:

      * via OpenFlow, making the agent a “real” OF controller, with the
        current effort to use the ryu framework plugin to do that.
      * via cmdline (would be alleviated with the current rootwrap work,
        but the former one would be preferred).

   Also, ipset groups can be moved into conjunctive groups in OF (thanks Ben
   Pfaff for the explanation, if you’re reading this ;-))

   Best,
   Miguel Ángel
 
 
  On Wednesday, 25 de February de 2015 at 20:34, Tapio Tallgren wrote:
 
  Hi,
 
  The RFC2544 with near zero packet loss is a pretty standard 
  performance
  benchmark. It is also used in the OPNFV project (
 
https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
  ).
 
  Does this mean that OpenStack will have stateful firewalls (or 
  security
  groups)? Any other ideas planned, like ebtables type filtering?
 
  What I am proposing is in the terms of maintaining the statefulness 
  we
  have now regards security groups (RELATED/ESTABLISHED connections are
  allowed back on open ports) while adding a new firewall driver 
  working
only
  with OVS+OF (no iptables or linux bridge).
 
  That will be possible (without auto-populating OF rules in opposite 
  directions) due to
  the new connection tracker functionality to be eventually merged 
  into
ovs.
 
 
  -Tapio
 
  On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones rick.jon...@hp.com 
  (mailto:rick.jon...@hp.com)
wrote:
 
  On 02/25/2015 05:52 AM, Miguel

[openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Miguel Ángel Ajo
I’m writing a plan/script to benchmark OVS+OF(CT) vs OVS+LB+iptables+ipsets,  
so we can make sure there’s a real difference before jumping into any
OpenFlow security group filters when we have connection tracking in OVS.

The plan is to keep all of it in a single multicore host, and make all the 
measures
within it, to make sure we just measure the difference due to the software 
layers.

Suggestions or ideas on what to measure are welcome, there’s an initial draft 
here:

https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct  

Miguel Ángel Ajo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] creating a unified developer reference manual

2015-02-25 Thread Miguel Ángel Ajo
Sounds like a very good idea: shared knowledge for cross-project development.

Miguel Ángel Ajo


On Wednesday, 25 de February de 2015 at 22:32, michael mccune wrote:

 On 02/25/2015 02:54 PM, Doug Hellmann wrote:
  During yesterday’s cross-project meeting [1], we discussed the “Eventlet 
  Best Practices” spec [2] started by bnemec. The discussion of that spec 
  revolved around the question of whether our cross-project specs repository 
  is the right place for this type of document that isn’t a “plan” for a 
  change, and is more a reference guide. Eventually we came around to the 
  idea of creating a cross-project developer guide to hold these sorts of 
  materials.
   
  That leads to two questions, then:
   
  1. Should we have a unified developer guide for the project?
  
 +1, this sounds like a fantastic idea.
  
  2. Where should it live and how should we manage it?
  
 i like the idea of creating a new repository, akin to the other  
 OpenStack manuals. i think it would be great if there was an easy way  
 for the individual projects to add their specific recommendations as well.
  
 the main downside i see to creating a new repo/manual/infra is the  
 project overhead. hopefully there will be enough interest that this  
 won't be an issue though.
  
 mike
  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Miguel Ángel Ajo
On Thursday, 26 de February de 2015 at 7:48, Miguel Ángel Ajo wrote:
 Inline comments follow after this, but I wanted to respond to Brian question
 which has been cut out:
  
 We’re talking here of doing a preliminary analysis of the networking 
 performance,
 before writing any real code at neutron level.
  
 If that looks right, then we should go into a preliminary (and orthogonal to 
 iptables/LB)
 implementation. At that moment we will be able to examine the scalability of 
 the solution
 in regards of switching openflow rules, which is going to be severely affected
 by the way we use to handle OF rules in the bridge:
  
   * via OpenFlow, making the agent a “real” OF controller, with the current 
 effort to use
   the ryu framework plugin to do that.
* via cmdline (would be alleviated with the current rootwrap work, but the 
 former one
  would be preferred).
  
 Also, ipset groups can be moved into conjunctive groups in OF (thanks Ben 
 Pfaff for the
 explanation, if you’re reading this ;-))
  
 Best,
 Miguel Ángel
  
  
  
 On Wednesday, 25 de February de 2015 at 20:34, Tapio Tallgren wrote:
  Hi,
   
  The RFC2544 with near zero packet loss is a pretty standard performance 
  benchmark. It is also used in the OPNFV project 
  (https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases).
   
  Does this mean that OpenStack will have stateful firewalls (or security 
  groups)? Any other ideas planned, like ebtables type filtering?
   
 What I am proposing is in the terms of maintaining the statefulness we have 
 now
 regards security groups (RELATED/ESTABLISHED connections are allowed back  
 on open ports) while adding a new firewall driver working only with OVS+OF 
 (no iptables  
 or linux bridge).
  
 That will be possible (without auto-populating OF rules in opposite 
 directions) due to
 the new connection tracker functionality to be eventually merged into ovs.
   
  -Tapio
   
   
  On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones rick.jon...@hp.com 
  (mailto:rick.jon...@hp.com) wrote:
   On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:
I’m writing a plan/script to benchmark OVS+OF(CT) vs
OVS+LB+iptables+ipsets,
so we can make sure there’s a real difference before jumping into any
OpenFlow security group filters when we have connection tracking in OVS.
 
The plan is to keep all of it in a single multicore host, and make
all the measures within it, to make sure we just measure the
difference due to the software layers.
 
Suggestions or ideas on what to measure are welcome, there’s an initial
draft here:
 
https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct

   Conditions to be benchmarked

   Initial connection establishment time
   Max throughput on the same CPU

   Large MTUs and stateless offloads can mask a multitude of path-length 
   sins.  And there is a great deal more to performance than Mbit/s. While 
   some of that may be covered by the first item via the likes of say 
   netperf TCP_CRR or TCP_CC testing, I would suggest that in addition to a 
   focus on Mbit/s (which I assume is the focus of the second item) there is 
   something for packet per second performance.  Something like netperf 
   TCP_RR and perhaps aggregate TCP_RR or UDP_RR testing.

   Doesn't have to be netperf, that is simply the hammer I wield :)

   What follows may be a bit of perfect being the enemy of the good, or 
   mission creep...

   On the same CPU would certainly simplify things, but it will almost 
   certainly exhibit different processor data cache behaviour than actually 
   going through a physical network with a multi-core system.  Physical NICs 
   will possibly (probably?) have RSS going, which may cause cache lines to 
   be pulled around.  The way packets will be buffered will differ as well.  
   Etc etc.  How well the different solutions scale with cores is definitely 
    a difference of interest between the two software layers.



Hi Rick, thanks for your feedback here, I’ll take it into consideration,  
especially about the small packet pps measurements, and
really using physical hosts.

Although I may start with an AIO setup for simplicity, we should
get more conclusive results from at least two hosts and decent NICs.

I will put all this together in the document, and loop you in for review.  
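
For reference, a rough sketch of the kind of aggregate TCP_RR run suggested
above (the netserver address and stream count are placeholders, the
'-o throughput' selector assumes an omni-enabled netperf, and the output
parsing is simplified, so adjust it to your build):

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    TARGET = '192.0.2.10'   # placeholder: host running netserver
    STREAMS = 4             # concurrent request/response streams
    DURATION = 30           # seconds per stream

    def one_stream(_):
        # -P 0 suppresses the banner; -o throughput asks only for trans/s
        out = subprocess.check_output(
            ['netperf', '-P', '0', '-H', TARGET, '-t', 'TCP_RR',
             '-l', str(DURATION), '--', '-o', 'throughput'],
            universal_newlines=True)
        return float(out.strip().splitlines()[-1])

    with ThreadPoolExecutor(max_workers=STREAMS) as pool:
        rates = list(pool.map(one_stream, range(STREAMS)))
    print('aggregate TCP_RR transactions/s: %.1f' % sum(rates))
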
   rick


   __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe: 
   openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
   (http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
   
   
  --  
  -Tapio  
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack

Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Miguel Ángel Ajo
Inline comments follow after this, but I wanted to respond to Brian question
which has been cut out:

We’re talking here of doing a preliminary analysis of the networking 
performance,
before writing any real code at neutron level.

If that looks right, then we should go into a preliminary (and orthogonal to 
iptables/LB)
implementation. At that moment we will be able to examine the scalability of 
the solution
in regards of switching openflow rules, which is going to be severely affected
by the way we use to handle OF rules in the bridge:

   * via OpenFlow, making the agent a “real” OF controller, with the current 
effort to use
  the ryu framework plugin to do that.
   * via cmdline (would be alleviated with the current rootwrap work, but the 
former one
 would be preferred).

Also, ipset groups can be moved into conjunctive groups in OF (thanks Ben Pfaff 
for the
explanation, if you’re reading this ;-))

Best,
Miguel Ángel



On Wednesday, 25 de February de 2015 at 20:34, Tapio Tallgren wrote:  
 Hi,
  
 The RFC2544 with near zero packet loss is a pretty standard performance 
 benchmark. It is also used in the OPNFV project 
 (https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases).
  
 Does this mean that OpenStack will have stateful firewalls (or security 
 groups)? Any other ideas planned, like ebtables type filtering?
  
What I am proposing is in the terms of maintaining the statefulness we have now
regards security groups (RELATED/ESTABLISHED connections are allowed back  
on open ports) while adding a new firewall driver working only with OVS+OF (no 
iptables  
or linux bridge).

That will be possible (without auto-populating OF rules in opposite directions) 
due to
the new connection tracker functionality to be eventually merged into ovs.
  
  
 -Tapio
  
  
 On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones rick.jon...@hp.com 
 (mailto:rick.jon...@hp.com) wrote:
  On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:
   I’m writing a plan/script to benchmark OVS+OF(CT) vs
   OVS+LB+iptables+ipsets,
   so we can make sure there’s a real difference before jumping into any
   OpenFlow security group filters when we have connection tracking in OVS.

   The plan is to keep all of it in a single multicore host, and make
   all the measures within it, to make sure we just measure the
   difference due to the software layers.

   Suggestions or ideas on what to measure are welcome, there’s an initial
   draft here:

   https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct
   
  Conditions to be benchmarked
   
  Initial connection establishment time
  Max throughput on the same CPU
   
  Large MTUs and stateless offloads can mask a multitude of path-length sins. 
   And there is a great deal more to performance than Mbit/s. While some of 
  that may be covered by the first item via the likes of say netperf TCP_CRR 
  or TCP_CC testing, I would suggest that in addition to a focus on Mbit/s 
  (which I assume is the focus of the second item) there is something for 
  packet per second performance.  Something like netperf TCP_RR and perhaps 
  aggregate TCP_RR or UDP_RR testing.
   
  Doesn't have to be netperf, that is simply the hammer I wield :)
   
  What follows may be a bit of perfect being the enemy of the good, or 
  mission creep...
   
  On the same CPU would certainly simplify things, but it will almost 
  certainly exhibit different processor data cache behaviour than actually 
  going through a physical network with a multi-core system.  Physical NICs 
  will possibly (probably?) have RSS going, which may cause cache lines to be 
  pulled around.  The way packets will be buffered will differ as well.  Etc 
  etc.  How well the different solutions scale with cores is definitely a 
  difference of interest between the two software layers.
   
  rick
   
   
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
  (http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
 --  
 -Tapio  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-dev] [PATCH 8/8] [RFC] [neutron] ovn: Start work on design ocumentation.

2015-02-20 Thread Miguel Ángel Ajo

On Thursday, 19 de February de 2015 at 23:15, Kyle Mestery wrote:  
 [Adding neutron tag to subject, comments below.]
  
 On Thu, Feb 19, 2015 at 3:55 PM, Ben Pfaff b...@nicira.com 
 (mailto:b...@nicira.com) wrote:
  [moving this conversation to openstack-dev because it's more
  interesting there and that avoids crossposting to a subscribers-only
  list]
   
  On Thu, Feb 19, 2015 at 10:57:02AM +0100, Miguel Ángel Ajo wrote:
  I especially liked the VIF port lifecycle, looks good to me, I only miss 
   some
   “port_security” concepts we have in neutron, which I guess could have 
   been
   deliberately omitted for a start.
  
  In neutron we have something called security groups, and every port
   belongs to 1 or more security groups.  Each security group has a list of
   rules to control traffic at port level in a very fine grained fashion 
   (ingress/egress
   protocol/flags/etc…   remote_ip/mask or security_group ID)
  
   I guess we could build & render security_group ID to multiple IPs for each 
   port,
   but then we will miss the ingress/egress and protocol flags (like type  
   of protocol,
   ports, etc.. [1])
  
   Also, be aware, that not having security group ID references from neutron,
   when lots of ports go to the same security group we end up with an 
   exponential
   growth of rules / OF entries per port, we solved this in the 
   server-agent
   communication for the reference OVS solution by keeping a list of IPs
   belonging to security group IDs, and then, separately having the
   references from the rules.
   
  Thanks a lot for the comment.
   
  We aim to fully support security groups in OVN.  The current documents
  don't capture that intent.  (That's partly because we're planning to
  implement them in terms of some new connection tracking code for OVS
  that's still in the process of getting committed, and partly because I
  haven't fully thought them through yet.)
   
Ah, yes, I know it, I’m tracking that effort to benchmark and
shed some numbers over OVS+OF vs OVS+veths+LB+iptables
stateful firewalling/security groups.

I guess namespace-less router benchmarking would make
sense too.

  My initial reaction is that we can implement security groups as
  another action in the ACL table that is similar to allow but in
  addition permits reciprocal inbound traffic.  Does that sound
  sufficient to you?
Yes, having fine grained allows (matching on protocols, ports, and
remote ips) would satisfy the neutron use case.

Also we use connection tracking to allow reciprocal inbound traffic
via ESTABLISHED/RELATED, any equivalent solution would do.

For reference, our SG implementation, currently is able to match on
combinations of:

* direction: ingress/egress
* protocol: icmp/tcp/udp/raw number
* port_range:  min-max   (it’s always dst)
* L2 packet ethertype: IPv4, IPv6, etc...
* remote_ip_prefix: as a CIDR, or
* remote_group_id (to reference all other IPs in a certain group)

All of them assume connection tracking so known connection packets will
go the other way around.
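
For reference, a single rule roughly takes this shape on the neutron side
(the values below are purely illustrative):

    example_sg_rule = {
        'direction': 'ingress',                 # or 'egress'
        'ethertype': 'IPv4',                    # or 'IPv6'
        'protocol': 'tcp',                      # icmp/tcp/udp or a raw number
        'port_range_min': 8000,                 # dst port range
        'port_range_max': 8080,
        'remote_ip_prefix': '203.0.113.0/24',   # a CIDR, or instead:
        # 'remote_group_id': '<uuid of another security group>',
    }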

   
  Is the exponential explosion due to cross-producting, that is, because
  you have, say, n1 source addresses and n2 destination addresses and so
  you need n1*n2 flows to specify all the combinations?  We aim to solve
  that in OVN by giving the CMS direct support for more sophisticated
  matching rules, so that it can say something like:
   
  ip.src in {a, b, c, ...} && ip.dst in {d, e, f, ...}
   && (tcp.src in {80, 443, 8080} || tcp.dst in {80, 443, 8080})

That sounds good and very flexible.
   
  and let OVN implement it in OVS via the conjunctive match feature
  recently added, which is like a set membership match but more
  powerful.  
Hmm, where can I find examples about that feature, sounds interesting.
  
  It might still be nice to support lists of IPs (or
  whatever), since these lists could still recur in a number of
  circumstances, but my guess is that this will help a lot even without
  that.
   
As far as I understood, given the way megaflows resolve rules via hashes
even if we had lots of rules with different ip addresses, that would be very 
fast,
probably as fast or more than our current ipset solution.

The only caveat would be having to update lots of flow rules when a port goes
in or out of a security group, since you have to go and clear/add the rules to 
each
single port on the same security group (as long as they have 1 rule referencing 
the sg).

  Thoughts?
   
 This all sounds really good to me Ben. I look forward to seeing the 
 connection tracking code land  
 and some design details on the security groups aspects of OVN published once 
 that happens!
  
  
  
  
  
  

  
 Thanks,
 Kyle
   
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
  

Re: [openstack-dev] [ovs-dev] [PATCH 8/8] [RFC] [neutron] ovn: Start work on design ocumentation.

2015-02-20 Thread Miguel Ángel Ajo
On Friday, 20 de February de 2015 at 17:06, Ben Pfaff wrote:
 On Fri, Feb 20, 2015 at 12:45:46PM +0100, Miguel Ángel Ajo wrote:
  On Thursday, 19 de February de 2015 at 23:15, Kyle Mestery wrote:  
   On Thu, Feb 19, 2015 at 3:55 PM, Ben Pfaff b...@nicira.com 
   (mailto:b...@nicira.com) wrote:
My initial reaction is that we can implement security groups as
another action in the ACL table that is similar to allow but in
addition permits reciprocal inbound traffic. Does that sound
sufficient to you?
 


   
  Yes, having fine grained allows (matching on protocols, ports, and
  remote ips) would satisfy the neutron use case.
   
  Also we use connection tracking to allow reciprocal inbound traffic
  via ESTABLISHED/RELATED, any equivalent solution would do.
   
  For reference, our SG implementation, currently is able to match on
  combinations of:
   
  * direction: ingress/egress
  * protocol: icmp/tcp/udp/raw number
  * port_range: min-max (it’s always dst)
  * L2 packet ethertype: IPv4, IPv6, etc...
  * remote_ip_prefix: as a CIDR, or
  * remote_group_id (to reference all other IPs in a certain group)
   
  All of them assume connection tracking so known connection packets will
  go the other way around.
   
  
  
 OK. All of those should work OK. (I don't know for sure whether we'll
 have explicit groups; initially, probably not.)
  
  

That makes sense.

  
Is the exponential explosion due to cross-producting, that is, because
you have, say, n1 source addresses and n2 destination addresses and so
you need n1*n2 flows to specify all the combinations? We aim to solve
that in OVN by giving the CMS direct support for more sophisticated
matching rules, so that it can say something like:
 
ip.src in {a, b, c, ...} && ip.dst in {d, e, f, ...}
 && (tcp.src in {80, 443, 8080} || tcp.dst in {80, 443, 8080})
 

   
   
  That sounds good and very flexible.
 
and let OVN implement it in OVS via the conjunctive match feature
recently added, which is like a set membership match but more
powerful.  
 

   
  Hmm, where can I find examples about that feature, sounds interesting.
   
  
  
 If you look at ovs-ofctl(8) in a development version of OVS, such as
 http://benpfaff.org/~blp/dist-docs/ovs-ofctl.8.pdf
 search for conjunction, which explains the implementation.  
  
  

Amazing, yes, it seems like conjunctions will do the work quite optimally
at OpenFlow level.

My hat off… :)
 (This
 isn't the form that Neutron would use with OVN; that is the Boolean
 expression syntax above.)
  
Of course, understood, I was curious about the low level supporting the
high level above.
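
For anyone else following along, a hand-written sketch (addresses, ports and
priorities are made up) of how an expression like the one above maps onto the
conjunction() syntax from ovs-ofctl(8): each dimension of the expression gets
its own flows pointing at conjunction(id, k/n), and a single conj_id flow
carries the final action.

    # every flow below shares the same priority; each string would be
    # installed with something like: ovs-ofctl add-flow br-int "<flow>"
    conjunctive_flows = [
        # dimension 1/2: the set of allowed source addresses
        "priority=100,ip,nw_src=10.0.0.1,actions=conjunction(1,1/2)",
        "priority=100,ip,nw_src=10.0.0.2,actions=conjunction(1,1/2)",
        # dimension 2/2: the set of allowed destination TCP ports
        "priority=100,tcp,tp_dst=80,actions=conjunction(1,2/2)",
        "priority=100,tcp,tp_dst=443,actions=conjunction(1,2/2)",
        # a packet that matched every dimension of conjunction 1 is allowed
        "priority=100,conj_id=1,actions=normal",
    ]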
  
  
It might still be nice to support lists of IPs (or
whatever), since these lists could still recur in a number of
circumstances, but my guess is that this will help a lot even without
that.
 
 

   
  As far as I understood, given the way megaflows resolve rules via hashes
  even if we had lots of rules with different ip addresses, that would be 
  very fast,
  probably as fast or more than our current ipset solution.
   
  The only caveat would be having to update lots of flow rules when a port 
  goes
  in or out of a security group, since you have to go and clear/add the rules 
  to each
  single port on the same security group (as long as they have 1 rule 
  referencing the sg).
   
  
  
 That sounds like another good argument for allowing explicit groups. I
 have a design in mind for that but I doubt it's the first thing to
 implement.
  
  

Of course, 1 step at a time. I will do a 2nd pass on your documents, looking a 
bit
more on the higher level. I’m very happy to see that the low level is very well 
tied
up and capable.

Best regards,
Miguel Ángel.
  

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [ovs-dev] [PATCH 8/8] [RFC] ovn: Start work on design Documentation.

2015-02-19 Thread Miguel Ángel Ajo
Thank you Ben!

Cross-posting [1] to the openstack list / neutron.


[1] http://benpfaff.org/~blp/dist-docs

On Thursday, 19 de February de 2015 at 09:13, Ben Pfaff wrote:

 On Thu, Feb 19, 2015 at 12:12:26AM -0800, Ben Pfaff wrote:
  This commit adds preliminary design documentation for Open Virtual Network,
  or OVN, a new OVS-based project to add support for virtual networking to
  OVS, initially with OpenStack integration.
  
  This initial design has been influenced by many people, including (in
  alphabetical order) Aaron Rosen, Chris Wright, Jeremy Stribling,
  Justin Pettit, Ken Duda, Madhu Venugopal, Martin Casado, Pankaj Thakkar,
  Russell Bryant, and Teemu Koponen. All blunders, however, are due to my
  own hubris.
  
  Signed-off-by: Ben Pfaff b...@nicira.com (mailto:b...@nicira.com)
 
 I've posted the rendered version of the documentation following this
 commit at http://benpfaff.org/~blp/dist-docs. You probably want to look
 at the ovn* manpages, especially ovn-architecture(7), ovn(5), and
 ovn-nb(5).
 ___
 dev mailing list
 d...@openvswitch.org (mailto:d...@openvswitch.org)
 http://openvswitch.org/mailman/listinfo/dev
 
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] per-agent/driver/plugin requirements

2015-02-17 Thread Miguel Ángel Ajo


Miguel Ángel Ajo


On Wednesday, 18 de February de 2015 at 08:14, yamam...@valinux.co.jp wrote:

 hi,
  
  On Wednesday, 18 de February de 2015 at 07:00, yamam...@valinux.co.jp 
  (mailto:yamam...@valinux.co.jp) wrote:
   hi,

   i want to add an extra requirement specific to OVS-agent.
   (namely, I want to add ryu for ovs-ofctl-to-python blueprint. [1]
   but the question is not specific to the blueprint.)
   to avoid messing deployments without OVS-agent, such a requirement
   should be per-agent/driver/plugin/etc. however, there currently
   seems no standard mechanism for such a requirement.



   
   
   
  Awesome, I was thinking of the same a few days ago, we make lots
  and lots of calls to ovs-ofctl, and we will do more if we change to
  security groups/routers in OF, if that proves to be efficient, and we
  get CT.
   
  
  
 CT?

Connection tracking in OVS. At that point we could do NAT/stateful
firewalling, etc  
  
   
  After this change, what would be the differences between ofagent and ovs-agent?  
   
  I guess the OVS agent sets rules in advance, while ofagent works as a normal
  OF controller?
   
  
  
 the basic architecture will be same.
  
 actually it was suggested to merge two agents during spec review.
 i think it's a good idea for longer term. (but unlikely for kilo)
  
  


If that’s the case, I would love to see both evaluated side by side,
and make a community decision on that.  
  
   some ideas:

   a. don't bother to make it per-agent.
   add it to neutron's requirements. (and global-requirement)
   simple, but this would make non-ovs plugin users unhappy.


   
  I would simply go with a. What’s ryu’s internal requirement list? Is
  it big?
   
  
  
 no additional requirements as far as we use only openflow part of ryu.

Then IMHO this is not a bigger deal than any other dependency.  
  
   b. make devstack look at per-agent extra requirements file in neutron 
   tree.
   eg. neutron/plugins/$Q_AGENT/requirements.txt


   
  IMHO that would make distribution work a bit harder because we
  may need to process new requirement files, but my answer could depend
  on the answer to what I asked about option a.  
   
  
  
 probably.
 i guess distributors can speak up.
  
  

I speak up: I prefer a. But I’m looping in Ihar, as he’s doing the majority of
work related to neutron distribution in RH/RDO.
  
  
   c. move OVS agent to a separate repository, just like other
   after-decomposition vendor plugins. and use requirements.txt there.
   for longer term, this might be a way to go. but i don't want to
   block my work until it happens.



   
   
  We’re not ready for that yet, as co-gating has proven to be a bad strategy
  and we need to keep the reference implementation working for tests.  
   
  
  
 i agree that it will not likely be ready in near future.
  
 YAMAMOTO Takashi
  
   d. follow the way how openvswitch is installed by devstack.
   a downside: we can't give a jenkins run for a patch which introduces
   an extra requirement. (like my patch for the mentioned blueprint [2])

   i think b. is the most reasonable choice, at least for short/mid term.

   any comments/thoughts?

   YAMAMOTO Takashi

   [1] https://blueprints.launchpad.net/neutron/+spec/ovs-ofctl-to-python
   [2] https://review.openstack.org/#/c/153946/

   __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe: 
   openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
   (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



   
  
  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] per-agent/driver/plugin requirements

2015-02-17 Thread Miguel Ángel Ajo
On Wednesday, 18 de February de 2015 at 07:00, yamam...@valinux.co.jp wrote:
 hi,
  
 i want to add an extra requirement specific to OVS-agent.
 (namely, I want to add ryu for ovs-ofctl-to-python blueprint. [1]
 but the question is not specific to the blueprint.)
 to avoid messing deployments without OVS-agent, such a requirement
 should be per-agent/driver/plugin/etc. however, there currently
 seems no standard mechanism for such a requirement.
  
  


Awesome, I was thinking of the same a few days ago, we make lots
and lots of calls to ovs-ofctl, and we will do more if we change to
security groups/routers in OF, if that proves to be efficient, and we
get CT.

After this change, what would be the differences between ofagent and ovs-agent?  

I guess the OVS agent sets rules in advance, while ofagent works as a normal
OF controller?
  
  
  
 some ideas:
  
 a. don't bother to make it per-agent.
 add it to neutron's requirements. (and global-requirement)
 simple, but this would make non-ovs plugin users unhappy.
  
I would simply go with a. What’s ryu’s internal requirement list? Is
it big?
  
  
 b. make devstack look at per-agent extra requirements file in neutron tree.
 eg. neutron/plugins/$Q_AGENT/requirements.txt
  
IMHO that would make distribution work a bit harder because we
may need to process new requirement files, but my answer could depend
on the answer to what I asked about option a.  
  
 c. move OVS agent to a separate repository, just like other
 after-decomposition vendor plugins. and use requirements.txt there.
 for longer term, this might be a way to go. but i don't want to
 block my work until it happens.
  
  

We’re not ready for that yet, as co-gating has proven to be a bad strategy
and we need to keep the reference implementation working for tests.  
  
 d. follow the way how openvswitch is installed by devstack.
 a downside: we can't give a jenkins run for a patch which introduces
 an extra requirement. (like my patch for the mentioned blueprint [2])
  
 i think b. is the most reasonable choice, at least for short/mid term.
  
 any comments/thoughts?
  
 YAMAMOTO Takashi
  
 [1] https://blueprints.launchpad.net/neutron/+spec/ovs-ofctl-to-python
 [2] https://review.openstack.org/#/c/153946/
  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][security][rootwrap] Proposal to replace rootwrap/sudo with privsep helper process (for neutron, but others too)

2015-02-13 Thread Miguel Ángel Ajo
We have an ongoing effort in neutron to move to rootwrap-daemon.  

https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bp/rootwrap-daemon-mode,n,z

To speed up multiple system calls, and be able to spawn daemons inside 
namespaces.

I have to read a bit about what the good & bad points of privsep are.   

The advantage of rootwrap-daemon is that we don’t need to change all our 
networking libraries across neutron,
and we kill the sudo/rootwrap spawn for every call, while keeping the rootwrap 
permission granularity.
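
As a rough illustration (the helper command and config path are placeholders),
the daemon mode amounts to keeping one long-lived privileged helper per agent
and talking to it through the oslo.rootwrap client, instead of spawning
sudo+rootwrap for each command:

    from oslo_rootwrap import client

    # one long-lived privileged helper per agent, still constrained by the
    # usual rootwrap filters; the command list below is illustrative
    daemon = client.Client(
        ["sudo", "neutron-rootwrap-daemon", "/etc/neutron/rootwrap.conf"])

    # each call is a cheap round-trip to the daemon, not a new sudo + python
    returncode, out, err = daemon.execute(["ip", "netns", "list"])
    print(returncode, out)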

Miguel Ángel Ajo


On Friday, 13 de February de 2015 at 10:54, Angus Lees wrote:

 On Fri Feb 13 2015 at 5:45:36 PM Eric Windisch e...@windisch.us 
 (mailto:e...@windisch.us) wrote:

   from neutron.agent.privileged.commands import ip_lib as priv_ip
   def foo():
   # Need to create a new veth interface pair - that usually 
   requires root/NET_ADMIN
   priv_ip.CreateLink('veth', 'veth0', peer='veth1')

   Because we now have elevated privileges directly (on the privileged 
   daemon side) without having to shell out through sudo, we can do all 
   sorts of nicer things like just using netlink directly to configure 
   networking.  This avoids the overhead of executing subcommands, the 
   ugliness (and danger) of generating command lines and regex parsing 
   output, and make us less reliant on specific versions of command line 
   tools (since the kernel API should be very stable).
   
  One of the advantages of spawning a new process is being able to use flags 
  to clone(2) and to set capabilities. This basically means to create 
  containers, by some definition. Anything you have in a privileged daemon 
  or privileged process ideally should reduce its privilege set for any 
  operation it performs. That might mean it clones itself and executes 
  Python, or it may execvp an executable, but either way, the new process 
  would have less-than-full-privilege.
   
  For instance, writing a file might require root access, but does not need 
  the ability to load kernel modules. Changing network interfaces does not 
  need access to the filesystem, no more than changes to the filesystem needs 
  access to the network. The capabilities and namespaces mechanisms resolve 
  these security conundrums and simplify principle of least privilege.
  
 Agreed wholeheartedly, and I'd appreciate your thoughts on how I'm using 
 capabilities in this change.  The privsep daemon limits itself to a 
 particular set of capabilities (and drops root). The assumption is that most 
 OpenStack services commonly need the same small set of capabilities to 
 perform their duties (neutron - net_admin+sys_admin for example), so it 
 makes sense to reuse the same privileged process.
  
 If we have a single service that somehow needs to frequently use a broad 
 swathe of capabilities then we might want to break it up further somehow 
 between the different internal aspects (multiple privsep helpers?) - but is 
 there such a case?   There's also no problems with mixing privsep for 
 frequent operations with the existing sudo/rootwrap approach for rare/awkward 
 cases.
  
  - Gus  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] moving openvswitch ports between namespaces considered harmful

2015-02-13 Thread Miguel Ángel Ajo
Sorry, I forgot about   

5)  If we put all our OVS/OF bridge logic in just one bridge (instead of N: 
br-tun, br-int, br-ex, br-xxx),
 the performance should be even higher, since, as far as I understood, flow 
rule lookup could be better
 optimized in the kernel megaflows, without forwarding and re-starting 
evaluation due to patch ports.
 (Please correct me where I’m wrong, I just have a very high-level view 
of this).

Best,
Miguel Ángel Ajo


On Friday, 13 de February de 2015 at 13:42, Miguel Ángel Ajo wrote:

 Hi, Ihar & Jiri, thank you for pointing this out.
  
 I’m working on the following items:
  
 1) Doing Openflow traffic filtering (stateful firewall) based on OVS+CT[1] 
 patch, which may
 eventually merge. Here I want to build a good amount of benchmarks to be 
 able to compare
 the current network iptables+LB solution to just openflow.
  
  Openflow filtering should be fast, as it’s quite smart at using hashes 
 to match OF rules
  in the kernel megaflows (thanks Jiri & T. Graf for explaining this to me)
 
  The only bad part is that we would have to dynamically change more rules 
 based on security
 group changes (now we do that via ip sets without reloading all the rules).
  
   To do this properly, we may want to make the OVS plugin a real OF 
 controller to be able to
 push OF rules to the bridge without the need of calling ovs-ofctl on the 
 command line all the time.
  
 2) Using OVS+OF to do QoS
  
 other interesting stuff to look at:
  
 3) Doing routing in OF too, thanks to the NAT capabilities of having OVS+CT  
  
 4) The namespace problem: what kinds of statistics get broken by moving ports 
 into namespaces now?
 The short-term fix could be using veths, but “namespaceable” OVS ports 
 would be perfect, yet I understand
 the change is a big feature.
  
 If we had 1 & 3, maybe 4 wouldn’t be a problem anymore.
  
 [1] https://github.com/justinpettit/ovs/tree/conntrack  
  
 Miguel Ángel Ajo
  
  
 On Friday, 13 de February de 2015 at 13:14, Ihar Hrachyshka wrote:
  
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1
   
  Hi neutroners,
   
  we** had several conversations recently with our Red Hat fellows who
  work on openvswitch (Jiri Benc and Jiri Pirko) regarding the way
  neutron utilizes their software. Those were beneficial to both sides
  to understand what we do right and wrong. I was asked to share some of
  the points from those discussions with broader community.
   
  ===
   
  One of the issues that came up during discussions is the way neutron
  connects ovs ports to namespaces. The short story is that openvswitch
  is not designed with namespaces in mind, and the fact that moving its
  ports into a different namespace works for neutron is mere
  coincidence, and is actually considered as a bug by openvswitch guys.
   
  It's not just broken in theory from software design standpoint, but
  also in practice. Specifically,
   
  1. ovsdb dump does not work properly for namespaces:
  - - https://bugzilla.redhat.com/show_bug.cgi?id=1160340
   
  This results in wrong statistics and other data collected for these ports;
   
  2. We suspect that the following kernel crash is triggered because of
  our usage of the feature that is actually a bug:
  - - https://bugs.launchpad.net/neutron/+bug/1418097
   
  Quoting Jiri Benc,
   
  The problem is openvswitch does not support its ports to be moved to
  a different name space. The fact that it's possible to do that is a
  bug - such operation should have been denied. Unfortunately, due to a
  missing check, it's not been denied. Such setup does not work
  reliably, though, and leads to various issues from incorrect resource
  accounting to kernel crashes.
   
  We're aware of the bug but we don't have any easy way to fix it. The
  most obvious option, disabling moving of ovs ports to different name
  spaces, would be easy to do but it would break Neutron. The other
  option, making ovs to work across net name spaces, is hard and will
  require addition of different kernel APIs and large changes in ovs
  user space daemons. This constitutes tremendous amount of work.
   
  The tracker bug on openvswitch side is:
  - - https://bugzilla.redhat.com/show_bug.cgi?id=1160340
   
  So in the best case, we may expect openvswitch to properly support the
  feature in long term, but short term it won't work, especially while
  neutron expects other features implemented in openvswitch for it (like
  NAT, or connection tracking, or ipv6 tunnel endpoints, to name a few).
   
  We could try to handle the issue neutron side. We can fix it by
  utilizing veth pairs to get into namespaces, but it may mean worse
  performance, and will definitely require proper benchmarking to see
  whether we can live with performance drop.
   
  ===
   
  There were other suggestions on how we can enhance our way of usage of
  openvswitch. Among those, getting rid of linux bridge used for
  security groups

Re: [openstack-dev] [neutron] moving openvswitch ports between namespaces considered harmful

2015-02-13 Thread Miguel Ángel Ajo
Hi, Ihar & Jiri, thank you for pointing this out.

I’m working on the following items:

1) Doing Openflow traffic filtering (stateful firewall) based on OVS+CT[1] 
patch, which may
eventually merge. Here I want to build a good amount of benchmarks to be 
able to compare
the current network iptables+LB solution to just openflow.

 Openflow filtering should be fast, as it’s quite smart at using hashes to 
match OF rules
 in the kernel megaflows (thanks Jiri & T. Graf for explaining this to me)

 The only bad part is that we would have to dynamically change more rules 
based on security
group changes (now we do that via ip sets without reloading all the rules).

  To do this properly, we may want to make the OVS plugin a “real” OF 
controller, able to
push OF rules to the bridge without the need of calling ovs-ofctl on the 
command line all the time (see the controller sketch after this list).

2) Using OVS+OF to do QoS

other interesting stuff to look at:

3) Doing routing in OF too, thanks to the NAT capabilities of having OVS+CT  

4) The namespace problem: what kinds of statistics get broken by moving ports 
into namespaces now?
The short-term fix could be using veths, but “namespaceable” OVS ports would 
be perfect, yet I understand
the change is a big feature.

If we had 1 & 3, maybe 4 wouldn’t be a problem anymore.

[1] https://github.com/justinpettit/ovs/tree/conntrack  
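
As a taste of the "real OF controller" idea in item 1 above, a minimal sketch
(the port number and priority are made up, and this is not the actual agent
code) of using the ryu framework to push a rule directly instead of shelling
out to ovs-ofctl:

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    BLOCKED_TCP_PORT = 8080  # illustrative value only


    class SgFlowPusher(app_manager.RyuApp):
        """Push one security-group-style drop rule when a switch connects."""
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            datapath = ev.msg.datapath
            ofproto = datapath.ofproto
            parser = datapath.ofproto_parser

            # match TCP traffic to the blocked port; empty actions == drop
            match = parser.OFPMatch(eth_type=0x0800, ip_proto=6,
                                    tcp_dst=BLOCKED_TCP_PORT)
            inst = [parser.OFPInstructionActions(
                ofproto.OFPIT_APPLY_ACTIONS, [])]
            datapath.send_msg(parser.OFPFlowMod(
                datapath=datapath, priority=100,
                match=match, instructions=inst))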

Miguel Ángel Ajo


On Friday, 13 de February de 2015 at 13:14, Ihar Hrachyshka wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
  
 Hi neutroners,
  
 we** had several conversations recently with our Red Hat fellows who
 work on openvswitch (Jiri Benc and Jiri Pirko) regarding the way
 neutron utilizes their software. Those were beneficial to both sides
 to understand what we do right and wrong. I was asked to share some of
 the points from those discussions with broader community.
  
 ===
  
 One of the issues that came up during discussions is the way neutron
 connects ovs ports to namespaces. The short story is that openvswitch
 is not designed with namespaces in mind, and the fact that moving its
 ports into a different namespace works for neutron is mere
 coincidence, and is actually considered as a bug by openvswitch guys.
  
 It's not just broken in theory from software design standpoint, but
 also in practice. Specifically,
  
 1. ovsdb dump does not work properly for namespaces:
 - - https://bugzilla.redhat.com/show_bug.cgi?id=1160340
  
 This results in wrong statistics and other data collected for these ports;
  
 2. We suspect that the following kernel crash is triggered because of
 our usage of the feature that is actually a bug:
 - - https://bugs.launchpad.net/neutron/+bug/1418097
  
 Quoting Jiri Benc,
  
 The problem is openvswitch does not support its ports to be moved to
 a different name space. The fact that it's possible to do that is a
 bug - such operation should have been denied. Unfortunately, due to a
 missing check, it's not been denied. Such setup does not work
 reliably, though, and leads to various issues from incorrect resource
 accounting to kernel crashes.
  
 We're aware of the bug but we don't have any easy way to fix it. The
 most obvious option, disabling moving of ovs ports to different name
 spaces, would be easy to do but it would break Neutron. The other
 option, making ovs to work across net name spaces, is hard and will
 require addition of different kernel APIs and large changes in ovs
 user space daemons. This constitutes tremendous amount of work.
  
 The tracker bug on openvswitch side is:
 - - https://bugzilla.redhat.com/show_bug.cgi?id=1160340
  
 So in the best case, we may expect openvswitch to properly support the
 feature in long term, but short term it won't work, especially while
 neutron expects other features implemented in openvswitch for it (like
 NAT, or connection tracking, or ipv6 tunnel endpoints, to name a few).
  
 We could try to handle the issue neutron side. We can fix it by
 utilizing veth pairs to get into namespaces, but it may mean worse
 performance, and will definitely require proper benchmarking to see
 whether we can live with performance drop.
  
 ===
  
 There were other suggestions on how we can enhance our way of usage of
 openvswitch. Among those, getting rid of linux bridge used for
 security groups, with special attention on getting rid of ebtables
 (sic!) for they are a lot slower than iptables; getting rid of veth
 pair for instance ports.
  
 ===
  
 I really encourage everyone to check the following video from
 devconf.cz (http://devconf.cz) 2015 on all that and more at:
  
 - - https://www.youtube.com/watch?v=BlLD-qh9EBQ
  
 Among other things, you will find presentation of plotnetcfg tool to
 create nice graphs of openstack state.
  
 If you're lazy enough and want to switch directly to the analysis of
 neutron problems, skip to ~18:00.
  
 I also encourage to check our the video around 30:00 on the way out of
 openvswitch for neutron (tc/eBPF

Re: [openstack-dev] [neutron] [lbaas] LBaaS Haproxy performance benchmarking

2015-02-04 Thread Miguel Ángel Ajo
You can try with httperf[1], or ab[2] for http workloads.  

If you use an overlay, make sure your network MTU is correctly configured to 
handle the extra  
size of the overlay (GRE / VXLAN packets), otherwise you will be introducing 
fragmentation
overhead on the tenant networks.


[1] http://www.hpl.hp.com/research/linux/httperf/  
[2] http://httpd.apache.org/docs/2.2/programs/ab.html
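
On the MTU point, a tiny illustration of how much room the encapsulation takes
away from the tenant MTU (the per-protocol overheads below are the commonly
cited values for IPv4 underlays, so verify them for your setup):

    OVERLAY_OVERHEAD = {'gre': 42, 'vxlan': 50}  # encapsulation bytes

    physical_mtu = 1500
    for proto, overhead in sorted(OVERLAY_OVERHEAD.items()):
        print('%s: tenant MTU should be <= %d' % (proto, physical_mtu - overhead))
    # gre: tenant MTU should be <= 1458
    # vxlan: tenant MTU should be <= 1450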

Miguel Ángel Ajo


On Wednesday, 4 de February de 2015 at 01:58, Varun Lodaya wrote:

 Hi,
  
 We were trying to use haproxy as our LBaaS solution on the overlay. Has 
 anybody done some baseline benchmarking with LBaaSv1 haproxy solution?
  
 Also, any recommended tools which we could use to do that?
  
 Thanks,
 Varun  
  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-02-04 Thread Miguel Ángel Ajo


Miguel Ángel Ajo


On Wednesday, 4 de February de 2015 at 10:41, Cory Benfield wrote:

 On Wed, Feb 04, 2015 at 08:59:54, Kevin Benton wrote:
  I proposed an alternative to adjusting the lease time early on the in
  the thread. By specifying the renewal time (DHCP option 58), we can
  have
  the benefits of a long lease time (resiliency to long DHCP server
  outages) while having a frequent renewal interval to check for IP
  changes. I favored this approach because it only required a patch to
  dnsmasq to allow that option to be set and patch to our agent to set
  that option, both of which are pretty straight-forward.
   
  
  
 It's hard to see a downside to this proposal. Even if one of the other ideas 
 goes forward as well, a short DHCP renewal interval feels like a very good 
 idea to me.
  
+1

I understand some dhcp clients could ignore option 58, but I understand
they will still obey the longer lease time, without affecting their behavior.

So only those who really need it would have to take care of using fully compliant clients…

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-01-28 Thread Miguel Ángel Ajo
Miguel Ángel Ajo


On Wednesday, 28 de January de 2015 at 09:50, Kevin Benton wrote:

 Hi,
  
 Approximately a year and a half ago, the default DHCP lease time in Neutron 
 was increased from 120 seconds to 86400 seconds.[1] This was done with the 
 goal of reducing DHCP traffic with very little discussion (based on what I 
 can see in the review and bug report). While it it does indeed reduce DHCP 
 traffic, I don't think any bug reports were filed showing that a 120 second 
 lease time resulted in too much traffic or that a jump all of the way to 
 86400 seconds was required instead of a value in the same order of magnitude.
  
 Why does this matter?  
  
 Neutron ports can be updated with a new IP address from the same subnet or 
 another subnet on the same network. The port update will result in 
 anti-spoofing iptables rule changes that immediately stop the old IP address 
 from working on the host. This means the host is unreachable for 0-12 hours 
 based on the current default lease time without manual intervention[2] 
 (assuming half-lease length DHCP renewal attempts).
  
 Why is this on the mailing list?
  
 In an attempt to make the VMs usable in a much shorter timeframe following a 
 Neutron port address change, I submitted a patch to reduce the default DHCP 
 lease time to 8 minutes.[3] However, this was upsetting to several people,[4] 
 so it was suggested I bring this discussion to the mailing list. The 
 following are the high-level concerns followed by my responses:
 8 minutes is arbitrary
 Yes, but it's no more arbitrary than 1440 minutes. I picked it as an interval 
 because it is still 4 times larger than the last short value, but it still 
 allows VMs to regain connectivity in 5 minutes in the event their IP is 
 changed. If someone has a good suggestion for another interval based on known 
 dnsmasq QPS limits or some other quantitative reason, please chime in here.
  
 other datacenters use long lease times
 This is true, but it's not really a valid comparison. In most regular 
 datacenters, updating a static DHCP lease has no effect on the data plane so 
 it doesn't matter that the client doesn't react for hours/days (even with 
 DHCP snooping enabled). However, in Neutron's case, the security groups are 
 immediately updated so all traffic using the old address is blocked.
  
 dhcp traffic is scary because it's broadcast
 ARP traffic is also broadcast and many clients will expire entries every 5-10 
 minutes and re-ARP. L2population may be used to prevent ARP propagation, so 
 the comparison between DHCP and ARP isn't always relevant here.
  
  
  
  
From what I've seen, at least on Linux, the first DHCP request is broadcast.
All lease renewals are then unicast, unless the original DHCP server can't be
contacted, in which case the DHCP client falls back to broadcast to find
another server to renew its lease.

So only the initial boot of an instance should generate broadcast traffic.

Your proposal seems reasonable to me.

In this context, please see this ongoing work [5], especially the comments here [6],
where we are discussing optimization due to a theoretical 120 second limit for
renews at scale. We made some calculations of CPU usage for the current default,
and I will recalculate those for the newly proposed default of 8 minutes.
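
As a rough back-of-the-envelope of the renew load (illustrative numbers, assuming
clients renew at half the lease time and one port per instance):

    # Renews per second dnsmasq has to answer for 10k instances,
    # assuming each client renews at T1 = lease / 2.
    def renews_per_second(num_instances, lease_seconds):
        return num_instances / (lease_seconds / 2.0)

    for lease in (120, 8 * 60, 86400):
        print('lease=%6ds -> %.2f renews/s'
              % (lease, renews_per_second(10000, lease)))
    # 120s -> ~166.7, 480s -> ~41.7, 86400s -> ~0.23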

TL;DR:
That patch fixes an issue found when you restart dnsmasq and old leases can't
be renewed, which ends up in a storm of requests. To fix it we need to provide
dnsmasq with a script that initializes the leases table. Initially such a script
was provided in Python, but it gets called for: init (once), lease (once per
instance), and renew (every lease renewal time * number of instances).
We should therefore minimize the impact of that script as much as possible, or
contribute a flag to dnsmasq to avoid calling the script for lease renews.
  
  
 Please reply back with your opinions/anecdotes/data related to short DHCP 
 lease times.
  
 Cheers
  
 1. 
 https://github.com/openstack/neutron/commit/d9832282cf656b162c51afdefb830dacab72defe
 2. Manual intervention could be an instance reboot, a dhcp client invocation 
 via the console, or a delayed invocation right before the update. (all 
 significantly more difficult to script than a simple update of a port's IP 
 via the API).
 3. https://review.openstack.org/#/c/150595/
 4. http://i.imgur.com/xtvatkP.jpg
  
  
  
  

5. https://review.openstack.org/#/c/108272/
6. https://review.openstack.org/#/c/108272/8/neutron/agent/linux/dhcp.py
  
 --  
 Kevin Benton  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
 http://lists.openstack.org/cgi-bin

Re: [openstack-dev] [Neutron] Project Idea: IDS integration.

2015-01-19 Thread Miguel Ángel Ajo
Hi Mario,

   Salvatore and Kevin perfectly expressed what I think.

   I'd follow their advice: look at how the advanced services [1] [2] integrate
with neutron, and build a PoC. If the PoC looks good, it could be a good starting
point to build a community around and go on with the development.

[1] https://github.com/openstack/neutron-lbaas
[2] https://github.com/openstack/neutron-fwaas


Miguel Ángel Ajo


On Sunday, 18 January 2015 at 13:42, Salvatore Orlando wrote:

 Hello Mario,
  
 IDS surely is an interesting topic for OpenStack integration. I think there 
 might be users out there which could be interested in having this capability 
 in OpenStack networks.
 As Kevin said, we are moving towards a model where it becomes easier for 
 developers to add such capabilities in the form of service plugins - you 
 should be able to develop everything you need in a separate repository and 
 still integrate it with Neutron.
  
 According to what you wrote you have just a bit more than 100 hours to spend 
 on this project. What can be achieved in this timeframe really depends on 
 one's skills, but I believe it could be enough to provide some sort of 
 Proof-of-Concept. However, this time won't be enough at all if you also aim 
 to seek feedback on your proposal, build a consensus and a developer 
 community around it. Unsurprisingly these aspects, albeit not technically 
 challenging, take an awful lot more time than coding!
  
 Therefore the only advice I have here is that you should focus on achieving 
 your real goal, which is graduate with the highest possible marks! Then, if 
 from your thesis there will be something to gain for the OpenStack community, 
 that would be awesome. With a PoC implementation and perhaps some time on 
  your hands, you will then be able to work with the community to transform your 
  master's project into an OpenStack project and avoid it becoming a bitrotting 
  shelved piece of code.
  
 Salvatore
  
 On 18 January 2015 at 10:45, Kevin Benton blak...@gmail.com 
 (mailto:blak...@gmail.com) wrote:
  Hi Mario,
   
  There is currently a large backlog of network-related features that many 
  people want to develop for Neutron. The model of adding them all to the 
  main neutron codebase has failed to keep up. Due to this, all of the 
  advanced services (LBaaS, FWaaS, etc) are being separated into their own 
  repositories. The main Neutron repo will only be for establishing L2/L3 
  connectivity and providing a framework for other networking services to 
  build on. You can read more about it in the advanced services split 
  blueprint.[1]
   
  Based on what you've described, it sounds like you would be developing an 
  IDS service plugin with a driver/plugin framework for different vendors. 
  For an initial proof of concept, you could do it in github to get started 
  quickly or you can also request a new stackforge repo for it. The benefit 
  of stackforge is that you get the OpenStack testing infrastructure and 
  integration with its gerrit system so other OpenStack developers don't have 
  to switch code review workflows to contribute.
   
  To gauge interest, I would try emailing the OpenStack users list. It 
  doesn't matter if developers are interested if nobody ever wants to 
  actually try it out.  
   
  1. https://blueprints.launchpad.net/neutron/+spec/services-split
   
  Cheers,
  Kevin Benton
   
   
  On Fri, Jan 16, 2015 at 2:32 PM, Mario Tejedor González 
  m.tejedor-gonza...@mycit.ie (mailto:m.tejedor-gonza...@mycit.ie) wrote:
   Hello, Neutron developers.

   My name is Mario and I am a Masters student in Networking and Security.

   I am considering the possibility of integrating IDS technology to Neutron 
   as part of my Masters project.
   As there are many flavors of open ID[P]S out there and those might follow 
   different philosophies, my approach would be developing a Neutron plugin 
   that might cover IDS integration as a service and also a driver (or more, 
   depending on time constraints) to cover the specifics of an IDS. 
   Following the nature of Neutron and OpenStack projects these drivers 
   would be developed for Free and Open Software IDSs and the plugin would 
   be as vendor-agnostic as possible. In order to achieve that the plugin 
   would have to deal with the need for logging and alerting.

   The time window I have for the development of this project goes from 
   February to the end of June and I would be able to allocate around 5h a 
   week to it.

   Now, I would like to know your opinion on this idea, given that you know 
   the project inside out and you are the ones making it happen day after 
   day.
    Do you think there is usefulness in bringing that functionality inside 
    the Neutron project (as a plugin)? I'd prefer to do something that 
    contributes to it rather than a one-shot piece of software that will be 
    stored on a shelf.

   I'd like to know if you think that what I am proposing

Re: [openstack-dev] [Neutron] grenade failures

2015-01-14 Thread Miguel Ángel Ajo
Hi Sukhdev, thanks,
Can you post links to the specific patches?


Miguel Ángel Ajo


On Wednesday, 14 January 2015 at 09:01, Sukhdev Kapur wrote:

 Hi All,  
  
 I noticed that several neutron patches are failing 
 check-grenade-dsvm-neutron. I pinged it on IRC, did not get any response, 
 so I thought I'd post it here.  
  
  
 -Sukhdev
  
  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3][Devstack] Bug during delete floating IPs?

2015-01-13 Thread Miguel Ángel Ajo
That’s nice Sunil, can you send the patch for review on gerrit?

Maybe it's also interesting to avoid sending a notify_routers_updated when
there are no router_ids.
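
For illustration, the "skip the notification entirely" variant could look roughly
like this (sketch, untested):

     def notify_routers_updated(self, context, router_ids):
    +    # Nothing to notify when the deleted port had no associated routers.
    +    if not router_ids:
    +        return
         super(L3_NAT_db_mixin, self).notify_routers_updated(
             context, list(router_ids), 'disassociate_floatingips', {})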


Miguel Ángel Ajo


On Sunday, 11 January 2015 at 08:42, Sunil Kumar wrote:

 This trivial patch fixes the tracebacks:
  
 $ cat disassociate_floating_ips.patch  
 --- neutron/db/l3_db.py.orig    2015-01-10 22:20:30.101506298 -0800
 +++ neutron/db/l3_db.py 2015-01-10 22:24:18.111479818 -0800
 @@ -1257,4 +1257,4 @@
 
      def notify_routers_updated(self, context, router_ids):
          super(L3_NAT_db_mixin, self).notify_routers_updated(
 -            context, list(router_ids), 'disassociate_floatingips', {})
 +            context, list(router_ids) if router_ids else None, 'disassociate_floatingips', {})
  
  
 -Sunil
 From: Sunil Kumar [su...@embrane.com (mailto:su...@embrane.com)]
 Sent: Saturday, January 10, 2015 7:07 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][L3][Devstack] Bug during delete floating 
 IPs?
  
 Not sure if it's something seen by others. I hit this when I run 
 tempest.scenario.test_network_basic_ops.TestNetworkBasicOps against master:
  
 2015-01-10 17:45:13.227 5350 DEBUG neutron.plugins.ml2.plugin 
 [req-2ab4b380-cf3a-4663-90c3-a05ef5f4da0f None] Deleting port 
 e5deb014-0063-4d55-8ee3-5ba3524fee14 delete_port 
 /opt/stack/new/neutron/neutron/plugins/ml2/plugin.py:995
 2015-01-10 17:45:13.228 5350 DEBUG neutron.openstack.common.lockutils 
 [req-2ab4b380-cf3a-4663-90c3-a05ef5f4da0f ] Created new semaphore db-access 
 internal_lock /opt/stack/new/neutron/neutron/openstack/common/lockutils.py:206
 2015-01-10 17:45:13.228 5350 DEBUG neutron.openstack.common.lockutils 
 [req-2ab4b380-cf3a-4663-90c3-a05ef5f4da0f ] Acquired semaphore db-access 
 lock /opt/stack/new/neutron/neutron/openstack/common/lockutils.py:229
 2015-01-10 17:45:13.252 5350 DEBUG neutron.plugins.ml2.plugin 
 [req-2ab4b380-cf3a-4663-90c3-a05ef5f4da0f None] Calling delete_port for 
 e5deb014-0063-4d55-8ee3-5ba3524fee14 owned by network:floatingip delete_port 
 /opt/stack/new/neutron/neutron/plugins/ml2/plugin.py:1043
 2015-01-10 17:45:13.254 5350 DEBUG neutron.openstack.common.lockutils 
 [req-2ab4b380-cf3a-4663-90c3-a05ef5f4da0f ] Releasing semaphore db-access 
 lock /opt/stack/new/neutron/neutron/openstack/common/lockutils.py:238
 2015-01-10 17:45:13.282 5350 ERROR neutron.api.v2.resource 
 [req-2ab4b380-cf3a-4663-90c3-a05ef5f4da0f None] delete failed
 2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource Traceback (most 
 recent call last):
 2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File 
 /opt/stack/new/neutron/neutron/api/v2/resource.py, line 83, in resource
 2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource result = 
 method(request=request, **args)
 2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File 
 /opt/stack/new/neutron/neutron/api/v2/base.py, line 479, in delete
 2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource 
 obj_deleter(request.context, id, **kwargs)
 2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File 
 /opt/stack/new/neutron/neutron/db/l3_dvr_db.py, line 198, in 
 delete_floatingip
 2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource 
 self).delete_floatingip(context, id)
 2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File 
 /opt/stack/new/neutron/neutron/db/l3_db.py, line 1237, in delete_floatingip
 2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource router_id = 
 self._delete_floatingip(context, id)
 2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File 
 /opt/stack/new/neutron/neutron/db/l3_db.py, line 902, in _delete_floatingip
 2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource 
 l3_port_check=False)
 2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File 
 /opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 1050, in 
 delete_port
 2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource 
 l3plugin.notify_routers_updated(context, router_ids)
 2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File 
 /opt/stack/new/neutron/neutron/db/l3_db.py, line 1260, in 
 notify_routers_updated
 2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource context, 
 list(router_ids), 'disassociate_floatingips', {})
 2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource TypeError: 
 'NoneType' object is not iterable
  
 Looks like the code is assuming that router_ids can never be None, but it
 clearly is None here. Is that a bug?
  
 Looking elsewhere in the l3_db.py, 
 L3RpcNotifierMixin.notify_routers_updated() does make a check for router_ids 
 (which means that function does expect it to be empty sometimes), but 
 the list() is killing it before it reaches that.
  
 This backtrace repeats itself many many times in the neutron logs.
  
 Thanks for your help.
 -Sunil

Re: [openstack-dev] [stable][neutron] minimal dnsmasq version

2015-01-08 Thread Miguel Ángel Ajo
Now that I re-read the patch, shouldn't the version check be converted into a
sanity check?
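
Something like this is what I have in mind (sketch only, not the actual
sanity-check plumbing):

    # Parse `dnsmasq --version` and warn when it is older than the minimum
    # we rely on for IPv6 MAC address matching.
    import re
    import subprocess

    MIN_DNSMASQ = (2, 67)

    def dnsmasq_version_supported():
        out = subprocess.check_output(['dnsmasq', '--version'])
        m = re.search(r'version (\d+)\.(\d+)', out.decode('utf-8', 'replace'))
        return bool(m) and (int(m.group(1)), int(m.group(2))) >= MIN_DNSMASQ

    if not dnsmasq_version_supported():
        print('WARNING: dnsmasq < 2.67, IPv6 stateful DHCP will not work properly')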

Miguel Ángel Ajo


On Thursday, 8 January 2015 at 12:51, Kevin Benton wrote:

 Thanks for the insight.
  
 On Thu, Jan 8, 2015 at 3:41 AM, Miguel Ángel Ajo majop...@redhat.com 
 (mailto:majop...@redhat.com) wrote:
   Correct, that’s the problem: what Kevin said would be the ideal case, but
   distros have proven to fail to satisfy this kind of requirement before.
   
  So at least a warning to the user may be good to have IMHO.  
   
  Miguel Ángel Ajo
   
   
   On Thursday, 8 January 2015 at 12:36, Ihar Hrachyshka wrote:
   
   The problem is probably due to the fact that some operators may run 
   neutron from git and manage their dependencies in some other way; or 
   distributions may suck sometimes, so packagers may miss the release note 
   and fail to upgrade dnsmasq; or distributions may have their specific 
   concerns on upgrading dnsmasq version, and would just backport the needed 
   fix to their 'claimed to 2.66' dnsmasq (common story in Red Hat world).

   On 01/08/2015 05:25 AM, Kevin Benton wrote:
If the new requirement is expressed in the neutron packages for the 
distro, wouldn't it be transparent to the operators?  
 
On Wed, Jan 7, 2015 at 6:57 AM, Kyle Mestery mest...@mestery.com 
(mailto:mest...@mestery.com) wrote:
 On Wed, Jan 7, 2015 at 8:21 AM, Ihar Hrachyshka ihrac...@redhat.com 
 (mailto:ihrac...@redhat.com) wrote:
  Hi all,
   
   I've found out that dnsmasq < 2.67 does not work properly for IPv6 
  clients when it comes to MAC address matching (it fails to match, 
  and so clients get 'no addresses available' response). I've 
  requested version bump to 2.67 in: 
  https://review.openstack.org/145482
   
 Good catch, thanks for finding this Ihar!
   
  Now, since we've already released Juno with IPv6 DHCP stateful 
  support, and DHCP agent still has minimal version set to 2.63 
  there, we have a dilemma on how to manage it from stable 
  perspective.
   
  Obviously, we should communicate the revealed version dependency to 
  deployers via next release notes.
   
  Should we also backport the minimal version bump to Juno? This will 
  result in DHCP agent failing to start in case packagers don't bump 
  dnsmasq version with the next Juno release. If we don't bump the 
  version, we may leave deployers uninformed about the fact that 
  their IPv6 stateful instances won't get any IPv6 address assigned.
   
  An alternative is to add a special check just for Juno that would 
  WARN administrators instead of failing to start DHCP agent.
   
  Comments?
   
 Personally, I think the WARN may be the best route to go. Backporting 
 a change which bumps the required dnsmasq version seems like it may 
 be harder for operators to handle.
  
 Kyle
   
  /Ihar
   
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org 
  (mailto:OpenStack-dev@lists.openstack.org)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org 
 (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 
 
--  
Kevin Benton  
 
___ OpenStack-dev mailing 
list OpenStack-dev@lists.openstack.org 
(mailto:OpenStack-dev@lists.openstack.org) 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org 
   (mailto:OpenStack-dev@lists.openstack.org)
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



   
   
   
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
  
  
  
 --  
 Kevin Benton  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] minimal dnsmasq version

2015-01-08 Thread Miguel Ángel Ajo
Correct, that’s the problem: what Kevin said would be the ideal case, but
distros have proven to fail to satisfy this kind of requirement before.

So at least a warning to the user may be good to have IMHO.  

Miguel Ángel Ajo


On Thursday, 8 January 2015 at 12:36, Ihar Hrachyshka wrote:

 The problem is probably due to the fact that some operators may run neutron 
 from git and manage their dependencies in some other way; or distributions 
 may suck sometimes, so packagers may miss the release note and fail to 
 upgrade dnsmasq; or distributions may have their specific concerns on 
 upgrading dnsmasq version, and would just backport the needed fix to their 
 'claimed to 2.66' dnsmasq (common story in Red Hat world).
  
 On 01/08/2015 05:25 AM, Kevin Benton wrote:
  If the new requirement is expressed in the neutron packages for the distro, 
  wouldn't it be transparent to the operators?  
   
  On Wed, Jan 7, 2015 at 6:57 AM, Kyle Mestery mest...@mestery.com 
  (mailto:mest...@mestery.com) wrote:
   On Wed, Jan 7, 2015 at 8:21 AM, Ihar Hrachyshka ihrac...@redhat.com 
   (mailto:ihrac...@redhat.com) wrote:
Hi all,
 
 I've found out that dnsmasq < 2.67 does not work properly for IPv6 
clients when it comes to MAC address matching (it fails to match, and 
so clients get 'no addresses available' response). I've requested 
version bump to 2.67 in: https://review.openstack.org/145482
 
   Good catch, thanks for finding this Ihar!
 
Now, since we've already released Juno with IPv6 DHCP stateful support, 
and DHCP agent still has minimal version set to 2.63 there, we have a 
dilemma on how to manage it from stable perspective.
 
Obviously, we should communicate the revealed version dependency to 
deployers via next release notes.
 
Should we also backport the minimal version bump to Juno? This will 
result in DHCP agent failing to start in case packagers don't bump 
dnsmasq version with the next Juno release. If we don't bump the 
version, we may leave deployers uninformed about the fact that their 
IPv6 stateful instances won't get any IPv6 address assigned.
 
An alternative is to add a special check just for Juno that would WARN 
administrators instead of failing to start DHCP agent.
 
Comments?
 
   Personally, I think the WARN may be the best route to go. Backporting a 
   change which bumps the required dnsmasq version seems like it may be 
   harder for operators to handle.

   Kyle
 
/Ihar
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org 
(mailto:OpenStack-dev@lists.openstack.org)
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org 
   (mailto:OpenStack-dev@lists.openstack.org)
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

   
   
   
  --  
  Kevin Benton  
   
  ___ OpenStack-dev mailing list 
  OpenStack-dev@lists.openstack.org 
  (mailto:OpenStack-dev@lists.openstack.org) 
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] client noauth deprecation

2015-01-07 Thread Miguel Ángel Ajo
Seems like a good reason to keep it; it allows us to test
internal integration in isolation from Keystone.


Miguel Ángel Ajo


On Wednesday, 7 January 2015 at 10:05, Assaf Muller wrote:

  
  
 - Original Message -
  The option to disable keystone authentication in the neutron client was
  marked for deprecation in August as part of a Keystone support upgrade.[1]
   
  What was the reason for this? As far as I can tell, Neutron works fine in 
  the
  'noauth' mode and there isn't a lot of code that tightly couples neutron to
  Keystone that I can think of.
   
  
  
 It was actually broken until John fixed it in:
 https://review.openstack.org/#/c/125022/
  
 We plan on using it in the Neutron in-tree full-stack testing. I'd appreciate
 if the functionality was not removed or otherwise broken :)
  
   
  1.
  https://github.com/openstack/python-neutronclient/commit/2203b013fb66808ef280eff0285318ce21d9bc67#diff-ba2e4fad85e66d9aabb6193f222fcc4cR438
   
  --
  Kevin Benton
   
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
  
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Stop agent scheduling without stopping services

2015-01-07 Thread Miguel Ángel Ajo
Totally correct, that’s what I meant by “will remain active”
but “unmanaged”.

Yes, it would be good to have something to tell the schedulers to ban a host.  
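
A sketch of the kind of filter I mean (purely illustrative, the
'scheduling_disabled' attribute is hypothetical):

    # Let schedulers skip agents an operator has banned for *new* resources,
    # while the agent keeps serving what it already hosts.
    def schedulable_agents(agents):
        return [agent for agent in agents
                if agent['alive'] and not agent.get('scheduling_disabled')]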

Miguel Ángel Ajo


On Thursday, 8 January 2015 at 00:52, Kevin Benton wrote:

 The problem is that if you just stop the service, it not only removes it from 
 scheduling, but it stops it from receiving updates to floating IP changes, 
 interface changes, etc. I think it would be nice to have a way to explicitly 
 stop it from being scheduled new routers, but still act as a functioning L3 
 agent otherwise.
  
 On Wed, Jan 7, 2015 at 3:30 PM, Miguel Ángel Ajo majop...@redhat.com 
 (mailto:majop...@redhat.com) wrote:
  You can stop the neutron-dhcp-agent or neutron-l3-agent,  
   the agents should go non-active after the reporting timeout.
   
  The actual network services (routers, dhcp, etc) will stay
   active on the node, but unmanaged. In some cases,
  if you have automatic rescheduling of the resources
  configured, those will be spawned on other hosts.
   
  Depending on your use case this will be enough or not.
  It’s intended for upgrades and maintenance. But not
  for controlling resources in a node.
   
   
  Miguel Ángel Ajo
   
   
   On Thursday, 8 January 2015 at 00:20, Itsuro ODA wrote:
   
   Carl,

   Thank you for your comment.

   It seems there is no clear opinion about whether bug report or
    blueprint is better.  
    So I submitted a bug report for the moment so that the requirement
   is not forgotten.
   https://bugs.launchpad.net/neutron/+bug/1408488

   Thanks.
   Itsuro Oda

   On Tue, 6 Jan 2015 09:05:19 -0700
   Carl Baldwin c...@ecbaldwin.net (mailto:c...@ecbaldwin.net) wrote:

Itsuro,
 
 It would be desirable to be able to hide an agent from scheduling
but no one has stepped up to make this happen. Come to think of it,
I'm not sure that a bug or blueprint has been filed yet to address it
though it is something that I've wanted for a little while now.
 
Carl
 
On Mon, Jan 5, 2015 at 4:13 PM, Itsuro ODA o...@valinux.co.jp 
(mailto:o...@valinux.co.jp) wrote:
 Neutron experts,
  
 I want to stop scheduling to a specific {dhcp|l3}_agent without
 stopping router/dhcp services on it.
  I expected setting admin_state_up of the agent to False to meet
 this demand. But this operation stops all services on the agent
 in actuality. (Is this behavior intended ? It seems there is no
 document for agent API.)
  
 I think admin_state_up of agents should affect only scheduling.
 If it is accepted I will submit a bug report and make a fix.
  
 Or should I propose a blueprint for adding function to stop
 agent's scheduling without stopping services on it ?
  
 I'd like to hear neutron experts' suggestions.
  
 Thanks.
 Itsuro Oda
 --
 Itsuro ODA o...@valinux.co.jp (mailto:o...@valinux.co.jp)
  
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org 
 (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org 
(mailto:OpenStack-dev@lists.openstack.org)
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


   --  
   Itsuro ODA o...@valinux.co.jp (mailto:o...@valinux.co.jp)


   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org 
   (mailto:OpenStack-dev@lists.openstack.org)
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



   
   
   
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
  
  
  
 --  
 Kevin Benton  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Stop agent scheduling without stopping services

2015-01-07 Thread Miguel Ángel Ajo
You can stop the neutron-dhcp-agent or neutron-l3-agent,  
the agents should go non-active after the reporting timeout.

The actual network services (routers, dhcp, etc) will stay
active on the node, but unmanaged. In some cases,
if you have automatic rescheduling of the resources
configured, those will be spawned on other hosts.

Depending on your use case this will be enough or not.
It’s intended for upgrades and maintenance. But not
for controlling resources in a node.


Miguel Ángel Ajo


On Thursday, 8 January 2015 at 00:20, Itsuro ODA wrote:

 Carl,
  
 Thank you for your comment.
  
 It seems there is no clear opinion about whether bug report or
  blueprint is better.  
  So I submitted a bug report for the moment so that the requirement
 is not forgotten.
 https://bugs.launchpad.net/neutron/+bug/1408488
  
 Thanks.
 Itsuro Oda
  
 On Tue, 6 Jan 2015 09:05:19 -0700
 Carl Baldwin c...@ecbaldwin.net (mailto:c...@ecbaldwin.net) wrote:
  
  Itsuro,
   
   It would be desirable to be able to hide an agent from scheduling
  but no one has stepped up to make this happen. Come to think of it,
  I'm not sure that a bug or blueprint has been filed yet to address it
  though it is something that I've wanted for a little while now.
   
  Carl
   
  On Mon, Jan 5, 2015 at 4:13 PM, Itsuro ODA o...@valinux.co.jp 
  (mailto:o...@valinux.co.jp) wrote:
   Neutron experts,

   I want to stop scheduling to a specific {dhcp|l3}_agent without
   stopping router/dhcp services on it.
    I expected setting admin_state_up of the agent to False to meet
   this demand. But this operation stops all services on the agent
   in actuality. (Is this behavior intended ? It seems there is no
   document for agent API.)

   I think admin_state_up of agents should affect only scheduling.
   If it is accepted I will submit a bug report and make a fix.

   Or should I propose a blueprint for adding function to stop
   agent's scheduling without stopping services on it ?

   I'd like to hear neutron experts' suggestions.

   Thanks.
   Itsuro Oda
   --
   Itsuro ODA o...@valinux.co.jp (mailto:o...@valinux.co.jp)


   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org 
   (mailto:OpenStack-dev@lists.openstack.org)
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

   
   
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
  
  
 --  
 Itsuro ODA o...@valinux.co.jp (mailto:o...@valinux.co.jp)
  
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Canceling the next two meetings

2014-12-22 Thread Miguel Ángel Ajo
Happy Holidays! Thank you, Kyle.  

Miguel Ángel Ajo


On Monday, 22 December 2014 at 21:12, Kyle Mestery wrote:

 Hi folks, given I expect low attendance today and next week, lets just cancel 
 the next two Neutron meetings. We'll reconvene in the new year on Monday, 
 January 5, 2015 at 2100 UTC.
  
 Happy holidays to all!
  
 Kyle
  
 [1] https://wiki.openstack.org/wiki/Network/Meetings
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][fwaas] neutron/agent/firewall.py

2014-12-20 Thread Miguel Ángel Ajo
Correct, this is for the security groups implementation

Miguel Ángel Ajo


On Friday, 19 December 2014 at 23:50, Sridar Kandaswamy (skandasw) wrote:

 +1 Mathieu. Paul, this is not related to FWaaS.
  
 Thanks
  
 Sridar
  
  On 12/19/14, 2:23 PM, Mathieu Gagné mga...@iweb.com 
 wrote:
  
  On 2014-12-19 5:16 PM, Paul Michali (pcm) wrote:

   This has a FirewallDriver and NoopFirewallDriver. Should this be moved
   into the neutron_fwaas repo?

   
   
  AFAIK, FirewallDriver is used to implement SecurityGroup:
   
  See:
  -  
  https://github.com/openstack/neutron/blob/master/neutron/agent/firewall.py
  #L26-L29
  -  
  https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptab
  les_firewall.py#L45
  -  
  https://github.com/openstack/neutron/blob/master/neutron/plugins/hyperv/ag
  ent/security_groups_driver.py#L25
   
  This class looks to not be used by neutron-fwaas
   
  --  
  Mathieu
   
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
  
  
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] mid-cycle hot reviews

2014-12-09 Thread Miguel Ángel Ajo

Hi all!

  It would be great if you could use this thread to post hot reviews on stuff
that is being worked on during the mid-cycle, so that others from different
timezones can participate.

  I know posting reviews to the list is not permitted, but I think an exception
in this case would be beneficial.

  Best regards,
Miguel Ángel Ajo

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Deprecating old security groups code / RPC.

2014-12-04 Thread Miguel Ángel Ajo


During Juno, we introduced the enhanced security groups RPC
(security_group_info_for_devices) to replace the old
(security_group_rules_for_devices),
and the ipset functionality to offload iptables chains a bit.


Here I propose to:

1) Remove the old security_group_rules_for_devices, which was left to ease the operators' upgrade
path from I to J (allowing running old openvswitch agents as we upgrade).

Doing this we can clean up the current iptables firewall driver a bit from 
unused code paths.


I suppose this would require a major RPC version bump.

2) Remove the option to disable ipset (now it’s enabled by default and seems
to be working without problems), and make it the standard way to handle “IP”
groups from the iptables perspective.
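
As a reminder of what (2) buys us: with ipset, a whole remote security group
collapses into a single iptables match instead of one rule per member IP,
roughly like this (set names and rule fragments are illustrative):

    # One rule matching an ipset vs. N per-IP rules (legacy path).
    def rules_for_remote_group(set_name, member_ips, use_ipset=True):
        if use_ipset:
            # members are maintained in the kernel set out of band
            return ['-m set --match-set %s src -j RETURN' % set_name]
        return ['-s %s -j RETURN' % ip for ip in member_ips]

    print(rules_for_remote_group('NETIPv4sg1',
                                 ['10.0.0.%d' % i for i in (1, 2, 3)],
                                 use_ipset=False))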


Thoughts?,

Best regards,
Miguel Ángel Ajo

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Deprecating old security groups code / RPC.

2014-12-04 Thread Miguel Ángel Ajo


On Thursday, 4 December 2014 at 15:19, Ihar Hrachyshka wrote:  
  
  On Thursday, 4 December 2014 at 15:06, Miguel Ángel Ajo
  wrote:
   


   During Juno, we introduced the enhanced security groups rpc  
   (security_groups_info_for_devices) instead of  
   (security_group_rules_for_devices), and the ipset functionality
   to offload iptable chains a bit.


   Here I propose to:

   1) Remove the old security_group_rules_for_devices, which was left
   to ease operators upgrade path from I to J (allowing running old
   openvswitch agents as we upgrade)

   Doing this we can cleanup the current iptables firewall driver a
   bit from unused code paths.

   
   
  
  
 +1.
  

   I suppose this would require a major RPC version bump.

   2) Remove the option to disable ipset (now it’s enabled by
   default and seems to be working without problems), and make it an
   standard way to handle “IP” groups from the iptables
   perspective.

   
  
  
 Is ipset support present in all supported distributions?
  

It is from the Red Hat perspective; I'm not sure about Ubuntu and the others. I think
Juno was targeted at Ubuntu 14.04 only (which does have ipset kernel
support and its userspace tool).

Ipset has been in the kernel since 2.4.x, but RHEL6/CentOS6 didn't ship
the tools nor enable it in the kernel (AFAIK).  
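
If we keep any knob at all, a cheap runtime check for the tool could gate the
ipset path automatically (sketch):

    # Fall back to the per-IP rules when the ipset binary is not installed.
    from distutils import spawn

    ipset_available = spawn.find_executable('ipset') is not None
    print('ipset available: %s' % ipset_available)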

  
  


   Thoughts?,

   Best regards, Miguel Ángel Ajo

   ___ OpenStack-dev
   mailing list OpenStack-dev@lists.openstack.org 
   (mailto:OpenStack-dev@lists.openstack.org)  
   mailto:OpenStack-dev@lists.openstack.org  
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

   
   
   
   
  ___ OpenStack-dev
  mailing list OpenStack-dev@lists.openstack.org 
  (mailto:OpenStack-dev@lists.openstack.org)  
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
  
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Changes to the core team

2014-12-03 Thread Miguel Ángel Ajo
Congratulations to Henry and Kevin, very well deserved! Keep up the good work! :)


Miguel Ángel Ajo


On Wednesday, 3 December 2014 at 09:44, Oleg Bondarev wrote:

 +1! Congrats, Henry and Kevin!
  
 On Tue, Dec 2, 2014 at 6:59 PM, Kyle Mestery mest...@mestery.com 
 (mailto:mest...@mestery.com) wrote:
  Now that we're in the thick of working hard on Kilo deliverables, I'd
  like to make some changes to the neutron core team. Reviews are the
  most important part of being a core reviewer, so we need to ensure
  cores are doing reviews. The stats for the 180 day period [1] indicate
  some changes are needed for cores who are no longer reviewing.
   
  First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from
  neutron-core. Bob and Nachi have been core members for a while now.
  They have contributed to Neutron over the years in reviews, code and
  leading sub-teams. I'd like to thank them for all that they have done
  over the years. I'd also like to propose that should they start
  reviewing more going forward the core team looks to fast track them
  back into neutron-core. But for now, their review stats place them
  below the rest of the team for 180 days.
   
  As part of the changes, I'd also like to propose two new members to
  neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have
  been very active in reviews, meetings, and code for a while now. Henry
  lead the DB team which fixed Neutron DB migrations during Juno. Kevin
  has been actively working across all of Neutron, he's done some great
  work on security fixes and stability fixes in particular. Their
  comments in reviews are insightful and they have helped to onboard new
  reviewers and taken the time to work with people on their patches.
   
  Existing neutron cores, please vote +1/-1 for the addition of Henry
  and Kevin to the core team.
   
  Thanks!
  Kyle
   
  [1] http://stackalytics.com/report/contribution/neutron-group/180
   
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] force_gateway_on_subnet, please don't deprecate

2014-12-01 Thread Miguel Ángel Ajo

My proposal here is: _let’s not deprecate this setting_, as it’s a valid
gateway configuration use case, and let’s support it in the reference
implementation.

TL;DR

I’ve been looking at this yesterday, during a test deployment
on a site where they provide external connectivity with the
gateway outside the subnet.

And I needed to switch it off to actually get any external
connectivity.

https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L121

This is handled by providing an on-link route to the gateway first,
and then adding the default gateway.  

It looks very interesting to me (not only because it’s the only way to make
that specific site work [2][3][4]): you can dynamically wire RIPE blocks to
your server without needing to use a specific IP for external routing or
broadcast purposes, and instead use the full block in OpenStack.


I have a tiny patch to support this in the neutron l3-agent [1]. I still need to
add the logic to check “gateway outside subnet” before adding the “onlink” route
(see the sketch after the links below).


[1]

diff --git a/neutron/agent/linux/interface.py b/neutron/agent/linux/interface.py
index 538527b..5a9f186 100644
--- a/neutron/agent/linux/interface.py
+++ b/neutron/agent/linux/interface.py
@@ -116,15 +116,16 @@ class LinuxInterfaceDriver(object):
                                            namespace=namespace,
                                            ip=ip_cidr)
 
-        if gateway:
-            device.route.add_gateway(gateway)
-
         new_onlink_routes = set(s['cidr'] for s in extra_subnets)
+        if gateway:
+            new_onlink_routes.update([gateway])
         existing_onlink_routes = set(device.route.list_onlink_routes())
         for route in new_onlink_routes - existing_onlink_routes:
             device.route.add_onlink_route(route)
         for route in existing_onlink_routes - new_onlink_routes:
             device.route.delete_onlink_route(route)
+        if gateway:
+            device.route.add_gateway(gateway)
 
     def delete_conntrack_state(self, root_helper, namespace, ip):
         """Delete conntrack state associated with an IP address."""


[2] http://www.soyoustart.com/ (http://www.soyoustart.com/en/essential-servers/)
[3] http://www.ovh.co.uk/ (http://www.ovh.co.uk/dedicated_servers/)
[4] http://www.kimsufi.com/ (http://www.kimsufi.com/uk/)
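
The “gateway outside subnet” check I still need to add could be as simple as this
(sketch, using netaddr, which neutron already depends on; addresses are examples):

    # Only treat the gateway as an on-link route when it falls outside
    # every CIDR configured on the port.
    import netaddr

    def gateway_outside_subnets(gateway, cidrs):
        gw = netaddr.IPAddress(gateway)
        return not any(gw in netaddr.IPNetwork(cidr) for cidr in cidrs)

    print(gateway_outside_subnets('203.0.113.1', ['198.51.100.0/24']))  # True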



Miguel Ángel Ajo



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [qa] gate-tempest-dsvm-neutron-heat-slow future

2014-11-25 Thread Miguel Ángel Ajo
+1 from me! good catch.

Miguel Ángel Ajo


On Tuesday, 25 November 2014 at 16:57, Kyle Mestery wrote:

 On Tue, Nov 25, 2014 at 9:28 AM, Sean Dague s...@dague.net 
 (mailto:s...@dague.net) wrote:
  So as I was waiting for other tests to return, I started looking through
  our existing test lists.
   
  gate-tempest-dsvm-neutron-heat-slow has been slowly evaporating, and I'm
  no longer convinced that it does anything useful (and just burns test
  nodes).
   
  The entire output of the job is currently as follows:
   
  2014-11-25 14:43:13.801 | heat-slow runtests: commands[0] | bash
  tools/pretty_tox.sh
  (?=.*\[.*\bslow\b.*\])(^tempest\.(api|scenario)\.orchestration)
  --concurrency=4
  2014-11-25 14:43:21.313 | {1}
  tempest.scenario.orchestration.test_server_cfn_init.CfnInitScenarioTest.test_server_cfn_init
  ... SKIPPED: Skipped until Bug: 1374175 is resolved.
  2014-11-25 14:47:36.271 | {0}
  tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_network
  [0.169736s] ... ok
  2014-11-25 14:47:36.271 | {0}
  tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_resources
  [0.000508s] ... ok
  2014-11-25 14:47:36.291 | {0}
  tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_router
  [0.019679s] ... ok
  2014-11-25 14:47:36.313 | {0}
  tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_router_interface
  [0.020597s] ... ok
  2014-11-25 14:47:36.564 | {0}
  tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_server
  [0.250788s] ... ok
  2014-11-25 14:47:36.580 | {0}
  tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_subnet
  [0.015410s] ... ok
  2014-11-25 14:47:44.113 |
  2014-11-25 14:47:44.113 | ==
  2014-11-25 14:47:44.113 | Totals
  2014-11-25 14:47:44.113 | ==
  2014-11-25 14:47:44.114 | Run: 7 in 0.478173 sec.
  2014-11-25 14:47:44.114 | - Passed: 6
  2014-11-25 14:47:44.114 | - Skipped: 1
  2014-11-25 14:47:44.115 | - Failed: 0
  2014-11-25 14:47:44.115 |
  2014-11-25 14:47:44.116 | ==
  2014-11-25 14:47:44.116 | Worker Balance
  2014-11-25 14:47:44.116 | ==
  2014-11-25 14:47:44.117 | - Worker 0 (6 tests) = 0:00:00.480677s
  2014-11-25 14:47:44.117 | - Worker 1 (1 tests) = 0:00:00.001455s
   
  So we are running about 1s worth of work, no longer in parallel (as
  there aren't enough classes to even do parallel runs).
   
  Given the emergence of the heat functional job, and the fact that this
  is really not testing anything any more, I'd like to propose we just
  remove it entirely at this stage and get the test nodes back.
   
  
 +1 from me.
  
  -Sean
   
  --
  Sean Dague
  http://dague.net
   
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
  
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Stale patches

2014-11-14 Thread Miguel Ángel Ajo
Thanks for cleaning up the house!

Best regards,  

Miguel Ángel Ajo


On Friday, 14 November 2014 at 00:46, Salvatore Orlando wrote:

 There are a lot of neutron patches which, for different reasons, have not 
 been updated in a while.
 In order to ensure reviewers focus on active patches, I have set a few patches 
 (about 75) as 'abandoned'.
  
 No patch with an update in the past month, either patchset or review, has 
 been abandoned. Moreover, only a part of the patches not updated for over a 
 month have been abandoned. I took extra care in identifying which ones could 
 safely be abandoned, and which ones were instead still valuable; 
 nevertheless, if you find out I abandoned a change you're actively working 
 on, please restore it.
  
 If you are the owner of one of these patches, you can use the 'restore 
 change' button in gerrit to resurrect the change. If you're not the owner and 
 wish to resume work on these patches either contact any member of the 
 neutron-core team in IRC or push a new patch.
  
 Salvatore  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Is this fix introducing another different bug to dhcp-agent?

2014-11-13 Thread Miguel Ángel Ajo
I believe this fix to IPv6 dhcp spawn breaks isolated metadata when we have a 
subnet combination like this on a network:

1) IPv6 subnet, with DHCP enabled
2) IPv4 subnet, with isolated metadata enabled.


https://review.openstack.org/#/c/123671/1/neutron/agent/dhcp_agent.py  

I haven’t been able to test yet, but wanted to share it before I forget.
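
The kind of check I think we need there instead would be along these lines
(method and attribute names are from memory / hypothetical, untested):

    # Decide on isolated metadata per IPv4 subnet instead of short-circuiting
    # on the presence of a DHCP-enabled IPv6 subnet.
    def should_spawn_metadata_proxy(network, conf):
        v4_dhcp_subnets = [s for s in network.subnets
                           if s.ip_version == 4 and s.enable_dhcp]
        return conf.enable_isolated_metadata and bool(v4_dhcp_subnets)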




Miguel Ángel
ajo @ freenode.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] HA cross project session summary and next steps

2014-11-13 Thread Miguel Ángel Ajo
Thank you for sharing, I missed that session.

Somehow related to the health checks: https://review.openstack.org/#/c/97748/

This is a spec/functionality for oslo I'm working on, to provide feedback to
the process manager that runs the daemons (init.d, pacemaker, systemd,
pacemaker+systemd, upstart).

The idea is that daemons themselves could provide feedback about their inner
status, with a status code + status message, to allow, for example, reporting
degraded operation.

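For instance, under systemd the reporting side could be as small as pushing a
status string over the notify socket (sketch, not what the spec mandates):

    # Report a ready/degraded status string to systemd via sd_notify.
    import os
    import socket

    def sd_notify(state):
        addr = os.environ.get('NOTIFY_SOCKET')
        if not addr:
            return  # not running under systemd with Type=notify
        if addr.startswith('@'):          # abstract socket namespace
            addr = '\0' + addr[1:]
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
        try:
            sock.sendto(state.encode('utf-8'), addr)
        finally:
            sock.close()

    sd_notify('READY=1\nSTATUS=degraded: AMQP connection lost, retrying')
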
Feedback on the spec/comments is appreciated.

Best regards,
Miguel Ángel



Miguel Ángel
ajo @ freenode.net


On Thursday, 13 November 2014 at 12:59, Angus Salkeld wrote:

 On Tue, Nov 11, 2014 at 12:13 PM, Angus Salkeld asalk...@mirantis.com 
 (mailto:asalk...@mirantis.com) wrote:
  Hi all
   
  The HA session was really well attended and I'd like to give some feedback 
  from the session.
   
  Firstly there is some really good content here: 
  https://etherpad.openstack.org/p/kilo-crossproject-ha-integration
   
  1. We SHOULD provide better health checks for OCF resources 
  (http://linux-ha.org/wiki/OCF_Resource_Agents).  
  These should be fast and reliable. We should probably bike shed on some 
  convention like project-manage healthcheck
  and then roll this out for each project.
   
  2. We should really move 
  https://github.com/madkiss/openstack-resource-agents to stackforge or 
  openstack if the author is agreeable to it (it's referred to in our 
  official docs).
   
  
 I have chatted to the author of this repo and he is happy for it to live 
 under stackforge or openstack. Or each OCF resource going into each of the 
 projects.
 Does anyone have any particular preference? I suspect stackforge will be the 
 path of least resistance.
  
 -Angus
   
  3. All services SHOULD support Active/Active configurations
  (better scaling and it's always tested)
   
  4. We should be testing HA (there are a number of ideas on the etherpad 
  about this)
   
   5. Many services do not recover in the case of failure mid-task
  This seems like a big problem to me (some leave the DB in a mess). 
  Someone linked to an interesting article (
  crash-only-software: http://lwn.net/Articles/191059/) 
  (http://lwn.net/Articles/191059/) that suggests that we if we do this 
  correctly we should not need the concept of clean shutdown.
   
  (https://github.com/openstack/oslo-incubator/blob/master/openstack/common/service.py#L459-L471)
   I'd be interested in how people think this needs to be approached 
  (just raise bugs for each?).
   
  Regards
  Angus
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron mid-cycle announcement

2014-11-13 Thread Miguel Ángel Ajo
I was willing to go (the plane is ~850€ from Madrid), but it's going to put
too much stress on my family as it's too close to the summit
(I need to add at least 3 more days because of flights).

I'd love to participate remotely, even with the timezone differences.
If you find a way to leave “homework” to the ones at CEST without
making your work more difficult, I'd allocate those days to help where
I can / it's needed.


Miguel Ángel
ajo @ freenode.net


On Friday, 14 November 2014 at 00:17, Kyle Mestery wrote:

 A severe typo hopefully didn't result in people booking week and a half trips 
 to Lehi!
  
 The mid-cycle is as originally planned: December 8-10.
  
 Thanks,
 Kyle
  
  On Nov 13, 2014, at 2:04 PM, Carl Baldwin c...@ecbaldwin.net 
  (mailto:c...@ecbaldwin.net) wrote:
   
   On Thu, Nov 13, 2014 at 1:00 PM, Salvatore Orlando sorla...@nicira.com 
   (mailto:sorla...@nicira.com) wrote:
   No worries,

   you get one day off over the weekend. And you also get to choose if it's
   saturday or sunday.

   
   
  I didn't think it was going to be a whole day.
   
   Salvatore

On 13 November 2014 20:05, Kevin Benton blak...@gmail.com 
(mailto:blak...@gmail.com) wrote:
 
December 8-19? 11 day mid-cycle seems a little intense...
   
  If you thought the summits fried your brain...
   
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
  
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Is this fix introducing another different bug to dhcp-agent?

2014-11-13 Thread Miguel Ángel Ajo
Wow Xu, that was fast!  

Thank you very much.  


Miguel Ángel
ajo @ freenode.net


On Friday, 14 November 2014 at 04:01, Xu Han Peng wrote:

 I opened a new bug and submitted a fix for this problem since it was 
 introduced by my previous patch.
  
 https://bugs.launchpad.net/neutron/+bug/1392564
 https://review.openstack.org/#/c/134432/
  
 It will be great if you can have a look at the fix and comment. Thanks!
  
 Xu Han
  
 On 11/14/2014 05:54 AM, Ihar Hrachyshka wrote:
 Robert, Miguel, do you plan to take care of the bug and the fix, or do you need
 help? RDO depends on the fix, and we should also introduce the fix before the
 next Juno release that includes the bad patch, so I would be glad to step in if
 you don't have spare cycles.
 /Ihar

 On 13/11/14 16:44, Robert Li (baoli) wrote:
  Nice catch. Since it’s already merged, a new bug may be in order. —Robert

  On 11/13/14, 10:25 AM, Miguel Ángel Ajo majop...@redhat.com wrote:
   I believe this fix to IPv6 dhcp spawn breaks isolated metadata when we have a
   subnet combination like this on a network:
   1) IPv6 subnet, with DHCP enabled
   2) IPv4 subnet, with isolated metadata enabled.
   https://review.openstack.org/#/c/123671/1/neutron/agent/dhcp_agent.py
   I haven’t been able to test yet, but wanted to share it before I forget.
   Miguel Ángel
   ajo @ freenode.net

   
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
  
  
 -- Thanks, Xu Han  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Reminder: No meeting this week

2014-11-10 Thread Miguel Ángel Ajo
Congratulations :-)  

--  
Miguel Ángel Ajo
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


On Monday, 10 November 2014 at 21:22, Brandon Logan wrote:

 Congrats! Another little mestery in the world, scary!
  
 On Mon, 2014-11-10 at 12:22 -0600, Kyle Mestery wrote:
  Since most folks are either freshly back from traveling, in the midst
  of returning, or perhaps even with a new baby, we'll be skipping this
  week's meeting. We'll resume next week at our normally scheduled time
  [1] of 1400UTC on Tuesday.
   
  Thanks!
  Kyle
   
  [1] https://wiki.openstack.org/wiki/Network/Meetings
   
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
  
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon mode support

2014-11-07 Thread Miguel Ángel Ajo

Hi Yorik,

   I was talking with Mark McClain a minute ago here at the summit about this,
and he told me that now, at the start of the cycle, looks like a good moment to
merge the spec and the rootwrap daemon bits, so we have a lot of headroom for
testing during the next months.

   We need to upgrade the spec [1] to the new Kilo format.

   Do you have some time to do it? Otherwise I can allocate some time and do it
right away.

[1] https://review.openstack.org/#/c/93889/
--  
Miguel Ángel Ajo
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


On Thursday, 24 de July de 2014 at 01:42, Miguel Angel Ajo Pelayo wrote:

 +1  
  
 Sent from my Android phone using TouchDown (www.nitrodesk.com)  
  
  
 -Original Message-  
 From: Yuriy Taraday [yorik@gmail.com (mailto:yorik@gmail.com)]  
 Received: Thursday, 24 Jul 2014, 0:42  
 To: OpenStack Development Mailing List [openstack-dev@lists.openstack.org 
 (mailto:openstack-dev@lists.openstack.org)]  
 Subject: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon 
 mode support  
  
  
 Hello.
  
 I'd like to propose making a spec freeze exception for rootwrap-daemon-mode 
 spec [1].
  
 Its goal is to save agents' execution time by using daemon mode for rootwrap 
 and thus avoiding python interpreter startup time as well as sudo overhead 
 for each call. Preliminary benchmark shows 10x+ speedup of the rootwrap 
 interaction itself.  
  
 This spec has a number of supporters from the Neutron team (Carl and Miguel gave 
 it their +2 and +1) and has all code waiting for review [2], [3], [4].
 The only thing that has been blocking its progress is Mark's -2, left when the 
 oslo.rootwrap spec hadn't been merged yet. Now that's not the case, and the code 
 in oslo.rootwrap is steadily getting approved [5].
  
 [1] https://review.openstack.org/93889
 [2] https://review.openstack.org/82787
 [3] https://review.openstack.org/84667
 [4] https://review.openstack.org/107386
 [5] 
 https://review.openstack.org/#/q/project:openstack/oslo.rootwrap+topic:bp/rootwrap-daemon-mode,n,z
  
 --  
  
 Kind regards, Yuriy.  
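
(For illustration, a rough and untested sketch of what daemon mode buys, 
assuming an oslo.rootwrap client API along the lines of what is landing in 
[5]: a Client started with the daemon command, whose execute() returns 
returncode, stdout and stderr. The neutron-rootwrap-daemon entry point name 
and config paths below are placeholders, and the import path may differ 
between releases.)

    # Untested sketch: each plain rootwrap call pays sudo plus a fresh Python
    # interpreter startup, while the daemon client keeps one privileged
    # rootwrap process alive and reuses it for every command.
    import subprocess
    import time

    from oslo_rootwrap import client as rootwrap_client

    CMD = ['ip', 'netns', 'list']

    def via_plain_rootwrap():
        # sudo + a new rootwrap interpreter on every single call
        return subprocess.check_output(
            ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf'] + CMD)

    daemon = rootwrap_client.Client(
        ['sudo', 'neutron-rootwrap-daemon', '/etc/neutron/rootwrap.conf'])

    def via_daemon():
        # only the command travels to the long-lived privileged helper
        returncode, out, err = daemon.execute(CMD)
        return out

    for label, call in (('plain rootwrap', via_plain_rootwrap),
                        ('daemon mode', via_daemon)):
        start = time.time()
        for _ in range(20):
            call()
        print('%s: %.2fs for 20 calls' % (label, time.time() - start))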
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon mode support

2014-11-07 Thread Miguel Ángel Ajo
Ohh, sad to hear that, Yuriy; you were doing awesome work. I will take some 
time to re-review the final state of the code and specs, and move it forward. 
Thank you very much for your contribution.  

--  
Miguel Ángel Ajo
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


On Friday, 7 de November de 2014 at 11:44, Yuriy Taraday wrote:

 Hello, Miguel.
  
 I switched departments recently and unfortunately don't have much free time 
 for community work. Feel free to pick up my change requests and push them if 
 you have time. I'll try to keep track of these changes and give some feedback 
 on them on occasion, but don't wait on me.
 Thank you for keeping this feature in mind. I'd be glad to see it finally 
 used in Neutron (and any other project).
  
 --  
  
 Kind regards, Yuriy.
  
 On Fri, Nov 7, 2014 at 1:05 PM, Miguel Ángel Ajo majop...@redhat.com 
 (mailto:majop...@redhat.com) wrote:
   
  Hi Yorik,
   
 I was talking with Mark Mcclain a minute ago here at the summit about 
  this. And he told me that now at the start of the cycle looks like a good 
   moment to merge the spec & the root wrap daemon bits, so we have a lot of 
  headroom for testing during the next months.
   
 We need to upgrade the spec [1] to the new Kilo format.
   
 Do you have some time to do it?, I can allocate some time and do it 
  right away.
   
  [1] https://review.openstack.org/#/c/93889/
  --  
  Miguel Ángel Ajo
  Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
   
   
  On Thursday, 24 de July de 2014 at 01:42, Miguel Angel Ajo Pelayo wrote:
   
   +1  

   Sent from my Android phone using TouchDown (www.nitrodesk.com 
   (http://www.nitrodesk.com))  


   -Original Message-  
   From: Yuriy Taraday [yorik@gmail.com (mailto:yorik@gmail.com)]  
   Received: Thursday, 24 Jul 2014, 0:42  
   To: OpenStack Development Mailing List [openstack-dev@lists.openstack.org 
   (mailto:openstack-dev@lists.openstack.org)]  
   Subject: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon 
   mode support  


   Hello.

   I'd like to propose making a spec freeze exception for 
   rootwrap-daemon-mode spec [1].

   Its goal is to save agents' execution time by using daemon mode for 
   rootwrap and thus avoiding python interpreter startup time as well as 
   sudo overhead for each call. Preliminary benchmark shows 10x+ speedup of 
   the rootwrap interaction itself.  

    This spec has a number of supporters from the Neutron team (Carl and Miguel 
    gave it their +2 and +1) and has all code waiting for review [2], [3], 
    [4].
    The only thing that has been blocking its progress is Mark's -2, left when 
    the oslo.rootwrap spec hadn't been merged yet. Now that's not the case, and 
    the code in oslo.rootwrap is steadily getting approved [5].

   [1] https://review.openstack.org/93889
   [2] https://review.openstack.org/82787
   [3] https://review.openstack.org/84667
   [4] https://review.openstack.org/107386
   [5] 
   https://review.openstack.org/#/q/project:openstack/oslo.rootwrap+topic:bp/rootwrap-daemon-mode,n,z
  

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Neutron] Rootwrap daemon mode support

2014-11-07 Thread Miguel Ángel Ajo
Yuriy, what’s the status of the rootwrap-daemon implementation on the Nova 
side? Was it merged? Otherwise, do you think there could be anyone interested 
in picking it up?  

Best regards,  

--  
Miguel Ángel Ajo
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


On Friday, 7 de November de 2014 at 11:52, Miguel Ángel Ajo wrote:

 Ohh, sad to hear that Yuriy, you were doing an awesome work. I will take some 
 time to re-review the final state of the code and specs, and move it forward. 
 Thank you very much for your contribution.  
  
 --  
 Miguel Ángel Ajo
 Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
  
  
 On Friday, 7 de November de 2014 at 11:44, Yuriy Taraday wrote:
  
  Hello, Miguel.
   
  I switched departments recently and unfortunately don't have much free time 
  for community work. Feel free to pick up my change requests and push them 
  if you have time. I'll try to keep track of these changes and give some 
  feedback on them on occasion, but don't wait on me.
  Thank you for keeping this feature in mind. I'd be glad to see it finally 
  used in Neutron (and any other project).
   
  --  
   
  Kind regards, Yuriy.
   
  On Fri, Nov 7, 2014 at 1:05 PM, Miguel Ángel Ajo majop...@redhat.com 
  (mailto:majop...@redhat.com) wrote:

   Hi Yorik,

  I was talking with Mark Mcclain a minute ago here at the summit about 
   this. And he told me that now at the start of the cycle looks like a good 
    moment to merge the spec & the root wrap daemon bits, so we have a lot of 
   headroom for testing during the next months.

  We need to upgrade the spec [1] to the new Kilo format.

  Do you have some time to do it?, I can allocate some time and do it 
   right away.

   [1] https://review.openstack.org/#/c/93889/
   --  
   Miguel Ángel Ajo
   Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


   On Thursday, 24 de July de 2014 at 01:42, Miguel Angel Ajo Pelayo wrote:

+1  
 
Sent from my Android phone using TouchDown (www.nitrodesk.com 
(http://www.nitrodesk.com))  
 
 
-Original Message-  
From: Yuriy Taraday [yorik@gmail.com (mailto:yorik@gmail.com)]  
Received: Thursday, 24 Jul 2014, 0:42  
To: OpenStack Development Mailing List 
[openstack-dev@lists.openstack.org 
(mailto:openstack-dev@lists.openstack.org)]  
Subject: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap 
daemon mode support  
 
 
Hello.
 
I'd like to propose making a spec freeze exception for 
rootwrap-daemon-mode spec [1].
 
Its goal is to save agents' execution time by using daemon mode for 
rootwrap and thus avoiding python interpreter startup time as well as 
sudo overhead for each call. Preliminary benchmark shows 10x+ speedup 
of the rootwrap interaction itself.  
 
This spec has a number of supporters from the Neutron team (Carl and 
Miguel gave it their +2 and +1) and has all code waiting for review 
[2], [3], [4].
The only thing that has been blocking its progress is Mark's -2, left 
when the oslo.rootwrap spec hadn't been merged yet. Now that's not the case 
and the code in oslo.rootwrap is steadily getting approved [5].
 
[1] https://review.openstack.org/93889
[2] https://review.openstack.org/82787
[3] https://review.openstack.org/84667
[4] https://review.openstack.org/107386
[5] 
https://review.openstack.org/#/q/project:openstack/oslo.rootwrap+topic:bp/rootwrap-daemon-mode,n,z
   
  
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev