Re: [openstack-dev] [neutron] Rotating the weekly Neutron meeting

2014-08-13 Thread Miguel Angel Ajo Pelayo
+1

- Original Message -
> like it! +1
> 
> Fawad Khaliq
> 
> 
> On Wed, Aug 13, 2014 at 7:58 AM, mar...@redhat.com < mandr...@redhat.com >
> wrote:
> 
> 
> 
> On 13/08/14 17:05, Kyle Mestery wrote:
> > Per this week's Neutron meeting [1], it was decided that offering a
> > rotating meeting slot for the weekly Neutron meeting would be a good
> > thing. This will allow for a much easier time for people in
> > Asia/Pacific timezones, as well as for people in Europe.
> > 
> > So, I'd like to propose we rotate the weekly as follows:
> > 
> > Monday 2100UTC
> > Tuesday 1400UTC
> 
> 
> HUGE +1 and thanks!
> 
> 
> > 
> > If people are ok with these time slots, I'll set this up and we'll
> > likely start with this new schedule in September, after the FPF.
> > 
> > Thanks!
> > Kyle
> > 
> > [1]
> > http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-11-21.00.html
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Ipnetns+rootwrap, two different issues

2014-08-19 Thread Miguel Angel Ajo Pelayo

Hi Eugene, 

   I'd like to ask you to reconsider the -1 on this review:

 a)  https://review.openstack.org/#/c/114928/

   I'm tackling a different issue than Kevin here:

 b) https://review.openstack.org/#/c/109736/

I'm trying to allow general use of the IpNetns wrapper when
namespace=None (which is valid) and root_wrapper=None (which is
valid too), to make the external_process module more testable
without needing to add extra rootwrap rules, and without needing
to rewrite the ProcessManager for that special case. [1]
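
Roughly, the idea is something like this (a simplified sketch, not the
actual patch; the helper name is illustrative):

    # Simplified sketch of the idea in (a): only wrap the command with
    # "ip netns exec" and with the root wrapper when they are actually set.
    # build_command() is an illustrative helper, not neutron's real API.
    def build_command(cmd, namespace=None, root_wrapper=None):
        full_cmd = list(cmd)
        if namespace:
            # Enter the namespace only when one was requested.
            full_cmd = ['ip', 'netns', 'exec', namespace] + full_cmd
        if root_wrapper:
            # Prepend the privilege-escalation wrapper only when configured.
            full_cmd = root_wrapper.split() + full_cmd
        return full_cmd

    # With namespace=None and root_wrapper=None the command runs unwrapped,
    # which makes it usable from functional tests without extra rootwrap rules.
    print(build_command(['ps', '-p', '1234']))
    print(build_command(['ps', '-p', '1234'],
                        namespace='qdhcp-1234',
                        root_wrapper='sudo neutron-rootwrap /etc/neutron/rootwrap.conf'))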

Kevin, on his side, is making the code smarter, so it guesses from
the system whether ip netns requires privilege elevation
or not.

Those are different bugs, tackling different issues:

   a) https://bugs.launchpad.net/neutron/+bug/1358196

   b) https://bugs.launchpad.net/neutron/+bug/1311804 +
  https://bugs.launchpad.net/neutron/+bug/1348812


Best regards,
Miguel Ángel.


[1]: 
https://review.openstack.org/#/c/112798/18/neutron/tests/functional/agent/linux/test_process_manager.py
 (line 77), 
 namespace=None by default.

 which goes through this: 
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/external_process.py#L54

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Performance of security group

2014-08-20 Thread Miguel Angel Ajo Pelayo
> >>>> This way we increase the chances of having part of this for the
> >>>> Juno cycle. If we go for something too complicated is going to
> >>>> take more time for approval.
> >>>
> >>>
> >>> I agree. And it not only increases chances to get at least some of
> >>> those highly demanded performance enhancements to get into Juno,
> >>> it's also "the right thing to do" (c). It's counterproductive to
> >>> put multiple vaguely related enhancements in single spec. This
> >>> would dim review focus and put us into position of getting
> >>> 'all-or-nothing'. We can't afford that.
> >>>
> >>> Let's leave one spec per enhancement. @Shihazhang, what do you
> >>> think?
> >>>
> >>>
> >>>> Also, I proposed the details of "2", trying to bring awareness
> >>>> on the topic, as I have been working with the scale lab in Red
> >>>> Hat to find and understand those issues, I have a very good
> >>>> knowledge of the problem and I believe I could make a very fast
> >>>> advance on the issue at the RPC level.
> >>>
> >>>> Given that, I'd like to work on this specific part, whether or
> >>>> not we split the specs, as it's something we believe critical
> >>>> for neutron scalability and thus, *nova parity*.
> >>>
> >>>> I will start a separate spec for "2", later on, if you find it
> >>>> ok, we keep them as separate ones, if you believe having just 1
> >>>> spec (for 1 & 2) is going be safer for juno-* approval, then we
> >>>> can incorporate my spec in yours, but then
> >>>> "add-ipset-to-security" is not a good spec title to put all this
> >>>> together.
> >>>
> >>>
> >>>> Best regards, Miguel Ángel.
> >>>
> >>>
> >>>> On 07/02/2014 03:37 AM, shihanzhang wrote:
> >>>>>
> >>>>> hi Miguel Angel Ajo Pelayo! I agree with you and will modify my
> >>>>> spec, but I will also optimize the RPC from the security group
> >>>>> agent to the neutron server. Now the model is
> >>>>> 'port[rule1,rule2...], port...', I will change it to 'port[sg1,
> >>>>> sg2..]', this can reduce the size of the RPC response message from
> >>>>> the neutron server to the security group agent.
> >>>>>
> >>>>> At 2014-07-01 09:06:17, "Miguel Angel Ajo Pelayo"
> >>>>>  wrote:
> >>>>>>
> >>>>>>
> >>>>>> Ok, I was talking with Édouard @ IRC, and as I have time to
> >>>>>> work on this problem, I could file a specific spec for
> >>>>>> the security group RPC optimization, a master plan in two
> >>>>>> steps:
> >>>>>>
> >>>>>> 1) Refactor the current RPC communication for
> >>>>>> security_groups_for_devices, which could be used for full
> >>>>>> syncs, etc..
> >>>>>>
> >>>>>> 2) Benchmark && make use of a fanout queue per security
> >>>>>> group to make sure only the hosts with instances on a
> >>>>>> certain security group get the updates as they happen.
> >>>>>>
> >>>>>> @shihanzhang do you find it reasonable?
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> - Original Message -
> >>>>>>> - Original Message -
> >>>>>>>> @Nachi: Yes, that could be a good improvement to factorize
> >>>>>>>> the RPC
> >>>>>>> mechanism.
> >>>>>>>>
> >>>>>>>> Another idea: What about creating a RPC topic per
> >>>>>>>> security group (quid of the
> >>>>>>> RPC topic
> >>>>>>>> scalability) on which an agent subscribes if one of its
> >>>>>>>> ports is
> >>>>>>> associated
> >>>>>>>> to the security group?
> >>>>>>>>
> >>>>>>>> Regards, Édouard.
> >>>>>>>>
> >>>>>>>>
> >>>
> >>>
> >>>>>>> Hmm, Interesting,

Re: [openstack-dev] [neutron]Performance of security group

2014-08-20 Thread Miguel Angel Ajo Pelayo
> On 07/02/2014 02:50 PM, shihanzhang wrote:
> >>>>> hi Miguel Ángel and Ihar Hrachyshka, I agree with you that
> >>>>> splitting the work into several specs is the way to go. I have
> >>>>> finished the work (ipset optimization), and you can do 'sg rpc
> >>>>> optimization (without fanout)'. As for the third part (sg rpc
> >>>>> optimization (with fanout)), I think we need to talk about it,
> >>>>> because just using ipset to optimize the security group agent
> >>>>> code does not bring the best results!
> >>>>> 
> >>>>> Best regards, shihanzhang.
> >>>>> 
> >>>>> 
> >>>>> 
> >>>>> 
> >>>>> 
> >>>>> 
> >>>>> 
> >>>>> 
> >>>>> At 2014-07-02 04:43:24, "Ihar Hrachyshka" < ihrac...@redhat.com >
> >>>>> wrote:
> >>> On 02/07/14 10:12, Miguel Angel Ajo wrote:
> >>> 
> >>>> Shihazhang,
> >>> 
> >>>> I really believe we need the RPC refactor done for this cycle,
> >>>> and given the close deadlines we have (July 10 for spec
> >>>> submission and July 20 for spec approval).
> >>> 
> >>>> Don't you think it's going to be better to split the work in
> >>>> several specs?
> >>> 
> >>>> 1) ipset optimization (you) 2) sg rpc optimization (without
> >>>> fanout) (me) 3) sg rpc optimization (with fanout) (edouard, you
> >>>> , me)
> >>> 
> >>> 
> >>>> This way we increase the chances of having part of this for the
> >>>> Juno cycle. If we go for something too complicated is going to
> >>>> take more time for approval.
> >>> 
> >>> 
> >>> I agree. And it not only increases chances to get at least some of
> >>> those highly demanded performance enhancements to get into Juno,
> >>> it's also "the right thing to do" (c). It's counterproductive to
> >>> put multiple vaguely related enhancements in single spec. This
> >>> would dim review focus and put us into position of getting
> >>> 'all-or-nothing'. We can't afford that.
> >>> 
> >>> Let's leave one spec per enhancement. @Shihazhang, what do you
> >>> think?
> >>> 
> >>> 
> >>>> Also, I proposed the details of "2", trying to bring awareness
> >>>> on the topic, as I have been working with the scale lab in Red
> >>>> Hat to find and understand those issues, I have a very good
> >>>> knowledge of the problem and I believe I could make a very fast
> >>>> advance on the issue at the RPC level.
> >>> 
> >>>> Given that, I'd like to work on this specific part, whether or
> >>>> not we split the specs, as it's something we believe critical
> >>>> for neutron scalability and thus, *nova parity*.
> >>> 
> >>>> I will start a separate spec for "2", later on, if you find it
> >>>> ok, we keep them as separate ones, if you believe having just 1
> >>>> spec (for 1 & 2) is going be safer for juno-* approval, then we
> >>>> can incorporate my spec in yours, but then
> >>>> "add-ipset-to-security" is not a good spec title to put all this
> >>>> together.
> >>> 
> >>> 
> >>>> Best regards, Miguel Ángel.
> >>> 
> >>> 
> >>>> On 07/02/2014 03:37 AM, shihanzhang wrote:
> >>>>> 
> >>>>> hi Miguel Angel Ajo Pelayo! I agree with you and will modify my
> >>>>> spec, but I will also optimize the RPC from the security group
> >>>>> agent to the neutron server. Now the model is
> >>>>> 'port[rule1,rule2...], port...', I will change it to 'port[sg1,
> >>>>> sg2..]', this can reduce the size of the RPC response message from
> >>>>> the neutron server to the security group agent.
> >>>>> 
> >>>>> At 2014-07-01 09:06:17, "Miguel Angel Ajo Pelayo"
> >>>>> < mangel...@redhat.com > wrote:
> >>>>>> 
> >>>>>> 
> >>>>>> Ok, I was talking with Édouard @ IRC, and as I have time to
> >>>>>> w

Re: [openstack-dev] [neutron] Runtime checks vs Sanity checks

2014-08-25 Thread Miguel Angel Ajo Pelayo

In spite of my +1 I actually agree. I had forgotten about the sanity
check framework. We put it in place to avoid an excessive (and
growing) amount of checks being done at runtime.

In this case several agents would be doing the same check.

We should do things either one way or the other, but not mixed.

In this case, the check would require a privilege drop to be done
(probably by using a root_helper =
"su - %(neutron_user)s %(root_helper)s").



Best regards,
Miguel Ángel.

- Original Message -
> Kevin Benton has proposed adding a runtime check for netns permission
> problems:
> 
> https://review.openstack.org/#/c/109736/
> 
> There seems to be consensus on the patch that this is something that we want
> to do at runtime, but that would seem to run counter to the precedent that
> host-specific issues such as this one be considered a deployment-time
> responsibility.  The addition of the sanity check  framework would seem to
> support the latter case:
> 
> https://github.com/openstack/neutron/blob/master/neutron/cmd/sanity_check.py
> 
> Thoughts?
> 
> 
> Maru
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Juno-3 BP meeting

2014-08-26 Thread Miguel Angel Ajo Pelayo
Works perfect for me.  I will join.  

Sent from my Android phone using TouchDown (www.nitrodesk.com)


-Original Message-
From: Carl Baldwin [c...@ecbaldwin.net]
Received: Wednesday, 27 Aug 2014, 5:07
To: OpenStack Development Mailing List [openstack-dev@lists.openstack.org]
Subject: Re: [openstack-dev] [neutron] Juno-3 BP meeting

Kyle,

These are three good ones.  I've been reviewing the HA ones and have had an
eye on the other two.

1300 is a bit early but I'll plan to be there.

Carl
On Aug 26, 2014 4:04 PM, "Kyle Mestery"  wrote:

> I'd like to propose a meeting at 1300UTC on Thursday in
> #openstack-meeting-3 to discuss Neutron BPs remaining for Juno at this
> point. We're talking specifically about medium and high priority ones,
> with a focus on these three:
>
> https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
> https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
>
> https://blueprints.launchpad.net/neutron/+spec/security-group-rules-for-devices-rpc-call-refactor
>
>
> These three BPs will provide a final push for scalability in a few
> areas and are things we as a team need to work to merge this week. The
> meeting will allow for discussion of final issues on these patches
> with the goal of trying to merge them by Feature Freeze next week. If
> time permits, we can discuss other medium and high priority community
> BPs as well.
>
> Let me know if this works by responding on this thread and I hope to
> see people there Thursday!
>
> Thanks,
> Kyle
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Juno-3 BP meeting

2014-08-27 Thread Miguel Angel Ajo Pelayo


If we still have time at the end, as a lower priority, I'd like
to talk about:

https://blueprints.launchpad.net/neutron/+spec/agent-child-processes-status

which I believe is in good shape (l3 & dhcp are implemented, and it lays
the groundwork for all the other agents).



- Original Message -
> Works for me.
> 
> 
> On Wed, Aug 27, 2014 at 10:54 AM, Assaf Muller < amul...@redhat.com > wrote:
> 
> 
> Good for me.
> 
> - Original Message -
> > Works perfect for me. I will join.
> > 
> > Sent from my Android phone using TouchDown ( www.nitrodesk.com )
> > 
> > 
> > -Original Message-
> > From: Carl Baldwin [ c...@ecbaldwin.net ]
> > Received: Wednesday, 27 Aug 2014, 5:07
> > To: OpenStack Development Mailing List [ openstack-dev@lists.openstack.org
> > ]
> > Subject: Re: [openstack-dev] [neutron] Juno-3 BP meeting
> > 
> > 
> > 
> > 
> > Kyle,
> > 
> > These are three good ones. I've been reviewing the HA ones and have had an
> > eye on the other two.
> > 
> > 1300 is a bit early but I'll plan to be there.
> > 
> > Carl
> > On Aug 26, 2014 4:04 PM, "Kyle Mestery" < mest...@mestery.com > wrote:
> > 
> > 
> > I'd like to propose a meeting at 1300UTC on Thursday in
> > #openstack-meeting-3 to discuss Neutron BPs remaining for Juno at this
> > point. We're talking specifically about medium and high priority ones,
> > with a focus on these three:
> > 
> > https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
> > https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
> > https://blueprints.launchpad.net/neutron/+spec/security-group-rules-for-devices-rpc-call-refactor
> > 
> > 
> > These three BPs will provide a final push for scalability in a few
> > areas and are things we as a team need to work to merge this week. The
> > meeting will allow for discussion of final issues on these patches
> > with the goal of trying to merge them by Feature Freeze next week. If
> > time permits, we can discuss other medium and high priority community
> > BPs as well.
> > 
> > Let me know if this works by responding on this thread and I hope to
> > see people there Thursday!
> > 
> > Thanks,
> > Kyle
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Status of Neutron at Juno-3

2014-09-04 Thread Miguel Angel Ajo Pelayo

I didn't know that we could ask for an FFE, so I'd like to ask (if
still in time) for:

https://blueprints.launchpad.net/neutron/+spec/agent-child-processes-status

https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bp/agent-child-processes-status,n,z

To get the ProcessMonitor implemented in the l3_agent and dhcp_agent at least.

I believe the work is ready (I need to check the radvd respawn in the l3 agent).
The ProcessMonitor class is already merged.

Best regards,
Miguel Ángel.

- Original Message -
> On Wed, Sep 3, 2014 at 10:19 AM, Mark McClain  wrote:
> >
> > On Sep 3, 2014, at 11:04 AM, Brian Haley  wrote:
> >
> >> On 09/03/2014 08:17 AM, Kyle Mestery wrote:
> >>> Given how deep the merge queue is (146 currently), we've effectively
> >>> reached feature freeze in Neutron now (likely other projects as well).
> >>> So this morning I'm going to go through and remove BPs from Juno which
> >>> did not make the merge window. I'll also be putting temporary -2s in
> >>> the patches to ensure they don't slip in as well. I'm looking at FFEs
> >>> for the high priority items which are close but didn't quite make it:
> >>>
> >>> https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
> >>> https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
> >>> https://blueprints.launchpad.net/neutron/+spec/security-group-rules-for-devices-rpc-call-refactor
> >>
> >> I guess I'll be the first to ask for an exception for a Medium since the
> >> code
> >> was originally completed in Icehouse:
> >>
> >> https://blueprints.launchpad.net/neutron/+spec/l3-metering-mgnt-ext
> >>
> >> The neutronclient-side code was committed in January, and the neutron
> >> side,
> >> https://review.openstack.org/#/c/70090 has had mostly positive reviews
> >> since
> >> then.  I've really just spent the last week re-basing it as things moved
> >> along.
> >>
> >
> > +1 for FFE.  I think this is good community work that fell through the cracks.
> >
> I agree, and I've marked it as RC1 now. I'll sort through these with
> ttx on Friday and get more clarity on its official status.
> 
> Thanks,
> Kyle
> 
> > mark
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] default allow security group

2014-09-05 Thread Miguel Angel Ajo Pelayo

I believe your request matches this, and I agree
it would be a good thing:

https://blueprints.launchpad.net/neutron/+spec/default-rules-for-default-security-group

There is also the fact that we have hardcoded default
security group settings. It would be good to have
system-wide default security group settings.

https://github.com/openstack/neutron/blob/master/neutron/db/securitygroups_db.py#L122
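
Purely as an illustration of what I mean (this is not an existing neutron
option, just a sketch of the idea):

    # Hypothetical config sketch: let the deployer define the rules that get
    # inserted into every tenant's default security group, instead of the
    # hardcoded allow-from-same-group / allow-all-egress behaviour.
    [default_security_group_rules]
    rule = ingress,tcp,22,0.0.0.0/0
    rule = ingress,icmp,any,0.0.0.0/0
    rule = egress,any,any,0.0.0.0/0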





- Original Message -
> Hi!
> 
> I've decided that as I have problems with OpenStack while using it in
> the service of Infra, I'm going to just start spamming the list.
> 
> Please make something like this:
> 
> neutron security-group-create default --allow-every-damn-thing
> 
> Right now, to make security groups get the hell out of our way because
> they do not provide us any value because we manage our own iptables, it
> takes adding something like 20 rules.
> 
> 15:24:05  clarkb | one each for ingress and egress udp tcp over
> ipv4 then ipv6 and finaly icmp
> 
> That may be great for someone using my-first-server-pony, but for me, I
> know how the internet works, and when I ask for a server, I want it to
> just work.
> 
> Now, I know, I know - the DEPLOYER can make decisions blah blah blah.
> 
> BS
> 
> If OpenStack is going to let my deployer make the absolutely assinine
> decision that all of my network traffic should be blocked by default, it
> should give me, the USER, a get out of jail free card.
> 
> kthxbai
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nova] non-deterministic gate failures due to unclosed eventlet Timeouts

2014-09-10 Thread Miguel Angel Ajo Pelayo
Good catch John, and good work Angus! ;)

This will save a lot of headaches.
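
For reference, the safe patterns John describes below look roughly like
this (a standalone illustration, not neutron code):

    import eventlet
    from eventlet import timeout

    # Risky: a Timeout created and never cancelled stays registered inside
    # eventlet and can fire later in unrelated greenlet switches:
    #     timeout.Timeout(10)

    # Safe pattern 1: context manager, cancelled automatically on exit.
    with timeout.Timeout(10, False):
        eventlet.sleep(0.1)

    # Safe pattern 2: explicit cancel() once the guarded call has returned.
    t = timeout.Timeout(10)
    try:
        eventlet.sleep(0.1)
    finally:
        t.cancel()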

- Original Message -
> On Mon, 8 Sep 2014 05:25:22 PM Jay Pipes wrote:
> > On 09/07/2014 10:43 AM, Matt Riedemann wrote:
> > > On 9/7/2014 8:39 AM, John Schwarz wrote:
> > >> Hi,
> > >> 
> > >> Long story short: for future reference, if you initialize an eventlet
> > >> Timeout, make sure you close it (either with a context manager or simply
> > >> timeout.close()), and be extra-careful when writing tests using
> > >> eventlet Timeouts, because these timeouts don't implicitly expire and
> > >> will cause unexpected behaviours (see [1]) like gate failures. In our
> > >> case this caused non-deterministic failures on the dsvm-functional test
> > >> suite.
> > >> 
> > >> 
> > >> Late last week, a bug was found ([2]) in which an eventlet Timeout
> > >> object was initialized but not closed. This instance was left inside
> > >> eventlet's inner-workings and triggered non-deterministic "Timeout: 10
> > >> seconds" errors and failures in dsvm-functional tests.
> > >> 
> > >> As mentioned earlier, initializing a new eventlet.timeout.Timeout
> > >> instance also registers it to inner mechanisms that exist within the
> > >> library, and the reference remains there until it is explicitly removed
> > >> (and not until the scope leaves the function block, as some would have
> > >> thought). Thus, the old code (simply creating an instance without
> > >> assigning it to a variable) left no way to close the timeout object.
> > >> This reference remains throughout the "life" of a worker, so this can
> > >> (and did) effect other tests and procedures using eventlet under the
> > >> same process. Obviously this could easily effect production-grade
> > >> systems with very high load.
> > >> 
> > >> For future reference:
> > >>   1) If you run into a "Timeout: %d seconds" exception whose traceback
> > >> 
> > >> includes "hub.switch()" and "self.greenlet.switch()" calls, there might
> > >> be a latent Timeout somewhere in the code, and a search for all
> > >> eventlet.timeout.Timeout instances will probably produce the culprit.
> > >> 
> > >>   2) The setup used to reproduce this error for debugging purposes is a
> > >> 
> > >> baremetal machine running a VM with devstack. In the baremetal machine I
> > >> used some 6 "dd if=/dev/zero of=/dev/null" to simulate high CPU load
> > >> (full command can be found at [3]), and in the VM I ran the
> > >> dsvm-functional suite. Using only a VM with similar high CPU simulation
> > >> fails to produce the result.
> > >> 
> > >> [1]
> > >> http://eventlet.net/doc/modules/timeout.html#eventlet.timeout.eventlet.ti
> > >> meout.Timeout.Timeout.cancel
> > >> 
> > >> [2] https://review.openstack.org/#/c/119001/
> > >> [3]
> > >> http://stackoverflow.com/questions/2925606/how-to-create-a-cpu-spike-with
> > >> -a-bash-command
> > >> 
> > >> 
> > >> 
> > >> --
> > >> John Schwarz,
> > >> Software Engineer, Red Hat.
> > >> 
> > >> 
> > >> ___
> > >> OpenStack-dev mailing list
> > >> OpenStack-dev@lists.openstack.org
> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > 
> > > Thanks, that might be what's causing this timeout/gate failure in the
> > > nova unit tests. [1]
> > > 
> > > [1] https://bugs.launchpad.net/nova/+bug/1357578
> > 
> > Indeed, there are a couple places where eventlet.timeout.Timeout() seems
> > to be used in the test suite without a context manager or calling
> > close() explicitly:
> > 
> > tests/virt/libvirt/test_driver.py
> > 8925:raise eventlet.timeout.Timeout()
> > 
> > tests/virt/hyperv/test_vmops.py
> > 196:mock_with_timeout.side_effect = etimeout.Timeout()
> 
> If it's useful for anyone, I wrote a quick pylint test that will catch all
> the
> above cases of "misused" context managers.
> 
> (Indeed, it will currently trigger on the "raise Timeout()" case, which is
> probably too eager but can be disabled in the usual #pylint meta-comment way)
> 
> Here: https://review.openstack.org/#/c/120320/
> 
> --
>  - Gus
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Ipset, merge refactor for J?

2014-09-15 Thread Miguel Angel Ajo Pelayo

During the ipset implementation, we designed a refactor [1] to clean up
the firewall driver a bit, and move all the ipset low-level knowledge
down into the IpsetManager.

I'd like to see this merged for J, and it's a bit of an urgent matter
to decide, because we keep adding small changes [2] [3] as a result of
the early testing, which break the refactor and will add extra work
that needs to be refactored too.

The advantage of merging now, vs in K, is having K & J share a more common
code base, which would help us during bug backports etc. in the future.

Shihanzhang and I are happy to see this merged during K, as it doesn't
incur functional changes: code blocks are just moved from the iptables
firewall driver to IpsetManager, and the corresponding tests are moved too.
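
As a rough illustration of the kind of encapsulation this is about
(simplified, not the exact code in the review; names are illustrative):

    class IpsetManager(object):
        """Owns the low-level ipset knowledge (naming, creation, members)."""

        def __init__(self, execute):
            self.execute = execute  # a rootwrap-aware command executor

        def set_name(self, ethertype, sg_id):
            # ipset set names are limited in length, so they get truncated.
            return ('NET%s%s' % (ethertype, sg_id))[:31]

        def refresh_members(self, ethertype, sg_id, member_ips):
            name = self.set_name(ethertype, sg_id)
            self.execute(['ipset', 'create', '-exist', name, 'hash:ip'])
            for ip in member_ips:
                self.execute(['ipset', 'add', '-exist', name, ip])

The firewall driver then only asks for set names and membership refreshes,
instead of building ipset commands itself.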

This is where I'd like to see the driver going, in conjunction with a separate
driver for Iptables+Ipset, but that second part is a change which
can't be done now (CI changes, documentation, etc.).


[1] https://review.openstack.org/#/c/120806/ 
[2] https://review.openstack.org/#/c/121455/
[3] to be done: not re-loading iptables when only ipset group members change.
[4] to be done: better locking strategy (brian haley is looking at that)


Best regards,
Miguel Ángel Ajo.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Ipset, merge refactor for J?

2014-09-17 Thread Miguel Angel Ajo Pelayo

Ack, thank you for the feedback. I will put it in WIP until we reach Kilo.

We will track any new bugfixes or changes so the refactor is ready
for early Kilo.

Then, after this is merged, I will tackle a second refactor to extract Ipset
out as we planned, into a new driver which extends IptablesFirewallDriver;
here I will need to coordinate with Juergen Brendel from Cisco:

https://review.openstack.org/#/c/119343/ (ebtables driver)
https://review.openstack.org/#/c/119352/ (ARP fix, based on ebtables)

I also know Jakub Libosvar had good plans to enhance the iptables firewall
driver and testing code, so it may be good to schedule some meetings
around this at the start of Kilo.

I would also like to explore extending the ebtables driver with Ipset too.

Best regards,
Miguel Ángel.


- Original Message -
> On Tue, Sep 16, 2014 at 7:24 AM, Sean Dague  wrote:
> > On 09/16/2014 03:57 AM, Thierry Carrez wrote:
> >> Miguel Angel Ajo Pelayo wrote:
> >>> During the ipset implementatio, we designed a refactor [1] to cleanup
> >>> the firewall driver a bit, and move all the ipset low-level knowledge
> >>> down into the  IpsetManager.
> >>>
> >>> I'd like to see this merged for J, and, it's a bit of an urgent matter
> >>> to decide, because we keep adding small changes [2] [3] fruit of the
> >>> early testing which break the refactor, and will add extra work which
> >>> needs to be refactored too.
> >>>
> >>> The advantage of merging now, vs in J, is having K & J share a more
> >>> common
> >>> code base, which would help us during bug backports/etc in the future.
> >>>
> >>> Shihanzhang and I, are happy to see this merge during K, as it doesn't
> >>> incur in functional changes, just code blocks are moved from the iptables
> >>> firewall driver to IpsetManager, and the corresponding tests are moved
> >>> too.
> >>> [...]
> >>
> >> IMHO code refactoring should be considered a superfluous change at this
> >> point in the cycle. The risk/benefit ratio is too high, and focus should
> >> be on bugfixing at this point.
> >
> > +1.
> >
> > Hold the refactoring until Kilo.
> >
> +1
> 
> At this point, we should be focusing on bugs which stabilize the
> release. Lets hold this for Kilo.
> 
> Thanks,
> Kyle
> 
> > -Sean
> >
> > --
> > Sean Dague
> > http://dague.net
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][qa] Parallel testing update

2014-01-02 Thread Miguel Angel Ajo Pelayo
Hi Salvatore!

   Good work on this.

   About the quota limit tests, I believe they could be unit-tested
instead of functionally tested.

   When running those tests in parallel with any other tests that rely
on having ports, networks or subnets available within quota, they have
a high chance of making those other tests fail.

Cheers,
Miguel Ángel Ajo



- Original Message -
> From: "Kyle Mestery" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Thursday, January 2, 2014 7:53:05 PM
> Subject: Re: [openstack-dev] [Neutron][qa] Parallel testing update
> 
> Thanks for the updates here Salvatore, and for continuing to push on
> this! This is all great work!
> 
> On Jan 2, 2014, at 6:57 AM, Salvatore Orlando  wrote:
> > 
> > Hi again,
> > 
> > I've now run the experimental job a good deal of times, and I've filed bugs
> > for all the issues which came out.
> > Most of them occurred no more than once among all test execution (I think
> > about 30).
> > 
> > They're all tagged with neutron-parallel [1]. for ease of tracking, I've
> > associated all the bug reports with neutron, but some are probably more
> > tempest or nova issues.
> > 
> > Salvatore
> > 
> > [1] https://bugs.launchpad.net/neutron/+bugs?field.tag=neutron-parallel
> > 
> > 
> > On 27 December 2013 11:09, Salvatore Orlando  wrote:
> > Hi,
> > 
> > We now have several patches under review which improve a lot how neutron
> > handles parallel testing.
> > In a nutshell, these patches try to ensure the ovs agent processes new,
> > removed, and updated interfaces as soon as possible,
> > 
> > These patches are:
> > https://review.openstack.org/#/c/61105/
> > https://review.openstack.org/#/c/61964/
> > https://review.openstack.org/#/c/63100/
> > https://review.openstack.org/#/c/63558/
> > 
> > There is still room for improvement. For instance the calls from the agent
> > into the plugins might be consistently reduced.
> > However, even if the above patches shrink a lot the time required for
> > processing a device, we are still hitting a hard limit with the execution
> > ovs commands for setting local vlan tags and clearing flows (or adding the
> > flow rule for dropping all the traffic).
> > In some instances this commands slow down a lot, requiring almost 10
> > seconds to complete. This adds a delay in interface processing which in
> > some cases leads to the hideous SSH timeout error (the same we see with
> > bug 1253896 in normal testing).
> > It is also worth noting that when this happens sysstat reveal CPU usage is
> > very close to 100%
> > 
> > From the neutron side there is little we can do. Introducing parallel
> > processing for interface, as we do for the l3 agent, is not actually a
> > solution, since ovs-vswitchd v1.4.x, the one executed on gate tests, is
> > not multithreaded. If you think the situation might be improved by
> > changing the logic for handling local vlan tags and putting ports on the
> > dead vlan, I would be happy to talk about that.
> > On my local machines I've seen a dramatic improvement in processing times
> > by installing ovs 2.0.0, which has a multi-threaded vswitchd. Is this
> > something we might consider for gate tests? Also, in order to reduce CPU
> > usage on the gate (and making tests a bit faster), there is a tempest
> > patch which stops creating and wiring neutron routers when they're not
> > needed: https://review.openstack.org/#/c/62962/
> > 
> > Even in my local setup which succeeds about 85% of times, I'm still seeing
> > some occurrences of the issue described in [1], which at the end of the
> > day seems a dnsmasq issue.
> > 
> > Beyond the 'big' structural problem discussed above, there are some minor
> > problems with a few tests:
> > 
> > 1) test_network_quotas.test_create_ports_until_quota_hit  fails about 90%
> > of times. I think this is because the test itself should be made aware of
> > parallel execution and asynchronous events, and there is a patch for this
> > already: https://review.openstack.org/#/c/64217
> > 
> > 2) test_attach_interfaces.test_create_list_show_delete_interfaces fails
> > about 66% of times. The failure is always on an assertion made after
> > deletion of interfaces, which probably means the interface is not deleted
> > within 5 seconds. I think this might be a consequence of the higher load
> > on the neutron service and we might try to enable multiple workers on the
> > gate to this aim, or just increase the tempest timeout. On a slightly
> > different note, allow me to say that the way assertion are made on this
> > test might be improved a bit. So far one has to go through the code to see
> > why the test failed.
> > 
> > Thanks for reading this rather long message.
> > Regards,
> > Salvatore
> > 
> > [1] https://lists.launchpad.net/openstack/msg23817.html
> > 
> > 
> > 
> > 
> > On 2 December 2013 22:01, Kyle Mestery (kmestery) 
> > wrote:
> > Yes, this is all great Salvatore and Armando! Thank you for all of this
> > work
> 

Re: [openstack-dev] [nova][neutron]About creating vms without ip address

2014-01-22 Thread Miguel Angel Ajo Pelayo


Hi Dong,
 
Can you give an example of what you get, and what you were expecting
exactly?

I have a similar problem with one operator, where they assign you sparse
blocks of IP addresses (floating IPs) directly routed to your machine, and
they also assign the virtual MAC addresses from their API.

Direct routing means that the subnet router routes traffic for your IP from
outside the subnet directly through your subnet to your machine, and the
traffic (with the external IP) is routed back through the subnet to that
router.

   Cheers,
 
- Original Message -
> From: "Dong Liu" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Tuesday, January 21, 2014 9:52:44 AM
> Subject: [openstack-dev] [nova][neutron]About creating vms without ip address
> 
> Hi fellow OpenStackers
> 
> I found that we could not create vms without an ip address. But in the
> telecom scene, the ip address is usually managed by the telecom network
> elements themselves. So they need a vm without an ip address and configure
> it through some specific method. How can we provide this kind of vm?
> 
> I think providing the ability to allow a tenant to create a vm without an ip
> address is necessary.
> 
> What's your opinion?
> 
> 
> Regards
> 
> Dong Liu
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][ha][agents] host= parameter

2014-01-23 Thread Miguel Angel Ajo Pelayo
Hi!,
  
I want to ask, specifically, about the intended purpose of the
host=... parameter in the neutron-agents (undocumented AFAIK).

I haven't found any documentation about it. As far as I discovered,
it's being used to provide Active/Passive replication of agents, as
you can make agents on different hosts register with the same ID in
neutron
(of course, *never* at the same time).

For example, if an L3 router host dies somewhere, you can start its passive
replica using the same host= id in the agent's .ini file, and it will set up
the exact same virtual routers and connectivity.
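
For example, something like this in the l3_agent.ini of both the active
node and its standby (the value is illustrative):

    [DEFAULT]
    # Both the active agent and its cold standby use the same logical host
    # identifier, so whichever one is running owns the same routers in neutron.
    host = l3-agent-pair-1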

My main concern is: as it doesn't seem to be documented, is it really
intended for this?

If it is, I suppose it would be good if I spent a few minutes documenting
it in the agent example config files.

Thanks in advance for any feedback,

Cheers,
Miguel Ángel Ajo




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ha][agents] host= parameter

2014-01-29 Thread Miguel Angel Ajo Pelayo
Hi, any feedback on this?

   If there is none, and it does seem right, I will go ahead and add
documentation for this parameter to the agent config files.

Best Regards,
Miguel Ángel.

- Original Message -
> From: "Miguel Angel Ajo Pelayo" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Cc: "Fabio M. Di Nitto" 
> Sent: Friday, January 24, 2014 8:09:21 AM
> Subject: [openstack-dev] [neutron][ha][agents] host= parameter
> 
> Hi!,
>   
> I want to ask, specifically, about the intended purpose of the
> host=... parameter in the neutron-agents (undocumented AFAIK).
> 
> I haven't found any documentation about it. As far as I discovered,
> it's being used to provide Active/Passive replication of agents, as
> you can manage agent's on different hosts to register with the same ID to
> neutron
> (of course, *never* at the same time).
>  
> For example, if an L3 router host dies somewhere, you can start it's passive
> using the same host= id in the agent .ini file, and it will setup
> the exact same virtual routers and connectivity.
> 
> My main concern, is, as it doesn't seem to be documented, is it really
> purposed for this?,
> 
> In that case, I suppose it would be good if I spend a few minutes to
> document
> the agent example config files.
> 
> Thanks in advance for any feedback,
> 
> Cheers,
> Miguel Ángel Ajo
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] backporting database migrations to stable/havana

2014-02-04 Thread Miguel Angel Ajo Pelayo


Hi Ralf, I see we're in the same boat on this.

   It seems that a database migration introduces complications
for future upgrades. It's not an easy path.

   My aim when I started this backport was trying to scale out
neutron-server, starting several instances together. But I'm afraid
we would find more bugs like this requiring db migrations.

   Have you actually tested running multiple servers in Icehouse?
I just didn't have the time, but it's on my roadmap.

   If that fixes the problem, maybe some heavier approach (like
table locking) could be used in the backport, without introducing
a new/conflicting migration.



About the DB migration backport, the actual problem is this:

you have migrations A, B, C, D, E, F crossing the version boundary |, and the
migrations form a linked list.

havana   | icehouse
         |
A<-B<-C<-|-D<-E<-F

the problem is that if you want to bring E back, you have to fix E, but the 
first
migration on the next release keeps pointing to C

havana     | icehouse
           |
A<-B<-C<-E |
      \----|--D<-E<-F


at this moment in our backport, we didn't even fix E, so it's worse... E points
to D, which doesn't even exist yet in havana

You can think of fixing this tree by putting E between B and C, but
then E is replicated, and F points to an E that exists twice.


and an almost good fix would be:

havana     | icehouse
           |
A<-B<-E2<-C|--D<-E<-F

but then the code in E will fail because the unique constraint was already
applied, so (if I'm not wrong) we could do it in two steps:



1st step) fix E in icehouse to skip the real unique constraint insertion if it
already exists:

havana   | icehouse
         |
A<-B<-C<-|--D<-*E*<-F
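
A minimal alembic-style sketch of what that first step could look like
(table and constraint names are illustrative, not necessarily the real ones):

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        # Skip creating the unique constraint if a backported havana
        # migration (E2) already added it; otherwise this migration would
        # fail on a havana->icehouse upgrade.
        inspector = sa.inspect(op.get_bind())
        existing = [uc['name']
                    for uc in inspector.get_unique_constraints('agents')]
        if 'uniq_agents0agent_type0host' not in existing:
            op.create_unique_constraint('uniq_agents0agent_type0host',
                                        'agents', ['agent_type', 'host'])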
 
2nd step) insert E2 between B and C to keep the first icehouse
reference happy:

havana      | icehouse
            |
A<-B<-E<-C<-|--D<-*E*<-F



What do you think?


- Original Message -
> From: "Ralf Haferkamp" 
> To: openstack-dev@lists.openstack.org
> Sent: Tuesday, February 4, 2014 4:02:36 PM
> Subject: [openstack-dev] [Neutron] backporting database migrations to 
> stable/havana
> 
> Hi,
> 
> I am currently trying to backport the fix for
> https://launchpad.net/bugs/1254246 to stable/havana. The current state of
> that
> is here: https://review.openstack.org/#/c/68929/
> 
> However, the fix requires a database migration to be applied (to add a unique
> constraint to the agents table). And the current fix linked above will AFAIK
> break havana->icehouse migrations. So I wonder what would be the correct way
> to
> do backport database migrations in neutron using alembic? Is there even a
> correct way, or are backports of database migrations a no go?
> 
> --
> regards,
> Ralf
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] backporting database migrations to stable/havana

2014-02-04 Thread Miguel Angel Ajo Pelayo


- Original Message -
> From: "Miguel Angel Ajo Pelayo" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Tuesday, February 4, 2014 6:36:16 PM
> Subject: Re: [openstack-dev] [Neutron] backporting database migrations
> to  stable/havana
> 
> 
> 
> Hi Ralf, I see we're on the same boat for this.
> 
>It seems that a database migration introduces complications
> for future upgrades. It's not an easy path.
> 
>My aim when I started this backport was trying to scale out
> neutron-server, starting several ones together. But I'm afraid
> we would find more bugs like this requiring db migrations.
> 
>Have you actually tested running multiple servers in icehouse?,
> I just didn't have the time, but it's in my roadmap.
> 
>If that fixes the problem, may be some heavier approach (like
> table locking) could be used in the backport, without introducing
> a new/conflicting migration.
> 
> 
> 
> About the DB migration backport problem, the actual problem is:
> 
> you have migrations A,B,C,D,E,F and cross version boundaries |, them
> migrations
> are a linked list.
> 
> havana   | icehouse
>  |
> A<-B<-C<-|-D<-E<-F
> 
> the problem is that if you want to bring E back, you have to fix E, but the
> first
> migration on the next release keeps pointing to C
> 
> havana | icehouse
>|
> A<-B<-C<-E |
> \--|--D<-E<-F
> 
> 
> at this moment in our backport, we didn't even fix E, so it's worse... E
> points
> to D that it doesn't even exist yet
> 
> you can think of fixing this tree, by putting E in the middle of B and C, but
> then E is replicated, and F points to an E that exists twice.
> 
> 
> and an almost good fix would be:
> 
> havana | icehouse
>|
> A<-B<-E2<-C|--D<-E<-F
> 
> but then the code in E will fail because the unique constraint was already
> applied, so in
> 2 steps we could do (if I'm not wrong)
> 
> 
> 
> 1st step) fix E in icehouse to skip the real unique constraint insertion if
> it does already exist:
> 
> havana   | icehouse
>  |
> A<-B<-C<-|--D<-*E*<-F
>  
> 2nd step) insert E2 in the middle of B and C to keep the icehouse first
> reference happy:
> 
> havana  | icehouse
> |
> A<-B<-E<-C<-|--D<-*E*<-F
> 
> 

Sorry, I had a typo in this last ascii figure: (missing 2 in the E)

havana       | icehouse
             |
A<-B<-E2<-C<-|--D<-*E*<-F


> 
> What do you think?
> 
> 
> - Original Message -
> > From: "Ralf Haferkamp" 
> > To: openstack-dev@lists.openstack.org
> > Sent: Tuesday, February 4, 2014 4:02:36 PM
> > Subject: [openstack-dev] [Neutron] backporting database migrations to
> > stable/havana
> > 
> > Hi,
> > 
> > I am currently trying to backport the fix for
> > https://launchpad.net/bugs/1254246 to stable/havana. The current state of
> > that
> > is here: https://review.openstack.org/#/c/68929/
> > 
> > However, the fix requires a database migration to be applied (to add a
> > unique
> > constraint to the agents table). And the current fix linked above will
> > AFAIK
> > break havana->icehouse migrations. So I wonder what would be the correct
> > way
> > to
> > do backport database migrations in neutron using alembic? Is there even a
> > correct way, or are backports of database migrations a no go?
> > 
> > --
> > regards,
> > Ralf
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] backporting database migrations to stable/havana

2014-02-05 Thread Miguel Angel Ajo Pelayo


- Original Message -
> From: "Ralf Haferkamp" 
> To: openstack-dev@lists.openstack.org
> Sent: Wednesday, February 5, 2014 12:47:24 PM
> Subject: Re: [openstack-dev] [Neutron] backporting database migrations
> to  stable/havana
> 
> Hi,
> 
> On Tue, Feb 04, 2014 at 12:36:16PM -0500, Miguel Angel Ajo Pelayo wrote:
> > 
> > 
> > Hi Ralf, I see we're on the same boat for this.
> > 
> >It seems that a database migration introduces complications
> > for future upgrades. It's not an easy path.
> > 
> >My aim when I started this backport was trying to scale out
> > neutron-server, starting several ones together. But I'm afraid
> > we would find more bugs like this requiring db migrations.
> > 
> >Have you actually tested running multiple servers in icehouse?,
> > I just didn't have the time, but it's in my roadmap.
> I actually ran into the bug in a single server setup. But that seems to
> happen
> pretty rarely.

Oops, really? Then it's worse than I thought; thank you for this feedback.

> 
> >If that fixes the problem, may be some heavier approach (like
> > table locking) could be used in the backport, without introducing
> > a new/conflicting migration.
> Hm, there seems to be no clean way to do table locking in sqlalchemy. At
> least I
> didn't find one.

I must admit I haven't looked at this yet; if we don't have table locking it's
hard to
think of a proper solution for similar problems.

>  
> > About the DB migration backport problem, the actual problem is:
> [..]
> > 1st step) fix E in icehouse to skip the real unique constraint insertion if
> > it does already exist:
> > 
> > havana   | icehouse
> >  |
> > A<-B<-C<-|--D<-*E*<-F
> >  
> > 2nd step) insert E2 in the middle of B and C to keep the icehouse first
> > reference happy:
> > 
> > havana  | icehouse
> > |
> > A<-B<-E2<-C<-|--D<-*E*<-F
> > 
> > What do you think?
> I agree, that would likely be the right fix. But as it seems there are some
> (more or less) strict rules about stable backports of migrations (which I
> understand as it can get really tricky). So a solution that doesn't require
> them would probabyl be preferable.


Yes, we must think about something else, but I'm afraid that either we manage
to get table locking, or we will need this DB backport.

The 2-step process seems correct to me, but we will need approval from the
community.
I believe that a bug that breaks agent registration or listing is *bad* enough.
But
anyway, while the gate is not stable there are more important things.

I like what the nova guys do; we should definitely do the same at the end
of the Icehouse cycle: add a set of empty placeholder DB migrations which
could be used for this purpose.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [HA] blueprint: Provide agent service status which can be queried via init.d script or parent process

2014-02-06 Thread Miguel Angel Ajo Pelayo


During the design of HA deployments for Neutron, I have found
that agents can run into problems and keep running,
but they have no method to expose their status to a parent process,
or one which could be queried via an init.d script.

So I'm proposing this blueprint,

https://blueprints.launchpad.net/neutron/+spec/agent-service-status

to make agents expose internal status conditions via the filesystem,
as an extension of the current pid file.

This way, permanent or transient error conditions could be handled
by standard monitoring (or HA) solutions, to notify or take action
as appropriate.
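
As a rough sketch of the mechanism (file layout and keys are illustrative
only, not part of the blueprint text):

    import json
    import time

    def report_status(pid_file, status, detail=''):
        # Keep a small status file next to the pid file, so an init.d script
        # or a monitoring/HA tool can read it without talking to the agent.
        with open(pid_file + '.status', 'w') as f:
            json.dump({'status': status,      # e.g. 'ok', 'degraded', 'error'
                       'detail': detail,
                       'updated_at': time.time()}, f)

    # e.g. the dhcp agent could call:
    #     report_status(pid_file, 'error', 'dnsmasq child process died')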



It's a simple change that can make HA deployments more robust,
and capable of handling situations like this:

(If the neutron-spawned dnsmasq dies, neutron-dhcp-agent is totally unaware of it)
https://bugs.launchpad.net/neutron/+bug/1257524 

We have the exact same problem with the other agents and sub-processes.

So I'm interested in getting this done for icehouse-3.

Any feedback?

Best regards, 
Miguel Ángel Ajo.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [stable/havana] cherry backport, multiple external networks, passing tests

2014-02-12 Thread Miguel Angel Ajo Pelayo

Could any core developer check/approve this if it looks good?

https://review.openstack.org/#/c/68601/

I'd like to get it in for the new stable/havana release
if possible.


Best regards,
Miguel Ángel


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [stable/havana] cherry backport, multiple external networks, passing tests

2014-02-12 Thread Miguel Angel Ajo Pelayo
Thank you Alan, 

   I wasn't aware of a stable freeze being active at the moment.

   Then we must target it for 2013.2.3.

   Cheers,
Miguel Ángel

- Original Message -
> From: "Alan Pevec" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Cc: "openstack-stable-maint" 
> Sent: Wednesday, February 12, 2014 12:05:29 PM
> Subject: Re: [openstack-dev] [neutron] [stable/havana] cherry backport, 
> multiple external networks, passing tests
> 
> 2014-02-12 10:48 GMT+01:00 Miguel Angel Ajo Pelayo :
> > Could any core developer check/approve this if it does look good?
> > https://review.openstack.org/#/c/68601/
> >
> > I'd like to get it in for the new stable/havana release
> > if it's possible.
> 
> I'm afraid it's too late for 2013.2.2 (to be released tomorrow after week
> delay)
> It would be the same answer as
> http://lists.openstack.org/pipermail/openstack-stable-maint/2014-February/002124.html
> - both linked bugs are Medium only and known for long time, so
> targeting 2013.2.3 is more reasonable.
> 
> 
> Cheers,
> Alan
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-stable-maint] Stable gate status?

2014-02-20 Thread Miguel Angel Ajo Pelayo

I rebased the https://review.openstack.org/#/c/72576/ no-op change.



- Original Message -
> From: "Alan Pevec" 
> To: "openstack-stable-maint" 
> Cc: "OpenStack Development Mailing List" 
> Sent: Tuesday, February 18, 2014 7:52:23 PM
> Subject: Re: [openstack-dev] [Openstack-stable-maint] Stable gate status?
> 
> 2014-02-11 16:14 GMT+01:00 Anita Kuno:
> > On 02/11/2014 04:57 AM, Alan Pevec wrote:
> >> Hi Mark and Anita,
> >>
> >> could we declare stable/havana neutron gate jobs good enough at this
> >> point?
> >> There are still random failures as this no-op change shows
> >> https://review.openstack.org/72576
> >> but I don't think they're stable/havana specific.
> ...
> 
> > I will reaffirm here what I had stated in IRC.
> >
> > If Mark McClain gives his assent for stable/havana patches to be
> > approved, I will not remove Neutron stable/havana patches from the gate
> > queue before they start running tests. If after they start running
> > tests, they demonstrate that they are failing, I will remove them from
> > the gate as a means to keep the gate flowing. If the stable/havana gate
> > jobs are indeed stable, I will not be removing any patches that should
> > be merged.
> 
> As discussed on #openstack-infra last week, stable-maint team should
> start looking more closely at Tempest stable/havana branch and Matthew
> Treinish from Tempest core joined the stable-maint team to help us
> there.
> 
> In the meantime, we need to do something more urgently, there are
> remaining failures showing up frequently in stable/havana jobs which
> seem to have been fixed or at least improved on master:
> 
> * bug 1254890 - "Timed out waiting for thing ... to become ACTIVE"
> causes tempest-dsvm-* failures
>   resolution unclear?
> 
> * bug 1253896 - "Attempts to verify guests are running via SSH fails.
> SSH connection to guest does not work."
>   based on Salvatore's comment 56, I've marked it as Won't Fix in
> neutron/havana and opened tempest/havana to propose what Tempest test
> or jobs should skip for Havana. Please chime-in in the bug if you have
> suggestions.
> 
> 
> Cheers,
> Alan
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] when icehouse will be frozen

2014-02-20 Thread Miguel Angel Ajo Pelayo

If I understood correctly, as long as you have an active review for
your change, and some level of interest/approval, then you should
be ok to finish it during the rest of the Icehouse cycle; but of course,
your code needs to be approved to become part of Icehouse.

Cheers,
Miguel Ángel.

- Original Message -
> From: "马煜" 
> To: openstack-dev@lists.openstack.org
> Sent: Thursday, February 20, 2014 1:52:04 AM
> Subject: [openstack-dev] [neutron] when icehouse will be frozen
> 
> Who knows when the icehouse version will be frozen?
> 
> My bp on the ml2 driver has been approved and the code is under review,
> but I have some trouble deploying the third-party ci on which the tempest tests run.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Should we clean resource first then do assert in test?

2014-11-18 Thread Miguel Angel Ajo Pelayo
Correct.

So, it's better to keep tests clean of resource cleanups, for readability
purposes and easier maintenance.


Sent from my Android phone using TouchDown (www.nitrodesk.com)


-Original Message-
From: Kevin Benton [blak...@gmail.com]
Received: Wednesday, 19 Nov 2014, 5:18
To: OpenStack Development Mailing List (not for usage questions) 
[openstack-dev@lists.openstack.org]
Subject: Re: [openstack-dev] [Neutron] Should we clean resource first then do 
assert in test?

It's basically a style argument at this point. All resources are destroyed
at the DB level at the end of each test regardless of what happens.[1]

1.
https://github.com/openstack/neutron/blob/33c60c18cc78fb558ab8464c007559e44100ebba/neutron/tests/unit/testlib_api.py#L74

On Tue, Nov 18, 2014 at 7:36 PM, Damon Wang  wrote:

> Hi,
>
> @Michali asked me a question in a review about whether cleaning resources
> after the assert is valid
> 
> .
>
> Should we clean resource before assert?
>
> Regards
> Wei Wang
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Miguel Angel Ajo Pelayo


- Original Message -
> Miguel Angel Ajo wrote:
> > [...]
> >The overhead comes from python startup time + rootwrap loading.
> > 
> >I suppose that rootwrap was designed for lower amount of system calls
> > (nova?).
> 
> Yes, it was not really designed to escalate rights on hundreds of
> separate shell commands in a row.
> 
> >And, I understand what rootwrap provides, a level of filtering that
> > sudo cannot offer. But it raises some question:
> > 
> > 1) It's actually someone using rootwrap in production?
> > 
> > 2) What alternatives can we think about to improve this situation.
> > 
> >0) already being done: coalescing system calls. But I'm unsure that's
> > enough. (if we coalesce 15 calls to 3 on this system we get:
> > 192*3*0.3/60 ~=3 minutes overhead on a 10min operation).
> > 
> >a) Rewriting rules into sudo (to the extent that it's possible), and
> > live with that.
> 
> We used to use sudo and a sudoers file. The rules were poorly written,
> and there is just so much you can check in a sudoers file. But the main
> issue was that the sudoers file lived in packaging
> (distribution-dependent), and was not maintained in sync with the code.
> Rootwrap let us to maintain the rules (filters) in sync with the code
> calling them.

Yes, from a security & maintenance point of view, it was a smart decision. I'm thinking
of automatically converting rootwrap rules to sudoers, but that's very 
limited, especially for the "ip netns exec ..." case.
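
For example, the best a sudoers rule could express for that case would be
something like (just a sketch, untested):

    neutron ALL = (root) NOPASSWD: /sbin/ip netns exec *

and that trailing wildcard ends up matching any command run inside the
namespace, so it's effectively unrestricted root anyway.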


> To work around perf issues, you still have the option of running with a
> wildcard sudoer file (and root_wrapper = sudo). That's about as safe as
> running with a badly-written or badly-maintained sudo rules anyway.

That's what I used for my "benchmark". I just wonder how possible
it is to get command injection into neutron, via the API or the DB.

> 
> > [...]
> >d) Re-writing rootwrap into C (it's 600 python LOCs now).
> 
> (d2) would be to explore running rootwrap under Pypy. Testing that is on
> my TODO list, but $OTHERSTUFF got into the way. Feel free to explore
> that option.

I tried it on my system right now; it takes more time to boot up. The PyPy JIT 
is awesome at runtime, but it seems that boot time is slower.

I also played a little with shedskin (py->c++ converter), but it 
doesn't support all the python libraries, dynamic typing, or parameter 
unpacking.

That could be another approach: writing a simplified rootwrap in Python, and
having it automatically converted to C++.

f) haleyb on IRC is pointing me to another approach Carl Baldwin is
pushing https://review.openstack.org/#/c/67490/ towards command execution 
coalescing.


> 
> >e) Doing the command filtering at neutron-side, as a library and live
> > with sudo with simple filtering. (we kill the python/rootwrap startup
> > overhead).
> 
> That's as safe as running with a wildcard sudoers file (neutron user can
> escalate to root). Which may just be acceptable in /some/ scenarios.

I think it can be safer (from the command injection point of view).

> 
> --
> Thierry Carrez (ttx)
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-10 Thread Miguel Angel Ajo Pelayo
Hi Carl, thank you, good idea.

I started reviewing it, but I will do it more carefully tomorrow morning.



- Original Message -
> All,
> 
> I was writing down a summary of all of this and decided to just do it
> on an etherpad.  Will you help me capture the big picture there?  I'd
> like to come up with some actions this week to try to address at least
> part of the problem before Icehouse releases.
> 
> https://etherpad.openstack.org/p/neutron-agent-exec-performance
> 
> Carl
> 
> On Mon, Mar 10, 2014 at 5:26 AM, Miguel Angel Ajo 
> wrote:
> > Hi Yuri & Stephen, thanks a lot for the clarification.
> >
> > I'm not familiar with unix domain sockets at low level, but , I wonder
> > if authentication could be achieved just with permissions (only users in
> > group "neutron" or group "rootwrap" accessing this service.
> >
> > I find it an interesting alternative, to the other proposed solutions, but
> > there are some challenges associated with this solution, which could make
> > it
> > more complicated:
> >
> > 1) Access control, file system permission based or token based,
> >
> > 2) stdout/stderr/return encapsulation/forwarding to the caller,
> >if we have a simple/fast RPC mechanism we can use, it's a matter
> >of serializing a dictionary.
> >
> > 3) client side implementation for 1 + 2.
> >
> > 4) It would need to accept new domain socket connections in green threads
> > to
> > avoid spawning a new process to handle a new connection.
> >
> > The advantages:
> >* we wouldn't need to break the only-python-rule.
> >* we don't need to rewrite/translate rootwrap.
> >
> > The disadvantages:
> >   * it needs changes on the client side (neutron + other projects),
> >
> >
> > Cheers,
> > Miguel Ángel.
> >
> >
> >
> > On 03/08/2014 07:09 AM, Yuriy Taraday wrote:
> >>
> >> On Fri, Mar 7, 2014 at 5:41 PM, Stephen Gran
> >> mailto:stephen.g...@theguardian.com>>
> >> wrote:
> >>
> >> Hi,
> >>
> >> Given that Yuriy says explicitly 'unix socket', I dont think he
> >> means 'MQ' when he says 'RPC'.  I think he just means a daemon
> >> listening on a unix socket for execution requests.  This seems like
> >> a reasonably sensible idea to me.
> >>
> >>
> >> Yes, you're right.
> >>
> >> On 07/03/14 12:52, Miguel Angel Ajo wrote:
> >>
> >>
> >> I thought of this option, but didn't consider it, as It's somehow
> >> risky to expose an RPC end executing priviledged (even filtered)
> >> commands.
> >>
> >>
> >> subprocess module have some means to do RPC securely over UNIX sockets.
> >> I does this by passing some token along with messages. It should be
> >> secure because with UNIX sockets we don't need anything stronger since
> >> MITM attacks are not possible.
> >>
> >> If I'm not wrong, once you have credentials for messaging, you can
> >> send messages to any end, even filtered, I somehow see this as a
> >> higher
> >> risk option.
> >>
> >>
> >> As Stephen noted, I'm not talking about using MQ for RPC. Just some
> >> local UNIX socket with very simple RPC over it.
> >>
> >> And btw, if we add RPC in the middle, it's possible that all those
> >> system call delays increase, or don't decrease all it'll be
> >> desirable.
> >>
> >>
> >> Every call to rootwrap would require the following.
> >>
> >> Client side:
> >> - new client socket;
> >> - one message sent;
> >> - one message received.
> >>
> >> Server side:
> >> - accepting new connection;
> >> - one message received;
> >> - one fork-exec;
> >> - one message sent.
> >>
> >> This looks like way simpler than passing through sudo and rootwrap that
> >> requires three exec's and whole lot of configuration files opened and
> >> parsed.
> >>
> >> --
> >>
> >> Kind regards, Yuriy.
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-11 Thread Miguel Angel Ajo Pelayo

I have included on the etherpad the option to write a sudo 
plugin (or several), specific to OpenStack.


And this is a test with shedskin; I suppose that in more complicated
dependency scenarios it should perform better.

[majopela@redcylon tmp]$ cat <<EOF > test.py
> import sys
> print "hello world"
> sys.exit(0)
> EOF

[majopela@redcylon tmp]$ time python test.py
hello world

real0m0.016s
user0m0.015s
sys 0m0.001s


[majopela@redcylon tmp]$ shedskin test.py
*** SHED SKIN Python-to-C++ Compiler 0.9.4 ***
Copyright 2005-2011 Mark Dufour; License GNU GPL version 3 (See LICENSE)

[analyzing types..]
100% 
[generating c++ code..]
[elapsed time: 1.59 seconds]
[majopela@redcylon tmp]$ make 
g++  -O2 -march=native -Wno-deprecated  -I. 
-I/usr/lib/python2.7/site-packages/shedskin/lib /tmp/test.cpp 
/usr/lib/python2.7/site-packages/shedskin/lib/sys.cpp 
/usr/lib/python2.7/site-packages/shedskin/lib/re.cpp 
/usr/lib/python2.7/site-packages/shedskin/lib/builtin.cpp -lgc -lpcre  -o test
[majopela@redcylon tmp]$ time ./test
hello world

real0m0.003s
user0m0.000s
sys 0m0.002s


- Original Message -
> We had this same issue with the dhcp-agent. Code was added that paralleled
> the initial sync here: https://review.openstack.org/#/c/28914/ that made
> things a good bit faster if I remember correctly. Might be worth doing
> something similar for the l3-agent.
> 
> Best,
> 
> Aaron
> 
> 
> On Mon, Mar 10, 2014 at 5:07 PM, Joe Gordon < joe.gord...@gmail.com > wrote:
> 
> 
> 
> 
> 
> 
> On Mon, Mar 10, 2014 at 3:57 PM, Joe Gordon < joe.gord...@gmail.com > wrote:
> 
> 
> 
> I looked into the python to C options and haven't found anything promising
> yet.
> 
> 
> I tried Cython, and RPython, on a trivial hello world app, but git similar
> startup times to standard python.
> 
> The one thing that did work was adding a '-S' when starting python.
> 
> -S Disable the import of the module site and the site-dependent manipulations
> of sys.path that it entails.
> 
> Using 'python -S' didn't appear to help in devstack
> 
> #!/usr/bin/python -S
> # PBR Generated from u'console_scripts'
> 
> import sys
> import site
> site.addsitedir('/mnt/stack/oslo.rootwrap/oslo/rootwrap')
> 
> 
> 
> 
> 
> 
> I am not sure if we can do that for rootwrap.
> 
> 
> jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
> hello world
> 
> real 0m0.021s
> user 0m0.000s
> sys 0m0.020s
> jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
> hello world
> 
> real 0m0.021s
> user 0m0.000s
> sys 0m0.020s
> jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
> hello world
> 
> real 0m0.010s
> user 0m0.000s
> sys 0m0.008s
> 
> jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
> hello world
> 
> real 0m0.010s
> user 0m0.000s
> sys 0m0.008s
> 
> 
> 
> On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo Pelayo <
> mangel...@redhat.com > wrote:
> 
> 
> Hi Carl, thank you, good idea.
> 
> I started reviewing it, but I will do it more carefully tomorrow morning.
> 
> 
> 
> - Original Message -
> > All,
> > 
> > I was writing down a summary of all of this and decided to just do it
> > on an etherpad. Will you help me capture the big picture there? I'd
> > like to come up with some actions this week to try to address at least
> > part of the problem before Icehouse releases.
> > 
> > https://etherpad.openstack.org/p/neutron-agent-exec-performance
> > 
> > Carl
> > 
> > On Mon, Mar 10, 2014 at 5:26 AM, Miguel Angel Ajo < majop...@redhat.com >
> > wrote:
> > > Hi Yuri & Stephen, thanks a lot for the clarification.
> > > 
> > > I'm not familiar with unix domain sockets at low level, but , I wonder
> > > if authentication could be achieved just with permissions (only users in
> > > group "neutron" or group "rootwrap" accessing this service.
> > > 
> > > I find it an interesting alternative, to the other proposed solutions,
> > > but
> > > there are some challenges associated with this solution, which could make
> > > it
> > > more complicated:
> > > 
> > > 1) Access control, file system permission based or token based,
> > > 
> > > 2) stdout/stderr/return encapsulation/forwarding to the caller,
> > > if we have a simple/fast RPC mechanism we can use, it's a matter
> > > of serializing a dictionary.
> > > 
> > > 3) client side implementation for 1 + 2.
> > > 
> >

Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-23 Thread Miguel Angel Ajo Pelayo

Hi!

  This is an interesting topic. I don't know if there's any way to
target connection tracker entries by MAC, but that'd be the ideal solution.

  I also understand the RETURN for RELATED,ESTABLISHED is there for
performance reasons, and removing it would lead to longer table evaluation,
and degraded packet throughput.

  Temporarily removing this entry doesn't seem like a good solution
to me, as we can't really know how long we need to remove this rule to
induce the connection to close at both ends (it will only close if any
new activity happens and the timeout is exhausted afterwards).


  Also, I'm not sure if removing all the conntrack entries that match a
certain filter would be enough, as it may only lead to a full reevaluation
of the rules for the next packet of the cleared connections (maybe I'm missing 
some corner detail here).
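
  For example, for the ssh rule of the example below, it would be something
like this (untested, and assuming those filters are enough to identify the
connections that matched the deleted rule):

    conntrack -D -p tcp --orig-port-dst 22 --orig-dst 10.0.0.5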


Best regards,
Miguel Ángel.



- Original Message - 

> Hi!

> I am working on a bug " ping still working once connected even after related
> security group rule is deleted" (
> https://bugs.launchpad.net/neutron/+bug/1335375 ). The gist of the problem
> is the following: when we delete a security group rule the corresponding
> rule in iptables is also deleted, but the connection, that was allowed by
> that rule, is not being destroyed.
> The reason for such behavior is that in iptables we have the following
> structure of a chain that filters input packets for an interface of an
> istance:

> Chain neutron-openvswi-i830fa99f-3 (1 references)
> pkts bytes target prot opt in out source destination
> 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID /* Drop packets that
> are not associated with a state. */
> 0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED /* Direct
> packets associated with a known session to the RETURN chain. */
> 0 0 RETURN udp -- * * 10.0.0.3 0.0.0.0/0 udp spt:67 dpt:68
> 0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 match-set IPv43a0d3610-8b38-43f2-8
> src
> 0 0 RETURN tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 < rule that allows
> ssh on port 22
> 1 84 RETURN icmp -- * * 0.0.0.0/0 0.0.0.0/0
> 0 0 neutron-openvswi-sg-fallback all -- * * 0.0.0.0/0 0.0.0.0/0 /* Send
> unmatched traffic to the fallback chain. */

> So, if we delete rule that allows tcp on port 22, then all connections that
> are already established won't be closed, because all packets would satisfy
> the rule:
> 0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED /* Direct
> packets associated with a known session to the RETURN chain. */

> I seek advice on the way how to deal with the problem. There are a couple of
> ideas how to do it (more or less realistic):

> * Kill the connection using conntrack

> The problem here is that it is sometimes impossible to tell which connection
> should be killed. For example there may be two instances running in
> different namespaces that have the same ip addresses. As a compute doesn't
> know anything about namespaces, it cannot distinguish between the two
> seemingly identical connections:
> $ sudo conntrack -L | grep "10.0.0.5"
> tcp 6 431954 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 sport=60723 dport=22
> src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60723 [ASSURED] mark=0 use=1
> tcp 6 431976 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 sport=60729 dport=22
> src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60729 [ASSURED] mark=0 use=1

> I wonder whether there is any way to search for a connection by destination
> MAC?

> * Delete iptables rule that directs packets associated with a known session
> to the RETURN chain

> It will force all packets to go through the full chain each time and this
> will definitely make the connection close. But this will strongly affect the
> performance. Probably there may be created a timeout after which this rule
> will be restored, but it is uncertain how long should it be.

> Please share your thoughts on how it would be better to handle it.

> Thanks in advance,
> Elena

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [stable] Tool to aid in scalability problems mitigation.

2014-10-23 Thread Miguel Angel Ajo Pelayo


Recently, we have identified clients with problems due to the 
bad scalability of security groups in Havana and Icehouse, which 
was addressed during Juno here [1] [2].

This situation is identified by blinking agents (going UP/DOWN),
high AMQP load, high neutron-server load, and timeouts from openvswitch
agents when trying to contact neutron-server "security_group_rules_for_devices".

Doing a [1] backport involves many dependent patches related 
to the general RPC refactor in neutron (which modifies all plugins), 
and subsequent ones fixing a few bugs. Sounds risky to me. [2] Introduces 
new features and it's dependent on features which aren't available on 
all systems.

To remediate this on production systems, I wrote a quick tool
to help report security groups and mitigate the problem
by writing almost-equivalent rules [3]. 
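
For example, the kind of transformation it suggests is (conceptual only,
IDs and CIDRs made up):

    # instead of a rule referencing a remote group, which the server expands
    # into one iptables rule per group member on every compute node:
    neutron security-group-rule-create --direction ingress --remote-group-id default default
    # write an almost-equivalent rule based on the tenant subnet CIDR:
    neutron security-group-rule-create --direction ingress --remote-ip-prefix 10.0.0.0/24 default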

We believe this tool would be better made available to the wider community,
under better review and testing, and, since it doesn't modify any behavior 
or actual code in neutron, I'd like to propose it for inclusion into, at least, 
the Icehouse stable branch, where it's most relevant.

I know the usual way is to go master->Juno->Icehouse, but at this moment
the tool is only interesting for Icehouse (and Havana), although I believe 
it could be extended to clean up orphaned resources, or any other cleanup 
tasks; in that case it could make sense to be available for K->J->I.
 
As a reference, I'm leaving links to outputs from the tool [4][5]
  
Looking forward to getting some feedback,
Miguel Ángel.


[1] https://review.openstack.org/#/c/111876/ security group rpc refactor
[2] https://review.openstack.org/#/c/111877/ ipset support
[3] https://github.com/mangelajo/neutrontool
[4] http://paste.openstack.org/show/123519/
[5] http://paste.openstack.org/show/123525/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-24 Thread Miguel Angel Ajo Pelayo

Nice! It sounds like a good mechanism to handle this. 


Defining a good mechanism here is crucial; we must be aware of the
2^16 zones limit [1], and of the fact that ipset rules will coalesce connections
to lots of different IPs over the same rule.

Maybe a good option is to tag connections per rule (we limit ourselves to 2^16
rules) AND per IP address / port / protocol, etc. (with an average of 5 in / 5 out
rules per port, that's a limit of ~6553 ports per machine).

Or, if we need this to scale to more ports, tag connections per port, and target
them by rule AND IP address / port / protocol.

[1] 
https://github.com/torvalds/linux/commit/5d0aa2ccd4699a01cfdf14886191c249d7b45a01#diff-4d53dd1f3ad5275bc2e79f2c12af6e68R8
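
Just to sketch the mechanics (untested; the zone number, device name and rule
placement are made up):

    # tag all traffic entering from a given port with its own conntrack zone
    iptables -t raw -A PREROUTING -m physdev --physdev-in tap1234 -j CT --zone 4097
    # later, when a rule is deleted for that port, flush only that zone
    conntrack -D -w 4097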


Best,
Miguel Ángel.


- Original Message - 

> Just like Kevin I was considering using conntrack zones to segregate
> connections.
> However, I don't know whether this would be feasible as I've never used
> iptables CT target in real applications.

> Segregation should probably happen at the security group level - or even at
> the rule level - rather than the tenant level.
> Indeed the same situation could occur even with two security groups belonging
> to the same tenant.

> Probably each rule can be associated with a different conntrack zone. So when
> it's matched, the corresponding conntrack entries will be added to the
> appropriate zone. And therefore when the rules are removed the corresponding
> connections to kill can be filtered by zone as explained by Kevin.

> This approach will add a good number of rules to the RAW table however, so
> its impact on control/data plane scalability should be assessed, as it might
> turn as bad as the solution where connections where explicitly dropped with
> an ad-hoc iptables rule.

> Salvatore

> On 24 October 2014 09:32, Kevin Benton < blak...@gmail.com > wrote:

> > I think the root cause of the problem here is that we are losing
> > segregation
> > between tenants at the conntrack level. The compute side plugs everything
> > into the same namespace and we have no guarantees about uniqueness of any
> > other fields kept by conntrack.
> 

> > Because of this loss of uniqueness, I think there may be another lurking
> > bug
> > here as well. One tenant establishing connections between IPs that overlap
> > with another tenant will create the possibility that a connection the other
> > tenant attempts will match the conntrack entry from the original
> > connection.
> > Then whichever closes the connection first will result in the conntrack
> > entry being removed and the return traffic from the remaining connection
> > being dropped.
> 

> > I think the correct way forward here is to isolate each tenant (or even
> > compute interface) into its own conntrack zone.[1] This will provide
> > isolation against that imaginary unlikely scenario I just presented. :-)
> 
> > More importantly, it will allow us to clear connections for a specific
> > tenant
> > (or compute interface) without interfering with others because conntrack
> > can
> > delete by zone.[2]
> 

> > 1.
> > https://github.com/torvalds/linux/commit/5d0aa2ccd4699a01cfdf14886191c249d7b45a01
> 
> > 2. see the -w option.
> > http://manpages.ubuntu.com/manpages/raring/man8/conntrack.8.html
> 

> > On Thu, Oct 23, 2014 at 3:22 AM, Elena Ezhova < eezh...@mirantis.com >
> > wrote:
> 

> > > Hi!
> > 
> 

> > > I am working on a bug " ping still working once connected even after
> > > related
> > > security group rule is deleted" (
> > > https://bugs.launchpad.net/neutron/+bug/1335375 ). The gist of the
> > > problem
> > > is the following: when we delete a security group rule the corresponding
> > > rule in iptables is also deleted, but the connection, that was allowed by
> > > that rule, is not being destroyed.
> > 
> 
> > > The reason for such behavior is that in iptables we have the following
> > > structure of a chain that filters input packets for an interface of an
> > > istance:
> > 
> 

> > > Chain neutron-openvswi-i830fa99f-3 (1 references)
> > 
> 
> > > pkts bytes target prot opt in out source destination
> > 
> 
> > > 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID /* Drop packets
> > > that
> > > are not associated with a state. */
> > 
> 
> > > 0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED /*
> > > Direct
> > > packets associated with a known session to the RETURN chain. */
> > 
> 
> > > 0 0 RETURN udp -- * * 10.0.0.3 0.0.0.0/0 udp spt:67 dpt:68
> > 
> 
> > > 0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 match-set
> > > IPv43a0d3610-8b38-43f2-8
> > > src
> > 
> 
> > > 0 0 RETURN tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 < rule that
> > > allows
> > > ssh on port 22
> > 
> 
> > > 1 84 RETURN icmp -- * * 0.0.0.0/0 0.0.0.0/0
> > 
> 
> > > 0 0 neutron-openvswi-sg-fallback all -- * * 0.0.0.0/0 0.0.0.0/0 /* Send
> > > unmatched traffic to the fallback chain. */
> > 
> 

> > > So, if we delete rule that allows tcp on port 22, then all connections
> > > that
> > > ar

Re: [openstack-dev] [neutron] [stable] Tool to aid in scalability problems mitigation.

2014-10-24 Thread Miguel Angel Ajo Pelayo


- Original Message -
> Hi Miguel,
> 
> while we'd need to hear from the stable team, I think it's not such a bad
> idea to make this tool available to users of pre-juno openstack releases.
> As far as upstream repos are concerned, I don't know if this tool violates
> the criteria for stable branches. Even if it would be a rather large change
> for stable/icehouse, it is pretty much orthogonal to the existing code, so
> it could be ok. However, please note that stable/havana has now reached its
> EOL, so there will be no more stable release for it.

Sure, I was mentioning Havana as affected, but I understand it's already
past its U/S EOL; D/S distributions would always be free to backport, especially
an orthogonal change like this.

About stable/icehouse, I'd like to hear from the stable maintainers.

> 
> The orthogonal nature of this tool however also make the case for making it
> widely available on pypi. I think it should be ok to describe the
> scalability issue in the official OpenStack Icehouse docs and point out to
> this tool for mitigation.

Yes, of course, I consider that a second option; my point here is that 
direct upstream review would result in better quality code, 
could certainly spot any hidden bugs, and would increase testing quality.

It also reduces packaging time across all distributions by making it available
via the standard neutron repository.


Thanks for the feedback!

> 
> Salvatore
> 
> On 23 October 2014 14:03, Miguel Angel Ajo Pelayo < mangel...@redhat.com >
> wrote:
> 
> 
> 
> 
> Recently, we have identified clients with problems due to the
> bad scalability of security groups in Havana and Icehouse, that
> was addressed during juno here [1] [2]
> 
> This situation is identified by blinking agents (going UP/DOWN),
> high AMQP load, nigh neutron-server load, and timeout from openvswitch
> agents when trying to contact neutron-server
> "security_group_rules_for_devices".
> 
> Doing a [1] backport involves many dependent patches related
> to the general RPC refactor in neutron (which modifies all plugins),
> and subsequent ones fixing a few bugs. Sounds risky to me. [2] Introduces
> new features and it's dependent on features which aren't available on
> all systems.
> 
> To remediate this on production systems, I wrote a quick tool
> to help on reporting security groups and mitigating the problem
> by writing almost-equivalent rules [3].
> 
> We believe this tool would be better available to the wider community,
> and under better review and testing, and, since it doesn't modify any
> behavior
> or actual code in neutron, I'd like to propose it for inclusion into, at
> least,
> Icehouse stable branch where it's more relevant.
> 
> I know the usual way is to go master->Juno->Icehouse, but at this moment
> the tool is only interesting for Icehouse (and Havana), although I believe
> it could be extended to cleanup orphaned resources, or any other cleanup
> tasks, in that case it could make sense to be available for K->J->I.
> 
> As a reference, I'm leaving links to outputs from the tool [4][5]
> 
> Looking forward to get some feedback,
> Miguel Ángel.
> 
> 
> [1] https://review.openstack.org/#/c/111876/ security group rpc refactor
> [2] https://review.openstack.org/#/c/111877/ ipset support
> [3] https://github.com/mangelajo/neutrontool
> [4] http://paste.openstack.org/show/123519/
> [5] http://paste.openstack.org/show/123525/
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-24 Thread Miguel Angel Ajo Pelayo

Kevin, I agree with you, 1 zone per port should be reasonable.

The 2^16 zone limit will force us into keeping state (to tie
ports to zones across reboots); maybe this state can simply be
recovered by reading the iptables rules at boot and reconstructing
the current openvswitch-agent local port/zone association.
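
Something as simple as this could do the reconstruction (a rough sketch,
assuming the zones end up being assigned with 'CT --zone' rules in the raw
table that match on --physdev-in):

import re
import subprocess

output = subprocess.check_output(['iptables-save', '-t', 'raw']).decode()
port_zone = dict(re.findall(r'--physdev-in (\S+) .* --zone (\d+)', output))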

Best,
Miguel Ángel.

- Original Message - 

> While a zone per rule would be nice because we can easily delete connection
> state by only referencing a zone, that's probably overkill. We only need
> enough to disambiguate between overlapping IPs so we can then delete
> connection state by matching standard L3/4 headers again, right?

> I think a conntrack zone per port would be the easiest from an accounting
> perspective. We already setup an iptables chain per port so the grouping is
> already there (/me sweeps the complexity of choosing zone numbers under the
> rug).

> On Fri, Oct 24, 2014 at 2:25 AM, Salvatore Orlando < sorla...@nicira.com >
> wrote:

> > Just like Kevin I was considering using conntrack zones to segregate
> > connections.
> 
> > However, I don't know whether this would be feasible as I've never used
> > iptables CT target in real applications.
> 

> > Segregation should probably happen at the security group level - or even at
> > the rule level - rather than the tenant level.
> 
> > Indeed the same situation could occur even with two security groups
> > belonging
> > to the same tenant.
> 

> > Probably each rule can be associated with a different conntrack zone. So
> > when
> > it's matched, the corresponding conntrack entries will be added to the
> > appropriate zone. And therefore when the rules are removed the
> > corresponding
> > connections to kill can be filtered by zone as explained by Kevin.
> 

> > This approach will add a good number of rules to the RAW table however, so
> > its impact on control/data plane scalability should be assessed, as it
> > might
> > turn as bad as the solution where connections where explicitly dropped with
> > an ad-hoc iptables rule.
> 

> > Salvatore
> 

> > On 24 October 2014 09:32, Kevin Benton < blak...@gmail.com > wrote:
> 

> > > I think the root cause of the problem here is that we are losing
> > > segregation
> > > between tenants at the conntrack level. The compute side plugs everything
> > > into the same namespace and we have no guarantees about uniqueness of any
> > > other fields kept by conntrack.
> > 
> 

> > > Because of this loss of uniqueness, I think there may be another lurking
> > > bug
> > > here as well. One tenant establishing connections between IPs that
> > > overlap
> > > with another tenant will create the possibility that a connection the
> > > other
> > > tenant attempts will match the conntrack entry from the original
> > > connection.
> > > Then whichever closes the connection first will result in the conntrack
> > > entry being removed and the return traffic from the remaining connection
> > > being dropped.
> > 
> 

> > > I think the correct way forward here is to isolate each tenant (or even
> > > compute interface) into its own conntrack zone.[1] This will provide
> > > isolation against that imaginary unlikely scenario I just presented. :-)
> > 
> 
> > > More importantly, it will allow us to clear connections for a specific
> > > tenant
> > > (or compute interface) without interfering with others because conntrack
> > > can
> > > delete by zone.[2]
> > 
> 

> > > 1.
> > > https://github.com/torvalds/linux/commit/5d0aa2ccd4699a01cfdf14886191c249d7b45a01
> > 
> 
> > > 2. see the -w option.
> > > http://manpages.ubuntu.com/manpages/raring/man8/conntrack.8.html
> > 
> 

> > > On Thu, Oct 23, 2014 at 3:22 AM, Elena Ezhova < eezh...@mirantis.com >
> > > wrote:
> > 
> 

> > > > Hi!
> > > 
> > 
> 

> > > > I am working on a bug " ping still working once connected even after
> > > > related
> > > > security group rule is deleted" (
> > > > https://bugs.launchpad.net/neutron/+bug/1335375 ). The gist of the
> > > > problem
> > > > is the following: when we delete a security group rule the
> > > > corresponding
> > > > rule in iptables is also deleted, but the connection, that was allowed
> > > > by
> > > > that rule, is not being destroyed.
> > > 
> > 
> 
> > > > The reason for such behavior is that in iptables we have the following
> > > > structure of a chain that filters input packets for an interface of an
> > > > istance:
> > > 
> > 
> 

> > > > Chain neutron-openvswi-i830fa99f-3 (1 references)
> > > 
> > 
> 
> > > > pkts bytes target prot opt in out source destination
> > > 
> > 
> 
> > > > 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID /* Drop packets
> > > > that
> > > > are not associated with a state. */
> > > 
> > 
> 
> > > > 0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED /*
> > > > Direct
> > > > packets associated with a known session to the RETURN chain. */
> > > 
> > 
> 
> > > > 0 0 RETURN udp -- * * 10.0.0.3 0.0.0.0/0 udp spt:67 dpt:68
> > > 
> > 
> 
> > > >

Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-24 Thread Miguel Angel Ajo Pelayo
sorry: when I said boot, I meant "openvswitch agent restart".

- Original Message -
> 
> Kevin, I agree, with you, 1 zone per port should be reasonable.
> 
> The 2^16 rule limit will force us into keeping state (to tie
> ports to zones across reboots), may be this state can be just
> recovered by reading the iptables rules at boot, and reconstructing
> the current openvswitch-agent local port/zone association.
> 
> Best,
> Miguel Ángel.
> 
> - Original Message -
> 
> > While a zone per rule would be nice because we can easily delete connection
> > state by only referencing a zone, that's probably overkill. We only need
> > enough to disambiguate between overlapping IPs so we can then delete
> > connection state by matching standard L3/4 headers again, right?
> 
> > I think a conntrack zone per port would be the easiest from an accounting
> > perspective. We already setup an iptables chain per port so the grouping is
> > already there (/me sweeps the complexity of choosing zone numbers under the
> > rug).
> 
> > On Fri, Oct 24, 2014 at 2:25 AM, Salvatore Orlando < sorla...@nicira.com >
> > wrote:
> 
> > > Just like Kevin I was considering using conntrack zones to segregate
> > > connections.
> > 
> > > However, I don't know whether this would be feasible as I've never used
> > > iptables CT target in real applications.
> > 
> 
> > > Segregation should probably happen at the security group level - or even
> > > at
> > > the rule level - rather than the tenant level.
> > 
> > > Indeed the same situation could occur even with two security groups
> > > belonging
> > > to the same tenant.
> > 
> 
> > > Probably each rule can be associated with a different conntrack zone. So
> > > when
> > > it's matched, the corresponding conntrack entries will be added to the
> > > appropriate zone. And therefore when the rules are removed the
> > > corresponding
> > > connections to kill can be filtered by zone as explained by Kevin.
> > 
> 
> > > This approach will add a good number of rules to the RAW table however,
> > > so
> > > its impact on control/data plane scalability should be assessed, as it
> > > might
> > > turn as bad as the solution where connections where explicitly dropped
> > > with
> > > an ad-hoc iptables rule.
> > 
> 
> > > Salvatore
> > 
> 
> > > On 24 October 2014 09:32, Kevin Benton < blak...@gmail.com > wrote:
> > 
> 
> > > > I think the root cause of the problem here is that we are losing
> > > > segregation
> > > > between tenants at the conntrack level. The compute side plugs
> > > > everything
> > > > into the same namespace and we have no guarantees about uniqueness of
> > > > any
> > > > other fields kept by conntrack.
> > > 
> > 
> 
> > > > Because of this loss of uniqueness, I think there may be another
> > > > lurking
> > > > bug
> > > > here as well. One tenant establishing connections between IPs that
> > > > overlap
> > > > with another tenant will create the possibility that a connection the
> > > > other
> > > > tenant attempts will match the conntrack entry from the original
> > > > connection.
> > > > Then whichever closes the connection first will result in the conntrack
> > > > entry being removed and the return traffic from the remaining
> > > > connection
> > > > being dropped.
> > > 
> > 
> 
> > > > I think the correct way forward here is to isolate each tenant (or even
> > > > compute interface) into its own conntrack zone.[1] This will provide
> > > > isolation against that imaginary unlikely scenario I just presented.
> > > > :-)
> > > 
> > 
> > > > More importantly, it will allow us to clear connections for a specific
> > > > tenant
> > > > (or compute interface) without interfering with others because
> > > > conntrack
> > > > can
> > > > delete by zone.[2]
> > > 
> > 
> 
> > > > 1.
> > > > https://github.com/torvalds/linux/commit/5d0aa2ccd4699a01cfdf14886191c249d7b45a01
> > > 
> > 
> > > > 2. see the -w option.
> > > > http://manpages.ubuntu.com/manpages/raring/man8/conntrack.8.html
> > > 
> > 
> 
> > > > On Thu, Oct 23, 2014 at 3:22 AM, Elena Ezhova < eezh...@mirantis.com >
> > > > wrote:
> > > 
> > 
> 
> > > > > Hi!
> > > > 
> > > 
> > 
> 
> > > > > I am working on a bug " ping still working once connected even after
> > > > > related
> > > > > security group rule is deleted" (
> > > > > https://bugs.launchpad.net/neutron/+bug/1335375 ). The gist of the
> > > > > problem
> > > > > is the following: when we delete a security group rule the
> > > > > corresponding
> > > > > rule in iptables is also deleted, but the connection, that was
> > > > > allowed
> > > > > by
> > > > > that rule, is not being destroyed.
> > > > 
> > > 
> > 
> > > > > The reason for such behavior is that in iptables we have the
> > > > > following
> > > > > structure of a chain that filters input packets for an interface of
> > > > > an
> > > > > istance:
> > > > 
> > > 
> > 
> 
> > > > > Chain neutron-openvswi-i830fa99f-3 (1 references)
> > > > 
> > > 
> > 
> > > > > pkts bytes target pr

Re: [openstack-dev] [neutron] [stable] Tool to aid in scalability problems mitigation.

2014-10-24 Thread Miguel Angel Ajo Pelayo
Thanks for your feedback too Ihar, comments inline.

- Original Message -
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA512
> 
> On 24/10/14 11:56, Miguel Angel Ajo Pelayo wrote:
> > 
> > 
> > - Original Message -
> >> Hi Miguel,
> >> 
> >> while we'd need to hear from the stable team, I think it's not
> >> such a bad idea to make this tool available to users of pre-juno
> >> openstack releases.
> 
> It's a great idea actually. It's great when code emerged from real
> life downstream support cases eventually flow up to upstream for all
> operator's benefit (and not just those who pay huge money for
> commercial service).
> 
> >> As far as upstream repos are concerned, I don't know if this tool
> >> violates the criteria for stable branches. Even if it would be a
> >> rather large change for stable/icehouse, it is pretty much
> >> orthogonal to the existing code, so it could be ok. However,
> >> please note that stable/havana has now reached its EOL, so there
> >> will be no more stable release for it.
> > 
> > Sure, I was mentioning havana as affected, but I understand it's
> > already under U/S EOL, D/S distributions would always be free to
> > backport, specially on an orthogonal change like this.
> > 
> > About stable/icehouse, I'd like to hear from the stable
> > maintainers.
> 
> I'm for inclusion of the tool in the main neutron package. Though it's
> possible to publish it on pypi as a separate package, I would better
> apply formal review process to it, plus reduce packaging efforts for
> distributions (and myself). The tool may be later expanded for other
> useful operator hooks, so I'm for inclusion of the tool in master and
> backporting it back to all supported branches.
> 
> Though official stable maintainership rules state that 'New features'
> are no-go for stable branch [1], I think they should not apply in this
> case since the tool does not touch production code in any way and just
> provides a way to heal security groups on operator demand. Also, rules
> are to break them. ;) Quoting the same document, "Proposed backports
> breaking any of above guidelines can be discussed as exception
> requests on openstack-stable-maint list where stable-maint team will
> try to reach consensus."
> 
> Operators should be more happy if we ship such a tool as part of
> neutron release and not as another third-party tool from pypi of
> potentially unsafe origin.
> 
> BTW I wonder whether the tool can be useful for Juno+ setups too.
> Though we mostly mitigated the problem by RPC interface rework and
> ipset, some operators may still hit some limitation that could be
> workarounded by optimizing their rules. Also, I think the idea of
> having a tool with miscellaneous operator hooks in the master tree is
> quite interesting. I would recommend to still go with pushing it to
> master and then backporting to stable branches. That would also help
> to get more review attention from cores than stable branch requests
> usually receive. ;)


I believe the tool could also be expanded to report, and equally to generate
scripts to clean up, orphaned resources: those happen when
you remove an instance and the port is not deleted, or when you delete a 
tenant but its resources are kept, etc.

I know there are efforts to do proper cleanup when tenants are deleted,
but still, I see production databases plagued with orphaned resources. 

> 
> [1]: https://wiki.openstack.org/wiki/StableBranch#Appropriate_Fixes
> 
> > 
> >> 
> >> The orthogonal nature of this tool however also make the case for
> >> making it widely available on pypi. I think it should be ok to
> >> describe the scalability issue in the official OpenStack Icehouse
> >> docs and point out to this tool for mitigation.
> > 
> > Yes, of course, I consider that as a second option, my point here
> > is that direct upstream review time would result in better quality
> > code here, and could certainly spot any hidden bugs, and increase
> > testing quality.
> > 
> > It also reduces packaging time all across distributions making it
> > available via the standard neutron repository.
> > 
> > 
> > Thanks for the feedback!,
> > 
> >> 
> >> Salvatore
> >> 
> >> On 23 October 2014 14:03, Miguel Angel Ajo Pelayo <
> >> mangel...@redhat.com > wrote:
> >> 
> >> 
> >> 
> >> 
> >> Recently, we have identified clients with problems due to the bad
> >> scalability of security group

Re: [openstack-dev] [neutron] Lightning talks during the Design Summit!

2014-10-30 Thread Miguel Angel Ajo Pelayo
Thank you very much, voted.

- Original Message -
> On Thu, Oct 23, 2014 at 3:22 PM, Kyle Mestery  wrote:
> > As discussed during the neutron-drivers meeting this week [1], we've
> > going to use one of the Neutron 40 minute design summit slots for
> > lightning talks. The basic idea is we will have 6 lightning talks,
> > each 5 minutes long. We will force a 5 minute hard limit here. We'll
> > do the lightning talk round first thing Thursday morning.
> >
> > To submit a lightning talk, please add it to the etherpad linked here
> > [2]. I'll be collecting ideas until after the Neutron meeting on
> > Monday, 10-27-2014. At that point, I'll take all the ideas and add
> > them into a Survey Monkey form and we'll vote for which talks people
> > want to see. The top 6 talks will get a lightning talk slot.
> >
> > I'm hoping the lightning talks allow people to discuss some ideas
> > which didn't get summit time, and allow for even new contributors to
> > discuss their ideas face to face with folks.
> >
> As discussed in the weekly Neutron meeting, I've setup a Survey Monkey
> to determine which 6 talks will get a slot for the Neutron Lightning
> Talk track at the Design Summit. Please go here [1] and vote. I'll
> collect results until Thursday around 2300UTC or so, and then close
> the poll and the top 6 choices will get a 5 minute lightning talk.
> 
> Thanks!
> Kyle
> 
> [1] https://www.surveymonkey.com/s/RLTPBY6
> 
> > Thanks!
> > Kyle
> >
> > [1]
> > http://eavesdrop.openstack.org/meetings/neutron_drivers/2014/neutron_drivers.2014-10-22-15.02.log.html
> > [2] https://etherpad.openstack.org/p/neutron-kilo-lightning-talks
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon ode support

2014-11-10 Thread Miguel Angel Ajo Pelayo
Thank you very much Armando,

I updated the spec (which is missing the dev-impact section now) and I must rebase all 
the patches.  That should happen tomorrow if I'm not missing anything. 

I will ping you back when it's ready. 

Sent from my Android phone using TouchDown (www.nitrodesk.com)


-Original Message-
From: Armando M. [arma...@gmail.com]
Received: Saturday, 08 Nov 2014, 11:25
To: OpenStack Development Mailing List (not for usage questions) 
[openstack-dev@lists.openstack.org]
Subject: Re: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon 
ode support

Hi Miguel,

Thanks for picking this up. Pull me in and I'd be happy to help!

Cheers,
Armando

On 7 November 2014 10:05, Miguel Ángel Ajo  wrote:

>
> Hi Yorik,
>
>I was talking with Mark Mcclain a minute ago here at the summit about
> this. And he told me that now at the start of the cycle looks like a good
> moment to merge the spec & the root wrap daemon bits, so we have a lot of
> headroom for testing during the next months.
>
>We need to upgrade the spec [1] to the new Kilo format.
>
>Do you have some time to do it?, I can allocate some time and do it
> right away.
>
> [1] https://review.openstack.org/#/c/93889/
> --
> Miguel Ángel Ajo
> Sent with Sparrow <http://www.sparrowmailapp.com/?sig>
>
> On Thursday, 24 de July de 2014 at 01:42, Miguel Angel Ajo Pelayo wrote:
>
> +1
>
> Sent from my Android phone using TouchDown (www.nitrodesk.com)
>
>
> -Original Message-
> From: Yuriy Taraday [yorik@gmail.com]
> Received: Thursday, 24 Jul 2014, 0:42
> To: OpenStack Development Mailing List [openstack-dev@lists.openstack.org]
>
> Subject: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon
>mode support
>
>
> Hello.
>
> I'd like to propose making a spec freeze exception for
> rootwrap-daemon-mode spec [1].
>
> Its goal is to save agents' execution time by using daemon mode for
> rootwrap and thus avoiding python interpreter startup time as well as sudo
> overhead for each call. Preliminary benchmark shows 10x+ speedup of the
> rootwrap interaction itself.
>
> This spec have a number of supporters from Neutron team (Carl and Miguel
> gave it their +2 and +1) and have all code waiting for review [2], [3], [4].
> The only thing that has been blocking its progress is Mark's -2 left when
> oslo.rootwrap spec hasn't been merged yet. Now that's not the case and code
> in oslo.rootwrap is steadily getting approved [5].
>
> [1] https://review.openstack.org/93889
> [2] https://review.openstack.org/82787
> [3] https://review.openstack.org/84667
> [4] https://review.openstack.org/107386
> [5]
> https://review.openstack.org/#/q/project:openstack/oslo.rootwrap+topic:bp/rootwrap-daemon-mode,n,z
>
> --
>
> Kind regards, Yuriy.
>  ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Performance of security group

2014-06-19 Thread Miguel Angel Ajo Pelayo

  Hi, it's a very interesting topic. I was getting ready to raise
the same concerns about our security groups implementation; shihanzhang,
thank you for starting this topic.

  The problem is not only at the low level, where (with our default security group
rules, which allow all incoming traffic from the 'default' sg) the iptables rules
grow in ~X^2 for a tenant; the "security_group_rules_for_devices"
RPC call from ovs-agent to neutron-server also grows to message sizes of >100MB,
generating serious scalability issues or timeouts/retries that 
totally break the neutron service.

   (example trace of that RPC call with a few instances
http://www.fpaste.org/104401/14008522/)
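
   As a back-of-the-envelope illustration (numbers made up, the real figures
depend on how the instances are spread across hosts): with the default
"allow all from the same sg" rule, every port gets one iptables rule per
member of the group, so

    ports_on_host = 50
    group_members = 300
    iptables_rules = ports_on_host * group_members   # ~15000 rules on a single host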

  I believe we also need to review the RPC calling mechanism
for the OVS agent here; there are several possible approaches to breaking
down (and/or CIDR-compressing) the information we return via this API call.


   So we have to look at two things here:

  * physical implementation on the hosts (ipsets, nftables, ... )
  * rpc communication mechanisms.

   Best regards,
Miguel Ángel.

- Mensaje original - 

> Do you though about nftables that will replace {ip,ip6,arp,eb}tables?
> It also based on the rule set mechanism.
> The issue in that proposition, it's only stable since the begin of the year
> and on Linux kernel 3.13.
> But there lot of pros I don't list here (leverage iptables limitation,
> efficient update rule, rule set, standardization of netfilter commands...).

> Édouard.

> On Thu, Jun 19, 2014 at 8:25 AM, henry hly < henry4...@gmail.com > wrote:

> > we have done some tests, but have different result: the performance is
> > nearly
> > the same for empty and 5k rules in iptable, but huge gap between
> > enable/disable iptable hook on linux bridge
> 

> > On Thu, Jun 19, 2014 at 11:21 AM, shihanzhang < ayshihanzh...@126.com >
> > wrote:
> 

> > > Now I have not get accurate test data, but I can confirm the following
> > > points:
> > 
> 
> > > 1. In compute node, the iptable's chain of a VM is liner, iptable filter
> > > it
> > > one by one, if a VM in default security group and this default security
> > > group have many members, but ipset chain is set, the time ipset filter
> > > one
> > > and many member is not much difference.
> > 
> 
> > > 2. when the iptable rule is very large, the probability of failure that
> > > iptable-save save the iptable rule is very large.
> > 
> 

> > > At 2014-06-19 10:55:56, "Kevin Benton" < blak...@gmail.com > wrote:
> > 
> 

> > > > This sounds like a good idea to handle some of the performance issues
> > > > until
> > > > the ovs firewall can be implemented down the the line.
> > > 
> > 
> 
> > > > Do you have any performance comparisons?
> > > 
> > 
> 
> > > > On Jun 18, 2014 7:46 PM, "shihanzhang" < ayshihanzh...@126.com > wrote:
> > > 
> > 
> 

> > > > > Hello all,
> > > > 
> > > 
> > 
> 

> > > > > Now in neutron, it use iptable implementing security group, but the
> > > > > performance of this implementation is very poor, there is a bug:
> > > > > https://bugs.launchpad.net/neutron/+bug/1302272 to reflect this
> > > > > problem.
> > > > > In
> > > > > his test, w ith default security groups(which has remote security
> > > > > group),
> > > > > beyond 250-300 VMs, there were around 6k Iptable rules on evry
> > > > > compute
> > > > > node,
> > > > > although his patch can reduce the processing time, but it don't solve
> > > > > this
> > > > > problem fundamentally. I have commit a BP to solve this problem:
> > > > > https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
> > > > 
> > > 
> > 
> 
> > > > > There are other people interested in this it?
> > > > 
> > > 
> > 
> 

> > > > > ___
> > > > 
> > > 
> > 
> 
> > > > > OpenStack-dev mailing list
> > > > 
> > > 
> > 
> 
> > > > > OpenStack-dev@lists.openstack.org
> > > > 
> > > 
> > 
> 
> > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > > 
> > > 
> > 
> 

> > > ___
> > 
> 
> > > OpenStack-dev mailing list
> > 
> 
> > > OpenStack-dev@lists.openstack.org
> > 
> 
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 

> > ___
> 
> > OpenStack-dev mailing list
> 
> > OpenStack-dev@lists.openstack.org
> 
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] default security group rules in neutron

2014-06-22 Thread Miguel Angel Ajo Pelayo

   I believe it's an important feature, because currently
the default security rules are hard-coded in neutron's code,
and that won't fit all organizations (not to mention that the
default security rules won't scale well on our current
implementation).
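
   Roughly, what is hard-coded today amounts to this (paraphrasing, not the
actual code):

    # created automatically for every tenant's 'default' security group:
    DEFAULT_SG_RULES = [
        {'direction': 'egress', 'ethertype': 'IPv4'},           # allow all outgoing
        {'direction': 'egress', 'ethertype': 'IPv6'},
        {'direction': 'ingress', 'remote_group': 'default'},    # allow incoming from members
    ]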

   Best,
Miguel Ángel
  



- Mensaje original -
> Greetings
> 
> We use neutron as network functionality implementation in nova, and as
> you know, there is a feature called 'os-security-group-default-rules'
> in nova extension[1], a hook mechanism to add customized rules when
> creating default security groups, which is a very useful feature to
> the administrators or operators (at least useful to us in our
> deployment). But I found this feature is valid only when using
> nova-network.
> 
> So, for the functionality parity between nova-network and neutron and
> for our use case, I registered a blueprint[2] about default security
> group rules in Neutron days ago and related neutron spec[3], and I
> want it to be involved in Juno, so we can upgrade our deployment that
> time for this feature. I'm ready for the code implementation[3].
> 
> But I still want to see what's the community's thought about including
> this feature in neutron, any of your feedback and comments are
> appreciated!
> 
> [1]
> https://blueprints.launchpad.net/nova/+spec/default-rules-for-default-security-group
> [2]
> https://blueprints.launchpad.net/neutron/+spec/default-rules-for-default-security-group
> [3] https://review.openstack.org/98966
> [4] https://review.openstack.org/99320
> 
> --
> Regards!
> ---
> Lingxian Kong
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]One security issue about floating ip

2014-06-26 Thread Miguel Angel Ajo Pelayo
Yes, once a connection has passed the NAT tables, 
and it's in the kernel connection tracker, it
will keep working even if you remove the NAT rule.

Fixing that would require manipulating the kernel
connection tracking to kill that connection.
I'm not familiar with that part of the Linux network
stack, and I'm not sure if it's possible, but that would be
the perfect way (kill the NAT connection whose ext ip = floating ip and int ip = internal 
ip)...
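
If conntrack lets us do it, it would be something along these lines (just a
sketch, untested, and I'm not sure these are the exact filters we would need):

    # connections that were DNAT'ed in through the floating ip
    conntrack -D --orig-dst <floating_ip>
    # connections initiated by the VM and SNAT'ed out through it
    conntrack -D --reply-dst <floating_ip>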




- Original Message -
> Hi folks,
> 
> After we create an SSH connection to a VM via its floating ip, even though we
> have removed the floating ip association, we can still access the VM via
> that connection. Namely, SSH is not disconnected when the floating ip is not
> valid. Any good solution about this security issue?
> 
> Thanks
> Xurong Yang
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Performance of security group

2014-06-26 Thread Miguel Angel Ajo Pelayo
- Original Message -
> @Nachi: Yes that could a good improvement to factorize the RPC mechanism.
> 
> Another idea:
> What about creating a RPC topic per security group (quid of the RPC topic
> scalability) on which an agent subscribes if one of its ports is associated
> to the security group?
> 
> Regards,
> Édouard.
> 
> 


Hmm, Interesting,

@Nachi, I'm not sure I fully understood:


SG_LIST [ SG1, SG2]
SG_RULE_LIST = [SG_Rule1, SG_Rule2] ..
port[SG_ID1, SG_ID2], port2 , port3


We probably also need to include the 
SG_IP_LIST = [SG_IP1, SG_IP2] ...


and let the agent do all the combination work.

Could something like this make sense?

Security_Groups = {SG1: {IPs: [], RULES: []},
                   SG2: {IPs: [], RULES: []}}

Ports = {Port1: [SG1, SG2], Port2: [SG1]}


@Edouard, actually I like the idea of having the agents subscribe
to the security groups they have ports on... That would remove the need to include
all the security group information on every call...

But it would need another call to get the full information of a set of security 
groups at start/resync if we don't already have any. 


> 
> On Fri, Jun 20, 2014 at 4:04 AM, shihanzhang < ayshihanzh...@126.com > wrote:
> 
> 
> 
> hi Miguel Ángel,
> I am very agree with you about the following point:
> >  * physical implementation on the hosts (ipsets, nftables, ... )
> --this can reduce the load of compute node.
> >  * rpc communication mechanisms.
> -- this can reduce the load of neutron server
> can you help me to review my BP specs?
> 
> 
> 
> 
> 
> 
> 
> At 2014-06-19 10:11:34, "Miguel Angel Ajo Pelayo" < mangel...@redhat.com >
> wrote:
> >
> >  Hi it's a very interesting topic, I was getting ready to raise
> >the same concerns about our security groups implementation, shihanzhang
> >thank you for starting this topic.
> >
> >  Not only at low level where (with our default security group
> >rules -allow all incoming from 'default' sg- the iptable rules
> >will grow in ~X^2 for a tenant, and, the "security_group_rules_for_devices"
> >rpc call from ovs-agent to neutron-server grows to message sizes of >100MB,
> >generating serious scalability issues or timeouts/retries that
> >totally break neutron service.
> >
> >   (example trace of that RPC call with a few instances
> > http://www.fpaste.org/104401/14008522/ )
> >
> >  I believe that we also need to review the RPC calling mechanism
> >for the OVS agent here, there are several possible approaches to breaking
> >down (or/and CIDR compressing) the information we return via this api call.
> >
> >
> >   So we have to look at two things here:
> >
> >  * physical implementation on the hosts (ipsets, nftables, ... )
> >  * rpc communication mechanisms.
> >
> >   Best regards,
> >Miguel Ángel.
> >
> >- Mensaje original -
> >
> >> Do you though about nftables that will replace {ip,ip6,arp,eb}tables?
> >> It also based on the rule set mechanism.
> >> The issue in that proposition, it's only stable since the begin of the
> >> year
> >> and on Linux kernel 3.13.
> >> But there lot of pros I don't list here (leverage iptables limitation,
> >> efficient update rule, rule set, standardization of netfilter
> >> commands...).
> >
> >> Édouard.
> >
> >> On Thu, Jun 19, 2014 at 8:25 AM, henry hly < henry4...@gmail.com > wrote:
> >
> >> > we have done some tests, but have different result: the performance is
> >> > nearly
> >> > the same for empty and 5k rules in iptable, but huge gap between
> >> > enable/disable iptable hook on linux bridge
> >> 
> >
> >> > On Thu, Jun 19, 2014 at 11:21 AM, shihanzhang < ayshihanzh...@126.com >
> >> > wrote:
> >> 
> >
> >> > > Now I have not get accurate test data, but I can confirm the following
> >> > > points:
> >> > 
> >> 
> >> > > 1. On the compute node, the iptables chain of a VM is linear: iptables
> >> > > filters packets rule by rule, so if a VM is in the default security
> >> > > group and that group has many members, the chain gets long. An ipset,
> >> > > on the other hand, is a set, so the time to filter against one member
> >> > > or many members is not much different.
> >> > 
> >> 
> >> > > 2. when the iptable rule is very large, the probability of failure
> 

Re: [openstack-dev] [Neutron] DVR demo and how-to

2014-07-01 Thread Miguel Angel Ajo Pelayo
Thank you for the video, keep up the good work!


- Original Message -
> Hi folks,
> 
> The DVR team is working really hard to complete this important task for Juno
> and Neutron.
> 
> In order to help see this feature in action, a video has been made available
> and link can be found in [2].
> 
> There is still some work to do, however I wanted to remind you that all of
> the relevant information is available on the wiki [1, 2] and Gerrit [3].
> 
> [1] - https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam
> [2] - https://wiki.openstack.org/wiki/Neutron/DVR/HowTo
> [3] - https://review.openstack.org/#/q/topic:bp/neutron-ovs-dvr,n,z
> 
> More to follow!
> 
> Cheers,
> Armando
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Performance of security group

2014-07-01 Thread Miguel Angel Ajo Pelayo


Ok, I was talking with Édouard on IRC, and as I have time to work
on this problem, I could file a specific spec for the security
group RPC optimization, a master plan in two steps:

1) Refactor the current RPC communication for security_groups_for_devices,
   which could be used for full syncs, etc..

2) Benchmark && make use of a fanout queue per security group to make
   sure only the hosts with instances on a certain security group get
   the updates as they happen.
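
For 2), what I have in mind is roughly something like this (just a sketch;
the topic naming, method names and where the transport comes from are not
settled at all):

from oslo import messaging   # the oslo.messaging library

def sg_topic(sg_id):
    return 'q-security-group-update-%s' % sg_id

# neutron-server side: fanout cast on the security group's own topic, so
# only agents subscribed to it (because they host ports on that SG) get it
def notify_sg_updated(transport, context, sg_id):
    target = messaging.Target(topic=sg_topic(sg_id), fanout=True)
    client = messaging.RPCClient(transport, target)
    client.cast(context, 'security_group_updated', security_group_id=sg_id)

# agent side: subscribe when the first port of that SG lands on the host,
# and stop the server when the last one goes away
def subscribe_to_sg(transport, sg_id, endpoints, host):
    target = messaging.Target(topic=sg_topic(sg_id), server=host)
    server = messaging.get_rpc_server(transport, target, endpoints)
    server.start()
    return server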

@shihanzhang do you find it reasonable?



- Original Message -
> - Original Message -
> > @Nachi: Yes that could a good improvement to factorize the RPC mechanism.
> > 
> > Another idea:
> > What about creating a RPC topic per security group (quid of the RPC topic
> > scalability) on which an agent subscribes if one of its ports is associated
> > to the security group?
> > 
> > Regards,
> > Édouard.
> > 
> > 
> 
> 
> Hmm, Interesting,
> 
> @Nachi, I'm not sure I fully understood:
> 
> 
> SG_LIST [ SG1, SG2]
> SG_RULE_LIST = [SG_Rule1, SG_Rule2] ..
> port[SG_ID1, SG_ID2], port2 , port3
> 
> 
> Probably we may need to include also the
> SG_IP_LIST = [SG_IP1, SG_IP2] ...
> 
> 
> and let the agent do all the combination work.
> 
> Something like this could make sense?
> 
> Security_Groups = {SG1:{IPs:[],RULES:[],
>SG2:{IPs:[],RULES:[]}
>   }
> 
> Ports = {Port1:[SG1, SG2], Port2: [SG1]  }
> 
> 
> @Edouard, actually I like the idea of having the agent subscribed
> to security groups they have ports on... That would remove the need to
> include
> all the security groups information on every call...
> 
> But would need another call to get the full information of a set of security
> groups
> at start/resync if we don't already have any.
> 
> 
> > 
> > On Fri, Jun 20, 2014 at 4:04 AM, shihanzhang < ayshihanzh...@126.com >
> > wrote:
> > 
> > 
> > 
> > hi Miguel Ángel,
> > I am very agree with you about the following point:
> > >  * physical implementation on the hosts (ipsets, nftables, ... )
> > --this can reduce the load of compute node.
> > >  * rpc communication mechanisms.
> > -- this can reduce the load of neutron server
> > can you help me to review my BP specs?
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > At 2014-06-19 10:11:34, "Miguel Angel Ajo Pelayo" < mangel...@redhat.com >
> > wrote:
> > >
> > >  Hi it's a very interesting topic, I was getting ready to raise
> > >the same concerns about our security groups implementation, shihanzhang
> > >thank you for starting this topic.
> > >
> > >  Not only at low level where (with our default security group
> > >rules -allow all incoming from 'default' sg- the iptable rules
> > >will grow in ~X^2 for a tenant, and, the
> > >"security_group_rules_for_devices"
> > >rpc call from ovs-agent to neutron-server grows to message sizes of
> > >>100MB,
> > >generating serious scalability issues or timeouts/retries that
> > >totally break neutron service.
> > >
> > >   (example trace of that RPC call with a few instances
> > > http://www.fpaste.org/104401/14008522/ )
> > >
> > >  I believe that we also need to review the RPC calling mechanism
> > >for the OVS agent here, there are several possible approaches to breaking
> > >down (or/and CIDR compressing) the information we return via this api
> > >call.
> > >
> > >
> > >   So we have to look at two things here:
> > >
> > >  * physical implementation on the hosts (ipsets, nftables, ... )
> > >  * rpc communication mechanisms.
> > >
> > >   Best regards,
> > >Miguel Ángel.
> > >
> > >- Mensaje original -
> > >
> > >> Do you though about nftables that will replace {ip,ip6,arp,eb}tables?
> > >> It also based on the rule set mechanism.
> > >> The issue in that proposition, it's only stable since the begin of the
> > >> year
> > >> and on Linux kernel 3.13.
> > >> But there lot of pros I don't list here (leverage iptables limitation,
> > >> efficient update rule, rule set, standardization of netfilter
> > >> commands...).
> > >
> > >> Édouard.
> > >
> > >> On Thu, Jun 19, 2014 at 8:25 AM, henry hly < henry4...@gmail.com >
> > >> wrote:
>

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-07-08 Thread Miguel Angel Ajo Pelayo
I'd like to bring the attention back to this topic:

Mark, could you reconsider removing the -2 here?

https://review.openstack.org/#/c/93889/

Your reason was: 
"""Until the upstream blueprint 
   (https://blueprints.launchpad.net/oslo/+spec/rootwrap-daemon-mode )
   merges in Oslo it does not make sense to track this in Neutron.
"""

Given the new deadlines for the specs, I believe there is no
reason to finish the oslo side in a rush, but it looks like it's 
going to be available during this cycle.

I believe it's something worth having available during the Juno
cycle, as the current per-call rootwrap/sudo overhead is a very
serious performance penalty.
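
Just as a reminder of what the daemon mode buys us, the agent-side usage
would be roughly the following (a sketch based on my reading of the
oslo.rootwrap spec; the module path and the helper script name may end up
being different):

from oslo.rootwrap import client

# started once per agent: a single long-lived privileged helper, instead of
# paying sudo + python interpreter startup + config parsing on every
# filtered command we execute
root_client = client.Client(['sudo', 'neutron-rootwrap-daemon',
                             '/etc/neutron/rootwrap.conf'])

returncode, out, err = root_client.execute(['ip', 'netns', 'list'])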

Best regards,
Miguel Ángel.


- Original Message -
> 
> 
> On 03/24/2014 07:23 PM, Yuriy Taraday wrote:
> > On Mon, Mar 24, 2014 at 9:51 PM, Carl Baldwin  > > wrote:
> >
> > Don't discard the first number so quickly.
> >
> > For example, say we use a timeout mechanism for the daemon running
> > inside namespaces to avoid using too much memory with a daemon in
> > every namespace.  That means we'll pay the startup cost repeatedly but
> > in a way that amortizes it down.
> >
> > Even if it is really a one time cost, then if you collect enough
> > samples then the outlier won't have much affect on the mean anyway.
> >
> >
> > It actually affects all numbers but mean (e.g. deviation is gross).
> 
> 
> Carl is right, I thought of it later in the evening, when the timeout
> mechanism is in place we must consider the number.
> 
> >
> > I'd say keep it in there.
> 
> +1 I agree.
> 
> >
> > Carl
> >
> > On Mon, Mar 24, 2014 at 2:04 AM, Miguel Angel Ajo
> > mailto:majop...@redhat.com>> wrote:
> >  >
> >  >
> >  > It's the first call starting the daemon / loading config files,
> >  > etc?,
> >  >
> >  > May be that first sample should be discarded from the mean for
> > all processes
> >  > (it's an outlier value).
> >
> >
> > I thought about cutting max from counting deviation and/or showing
> > second-max value. But I don't think it matters much and there's not much
> > people here who're analyzing deviation. It's pretty clear what happens
> > with the longest run with this case and I think we can let it be as is.
> > It's mean value that matters most here.
> 
> Yes, I agree, but as Carl said, having timeouts in place, in a practical
> environment, the mean will be shifted too.
> 
> Timeouts are needed within namespaces, to avoid excessive memory
> consumption. But it could be OK as we'd be cutting out the ip netns
> delay.  Or , if we find a simpler "setns" mechanism enough for our
> needs, may be we don't need to care about short-timeouts in ip netns
> at all...
> 
> 
> Best,
> Miguel Ángel.
> 
> 
> >
> > --
> >
> > Kind regards, Yuriy.
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] - Location for common third-party libs?

2014-07-09 Thread Miguel Angel Ajo Pelayo

+1

Anyway, we would need to be careful about how the new single package
provides the old ones to handle the upgrade from split to single,
and also keep backward compatibility with the deployment tools.

Also, wouldn't it be openstack-neutron instead of python-neutron?





- Original Message -
> 
> Reviving the old thread.
> 
> On 17/06/14 11:23, Kevin Benton wrote:
> > Hi Ihar,
> > 
> > What is the reason to breakup neutron into so many packages? A
> > quick disk usage stat shows the plugins directory is currently
> > 3.4M. Is that considered to be too much space for a package, or was
> > it for another reason?
> 
> I think the reasoning was that we don't want to pollute systems with
> unneeded files, and it seems to be easily achievable by splitting
> files into separate packages. It turned out now it's not that easy now
> that we have dependencies between ml2 mechanisms and separate plugins.
> 
> So I would be in favor of merging plugin packages back into
> python-neutron package. AFAIK there is still no bug for that in Red
> Hat Bugzilla, so please report one.
> 
> > 
> > Thanks, Kevin Benton
> > 
> > 
> > On Mon, Jun 16, 2014 at 3:37 PM, Ihar Hrachyshka
> >  wrote:
> > 
> > On 17/06/14 00:10, Anita Kuno wrote:
>  On 06/16/2014 06:02 PM, Kevin Benton wrote:
> > Hello,
> > 
> > In the Big Switch ML2 driver, we rely on quite a bit of
> > code from the Big Switch plugin. This works fine for
> > distributions that include the entire neutron code base.
> > However, some break apart the neutron code base into
> > separate packages. For example, in CentOS I can't use the
> > Big Switch ML2 driver with just ML2 installed because the
> > Big Switch plugin directory is gone.
> > 
> > Is there somewhere where we can put common third party code
> > that will be safe from removal during packaging?
> > 
> > 
> > Hi,
> > 
> > I'm a neutron packager for redhat based distros.
> > 
> > AFAIK the main reason is to avoid installing lots of plugins to
> > systems that are not going to use them. No one really spent too
> > much time going file by file and determining internal
> > interdependencies.
> > 
> > In your case, I would move those Brocade specific ML2 files to
> > Brocade plugin package. I would suggest to report the bug in Red
> > Hat bugzilla. I think this won't get the highest priority, but once
> > packagers will have spare cycles, this can be fixed.
> > 
> > Cheers, /Ihar
> >> 
> >> ___ OpenStack-dev
> >> mailing list OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >> 
> > 
> > 
> > 
> > 
> > ___ OpenStack-dev
> > mailing list OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] respawn/action neutron-*-agent dying childs.

2014-07-10 Thread Miguel Angel Ajo Pelayo


  Eugene (& list)

  I just saw that you reassigned the bug to yourself recently, and that
the ideas described in the bug differ a bit from the idea that I had,
but I'd be willing to extend my spec to incorporate your design and spend
some time on the problem if you believe it's feasible.

I'm talking about:

 https://bugs.launchpad.net/neutron/+bug/1257524

and 

 https://review.openstack.org/105999
 
http://docs-draft.openstack.org/99/105999/1/check/gate-neutron-specs-docs/86e0554/doc/build/html/specs/juno/agent-child-processes-status.html

   Best regards,
Miguel Ángel.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Performance of security group

2014-07-10 Thread Miguel Angel Ajo Pelayo
Wow Shihanzhang, truly awesome!

  Is the current implementation for https://review.openstack.org/#/c/104522/
also ready? Could you make it available?


Good work!


- Original Message -
> 
> With the deployment 'nova + neutron + openvswitch', when we bulk create about
> 500 VMs with a default security group, the CPU usage of neutron-server and
> the openvswitch agent is very high; in particular the CPU usage of the
> openvswitch agent will reach 100%, and this causes VM creation to fail.
> 
> With the methods discussed on the mailing list:
> 1) ipset optimization   (https://review.openstack.org/#/c/100761/)
> 3) sg rpc optimization (with fanout)  (
> https://review.openstack.org/#/c/104522/)
> I have implemented these two schemes in my deployment; when we again bulk
> create about 500 VMs with a default security group, the CPU usage of the
> openvswitch agent drops to 10%, even lower than 10%, so I think the
> improvement from these two options is very significant.
> Who can help us to review our spec?
> Best regards,
> shihanzhang
> 
> 
> 
> 
> 
> At 2014-07-03 10:08:21, "Ihar Hrachyshka"  wrote:
> >
> >Oh, so you have the enhancement implemented? Great! Any numbers that
> >shows how much we gain from that?
> >
> >/Ihar
> >
> >On 03/07/14 02:49, shihanzhang wrote:
> >> Hi, Miguel Angel Ajo! Yes, the ipset implementation is ready, today
> >> I will modify my spec; when the spec is approved, I will commit the
> >> code as soon as possible!
> >> 
> >> 
> >> 
> >> 
> >> 
> >> At 2014-07-02 10:12:34, "Miguel Angel Ajo" 
> >> wrote:
> >>> 
> >>> Nice Shihanzhang,
> >>> 
> >>> Do you mean the ipset implementation is ready, or just the
> >>> spec?.
> >>> 
> >>> 
> >>> For the SG group refactor, I don't worry about who does it, or
> >>> who takes the credit, but I believe it's important we address
> >>> this bottleneck during Juno trying to match nova's scalability.
> >>> 
> >>> Best regards, Miguel Ángel.
> >>> 
> >>> 
> >>> On 07/02/2014 02:50 PM, shihanzhang wrote:
> >>>> hi Miguel Ángel and Ihar Hrachyshka, I agree with you that
> >>>> split  the work in several specs, I have finished the work (
> >>>> ipset optimization), you can do 'sg rpc optimization (without
> >>>> fanout)'. as the third part(sg rpc optimization (with fanout)),
> >>>> I think we need talk about it, because just using ipset to
> >>>> optimize security group agent codes does not bring the best
> >>>> results!
> >>>> 
> >>>> Best regards, shihanzhang.
> >>>> 
> >>>> 
> >>>> 
> >>>> 
> >>>> 
> >>>> 
> >>>> 
> >>>> 
> >>>> At 2014-07-02 04:43:24, "Ihar Hrachyshka" 
> >>>> wrote:
> >> On 02/07/14 10:12, Miguel Angel Ajo wrote:
> >> 
> >>> Shihazhang,
> >> 
> >>> I really believe we need the RPC refactor done for this cycle,
> >>> and given the close deadlines we have (July 10 for spec
> >>> submission and July 20 for spec approval).
> >> 
> >>> Don't you think it's going to be better to split the work in
> >>> several specs?
> >> 
> >>> 1) ipset optimization   (you) 2) sg rpc optimization (without
> >>> fanout) (me) 3) sg rpc optimization (with fanout) (edouard, you
> >>> , me)
> >> 
> >> 
> >>> This way we increase the chances of having part of this for the
> >>> Juno cycle. If we go for something too complicated is going to
> >>> take more time for approval.
> >> 
> >> 
> >> I agree. And it not only increases chances to get at least some of
> >> those highly demanded performance enhancements to get into Juno,
> >> it's also "the right thing to do" (c). It's counterproductive to
> >> put multiple vaguely related enhancements in single spec. This
> >> would dim review focus and put us into position of getting
> >> 'all-or-nothing'. We can't afford that.
> >> 
> >> Let's leave one spec per enhancement. @Shihazhang, what do you
> >> think?
> >> 
> >> 
> >>> Also, I proposed the details of 

Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-07-11 Thread Miguel Angel Ajo Pelayo
+1 here too,

Amazed by the performance gains; x2.4 seems like a lot,
and we'd get rid of the deadlocks.
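
For anyone who wants to try it, the switch is basically a different driver in
the SQLAlchemy URL (a sketch; credentials/host are placeholders, and in the
services it's the 'connection' option of the [database] section that carries
this URL):

from sqlalchemy import create_engine

# current default (MySQLdb / mysql-python, blocks the whole process
# under eventlet while waiting on the database):
engine = create_engine('mysql://neutron:secret@127.0.0.1/neutron')

# proposed (MySQL Connector/Python, pure python and eventlet friendly):
engine = create_engine('mysql+mysqlconnector://neutron:secret@127.0.0.1/neutron')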



- Original Message -
> +1
> 
> I'm pretty excited about the possibilities here.  I've had this
> mysqldb/eventlet contention in the back of my mind for some time now.
> I'm glad to see some work being done in this area.
> 
> Carl
> 
> On Fri, Jul 11, 2014 at 7:04 AM, Ihar Hrachyshka  wrote:
> >
> > On 09/07/14 13:17, Ihar Hrachyshka wrote:
> >> Hi all,
> >>
> >> Multiple projects are suffering from db lock timeouts due to
> >> deadlocks deep in mysqldb library that we use to interact with
> >> mysql servers. In essence, the problem is due to missing eventlet
> >> support in mysqldb module, meaning when a db lock is encountered,
> >> the library does not yield to the next green thread, allowing other
> >> threads to eventually unlock the grabbed lock, and instead it just
> >> blocks the main thread, that eventually raises timeout exception
> >> (OperationalError).
> >>
> >> The failed operation is not retried, leaving failing request not
> >> served. In Nova, there is a special retry mechanism for deadlocks,
> >> though I think it's more a hack than a proper fix.
> >>
> >> Neutron is one of the projects that suffer from those timeout
> >> errors a lot. Partly it's due to lack of discipline in how we do
> >> nested calls in l3_db and ml2_plugin code, but that's not something
> >> to change in foreseeable future, so we need to find another
> >> solution that is applicable for Juno. Ideally, the solution should
> >> be applicable for Icehouse too to allow distributors to resolve
> >> existing deadlocks without waiting for Juno.
> >>
> >> We've had several discussions and attempts to introduce a solution
> >> to the problem. Thanks to oslo.db guys, we now have more or less
> >> clear view on the cause of the failures and how to easily fix them.
> >> The solution is to switch mysqldb to something eventlet aware. The
> >> best candidate is probably MySQL Connector module that is an
> >> official MySQL client for Python and that shows some (preliminary)
> >> good results in terms of performance.
> >
> > I've made additional testing, creating 2000 networks in parallel (10
> > thread workers) for both drivers and comparing results.
> >
> > With mysqldb: 215.81 sec
> > With mysql-connector: 88.66
> >
> > ~2.4 times performance boost, ok? ;)
> >
> > I think we should switch to that library *even* if we forget about all
> > the nasty deadlocks we experience now.
> >
> >>
> >> I've posted a Neutron spec for the switch to the new client in Juno
> >> at [1]. Ideally, switch is just a matter of several fixes to
> >> oslo.db that would enable full support for the new driver already
> >> supported by SQLAlchemy, plus 'connection' string modified in
> >> service configuration files, plus documentation updates to refer to
> >> the new official way to configure services for MySQL. The database
> >> code won't, ideally, require any major changes, though some
> >> adaptation for the new client library may be needed. That said,
> >> Neutron does not seem to require any changes, though it was
> >> revealed that there are some alembic migration rules in Keystone or
> >> Glance that need (trivial) modifications.
> >>
> >> You can see how trivial the switch can be achieved for a service
> >> based on example for Neutron [2].
> >>
> >> While this is a Neutron specific proposal, there is an obvious wish
> >> to switch to the new library globally throughout all the projects,
> >> to reduce devops burden, among other things. My vision is that,
> >> ideally, we switch all projects to the new library in Juno, though
> >> we still may leave several projects for K in case any issues arise,
> >> similar to the way projects switched to oslo.messaging during two
> >> cycles instead of one. Though looking at how easy Neutron can be
> >> switched to the new library, I wouldn't expect any issues that
> >> would postpone the switch till K.
> >>
> >> It was mentioned in comments to the spec proposal that there were
> >> some discussions at the latest summit around possible switch in
> >> context of Nova that revealed some concerns, though they do not
> >> seem to be documented anywhere. So if you know anything about it,
> >> please comment.
> >>
> >> So, we'd like to hear from other projects what's your take on that
> >> move, whether you see any issues or have concerns about it.
> >>
> >> Thanks for your comments, /Ihar
> >>
> >> [1]: https://review.openstack.org/#/c/104905/ [2]:
> >> https://review.openstack.org/#/c/105209/
> >>
> >> ___ OpenStack-dev
> >> mailing list OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>

Re: [openstack-dev] [neutron] Spec Proposal Deadline has passed, a note on Spec Approval Deadline

2014-07-12 Thread Miguel Angel Ajo Pelayo
+1 

Sent from my Android phone using TouchDown (www.nitrodesk.com)


-Original Message-
From: Carl Baldwin [c...@ecbaldwin.net]
Received: Saturday, 12 Jul 2014, 17:04
To: OpenStack Development Mailing List [openstack-dev@lists.openstack.org]
Subject: Re: [openstack-dev] [neutron] Spec Proposal Deadline has passed, a 
note on Spec Approval Deadline

+1  This spec had already been proposed quite some time ago.  I'd like to
see this work get in to juno.

Carl
On Jul 12, 2014 9:53 AM, "Yuriy Taraday"  wrote:

> Hello, Kyle.
>
> On Fri, Jul 11, 2014 at 6:18 PM, Kyle Mestery 
> wrote:
>
>> Just a note that yesterday we passed SPD for Neutron. We have a
>> healthy backlog of specs, and I'm working to go through this list and
>> make some final approvals for Juno-3 over the next week. If you've
>> submitted a spec which is in review, please hang tight while myself
>> and the rest of the neutron cores review these. It's likely a good
>> portion of the proposed specs may end up as deferred until "K"
>> release, given where we're at in the Juno cycle now.
>>
>> Thanks!
>> Kyle
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> Please don't skip my spec on rootwrap daemon support:
> https://review.openstack.org/#/c/93889/
> It got -2'd by Mark McClain when my spec in oslo wasn't approved but now
> that's fixed but it's not easy to get hold of Mark.
> Code for that spec (also -2'd by Mark) is close to be finished and
> requires some discussion to get merged by Juno-3.
>
> --
>
> Kind regards, Yuriy.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Spec Proposal Deadline has passed, a note on Spec Approval Deadline

2014-07-14 Thread Miguel Angel Ajo Pelayo
The oslo-rootwrap spec counterpart of this
spec has been approved:

https://review.openstack.org/#/c/94613/

Cheers :-)

- Original Message -
> Yuriy, thanks for your spec and code! I'll sync with Carl tomorrow on this
> and see how we can proceed for Juno around this.
> 
> 
> On Sat, Jul 12, 2014 at 10:00 AM, Carl Baldwin < c...@ecbaldwin.net > wrote:
> 
> 
> 
> 
> +1 This spec had already been proposed quite some time ago. I'd like to see
> this work get in to juno.
> 
> Carl
> On Jul 12, 2014 9:53 AM, "Yuriy Taraday" < yorik@gmail.com > wrote:
> 
> 
> 
> Hello, Kyle.
> 
> On Fri, Jul 11, 2014 at 6:18 PM, Kyle Mestery < mest...@noironetworks.com >
> wrote:
> 
> 
> Just a note that yesterday we passed SPD for Neutron. We have a
> healthy backlog of specs, and I'm working to go through this list and
> make some final approvals for Juno-3 over the next week. If you've
> submitted a spec which is in review, please hang tight while myself
> and the rest of the neutron cores review these. It's likely a good
> portion of the proposed specs may end up as deferred until "K"
> release, given where we're at in the Juno cycle now.
> 
> Thanks!
> Kyle
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> Please don't skip my spec on rootwrap daemon support:
> https://review.openstack.org/#/c/93889/
> It got -2'd by Mark McClain when my spec in oslo wasn't approved but now
> that's fixed but it's not easy to get hold of Mark.
> Code for that spec (also -2'd by Mark) is close to be finished and requires
> some discussion to get merged by Juno-3.
> 
> --
> 
> Kind regards, Yuriy.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon mode support

2014-07-23 Thread Miguel Angel Ajo Pelayo
+1

Sent from my Android phone using TouchDown (www.nitrodesk.com)


-Original Message-
From: Yuriy Taraday [yorik@gmail.com]
Received: Thursday, 24 Jul 2014, 0:42
To: OpenStack Development Mailing List [openstack-dev@lists.openstack.org]
Subject: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon   
mode support

Hello.

I'd like to propose making a spec freeze exception for rootwrap-daemon-mode
spec [1].

Its goal is to save agents' execution time by using daemon mode for
rootwrap and thus avoiding python interpreter startup time as well as sudo
overhead for each call. Preliminary benchmark shows 10x+ speedup of the
rootwrap interaction itself.

This spec have a number of supporters from Neutron team (Carl and Miguel
gave it their +2 and +1) and have all code waiting for review [2], [3], [4].
The only thing that has been blocking its progress is Mark's -2 left when
oslo.rootwrap spec hasn't been merged yet. Now that's not the case and code
in oslo.rootwrap is steadily getting approved [5].

[1] https://review.openstack.org/93889
[2] https://review.openstack.org/82787
[3] https://review.openstack.org/84667
[4] https://review.openstack.org/107386
[5]
https://review.openstack.org/#/q/project:openstack/oslo.rootwrap+topic:bp/rootwrap-daemon-mode,n,z

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-05-17 Thread Miguel Angel Ajo Pelayo
I agree, let's try to find a timeslot that works.

using #openstack-neutron with the meetbot works, but it's going to generate
a lot of noise.

On Tue, May 17, 2016 at 11:47 AM, Ihar Hrachyshka 
wrote:

>
> > On 16 May 2016, at 15:47, Takashi Yamamoto 
> wrote:
> >
> > On Mon, May 16, 2016 at 10:25 PM, Takashi Yamamoto
> >  wrote:
> >> hi,
> >>
> >> On Mon, May 16, 2016 at 9:00 PM, Ihar Hrachyshka 
> wrote:
> >>> +1 for earlier time. But also, have we booked any channel for the
> meeting? Hijacking #openstack-neutron may not work fine during such a busy
> (US) time. I suggest we propose a patch for
> https://github.com/openstack-infra/irc-meetings
> >>
> >> i agree and submitted a patch.
> >> https://review.openstack.org/#/c/316830/
> >
> > oops, unfortunately there seems no meeting channel free at the time slot.
>
> This should be solved either by changing the slot, or by getting a new
> channel registered for meetings. Using unregistered channels, especially
> during busy hours, is not effective, and is prone to overlaps for relevant
> meetings. The meetings will also not get a proper slot at
> eavesdrop.openstack.org.
>
> Ihar
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-05-18 Thread Miguel Angel Ajo Pelayo
Hey,

   Finally we took over the channel for 1h, mostly because the time was
already agreed among many people in opposite timezones and it was a bit
late to cancel it.

   The first point was finding a suitable timeslot for a biweekly meeting
-for some time- and alternatively following up on email. We should not take
over the neutron channel for these meetings anymore; I'm sorry for the
inconvenience.

  Please find the summary here:

http://eavesdrop.openstack.org/meetings/network_common_flow_classifier/2016/network_common_flow_classifier.2016-05-17-17.02.html

On Tue, May 17, 2016 at 8:10 PM, Kevin Benton  wrote:

> Yeah, no meetings in #openstack-neutron please. It leaves us nowhere to
> discuss development stuff during that hour.
>
> On Tue, May 17, 2016 at 2:54 AM, Miguel Angel Ajo Pelayo <
> majop...@redhat.com> wrote:
>
>> I agree, let's try to find a timeslot that works.
>>
>> using #openstack-neutron with the meetbot works, but it's going to
>> generate a lot of noise.
>>
>> On Tue, May 17, 2016 at 11:47 AM, Ihar Hrachyshka 
>> wrote:
>>
>>>
>>> > On 16 May 2016, at 15:47, Takashi Yamamoto 
>>> wrote:
>>> >
>>> > On Mon, May 16, 2016 at 10:25 PM, Takashi Yamamoto
>>> >  wrote:
>>> >> hi,
>>> >>
>>> >> On Mon, May 16, 2016 at 9:00 PM, Ihar Hrachyshka 
>>> wrote:
>>> >>> +1 for earlier time. But also, have we booked any channel for the
>>> meeting? Hijacking #openstack-neutron may not work fine during such a busy
>>> (US) time. I suggest we propose a patch for
>>> https://github.com/openstack-infra/irc-meetings
>>> >>
>>> >> i agree and submitted a patch.
>>> >> https://review.openstack.org/#/c/316830/
>>> >
>>> > oops, unfortunately there seems no meeting channel free at the time
>>> slot.
>>>
>>> This should be solved either by changing the slot, or by getting a new
>>> channel registered for meetings. Using unregistered channels, especially
>>> during busy hours, is not effective, and is prone to overlaps for relevant
>>> meetings. The meetings will also not get a proper slot at
>>> eavesdrop.openstack.org.
>>>
>>> Ihar
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Hirofumi Ichihara to Neutron Core Reviewer Team

2016-04-08 Thread Miguel Angel Ajo Pelayo
On Fri, Apr 8, 2016 at 11:28 AM, Ihar Hrachyshka 
wrote:

> Kevin Benton  wrote:
>
> I don't know if my vote counts in this area, but +1!
>>
>
> What the gentleman said ^, +1.


"me too ^" , +1 !




> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] OVS flow modification performance

2016-04-08 Thread Miguel Angel Ajo Pelayo
Hi, good that you're looking at this,


You could create a lot of ports with this method [1] and a bit of extra
bash, without the extra expense of instance RAM.


[1]
http://www.ajo.es/post/89207996034/creating-a-network-interface-to-tenant-network-in


This effort is going to be even more relevant in the context of the
openvswitch firewall. We still need to make sure it's tested with the
native interface, and eventually we will need flow bundling (like
ovs-ofctl --bundle add-flows), where the whole addition/removal/modification
is sent to be executed atomically by the switch.
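
For the bundling part, the end goal is something equivalent to this (a
sketch, assuming an OVS recent enough for OpenFlow 1.4 bundles; the real
agent would go through its bridge classes rather than shelling out):

import subprocess

def replace_flows_atomically(bridge, flow_file):
    # --bundle wraps every flow mod in flow_file into a single OpenFlow
    # bundle, so the switch applies them all-or-nothing instead of one by one
    subprocess.check_call(['ovs-ofctl', '--bundle', 'add-flows',
                           bridge, flow_file])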






On Thu, Apr 7, 2016 at 10:00 AM, IWAMOTO Toshihiro 
wrote:

> At Thu, 07 Apr 2016 16:33:02 +0900,
> IWAMOTO Toshihiro wrote:
> >
> > At Mon, 18 Jan 2016 12:12:28 +0900,
> > IWAMOTO Toshihiro wrote:
> > >
> > > I'm sending out this mail to share the finding and discuss how to
> > > improve with those interested in neutron ovs performance.
> > >
> > > TL;DR: The native of_interface code, which has been merged recently
> > > and isn't default, seems to consume less CPU time but gives a mixed
> > > result.  I'm looking into this for improvement.
> >
> > I went on to look at implementation details of eventlet etc, but it
> > turned out to be fairly simple.  The OVS agent in the
> > of_interface=native mode waits for a openflow connection from
> > ovs-vswitchd, which can take up to 5 seconds.
> >
> > Please look at the attached graph.
> > The x-axis is time from agent restarts, the y-axis is numbers of ports
> > processed (in treat_devices and bind_devices).  Each port is counted
> > twice; the first slope is treat_devices and the second is
> > bind_devices.  The native of_interface needs some more time on
> > start-up, but bind_devices is about 2x faster.
> >
> > The data was collected with 160 VMs with the devstack default settings.
>
> And if you wonder how other services are doing meanwhile, here is a
> bonus chart.
>
> The ovs agent was restarted 3 times with of_interface=native, then 3
> times with of_interface=ovs-ofctl.
>
> As the test machine has 16 CPUs, 6.25% CPU usage can mean a single
> threaded process is CPU bound.
>
> Frankly, the OVS agent would have less room for improvement than
> other services.  Also, it might be fun to draw similar charts for
> other types of workloads.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [nova] scheduling bandwidth resources / NIC_BW_KB resource class

2016-04-08 Thread Miguel Angel Ajo Pelayo
Hi!,

   In the context of [1] (generic resource pools / scheduling in nova) and
[2] (minimum bandwidth guarantees -egress- in neutron), I had a talk a few
weeks ago with Jay Pipes.

   The idea was leveraging the generic resource pools and scheduling
mechanisms defined in [1] to find the right hosts and track the total
available bandwidth per host (and per host "physical network"); something
in neutron (where exactly is still to be defined) would notify the new API
about the total amount of "NIC_BW_KB" available on every host/physnet.

   That part is quite clear to me,

   From [1] I'm not sure which blueprint introduces the ability to schedule
based on the resource allocation/availability itself,
("resource-providers-scheduler" seems more like an optimization to the
schedule/DB interaction, right?)

And that brings me to another point: at the moment of filtering hosts,
nova, I guess, will have the neutron port information, and it has to somehow
identify if the port is tied to a minimum bandwidth QoS policy.

That would require identifying that the port has a "qos_policy_id"
attached to it, then asking neutron for the specific QoS policy [3],
then looking for a minimum bandwidth rule (still to be defined), and
extracting the required bandwidth from it.

   That again moves some of the responsibility to examine and understand
external resources into nova.

Could it make sense to make that part pluggable via stevedore, so we
would provide something that takes the "resource id" (for a port in this
case) and returns the requirements translated to resource classes
(NIC_BW_KB in this case)?
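
To make that more concrete, I'm thinking of something along these lines (a
rough sketch: the entry point namespace, class name, rule type and field
names below are all invented at this point):

from stevedore import driver

class PortBandwidthTranslator(object):
    """Turn a neutron port + its QoS policy into resource requirements."""

    def get_requirements(self, port, qos_policy):
        # the minimum bandwidth rule type is still to be defined in [2];
        # 'minimum_bandwidth' / 'min_kbps' are placeholders here
        for rule in (qos_policy or {}).get('rules', []):
            if rule.get('type') == 'minimum_bandwidth':
                return {'NIC_BW_KB': rule['min_kbps']}
        return {}

port = {'id': 'port-uuid', 'qos_policy_id': 'policy-uuid'}      # from neutron
qos_policy = {'rules': [{'type': 'minimum_bandwidth',
                         'min_kbps': 1000000}]}                 # from [3]

# nova would only need to know the entry point namespace and the resource
# classes returned, not the neutron QoS models behind them (assuming the
# translator above got registered under that namespace via an entry point):
mgr = driver.DriverManager(namespace='nova.scheduler.resource_translators',
                           name='neutron_port', invoke_on_load=True)
print(mgr.driver.get_requirements(port, qos_policy))  # -> {'NIC_BW_KB': 1000000}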


Best regards,
Miguel Ángel Ajo


[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-February/086371.html
[2] https://bugs.launchpad.net/neutron/+bug/1560963
[3] http://developer.openstack.org/api-ref-networking-v2-ext.html#showPolicy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nova] scheduling bandwidth resources / NIC_BW_KB resource class

2016-04-11 Thread Miguel Angel Ajo Pelayo
On Sun, Apr 10, 2016 at 10:07 AM, Moshe Levi  wrote:

>
>
>
>
> *From:* Miguel Angel Ajo Pelayo [mailto:majop...@redhat.com]
> *Sent:* Friday, April 08, 2016 4:17 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* [openstack-dev] [neutron] [nova] scheduling bandwidth
> resources / NIC_BW_KB resource class
>
>
>
>
>
> Hi!,
>
>
>
>In the context of [1] (generic resource pools / scheduling in nova) and
> [2] (minimum bandwidth guarantees -egress- in neutron), I had a talk a few
> weeks ago with Jay Pipes,
>
>
>
>The idea was leveraging the generic resource pools and scheduling
> mechanisms defined in [1] to find the right hosts and track the total
> available bandwidth per host (and per host "physical network"), something
> in neutron (still to be defined where) would notify the new API about the
> total amount of "NIC_BW_KB" available on every host/physnet.
>



> I believe that NIC bandwidth can be taken from Libvirt see [4] and the
> only piece that is missing is to tell nova the mapping of physnet to
> network interface name. (In case of SR-IOV this is already known)
>
> I see bandwidth (speed) as one of many capabilities of a NIC, therefore I
> think we should take all of them in the same way, in this case from libvirt.
> I was thinking of adding the NIC as a new resource to nova.
>

Yes, at the low level, that's one way to do it. We may need neutron agents
or plugins to collect such information, since in some cases one device
will be tied to one physical network, other devices will be tied to other
physical networks, or even several devices could be connected to the same
physnet. In some cases, connectivity depends on L3 tunnels, and in that
case bandwidth calculation is more complicated (depending on routes, etc.
-I'm not even looking at that case yet-).



>
>
> [4] - the libvirt nodedev XML for the NIC (the XML tags were stripped by
> the mail formatting; the fields that survived were):
>   name:   net_enp129s0_e4_1d_2d_2d_8c_41
>   path:   /sys/devices/pci:80/:80:01.0/:81:00.0/net/enp129s0
>   parent: pci__81_00_0
>   net capability: interface enp129s0, address e4:1d:2d:2d:8c:41,
>   followed by further capability sub-elements (link speed/state, features)
>   whose content was lost
>
>
>
>That part is quite clear to me,
>
>
>
>From [1] I'm not sure which blueprint introduces the ability to
> schedule based on the resource allocation/availability itself,
> ("resource-providers-scheduler" seems more like an optimization to the
> schedule/DB interaction, right?)
>
> My understanding is that the resource provider blueprint is just a rough
> filter of compute nodes before passing them to the scheduler filters. The
> existing filters here [6] will do the accurate filtering of resources.
>
> see [5]
>
>
>
> [5] -
> http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2016-04-04.log.html#t2016-04-04T16:24:10
>
>
> [6] - http://docs.openstack.org/developer/nova/filter_scheduler.html
>
>
>

Thanks, yes, if those filters can operate on the generic resource pools,
then, great, we will just need to write the right filters.



> And, that brings me to another point: at the moment of filtering
> hosts, nova  I guess, will have the neutron port information, it has to
> somehow identify if the port is tied to a minimum bandwidth QoS policy.
>
>
>
> That would require identifying that the port has a "qos_policy_id"
> attached to it, and then, asking neutron for the specific QoS policy  [3],
> then look out for a minimum bandwidth rule (still to be defined), and
> extract the required bandwidth from it.
>
> I am not sure if that is the correct way to do it, but you can create a NIC
> bandwidth filter (or NIC capabilities filter) and in it you can implement
> the way to retrieve QoS policy information by using the neutron client.
>

That's my concern: that logic would have to live on the nova side, again,
and it would be tightly coupled to the neutron models. I'd be glad to find a
way to uncouple nova from that as much as possible. And, even better, if we
could find a way to avoid the need for nova to retrieve policies as it
discovers ports.


>
>
>That moves, again some of the responsibility to examine and understand
> external resources to nova.
>
>
>
> Could it make sense to make that part pluggable via stevedore?, so we
> would provide something that takes the "resource id" (for a port in this
> case) and returns the requirements translated to resource classes
> (NIC_BW_KB in this case).
>
>
>
>
>
> Best regards,
>
> Miguel 

Re: [openstack-dev] [Neutron] OVS flow modification performance

2016-04-11 Thread Miguel Angel Ajo Pelayo
On Mon, Apr 11, 2016 at 11:40 AM, IWAMOTO Toshihiro
 wrote:
> At Fri, 8 Apr 2016 12:21:21 +0200,
> Miguel Angel Ajo Pelayo wrote:
>>
>> Hi, good that you're looking at this,
>>
>>
>> You could create a lot of ports with this method [1] and a bit of extra
>> bash, without the extra expense of instance RAM.
>>
>>
>> [1]
>> http://www.ajo.es/post/89207996034/creating-a-network-interface-to-tenant-network-in
>>
>>
>> This effort is going to be still more relevant in the context of
>> openvswitch firewall. We still need to make sure it's tested with the
>> native interface, and eventually we will need flow bundling (like in
>> ovs-ofctl --bundle add-flows) where the whole addition/removal/modification
>> is sent to be executed atomically by the switch.
>
> Bad news is that ovs-firewall isn't currently using the native
> of_interface much.  I can add install_xxx methods to
> OpenFlowSwitchMixin classes so that ovs-firewall can use the native
> interface.
> Do you have a plan for implementing flow bundling or using conjunction?
>

Adding Jakub to the thread,

IMO, if the native interface is going to provide us with greater speed
for rule manipulation, we should look into it.

We don't use bundling or conjunctions yet, but it's part of the plan.
Bundling will allow atomicity of operations with rules (switching
firewall rules, etc, as we have with iptables-save /
iptables-restore), and conjunctions will reduce the number of entries.
(No expansion of IP addresses for remote groups, no expansion of
security group rules per port, when several ports are on the same
security group on the same compute host).
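
To illustrate the kind of reduction conjunctions give us, roughly something
like this (a sketch only; the table number, priority and matches are
illustrative, br is assumed to be the agent's bridge wrapper with an
ovs_lib-style add_flow, not the actual ovs firewall code):

def install_remote_group_accept(br, remote_group_ips, sg_port_ofports):
    # N_ips + N_ports flows instead of N_ips * N_ports flows
    for ip in remote_group_ips:            # dimension 1: remote group members
        br.add_flow(table=82, priority=70, dl_type=0x0800, nw_src=ip,
                    actions='conjunction(10,1/2)')
    for ofport in sg_port_ofports:         # dimension 2: ports in the SG
        br.add_flow(table=82, priority=70, in_port=ofport,
                    actions='conjunction(10,2/2)')
    # the accept action only fires when both dimensions matched
    br.add_flow(table=82, priority=70, conj_id=10, actions='normal')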

Do we have any metric of bare rule manipulation time (ms/rule, for example)?

As a note, we're around 80 rules/port with IPv6 + IPv4 on the default
sec group plus a couple of rules.






>> On Thu, Apr 7, 2016 at 10:00 AM, IWAMOTO Toshihiro 
>> wrote:
>>
>> > At Thu, 07 Apr 2016 16:33:02 +0900,
>> > IWAMOTO Toshihiro wrote:
>> > >
>> > > At Mon, 18 Jan 2016 12:12:28 +0900,
>> > > IWAMOTO Toshihiro wrote:
>> > > >
>> > > > I'm sending out this mail to share the finding and discuss how to
>> > > > improve with those interested in neutron ovs performance.
>> > > >
>> > > > TL;DR: The native of_interface code, which has been merged recently
>> > > > and isn't default, seems to consume less CPU time but gives a mixed
>> > > > result.  I'm looking into this for improvement.
>> > >
>> > > I went on to look at implementation details of eventlet etc, but it
>> > > turned out to be fairly simple.  The OVS agent in the
>> > > of_interface=native mode waits for a openflow connection from
>> > > ovs-vswitchd, which can take up to 5 seconds.
>> > >
>> > > Please look at the attached graph.
>> > > The x-axis is time from agent restarts, the y-axis is numbers of ports
>> > > processed (in treat_devices and bind_devices).  Each port is counted
>> > > twice; the first slope is treat_devices and the second is
>> > > bind_devices.  The native of_interface needs some more time on
>> > > start-up, but bind_devices is about 2x faster.
>> > >
>> > > The data was collected with 160 VMs with the devstack default settings.
>> >
>> > And if you wonder how other services are doing meanwhile, here is a
>> > bonus chart.
>> >
>> > The ovs agent was restarted 3 times with of_interface=native, then 3
>> > times with of_interface=ovs-ofctl.
>> >
>> > As the test machine has 16 CPUs, 6.25% CPU usage can mean a single
>> > threaded process is CPU bound.
>> >
>> > Frankly, the OVS agent would have little rooms for improvement than
>> > other services.  Also, it might be fun to draw similar charts for
>> > other types of workloads.
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nova] scheduling bandwidth resources / NIC_BW_KB resource class

2016-04-11 Thread Miguel Angel Ajo Pelayo
On Mon, Apr 11, 2016 at 1:46 PM, Jay Pipes  wrote:
> Hi Miguel Angel, comments/answers inline :)
>
> On 04/08/2016 09:17 AM, Miguel Angel Ajo Pelayo wrote:
>>
>> Hi!,
>>
>> In the context of [1] (generic resource pools / scheduling in nova)
>> and [2] (minimum bandwidth guarantees -egress- in neutron), I had a talk
>> a few weeks ago with Jay Pipes,
>>
>> The idea was leveraging the generic resource pools and scheduling
>> mechanisms defined in [1] to find the right hosts and track the total
>> available bandwidth per host (and per host "physical network"),
>> something in neutron (still to be defined where) would notify the new
>> API about the total amount of "NIC_BW_KB" available on every host/physnet.
>
>
> Yes, what we discussed was making it initially per host, meaning the host
> would advertise a total aggregate bandwidth amount for all NICs that it uses
> for the data plane as a single amount.
>
> The other way to track this resource class (NIC_BW_KB) would be to make the
> NICs themselves be resource providers and then the scheduler could pick a
> specific NIC to bind the port to based on available NIC_BW_KB on a
> particular NIC.
>
> The former method makes things conceptually easier at the expense of
> introducing greater potential for retrying placement decisions (since the
> specific NIC to bind a port to wouldn't be known until the claim is made on
> the compute host). The latter method adds complexity to the filtering and
> scheduler in order to make more accurate placement decisions that would
> result in fewer retries.
>
>> That part is quite clear to me,
>>
>> From [1] I'm not sure which blueprint introduces the ability to
>> schedule based on the resource allocation/availability itself,
>> ("resource-providers-scheduler" seems more like an optimization to the
>> schedule/DB interaction, right?)
>
>
> Yes, you are correct about the above blueprint; it's only for moving the
> Python-side filters to be a DB query.
>
> The resource-providers-allocations blueprint:
>
> https://review.openstack.org/300177
>
> Is the one where we convert the various consumed resource amount fields to
> live in the single allocations table that may be queried for usage
> information.
>
> We aim to use the ComputeNode object as a facade that hides the migration of
> these data fields as much as possible so that the scheduler actually does
> not need to know that the schema has changed underneath it. Of course, this
> only works for *existing* resource classes, like vCPU, RAM, etc. It won't
> work for *new* resource classes like the discussed NET_BW_KB because,
> clearly, we don't have an existing field in the instance_extra or other
> tables that contain that usage amount and therefore can't use ComputeNode
> object as a facade over a non-existing piece of data.
>
> Eventually, the intent is to change the ComputeNode object to return a new
> AllocationList object that would contain all of the compute node's resources
> in a tabular format (mimicking the underlying allocations table):
>
> https://review.openstack.org/#/c/282442/20/nova/objects/resource_provider.py
>
> Once this is done, the scheduler can be fitted to query this AllocationList
> object to make resource usage and placement decisions in the Python-side
> filters.
>
> We are still debating on the resource-providers-scheduler-db-filters
> blueprint:
>
> https://review.openstack.org/#/c/300178/
>
> Whether to change the existing FilterScheduler or create a brand new
> scheduler driver. I could go either way, frankly. If we made a brand new
> scheduler driver, it would do a query against the compute_nodes table in the
> DB directly. The legacy FilterScheduler would manipulate the AllocationList
> object returned by the ComputeNode.allocations attribute. Either way we get
> to where we want to go: representing all quantitative resources in a
> standardized and consistent fashion.
>
>>  And, that brings me to another point: at the moment of filtering
>> hosts, nova  I guess, will have the neutron port information, it has to
>> somehow identify if the port is tied to a minimum bandwidth QoS policy.
>
>
> Yes, Nova's conductor gathers information about the requested networks
> *before* asking the scheduler where to place hosts:
>
> https://github.com/openstack/nova/blob/stable/mitaka/nova/conductor/manager.py#L362
>
>>  That would require identifying that the port has a "qos_policy_id"
>> attached to it, and then, asking neutron for the specific QoS policy
>>   [3], then look out for a minimum bandwidth rule

Re: [openstack-dev] [Neutron] OVS flow modification performance

2016-04-15 Thread Miguel Angel Ajo Pelayo
On Fri, Apr 15, 2016 at 7:32 AM, IWAMOTO Toshihiro
 wrote:
> At Mon, 11 Apr 2016 14:42:59 +0200,
> Miguel Angel Ajo Pelayo wrote:
>>
>> On Mon, Apr 11, 2016 at 11:40 AM, IWAMOTO Toshihiro
>>  wrote:
>> > At Fri, 8 Apr 2016 12:21:21 +0200,
>> > Miguel Angel Ajo Pelayo wrote:
>> >>
>> >> Hi, good that you're looking at this,
>> >>
>> >>
>> >> You could create a lot of ports with this method [1] and a bit of extra
>> >> bash, without the extra expense of instance RAM.
>> >>
>> >>
>> >> [1]
>> >> http://www.ajo.es/post/89207996034/creating-a-network-interface-to-tenant-network-in
>> >>
>> >>
>> >> This effort is going to be still more relevant in the context of
>> >> openvswitch firewall. We still need to make sure it's tested with the
>> >> native interface, and eventually we will need flow bundling (like in
>> >> ovs-ofctl --bundle add-flows) where the whole 
>> >> addition/removal/modification
>> >> is sent to be executed atomically by the switch.
>> >
>> > Bad news is that ovs-firewall isn't currently using the native
>> > of_interface much.  I can add install_xxx methods to
>> > OpenFlowSwitchMixin classes so that ovs-firewall can use the native
>> > interface.
>> > Do you have a plan for implementing flow bundling or using conjunction?
>> >
>>
>> Adding Jakub to the thread,
>>
>> IMO, if the native interface is going to provide us with greater speed
>> for rule manipulation, we should look into it.
>>
>> We don't use bundling or conjunctions yet, but it's part of the plan.
>> Bundling will allow atomicity of operations with rules (switching
>> firewall rules, etc, as we have with iptables-save /
>> iptables-restore), and conjunctions will reduce the number of entries.
>> (No expansion of IP addresses for remote groups, no expansion of
>> security group rules per port, when several ports are on the same
>> security group on the same compute host).
>>
>> Do we have any metric of bare rule manipulation time (ms/rule, for example)?
>
> No bare numbers but from a graph in the other mail I sent last week,
> bind_devices for 160 ports (iirc, that amounts to 800 flows) takes
> 4.5sec with of_interface=native and 8sec with of_interface=ovs-ofctl,
> which means a native add-flow is 4ms faster than the other.
>
> As the ovs firewall uses DeferredOVSBridge and has less exec
> overheads, I have no idea how much gain the native of_interface
> brings.
>
>> As a note, we're around 80 rules/port with IPv6 + IPv4 on the default
>> sec group plus a couple of rules.
>
> I booted 120VMs on one network and the default security group
> generated 62k flows.  It seems using conjunction is the #1 item for
> performance.
>

Ouch, hello again, cartesian product! Luckily we already know how to
optimize that; now we need to get our hands on it.

@iwamoto, thanks for trying it.
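As a rough illustration of why conjunction matters so much here, a minimal
sketch (the table number, the reg5 register for the local port and the final
action are assumptions made up for the example, not the actual ovs firewall
layout) of how conjunctive flows collapse the remote-IP x port cartesian
product:

    # Instead of one flow per (remote_ip, port) pair, each dimension is
    # matched on its own and combined through conj_id, so the flow count
    # becomes additive rather than multiplicative.
    def conjunctive_flows(conj_id, remote_ips, port_regs):
        flows = []
        # dimension 1/2: one flow per remote IP in the remote group
        for ip in remote_ips:
            flows.append('table=82,priority=70,ip,nw_src=%s,'
                         'actions=conjunction(%d,1/2)' % (ip, conj_id))
        # dimension 2/2: one flow per local port register value
        for reg in port_regs:
            flows.append('table=82,priority=70,ip,reg5=%#x,'
                         'actions=conjunction(%d,2/2)' % (reg, conj_id))
        # single flow that fires only when both dimensions matched
        flows.append('table=82,priority=70,conj_id=%d,'
                     'actions=normal' % conj_id)
        return flows

    # 120 remote IPs x 120 ports: ~14400 flows when expanded, 241 with
    # conjunction.
    print(len(conjunctive_flows(10,
                                ['10.0.0.%d' % i for i in range(1, 121)],
                                range(1, 121))))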



>
>
>>
>> >> On Thu, Apr 7, 2016 at 10:00 AM, IWAMOTO Toshihiro 
>> >> wrote:
>> >>
>> >> > At Thu, 07 Apr 2016 16:33:02 +0900,
>> >> > IWAMOTO Toshihiro wrote:
>> >> > >
>> >> > > At Mon, 18 Jan 2016 12:12:28 +0900,
>> >> > > IWAMOTO Toshihiro wrote:
>> >> > > >
>> >> > > > I'm sending out this mail to share the finding and discuss how to
>> >> > > > improve with those interested in neutron ovs performance.
>> >> > > >
>> >> > > > TL;DR: The native of_interface code, which has been merged recently
>> >> > > > and isn't default, seems to consume less CPU time but gives a mixed
>> >> > > > result.  I'm looking into this for improvement.
>> >> > >
>> >> > > I went on to look at implementation details of eventlet etc, but it
>> >> > > turned out to be fairly simple.  The OVS agent in the
>> >> > > of_interface=native mode waits for a openflow connection from
>> >> > > ovs-vswitchd, which can take up to 5 seconds.
>> >> > >
>> >> > > Please look at the attached graph.
>> >> > > The x-axis is time from agent restarts, the y-axis is numbers of ports
>> >> > > processed (in treat_devices and bind_devices).  Each port is counted
>> >> > > twice; the first slope is treat_devices and the second is

Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-20 Thread Miguel Angel Ajo Pelayo
Sorry, I just saw it: FC = flow classifier :-). I made it a multi-purpose
abbreviation now ;)

On Wed, Apr 20, 2016 at 2:12 PM, Miguel Angel Ajo Pelayo
 wrote:
> I think this is an interesting topic.
>
> What do you mean exactly by FC ? (feature chaining?)
>
> I believe we have three things to look at:  (sorry for the TL)
>
> 1) The generalization of traffic filters / traffic classifiers. Having
> common models, some sort of common API or common API structure
> available, and translators to convert those filters to iptables,
> openflow filters, etc..
>
> 2) The enhancement of extensiblity of agents via Extension API.
>
> 3) How we chain features in OpenFlow; the current approach of just
> inserting rules results in incompatible extensions. This becomes
> especially relevant for the new openvswitch firewall.
>
> 2 and 3 are interlinked, and a good mechanism to enhance (3) should be
> provided in (2).
>
> We need to resolve:
>
> a) The order of tables, and how openflow actions chain the
> different features in the pipeline.  Some naive thinking brings me
> into the idea that we need to identify different input/output stages
> of packet processing, and every feature/extension declares the point
> where it needs to be. And then when we have all features, every
> feature gets its own table number, and the "next" action in
> the pipeline.
>
> b) We need to have a way to request openflow registers to use in
> extensions, so one extension doesn't overwrite another's registers
>
> c) Those registers need to be given logical names that other
> extensions can query for (for example "port_number", "local_zone",
> etc..) , and those standard registers should be filled in for all
> extensions at the input stage.
>
>and probably c,d,e,f,g,h what I didn't manage to think of.
>
> On Fri, Apr 15, 2016 at 11:13 PM, Cathy Zhang  
> wrote:
>> Hi Reedip,
>>
>>
>>
>> Sure will include you in the discussion. Let me know if there are other
>> Tap-as-a-Service members who would like to join this initiative.
>>
>>
>>
>> Cathy
>>
>>
>>
>> From: reedip banerjee [mailto:reedi...@gmail.com]
>> Sent: Thursday, April 14, 2016 7:03 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and
>> OVS Agent extension for Newton cycle
>>
>>
>>
>> Speaking on behalf of Tap-as-a-Service members, we would also be very much
>> interested in the following initiative :)
>>
>>
>>
>> On Fri, Apr 15, 2016 at 5:14 AM, Ihar Hrachyshka 
>> wrote:
>>
>> Cathy Zhang  wrote:
>>
>>
>> I think there is no formal spec or anything, just some emails around there.
>>
>> That said, I don’t follow why it’s a requirement for SFC to switch to l2
>> agent extension mechanism. Even today, with SFC maintaining its own agent,
>> there are no clear guarantees for flow priorities that would avoid all
>> possible conflicts.
>>
>> Cathy> There is no requirement for SFC to switch. My understanding is that
>> current L2 agent extension does not solve the conflicting entry issue if two
>> features inject the same priority table entry. I think this new L2 agent
>> effort is try to come up with a mechanism to resolve this issue. Of course
>> if each feature( SFC or Qos) uses its own agent, then there is no
>> coordination and no way to avoid conflicts.
>>
>>
>> Sorry, I probably used misleading wording. I meant, why do we consider the
>> semantic flow management support in l2 agent extension framework a
>> *prerequisite* for SFC to switch to l2 agent extensions? The existing
>> framework should already allow SFC to achieve what you have in the
>> subproject tree implemented as a separate agent (essentially a fork of OVS
>> agent). It will also set SFC to use standard extension mechanisms instead of
>> hacky inheritance from OVS agent classes. So even without the strict
>> semantic flow management, there is benefit for the subproject.
>>
>> With that in mind, I would split this job into 3 pieces:
>> * first, adopt l2 agent extension mechanism for SFC functionality (dropping
>> custom agent);
>> * then, work on semantic flow management support in OVS agent API class [1];
>> * once the feature emerges, switch SFC l2 agent extension to the new
>> framework to manage SFC flows.
>>
>> I would at least prioritize the first point and target it to Newton-1. Other
>> bu

Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-20 Thread Miguel Angel Ajo Pelayo
I think this is an interesting topic.

What do you mean exactly by FC ? (feature chaining?)

I believe we have three things to look at:  (sorry for the TL)

1) The generalization of traffic filters / traffic classifiers. Having
common models, some sort of common API or common API structure
available, and translators to convert those filters to iptables,
openflow filters, etc..

2) The enhancement of extensiblity of agents via Extension API.

3) How we chain features in OpenFlow; the current approach of just
inserting rules results in incompatible extensions. This becomes
especially relevant for the new openvswitch firewall.

2 and 3 are interlinked, and a good mechanism to enhance (3) should be
provided in (2).

We need to resolve:

a) The order of tables, and how openflow actions chain the
different features in the pipeline.  Some naive thinking brings me
into the idea that we need to identify different input/output stages
of packet processing, and every feature/extension declares the point
where it needs to be. And then when we have all features, every
feature gets its own table number, and the "next" action in
the pipeline.

b) We need to have a way to request openflow registers to use in
extensions, so one extension doesn't overwrite another's registers

   c) Those registers need to be given logical names that other
extensions can query for (for example "port_number", "local_zone",
etc.), and those standard registers should be filled in for all
extensions at the input stage.

   and probably c,d,e,f,g,h what I didn't manage to think of.
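To make (a)-(c) above a bit more concrete, here is a purely hypothetical
sketch (class, stage and method names are invented for illustration; this is
not an existing neutron interface) of the kind of allocation API the agent
could expose to extensions:

    # Hypothetical sketch only -- not an existing neutron API.  Features
    # declare the pipeline stage they need; table numbers, "next table"
    # hops and shared registers are resolved once everything is declared.
    class PipelineAllocator(object):
        STAGES = ('input', 'pre-firewall', 'firewall',
                  'post-firewall', 'output')

        def __init__(self):
            self._features = []     # (stage index, feature name)
            self._registers = {}    # logical name -> register number
            self._next_register = 0

        def register_feature(self, name, stage):
            self._features.append((self.STAGES.index(stage), name))

        def allocate_register(self, logical_name):
            # e.g. "port_number" or "local_zone"; a shared logical name
            # always resolves to the same register, so two extensions
            # never clobber each other's registers.
            if logical_name not in self._registers:
                self._registers[logical_name] = self._next_register
                self._next_register += 1
            return self._registers[logical_name]

        def resolve(self):
            # Called once all features are known: each feature gets its
            # table number and the table of the next feature in line.
            ordered = [name for _, name in sorted(self._features)]
            tables = {n: 10 + i for i, n in enumerate(ordered)}
            next_table = {}
            for i, name in enumerate(ordered):
                is_last = (i + 1 == len(ordered))
                next_table[name] = None if is_last else tables[ordered[i + 1]]
            return tables, next_table

    alloc = PipelineAllocator()
    alloc.register_feature('qos', 'input')
    alloc.register_feature('ovs-fw', 'firewall')
    alloc.register_feature('sfc', 'post-firewall')
    local_zone_reg = alloc.allocate_register('local_zone')
    tables, next_tables = alloc.resolve()   # e.g. qos -> 10, ovs-fw -> 11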

On Fri, Apr 15, 2016 at 11:13 PM, Cathy Zhang  wrote:
> Hi Reedip,
>
>
>
> Sure will include you in the discussion. Let me know if there are other
> Tap-as-a-Service members who would like to join this initiative.
>
>
>
> Cathy
>
>
>
> From: reedip banerjee [mailto:reedi...@gmail.com]
> Sent: Thursday, April 14, 2016 7:03 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and
> OVS Agent extension for Newton cycle
>
>
>
> Speaking on behalf of Tap-as-a-Service members, we would also be very much
> interested in the following initiative :)
>
>
>
> On Fri, Apr 15, 2016 at 5:14 AM, Ihar Hrachyshka 
> wrote:
>
> Cathy Zhang  wrote:
>
>
> I think there is no formal spec or anything, just some emails around there.
>
> That said, I don’t follow why it’s a requirement for SFC to switch to l2
> agent extension mechanism. Even today, with SFC maintaining its own agent,
> there are no clear guarantees for flow priorities that would avoid all
> possible conflicts.
>
> Cathy> There is no requirement for SFC to switch. My understanding is that
> current L2 agent extension does not solve the conflicting entry issue if two
> features inject the same priority table entry. I think this new L2 agent
> effort is try to come up with a mechanism to resolve this issue. Of course
> if each feature( SFC or Qos) uses its own agent, then there is no
> coordination and no way to avoid conflicts.
>
>
> Sorry, I probably used misleading wording. I meant, why do we consider the
> semantic flow management support in l2 agent extension framework a
> *prerequisite* for SFC to switch to l2 agent extensions? The existing
> framework should already allow SFC to achieve what you have in the
> subproject tree implemented as a separate agent (essentially a fork of OVS
> agent). It will also set SFC to use standard extension mechanisms instead of
> hacky inheritance from OVS agent classes. So even without the strict
> semantic flow management, there is benefit for the subproject.
>
> With that in mind, I would split this job into 3 pieces:
> * first, adopt l2 agent extension mechanism for SFC functionality (dropping
> custom agent);
> * then, work on semantic flow management support in OVS agent API class [1];
> * once the feature emerges, switch SFC l2 agent extension to the new
> framework to manage SFC flows.
>
> I would at least prioritize the first point and target it to Newton-1. Other
> bullet points may take significant time to bake.
>
> [1]
> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_agent_extension_api.py
>
>
>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> --
>
> Thanks and Regards,
> Reedip Banerjee
>
> IRC: reedip
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

_

Re: [openstack-dev] [neutron] [nova] scheduling bandwidth resources / NIC_BW_KB resource class

2016-04-20 Thread Miguel Angel Ajo Pelayo
Inline update.

On Mon, Apr 11, 2016 at 4:22 PM, Miguel Angel Ajo Pelayo
 wrote:
> On Mon, Apr 11, 2016 at 1:46 PM, Jay Pipes  wrote:
>> On 04/08/2016 09:17 AM, Miguel Angel Ajo Pelayo wrote:
[...]
>> Yes, Nova's conductor gathers information about the requested networks
>> *before* asking the scheduler where to place hosts:
>>
>> https://github.com/openstack/nova/blob/stable/mitaka/nova/conductor/manager.py#L362
>>
>>>  That would require identifying that the port has a "qos_policy_id"
>>> attached to it, and then, asking neutron for the specific QoS policy
>>>   [3], then look out for a minimum bandwidth rule (still to be defined),
>>> and extract the required bandwidth from it.
>>
>>
>> Yep, exactly correct.
>>
>>> That moves, again some of the responsibility to examine and
>>> understand external resources to nova.
>>
>>
>> Yep, it does. The alternative is more retries for placement decisions
>> because accurate decisions cannot be made until the compute node is already
>> selected and the claim happens on the compute node.
>>
>>>  Could it make sense to make that part pluggable via stevedore?, so
>>> we would provide something that takes the "resource id" (for a port in
>>> this case) and returns the requirements translated to resource classes
>>> (NIC_BW_KB in this case).
>>
>>
>> Not sure Stevedore makes sense in this context. Really, we want *less*
>> extensibility and *more* consistency. So, I would envision rather a system
>> where Nova would call to Neutron before scheduling when it has received a
>> port or network ID in the boot request and ask Neutron whether the port or
>> network has any resource constraints on it. Neutron would return a
>> standardized response containing each resource class and the amount
>> requested in a dictionary (or better yet, an os_vif.objects.* object,
>> serialized). Something like:
>>
>> {
>>   'resources': {
>> '': {
>>   'NIC_BW_KB': 2048,
>>   'IPV4_ADDRESS': 1
>> }
>>   }
>> }
>>
>
> Oh, true, that's a great idea, having some API that translates a
> neutron resource, to scheduling constraints. The external call will be
> still required, but the coupling issue is removed.
>
>


I had a talk yesterday with @iharchys, @dansmith, and @sbauzas about
this, and we believe the synthesis of resource usage / scheduling
constraints from neutron makes sense.

We should probably look into providing those details in a read-only
dictionary during port creation/update/show in general; that way,
we would not be adding an extra API call to neutron from the nova
scheduler to figure out any of those details. That extra optimization
is something we may need to discuss with the neutron community.
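Just to make the idea concrete, a rough sketch of what such a read-only
dictionary could look like in a port show response (the field and resource
class names are illustrative only, nothing here is an agreed API), assuming
the attached QoS policy carries a 2048 kbps minimum bandwidth rule:

    # Hypothetical port show fragment; 'resource_request' and 'NIC_BW_KB'
    # are placeholders, not existing neutron fields.
    port_show_response = {
        'port': {
            'id': '<port-uuid>',
            'qos_policy_id': '<qos-policy-uuid>',
            # synthesised by neutron from the attached QoS policy, so the
            # nova scheduler would not need a second API call:
            'resource_request': {
                'NIC_BW_KB': 2048,
            },
        },
    }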



>> In the case of the NIC_BW_KB resource class, Nova's scheduler would look for
>> compute nodes that had a NIC with that amount of bandwidth still available.
>> In the case of the IPV4_ADDRESS resource class, Nova's scheduler would use
>> the generic-resource-pools interface to find a resource pool of IPV4_ADDRESS
>> resources (i.e. a Neutron routed network or subnet allocation pool) that has
>> available IP space for the request.
>>
>
> Not sure about the IPV4_ADDRESS part because I still didn't look on
> how they resolve routed networks with this new framework, but for
> other constraints makes perfect sense to me.
>
>> Best,
>> -jay
>>
>>
>>> Best regards,
>>> Miguel Ángel Ajo
>>>
>>>
>>> [1]
>>>
>>> http://lists.openstack.org/pipermail/openstack-dev/2016-February/086371.html
>>> [2] https://bugs.launchpad.net/neutron/+bug/1560963
>>> [3]
>>> http://developer.openstack.org/api-ref-networking-v2-ext.html#showPolicy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-21 Thread Miguel Angel Ajo Pelayo
On Thu, Apr 21, 2016 at 9:54 AM, Vikram Choudhary  wrote:
> AFAIK, there is proposal about adding a 'priority' option in the existing
> flow classifier rule. This can ensure the rule ordering.
>
>

It's more complicated than that: there you're only considering flow
classifiers, while we need to make the full pipeline of (externally
pluggable) features work together.

> On Thu, Apr 21, 2016 at 12:58 PM, IWAMOTO Toshihiro 
> wrote:
>>
>> At Wed, 20 Apr 2016 14:12:07 +0200,
>> Miguel Angel Ajo Pelayo wrote:
>> >
>> > I think this is an interesting topic.
>> >
>> > What do you mean exactly by FC ? (feature chaining?)
>> >
>> > I believe we have three things to look at:  (sorry for the TL)
>> >
>> > 1) The generalization of traffic filters / traffic classifiers. Having
>> > common models, some sort of common API or common API structure
>> > available, and translators to convert those filters to iptables,
>> > openflow filters, etc..
>> >
>> > 2) The enhancement of extensiblity of agents via Extension API.
>> >
>> > 3) How we chain features in OpenFlow; the current approach of just
>> > inserting rules results in incompatible extensions. This becomes
>> > especially relevant for the new openvswitch firewall.
>> >
>> > 2 and 3 are interlinked, and a good mechanism to enhance (3) should be
>> > provided in (2).
>> >
>> > We need to resolve:
>> >
>> > a) The order of tables, and how openflow actions chain the
>> > different features in the pipeline.  Some naive thinking brings me
>> > into the idea that we need to identify different input/output stages
>> > of packet processing, and every feature/extension declares the point
>> > where it needs to be. And then when we have all features, every
>> > feature gets its own table number, and the "next" action in
>> > the pipeline.
>>
>> Can we create an API that allocates flow insertion points and table
>> numbers?  How can we ensure correct ordering of flows?

I believe that just an API to allocate flow insertion points and table
numbers wouldn't work on its own, because you need to get the "next" hop
in the table, and that next hop would not yet be resolved when you ask
for it (unless, perhaps, we return a mutable object).

The idea is that once all features are declared and inspected, we
have the next hops and table numbers for all features.

Also another API for requesting openflow registers would be necessary,
as extensions consume registers for different purposes.
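A minimal sketch of that "mutable object" idea (the base table number and the
names are made up; this is just to illustrate the mechanism, not an existing
API): the agent hands each extension a placeholder for a table number and only
fills in the real value once every feature has been declared:

    class TableRef(object):
        """A table number that is not known yet at declaration time."""
        def __init__(self):
            self.number = None

        def __int__(self):
            if self.number is None:
                raise RuntimeError('pipeline not resolved yet')
            return self.number

    def resolve_pipeline(declared_features):
        # declared_features: ordered list of (name, TableRef) pairs
        for offset, (_name, ref) in enumerate(declared_features):
            ref.number = 10 + offset    # base table number is an assumption

    qos_table, sfc_table = TableRef(), TableRef()
    resolve_pipeline([('qos', qos_table), ('sfc', sfc_table)])
    # Only now can the QoS extension format flows that chain to whatever
    # feature ended up next in the pipeline:
    flow = ('table=%d,priority=10,actions=resubmit(,%d)'
            % (int(qos_table), int(sfc_table)))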

>> IMHO, it might be a time to collect low-level flow operation functions
>> into a single repository and test interoperability there.
>>

That may be something we must consider; I don't completely disagree,
but if we can find a way to solve the issue dynamically, that would
lead to quicker evolution, easy interoperability with out-of-tree
solutions, and cross-version compatibility.

>> > b) We need to have a way to request openflow registers to use in
>> > extensions, so one extension doesn't overwrite another's registers
>> >
>> > c) Those registers need to be given logical names that other
>> > extensions can query for (for example "port_number", "local_zone",
>> > etc..) , and those standard registers should be filled in for all
>> > extensions at the input stage.
>> >
>> >and probably c,d,e,f,g,h what I didn't manage to think of.
>> >
>> > On Fri, Apr 15, 2016 at 11:13 PM, Cathy Zhang 
>> > wrote:
>> > > Hi Reedip,
>> > >
>> > >
>> > >
>> > > Sure will include you in the discussion. Let me know if there are
>> > > other
>> > > Tap-as-a-Service members who would like to join this initiative.
>> > >
>> > >
>> > >
>> > > Cathy
>> > >
>> > >
>> > >
>> > > From: reedip banerjee [mailto:reedi...@gmail.com]
>> > > Sent: Thursday, April 14, 2016 7:03 PM
>> > > To: OpenStack Development Mailing List (not for usage questions)
>> > > Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier
>> > > and
>> > > OVS Agent extension for Newton cycle
>> > >
>> > >
>> > >
>> > > Speaking on behalf of Tap-as-a-Service members, we would also be very
>> > > much
>> > > interested in the following initiative :)
>> > >

Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-27 Thread Miguel Angel Ajo Pelayo
Trying to find you folks. I was late.
On 27/4/2016 12:04, "Paul Carver" wrote:

> SFC team and anybody else dealing with flow selection/classification (e.g.
> QoS),
>
> I just wanted to confirm that we're planning to meet in salon C today
> (Wednesday) to get lunch but then possibly move to a quieter location to
> discuss the common flow classifier ideas.
>
> On 4/21/2016 19:42, Cathy Zhang wrote:
>
>> I like Malini’s suggestion on meeting for a lunch to get to know each
>> other, then continue on Thursday.
>>
>> So let’s meet at "Salon C" for lunch from 12:30pm~1:50pm on Wednesday
>> and then continue the discussion at Room 400 at 3:10pm Thursday.
>>
>> Since Salon C is a big room, I will put a sign “Common Flow Classifier
>> and OVS Agent Extension” on the table.
>>
>> I have created an etherpad for the discussion.
>> https://etherpad.openstack.org/p/Neutron-FC-OVSAgentExt-Austin-Summit
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-27 Thread Miguel Angel Ajo Pelayo
Please add me on WhatsApp or Telegram if you use them: +34636522569
On 27/4/2016 12:50, majop...@redhat.com wrote:

> Trying to find you folks. I was late
> On 27/4/2016 12:04, "Paul Carver" wrote:
>
>> SFC team and anybody else dealing with flow selection/classification
>> (e.g. QoS),
>>
>> I just wanted to confirm that we're planning to meet in salon C today
>> (Wednesday) to get lunch but then possibly move to a quieter location to
>> discuss the common flow classifier ideas.
>>
>> On 4/21/2016 19:42, Cathy Zhang wrote:
>>
>>> I like Malini’s suggestion on meeting for a lunch to get to know each
>>> other, then continue on Thursday.
>>>
>>> So let’s meet at "Salon C" for lunch from 12:30pm~1:50pm on Wednesday
>>> and then continue the discussion at Room 400 at 3:10pm Thursday.
>>>
>>> Since Salon C is a big room, I will put a sign “Common Flow Classifier
>>> and OVS Agent Extension” on the table.
>>>
>>> I have created an etherpad for the discussion.
>>> https://etherpad.openstack.org/p/Neutron-FC-OVSAgentExt-Austin-Summit
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [qos] gathering Friday 9:30

2016-04-28 Thread Miguel Angel Ajo Pelayo
Does the Governors Ballroom in the Hilton sound OK?

We can move to somewhere else if necessary.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-05-06 Thread Miguel Angel Ajo Pelayo
Sounds good,

   I started by opening a tiny RFE that may help with the organization
of flows inside the OVS agent, for interoperability of features (SFC,
TaaS, the OVS firewall, and even port trunking with just OpenFlow). [1] [2]


[1] https://bugs.launchpad.net/neutron/+bug/1577791
[2] http://paste.openstack.org/show/495967/


On Fri, May 6, 2016 at 12:35 AM, Cathy Zhang  wrote:
> Hi everyone,
>
> We had a discussion on the two topics during the summit. Here is the etherpad 
> link for the discussion.
> https://etherpad.openstack.org/p/Neutron-FC-OVSAgentExt-Austin-Summit
>
> We agreed to continue the discussion on Neutron channel on a weekly basis. It 
> seems UTC 1700 ~ UTC 1800 Tuesday is good for most people.
> Another option is UTC 1700 ~ UTC 1800 Friday.
>
> I will tentatively set the meeting time to UTC 1700 ~ UTC 1800 Tuesday. Hope 
> this time is good for all people who have interest and like to contribute to 
> this work. We plan to start the first meeting on May 17.
>
> Thanks,
> Cathy
>
>
> -Original Message-
> From: Cathy Zhang
> Sent: Thursday, April 21, 2016 11:43 AM
> To: Cathy Zhang; OpenStack Development Mailing List (not for usage 
> questions); Ihar Hrachyshka; Vikram Choudhary; Sean M. Collins; Haim Daniel; 
> Mathieu Rohon; Shaughnessy, David; Eichberger, German; Henry Fourie; 
> arma...@gmail.com; Miguel Angel Ajo; Reedip; Thierry Carrez
> Cc: Cathy Zhang
> Subject: RE: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
> Agent extension for Newton cycle
>
> Hi everyone,
>
> We have room 400 at 3:10pm on Thursday available for discussion of the two 
> topics.
> Another option is to use the common room with roundtables in "Salon C" during 
> Monday or Wednesday lunch time.
>
> Room 400 at 3:10pm is a closed room while the Salon C is a big open room 
> which can host 500 people.
>
> I am Ok with either option. Let me know if anyone has a strong preference.
>
> Thanks,
> Cathy
>
>
> -Original Message-
> From: Cathy Zhang
> Sent: Thursday, April 14, 2016 1:23 PM
> To: OpenStack Development Mailing List (not for usage questions); 'Ihar 
> Hrachyshka'; Vikram Choudhary; 'Sean M. Collins'; 'Haim Daniel'; 'Mathieu 
> Rohon'; 'Shaughnessy, David'; 'Eichberger, German'; Cathy Zhang; Henry 
> Fourie; 'arma...@gmail.com'
> Subject: RE: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
> Agent extension for Newton cycle
>
> Thanks for everyone's reply!
>
> Here is the summary based on the replies I received:
>
> 1.  We should have a meet-up for these two topics. The "to" list are the 
> people who have interest in these topics.
> I am thinking about around lunch time on Tuesday or Wednesday since some 
> of us will fly back on Friday morning/noon.
> If this time is OK with everyone, I will find a place and let you know 
> where and what time to meet.
>
> 2.  There is a bug opened for the QoS Flow Classifier 
> https://bugs.launchpad.net/neutron/+bug/1527671
> We can either change the bug title and modify the bug details or start with a 
> new one for the common FC which provides info on all requirements needed by 
> all relevant use cases. There is a bug opened for OVS agent extension 
> https://bugs.launchpad.net/neutron/+bug/1517903
>
> 3.  There are some very rough, ugly as Sean put it:-), and preliminary work 
> on common FC https://github.com/openstack/neutron-classifier which we can see 
> how to leverage. There is also a SFC API spec which covers the FC API for SFC 
> usage 
> https://github.com/openstack/networking-sfc/blob/master/doc/source/api.rst,
> the following is the CLI version of the Flow Classifier for your reference:
>
> neutron flow-classifier-create [-h]
> [--description ]
> [--protocol ]
> [--ethertype ]
> [--source-port : protocol port>]
> [--destination-port : destination protocol port>]
> [--source-ip-prefix ]
> [--destination-ip-prefix ]
> [--logical-source-port ]
> [--logical-destination-port ]
> [--l7-parameters ] FLOW-CLASSIFIER-NAME
>
> The corresponding code is here 
> https://github.com/openstack/networking-sfc/tree/master/networking_sfc/extensions
>
> 4.  We should come up with a formal Neutron spec for FC and another one for 
> OVS Agent extension and get everyone's review and approval. Here is the 
> etherpad catching our previous requirement discussion on OVS agent (Thanks 
> David for the link! I remember we had this discussion before) 
> https://etherpad.openstack.org/p/l2-agent-extensions-api-expansion
>
>
> More inline.
>
> Thanks,
> Cathy
>
>
> -Original Message-
> From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
> Sent: Thursday, April 14, 2016 3:34 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
> Agent extension for Newton cycle
>
> Cathy Zhang  wrote:
>
>> Hi everyone,
>> Per Armando’s request, Louis and I are looking int

Re: [openstack-dev] [Neutron] Team meeting on Tuesday 1400UTC

2016-01-19 Thread Miguel Angel Ajo Pelayo
Thinking of this, I had another idea, a bit raw yet.

But how does it sound to have two meetings a week, one in an EU/Asia-friendlier
timezone and another for the US/AU (the current one), with different chairs?

We don't impose unnatural working hours (too early, too late for family, etc.)
on anyone, we encourage gathering as a community (maybe split by timezones, but
it feels more human and faster than ML conversations), and people able
to make it to both could serve as bridges between the two meetings.


Thoughts?




- Original Message -
> In Nova the alternate meetings were chaired by different people. I think that
> was very productive and fruitful. So it is certainly something worth
> considering. At the end of the day all of the meetings are logged and people
> can go over the logs and address issues that can and may concern them. At
> the end of the day we are a community and it would be nice to know that the
> community is open to accommodating people irrespective of where and how they
> live (yeah we are all envious of the IRC surfer ‘checkyouinthetubes’ who
> spends her/his days surfing around the world). If we do decide to continue
> with the single meeting time then we need to understand and accept that
> certain people may not be able to take part. In general if there is
> something really important that one wants to raise and it does not get
> addressed on the mail list then they can make an effort to attend the
> meeting to raise their issues/concerns/points.
> 
> Meetings aside the core team is spread pretty nicely across the globe.
> 
> A luta continua
> 
> 
> From: " mest...@mestery.com " < mest...@mestery.com >
> Reply-To: OpenStack List < openstack-dev@lists.openstack.org >
> Date: Wednesday, January 13, 2016 at 6:07 AM
> To: OpenStack List < openstack-dev@lists.openstack.org >
> Subject: Re: [openstack-dev] [Neutron] Team meeting on Tuesday 1400UTC
> 
> On Tue, Jan 12, 2016 at 5:28 PM, Doug Wiegley < doug...@parksidesoftware.com
> > wrote:
> 
> 
> I don’t think it ninja merged. It had plenty of reviews, and was open during
> international hours. I don’t have any issue there.
> 
> I don’t like the crazy early meeting, so I set out to prove it didn’t matter:
> 
> Average attendance before rotating: 20.7 people
> Average attendance on Monday afternoons (U.S. time): 20.9
> Average attendance on Tuesday morning (U.S. time): 23.7
> 
> Stupid data, that’s not what I wanted to see.
> 
> I haven’t yet correlated people to which meeting time yet, but attendance was
> slightly up during the crazy early hated time, across the 1.25 years it was
> running (started 9/9/14). This is just people saying something; lurkers can
> just read the logs.
> 
> Data is from eavesdrop meeting logs, if anyone else wants to crunch it.
> 
> Since it's ridiculous to assume people are required to attend this meeting,
> one easy solution to this would be to go back to the rotating meeting and
> have a different chair for the Tuesday morning PST meeting. I think rotating
> chairs for this meeting would be a good idea for a multitude of reasons
> (spreads the pain, lets others have a chance at the pulpit, grooms future
> meeting leaders, etc.).
> 
> Thanks,
> Kyle
> 
> 
> 
> Thanks,
> doug
> 
> 
> > On Jan 12, 2016, at 4:32 PM, Tony Breeds < t...@bakeyournoodle.com > wrote:
> > 
> > On Tue, Jan 12, 2016 at 01:27:30PM +0100, Ihar Hrachyshka wrote:
> >> Agreed with Gary on behalf of my European compatriots. (Note that I
> >> *personally* +1’d the patch because I don’t mind, doing late hours anyway;
> >> but it’s sad it was ninja merged without giving any chance for those from
> >> affected timezones to express their concerns).
> > 
> > So Ninja merged has a negative connotation that I refute.
> > 
> > I merged it. It was judgment error, and I apologise for that.
> > 
> > * I found and read through the list thread.
> > * Saw only +1's yours included
> > - known you'd be affected I used your +1 as a barometer
> > 
> > My mistake was not noticing your request to leave the review open for
> > longer.
> > 
> > I also noted in my review that reverting it is pretty low cost to back it
> > out
> > if needed.
> > 
> > I understand that the 'root cause' for this change was the yaml2ical issue
> > that
> > stemmed from having 2 odd week in a row. We've fixed that [1]. I'm also
> > working a a more human concept of biweekly meeting in yaml2ical.
> > 
> > Tony
> > [1] the next time it could have been a problem is 2020/2021 ;P
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [Neutron] Being more aggressive with our defaults

2016-02-10 Thread Miguel Angel Ajo Pelayo

> On 09 Feb 2016, at 21:43, Sean M. Collins  wrote:
> 
> Kevin Benton wrote:
>> I agree with the mtu setting because there isn't much of a downside to
>> enabling it. However, the others do have reasons to be disabled.
>> 
>> csum - requires runtime detection of support for a feature and then auto
>> degradation for systems that don't support it. People were against those so
>> we have the whole sanity check framework instead. I wouldn't be opposed to
>> revisiting that decision, but it's definitely a blocker right now.
> 
> Agree - I think the work that can be done here is to do some
> self-discovery to see if the system supports it, and enable it.

The risk of doing such a thing, and this is why we stayed with sanity checks,
is that we slow down agent startup; it could be trivial at the start, but as we
keep piling up checks it could become an excessive overhead.

We could cache the system discoveries, which are unlikely to change, but that
could bring other issues, like changes to hardware/network settings requiring a
cleanup of the "facts" cache.
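A rough sketch of that "facts" cache idea, just to illustrate it (the cache
path is an assumption, and the lambda stands in for whatever real probe or
sanity check we would reuse):

    import json
    import os

    def cached_fact(name, probe,
                    cache_path='/var/lib/neutron/agent_facts.json'):
        # Load previously discovered facts, run the (possibly slow) probe
        # only when the fact is missing, and persist the result.
        facts = {}
        if os.path.exists(cache_path):
            with open(cache_path) as f:
                facts = json.load(f)
        if name not in facts:
            facts[name] = probe()
            with open(cache_path, 'w') as f:
                json.dump(facts, f)
        return facts[name]

    # Clearing the file after changing kernel/OVS or NIC settings is exactly
    # the operational wart mentioned above.
    supports_csum = cached_fact('vxlan_udp_csum', lambda: True,
                                cache_path='/tmp/agent_facts.json')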

Another approach could be making the sanity checks generate configuration file
additions or modifications on request.

IMHO we should keep any setting which is an optimization OFF, and let the
administrator tune it up.

What do we want? A super-performant neutron reference implementation that
doesn't work for 40% (random number) of the deployers, or a neutron reference
implementation that works for everyone but can be tuned?



> 
>> dvr - doesn't work in non-VM cases (e.g. floating IP pointing to allowed
>> address pair or bare metal host) and consumes more public IPs than legacy
>> or HA.
> 
> Yes it does have tradeoffs currently. But I like to think back to
> Nova-Network. It was extremely common to run it in multi_host=True mode.
> 
> Despite the fact that the default is False.
> 
> https://github.com/openstack/nova/blob/da019e89976f9673c4f80575909dda3bab3e1a24/nova/network/rpcapi.py#L31
> 
> It's been a little while for me since I looked at nova-network (Essex,
> Folsom era) so things may have moved around a bit, but that's at least
> what I recall.
> 
> I'd like to see some grizzled Nova network veterans chime in, but at
> least from the operator standpoint the whole pain point for Neutron
> (which endangered Neutron's existence for a long time) was the fact that
> we didn't have an equivalent feature to multi_host - hence DVR being
> written.
> 
> So, even Nova may have a couple things turned off by default probably a
> majority of deployers have to consciously turn the knob for.
> 
>> l2pop - this one is weird because it's an ML2 driver. It makes no sense to
>> have it always enabled because an operator could be using an l2pop
>> incompatible backend. We also don't have a notion of a driver enabled by
>> default so if we did want to do it, it would take a bunch of changes to
>> ML2.
> 
> I think in this case, the point is - enable L2Pop for things where it
> really makes sense. Meaning if you are using a tunnel protocol for
> tenant networking, and you do not have something like vxlan multicast
> group configured. I don't think Open vSwitch supports it, so in that
> deployment model I think we can bet that it should be enabled.
> 
> Linux Bridge supports l2pop and vxlan multicast, so even in that case
> I'd say - enable l2pop but put good docs in to say "hey if you have
> multicast vxlan set up, switch it over to use that instead" 
> 
>> Whenever we have a knob, it usually stems from the fact that we don't use
>> runtime feature detection or the feature has a tradeoff that doesn't make
>> its use obvious in all cases.
> 
> Right, but I think we've been very cautious in the past, where we don't
> want to make any decision, so we just turn it all off and force
> operators to enable it. In some cases we've decided to do nothing and
> the result is forcing everyone to make the decision, where a high % of
> people end up making the same decision. Perhaps we can use the user
> survey and the ops meetups to find options where "80% of people use this 
> option
> and have to be proactive and enable it" - and think about turning them
> on by default.
> 
> It's not cut and dry, but maybe taking a stab at it will help us clarify
> which really options really are a toss up between on/off and which
> should be defaults.
> 
> 
> -- 
> Sean M. Collins
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [QoS] Roadmap and prioritisation of features.

2016-02-23 Thread Miguel Angel Ajo Pelayo
Regarding this conversation about QoS [1], as Nate said, we
have every feature x4 (x [API, OVS, LB, SR-IOV]), and I'd add: we
should avoid writing RFEs for any missing piece in the reference
implementations; if any of those is missing, that's just a bug.

I guess I haven't been communicating the status and plan lately,
nor reviewing new RFEs, due to our focus on the current ones;
sorry about that.

I believe the framework we have is solid (what could I say!), but
we're sticking to the features that are easier to develop in the reference
implementation and still beneficial to the broadest audience
(like bandwidth policing and L3 marking -DSCP-, …), and then
we will be able to jump into more complicated QoS rules.

Some of the things are simply technically complicated at the low level
while being very easy to model with the current framework.

And some of the things need integration with the nova scheduler (like
min bandwidth guarantees -requested by NFV/operators-) 

After the QoS meeting I will work on a tiny report so we can raise visibility
about the features, and the plans.

Best regards,
Miguel Ángel.


[1] 
http://eavesdrop.openstack.org/meetings/neutron_drivers/2016/neutron_drivers.2016-02-18-22.01.log.html#l-52
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][Neutron][QoS]horizon angular network QoS panel

2016-02-25 Thread Miguel Angel Ajo Pelayo
Hi Masco!,

   Thanks a lot for working on this; I'm not following the [Horizon] tag and I
missed this. I've added the Neutron and QoS tags.

   I will give it a try as soon as I can. 

   Keep up the good work!

Cheers,
Miguel Ángel.
> On 10 Feb 2016, at 13:04, masco  wrote:
> 
> 
> Hello All,
> 
> As most of you know, the 'QoS' feature was added to neutron during the
> Liberty release.
> It would be nice to have this feature in horizon, so I have added a 'network
> qos' panel for it in AngularJS.
> It would be very helpful if you could review these patches and help to
> land this feature in horizon.
> 
> gerrit links:
> 
> https://review.openstack.org/#/c/247997/ 
> 
> https://review.openstack.org/#/c/259022/11 
> 
> https://review.openstack.org/#/c/272928/4 
> 
> https://review.openstack.org/#/c/277743/3 
> 
> 
> 
> To set test env:
> here is some steps how to enable a QoS in neutron.
> If you want to test it will help you.
> 
> 
>   To enable QoS in devstack, please add the two lines below to
>   local.conf:
>
>   enable_plugin neutron git://git.openstack.org/openstack/neutron
>   enable_service q-qos
>
>   and then rebuild your stack (./stack.sh)
> 
> Thanks,
> Masco.
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-26 Thread Miguel Angel Ajo Pelayo

> On 26 Feb 2016, at 02:38, Sean McGinnis  wrote:
> 
> On Thu, Feb 25, 2016 at 04:13:56PM +0800, Qiming Teng wrote:
>> Hi, All,
>> 
>> After reading through all the +1's and -1's, we realized how difficult
>> it is to come up with a proposal that makes everyone happy. When we are
>> discussing this proposal with some other contributors, we came up with a
>> proposal which is a little bit different. This idea could be very
>> impractical, very naive, given that we don't know much about the huge
>> efforts behind the scheduling, planning, coordination ... etc etc. So,
>> please treat this as a random thought.
>> 
>> Maybe we can still have the Summit and the Design Summit colocated, but
>> we can avoid the overlap that has been the source of many troubles. The
>> idea is to have both events scheduled by the end of a release cycle. For
>> example:
>> 
>> Week 1:
>>  Wednesday-Friday: 3 days Summit.
>>* Primarily an event for marketing, sales, CTOs, architects,
>>  operators, journalists, ...
>>* Contributors can decide whether they want to attend this.
>>  Saturday-Sunday:
>>* Social activities: contributors meet-up, hang outs ...
>> 
>> Week 2:
>>  Monday-Wednesday: 3 days Design Summit
>>* Primarily an event for developers.
>>* Operators can hold meetups during these days, or join project
>>  design summits.
>> 


A proposal like this one seems much more rational to me:

  * no need for two trips
  * no overlap of the summit/design (I end up running back and forth otherwise)
  
Otherwise, separating both parts of the summit increases the gap
between engineering and the final OpenStack users/ops. I couldn't go
to summit-related events 4 times a year for family reasons. But I like
having the opportunity to spend some time close to the user/op side
of things, to understand how people are using OpenStack, what they are
missing, and what we are doing well.


>> If you need to attend both events, you don't need two trips. Scheduling
>> both events by the end of a release cycle can help gather more
>> meaningful feedbacks, experiences or lessons from previous releases and
>> ensure a better plan for the coming release.
>> 
>> If you want to attend just the main Summit or only the Design Summit,
>> you can plan your trip accordingly.
>> 
>> Thoughts?

I really like it. Not sure how well it works for others, or from
the organisational point of view.

>> 
>> - Qiming
>> 
> 
> This would eliminate the need for a second flight, and it would
> net be total less time away than attending two separate events. I could
> see this working.
> 
> Sean
> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][release] Releasing python-neutronclient 4.1.2?

2016-03-09 Thread Miguel Angel Ajo Pelayo
On Wed, Mar 9, 2016 at 4:16 PM, Doug Hellmann  wrote:

> Excerpts from Armando M.'s message of 2016-03-08 15:43:05 -0700:
> > On 8 March 2016 at 15:07, Doug Hellmann  wrote:
> >
> > > Excerpts from Armando M.'s message of 2016-03-08 12:49:16 -0700:
> > > > Hi folks,
> > > >
> > > > There's a feature or two that are pending to be delivered in Mitaka
> > > [1,2],
> > > > and those involve changes to both the server and client sides.
> Ideally
> > > we'd
> > > > merge both sides in time for Mitaka RC and that implies that we
> would be
> > > > able to release a new version of the client including changes [3,4].
> This
> > > > is especially important since a new client release would be
> beneficial to
> > > > improving test coverage as needed by [5].
> > > >
> > > > Considering what we released already, and what the tip of master is
> for
> > > the
> > > > client [6], I can't see any side effect that a new neutronclient
> release
> > > > may introduce.
> > > >
> > > > Having said that, I am leaning towards the all-or-none approach, but
> the
> > > > 'all' approach is predicated on the fact that we are indeed allowed
> to
> > > > release a new client and touch the global requirements.
> > > >
> > > > What's the release team's recommendation? Based on it, we may want to
> > > > decide to defer these to as soon as N master opens up.
> > >
> > > I'm a bit reluctant to start touching the requirements lists for
> > > feature work. We do have some bug fixes in the pipeline that will
> > > require library releases, but those are for bugs not new features.
> > > We also have one or two libs where feature work needed to be extended,
> > > but none of those have dependencies outside of the project producing
> > > them.
> > >
> > > The main reason to require a client release is for some *other* project
> > > to take advantage of the new feature work. Is that planned?
> > >
> >
> > Thanks for the prompt reply. Neutron would be the only consumer of these
> > additions, and no other project has pending work to leverage these
> > capabilities.
>
> In that case, I don't think we want to make an exception. Although
> Neutron is the only user of this feature, I counted more than 50 other
> projects that have python-neutronclient in a requirements file, and
> that's a lot of potential for impact with a new release.
>
> It seems like the options are to wait for Newton to land both parts of
> the feature, or to land the server side during Mitaka and release a
> feature update to the client as soon as Newton development opens.
>
> Doug
>

Yes, if anyone wants more detail, we discussed that in the
QoS meeting today [1]; thank you, Doug, for joining us.

I would like to ask for the inclusion of the server side, regardless
of the client bits. Fullstack would have to stay out, but I believe
the api-tests, unit tests, and functional tests included in the patch
will maintain the feature stability.

Users would have the chance to make use of the feature via direct
API calls without the client, or by bumping to neutronclient 4.2.x when
that's available. Distros would be able to backport the neutronclient
patch at will.

I ask for it not only for the sake of the feature, which I believe is not
critical, but because Comcast and other related contributors have been
patient enough (5-6 cycles?), and have learned and collaborated the upstream
way to get this finally in while helping with L2 agent extensibility and other
technical debt along the way. And because the earlier the feature gets
used, the earlier we can iron out any possible bugs that features come with.


Best regards,
Miguel Ángel.

[1]
http://eavesdrop.openstack.org/meetings/neutron_qos/2016/neutron_qos.2016-03-09-14.03.log.html
 (around 15:37)


>
> >
> > >
> > > Doug
> > >
> > > >
> > > > Many thanks,
> > > > Armando
> > > >
> > > > [1] https://review.openstack.org/#/q/topic:bug/1468353
> > > > [2] https://review.openstack.org/#/q/topic:bug/1521783
> > > > [3] https://review.openstack.org/#/c/254280/
> > > > [4] https://review.openstack.org/#/c/288187/
> > > > [5] https://review.openstack.org/#/c/288392/
> > > > [6]
> > > >
> > >
> https://github.com/openstack/python-neutronclient/commit/8460b0dbb354a304a112be13c63cb933ebe1927a
> > >
> > >
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@list

[openstack-dev] [nova][neutron] Routed networks / Generic Resource pools

2016-03-21 Thread Miguel Angel Ajo Pelayo
Hi,

   I was doing another pass on this spec, to see if we could leverage
it as-is for QoS / bandwidth tracking / bandwidth guarantees, and I
have a question [1]

   I guess I'm just missing some detail, but looking at the 2nd scenario,
why wouldn't availability zones allow exactly the same if we used one
availability zone per subnet?

  What's the advantage of modelling it via a generic resource pool?


Best regards,
Miguel Ángel.

[1] 
https://review.openstack.org/#/c/253187/14/specs/newton/approved/generic-resource-pools.rst

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Routed networks / Generic Resource pools

2016-03-21 Thread Miguel Angel Ajo Pelayo
On Mon, Mar 21, 2016 at 3:17 PM, Jay Pipes  wrote:
> On 03/21/2016 06:22 AM, Miguel Angel Ajo Pelayo wrote:
>>
>> Hi,
>>
>> I was doing another pass on this spec, to see if we could leverage
>> it as-is for QoS / bandwidth tracking / bandwidth guarantees, and I
>> have a question [1]
>>
>> I guess I'm just missing some detail, but looking at the 2nd scenario,
>> why wouldn't availability zones allow the same exactly if we used one
>> availability zone per subnet?
>>
>>What's the advantage of modelling it via a generic resource pool?
>
>
> Hi Miguel,
>
> On the (Nova) scheduler side, we don't actually care whether Neutron uses
> availability zone or subnet pool to model the boundaries of a pool of some
> resource. The generic-resource-pools functionality being added to Nova (as
> the new placement API meant to become the split-out scheduler RESTful API)
> just sees a resource provider UUID and an inventory of some type of
> resource.

That means that we could also match a pool by the requirements of the resources
bound to the instance we're trying to deploy (e.g. disk space (GB), bandwidth
(NIC_KB)).

>
> In the case of Neutron QoS, the first thing to determine would be what is
> the resource type exactly? The resource type must be able to be represented
> with an integer amount of something. For QoS, I *think* the resource type
> would be "NIC_BANDWIDTH_KB" or something like that. Is that correct?

The resource could be NIC_BANDWIDTH_KB, yes. In a simplified case
we could care about just tenant network connectivity, but we can also
have provider networks bound to this, and they would be separate counts.

>This
> would represent the amount of total network bandwidth that a workload can
> consume on a particular compute node. Is that statement correct?

This would represent the amount of total network bandwidth a port could
consume (and by consume I mean: asking for a "min" bandwidth guarantee).

>
> Now, the second thing that would need to be determined is what resource
> boundary this resource type would have. I *think* it is the amount of
> bandwidth consumed on a set of compute nodes? Like, amount of bandwidth
> consumed within a rack?

No, what we're trying to model first is the maximum bandwidth available
on a compute node [+ physnet combination].

(Please note this is coming from NFV / telco requirements.)
When they schedule VNFs, they want to be 100% sure the throughput a VNF
can provide is exactly what they asked for, and not less (because, for example,
you had 10Gb of throughput on a NIC, but you scheduled 3 VNFs each supposed
to push 5Gb).
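To illustrate (the provider naming, field names and units below are invented
to mirror the concept; this is not the actual generic-resource-pools payload),
the per compute node + physnet capacity could conceptually look like:

    nic_bw_inventory = {
        'resource_provider': 'compute-1:physnet0',  # assumed naming scheme
        'resource_class': 'NIC_BW_KB',
        'total': 10000000,        # a 10 Gbit/s NIC expressed in kbit/s
        'reserved': 0,
        'allocation_ratio': 1.0,  # no oversubscription if we want guarantees
    }

    # Three VNF ports each asking for a 5 Gbit/s minimum guarantee must not
    # all land on this provider:
    requested = 3 * 5000000
    print(requested <= nic_bw_inventory['total'])   # False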



> Or some similar segmentation of a network, like an
> aggregate, which is a generic grouping of compute nodes. If so, then the
> bandwidth resource would be considered a *shared* resource, shared among the
> compute nodes in the aggregate. And if this is the case, then
> generic-resource-pools are intended for *exactly* this type of scenario.

We could certainly use generic resource pools to model rack switches and their
bandwidth capabilities, but that would not satisfy my paragraph above; they are
two independent levels of verification.



__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-26 Thread Miguel Angel Ajo Pelayo

Sharing thoughts that I was having:

Maybe during the next summit it's worth discussing the future of the
reference agent(s); I feel we'll be replicating a lot of work across
OVN/OVS/RYU (ofagent) and maybe other plugins.

I guess until OVN and its integration are ready we can't stop, so it makes
sense to keep development on our side; also, having an independent plugin
can help us iterate faster on new features. Yet I expect that OVN will be
more fluent at working with OVS and OpenFlow, as their designers have
a very deep knowledge of OVS under the hood, and it's C. ;)

Best regards,

> On 26/2/2015, at 7:57, Miguel Ángel Ajo  wrote:
> 
> On Thursday, 26 de February de 2015 at 7:48, Miguel Ángel Ajo wrote:
>> Inline comments follow after this, but I wanted to respond to Brian 
>> questionwhich has been cut out:
>> We’re talking here of doing a preliminary analysis of the networking 
>> performance,before writing any real code at neutron level.
>> 
>> If that looks right, then we should go into a preliminary (and orthogonal to 
>> iptables/LB)implementation. At that moment we will be able to examine the 
>> scalability of the solutionin regards of switching openflow rules, which is 
>> going to be severely affectedby the way we use to handle OF rules in the 
>> bridge:
>>* via OpenFlow, making the agent a “real" OF controller, with the current 
>> effort to use  the ryu framework plugin to do that.   * via cmdline 
>> (would be alleviated with the current rootwrap work, but the former one 
>> would be preferred).
>> Also, ipset groups can be moved into conjunctive groups in OF (thanks Ben 
>> Pfaff for theexplanation, if you’re reading this ;-))
>> Best,Miguel Ángel
>> 
>> 
>> On Wednesday, 25 de February de 2015 at 20:34, Tapio Tallgren wrote:
>>> Hi,
>>> 
>>> The RFC2544 with near zero packet loss is a pretty standard performance 
>>> benchmark. It is also used in the OPNFV project 
>>> (https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
>>>  
>>> ).
>>> 
>>> Does this mean that OpenStack will have stateful firewalls (or security 
>>> groups)? Any other ideas planned, like ebtables type filtering?
>>> 
>> What I am proposing is to maintain the statefulness we have now regarding
>> security groups (RELATED/ESTABLISHED connections are allowed back on open
>> ports) while adding a new firewall driver working only with OVS+OF
>> (no iptables or linux bridge).
>> That will be possible (without auto-populating OF rules in opposite
>> directions) due to
>> the new connection tracker functionality to be eventually merged into ovs.
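
(As an aside, to illustrate what that statefulness looks like in terms of flows;
a sketch assuming an OVS build where the conntrack match/action is available,
not the actual driver, and in a real agent these would be programmed through
the bridge abstraction rather than pasted to ovs-ofctl:)

    # Purely illustrative OpenFlow(+conntrack) rules: allow established/related
    # traffic back without mirroring every rule in the opposite direction.
    FLOWS = [
        # Send untracked IP traffic through conntrack, then resubmit to table 1.
        "table=0,priority=100,ip,ct_state=-trk,actions=ct(table=1)",
        # Traffic belonging to an established or related connection is allowed.
        "table=1,priority=100,ct_state=+trk+est,actions=NORMAL",
        "table=1,priority=100,ct_state=+trk+rel,actions=NORMAL",
        # Example of a security-group-like rule: new TCP connections to port 22
        # are committed to conntrack and then allowed.
        "table=1,priority=90,tcp,tp_dst=22,ct_state=+trk+new,"
        "actions=ct(commit),NORMAL",
        # Everything else is dropped.
        "table=1,priority=0,actions=drop",
    ]

    if __name__ == '__main__':
        # e.g. feed each one to `ovs-ofctl add-flow br-int <flow>`.
        for flow in FLOWS:
            print(flow)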
>>  
>>> -Tapio
>>> 
>>> On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones >> > wrote:
 On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:
> I’m writing a plan/script to benchmark OVS+OF(CT) vs
> OVS+LB+iptables+ipsets,
> so we can make sure there’s a real difference before jumping into any
> OpenFlow security group filters when we have connection tracking in OVS.
> 
> The plan is to keep all of it in a single multicore host, and make
> all the measures within it, to make sure we just measure the
> difference due to the software layers.
> 
> Suggestions or ideas on what to measure are welcome, there’s an initial
> draft here:
> 
> https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct 
> 
 
 Conditions to be benchmarked
 
 Initial connection establishment time
 Max throughput on the same CPU
 
 Large MTUs and stateless offloads can mask a multitude of path-length 
 sins.  And there is a great deal more to performance than Mbit/s. While 
 some of that may be covered by the first item via the likes of say netperf 
 TCP_CRR or TCP_CC testing, I would suggest that in addition to a focus on 
 Mbit/s (which I assume is the focus of the second item) there is something 
 for packet per second performance.  Something like netperf TCP_RR and 
 perhaps aggregate TCP_RR or UDP_RR testing.
 
 Doesn't have to be netperf, that is simply the hammer I wield :)
 
 What follows may be a bit of perfect being the enemy of the good, or 
 mission creep...
 
 On the same CPU would certainly simplify things, but it will almost 
 certainly exhibit different processor data cache behaviour than actually 
 going through a physical network with a multi-core system.  Physical NICs 
 will possibly (probably?) have RSS going, which may cause cache lines to 
 be pulled around.  The way packets will be buffered will differ as well.  
 Etc etc.  How well the different solutions scale with cores is definitely 
 a difference of interest between the two sofware layers.
 
> 
> 
> Hi Rick, thanks for your feedback here, I'l

Re: [openstack-dev] [all][oslo][clients] Let's speed up start of OpenStack libs and clients by optimizing imports with profimp

2015-04-07 Thread Miguel Angel Ajo Pelayo

> On 7/4/2015, at 0:43, Robert Collins  wrote:
> 
> On 7 April 2015 at 05:11, Joe Gordon  wrote:
>> 
>> 
>> On Mon, Apr 6, 2015 at 8:39 AM, Dolph Mathews 
>> wrote:
>>> 
>>> 
>>> On Mon, Apr 6, 2015 at 10:26 AM, Boris Pavlovic  wrote:
 
 Jay,
 
 
> Not far, IMHO. 100ms difference in startup time isn't something we
> should spend much time optimizing. There's bigger fish to fry.
 
 
 I agree that priority of this task shouldn't be critical or even high,
 and that there are other places that can be improved in OpenStack.
 
 In other hand this one is as well big source of UX issues that we have in
 OpenStack..
 
 For example:
 
 1) You would like to run some command X times where X is pretty big
 (admins likes to do this via bash loops). If you can execute all of them 
 for
 1 and not 10 minutes you will get happier end user.
>>> 
>>> 
>>> +1 I'm fully in support of this effort. Shaving 100ms off the startup time
>>> of a frequently used library means that you'll save that 100ms over and
>>> over, adding up to a huge win.
>>> 
>> 
>> 
>> Another data point on how slow our libraries/CLIs can be:
>> 
>> $ time openstack -h
>> 
>> real0m2.491s
>> user0m2.378s
>> sys 0m0.111s
> 
> 
> pbr should be snappy - taking 100ms to get the version is wrong.

I totally agree, pbr needs a cleanup to reduce its dependency
tree where that's possible, or otherwise we need to change the strategy
for pbr.
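
For what it's worth, the basic measurement behind numbers like these is easy to
reproduce with a few lines of plain Python (a rough sketch of the idea only, not
profimp itself, which additionally builds a per-dependency timing tree):

    # Time how long importing a module (and everything it pulls in
    # transitively) takes, e.g.: python import_cost.py openstackclient.shell
    import importlib
    import sys
    import time

    def time_import(module_name):
        start = time.time()
        importlib.import_module(module_name)
        return time.time() - start

    if __name__ == '__main__':
        name = sys.argv[1] if len(sys.argv) > 1 else 'json'
        elapsed = time_import(name)
        print('%s imported in %.3f s (%d modules now in sys.modules)'
              % (name, elapsed, len(sys.modules)))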



> 
> -Rob
> 
> 
> 
> -- 
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Miguel Angel Ajo




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting

2015-04-07 Thread Miguel Angel Ajo Pelayo
Hi Raghunath, 

feel free to look at:
https://wiki.openstack.org/wiki/Meetings 


and suggest other timeslots with a free meeting room;
this is a very wide community, and it's impossible to get
a timeslot within everybody's working hours.

Please note that option B is already out of my working hours,
but I picked it because it's right after another meeting I have
and it's easier to organize at home with kids/wife/etc…

Best regards,
Miguel Ángel.


> On 7/4/2015, at 13:14, Raghunath D  wrote:
> 
> Hi  Miguel,
> 
> I am interested to join this meeting.
> I assume that CC of this mail are from different time zones.Please provide
> time slot with common time zone.
> 
> With Best Regards
> Raghunath Dudyala
> Tata Consultancy Services Limited
> Mailto: raghunat...@tcs.com
> Website: http://www.tcs.com 
> 
> Experience certainty. IT Services
> Business Solutions
> Consulting
> 
> 
> 
> -Miguel Ángel Ajo  wrote: -
> To: openstack-dev@lists.openstack.org
> From: Miguel Ángel Ajo 
> Date: 04/06/2015 09:26PM
> Cc: Kyle Mestery , "Sean M. Collins" 
> , "irenab@gmail.com" , Suresh 
> Balineni , Raghunath D , Tal 
> Anker , livnat Peer , Nir Yechiel 
> , Vikram Choudhary , 
> Kalyankumar Asangi , Dhruv Dhody 
> , "Dongfeng (C)" 
> Subject: [neutron] [QoS] QoS weekly meeting
> 
> I’d like to co-organized a QoS weekly meeting with Sean M. Collins,
> 
> In the last few years, the interest for QoS support has increased, Sean 
> has been leading
> this effort [1] and we believe we should get into a consensus about how to 
> model an extension
> to let vendor plugins implement QoS capabilities on network ports and tenant 
> networks, and
> how to extend agents, and the reference implementation & others [2]
> 
> As per discussion we’ve had during the last few months [3], I believe we 
> should start simple, but
> prepare a model allowing future extendibility, to allow for example specific 
> traffic rules (per port,
> per IP, etc..), congestion notification support [4], …
> 
> It’s the first time I’m trying to organize an openstack/neutron meeting, 
> so, I don’t know what’s the
> best way to do it, or find the best timeslot. I guess interested people may 
> have a saying, so I’ve 
> looped anybody I know is interested in the CC of this mail. 
> 
> 
> Miguel Ángel Ajo
> 
> [1] https://blueprints.launchpad.net/neutron/+spec/quantum-qos-api 
> 
> [2] 
> https://drive.google.com/file/d/0B2XATqL7DxHFRHNjU3k1UFNYRjQ/view?usp=sharing 
> 
> [3] 
> https://docs.google.com/document/d/1xUx0Oq-txz_qVA2eYE1kIAJlwxGCSqXHgQEEGylwlZE/edit#heading=h.2pdgqfl3a231
>  
> 
> [4] 
> https://blueprints.launchpad.net/neutron/+spec/explicit-congestion-notification
>  
> 
> =-=-=
> Notice: The information contained in this e-mail
> message and/or attachments to it may contain 
> confidential or privileged information. If you are 
> not the intended recipient, any dissemination, use, 
> review, distribution, printing or copying of the 
> information contained in this e-mail message 
> and/or attachments to it are strictly prohibited. If 
> you have received this communication in error, 
> please notify us by reply e-mail or telephone and 
> immediately and permanently delete the message 
> and any attachments. Thank you
> 
> 

Miguel Angel Ajo



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting

2015-04-07 Thread Miguel Angel Ajo Pelayo
Hi Anthony, nice to hear about it! :)

Is the implementation available somewhere?

IMHO, the design should be what’s best for the whole neutron project looking
into future extension of the design, 

by this I mean that we should not influence the design by what was already 
designed D/S, 

*but*, I’m sure there are lots of logic that we could reuse from the DSCP 
perspective, and
even if API or internal implementation differs in the end, you’re going to get 
equivalent 
logic as soon as diffserv/DSCP is implemented.

Best regards,
Miguel Ángel

> On 7/4/2015, at 15:07, Veiga, Anthony  wrote:
> 
>> 
>> On Apr 6, 2015, at 11:56 , Miguel Ángel Ajo > > wrote:
>> 
>> I’d like to co-organized a QoS weekly meeting with Sean M. Collins,
>> 
>> In the last few years, the interest for QoS support has increased, Sean 
>> has been leading
>> this effort [1] and we believe we should get into a consensus about how to 
>> model an extension
>> to let vendor plugins implement QoS capabilities on network ports and tenant 
>> networks, and
>> how to extend agents, and the reference implementation & others [2]
> 
> I’m very interested in seeing this feature mature.  Sean was writing this 
> code initialy while working with our team here at Comcast and we’re still 
> carrying the patches he wrote through to new versions of Neutron.  I’d very 
> much like to discuss ways to bering them back into mailine.
>> 
>> As per discussion we’ve had during the last few months [3], I believe we 
>> should start simple, but
>> prepare a model allowing future extendibility, to allow for example specific 
>> traffic rules (per port,
>> per IP, etc..), congestion notification support [4], …
> 
> I agree with starting simple.  We’ve implemented basic DSCP marking only at 
> this point to allow hardware switches to to queue and filter based on the 
> marks.  It would be great to bring the queueing down into the vSwitch and 
> then extend this to things like minimum guaranteed bandwidth.  I have a fair 
> few applications that would benefit from these kinds of features.
> 
>> 
>> It’s the first time I’m trying to organize an openstack/neutron meeting, 
>> so, I don’t know what’s the
>> best way to do it, or find the best timeslot. I guess interested people may 
>> have a saying, so I’ve 
>> looped anybody I know is interested in the CC of this mail. 
> 
> There’s no best way.  Just pick an open meeting timeslot, email out the 
> meeting details and get your notes/meeting minutes onto the wiki. Hopefully 
> this works out and I’d be glad to help!
> -Anthony
> 
>> 
>> 
>> Miguel Ángel Ajo
>> 
>> [1] https://blueprints.launchpad.net/neutron/+spec/quantum-qos-api 
>> 
>> [2] 
>> https://drive.google.com/file/d/0B2XATqL7DxHFRHNjU3k1UFNYRjQ/view?usp=sharing
>>  
>> 
>> [3] 
>> https://docs.google.com/document/d/1xUx0Oq-txz_qVA2eYE1kIAJlwxGCSqXHgQEEGylwlZE/edit#heading=h.2pdgqfl3a231
>>  
>> 
>> [4] 
>> https://blueprints.launchpad.net/neutron/+spec/explicit-congestion-notification
>>  
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
>> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
> ?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
Miguel Angel Ajo



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][pbr] splitting our deployment vs install dependencies

2015-04-13 Thread Miguel Angel Ajo Pelayo

> On 13/4/2015, at 3:53, Robert Collins  wrote:
> 
> On 13 April 2015 at 13:09, Robert Collins  wrote:
>> On 13 April 2015 at 12:53, Monty Taylor  wrote:
>> 
>>> What we have in the gate is the thing that produces the artifacts that
>>> someone installing using the pip tool would get. Shipping anything with
>>> those artifacts other that a direct communication of what we tested is
>>> just mean to our end users.
>> 
>> Actually its not.
>> 
>> What we test is point in time. At 2:45 UTC on Monday installing this
>> git ref of nova worked.
>> 
>> Noone can reconstruct that today.
>> 
>> I entirely agree with the sentiment you're expressing, but we're not
>> delivering that sentiment today.
> 
> This observation led to yet more IRC discussion and eventually
> https://etherpad.openstack.org/p/stable-omg-deps
> 
> In short, the proposal is that we:
> - stop trying to use install_requires to reproduce exactly what
> works, and instead use it to communicate known constraints (> X, Y is
> broken etc).
> - use a requirements.txt file we create *during* CI to capture
> exactly what worked, and also capture the dpkg and rpm versions of
> packages that were present when it worked, and so on. So we'll build a
> git tree where its history is an audit trail of exactly what worked
> for everything that passed CI, formatted to make it really really easy
> for other people to consume.
> 

That sounds like a very neat idea; this way we could look back and backtrack
to discover which package version change breaks the system.
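
As a rough illustration of the "capture exactly what worked" part (just the idea,
not the actual infra tooling; real jobs would also record dpkg/rpm versions and
commit the result to the audit-trail git tree), a CI job could snapshot the
installed Python packages after a successful run:

    # Sketch: record the exact package set that just passed CI so it can be
    # replayed later.
    import datetime
    import subprocess

    def capture_frozen_requirements(path=None):
        frozen = subprocess.check_output(['pip', 'freeze']).decode('utf-8')
        if path is None:
            stamp = datetime.datetime.utcnow().strftime('%Y%m%d-%H%M%S')
            path = 'known-good-requirements-%s.txt' % stamp
        with open(path, 'w') as f:
            f.write(frozen)
        return path

    if __name__ == '__main__':
        print('wrote %s' % capture_frozen_requirements())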


Miguel Angel Ajo




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting

2015-04-14 Thread Miguel Angel Ajo Pelayo
Ok, after one week, looks like the most popular time slot is B,
that is 14:00 UTC / Wednesdays.

I’m proposing first meeting for Wednesday / Apr 22th 14:00 UTC / 
#openstack-meeting-2.

Tomorrow (Apr 15th / 14:00 UTC) is a bit early since the announcement, so 
I will join #openstack-meeting-2 while working on the agenda for next week; 
feel free to join
if you want/have time.




> On 9/4/2015, at 22:43, Howard, Victor  wrote:
> 
> I prefer Timeslot B, thanks for coordinating.  I would be interested in 
> helping out in any way with the design session let me know!
> 
> From: "Sandhya Dasu (sadasu)" mailto:sad...@cisco.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> mailto:openstack-dev@lists.openstack.org>>
> Date: Tuesday, April 7, 2015 12:19 PM
> To: "OpenStack Development Mailing List (not for usage questions)" 
> mailto:openstack-dev@lists.openstack.org>>
> Subject: Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting
> 
> Hi Miguel,
> Both time slots work for me. Thanks for rekindling this effort.
> 
> Thanks,
> Sandhya
> 
> From: Miguel Ángel Ajo mailto:majop...@redhat.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> mailto:openstack-dev@lists.openstack.org>>
> Date: Tuesday, April 7, 2015 1:45 AM
> To: "OpenStack Development Mailing List (not for usage questions)" 
> mailto:openstack-dev@lists.openstack.org>>
> Subject: Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting
> 
> On Tuesday, 7 de April de 2015 at 3:14, Kyle Mestery wrote:
>> On Mon, Apr 6, 2015 at 6:04 PM, Salvatore Orlando > > wrote:
>>> 
>>> 
>>> On 7 April 2015 at 00:33, Armando M. >> > wrote:
 
 On 6 April 2015 at 08:56, Miguel Ángel Ajo >>> > wrote:
> I’d like to co-organized a QoS weekly meeting with Sean M. Collins,
> 
> In the last few years, the interest for QoS support has increased, 
> Sean has been leading
> this effort [1] and we believe we should get into a consensus about how 
> to model an extension
> to let vendor plugins implement QoS capabilities on network ports and 
> tenant networks, and
> how to extend agents, and the reference implementation & others [2]
>>> 
>>> As you surely know, so far every attempt to achieve a consensus has failed 
>>> in a pretty miserable way.
>>> This mostly because "QoS" can be interpreted in a lot of different ways, 
>>> both from the conceptual and practical perspective.
> Yes, I’m fully aware of it, it was also a new feature, so it was out of scope 
> for Kilo. 
>>> It is important in my opinion to clearly define the goals first. For 
>>> instance a simple extensions for bandwidth limiting could be a reasonable 
>>> target for the Liberty release.
> I quite agree here, but IMHO, as you said it’s a quite open field (limiting, 
> guaranteeing, 
> marking, traffic shaping..), we should do our best in trying to define a 
> model allowing us 
> to build that up in the future without huge changes, on the API side I guess 
> micro versioning
> is going to help in the API evolution.
> 
> Also, at some point, we should/could need to involve the nova folks, for 
> example, to define
> port flavors that can be associated to nova
> instance flavors, providing them 
> 1) different types of network port speeds/guarantees/priorities, 
> 2) being able to schedule instance/ports in coordination to be able to met 
> specified guarantees.
> 
> yes, complexity can sky rocket fast, 
>>> Moving things such as ECN into "future works" is the right thing to do in 
>>> my opinion. Attempting to define a flexible framework that can deal with 
>>> advanced QoS policies specification is a laudable effort, but I am a bit 
>>> skeptical about its feasibility.
>>> 
>> ++, I think focusing on perhaps bandwidth limiting may make a lot of sense 
> Yes, I believe we should look into the future , but at the same pick our very 
> first feature (or a
> very simple set of them) for L, stick to it, and try to make a design that 
> can be extended.
>>  
>>>  
> 
> As per discussion we’ve had during the last few months [3], I believe 
> we should start simple, but
> prepare a model allowing future extendibility, to allow for example 
> specific traffic rules (per port,
> per IP, etc..), congestion notification support [4], …
>>> 
>>> "Simple" in my mind is even more extreme then what you're proposing here... 
>>> I'd start with bare APIs for specifying bandwidth limiting, and then phase 
>>> them out once this "framework" is in place.
>>> Also note that this kind of design bears some overlap with the flavor 
>>> framework which is probably going to be another goal for Liberty.
>>> 
>> Indeed, and the flavor framework is something I'm hoping we can land by 
>> Liberty-1 (yes, I just said Liberty-1).
> Yes it’s something I looked at, I must admit I wasn’t able to see it work 
> together (It 

Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-14 Thread Miguel Angel Ajo Pelayo

> On 10/4/2015, at 20:10, Kyle Mestery  wrote:
> 
> On Fri, Apr 10, 2015 at 1:03 PM, Sean M. Collins  > wrote:
> We already tried to make Neutron the default with OVS - and the results
> were not good[1].
> 
> Operators who are currently not using Neutron have said that they do
> not want to learn both Neutron and also Open vSwitch at the same time.
> 
> This was documented at the operator's summit. We clearly have a
> stumbling block - Open vSwitch.
> 
> https://etherpad.openstack.org/p/PHL-ops-nova-feedback 
> 
> 
> https://etherpad.openstack.org/p/PHL-ops-OVS 
> 
> 
> Based on discussions we had at the QA sprint, the idea is to start with
> a very simple Neutron configuration in DevStack, so that we can remove
> nova-network as the default. The argument is that the people
> who are still using nova-network are comfortable with Linux Bridge, and
> that having Linux Bridge as the default Neutron agent will serve as
> a "bridge" (no pun intended) into the Neutron world.
> 
> There will be *clear* documentation that this is a very simplistic
> model, and it will be made obvious that for more advanced scenarios you
> will need to configure your local.conf to use the Open vSwitch agent.
> 
> I've spoken with Sean Dague about testing Linux Bridge and
> the idea is that we should have test coverage of both OVS and Linux
> Bridge, and that currently there is a gap with Linux Bridge. I think
> that it's OK if we override the default at the gate to run OVS, since
> that has the most test coverage and will cause the least disruption.
> There will be jobs created, that will test Linux Bridge similar to how
> Open vSwitch is tested at the gate today.
> 
> As a DevStack user, I would like to see Neutron as the default. After
> making Neutron the default, the changes needed for the local.conf file
> for me to run my mechanism driver of choice is significantly smaller
> than the changes needed today for my local.conf/localrc to
> disable Nova-Network, enable Neutron, and run my mechanism driver of choice.
> 
> If Linux Bridge being the default is the way to make the above happen,
> so be it.
> 
> ++, well said Sean. I support what's being done here.
>  
> [1]: https://review.openstack.org/#/c/153208/ 
> 

-1 ;)

Why would operators install from devstack? That's not going to be the case.

I believe we should have both LB & OVS well tested; if LB is a good option for
some operators willing to migrate from nova, that's great, let's make sure LB
is perfectly tested upstream.

But by making such a change we can't let the OVS code rot, and we would be neglecting
a big customer base which is now making use of the openvswitch mechanism
(maybe I'm misunderstanding the scope of the change).



> 
> --
> Sean M. Collins
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
> ?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
Miguel Angel Ajo



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting

2015-04-15 Thread Miguel Angel Ajo Pelayo
I saw Mathieu Rohon's message on the mailing list archive, but it didn't reach my 
inbox
for some reason:

>Hi,

>It will overlap with the Telco Working group weekly meeting [1]. It's too
>bad, since Qos is a big interest for Telco Cloud Operator!

>Mathieu

>[1]https://wiki.openstack.org/wiki/TelcoWorkingGroup#Meetings 
><https://wiki.openstack.org/wiki/TelcoWorkingGroup#Meetings>
My intention was to set the meeting one hour earlier, but it seems that the DST 
time changes confused me, I'm very sorry. I'm OK with moving the meeting 
1 hour later (15:00 UTC) for future meetings, as long as it still works for 
other people interested in the QoS topic.
Mathieu, I'm not sure if people from the telco meeting would be interested in 
participating in this meeting, but my participation in the TWG meeting would 
probably help getting everyone in sync.

Best, 
Miguel Ángel

> On 14/4/2015, at 10:43, Miguel Angel Ajo Pelayo  wrote:
> 
> Ok, after one week, looks like the most popular time slot is B,
> that is 14:00 UTC / Wednesdays.
> 
> I’m proposing first meeting for Wednesday / Apr 22th 14:00 UTC / 
> #openstack-meeting-2.
> 
> Tomorrow (Apr 15th / 14:00 UTC) It’s a been early since the announcement, so 
> I will join #openstack-meeting-2 while working on the agenda for next week, 
> feel free to join
> if you want/have time.
> 
> 
> 
> 
>> On 9/4/2015, at 22:43, Howard, Victor > <mailto:victor_how...@cable.comcast.com>> wrote:
>> 
>> I prefer Timeslot B, thanks for coordinating.  I would be interested in 
>> helping out in any way with the design session let me know!
>> 
>> From: "Sandhya Dasu (sadasu)" mailto:sad...@cisco.com>>
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>> > <mailto:openstack-dev@lists.openstack.org>>
>> Date: Tuesday, April 7, 2015 12:19 PM
>> To: "OpenStack Development Mailing List (not for usage questions)" 
>> > <mailto:openstack-dev@lists.openstack.org>>
>> Subject: Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting
>> 
>> Hi Miguel,
>> Both time slots work for me. Thanks for rekindling this effort.
>> 
>> Thanks,
>> Sandhya
>> 
>> From: Miguel Ángel Ajo mailto:majop...@redhat.com>>
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>> > <mailto:openstack-dev@lists.openstack.org>>
>> Date: Tuesday, April 7, 2015 1:45 AM
>> To: "OpenStack Development Mailing List (not for usage questions)" 
>> > <mailto:openstack-dev@lists.openstack.org>>
>> Subject: Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting
>> 
>> On Tuesday, 7 de April de 2015 at 3:14, Kyle Mestery wrote:
>>> On Mon, Apr 6, 2015 at 6:04 PM, Salvatore Orlando >> <mailto:sorla...@nicira.com>> wrote:
>>>> 
>>>> 
>>>> On 7 April 2015 at 00:33, Armando M. >>> <mailto:arma...@gmail.com>> wrote:
>>>>> 
>>>>> On 6 April 2015 at 08:56, Miguel Ángel Ajo >>>> <mailto:majop...@redhat.com>> wrote:
>>>>>> I’d like to co-organized a QoS weekly meeting with Sean M. Collins,
>>>>>> 
>>>>>> In the last few years, the interest for QoS support has increased, 
>>>>>> Sean has been leading
>>>>>> this effort [1] and we believe we should get into a consensus about how 
>>>>>> to model an extension
>>>>>> to let vendor plugins implement QoS capabilities on network ports and 
>>>>>> tenant networks, and
>>>>>> how to extend agents, and the reference implementation & others [2]
>>>> 
>>>> As you surely know, so far every attempt to achieve a consensus has failed 
>>>> in a pretty miserable way.
>>>> This mostly because "QoS" can be interpreted in a lot of different ways, 
>>>> both from the conceptual and practical perspective.
>> Yes, I’m fully aware of it, it was also a new feature, so it was out of 
>> scope for Kilo. 
>>>> It is important in my opinion to clearly define the goals first. For 
>>>> instance a simple extensions for bandwidth limiting could be a reasonable 
>>>> target for the Liberty release.
>> I quite agree here, but IMHO, as you said it’s a quite open field (limiting, 
>> guaranteeing, 
>> marking, traffic shaping..), we should do our best in trying to define a 
>> model allowing us 
>> to build that up in the future without huge changes, on the API side I gues

Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting

2015-04-15 Thread Miguel Angel Ajo Pelayo
Ok,

1) #openstack-meeting-2 doesn't exist (#openstack-meeting-alt is the one that does)

2) and not only that, we're colliding with the TWG meeting,
but all the meeting rooms starting at 14:30 UTC are busy.

3) If we move -30m (13:30 UTC) then we could use meeting room
#openstack-meeting-3

   before the neutron drivers meeting, removing some overlap
with the TWG meeting.

But I know it’s an awful time (yet more) for anyone in the USA west coast.

What do you think?

Does #openstack-meeting-3 @ 13:30 UTC sound good for everybody, or should we
propose some other timeslot?

What a wonderful meeting organizer I am… :/

Best,
Miguel Ángel

Unless we’re able to live with 30min, we may need to move the meeting 
> On 15/4/2015, at 15:26, Veiga, Anthony  
> wrote:
> 
> Miguel,
> As a telco operator, who is active in the WG, I am absolutely an interested 
> party for QoS.  I’d be willing to hop between the two of them if absolutely 
> necessary (it’s IRC, after all) but would prefer they not overlap if 
> possible. Thanks!
> -Anthony
> 
>> On Apr 15, 2015, at 6:39 , Miguel Angel Ajo Pelayo > <mailto:mangel...@redhat.com>> wrote:
>> 
>> I saw Mathieu Rohon message on the mail list archive, but it didn’t reach my 
>> inbox
>> for some reason:
>> 
>> >Hi,
>> 
>> >It will overlap with the Telco Working group weekly meeting [1]. It's too
>> >bad, since Qos is a big interest for Telco Cloud Operator!
>> 
>> >Mathieu
>> 
>> >[1]https://wiki.openstack.org/wiki/TelcoWorkingGroup#Meetings 
>> ><https://wiki.openstack.org/wiki/TelcoWorkingGroup#Meetings>
>> My intention was to set the meeting one hour earlier, but it seems that the 
>> DST time changes got to confuse me, I’m very sorry. I’m ok with moving the 
>> meeting 1 hour later (15:00 UTC) for future meetings, as long as it still 
>> works for other people interested in the QoS topic.
>> Mathieu, I’m not sure if people from the telco meeting would be interested 
>> in participation on this meeting, but my participation on the TWG meeting 
>> would probably help getting everyone in sync.
>> 
>> Best, 
>> Miguel Ángel
>> 
>>> On 14/4/2015, at 10:43, Miguel Angel Ajo Pelayo >> <mailto:mangel...@redhat.com>> wrote:
>>> 
>>> Ok, after one week, looks like the most popular time slot is B,
>>> that is 14:00 UTC / Wednesdays.
>>> 
>>> I’m proposing first meeting for Wednesday / Apr 22th 14:00 UTC / 
>>> #openstack-meeting-2.
>>> 
>>> Tomorrow (Apr 15th / 14:00 UTC) It’s a been early since the announcement, 
>>> so 
>>> I will join #openstack-meeting-2 while working on the agenda for next week, 
>>> feel free to join
>>> if you want/have time.
>>> 
>>> 
>>> 
>>> 
>>>> On 9/4/2015, at 22:43, Howard, Victor >>> <mailto:victor_how...@cable.comcast.com>> wrote:
>>>> 
>>>> I prefer Timeslot B, thanks for coordinating.  I would be interested in 
>>>> helping out in any way with the design session let me know!
>>>> 
>>>> From: "Sandhya Dasu (sadasu)" mailto:sad...@cisco.com>>
>>>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>>>> >>> <mailto:openstack-dev@lists.openstack.org>>
>>>> Date: Tuesday, April 7, 2015 12:19 PM
>>>> To: "OpenStack Development Mailing List (not for usage questions)" 
>>>> >>> <mailto:openstack-dev@lists.openstack.org>>
>>>> Subject: Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting
>>>> 
>>>> Hi Miguel,
>>>> Both time slots work for me. Thanks for rekindling this effort.
>>>> 
>>>> Thanks,
>>>> Sandhya
>>>> 
>>>> From: Miguel Ángel Ajo mailto:majop...@redhat.com>>
>>>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>>>> >>> <mailto:openstack-dev@lists.openstack.org>>
>>>> Date: Tuesday, April 7, 2015 1:45 AM
>>>> To: "OpenStack Development Mailing List (not for usage questions)" 
>>>> >>> <mailto:openstack-dev@lists.openstack.org>>
>>>> Subject: Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting
>>>> 
>>>> On Tuesday, 7 de April de 2015 at 3:14, Kyle Mestery wrote:
>>>>> On Mon, Apr 6, 2015 at 6:04 PM, Salvatore Orlando >>>> <mailto:sorla...@nicira.com>> wrote:
>>>>>> 
>>>>>&g

Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting

2015-04-15 Thread Miguel Angel Ajo Pelayo
Ok, during today’s preliminary meeting we talked about moving to 
#openstack-meeting-3,

and we’re open to move -30m the meeting if it’s ok for everybody, to only 
partly overlap with the TWG,
yet we could stay at 14:00 UTC for now.

I have updated both wikis to reflect the meeting room change (to an existing 
one… ‘:D )

minutes of this preliminary meeting can be found here:

http://eavesdrop.openstack.org/meetings/neutron_qos/2015/neutron_qos.2015-04-15-14.07.html

Best,
Miguel Ángel



> On 15/4/2015, at 16:32, Veiga, Anthony  
> wrote:
> 
> On Apr 15, 2015, at 10:00 , Miguel Angel Ajo Pelayo  
> wrote:
>> 
>> Ok,
>> 
>> 1) #openstack-meeting-2 doesn’t exist (-alt is it)
>> 
>> 2) and not only that we’re colliding the TWG meeting,
>>but all the meeting rooms starting at UTC 14:30 are busy.
> 
> While not preferable, I don’t mind overlapping that meeting. I can be in both 
> places.
> 
>> 
>> 3) If we move -30m (UTC 13:30) then we could use meeting room
>>#openstack-meeting-3  
>> 
>> before the neutron drivers meeting, and removing some overlap
>> with the TGW meeting.
>> 
>> But I know it’s an awful time (yet more) for anyone in the USA west coast.
>> 
>> What do you think?
> 
> This time is fine for me, but I’m EDT so it’s normal business hours here.
> 
>> 
>> #openstack-meeting-3 @ UTC 13:30 sounds good for everybody, or should we 
>> propose some
>> other timeslot?
>> 
>> What a wonderful meeting organizer I am… :/
> 
> You’re doing fine! It’s an international organization.  It is by definition 
> impossible to select a timeslot that’s perfect for everyone.
> 
>> 
>> Best,
>> Miguel Ángel?
>> 
>> Unless we’re able to live with 30min, we may need to move the meeting 
> 
> -Anthony
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Miguel Angel Ajo




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Introducing the Cloud Service Federation project (cross-project design summit proposal)

2015-04-15 Thread Miguel Angel Ajo Pelayo
Sounds like a very interesting idea.

Have you talked to the keystone folks?

I would do this work within the keystone project itself (just as a separate daemon).

This still looks like identity management (federated, but identity nonetheless).

I know the burden of working with a mainstream project could be higher, but the
benefits
are also higher: it becomes more useful (it becomes automatically available for
everyone)
and it also passes through the review process of the general keystone
contributors, thus
getting more robust code.


Best regards,
Miguel 

> On 16/4/2015, at 6:24, Geoff Arnold  wrote:
> 
> Yeah, we’ve taken account of:
> https://github.com/openstack/keystone-specs/blob/master/specs/juno/keystone-to-keystone-federation.rst
>  
> 
> http://blog.rodrigods.com/playing-with-keystone-to-keystone-federation/ 
> 
> http://docs.openstack.org/developer/keystone/configure_federation.html 
> 
> 
> One of the use cases we’re wrestling with requires fairly strong 
> anonymization: if user A purchases IaaS services from reseller B, who sources 
> those services from service provider C, nobody at C (OpenStack admin, root on 
> any box) should be able to identify that A is consuming resources; all that 
> they can see is that the requests are coming from B. It’s unclear if we 
> should defer this requirement to a future version, or whether there’s 
> something we need to (or can) do now.
> 
> The main focus of Cloud Service Federation is managing the life cycle of 
> virtual regions and service chaining. It builds on the Keystone federated 
> identity work over the last two cycles, but identity is only part of the 
> problem. However, I recognize that we do have an issue with terminology. For 
> a lot of people, “federation” is simply interpreted as “identity federation”. 
> If there’s a better term than “cloud service federation”, I’d love to hear 
> it. (The Cisco term “Intercloud” is accurate, but probably inappropriate!)
> 
> Geoff
> 
>> On Apr 15, 2015, at 7:07 PM, Adam Young > > wrote:
>> 
>> On 04/15/2015 04:23 PM, Geoff Arnold wrote:
>>> That’s the basic idea.  Now, if you’re a reseller of cloud services, you 
>>> deploy Horizon+Aggregator/Keystone behind your public endpoint, with your 
>>> branding on Horizon. You then bind each of your Aggregator Regions to a 
>>> Virtual Region from one of your providers. As a reseller, you don’t 
>>> actually deploy the rest of OpenStack.
>>> 
>>> As for tokens, there are at least two variations, each with pros and cons: 
>>> proxy and pass-through. Still working through implications of both.
>>> 
>>> Geoff
>> 
>> 
>> Read the Keysteon to Keystone (K2K) docs in the Keystone spec repo, as that 
>> addresses some of the issues here.
>> 
>>> 
 On Apr 15, 2015, at 12:49 PM, Fox, Kevin M >>> > wrote:
 
 So, an Aggregator would basically be a stripped down keystone that 
 basically provided a dynamic service catalog that points to the registered 
 other regions?  You could then point a horizon, cli, or rest api at the 
 aggregator service?
 
 I guess if it was an identity provider too, it can potentially talk to the 
 remote keystone and generate project scoped tokens, though you'd need 
 project+region scoped tokens, which I'm not sure exists today?
 
 Thanks,
 Kevin
 
 
 From: Geoff Arnold [ge...@geoffarnold.com ]
 Sent: Wednesday, April 15, 2015 12:05 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [all] Introducing the Cloud Service Federation 
 project (cross-project design summit proposal)
 
 tl;dr We want to implement a new system which we’re calling an Aggregator 
 which is based on Horizon and Keystone, and that can provide access to 
 virtual Regions from multiple independent OpenStack providers. We plan on 
 developing this system as a project in Stackforge, but we need help right 
 now in identifying any unexpected dependencies.
 
 
 
 For the last 6-7 years, there has been great interest in the potential for 
 various business models involving multiple clouds and/or cloud providers. 
 These business models include but are not limited to, federation, 
 reseller, broker, cloud-bursting, hybrid and intercloud. The core concept 
 of this initiative is to go beyond the simple dyadic relationship between 
 a cloud service provider and a cloud service consumer to a more 
 sophisticated “supply chain” of cloud services, dynamically configured, 
 and operated by different business entities. This is an ambitious goal, 

Re: [openstack-dev] [neutron] [QoS] weekly meeting - update

2015-04-20 Thread Miguel Angel Ajo Pelayo
Thank you Irena,

   I believe this last-minute change deserves an explanation; I asked Irena to
write to the list on my behalf (thank you!).

   I have surgery scheduled for one of my kids on Wednesday (it's something
very mild near the ear), and it's also a holiday for a few of the participants
that were intending
to join.

  I hope it works for most of you; otherwise we will all be available next
Wednesday the 29th
as planned.

   Best regards,
Miguel Ángel.

> On 20/4/2015, at 12:03, Irena Berezovsky  wrote:
> 
> Hi,
> 
> This week neutron QoS meeting will take place on Tuesday, April 21 at 14:00 
> UTC on #openstack-meeting-3.
> 
> Next week, the meeting is back to its original slot: Wed at 14:00 UTC on 
> #openstack-meeting-3.
> 
> Please join if you are interested.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Miguel Angel Ajo



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [QoS] weekly meeting - update

2015-04-20 Thread Miguel Angel Ajo Pelayo

> On 20/4/2015, at 16:03, Kyle Mestery  wrote:
> 
> On Mon, Apr 20, 2015 at 8:44 AM, Ihar Hrachyshka  <mailto:ihrac...@redhat.com>> wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
> 
> On 04/20/2015 12:20 PM, Miguel Angel Ajo Pelayo wrote:
> > Thank you Irena,
> >
> > I believe this last minute change deserves an explanation, I asked
> > Irena to write to the list on my behalf (thank you!).
> >
> > I got a surgery to one of my kids scheduled on Wednesday (it’s
> > something very mild near the ear), also it’s holiday for a few of
> > the participants that were intending to join.
> 
> I hope your kid (and you) will be well.
> 
> ++, hope everything turns out ok!

Thank you Ihar & Kyle,

>  
> >
> > I hope it works for most of you, otherwise we will all be available
> > on next Wednesday 29th as planned.
> 
> Note that the proposed time conflicts with general neutron meeting.
> 
> This is true, but I think this is a one-time occurance due to some scheduling 
> concerns.

True! :/, somehow I thought this week it was on Monday.

After the meeting I will post a summary of what we talked about to this thread,
so anybody who misses it can comment.

>  
> /Ihar
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2
> 
> iQEcBAEBCAAGBQJVNQK/AAoJEC5aWaUY1u578PcIAL7FkO+wSV4AdkcjIrFNbrcX
> aHmbV9pYYkUf3GTrBfqBzJ37XqApSCrFinZPpg7vrDfp1TLxvWXBoh8HhBrJUiv3
> ItqEGpixotKcuT06E0QSm76p6ZkIFffy68Iudj1dX1bLVuDa7VU59jcBxo9H3EMQ
> Ei4Vtvf3M1a8wPZV+FHcySzkjrLNJNlgUCHOy/h5JCWC26nGxKHciFYxXz82HQpQ
> V6o2d+tyLAkJnkIkmbUuRfOLoZaBFwc7ortS1CEt00fMux6EyoNmuhouBZxoUp84
> IKZsSDea8QRciHFot1mUA4cF9Up1vFiNuy9K3FOh8VuNCxzc2Un0sGtjIP6zdb0=
> =Poy8
> -END PGP SIGNATURE-
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> <http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
> <mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
Miguel Angel Ajo



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Neutron] The specs process, effective operators feedback and product management

2015-04-21 Thread Miguel Angel Ajo Pelayo
The Rally process (email-based) doesn't seem scalable enough for the neutron
case
IMHO, but I agree that a voting system doesn't differ too much from Launchpad
and that it can be gamed.

> On 22/4/2015, at 1:21, Assaf Muller  wrote:
> 
> Just to offer some closure, it seems like the voting idea was shot down with
> the energy of a trillion stars, yet the general idea of offering an easy way
> for users to request features makes sense. Expect to see ideas of how
> to implement this soon...
> 
> - Original Message -
>> 
>>> On Apr 10, 2015, at 11:04 AM, Boris Pavlovic  wrote:
>>> 
>>> Hi,
>>> 
>>> I believe that specs are too detailed and too dev oriented for managers,
>>> operators and devops.
>>> They actually don't want/have time to write/read all the stuff in specs and
>>> that's why the communication between dev & operators community is a
>>> broken.
>>> 
>>> I would recommend to think about simpler approaches like making mechanism
>>> for proposing features/changes in projects.
>>> Like we have in Rally:
>>> https://rally.readthedocs.org/en/latest/feature_requests.html
>>> 
>>> This is similar to specs but more concentrate on WHAT rather than HOW.
>> 
>> +1
>> 
>> I think the line between HOW and WHAT are too often blurred in Neutron.
>> Unless we’re able to improve our ability to communicate at an appropriate
>> level of abstraction with non-dev stakeholders, meeting their needs will
>> continue to be a struggle.
>> 
>> 
>> Maru
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Miguel Angel Ajo




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][QoS] service-plugin or not discussion

2015-04-22 Thread Miguel Angel Ajo Pelayo

Hi everybody,

   In the latest QoS meeting, one of the topics was a discussion about how to
implement
QoS [1]: either in core or as a service plugin, in- or out-of-tree.

   It’s my feeling, and Mathieu’s that it looks more like a core feature, as 
we’re talking of
port properties that we define at high level, and most plugins (QoS capable) 
may want
to implement at dataplane/controlplane level, and also that it’s something 
requiring a good
amount of review.


   On the other hand, Irena and Sean were more concerned about having a good
separation
of concerns (I actually agree with that part), and being able to do quicker
iterations on a
separate stackforge repo.
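
   (To make the "service plugin" option concrete, this is roughly the shape it
would take; a sketch only, with made-up method names for the QoS resources, and
in a real plugin the class would derive from neutron's ServicePluginBase and
register its extension resources:)

    # Illustrative skeleton of a QoS service plugin; nothing here is an
    # agreed design, and 'qos' is a hypothetical extension alias.
    class QoSPlugin(object):

        supported_extension_aliases = ['qos']

        def get_plugin_type(self):
            return 'QOS'

        def get_plugin_description(self):
            return 'QoS service plugin (e.g. bandwidth limiting rules on ports)'

        # CRUD entry points that the API layer would dispatch to:
        def create_policy(self, context, policy):
            raise NotImplementedError()

        def get_policies(self, context, filters=None, fields=None):
            raise NotImplementedError()

        def update_policy(self, context, policy_id, policy):
            raise NotImplementedError()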

   Since we didn’t seem to find an agreement, and I’m probably missing some 
details, 
I’d like to loop in our core developers and PTL to provide an opinion on this.


[1] 
http://eavesdrop.openstack.org/meetings/neutron_qos/2015/neutron_qos.2015-04-21-14.03.log.html#l-192


Thanks for your patience,
Miguel Angel Ajo




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][QoS] service-plugin or not discussion

2015-04-24 Thread Miguel Angel Ajo Pelayo
Hi Armando & Salvatore,

> On 23/4/2015, at 9:30, Salvatore Orlando  wrote:
> 
> 
> 
> On 23 April 2015 at 01:30, Armando M.  <mailto:arma...@gmail.com>> wrote:
> 
> On 22 April 2015 at 06:02, Miguel Angel Ajo Pelayo  <mailto:mangel...@redhat.com>> wrote:
> 
> Hi everybody,
> 
>In the latest QoS meeting, one of the topics was a discussion about how to 
> implement
> QoS [1] either as in core, or as a service plugin, in, or out-tree.
> 
> It is really promising that after only two meetings the team is already 
> split! I cannot wait for the API discussion to start ;)

We seem to be relatively on the same page about how to model the API, but we
still need to loop
in users/operators who have an interest in QoS to make sure they find it
usable. [1]

>  
> 
> My apologies if I was unable to join, the meeting clashed with another one I 
> was supposed to attend.

My bad, sorry ;-/

>  
> 
>It’s my feeling, and Mathieu’s that it looks more like a core feature, as 
> we’re talking of
> port properties that we define at high level, and most plugins (QoS capable) 
> may want
> to implement at dataplane/controlplane level, and also that it’s something 
> requiring a good
> amount of review.
> 
> "Core" is a term which is recently being abused in Neutron... However, I 
> think you mean that it is a feature fairly entangled with the L2 mechanisms,

Not only the L2 mechanisms, but the description of ports themselves; in the
basic cases we're just defining
how "small" or "big" your port is.  In the future we could be saying "UDP ports
5000-6000" have the highest
priority on this port, or a minimum bandwidth of 50Mbps…, or it's marked with an
IPv6 flow label for hi-prio…
or whatever policy we support.
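
Purely as an illustration of that spectrum (none of this is an agreed API; the
field names are made up for the example), the kind of port-level description
being talked about ranges from:

    # Hypothetical examples of what a QoS description attached to a port
    # could express, from the simple case to the richer ones mentioned above.
    simple_policy = {
        'name': 'small-port',
        'rules': [
            {'type': 'bandwidth_limit', 'max_kbps': 10000,
             'max_burst_kbps': 1000},
        ],
    }

    future_policy = {
        'name': 'vnf-media-port',
        'rules': [
            # Give a UDP port range the highest priority on this port.
            {'type': 'priority', 'protocol': 'udp',
             'port_range': [5000, 6000], 'level': 'high'},
            # Guarantee a floor of 50 Mbps.
            {'type': 'minimum_bandwidth', 'min_kbps': 50000},
            # Mark high-priority traffic, e.g. via DSCP or an IPv6 flow label.
            {'type': 'dscp_marking', 'dscp_mark': 46},
        ],
    }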

> that deserves being integrated in what is today the "core" plugin and in the 
> OVS/LB agents. To this aim I think it's good to make a distinction between 
> the management plane and the control plane implementation.
> 
> At the management plane you have a few choices:
> - yet another mixin, so that any plugin can add it and quickly support the 
> API extension at the mgmt layer. I believe we're fairly certain everybody 
> understands mixins are not sustainable anymore and I'm hopeful you are not 
> considering this route.

Are you specifically referring to this on every plugin? 

class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2, <---
dvr_mac_db.DVRDbMixin, <---
external_net_db.External_net_db_mixin, <---
sg_db_rpc.SecurityGroupServerRpcMixin,   <---
agentschedulers_db.DhcpAgentSchedulerDbMixin,  <---
addr_pair_db.AllowedAddressPairsMixin,  <

I’m quite allergic to mixings, I must admit, but, if it’s not the desired way, 
why don’t we refactor the way we compose plugins !? (yet more refactors 
probably would slow us down, …) but… I feel like we’re pushing to 
overcomplicate the design for a case which is similar to everything else we had 
before (security groups, port security, allowed address pairs).

It feels wrong to have every similar feature done in a different way, even if 
the current way is not the best one I admit.

> - a service plugin - as suggested by some proposers. The service plugin is 
> fairly easy to implement, and now Armando has provided you with a mechanism 
> to register for callbacks for events in other plugins. This should make the 
> implementation fairly straightforward. This also enables other plugins to 
> implement QoS support.
> - a ML2 mechanism driver + a ML2 extension driver. From an architectural 
> perspective this would be the preferred solution for a ML2 implementation, 
> but at the same time will not provide management level support for non-ML2 
> plugins.

I’m a bit lost of why a a plugin (apart from ML2) could not just declare that 
it’s implementing the extension,  or it’s just that the only way we have to do 
it right now it’s mixings? why would ML2 avoid it?.


>  
> 
> 
>In the other hand Irena and Sean were more concerned about having a good 
> separation
> of concerns (I agree actually with that part), and being able to do quicker 
> iterations on a
> separate stackforge repo.
> 
> Perhaps we're trying to address the issue at the wrong time. Once a 
> reasonable agreement has been reached on the data model, and the API, whether 
> we're going with a service plugin or core etc should be an implementation 
> detail. I think the crux of the matter is the data plane integration. From a 
> management and control standpoint it should be fairly trivial to 
> expose/implement the API and business logic via a service plugin and, and 
> some of you suggested, integrate with the core via callbacks.

We have an

Re: [openstack-dev] [neutron][QoS] service-plugin or not discussion

2015-04-28 Thread Miguel Angel Ajo Pelayo

> On 24/4/2015, at 19:42, Armando M.  wrote:
> 
> 
> 
> On 24 April 2015 at 01:47, Miguel Angel Ajo Pelayo  <mailto:mangel...@redhat.com>> wrote:
> Hi Armando & Salvatore,
> 
>> On 23/4/2015, at 9:30, Salvatore Orlando > <mailto:sorla...@nicira.com>> wrote:
>> 
>> 
>> 
>> On 23 April 2015 at 01:30, Armando M. > <mailto:arma...@gmail.com>> wrote:
>> 
>> On 22 April 2015 at 06:02, Miguel Angel Ajo Pelayo > <mailto:mangel...@redhat.com>> wrote:
>> 
>> Hi everybody,
>> 
>>In the latest QoS meeting, one of the topics was a discussion about how 
>> to implement
>> QoS [1] either as in core, or as a service plugin, in, or out-tree.
>> 
>> It is really promising that after only two meetings the team is already 
>> split! I cannot wait for the API discussion to start ;)
> 
> We seem to be relatively on the same page about how to model the API, but we 
> need yet to loop
> in users/operators who have an interest in QoS to make sure they find it 
> usable. [1]
> 
>>  
>> 
>> My apologies if I was unable to join, the meeting clashed with another one I 
>> was supposed to attend.
> 
> My bad, sorry ;-/
> 
>>  
>> 
>>It’s my feeling, and Mathieu’s that it looks more like a core feature, as 
>> we’re talking of
>> port properties that we define at high level, and most plugins (QoS capable) 
>> may want
>> to implement at dataplane/controlplane level, and also that it’s something 
>> requiring a good
>> amount of review.
>> 
>> "Core" is a term which is recently being abused in Neutron... However, I 
>> think you mean that it is a feature fairly entangled with the L2 mechanisms,
> 
> Not only the L2 mechanisms, but the description of ports themselves, in the 
> basic cases we’re just defining
> how “small” or “big” your port is.  In the future we could be saying “UDP 
> ports 5000-6000” have the highest
> priority on this port, or a minimum bandwidth of 50Mbps…, it’s marked with a 
> IPv6 flow label for hi-prio…
> or whatever policy we support.
> 
>> that deserves being integrated in what is today the "core" plugin and in the 
>> OVS/LB agents. To this aim I think it's good to make a distinction between 
>> the management plane and the control plane implementation.
>> 
>> At the management plane you have a few choices:
>> - yet another mixin, so that any plugin can add it and quickly support the 
>> API extension at the mgmt layer. I believe we're fairly certain everybody 
>> understands mixins are not sustainable anymore and I'm hopeful you are not 
>> considering this route.
> 
> Are you specifically referring to this on every plugin? 
> 
> class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2, <---
> dvr_mac_db.DVRDbMixin, <---
> external_net_db.External_net_db_mixin, <---
> sg_db_rpc.SecurityGroupServerRpcMixin,   <---
> agentschedulers_db.DhcpAgentSchedulerDbMixin,  <---
> addr_pair_db.AllowedAddressPairsMixin,  <
> 
> I’m quite allergic to mixings, I must admit, but, if it’s not the desired 
> way, why don’t we refactor the way we compose plugins !? (yet more refactors 
> probably would slow us down, …) but… I feel like we’re pushing to 
> overcomplicate the design for a case which is similar to everything else we 
> had before (security groups, port security, allowed address pairs).
> 
> It feels wrong to have every similar feature done in a different way, even if 
> the current way is not the best one I admit.
> 
> 
> This attitude led us to the pain we are in now, I think we can no longer 
> afford to keep doing that. Bold goals require bold actions. If we don't step 
> back and figure out a way to extend the existing components without hijacking 
> the current codebase, it would be very difficult to give this effort the 
> priority it deserves.

I agree with you; please note my point of "let's refactor it all into something
better", but refactoring the world and forgetting about new features is not
sustainable, so, as you say, we may start with new features as we explore better
ways to do it. But I believe old extensions should also be equally addressed in
the future.

I also still lack the perspective to propose better approaches; I hope I will be
able to do so in the future when I explore those areas of neutron.

Let’s focus in the API, and the lowest levels of what we’re going to do, and 
lets resolve everything else at a later time when that’s clear. I start to lean 
towards a service-plugin implementation as it’s going 

Re: [openstack-dev] Stepping down from Neutron roles

2017-03-07 Thread Miguel Angel Ajo Pelayo
Nate, it was a pleasure working with you; you and your team made great
contributions to OpenStack and neutron. I'll be very happy if we ever have
the chance to work together again.

Best regards, and very good luck, my friend.

On Tue, Mar 7, 2017 at 4:55 AM, Kevin Benton  wrote:

> Hi Nate,
>
> Thanks for all of your contributions and good luck in your future
> endeavors! You're always welcome back. :)
>
>
> On Mar 6, 2017 13:15, "Nate Johnston"  wrote:
>
> All,
>
> I have been delaying this long enough... sadly, due to changes in
> direction I
> am no longer able to spend time working on OpenStack, and as such I need to
> resign my duties as Services lieutenant, and liaison to the Infra team.  My
> time in the Neutron and FWaaS community has been one of the most rewarding
> experiences of my career.  Thank you to everyone I met at the summits and
> who
> took the time to work with me on my contributions.  And thank you to the
> many
> of you who have become my personal friends.  If I see an opportunity in the
> future to return to OpenStack development I will jump on it in a hot
> minute.
> But until then, I'll be cheering you on from the sidelines.
>
> All the best,
>
> --N.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][gate] functional job busted

2017-03-15 Thread Miguel Angel Ajo Pelayo
Thank you for the patches. I merged them, released 1.1.0 and proposed [1]

Cheers!,

[1] https://review.openstack.org/445884
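
(For context, the breakage was os-log-merger choking on the null bytes that the
new haproxy-based metadata proxy writes into its log; the defensive fix boils
down to sanitizing lines while reading. A minimal sketch of that idea, not the
actual patch:)

    # Read a log file defensively, dropping NUL bytes and replacing
    # undecodable sequences instead of crashing on them.
    def iter_log_lines(path):
        with open(path, 'rb') as log_file:
            for raw_line in log_file:
                yield raw_line.replace(b'\x00', b'').decode(
                    'utf-8', errors='replace')

    if __name__ == '__main__':
        import sys
        for line in iter_log_lines(sys.argv[1]):
            sys.stdout.write(line)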


On Wed, Mar 15, 2017 at 10:14 AM, Gorka Eguileor 
wrote:

> On 14/03, Ihar Hrachyshka wrote:
> > Hi all,
> >
> > the patch that started to produce log index file for logstash [1] and
> > the patch that switched metadata proxy to haproxy [2] landed and
> > together busted the functional job because the latter produces log
> > messages with null-bytes inside, while os-log-merger is not resilient
> > against it.
> >
> > If functional job would be in gate and not just in check queue, that
> > would not happen.
> >
> > Attempt to fix the situation in multiple ways at [3]. (For
> > os-log-merger patches, we will need new release and then bump the
> > version used in gate, so short term neutron patches seem more viable.)
> >
> > I will need support from both authors of os-log-merger as well as
> > other neutron members to unravel that. I am going offline in a moment,
> > and hope someone will take care of patches up for review, and land
> > what's due.
> >
> > [1] https://review.openstack.org/#/c/442804/ [2]
> > https://review.openstack.org/#/c/431691/ [3]
> > https://review.openstack.org/#/q/topic:fix-os-log-merger-crash
> >
> > Thanks,
> > Ihar
>
> Hi Ihar,
>
> That is an unexpected case that never came up during our tests or usage,
> but it is indeed something the script should take into account.
>
> Thanks for the os-log-merger patches, I've reviewed them and they look
> good to me, so hopefully they'll land before you come back online.  ;-)
>
> Cheers,
> Gorka.
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] stepping down from neutron core team

2017-04-28 Thread Miguel Angel Ajo Pelayo
Hi everybody,

Some of you already know, but I wanted to make it official.

Recently I moved to work on the networking-ovn component,
and on OVS/OVN itself, and while I'll stick around and will be available
on IRC for any questions, I'm already not doing a good job with
neutron reviews, so...

It's time to leave room for new reviewers.

It's always a pleasure to work with you folks.

Best regards,
Miguel Ángel Ajo.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovn] metadata agent implementation

2017-05-08 Thread Miguel Angel Ajo Pelayo
On Mon, May 8, 2017 at 2:48 AM, Michael Still  wrote:

> It would be interesting for this to be built in a way where other
> endpoints could be added to the list that have extra headers added to them.
>
> For example, we could end up with something quite similar to EC2 IAMS if
> we could add headers on the way through for requests to OpenStack endpoints.
>
> Do you think the design your proposing will be extensible like that?
>


I believe we should focus on achieving parity with the neutron reference
implementation first; later on, what you're proposing would probably need to
be modelled on the neutron side.

Could you provide a practical example of how that would work anyway?


>
> Thanks,
> Michael
>
>
>
>
> On Fri, May 5, 2017 at 10:07 PM, Daniel Alvarez Sanchez <
> dalva...@redhat.com> wrote:
>
>> Hi folks,
>>
>> Now that it looks like the metadata proposal is more refined [0], I'd like
>> to get some feedback from you on the driver implementation.
>>
>> The ovn-metadata-agent in networking-ovn will be responsible for
>> creating the namespaces, spawning haproxies and so on. But also,
>> it must implement most of the "old" neutron-metadata-agent functionality
>> which listens on a UNIX socket and receives requests from haproxy,
>> adds some headers and forwards them to Nova. This means that we can
>> import/reuse big part of neutron code.
>>
>> Makes sense, you would avoid this way, depending on an extra co-hosted
service, reducing this way deployment complexity.


>> I wonder what you guys think about depending on neutron tree for the
>> agent implementation despite we can benefit from a lot of code reuse.
>> On the other hand, if we want to get rid of this dependency, we could
>> probably write the agent "from scratch" in C (what about having C
>> code in the networking-ovn repo?) and, at the same time, it should
>> buy us a performance boost (probably not very noticeable since it'll
>> respond to requests from local VMs involving a few lookups and
>> processing simple HTTP requests; talking to nova would take most
>> of the time and this only happens at boot time).
>>
>
I would try to keep that part in Python, like everything else in the
networking-ovn repo. I remember that Jakub made lots of improvements in the
neutron-metadata-agent area via caching; I'd make sure we reuse that if
it's of use to us (not sure if it was used for the nova communication or not).

The neutron metadata agent apparently has a get_ports RPC call [2] to the
neutron-server plugin. We don't want RPC calls; we want to get that info from
ovsdb. I have a vague recollection of caching also being used for those
requests [1], but with ovsdb we have that for free.

I don't know; the agent is ~300 LOC, so it seems to me that a whole rewrite in
Python (copying whatever is necessary) could be a reasonable way to go, but I
guess that actually going down that rabbit hole would tell you better than I
can whether I'm wrong or whether it makes sense.
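
To make that concrete, this is a rough sketch of the request path such a
rewrite would have to cover (everything below is made up for illustration:
names, URLs, the shared secret and the ovsdb lookup; it is not the real agent
code): take the request haproxy forwards over the UNIX socket, resolve the
instance/project from ovsdb rather than through an RPC call, sign the
instance id, and proxy the request on to nova-metadata:

    import hashlib
    import hmac

    import requests
    from webob import dec
    from webob import Response

    NOVA_METADATA_URL = 'http://127.0.0.1:8775'  # assumption
    SHARED_SECRET = 'secret'                     # assumption


    def lookup_port(remote_addr, network_id):
        """Placeholder: the real agent would resolve these from the OVN DBs
        (through ovsdb) rather than a neutron RPC get_ports call."""
        return 'instance-uuid', 'project-uuid'


    @dec.wsgify
    def metadata_app(req):
        # haproxy would hand us the request on a UNIX socket; the WSGI
        # server wiring around this app is omitted here.
        instance_id, project_id = lookup_port(
            req.remote_addr, req.headers.get('X-OVN-Network-ID'))
        sig = hmac.new(SHARED_SECRET.encode(), instance_id.encode(),
                       hashlib.sha256).hexdigest()
        nova_resp = requests.get(
            NOVA_METADATA_URL + req.path_info,
            headers={'X-Instance-ID': instance_id,
                     'X-Tenant-ID': project_id,
                     'X-Instance-ID-Signature': sig,
                     'X-Forwarded-For': req.remote_addr})
        return Response(body=nova_resp.content, status=nova_resp.status_code)

Talking to nova (the requests.get above) is where most of the time would go,
which is why I don't think a C rewrite would buy us much.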


>
>> I would probably aim for a Python implementation
>>
+1000


>> reusing/importing
>> code from neutron tree but I'm not sure how we want to deal with
>> changes in neutron codebase (we're actually importing code now).
>> Looking forward to reading your thoughts :)
>>
>
I guess the neutron-ns-metadata haproxy spawning [3] can be reused from
neutron; I wonder if it would make sense to move that to neutron_lib?
I believe that's the key thing that can be reused.

If we don't reuse it, we need to maintain it in two places;
if we do reuse it, we can be broken by changes in the neutron repo,
but I'm sure we're flexible enough to react to such changes.
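
For reference, the piece I mean is roughly this (a hand-wavy sketch; the
template, paths and helper names are made up, not the actual neutron driver
code): render a small haproxy config that proxies 169.254.169.254:80 inside
the namespace to the agent's UNIX socket, then spawn haproxy in that
namespace:

    import subprocess

    _HAPROXY_CFG = """
    listen metadata
        mode http
        bind 169.254.169.254:80
        server agent unix@%(socket_path)s
        http-request add-header X-OVN-Network-ID %(network_id)s
    """


    def spawn_metadata_haproxy(namespace, network_id, socket_path, cfg_path):
        with open(cfg_path, 'w') as cfg:
            cfg.write(_HAPROXY_CFG % {'socket_path': socket_path,
                                      'network_id': network_id})
        # -D daemonizes haproxy; the real agent would track the process
        # with something like neutron's external_process/ProcessManager.
        subprocess.check_call(['ip', 'netns', 'exec', namespace,
                               'haproxy', '-D', '-f', cfg_path])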

Cheers! :D


>
>> Thanks,
>> Daniel
>>
>> [0] https://review.openstack.org/#/c/452811/
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Rackspace Australia
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][infra] Functional job failure rate at 100%

2017-08-10 Thread Miguel Angel Ajo Pelayo
Good (amazing) job folks. :)
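
As a side note, for anyone wondering what the "simulating conntrack entries"
mentioned by Jakub further down looks like: the functional tests inject
entries with the conntrack CLI inside a test namespace, which is exactly the
operation the regressed Xenial kernel broke. A rough sketch (namespace,
addresses and ids are made up):

    import subprocess


    def create_icmp_conntrack_entry(namespace, src='192.168.0.1',
                                    dst='192.168.0.2', icmp_id='1234'):
        # Inject a fake ICMP conntrack entry, roughly what the functional
        # tests do to have something to assert against and clean up.
        subprocess.check_call(
            ['ip', 'netns', 'exec', namespace,
             'conntrack', '-I', '-p', 'icmp', '-s', src, '-d', dst,
             '--timeout', '60',
             '--icmp-type', '8', '--icmp-code', '0', '--icmp-id', icmp_id])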

On Aug 10, 2017 9:43, "Thierry Carrez"  wrote:

> Oh, that's good for us. Should still be fixed, if only so that we can
> test properly :)
>
> Kevin Benton wrote:
> > This is just the code simulating the conntrack entries that would be
> > created by real traffic in a production system, right?
> >
> > On Wed, Aug 9, 2017 at 11:46 AM, Jakub Libosvar  > > wrote:
> >
> > On 09/08/2017 18:23, Jeremy Stanley wrote:
> > > On 2017-08-09 15:29:04 +0200 (+0200), Jakub Libosvar wrote:
> > > [...]
> > >> Is it possible to switch used image for jenkins machines to use
> > >> back the older version? Any other ideas how to deal with the
> > >> kernel bug?
> > >
> > > Making our images use non-current kernel packages isn't trivial,
> but
> > > as Thierry points out in his reply this is not just a problem for
> > > our CI system. Basically Ubuntu has broken OpenStack (and probably
> a
> > > variety of other uses of conntrack) for a lot of people following
> > > kernel updates in 16.04 LTS so the fix needs to happen there
> > > regardless. Right now, basically, Ubuntu Xenial is not a good
> > > platform to be running OpenStack on until they get the kernel
> > > regression addressed.
> >
> > True. Fortunately, the impact is not that catastrophic for Neutron
> as it
> > might seem on the first look. Not sure about the other projects,
> though.
> > Neutron doesn't create conntrack entries in production code - only in
> > testing. That said, agents should work just fine even with the
> > kernel bug.
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] mod_wsgi support (pike bug?)

2017-09-05 Thread Miguel Angel Ajo Pelayo
Why do we need to put all the configuration in a single file?

That would be a big, big change for deployers. It'd be great if we can think
of an alternative solution (not sure how that's being handled for other
services, though).

Best regards,
Miguel Ángel.

On Mon, Sep 4, 2017 at 3:01 PM, Kevin Benton  wrote:

> Yes, unfortunately I didn't make it back to the patch in time to adjust
> devstack to dump all of the configuration into one file (instead of
> /etc/neutron/neutron.conf /etc/neutron/plugins/ml2.conf etc). I did test
> locally with my dev environment around the time that RPC server patch went
> in, but there may have been a regression since it went in since it's not
> tested as Matt pointed out.
>
> It appears that puppet is still spreading the config files for the server
> into multiple locations as well[1]. Does it inherit that logic from
> devstack? Because that will need to be changed to push all of the relevant
> server config into one conf.
>
> 1. http://logs.openstack.org/82/500182/3/check/gate-puppet-
> openstack-integration-4-scenario004-tempest-ubuntu-
> xenial/791523c/logs/etc/neutron/plugins/
>
> On Sun, Sep 3, 2017 at 12:03 PM, Mohammed Naser 
> wrote:
>
>> On Sun, Sep 3, 2017 at 3:03 PM, Mohammed Naser 
>> wrote:
>> > On Sun, Sep 3, 2017 at 2:20 PM, Matthew Treinish 
>> wrote:
>> >> On Sun, Sep 03, 2017 at 01:47:24PM -0400, Mohammed Naser wrote:
>> >>> Hi folks,
>> >>>
>> >>> I've attempted to enable mod_wsgi support in our dev environment with
>> >>> Puppet however it results in a traceback.  I figured it was an
>> >>> environment thing so I looked into moving the Puppet CI to test using
>> >>> mod_wsgi and it resulted in the same error.
>> >>>
>> >>> http://logs.openstack.org/82/500182/3/check/gate-puppet-open
>> stack-integration-4-scenario004-tempest-ubuntu-xenial/
>> 791523c/logs/apache/neutron_wsgi_error.txt.gz
>> >>>
>> >>> Would anyone from the Neutron team be able to give input on this?
>> >>> We'd love to add gating for Neutron deployed by mod_wsgi which can
>> >>> help find similar issues.
>> >>>
>> >>
>> >> Neutron never got their wsgi support working in Devstack either. The
>> patch
>> >> adding that: https://review.openstack.org/#/c/439191/ never passed
>> the gate and
>> >> seems to have lost the attention of the author. The wsgi support in
>> neutron
>> >> probably doesn't work yet, and is definitely untested. IIRC, the issue
>> they were
>> >> hitting was loading the config files. [1] I don't think I saw any
>> progress on it
>> >> after that though.
>> >>
>> >> The TC goal doc [2] probably should say something about it never
>> landing and
>> >> missing pike.
>> >>
>> >
>> > That would make sense.  The release notes also state that it does
>> > offer the ability to run inside mod_wsgi which can be misleading to
>> > deployers (that was the main reason I thought we can start testing
>> > using it):
>> >
>> Sigh, hit send too early.  Here is the link:
>>
>> http://git.openstack.org/cgit/openstack/neutron/commit/?id=9
>> 16bc96ee214078496b4b38e1c93f36f906ce840
>> >
>> >>
>> >> -Matt Treinish
>> >>
>> >>
>> >> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-June
>> /117830.html
>> >> [2] https://governance.openstack.org/tc/goals/pike/deploy-api-in
>> -wsgi.html#neutron
>> >>
>> >> 
>> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe: openstack-dev-requ...@lists.op
>> enstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] mod_wsgi support (pike bug?)

2017-09-05 Thread Miguel Angel Ajo Pelayo
As a note, in OSP we also include configuration directories and the
like:

https://review.rdoproject.org/r/gitweb?p=openstack/neutron-distgit.git;a=blob;f=neutron-server.service;h=e68024cb9dc06e474b1ac9473bff93c3d892b4d6;hb=48a9d1aa77506d0c60d5bc448b7c5c303083aa68#l8

Config directories make it a bit more future-proof, and make it easy to
integrate with vendor plugins without the need to modify the service file.
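
For illustration, a sketch (not the actual neutron code, just the usual
default paths) of how a WSGI entry point could keep honouring split
configuration: oslo.config already understands --config-file and
--config-dir, so the application factory can be handed the same arguments a
deployer puts on the neutron-server command line or in the service file:

    from oslo_config import cfg


    def load_neutron_config():
        # Same effect as the options in the neutron-server service file:
        # several --config-file arguments plus a --config-dir.
        cfg.CONF(['--config-file', '/etc/neutron/neutron.conf',
                  '--config-file', '/etc/neutron/plugins/ml2/ml2_conf.ini',
                  '--config-dir', '/etc/neutron/conf.d/neutron-server'],
                 project='neutron')

That way deployers keep neutron.conf, the ml2 ini and any vendor drop-ins
exactly where they are today.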


On Tue, Sep 5, 2017 at 9:27 AM, Miguel Angel Ajo Pelayo  wrote:

> Why do we need to put all the configuration in a single file?
>
> That would be a big, big change for deployers. It'd be great if we can think
> of an alternative solution (not sure how that's being handled for other
> services, though).
>
> Best regards,
> Miguel Ángel.
>
> On Mon, Sep 4, 2017 at 3:01 PM, Kevin Benton  wrote:
>
>> Yes, unfortunately I didn't make it back to the patch in time to adjust
>> devstack to dump all of the configuration into one file (instead of
>> /etc/neutron/neutron.conf /etc/neutron/plugins/ml2.conf etc). I did test
>> locally with my dev environment around the time that RPC server patch went
>> in, but there may have been a regression since it went in since it's not
>> tested as Matt pointed out.
>>
>> It appears that puppet is still spreading the config files for the server
>> into multiple locations as well[1]. Does it inherit that logic from
>> devstack? Because that will need to be changed to push all of the relevant
>> server config into one conf.
>>
>> 1. http://logs.openstack.org/82/500182/3/check/gate-puppet-o
>> penstack-integration-4-scenario004-tempest-ubuntu-xenial/
>> 791523c/logs/etc/neutron/plugins/
>>
>> On Sun, Sep 3, 2017 at 12:03 PM, Mohammed Naser 
>> wrote:
>>
>>> On Sun, Sep 3, 2017 at 3:03 PM, Mohammed Naser 
>>> wrote:
>>> > On Sun, Sep 3, 2017 at 2:20 PM, Matthew Treinish 
>>> wrote:
>>> >> On Sun, Sep 03, 2017 at 01:47:24PM -0400, Mohammed Naser wrote:
>>> >>> Hi folks,
>>> >>>
>>> >>> I've attempted to enable mod_wsgi support in our dev environment with
>>> >>> Puppet however it results in a traceback.  I figured it was an
>>> >>> environment thing so I looked into moving the Puppet CI to test using
>>> >>> mod_wsgi and it resulted in the same error.
>>> >>>
>>> >>> http://logs.openstack.org/82/500182/3/check/gate-puppet-open
>>> stack-integration-4-scenario004-tempest-ubuntu-xenial/791523
>>> c/logs/apache/neutron_wsgi_error.txt.gz
>>> >>>
>>> >>> Would anyone from the Neutron team be able to give input on this?
>>> >>> We'd love to add gating for Neutron deployed by mod_wsgi which can
>>> >>> help find similar issues.
>>> >>>
>>> >>
>>> >> Neutron never got their wsgi support working in Devstack either. The
>>> patch
>>> >> adding that: https://review.openstack.org/#/c/439191/ never passed
>>> the gate and
>>> >> seems to have lost the attention of the author. The wsgi support in
>>> neutron
>>> >> probably doesn't work yet, and is definitely untested. IIRC, the
>>> issue they were
>>> >> hitting was loading the config files. [1] I don't think I saw any
>>> progress on it
>>> >> after that though.
>>> >>
>>> >> The TC goal doc [2] probably should say something about it never
>>> landing and
>>> >> missing pike.
>>> >>
>>> >
>>> > That would make sense.  The release notes also state that it does
>>> > offer the ability to run inside mod_wsgi which can be misleading to
>>> > deployers (that was the main reason I thought we can start testing
>>> > using it):
>>> >
>>> Sigh, hit send too early.  Here is the link:
>>>
>>> http://git.openstack.org/cgit/openstack/neutron/commit/?id=9
>>> 16bc96ee214078496b4b38e1c93f36f906ce840
>>> >
>>> >>
>>> >> -Matt Treinish
>>> >>
>>> >>
>>> >> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-June
>>> /117830.html
>>> >> [2] https://governance.openstack.org/tc/goals/pike/deploy-api-in
>>> -wsgi.html#neutron
>>> >>
>>> >> 
>>> __
>>> >> OpenStack Development Mailing List (not for usage questions)
