Re: [Openstack-operators] [openstack-dev] [all] All Hail our Newest Release Name - OpenStack Train

2018-11-13 Thread Slawomir Kaplonski
Hi,

I think it was published, see 
http://lists.openstack.org/pipermail/openstack/2018-November/047172.html

> Message from Jeremy Freudberg on 14.11.2018 at 06:12:
> 
> Hey Tony,
> 
> What's the reason for the results of the poll not being public?
> 
> Thanks,
> Jeremy
> On Tue, Nov 13, 2018 at 11:52 PM Tony Breeds  wrote:
>> 
>> 
>> Hi everybody!
>> 
>> As the subject reads, the "T" release of OpenStack is officially
>> "Train".  Unlike recent choices Train was the popular choice so
>> congrats!
>> 
>> Thanks to everybody who participated and helped with the naming process.
>> 
>> Let's make OpenStack Train a release so awesome that people can't help
>> but choo-choo-choose to run it[1]!
>> 
>> 
>> Yours Tony.
>> [1] Too soon? Too much?
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

— 
Slawek Kaplonski
Senior software engineer
Red Hat


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Slawomir Kaplonski
Hi,

> Message from Ignazio Cassano on 12.11.2018 at 22:55:
> 
> Hello,
> the nova api is on the same controller on port 8774 and it can be reached 
> from the metadata agent

The nova metadata API listens on port 8775 by default, IIRC, not 8774.
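A quick way to verify that from the network node (the host name "controller" below is a placeholder for whatever runs the nova metadata API in your deployment):

```shell
# Expect an HTTP status code (often 404 for the bare root) rather than a timeout
curl -s -o /dev/null -w "%{http_code}\n" http://controller:8775/

# Or just test the TCP port
nc -zv controller 8775
```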

> No firewall is present
> Regards
> 
> On Mon, 12 Nov 2018 at 22:40, Slawomir Kaplonski wrote:
> Hi,
> 
> From the logs you attached, it looks like your neutron-metadata-agent can't 
> connect to the nova-api service. Please check whether the nova metadata API is 
> reachable from the node where your neutron-metadata-agent is running.
> 
> > Message from Ignazio Cassano on 12.11.2018 at 22:34:
> > 
> > Hello again,
> > I have another installation of Ocata.
> > On Ocata, the metadata proxy for a network ID shows up in ps -afe like this:
> >  /usr/bin/python2 /bin/neutron-ns-metadata-proxy 
> > --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
> >  --metadata_proxy_socket=/var/lib/neutron/metadata_proxy 
> > --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1 
> > --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996 
> > --metadata_proxy_group=993 
> > --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
> >  --log-dir=/var/log/neutron
> > 
> > On Queens it looks like this:
> >  haproxy -f 
> > /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
> > 
> > Is this the correct behaviour?
> 
> Yes, that is correct. It was changed some time ago, see 
> https://bugs.launchpad.net/neutron/+bug/1524916
> 
> > 
> > Regards
> > Ignazio
> > 
> > 
> > 
> > On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski wrote:
> > Hi,
> > 
> > Can you share logs from your haproxy metadata proxy, which runs in the qdhcp 
> > namespace? There should be some information about the cause of those 500 
> > errors.
> > 
> > > Message from Ignazio Cassano on 12.11.2018 at 19:49:
> > > 
> > > Hi All,
> > > I manually upgraded my CentOS 7 OpenStack from Ocata to Pike.
> > > All worked fine.
> > > Then I upgraded from Pike to Queens and instances stopped being able to 
> > > reach the metadata service on 169.254.169.254, getting error 500.
> > > I am using isolated metadata (true) in my DHCP config, and port 80 is 
> > > listening in the DHCP namespace.
> > > Please, can anyone help me?
> > > Regards
> > > Ignazio
> > > 
> > 
> > — 
> > Slawek Kaplonski
> > Senior software engineer
> > Red Hat
> > 
> 
> — 
> Slawek Kaplonski
> Senior software engineer
> Red Hat
> 

— 
Slawek Kaplonski
Senior software engineer
Red Hat




Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Slawomir Kaplonski
Hi,

From the logs you attached, it looks like your neutron-metadata-agent can't 
connect to the nova-api service. Please check whether the nova metadata API is 
reachable from the node where your neutron-metadata-agent is running.

> Message from Ignazio Cassano on 12.11.2018 at 22:34:
> 
> Hello again,
> I have another installation of Ocata.
> On Ocata, the metadata proxy for a network ID shows up in ps -afe like this:
>  /usr/bin/python2 /bin/neutron-ns-metadata-proxy 
> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
>  --metadata_proxy_socket=/var/lib/neutron/metadata_proxy 
> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1 
> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996 
> --metadata_proxy_group=993 
> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log 
> --log-dir=/var/log/neutron
> 
> On Queens it looks like this:
>  haproxy -f 
> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
> 
> Is this the correct behaviour?

Yes, that is correct. It was changed some time ago, see 
https://bugs.launchpad.net/neutron/+bug/1524916
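If you want to see what that haproxy instance is actually configured to do, a sketch (the network ID below is a placeholder; substitute your own):

```shell
# The DHCP agent renders one haproxy config per network
cat /var/lib/neutron/ns-metadata-proxy/<network-id>.conf

# Confirm haproxy is listening on port 80 inside the qdhcp namespace
ip netns exec qdhcp-<network-id> ss -lntp | grep ':80'
```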

> 
> Regards
> Ignazio
> 
> 
> 
> On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski wrote:
> Hi,
> 
> Can you share logs from your haproxy metadata proxy, which runs in the qdhcp 
> namespace? There should be some information about the cause of those 500 
> errors.
> 
> > Message from Ignazio Cassano on 12.11.2018 at 19:49:
> > 
> > Hi All,
> > I manually upgraded my CentOS 7 OpenStack from Ocata to Pike.
> > All worked fine.
> > Then I upgraded from Pike to Queens and instances stopped being able to 
> > reach the metadata service on 169.254.169.254, getting error 500.
> > I am using isolated metadata (true) in my DHCP config, and port 80 is 
> > listening in the DHCP namespace.
> > Please, can anyone help me?
> > Regards
> > Ignazio
> > 
> 
> — 
> Slawek Kaplonski
> Senior software engineer
> Red Hat
> 

— 
Slawek Kaplonski
Senior software engineer
Red Hat




Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Slawomir Kaplonski
Hi,

Can you share logs from your haproxy metadata proxy, which runs in the qdhcp 
namespace? There should be some information about the cause of those 500 errors.
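A few diagnostic commands that may help gather that information (the network ID is a placeholder; exact log locations depend on your syslog setup and distro):

```shell
# Call the metadata endpoint from inside the DHCP namespace
ip netns exec qdhcp-<network-id> curl -sv http://169.254.169.254/

# haproxy usually logs through syslog; the metadata agent has its own log
journalctl -t haproxy --since "1 hour ago"
tail -n 100 /var/log/neutron/metadata-agent.log
```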

> Message from Ignazio Cassano on 12.11.2018 at 19:49:
> 
> Hi All,
> I manually upgraded my CentOS 7 OpenStack from Ocata to Pike.
> All worked fine.
> Then I upgraded from Pike to Queens and instances stopped being able to reach 
> the metadata service on 169.254.169.254, getting error 500.
> I am using isolated metadata (true) in my DHCP config, and port 80 is 
> listening in the DHCP namespace.
> Please, can anyone help me?
> Regards
> Ignazio
> 

— 
Slawek Kaplonski
Senior software engineer
Red Hat




[Openstack-operators] [neutron] Automatically allow incoming DHCP traffic for networks which uses external dhcp server

2018-10-29 Thread Slawomir Kaplonski
Hi,

Some time ago an RFE was reported in Neutron to automatically allow incoming 
DHCP traffic to VMs [1].
Basically this can already be done today by adding a proper security group rule 
that allows such incoming traffic to the VM, but the idea of the RFE was to add 
a flag, e.g. "external_dhcp", to the network/subnet, and when this flag is set 
to True, to add such a firewall rule by default for each port.
This small RFE doesn't cover questions like how to ensure that the external 
DHCP server is aware of the IP addresses assigned to ports in Neutron's DB.
It's only about adding this one new flag to the subnet (or network) attributes 
instead of doing it "manually" with security groups.
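For reference, the manual workaround today would be a rule along these lines (the security group name is an example; UDP 68 is the DHCP client port):

```shell
# Allow DHCP replies from an external server to reach the VM
openstack security group rule create --ingress --ethertype IPv4 \
    --protocol udp --dst-port 68:68 my-secgroup
```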

And now the question to you is: would you be interested in such a new feature? 
Currently we have had only one request and we are not sure it is worth 
implementing. But if there is more interest, we can revive this RFE.

[1] https://bugs.launchpad.net/neutron/+bug/1785213

— 
Slawek Kaplonski
Senior software engineer
Red Hat




Re: [Openstack-operators] [openstack-dev] [Openstack-sigs] [all] Naming the T release of OpenStack

2018-10-18 Thread Slawomir Kaplonski


> Message from Remo Mattei on 18.10.2018 at 19:08:
> 
> Michal, that will never work; it's 11 characters long

A shorter option could be OpenStack Trouble ;)
> 
> 
>  
> 
>> On Oct 18, 2018, at 09:43, Eric Fried  wrote:
>> 
>> Sorry, I'm opposed to this idea.
>> 
>> I admit I don't understand the political framework, nor have I read the
>> governing documents beyond [1], but that document makes it clear that
>> this is supposed to be a community-wide vote.  Is it really legal for
>> the TC (or whoever has merge rights on [2]) to merge a patch that gives
>> that same body the power to take the decision out of the hands of the
>> community? So it's really an oligarchy that gives its constituency the
>> illusion of democracy until something comes up that it feels like not
>> having a vote on? The fact that it's something relatively "unimportant"
>> (this time) is not a comfort.
>> 
>> Not that I think the TC would necessarily move forward with [2] in the
>> face of substantial opposition from non-TC "cores" or whatever.
>> 
>> I will vote enthusiastically for "Train". But a vote it should be.
>> 
>> -efried
>> 
>> [1] https://governance.openstack.org/tc/reference/release-naming.html
>> [2] https://review.openstack.org/#/c/611511/
>> 
>> On 10/18/2018 10:52 AM, arkady.kanev...@dell.com wrote:
>>> +1 for the poll.
>>> 
>>> Let’s follow well established process.
>>> 
>>> If we want to add Train as one of the options for the name I am OK with it.
>>> 
>>>  
>>> 
>>> *From:* Jonathan Mills 
>>> *Sent:* Thursday, October 18, 2018 10:49 AM
>>> *To:* openstack-s...@lists.openstack.org
>>> *Subject:* Re: [Openstack-sigs] [all] Naming the T release of OpenStack
>>> 
>>>  
>>> 
>>> [EXTERNAL EMAIL]
>>> Please report any suspicious attachments, links, or requests for
>>> sensitive information.
>>> 
>>> +1 for just having a poll
>>> 
>>>  
>>> 
>>> On Thu, Oct 18, 2018 at 11:39 AM David Medberry wrote:
>>> 
>>>I'm fine with Train but I'm also fine with just adding it to the
>>>list and voting on it. It will win.
>>> 
>>> 
>>> 
>>>Also, for those not familiar with the debian/ubuntu command "sl",
>>>now is the time to become so.
>>> 
>>> 
>>> 
>>>apt install sl
>>> 
>>>sl -Flea #ftw
>>> 
>>> 
>>> 
>>>On Thu, Oct 18, 2018 at 12:35 AM Tony Breeds wrote:
>>> 
>>>Hello all,
>>>As per [1], the nomination period for names for the T release has
>>>now closed (actually 3 days ago, sorry).  The nominated names and any
>>>qualifying remarks can be seen at [2].
>>> 
>>>Proposed Names
>>> * Tarryall
>>> * Teakettle
>>> * Teller
>>> * Telluride
>>> * Thomas
>>> * Thornton
>>> * Tiger
>>> * Tincup
>>> * Timnath
>>> * Timber
>>> * Tiny Town
>>> * Torreys
>>> * Trail
>>> * Trinidad
>>> * Treasure
>>> * Troublesome
>>> * Trussville
>>> * Turret
>>> * Tyrone
>>> 
>>>Proposed Names that do not meet the criteria
>>> * Train
>>> 
>>>However I'd like to suggest we skip the CIVS poll and select
>>>'Train' as
>>>the release name by TC resolution[3].  My thinking for this is
>>> 
>>> * It's fun and celebrates a humorous moment in our community
>>> * As a developer I've heard the T release called Train for quite
>>>   some time, and it was used often at the PTG[4].
>>> * As the *next* PTG is also in Colorado we can still choose a
>>>   geographic based name for U[5]
>>> * If train causes a problem for trademark reasons then we can
>>>always
>>>   run the poll
>>> 
>>>I'll leave[3] for marked -W for a week for discussion to happen
>>>before the
>>>TC can consider / vote on it.
>>> 
>>>Yours Tony.
>>> 
>>>[1]
>>>
>>> http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html
>>>[2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals
>>>[3]
>>>
>>> https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53
>>>[4] https://twitter.com/vkmc/status/1040321043959754752
>>>[5]
>>>https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z
>>>
>>> 
>>>___
>>>openstack-sigs mailing list
>>>openstack-s...@lists.openstack.org
>>>
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
>>> 

Re: [Openstack-operators] Best kernel options for openvswitch on network nodes on a large setup

2018-09-28 Thread Slawomir Kaplonski
Hi,

What versions of Neutron and ovsdbapp are you using? IIRC there was an issue 
like this somewhere around the Pike release; we saw it quite often in 
functional tests. But with a newer ovsdbapp version I think the problem was 
solved. Maybe try a newer version of ovsdbapp and check whether it behaves 
better.
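To check the versions in use, and optionally to relax the probe timer that produces those disconnect messages (the 30-second value below is only an example; newer Neutron releases also expose an of_inactivity_probe option in the agent's [ovs] config section):

```shell
# Versions in use
pip show ovsdbapp | grep Version
neutron-openvswitch-agent --version

# Raise the OpenFlow controller inactivity probe (value in milliseconds)
ovs-vsctl set controller br-int inactivity_probe=30000
```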

> Message from Jean-Philippe Méthot on 27.09.2018 at 23:05:
> 
> I got some answers from the openvswitch mailing list, essentially indicating 
> the issue is in the connection between neutron-openvswitch-agent and ovs.
> 
> Here’s an output of ovs-vsctl list controller:
> 
> _uuid   : ff2dca74-9628-43c8-b89c-8d2f1242dd3f
> connection_mode : out-of-band
> controller_burst_limit: []
> controller_rate_limit: []
> enable_async_messages: []
> external_ids: {}
> inactivity_probe: []
> is_connected: false
> local_gateway   : []
> local_ip: []
> local_netmask   : []
> max_backoff : []
> other_config: {}
> role: other
> status  : {last_error="Connection timed out", 
> sec_since_connect="22", sec_since_disconnect="1", state=BACKOFF}
> target  : "tcp:127.0.0.1:6633"
> 
> So OVS is still working but the connection between neutron-openvswitch-agent 
> and OVS gets interrupted somehow. It may also be linked to the HA vrrp 
> switching host at random as the connection between both network nodes get 
> severed. We also see SSH lagging momentarily. I’m starting to think that a 
> limit of some kind in linux is reached, preventing connections from 
> happening. However, I don’t think it’s max open file since the number of open 
> files is nowhere close to what I’ve set it.
> 
> Ideas?
>   
> Jean-Philippe Méthot
> Openstack system administrator
> Administrateur système Openstack
> PlanetHoster inc.
> 
> 
> 
> 
>> On 26 Sep 2018 at 15:16, Jean-Philippe Méthot wrote:
>> 
>> Yes, I notice that every time that message appears, at least a few packets 
>> get dropped and some of our instances pop up in nagios, even though they are 
>> reachable 1 or 2 seconds after. It’s really causing us some issues as we 
>> can’t ensure proper network quality for our customers. Have you noticed the 
>> same?
>> 
>> By that point I think it may be best to contact openvswitch directly since 
>> it seems to be an issue with their component. I am about to do that and hope 
>> I don’t get sent back to the openstack mailing list. I would really like to 
>> know what this probe is and why it disconnects constantly under load.
>> 
>> Jean-Philippe Méthot
>> Openstack system administrator
>> Administrateur système Openstack
>> PlanetHoster inc.
>> 
>> 
>> 
>> 
>>> On 26 Sep 2018 at 11:48, Simon Leinen wrote:
>>> 
>>> Jean-Philippe Méthot writes:
 This particular message makes it sound as if openvswitch is getting 
 overloaded.
 Sep 23 03:54:08 network1 ovsdb-server: 
 ovs|01253|reconnect|ERR|tcp:127.0.0.1:50814: no response to inactivity 
 probe after 5.01 seconds, disconnecting
>>> 
>>> We get these as well :-(
>>> 
 A lot of those keep appearing, though openvswitch always reconnects almost
 instantly. I’ve done some research on that particular
 message, but it didn’t give me anything I can use to fix it.
>>> 
>>> Would be interested in solutions as well.  But I'm sceptical whether
>>> kernel settings can help here, because the timeout/slowness seems to be
>>> located in the user-space/control-plane parts of Open vSwitch,
>>> i.e. OVSDB.
>>> -- 
>>> Simon.
>>> 
 Jean-Philippe Méthot
 Openstack system administrator
 Administrateur système Openstack
 PlanetHoster inc.
>>> 
 On 25 Sep 2018 at 19:37, Erik McCormick wrote:
>>> 
 Are you getting any particular log messages that lead you to conclude your 
 issue lies with OVS? I've hit lots of kernel limits under those conditions 
 before OVS itself ever noticed. Anything of interest in dmesg, the journal, 
 or the neutron logs?
>>> 
 On Tue, Sep 25, 2018, 7:27 PM Jean-Philippe Méthot 
  wrote:
>>> 
 Hi,
>>> 
 Are there some recommendations regarding kernel settings configuration for 
 openvswitch? We’ve just been hit by what we believe may be an attack of 
 some kind we
 have never seen before and we’re wondering if there’s a way to optimize 
 our network nodes kernel for openvswitch operation and thus minimize the 
 impact of such an
 attack, or whatever it was.
>>> 
 Best regards,
>>> 
 Jean-Philippe Méthot
 Openstack system administrator
 Administrateur système Openstack
 PlanetHoster inc.
>>> 

Re: [Openstack-operators] [openstack-dev] [openstack-ansible] Change in our IRC channel

2018-08-01 Thread Slawomir Kaplonski
Maybe such a change should be considered globally, for all OpenStack channels?

> Message from jean-phili...@evrard.me on 01.08.2018 at 10:13:
> 
> Hello everyone,
> 
> Due to a continuously increasing spam [0] on our IRC channels, I have decided 
> to make our channel (#openstack-ansible on freenode) only joinable by 
> Freenode's nickserv registered users.
> 
> I am sorry for the inconvenience, as it will now be harder to reach us (but 
> it's not that hard to register! [1]). The conversations will be easier to 
> follow though.
> 
> You can still contact us on the mailing lists too.
> 
> Regards,
> Jean-Philippe Evrard (evrardjp)
> 
> [0]: https://freenode.net/news/spambot-attack
> [1]: https://freenode.net/kb/answer/registration
> 
> 

— 
Slawek Kaplonski
Senior software engineer
Red Hat




Re: [Openstack-operators] [openstack-dev] [ovs] [neutron] openvswitch flows firewall driver

2018-06-11 Thread Slawomir Kaplonski
Hi,

I’m not sure about Queens, but recently with [1] we switched the default 
security group driver in devstack to "openvswitch".
For at least a month we have had a scenario gate job with this SG driver 
running as voting and gating.
Now that the devstack default driver is openvswitch, it is exercised in many 
Neutron jobs.

[1] https://review.openstack.org/#/c/568297/
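For anyone who wants to try it, enabling the driver is a one-line change in the L2 agent config (the file path below is the usual default; verify it for your distro):

```ini
# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[securitygroup]
firewall_driver = openvswitch
```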

> Message from Tobias Urdin on 11.06.2018 at 05:20:
> 
> Hello everybody,
> I'm cross-posting this with operators list.
> 
> The openvswitch flows-based stateful firewall driver which uses the
> conntrack support in Linux kernel >= 4.3 (iirc) has been
> marked as experimental for several releases now, is there any
> information about flaws in this and why it should not be used in production?
> 
> It's still marked as experimental or missing documentation in the
> networking guide [1].
> 
> And to operators; is anybody running the OVS stateful firewall in
> production? (firewall_driver = openvswitch)
> 
> Appreciate any feedback :)
> Best regards
> 
> [1] https://docs.openstack.org/neutron/queens/admin/config-ovsfwdriver.html
> 

— 
Slawek Kaplonski
Senior software engineer
Red Hat




Re: [Openstack-operators] attaching network cards to VMs taking a very long time

2018-06-03 Thread Slawomir Kaplonski
Hi,

> Message from Matt Riedemann on 03.06.2018 at 16:54:
> 
> On 6/2/2018 1:37 AM, Chris Apsey wrote:
>> This is great.  I would even go so far as to say the install docs should be 
>> updated to capture this as the default; as far as I know there is no 
>> negative impact when running in daemon mode, even on very small deployments. 
>>  I would imagine that there are operators out there who have run into this 
>> issue but didn't know how to work through it - making stuff like this less 
>> painful is key to breaking the 'openstack is hard' stigma.
> 
> I think changing the default on the root_helper_daemon option is a good idea 
> if everyone is setting that anyway. There are some comments in the code next 
> to the option that make me wonder if there are edge cases where it might not 
> be a good idea, but I don't really know the details, someone from the neutron 
> team that knows more about it would have to speak up.
> 
> Also, I wonder if converting to privsep in the neutron agent would eliminate 
> the need for this option altogether and still gain the performance benefits.

Converting the L2 agents to privsep is an ongoing process, but it’s very slow. 
A switch of ip_lib to privsep is in progress: 
https://bugs.launchpad.net/neutron/+bug/1492714
But to completely drop rootwrap, tc_lib (for QoS), the iptables module (for 
security groups), and probably some other modules would also have to switch to 
privsep. So I would not count on it being done soon :)
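For completeness, the daemon-mode setting discussed above is (the paths are the common defaults; verify them for your distro):

```ini
# e.g. /etc/neutron/plugins/ml2/openvswitch_agent.ini and l3_agent.ini
[agent]
root_helper_daemon = sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
```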

> 
> -- 
> 
> Thanks,
> 
> Matt
> 

— 
Slawek Kaplonski
Senior software engineer
Red Hat

