Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-07-08 Thread Miguel Angel Ajo Pelayo
I'd like to bring the attention back to this topic:

Mark, could you reconsider removing the -2 here?

https://review.openstack.org/#/c/93889/

Your reason was: 
"""Until the upstream blueprint 
   (https://blueprints.launchpad.net/oslo/+spec/rootwrap-daemon-mode )
   merges in Oslo it does not make sense to track this in Neutron.
"""

Given the new deadlines for the specs, I believe there is no
reason to finish the oslo side in a rush, and it looks like it's
going to be available during this cycle.

I believe it's something good to have available
during the Juno cycle, as the current overhead is a very serious
performance penalty.
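For reference, the daemon-mode idea in that blueprint is roughly a persistent privileged process that receives already-filtered commands over a UNIX socket, so each call skips the Python-interpreter + sudo startup cost. A minimal, hypothetical sketch (the socket path, JSON protocol, and function names are invented here; this is not oslo.rootwrap's actual API):

```python
# Hypothetical minimal sketch of the rootwrap-daemon idea: a persistent
# process, started once (as root in real use), receives already-filtered
# command lines over a UNIX socket and runs them, so each call skips the
# Python-interpreter + sudo startup cost.
import json
import os
import socket
import subprocess

SOCK_PATH = "/tmp/rootwrap-demo.sock"  # illustrative path

def serve_once(sock_path=SOCK_PATH):
    """Accept one connection, run one command, reply. A real daemon loops."""
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    if os.path.exists(sock_path):
        os.unlink(sock_path)
    srv.bind(sock_path)
    srv.listen(1)
    conn, _ = srv.accept()
    cmd = json.loads(conn.makefile("r").readline())
    result = subprocess.run(cmd, capture_output=True, text=True)
    conn.sendall(json.dumps({"rc": result.returncode,
                             "stdout": result.stdout}).encode() + b"\n")
    conn.close()
    srv.close()

def run(cmd, sock_path=SOCK_PATH):
    """Client side: ship a command to the daemon, get rc and stdout back."""
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    cli.connect(sock_path)
    cli.sendall(json.dumps(cmd).encode() + b"\n")
    reply = json.loads(cli.makefile("r").readline())
    cli.close()
    return reply
```

A client call like `run(["ip", "a"])` then costs one socket round-trip instead of a fresh interpreter plus sudo, which is the effect the benchmark numbers later in this thread show.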

Best regards,
Miguel Ángel.


- Original Message -
> 
> 
> On 03/24/2014 07:23 PM, Yuriy Taraday wrote:
> > On Mon, Mar 24, 2014 at 9:51 PM, Carl Baldwin wrote:
> >
> > Don't discard the first number so quickly.
> >
> > For example, say we use a timeout mechanism for the daemon running
> > inside namespaces to avoid using too much memory with a daemon in
> > every namespace.  That means we'll pay the startup cost repeatedly but
> > in a way that amortizes it down.
> >
> > Even if it really is a one-time cost, if you collect enough
> > samples the outlier won't have much effect on the mean anyway.
> >
> >
> > It actually affects all numbers but the mean (e.g. the deviation is gross).
> 
> 
> Carl is right; I thought of it later in the evening. Once the timeout
> mechanism is in place we must consider that number.
> 
> >
> > I'd say keep it in there.
> 
> +1 I agree.
> 
> >
> > Carl
> >
> > On Mon, Mar 24, 2014 at 2:04 AM, Miguel Angel Ajo
> > <majop...@redhat.com> wrote:
> >  >
> >  >
> >  > Is it the first call starting the daemon / loading config files,
> >  > etc.?
> >  >
> >  > Maybe that first sample should be discarded from the mean for
> > all processes
> >  > (it's an outlier value).
> >
> >
> > I thought about cutting max from counting deviation and/or showing the
> > second-max value. But I don't think it matters much and there aren't many
> > people here analyzing deviation. It's pretty clear what happens
> > with the longest run in this case and I think we can let it be as is.
> > It's the mean value that matters most here.
> 
> Yes, I agree, but as Carl said, with timeouts in place in a practical
> environment, the mean will be shifted too.
> 
> Timeouts are needed within namespaces, to avoid excessive memory
> consumption. But it could be OK as we'd be cutting out the ip netns
> delay.  Or, if we find a simpler "setns" mechanism that is enough for our
> needs, maybe we don't need to care about short timeouts in ip netns
> at all...
> 
> 
> Best,
> Miguel Ángel.
> 
> 
> >
> > --
> >
> > Kind regards, Yuriy.
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
>



Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-25 Thread Miguel Angel Ajo



On 03/24/2014 07:23 PM, Yuriy Taraday wrote:

On Mon, Mar 24, 2014 at 9:51 PM, Carl Baldwin <c...@ecbaldwin.net> wrote:

Don't discard the first number so quickly.

For example, say we use a timeout mechanism for the daemon running
inside namespaces to avoid using too much memory with a daemon in
every namespace.  That means we'll pay the startup cost repeatedly but
in a way that amortizes it down.

Even if it really is a one-time cost, if you collect enough
samples the outlier won't have much effect on the mean anyway.


It actually affects all numbers but the mean (e.g. the deviation is gross).



Carl is right; I thought of it later in the evening. Once the timeout
mechanism is in place we must consider that number.



I'd say keep it in there.


+1 I agree.



Carl

On Mon, Mar 24, 2014 at 2:04 AM, Miguel Angel Ajo
<majop...@redhat.com> wrote:
 >
 >
 > Is it the first call starting the daemon / loading config files, etc.?
 >
 > Maybe that first sample should be discarded from the mean for
all processes
 > (it's an outlier value).


I thought about cutting max from counting deviation and/or showing the
second-max value. But I don't think it matters much and there aren't many
people here analyzing deviation. It's pretty clear what happens
with the longest run in this case and I think we can let it be as is.
It's the mean value that matters most here.


Yes, I agree, but as Carl said, with timeouts in place in a practical
environment, the mean will be shifted too.

Timeouts are needed within namespaces, to avoid excessive memory
consumption. But it could be OK as we'd be cutting out the ip netns
delay.  Or, if we find a simpler "setns" mechanism that is enough for our
needs, maybe we don't need to care about short timeouts in ip netns
at all...


Best,
Miguel Ángel.




--

Kind regards, Yuriy.







Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-24 Thread Carl Baldwin
I was thinking that we could document the information about sudo and
iproute2 patches with the upcoming release.  How would I go about
doing this?  Is there any section in our documentation about OS level
tweaks or requirements such as these that could present this
information as part of the release?

Carl

On Wed, Mar 5, 2014 at 9:58 AM, Rick Jones  wrote:
> On 03/05/2014 06:42 AM, Miguel Angel Ajo wrote:
>>
>>
>>  Hello,
>>
>>  Recently, I found a serious issue about network-nodes startup time,
>> neutron-rootwrap eats a lot of cpu cycles, much more than the processes
>> it's wrapping itself.
>>
>>  On a database with 1 public network, 192 private networks, 192
>> routers, and 192 nano VMs, with OVS plugin:
>>
>>
>> Network node setup time (rootwrap): 24 minutes
>> Network node setup time (sudo): 10 minutes
>
>
> I've not been looking at rootwrap, but have been looking at sudo and ip.
> (Using some scripts which create "fake routers" so I could look without any
> of this icky OpenStack stuff in the way :) ) The Ubuntu 12.04 versions of
> each at least will enumerate all the interfaces on the system, even though
> they don't need to.
>
> There was already an upstream change to 'ip' that eliminates the unnecessary
> enumeration.  In the last few weeks an enhancement went into the upstream
> sudo that allows one to configure sudo to not do the same thing.   Down in
> the low(ish) three figures of interfaces it may not be a Big Deal (tm) but
> as one starts to go beyond that...
>
> commit f0124b0f0aa0e5b9288114eb8e6ff9b4f8c33ec8
> Author: Stephen Hemminger 
> Date:   Thu Mar 28 15:17:47 2013 -0700
>
> ip: remove unnecessary ll_init_map
>
> Don't call ll_init_map on modify operations
> Saves significant overhead with 1000's of devices.
>
> http://www.sudo.ws/pipermail/sudo-workers/2014-January/000826.html
>
> Whether your environment already has the 'ip' change I don't know, but odds
> are probably pretty good that it doesn't have the sudo enhancement.
>
>
>> That's the time since you reboot a network node, until all namespaces
>> and services are restored.
>
>
> So, that includes the time for the system to go down and reboot, not just
> the time it takes to rebuild once rebuilding starts?
>
>
> >> If you see appendix "1", this extra 14 min overhead matches the
> >> fact that rootwrap needs 0.3 s to start and launch a system command
> >> (once filtered).
>>
>>  14minutes =  840 s.
>>  (840s. / 192 resources)/0.3s ~= 15 operations /
>> resource(qdhcp+qrouter) (iptables, ovs port creation & tagging, starting
>> child processes, etc..)
>>
>> The overhead comes from python startup time + rootwrap loading.
>
>
> How much of the time is python startup time?  I assume that would be all the
> "find this lib, find that lib" stuff one sees in a system call trace?  I saw
> a boatload of that at one point but didn't quite feel like wading into that
> at the time.
>
>
> >> I suppose that rootwrap was designed for a lower number of system
> >> calls (nova?).
>
>
> And/or a smaller environment perhaps.
>
>
> >> And, I understand what rootwrap provides, a level of filtering that
> >> sudo cannot offer. But it raises some questions:
> >>
> >> 1) Is anyone actually using rootwrap in production?
> >>
> >> 2) What alternatives can we think of to improve this situation?
> >>
> >> 0) already being done: coalescing system calls. But I'm unsure
> >> that's enough. (If we coalesce 15 calls to 3 on this system we get:
> >> 192*3*0.3/60 ~= 3 minutes overhead on a 10 min operation.)
>
>
> It may not be sufficient, but it is (IMO) certainly necessary.  It will make
> any work that minimizes or eliminates the overhead of rootwrap look that
> much better.
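For what it's worth, the back-of-the-envelope figures quoted above check out (all values taken from the thread):

```python
# 192 resources, ~15 rootwrap invocations each, ~0.3 s overhead per
# invocation, and the projected effect of coalescing 15 calls down to 3.
resources, calls_per_resource, overhead_s = 192, 15, 0.3

current = resources * calls_per_resource * overhead_s / 60   # minutes
coalesced = resources * 3 * overhead_s / 60                  # 15 calls -> 3

print(round(current, 2))    # 14.4 -- matches the ~14 min gap
print(round(coalesced, 2))  # 2.88 -- the ~3 min residual overhead
```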
>
>
> >> a) Rewriting rules into sudo (to the extent that it's possible), and
> >> living with that.
> >> b) How secure is neutron against command injection at that point? How
> >> much is user input filtered on the API calls?
> >> c) Even if "b" is OK, I suppose that if the DB gets compromised,
> >> that could lead to command injection.
>>
>> d) Re-writing rootwrap into C (it's 600 python LOCs now).
>>
>> e) Doing the command filtering at neutron-side, as a library and
>> live with sudo with simple filtering. (we kill the python/rootwrap
>> startup overhead).
>>
> >> 3) I also find 10 minutes a long time to set up 192 networks/basic tenant
> >> structures; I wonder if that time could be reduced by converting
> >> system process calls into system library calls (I know we don't have
> >> libraries for iproute, iptables?, and many other things... but it's a
> >> problem that's probably worth looking at.)
>
>
> Certainly going back and forth creating short-lived processes is at least
> anti-social and perhaps ever so slightly upsetting to the process scheduler.
> Particularly "at scale."  The/a problem is though that the Linux networking
> folks have been somewhat reticent about creating libraries (at least any
> that they would end-up supporting) because they have

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-24 Thread Yuriy Taraday
On Mon, Mar 24, 2014 at 9:51 PM, Carl Baldwin  wrote:

> Don't discard the first number so quickly.
>
> For example, say we use a timeout mechanism for the daemon running
> inside namespaces to avoid using too much memory with a daemon in
> every namespace.  That means we'll pay the startup cost repeatedly but
> in a way that amortizes it down.
>
> Even if it really is a one-time cost, if you collect enough
> samples the outlier won't have much effect on the mean anyway.
>

It actually affects all numbers but the mean (e.g. the deviation is gross).


> I'd say keep it in there.
>
> Carl
>
> On Mon, Mar 24, 2014 at 2:04 AM, Miguel Angel Ajo 
> wrote:
> >
> >
> > Is it the first call starting the daemon / loading config files, etc.?
> >
> > Maybe that first sample should be discarded from the mean for all
> processes
> > (it's an outlier value).
>

I thought about cutting max from counting deviation and/or showing the
second-max value. But I don't think it matters much and there aren't many
people here analyzing deviation. It's pretty clear what happens with
the longest run in this case and I think we can let it be as is. It's
the mean value that matters most here.
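Yuriy's point can be made concrete with a toy sample (numbers invented, loosely shaped like the daemon.run timings elsewhere in the thread): one cold-start outlier barely moves the mean of 100 runs but inflates the deviation badly.

```python
# One startup outlier barely moves the mean of a large sample but
# inflates the standard deviation badly. Values are made up.
import statistics

samples = [9.0] * 99 + [172.0]   # 99 warm runs plus one cold-start outlier

mean = statistics.mean(samples)
dev = statistics.stdev(samples)
print(round(mean, 2))   # 10.63 -- still close to the warm-run value
print(round(dev, 2))    # ~16  -- "deviation is gross"
```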

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-24 Thread Carl Baldwin
Don't discard the first number so quickly.

For example, say we use a timeout mechanism for the daemon running
inside namespaces to avoid using too much memory with a daemon in
every namespace.  That means we'll pay the startup cost repeatedly but
in a way that amortizes it down.

Even if it really is a one-time cost, if you collect enough
samples the outlier won't have much effect on the mean anyway.
I'd say keep it in there.
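To make the amortization argument concrete, here is a toy cost model (all numbers invented) of a per-namespace daemon with an idle timeout: every expiry means paying the startup cost again on the next call, yet calls arriving closer together than the timeout pay it only once.

```python
# Toy cost model (all numbers invented) for a per-namespace daemon with an
# idle timeout: the startup cost recurs after every idle expiry, but is
# amortized across the calls in between.
def total_cost(gaps, start_cost=0.3, call_cost=0.01, idle_timeout=30.0):
    """Total seconds for a call sequence separated by the given idle gaps."""
    cost = start_cost + call_cost          # first call starts the daemon
    for gap in gaps:
        if gap > idle_timeout:             # daemon timed out; pay startup again
            cost += start_cost
        cost += call_cost
    return cost
```

Under this model a burst of calls within the timeout window pays the 0.3 s once, while widely spaced calls pay it every time, which is why the first (startup) sample belongs in the statistics once timeouts exist.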

Carl

On Mon, Mar 24, 2014 at 2:04 AM, Miguel Angel Ajo  wrote:
>
>
> Is it the first call starting the daemon / loading config files, etc.?
>
> Maybe that first sample should be discarded from the mean for all processes
> (it's an outlier value).
>
>
>
>
> On 03/21/2014 05:32 PM, Yuriy Taraday wrote:
>>
>> On Fri, Mar 21, 2014 at 2:01 PM, Thierry Carrez wrote:
>>
>> Yuriy Taraday wrote:
>>  > Benchmark included showed on my machine these numbers (average
>> over 100
>>  > iterations):
>>  >
>>  > Running 'ip a':
>>  >   ip a :   4.565ms
>>  >  sudo ip a :  13.744ms
>>  >sudo rootwrap conf ip a : 102.571ms
>>  > daemon.run('ip a') :   8.973ms
>>  > Running 'ip netns exec bench_ns ip a':
>>  >   sudo ip netns exec bench_ns ip a : 162.098ms
>>  > sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
>>  >  daemon.run('ip netns exec bench_ns ip a') : 129.876ms
>>  >
>>  > So it looks like running daemon is actually faster than running
>> "sudo".
>>
>> That's pretty good! However I fear that the extremely simplistic filter
>> rule file you fed to the benchmark is affecting the numbers. Could you
>> post results from a realistic setup (e.g. the same command, but with all
>> the filter files normally found on a devstack host)?
>>
>>
>> I don't have a devstack host at hand, but I gathered all filters from
>> Nova, Cinder and Neutron and got this:
>>  method  :min   avg   max   dev
>> ip a :   3.741ms   4.443ms   7.356ms 500.660us
>>sudo ip a :  11.165ms  13.739ms  32.326ms   2.643ms
>> sudo rootwrap conf ip a : 100.814ms 125.701ms 169.048ms  16.265ms
>>   daemon.run('ip a') :   6.032ms   8.895ms 172.287ms  16.521ms
>>
>> Then I switched back to one file and got:
>>  method  :min   avg   max   dev
>> ip a :   4.176ms   4.976ms  22.910ms   1.821ms
>>sudo ip a :  13.240ms  14.730ms  21.793ms   1.382ms
>> sudo rootwrap conf ip a :  79.834ms 104.586ms 145.070ms  15.063ms
>>   daemon.run('ip a') :   5.062ms   8.427ms 160.799ms  15.493ms
>>
>> There is a difference, but it looks like it's because of config file
>> parsing, not the filter application itself.
>>
>> --
>>
>> Kind regards, Yuriy.
>>
>>
>>
>



Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-24 Thread Miguel Angel Ajo



Is it the first call starting the daemon / loading config files, etc.?

Maybe that first sample should be discarded from the mean for all
processes (it's an outlier value).




On 03/21/2014 05:32 PM, Yuriy Taraday wrote:

On Fri, Mar 21, 2014 at 2:01 PM, Thierry Carrez <thie...@openstack.org> wrote:

Yuriy Taraday wrote:
 > Benchmark included showed on my machine these numbers (average
over 100
 > iterations):
 >
 > Running 'ip a':
 >   ip a :   4.565ms
 >  sudo ip a :  13.744ms
 >sudo rootwrap conf ip a : 102.571ms
 > daemon.run('ip a') :   8.973ms
 > Running 'ip netns exec bench_ns ip a':
 >   sudo ip netns exec bench_ns ip a : 162.098ms
 > sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
 >  daemon.run('ip netns exec bench_ns ip a') : 129.876ms
 >
 > So it looks like running daemon is actually faster than running
"sudo".

That's pretty good! However I fear that the extremely simplistic filter
rule file you fed to the benchmark is affecting the numbers. Could you post
results from a realistic setup (e.g. the same command, but with all the
filter files normally found on a devstack host)?


I don't have a devstack host at hand, but I gathered all filters from
Nova, Cinder and Neutron and got this:
 method  :min   avg   max   dev
ip a :   3.741ms   4.443ms   7.356ms 500.660us
   sudo ip a :  11.165ms  13.739ms  32.326ms   2.643ms
sudo rootwrap conf ip a : 100.814ms 125.701ms 169.048ms  16.265ms
  daemon.run('ip a') :   6.032ms   8.895ms 172.287ms  16.521ms

Then I switched back to one file and got:
 method  :min   avg   max   dev
ip a :   4.176ms   4.976ms  22.910ms   1.821ms
   sudo ip a :  13.240ms  14.730ms  21.793ms   1.382ms
sudo rootwrap conf ip a :  79.834ms 104.586ms 145.070ms  15.063ms
  daemon.run('ip a') :   5.062ms   8.427ms 160.799ms  15.493ms

There is a difference, but it looks like it's because of config file
parsing, not the filter application itself.

--

Kind regards, Yuriy.







Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-21 Thread Yuriy Taraday
On Fri, Mar 21, 2014 at 2:01 PM, Thierry Carrez wrote:

> Yuriy Taraday wrote:
> > Benchmark included showed on my machine these numbers (average over 100
> > iterations):
> >
> > Running 'ip a':
> >   ip a :   4.565ms
> >  sudo ip a :  13.744ms
> >sudo rootwrap conf ip a : 102.571ms
> > daemon.run('ip a') :   8.973ms
> > Running 'ip netns exec bench_ns ip a':
> >   sudo ip netns exec bench_ns ip a : 162.098ms
> > sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
> >  daemon.run('ip netns exec bench_ns ip a') : 129.876ms
> >
> > So it looks like running daemon is actually faster than running "sudo".
>
> That's pretty good! However I fear that the extremely simplistic filter
> rule file you fed to the benchmark is affecting the numbers. Could you post
> results from a realistic setup (e.g. the same command, but with all the
> filter files normally found on a devstack host)?
>

I don't have a devstack host at hand, but I gathered all filters from Nova,
Cinder and Neutron and got this:
method  :min   avg   max   dev
   ip a :   3.741ms   4.443ms   7.356ms 500.660us
  sudo ip a :  11.165ms  13.739ms  32.326ms   2.643ms
sudo rootwrap conf ip a : 100.814ms 125.701ms 169.048ms  16.265ms
 daemon.run('ip a') :   6.032ms   8.895ms 172.287ms  16.521ms

Then I switched back to one file and got:
method  :min   avg   max   dev
   ip a :   4.176ms   4.976ms  22.910ms   1.821ms
  sudo ip a :  13.240ms  14.730ms  21.793ms   1.382ms
sudo rootwrap conf ip a :  79.834ms 104.586ms 145.070ms  15.063ms
 daemon.run('ip a') :   5.062ms   8.427ms 160.799ms  15.493ms

There is a difference, but it looks like it's because of config file
parsing, not the filter application itself.
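For anyone wanting to reproduce numbers in this shape, the benchmark boils down to something like the following (a sketch, not Yuriy's actual script; the daemon.run variant is omitted since it comes from the proposed patch):

```python
# Sketch of the benchmark shape used in this thread: time a command over N
# iterations and report min/avg/max/dev in milliseconds.
import statistics
import subprocess
import time

def bench(cmd, iterations=100):
    samples = []
    for _ in range(iterations):
        t0 = time.perf_counter()
        subprocess.run(cmd, stdout=subprocess.DEVNULL,
                       stderr=subprocess.DEVNULL)
        samples.append((time.perf_counter() - t0) * 1000.0)  # milliseconds
    return {"min": min(samples),
            "avg": statistics.mean(samples),
            "max": max(samples),
            "dev": statistics.stdev(samples)}

# e.g. bench(["ip", "a"]), bench(["sudo", "ip", "a"]),
#      bench(["sudo", "rootwrap", "conf", "ip", "a"])
```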

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-21 Thread Thierry Carrez
Sean Dague wrote:
> Sounds great. One of the things I hope happens with this is a look at
> some places where rootwrap is used with such an open policy that it's
> completely moot. For instance the nova-cpu policy includes tee & dd with
> no arg limiting (which has been that way forever from my look in git
> annotate).
> 
> Which is basically game over.

n-cpu is not the only component where the use of rootwrap doesn't
actually provide additional security... I'll leave it as an exercise for
the reader to find the other ones :)

> So in the nova-cpu case I really think we should remove rootwrap as it's
> got to do so many things as root that being a limited user really isn't
> an option.

The original idea was to have the framework in place to address those
issues: notice abusive commands in filter definitions, and either find a
way to filter them in an efficient way (the way we addressed the kill
calls for example), or adapt the code so that it doesn't need such
commands (like, say, removing file injection altogether).

The trick is, despite multiple sessions on the subject (one at every
summit since the dawn of time) this big review/fix effort hasn't
magically happened :) In some cases we even regressed (the re-addition of a
blind 'cat' CommandFilter while we have a specific ReadFileFilter).

I still think we are in a better starting place forcing those calls
through inefficient rootwrap rules -- at least we know which those calls
are and we have the framework ready to help in further restricting them
(RegExpFilter anyone ?). But the issue is the current rootwrap gives a
false sense of security. People just add filter rules for their commands
and call their security work done. It's *not* done. It's a continuing
process to make sure you don't have insecure rules, improve them or
rewrite the code so that it doesn't need them. Most CommandFilter rules
can be abused, and they still represent something like 95% of the
filters :) I'm not sure how to better communicate that rootwrap is not
the end, it's just the beginning.

As a final note, the best solution is not "better rootwrap filters". The
best solution is solid design that doesn't require running anything as
root. So components without run_as_root calls should really stay that
way. And components with a couple of rootwrap rules should seriously
look into removing the need for them.
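To illustrate the difference Thierry is describing, compare a blanket CommandFilter with the more restrictive filter classes in a rootwrap `.filters` file (the entries below are illustrative examples, not taken from any shipped policy):

```ini
[Filters]
# Blanket filter: "cat <anything>" runs as root -- effectively game over
cat_all: CommandFilter, cat, root

# Restrictive alternatives: only one specific file may be read...
read_leases: ReadFileFilter, /var/lib/neutron/dhcp/leases

# ...or each argument must match a regular expression
sysctl_rp: RegExpFilter, sysctl, root, sysctl, -w, net\.ipv4\.conf\..*\.rp_filter=[01]
```

The first rule is the "false sense of security" case: it appears in the filter list, but constrains nothing.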

Cheers,

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-21 Thread Sean Dague
On 03/21/2014 05:42 AM, Thierry Carrez wrote:
> Yuriy Taraday wrote:
>> On Thu, Mar 20, 2014 at 5:41 PM, Miguel Angel Ajo wrote:
>>> If this is coupled to neutron in a way that it can be accepted for
>>> Icehouse (we're killing a performance bug), or that it can at least
>>> be backported, you'd be covering both the short- & long-term needs.
>>
>> As I said at the meeting, I plan to propose a change request to Neutron
>> with some integration with this patch.
>> I'm also going to engage people involved in rootwrap about my change
>> request.
> 
> Temporarily removing my rootwrap maintainer hat and putting on my
> OpenStack release manager hat: as you probably know we are well into
> Icehouse feature freeze at this point, and there is no way I would
> consider such a significant change for inclusion in the Icehouse release
> at this point.
> 
> The work on both the daemon and the shedskin stuff is very promising,
> but the nature of this beast makes it necessary to undergo a lot of
> testing and security audits before it can be accepted. Not exactly
> something I'd consider 4 weeks before a final release.
> 
> Frankly, this issue has been on the table forever and this is just the
> wrong timing to rush a new implementation to fix it.
> 
> I filed a rootwrap session for the Juno Design summit -- ideally we'll
> have various solutions ready by then and we'd make the final choice for
> early integration in Juno, leaving plenty of time to catch the weird
> regressions (or security holes) that it may cause.

Sounds great. One of the things I hope happens with this is a look at
some places where rootwrap is used with such an open policy that it's
completely moot. For instance the nova-cpu policy includes tee & dd with
no arg limiting (which has been that way forever from my look in git
annotate).

Which is basically game over.

So in the nova-cpu case I really think we should remove rootwrap as it's
got to do so many things as root that being a limited user really isn't
an option.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-21 Thread Miguel Angel Ajo



On 03/21/2014 11:01 AM, Thierry Carrez wrote:

Yuriy Taraday wrote:

Benchmark included showed on my machine these numbers (average over 100
iterations):

Running 'ip a':
   ip a :   4.565ms
  sudo ip a :  13.744ms
sudo rootwrap conf ip a : 102.571ms
 daemon.run('ip a') :   8.973ms
Running 'ip netns exec bench_ns ip a':
   sudo ip netns exec bench_ns ip a : 162.098ms
 sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
  daemon.run('ip netns exec bench_ns ip a') : 129.876ms

So it looks like running daemon is actually faster than running "sudo".


That's pretty good! However I fear that the extremely simplistic filter
rule file you fed to the benchmark is affecting the numbers. Could you post
results from a realistic setup (e.g. the same command, but with all the
filter files normally found on a devstack host)?

Thanks,




That's a good point for a fair comparison with the C-translated one.
I ran it with all the rootwrap filters provided in Havana, but I will
rerun the benchmark if that has changed.

Anyway, I don't think there should be a huge difference; the worst
part was the startup.



Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-21 Thread Miguel Angel Ajo



On 03/21/2014 10:42 AM, Thierry Carrez wrote:

Yuriy Taraday wrote:

On Thu, Mar 20, 2014 at 5:41 PM, Miguel Angel Ajo <majop...@redhat.com> wrote:

If this is coupled to neutron in a way that it can be accepted for
 Icehouse (we're killing a performance bug), or that it can at least
 be backported, you'd be covering both the short- & long-term needs.


As I said at the meeting, I plan to propose a change request to Neutron
with some integration with this patch.
I'm also going to engage people involved in rootwrap about my change
request.


Temporarily removing my rootwrap maintainer hat and putting on my
OpenStack release manager hat: as you probably know we are well into
Icehouse feature freeze at this point, and there is no way I would
consider such a significant change for inclusion in the Icehouse release
at this point.

The work on both the daemon and the shedskin stuff is very promising,
but the nature of this beast makes it necessary to undergo a lot of
testing and security audits before it can be accepted. Not exactly
something I'd consider 4 weeks before a final release.

Frankly, this issue has been on the table forever and this is just the
wrong timing to rush a new implementation to fix it.



Thierry, that sounds reasonable to me. Even though this is a bug we're
trying to kill (and not a new feature), the regressions and security
problems it could bring totally justify that reasoning.

I'd be satisfied if the implementation Yuriy is preparing could be done
in a way that:

1) The traditional sudo/rootwrap functionality is preserved.
2) It can be backported to Icehouse/Havana if it works as we expect
and the security sounds reasonable.


1: would allow falling back to a C/translated implementation, which
   looks like it will be more expensive to develop & maintain for the
   same/very similar performance results.

2: would fix our short-term problem with Icehouse.


I filed a rootwrap session for the Juno Design summit -- ideally we'll
have various solutions ready by then and we'd make the final choice for
early integration in Juno, leaving plenty of time to catch the weird
regressions (or security holes) that it may cause.



Best,
Miguel Ángel.



Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-21 Thread Thierry Carrez
Yuriy Taraday wrote:
> Benchmark included showed on my machine these numbers (average over 100
> iterations):
> 
> Running 'ip a':
>   ip a :   4.565ms
>  sudo ip a :  13.744ms
>sudo rootwrap conf ip a : 102.571ms
> daemon.run('ip a') :   8.973ms
> Running 'ip netns exec bench_ns ip a':
>   sudo ip netns exec bench_ns ip a : 162.098ms
> sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
>  daemon.run('ip netns exec bench_ns ip a') : 129.876ms
> 
> So it looks like running daemon is actually faster than running "sudo".

That's pretty good! However I fear that the extremely simplistic filter
rule file you fed to the benchmark is affecting the numbers. Could you post
results from a realistic setup (e.g. the same command, but with all the
filter files normally found on a devstack host)?

Thanks,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-21 Thread Thierry Carrez
Yuriy Taraday wrote:
> On Thu, Mar 20, 2014 at 5:41 PM, Miguel Angel Ajo wrote:
>> If this is coupled to neutron in a way that it can be accepted for
>> Icehouse (we're killing a performance bug), or that it can at least
>> be backported, you'd be covering both the short- & long-term needs.
> 
> As I said at the meeting, I plan to propose a change request to Neutron
> with some integration with this patch.
> I'm also going to engage people involved in rootwrap about my change
> request.

Temporarily removing my rootwrap maintainer hat and putting on my
OpenStack release manager hat: as you probably know we are well into
Icehouse feature freeze at this point, and there is no way I would
consider such a significant change for inclusion in the Icehouse release
at this point.

The work on both the daemon and the shedskin stuff is very promising,
but the nature of this beast makes it necessary to undergo a lot of
testing and security audits before it can be accepted. Not exactly
something I'd consider 4 weeks before a final release.

Frankly, this issue has been on the table forever and this is just the
wrong timing to rush a new implementation to fix it.

I filed a rootwrap session for the Juno Design summit -- ideally we'll
have various solutions ready by then and we'd make the final choice for
early integration in Juno, leaving plenty of time to catch the weird
regressions (or security holes) that it may cause.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Yuriy Taraday
On Thu, Mar 20, 2014 at 8:23 PM, Rick Jones  wrote:

> On 03/20/2014 09:07 AM, Yuriy Taraday wrote:
>
>> On Thu, Mar 20, 2014 at 7:28 PM, Rick Jones wrote:
>> Interesting result.  Which versions of sudo and ip and with how many
>> interfaces on the system?
>>
>>
>> Here are the numbers:
>>
>> % sudo -V
>> Sudo version 1.8.6p7
>> Sudoers policy plugin version 1.8.6p7
>> Sudoers file grammar version 42
>> Sudoers I/O plugin version 1.8.6p7
>> % ip -V
>> ip utility, iproute2-ss130221
>> % ip a | grep '^[^ ]' | wc -l
>> 5
>>
>> For consistency's sake (however foolish it may be) and purposes of
>> others being able to reproduce results and all that, stating the
>> number of interfaces on the system and versions and such would be a
>> Good Thing.
>>
>>
>> Ok, I'll add them to benchmark output.
>>
>
> Since there are only five interfaces on the system, it likely doesn't make
> much of a difference in your specific benchmark but the top-of-trunk
> version of sudo has the fix/enhancement to allow one to tell it via
> sudo.conf to not grab the list of interfaces on the system.
>
> Might be worthwhile though to take the interface count out to 2000 or more
> in the name of doing things at scale.  Namespace count as well.


Given that this benchmark was created to show that my changes are worth
doing, and it already shows that my approach is almost 2x faster than sudo,
slowing sudo down would only widen that difference. I don't think we
should add this to the benchmark itself.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Rick Jones

On 03/20/2014 09:07 AM, Yuriy Taraday wrote:

On Thu, Mar 20, 2014 at 7:28 PM, Rick Jones <rick.jon...@hp.com> wrote:
Interesting result.  Which versions of sudo and ip and with how many
interfaces on the system?


Here are the numbers:

% sudo -V
Sudo version 1.8.6p7
Sudoers policy plugin version 1.8.6p7
Sudoers file grammar version 42
Sudoers I/O plugin version 1.8.6p7
% ip -V
ip utility, iproute2-ss130221
% ip a | grep '^[^ ]' | wc -l
5

For consistency's sake (however foolish it may be) and purposes of
others being able to reproduce results and all that, stating the
number of interfaces on the system and versions and such would be a
Good Thing.


Ok, I'll add them to benchmark output.


Since there are only five interfaces on the system, it likely doesn't 
make much of a difference in your specific benchmark but the 
top-of-trunk version of sudo has the fix/enhancement to allow one to 
tell it via sudo.conf to not grab the list of interfaces on the system.
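
(For reference: in recent sudo releases that knob appears to be the
`probe_interfaces` setting in sudo.conf -- worth double-checking against
your sudo version's sudo.conf(5) man page:)

```
# /etc/sudo.conf -- skip probing the system's network interfaces at startup
Set probe_interfaces false
```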


Might be worthwhile though to take the interface count out to 2000 or 
more in the name of doing things at scale.  Namespace count as well.


happy benchmarking,

rick jones




Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Yuriy Taraday
On Thu, Mar 20, 2014 at 5:41 PM, Miguel Angel Ajo wrote:

>
>Wow Yuriy, amazing and fast :-), benchmarks included ;-)
>
>The daemon solution only adds 4.5ms, good work. I'll add some comments
> in a while.
>
>Recently I talked with another engineer in Red Hat (working
> in ovirt/vdsm), and they have something like this daemon, and they
> are using BaseManager too.
>
>In our last conversation he told me that the BaseManager has
> a couple of bugs & race conditions that won't be fixed for python2.x,
> I'm waiting for details on those bugs, I'll post them to the thread
> as soon as I have the details.
>

Looking at the log of managers.py and connection.py, I don't see any
significant changes landed after 2.7.6 was released (Nov 10). So it looks
like those bugs should be fixed in 2.7.

   If this is coupled to neutron in a way that it can be accepted for
> Icehouse (we're killing a performance bug), or that at least it can
> be backported, you'd be covering both the short & long term needs.
>

As I said at the meeting, I plan to propose a change request to Neutron
with some integration with this patch.
I'm also going to engage people involved in rootwrap about my change
request.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Yuriy Taraday
On Thu, Mar 20, 2014 at 7:28 PM, Rick Jones  wrote:

> On 03/20/2014 05:41 AM, Yuriy Taraday wrote:
>
>> Benchmark included showed on my machine these numbers (average over 100
>>  iterations):
>>
>> Running 'ip a':
>>ip a :   4.565ms
>>   sudo ip a :  13.744ms
>> sudo rootwrap conf ip a : 102.571ms
>>  daemon.run('ip a') :   8.973ms
>> Running 'ip netns exec bench_ns ip a':
>>sudo ip netns exec bench_ns ip a : 162.098ms
>>  sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
>>   daemon.run('ip netns exec bench_ns ip a') : 129.876ms
>>
>> So it looks like running daemon is actually faster than running "sudo".
>>
>
> Interesting result.  Which versions of sudo and ip and with how many
> interfaces on the system?
>

Here are the numbers:

% sudo -V
Sudo version 1.8.6p7
Sudoers policy plugin version 1.8.6p7
Sudoers file grammar version 42
Sudoers I/O plugin version 1.8.6p7
% ip -V
ip utility, iproute2-ss130221
% ip a | grep '^[^ ]' | wc -l
5


> For consistency's sake (however foolish it may be) and purposes of others
> being able to reproduce results and all that, stating the number of
> interfaces on the system and versions and such would be a Good Thing.
>

Ok, I'll add them to benchmark output.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Rick Jones

On 03/20/2014 05:41 AM, Yuriy Taraday wrote:

On Tue, Mar 18, 2014 at 7:38 PM, Yuriy Taraday <yorik@gmail.com> wrote:

I'm aiming at ~100 new lines of code for daemon. Of course I'll use
some batteries included with Python stdlib but they should be safe
already.
It should be rather easy to audit them.


Here's my take on this: https://review.openstack.org/81798

Benchmark included showed on my machine these numbers (average over 100
iterations):

Running 'ip a':
   ip a :   4.565ms
  sudo ip a :  13.744ms
sudo rootwrap conf ip a : 102.571ms
 daemon.run('ip a') :   8.973ms
Running 'ip netns exec bench_ns ip a':
   sudo ip netns exec bench_ns ip a : 162.098ms
 sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
  daemon.run('ip netns exec bench_ns ip a') : 129.876ms

So it looks like running daemon is actually faster than running "sudo".


Interesting result.  Which versions of sudo and ip and with how many 
interfaces on the system?


For consistency's sake (however foolish it may be) and purposes of 
others being able to reproduce results and all that, stating the number 
of interfaces on the system and versions and such would be a Good Thing.


happy benchmarking,

rick jones



Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Yuriy Taraday
On Tue, Mar 11, 2014 at 12:58 AM, Carl Baldwin  wrote:
>
> https://etherpad.openstack.org/p/neutron-agent-exec-performance
>

I've added info on how we can speed up work with namespaces by entering
them ourselves using setns(), without the "ip netns exec" overhead.
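
[For illustration, a minimal sketch of the setns() idea -- the helper names
are hypothetical; it requires root, a namespace created with "ip netns add",
and a glibc new enough to expose setns(2):]

```python
import ctypes
import os

CLONE_NEWNET = 0x40000000  # network-namespace flag, from <sched.h>


def netns_path(name):
    # iproute2 keeps a bind-mounted handle per namespace created
    # with "ip netns add <name>"
    return "/var/run/netns/" + name


def enter_netns(name):
    # Open the namespace handle and move this process into it via
    # setns(2), skipping the fork/exec cost of "ip netns exec".
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    fd = os.open(netns_path(name), os.O_RDONLY)
    try:
        if libc.setns(fd, CLONE_NEWNET) != 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))
    finally:
        os.close(fd)
```

A worker process can call enter_netns('bench_ns') once and then run any
number of commands inside that namespace, instead of paying the
"ip netns exec" cost on every call.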

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Miguel Angel Ajo




On 03/20/2014 12:32 PM, Monty Taylor wrote:

On 03/20/2014 05:31 AM, Miguel Angel Ajo wrote:



On 03/19/2014 10:54 PM, Joe Gordon wrote:




On Wed, Mar 19, 2014 at 9:25 AM, Miguel Angel Ajo <majop...@redhat.com> wrote:



An update on the changes required to have a
py->c++ compiled rootwrap as a mitigation POC for havana/icehouse.


https://github.com/mangelajo/shedskin.rootwrap/commit/e4167a6491dfbc71e2d0f6e28ba93bc8a1dd66c0







The current translation output is included.


Updated changes:
https://github.com/mangelajo/shedskin.rootwrap/commit/f4bd8d6efac9d3a972e686b42a47996f32ce0464

Some results of the compiled version (it's a different machine, so note 
the rootwrap runtime difference).


# time neutron-rootwrap /etc/neutron/rootwrap.conf
/usr/bin/neutron-rootwrap: No command specified

real    0m0.133s
user    0m0.106s
sys     0m0.023s

[root@rdo-storm rootwrap]# time ./cmd /etc/neutron/rootwrap.conf
./cmd: No command specified

real    0m0.003s
user    0m0.003s
sys     0m0.001s


[root@rdo-storm rootwrap]# time ./cmd /etc/neutron/rootwrap.conf ip 
netns exec bench_ns ip a

37: lo:  mtu 16436 qdisc noop state DOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

real    0m0.025s
user    0m0.003s
sys     0m0.011s
[root@rdo-storm rootwrap]# time neutron-rootwrap 
/etc/neutron/rootwrap.conf ip netns exec bench_ns ip a

37: lo:  mtu 16436 qdisc noop state DOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

real    0m0.164s
user    0m0.124s
sys     0m0.029s





The C++ code it's generating is inefficient and subject to some issues.
I'd be a bit concerned about it and would want to make some fixes.


Well, the inefficiencies should not be a problem for us, as long as the
final solution performs as we need.



Also - I'm sure you know this, but it's labeled as experimental. As
something that is going to run as root, I'd want to actually audit all
of the code it does. WHICH MEANS - we'd be auditing/examining C++ code.


Sure, I didn't miss that, and I think the same way you do. At this
moment, what I'm trying is a proof of concept, something we can
play around with and do real measurements on, see what the code looks
like, whether it can be audited or not, etc.


If we're going to do that, I'd rather just convert it to C++ by hand,
audit that, and deal with it. Either way, we'll need a substantial
amount of infra support - such as multi-platform compilation testing,
valgrind, etc.

Not a blocker - but things to think about. If you're really serious
about bringing C++ into the mix - let's just do it.


As I see it, it's a solution aiming at the short term, but IMHO, with
the speed Yuriy is working on the daemon, his solution may make it
for Icehouse without being very code-intrusive.


(ps. if we're going to do that, we could also just write a sudo plugin)


Yes, but a sudo plugin / sudo rules brings back the maintenance
problem we already had with sudo rules.


Ideally, if we needed to write/translate in C/C++, I'd go for something
rootwrap compatible, so, innovation could be kept on the rootwrap-python
side, and translated later.



It looks doable (almost killed 80% of the translation
problems), but there are two big obstacles:

1) As Joe said, no support for Subprocess (we're interested in
popen),
I'm using a dummy os.system() for the test.

2) No logging support.

I'm not sure how complicated it would be to get those modules
implemented for shedskin.


Before sorting out if we can get those working under shedskin, any
preliminary performance numbers from neutron when using this?



Sure, totally agree.

I'm trying to put up a conversion without 1 & 2, to run a benchmark on
it, and then I'll post the results.

I suppose, we couldn't use it in neutron itself without Popen support
(not sure) but at least I could get an estimate out of the previous
numbers and the new ones.

Best,
Miguel Ángel.



On 03/18/2014 09:14 AM, Miguel Angel Ajo wrote:

Hi Joe, thank you very much for the positive feedback,

 I plan to spend a day during this week on the
shedskin-compatibility
for rootwrap (I'll branch it, and tune/cut down as necessary) to
make
it compile under shedskin [1] : nothing done yet.

 It's a short-term alternative until we can have a rootwrap
agent,
together with its integration in neutron (for Juno). As for
the
compiled rootwrap, if it works, and if it does look good
(security wise)
then we'd have a solution for Icehouse/Havana.

help in [1] is really  welcome ;-) I'm available in
#openstack-neutron
as 'ajo'.

 Best regards,
Miguel Ángel.

[1] https://github.com/mangelajo/shedskin.rootwrap


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Miguel Angel Ajo


   Wow Yuriy, amazing and fast :-), benchmarks included ;-)

   The daemon solution only adds 4.5ms, good work. I'll add some 
comments in a while.


   Recently I talked with another engineer in Red Hat (working
in ovirt/vdsm), and they have something like this daemon, and they
are using BaseManager too.

   In our last conversation he told me that the BaseManager has
a couple of bugs & race conditions that won't be fixed for python2.x,
I'm waiting for details on those bugs, I'll post them to the thread
as soon as I have the details.


   If this is coupled to neutron in a way that it can be accepted for
Icehouse (we're killing a performance bug), or that at least it can
be backported, you'd be covering both the short & long term needs.


Best,
Miguel Ángel.




On 03/20/2014 01:41 PM, Yuriy Taraday wrote:

On Tue, Mar 18, 2014 at 7:38 PM, Yuriy Taraday <yorik@gmail.com> wrote:

I'm aiming at ~100 new lines of code for daemon. Of course I'll use
some batteries included with Python stdlib but they should be safe
already.
It should be rather easy to audit them.


Here's my take on this: https://review.openstack.org/81798

Benchmark included showed on my machine these numbers (average over 100
iterations):

Running 'ip a':
   ip a :   4.565ms
  sudo ip a :  13.744ms
sudo rootwrap conf ip a : 102.571ms
 daemon.run('ip a') :   8.973ms
Running 'ip netns exec bench_ns ip a':
   sudo ip netns exec bench_ns ip a : 162.098ms
 sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
  daemon.run('ip netns exec bench_ns ip a') : 129.876ms

So it looks like running daemon is actually faster than running "sudo".

--

Kind regards, Yuriy.







Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Yuriy Taraday
On Tue, Mar 18, 2014 at 7:38 PM, Yuriy Taraday  wrote:

> I'm aiming at ~100 new lines of code for daemon. Of course I'll use some
> batteries included with Python stdlib but they should be safe already.
> It should be rather easy to audit them.
>

Here's my take on this: https://review.openstack.org/81798

Benchmark included showed on my machine these numbers (average over 100
iterations):

Running 'ip a':
  ip a :   4.565ms
 sudo ip a :  13.744ms
   sudo rootwrap conf ip a : 102.571ms
daemon.run('ip a') :   8.973ms
Running 'ip netns exec bench_ns ip a':
  sudo ip netns exec bench_ns ip a : 162.098ms
sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
 daemon.run('ip netns exec bench_ns ip a') : 129.876ms

So it looks like running daemon is actually faster than running "sudo".
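
[The methodology can be reproduced in a few lines -- an illustrative
sketch (Python 3 spellings such as subprocess.DEVNULL), not the actual
script from the review:]

```python
import subprocess
import timeit


def bench(cmd, iterations=100):
    """Return the mean wall-clock time of running cmd, in milliseconds."""
    def run():
        # Discard output so printing doesn't dominate the measurement.
        subprocess.call(cmd, stdout=subprocess.DEVNULL,
                        stderr=subprocess.DEVNULL)
    return timeit.timeit(run, number=iterations) / iterations * 1000.0

# e.g.: bench(["ip", "a"]), bench(["sudo", "ip", "a"]),
#       bench(["sudo", "rootwrap", "conf", "ip", "a"]), ...
```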

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Monty Taylor

On 03/20/2014 05:31 AM, Miguel Angel Ajo wrote:



On 03/19/2014 10:54 PM, Joe Gordon wrote:




On Wed, Mar 19, 2014 at 9:25 AM, Miguel Angel Ajo <majop...@redhat.com> wrote:



An update on the changes required to have a
py->c++ compiled rootwrap as a mitigation POC for havana/icehouse.


https://github.com/mangelajo/shedskin.rootwrap/commit/e4167a6491dfbc71e2d0f6e28ba93bc8a1dd66c0





The current translation output is included.


The C++ code it's generating is inefficient and subject to some issues. 
I'd be a bit concerned about it and would want to make some fixes.


Also - I'm sure you know this, but it's labeled as experimental. As 
something that is going to run as root, I'd want to actually audit all 
of the code it does. WHICH MEANS - we'd be auditing/examining C++ code. 
If we're going to do that, I'd rather just convert it to C++ by hand, 
audit that, and deal with it. Either way, we'll need a substantial 
amount of infra support - such as multi-platform compilation testing, 
valgrind, etc.


Not a blocker - but things to think about. If you're really serious 
about bringing C++ into the mix - let's just do it.


(ps. if we're going to do that, we could also just write a sudo plugin)


It looks doable (almost killed 80% of the translation problems),
but there are two big obstacles:

1) As Joe said, no support for Subprocess (we're interested in
popen),
I'm using a dummy os.system() for the test.

2) No logging support.

I'm not sure how complicated it would be to get those modules
implemented for shedskin.


Before sorting out if we can get those working under shedskin, any
preliminary performance numbers from neutron when using this?



Sure, totally agree.

I'm trying to put up a conversion without 1 & 2, to run a benchmark on
it, and then I'll post the results.

I suppose, we couldn't use it in neutron itself without Popen support
(not sure) but at least I could get an estimate out of the previous
numbers and the new ones.

Best,
Miguel Ángel.



On 03/18/2014 09:14 AM, Miguel Angel Ajo wrote:

Hi Joe, thank you very much for the positive feedback,

 I plan to spend a day during this week on the
shedskin-compatibility
for rootwrap (I'll branch it, and tune/cut down as necessary) to
make
it compile under shedskin [1] : nothing done yet.

 It's a short-term alternative until we can have a rootwrap
agent,
together with its integration in neutron (for Juno). As for the
compiled rootwrap, if it works, and if it does look good
(security wise)
then we'd have a solution for Icehouse/Havana.

help in [1] is really  welcome ;-) I'm available in
#openstack-neutron
as 'ajo'.

 Best regards,
Miguel Ángel.

[1] https://github.com/mangelajo/shedskin.rootwrap


On 03/18/2014 12:48 AM, Joe Gordon wrote:




On Tue, Mar 11, 2014 at 1:46 AM, Miguel Angel Ajo Pelayo
<mangel...@redhat.com> wrote:


 I have included on the etherpad, the option to write
a sudo
 plugin (or several), specific for openstack.


 And this is a test with shedskin, I suppose that in
more complicated
 dependency scenarios it should perform better.

 [majopela@redcylon tmp]$ cat <<EOF > test.py
  > import sys
  > print "hello world"
  > sys.exit(0)
  > EOF

 [majopela@redcylon tmp]$ time python test.py
 hello world

 real0m0.016s
 user0m0.015s
 sys 0m0.001s



This looks very promising!

A few gotchas:

* Very limited library support

https://code.google.com/p/shedskin/wiki/docs#Library_Limitations


* no logging
* no six
* no subprocess

* no *args support
*

https://code.google.com/p/shedskin/wiki/docs#Python_Subset_Restrictions




that being said I did a quick comparison with great results:

$ cat tmp.sh
#!/usr/bin/env bash
echo $0 $@
ip a

$ time ./tmp.sh  foo bar> /dev/null

real0m0.009s
user0m0.003s
sys 0m0.006s



$ cat tmp.py
#!/usr/bin/env python
import os
import sys

 

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Miguel Angel Ajo



On 03/19/2014 10:54 PM, Joe Gordon wrote:




On Wed, Mar 19, 2014 at 9:25 AM, Miguel Angel Ajo <majop...@redhat.com> wrote:



An update on the changes required to have a
py->c++ compiled rootwrap as a mitigation POC for havana/icehouse.


https://github.com/mangelajo/shedskin.rootwrap/commit/e4167a6491dfbc71e2d0f6e28ba93bc8a1dd66c0



The current translation output is included.

It looks doable (almost killed 80% of the translation problems),
but there are two big obstacles:

1) As Joe said, no support for Subprocess (we're interested in popen),
I'm using a dummy os.system() for the test.

2) No logging support.

I'm not sure how complicated it would be to get those modules
implemented for shedskin.


Before sorting out if we can get those working under shedskin, any
preliminary performance numbers from neutron when using this?



Sure, totally agree.

I'm trying to put up a conversion without 1 & 2, to run a benchmark on
it, and then I'll post the results.

I suppose, we couldn't use it in neutron itself without Popen support
(not sure) but at least I could get an estimate out of the previous
numbers and the new ones.

Best,
Miguel Ángel.



On 03/18/2014 09:14 AM, Miguel Angel Ajo wrote:

Hi Joe, thank you very much for the positive feedback,

 I plan to spend a day during this week on the
shedskin-compatibility
for rootwrap (I'll branch it, and tune/cut down as necessary) to
make
it compile under shedskin [1] : nothing done yet.

 It's a short-term alternative until we can have a rootwrap
agent,
together with its integration in neutron (for Juno). As for the
compiled rootwrap, if it works, and if it does look good
(security wise)
then we'd have a solution for Icehouse/Havana.

help in [1] is really  welcome ;-) I'm available in
#openstack-neutron
as 'ajo'.

 Best regards,
Miguel Ángel.

[1] https://github.com/mangelajo/shedskin.rootwrap


On 03/18/2014 12:48 AM, Joe Gordon wrote:




On Tue, Mar 11, 2014 at 1:46 AM, Miguel Angel Ajo Pelayo
<mangel...@redhat.com> wrote:


 I have included on the etherpad, the option to write a sudo
 plugin (or several), specific for openstack.


 And this is a test with shedskin, I suppose that in
more complicated
 dependency scenarios it should perform better.

 [majopela@redcylon tmp]$ cat <<EOF > test.py
  > import sys
  > print "hello world"
  > sys.exit(0)
  > EOF

 [majopela@redcylon tmp]$ time python test.py
 hello world

 real0m0.016s
 user0m0.015s
 sys 0m0.001s



This looks very promising!

A few gotchas:

* Very limited library support
https://code.google.com/p/shedskin/wiki/docs#Library_Limitations

* no logging
* no six
* no subprocess

* no *args support
*

https://code.google.com/p/shedskin/wiki/docs#Python_Subset_Restrictions



that being said I did a quick comparison with great results:

$ cat tmp.sh
#!/usr/bin/env bash
echo $0 $@
ip a

$ time ./tmp.sh  foo bar> /dev/null

real0m0.009s
user0m0.003s
sys 0m0.006s



$ cat tmp.py
#!/usr/bin/env python
import os
import sys

print sys.argv
print os.system("ip a")

$ time ./tmp.py  foo bar > /dev/null

min:
real0m0.016s
user0m0.004s
sys 0m0.012s

max:
real0m0.038s
user0m0.016s
sys 0m0.020s



shedskin  tmp.py && make


$ time ./tmp  foo bar > /dev/null

real0m0.010s
user0m0.007s
sys 0m0.002s



Based on these results I think a deeper dive into making rootwrap
support shedskin is worthwhile.





 [majopela@redcylon tmp]$ shedskin test.py
 *** SHED SKIN Python-to-C++ Compiler 0.9.4 ***
 

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-19 Thread Joe Gordon
On Wed, Mar 19, 2014 at 9:25 AM, Miguel Angel Ajo wrote:

>
>
> An update on the changes required to have a
> py->c++ compiled rootwrap as a mitigation POC for havana/icehouse.
>
> https://github.com/mangelajo/shedskin.rootwrap/commit/e4167a6491dfbc71e2d0f6e28ba93bc8a1dd66c0
>
> The current translation output is included.
>
> It looks doable (almost killed 80% of the translation problems),
> but there are two big obstacles:
>
> 1) As Joe said, no support for Subprocess (we're interested in popen),
>I'm using a dummy os.system() for the test.
>
> 2) No logging support.
>
> I'm not sure how complicated it would be to get those modules
> implemented for shedskin.


Before sorting out if we can get those working under shedskin, any
preliminary performance numbers from neutron when using this?


>
>
> On 03/18/2014 09:14 AM, Miguel Angel Ajo wrote:
>
>> Hi Joe, thank you very much for the positive feedback,
>>
>> I plan to spend a day during this week on the shedskin-compatibility
>> for rootwrap (I'll branch it, and tune/cut down as necessary) to make
>> it compile under shedskin [1] : nothing done yet.
>>
>> It's a short-term alternative until we can have a rootwrap agent,
>> together with its integration in neutron (for Juno). As for the
>> compiled rootwrap, if it works, and if it does look good (security wise)
>> then we'd have a solution for Icehouse/Havana.
>>
>> help in [1] is really  welcome ;-) I'm available in #openstack-neutron
>> as 'ajo'.
>>
>> Best regards,
>> Miguel Ángel.
>>
>> [1] https://github.com/mangelajo/shedskin.rootwrap
>>
>> On 03/18/2014 12:48 AM, Joe Gordon wrote:
>>
>>>
>>>
>>>
>>> On Tue, Mar 11, 2014 at 1:46 AM, Miguel Angel Ajo Pelayo
>>> <mangel...@redhat.com> wrote:
>>>
>>>
>>> I have included on the etherpad, the option to write a sudo
>>> plugin (or several), specific for openstack.
>>>
>>>
>>> And this is a test with shedskin, I suppose that in more complicated
>>> dependency scenarios it should perform better.
>>>
>>> [majopela@redcylon tmp]$ cat <<EOF > test.py
>>>  > import sys
>>>  > print "hello world"
>>>  > sys.exit(0)
>>>  > EOF
>>>
>>> [majopela@redcylon tmp]$ time python test.py
>>> hello world
>>>
>>> real0m0.016s
>>> user0m0.015s
>>> sys 0m0.001s
>>>
>>>
>>>
>>> This looks very promising!
>>>
>>> A few gotchas:
>>>
>>> * Very limited library support
>>> https://code.google.com/p/shedskin/wiki/docs#Library_Limitations
>>>* no logging
>>>* no six
>>>* no subprocess
>>>
>>> * no *args support
>>>*
>>> https://code.google.com/p/shedskin/wiki/docs#Python_Subset_Restrictions
>>>
>>> that being said I did a quick comparison with great results:
>>>
>>> $ cat tmp.sh
>>> #!/usr/bin/env bash
>>> echo $0 $@
>>> ip a
>>>
>>> $ time ./tmp.sh  foo bar> /dev/null
>>>
>>> real0m0.009s
>>> user0m0.003s
>>> sys 0m0.006s
>>>
>>>
>>>
>>> $ cat tmp.py
>>> #!/usr/bin/env python
>>> import os
>>> import sys
>>>
>>> print sys.argv
>>> print os.system("ip a")
>>>
>>> $ time ./tmp.py  foo bar > /dev/null
>>>
>>> min:
>>> real0m0.016s
>>> user0m0.004s
>>> sys 0m0.012s
>>>
>>> max:
>>> real0m0.038s
>>> user0m0.016s
>>> sys 0m0.020s
>>>
>>>
>>>
>>> shedskin  tmp.py && make
>>>
>>>
>>> $ time ./tmp  foo bar > /dev/null
>>>
>>> real0m0.010s
>>> user0m0.007s
>>> sys 0m0.002s
>>>
>>>
>>>
>>> Based on these results I think a deeper dive into making rootwrap
>>> support shedskin is worthwhile.
>>>
>>>
>>>
>>>
>>>
>>> [majopela@redcylon tmp]$ shedskin test.py
>>> *** SHED SKIN Python-to-C++ Compiler 0.9.4 ***
>>> Copyright 2005-2011 Mark Dufour; License GNU GPL version 3 (See
>>> LICENSE)
>>>
>>> [analyzing types..]
>>> 100%
>>> [generating c++ code..]
>>> [elapsed time: 1.59 seconds]
>>> [majopela@redcylon tmp]$ make
>>> g++  -O2 -march=native -Wno-deprecated  -I.
>>> -I/usr/lib/python2.7/site-packages/shedskin/lib /tmp/test.cpp
>>> /usr/lib/python2.7/site-packages/shedskin/lib/sys.cpp
>>> /usr/lib/python2.7/site-packages/shedskin/lib/re.cpp
>>> /usr/lib/python2.7/site-packages/shedskin/lib/builtin.cpp -lgc
>>> -lpcre  -o test
>>> [majopela@redcylon tmp]$ time ./test
>>> hello world
>>>
>>> real0m0.003s
>>> user0m0.000s
>>> sys 0m0.002s
>>>
>>>
>>> - Original Message -
>>>  > We had this same issue with the dhcp-agent. Code was added that
>>> paralleled
>>>  > the initial sync here: https://review.openstack.org/#/c/28914/
>>> that made
>>>  > things a good bit faster if I remember correctly. Might be worth
>>> doing
>>>  > something similar for the l3-agent.
>>>  >
>>>  > Best,
>>>  >
>>>  > Aaron
>>>  >
>>>  >
>>>  > On Mon, Mar 10, 2014 at 5:07 PM, Joe Gordon <
>>> joe.gord...@gmail.com> wrote:
>>>  >
>>>   

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-19 Thread Miguel Angel Ajo



An update on the changes required to have a
py->c++ compiled rootwrap as a mitigation POC for havana/icehouse.

https://github.com/mangelajo/shedskin.rootwrap/commit/e4167a6491dfbc71e2d0f6e28ba93bc8a1dd66c0

The current translation output is included.

It looks doable (almost killed 80% of the translation problems),
but there are two big obstacles:

1) As Joe said, no support for Subprocess (we're interested in popen),
   I'm using a dummy os.system() for the test.

2) No logging support.

   I'm not sure how complicated it would be to get those modules
implemented for shedskin.


On 03/18/2014 09:14 AM, Miguel Angel Ajo wrote:

Hi Joe, thank you very much for the positive feedback,

I plan to spend a day during this week on the shedskin-compatibility
for rootwrap (I'll branch it, and tune/cut down as necessary) to make
it compile under shedskin [1] : nothing done yet.

It's a short-term alternative until we can have a rootwrap agent,
together with its integration in neutron (for Juno). As for the
compiled rootwrap, if it works, and if it does look good (security wise)
then we'd have a solution for Icehouse/Havana.

help in [1] is really  welcome ;-) I'm available in #openstack-neutron
as 'ajo'.

Best regards,
Miguel Ángel.

[1] https://github.com/mangelajo/shedskin.rootwrap

On 03/18/2014 12:48 AM, Joe Gordon wrote:




On Tue, Mar 11, 2014 at 1:46 AM, Miguel Angel Ajo Pelayo
<mangel...@redhat.com> wrote:


I have included on the etherpad, the option to write a sudo
plugin (or several), specific for openstack.


And this is a test with shedskin, I suppose that in more complicated
dependency scenarios it should perform better.

[majopela@redcylon tmp]$ cat <<EOF > test.py
 > import sys
 > print "hello world"
 > sys.exit(0)
 > EOF

[majopela@redcylon tmp]$ time python test.py
hello world

real    0m0.016s
user    0m0.015s
sys     0m0.001s



This looks very promising!

A few gotchas:

* Very limited library support
https://code.google.com/p/shedskin/wiki/docs#Library_Limitations
   * no logging
   * no six
   * no subprocess

* no *args support
   *
https://code.google.com/p/shedskin/wiki/docs#Python_Subset_Restrictions

that being said I did a quick comparison with great results:

$ cat tmp.sh
#!/usr/bin/env bash
echo $0 $@
ip a

$ time ./tmp.sh  foo bar> /dev/null

real    0m0.009s
user    0m0.003s
sys     0m0.006s



$ cat tmp.py
#!/usr/bin/env python
import os
import sys

print sys.argv
print os.system("ip a")

$ time ./tmp.py  foo bar > /dev/null

min:
real    0m0.016s
user    0m0.004s
sys     0m0.012s

max:
real    0m0.038s
user    0m0.016s
sys     0m0.020s



shedskin  tmp.py && make


$ time ./tmp  foo bar > /dev/null

real    0m0.010s
user    0m0.007s
sys     0m0.002s



Based in these results I think a deeper dive into making rootwrap
supportshedskin is worthwhile.





[majopela@redcylon tmp]$ shedskin test.py
*** SHED SKIN Python-to-C++ Compiler 0.9.4 ***
Copyright 2005-2011 Mark Dufour; License GNU GPL version 3 (See
LICENSE)

[analyzing types..]
100%
[generating c++ code..]
[elapsed time: 1.59 seconds]
[majopela@redcylon tmp]$ make
g++  -O2 -march=native -Wno-deprecated  -I.
-I/usr/lib/python2.7/site-packages/shedskin/lib /tmp/test.cpp
/usr/lib/python2.7/site-packages/shedskin/lib/sys.cpp
/usr/lib/python2.7/site-packages/shedskin/lib/re.cpp
/usr/lib/python2.7/site-packages/shedskin/lib/builtin.cpp -lgc
-lpcre  -o test
[majopela@redcylon tmp]$ time ./test
hello world

real    0m0.003s
user    0m0.000s
sys     0m0.002s


- Original Message -
 > We had this same issue with the dhcp-agent. Code was added that
paralleled
 > the initial sync here: https://review.openstack.org/#/c/28914/
that made
 > things a good bit faster if I remember correctly. Might be worth
doing
 > something similar for the l3-agent.
 >
 > Best,
 >
 > Aaron
 >
 >
 > On Mon, Mar 10, 2014 at 5:07 PM, Joe Gordon <
joe.gord...@gmail.com> wrote:
 >
 >
 >
 >
 >
 >
 > On Mon, Mar 10, 2014 at 3:57 PM, Joe Gordon <
joe.gord...@gmail.com> wrote:
 >
 >
 >
 > I looked into the python to C options and haven't found anything
 > promising yet.
 >
 >
 > I tried Cython and RPython on a trivial hello world app, but got
 > similar startup times to standard python.
 >
 > The one thing that did work was adding a '-S' when starting
python.
 >
 > -S Disable the import of the module site and the site-dependent
manipulations
 > of sys.path that it entails.
 >
 > Using 'python -S' didn't appear to help in devstack
 >
 > #!/usr/bin/python -S
 > # PBR Generated from u'console_scripts'
 >
 > import sys

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-18 Thread Yuriy Taraday
On Mon, Mar 17, 2014 at 1:01 PM, IWAMOTO Toshihiro wrote:
>
> I've added a couple of security-related comments (pickle decoding and
> token leak) on the etherpad.
> Please check.
>

Hello. Thanks for your input.

- We can avoid pickle using xmlrpclib.
- Token won't leak because we have direct pipe to parent process.

I'm in the process of implementing it now, so thanks for the early notice.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-18 Thread Yuriy Taraday
Hello, Thierry.

On Mon, Mar 17, 2014 at 6:04 PM, Thierry Carrez wrote:

> Note that the whole concept behind rootwrap is to limit the amount of
> code that runs with elevated privileges. If you end up running a full
> service as root which imports as many libraries as the rest of OpenStack
> services, then you should seriously consider switching to running your
> root-heavy service as root directly, because it won't make that much of
> a difference.
>
> I'm not closing the door to a persistent implementation... Just saying
> that in order to be useful, it needs to be as minimal as possible (both
> in amount of code written and code imported) and as simple as possible
> (so that its security model can be easily proven safe).
>

I'm aiming at ~100 new lines of code for the daemon. Of course I'll use
some batteries included with the Python stdlib, but they should be safe
already. It should be rather easy to audit them.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-18 Thread Thierry Carrez
Joe Gordon wrote:
>> And this is a test with shedskin, I suppose that in more complicated
>> dependency scenarios it should perform better.
>> 
>> [majopela@redcylon tmp]$ cat <<EOF > test.py
>> > import sys
>> > print "hello world"
>> > sys.exit(0)
>> > EOF
>> 
>> [majopela@redcylon tmp]$ time python test.py
>> hello world
>> 
>> real    0m0.016s
>> user    0m0.015s
>> sys     0m0.001s
> 
> This looks very promising!
> 
> A few gotchas:
> 
> * Very limited library support
> https://code.google.com/p/shedskin/wiki/docs#Library_Limitations
>   * no logging
>   * no six
>   * no subprocess
> 
> * no *args support 
>   * https://code.google.com/p/shedskin/wiki/docs#Python_Subset_Restrictions

This certainly looks promising enough to do a more complete
proof-of-concept around it. This adds packaging complexity and we are
likely to have only a subset of features available, but it may still be
worth it.

I filed the following session so that we can discuss it at the summit:
http://summit.openstack.org/cfp/details/97

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-18 Thread Miguel Angel Ajo

Hi Joe, thank you very much for the positive feedback,

   I plan to spend a day this week on shedskin compatibility
for rootwrap (I'll branch it, and tune/cut it down as necessary) to make
it compile under shedskin [1]: nothing done yet.

   It's a short-term alternative until we can have a rootwrap agent,
together with its integration in neutron (for Juno). As for the
compiled rootwrap: if it works, and if it looks good security-wise,
then we'd have a solution for Icehouse/Havana.


Help in [1] is really welcome ;-) I'm available in #openstack-neutron
as 'ajo'.

   Best regards,
Miguel Ángel.

[1] https://github.com/mangelajo/shedskin.rootwrap

On 03/18/2014 12:48 AM, Joe Gordon wrote:




On Tue, Mar 11, 2014 at 1:46 AM, Miguel Angel Ajo Pelayo
mailto:mangel...@redhat.com>> wrote:


I have included on the etherpad, the option to write a sudo
plugin (or several), specific for openstack.


And this is a test with shedskin, I suppose that in more complicated
dependency scenarios it should perform better.

[majopela@redcylon tmp]$ cat <<EOF > test.py
 > import sys
 > print "hello world"
 > sys.exit(0)
 > EOF

[majopela@redcylon tmp]$ time python test.py
hello world

real    0m0.016s
user    0m0.015s
sys     0m0.001s



This looks very promising!

A few gotchas:

* Very limited library support
https://code.google.com/p/shedskin/wiki/docs#Library_Limitations
   * no logging
   * no six
   * no subprocess

* no *args support
   * https://code.google.com/p/shedskin/wiki/docs#Python_Subset_Restrictions

That being said, I did a quick comparison with great results:

$ cat tmp.sh
#!/usr/bin/env bash
echo $0 $@
ip a

$ time ./tmp.sh  foo bar > /dev/null

real    0m0.009s
user    0m0.003s
sys     0m0.006s



$ cat tmp.py
#!/usr/bin/env python
import os
import sys

print sys.argv
print os.system("ip a")

$ time ./tmp.py  foo bar > /dev/null

min:
real    0m0.016s
user    0m0.004s
sys     0m0.012s

max:
real    0m0.038s
user    0m0.016s
sys     0m0.020s



shedskin  tmp.py && make


$ time ./tmp  foo bar > /dev/null

real    0m0.010s
user    0m0.007s
sys     0m0.002s



Based on these results I think a deeper dive into making rootwrap
support shedskin is worthwhile.





[majopela@redcylon tmp]$ shedskin test.py
*** SHED SKIN Python-to-C++ Compiler 0.9.4 ***
Copyright 2005-2011 Mark Dufour; License GNU GPL version 3 (See LICENSE)

[analyzing types..]
100%
[generating c++ code..]
[elapsed time: 1.59 seconds]
[majopela@redcylon tmp]$ make
g++  -O2 -march=native -Wno-deprecated  -I.
-I/usr/lib/python2.7/site-packages/shedskin/lib /tmp/test.cpp
/usr/lib/python2.7/site-packages/shedskin/lib/sys.cpp
/usr/lib/python2.7/site-packages/shedskin/lib/re.cpp
/usr/lib/python2.7/site-packages/shedskin/lib/builtin.cpp -lgc
-lpcre  -o test
[majopela@redcylon tmp]$ time ./test
hello world

real    0m0.003s
user    0m0.000s
sys     0m0.002s


- Original Message -
 > We had this same issue with the dhcp-agent. Code was added that
paralleled
 > the initial sync here: https://review.openstack.org/#/c/28914/
that made
 > things a good bit faster if I remember correctly. Might be worth
doing
 > something similar for the l3-agent.
 >
 > Best,
 >
 > Aaron
 >
 >
 > On Mon, Mar 10, 2014 at 5:07 PM, Joe Gordon <
joe.gord...@gmail.com  > wrote:
 >
 >
 >
 >
 >
 >
 > On Mon, Mar 10, 2014 at 3:57 PM, Joe Gordon <
joe.gord...@gmail.com  > wrote:
 >
 >
 >
 > I looked into the python to C options and haven't found anything
promising
 > yet.
 >
 >
 > I tried Cython and RPython on a trivial hello world app, but got
 > similar startup times to standard python.
 >
 > The one thing that did work was adding a '-S' when starting python.
 >
 > -S Disable the import of the module site and the site-dependent
manipulations
 > of sys.path that it entails.
 >
 > Using 'python -S' didn't appear to help in devstack
 >
 > #!/usr/bin/python -S
 > # PBR Generated from u'console_scripts'
 >
 > import sys
 > import site
 > site.addsitedir('/mnt/stack/oslo.rootwrap/oslo/rootwrap')
 >
 >
 >
 >
 >
 >
 > I am not sure if we can do that for rootwrap.
 >
 >
 > jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
 > hello world
 >
 > real 0m0.021s
 > user 0m0.000s
 > sys 0m0.020s
 > jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
 > hello world
 >
 > real 0m0.021s
 > user 0m0.000s
 > sys 0m0.020s
 > jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
 > hello world
 >
 > real 0m0.010s
 > user 0m0.000s
 > sys 0m0.008s
 >
 > jogo@dev:~/tmp/pyp

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-17 Thread Joe Gordon
On Mon, Mar 17, 2014 at 4:48 PM, Joe Gordon  wrote:

>
>
>
> On Tue, Mar 11, 2014 at 1:46 AM, Miguel Angel Ajo Pelayo <
> mangel...@redhat.com> wrote:
>
>>
>> I have included on the etherpad, the option to write a sudo
>> plugin (or several), specific for openstack.
>>
>>
>> And this is a test with shedskin, I suppose that in more complicated
>> dependency scenarios it should perform better.
>>
>> [majopela@redcylon tmp]$ cat <<EOF > test.py
>> > import sys
>> > print "hello world"
>> > sys.exit(0)
>> > EOF
>>
>> [majopela@redcylon tmp]$ time python test.py
>> hello world
>>
>> real    0m0.016s
>> user    0m0.015s
>> sys     0m0.001s
>>
>
>
> This looks very promising!
>
> A few gotchas:
>
> * Very limited library support
> https://code.google.com/p/shedskin/wiki/docs#Library_Limitations
>   * no logging
>   * no six
>   * no subprocess
>
> * no *args support
>   *
> https://code.google.com/p/shedskin/wiki/docs#Python_Subset_Restrictions
>
> That being said, I did a quick comparison with great results:
>
> $ cat tmp.sh
> #!/usr/bin/env bash
> echo $0 $@
> ip a
>
> $ time ./tmp.sh  foo bar > /dev/null
>
> real    0m0.009s
> user    0m0.003s
> sys     0m0.006s
>
>
>
> $ cat tmp.py
> #!/usr/bin/env python
> import os
> import sys
>
> print sys.argv
> print os.system("ip a")
>
> $ time ./tmp.py  foo bar > /dev/null
>
> min:
> real    0m0.016s
> user    0m0.004s
> sys     0m0.012s
>
> max:
> real    0m0.038s
> user    0m0.016s
> sys     0m0.020s
>
>
>
> shedskin  tmp.py && make
>
>
> $ time ./tmp  foo bar > /dev/null
>
> real    0m0.010s
> user    0m0.007s
> sys     0m0.002s
>
>
for completeness here is the auto generated cpp code:
http://paste.openstack.org/show/73711/

>
>
> Based on these results I think a deeper dive into making rootwrap
> support shedskin is worthwhile.
>
>
>
>
>
>>
>>
>> [majopela@redcylon tmp]$ shedskin test.py
>> *** SHED SKIN Python-to-C++ Compiler 0.9.4 ***
>> Copyright 2005-2011 Mark Dufour; License GNU GPL version 3 (See LICENSE)
>>
>> [analyzing types..]
>> 100%
>> [generating c++ code..]
>> [elapsed time: 1.59 seconds]
>> [majopela@redcylon tmp]$ make
>> g++  -O2 -march=native -Wno-deprecated  -I.
>> -I/usr/lib/python2.7/site-packages/shedskin/lib /tmp/test.cpp
>> /usr/lib/python2.7/site-packages/shedskin/lib/sys.cpp
>> /usr/lib/python2.7/site-packages/shedskin/lib/re.cpp
>> /usr/lib/python2.7/site-packages/shedskin/lib/builtin.cpp -lgc -lpcre  -o
>> test
>> [majopela@redcylon tmp]$ time ./test
>> hello world
>>
>> real    0m0.003s
>> user    0m0.000s
>> sys     0m0.002s
>>
>>
>> - Original Message -
>> > We had this same issue with the dhcp-agent. Code was added that
>> paralleled
>> > the initial sync here: https://review.openstack.org/#/c/28914/ that
>> made
>> > things a good bit faster if I remember correctly. Might be worth doing
>> > something similar for the l3-agent.
>> >
>> > Best,
>> >
>> > Aaron
>> >
>> >
>> > On Mon, Mar 10, 2014 at 5:07 PM, Joe Gordon < joe.gord...@gmail.com >
>> wrote:
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Mon, Mar 10, 2014 at 3:57 PM, Joe Gordon < joe.gord...@gmail.com >
>> wrote:
>> >
>> >
>> >
>> > I looked into the python to C options and haven't found anything
>> promising
>> > yet.
>> >
>> >
>> > I tried Cython and RPython on a trivial hello world app, but got
>> > similar startup times to standard python.
>> >
>> > The one thing that did work was adding a '-S' when starting python.
>> >
>> > -S Disable the import of the module site and the site-dependent
>> manipulations
>> > of sys.path that it entails.
>> >
>> > Using 'python -S' didn't appear to help in devstack
>> >
>> > #!/usr/bin/python -S
>> > # PBR Generated from u'console_scripts'
>> >
>> > import sys
>> > import site
>> > site.addsitedir('/mnt/stack/oslo.rootwrap/oslo/rootwrap')
>> >
>> >
>> >
>> >
>> >
>> >
>> > I am not sure if we can do that for rootwrap.
>> >
>> >
>> > jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
>> > hello world
>> >
>> > real 0m0.021s
>> > user 0m0.000s
>> > sys 0m0.020s
>> > jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
>> > hello world
>> >
>> > real 0m0.021s
>> > user 0m0.000s
>> > sys 0m0.020s
>> > jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
>> > hello world
>> >
>> > real 0m0.010s
>> > user 0m0.000s
>> > sys 0m0.008s
>> >
>> > jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
>> > hello world
>> >
>> > real 0m0.010s
>> > user 0m0.000s
>> > sys 0m0.008s
>> >
>> >
>> >
>> > On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo Pelayo <
>> > mangel...@redhat.com > wrote:
>> >
>> >
>> > Hi Carl, thank you, good idea.
>> >
>> > I started reviewing it, but I will do it more carefully tomorrow
>> morning.
>> >
>> >
>> >
>> > - Original Message -
>> > > All,
>> > >
>> > > I was writing down a summary of all of this and decided to just do it
>> > > on an etherpad. Will you help me capture the big picture there? I'd
>> > > like to come up with some actions this week to try to address at least
>> > > part of the pro

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-17 Thread Joe Gordon
On Tue, Mar 11, 2014 at 1:46 AM, Miguel Angel Ajo Pelayo <
mangel...@redhat.com> wrote:

>
> I have included on the etherpad, the option to write a sudo
> plugin (or several), specific for openstack.
>
>
> And this is a test with shedskin, I suppose that in more complicated
> dependency scenarios it should perform better.
>
> [majopela@redcylon tmp]$ cat <<EOF > test.py
> > import sys
> > print "hello world"
> > sys.exit(0)
> > EOF
>
> [majopela@redcylon tmp]$ time python test.py
> hello world
>
> real    0m0.016s
> user    0m0.015s
> sys     0m0.001s
>


This looks very promising!

A few gotchas:

* Very limited library support
https://code.google.com/p/shedskin/wiki/docs#Library_Limitations
  * no logging
  * no six
  * no subprocess

* no *args support
  * https://code.google.com/p/shedskin/wiki/docs#Python_Subset_Restrictions

That being said, I did a quick comparison with great results:

$ cat tmp.sh
#!/usr/bin/env bash
echo $0 $@
ip a

$ time ./tmp.sh  foo bar > /dev/null

real    0m0.009s
user    0m0.003s
sys     0m0.006s



$ cat tmp.py
#!/usr/bin/env python
import os
import sys

print sys.argv
print os.system("ip a")

$ time ./tmp.py  foo bar > /dev/null

min:
real    0m0.016s
user    0m0.004s
sys     0m0.012s

max:
real    0m0.038s
user    0m0.016s
sys     0m0.020s



shedskin  tmp.py && make


$ time ./tmp  foo bar > /dev/null

real    0m0.010s
user    0m0.007s
sys     0m0.002s



Based on these results I think a deeper dive into making rootwrap
support shedskin is worthwhile.
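For completeness, the kind of startup comparison run above can be scripted. This is only a sketch (the `startup_time` helper and run counts are mine, not from the thread); it times bare interpreter startup with and without the `-S` flag discussed earlier, taking the best of N runs to filter out the outliers mentioned elsewhere in this thread:

```python
import subprocess
import sys
import time

def startup_time(cmd, runs=5):
    """Return the best-of-N wall-clock time (seconds) to run cmd."""
    samples = []
    for _ in range(runs):
        start = time.time()
        subprocess.check_call(cmd, stdout=subprocess.DEVNULL)
        samples.append(time.time() - start)
    # min filters out scheduler noise and one-off outliers.
    return min(samples)

if __name__ == "__main__":
    plain = startup_time([sys.executable, "-c", "pass"])
    no_site = startup_time([sys.executable, "-S", "-c", "pass"])
    print("python    : %.4fs" % plain)
    print("python -S : %.4fs" % no_site)
```

Swapping in a compiled binary (e.g. the shedskin output) for `cmd` gives directly comparable numbers.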





>
>
> [majopela@redcylon tmp]$ shedskin test.py
> *** SHED SKIN Python-to-C++ Compiler 0.9.4 ***
> Copyright 2005-2011 Mark Dufour; License GNU GPL version 3 (See LICENSE)
>
> [analyzing types..]
> 100%
> [generating c++ code..]
> [elapsed time: 1.59 seconds]
> [majopela@redcylon tmp]$ make
> g++  -O2 -march=native -Wno-deprecated  -I.
> -I/usr/lib/python2.7/site-packages/shedskin/lib /tmp/test.cpp
> /usr/lib/python2.7/site-packages/shedskin/lib/sys.cpp
> /usr/lib/python2.7/site-packages/shedskin/lib/re.cpp
> /usr/lib/python2.7/site-packages/shedskin/lib/builtin.cpp -lgc -lpcre  -o
> test
> [majopela@redcylon tmp]$ time ./test
> hello world
>
> real    0m0.003s
> user    0m0.000s
> sys     0m0.002s
>
>
> - Original Message -
> > We had this same issue with the dhcp-agent. Code was added that
> paralleled
> > the initial sync here: https://review.openstack.org/#/c/28914/ that made
> > things a good bit faster if I remember correctly. Might be worth doing
> > something similar for the l3-agent.
> >
> > Best,
> >
> > Aaron
> >
> >
> > On Mon, Mar 10, 2014 at 5:07 PM, Joe Gordon < joe.gord...@gmail.com >
> wrote:
> >
> >
> >
> >
> >
> >
> > On Mon, Mar 10, 2014 at 3:57 PM, Joe Gordon < joe.gord...@gmail.com >
> wrote:
> >
> >
> >
> > I looked into the python to C options and haven't found anything
> promising
> > yet.
> >
> >
> > I tried Cython and RPython on a trivial hello world app, but got
> > similar startup times to standard python.
> >
> > The one thing that did work was adding a '-S' when starting python.
> >
> > -S Disable the import of the module site and the site-dependent
> manipulations
> > of sys.path that it entails.
> >
> > Using 'python -S' didn't appear to help in devstack
> >
> > #!/usr/bin/python -S
> > # PBR Generated from u'console_scripts'
> >
> > import sys
> > import site
> > site.addsitedir('/mnt/stack/oslo.rootwrap/oslo/rootwrap')
> >
> >
> >
> >
> >
> >
> > I am not sure if we can do that for rootwrap.
> >
> >
> > jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
> > hello world
> >
> > real 0m0.021s
> > user 0m0.000s
> > sys 0m0.020s
> > jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
> > hello world
> >
> > real 0m0.021s
> > user 0m0.000s
> > sys 0m0.020s
> > jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
> > hello world
> >
> > real 0m0.010s
> > user 0m0.000s
> > sys 0m0.008s
> >
> > jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
> > hello world
> >
> > real 0m0.010s
> > user 0m0.000s
> > sys 0m0.008s
> >
> >
> >
> > On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo Pelayo <
> > mangel...@redhat.com > wrote:
> >
> >
> > Hi Carl, thank you, good idea.
> >
> > I started reviewing it, but I will do it more carefully tomorrow morning.
> >
> >
> >
> > - Original Message -
> > > All,
> > >
> > > I was writing down a summary of all of this and decided to just do it
> > > on an etherpad. Will you help me capture the big picture there? I'd
> > > like to come up with some actions this week to try to address at least
> > > part of the problem before Icehouse releases.
> > >
> > > https://etherpad.openstack.org/p/neutron-agent-exec-performance
> > >
> > > Carl
> > >
> > > On Mon, Mar 10, 2014 at 5:26 AM, Miguel Angel Ajo <
> majop...@redhat.com >
> > > wrote:
> > > > Hi Yuri & Stephen, thanks a lot for the clarification.
> > > >
> > > > I'm not familiar with unix domain sockets at low level, but , I
> wonder
> > > > if authentication could be achieved just 

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-17 Thread Thierry Carrez
Yuriy Taraday wrote:
> Another option would be to allow rootwrap to run in daemon mode and
> provide RPC interface. This way Neutron can spawn rootwrap (with its
> CPython startup overhead) once and send new commands to be run later
> over UNIX socket.
> This way we won't need to learn a new language (C/C++) or adopt a new
> toolchain (RPython, Cython, whatever else) and still get a secure way
> to run commands with root privileges.

Note that the whole concept behind rootwrap is to limit the amount of
code that runs with elevated privileges. If you end up running a full
service as root which imports as many libraries as the rest of OpenStack
services, then you should seriously consider switching to running your
root-heavy service as root directly, because it won't make that much of
a difference.

I'm not closing the door to a persistent implementation... Just saying
that in order to be useful, it needs to be as minimal as possible (both
in amount of code written and code imported) and as simple as possible
(so that its security model can be easily proven safe).

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-17 Thread IWAMOTO Toshihiro
At Thu, 13 Mar 2014 07:48:53 -0700,
Aaron Rosen wrote:
> 
> [1  ]
> [1.1  ]
> The easiest/quickest thing to do for Icehouse would probably be to run the
> initial sync in parallel like the dhcp-agent does for this exact reason.
> See: https://review.openstack.org/#/c/28914/ which did this for the
> dhcp-agent.
> 
> Best,
> 
> Aaron
> On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo wrote:
> >
> > Yuri, could you elaborate your idea in detail? , I'm lost at some
> > points with your unix domain / token authentication.
> >
> > Where does the token come from?,
> >
> > Who starts rootwrap the first time?
> >
> > If you could write a full interaction sequence, on the etherpad, from
> > rootwrap daemon start ,to a simple call to system happening, I think that'd
> > help my understanding.
> 
> 
> Here it is: https://etherpad.openstack.org/p/rootwrap-agent
> Please take a look.

I've added a couple of security-related comments (pickle decoding and
token leak) on the etherpad.
Please check.

--
IWAMOTO Toshihiro


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-14 Thread Miguel Angel Ajo

As we said in the Thursday meeting, I've filed a bug with the details

https://bugs.launchpad.net/neutron/+bug/1292598

Feel free to add / ask for any missing details.

Best,
Miguel Ángel.

On 03/13/2014 10:52 PM, Carl Baldwin wrote:

Right, the L3 agent does do this already.  Agreed that the limiting
factor is the cumulative effect of the wrappers and executables' start
up overhead.

Carl

On Thu, Mar 13, 2014 at 9:47 AM, Brian Haley  wrote:

Aaron,

I thought the l3-agent already did this if doing a "full sync"?

_sync_routers_task()->_process_routers()->spawn_n(self.process_router, ri)

So each router gets processed in a greenthread.

It seems like the other calls - sudo/rootwrap, /sbin/ip, etc are now the
limiting factor, at least on network nodes with large numbers of namespaces.

-Brian

On 03/13/2014 10:48 AM, Aaron Rosen wrote:

The easiest/quickest thing to do for Icehouse would probably be to run the
initial sync in parallel like the dhcp-agent does for this exact reason. See:
https://review.openstack.org/#/c/28914/ which did this for the dhcp-agent.

Best,

Aaron

On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo mailto:majop...@redhat.com>> wrote:

 Yuri, could you elaborate your idea in detail? , I'm lost at some
 points with your unix domain / token authentication.

 Where does the token come from?,

 Who starts rootwrap the first time?

 If you could write a full interaction sequence, on the etherpad, from
 rootwrap daemon start ,to a simple call to system happening, I think that'd
 help my understanding.


Here it is: https://etherpad.openstack.org/p/rootwrap-agent
Please take a look.

--

Kind regards, Yuriy.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-13 Thread Carl Baldwin
Right, the L3 agent does do this already.  Agreed that the limiting
factor is the cumulative effect of the wrappers and executables' start
up overhead.

Carl

On Thu, Mar 13, 2014 at 9:47 AM, Brian Haley  wrote:
> Aaron,
>
> I thought the l3-agent already did this if doing a "full sync"?
>
> _sync_routers_task()->_process_routers()->spawn_n(self.process_router, ri)
>
> So each router gets processed in a greenthread.
>
> It seems like the other calls - sudo/rootwrap, /sbin/ip, etc are now the
> limiting factor, at least on network nodes with large numbers of namespaces.
>
> -Brian
>
> On 03/13/2014 10:48 AM, Aaron Rosen wrote:
>> The easiest/quickest thing to do for Icehouse would probably be to run the
>> initial sync in parallel like the dhcp-agent does for this exact reason. See:
>> https://review.openstack.org/#/c/28914/ which did this for the dhcp-agent.
>>
>> Best,
>>
>> Aaron
>>
>> On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo > > wrote:
>>
>> Yuri, could you elaborate your idea in detail? , I'm lost at some
>> points with your unix domain / token authentication.
>>
>> Where does the token come from?,
>>
>> Who starts rootwrap the first time?
>>
>> If you could write a full interaction sequence, on the etherpad, from
>> rootwrap daemon start ,to a simple call to system happening, I think 
>> that'd
>> help my understanding.
>>
>>
>> Here it is: https://etherpad.openstack.org/p/rootwrap-agent
>> Please take a look.
>>
>> --
>>
>> Kind regards, Yuriy.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-13 Thread Brian Haley
Aaron,

I thought the l3-agent already did this if doing a "full sync"?

_sync_routers_task()->_process_routers()->spawn_n(self.process_router, ri)

So each router gets processed in a greenthread.
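That fan-out pattern can be illustrated with a stdlib analogue. This is only a sketch: the agent itself uses eventlet's `spawn_n` greenthreads, and `process_router` below is a placeholder for the real per-router work, not the actual agent method.

```python
from concurrent.futures import ThreadPoolExecutor

def process_router(router_id):
    # Placeholder for the real per-router work (namespace setup,
    # iptables rules, interface plugging, ...).
    return "synced %s" % router_id

def sync_routers(router_ids, workers=8):
    """Process routers concurrently instead of serially, mirroring
    _sync_routers_task() -> spawn_n(self.process_router, ri)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves input order while running tasks in parallel.
        return list(pool.map(process_router, router_ids))
```

The concurrency only hides latency per router; as noted above, the per-call sudo/rootwrap startup cost still dominates on nodes with many namespaces.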

It seems like the other calls - sudo/rootwrap, /sbin/ip, etc are now the
limiting factor, at least on network nodes with large numbers of namespaces.

-Brian

On 03/13/2014 10:48 AM, Aaron Rosen wrote:
> The easiest/quickest thing to do for Icehouse would probably be to run the
> initial sync in parallel like the dhcp-agent does for this exact reason. See:
> https://review.openstack.org/#/c/28914/ which did this for the dhcp-agent.
> 
> Best,
> 
> Aaron
> 
> On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo  > wrote:
> 
> Yuri, could you elaborate your idea in detail? , I'm lost at some
> points with your unix domain / token authentication.
> 
> Where does the token come from?,
> 
> Who starts rootwrap the first time?
> 
> If you could write a full interaction sequence, on the etherpad, from
> rootwrap daemon start ,to a simple call to system happening, I think 
> that'd
> help my understanding.
> 
> 
> Here it is: https://etherpad.openstack.org/p/rootwrap-agent
> Please take a look.
> 
> -- 
> 
> Kind regards, Yuriy.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-13 Thread Aaron Rosen
The easiest/quickest thing to do for Icehouse would probably be to run the
initial sync in parallel like the dhcp-agent does for this exact reason.
See: https://review.openstack.org/#/c/28914/ which did this for the
dhcp-agent.

Best,

Aaron
On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo wrote:
>
> Yuri, could you elaborate your idea in detail? I'm lost at some
> points with your unix domain / token authentication.
>
> Where does the token come from?,
>
> Who starts rootwrap the first time?
>
> If you could write a full interaction sequence, on the etherpad, from
> rootwrap daemon start to a simple call to system happening, I think that'd
> help my understanding.


Here it is: https://etherpad.openstack.org/p/rootwrap-agent
Please take a look.

-- 

Kind regards, Yuriy.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-13 Thread Yuriy Taraday
On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo wrote:
>
> Yuri, could you elaborate your idea in detail? I'm lost at some
> points with your unix domain / token authentication.
>
> Where does the token come from?,
>
> Who starts rootwrap the first time?
>
> If you could write a full interaction sequence, on the etherpad, from
> rootwrap daemon start to a simple call to system happening, I think that'd
> help my understanding.


Here it is: https://etherpad.openstack.org/p/rootwrap-agent
Please take a look.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-13 Thread Miguel Angel Ajo


Yuri, could you elaborate your idea in detail? I'm lost at some
points with your unix domain / token authentication.

Where does the token come from?,

Who starts rootwrap the first time?

If you could write a full interaction sequence, on the etherpad, from
rootwrap daemon start to a simple call to system happening, I think
that'd help my understanding.


Best regards,
Miguel Ángel.


On 03/13/2014 07:42 AM, Yuriy Taraday wrote:

On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo mailto:majop...@redhat.com>> wrote:

I'm not familiar with unix domain sockets at low level, but I wonder
if authentication could be achieved just with permissions (only
users in group "neutron" or group "rootwrap" accessing this service).


It can be enforced, but it is not needed at all (see below).

I find it an interesting alternative, to the other proposed
solutions, but there are some challenges associated with this
solution, which could make it more complicated:

1) Access control, file system permission based or token based,


If we pass the token to the calling process through a pipe bound to
stdout, it won't be intercepted so token-based authentication for
further requests is secure enough.
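A minimal sketch of that handoff (all names here are illustrative, and a real daemon would be spawned through sudo/rootwrap rather than `sys.executable`): the privileged child writes a random token to stdout, a pipe whose read end only the parent holds, so no other local user can observe the token in transit.

```python
import subprocess
import sys

# Code the privileged "daemon" would run at startup: generate a random
# token and hand it back over the stdout pipe to the parent.
DAEMON_CODE = r"""
import binascii, os, sys
sys.stdout.write(binascii.hexlify(os.urandom(16)).decode() + "\n")
sys.stdout.flush()
"""

def spawn_daemon():
    """Spawn the daemon and read the auth token from its stdout pipe."""
    proc = subprocess.Popen([sys.executable, "-c", DAEMON_CODE],
                            stdout=subprocess.PIPE)
    token = proc.stdout.readline().strip().decode()
    proc.wait()
    return token
```

In the real proposal the daemon keeps running and the token authenticates every later request; here the child exits after the handshake just to keep the sketch short.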

2) stdout/stderr/return encapsulation/forwarding to the caller,
if we have a simple/fast RPC mechanism we can use, it's a matter
of serializing a dictionary.


RPC implementation in multiprocessing module uses either xmlrpclib or
pickle-based RPC. It should be enough to pass output of a command.
If we ever hit performance problem with passing long strings we can even
pass opened pipe's descriptors over UNIX socket to let caller interact
with spawned process directly.

3) client side implementation for 1 + 2.


Most of the code should be placed in oslo.rootwrap. Services using it
should replace calls to root_helper with appropriate client calls like
this:

if run_as_root:
    if CONF.use_rootwrap_daemon:
        oslo.rootwrap.client.call(cmd)

All logic around spawning rootwrap daemon and interacting with it should
be hidden so that changes to services will be minimum.

4) It would need to accept new domain socket connections in green
threads to avoid spawning a new process to handle a new connection.


We can do connection pooling if we ever run into performance problems
with connecting new socket for every rootwrap call (which is unlikely).
On the daemon side I would avoid using fancy libraries (eventlet), both
because of the new fat requirement for oslo.rootwrap (it currently
depends only on six) and because of running more possibly buggy and
unsafe code with elevated privileges.
Simple threaded daemon should be enough given it will handle needs of
only one service process.
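A sketch of such a threaded daemon using the stdlib `multiprocessing.connection` machinery mentioned above. Its built-in `authkey` HMAC handshake stands in for the token scheme; the command whitelist, function names, and `echo`-only filter are my own illustrations, not the eventual oslo.rootwrap design.

```python
import os
import subprocess
import threading
from multiprocessing.connection import Client, Listener

def serve(listener, allowed):
    """Accept one client and run whitelisted argv lists it sends."""
    conn = listener.accept()
    while True:
        try:
            cmd = conn.recv()  # e.g. ["ip", "a"]
        except EOFError:
            break
        if not cmd or cmd[0] not in allowed:
            conn.send((1, "", "filtered: %r" % (cmd,)))
            continue
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()
        conn.send((proc.returncode, out.decode(), err.decode()))
    conn.close()

def start_daemon(allowed=("echo",)):
    """Start the daemon thread; return (address, authkey) for clients."""
    authkey = os.urandom(16)
    listener = Listener(authkey=authkey)  # HMAC-authenticated connections
    thread = threading.Thread(target=serve, args=(listener, allowed))
    thread.daemon = True
    thread.start()
    return listener.address, authkey

# Client side (roughly what oslo.rootwrap.client.call(cmd) would hide):
#   address, authkey = start_daemon()
#   conn = Client(address, authkey=authkey)
#   conn.send(["echo", "hello"])
#   rc, out, err = conn.recv()
```

In the real setup the serve loop would run as root and `start_daemon` would live in the unprivileged service; the filter step is where the existing rootwrap matching logic would plug in.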

The advantages:
* we wouldn't need to break the only-python-rule.
* we don't need to rewrite/translate rootwrap.

The disadvantages:
   * it needs changes on the client side (neutron + other projects),


As I said, changes should be minimal.

--

Kind regards, Yuriy.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-13 Thread Yuriy Taraday
On Tue, Mar 11, 2014 at 12:58 AM, Carl Baldwin  wrote:

> All,
>
> I was writing down a summary of all of this and decided to just do it
> on an etherpad.  Will you help me capture the big picture there?  I'd
> like to come up with some actions this week to try to address at least
> part of the problem before Icehouse releases.
>
> https://etherpad.openstack.org/p/neutron-agent-exec-performance
>

Great idea! I've added some details on my proposal there.

As for your proposed multitool, I'm very concerned about moving logic
into a bash script. I think we should stick with a Python-based agent
rather than a bash-based one.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-12 Thread Yuriy Taraday
On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo wrote:

> I'm not familiar with UNIX domain sockets at a low level, but I wonder
> if authentication could be achieved just with permissions (only users
> in group "neutron" or group "rootwrap" accessing this service).
>

It can be enforced, but it is not needed at all (see below).


> I find it an interesting alternative to the other proposed solutions,
> but there are some challenges associated with it which could make it
> more complicated:
>
> 1) Access control, file system permission based or token based,
>

If we pass the token to the calling process through a pipe bound to
stdout, it won't be intercepted, so token-based authentication for
further requests is secure enough.
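A rough sketch of that token handshake, assuming the daemon hands the
token to its parent over a private pipe (all names here are
illustrative, not an actual rootwrap interface):

```python
import os
import secrets

# "Daemon" side: generate a random token and write it to a pipe shared
# only with the parent process, so other users who can reach the
# socket still can't authenticate.
read_end, write_end = os.pipe()
token = secrets.token_hex(16)
os.write(write_end, token.encode())
os.close(write_end)

# Client (parent) side: read the token from its end of the pipe.
client_token = os.read(read_end, 64).decode()
os.close(read_end)

def handle_request(request_token, cmd):
    """Daemon-side check: reject requests without the shared token."""
    if request_token != token:
        raise PermissionError("bad token")
    return "executed: " + cmd
```

Because the pipe exists only between the daemon and its parent, no
other local process ever sees the token in transit.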

2) stdout/stderr/return encapsulation/forwarding to the caller,
>if we have a simple/fast RPC mechanism we can use, it's a matter
>of serializing a dictionary.
>

The RPC implementation in the multiprocessing module uses either
xmlrpclib or pickle-based serialization. That should be enough to pass
the output of a command.
If we ever hit performance problems with passing long strings, we can
even pass an opened pipe's descriptors over the UNIX socket to let the
caller interact with the spawned process directly.
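Descriptor passing over a UNIX socket can be sketched like this, using
`socket.send_fds`/`socket.recv_fds` (available since Python 3.9); a
socketpair stands in for the caller's connection to the daemon:

```python
import os
import socket

# A socketpair stands in for the caller's connection to the daemon.
caller, daemon = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# "Daemon" side: create a pipe (as if wired to a spawned process's
# stdout) and send its read end over the UNIX socket via SCM_RIGHTS.
read_end, write_end = os.pipe()
socket.send_fds(daemon, [b"fd"], [read_end])
os.close(read_end)  # the caller now owns a duplicate of this fd

# Something writes into the pipe (the spawned command, in real life).
os.write(write_end, b"command output")
os.close(write_end)

# "Caller" side: receive the descriptor and read from it directly,
# without the daemon relaying the data.
msg, fds, flags, addr = socket.recv_fds(caller, 16, 1)
data = os.read(fds[0], 64)
```

After the handoff, long output streams never pass through the daemon at
all; the caller reads the spawned process's pipe directly.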


> 3) client side implementation for 1 + 2.
>

Most of the code should be placed in oslo.rootwrap. Services using it
should replace calls to root_helper with appropriate client calls like
this:

if run_as_root:
    if CONF.use_rootwrap_daemon:
        oslo.rootwrap.client.call(cmd)

All logic around spawning the rootwrap daemon and interacting with it
should be hidden so that changes to services will be minimal.

4) It would need to accept new domain socket connections in green threads
> to avoid spawning a new process to handle a new connection.
>

We can do connection pooling if we ever run into performance problems
with connecting a new socket for every rootwrap call (which is unlikely).
On the daemon side I would avoid fancy libraries (eventlet), both
because that would add a fat new requirement to oslo.rootwrap (it
currently depends only on six) and because it would run more possibly
buggy and unsafe code with elevated privileges.
A simple threaded daemon should be enough, given that it will handle
the needs of only one service process.
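Such a threaded daemon needs nothing beyond the standard library. A
minimal sketch (the socket path and one-command-per-line wire protocol
are invented for illustration; a real daemon would match each command
against the rootwrap filters before executing it):

```python
import os
import socketserver
import subprocess
import tempfile
import threading

class RootwrapHandler(socketserver.StreamRequestHandler):
    """Each connection is served in its own thread -- no eventlet."""
    def handle(self):
        # Toy wire protocol: one whitespace-separated command per line.
        cmd = self.rfile.readline().decode().split()
        # A real daemon would apply rootwrap filters to cmd here.
        out = subprocess.run(cmd, capture_output=True).stdout
        self.wfile.write(out)

# Illustrative socket path; a real daemon would use a fixed location
# with restrictive permissions.
sock_path = os.path.join(tempfile.mkdtemp(), "rootwrap.sock")
server = socketserver.ThreadingUnixStreamServer(sock_path, RootwrapHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
```

One OS thread per connection is cheap here because the daemon serves
exactly one service process, not arbitrary concurrent clients.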


> The advantages:
>* we wouldn't need to break the only-python-rule.
>* we don't need to rewrite/translate rootwrap.
>
> The disadvantages:
>   * it needs changes on the client side (neutron + other projects),
>

As I said, changes should be minimal.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-11 Thread Miguel Angel Ajo Pelayo

I have included on the etherpad, the option to write a sudo 
plugin (or several), specific for openstack.


And this is a test with Shed Skin; I suppose that in more complicated
dependency scenarios it should perform better.

[majopela@redcylon tmp]$ cat <<EOF > test.py
> import sys
> print "hello world"
> sys.exit(0)
> EOF

[majopela@redcylon tmp]$ time python test.py
hello world

real    0m0.016s
user    0m0.015s
sys     0m0.001s


[majopela@redcylon tmp]$ shedskin test.py
*** SHED SKIN Python-to-C++ Compiler 0.9.4 ***
Copyright 2005-2011 Mark Dufour; License GNU GPL version 3 (See LICENSE)

[analyzing types..]
100% 
[generating c++ code..]
[elapsed time: 1.59 seconds]
[majopela@redcylon tmp]$ make 
g++  -O2 -march=native -Wno-deprecated  -I. 
-I/usr/lib/python2.7/site-packages/shedskin/lib /tmp/test.cpp 
/usr/lib/python2.7/site-packages/shedskin/lib/sys.cpp 
/usr/lib/python2.7/site-packages/shedskin/lib/re.cpp 
/usr/lib/python2.7/site-packages/shedskin/lib/builtin.cpp -lgc -lpcre  -o test
[majopela@redcylon tmp]$ time ./test
hello world

real    0m0.003s
user    0m0.000s
sys     0m0.002s


- Original Message -
> We had this same issue with the dhcp-agent. Code was added that paralleled
> the initial sync here: https://review.openstack.org/#/c/28914/ that made
> things a good bit faster if I remember correctly. Might be worth doing
> something similar for the l3-agent.
> 
> Best,
> 
> Aaron
> 
> 
> On Mon, Mar 10, 2014 at 5:07 PM, Joe Gordon < joe.gord...@gmail.com > wrote:
> 
> 
> 
> 
> 
> 
> On Mon, Mar 10, 2014 at 3:57 PM, Joe Gordon < joe.gord...@gmail.com > wrote:
> 
> 
> 
> I looked into the python to C options and haven't found anything promising
> yet.
> 
> 
> I tried Cython and RPython on a trivial hello world app, but got
> similar startup times to standard Python.
> 
> The one thing that did work was adding a '-S' when starting python.
> 
> -S Disable the import of the module site and the site-dependent manipulations
> of sys.path that it entails.
> 
> Using 'python -S' didn't appear to help in devstack
> 
> #!/usr/bin/python -S
> # PBR Generated from u'console_scripts'
> 
> import sys
> import site
> site.addsitedir('/mnt/stack/oslo.rootwrap/oslo/rootwrap')
> 
> 
> 
> 
> 
> 
> I am not sure if we can do that for rootwrap.
> 
> 
> jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
> hello world
> 
> real 0m0.021s
> user 0m0.000s
> sys 0m0.020s
> jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
> hello world
> 
> real 0m0.021s
> user 0m0.000s
> sys 0m0.020s
> jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
> hello world
> 
> real 0m0.010s
> user 0m0.000s
> sys 0m0.008s
> 
> jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
> hello world
> 
> real 0m0.010s
> user 0m0.000s
> sys 0m0.008s
> 
> 
> 
> On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo Pelayo <
> mangel...@redhat.com > wrote:
> 
> 
> Hi Carl, thank you, good idea.
> 
> I started reviewing it, but I will do it more carefully tomorrow morning.
> 
> 
> 
> - Original Message -
> > All,
> > 
> > I was writing down a summary of all of this and decided to just do it
> > on an etherpad. Will you help me capture the big picture there? I'd
> > like to come up with some actions this week to try to address at least
> > part of the problem before Icehouse releases.
> > 
> > https://etherpad.openstack.org/p/neutron-agent-exec-performance
> > 
> > Carl
> > 
> > On Mon, Mar 10, 2014 at 5:26 AM, Miguel Angel Ajo < majop...@redhat.com >
> > wrote:
> > > Hi Yuri & Stephen, thanks a lot for the clarification.
> > > 
> > > I'm not familiar with unix domain sockets at low level, but , I wonder
> > > if authentication could be achieved just with permissions (only users in
> > > group "neutron" or group "rootwrap" accessing this service.
> > > 
> > > I find it an interesting alternative, to the other proposed solutions,
> > > but
> > > there are some challenges associated with this solution, which could make
> > > it
> > > more complicated:
> > > 
> > > 1) Access control, file system permission based or token based,
> > > 
> > > 2) stdout/stderr/return encapsulation/forwarding to the caller,
> > > if we have a simple/fast RPC mechanism we can use, it's a matter
> > > of serializing a dictionary.
> > > 
> > > 3) client side implementation for 1 + 2.
> > > 
> > > 4) It would need to accept new domain socket connections in green threads
> > > to
> > > avoid spawning a new process to handle a new connection.
> > > 
> > > The advantages:
> > > * we wouldn't need to break the only-python-rule.
> > > * we don't need to rewrite/translate rootwrap.
> > > 
> > > The disadvantages:
> > > * it needs changes on the client side (neutron + other projects),
> > > 
> > > 
> > > Cheers,
> > > Miguel Ángel.
> > > 
> > > 
> > > 
> > > On 03/08/2014 07:09 AM, Yuriy Taraday wrote:
> > >> 
> > >> On Fri, Mar 7, 2014 at 5:41 PM, Stephen Gran
> > >> < stephen.g...@theguardian.com >
> > >> wrote:
> > >> 
> > >> Hi,
> > >> 
> > >> Given that Y

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-10 Thread Aaron Rosen
We had this same issue with the dhcp-agent. Code was added that paralleled
the initial sync here: https://review.openstack.org/#/c/28914/  that made
things a good bit faster if I remember correctly.  Might be worth doing
something similar for the l3-agent.

Best,

Aaron


On Mon, Mar 10, 2014 at 5:07 PM, Joe Gordon  wrote:

>
>
>
> On Mon, Mar 10, 2014 at 3:57 PM, Joe Gordon  wrote:
>
>> I looked into the python to C options and haven't found anything
>> promising yet.
>>
>>
>> I tried Cython and RPython on a trivial hello world app, but got
>> similar startup times to standard Python.
>>
>> The one thing that did work was adding a '-S' when starting python.
>>
>>-S Disable the import of the module site and the
>> site-dependent manipulations of sys.path that it entails.
>>
>
> Using 'python -S' didn't appear to help in devstack
>
> #!/usr/bin/python -S
> # PBR Generated from u'console_scripts'
>
> import sys
> import site
> site.addsitedir('/mnt/stack/oslo.rootwrap/oslo/rootwrap')
>
>
>
>
>>
>> I am not sure if we can do that for rootwrap.
>>
>>
>> jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
>> hello world
>>
>> real    0m0.021s
>> user    0m0.000s
>> sys     0m0.020s
>> jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
>> hello world
>>
>> real    0m0.021s
>> user    0m0.000s
>> sys     0m0.020s
>> jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
>> hello world
>>
>> real    0m0.010s
>> user    0m0.000s
>> sys     0m0.008s
>>
>> jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
>> hello world
>>
>> real    0m0.010s
>> user    0m0.000s
>> sys     0m0.008s
>>
>>
>>
>> On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo Pelayo <
>> mangel...@redhat.com> wrote:
>>
>>> Hi Carl, thank you, good idea.
>>>
>>> I started reviewing it, but I will do it more carefully tomorrow morning.
>>>
>>>
>>>
>>> - Original Message -
>>> > All,
>>> >
>>> > I was writing down a summary of all of this and decided to just do it
>>> > on an etherpad.  Will you help me capture the big picture there?  I'd
>>> > like to come up with some actions this week to try to address at least
>>> > part of the problem before Icehouse releases.
>>> >
>>> > https://etherpad.openstack.org/p/neutron-agent-exec-performance
>>> >
>>> > Carl
>>> >
>>> > On Mon, Mar 10, 2014 at 5:26 AM, Miguel Angel Ajo >> >
>>> > wrote:
>>> > > Hi Yuri & Stephen, thanks a lot for the clarification.
>>> > >
>>> > > I'm not familiar with unix domain sockets at low level, but , I
>>> wonder
>>> > > if authentication could be achieved just with permissions (only
>>> users in
>>> > > group "neutron" or group "rootwrap" accessing this service.
>>> > >
>>> > > I find it an interesting alternative, to the other proposed
>>> solutions, but
>>> > > there are some challenges associated with this solution, which could
>>> make
>>> > > it
>>> > > more complicated:
>>> > >
>>> > > 1) Access control, file system permission based or token based,
>>> > >
>>> > > 2) stdout/stderr/return encapsulation/forwarding to the caller,
>>> > >if we have a simple/fast RPC mechanism we can use, it's a matter
>>> > >of serializing a dictionary.
>>> > >
>>> > > 3) client side implementation for 1 + 2.
>>> > >
>>> > > 4) It would need to accept new domain socket connections in green
>>> threads
>>> > > to
>>> > > avoid spawning a new process to handle a new connection.
>>> > >
>>> > > The advantages:
>>> > >* we wouldn't need to break the only-python-rule.
>>> > >* we don't need to rewrite/translate rootwrap.
>>> > >
>>> > > The disadvantages:
>>> > >   * it needs changes on the client side (neutron + other projects),
>>> > >
>>> > >
>>> > > Cheers,
>>> > > Miguel Ángel.
>>> > >
>>> > >
>>> > >
>>> > > On 03/08/2014 07:09 AM, Yuriy Taraday wrote:
>>> > >>
>>> > >> On Fri, Mar 7, 2014 at 5:41 PM, Stephen Gran
>>> > >> mailto:stephen.g...@theguardian.com
>>> >>
>>> > >> wrote:
>>> > >>
>>> > >> Hi,
>>> > >>
>>> > >> Given that Yuriy says explicitly 'unix socket', I dont think he
>>> > >> means 'MQ' when he says 'RPC'.  I think he just means a daemon
>>> > >> listening on a unix socket for execution requests.  This seems
>>> like
>>> > >> a reasonably sensible idea to me.
>>> > >>
>>> > >>
>>> > >> Yes, you're right.
>>> > >>
>>> > >> On 07/03/14 12:52, Miguel Angel Ajo wrote:
>>> > >>
>>> > >>
>>> > >> I thought of this option but didn't consider it, as it's
>>> > >> somehow risky to expose an RPC endpoint executing privileged
>>> > >> (even filtered) commands.
>>> > >>
>>> > >>
>>> > >> The subprocess module has some means to do RPC securely over
>>> > >> UNIX sockets. It does this by passing a token along with
>>> > >> messages. It should be secure because with UNIX sockets we
>>> > >> don't need anything stronger, since MITM attacks are not
>>> > >> possible.
>>> > >>
>>> > >> If I'm not wrong, once you have credentials for messaging,
>>> you can
>>> > >> send messages to any end, even filtered

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-10 Thread Joe Gordon
On Mon, Mar 10, 2014 at 3:57 PM, Joe Gordon  wrote:

> I looked into the python to C options and haven't found anything promising
> yet.
>
>
> I tried Cython and RPython on a trivial hello world app, but got
> similar startup times to standard Python.
>
> The one thing that did work was adding a '-S' when starting python.
>
>-S Disable the import of the module site and the site-dependent
> manipulations of sys.path that it entails.
>

Using 'python -S' didn't appear to help in devstack

#!/usr/bin/python -S
# PBR Generated from u'console_scripts'

import sys
import site
site.addsitedir('/mnt/stack/oslo.rootwrap/oslo/rootwrap')




>
> I am not sure if we can do that for rootwrap.
>
>
> jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
> hello world
>
> real    0m0.021s
> user    0m0.000s
> sys     0m0.020s
> jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
> hello world
>
> real    0m0.021s
> user    0m0.000s
> sys     0m0.020s
> jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
> hello world
>
> real    0m0.010s
> user    0m0.000s
> sys     0m0.008s
>
> jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
> hello world
>
> real    0m0.010s
> user    0m0.000s
> sys     0m0.008s
>
>
>
> On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo Pelayo <
> mangel...@redhat.com> wrote:
>
>> Hi Carl, thank you, good idea.
>>
>> I started reviewing it, but I will do it more carefully tomorrow morning.
>>
>>
>>
>> - Original Message -
>> > All,
>> >
>> > I was writing down a summary of all of this and decided to just do it
>> > on an etherpad.  Will you help me capture the big picture there?  I'd
>> > like to come up with some actions this week to try to address at least
>> > part of the problem before Icehouse releases.
>> >
>> > https://etherpad.openstack.org/p/neutron-agent-exec-performance
>> >
>> > Carl
>> >
>> > On Mon, Mar 10, 2014 at 5:26 AM, Miguel Angel Ajo 
>> > wrote:
>> > > Hi Yuri & Stephen, thanks a lot for the clarification.
>> > >
>> > > I'm not familiar with unix domain sockets at low level, but , I wonder
>> > > if authentication could be achieved just with permissions (only users
>> in
>> > > group "neutron" or group "rootwrap" accessing this service.
>> > >
>> > > I find it an interesting alternative, to the other proposed
>> solutions, but
>> > > there are some challenges associated with this solution, which could
>> make
>> > > it
>> > > more complicated:
>> > >
>> > > 1) Access control, file system permission based or token based,
>> > >
>> > > 2) stdout/stderr/return encapsulation/forwarding to the caller,
>> > >if we have a simple/fast RPC mechanism we can use, it's a matter
>> > >of serializing a dictionary.
>> > >
>> > > 3) client side implementation for 1 + 2.
>> > >
>> > > 4) It would need to accept new domain socket connections in green
>> threads
>> > > to
>> > > avoid spawning a new process to handle a new connection.
>> > >
>> > > The advantages:
>> > >* we wouldn't need to break the only-python-rule.
>> > >* we don't need to rewrite/translate rootwrap.
>> > >
>> > > The disadvantages:
>> > >   * it needs changes on the client side (neutron + other projects),
>> > >
>> > >
>> > > Cheers,
>> > > Miguel Ángel.
>> > >
>> > >
>> > >
>> > > On 03/08/2014 07:09 AM, Yuriy Taraday wrote:
>> > >>
>> > >> On Fri, Mar 7, 2014 at 5:41 PM, Stephen Gran
>> > >> mailto:stephen.g...@theguardian.com>>
>> > >> wrote:
>> > >>
>> > >> Hi,
>> > >>
>> > >> Given that Yuriy says explicitly 'unix socket', I dont think he
>> > >> means 'MQ' when he says 'RPC'.  I think he just means a daemon
>> > >> listening on a unix socket for execution requests.  This seems
>> like
>> > >> a reasonably sensible idea to me.
>> > >>
>> > >>
>> > >> Yes, you're right.
>> > >>
>> > >> On 07/03/14 12:52, Miguel Angel Ajo wrote:
>> > >>
>> > >>
>> > >> I thought of this option but didn't consider it, as it's
>> > >> somehow risky to expose an RPC endpoint executing privileged
>> > >> (even filtered) commands.
>> > >>
>> > >>
>> > >> The subprocess module has some means to do RPC securely over
>> > >> UNIX sockets. It does this by passing a token along with
>> > >> messages. It should be secure because with UNIX sockets we don't
>> > >> need anything stronger, since MITM attacks are not possible.
>> > >>
>> > >> If I'm not wrong, once you have credentials for messaging,
>> you can
>> > >> send messages to any end, even filtered, I somehow see this
>> as a
>> > >> higher
>> > >> risk option.
>> > >>
>> > >>
>> > >> As Stephen noted, I'm not talking about using MQ for RPC. Just some
>> > >> local UNIX socket with very simple RPC over it.
>> > >>
>> > >> And btw, if we add RPC in the middle, it's possible that all
>> > >> those system call delays increase, or don't decrease as much as
>> > >> would be desirable.
>> > >>
>> > >>
>> > >> Every call to rootwrap would require the following.
>> > >>
>

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-10 Thread Joe Gordon
I looked into the python to C options and haven't found anything promising
yet.


I tried Cython and RPython on a trivial hello world app, but got similar
startup times to standard Python.

The one thing that did work was adding a '-S' when starting python.

   -S Disable the import of the module site and the site-dependent
manipulations of sys.path that it entails.

I am not sure if we can do that for rootwrap.


jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
hello world

real    0m0.021s
user    0m0.000s
sys     0m0.020s
jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
hello world

real    0m0.021s
user    0m0.000s
sys     0m0.020s
jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
hello world

real    0m0.010s
user    0m0.000s
sys     0m0.008s

jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
hello world

real    0m0.010s
user    0m0.000s
sys     0m0.008s



On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo Pelayo <
mangel...@redhat.com> wrote:

> Hi Carl, thank you, good idea.
>
> I started reviewing it, but I will do it more carefully tomorrow morning.
>
>
>
> - Original Message -
> > All,
> >
> > I was writing down a summary of all of this and decided to just do it
> > on an etherpad.  Will you help me capture the big picture there?  I'd
> > like to come up with some actions this week to try to address at least
> > part of the problem before Icehouse releases.
> >
> > https://etherpad.openstack.org/p/neutron-agent-exec-performance
> >
> > Carl
> >
> > On Mon, Mar 10, 2014 at 5:26 AM, Miguel Angel Ajo 
> > wrote:
> > > Hi Yuri & Stephen, thanks a lot for the clarification.
> > >
> > > I'm not familiar with unix domain sockets at low level, but , I wonder
> > > if authentication could be achieved just with permissions (only users
> in
> > > group "neutron" or group "rootwrap" accessing this service.
> > >
> > > I find it an interesting alternative, to the other proposed solutions,
> but
> > > there are some challenges associated with this solution, which could
> make
> > > it
> > > more complicated:
> > >
> > > 1) Access control, file system permission based or token based,
> > >
> > > 2) stdout/stderr/return encapsulation/forwarding to the caller,
> > >if we have a simple/fast RPC mechanism we can use, it's a matter
> > >of serializing a dictionary.
> > >
> > > 3) client side implementation for 1 + 2.
> > >
> > > 4) It would need to accept new domain socket connections in green
> threads
> > > to
> > > avoid spawning a new process to handle a new connection.
> > >
> > > The advantages:
> > >* we wouldn't need to break the only-python-rule.
> > >* we don't need to rewrite/translate rootwrap.
> > >
> > > The disadvantages:
> > >   * it needs changes on the client side (neutron + other projects),
> > >
> > >
> > > Cheers,
> > > Miguel Ángel.
> > >
> > >
> > >
> > > On 03/08/2014 07:09 AM, Yuriy Taraday wrote:
> > >>
> > >> On Fri, Mar 7, 2014 at 5:41 PM, Stephen Gran
> > >> mailto:stephen.g...@theguardian.com>>
> > >> wrote:
> > >>
> > >> Hi,
> > >>
> > >> Given that Yuriy says explicitly 'unix socket', I dont think he
> > >> means 'MQ' when he says 'RPC'.  I think he just means a daemon
> > >> listening on a unix socket for execution requests.  This seems
> like
> > >> a reasonably sensible idea to me.
> > >>
> > >>
> > >> Yes, you're right.
> > >>
> > >> On 07/03/14 12:52, Miguel Angel Ajo wrote:
> > >>
> > >>
> > >> I thought of this option but didn't consider it, as it's
> > >> somehow risky to expose an RPC endpoint executing privileged
> > >> (even filtered) commands.
> > >>
> > >>
> > >> The subprocess module has some means to do RPC securely over UNIX
> > >> sockets. It does this by passing a token along with messages. It
> > >> should be secure because with UNIX sockets we don't need anything
> > >> stronger, since MITM attacks are not possible.
> > >>
> > >> If I'm not wrong, once you have credentials for messaging,
> you can
> > >> send messages to any end, even filtered, I somehow see this
> as a
> > >> higher
> > >> risk option.
> > >>
> > >>
> > >> As Stephen noted, I'm not talking about using MQ for RPC. Just some
> > >> local UNIX socket with very simple RPC over it.
> > >>
> > >> And btw, if we add RPC in the middle, it's possible that all
> > >> those system call delays increase, or don't decrease as much as
> > >> would be desirable.
> > >>
> > >>
> > >> Every call to rootwrap would require the following.
> > >>
> > >> Client side:
> > >> - new client socket;
> > >> - one message sent;
> > >> - one message received.
> > >>
> > >> Server side:
> > >> - accepting new connection;
> > >> - one message received;
> > >> - one fork-exec;
> > >> - one message sent.
> > >>
> > >> This looks way simpler than passing through sudo and rootwrap,
> > >> which requires three exec's and a whole lot of configuration
> > >> files being opened and parsed.
> > >>
> > >> --
>

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-10 Thread Miguel Angel Ajo Pelayo
Hi Carl, thank you, good idea.

I started reviewing it, but I will do it more carefully tomorrow morning.



- Original Message -
> All,
> 
> I was writing down a summary of all of this and decided to just do it
> on an etherpad.  Will you help me capture the big picture there?  I'd
> like to come up with some actions this week to try to address at least
> part of the problem before Icehouse releases.
> 
> https://etherpad.openstack.org/p/neutron-agent-exec-performance
> 
> Carl
> 
> On Mon, Mar 10, 2014 at 5:26 AM, Miguel Angel Ajo 
> wrote:
> > Hi Yuri & Stephen, thanks a lot for the clarification.
> >
> > I'm not familiar with UNIX domain sockets at a low level, but I
> > wonder if authentication could be achieved just with permissions
> > (only users in group "neutron" or group "rootwrap" accessing this
> > service).
> >
> > I find it an interesting alternative, to the other proposed solutions, but
> > there are some challenges associated with this solution, which could make
> > it
> > more complicated:
> >
> > 1) Access control, file system permission based or token based,
> >
> > 2) stdout/stderr/return encapsulation/forwarding to the caller,
> >if we have a simple/fast RPC mechanism we can use, it's a matter
> >of serializing a dictionary.
> >
> > 3) client side implementation for 1 + 2.
> >
> > 4) It would need to accept new domain socket connections in green threads
> > to
> > avoid spawning a new process to handle a new connection.
> >
> > The advantages:
> >* we wouldn't need to break the only-python-rule.
> >* we don't need to rewrite/translate rootwrap.
> >
> > The disadvantages:
> >   * it needs changes on the client side (neutron + other projects),
> >
> >
> > Cheers,
> > Miguel Ángel.
> >
> >
> >
> > On 03/08/2014 07:09 AM, Yuriy Taraday wrote:
> >>
> >> On Fri, Mar 7, 2014 at 5:41 PM, Stephen Gran
> >> mailto:stephen.g...@theguardian.com>>
> >> wrote:
> >>
> >> Hi,
> >>
> >> Given that Yuriy says explicitly 'unix socket', I dont think he
> >> means 'MQ' when he says 'RPC'.  I think he just means a daemon
> >> listening on a unix socket for execution requests.  This seems like
> >> a reasonably sensible idea to me.
> >>
> >>
> >> Yes, you're right.
> >>
> >> On 07/03/14 12:52, Miguel Angel Ajo wrote:
> >>
> >>
> >> I thought of this option but didn't consider it, as it's somehow
> >> risky to expose an RPC endpoint executing privileged (even
> >> filtered) commands.
> >>
> >>
> >> The subprocess module has some means to do RPC securely over UNIX
> >> sockets. It does this by passing a token along with messages. It
> >> should be secure because with UNIX sockets we don't need anything
> >> stronger, since MITM attacks are not possible.
> >>
> >> If I'm not wrong, once you have credentials for messaging, you can
> >> send messages to any end, even filtered, I somehow see this as a
> >> higher
> >> risk option.
> >>
> >>
> >> As Stephen noted, I'm not talking about using MQ for RPC. Just some
> >> local UNIX socket with very simple RPC over it.
> >>
> >> And btw, if we add RPC in the middle, it's possible that all those
> >> system call delays increase, or don't decrease as much as would be
> >> desirable.
> >>
> >>
> >> Every call to rootwrap would require the following.
> >>
> >> Client side:
> >> - new client socket;
> >> - one message sent;
> >> - one message received.
> >>
> >> Server side:
> >> - accepting new connection;
> >> - one message received;
> >> - one fork-exec;
> >> - one message sent.
> >>
> >> This looks way simpler than passing through sudo and rootwrap,
> >> which requires three exec's and a whole lot of configuration files
> >> being opened and parsed.
> >>
> >> --
> >>
> >> Kind regards, Yuriy.
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-10 Thread Carl Baldwin
All,

I was writing down a summary of all of this and decided to just do it
on an etherpad.  Will you help me capture the big picture there?  I'd
like to come up with some actions this week to try to address at least
part of the problem before Icehouse releases.

https://etherpad.openstack.org/p/neutron-agent-exec-performance

Carl

On Mon, Mar 10, 2014 at 5:26 AM, Miguel Angel Ajo  wrote:
> Hi Yuri & Stephen, thanks a lot for the clarification.
>
> I'm not familiar with UNIX domain sockets at a low level, but I wonder
> if authentication could be achieved just with permissions (only users
> in group "neutron" or group "rootwrap" accessing this service).
>
> I find it an interesting alternative, to the other proposed solutions, but
> there are some challenges associated with this solution, which could make it
> more complicated:
>
> 1) Access control, file system permission based or token based,
>
> 2) stdout/stderr/return encapsulation/forwarding to the caller,
>if we have a simple/fast RPC mechanism we can use, it's a matter
>of serializing a dictionary.
>
> 3) client side implementation for 1 + 2.
>
> 4) It would need to accept new domain socket connections in green threads to
> avoid spawning a new process to handle a new connection.
>
> The advantages:
>* we wouldn't need to break the only-python-rule.
>* we don't need to rewrite/translate rootwrap.
>
> The disadvantages:
>   * it needs changes on the client side (neutron + other projects),
>
>
> Cheers,
> Miguel Ángel.
>
>
>
> On 03/08/2014 07:09 AM, Yuriy Taraday wrote:
>>
>> On Fri, Mar 7, 2014 at 5:41 PM, Stephen Gran
>> mailto:stephen.g...@theguardian.com>>
>> wrote:
>>
>> Hi,
>>
>> Given that Yuriy says explicitly 'unix socket', I dont think he
>> means 'MQ' when he says 'RPC'.  I think he just means a daemon
>> listening on a unix socket for execution requests.  This seems like
>> a reasonably sensible idea to me.
>>
>>
>> Yes, you're right.
>>
>> On 07/03/14 12:52, Miguel Angel Ajo wrote:
>>
>>
>> I thought of this option but didn't consider it, as it's somehow
>> risky to expose an RPC endpoint executing privileged (even filtered)
>> commands.
>>
>>
>> The subprocess module has some means to do RPC securely over UNIX
>> sockets. It does this by passing a token along with messages. It
>> should be secure because with UNIX sockets we don't need anything
>> stronger, since MITM attacks are not possible.
>>
>> If I'm not wrong, once you have credentials for messaging, you can
>> send messages to any end, even filtered, I somehow see this as a
>> higher
>> risk option.
>>
>>
>> As Stephen noted, I'm not talking about using MQ for RPC. Just some
>> local UNIX socket with very simple RPC over it.
>>
>> And btw, if we add RPC in the middle, it's possible that all those
>> system call delays increase, or don't decrease as much as would be
>> desirable.
>>
>>
>> Every call to rootwrap would require the following.
>>
>> Client side:
>> - new client socket;
>> - one message sent;
>> - one message received.
>>
>> Server side:
>> - accepting new connection;
>> - one message received;
>> - one fork-exec;
>> - one message sent.
>>
>> This looks way simpler than passing through sudo and rootwrap, which
>> requires three exec's and a whole lot of configuration files being
>> opened and parsed.
>>
>> --
>>
>> Kind regards, Yuriy.
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-10 Thread Miguel Angel Ajo

Hi Yuri & Stephen, thanks a lot for the clarification.

I'm not familiar with UNIX domain sockets at a low level, but I wonder
if authentication could be achieved just with permissions (only users in
group "neutron" or group "rootwrap" accessing this service).


I find it an interesting alternative to the other proposed solutions,
but there are some challenges associated with it which could
make it more complicated:


1) Access control, file system permission based or token based,

2) stdout/stderr/return encapsulation/forwarding to the caller,
   if we have a simple/fast RPC mechanism we can use, it's a matter
   of serializing a dictionary.

3) client side implementation for 1 + 2.

4) It would need to accept new domain socket connections in green 
threads to avoid spawning a new process to handle a new connection.


The advantages:
   * we wouldn't need to break the only-python-rule.
   * we don't need to rewrite/translate rootwrap.

The disadvantages:
  * it needs changes on the client side (neutron + other projects).
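
A possible shape for point (1) on Linux, sketched under stated assumptions:
the socket file itself can be restricted to a group, and the daemon can
additionally ask the kernel for the peer's credentials via SO_PEERCRED.
The group name "rootwrap" is illustrative, and the chown needs root; this
is a sketch, not a proposed implementation.

```python
import grp
import os
import socket
import struct

def restrict_socket(path, group="rootwrap"):
    # Filesystem-based control: only root and members of `group`
    # can connect to the daemon's socket. Needs root to chown.
    os.chown(path, 0, grp.getgrnam(group).gr_gid)
    os.chmod(path, 0o660)

def peer_is_authorized(conn, allowed_group="rootwrap"):
    # Linux SO_PEERCRED: the kernel reports the (pid, uid, gid)
    # of whoever is on the other end of the UNIX socket, so no
    # token exchange is needed for this check.
    fmt = "3i"
    pid, uid, gid = struct.unpack(
        fmt, conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                             struct.calcsize(fmt)))
    try:
        allowed_gid = grp.getgrnam(allowed_group).gr_gid
    except KeyError:
        return False
    return uid == 0 or gid == allowed_gid
```

Since UNIX sockets never leave the host, these two checks together cover
the "file system permission based" variant without any token protocol.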


Cheers,
Miguel Ángel.


On 03/08/2014 07:09 AM, Yuriy Taraday wrote:

On Fri, Mar 7, 2014 at 5:41 PM, Stephen Gran
mailto:stephen.g...@theguardian.com>> wrote:

Hi,

Given that Yuriy says explicitly 'unix socket', I don't think he
means 'MQ' when he says 'RPC'.  I think he just means a daemon
listening on a unix socket for execution requests.  This seems like
a reasonably sensible idea to me.


Yes, you're right.

On 07/03/14 12:52, Miguel Angel Ajo wrote:


I thought of this option, but didn't consider it, as it's somehow
risky to expose an RPC endpoint executing privileged (even filtered)
commands.


The subprocess module has some means to do RPC securely over UNIX sockets.
It does this by passing a token along with messages. It should be
secure because with UNIX sockets we don't need anything stronger, since
MITM attacks are not possible.

If I'm not wrong, once you have credentials for messaging, you can
send messages to any endpoint, even filtered ones; I somehow see this as
a higher-risk option.


As Stephen noted, I'm not talking about using MQ for RPC. Just some
local UNIX socket with very simple RPC over it.

And btw, if we add RPC in the middle, it's possible that all those
system call delays increase, or don't decrease as much as would be
desirable.


Every call to rootwrap would require the following.

Client side:
- new client socket;
- one message sent;
- one message received.

Server side:
- accepting new connection;
- one message received;
- one fork-exec;
- one message sent.

This looks way simpler than passing through sudo and rootwrap, which
requires three exec's and a whole lot of configuration files opened and
parsed.

--

Kind regards, Yuriy.







Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-07 Thread Yuriy Taraday
On Fri, Mar 7, 2014 at 5:41 PM, Stephen Gran
wrote:

> Hi,
>
> Given that Yuriy says explicitly 'unix socket', I don't think he means 'MQ'
> when he says 'RPC'.  I think he just means a daemon listening on a unix
> socket for execution requests.  This seems like a reasonably sensible idea
> to me.
>

Yes, you're right.


> On 07/03/14 12:52, Miguel Angel Ajo wrote:
>
>>
>> I thought of this option, but didn't consider it, as it's somehow
>> risky to expose an RPC endpoint executing privileged (even filtered) commands.
>>
>
The subprocess module has some means to do RPC securely over UNIX sockets.
It does this by passing a token along with messages. It should be secure
because with UNIX sockets we don't need anything stronger, since MITM
attacks are not possible.
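
(A guess at the stdlib facility meant here: the text says subprocess, but
the standard-library module that does token-authenticated connections over
sockets is multiprocessing.connection, which runs an HMAC challenge-response
keyed by a shared authkey before any payload is exchanged. A minimal sketch,
with a made-up key and message shape:)

```python
from multiprocessing.connection import Client, Listener

AUTHKEY = b"shared-secret"  # would be generated when the daemon is spawned

def serve_one(listener):
    # accept() completes the HMAC challenge-response against AUTHKEY;
    # a client presenting the wrong key gets AuthenticationError and
    # never reaches the payload exchange below.
    with listener.accept() as conn:
        conn.send({"echoed": conn.recv()})

def request(address, payload):
    # Client side: connect, authenticate, send one request, read one reply.
    with Client(address, authkey=AUTHKEY) as conn:
        conn.send(payload)
        return conn.recv()
```

Over an AF_UNIX address this gives exactly the "token along with messages"
scheme: the secret only needs to be shared between the spawning service and
its daemon.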

>> If I'm not wrong, once you have credentials for messaging, you can
>> send messages to any endpoint, even filtered ones; I somehow see this as
>> a higher-risk option.
>>
>
As Stephen noted, I'm not talking about using MQ for RPC. Just some local
UNIX socket with very simple RPC over it.


>> And btw, if we add RPC in the middle, it's possible that all those
>> system call delays increase, or don't decrease as much as would be desirable.
>>
>
Every call to rootwrap would require the following.

Client side:
- new client socket;
- one message sent;
- one message received.

Server side:
- accepting new connection;
- one message received;
- one fork-exec;
- one message sent.

This looks way simpler than passing through sudo and rootwrap, which
requires three exec's and a whole lot of configuration files opened and
parsed.
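
For concreteness, the client/server exchange enumerated above fits in a few
lines of stdlib Python. This is a hedged sketch, not the eventual
oslo.rootwrap design; the socket path and the JSON message format are
invented here.

```python
import json
import socket
import subprocess

SOCK_PATH = "/run/rootwrap-demo.sock"  # hypothetical path

def serve_once(server):
    # Server side: accept one connection, receive one message,
    # fork-exec the requested command, send one reply.
    conn, _ = server.accept()
    with conn:
        # A single recv is enough for demo-sized payloads.
        req = json.loads(conn.recv(65536).decode())
        proc = subprocess.run(req["cmd"], capture_output=True, text=True)
        conn.sendall(json.dumps({"rc": proc.returncode,
                                 "out": proc.stdout,
                                 "err": proc.stderr}).encode())

def call(cmd):
    # Client side: new client socket, one message sent, one received.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client:
        client.connect(SOCK_PATH)
        client.sendall(json.dumps({"cmd": cmd}).encode())
        return json.loads(client.recv(65536).decode())
```

The serialized reply dictionary also covers the stdout/stderr/return-code
forwarding question raised earlier in the thread.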

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-07 Thread Carl Baldwin
I had a reply drafted up to Miguel's original post and now I realize
that I never actually sent it.  :(  So, I'll clean up and update my
draft and send it.  This is a huge impediment to scaling Neutron and I
believe this needs some attention before Icehouse releases.

I believe this problem needs to be tackled on multiple fronts.  I have
been focusing mostly on the L3 agent because routers seem to take a
lot more commands to create and maintain than DHCP namespaces, in
general.  I've created a few patches to address the issues that I've
found.  The patch that Mark mentioned [1] is one potential part of the
solution but it turns out to be one of the more complicated patches to
work out and it keeps falling lower in priority for me.  I have come
back to it this week and will work on it through next week as a higher
priority task.

There are some other recent improvements that have merged to Icehouse
3:  I have changed the iptables lock to avoid contention [2], avoided
an unnecessary RPC call for each router processed [3], and avoided
some unnecessary ip netns calls to check existence of a device [4].  I
feel like I'm just slowly whittling away at the problem.

I'm also throwing around the idea of refactoring the L3 agent to give
precedence to RPC calls on a restart [5].  There is a very rough
preview up that I put up yesterday evening to get feedback on the
approach that I'm thinking of taking.  This should make the agent more
responsive to changes that come in through RPC.  This is less of a win
on reboot than on a simple agent process restart.

Another thing that we've found to help is to delete namespaces when a
router or dhcp server namespace is no longer needed [6].  We've
learned that having vestigial namespaces hanging around and
accumulating when they are no longer needed adversely affects the
performance of all "ip netns exec" commands.  There are some sticky
kernel issues related to using this patch.  That is why the default
configuration is to not delete namespaces.  See the "Related-Bug"
referenced by that commit message.

I'm intrigued by the idea of writing a rootwrap compatible alternative
in C.  It might even be possible to replace sudo + rootwrap
combination with a single, stand-alone executable with setuid
capability of elevating permissions on its own.  I know it breaks the
everything-in-python pattern that has been established but this sort
of thing is sensitive enough to start-up time that it may be worth it.
 I think we've shown that some of the OpenStack projects, namely Nova
and Neutron, run enough commands at scale that this performance really
matters.  My plate is full enough that I cannot imagine taking on this
kind of task at this time.  Does anyone have any interest in making
this a reality?

A C version of rootwrap could do some of the more common and simple
command verification and punt anything that fails to the python
version of rootwrap with an exec.  That would ease the burden of
keeping it in sync and feature compatible with the python version and
allow python developers to continue developing root wrap in python.

Carl

[1] https://review.openstack.org/#/c/67490/
[2] https://review.openstack.org/#/c/67558/
[3] https://review.openstack.org/#/c/66928/
[4] https://review.openstack.org/#/c/67475/
[5] https://review.openstack.org/#/c/78819/
[6] https://review.openstack.org/#/c/56114/

On Fri, Mar 7, 2014 at 9:22 AM, Mark McClain  wrote:
>
> On Mar 6, 2014, at 3:31 AM, Miguel Angel Ajo  wrote:
>
>>
>> Yes, one option could be to coalesce all calls that go into
>> a namespace into a shell script and run this with a single
>> rootwrap > ip netns exec
>>
>> But we would need to find a mechanism to determine whether some of the steps
>> failed, and what the result / output was, something like failing line +
>> result code. I'm not sure if we rely on stdout/stderr results at any time.
>>
>
> This is exactly one of the items Carl Baldwin has been investigating.  Have 
> you checked out his early work? [1]
>
> mark
>
> [1] https://review.openstack.org/#/c/67490/
>
>
>



Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-07 Thread Mark McClain

On Mar 6, 2014, at 3:31 AM, Miguel Angel Ajo  wrote:

> 
> Yes, one option could be to coalesce all calls that go into
> a namespace into a shell script and run this with a single
> rootwrap > ip netns exec
> 
> But we would need to find a mechanism to determine whether some of the steps
> failed, and what the result / output was, something like failing line +
> result code. I'm not sure if we rely on stdout/stderr results at any time.
> 

This is exactly one of the items Carl Baldwin has been investigating.  Have you 
checked out his early work? [1]

mark

[1] https://review.openstack.org/#/c/67490/





Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-07 Thread Stephen Gran

Hi,

Given that Yuriy says explicitly 'unix socket', I don't think he means
'MQ' when he says 'RPC'.  I think he just means a daemon listening on a
unix socket for execution requests.  This seems like a reasonably
sensible idea to me.


Cheers,

On 07/03/14 12:52, Miguel Angel Ajo wrote:


I thought of this option, but didn't consider it, as it's somehow
risky to expose an RPC endpoint executing privileged (even filtered) commands.

If I'm not wrong, once you have credentials for messaging, you can
send messages to any endpoint, even filtered ones; I somehow see this as
a higher-risk option.

And btw, if we add RPC in the middle, it's possible that all those
system call delays increase, or don't decrease as much as would be desirable.


On 03/07/2014 10:06 AM, Yuriy Taraday wrote:

Another option would be to allow rootwrap to run in daemon mode and
provide an RPC interface. This way Neutron can spawn rootwrap (with its
CPython startup overhead) once and send new commands to be run later
over a UNIX socket.



This way we won't need to learn a new language (C/C++) or adopt a new
toolchain (RPython, Cython, whatever else), and we still get a secure
way to run commands with root privileges.


--
Stephen Gran
Senior Systems Integrator - theguardian.com




Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-07 Thread Miguel Angel Ajo


I thought of this option, but didn't consider it, as it's somehow
risky to expose an RPC endpoint executing privileged (even filtered) commands.

If I'm not wrong, once you have credentials for messaging, you can
send messages to any endpoint, even filtered ones; I somehow see this as
a higher-risk option.

And btw, if we add RPC in the middle, it's possible that all those
system call delays increase, or don't decrease as much as would be desirable.


On 03/07/2014 10:06 AM, Yuriy Taraday wrote:

Another option would be to allow rootwrap to run in daemon mode and
provide an RPC interface. This way Neutron can spawn rootwrap (with its
CPython startup overhead) once and send new commands to be run later
over a UNIX socket.



This way we won't need to learn a new language (C/C++) or adopt a new
toolchain (RPython, Cython, whatever else), and we still get a secure
way to run commands with root privileges.





Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-07 Thread Yuriy Taraday
Hello.

On Wed, Mar 5, 2014 at 6:42 PM, Miguel Angel Ajo wrote:

> 2) What alternatives can we think about to improve this situation.
>
>0) already being done: coalescing system calls. But I'm unsure that's
> enough. (if we coalesce 15 calls to 3 on this system we get: 192*3*0.3/60
> ~=3 minutes overhead on a 10min operation).
>
>a) Rewriting rules into sudo (to the extent that it's possible), and
> live with that.
>b) How secure is neutron about command injection to that point? How
> much is user input filtered on the API calls?
>c) Even if "b" is ok , I suppose that if the DB gets compromised, that
> could lead to command injection.
>
>d) Re-writing rootwrap into C (it's 600 python LOCs now).

>    e) Doing the command filtering at neutron-side, as a library and live
> with sudo with simple filtering. (we kill the python/rootwrap startup
> overhead).
>

Another option would be to allow rootwrap to run in daemon mode and provide
an RPC interface. This way Neutron can spawn rootwrap (with its CPython
startup overhead) once and send new commands to be run later over a UNIX
socket.
This way we won't need to learn a new language (C/C++) or adopt a new
toolchain (RPython, Cython, whatever else), and we still get a secure way
to run commands with root privileges.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-06 Thread Miguel Angel Ajo


On 03/06/2014 07:57 AM, IWAMOTO Toshihiro wrote:

At Wed, 05 Mar 2014 15:42:54 +0100,
Miguel Angel Ajo wrote:

3) I also find 10 minutes a long time to setup 192 networks/basic tenant
structures, I wonder if that time could be reduced by conversion
of system process calls into system library calls (I know we don't have
libraries for iproute, iptables?, and many other things... but it's a
problem that's probably worth looking at.)


Try benchmarking

$ sudo ip netns exec qfoobar /bin/echo


You're totally right: that takes the same time as rootwrap itself. It's
another point to consider from the performance point of view.


An interesting read:
http://man7.org/linux/man-pages/man8/ip-netns.8.html

ip netns does a lot of mounts to simulate a normal environment,
which a netns-aware application could avoid entirely.
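
As an illustration of "netns-aware", a Python process can switch network
namespaces itself with the setns(2) syscall and skip the mount dance of
`ip netns exec` entirely. This is a Linux-only sketch via ctypes; actually
entering a namespace requires root (CAP_SYS_ADMIN) and a namespace created
under /var/run/netns.

```python
import ctypes
import os

CLONE_NEWNET = 0x40000000  # nstype flag from <sched.h>
libc = ctypes.CDLL("libc.so.6", use_errno=True)

def enter_netns(name):
    # Open the namespace handle that `ip netns add` created and jump
    # into it directly, instead of paying for a fork + exec + bind
    # mounts on every wrapped command.
    fd = os.open("/var/run/netns/" + name, os.O_RDONLY)
    try:
        if libc.setns(fd, CLONE_NEWNET) != 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))
    finally:
        os.close(fd)
```

A long-lived agent could enter the namespace once, issue many operations,
and switch back, amortizing the namespace-switch cost the benchmark above
measures per invocation.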



Network namespace switching costs almost as much as a rootwrap
execution, IIRC.

Execution coalescing is not enough in this case and we would need to
change how Neutron issues commands, IMO.


Yes, one option could be to coalesce all calls that go into
a namespace into a shell script and run this with a single
rootwrap > ip netns exec

But we would need to find a mechanism to determine whether some of the
steps failed, and what the result / output was, something like failing
line + result code. I'm not sure if we rely on stdout/stderr results at
any time.
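
One hypothetical way to get exactly that (failing line + result code) from a
coalesced batch: make the generated script abort on the first error and emit
a marker naming the failed step. The `__FAILED__` marker string is invented
for this sketch.

```python
import subprocess

def run_batch(commands):
    # One shell invocation for many commands; stop at the first
    # failure and report which step failed and with what exit code.
    script = "\n".join(
        '%s || { echo "__FAILED__ %d $?" >&2; exit 1; }' % (cmd, i)
        for i, cmd in enumerate(commands))
    proc = subprocess.run(["/bin/sh", "-c", script],
                          capture_output=True, text=True)
    for line in proc.stderr.splitlines():
        if line.startswith("__FAILED__"):
            _, step, rc = line.split()
            return {"ok": False, "step": int(step), "rc": int(rc)}
    return {"ok": True, "stdout": proc.stdout}
```

The caller still sees per-step failure information, while paying the
rootwrap / ip netns exec startup cost only once per batch.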













Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread IWAMOTO Toshihiro
At Wed, 05 Mar 2014 15:42:54 +0100,
Miguel Angel Ajo wrote:
> 3) I also find 10 minutes a long time to setup 192 networks/basic tenant 
> structures, I wonder if that time could be reduced by conversion
> of system process calls into system library calls (I know we don't have
> libraries for iproute, iptables?, and many other things... but it's a
> problem that's probably worth looking at.)

Try benchmarking

   $ sudo ip netns exec qfoobar /bin/echo

Network namespace switching costs almost as much as a rootwrap
execution, IIRC.

Execution coalescing is not enough in this case and we would need to
change how Neutron issues commands, IMO.






Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Solly Ross
Has anyone tried compiling rootwrap under Cython?  Even with non-optimized 
libraries,
Cython sometimes sees speedups.

Best Regards,
Solly Ross

- Original Message -
From: "Vishvananda Ishaya" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Wednesday, March 5, 2014 1:13:33 PM
Subject: Re: [openstack-dev] [neutron][rootwrap] Performance considerations,
sudo?


On Mar 5, 2014, at 6:42 AM, Miguel Angel Ajo  wrote:

> 
>Hello,
> 
>Recently, I found a serious issue with network-node startup time:
> neutron-rootwrap eats a lot of cpu cycles, much more than the processes it's 
> wrapping itself.
> 
>On a database with 1 public network, 192 private networks, 192 routers, 
> and 192 nano VMs, with OVS plugin:
> 
> 
> Network node setup time (rootwrap): 24 minutes
> Network node setup time (sudo): 10 minutes
> 
> 
>   That's the time since you reboot a network node, until all namespaces
> and services are restored.
> 
> 
>   If you see appendix "1", this extra 14min overhead matches the fact 
> that rootwrap needs 0.3s to start and launch a system command (once 
> filtered).
> 
>14minutes =  840 s.
>(840s. / 192 resources)/0.3s ~= 15 operations / resource(qdhcp+qrouter) 
> (iptables, ovs port creation & tagging, starting child processes, etc..)
> 
>   The overhead comes from python startup time + rootwrap loading.
> 
>   I suppose that rootwrap was designed for a lower number of system calls 
> (nova?).
> 
>   And, I understand what rootwrap provides: a level of filtering that sudo 
> cannot offer. But it raises some questions:
> 
> 1) Is anyone actually using rootwrap in production?
> 
> 2) What alternatives can we think about to improve this situation.
> 
>   0) already being done: coalescing system calls. But I'm unsure that's 
> enough. (if we coalesce 15 calls to 3 on this system we get: 192*3*0.3/60 ~=3 
> minutes overhead on a 10min operation).
> 
>   a) Rewriting rules into sudo (to the extent that it's possible), and live 
> with that.
>   b) How secure is neutron about command injection to that point? How much is 
> user input filtered on the API calls?
>   c) Even if "b" is ok , I suppose that if the DB gets compromised, that 
> could lead to command injection.
> 
>   d) Re-writing rootwrap into C (it's 600 python LOCs now).


This seems like the best choice to me. It shouldn’t be that much work for a 
proficient C coder. Obviously it will need to be audited for buffer overflow 
issues etc, but the code should be small enough to make this doable with high 
confidence.

Vish

> 
>   e) Doing the command filtering at neutron-side, as a library and live with 
> sudo with simple filtering. (we kill the python/rootwrap startup overhead).
> 
> 3) I also find 10 minutes a long time to setup 192 networks/basic tenant 
> structures, I wonder if that time could be reduced by conversion
> of system process calls into system library calls (I know we don't have
> libraries for iproute, iptables?, and many other things... but it's a
> problem that's probably worth looking at.)
> 
> Best,
> Miguel Ángel Ajo.
> 
> 
> Appendix:
> 
> [1] Analyzing overhead:
> 
> [root@rhos4-neutron2 ~]# echo "int main() { return 0; }" > test.c
> [root@rhos4-neutron2 ~]# gcc test.c -o test
> [root@rhos4-neutron2 ~]# time test  # to time process invocation on this 
> machine
> 
> real    0m0.000s
> user    0m0.000s
> sys     0m0.000s
> 
> 
> [root@rhos4-neutron2 ~]# time sudo bash -c 'exit 0'
> 
> real    0m0.032s
> user    0m0.010s
> sys     0m0.019s
> 
> 
> [root@rhos4-neutron2 ~]# time python -c'import sys;sys.exit(0)'
> 
> real    0m0.057s
> user    0m0.016s
> sys     0m0.011s
> 
> [root@rhos4-neutron2 ~]# time neutron-rootwrap --help
> /usr/bin/neutron-rootwrap: No command specified
> 
> real    0m0.309s
> user    0m0.128s
> sys     0m0.037s
> 





Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Vishvananda Ishaya

On Mar 5, 2014, at 6:42 AM, Miguel Angel Ajo  wrote:

> 
>Hello,
> 
>Recently, I found a serious issue with network-node startup time:
> neutron-rootwrap eats a lot of cpu cycles, much more than the processes it's 
> wrapping itself.
> 
>On a database with 1 public network, 192 private networks, 192 routers, 
> and 192 nano VMs, with OVS plugin:
> 
> 
> Network node setup time (rootwrap): 24 minutes
> Network node setup time (sudo): 10 minutes
> 
> 
>   That's the time since you reboot a network node, until all namespaces
> and services are restored.
> 
> 
>   If you see appendix "1", this extra 14min overhead matches the fact 
> that rootwrap needs 0.3s to start and launch a system command (once 
> filtered).
> 
>14minutes =  840 s.
>(840s. / 192 resources)/0.3s ~= 15 operations / resource(qdhcp+qrouter) 
> (iptables, ovs port creation & tagging, starting child processes, etc..)
> 
>   The overhead comes from python startup time + rootwrap loading.
> 
>   I suppose that rootwrap was designed for a lower number of system calls 
> (nova?).
> 
>   And, I understand what rootwrap provides: a level of filtering that sudo 
> cannot offer. But it raises some questions:
> 
> 1) Is anyone actually using rootwrap in production?
> 
> 2) What alternatives can we think about to improve this situation.
> 
>   0) already being done: coalescing system calls. But I'm unsure that's 
> enough. (if we coalesce 15 calls to 3 on this system we get: 192*3*0.3/60 ~=3 
> minutes overhead on a 10min operation).
> 
>   a) Rewriting rules into sudo (to the extent that it's possible), and live 
> with that.
>   b) How secure is neutron about command injection to that point? How much is 
> user input filtered on the API calls?
>   c) Even if "b" is ok , I suppose that if the DB gets compromised, that 
> could lead to command injection.
> 
>   d) Re-writing rootwrap into C (it's 600 python LOCs now).


This seems like the best choice to me. It shouldn’t be that much work for a 
proficient C coder. Obviously it will need to be audited for buffer overflow 
issues etc, but the code should be small enough to make this doable with high 
confidence.

Vish

> 
>   e) Doing the command filtering at neutron-side, as a library and live with 
> sudo with simple filtering. (we kill the python/rootwrap startup overhead).
> 
> 3) I also find 10 minutes a long time to setup 192 networks/basic tenant 
> structures, I wonder if that time could be reduced by conversion
> of system process calls into system library calls (I know we don't have
> libraries for iproute, iptables?, and many other things... but it's a
> problem that's probably worth looking at.)
> 
> Best,
> Miguel Ángel Ajo.
> 
> 
> Appendix:
> 
> [1] Analyzing overhead:
> 
> [root@rhos4-neutron2 ~]# echo "int main() { return 0; }" > test.c
> [root@rhos4-neutron2 ~]# gcc test.c -o test
> [root@rhos4-neutron2 ~]# time test  # to time process invocation on this 
> machine
> 
> real    0m0.000s
> user    0m0.000s
> sys     0m0.000s
> 
> 
> [root@rhos4-neutron2 ~]# time sudo bash -c 'exit 0'
> 
> real    0m0.032s
> user    0m0.010s
> sys     0m0.019s
> 
> 
> [root@rhos4-neutron2 ~]# time python -c'import sys;sys.exit(0)'
> 
> real    0m0.057s
> user    0m0.016s
> sys     0m0.011s
> 
> [root@rhos4-neutron2 ~]# time neutron-rootwrap --help
> /usr/bin/neutron-rootwrap: No command specified
> 
> real    0m0.309s
> user    0m0.128s
> sys     0m0.037s
> 





Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Joe Gordon
On Wed, Mar 5, 2014 at 8:51 AM, Miguel Angel Ajo Pelayo
 wrote:
>
>
> - Original Message -
>> Miguel Angel Ajo wrote:
>> > [...]
>> >The overhead comes from python startup time + rootwrap loading.
>> >
>> >I suppose that rootwrap was designed for a lower number of system calls
>> > (nova?).
>>
>> Yes, it was not really designed to escalate rights on hundreds of
>> separate shell commands in a row.
>>
>> >And, I understand what rootwrap provides: a level of filtering that
>> > sudo cannot offer. But it raises some questions:
>> >
>> > 1) Is anyone actually using rootwrap in production?
>> >
>> > 2) What alternatives can we think about to improve this situation.
>> >
>> >0) already being done: coalescing system calls. But I'm unsure that's
>> > enough. (if we coalesce 15 calls to 3 on this system we get:
>> > 192*3*0.3/60 ~=3 minutes overhead on a 10min operation).
>> >
>> >a) Rewriting rules into sudo (to the extent that it's possible), and
>> > live with that.
>>
>> We used to use sudo and a sudoers file. The rules were poorly written,
>> and there is just so much you can check in a sudoers file. But the main
>> issue was that the sudoers file lived in packaging
>> (distribution-dependent), and was not maintained in sync with the code.
>> Rootwrap let us to maintain the rules (filters) in sync with the code
>> calling them.
>
> Yes, from security & maintenance, it was a smart decision. I'm thinking
> of automatically converting rootwrap rules to sudoers, but that's very
> limited, especially for the ip netns exec ... case.
>
>
>> To work around perf issues, you still have the option of running with a
>> wildcard sudoers file (and root_wrapper = sudo). That's about as safe as
>> running with badly-written or badly-maintained sudo rules anyway.
>
> That's what I used for my "benchmark". I just wonder how possible it
> is to get command injection into neutron, via API or DB.
>
>>
>> > [...]
>> >d) Re-writing rootwrap into C (it's 600 python LOCs now).
>>
>> (d2) would be to explore running rootwrap under Pypy. Testing that is on
>> my TODO list, but $OTHERSTUFF got into the way. Feel free to explore
>> that option.
>
> I tried in my system right now; it takes more time to boot up. The PyPy
> JIT is awesome at runtime, but it seems that boot time is slower.

That is the wrong pypy! There are some pypy core devs lurking on this
ML so they may correct some of these details but:

It turns out python has a really big startup overhead:

jogo@lappy:~$ time echo true
true

real    0m0.000s
user    0m0.000s
sys     0m0.000s

jogo@lappy:~$ time python -c "print True"
True

real    0m0.022s
user    0m0.013s
sys     0m0.009s

And I am not surprised pypy isn't much better; pypy works better with
longer-running programs.

But pypy isn't just one thing, it's two parts:

"In common parlance, PyPy has been used to mean two things. The first
is the RPython translation toolchain, which is a framework for
generating dynamic programming language implementations. And the
second is one particular implementation that is so generated - an
implementation of the Python programming language written in Python
itself. It is designed to be flexible and easy to experiment with."

So the idea is to rewrite rootwrap in RPython and use the RPython
translation toolchain to convert rootwrap into C. That way we keep the
source code in a language more friendly to OpenStack devs, and we
hopefully avoid the overhead associated with starting python up.
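
For illustration, an RPython-translatable program is ordinary (restricted)
Python plus a `target()` hook that the translation toolchain looks for. The
filter logic below is a made-up stand-in, not real rootwrap code; the same
module also runs unmodified under CPython.

```python
def match_filter(cmd):
    # Stand-in for rootwrap's filter matching: a fixed whitelist.
    allowed = ["ip", "iptables", "ovs-vsctl"]
    return len(cmd) > 0 and cmd[0] in allowed

def entry_point(argv):
    # The translated native binary starts here, with no interpreter
    # startup cost. argv[0] is the program name.
    if not match_filter(argv[1:]):
        return 99  # reject, like rootwrap's "unauthorized command"
    # a real tool would os.execv() the filtered command here
    return 0

def target(driver, args):
    # Hook consumed by the RPython translation toolchain.
    return entry_point, None
```

Running the `rpython` translator over such a module would emit a standalone
C binary, which is the startup-time win Joe describes.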

>
> I also played a little with shedskin (py->c++ converter), but it
> doesn't support all the python libraries, dynamic typing, or parameter 
> unpacking.
>
> That could be another approach: writing a simplified rootwrap in python
> and having it automatically converted to C++.
>
> f) haleyb on IRC is pointing me to another approach Carl Baldwin is
> pushing https://review.openstack.org/#/c/67490/ towards command execution
> coalescing.
>
>
>>
>> >e) Doing the command filtering at neutron-side, as a library and live
>> > with sudo with simple filtering. (we kill the python/rootwrap startup
>> > overhead).
>>
>> That's as safe as running with a wildcard sudoers file (neutron user can
>> escalate to root). Which may just be acceptable in /some/ scenarios.
>
> I think it can be safer (from the command injection point of view).
>
>>
>> --
>> Thierry Carrez (ttx)
>>
>>
>



Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Rick Jones

On 03/05/2014 06:42 AM, Miguel Angel Ajo wrote:


 Hello,

 Recently, I found a serious issue with network-node startup time:
neutron-rootwrap eats a lot of cpu cycles, much more than the processes
it's wrapping itself.

 On a database with 1 public network, 192 private networks, 192
routers, and 192 nano VMs, with OVS plugin:


Network node setup time (rootwrap): 24 minutes
Network node setup time (sudo): 10 minutes


I've not been looking at rootwrap, but have been looking at sudo and ip. 
(Using some scripts which create "fake routers" so I could look without 
any of this icky OpenStack stuff in the way :) ) The Ubuntu 12.04 
versions of each at least will enumerate all the interfaces on the 
system, even though they don't need to.


There was already an upstream change to 'ip' that eliminates the 
unnecessary enumeration.  In the last few weeks an enhancement went into 
the upstream sudo that allows one to configure sudo to not do the same 
thing.   Down in the low(ish) three figures of interfaces it may not be 
a Big Deal (tm) but as one starts to go beyond that...


commit f0124b0f0aa0e5b9288114eb8e6ff9b4f8c33ec8
Author: Stephen Hemminger 
Date:   Thu Mar 28 15:17:47 2013 -0700

ip: remove unnecessary ll_init_map

Don't call ll_init_map on modify operations
Saves significant overhead with 1000's of devices.

http://www.sudo.ws/pipermail/sudo-workers/2014-January/000826.html

Whether your environment already has the 'ip' change I don't know, but 
odds are probably pretty good it doesn't have the sudo enhancement.



That's the time since you reboot a network node, until all namespaces
and services are restored.


So, that includes the time for the system to go down and reboot, not 
just the time it takes to rebuild once rebuilding starts?



If you see appendix "1", this extra 14min overhead matches the
fact that rootwrap needs 0.3s to start and launch a system command
(once filtered).

 14minutes =  840 s.
 (840s. / 192 resources)/0.3s ~= 15 operations /
resource(qdhcp+qrouter) (iptables, ovs port creation & tagging, starting
child processes, etc..)

The overhead comes from python startup time + rootwrap loading.


How much of the time is python startup time?  I assume that would be all 
the "find this lib, find that lib" stuff one sees in a system call 
trace?  I saw a boatload of that at one point but didn't quite feel like 
wading into that at the time.



 I suppose that rootwrap was designed for a lower number of system
calls (nova?).


And/or a smaller environment perhaps.


 And I understand what rootwrap provides: a level of filtering that
sudo cannot offer. But it raises some questions:

1) Is anyone actually using rootwrap in production?

2) What alternatives can we think of to improve this situation?

0) Already being done: coalescing system calls. But I'm unsure
that's enough (if we coalesce 15 calls to 3 on this system we get:
192 * 3 * 0.3 / 60 ~= 3 minutes of overhead on a 10 min operation).


It may not be sufficient, but it is (IMO) certainly necessary.  It will 
make any work that minimizes or eliminates the overhead of rootwrap look 
that much better.



a) Rewriting rules into sudo (to the extent that it's possible), and
live with that.
b) How secure is neutron against command injection at that point? How
much is user input filtered in the API calls?
c) Even if "b" is OK, I suppose that if the DB gets compromised,
that could lead to command injection.

d) Re-writing rootwrap into C (it's 600 python LOCs now).

e) Doing the command filtering on the neutron side, as a library, and
living with sudo plus simple filtering (we kill the python/rootwrap
startup overhead).

3) I also find 10 minutes a long time to set up 192 networks/basic tenant
structures; I wonder if that time could be reduced by converting
system process calls into library calls (I know we don't have
libraries for iproute, iptables?, and many other things... but it's a
problem that's probably worth looking at).


Certainly going back and forth creating short-lived processes is at 
least anti-social and perhaps ever so slightly upsetting to the process 
scheduler, particularly "at scale."  A problem, though, is that the 
Linux networking folks have been somewhat reluctant to create 
libraries (at least any that they would end up supporting) because they 
are concerned it would lock in interfaces and reduce their freedom of 
movement.


happy benchmarking,

rick jones
the fastest procedure call is the one you never make



Best,
Miguel Ángel Ajo.


Appendix:

[1] Analyzing overhead:

[root@rhos4-neutron2 ~]# echo "int main() { return 0; }" > test.c
[root@rhos4-neutron2 ~]# gcc test.c -o test
[root@rhos4-neutron2 ~]# time ./test  # to time process invocation on
this machine

real    0m0.000s
user    0m0.000s
sys     0m0.000s


[root@rhos4-neutron2 ~]# time sudo bash -c 'exit 0'

real    0m0.032s
user    0m0.010s
sys     0m0.019s



Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Miguel Angel Ajo Pelayo


- Original Message -
> Miguel Angel Ajo wrote:
> > [...]
> >The overhead comes from python startup time + rootwrap loading.
> > 
> >I suppose that rootwrap was designed for a lower number of system calls
> > (nova?).
> 
> Yes, it was not really designed to escalate rights on hundreds of
> separate shell commands in a row.
> 
> >And I understand what rootwrap provides: a level of filtering that
> > sudo cannot offer. But it raises some questions:
> > 
> > 1) Is anyone actually using rootwrap in production?
> > 
> > 2) What alternatives can we think of to improve this situation?
> > 
> >0) already being done: coalescing system calls. But I'm unsure that's
> > enough. (if we coalesce 15 calls to 3 on this system we get:
> > 192*3*0.3/60 ~=3 minutes overhead on a 10min operation).
> > 
> >a) Rewriting rules into sudo (to the extent that it's possible), and
> > live with that.
> 
> We used to use sudo and a sudoers file. The rules were poorly written,
> and there is just so much you can check in a sudoers file. But the main
> issue was that the sudoers file lived in packaging
> (distribution-dependent), and was not maintained in sync with the code.
> Rootwrap lets us maintain the rules (filters) in sync with the code
> calling them.

Yes, from a security & maintenance standpoint it was a smart decision.
I'm thinking of automatically converting rootwrap rules to sudoers, but
that's very limited, especially for the "ip netns exec ..." case.


> To work around perf issues, you still have the option of running with a
> wildcard sudoer file (and root_wrapper = sudo). That's about as safe as
> running with badly-written or badly-maintained sudo rules anyway.

That's what I used for my "benchmark". I just wonder how possible it
is to get command injection into neutron, via the API or the DB.

> 
> > [...]
> >d) Re-writing rootwrap into C (it's 600 python LOCs now).
> 
> (d2) would be to explore running rootwrap under Pypy. Testing that is on
> my TODO list, but $OTHERSTUFF got into the way. Feel free to explore
> that option.

I tried it on my system just now; it takes more time to boot up. The PyPy
JIT is awesome at runtime, but it seems that boot time is slower.

I also played a little with shedskin (py->c++ converter), but it 
doesn't support all the python libraries, dynamic typing, or parameter 
unpacking.

That could be another approach: writing a simplified rootwrap in python, and
having it automatically converted to C++.

f) haleyb on IRC pointed me to another approach Carl Baldwin is
pushing towards command-execution coalescing:
https://review.openstack.org/#/c/67490/


> 
> >e) Doing the command filtering at neutron-side, as a library and live
> > with sudo with simple filtering. (we kill the python/rootwrap startup
> > overhead).
> 
> That's as safe as running with a wildcard sudoers file (neutron user can
> escalate to root). Which may just be acceptable in /some/ scenarios.

I think it can be safer (from the command-injection point of view).
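A minimal sketch of what option (e) might look like: filter in-process
against rootwrap-style regular expressions, then hand the vetted command to
plain sudo. The patterns and API here are purely illustrative, not the real
rootwrap filter format:

```python
import re
import subprocess

# Illustrative allow-list, loosely modelled on rootwrap's regexp filters.
# These patterns are examples, not neutron's real filter set.
ALLOWED = [
    re.compile(r"^ip link (add|set|delete) [\w@.:-]+( .*)?$"),
    re.compile(r"^ovs-vsctl (add|del)-port \S+ \S+$"),
]

def is_allowed(command):
    """True if the argv list matches one of the allow-list patterns."""
    cmdline = " ".join(command)
    return any(p.match(cmdline) for p in ALLOWED)

def run_as_root(command):
    """Filter 'command' in-process, then escalate with plain sudo.

    The python startup cost is paid once per agent process rather than
    once per wrapped call; sudo still does the final privilege switch.
    """
    if not is_allowed(command):
        raise ValueError("command rejected by filter: %r" % command)
    return subprocess.call(["sudo"] + command)
```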

> 
> --
> Thierry Carrez (ttx)
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Ben Nemec
This has actually come up before, too: 
http://lists.openstack.org/pipermail/openstack-dev/2013-July/012539.html


-Ben

On 2014-03-05 08:42, Miguel Angel Ajo wrote:

Hello,

Recently, I found a serious issue with network-node startup time:
neutron-rootwrap eats a lot of CPU cycles, much more than the
processes it wraps.

On a database with 1 public network, 192 private networks, 192
routers, and 192 nano VMs, with the OVS plugin:


Network node setup time (rootwrap): 24 minutes
Network node setup time (sudo): 10 minutes


   That's the time from rebooting a network node until all namespaces
and services are restored.


   If you see appendix [1], this extra 14 min of overhead matches the
fact that rootwrap needs 0.3 s to start and launch a system
command (once filtered):

14 minutes = 840 s
(840 s / 192 resources) / 0.3 s ~= 15 operations per
resource (qdhcp+qrouter): iptables, OVS port creation & tagging,
starting child processes, etc.

   The overhead comes from python startup time + rootwrap loading.

   I suppose that rootwrap was designed for a lower number of system
calls (nova?).

   And I understand what rootwrap provides: a level of filtering that
sudo cannot offer. But it raises some questions:

1) Is anyone actually using rootwrap in production?

2) What alternatives can we think of to improve this situation?

   0) Already being done: coalescing system calls. But I'm unsure
that's enough (if we coalesce 15 calls to 3 on this system we get:
192 * 3 * 0.3 / 60 ~= 3 minutes of overhead on a 10 min operation).

   a) Rewriting rules into sudo (to the extent that it's possible),
and live with that.
   b) How secure is neutron against command injection at that point? How
much is user input filtered in the API calls?
   c) Even if "b" is OK, I suppose that if the DB gets compromised,
that could lead to command injection.

   d) Re-writing rootwrap into C (it's 600 python LOCs now).

   e) Doing the command filtering on the neutron side, as a library, and
living with sudo plus simple filtering (we kill the python/rootwrap
startup overhead).

3) I also find 10 minutes a long time to set up 192 networks/basic
tenant structures; I wonder if that time could be reduced by
converting system process calls into library calls (I know we don't have
libraries for iproute, iptables?, and many other things... but it's a
problem that's probably worth looking at).

Best,
Miguel Ángel Ajo.


Appendix:

[1] Analyzing overhead:

[root@rhos4-neutron2 ~]# echo "int main() { return 0; }" > test.c
[root@rhos4-neutron2 ~]# gcc test.c -o test
[root@rhos4-neutron2 ~]# time ./test  # to time process invocation
on this machine

real    0m0.000s
user    0m0.000s
sys     0m0.000s


[root@rhos4-neutron2 ~]# time sudo bash -c 'exit 0'

real    0m0.032s
user    0m0.010s
sys     0m0.019s


[root@rhos4-neutron2 ~]# time python -c 'import sys; sys.exit(0)'

real    0m0.057s
user    0m0.016s
sys     0m0.011s

[root@rhos4-neutron2 ~]# time neutron-rootwrap --help
/usr/bin/neutron-rootwrap: No command specified

real    0m0.309s
user    0m0.128s
sys     0m0.037s





Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Thierry Carrez
Miguel Angel Ajo wrote:
> [...]
>The overhead comes from python startup time + rootwrap loading.
> 
>I suppose that rootwrap was designed for a lower number of system calls
> (nova?).

Yes, it was not really designed to escalate rights on hundreds of
separate shell commands in a row.

>And I understand what rootwrap provides: a level of filtering that
> sudo cannot offer. But it raises some questions:
> 
> 1) Is anyone actually using rootwrap in production?
> 
> 2) What alternatives can we think of to improve this situation?
> 
>0) already being done: coalescing system calls. But I'm unsure that's
> enough. (if we coalesce 15 calls to 3 on this system we get:
> 192*3*0.3/60 ~=3 minutes overhead on a 10min operation).
> 
>a) Rewriting rules into sudo (to the extent that it's possible), and
> live with that.

We used to use sudo and a sudoers file. The rules were poorly written,
and there is just so much you can check in a sudoers file. But the main
issue was that the sudoers file lived in packaging
(distribution-dependent), and was not maintained in sync with the code.
Rootwrap lets us maintain the rules (filters) in sync with the code
calling them.

To work around perf issues, you still have the option of running with a
wildcard sudoer file (and root_wrapper = sudo). That's about as safe as
running with badly-written or badly-maintained sudo rules anyway.
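Concretely, that workaround amounts to something like the following (a
sketch only; file paths vary by distribution, and the neutron option is
commonly spelled root_helper rather than root_wrapper):

```
# /etc/sudoers.d/neutron -- wildcard rule: the neutron user may run
# anything as root, no password.  As noted above, this is only as safe
# as a badly-maintained sudoers file; use with care.
neutron ALL = (root) NOPASSWD: ALL

# neutron.conf -- bypass rootwrap entirely:
# [agent]
# root_helper = sudo
```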

> [...]
>d) Re-writing rootwrap into C (it's 600 python LOCs now).

(d2) would be to explore running rootwrap under Pypy. Testing that is on
my TODO list, but $OTHERSTUFF got into the way. Feel free to explore
that option.

>e) Doing the command filtering at neutron-side, as a library and live
> with sudo with simple filtering. (we kill the python/rootwrap startup
> overhead).

That's as safe as running with a wildcard sudoers file (neutron user can
escalate to root). Which may just be acceptable in /some/ scenarios.

-- 
Thierry Carrez (ttx)



[openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Miguel Angel Ajo


Hello,

Recently, I found a serious issue with network-node startup time:
neutron-rootwrap eats a lot of CPU cycles, much more than the processes 
it wraps.


On a database with 1 public network, 192 private networks, 192 
routers, and 192 nano VMs, with the OVS plugin:



Network node setup time (rootwrap): 24 minutes
Network node setup time (sudo): 10 minutes


   That's the time from rebooting a network node until all namespaces
and services are restored.


   If you see appendix [1], this extra 14 min of overhead matches the 
fact that rootwrap needs 0.3 s to start and launch a system command 
(once filtered):


14 minutes = 840 s
(840 s / 192 resources) / 0.3 s ~= 15 operations per 
resource (qdhcp+qrouter): iptables, OVS port creation & tagging, starting 
child processes, etc.


   The overhead comes from python startup time + rootwrap loading.

   I suppose that rootwrap was designed for a lower number of system 
calls (nova?).


   And I understand what rootwrap provides: a level of filtering that 
sudo cannot offer. But it raises some questions:


1) Is anyone actually using rootwrap in production?

2) What alternatives can we think of to improve this situation?

   0) Already being done: coalescing system calls. But I'm unsure 
that's enough (if we coalesce 15 calls to 3 on this system we get: 
192 * 3 * 0.3 / 60 ~= 3 minutes of overhead on a 10 min operation).


   a) Rewriting rules into sudo (to the extent that it's possible), and 
live with that.
   b) How secure is neutron against command injection at that point? How 
much is user input filtered in the API calls?
   c) Even if "b" is OK, I suppose that if the DB gets compromised, 
that could lead to command injection.


   d) Re-writing rootwrap into C (it's 600 python LOCs now).

   e) Doing the command filtering on the neutron side, as a library, and 
living with sudo plus simple filtering (we kill the python/rootwrap 
startup overhead).


3) I also find 10 minutes a long time to set up 192 networks/basic tenant 
structures; I wonder if that time could be reduced by converting
system process calls into library calls (I know we don't have
libraries for iproute, iptables?, and many other things... but it's a
problem that's probably worth looking at).

Best,
Miguel Ángel Ajo.


Appendix:

[1] Analyzing overhead:

[root@rhos4-neutron2 ~]# echo "int main() { return 0; }" > test.c
[root@rhos4-neutron2 ~]# gcc test.c -o test
[root@rhos4-neutron2 ~]# time ./test  # to time process invocation on 
this machine


real    0m0.000s
user    0m0.000s
sys     0m0.000s


[root@rhos4-neutron2 ~]# time sudo bash -c 'exit 0'

real    0m0.032s
user    0m0.010s
sys     0m0.019s


[root@rhos4-neutron2 ~]# time python -c 'import sys; sys.exit(0)'

real    0m0.057s
user    0m0.016s
sys     0m0.011s

[root@rhos4-neutron2 ~]# time neutron-rootwrap --help
/usr/bin/neutron-rootwrap: No command specified

real    0m0.309s
user    0m0.128s
sys     0m0.037s
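The measurements above can also be gathered programmatically. The sketch
below times repeated interpreter spawns, which is the floor under every
rootwrap invocation; the numbers will vary by machine, and the projection
at the end reuses the ~15-calls / 192-resources estimate from this message:

```python
import subprocess
import sys
import time

def time_spawns(argv, runs=20):
    """Average wall-clock seconds to spawn 'argv' and wait for it to exit."""
    start = time.time()
    for _ in range(runs):
        subprocess.call(argv, stdout=subprocess.DEVNULL,
                        stderr=subprocess.DEVNULL)
    return (time.time() - start) / runs

if __name__ == "__main__":
    # Bare python startup -- the floor under every rootwrap invocation.
    cost = time_spawns([sys.executable, "-c", "import sys; sys.exit(0)"])
    print("python spawn: %.3f s" % cost)
    # Projected overhead at ~15 wrapped calls per resource, 192 resources:
    print("projected:    %.1f min" % (cost * 15 * 192 / 60))
```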
