I'd like to bring attention back to this topic:
Mark, could you reconsider removing the -2 here?
https://review.openstack.org/#/c/93889/
Your reason was:
Until the upstream blueprint
(https://blueprints.launchpad.net/oslo/+spec/rootwrap-daemon-mode)
merges in Oslo it does not
On 03/24/2014 07:23 PM, Yuriy Taraday wrote:
On Mon, Mar 24, 2014 at 9:51 PM, Carl Baldwin c...@ecbaldwin.net wrote:
Don't discard the first number so quickly.
For example, say we use a timeout mechanism for the daemon running
inside namespaces to avoid
Is it the first call starting the daemon, loading config files, etc.?
Maybe that first sample should be discarded from the mean for all
processes (it's an outlier value).
On 03/21/2014 05:32 PM, Yuriy Taraday wrote:
On Fri, Mar 21, 2014 at 2:01 PM, Thierry Carrez thie...@openstack.org
Don't discard the first number so quickly.
For example, say we use a timeout mechanism for the daemon running
inside namespaces to avoid using too much memory with a daemon in
every namespace. That means we'll pay the startup cost repeatedly but
in a way that amortizes it down.
Even if it is
I was thinking that we could document the information about the sudo and
iproute2 patches with the upcoming release. How would I go about
doing this? Is there any section in our documentation about OS-level
tweaks or requirements such as these that could present this
information as part of the
Yuriy Taraday wrote:
On Thu, Mar 20, 2014 at 5:41 PM, Miguel Angel Ajo majop...@redhat.com wrote:
If this is coupled to neutron in a way that it can be accepted for
Icehouse (we're killing a performance bug), or at least can be
backported,
Yuriy Taraday wrote:
The included benchmark showed these numbers on my machine (average over
100 iterations):
Running 'ip a':
ip a : 4.565ms
sudo ip a : 13.744ms
sudo rootwrap
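The comparison above can be reproduced with a small timing harness. This is only a sketch (the `avg_ms` helper and the iteration count are illustrative, not the actual benchmark script attached to the thread), and the sudo/rootwrap cases need root, so they are left as comments:

```python
import subprocess
import time

def avg_ms(argv, iterations=100):
    """Average wall-clock time (ms) to run argv to completion."""
    start = time.perf_counter()
    for _ in range(iterations):
        subprocess.run(argv, stdout=subprocess.DEVNULL,
                       stderr=subprocess.DEVNULL, check=True)
    return (time.perf_counter() - start) / iterations * 1000.0

# Intended comparison (needs root and a rootwrap config, so not run here):
#   avg_ms(["ip", "a"])
#   avg_ms(["sudo", "ip", "a"])
#   avg_ms(["sudo", "rootwrap", "/etc/neutron/rootwrap.conf", "ip", "a"])
```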
Sean Dague wrote:
Sounds great. One of the things I hope happens with this is a look at
some places rootwrap is used with such an open policy that it's
completely moot. For instance the nova-cpu policy includes tee and dd with
no arg limiting (which has been that way forever from my look in git
On 03/19/2014 10:54 PM, Joe Gordon wrote:
On Wed, Mar 19, 2014 at 9:25 AM, Miguel Angel Ajo majop...@redhat.com wrote:
An update on the changes required to have a
py-to-c++ compiled rootwrap as a mitigation POC for havana/icehouse.
On Tue, Mar 18, 2014 at 7:38 PM, Yuriy Taraday yorik@gmail.com wrote:
I'm aiming at ~100 new lines of code for the daemon. Of course I'll use
some batteries included with the Python stdlib, but they should be safe
already. It should be rather easy to audit them.
Here's my take on this:
Wow Yuriy, amazing and fast :-), benchmarks included ;-)
The daemon solution only adds 4.5ms, good work. I'll add some
comments in a while.
Recently I talked with another engineer at Red Hat (working
on ovirt/vdsm), and they have something like this daemon, and they
are using
On Tue, Mar 11, 2014 at 12:58 AM, Carl Baldwin c...@ecbaldwin.net wrote:
https://etherpad.openstack.org/p/neutron-agent-exec-performance
I've added info on how we can speed up work with namespaces by entering
namespaces ourselves using setns(), without the ip netns exec overhead.
--
Kind
On 03/20/2014 09:07 AM, Yuriy Taraday wrote:
On Thu, Mar 20, 2014 at 7:28 PM, Rick Jones rick.jon...@hp.com wrote:
Interesting result. Which versions of sudo and ip and with how many
interfaces on the system?
Here are the numbers:
% sudo -V
Sudo version
An update on the changes required to have a
py-to-c++ compiled rootwrap as a mitigation POC for havana/icehouse.
https://github.com/mangelajo/shedskin.rootwrap/commit/e4167a6491dfbc71e2d0f6e28ba93bc8a1dd66c0
The current translation output is included.
It looks doable (almost
Hi Joe, thank you very much for the positive feedback.
I plan to spend a day this week on the shedskin compatibility
for rootwrap (I'll branch it, and tune/cut down as necessary) to make
it compile under shedskin [1]: nothing done yet.
It's a short-term alternative until we can
Joe Gordon wrote:
And this is a test with shedskin; I suppose that in more complicated
dependency scenarios it should perform better.
[majopela@redcylon tmp]$ cat <<EOF > test.py
import sys
print "hello world"
sys.exit(0)
EOF
[majopela@redcylon tmp]$ time
On Mon, Mar 17, 2014 at 1:01 PM, IWAMOTO Toshihiro iwam...@valinux.co.jp wrote:
I've added a couple of security-related comments (pickle decoding and
token leak) on the etherpad.
Please check.
Hello. Thanks for your input.
- We can avoid pickle using xmlrpclib.
- Token won't leak because we
At Thu, 13 Mar 2014 07:48:53 -0700,
Aaron Rosen wrote:
The easiest/quickest thing to do for Icehouse would probably be to run the
initial sync in parallel like the dhcp-agent does for this exact reason.
See:
Yuriy Taraday wrote:
Another option would be to allow rootwrap to run in daemon mode and
provide an RPC interface. This way Neutron can spawn rootwrap (with its
CPython startup overhead) once and send new commands to be run later
over a UNIX socket.
This way we won't need to learn a new language
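A minimal sketch of that daemon mode follows. The JSON-over-UNIX-socket framing and both function names are invented here for illustration (the real design lives in the rootwrap-daemon-mode blueprint), and a real daemon would run as root and match every argv against the rootwrap filters before executing it:

```python
import json
import os
import socket
import subprocess

def serve_once(sock_path):
    """Toy rootwrap-style daemon: accept one client and run each
    newline-delimited JSON argv it sends, returning captured stdout.
    The interpreter startup cost is paid once here, not per command."""
    if os.path.exists(sock_path):
        os.unlink(sock_path)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(sock_path)
    os.chmod(sock_path, 0o660)  # permission-based access control on the socket
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        rd, wr = conn.makefile("r"), conn.makefile("w")
        for line in rd:
            argv = json.loads(line)
            # A real rootwrap daemon would check argv against its
            # filter files before running anything.
            out = subprocess.run(argv, capture_output=True, text=True).stdout
            wr.write(json.dumps(out) + "\n")
            wr.flush()
    srv.close()

def run_via_daemon(sock_path, argv):
    """Client side: connect once, then each command is a cheap round trip."""
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    cli.connect(sock_path)
    with cli:
        rd, wr = cli.makefile("r"), cli.makefile("w")
        wr.write(json.dumps(argv) + "\n")
        wr.flush()
        return json.loads(rd.readline())
```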
On Tue, Mar 11, 2014 at 1:46 AM, Miguel Angel Ajo Pelayo
mangel...@redhat.com wrote:
I have included on the etherpad the option to write a sudo
plugin (or several), specific for openstack.
And this is a test with shedskin; I suppose that in more complicated
dependency scenarios it should
As we said in the Thursday meeting, I've filed a bug with the details:
https://bugs.launchpad.net/neutron/+bug/1292598
Feel free to add / ask for any missing details.
Best,
Miguel Ángel.
On 03/13/2014 10:52 PM, Carl Baldwin wrote:
Right, the L3 agent does do this already. Agreed that the
On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo majop...@redhat.com wrote:
I'm not familiar with unix domain sockets at a low level, but I wonder
if authentication could be achieved just with permissions (only users in
group neutron or group rootwrap accessing this service).
It can be
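Beyond file permissions on the socket path, Linux also lets a daemon verify who actually connected. This sketch uses SO_PEERCRED (a standard Linux socket option; the helper name is made up for illustration):

```python
import os
import socket
import struct

def peer_creds(conn):
    """Return (pid, uid, gid) of the peer on a connected AF_UNIX socket,
    so a daemon could check that the caller runs as the neutron user or
    belongs to a rootwrap group before executing anything."""
    raw = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                          struct.calcsize("3i"))
    return struct.unpack("3i", raw)
```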
On Tue, Mar 11, 2014 at 12:58 AM, Carl Baldwin c...@ecbaldwin.net wrote:
All,
I was writing down a summary of all of this and decided to just do it
on an etherpad. Will you help me capture the big picture there? I'd
like to come up with some actions this week to try to address at least
Yuriy, could you elaborate your idea in detail? I'm lost at some
points with your unix domain / token authentication.
Where does the token come from?
Who starts rootwrap the first time?
If you could write a full interaction sequence, on the etherpad, from
rootwrap daemon start, to a simple
The easiest/quickest thing to do for Icehouse would probably be to run the
initial sync in parallel like the dhcp-agent does for this exact reason.
See: https://review.openstack.org/#/c/28914/ which did this for the
dhcp-agent.
Best,
Aaron
On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo
Aaron,
I thought the l3-agent already did this if doing a full sync?
_sync_routers_task() -> _process_routers() -> spawn_n(self.process_router, ri)
So each router gets processed in a greenthread.
It seems like the other calls - sudo/rootwrap, /sbin/ip, etc are now the
limiting factor, at least on
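The pattern being described (the l3-agent uses eventlet's spawn_n; here it's sketched with stdlib threads instead, and process_router is a stand-in for the real agent method, not the actual Neutron code):

```python
from concurrent.futures import ThreadPoolExecutor

def process_router(router_id):
    """Stand-in for L3NATAgent.process_router(ri): the per-router setup
    work (namespaces, iptables, ip commands) that dominates a full sync."""
    return "processed %s" % router_id

def sync_routers(router_ids, workers=8):
    """Process routers concurrently rather than serially, like the
    dhcp-agent's parallel initial sync."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_router, router_ids))
```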
Right, the L3 agent does do this already. Agreed that the limiting
factor is the cumulative effect of the wrappers and executables' start
up overhead.
Carl
Hi Yuriy, Stephen, thanks a lot for the clarification.
I'm not familiar with unix domain sockets at a low level, but I wonder
if authentication could be achieved just with permissions (only users in
group neutron or group rootwrap accessing this service).
I find it an interesting alternative,
Hi Carl, thank you, good idea.
I started reviewing it, but I will do it more carefully tomorrow morning.
I looked into the python-to-C options and haven't found anything promising
yet.
I tried Cython and RPython on a trivial hello world app, but got similar
startup times to standard python.
The one thing that did work was adding '-S' when starting python.
-S Disable the import of
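The effect of -S (skipping the import of the site module at startup) can be measured like this. Numbers vary widely by machine and Python version, so none are claimed here; the helper name is illustrative:

```python
import subprocess
import sys
import time

def startup_ms(extra_args=(), runs=20):
    """Average ms to start the interpreter, run 'pass', and exit."""
    start = time.perf_counter()
    for _ in range(runs):
        subprocess.run([sys.executable, *extra_args, "-c", "pass"],
                       check=True)
    return (time.perf_counter() - start) / runs * 1000.0

# Compare: startup_ms() vs startup_ms(["-S"])
```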
We had this same issue with the dhcp-agent. Code was added that parallelized
the initial sync here: https://review.openstack.org/#/c/28914/ that made
things a good bit faster if I remember correctly. Might be worth doing
something similar for the l3-agent.
Best,
Aaron
On Mon, Mar 10, 2014 at
Hello.
On Wed, Mar 5, 2014 at 6:42 PM, Miguel Angel Ajo majop...@redhat.com wrote:
2) What alternatives can we think of to improve this situation?
0) already being done: coalescing system calls. But I'm unsure that's
enough. (if we coalesce 15 calls to 3 on this system we get:
I thought of this option, but didn't consider it, as it's somewhat
risky to expose an RPC endpoint executing privileged (even filtered) commands.
If I'm not wrong, once you have credentials for messaging, you can
send messages to any endpoint, even filtered ones; I see this as a
higher-risk option.
Hi,
Given that Yuriy says explicitly 'unix socket', I don't think he means
'MQ' when he says 'RPC'. I think he just means a daemon listening on a
unix socket for execution requests. This seems like a reasonably
sensible idea to me.
Cheers,
On 07/03/14 12:52, Miguel Angel Ajo wrote:
I
On Mar 6, 2014, at 3:31 AM, Miguel Angel Ajo majop...@redhat.com wrote:
Yes, one option could be to coalesce all calls that go into
a namespace into a shell script and run this in the
rootwrap ip netns exec
But we'd need a mechanism to determine if some of the steps failed, and
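Coalescing could look like this: build one shell script from the queued commands so the rootwrap + ip netns exec entry cost is paid once per batch. The function name and the `set -e` failure-detection choice are illustrative, not from the thread:

```python
def coalesced_argv(namespace, commands):
    """Turn N per-namespace commands into a single invocation.
    'set -e' makes the batch stop at the first failing step, so the
    exit status says *that* something failed, though not cleanly which
    step -- the open problem mentioned above."""
    script = "set -e\n" + "\n".join(commands)
    return ["ip", "netns", "exec", namespace, "sh", "-c", script]

# e.g. coalesced_argv("qrouter-x", ["ip link set dev tap0 up",
#                                   "ip addr add 10.0.0.1/24 dev tap0"])
```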
I had a reply drafted up to Miguel's original post and now I realize
that I never actually sent it. :( So, I'll clean up and update my
draft and send it. This is a huge impediment to scaling Neutron and I
believe this needs some attention before Icehouse releases.
I believe this problem needs
Miguel Angel Ajo wrote:
[...]
The overhead comes from python startup time + rootwrap loading.
I suppose that rootwrap was designed for a lower volume of system calls
(nova?).
Yes, it was not really designed to escalate rights on hundreds of
separate shell commands in a row.
And, I
This has actually come up before, too:
http://lists.openstack.org/pipermail/openstack-dev/2013-July/012539.html
-Ben
On 2014-03-05 08:42, Miguel Angel Ajo wrote:
Hello,
Recently, I found a serious issue with network-node startup time:
neutron-rootwrap eats a lot of CPU cycles, much
On 03/05/2014 06:42 AM, Miguel Angel Ajo wrote:
Hello,
Recently, I found a serious issue with network-node startup time:
neutron-rootwrap eats a lot of CPU cycles, much more than the processes
it's wrapping itself.
On a database with 1 public network, 192 private networks,
-dev@lists.openstack.org
Sent: Wednesday, March 5, 2014 1:13:33 PM
Subject: Re: [openstack-dev] [neutron][rootwrap] Performance considerations,
sudo?
On Mar 5, 2014, at 6:42 AM, Miguel Angel Ajo majop...@redhat.com wrote:
At Wed, 05 Mar 2014 15:42:54 +0100,
Miguel Angel Ajo wrote:
3) I also find 10 minutes a long time to set up 192 networks / basic tenant
structures; I wonder if that time could be reduced by converting
system process calls into system library calls (I know we don't have
libraries for iproute,