Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem for XenServer

2016-11-04 Thread Jianghua Wang
Thanks Ihar, Thierry and Bob. I think we've agreed to go with the 1st option - 
"Get Neutron to call XenAPI directly rather than trying to use a daemon".
I will refine the POC patch to make it ready for the formal review.
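For anyone following along, the shape of option 1 is roughly the sketch below: the agent creates one XenAPI session and reuses it for every dom0 call, instead of forking a rootwrap per command. The names here (Dom0CommandClient, the "netwrap" plugin, "run_command") are illustrative assumptions, not the final interface, and the session object is injected rather than imported so the sketch stays self-contained:

```python
# Illustrative sketch of option 1: keep one XenAPI session for the agent's
# lifetime instead of creating one per command. Names are hypothetical;
# the real session would come from the XenAPI bindings (or the planned
# os-xenapi library), not from the injected factory used here.

class Dom0CommandClient:
    """Runs commands in dom0 over a single, lazily created session."""

    def __init__(self, session_factory):
        self._session_factory = session_factory
        self._session = None  # created on first use, then cached

    def execute(self, cmd):
        if self._session is None:  # one login per agent lifetime
            self._session = self._session_factory()
        # One plugin call per command; no new session, no rootwrap fork.
        return self._session.call_plugin("netwrap", "run_command", {"cmd": cmd})
```

With a real session factory (a XenAPI login to dom0), each `execute()` call would replace one invocation of the neutron-rootwrap-xen-dom0 script.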

Regarding testing: I did some basic manual tests in a real lab and it worked as 
expected. More tests will certainly be done going forward. We will evaluate 
whether to change the XenServer CI to use the daemon mode once this fix gets merged.

Regards,
Jianghua

-Original Message-
From: Ihar Hrachyshka [mailto:ihrac...@redhat.com] 
Sent: Thursday, November 03, 2016 10:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem 
for XenServer

Bob Ball  wrote:

> Hi Ihar,
>
> I am puzzled. Is Neutron the only component that needs to call into dom0?
>
> No it's not.  Nova has similar code to call plugins in dom0[1], and 
> Ceilometer will also need to make the calls for some metrics not 
> exposed through the formal API.
>
> We don't want code duplication, and are working on a common os-xenapi 
> library which will include session management.
> It would, of course, make sense for Neutron to use this common library 
> when it is available to replace the session management already 
> existing[2], but I'd argue that as there is existing XenAPI session 
> management code, the refactor to avoid using a per-command rootwrap 
> should be independent of using the session code from os-xenapi.

Seems like you are in a position that requires you to hook into neutron 
processes somehow, and it’s either neutron itself (your solution), or a library 
used by neutron (oslo.rootwrap or similar). I understand why you picked the 
first option.

>
>> I would think that Neutron is not in the business of handling hypervisor 
>> privilege isolation mechanics, and that some other components will 
>> handle that for Neutron (and other services that may need it); that’s 
>> why I suggested considering the oslo.* realm for the proposed code.
>
> This is less about hypervisor privilege isolation and more about the 
> location of the logical component being updated.  Neutron is assuming 
> that the OVS being updated is running in the same location as Neutron 
> itself.  For XenAPI that is not true; the OVS is running in the 
> hypervisor, whereas Neutron is running in a VM (or potentially 
> elsewhere entirely).
>
> If oslo.* is going to decide whether to run a command using a specific 
> abstraction or locally, then it would need some way of making that 
> decision - perhaps either command-based (very ugly and fragile) or 
> with the caller telling oslo.* what logical component was being 
> affected by the call.  The latter sounds to me much more like a 
> Neutron-specific decision.

I believe os-xenapi is a nice path forward. I understand it will take some time 
to take shape.

As for the dom0/domU routing decision, yes, it’s indeed on Neutron to make it. 
But that does not mean we could not rely on existing filtering mechanisms 
(oslo.rootwrap ‘.filters’ files) to define that decision. The fact that the current 
netwrap script for Xen duplicates filters from rootwrap is unfortunate. There 
should be a single source of truth.
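(For reference, oslo.rootwrap filter files are plain INI; a hypothetical entry for an OVS command looks like the following. The idea above is to let the same entries drive the dom0-vs-local routing decision instead of duplicating them in the netwrap script.)

```ini
# Hypothetical oslo.rootwrap '.filters' entry; today the Xen netwrap
# script duplicates definitions like this instead of reusing them.
[Filters]
ovs-vsctl: CommandFilter, ovs-vsctl, root
```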

It would probably require some extensive work in the library, and considering 
that oslo folks moved the library into maintenance mode, it probably won’t 
happen. As for oslo.privsep, that would be a better place for such a feature, 
but we won’t get there in Ocata. Bummer…

I guess I will unblock the patch, and we’ll see what others think. I left some 
initial review comments.

>
>> Side note: if we are going to make drastic changes to the existing Xen-wrap 
>> script, we should first have Xen third-party CI testing running against it, 
>> so as not to introduce regressions. AFAIK it’s not happening right now.
>
> It already is running, and has been for several months - see "Citrix  
> XenServer CI"s "dsvm-tempest-neutron-network" job on  
> https://review.openstack.org/#/c/391308/ as an example.  The CI is  
> non-voting but if it were added to the neutron-ci group we would be very  
> happy to make it voting.

Oh right. It does not validate the new change though. It would be nice to see 
how the new ‘daemon’-ic mode behaves in the real world.

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem for XenServer

2016-11-03 Thread Bob Ball
>> Side note: we should first have Xen third-party CI testing running
>
> It already is running

> Oh right. It does not validate the new change though. It would be nice to 
> see how the new ‘daemon’-ic mode behaves in the real world.

100% agreed.  I'll work with Jianghua to make sure we get automated test 
coverage of this.

Thanks for all your input,

Bob


Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem for XenServer

2016-11-03 Thread Bob Ball
Hi Ihar,

> I am puzzled. Is Neutron the only component that needs to call into dom0?

No it's not.  Nova has similar code to call plugins in dom0[1], and Ceilometer 
will also need to make the calls for some metrics not exposed through the 
formal API.

We don't want code duplication, and are working on a common os-xenapi library 
which will include session management.
It would, of course, make sense for Neutron to use this common library when it 
is available to replace the session management already existing[2], but I'd 
argue that as there is existing XenAPI session management code, the refactor to 
avoid using a per-command rootwrap should be independent of using the session 
code from os-xenapi.

> I would think that Neutron is not in the business of handling hypervisor 
> privilege isolation mechanics, and that some other components will handle 
> that for Neutron (and other services that may need it); that’s why I 
> suggested considering the oslo.* realm for the proposed code.

This is less about hypervisor privilege isolation and more about the location 
of the logical component being updated.  Neutron is assuming that the OVS being 
updated is running in the same location as Neutron itself.  For XenAPI that is 
not true; the OVS is running in the hypervisor, whereas Neutron is running in a 
VM (or potentially elsewhere entirely).

If oslo.* is going to decide whether to run a command using a specific 
abstraction or locally, then it would need some way of making that decision - 
perhaps either command-based (very ugly and fragile) or with the caller telling 
oslo.* what logical component was being affected by the call.  The latter 
sounds to me much more like a Neutron-specific decision.

> Side note: if we are going to make drastic changes to the existing Xen-wrap 
> script, we should first have Xen third-party CI testing running against it, 
> so as not to introduce regressions. AFAIK it’s not happening right now.

It already is running, and has been for several months - see "Citrix XenServer 
CI"s "dsvm-tempest-neutron-network" job on 
https://review.openstack.org/#/c/391308/ as an example.  The CI is non-voting 
but if it were added to the neutron-ci group we would be very happy to make it 
voting.

Thanks,

[1] 
https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/xenapi/client/session.py#n214
[2] 
https://git.openstack.org/cgit/openstack/neutron/tree/bin/neutron-rootwrap-xen-dom0#n112
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem for XenServer

2016-11-03 Thread Ihar Hrachyshka

Thierry Carrez  wrote:

> Bob Ball wrote:
>>>> Oslo.privsep seems to launch a daemon process and set caps for this
>>>> daemon; but for XenAPI, there is no need to spawn the daemon.
>>>
>>> I guess I'm lacking some context... If you don't need special rights,
>>> why use a rootwrap-like thing at all? Why go through a separate process
>>> to call into XenAPI? Why not call in directly from Neutron code?
>>
>> It does not need to go through a separate process at all, or need
>> special rights - see the prototype code at
>> https://review.openstack.org/#/c/390931/ which started this thread,
>> which is directly calling from Neutron code.
>>
>> I guess the argument is that we are trying to run "configure something"
>> which in some cases is privileged in the same host as is running the
>> Neutron code itself, hence the easiest way to do that is to use a
>> rootwrap.  To me, the very use of a "rootwrap" or "privsep" implies that
>> we're running the commands in the same host.
>
> OK, I think I get it now: Neutron is using its rootwrap call interface
> as an indirection layer to route configuration commands either to
> execute as root locally (through classic rootwrap) or to execute on dom0
> through a XenAPI connection (through rootwrap-xen-dom0). The latter
> should really have been called xendom0wrap :)
>
>> Arguably we should have a "per logical component" wrapper - in this case
>> the network / OVS instance that's being managed - as each component
>> could be in a different location.
>> Mounting a loopback device (which Nova has needed to do in the past)
>> clearly needs a rootwrap that runs in the same host as Nova, but when
>> managing the OVS in XenServer's dom0 it needs a similar mechanism to
>> what we are proposing for Neutron.
>>
>> For reference, Nova has a XenAPI session similar to the above and will
>> invoke plugins that exist in Dom0 directly, to make the required
>> modifications.  This is similar in approach to the prototype code above.
>>
>> The current XenAPI rootwrap for Neutron[1] is stupidly inefficient as it
>> was based on the original rootwrap concept (which neutron replaced with
>> the daemon mode to improve performance).  As this is a separate
>> executable, called once for each command, it will create a new session
>> with each call.  There are (as always) multiple ways to fix this:
>>
>> 1) Get Neutron to call XenAPI directly rather than trying to use a
>> daemon - the session management would move from
>> neutron-rootwrap-xen-dom0 into xen_rootwrap_client.py (perhaps this
>> could be better named)
>
> I personally like that option.

I am puzzled. Is Neutron the only component that needs to call into dom0? I 
would think that Neutron is not in the business of handling hypervisor 
privilege isolation mechanics, and that some other components will handle 
that for Neutron (and other services that may need it); that’s why I 
suggested considering the oslo.* realm for the proposed code.

Side note: if we are going to make drastic changes to the existing Xen-wrap 
script, we should first have Xen third-party CI testing running against it, 
so as not to introduce regressions. AFAIK it’s not happening right now.

Ihar



Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem for XenServer

2016-11-03 Thread Thierry Carrez
Bob Ball wrote:
>>> Oslo.privsep seems to launch a daemon process and set caps for this
>>> daemon; but for XenAPI, there is no need to spawn the daemon.
>>
>> I guess I'm lacking some context... If you don't need special rights, why
>> use a rootwrap-like thing at all? Why go through a separate process to call
>> into XenAPI? Why not call in directly from Neutron code?
> 
> It does not need to go through a separate process at all, or need special 
> rights - see the prototype code at https://review.openstack.org/#/c/390931/ 
> which started this thread, which is directly calling from Neutron code.
> 
> I guess the argument is that we are trying to run "configure something" which 
> in some cases is privileged in the same host as is running the Neutron code 
> itself, hence the easiest way to do that is to use a rootwrap.  To me, the 
> very use of a "rootwrap" or "privsep" implies that we're running the commands 
> in the same host.

OK, I think I get it now: Neutron is using its rootwrap call interface
as an indirection layer to route configuration commands either to
execute as root locally (through classic rootwrap) or to execute on dom0
through a XenAPI connection (through rootwrap-xen-dom0). The latter
should really have been called xendom0wrap :)

> Arguably we should have a "per logical component" wrapper - in this case the 
> network / OVS instance that's being managed - as each component could be in a 
> different location.
> Mounting a loopback device (which Nova has needed to do in the past) clearly 
> needs a rootwrap that runs in the same host as Nova, but when managing the 
> OVS in XenServer's dom0 it needs a similar mechanism to what we are proposing 
> for Neutron.
> 
> For reference, Nova has a XenAPI session similar to the above and will invoke 
> plugins that exist in Dom0 directly, to make the required modifications.  
> This is similar in approach to the prototype code above.
> 
> The current XenAPI rootwrap for Neutron[1] is stupidly inefficient as it was 
> based on the original rootwrap concept (which neutron replaced with the 
> daemon mode to improve performance).  As this is a separate executable, 
> called once for each command, it will create a new session with each call.  
> There are (as always) multiple ways to fix this:
> 
> 1) Get Neutron to call XenAPI directly rather than trying to use a daemon - 
> the session management would move from neutron-rootwrap-xen-dom0 into 
> xen_rootwrap_client.py (perhaps this could be better named) 

I personally like that option.

> 2) Get Neutron to call a local rootwrap daemon (as per the current 
> implementation) which maintains a pool of connections and can efficiently 
> call through to XenAPI

That is an option, yes. Basically rootwrap-xen-dom0 was modeled after
rootwrap, so it's doable to evolve it into a rootwrap-xen-dom0-daemon
modeled after rootwrap-daemon. It is slightly more costly/complex than
running in-process, but it would make it more reusable, I guess.

> 3) Extend oslo.rootwrap (and I presume also privsep) to know that some 
> commands can run in different places, and put the logic for connecting to 
> those different places in there.

We don't really evolve rootwrap itself anymore, to focus on privsep. I
guess privsep could in the future grow beyond pure privilege separation
toward a more generic interface for executing code over security
boundaries... But that sounds pretty far away (time for neutron to adopt
privsep, time for privsep to grow the desired features). So the other
options are a more immediate fix to an immediate performance issue.

> We did have a prototype implementation of #2 but it was messy, and #1 seemed 
> architecturally cleaner.

All 3 options are valid. I have a slight preference for #1 over #2 over
#3. Also code for #1 is already up.

Thanks for taking the time to explain it to me :)

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem for XenServer

2016-11-02 Thread Bob Ball
>> Oslo.privsep seems to launch a daemon process and set caps for this
>> daemon; but for XenAPI, there is no need to spawn the daemon.
> 
> I guess I'm lacking some context... If you don't need special rights, why
> use a rootwrap-like thing at all? Why go through a separate process to call
> into XenAPI? Why not call in directly from Neutron code?

It does not need to go through a separate process at all, or need special 
rights - see the prototype code at https://review.openstack.org/#/c/390931/ 
which started this thread, which is directly calling from Neutron code.

I guess the argument is that we are trying to run "configure something" which 
in some cases is privileged in the same host as is running the Neutron code 
itself, hence the easiest way to do that is to use a rootwrap.  To me, the very 
use of a "rootwrap" or "privsep" implies that we're running the commands in the 
same host.

Arguably we should have a "per logical component" wrapper - in this case the 
network / OVS instance that's being managed - as each component could be in a 
different location.
Mounting a loopback device (which Nova has needed to do in the past) clearly 
needs a rootwrap that runs in the same host as Nova, but when managing the OVS 
in XenServer's dom0 it needs a similar mechanism to what we are proposing for 
Neutron.

For reference, Nova has a XenAPI session similar to the above and will invoke 
plugins that exist in Dom0 directly, to make the required modifications.  This 
is similar in approach to the prototype code above.

The current XenAPI rootwrap for Neutron[1] is stupidly inefficient as it was 
based on the original rootwrap concept (which neutron replaced with the daemon 
mode to improve performance).  As this is a separate executable, called once 
for each command, it will create a new session with each call.  There are (as 
always) multiple ways to fix this:

1) Get Neutron to call XenAPI directly rather than trying to use a daemon - the 
session management would move from neutron-rootwrap-xen-dom0 into 
xen_rootwrap_client.py (perhaps this could be better named) 
2) Get Neutron to call a local rootwrap daemon (as per the current 
implementation) which maintains a pool of connections and can efficiently call 
through to XenAPI
3) Extend oslo.rootwrap (and I presume also privsep) to know that some commands 
can run in different places, and put the logic for connecting to those 
different places in there.

We did have a prototype implementation of #2 but it was messy, and #1 seemed 
architecturally cleaner.

Bob 

[1] 
http://git.openstack.org/cgit/openstack/neutron/tree/bin/neutron-rootwrap-xen-dom0



Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem for XenServer

2016-11-02 Thread Thierry Carrez
Jianghua Wang wrote:
> Is Neutron ready to switch from oslo.rootwrap to oslo.privsep?

You'll have to ask neutron-core for an updated status... I think it's
ready, but as I mentioned in my other email the current review
introducing it is currently stalled.

> Oslo.privsep seems to launch a daemon process and set caps for that 
> daemon; but for XenAPI, there is no need to spawn a daemon. All of the 
> commands to be executed are sent to the common dom0 XAPI daemon (which 
> invokes a dedicated plugin to execute them). So I'm confused about how to 
> apply the privileged.entrypoint function. Could you share more details? 
> Thanks very much.

I guess I'm lacking some context... If you don't need special rights,
why use a rootwrap-like thing at all? Why go through a separate process
to call into XenAPI? Why not call in directly from Neutron code?

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem for XenServer

2016-11-02 Thread Jianghua Wang
Thanks Thierry.
Is Neutron ready to switch from oslo.rootwrap to oslo.privsep?
Oslo.privsep seems to launch a daemon process and set caps for that daemon; 
but for XenAPI, there is no need to spawn a daemon. All of the commands to be 
executed are sent to the common dom0 XAPI daemon (which invokes a dedicated 
plugin to execute them). So I'm confused about how to apply the 
privileged.entrypoint function. Could you share more details? Thanks very 
much.

Jianghua

-Original Message-
From: Thierry Carrez [mailto:thie...@openstack.org] 
Sent: Wednesday, November 2, 2016 10:06 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem 
for XenServer

Ihar Hrachyshka wrote:
> Tony Breeds  wrote:
> 
>> On Tue, Nov 01, 2016 at 12:45:43PM +0100, Ihar Hrachyshka wrote:
>>
>>> I suggested in the bug and the PoC review that neutron is not the 
>>> right project to solve the issue. Seems like oslo.rootwrap is a 
>>> better place to maintain privilege management code for OpenStack. 
>>> Ideally, a solution would be found in scope of the library that 
>>> would not require any changes per-project.
>>
>> With the change of direction from oslo.rootwrap to oslo.privsep I 
>> doubt that there is scope to land this in oslo.rootwrap.
> 
> It may take a while for projects to switch to caps for privilege 
> separation.

oslo.privsep doesn't require projects to switch to caps (just that you rewrite 
the commands you call in Python) and can be done incrementally (while keeping 
rootwrap around for not-yet-migrated stuff)...

> It may be easier to unblock xen folks with a small enhancement in 
> oslo.rootwrap scope and handle transition to oslo.privsep on a 
> separate schedule. I would like to hear from oslo folks on where 
> alternative hypervisors fit in their rootwrap/privsep plans.

Like Tony said at this point new features are added to oslo.privsep rather than 
oslo.rootwrap. In this specific case the most forward-looking solution (and 
also best performance and security) would be to write a Neutron 
@privileged.entrypoint function to call into XenAPI and cache the connection.

https://review.openstack.org/#/c/155631 failed to land in Newton, would be 
great if someone could pick it up (maybe a smaller version to introduce privsep 
first, then migrate commands one by one).

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem for XenServer

2016-11-02 Thread Thierry Carrez
Ihar Hrachyshka wrote:
> Tony Breeds  wrote:
> 
>> On Tue, Nov 01, 2016 at 12:45:43PM +0100, Ihar Hrachyshka wrote:
>>
>>> I suggested in the bug and the PoC review that neutron is not the right
>>> project to solve the issue. Seems like oslo.rootwrap is a better
>>> place to
>>> maintain privilege management code for OpenStack. Ideally, a solution
>>> would
>>> be found in scope of the library that would not require any changes
>>> per-project.
>>
>> With the change of direction from oslo.rootwrap to oslo.privsep I doubt
>> that there is scope to land this in oslo.rootwrap.
> 
> It may take a while for projects to switch to caps for privilege
> separation.

oslo.privsep doesn't require projects to switch to caps (just that you
rewrite the commands you call in Python) and can be done incrementally
(while keeping rootwrap around for not-yet-migrated stuff)...

> It may be easier to unblock xen folks with a small
> enhancement in oslo.rootwrap scope and handle transition to oslo.privsep
> on a separate schedule. I would like to hear from oslo folks on where
> alternative hypervisors fit in their rootwrap/privsep plans.

Like Tony said at this point new features are added to oslo.privsep
rather than oslo.rootwrap. In this specific case the most
forward-looking solution (and also best performance and security) would
be to write a Neutron @privileged.entrypoint function to call into
XenAPI and cache the connection.

https://review.openstack.org/#/c/155631 failed to land in Newton, would
be great if someone could pick it up (maybe a smaller version to
introduce privsep first, then migrate commands one by one).
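
Sketched very roughly, the privileged-entrypoint-plus-cached-connection idea looks like the block below. To keep the sketch self-contained, the `entrypoint` decorator here is a no-op stand-in for oslo.privsep's context decorator, and the cached object stands in for a real XenAPI session; both are assumptions for illustration, not the real APIs:

```python
# Rough sketch of a Neutron-side privileged entrypoint caching its XenAPI
# connection. `entrypoint` is a stand-in for oslo.privsep's decorator
# (which would run the function inside the privsep daemon), and the dict
# below is a placeholder for a real logged-in XenAPI session.

_SESSION = None  # module-level cache: one login, many commands


def entrypoint(func):
    # Stand-in: the real decorator would dispatch to the privsep daemon.
    return func


def _cached_session():
    """Create the connection once and reuse it (stubbed here)."""
    global _SESSION
    if _SESSION is None:
        _SESSION = {"logins": 1}  # placeholder for a XenAPI login
    return _SESSION


@entrypoint
def run_in_dom0(cmd):
    session = _cached_session()  # reused across calls; no per-command login
    return (cmd, session["logins"])
```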

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem for XenServer

2016-11-02 Thread Ihar Hrachyshka

Tony Breeds  wrote:

> On Tue, Nov 01, 2016 at 12:45:43PM +0100, Ihar Hrachyshka wrote:
>
>> I suggested in the bug and the PoC review that neutron is not the right
>> project to solve the issue. Seems like oslo.rootwrap is a better place to
>> maintain privilege management code for OpenStack. Ideally, a solution
>> would be found in scope of the library that would not require any changes
>> per-project.
>
> With the change of direction from oslo.rootwrap to oslo.privsep I doubt
> that there is scope to land this in oslo.rootwrap.

It may take a while for projects to switch to caps for privilege 
separation. It may be easier to unblock the Xen folks with a small enhancement 
in oslo.rootwrap scope and handle the transition to oslo.privsep on a separate 
schedule. I would like to hear from the oslo folks on where alternative 
hypervisors fit in their rootwrap/privsep plans.


Ihar



Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem for XenServer

2016-11-01 Thread Tony Breeds
On Tue, Nov 01, 2016 at 12:45:43PM +0100, Ihar Hrachyshka wrote:

> I suggested in the bug and the PoC review that neutron is not the right
> project to solve the issue. Seems like oslo.rootwrap is a better place to
> maintain privilege management code for OpenStack. Ideally, a solution would
> be found in scope of the library that would not require any changes
> per-project.

With the change of direction from oslo.rootwrap to oslo.privsep I doubt that
there is scope to land this in oslo.rootwrap.

Yours Tony.




Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem for XenServer

2016-11-01 Thread Ihar Hrachyshka

Jianghua Wang  wrote:


[original proposal quoted in full; snipped here -- the full message appears below in this thread]

I suggested in the bug and the PoC review that neutron is not the right
project to solve the issue. oslo.rootwrap seems like a better place to
maintain privilege management code for OpenStack. Ideally, a solution would
be found within the scope of that library, requiring no per-project changes.


I moved the bug to Opinion since I don’t believe it’s in scope for neutron;  
I also added oslo.rootwrap to the list of affected projects to collect  
feedback from oslo folks. Finally, I blocked the PoC patch with -2 until we  
agree on how to scope the feature for neutron.


I hope it helps,
Ihar



[openstack-dev] [neutron] proposal to resolve a rootwrap problem for XenServer

2016-10-27 Thread Jianghua Wang
Hi Neutron guys,

I'd like to explain a problem with the XenServer rootwrap and propose a way to 
resolve it. I need some input on how to proceed with this proposal: e.g. does 
it require a spec? Are there any concerns that need further discussion or 
clarification?

Problem description:
As we've known, some neutron services need run commands with root privileges 
and it's achieved by running commands via the rootwrap. And in order to resolve 
performance issue, it has been improved to support daemon mode for the rootwrap 
[1]. Either way has the commands running on the same node/VM which has relative 
neutron services running on.

But XenServer, as a type-1 hypervisor, behaves differently under OpenStack. 
Neutron's compute agent, neutron-openvswitch-agent, needs to run commands in 
dom0, because the tenants' interfaces are plugged into an integration OVS 
bridge that lives in dom0. Currently the script 
https://github.com/openstack/neutron/blob/master/bin/neutron-rootwrap-xen-dom0 
is used as XenServer OpenStack's rootwrap. This script creates a XenAPI 
session with dom0 and passes each command to dom0 for the real execution. 
Every command execution runs this script once, so it suffers from a 
performance issue similar to the non-daemon mode of rootwrap on other 
hypervisors: for each command, the neutron-rootwrap-xen-dom0 script and the 
rootwrap configuration file have to be parsed. Furthermore, the script creates 
a new XenAPI session for each command execution, and XenServer by default logs 
XenAPI session creation events. This causes frequent log file rotation, so 
other genuinely useful log entries are lost.
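For reference, the per-command flow described above can be sketched roughly as 
follows. This is a simplified reading of what neutron-rootwrap-xen-dom0 does, 
not its actual code; the make_session callable stands in for 
XenAPI.Session(url) plus login_with_password(), and the 'netwrap'/'run_command' 
plugin call is illustrative:

```python
import json

def run_in_dom0(make_session, cmd):
    """Per-command flow: each invocation builds a brand-new XenAPI
    session, runs one command via a dom0 plugin, then logs out.

    make_session stands in for XenAPI.Session(url) plus
    login_with_password(); the 'netwrap'/'run_command' plugin call
    follows the script's approach but is illustrative here.
    """
    session = make_session()   # a new session -- and a new XenServer
                               # session-creation log entry -- per command
    try:
        host = session.xenapi.session.get_this_host(session.handle)
        raw = session.xenapi.host.call_plugin(
            host, 'netwrap', 'run_command', {'cmd': json.dumps(cmd)})
        return json.loads(raw)
    finally:
        session.logout()       # the session is torn down immediately
```

Every agent command pays the full session setup/teardown cost here, which is 
exactly the overhead (and the log noise) the proposal below removes.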

Proposal:
oslo.rootwrap supports daemon mode for other hypervisors, but XenServer's 
compute agent can't use it because, again, it needs to run commands in dom0. 
However, we can follow that design and implement a daemon mode for XenServer. 
Once a XenAPI session is created, dom0's XAPI accepts command execution 
requests over that session and replies with the results, so logically we 
already have a daemon in dom0. We can therefore support daemon-mode rootwrap 
with the following design:
1. Develop a daemon client module for XenServer: the agent service uses this 
client module to create a XenAPI session, and keeps that session alive for the 
service's whole lifetime.
2. Whenever a command needs to run in dom0, use the above client to execute it 
there.
This should resolve the issues mentioned above, since the client module is 
imported only once per agent service and a single session is reused for all 
commands. The prototype code [3] works well.
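Under the same assumptions (hypothetical names, not the actual PoC code; the 
'netwrap'/'run_command' plugin call mirrors the existing per-command script), 
the two-step design above could be sketched like this:

```python
import json

class XenDaemonClient:
    """Sketch of the design above: one XenAPI session, created at agent
    startup and kept for the service's whole lifetime, reused for every
    dom0 command (class and method names here are hypothetical)."""

    def __init__(self, session):
        # Step 1: the session is created once, e.g. via
        # XenAPI.Session(url) + login_with_password(), then kept.
        self._session = session
        # Resolve the dom0 host reference once as well.
        self._host = session.xenapi.session.get_this_host(session.handle)

    def execute(self, cmd):
        # Step 2: every command reuses the existing session -- no new
        # login, no re-parsing, no extra session-creation log events.
        raw = self._session.xenapi.host.call_plugin(
            self._host, 'netwrap', 'run_command',
            {'cmd': json.dumps(cmd)})
        return json.loads(raw)
```

With the agent holding one such client, each command maps to a single 
call_plugin over the long-lived session, which is what eliminates the 
per-command parsing and session-creation overhead.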

Are there any concerns or comments about the above proposal? And how should I 
proceed with the solution? We've filed an RFE bug [2], which is currently in 
Wishlist status. Per the Neutron policy [4], it seems the neutron-drivers team 
needs to evaluate the RFE and determine whether a spec is required. Could 
anyone help evaluate this proposal and tell me how I should proceed? I'm of 
course open to any comments. Thanks very much.

[1] https://specs.openstack.org/openstack/oslo-specs/specs/juno/rootwrap-daemon-mode.html
[2] https://bugs.launchpad.net/neutron/+bug/1585510
[3] Prototype code: https://review.openstack.org/#/c/390931/
[4] http://docs.openstack.org/developer/neutron/policies/blueprints.html

Regards,
Jianghua
