Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-11 Thread Eric Windisch
>
>
> > We consider mounting untrusted filesystems on the host kernel to be
> > an unacceptable security risk. A user can craft a malicious filesystem
> > that exploits bugs in the kernel filesystem drivers. This is particularly
> > bad if you allow the kernel to probe for the filesystem type, since Linux
> > has many, many filesystem drivers, most of which are likely not
> > audited enough to be considered safe against malicious data. Even the
> > mainstream ext4 driver had a crasher bug present for many years:
> >
> >   https://lwn.net/Articles/538898/
> >   http://libguestfs.org/guestfs.3.html#security-of-mounting-filesystems
>
> Actually, there's a hidden assumption here that makes this statement not
> necessarily correct for containers.  You're assuming the container has
> to have raw access to the device it's mounting.


I believe it does in the context of the Cinder API, but it does not in the
general context of mounting devices.

I advocate having a filesystem-as-a-service or host-mount API, which aligns
nicely with the desire to mount devices on behalf of containers "on the host".
However, that doesn't change the fact that there are APIs and services whose
contract is, explicitly, to provide block devices to guests. I'll reiterate
that this is where the contract should end (it should not extend to the
ability of guest operating systems to mount; that would be silly).

None of this excludes having an opinion that mounting inside of a guest is
a *useful feature*, even if I don't believe it to be a contractually
obligated one. There is probably no harm in contemplating what mounting
inside of a guest would look like.


> For hypervisors, this
> is true, but it doesn't have to be for containers because the mount
> operation is separate from raw read and write so we can allow or deny
> them granularly.
>

I have been considering allowing containers a read-only view of a block
device. We could use seccomp to make the mount syscall appear to succeed
inside a container, even though it would otherwise be forbidden by the missing
CAP_SYS_ADMIN capability. The syscall would instead be trapped and performed
by a privileged process elsewhere on the host.

The read-only view of the block device should not itself be a security
concern. In fact, it could prove to be a useful feature in its own right.
It is the ability to write to the block device which is a risk should it be
mounted.

Having that read-only view also gives the container a certain awareness of
the volume's existence. It allows the container to ATTEMPT a mount operation,
even if the attempt is denied by policy. That, of course, is where seccomp
would come into play...
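
To make this concrete, here is a minimal sketch using the libseccomp Python
bindings (assuming the seccomp module shipped with libseccomp is installed).
It only makes mount(2) fail cleanly with EPERM; actually trapping the call and
performing it via a privileged helper on the host would need extra machinery
beyond this sketch:

    # Minimal sketch: deny mount(2) inside a container process using the
    # libseccomp Python bindings. Forwarding the trapped call to a privileged
    # helper on the host, as described above, is not shown here.
    import errno
    import seccomp

    def install_mount_filter():
        # Allow everything by default...
        f = seccomp.SyscallFilter(defaction=seccomp.ALLOW)
        # ...but make mount() and umount2() fail with EPERM instead of
        # relying solely on the missing CAP_SYS_ADMIN capability.
        f.add_rule(seccomp.ERRNO(errno.EPERM), "mount")
        f.add_rule(seccomp.ERRNO(errno.EPERM), "umount2")
        f.load()

    if __name__ == "__main__":
        install_mount_filter()
        # Any mount attempt in this process (and its children) now returns
        # EPERM rather than succeeding.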

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-11 Thread Daniel P. Berrange
On Fri, Jul 11, 2014 at 09:53:47AM -0400, Eric Windisch wrote:
> >
> >
> > > Actually, there's a hidden assumption here that makes this statement not
> > > necessarily correct for containers.  You're assuming the container has
> > > to have raw access to the device it's mounting.  For hypervisors, this
> > > is true, but it doesn't have to be for containers because the mount
> > > operation is separate from raw read and write so we can allow or deny
> > > them granularly.
> >
> 
> I agree that a container does not have to have raw access to a device that
> it is mounting, but I also believe that the right contract for Cinder
> support is to expose raw access to those devices into containers. I don't
> believe that Cinder support should imply an ability to support mounting the
> arbitrary filesystems that might live on those volumes, just as we do not
> today require the guest OS on KVM, VMware, or Xen to support mounting any
> arbitrary filesystem that might live on a Cinder volume.
> 
> It might be stretching the contract slightly to say that containers cannot
> currently support ANY of the potential filesystems we might expect on
> Cinder volumes, but I really don't think this should be an issue or point
> of contention.  I'll remind everyone, too, that raw access to volumes
> (block devices) inside of containers is not a useless exercise. There are
> valid things one can do with a block device that have nothing to do with
> the kernel's ability to mount it as a filesystem.
> 
> I believe, however, that for the use case Cinder typically solves for VMs,
> containers folks should be backing a new filesystem-as-a-service API instead.
> I'm not yet certain that Manila is the right solution here, but it may be.
> 
> Finally, for those that do ultimately want the ability to mount from inside
> containers, I think it's ultimately possible. There are ways to allow safer
> mounting inside containers with varying trade-offs. I just don't think it's
> a necessary part of the Nova+Cinder contract as it pertains to the
> capability of the guest, not the capability of the hypervisor (in a sense).

Cinder can easily be used to provide filesystems to the container,
rather than block devices. There are patches pending for Libvirt LXC
which allow booting of the container from a root filesystem that is
backed by a cinder volume. The cinder volume is, however, mounted
in host context. Thus the container never sees the raw block device
and never has the ability to craft malicious filesystem structures
to exploit the kernel bugs I mentioned before. This could easily be
extended to work for non-root filesystems too - all that is required
is an extra field associated with the block device mapping which
specifies the container mount point. The host management layer would be
responsible for all mkfs / mount operations, so the container never
needs to be exposed to block devices, yet can still make full use of the
features that Cinder provides.
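
As a rough sketch of what that host-side flow could look like (a hypothetical
helper, not the pending patches themselves; device path, mount directory and
commands are illustrative):

    # Hypothetical sketch of the host-side flow described above: the host,
    # not the container, formats and mounts the Cinder-backed block device;
    # the container is only ever given the resulting directory tree.
    import subprocess

    def prepare_volume_for_container(dev_path, host_mount_dir, fs_type="ext4"):
        """Format (if needed) and mount a Cinder volume in host context."""
        # Only run mkfs when the device carries no recognisable filesystem.
        probe = subprocess.run(["blkid", "-o", "value", "-s", "TYPE", dev_path],
                               capture_output=True, text=True)
        if not probe.stdout.strip():
            subprocess.run(["mkfs", "-t", fs_type, dev_path], check=True)

        # Mount in the host's mount namespace; the container never sees the
        # raw block device.
        subprocess.run(["mount", dev_path, host_mount_dir], check=True)
        return host_mount_dir

    # The returned directory would then be exposed to the container at the
    # mount point recorded in the block device mapping.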

Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-11 Thread Eric Windisch
>
>
> > Actually, there's a hidden assumption here that makes this statement not
> > necessarily correct for containers.  You're assuming the container has
> > to have raw access to the device it's mounting.  For hypervisors, this
> > is true, but it doesn't have to be for containers because the mount
> > operation is separate from raw read and write so we can allow or deny
> > them granularly.
>

I agree that a container does not have to have raw access to a device that
it is mounting, but I also believe that the right contract for Cinder
support is to expose raw access to those devices into containers. I don't
believe that Cinder support should imply an ability to support mounting the
arbitrary filesystems that might live on those volumes, just as we do not
today require the guest OS on KVM, VMware, or Xen to support mounting any
arbitrary filesystem that might live on a Cinder volume.

It might be stretching the contract slightly to say that containers cannot
currently support ANY of the potential filesystems we might expect on
Cinder volumes, but I really don't think this should be an issue or point
of contention.  I'll remind everyone, too, that raw access to volumes
(block devices) inside of containers is not a useless exercise. There are
valid things one can do with a block device that have nothing to do with
the kernel's ability to mount it as a filesystem.
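
As a trivial illustration of using a block device without ever mounting it
(device path is illustrative):

    # Read the first MiB of a raw block device, e.g. to checksum it or to
    # detect a filesystem signature -- no mount, no kernel filesystem driver.
    import hashlib

    with open("/dev/vdb", "rb") as dev:
        digest = hashlib.sha256(dev.read(1024 * 1024)).hexdigest()
    print(digest)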

I believe, however, that for the use case Cinder typically solves for VMs,
containers folks should be backing a new filesystem-as-a-service API instead.
I'm not yet certain that Manila is the right solution here, but it may be.

Finally, for those that do ultimately want the ability to mount from inside
containers, I think it's ultimately possible. There are ways to allow safer
mounting inside containers with varying trade-offs. I just don't think it's
a necessary part of the Nova+Cinder contract as it pertains to the
capability of the guest, not the capability of the hypervisor (in a sense).


> Where you could avoid the risk is if the image you're getting from
> glance is not in fact a filesystem, but rather a tar.gz of the container
> filesystem. Then Nova would simply be extracting the contents of the
> tar archive and not accessing an untrusted filesystem image from
> glance. IIUC, this is more or less what Docker does.
>

Yes, this is what Docker does.
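
Roughly speaking, the whole operation reduces to something like the sketch
below (paths are made up); no kernel filesystem driver ever has to parse
untrusted data:

    # Sketch of the tar.gz approach: the image is an archive of the
    # container's root filesystem, so provisioning is just extraction -- no
    # untrusted filesystem image is ever mounted by the host kernel.
    import os
    import tarfile

    def unpack_rootfs(image_path, rootfs_dir):
        os.makedirs(rootfs_dir, exist_ok=True)
        with tarfile.open(image_path, "r:gz") as tar:
            # A real implementation must sanitise member names first to block
            # path traversal ("../") and unexpected device nodes.
            tar.extractall(path=rootfs_dir)

    unpack_rootfs("/var/lib/images/container-image.tar.gz",
                  "/var/lib/containers/instance-0001/rootfs")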

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-10 Thread Daniel P. Berrange
On Thu, Jul 10, 2014 at 08:19:36AM -0700, James Bottomley wrote:
> On Thu, 2014-07-10 at 14:47 +0100, Daniel P. Berrange wrote:
> > On Thu, Jul 10, 2014 at 05:36:59PM +0400, Dmitry Guryanov wrote:
> > > I have a question about mounts - in the OpenVZ project each container
> > > has its own filesystem in an image file. So to start a container we
> > > mount this filesystem in the host OS (because all containers share the
> > > same Linux kernel). Is it a security problem from the OpenStack
> > > developers' point of view?
> > > 
> > > I have this question because libvirt's driver uses libguestfs to copy
> > > some files into the guest filesystem instead of a simple mount on the
> > > host. Mounting with libguestfs is slower than mounting on the host, so
> > > there must be strong reasons why the libvirt driver does it.
> > 
> > We consider mounting untrusted filesystems on the host kernel to be
> > an unacceptable security risk. A user can craft a malicious filesystem
> > that exploits bugs in the kernel filesystem drivers. This is particularly
> > bad if you allow the kernel to probe for the filesystem type, since Linux
> > has many, many filesystem drivers, most of which are likely not
> > audited enough to be considered safe against malicious data. Even the
> > mainstream ext4 driver had a crasher bug present for many years:
> > 
> >   https://lwn.net/Articles/538898/
> >   http://libguestfs.org/guestfs.3.html#security-of-mounting-filesystems
> 
> Actually, there's a hidden assumption here that makes this statement not
> necessarily correct for containers.  You're assuming the container has
> to have raw access to the device it's mounting.  For hypervisors, this
> is true, but it doesn't have to be for containers because the mount
> operation is separate from raw read and write so we can allow or deny
> them granularly.

I wasn't actually. In the Libvirt LXC case, Nova takes an image from
glance and mounts it on the host, and then sets up the container to
have its root at the filesystem on the host where it mounted the
image. So the container does not have any raw block access, but Nova
is still mounting an untrusted image from Glance on the host, which
is a risk.

> Consider the old use case, where the container root is actually a
> subdirectory of the host filesystem which gets bind mounted.  The
> container has no possibility of altering the underlying block device
> there.  For block roots, which we also do, at least in the VPS world,
> they're mostly initialised by the hosting provider and the VPS
> environment doesn't actually get to read or write directly to them
> (there's often a block on this).  Of course, they *can* be set up so the
> VPS has raw access and I believe some are, but it's a choice not a
> requirement.

Where you could avoid the risk is if the image you're getting from
glance is not in fact a filesystem, but rather a tar.gz of the container
filesystem. Then Nova would simply be extracting the contents of the
tar archive and not accessing an untrusted filesystem image from
glance. IIUC, this is more or less what Docker does.

Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-10 Thread James Bottomley
On Thu, 2014-07-10 at 14:47 +0100, Daniel P. Berrange wrote:
> On Thu, Jul 10, 2014 at 05:36:59PM +0400, Dmitry Guryanov wrote:
> > I have a question about mounts - in the OpenVZ project each container has
> > its own filesystem in an image file. So to start a container we mount this
> > filesystem in the host OS (because all containers share the same Linux
> > kernel). Is it a security problem from the OpenStack developers' point of
> > view?
> > 
> > I have this question because libvirt's driver uses libguestfs to copy some
> > files into the guest filesystem instead of a simple mount on the host.
> > Mounting with libguestfs is slower than mounting on the host, so there
> > must be strong reasons why the libvirt driver does it.
> 
> We consider mounting untrusted filesystems on the host kernel to be
> an unacceptable security risk. A user can craft a malicious filesystem
> that exploits bugs in the kernel filesystem drivers. This is particularly
> bad if you allow the kernel to probe for the filesystem type, since Linux
> has many, many filesystem drivers, most of which are likely not
> audited enough to be considered safe against malicious data. Even the
> mainstream ext4 driver had a crasher bug present for many years:
> 
>   https://lwn.net/Articles/538898/
>   http://libguestfs.org/guestfs.3.html#security-of-mounting-filesystems

Actually, there's a hidden assumption here that makes this statement not
necessarily correct for containers.  You're assuming the container has
to have raw access to the device it's mounting.  For hypervisors, this
is true, but it doesn't have to be for containers because the mount
operation is separate from raw read and write so we can allow or deny
them granularly.

Consider the old use case, where the container root is actually a
subdirectory of the host filesystem which gets bind mounted.  The
container has no possibility of altering the underlying block device
there.  For block roots, which we also do, at least in the VPS world,
they're mostly initialised by the hosting provider and the VPS
environment doesn't actually get to read or write directly to them
(there's often a block on this).  Of course, they *can* be set up so the
VPS has raw access and I believe some are, but it's a choice not a
requirement.
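
As a sketch of that old use case (paths are illustrative; note that a
read-only bind mount only takes effect via a remount):

    # Sketch of the bind-mounted container root described above: the root is
    # a subdirectory of a host filesystem, optionally made read-only, and the
    # container never touches the underlying block device.
    import subprocess

    def bind_container_root(host_subdir, container_root, read_only=False):
        subprocess.run(["mount", "--bind", host_subdir, container_root],
                       check=True)
        if read_only:
            # A bind mount can only be made read-only via a remount.
            subprocess.run(["mount", "-o", "remount,bind,ro", container_root],
                           check=True)

    bind_container_root("/srv/containers/tenant42/rootfs",
                        "/var/lib/lxc/tenant42/rootfs", read_only=True)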

James



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-10 Thread Daniel P. Berrange
On Thu, Jul 10, 2014 at 06:18:52PM +0400, Dmitry Guryanov wrote:
> On Thursday 10 July 2014 14:47:11 Daniel P. Berrange wrote:
> > On Thu, Jul 10, 2014 at 05:36:59PM +0400, Dmitry Guryanov wrote:
> > > I have a question about mounts - in the OpenVZ project each container
> > > has its own filesystem in an image file. So to start a container we
> > > mount this filesystem in the host OS (because all containers share the
> > > same Linux kernel). Is it a security problem from the OpenStack
> > > developers' point of view?
> > > 
> > > I have this question because libvirt's driver uses libguestfs to copy
> > > some files into the guest filesystem instead of a simple mount on the
> > > host. Mounting with libguestfs is slower than mounting on the host, so
> > > there must be strong reasons why the libvirt driver does it.
> > 
> > We consider mounting untrusted filesystems on the host kernel to be
> > an unacceptable security risk. A user can craft a malicious filesystem
> > that exploits bugs in the kernel filesystem drivers. This is particularly
> > bad if you allow the kernel to probe for the filesystem type, since Linux
> > has many, many filesystem drivers, most of which are likely not
> > audited enough to be considered safe against malicious data. Even the
> > mainstream ext4 driver had a crasher bug present for many years:
> > 
> >   https://lwn.net/Articles/538898/
> >   http://libguestfs.org/guestfs.3.html#security-of-mounting-filesystems
> > 
> > Now, all that said, there are no absolutes in security. You have to
> > decide what risks are important to you and which are not. In the case of
> > KVM, I think this host filesystem risk is unacceptable because you
> > presumably chose machine-based virt in order to get strong separation of
> > kernels. If you have explicitly made the decision to use a container-based
> > virt solution (which inherently has a shared kernel between host and
> > guest), then I think it would be valid for you to say this filesystem risk
> > is one you are prepared to accept, as it is not much worse than the risk
> > you already have by using a single shared kernel for all tenants.
> > 
> 
> Thanks, Daniel, it seems you've answered this question a second time :)
> 
> > So, IMHO, OpenStack should not dictate the security policy for things
> > like this. Different technologies within openstack will provide protection
> > against different attack scenarios. It is a deployment decision for the
> > cloud administrator which of those risks they want to mitigate in their
> > usage.  This is why we still kept the option of using a non-libguestfs
> > approach for file injection.
> > 
> 
> That's exactly what I'd like to know.
> I've also found the spec about starting an LXC container from a block device: 
> https://github.com/openstack/nova-specs/blob/master/specs/juno/libvirt-start-lxc-from-block-devices.rst
> 
> Is it up-to-date?

Yes, it was just recently approved and the code for it is in the process of review.

Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-10 Thread Dmitry Guryanov
On Thursday 10 July 2014 14:47:11 Daniel P. Berrange wrote:
> On Thu, Jul 10, 2014 at 05:36:59PM +0400, Dmitry Guryanov wrote:
> > I have a question about mounts - in the OpenVZ project each container has
> > its own filesystem in an image file. So to start a container we mount this
> > filesystem in the host OS (because all containers share the same Linux
> > kernel). Is it a security problem from the OpenStack developers' point of
> > view?
> > 
> > I have this question because libvirt's driver uses libguestfs to copy some
> > files into the guest filesystem instead of a simple mount on the host.
> > Mounting with libguestfs is slower than mounting on the host, so there
> > must be strong reasons why the libvirt driver does it.
> 
> We consider mounting untrusted filesystems on the host kernel to be
> an unacceptable security risk. A user can craft a malicious filesystem
> that exploits bugs in the kernel filesystem drivers. This is particularly
> bad if you allow the kernel to probe for the filesystem type, since Linux
> has many, many filesystem drivers, most of which are likely not
> audited enough to be considered safe against malicious data. Even the
> mainstream ext4 driver had a crasher bug present for many years:
> 
>   https://lwn.net/Articles/538898/
>   http://libguestfs.org/guestfs.3.html#security-of-mounting-filesystems
> 
> Now, all that said, there are no absolutes in security. You have to
> decide what risks are important to you and which are not. In the case of
> KVM, I think this host filesystem risk is unacceptable because you
> presumably chose machine-based virt in order to get strong separation of
> kernels. If you have explicitly made the decision to use a container-based
> virt solution (which inherently has a shared kernel between host and
> guest), then I think it would be valid for you to say this filesystem risk
> is one you are prepared to accept, as it is not much worse than the risk
> you already have by using a single shared kernel for all tenants.
> 

Thanks, Daniel, it seems you've answered this question a second time :)

> So, IMHO, OpenStack should not dictate the security policy for things
> like this. Different technologies within openstack will provide protection
> against different attack scenarios. It is a deployment decision for the
> cloud administrator which of those risks they want to mitigate in their
> usage.  This is why we still kept the option of using a non-libguestfs
> approach for file injection.
> 

That's exactly what I'd like to know.
I've also found the spec about starting an LXC container from a block device: 
https://github.com/openstack/nova-specs/blob/master/specs/juno/libvirt-start-lxc-from-block-devices.rst

Is it up-to-date?


> Regards,
> Daniel

-- 
Dmitry Guryanov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-10 Thread Daniel P. Berrange
On Thu, Jul 10, 2014 at 05:57:46PM +0400, Dmitry Guryanov wrote:
> On Tuesday 08 July 2014 14:10:25 Michael Still wrote:
> > Joe has a good answer, but you should also be aware of the hypervisor
> > support matrix (https://wiki.openstack.org/wiki/HypervisorSupportMatrix),
> > which hopefully goes some way toward explaining what we expect of a nova
> > driver.
> 
> I've seen this document. Honestly, it's not clear what a nova driver
> developer needs to implement.

Agreed, I don't really consider that document very useful in terms of
identifying what is the "minimal required functionality" for a usable
virt driver. It is far too terse / leaves out loads of detail.

> For example, there are 3 rows with graphical consoles: VNC, spice and RDP. 
> All 
> 3 consoles do the same thing, so I think VNC console support is enough, am I 
> right?

Currently the only ways to interact with a VM are over the network
(e.g. SSH) or via the graphical console (VNC or SPICE or RDP). I
think it is reasonable to say a method of interaction is required
that doesn't rely on the network/ssh. So currently I'd say at least
one of the graphical console methods is a requirement.

For Juno, though, there is a blueprint adding direct interactive serial
console support. Once that lands, I don't think graphical console support
should be a requirement - a serial console is a valid alternative to target.

> Also I think networking support should specify which VIF types the driver
> supports (VIF_TYPE_BRIDGE, VIF_TYPE_OVS, VIF_TYPE_IVS and others), and if a
> driver claims to support VIF_TYPE_OVS it means that it supports all features
> supported by the openvswitch and ML2 neutron plugins (floating IPs, flat
> networking, security groups, VLAN networking).
> So is it enough to implement the OVS VIF type?

IIUC, VIF_TYPE_BRIDGE is the only one that works with Nova network, so
I'd consider that one a prerequisite. I'd probably suggest VIF_TYPE_OVS
is pretty close to a prerequisite too, since as you say, it enables a
lot of the Neutron functionality.
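
For anyone sketching such a driver out, the rough shape (assuming the
nova.virt.driver.ComputeDriver base class; method names follow the interface
discussed above, but the signatures are abbreviated and only illustrative)
would be something like:

    # Hedged skeleton of an out-of-tree container driver, not a working
    # implementation; signatures are illustrative of the current interface.
    from nova.virt import driver

    class MinimalContainerDriver(driver.ComputeDriver):

        def init_host(self, host):
            # Verify kernel features / connect to the container runtime.
            pass

        def spawn(self, context, instance, image_meta, injected_files,
                  admin_password, network_info=None, block_device_info=None):
            # Unpack the image, plug VIFs (e.g. VIF_TYPE_BRIDGE), start the
            # container.
            raise NotImplementedError()

        def destroy(self, context, instance, network_info,
                    block_device_info=None, destroy_disks=True):
            raise NotImplementedError()

        def get_info(self, instance):
            # Power state and basic resource usage for one instance.
            raise NotImplementedError()

        def get_available_resource(self, nodename):
            # Host capacity reported to the scheduler.
            raise NotImplementedError()

        def get_vnc_console(self, context, instance):
            # At least one console type (graphical or, once available,
            # serial) should be offered.
            raise NotImplementedError()

        def plug_vifs(self, instance, network_info):
            raise NotImplementedError()

        def unplug_vifs(self, instance, network_info):
            raise NotImplementedError()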

Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-10 Thread Dmitry Guryanov
On Tuesday 08 July 2014 14:10:25 Michael Still wrote:
> Joe has a good answer, but you should also be aware of the hypervisor
> support matrix (https://wiki.openstack.org/wiki/HypervisorSupportMatrix),
> which hopefully goes some way toward explaining what we expect of a nova
> driver.

I've seen this document. Honestly, it's not clear what a nova driver
developer needs to implement.

For example, there are 3 rows with graphical consoles: VNC, spice and RDP. All 
3 consoles do the same thing, so I think VNC console support is enough, am I 
right?

Also I think networking support should specify which VIF types the driver
supports (VIF_TYPE_BRIDGE, VIF_TYPE_OVS, VIF_TYPE_IVS and others), and if a
driver claims to support VIF_TYPE_OVS it means that it supports all features
supported by the openvswitch and ML2 neutron plugins (floating IPs, flat
networking, security groups, VLAN networking).
So is it enough to implement the OVS VIF type?


> 
> Cheers,
> Michael
> 
> On Tue, Jul 8, 2014 at 9:11 AM, Joe Gordon  wrote:
> > On Jul 3, 2014 11:43 AM, "Dmitry Guryanov"  wrote:
> >> Hi, All!
> >> 
> >> As far as I know, there are some requirements which a virt driver must
> >> meet to use the OpenStack 'label'. For example, it's not allowed to
> >> mount cinder volumes inside the host OS.
> > 
> > I am a little unclear on what your question is. If it is simply about the
> > OpenStack label then:
> > 
> > 'OpenStack' is a trademark that is enforced by the OpenStack foundation.
> > You should check with the foundation to get a formal answer on commercial
> > trademark usage. (As an OpenStack developer, my personal view is having
> > out of tree drivers is a bad idea, but that decision isn't up to me.)
> > 
> > If this is about contributing your driver to nova (great!), then this is
> > the right forum to begin that discussion. We don't have a formal list of
> > requirements for contributing new drivers to nova besides the need for CI
> > testing. If you are interested in contributing a new nova driver, can you
> > provide a brief overview along with your questions to get the discussion
> > started.
> > 
> > Also there is an existing effort to add container support into nova and I
> > hear they are making excellent progress; do you plan on collaborating with
> > those folks?
> > 
> >> Are there any documents describing all such things? How can I determine
> >> if my virtualization driver for nova (developed outside of the nova
> >> mainline) works correctly and meets nova's security requirements?
> >> 
> >> 
> >> --
> >> Dmitry Guryanov
> >> 
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Dmitry Guryanov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-10 Thread Dmitry Guryanov
On Monday 07 July 2014 16:11:21 Joe Gordon wrote:
> On Jul 3, 2014 11:43 AM, "Dmitry Guryanov"  wrote:
> > Hi, All!
> > 
> > As far as I know, there are some requirements which a virt driver must
> > meet to use the OpenStack 'label'. For example, it's not allowed to mount
> > cinder volumes inside the host OS.
> 
> I am a little unclear on what your question is. If it is simply about the
> OpenStack label then:
> 
> 'OpenStack' is a trademark that is enforced by the OpenStack foundation.
> You should check with the foundation to get a formal answer on commercial
> trademark usage. (As an OpenStack developer, my personal view is having out
> of tree drivers is a bad idea, but that decision isn't up to me.)
> 
> If this is about contributing your driver to nova (great!), then this is
> the right forum to begin that discussion. We don't have a formal list of
> requirements for contributing new drivers to nova besides the need for CI
> testing. If you are interested in contributing a new nova driver, can you
> provide a brief overview along with your questions to get the discussion
> started.
> 
> Also there is an existing effort to add container support into nova and I
> hear they are making excellent progress; do you plan on collaborating with
> those folks?
> 
> > Are there any documents describing all such things? How can I determine
> > if my virtualization driver for nova (developed outside of the nova
> > mainline) works correctly and meets nova's security requirements?

We have developed a driver, pcs-nova-driver
(https://github.com/parallels/pcs-nova-driver), but decided to freeze it and
put our efforts into the containers driver together with the nova-containers
team and the libvirt driver.


> > 
> > 
> > --
> > Dmitry Guryanov
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Dmitry Guryanov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-10 Thread Daniel P. Berrange
On Thu, Jul 10, 2014 at 05:36:59PM +0400, Dmitry Guryanov wrote:
> I have a question about mounts - in the OpenVZ project each container has
> its own filesystem in an image file. So to start a container we mount this
> filesystem in the host OS (because all containers share the same Linux
> kernel). Is it a security problem from the OpenStack developers' point of
> view?
> 
> I have this question because libvirt's driver uses libguestfs to copy some
> files into the guest filesystem instead of a simple mount on the host.
> Mounting with libguestfs is slower than mounting on the host, so there must
> be strong reasons why the libvirt driver does it.

We consider mounting untrusted filesystems on the host kernel to be
an unacceptable security risk. A user can craft a malicious filesystem
that exploits bugs in the kernel filesystem drivers. This is particularly
bad if you allow the kernel to probe for the filesystem type, since Linux
has many, many filesystem drivers, most of which are likely not
audited enough to be considered safe against malicious data. Even the
mainstream ext4 driver had a crasher bug present for many years:

  https://lwn.net/Articles/538898/
  http://libguestfs.org/guestfs.3.html#security-of-mounting-filesystems

Now, all that said, there are no absolutes in security. You have to
decide what risks are important to you and which are not. In the case of
KVM, I think this host filesystem risk is unacceptable because you
presumably chose machine-based virt in order to get strong separation of
kernels. If you have explicitly made the decision to use a container-based
virt solution (which inherently has a shared kernel between host and
guest), then I think it would be valid for you to say this filesystem risk
is one you are prepared to accept, as it is not much worse than the risk
you already have by using a single shared kernel for all tenants.

So, IMHO, OpenStack should not dictate the security policy for things
like this. Different technologies within openstack will provide protection
against different attack scenarios. It is a deployment decision for the
cloud administrator which of those risks they want to mitigate in their
usage.  This is why we still kept the option of using a non-libguestfs
approach for file injection.
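
For reference, the libguestfs route keeps the untrusted filesystem inside a
small throwaway appliance VM rather than the host kernel. A minimal sketch
using the guestfs Python bindings (image path, device name and file contents
are illustrative; a partitioned disk would need /dev/sda1 or inspect_os()
instead of /dev/sda):

    # Minimal sketch of libguestfs-style file injection: the untrusted image
    # is mounted inside libguestfs's sandboxed appliance, never by the host
    # kernel. Paths and contents are illustrative only.
    import guestfs

    def inject_file(image_path, guest_path, content):
        g = guestfs.GuestFS()
        g.add_drive_opts(image_path, format="raw", readonly=0)
        g.launch()                 # boots the sandboxed appliance
        g.mount("/dev/sda", "/")   # the mount happens in the appliance kernel
        g.write(guest_path, content)
        g.umount_all()
        g.shutdown()
        g.close()

    inject_file("/var/lib/nova/instances/instance-0001/disk",
                "/etc/hostname", b"instance-0001\n")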

Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-10 Thread Dmitry Guryanov
On Monday 07 July 2014 16:11:21 Joe Gordon wrote:
> On Jul 3, 2014 11:43 AM, "Dmitry Guryanov"  wrote:
> > Hi, All!
> > 
> > As far as I know, there are some requirements which a virt driver must
> > meet to use the OpenStack 'label'. For example, it's not allowed to mount
> > cinder volumes inside the host OS.
> 
> I am a little unclear on what your question is. If it is simply about the
> OpenStack label then:
> 
> 'OpenStack' is a trademark that is enforced by the OpenStack foundation.
> You should check with the foundation to get a formal answer on commercial
> trademark usage. (As an OpenStack developer, my personal view is having out
> of tree drivers is a bad idea, but that decision isn't up to me.)
> 
> If this is about contributing your driver to nova (great!), then this is
> the right forum to begin that discussion. We don't have a formal list of
> requirements for contributing new drivers to nova besides the need for CI
> testing. If you are interested in contributing a new nova driver, can you
> provide a brief overview along with your questions to get the discussion
> started.

OK, thanks!

Actually we are discussing how to implement containers support in the
nova-containers team.

I have a question about mounts - in the OpenVZ project each container has its
own filesystem in an image file. So to start a container we mount this
filesystem in the host OS (because all containers share the same Linux
kernel). Is it a security problem from the OpenStack developers' point of
view?


I have this question because libvirt's driver uses libguestfs to copy some
files into the guest filesystem instead of a simple mount on the host.
Mounting with libguestfs is slower than mounting on the host, so there must be
strong reasons why the libvirt driver does it.


> 
> Also there is an existing effort to add container support into nova and I
> hear they are making excellent progress; do you plan on collaborating with
> those folks?
> 
> > Are there any documents describing all such things? How can I determine
> > if my virtualization driver for nova (developed outside of the nova
> > mainline) works correctly and meets nova's security requirements?
> > 
> > 
> > --
> > Dmitry Guryanov
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Dmitry Guryanov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-07 Thread Michael Still
Joe has a good answer, but you should also be aware of the hypervisor
support matrix (https://wiki.openstack.org/wiki/HypervisorSupportMatrix),
which hopefully goes some way toward explaining what we expect of a nova
driver.

Cheers,
Michael

On Tue, Jul 8, 2014 at 9:11 AM, Joe Gordon  wrote:
>
> On Jul 3, 2014 11:43 AM, "Dmitry Guryanov"  wrote:
>>
>> Hi, All!
>>
>> As far as I know, there are some requirements which a virt driver must meet
>> to use the OpenStack 'label'. For example, it's not allowed to mount cinder
>> volumes inside the host OS.
>
> I am a little unclear on what your question is. If it is simply about the
> OpenStack label then:
>
> 'OpenStack' is a trademark that is enforced by the OpenStack foundation. You
> should check with the foundation to get a formal answer on commercial
> trademark usage. (As an OpenStack developer, my personal view is having out
> of tree drivers is a bad idea, but that decision isn't up to me.)
>
> If this is about contributing your driver to nova (great!), then this is the
> right forum to begin that discussion. We don't have a formal list of
> requirements for contributing new drivers to nova besides the need for CI
> testing. If you are interested in contributing a new nova driver, can you
> provide a brief overview along with your questions to get the discussion
> started.
>
> Also there is an existing effort to add container support into nova and I
> hear they are making excellent progress; do you plan on collaborating with
> those folks?
>
>>
>> Are there any documents describing all such things? How can I determine if
>> my virtualization driver for nova (developed outside of the nova mainline)
>> works correctly and meets nova's security requirements?
>>
>>
>> --
>> Dmitry Guryanov
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers] Nova virt driver requirements

2014-07-07 Thread Joe Gordon
On Jul 3, 2014 11:43 AM, "Dmitry Guryanov"  wrote:
>
> Hi, All!
>
> As far as I know, there are some requirements which a virt driver must meet
> to use the OpenStack 'label'. For example, it's not allowed to mount cinder
> volumes inside the host OS.

I am a little unclear on what your question is. If it is simply about the
OpenStack label then:

'OpenStack' is a trademark that is enforced by the OpenStack foundation.
You should check with the foundation to get a formal answer on commercial
trademark usage. (As an OpenStack developer, my personal view is that having
out-of-tree drivers is a bad idea, but that decision isn't up to me.)

If this is about contributing your driver to nova (great!), then this is
the right forum to begin that discussion. We don't have a formal list of
requirements for contributing new drivers to nova besides the need for CI
testing. If you are interested in contributing a new nova driver, can you
provide a brief overview along with your questions to get the discussion
started.

Also there is an existing effort to add container support into nova and I
hear they are making excellent progress; do you plan on collaborating with
those folks?

>
> Are there any documents describing all such things? How can I determine if
> my virtualization driver for nova (developed outside of the nova mainline)
> works correctly and meets nova's security requirements?
>
>
> --
> Dmitry Guryanov
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev