Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-11 Thread Ben Swartzlander


On 03/04/2015 09:33 AM, Danny Al-Gaaf wrote:

On 04.03.2015 15:18, Csaba Henk wrote:

Hi Danny,

- Original Message -

From: Danny Al-Gaaf danny.al-g...@bisect.de
To: Deepak Shetty dpkshe...@gmail.com
Cc: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org,
ceph-de...@vger.kernel.org
Sent: Wednesday, March 4, 2015 3:05:46 PM
Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila


...

Another level of indirection. I really like the approach of filesystem
passthrough ... the only critical question is whether virtfs/p9 is still
supported in some way (and, if not: why not?).

That seems to be the only biggie, doesn't it?

Yes, it is.


We -- Red Hat -- considered a similar, virtfs-based driver for GlusterFS,
but we dropped that plan precisely because virtfs is abandonware.

As far as I know it was meant to be a research project, and having
produced a fairly well-working POC it was concluded -- but Deepak knows
more of the story.

I would like to understand why it was abandoned. I see a need for
filesystem passthrough in the area of virtualization. Is there another
solution available?


Danny, I read through this thread and I wasn't sure I had anything to 
add, but now that it's gone quiet, I'm wondering what your plan is.


I wasn't aware that VirtFS is being considered abandonware, but it did
seem to me that it wasn't being actively maintained. After looking at
the alternatives I considered VirtFS to be the best option for doing
what it does, but its applicability is so narrow that it's hard to find
it appealing. I have the following problems with VirtFS:
* It requires a QEMU/KVM or Xen hypervisor. VMware and Hyper-V have zero
support for it, nor any plans to support it.
* It requires a Linux or BSD guest. Windows guests can't use it at all.
Some googling turned up various projects that might give you a head start
writing a Windows VirtFS client, but we're a long way from having
something usable.
* VirtFS boils the filesystem down to the bare minimum, thanks to its P9
heritage. Interesting features like caching, locking, security
(authentication/authorization/privacy), name mapping, and multipath I/O
are either not implemented or delegated to the hypervisor, which may or
may not meet the needs of the guest application.
* Applications designed to run on multiple nodes with shared filesystem
storage tend to be tested and supported on NFS or CIFS, because those
have been around forever. VirtFS is tested and supported by nobody, so
getting application-level support will be impossible.


The third one is the one that kills it for me. VirtFS is useful in 
extremely narrow use cases only. Manila is trying to provide shared 
filesystems in as wide a set of applications as possible. VirtFS offers 
nothing that can't also be achieved another way. That's not to say the 
other way is always ideal. If your use case happens to match exactly 
what VirtFS does well (QEMU hypervisor, Linux guest, no special 
filesystem requirements) then the alternatives may not look so good.


I'm completely in favor of seeing virtfs support go into Nova and doing 
integration with it from the Manila side. I'm concerned though that it 
might be a lot of work, and it might benefit only a few people. Have
you found any others who share your goal and are willing to help?




Danny






Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-04 Thread Gregory Farnum
On Wed, Mar 4, 2015 at 7:03 AM, Csaba Henk ch...@redhat.com wrote:


 - Original Message -
 From: Danny Al-Gaaf danny.al-g...@bisect.de
 To: Csaba Henk ch...@redhat.com, OpenStack Development Mailing List 
 (not for usage questions)
 openstack-dev@lists.openstack.org
 Cc: ceph-de...@vger.kernel.org
 Sent: Wednesday, March 4, 2015 3:26:52 PM
 Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila

 On 04.03.2015 15:12, Csaba Henk wrote:
  - Original Message -
  From: Danny Al-Gaaf danny.al-g...@bisect.de To: OpenStack
  Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org, ceph-de...@vger.kernel.org
  Sent: Sunday, March 1, 2015 3:07:36 PM Subject: Re:
  [openstack-dev] [Manila] Ceph native driver for manila
  ...
  For us security is very critical, as is performance. The
  first solution via Ganesha is not what we prefer (using CephFS
  via p9 and NFS would not perform that well, I guess). The second
  solution, to use
 
  Can you please explain why the Ganesha-based stack
  involves 9p? (Maybe I'm missing something basic, but I don't know.)

 Sorry, it seems that I mixed it up with the p9 case. But performance
 may still be an issue if you use NFS on top of CephFS (incl. all the
 VM layers involved in this setup).

 For me the question with all these NFS setups is: why should I use NFS
 on top of CephFS? What is the reason for CephFS to exist in this case? I
 would like to use CephFS directly or via filesystem passthrough.

 That's a good question. Or indeed, two questions:

 1. Why use NFS?
 2. Why does the NFS export of Ceph need to involve CephFS?

 1.

 As for why NFS -- it's probably a good selling point that it's a
 standard filesystem export technology and the tenants can remain
 backend-unaware as long as the backend provides an NFS export.

 We are working on the Ganesha library --

 https://blueprints.launchpad.net/manila/+spec/gateway-mediated-with-ganesha

 with the aim of making it easy to create Ganesha-based drivers. So if you
 already have an FSAL, you can get an NFS-exporting driver almost for free
 (with a modest amount of glue code). So you could consider making such a
 driver for Ceph, to satisfy customers who demand NFS access, even if there
 is a native driver which gets the limelight.

 (See commits implementing this under Work Items of the BP -- one is the
 actual Ganesha library and the other two show how it can be hooked in, by the
 example of the Gluster driver. At the moment flat network (share-server-less)
 drivers are supported.)
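
 Not from the BP -- just to give a flavor -- a Ganesha export backed by the
 Ceph FSAL would look roughly like this (the export ID and paths below are
 made up):

     EXPORT {
         Export_Id = 100;                 # made-up ID
         Path = "/volumes/share-01";      # path inside CephFS
         Pseudo = "/share-01";            # NFSv4 pseudo-fs path
         Access_Type = RW;
         Squash = No_Root_Squash;
         FSAL {
             Name = CEPH;                 # the libcephfs-backed FSAL
         }
     }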

 2.

 As for why CephFS was the technology chosen for implementing the Ceph FSAL
 for Ganesha, that's something I'd also like to know. I have the following
 naive question in mind: Would it not have been better to implement the Ceph
 FSAL with something »closer to« Ceph? And I have three actual questions
 about it:

 - Does this question make sense in this form, and if not, how should it
   be amended?
 - I'm asking the question itself, or the amended version of it.
 - If the answer is yes, is there a chance someone would create an alternative
   Ceph FSAL on that assumed closer-to-Ceph technology?

I don't understand. What closer-to-Ceph technology do you want than
native use of the libcephfs library? Are you saying to use raw RADOS
to provide storage instead of CephFS?

In that case, it doesn't make a lot of sense: CephFS is how you
provide a real filesystem in the Ceph ecosystem. I suppose if you
wanted to create a lighter-weight pseudo-filesystem you could do so
(somebody is building a RadosFS, I think from CERN?) but then it's
not interoperable with other stuff.
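
(For reference, a minimal sketch of what "native use of the libcephfs
library" looks like from the Python binding, assuming a reachable cluster,
a local ceph.conf and an authorized client keyring; the paths are made up:

    import cephfs

    # connect to the cluster and mount the CephFS namespace directly,
    # with no NFS/9p layer in between
    fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
    fs.mount()

    # ordinary filesystem-style operations through libcephfs
    fs.mkdir('/tenants/foo', 0o755)
    print(fs.stat('/tenants/foo'))

    fs.unmount()
    fs.shutdown()

This is the same library the Ganesha Ceph FSAL builds on.)
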
-Greg



Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-04 Thread Danny Al-Gaaf
On 04.03.2015 15:18, Csaba Henk wrote:
 Hi Danny,
 
 - Original Message -
 From: Danny Al-Gaaf danny.al-g...@bisect.de
 To: Deepak Shetty dpkshe...@gmail.com
 Cc: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org,
 ceph-de...@vger.kernel.org
 Sent: Wednesday, March 4, 2015 3:05:46 PM
 Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila

 ...
 Another level of indirection. I really like the approach of filesystem
 passthrough ... the only critical question is whether virtfs/p9 is still
 supported in some way (and, if not: why not?).
 
 That seems to be the only biggie, doesn't it?

Yes, it is.

 We -- Red Hat -- considered a similar, virtfs-based driver for GlusterFS,
 but we dropped that plan precisely because virtfs is abandonware.
 
 As far as I know it was meant to be a research project, and having
 produced a fairly well-working POC it was concluded -- but Deepak knows
 more of the story.

I would like to understand why it was abandoned. I see a need for
filesystem passthrough in the area of virtualization. Is there another
solution available?

Danny



Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-04 Thread Danny Al-Gaaf
On 04.03.2015 15:12, Csaba Henk wrote:
 Hi Danny,
 
 - Original Message -
 From: Danny Al-Gaaf danny.al-g...@bisect.de To: OpenStack
 Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org, ceph-de...@vger.kernel.org 
 Sent: Sunday, March 1, 2015 3:07:36 PM Subject: Re:
 [openstack-dev] [Manila] Ceph native driver for manila
 ...
 For us security is very critical, as is performance. The
 first solution via Ganesha is not what we prefer (using CephFS
 via p9 and NFS would not perform that well, I guess). The second
 solution, to use
 
 Can you please explain why the Ganesha-based stack
 involves 9p? (Maybe I'm missing something basic, but I don't know.)

Sorry, it seems that I mixed it up with the p9 case. But performance
may still be an issue if you use NFS on top of CephFS (incl. all the
VM layers involved in this setup).

For me the question with all these NFS setups is: why should I use NFS
on top of CephFS? What is the reason for CephFS to exist in this case? I
would like to use CephFS directly or via filesystem passthrough.

Danny




Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-04 Thread Csaba Henk
Hi Danny,

- Original Message -
 From: Danny Al-Gaaf danny.al-g...@bisect.de
 To: Deepak Shetty dpkshe...@gmail.com
 Cc: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org,
 ceph-de...@vger.kernel.org
 Sent: Wednesday, March 4, 2015 3:05:46 PM
 Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila
 
...
 Another level of indirection. I really like the approach of filesystem
 passthrough ... the only critical question is whether virtfs/p9 is still
 supported in some way (and, if not: why not?).

That seems to be the only biggie, doesn't it?

We -- Red Hat -- considered a similar, virtfs-based driver for GlusterFS,
but we dropped that plan precisely because virtfs is abandonware.

As far as I know it was meant to be a research project, and having
produced a fairly well-working POC it was concluded -- but Deepak knows
more of the story.

What's your take on it? I'm really curious if there is any chance for
a positive answer.

Cheers,
Csaba 
 



Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-04 Thread Danny Al-Gaaf
On 04.03.2015 05:19, Deepak Shetty wrote:
 On Wed, Mar 4, 2015 at 5:10 AM, Danny Al-Gaaf
 danny.al-g...@bisect.de wrote:
 On 03.03.2015 19:31, Deepak Shetty wrote: [...]
[...]
 
 I was curious to understand. IIUC Neutron provides private and
 public networks, and for VMs to access the external CephFS network,
 the tenant private network needs to be bridged/routed to the
 external provider network, and there are ways Neutron achieves
 it.

 Are you saying that this approach of Neutron is insecure?
 
 I don't say Neutron itself is insecure.

 The problem is: we don't want any VM to get access to the Ceph
 public network at all, since this would mean access to all MON,
 OSD and MDS daemons.

 If a tenant VM has access to the Ceph public net, which is needed
 to use/mount native CephFS in this VM, one critical issue would
 be: the client can attack any Ceph component via this network.
 Maybe I'm missing something, but routing doesn't change this fact.
 
 
 Agree, but there are ways you can restrict the tenant VMs to
 specific network ports only, using Neutron security groups, and limit
 what the tenant VM can do. On the CephFS side one can use SELinux
 labels to provide an additional level of security for the Ceph
 daemons, wherein only certain processes can access/modify them. I am
 just thinking aloud here; I'm not sure how well CephFS works with
 SELinux combined.

I don't see how Neutron security groups would help here. The problem
is: if a VM has access, in whatever way, to the Ceph network, an
attacker/user can on the one hand attack ALL Ceph daemons, and on the
other hand, if there is a bug, crash all daemons and you would lose the
complete cluster.

SELinux profiles may help with preventing someone from subverting
security or gaining privileges; they would not, in this case, prevent
the VM user from crashing the cluster.

 Thinking more, it seems like you then need a solution that goes via
 the serviceVM approach but provides native CephFS mounts instead of
 NFS?

Another level of indirection. I really like the approach of filesystem
passthrough ... the only critical question is whether virtfs/p9 is still
supported in some way (and, if not: why not?).

Danny



Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-04 Thread Csaba Henk
Hi Danny,

- Original Message -
 From: Danny Al-Gaaf danny.al-g...@bisect.de
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org,
 ceph-de...@vger.kernel.org
 Sent: Sunday, March 1, 2015 3:07:36 PM
 Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila
...
 For us security is very critical, as is performance. The first
 solution via Ganesha is not what we prefer (using CephFS via p9 and
 NFS would not perform that well, I guess). The second solution, to use

Can you please explain why the Ganesha-based stack involves 9p?
(Maybe I'm missing something basic, but I don't know.)

Cheers
Csaba



Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-03 Thread Deepak Shetty
On Tue, Mar 3, 2015 at 12:51 AM, Luis Pabon lpa...@redhat.com wrote:

 What is the status of virtfs?  I am not sure if it is being maintained.
 Does anyone know?


The last I knew, it's not maintained.
Also, for what it's worth, p9 won't work for Windows guests (unless there
is a p9 driver for Windows?), if that is part of your use case/scenario.

Last but not least, p9/virtfs would expose a p9 mount, not a Ceph mount,
to the VMs, which means that if there are CephFS-specific mount options
they may not work.




 - Luis

 - Original Message -
 From: Danny Al-Gaaf danny.al-g...@bisect.de
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org, ceph-de...@vger.kernel.org
 Sent: Sunday, March 1, 2015 9:07:36 AM
 Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila

 On 27.02.2015 01:04, Sage Weil wrote:
  [sorry for ceph-devel double-post, forgot to include
  openstack-dev]
 
  Hi everyone,
 
  The online Ceph Developer Summit is next week[1] and among other
  things we'll be talking about how to support CephFS in Manila.  At
  a high level, there are basically two paths:

 We also discussed the CephFS Manila topic at the last Manila Midcycle
 Meetup (Kilo) [1][2].

  2) Native CephFS driver
 
  As I currently understand it,
 
  - The driver will set up CephFS auth credentials so that the guest
    VM can mount CephFS directly.
  - The guest VM will need access to the Ceph network.  That makes this
    mainly interesting for private clouds and trusted environments.
  - The guest is responsible for running 'mount -t ceph ...'.
  - I'm not sure how we provide the auth credential to the user/guest...

 The auth credentials currently need to be handled by an application
 orchestration solution, I guess. I see no solution at the Manila
 layer level atm.


There were some discussions in the past in the Manila community on guest
auto-mount, but I guess nothing was conclusive there.

Application orchestration can be achieved by having tenant-specific VM
images with the creds pre-loaded, or having the creds injected via
cloud-init should work too?
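
For illustration only, one way the cloud-init route could look -- the share
name, monitor address and key below are made-up placeholders:

    #cloud-config
    write_files:
      - path: /etc/ceph/share-01.secret
        permissions: '0600'
        content: "AQplaceholdercephxkey=="
    runcmd:
      - mkdir -p /mnt/share-01
      - "mount -t ceph mon1.example.net:6789:/volumes/share-01 /mnt/share-01 -o name=share-01,secretfile=/etc/ceph/share-01.secret"

i.e. the orchestration layer only has to template the secret and the export
path into the instance's user data.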



 If Ceph provided OpenStack Keystone authentication for
 rados/cephfs instead of CephX, it could be handled via app orch easily.

  This would perform better than an NFS gateway, but there are
  several gaps on the security side that make this unusable currently
  in an untrusted environment:
 
  - The CephFS MDS auth credentials currently are _very_ basic.  As
  in, binary: can this host mount or it cannot.  We have the auth cap
  string parsing in place to restrict to a subdirectory (e.g., this
  tenant can only mount /tenants/foo), but the MDS does not enforce
  this yet.  [medium project to add that]
 
  - The same credential could be used directly via librados to access
  the data pool directly, regardless of what the MDS has to say about
  the namespace.  There are two ways around this:
 
  1- Give each tenant a separate rados pool.  This works today.
  You'd set a directory policy that puts all files created in that
  subdirectory in that tenant's pool, then only let the client access
  those rados pools.
 
  1a- We currently lack an MDS auth capability that restricts which
  clients get to change that policy.  [small project]
 
  2- Extend the MDS file layouts to use the rados namespaces so that
   users can be separated within the same rados pool.  [Medium
  project]
 
  3- Something fancy with MDS-generated capabilities specifying which
   rados objects clients get to read.  This probably falls in the
  category of research, although there are some papers we've seen
  that look promising. [big project]
 
  Anyway, this leads to a few questions:
 
  - Who is interested in using Manila to attach CephFS to guest VMs?


I didn't get this question... The goal of Manila is to provision shared
filesystems to VMs, so everyone interested in using CephFS would be
interested in attaching (I guess you meant mounting?) CephFS to VMs, no?



  - What use cases are you interested in?
  - How important is security in your environment?


The NFS-Ganesha-based service VM approach (for network isolation) in Manila
is still in the works, afaik.



 As you know, we (Deutsche Telekom) are very interested in providing shared
 filesystems via CephFS to VMs instead of e.g. via NFS. We can
 provide/discuss use cases at CDS.

 For us security is very critical, as is performance. The first
 solution via Ganesha is not what we prefer (using CephFS via p9 and
 NFS would not perform that well, I guess). The second solution, to use
 CephFS directly in the VM, would be a bad solution from the security
 point of view, since we can't expose the Ceph public network directly
 to the VMs, to prevent all the security issues we discussed already.


Is there any place where the security issues are captured for the case
where VMs access CephFS directly? I was curious to understand. IIUC
Neutron provides private and public networks, and for VMs to access the
external CephFS network

Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-03 Thread Danny Al-Gaaf
On 03.03.2015 19:31, Deepak Shetty wrote:
[...]
 For us security is very critical, as is performance. The
 first solution via Ganesha is not what we prefer (using CephFS
 via p9 and NFS would not perform that well, I guess). The second
 solution, to use CephFS directly in the VM, would be a bad
 solution from the security point of view, since we can't expose
 the Ceph public network directly to the VMs, to prevent all the
 security issues we discussed already.
 
 
 Is there any place where the security issues are captured for the case
 where VMs access CephFS directly?

No, there isn't any place, and this is the issue for us.

 I was curious to understand. IIUC Neutron provides private and
 public networks, and for VMs to access the external CephFS network, the
 tenant private network needs to be bridged/routed to the external
 provider network, and there are ways Neutron achieves it.

 Are you saying that this approach of Neutron is insecure?

I don't say Neutron itself is insecure.

The problem is: we don't want any VM to get access to the Ceph public
network at all, since this would mean access to all MON, OSD and MDS
daemons.

If a tenant VM has access to the Ceph public net, which is needed to
use/mount native CephFS in this VM, one critical issue would be: the
client can attack any Ceph component via this network. Maybe I'm
missing something, but routing doesn't change this fact.

Danny






Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-03 Thread Deepak Shetty
On Wed, Mar 4, 2015 at 5:10 AM, Danny Al-Gaaf danny.al-g...@bisect.de
wrote:

 On 03.03.2015 19:31, Deepak Shetty wrote:
 [...]
  For us security is very critical, as is performance. The
  first solution via Ganesha is not what we prefer (using CephFS
  via p9 and NFS would not perform that well, I guess). The second
  solution, to use CephFS directly in the VM, would be a bad
  solution from the security point of view, since we can't expose
  the Ceph public network directly to the VMs, to prevent all the
  security issues we discussed already.
 
 
  Is there any place where the security issues are captured for the case
  where VMs access CephFS directly?

 No, there isn't any place, and this is the issue for us.

  I was curious to understand. IIUC Neutron provides private and
  public networks, and for VMs to access the external CephFS network, the
  tenant private network needs to be bridged/routed to the external
  provider network, and there are ways Neutron achieves it.

  Are you saying that this approach of Neutron is insecure?

 I don't say Neutron itself is insecure.

 The problem is: we don't want any VM to get access to the Ceph public
 network at all, since this would mean access to all MON, OSD and MDS
 daemons.

 If a tenant VM has access to the Ceph public net, which is needed to
 use/mount native CephFS in this VM, one critical issue would be: the
 client can attack any Ceph component via this network. Maybe I'm
 missing something, but routing doesn't change this fact.


Agree, but there are ways you can restrict the tenant VMs to specific
network ports only, using Neutron security groups, and limit what the
tenant VM can do. On the CephFS side one can use SELinux labels to provide
an additional level of security for the Ceph daemons, wherein only certain
processes can access/modify them. I am just thinking aloud here; I'm not
sure how well CephFS works with SELinux combined.
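
Roughly what I have in mind, as a sketch only: the group name and the Ceph
public network CIDR below are made up, 6789 and 6800-7300 are just the
default MON and OSD/MDS ports, and the default allow-all egress rules would
also need removing for this to mean anything:

    neutron security-group-create ceph-client
    # allow egress to the Ceph monitors only ...
    neutron security-group-rule-create --direction egress --protocol tcp \
        --port-range-min 6789 --port-range-max 6789 \
        --remote-ip-prefix 192.0.2.0/24 ceph-client
    # ... and to the OSD/MDS port range, nothing else towards that network
    neutron security-group-rule-create --direction egress --protocol tcp \
        --port-range-min 6800 --port-range-max 7300 \
        --remote-ip-prefix 192.0.2.0/24 ceph-client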

Thinking more, it seems like you then need a solution that goes via the
serviceVM approach but provides native CephFS mounts instead of NFS?

thanx,
deepak



 Danny






Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-02 Thread Luis Pabon
What is the status of virtfs?  I am not sure if it is being maintained.  Does
anyone know?

- Luis

- Original Message -
From: Danny Al-Gaaf danny.al-g...@bisect.de
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, ceph-de...@vger.kernel.org
Sent: Sunday, March 1, 2015 9:07:36 AM
Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila

On 27.02.2015 01:04, Sage Weil wrote:
 [sorry for ceph-devel double-post, forgot to include
 openstack-dev]
 
 Hi everyone,
 
 The online Ceph Developer Summit is next week[1] and among other
 things we'll be talking about how to support CephFS in Manila.  At
 a high level, there are basically two paths:

We also discussed the CephFS Manila topic at the last Manila Midcycle
Meetup (Kilo) [1][2].

 2) Native CephFS driver
 
 As I currently understand it,
 
 - The driver will set up CephFS auth credentials so that the guest
   VM can mount CephFS directly.
 - The guest VM will need access to the Ceph network.  That makes this
   mainly interesting for private clouds and trusted environments.
 - The guest is responsible for running 'mount -t ceph ...'.
 - I'm not sure how we provide the auth credential to the user/guest...

The auth credentials currently need to be handled by an application
orchestration solution, I guess. I see no solution at the Manila
layer level atm.

If Ceph provided OpenStack Keystone authentication for
rados/cephfs instead of CephX, it could be handled via app orch easily.

 This would perform better than an NFS gateway, but there are
 several gaps on the security side that make this unusable currently
 in an untrusted environment:
 
 - The CephFS MDS auth credentials currently are _very_ basic.  As
 in, binary: can this host mount or it cannot.  We have the auth cap
 string parsing in place to restrict to a subdirectory (e.g., this
 tenant can only mount /tenants/foo), but the MDS does not enforce
 this yet.  [medium project to add that]
 
 - The same credential could be used directly via librados to access
 the data pool directly, regardless of what the MDS has to say about
 the namespace.  There are two ways around this:
 
 1- Give each tenant a separate rados pool.  This works today.
 You'd set a directory policy that puts all files created in that
 subdirectory in that tenant's pool, then only let the client access
 those rados pools.
 
 1a- We currently lack an MDS auth capability that restricts which 
 clients get to change that policy.  [small project]
 
 2- Extend the MDS file layouts to use the rados namespaces so that
  users can be separated within the same rados pool.  [Medium
 project]
 
 3- Something fancy with MDS-generated capabilities specifying which
  rados objects clients get to read.  This probably falls in the
 category of research, although there are some papers we've seen
 that look promising. [big project]
 
 Anyway, this leads to a few questions:
 
 - Who is interested in using Manila to attach CephFS to guest VMs?
 - What use cases are you interested in?
 - How important is security in your environment?

As you know, we (Deutsche Telekom) are very interested in providing shared
filesystems via CephFS to VMs instead of e.g. via NFS. We can
provide/discuss use cases at CDS.

For us security is very critical, as is performance. The first
solution via Ganesha is not what we prefer (using CephFS via p9 and
NFS would not perform that well, I guess). The second solution, to use
CephFS directly in the VM, would be a bad solution from the security
point of view, since we can't expose the Ceph public network directly
to the VMs, to prevent all the security issues we discussed already.

During the Midcycle we discussed a third option:

Mount CephFS directly on the host system and provide the filesystem to
the VMs via p9/virtfs. This needs Nova integration (I will work on a
POC patch for this) to set up the libvirt config correctly for virtfs.
This solves the security issue and the auth key distribution for the
VMs, but it may introduce performance issues due to virtfs usage. We
have to check what the specific performance impact will be. Currently
this is the preferred solution for our use cases.

What's still missing in this solution is user/tenant/subtree
separation, as in the second option. But this is needed anyway for CephFS
in general.

Danny

[1] https://etherpad.openstack.org/p/manila-kilo-midcycle-meetup
[2] https://etherpad.openstack.org/p/manila-meetup-winter-2015




Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-01 Thread Danny Al-Gaaf
On 27.02.2015 01:04, Sage Weil wrote:
 [sorry for ceph-devel double-post, forgot to include
 openstack-dev]
 
 Hi everyone,
 
 The online Ceph Developer Summit is next week[1] and among other
 things we'll be talking about how to support CephFS in Manila.  At
 a high level, there are basically two paths:

We also discussed the CephFS Manila topic at the last Manila Midcycle
Meetup (Kilo) [1][2].

 2) Native CephFS driver
 
 As I currently understand it,
 
 - The driver will set up CephFS auth credentials so that the guest
   VM can mount CephFS directly.
 - The guest VM will need access to the Ceph network.  That makes this
   mainly interesting for private clouds and trusted environments.
 - The guest is responsible for running 'mount -t ceph ...'.
 - I'm not sure how we provide the auth credential to the user/guest...

The auth credentials currently need to be handled by an application
orchestration solution, I guess. I see no solution at the Manila
layer level atm.

If Ceph provided OpenStack Keystone authentication for
rados/cephfs instead of CephX, it could be handled via app orch easily.
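
Just to illustrate what such a CephX credential and the guest-side mount
look like today -- the tenant, pool and monitor names are made up:

    # cap string of the kind Sage describes below; the subtree restriction
    # is parsed but not yet enforced by the MDS
    ceph auth get-or-create client.tenant_foo \
        mon 'allow r' \
        mds 'allow rw path=/tenants/foo' \
        osd 'allow rw pool=tenant_foo_data'

    # inside the guest:
    mount -t ceph mon1.example.net:6789:/tenants/foo /mnt/share \
        -o name=tenant_foo,secretfile=/etc/ceph/tenant_foo.secret

Getting that key and the monitor addresses into the guest is exactly the
distribution problem mentioned above.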

 This would perform better than an NFS gateway, but there are
 several gaps on the security side that make this unusable currently
 in an untrusted environment:
 
 - The CephFS MDS auth credentials currently are _very_ basic.  As
 in, binary: can this host mount or it cannot.  We have the auth cap
 string parsing in place to restrict to a subdirectory (e.g., this
 tenant can only mount /tenants/foo), but the MDS does not enforce
 this yet.  [medium project to add that]
 
 - The same credential could be used directly via librados to access
 the data pool directly, regardless of what the MDS has to say about
 the namespace.  There are two ways around this:
 
 1- Give each tenant a separate rados pool.  This works today.
 You'd set a directory policy that puts all files created in that
 subdirectory in that tenant's pool, then only let the client access
 those rados pools.
 
 1a- We currently lack an MDS auth capability that restricts which 
 clients get to change that policy.  [small project]
 
 2- Extend the MDS file layouts to use the rados namespaces so that
  users can be separated within the same rados pool.  [Medium
 project]
 
 3- Something fancy with MDS-generated capabilities specifying which
  rados objects clients get to read.  This probably falls in the
 category of research, although there are some papers we've seen
 that look promising. [big project]
 
 Anyway, this leads to a few questions:
 
 - Who is interested in using Manila to attach CephFS to guest VMs?
 - What use cases are you interested in?
 - How important is security in your environment?

As you know, we (Deutsche Telekom) are very interested in providing shared
filesystems via CephFS to VMs instead of e.g. via NFS. We can
provide/discuss use cases at CDS.

For us security is very critical, as is performance. The first
solution via Ganesha is not what we prefer (using CephFS via p9 and
NFS would not perform that well, I guess). The second solution, to use
CephFS directly in the VM, would be a bad solution from the security
point of view, since we can't expose the Ceph public network directly
to the VMs, to prevent all the security issues we discussed already.

During the Midcycle we discussed a third option:

Mount CephFS directly on the host system and provide the filesystem to
the VMs via p9/virtfs. This needs Nova integration (I will work on a
POC patch for this) to set up the libvirt config correctly for virtfs.
This solves the security issue and the auth key distribution for the
VMs, but it may introduce performance issues due to virtfs usage. We
have to check what the specific performance impact will be. Currently
this is the preferred solution for our use cases.
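
To make that a bit more concrete, the libvirt side I have in mind for the
POC would be roughly the following sketch (the host path and mount tag are
made up):

    <!-- host side: CephFS already mounted at /srv/cephfs/share-01,
         handed to the guest via virtfs/9p passthrough -->
    <filesystem type='mount' accessmode='passthrough'>
      <driver type='path'/>
      <source dir='/srv/cephfs/share-01'/>
      <target dir='share-01'/>
    </filesystem>

and inside the guest something like:

    mount -t 9p -o trans=virtio,version=9p2000.L share-01 /mnt/share-01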

What's still missing in this solution is user/tenant/subtree
separation, as in the second option. But this is needed anyway for CephFS
in general.
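
For reference, the per-directory data placement that exists today (the
directory policy from option 1 above) is just an extended attribute on the
tenant's subtree; the pool name and path are made up:

    setfattr -n ceph.dir.layout.pool -v tenant_foo_data /srv/cephfs/tenants/foo

What's missing is restricting who may change such a policy and enforcing
the subtree restriction in the MDS, as Sage lists above.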

Danny

[1] https://etherpad.openstack.org/p/manila-kilo-midcycle-meetup
[2] https://etherpad.openstack.org/p/manila-meetup-winter-2015

