TripleO currently supports deploying CephFS, which can be used as a
backend for the Manila service, so users can mount ceph shares on
their side using either the ceph kernel driver or ceph-fuse.

There is an ongoing ganesha-nfs project [1] which can be used as a
proxy when accessing CephFS. The benefit is that the user then
interacts only with the ganesha server and mounts shares from this
server using NFS.
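For illustration, the two access styles might look like this from a
user instance (a sketch only - the monitor address, export path, cephx
user and secret file are placeholders, not values from any real
deployment):

```sh
# Direct CephFS access - kernel driver:
mount -t ceph 192.0.2.10:6789:/volumes/share1 /mnt/share \
    -o name=manilauser,secretfile=/etc/ceph/manilauser.secret

# Direct CephFS access - FUSE client:
ceph-fuse /mnt/share --id manilauser -r /volumes/share1

# Access through a ganesha server - plain NFS, no ceph client needed:
mount -t nfs 198.51.100.5:/volumes/share1 /mnt/share
```

The point of the proxy variant is visible in the last command: the
instance only needs an NFS client and network access to the ganesha
server, not to the ceph cluster itself.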

Manila will soon support both variants of the ceph backend:
1) ceph is used directly (what we support now)
user instance <-- ceph protocol --> ceph cluster

2) ceph is used through a ganesha server (what we don't support yet
but would like to)
user instance <-- NFS protocol --> ganesha server <-- ceph protocol
--> ceph cluster
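In manila.conf terms the two variants might be configured roughly like
this (a sketch - the option names follow the CephFS driver as I
remember it, and the helper-type/ganesha options are the ones being
added for NFS support, so treat the exact names and values as
assumptions):

```ini
# Variant 1: direct CephFS access
[cephfsnative]
driver_handles_share_servers = False
share_backend_name = CEPHFSNATIVE
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila

# Variant 2: CephFS exported over NFS via a ganesha server
[cephfsnfs]
driver_handles_share_servers = False
share_backend_name = CEPHFSNFS
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_protocol_helper_type = NFS
cephfs_ganesha_server_ip = 198.51.100.5
```

Both backends could then be listed in `enabled_share_backends`, which
would allow deploying the direct and the ganesha variant at the same
time.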

We would like to enable both deployment options in TripleO, and I
wonder what the ganesha deployment should look like.

Prerequisites are:
- use of ganesha servers will be optional - operators can still choose
to use ceph directly; ideally it should be possible to deploy Manila
with both the direct ceph and the ganesha backend at the same time
- ganesha servers should not run on controller nodes (e.g. collocated
with the manila-share service) because of data traffic; ideally
ganesha servers should be dedicated (which is probably not a problem
with composable services)
- ganesha servers will probably use an active/passive HA model and
will be managed by pacemaker+corosync - AFAIK the detailed HA
architecture is not specified yet and is still in progress by the
ganesha folks.
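A minimal active/passive setup could be sketched with pcs roughly as
follows (a hypothetical sketch only - the VIP, the resource names and
managing ganesha via its systemd unit are my assumptions, not the
architecture the ganesha folks are working on):

```sh
# Floating VIP that fails over between the ganesha nodes
pcs resource create ganesha-vip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.100 cidr_netmask=24

# The ganesha daemon itself, managed as a systemd resource
pcs resource create ganesha systemd:nfs-ganesha

# Keep the daemon on whichever node holds the VIP, VIP first
pcs constraint colocation add ganesha with ganesha-vip INFINITY
pcs constraint order ganesha-vip then ganesha
```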

I imagine that an (extremely simplified) setup would look
something like this from the TripleO point of view:
1) define a new role (e.g. NfsStorage) which represents the ganesha servers
2) define a new VIP (for IP failover) and 2 networks for the NfsStorage role:
    a) a frontend network between users and the ganesha servers (e.g.
named NfsNetwork), used by tenants to mount NFS shares - this network
should be accessible from user instances.
    b) a backend network between the ganesha servers and the ceph
cluster - this could just map to the existing StorageNetwork, I think.
3) pacemaker and ganesha setup magic happens - I wonder if the
existing puppet pacemaker modules could be used for setting up another
pacemaker cluster on the dedicated ganesha nodes? It seems the
long-term plan is to switch to ceph-ansible for ceph setup in TripleO,
so should the whole HA setup of the ganesha servers be delegated to
ceph-ansible [2]?
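Step 1 and the networks from step 2 could be sketched in
roles_data.yaml roughly like this (the role name and NfsNetwork come
from the text above; the exact keys and the minimal service list are
assumptions on my side):

```yaml
- name: NfsStorage
  description: Dedicated ganesha (NFS gateway) nodes
  CountDefault: 0
  HostnameFormatDefault: '%stackname%-nfs-%index%'
  networks:
    - NfsNetwork    # frontend, mounted by tenants
    - Storage       # backend towards the ceph public network
  ServicesDefault:
    - OS::TripleO::Services::Ntp
    - OS::TripleO::Services::Pacemaker
    - OS::TripleO::Services::TripleoPackages
    - OS::TripleO::Services::TripleoFirewall
```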

What I'm not sure about at all is what the network definition should
look like.
There are the following Overcloud deployment options:
1) no network isolation is used - then both the direct ceph mount and
the mount through ganesha should work, because StorageNetwork and
NfsNetwork are accessible from user instances (there seems to be no
restriction on accessing other networks).
2) network isolation is used:
    a) ceph is used directly - user instances need access to the ceph
public network (which is the StorageNetwork in the Overcloud) - how
should I enable access to this network? I filed a bug for this
deployment variant here [3]
    b) ceph is used through ganesha - user instances need access to
the ganesha servers (the NfsNetwork from the previous paragraph) - how
should I enable access to this network?
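For the isolated case, the new frontend network would presumably be
declared like any other composable network, e.g. in network_data.yaml
(a sketch - the VLAN and CIDR are placeholders, and whether such a
network can then be made reachable from tenant instances is exactly
the open question above):

```yaml
- name: Nfs
  name_lower: nfs
  vip: true                  # carries the ganesha failover VIP
  vlan: 70
  ip_subnet: '172.16.4.0/24'
  allocation_pools:
    - start: '172.16.4.10'
      end: '172.16.4.200'
```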

The ultimate (and future) plan is to deploy ganesha-nfs in VMs (which
will run in the Overcloud, probably managed by the manila ceph
driver). In this deployment mode a user should have access to the
ganesha servers, and only the ganesha server VMs should have access to
the ceph public network. The ganesha VMs would run in a separate
tenant, so I wonder if it's possible to manage access to the ceph
public network (StorageNetwork in the Overcloud) on a per-tenant
level?

Any thoughts and hints?

Thanks, Jan

[1] https://github.com/nfs-ganesha/nfs-ganesha/wiki
[2] https://github.com/ceph/ceph-ansible/tree/master/roles/ceph-nfs
[3] https://bugs.launchpad.net/tripleo/+bug/1680749

OpenStack Development Mailing List (not for usage questions)