I'm not really an expert on composable roles so I'll leave that to someone else, but see my thoughts inline on the networking aspect.

On 04/10/2017 03:22 AM, Jan Provaznik wrote:
> 2) define a new VIP (for IP failover) and 2 networks for the NfsStorage role:
>     a) a frontend network between users and ganesha servers (e.g. named
> NfsNetwork), used by tenants to mount NFS shares - this network
> should be accessible from user instances.

Adding a new network is non-trivial today, so I think we want to avoid that if possible. Is there a reason the Storage network couldn't be used for this? That is already present on compute nodes by default so it would be available to user instances, and it seems like the intended use of the Storage network matches this use case. In a Ceph deployment today that's the network which exposes data to user instances.

>     b) a backend network between ganesha servers and the ceph cluster -
> this could just map to the existing StorageNetwork, I think.

This actually sounds like a better fit for StorageMgmt to me. It's non-user-facing storage communication, which is what StorageMgmt is used for in the vanilla Ceph case.
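If both suggestions hold, the custom role could reuse the existing networks rather than defining a new one. A hypothetical roles_data.yaml entry might look like this (the role name, service name, and exact schema are my assumptions, not something settled in this thread):

```
# Hypothetical roles_data.yaml entry: a Ganesha role reusing the
# existing Storage (user-facing) and StorageMgmt (Ceph-facing)
# networks instead of introducing a new NfsNetwork.
- name: NfsStorage
  networks:
    - Storage       # tenants mount NFS shares over this
    - StorageMgmt   # Ganesha <-> Ceph cluster traffic
  ServicesDefault:
    - OS::TripleO::Services::CephNfs   # illustrative service name
```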

> What I'm not sure about at all is how the network definition should look.
> There are the following Overcloud deployment options:
> 1) no network isolation is used - then both direct ceph mounts and
> mounts through ganesha should work, because StorageNetwork and
> NfsNetwork are accessible from user instances (there is no restriction
> on accessing other networks, it seems).

There are no other networks without network-isolation; everything runs over the provisioning network. The network-isolation templates should mostly handle this for you, though.

> 2) network isolation is used:
>     a) ceph is used directly - user instances need access to the ceph
> public network (which is the StorageNetwork in the Overcloud) - how should I
> enable access to this network? I filed a bug for this deployment
> variant here [3].

So does this mean that the current manila implementation is completely broken in network-isolation? If so, that's rather concerning.

If I'm understanding correctly, it sounds like what needs to happen is to make the Storage network routable so it's available from user instances. That's not actually something TripleO can do; it's an underlying infrastructure matter. I'm also not sure what the security implications would be.

Well, on second thought, it might be possible to make the Storage network routable only within the overcloud's Neutron by adding a bridge mapping for the Storage network and having the admin configure a shared Neutron network on top of it. That would be somewhat more secure, since it wouldn't require the Storage network to be routable by the world. I also think this would work today in TripleO with no changes.
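Sketching that idea with the openstack client (the bridge mapping, physical network name, and subnet range below are all assumptions, not something this thread has settled):

```shell
# Assumes a bridge mapping like "storage:br-storage" has been added to
# NeutronBridgeMappings on the overcloud nodes carrying Neutron agents.

# Shared flat provider network on top of the Storage bridge mapping.
openstack network create storage-provider \
    --share \
    --provider-network-type flat \
    --provider-physical-network storage

# Matching subnet; no gateway and no DHCP, since addressing on the
# Storage network is managed by TripleO.
openstack subnet create storage-provider-subnet \
    --network storage-provider \
    --subnet-range 172.16.1.0/24 \
    --no-dhcp \
    --gateway none
```

The admin would then attach user instances (or a router) to storage-provider to give them a path to the Ceph public network without making it world-routable.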

Alternatively, I guess you could use ServiceNetMap to move the public Ceph traffic to the public network, which has to be routable anyway. That might have a detrimental effect on the public network's capacity, but it could be acceptable in some deployments.
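For the ServiceNetMap route, the override would be an environment file along these lines (the service-network key name here is my assumption; the real one should be checked against the ServiceNetMap defaults in tripleo-heat-templates):

```
# Hypothetical environment file: move Ceph public traffic onto the
# routable external network. Verify the actual key name against the
# ServiceNetMap defaults before using.
parameter_defaults:
  ServiceNetMap:
    CephMonNetwork: external
```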

>     b) ceph is used through ganesha - user instances need access to the
> ganesha servers (the NfsNetwork in the previous paragraph) - how should I
> enable access to this network?

I think the answer here will be the same as for vanilla Ceph. You need to make the network routable to instances, and you'd have the same options as I discussed above.

> The ultimate (and future) plan is to deploy ganesha-nfs in VMs (which
> will run in the Overcloud, probably managed by the manila ceph driver); in
> this deployment mode a user should have access to the ganesha servers, and
> only the ganesha server VMs should have access to the ceph public network.
> Ganesha VMs would run in a separate tenant, so I wonder if it's
> possible to manage access to the ceph public network (StorageNetwork
> in the Overcloud) on a per-tenant level?

This would suggest that the bridged Storage network approach is the best. In that case access to the ceph public network is controlled by the overcloud Neutron, so you would just need to only give access to it to the tenant running the Ganesha VMs. User VMs would only get access to a separate shared network providing access to the public Ganesha API, and the Ganesha VMs would straddle both networks.
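The per-tenant access Jan asks about maps naturally onto Neutron's RBAC support: instead of marking the bridged Storage network shared with everyone, share it with the Ganesha tenant only. A sketch (network and project names are hypothetical):

```shell
# Grant only the Ganesha tenant access to the bridged Storage network.
# Assumes "storage-provider" was created WITHOUT --share, so this RBAC
# rule is the only way a non-admin tenant can attach to it.
openstack network rbac create storage-provider \
    --target-project ganesha-tenant \
    --action access_as_shared \
    --type network
```

User-facing access would then go through a separate, genuinely shared network carrying the Ganesha NFS endpoints, with the Ganesha VMs attached to both.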

> Any thoughts and hints?
>
> Thanks, Jan
>
> [1] https://github.com/nfs-ganesha/nfs-ganesha/wiki
> [2] https://github.com/ceph/ceph-ansible/tree/master/roles/ceph-nfs
> [3] https://bugs.launchpad.net/tripleo/+bug/1680749
