On 11/18/2015 05:31 AM, John Spray wrote:
On Wed, Nov 18, 2015 at 4:33 AM, Ben Swartzlander <b...@swartzlander.org> wrote:
On 11/17/2015 10:02 AM, John Spray wrote:

Hi all,

As you may know, there is ongoing work on a spec for Nova to define an
"attach/detach" API for tighter integration with Manila.

The concept here is that this mechanism will be needed to implement
hypervisor mediated FS access using vsock, but that the mechanism
should also be applicable more generally to an "attach" concept for
filesystems accessed over IP networks (like existing NFS filers).

In the hypervisor-mediated case, attach would involve the hypervisor
host connecting as a filesystem client and then re-exporting to the
guest via a local address.  We think this would apply to
driver_handles_share_servers type drivers that support share networks,
by mapping the attach/detach share API to attaching/detaching the
share network from the guest VM.

Does that make sense to people maintaining this type of driver?  For
example, for the netapp and generic drivers, is it reasonable to
expose nova attach/detach APIs that attach and detach the associated
share network?


I'm not sure this proposal makes sense. I would like the share attach/detach
semantics to be the same for all types of shares, regardless of the driver
type.

The main challenge with attaching to shares on share servers (with share
networks) is that there may not exist a network route from the hypervisor to
the share server, because share servers are only required to be accessible
from the share network from which they were created. This has been a known
problem since Liberty, because this behaviour prevents migration from
working; we're therefore proposing a mechanism for share-server drivers to
provide admin-network-facing interfaces for all share servers. This same
mechanism should be usable by Nova when doing share attach/detach. Nova
would just need to list the export locations using an admin-context to see
the admin-facing export location that it should use.
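
As a minimal sketch of what that lookup might look like -- assuming an admin-scoped keystoneauth session (admin_session), and noting that the share_export_locations manager and the is_admin_only flag are assumptions about how the admin-facing exports get exposed, not settled interfaces:

    from manilaclient import client as manila_client

    # Admin-scoped client; admin_session is assumed to be a keystoneauth
    # session carrying admin credentials.
    admin_manila = manila_client.Client('2', session=admin_session)

    def admin_export_location(share_id):
        """Return the admin-network-facing export path for a share."""
        for location in admin_manila.share_export_locations.list(share_id):
            # is_admin_only marks exports reachable over the admin network.
            if getattr(location, 'is_admin_only', False):
                return location.path
        raise LookupError('no admin-facing export location for %s' % share_id)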

For these drivers, we're not proposing connecting to them from the
hypervisor -- we would still be connecting directly from the guest via
the share network.

The change would be from the existing workflow:
  * Create share
  * Attach guest network to guest VM (need to look up network info,
talk to neutron API)

I think this is the point of confusion. The design for share networks in Manila is to re-use your existing networks. A VM with no network connection is not particularly useful -- we assume that all VMs have at least one private network. The idea behind the share network is to map shares onto that private network so that the above step is unnecessary.

It's true that if you have _no_ network at all, or if you have a particularly complicated network configuration, you may need to do an additional network attachment to access Manila shares. That should be an exceptional case though, not the norm.

  * Add IP access permission for the guest to access the share (need to
know IP of the guest)

If the guest has a private network and it's mapped to the share network, then the access only needs to be granted to the private IP. I agree it would be nicer to grant access by instance ID rather than IP, but in reality that mapping can be performed automatically and cheaply.
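
A rough illustration of that automatic mapping, assuming keystoneauth sessions (sess) and stock novaclient/manilaclient; the helper name and the network_name argument are illustrative:

    from manilaclient import client as manila_client
    from novaclient import client as nova_client

    nova = nova_client.Client('2', session=sess)
    manila = manila_client.Client('2', session=sess)

    def allow_share_for_instance(share_id, instance_id, network_name):
        """Grant a guest access to a share, given only IDs."""
        server = nova.servers.get(instance_id)
        # server.addresses maps network name -> [{'addr': ...}, ...]
        guest_ip = server.addresses[network_name][0]['addr']
        manila.shares.allow(share_id, access_type='ip',
                            access=guest_ip, access_level='rw')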

  * Mount from guest VM

To a new workflow:
  * Create share
  * Attach share to guest (knowing only share ID and guest instance ID)
  * Mount from guest VM

The idea is to abstract the networking part away, so that the user
just has to say "I want to be able to mount share X from guest Y",
without knowing about the networking stuff going on under the hood.
While this is partly because it's slicker, this is mainly so that
applications can use IP-networking shares interchangeably with future
hypervisor mediated shares: they call "attach" and don't have to worry
about whether that's a share network operation under the hood or a
hypervisor-twiddling operation under the hood.
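
Put concretely, the proposed flow might look something like the sketch below; nova.servers.attach_share() is hypothetical and stands in for whatever the spec ends up defining:

    from manilaclient import client as manila_client
    from novaclient import client as nova_client

    manila = manila_client.Client('2', session=sess)
    nova = nova_client.Client('2', session=sess)

    # Create share
    share = manila.shares.create(share_proto='NFS', size=10)

    # Attach share to guest: only the share ID and instance ID are
    # needed; whether this is a share-network operation or a hypervisor
    # re-export is hidden underneath.  (Hypothetical API.)
    nova.servers.attach_share(instance_id, share.id)

    # Mount from guest VM, e.g.:
    #   mount -t nfs <export_location> /mnt/share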

I think we are already closer than you think. In most cases the only missing automation is knowing how to turn an instance ID into an IP address to give to Manila's access-allow API. There are two potential solutions to that problem:

1) Add an API to Manila, or a layer on top of Manila, that gives grant-access-by-instance-ID semantics (essentially automating the mapping sketched above).

2) Implement the Nova attach API so that it always uses the hypervisor IP instead of the guest IP when talking to Manila.
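
A sketch of option 2, assuming admin credentials (the hypervisor host only shows up via the OS-EXT-SRV-ATTR extension) and that the hypervisor hostname resolves to the address the share server would see -- both assumptions worth checking:

    import socket

    from manilaclient import client as manila_client
    from novaclient import client as nova_client

    nova = nova_client.Client('2', session=admin_session)
    manila = manila_client.Client('2', session=admin_session)

    def allow_share_for_hypervisor(share_id, instance_id):
        """Grant the instance's hypervisor host access to the share."""
        server = nova.servers.get(instance_id)
        host = getattr(server, 'OS-EXT-SRV-ATTR:hypervisor_hostname')
        hypervisor_ip = socket.gethostbyname(host)
        manila.shares.allow(share_id, access_type='ip',
                            access=hypervisor_ip, access_level='rw')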

John
