On Wed, Nov 18, 2015 at 4:33 AM, Ben Swartzlander <[email protected]> wrote:
> On 11/17/2015 10:02 AM, John Spray wrote:
>>
>> Hi all,
>>
>> As you may know, there is ongoing work on a spec for Nova to define an
>> "attach/detach" API for tighter integration with Manila.
>>
>> The concept here is that this mechanism will be needed to implement
>> hypervisor-mediated FS access using vsock, but that the mechanism
>> should also be applicable more generally to an "attach" concept for
>> filesystems accessed over IP networks (like existing NFS filers).
>>
>> In the hypervisor-mediated case, attach would involve the hypervisor
>> host connecting as a filesystem client and then re-exporting to the
>> guest via a local address. We think this would apply to
>> driver_handles_share_servers type drivers that support share networks,
>> by mapping the attach/detach share API to attaching/detaching the
>> share network from the guest VM.
>>
>> Does that make sense to people maintaining this type of driver? For
>> example, for the NetApp and generic drivers, is it reasonable to
>> expose Nova attach/detach APIs that attach and detach the associated
>> share network?
>
> I'm not sure this proposal makes sense. I would like the share
> attach/detach semantics to be the same for all types of shares,
> regardless of the driver type.
>
> The main challenge with attaching to shares on share servers (with share
> networks) is that there may be no network route from the hypervisor to
> the share server, because share servers are only required to be
> accessible from the share network they were created for. This has been a
> known problem since Liberty, because this behaviour prevents migration
> from working, so we're proposing a mechanism for share-server drivers to
> provide admin-network-facing interfaces for all share servers. This same
> mechanism should be usable by Nova when doing share attach/detach. Nova
> would just need to list the export locations using an admin context to
> see the admin-facing export location that it should use.
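To make sure I'm reading that right, from Nova's side that suggestion would look roughly like the sketch below. The export-locations call and the is_admin_only flag follow the admin-network spec under discussion, so treat the names as approximate rather than a settled API:

def pick_admin_export_location(admin_manila, share_id):
    """Return an admin-network-facing export path for share_id.

    admin_manila is assumed to be a python-manilaclient v2 client
    authenticated with an admin/service credential, the way Nova talks
    to other services.
    """
    for location in admin_manila.share_export_locations.list(share_id):
        # Export locations flagged admin-only would be the ones reachable
        # from the admin network, i.e. what the hypervisor should use.
        if getattr(location, 'is_admin_only', False):
            return location.path
    raise LookupError('no admin-facing export location for share %s'
                      % share_id)
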
For these drivers, we're not proposing connecting to them from the
hypervisor -- we would still be connecting directly from the guest via
the share network. The change would be from the existing workflow:

 * Create share
 * Attach the share network to the guest VM (need to look up network
   info, talk to the Neutron API)
 * Add an IP access permission for the guest to access the share (need
   to know the IP of the guest)
 * Mount from the guest VM

To a new workflow:

 * Create share
 * Attach share to guest (knowing only the share ID and guest instance ID)
 * Mount from the guest VM

The idea is to abstract the networking part away, so that the user just
has to say "I want to be able to mount share X from guest Y" without
knowing about the networking stuff going on under the hood. This is
partly because it's slicker, but mainly it's so that applications can use
IP-networking shares interchangeably with future hypervisor-mediated
shares: they call "attach" and don't have to worry about whether that's a
share network operation or a hypervisor-twiddling operation under the
hood. A rough sketch of both workflows as client code is at the end of
this mail.

John
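P.S. For illustration, the two workflows as rough client code. The "existing" steps use calls that exist today in python-manilaclient/python-novaclient; the proposed attach call is hypothetical, a stand-in for whatever API the Nova spec ends up defining:

def existing_workflow(manila, nova, server_id, share_network_id):
    # 1. Create the share on the tenant's share network.
    share = manila.shares.create('NFS', 1, share_network=share_network_id)

    # 2. Make sure the guest sits on a network that can reach the share
    #    server, then look up the guest's IP (Neutron plumbing elided;
    #    picking the first address for brevity).
    server = nova.servers.get(server_id)
    guest_ip = list(server.networks.values())[0][0]

    # 3. Grant that IP access to the share.
    manila.shares.allow(share, 'ip', guest_ip, 'rw')

    # 4. Finally, mount the export location from inside the guest.
    return share


def proposed_workflow(nova, share_id, server_id):
    # The user names only the share and the instance; the share-network
    # (or hypervisor/vsock) plumbing happens under the hood.
    # NOTE: attach_share() does not exist -- it stands in for the
    # hypothetical attach API under discussion.
    nova.servers.attach_share(server_id, share_id)
    # ...then mount from inside the guest, same as before.

The point being that proposed_workflow needs no Neutron lookups and no knowledge of the guest's IP.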
