I think the 1st option creates challenges, both with security (e.g. do you fully trust the users of your VMs not to do bad things as root, either maliciously or accidentally? how do you ensure user IDs are properly mapped inside the guest?) and logistically (as VMs come and go, how do you automate adding them to and removing them from the GPFS cluster?).

I think the 2nd option is ideal, perhaps using something like 9p (http://www.linux-kvm.org/page/9p_virtio) to export filesystems from the hypervisor to the guest. I'm not sure how you would integrate this with Nova, and I've heard from others that there are stability issues, but I can't comment first-hand. Another option might be to export the filesystems from the hypervisor to the guests over NFS/CIFS via the 169.254.169.254 metadata address, although I don't know how feasible that is. The advantage of using the metadata address is that it should scale well, and it takes the pain out of a guest having to map an IP address to its local hypervisor by some external method.
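To make the 9p idea concrete, here's a rough sketch of what the hypervisor/guest plumbing might look like. The paths, tag name, and mount point are all hypothetical; this is just the standard libvirt 9p passthrough pattern from the page above, not something I've integrated with Nova myself:

```shell
# Hypervisor side (hypothetical domain and paths): add a 9p passthrough
# device to the guest's libvirt XML, e.g. via `virsh edit <domain>`:
#
#   <filesystem type='mount' accessmode='passthrough'>
#     <source dir='/gpfs/fs1'/>
#     <target dir='gpfs_share'/>
#   </filesystem>
#
# Guest side: mount the exported tag using the virtio transport.
mount -t 9p -o trans=virtio,version=9p2000.L gpfs_share /mnt/gpfs
```

With accessmode='passthrough' the guest does I/O with its own UIDs/GIDs on the hypervisor's mount, which is exactly where the user-ID mapping question from option 1 comes back in.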

Perhaps number 3 is the best way to go, especially (arguably only) if you use kerberized NFS or SMB. That way you don't have to trust anything about the guest, and you should theoretically still get decent performance.
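For option 3, something along these lines is what I have in mind (the filesystem path, client pattern, and server name are all made up, and I'm assuming the CES/protocol-node tooling here; check the mmnfs docs for the exact option spelling on your release):

```shell
# On a GPFS protocol (CES) node: export a path and require Kerberos
# with privacy (krb5p), so client-asserted UIDs are never trusted.
mmnfs export add /gpfs/fs1/projects \
    --client "novacompute*(Access_Type=RW,SecType=krb5p)"

# On the nova compute guest (which must have a keytab / Kerberos
# credentials): mount with matching security flavor.
mount -t nfs4 -o sec=krb5p ces.example.com:/gpfs/fs1/projects /mnt/projects
```

The nice property is that a root user in the guest can't simply forge UIDs over the wire; access is tied to Kerberos principals rather than whatever the VM claims about itself.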

I'm really curious what other folks have done on this front.

-Aaron

On 1/17/17 4:50 PM, Brian Marshall wrote:
UG,

I have a GPFS filesystem.

I have an OpenStack private cloud.

What is the best way for Nova Compute VMs to have access to data inside
the GPFS filesystem?

1) Should VMs mount GPFS directly with a GPFS client?
2) Should the hypervisor mount GPFS and share it to nova computes?
3) Should I create GPFS protocol servers that allow nova computes to
mount over NFS?

All advice is welcome.


Best,
Brian Marshall
Virginia Tech


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


--
Aaron Knister
NASA Center for Climate Simulation (Code 606.2)
Goddard Space Flight Center
(301) 286-2776
