As I understand things, Manila is the OpenStack component that allows 
tenants to create and destroy file shares for their instances, which are 
accessed over NFS. Perhaps I haven't done enough research into this though – 
I'm not an OpenStack expert either.

The tenants don't have root access to the file system, but the Manila component 
must act as a wrapper around the file system administrative equivalents such as 
mmcrfileset, mmdelfileset, mmlinkfileset and mmunlinkfileset. The shares are 
created as GPFS filesets, which are then presented over NFS.
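
To make my mental model concrete, below is a rough Python sketch of what I 
imagine such a wrapper doing. The function names, arguments and the NFS 
export step are my own guesses for illustration, not the actual Manila GPFS 
driver code.

    # Rough sketch of what I imagine a Manila-style share driver does with
    # GPFS filesets. Command arguments and the NFS export step are guesses,
    # not the real OpenStack Manila GPFS driver code.
    import subprocess

    def create_share(fs_device, fs_mount, share_name, client_cidr):
        # Create an independent fileset to back the share.
        subprocess.check_call(
            ["mmcrfileset", fs_device, share_name, "--inode-space", "new"])

        # Link it into the file system namespace so it can be exported.
        junction = f"{fs_mount}/{share_name}"
        subprocess.check_call(
            ["mmlinkfileset", fs_device, share_name, "-J", junction])

        # Present the fileset over NFS (kernel NFS shown purely as an example).
        subprocess.check_call(
            ["exportfs", "-o", "rw,no_root_squash", f"{client_cidr}:{junction}"])
        return junction

    def delete_share(fs_device, share_name):
        # Unlinking quiesces the file system -- this is the step that worries me.
        subprocess.check_call(["mmunlinkfileset", fs_device, share_name])
        subprocess.check_call(["mmdelfileset", fs_device, share_name, "-f"])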

The unlinking of the fileset worries me for the reasons stated previously.

From: [email protected] 
[mailto:[email protected]] On Behalf Of Wahl, Edward
Sent: 15 June 2015 15:00
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] OpenStack Manila Driver

Perhaps I misunderstand here, but if the tenants have administrative (i.e. root) 
privileges to the underlying file system management commands, I think 
mmunlinkfileset might be a minor concern. There are FAR more destructive 
things that could occur.

I am not an OpenStack expert and I've not even looked at anything past Kilo, 
but my understanding was that these commands were not necessary for tenants. 
They access a virtual block device that is backed by GPFS, correct?

Ed Wahl
OSC
________________________________


From: [email protected] [[email protected]] on behalf of Luke Raimbach 
[[email protected]]
Sent: Monday, June 15, 2015 4:35 AM
To: [email protected]
Subject: [gpfsug-discuss] OpenStack Manila Driver

Dear All,



We are looking forward to using the Manila driver for auto-provisioning of file 
shares on GPFS. However, I have some concerns...



Manila presumably gives tenant users access to file system commands like 
mmlinkfileset and mmunlinkfileset. Given that mmunlinkfileset quiesces the file 
system, one tenant can potentially affect another: someone unlinking and 
deleting a lot of filesets during a tenancy cleanup might cause a cluster pause 
long enough to trigger other failure events, or even start evicting nodes. You 
can see why this would be bad in a cloud environment.
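
For illustration, a hypothetical cleanup loop like the one below would quiesce 
the file system once for every fileset it removes, so a tenant with hundreds of 
shares could stack up pauses back to back (again, this is my own sketch, not 
anything Manila actually does):

    # Hypothetical tenancy-cleanup loop: each mmunlinkfileset call quiesces
    # the whole file system, so a large batch of these could stack up pauses.
    import subprocess

    def cleanup_tenant(fs_device, fileset_names):
        for name in fileset_names:
            subprocess.check_call(["mmunlinkfileset", fs_device, name])
            subprocess.check_call(["mmdelfileset", fs_device, name, "-f"])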





Has this scenario been addressed at all?



Cheers,

Luke.


Luke Raimbach
Senior HPC Data and Storage Systems Engineer
The Francis Crick Institute
Gibbs Building
215 Euston Road
London NW1 2BE

E: [email protected]
W: www.crick.ac.uk


The Francis Crick Institute Limited is a registered charity in England and 
Wales no. 1140062 and a company registered in England and Wales no. 06885462, 
with its registered office at 215 Euston Road, London NW1 2BE.

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
