Kurt,

Your plan should work fine, and subdirectory mounts are certainly something that 
has been used with Lustre already.  A couple of caveats and notes:

  * Not all clients need the same mounts, which is important because you will 
still want some way to write data to that 'ro' dir. 😉
  * There were some bugs with lfs, PCC, and other tools not working quite right 
on subdirectory mounts.  I think all the ones we were hitting have been 
addressed, but you may want to take a look at LU-8585: "All Lustre test suites 
should pass with subdirectory mount".
  * Some operations, such as changing the filesystem default striping, still 
require a mount of the root dir AFAIK.  So have a plan for where and how you 
will access that when needed.
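
For illustration, here is a rough sketch of what the fstab entries might look 
like, assuming a filesystem named "lustre" served by an MGS at "mgs@tcp" (both 
names are placeholders; substitute your own NID and fsname):

```
# Hypothetical /etc/fstab entries -- MGS NID, fsname, and paths are examples.
# Read-only subdirectory mount:
mgs@tcp:/lustre/dir1   /dir1             lustre  ro      0 0
# Read-write subdirectory mounts:
mgs@tcp:/lustre/dir2   /dir2             lustre  rw      0 0
mgs@tcp:/lustre/dir3   /dir3             lustre  rw      0 0
# Root mount kept available (e.g. on an admin node) for filesystem-wide
# operations such as changing default striping:
mgs@tcp:/lustre        /mnt/lustre-root  lustre  rw      0 0
```

The key point is that a subdirectory mount just appends the path to the fsname 
in the device string, so mixing ro and rw mounts of different branches on the 
same client is only a matter of per-mount options.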

Regards,
Nathan

________________________________
From: lustre-discuss <[email protected]> on behalf of 
Kurt Strosahl via lustre-discuss <[email protected]>
Sent: Wednesday, August 28, 2024 2:28 PM
To: [email protected] <[email protected]>
Subject: [lustre-discuss] Mounting subdirectories of the same lustre file 
system separately on a single host

Good Afternoon,

   We are exploring ways to have a directory tree within one of our Lustre file 
systems set to be read-only, while the rest of the tree remains in a read/write 
state.  Since the directory is at the base of the Lustre file system, one 
thought is to mount that branch with the ro flag while the other branches are 
mounted rw.

Are there any complications or issues that come with having branches of the 
same Lustre file system mounted on the same host at the same time?

In short we are looking to go from:
/lustre mounted on /lustre (rw)

to
/lustre/dir1 mounted on /dir1 (ro)
/lustre/dir2 mounted on /dir2 (rw)
/lustre/dir3 mounted on /dir3 (rw)

w/r,

Kurt J. Strosahl (he/him)
System Administrator: Lustre, HPC
Scientific Computing Group, Thomas Jefferson National Accelerator Facility
_______________________________________________
lustre-discuss mailing list
[email protected]
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
