Oh, the one *huge* gotcha I thought I'd share: we wrote a Perl script
to drive the migration, and part of its job was to clone quotas from
the old UID numbers to the new ones. I upset our GPFS cluster during
one particular migration in which the user was past the grace period
of their quota, so after a certain point every chown() pushed the
destination UID even further over its quota. The problem is that, from
then on, every chown() operation caused GPFS to do cluster-wide
quota-accounting RPCs. That hurt. It's worth making sure there are no
quotas defined for the destination UID numbers, and if there are, that
the data coming from the source UID number will fit.
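For what it's worth, a pre-flight check along these lines would have
saved us. Here's a minimal, untested sketch using gpfs_quotactl() from
the GPFS C API; the field names follow the gpfs_quotaInfo_t definition
in gpfs.h, but treat the exact flags and units as assumptions to verify
against your own headers:

```c
/*
 * Sketch only (untested): before migrating, check whether the
 * destination UID has a block quota and whether the source UID's
 * current usage would fit under it.  Assumes the gpfs_quotactl()
 * interface and gpfs_quotaInfo_t layout from
 * /usr/lpp/mmfs/include/gpfs.h.
 *
 * Build (roughly): gcc -I/usr/lpp/mmfs/include check_quota.c -lgpfs
 */
#include <stdio.h>
#include <stdlib.h>
#include <gpfs.h>

int main(int argc, char **argv)
{
    if (argc != 4) {
        fprintf(stderr, "usage: %s /gpfs/mountpoint srcUid dstUid\n", argv[0]);
        return 1;
    }
    char *fsPath = argv[1];
    int srcUid = atoi(argv[2]);
    int dstUid = atoi(argv[3]);

    gpfs_quotaInfo_t src, dst;

    if (gpfs_quotactl(fsPath, GPFS_QCMD(Q_GETQUOTA, GPFS_USRQUOTA),
                      srcUid, &src) != 0) {
        perror("gpfs_quotactl(src)");
        return 1;
    }
    if (gpfs_quotactl(fsPath, GPFS_QCMD(Q_GETQUOTA, GPFS_USRQUOTA),
                      dstUid, &dst) != 0) {
        perror("gpfs_quotactl(dst)");
        return 1;
    }

    /* blockUsage / blockHardLimit are in 1 KB units per gpfs.h */
    if (dst.blockHardLimit > 0 &&
        dst.blockUsage + src.blockUsage > dst.blockHardLimit) {
        fprintf(stderr,
                "refusing migration: %lld KB from uid %d won't fit under "
                "uid %d's hard limit (%lld KB used of %lld KB)\n",
                (long long)src.blockUsage, srcUid, dstUid,
                (long long)dst.blockUsage, (long long)dst.blockHardLimit);
        return 2;
    }
    printf("ok: destination uid %d has room (or no hard limit set)\n", dstUid);
    return 0;
}
```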
-Aaron
On 8/2/17 9:00 PM, Aaron Knister wrote:
I'm a little late to the party here, but I thought I'd share our recent
experiences.
We recently completed a mass UID number migration (half a billion
inodes) and developed two tools ("luke filewalker" and the
"mmilleniumfacl") to get the job done. Both luke filewalker and the
mmilleniumfacl are based heavily on the code in
/usr/lpp/mmfs/samples/util/tsreaddir.c and
/usr/lpp/mmfs/samples/util/tsinode.c.
luke filewalker targets traditional POSIX permissions, whereas
mmilleniumfacl targets POSIX ACLs. Both tools traverse the filesystem
in parallel, and both, but particularly the second, are extremely
I/O-intensive on your metadata disks.
The gist of luke filewalker is to scan the inode structures using the
GPFS APIs and populate a mapping of inode number to UID and GID. It
then walks the filesystem in parallel using the APIs, looks up each
inode number in an in-memory hash, and, where appropriate, changes
ownership using the chown() API.
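In case it helps anyone reproduce the idea, here's a heavily
stripped-down, untested sketch of that first pass (not the real tool).
It follows the inode-scan calls from tsinode.c; the OLD_UID constant
and the print-instead-of-hash shortcut are just placeholders:

```c
/*
 * Sketch of luke filewalker's first pass: scan all inodes with the
 * GPFS inode-scan API and record which inode numbers are owned by a
 * UID we want to migrate.  Based loosely on
 * /usr/lpp/mmfs/samples/util/tsinode.c; error handling trimmed.
 */
#include <stdio.h>
#include <gpfs.h>

#define OLD_UID 1234   /* hypothetical source UID */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s /gpfs/mountpoint\n", argv[0]);
        return 1;
    }

    gpfs_fssnap_handle_t *fs = gpfs_get_fssnaphandle_by_path(argv[1]);
    if (fs == NULL) { perror("gpfs_get_fssnaphandle_by_path"); return 1; }

    /* scan the live filesystem (no previous snapshot to diff against) */
    gpfs_iscan_t *scan = gpfs_open_inodescan(fs, NULL, NULL);
    if (scan == NULL) { perror("gpfs_open_inodescan"); return 1; }

    const gpfs_iattr_t *iattr;
    unsigned long long matched = 0;

    /* gpfs_next_inode() returns 0 with iattr == NULL at end of scan;
       termIno of 0 scans the entire inode file */
    while (gpfs_next_inode(scan, 0, &iattr) == 0 && iattr != NULL) {
        if (iattr->ia_uid == OLD_UID) {
            /* the real tool inserts ia_inode into an in-memory hash;
               here we just print it */
            printf("%llu\n", (unsigned long long)iattr->ia_inode);
            matched++;
        }
    }

    fprintf(stderr, "matched %llu inodes owned by uid %d\n", matched, OLD_UID);
    gpfs_close_inodescan(scan);
    gpfs_free_fssnaphandle(fs);
    return 0;
}
```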
The mmilleniumfacl doesn't have the luxury of scanning for POSIX ACLs
using the GPFS inode API, so it walks the filesystem and reads the ACL
of every file, updating the ACL entries as appropriate.
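The per-file ACL step looks roughly like the following untested sketch.
It uses gpfs_getacl()/gpfs_putacl() with the struct-format flag; the
buffer-sizing convention and the entry-type names are assumptions taken
from gpfs.h that you should verify against your install:

```c
/*
 * Sketch of the mmilleniumfacl's per-file step (not the real tool):
 * read a file's ACL, remap named-user entries that reference the old
 * UID, and write it back.  Flag and struct names follow gpfs.h;
 * treat the details as assumptions.
 */
#include <stdio.h>
#include <string.h>
#include <gpfs.h>

#define OLD_UID 1234  /* hypothetical source UID */
#define NEW_UID 5678  /* hypothetical destination UID */

static int remap_posix_acl(const char *path)
{
    char buf[0x8000];                 /* oversized ACL buffer */
    gpfs_acl_t *acl = (gpfs_acl_t *)buf;
    int changed = 0;

    memset(buf, 0, sizeof(buf));
    acl->acl_len = sizeof(buf);       /* tell GPFS how big the buffer is */

    if (gpfs_getacl(path, GPFS_GETACL_STRUCT, acl) != 0)
        return -1;

    if (acl->acl_version != GPFS_ACL_VERSION_POSIX)
        return 0;                     /* NFSv4 ACLs need ace_v4 handling */

    for (unsigned i = 0; i < acl->acl_nace; i++) {
        gpfs_ace_v1_t *ace = &acl->ace_v1[i];
        /* only named-user entries carry a uid in ace_who; group
           entries would be handled the same way via GPFS_ACL_GROUP */
        if (ace->ace_type == GPFS_ACL_USER && ace->ace_who == OLD_UID) {
            ace->ace_who = NEW_UID;
            changed = 1;
        }
    }

    if (changed && gpfs_putacl(path, GPFS_PUTACL_STRUCT, acl) != 0)
        return -1;
    return changed;
}

int main(int argc, char **argv)
{
    for (int i = 1; i < argc; i++)
        if (remap_posix_acl(argv[i]) < 0)
            perror(argv[i]);
    return 0;
}
```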
I'm going to see if I can share the source code for both tools,
although I don't know if I can post it here since it modifies existing
IBM source code. Could someone from IBM chime in? If I were to send the
code to IBM, could they publish it, perhaps on the wiki?
-Aaron
On 6/30/17 11:20 AM, [email protected] wrote:
Hello,
We're trying to change most of our users' UIDs. Is there a clean way to
migrate all of one user's files with, say, `mmapplypolicy`? We have to
change the owner of around 273,539,588 files, and my estimate for the
runtime is around 6 days.
What we've been doing is indexing all of the files and splitting them
up by owner, which takes around an hour, and then locking each user out
while we chown their files. I made the chown step multi-threaded, which
weirdly gave a 10% speedup, despite my expectation that multi-threaded
access from a single node would not give any speedup.
Generally, I'm looking for advice on how to make the chowning faster.
Would spreading the chown processes over multiple nodes improve
performance? Should I skip the stat() before running lchown() on each
file, since lchown() checks the file before changing it? I saw mention
of inodescan() in an old gpfsug email; it speeds up disk read access by
not guaranteeing that the data is up to date. We have a maintenance day
coming up when all users will be locked out, so the file handles (from
GPFS's perspective) won't be able to go stale. Is there a function with
constraints similar to inodescan() that I can use to speed up this
process?
Thank you for your time,
Luke
Storrs-HPC
University of Connecticut
--
Aaron Knister
NASA Center for Climate Simulation (Code 606.2)
Goddard Space Flight Center
(301) 286-2776