I’m looking at migrating 3-4 billion files, maybe 3 PB of data, between GPFS 
clusters. Most of the files are small: 60% are 8 KB or less. I’d like to 
copy at least 15-20M files per day, and ideally 50M.
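For scale, the targets above work out to roughly the following timeline and sustained file rates. This is just back-of-the-envelope arithmetic on the figures in the post (3-4 billion files, 15-50M files/day), not a statement about what any particular tool can deliver:

```python
# Rough feasibility arithmetic for the stated migration targets.
# All figures are taken from the post; nothing here is measured.
total_files_low, total_files_high = 3e9, 4e9   # 3-4 billion files
rate_low, rate_high = 15e6, 50e6               # 15M-50M files/day

# Elapsed time: best case (fewest files, fastest rate) vs. worst case.
days_best = total_files_low / rate_high    # 60 days
days_worst = total_files_high / rate_low   # ~267 days

# Sustained per-second file rate each daily target implies (86400 s/day).
per_sec_low = rate_low / 86400             # ~174 files/s
per_sec_high = rate_high / 86400           # ~579 files/s

print(days_best, days_worst, round(per_sec_low), round(per_sec_high))
```

So even the 50M/day target means two months of sustained copying at ~580 files/s around the clock; metadata operations (inode scans, stat, small-file create) will likely dominate over raw bandwidth given the 8 KB file sizes.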

Any thoughts on how achievable this is, or what to use? AFM, 
mpifileutils, rsync... something else? Many of these files would be in 4K inodes. 
The destination is ESS.


Bob Oesterlin
Sr Principal Storage Engineer, Nuance

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss