Dear Marc,
If you "must" exceed 1000 filesets because you are assigning each project to its own fileset, my suggestion is this: Yes, there are scaling/performance/manageability benefits to using mmbackup over independent filesets. But maybe you don't need 10,000 independent filesets -- maybe you can hash or otherwise randomly assign projects, each with its own (dependent) fileset name, to a smaller number of independent filesets that serve as management groups for (mm)backup...

OK, if that might be doable, what's then the performance impact of having to specify Include/Exclude lists for each independent fileset in order to specify which dependent filesets should be backed up and which should not? I don't remember exactly, but I think I've heard at some point that Include/Exclude and mmbackup have to be used with caution.

And the same question holds true for running mmapplypolicy for a "job" on a single dependent fileset: is the scan runtime linear in the size of the underlying independent fileset, or are there some optimisations when I just want to scan a subfolder/dependent fileset of an independent one?

Like many things in life, sometimes compromises are necessary! Hmm, can I reference this next time, when we negotiate Scale License pricing with the ISS sales people? ;)

Best Regards,
Stephan Peinkofer
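For what the "hash projects into fewer independent filesets" idea could look like in practice, here is a minimal sketch. It is purely illustrative: the group count (256) and the naming scheme (`indep###`) are assumptions, not anything GPFS/Spectrum Scale prescribes; the point is only that a stable hash gives each project a deterministic home, so mmbackup can be run per management group instead of per project.

```python
import hashlib

# Hypothetical number of independent filesets used as backup
# "management groups" -- chosen to stay well under the ~1000 limit.
NUM_GROUPS = 256

def management_group(project_name: str) -> str:
    """Deterministically map a project (dependent fileset) name to one
    of NUM_GROUPS independent filesets via a stable hash. Using SHA-1
    (rather than Python's built-in hash()) keeps the mapping stable
    across processes and hosts."""
    digest = hashlib.sha1(project_name.encode("utf-8")).hexdigest()
    return f"indep{int(digest, 16) % NUM_GROUPS:03d}"

# A project always lands in the same group, so its junction path and
# its backup schedule stay consistent over time.
print(management_group("projectA"))
```

Because the mapping is deterministic, adding a new project never moves existing ones; the trade-off is that groups fill unevenly unless project counts are large enough for the hash to spread them out.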
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss