We are running mmbackup on an AIX system:
oslevel -s
6100-07-10-1415
Current GPFS build: "4.1.0.8".

So we only use one node for the policy run.

Stephan

On 10/26/15 22:12, Wayne Sawdon wrote:

> From: Stephan Graf <[email protected]>
>
> For backup we use mmbackup (dsmc)
>     for the user HOME directory (no ILM)
>     ~120 million files => 3 hours to build the candidate list + x hours for the backup

That seems rather slow. What version of GPFS are you running? How many nodes 
are you using? Are you specifying a global shared directory with "-g"?

The original mmapplypolicy code was targeted to a single node, so by default it 
still runs on a single node and you have to specify -N to run it in parallel.  
When you run multi-node there is a "-g" option that defines a global shared 
directory that must be visible to all nodes specified in the -N list.  Using 
"-g" with "-N" enables a scale-out parallel algorithm that substantially 
reduces the time for candidate selection.
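
For example, the invocation could look something like this (the node names and
the shared work directory path below are only placeholders; the "-g" directory
must sit in a GPFS file system that every node in the "-N" list can see):

    # node names, policy file and -g path here are placeholders
    mmapplypolicy /gpfs/home -P policy.rules \
        -N node01,node02,node03,node04 \
        -g /gpfs/home/.mmSharedTmpDir

If I remember correctly, mmbackup accepts the same -N and -g options and passes
them on to the policy scan it runs internally, so the same speedup applies to
building the backup candidate list.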

-Wayne









_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
