To: gpfsug main discussion list
Date: 03/08/2019 10:13 AM
Subject: Re: [gpfsug-discuss] Follow-up: migrating billions of files
Sent by: gpfsug-discuss-boun...@spectrumscale.org
I had to do this twice too. Once I had to copy a 4 PB filesystem as fast as
possible, when NSD disk descriptors were corrupted and shutting down GPFS would
have meant losing those files forever; the other was regular maintenance, but I
had to copy a similar amount of data in less time.
We had a similar situation and ended up using parsyncfp, which generates
multiple parallel rsyncs based on file lists. If the hosts are on the same IB
fabric (as ours were), you can use that instead of Ethernet, and it worked
pretty well. One caveat is that you need to follow the parallel transfers.
On Wed, 2019-03-06 at 12:44 +, Oesterlin, Robert wrote:
> Some of you had questions to my original post. More information:
>
> Source:
> - Files are straight GPFS/Posix - no extended NFSV4 ACLs
> - A solution that requires $’s to be spent on software (i.e., Aspera)
>   isn’t a very viable option
From: Stephen Ulmer
To: gpfsug main discussion list
Date: 06/03/2019 16:55
Subject: Re: [gpfsug-discuss] Follow-up: migrating billions of files
Sent by: gpfsug-discuss-boun...@spectrumscale.org
In the case where tar -C doesn’t work, you can always use a subshell (I do this
regularly):

tar -cf - . | ssh someguy@otherhost "(cd targetdir; tar -xvf -)"

Only use -v on one end. :)
Also, for parallel work that’s not designed that way, don’t underestimate the
-P option to GNU and BSD xargs.
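A hypothetical sketch of that approach with xargs -P, which caps the number of jobs running at once. Here echo merely prints the command that a real run would execute, and the directory names and host are placeholders:

```shell
# Run up to 4 transfers at a time with xargs -P. echo stands in for the
# real rsync/tar invocation; proj1..proj4 and otherhost are placeholders.
printf '%s\n' proj1 proj2 proj3 proj4 |
  xargs -P 4 -I{} echo rsync -a {}/ otherhost:/dst/{}/
```

Dropping the echo would launch the actual copies, four at a time, until the input list is drained.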
Hi, in that case I'd open several tar pipes in parallel, over carefully
selected directories, like:

tar -c | ssh "tar -x"

I am not quite sure whether "-C /" for tar works here ("tar -C / -x"), but
something along these lines might be a good, efficient method. target_hosts
should be all nodes.
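A hedged sketch of that idea: one tar pipe per top-level directory, run in the background. To keep the example self-contained the extracting tar runs locally; in a real migration it would sit behind ssh to one of the target hosts, and all paths here are placeholders:

```shell
# One tar|tar pipe per directory, run in parallel. Locally simulated:
# in practice the right-hand tar would run via ssh on a target node,
# e.g. ... | ssh node1 "tar -C /dst -xf -". Paths are placeholders.
mkdir -p /tmp/tsrc/d1 /tmp/tsrc/d2 /tmp/tdst
touch /tmp/tsrc/d1/f1 /tmp/tsrc/d2/f2
for d in d1 d2; do
    tar -C /tmp/tsrc -cf - "$d" | tar -C /tmp/tdst -xf - &
done
wait
```

Using -C on both ends avoids the cd-in-a-subshell trick, and spreading the pipes across several target nodes spreads the extraction load as well.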