Fred
__________________________________________________
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
[email protected]
----- Original message -----
From: "Yaron Daniel" <[email protected]>
Sent by: [email protected]
To: gpfsug main discussion list <[email protected]>
Cc:
Subject: Re: [gpfsug-discuss] Migrating billions of files?
Date: Wed, Mar 6, 2019 4:18 AM
Hi
You can also use Aspera today, which will replicate GPFS extended attributes.
Integration of IBM Aspera Sync with IBM Spectrum Scale: Protecting and Sharing Files Globally
http://www.redbooks.ibm.com/redpieces/abstracts/redp5527.html?Open
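Whichever tool you end up with, it is worth spot-checking that extended attributes actually survive the copy. A minimal sketch using Python's standard library (Linux-only; the paths are made up, walk your real trees in practice):

    import os

    def xattrs(path):
        # Return {name: value} for every extended attribute on path.
        return {name: os.getxattr(path, name) for name in os.listxattr(path)}

    def compare(src, dst):
        # Report any xattr difference between a source file and its copy.
        a, b = xattrs(src), xattrs(dst)
        if a != b:
            print(f"MISMATCH {src}: {sorted(a)} vs {sorted(b)}")

    # Hypothetical example paths:
    compare("/gpfs/src/fileset1/data.bin", "/gpfs/dst/fileset1/data.bin")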
Regards
Yaron Daniel
Storage Architect – IL Lab Services (Storage)
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: [email protected]
IBM Israel
From: Simon Thompson <[email protected]>
To: gpfsug main discussion list <[email protected]>
Date: 03/06/2019 11:08 AM
Subject: Re: [gpfsug-discuss] Migrating billions of files?
Sent by: [email protected]
AFM doesn’t work well if you have dependent filesets, though, which we did for quota purposes.
Simon
From: <[email protected]> on behalf of "[email protected]" <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Wednesday, 6 March 2019 at 09:01
To: "[email protected]" <[email protected]>
Subject: Re: [gpfsug-discuss] Migrating billions of files?
Hi
What permissions do you have? Do you have only POSIX attributes, or also SMB attributes?
If only POSIX attributes, you can do the following:
- rsync (which can work on different filesets/directories in parallel; see the sketch below)
- AFM (but if you need rollback, it will be problematic)
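If you go the rsync route, a minimal sketch of fanning out one rsync per top-level directory (the paths and worker count are assumptions, tune them to your layout; -aHAX preserves hard links, POSIX ACLs, and xattrs):

    import subprocess
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    SRC = Path("/gpfs/src/fileset1")   # hypothetical source
    DST = "/gpfs/dst/fileset1"         # hypothetical destination

    def sync(subdir):
        # -a archive, -H hard links, -A ACLs, -X extended attributes
        cmd = ["rsync", "-aHAX", f"{subdir}/", f"{DST}/{subdir.name}/"]
        return subprocess.run(cmd).returncode

    # One rsync per top-level directory, eight at a time.
    subdirs = [p for p in SRC.iterdir() if p.is_dir()]
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(sync, subdirs))

    print(f"{results.count(0)}/{len(results)} directories synced cleanly")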
Regards
Yaron Daniel
Storage Architect – IL Lab Services (Storage)
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: [email protected]
IBM Israel
From: "Oesterlin, Robert" <[email protected]>
To: gpfsug main discussion list <[email protected]>
Date: 03/05/2019 11:57 PM
Subject: [gpfsug-discuss] Migrating billions of files?
Sent by: [email protected]
I’m looking at migrating 3-4 billion files, maybe 3 PB of data, between GPFS clusters. Most of the files are small: 60% are 8 KB or less. Ideally I’d like to copy at least 15-20M files per day, ideally 50M.
Any thoughts on how achievable this is? Or what to use: AFM, mpifileutils, rsync, or something else? Many of these files would be in 4K inodes. Destination is ESS.
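For a sense of scale, those daily targets work out to these sustained rates (quick arithmetic, nothing cluster-specific):

    SECONDS_PER_DAY = 86_400
    for files_per_day in (15e6, 20e6, 50e6):
        rate = files_per_day / SECONDS_PER_DAY
        print(f"{files_per_day / 1e6:.0f}M files/day = {rate:,.0f} files/sec sustained")

    # 15M/day ~ 174/s, 20M/day ~ 231/s, 50M/day ~ 579/s.
    # With 60% of files at 8 KB or less, metadata rate, not
    # bandwidth, is likely to be the bottleneck.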
Bob Oesterlin
Sr Principal Storage Engineer, Nuance
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
