From: gpfsug-discuss-boun...@spectrumscale.org on behalf of Venkateswara R Puvvada
Sent: Monday, November 23, 2020 9:41 PM
To: gpfsug main discussion list
Cc: gpfsug-discuss-boun...@spectrumscale.org
Subject: Re: [gpfsug-discuss] Migrate/syncronize data from Isilon to Scale over NFS?
AFM provides near
.A. Yeep"
To: gpfsug main discussion list
Cc: gpfsug-discuss-boun...@spectrumscale.org
Date: 11/24/2020 07:40 PM
Subject: [EXTERNAL] Re: [gpfsug-discuss] Migrate/syncronize data from Isilon to Scale over NFS?
Sent by: gpfsug-discuss-boun...@spectrumscale.org
Hi Venkat,
~Venkat (vpuvv...@in.ibm.com)
From: "Frederick Stock"
To: gpfsug-discuss@spectrumscale.org
Cc: gpfsug-discuss@spectrumscale.org
Date: 11/17/2020 03:14 AM
Subject: [EXTERNAL] Re: [gpfsug-discuss] Migrate/syncronize data from Isilon to Scale over NFS?
> I would counsel in the strongest possible terms against that approach.
> Basically you have to be assured that none of your file names have "wacky"
> characters in them, because handling "wacky" characters in file names is
> exceedingly difficult. I cannot stress how hard it is and the above
On Wed, 18 Nov 2020 11:48:52 +0000, Jonathan Buzzard said:
> So what do I mean by "wacky" characters. Well remember a file name can
> have just about anything in it on Linux with the exception of '/', and
You want to see some fireworks? At least at one time, it was possible to use
a file system
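One way to sidestep the "wacky character" problem is to never let the shell parse the names at all: build a NUL-delimited file list and hand it to rsync. A minimal sketch of that pattern follows; the temp directories are stand-ins for the Isilon mount and the Scale target (those paths are assumptions, not from the thread).

```shell
# Demo of NUL-safe file listing for rsync. SRC/DEST are throwaway temp
# directories standing in for the Isilon NFS mount and the Scale filesystem.
SRC=$(mktemp -d); DEST=$(mktemp -d)
printf 'data\n' > "$SRC/plain.txt"
printf 'data\n' > "$SRC/wacky name with  spaces.txt"

# -print0 emits NUL-terminated paths, so spaces, quotes, and even embedded
# newlines in file names cannot be misparsed downstream.
( cd "$SRC" && find . -type f -print0 ) > /tmp/filelist0

# --from0 tells rsync the list is NUL-delimited; the shell never
# word-splits a single file name.
rsync -a --from0 --files-from=/tmp/filelist0 "$SRC/" "$DEST/"
ls "$DEST"
```

Since only `find` and `rsync` ever touch the names, no quoting in a wrapper script can break, which is the failure mode Jonathan is warning about.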
Hi Jonathan,
I would be very interested in seeing your scripts when they are posted. Let me
know where to get them!
Thanks a bunch!
Andi Christiansen
On 17/11/2020 23:17, Chris Schlipalius wrote:
So at my last job we used to rsync data between Isilons across campus, and
Isilon to Windows File Cluster (and back).
I recommend using a dry run to generate a list of files and then using this to
run with rsync.
This also allows you to break up the transfer into batches, and
check if
Hi Jonathan,
Yes, you are correct! But we plan to resync this once or twice every week for
the next 3-4 months to be sure everything is as it should be.
Right now we are focused on getting them synced up, and then we will run
scheduled resyncs/checks once or twice a week depending on the data
On 17/11/2020 15:55, Simon Thompson wrote:
Fortunately, we seem committed to GPFS so it might be we never have to do
another bulk transfer outside of the filesystem...
Until you want to move a v3 or v4 created file-system to v5 block sizes :-)
You forget the v2 to v3 for more than
>Fortunately, we seem committed to GPFS so it might be we never have to do
>another bulk transfer outside of the filesystem...
Until you want to move a v3 or v4 created file-system to v5 block sizes __
I hopes we won't be doing that sort of thing again...
Simon
On Tue, Nov 17, 2020 at 01:53:43PM +0000, Jonathan Buzzard wrote:
On 17/11/2020 11:51, Andi Christiansen wrote:
Hi all,
thanks for all the information, there were some interesting things
among it.
I kept on going with rsync and ended up making a file with all top
level user directories and splitting them into chunks of 347 per
rsync session (total 42000 ish
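Andi's chunking scheme might look roughly like the sketch below. The sizes are toy ones (his real run split ~42000 top-level user directories into chunks of 347), and the paths are temp stand-ins for the real mounts.

```shell
# Toy sketch of chunked parallel rsync over top-level directories.
SRC=$(mktemp -d); DEST=$(mktemp -d)
for d in alice bob carol dave erin frank; do
    mkdir "$SRC/$d" && echo data > "$SRC/$d/file"
done

# List the top-level user directories and split into chunks
# (2 per chunk here; 347 in Andi's real migration).
( cd "$SRC" && ls -1 ) > /tmp/topdirs
rm -f /tmp/chunk.*
split -l 2 /tmp/topdirs /tmp/chunk.

# One rsync per chunk, run in parallel. Note: with --files-from, -a does
# NOT imply recursion, so -r must be given explicitly.
for c in /tmp/chunk.*; do
    rsync -a -r --files-from="$c" "$SRC/" "$DEST/" &
done
wait
```

Because every chunk covers a disjoint set of directories, the parallel rsync processes never write to the same files and each chunk can be retried independently.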
IBM Deutschland Business & Technology Services GmbH
Geschäftsführung: Sven Schooss, Stefan Hierl
Sitz der Gesellschaft: Ehningen
Registergericht: Amtsgericht Stuttgart, HRB 17122
From: Uwe Falke/Germany/IBM
To: gpfsug main discussion list
Date: 17/11/2020 09:50
Subject: Re: [EXTERNAL] [gpfsug-discuss] Migrate/syncronize data from Isilon to Scale over NFS?
Hi Andi,
what about leaving NFS completely out and using rsync (multiple
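Uwe's message is truncated, but "leaving NFS out" presumably means running rsync straight over SSH between the Isilon and a Scale node. The host and paths below (isilon-node1, /ifs/data, /gpfs/fs1) are invented for illustration, so the sketch only assembles and prints the command rather than executing it against a real host.

```shell
# Hypothetical endpoints for an NFS-free transfer; these names are
# assumptions, not taken from the thread.
SRC_HOST=isilon-node1
SRC_PATH=/ifs/data/project/
DEST_PATH=/gpfs/fs1/project/

# Build the rsync-over-SSH command as an array; in a real migration you
# would execute "${cmd[@]}", one such process per directory subtree.
cmd=(rsync -a --partial -e ssh "$SRC_HOST:$SRC_PATH" "$DEST_PATH")
printf '%s\n' "${cmd[*]}"
```

This moves data in a single hop (Isilon to Scale node) instead of two (Isilon to NFS client, client to Scale), at the cost of SSH encryption overhead on the Isilon side.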
From: Andi Christiansen
To: "gpfsug-discuss@spectrumscale.org"
Date: 16/11/2020 20:44
Subject: [EXTERNAL] [gpfsug-discuss] Migrate/syncronize data from Isilon to Scale over NFS?
Sent by: gpfsug-discuss-boun...@spectrumscale.org
On 16/11/2020 21:58, Skylar Thompson wrote:
On 16/11/2020 19:44, Andi Christiansen wrote:
Hi all,
I have got a case where a customer wants 700TB migrated from Isilon to
Scale and the only way for him is exporting the same directory on NFS
from two different nodes...
As of now we are using multiple rsync processes on different parts
When we did a similar (though larger, at ~2.5PB) migration, we used rsync
as well, but ran one rsync process per Isilon node, and made sure the NFS
clients were hitting separate Isilon nodes for their reads. We also didn't
have more than one rsync process running per client, as the Linux NFS
----- Original message -----
From: Andi Christiansen
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: "gpfsug-discuss@spectrumscale.org"
Cc:
Subject: [EXTERNAL] [gpfsug-discuss] Migrate/syncronize data from Isilon to Scale over NFS?
Date: Mon, Nov 16, 2020 2:44 PM
Hi all,
I have got a case where a customer wants 700TB migrated from Isilon to Scale
and the only way for him is exporting the same directory on NFS from two
different nodes...
As of now we are using multiple rsync processes on different parts of folders
within the main directory. This is