On 16/11/2020 21:58, Skylar Thompson wrote:
When we did a similar (though larger, at ~2.5PB) migration, we used rsync as well, but ran one rsync process per Isilon node, and made sure the NFS clients were hitting separate Isilon nodes for their reads. We also didn't have more than one rsync process running per client, as the Linux NFS
On 16/11/2020 19:44, Andi Christiansen wrote:
Hi all,

I have got a case where a customer wants 700TB migrated from Isilon to Scale, and the only way for him is exporting the same directory on NFS from two different nodes...

As of now we are using multiple rsync processes on different parts of folders within the main directory. This is
Have you considered using the AFM feature of Spectrum Scale? I doubt it will provide any speed improvement but it would allow for data to be accessed as it was being migrated.
Fred

Fred Stock | IBM Pittsburgh Lab | 720-430-8821 | sto...@us.ibm.com
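For the AFM suggestion above, one possible shape is an AFM read-only fileset whose target is the Isilon NFS export, so data stays readable through Scale while it migrates. This is a hedged sketch, not a verified recipe: the filesystem name (fs1), fileset name (migrate), host, and export path are made up, and exact options vary by Spectrum Scale release.

```shell
# Sketch only: create an AFM fileset caching the Isilon NFS export
# (names are illustrative, not from the original thread).
mmcrfileset fs1 migrate --inode-space new \
    -p afmMode=ro,afmTarget=nfs://isilon1/ifs/data

# Link it into the namespace so clients can read through it:
mmlinkfileset fs1 migrate -J /gpfs/fs1/migrate

# Pull data in the background while users access it (prefetch options
# vary by release; some versions require a list file -- check mmafmctl
# documentation for your level):
mmafmctl fs1 prefetch -j migrate
```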
Hi,

While the other nodes can well block the local one, as Frederick suggests, there should at least be something visible locally waiting for these other nodes. Looking at all waiters might be a good thing, but this case looks strange in other ways. Mind the statement that there are almost no local
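To look at waiters as suggested above, the usual Scale tooling is along these lines (shown as a sketch; output format and options differ between releases):

```shell
# Show waiters on the local node:
mmdiag --waiters

# Collect waiters from every node in the cluster:
mmlsnode -N waiters -L
```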