Hi Jay,

There's a lot of good advice in this thread around networking, but have
you tried compressing the files before sending them and decompressing on
the other side? I know it's not always practical, but it's often a
really simple win, especially with one of the more modern compression
algorithms - a dataset I've been working with recently compresses about
2x better with zstd than with gzip, for example.
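
If you want to gauge the likely win on your own data first, a quick
sketch along these lines might help (Python; I'm assuming the
python-zstandard package here, and the file path argument is just a
stand-in for one of your scans) - it prints the compression ratio for
both algorithms on a sample file:

import gzip
import sys

import zstandard as zstd  # assumes "pip install zstandard"

# Read a representative sample file into memory.
with open(sys.argv[1], "rb") as f:
    data = f.read()

# Compress the same bytes with both algorithms at their usual levels.
gz = gzip.compress(data, compresslevel=6)         # gzip CLI default
zc = zstd.ZstdCompressor(level=3).compress(data)  # zstd default

print(f"original: {len(data):>12,} bytes")
print(f"gzip -6:  {len(gz):>12,} bytes ({len(data) / len(gz):.2f}x)")
print(f"zstd -3:  {len(zc):>12,} bytes ({len(data) / len(zc):.2f}x)")

Running that against a few typical scans should tell you quickly
whether the extra CPU is worth it before you build compression into the
transfer pipeline.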


On Thu, Sep 13, 2018 at 5:41 PM Jay Askren <[email protected]> wrote:

> We need to push 40 TB of images per day from our scanning department in
> Utah to our storage servers in Virginia and then we download about 4 TB of
> processed images per day back to Utah.  In our previous process we had no
> problem getting the throughput we needed by using Robocopy which comes with
> Windows, but our old storage servers were here in Utah.  We can get
> Robocopy to work across the WAN but we have to run 3 or 4 Robocopy
> processes under different Windows users which is somewhat fragile and feels
> like a bad hack.  The files here in Utah are on a Windows server because of
> the proprietary software needed to run the scanner.  All of our servers in
> Virginia run CentOS.
>
> Any thoughts on how to transfer files over long distances and still get
> high throughput?  I believe the issue we are running into is high latency.
>


-- 
regards,

  Richard Warburton

  http://insightfullogic.com
  @RichardWarburto <http://twitter.com/richardwarburto>
