Thanks everyone for the great ideas. To answer several questions and offer some clarification: we have 7 Windows servers on site that push to our storage server in Virginia. If we can get around 80% saturation of each of the 1 Gbps NICs on these machines, that is more than enough throughput for our purposes right now. The 10 Gbps line carries very little traffic other than our uploads of these files. I believe the storage server has a 10 Gbps NIC.
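For context, here is the back-of-the-envelope arithmetic behind that 80% figure, as a quick Python sketch (assuming decimal units and the 40 TB/day upload volume from my original post):

    TB = 10**12                      # decimal terabyte, in bytes
    daily_bytes = 40 * TB            # images pushed per day
    seconds_per_day = 24 * 60 * 60

    required_bps = daily_bytes * 8 / seconds_per_day
    print(f"sustained rate needed: {required_bps / 1e9:.2f} Gbps")         # ~3.70 Gbps

    # 7 senders, each targeting ~80% of a 1 Gbps NIC
    available_bps = 7 * 0.8 * 1e9
    print(f"aggregate at 80% saturation: {available_bps / 1e9:.2f} Gbps")  # 5.60 Gbps

So 80% of each NIC across the 7 servers gives us comfortable headroom over the ~3.7 Gbps we need to sustain.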
As far as compression goes, we looked at that early on but decided against it because compression plus the file transfer took more time than the file transfer alone, and we were trying to move the images off the Windows servers as quickly as possible. But I think it would be worth revisiting, as I only tried gzip before. I am looking into the other technologies you suggested; I've put a rough benchmark sketch at the bottom of this message. Thank you!

On Thursday, September 13, 2018 at 10:41:50 AM UTC-6, Jay Askren wrote:
>
> We need to push 40 TB of images per day from our scanning department in
> Utah to our storage servers in Virginia and then we download about 4 TB of
> processed images per day back to Utah. In our previous process we had no
> problem getting the throughput we needed by using Robocopy which comes with
> Windows, but our old storage servers were here in Utah. We can get
> Robocopy to work across the WAN but we have to run 3 or 4 Robocopy
> processes under different Windows users which is somewhat fragile and feels
> like a bad hack. The files here in Utah are on a Windows server because of
> the proprietary software needed to run the scanner. All of our servers in
> Virginia run CentOS.
>
> Any thoughts on how to transfer files over long distance and still get
> high throughput? I believe the issue we are running into is high latency.
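For what it's worth, here is roughly how I plan to re-test compression. This is only a minimal Python sketch under some assumptions: it uses the third-party "zstandard" package, a placeholder sample file name, and ~100 MB/s as stand-in for 80% of a 1 Gbps NIC. The question it answers is whether compress time plus the smaller transfer actually beats sending the raw file.

    # Does compressing first save wall-clock time on a ~1 Gbps link?
    import time, gzip, zstandard

    LINK_BYTES_PER_SEC = 100e6          # assumed usable throughput per sender

    def evaluate(name, compress_fn, data):
        t0 = time.perf_counter()
        compressed = compress_fn(data)
        compress_secs = time.perf_counter() - t0
        plain_send = len(data) / LINK_BYTES_PER_SEC
        total = compress_secs + len(compressed) / LINK_BYTES_PER_SEC
        print(f"{name}: ratio {len(data)/len(compressed):.2f}, "
              f"compress {compress_secs:.1f}s, "
              f"compress+send {total:.1f}s vs plain send {plain_send:.1f}s")

    with open("sample_scan.tif", "rb") as f:    # hypothetical sample image
        data = f.read()

    evaluate("gzip-6", lambda d: gzip.compress(d, compresslevel=6), data)
    evaluate("zstd-3", zstandard.ZstdCompressor(level=3).compress, data)

If a fast compressor like zstd at a low level stays well above ~100 MB/s and the scans compress at all, it could be a net win; if the images are already in a compressed format it probably won't be. Again, just a sketch, not results.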
