On Thu, Jan 22, 2015 at 1:25 PM, Brian Scholl <[email protected]> wrote:
> I think we can discard the 500 error for now, I just can't find it and for
> all I know at this point it was a one-off.
>
> The database is 2.5TB and it's running on an ec2 hs1.8xlarge. It's sitting
> on the ephemeral disks in a raid0.
Hm. That's a fairly large instance.

First of all, the optimizations ought to help in this case, because the previous CPU usage per byte was drastically higher.

How fast do you want the backup to run? Basic math suggests that at around 50% of gigabit saturation, backing up 2.5TB takes about 12 hours. To go much faster you will have to raise the parallelism settings considerably, and if for any reason it's not possible to read several files at the same time, you are out of luck.

-- 
You received this message because you are subscribed to the Google Groups "wal-e" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
For more options, visit https://groups.google.com/d/optout.
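For anyone following along, the "basic math" above can be sketched like this. The 2.5TB database size and the 50%-of-gigabit throughput come from this thread; the gigabit link speed is the assumption being made, and everything else is plain arithmetic.

```python
# Back-of-the-envelope backup time estimate (a sketch, not a benchmark).
db_size_bytes = 2.5e12       # 2.5 TB database, per Brian's message
link_bits_per_sec = 1e9      # assume a gigabit network link
utilization = 0.5            # assume ~50% saturation of that link

throughput_bytes_per_sec = link_bits_per_sec * utilization / 8
hours = db_size_bytes / throughput_bytes_per_sec / 3600
print(f"Estimated backup time: {hours:.1f} hours")  # ~11.1 hours
```

Which rounds up to the roughly 12 hours quoted above; any overhead from compression stalls or S3 retries only pushes the number higher.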
