Hi,

> On Mar 27, 2017, at 3:07 PM, Hugo Mills <h...@carfax.org.uk> wrote:
> 
>   On my hardware (consumer HDDs and SATA, RAID-1 over 6 devices), it
> takes about a minute to move 1 GiB of data. At that rate, it would
> take 1000 minutes (or about 16 hours) to move 1 TiB of data.
> 
>   However, there are cases where some items of data can take *much*
> longer to move. The biggest of these is when you have lots of
> snapshots. When that happens, some (but not all) of the metadata can
> take a very long time. In my case, with a couple of hundred snapshots,
> some metadata chunks take 4+ hours to move.

Thanks for that info. The 1 minute per 1 GiB matches what I saw too - it was the 
“it can take much longer” part that I couldn’t explain.

As I’m not using snapshots: would large files (100+ GB) with long chains of CoW 
history (specifically reflink copies) also hurt?

Something I’d like to verify: does traffic on the volume have the potential to 
delay this indefinitely? I.e. does the system write into chunks that we’re trying 
to free, so the resize may have to work on the same chunk over and over again? If 
not, then it’s just slow and we’re looking at about 2 months’ worth of time 
shrinking this volume. (And then again on the next, bigger server, probably 
3-4 months.)

(Background info: we’re migrating large volumes from btrfs to xfs and can only 
do this step by step: copy some data, shrink the btrfs volume, extend the xfs 
volume, rinse, repeat - a rough sketch of one cycle is below. If anyone has 
suggestions for speeding this up so we don’t have to think in terms of _months_, 
I’m all ears.)
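For concreteness, a minimal sketch of one iteration of that cycle, assuming both 
filesystems sit on LVM logical volumes in the same volume group - the names 
(vg0/btrfs_lv, vg0/xfs_lv, /srv/btrfs, /srv/xfs) and the 100 GiB batch size are 
placeholders, not our actual layout:

  # 1. copy a batch of data from the btrfs side to the xfs side
  rsync -aHAX /srv/btrfs/batch-01/ /srv/xfs/batch-01/

  # 2. shrink the btrfs filesystem first (the slow part: chunks get relocated)
  btrfs filesystem resize -110g /srv/btrfs

  # 3. shrink the underlying LV by less than the filesystem shrink,
  #    keeping a safety margin
  lvreduce -L -100G vg0/btrfs_lv

  # 4. hand the freed space to the LV backing xfs
  lvextend -L +100G vg0/xfs_lv

  # 5. grow xfs online to fill the enlarged device
  xfs_growfs /srv/xfs

Step 2 is where essentially all of the time goes for us; the other steps finish 
in seconds to minutes.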

Cheers,
Christian

--
Christian Theune · c...@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · http://flyingcircus.io
Forsterstraße 29 · 06112 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
