I've been given a task to convert a single 300TB LV with an XFS filesystem 
from an 8-way stripe across VMFS-backed disks to a 16-way stripe across LUNs 
presented directly from an all-flash array.

My first inclination was to create a new filesystem and copy with rsync or 
xfsdump, but the downtime is problematic.  The volume I'm modifying is the DR 
copy of the filesystem, which applies batches of replicated writes every 240 
seconds, and I'm trying to get this done before we flip over to it later this 
year, at which point it will be serving reads and writes.  Normally the DR 
side takes ~40 seconds to apply a batch of replicated writes, running as fast 
as it can.
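
For reference, the offline route would look roughly like this (the VG/LV 
names and mount points are placeholders, not our real ones), which is what 
makes the downtime a problem at 300TB:

  # build a fresh 16-way striped LV on the new flash LUNs and copy into it
  lvcreate -n lv_new -L 300T -i 16 -I 4M vg_data
  mkfs.xfs /dev/vg_data/lv_new        # picks up stripe geometry from the LV
  mount /dev/vg_data/lv_new /mnt/new

  # copy while the source is quiesced, then swap mount points
  rsync -aHAX --numeric-ids /data/ /mnt/new/
  # or: xfsdump -J - /data | xfsrestore -J - /mnt/new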

Looking at other options to move it online, I've been able to run "lvconvert 
--type mirror <lvm> --stripes 16 --stripesize 4M --mirrorlog core" successfully 
in a test environment and then remount specifying the different stripe width 
(a rough sketch of the full sequence I've been testing is at the bottom of 
this mail), but I have a couple of questions:

1) It looks like it should restripe the existing data by mirroring the full 
LV onto the new 16-stripe layout, but I'm not sure whether it behaves more 
like lvmraid, where stripes are added without really restriping the existing 
data.  Is this actually doing what I want?

2) As I'd expect, this is fairly performance-impacting, but are there any 
options to reduce the impact?  I've tried increasing the queue depth, setting 
nomerges, etc. with little luck (examples of the knobs I poked at are also at 
the bottom of this mail).  I also tried the lvmraid route of converting to 
raid5, reshaping to add stripes, and converting back to raid0, but it didn't 
seem to be much better.
     a) The initial sync increases the batch apply time from 40 seconds to 220 
seconds.
     b) After it's fully synced it still takes 190 seconds to apply a batch, 
until I split the mirror, at which point it drops back down to under 40 
seconds.  So I'm guessing the sync itself isn't the biggest factor; the 
reshape from 8 to 16 stripes is, since the impact stays this high even at 
100% in sync.
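
For completeness, here's roughly the sequence I've been testing (VG/LV and 
device names are placeholders; restricting allocation with a trailing PV list 
and splitting off the old image by naming its PVs are assumptions about how 
I'd finish the cutover, not steps I've run against production):

  # add the 16 new flash LUNs to the VG
  pvcreate /dev/mapper/flash01                              # ...repeat for all 16
  vgextend vg_data /dev/mapper/flash01 /dev/mapper/flash02  # ...etc

  # attach a second, 16-way striped mirror image on the new PVs
  lvconvert --type mirror vg_data/lv_data --stripes 16 --stripesize 4M \
        --mirrorlog core /dev/mapper/flash01 /dev/mapper/flash02  # ...all 16

  # watch the copy percentage until it reaches 100
  lvs -a -o name,copy_percent,devices vg_data

  # split off the old 8-way image so only the new 16-way legs remain
  lvconvert --splitmirrors 1 --name lv_old vg_data/lv_data /dev/mapper/vmfs01  # ...old PVs

  # remount with the new geometry; sunit/swidth are in 512-byte sectors,
  # so a 4M stripe unit is 8192 and 16 stripes gives swidth=131072
  mount -o remount,sunit=8192,swidth=131072 /data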
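
And these are the kinds of knobs I've been poking at during the sync (sdX 
stands for each backing LUN, the values are just what I tested, and the exact 
lvmraid conversion steps below are from memory, so treat them as approximate):

  # per-device I/O tuning while the mirror syncs
  echo 2   > /sys/block/sdX/queue/nomerges       # disable request merging
  echo 512 > /sys/block/sdX/queue/nr_requests    # deeper request queue
  echo 128 > /sys/block/sdX/device/queue_depth   # deeper LUN queue depth

  # the lvmraid route I also tried: raid5, reshape to 16 stripes, back to raid0
  lvconvert --type raid5_n vg_data/lv_data
  lvconvert --stripes 16 vg_data/lv_data
  lvconvert --type raid0 vg_data/lv_data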
