I've got an LVM thin pool striped across 2 drives that I'm trying to migrate to 
a new, larger single drive (getting rid of the old drives). Going by various 
information on the subject, the procedure seems to be to first convert from 
striped to mirrored, and then from mirrored to linear. While attempting this 
I've hit an issue in the second part of that process, and I'm not having much 
luck resolving it.
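
In other words, the rough plan was something like the following (just a sketch; 
the PV arguments to the second step and the final vgreduce/pvremove are my own 
assumptions about how the old drives get released at the end):

# lvconvert -m 1 ssd/thin_tdata                      (striped -> mirrored, new copy intended for sda)
  ...wait for the mirror to sync...
# lvconvert -m 0 ssd/thin_tdata /dev/sdb /dev/sdc    (drop the copy on the old drives, leaving a linear LV on sda)
# vgreduce ssd /dev/sdb /dev/sdc
# pvremove /dev/sdb /dev/sdc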

To start, the drives at play are sda (the new, larger drive), sdb (one of the 
old drives being removed), and sdc (the other old drive being removed). The VG 
name is "ssd".
This is what the initial layout looked like:

# lvs -o+lv_layout,stripes -a
  LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Layout    #Str
  [lvol0_pmspare] ssd ewi-a----- 236.00m                                                     linear       1
  thin            ssd twi-aotz-- 930.59g             92.40  98.76                            thin,pool    1
  [thin_tdata]    ssd Twi-ao---- 930.59g                                                     striped      2
  [thin_tmeta]    ssd ewi-ao---- 236.00m                                                     linear       1
(plus some other LVs that use the thin pool, which I've omitted)

I initiated the mirror with:
# lvconvert -m 1 ssd/thin_tdata
  Replaced LV type raid1 with possible type raid5_n.
  Repeat this command to convert to raid1 after an interim conversion has finished.
Are you sure you want to convert striped LV ssd/thin_tdata to raid5_n type? [y/n]: y
  Logical volume ssd/thin_tdata successfully converted.
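
For what it's worth, the interim sync has finished (Cpy%Sync shows 100.00 in 
the layout further down); progress can be checked with something like:

# lvs -a -o name,copy_percent ssd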

And this is where I'm stuck. If I follow the instructions in that output and 
repeat the command, I get a nasty warning:
# lvconvert -m 1 ssd/thin_tdata
  Using default stripesize 64.00 KiB.
  Converting raid5_n LV ssd/thin_tdata to 2 stripes first.
  WARNING: Removing stripes from active and open logical volume ssd/thin_tdata will shrink it from 930.59 GiB to <465.30 GiB!
  THIS MAY DESTROY (PARTS OF) YOUR DATA!
  Interrupt the conversion and run "lvresize -y -l476464 ssd/thin_tdata" to keep the current size if not done already!
  If that leaves the logical volume larger than 476464 extents due to stripe rounding,
  you may want to grow the content afterwards (filesystem etc.)
  WARNING: to remove freed stripes after the conversion has finished, you have to run "lvconvert --stripes 1 ssd/thin_tdata"
  Can't remove stripes without --force option.
  Reshape request failed on LV ssd/thin_tdata.

If I instead go with other information I've found online and skip straight to 
`-m 0` here rather than repeating the `-m 1` command, I get:
# lvconvert -m 0 ssd/thin_tdata
  Using default stripesize 64.00 KiB.
  No change in RAID LV ssd/thin_tdata layout, freeing reshape space.
  LV ssd/thin_tdata does not have reshape space allocated.
  Reshape request failed on LV ssd/thin_tdata.

This is what the layout currently looks like:
# lvs -o+lv_layout,stripes,devices -a
  LV                    VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Layout     #Str Devices
  [lvol0_pmspare]       ssd ewi-a-----  236.00m                                                     linear        1 /dev/sda(0)
  thin                  ssd twi-aotz--  930.59g             92.40  98.76                            thin,pool     1 thin_tdata(0)
  [thin_tdata]          ssd rwi-aor---  930.59g                               100.00                raid,raid5    3 thin_tdata_rimage_0(0),thin_tdata_rimage_1(0),thin_tdata_rimage_2(0)
  [thin_tdata_rimage_0] ssd iwi-aor--- <465.30g                                                     linear        1 /dev/sda(118)
  [thin_tdata_rimage_1] ssd iwi-aor--- <465.30g                                                     linear        1 /dev/sdb(0)
  [thin_tdata_rimage_2] ssd iwi-aor--- <465.30g                                                     linear        1 /dev/sdc(1)
  [thin_tdata_rmeta_0]  ssd ewi-aor---    4.00m                                                     linear        1 /dev/sda(119234)
  [thin_tdata_rmeta_1]  ssd ewi-aor---    4.00m                                                     linear        1 /dev/sdb(119116)
  [thin_tdata_rmeta_2]  ssd ewi-aor---    4.00m                                                     linear        1 /dev/sdc(0)
  [thin_tmeta]          ssd ewi-ao----  236.00m                                                     linear        1 /dev/sda(59)

This is with lvm2-2.03.23-1 on Fedora 40.

Any idea how I can get past this point? I could just build a completely new 
thin pool and manually copy the data over, but there are around 40 logical 
volumes in this pool, many of which are snapshots, so it would be much easier 
to just convert it in place if possible.
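
For reference, that fallback would look roughly like this (only a sketch: the 
size and LV names are placeholders, and every snapshot would end up as a full, 
independent copy):

# lvcreate --type thin-pool -L 800G -n thinnew ssd /dev/sda
# lvcreate -V 100G --thinpool thinnew -n somelv_new ssd
# dd if=/dev/ssd/somelv of=/dev/ssd/somelv_new bs=4M conv=sparse status=progress
  ...repeated for each of the ~40 volumes...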

--
Patrick

