On 11/05/2020 11:58, Andy Smith via GLLUG wrote:
> ...
Yes

> I actually think it is possible and is a reasonable plan, though
> backups will still be advised. I didn't suggest this at first
> because initially we thought there were unequal-sized devices (4T
> and 8T).

Same here.

> I believe modern mdadm can reshape a RAID-1 into a RAID-0, then a
> RAID-0 into a RAID-10, and then add extra devices.
>
>      https://www.berthon.eu/2017/converting-raid1-to-raid10-online/

I have done it myself long ago... see below
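From memory, the reshape sequence would look roughly like this (a sketch only: /dev/md0 and the partition names are placeholders, the exact flag combinations vary between mdadm versions, and you absolutely want backups first):

```shell
# Hypothetical starting point: /dev/md0 is a two-device RAID-1.

# Takeover to RAID-0 (this may leave one member marked as spare -- see the
# uncertainty discussed below).
mdadm --grow /dev/md0 --level=0

# Takeover to RAID-10, grow to four members, then add the new disks.
mdadm --grow /dev/md0 --level=10 --raid-devices=4
mdadm /dev/md0 --add /dev/sdc1 /dev/sdd1

# Watch the reshape/rebuild progress.
cat /proc/mdstat
```

Check mdadm(8) for the takeover rules your version actually supports before trying this on real data.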

> There will be a scary time when it is RAID-0 and therefore no
> redundancy.

Yes depending on how it's done.

> My main uncertainty about this is that I'm fairly sure converting
> from RAID-1 to RAID-0 leaves you with a RAID-0 of one device and one
> marked as spare; I'm then not sure whether it supports going to
> RAID-10 from that. It should be easy to prove with a test using small
> loopback files as block devices, though.
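That loopback experiment could be sketched like this (needs root; the file sizes, md device number and loop devices are arbitrary, and the interesting part is what /proc/mdstat reports after each takeover):

```shell
# Two small sparse files as stand-in disks.
truncate -s 200M disk0.img disk1.img
LOOP0=$(losetup --find --show disk0.img)
LOOP1=$(losetup --find --show disk1.img)

# Build a two-device RAID-1 to experiment on.
mdadm --create /dev/md9 --run --level=1 --raid-devices=2 "$LOOP0" "$LOOP1"

# Attempt the RAID-1 -> RAID-0 takeover; does one device end up as a spare?
mdadm --grow /dev/md9 --level=0
cat /proc/mdstat

# Then attempt RAID-0 -> RAID-10 and see whether it is accepted.
mdadm --grow /dev/md9 --level=10
cat /proc/mdstat

# Clean up.
mdadm --stop /dev/md9
losetup -d "$LOOP0" "$LOOP1"
rm disk0.img disk1.img
```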

> Another way it can be done, now that we know all the devices are the
> same size, is to:
>
> 1. Create a new RAID-10 array that is missing two members.
>
> 2. Bring it online and put a filesystem (or LVM) on it.
>
> 3. Copy data over to it.
>
> 4. Boot from it, making sure that everything works.
>
> 5. Nuke the old array and add its members to the new RAID-10, thus
> making it fully redundant again.

And I seem to recall that's how I did it.
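For step 1, the trick is the literal keyword missing in place of the absent members. One detail worth noting: with the default near-2 layout, adjacent devices are mirror pairs, so the two real disks must be interleaved with the missing slots (device names hypothetical):

```shell
# Degraded four-device RAID-10: each mirror pair gets one real member.
# (Listing sdc1 and sdd1 adjacent would put both real disks in the same
# mirror pair, leaving the other pair empty and the array unstartable.)
mdadm --create /dev/md1 --level=10 --raid-devices=4 \
      /dev/sdc1 missing /dev/sdd1 missing

# Later, at step 5: after copying the data over and nuking the old
# array, add its members to complete both mirrors.
mdadm /dev/md1 --add /dev/sda1 /dev/sdb1
```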

> Again, for the time period where the new RAID-10 has two members
> missing it has no redundancy at all.

Indeed. But the new disks then form the non-redundant RAID-10, which may be safer.
> ...


> I think it can be done only with mdadm, though.

I believe so.

On further consideration, if it were my machine I'd either follow Andy's plan or do this:

1. Buy a Seagate 8/10TB USB backup device. They are generally cheaper than a raw disk (or were, pre Covid-19; I am certain of this as I had just bought two to back up client data).

2. Replicate the data to the backup disk

3. Verify backup

4. Destroy existing raid and wipe disks (if paranoid, keep just one until later)

5. Test existing disks (and if cautious, the new ones)

6. Build new 4-unit RAID10 (if paranoid, with one existing disk missing as per above)

7. Copy data back

8. If paranoid: once happy, wipe and test the other old disk, then add it.
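For steps 2 and 3, one way to replicate and then verify is rsync in archive mode followed by a checksum-only dry run (paths are hypothetical; -H/-A/-X only matter if you have hardlinks, ACLs or xattrs to preserve):

```shell
SRC=/srv/data        # hypothetical mount point of the current array
DST=/mnt/usb-backup  # hypothetical mount point of the USB backup disk

# Step 2: replicate, preserving permissions, ownership, times, etc.
rsync -aHAX --delete "$SRC/" "$DST/"

# Step 3: verify with a checksum-based dry run that itemizes any file
# whose content differs. No output means the copy matches.
rsync -aHAXn --checksum --itemize-changes "$SRC/" "$DST/"
```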

Really, I would not be happy having half my data array on 5-year-old disks even in RAID 10 - it can survive a two-disk loss, but you need to feel lucky. Disks DO fail together. I do have systems (well, one backup server) with older disks (2TB, 7+ years old!), but only as a small minority in RAID 6 or 60 arrays. But each to their own... and I did lose two at once in that system.

I'm very fond of LVM and have used it on large filesystems without an underlying partition, as a workaround in the days when Red Hat did not support >2TB partitions; that workaround is no longer needed. It was 100% solid over the 5-year life of the system, though the approach risked confusing people.

But the only times I have lost data (twice) on mdadm-backed RAID were with LVM over large RAID 5 combined with multiple disk failures, making recovery impossible, so I tend to avoid LVM on RAID (the data was restored from backup). But then I no longer use RAID 5 on >2TB disks. Or RAID 6, indeed. It's all RAID 10 for me now, and maybe ZFS in the future if it ever gets more performant on Linux...

MeJ
