Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-12 Thread Dr. Axel Stammler via GLLUG

Hi,

And thanks again, I have followed your advice, creating an 8TB mirrored logical 
volume on my two new drives, i.e. using 1 mirror = 2 copies.

On Mon 2020-05-11 13.15.55, Greater London Linux User Group wrote:

Hi,

What size are the partitions on the old 8TB disks?
Is it a single partition for all 8TB ?
If you have a data partition separate from the OS partition:
you could "rsync -avpP"  the data/image/picture/whatever files over to
the new disks on top of LVM.
You could handle the OS partition offline.
You can then do a final "catch up" rsync of the data in offline mode.

Once everything is copied, the old disks are unmounted, and everything is
booting and running nicely on the new disks, you can then wipe the old disks
and redo them with LVM.

I would allow at least a day or two for the first rsync of 8TB.


What do you think of the following procedure? Once I have copied and tested 
everything, I wipe the old disks and add them to the 8TB LV as copies #2 and 
#3. After synchronisation, I remove one of the new disks and one of the old 
ones, wipe them again and then extend or resize the LV to contain two pairs of 
mirrored 8TB data = 16TB.

As each pair would consist of an old disk combined with a new one, would the 
whole system not be less likely to be destroyed by two disks failing 
simultaneously?

What are the exact command combinations to accomplish this?

(vgextend, vgcreate, lvextend, lvresize)
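Not an answer from the thread, but a hedged sketch of how those commands might combine for this plan; the VG/LV names (vg_blobs, lv_blobs) and device names are assumptions, and note that mirror counts are changed with lvconvert rather than lvcreate or lvextend:

```sh
# Assumed names: old disks sda/sdb, existing VG vg_blobs, mirrored LV lv_blobs.
pvcreate /dev/sda /dev/sdb                  # after the old md array is destroyed
vgextend vg_blobs /dev/sda /dev/sdb         # grow the existing VG; no second vgcreate
lvconvert -m3 vg_blobs/lv_blobs             # add copies #2 and #3, wait for sync
lvconvert -m1 vg_blobs/lv_blobs /dev/sde /dev/sdb  # drop the legs on the named PVs
lvextend -l +100%FREE vg_blobs/lv_blobs     # lvresize also works; lvextend only grows
resize2fs /dev/vg_blobs/lv_blobs            # finally grow the ext4 filesystem
```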


-- 
GLLUG mailing list
GLLUG@mailman.lug.org.uk
https://mailman.lug.org.uk/mailman/listinfo/gllug

Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-12 Thread Dr. Axel Stammler via GLLUG

Hi,

On Mon 2020-05-11 22.31.44, James Courtier-Dutton wrote:


Which filesystem are you using on the old 8TB disk?
For example, if it is btrfs, you don't have to copy anything about.
btrfs does its own raid 0.
You can just add more disks as you need them and btrfs just uses them.


Interesting. I did not know about this and just went with the flow during 
installation and used ext4.



Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-11 Thread James Courtier-Dutton via GLLUG
Hi,

I forgot another obvious question.
Which filesystem are you using on the old 8TB disk?
For example, if it is btrfs, you don't have to copy anything about.
btrfs does its own raid 0.
You can just add more disks as you need them and btrfs just uses them.
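For illustration, the btrfs route might look like this (a sketch; the mount point and device names are made up):

```sh
# Assuming a btrfs filesystem mounted at /mnt/data and two new disks:
btrfs device add /dev/sdc /dev/sde /mnt/data
btrfs balance start /mnt/data    # re-stripe existing data across all devices
```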

Kind Regards

James


Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-11 Thread John Winters via GLLUG

On 11/05/2020 20:14, Dr. Axel Stammler via GLLUG wrote:
[snip]
I feel rather nervous as I am about to do all this, and I do have some 
questions (see below).



[snip detailed steps]

I wouldn't do it that way.  Possibly someone else can tell us a reason 
why it's worth moving the RAID functionality into LVM, but you don't 
need to, and given that your existing RAID is done with mdadm, I'd be 
inclined to stick with it.  It gives you less to have to undo.


My steps would be:

1. Create single large partition on each of the new HDDs.

2. Combine these into a new RAID1 device - mdN.

3. Give that device to LVM as its first Physical Volume, creating a 
Volume Group.


4. Create a single large Logical Volume, and then format that with 
whatever filing system you want to use - ext4 I think you said.


5. Copy files over with rsync as detailed previously.

6. Once you're sure you have all the files over, unmount the old filing 
system.


7. Delete the old partitions on your old RAID1 device.

8. Add that RAID1 device to your Volume Group.

9. Enlarge your Logical Volume.
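The nine steps above might be sketched as follows; every device, VG, LV and mount-point name here is an assumption, not something from the thread:

```sh
parted -s /dev/sdc mklabel gpt mkpart primary 0% 100%                   # 1
parted -s /dev/sde mklabel gpt mkpart primary 0% 100%
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sde1  # 2
pvcreate /dev/md0                                                       # 3
vgcreate vg_data /dev/md0
lvcreate -n lv_data -l 100%FREE vg_data                                 # 4
mkfs.ext4 /dev/vg_data/lv_data
rsync -aHvpP /mnt/old/ /mnt/new/      # 5: repeat until the delta is small
umount /mnt/old                       # 6: after a final catch-up rsync
wipefs -a /dev/md127                  # 7: clear old signatures on the old array
pvcreate /dev/md127                   # 8
vgextend vg_data /dev/md127
lvextend -l +100%FREE vg_data/lv_data # 9
resize2fs /dev/vg_data/lv_data        # grow ext4 to fill the enlarged LV
```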

HTH
John

--
Xronos Scheduler - https://xronos.uk/
All your school's schedule information in one place.
Timetable, activities, homework, public events - the lot
Live demo at https://schedulerdemo.xronos.uk/


Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-11 Thread Dr. Axel Stammler via GLLUG

Hi,

And thank you for all contributions.

On Mon 2020-05-11 19.40.15, Greater London Linux User Group wrote:


On Mon, May 11, 2020 at 01:15:55PM +0100, James Courtier-Dutton via GLLUG wrote:


If you have a data partition separate from the OS partition:
you could "rsync -avpP"  the data/image/picture/whatever files over to
the new disks on top of LVM.
You could handle the OS partition offline.
You can then do a final "catch up" rsync of the data in offline mode.



Can I add a suggestion to use '-H' ('--hard-links') to the list of
rsync options to make sure that files which are hard links are not
copied as separate non-linked files?

This is important for the OS partitions, where several deb packages
install the same file under different names (eg bzip2).

It does have the downside of being slightly slower as it has to check
for hard links, so might be skipped for those filesystems where you
know there are no hard links.


I feel rather nervous as I am about to do all this, and I do have some 
questions (see below).

1. Create Physical Volumes on the two new 8 TB hard disk drives:

  pvcreate /dev/sdc
  pvcreate /dev/sde

2. Create a Volume Group containing these Physical Volumes

  vgcreate vg_blobs /dev/sdc /dev/sde

3. Create a mirrored Logical Volume using this Volume Group

  lvcreate -n lv_blobs -m1 vg_blobs
  
  Questions:

  - How do I make the new volume use all available space? Will mirroring choose 
the physical volumes automatically?
  - Where does the log go? = Do I need a partition on a separate disk for it? 
How large should it be and how do I incorporate it?

4. Create a new file system on the mirrored LV

  mkfs.ext4 /dev/vg_blobs/lv_blobs

5. Copy the data from the existing RAID-1 system (on sda and sdb) to the new 
mirrored LV

  rsync -at   # or -avpP or -avpPH?

6. How do I then remove sda + sdb from RAID-1 md127p1?

7. How do I eliminate RAID-1 md127p1?

8. Create Physical Volumes on the two old 8 TB hard disk drives:

  pvcreate /dev/sda
  pvcreate /dev/sdb

9. Do I now extend the Volume Group created in step 2 or do I create a new 
Volume Group?

  vgextend vg_blobs
  
  Or:
  
  vgcreate vg_blobs2


10. ?

  lvextend

  Or:
  
  lvresize


11. Grow the filesystem to 16 TB.

Are there any more ideas concerning these commands and their options?
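One hedged answer to the step-3 questions, assuming a reasonably recent lvm2: the default mirror implementation is the md-backed "raid1" segment type, which keeps its metadata on the mirror legs themselves, so no separate log disk is needed; and -l 100%FREE asks for all free extents:

```sh
# Sketch, not a tested recipe for this exact system:
lvcreate --type raid1 -m1 -l 100%FREE -n lv_blobs vg_blobs
# If LVM rejects 100%FREE because the mirror needs space on both PVs,
# fall back to an explicit size:
#   lvcreate --type raid1 -m1 -L 7.2T -n lv_blobs vg_blobs
```

The older "mirror" segment type is the one that wants a --mirrorlog (core, disk, or mirrored); with --mirrorlog mirrored the log also lives on the mirror legs, again avoiding a third disk.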



Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-11 Thread Dr. Axel Stammler via GLLUG

Hi,

On Mon 2020-05-11 10.58.13, Greater London Linux User Group wrote:


I believe modern mdadm can reshape a RAID-1 into a RAID-0 then a
RAID-0 into a RAID-10 and then add extra devices.

   https://www.berthon.eu/2017/converting-raid1-to-raid10-online/

There will be a scary time when it is RAID-0 and therefore no
redundancy.


Well, to make it less scary this idea includes the --backup-file option, but I
do not know what this backup file will contain. If it is all the data, then I
definitely do not have room for it, and it would take about as much time as
using rsync to move to a new Logical Volume. If it is just RAID configuration
data, then it is much too scary for me.



Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-11 Thread John Edwards via GLLUG
Hi


On Mon, May 11, 2020 at 01:15:55PM +0100, James Courtier-Dutton via GLLUG wrote:
 
> If you have a data partition separate from the OS partition:
> you could "rsync -avpP"  the data/image/picture/whatever files over to
> the new disks on top of LVM.
> You could handle the OS partition offline.
> You can then do a final "catch up" rsync of the data in offline mode.


Can I add a suggestion to use '-H' ('--hard-links') to the list of
rsync options to make sure that files which are hard links are not
copied as separate non-linked files?

This is important for the OS partitions, where several deb packages
install the same file under different names (eg bzip2).

It does have the downside of being slightly slower as it has to check
for hard links, so might be skipped for those filesystems where you
know there are no hard links.
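Putting this suggestion together with the earlier flags, the invocation might look like this (the paths are made up for illustration):

```sh
# -a already implies -rlptgoD (so -p is redundant but harmless);
# -H preserves hard links; -P shows progress and keeps partial transfers.
rsync -aHvpP /mnt/oldraid/ /mnt/newlv/
```

The trailing slashes copy the contents of the directory rather than the directory itself.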


-- 
#-#
|John Edwards   Email: j...@cornerstonelinux.co.uk|
#-#


Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-11 Thread Dr. Axel Stammler via GLLUG

Hi,

On Mon 2020-05-11 10.02.00, Greater London Linux User Group wrote:

On 10/05/2020 21:35, Andy Smith via GLLUG wrote:

Hello,

On Sun, May 10, 2020 at 10:03:32PM +0200, Dr. Axel Stammler via GLLUG wrote:

On Sun 2020-05-10 08.53.16, James Courtier-Dutton wrote:

So, I think moving to an "LVM mirror" solution is your best bet for
future extensibility.


I haven't reviewed all the recent replies, but is there any reason why 
you can't add the two new disks of the same size and migrate from 
RAID 1 to RAID 10, e.g:


https://blog.voina.org/convert-an-existing-2-disk-raid-1-to-a-4-disk-raid-10/

(though that has LVM on top, shouldn't make a difference in these 
circumstances, just a quick search, there's many other references, 
YMMV)


Unfortunately, there is no LVM on top of the existing RAID-1 system.



Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-11 Thread Dr. Axel Stammler via GLLUG

Hi, Andy,

On Sun 2020-05-10 20.35.20, Greater London Linux User Group wrote:


On Sun, May 10, 2020 at 10:03:32PM +0200, Dr. Axel Stammler via GLLUG wrote:

On Sun 2020-05-10 08.53.16, James Courtier-Dutton wrote:
>So, I think moving to an "LVM mirror" solution is your best bet for
>future extensibility.

After reviewing all options, this indeed seems to be the best one in my case.


But it still doesn't let you move filesystems that aren't on LVM
in to LVM. I don't understand why you keep thinking that LVM lets
you do this. My very first reply to you pointed out this would be an
issue for you!


Thank you, and I am sorry I did not make it clear that I had dropped that idea. 
I'll do what was suggested here: create a mirrored LV on the new drives, use
rsync to copy my data from my old RAID-1 system to the LV, create new physical 
volumes on the old drives (destroying the RAID), and extend the volume group 
and the logical volume.



Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-11 Thread Dr. Axel Stammler via GLLUG

Hi,

On Mon 2020-05-11 13.15.55, Greater London Linux User Group wrote:


What size are the partitions on the old 8TB disks?
Is it a single partition for all 8TB ?


Yes, it is. The system is on a separate hard disk drive.


If you have a data partition separate from the OS partition:
you could "rsync -avpP"  the data/image/picture/whatever files over to
the new disks on top of LVM.
You could handle the OS partition offline.
You can then do a final "catch up" rsync of the data in offline mode.

Once everything is copied, the old disks are unmounted, and everything is
booting and running nicely on the new disks, you can then wipe the old disks
and redo them with LVM.

I would allow at least a day or two for the first rsync of 8TB.



Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-11 Thread James Roberts via GLLUG

On 11/05/2020 11:58, Andy Smith via GLLUG wrote:
...

Yes


I actually think it is possible and is a reasonable plan, though
backups will still be advised. I didn't suggest this at first
because initially we thought there were unequal-sized devices (4T
and 8T).


Same here.


I believe modern mdadm can reshape a RAID-1 into a RAID-0 then a
RAID-0 into a RAID-10 and then add extra devices.

 https://www.berthon.eu/2017/converting-raid1-to-raid10-online/


I have done it myself long ago... see below


There will be a scary time when it is RAID-0 and therefore no
redundancy.


Yes depending on how it's done.


My main uncertainty about this is that I'm fairly sure converting
from RAID-1 to RAID-0 leaves you with a RAID-0 of one device and one
marked as spare, then I'm not sure if it does support going to
RAID-10 from that. Should be easy to prove with a test on small
loopback files as block devices though.

Another way it can be done now that we know all the devices are the
same size is to:

1. create a new RAID-10 array that is missing two members.

2. Bring it online, put a filesystem (or LVM) on it,

3. copy data over to it,

4. boot from it making sure that everything works,

5. nuke the old array, add its members to the new RAID-10 thus
making it fully redundant again.


And I seem to recall that's how I did it.


again, for the time period where the second RAID-10 has two members
missing it has no redundancy at all.


Indeed. But the new disks are then the non-redundant RAID-10 which may 
be safer.

...



I think it can be done only with mdadm though.


I believe so.

On further consideration if it was my machine I'd either follow Andy's 
plan or do this:


1. Buy a Seagate 8/10TB USB backup device. They are generally cheaper 
than a raw disk (or were, pre-Covid-19; I am certain of this, as I had just 
bought two to back up client data).


2. Replicate the data to the backup disk

3. Verify backup

4. Destroy existing raid and wipe disks (if paranoid, keep just one 
until later)


5. Test existing disks (and if cautious, the new ones)

6. Build new 4-unit RAID10 (if paranoid, with one existing disk missing 
as per above)


7. Copy data back

8. If paranoid, once happy: wipe, test, and add the other old disk.

Really I would not be happy having half my data array on 5-year-old 
disks even in RAID 10 - it can survive a two-disk loss, but you need to feel 
lucky: disks DO fail together. I do have systems (well, one backup 
server) with older (2TB, 7+ years old!) disks, but only as a small 
minority in RAID 6 or 60 arrays. But each to their own... and I did 
lose two at once in that system.


I'm very fond of LVM and have used it on large filesystems without an 
underlying partition as a workaround in the days when Red Hat did not support 
>2TB partitions; that workaround is no longer needed. It was 100% solid over 
the five-year life of the system, though this approach risked confusing people.


But the only times I have lost data (twice) on mdadm-backed RAID were with 
LVM over large RAID5 and multiple disk failures making recovery 
impossible, so I tend to avoid LVM on RAID (the data was restored from 
backup). But then I don't use RAID 5 any more on >2TB disks. Or RAID 6, 
indeed. It's all RAID 10 now for me, and maybe ZFS in the future if it ever 
gets more performant on Linux...


MeJ

--
In accordance with UK Government directives due to the Covid-19 
situation, our office is temporarily closed. All staff are working from 
home. These arrangements will continue in accordance with UK Government 
advice. Please do not send any correspondence or cheques to our office 
as these cannot currently be dealt with, seen, or paid in.


All communication will be via phone or email. It is important, both for 
most rapid response and in order that all staff can respond, that you 
raise all issues via our Support Desk at:


supp...@stabilys.com

If you need help with working from home please contact us using the 
above methods.


Stabilys Ltd    www.stabilys.com
244 Kilburn Lane
LONDON
W10 4BA

0845 838 5370


Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-11 Thread James Courtier-Dutton via GLLUG
Hi,

What size are the partitions on the old 8TB disks?
Is it a single partition for all 8TB ?
If you have a data partition separate from the OS partition:
you could "rsync -avpP"  the data/image/picture/whatever files over to
the new disks on top of LVM.
You could handle the OS partition offline.
You can then do a final "catch up" rsync of the data in offline mode.

Once everything is copied, the old disks are unmounted, and everything is
booting and running nicely on the new disks, you can then wipe the old disks
and redo them with LVM.

I would allow at least a day or two for the first rsync of 8TB.

Kind Regards

James


Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-11 Thread Andy Smith via GLLUG
Hello,

On Mon, May 11, 2020 at 10:22:46AM +0100, John Winters via GLLUG wrote:
> On 11/05/2020 10:02, James Roberts via GLLUG wrote:
> >I haven't reviewed all the recent replies, but is there any reason why you
> >can't add the two new disks of the same size and migrate from RAID 1
> >to RAID 10
> 
> Yes

I actually think it is possible and is a reasonable plan, though
backups will still be advised. I didn't suggest this at first
because initially we thought there were unequal-sized devices (4T
and 8T).

I believe modern mdadm can reshape a RAID-1 into a RAID-0 then a
RAID-0 into a RAID-10 and then add extra devices.

https://www.berthon.eu/2017/converting-raid1-to-raid10-online/

There will be a scary time when it is RAID-0 and therefore no
redundancy.

My main uncertainty about this is that I'm fairly sure converting
from RAID-1 to RAID-0 leaves you with a RAID-0 of one device and one
marked as spare, then I'm not sure if it does support going to
RAID-10 from that. Should be easy to prove with a test on small
loopback files as block devices though.
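Such a loopback experiment might be sketched like this (file and device names are arbitrary; the final two conversions are exactly the step whose support is in question):

```sh
for i in 0 1 2 3; do truncate -s 100M disk$i.img; done
L0=$(losetup -f --show disk0.img); L1=$(losetup -f --show disk1.img)
L2=$(losetup -f --show disk2.img); L3=$(losetup -f --show disk3.img)
mdadm --create /dev/md99 --level=1 --raid-devices=2 $L0 $L1
mdadm --grow /dev/md99 --level=0      # RAID-1 -> RAID-0
mdadm --grow /dev/md99 --level=10     # RAID-0 -> degraded RAID-10?
mdadm /dev/md99 --add $L2 $L3         # then try to restore redundancy
```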

Another way it can be done now that we know all the devices are the
same size is to:

1. create a new RAID-10 array that is missing two members.

2. Bring it online, put a filesystem (or LVM) on it,

3. copy data over to it,

4. boot from it making sure that everything works,

5. nuke the old array, add its members to the new RAID-10 thus
making it fully redundant again.

again, for the time period where the second RAID-10 has two members
missing it has no redundancy at all.

If you are very very sure about what you are doing you can do all
that online and only boot into it later but personally I would want
to be satisfied that it booted properly using only the new RAID-10
alone.

This approach is detailed here:

https://superuser.com/a/726063/100242
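The degraded-array route in steps 1 and 5 might start like this (a sketch; the partition names are assumptions, and with the default near layout the "missing" slots must be placed so that no mirror pair is entirely absent):

```sh
mdadm --create /dev/md1 --level=10 --raid-devices=4 \
      /dev/sdc1 missing /dev/sde1 missing   # step 1: two members missing
# ...filesystem/LVM, copy, prove it boots (steps 2-4)...
mdadm --stop /dev/md127                     # step 5: retire the old array
mdadm /dev/md1 --add /dev/sda1 /dev/sdb1    # resync makes it fully redundant
```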

> >https://blog.voina.org/convert-an-existing-2-disk-raid-1-to-a-4-disk-raid-10/
> >
> >(though that has LVM on top, shouldn't make a difference in these
> >circumstances,
> 
> You've put your finger on why it won't work in this case.  The presence of
> an existing LVM setup on top of the existing RAID1 is what is used to
> migrate the data over.

Yes, that example uses LVM and is therefore not applicable here.

I think it can be done only with mdadm though.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting


Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-11 Thread James Roberts via GLLUG

On 10/05/2020 21:35, Andy Smith via GLLUG wrote:

Hello,

On Sun, May 10, 2020 at 10:03:32PM +0200, Dr. Axel Stammler via GLLUG wrote:

On Sun 2020-05-10 08.53.16, James Courtier-Dutton wrote:

So, I think moving to an "LVM mirror" solution is your best bet for
future extensibility.


I haven't reviewed all the recent replies, but is there any reason why 
you can't add the two new disks of the same size and migrate from 
RAID 1 to RAID 10, e.g:


https://blog.voina.org/convert-an-existing-2-disk-raid-1-to-a-4-disk-raid-10/

(though that has LVM on top, shouldn't make a difference in these 
circumstances, just a quick search, there's many other references, YMMV)


RAID 10 slightly enhances failure resistance, increases read speeds and 
keeps it simple. Although with two old disks, whatever you do, they will 
likely fail first.


The only thing I'd emphasise is whatever you do, if you care about the 
data you MUST have a backup first! (though in crisis hero mode I have 
been known to ignore my own advice, that's only on my OWN data...)


MeJ


Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-10 Thread Andy Smith via GLLUG
Hello,

On Sun, May 10, 2020 at 10:03:32PM +0200, Dr. Axel Stammler via GLLUG wrote:
> On Sun 2020-05-10 08.53.16, James Courtier-Dutton wrote:
> >So, I think moving to an "LVM mirror" solution is your best bet for
> >future extensibility.
> 
> After reviewing all options, this indeed seems to be the best one in my case.

But it still doesn't let you move filesystems that aren't on LVM
in to LVM. I don't understand why you keep thinking that LVM lets
you do this. My very first reply to you pointed out this would be an
issue for you!

Personally I do not like to do redundancy at the LVM level. The main
reason I use RAID is to avoid the system becoming unavailable (not
booting fully) when a storage device dies. It used to be the case
that an LVM Volume Group would not activate if any PVs were missing,
so if a device failed and you rebooted the system wouldn't come up
without manual intervention. They did fix that after a few years
(initramfs is now willing to activate degraded VGs):

https://bugzilla.redhat.com/show_bug.cgi?id=1337220

So I guess my main problem with it is no longer relevant, but still,
I just prefer redundancy being provided by mdadm.

Cheers,
Andy


Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-10 Thread Dr. Axel Stammler via GLLUG

On Sun 2020-05-10 08.53.16, James Courtier-Dutton wrote:


So, there is a solution that uses tiled RAID. LVM has a "mirror" option.
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/mirrored_volumes
If you used that, you would not need a RAID layer at all.
You would create all the disks as a LVM volume group, and then create
LVM partitions using the LVM mirror option.
An LVM mirror divides the device being copied into regions that are
typically 512KB in size, so a big improvement over the 500GB chunks
suggestion above.
This would also give flexibility, you could choose some of your data
to be "mirror" and some not.
LVM "mirror" also lets you migrate data while it is still mounted.
You have the original LVM volume, mirror it onto a new disk, remove
the original copy.

So, I think moving to an "LVM mirror" solution is your best bet for
future extensibility.


After reviewing all options, this indeed seems to be the best one in my case. 
As I am doing for the first time — are these the correct steps?

* additional sources I looked at:
- https://wiki.debian.org/LVM
- 
https://www.debian.org/doc/manuals/debian-handbook/advanced-administration.en.html#sect.raid-and-lvm

[new hard disk drives, still blank: sdc, sde; old hard disk drives as RAID-1 
system md127p1]

pvcreate /dev/sdc
pvcreate /dev/sde

vgcreate vg_blobs /dev/sdc /dev/sde

lvcreate -n lv_blobs -m1 vg_blobs # How do I make the new volume use all 
available space? Will mirroring choose the physical volumes automatically? 
Where does the log go? = Do I need a partition on a separate disk for it? How 
large should it be and how do I incorporate it?

mkfs.ext4 /dev/vg_blobs/lv_blobs

rsync … # old RAID-1 system ↔ new mirrored volume group

# ? remove sda + sdb from RAID-1 md127p1
# ? eliminate RAID-1 md127p1

pvcreate /dev/sda
pvcreate /dev/sdb

vgextend vg_blobs *** # or vgcreate vg_blobs2 ?

lvextend / lvresize # ?

# grow filesystem



Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-10 Thread John Edwards via GLLUG
Hi

On Sun, May 10, 2020 at 03:12:54PM +0200, Dr. Axel Stammler via GLLUG wrote:
 
> Hmm. How long would it take to copy (nearly) 8 TB? 

Depends a lot on the size and type of the data. If they are large video files
then those will copy quickly, but if the files are all small, or are hard
links, then there is more metadata to update in the filesystem, which will
take longer.

A typical modern spinning disk SATA drive can do between about 100 and
160 MBytes/s for large files.

So a rough back-of-envelope calculation would be 8,000,000 MB / 100 MB/s /
3600 ≈ 22 hours, so I would expect this to be a job that would last a
couple of days.
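That back-of-envelope arithmetic (8 TB at roughly 100-160 MB/s) can be checked with a few lines of Python; the numbers are the ones from this message, decimal units assumed:

```python
def transfer_hours(size_tb: float, rate_mb_s: float) -> float:
    """Rough sequential-copy time, ignoring filesystem and metadata overhead."""
    size_mb = size_tb * 1_000_000      # 1 TB = 1,000,000 MB in decimal units
    return size_mb / rate_mb_s / 3600  # seconds -> hours

print(round(transfer_hours(8, 100), 1))   # ~22.2 hours at the slow end
print(round(transfer_hours(8, 160), 1))   # ~13.9 hours on faster drives
```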


> Obviously I would have to prevent any write access for that period of time.

In general a good idea, but it would depend on the nature of those
writes and how important the additional data is to you.

If they are altering existing files then probably yes, but if they
were just adding files then you could do several runs of 'rsync' to
complete the copy afterwards.



Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-10 Thread John Winters via GLLUG

On 10/05/2020 14:12, Dr. Axel Stammler via GLLUG wrote:
[snip]
One thing which might work for you is to make your two new drives into 
a RAID1 set and then use resulting device as your first Physical 
Volume. Create a large logical volume within it, copy all your 
existing files over (boring), delete the partition on the old RAID1 
set, create a second PV, add it to your Volume Group and then expand 
the LV.


It does mean moving all your data from old drives to new, but at least 
you'd then end up with one much larger filing system.


Hmm. How long would it take to copy (nearly) 8 TB? Obviously I would 
have to prevent any write access for that period of time.


Hard to say how long it would take - it depends very much on your 
system.  However, depending on what you use your disk storage for you 
might not need to disable write access for that long.


(It's an annoying bootstrap problem - if only you were already using LVM 
there are all sorts of clever things which it can do to make it look 
like you have a copy of your data whilst the data are still copying. 
Not an option in this case though.)


If the rate of change to your data is not high, you could potentially 
copy it all over using rsync (rsync -at) whilst leaving your source 
disks write enabled and having fresh data arrive whilst the copy 
happens.  As soon as the initial copy completes (probably several hours) 
issue the same command again and it will complete in a very few minutes. 
 Do it again and it will be faster still.


Once the time gets down to a period acceptable for a "no writing" 
interval, disable writes, issue the command one last time.  Unmount the 
source, mount the destination in its place and Robert's your 
progenitor's male sibling.


HTH,
John



Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-10 Thread Dr. Axel Stammler via GLLUG

Thank you.

On Sun 2020-05-10 07.39.31, Greater London Linux User Group wrote:

On 09/05/2020 20:32, Dr. Axel Stammler via GLLUG wrote:

On Sat 2020-05-09 11.24.15, Greater London Linux User Group wrote:


How do you intend to combine them? You won't be able to put your
existing array into the LVM without destroying its contents.


https://www.debian.org/doc/manuals/debian-handbook/advanced-administration.en.html#sect.raid-and-lvm


says you can use lvresize and resize2fs to do just that. I am still 
hoping this will work.


I think you may be mis-reading the manual.  What you can do is 
increase the size of an existing Logical Volume, but you can't do an 
in-situ conversion of an existing non-LV filing system.


You are quite right, I was mis-reading the manual. So I cannot incorporate an 
existing filesystem (no matter whether regular or on a RAID-1 system) into a 
new Logical Volume system; I see.

It really sounded too good to be true: "RAID-1+0: This isn't, strictly speaking, 
a RAID level, but a stacking of two RAID groupings. Starting from 2×N disks, 
one first sets them up by pairs into N RAID-1 volumes; these N volumes are then 
aggregated into one, either by 'linear RAID' or (increasingly) by LVM. This 
last case goes farther than pure RAID, but there's no problem with that."

One thing which might work for you is to make your two new drives into 
a RAID1 set and then use resulting device as your first Physical 
Volume. Create a large logical volume within it, copy all your 
existing files over (boring), delete the partition on the old RAID1 
set, create a second PV, add it to your Volume Group and then expand 
the LV.


It does mean moving all your data from old drives to new, but at least 
you'd then end up with one much larger filing system.


Hmm. How long would it take to copy (nearly) 8 TB? Obviously I would have to 
prevent any write access for that period of time.



Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-10 Thread James Courtier-Dutton via GLLUG
On Sat, 9 May 2020 at 12:03, Dr. Axel Stammler wrote:
>
> Hi,
>
> Thank you for your detailed look at possible setups. I remembered my old 
> setup incorrectly, though, so that I am not sure everything is applicable. My 
> original (2016) setup included two hard disk drives of not 4 TB but 8 TB 
> capacity in a RAID-1 that has reached 92 per cent capacity.
>

Ah, so the original disk drives are 8TB and not 4TB.
So, going from:
A: 8TB (current)
B: 8TB (current)
C: 8TB (new)
D: 8TB (new)

Some things to consider.
1) 8TB does not equal 8TB.  Although two drives might both say they are
8TB, the exact number of sectors can differ between disk
models.

I would therefore chop the disk into 500GB partitions so that you can
move them around at a later point if you wish.
You RAID the 500GB partitions.
You then put LVM (Logical Volume Management) on top, using LVM to join
all the RAID 500GB volumes into a single LVM Volume group.
You can then use LVM to expand/reduce filesystems as you need to.

You did not say whether your existing disks used LVM or not. If not,
build the new disks with LVM on top of RAID.  (LVM on top of RAID is
better than RAID on top of LVM)

Extensibility:
It would probably be a good time to think about extensibility in
future, i.e. what happens when you add more disks.
The reasoning behind the 500GB RAID chunks is that you could migrate
in 500GB chunks if you needed to change the RAID method later, or
suchlike.
You can make the chunks even smaller if you wish; you just end up with
more of them. It gives you a sort of tiled RAID.

Solution:
So, there is a solution that uses tiled RAID. LVM has a "mirror" option.
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/mirrored_volumes
If you used that, you would not need a RAID layer at all.
You would create all the disks as an LVM volume group, and then create
logical volumes using the LVM mirror option.
An LVM mirror divides the device being copied into regions that are
typically 512KB in size, so a big improvement over the 500GB chunks
suggestion above.
This would also give flexibility, you could choose some of your data
to be "mirror" and some not.
LVM "mirror" also lets you migrate data while it is still mounted.
You have the original LVM volume, mirror it onto a new disk, remove
the original copy.
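A hedged sketch of the LVM-mirror approach (the device and vg0/data names are assumptions; on current LVM the equivalent of the old "mirror" segment type is --type raid1):

```shell
# All four disks become PVs in one VG; no mdadm layer is needed.
pvcreate /dev/sda /dev/sdb /dev/sdc /dev/sdd
vgcreate vg0 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Create a mirrored LV: 1 mirror = 2 copies of the data.
lvcreate --type raid1 -m 1 -n data -L 7T vg0

# Later: migrate data off a disk while it is still mounted,
# then retire the disk from the volume group.
pvmove /dev/sda
vgreduce vg0 /dev/sda
```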

So, I think moving to an "LVM mirror" solution is your best bet for
future extensibility.

There are also other options like RAID 5 and RAID 6, but they have the
associated "extensibility" problems.

Kind Regards

James


Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-09 Thread Andy Smith via GLLUG
On Sat, May 09, 2020 at 11:24:15AM +, Andy Smith via GLLUG wrote:
> How do you intend to combine them? You won't be able to put your
> existing array into the LVM without destroying its contents.

Forgot to mention; this sort of conundrum is why it's often useful
to put things in LVM to begin with even when you see no immediate
need.

If your current array already uses LVM then it's trivial to add a
new PV to that.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting


Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-09 Thread Andy Smith via GLLUG
Hi Axel,

On Sat, May 09, 2020 at 01:03:37PM +0200, Dr. Axel Stammler via GLLUG wrote:
> My original (2016) setup included two hard disk drives of not 4 TB
> but 8 TB capacity in a RAID-1 that has reached 92 per cent
> capacity.

[…]

> I have ordered two more […] my first idea was to create a new
> RAID-1 and to combine the two resulting systems via Logical Volume
> Management. What do you think?

How do you intend to combine them? You won't be able to put your
existing array into the LVM without destroying its contents.

You could do a thing where you:

1) set up the new array as the boot device

2) copy OS and data over to it from the original array

3) boot into this and confirm everything works, if necessary
   physically removing the old array to be sure that everything
   works and all your data is there

4) Nuke the old array, recreate it and add it to your LVM. You now
   have one big LVM.
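Steps 2 and 4 might look like this sketch (mount points, device names and the vg0 name are assumptions; rsync flags to taste):

```shell
# Step 2: copy OS and data from the old array to the new one.
rsync -aHAX --info=progress2 /mnt/old/ /mnt/new/

# Step 4: after verifying the new array boots on its own, destroy
# the old array, recreate it, and add it to the LVM as a second PV.
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1 /dev/sdb1
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
pvcreate /dev/md0
vgextend vg0 /dev/md0
```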

Downside of this is that because only two devices existed at first,
all your data is still on two devices. The other two are largely
empty, so performance-wise all reads will come from only two of your
four devices. As you (re)write things over time they will spread out
if you have set your LVM allocation method to stripe.

If you have good backups then you could of course nuke everything
and put it back straight onto your LVM of 2x RAID-1.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting

Please consider the environment before reading this e-mail.
 — John Levine


Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-09 Thread Dr. Axel Stammler via GLLUG

Hi,

Thank you for your detailed look at possible setups. I remembered my old setup 
incorrectly, though, so that I am not sure everything is applicable. My 
original (2016) setup included two hard disk drives of not 4 TB but 8 TB 
capacity in a RAID-1 that has reached 92 per cent capacity.

On Tue 2020-04-28 13.19.10, James Courtier-Dutton wrote:


First for RAID, avoid SMR HDDs. (Shingled magnetic recording)
I would probably RAID 5 them.
4 TB + 4 TB = 8 TB from the two smaller disks, set against the two 8TB disks.
So, say the disks are A(4TB), B(4TB), C(8TB), D(8TB).
Partition each 8TB disk in half:
A(4TB), B(4TB), C1(4TB), C2(4TB), D1(4TB), D2(4TB)
RAID 5: A,C1,D1
RAID 5: B,C2,D2
Then put the two RAID arrays in the same LVM VG, so that they look
like one big disk for the OS.

Another alternative is using XFS or BTRFS and configuring them with replicas.
That is where the filesystem does the replication, thus not needing RAID at all.


As 8 TB hard drives still seem to be the best value for money per TB, I have 
ordered two more, making sure they use perpendicular magnetic recording. The 
existing drives look fine in both SMART logs and tests (I even have a 1 TB 
drive from 2009 in perfect working order; I cannot imagine how), so my first 
idea was to create a new RAID-1 and to combine the two resulting systems via 
Logical Volume Management. What do you think?


Or, you could take the approach I take. I remove the old 4TB disks and
only copy the few files I need onto the 8TB disks going forward.
I can always plug the old 4TB disks in if I need an old file.
I have written my own indexer for this. It scans the whole disk,
creating an index and thumbnails, and then stores only the index and
thumbnails on the 8TB disks.
The index is stored in Elasticsearch, which makes it easy to find the
files again, and also shows which disk they are on.
So, files I hardly ever need are stored on powered-off disks.


Unfortunately, in my case, I cannot tell which data are going to be needed more 
often or sooner.

Kind regards,

Axel



Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-02 Thread James Courtier-Dutton via GLLUG
Hi,

Regarding which are SMR. Here is a good place to start:
https://hardware.slashdot.org/story/20/04/29/2119250/toshiba-publishes-full-list-of-its-drives-using-slower-smr-technology
TOSHIBA:   
https://toshiba.semicon-storage.com/ap-en/company/news/news-topics/2020/04/storage-20200428-1.html
WD: https://www.tomshardware.com/news/wd-lists-all-drives-slower-smr-technology
Seagate: 
https://blocksandfiles.com/2020/04/15/seagate-2-4-and-8tb-barracuda-and-desktop-hdd-smr/

It seems that drive manufacturers were being a bit secretive about it,
but they have all come clean now, documenting which HDDs are SMR.

Kind Regards

James




On Sat, 2 May 2020 at 16:57, Dr. Axel Stammler via GLLUG wrote:
>
> Hi,
>
> On Tue 2020-04-28 16.10.42, Greater London Linux User Group wrote:
>
> >Next up, if your drives don't support SCTERC timeout facility then
> >this is not ideal for a Linux RAID system but can be worked around
>
> Thanks. This is another great tip. Is there any way to find out if a drive 
> has that problem before buying it?
>
> Cheers,
>
> Axel
> --
> GLLUG mailing list
> GLLUG@mailman.lug.org.uk
> https://mailman.lug.org.uk/mailman/listinfo/gllug


Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-02 Thread Dr. Axel Stammler via GLLUG

Hi,

On Tue 2020-04-28 16.10.42, Greater London Linux User Group wrote:


Next up, if your drives don't support SCTERC timeout facility then
this is not ideal for a Linux RAID system but can be worked around


Thanks. This is another great tip. Is there any way to find out if a drive has 
that problem before buying it?

Cheers,

Axel



Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-02 Thread Mike Brodbelt via GLLUG

On 02/05/2020 13:48, Dr. Axel Stammler via GLLUG wrote:

Hi,

Thanks for these valuable tips, especially the one about SMR. I have 
looked at the topic in more detail and I am really glad I did. Could you 
suggest a way of finding non-SMR hard disk drives, especially at decent 
prices?


Cheap is starting to mean SMR in more and more places. All the high 
capacity lines that are decent are non-SMR - WD Red Pro, WD Gold, the 
Toshiba NAS drives, WD/HGST Ultrastar drives, etc. They're also the 
expensive ones.


I think what we're really seeing here is that in the 1-6 TB desktop 
range, customers are price sensitive, and moving to SMR means the 
manufacturer can drop the platter count, which is going to reduce their 
costs. That, in and of itself is OK, but the way they've tried to hide 
this from customers is pretty underhanded, and happily seems to have 
backfired quite badly. WD in particular should never have put SMR into 
NAS drives, where it was clearly unsuitable. I'm not quite sure what 
they were thinking there - it feels like marketing were driving the 
ship, and were drunk at the wheel.


This is a list of SMR drives - known to cause problems with Synology NAS 
devices:-


https://www.synology.com/en-global/compatibility?search_by=category=hdds_no_ssd_trim_feature=SMR=1

Toshiba seem to have been somewhat more honest, but have some SMR drives 
in the lineup as well. Details here:-


https://blocksandfiles.com/2020/04/16/toshiba-desktop-disk-drives-undocumented-shingle-magnetic-recording/

The real root cause here I think is this though:-

https://en.wikipedia.org/wiki/List_of_defunct_hard_disk_manufacturers

There's not much real competition any more, and we're now starting to 
see the results of that in the way customers are being treated... At the 
2TB drive level, they have to get the price down quite far now, as I can 
pick up a 2TB SSD for £300 or so, which is faster, smaller, lower power, 
and potentially more durable. So it's a case of cutting every corner they can.


Mike


Re: [GLLUG] Link two RAIDs in one LVM?

2020-05-02 Thread Dr. Axel Stammler via GLLUG

On Tue 2020-04-28 13.19.10, James Courtier-Dutton wrote:


First for RAID, avoid SMR HDDs. (Shingled magnetic recording)


Hi,

Thanks for these valuable tips, especially the one about SMR. I have looked at 
the topic in more detail and I am really glad I did. Could you suggest a way of 
finding non-SMR hard disk drives, especially at decent prices?

Kind regards,

Axel


Re: [GLLUG] Link two RAIDs in one LVM?

2020-04-28 Thread Andy Smith via GLLUG
Hi,

On Tue, Apr 28, 2020 at 01:18:08PM +0200, Dr. Axel Stammler via GLLUG wrote:
> I have a 4 TB RAID system (two identical hard disks combined in a
> RAID-1, created using mdadm). Now, after a few years, this has
> reached 90% capacity, and I am thinking about first adding another
> similar 8 TB RAID system and then combining them into one 12 TB
> RAID 1+0 filesystem.

> Which hardware parameters should I look at?

You already received tips to avoid SMR. This cannot be stressed
enough. Worse still, Seagate and WD are selling SMR drives without
marking them as such.

https://www.ixsystems.com/community/resources/list-of-known-smr-drives.141/
https://rml527.blogspot.com/2010/09/hdd-platter-database-seagate-25.html

Do not try to use an SMR drive in a RAID array of any kind.

Next up, if your drives don't support the SCTERC timeout facility then
this is not ideal for a Linux RAID system but can be worked around
(and should be). Here is an article I wrote many years ago about
this but it is still the case.


http://strugglers.net/~andy/blog/2015/11/09/linux-software-raid-and-drive-timeouts/

On the linux-raid list many of the requests for help from people
whose arrays won't automatically assemble after a failure are
because their drives don't support SCTERC and they didn't work
around it.
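Checking and setting SCTERC is done with smartctl; a sketch (the 7-second value — 70 tenths of a second — is the commonly suggested setting, and the kernel-timeout fallback is for drives that don't support SCTERC at all):

```shell
# Query whether the drive supports SCT Error Recovery Control.
smartctl -l scterc /dev/sda

# If supported: cap read/write error recovery at 7 seconds
# (units are tenths of a second; must be re-applied on every boot,
# e.g. from a udev rule or boot script).
smartctl -l scterc,70,70 /dev/sda

# If NOT supported: raise the kernel's command timeout instead, so
# the drive has long enough to give up on a bad sector by itself.
echo 180 > /sys/block/sda/device/timeout
```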

> Which method should I use to combine both RAID systems into one?
> 
> - linear RAID

Doable but not great because:

- Complexity of having arrays be part of an array, e.g. you'll have
  md0 and md1 be your two RAID-1s then md2 will be a linear of
  those.

- Not ideal performance, since all IO will go to one pair and
  then, once capacity is reached, all IO will go to the other.

- Not sure if you can continue to grow this one later by adding more
  devices.

> - RAID-0

Doable but not great because:

- Complexity of having arrays be part of an array.

- Uneven performance because one "half" is actually twice the size
  of the other "half".

- Cannot grow this setup when you add more drives without rebuilding
  it all again.

If all your devices were the same size there would also be the
option of reshaping RAID-1 to RAID-10, which is possible with recent
mdadm. It turns the RAID-1 into a RAID-0 and then turns that into a
single RAID-10. No further growth or reshaping would be possible after
that, though.

> - Large Volume Management (using pvcreate, vgcreate, lvcreate)

(LVM stands for Logical Volume Manager, by the way.)

For ease I most likely would go this way.

You'd make your existing md device be one Physical Volume, make the
new md device be another PV, then make a Volume Group that is both
of those PVs with an allocation mode of stripe.

Pros:

- Can keep adding RAID-1 pairs like this as PVs forever without
  having to move your data about again.

- Pretty simple to manage and understand what is going on.

Cons:

- Will still be a bit uneven performance since the smaller half will
  fill up first and then LVM will only allocate from the larger PV.

- If you've never used LVM before then it's a whole set of new
  concepts.
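A sketch of that layout, assuming the two arrays appear as /dev/md0 (old) and /dev/md1 (new) and using an illustrative vg0/data naming:

```shell
# Both RAID-1 arrays become PVs in a single volume group.
pvcreate /dev/md0 /dev/md1
vgcreate vg0 /dev/md0 /dev/md1

# '-i 2' stripes the LV across both PVs for more even IO;
# plain linear allocation (no -i) also works.
lvcreate -i 2 -n data -l 100%FREE vg0
mkfs.ext4 /dev/vg0/data
```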

In all of the above options though, you are going to have to destroy
your filesystem(s) that are on the RAID-1 and put them back onto
whatever system you come up with.

If you're starting over you could consider ZFS-on-Linux.

I've been burnt by btrfs and still see showstopping data loss and
availability problems on the btrfs list on a weekly basis so would
not recommend it at this time. There is likely to be someone who will
say they have been using it for years without issue; if you aren't
convinced then subscribe to linux-btrfs for a month and see what
other people are still dealing with!

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting


Re: [GLLUG] Link two RAIDs in one LVM?

2020-04-28 Thread James Courtier-Dutton via GLLUG
On Tue, 28 Apr 2020 at 12:18, Dr. Axel Stammler via GLLUG wrote:
>
> Hi,
>
> I have a 4 TB RAID system (two identical hard disks combined in a RAID-1, 
> created using mdadm). Now, after a few years, this has reached 90% capacity, 
> and I am thinking about first adding another similar 8 TB RAID system and 
> then combining them into one 12 TB RAID 1+0 filesystem. I should be grateful 
> for any tips, especially about buying two 8 TB harddisks.
>
> Which hardware parameters should I look at?
>
> Which method should I use to combine both RAID systems into one?
>
> - linear RAID
> - RAID-0
> - Large Volume Management (using pvcreate, vgcreate, lvcreate)
>

Hi,

First for RAID, avoid SMR HDDs. (Shingled magnetic recording)
I would probably RAID 5 them.
4 TB + 4 TB = 8 TB from the two smaller disks, set against the two 8TB disks.
So, say the disks are A(4TB), B(4TB), C(8TB), D(8TB).
Partition each 8TB disk in half:
A(4TB), B(4TB), C1(4TB), C2(4TB), D1(4TB), D2(4TB)
RAID 5: A,C1,D1
RAID 5: B,C2,D2
Then put the two RAID arrays in the same LVM VG, so that they look
like one big disk for the OS.
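A hedged sketch of that split (the device letters follow the A/B/C/D example above, but the sdX names and partition numbering are assumptions):

```shell
# Two 3-device RAID-5 sets, each mixing one 4TB disk with one half
# of each 8TB disk (A=sda, B=sdb, C=sdc, D=sdd assumed).
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc2 /dev/sdd2

# One VG over both arrays, so the OS sees a single pool of space.
pvcreate /dev/md0 /dev/md1
vgcreate vg0 /dev/md0 /dev/md1
```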

Another alternative is using XFS or BTRFS and configuring them with replicas.
That is where the filesystem does the replication, thus not needing RAID at all.

Or, you could take the approach I take. I remove the old 4TB disks and
only copy the few files I need onto the 8TB disks going forward.
I can always plug the old 4TB disks in if I need an old file.
I have written my own indexer for this. It scans the whole disk,
creating an index and thumbnails, and then stores only the index and
thumbnails on the 8TB disks.
The index is stored in Elasticsearch, which makes it easy to find the
files again, and also shows which disk they are on.
So, files I hardly ever need are stored on powered-off disks.

Kind Regards

James
