Craig Falconer wrote:
Then two ways to progress....
0 Boot in single user mode
1 Add one new drive to the machine, partition it with a similar layout
but larger partitions as appropriate (see the partitioning sketch below)
2 Then use
mdadm --add /dev/md3 /dev/sdb4
mdadm --add /dev/md2 /dev/sdb3
mdadm --add /dev/md1 /dev/sdb2
mdadm --add /dev/md0 /dev/sdb1
sysctl -w dev.raid.speed_limit_max=99999999
3 While this is happening run
watch --int 10 cat /proc/mdstat
Wait until all the drives are synched
4 If you boot off this raidset you'll need to reinstall a boot
loader on each drive (see the boot loader sketch below)
5 Down the machine and remove the last 320 GB drive.
6 Install the other new drive, then boot.
7 Partition the other new drive the same as the first big drive
8 Repeat steps 2 and 3 but use sda rather than sdb
Once they're finished synching you can grow your filesystems to
their full available space (see the grow sketch below)
9 Do the boot loader install onto both drives again
10 Then you can reboot and it should all be good.
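
My notes on step 1: the layout could presumably be copied from the
remaining original drive and then enlarged, something like this
(assuming MBR partition tables, sda as the surviving old drive and sdb
as the new one; the edited layout file name is just illustrative):

sfdisk -d /dev/sda > sda-layout.txt        # dump the existing layout
# edit a copy to enlarge the partition sizes, then apply it
sfdisk /dev/sdb < sda-layout-enlarged.txt
# or simply partition sdb by hand with fdisk, keeping the same
# partition numbers and the "fd" (Linux raid autodetect) type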
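
For steps 4 and 9, assuming GRUB 2 is the boot loader, reinstalling it
onto the MBR of both drives should be roughly:

grub-install /dev/sda
grub-install /dev/sdb
# (with GRUB legacy it would instead be "root (hdX,0)" then "setup (hdX)"
# from the grub shell)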
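
And the final grow, once both new members have synched, should look
something like this, assuming ext3/ext4 on /dev/md3 (XFS would use
xfs_growfs instead of resize2fs):

mdadm --grow /dev/md3 --size=max   # let md use the full size of the new partitions
resize2fs /dev/md3                 # then grow the filesystem to fill the array
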
I have a new drive installed, partitioned and formatted, ready to add
to the raidset. First, some questions related to the above, to ease my
mind before proceeding.
Is it necessary to boot into single user mode (and why?), given that
this will make the machine unavailable to the network as a file server
for the duration of the process? The machine is used solely to serve up
files. Based on the time it took to re-add the drive last week, it
would need to go offline for some hours, which means either a very late
start and finish to a work day, or doing the job at a weekend, to keep
it available to users during working days.
From my reading of man mdadm, it suggests doing a fail and remove of
the faulty drive, possibly at the same time as adding a new device, like:
mdadm /dev/md0 --add /dev/sda1 --fail /dev/sdb1 --remove /dev/sdb1
Is this a good process to follow or is it redundant/unnecessary?
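
If I read the man page correctly, that one-liner is simply shorthand
for running the three operations in sequence (device names as in the
man page example):

mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
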
Just in case I run into issues reinstalling the boot loader from a
live CD, am I right in understanding that I could (as an interim
measure) boot the machine from just the current good drive, with a
single partition marked as bootable, by disconnecting the new drive?
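
I'm assuming the live CD procedure would be roughly the following,
assuming GRUB 2 and that /dev/md0 holds the root filesystem:

mdadm --assemble --scan             # assemble the arrays the live CD can see
mount /dev/md0 /mnt                 # mount the root filesystem
# (if /boot is on a separate array, mount it inside the chroot as well)
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt grub-install /dev/sda   # install to the MBR of each drive
chroot /mnt grub-install /dev/sdb
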
Finally, I'm somewhat unclear how the resulting device names are going
to work out: the current failing drive is /dev/sdb, /dev/sdc holds
backups, and the new larger drive comes up as /dev/sdd. Surely once sdb
is physically removed, sdc and sdd move up a letter, and this messes
with adding to the RAID array as sdd? Or is a better approach to do a
fail and remove of the failing drive, physically remove it, and put the
new drive on the same SATA connector?
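
My guess from the man page is that mdadm identifies array members by
the UUID in their superblocks rather than by drive letter, so a shuffle
of sdc/sdd shouldn't upset assembly, and something like the following
should confirm which disk is which after the change; is that right?

mdadm --detail /dev/md0     # list the current member devices and the array UUID
mdadm --examine /dev/sdd1   # show which array (if any) this partition belongs to
ls -l /dev/disk/by-id/      # map stable model/serial names to the sdX letters
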
Cheers,
Roger