Re: SATA hotplug + mdadm raid
Sam Smith wrote:
> But I still need to swap the first drive (sda) and I don't really want
> to have to reboot this time. So what can I do to ensure that once I pull
> the old drive and put in the new one that it comes back up as "sda"? Or
> does that even matter? (seems like it would..)

As Andy Smith wrote, mdadm uses its own metadata, in particular the UUID:

blkid /dev/md4
/dev/md4: UUID="b426722d-ec49-4c6b-a638-5941f24debfd" TYPE="swap"

Watch out that you update your mdadm.conf file (see man mdadm.conf), and
don't forget to update the initrd after changing it if you boot from RAID,
because the conf file is used to assemble the disks at boot time.

regards
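On a Debian system like the one in this thread, updating the config and the initrd might look something like this (paths are the Debian defaults; run as root):

```shell
# Back up the current config, then append ARRAY lines describing the
# arrays as they are assembled right now (including their UUIDs).
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Review the file and remove any stale ARRAY lines by hand, then
# rebuild the initramfs so boot-time assembly sees the new config.
update-initramfs -u
```

Since the arrays are identified by UUID in those ARRAY lines, this keeps working even when the /dev/sdX names move around.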
Re: SATA hotplug + mdadm raid
On 05/18/2017 12:30 PM, Dan Ritter wrote:
> It doesn't matter that much. Use this:
>
> mdadm --fail /dev/md0 /dev/sda1
> mdadm --remove /dev/md0 /dev/sda1
> mdadm --add /dev/md0 /dev/whatever1
>
> then check on progress with
>
> cat /proc/mdstat
>
> -dsr-

Ok, I just went ahead and yanked the drive and stuck the other one in. It
actually came up as sda, probably because I had previously used dd to copy
the first 1G onto it from the old drive. I partitioned it and added it to
the array; all is well now.

Thanks,
Samuel Smith
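As an aside on the dd step mentioned above: a common alternative for replicating just the partition table onto the replacement disk (rather than copying 1G of raw data) is sfdisk or sgdisk. The device names below are examples matching this thread, with sdb as the surviving disk and sda the new one:

```shell
# MBR disks: dump sdb's partition table and write it onto sda.
sfdisk -d /dev/sdb | sfdisk /dev/sda

# GPT disks: sgdisk (from the gdisk package) does the same job.
# Replicate sdb's table onto sda, then randomise the GUIDs so the
# two disks don't share identifiers:
#   sgdisk -R /dev/sda /dev/sdb
#   sgdisk -G /dev/sda
```

This only copies partitioning, not the bootloader, so grub-install on the new disk is still needed if the BIOS may boot from it.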
Re: SATA hotplug + mdadm raid
Hi Sam,

It doesn't matter what your devices are called. In fact, you are best
advised to avoid using the /dev/sd* names where possible, as these names
may change for reasons other than drives being hotplugged. For example, if
your storage controller needs a module to detect drives, then the order of
module loading may affect device naming. Try to use the paths in
/dev/disk/by-id/ or similar.

mdadm itself recognises array component devices by its own metadata, so it
does not care what they are called (as long as you haven't told mdadm to
ignore those device names).

The only thing you might want to check is whether your BIOS is going to
see a bootloader on the drive it tries to boot from next time.

Cheers,
Andy

--
https://bitfolk.com/ -- No-nonsense VPS hosting
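Acting on the two suggestions above might look like this on a Debian/GRUB system; the ata-... by-id name is a made-up example, and md0/sda/sdb stand in for whatever the real devices are:

```shell
# Each physical disk appears here under its model and serial number,
# so the name survives enumeration-order changes across reboots.
ls -l /dev/disk/by-id/

# Add the replacement partition via its stable by-id path instead of
# a /dev/sdX name (the device name here is hypothetical):
mdadm --add /dev/md0 /dev/disk/by-id/ata-WDC_WD10EZEX-EXAMPLE_SERIAL-part1

# Make sure every disk the BIOS might boot from carries a bootloader,
# so losing either member still leaves the machine bootable:
grub-install /dev/sda
grub-install /dev/sdb
```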
Re: SATA hotplug + mdadm raid
On Thu, May 18, 2017 at 11:49:11AM -0500, Sam Smith wrote:
> Hi,
>
> I recently upgraded my home server to an HP ML30 tower server. It came
> with a 4 drive hotplug SATA cage. I loaded two old unused drives in it
> and installed Debian Stretch on it, putting the drives in a software
> raid1 via the Debian installer. My plan was once I got the new box up
> and running that I would shut off the old machine and use the drives
> from that, swapping them out one at a time and letting mdadm resync.
>
> Before putting the new box into "production", I pulled the 2nd drive
> (sdb) to see what would happen (with the machine still running). Of
> course mdadm went into degraded mode with a one-disk raid1. I slid the
> same drive back in and it came back as /dev/sdb. Later on, after a few
> reboots, I pulled the 2nd drive and swapped it with a drive from the
> old machine (it was also still up and running with mdadm raid1).
> However, this time the new drive didn't come up as sdb, but as
> /dev/sdc. I set up the partitions the same as sda but I didn't join it
> to the mdadm array yet. I wasn't sure if joining as "sdc" and then on
> reboot having it come up possibly as "sdb" would mess up anything, so I
> just rebooted. On reboot it came up as "sdb" and I joined it to the
> array and all was well.
>
> But I still need to swap the first drive (sda) and I don't really want
> to have to reboot this time. So what can I do to ensure that once I
> pull the old drive and put in the new one that it comes back up as
> "sda"? Or does that even matter? (seems like it would..)

It doesn't matter that much. Use this:

mdadm --fail /dev/md0 /dev/sda1
mdadm --remove /dev/md0 /dev/sda1
mdadm --add /dev/md0 /dev/whatever1

then check on progress with

cat /proc/mdstat

-dsr-
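After the --add step above, the resync can be monitored a couple of standard ways (md0 as in the example):

```shell
# One-shot view of all md arrays, including a progress bar and ETA
# for any resync/rebuild in flight.
cat /proc/mdstat

# Re-run it every 2 seconds until the rebuild completes.
watch -n 2 cat /proc/mdstat

# Detailed per-array state; during a rebuild this includes a
# "Rebuild Status : NN% complete" line.
mdadm --detail /dev/md0
```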
SATA hotplug + mdadm raid
Hi,

I recently upgraded my home server to an HP ML30 tower server. It came
with a 4 drive hotplug SATA cage. I loaded two old unused drives in it and
installed Debian Stretch on it, putting the drives in a software raid1 via
the Debian installer. My plan was once I got the new box up and running
that I would shut off the old machine and use the drives from that,
swapping them out one at a time and letting mdadm resync.

Before putting the new box into "production", I pulled the 2nd drive (sdb)
to see what would happen (with the machine still running). Of course mdadm
went into degraded mode with a one-disk raid1. I slid the same drive back
in and it came back as /dev/sdb. Later on, after a few reboots, I pulled
the 2nd drive and swapped it with a drive from the old machine (it was
also still up and running with mdadm raid1). However, this time the new
drive didn't come up as sdb, but as /dev/sdc. I set up the partitions the
same as sda but I didn't join it to the mdadm array yet. I wasn't sure if
joining as "sdc" and then on reboot having it come up possibly as "sdb"
would mess up anything, so I just rebooted. On reboot it came up as "sdb"
and I joined it to the array and all was well.

But I still need to swap the first drive (sda) and I don't really want to
have to reboot this time. So what can I do to ensure that once I pull the
old drive and put in the new one that it comes back up as "sda"? Or does
that even matter? (seems like it would..)

Regards,
Samuel Smith