Re: [CentOS] Drive failed in 4-drive md RAID 10

2020-09-20 Thread Simon Matter
> --On Friday, September 18, 2020 10:53 PM +0200 Simon Matter wrote:
>> mdadm --remove /dev/md127 /dev/sdf1
>>
>> and then the same with --add should hot-remove and re-add the device again.
>>
>> If it rebuilds fine it may again work for a long time.
>
> This worked like a charm. When I added it
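Simon's remove/re-add advice can be sketched as a short script. The array and partition names (`/dev/md127`, `/dev/sdf1`) are the ones from this thread; the `DRY_RUN` guard is purely illustrative, so the commands are only printed until you clear it and run as root:

```shell
# Hot-remove and re-add the failed RAID member, per the thread.
# Substitute your own array and member device.
MD=/dev/md127
PART=/dev/sdf1
DRY_RUN=1   # clear this to actually execute the commands (as root)

run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }

run mdadm --remove "$MD" "$PART"   # kick the failed member out of the array
run mdadm --add "$MD" "$PART"      # re-add it; the rebuild starts automatically
run cat /proc/mdstat               # watch the rebuild progress
```

If the drive re-adds cleanly, the rebuild progress shows up in `/proc/mdstat`; a drive that keeps getting kicked out should be replaced instead.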

Re: [CentOS] Drive failed in 4-drive md RAID 10

2020-09-19 Thread Kenneth Porter
--On Friday, September 18, 2020 10:53 PM +0200 Simon Matter wrote:

> mdadm --remove /dev/md127 /dev/sdf1
>
> and then the same with --add should hot-remove and re-add the device again.
>
> If it rebuilds fine it may again work for a long time.

This worked like a charm. When I added it back, it told me

Re: [CentOS] Drive failed in 4-drive md RAID 10

2020-09-18 Thread Jon Pruente
On Fri, Sep 18, 2020 at 3:20 PM Kenneth Porter wrote:

> Thanks. That reminds me: If I need to replace it, is there some easy way to
> figure out which drive bay is sdf? It's an old Supermicro rack chassis with
> 6 drive bays. Perhaps a way to blink the drive light?

It's easy enough with
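Jon's reply is truncated in the archive, but two common approaches fit the question; this is a sketch, assuming `/dev/sdf` from the thread, and the `ledctl` lines require the ledmon package plus a backplane with enclosure management:

```shell
# 1) With enclosure management (ledmon package), blink the locate LED:
#      ledctl locate=/dev/sdf
#      ledctl locate_off=/dev/sdf
#
# 2) On an old chassis without it, sustained reads make the activity
#    LED blink steadily, which is usually enough to spot the bay.
#    Guarded so this is a no-op unless the device actually exists:
DEV=/dev/sdf
if [ -b "$DEV" ]; then
    dd if="$DEV" of=/dev/null bs=1M count=4096
fi
```

The `dd` trick only works if the failed drive still answers reads; if it is completely dead, the opposite works too — blink every *other* drive and look for the bay that stays dark.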

Re: [CentOS] Drive failed in 4-drive md RAID 10

2020-09-18 Thread Michael Schumacher
Hi,

> Thanks. That reminds me: If I need to replace it, is there some easy way to
> figure out which drive bay is sdf?

# smartctl --all /dev/sdf

will give you the serial number of the drive. That will be printed on the disk label. But you still have to shut down your machine and pull all drives
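Michael's serial-number approach, sketched below. The sample output is hypothetical `smartctl -i` content (the model string is the WD1000FYPS from the thread; the serial is made up), filtered the same way you would filter the real output:

```shell
# Pull the serial number so it can be matched against the printed disk
# label. Hypothetical sample of smartctl's information section:
cat <<'EOF' > /tmp/smartctl-sdf.txt
Device Model:     WDC WD1000FYPS
Serial Number:    WD-WCASJ0000000
EOF

grep -i 'serial number' /tmp/smartctl-sdf.txt

# On the live system, the equivalent is:
#   smartctl -i /dev/sdf | grep -i 'serial number'
```

`smartctl -i` prints just the identity section, which is enough here; `--all` also works but buries the serial in pages of SMART attributes.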

Re: [CentOS] Drive failed in 4-drive md RAID 10

2020-09-18 Thread Kenneth Porter
--On Friday, September 18, 2020 10:53 PM +0200 Simon Matter wrote:

> mdadm --remove /dev/md127 /dev/sdf1
>
> and then the same with --add should hot-remove and re-add the device again.
>
> If it rebuilds fine it may again work for a long time.

Thanks. That reminds me: If I need to replace it, is there

Re: [CentOS] Drive failed in 4-drive md RAID 10

2020-09-18 Thread Simon Matter
> I got the email that a drive in my 4-drive RAID10 setup failed. What are my
> options?
>
> Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).
>
> mdadm.conf:
>
> # mdadm.conf written out by anaconda
> MAILADDR root
> AUTO +imsm +1.x -all
> ARRAY /dev/md/root level=raid10 num-devices=4
>

[CentOS] Drive failed in 4-drive md RAID 10

2020-09-18 Thread Kenneth Porter
I got the email that a drive in my 4-drive RAID10 setup failed. What are my options?

Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).

mdadm.conf:

# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/root level=raid10 num-devices=4
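A first step after the failure mail is confirming which member dropped out. The sketch below uses hypothetical `/proc/mdstat` content for a degraded 4-drive RAID10 (device names and sizes are invented; only `sdf1` matches the thread), filtered the same way you would filter the real file — failed members are tagged `(F)` and the status string shows a hole (`[UUU_]` instead of `[UUUU]`):

```shell
# Hypothetical mdstat snapshot of a degraded 4-drive RAID10:
cat <<'EOF' > /tmp/mdstat.txt
md127 : active raid10 sdf1[3](F) sde1[2] sdd1[1] sdc1[0]
      1953258496 blocks super 1.2 512K chunks 2 near-copies [4/3] [UUU_]
EOF

# Members flagged (F) have been kicked out of the array:
grep -o '[a-z0-9]*\[[0-9]*\](F)' /tmp/mdstat.txt

# On the live system:
#   cat /proc/mdstat
#   mdadm --detail /dev/md127
```

`mdadm --detail` gives the same information with full device paths and the array's current state (clean, degraded, recovering), which is handy before deciding between a re-add and a replacement.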