------- Original Message -------
On Tuesday, November 15th, 2022 at 2:58 PM, John Jason Jordan <joh...@gmx.com> 
wrote:


> OK, I may be making some headway here.
> 
> sudo mdadm -D /dev/md0
> /dev/md0:
> Version : 1.2
> Creation Time : Mon Jan 25 13:53:32 2021
> Raid Level : raid0
> Array Size : 30005334016 (27.94 TiB 30.73 TB)
> Raid Devices : 4
> Total Devices : 4
> Persistence : Superblock is persistent
> 
> Update Time : Mon Jan 25 13:53:32 2021
> State : broken, FAILED
> Active Devices : 4
> Working Devices : 4
> Failed Devices : 0
> Spare Devices : 0
> 
> Layout : -unknown-
> Chunk Size : 512K
> 
> Consistency Policy : none
> 
> Number   Major   Minor   RaidDevice   State
>    0      259      2         0        active sync   missing
>    1      259      1         1        active sync   missing
>    2      259      0         2        active sync   missing
>    3      259      3         3        active sync   missing
> 
> Also tried sudo mdadm --run /dev/md0, but there was no output.
> 
> Is this repairable without wiping out the data and recreating the array?


mdadm is reporting a state of "broken, FAILED". A FAILED state in RAID usually 
means that the number of drive failures exceeds the tolerance of the RAID level 
you chose (for RAID 0 that tolerance is zero). DEGRADED indicates a drive 
failure with the potential to rebuild. 
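
If you want to sanity-check what the kernel itself thinks of the array (as 
opposed to mdadm's summary), something along these lines should do it; md0 is 
taken from your output:

cat /proc/mdstat
sudo mdadm --detail /dev/md0 | grep -E 'State|Devices'

/proc/mdstat is also a quick way to see which member devices the kernel 
originally assembled into the array.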

In your case, however, mdadm is saying that all 4 devices are missing. This 
probably means the array is broken simply because the member devices are no 
longer present at the paths mdadm expects. Since these are removable devices, 
it's possible they have come back under new entries in /dev/.
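
One way to check whether the members have simply moved (rather than died) is 
to look for the md superblocks directly, which ignores whatever names the 
devices had before. A rough sketch:

sudo mdadm --examine --scan
sudo blkid | grep linux_raid_member

mdadm --examine --scan will report the array (and its UUID) if it can still 
see the superblocks, and the blkid line shows which /dev nodes carry them.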

Since this occurred after a brief power outage, and because mount says the 
mountpoint is busy even though that mountpoint no longer exists, this sounds 
like a case of reassigned device names: you need to hunt down the RAID member 
drives in /dev. lsblk can help identify them. 
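
Once you know the current device names, one possible recovery path is to stop 
the stale array and reassemble it from wherever the members are now. This is 
only a sketch; the nvme names below are placeholders for whatever lsblk 
actually reports on your system:

lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --scan
# or name the members explicitly (placeholder device names):
sudo mdadm --assemble /dev/md0 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

Assembling reads the existing superblocks rather than writing new ones, so 
unlike --create it shouldn't touch your data.
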
-Ben
