# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Apr 11 18:48:11 2013
     Raid Level : raid1
     Array Size : 487104 (475.77 MiB 498.79 MB)
  Used Dev Size : 487104 (475.77 MiB 498.79 MB)
   Raid Devices : 6
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Sat Apr 16 05:15:16 2022
          State : clean, degraded
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

           Name : tatooine:0
           UUID : a3cf1d73:5a14862c:8affcca7:036adaa0
         Events : 16482

    Number   Major   Minor   RaidDevice State
       6       8       33        0      active sync   /dev/sdc1
       1       8       17        1      active sync   /dev/sdb1
       7       8       65        2      active sync   /dev/sde1
       3       8       49        3      active sync   /dev/sdd1
       8       8       81        4      active sync   /dev/sdf1
       5       0        0        5      removed
# mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Sun Apr 3 03:36:58 2022
     Raid Level : raid6
     Array Size : 717376512 (684.14 GiB 734.59 GB)
  Used Dev Size : 239125504 (228.05 GiB 244.86 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Fri Apr 22 21:47:14 2022
          State : active
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : tatooine:1
           UUID : 748c2cdc:113ecda4:8a52c229:384d3438
         Events : 2294

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       50        1      active sync   /dev/sdd2
       2       8       66        2      active sync   /dev/sde2
       3       8       82        3      active sync   /dev/sdf2
       4       8       34        4      active sync   /dev/sdc2
# pvdisplay
# lvdisplay
No volume groups found
# vgdisplay
No volume groups found
obviously, md0 works just fine, being a raid1 with at least 1 drive
surviving.
whether md1 could survive with 4 drives out of 6 is somewhat less clear. it
did rebuild, so I can only assume that means it at least believes the data
is intact. I guess I'm looking for a way to validate or invalidate this
belief.
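
(one way to at least partially test that belief, assuming md1 is the array in
question and the kernel exposes the usual md sysfs files, is to have md run a
check pass over the array and then look at the mismatch count:

# echo check > /sys/block/md1/md/sync_action
# cat /proc/mdstat
# cat /sys/block/md1/md/mismatch_cnt

a non-zero mismatch_cnt once the check finishes means data and parity disagree
somewhere. a clean check only shows the array is internally consistent, not
that the rebuilt contents match what was on it before the failures.)
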
-wes
On Sat, Apr 30, 2022 at 2:57 PM Robert Citek <[email protected]> wrote:
> Greetings, Wes.
>
> From the information you provided, I’m guessing you built the RAID from
> five drives ( /dev/sd{f,g,h,i,j} ), which created a single /dev/mda device.
> That device was partitioned to create devices /dev/mda{1-10}. And those
> partitions were then used for the LVM.
>
> But that’s just a guess.
>
> What are the five drives named? Are they really entire drives or
> partitions on a drive? e.g. /dev/sda vs /dev/sda7.
>
> What is the output from the mdadm command that shows you the RAID’s
> configuration? It will show the devices used, how it’s put together, and
> its current state. (the actual command syntax with options slips my mind
> at the moment.)
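>
> (It is probably mdadm --detail for the assembled array plus mdadm --examine
> for the individual members; taking /dev/md0 and /dev/sdb1 as placeholder
> names:
>
> # mdadm --detail /dev/md0
> # mdadm --examine /dev/sdb1
>
> --detail reports the array as the kernel currently sees it, and --examine
> reads the RAID superblock straight off one member device.)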
>
> What is the output of the various LVM display commands? pvdisplay,
> vgdisplay, lvdisplay
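>
> For example, maybe preceded by pvscan --cache so LVM re-reads the devices in
> case its cached view is stale:
>
> # pvscan --cache
> # pvdisplay
> # vgdisplay
> # lvdisplay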
>
> Regards,
> - Robert
>
> On Sat, Apr 30, 2022 at 1:09 PM wes <[email protected]> wrote:
>
> > I have a raid6 array built with mdadm, 5 drives with a spare. it lost 2
> > drives, so I replaced them and the array rebuilt itself without much
> > complaint. I believe the array contained an lvm pv (I didn't originally
> > build it), but I can't seem to get it to detect now. on my other systems
> > configured the same way, the first characters on the block device are
> > "LABELONE LVM2" - not so on this broken system.
> >
> > running strings on the broken volume returns what appear to be filenames:
> >
> > [jop61.gz
> > [jop61.gz
> > Ftof~.1.gz
> > [ehzqp.1.gz
> > Fwteh.1.gz
> > Fwteh.1.gz
> > utvame.1.gz
> > utvame.1.gz
> >
> > so clearly there is _something_ there but I can't figure out how to tell
> > what it is.
> >
> > # file -s /dev/md1
> > /dev/md1: sticky data
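> >
> > (tools that probe the device for known filesystem/RAID/LVM signatures may
> > identify it where file -s gives up, e.g.:
> >
> > # blkid -p /dev/md1
> > # wipefs /dev/md1
> >
> > blkid -p does a low-level probe instead of using the cache, and wipefs with
> > no options only lists the signatures it finds, it doesn't erase anything.)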
> >
> > any ideas on things to check or try? this is not a critical system so this
> > is mostly an academic exercise. I would like to understand more about this
> > area of system administration.
> >
> > thanks,
> > -wes
> >
>