Greetings, Wes.

From the information you provided, I’m guessing you built the RAID from
five drives ( /dev/sd{f,g,h,i,j} ), which created a single /dev/md1 device.
That device was partitioned to create devices /dev/md1p{1-10}, and those
partitions were then used for the LVM.

But that’s just a guess.

What are the five drives named?  Are they really entire drives or
partitions on a drive? e.g. /dev/sda vs /dev/sda7.
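If your system has lsblk, that should be visible at a glance, e.g.:

    lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT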

What is the output from the mdadm command that shows you the RAID’s
configuration? It will show the devices used, how it’s put together, and
its current state. ( the actual command syntax with options slips my mind
at the moment. )
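If memory serves, it’s something along these lines ( assuming the array is
/dev/md1, as in your message below ):

    cat /proc/mdstat
    mdadm --detail /dev/md1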

What is the output of the various LVM display commands ( pvdisplay,
vgdisplay, lvdisplay )?
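For example ( run as root; I’ve added pvscan up front, since you mention the
PV isn’t being detected ):

    pvscan
    pvdisplay
    vgdisplay
    lvdisplay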

Regards,
- Robert

On Sat, Apr 30, 2022 at 1:09 PM wes <[email protected]> wrote:

> I have a raid6 array built with mdadm, 5 drives with a spare. It lost 2
> drives, so I replaced them and the array rebuilt itself without much
> complaint. I believe the array contained an lvm pv (I didn't originally
> build it), but I can't seem to get it to detect now. On my other systems
> configured the same way, the first characters on the block device are
> "LABELONE LVM2" - not so on this broken system.
>
> Running strings on the broken volume returns what appear to be filenames:
>
> [jop61.gz
> [jop61.gz
> Ftof~.1.gz
> [ehzqp.1.gz
> Fwteh.1.gz
> Fwteh.1.gz
> utvame.1.gz
> utvame.1.gz
>
> So clearly there is _something_ there, but I can't figure out how to tell
> what it is.
>
> # file -s /dev/md1
> /dev/md1: sticky data
>
> Any ideas on things to check or try? This is not a critical system, so this
> is mostly an academic exercise. I would like to understand more about this
> area of system administration.
>
> thanks,
> -wes
>
